Automation — posted by gedaliyah@lemmy.world to Lemmy Shitpost@lemmy.world (English) · 3 months ago · image (lemmy.world) · 81 comments
OsrsNeedsF2P@lemmy.ml · 3 months ago: While I believe that, it’s an issue with the training data, and not the hardest to resolve.

merc@sh.itjust.works · 3 months ago: Yes, “Bias Automation” is always an issue with the training data, and it’s always harder to resolve than anyone thinks.

dondelelcaro@lemmy.world · 3 months ago: Maybe not the hardest, but still challenging. Unknown biases in training data are a challenge in any experimental design, and opaque ML frequently makes them harder to discover.
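The point about unknown biases hiding inside aggregate metrics can be made concrete with a minimal sketch (entirely hypothetical data and groups, not from the thread): a trivial majority-class model looks accurate overall on imbalanced training data while being completely wrong for the minority group — exactly the kind of bias that goes unnoticed if you only look at one summary number.

```python
# Hypothetical illustration: class imbalance hides group-level bias.
from collections import Counter

# 95 samples from group "A" labelled "approve", 5 from group "B" labelled "deny"
data = [("A", "approve")] * 95 + [("B", "deny")] * 5

# "Train" a trivial model: always predict the most common label,
# ignoring the group feature entirely.
majority_label = Counter(label for _, label in data).most_common(1)[0][0]

def predict(_group):
    return majority_label

# Overall accuracy looks excellent...
accuracy = sum(predict(g) == y for g, y in data) / len(data)

# ...but accuracy restricted to group "B" is zero.
group_b = [(g, y) for g, y in data if g == "B"]
group_b_acc = sum(predict(g) == y for g, y in group_b) / len(group_b)

print(accuracy, group_b_acc)  # 0.95 0.0
```

The fix in practice is the same idea in reverse: report metrics per subgroup, not just in aggregate, so that a bias baked into the training data shows up instead of averaging away.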