Automation — posted by gedaliyah@lemmy.world to Lemmy Shitpost@lemmy.world (English) · 5 months ago · 54 comments
OsrsNeedsF2P@lemmy.ml · 5 months ago
While I believe that, it's an issue with the training data, and not the hardest to resolve.

dondelelcaro@lemmy.world · 5 months ago
Maybe not the hardest, but still challenging. Unknown biases in training data are a challenge in any experimental design. Opaque ML frequently makes them more challenging to discover.

merc@sh.itjust.works · 5 months ago
Yes, "Bias Automation" is always an issue with the training data, and it's always harder to resolve than anyone thinks.
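The point the commenters are circling can be made concrete with a toy sketch (all names and data here are hypothetical, invented for illustration): a model trained on historically biased labels will faithfully reproduce that bias, and because the learned rule is opaque from the outside, nothing in its outputs flags the problem.

```python
from collections import Counter

# Hypothetical toy data: historical hiring decisions where the label
# happens to track a "group" feature rather than the "skill" feature.
train = [
    ({"group": "A", "skill": "high"}, "hire"),
    ({"group": "A", "skill": "low"},  "hire"),
    ({"group": "B", "skill": "high"}, "reject"),
    ({"group": "B", "skill": "low"},  "reject"),
]

def fit_by_feature(data, feature):
    """Learn the majority label for each value of a single feature."""
    counts = {}
    for x, y in data:
        counts.setdefault(x[feature], Counter())[y] += 1
    return {value: c.most_common(1)[0][0] for value, c in counts.items()}

model = fit_by_feature(train, "group")
# The "model" has automated the historical bias: group alone decides
# the outcome, and skill is ignored entirely.
print(model)  # {'A': 'hire', 'B': 'reject'}
```

A real ML pipeline is far more complex, but the failure mode is the same: the bias lives in the training labels, not in the learning algorithm, which is why it is easy to state and hard to fix.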