- cross-posted to:
- technology@lemmy.world
Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It is the first AI technology aiming to expose unreported CSAM at scale.
It won't flag just CP; it will also flag normal porn that happens to share traits the model associates with CP, moron
https://en.m.wikipedia.org/wiki/False_positives_and_false_negatives
Not that I think you'll understand. I'm posting this mostly for anyone moronic enough to read your comments and think "that seems reasonable".
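The false-positive concern above comes down to the base-rate effect the linked article describes: when the thing being detected is rare, even an accurate classifier flags mostly innocent content. Here is a minimal sketch with entirely hypothetical numbers (the upload volume, base rate, sensitivity, and specificity are assumptions for illustration, not Thorn's or Hive's actual metrics):

```python
# Illustrative only: all numbers below are hypothetical, not the
# real model's metrics. Demonstrates the base-rate effect: a rare
# target class means most flags are false positives.

uploads = 1_000_000        # assumed daily uploads
base_rate = 0.0001         # assume 1 in 10,000 uploads is actually CSAM
sensitivity = 0.99         # assumed true-positive rate
specificity = 0.99         # assumed true-negative rate

actual_positive = uploads * base_rate        # 100 real cases
actual_negative = uploads - actual_positive  # 999,900 innocent uploads

true_positives = actual_positive * sensitivity          # 99 caught
false_positives = actual_negative * (1 - specificity)   # 9,999 wrongly flagged

precision = true_positives / (true_positives + false_positives)
print(f"total flagged:   {true_positives + false_positives:.0f}")
print(f"false positives: {false_positives:.0f}")
print(f"precision:       {precision:.2%}")
```

Under these assumed numbers, roughly 99% of flagged uploads would be innocent, even though the classifier is "99% accurate" on both classes. Whether that holds in practice depends entirely on the real base rate and the model's real error rates, which aren't public.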