- cross-posted to:
- privacy@lemmy.ca
Follow-up to last week’s story:
https://lemmy.ml/post/16672524
EDIT1: Politicians expect to be exempt.
EDIT2: Good news: Vote has been postponed due to disagreements.
They say the images are merely matched against pre-determined images found on the web. You’re talking about a different scenario, where AI detects inappropriate content in an image.
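The matching described here is typically a hash lookup: the upload is hashed and compared against a database of hashes of known material. A minimal sketch of that idea (the database contents and helper name are made up for illustration; real systems use perceptual hashes such as PhotoDNA rather than exact hashes, so re-encoded copies still match, but the lookup structure is the same):

```python
import hashlib

# Hypothetical database of hashes of known flagged images (illustrative values).
known_hashes = {
    hashlib.sha256(b"known-image-bytes").hexdigest(),
}

def is_known_image(image_bytes: bytes) -> bool:
    """Return True if the upload's exact SHA-256 hash is in the database.

    Exact hashing only catches byte-identical copies; the key point is
    that a genuinely new image can never appear in the database at all.
    """
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

print(is_known_image(b"known-image-bytes"))  # True: present in the database
print(is_known_image(b"some-new-image"))     # False: new content cannot match
```

This is also why "new material" is a fundamentally different problem: a lookup against known hashes cannot flag content that was never catalogued, so detecting new images requires a classifier rather than matching.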
It will detect known images and potential new images… how do you think it will detect the potential new and unknown images?
Source? Does the law require that? That’s not my impression.
Literally the article linked in the OP…
Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”
My bad. But that phrasing is super stupid, honestly. What company would want to promise to detect new child sex abuse material? It’s impossible to avoid false negatives.