This is such a weird story. Is Google really using computer vision to detect CSAM? How could that possibly work? It seems like a tremendous technical challenge.
Usually PhotoDNA has been deployed for this, but it almost certainly wouldn't be triggered by the dad uploading his own photos, since they had never been marked as CSAM in the PhotoDNA database.
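To make the distinction concrete: PhotoDNA-style detection is a lookup against hashes of already-catalogued images, not a judgment about image content. Here's a minimal sketch of that matching style using the open-source pHash algorithm (via the `imagehash` package) as a stand-in, since PhotoDNA itself is proprietary; the database contents, file names, and distance threshold are all made up for illustration.

```python
import imagehash
from PIL import Image

# Hashes of images already confirmed as CSAM by a vetting organisation.
# In reality this database is curated centrally (e.g. by NCMEC and
# partners), not assembled locally like this.
KNOWN_HASHES = {imagehash.hex_to_hash("d1c1b2a2e4f0a1b3")}

MAX_DISTANCE = 8  # Hamming-distance threshold; chosen arbitrarily here.

def matches_known_image(path: str) -> bool:
    """Return True if the upload is a near-duplicate of a known image."""
    upload_hash = imagehash.phash(Image.open(path))
    # A perceptual hash survives resizing and re-encoding, but a
    # brand-new photo hashes far away from everything in the database.
    return any(upload_hash - known <= MAX_DISTANCE
               for known in KNOWN_HASHES)
```

Which is exactly why this case is strange: hash matching only fires on near-duplicates of known images, so flagging a parent's freshly taken photos would require a content classifier instead.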
Everyone's using AI, and widely. I sell stuff online and sync a product feed to Facebook. Products often get banned based on image analysis. Sometimes the call is at least close, e.g. darts being classified as dangerous weapons; other times sneakers get the same classification.
If I appeal, it usually gets overturned, but sometimes sneakers get confirmed as weapons after review. There seems to be no image history: when a previously whitelisted product gets imported again (with a minor change in the description or something), it may get classified as a weapon again, as sketched below.
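A hypothetical sketch of the "image history" that seems to be missing: cache moderation verdicts keyed by a perceptual hash of the product photo, so a re-imported listing with an unchanged image keeps its earlier, possibly human-reviewed verdict instead of being re-run through the classifier. Nothing here reflects Facebook's actual pipeline; every name is invented.

```python
import imagehash
from PIL import Image

verdict_cache: dict[str, str] = {}  # hash hex -> "approved" / "banned"

def classify_image(path: str) -> str:
    """Placeholder for the real (fuzzy) classifier."""
    return "banned"  # pretend the model flags the sneakers again

def moderate(path: str) -> str:
    key = str(imagehash.phash(Image.open(path)))
    if key in verdict_cache:
        # Same perceptual hash (robust to re-encoding and resizing):
        # reuse the earlier verdict rather than reclassifying.
        return verdict_cache[key]
    verdict = classify_image(path)
    verdict_cache[key] = verdict
    return verdict

def record_appeal(path: str) -> None:
    """After a successful appeal, pin the verdict so re-imports stay approved."""
    verdict_cache[str(imagehash.phash(Image.open(path)))] = "approved"
```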
Needless to say, my ad spend is now zero and I expect my account to get banned any moment.
Fuzzy AI-based image analysis is OK for things like extracting roof shapes from aerial images, but seems totally inadequate for moderation, because it lacks nuance and context.