Quick question: how is Apple reporting you to the government but Google not, when they both report to the exact same entity (the NCMEC)?

Also, how do you rationalise "Apple bad, everyone else good" when Apple is actually the only company to have held a public consultation about the new feature, while the others implemented theirs silently and have been running them for years?

One last one: why do you think political memes uploaded to an iCloud Photo Library are the target of governments, if such memes are trivially detected when posted online? Why do you think Apple would comply here if Google (et al.) don't? If you're referencing China, are you not aware who runs their social media? Also, why do you think new memes can't be created on the fly - ones that wouldn't match a set of hashes used by nefarious governments?

Side note: are you aware Google uses hashes to detect CSAM imagery too? Better still, are you aware Google uses AI to guess at what might be a CSAM image and reports those as well?

https://support.google.com/transparencyreport/answer/1033093...

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
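
For anyone unfamiliar with the mechanism: hash-based detection of the kind both links describe boils down to comparing a fingerprint of each uploaded image against a list of fingerprints of known material. The sketch below is illustrative only - PhotoDNA and NeuralHash are not public, so it uses the open-source imagehash library and a made-up hash value as stand-ins.

    # Illustrative only: PhotoDNA and NeuralHash are proprietary, so this uses
    # the open-source imagehash perceptual hash and a made-up value as stand-ins.
    from PIL import Image
    import imagehash

    # Hypothetical database of hashes of known images (in reality supplied by NCMEC).
    known_hashes = {imagehash.hex_to_hash("ffd8e0c0c0c0e0f0")}

    def is_known_image(path: str) -> bool:
        """Return True if the image's perceptual hash exactly matches a known hash."""
        return imagehash.phash(Image.open(path)) in known_hashes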



If I put something unencrypted on the internet, I expect it to be scanned/leaked etc. I don't want my device (with my battery...) to spy on me.

Signed ~(Mostly)Happy google-less LineageOS user


Right, but the Apple paper I read [1] said that if you did not have iCloud Photos (in other words, "put it on the Internet") turned on, CSAM scanning would not occur.

So, how is it different, again?

1. I can't link the paper because, apparently, Apple took it offline, but it was widely reported on.


Generating an encrypted voucher based on a known CSAM image is not spying any more than the device cataloguing images by descriptions is.

For a technical forum, the unwillingness of individuals to read up on the system is perplexing. So far most of the arguments I've read are entirely based on constructing a strawman and then beating it to death.
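
To be concrete about the voucher step: this is a loose sketch only, not Apple's actual protocol (the real system uses private set intersection and threshold secret sharing, per the technical summary linked above), and every name and value in it is made up for illustration.

    # Loose sketch of the per-upload "safety voucher" idea. In Apple's design the
    # device cannot tell whether a hash matched, and the server can only decrypt
    # voucher payloads after a threshold number of matches; the stand-in
    # "encryption" below is NOT real cryptography, it just shows the data flow.
    import os
    from dataclasses import dataclass

    @dataclass
    class SafetyVoucher:
        blinded_hash: bytes       # derived from the image hash; opaque on its own
        encrypted_payload: bytes  # visual derivative, unreadable below the threshold

    def make_voucher(image_hash: bytes, derivative: bytes) -> SafetyVoucher:
        """Produce one voucher per uploaded photo, match or not."""
        blinded = bytes(b ^ 0xAA for b in image_hash)    # stand-in for PSI blinding
        encrypted = bytes(b ^ 0x55 for b in derivative)  # stand-in for real encryption
        return SafetyVoucher(blinded, encrypted)

    voucher = make_voucher(os.urandom(16), b"low-res visual derivative")

The point being: nothing leaves the device in the clear, and per Apple's description the server only learns anything once a threshold of matching vouchers is crossed.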


Is there anything which prevents non-CSAM images from being added into this catalogue? As I understand it, the only thing stopping that is a promise from Apple - which can be steamrolled by a government request.


>For a technical forum the lack of willingness of individuals to read into the system is perplexing.

Admittedly, I may have missed it... But can you point out to me where this system cannot be expanded to non-CSAM material?

As a totally wild example, is this technology restricted, in some technical manner, from scanning for images which display a certain political leader as Winnie the Pooh?
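
To make the worry concrete: from the device's point of view a hash list is just opaque numbers, so nothing in the matching code itself knows or cares what the source images depict - the restriction to CSAM lives entirely in who curates the database. A toy illustration, with hypothetical hash values:

    # The matching step is content-agnostic: it behaves identically whether the
    # database holds hashes of CSAM or hashes of political memes. Any restriction
    # to CSAM lives in who curates the database, not in this code.
    def matches_database(image_hash: int, database: set[int]) -> bool:
        return image_hash in database

    csam_db = {0x1F3A5C7E9B2D4F60}   # hypothetical hashes supplied by NCMEC
    meme_db = {0x8BADF00DDEADBEEF}   # hypothetical hashes supplied by a government

    print(matches_database(0x8BADF00DDEADBEEF, meme_db))  # True - same code path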


What can happen:

+ Approximate matching. They might want nearly identical images to be flagged as well (see the sketch below).

+ Scope creep. Particular images of people who are wanted, or areas where they might live, or shots of joints, etc.

+ Mistakes. Accidentally flagging honest citizens, and the bureaucracy that will follow. "Innocent until proven guilty..." - that doesn't square with preemptively scanning someone's personal devices.
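
On the approximate-matching point: perceptual hashes are normally compared with a distance threshold rather than exact equality, so crops and recompressions of a listed image can be flagged too. A sketch of that idea, again using the open imagehash library as a stand-in and an arbitrary threshold:

    # Approximate matching: flag an image if its perceptual hash is within a small
    # Hamming distance of any known hash, not only on an exact match.
    from PIL import Image
    import imagehash

    MAX_DISTANCE = 5  # arbitrary threshold chosen for this illustration

    def is_near_match(path: str, known_hashes: list[imagehash.ImageHash]) -> bool:
        h = imagehash.phash(Image.open(path))
        # Subtracting two ImageHash objects yields their Hamming distance.
        return any(h - known <= MAX_DISTANCE for known in known_hashes)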


Does Google only scan images you send them or does it also scan every image you take, store or receive on your phone?


Are you using Google's cloud services? Then all of the above, with the exception of "receive" - just like Apple's Messages, merely receiving a message doesn't add it to your library (even if it's surfaced there for convenience).

If using WhatsApp, turn off autosave on images.



