Partially agree.
However, this problem has existed with scam e-mails since the 90s.
For me the solution is in signed e-mails and signed documents. If a person invites me to an online meeting with a signed e-mail, I can trust that it's really them.
Same for footage of wars, etc. The journalist taking it basically signs the video and vouches for its authenticity. If it turns out to be AI generated, we would lose trust in that person and wouldn't use their material anymore.
I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need central authority to solve this.
People at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf.
If the AI age has taught me anything, it's that most people do not care what their output is. They'll put their name on anything, taste or quality does not matter in the least. It's incredibly depressing.
Enshittification never stopped; we just stopped talking about it because it became normal. Quality does not matter anymore. I agree it's depressing, seeing AI slop being pushed and no one even putting in the time or effort to say this is bad and you should feel bad.
Picture this: your grandma calls you in a panic, and you tell her, "Drop me your public PGP key so I can verify the signature". PGP is dead outside of niche geek circles exactly because key management is basically an unsolvable problem for the average person.
> PGP is dead outside of niche geek circles exactly because key management is basically an unsolvable problem for the average person
Can this problem be solved with better software?
I believe it can; it's just that the average person doesn't need PGP. No demand for software solving this problem, therefore no software for it.
The problem can be solved, e.g. with a store of known PGP public keys together with their history (where each key was acquired) and a simple algorithm that calculates trust in a key as the probability of it being valid (or whatever term cryptographers would use here).
You can start with PGP keys of people you know, exchanging them offline as QR codes and marking them as "high trust", and then pull keys stored on their devices (lowering the trust level accordingly). There are some issues with how to calculate the probability, because when we pull the same key from different sources we can't know whether their reported trust levels are independent variables, but I believe you can deal with that by pulling the whole chain of transfers of the key, starting from the key's owner and ending at your device.
It is just a rough idea of how it could be done. Maybe other solutions are possible. My point is: the ugliness of PGP is a result of PGP being made by nerds, for nerds. There is no demand for PGP-like solutions outside of nerd communities. But maybe LLM-induced corrosion of trust will create that demand?
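The combination step above could be sketched like this. It assumes the sightings of a key really are independent (the big caveat noted above), and all the names and confidence numbers are purely illustrative, not part of any real protocol:

```python
# Sketch: combining several independent observations of a public key
# into one trust score. If each sighting i says the key is valid with
# probability p_i, and sightings are independent, the key is bogus only
# if every sighting is wrong: trust = 1 - product(1 - p_i).

def combined_trust(confidences):
    """Probability the key is valid, given independent sightings."""
    p_all_wrong = 1.0
    for p in confidences:
        p_all_wrong *= (1.0 - p)
    return 1.0 - p_all_wrong

# Illustrative: a key exchanged in person (0.95), pulled from a
# friend's device (0.6), and found on a keyserver (0.3).
print(round(combined_trust([0.95, 0.6, 0.3]), 3))
```

Note how the score only ever goes up with more sightings, which is exactly why the independence assumption matters: an attacker who controls several "sources" can inflate trust in a bad key.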
What you're describing (hidden key exchanges with Trust-On-First-Use) is exactly what Signal and WhatsApp already do - they just hid all the math under the hood and tied it to your phone number. A pure Web of Trust where normal people have to manually weigh probabilities is never going to take off. The average user will blindly click "Accept Risk and Continue" on literally any certificate warning just to get back to looking at pictures of their grandkids.
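For the curious, Trust-On-First-Use boils down to very little code. This is a toy sketch, with a plain dict standing in for the persistent key store and opaque strings standing in for real key material:

```python
# Toy Trust-On-First-Use (TOFU) key pinning, the approach messengers
# hide under the hood. Storage and key format are simplified stand-ins.

pinned_keys = {}  # identity (e.g. phone number) -> first key ever seen

def check_key(identity, key):
    """'pinned' on first contact, 'ok' if the key matches the pinned
    one, 'CHANGED' if it differs (the scary safety-number warning)."""
    if identity not in pinned_keys:
        pinned_keys[identity] = key  # trust on first use
        return "pinned"
    return "ok" if pinned_keys[identity] == key else "CHANGED"

print(check_key("+15551234", "keyA"))  # first contact: pin silently
print(check_key("+15551234", "keyA"))  # same key later: all good
print(check_key("+15551234", "keyB"))  # key changed: warn the user
```

The weak spot is exactly the last line: TOFU can't tell a reinstalled phone from an attacker, which is why the apps show a warning and leave the decision to the user.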
PGP works if you vouch for keys in person, both of you are honest and can be trusted to act in good faith when not in person, have good key chain and rotation hygiene, and the private keys can't be exfiltrated.
Yeah, there is no silver bullet solving the problem of trust completely and perfectly. People can lie and we can't make them stop, while everything else is just a workaround.
The point of GP was that any such system will require a central authority; PGP shows that you don't need one. I didn't claim that PGP is a perfect or good-enough solution, just that it exists and works for some people.
> both of you are honest and can be trusted to act in good faith when not in person
I believe it is not strictly necessary for the scheme to work. It is a limitation of OpenPGP and other implementations that they do not allow converting multiple independent observations of a public key (finding it in different sources, or encountering it used to sign messages) into a measure of trust in the key.
It is not a silver bullet either, but it can alleviate the problem and make it tractable.
The only doubt I have is how this system would stand against multiple actors trying to undermine it, but I still believe you could get something better than nothing, and probably better than a central authority.
The same way security cameras prove that their recordings are authentic and have not been modified: if modified, the video will no longer match the signature that was generated with it.
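The "modified footage no longer matches" property can be illustrated in a few lines. This sketch uses an HMAC as a stand-in for the camera's real digital signature (real systems would use an asymmetric signature so verifiers don't need the secret key), and the key and footage bytes are made up:

```python
import hmac
import hashlib

# Stand-in for a secret baked into the camera's hardware (illustrative).
camera_key = b"secret-key-in-camera-hardware"

def sign(footage: bytes) -> str:
    """Tag the footage; in a real camera this would be a public-key
    signature, not a shared-secret MAC."""
    return hmac.new(camera_key, footage, hashlib.sha256).hexdigest()

def verify(footage: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(footage), tag)

original = b"raw camera footage bytes"
tag = sign(original)

print(verify(original, tag))             # untouched footage checks out
print(verify(b"tampered footage", tag))  # any edit breaks the match
```

Even a one-bit change to the footage produces a completely different digest, so the old tag no longer verifies.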
> If the person invites me to a online meeting with a signed e-mail, I trust that person that it's really them.
In the interview scenario, generating an email signature is hardly beyond what an AI can do.
You have no prior knowledge of this person or their signature; it's not some government-issued ID. It's in essence just random data unless you know the person to be real.
With cash, you can only steal so much (or have transactions of up to certain size) until you run into geographical and physical constraints. With cryptocurrency, it’s possible to lose any amount.
With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount.
At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history.
The highlighted parallel is usually drawn between cryptocurrency and cash, not between cryptocurrency and banks. With both cash and cryptocurrency, as is the idea behind the analogy, 1) there’s no intermediary and 2) once it’s gone, it’s gone. Obviously, the banking system is not immune to fraud (not sure why you think I made that claim, unless your definition of “cash” includes electronic transfers), but banks and/or payment systems can (and do) resolve these cases and have certain KYC requirements.
There are people hosting agents online to talk to other agents etc. on their behalf. How difficult is it to just instruct such an agent to do the tasks you mentioned? You're assuming it's done by "bad actors" while it's most likely just going to be done by "everyone" that knows how to do it.
I mean, emails were and still are a huge security risk. Sometimes I'm more scared of employees opening and engaging with emails than I am of anything else.