I'm trying to understand the vulnerability you are pointing out; in the example of an AI assistant w/ access to your email, is that AI assistant also reading it's instructions from your email?
Yes. You can't guarantee that the assistant won't ever consider the text of an incoming email as a user instruction, and there is a lot of incentive to find ways to confuse an assistant in that specific way.
BTW, I find it weird that the Von Neumann vs. Harvard architecture debate (ie. whether executable instructions and data should even exist in the same computer memory) is now resurfacing in this form, but even weirder that so many people don't even see the problem (just like so many couldn't see the problem with MS Word macros being Turing-complete).
The key problem is that an LLM can't distinguish between instructions from a trusted source and instructions embedded in other text it is exposed to.
You might build your AI assistant with pseudo code like this:
prompt = "Summarize the following messages:"
emails = get_latest_emails(5)
for email in emails:
prompt += email.body
response = gpt4(prompt)
That first line was your instruction to the LLM - but there's no current way to be 100% certain that extra instructions in the bodies of those emails won't be followed instead.
If the interface is just text-in and text-out then Prompt injection seems like an incredibly large problem. Almost as large as SQL injection before ORMs and DB libraries became common.
Yeah, that's exactly the problem: it's string concatenation, like we used to do with SQL queries.
I called it "prompt injection" to name it after SQL injection - but with hindsight that was a bad choice of name, because SQL injection has an easy fix (escaping text correctly / parameterizing your queries) but that same solution doesn't actually work with prompt injection.
Quite a few LLMs offer a concept of a "system prompt", which looks a bit like your pseudocode there. The OpenAI ones have that, and Anthropic just announced the same feature for their Claude 2.1 model.
The problem is the system prompt is still concatenated together with the rest of the input. It might have special reserved token delimiters to help the model identify which bit is system prompt and which bit isn't, and the models have been trained to pay more attention to instructions in the system prompt, but it's not infallible: you can still put instructions in the regular prompt that outweight the system prompt, if you try hard enough.
The way I see it, the problem is almost closer to social engineering than SQL injection.
A manager can instruct their reception team to only let people in with an ID Badge, and they already know they need to follow their manager’s direction, but when someone smooth persuades their way through they’re going to give a reason like “he said he was building maintenance and it was an emergency”.