Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Fair enough. Forgive my probably ignorance, but if Claude Code can be attacked like this, doesn’t that means that also foundation LLMs are vulnerable to this, and is not a local LLM thing?


It's not an LLM thing at all. Prompt injection has always been an attack against software that uses LLMs. LLMs on their own can't be attacked meaningfully (well, you can jailbreak them and trick them into telling you the recipe for meth but that's another issue entirely). A system that wraps an LLM with the ability for it to request tool calls like "run this in bash" is where this stuff gets dangerous.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: