Attack vector in which untrusted input (a web page, a PDF, a tool's output) contains instructions that trick the LLM into ignoring its system prompt. 2025's most feared AI vulnerability.
"Prompt injection in an email could make the agent exfiltrate all your unread messages."