The OWASP LLM Top 10 is the best available taxonomy for understanding risk in AI systems. It's also widely misread. Most teams treat it the way they treat the original OWASP Top 10: as a checklist to run down and close. That's not how it works, and organizations that approach it that way will close every item on the list and still be exposed.

Here's a practitioner's view of what the top entries actually mean, what they look like in deployed systems, and what you need to address if you're operating AI in a regulated environment.

LLM01: Prompt Injection

Prompt injection is the most frequently exploited vulnerability in production AI systems right now. The attack works by embedding instructions in user-supplied input, or in data the model retrieves, that cause the model to behave in ways its designers didn't intend. Unlike SQL injection, where a parser separates query from data, prompt injection has no syntax boundary at all: the model doesn't see instructions and data as different things.

Direct prompt injection: a user types instructions that override the system prompt. Indirect prompt injection: the model retrieves external content (a document, a webpage, a database record) that contains embedded instructions. In a regulated environment where AI systems retrieve and summarize clinical records, financial documents, or manufacturing data, indirect injection is the higher-risk vector. The adversary doesn't need access to the application; they need access to a data source the application reads.

Detection and response for prompt injection are immature, and most organizations have neither. Build input validation and output monitoring, and treat every model interaction as potentially adversarial.
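One narrow place to start on the input side is scanning retrieved content before it reaches the model. The sketch below is illustrative only: the pattern list is hypothetical, real injections are far more varied, and pattern matching is a monitoring signal, not a sufficient defense.

```python
import re

# Illustrative patterns only: real injections are far more varied, and
# pattern matching is a monitoring signal, not a sufficient defense.
SUSPECT_PATTERNS = [
    r"ignore .{0,30}instructions",
    r"disregard .{0,30}(rules|guidelines|instructions)",
    r"you are now",
    r"system prompt",
]

def flag_retrieved_content(text: str) -> list[str]:
    """Return the suspect patterns found in retrieved content.

    A hit should route the document to quarantine or human review,
    not silently drop it: the signal is also useful for monitoring.
    """
    lowered = text.lower()
    return [p for p in SUSPECT_PATTERNS if re.search(p, lowered)]
```

The point of returning the matched patterns rather than a boolean is that hits feed your monitoring: a sudden spike of flagged documents from one data source is exactly the indirect-injection signal most teams currently lack.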

LLM02: Sensitive Information Disclosure

Language models can leak sensitive information from their training data, from their context window, or from retrieved data in RAG architectures. In regulated industries, this has concrete compliance implications. A model fine-tuned on patient records that surfaces specific clinical details in response to carefully crafted queries is a HIPAA incident. A model that reveals strategic customer information from its context window is a confidentiality breach.

The mitigation isn't primarily technical; it's data governance. What data did this model train on? What data flows through its context? Who can access the system, and what can they elicit from it? These are questions your data governance function needs to own, not just your security team.
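Governance owns the problem, but a thin technical backstop at the output boundary still helps. A minimal sketch, assuming US-style identifier formats; a real deployment would use the organization's DLP tooling and data classification, not ad-hoc regexes, and the "MRN" format here is hypothetical.

```python
import re

# Illustrative US-format patterns only; real deployments should use the
# organization's DLP tooling. The "MRN" record-number format is hypothetical.
REDACTIONS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def redact_output(text: str) -> tuple[str, list[str]]:
    """Redact known sensitive formats and report which patterns fired.

    The report is the interesting part: repeated hits on one system are
    evidence that sensitive data is reachable from its context window.
    """
    fired = []
    for label, pattern in REDACTIONS.items():
        text, count = pattern.subn(f"[REDACTED-{label.upper()}]", text)
        if count:
            fired.append(label)
    return text, fired
```

Treat every firing as a governance finding, not just a blocked response: it answers the "what can they elicit from it?" question with evidence.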

LLM06: Excessive Agency

This is the one that keeps me up at night more than the others. As AI systems become agentic (able to take actions, not just generate text), excessive agency becomes the highest-consequence risk in the taxonomy. A model that can read email, write email, access databases, execute code, or make API calls has real-world blast radius when it's manipulated or makes an error.

The OWASP guidance is correct: minimize capability, enforce least privilege, require human confirmation for consequential actions. But "consequential" is doing a lot of work in that sentence, and most organizations don't have a clear definition of what it means in their environment. Before you deploy an agentic AI system, write down exactly what it can do: every API call, every data source it can write to, every external system it can affect. Then ask whether you've applied the same controls you'd apply to a human employee with that level of access. Almost always, the answer is no.
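The enumerate-and-gate pattern can be made concrete in code. This is a hypothetical sketch, not any particular agent framework: every tool the agent can call is in an explicit allowlist, and anything marked consequential blocks on a human confirmation callback.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical sketch: every action the agent can take is enumerated in
# an explicit allowlist, and consequential actions (writes, messages,
# payments) block on a human confirmation callback before they run.
@dataclass
class Tool:
    name: str
    run: Callable[..., Any]
    consequential: bool

class GatedAgent:
    def __init__(self, tools: list[Tool], confirm: Callable[[str], bool]):
        self._tools = {t.name: t for t in tools}  # the allowlist
        self._confirm = confirm

    def invoke(self, name: str, **kwargs: Any) -> Any:
        tool = self._tools.get(name)
        if tool is None:
            raise PermissionError(f"tool not in allowlist: {name}")
        if tool.consequential and not self._confirm(f"{name}({kwargs})"):
            raise PermissionError(f"human declined: {name}")
        return tool.run(**kwargs)
```

A declined confirmation raises rather than silently skipping, so the refusal shows up in logs and in the model's error handling. The hard part, per the above, isn't the gate; it's deciding honestly which tools get `consequential=True`.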

LLM09: Misinformation

For most industries, hallucination is an annoyance. In regulated industries (pharma, healthcare, financial services), it's a material risk. A model that confidently states an incorrect drug interaction, fabricates a regulatory citation, or generates a financial calculation with plausible-looking but wrong numbers creates real liability.

Regulated environments using AI for anything that touches clinical decisions, regulatory submissions, or financial reporting need explicit hallucination controls: retrieval-augmented generation with cited sources, human review requirements for consequential outputs, and audit trails that distinguish model output from verified information. "The AI said it" is not a defensible position in an FDA Form 483 observation or an SEC enforcement action.

What the Top 10 Doesn't Tell You

The OWASP LLM Top 10 is a risk taxonomy, not a remediation playbook. It tells you what categories of problems exist; it doesn't tell you which ones apply to your specific system, how severe they are in your environment, or what to do first.

It also leaves several concerns that matter in regulated environments unaddressed: nothing in the list tells you how to validate an AI system for a regulated use, or how to document that validation for an auditor.

Using It Well

Use the OWASP LLM Top 10 as a structured starting point for AI risk assessment, not as a compliance checklist. For each system you're assessing: identify which top-10 categories are relevant to that system's architecture and data flows, assess severity in your specific context (a model that only generates internal reports has different exposure than one integrated into a customer-facing application), and prioritize remediation based on likelihood and impact in your environment, not based on OWASP's ordering.
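The prioritization step can be as simple as a likelihood-times-impact score per relevant category. A toy sketch: the scores below are invented judgment calls on a 1 (low) to 5 (high) scale, and the ranking deliberately ignores OWASP's own ordering.

```python
# Toy scoring sketch: likelihood and impact are the assessing team's
# judgment calls on a 1 (low) to 5 (high) scale; the example scores
# below are invented for illustration.
def prioritize(assessment: dict[str, tuple[int, int]]) -> list[tuple[str, int]]:
    """Map {category: (likelihood, impact)} to (category, score) pairs,
    highest risk first."""
    scored = [(cat, like * impact) for cat, (like, impact) in assessment.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

ranking = prioritize({
    "LLM01 Prompt Injection": (5, 4),                # customer-facing RAG app
    "LLM02 Sensitive Information Disclosure": (3, 5),
    "LLM06 Excessive Agency": (2, 5),                # agent is read-only today
})
```

The output is per-system: run the same exercise on your internal reporting tool and the ordering will likely differ, which is the whole point of not treating the list as ranked.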

Combine it with the NIST AI RMF for governance structure, and with the CSA AI Safety guidance for cloud-specific controls. No single framework covers everything; the organizations doing this well are using all three.