Your engineers are already using AI tools to write code. If you're at a company with more than a dozen developers and haven't explicitly addressed this, the number of AI-assisted code contributions in your codebase today is not zero: it's large and unknown. Cursor, Claude Code, GitHub Copilot, ChatGPT, and a dozen other tools are in daily use. The governance question isn't whether to allow this. It's whether you'll govern it before it governs you.

This isn't a hypothetical risk. It's a set of concrete, observable problems: source code flowing to third-party AI services without data handling review, credentials appearing in AI prompts and being retained in vendor logs, AI-generated code with subtle security defects passing code review because reviewers assume AI-produced code is correct, and package hallucinations leading to dependency confusion attacks when developers install packages that don't exist.

The Four Risk Categories That Matter

Data exfiltration via AI tools. When a developer pastes code into an AI assistant, or when a tool like Cursor sends entire file contexts automatically, that code leaves your environment. For most organizations, proprietary source code is a confidential asset. In regulated industries, it may contain embedded configuration values, internal API structures, or references to regulated data systems. Every AI tool used for development is a potential data exfiltration channel, and most organizations have no visibility into what's leaving through them.

Credential exposure. Developers expose secrets to AI tools with alarming frequency, not from carelessness, but because the interaction feels private. A developer asking an AI tool to help debug a connection string that includes a database password has just sent that password to a third-party service. Depending on the vendor's data retention policy, that credential may be stored for weeks. Secret scanning at the repository level catches committed secrets; it doesn't catch secrets shared in AI prompts.
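One mitigation is a client-side filter that redacts obvious secret patterns before a prompt leaves the machine. A minimal sketch in Python, with invented patterns; a real deployment would use a maintained ruleset from a dedicated secret scanner, and this is a last line of defense, not a substitute for one:

```python
import re

# Illustrative patterns only -- a production filter would load rules from a
# maintained secret-scanning ruleset, not a hand-written list like this.
SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*=\s*\S+"),                # password=hunter2
    re.compile(r"(?i)aws_secret_access_key\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)postgres(?:ql)?://\w+:[^@\s]+@"),    # creds in a connection URL
]

def redact_prompt(text: str) -> tuple[str, bool]:
    """Return (possibly-redacted text, True if anything was redacted)."""
    flagged = False
    for pattern in SECRET_PATTERNS:
        text, count = pattern.subn("[REDACTED]", text)
        flagged = flagged or count > 0
    return text, flagged

# The debugging scenario from above: a connection string with an embedded password.
prompt = "Help me debug: conn = connect('postgresql://app:s3cret@db.internal/orders')"
clean, flagged = redact_prompt(prompt)
```

A wrapper like this sits between the developer and the tool's API; when `flagged` is true, it can block the request or warn the developer before sending.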

Insecure AI-generated code. AI models generate code that looks correct but contains security defects: SQL queries built with string concatenation, hardcoded timeouts that create race conditions, authentication logic with subtle flaws, dependencies that are hallucinated or outdated. Code review is the control that's supposed to catch these, but reviewers often apply less scrutiny to AI-generated code, not more, because it looks polished and comes with confident-sounding explanations.
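The first defect in that list is easy to demonstrate. A minimal sketch (using sqlite3 with an in-memory database; the table and values are invented) showing why string-built SQL reads as correct in a casual review but fails against a trivial injection payload, while the parameterized form does not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_unsafe(name: str):
    # The pattern AI assistants frequently emit: SQL built by concatenation.
    # A payload like "' OR '1'='1" turns the WHERE clause into a tautology.
    return conn.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
```

The unsafe version returns every row for the payload; the safe version returns none. Both look equally "finished" in a diff, which is exactly why this class of defect survives review.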

Dependency confusion and supply chain risk. Language models trained on older data will suggest packages that have been deprecated, renamed, or that never existed. When developers install a hallucinated package name, they may install a malicious package that a threat actor has pre-positioned under that name. This is a documented attack vector. Your dependency management gates need to account for AI-suggested packages.

What a Functioning Governance Program Looks Like

A governance program for AI-assisted development has four components. Each one is independently valuable; together they provide defense in depth.

Policy and tool approval. Which tools are approved? Under what conditions? What data can they access? Most organizations let developers choose their tools informally, which means the tool choices are made by whichever engineer discovers them first, with no assessment of data handling practices, training policies, or contractual protections. A tool approval process doesn't need to be bureaucratic. It needs to be consistent and fast enough that developers use it rather than route around it.

Configuration controls. AI tools have configuration options that affect their security posture. Enterprise accounts typically disable training on customer code. Repository-level configuration files (like CLAUDE.md for Claude Code, or .github/copilot-instructions.md for Copilot) can enforce behavioral constraints, prohibiting certain patterns, requiring specific security checks, refusing to generate certain categories of code. These controls are often invisible to developers who don't know to look for them, and absent from organizations that haven't explicitly set them.

CI/CD security gates. Policy and configuration are preventive. Gates are detective and enforcement controls. Secret scanning on every commit (not just periodic scans) catches credentials before they reach main. SAST on every pull request catches common vulnerability patterns in AI-generated code. Dependency scanning catches the hallucinated and malicious packages. License scanning catches GPL code introduced through AI suggestions. Each of these gates is a standard tool; what's missing in most organizations is the combination and the mandate.
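The combination might look like the following GitHub Actions sketch. Tool choices here (gitleaks, Semgrep, pip-audit) are stand-ins for whatever your organization has selected, and the steps assume those scanners are already installed on the runner; a license-scanning step would follow the same pattern:

```yaml
# Illustrative workflow -- adapt tools and flags to your own stack.
name: security-gates
on: [pull_request, push]
jobs:
  gates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so secret scanning sees every commit
      - name: Secret scanning
        run: gitleaks detect --source . --redact
      - name: SAST
        run: semgrep scan --config auto --error
      - name: Dependency scanning
        run: pip-audit -r requirements.txt
```

Each step exits non-zero on findings, so the pull request is blocked rather than merely annotated. That's the difference between a scan and a gate.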

Developer training. Not compliance training, operational training. Developers need to understand what their AI tools actually do with their code, which risks are specific to their environment, and which concrete behaviors they're responsible for. "Don't put credentials in prompts" is actionable. "Be aware of security risks when using AI" is not. Training that shows developers the pre-commit hook catching a secret in real time is more effective than a policy document they read once.

The CLAUDE.md Pattern

One of the most underused controls available to teams using Claude Code is the repository-level CLAUDE.md file. This file provides persistent instructions to Claude Code that apply across every interaction in that repository, and it survives developer turnover, unlike tribal knowledge.

A well-configured CLAUDE.md can: prohibit generation of code that touches regulated data without explicit annotation, require specific security patterns for authentication and authorization code, ban certain dependency categories, mandate specific review steps before committing, and alert the developer when they're working in a security-sensitive context. It's a lightweight but durable control layer that most teams using Claude Code haven't deployed.
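A fragment illustrating those constraints might look like this. The section names, paths, and rules are invented for illustration; the instructions are plain markdown that Claude Code reads at the start of each session:

```markdown
# CLAUDE.md -- illustrative fragment

## Security constraints
- Do not generate code that reads or writes regulated-data tables unless the
  surrounding code carries an explicit `@regulated-data` annotation.
- All authentication and authorization code must use the platform's shared
  `auth` module; never hand-roll session or token handling.
- Do not add dependencies that perform network telemetry; flag every new
  dependency for human review before adding it.

## Workflow
- Before proposing a commit, list the security-relevant changes and ask the
  developer to confirm them.
- When editing files under `services/payments/`, state that this is a
  security-sensitive area and apply extra caution.
```

Because the file is versioned with the repository, the constraints travel with the code and are reviewable in the same pull requests as everything else.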

The Regulatory Dimension

For regulated industries, AI-assisted development isn't just a security question, it's a compliance question. Software that touches regulated data is subject to validation requirements. Generating code with AI doesn't exempt that software from those requirements; if anything, it may intensify the scrutiny regulators apply, because the provenance of AI-generated code is harder to trace than code written by a named human author.

Organizations developing software for FDA-regulated devices or processes, financial systems subject to SOX, or healthcare applications under HIPAA need to establish how AI-assisted code fits into their software development lifecycle documentation, validation protocols, and audit trails. These aren't hypothetical future requirements; they're questions that will be asked in your next audit cycle.

Starting Point: The Ungoverned State

Most teams we work with start from an ungoverned state. Developers are using multiple AI tools with no formal approval, no consistent configuration, no CI/CD gates, and no training beyond what individual developers have picked up on their own. The tools are producing real value, development velocity is up, and teams aren't willing to give that up.

The goal isn't to slow AI adoption in the development workflow. It's to give it a structure that makes it sustainable, where the security and compliance teams can account for what's happening, auditors can verify controls exist, and developers have clear guidance that reduces their uncertainty rather than adding to it. The organizations that build this governance now will be able to adopt increasingly capable AI development tools as they emerge. The ones that don't will face increasing pressure to slow down as auditors and regulators catch up.