The mandate is everywhere now: “We need to use AI.” Boards want efficiency. Executives want innovation. Vendors are quietly flipping on AI “copilots” in tools you already own. And somewhere in the middle sits security and compliance, being told to adopt AI with little clarity on why, where, or how.
Simply saying “no” is no longer a serious option. The US Department of Justice has already folded artificial intelligence into how it evaluates corporate compliance programs, via updates to its Evaluation of Corporate Compliance Programs (ECCP). That means prosecutors will treat AI like any other powerful technology: if you use it, they’ll ask how you governed it.
The opportunity here is subtle but important: when AI is pushed onto security and compliance, ECCP gives you a structured way to respond. Instead of arguing about whether to use AI at all, you can steer the conversation toward how to deploy it with defensible governance, risk assessment, and monitoring.
ECCP: The Lens Prosecutors Will Use on Your AI
The ECCP is DOJ’s playbook for assessing whether a company’s compliance program is real or cosmetic. It asks practical questions about design, implementation, and effectiveness, and prosecutors use the answers when making charging decisions and setting penalties and remediation expectations.
Recent updates and commentary make clear that AI and other emerging technologies are now part of that lens. DOJ expects companies to think about:
- How AI is used in the business and in compliance functions.
- What risks those uses create (bias, misuse, data leakage, evasion of controls).
- What governance, documentation, and oversight exist around AI deployments.
The core message is simple: you do not get a free pass because an AI tool was “just added” to a system you already use. If it influences decisions, touches sensitive data, or is embedded in critical processes, DOJ will treat it as part of your compliance landscape.
The “Forced AI” Pattern: Pressure Without a Plan
If you work in security or compliance, the pattern is familiar:
- A senior leader emerges from a board meeting and announces that “we’re behind on AI.”
- Vendors start pitching “GenAI‑enabled” features as table stakes, bundled into renewals.
- Business units experiment with AI agents or copilots on their own data, then ask compliance to bless it after the fact.
The risks of this “AI by decree” approach are obvious:
- Tools get rolled out without a clear problem statement or success criteria.
- Sensitive data flows into systems without adequate access controls or guardrails.
- Decision‑making is quietly shifted from humans to algorithms, often without explainability or audit trails.
Compliance and security teams feel cornered: if they resist, they appear anti‑innovation; if they cave, they own the fallout when something goes wrong. This is where ECCP becomes extremely useful—not as yet another checklist, but as a language and structure you can use to redirect the conversation.
Governance: Who Owns AI Risk?
One of ECCP’s strongest through‑lines is governance: it asks who owns the program, how it’s structured, and whether leadership is actually engaged. Apply that same thinking to AI and you quickly see why off‑the‑cuff deployments are dangerous.
Before you bless any AI use—especially one being pushed from above—press for clarity on governance questions that mirror ECCP themes:
- Ownership: Who in the organization owns AI risk overall? Is it tucked into IT, scattered across business units, or anchored in an enterprise risk function?
- Policy and approval: Do you have written policies describing acceptable AI uses, approval processes, and restricted domains (e.g., sanctions, anti‑bribery, HR, investigations)?
- Oversight structure: Does an existing governance body (risk committee, ethics committee, tech review board) formally review high‑risk AI deployments, or are decisions happening ad hoc?
DOJ has consistently emphasized leadership engagement and tone from the top in its ECCP guidance; it will apply the same logic to AI. When an executive insists on AI, you have every right to insist that they also sign up for an oversight structure consistent with ECCP expectations.
Risk Assessment: From “Use AI” to “Use AI Where It Makes Sense”
ECCP expects companies to conduct meaningful risk assessments and adjust their programs as risks evolve. AI is now one of those evolving risks and opportunities. Instead of treating “use AI” as a blanket directive, recast it as a set of risk‑based questions.
You can start with three simple steps:
- Clarify the problem. Ask stakeholders to articulate what they’re trying to fix: Is it manual policy review? Transaction monitoring? Hotline triage? Third‑party risk scoring? Vague aspirations like “be more efficient” are a red flag.
- Classify the risk of the use case. Use a basic matrix to sort AI ideas into low‑ and high‑risk buckets:
  - Low‑risk examples: drafting first‑pass training content, summarizing public policies, assisting with non‑binding research.
  - High‑risk examples: scoring third‑party integrity risk, prioritizing investigations, making hiring or disciplinary recommendations, influencing sanctions or export decisions.
- Document the assessment. For each proposed use, record why it’s acceptable or not, what mitigations are needed, and how you’ll revisit the decision (see the register sketch below). That record becomes part of your ECCP‑aligned evidence that you didn’t adopt AI blindly.
The goal is not to choke off AI entirely; it’s to ensure you can explain, in a future conversation with regulators or prosecutors, how you decided where AI belongs and where it does not.
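To make the documentation step concrete, here is a minimal sketch of what one entry in an AI use-case register could look like, assuming you track these assessments in even a lightweight internal tool. The class, field names, and tiering rule are illustrative, not a prescribed format; a real register should follow your own risk taxonomy and approval workflow.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIUseCaseAssessment:
    """One entry in a hypothetical AI use-case register (all field names illustrative)."""
    name: str                           # e.g., "Hotline triage summarizer"
    problem_statement: str              # step 1: what is this actually trying to fix?
    risk_tier: str                      # step 2: "low" or "high"
    rationale: str                      # why that tier was assigned
    mitigations: list[str] = field(default_factory=list)  # required safeguards
    approved: bool = False
    review_date: date | None = None     # step 3: when the decision will be revisited


def classify(influences_decisions: bool, touches_sensitive_data: bool) -> str:
    """Crude tiering rule mirroring the examples above: anything that shapes
    decisions or touches sensitive data is high risk; drafting and research
    assistance stay low risk."""
    return "high" if (influences_decisions or touches_sensitive_data) else "low"


# Example: a third-party risk-scoring proposal lands in the register as high risk.
entry = AIUseCaseAssessment(
    name="Third-party integrity risk scoring",
    problem_statement="Reduce manual effort in third-party onboarding reviews",
    risk_tier=classify(influences_decisions=True, touches_sensitive_data=True),
    rationale="Scores influence onboarding decisions and draw on due-diligence data",
    mitigations=["human review of every high score", "quarterly bias testing"],
    review_date=date(2026, 6, 30),
)
print(entry.risk_tier)  # -> "high"
```

Kept consistently, even a record this small gives you something concrete to produce when someone asks how a given use case was assessed and why.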
Data, Controls, and Monitoring: The Hard Work Under the Hype
ECCP spends a lot of time on the nuts and bolts: how data is used, what controls are in place, and how the program is monitored and updated. That is exactly where AI can either reinforce your program or quietly undermine it.
A few practical angles to focus on:
- Data provenance and access. Where does the AI system get its inputs and training data? Are you mixing regulated or sensitive data (PII, health information, financial data, investigations content) into systems that weren’t designed to hold it? Who can access prompts, outputs, and logs? (A simple audit-logging sketch appears below.)
- Control alignment. Does the AI system respect existing controls (segregation of duties, approval chains, sanctions filters), or does it create shortcuts that effectively bypass them? For example, an “AI assistant” that drafts due‑diligence summaries might subtly steer reviewers toward a particular conclusion if not supervised.
- Testing and ongoing monitoring. How do you know the AI system is doing what you think it’s doing—and nothing more? Are you periodically testing for bias, false positives/negatives, and edge cases? Do you have metrics or dashboards for AI‑enabled processes, not just traditional ones?
Many compliance leaders at recent AI‑focused events have reiterated a basic principle: AI should augment human judgment, not replace it. That principle lines up cleanly with ECCP’s emphasis on effective, well‑monitored controls: you gain efficiency without surrendering accountability.
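One practical way to support both the provenance and monitoring questions is to log every AI interaction in a structured, retrievable form. The sketch below assumes a generic, vendor-agnostic wrapper around whatever AI service is in use; the function name, record fields, and logging setup are illustrative rather than any particular tool’s API.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")


def log_ai_interaction(user_id: str, use_case: str, prompt: str, output: str,
                       data_sources: list[str]) -> None:
    """Write one structured audit record per AI interaction: who asked, under
    which approved use case, what data the system drew on, and what it returned."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "use_case": use_case,          # should map to an approved register entry
        "data_sources": data_sources,  # provenance: where the inputs came from
        "prompt": prompt,
        "output": output,
    }
    logger.info(json.dumps(record))


# Example: a due-diligence drafting assistant call is logged with its provenance.
log_ai_interaction(
    user_id="analyst-042",
    use_case="Due-diligence summary drafting",
    prompt="Summarize adverse media findings for Vendor X",
    output="(model output)",
    data_sources=["adverse_media_feed", "internal_due_diligence_file"],
)
```

Records like these let you answer “who used the tool, on what data, and what did it return?” from evidence rather than recollection.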
A Playbook for When AI Is Pushed on You
So what do you do the next time someone walks in and asks, “We already bought this AI add‑on, can you sign off?” An ECCP‑aligned playbook gives you a structured, repeatable response instead of improvisation.
You might anchor it around four moves:
- Ask for the “why” in writing. Request a short description of the AI use case: objectives, expected benefits, affected processes, data sources, and who will rely on the outputs. This is not bureaucratic clutter; it becomes part of your risk‑assessment and governance record.
- Run a mini risk assessment. Use the low‑/high‑risk classification and ECCP themes to identify where this use case sits and what safeguards it needs. For high‑risk areas, escalate to your governance forum before anything ships.
- Embed ECCP questions into vendor and internal approvals. For third‑party tools, ask vendors how they handle data, logging, model updates, and explainability—and put those answers in the contract or DPIA/PIA documentation. For internal builds, require design docs that address the same points.
- Define monitoring and evidence up front. Decide how you will test the AI’s behavior, what metrics you’ll track, and how often you’ll revisit the deployment. Make sure the resulting evidence (logs, test results, governance minutes) is stored where you can produce it quickly if regulators come knocking. (A minimal intake checklist is sketched below.)
Handled this way, you’re not the person saying “no” to AI; you’re the person saying “yes, but here’s how we’ll do it responsibly—and here’s what we need from you to make that happen.”
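To show what moves 1 and 4 can look like in practice, here is a minimal sketch of an intake check that holds sign-off until the written “why” and the monitoring plan are both on file. The required fields and the toy request are illustrative assumptions, not a standard form.

```python
# Required content for a written AI use-case request (move 1) and the monitoring
# commitments agreed up front (move 4). All field names here are illustrative.
REQUIRED_INTAKE_FIELDS = [
    "objectives", "expected_benefits", "affected_processes",
    "data_sources", "output_consumers",
]
REQUIRED_MONITORING_FIELDS = ["tests", "metrics", "review_frequency", "evidence_location"]


def missing_fields(request: dict) -> list[str]:
    """Return the required fields that are absent or empty, so approval can be
    held until both the written 'why' and the monitoring plan exist."""
    required = REQUIRED_INTAKE_FIELDS + REQUIRED_MONITORING_FIELDS
    return [name for name in required if not request.get(name)]


request = {
    "objectives": "Speed up hotline complaint triage",
    "expected_benefits": "Faster routing of high-severity complaints",
    "affected_processes": ["hotline intake", "case assignment"],
    "data_sources": ["hotline submissions"],
    "output_consumers": ["investigations team"],
    "tests": ["monthly sample of AI summaries checked against full complaints"],
    "metrics": ["misrouting rate", "time to triage"],
    "review_frequency": "quarterly",
    # "evidence_location" deliberately missing: sign-off should stall here.
}
print(missing_fields(request))  # -> ['evidence_location']
```

Whether this lives in a workflow tool, a ticket template, or a spreadsheet matters less than the rule it enforces: no blanks, no approval.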
Examples and Red Flags
A few patterns, drawn from recent AI and compliance discussions, are worth keeping on your radar.
- Risk scoring without explainability. A team deploys an AI model to score third‑party integrity risk on the promise of “faster onboarding.” They never validate for geographic or sector bias, and they can’t explain why specific high‑risk scores were assigned.
  - ECCP angle: prosecutors will ask how you ensured fairness, how you tested the tool, and what recourse affected parties had if they were wrongly flagged.
- AI‑summarized investigations. An internal investigations function starts using AI to summarize hotline complaints and draft investigative reports to save time. Overworked teams begin relying heavily on those summaries, and subtle mischaracterizations make it into final reports.
  - ECCP angle: DOJ may question whether investigations were thorough and independent, especially if problematic behavior was underplayed in AI‑generated summaries.
- AI copilots feeding on everything. A generalized AI assistant is integrated into collaboration tools and gains access to broad swaths of company data, including HR files, legal memos, and compliance investigations. There are no clear scoping or residency limits.
  - ECCP angle: regulators may view this as a failure to safeguard sensitive data, especially if access control principles were bypassed or ignored.
In all of these scenarios, ECCP‑style questions—who approved this, how was risk assessed, what controls were in place, how was it monitored—would surface the problems early.
Turning ECCP Into Your AI Translation Layer
The AI wave is not going away. New tools and mandates will keep arriving, often faster than your policies or training can keep up. But you’re not starting from zero: DOJ has already given you a detailed view of what “good” governance looks like in the ECCP, and that view now includes AI.
When AI is forced on security and compliance, you can respond in one of two ways:
- Treat each request as an isolated battle over a particular tool.
- Or use ECCP as your translation layer—accepting that AI will be part of the program, but insisting that it be designed, approved, and monitored like any other high‑impact control.
The second path puts you back in your proper role: not the department of “no,” but the function that ensures innovation doesn’t outpace accountability. The next time someone insists you “just turn on the AI,” you can say, “We will—here’s the governance and evidence we’ll need to do it right.”
Additional Sources:
Artificial Intelligence: DOJ Update to the Evaluation of Corporate Compliance Programs
When AI Is Forced on Compliance: the ECCP as your Guide (Opinion)
FINRA’s GenAI wake-up call: What compliance professionals must do now (Opinion)
DOJ Adds AI Considerations to Its Evaluation of Corporate Compliance Programs