AI Audit
An independent evaluation of an AI system's compliance, performance, fairness, and governance practices. Audits can be internal (conducted by the organization's own team) or external (by independent third parties), and may be required by regulation for high-risk systems.
Why It Matters
Self-assessment has limits. Independent audits provide the credibility that regulators, customers, and the public need to trust that AI systems actually work as claimed and don't cause hidden harm.
Example
A fintech company hires an external auditor to evaluate its AI credit scoring model, testing for disparate impact across racial groups, verifying that the model's documentation matches its actual behavior, and assessing whether the governance processes described in policy are followed in practice.
Think of it like...
An AI audit is like a financial audit — the company keeps its own books, but an independent auditor verifies that the numbers are real and the controls actually work.
Related Terms
AI Risk Register
A documented inventory of identified AI risks, their likelihood, severity, mitigation measures, and responsible owners. It serves as a living document that tracks risk across the AI portfolio and informs governance decisions about resource allocation and priority.
Conformity Assessment
The process by which a high-risk AI system is evaluated against regulatory requirements before being placed on the market. Under the EU AI Act, this may involve self-assessment by the provider or evaluation by an independent third-party body, depending on the system's use case.
Red Teaming (AI)
A structured adversarial testing exercise where testers deliberately attempt to find failures, vulnerabilities, biases, or harmful outputs in an AI system. Unlike standard testing that checks if the system works, red teaming checks how the system breaks.