AI Governance

Systemic Risk (AI)

Under the EU AI Act, a classification for general-purpose AI (GPAI) models whose capabilities or reach are significant enough to pose large-scale risks to public health, safety, public security, or fundamental rights across the EU. The classification is presumed when the cumulative compute used to train the model exceeds 10^25 FLOPs, and can also be applied by European Commission decision based on other criteria set out in the Act, such as the model's reach and number of users.
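A back-of-the-envelope check against the compute threshold can be sketched with the common "6 × parameters × training tokens" approximation for dense transformer training compute. This is a rule-of-thumb estimate, not the Act's official measurement method, and the model figures below are purely illustrative:

```python
# EU AI Act Art. 51 presumption threshold: 10^25 floating-point operations.
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimate_training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb
    (a common scaling heuristic, not the regulation's own formula)."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the estimated compute meets or exceeds 10^25 FLOPs."""
    return estimate_training_flop(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# Hypothetical 400B-parameter model trained on 15 trillion tokens:
# 6 * 4e11 * 1.5e13 = 3.6e25 FLOPs -> above the threshold.
print(presumed_systemic_risk(400e9, 15e12))   # True
# Hypothetical 7B-parameter model on 2 trillion tokens: well below.
print(presumed_systemic_risk(7e9, 2e12))      # False
```

The point of the sketch is that the presumption is a bright-line numeric test on cumulative training compute, which is why providers can anticipate in advance whether a planned training run will trigger the classification.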

Why It Matters

Systemic risk classification triggers the EU AI Act's most stringent GPAI obligations: model evaluation, adversarial testing, serious-incident tracking and reporting, cybersecurity protections, and energy consumption reporting. Only a handful of models currently meet the compute threshold, but it is a moving target: the Act allows the Commission to amend the threshold as the technology evolves.

Example

A frontier AI lab whose latest model was trained using more than 10^25 FLOPs must conduct and document adversarial red-teaming exercises, track and report serious incidents to the relevant authorities, implement cybersecurity protections for the model and its infrastructure, and report the model's energy consumption during training.

Think of it like...

Systemic risk in AI is like 'too big to fail' in banking — when a single model is so widely used that its failures could cascade across industries and borders, regulators impose extra safeguards.

Related Terms