General-Purpose AI Model (GPAI)
Under the EU AI Act, an AI model trained on broad data that can perform a wide range of tasks rather than being designed for a single purpose. GPAI providers face transparency obligations, with additional requirements if the model poses systemic risk — currently triggered at a training compute threshold of 10^25 floating-point operations (FLOPs).
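To make the threshold concrete, training compute is often estimated with the common rule of thumb FLOPs ≈ 6 × parameters × training tokens. The sketch below uses that approximation with hypothetical model sizes (the figures are illustrative assumptions, not published numbers for any real model):

```python
# Rough training-compute estimate vs. the EU AI Act systemic-risk
# threshold, using the common FLOPs ~= 6 * N * D approximation.
# Model sizes below are hypothetical, for illustration only.

SYSTEMIC_RISK_THRESHOLD = 10**25  # FLOPs, per the EU AI Act

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds 10^25 FLOPs."""
    return training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD

# Hypothetical 70B-parameter model, 2T training tokens:
print(presumed_systemic_risk(70e9, 2e12))   # ~8.4e23 FLOPs -> False

# Hypothetical 1T-parameter model, 15T training tokens:
print(presumed_systemic_risk(1e12, 15e12))  # ~9e25 FLOPs -> True
```

Note that the Act's threshold is a presumption trigger, not a precise measurement standard, and the Commission can adjust it or designate models on other criteria.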
Why It Matters
GPAI rules affect the largest AI model providers (OpenAI, Google, Anthropic, Meta) and cascade downstream to everyone who fine-tunes or deploys these models. Understanding the GPAI framework is essential for any organization building on top of foundation models.
Example
A company that trains a large language model capable of code generation, text summarization, and translation must publish a training data summary, put in place a policy to comply with EU copyright law, and provide technical documentation to downstream deployers — though models released under a free and open-source license are exempt from the documentation duties unless they pose systemic risk.
Think of it like...
GPAI regulation is like regulating a steel manufacturer — they don't control what gets built with their steel, but they're responsible for making sure the raw material meets safety standards and comes with proper specifications.
Related Terms
EU AI Act
The European Union's comprehensive regulatory framework for artificial intelligence, establishing rules based on risk levels. It categorizes AI systems from minimal to unacceptable risk with corresponding compliance requirements.
Systemic Risk (AI)
Under the EU AI Act, a classification for general-purpose AI models whose capabilities or reach are significant enough to pose risks to public health, safety, security, or fundamental rights across the EU. Currently triggered when a model's training compute exceeds 10^25 FLOPs, or by European Commission decision based on other criteria.
Foundation Model
A large AI model trained on broad data at scale that can be adapted to a wide range of downstream tasks. Foundation models serve as the base upon which specialized applications are built.