AI Governance

Fundamental Rights Impact Assessment (FRIA)

An assessment required under Article 27 of the EU AI Act for certain deployers of high-risk AI systems — notably public bodies and private entities providing public services — that evaluates the system's impact on fundamental rights, including non-discrimination, privacy, freedom of expression, and human dignity, before the system is put into use.

Why It Matters

FRIAs expand the lens beyond data privacy to cover the full spectrum of fundamental rights. An AI system might be privacy-compliant yet still undermine freedom of expression or entrench discrimination.

Example

A government agency deploying AI for social benefit eligibility screening must conduct a FRIA examining whether the system could disproportionately deny benefits to specific ethnic groups, people with disabilities, or non-native language speakers.

Think of it like...

If a DPIA asks 'is the patient's data safe?', a FRIA asks 'is the patient being treated fairly, with dignity, and with respect for their autonomy?'

Related Terms