The European Union has officially launched a cross-border risk review mechanism for large-scale AI models.
The European Union today launched its new Cross-Border AI Model Risk Review Mechanism, requiring all AI platforms, cloud model providers, and frontier model developers operating in the EU to submit at least one annual cross-border risk assessment report.
The report must disclose:
- Potential for large-scale societal harm (misinformation, election interference, security vulnerabilities)
- Data-source legality and cross-border privacy impacts
- Existence of kill-switch protocols, audit mechanisms, and transparency tools
- Risks of regulatory arbitrage in non-EU jurisdictions
Companies that fail to comply may face fines of up to 6% of global annual revenue, as well as possible restrictions on EU market access.
The move marks a significant regulatory escalation beyond the EU AI Act, signaling stricter scrutiny across global AI supply chains.
