Consolidated Policy Dossier

Global Governance Report

A synthesis of major regulatory signals issued between October and December 2025, drawing on primary data from the UN High-Level Advisory Body and the OECD AI Monitor.

United Nations (Nov 2025)

The Global Digital Compact

The UN General Assembly adopted the final resolution on 'AI for Equity'. Key focus: mandating that Tier-1 AI labs provide 10% of their compute credits to the Global South in support of the Sustainable Development Goals.

Impact: Sovereignty & Access

OECD (Dec 2025)

Principles 2.0

Updated AI Principles focusing on 'Agentic Accountability'. The new framework holds the provider of an autonomous agent liable for financial damages unless rigorous safety-testing certificates can be presented.

Impact: Liability Law

Consulting View (Dec 2025)

Sovereign Maturity

McKinsey's 'State of AI' report highlights a 'Governance Premium'. Organizations in strictly regulated zones (EU, India) are seeing 15% higher institutional investment due to lower long-term legal risk.

Impact: Capital Markets

Key Developments: Q4 2025

1. The AI Safety Summit (London, November 2025)

The UK hosted the second AI Safety Summit with participation from 28 nations. Key outcome: Agreement on International AI Safety Standards (IAISS) for frontier models exceeding 10^26 FLOPs in training compute.
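
As a rough sense of scale, the widely used ~6 × parameters × tokens approximation shows what kind of training run crosses the IAISS threshold. The approximation and the model sizes below are illustrative assumptions; the summit agreement (as summarized here) specifies only the 10^26 FLOP cutoff.

    # Rough check of whether a training run crosses the IAISS threshold,
    # using the common ~6 * N * D estimate of total training compute.
    # (The estimate is an industry rule of thumb, not part of IAISS.)

    IAISS_THRESHOLD_FLOPS = 1e26

    def estimated_training_flops(n_params: float, n_tokens: float) -> float:
        """Approximate total training compute as 6 * N * D."""
        return 6.0 * n_params * n_tokens

    # Hypothetical frontier run: 2T parameters trained on 50T tokens.
    flops = estimated_training_flops(2e12, 50e12)
    print(f"Estimated compute: {flops:.2e} FLOPs")               # 6.00e+26
    print("Covered by IAISS:", flops >= IAISS_THRESHOLD_FLOPS)   # True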

Notable Absence: China did not sign the agreement, citing concerns over "Western dominance" in setting technical standards. This creates a bifurcation in global AI governance.

2. OECD AI Principles 2.0

The OECD updated its 2019 AI Principles to address autonomous agents and agentic liability. The new framework establishes that (see the sketch after this list):

  • AI providers are vicariously liable for agent actions within granted authority
  • Pre-deployment safety testing is mandatory for high-risk agents
  • Agent decision-making is subject to transparency requirements
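
A minimal sketch of how a provider might operationalize these three duties in software. Everything here (the class, the certificate field, the log shape) is a hypothetical illustration, not an OECD-specified mechanism.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Agent:
        name: str
        granted_authority: set[str]             # actions the deployer authorized
        safety_certificate: str | None = None   # pre-deployment test attestation
        decision_log: list[dict] = field(default_factory=list)

    def execute(agent: Agent, action: str, rationale: str) -> bool:
        """Gate an agent action on certification and authority; log it."""
        if agent.safety_certificate is None:
            raise PermissionError("no pre-deployment safety certificate on file")
        authorized = action in agent.granted_authority
        # Transparency duty: record every decision, allowed or refused.
        agent.decision_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "rationale": rationale,
            "authorized": authorized,
        })
        return authorized

    bot = Agent("procurement-bot", {"request_quote", "issue_po"},
                safety_certificate="CERT-2025-001")
    print(execute(bot, "issue_po", "restock approved inventory"))     # True
    print(execute(bot, "wire_funds", "vendor requested prepayment"))  # False

Note the design choice: an out-of-scope action returns False rather than raising, while a missing safety certificate blocks everything; the log records both outcomes, which is the transparency hook.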

3. G20 Ministerial Declaration on AI

At the G20 Summit in Brazil (December 2025), member states agreed to establish a Global AI Compute Fund to provide subsidized GPU access to developing nations. Initial funding: $5B over 5 years.
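
A back-of-the-envelope check of the fund's scale appears below; the $/GPU-hour rate is an assumed blended cloud price, not a figure from the declaration.

    # Scale of the Global AI Compute Fund: $5B over 5 years.
    TOTAL_FUND_USD = 5e9
    YEARS = 5
    ASSUMED_GPU_HOUR_USD = 2.50   # hypothetical blended cloud rate

    annual_budget = TOTAL_FUND_USD / YEARS
    gpu_hours_per_year = annual_budget / ASSUMED_GPU_HOUR_USD
    print(f"Annual budget: ${annual_budget:,.0f}")                  # $1,000,000,000
    print(f"Subsidized GPU-hours/year: {gpu_hours_per_year:,.0f}")  # 400,000,000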

Controversy: Critics argue this creates dependencies on Western cloud providers (AWS, Azure, GCP) rather than building indigenous AI infrastructure.

Regulatory Divergence: The Split Internet

The world is fragmenting into three AI regulatory zones, each with incompatible requirements:

🇪🇺 Brussels Effect

Strict risk-based regulation. High compliance burden but clear legal certainty. The EU AI Act is becoming the de facto global standard.

27 member states

🇺🇸 Silicon Valley Model

Light-touch regulation. Sectoral approach with voluntary industry standards (NIST AI RMF). High innovation, high litigation risk.

US + aligned democracies

🇨🇳 Beijing Consensus

State-controlled deployment. Mandatory content filtering for "socialist values." Algorithm filing (Bei'an) required before public release.

China + sphere of influence

Corporate Strategy: Multi-Jurisdictional Compliance

Global AI companies now face the "compliance trilemma": they cannot simultaneously optimize for:

  • Speed to Market: Launching quickly without regulatory delays
  • Global Scale: Operating in all major markets
  • Compliance Certainty: Avoiding legal risk in each jurisdiction

Solution: Many companies are adopting geo-fencing, offering different feature sets in different regions. Example: OpenAI's GPT-4 image generation is disabled in China to comply with content restrictions.
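
A minimal sketch of what such geo-fencing can look like as configuration. The zones and feature flags are hypothetical, not any vendor's actual setup.

    # Hypothetical per-zone feature flags for one product.
    REGION_FEATURES: dict[str, set[str]] = {
        "EU": {"chat", "image_gen", "agents"},  # full set, AI Act conformant
        "US": {"chat", "image_gen", "agents"},
        "CN": {"chat"},                         # image_gen withheld; Bei'an filing required
    }

    def is_enabled(feature: str, region: str) -> bool:
        """Return whether a feature may be served in a regulatory zone."""
        return feature in REGION_FEATURES.get(region, set())

    print(is_enabled("image_gen", "EU"))  # True
    print(is_enabled("image_gen", "CN"))  # False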

Looking Ahead: 2026 Regulatory Calendar

  • February 2026: EU AI Act conformity deadlines for prohibited AI systems (Art. 5)
  • May 2026: India's AI Ethics Bill expected to pass the Lok Sabha
  • August 2026: UK AI Safety Institute publishes first mandatory safety evaluation framework
  • November 2026: Third AI Safety Summit (location TBD) to assess IAISS implementation