Since then, the pace of AI innovation has accelerated dramatically. Large language models have evolved from static retrieval systems to autonomous, agentic architectures capable of making decisions, taking actions, and interacting with live systems. This transformation has expanded both the opportunity and the attack surface for enterprise AI.
Introducing AIGF v2.0 - Meeting the Moment for Agentic AI
Today, we’re proud to announce the release of AIGF v2.0, marking the framework’s most significant evolution to date. This version introduces a dedicated agentic AI risk catalogue, expanding total coverage to 46 risks and mitigations, cross-referenced to 7 existing frameworks (including OWASP, MITRE, and the EU AI Act).
At the heart of this update are six additional risk/mitigation pairs, developed collaboratively by FINOS members and contributors from across the financial services and AI infrastructure ecosystems. These additions directly address emerging threats such as:
- Prompt injection and memory poisoning
- Persistent agent compromise
- Chain-of-thought leakage
- Supply-chain tampering in AI agents
The new agentic section provides not just policy guidance but operational pathways, defining how controls can be enforced in production environments. These mitigations can now be implemented as active runtime defenses, bridging the gap between governance and engineering.
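To make the idea of a runtime defense concrete, here is a minimal, purely illustrative sketch of a guard that screens untrusted text for prompt-injection markers before it reaches an agent. The function names, patterns, and `agent_fn` callable are all hypothetical assumptions for illustration; they are not part of the AIGF catalogue, and a production control would use model-based classifiers and the specific mitigations the framework defines.

```python
import re

# Hypothetical patterns only: a few common prompt-injection phrases.
# Real controls would rely on the AIGF-defined mitigations, not a keyword list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (the )?system prompt",
    r"you are now",
]

def screen_input(text: str) -> bool:
    """Return True if the text looks safe, False if it should be blocked."""
    lowered = text.lower()
    return not any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def guarded_agent_call(agent_fn, user_text: str) -> str:
    """Invoke the agent only when the input passes the screen; otherwise refuse."""
    if not screen_input(user_text):
        return "Input blocked: possible prompt injection detected."
    return agent_fn(user_text)
```

The point of the sketch is the placement of the check, not the heuristic itself: the control sits between the untrusted input and the agent, which is the "active runtime defense" posture the new agentic section describes.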
Why This Matters for Financial Institutions
For banks, insurers, and capital markets firms, agentic AI systems represent both a strategic advantage and a new governance frontier. With AIGF v2.0:
- Risk managers and compliance teams gain a community-tested framework aligned to evolving regulatory guidance across the EU AI Act and U.S. prudential standards.
- Developers and architects can translate AIGF v2.0 controls into operational safeguards, bringing governance directly into production environments.
- Regulators and auditors benefit from a transparent, open reference that connects risk theory to verifiable operational controls.
This collaborative approach ensures consistent, defensible AI governance, allowing FSIs to innovate confidently without compromising safety, ethics, or compliance.
Integrating with the Broader FINOS Ecosystem
AIGF v2.0 doesn’t stand alone. The next phase of work focuses on integration with other FINOS initiatives, including:
- CALM (Common Architecture Language Model) – enabling machine-readable control mappings and visualization of AI governance architectures.
- CC4AI (Common Controls for AI Services) – ensuring control inheritance across cloud and agentic workloads.
- AI Supplemental Directed Fund (AI SDF) – supporting open development of pre-competitive AI governance tools and metrics.
These integrations bring us closer to a world where AI compliance is codified, automatable, and shared across the industry, reducing redundancy and accelerating adoption.
The Road Ahead: Toward Living Governance
AIGF v2.0 is more than a framework update; it’s a signal of continuous evolution. As AI architectures grow more complex, governance must keep pace: iteratively, openly, and collaboratively.
The next steps are already underway:
- Integrating AIGF controls into the CALM visualization and control mapping layer (GitHub issue)
- Training: targeted at technical teams, risk practitioners, and leadership, these courses will provide the knowledge and tools to accelerate adoption and effective operationalization of the AIGF (sign up here)
- Expanding community workshops and regulator engagement in 2026
Adopt, Contribute, and Collaborate
The AI landscape isn’t waiting, and neither can governance. If your organization is experimenting with agentic AI, now is the time to adopt AIGF v2.0, pilot the controls, and share feedback.
Together, we can turn the complexity of agentic AI into shared, open, and trusted governance.
Author: Colin Eberhardt, CTO, Scott Logic