The biggest AI risk for plan sponsors may be doing nothing at all

Report says the financial sector drives up to 8% of GDP as AI becomes a board‑level risk

Boards that steward Canadians’ retirement savings now face an uncomfortable reality: the biggest AI risk may be “not doing enough.” 

The report from the second phase of the Financial Industry Forum on Artificial Intelligence (FIFAI II) says rapid AI adoption is reshaping strategic, cyber, consumer and systemic risks across Canadian financial services and that institutions must move “dynamically to capture AI’s benefits while responding to fast‑evolving risks.” 

AI shifts from edge tool to system‑level risk 

The forum, led by the Global Risk Institute (GRI) with OSFI, the Bank of Canada, the Department of Finance Canada, FCAC and FINTRAC, finds that AI is already embedded in critical functions—from fraud detection and compliance to trading and credit decisions. 

OSFI Superintendent Peter Routledge calls AI “a transformative force–both awe‑inspiring and potentially perilous… Its true impact will hinge on disciplined, responsible innovation and robust collaboration across borders and sectors.”  

FIFAI II warns that AI‑driven operational disruptions, correlated trading behaviours and new credit risks could introduce “new challenges for financial instability.” 

The report stresses that responsible AI adoption is now necessary both for “competitive resiliency” and to defend against more sophisticated AI‑enabled threats. 

Governance: AI as a board‑level risk 

FIFAI II frames AI risk as a strategic issue. It notes that moving too quickly without proper risk management can lead to operational and consumer harms, while moving too slowly can create “missed opportunities and competitive disadvantages.” 

According to the report, boards and senior management should: 

  • improve AI literacy so leaders understand “AI capabilities, risks, and limitations”  

  • establish explicit AI executive oversight where it is not already in place  

  • embed horizon scanning and emerging‑risk assessment into standard risk practice  

  • keep control and governance frameworks “evergreen” as technologies such as agentic AI and quantum computing advance 

The forum bluntly records the view of one participant: “The biggest risk is not doing enough.” 

Cyber, fraud and identity: AI escalates the threat 

Security and cybercrime emerge as urgent concerns.  

According to Michael Barr of the US Federal Reserve Board of Governors, “Deepfake attacks have seen a twentyfold increase over the last three years.”  

The report says AI can enable convincing deepfakes with minimal information, often pulled from social media, and notes that Canada “lacks a universally adopted secure digital identity,” leaving onboarding and remote environments exposed. 

According to FIFAI II, a 2024 industry survey found that 91 percent of financial institutions globally are reconsidering voice‑verification systems due to AI voice‑cloning.  

Fraud‑as‑a‑Service lets criminals buy turnkey AI tools that “dramatically increase the scale, speed, and sophistication of financial fraud.” 

Anthropic reported disrupting a state‑sponsored effort to manipulate one of its models to autonomously attack corporate and government targets. 

The report urges institutions to “promote a culture of cyber vigilance,” strengthen identity and access management, and use AI itself to “enhance identification of, response to, and recovery from cyber attacks.” 

Third‑party concentration and technology supply chains 

FIFAI II highlights the growing risk of dependence on a small number of AI and cloud providers.  

It cites Parametrix’s estimate that the July 2024 CrowdStrike outage caused about US$5.4bn in losses for the Fortune 500 (excluding Microsoft), calling it an illustration of “the systemic impact of single points of failure.” 

The report notes that AI services often involve complex “nth‑party” chains, where failures at any layer can propagate across institutions.  

Even the largest Canadian firms may have “limited leverage regarding contractual terms, operational transparency, or remediation timelines.” 

Recommended responses include mapping deeper supply‑chain dependencies, setting concentration limits, developing exit and substitution plans, and testing scenarios that assume “correlated disruption across multiple providers.” 

Market volatility, labour and consumer outcomes 

FIFAI II warns that many AI‑powered trading models trained on similar data may move in concert, potentially intensifying short‑term volatility and creating “procyclical shifts in financial markets during periods of stress.”  

Agentic AI systems that “act autonomously, make multi‑step decisions, and trigger financial actions at machine speed” could amplify funding outflows and destabilise balance sheets during stress. 

On the real‑economy side, the report references IMF analysis that 60 percent of jobs in advanced economies will be affected by AI automation and Citi research that 54 percent of finance jobs face potential AI‑led displacement.  

It cautions that fast disruption could produce a “K‑shaped” economy and raise credit risks for affected households and businesses. 

Consumer‑facing AI intensifies the need for transparency and “Inclusion by Design.”  

FCAC Commissioner Shereen Benzvy Miller says this work focuses on innovation that is “grounded in fairness, transparency, and a strong commitment to protecting consumers.” 

AGILE: a concise roadmap 

To navigate these pressures, FIFAI II sets out the AGILE framework: 

  • Awareness – anticipate AI‑driven threats, from macro disruption to market volatility and disinformation, and build them into stress tests.  

  • Guardrails – maintain strong, adaptive controls over data quality, consumer protection and third parties, with clear accountability for AI outcomes.  

  • Innovation – deploy AI to improve fraud detection, cyber defence, compliance and operational efficiency.  

  • Learning – scale AI literacy for boards, executives, staff and consumers.  

  • Ecosystem Resiliency – strengthen information‑sharing, crisis‑response playbooks and common standards for critical third parties and digital identity. 

According to the report, “the greatest risk of AI is failing to act decisively.”