MAS and FCA Launch Joint Partnership for Responsible AI in Finance

  • Bobsguide
  • November 19, 2025
  • 4 minutes

The future of global finance is increasingly being written in code, specifically in the algorithms and large language models that drive Artificial Intelligence (AI). Recognising the need for a unified, secure, and trustworthy cross-border framework, the Monetary Authority of Singapore (MAS) and the UK’s Financial Conduct Authority (FCA) have formalised a pivotal agreement: the UK-Singapore AI-in-Finance Partnership.

Announced at the Singapore FinTech Festival, this strategic alliance is a landmark move that aims to promote the safe and responsible integration of AI, enabling fintech providers and financial institutions (FIs) in both markets to scale their solutions more effectively across two of the world’s premier financial hubs. For a global audience of fintech professionals and FIs based in the UK and US, this partnership signals a crucial effort to define the boundaries of trustworthy AI.

The Strategic Imperative: Safety and Scalability

The partnership comes as AI in finance reaches a critical inflection point. Kenneth Gay, Chief FinTech Officer at MAS, noted that AI is moving “from experiments to enterprise use, and from individual models to connected, agentic systems”. As this shift accelerates, the regulatory priority must be to ensure adoption is “both safe and scalable”.

The risks inherent in AI—including issues of bias, opacity, model drift, and operational resilience—necessitate a coordinated global response. Without harmonised standards, firms face fragmentation and increased compliance costs when attempting to deploy innovative AI solutions in multiple jurisdictions.

Jessica Rusu, Chief Data, Information and Intelligence Officer at the FCA, framed the partnership as a commitment to “championing safe and responsible AI innovation across UK and Singapore markets”. The goal is explicit: to help firms “grow through collaboration” and “shape the future of responsible AI innovation in finance”.

Mechanisms for Cross-Border Innovation

The new partnership is not merely a declaration of intent; it establishes concrete, operational mechanisms for regulatory alignment and innovation scaling:

  • Joint Testing and Regulatory Insights: The MAS and FCA will collaborate on the joint testing of AI solutions, sharing regulatory insights and hosting discussions on responsible AI adoption. This allows FIs to test technologies against converging cross-border standards in supervised environments, reducing time-to-market and compliance uncertainty.

  • Leveraging Existing Sandboxes: The partnership will be built on each authority’s existing AI industry programmes: MAS’s PathFin.ai and the FCA’s AI Spotlight. These programmes will facilitate the cross-sharing of high-quality AI solutions and related research, creating a recognised channel for firms to enter both markets.

  • FCA’s New Singapore Presence: Significantly, the FCA is strengthening its international footprint by establishing its first formal presence in Singapore, appointing a Financial Services Attaché at the British High Commission. This is designed to promote the UK as a global hub for financial services and strengthen the regulatory relationship with MAS.

Case Context: Defining Trustworthy AI

The alliance directly addresses the core supervisory focus areas for AI in finance:

  1. Model Risk Governance and Accountability

  2. Bias Detection and Data Quality Controls

  3. Explainability Standards (especially for customer-facing tools)

  4. Operational Resilience and Stress Testing

Real-world context for this collaboration is demonstrated by related regulatory developments. For instance, MAS has simultaneously released a consultation paper proposing new Guidelines on AI Risk Management for FIs. These guidelines—which emphasise the role of the board and senior management in overseeing AI risk, the need for comprehensive AI inventories, and controls over data management, fairness, and transparency—provide a clear supervisory expectation that will likely inform the MAS/FCA joint framework.

The collective work streams in the partnership, such as joint testing, will be particularly valuable in rapidly advancing fields like credit assessment, compliance monitoring, and automated surveillance, where robust explainability and governance are paramount. By collaborating, the MAS and FCA are creating an “important bridge” for regulators, FIs, and innovators to work together on trustworthy AI, ultimately aiming to set global benchmarks for its responsible use in finance.