The Board of the International Organization of Securities Commissions (IOSCO) has recently published its latest report on the rise of AI in capital markets. The significant work behind this report (the 2025 Report) represents a globally shared understanding of the challenges that the use of AI in financial products and services presents to investors, to the markets, and to financial stability.
The purpose of IOSCO's work
The 2025 Report is based on contributions from IOSCO members and from industry experts beyond IOSCO's membership, including self-regulatory organisations, affiliate members, trade associations and attendees at IOSCO roundtables held around the world. Its purpose is to inform regulatory thinking further and to assist those charged with mitigating the risks presented by AI, by generating feedback from other stakeholders, including ‘financial market participants, AI developers, academics, researchers, public policy experts, and other interested parties’.
Latest observations
The 2025 Report makes five key observations:
- The use of AI by firms in support of ‘decision-making processes’ and in ‘surveillance and compliance’ is increasing;
- Firms are making use of recent advancements in AI to ‘support internal operations and processes’;
- The most commonly cited risks associated with the use of AI in the financial sector include:
- ‘malicious uses of AI’,
- ‘model and data’ risks,
- ‘concentration, outsourcing and third-party dependency’ risks and
- risks around ‘interactions between humans and AI’;
- Industry practices are evolving but with different approaches being taken; and
- Regulatory responses are also evolving, but again with different approaches being taken.
More on all of this below.
What next?
The next phase of IOSCO's work in this fast-moving area will be to consider whether additional ‘tools, recommendations, or considerations’ are needed to address the ‘issues, risks and challenges posed by the use of AI in financial products and services’.
Background
The 2025 Report is not IOSCO's only work relating to the use of AI in the financial sector. IOSCO has published many reports, including a significant report in 2021 (the 2021 Report), which looked into the ‘use of AI by market intermediaries and asset managers’, highlighting the ‘transformative nature of AI…relating to investment strategies, operational efficiency, and the development of new financial products’ and identifying key challenges around:
- ‘governance and oversight’;
- ‘algorithm development, testing, and ongoing monitoring’;
- ‘data quality and bias’;
- ‘transparency and explainability’;
- ‘outsourcing’; and
- ‘ethical concerns’.
Since 2021, however, there have been significant advancements in AI technologies, in industry practice and in regulation. There have been ‘innovations and developments in theory, hardware, software, algorithmic efficiency, compute power, data, and end-user applications’. The 2025 Report is therefore intended to bring the combined thinking up to date, taking into account the many technological advancements made since 2021, and to look forward at the future of AI's role in the financial markets. It cites recent developments such as the emergence of large language models and the release of ChatGPT, and the ability of these technologies to enable convincing human-like interactions, as critical in prompting more financial institutions to seek to leverage them to drive operational efficiencies and to create market opportunities.
Don’t get left behind
Looking to the future, the 2025 Report notes that it is likely that, driven by ‘demand, investment, and competition’, some financial institutions will consider that ‘not adopting AI technologies or failing to do so quickly enough’ could in itself create risks and challenges.
Risk v opportunity
The 2025 Report also represents a revised assessment of the risks and opportunities that could arise through the use of AI technology in the financial markets. It examines the latest reported examples of current and proposed real-world use, and considers whether the existing regulatory frameworks do enough to preserve market integrity and protect investors, or whether more needs to be done to address both the known risks and the unknown or emergent ones.
Definition of AI
IOSCO does not seek to define AI but instead relies on a widely drawn ‘common understanding of the types of technologies referred to’, for which it adopts the OECD's definition of an AI system as:
‘a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment’.
This definition (or understanding) covers a diverse range of technologies that can analyse data (either from specifically curated data sets or from vast amounts of data drawn from a variety of sources), identify patterns, make predictions, extract insights, and generate other outputs (such as text, images and audio). It includes machine learning models, systems designed to perform specific tasks, deep learning models, GenAI systems, foundation models, general-purpose AI systems, and the further models and refinements that will no doubt evolve in the future.
Application of AI to financial products and services
IOSCO found that the analysis of the benefits and risks of using AI in financial services depends on a number of factors, including:
- the type of AI, including the design and the type of technology, that is being deployed;
- the use case or purpose for which the AI is being deployed;
- the environment in which it is being deployed;
- the way in which the AI is being deployed;
- whether there is going to be any human oversight of the deployment; and
- the potential for impact on investors and the markets.
Use cases
The use of AI in capital markets is not new. The 2021 Report recorded AI being used in:
- ‘advisory and support services’: using ‘simple, rules-based algorithms’ to generate draft advice or suggestions for human investment advisors to review;
- ‘risk management’: using machine learning-based systems to monitor credit, liquidity and other market risks;
- ‘client identification and monitoring’: using machine learning to automate client onboarding processes and associated checks, and to monitor for and detect fraud and cyber risks;
- ‘selection of trading algorithms’: using software to select trading strategies; and
- ‘portfolio management’: using supervised learning for pattern recognition and prediction models to assist trading and to optimise portfolio management.
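To make the last of these concrete, here is a purely illustrative sketch of the kind of supervised-learning model the ‘portfolio management’ item describes: a classifier trained on lagged returns to predict next-day price direction. Nothing here is drawn from either report; the data, features and parameters are all hypothetical.

```python
# Purely illustrative: a toy supervised-learning prediction model of the kind
# the 2021 Report describes under 'portfolio management'. All data, features
# and parameters are hypothetical, not taken from the report.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000))))

returns = prices.pct_change()
features = pd.DataFrame({f"lag_{k}": returns.shift(k) for k in range(1, 6)})
nxt = returns.shift(-1).rename("next_ret")

data = pd.concat([features, nxt], axis=1).dropna()
X, y = data[features.columns], (data["next_ret"] > 0)  # next-day direction
split = int(len(data) * 0.8)  # chronological split avoids look-ahead bias

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X.iloc[:split], y.iloc[:split])
print("out-of-sample accuracy:", model.score(X.iloc[split:], y.iloc[split:]))
```

The chronological train/test split is deliberate: shuffling the rows would leak future information into training, one of the model-risk pitfalls the reports allude to.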
In the period between the two reports, AI technologies have evolved and advanced, giving rise to ‘new possibilities’ and to an expanded range of potential uses. Through its work, IOSCO has gathered information indicating that ‘firms are investing in AI technologies and that these technologies are being explored, piloted, and adopted in various activities in capital markets’. IOSCO categorises current AI application by firms (meaning uses that firms are deploying or considering) into three broad areas:
- ‘internal operations and processes’: including the automation of tasks such as information extraction, transcription, translation and drafting, in support of human decision-making, in investment research and sentiment analysis, and in enhancing surveillance and compliance (particularly in anti-money laundering and counter-terrorist financing systems) capabilities;
- ‘client interactions’: primarily through the use of chatbots and virtual assistants, client query review and management, and in marketing and promotional activity; and
- ‘trading and investing product and process enhancements’: including algorithmic trading and high-frequency trading.
The overall position, based on the variety of market participants and the different jurisdictions involved in the gathering of IOSCO's information, is well summarised in this quote:
‘the uses of AI that were reported by IOSCO Member/SRO Survey respondents to be most observed across market participants, including broker-dealers, asset managers, and exchanges, were:
- Anti-Money Laundering and Counter Terrorist Financing (AML and CFT) (50%)
- Internal Productivity Support (50%)
- Market Analysis and Trading Insights (40%)’
Drilling down into this information in more granular detail, the 2025 Report identifies the details of many uses to which AI is being put in capital markets across the globe, including:
- performing ‘pattern recognition and anomaly detection in surveillance software’ (a minimal sketch of this technique follows this list);
- enhancing ‘the interpretation of unstructured data...to facilitate name screening and news analysis’;
- analysing client and other behaviours;
- spotting and researching red flags, threats, anomalies and suspicious activity, and enhancing associated report writing;
- forecasting asset prices and liquidity, and predicting market trends;
- for ‘internal productivity…internal operations…software development, back-office operations, automation, compliance and human resources’;
- experimenting with functionality in ‘speech-to-text and video recognition for note taking and meeting summarization’ and ‘translation capabilities to facilitate cross-language communication’;
- data and intelligence sharing between financial institutions; and
- portfolio construction, monitoring, re-balancing, and customising.
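As trailed in the first item above, anomaly detection of the kind embedded in surveillance software can be as simple as an unsupervised outlier detector run over per-order features. The sketch below is purely illustrative: the features (order size, price deviation, inter-order gap) and the data are hypothetical stand-ins for real surveillance inputs.

```python
# Purely illustrative: unsupervised anomaly detection of the kind used in
# trade-surveillance software. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# columns: order size, price deviation, inter-order gap (seconds)
normal = rng.normal(loc=[100, 0.0, 1.0], scale=[10, 0.01, 0.2], size=(5000, 3))
spoof = rng.normal(loc=[5000, 0.05, 0.01], scale=[500, 0.01, 0.005], size=(5, 3))
orders = np.vstack([normal, spoof])

detector = IsolationForest(contamination=0.001, random_state=0).fit(orders)
flags = detector.predict(orders)  # -1 marks outliers for analyst review
print("flagged orders:", np.where(flags == -1)[0])
```

In practice the flagged orders would be routed to a human analyst, consistent with the human-oversight theme that runs through the 2025 Report.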
Advantages
While it seems that much of the work in AI deployment across the capital markets is currently happening in the experimental and exploratory phases of development, there is already evidence emerging of the advantages that AI can bring. The 2025 Report includes reference to these examples:
- Surveillance and Fraud Detection: 'While traditional, rule-based approaches continue to be used by broker-dealers, some respondents noted that these approaches are limited by the complexity of markets and the constant evolution of market behavior and manipulative practices, and that the use of AI systems for surveillance and fraud detection could help overcome some of these challenges and could potentially offer higher detection rates than traditional approaches.’;
- Broker-dealing: evidence of broker-dealers using Retrieval Augmented Generation, or RAG, a more complex form of AI architecture, ‘to improve response accuracy by incorporating internal knowledge bases and referencing source documents’ (see the sketch after this list);
- Asset management: ‘AI was reportedly used to enhance activities across the asset management lifecycle, such as for data synthesis, pattern and anomaly detection and monitoring, prediction and forecasting, and process automation’;
- Investment research: ‘asset managers and investment research firms, have used AI to enhance the investment selection process. Typical uses to augment human decision making include market sentiment analysis, pattern detection, data summarization, and process automation’; and
- Transaction processing and automation by financial exchanges: ‘The exchange’s AI model leverages reinforcement learning to evaluate the duration of the holding period based on local market conditions, and the exchange claims that applying this technique could achieve higher fill rate and lower markouts (a measure of price movement in a security at some defined time interval following a trade)’.
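The RAG architecture mentioned in the broker-dealing item pairs a retriever over an internal knowledge base with a language model that answers only from the retrieved passages. Below is a minimal sketch of the retrieval half: TF-IDF stands in for learned embeddings to keep the example self-contained, the knowledge base is invented, and the generation call is omitted because the report names no specific model or API.

```python
# Purely illustrative: the retrieval half of a Retrieval Augmented Generation
# (RAG) pipeline over a hypothetical internal knowledge base. A production
# system would use learned embeddings and a vector store.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [  # invented passages, not from any real firm
    "Policy 12: client orders above $1m require enhanced pre-trade checks.",
    "FAQ: settlement for equity trades is T+1 in this market.",
    "Procedure 7: escalate suspected spoofing to the surveillance desk.",
]

vectorizer = TfidfVectorizer().fit(knowledge_base)
doc_vectors = vectorizer.transform(knowledge_base)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [knowledge_base[i] for i in scores.argsort()[::-1][:k]]

query = "What is the settlement cycle for equities?"
context = "\n".join(retrieve(query))
# The grounded prompt below would be passed to a language model; grounding
# answers in referenced source documents is what 'improves response accuracy'.
prompt = f"Answer using only this context, citing the source:\n{context}\n\nQ: {query}"
print(prompt)
```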
Novel applications of AI
Through its research, IOSCO identified a range of novel applications that remain in the exploratory phase: an evolving and expanding set of applications being tested but not yet deployed. These include:
- using GenAI to ‘streamline the process of developing new trading strategies by searching through research papers for relevant topics, generating economic rationale for various trading hypotheses, generating code to implement these trading hypotheses, and conducting back-testing of the strategies on portfolios’ (a toy back-testing sketch follows this list);
- using GenAI to ‘analyze financial reports, news, and social media to generate faster and deeper insights and capabilities for investment firms’;
- using specialised large language models to ‘perform more advanced investment research tasks and report generation’; and
- using large language models ‘to automate the publication of investment research…to gather relevant data as input to compile into a draft investment research paper by adopting the writing style of certain investment analysts’.
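As trailed above, the back-testing step in the first item is the most mechanical part of that pipeline. A toy illustration follows: the moving-average crossover strategy, its parameters and the synthetic data are all hypothetical, not taken from the report.

```python
# Purely illustrative: back-testing a toy moving-average crossover strategy
# on synthetic prices. Strategy, parameters and data are all hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 2000))))

fast, slow = prices.rolling(20).mean(), prices.rolling(100).mean()
position = (fast > slow).astype(int).shift(1)  # trade on yesterday's signal

strategy_returns = position * prices.pct_change()
cumulative = (1 + strategy_returns.fillna(0)).cumprod()
print(f"strategy return: {cumulative.iloc[-1] - 1:.1%}")
print(f"buy-and-hold:    {prices.iloc[-1] / prices.iloc[0] - 1:.1%}")
```

The one-day lag on the position matters: trading on the same bar the signal forms would overstate performance, exactly the kind of subtle error automated strategy generation has to guard against.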
Risks, issues and challenges
Expanding upon the most commonly cited risks noted above, IOSCO's research identifies the following key risk areas for the financial markets:
- Malicious use: the most cited risk category, covering cybersecurity, data privacy and protection, fraud, market manipulation and deepfakes. In more granular detail:
- Cybersecurity: AI systems have the potential to exacerbate existing cybersecurity risks and create new ones, including risks inherent to AI itself and risks arising where AI is used to ‘plan, enhance or automate cyberattacks’, with bad actors harnessing the power of AI to make ‘their threats more sophisticated and challenging to detect’. Examples relevant to financial firms include ‘harder to detect phishing scams’, malware designed ‘to steal data and evade detection’, ‘creating or manipulating identification documents, images, or video that are used to convince a firm to disclose customer data or grant access to customer accounts’, and using deepfakes to ‘steal information or funds, or to damage individual and firm reputations and security’.
- Targeting AI: AI technologies may have inherent weaknesses that make them vulnerable to attacks that could manipulate them, affect their output, and compromise their internal data (including sensitive data).
- Fraud and scams: AI technologies have the potential to provide bad actors with cheaper and more sophisticated ways to conduct fraud, launch cyberattacks and commit other forms of misconduct. As AI outputs become ‘more convincingly humanlike or realistic, bad actors likely will exploit it to carry out schemes to defraud investors or to engage in other misconduct related to the financial industry’ including ‘misinformation, disinformation, or malicious content, including that which mimics real persons (deepfakes) to lure investors into fraudulent investment schemes or to facilitate other misconduct’. AI could also enhance existing common scam techniques by ‘increasing their reach, efficiency, and effectiveness’ and by making them ‘substantially more pervasive and convincing when augmented with GenAI technologies’. Additionally, as AI technologies advance, their output is likely to ‘become more convincing and hence it may be more difficult to differentiate real content from that which is AI-generated’.
- Model and data considerations: the risks identified in this category relate to matters such as interpretability and explainability, bias, data drift, complexity, resilience and hallucinations. In more granular detail:
- Explainability: Complex AI systems that are ‘difficult or impossible to comprehend or explain’ present particular difficulties in the context of the disclosure requirements applicable to investment products and services, because of the possibility that AI-generated disclosures could be ‘ineffective, difficult to comprehend, incomplete, or inaccurate’. Such disclosures could lead investors to take decisions that are unsuitable for them.
- Drift and ‘confabulations’: Unsuitable investment decisions could also result from the use of AI technologies that do not respond to changing market conditions or to unforeseen market events, or which generate hallucinations.
- Bias: AI outputs that stem from a form of bias could lead to the ‘unfair treatment of certain groups of investors’. It is possible that source data may contain forms of bias that lead to discriminatory outputs ‘favoring or disfavoring a particular group of investors and exacerbating inequalities’.
- Data: AI technologies may be vulnerable to ‘data quality issues’, and risks could arise from data which is ‘inaccurate, imprecise, outdated, irrelevant, and harmful’. Issues in training data quality could lead to output that is similarly inaccurate, inadequate, erroneous, unreliable, or otherwise poor. AI technologies that have been trained on synthetic data could fail to perform in ‘real-world conditions’.
- Concentration, outsourcing and third-party risks: IOSCO was unable to get a clear picture of the ‘range of AI model types that are being used across financial services, including the role of proprietary versus open models’ but was able to make some observations:
- Concentration risk: There are a variety of different concentrations that can occur, including reliance by financial services firms on a small number of providers, and reliance on certain datasets. These concentrations can create the potential for single points of failure, and can amplify other risks as an asset (be it a technical provision or service, or a dataset, for example) becomes critical: ‘There is a risk of high concentration in a small number of tech providers in the financial sector, given the resource demands of AI development in terms of development costs, computing capacity, access to data, talent, and existing market penetration. A concentration of AI-related products and services vertically within a dominant tech provider can introduce correlated risks’.
- Outsourcing and third-party dependency risks: There are a number of inter-related risks in this category, including: the fact that most technology providers will not be part of the regulated financial community; that the use of AI technologies by regulated firms brings with it ‘third-party outsourcing risk and dependencies’ (which will include risks noted above such as cyber, model and data risks); and reliance on a concentrated number of providers, with associated resiliency risks. Some firms have reported ‘difficulties in obtaining information from a third party about its AI technology—models, and training data in particular—to assess and manage the risks of using the AI technology’. Further, ‘Vendors may not be able to describe how a complex model processes data. They may be unwilling to reveal information about a model or the data used to train a model, given competition concerns and exposure to liability if the data was obtained without appropriate consents’.
- Human-AI interaction: IOSCO’s research has identified a number of ‘risks that stem from the interaction of humans and AI systems’:
- Accountability and non-compliance: In order to protect against investor and market harms, the use of AI technologies by financial firms will need to be supervised with appropriate risk management and governance policies, and with appropriate procedures and controls. IOSCO has identified associated risks relating to the potential difficulties of ‘identifying and holding accountable responsible persons’ in the event that harms arise from the use of AI technologies.
- Oversight and expertise: Firms may face difficulties with risk management and governance if they experience ‘talent deficits’ and cannot recruit and retain staff with the required expertise. There will be an evolving challenge for firms to ensure that their risk management and governance systems can keep pace with technological advancements and associated emergent risks.
- Technology over-reliance: There is a risk of users gaining overconfidence in the capabilities of AI technologies and placing excessive trust in their outputs, which could in turn lead to failures of oversight, inadequate risk management, and the ‘degradation of human skillsets over time’. In relation to critical market participants, IOSCO observes that these ‘could become excessively dependent on AI to handle mission critical tasks that may impact key infrastructures’.
The Future
The 2025 Report notes that there are currently many unknowns: ‘knowledge gaps’ where there is little in the way of regulatory publications, and where regulators are still evolving their own understanding of how AI technologies work and of the capabilities they have. There is therefore an acknowledged risk that ‘data and knowledge gaps remain and may widen as technological advances may outpace regulatory assessment’. Given the possibility of limited regulatory (i.e. both regulator and regulated firm) understanding of complex AI technologies, the 2025 Report notes a number of additional challenges for the financial system, including:
- the potential for difficulties in untangling where and how AI systems are being used once they have been embedded into the broader financial system;
- the increasing challenge of managing risks as AI technological complexities increase;
- risks presented by the growing interconnectedness of the financial ecosystem and the associated shared reliance on ‘technology, infrastructure, software and data’, where the actions of one actor or the vulnerabilities of one system can cause disruption and potentially cascading effects on others, amplifying risks and leading to potential systemic risks and instabilities;
- the potential for commonly used technologies to drive large numbers of market participants to ‘make the same decisions at the same time’; for example, if commonly used models responded to a market shock in a similar way, this could generate an adverse market event; and
- the risk of AI technologies learning to co-ordinate their behaviours and optimise against intended objectives: ‘research has shown that even when unintended, multiple black box models will eventually learn to engage in collusive behavior to maximize their profits’.
Governance
IOSCO’s research has identified that firms are evolving their approaches to the ‘development, deployment, and maintenance of AI systems’ and reports evidence of diverse approaches including:
- the creation of specific AI risk management and governance frameworks;
- bespoke AI policies, procedures and controls;
- adapting and incorporating AI into existing frameworks;
- implementing AI-specific independent audit functions;
- deploying AI-focused education and training throughout ‘a much broader population of employees’;
- broadening risk management and governance teams to include experts with a wider base of relevant expertise including AI, legal, data, compliance and risk expertise;
- ensuring a senior management role for an AI expert, possibly a ‘Chief AI Officer’, to ensure the ‘right “tone from the top”’;
- the creation of ‘Centers of Excellence’ that bring together the knowledge and skills that a firm needs to develop expertise, assess and analyse, identify and evaluate, design, build and test, maintain and monitor, and place controls around its use of AI technologies; and
- deploying sandbox environments to enable ‘experimentation without the possibility of data leakage or client harm’.
IOSCO has been able to identify that ‘larger firms in the financial sector’ are using ‘risk management and governance frameworks’ that incorporate elements of:
- Transparency: ensuring that users and customers have ‘accurate and complete disclosure around the use of AI’ associated with the firm’s financial products and services;
- Reliability, robustness and resilience: ensuring that AI systems perform ‘consistently, reliably, and as intended’;
- Investor and market protection: ensuring that the use of AI in the financial sector is subject to applicable protection frameworks;
- Fairness: mitigating against bias and discrimination;
- Security, safety and privacy: implementing appropriate measures around data protection;
- Accountability: ensuring clear roles and responsibilities for the use of AI by the firm;
- Risk management and governance: implementing effective mechanisms to oversee the use of AI by the firm including strategy, training, implementation and monitoring; and
- Human oversight: ensuring that there is a human in the loop: ‘AI systems should be used as a tool to augment, and not replace, human decision making and judgment’.
Conclusions
IOSCO’s important work in this area is far from over. In terms of conclusions from the 2025 Report (on which IOSCO remains open for input from ‘the public, including financial market participants, AI developers, academics, researchers, public policy experts, and other interested parties’ until mid-April), the following are significant take-away points for the financial services sector:
- development and adoption of AI technologies across the financial services sector is accelerating;
- there exists the potential for efficiencies, opportunities and other benefits;
- there are known and emerging issues, risks and challenges and regulatory attention needs to remain focused on these;
- guidance to market participants in the form of expected standards can help to address risks;
- additional tools such as recommendations, considerations, and good practice guides, may also assist both regulators and market participants;
- there is likely to be a need for education for investors; and
- information sharing between regulators and between financial market participants could be useful and helpful.
If you would like to discuss how current or future regulations impact what you do with AI, please contact me, Tom Whittaker, or Martin Cook.
As the 2025 Report observes: ‘While the use of AI technologies in capital markets is not a new phenomenon, AI technologies have recently experienced significant innovations, investment, and interest, for which generative AI is a key gamechanger. As market participants explore and test new possibilities, and as AI technologies continue to advance, the range of AI uses in capital markets will continue to expand.’
https://www.iosco.org/news/pdf/IOSCONEWS761.pdf