Artificial Intelligence in the Financial Services Sector: UK Regulators Publish Feedback Statement

October 30, 2023

On October 26, 2023, the Bank of England, including the Prudential Regulation Authority (together, the “Bank”), and the UK Financial Conduct Authority (the “FCA”), published a Feedback Statement relating to Artificial Intelligence (“AI”) and Machine Learning (the “Feedback Statement”).[1]

The Feedback Statement summarises the responses received to the regulators’ earlier discussion paper published in October 2022 (the “Discussion Paper”).[2]

While the regulators emphasise that the Feedback Statement does not include specific policy proposals or commitments to any specific regulatory approach, the Discussion Paper and the Feedback Statement are important indicators of the potential direction that UK financial-services regulation will take in respect of AI. This is true both in relation to overarching regulatory approaches (such as the aim to achieve cross-sectoral and cross-jurisdictional alignment) and in specific areas requiring particular consideration (such as consumer protection and the significance of data).

This memorandum sets out the context against which the Feedback Statement has been published, and the key themes emerging from it.

I. Context

    In October 2020, the Bank and the FCA established the AI Public-Private Forum (the “AIPPF”). The AIPPF’s aim was to bring together a diverse group of experts from across financial services, the tech sector, and academia, to further dialogue on AI innovation and safe adoption within financial services.

    In February 2022, the AIPPF published its final report,[3] exploring the various barriers to adoption, challenges, and risks related to the use of AI in financial services. The report focuses on three core areas:

    • Data, with an emphasis on the importance of data quality, on the understanding of data attributes (including provenance, completeness, and representativeness), and on ongoing documentation, versioning, and monitoring. The final report also observes that the use of unstructured or ‘alternative’ data in AI and machine-learning contexts can increase risks and issues relating to data (e.g., quality, provenance, and sometimes legality).
    • Model risk, the key challenge in this respect being complexity, for example as regards inputs (e.g., because of a large number of input layers/dimensions), relationships between variables, models themselves (e.g., deep learning models), or outputs (e.g., actions, algorithms, or quantitative or unstructured outputs). An important factor is explainability, with a focus not only on the features or parameters of models, but also on engagement with, and communication to, consumers.
    • Governance, highlighting that existing frameworks (e.g., data governance, model risk management, operational risk management) provide a useful starting point for governance considerations, but should reflect the risks and materiality of any specific use case. The report also considers that governance standards should be set by a centralised body within a firm, but should cover the full range of functions and business units. Moreover, an appropriate level of understanding and awareness of AI is required throughout an organisation employing it.

    In response to the AIPPF final report, the Bank and the FCA published the Discussion Paper in October 2022 to deepen dialogue on how AI may affect their respective objectives for the prudential and conduct supervision of financial firms. Specifically, the Discussion Paper set out the regulators’ views, and sought input, on issues such as (i) the potential benefits, risks, and harms related to the use of AI in financial services; (ii) how the current regulatory framework could apply to AI; (iii) whether additional clarification may be helpful; and (iv) how policy can best support further safe AI adoption.

    On October 26, 2023, the Bank and the FCA published the Feedback Statement. The Feedback Statement aims to acknowledge the responses to the Discussion Paper, identify themes, and provide an overall summary in an anonymised way. Notably, however, the regulators emphasise that the Feedback Statement does not include policy proposals, nor does it signal how they are considering clarifying, designing, and/or implementing current or future regulatory proposals on this topic.

II. Overarching regulatory approach

    The Discussion Paper explains that, against the background of the regulators’ priorities and objectives, they have a close interest in the safe and responsible adoption of AI in UK financial services. Specifically, they wish to avoid introducing barriers to entry such as unnecessarily burdensome rules.

    While the regulators generally take a technology-neutral approach to regulation (meaning that their core principles, rules and regulations neither prohibit nor mandate any specific technologies), they are aware that risks may relate to the use of specific technologies. In respect of AI, the Discussion Paper notes that novel challenges can arise in particular in the areas of data, models and governance.

    Against this background, the Discussion Paper raised the question whether a sectoral regulatory definition of AI should be included in the supervisory authorities’ rulebooks, or whether there were equally effective alternative approaches.

    While the Discussion Paper noted that distinguishing between AI and non-AI was something that regulators and authorities abroad have generally found useful, it also noted the challenges of laying down a definition that remains up-to-date, is neither too broad nor too narrow, and creates no incentives for firms to misclassify AI to reduce regulatory oversight.

    According to the Feedback Statement, respondents agreed with the latter concerns, considering that a regulatory definition of AI would not be useful. Instead, most respondents considered that a technology-neutral, outcomes- or principles-based approach would be preferable. This could take the form of high-level principles that would allow firms to tailor the identification, assessment, and management of risks to the purpose, function, and outcomes of each specific AI use case or application. This should be underpinned by a focus on relevant outcomes (for consumers and markets) rather than on specific technologies. Moreover, respondents considered that the approach to AI should be proportionate to the risks associated with, or the materiality of, a given AI application.

    This approach is notably different from the one adopted in the proposed EU AI Act, which includes a definition of ‘Artificial Intelligence’. However, the difficulties in ascertaining the proper scope of such a definition may have been experienced by EU lawmakers as well: the Commission’s proposal to define ‘Artificial Intelligence System’ partly by reference to certain techniques and approaches[4] was rejected both by the Council[5] and the European Parliament.[6]

III. Potential benefits and risks

    The Discussion Paper set out the regulators’ initial thoughts as to the potential benefits and risks that AI would involve. These were grouped under headings according to the regulators’ key objectives, namely: consumers, competition, firms, and financial stability/market integrity. While the Feedback Statement adopts a similar approach of laying out a landscape of relevant considerations, certain themes nonetheless emerge as key.

    Regarding regulatory priorities, the majority of respondents considered consumer protection to be an area for regulators to prioritise. This is because of some of the specific risks AI could create, such as bias, discrimination, a lack of explainability and transparency, and the exploitation of vulnerable consumers. These risks are generally seen as particularly acute in respect of consumers with protected characteristics.

    Respondents generally regarded inadequate data, specifically data bias or the unavailability of sufficient key data, as the origin of such consumer harms. To mitigate the risk of consumer harm, firms therefore need to ensure that the data used to build an AI system is sufficiently representative, diverse, and free from bias.

    Regarding the question of which metrics would be most relevant when assessing the benefits and risks of AI in financial services, the responses to the Discussion Paper did not show a clear consensus. However, two categories of metrics were widely seen as important: (i) metrics focused on consumer outcomes, and (ii) metrics focused on data and model performance.

IV. Existing regulation and scope for improvement

    The final section of the Discussion Paper focused on the current legal requirements and guidance that may be relevant to regulated firms in connection with the use of AI. Once more, the discussion was grouped under headings according to the regulators’ objectives and remits.

    While again the responses to the Discussion Paper reflect a wide spectrum of opinions, the Feedback Statement draws out a number of points that the regulators consider to be of key significance.

    A key theme emerging from respondents’ feedback is that greater coordination, alignment and consistency across different regulators, sectors and possibly jurisdictions would be desirable. This is because the regulatory landscape is complex and fragmented with respect to AI.

    In particular, in respect of data regulation, respondents considered that current frameworks are often insufficient or not entirely clear in their application to AI. Accordingly, greater regulatory alignment would be useful in addressing data risks, especially those related to fairness, bias, and the management of protected characteristics.

    Other areas where additional guidance was considered to be potentially helpful include outsourcing and the use of third-party models and data, as well as certain aspects of risk management relating to models with AI characteristics.

    Regarding governance considerations, respondents considered that a joined-up approach across business units and functions, in particular closer collaboration between data management and model risk management teams, would be helpful to mitigate AI risks. The reason for this is that AI systems can be complex and cut across many areas of a firm. Regarding the adequacy of existing regulatory frameworks for governance, however, most respondents thought that existing structures were sufficient to address AI risks. In particular, creating a new Prescribed Responsibility for AI to be allocated to a Senior Management Function was generally not considered helpful for enhancing effective governance of AI.

    As to the regulators’ approach to this area, responses emphasised the importance of collaborating, and/or setting up working groups, with industry, academia and civil society. For example, initiatives such as the AIPPF have been useful and could serve as templates for ongoing public-private engagement. Any guidance the regulators create should be “live”, i.e., periodically updated to keep pace with rapidly changing developments in AI technology. Moreover, guidance should generally be “practical or actionable”, possibly including best-practice examples.

    This article was republished in the March edition of The Journal of Robotics, Artificial Intelligence & Law.


    [1] The Feedback Statement (FS2/23) is accessible here.

    [2] The Discussion Paper (DP5/22 on Artificial Intelligence and Machine Learning) is accessible here.

    [3] The AIPPF Final Report is accessible here.

    [4] See the European Commission’s AI Act Proposal (accessible here), Article 3(1) and Annex I.

    [5] See the Council’s General Approach, accessible here.

    [6] See the European Parliament’s negotiating position, accessible here.