UK Government Prefers Context-specific Guidance Instead of Formal New Regulation for Artificial Intelligence Technology

April 12, 2023

Rapidly emerging artificial intelligence (AI) technology is poised to transform how businesses operate across almost all sectors, from social media to education to healthcare.

Globally, governments and regulators are starting to react to the potential risks, but also opportunities, that AI and machine learning models can bring. Earlier this month, data protection authorities in Italy, Canada and South Korea opened a series of investigations into data privacy issues related to OpenAI’s ChatGPT, with the Italian agency temporarily banning the use of ChatGPT in the country.

Against this backdrop, on 29 March 2023, the UK Department for Science, Innovation and Technology (DSIT) published a white paper setting out its proposed approach for regulating artificial intelligence. The white paper outlines five principles that should guide the use of AI in the UK and should be taken into account by regulators, including the Competition and Markets Authority (CMA). In the DSIT’s view, AI regulation should be “context-specific” because AI technology can be deployed in many different ways with varying degrees of risk. For instance, the risks and implications associated with AI being used in chatbot applications will differ from those associated with medical use-cases. The government therefore concluded that a one-size-fits-all approach would not be appropriate. Instead, existing regulatory bodies with long-standing expertise in different sectors and areas of regulation should lead on the implementation of the government’s framework by using existing regulatory tools and issuing guidance to industry. The DSIT is seeking feedback on its proposed framework by 21 June 2023.

Please click here to continue reading the Cleary Antitrust Watch blog.