UK government to adopt 'light touch' regulations around AI as concrete legislation currently tricky

AI has the ability to pass exams and write poems, but it can also spread misinformation. The Science, Innovation and Technology Secretary said: "If we legislate now, it will be out of date."

The government has published plans for how it wants to regulate AI technology, which it says will "turbocharge" the growth of AI in the UK while countering the potential risks that rapidly emerging machine intelligence poses to society.

The regulations will apply to all applications of AI, including powerful "language models" like the headline-grabbing ChatGPT and image-generating software like Midjourney.

These algorithms' ability to pass exams and write poetry, as well as to generate misinformation and fake images, has instilled awe and anxiety in equal measure.

"We're not denying the risks," said Science, Innovation and Technology Secretary, Michelle Donelan. "That's why we've got a proportionate framework in terms of this regulatory approach, one that can help the UK to seize the opportunities."

Ms Donelan spoke to Sky News during a tour of UK AI company DeepMind, now owned by Google. Last year, DeepMind used its AlphaFold AI to solve the structure of almost every known protein, a landmark moment for understanding biology that could lead to faster and safer drug development.

AI has huge potential to increase the productivity of businesses, improve access to learning and public services, and revolutionise healthcare. The government claims the sector was worth £3.7bn to the UK economy last year.

And it wants that to grow by offering AI companies a regulatory environment with less legal and administrative red tape than in rival economies.

So, it's not proposing new laws. Instead, it's looking to existing regulators, like the Health and Safety Executive and the Competition and Markets Authority, to apply key principles around safety, transparency, and accountability to emerging AI.

In a very Silicon Valley-sounding move, the government is even offering a £2m "sandbox" in which AI developers can test how regulation will be applied to their products before releasing them to market.

But is a "light touch" regulatory approach a mistake, in the face of looming concerns around AI that could either run out of control or be misused?

Examples are already emerging of text- and image-based AI's ability to generate misinformation, like entirely fake images of the arrest, and then triumphant escape, of Donald Trump, or of the Pope sporting a white puffer jacket.

That's not to mention AI being used by hackers or scammers to write code for computer viruses or peddle ever more convincing online frauds.

In the face of that, the EU is proposing strong AI legislation and a "risk-based" approach to regulating AI.

'If we legislate now, it will be out of date'

The UK government makes the not unreasonable point that it's hard to know what an AI law should say, given we don't know what the AI of tomorrow will look like.

"If we legislate now, it will be out of date," said Ms Donelan. "We want a process that can be nimble, can be agile, can be responsible can prioritise safety can prioritise transparency, but can keep up with the pace of the change that's happening in this sector."

The government says it doesn't rule out the possibility of legislation to regulate AI in the future, and Ms Donelan is unapologetic about trying to make the UK attractive to AI companies.

"Shouldn't the UK be leading the way? Shouldn't we be in securing the benefits for our public services for our NHS or our education system for our transport network?" she says.

But the government is already finding it very hard to protect the privacy and safety of children online. When it comes to AI, its regulatory battles with Big Tech are probably only just beginning.

"Many [Big Tech companies] to me seem honestly to want to do the best for humanity," says Professor Anil Seth, a cognitive scientist at the University of Sussex. "Unfortunately, markets don't work that way and companies are rewarded for their share price."

Many experts point to the fierce battle right now between Google, which is rushing to release its AI chatbot Bard, and Microsoft, which has already built OpenAI's GPT-4 language model into its Bing search engine.

These tools can emulate and interpret natural human language, or "understand" images, so well that even their developers appear unsure of how they might be used. Yet they've been released publicly for us to try. A commendably open and transparent way of introducing AI to the world, or a recipe for disaster?

"Good intentions are not enough," says Professor Seth. "We do need good intentions coupled with wise and enforceable regulation."

Source: Sky News