AI could help produce deadly weapons that 'kill humans' in two years' time, Rishi Sunak's adviser warns
Artificial Intelligence (AI) could become powerful enough to "kill many humans" in just two years' time, Rishi Sunak's adviser has warned.
Matt Clifford expressed concern over the lack of global regulation for AI producers and said that if left unregulated, they could become "very powerful" and difficult for humans to control, creating significant risks in the short term.
He made the comments in a TalkTV interview, citing the potential for AI to create dangerous cyber and biological weapons that could lead to many deaths.
Such concerns are shared by many experts in the field, as evidenced by a letter published last week, which urged that mitigating the risks of AI be treated as a global priority on a par with pandemics or nuclear war.
The letter warning of the dangers of artificial intelligence was signed by top executives from leading companies including Google DeepMind and Anthropic.
Geoffrey Hinton, popularly known as the "godfather of AI", also endorsed the letter, warning that if AI falls into the wrong hands, it could be catastrophic for humanity.
Mr Clifford, who is the chairman of the Advanced Research and Invention Agency (ARIA), is currently advising the prime minister on the development of the government's Foundation Model Taskforce, which focuses on investigating AI language models such as ChatGPT and Google Bard.
"I think there are lots of different types of risks with AI and often in the industry we talk about near-term and long-term risks, and the near-term risks are actually pretty scary," Mr Clifford told TalkTV.
"You can use AI today to create new recipes for bioweapons or to launch large-scale cyber attacks. These are bad things.
"The kind of existential risk that I think the letter writers were talking about is... about what happens once we effectively create a new species, an intelligence that is greater than humans."
Mr Clifford acknowledged that the prediction of computers surpassing human intelligence within two years was at the "bullish end of the spectrum", but said that AI systems are improving rapidly and becoming increasingly capable.
During an appearance on the First Edition programme on Monday, he was asked what probability he would give to the chance of humanity being wiped out by AI, replying: "I think it is not zero."
He continued: "If we go back to things like the bioweapons or cyber (attacks), you can have really very dangerous threats to humans that could kill many humans - not all humans - simply from where we would expect models to be in two years' time.
"I think the thing to focus on now is how do we make sure that we know how to control these models because right now we don't."
The tech expert added that AI production needed to be regulated on a global scale - not only by national governments.
The warnings on AI come as apps using the technology have gone viral, with users sharing fake images of celebrities and politicians, while students use ChatGPT and other large language models to generate university-grade essays.
AI is also being put to positive use, including life-saving tasks such as algorithms analysing medical images from X-rays to ultrasounds, helping doctors to identify and diagnose diseases such as cancer and heart conditions more accurately and quickly.
If harnessed in the right way, Mr Clifford said AI could be a force for good.
"You can imagine AI curing diseases, making the economy more productive, helping us get to a carbon-neutral economy," he said.
But the Labour Party has been urging ministers to bar technology developers from working on advanced AI tools unless they have been granted a licence.
Shadow digital secretary Lucy Powell, who is set to speak at TechUK's conference today, said AI should be licensed in a similar way to medicines or nuclear power.
"That is the kind of model we should be thinking about, where you have to have a licence in order to build these models," she told The Guardian.
- Sky News