The rapid rise of artificial intelligence (AI) is raising concerns not only among societies and lawmakers, but also among some of the tech leaders at the heart of its development.
Some experts, including the ‘godfather of AI’ Geoffrey Hinton, have warned that AI poses a risk of human extinction comparable to that of pandemics and nuclear war.
From the boss of the firm behind ChatGPT to the head of Google’s AI lab, over 350 people have said that mitigating the “risk of extinction from AI” should be a “global priority”.
While AI can perform life-saving tasks, such as algorithms analysing medical images like X-rays, scans and ultrasounds, its fast-growing capabilities and increasingly widespread use have raised concerns.
We take a look at some of the main ones – and why critics say some of those fears go too far.
Disinformation and AI-altered images
AI apps have gone viral on social media sites, with users posting fake images of celebrities and politicians, and students using ChatGPT and other large language models to generate university-level essays.
One broad concern around AI’s development is the potential for AI-generated misinformation to cause confusion online.
British scientist Professor Stuart Russell has said one of the biggest concerns is disinformation and so-called deepfakes.
These are videos or photos of a person in which their face or body has been digitally altered so they appear to be someone else – typically used maliciously or to spread false information.
Prof Russell said that while disinformation has long been used for “propaganda” purposes, the difference now is that, using Sophy Ridge as an example, he could ask the online chatbot GPT-4 to try to “manipulate” her so she is “less supportive of Ukraine”.
Last week, a fake image that appeared to show an explosion near the Pentagon briefly went viral on social media and left fact-checkers and the local fire service scrambling to counter the claim.
The image, which purported to show a large cloud of black smoke next to the headquarters of the US Department of Defence, appeared to have been created using AI technology.
It was first posted on Twitter and quickly recirculated by verified but fake news accounts, before fact-checkers established there had been no explosion at the Pentagon.
But some action is being taken. In November, the government confirmed that sharing pornographic “deepfakes” without consent will be made a crime under new legislation.
Exceeding human intelligence
AI systems simulate human intelligence processes by machine – but is there a risk they might develop to the point where they exceed human control?
Professor Andrew Briggs of the University of Oxford told Sky News there is a fear that, as machines become more powerful, the day “might come” when their capacity exceeds that of humans.
He said: “At the moment, whatever it is the machine is programmed to optimise is chosen by humans, and it may be chosen for harm or chosen for good. At the moment it’s humans who decide it.
“The fear is that as machines become more and more intelligent and more powerful, the day might come where the capacity vastly exceeds that of humans and humans lose the ability to stay in control of what it is the machine is seeking to optimise.”
He said that this is why it is important to “pay attention” to the possibilities for harm and added that “it’s not clear to me or any of us that governments really know how to regulate this in a way that will be safe”.
But there are also a range of other concerns around AI, including its impact on education, with experts warning about AI-written essays and the threat to jobs.
Just the latest warning
Among the signatories of the Centre for AI Safety statement were Mr Hinton and Yoshua Bengio – two of the three so-called “godfathers of AI”, who received the 2018 Turing Award for their work on deep learning.
But today’s warning is not the first time we’ve seen tech experts raise concerns about AI development.
In March, Elon Musk and a group of artificial intelligence experts called for a pause in the training of powerful AI systems due to the potential risks to society and humanity.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people, warned that human-competitive AI systems could cause economic and political disruption to society and civilisation.
It called for a six-month halt to the “dangerous race” to develop systems more powerful than OpenAI’s newly launched GPT-4.
Earlier this week, Rishi Sunak also met Google’s chief executive to discuss “striking the right balance” between AI regulation and innovation. Downing Street said the prime minister spoke to Sundar Pichai about the importance of ensuring the right “guard rails” are in place to ensure tech safety.
Are the warnings ‘baloney’?
Although some experts agree with the Centre for AI Safety statement, others in the field have labelled the notion of “ending human civilisation” as “baloney”.
Pedro Domingos, a professor of computer science and engineering at the University of Washington, tweeted: “Reminder: most AI researchers think the notion of AI ending human civilisation is baloney”.
Mr Hinton responded, asking what Mr Domingos’s plan is for making sure AI “doesn’t manipulate us into giving it control”.
The professor replied: “You’re already being manipulated every day by people who aren’t even as smart as you, but somehow you’re still OK. So why the big worry about AI in particular?”