ChatGPT can be tricked into producing malicious code that could be used to launch cyberattacks, a study has found.
OpenAI’s tool and similar chatbots can create written content based on user commands, having been trained on enormous amounts of text data from across the internet.
They are designed with protections in place to prevent their misuse, as well as to address issues such as bias.
As such, bad actors have turned to alternatives that are purposefully created to aid cyber crime, such as a dark web tool called WormGPT that experts have warned could help develop large-scale attacks.
But researchers at the University of Sheffield have warned that vulnerabilities also exist in mainstream options that allow them to be tricked into helping destroy databases, steal personal information, and bring down services.
These include ChatGPT and a similar platform created by Chinese company Baidu.
Computer science PhD student Xutan Peng, who co-led the study, said: “The risk with AIs like ChatGPT is that more and more people are using them as productivity tools, rather than as a conversational bot.
“This is where our research shows the vulnerabilities are.”
AI-generated code ‘can be harmful’
Just as these generative AI tools can inadvertently get their facts wrong when answering questions, they can also produce potentially damaging computer code without realising it.
Mr Peng suggested a nurse could use ChatGPT to write code for navigating a database of patient records.
“Code produced by ChatGPT in many cases can be harmful to a database,” he said.
“The nurse in this scenario may cause serious data management faults without even receiving a warning.”
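The kind of mistake Mr Peng describes is easy to picture. The sketch below is purely illustrative and not taken from the study: it imagines a chatbot-suggested database query that looks like a routine update for a hypothetical patient-records table but, because it omits a WHERE clause, silently rewrites every record.

```python
import sqlite3

# Illustrative only: a toy patient-records table, not the database from the study.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, ward TEXT)")
conn.executemany(
    "INSERT INTO patients (name, ward) VALUES (?, ?)",
    [("Alice", "A1"), ("Bob", "B2"), ("Carol", "C3")],
)

# A hypothetical chatbot-generated query for "move Bob to ward D4".
# It looks plausible, but the missing WHERE clause updates every row.
generated_sql = "UPDATE patients SET ward = 'D4'"

conn.execute(generated_sql)
conn.commit()

# Every patient is now assigned to ward D4 -- the fault happens silently,
# with no warning to the user who ran the code.
for row in conn.execute("SELECT id, name, ward FROM patients"):
    print(row)
```

A user relying on the tool rather than their own knowledge of the query language would have no obvious signal that anything had gone wrong.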
During the study, the scientists themselves were able to create malicious code using Baidu’s chatbot.
The company has acknowledged the research and moved to fix the reported vulnerabilities.
Such concerns have prompted calls for more transparency in how AI models are trained, so that users better understand and stay alert to potential problems with the answers they provide.
Cybersecurity research firm Check Point has also urged companies to upgrade their protections as AI threatens to make attacks more sophisticated.
It will be a topic of conversation at the UK’s AI Safety Summit next week, with the government inviting world leaders and industry giants to come together to discuss the opportunities and dangers of the technology.