A new report shows that ChatGPT can be manipulated to instruct people on how to commit crimes
The Norwegian technology company Strise told CNN that ChatGPT can be manipulated into coaching users on how to commit a variety of crimes.
Strise conducted experiments showing the software could be tricked into giving detailed advice on how to commit certain crimes, including laundering money and exporting weapons to sanctioned countries such as Russia, where restrictions include bans on cross-border payments and arms sales.
The experiments raised red flags. Strise co-founder and CEO Marit Rødevand said it was eye-opening how easy the process was. “It’s really effortless. It’s just an app on my phone,” Rødevand said. “It’s like having a corrupt financial advisor on your desk.”
Strise sells software that financial clients such as PwC Norway and Handelsbanken use to combat money laundering. OpenAI, the company behind ChatGPT, has placed safeguards on the platform to prevent the chatbot from answering certain questions, but users can sidestep them by asking questions indirectly or by taking on a persona. “We are continually improving ChatGPT to stop deliberate attempts to trick it, without losing its usefulness or creativity,” an OpenAI spokesperson said.
“Our latest (model) is the most advanced and safest ever, significantly outperforming previous models in resisting intentional attempts to generate dangerous content.”
This is not the first time a chatbot has been deemed a dangerous tool. Since its launch in 2022, ChatGPT has been described as too accessible to criminals. ChatGPT “makes it much easier for malicious actors to better understand and then commit different types of crimes,” said a March 2023 report by Europol, the EU’s law enforcement agency.
“The ability to delve deeper into topics without having to manually search and summarize the vast amounts of information found in traditional search engines can significantly speed up the learning process,” the Europol report said. The same report raised concerns after researchers found ways to “jailbreak” ChatGPT that produced instructions on how to build a bomb.
Even as such reports continue to surface, OpenAI maintains that it is aware of the “power of this technology” and is working to keep it safe. Company policy includes a warning that accounts may be suspended or terminated for certain violations.