While experts in the media discuss how artificial intelligence applications will change the world – which professions will remain and which new ones will emerge – fraudsters operating in the virtual space have already started using ChatGPT and other AI tools in their activities.
As a result of this fundamental shift, we may soon face not only more attacks, but also much smarter ones.
One of the main risks stems from the fact that these systems will readily share advice even with people whose clear intention is not to improve this world. Developers are therefore creating new strategies to limit the use of AI in committing crimes.
AI Chatbots and Cybersecurity Challenges
In response to the growing public attention to ChatGPT, Europol’s Innovation Lab is actively analyzing how large language models (LLMs) such as ChatGPT can be abused by criminals – and how they can assist investigators in their daily work.
At the moment, Europol identifies three main threats:
- Phishing and Social Engineering: ChatGPT’s ability to produce highly realistic text makes it a useful tool for scammers, as it can mimic the speech style of specific individuals or groups. This can confuse potential victims and lead them to trust criminals.
- Disinformation: ChatGPT’s ability to quickly create authentic-sounding text makes it ideal for propaganda and disinformation purposes, as users can create and disseminate large numbers of messages that reflect a particular narrative with relatively little effort.
- Cybercrime: ChatGPT can generate not only human-like language but also code in various programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource for developing malicious code.
ChatGPT and similar programs really do write convincing, persuasive letters and create formal documents in basically any requested style. But that is not all: AI-based chatbots can produce working software code, too.
Hacking with ChatGPT (and Other Chatbots)
Of course, if someone simply asks it to create a ransomware program, ChatGPT will refuse. However, with a slightly cleverer approach – asking it to generate separate functions, then combining them into, for example, a data-encrypting application – the restriction becomes much easier to circumvent.
Asking ChatGPT to write separate functions in a programming language of your choice works without issue; the only technical skill required is combining them into a single working program. Some IT knowledge is still needed, of course, to understand the context, formulate requests to the chatbot properly, and precisely specify the technical task you want implemented.
Some cybersecurity experts claim to have created fully working “replicas” of malicious code in less than an hour using this AI-assisted technique. In the same way, cybercriminals may produce code fragments for password-cracking programs, analyze vulnerabilities of different websites, or find technical exploits in other software products.
Not That Helpful for Cybersecurity
While AI can provide great help to potential criminals, the other side – cybersecurity specialists – can use it only to a very limited extent.
Artificial intelligence can advise on how to protect yourself from a specific malicious action, help create a security rule, or help analyze a large amount of data, but it does not implement all the actions required to safeguard against newly emerging threats. Nor can it address each specific case individually, as a human specialist does.
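As an illustration of the kind of “security rule” an AI assistant might help an analyst draft, here is a minimal, hypothetical phishing heuristic in Python. The patterns, scoring, and threshold are invented for this sketch – exactly the sort of starting point a chatbot can suggest, which a human specialist must then tune and maintain.

```python
import re

# Hypothetical indicators often associated with phishing emails.
SUSPICIOUS_PATTERNS = [
    re.compile(r"verify your account", re.IGNORECASE),
    re.compile(r"urgent(ly)? (action|response) required", re.IGNORECASE),
    re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"),  # links to raw IP addresses
]

def phishing_score(message: str) -> int:
    """Count how many suspicious indicators appear in the message."""
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if pattern.search(message))

def looks_like_phishing(message: str, threshold: int = 2) -> bool:
    """Flag a message when it matches at least `threshold` indicators."""
    return phishing_score(message) >= threshold
```

A rule like this can filter an obvious subset of scam messages, but it shows the limitation described above: it only encodes what a human told it to look for, and novel attack wording slips straight past it.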
Written by Alius Noreika