Hackers have begun using the ChatGPT AI to create malware
Since the launch of the ChatGPT AI bot, enthusiasts have put it through a wide range of tasks: the bot not only answers questions but also writes technical articles, essays, poems, and computer code. That last capability deserves caution, because the code can be harmful if a user deliberately asks the AI for it.
Cybersecurity researchers at Check Point Research published a study reporting that participants in hacker forums are using ChatGPT to write malicious code and phishing emails, and that some of them have little or no programming experience.
One of the examples in the report describes a Python script that, with minor modifications, could be turned into ransomware capable of encrypting data on a user's computer. Another Python script created by ChatGPT searches a local machine for files of a given type, such as PDFs, compresses them, and sends them to a server controlled by the attacker, a standard information-theft scenario.
In another example, a script written by ChatGPT was meant to power an online marketplace where compromised accounts, bank card data, malware, and other goods sold on the darknet could be bought or exchanged. The script connected to a third-party API to fetch up-to-date quotes for the most popular cryptocurrencies and simplify payment calculations.
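The report does not name the price service that script relied on. For illustration only, the quote-fetching step might look like the minimal sketch below, which assumes the public CoinGecko API; the endpoint, coin IDs, and function name are assumptions, not details from the study.

```python
import requests

# Hypothetical example: fetch spot prices for a few popular coins from
# the public CoinGecko API. The actual service used by the script in the
# Check Point report is not named; this endpoint is an assumption.
COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def fetch_quotes(coin_ids=("bitcoin", "ethereum", "monero"), currency="usd"):
    """Return a dict mapping each coin ID to its current price in `currency`."""
    response = requests.get(
        COINGECKO_URL,
        params={"ids": ",".join(coin_ids), "vs_currencies": currency},
        timeout=10,
    )
    response.raise_for_status()
    # Response shape: {"bitcoin": {"usd": 43000.0}, "ethereum": {...}, ...}
    return {coin: data[currency] for coin, data in response.json().items()}

if __name__ == "__main__":
    for coin, price in fetch_quotes().items():
        print(f"{coin}: {price} USD")
```

Fetching quotes like this is itself entirely benign; the point of the example in the report is that ChatGPT can wire such ordinary building blocks into infrastructure for illicit trade.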
The Check Point researchers also tried using the neural network to model an attack themselves. The bot obligingly wrote them a convincing phishing email informing the recipient that their account with a hosting provider had been blocked and asking them to open an attached Excel file. After a few attempts, ChatGPT also produced a malicious VBA macro to embed in that file.