Image credits: Irrmago/DepositPhotos
ChatGPT is a revolutionary artificial intelligence technology that is rapidly gaining popularity in natural language processing, data science, and machine learning. The AI tool, developed by OpenAI, stands out mainly for its ability to interact with humans more naturally than other AI systems.
Like any other technology, AI-powered ChatGPT can be used for both good and evil. Recognizing the tool’s potential, cybercriminals have already begun abusing ChatGPT in their attacks.
What is ChatGPT?
ChatGPT is an AI-powered chatbot launched by OpenAI in November 2022. According to OpenAI, ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022.
“ChatGPT is a sibling model to InstructGPT that is trained to follow instructions and provide a detailed response,” says OpenAI.
In a recent report, cybersecurity firm Check Point Research (CPR) reveals the first cases of cybercriminals using OpenAI to develop malicious tools, observed in several large underground hacking communities.
“As we suspected, some cases clearly showed that many cybercriminals using OpenAI lack development skills,” the report said.
CPR detailed several cases illustrating cybercriminals’ growing interest in ChatGPT.
Case 1:
On December 29, 2022, a thread titled “ChatGPT – Malware Benefits” appeared on a popular underground hacking forum.
According to CPR’s analysis, the thread’s author revealed that he had been experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.
“Our analysis of the script confirms the cybercriminal’s claims. It is a basic stealer that searches the system for 12 common file types (such as MS Office documents, PDFs, and images). If any files of interest are found, the malware copies them to a temporary directory, compresses them, and sends them over the Internet. It is worth noting that the actor did not bother to encrypt or send the files securely, so the files could end up in the hands of third parties as well,” the CPR report said.
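The file-collection logic CPR describes can be sketched in a few lines of Python, which underscores how basic the script really is. The extension list below is illustrative (the actual list from the script was not published), and the exfiltration step the report mentions is deliberately omitted:

```python
import os
import shutil
import tempfile

# Illustrative extensions standing in for the "12 common file types"
# CPR mentions; the actual list from the script is not published.
TARGET_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".jpg", ".png"}

def collect_files(root_dir: str) -> str:
    """Walk root_dir, copy matching files to a temp dir, and zip them.

    Returns the path to the resulting .zip archive. The network
    exfiltration step described in the report is omitted here.
    """
    staging = tempfile.mkdtemp(prefix="collected_")
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in TARGET_EXTENSIONS:
                shutil.copy2(os.path.join(dirpath, name), staging)
    # shutil.make_archive appends ".zip" to the base name it is given.
    return shutil.make_archive(staging, "zip", staging)
```

A scan-copy-compress loop like this is trivial to produce, which is exactly the report’s point about low-skill actors.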
Case 2:
On December 21st, a threat actor dubbed USDoD posted a Python script, which he emphasized was “the first script he ever created.”
When another cybercriminal commented that the style of the code resembled OpenAI code, USDoD confirmed that OpenAI gave him a “good [helping] hand to finish the script with a nice scope.”
“Our analysis of the script confirmed that it is a Python script that performs cryptographic operations. More specifically, it is a collection of various signing, encryption, and decryption functions,” the report reveals.
Case 3:
The third case shared by CPR reveals a discussion titled “Abusing ChatGPT to Script Dark Web Marketplaces.” In this thread, a cybercriminal demonstrates how easy it is to create a Dark Web marketplace using ChatGPT.
“The primary role of the marketplace in the illegal underground economy is to provide a platform for the automated trading of illegal or stolen goods, such as stolen accounts or payment cards, malware, or even drugs and ammunition, all paid for in cryptocurrencies,” the report explains.
As an example, a cybercriminal published code that uses a third-party API to retrieve updated prices for cryptocurrencies (Monero, Bitcoin, and Ethereum) as part of a Dark Web market payment system.
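The price-retrieval piece is ordinary API-client code. CPR does not name the third-party API involved, so the sketch below assumes a hypothetical JSON endpoint (`api.example.com`) and an assumed `{"prices": {...}}` response shape, just to show how little code such a component requires:

```python
import json
import urllib.request

# Hypothetical endpoint -- CPR does not name the actual API used.
PRICE_API = "https://api.example.com/v1/prices"

def build_price_url(symbols):
    """Build a query URL for the given coin symbols (e.g. XMR, BTC, ETH)."""
    return PRICE_API + "?symbols=" + ",".join(symbols)

def parse_prices(raw_json: str) -> dict:
    """Parse an assumed {"prices": {"BTC": 42000.0, ...}} payload."""
    data = json.loads(raw_json)
    return {sym: float(price) for sym, price in data["prices"].items()}

def fetch_prices(symbols):
    """Fetch and parse live prices (performs a real network call)."""
    with urllib.request.urlopen(build_price_url(symbols)) as resp:
        return parse_prices(resp.read().decode("utf-8"))
```

Any public price API could be slotted in behind the same two helper functions, which is why this kind of integration poses no barrier to low-skill actors.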
In addition to the above cases, CPR notes that several threat actors have opened discussions focused on using ChatGPT for fraudulent schemes.
“Many of these focused on creating random art with another OpenAI technology (DALL·E 2) and selling it online via the legitimate Etsy platform. In another example, a threat actor explains how to create an e-book or short chapter on a specific topic (using ChatGPT) and sell the content online,” the report concludes.