OpenAI Bans ChatGPT Accounts Used by Russian, Iranian and Chinese Hacker Groups

OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.

“The [Russian-speaking] actor used our models to assist with developing and refining Windows malware, debugging code across multiple languages, and setting up their command-and-control infrastructure,” OpenAI said in its threat intelligence report. “The actor demonstrated knowledge of Windows internals and exhibited some operational security behaviors.”

The Go-based malware campaign has been codenamed ScopeCreep by the artificial intelligence (AI) company. There is no evidence that the activity was widespread.

The threat actor, per OpenAI, used temporary email accounts to sign up for ChatGPT, using each account for a single conversation to make one incremental improvement to the malware before abandoning it and moving on to the next.

This practice of using a network of accounts to fine-tune their code highlights the adversary’s focus on operational security (OPSEC), OpenAI added.

The attackers then distributed the AI-assisted malware through a publicly available code repository that impersonated a legitimate video game crosshair overlay tool called Crosshair X. Users who downloaded the trojanized version of the software had their systems infected with a malware loader that then retrieved additional payloads from an external server and executed them.

“From there, the malware was designed to initiate a multi-stage process to escalate privileges, establish stealthy persistence, notify the threat actor, and exfiltrate sensitive data while evading detection,” OpenAI said.

“The malware is designed to escalate privileges by relaunching with ShellExecuteW and attempts to evade detection by using PowerShell to programmatically exclude itself from Windows Defender, suppressing console windows, and inserting timing delays.”

Other tactics incorporated into ScopeCreep include the use of Base64 encoding to obfuscate payloads, DLL side-loading techniques, and SOCKS5 proxies to conceal the attackers' source IP addresses.

The end goal of the malware is to harvest credentials, tokens, and cookies stored in web browsers, and exfiltrate them to the attacker. It’s also capable of sending alerts to a Telegram channel operated by the threat actors when new victims are compromised.

OpenAI noted that the threat actor asked its models to debug a Go code snippet related to an HTTPS request, and also sought help with integrating the Telegram API and using PowerShell commands via Go to modify Windows Defender settings, specifically to add antivirus exclusions.

The second group of ChatGPT accounts disabled by OpenAI is said to be associated with two hacking groups attributed to China: APT5 (aka Bronze Fleetwood, Keyhole Panda, Manganese, and UNC2630) and APT15 (aka Flea, Nylon Typhoon, Playful Taurus, Royal APT, and Vixen Panda).

One subset engaged with the AI chatbot on matters related to open-source research into various entities of interest and technical topics, as well as to modify scripts or troubleshoot system configurations.

“Another subset of the threat actors appeared to be attempting to engage in development of support activities including Linux system administration, software development, and infrastructure setup,” OpenAI said. “For these activities, the threat actors used our models to troubleshoot configurations, modify software, and perform research on implementation details.”

This consisted of asking for assistance with building software packages for offline deployment and seeking advice on configuring firewalls and name servers. The threat actors also engaged in both web and Android app development activities.

In addition, the China-linked clusters weaponized ChatGPT to work on a brute-force script that can break into FTP servers, research the use of large language models (LLMs) to automate penetration testing, and develop code to manage a fleet of Android devices to programmatically post or like content on social media platforms like Facebook, Instagram, TikTok, and X.

Some of the other observed malicious activity clusters that harnessed ChatGPT in nefarious ways are listed below –

  • A network, consistent with the North Korea IT worker scheme, that used OpenAI’s models to drive deceptive employment campaigns by developing materials that could likely advance their fraudulent attempts to apply for IT, software engineering, and other remote jobs around the world
  • Sneer Review, a likely China-origin activity that used OpenAI’s models to bulk generate social media posts in English, Chinese, and Urdu on topics of geopolitical relevance to the country for sharing on Facebook, Reddit, TikTok, and X
  • Operation High Five, a Philippines-origin activity that used OpenAI’s models to generate bulk volumes of short comments in English and Taglish on topics related to politics and current events in the Philippines for sharing on Facebook and TikTok
  • Operation VAGue Focus, a China-origin activity that used OpenAI’s models to generate social media posts for sharing on X by posing as journalists and geopolitical analysts, asking questions about computer network attack and exploitation tools, and translating emails and messages from Chinese to English as part of suspected social engineering attempts
  • Operation Helgoland Bite, a likely Russia-origin activity that used OpenAI's models to generate Russian-language content about the 2025 German election that criticized the U.S. and NATO, for sharing on Telegram and X
  • Operation Uncle Spam, a China-origin activity that used OpenAI’s models to generate polarized social media content supporting both sides of divisive topics within U.S. political discourse for sharing on Bluesky and X
  • Storm-2035, an Iranian influence operation that used OpenAI's models to generate short comments in English and Spanish that expressed support for Latino rights, Scottish independence, Irish reunification, and Palestinian rights, and praised Iran's military and diplomatic prowess, for sharing on X by inauthentic accounts posing as residents of the U.S., U.K., Ireland, and Venezuela
  • Operation Wrong Number, a likely Cambodian-origin activity related to China-run task scam syndicates that used OpenAI’s models to generate short recruitment-style messages in English, Spanish, Swahili, Kinyarwanda, German, and Haitian Creole that advertised high salaries for trivial tasks such as liking social media posts

“Some of these companies operated by charging new recruits substantial joining fees, then using a portion of those funds to pay existing 'employees' just enough to maintain their engagement,” OpenAI’s Ben Nimmo, Albert Zhang, Sophia Farquhar, Max Murphy, and Kimo Bumanglag said. “This structure is characteristic of task scams.”
