OpenAI Warns Foreign Adversaries Use Multiple AI Tools
OpenAI released a report Tuesday saying foreign adversaries are increasingly chaining multiple AI models to power hacking, phishing and covert influence operations, often using ChatGPT to plan schemes and other models, such as DeepSeek and Anthropic tools, to carry them out. The company said it has banned several accounts tied to China-based entities and Russian-speaking criminal groups after finding the multi-model approach was used to develop malware, automate phishing and generate content for covert campaigns. OpenAI researchers, including Ben Nimmo, warned that this cross-model use limits investigators' visibility and that threat actors are learning to disguise AI fingerprints.
📌 Key Facts
- OpenAI released a threat report Tuesday and said it banned accounts tied to China-based entities and Russian-speaking criminal groups.
- Adversaries used ChatGPT to plan or refine operations, such as writing prompts and researching phishing techniques, and other models, including DeepSeek and Anthropic tools, to execute tasks such as malware development and automated phishing.
- OpenAI principal investigator Ben Nimmo said investigators often get only "a glimpse" into actors' activity because campaigns span multiple AI models and operators are learning to hide AI signatures.