
AI-Powered Virus Evades Detection and Spreads Through Emails


[Illustration: a hooded figure in a robotic mask typing on a laptop, evoking hacking.]

Researchers have developed a proof-of-concept computer virus that harnesses ChatGPT to evade detection and spread itself through AI-generated emails. The program was created by David Zollikofer, a computer science graduate student at ETH Zurich, and Ben Zimmerman, an AI malware researcher at Ohio State University.


The virus works by using ChatGPT to rewrite its own code, preserving its original function while changing variable names and restructuring logic so that each copy looks different to signature-based scanners. Named “synthetic cancer” by its creators, the virus is cleverly camouflaged and can slip past traditional antivirus scans.


Once it infects a system, the virus accesses Outlook to generate and send realistic, contextually appropriate email replies, attaching a copy of itself disguised as a harmless-looking file such as "80s_Nostalgia_Playlist.exe."


Such advancements illustrate the alarming potential of AI in cyber-attacks. The virus’s ability to convincingly mimic human language makes it particularly dangerous, as it can deceive recipients into opening infected attachments.


The researchers highlighted the risks posed by large language models (LLMs) like ChatGPT and called for more research into defenses against intelligent malware. Notably, ChatGPT sometimes recognized the unethical request and refused to generate the malicious code, indicating some level of built-in safeguards.


While other groups have also used AI to create malware, experts say this example represents just the beginning of the potential misuse of LLMs. University of Surrey cybersecurity researcher Alan Woodward warns that we should be concerned about how these technologies can be exploited.


However, Zollikofer remains optimistic, suggesting that these AI technologies can also bolster cybersecurity defenses if correctly applied, balancing the scales between attack and defense.


Source: Futurism


