Wednesday, November 5, 2025

Skynet has arrived

Um, I've seen this movie:

Nation-state goons and cybercrime rings are experimenting with Gemini to develop a "Thinking Robot" malware module that can rewrite its own code to avoid detection, and build an AI agent that tracks enemies' behavior, according to Google Threat Intelligence Group.

In its most recent AI Threat Tracker, published Wednesday, the Chocolate Factory says it has observed a shift in adversarial behavior over the past year.

Attackers are no longer just using Gemini for productivity gains - things like translating and tailoring phishing lures, looking up information about surveillance targets, using AI for tech support, and writing some software scripts. They are also trialing AI-enabled malware in their operations, we're told. 

It seems the Bad Guys are using all the old malware tricks (obfuscation, hidden files, etc.) plus some new ones: the malware queries (prompts) other LLMs at runtime to receive commands and rewrite its own code.

The security model for AI/LLMs is hopelessly broken, and the design is defective. I mean, heck - the designers didn't consider two-decade-old attack techniques. I don't know if it's correct to label this "broken as designed," but it's not far off. This is software engineering malpractice.

I can't wait to see what happens with this and one of Elon's humanoid robots ... 

2 comments:

SiGraybeard said...

It seems that every day something new reinforces my belief that AI is nothing but the biggest marketing scam in human history. "Buy our stock! AI will run everything! It'll do everything!"

Old NFO said...

One more reason to stay the hell away from AI