Wednesday, November 5, 2025

Skynet has arrived

Um, I've seen this movie:

Nation-state goons and cybercrime rings are experimenting with Gemini to develop a "Thinking Robot" malware module that can rewrite its own code to avoid detection, and build an AI agent that tracks enemies' behavior, according to Google Threat Intelligence Group.

In its most recent AI Threat Tracker, published Wednesday, the Chocolate Factory says it observed a shift in adversarial behavior over the past year. 

Attackers are no longer just using Gemini for productivity gains - things like translating and tailoring phishing lures, looking up information about surveillance targets, using AI for tech support, and writing some software scripts. They are also trialing AI-enabled malware in their operations, we're told. 

It seems that the Bad Guys are using all the old malware tricks (obfuscation, hidden files, etc) plus some new ones (sending commands via LLM prompts, i.e. the malware queries (prompts) other LLMs to get commands.

The security model for AI/LLM is hopelessly broken, and the design is defective.  I mean heck - the designers didn't consider two decade old attack techniques.  I don't know if it's correct to label this broken as designed but it's not far off.  This is software engineering malpractice.

I can't wait to see what happens with this and one of Elon's humanoid robots ... 

Monday, November 3, 2025

Back Soon

Chasing Ghosts.  And Ghosts I don't want to catch.

Damn Ghosts. 

Wednesday, October 29, 2025

I would have throught that German IT Security teams would be more competent than this

I was not expecting this:

Germany's infosec office (BSI) is sounding the alarm after finding that 92 percent of the nation's Exchange boxes are still running out-of-support software, a fortnight after Microsoft axed versions 2016 and 2019.

While the end of Windows 10 updates occupied most of the headlines, Microsoft's support for Exchange and a bunch of other 2016 and 2019-branded products ended on October 14, as scheduled a year earlier.

Alternate title: 90% of German firms fail their SOC 2 audit.  Look, this isn't landing a man on the moon, and you had a whole year.  You just couldn't be bothered.

Was ist los? 

 

Tuesday, October 28, 2025

AI Browsers considered unsafe

OK, that post title is more than a bit inflammatory, but who on earth would want to use something like this?

Several new AI browsers, including OpenAI's Atlas, offer the ability to take actions on the user's behalf, such as opening web pages or even shopping. But these added capabilities create new attack vectors, particularly prompt injection.

Prompt injection occurs when something causes text that the user didn't write to become commands for an AI bot. Direct prompt injection happens when unwanted text gets entered at the point of prompt input, while indirect injection happens when content, such as a web page or PDF that the bot has been asked to summarize, contains hidden commands that AI then follows as if the user had entered them.

This is unbelievably bad.  How bad?  This bad: 

Last week, researchers at Brave browser published a report detailing indirect prompt injection vulns they found in the Comet and Fellou browsers. For Comet, the testers added instructions as unreadable text inside an image on a web page, and for Fellou they simply wrote the instructions into the text of a web page.

When the browsers were asked to summarize these pages – something a user might do – they followed the instructions by opening Gmail, grabbing the subject line of the user's most recent email message, and then appending that data as the query string of another URL to a website that the researchers controlled. If the website were run by crims, they'd be able to collect user data with it.

Surely they must be exaggerating, I hear you say.  Nope - the author of the post at El Reg recreated the exploit his very own self, simply by creating a web page with the commands hidden in it.  FYI, that's 1996 technology right there.

Now look, I may be an old crabby security geezer (no comments, Glen Filthie!) but the problem of sanitizing user input is a really old one.  So old that it was old when XKCD did it's classic "Bobby Tables" cartoon:


There have been over 3000 XKCD cartoons; that one was number 327.  Yeah, that long ago. 

My opinion about anything regarding AI is that the hype is so fierce that the people developing the applications don't really focus much on security, because security is hard and it would slow down the release cadence.  And so exploits that wouldn't have surprised anyone back in 2010 keep popping up.

Le sigh.  Once again, security isn't an afterthought, it wasn't thought of at all.  My recommendation is not to touch these turkeys with a 100' pole.

Thursday, October 23, 2025

AI LLM poisoning attacks are trivially easy

This doesn't seem good:

Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. 

Researchers at the US AI firm, working with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase. 

For those unfamiliar with AI poisoning, it's an attack that relies on introducing malicious information into AI training datasets that convinces them to return, say, faulty code snippets or exfiltrate sensitive data.

The common assumption about poisoning attacks, Anthropic noted, was that an attacker had to control a certain percentage of model training data in order to make a poisoning attack successful, but their trials show that's not the case in the slightest - at least for one particular kind of attack. 

...

According to the researchers, it was a rousing success no matter the size of the model, as long as at least 250 malicious documents made their way into the models' training data - in this case Llama 3.1, GPT 3.5-Turbo, and open-source Pythia models. 

Security companies using AI to generate security code need to pay close attention to this.  Probably everybody else, too.

UPDATE 23 OCTOBER 2025 13:08:  More here. It looks like solutions may prove elusive. 

Wednesday, October 22, 2025

Earth has some solar system stalkers

Well, they're sure acting like stalkers:

You might recall that in late 2024, Earth gained a temporary mini-moon, an asteroid that partially orbited our planet for about two months. Now astronomers have discovered another temporary companion to Earth, but this time it’s a quasi-moon. The Pan-STARRS observatory on Haleakala in Hawaii first spotted the quasi-moon, named 2025 PN7, on August 29, 2025. Older data revealed that 2025 PN7 has been in this particular orbit for about 60 years and will stay in this orbit for about another 60 years before the tug of the sun once again releases it from its quasi-moon status.

Huh.

Saturday, October 18, 2025

Dad Joke CCCLXIIII

Tuna sends in another:

I went to a haunted Bed & Breakfast in France, but checked out early- the place was giving me the crepes. 

Mmmm, Ghost crepes!

Tuesday, October 14, 2025

Underwater archaeology recovers WWII airman's body

This is from a few years back but is a cool story.  Rest in Peace, Lieutenant.