OK, that post title is more than a bit inflammatory, but who on earth would want to use something like this?
Several new AI browsers, including OpenAI's Atlas, offer the ability to take actions on the user's behalf, such as opening web pages or even shopping. But these added capabilities create new attack vectors, particularly prompt injection.
Prompt injection occurs when something causes text that the user didn't write to become commands for an AI bot. Direct prompt injection happens when unwanted text gets entered at the point of prompt input, while indirect injection happens when content, such as a web page or PDF that the bot has been asked to summarize, contains hidden commands that AI then follows as if the user had entered them.
This is unbelievably bad. How bad? This bad:
Last week, researchers at Brave browser published a report detailing indirect prompt injection vulns they found in the Comet and Fellou browsers. For Comet, the testers added instructions as unreadable text inside an image on a web page, and for Fellou they simply wrote the instructions into the text of a web page.
When the browsers were asked to summarize these pages – something a user might do – they followed the instructions by opening Gmail, grabbing the subject line of the user's most recent email message, and then appending that data as the query string of another URL to a website that the researchers controlled. If the website were run by crims, they'd be able to collect user data with it.
Surely they must be exaggerating, I hear you say. Nope - the author of the post at El Reg recreated the exploit his very own self, simply by creating a web page with the commands hidden in it. FYI, that's 1996 technology right there.
Now look, I may be an old crabby security geezer (no comments, Glen Filthie!) but the problem of sanitizing user input is a really old one. So old that it was old when XKCD did it's classic "Bobby Tables" cartoon:
There have been over 3000 XKCD cartoons; that one was number 327. Yeah, that long ago.
My opinion about anything regarding AI is that the hype is so fierce that the people developing the applications don't really focus much on security, because security is hard and it would slow down the release cadence. And so exploits that wouldn't have surprised anyone back in 2010 keep popping up.
Le sigh. Once again, security isn't an afterthought, it wasn't thought of at all. My recommendation is not to touch these turkeys with a 100' pole.

2 comments:
I can't help but wonder: what goes through the mind of a user who would want an AI browser to shop on that user's behalf? Other than the gentle breeze blowing in one ear and out the other.
I know almost nothing about "all of that", but it sounds like the computer age might accidentally suicide itself.
Post a Comment