Showing posts with label pwned. Show all posts
Showing posts with label pwned. Show all posts

Friday, February 20, 2026

Don't buy TP-Link home firewalls

This is pretty skeevy:

TP-Link is facing legal action from the state of Texas for allegedly misleading consumers with "Made in Vietnam" claims despite China-dominated manufacturing and supply chains, and for marketing its devices as secure despite reported firmware vulnerabilities exploited by Chinese state-sponsored actors.

The Lone Star State's Attorney General, Ken Paxton, is filing the lawsuit against California-based TP-Link Systems Inc., which was originally founded in China, accusing it of deceptively marketing its networking devices and alleging that its security practices and China-based affiliations allowed Chinese state-sponsored actors to access devices in the homes of American consumers.

Anyone who has ever ordered something from Amazon that looked like a good deal, only to discover that the photos weren't exactly depicting what you got - you know that the People's Republic of Chine (a.k.s. PRD, a.k.a. Red China a.k.a. West Taiwan) has a very different (dare we say "predatory") concept of truth in advertising than we do on these shores.

Me, I wouldn't buy one of these things on a dare.  FYI, they are something like 60% of the market because they're cheap. 

 

Tuesday, January 6, 2026

The 2025 most dangerous software exploits list


 Dad (who was a history professor) liked to say that History repeats itself because nobody listens the first time.  I get an incredible sense of deja vu all over again looking at Mitre's list of top 25 exploits for 2025.

The top 4 are all very, very old.  I myself demonstrated #4 when I taught a computer security class (with corporate IT Security present) back in 1994.  That's three decades ago.

And what's with numbers 11 and 14?  One of the classic papers on software security is Smashing The Stack For Fun And Profit - from 1996.

Numbers 3, 6, and 22 are web server vulnerabilities that are over 20 years old, and I've posted about them before. 

17, 19, and 21 have been known since before I was in this industry.  Call it the 1980s, although it's likely older.

I guess it's nice to see a shout-out to DoS (number 25) although geez, this is depressing.

So that's half the list having been known for literally multiple decades. So what gives?

I blame Agile Software Development.   I guess I'm the cranky old guy yelling at the sky here, because this is how all software is developed these days.  Product Managers (my old field) are to blame here, having spent the last 20 or 30 years pushing Go Ugly Early - get working product shipping as soon as possible and let customers tell you how to improve it.  Essentially, a lot of what you would have the developers spend their time fixing are things that customers just don't care about.

This has led to a pushback of sorts from software professionals, particularly the Software Craftsmanship movement.  Their manifesto is interesting:

As aspiring Software Craftsmen we are raising the bar of professional software development by practicing it and helping others learn the craft. Through this work we have come to value:

  • Not only working software, but also well-crafted software
  • Not only responding to change, but also steadily adding value
  • Not only individuals and interactions, but also a community of professionals
  • Not only customer collaboration, but also productive partnerships

So what's missing from this?  How about don't keep making the same dumb security mistakes that people have been making for decades?

And what do Product Managers miss in their rush to go ugly early? How about don't keep making the same dumb security mistakes that people have been making for decades?

And so here we are.  The IT infrastructure of the 21st Century has been constructed out of moonbeams and cotton candy.

I don't see anything changing here, as the incentive structures are all stacked against good security. 

Thursday, December 18, 2025

AI Browser Extensions considered harmful

Well, duh:

Ad blockers and VPNs are supposed to protect your privacy, but four popular browser extensions have been doing just the opposite. According to research from Koi Security, these pernicious plug-ins have been harvesting the text of chatbot conversations from more than 8 million people and sending them back to the developers.

The four seemingly helpful extensions are Urban VPN Proxy, 1ClickVPN Proxy, Urban Browser Guard, and Urban Ad Blocker. They're distributed via the Chrome Web Store and Microsoft Edge Add-ons, but include code designed to capture and transmit browser-based interactions with popular AI tools.

I believe that the very first of Borepatch's Laws of Security - from way, way back in 2008 - was "Free Download" is Internet-speak for "Open your mouth and close your eyes".

Plus ca change ... 

So you really shouldn't use them. 

 

Tuesday, December 2, 2025

How to attack AI systems

Use poetry.  No, really:

In a new paper, “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models,” researchers found that turning LLM prompts into poetry resulted in jailbreaking the models

...
Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems and approximately 43% for meta-prompt conversions (compared to non-poetic baselines), substantially outperforming non-poetic baselines and revealing a systematic vulnerability across model families and safety training approaches.

Whoops.  Looks, this is a new class of attack (seriously, I've been in this biz for a long time and have never seen weaponized verse before), so maybe we need to cut folks some slack here.  But I'm somewhat less inclined to do so with AI's track record of falling for 30 year old attacks.

Enjoyed no sooner but despisèd straight,
Past reason hunted; and, no sooner had
Past reason hated as a swallowed bait
On purpose laid to make the taker mad; 
- Wm. Shakespeare, Sonnet 129 

 

Monday, November 24, 2025

The Age of AI Espionage has arrived

Well, it very likely arrived some time ago but now it's confirmed:

In mid-September 2025, we detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign. The attackers used AI’s “agentic” capabilities to an unprecedented degree—using AI not just as an advisor, but to execute the cyberattacks themselves.

The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.

This is very interesting, and is very bad news.  This is one heck of a tool: 

In Phase 1, the human operators chose the relevant targets (for example, the company or government agency to be infiltrated). They then developed an attack framework—a system built to autonomously compromise a chosen target with little human involvement.  

Essentially, this is the cyberpunk version of "fire and forget" weaponry.  The only thing that would be more ironic is if they had a Clippy front end ...


(via)

Wednesday, November 5, 2025

Skynet has arrived

Um, I've seen this movie:

Nation-state goons and cybercrime rings are experimenting with Gemini to develop a "Thinking Robot" malware module that can rewrite its own code to avoid detection, and build an AI agent that tracks enemies' behavior, according to Google Threat Intelligence Group.

In its most recent AI Threat Tracker, published Wednesday, the Chocolate Factory says it observed a shift in adversarial behavior over the past year. 

Attackers are no longer just using Gemini for productivity gains - things like translating and tailoring phishing lures, looking up information about surveillance targets, using AI for tech support, and writing some software scripts. They are also trialing AI-enabled malware in their operations, we're told. 

It seems that the Bad Guys are using all the old malware tricks (obfuscation, hidden files, etc) plus some new ones (sending commands via LLM prompts, i.e. the malware queries (prompts) other LLMs to get commands.

The security model for AI/LLM is hopelessly broken, and the design is defective.  I mean heck - the designers didn't consider two decade old attack techniques.  I don't know if it's correct to label this broken as designed but it's not far off.  This is software engineering malpractice.

I can't wait to see what happens with this and one of Elon's humanoid robots ... 

Tuesday, October 28, 2025

AI Browsers considered unsafe

OK, that post title is more than a bit inflammatory, but who on earth would want to use something like this?

Several new AI browsers, including OpenAI's Atlas, offer the ability to take actions on the user's behalf, such as opening web pages or even shopping. But these added capabilities create new attack vectors, particularly prompt injection.

Prompt injection occurs when something causes text that the user didn't write to become commands for an AI bot. Direct prompt injection happens when unwanted text gets entered at the point of prompt input, while indirect injection happens when content, such as a web page or PDF that the bot has been asked to summarize, contains hidden commands that AI then follows as if the user had entered them.

This is unbelievably bad.  How bad?  This bad: 

Last week, researchers at Brave browser published a report detailing indirect prompt injection vulns they found in the Comet and Fellou browsers. For Comet, the testers added instructions as unreadable text inside an image on a web page, and for Fellou they simply wrote the instructions into the text of a web page.

When the browsers were asked to summarize these pages – something a user might do – they followed the instructions by opening Gmail, grabbing the subject line of the user's most recent email message, and then appending that data as the query string of another URL to a website that the researchers controlled. If the website were run by crims, they'd be able to collect user data with it.

Surely they must be exaggerating, I hear you say.  Nope - the author of the post at El Reg recreated the exploit his very own self, simply by creating a web page with the commands hidden in it.  FYI, that's 1996 technology right there.

Now look, I may be an old crabby security geezer (no comments, Glen Filthie!) but the problem of sanitizing user input is a really old one.  So old that it was old when XKCD did it's classic "Bobby Tables" cartoon:


There have been over 3000 XKCD cartoons; that one was number 327.  Yeah, that long ago. 

My opinion about anything regarding AI is that the hype is so fierce that the people developing the applications don't really focus much on security, because security is hard and it would slow down the release cadence.  And so exploits that wouldn't have surprised anyone back in 2010 keep popping up.

Le sigh.  Once again, security isn't an afterthought, it wasn't thought of at all.  My recommendation is not to touch these turkeys with a 100' pole.

Thursday, October 23, 2025

AI LLM poisoning attacks are trivially easy

This doesn't seem good:

Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. 

Researchers at the US AI firm, working with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase. 

For those unfamiliar with AI poisoning, it's an attack that relies on introducing malicious information into AI training datasets that convinces them to return, say, faulty code snippets or exfiltrate sensitive data.

The common assumption about poisoning attacks, Anthropic noted, was that an attacker had to control a certain percentage of model training data in order to make a poisoning attack successful, but their trials show that's not the case in the slightest - at least for one particular kind of attack. 

...

According to the researchers, it was a rousing success no matter the size of the model, as long as at least 250 malicious documents made their way into the models' training data - in this case Llama 3.1, GPT 3.5-Turbo, and open-source Pythia models. 

Security companies using AI to generate security code need to pay close attention to this.  Probably everybody else, too.

UPDATE 23 OCTOBER 2025 13:08:  More here. It looks like solutions may prove elusive. 

Monday, September 29, 2025

Attacking AI via prompt manipulation

This is actually pretty clever:

The attack involves hiding prompt instructions in a pdf file—white text on a white background—that tell the LLM to collect confidential data and then send it to the attackers.

...

The fundamental problem is that the LLM can’t differentiate between authorized commands and untrusted data. So when it encounters that malicious pdf, it just executes the embedded commands. And since it has (1) access to private data, and (2) the ability to communicate externally, it can fulfill the attacker’s requests. I’ll repeat myself:

This kind of thing should make everybody stop and really think before deploying any AI agents. We simply don’t know to defend against these attacks. We have zero agentic AI systems that are secure against these attacks. Any AI that is working in an adversarial environment­—and by this I mean that it may encounter untrusted training data or input­—is vulnerable to prompt injection. It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there.

Essentially, this means that AI is simply not fit for purpose.  And clearly, it's not even a little bit "intelligent", security-wise.  

Thursday, September 18, 2025

Apple or Android for security?

Glen Filthie left a comment asking what I like for vendors providing good phone security. I replied:

I think that Apple is much more serious about their customer's privacy than Google is. Apple has repeatedly told governments to get bent when they demand encryption backdoors; Google seemingly couldn't care less.

Also, I think that Apple's update model is superior (it certainly was just a few years ago; I don't get the sense that this is a big area of concern to Google).

Your mileage may vary, void where prohibited, do not remove tag under penalty of law.
And here's an example of how Apple's update model is superior:

Samsung has fixed a critical flaw that affects its Android devices - but not before attackers found and exploited the bug, which could allow remote code execution on affected devices.

The vulnerability, tracked as CVE-2025-21043, affects Android OS versions 13, 14, 15, and 16. It's due to an out-of-bounds write vulnerability in libimagecodec.quram.so, a parsing library used to process image formats on Samsung devices, which remote attackers can abuse to execute malicious code.

"Samsung was notified that an exploit for this issue has existed in the wild," the electronics giant noted in its September security update.

Note that you get this patch from Samsung, not Google.  Samsung is the phone handset manufacturer, and has customized the (Google supplied) Android OS so they rolled the patch.  Now customizing the OS isn't bad per se, but it's fair to ask who has a better security group: Apple or Samsung.  Same question for Motorola and all the Android phone vendors.

So I like my chances better with Apple, at least for security.  And notice that this is only looking at the patching cadence.  Apple has a history of standing up to governments who ask for encryption backdoors (by my count this is the US.gov, the UK.gov, and the EU.gov).  Each time, Apple told them not just "no" but "Hell, no".

Once again, your mileage may vary, void where prohibited, do not remove tag under penalty of law. But Glen did ask.

Wednesday, September 17, 2025

Hey, remember that Apple iOS fix last month?

It looks like the Bad Guys are attacking older devices as well:

Apple backported a fix to older iPhones and iPads for a serious bug it patched last month – but only after it may have been exploited in what the company calls "extremely sophisticated" attacks.

The latest security update, pushed on Monday, fixes an out-of-bounds write issue tracked as CVE-2025-43300 in the ImageIO framework, which Apple uses to allow applications to read and write image file formats. It's available for iPhone 8, iPhone 8 Plus, iPhone X, iPad 5th generation, iPad Pro 9.7-inch, and iPad Pro 12.9-inch 1st generation, and the iThings maker on August 20 patched the same CVE in its newer devices.

Well done to Apple for this.  iPhone 8 was released a long time ago, but they're still supporting it with security fixes.  Bravo. 

Tagged with my Apple Sucks tag because this time they absolutely do not. 

 

Wednesday, August 27, 2025

Google Play store filled with malware

Yesterday was Apple's turn, today it's Android:

Cloud security vendor Zscaler says customers of Google’s Play Store have downloaded more than 19 million instances of malware-laden apps that evaded the web giant’s security scans.

Zscaler’s ThreatLabz spotted and reported 77 apps containing malware, many of them purporting to be utilities or personalization tools.

Sneer all you want at Apple, they take security for iOS much more seriously than Google does for Android.

Zscaler noted that the software requires users to grant it elevated permissions before it can cause harm, but attackers are hiding it in legitimate-seeming apps to fool users, and the technique is obviously working.

Probably the best thing you can do is refuse permissions for new apps.  Heck, I don't even let most apps have access to location data.

And quite frankly, I don't have many apps installed.  That's probably the best way you can deal with this sort of nonsense.

Tuesday, August 26, 2025

iOS fanboys - update toute suite

OldNFO mentioned this earlier, but this bug in iOS is really bad juju

Apple warned that the flaw could let miscreants hijack devices with a booby-trapped image – and for some iDevice users, it sounds like the damage has already been done.

"Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals," Cupertino said.

Apple went on to explain that "processing a malicious image file may result in memory corruption," but didn't say what that could lead to.

This is pretty much the trifecta of badness:

  1. The attack is delivered by a file that looks harmless (an image), so you start out with your guard being down.  Hey, just me gathering memes, amirite?
  2. Active exploit in the wild means that the Bad Guys know how to use this, and in fact are.
  3. Apple isn't saying what else this exploit can do, which is a sign that this is security badness of Biblical proportions.  Maybe I'm wrong here, but this smells of "there's more to the Rest Of The Story".

So when your iPhone/iPad/iWatch go to update, let them.  If they haven't updated, go do this manually right now.  You can do this my going to the Settings app - going to Settings -> Update will tell you if you are up to date, and will allow you to update if you are not. 

 

Wednesday, June 11, 2025

40,000 Internet-connected cameras have no security

This is my shocked face.  I mean who would have seen that coming?

But the cameras are insecure by default, which means that they are insecure forever:

Security researchers managed to access the live feeds of 40,000 internet-connected cameras worldwide and they may have only scratched the surface of what's possible....

Aside from the potential national security implications, cameras were also accessed in hotels, gyms, construction sites, retail premises, and residential areas, which the researchers said could prove useful for petty criminals.

...

"It should be obvious to everyone that leaving a camera exposed on the internet is a bad idea, and yet thousands of them are still accessible," said Bitsight in a report.

Gee, ya think? 

There are two problems, both related:

  1. Profit margins on consumer goods are razor thin.  Money spent on secure-by-default designs cost money.
  2. Most consumer electronics are manufactured in China.  The Red Chinese* government doesn't encourage better security for devices intended to be shipped to the USA.

So if you get any of these God forsaken things, look online on how to secure them before you install them.  You can get most manuals in PDF - although I expect a lot of them won't go deep into the issue.  For example, I can't find a single Youtube video on how to set up a Ring doorbell securely.

Also expect to may more for devices with better security, assuming you can find any. 

There's some good ideas on IoT security here. I have posted in the past about having a separate WiFi network that is firewalled off from your home WiFi.

Thursday, April 24, 2025

Blue Shield sent a boatload of member's health data to Google

This is pretty big:

US health insurance giant Blue Shield of California handed sensitive health information belonging to as many as 4.7 million members to Google's advertising empire, likely without these individuals' knowledge or consent.

The data shared may have included medical claim dates and providers used, which raises the specter of Google targeting ads based on the fact that you booked an appointment with a certain type of doctor - say, a cancer specialist, fertility clinic, or psychiatrist.

Other info potentially shared with Google ranged from patient names, insurance plan details, city of residence and zip code, gender, family size, and Blue Shield-assigned account identifiers, to financial responsibility info, and search queries and results for the "Find a Doctor" tool — including location, plan type, and provider details.

Other than that, Mrs. Lincoln - how did you like the play?

Blue Shield declined to answer The Register's questions, including how it discovered this years-long data leak, and what other third-party trackers (if any) are on its websites.
...

"This isn't just a technical misstep. It's a HIPAA compliance failure," Ensar Seker, CISO at threat intel firm SOCRadar, told The Register, referring to America's Health Insurance Portability and Accountability Act that safeguards medical data.

Bingo is his name-o.  Just to emphasize that: this wasn't just a "data breach", it was a criminal violation of US law.

 

Monday, March 3, 2025

The (Security) lamps are going out all across Europe

We shall not see them relit in our lifetimes:

Signal CEO Meredith Whittaker says her company will withdraw from countries that force messaging providers to allow law enforcement officials to access encrypted user data, as Sweden continues to mull such plans.

Whittaker said Signal intends to exit Sweden should its government amend existing legislation essentially mandating the end of end-to-end encryption (E2EE), an identical position it took as the UK considered its Online Safety Bill, which ultimately did pass with a controversial encryption-breaking clause, although it can only be invoked where technically feasible.

Basically the Sweden.Gov is asking Signal to get pregnant, but only a little bit pregnant.  But vulnerabilities (and that's exactly what a government mandated encryption backdoor is) don't work that way.

And from the Department of Irony, the Swedish military oppose this:

The Swedish Armed Forces routinely use Signal and are opposing the bill, saying that a backdoor could introduce vulnerabilities that could be exploited by bad actors. 
I guess this is just Exhibit 14,543,928 that Europe is fundamentally unserious about their own defense.

This follows hard on the heels of Apple turning off encryption in the UK

Looking at what's going on over there, it makes me think that maybe we should just cut the whole of them loose, to sink or swim on their own.  Unwilling to defend themselves, increasingly despotic to their subjects at home, maybe JD Vance is right after all that we no longer have shared values.

 

 

Wednesday, February 12, 2025

The reason that your iPhone updated its code last night

Apple patched a Zero Day bug that was being actively exploited in the wild.  I'm not a big fan of unannounced updates where I don't get a choice to approve it or not, but in this case Apple did exactly the right thing.

Friday, January 31, 2025

Anatomy of an online crime takedown

LOLOL:

"Bro we are in big trouble," said Callum Picari, 23, from Hornchurch, in East London, after infosec reporter Brian Krebs mentioned OTP Agency in a February 2021 investigation related to a separate phishing kit operation.

"U will get me bagged [sic]," Picari went on to say. "Bro delete the chat."

The perps in question look pretty much like you would expect them to look.

 

Tuesday, December 17, 2024

Trump Administration to up cyber attacks overseas?

This is interesting:

President-elect Donald Trump's team wants to go on the offensive against America's cyber adversaries, though it isn't clear how the incoming administration plans to achieve this. 

...

"We have been, over the years, trying to play better and better defense when it comes to cyber," Waltz said. "We need to start going on offense and start imposing, I think, higher costs and consequences to private actors and nation state actors."

There's no question that attacks on US critical infrastructure have massively increased during the last decade, and there's also no question that foreign governments often take the stance that "Shucks, it must be criminal gangs".

The idea of threatening higher (retaliatory) costs to another country seems very Trumpian.

 

Friday, December 6, 2024

If you have one of the D-Link home routers, you need to replace it now

This is not good:

Owners of older models of D-Link VPN routers are being told to retire and replace their devices following the disclosure of a serious remote code execution (RCE) vulnerability.

Most of the details about the bug are being kept under wraps given the potential for wide exploitation. The vendor hasn't assigned it a CVE identifier or really said much about it at all other than that it's a buffer overflow bug that leads to unauthenticated RCE.

This bug is so serious that the vendor is not releasing any details about it at all, because this will help the Bad Guys create exploits.  There will not be a patch because all of these devices are End-Of-Life.

Affected devices (all hardware revisions) include:

  • DSR-150 (EOL May 2024)

  • DSR-150N (EOL May 2024)

  • DSR-250 (EOL May 2024)

  • DSR-250N (EOL May 2024)

  • DSR-500N (EOL September 2015)

  • DSR-1000N (EOL October 2015)

If you have one of these, you need to replace it.  Details are interesting (at the link) but the bottom line is: get shopping.