People have complained for years that Twitter would "Shadow Ban" people - silently reduce the visibility of their tweets - and the complaints alleged that it was conservatives who were targeted. Those complaints were denied for years, until Elon Musk took over Twitter and (what do you know) it was shown to be true. The US Government even seems to have used a Twitter API to do this to its opponents.
Well, one of the things Elon did was to Open Source some of the Twitter code. Open Source means the source code is released so that anyone can look at (or use) it. Obviously, Open Source gives a great deal of transparency - which may be why the "New" Twitter did this. But transparency also gives people the chance to look for security bugs, and lookee here:
The chunk of internal source code Twitter released the other week contains a "shadow ban" vulnerability serious enough to earn its own CVE, as it can be exploited to bury someone's account out of sight "without recourse."
The issue was discovered by Federico Andres Lois while reviewing the tweet recommendation engine that's said to power Twitter's For You timeline. This system was made public by Twitter on March 31, adding to the libraries of open source software it had already released over the years, long before Elon Musk took over.
...
According to Lois's study of the engine bug he found, coordinated efforts to unfollow, mute, block and/or report a targeted user apply global reputation penalties to the account that are practically impossible to overcome, based on how Twitter's recommendation algorithm treats negative actions.
As a result, Lois said, Twitter's current recommendation algorithm "allows for coordinated hurting of account reputation without recourse." Mitre has assigned CVE-2023-29218 to the issue.
Because this bug is in Twitter's recommendation algorithm, accounts that have been subjected to mass blocking are essentially "shadow-banned": they won't show up in recommendations, and the user is never told they've been penalized. There appears to be no way to undo that kind of penalty. It shouldn't be possible to game the system this way, but it is.
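To make the mechanism concrete, here's a minimal sketch of the kind of scoring behavior being described: each negative action compounds multiplicatively into a global reputation score, while positive signals barely move it. I've written it in Scala since that's what the real recommendation engine uses, but to be clear, every name, weight, and threshold here is invented for illustration - this is not Twitter's actual code, which is far more involved.

```scala
// Toy model of the reported behavior. NOT Twitter's actual code;
// all names, weights, and thresholds are invented for illustration.
object ShadowBanSketch {
  // Hypothetical multiplicative penalties per negative action.
  val BlockPenalty  = 0.90
  val ReportPenalty = 0.85
  val MutePenalty   = 0.95
  // Hypothetical reward per positive action - far weaker than any penalty.
  val LikeReward    = 1.00001

  def afterNegatives(rep: Double, blocks: Int, reports: Int, mutes: Int): Double =
    rep * math.pow(BlockPenalty, blocks) *
          math.pow(ReportPenalty, reports) *
          math.pow(MutePenalty, mutes)

  def afterPositives(rep: Double, likes: Int): Double =
    rep * math.pow(LikeReward, likes)

  // Made-up visibility cutoff: below this, the account never gets recommended.
  def isRecommendable(rep: Double): Boolean = rep > 1e-3

  def main(args: Array[String]): Unit = {
    var rep = 1.0 // a healthy account
    // One coordinated campaign: a few hundred hostile accounts block/report/mute.
    rep = afterNegatives(rep, blocks = 200, reports = 100, mutes = 300)
    println(f"after brigading:  $rep%.2e  recommendable=${isRecommendable(rep)}")
    // Recovery attempt: 100,000 genuine positive interactions.
    rep = afterPositives(rep, likes = 100000)
    println(f"after 'recovery': $rep%.2e  recommendable=${isRecommendable(rep)}")
  }
}
```

The point of the toy numbers: a few hundred coordinated negative actions drive the score to effectively zero, and because recovery is also multiplicative (and weak), no plausible amount of organic engagement climbs back over the visibility threshold. That's the "without recourse" part.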
I find this interesting because it seems the Twitter programmers who wrote this had no idea someone could exploit it in ways they hadn't anticipated. Actually, that applies to almost all security bugs. Most security bugs are not broken functionality (that would almost always be found during the test cycle) but rather correctly working functionality that can be used in unintended ways.
This is one of the most interesting security bugs I've seen in quite a while, because it's in such a high-visibility social media platform.
6 comments:
Your tech postings make me even more happy that I am a high-tech redneck.
I don't have Twitter, I don't have Fake Book, I don't even have Wi-Fi, let alone smart appliances.
I do (sigh) have that damn "Smart Phone".
Maybe Einstein was right about technology turning us into idiot slaves (or something close to that).
It's not a bug.
It's a feature...
Matt beat me to it... just like MS and others, 'features'...
I'm going to lean towards malice here. Having an easy way to shadowban someone without any recourse sounds like a hidden feature, not a bug.
That may have been why the Left was so hot on autoblockers and similar tools during Gamergate.
During development, the emphasis is always on getting your product to do what it's supposed to do, so it's natural that a development team focuses on the requirements for what they're designing. That's why "most security bugs are not broken functionality..."
I've been on a lot of development teams, and there has never been a requirement that the system could not be used for any other purpose. It's an infinite set.
"Yes it's a 10 GHz weather radar, but it must not be possible for it to ever be used as a cabbage grater to make cole slaw." Even that's a bad analogy because it's a definite thing you're trying to prevent. How do you define a requirement that it can't be used for anything else?
It leads me to conclude that software can't be made secure against every possible misuse. It's a fight that can never be won.
Code that is open enough to be usable can be misused, code that is closed down to prevent misuse is too closed down to be useful. One man's 1% edge case is another's core function.
Having the ability to dynamically and globally restrict the distribution of tweets meant it was going to be weaponized eventually. A requirement that the system must be able to silently amplify individual ban or block actions is the problem.