Or do they?
Researchers from the Max Planck Institute for Informatics have defeated facial recognition on big social media platforms – by removing faces from photos and replacing them with automatically painted replicas.
As the team of six researchers explained in their arXiv paper this month, people who want to stay private often blur their photos, not knowing that this is “surprisingly ineffective against state-of-the-art person recognisers.”
AI can beat blurring, but not the inpainting adversarial model.
The researchers went a step further by constructing a realistic "fake face" that replaced the original image - perturbed enough to beat the AI, while still looking "right" to a friend perusing the photo.

The problem facing (so to speak) the AI researchers is that it's not enough to be accurate in regular use - you have to stay accurate even when someone is actively trying to subvert your algorithms. It's the same problem that Internet security faces: QA teams test for "correct" feature behaviour, but then the Bad Guys look for bugs that don't affect feature operation but do affect security.
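To get a feel for the "perturbed enough to beat the AI" idea, here is a minimal sketch of a classic adversarial perturbation (a fast-gradient-sign-style attack against a toy linear classifier) - this is NOT the authors' inpainting pipeline, and the recogniser weights, function name, and epsilon below are all hypothetical stand-ins. The point it illustrates is just the principle: nudge each pixel a tiny step in the direction that most increases the recogniser's loss, small enough that a human wouldn't notice.

```python
import numpy as np

def fgsm_perturb(image, weights, true_label, epsilon=0.05):
    """Perturb a flattened image against a toy linear 'recogniser'.

    weights: hypothetical (n_identities, n_pixels) matrix standing in
    for a real face-recognition model. For this linear model the
    gradient of softmax cross-entropy w.r.t. the input has a closed form.
    """
    logits = weights @ image
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # dLoss/dPixels for softmax cross-entropy on a linear model:
    grad = weights.T @ (probs - np.eye(len(logits))[true_label])
    # Step every pixel by at most epsilon in the loss-increasing
    # direction, then keep the result a valid image in [0, 1].
    return np.clip(image + epsilon * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))      # hypothetical 10-identity recogniser
face = rng.uniform(size=64)        # stand-in for a flattened face crop
label = int(np.argmax(W @ face))   # identity the model currently assigns
adv = fgsm_perturb(face, W, label)
# No pixel moved by more than epsilon - imperceptible to a human:
print(float(np.abs(adv - face).max()))
```

Real attacks (and the paper's inpainting approach) are far more sophisticated, but the asymmetry is the same: the defender's gradients tell the attacker exactly which way to push.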
I'm not at all optimistic that AI will win this one.
Alert readers will notice that to really fool big Social Media, there are some significant OPSEC steps you would have to take. As with most things, OPSEC is security's Achilles' heel. However, as more and more people get fed up with Social Media spying, expect more tools that automate this sort of thing - for example, a photo archive tool that modifies every face to defeat facial recognition.
Smells like an arms race to me.