- cross-posted to:
- privacyguides@lemmy.one
Israel has deployed a mass facial recognition program in the Gaza Strip, creating a database of Palestinians without their knowledge or consent, The New York Times reports. The program, which was created after the October 7th attacks, uses technology from Google Photos as well as a custom tool built by the Tel Aviv-based company Corsight to identify people affiliated with Hamas.
Israel is the type of control-heavy far-right state other dictators wish they could govern, and it’s made possible by Western money and technology (I was going to name just the US but my country of Canada, among others, is not blameless either). This news also sucks because there’s no way that tech is staying in Israel only. Citizens of the world better brace for convictions via AI facial recognition.
“Our computer model was able to reconstruct this image of the defendant nearly perfectly. It got the hands wrong and one eye is off-center, but otherwise that’s clearly them committing the crime.”
It’s true… far right dictators love this shit:
https://www.cnet.com/news/politics/in-china-facial-recognition-public-shaming-and-control-go-hand-in-hand/
From what I remember, AI facial recognition tech was already being used by police and agencies worldwide, like the FBI, PRC police etc, or am I misinformed? I remember something about Chinese and American facial recognition software.
I had not read anything like that, but a quick search pulled up this story from last September by Wired that supports your post: FBI Agents Are Using Face Recognition Without Proper Training. “Yet only 5 percent of the 200 agents with access to the technology have taken the bureau’s three-day training course on how to use it, a report from the Government Accountability Office (GAO) this month reveals.” So it sounds like you’re right, and also that they are probably inadequately trained even if they do complete all three days, given the legal ramifications of identifying people.
And I wonder how many of those 95% have already used misapplied AI facial recognition to justify FISA court warrants for ~~stalking~~ investigating ~~random people~~ suspected terrorists?
Facial tattoos of drop table commands. Embed computer worms into your iris. We can get insane to fuck all this shit up too. I bet there’s a way to embed a computer virus on your own face.
I guess I’ll adjust my life goals to “hot cyberpunk partner in technological dystopia”, because that sounds like some Blade Runner/Cyberpunk 2077 stuff.
It’s not that far off. We’ll see exactly what I said soon enough. You can put a virus or a worm inside an image in an email. You can do the same thing with a tattoo. It’s unfortunate it will be here so far before the superhuman cybernetics.
I’d much prefer that people who haven’t done this wouldn’t talk.
Are you implying you can’t use steganography techniques on real objects and images? You act like I stated it would be easy.
OK, so who’ll decode your “virus” from those real objects? Or it’s a case of “I’m a poor Nigerian virus, please kindly run me with root privileges on a system with such and such”?
EDIT: I mean, steganography, too, is a word a person should know the meaning of before using it.
Just because you said this wouldn’t work like SQL injection does not mean it won’t. You don’t know either. Have you worked on facial recognition databases? How do they store their data? It’s most likely just a database. Then I would start by looking at steganography techniques to see how those could be applied. Obviously I’m not hiding an executable in there, but I don’t see why you couldn’t probe for unsanitized input, you never know. Now if you want to continue into realism, you would just wear a full face mask outside. You also never answered my question about steganography.
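To make the “unsanitized input” point concrete, here’s a toy sketch of why the drop-table-tattoo idea lives or dies on how the pipeline builds its queries. Everything here is invented for illustration (the `faces` table, the idea that recognized text is stored in SQL at all); the point is just the difference between string concatenation and a parameterized query.

```python
# Toy model: a recognition pipeline stores a "recognized" string in SQL.
# If it concatenates, injection works; with a placeholder, the payload
# is stored as inert text. Table/column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE faces (id INTEGER PRIMARY KEY, label TEXT)")

recognized = "Robert'); DROP TABLE faces;--"  # hostile "recognized" string

# Vulnerable pattern (don't run): attacker-controlled text becomes SQL.
# conn.executescript("INSERT INTO faces (label) VALUES ('" + recognized + "')")

# Safe pattern: the ? placeholder keeps the payload as plain data.
conn.execute("INSERT INTO faces (label) VALUES (?)", (recognized,))
print(conn.execute("SELECT label FROM faces").fetchone()[0])
# → Robert'); DROP TABLE faces;--
```

So the attack only works against a system sloppy enough to concatenate OCR output into queries, which is exactly the “you never know” part.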
Your question doesn’t make any fucking sense in the context of attacking anything, steganography is encoding your message inside redundant encoding for something else.
So, about that word.
A “virus in an image” situation is for cases when a program which will open that image has some vulnerability the attacker knows about, so the image is formed specifically to execute some shellcode in this situation.
Same with “a virus in an MP3”, some MP3 decoder has a known vulnerability allowing a shellcode.
Same with PDFs and anything else.
There are more high-level situations where programs with their own complex formats (say, DOCX which is a ZIP archive with some crap inside) execute stuff.
All this is not steganography.
Steganography is when, a dumb example, you have an image and you hide your message in lower bits of pixel color values. Or something like that with an MP3 file.
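The “lower bits” example above can be sketched in a few lines. This is a minimal least-significant-bit scheme over a flat list of 0–255 values standing in for pixel data (a real image would go through something like Pillow); function names are my own.

```python
# Minimal LSB steganography sketch: hide a message in the lowest bit
# of each "pixel" value, changing each cover value by at most 1.

def embed(pixels, message):
    # Flatten the message into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for cover data")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, length):
    # Read back `length` bytes from the low bits and reassemble them.
    bits = [p & 1 for p in pixels[:length * 8]]
    data = bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
    return data.decode()

cover = list(range(256)) * 4   # fake "image" data
stego = embed(cover, "hi")
print(extract(stego, 2))       # → hi
```

Which also shows the limitation being argued about: the hidden bits are just data, and nothing executes unless some other program with a vulnerability reads them.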
Attacks are a matter of probabilities, and “you never know” doesn’t suffice.
Sounds like a great time to start a costume & mask making company named “The ministry of silly walks”.
This is probably what people would actually do. Just wear a full mask.
Honestly with enshittification “technological dystopia” sounds like exactly where we already are. Now, if only implants weren’t being R&D’d by Muskrat and there were some open source non-invasive version…
Which may even work with 0.001% probability of that recognized string not being screened.
There’s a difference between SQL injections on thematic web forums and the same in such a system.
That “we can … too” is lazy complacency. “They” will get even stronger while “we” talk like this.
Nothing is casual about this. Be pessimistic if you want. But we will not stop jabbing the eye that watches. This is an arms race.
What I’m saying is that you personally haven’t done any of this and look stupid.
Yep, people do use vulnerabilities in software and hardware to do things. Just not you, so that “we” seems weird.
Neither did I, I just played with crackmes and shellcodes a bit, but I’m not the person writing pretentious posts with that “we”.
The original commentor I replied to was speculating about this being commonplace. You came in with your statements about people having to do things to talk about them in a post about speculation.
You are doing absolutely nothing, are you.
Attempts at adversarial AI tattoos, face masks, and clothing have been made before. Basically you exploit the model’s lack of any deeper understanding of the world, so you can trick it with specific visual artifacts.