TL;DR: The new Reimage feature on the Google Pixel 9 phones is really good at AI manipulation, while being very easy to use. This is bad.
This is bad
Some serious old-man-yelling-at-cloud energy
It’ll sink in for you when photographic evidence is no longer admissible in court
Photoshop has existed for a while now. So shocking that it was only going to get better and easier to use. Move along with the times, old-timer.
Well yeah, I’m not concerned with its ease of use nowadays. I’m more concerned with computer forensics experts not being able to detect a fake, whereas Photoshop edits have always been detectable.
As the cat and mouse game continues, we ask ourselves, is water still wet?
Just wait, image manipulation will happen at image creation and there will be no “original”. Proving an image is unmanipulated will be a landmark legal precedent and set the standard for being able to introduce photographic evidence. It is already a problem for audio recordings and will be eventually for video.
Photoshop requires time and talent to make a believable image.
This requires neither.
But it has been possible, for more than a decade
You said “but” like it invalidated what I said, instead of being a true statement and a non sequitur.
You aren’t wrong, and I don’t think that changes what I said either.
Lmao, “but” means your statement can be true and irrelevant at the same time. From the day Photoshop could fool people, lawyers have been trying to mark any image as faked, misplaced, or out of context.
When you just now realise it’s an issue, that’s your problem. People can’t stop these tools from existing, so like, go yell at a cloud or something.
I really don’t have much knowledge on it, but it sounds like it would be an actual good application of blockchain.
Couldn’t a blockchain be used to certify that pictures are original and have not been tampered with?
On the other hand, if it were possible, I’m certain someone has already started it; it’s the perfect investor magnet: “Using blockchain to counter AI.”
How would that work?
I am being serious. I work in IT and can’t see how that would work in any realistic way.
And even if we had a working system to track all changes made to a photo, it would only work if the author submitted the original image before any change had been made. But how would you verify that the original copy of a photo submitted to the system has not been tampered with?
Sure, you could be required to submit the raw file from the camera, but it is only a matter of time until AI can perfectly simulate an optical sensor to produce a simulated raw of a simulated scene.
Nope, we simply have to fall back on building trust with photojournalists, and trust digital signatures to tell us when we are seeing a photograph modified outside of the journalist’s agency.
Yep, I think pictures are becoming as valuable as text, and that’s fine; we just need to get used to it.
Before photography became mainstream, the only source of information was the written word. It is extremely simple to fake a story, so people had to rely on trusted sources. Then, for a short period of history, photography became a (kinda) reliable source of information by itself, and this trust system lost its importance.
In most cases, seeing a photo meant we were seeing a true reflection of what happened, especially if we were seeing multiple photos of the same event.
Now we are arriving at the end of this period: we cannot trust a photo by itself anymore, and tampering with a photo is becoming as easy as writing a fake story. This is a great opportunity for journalists, I believe.
Relevant XKCD. Humans have always been able to lie. Having a single form of irrefutable proof is the historical exception, not the rule.
interesting thought. we haven’t had photos in history, and people didn’t need them. also, we’ve been able to produce text deepfakes all throughout history (and people actually did that - a lot) and somehow, humanity still survived and made progress. maybe we should question our assumptions whether we really need a medium to communicate absolute truth.
humanity still survived and made progress
Humanity never needed truth, for all of that. Only a good enough illusion.
It’s just that, most of the time, the illusions are not good enough and the truth comes out.
If you’re getting your truth from somewhere you don’t trust, you’ve already lost the plot. Having a medium to convey absolute truth is NOT the exception, because it never existed. Not with first hand accounts, not with photos, not with videos. Anything, from its inception, has been able to be faked by someone motivated enough.
What we need is an industry of independent ethically driven individuals to investigate and be a trusted source of truth on the world’s important events. Then they can release journals about their findings. We can call them journalers or something, I don’t know, I don’t have all the answers. Too bad nothing like that exists when we need it most 🥲
What we need is distribution of power. Power acts upon information. There was that weird idea that with solid information there’s no need to distribute power. When people say “due process”, they usually mean that. This wasn’t true anyway.
Information is still fine, people lie and have always lied, humanity has always relied upon chains and webs of trust.
The issue is centralized power forcing you to walk their paths.
Regarding that last panel, why would multiple people go through the trouble of carving lies about Ea-Nasir’s shitty copper? And even if they did, why would he keep them? No, his copper definitely sucked.
The obvious conjecture is that they were trying to commit fraud and get free copper
Awful title.
Clickbait 101
It’s the verge, after all. Nobody should read their slop
Image manipulation has always been a thing, and there are ways to counter it…
But we already know that a shocking number of people will simply take what they see at face value, even if it does look suspicious. The volume of AI-generated misinformation online is already too damn high without it getting more new strings in its bow.
Governments don’t seem to be anywhere near on top of these AI developments either, so by the time the law starts accounting for all of this, the damage will already be done.
On our vacation two weeks ago, my wife took an awesome picture, with just one guy annoyingly in the background. She just tapped him and clicked the button… poof, gone, perfect photo.
Honestly yeah, I agree. Many mainstream social media platforms are infested with shitty generated content to the point of insanity.
But it’s never been this absolutely trivial to generate and distribute completely synthetic media. THAT is the real problem here.
Yep, this is a problem of volume of misinformation, the truth can just get buried by one single person generating thousands of fake photos, it’s really easy to lie, it’s really time consuming to fact check.
That’s precisely what I mean.
The effort ratio between generating synthetic visual media and corroborating or disproving a given piece of visual media has literally inverted and then grown by an order of magnitude in the last 3-5 years. That is fucking WILD. And more than a bit scary, when you really start to consider the potential malicious implications. Which you can see being employed all over the place today.
Even a few months ago, using AI on photos was hard even for people with the knowledge. I don’t like the idea of this, but it’s unavoidable. There is already so much misinformation, and this will make it so much worse.
I wish tools to detect whether an image is real would become as easy to use and as good as these AI bullshit tools.
Any tool someone invents will be used to train an AI to circumvent that tool.
In fact that’s how a lot of AI training is done in the first place.
We need to bring back the people who can identify shops from some of the pixels, having seen quite a few shops in their time.
Captain Disillusion vs. The Artificer
It’s fundamentally not possible.
At some point, fakes will be picture-perfect, indistinguishable from the real thing.
Meh, those edited photos could have been created in Photoshop as well.
This makes editing and retouching photos easier, and that’s a concern, but it’s not new.
Something I heard in the Photoshop vs. AI argument is that it makes an already existing process much faster, and almost anyone can do it, which increases the sheer amount that one person or a group could produce, almost like how the printing press made the production of books so much faster (if you’re into history).
I’m too tired to take a stance, so I’m just sharing some arguments I’ve heard.
Making creating fake images even easier definitely isn’t great, I agree with you there, but it’s nothing that couldn’t already be done with Photoshop.
I definitely don’t like the idea you can do this on your phone.
Exactly, it was already established that pictures from untrusted sources are to be disregarded unless they can be verified by trusted sources.
It is basically how it has been forever with the written press: just like everyone now has the capability to manipulate a picture, everyone can write that we are being invaded by aliens, but whether we should believe it is another thing.
It might take some time for the general public to learn this, but it should be a focus area of general schooling within the area of source criticism.
almost how a printing press made the production of books so much faster
… and we all know that led to 30 years of bloody war, btw
we’ve been able to do this kind of shit since the days of film. It wasn’t hard, just required some clever stitching and blending.
It’s just “more accessible” now. I’m more concerned about shit like AI-generated videos, though. Those are spooky. Or also just the general accessibility of “natural bot nets” now.
It’s a shitty toy that’ll make some people sorry when they don’t have any photos from their night out without tiny godzilla dancing on their table. It won’t have the staying power Google wishes it to, since it’s useless except for gags.
But, please, Verge,
It took specialized knowledge and specialized tools to sabotage the intuitive trust in a photograph.
get fucked
Photo manipulation has existed almost since the invention of photography. It was just much harder; see the famous retouched photo: https://www.history.com/news/josef-stalin-great-purge-photo-retouching
Great point. But tools that let a 10-year-old manipulate photos, better than your example, in several minutes are in fact fairly new.
Hell, they can generate photos that fool 70% of people on Facebook, though now that I say that, maybe that bar isn’t too high…
This reaffirms my wish to go back to monkey.
This is a hyperbolic article to be sure. But many in this thread are missing the point. It’s not that photo manipulation is new.
It’s the volume and quality of photo manipulation that’s new. “Flooding the zone with bullshit,” i.e. decreasing the signal-to-noise ratio, has demonstrable social effect.
It seems like the only defense against this would be something along the lines of FUTO’s Harbor, or maybe Ghost Keys. I’m not gonna pretend to know enough about them technically or practically, but a system that can anonymously prove that you’re you across websites could potentially de-fuel that fire.
There was actually a user on Lemmy that asked if the original photo for the massacre was AI. It hadn’t occurred to me that people who never heard of the 1989 Tiananmen Square protests and massacre would find the image and question if it was real or not.
A very sad sight, a very sad future.
Were they from the .ml instances?
Photoshop has existed for years. It’s no different than a student in 2010 being shocked at the horrors of man and trying to figure out how it could be faked with a computer. People have denied the Holocaust for generations!
This argument keeps missing that it is not only the quality but mainly the quantity of fakes which is going to be the problem. The complete undermining of trust in photographic evidence is seen as a good thing for so many nefarious vested interests, that this is an aim they will actively strive for.
It is different. The old Photoshop process took a lot of time. Now an image can be manipulated incredibly quickly and spread almost as fast before anyone has time to do anything about it.
How is it sad? If they’re young and/or don’t have the best schooling, it’s not their fault they haven’t heard of it. And then they encounter an absurd picture and approach it with skepticism? That’s not sad at all. Healthy skepticism is good, especially with the influx of AI generated content
We’ve had fake photos for over 100 years at this point.
https://en.wikipedia.org/wiki/Cottingley_Fairies
Maybe it’s time to do something about confirming authenticity, rather than just accepting any old nonsense as evidence of anything.
At this point anything can be presented as evidence, and now can be equally refuted as an AI fabrication.
We need a new generation of secure cameras with internal signing of images and video (to prevent manipulation), built in LIDAR (to make sure they’re not filming a screen), periodic external timestamps of data (so nothing can be changed after the supposed date), etc.
I am very opposed to this. It means surrendering all trust in pictures to Big Tech. If at some time only photos signed by Sony, Samsung, etc. are considered genuine, then photos taken with other equipment, e.g., independently manufactured cameras or image sensors, will be dismissed out of hand. If, however, you were to accept photos signed by the operating system on those devices regardless of who is the vendor, that would invalidate the entire purpose because everyone could just self-sign their pictures. This means that the only way to effectively enforce your approach is to surrender user freedom, and that runs contrary to the Free Software Movement and the many people around the world aligned with it. It would be a very dystopian world.
There’s no need to make these things Big Tech, so if that’s why you are opposed to it, reconsider what you are actually opposed to. This could be implemented in a FOSS way or an open standard.
So do you not trust HTTPS because you’d have to trust Big Tech? Microsoft, Google, and others sign the certificates you use to trust that you are sending your password to your bank and not a phisher. Just like any browser can see and validate certificates, any camera could have a validation or certificate system in place to prove that the data came straight from an unmodified, validated camera sensor.
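The tamper-detection idea behind such a scheme can be sketched in a few lines. Real camera-signing systems (e.g. the C2PA / Content Credentials standard) use asymmetric key pairs so verifiers never hold the secret; this minimal stand-alone sketch uses an HMAC from Python’s standard library purely to illustrate the principle, and the device key and function names here are hypothetical, not any vendor’s actual API:

```python
import hmac
import hashlib

# Hypothetical device key. In a real camera this would be an asymmetric
# private key kept in a secure element; HMAC is only a stdlib stand-in
# for the sign/verify idea.
DEVICE_KEY = b"example-secret-key"

def sign_photo(image_bytes: bytes) -> str:
    """Camera side: produce an authentication tag over the raw sensor data."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, tag: str) -> bool:
    """Verifier side: any modification to the bytes invalidates the tag."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"\x89PNG...raw sensor data..."
tag = sign_photo(original)
print(verify_photo(original, tag))            # untouched photo verifies: True
print(verify_photo(original + b"edit", tag))  # any edit breaks the tag: False
```

The open question the thread raises still stands: this only proves the bytes left the sensor unmodified, not that the scene in front of the sensor was real.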
It would also involve trusting those corporations not to fudge evidence themselves.
I mean, not everything photo related would have to be like this.
But if you wanted your photo to be able to document things, to provide evidence that could send people to prison or to execution…
The other choice is that we no longer accept photographic, audio or video evidence in court at all. If it can no longer be trusted and even a complete novice can convincingly fake things, I don’t see how it can be used.
Damn, those are pretty damn good!