© THE INTERCEPT
ALL RIGHTS RESERVED
Google’s AI Detection Tool Can’t Decide if Its Own AI Made Doctored Photo of Crying Activist
Google’s SynthID AI detection tool flip-flopped when asked if an image posted by the White House was altered by Google’s own AI.
When the official White House X account posted an image depicting activist Nekima Levy Armstrong in tears during her arrest, there were telltale signs that the image had been altered.
Less than an hour before, Homeland Security Secretary Kristi Noem had posted a photo of the exact same scene, but in Noem’s version Levy Armstrong appeared composed, not crying in the least.
Seeking to determine if the White House version of the photo had been altered using artificial intelligence tools, we turned to Google’s SynthID — a detection mechanism that Google claims is able to discern whether an image or video was generated using Google’s own AI. We followed Google’s instructions and used its AI chatbot, Gemini, to see if the image contained SynthID forensic markers.
The results were clear: The White House image had been manipulated with Google’s AI. We published a story about it.
After we posted the article, however, subsequent attempts to authenticate the image with SynthID through Gemini produced different outcomes.
In our second test, Gemini concluded that the image of Levy Armstrong crying was actually authentic. (The White House doesn’t even dispute that the image was doctored. In response to questions about its X post, a spokesperson said, “The memes will continue.”)
In our third test, SynthID determined that the image was not made with Google’s AI, directly contradicting its first response.
At a time when AI-manipulated photos and videos are becoming inescapable, these inconsistent responses raise serious questions about whether SynthID can reliably tell fact from fiction.
Initial SynthID Results
Google describes SynthID as a digital watermarking system. It embeds invisible markers into AI-generated images, audio, text, or video created using Google’s tools, which it can then detect, revealing whether a piece of online content was made with Google’s AI.
“The watermarks are embedded across Google’s generative AI consumer products, and are imperceptible to humans — but can be detected by SynthID’s technology,” says a page on the site for DeepMind, Google’s AI division.
Google presents SynthID as having what digital watermarking researchers call “robustness”: the company says the watermarks remain detectable even if an image undergoes modifications such as cropping or compression. An image manipulated with Google’s AI should therefore contain detectable watermarks even if it has been saved multiple times or posted on social media.
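To make the robustness idea concrete, here is a toy sketch of a spread-spectrum-style watermark, the classic approach in which a faint secret pattern is added to an image and later recovered by correlation. This is a hypothetical illustration only; it is not Google’s actual SynthID algorithm, and every name and parameter below is invented for the example.

```python
import numpy as np

# Toy illustration only -- NOT Google's actual SynthID algorithm.
# A secret pseudorandom pattern is added faintly to an image; detection
# correlates the image against that same pattern. "Robustness" means the
# correlation score survives mild distortion such as re-encoding noise.

rng = np.random.default_rng(seed=42)              # the seed acts as the secret key
pattern = rng.choice([-1.0, 1.0], size=(64, 64))  # hidden +/-1 pattern

def embed(image, strength=2.0):
    """Add the faint pattern; pixel values barely change."""
    return image + strength * pattern

def detect(image, threshold=1.0):
    """Correlate against the secret pattern; a high score means 'marked'."""
    centered = image - image.mean()               # remove the image's overall brightness
    return float(np.mean(centered * pattern)) > threshold

photo = rng.normal(128.0, 10.0, size=(64, 64))    # stand-in for a real photo
marked = embed(photo)
noisy = marked + rng.normal(0.0, 1.0, size=marked.shape)  # mild re-encoding noise

print(detect(marked), detect(noisy), detect(photo))
```

Because the pattern is spread across every pixel, light noise barely shifts the correlation score, so the watermark in the marked image survives the distortion while the unmarked photo still scores near zero. Real systems face far harder transformations (cropping, rescaling, heavy compression), which is what makes robust watermarking a genuinely difficult engineering problem.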
Google steers those who want to use SynthID toward its Gemini AI chatbot, which they can prompt with questions about the authenticity of digital content.
“Want to check if an image or video was generated, or edited, by Google AI? Ask Gemini,” the SynthID landing page says.
We decided to do just that.
We saved the image file that the official White House account posted on X, bearing the filename G_R3H10WcAATYht.jfif, and uploaded it to Gemini. We asked whether SynthID detected the image had been generated with Google’s AI.
To test SynthID’s claims of robustness, we also uploaded a cropped and re-encoded copy of the image, which we named imgtest2.jpg.
Finally, we uploaded a copy of the photo in which Levy Armstrong was not crying, as previously posted by Noem. (Gemini referred to Noem’s photo as signal-2026-01-22-122805_002.jpeg because we downloaded it from the Signal messaging app.)
“I’ve analyzed the images you provided,” wrote Gemini. “Based on the results from SynthID, all or part of the first two images were likely generated or modified with Google AI.”
“Technical markers within the files imgtest2.jpg and G_R3H10WcAATYht.jfif indicate the use of Google’s generative AI tools to alter the subject’s appearance,” the bot wrote. It also identified the version of the image posted by Noem as appearing to “be the original photograph.”
With confirmation from Google that its SynthID system had detected hidden forensic watermarks in the image, we reported in our story that the White House had posted an image that had been doctored with Google’s AI.
This wasn’t the only evidence the White House image wasn’t real; Levy Armstrong’s attorney told us that he was at the scene during the arrest and that she was not at all crying. The White House also openly described the image as a meme.
A Striking Reversal
A few hours after our story published, Google told us that they “don’t think we have an official comment to add.” A few minutes after that, a spokesperson for the company got back to us and said they could not replicate the result we got. They asked us for the exact files we uploaded. We provided them.
The Google spokesperson then asked, “Were you able to replicate it again just now?”
We ran the analysis again, asking Gemini to check whether SynthID detected that the image had been manipulated with AI. This time, Gemini failed to reference SynthID at all, despite the fact that we followed Google’s instructions and explicitly asked the chatbot to use the detection tool by name. Gemini now claimed that the White House image was instead “an authentic photograph.”
It was a striking reversal considering Gemini previously said that the image contained technical markers indicating the use of Google’s generative AI. Gemini also said, “This version shows her looking stoic as she is being escorted by a federal agent” — despite our question addressing the version of the image depicting Levy Armstrong in tears.
Less than an hour later, we ran the analysis one more time, prompting Gemini to yet again use SynthID to check whether the image had been manipulated with Google’s AI. Unlike the second attempt, Gemini invoked SynthID as instructed. This time, however, it said, “Based on an analysis using SynthID, this image was not made with Google AI, though the tool cannot determine if other AI products were used.”
Google did not answer repeated questions about this discrepancy. In response to inquiries, the spokesperson continued to ask us to share the specific phrasing of the prompt that resulted in Gemini recognizing a SynthID marker in the White House image.
We didn’t store that language, but told Google it was a straightforward prompt asking Gemini to check whether SynthID detected the image as being generated with Google’s AI. We provided Google with information about our prompt and the files we used so the company could check its records of our queries in its Gemini and SynthID logs.
“We’re trying to understand the discrepancy,” said Katelin Jabbari, a manager of corporate communications at Google. Jabbari repeatedly asked if we could replicate the initial results, as “none of us here have been able to.”
After further back and forth following subsequent inquiries, Jabbari said, “Sorry, don’t have anything for you.”
Bullshit Detector?
Aside from Google’s proprietary tool, there is no easy way for users to test whether an image contains a SynthID watermark. That makes it difficult in this case to determine whether Google’s system initially detected the presence of a SynthID watermark in an image without one, or if subsequent tests missed a SynthID watermark in an image that actually contains one.
As AI becomes increasingly pervasive, the industry is trying to move past the technology’s long history of being what researchers call a “bullshit generator.”
Supporters of the technology argue that tools that can detect whether something was made with AI will play a critical role in establishing common truth amid the coming flood of media generated or manipulated by AI. They point to successes such as a recent case in which SynthID flagged a purported arrest photo of Venezuelan President Nicolas Maduro, flanked by federal agents, as an AI-generated image. The Google tool said the photo was bullshit.
If AI-detection technology fails to produce consistent responses, though, there’s reason to wonder who will call bullshit on the bullshit detector.