Sycophantic synthetic eyes
My thoughts on AI imagery and journalism


I, too, fell for the AI trampoline bunnies.
It was late at night, prime time for a silly little vid. I saw those whimsical little guys hopping around, blissful and free, and my brain uncritically went, “yes, this is exactly what I want,” followed by “wow, the internet is a beautiful and magical place.”
Almost immediately, I realized that I had got got, that I had let my guard down and had been bamboozled for it. I can’t trust anything, I thought.
“Trust” is the key word here. An 8-second video of bunnies is a pretty low-stakes thing to be fooled by, but I can’t help catastrophizing about how AI imagery is degrading our grounding in what we can trust to be real. Especially in the attention economy, where engagement is literal currency.
What happens to the world—and to journalism and news media, specifically—when we can seldom trust what we see?
Uneasy, I turned to Fred Ritchin, a scholar and critic of photography and former picture editor for The New York Times Magazine. I hoped that his book “The Synthetic Eye,” about the history of photo manipulation and the implications of AI for the future of photography as documentary, might help me make sense of things. What I learned is that we are already well on our way down a slippery slope of image manipulation, and it will be difficult to rein things in.
Ritchin opens his book with a story:
In 1982, at the dawn of the digital image revolution, National Geographic used a computer to modify a horizontal photograph of the pyramids of Giza so that it would better fit on its vertical cover, shifting one pyramid closer and partially behind the other. Two years later I interviewed the magazine’s editor, who defended the alteration, viewing it not as a falsification but…“merely the establishment of a new point of view, as if the photographer had been retroactively moved a few feet to one side.” I was astonished. It seemed to me that National Geographic had just introduced to photography a concept from science fiction — virtual time travel — as if one could revisit a scene and photograph it again.
This anecdote surprised me. Or rather, Ritchin’s astonishment in this instance surprised me. Lightly editing a photo so that it fits more pleasingly on a cover didn’t feel abhorrently transgressive to me. But as I read on and considered the issue further, I began to see the connective chain linking that 1982 anecdote to where we are today.
Consider that Nat Geo editor’s defense: that the image editing was only “as if the photographer had been retroactively moved a few feet to one side.” Reworded, this defense could be boiled down to something like, “this image could exist in real life, so why shouldn’t it?” Because who’s to say the photographer didn’t take just one more, perfect frame? And if that perfect photograph could have been taken in real life, in some alternate reality, then modifying the image is a harmless substitution, yes?
Forty years later, this attitude—that a better image could and therefore should exist—is basically how we all operate today. The pursuit of alternate realities is de rigueur. We put on filters, remove pimples, play with color warmth and saturation, and crop out strangers, all in the name of a “better” picture that, in an ideal world, we could have taken without any technological meddling. We tell ourselves these pictures are more or less still “real.” Ritchin points out that our smartphone cameras have already quietly become enablers of this ethos, automatically nudging our photography a few degrees away from truth to give us a version we’ll like better. Rather than faithfully capturing images based on the detection of photons, they’ve been algorithmically editing our pictures for years. (This technology is the reason people struggle to capture orange post-wildfire skies on their phone cameras.)
So we remove a pimple—because who’s to say that this selfie wasn’t taken on a day when I had better skin? And we photoshop a magazine cover so the photographer is retroactively repositioned a few feet to the right.
Once you accept this line of thinking—as so many of us have—the leap to accepting AI-generated images is actually quite small. Who’s to say that the photographer wasn’t repositioned a few feet to the right, and that there wasn’t a camel in the background? Egypt has camels. And who’s to say that the camel wasn’t looking directly at the camera with a goofy expression? Camel faces are so silly! (This example is super dumb and benign, but I don’t have the heart to think up a more political, inflammatory example.) There’s a world where this image could have happened, or did—so it might as well be this one. Generative AI has revolutionized our ability to remake reality.
And it is actively remaking reality, frictionlessly. This past summer, Reddit cofounder Alexis Ohanian went viral on Twitter/X after he posted about how he used Midjourney to animate an old photograph of him with his late mother. In response to a commenter pondering whether to similarly create a high-quality AI-generated reel based on low-quality pictures and videos of their late father, Ohanian responded, “I genuinely don’t understand why you wouldn’t use AI for this. Few hundred years ago, you’d have been burned as a witch for even having the video recordings.”
Putting aside the bizarre logical fallacy in Ohanian’s response, it’s striking to me how readily he and others view AI-generated videos as valid documentation of the past. They treat the technology as a tool to materialize evidence for memories that would otherwise exist only in their minds. The phrase “pics or it didn’t happen” is twisted, perverted into an imperative: don’t you want imagery to prove that your version of reality happened?
This is bad for our brains, for at least a few reasons.
For one, our brains are bad at metadata. Years of research have shown that exposure to false information leads to persistent misunderstandings, even after people learn that what they’ve seen is false. There’s the illusory truth effect, in which repeated exposure to statements or ideas increases our chances of believing those claims—and this happens even if we start out knowing the statements are false. Even if we had perfect disclosure, where every AI-generated image or video was labeled as such, that wouldn’t stop fake facts and images from permeating our subconscious and influencing our sense of what is real.
Reason number two: Having a shaky grasp of what is real and what is not, combined with the sycophantic tendency of AI models to agree with everything you tell them, is a recipe for psychosis. Accounts of mental harm from AI are already multiplying, with stories of people spiraling through distortions of reality. Services claiming to “restore old memories” through AI are drip-feeding clients false memories. “AI psychosis” is a real psychiatric concern now.
Lastly (and possibly most existentially), thanks to this tech, it’s now impossible to believe any photorealistic image or video without first applying intense scrutiny. Every image, no matter how reputable the source, is suspect. We are all aware of the endless imagery out there intended to trick us, so we guard our minds against it.
But because our brains are stupid and fallible, perpetual wariness only nudges us to believe things that confirm our biases. Anything opposing our worldviews and values rouses the most suspicion. Meanwhile, anything that sycophantically confirms our biases triggers our desire for it to be real, a desire strong enough to blunt the modicum of critical thinking needed to catch the trick.
So now we cannot rally behind devastating photojournalism begging for our attention, like images of starving children in Gaza, without someone callously and baselessly calling the image a fake or a hoax. Because this image could’ve been made by AI, who’s to say it hasn’t been? The thinking goes both ways.
This is in stark contrast to even just a decade ago. Ritchin recalls the last time photojournalism galvanized real movement for change: when Alan Kurdi, a very young Syrian child, drowned while trying to escape with his family and washed up, face down, on a Turkish beach. Because of that photo, Ritchin writes, Save the Children saw a 70% increase in outreach from people wanting to donate time, money, clothes, or food. Donations to the Migrant Offshore Aid Station shot up 15-fold in 24 hours. The Swedish Red Cross, which had just set up a fund specifically for Syrian refugees, saw a more than 100-fold increase in its mean number of daily donations.
The documentation of real-life horrors through photography has been one of our most powerful tools to plead for change, crack the shell of apathy, and invigorate action. Humans, after all, are highly visual creatures. But the tools are dissolving in our hands. Our visual nature becomes our biggest vulnerability. And so we sit on isolated thrones of immovable beliefs, beliefs that can be corroborated ad infinitum by a stream of artificial “evidence.”
All of this, and we haven’t even touched upon AI models’ tendencies toward sexism, racism, or revisionist histories (or AI’s environmental harms). Perhaps that’s for another essay.
My lasting feelings on this subject, however, can be summarized by two quotes Ritchin includes in his book. The first is from Hungarian artist László Moholy-Nagy who said, “The illiterate of the future will be the person ignorant of the use of the camera as well as the pen.”
The second is from Hannah Arendt’s “The Origins of Totalitarianism”:
If you want to destroy people’s ability to resist control, you must destroy the distinction between truth and lies, because if you can’t believe anything, you can’t act.
I know that I, personally, am desperate for ways to improve my literacy around AI and its implications for our sense of shared reality and trust. And I truly think the stakes are high should we choose to ignore the need for broad, society-wide awareness and education on the subject. Because none of us are entitled to an alternate version of reality, but when someone does present me with one, I’d like to know it when I see it.
A post-script on image manipulation and journalism: If the use of AI for image generation sits mid-slope on the curve of photo manipulation, it’s clear that the media industry needs stronger, more explicit standards for what kinds of image manipulation are allowable. While he stops short of offering any kind of panacea, Ritchin does offer a few guiding principles (consistent with the AP’s standards):
Minor modifications — modestly changing the contrast, cropping the image, cleaning up digital “noise” — would be allowed. This would be equivalent to the latitude given to a writer to modify a quote by leaving out ums or uhs, deciding where to begin and end the quote, or to use an ellipsis to indicate words that have been removed from within a phrase. And in the same way that responsible writers of nonfiction are unable to insert words that a person did not say, a nonfiction photographer would not be allowed to add or subtract visual elements from a photograph.
In other words, photo editing done for clarity and understandability is fine. But fabrication is not. To spell it out just one step further, I’d say that AI-generated imagery has no place (and should never have a place) in journalism or documentary, even if the training data is based on real-life photographs or videos, because the prompting process is akin to directing someone to pose, or to fabricating a quote from things a source has said in the past. It’s fiction.
In the meantime, here are other things I’ve been thinking about:
The 1966 cult classic “Valley of the Dolls” by Jacqueline Susann. An exciting and eventful novel that flashes by in a snap. Though it might seem like chick lit, it’s a fascinating tableau of womanhood and showbiz in the 1940s and 1950s.
Korean reality competition television is the gift that keeps on giving. If you haven’t seen Culinary Class Wars, Devil’s Plan, Siren: Survive the Island, or Physical: 100, literally what are you doing? Speaking of that last one, the latest iteration of the fitness competition show is Physical: Asia, where various nations send teams of their top athletes to compete in an ultimate test of physical prowess. It’s like if the Olympics consisted of playground games on steroids. Fun!
I cannot stop thinking about this piece about aphantasia/hyperphantasia by Larissa MacFarquhar in The New Yorker. It’s such a thorough, considered, and fascinating breakdown of the research on mental imagery, and all the implications this work has for how we understand memory and what it means to process our own lives. It’s so good it fills me with envy!
And lastly, as a treat, a short-ish (~1 hour) playlist of my latest sonic fixations, in order of vaguely increasing energy:
Every single one of these tracks is a 10/10! The Sarah McLachlan and Boy Harsher tracks have been especially sticky in my brain, only occasionally beaten out by “Golden” from K-Pop Demon Hunters (if anyone has tips for exorcizing it from my brain, please share, I need relief).
Until next time!


