Fred Ritchin, 02 May 2025
Much criticism has been aimed at AI-generated images as being derivative, simplistic, racist and misogynistic, in large part due to their having been trained on masses of online images that are themselves highly compromised.
But what of the current use of photographs by conventional media outlets? Do these images distinguish themselves in their complexity, their recognition of nuance, their in-depth exploration of underlying issues? Or are they more of a repetitive catalogue of tropes — government and business leaders, demonstrators with signs, bombed-out buildings, flashy celebrities, flooded streets, sports heroes, etc.?
For the most part it seems that we no longer expect to be transformed, moved to action, by what we view in our newspapers and magazines.
This, of course, was not always the case. The 1957 photograph, for example, of a 15-year-old student named Dorothy Counts being jeered and spat upon while trying to integrate a previously all-white school (she had to leave after a few days when her safety could not be guaranteed) made racism explicit; it was awarded the World Press Photo of the Year.
The American writer James Baldwin saw this photograph (it was published on the front page of the New York Times under the headline “Soldiers and Jeering Whites Greet Negro Students”) and recounted how it caused him to return to the United States from France to write about civil rights. “There was unutterable pride, tension and anguish in that girl’s face as she approached the halls of learning, with history jeering at her back,” he said. “It made me furious. It filled me with both hatred and pity. And it made me ashamed. Some one of us should have been there with her.”
These days, however, rather than empathy and understanding, we are bombarded by a cascade of disorienting imagery, much of it increasingly weaponised by one faction or another. As a result, there may never have been a time when the ability of photography to intervene, asserting a sense of the actual, has been more necessary. But despite a maelstrom of challenging problems, a focus on the real is not a high priority for many of the billion-dollar corporations that produce the equipment and software. For them, photography’s function as a recording of the visible is all but over:
Scott Belsky, Adobe’s chief strategy officer, recently described generative artificial intelligence as “the new digital camera, and we have to embrace it.”
Isaac Reynolds, group product manager for Google’s Pixel Camera, declared that today’s photographer should be able to override the evidence of the photograph in pursuit of a depiction “that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond.”
Patrick Chomet, Samsung’s head of customer experience, suggested that “actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. You can try to define a real picture by saying, ‘I took that picture,’ but if you used AI to optimize the zoom, the autofocus, the scene—is it real? Or is it all filters? There is no real picture, full stop.”
Microsoft’s chief economist, Michael Schwarz, contrasting the emergence of artificial intelligence systems with the invention of the automobile, stated that we should wait until we see “meaningful harm” from AI before we regulate it. Comparing it to driver’s licenses that were introduced only after dozens of people were killed in accidents, he said “There has to be at least a little bit of harm so that we see what is the real problem.”
Or, summing up our current situation, Geoffrey A. Fowler, in an article entitled, “Your smartphone photos are totally fake – and you love it,” wrote in The Washington Post: “Think of your camera less as a reflection of reality and more [as] an AI trying to make you happy.”
Unsurprisingly, coincident with the diminished status of the photograph as a credible witness, AI-generated imagery is becoming preponderant, in part due to the ease of its production (one does not have to go anywhere to generate photorealistic imagery) but also due to its ability to configure the world as one wants it to be, a potential salve in this moment of global chaos. Now one can go back in time and add imagery that Robert Capa might have made during the D-Day invasion at Normandy, as Phillip Toledano did in his “We Are At War,” or imagine Iranian women as they might have looked had the Islamic Revolution not taken place, as in Ghazale Pourreza’s images; one might envision contemporary Russia without going there, as in Magnum photographer Carl De Keyzer’s new self-published book, “Putin’s Dream,” or the exodus from Cuba, as Michael Christopher Brown did in his project “90 Miles,” which he describes as a “post-photography AI reporting illustration experiment exploring historical events and realities of Cuban life”; or one might menacingly reconfigure Gaza as a Riviera-style beachfront property, as in the video Donald Trump shared, “GAZA 2025 WHATS NEXT?”, initially created by others as an experiment to test the technology.

AI-generated imagery allows for imagined scenarios far outside the reach of photography – depicting a potential future or distant past, life on other planets, dreams and thoughts. But it also allows for obscene and predatory imagery. In a New York Times article entitled “Why the White House Started Making Deportation Cartoons,” Peter C. Baker argues that “In recent months, this piece of the Trump presidency — its content strategy, as it were — has taken an especially dark turn. Trump was re-elected thanks in part to his promise to lead a crackdown on undocumented immigrants. But the promised wave of mass deportations hasn’t yet materialized… In the absence of an increase in actual deportations, the administration seems to have pursued an increase in deportation spectacles: images celebrating people’s expulsion from the country with a visceral glee expressed in the native idioms of internet culture.” And then he suggests, in a chilling parenthetical remark, that if “The Abu Ghraib photos leaked today, it’s possible to imagine that the White House would repost them approvingly.”
The empathy provoked by a photograph of a young girl intent on desegregating an American school in 1957 is very different from the response elicited by much of the imagery circulating today. An important challenge, then, is how to re-establish connections that are both meaningful and impactful in the media. The problem is not only the distortions evoked by AI-generated imagery, but the failure of photography and related media to help create a shared reality. Rather than cameras and software engineered to give us what we might want, perhaps they can be allowed to show us what is.
Fred Ritchin is a writer, educator and critic. Currently the Dean Emeritus of the International Center of Photography (ICP) School, he was previously professor of photography and imaging at New York University’s Tisch School of the Arts. He has worked as the picture editor of The New York Times Magazine (1978–1982) and created the first multimedia version of the New York Times newspaper (1994–95).
Fred Ritchin’s latest book, The Synthetic Eye: Photography Transformed in the Age of AI, was published by Thames & Hudson on February 27, 2025. In this timely work, Ritchin explores how artificial intelligence is reshaping photography, questioning the authenticity of images in an era of synthetic visuals. The book examines the ethical, historical, and future implications of AI in photography, offering a roadmap for understanding this rapidly evolving field.