As “All eyes on Rafah” circulated, Shayan Sardarizadeh, a journalist with BBC Verify, posted on X that it “has now become the most viral AI-generated image I’ve ever seen.” Ironic, then, that all those eyes on Rafah aren’t really seeing Rafah at all.
Establishing AI’s role in the act of news-spreading got fraught quickly. Meta, as NBC News pointed out this week, has made efforts to restrict political content on its platforms even as Instagram has become a “crucial outlet for Palestinian journalists.” The result is that actual footage from Rafah may be restricted as “graphic or violent content” while an AI image of tents can spread far and wide. People may want to see what’s happening on the ground in Gaza, but it’s an AI illustration that’s allowed to find its way to their feeds. It’s devastating.
Journalists, meanwhile, sit in the position of having their work fed into large language models. On Wednesday, Axios reported that Vox Media and The Atlantic had both made deals with OpenAI that would allow the ChatGPT maker to use their content to train its AI models. Writing in The Atlantic itself, Damon Beres called it a “devil’s bargain,” pointing out the copyright and ethical battles AI is currently fighting and noting that the technology has “not exactly felt like a friend to the news industry”—a statement that may one day itself find its way into a chatbot’s memory. Give it a few years and much of the information out there—most of what people “see”—won’t come from witness accounts or result from a human looking at evidence and applying critical thinking. It will be a facsimile of what humans reported, presented in whatever manner the machines deem appropriate.
Admittedly, this is drastic. As Beres noted, “generative AI could turn out to be fine,” but there is room for concern. On Thursday, WIRED published a massive report looking at how generative AI is being used in elections around the world. It highlighted everything from fake images of Donald Trump with Black voters to deepfake robocalls from President Biden. It’ll get updated throughout the year, and my guess is that it’ll be hard to keep up with all the misinformation that comes from AI generators. One image may have put eyes on Rafah, but it could just as easily put eyes on something false or misleading. AI can learn from humans, but it cannot, like Ut did, save people from the things they do to each other.
Loose Threads
Search is screwed. Like a stupid aughts Bond villain, The Algorithm has menaced internet users for years. You know what I’m talking about: the mysterious system that decides which X post, Instagram Reel, or TikTok you should see next. One such algorithm really took the spotlight this week, though: Google’s. After a few rough days during which the search giant’s “AI Overviews” got pummeled on social media for telling people to put glue on pizza and eat rocks (not at the same time), the company hustled to scrub the bad results. My colleague Lauren Goode has already written about the ways in which search—and the results it provides—is changing as we know it. But I’d like to proffer a different argument: Search is just kind of screwed. It seems like every query these days calls up a chatbot no one wants to talk to, and personally, I spent the better part of the week trying to find new ways to search that would pull up what I was actually looking for, rather than an Overview. Oh, and then there was that whole matter of 2,500 search-related documents getting leaked.