We face a new challenge in the digital sphere, thanks to AI-generated videos. Last week, OpenAI released Sora and Meta delivered Vibes. Both tools have unleashed an avalanche of AI-generated videos. Users, including some OpenAI employees on social media, have been revelling in their ability to create outlandish content involving real-life characters, a consequence of unusually lax rules set by OpenAI, the company behind Sora. That is despite the AI giant purporting to have rules designed to prevent IP infringement.
Fast Company contributing writer Chris Stokel-Walker has outlined why we are not ready for a world of AI-generated videos. This piece highlights some of his findings about this new attention economy.
He highlights that "social networks, which were once designed to connect us with one another, have now been subsumed by AI slop." He also makes the point that we are now confronted by a steady scroll of unreal and outlandish content, with not a single human involved. This, he says, has experts worried about our ability to distinguish fact from fiction, and about how such content can tamper with our temperaments. One expert indicates that "it isn't entirely surprising that businesses are effectively following the money as to what we've seen over the last 12 to 18 months, particularly in terms of AI generated video content."
Some of the most viewed videos on platforms like YouTube Shorts, traditionally home to human-only content, are now AI-generated.
At a surface level, it may be easy to dismiss these videos as just harmless fun. Scholars, however, are beginning to sound warning bells.
“The danger in sharing and enjoying AI images, even when people know they’re not real, is that people will now have to chase more fictional, manipulated media to get that feeling,” says Jessica Maddox, an associate professor of media studies at the University of Georgia.
At the same time, with the apps in question explicitly saying there are few, if any, guardrails around copyrighted content, and only limited ones around the type of content that can be created, there are real risks of polluting our pools of content for years to come. Some suggest that we’re ill-equipped to deal with the problem, in part because what we consider ‘real’ images haven’t been real for a while, thanks to the volume of preprocessing that takes place in the milliseconds between clicking the shutter on your smartphone and the image being saved in your camera roll.
According to some academics, even fake-detection tools are not very effective under current circumstances. Detection tools are trained on ground-truth images, and the gap between the quality of those training images and the average smartphone snap is now so significant as to render the tools all but useless.
“It’s going to be really hard to filter out AI content,” says Janis Keuper. “It’s really hard in text. It’s really hard in images as the generators become better and better. And well, we’ve been looking at AI generated images for a while now.” The images detectors were trained on, he says, resemble today’s photographs about as closely as the output of early 20th-century cameras does.
However, what is different with the advent of Vibes and Sora is that they explicitly say they want AI content first—and usually foremost. “Meta Vibes is perfectly named for the problem of AI slop,” says Maddox.
In a world where the only limit is our imagination, it doesn’t matter whether an image represents anything close to reality. It’s akin to “alternative facts”: no matter how outlandish the video, it feels legitimate. All it has to do is reinforce our viewpoint.
That bleeds through to how people commonly react to AI-generated content, says Maddox.
“People will say, ‘But I agree with what it’s trying to say, whether it’s real or not,’” she says. And that’s proof positive of what’s going on. “AI is vibes only,” she says. “Unfortunately, that means something like Meta Vibes is likely to be incredibly successful with Meta’s audience that seems to love AI imagery. It won’t matter, because with AI, feelings reign supreme.”
And that’s what worries the experts the most. The apps are being foisted on users, but may well succeed, in part because we’re already susceptible to the persuasive power of content to move us.
“Reality is one now where authentic and synthetic collapses, right?” says Ajder. “People have authentic experiences—that is, experiences that move them, change their beliefs, change their relationships, change their opinions—with AI. They’re influenced by virtual companions, via chatbots, and by AI-generated disinformation content around war zones and conflicts.”
Ajder doesn’t believe Meta and OpenAI are thinking about the emotional response to AI. “The idea is less passionate,” he says. “It’s more market driven. These kinds of videos are cheap to make. They’re quick to make. We can scale them easily, and they get engagement, they get views, they get clicks.”
But beyond stripping the “social” from social media, the second- and third-order ramifications of driving an AI-powered attention economy could have consequences more significant than keeping us scrolling.
Wesley Diphoko is a Technology Analyst and Editor-in-Chief of Fast Company (South Africa) magazine.
*** The views expressed here do not necessarily represent those of Independent Media or IOL.
BUSINESS REPORT