© 2025 WNIJ and WNIU
Northern Public Radio
801 N 1st St.
DeKalb, IL 60115
815-753-9000
AI video slop is everywhere. Take our quiz and try to spot it

You're lazing around the living room after a big holiday meal when your uncle starts flipping through vertical videos. "Did you see the one of the cat snatching that snake out of a dude's bed?" he asks.

Is it real? Is it fake? You feel a headache coming on.

"We're being overrun by slop," said Mike Caulfield, a co-author of the book Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online. "It just floods the zone and at some point your mental faculties are just exhausted."

But Caulfield and other experts say you don't need to give in to despair, at least not yet. There are a few simple dos and don'ts you can use to evaluate the authenticity of what you see online.

Don't assume everything is fake

With so much slop in our feeds, it's easy to think that everything you see online is fake. But that bias is just as dangerous as believing everything you see is real, warned Kolina Koltai, a senior investigator at Bellingcat, an organization that specializes in open-source investigations.

Bystander videos remain an extremely important source of evidence for misdeeds by individuals and law enforcement. When people stop believing those videos, researchers call it the "liar's dividend" because it makes it easier for bad actors to claim real events are fake to evade responsibility.

"I think that's one of the bigger risks when it comes to this kind of content," said Koltai. "It's not that someone's going to believe a fake video, it's that people won't believe real videos."

Koltai and others say it's especially important to carefully consider videos that provoke a strong emotional response or run contrary to your beliefs. Real videos often capture complicated situations that challenge our understanding of the world. That said, many fake videos are designed to do exactly the same thing, in order to drive engagement.

Do pay attention to some simple features of the video

AI-generated videos are already excellent and improving quickly, said Hany Farid, a professor at the University of California, Berkeley who studies manipulated media. Even experts can be duped, he said. "I've been doing this every day for a long time, and it's really hard. It's really hard."

But there are some fairly straightforward features that can clue you in to whether the content you're watching might be AI. The biggest tip is the video length.

Most companies limit the length of AI videos "because making these videos is computationally very expensive," Farid said. Many videos end up being just 8-10 seconds long. While it is possible to cobble together a longer video from a bunch of short cuts, "when you see those little bite-sized videos, it's a good indication that you should take a breath."

Length isn't the only thing; AI-generated videos tend to perfectly frame their subject, said Farid. The main characters in the video are prominently featured, and the action starts and stops cleanly, even if the video is short. That's one reason the quiz video of the New York City police officer shouting at ICE agents is obviously fake.

"It has that kind of almost professional look to it," he said. The position of the camera can also be strange: Is it too close to the subjects of, say, an ICE raid? Does it move too smoothly to follow a running animal, as if it's on a gimbal? Those could be clues that a video is generated by AI.

Do check the context

The features of a video are important, but the place it's being shared can sometimes matter even more, said Caulfield.

Checking where a video was posted and even simply looking at the comments can provide powerful clues. For example, the second video of an ICE raid in the quiz came from a Reddit community for the Logan Square neighborhood of Chicago.

Similarly, look at who posted the video. If their feed contains other sorts of content beyond just immigration raids, it lends credibility to the idea that they witnessed the raid. "It might be easy to fake a video, but it's hard to get into a time machine and build yourself ten years of history talking about Chicago hot dogs," Caulfield said.

If you're unsure whether the video you're seeing was originally posted by the account you're watching, try a simple reverse image search on Google or another platform. Often such searches can turn up the original post, other videos from the same event or news reports that either confirm or refute the video. Both the Logan Square ICE raid and the moose eating popcorn were reported by the media at the time they happened.

Conversely, identifying an AI video is often as simple as looking carefully at the account that posted it. It's common for accounts to identify their content as AI-generated in their profile description, said Koltai. Even if they don't, checking the comments can often reveal that many people believe a video is AI-generated.

Don't feel like you have to share it, especially if you have doubts

Finally, all three researchers agree that in an age where algorithms reward speed over accuracy, sharing is not really caring.

Much of the AI content being shared online is engagement bait, said Koltai. Its creators "often have a monetary incentive to get you to like, comment, and share, because it often results in them making more money," she said.

When in doubt, Caulfield said, the best thing may be to wait. "You don't necessarily have to be the first one to share this thing, you can be the person who waits," he said. Often within a matter of hours, a video of an event will be confirmed by corroborating videos or news reports.

Many people might not think it matters whether you share an AI video of bunnies jumping on a trampoline, or a cat snatching a snake from its owner's bed, but experts agree that it does. When people are duped by AI videos, it erodes faith in the videos that matter.

"People are like, oh, is it really that big a deal?" Koltai said. But she argued that everyone should be worried: "If we are unable to tell what's real and what's unreal online? That to me is really incredibly dangerous."

Farid agreed: "Every one of those likes, clicks, shares, engagements, you're part of the problem at this point," he said.

Copyright 2025 NPR

Geoff Brumfiel works as a senior editor and correspondent on NPR's science desk. His editing duties include science and space, while his reporting focuses on the intersection of science and national security.
Sanidhya Sharma