How an AI Snafu and a Video of Flowers Nearly Got Me Kicked Off YouTube
AI moderation run amok
The California Golden Poppy is my favorite flower.
Paper-thin, aggressively orange, and nearly impossible to use in a bouquet (the instant you cut the flowers, they begin to shrivel, and are gone within hours), the Golden Poppy appears at random on hillsides around my Bay Area home in the spring.
When I saw a little stand of these dramatic flowers last year, I took a quick vertical video of it and posted it to YouTube Shorts.
That’s when all hell broke loose.
Within a few minutes, I received a sternly worded message from YouTube, indicating that my channel had received a Content Moderation Strike.
Apparently, I had posted a video of Drug Paraphernalia. That’s against YouTube’s Content Guidelines. It’s the kind of action that can cost creators their channel.
I was confused. My video was an innocent, five-second clip of some flowers. What was going on?
I did some digging, and it turns out YouTube’s AI-powered content moderation was to blame.
YouTube receives an obscene number of submissions—millions of new videos per day. They can’t possibly have humans review all of these. So, they use AI to take a first look.
YouTube’s AI is pretty good at spotting obvious policy violations, like violence or obscene content. But it makes a ton of mistakes.
The Verge reported that in 2020, YouTube incorrectly removed at least 160,000 videos that had been flagged by the company’s AI.
My video, it seems, got caught up in that AI filter. Why?
As a hilarious early essay by Michael Pollan explores, poppies exist in a legal grey area in the United States. Technically, they can indeed be used to make illegal drugs. But if they’re grown for ornamental purposes, they’re perfectly legal.
Any human would look at my video and immediately go, “Ah, he posted a video of a beautiful flower he found growing in a park.”
But with AI, it’s a different story. YouTube’s algorithms looked at my video, saw poppies, and immediately screamed “DRUGS!!”
I was lucky: my channel is otherwise squeaky clean. I’ve never had a content strike, so YouTube basically let me off with a warning for posting this very scandalous video of some orange flowers.
If my record weren’t otherwise spotless, or if the AI had really felt my video was egregious, I would have faced a two-week suspension or even the loss of my channel.
Those kinds of false positives are a huge issue for creators. But with YouTube, at least the platform told me why my content was being suppressed.
I’ve noticed that when I publish stories elsewhere that talk about wildflower superblooms and show photos of poppies, they never do well. Likewise, these otherwise beautiful flowers rarely show up on social media, unless they’re an unidentifiable part of a larger hillside scene or the like.
My guess is that social platforms’ content moderation algorithms—similarly equipped with visual AI—have been tuned to suppress these “drug related” images.
The end result is that, because of badly trained AI, a whole family of perfectly innocent flowers is suppressed from public view.
Why it Matters
Who cares? They’re just flowers, right?
Yes, my inability to post content about my favorite flower is only a minor tragedy. But the implications of AI-related moderation errors are much bigger.
What if AI algorithms are making similar errors around potentially far more important content?
What if they’re inadvertently suppressing news stories, political content, or other mission-critical information, based on the words those stories contain, or some other “suspicious” factor?
Or, what if the AI decided that a certain person needed to be suppressed, based on their name, their face, or some spurious correlation between them and something the AI deemed offensive?
They could see their visibility drop across the internet, as algorithms selectively weeded them out of online existence.
And crucially, they’d have no idea it was happening. Again, YouTube was kind enough to inform me of the idiotic decision its AI had made about my flower video. Most platforms aren’t so kind. They’ll cheerfully shadow ban such content without creators having any idea that it’s happening.
I’ve seen similar things happen with other content I’ve published. I once wrote a story about an astronaut toilet at a space museum for my Bay Area news site. For weeks after publishing it, my content stopped appearing in Google Discover. My best guess is that Google’s AI saw the word “toilet” in the headline and decided to suppress my website for a while, lest it inadvertently share something offensive.
As creators, we need the kind of transparency that YouTube provides. If AI is going to make dumb errors, we should at least know about them.
Ideally, we’d also have humans in the loop, checking the AI’s work to make sure legitimate content and legitimate people aren’t getting suppressed.
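For the curious, here’s a rough sketch of what that could look like in code. To be clear, this is not YouTube’s actual system; the function names and thresholds below are invented for illustration. The idea is simple: let the AI act on its own only when it’s very confident, and route borderline cases to a person.

```python
# Illustrative sketch only: a toy moderation pipeline. All names and
# thresholds here are hypothetical, not YouTube's real implementation.

AUTO_REMOVE_THRESHOLD = 0.95   # very confident: act automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # borderline: route to a person

def moderate(video_id: str, drug_score: float) -> str:
    """Decide what happens to a video, given a classifier's confidence
    (0.0 to 1.0) that it shows prohibited drug content."""
    if drug_score >= AUTO_REMOVE_THRESHOLD:
        return f"{video_id}: removed automatically"
    if drug_score >= HUMAN_REVIEW_THRESHOLD:
        return f"{video_id}: queued for human review"
    return f"{video_id}: published"

# A poppy clip might score high on a classifier trained mostly on opium
# imagery. With the middle band, a reviewer sees a harmless wildflower
# and approves it; without it, the video is struck outright.
print(moderate("golden_poppy_short", 0.72))  # queued for human review
```

The point is that middle band: the cheap, scalable AI still screens millions of uploads, but it only acts unilaterally when it’s very sure, and a human gets to catch the poppies.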
In the meantime, I’m self-censoring. You’ll notice that this email doesn’t contain a single photo of golden poppies. That’s because I want it to actually reach your inbox.