By Catherine Thorbecke
Keeping young people safe online is a rallying cry we can all get behind. But the furor in Australia over plans to ban children under 16 from social media shows it’s not as simple as it sounds.
At the center of the latest debate over the rules set to come into effect later this year is Alphabet Inc.’s YouTube. Officials last week said they were reversing a promised exemption from the legislation for the video-sharing site. Part of the reason YouTube’s inclusion has struck such a nerve is that it’s impossible to overstate how intertwined the platform has become with pop culture for the generation that grew up on it.
Australia’s online regulator said it was the most-used social media platform for the nation’s youth (and claimed it’s the biggest source of online harms, from misogynistic content to violent videos). But it has also been a springboard for young creators, especially from marginalized groups, to find community and even launch multi-million-dollar careers. YouTube is where Grammy-nominated singer Troye Sivan first gained popularity, posting covers of pop songs and vlogs as a teen in Perth — and where he publicly came out as gay in 2013.
All social media platforms have risks, and global lawmakers can’t afford to ignore them. But banning young people from participating in online life doesn’t eliminate these threats. If anything, Australia’s debate over blanket age bans distracts from the difficult policy work required to hold tech companies accountable for keeping teens safe.
Research has shown that hardline age limits aren’t effective at preventing online harm to developing minds, as they ignore individual differences in maturity and the positive uses these platforms can offer. They might keep some 15-year-olds away from toxic algorithms and dangerous rabbit holes, but they don’t address the root causes of those harms, leaving the same teens vulnerable the moment they turn 16.
Tech-savvy young people will likely find ways around the ban. Norway already has a minimum age limit of 13 for social media — but found that 72 per cent of 11-year-olds are still logging on. Recent age-verification legislation in the UK has exposed even more creative ways to bend the rules, such as using video game avatars to beat facial recognition tools. It would be naïve to think that these age restrictions will do much more than encourage teens to either lie or continue to use YouTube without making an account. How would that protect them from its risks?
Data from the US reveals that YouTube is the most-watched streaming service on television, beating out juggernauts like Walt Disney Co., Netflix Inc. and NBCUniversal. YouTube’s strength in allowing anyone to participate in the modern media ecosystem has proven a huge hit among young viewers, but it can also be a vulnerability. It’s far less common to hear anecdotes of people being radicalized into White supremacy after bingeing Netflix.
The reality is more nuanced. A 2022 study led by researchers at Dartmouth College found that it was exceedingly rare for YouTube’s algorithm to suggest extremist content to users who were not already seeking similarly harmful material. That doesn’t mean bigoted or conspiracy-theory-laden videos don’t exist on the platform, only that users are highly unlikely to stumble upon them while watching cat videos or makeup tutorials.
More importantly, that study came after YouTube bowed to pressure and made sweeping changes to its recommendation system in 2020. Similarly, YouTube Kids was launched in 2015 to make the platform safer for users under 12 — an acknowledgment that children will log on anyway, and that there is a vast amount of positive, educational content available. It shows that it is possible to force tech companies to clean up their act.
In an interview supporting YouTube’s inclusion in the teen social media ban, one Australian lawmaker said the government’s goal was to “protect children, to protect our most vulnerable, but also to assist parents.” It’s a great soundbite that will likely appeal to plenty of exhausted caregivers. But in the same clip, the representative admitted that he himself had used YouTube in high school to learn advanced math concepts.
Global policymakers must push social media platforms to act on the risks lurking on their sites. Regulators can start by demanding that tech companies give outside academics more transparency into how their algorithms work. Caregivers, educators, researchers and other stakeholders can then recommend targeted solutions that don’t simply strip young people of their right to access information and express themselves. There are legitimate concerns, especially around the addictive nature of these platforms, where regulation can make a real difference.
Lawmakers must also keep the pressure on tech companies. Following news of Australia’s ban, Meta Platforms Inc. launched new privacy settings for teens on Instagram, and YouTube announced strengthened protections for US users under 18. The onus must be on these tech firms to use their vast resources to make the internet safer for children.
Parents and the next generation deserve more than Band-Aid solutions. Pretending teens won’t access YouTube is easy; governing is harder.