YouTube's chief product officer Neal Mohan on how the platform is dealing with fake news videos in the UK

YouTube says changes to its recommendation algorithm are reducing harmful content online, but it still has a long way to go
YouTube's chief product officer Neal Mohan at YouTube Space in King's Cross
Matt Writtle
Amelia Heathman, 15 August 2019

It's a difficult time to be running a social media platform. People are dropping off Facebook, 8chan is offline following accusations that it contributed to a mass shooting by a suspected white supremacist in Texas, and YouTube seems to be battling fires weekly, from paedophiles using comments to point one another to videos of young children, to the neo-Nazi content that plagues the video platform.

Despite this, YouTube's chief product officer Neal Mohan is upbeat when I meet him at YouTube Space near King’s Cross to talk about the work the company is doing to stop its algorithm promoting “borderline”, potentially harmful content on the platform.

This algorithm has taken a fair beating recently. YouTube’s target of one billion hours watched every day, set by CEO Susan Wojcicki and achieved back in 2017, meant the algorithm was designed to keep people watching. Divisive content, whether from anti-vaxxers, flat-Earthers or other conspiracy theorists, has grown and grown. Now it’s time to change, says Mohan.

“This notion of our responsibility as a global platform has been a top priority for us for the last couple of years. We’ve introduced more than 30 policy changes, we have moderators rating videos based on the changes. We’ve built up dozens of machine-learning systems to detect content that might potentially violate the new policy changes we’ve developed.”

He says the company is starting to see some success. Tweaks to its Up Next algorithm in the US have led to a 50 per cent reduction in watch time of this harmful content coming from recommendations, according to YouTube’s figures. The same technology is being rolled out in the UK this week.
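For readers curious about the mechanics, the pipeline Mohan describes (classifiers scoring videos, the recommender demoting high scorers) can be sketched in a few lines. Everything below is illustrative assumption only: the class, field names, threshold and demotion factor are invented for the example, not YouTube's actual system.

```python
# Hypothetical sketch of demoting "borderline" videos in an Up Next-style
# ranker. All names and numbers here are assumptions, not YouTube's system.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    relevance: float          # base ranking score from the recommender
    borderline_score: float   # 0.0-1.0 output of a content classifier

BORDERLINE_THRESHOLD = 0.8    # assumed cut-off above which a video is demoted
DEMOTION_FACTOR = 0.1         # assumed penalty applied to the ranking score

def rank_up_next(candidates: list[Video]) -> list[Video]:
    """Order candidates, heavily down-weighting likely borderline content."""
    def adjusted(v: Video) -> float:
        if v.borderline_score >= BORDERLINE_THRESHOLD:
            return v.relevance * DEMOTION_FACTOR
        return v.relevance
    return sorted(candidates, key=adjusted, reverse=True)

candidates = [
    Video("moon_landing_hoax", relevance=0.9, borderline_score=0.95),
    Video("apollo_documentary", relevance=0.7, borderline_score=0.05),
]
for v in rank_up_next(candidates):
    print(v.video_id)  # "apollo_documentary" now outranks the hoax video
```

The key design point is that borderline videos are not removed, only pushed down the ranking, which matches Mohan's distinction between content that violates policy and content that merely sits near the line.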

To help it determine what counts as borderline or harmful content, YouTube engaged hundreds of ordinary people in the UK in a research exercise. Provided with the firm’s community guidelines, they were asked to watch videos and answer questions about them. The same process is being rolled out country by country, language by language. “The ambition is to make it so that the improvements to our recommendation algorithms eventually cover all of our users across the globe,” says Mohan.
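Purely as an illustration of how a rater exercise like this typically feeds a classifier, here is a hedged sketch: raters' guideline-based answers are averaged into per-video training labels. The question scale, minimum-rater rule and 0.7 threshold are all assumptions for the example, not details YouTube has disclosed.

```python
# Illustrative only: turning human raters' questionnaire answers into
# training labels for a borderline-content classifier.
from collections import defaultdict
from statistics import mean

# Each tuple is (video_id, rater's answer on a 0-1 scale to a question such
# as "Does this video promote a harmful conspiracy theory?").
rater_answers = [
    ("video_a", 0.9), ("video_a", 0.8), ("video_a", 1.0),
    ("video_b", 0.1), ("video_b", 0.0), ("video_b", 0.2),
]

def aggregate_labels(answers, min_raters=3, threshold=0.7):
    """Average per-video scores; keep only videos with enough ratings."""
    scores = defaultdict(list)
    for video_id, score in answers:
        scores[video_id].append(score)
    return {
        vid: mean(vals) >= threshold  # True = treat as borderline
        for vid, vals in scores.items()
        if len(vals) >= min_raters
    }

print(aggregate_labels(rater_answers))  # {'video_a': True, 'video_b': False}
```

Requiring several raters per video before a label counts is what makes the country-by-country, language-by-language rollout slow: the labels only transfer once enough local judgments have been collected.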

The implementation takes a long time, given that 500 hours of new video are uploaded to YouTube every minute. The underlying tech framework took about a year to develop and has to be adjusted in line with regional feedback. Which videos are affected? Those in the UK, Mohan says, are along the same lines as in the US. He cites subjects such as moon-landing conspiracies and anti-vaccination stories.

Another task is to ensure more information is offered about who made a video, and its contents. Click on BBC footage and you’ll be greeted with a new box stating it is a “British public broadcast service”, linking out to a Wikipedia page. A video from a parent talking about why they don’t want to vaccinate their child may be accompanied by a box explaining the anti-vaccine controversy. “This is so a user can make an informed decision on their own,” says Mohan. Over time more background information will be linked to more videos.
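As a rough sketch of how such context boxes could be wired up, the toy lookup below attaches a publisher or topic panel to a video. The table contents, priority rule and function names are hypothetical; YouTube has not published how its information panels are implemented.

```python
# Hypothetical context-panel lookup. The tables and priority order are
# invented for illustration, not YouTube's actual implementation.
from typing import Optional

PUBLISHER_PANELS = {
    "BBC": "BBC is a British public broadcast service (en.wikipedia.org/wiki/BBC)",
}
TOPIC_PANELS = {
    "vaccination": "Learn more about the vaccine controversy.",
}

def context_panel(channel: str, topics: list[str]) -> Optional[str]:
    """Return a context box for a video, preferring publisher over topic info."""
    if channel in PUBLISHER_PANELS:
        return PUBLISHER_PANELS[channel]
    for topic in topics:
        if topic in TOPIC_PANELS:
            return TOPIC_PANELS[topic]
    return None  # most videos get no panel at all

print(context_panel("BBC", ["news"]))
print(context_panel("ParentVlog", ["vaccination"]))
```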

It all marks a fundamental shift in how YouTube works: in some cases the platform now points people elsewhere, meaning they spend less time on it. “The changes to some principles — or incorporation of new principles — are all things we will do in service of making sure that we live up to our responsibility,” Mohan adds.

YouTube says this type of video makes up less than one per cent of what is on the platform — though it takes up most of Mohan’s time. A tricky job? “It keeps me and my teams busy.” How does he relax? “I enjoy my job because I’m a technologist at heart, but I love all forms of media — sports, music, the other 99 per cent on YouTube. Consuming some of that is my way of de-stressing.”