Predators use Twitch to record and share child abuse

Twitch hosts more than 96,000 live channels and seven million monthly broadcasters. PHOTO: AFP

NEW YORK – In the spring of 2023, a 12-year-old boy went live on Twitch, the popular live streaming site owned by Amazon.com, to eat a sandwich and play his French horn. Minutes later, about a dozen viewers joined him. Through Twitch’s chat, one viewer asked him to do a somersault. Another requested that he show his muscles.

In response, the boy pulled his pants down. The whole thing ended in an instant, but one viewer, who was following more than a hundred other Twitch accounts appearing to belong to children, used a feature called “clips” to capture the fleeting moment in a 20-second video. The resulting clip has since been viewed more than 130 times.

Twitch is planning to expand the “clips” function in 2024.

As part of the previously announced build-out, the company will be encouraging more users to turn ephemeral live streaming events into short, on-demand videos for display on a soon-to-be-launched discovery feed – or for easy export to other social networks, including TikTok.

Now, ahead of the move, online safety experts are warning that the tool is already being exploited by child predators to record and share the abuse of underage Twitch users. 

An analysis by Bloomberg News of nearly 1,100 clips on Twitch found that at least 83 of the short videos contain sexualised content involving children.

The Canadian Centre for Child Protection, which reviewed the material, identified 34 that depict young users – primarily boys between the ages of five and 12 – showing genitalia to the camera, often apparently following the encouragement of viewers during a live stream.

Another 49 videos included sexualised content involving minors exposing body parts or being subjected to grooming efforts. 

According to the centre, the 34 most egregious clips have been viewed a combined 2,700 times. The other 49 explicit videos, the centre noted, have been watched 7,300 times.

When a viewer captures a live stream involving predation, it “becomes an almost permanent record of that sexual abuse”, said Mr Stephen Sauer, the centre’s director. 

“There’s a broader victimisation that occurs once the initial live stream and grooming incident has happened because of the possibility of further distribution of this material,” he added.

Once Bloomberg alerted the company, Twitch removed the prohibited content.

“Youth harm, anywhere online, is deeply disturbing,” Twitch’s chief executive Dan Clancy said in a statement.

“Even one instance is too many, and we take this issue extremely seriously.”

Twitch initially launched the clips function in 2016.

Over the past year, facing greater competition from ByteDance’s short-form video site TikTok, Twitch’s product team has made the expansion of clips a major focus. 

At the same time, the clips feature has remained among the least moderated on the site, according to people familiar with the safety protocols, who asked for anonymity while discussing the inner workings of the company.

In the statement, Mr Clancy said “combating child predation meaningfully” requires collaboration.

He noted that Twitch is partnering with various agencies to do so and has “made significant progress”.

By continuously screening the live content on Twitch, Mr Clancy said, the company is preventing “the creation and spread of harmful clips at the source”.

Twitch is also working retroactively to “delete and disable” harmful clips, while making sure such videos “aren’t available through public domains or other direct links”.

“Like all other online services, this problem is one that we’ll continue to fight diligently,” he added. 

Over the years, company executives have struggled to stamp out child grooming on Twitch, which hosts over 96,000 live channels and seven million monthly broadcasters.

In 2022, Bloomberg News reported that child predators routinely use Twitch to track kids in real time, with more than 297,000 children apparently targeted.

Since the report, Twitch has announced several changes.

The company introduced a new phone-verification requirement and is developing technology to catch and terminate accounts belonging to kids under 13.

The company has also made it harder for minors who have been previously banned to create new accounts.

Twitch uses language analysis tools to detect child grooming and artificial intelligence (AI) technology to flag nudity. 

In April 2023, Twitch laid off at least 15 per cent of its internal trust and safety team and increased its reliance on outside providers to identify and remove problematic content.

At the moment, Twitch focuses most of its monitoring efforts on its live streams, which the company says are reviewed using human moderators, AI and other tools.

By contrast, when it comes to moderating clips, Twitch relies solely on its users to report instances of suspicious or upsetting material. 

In the last quarter of 2022, Twitch detected 318 instances a day of ostensible child grooming, a fifth of which led to account suspensions, according to a recent report by Australia’s eSafety Commissioner. 

Experts say moderation challenges are intrinsic to Twitch’s design.

Typically, social media sites such as YouTube and Instagram can identify when users are attempting to upload abusive videos by comparing new content with previous sources featuring similar material. On Twitch’s live streams, child predation happens in real time, so there are no prior videos to compare with.

“Hash technology looks for something that’s a match to something seen previously,” said Ms Lauren Coffren of the United States National Centre for Missing and Exploited Children.

“Live streaming means it’s brand new.” 

In April 2023, a bipartisan group of US senators introduced the Protecting Kids on Social Media Act, a Bill aiming to improve children’s safety on sites such as Twitch that feature user-generated content. 

Mr Sauer said social media companies can no longer be trusted to regulate themselves.

“We’ve been on the sidelines watching the industry do voluntary regulation for 25 years now,” he said.

“We know it’s just not working. We see far too many kids being exploited on these platforms. And we want to see government step in and say, ‘These are the safeguards you have to put in place.’” BLOOMBERG
