MCI proposes filters, tools to limit interaction with children on social media


SINGAPORE - Tools that allow parents and guardians to limit who can contact and interact with their children on social media, and filters that restrict what they can view, are among the measures proposed by the authorities here to better tackle online harms.

In a consultation paper launched on Wednesday (July 13), the Ministry of Communications and Information (MCI) proposed that the tools be activated by default for services that allow users below the age of 18 to sign up for an account.

"The services could provide warnings to young users and parents/guardians of young users of the implications when they choose to weaken the settings," said MCI in consultation documents.

Under its proposed Code of Practice for Online Safety and the Content Code for Social Media Services, social media platforms should also push relevant information like helpline numbers and counselling services to users who search for high-risk content, including those related to self-harm and suicide.

Members of the public can provide their views on the consultation website until Aug 10, and the ministry will later publish a summary of the key feedback received along with its response.

The ministry also wants to empower the Infocomm Media Development Authority (IMDA) to direct any social media platform to disable access to specified harmful content for users in Singapore, or to disallow specified online accounts on the platform from communicating or interacting with users in Singapore.

The proposed codes require social media platforms to implement community standards for six types of content: sexual content, violent content, self-harm content, cyber bullying content, content that endangers public health and content that facilitates vice and organised crime.

The platforms will be expected to moderate users' exposure to such content, or disable access to it, when users report it.

The reporting process should be easy to access and use, and platforms should assess and take appropriate action “in a timely and diligent manner”.

Platforms will also be required to proactively detect and remove child sexual exploitation and abuse material as well as terrorism content.

MCI on Wednesday also offered more details on its definition of online harms that will be covered by the codes when they come into effect, including additional standards for young users.

For example, the definition of harmful sexual content for all users covers content that depicts explicit sexual activities as well as content that depicts or promotes deviant sexual behaviour such as incest, bestiality, necrophilia and paedophilia.

It also covers content "relating to or encouraging sexual offences under the Penal Code, the Children and Young Persons Act, and the Women's Charter".

These include distribution of child sexual abuse material, voyeuristic and intimate images distributed without consent, sexual communication with a minor and content encouraging sexual assault.

For young users, harmful sexual content also encompasses content that depicts any sexual activity, even in a fictional context.

It also covers content with implied or obscured depiction of sexual activities, content containing nudity in a sexual context and content containing the frequent use of sexual references or innuendoes intended for sexual gratification.

Another example relates to self-harm: for all users, harmful content is defined as content depicting graphic details of self-harm, such as wounds or injuries. For young users, this also includes content with implied or non-detailed depictions of self-harm, such as healed scars or blurred visuals.


MCI also classifies cyber bullying behaviour as harmful content covered under the codes of practice.

This includes content that is likely to mock, humiliate or cause embarrassment to the target person, such as edited content of a young person or child in an embarrassing situation with captions intended to mock and ridicule.

Negative statements or references about a young person or child on the basis of intellect, behaviour or physical attributes will also be covered.

Under the proposed codes, platform operators will be required to regularly publish reports on the effectiveness of their measures, including information on how prevalent harmful content is on their platforms, user reports they received and acted on, and the systems and processes they have in place to address such content.
