
UK Regulator Tells Platforms to 'Tame Toxic Algorithms'

Ofcom Prepares to Enforce the Online Safety Act

The British media regulator has called on online platforms, including search engines, to roll out safety measures for their recommendation algorithms in order to protect children online.


The Office of Communications, better known as Ofcom, on Wednesday urged online intermediaries, which include end-to-end encrypted platforms such as WhatsApp, to "tame toxic algorithms."

Ensuring recommender systems "do not operate to harm children" is among the measures the regulator proposed in draft regulations enacting the Online Safety Act, legislation the Conservative government approved in 2023 that is intended to limit children's exposure to damaging online content (see: UK Parliament Approves Online Safety Bill).

The law empowers the regulator to order online intermediaries to identify and restrict pornographic or self-harm content. It also provides for criminal prosecution of those who send harmful or threatening communications.

Instagram, YouTube, Google and Facebook are among the roughly 100,000 web services that come under the scope of the regulation and are likely to be affected by the new requirements.

"Any service which operates a recommender system and is at higher risk of harmful content should identify who their child users are and configure their algorithms to filter out the most harmful content from children's feeds and reduce the visibility of other harmful content," Ofcom said.

Under the draft code, affected service providers would need to adopt three safety measures for recommendation algorithms: ensuring the algorithms do not recommend harmful or pornographic content, reducing such content's prominence in recommender feeds, and enabling children to give negative feedback on content recommended to them.
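To make the three measures concrete, here is a minimal, hypothetical sketch of how a recommender pipeline might filter the most harmful items from a child's feed, down-rank other harmful items, and fold in negative feedback. It is not drawn from Ofcom's codes, which describe outcomes rather than implementations, and every name and threshold (Item, harm_score, is_child_account-style checks, the cutoffs) is an illustrative assumption.

```python
# Hypothetical illustration only; Ofcom's draft codes specify outcomes, not implementations.
# All names, scores and thresholds below are assumptions made for this sketch.
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    relevance: float   # recommender's base ranking score
    harm_score: float  # 0.0 (benign) to 1.0 (most harmful), e.g. from a content classifier


BLOCK_THRESHOLD = 0.8     # "most harmful" content: excluded entirely from children's feeds
DOWNRANK_THRESHOLD = 0.4  # other harmful content: shown less prominently rather than removed
DOWNRANK_PENALTY = 0.5    # multiplier applied to the relevance of down-ranked items


def rank_for_child(candidates: list[Item]) -> list[Item]:
    """Filter out the most harmful items and reduce the prominence of the rest."""
    scored: list[tuple[float, Item]] = []
    for item in candidates:
        if item.harm_score >= BLOCK_THRESHOLD:
            continue  # filtered out of the child's feed entirely
        score = item.relevance
        if item.harm_score >= DOWNRANK_THRESHOLD:
            score *= DOWNRANK_PENALTY  # lower visibility in the ranked feed
        scored.append((score, item))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored]


def record_negative_feedback(item: Item, penalty: float = 0.1) -> None:
    """A child's 'I don't want to see this' signal nudges the item's harm estimate upward."""
    item.harm_score = min(1.0, item.harm_score + penalty)
```

In this toy version, the negative-feedback handler feeds directly back into the same harm score used for filtering and down-ranking; a production system would instead route such signals into its moderation and model-training pipelines.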

"Ofcom will launch an additional consultation later this year on how automated tools, including AI, can be used to proactively detect illegal content," the agency added.

The latest measure from the regulator comes after a U.K. parliamentary committee criticized Ofcom for a lack of clarity on how it would handle data from the nearly 100,000 service providers that come under the scope of the regulation, which the committee argued could delay the rollout of the rules (see: Report: Ofcom Unprepared to Implement UK Online Safety Bill).

Tech companies including Apple and WhatsApp have criticized a provision in the law allowing the regulator to require that online services deploy "accredited technology" to identify content tied to terrorism or child sexual exploitation and abuse (see: Tech Companies on Precipice of UK Online Safety Bill).


About the Author

Akshaya Asokan


Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.



