Ofcom, the UK's online safety regulator, has accepted new commitments from X aimed at better protecting UK users from illegal hate and terror content. Under the agreement, X will restrict UK access to accounts identified as operated by UK terror groups that post illegal terrorist content. X will also review at least 85% of user-reported hate and terror content within 48 hours, collaborate with experts on reporting systems for illegal content, and provide quarterly performance reports to Ofcom for a year to demonstrate compliance with these commitments.
According to Ofcom's online safety director, Oliver Griffiths, the commitments represent progress, but further action is needed. He stressed that illegal hate speech and terrorist content remain present on major social media platforms and urged them to tackle the problem effectively.
The agreement stems from a compliance review Ofcom opened in December to assess social media platforms' measures against illegal hate and terrorist content. Griffiths clarified that a separate probe into how X's chatbot handles illegal content remains open, after the tool was used to create unauthorized deepfakes.
The agreement allows Ofcom to fine X if it fails to meet these obligations, though the regulator stops short of finding X non-compliant with current UK online safety regulations. The commitments themselves are somewhat vague: X pledges to speed up content reviews but says nothing about proactive content detection, and it is unclear whether the review process will rely on automated systems or human moderators, a concern given X's limited safety team.
