Elon Musk’s platform has agreed to review illegal hate and terrorism posts within a day on average, restrict UK-proscribed groups, and report quarterly to the regulator. A separate Ofcom investigation continues.
X has entered into commitments on illegal hate speech and terrorist content with Ofcom, Britain’s communications regulator said on Friday, following months of pressure that intensified through the autumn and winter.
As part of the agreement, Elon Musk’s platform will review suspected illegal hate and terrorism posts on average within 24 hours, assess at least 85% within 48 hours, and submit quarterly performance data to the regulator for the next year.
The platform also promised to limit UK access to accounts managed by or for organizations banned under British terrorism law and to engage external experts to revamp a reporting flow criticized by civil-society groups as opaque.
The precise wording matters here: slow or opaque handling of flagged content has been a recurring complaint raised against X with Ofcom over the past year.
Suzanne Cater, Ofcom’s online safety enforcement director, stated that ‘terrorist content and illegal hate speech persists on some major social media platforms,’ and that the issue has become ‘particularly significant in the UK following several recent hate-motivated crimes affecting the country’s Jewish community.’
Imran Ahmed of the Center for Countering Digital Hate said the commitments were the result of ‘sustained campaigning’ following last year’s attack on Heaton Park Synagogue near Manchester.
Britain has faced a series of difficult incidents. The Heaton Park attack was followed by a fatal incident in north London last month, which police are treating as terrorism, and CCDH monitoring after the Golders Green attack documented a surge in antisemitic posts on X.
The new commitments do not directly address these incidents, but they establish a procedural baseline for how X must handle such content going forward.
The response was mixed. Danny Stone, chief executive of the Antisemitism Policy Trust, called the package ‘a good start’ but said X was still ‘failing in many aspects’ of its efforts to combat racism.
Ofcom noted that its formal inquiry into X, including the company’s methods for handling illegal content and questions raised by its Grok AI assistant, remains ongoing. The Friday agreement is a negotiated commitment, not a settlement.
A separate Grok track is also in progress. Ofcom is reviewing how X handles AI-generated sexualized imagery from the chatbot, and X restricted Grok’s image-editing features to paying users after a deepfake controversy and the threat of a UK ban. The Friday commitments do not resolve that issue; they sit alongside it.
