The Cyberspace Administration’s annual ‘Qinglang’ campaign arrives in a regulatory environment significantly altered since last year’s edition, and coincides with the White House’s accusation that China is conducting ‘industrial-scale’ AI theft operations.
China has initiated a months-long enforcement campaign addressing artificial intelligence misuse, according to Reuters.
Launched by the Cyberspace Administration of China (CAC) in collaboration with the Ministry of Public Security and other agencies, the campaign targets AI-driven fraud, deepfakes, disinformation, and illegal applications that violate privacy and intellectual property rights.
This 2026 initiative is part of the annual enforcement series ‘Qinglang’ (Clear and Bright). Its immediate predecessor, launched on 30 April 2025, was titled ‘Rectification of AI Technology Misuse’ and lasted three months over two phases.
By the conclusion of its first phase in June 2025, authorities had dismantled more than 3,500 AI-related products, removed over 960,000 pieces of illegal or harmful content, and shut down or penalized over 3,700 accounts.
This year’s campaign emerges within a more mature regulatory environment and amidst a geopolitically sensitive backdrop, adding complexity to its scope and objectives compared to its predecessor.
What does the campaign target?
China’s AI abuse enforcement campaigns focus on an expanding taxonomy of misuse as AI’s capabilities and criminal applications advance.
Based on the established Qinglang framework and new regulatory measures from 2025 and early 2026, this year’s campaign is expected to target several categories simultaneously.
The first and most commercially significant target is AI-enabled fraud and impersonation. There has been a significant increase in voice-cloning and face-swapping deepfakes used to impersonate celebrities, executives, and officials in scams targeting ordinary consumers.
The CAC’s 2025 campaign targeted the use of AI for ‘impersonating relatives and friends and engaging in illegal activities such as online fraud’ and the unauthorized use of AI to ‘resurrect the dead’ via generated likenesses of deceased individuals.
On 3 April 2026, the CAC published draft rules for digital virtual human services, focusing on consent for likeness use and banning biometric authentication system bypassing, with public comment closing on 6 May.
The second major category is AI-generated disinformation and ‘online water army’ activity, using AI at scale to create fake social media accounts, distribute coordinated content, manipulate engagement metrics, and create artificial trending topics.
The 2025 campaign prioritized these issues in its second phase, focusing on platforms enabling AI-powered account farming, batch content generation, and social bot networks.
The third is non-compliance with mandatory filing and registration procedures. China requires generative AI services built on large language models to undergo a security assessment and file with the CAC before launch.
By March 2025, 346 generative AI services had completed the LLM filing; many more had not. The 2025 campaign’s first phase identified unfiled AI products as a primary rectification target, with local regulators penalizing non-compliant applications, including unauthorized face-swapping apps.
The fourth area involves managing training
