The Trump administration announced a national AI legislative framework aimed at centralizing US AI policy by overriding state laws. The directive emphasizes consistency across states to preserve American innovation and leadership in AI. The framework lays out seven objectives focused on innovation and proposes federal standards that could nullify stricter state regulations.

The framework follows Trump's earlier executive order challenging state AI laws, which instructed the Commerce Department to compile a list of restrictive state AI laws; that list remains unpublished. Citing national security, the framework opposes state regulation of AI development and aims to prevent states from penalizing AI developers for third-party misuse. It includes no liability framework or oversight mechanism for AI-related harms, instead centralizing AI policy in Washington and shrinking states' regulatory authority.

Critics argue that states have been proactive regulators of emerging risks, pointing to recent AI laws in New York and California. Some industry leaders, by contrast, support the framework for establishing unified national standards that would let the industry grow without navigating conflicting state rules.

On child safety, the framework places more responsibility on parents and sets nonbinding expectations for platform accountability: it advises Congress to require AI firms to build in features that mitigate risks to minors, but it proposes no enforceable mandates and lacks clear requirements. It also seeks to balance protections for creators against AI's need to train on existing works.

On speech, the proposal targets government censorship rather than platform content moderation, allowing legal recourse against government attempts to censor AI platforms. The framework parallels Trump's past orders against "woke AI," aiming for ideologically neutral systems, which could complicate coordinated regulation of misinformation and safety risks.
