Trump Administration Moves to Limit State AI Laws
President Donald Trump has signed an executive order aimed at challenging state laws regulating artificial intelligence. The move is part of an effort to prevent a patchwork of regulations that the administration says could hinder US competitiveness in AI development. The order specifically targets state laws that, in the administration's view, compel AI models to embed ideological bias or impose transparency and fairness standards.
What the Executive Order Does
The order directs the Justice Department to create an AI Litigation Task Force. Its goal is to challenge state laws that are deemed unconstitutional, pre-empted by federal authority, or otherwise unlawful. For example, Trump pointed to Colorado’s upcoming law banning “algorithmic discrimination,” arguing it could force AI systems to produce false results to avoid impacting protected groups.
Additionally, the order instructs the Commerce Department to evaluate state AI laws that conflict with national policies. It also calls for withholding broadband funding from states with conflicting laws. The Federal Trade Commission will be asked to clarify when state laws requiring modifications to AI outputs are pre-empted by existing federal regulations against unfair or deceptive practices. Meanwhile, the Federal Communications Commission may implement a national reporting standard for AI models to pre-empt conflicting state rules.
Balancing State and Federal AI Policies
The executive order emphasizes establishing a uniform federal approach to AI. It directs the administration's AI and crypto adviser to recommend legislation that creates a consistent national framework, which would make it easier for companies to operate across states without facing different rules. However, the order still allows states to set their own laws in areas such as child safety protections related to AI, data center infrastructure, and procurement policies.
States have been increasingly active in regulating AI. In 2024 alone, nearly 700 AI-related bills were introduced, with 113 passing into law. Colorado's AI Act, for instance, takes effect in 2026 and requires developers of high-risk AI systems to disclose information about those systems and conduct impact assessments. California's SB 53 requires large frontier-model developers to publish safety and transparency frameworks, while Texas's TRAIGA sets disclosure rules for generative AI starting in 2026.
This growing number of state laws has created a complex legal landscape for AI companies. The new federal push aims to simplify compliance for US firms operating domestically. But it’s important to note that US companies selling AI products in Europe will still need to follow the EU AI Act, which came into force in August 2024. The EU law classifies AI systems by risk level and imposes requirements like transparency, human oversight, and conformity assessments, especially for high-risk applications such as hiring tools and credit scoring systems.
According to experts, a deregulated US approach can help American AI firms move faster at home but may not give them an advantage abroad. The EU’s regulations are more comprehensive and aim to protect consumers and ensure safety, regardless of US policies. As AI technology continues to evolve, balancing innovation with regulation remains a key challenge for policymakers and industry leaders alike.