State Law Patchwork, Federal Proposals, and the New Executive Order
January 2026
Artificial intelligence is no longer an experimental HR tool. Across recruiting, screening, promotion, scheduling, and termination, AI systems are increasingly embedded in core employment decisions. As adoption has accelerated, so has regulation. Employers now face a complex and sometimes conflicting web of state statutes, federal enforcement priorities, proposed congressional legislation, and a recent executive order signaling skepticism toward state-level AI regulation.
This article summarizes the current state of U.S. law governing AI in employment decisions, highlights pending federal proposals, and analyzes the practical impact of the December 2025 executive order on artificial intelligence.
The Baseline: Existing Federal Employment Law Still Applies
Despite the rapid emergence of AI-specific laws, traditional employment statutes remain the foundation of employer liability.
Title VII and Disparate Impact
The Equal Employment Opportunity Commission has made clear that employers remain fully responsible under Title VII when AI-driven tools produce discriminatory outcomes. If an algorithm results in a disparate impact on protected classes, liability attaches regardless of whether the tool was internally developed or procured from a third-party vendor.
Key points for employers:
- Vendor responsibility does not displace employer liability
- Inability to explain or justify an AI-driven decision increases enforcement risk
- “Black box” systems are particularly vulnerable in audits and investigations
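One common first-pass screen for disparate impact is the four-fifths guideline: if a group’s selection rate falls below 80% of the most-selected group’s rate, the outcome warrants closer scrutiny. The sketch below illustrates that arithmetic only; the group labels and counts are hypothetical, and a ratio below 0.8 is a conventional red flag, not a legal conclusion.

```python
# Illustrative four-fifths (80%) screening calculation.
# Group names and counts are hypothetical examples.

def selection_rates(outcomes):
    """Selection rate (selected / applicants) per group."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening results: group -> (selected, total applicants)
outcomes = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

ratios = impact_ratios(outcomes)
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios)   # group_b: 0.30 / 0.48 = 0.625
print(flagged)  # group_b falls below the four-fifths threshold
```

A calculation like this is only a starting point: regulators and courts also consider statistical significance, sample size, and business necessity, so a passing ratio does not by itself establish compliance.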
FCRA and Background Screening
Where AI is used in background checks, inaccuracies, hallucinated information, or automated adverse actions can implicate the Fair Credit Reporting Act. Human review remains critical to mitigating false positives and ensuring procedural compliance.
State Laws: A Growing and Uneven Patchwork
In the absence of comprehensive federal legislation, states and local governments have moved aggressively to regulate AI in employment. The result is a fragmented but increasingly influential framework.
New York City: Transparency and Audits
New York City’s Local Law 144 remains the most prescriptive regime in effect. Covered employers must:
- Conduct an independent bias audit of automated employment decision tools before use and annually thereafter
- Provide advance notice to candidates and employees
- Publish a summary of audit results
Failure to meet these requirements can result in civil penalties and enforcement actions.
California: Expanding Anti-Discrimination and Recordkeeping
California regulations extend existing civil rights protections to AI systems used in employment. Employers must:
- Avoid deploying AI that screens out applicants based on protected characteristics
- Maintain detailed records of automated decision-making data for multiple years
- Integrate AI governance into broader compliance and privacy programs
Separately, California’s privacy regime increasingly intersects with AI use in employment, particularly where automated decisions occur without meaningful human involvement.
Colorado: High-Risk AI Systems (Effective June 30, 2026)
Colorado’s Artificial Intelligence Act, delayed until mid-2026, introduces a risk-based framework. Employment-related AI systems are classified as “high risk,” triggering obligations such as:
- Impact assessments
- Risk mitigation and monitoring
- Transparency and notice to affected individuals
This statute is widely viewed as a bellwether for other states considering similar approaches.
Illinois, Texas, Maryland, and Others
Several additional states have enacted or finalized laws targeting AI in employment contexts. While details vary, common themes include:
- Prohibitions on discriminatory AI use
- Notice requirements for applicants and employees
- Governance and documentation obligations
Importantly, employers recruiting remote workers may be subject to these laws even if they are not physically located in the regulating jurisdiction.
Federal Legislative Proposals: Momentum Without Consensus
Congress has not yet enacted comprehensive AI legislation for employment decisions, but bipartisan interest is increasing.
Senate Proposal: AI-Related Job Impacts Clarity Act
Introduced in November 2025, this bill would require certain employers to report quarterly on:
- Jobs eliminated or automated due to AI
- Jobs created as a result of AI adoption
- Employees retrained due to AI implementation
If enacted, the law would impose substantial tracking and documentation burdens, particularly for large or publicly traded employers.
House Proposal: No Robot Bosses Act of 2025
This proposal would go further, mandating:
- Pre-deployment and periodic audits of AI tools for bias and discrimination
- Independent human oversight of AI-generated decisions
- Disclosure to employees and applicants when AI is used in employment decisions
While passage remains uncertain, the bill reflects growing concern over fully automated management and decision-making.
The December 2025 Executive Order: Policy Signal, Not Preemption
On December 11, 2025, the President signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The order emphasizes:
- Reducing barriers to AI innovation
- Minimizing inconsistent state regulation
- Challenging laws perceived to embed ideological bias in AI systems
The order directs federal agencies to review state AI laws and establishes an AI litigation task force to potentially challenge certain statutes.
What the EO Does Not Do
Critically, the executive order:
- Does not invalidate any state or local AI law
- Does not preempt existing employment statutes
- Does not relieve employers of current compliance obligations
Unless and until a court enjoins a state law or Congress enacts preemptive legislation, employers must continue complying with applicable state and local requirements.
Practical Implications for Employers in 2026
Given the convergence of enforcement risk, state legislation, and federal uncertainty, several practical themes emerge:
Human Oversight Is No Longer Optional
Across nearly all existing and proposed regimes, human review remains a central risk-mitigation strategy. Fully automated adverse decisions significantly increase exposure.
Vendor Due Diligence Is a Legal Necessity
Employers remain responsible for the outputs of third-party AI tools. Contractual protections alone are insufficient without operational governance and testing.
Transparency Expectations Are Rising
Notice obligations are expanding rapidly. Employers should assume that disclosure of AI use in employment decisions will become the norm, not the exception.
Documentation and Explainability Matter
The ability to explain how AI influences employment decisions is increasingly essential for regulators, courts, and candidates alike.
Conclusion
The regulatory environment for AI in employment decisions is no longer speculative. State laws are in force, federal agencies are actively enforcing existing statutes, Congress is proposing new guardrails, and the executive branch has signaled a preference for national coordination without immediate preemption.
For now, compliance with state and local AI laws remains mandatory, and employers should not assume that federal action will simplify obligations in the near term. Organizations using AI in hiring, promotion, or workforce management should proactively assess their systems, governance structures, and documentation practices to remain defensible in this evolving legal landscape.
