On March 30, Gavin Newsom signed an executive order letting California second-guess federal supply-chain designations of AI companies. The trigger was Anthropic. The Pentagon had labeled the San Francisco company a supply-chain risk after Anthropic refused to let the military use its systems for domestic surveillance and autonomous weapons. A federal judge enjoined the Pentagon's label. Then Newsom added a state-level review on top. The press calls this Newsom standing up to Trump. Count the permission slips instead: Anthropic made a private decision about its own product, the federal government tried to punish that decision, a federal judge stopped the punishment, and a state government added another layer of review. Two governments are fighting over who gets to license what Anthropic builds.
Two Regulators, One Cage
Trump's national AI legislative framework, released March 20, asks Congress to preempt state AI laws that 'impose undue burdens' on the industry. The framework calls for 'global AI dominance.' Newsom's executive order requires California agencies to develop standards covering AI's potential for 'child sexual abuse material,' civil rights violations, and 'unlawful discrimination, detention, and surveillance.' Both lists describe the same thing from opposite angles. Trump worries about state restrictions on AI builders. Newsom worries about federal procurement decisions on AI builders. Each side wants to be the one deciding which AI is safe to use. Neither side suggests letting users decide.
Unlike the Trump administration, California remains committed to ensuring that AI solutions adopted and deployed by California cannot be misused by bad actors. — Office of Governor Gavin Newsom, March 30, 2026
Read the quote again. Newsom does not define bad actors. He does not define misuse. The promise is that California, not Trump's federal agencies, will decide what counts as both. Washington offered the opposite framing the same month. Trump's AI framework lists three categories of state laws Congress should preempt: state laws regulating AI development, state laws creating developer liability for third-party misuse, and state laws 'burdening' Americans' AI use. The Trump framework wants federal authority over those decisions. Newsom's order wants California authority over Trump's authority. Both sides put governments in charge, just different governments.
What the Anthropic Test Reveals
Anthropic is a San Francisco AI company. It has commercial relationships with the federal government. It also has internal policies about what customers can do with its products. Last year those policies collided with the Department of Defense, which wanted to use Anthropic's models for activities Anthropic refused to authorize: domestic mass surveillance and 'fully autonomous weaponry.' Anthropic said no. The DoD then designated Anthropic a supply-chain risk, a label that bars the company from competing for certain military contracts. A federal judge issued a temporary injunction blocking the designation. The fight is still in the courts.
Step back from the politics. A private company set its own ethical line on its own product. The federal government punished the company for it. A court stopped the punishment. A state government created a new state-level review on top. Each layer of government added a new permission slip. Anthropic's customers, including its would-be customers in California state government, are now waiting on at least three separate decisions about whether they can do business with each other. The federal designation. The court ruling. The new state review process Newsom's order will create. Six months ago there was one buyer-seller relationship. Today there are three pending verdicts.
Is 'Right to Compute' Actually About Compute?
Trump's framework borrows the libertarian phrase 'Right to Compute' from state-level bills, including a law Montana passed last year. The principle is sound: computation is a form of speech and a form of work, so restrictions on computation are restrictions on both. The framework then turns the principle inside out. It asks Congress to bar states from passing laws that 'unduly burden' American AI use, while leaving the federal government free to bar anyone it considers a supply-chain risk. Senator Ted Cruz's separate Sandbox Act would let some AI companies escape some regulations under federal supervision, which is still federal supervision under a friendlier name. The right to compute, in this framework, is a federal grant rather than a recognized freedom.
Whose Freedom Does Each Order Restrict?
Who
Michael Kratsios — Trump White House science and technology policy adviser. Has called for extending federal preemption to New York's S7263 chatbot liability bill.
Run the same audit on both documents. Newsom's order tells AI companies they cannot do business with California state agencies until they pass a state-defined civil-rights review. It tells state employees they cannot use AI tools the state has not pre-vetted. It tells any future Trump administration that California will second-guess its supply-chain designations. The Trump framework tells state legislatures they cannot write certain AI laws. It tells plaintiffs they cannot sue AI developers under state liability rules in certain cases. It tells New York it cannot enforce chatbot operator licensing requirements. Both documents restrict freedom. They restrict different freedoms. That is the entire substance of the disagreement.
What Vigilance Looks Like Now
At Issue
Both Newsom's executive order and Trump's AI framework expand executive power over the AI industry. Neither shrinks it.
The Freeman position on this fight is a refusal to pick a side. Vigilance is not paranoia. Government grows. It never shrinks. Trump's framework and Newsom's order both grow government, then call the growth 'protection,' 'national security,' and 'civil rights.' The historical record on those words is well documented. Emergency powers granted to fight one threat outlast the threat. Procurement reviews built to vet one company become procurement reviews for all companies. Each new permission slip becomes the baseline for the next round of permission slips. Controlled people live with the mistakes someone else made for them. Free people get to make their own. The right question for any new AI rule, federal or state, Newsom or Trump, is: whose freedom does this restrict, and what was the alternative? The alternative was simpler than either regime: leave private actors alone unless they harm someone, and prosecute the harm under laws that already exist.