Turing Giants Split Again: Hinton Supports California's AI Restriction Bill

Despite strong opposition from AI leaders, tech giants, startups, and venture capitalists, California's "AI Restriction Bill" has successfully passed its preliminary stages.

As we all know, outside of sci-fi movies AI has not yet caused mass casualties in the real world, nor has it launched a large-scale cyberattack. Some U.S. lawmakers, however, want to put safety measures in place before such dystopian futures become reality.

This week, California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act", better known as SB 1047, took another important step toward becoming law.

In simple terms, SB 1047 holds developers accountable for preventing their AI systems from causing mass casualties or triggering cybersecurity incidents that result in losses exceeding $500 million. In the face of intense opposition from academia and industry, however, California lawmakers have made some compromises, incorporating several amendments suggested by AI startup Anthropic and other opponents. Compared with the original proposal, the current version weakens the California government's power to hold AI labs accountable.

Bill text: Link to SB 1047

But even with these changes, (almost) no one likes SB 1047.

AI heavyweights such as Yann LeCun, Fei-Fei Li, and Andrew Ng have repeatedly voiced dissatisfaction with the bill, which they see as a threat to open-source AI and a force that could slow or even halt AI innovation. Numerous open letters have surfaced, including one signed by more than 40 researchers from the University of California, USC, Stanford, and Caltech urging that the bill not be passed. Eight members of Congress representing California districts have likewise called on the governor to veto it. LeCun even echoed an earlier call, this time demanding a six-month pause on AI legislation!

So why the "almost"? Because, unlike LeCun, the other two deep learning Turing Award winners, Yoshua Bengio and Geoffrey Hinton, strongly support the bill's passage. In fact, they believe the current provisions are still too lenient.

Despite significant opposition from members of Congress, prominent AI researchers, big tech companies, and venture capitalists, SB 1047 has so far moved through the California legislature with relative ease.

Next, SB 1047 heads to the California Assembly floor for a final vote. Because of the recent amendments, the bill, if passed there, must go back to the California Senate for another vote. If it clears both votes, SB 1047 lands on the governor's desk, to be either vetoed or signed into law.

01 Which Models and Companies Will Be Affected?

Under SB 1047, developers, meaning the companies that build the models, will be responsible for preventing their AI models from being used to cause "significant harm."

Examples of such harm include using a model to develop weapons of mass destruction or to launch a cyberattack causing more than $500 million in damages. For perspective, CrowdStrike's recent global Windows blue-screen outage is estimated to have caused over $5 billion in losses.

However, SB 1047's rules apply only to extremely large AI models: those that cost at least $100 million to train and use more than 10^26 floating-point operations. (That figure is roughly the scale of GPT-4's training cost.)

Meta's next-generation Llama 4 will reportedly require ten times the computational power of its predecessor, which would put it under SB 1047's regulation. For open-source models and their fine-tuned versions, the original developer remains responsible unless the fine-tuning cost exceeds three times the cost of the original model. Given this, it's no wonder LeCun reacted so strongly.
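To make these thresholds concrete, here is a minimal illustrative sketch in Python of the coverage test as described above. The dollar and FLOP figures come from this article's summary, not from the bill's text, and the function and parameter names are hypothetical.

```python
# Illustrative sketch only: encodes the coverage thresholds described in the
# article ($100M training cost and more than 10^26 FLOPs), not the bill's text.

COST_THRESHOLD_USD = 100_000_000      # at least $100 million in training cost
COMPUTE_THRESHOLD_FLOPS = 1e26        # more than 10^26 floating-point operations

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """Return True if a model would meet both thresholds described above."""
    return (training_cost_usd >= COST_THRESHOLD_USD
            and training_flops > COMPUTE_THRESHOLD_FLOPS)

# Hypothetical example: a $150M training run using 2e26 FLOPs would be covered.
print(is_covered_model(150_000_000, 2e26))   # True
print(is_covered_model(50_000_000, 5e25))    # False
```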

Moreover, developers must put testing procedures in place to address the risks posed by their AI models and must hire third-party auditors annually to assess their AI safety practices. For AI products built on these models, appropriate safety protocols must be established to prevent misuse, including an "emergency stop" button that can shut down the entire AI model.

02 What Does SB 1047 Do Now?

SB 1047 no longer allows the California Attorney General to sue AI companies for negligent safety practices before a catastrophic event occurs (a change suggested by Anthropic). Instead, the Attorney General can seek injunctive relief to halt an operation deemed dangerous, and can still sue the AI developer if its model does cause a catastrophic event.

The bill no longer creates the new government agency, the Frontier Model Division (FMD), that the original draft called for. However, it still establishes the FMD's core, the Board of Frontier Models, within the existing Government Operations Agency, and expands it from 5 to 9 members. The board will continue to set compute thresholds for covered models, issue safety guidelines, and regulate auditors.

SB 1047's language on ensuring AI model safety is now also more lenient. Developers need only exercise "reasonable care" to ensure their AI models do not pose a significant risk of catastrophe, rather than provide the "reasonable assurance" the earlier draft required. In addition, developers now only have to submit a public "statement" outlining their safety measures, rather than certifying their safety test results under penalty of perjury.

There is also a separate protection for fine-tuned open-source models: anyone who spends less than $10 million fine-tuning a model is not considered its developer, and responsibility remains with the original developer of the large model.
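Purely as an illustration of this carve-out (the $10 million figure comes from the article, and the names here are made up), the responsibility rule might be sketched as follows:

```python
# Illustrative sketch of the fine-tuning carve-out described above: spending
# less than $10 million on fine-tuning means you are not treated as the developer.

FINE_TUNE_THRESHOLD_USD = 10_000_000

def responsible_party(fine_tuning_cost_usd: float) -> str:
    """Return which party would bear developer responsibility under this rule."""
    if fine_tuning_cost_usd < FINE_TUNE_THRESHOLD_USD:
        return "original large-model developer"
    return "fine-tuning party"

print(responsible_party(2_000_000))     # original large-model developer
print(responsible_party(50_000_000))    # fine-tuning party
```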