When California Gov. Gavin Newsom vetoed a key piece of AI oversight legislation Sunday, he said he did so because the measure “falls short of providing a flexible, comprehensive solution to curbing the potential catastrophic risks.”
He then said he “has asked the world’s leading experts on genAI to help California develop workable guardrails for deploying genAI, focusing on developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks.”
Those would be laudable sentiments if any of them had any chance of actually delivering a more secure and trustworthy environment for Californians. But Newsom, one of the nation’s smarter politicians, well knows that such an effort is a fool’s errand. I could add cynically that the governor merely wants to be seen trying to do something, but why state the obvious?
Problem One: GenAI deployments are already happening, and the technology is being deeply embedded into an untold number of business operations. It’s all but ubiquitous across the major cloud environments, so even an enterprise that has wisely opted to hold off on genAI for now would still be deeply exposed. (Fear not: There are no such wise enterprises.)
The calendar simply doesn’t make sense. First, Newsom’s experts get together and come up with a proposal, which in California will take a long time. Then that proposal goes to the legislature, where lobbyists will take turns watering it down. What are the chances the final result will be worthy of signature? Even if it is, it would arrive far too late to do any good.
Candidly, given how far genAI has progressed in the last two years, there’s a fine chance that had Newsom signed the bill into law on Sunday, it would have still been too late.
Part of the reason is that the bill’s enforcement focus is on AI vendors, and it is highly unlikely that state regulators would be able to effectively oversee something as complex as genAI development is today.
In his veto message, Newsom pointed to the flaw of vendor oversight, but zeroed in on the wrong reason.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” he said. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
In short, the governor is arguing that regulators shouldn’t look only at the biggest players but should scrutinize the many smaller specialty shops as well. That argument makes sense in a vacuum. But in the real world, regulators are too understaffed and under-resourced to effectively oversee even a handful of major players, much less the many niche offerings that exist. It sounds great spoken from a podium, but it’s not realistic.
Here’s the real problem: No one in the industry — big players included — truly knows what genAI can and can’t do. No one can accurately predict its future. (I’m not even talking about five years from now; experts struggle to predict capabilities and problems five months from now.)
We’ve all seen the dire predictions of what might happen with genAI. Some are overblown — remember the extinction reports from February? And some are frighteningly plausible, such as this Cornell University report on how AI training AI could lead to a self-destructive loop. (By the way, kudos to Cornell’s people for comparing it to Mad Cow disease. But to make the analogy work, they created the term Model Autophagy Disorder so they could use the acronym MAD. Sigh.)
There is a better way. Regulators — state, federal and industry-specific — should focus on rules for enterprises and hyperscalers deploying genAI tools rather than the vendors creating and selling the technology. (Granted, the big hyperscalers are also selling their own flavors of genAI, but they are different business units with different bosses.)
Why? First of all, enterprises are more likely to cooperate, making compliance more likely to succeed. Secondly, if regulators want vendors to take cybersecurity and privacy issues seriously, take the fight to their largest customers. If the customers start insisting on the government’s rules, vendors are more likely to fall in line.
In other words, the paltry fines and penalties regulators can threaten are nothing compared to the revenue their customers provide. Influence the customers and the vendors will get the message.
What kind of requirements? Let’s consider California. Should the CIO for every healthcare concern insist on extensive testing before any hospital uses genAI code? Shouldn’t those institutions face major penalties if private healthcare data leaks because someone trusted Google’s or OpenAI’s code without doing meaningful due diligence? What about a system that hurts patients by malfunctioning? That CIO had better be prepared to detail every level of pre-launch testing.
How about utilities? Financial firms? If the state wants to force businesses to be cautious, there are ways of doing so.
Far too many enterprises today are feeling pressured by hype and being forced by their boards to jump into the deep end of the genAI pool. CIOs — and certainly CISOs — are not comfortable with this, but they have nothing to fight back with. Why not give CIOs a tool with which to push back: state law?
Give every CEO an out: a reason not to risk the business and its customers on magical-sounding predictions of eventual ROI and other benefits. Regulators could become CIOs’ new best friends by giving them cover to do what they want to do anyway: take everything slowly and carefully.
Trying to regulate vendors won’t work. But giving political cover to their customers? That, at least, has a real chance of succeeding.