As Britain’s King Charles III stood up in the Houses of Parliament on Wednesday to present the new Labour government’s proposed legislative program, technology experts were primed for any mention of artificial intelligence (AI).
In the event, amid the colorful pomp and arcane ceremony for which the British state is famous at the state opening of Parliament, the speech delivered mostly a promise of future legislation, shorn of any detail on the form it will take.
Talking head
The King’s Speech is where Britain’s elected government, in this case the recently elected Labour administration, lays out bills it plans to enact into law in the coming year.
The monarch delivers the speech, but it is written for him by the government. His role is purely constitutional and ceremonial.
It is hard to imagine a greater contrast than that between a ceremony whose origins date back hundreds of years and a topic such as AI, which embodies the promise and peril of 21st-century technology.
The government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models,” announced King Charles.
Beyond the focus on regulating the models used for generative AI, though, that leaves the government’s plans and their timing open to interpretation. Even so, the willingness to act marks a change of direction from the defeated Conservative administration, whose policy was to legislate on AI only within narrow constraints.
Everyone wants to regulate AI
There had been an expectation that the new government would go further, primed by general statements of intent in the Labour Party Manifesto 2024.
“We will ensure our industrial strategy supports the development of the Artificial Intelligence (AI) sector, removes planning barriers to new datacentres,” stated the Manifesto before turning to the need for regulation.
“Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.”
The disappearance of these modest ambitions could signal that the government has yet to work out what “binding regulation” should look like at a time when other legislation seems more pressing.
The previous government worried that too much regulation risked stifling development. Equally, no regulation at all carries the risk that by the time it becomes necessary it will be too late to act.
The EU, of course, already has its AI Act, while the US is still working through a mixture of proposed legislation bolstered by the Biden administration’s executive orders setting out first principles.
Still too early?
A comment by open-source industry advocate OpenUK in advance of the King’s Speech sums up the dilemma.
“There are lessons the UK can learn from the EU’s AI Act that will likely prove to be an overly prescriptive and unwieldy cautionary tale of regulatory capture with only the largest companies able to comply, stifling innovation in the EU,” said the organization’s CEO, Amanda Brock.
In her view, it was still too early to legislate in a way that creates walls and legal restrictions.
“For the UK to stay relevant globally, and to build successful AI companies, openness is crucial. This will allow the UK ecosystem to grow its status as a world leader in open-source AI, behind only the US and China,” she added.
But not everyone is convinced that the wait-and-see approach is the right one.
“Regulation is not just about setting restrictions on AI development; it’s about providing the clarity and guidance needed to promote safe and sustainable innovation,” said Bruna de Castro e Silva of AI governance specialist Saidot.
“As the EU moves forward with publishing its official AI Act, UK businesses have been left waiting for clear guidance on how to develop and deploy AI safely and ethically.”
This is why AI regulation is seen as a thankless task. Take an interventionist approach and experts will line up to say you’re stifling a technology with huge economic and social potential. Take a more cautious approach and others will say you’re not doing enough.
Last November, the previous Conservative administration of Rishi Sunak jumped on the theme of AI, hosting a global AI Safety Summit with symbolic flourish at Bletchley Park, the famous Second World War code-breaking facility just outside London.
At that event, several big AI names — OpenAI, Google DeepMind, Anthropic — undertook to give a new Frontier AI Taskforce early access to their models to conduct safety evaluations.
The new government inherits that promise, even if to many observers certainty about the UK’s AI legislative regime will seem no nearer than it was then.
More on AI regulation:
AI regulation: While Congress fiddles, California gets it done
Senators propose $32B on AI spending without firm regulatory oversight
The complex patchwork of US AI regulation has already arrived