Once again, AI industry employees warn the tech could lead to ‘human extinction’

More than a dozen current and former AI industry employees have signed an open letter warning that the technology’s dangers could result in “human extinction.”

The letter was written by 13 people who have worked at OpenAI, Google DeepMind, and Anthropic, all leading providers of generative AI (genAI) technology. Specifically, they raised alarms about a series of risks from AI, ranging from “further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

“AI companies themselves have acknowledged these risks, as have governments across the world and other AI experts,” the letter says. It also calls for assurances that employees who raise concerns will not face retaliation from their companies.

The letter also received the endorsement of AI scientist Yoshua Bengio, British-Canadian computer scientist and cognitive psychologist Geoffrey Hinton, and University of California, Berkeley computer science professor Stuart Russell.

The latest missive echoes an open letter released in March 2023, in which more than 150 leading AI researchers and others called on genAI companies to submit to independent evaluations of their systems, arguing that the lack of such evaluations raises concerns about basic protections. Later that same month, more than 1,000 signatories, including industry experts, scientists, ethicists, and others, posted an open letter warning about a possible “loss of control of our civilization” from unchecked AI.

And in a May 2023 open letter, many of the technology’s most prominent AI creators called controlling it “a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The new letter noted there is no effective government oversight of corporations creating and selling AI solutions. “Current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the signatories said.

“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues.”

The employees laid down four specific measures they want from companies to ensure the safety of genAI technology:

The company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit.

The company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise.

The company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected.

The company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. 

“We accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public,” the letter said.

Earlier this year, more than 200 companies and organizations agreed to participate in the AI Safety Institute Consortium to create guidelines ensuring the safety of AI systems. But participation to date has been voluntary, and the US lags well behind other governments’ efforts to curb AI’s potential problems. For example, the European Union finished writing the EU AI Act more than a year ago; it was approved in June 2023.

The EU AI Act requires genAI systems to meet transparency standards to help regulators and others distinguish deepfake images from real ones. The measure also prohibits social scoring systems and manipulative AI.

In the United States, there have been several efforts to curb AI, but no meaningful legislation from Congress. For example, in October 2023, US President Joseph R. Biden Jr. issued an executive order that hammered out clear rules and oversight measures to ensure AI is kept in check, while providing paths for it to grow. Among more than two dozen initiatives, Biden’s “Safe, Secure, and Trustworthy Artificial Intelligence” order was a long time coming, according to AI industry experts who’ve been watching the rise of genAI tools and platforms since late 2022.