A recent landmark decision from the US Supreme Court has put content created by generative artificial intelligence (genAI) at the forefront of free speech rights as states grapple with how to regulate social media platforms.
Specifically, the decision calls into question whether textual and video content created by genAI can be considered free speech, because human beings were involved in crafting the algorithms that produced that content.
Two Supreme Court cases (Moody v. NetChoice and NetChoice v. Paxton) challenged state laws passed in Florida and Texas that aimed to prevent social media platforms from silencing conservative content. In its decision, the Supreme Court combined both cases to decide whether Florida and Texas had unfairly interfered with social media companies’ ability to remove or moderate potentially offensive content.
The cases are about a very specific type of expressive activity: content curation.
“So, in terms of AI, they’re mainly focused on recommender systems and systems that automatically identify, remove, or down-rank content for content moderation purposes,” said Tom McBrien, counsel for the Electronic Privacy Information Center (EPIC), a non-profit research organization focused on protecting privacy rights.
The Fifth Circuit Court of Appeals upheld a Texas law allowing the state to regulate social media platforms, while the Eleventh Circuit Court blocked the Florida statute, saying it overburdened editorial discretion. The Supreme Court ultimately ruled that the lower courts had not examined legal precedents and cases closely enough and sent the cases back for reconsideration.
At first blush, neither case appears to involve AI. But the high court emphasized that current law must be applied no matter the technology at issue, and that social media platforms should be treated like any other entity (such as newspapers) because they curate content, and curation is protected speech.
While the decision doesn’t give AI free rein, it did require the lower courts to fully consider all potential applications of the state statutes; the Florida law, in particular, is likely to apply to certain AI platforms, according to Daniel Barsky, an intellectual property attorney in Holland & Knight’s Miami office.
“Can GenAI outputs be thought of as speech? The outputs are supposed to be unique, but they are not spontaneous, as all GenAI output at present is a response to a prompt,” Barsky said.
The First Amendment cases cited by the Supreme Court all involved some sort of human involvement, whether writing or speaking the content, making editorial decisions, or selecting content. AI platforms that arguably have no human involvement would be less likely to be entitled to First Amendment protections, which would affect whether states or the US government can pass laws banning certain outputs.
Conversely, the decision raises the question of whether AI can commit defamation and, if so, who would be liable. It also raises questions about whether the government can regulate social media if its content is produced and selected entirely by AI with no human involvement. And if humans are involved in creating the large language models (LLMs) behind AI, would the resulting content then be considered free speech?
“This is the critical question, but [it] has not yet been addressed by any court; this is an issue that might come up in the continued NetChoice proceedings,” Barsky said. “It is certainly an argument I would consider making if I was arguing a case involving AI and First Amendment issues.”
If AI is considered nothing more than a computer algorithm, laws could be passed to restrict or censor AI outputs; but when humans become involved in the creation of those algorithms, things become complex.
“Basically, this is a big, tangled mess,” Barsky said.
EPIC’s McBrien said it’s unlikely, even if the cases go back up to the Supreme Court, that the Justices will announce a broad rule such as “generative AI outputs are protected expression” or the opposite.
“It’s going to be situational. In the Moody/Paxton cases, NetChoice was angling for them to say that newsfeed generation is always expressive, but the Court rejected this overbroad strategy,” McBrien said. “It remanded the case for the lower courts to parse through the arguments more granularly: what exact newsfeed-construction activities are implicated by the laws, which are claimed to be expressive, are they really expressive, etc.”
The Justices, however, were open to the idea that using algorithms to do something expressive might receive less First Amendment protection, depending on the specifics of the algorithm, such as how closely and faithfully it carries out the human being’s message, according to McBrien.
Specifically, the majority thought that when content curators (social media platforms) enforce content and community guidelines, such as prohibitions on harassment or pro-Nazi content, those activities receive First Amendment protections. “So, when an algorithm is used to enforce those guidelines, the majority said it might receive First Amendment protections,” he said.
McBrien noted that Justices Amy Coney Barrett and Samuel Alito questioned whether using “black-box algorithms” should receive the same amount of protection, an issue that will be pivotal as the cases are reexamined. “Since Justice Barrett’s vote was necessary to form the majority opinion, she will likely be the swing vote in the future,” McBrien said.
The Supreme Court also cited an earlier case, Turner Broadcasting System v. FCC; decided in the 1990s, it held that cable television companies are protected under First Amendment free speech rights when determining what channels and content to carry on their networks.
“The majority and concurrences pointed to the Turner Broadcasting case where the Court found that the regulation at issue did restrict speech, but because it was passed for competition reasons, not speech-regulating reasons, it was constitutional,” McBrien said. “One could imagine something similar in the realm of generative AI.”