Through a recently proposed policy change, Facebook and Instagram users in the European Union and UK learned that Meta planned to use anything they posted publicly to train its generative artificial intelligence (genAI) models.
In the US, Meta has long been using public Facebook and Instagram posts to train its Meta AI chatbot — something many users are not aware of. Users’ interactions with Meta AI are also used in training.
Meta’s change to its privacy policy, which originally was to take effect on June 26 for European Union and UK users, would allow it to use public posts, images, comments, and intellectual property to train Meta AI and the models that power it, including the company’s Llama large language model (LLM). LLMs are the algorithms or programs behind genAI engines. The company stated that it would not use private posts or private messages to train its models.
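To make that distinction concrete, here is a minimal, hypothetical sketch of the kind of filtering such a policy implies: only public, non-message content enters the training corpus. The Post type and its fields are illustrative assumptions for this sketch, not Meta's actual data model or pipeline.

```python
# Hypothetical sketch of a public-content-only training filter.
# The Post type and field names are illustrative, not Meta's schema.
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    text: str
    visibility: str       # "public" or "private"
    is_direct_message: bool

def build_training_corpus(posts: list[Post]) -> list[str]:
    """Keep only public, non-message content for model training."""
    return [
        p.text
        for p in posts
        if p.visibility == "public" and not p.is_direct_message
    ]

posts = [
    Post("u1", "Loving the new park downtown!", "public", False),
    Post("u2", "Private note to self", "private", False),
    Post("u3", "Hey, are we still on for lunch?", "public", True),
]
print(build_training_corpus(posts))  # only the first post survives the filter
```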
Users in the EU and UK would be able to opt out of having their content used for AI training, but only by filling out an objection form, according to a June 10 press release from Meta.
When EU and UK regulators caught wind of Meta’s plan, they pushed back, citing privacy concerns. Meta then paused its plans for the privacy policy change for users in the EU, adding that the delay “means we aren’t able to launch Meta AI in Europe at the moment.”
Ireland’s Data Protection Commission (DPC) posted a response to Meta pausing the rollout of its new policy, saying the “decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”
Meta responded to a Computerworld request for comment by pointing to a blog post by the company’s global engagement director, Stefano Fratta. In the post, Fratta said Meta is following the “example set by others, including Google and OpenAI, both of which have already used data from Europeans to train AI.”
“Our approach is more transparent and offers easier controls than many of our industry counterparts already training their models on similar publicly available information,” Fratta said. “Models are built by looking at people’s information to identify patterns, like understanding colloquial phrases or local references, not to identify a specific person or their information.”
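As a conceptual illustration of learning patterns rather than identities, the toy sketch below counts word-to-word transitions (bigrams). This is a drastic simplification of how an LLM is actually trained, included only to show that what such training retains are statistical regularities, such as common colloquial phrasings, rather than records of who posted what.

```python
# Toy bigram model: counts which words tend to follow which.
# A drastically simplified stand-in for LLM pattern learning,
# for illustration only.
from collections import Counter, defaultdict

def train_bigrams(sentences: list[str]) -> dict[str, Counter]:
    model: dict[str, Counter] = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

corpus = [
    "fair play to you",
    "fair play to them all",
    "fair enough so",
]
model = train_bigrams(corpus)
# The model captures that "fair" is often followed by "play",
# a local turn of phrase, without recording who said it.
print(model["fair"].most_common(1))  # [('play', 2)]
```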
Fratta reiterated that Meta is building its foundational AI models using only content that users have chosen to make public.
In the US, which has weaker privacy protections than the UK and Europe, Meta’s users have never been given the opportunity to opt out of having their public posts and comments used to train the company’s AI models. In a statement published by the New York Times, Meta said of US users, “While we don’t currently have an opt-out feature, we’ve built in-platform tools that allow people to delete their personal information from chats with Meta AI across our apps.”
Gartner vice president analyst Avivah Litan said Meta’s planned use of European users’ posts and other information is “pretty disconcerting,” and “Meta just gave users more reasons to distrust their services.”
At a minimum, Litan said, Meta should be more transparent, make it easier for users to opt out, and help them understand the implications of opting out or not.
“Users and our enterprise clients are justifiably concerned about model owners using their private data to train and improve their models. In fact, that’s their main concern when it comes to genAI risks and threats,” Litan said. “Now Meta is validating that their fears are valid.”
Meta originally notified users of its privacy policy change on May 31 through an email titled “We’re Updating our Privacy Policy as we expand AI at Meta.” In part, the email notification stated, “we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta.”
Privacy policy updates more often than not go unnoticed by recipients, who are used to receiving a plethora of them, Litan said. What’s more, she noted, most users would not understand how to opt out of the new policy.
“Users are complacent about figuring out all the murky black box privacy processes that occur behind the scenes, largely because they are not able to understand them anyways,” Litan said. “Of course, there is no one at Meta that you can talk to if you have questions, and we have no tutorial or video to explain what it means to not opt out, nor how to opt out,” she added.
Meta should pay its users for their data, Litan said, because the company uses it to increase its own profitability.
Meta is not alone in taking advantage of data posted publicly by businesses and users to build out its technology. With the exception of Apple, none of the hyperscalers or tech giants hosting AI used by consumers and businesses allow users to verify their claims around security and privacy, Litan said. “It’s all based on a ‘trust but no verify’ model.”