Adobe says it won’t use your art to train its AI

Adobe, the maker of Photoshop, Premiere, and other industry-standard tools in its Creative Cloud suite, has put its foot in its mouth. Last week, an update to the Creative Cloud terms of service set off alarms across the web as users interpreted the new wording to mean that the company was using their cloud storage files to train its generative AI systems. Not true, says Adobe in a non-apology post.

According to the message from Creative Cloud design leader Scott Belsky and legal, security, and policy lead Dana Rao, it’s all been a big misunderstanding. The language that customers had noticed, which said that the company’s automated systems can “access, view, or listen to your Content,” sure sounds like the kind of access that would let the company train generative AI systems. The same kind of AI systems that Adobe has been pushing in Creative Cloud for the better part of a year.

The blog post makes a few things clear. One, customers own the files and content they upload to Creative Cloud and edit with Adobe’s tools. Two, Adobe’s generative AI isn’t trained on that content. Three, Adobe never scans local files saved on your computer, only files uploaded to Creative Cloud.

So why is Adobe scanning files saved in its cloud storage at all? That’s the point of contention here. According to Belsky and Rao, the automated scanning systems exist to make sure that files do not contain child sexual abuse material. In less legalistic terms, Adobe uses auto-scanning tools to make sure it isn’t hosting child porn. If the system flags an image, video, or other file, it triggers a manual review by a human. Adobe also reserves the right to view user content “to otherwise comply with the law,” for example, if it’s served a warrant to access private content.

The explanation is an understandable one: no one wants to be associated with child porn, even as a third-party platform. The post says that Adobe will update its terms of service with clarified language on June 18th, a week from today.

That being said, this is a classic non-apology. Despite saying things like “we recognize that trust will be earned,” at no point does the message express regret or admit any error. The writers themselves acknowledge that they’re operating “[i]n a world where customers are anxious about how their data is used, and how generative AI models are trained,” an anxiety that of course extends to Adobe’s own Firefly AI system, which goes unmentioned.

I think a company so deeply invested in the creative field, and one working in the contentious world of generative AI, could have seen this one coming.
