Adobe Says Its New ‘Firefly’ AI Image Generator Doesn't Steal Other People's Art

Adobe Stock website on a laptop screen, close up.

Adobe said it was using millions of assets from its Stock service to train its new AI, though it has not explicitly said which model it’s using.

On Tuesday, Adobe introduced its own generative AI model family called Firefly, claiming, like young people tend to do in the throes of pubescence, that it’s not like the other scrappy AI image generators. Instead of using training data dredged up from the internet, Adobe said it crafted its service using only images from its stock image site, openly licensed work, and public domain content.

Folks can already request access to the Adobe Firefly beta, though as of reporting time the service seems to be inundated with requests and the page fails to load. The service is initially available only online, and it includes your usual text-to-image diffusion AI model with some slight twists. Users input a prompt as normal, but Adobe said they can then modify the resulting image with a different aspect ratio, lighting, or even an art style.

Beyond the company’s big future plans for this AI, the most interesting aspect of Firefly is what the company claims is in its training data. Adobe said the current model is “trained on a dataset of Adobe Stock, along with openly licensed work and public domain content where copyright has expired.”

Alexandru Costin, Adobe’s VP of generative AI, told The Verge that the system can’t generate content featuring other brands or IP simply because it “has never seen that brand content or trademark.”

In its FAQ, the company was adamant that Adobe customers won’t have their content used to train Firefly, though the company said its license agreement already allows it to use contributed stock images for that purpose. The company said it was developing a “compensation model” for contributors, but added it would only have more details on that approach once Firefly was out of beta. Adobe told Gizmodo that it would have more details on new features in the coming months.

Adobe has been notably absent from the AI image debate, even though Adobe Stock is one of the most popular stock image services on the market. Shutterstock, another stock image service, already introduced its own AI image generator earlier this year based on OpenAI’s DALL-E, and it has claimed it plans to compensate artists whose images were used to train the AI. Getty Images has gone the opposite route, banning all AI-generated images and even suing the makers of Stable Diffusion, alleging they stole the site’s copyrighted images to use in their training data.

So Adobe seems to be trying to straddle the knife’s edge, not wanting to be left out of the AI rat race but also unwilling to be caught up in the wide-ranging debate about training data. Gizmodo has learned Firefly was trained on over 330 million assets from Adobe Stock, plus millions more from external sources. Compare that to LAION-5B, the dataset of scraped web images used to train Stable Diffusion, which includes over 5.85 billion images and their textual metadata.

The AI system Adobe is using isn’t based on a single model, but on a collection of systems that analyze the prompt before it’s fed into the diffusion model. The company is not naming the diffusion model it’s using, but according to its press release the system is supposed to avoid biased AI images and other harmful content. Future versions of Firefly will use assets and training data from “Adobe and others.”

While it’s limited at the start, the company is also advertising that users will eventually be able to work with these AI systems directly within its Creative Cloud and Document Cloud software. Adobe also outlined plans to let users create custom vectors, brushes, and textures usable in its other programs, like Photoshop and Illustrator. And while text-to-video is still a nascent frontier in generative AI, Adobe said users will be able to alter video footage in massive ways, even changing the “mood” of an image or its weather from summer to winter with a single prompt.

And while it’s still up in the air, Adobe said it wants to let customers personalize and further train the AI with their own branded content. Time will tell how customers react to the new systems. When DeviantArt revealed its own AI image generator, the community was widely annoyed that the system opted users’ images into training by default. In Adobe’s case, however, it seems stock contributors won’t have much, if any, choice at all.

Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators and Everything We Know About OpenAI’s ChatGPT.
