The Internet’s Favorite New Photo App Is Using Your Selfies to Train Its AI

Photo-editing app Lensa has grown massively popular over the last week as social media has been flooded with people posting AI-generated selfies from the program’s latest feature.

For $3.99, Lensa users can upload 10 to 20 images of themselves and then receive 50 selfies generated by the app’s artificial intelligence in a variety of art styles.

But before you slam the purchase button, a word of warning: Lensa’s privacy policy and terms of use stipulate that the images users submit to generate their selfies, or “Face Data,” can be used by Prisma Labs, the company behind Lensa, to further train the AI’s neural network.

An artificial neural network like the one used by Lensa, or the popular text-to-image generator Dall-E 2, studies vast quantities of data to learn how to create better and better results. To be able to convert simple sentences into surprisingly well-crafted images, Dall-E 2 was trained on hundreds of millions of images to learn the associations between different words and different visual characteristics. Similarly, Lensa’s neural network is continuously learning how to more accurately portray faces.

This face data, which includes the position, orientation, and topology of the face, is harvested using Apple’s TrueDepth API—the same face-tracking technology that lets iPhone users unlock their phones just by looking at the screen. It is this data that is fed into the neural network; it is not, however, sold to third parties.
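For a sense of what that data looks like in practice, here is a minimal, purely illustrative sketch in Swift of how an iOS app can read face position, orientation, and mesh topology from the TrueDepth camera through Apple’s ARKit framework. This is not Lensa’s code; the class name is hypothetical, and the calls shown are standard ARKit face tracking.

```swift
import ARKit

// Illustrative sketch only: reads the kind of "face data" described above
// (position, orientation, and face topology) from the TrueDepth camera.
final class FaceCaptureController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires a device with a TrueDepth camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // Called as ARKit updates its anchors; ARFaceAnchor carries the face-tracking data.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            let pose = faceAnchor.transform                        // position and orientation of the face
            let vertexCount = faceAnchor.geometry.vertices.count   // mesh points describing face topology
            print("Face pose: \(pose), mesh vertices: \(vertexCount)")
        }
    }
}
```

Each update, ARKit hands the app a transform describing where the face sits and how it is oriented, plus a mesh of roughly 1,200 vertices describing its shape—the sort of “Face Data” the policy appears to refer to.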

For writer and former model Maya Kotomori, who recently uploaded her own Lensa selfies, it’s unclear whether these AI-assisted generators are problematic. In the artistic community, apps like Dall-E 2 have been controversial, as illustrators worry both about losing income and about their work being scraped to feed the neural networks. In many cases, people have used AI generators to churn out images in a particular artist’s style, without consent or payment.

What is the effect, however, when individuals give up their own faces?

Kotomori said she purchased two packages of selfies because the first set of snapshots made her look like a white woman, though she is a fair-skinned black person. Upon submitting more photos and paying a second time, Kotomori received selfies she was happier with.

“As a fair-skinned black person, I definitely think the AI drew a lot of conclusions based on my skin tone. The second batch I received back looked more like me,” Kotomori wrote in a direct message. “Then I kind of started kicking myself—did I just aid in teaching an AI how to recognize racial nuance? How can this help/hurt society in the long run? The answer is: I have absolutely no idea.”
