Riding the Lensa wave: is it safe to use photo editing apps with artificial intelligence?

If you're dying to try out AI photo apps like Lensa, FaceApp or ToonMe, but are worried about how your personal information will be used, you have reason to be: before joining the trendy app, you need to read the fine print of the "terms of use". In the case of Lensa, there is a fairly clear and serious usage policy. But is every app of this type safe? We spoke with Mariana Rielli, a researcher at Data Privacy Brasil, about the subject.

“Regarding Lensa specifically, there is a more robust privacy policy, with clearer categories, than in other cases that became famous in the past”, explains Mariana Rielli of Data Privacy Brasil.

“However, most of the information is available only in English, which is an accessibility barrier for users who do not speak the language, since automatic translators can get in the way of the easy understanding that a policy aimed at a broad public should provide”, she adds.

Celebrities posted Lensa avatars. Image: Reproduction/Twitter

Since Lensa became a craze, with magical avatars filling Instagram and Twitter with “digital” versions of ourselves, there has also been a flood of security alerts.

There were also conspiracy theories that we would become guinea pigs for artificial intelligence experiments, handing over our personal data “for free”.

But don't worry: it's not that difficult to understand what it's all about and avoid falling into traps.

What do we need to know before joining the fashion app?

“It is necessary to understand what is collected, whether it is stored, who it is shared with, which company is responsible, and who the controller and operator are (classifications under the law). For the lay user, there needs to be a clear and easily understandable policy that highlights the main information: for what purpose each piece of data is used, who uses it and with whom it is shared, when and how the data is discarded, and what the contact and complaint channels are in case of a problem, among others”, advises the researcher.

Specifically about Lensa, Mariana Rielli says the company is emphatic that it discards the original photos after 24 hours, but the policy authorizes the company to use the artwork created from those photos, including modifying it later.

Furthermore, Lensa collects information about the user's online activities (behavioral tracking). At this point the policy mentions express consent, but it also says that if the user does not want this collection, they must change the device settings or contact the company, which contradicts the statement about prior consent, as the researcher explains.

There are other risks with artificial intelligence applications related to the source data, in this case the photos. The researcher therefore recommends seeking answers to some important questions:

“What is done with them? Is other data also required? Do the applications connect with social media accounts? How long is the information retained? And when and how is it discarded?”

Is there any point in deleting the avatars from social networks and uninstalling the app once you have already used it?

According to Mariana Rielli, Lensa's policy allows the company to use not the original photos, which are deleted after 24 hours, but the avatars created from them, in addition to possibly retaining other personal data provided or collected.

“Deleting the application does not mean deleting the account, and the policy itself makes this point clear. To ensure that at least the other personal data is discarded, it would be necessary to delete the account. For any specific request related to personal data, the policy suggests contacting the company directly at [email protected]”, she explains.

Other risks: overexposure and a distorted self-image?

Image: Unsplash

In addition to concerns about data security, before using any photo app it is also important to think about overexposure on the internet and even image distortion, a modern phenomenon that is spreading among younger people.

“Speaking of applications in general, there are several possible risks, such as the intensification of image distortion problems, especially among younger people, due to the proliferation of this type of technology, something that already happens in a more diluted form with the use of filters in photo and video apps”, warns Mariana Rielli.

The expert is pointing to a deeper issue: how people of different ages relate to technology and to their (self-)image, something that goes beyond any specific application or technology.
