Image credits: Getty Images via AFP

Facebook and Instagram will label digitally altered content as “made with AI”

Meta, the owner of Facebook and Instagram, announced major changes on Friday to its policies for digitally created and altered media, ahead of elections that will test its ability to police misleading content generated by artificial intelligence.


The social media giant will begin applying “Made with AI” labels in May to AI-generated videos, images and audio published on Facebook and Instagram, expanding a policy that previously addressed only a small portion of edited videos, Vice President of Content Policy Monika Bickert said in a blog post.

Bickert said Meta would also apply separate, more prominent labels to digitally altered media that pose a “particularly high risk of materially misleading the public about an important matter,” regardless of whether the content was created using AI or other tools. Meta will begin applying the more prominent “high risk” labels immediately, a spokesperson said.

The approach will shift the company's handling of manipulated content from removing a limited set of posts to keeping content online while telling viewers how it was made.


Meta previously announced a scheme to detect images made with other companies' generative AI tools via invisible markers embedded in the files, but did not give a start date at the time.

A company spokesperson said the labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual reality headsets, are covered by different rules.

The changes come months before the US presidential election in November, which technology researchers warn could be transformed by generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the limits of guidelines issued by vendors like Meta and the generative AI market leader OpenAI.


In February, Meta's oversight board found the company's existing rules on manipulated media “incoherent” after reviewing a video of Joe Biden posted on Facebook last year that altered real footage to falsely suggest the US president had behaved inappropriately.

The footage was allowed to remain online because Meta's existing “manipulated media” policy prohibits deceptively altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never said.

The board said the policy should also apply to non-AI content, which is “not necessarily less misleading” than AI-generated content, as well as to audio-only content and to videos showing people doing things they never actually did.

