Meta oversight board tells company to clean up rules on AI-generated pornography
July 25, 2024 6:26 AM
By Katie Paul
NEW YORK (Reuters) – Meta’s Oversight Board on Thursday said the company’s rules were “not sufficiently clear” in barring sexually explicit AI-generated depictions of real people and called for changes to stop such imagery from circulating on its platforms.
The board, which is funded by the social media giant but operates independently, issued its ruling after reviewing two pornographic fakes of famous women created using artificial intelligence and posted on Meta’s Facebook and Instagram.
Meta said it would review the board’s recommendations and provide an update on any changes adopted.
In its report, the board identified the two women only as female public figures from India and the United States, citing privacy concerns.
The board found both images violated Meta’s rule barring “derogatory sexualized photoshop,” which the company classifies as a form of bullying and harassment, and said Meta should have removed them promptly.
In the case involving the Indian woman, Meta failed to review a user report of the image within 48 hours, prompting the ticket to be closed automatically with no action taken.
The user appealed, but the company again declined to act, and only reversed course after the board took up the case, it said.
In the American celebrity’s case, Meta’s systems automatically removed the image.
“Restrictions on this content are legitimate,” the board said. “Given the severity of harms, removing the content is the only effective way to protect the people impacted.”
The board recommended Meta update its rule to clarify its scope, saying, for example, that use of the word “photoshop” is “too narrow” and the prohibition should cover a broad range of editing techniques, including generative AI.
The board also faulted Meta for declining to add the Indian woman’s image to a database that enables automatic removals like the one that occurred in the American woman’s case.
According to the report, Meta told the board it relies on media coverage to determine when to add images to the database, a practice the board called “worrying.”
“Many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance,” the board said.