Meta has removed more than a dozen fraudulent, sexualized images of famous female actors and athletes after a CBS News investigation found the AI-generated deepfake images on the company’s Facebook platform.
Dozens of fake, sexualized images of actors Miranda Cosgrove, Jennette McCurdy, Ariana Grande and Scarlett Johansson, and of former tennis star Maria Sharapova, had been shared widely by multiple Facebook accounts, garnering hundreds of thousands of likes and many reshares on the platform.
“We have removed these images for violating our policies and will continue monitoring for other violating posts. This is a challenge across the industry, and we’re continually working to improve our detection and enforcement technology,” Meta spokesperson Erin Logan told CBS News in an emailed statement on Friday.
An analysis of more than a dozen of these images by Reality Defender, a platform that works to detect AI-generated media, showed that many of the photos were deepfake images superimposed onto real photographs. According to Reality Defender’s analysis, some of the images may instead have been created using image-stitching tools that do not involve AI.
“Almost all deepfake pornography does not have the consent of the subject being deepfaked,” Ben Colman, co-founder and CEO of Reality Defender, told CBS News on Sunday. “Such content is growing at a dizzying rate, especially as existing measures to stop such content are rarely implemented.”
CBS News sought comment on this story from Miranda Cosgrove, Jennette McCurdy, Ariana Grande and Maria Sharapova. Johansson declined to comment, according to a representative for the actor.
Under Meta’s Bullying and Harassment policy, the company prohibits “derogatory sexualized photoshop or drawings” on its platforms. The company also bans adult nudity, sexual activity and adult sexual exploitation, and its regulations are intended to block users from sharing or threatening to share nonconsensual intimate images. Meta has also begun deploying “AI info” labels to clearly mark AI-manipulated content.
However, questions remain about the effectiveness of the tech company’s policing of such content. Even after CBS News flagged images that had been widely shared in violation of Meta’s terms, the network found dozens of AI-generated, sexualized images of Cosgrove and McCurdy still publicly available on Facebook.
One such deepfake image of Cosgrove, which was still up over the weekend, had been shared by an account with 2.8 million followers.
The two actors, both former child stars on the Nickelodeon show “iCarly” (Nickelodeon is owned by CBS News’ parent company, Paramount Global), were the subjects of the majority of the public-figure images analyzed by CBS News.
Meta’s Oversight Board, a quasi-independent body made up of experts in the fields of human rights and freedom of expression that issues content-moderation recommendations for Meta’s platforms, told CBS News in an emailed statement that the company’s current regulations around sexualized deepfake content are insufficient.
The Oversight Board cited recommendations it has made to Meta over the past year, urging the company to clarify its rules by updating the ban on “derogatory sexualized photoshop” to specifically include the word “nonconsensual” and to cover other photo-manipulation techniques, such as AI.
The board has also recommended that Meta fold its “derogatory sexualized photoshop” ban into the company’s Adult Sexual Exploitation regulations, so that moderation of such content would be more strictly enforced.
Asked Monday by CBS News about the board’s recommendations, Meta pointed to guidance on its transparency website, which shows that the company previously declined those proposals. But Meta said in a statement that it is still considering ways to signal a lack of consent in AI-generated images. Meta also said it is considering reforms to its Adult Sexual Exploitation policies to “capture the spirit” of the board’s recommendations.
“The Oversight Board has made clear that nonconsensual deepfake intimate images are a serious violation of privacy and personal dignity, disproportionately harming women and girls. These are not just a misuse of technology,” Michael McConnell, a co-chair of the Oversight Board, told CBS News on Friday.
“The board is actively monitoring Meta’s response and will continue to push for stronger safeguards, faster enforcement and broader accountability,” McConnell said.
Meta is not the only social media company to have faced issues with widespread, sexualized deepfake content.
Last year, Elon Musk’s platform X temporarily blocked searches related to Taylor Swift after AI-generated fake pornographic images bearing the singer’s likeness spread on the platform, garnering millions of views and impressions.
“The posting of non-consensual nude (NCN) images on X is strictly prohibited and there is a zero-tolerance policy for such content,” the platform’s safety team said in a post at the time.
A study released by the UK government earlier this month found that the number of deepfake images on social media platforms is growing rapidly, with the government predicting that 8 million deepfakes will be shared in 2025, up from 500,000 in 2023.