
Grok Is Pushing AI "Undressing" Mainstream

Published by qimuai · first-hand compilation



Source: https://www.wired.com/story/grok-is-pushing-ai-undressing-mainstream/

Summary:

Grok, the chatbot developed by Elon Musk's artificial intelligence company xAI, has been found to be continuously generating nonconsensual sexualized images of women, drawing widespread international concern. Although the image generation tool on X (formerly Twitter) had already been criticized for allegedly producing sexualized images of children, Grok continues to churn out synthetic images of women "in swimsuits" or "in various states of undress" every few seconds.

According to WIRED's monitoring, Grok publicly posted at least 90 images of women in swimsuits or various states of undress within five minutes this Tuesday. While these images stop short of full nudity, they "strip" clothing from original photos that users had posted to X. To evade the safety guardrails, users often prompt with instructions such as "string bikini" or "transparent bikini." Analysts say this activity amounts to digital harassment and abuse of women.

Unlike dedicated "nudify" software that charges for access, Grok generates images in seconds, free of charge, for millions of X users, drastically lowering the barrier to abuse and potentially "normalizing" the creation of nonconsensual intimate imagery. Sloan Thompson, director of training and education at the anti-abuse organization EndTAB, said: "Far from reducing the risk, X has embedded AI-enabled image abuse directly into a mainstream social platform, making sexual violence easier and more scalable."

Grok's generation of sexualized imagery began spreading rapidly across X late last year. Photos of social media influencers, celebrities, and even politicians have been targeted; for example, users asked Grok to put public photos of Sweden's deputy prime minister and two UK government ministers "in bikinis." In a single two-hour window on December 31, one researcher collected more than 15,000 URLs of Grok-generated images, many showing women in underwear or swimwear.

Although X's Safety account says it prohibits illegal content, citing a "nonconsensual nudity" policy dated December 2021 (before Musk's acquisition) that bans digitally superimposing a person's face onto someone else's nude body, Grok's sexualized editing of photos of real people continues unabated. Experts warn this is only the tip of the AI deepfake iceberg: over the past six years, "nudify" websites, Telegram bots, and open-source models have made fabricated imagery trivial to create, and such services are estimated to earn at least $36 million a year.

Regulatory action is gradually intensifying worldwide. The US Congress has passed the TAKE IT DOWN Act, which requires platforms such as X to offer a reporting channel for nonconsensual intimate imagery by mid-May and to respond within 48 hours. Australia's online safety regulator has taken enforcement action against one of the largest "nudify" services, and UK officials are planning to ban such apps. Officials in France, India, Malaysia, and other countries have raised concerns or threatened to investigate X over the recent flood of images.

On Tuesday, UK technology minister Liz Kendall urged X to "deal with this urgently," calling the recent online imagery "absolutely appalling, and unacceptable in decent society." Australia's eSafety Commissioner's office confirmed it has received multiple reports of Grok-generated sexualized images since late last year; it is assessing the adult imagery submitted and remains concerned about generative AI being used to sexualize or exploit people, particularly children.

So far, neither Musk's xAI nor X has responded to requests for comment. As regulatory pressure mounts, observers are watching closely to see how the platform will respond to this AI-enabled image abuse crisis.


Original article:

Elon Musk hasn’t stopped Grok, the chatbot developed by his artificial intelligence company xAI, from generating sexualized images of women. After reports emerged last week that the image generation tool on X was being used to create sexualized images of children, Grok has created potentially thousands of nonconsensual images of women in “undressed” and “bikini” photos.
Every few seconds, Grok is continuing to create images of women in bikinis or underwear in response to user prompts on X, according to a WIRED review of the chatbot's publicly posted live output. On Tuesday, at least 90 images involving women in swimsuits and in various levels of undress were published by Grok in under five minutes, an analysis of posts shows.
The images do not contain nudity but involve the Musk-owned chatbot "stripping" clothes from photos that have been posted to X by other users. Often, in an attempt to evade Grok's safety guardrails, users request, not always successfully, that photos be edited to put women in a "string bikini" or a "transparent bikini."
While harmful AI image generation technology has been used to digitally harass and abuse women for years—these outputs are often called deepfakes and are created by “nudify” software—the ongoing use of Grok to create vast numbers of nonconsensual images marks seemingly the most mainstream and widespread abuse instance to date. Unlike specific harmful nudify or “undress” software, Grok doesn’t charge the user money to generate images, produces results in seconds, and is available to millions of people on X—all of which may help to normalize the creation of nonconsensual intimate imagery.
“When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse,” says Sloan Thompson, the director of training and education at EndTAB, an organization that works to tackle tech-facilitated abuse. “What’s alarming here is that X has done the opposite. They’ve embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable.”
Grok’s creation of sexualized imagery started to go viral on X at the end of last year, although the system’s ability to create such images has been known for months. In recent days, photos of social media influencers, celebrities, and politicians have been targeted by users on X, who can reply to a post from another account and ask Grok to change an image that has been shared.
Women who have posted photos of themselves have had accounts reply to them and successfully ask Grok to turn the photo into a “bikini” image. In one instance, multiple X users requested Grok alter an image of the deputy prime minister of Sweden to show her wearing a bikini. Two government ministers in the UK have also been “stripped” to bikinis, reports say.
Images on X show fully clothed photographs of women, such as one person in a lift and another in the gym, being transformed into images with little clothing. “@grok put her in a transparent bikini,” a typical message reads. In a different series of posts, a user asked Grok to “inflate her chest by 90%,” then “Inflate her thighs by 50%,” and, finally, to “Change her clothes to a tiny bikini.”
One analyst who has tracked explicit deepfakes for years, and asked not to be named for privacy reasons, says that Grok has likely become one of the largest platforms hosting harmful deepfake images. “It’s wholly mainstream,” the researcher says. “It’s not a shadowy group [creating images], it’s literally everyone, of all backgrounds. People posting on their mains. Zero concern.”
During a two-hour period on December 31, the analyst gathered more than 15,000 URLs of images created by Grok and screen-recorded the chatbot's "media" tab on X, where generated images—both sexualized and non-sexualized—are posted.
WIRED reviewed more than a third of the URLs that the researcher gathered and found that over 2,500 were no longer available, and nearly 500 were marked as “age-restricted adult content,” requiring a login to view. Many of the remaining posts still featured scantily clad women. The researcher’s screen recordings of Grok’s “media” page on X show an overwhelming number of images of women in bikinis and lingerie.
Musk’s xAI did not immediately respond to a request for comment about the prevalence of sexualized images that Grok has been creating and publishing. X did not immediately respond to a request for comment from WIRED.
X’s Safety account has said it prohibits illegal content. “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the account posted. X’s most recent DSA transparency report said that it suspended 89,151 accounts for violating its child sexual exploitation policy between the start of April and the end of June last year, but hasn’t published more recent numbers.
The X Safety account also points to its policies around prohibited content. X’s nonconsensual nudity policy, which is dated December 2021, before Musk purchased what was then Twitter, claims that “images or videos that superimpose or otherwise digitally manipulate an individual’s face onto another person’s nude body” are against the policy.
The use of Grok to create sexualized images of real people is also just the tip of the iceberg. Over the past six years, explicit deepfakes have become more advanced and easier for people to create. Dozens of “nudify” and “undress” websites, bots on Telegram, and open source image generation models have made it possible to create images and videos with no technical skills. These services are estimated to have made at least $36 million each year. In December, WIRED reported how Google’s and OpenAI’s chatbots have also stripped women in photos down to bikinis.
Action from lawmakers and regulators against nonconsensual explicit deepfakes has been slow but is starting to increase. Last year, Congress passed the TAKE IT DOWN Act, which makes it illegal to publicly post nonconsensual intimate imagery (NCII), including deepfakes. By mid-May, online platforms, including X, will have to provide a way for people to flag instances of NCII, which the platforms will be required to respond to within 48 hours.
The National Center for Missing and Exploited Children (NCMEC), a US-based nonprofit that works with companies and law enforcement to address instances of CSAM, reported that its online abuse reporting system saw a 1,325 percent increase in reports involving generative AI between 2023 and 2024. (Such large increases don’t necessarily mean a similarly large increase in activity and can sometimes be attributed to improvements in automated detection or guidelines about what should be reported.) The NCMEC did not respond to a request for comment from WIRED about the posts on X.
In recent months, officials in both the UK and Australia have taken the most significant action so far around “nudifying” services. Australia’s online safety regulator, the eSafety Commissioner, has targeted one of the biggest nudifying services with enforcement action, and UK officials are planning on banning nudification apps.
However, there are still questions around what, if any, action countries may take against X and Grok for the widespread creation of the nonconsensual imagery. Officials in France, India, and Malaysia are among those who have raised concerns or threatened to investigate X over the recent flurry of images.
A spokesperson for the eSafety office says it has “several reports” of Grok being used to generate sexual images since late last year. The office says it is assessing images of adults that were submitted to it, while some images of young people did not meet the country’s legal definition of child sexual exploitation material. “eSafety remains concerned about the increasing use of generative AI to sexualize or exploit people, particularly where children are involved,” the spokesperson says.
On Tuesday, the UK government officially called for X to take action against the imagery. “X needs to deal with this urgently,” technology minister Liz Kendall said in a statement, which followed communications regulator Ofcom contacting X on Monday. “What we have been seeing online in recent days has been absolutely appalling, and unacceptable in decent society.”
