This year, the number of child exploitation reports filed by OpenAI rose sharply.

Source: https://www.wired.com/story/openai-child-safety-reports-ncmec/
Summary:
Data recently published by OpenAI show that in the first half of 2025, the company's reports of child sexual exploitation incidents to the CyberTipline of the National Center for Missing & Exploited Children (NCMEC) surged to 80 times the figure for the same period last year. The CyberTipline is the Congressionally authorized clearinghouse for reports of child sexual abuse material (CSAM) and other forms of child exploitation.
Under US law, companies must report apparent child exploitation they discover to the tipline. NCMEC reviews each report and forwards it to the appropriate law enforcement agency for investigation.
The sharp change in report volume needs to be read in light of several factors. OpenAI spokesperson Gaby Raila said the company made investments toward the end of 2024 "to increase its capacity to review and action reports in order to keep pace with user growth." She also noted that the rollout of new product surfaces supporting image uploads, together with the products' growing popularity, contributed to the rise in reports. In the first half of 2025, ChatGPT's weekly active user count reached four times its level in the same period a year earlier.
In the first half of 2025, the number of reports OpenAI filed (75,027) was roughly equal to the number of pieces of content involved (74,559); in the same period of 2024, it filed 947 reports covering 3,252 pieces of content. Both figures grew sharply year over year.
This growth is in line with the broader trend amid the rise of generative AI. NCMEC's analysis found that reports involving generative AI rose by 1,325 percent between 2023 and 2024. While large AI labs such as Google also publish figures on the reports they file with NCMEC, they do not break out what share of those reports is AI-related.
Over the past year, OpenAI and its peers have faced mounting scrutiny on child safety. Over the summer, attorneys general from 44 US states sent a joint letter to multiple AI companies, warning that they would use every legal tool available to protect children from predatory AI products. Both OpenAI and Character.AI face multiple lawsuits alleging that their chatbots contributed to the deaths of minors. The US Senate Judiciary Committee held a hearing on the harms of AI chatbots, and the Federal Trade Commission launched a market study of AI companion bots.
OpenAI has recently rolled out a series of safety measures: in September it added parental controls to ChatGPT, letting parents turn off voice mode, image generation, and model-training data sharing, with parents or law enforcement notified when a teen appears at risk of self-harm; in late October it reached an agreement with the California Department of Justice, committing to keep reducing AI risks to teens; and in November it released its Teen Safety Blueprint, stressing continual improvements to CSAM detection and the reporting of confirmed violating content to NCMEC and other bodies.
(Note: report statistics are shaped by multiple factors, including changes to platforms' automated moderation rules and reporting criteria; the same piece of content can be reported multiple times, and a single report can cover multiple pieces of content. Platforms such as OpenAI publish both report counts and content counts to give a more complete picture.)
Original English text:
OpenAI sent 80 times as many child exploitation incident reports to the National Center for Missing & Exploited Children during the first half of 2025 as it did during a similar time period in 2024, according to a recent update from the company. The NCMEC’s CyberTipline is a Congressionally authorized clearinghouse for reporting child sexual abuse material (CSAM) and other forms of child exploitation.
Companies are required by law to report apparent child exploitation to the CyberTipline. When a company sends a report, NCMEC reviews it and then forwards it to the appropriate law enforcement agency for investigation.
Statistics related to NCMEC reports can be nuanced. Increased reports can sometimes indicate changes in a platform’s automated moderation, or the criteria it uses to decide whether a report is necessary, rather than necessarily indicating an increase in nefarious activity.
Additionally, the same piece of content can be the subject of multiple reports, and a single report can be about multiple pieces of content. Some platforms, including OpenAI, disclose the number of both the reports and the total pieces of content they were about for a more complete picture.
OpenAI spokesperson Gaby Raila said in a statement that the company made investments toward the end of 2024 “to increase [its] capacity to review and action reports in order to keep pace with current and future user growth.” Raila also said that the time frame corresponds to “the introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports.” In August, Nick Turley, vice president and head of ChatGPT, announced that the app had four times as many weekly active users as it did the year before.
During the first half of 2025, the number of CyberTipline reports OpenAI sent was roughly the same as the amount of content those reports covered: 75,027 reports about 74,559 pieces of content. In the first half of 2024, it sent 947 CyberTipline reports about 3,252 pieces of content. Both the number of reports and the number of pieces of content they covered saw a marked increase between the two time periods.
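As a quick arithmetic check on the figures above (a minimal sketch using only the counts stated in the update), the year-over-year ratios work out as follows. The report count grew roughly 79-fold, which OpenAI rounds to "80 times," while the content count grew by a smaller factor because 2024's reports covered several pieces of content each:

```python
# Year-over-year growth ratios from OpenAI's published CyberTipline counts.
reports_2024_h1, reports_2025_h1 = 947, 75_027
content_2024_h1, content_2025_h1 = 3_252, 74_559

report_ratio = reports_2025_h1 / reports_2024_h1   # ~79.2, rounded up to "80 times"
content_ratio = content_2025_h1 / content_2024_h1  # ~22.9

print(f"Reports grew {report_ratio:.1f}x; content covered grew {content_ratio:.1f}x")
```

The gap between the two ratios reflects that in 2024 each report covered about 3.4 pieces of content on average, while in 2025 the two counts were nearly one-to-one.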
Content, in this context, could mean multiple things. OpenAI has said that it reports all instances of CSAM, including uploads and requests, to NCMEC. Besides its ChatGPT app, which allows users to upload files—including images—and can generate text and images in response, OpenAI also offers access to its models via API access. The most recent NCMEC count wouldn’t include any reports related to video-generation app Sora, as its September release was after the time frame covered by the update.
The spike in reports follows a similar pattern to what NCMEC has observed at the CyberTipline more broadly with the rise of generative AI. The center’s analysis of all CyberTipline data found that reports involving generative AI saw a 1,325 percent increase between 2023 and 2024. NCMEC has not yet released 2025 data, and while other large AI labs like Google publish statistics about the NCMEC reports they’ve made, they don’t specify what percentage of those reports are AI-related.
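For readers comparing NCMEC's percentage figure with OpenAI's "80 times" framing, note that the two statistics use different scales: a 1,325 percent increase means the count grew to about 14.25 times its prior level, not 1,325 times. A one-line conversion makes this explicit:

```python
# Convert a percent increase into a growth multiplier: new = old * (1 + pct / 100).
def multiplier_from_pct_increase(pct: float) -> float:
    return 1 + pct / 100

print(multiplier_from_pct_increase(1325))  # 14.25
```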
OpenAI’s update comes at the end of a year where the company and its competitors have faced increased scrutiny over child safety issues beyond just CSAM. Over the summer, 44 state attorneys general sent a joint letter to multiple AI companies including OpenAI, Meta, Character.AI, and Google, warning that they would “use every facet of our authority to protect children from exploitation by predatory artificial intelligence products.” Both OpenAI and Character.AI have faced multiple lawsuits from families or on behalf of individuals who allege that the chatbots contributed to their children’s deaths. In the fall, the US Senate Committee on the Judiciary held a hearing on the harms of AI chatbots, and the US Federal Trade Commission launched a market study on AI companion bots that included questions about how companies are mitigating negative impacts, particularly to children. (I was previously employed by the FTC and was assigned to work on the market study prior to leaving the agency.)
In recent months, OpenAI has rolled out new safety-focused tools more broadly. In September, OpenAI rolled out several new features for ChatGPT, including parental controls, as part of its work “to give families tools to support their teens’ use of AI.” Parents and their teens can link their accounts, and parents can change their teen’s settings, including by turning off voice mode and memory, removing the ability for ChatGPT to generate images, and opting their kid out of model training. OpenAI said it could also notify parents if their teen’s conversations showed signs of self-harm, and potentially also notify law enforcement if it detected an imminent threat to life and wasn’t able to get in touch with a parent.
In late October, to cap off negotiations with the California Department of Justice over its proposed recapitalization plan, OpenAI agreed to “continue to undertake measures to mitigate risks to teens and others in connection with the development and deployment of AI and of AGI.” The following month, OpenAI released its Teen Safety Blueprint, in which it said it was constantly improving its ability to detect child sexual abuse and exploitation material, and reporting confirmed CSAM to relevant authorities, including NCMEC.
Article link: https://blog.qimuai.cn/?post=2575
All articles on this site are original; please do not use them for any commercial purpose without authorization.