Is the Pentagon allowed to surveil Americans with AI?

Summary:
Can the Pentagon use AI to surveil Americans? The law has fallen behind in a race with technology.
Artificial intelligence is supercharging surveillance as never before, and the existing legal framework has clearly not kept pace. A recent public feud between the US Department of Defense and the AI company Anthropic has thrust a pointed question into the spotlight: does US law actually allow the government to use AI for mass surveillance of its own citizens?
The answer is surprisingly murky. More than a decade after Edward Snowden exposed the NSA's bulk collection of Americans' phone metadata, a wide gap remains between what the public believes and what the law permits.
The flashpoint was the Pentagon's desire to use Anthropic's AI model Claude to analyze bulk commercial data on Americans. Anthropic explicitly demanded that its AI not be used for mass domestic surveillance or for autonomous weapons. A week after negotiations broke down, the Defense Department designated Anthropic a "supply chain risk," a label typically reserved for foreign companies that pose a threat to national security.
Meanwhile, OpenAI's deal with the Defense Department allowed its AI to be used for "all lawful purposes." Critics said that wording left the door open to domestic surveillance, and it triggered a wave of protest in which users uninstalled ChatGPT en masse. Under pressure, OpenAI announced on Monday that it had amended the deal to explicitly prohibit the use of its AI for domestic surveillance, and pledged not to provide its services to intelligence agencies such as the NSA.
But can an amended contract really build a wall? Legal experts say the answer hinges on how "surveillance" is defined. Alan Rozenshtein, a law professor at the University of Minnesota Law School, explains that much of what the public would call surveillance or a search does not legally count as either. That means public information such as social media posts, surveillance camera footage, and voter registration records can be obtained lawfully, as can information on Americans picked up "incidentally" through surveillance of foreign nationals.
More notably, the government can purchase commercial data from data brokers, including sensitive personal information such as mobile location and web browsing histories. In recent years, agencies from ICE to the FBI have come to rely on this marketplace, which is fed by the internet's advertising economy. These data sets let the government sidestep the warrants or subpoenas normally required to obtain sensitive data.
"There's a huge amount of information that the government can collect on Americans that is not itself regulated by the Fourth Amendment or by statute," Rozenshtein notes. More troubling still, there are almost no meaningful limits on how the government may use this mass of data.
The root cause is a legal framework out of step with the times. The Fourth Amendment, which protects against unreasonable searches and seizures, was written in an era when gathering information meant entering someone's home. The Foreign Intelligence Surveillance Act of 1978 and the Electronic Communications Privacy Act of 1986 were aimed chiefly at phone wiretaps and email interception. Most surveillance law was settled before the internet took off, and never anticipated today's disruptive combination of massive data generation and AI-powered analysis.
AI can aggregate large volumes of individually non-sensitive information, spotting patterns and inferring connections to build detailed profiles of people at scale. As long as the information was collected lawfully, the government is free to use it however it wants, including feeding it into AI systems. "The law has not caught up with technological reality," Rozenshtein concludes.
Even as surveillance raises serious privacy concerns, the Pentagon may have legitimate national security interests in collecting and analyzing data on Americans. Loren Voss, a former military intelligence officer at the Pentagon, notes that such collection must serve a "very specific subset of missions," such as counterintelligence work targeting Americans who work for a foreign power or plot international terrorism. But she concedes that "this kind of collection does make people nervous."
Back to OpenAI's amended deal: the new language bars "intentional" surveillance of Americans, including surveillance conducted through commercially purchased data. But Jessica Tillipman, a law professor at the George Washington University Law School, notes that this may do little to blunt the sweeping "all lawful purposes" clause. "OpenAI can say whatever it wants in its agreement ... but the Pentagon's gonna use the tech for what it perceives to be lawful," she says, which could well include domestic surveillance. "Most of the time, companies are not going to be able to stop the Pentagon from doing anything."
The deal's wording also leaves open questions about "inadvertent" surveillance and about the surveillance of foreign nationals or undocumented immigrants living in the US. "What happens when there's a disagreement about what the law is, or when the law changes?" Tillipman asks.
OpenAI says it will enforce oversight through technical safeguards and embedded employees, but whether it can truly constrain the Pentagon's lawful use of its AI remains unclear. On the other hand, giving an AI company the power to "pull the plug" in the middle of government operations carries its own national security risks. Voss argues that the key is for Congress to draw clear red lines.
None of these questions are simple; they involve brutally difficult trade-offs between privacy and national security. That is precisely why they should perhaps be decided by the public through the legislative process, not settled in backroom negotiations between the executive branch and a handful of AI companies. For now, military AI is regulated mainly by contracts, not legislation.
Some lawmakers are starting to act. Senator Ron Wyden of Oregon is seeking bipartisan support for legislation to address mass surveillance. He has long championed bills restricting the government's purchase of commercial data; his Fourth Amendment Is Not For Sale Act, introduced in 2021, has not passed but reflects growing legislative concern. "Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance that should not be allowed," he said in a recent statement.
For now, the boundaries of AI surveillance remain in a gray zone. In an era of runaway technology, bringing the law and public oversight up to speed has become an urgent challenge.
English source:
Is the Pentagon allowed to surveil Americans with AI?
Artificial intelligence is supercharging surveillance, and the law has not caught up with it.
The ongoing public feud between the Department of Defense and the AI company Anthropic has raised a deep and still unanswered question: Does the law actually allow the US government to conduct mass surveillance on Americans?
Surprisingly, the answer is not straightforward. More than a decade after Edward Snowden exposed the NSA’s collection of bulk metadata from the phones of Americans, the US is still navigating a gap between what ordinary people think and what the law allows.
The flashpoint in the standoff between Anthropic and the government was the Pentagon’s desire to use Anthropic’s AI Claude to analyze bulk commercial data on Americans. Anthropic demanded that its AI not be used for mass domestic surveillance (or for autonomous weapons, which are machines that can kill targets without human oversight). A week after negotiations broke down, the Pentagon designated Anthropic a supply chain risk, a label typically reserved for foreign companies that pose a threat to national security.
Meanwhile, OpenAI, the rival AI company behind ChatGPT, sealed a deal that allowed the Pentagon to use its AI for “all lawful purposes”—language that critics say left the door open to domestic surveillance. Over the following weekend, users uninstalled ChatGPT in droves. Protesters chalked messages around OpenAI’s headquarters in San Francisco: “What are your redlines?”
OpenAI announced on Monday that it had reworked its deal to make sure that its AI will not be used for domestic surveillance. The company added that its services will not be used by intelligence agencies, such as the NSA.
CEO Sam Altman suggested that existing law prohibits domestic surveillance by the Department of Defense (now sometimes called the Department of War) and that OpenAI’s contract simply needed to reference this law. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he wrote on X. Anthropic CEO Dario Amodei argued the opposite. “To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI,” he wrote in a policy statement.
So, who is right? Does the law allow the Pentagon to surveil Americans using AI?
Supercharged surveillance
The answer depends on what we think counts as surveillance. “A lot of stuff that normal people would consider a search or surveillance … is not actually considered a search or surveillance by the law,” says Alan Rozenshtein, a law professor at the University of Minnesota Law School. That means public information—such as social media posts, surveillance camera footage, and voter registration records—is fair game. So is information on Americans picked up incidentally from surveillance of foreign nationals.
Most notably, the government can purchase commercial data from companies, which can include sensitive personal information like mobile location and web browsing records. In recent years, agencies from ICE and IRS to the FBI and NSA have increasingly tapped into this data marketplace, fueled by an internet economy that harvests user data for advertising. These data sets can let the government access information that might not be available without a warrant or subpoena, which are normally required to obtain sensitive personal data.
“There’s a huge amount of information that the government can collect on Americans that is not itself regulated either by the Constitution, which is the Fourth Amendment, or statute,” says Rozenshtein. And there aren’t meaningful limits on what the government can do with all this data.
That’s because until the last several decades, people weren’t generating massive clouds of data that opened up new possibilities for surveillance. The Fourth Amendment, which protects against unreasonable search and seizure, was written when collecting information meant entering people’s homes.
Subsequent laws, like the Foreign Intelligence Surveillance Act of 1978 or the Electronic Communications Privacy Act of 1986, were passed when surveillance involved wiretapping phone calls and intercepting emails. The bulk of laws governing surveillance were on the books before the internet took off. We weren’t generating vast trails of online data, and the government didn’t have sophisticated tools to analyze the data.
Now we do, and AI supercharges what kind of surveillance can be carried out. “What AI can do is it can take a lot of information, none of which is by itself sensitive, and therefore none of which by itself is regulated, and it can give the government a lot of powers that the government didn’t have before,” says Rozenshtein.
AI can aggregate individual pieces of information to spot patterns, draw inferences, and build detailed profiles of people—at massive scale. And as long as the government collects the information lawfully, it can do whatever it wants with that information, including feeding it to AI systems. “The law has not caught up with technological reality,” says Rozenshtein.
While surveillance can raise serious privacy concerns, the Pentagon can have legitimate national security interests in collecting and analyzing data on Americans. “In order to collect information on Americans, it has to be for a very specific subset of missions,” says Loren Voss, a former military intelligence officer at the Pentagon.
For example, a counterintelligence mission might require information about an American who is working for a foreign country, or plotting to engage in international terrorist activities. But targeted intelligence can sometimes stretch into collecting more data. “This kind of collection does make people nervous,” says Voss.
Lawful use
OpenAI has amended its contract to say that the company’s AI system “shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” in line with relevant laws. The amendment clarifies that this prohibits “deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
But the added language might not do much to override the clause that the Pentagon may use the company’s AI system for all lawful purposes, which could include collecting and analyzing sensitive personal information. “OpenAI can say whatever it wants in its agreement … but the Pentagon’s gonna use the tech for what it perceives to be lawful,” says Jessica Tillipman, a law professor at the George Washington University Law School. That could include domestic surveillance. “Most of the time, companies are not going to be able to stop the Pentagon from doing anything,” she says.
The language also leaves open questions about “inadvertent” surveillance, and the surveillance of foreign nationals or undocumented immigrants living in the US. “What happens when there’s a disagreement about what the law is, or when the law changes?” says Tillipman.
OpenAI did not respond to a request for comment. The company has not publicly shared the full text of its new contract.
Beyond the contract, OpenAI says that it will impose technical safeguards to enforce its red line against surveillance, including a “safety stack” that monitors and blocks prohibited uses. The company also says it will deploy its own employees to work with the Pentagon and remain in the loop. But it’s unclear how a safety stack would constrain the Pentagon’s use of the AI, and to what extent OpenAI’s employees would have visibility into how its AI systems are used. More important, it’s unclear whether the contract gives OpenAI the power to block a legal use of the technology.
But that might not be a bad thing. Giving an AI company power to pull the plug on its technology in the middle of government operations also carries its own risks. “You wouldn’t want the US military to ever be in a situation where they legitimately needed to take actions to protect this country’s national security, and you had a private company turn off technology,” says Voss. But that doesn’t mean there shouldn’t be hard lines drawn by Congress, she says.
None of these questions are simple. They involve brutally difficult trade-offs between privacy and national security. And that’s why perhaps they should be decided by the public—not in backroom negotiations between the executive branch and a handful of AI companies. For now, military AI is being regulated by contracts, not legislation.
Some lawmakers are starting to weigh in. On Monday, Senator Ron Wyden of Oregon will seek bipartisan support for legislation addressing mass surveillance. He has championed bills restricting the government’s purchase of commercial data, including the Fourth Amendment Is Not For Sale Act, which was first introduced in 2021 but has not been passed into law. “Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance that should not be allowed,” he said in a recent statement.
Article title: Is the Pentagon allowed to surveil Americans with AI?
Article link: https://blog.qimuai.cn/?post=3505
All articles on this site are original; please do not use them for any commercial purpose without authorization.