Multibillion-dollar data centers are sweeping the globe.

Source: https://www.wired.com/story/expired-tired-wired-data-centers/
Summary:
The global AI infrastructure frenzy: tech giants pour trillions into a "digital Roman Empire," with ecological and bubble worries lurking behind the boom
A year ago, when Sam Altman likened OpenAI's ambitions to the actual Roman Empire, he wasn't joking. Today, tech leaders such as Altman, Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, and Oracle cofounder Larry Ellison are, like the Romans expanding their territory, racing to build vast networks of AI data centers around the globe, treating them as the lifeblood of the future economy.
This construction wave is reshaping the industry with unprecedented capital outlays. From the giant mainframes of early computing, through the internet data centers of the late 1990s, to the virtualization of the cloud era, computing infrastructure has evolved continuously. Now the explosion of generative AI has created demands of a higher order: data centers are being rebuilt from the ground up for AI, requiring faster, more efficient chips and pushing the United States into a fever period of AI infrastructure investment.
The partnerships and investments among the giants form a striking circular ecosystem:
- Stargate, the earlier collaboration between OpenAI and Microsoft, has grown into one of the largest AI infrastructure programs in US history, backed by Altman, Ellison, and SoftBank CEO Masayoshi Son, with $100 billion in initial funding and up to $500 billion in planned investment over the coming years.
- Nvidia plans to invest up to $100 billion in OpenAI, contingent on OpenAI committing to purchase an equivalent value of Nvidia compute systems; AMD has likewise offered OpenAI up to 10 percent of the company if OpenAI buys its chips at scale.
- Microsoft, Amazon, and Meta have also announced multibillion-dollar data center investment plans.
Yet the side effects of this digital land grab are beginning to show:
- Massive resource consumption: AI's energy demand is projected to surpass bitcoin mining by the end of this year. Data center cooling draws heavily on municipal water supplies, straining local water sources, and usage figures are often not fully disclosed.
- Rising social costs: data center construction has brought traffic congestion and a surge in accidents near building sites. Around the site of Meta's data center in Louisiana, for example, vehicle crashes have spiked 600 percent this year.
- Bubble worries: the circular invest-and-purchase agreements among the giants have the public and analysts asking whether the industry is overheating and inflating an AI bubble.
Facing the skepticism, tech leaders insist that demand justifies the spending. AMD CEO Lisa Su and others stress that demand for AI is overwhelmingly strong, pointing to the 800 million people who use ChatGPT every week. From the internet to cloud computing to AI, they argue, the technological evolution is irreversible.
But the historical metaphor is worth pondering: even Rome eventually fell. As they envision an AI-driven productivity revolution, the tech giants must still confront long-term tests of their economic models, resource sustainability, and social impact. The outcome of this trillion-dollar bet will determine whether we get a genuine leap in digital civilization or a costly, overheated spree.
English source:
When Sam Altman said one year ago that OpenAI’s Roman Empire is the actual Roman Empire, he wasn’t kidding. In the same way that the Romans gradually amassed an empire of land spanning three continents and one-ninth of the Earth’s circumference, the CEO and his cohort are now dotting the planet with their own latifundia—not agricultural estates, but AI data centers.
Tech executives like Altman, Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, and Oracle cofounder Larry Ellison are fully bought in to the idea that the future of the American (and possibly global) economy is these new warehouses stocked with IT infrastructure. But data centers, of course, aren’t actually new. In the earliest days of computing there were giant power-sucking mainframes in climate-controlled rooms, with coax cables moving information from the mainframe to a terminal computer. Then the consumer internet boom of the late 1990s spawned a new era of infrastructure. Massive buildings began popping up in the backyard of Washington, DC, with racks and racks of computers that stored and processed data for tech companies.
A decade later, “the cloud” became the squishy infrastructure of the internet. Storage got cheaper. Some companies, like Amazon, capitalized on this. Giant data centers continued to proliferate, but instead of a tech company using some combination of on-premise servers and rented data center racks, they offloaded their computing needs to a bunch of virtualized environments. (“What is the cloud?” a perfectly intelligent family member asked me in the mid-2010s, “and why am I paying for 17 different subscriptions to it?”)
All the while tech companies were hoovering up petabytes of data, data that people willingly shared online, in enterprise workspaces, and through mobile apps. Firms began finding new ways to mine and structure this “Big Data,” and promised that it would change lives. In many ways, it did. You had to know where this was going.
Now the tech industry is in the fever-dream days of generative AI, which requires new levels of computing resources. Big Data is tired; big data centers are here, and wired—for AI. Faster, more efficient chips are needed to power AI data centers, and chipmakers like Nvidia and AMD have been jumping up and down on the proverbial couch, proclaiming their love for AI. The industry has entered an unprecedented era of capital investments in AI infrastructure, tilting the US into positive GDP territory. These are massive, swirling deals that might as well be cocktail party handshakes, greased with gigawatts and exuberance, while the rest of us try to track real contracts and dollars.
OpenAI, Microsoft, Nvidia, Oracle, and SoftBank have struck some of the biggest deals. This year an earlier supercomputing project between OpenAI and Microsoft, called Stargate, became the vehicle for a massive AI infrastructure project in the US. (President Donald Trump called it the largest AI infrastructure project in history, because of course he did, but that may not have been hyperbolic.) Altman, Ellison, and SoftBank CEO Masayoshi Son were all in on the deal, pledging $100 billion to start, with plans to invest up to $500 billion into Stargate in the coming years. Nvidia GPUs would be deployed. Later, in July, OpenAI and Oracle announced an additional Stargate partnership—SoftBank curiously absent—measured in gigawatts of capacity (4.5) and expected job creation (around 100,000).
Microsoft, Amazon, and Meta have also shared plans for multibillion-dollar data projects. Microsoft said at the start of 2025 that it was on track to invest “approximately $80 billion to build out AI-enabled data centers to train AI models and deploy AI and cloud-based applications around the world.”
Then, in September, Nvidia said it would invest up to $100 billion in OpenAI, provided that OpenAI made good on a deal to use up to 10 gigawatts of Nvidia’s systems for OpenAI’s infrastructure plans, which means essentially that OpenAI has to pay Nvidia in order to get paid by Nvidia. The following month AMD said it would give OpenAI as much as 10 percent of the chip company if OpenAI purchased and deployed up to 6 gigawatts of AMD GPUs between now and 2030.
It’s the circular nature of these investments that has the general public, and bearish analysts, wondering if we’re headed for an AI bubble burst.
What’s clear is that the near-term downstream effects of these data center build-outs are real. The energy, resource, and labor demands of AI infrastructure are enormous. By some estimates, worldwide AI energy demand is set to surpass demand from bitcoin mining by the end of this year, WIRED has reported. The processors in data centers run hot and need to be cooled, so big tech companies are pulling from municipal water supplies to make that happen—and aren’t always disclosing how much water they’re using. Local wells are running dry or seem unsafe to drink from. Residents who live near data center construction sites are noting that traffic delays, and in some cases car crashes, are increasing. One corner of Richland Parish, Louisiana, home of Meta’s $27 billion Hyperion data center, has seen a 600 percent spike in vehicle crashes this year.
Major proponents of AI seem to suggest that all of this will be worth it. Few top tech executives will publicly entertain the notion that this might be an overshoot, either ecologically or economically. “Emphatically … no,” Lisa Su, the chief executive of AMD, said earlier this month when asked if the AI froth has runneth over. Su, like other execs, cited overwhelming demand for AI as justification for these enormous capital expenditures.
Demand from whom? Harder to pin down. In their minds, it’s everyone. All of us. The 800 million people who use ChatGPT on a weekly basis. The evolution from those 1990s data centers to the 2000s era of cloud computing to new AI data centers wasn’t just one continuum. The world has concurrently moved from the tiny internet to the big internet to the AI internet, and realistically speaking, there’s no going back. Generative AI is out of the bottle. The Sams and Jensens and Larrys and Lisas of the world aren’t wrong about this.
It doesn’t mean they aren’t wrong about the math, though. About their economic predictions. Or their ideas about AI-powered productivity and the labor market. Or the availability of natural and material resources for these data centers. Or who will come once they build them. Or the timing of it all. Even Rome eventually collapsed.