Why AI predictions are so hard

Source: https://www.technologyreview.com/2026/01/06/1130707/why-ai-predictions-are-so-hard/
Summary:
Why are AI predictions so hard, and why are we still forecasting the technology's prospects for 2026?
AI has spread from a specialist subject into every corner of public life. Over the holidays, people of all ages, from teenagers to grandparents, were discussing the psychological problems chatbots can trigger, worries that data centers are pushing up electricity prices, and whether children should have unrestricted access to AI. Public anxiety is widespread.
When the conversation turns to "what happens next if the technology keeps improving?", people tend to expect a definitive forecast. Yet predicting AI has never been harder, largely because of three big unanswered questions:
First, will large language models keep improving? This technology is the core of the current AI boom, powering everything from AI companions to customer service agents. If its progress slows, that would mark the arrival of a "post-AI-hype era," and the industry landscape could face a major reshuffle.
Second, public resistance to AI is running high. Take the $500 billion data center plan that OpenAI and President Trump announced with great fanfare nearly a year ago: the project has met fierce opposition from communities across the country. Big Tech is waging an uphill battle for public support to keep building, and whether it can win remains unknown.
Third, the regulatory landscape is muddled and contradictory. The federal government is moving to centralize AI regulation, which suits Big Tech; at the same time, forces ranging from progressive California lawmakers to the Federal Trade Commission are each trying to rein in AI companies with different motives and methods (for example, protecting children from chatbots). Whether they can bridge their differences and produce effective regulation is unclear.
AI has its upsides, of course. Older forms of AI such as machine learning are already widely used in scientific research, from AlphaFold, the Nobel Prize-winning protein-prediction tool, to image recognition models that are getting better at identifying cancerous cells. But the real scientific contribution of the newer chatbots built on large language models remains modest: they excel at summarizing existing research, yet headline claims of breakthroughs such as "solving unsolved math problems" have largely been debunked; they can assist doctors with diagnoses, but they can also mislead people into risky self-diagnosis.
Looking ahead to 2026, we may get clearer answers to these questions, but we will certainly face entirely new ones. Predicting AI's future is hard, yet it is an indispensable part of understanding where this technological revolution is headed.
Full article:
Why AI predictions are so hard
And why we're predicting what's next for the technology in 2026 anyway.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Sometimes AI feels like a niche topic to write about, but then the holidays happen, and I hear relatives of all ages talking about cases of chatbot-induced psychosis, blaming rising electricity prices on data centers, and asking whether kids should have unfettered access to AI. It’s everywhere, in other words. And people are alarmed.
Inevitably, these conversations take a turn: AI is having all these ripple effects now, but if the technology gets better, what happens next? That’s usually when they look at me, expecting a forecast of either doom or hope.
I probably disappoint, if only because predictions for AI are getting harder and harder to make.
Despite that, MIT Technology Review has, I must say, a pretty excellent track record of making sense of where AI is headed. We’ve just published a sharp list of predictions for what’s next in 2026 (where you can read my thoughts on the legal battles surrounding AI), and the predictions on last year’s list all came to fruition. But every holiday season, it gets harder and harder to work out the impact AI will have. That’s mostly because of three big unanswered questions.
For one, we don’t know if large language models will continue getting incrementally smarter in the near future. Since this particular technology is what underpins nearly all the excitement and anxiety in AI right now, powering everything from AI companions to customer service agents, its slowdown would be a pretty huge deal. Such a big deal, in fact, that we devoted a whole slate of stories in December to what a new post-AI-hype era might look like.
Number two, AI is pretty abysmally unpopular among the general public. Here’s just one example: Nearly a year ago, OpenAI’s Sam Altman stood next to President Trump to excitedly announce a $500 billion project to build data centers across the US in order to train larger and larger AI models. The pair either did not guess or did not care that many Americans would staunchly oppose having such data centers built in their communities. A year later, Big Tech is waging an uphill battle to win over public opinion and keep on building. Can it win?
The response from lawmakers to all this frustration is terribly confused. Trump has pleased Big Tech CEOs by moving to make AI regulation a federal rather than a state issue, and tech companies are now hoping to codify this into law. But the crowd that wants to protect kids from chatbots ranges from progressive lawmakers in California to the increasingly Trump-aligned Federal Trade Commission, each with distinct motives and approaches. Will they be able to put aside their differences and rein AI firms in?
If the gloomy holiday dinner table conversation gets this far, someone will say: Hey, isn’t AI being used for objectively good things? Making people healthier, unearthing scientific discoveries, better understanding climate change?
Well, sort of. Machine learning, an older form of AI, has long been used in all sorts of scientific research. One branch, called deep learning, forms part of AlphaFold, a Nobel Prize–winning tool for protein prediction that has transformed biology. Image recognition models are getting better at identifying cancerous cells.
But the track record for chatbots built atop newer large language models is more modest. Technologies like ChatGPT are quite good at analyzing large swathes of research to summarize what’s already been discovered. But some high-profile reports that these sorts of AI models had made a genuine discovery, like solving a previously unsolved mathematics problem, were bogus. They can assist doctors with diagnoses, but they can also encourage people to diagnose their own health problems without consulting doctors, sometimes with disastrous results.
This time next year, we’ll probably have better answers to my family’s questions, and we’ll have a bunch of entirely new questions too. In the meantime, be sure to read our full piece forecasting what will happen this year, featuring predictions from the whole AI team.