
Instagram chief: AI is so ubiquitous that "it will be more practical to fingerprint real media than fake media"

Published by qimuai | Views: 20 | First-hand translation


Source: https://www.engadget.com/social-media/instagram-chief-ai-is-so-ubiquitous-it-will-be-more-practical-to-fingerprint-real-media-than-fake-media-202620080.html?src=rss

Summary:

Instagram head Adam Mosseri recently argued that, as AI-generated content floods social platforms, it may soon be more practical for platforms to verify authentic content than to chase down fakes. Looking ahead to the trends he expects to shape the platform in 2026, Mosseri acknowledged that AI tools now let "anyone create convincing content" and that feeds are filling up with synthetic material.

Rather than relying on unreliable AI-content labeling, Mosseri proposes building a "fingerprint" system for real content: camera manufacturers would cryptographically sign files at the moment of capture, establishing a traceable record of authenticity at the source. The stance amounts to a tacit admission of the limits of Meta's current detection technology: the company has poured tens of billions of dollars into AI yet still cannot reliably detect AI-generated content on its own platforms.

For creators, Mosseri offered somewhat controversial advice: in an environment dominated by AI content, they should deliberately post "raw" and even "unflattering" images as proof of authenticity. In his view, the traditional aesthetic of polished square photos is obsolete, and camera makers' pursuit of "making everyone look like a professional photographer" is a bet on the wrong aesthetic.

The remarks have raised concerns among photographers and other creators, many of whom are already unhappy with the platform's recommendation algorithm; a permissive attitude toward AI content could push authentic creators further to the margins. Mosseri's idea of shifting responsibility for verifying authenticity onto device makers comes with no concrete implementation plan, and its feasibility remains to be seen.


English source:

Instagram chief: AI is so ubiquitous 'it will be more practical to fingerprint real media than fake media'
Adam Mosseri says creators should prioritize "unflattering" images to prove they are real.
It's no secret that AI-generated content took over our social media feeds in 2025. Now, Instagram's top exec Adam Mosseri has made it clear that he expects AI content to overtake non-AI imagery, a shift with significant implications for the platform's creators and photographers.
Mosseri shared the thoughts in a lengthy post about the broader trends he expects to shape Instagram in 2026. And he offered a notably candid assessment on how AI is upending the platform. "Everything that made creators matter—the ability to be real, to connect, to have a voice that couldn’t be faked—is now suddenly accessible to anyone with the right tools," he wrote. "The feeds are starting to fill up with synthetic everything."
But Mosseri doesn't seem particularly concerned by this shift. He says that there is "a lot of amazing AI content" and that the platform may need to rethink its approach to labeling such imagery by "fingerprinting real media, not just chasing fake."
From Mosseri (emphasis his):
Social media platforms are going to come under increasing pressure to identify and label AI-generated content as such. All the major platforms will do good work identifying AI content, but they will get worse at it over time as AI gets better at imitating reality. There is already a growing number of people who believe, as I do, that it will be more practical to fingerprint real media than fake media. Camera manufacturers could cryptographically sign images at capture, creating a chain of custody.
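To make "cryptographically sign images at capture" concrete, here is a minimal sketch, assuming a per-device Ed25519 keypair and Python's cryptography package. The sign_at_capture and verify_provenance functions, the key handling, and the overall workflow are illustrative assumptions, not a description of anything Mosseri, Meta, or any camera maker has specified.

```python
# Minimal sketch of "sign at capture" provenance, assuming a per-device
# Ed25519 keypair provisioned in hardware. Function names and the workflow
# are illustrative only; no vendor scheme is being described.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ed25519

# At manufacture time: the device is issued its own signing keypair.
device_private_key = ed25519.Ed25519PrivateKey.generate()
device_public_key = device_private_key.public_key()

def sign_at_capture(image_bytes: bytes) -> bytes:
    """Hash the raw capture and sign the digest with the device key."""
    digest = hashes.Hash(hashes.SHA256())
    digest.update(image_bytes)
    return device_private_key.sign(digest.finalize())

def verify_provenance(image_bytes: bytes, signature: bytes,
                      public_key: ed25519.Ed25519PublicKey) -> bool:
    """A platform recomputes the digest and checks the device signature."""
    digest = hashes.Hash(hashes.SHA256())
    digest.update(image_bytes)
    try:
        public_key.verify(signature, digest.finalize())
        return True
    except InvalidSignature:
        return False

# Any edit to the file after capture invalidates the signature.
photo = b"raw sensor output"
sig = sign_at_capture(photo)
print(verify_provenance(photo, sig, device_public_key))                  # True
print(verify_provenance(photo + b" retouched", sig, device_public_key))  # False
```

Signing a digest of the raw file keeps the signature small, and verification needs only the device's public key; any edit after capture breaks the check, which is the "chain of custody" property Mosseri is pointing at.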
On some level, it's easy to understand how this seems like a more practical approach for Meta. As we've previously reported, technologies that are meant to identify AI content, like watermarks, have proved unreliable at best. They are easy to remove and even easier to ignore altogether. Meta's own labels are far from clear and the company, which has spent tens of billions of dollars on AI this year alone, has admitted it can't reliably detect AI-generated or manipulated content on its platform.
That Mosseri is so readily admitting defeat on this issue, though, is telling. AI slop has won. And when it comes to helping Instagram's 3 billion users understand what is real, that should largely be someone else's problem, not Meta's. Camera makers — presumably phone makers and actual camera manufacturers — should come up with their own system, which sure sounds a lot like watermarking "to verify authenticity at capture." Mosseri offers few details about how this would work or be implemented at the scale required to make it feasible.
Mosseri also doesn't really address the fact that this is likely to alienate the many photographers and other Instagram creators who have already grown frustrated with the app. The exec regularly fields complaints from the group, who want to know why Instagram's algorithm doesn't consistently surface their posts to their own followers.
But Mosseri suggests those complaints stem from an outdated vision of what Instagram even is. The feed of "polished" square images, he says, "is dead." Camera companies, in his estimation, are "betting on the wrong aesthetic" by trying to "make everyone look like a professional photographer from the past." Instead, he says that more "raw" and "unflattering" images will be how creators can prove they are real, and not AI. In a world where Instagram has more AI content than not, creators should prioritize images and videos that intentionally make them look bad.

Engadget
