Published: 2024-01-02 | Views: 389
During the ongoing Russia-Ukraine war, a video surfaced of Ukrainian President Zelenskyy telling his troops to surrender. It was later proven to be a deepfake.
In the escalating fight against deepfake technology, industries are increasingly turning to a sophisticated technological arsenal to counter the spread of manipulated content. From content authenticity initiatives to invisible watermarking, algorithmic detection tools, collaborative projects, and platform policy changes, the battle against deepfakes is becoming multifaceted and dynamic.
Content Authenticity and Content Provenance
Content authenticity initiatives, led by industry players, aim to cryptographically seal attribution information into content, enabling verification from creation through consumption. Meanwhile, the Coalition for Content Provenance and Authenticity (C2PA) has released an open-standard technical specification focused on providing data about a piece of content's origin, alterations, and contributors.
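The idea of cryptographically sealing provenance can be illustrated with a minimal sketch. This is not the actual C2PA specification (which uses certificate-based signatures over structured manifests); the HMAC key, field names, and helper functions below are simplified stand-ins for illustration only:

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a publisher's signing key; C2PA itself
# uses X.509 certificate-based signatures, not a shared secret.
SIGNING_KEY = b"demo-publisher-key"

def seal_content(content: bytes, provenance: dict) -> dict:
    """Bind provenance metadata to content via a hash plus a keyed signature."""
    manifest = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "provenance": provenance,  # e.g. origin, edits, contributors
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(content: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any alteration breaks the seal."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    if hashlib.sha256(content).hexdigest() != claimed["content_hash"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

photo = b"raw image bytes"
manifest = seal_content(photo, {"creator": "Newsroom A", "tool": "camera"})
print(verify_content(photo, manifest))              # True
print(verify_content(b"tampered bytes", manifest))  # False
```

The key property is the one the initiatives rely on: the seal verifies only if both the content and its attribution metadata are byte-for-byte unchanged.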
What values do we want the tech sector and independent journalism to uphold? The Coalition for Content Provenance and Authenticity is an important milestone in the fight against disinformation – more from Microsoft's @erichorvitz https://t.co/fDaAggbTrV
— Brad Smith (@BradSmi) March 2, 2021
Microsoft went a step further, announcing "Content Credentials as a Service," which uses C2PA digital watermarking credentials to help election candidates and campaigns maintain control over their content.
Watermarking Technology
Meta introduced Stable Signature, an invisible watermarking technique designed to distinguish content created by open-source generative AI models. The watermark is imperceptible to the human eye but algorithmically traceable, helping to identify manipulated images. Google DeepMind has joined the race with SynthID, which lets users embed digital watermarks directly into AI-generated images or audio.
Meet Stable Signature, an invisible watermarking technique for AI-generated images that ensures transparency and accountability in the #GenerativeAI space: https://t.co/zYbe5Ap9Ek pic.twitter.com/GWn6MTtdaF
— Meta for Developers (@MetaforDevs) November 6, 2023
The technology enables users to scan content for the watermark, revealing whether it was created or altered using Google's AI models.
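The embed-then-scan workflow can be sketched with a deliberately simple technique. Stable Signature and SynthID embed watermarks with trained neural networks that survive cropping and compression; the least-significant-bit scheme below is only a toy stand-in to show the principle that a watermark can be invisible to the eye yet machine-readable. The bit pattern is hypothetical:

```python
import numpy as np

# Hypothetical model-identifying bit pattern.
WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide bits in the least significant bit of the first len(bits) pixels.

    Each pixel changes by at most 1 out of 255, so the edit is imperceptible.
    """
    out = pixels.copy()
    flat = out.ravel()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return out

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the least significant bits; a match identifies the model."""
    return pixels.ravel()[:n_bits] & 1

image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
marked = embed_watermark(image, WATERMARK_BITS)
print(np.array_equal(extract_watermark(marked, 8), WATERMARK_BITS))  # True
```

Unlike this toy version, production watermarks are spread redundantly across the whole signal so they survive the transformations that real images undergo.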
Algorithmic Detection
The industry is deploying automated deepfake-detection software that relies on a range of AI-based strategies, such as speaker recognition, voice liveness detection, facial recognition, and temporal analysis. Microsoft's Video Authenticator and Intel's FakeCatcher are notable examples.
#Deepfakes are media created by #AI. The content is made to look like something or someone in real life, but it’s manipulated or completely fake. FakeCatcher is combatting #misinformation with Intel AI optimizations and OpenVINO AI models: https://t.co/ZnCehiaRm2 #IntelON pic.twitter.com/ZpfSzmc6eH
— Intel News (@intelnews) May 11, 2022
However, the challenge lies in the transient nature of detection tools, as evolving deepfake production techniques continually undermine their reliability. One study found accuracy varying widely (30–97%) across datasets, indicating the need for ongoing innovation to keep pace with emerging deepfake technologies.
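The temporal-analysis strategy mentioned above can be sketched at a high level. This is not how Video Authenticator or FakeCatcher actually work internally; the function, thresholds, and scores below are illustrative assumptions showing one common aggregation pattern, where per-frame scores from a frame-level classifier are combined into a video-level verdict:

```python
def classify_video(frame_scores, frame_threshold=0.5, min_flagged_ratio=0.3):
    """Aggregate per-frame fake probabilities into a video-level verdict.

    frame_scores: probabilities from a hypothetical frame-level classifier.
    Flagging the video only when a sufficient fraction of frames look
    synthetic is more robust to a few noisy frames than a single average.
    """
    flagged = sum(1 for score in frame_scores if score > frame_threshold)
    return flagged / len(frame_scores) >= min_flagged_ratio

# Three of six frames exceed the threshold -> video flagged as likely fake.
print(classify_video([0.1, 0.2, 0.15, 0.9, 0.85, 0.8]))  # True
# A single suspicious frame is not enough -> video passes.
print(classify_video([0.1, 0.2, 0.15, 0.9, 0.1, 0.1]))   # False
```

The thresholds here are exactly the knobs the accuracy study implicates: tuned for one dataset, they can perform very differently on footage produced by a newer generation technique.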
Project Origin
Media organizations, including the BBC and Microsoft, collaborated on Project Origin. This initiative seeks to establish an engineering approach to synthetic media, providing digitally signed links for verifiable tracing of media content back to the publisher. The project also aims to implement validation checks to ensure that content remains unaltered during distribution.
The BBC is working with @Microsoft @CBC and @nytimes to tackle disinformation – Project Origin. Read more: https://t.co/c6gdMpkNUw and https://t.co/9IrQEkMe1m #MediaOrigin | @aythora | @mariannaspring | @BBCtrending | @NYTimesRD | @MSFTIssues | @MSFTResearch | #IBCShowcase https://t.co/0EqxwvCqSR
— BBC Research & Development (@BBCRD) September 8, 2020
Platform Policy Changes
In response to the high-risk context of political advertising, major platforms like Meta and Google have announced policy changes to enhance transparency. Meta’s updated political ads disclosure policy mandates advertisers to reveal digitally altered content in election-related or political ads.
YouTube requires creators to disclose the use of synthetic media content, and failure to comply may result in penalties, including content takedown or suspension from the YouTube Partner Program. The Google Play Store has also introduced a policy for AI-generated content, focusing on preventing the creation and distribution of harmful or deceptive content through generative AI applications.
As the battle against deepfakes intensifies, industry interventions and technological solutions are proving essential in preserving the authenticity and trustworthiness of digital content. The collaborative efforts of major players across various sectors reflect a commitment to staying ahead of the evolving landscape of synthetic media.
The post 2024 Tech Trends: Industry Leaders Embrace AI to Counter Deepfake Threats appeared first on Metaverse Post.