Fake News 2.0: Will A.I. Upend the U.S. Election?

JEREMY KAHN
2023-04-24

Social media wreaked havoc on previous elections. Now generative A.I. looms. Are we ready?

文本設(shè)置
小號(hào)
默認(rèn)
大號(hào)
Plus(0條)

預(yù)計(jì)人工智能將為那些針對(duì)選民的虛假信息提供不懈動(dòng)力——但也有可能為打擊這種虛假信息做出貢獻(xiàn)。圖片來(lái)源:SCOTTY PERRY—BLOOMBERG/GETTY IMAGES

ON FEB. 27, the eve of Chicago’s mayoral election, a Twitter account calling itself Chicago Lakefront News posted an image of candidate Paul Vallas, a former city budget director and school district chief who was in a tight four-way contest for the city’s top job, along with an audio recording. On the soundtrack, Vallas seems to downplay police shootings, saying that “in my day” a cop could kill as many as 18 civilians in his career and “no one would bat an eye.” The audio continues, “This ‘Defund the Police’ rhetoric is going to cause unrest and lawlessness in the city of Chicago. We need to stop defunding the police and start refunding them.”

As it turned out, Vallas said none of those things. The audio was quickly debunked as a fake, likely created with easily accessible artificial intelligence software that clones voices. The Chicago Lakefront News account, which had been set up just days before the recording was posted, quickly deleted the post—but not before the tweet had been seen by thousands and widely recirculated, with some apparently tricked into believing the “recording” was authentic. The audio had little impact on the mayoral race: Vallas won a plurality and advanced to a runoff election, which he ultimately lost by a narrow margin. But the Vallas voice clone is a scary preview of the sort of misinformation experts say we should expect to face in the 2024 U.S. presidential election, thanks to rapid advances in A.I. capabilities.

These new A.I. systems are collectively referred to as “generative A.I.” ChatGPT, the popular text-based tool that spits out student term papers and business emails with a few prompts, is just one example of the technology. A company called ElevenLabs has released software that can clone voices from a sample just a few seconds long, and anyone can now order up photorealistic still images using software such as OpenAI’s DALL-E 2, Stable Diffusion, or Midjourney. While the ability to create video from a text prompt is more nascent—New York–based startup Runway has created software that produces clips a few seconds in length—a scammer skilled in deepfake techniques can create fake videos good enough to fool many people.

“We should be scared shitless,” says Gary Marcus, professor emeritus of cognitive science at New York University and an A.I. expert who has been trying to raise the alarm about the dangers posed to democracy by the large language models underpinning the tech. While people can already write and distribute misinformation (as we’ve seen with social media in past elections), it is the ability to do so at unprecedented volume and speed—and the fact that non-native speakers can now craft fluent prose in most languages with a few keystrokes—that makes the new technology such a threat. “It is hard to see how A.I.-generated misinformation will not become a major force in the next election,” he says.

The new A.I. tools, Marcus says, are particularly useful for a nation-state, such as Russia, where the goal of propaganda is less about persuasion than simply overwhelming a target audience with an avalanche of lies and half-truths. A Rand Corporation study dubbed this tactic “the firehose of falsehood.” The objective, it concluded, was to sow confusion and destroy trust, making people more likely to believe information shared by social connections than by experts.

Not everyone is sure the situation is as dire as Marcus suggests—at least not yet. Chris Meserole, a fellow at the Brookings Institution who specializes in the impact of A.I. and emerging technologies, says recent presidential elections have already witnessed such high levels of human-written misinformation that he isn’t sure that the new A.I. language models will make a noticeable difference. “I don’t think this will completely change the game and 2024 will look significantly different than 2020 or 2016,” he says.

Meserole also doesn’t think video deepfake technology is good enough yet to play a big role in 2024 (though he says that could change in 2028). What does worry Meserole today is voice clones. He could easily imagine an audio clip surfacing at a key moment in an election, purporting to be a recording of a candidate saying something scandalous in a private meeting. Those present in the meeting might deny the clip’s veracity, but it would be difficult for anyone to know for sure.

Studies have come to conflicting conclusions on whether false narratives persuade anyone or only reinforce existing beliefs, says Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute. But in a close election, even such marginal effects could be decisive.

Faced with the threat of machine-generated fake news, some believe A.I. may itself offer the best defense. In Spain, a company called Newtral that specializes in fact-checking claims made by politicians is experimenting with large language models similar to those that power ChatGPT. While these models can’t actually verify facts, they can make humans better at debunking lies, says Newtral chief technology officer Ruben Miguez Perez. The technology can flag when a piece of content is making a factual claim worth checking, and it can detect other content promoting the same narrative, a process called “claim matching.” By pairing large language models with other machine learning software, Miguez Perez says it’s also possible to assess the likelihood of something being misinformation based on the sentiments expressed in the content. Using these methods, Newtral has cut the time it takes to identify statements worth fact-checking by 70% to 80%, he says.
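
Newtral has not published its code, but the “claim matching” step it describes can be illustrated with off-the-shelf tools. The sketch below is a minimal, hypothetical version using the open-source sentence-transformers library: it embeds incoming text and compares it against claims that human fact-checkers have already debunked, flagging anything above a similarity threshold. The model name, threshold, and sample claims are illustrative assumptions, not details Newtral has disclosed.

```python
# Hypothetical sketch of "claim matching": flag new content whose claims
# closely resemble a previously debunked narrative. Newtral's actual
# pipeline is proprietary; this only illustrates the general technique.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Claims already debunked by human fact-checkers (illustrative examples).
debunked_claims = [
    "A cop could kill 18 civilians in his career and no one would bat an eye.",
    "Mail-in ballots were counted twice in the last election.",
]
debunked_embeddings = model.encode(debunked_claims, convert_to_tensor=True)

def match_claim(text: str, threshold: float = 0.7):
    """Return the most similar debunked claim and its score, or None."""
    embedding = model.encode(text, convert_to_tensor=True)
    scores = util.cos_sim(embedding, debunked_embeddings)[0]
    best = int(scores.argmax())
    if float(scores[best]) >= threshold:
        return debunked_claims[best], float(scores[best])
    return None

print(match_claim("In my day an officer could shoot 18 people and nobody blinked."))
```

A production pipeline would run claim detection first, filtering out opinion and rhetoric so that only check-worthy factual statements reach the matching step.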

The large social media platforms, such as Meta and Google’s YouTube, have been working on A.I. systems that do similar things. In the run-up to the 2020 U.S. presidential election, Facebook parent company Meta says it displayed warnings on more than 180 million pieces of content that were debunked by third-party fact-checkers. Still, plenty of misinformation slips through. Memes, which rely on both images and text to convey a point, are particularly tricky for A.I. models to catch. And while Meta says its systems have only gotten better since the 2020 election, the people promoting false narratives are continually devising new variations that A.I. models haven’t seen before.

What might make some difference, Marcus says, is sensible regulation: Those creating large language models should be required to create “digital watermarks” that make it easier for other algorithms to identify A.I.-created content. OpenAI, ChatGPT’s creator, has talked about this kind of watermarking, but has yet to implement it. Meanwhile, it has released free A.I.-content detection software, but it works only in about a third of cases. Marcus also says Congress should make it illegal to manufacture and distribute misinformation at scale. While First Amendment advocates might object, he says the framers of the Constitution never imagined technology that could produce infinite reams of convincing lies at the press of a button.
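
OpenAI has not described how such a watermark would work. One scheme from the research literature, often called a “green list” watermark, gives a sense of the statistics involved; the sketch below illustrates that idea and is not any vendor’s actual method. It assumes the generator secretly biased each token toward a pseudorandom half of the vocabulary keyed to the preceding token, so a detector holding the key can count how often that half appears and compute a z-score.

```python
# Illustrative "green list" watermark detector (research-literature idea,
# not OpenAI's unpublished method). Assumes the generator favored tokens
# that fall in a pseudorandom subset keyed to the previous token.
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: int, token: int) -> bool:
    """Pseudorandomly assign `token` to the green list, keyed by the previous token."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_FRACTION

def watermark_z_score(tokens: list[int]) -> float:
    """z-score of the green-token count; large positive values suggest a watermark."""
    n = len(tokens) - 1
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / stddev

# Ordinary text scores near 0; text whose generator consistently favored
# green tokens scores several standard deviations above it.
```

A detector like this needs a reasonably long, lightly edited passage; heavy paraphrasing washes the signal out, which is one reason watermarking is usually discussed as a complement to, not a replacement for, content detectors like the one OpenAI released.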

Then again, the late 18th century, when the U.S. was founded, was a golden era of misinformation as well, with anonymous pamphlets and partisan newspapers peddling scurrilous tales about opposing politicians and parties. Democracy survived then, notes Oxford’s Wachter. So perhaps it will this time too. But it could be a campaign unlike any we’ve ever witnessed before.
