Few would argue fake money poses no threat to the financial system. If you can’t trust that currency is real, buying and selling gets more difficult, with implications for the economy. But according to one noted author, not enough attention goes to another danger: A.I. bots on social media pretending to be real people.
“Now it is possible, for the first time in history, to create fake people—to create billions of fake people,” Israeli historian and author Yuval Noah Harari said last week. “You interact with somebody online, and you don’t know if it’s a real human being or a bot.”
The author of Sapiens—a history of humanity that Bill Gates calls one of his favorite books—made the comments while addressing the UN’s AI for Good summit in Geneva.
He continued, “What happens if you have a social media platform when it’s not just bots that retweet what a human created, but you have millions, potentially billions, of bots that can create content that in many ways is superior to what humans can create, like more convincing, more appealing, whatever—more tailored to your specific personality and life history. If we allow this to happen, then basically humans have completely lost control of the public conversation, and things like democracy will become completely unworkable.”
He warned that “it will do to society what fake money threatens to do to the financial system. If you can’t know who is a real human and who is a fake human, trust will collapse, and with it, at least free society. Maybe dictatorships will be able to manage somehow, but not democracies.”
“AI bot swarms taking over”
Twitter owner Elon Musk is also aware of the bot problem. He tweeted in March that “only verified accounts will be eligible to be in For You recommendations,” calling it “the only realistic way to address advanced AI bot swarms taking over. It is otherwise a hopeless losing battle.”
OpenAI CEO Sam Altman, meanwhile, cofounded a startup called Worldcoin, which offers a way for people to prove they’re a human being and not an A.I. bot, with the help of an iris-scanning device and crypto system. The venture announced a $115 million Series C funding round in May.
Harari called for “very strict rules” against “faking people.”
“If you fake people, or if you allow fake people on your platform without taking effective countermeasures, so maybe we don’t execute you, but you go to 20 years in jail,” he said.
Facing such consequences, tech giants would quickly “find ways to prevent the platforms from being overflown with fake people,” he said.
As for why such rules don’t exist already, he noted that until now creating fake people in such a way “was technically impossible.” Counterfeiting money, by contrast, has long been possible, and governments have enacted “very strict rules” against it to “protect the financial system.”
He noted that he wasn't calling for laws against creating such bots, but rather, "you're not allowed to pass them in public as real people." For example, offering an A.I. doctor is fine and "can be extremely helpful," he said, but only "provided it's very clear that this is not a human doctor…I need to know whether it's a real human being or an A.I."