OpenAI CEO Sam Altman has warned that artificial intelligence (A.I.) could pose a risk of extinction.
Technologists and computer science experts are warning that artificial intelligence poses threats to humanity’s survival on par with nuclear warfare and global pandemics, and even business leaders who are leading the charge for A.I. are cautioning about the technology’s existential risks.
Sam Altman, CEO of ChatGPT creator OpenAI, is one of over 300 signatories behind a public “statement of A.I. risk” published Tuesday by the Center for A.I. Safety, a nonprofit research organization. The letter is a single short statement intended to capture the risks associated with A.I.:
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The letter’s preamble said the statement is intended to “open up discussion” on how to prepare for the technology’s potentially world-ending capabilities. Other signatories include former Google engineer Geoffrey Hinton and University of Montreal computer scientist Yoshua Bengio, who are known as two of the Godfathers of A.I. due to their contributions to modern computer science. Both Bengio and Hinton have issued several warnings in recent weeks about what dangerous capabilities the technology is likely to develop in the near future. Hinton recently left Google so that he could more openly discuss A.I.’s risks.
It isn’t the first letter calling for more attention to the potentially disastrous outcomes of advanced A.I. research conducted without stricter government oversight. Elon Musk was one of over 1,000 technologists and experts to call for a six-month pause on advanced A.I. research in March, citing the technology’s destructive potential.
And Altman warned Congress this month that sufficient regulation is already lacking as the technology develops at a breakneck pace.
The more recent note signed by Altman did not outline any specific goals beyond fostering discussion, unlike the earlier letter. Hinton said in an interview with CNN earlier this month that he did not sign the March letter, saying that a pause on A.I. research would be unrealistic given the technology has become a competitive sphere between the U.S. and China.
“I don’t think we can stop the progress,” he said. “I didn’t sign the petition saying we should stop working on A.I. because if people in America stop, people in China wouldn’t.”
But while executives from leading A.I. developers including OpenAI and even Google have called on governments to move faster on regulating A.I., some experts warn that it is counterproductive to discuss the technology’s future existential risks when its current problems, including misinformation and potential biases, are already wreaking havoc. Others have even argued that by publicly discussing A.I.’s existential risks, CEOs like Altman have been trying to distract from the technology’s current issues, which are already creating problems, including facilitating the spread of fake news just in time for a pivotal election year.
But A.I.’s doomsayers have also warned that the technology is developing fast enough that existential risks could become a problem faster than humans can keep tabs on. Fears are growing in the community that superintelligent A.I., which would be able to think and reason for itself, is closer than many believe, and some experts warn that the technology is not currently aligned with human interests and well-being.
Hinton said in an interview with the Washington Post this month that the horizon for superintelligent A.I. is moving up fast and could now be only 20 years away, and that now is the time to have conversations about advanced A.I.’s risks.
“This is not science fiction,” he said.