Elon Musk has repeatedly referred to AI as a “civilizational risk.” Geoffrey Hinton, one of the founding fathers of AI research, changed his tune recently, calling AI an “existential threat.” And then there’s Mustafa Suleyman, cofounder of DeepMind, a firm formerly backed by Musk that has been on the scene for over a decade, and coauthor of the newly released “The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma.” One of the most prominent and longest-tenured experts in the field, he thinks such far-reaching concerns aren’t as pressing as others make them out to be, and in fact, the challenge from here on out is pretty straightforward.
The risks posed by AI have been front and center in public debates throughout 2023 since the technology vaulted into the public consciousness, becoming the subject of fascination in the press. “I just think that the existential-risk stuff has been a completely bonkers distraction,” Mustafa told MIT Technology Review last week. “There’s like 101 more practical issues that we should all be talking about, from privacy to bias to facial recognition to online moderation.”
The most pressing issue, in particular, should be regulation, he says. Suleyman is bullish on governments across the world being able to effectively regulate AI. “I think everybody is having a complete panic that we’re not going to be able to regulate this,” Suleyman said. “It’s just nonsense. We’re totally going to be able to regulate it. We’ll apply the same frameworks that have been successful previously.”
His conviction is born in part of the successful regulation of past technologies that were once considered cutting edge, such as aviation and the internet. He argues: Without proper safety protocols for commercial flights, passengers would never have trusted airlines, which would have hurt business. On the internet, consumers can visit a myriad of sites, but activities like selling drugs or promoting terrorism are banned, although not eliminated entirely.
On the other hand, as the Review’s Will Douglas Heaven noted to Suleyman, some observers argue that current internet regulations are flawed and don’t sufficiently hold big tech companies accountable. In particular, Section 230 of the Communications Decency Act, one of the cornerstones of current internet legislation, offers platforms safe harbor for content posted by third-party users. It’s the foundation on which some of the biggest social media companies are built, shielding them from any liability for what gets shared on their websites. In February, the Supreme Court heard two cases that could alter the legislative landscape of the internet.
To bring AI regulation to fruition, Suleyman wants a combination of broad, international regulation to create new oversight institutions and smaller, more granular policies at the “micro level.” A first step that all aspiring AI regulators and developers can take is to limit “recursive self-improvement,” that is, AI’s ability to improve itself. Limiting this specific capability would be a critical first step toward ensuring that no future development happens entirely without human oversight.
“You wouldn’t want to let your little AI go off and update its own code without you having oversight,” Suleyman said. “Maybe that should even be a licensed activity—you know, just like for handling anthrax or nuclear materials.”
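To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of what a human-in-the-loop gate on self-modification could look like. The `SelfUpdateGate` and `CodeChangeProposal` names and the approval flow are assumptions invented for illustration, not anything Suleyman, DeepMind, or any regulator has actually specified.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CodeChangeProposal:
    """A change the system wants to apply to its own codebase."""
    description: str
    diff: str

class SelfUpdateGate:
    """Queues proposed self-updates and blocks them until a human approves."""

    def __init__(self) -> None:
        self.pending: List[CodeChangeProposal] = []

    def propose(self, proposal: CodeChangeProposal) -> None:
        # The system may only queue changes; it never applies them itself.
        self.pending.append(proposal)

    def review(self, index: int, approved_by: Optional[str]) -> bool:
        # A named human reviewer is required before a change is released.
        proposal = self.pending[index]
        if not approved_by:
            print(f"Blocked (no human reviewer): {proposal.description}")
            return False
        print(f"Released after approval by {approved_by}: {proposal.description}")
        return True

gate = SelfUpdateGate()
gate.propose(CodeChangeProposal("Tweak retrieval ranking", "--- a/rank.py ..."))
gate.review(0, approved_by=None)        # stays blocked without oversight
gate.review(0, approved_by="reviewer")  # goes out only with human sign-off
```

The point of the sketch is simply that self-modification becomes an auditable, permissioned step rather than something the system does on its own, which is the spirit of treating it as a “licensed activity.”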
Without governing some of the minutiae of AI, including at times the “actual code” used, legislators will have a hard time ensuring their laws are enforceable. “It’s about setting boundaries, limits that an AI can’t cross,” Suleyman says.
To make sure that happens, governments should be able to get “direct access” to AI developers to ensure they don’t cross whatever boundaries are eventually established. Some of those boundaries should be clearly marked, such as prohibiting chatbots from answering certain questions, or privacy protections for personal data.
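As one illustration of what a “clearly marked” boundary could mean in practice, here is a minimal sketch assuming a hypothetical chatbot wrapper: a hard-coded topic blocklist that forces a refusal, plus a simple email-redaction pass standing in for broader personal-data protection. The topics, regex, and function names are invented for this example and are not drawn from any real product or regulation.

```python
import re

# Assumed, illustrative boundary definitions; a real policy would be far richer.
BLOCKED_TOPICS = ("synthesize a pathogen", "build a weapon")
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_personal_data(text: str) -> str:
    """Mask email addresses, one stand-in for personal data, before logging."""
    return EMAIL_PATTERN.sub("[redacted email]", text)

def answer(question: str) -> str:
    """Refuse questions that hit a blocked topic; otherwise pass them along."""
    lowered = question.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that request."
    # A real system would call the underlying model here.
    return f"(model response to: {redact_personal_data(question)})"

print(answer("How do I build a weapon at home?"))
print(answer("Summarize the email from jane.doe@example.com"))
```

Regulators auditing a developer “directly” could, in principle, inspect exactly this kind of enforcement layer rather than relying on the company’s description of it.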
Governments worldwide are working on AI regulations
During a speech at the UN Tuesday, President Joe Biden sounded a similar tune, calling for world leaders to work together to mitigate AI’s “enormous peril” while making sure it is still used “for good.”
And domestically, Senate Majority Leader Chuck Schumer (D-N.Y.) has urged lawmakers to move swiftly in regulating AI, given the rapid pace of change in the technology’s development. Last week, Schumer invited executives from the biggest tech companies, including Tesla CEO Elon Musk, Microsoft CEO Satya Nadella, and Alphabet CEO Sundar Pichai, to Washington for a meeting to discuss prospective AI regulation. Some lawmakers were skeptical of the decision to invite executives from Silicon Valley to discuss the policies that would seek to regulate their companies.
One of the earliest governmental bodies to regulate AI was the European Union, which in June passed draft legislation requiring developers to share what data is used to train their models and severely restricting the use of facial recognition software—something Suleyman also said should be limited. A Time report found that OpenAI, which makes ChatGPT, lobbied EU officials to weaken some portions of their proposed legislation.
China has also been one of the earliest movers on sweeping AI legislation. In July, the Cyberspace Administration of China released interim measures for governing generative AI services, including explicit requirements to adhere to existing copyright laws and establishing which types of developments would need government approval.
Suleyman for his part is convinced governments have a critical role to play in the future of AI regulations. “I love the nation-state,” he said. “I believe in the power of regulation. And what I’m calling for is action on the part of the nation-state to sort its shit out. Given what’s at stake, now is the time to get moving.”