Geoffrey Hinton is the tech pioneer behind some of the key developments in artificial intelligence powering tools like ChatGPT that millions of people are using today. But the 75-year-old trailblazer says he regrets the work he has devoted his life to because of how A.I. could be misused.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told the New York Times in an interview published on May 1. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”
Hinton, often referred to as “the Godfather of A.I.,” spent years in academia before joining Google in 2013 when it bought his company for $44 million. He told the Times Google has been a “proper steward” for how A.I. tech should be deployed and that the tech giant has acted responsibly for its part. But he left the company in May so that he can speak freely about “the dangers of A.I.”
According to Hinton, one of his main concerns is how easy access to A.I. text- and image-generation tools could lead to more fake or fraudulent content being created, and how the average person would “not be able to know what is true anymore.”
Concerns surrounding the improper use of A.I. have already become a reality. Fake images of Pope Francis in a white puffer jacket made the rounds online a few weeks ago, and deepfake visuals showing banks failing under President Joe Biden if he is reelected were published by the Republican National Committee in late April.
As companies like OpenAI, Google, and Microsoft work on upgrading their A.I. products, there are also growing calls to slow the pace of new developments and to regulate a space that has expanded rapidly in recent months. In March, some of the top names in the tech industry, including Apple cofounder Steve Wozniak and computer scientist Yoshua Bengio, signed an open letter asking for a ban on the development of advanced A.I. systems. Hinton didn’t sign the letter, although he believes that companies should think before scaling A.I. technology further.
“I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Hinton is also worried about how A.I. could change the job market by rendering nontechnical jobs irrelevant. He warned that A.I. had the capability to harm more types of roles as well.
“It takes away the drudge work,” Hinton said. “It might take away more than that.”
When asked for a comment about Hinton’s interview, Google emphasized the company’s commitment to a “responsible approach.”
“Geoff has made foundational breakthroughs in A.I., and we appreciate his decade of contributions at Google,” Jeff Dean, the company’s chief scientist, told Fortune in a statement. “As one of the first companies to publish A.I. principles, we remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Hinton did not immediately return Fortune’s request for comment.
A.I.’s “pivotal moment”
Hinton began his career as a graduate student at the University of Edinburgh in 1972. That’s where he first started his work on neural networks, mathematical models that roughly mimic the workings of the human brain and are capable of analyzing vast amounts of data.
His neural network research was the breakthrough concept behind DNNresearch, a company he built with two of his students, which Google ultimately bought in 2013. Hinton won the 2018 Turing Award—the equivalent of a Nobel Prize in the computing world—alongside two colleagues (one of whom was Bengio) for their neural network research, which has been key to the creation of technologies including OpenAI’s ChatGPT and Google’s Bard chatbot.
As one of the key thinkers in A.I., Hinton sees the current moment as “pivotal” and ripe with opportunity. In an interview with CBS in March, Hinton said he believes that A.I. innovations are outpacing our ability to control them, and that is a cause for concern.
“It’s very tricky things. You don’t want some big for-profit companies to decide what is true,” he told CBS Mornings. “Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose A.I. And now I think it may be 20 years or less.”
Hinton added that we could be close to computers being able to come up with ideas to improve themselves. “That’s an issue, right? We have to think hard about how you control that.”
Hinton said that Google is going to be a lot more careful than Microsoft when it comes to training and presenting A.I.-powered products and cautioning users about the information shared by chatbots. Google has been at the forefront of A.I. research for a long time—well before the recent generative A.I. wave caught on. Sundar Pichai, CEO of Google parent Alphabet, has famously likened A.I. to other innovations that have shaped humankind.
“I’ve always thought of A.I. as the most profound technology humanity is working on—more profound than fire or electricity or anything that we’ve done in the past,” Pichai said in an interview aired in April. Just like humans learned to skillfully harness fire despite its dangers, Pichai thinks humans can do the same with A.I.
“It gets to the essence of what intelligence is, what humanity is,” Pichai said. “We are developing technology which, for sure, one day will be far more capable than anything we’ve ever seen before.”