AI Taking Over the World? Not Anytime Soon
Judging by the news headlines, it would be easy to believe that artificial intelligence (AI) is about to take over the world. Kai-Fu Lee, a Chinese venture capitalist, says that AI will soon create tens of trillions of dollars of wealth, and claims that China and the U.S. are the two AI superpowers.

There is no doubt that AI has incredible potential. But the technology is still in its infancy, and there are no AI superpowers. The race to implement AI has hardly begun, particularly in business. What's more, the most advanced AI tools are open source, which means that everyone has access to them.

Tech companies are generating hype with cool demonstrations of AI, such as Google's AlphaGo Zero, which learned one of the world's most difficult board games in three days and could easily defeat its top-ranked players. Several companies claim breakthroughs with self-driving vehicles. But don't be fooled: the games are special cases, and the self-driving cars are still on their training wheels.

AlphaGo, the predecessor of AlphaGo Zero, developed its intelligence through adversarial self-play, a setup akin to a generative adversarial network in which two AI systems are pitted against each other so that each learns from the other. The trick was that before the systems battled each other, they received a lot of coaching. More importantly, their problems and outcomes were well defined.

Unlike board games and arcade games, business systems don't have defined outcomes and rules. They work with very limited datasets, which are often disjointed and messy. Nor do the computers do the critical business analysis; it is the job of humans to comprehend the information the systems gather and to decide what to do with it. Humans can deal with uncertainty and doubt; AI cannot.

Google's Waymo self-driving cars have collectively driven more than 9 million miles, yet they are nowhere near ready for release. Tesla's Autopilot, after gathering 1.5 billion miles' worth of data, still won't stop at traffic lights.
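The contrast is easier to see in code. Below is a minimal, hypothetical sketch, not DeepMind's actual method, of self-play on a toy game (Nim with 7 stones, take 1 or 2 per turn, last stone wins). Two copies of the same learner improve by playing each other, and the loop works only because the legal moves, the rules, and the win condition are all fully specified, which is exactly what business problems lack.

```python
import random

def legal_moves(stones):
    """Nim: on each turn a player removes 1 or 2 stones."""
    return [m for m in (1, 2) if m <= stones]

def self_play_train(episodes=5000, eps=0.2, lr=0.1, seed=0):
    """Two copies of one learner play each other, sharing a Q-table.

    Q[(stones, move)] estimates the outcome (+1 win / -1 loss) for the
    player about to move. Every rule and reward here is fully defined,
    which is the only reason this tiny loop can learn anything.
    """
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        stones, history = 7, []
        while stones > 0:
            moves = legal_moves(stones)
            # Explore sometimes; otherwise play the best-known move.
            if rng.random() < eps:
                move = rng.choice(moves)
            else:
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        # Whoever took the last stone wins; walk back through the game,
        # flipping the outcome's sign each ply (the players alternate).
        outcome = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + lr * (outcome - old)
            outcome = -outcome
    return Q

Q = self_play_train()

def best(stones):
    return max(legal_moves(stones), key=lambda m: Q.get((stones, m), 0.0))

print(best(7), best(5), best(4))  # the learned policy's opening moves
```

With perfect play the mover always leaves the opponent a multiple of 3 stones: take 1 from 7, take 2 from 5, take 1 from 4. Given enough episodes, the self-play policy converges to exactly that, but only inside this tiny, perfectly specified world.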
Today's AI systems do their best to reproduce the functioning of the human brain's neural networks, but their emulations are very limited. They use a technique called deep learning: after you tell an AI exactly what you want it to learn and provide it with clearly labeled examples, it analyzes the patterns in those data and stores them for future application. The accuracy of its patterns depends on the completeness of the data, so the more examples you give it, the more useful it becomes.

Herein lies a problem, though: an AI is only as good as the data it receives, and it can interpret those data only within the narrow confines of the supplied context. It doesn't "understand" what it has analyzed, so it is unable to apply its analysis to scenarios in other contexts. And it can't distinguish causation from correlation.

The larger issue with this form of AI is that what it has learned remains a mystery: a set of indefinable responses to data. Once a neural network has been trained, not even its designer knows exactly how it does what it does. Researchers call this the black box of AI.

Businesses can't afford to have their systems making unexplained decisions: they face regulatory requirements and reputational concerns, and they must be able to understand, explain, and justify the logic behind every decision they make.

Then there is the issue of reliability. Airlines are installing AI-based facial-recognition systems, and China is building its national surveillance network on such systems. AI is being used for marketing and credit analysis, and to control cars, drones, and robots. It is being trained to analyze medical data and to assist or replace human doctors. The problem is that, in all these uses, AI can be fooled. Google published a paper last December showing that it could trick AI systems into recognizing a banana as a toaster.
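A toy model makes both points concrete. The sketch below (pure Python, with invented data and labels; it is not the model or the attack from Google's paper) trains a tiny linear classifier on labeled examples, then flips its decision by nudging the input against the learned weights, which is the basic idea behind adversarial examples like the banana-to-toaster trick.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Fit weights w and bias b so that sign(w.x + b) matches the labels.

    Supervised pattern-fitting in miniature: the model is only as good
    as the labeled examples it is shown.
    """
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:            # y is +1 or -1
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:           # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Invented labeled data: call class +1 "banana" and class -1 "toaster".
data = [([2.0, 1.5], 1), ([1.8, 2.2], 1), ([0.2, 0.4], -1), ([0.5, 0.1], -1)]
w, b = train_perceptron(data)

x = [1.9, 1.8]                           # clearly in the +1 cluster
assert predict(w, b, x) == 1             # classified as "banana"

# Adversarial nudge: step each feature against the sign of its weight.
# For a linear model this lowers the score by eps * sum(|w_i|), so a
# large enough eps is guaranteed to flip the decision.
eps = 2.0
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]
assert predict(w, b, x_adv) == -1        # same input, now a "toaster"
```

In this two-feature toy the nudge has to be large; in an image model with hundreds of thousands of pixels, every dimension contributes to the score shift, so the per-pixel change can be imperceptibly small. That is what makes such attacks so unsettling.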
Researchers at the Indian Institute of Science have just demonstrated that they could confuse almost any AI system without even using, as Google did, knowledge of the data the system had learned from. With AI, security and privacy are an afterthought, just as they were early in the development of computers and the Internet.

Leading AI companies have handed over the keys to their kingdoms by making their tools open source. Software used to be considered a trade secret, but developers realized that letting others examine and build on their code could lead to great improvements in it. Microsoft, Google, and Facebook have released their AI code to the public, free to explore, adapt, and improve. China's Baidu has also made its self-driving software, Apollo, available as open source.

Software's real value lies in its implementation: what you do with it. Just as China built its own tech companies, and India created a $160 billion IT services industry, on top of tools created by Silicon Valley, anyone can use openly available AI tools to build sophisticated applications. Innovation has now gone global, creating a level playing field, especially in AI.

Vivek Wadhwa is a distinguished fellow at Carnegie Mellon University's College of Engineering. He is the co-author of Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain—and How to Fight Back.