Rest Easy: The Robots Aren't Going to Turn on Humanity
(Fortune China)
Translator: 樸成奎
Proofreader: 任文科
Eric Horvitz
Distinguished Scientist & Managing Director, Microsoft Research

"There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don't think that's going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life."

Deborah Johnson
Anne Shirley Carter Olsson Professor of Applied Ethics in the Science, Technology, and Society Program in the School of Engineering and Applied Sciences at the University of Virginia

"Presumably in fully autonomous machines all the tasks are delegated to machines. This, then, poses the responsibility challenge. Imagine a drone circulating in the sky, identifying a combat area, determining which of the humans in the area are enemy combatants and which are noncombatants, and then deciding to fire on enemy targets.

"Although drones of this kind are possible, the description is somewhat misleading. In order for systems of this kind to operate, humans must be involved. Humans make the decisions to delegate to machines; the humans who design the system make decisions about how the machine tasks are performed or, at least, they set the parameters within which the machine decisions will be made; and humans decide whether the machines are reliable enough to be delegated tasks in real-world situations."

Michael Littman
Professor of Computer Science, Brown University

"To be clear, there are indeed concerns about the near-term future of AI: algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population. There's also a concern that systemic biases within academia and industry prevent underrepresented minorities from participating and helping to steer the growth of information technology. These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic."