You Will Soon Have a Robot Coworker
Microsoft founder Bill Gates recently suggested that robots primed to replace humans in the workplace should be taxed. While Gates's proposal received a mixed reception, it mainly served to stoke an erroneous narrative that humans need to fear robots stealing their jobs. The whole idea of implementing a robot tax is premature, though not quite 50 to 100 years in the future, as Treasury Secretary Steven Mnuchin believes. Before we can start talking about the capitalization of artificial intelligence (AI) and taxing robots, we need to investigate, decipher, and tackle the serious challenges in the way of making robots work effectively for the general consumer and in the workplace.

Within the next five years, robots will be able to perform tasks that affect the traditionally human workforce in significant and irreversible ways. But first, the people who build and program all forms of AI need to ensure their wiring keeps robots from doing more harm than good.

It remains to be seen how important maintaining a human element, managerial or otherwise, will be to the success of departments and offices that choose to employ robots (over people) to perform administrative, data-rich tasks. Certainly, though, a superior level of humanity will be required to make wide-ranging decisions and consistently act in the best interest of the actual humans involved in work-related encounters in fully automated environments. In short, humans will need to establish workforce standards and build training programs for AI and robots geared toward filling the ethical gaps in robotic cognition.

Enabling AI and robots to make autonomous decisions is one of the trickiest areas for technologists and builders to navigate. Engineers have an occupational responsibility to train robots with the right data so that they make the right calculations and come to the right decisions. Particularly complex challenges could arise in the areas of compliance and governance.
Humans go through compliance training in order to understand performance standards and personnel expectations. Similarly, we need to design robots and AI with a complementary compliance framework to govern their interactions with humans in the workplace. That would mean creating universal policies covering equal opportunity and diversity in the human workforce, enforcing anti-bribery laws, and curbing all forms of fraudulent activity. Ultimately, we need to create a code of conduct for robots that mirrors the professional standards we expect from people. To accomplish this, builders will need to leave room for robots to be accountable for, learn from, and eventually self-correct their own mistakes.

AI and robots will need to be trained to make the right decisions in countless workplace situations. One way to do this would be to create a rewards-based learning system that motivates robots and AI to achieve high levels of productivity. Ideally, such an engineer-crafted system would make bots "want to" exceed expectations from the moment they receive their first reward.

Under the current "reinforcement learning" paradigm, a single AI or robot receives positive or negative feedback depending on the outcome generated when it takes a certain action. If we can construct rewards for individual robots, it should be possible to use this feedback approach at scale, ensuring that a combined network of robots operates efficiently, adjusts based on a diverse set of feedback, and remains generally well behaved. In practice, rewards should be based not just on what an AI or robot does to achieve an outcome, but also on how well the way it accomplishes that result aligns with human values.

But before we think about taxing robots and AI, we need to get the basics of this self-learning technology right and develop comprehensive ethical standards that hold up over the long term.
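The reward-shaping idea above, scoring an agent not only on whether it completes a task but on how it does so, can be sketched with a toy reinforcement-learning loop. This is a minimal illustration only, not any production system: the actions and the `alignment_penalty` term are invented for the example.

```python
import random

# Toy sketch of reward-based learning with a value-alignment term.
# The agent can finish a task by "cutting corners" (fast but against
# the rules it was given) or by "following policy" (aligned behavior).
ACTIONS = ["cut_corners", "follow_policy"]

def reward(action):
    """Combine task outcome with an alignment penalty, so *how* a
    result is achieved matters, not just *that* it is achieved."""
    task_reward = 1.0  # both actions complete the task
    alignment_penalty = 2.0 if action == "cut_corners" else 0.0
    return task_reward - alignment_penalty

def train(episodes=500, alpha=0.1, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {a: 0.0 for a in ACTIONS}  # single-state value table
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore the alternative
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        q[a] += alpha * (reward(a) - q[a])  # one-step update
    return q

if __name__ == "__main__":
    print(train())  # the aligned action ends up with the higher value
```

Because the penalty outweighs the raw task reward, the agent quickly learns that the aligned behavior is the profitable one, which is the essence of encoding "how, not just what" into the reward signal.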
Builders need to ensure that the AI they are creating has the ability to learn and improve enough to be ethical, adaptable, and accountable before it replaces traditionally human-held jobs. Our responsibility is to make AI that significantly improves upon the work humans do. Otherwise, we will end up replicating our mistakes and replacing human-held jobs with robots that have an ill-defined purpose.

Kriti Sharma is the vice president of bots and AI at Sage Group.