With anti-intellectualism on the rise, the U.S. could lose the global AI race

Joshua New, August 8, 2018
If the United States truly wants to lead the global AI race, the last thing policymakers should do is smother AI's potential with an ineffective regulatory regime.

A scientist adjusts a humanoid robot named ROBOY in the artificial intelligence lab at the University of Zurich. EThamPhoto—Getty Images

The country that wins the global race for dominance in artificial intelligence stands to capture enormous economic benefits, including potentially doubling its economic growth rates by 2035. Unfortunately, the United States is getting bad advice about how to compete.

Over the past year, Canada, China, France, India, Japan, and the United Kingdom have all launched major government-backed initiatives to compete in AI. While the Trump administration has begun to focus on how to advance the technology, it has not developed a cohesive national strategy to match that of other countries. This has allowed the conversation about how policymakers in the United States should support AI to be dominated by proposals from advocates primarily concerned with staving off potential harms of AI by imposing restrictive regulations on the technology, rather than supporting its growth.

AI does pose unique challenges—from potentially exacerbating racial bias in the criminal justice system to raising ethical concerns with self-driving cars—and the leading ideas to address these challenges are to mandate the principle of algorithmic transparency or algorithmic explainability, or to form an overarching AI regulator. However, not only would these measures likely be ineffective at addressing potential challenges, they would significantly slow the development and adoption of AI in the United States.

Proponents of algorithmic transparency contend that requiring companies to disclose the source code of their algorithms would allow regulators, journalists, and concerned citizens to scrutinize the code and identify any signs of wrongdoing. While the complexity of AI systems leaves little reason to believe that this would actually be effective, it would make it significantly easier for bad actors in countries that routinely flout intellectual property protections to steal U.S. source code. This would simultaneously give a leg up to the United States’ main competition in the global AI race and reduce incentives for U.S. firms to invest in developing AI.

Others have proposed algorithmic explainability, where the government would require companies to make their algorithms interpretable to end users, such as by describing how their algorithms work or by only using algorithms that can articulate rationales for their decisions. For example, the European Union has made explainability a primary check on the potential dangers of AI, guaranteeing in its General Data Protection Regulation (GDPR) a person’s right to obtain “meaningful information” about certain decisions made by an algorithm.
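
To make that requirement concrete, here is a minimal sketch in Python of one form such "meaningful information" could take: a per-decision breakdown of how much each input pushed a score up or down. The linear credit-scoring model, feature names, and weights are all hypothetical, and the sketch illustrates the general idea rather than anything GDPR specifically prescribes.

```python
# Minimal sketch of a per-decision rationale for a linear scoring model.
# The feature names, weights, and applicant record are hypothetical.

weights = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4, "late_payments": -2.0}
bias = 0.1

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5, "late_payments": 1.0}

score = bias + sum(weights[f] * applicant[f] for f in weights)
decision = "approve" if score >= 0 else "deny"

# The "meaningful information": each feature's signed contribution to the score,
# ordered by magnitude, so a person can see what drove the decision.
contributions = sorted(
    ((f, weights[f] * applicant[f]) for f in weights),
    key=lambda kv: abs(kv[1]),
    reverse=True,
)

print(f"decision: {decision} (score={score:.2f})")
for feature, value in contributions:
    print(f"  {feature:>15}: {value:+.2f}")
```

For a simple linear model this kind of rationale is easy to produce; the difficulty the article points to is that the most accurate models often do not decompose so cleanly.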

Requiring explainability can be appropriate, and it is already the standard in many domains, such as criminal justice or consumer finance. But extending this requirement to AI decision-making in circumstances where the same standard doesn’t apply for human decisions would be a mistake. It would incentivize businesses to rely on humans to make decisions so they can avoid this regulatory burden, which would come at the expense of productivity and innovation.

Additionally, there can be inescapable trade-offs between explainability and accuracy. An algorithm’s accuracy typically increases with its complexity, but the more complex an algorithm is, the more difficult it is to explain. This trade-off has always existed—a simple linear regression with two variables is easier to explain than one with 200 variables—but the trade-offs become more acute when using more advanced data science methods. Thus, explainability requirements would only make sense in situations where it is appropriate to sacrifice accuracy—and these cases are rare. For example, it would be a terrible idea to prioritize explainability over accuracy in autonomous vehicles, as even slight reductions in navigational accuracy or in a vehicle’s ability to differentiate between a pedestrian on the road and a picture of a person on a billboard could be enormously dangerous.
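
That trade-off can be made concrete with a small experiment. The sketch below, which assumes scikit-learn is available and uses a synthetic dataset with arbitrary parameters, mirrors the article's example by fitting a two-variable linear regression and a 200-variable one on the same task and comparing their test accuracy:

```python
# Rough illustration of the explainability/accuracy trade-off on synthetic data.
# All numbers are arbitrary; this mirrors the 2-variable vs. 200-variable example.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# A synthetic regression task in which many features genuinely carry signal.
X, y = make_regression(n_samples=2000, n_features=200, n_informative=50,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Explainable" model: only two features, easy to narrate, less accurate.
simple = LinearRegression().fit(X_train[:, :2], y_train)

# Complex model: all 200 features, more accurate, far harder to explain.
full = LinearRegression().fit(X_train, y_train)

print("2-variable R^2:  ", round(simple.score(X_test[:, :2], y_test), 3))
print("200-variable R^2:", round(full.score(X_test, y_test), 3))
```

On data where many features carry signal, the 200-variable model scores far better, yet its 200 coefficients are much harder to explain to an affected person than two.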

A third popular but bad idea, championed most notably by Elon Musk, is to create the equivalent of the Food and Drug Administration or National Transportation Safety Board to serve as an overarching AI regulatory body. The problem is that establishing an AI regulator falsely implies that all algorithms pose the same level of risk and need for regulatory oversight. However, an AI system’s decisions, like a human’s decisions, are still subject to a wide variety of industry-specific laws and regulations and pose widely varying levels of risk depending on their application. Subjecting low-risk decisions to regulatory oversight simply because they use an algorithm would be a considerable barrier to deploying AI, limiting the ability of U.S. firms to adopt the technology.

Fortunately, there is a viable way for policymakers to address the potential risks of AI without sabotaging it: Adopt the principle of algorithmic accountability, a light-touch regulatory approach that incentivizes businesses deploying algorithms to use a variety of controls to verify that their AI systems act as intended, and to identify and rectify harmful outcomes. Unlike algorithmic transparency, it would not threaten intellectual property. Unlike algorithmic explainability, it would allow companies to deploy advanced, innovative AI systems, yet still require that they be able to explain certain decisions when context demands it, regardless of whether AI was used in those decisions. And unlike a master AI regulator, algorithmic accountability would ensure regulators could understand AI within their sector-specific domains while limiting the barriers to AI deployment.
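
What such a control might look like in practice is necessarily speculative, but the sketch below shows one simple check a firm could run on a deployed system: comparing outcome rates across groups in a decision log and flagging disparities for human review and correction. The log, the group labels, and the 80% threshold are hypothetical illustrations, not a regulatory standard.

```python
# Minimal sketch of one algorithmic-accountability control: audit a decision log
# for outcome disparities across groups and flag them for review and correction.
# The log contents, group labels, and 0.8 threshold are hypothetical.
from collections import defaultdict

decision_log = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for record in decision_log:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

rates = {group: approvals[group] / totals[group] for group in totals}
best = max(rates.values())

for group, rate in rates.items():
    # An adverse-impact-style check: flag any group whose approval rate falls
    # well below the highest-rate group so the outcome can be investigated.
    flag = "REVIEW" if rate < 0.8 * best else "ok"
    print(f"group {group}: approval rate {rate:.2f} [{flag}]")
```

The point of the principle is that firms operate checks like this and act on what they find, without having to disclose source code or restrict themselves to models simple enough to explain.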

If the United States is to be a serious contender in the global AI race, the last thing policymakers should do is shackle AI with ineffective, economically damaging regulation. Policymakers who want to focus now on unfair or unsafe AI should instead pursue the principle of algorithmic accountability as a means of addressing their concerns without kneecapping the United States as it enters the global AI race.

Joshua New is a senior policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy.
