“Godmother of AI”: California’s AI bill will damage the U.S. ecosystem

FEI-FEI LI
2024-08-10

Dr. Fei-Fei Li is a computer scientist widely recognized as the “godmother of AI.”

Image credit: COURTESY OF DR. FEI-FEI LI

Today, AI is more advanced than ever. With great power, though, comes great responsibility. Policymakers, alongside those in civil society and industry, are looking to governance that minimizes potential harm and shapes a safe, human-centered AI-empowered society. I applaud some of these efforts yet caution against others; California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, better known as SB-1047, falls into the latter category. This well-meaning piece of legislation will have significant unintended consequences, not just for California, but for the entire country.

AI policy must encourage innovation, set appropriate restrictions, and mitigate the implications of those restrictions. Policy that doesn’t will at best fall short of its goals, and at worst lead to dire, if unintended, consequences.

If passed into law, SB-1047 will harm our budding AI ecosystem, especially the parts of it that are already at a disadvantage to today’s tech giants: the public sector, academia, and “l(fā)ittle tech.” SB-1047 will unnecessarily penalize developers, stifle our open-source community, and hamstring academic AI research, all while failing to address the very real issues it was authored to solve.

First, SB-1047 will unduly punish developers and stifle innovation. In the event of misuse of an AI model, SB-1047 holds liable the party responsible and the original developer of that model. It is impossible for each AI developer—particularly budding coders and entrepreneurs—to predict every possible use of their model. SB-1047 will force developers to pull back and act defensively—precisely what we’re trying to avoid.

Second, SB-1047 will shackle open-source development. SB-1047 mandates that all models over a certain threshold include a “kill switch,” a mechanism by which the program can be shut down at any time. If developers are concerned that the programs they download and build on will be deleted, they will be much more hesitant to write code and collaborate. This kill switch will devastate the open-source community—the source of countless innovations, not just in AI, but across sectors, ranging from GPS to MRIs to the internet itself.

Third, SB-1047 will cripple public sector and academic AI research. Open-source development is important in the private sector, but vital to academia, which cannot advance without collaboration and access to model data. Take computer science students, who study open-weight AI models. How will we train the next generation of AI leaders if our institutions don’t have access to the proper models and data? A kill switch would even further dampen the efforts of these students and researchers, already at such a data and computation disadvantage compared to Big Tech. SB-1047 will sound the death knell for academic AI when we should be doubling down on public-sector AI investment.

Most alarmingly, this bill does not address the potential harms of AI advancement, including bias and deepfakes. Instead, SB-1047 sets an arbitrary threshold, regulating models that use a certain amount of computing power or cost $100 million to train. Far from providing a safeguard, this measure will merely restrict innovation across sectors, including academia. Today, academic AI models fall beneath this threshold, but if we were to rebalance investment in private and public sector AI, academia would fall under SB-1047’s regulation. Our AI ecosystem will be worse for it.

We must take the opposite approach. In various conversations with President Biden over the past year, I have expressed the need for a “moonshot mentality” to spur our country’s AI education, research, and development. SB-1047, however, is overly and arbitrarily restrictive, and will not only chill California’s AI ecosystem but will also have troubling downstream implications for AI across the nation.

I am not anti-AI governance. Legislation is critical to the safe and effective advancement of AI. But AI policy must empower open-source development, put forward uniform and well-reasoned rules, and build consumer confidence. SB-1047 falls short of those standards. I extend an offer of collaboration to Senator Scott Wiener, the bill’s author: Let us work together to craft AI legislation that will truly build the technology-enabled, human-centered society of tomorrow. Indeed, the future of AI depends on it. The Golden State—as a pioneering entity, and home to our country’s most robust AI ecosystem—is the beating heart of the AI movement; as California goes, so goes the rest of the country.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

0條Plus
精彩評(píng)論
評(píng)論

撰寫或查看更多評(píng)論

請(qǐng)打開財(cái)富Plus APP

前往打開
熱讀文章
国产精品久久久久久无码| 国产一线在线视频一区二区三区四区| 无码免费一区二区三区蜜桃 | 亚洲欧美日韩一区| 亚洲中文字幕AⅤ无码性色| 午夜成人理论无码电影在线播放| 久久成人无码专区| 人人做天天爱夜夜爽2020| 亚洲成a人片在线观看中文app| 国产精品高潮呻吟久久av无码午夜鲁丝片| 亚洲伊人久久精品影院| 国产真实老熟女无套内射| 色天使久久综合给合久久| 久青青视频精品免费观看| 欧美FREESEX黑人又粗又大| 精品熟妇视频一区二区三区| 国产精品高潮呻吟久久AV| 51精品国产人成在线观看| 国产美女自为喷水视频| 欧美日韩一级片在线| 少妇被粗大的猛烈进出视频,| 精品人妻一区二区三区| 全部免费毛片在线播放| 国产成人无码A区在线观看视频免费| 欧美内射AAAAAAXXXXX,男人的JJ| 国产无遮挡又爽又黄大胸免费| 精品无码久久久久久久久水蜜桃| 天堂av无码av一区二区三区| 久久人搡人人玩人妻精品首页| 国产福利在线观看精品| 久久久久久久久精品中文字幕一区| 欧美大黑帍在线播放| 久久精品无码专区免费青青| 亚洲中字幕日产AV片在线| 亚洲理论电影在线观看| 亚洲精品无码伊人久久| 久久亚洲色一区二区三区,| 国产成人精品区在线观看| 中国无码人妻丰满熟妇啪啪软件| 69久久夜色精品国产| 日韩 欧美 国产 另类A级|