Meta just announced it’s pushing further into the AI chip race, coming right on the heels of Google’s own announcement of its Axion AI chip. Both companies are touting their new semiconductor designs as key to the development of their AI platforms, and as alternatives to the Nvidia chips they—and the rest of the tech industry—have been relying on to power AI data centers.
Hardware is emerging as a key AI growth area. For Big Tech companies with the money and talent to do so, developing in-house chips helps reduce dependence on outside designers such as Nvidia and Intel while also allowing firms to tailor their hardware specifically to their own AI models, boosting performance and saving on energy costs.
These in-house AI chips that Google and Meta just announced pose one of the first real challenges to Nvidia’s dominant position in the AI hardware market. Nvidia controls more than 90% of the AI chips market, and demand for its industry-leading semiconductors is only increasing. But if Nvidia’s biggest customers start making their own chips instead, its soaring share price, up 87% since the start of the year, could suffer.
“From Meta’s point of view … it gives them a bargaining tool with Nvidia,” Edward Wilford, an analyst at tech consultancy Omdia, told Fortune. “It lets Nvidia know that they’re not exclusive, [and] that they have other options. It’s hardware optimized for the AI that they are developing.”
Why does AI need new chips?
AI models require massive amounts of computing power because of the huge amount of data required to train the large language models behind them. Conventional computer chips simply aren’t capable of processing the trillions of data points AI models are built upon, which has spawned a market for AI-specific computer chips, often called “cutting-edge” chips because they’re the most powerful devices on the market.
Semiconductor giant Nvidia has dominated this nascent market: The wait list for Nvidia’s $30,000 flagship AI chip is months long, and demand has pushed the firm’s share price up almost 90% in the past six months.
And rival chipmaker Intel is fighting to stay competitive. It just released its Gaudi 3 AI chip to compete directly with Nvidia. AI developers—from Google and Microsoft down to small startups—are all competing for scarce AI chips, limited by manufacturing capacity.
Why are tech companies starting to make their own chips?
Both Nvidia and Intel can produce only a limited number of chips because they and the rest of the industry rely on Taiwanese manufacturer TSMC to actually assemble their chip designs. With only one manufacturer solidly in the game, the manufacturing lead time for these cutting-edge chips is multiple months. That’s a key factor that led major players in the AI space, such as Google and Meta, to resort to designing their own chips.

Alvin Nguyen, a senior analyst at consulting firm Forrester, told Fortune that chips designed by the likes of Google, Meta, and Amazon won’t be as powerful as Nvidia’s top-of-the-line offerings—but that could benefit the companies in terms of speed. They’ll be able to produce them on less specialized assembly lines with shorter wait times, he said.
“If you have something that’s 10% less powerful but you can get it now, I’m buying that every day,” Nguyen said.
Even if the native AI chips Meta and Google are developing are less powerful than Nvidia’s cutting-edge AI chips, they could be better tailored to each company’s specific AI platform. Nguyen said that in-house chips designed for a company’s own AI platform could be more efficient and save on costs by eliminating unnecessary functions.
“It’s like buying a car. Okay, you need an automatic transmission. But do you need the leather seats, or the heated massage seats?” Nguyen said.
“The benefit for us is that we can build a chip that can handle our specific workloads more efficiently,” Melanie Roe, a Meta spokesperson, wrote in an email to Fortune.
Nvidia’s top-of-the-line chips sell for about $25,000 apiece. They’re extremely powerful tools, and they’re designed to be good at a wide range of applications, from training AI chatbots to generating images to developing recommendation algorithms such as the ones on TikTok and Instagram. That means a slightly less powerful but more tailored chip could be a better fit for a company such as Meta, which has invested in AI primarily for its recommendation algorithms rather than consumer-facing chatbots.
“The Nvidia GPUs are excellent in AI data centers, but they are general purpose,” Brian Colello, equity research lead at Morningstar, told Fortune. “There are likely certain workloads and certain models where a custom chip might be even better.”
The trillion-dollar question
Nguyen said that more specialized in-house chips could have added benefits by virtue of their ability to integrate into existing data centers. Nvidia chips consume a lot of power, and they give off a lot of heat and noise—so much so that tech companies may be forced to redesign or move their data centers to integrate soundproofing and liquid cooling. Less powerful native chips, which consume less energy and release less heat, could solve that problem.
AI chips developed by Meta and Google are long-term bets. Nguyen estimated that these chips took roughly a year and a half to develop, and it’ll likely be months before they’re implemented at a large scale. For the foreseeable future, the entire AI world will continue to depend heavily on Nvidia (and, to a lesser extent, Intel) for its computing hardware needs. Indeed, Mark Zuckerberg recently announced that Meta was on track to own 350,000 Nvidia chips by the end of this year (the company’s set to spend around $18 billion on chips by then). But movement away from outsourcing computing power and toward native chip design could loosen Nvidia’s chokehold on the market.
“The trillion-dollar question for Nvidia’s valuation is the threat of these in-house chips,” Colello said. “If these in-house chips significantly reduce the reliance on Nvidia, there’s probably downside to Nvidia’s stock from here. This development is not surprising, but the execution of it over the next few years is the key valuation question in our mind.”