![](https://images1.caifuzhongwen.com/images/attachement/jpg/site1/20250127/080027dbedf82901466201.jpg)
近日,OpenAI首席執(zhí)行官山姆·奧特曼(Sam Altman)與美國總統(tǒng)唐納德·特朗普(Donald Trump)以及軟銀集團(SoftBank)和甲骨文(Oracle)的領導人一道,大力宣傳“星際之門項目”。該項目預計將在美國投資5000億美元建設數(shù)據(jù)中心,以滿足未來幾年人工智能使用量大幅增長的預期?!靶请H之門項目”將獲得OpenAI、軟銀集團、甲骨文和阿聯(lián)酋人工智能投資者MGX總計1000億美元的先期投資,奧特曼稱其為“這個時代最重要的項目”。
無論你是否認同他的觀點,“星際之門項目”都可以說是科技行業(yè)有史以來最大的一場豪賭。畢竟,除了令人咋舌的高昂成本和天文數(shù)字般的能源需求(可能與整個城市的電力需求相當),這筆巨額投資毫無回報保障。鑒于當今的人工智能技術尚處于通用技術的起步階段,如何從這等規(guī)模的人工智能中獲取盈利,無人能給出確切答案。而且,盡管OpenAI可能認為“星際之門項目”對于開發(fā)“惠及全人類”的通用人工智能至關重要,但事實是,關于通用人工智能,業(yè)界甚至尚未形成公認的定義(最常見的定義是能在某些關鍵任務上與人類相匹敵的人工智能)。即便達成了共識,賓夕法尼亞大學沃頓商學院(University of Pennsylvania’s Wharton School)管理學教授伊桑·莫利克(Ethan Mollick)也在X平臺上指出,“對于大多數(shù)人而言,一個擁有通用人工智能的世界究竟會是何種模樣,至今仍缺乏清晰的愿景”。對于那些認為通用人工智能即將到來的人來說,他寫道:“5到10年后,日常生活會是什么樣子?”
多年來,科技領域的其他高風險投資無論是從耗資之巨還是不確定性之高來看,皆難以與“星際之門項目”相提并論:二戰(zhàn)期間為研制原子彈而開展的曼哈頓計劃改變了歷史。然而,為該項目提供支持的是政府,而非私營企業(yè),而且該項目還有一個優(yōu)勢,那就是建立在眾所周知的科學基礎上。另一方面,人工智能領域的創(chuàng)新者則是在押注一個無人完全理解的結果。
另一個例子是科技公司在云計算基礎設施上投入數(shù)千億美元。與人工智能不同,進軍云計算服務有明確的商業(yè)邏輯,而且資金投入已持續(xù)十多年之久。與此同時,Meta對元宇宙(即虛擬世界)的癡迷以500億美元的失敗告終。但這不過是首席執(zhí)行官馬克·扎克伯格(Mark Zuckerberg)短暫分心而已。
當然,還有互聯(lián)網(wǎng)泡沫時期,其中有成功也有失敗。但那是一場全行業(yè)的豪賭,不像"星際之門項目"那樣風險集中。
當然,這些在人工智能領域進行最新一輪豪賭的科技公司無疑擁有雄厚的財力作為支撐。它們高達數(shù)萬億美元的估值以及來自投資者的空白支票,更不必提及來自州、地方和聯(lián)邦政府的財政激勵和補貼,都讓這場豪賭變得容易一些。畢竟,它們的商業(yè)使命就是追逐科技領域最新、最前沿的技術。
盡管如此,“星際之門項目”的賭注還是空前巨大的,因為奧特曼和特朗普不僅將其視為一項技術飛躍,更將其視為國家的當務之急。他們將其描述為能鞏固美國在人工智能領域領先中國地位的項目,承諾創(chuàng)造10萬個新就業(yè)崗位,并極大地推動經(jīng)濟發(fā)展。特朗普甚至稱其為美國“黃金時代”的開端,而甲骨文(Oracle)執(zhí)行董事長拉里·埃里森(Larry Ellison)聲稱,該項目有望在癌癥治療方面帶來突破。
但并非所有人都相信這種炒作。加里·馬庫斯(Gary Marcus)等批評人士認為,人工智能的變革潛力被極大地夸大了,他警告稱,在大規(guī)模過度投資之后,美國經(jīng)濟或?qū)⒚媾R嚴重后果。事實上,當"星際之門項目"首次宣布時,馬庫斯稱其為"史上第二糟糕的人工智能投資"——僅次于過去十年間投入數(shù)十億美元但成果寥寥的自動駕駛汽車。另一些人,比如人工智能研究先驅(qū)約書亞·本吉奧(Yoshua Bengio),則持更為悲觀的看法,他們認為,人工智能非但不會帶來繁榮,反而會如此深刻地重塑世界,以至于對人類的生存構成威脅。
開源人工智能平臺Hugging Face的政策研究員阿維吉特·戈什(Avijit Ghosh)從另一個角度強調(diào)了以下事實——像"星際之門項目"這樣不受限制的資金注入,將權力集中在最富有的人手中,而將公眾和獨立研究人員排除在外。此外,他表示,所有對構建基礎設施以推動通用人工智能發(fā)展的關注,都損害了那些"并未致力于構建通用人工智能(無論其確切含義究竟為何)"的人的利益?!拔覀儼奄Y源投入到這個定義尚且模糊的'事物'上,卻忽視了當下可以利用技術解決的真正危機。"
考慮到這些批評意見,“星際之門項目”可以被視為一項“登月計劃”般孤注一擲的實驗,它不僅會在失敗時產(chǎn)生重大影響,而且如果真的取得成功,也會帶來嚴重后果。雖然OpenAI、谷歌(Google)和Meta等公司擁有采取如此大膽行動的財力,但這些風險可能并不符合公眾的最佳利益。
如果考慮到美國與中國的競爭,或許"星際之門項目"所冒的風險是值得的。掌握最先進人工智能技術的國家將在經(jīng)濟實力和國防方面擁有巨大優(yōu)勢。如果中國最終擁有最先進的人工智能系統(tǒng),美國或?qū)⑾萑胛kU境地。
就在兩天前,中國初創(chuàng)公司DeepSeek發(fā)布了一款全新的開源人工智能模型,這一舉動引起了硅谷的警覺。該公司聲稱,其新模型在多項數(shù)學、編碼和推理基準測試中的表現(xiàn)超越了OpenAI最先進的o1模型。
咨詢公司Futurum Group的分析師迪翁·欣奇克利夫(Dion Hinchcliffe)表示,此次發(fā)布對OpenAI和人工智能行業(yè)其他公司來說是“真正的當頭一棒”。他表示,中國能夠研發(fā)出與OpenAI最頂尖模型相抗衡的前沿技術,這“令人擔憂”。欣奇克利夫解釋說:“這是一場真正的國際競爭。”
特朗普總統(tǒng)于周一就職后數(shù)小時內(nèi),就廢除了拜登政府在人工智能監(jiān)管方面所做的努力,這其中就包括了拜登于2023年頒布的有關人工智能的行政命令。特朗普的計劃是盡可能減少人工智能開發(fā)過程中所面臨的障礙,以期在親商環(huán)境中加快人工智能創(chuàng)新。
但至關重要的是,至少要認識到這是一場高風險的博弈。“星際之門項目”與監(jiān)管的放松相結合,對OpenAI、大型科技公司,甚至特朗普而言,都是一場可能帶來巨大勝利的豪賭。在美國的競爭對手不斷加大賭注的時代,這也可能被視為一場必要的較量。但批評人士指出,我們應該承認,我們所有人——其中許多人既對ChatGPT感到驚嘆,又對終結者式的未來感到恐懼——可能都對即將發(fā)生的事情毫無準備。
Hugging Face公司的戈什說:"我確實擔心,很多人都在關注構建代理型人工智能,或是賦予人工智能模型驅(qū)動的系統(tǒng)某種程度的自主權。這會帶來很多未知風險。"
公眾對這些風險毫無準備。前OpenAI政策研究員邁爾斯·布倫戴奇(Miles Brundage)今天在X平臺上指出:"人工智能公司幾乎無意以所需的速度和規(guī)模幫助社會做好應對準備,因為它們正忙于相互競爭,并應對複雜的政治環(huán)境。"他說,記者、學者和公民社會"需要填補這一空白"。
我們可以將“星際之門項目”和其他大型人工智能項目視為大型科技公司最大的賭博,但無論我們是否愿意,這實則是一場我們所有人都孤注一擲的豪賭。也許是時候確保我們真正了解其中的利害關系了。(財富中文網(wǎng))
譯者:中慧言-王芳
OpenAI CEO Sam Altman joined President Donald Trump and leaders of SoftBank and Oracle yesterday to tout Stargate, a $500 billion plan to build data centers in the U.S. to power the expected soaring use of AI in the coming years. Altman called Stargate, which will get an up-front investment of $100 billion from OpenAI, SoftBank, Oracle, and the Emirati AI investor MGX, the “most important project of this era.”
Whether or not you agree with him, Stargate is arguably the tech industry's biggest gamble ever. After all, in addition to the eye-popping price tag and the astronomical energy needs (possibly rivaling the electricity demands of entire cities), the massive investment has zero guarantee of return. Given that today's AI is a general-purpose technology in its infancy, no one knows how to make money from it at such an enormous scale. And further, while OpenAI may believe that Stargate is "critical" to developing artificial general intelligence (AGI) that will "benefit all of humanity," the truth is there is not even an agreed-upon definition of AGI (the most common definition is AI that's equal to humans at certain critical tasks). And even if there were consensus, Ethan Mollick, a professor of management at the University of Pennsylvania's Wharton School, pointed out on X that there is "still no articulated vision of what a world with AGI looks like for most people." For those who believe AGI is coming soon, he wrote, "what does daily life look like 5-10 years later?"
Other high-stakes tech bets over the years have not been as costly, nor as wholly uncertain: The Manhattan Project, which developed the atomic bomb during World War II, changed history. However, it was the government, not private business, that backed that project, which also had the advantage of being based on well-understood science. AI innovators, on the other hand, are gambling on an outcome that no one fully understands.
Another example is the tens of billions of dollars that tech companies have spent on cloud computing infrastructure. Unlike AI, the push into cloud had a clear business case and the money was invested over more than a decade. Meanwhile, Meta’s obsession with the metaverse, or virtual worlds, was a $50 billion flop. But hey, that strategy was just CEO Mark Zuckerberg’s brief distraction.
And, of course, there was the dot-com boom, which had a mix of successes and failures. But it was an industry-wide bet that did not have the concentrated risk of Stargate.
Of course, the tech companies making this latest giant gamble on AI can certainly afford it. Their trillion-dollar valuations and what are practically blank checks from investors, not to mention financial incentives and subsidies from state, local, and federal governments, make rolling the dice a bit easier. And their business mission, after all, is going after the latest and greatest in tech.
Still, the stakes with Stargate are exceptionally high, as both Altman and Trump frame it not just as a technological leap, but as a national imperative. They present it as a project that will solidify U.S. leadership over China in AI, promising 100,000 new jobs and a major economic boost. Trump has even called it the dawn of a “golden age” for America, while Oracle executive chairman Larry Ellison claims it could lead to breakthroughs in treating cancer.
But not everyone is buying the hype. Critics like Gary Marcus argue that AI's transformative potential is vastly overstated, warning that the U.S. economy will be left holding the bag after a massive overinvestment. In fact, when Stargate was first announced this week, Marcus said it was "the second worst AI investment in history"—after the billions of dollars plowed into self-driving cars over the past decade with little to show for it. Others, like pioneering AI researcher Yoshua Bengio, take an even darker view, believing that far from ushering in prosperity, AI could reshape the world so profoundly that it threatens humanity itself.
Avijit Ghosh, a policy researcher at open source AI platform Hugging Face, emphasizes a different angle—the fact that unrestricted funding like that going towards Stargate concentrates power in the hands of the wealthiest, while excluding the public and independent researchers. In addition, all the attention to building infrastructure to boost AGI harms people who are not "building AGI, whatever that means," he said. "We are pouring resources into this 'thing' that is nebulously defined at best, at the expense of real crises that can be solved with technology at the very present."
With those criticisms in mind, Stargate can be seen as a moonshot, make-or-break experiment that will not only have significant impact if it fails, but severe consequences if it actually succeeds. While companies like OpenAI, Google, and Meta can afford to make these power moves, the risks may not be in the public’s best interest.
Or maybe the risks of Stargate are worth it, if you consider the U.S. rivalry with China. The country with the best AI has an enormous advantage when it comes to economic power and national defense. If China ends up with the most advanced AI systems, the U.S. could be in danger.
Just two days ago, a Chinese startup, DeepSeek, set off alarm bells by releasing a new open-source AI model that has Silicon Valley buzzing. The company claims its new model beats OpenAI’s most sophisticated o1 model on several math, coding, and reasoning benchmarks.
The release is a “real shot across the bow” to OpenAI and the rest of the AI industry, said Dion Hinchcliffe, an analyst with the Futurum Group, a consulting firm. China’s ability to develop a frontier-level model that competes with the best from OpenAI, he said, is “concerning.” “There’s a real international competition,” Hinchcliffe explained.
Within hours of taking office on Monday, President Trump dismantled the Biden Administration’s efforts to tackle AI regulation, including Biden’s 2023 executive order on AI. Trump’s plan is to reduce as many barriers as possible to developing AI, thereby speeding up AI innovation in a business-friendly environment.
But it’s important to at least recognize the high-stakes game at play here. Stargate, combined with reduced regulation, is a gambit that could deliver huge wins for OpenAI, Big Tech, and possibly Trump. It may also be remembered as a necessary play in an era where America’s rivals are escalating the stakes. But we should acknowledge that all of us—many of whom both marvel at ChatGPT and fear a Terminator-style future—may be woefully unprepared for what’s about to unfold, critics say.
“I do worry that a lot of focus is going into building agentic AI, or giving some level of autonomy to AI model-powered systems,” said Hugging Face’s Ghosh. “That brings forth a lot of unknown risks.”
The public is unprepared for any of those risks. Miles Brundage, a former OpenAI policy researcher, pointed out on X today that "AI companies have little interest in preparing society, at the speed/scale that's needed, since they are busy trying to beat each other and navigate a complex political environment." Journalists, academics, and civil society, he said, "need to fill the gap."
We can look at Stargate and other massive AI projects as Big Tech’s biggest gamble, but it’s a bet that all of us are all-in on—whether we like it or not. Maybe it’s time to make sure we really understand the stakes.