A prominent A.I. researcher has left Google, saying she was fired for criticizing the company’s lack of commitment to diversity, renewing concerns about the company’s attempts to silence criticism and debate.
Timnit Gebru, who was technical co-lead of a Google team that focused on A.I. ethics and algorithmic bias, wrote on Twitter that she had been pushed out of the company for writing an email to “women and allies” at Google Brain, the company’s division devoted to fundamental A.I. research, that drew the ire of senior managers.
Gebru is well-known among A.I. researchers for helping to promote diversity and inclusion within the field. She cofounded the Fairness, Accountability, and Transparency (FAccT) conference, which is dedicated to issues around A.I. bias, safety, and ethics. She also cofounded the group Black in AI, which highlights the work of Black machine learning experts and offers mentorship. The group has sought to raise awareness of bias and discrimination against Black computer scientists and engineers.
The researcher told Bloomberg News on December 3 that her firing came after a week in which she had wrangled with her managers over a research paper she had co-written with six other authors, including four from Google, that was about to be submitted for consideration at an academic conference next year. Google had asked her to either retract the paper or at least remove her name and those of the other Google employees, she told Bloomberg.
She also posted an email to the internal employee group complaining about her treatment and accusing Google of being disingenuous in its commitment to racial and gender diversity, equity and inclusivity.
Gebru told the news service that she had told Megan Kacholia, Google Research's vice president and one of her supervisors, that she wanted more discussion about the way the review process for the paper had been handled, as well as clear guarantees that the same thing wouldn't happen again.
“We are a team called Ethical AI, of course we are going to be writing about problems in AI,” Gebru said.
She told Bloomberg that she had told Kacholia that if the company was unwilling to address her concerns, she would resign and leave following a transition period. The company then told her it would not agree to her conditions and that it was accepting her resignation effective immediately. It said Gebru's decision to email the internal listserv reflected "behavior that is inconsistent with the expectations of a Google manager," according to a tweet Gebru posted.
Fellow A.I. researchers took to Twitter to express support for Gebru and outrage at her apparent firing. “Google’s retaliation against Timnit—one of the brightest and most principled AI justice researchers in the field—is *alarming*,” Meredith Whittaker, faculty director at the AI Now Institute at New York University, wrote on Twitter.
“Speaking out against censorship is now ‘inconsistent with the expectations of a Google manager.’ She did that because she cares more and will risk everything to protect those she has hired to work under her—a team that happens to be more diverse than any other at Google,” Deb Raji, another researcher who specializes in A.I. fairness, ethics, and accountability and who works at Mozilla, wrote in a Twitter post.
Many noted that Gebru’s departure came on the same day the National Labor Relations Board accused Google of illegally dismissing workers who helped organize two companywide protests: a 2019 demonstration against the company’s work with the U.S. Customs and Border Protection agency, and a 2018 walkout over the company’s handling of sexual harassment cases.
The NLRB accused Google of using “terminations and intimidation in order to quell workplace activism.”
Google has not issued a public comment on the NLRB case. In reference to questions about Gebru's case, it referred Fortune to an email from Jeff Dean, Google's senior vice president for research, that was obtained by the technology news site Platformer.
In that email, Dean said Gebru and her co-authors had submitted the paper for internal review within the two-week period the company requires, but that an internal "cross-functional team" of reviewers had found it "didn't meet our bar for publication" and "ignored too much relevant research" showing that some of the ethical issues she and her co-authors were raising had already been at least partially mitigated.
Dean said Gebru's departure was "a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible AI research as an org and as a company."
The incident is likely to renew concerns both inside and outside the company about the ethics of its technology and how it deals with employee dissent. Once known for its freewheeling and liberal corporate culture, Google has increasingly sought to limit employee speech, particularly when it touches on issues likely to embarrass the company or potentially impact its ability to secure lucrative work for various government agencies.
Platformer also obtained and published what it said was the email Gebru had sent to colleagues. In it, she criticizes the company’s commitment to diversity, saying that “this org seems to have hired only 14% or so women this year.” (She does not make it clear if that figure is for all of Google Research or some other entity.) She also accuses the company of paying lip service to diversity and inclusion efforts and advises those who want the company to change to seek ways to bring external pressure to bear on Google.
Gebru says in the email that she had informed Google's public relations and policy team of her intent to write the paper at least two months before the submission deadline and that she had already circulated it to more than 30 other researchers for feedback.
Gebru implied in several tweets that she had raised ethical concerns about some of the company’s A.I. software, including its large language models. This kind of A.I. software is responsible for many breakthroughs in natural language processing, including Google’s improved translation and search results, but has been shown to incorporate gender and racial bias from the large amounts of Internet pages and books that are used to train it.
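The mechanism described above can be sketched with a toy example (the corpus, words, and counts here are entirely hypothetical illustrations, not Google's actual models or training data): a model whose training text is skewed will reproduce that skew in the associations it learns, which is the concern researchers raise about language models trained on large web and book corpora.

```python
from collections import Counter
from itertools import combinations

# A tiny, deliberately skewed stand-in for web-scale training text.
corpus = [
    "he is a doctor",
    "he is an engineer",
    "she is a nurse",
    "she is a homemaker",
    "he is a doctor",
    "she is a nurse",
]

# Count how often each pair of words appears together in a sentence.
cooc = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooc[frozenset((a, b))] += 1

def association(word, pronoun):
    """Co-occurrence count between a word and a pronoun in the corpus."""
    return cooc[frozenset((word, pronoun))]

# The learned "knowledge" simply mirrors the skew in the training text:
print(association("doctor", "he"), association("doctor", "she"))  # 2 0
print(association("nurse", "she"), association("nurse", "he"))    # 2 0
```

Real language models learn far richer statistics than raw co-occurrence counts, but the underlying issue is the same: whatever imbalances exist in the training text are absorbed by the model and can surface in its outputs.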
In tweets on December 3, she singled out Dean, a storied figure among many computer engineers and researchers as one of the original coders of Google’s search engine, and implied that she had been planning to look at bias in Google’s large language models. “@JeffDean I realize how much large language models are worth to you now. I wouldn’t want to see what happens to the next person who attempts this,” she wrote.
Earlier in the week, Gebru had also implied Google managers were attempting to censor her work or bury her concerns about ethical issues in the company’s A.I. systems. “Is there anyone working on regulation protecting Ethical AI researchers, similar to whistleblower protection? Because with the amount of censorship & intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?” she wrote in a Twitter post on Dec. 1.