If you’ve ever been in a Zoom meeting and seen an Otter.ai virtual assistant in the room, just know they’re listening to you—and recording everything you’re saying. It’s a practice that’s become somewhat mainstream in the age of artificial intelligence and hybrid or remote work, but what’s alarming is many users don’t know the full capabilities of the technology.
If you don't know the proper settings to select, virtual assistants like Otter.ai will send a recording and transcript to all meeting attendees, even guests who left the meeting early. That means if you're bad-mouthing your coworkers, discussing confidential information, or sharing shoddy business practices, the AI will pick up on it. And it will rat you out.
That happened to researcher and engineer Alex Bilzerian recently. He had been on a Zoom meeting with a venture-capital firm and Otter.ai was used to record the call. After the meeting, it automatically emailed him the transcript, which included “hours of their private conversations afterward, where they discussed intimate, confidential details about their business,” Bilzerian wrote in an X post last week. Otter.ai was founded in 2016, and provides recording and transcription services that can be connected through Zoom or manually when in a virtual or in-person meeting.
The transcript showed that after Bilzerian had logged off, investors had discussed their firm’s “strategic failures and cooked metrics,” he told The Washington Post. While Bilzerian alerted the investors to the incident, he still decided to kill the deal after they had “profusely apologized.”
This is just one of many examples of how nascent AI technologies are misunderstood by users. In response to Bilzerian’s post on X, other users reported similar situations.
“Literally happened to my wife today with a grant meeting at work,” another user, Dean Julius wrote on X. “[The] whole meeting [was] recorded and annotated. Some folks stayed behind on the call to discuss the meeting privately. Kept recording. Sent it all out to everyone. Suuuuper awkward.”
Other users pointed out this could become a major issue in the health-care industry as virtual therapy and telehealth sessions become more prominent.
“This is going to become a pretty terrible problem in health care, as you can imagine, regarding protected health information,” Danielle Kelvas, a physician and medical adviser for medical software company IT Medical, told Fortune. “Health care providers understandably have concerns about privacy. Whether this is an AI-scribe device or AI-powered ultrasound device, for example, we as doctors are asking, where is this information going?”
Otter.ai, however, insists users can prevent these awkward or embarrassing incidents from happening.
“Users have full control over their settings and we work hard to make Otter as intuitive as possible,” an Otter.ai spokesperson told Fortune. “Although notifications are built in, we also strongly recommend continuing to ask for consent when using Otter in meetings and conversations and indicate your use of Otter for full transparency.” The spokesperson also suggested visiting the company’s Help Center to review all settings and preferences.
The power of AI virtual assistants
As a means of increasing productivity and having records of important conversations, more businesses have begun implementing AI features into workflows. While it can undoubtedly cut down on the tedious practice of transcribing and sending notes out to stakeholders, AI still doesn’t have the same sentience as humans.
“AI poses a risk in revealing ‘work secrets’ due to its automated behaviours and lack of discretion,” Sukh Sohal, a senior consultant at data advisory Affinity Reply, told Fortune. “I’ve had clients express concerns over unintended information sharing. This can come about when organizations adopt AI tools without fully understanding their settings or implications, such as auto-transcription continuing after participants have left a meeting.”
Ultimately, though, humans are the ones who are enabling the tech.
“While AI is helping us work faster and smarter, we need to understand the tools we’re using,” Hannah Johnson, senior vice president of strategy at The Computing Technology Industry Association (CompTIA), told Fortune. “And we can’t forget that emotional intelligence and effective communication are just as vital. Technology may be evolving, but human skills remain the glue that holds it all together.”
Other AI assistants, like Microsoft’s Copilot, work similarly to Otter.ai, in that meetings can be recorded and transcribed. But in the case of Copilot, there are some backstops: A user has to either be part of the meeting or have the organizer approve sharing the recording or transcript, a Microsoft spokesperson told Fortune.
“In Teams meetings, all participants see a notification that the meeting is being recorded or transcribed,” the Microsoft spokesperson said in a statement. “Additionally, admins can enable a setting that requires meeting participants to explicitly agree to be recorded and transcribed. Until they provide explicit permission, their microphones and cameras cannot be turned on, and they will be unable to share content.”
Still, these permissions don’t always address human naivety or error. To apply more guardrails to virtual assistant usage, Lars Nyman, chief marketing officer of AI infrastructure company CUDO Compute, said to think of your AI assistant as a junior executive assistant.
It’s “useful, but not yet seasoned,” Nyman told Fortune. “Avoid auto-sending follow-ups; instead, review and approve them manually. Shape AI processes actively, maintaining firm control over what gets shared and when. The key is not to entrust AI with more autonomy than you’d give to a new hire fresh out of college at this stage.”