Is Amazon wise to crowdsource answers from the public?

David Morris 2019-09-25
Amazon executives say the company uses machine learning and algorithms to weed out troublemakers, but experts are skeptical.

An Amazon Echo smart speaker controlled by the Alexa voice assistant. Experts worry that malicious trolls will find ways to abuse Amazon's new portal. Image credit: Joby Sessions/T3 Magazine/Future via Getty Images

Did Albert Einstein wear socks? How do you prevent tears when cutting an onion? Did Burt Reynolds marry Sally Field? What makes wasabi green? The average person might not know the answer to these questions, but Amazon Alexa, through the new Alexa Answers portal that was announced Thursday, might. Well, more accurately, an Alexa user could.

An online community where anyone who logs in can suggest answers to user-supplied questions posed to the voice-activated Alexa A.I. assistant, Alexa Answers is designed to answer the tough questions that can’t already be answered by the voice-enabled assistant. Once the answers are submitted, they are vetted for accuracy, scored, and if they are good enough, make their way back to Alexa users.
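Amazon hasn’t published how that vetting works. As a loose sketch of the submit, vet, score, and surface flow described above, one could imagine community ratings producing a score, with only answers above some cutoff read back to users; every name and threshold below is a hypothetical stand-in, not Amazon’s design.

```python
# Hypothetical sketch of a submit -> vet -> score -> surface pipeline like the
# one described above. Field names, the scoring rule, and the threshold are
# all assumptions for illustration; Amazon has not published its design.
from dataclasses import dataclass

@dataclass
class SubmittedAnswer:
    text: str
    upvotes: int = 0
    downvotes: int = 0

def quality_score(answer: SubmittedAnswer) -> float:
    """Toy score: the fraction of positive community votes (0.0 when unrated)."""
    total = answer.upvotes + answer.downvotes
    return answer.upvotes / total if total else 0.0

SURFACE_THRESHOLD = 0.8  # hypothetical cutoff for "good enough"

def answers_to_surface(candidates: list[SubmittedAnswer]) -> list[SubmittedAnswer]:
    """Only sufficiently well-rated answers make their way back to Alexa users."""
    return [a for a in candidates if quality_score(a) >= SURFACE_THRESHOLD]
```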

But is crowdsourcing Alexa's smarts a good idea? From a Microsoft chatbot subverted by racist trolls to Yahoo Answers, a similar service to Alexa Answers that has become notoriously rife with bad information, the past few years have been littered with cases of user-generated data systems gone bad. So it's not hard to imagine the worst-case scenario: an Alexa-backed smart speaker blithely spouting fake news, dangerous conspiracy theories, or white supremacist talking points.

Describing Alexa Answers to Fast Company, Bill Barton, Amazon’s Vice President of Alexa Information, struck an optimistic tone. “We’re leaning into the positive energy and good faith of the contributors," he said. "And we use machine learning and algorithms to weed out the noisy few, the bad few.”

Experts on data use and its impacts are markedly less cheery.

“We have plenty of examples of why this is not going to play out well,” says Dr. Chris Gillard, who studies the data policies of Amazon and other tech companies at Macomb Community College near Detroit. Crowdsourcing data, and then using that data in training the Alexa algorithm, he says, presents “pitfalls that Amazon seem intent on stepping right into.”

The race to beat Google

While better assistants and smart speakers drive sales of accessories like voice-activated lights, Google’s decades in the search business seem to have given it an advantage over Amazon when it comes to understanding queries and returning data. Google's smart speaker has steadily gained market share against the Echo, and Google Assistant has almost uniformly outperformed Alexa in comparison tests.

In fact, almost all of the questions above, from Einstein’s socks to wasabi’s color, are currently answered by Google Assistant, even though they were taken directly from the Alexa Answers website. Google’s answers come from its search engine’s results, featured snippets, and knowledge graph. Amazon is trying to use crowd-supplied answers to catch up in this space.

“Amazon’s not Google,” says Dr. Nicholas Agar, a technology ethicist at Victoria University of Wellington, New Zealand. “They don’t have Google’s [data] power, so they need us.”

Beyond just providing missing answers to individual questions, data from Alexa Answers will be used to further train the artificial intelligence systems behind the voice assistant. “Alexa Answers is not only another way to expand Alexa's knowledge,” an Amazon spokesperson tells Fortune, “but also... makes her more helpful and informative for other customers.” In its initial announcement of Alexa Answers, Amazon referred to this as Alexa “getting smarter.”

Money for nothing, facts for free

As important as Alexa Answers might be for Amazon, contributors won’t get any financial compensation for helping out. The system will have human editors who are presumably paid for their work, but contributed answers will be rewarded only through a system of points and ranks, a practice known in industry parlance as ‘gamification.’

Agar believes this will be effective, because Amazon is leveraging people’s natural helpfulness. But he also thinks a corporation leveraging those instincts should give us pause. “There’s a difference between the casual inquiry of a human being, and Amazon relying on those answers," he says. "I think it’s an ethical red flag.”

Gillard also thinks Amazon should pay the people who provide answers, whether that means hiring its own workers or partnering with an established fact-checking group.

Amazon certainly has the infrastructure to do it. The e-commerce giant already runs Mechanical Turk, a ‘gig’ platform that pays “Turkers” for performing small, repetitive tasks and that would seem well suited to supplementing Alexa’s training.

But Gillard believes that relying on a ‘community’ model insulates Amazon if Alexa starts spouting bad or offensive answers, based on crowd input. “I think not paying people lets you say, well, it was sort of the wisdom of the crowd,” he says. “If you pay people, you’re going to be accused of bias.”

A gamified incentive system, though, is not without its own risks. In 2013, Yahoo Answers disabled part of its user voting system, allegedly because some participants had created fake accounts to upvote their own (not necessarily accurate) answers. (Source: Quora, which is itself a good example of how crowdsourcing information affects reliability.)

Troll stoppers

The biggest question facing Alexa Answers is whether Amazon can effectively prevent abuse of its new platform. Amazon declined to answer questions from Fortune about the precise role of human editors in the system. But their presence alone represents an acknowledgment that automated systems in their current state can’t reliably detect offensive content or evaluate the accuracy of facts.

Amazon has never grappled with these challenges as directly as companies like Facebook and Twitter have, and according to some critics, it has failed even to consistently detect fake reviews in its own store. Barton told Fast Company that Amazon will try to keep political questions out of the system, a subtle task that Gillard says will likely fall to humans. “A.I. can’t do those things,” he says. “It can’t do context.”

Yet automated systems can easily detect and block individual offensive terms, though even that has its downsides. In a test, this reporter attempted to reference the ‘90s rock band Porno for Pyros when suggesting an Alexa Answer. The answer was rejected, not because of inaccuracy, but because of the word ‘porno.’ According to a notification, “Alexa wouldn’t say that.”
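The rejection described above is consistent with a simple blocklist check. A minimal sketch, assuming a naive substring filter (not Amazon’s actual implementation), shows how such a filter produces exactly this kind of false positive:

```python
# Minimal sketch of a naive blocklist filter, assuming a plain substring
# check. This is NOT Amazon's actual system; it only illustrates the
# false-positive failure described above.
BLOCKED_TERMS = {"porno"}  # hypothetical blocklist entry

def passes_word_filter(answer: str) -> bool:
    """Return True if the answer contains no blocked term."""
    lowered = answer.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# Rejected for containing the word "porno", not for being inaccurate:
print(passes_word_filter("Porno for Pyros was a '90s rock band."))  # False
```

A filter this blunt cannot distinguish a slur from a band name; that is the trade-off the test above exposed.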

Not everything has an answer

Barton told Fast Company that “we’d love it if Alexa can answer any question people ask her,” but that’s clearly impossible. Alexa cannot be expected to know, for instance, what the meaning of life is, and crowdsourcing answers to questions that are enigmas could make the entire system more fragile. In a 2018 study, researchers found that search queries with limited relevant data, which they called “data voids,” were easier for malicious actors to spoof with fake or misleading results.

And trolls aren’t the only risk to Alexa’s mental hygiene. Even well-intentioned questions can wind up nonsensical, if Alexa doesn’t properly interpret the questioner’s speech. For example, the question “What is a piglet titus?” appeared on Alexa Answers Friday morning. It seems likely the user actually asked “What is Epiglottitis?” (Answer: a rare throat condition). If enough users tried to answer the nonsense question—perhaps Winnie the Pooh fans, or users hungry for points—it could muddy the data pool, instead of improving it.

It’s unclear how Alexa’s overall performance might be affected by messy or malicious data; those answers are a ways away yet. But after all the stumbles of similar systems, one has to wonder whether Amazon is taking the risks of crowdsourced answers seriously.
