A Powerful Tool: AI Helps YouTube Stop the Spread of Inappropriate Videos
(Fortune China) Translator: Pessy | Reviewer: Xia Lin
YouTube has for the first time revealed a report detailing how many videos it takes down due to violations of the platform’s policies—and it’s a really big number. The Alphabet-owned site removed more than 8 million videos during the last quarter of 2017. But how did it decide to take them down? Machine learning technology played a big role.

According to YouTube, machines rather than humans flagged more than 83% of the now-deleted videos for review. And more than three quarters of those videos were taken down before they got any views. The majority were spam or porn.

Machine learning—or AI, as the tech industry often likes to call it—involves training algorithms on data so that they become able to spot patterns and take actions by themselves, without human intervention. In this case, YouTube uses the technology to automatically spot objectionable content.

In a blog post, the YouTube team said the use of the technique had a big effect. Regarding videos containing “violent extremism,” which is banned on the platform, only 8% of such videos were flagged and removed before reaching 10 views in early 2017. After YouTube started using machine learning for flagging in the middle of the year, “more than half of the videos we remove for violent extremism have fewer than 10 views,” the team said.

However, the use of machine learning does raise serious questions about content being taken down that should stay up—some depictions of violent extremism, for example, may be satire or straightforward reportage.

Several news organizations, such as Middle East Eye and Bellingcat, found late last year that YouTube was taking down videos they had shared depicting war crimes in Syria. Bellingcat, which played a key citizen-journalist role in investigating the downing of Malaysia Airlines Flight 17 over Ukraine in 2014, found its entire channel suspended.

“With the massive volume of videos on our site, sometimes we make the wrong call. When it’s brought to our attention that a video or channel has been removed mistakenly, we act quickly to reinstate it,” YouTube said at the time.

In its Monday blog post, YouTube said its machine learning systems still require humans to review potential content policy violations, and the number of videos being flagged using the technology has actually increased staffing requirements.

“Last year we committed to bringing the total number of people working to address violative content to 10,000 across Google by the end of 2018,” the team said. “At YouTube, we’ve staffed the majority of additional roles needed to reach our contribution to meeting that goal. We’ve also hired full-time specialists with expertise in violent extremism, counterterrorism, and human rights, and we’ve expanded regional expert teams.”
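The flagging approach the article describes (train a model on labeled examples, then let it flag new content automatically, without human intervention) can be illustrated with a toy naive Bayes text classifier. This is a minimal sketch: the class name, training snippets, and "spam"/"ok" labels are invented for illustration and are not YouTube's actual system.

```python
import math
from collections import Counter

class TinyNaiveBayes:
    """Toy naive Bayes text classifier: learns word frequencies per label
    from training examples, then labels new text on its own."""

    def __init__(self):
        self.word_counts = {}          # label -> Counter of word occurrences
        self.label_counts = Counter()  # label -> number of training docs

    def train(self, text, label):
        self.label_counts[label] += 1
        self.word_counts.setdefault(label, Counter()).update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        # Vocabulary size across all labels, used for add-one smoothing
        vocab = set()
        for counts in self.word_counts.values():
            vocab.update(counts)
        best_label, best_score = None, float("-inf")
        for label, n_docs in self.label_counts.items():
            counts = self.word_counts[label]
            n_words = sum(counts.values())
            score = math.log(n_docs / total_docs)  # log prior
            for w in words:
                # Add-one smoothing so unseen words don't zero out the score
                score += math.log((counts[w] + 1) / (n_words + len(vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented toy training data, standing in for past human review decisions.
clf = TinyNaiveBayes()
clf.train("win free money click now", "spam")
clf.train("free prize claim now", "spam")
clf.train("meeting notes for tomorrow", "ok")
clf.train("lunch plans this week", "ok")

print(clf.classify("claim your free money"))  # flagged as spam
```

Production systems use far richer features and models than this, and, as the article notes, humans still review what the machines flag.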