
Deepfakes are now too convincing for the human eye to detect. What can be done?


Bernhard Warner 2019-08-14
With the help of artificial intelligence, researchers, lawmakers, and Big Tech are working to combat the growing threat of fake media.

Image credits: Zuckerberg: Courtesy of Facebook; Obama: Neilson Barnard—Getty Images; Trump: Saul Loeb—AFP/Getty Images; Pelosi: Chip Somodevilla—Getty Images; Gadot & Johansson: Mike Coppola—Getty Images; Wireframes: Lidiia Moor—Getty Images


Like a zombie horde, they keep coming. First, there were the pixelated likenesses of actresses Gal Gadot and Scarlett Johansson brushstroked into dodgy user-generated adult films. Then a disembodied digital Barack Obama and Donald Trump appeared in clips they never agreed to, saying things the real Obama and Trump never said. And in June, a machine-learning-generated version of Facebook CEO Mark Zuckerberg making scary comments about privacy went viral.

Welcome to the age of deepfakes, an emerging threat powered by artificial intelligence that puts words in the mouths of people in video or audio clips, conjures convincing headshots from a sea of selfies, and even puts individuals in places they’ve never been, interacting with people they’ve never met. Before long, it’s feared, the ranks of deepfake deceptions will include politicians behaving badly, news anchors delivering fallacious reports, and impostor executives trying to bluff their way past employees so they can commit fraud.

So far, women have been the biggest victims of deepfakes. In late June, the app DeepNude shut down amid controversy after journalists disclosed that users could feed the app ordinary photos of women and have it spit out naked images of them.

There’s concern the fallout from the technology will go beyond the creepy, especially if it falls into the hands of rogue actors looking to disrupt elections and tank the shares of public companies. The tension is boiling over. Lawmakers want to ban deepfakes. Big Tech believes its engineers will develop a fix. Meanwhile, the researchers, academics, and digital rights activists on the front lines bemoan that they’re ill equipped to fight this battle.

Sam Gregory, program director at the New York City–based human rights organization Witness, points out that it’s far easier to create a deepfake than it is to spot one. Soon, you won’t even need to be a techie to make a deepfake.

Witness has been training media companies and activists in how to identify A.I.-generated “synthetic media,” such as deepfakes and facial reenactments—the recording and transferring of facial expressions from one person to another—that could undermine trust in their work. He and others have begun to call on tech companies to do more to police these fabrications. “As companies release products that enable creation, they should release products that enable detection as well,” says Gregory.

Software maker Adobe Systems has found itself on both sides of this debate. In June, computer scientists at Adobe Research demonstrated a powerful text-to-speech machine-learning algorithm that can literally put words in the mouth of a person on film. A company spokesperson notes that Adobe researchers are also working to help unmask fakes. For example, Adobe recently released research that could help detect images manipulated by Photoshop, its popular image-editing software. But as researchers and digital rights activists note, the open-source community, made up of amateur and independent programmers, is far more organized around making deepfakes persuasive and thus harder to spot.

For now, bad actors have the advantage.

This is one reason that lawmakers are stepping into the fray. The House Intelligence Committee convened a hearing in June about the national security challenges of artificial intelligence, manipulated media, and deepfakes. The same day, Rep. Yvette Clarke (D-N.Y.) introduced the DEEPFAKES Accountability Act, the first attempt by Congress to criminalize synthetic media used to deceive, defraud, or destabilize the public. State lawmakers in Virginia, Texas, and New York, meanwhile, have introduced or enacted their own legislation in what’s expected to be a torrent of laws aimed at outmaneuvering the fakes.

Jack Clark, policy director at OpenAI, an A.I. think tank, testified on Capitol Hill in June about the deepfakes problem. He tells Fortune that it’s time “industry, academia, and government worked together” to find a solution. The public and private sectors, Clark notes, have joined forces in the past on developing standards for cellular networks and for regulating public utilities. “I expect A.I. is important enough we’ll need similar things here,” he says.

In an effort to avoid such government intervention, tech companies are trying to show that they can handle the problem without clamping down too hard on free speech. YouTube has removed a number of deepfakes from its service after users flagged them. And recently, Facebook’s Zuckerberg said that he’s considering a new policy for policing deepfakes on his site, enforced by a mix of human moderators and automation.

The underlying technology behind most deepfakes and A.I.-powered synthetic media is the generative adversarial network, or GAN, invented in 2014 by the Montreal-based Ph.D. student Ian Goodfellow, who later worked at Google before joining Apple this year.

Until his invention, machine-learning algorithms had been relatively good at recognizing images from vast quantities of training data—but that’s about all. With the help of newer technology, like more powerful computer chips, GANs have become a game changer. They enable algorithms to not just classify but also create pictures. Show a GAN an image of a person standing in profile, and it can produce entirely manufactured images of that person—from the front or the back.
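To make that adversarial setup concrete, here is a minimal, illustrative sketch of a GAN training loop in Python with PyTorch. The network sizes, learning rates, and image dimensions are assumptions chosen for brevity, not the configuration of Goodfellow's original work or of any system described in this article.

```python
# Minimal GAN sketch (illustrative assumptions throughout): a generator
# learns to turn random noise into images, while a discriminator learns
# to classify images as real or generated. Training pits them against
# each other, which is what "adversarial" refers to.

import torch
import torch.nn as nn

LATENT_DIM = 64      # size of the random noise vector (assumed)
IMG_DIM = 28 * 28    # flattened image size, e.g., 28x28 grayscale (assumed)

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),   # fake image with pixels in [-1, 1]
)

discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),      # probability that the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update. `real_images` is a (batch, IMG_DIM) tensor
    of real samples scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: push real images toward 1, generated ones toward 0.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()  # don't backprop into the generator
    d_loss = (bce(discriminator(real_images), real_labels)
              + bce(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Calling train_step repeatedly over a dataset of real images drives the two networks to improve in tandem; in this toy setup, as in the full-scale systems behind deepfakes, the generator's outputs become progressively harder for the discriminator, and eventually for people, to tell from real samples.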

Researchers immediately heralded the GAN as a way for computers to fill in the gaps in our understanding of everything around us, to map, say, parts of distant galaxies that telescopes can’t penetrate. Other programmers saw it as a way to make super-convincing celebrity porn videos.

In late 2017, a Reddit user named “Deepfakes” did just that, uploading to the site adult videos featuring the uncanny likenesses of famous Hollywood actresses. The deepfake phenomenon exploded from there.

Soon after, Giorgio Patrini, a machine-learning Ph.D. who became fascinated—and then concerned—with how GAN models were being exploited, left the research lab and cofounded Deeptrace Labs, a Dutch startup that says it’s building “the antivirus for deepfakes.” Clients include media companies that want to give reporters tools to spot manipulations of their work or to vet the authenticity of user-generated video clips. Patrini says that in recent months, corporate brand-reputation managers have contacted his firm, as have network security specialists.

“There’s particular concern about deepfakes and the potential for it to be used in fraud and social engineering attempts,” says Patrini.

Malwarebytes Labs of Santa Clara, Calif., recently warned of something similar, saying in a June report on A.I.-powered threats that “deepfakes could be used in incredibly convincing spear-phishing attacks that users would be hard-pressed to identify as false.” The report continues, “Imagine getting a video call from your boss telling you she needs you to wire cash to an account for a business trip that the company will later reimburse.”

In the world of deepfakes, you don’t need to be famous to be cast in a leading role.

This article originally appeared in the August 2019 issue of Fortune.
