
The battle against "fake" bots is upon us. Are we ready?

RAMI ESSAID, March 7, 2018
Bots continue to run rampant, aided by two factors: murky laws governing their creation and sale, and social media companies that turn a blind eye to the authenticity of their user numbers.


Translator: Pessy

Reviewer: Xia Lin

Fake news. Fake social media accounts. Fake online poll takers. Fake ticket buyers. And behind them all: The prolific fakery of botnets.

When will we get real and stop them?

Malicious bots account for nearly 20% of all Internet traffic. These robotic computer scripts have been responsible for stealing content from commercial websites, shutting down websites, swaying advertising metrics, spamming forums, and snatching away Hamilton tickets for exorbitant resale.

But revelations about Russian bots meddling in the U.S. election and a scorching New York Times investigation into the selling of fake Twitter followers and retweets vividly illustrate that the bot epidemic is even more severe than most people realized.

And yet the bots march on, aided by a double whammy: murky laws governing their creation and sale, and social media companies that have too often turned a blind eye to the veracity of their reported user numbers.

Tightening our defenses against malicious bots won’t be easy, but recent events show that the effort is warranted. Bots should be considered nothing less than a public enemy.

Bots infiltrate social media

Not long ago, bots were mainly thought of as an IT or somewhat esoteric business problem—the main culprits behind web scraping, brute force attacks, competitive data mining, account hijacking, unauthorized vulnerability scans, spam, and click fraud.
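Defenders typically counter these scripts with rate-based heuristics: a client that issues requests far faster, or far more regularly, than a human plausibly could gets flagged. The sketch below illustrates the idea with a sliding-window request counter; the window size and threshold are illustrative assumptions, and real bot-mitigation products combine many more signals (TLS fingerprints, JavaScript challenges, behavioral analysis).

```python
from collections import deque
import time


class BotHeuristic:
    """Flag clients whose request rate exceeds a human-plausible threshold.

    Thresholds here are illustrative assumptions, not values from any
    real bot-mitigation product.
    """

    def __init__(self, window_seconds=10.0, max_requests=20):
        self.window = window_seconds
        self.max_requests = max_requests
        self.history = {}  # client id -> deque of request timestamps

    def record(self, client_id, now=None):
        """Record one request; return True if the client looks automated."""
        now = time.monotonic() if now is None else now
        q = self.history.setdefault(client_id, deque())
        q.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

A browsing human making a handful of requests per window passes silently, while a scraper hammering the same endpoint dozens of times per second is flagged almost immediately.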

But the use of bots to manipulate elections and political discussion via the major social media platforms is a new and unnerving trend.

In October, members of Congress hauled executives from Facebook, Twitter, and Google into a hearing to explain Russian interference via their platforms in the 2016 presidential campaign. The executives promised to do better. And yet in late January, top congressional Democrats called on Facebook and Twitter to analyze the role of Russian bots in the online campaign to release a memo containing classified information about the federal investigation into Russia’s meddling.

On Feb. 16, Special Counsel Robert Mueller filed an indictment accusing 13 Russians of running a bot farm and disinformation operation that spread pro-Donald Trump propaganda on social media.

Bots are more prevalent on Twitter than many realize. While Twitter testified before Congress that about 5% of its accounts are run by bots, some studies have shown that number to be as high as 15%. In November, Facebook told shareholders that around 60 million, or 2%, of its average monthly users may be fake accounts.

Social media companies—just like online publishers—have a vested interest in letting bots exist on their platforms because monthly active users are one of their main measurements of success. Accounts, human or not, are accounts.

Stopping the madness

Social media companies’ disingenuous Captain Renault act—he was the character in Casablanca who declared, “I’m shocked, shocked, to find that gambling is going on here”—must stop. With its ability to influence opinions, social media does remarkable harm by playing a role in the rigging of elections and public debate. So social media companies must step up and more aggressively self-police.

We know they can do it. Look at how more than a million followers disappeared from the accounts of dozens of prominent Twitter users right after the New York Times investigation was published. I doubt this was a coincidence.

Twitter should consider extending its “verified” program—that blue badge that lets people know an account of public interest is authentic—to all human users. This would be a huge technological undertaking—after all, bots are so hard to prevent because they act as a legitimate user would—but the same artificial intelligence technologies that allow bots to emulate humans could be used to verify humans.
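The classification idea can be illustrated at the account level. The toy scorer below uses two behavioral signals often cited in bot research: machine-regular posting intervals and a lopsided following-to-follower ratio. The feature choices, weights, and threshold are assumptions for illustration only, not Twitter's actual verification method.

```python
from statistics import mean, pstdev


def bot_score(post_intervals_sec, followers, following):
    """Return a score in [0, 1]; higher means more bot-like.

    Toy heuristic: bots tend to post at clockwork-regular intervals and
    to follow far more accounts than follow them back. The weights are
    illustrative assumptions, not a production model.
    """
    score = 0.0
    if len(post_intervals_sec) >= 2:
        avg = mean(post_intervals_sec)
        # Coefficient of variation: near zero means machine-regular posting.
        cv = pstdev(post_intervals_sec) / avg if avg else 0.0
        if cv < 0.1:
            score += 0.6
    ratio = following / max(followers, 1)
    if ratio > 10:  # follows 10x more accounts than follow it back
        score += 0.4
    return score


def looks_like_bot(intervals, followers, following, threshold=0.5):
    return bot_score(intervals, followers, following) >= threshold
```

A production system would learn such weights from labeled data rather than hand-tune them, but the principle is the same: the statistical fingerprints that let bots scale are exactly what gives them away.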

The government’s role

Meanwhile, government needs to join the fight against bad bots. This won’t be easy, as bot promulgators are anonymous and it’s difficult to legislate against those you can’t identify.

The bot problem didn’t prompt its first piece of federal legislation until September 2016, when Congress passed the anti-ticket scalping Better Online Ticket Sales (BOTS) Act. Interestingly, the ticket problem persists despite the law, in part because the Federal Trade Commission has done little to enforce it.

A good next move for Congress would be to launch a long-overdue update of the Computer Fraud and Abuse Act from 1986, which makes it unlawful to break into a computer to access or alter information and, astoundingly, still serves as a legal guidepost today. U.S. law needs better definition of what’s allowed and what’s not.

States can play a role too, as evidenced by New York Attorney General Eric Schneiderman’s laudable decision to investigate Devumi, the company selling fake social media followers and the subject of the New York Times investigation.

Enough is enough

Finally, we as consumers should say we’re tired of these shenanigans. Now, to be fair, there are two victims: the social media companies and the users. Twitter’s founders didn’t create its platform expecting it to be under attack from the Russians; they wanted people to communicate. Users didn’t expect their profiles to be stolen and their accounts to be abused. Nevertheless, we can demand that social media platforms be more transparent—or else we won’t use them.

It’s high time to recognize that bad bots are a serious threat and start addressing the problem head-on. The fakery can’t be allowed to continue, or we all suffer.

Rami Essaid is co-founder and chairman of Distil Networks, a bot detection and mitigation company.
