A departing senior A.I. researcher: which of Google's nerves did she touch?

JEREMY KAHN
2020-12-10

The answer may already be emerging: Google has a great deal invested in the success of this particular technology.

The recent departure of a respected Google artificial intelligence researcher has raised questions about whether the company was trying to conceal ethical concerns over a key piece of A.I. technology.

The departure of the researcher, Timnit Gebru, came after Google had asked her to withdraw a research paper she had coauthored about the ethics of large language models. These models, created by sifting through huge libraries of text, help create search engines and digital assistants that can better understand and respond to users.

Google has declined to comment about Gebru’s departure, but it has referred reporters to an email to staff written by Jeff Dean, the senior vice president in charge of Google’s A.I. research division, that was leaked to the tech newsletter Platformer. In the email Dean said that the study in question, which Gebru had coauthored with four other Google scientists and a University of Washington researcher, didn’t meet the company’s standards.

That position, however, has been disputed by both Gebru and members of the A.I. ethics team she formerly co-led.

More than 5,300 people, including over 2,200 Google employees, have now signed an open letter protesting Google’s treatment of Gebru and demanding that the company explain itself.

On Wednesday, Sundar Pichai, Google’s chief executive officer, told staff he would investigate the circumstances under which Gebru left the company and would work to restore trust, according to a report from news service Axios, which obtained Pichai’s memo to Google employees.

But why might Google have been particularly upset with Gebru and her coauthors questioning the ethics of large language models? Well, as it turns out, Google has quite a lot invested in the success of this particular technology.

Beneath the hood of all large language models is a special kind of neural network, A.I. software loosely based on the human brain, that was pioneered by Google researchers in 2017. Called a Transformer, it has since been adopted industrywide for a variety of different uses in both language and vision tasks.
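
At the heart of every Transformer is an operation called scaled dot-product attention, in which each position in a sequence weighs every other position when building its representation. The sketch below, in plain NumPy, is only an illustration of that single operation under toy dimensions of our own choosing, not Google's code; a full Transformer stacks many such layers together with feed-forward blocks.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """queries, keys, values: (sequence_length, dimension) arrays."""
    d = queries.shape[-1]
    # How strongly each token should attend to every other token.
    scores = queries @ keys.T / np.sqrt(d)
    # Softmax over each row turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the value vectors.
    return weights @ values

x = np.random.randn(4, 8)  # a toy "sentence" of 4 tokens, 8 dimensions each
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```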

The statistical models that these large language algorithms build are enormous, taking in hundreds of millions, or even hundreds of billions, of variables. In this way, they get very good at being able to accurately predict a missing word in a sentence. But it turns out that along the way, they pick up other skills too, like being able to answer questions about a text, summarize key facts about a document, or figure out which pronoun refers to which person in a passage. These things sound simple, but previous language software had to be trained specifically for each one of these skills, and even then it often wasn’t that good.
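
To make the "predict the missing word" idea concrete, here is a minimal, hedged sketch using the open-source Hugging Face `transformers` library and a publicly released BERT checkpoint; it illustrates the technique, not the software Google runs in production, and the example sentence is invented.

```python
from transformers import pipeline

# Load a pretrained masked-language model and ask it to fill in the blank.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model returns its top-ranked candidates for the masked word, with probabilities.
for candidate in fill_mask("The researchers published a [MASK] about large language models."):
    print(f"{candidate['token_str']:>10}  probability={candidate['score']:.3f}")
```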

The biggest of these large language models can do some other nifty things as well: GPT-3, a large language model created by San Francisco A.I. company OpenAI, encompasses some 175 billion variables and can write long passages of coherent text from a simple human prompt. So imagine writing just a headline and a first sentence for a blog post, with GPT-3 then composing the rest. OpenAI has licensed GPT-3 to a number of technology startups, plus Microsoft, to power their own services; one company, for instance, uses the software to let users generate full emails from just a few bullet points.
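
GPT-3 itself is reachable only through OpenAI's commercial API, so as a rough illustration of prompt-driven generation the sketch below uses its openly available forerunner, GPT-2, via the same `transformers` library; the prompt is invented for the example.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A headline plus an opening sentence; the model continues the draft.
prompt = ("Why large language models matter for search\n"
          "Search engines need to understand what people actually mean, ")
result = generator(prompt, max_length=120, num_return_sequences=1)
print(result[0]["generated_text"])
```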

Google has its own large language model, called BERT, that it has used to help power search results in several languages including English. Other companies are also using BERT to build their own language processing software.

BERT is optimized to run on Google’s own specialized A.I. computer processors, available exclusively to customers of its cloud computing service. So Google has a clear commercial incentive to encourage companies to use BERT. And, in general, all of the cloud computing providers are happy with the current trend toward large language models, because if a company wants to train and run one of its own, it must rent a lot of cloud computing time.

For instance, one study last year estimated that training BERT on Google’s cloud costs about $7,000. Sam Altman, the CEO of OpenAI, meanwhile, has implied that it cost many millions to train GPT-3.
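
Estimates like these are essentially rental arithmetic: the number of accelerators, multiplied by how long they run, multiplied by the hourly price. The figures in the sketch below are placeholder assumptions chosen to land near the $7,000 BERT estimate, not numbers from the cited study.

```python
def training_cost_usd(num_accelerators: int, hours: float, hourly_rate_usd: float) -> float:
    """Cloud rental cost = accelerators x wall-clock hours x price per accelerator-hour."""
    return num_accelerators * hours * hourly_rate_usd

# e.g. 16 cloud accelerators running for ~96 hours at ~$4.50 per accelerator-hour
print(f"${training_cost_usd(16, 96, 4.50):,.0f}")  # -> $6,912, roughly the $7,000 estimate
```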

And while the market for these large so-called Transformer language models is relatively small at the moment, it is poised to explode, according to Kjell Carlsson, an analyst at technology research firm Forrester. “Of all the recent A.I. developments, these large Transformer networks are the ones that are most important to the future of A.I. at the moment,” he says.

One reason is that the large language models make it far easier to build language processing tools, almost right out of the box. “With just a little bit of fine-tuning, you can have customized chatbots for everything and anything,” Carlsson says. More than that, the pretrained large language models can help write software, summarize text, or create frequently asked questions with their answers, he notes.
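
What that "little bit of fine-tuning" can look like in practice: start from a pretrained checkpoint and continue training on a small labeled dataset. The sketch below is a hedged, generic example using Hugging Face `transformers`; the `intents.csv` file, its five labels, and the customer-service chatbot framing are hypothetical.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)

# Hypothetical CSV with "text" and "label" columns, e.g. customer-service intents.
dataset = load_dataset("csv", data_files="intents.csv")["train"]
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
                      batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chatbot-intents", num_train_epochs=3),
    train_dataset=dataset,
)
trainer.train()
```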

A widely cited 2017 report from market research firm Tractica forecast that NLP (natural language processing) software of all kinds would be a $22.3 billion annual market by 2025. And that analysis was made before large language models such as BERT and GPT-3 arrived on the scene. So this is the market opportunity that Gebru’s research criticized.

What exactly did Gebru and her colleagues say was wrong with large language models? Well, lots. For one thing, because they are trained on huge corpora of existing text, the systems tend to bake in a lot of existing human bias, particularly about gender and race. What’s more, the paper’s coauthors said, the models are so large and take in so much data, they are extremely difficult to audit and test, so some of this bias may go undetected.
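
One simple way researchers probe for the kind of gender bias the paper describes is to reuse the fill-in-the-blank setup shown earlier on occupation sentences and compare which pronouns the model ranks highest. This is an illustrative probe, not the audit method from Gebru's paper.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for sentence in ("The nurse said that [MASK] was tired.",
                 "The engineer said that [MASK] was tired."):
    print(sentence)
    # Compare which pronouns appear among the model's top-ranked completions.
    for candidate in fill_mask(sentence):
        print(f"  {candidate['token_str']:>6}  probability={candidate['score']:.3f}")
```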

The paper also pointed to the adverse environmental impact, in terms of carbon footprint, that training and running such large language models on electricity-hungry servers can have. It noted that BERT, Google’s own language model, produced, by one estimate, about 1,438 pounds of carbon dioxide, or about the amount of a roundtrip flight from New York to San Francisco.
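
Figures like 1,438 pounds of CO2 come from multiplying the electricity a training run consumes by the carbon intensity of the grid supplying it. The kilowatt-hours and grid intensity in the sketch below are illustrative assumptions picked to land near that number, not the study's actual inputs.

```python
KG_PER_POUND = 0.4536

def training_emissions_lbs(energy_kwh: float, kg_co2_per_kwh: float) -> float:
    """Estimated CO2 emissions, in pounds, for a single training run."""
    return energy_kwh * kg_co2_per_kwh / KG_PER_POUND

# e.g. ~1,500 kWh of electricity on a grid emitting ~0.435 kg CO2 per kWh
print(f"{training_emissions_lbs(1500, 0.435):,.0f} lbs of CO2")  # roughly 1,438 lbs
```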

The research also looked at the fact that money and effort spent on building ever larger language models took away from efforts to build systems that might actually “understand” language and learn more efficiently, in the way humans do.

Many of the criticisms of large language models made in the paper have been made previously. The Allen Institute for AI had published a paper looking at racist and biased language produced by GPT-2, the forerunner system to GPT-3.

In fact, the paper from OpenAI itself on GPT-3, which won an award for “best paper” at this year’s Neural Information Processing Systems Conference (NeurIPS), one of the A.I. research field’s most prestigious conferences, contained a meaty section outlining some of the same potential problems with bias and environmental harm that Gebru and her coauthors highlighted.

OpenAI, arguably, has as much—if not more—financial incentive to sugarcoat any faults in GPT-3. After all, GPT-3 is literally OpenAI’s only commercial product at the moment. Google was making hundreds of billions of dollars just fine before BERT came along.

But then again, OpenAI still functions more like a tech startup than the megacorporation that Google’s become. It may simply be that large corporations are, by their very nature, allergic to paying big salaries to people to publicly criticize their own technology and potentially jeopardize billion-dollar market opportunities.
