Discussion: Are you afraid of artificial intelligence?
Main text
Is AI an existential threat to humanity?
評(píng)論翻譯
Is AI a potential threat to humanity?
In the near term AI serves as a tool that can magnify the amount of power an individual has. For example, someone could buy thousands of cheap drones, attach a gun to each of them, and develop AI software to send them around shooting people. If the software was good enough this could result in far more destruction than a normal terrorist attack. And I fully expect that the software part of this will become easy in the future if it isn't already today.
原創(chuàng)翻譯:龍騰網(wǎng) http://m.top-shui.cn 轉(zhuǎn)載請(qǐng)注明出處
這與當(dāng)今恐怖組織的選項(xiàng)截然不同,因?yàn)樗麄冃枰藖韴?zhí)行攻擊,每個(gè)人能造成的損害是有限的。用相對(duì)簡(jiǎn)單的人工智能取代人,可以將攻擊的邊際成本降至零,并削弱執(zhí)法部門阻止攻擊或進(jìn)行報(bào)復(fù)的能力。因此,存在一種風(fēng)險(xiǎn),隨著人工智能越來越先進(jìn),它至少會(huì)破壞穩(wěn)定。
這與擔(dān)心AI憑借“自由意志”來“接管一切”完全不同。我認(rèn)為這是一個(gè)潛在的風(fēng)險(xiǎn),但它離我們還很遠(yuǎn)。而我認(rèn)為,短期內(nèi)的力量放大問題同樣是一個(gè)更為危險(xiǎn)的隱患。
This scares me.
Not that scared, to be honest.
Hobby-sized drones armed with small arms are possible, but still deep in the proof-of-concept phase. Anything on the scale of, say, an MQ-1 Predator (the smallest armed RPA) is well outside the budget of a single non-state actor and easily countered (there are many active programs dedicated to this exact threat right now).
說實(shí)話,我并不那么害怕。
就業(yè)余級(jí)別的無人機(jī)而言,雖然理論上可以裝備小型武器,但目前仍處于概念驗(yàn)證階段。任何達(dá)到MQ-1“捕食者”(最小的武裝遙控飛機(jī))規(guī)模的無人機(jī),都遠(yuǎn)遠(yuǎn)超出了單一非國(guó)家玩家的預(yù)算,并且很容易被對(duì)抗(目前有許多活躍的程序?qū)iT針對(duì)這種威脅)。
假設(shè)小型無人機(jī)確實(shí)能夠裝備輕型武器,并且能夠從中央指揮控制系統(tǒng)接收自動(dòng)任務(wù)指令。那么,這種攻擊可能造成的破壞,很可能會(huì)比一個(gè)裝備相當(dāng)、成本相仿的武裝個(gè)體要小得多。這種評(píng)估是基于小型空中平臺(tái)在狹窄空間中的行動(dòng)受限,以及在此類事件中人群可能的反應(yīng)(找到掩護(hù)并不困難)。當(dāng)你深入探討各種可能性時(shí),你會(huì)開始意識(shí)到,在這種能力成為現(xiàn)實(shí)之前,更小規(guī)模的分布式自主攻擊可能已經(jīng)以非致命的方式實(shí)施過,這樣可以允許我們進(jìn)行徹底的威脅模擬。
Most non-state actors are cheap when it comes to weapons for terrorist attacks. It is far cheaper to recruit some fundamentalists willing to die for a cause than to invest in an expensive technology. Furthermore, the tactic would be a one-time use, and it directs the fear factor onto the technology rather than the ideology.
It's worth mentioning that many researchers have already taken on small RPAs and developed a range of very effective countermeasures, many of them involving electronic warfare.
大多數(shù)非國(guó)家玩家在恐怖襲擊的武器選擇上都很節(jié)省。招募一些愿意為事業(yè)犧牲的激進(jìn)分子比投資昂貴的技術(shù)要便宜得多。再者,這種策略只能使用一次,并將人們的恐懼感引向技術(shù),而不是背后的意識(shí)形態(tài)。
原創(chuàng)翻譯:龍騰網(wǎng) http://m.top-shui.cn 轉(zhuǎn)載請(qǐng)注明出處
The only problem with your entire argument is that you have neglected to consider explosives.
Drones are already there in Afghanistan, Yemen, etc., targeting terrorists and sometimes civilians too.
I don't see how a drone with AI can be a bigger threat than this. The hardest part is getting the drone to infiltrate a country, not shooting people (that can be controlled by humans through signals from a distance, and can be done even now). The drone would be detected by radar and shot down well before it entered the US, Europe, or any other potential terrorist-targeted country.
無人機(jī)已經(jīng)在阿富汗、也門等地被用來打擊恐怖分子,有時(shí)也會(huì)誤傷平民。
我不認(rèn)為配備人工智能的無人機(jī)會(huì)比現(xiàn)在的情況更危險(xiǎn)。最難的部分是讓無人機(jī)潛入一個(gè)國(guó)家,而不是攻擊人(這可以由人類通過遠(yuǎn)程信號(hào)控制,現(xiàn)在就可以做到)。無人機(jī)在進(jìn)入美國(guó)或歐洲或其他潛在的恐怖襲擊目標(biāo)國(guó)家之前,就會(huì)被雷達(dá)擊落。
I assume the drone would be assembled inside the country. Imagine a civilian quadcopter with a handgun strapped to it and a system to fire it. With good enough software you could get it to autonomously fly around and indiscriminately kill people.
Once a terrorist group developed such software, they could assemble a dozen of these drones and hide them in a truck. Then they could park the truck near a sporting event, and once they had left the scene, remotely trigger the drones to fly into the stands and attack. They could also reuse the software again and again, with the same cheap off-the-shelf hardware, for multiple attacks.
That's not to say this would be easy. Writing such AI is still very hard today, and there are probably cheaper, more effective ways to attack. But who knows what the next 20 years will bring.
If terrorists would use drones and AI-controlled bots, I'm very sure law enforcement agencies would too. And the latter would invest in them well before terrorists if machines are found to be more reliable (UAVs, for instance, are already in use by the US, whereas not many terrorists can get their hands on such tech). Plus, if I remember correctly, a recent law passed in the US requires drones to be registered according to their specs. So, yes, as AI-controlled machines improve, their uses will increase too, which will be both beneficial and harmful at the same time for humankind.
What do you think is the best course of action to deal with this?
你認(rèn)為處理這個(gè)問題的最佳方法是什么?
Perhaps the solution is to have your own drone security? I recently read "The Diamond Age," which mentions this problem.
Like everyone having their own gun? Seems kind of a nice idea; it works pretty well in the USA.
就像每個(gè)人都擁有槍一樣?聽起來是個(gè)不錯(cuò)的主意,在美國(guó)運(yùn)作得很好。
I agree that it would be unfortunate to start an arms race. I'm imagining security drones that would only be used against other drones. If you agree that terrorists with autonomous drones are a problem, what would you do about it?
我認(rèn)同啟動(dòng)軍備競(jìng)賽并非好事。我設(shè)想的是專門用來對(duì)抗其他無人機(jī)的安全無人機(jī)。如果你也認(rèn)同恐怖分子操控的自動(dòng)無人機(jī)構(gòu)成威脅,你會(huì)如何處理這個(gè)問題?
Retaliation is not the responsibility of law enforcement, at least not in a free society. I know that wasn't a major point in your answer, but it is always important to be clear about it.
執(zhí)法部門的責(zé)任并不包括報(bào)復(fù)行為,至少在自由社會(huì)中是這樣。我知道這一點(diǎn)在你之前的回答中并不是重點(diǎn),但明確這一點(diǎn)非常重要。
I don't think AI is going to be a threat to humanity. Why should they exterminate humans when they can collaborate with us? The machine itself cannot have the thinking of a human, nor can a human think like a machine. Arbitrariness and variation are unique traits of the human brain, and if you applied them to a binary machine, it wouldn't function. Therefore, if AI can be developed to the point of self-awareness, it will find that humans are the key to its development from that point forward.
我認(rèn)為人工智能不會(huì)成為人類的威脅。當(dāng)他們可以與我們合作時(shí),他們?yōu)槭裁匆麥缛祟??機(jī)器本身不能擁有人類的思維,人類也不能像機(jī)器那樣思考。任意性和變化是人類大腦的獨(dú)特特征,如果將其應(yīng)用于二進(jìn)制機(jī)器,機(jī)器將無法正常工作。因此,如果人工智能發(fā)展到具有自我意識(shí)的程度,它們會(huì)發(fā)現(xiàn)人類是它們未來發(fā)展的關(guān)鍵所在。
AI already affects your existence and socio-economic mobility. Here in the US it is called the Fair Isaac (FICO) credit score. A low score makes you a pariah; a high score puts you in the top 1%.
These algorithms are not sophisticated, but they argue for a future where deep-learning AI systems will decide your future and your potential value or burden to society.
Up to now data scientists have pulled the levers, but AI systems are becoming self-learning and will be more precise than humans at pattern detection and prediction.
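The kind of scoring described above can be sketched as a toy linear model. Everything here is invented for illustration: the feature names, the weights, and the 300-850 mapping are placeholders, not the real (proprietary) FICO formula:

```python
# Toy linear "credit score": a sketch only. The features, weights, and
# the 300-850 mapping are invented; the real FICO model is proprietary.

def toy_score(payment_history, utilization, age_of_credit):
    """Map three normalized features (each 0.0-1.0) to a 300-850 style score."""
    weighted = (0.5 * payment_history          # on-time payment rate
                + 0.3 * (1.0 - utilization)    # lower utilization scores higher
                + 0.2 * age_of_credit)         # longer history scores higher
    return round(300 + 550 * weighted)

# A spotless borrower vs. a maxed-out one.
good = toy_score(payment_history=1.0, utilization=0.1, age_of_credit=0.8)
bad = toy_score(payment_history=0.4, utilization=0.95, age_of_credit=0.2)
print(good, bad)  # the first score lands far above the second
```

A bank-style cutoff ("deny below 600", say) turns this single number into exactly the pariah/top-1% split the comment describes; a deep-learning version just replaces the hand-set weights with learned ones.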
As we master genetics and socio-economic models of the population-level determinants of wealth, AI could control who is allowed to procreate based on their healthy genes, their social status, and their intellectual capacity.
Sound like eugenics? I hope not. But in countries that have social complexities and need to industrialize at a rapid pace (India and Indonesia come to mind), AI could be misused by those in power to decide which demographics, ethnicities, even beliefs should be promoted and which need to be deprecated.
Interconnecting AI systems is the challenge we are about to face, and it is a far bigger one than the Terminator-style robot threat so often depicted in the movies.
The Internet of Everything (IOE) may just be the first step. See below.
在信息驅(qū)動(dòng)的經(jīng)濟(jì)成為未來全球增長(zhǎng)的核心時(shí),人工智能真正的威脅在于它們可能形成一個(gè)實(shí)時(shí)、大數(shù)據(jù)、深度學(xué)習(xí)的機(jī)器系統(tǒng)聯(lián)盟,開始主導(dǎo)人類的未來。由于這些系統(tǒng)無需人類操作,因此也就不需要任何控制桿。
萬物互聯(lián)(IOE)可能只是第一步。
Is AI a threat to humanity? Should it be regulated?
No and no. I think the term AI is overloaded, and mostly used by fear-mongering technophiles or wannabe intellectuals. AI in its current state is really a probabilistic set of heuristics or rules. If the A is ever to become I, there needs to be a fluid transition and unknown boundary between deterministic and probabilistic decision making - like the human mind. I don't see that around the corner. I do think we will get ever more precise capabilities in strictly defined systems (autonomous driving), where most of the hairiest and most ambiguous rules will be ratified or voted on, but I don't see an "intelligent" brain anywhere around the corner. I think it's mostly "smart" people trying to sound really smart...
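One way to make the deterministic/probabilistic contrast concrete is to compare a fixed rule with a decision sampled from a probability. A minimal sketch, with invented numbers:

```python
import random

# Deterministic policy: the same input always yields the same decision.
def brake_deterministic(distance_m):
    return distance_m < 10.0

# Probabilistic policy: the decision is sampled from a belief about the world.
def brake_probabilistic(p_obstacle):
    return random.random() < p_obstacle

random.seed(0)  # fixed seed so the sketch is repeatable
decisions = [brake_probabilistic(0.7) for _ in range(1000)]
print(sum(decisions))  # roughly 700 of the 1000 sampled decisions say "brake"
```

A "fluid transition" between the two might mean a system that acts deterministically where it is certain and falls back to probabilistic choices where it is not; that blending is the part the commenter argues we do not yet know how to build.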
"there needs to be a fluid transition and unknown boundary between deterministic and probabilistic decision making - like the human mind."
I have no idea what, if anything, this means.
Do you have "free will," Pete? Our mind says, "Yes. I make decisions, such as posting my comment." Current neuroscience casts doubt on our opinion of ourselves. Our "selves" seem to make decisions at a lower level than our consciousness tells us. Don't think about it too much. That way leads to madness.
I don't know, Stephen. And I've given up thinking about it, for now at least. It does indeed lead to madness.
Differentiate between Deterministic and Probabilistic Systems
You might find this answer useful
What is machine learning?
區(qū)分確定性系統(tǒng)和概率系統(tǒng)
你可能會(huì)發(fā)現(xiàn)這個(gè)答案很有用
什么是機(jī)器學(xué)習(xí)?
I know what machine learning is. I just don't understand what a 'fluid transition and unknown boundary' means.
我知道機(jī)器學(xué)習(xí)是什么。我只是不明白“流暢的過渡和未知的界限”這個(gè)表述是什么意思。
原創(chuàng)翻譯:龍騰網(wǎng) http://m.top-shui.cn 轉(zhuǎn)載請(qǐng)注明出處
From my understanding:
Machines are programmed to follow a set of rules for any particular task, whereas in AI, in what is known as machine learning, the machine can observe and learn from its environment while performing its task; but to achieve that learning, it also follows a set of rules.
The human mind, by contrast, can be autonomous, given to emotions (fear, anxiety, love, and whatnot), morals, nature, and nurture when making decisions.
Hence, unless machines have a human mind, there won't be a need to worry about AI threats.
據(jù)我理解:
機(jī)器被編程遵循一套規(guī)則來執(zhí)行特定任務(wù),而在人工智能,也就是機(jī)器學(xué)習(xí)中,它能夠觀察并從環(huán)境中學(xué)習(xí),同時(shí)執(zhí)行任務(wù),但這種學(xué)習(xí)也需要遵循一套規(guī)則。
相比之下,人類大腦在決策時(shí)會(huì)受到情緒(如恐懼、焦慮、愛等)、道德、天性和教養(yǎng)的影響。
因此,除非機(jī)器擁有類似人類的大腦,否則我們不必?fù)?dān)心人工智能的威脅。
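The rules-versus-learning distinction drawn here can be made concrete. This is a minimal sketch with made-up data: a hand-coded rule next to a threshold "learned" by scanning labeled examples - the machine still follows rules either way, but the learned rule was selected from data rather than written by the programmer:

```python
# A hand-coded rule vs. a parameter fitted from examples (toy data).

# Deterministic rule: the threshold is chosen by the programmer.
def is_spam_rule(num_links):
    return num_links > 5

# "Learning": choose the threshold that best separates labeled examples.
examples = [(0, False), (1, False), (2, False), (8, True), (9, True), (12, True)]

def fit_threshold(data):
    best_t, best_correct = 0, -1
    for t in range(0, 13):  # candidate thresholds
        correct = sum((links > t) == label for links, label in data)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

learned_t = fit_threshold(examples)
print(learned_t)  # the fitted threshold separates all six examples correctly
```

Real machine learning replaces this brute-force scan with gradient descent over millions of parameters, but the shape of the process - rules for choosing rules - is the same.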
Creating an AI system as complex as the human brain, while theoretically possible, would take many years of scientific research and computation.
For now, the AI systems that exist are for marketing and business, though the medical field may push things a notch further; the bottom line is commercial success. But always remember, it is still humans who design the machines, so let's worry about the motives of those humans.
I see it more in an economic context: 50% of all jobs will be replaced by AI-based tools within the next 5 years. Short term, this is good news because it increases productivity. Mid term it's not, because society can't change its social systems and values quickly enough to make that digestible for the majority.
Our existing societies are ruled by the idea that people should share in the generated added value in proportion to the amount of it they were able to create.
That is how people become billionaires.
But if machines or AI generate the added value, how, to what extent, and in what form do people share in it?
And how can a large portion of humans adapt to the reality that no one needs their work anymore?
If people can't find a positive role within a society, they get radicalized.
If that happens to large groups, we face an existential threat.
I would like to see more ideas and discussions about how to solve this near-term, real problem rather than speculating about the day after the singularity.