【Comment】
This is what Beijing really is. It
will do whatever it can, ignoring the consequences and moral principles.
US expert warns: China uses new technology to fabricate fake maps and disrupt other countries (自由/Liberty Times, 20190404)
Recently, "AI face-swapping" (deepfake) technology has become widespread, capable of producing animated facial imagery convincing enough to pass for the real thing. On March 31, however, an expert at the US National Geospatial-Intelligence Agency pointed out that the technique can also be applied militarily, that China leads the US in this area, and that it can even fabricate nonexistent imagery or buildings in satellite photos to disrupt other countries' reconnaissance.
According to a report by Defense One, Todd Myers, automation project lead and chief information officer in the office of the NGA director, claimed that China is the first country to use generative adversarial network (GAN) technology to doctor satellite imagery in an attempt to deceive and disrupt outside reconnaissance: "They have already succeeded in fooling computers into recognizing terrain and targets that do not exist." The best-known application of GAN technology is said to be "AI face-swapping."
Myers explained that China uses GAN technology to manipulate images and pixels and, going a step further, to create things that do not exist. For example, once China fabricates a "fake bridge" over some river, an adversary, lured in, will plan its tactics around that bridge. He said: "China has long had plans to pursue nefarious ends."
Myers noted that since 2017, Chinese scholars have been using the technique to identify targets such as roads and bridges in satellite photos: "Never mind the problems this poses for defense and intelligence. Imagine Google Maps actually being seeded with false information; then what happens five years from now, once Tesla's self-driving system is in widespread use?"
The Newest AI-Enabled Weapon: ‘Deep-Faking’ Photos of the Earth (Defense One, 20190331)
Step 1: Use AI to make undetectable changes to outdoor photos.
Step 2: Release them into the open-source world and enjoy the chaos.
Worries about deep fakes — machine-manipulated videos of celebrities and
world leaders purportedly saying or doing things that they really didn’t — are
quaint compared to a new threat: doctored images of the Earth itself.
China is the acknowledged leader in using an emerging technique called
generative adversarial networks to trick computers
into seeing objects in landscapes or in satellite images that aren’t there,
says Todd Myers, automation lead for the CIO-Technology Directorate at the
National Geospatial-Intelligence Agency.
“The Chinese are well ahead of us.
This is not classified info,” Myers said Thursday at the second annual
Genius Machines summit, hosted by Defense One and Nextgov. “The Chinese have already designed; they’re already doing it right now, using
GANs—which are generative adversarial networks—to
manipulate scenes and pixels to create things for nefarious reasons.”
For example, Myers said, an adversary might fool your computer-assisted imagery analysts
into reporting that a bridge crosses an important river at a given point.
“So from a tactical perspective or mission planning, you train your
forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise waiting for you,”
he said.
First described in 2014, GANs
represent a big evolution in the way neural networks learn to see and recognize
objects and even detect truth from fiction.
Say you ask your conventional neural network to figure out which objects
are what in satellite photos. The network will break the image into multiple pieces, or
pixel clusters, calculate how those broken pieces relate to one another, and
then make a determination about what the final product is, or, whether the
photos are real or doctored. It’s all based on the experience of looking at lots of
satellite photos.
GANs reverse that process by pitting two
networks against one another — hence the word “adversarial.” A conventional network might say, “The
presence of x, y, and z in these pixel clusters means this is a picture of a
cat.” But a GAN network might say, “This is a picture of a cat, so x, y, and z
must be present. What are x, y, and z and how do they relate?” The adversarial
network learns how to construct, or generate, x, y, and z in a way that
convinces the first neural network, or the discriminator, that something is
there when, perhaps, it is not.
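The generator-versus-discriminator loop described above can be sketched in miniature. The toy example below is illustrative only (all names, distributions, and hyperparameters are assumptions, not from the article): a one-parameter affine "generator" learns to shift its output distribution until a logistic "discriminator" can no longer tell it apart from "real" data, here a 1-D Gaussian standing in for a pixel statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: a 1-D Gaussian standing in for some pixel statistic.
REAL_MEAN, REAL_STD = 4.0, 0.5

def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr, batch = 0.05, 64
b_hist = []

for step in range(3000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = sample_real(batch)
    fake = a * rng.normal(0, 1, batch) + b
    d_r, d_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_r) * real) - np.mean(d_f * fake))
    c += lr * (np.mean(1 - d_r) - np.mean(d_f))

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    z = rng.normal(0, 1, batch)
    fake = a * z + b
    g = (1 - sigmoid(w * fake + c)) * w   # d log D(fake) / d fake
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)
    b_hist.append(b)

# The two players oscillate around an equilibrium, so we time-average:
# the generator's output mean should settle near the real mean.
b_avg = float(np.mean(b_hist[-500:]))
```

The adversarial pressure is visible in the two update rules: the discriminator's gradient rewards separating real from fake, while the generator's gradient rides the discriminator's own slope to erase that separation, which is exactly how a GAN learns to "construct x, y, and z" convincingly.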
A lot of scholars have found GANs useful for
spotting objects and sorting valid images from fake ones. In 2017, Chinese scholars used GANs to
identify roads, bridges, and other features in satellite photos.
The concern, as AI technologists told Quartz last year, is that the same technique that can discern real bridges from
fake ones can also help create fake bridges that AI can’t tell from the real
thing.
Myers worries that as the world comes to rely more and more on
open-source images to understand the physical terrain, just a handful of expertly manipulated data sets entered into
the open-source image supply line could create havoc. “Forget about the [Department of Defense] and
the [intelligence community]. Imagine Google Maps being infiltrated with that,
purposefully? And imagine five years
from now when the Tesla [self-driving] semis are out there routing stuff?”
he said.
When it comes to deep fake videos of people,
biometric indicators like pulse and speech can defeat the fake effect. But faked landscape isn’t
vulnerable to the same techniques.
Even if you can defeat GANs, a lot of image-recognition systems can be fooled
by adding small visual changes to the physical objects in the environment
themselves, such as stickers added to stop signs that are barely noticeable to
human drivers but that can throw off machine vision systems, as DARPA program
manager Hava Siegelmann has demonstrated.
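The physical-sticker attack Siegelmann describes belongs to the family of adversarial examples, which can be sketched with a toy model (everything here is an assumption for illustration: the "detector" is a fixed logistic model over flattened pixels, not any real vision system). A bounded nudge to each pixel, chosen along the sign of the model's weights in the spirit of the fast gradient sign method, flips a confident detection while staying small per pixel.

```python
import numpy as np

rng = np.random.default_rng(1)

N_PIXELS = 64
# Stand-in detector: a fixed logistic model over flattened pixel values.
w = rng.normal(0, 1, N_PIXELS)
bias = -float(w @ np.full(N_PIXELS, 0.5))  # center the decision at a gray image

def detect(img):
    """Probability the detector assigns to 'object present'."""
    return float(1 / (1 + np.exp(-(w @ img + bias))))

# A clean input the model scores confidently (pixels nudged along +w).
x = np.clip(0.5 + 0.1 * np.sign(w), 0.0, 1.0)
p_clean = detect(x)

# FGSM-style perturbation: move each pixel by at most eps against the score.
eps = 0.15
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)
p_adv = detect(x_adv)
```

The perturbation never exceeds 0.15 per pixel, the machine analog of a sticker a human driver barely notices, yet it is enough to collapse the detector's confidence, because it is aligned with the model's most sensitive directions.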
Myers says the military and intelligence community
can defeat GAN, but it’s time-consuming and costly,
requiring multiple, duplicate collections of satellite images and other pieces
of corroborating evidence. “For every
collect, you have to have a duplicate collect of what occurred from different
sources,” he said. “Otherwise, you’re
trusting the one source.”
The
challenge is both a technical and a financial one.
“The biggest thing is the funding required to make sure you can do what I
just talked about,” he said.
On Thursday, U.S. officials confirmed that data
integrity is a rising concern. “It’s something we care about in terms of
protecting our data because if you can get to the data you can do the
poisoning, the corruption, the deceiving and the denials and all of those other
things,” said Lt. Gen. Jack Shanahan, who runs the Pentagon’s new Joint Artificial Intelligence Center. “We have a strong program protection plan to
protect the data. If you get to the
data, you can get to the model.”
But when it comes to protecting open-source data and images, used by
everybody from news organizations to citizens to human rights groups to hedge
funds to make decisions about what is real and what isn’t, the question of how
to protect it is frighteningly open. The gap between the “truth” that the government can
access and the “truth” that the public can access may soon become unbridgeable,
which would further erode the public credibility of the national security
community and the functioning of democratic institutions.
Andrew Hallman, who heads the CIA’s Digital Directorate, framed the
question in terms of epic conflict. “We are in an existential battle for truth in the digital
domain,” Hallman said. “That’s,
again, where the help of the private sector is important and these data
providers. Because that’s frankly the
digital conflict we’re in, in that battle space…This is one of my highest
priorities.”
When asked if he felt the CIA had a firm grasp of the challenge of fake
information in the open-source domain, Hallman said, “I think we are starting
to. We are
just starting to understand the magnitude of the problem.”
Seems like everyone has gone off to play.....
Over the past two days, news has started spreading that the other side is buying up local Facebook pages and recruiting Taiwanese-collaborator cyber troops. Nobody's reposting it?
It's hard for people to know what measures the government has taken against China's increasingly aggressive acts of infiltration. However you look at it, it surely can't be doing nothing.
The public's worries are plain to see. Is even the occasional timely disclosure of countermeasures against the adversary being suppressed, lest it spoil the "harmonious cross-strait atmosphere"?
I'm holding off because of this piece by 王立. Reposting his full text:
I'd advise everyone not to be so anxious, as if not a single second can be lost, rushing to repost and share everywhere and cursing people at random.
The buying-up of fan pages started two years ago. Even pages you visit every day have been hit, their operators swapped out without anyone noticing.
It's outsourcing, subcontracted layer upon layer, that produces such idiotic purchasing methods. Following the money trail is useless: nine times out of ten it ends at a Taiwanese PR firm, and even if the Investigation Bureau comes knocking, the firm can pull out a pile of entertainment content and images and say a client hired it to run fan pages for an online-shopping business.
I don't expect to understand the more professional methods; many techniques are too subtle in their details to detect. This episode looks more like an operation to destroy credibility.
Put simply: whoever runs the project for disrupting Taiwanese public opinion, or one of its account managers, noticed that people in Taiwan were promoting war-preparedness awareness, and took note of that activity.
The next step is to stir up discussion with deliberately idiotic methods. Step one splits off the pan-blue, anti-green camp, because blues are predisposed to think the greens are dodging blame for governing incompetence and that only greens lie: "how could the Communist bandits' methods possibly be this stupid?"
Then you only need to deal with the remaining half. Within a few days, messages will start appearing claiming it was green-leaning people buying up the fan pages to "deliberately frame China." That is guaranteed to knock out twenty or thirty percent of the green side's confidence.
Same methods, same groundwork. If this keeps getting stoked until the green-leaning and pan-blue internet camps open fire on each other, someone will then dump material to destroy 沈伯洋's credibility.
The most direct effect: however hard 沈's team works, the government will no longer be able to treat their research reports as an official basis for briefing superiors and drafting policy.
==========
What I want to say is: stay calm.
People have been working on this all along.
https://m.facebook.com/story.php?story_fbid=2176357746010445&id=1608253896154169
"How could the Communist bandits' methods possibly be this stupid?" There's a saying about cleverness being the clever man's undoing, and Taiwan has never lacked people with that kind of petty cleverness, so they can be conned with the most idiotic methods.
The point is "don't lash out for no reason." Shooting at every shadow is fine when we're chatting here in our echo chamber; out in the open, listen more and observe more.
As for the fan-page business, never mind; I don't use FB, I only occasionally look at what others post.
"Our echo chamber here"
This place is an echo chamber.
A good self-reminder.
But there are still basic logic and factual foundations to hold to,
unless what you want is to observe mass behavior.