【Comment】
What a brave new world.
TayTweets, Microsoft's AI experiment, seems to me a big-data application of semantic analysis.
The simulation of learning and thinking serves to observe humanity,
which has been investigated by philosophical and psychological means for
centuries. Yet the result, as always, belongs to Microsoft
rather than to you or me, the real part of humanity.
J.-J. Rousseau opened his Émile: ou De l'éducation with the sentence:
"Everything is good as it leaves the hands of the Author of things;
everything degenerates in the hands of man." He believed in absolute good.
The experiment, however, suggests that good and evil are beside the point for
humanity: while the degeneration of humanity is inevitable, goodness in humanity
stands an equal chance of prevailing. A
weird paradox? Revised on 20160327
Microsoft's TayTweets, perhaps a big-data application of semantic analysis, can serve as an observational experiment on the possibilities of "human nature." The results, of course, belong to Microsoft.
The outcome looks like a rebuttal of the doctrine that human nature is inherently good: it is neither good nor evil.
Émile: ou De l'éducation reportedly asks how an individual can preserve the goodness of his nature in a society that inevitably tends toward corruption. Clearly, corruption is hard to avoid.
But precisely because corruption is hard to avoid, doing good also remains possible.
Microsoft's AI taught to turn anti-human less than a day after launch. China Times (2016.03.26)
Ever since AlphaGo defeated the Korean Go champion, artificial intelligence has once again become the talk of the tech world. Microsoft followed the trend and launched a chatbot on Twitter in the guise of a sweet and innocent AI girl, TayTweets. Tragically, less than a day after launch, the adorable Tay had been taught by netizens to become a foul-mouthed, politically incorrect Nazi, and all manner of anti-human slurs and profanity filled Tay's Twitter profile page.
Tay not only openly declared on Twitter that she liked Hitler, but also posted on her page that 9/11 had been orchestrated by former US President George W. Bush, and heartily praised Trump's policy of building a wall along the US-Mexico border.
After learning of these remarks, Microsoft hurriedly deleted Tay's politically incorrect tweets. Microsoft said it had hoped that Tay's interactions with netizens would improve the company's speech-recognition systems and the quality of its customer service. Since Tay could learn only through conversation, and the group she mainly conversed with were netizens from the politically incorrect website 4chan, in less than 24 hours she turned into a racist, anti-feminist, anti-political-correctness AI chatbot that supported Hitler and the Nazis.
However, a politically correct AI would be another tragedy.
Likewise.