【雙魚之論】
Shen Bo-yang, aka Puma Shen, a legislator of Taiwan, was named and placed
on a global wanted list by China, but not because of his advocacy for Taiwan
independence. In fact, Shen's voice on Taiwan independence is not among the
loudest, and his public appearances and speeches are not merely proof of his
personal courage. The more critical point is that his expertise lies in the
in-depth study of cognitive warfare, particularly his specialized research on
the Chinese Communist Party's (CCP) cognitive operations. That research
directly exposes the CCP's ongoing "new form of warfare," conducted on a
global scale, which the CCP desperately wishes to keep hidden; this is the key
reason they seek to eliminate him at all costs. Below is a translation of his
speech at the German Bundestag, which I highly recommend.
沈伯洋之所以會被中國點名與全球通緝,並非其宣揚台灣獨立。沈伯洋的台灣獨立聲量並非最高的一級,且沈伯洋的現身與演講並非只在證明其勇敢,更關鍵的是:沈伯洋精研認知作戰,特別是對中共認知作戰的研究專業,可以直接戳到中共不願意被揭露的正在實施以全球為範圍的「新型態作戰」,才是中共欲除之後快的關鍵。以下是其在德國聯邦議會的演講翻譯,極力推薦:
Disinformation by autocratic states with the aim of weakening democracy and threatening human rights
Digital Autocracy vs. Democratic Resilience
Puma Shen
Legislator, Legislative Yuan, Taiwan
〈專制國家散布假訊息,旨在削弱民主並威脅人權——數位威權 vs. 民主韌性〉
沈伯洋@德國聯邦議會 20251112 Taimocracy翻譯
Opening Statement 開場聲明
The
debate over the precise definition of "disinformation" is often a
conceptual dead end. From a policy and analytical perspective, the real issue
is not the maliciousness of a single message, but the
information ecosystem itself—a structural asymmetry between open and
closed societies. The strategic goal of authoritarian states is to penetrate
and subvert democratic discourse through systemic interference targeting the
foundation of democratic deliberation.
關於「假訊息」精確定義的爭論,往往陷入概念死胡同。從政策與分析視角來看,真正問題不在於單一訊息的惡意與否,而是資訊生態本身——即開放社會與封閉社會之間的結構性不對稱。威權國家的戰略目標,在於透過系統性干預滲透並顛覆民主論述,鎖定民主審議的基礎進行攻擊。
1. The Internal Genesis:
"Informational Autocracy" 內部生成:「資訊威權主義」
Modern
authoritarian governance, exemplified by figures like Vladimir Putin and Xi
Jinping, has shifted from 20th-century fear-based repression to a subtler, yet
effective, model termed "informational autocracy" [1]. These regimes
secure domestic legitimacy by manipulating information to lead the public to
believe, rationally but incorrectly, that the rulers are competent and
public-spirited. This is achieved by employing rhetoric focused on economic
performance and public service, mimicking democratic leaders, and using
concealed censorship rather than overt oppression. They even strategically
mimic democracy by holding elections to create a façade of legitimacy.
現代威權治理,以普丁、習近平為代表,已從20世紀的恐懼鎮壓轉向更隱微卻有效的「資訊威權主義」(informational
autocracy)模式[1]。此類政權透過操縱資訊,使民眾「理性但錯誤地」相信統治者具備能力且為公眾福祉著想,進而鞏固國內正當性。具體手法包括:強調經濟表現與公共服務的修辭,模仿民主領袖;以隱性審查取代公開壓迫;策略性仿民主,舉辦選舉製造正當性假象。
2. External Projection and the
Dilemma of Asymmetry 外部投射與不對稱困境
This
internal strategy for survival serves as a foundational capability for external
projection. Authoritarian governments have increasingly expanded their
information control to the international arena, engaging in malign foreign
influence operations. Data from V-Dem [2] on the "Government dissemination of
false information abroad" shows a clear trend: from 2000 to 2024, key
authoritarian powers, notably Russia and China, have significantly intensified
their external information operations, as indicated by their scores
increasingly shifting towards the lower end of the scale (0–1, indicating
frequent dissemination).
此內部生存策略,同時構成外部投射的核心能力。威權政府日益將資訊操控擴及國際場域,展開惡意外國影響行動。根據V-Dem[2]「政府對外散布虛假資訊」指標數據顯示,2000至2024年間,主要威權強權——尤其是俄羅斯與中國——其對外資訊作戰強度顯著攀升,得分持續向低分端(0–1,代表頻繁散布)靠攏。
This
transnational attack leverages two decisive structural factors that create an
asymmetric dilemma for democracies. First, authoritarian states possess
enormous, virtually unlimited resources and channels for content amplification,
whereas democratic countries are constrained by costs and democratic checks and
balances. For instance, in 2023, Google reported removing, on average, 200
YouTube channels per day linked to China. Second, cross-border information
operations are not subject to domestic democratic oversight, as the governments
behind the content are not accountable to the recipient country’s democratic
institutions. These two factors—unlimited budgets and the absence of democratic
constraints—make authoritarian campaigns uniquely powerful and difficult to
counter.
此跨國攻擊仰賴兩項決定性結構因素,構成民主國家的不對稱困境:
一、資源與放大管道的無限優勢:威權國家擁有龐大且近乎無上限的資源與內容擴散管道,民主國家則受成本與民主制衡限制。例如,2023年Google平均每日移除約200個與中國關聯的YouTube頻道。
二、缺乏民主監督:跨境資訊作戰不受國內民主機制約束,幕後政府無須向受影響國的民主制度負責。
這兩大因素——無限預算與缺乏民主制約——使威權國的認知作戰具獨特威力且難以反制。
Question 1 - Forms, actors and aims
of state disinformation 問題 1:國家假訊息的形式、行為者與目的
How can
disinformation in (social) media be structurally recognised and what structural
findings do you have on the systematic manipulation of information by
autocratic states?
如何結構性地辨識(社群)媒體中的假訊息?威權國家系統性操縱資訊的結構性發現有哪些?
Answer 答覆
1. A Shift in Analytical Focus 分析焦點的轉向
The focus
must not be limited to the content of messages but should rather be on how
information is disseminated and the structural role of autocratic regimes in
that process. The key to countering these operations is recognizing that the
foundation of democracy—freedom of speech, where every individual’s voice
carries equal weight—is undermined when massive financial resources, especially
from authoritarian states, create an unequal speech environment. Therefore,
research attention should shift from what is said to whether the methods used violate
democratic principles.
研究與反制焦點,不應侷限於訊息內容本身,而須轉向資訊如何被散布,以及威權政權在其中的結構性角色。民主基石——言論自由(每人一票的聲量平等)——正因威權國家挹注的巨額資金,製造出不平等的言論場域而遭侵蝕。因此,研究重心應從「說了什麼」轉向「用了什麼方法」——該方法是否違反民主原則,才是反制作戰的關鍵。
2. Disinformation Analysis
Framework: The 3I Model 假訊息分析框架:3I
模型
To address
the evolving threat, a focused analytical framework
is crucial for understanding the changing strategies of authoritarian states. I
use the "3I" framework to illustrate China's specific strategies for
spreading online disinformation. This model encompasses Direct Information
Manipulation, Indirect Investment, and Ideology-Driven approaches [3].
為因應威權國家策略的演進,需建立精準的分析框架。以下採用「3I 模型」剖析中國對外散布網路假訊息的具體戰法,涵蓋:Direct Information Manipulation(直接資訊操縱)、Indirect Investment(間接金流投資)、Ideology-Driven Approaches(意識形態驅動)。
• Direct Information Manipulation
(Information Flow) 直接資訊操縱(資訊流)
The first
strategy used by China is Direct Information Manipulation. This approach
involves three different levels of information manipulation, each varying in
scale and intensity. At the high level, the Propaganda Department and other
committees set key themes that are often observed through state media or
officials' Twitter (X) accounts. Low-level information manipulation occurs
through trolls and patriots who spread low-end fake news through social media
and bot networks. Finally, the most harmful form of direct manipulation is connected-level
information operations, which involve China-controlled content farms spreading
biased reports and conspiracy theories through organic channels [4].
中國首波戰法為直接資訊操縱,依規模與強度分為三層:
- 高層級:由中宣部等中央單位設定主軸議題,常見於官媒或官員X(前Twitter)帳號,形塑「官方敘事」。
- 低層級:動員網路水軍與小粉紅,透過社群媒體與機器人網路散布低階假新聞(如圖文謠言)。
- 連結層級(最具殺傷力):中國掌控的內容農場,以有機管道(看似本土帳號)散布偏頗報導與陰謀論,滲透在地論述。
China has been successful in utilizing its infrastructure to disseminate
content through the 50-cent party and its cyber police [5]. The Communist
Youth League is also involved in inciting disinformation campaigns through
cross-posting content farm articles on social media [6]. Additionally, China
has established content farm channels on YouTube that utilize AI voice
generators to read biased articles with traditional Chinese subtitles [7].
Understanding the relationship between the Propaganda Department, trolls, and
YouTube channels is essential for combating these attacks.
中國透過其龐大基礎設施,成功運用五毛黨與網警散布內容[5]。共青團亦參與煽動假訊息戰役,透過社群媒體跨貼內容農場文章[6]。此外,中國已在YouTube建立內容農場頻道,利用AI語音生成器朗讀偏頗文章,並配以繁體中文字幕[7]。理解中宣部、網路水軍與YouTube頻道間的關聯性,對於反制此類攻擊至關重要。
Studies and digital investigations illuminate the "connected-level"
operations of China-based content farms, specifically demonstrating how they
gather Taiwanese user data for microtargeting efforts to influence democratic
elections. Companies in China's Hebei province, for instance, have been found
managing Facebook pages and groups, often disguised with seemingly harmless
content like psychological quizzes or pornography, to compel users to share
their personal information and preferences [8]. This activity essentially
creates a database that enables the precise delivery of political
disinformation, such as pushing specific candidates, attacking opposition
parties, or promoting narratives favoring unification [9]. These methods
constitute direct and sustained interference in Taiwan's electoral process [10].
數位調查與研究揭露中國內容農場的「連結層級」運作,具體顯示其如何蒐集台灣用戶資料,用於精準影響民主選舉。例如,河北省企業被發現管理Facebook專頁與群組,常以看似無害內容(如心理測驗或色情誘餌)偽裝,誘使用戶分享個人資訊與偏好[8]。此舉實質建立資料庫,實現精準推送政治假訊息,如力捧特定候選人、攻擊對方政黨,或推廣有利統一的敘事[9]。這些手法構成對台灣選舉程序的直接且持續干預。
• Indirect Investment (Money Flow) 間接投資(金流)
China's
second strategy involves Indirect Investment, which entails providing financial
backing to groups that can generate and disseminate disinformation. This
approach includes investing in Taiwanese marketing companies, exerting economic
pressure on influencers, and enticing live streamers to join the propaganda
network via online donations. By separating the creation and distribution
processes in this strategy, China can invest more covertly and indirectly,
making it more challenging to detect their influence. This allows them to avoid
direct confrontations and, instead, manipulate public opinion by spreading
false information through trusted channels and influential figures.
The case of one of Taiwan's most-subscribed influencers serves as a potent
example: he revealed in 2019 that he firmly rejected a multi-million-NTD offer
to whitewash the CCP, but his stance gradually eroded over the next six years.
By 2025, the former high-profile anti-China advocate had become a united-front
model, frequently visiting China, identifying as Chinese, and publicly
supporting cross-strait unification [11]. Another well-known case is when
Chinese media directly used shell companies in Taiwan to invest in opinion
polls and widely spread fake polling results during elections in an attempt to
influence the outcome [12].
中國第二層戰略為間接投資,透過金流資助能產製與散布假訊息的團體,具體包括:
- 投資台灣行銷公司;
- 對意見領袖施加經濟壓力;
- 以線上抖內誘拉直播主加入宣傳網。
此策略將內容產製與散布管道分離,使中國能更隱蔽、迂迴地投入資金,偵測難度大幅提升。藉由可信管道與影響者散布虛假訊息,避免正面衝突,暗中操縱輿論。
台灣案例:
- 頂流網紅轉向:2019年台灣最高訂閱網紅公開拒絕數百萬新台幣邀約為中共洗白,但六年內立場逐步軟化,至2025年已成統戰樣板,頻繁訪中、自稱「中國人」、公開挺統[11]。
- 假民調干預選舉:中國媒體透過台灣空殼公司投資輿情民調,選舉期間廣發虛假民調結果,企圖影響投票行為。
• Ideology-Driven (Human Flow) 意識形態驅動(人流)
The third
strategy used by China is an Ideology-Driven approach, which involves
establishing an "ideology market" to attract individuals who already
have the incentive to criticize the government. In this approach, China
manipulates information through volunteers
who agree with anti-government messages and further spread disinformation in an
organic way. The United Front Work Department (UFWD) often shares videos or
photos that can be manipulated within private messenger chat groups, where
information is weaponized by citizens who voluntarily disseminate pro-China
and anti-democracy messages. This
volunteer-driven approach, leveraging interpersonal trust and organic
dissemination, represents the most challenging form of foreign interference to
detect and regulate.
中國第三層戰略為意識形態驅動,透過建構「意識形態市場」,吸引本就具反政府動機的個人。具體做法:
- 統戰部(UFWD)在私密通訊群組(如Line、Telegram)投放可被剪輯的影片或照片;
- 由志願者(認同反政府訊息者)自行將內容「武器化」,以有機方式散布親中、反民主敘事。
此志願者驅動模式仰賴人際信任與自然擴散,構成最難偵測與管制的外國干預形式。
台灣實務觀察:
2024選舉期間,「反綠聯盟」Line群組內部流傳統戰部提供的「賴清德家族貪腐」合成影片,由群組成員自發轉發至家族群,無金流痕跡、無境外IP,傳統監控完全失效。
此戰法將認知作戰內化為社會運動,挑戰民主國家的監管底線。
3. Case Study: Taiwan's 2018 Local
Elections 案例研究:台灣 2018 年地方選舉
The 2018
nationwide local elections in Taiwan provided significant evidence of these
covert transnational tactics. Information operations were executed through a
hybrid model combining offshore resource manipulation with domestic
dissemination. Early anomalies included the artificial inflation of a specific
candidate’s Google search traffic, with data routed through third-country nodes
(e.g., Russia, Malaysia) to mimic organic interest and drive search engine
optimization (SEO) for coordinated content farm articles [13]. This
manufactured visibility drove the proliferation of content from obscure,
overseas-operated platforms like Mission (密訊), a site run by Chinese
nationals in Malaysia with editorial input from pro-China actors [14].
At its peak, this single foreign platform became the most frequently shared
website by Taiwanese Facebook users in a single week in 2019, making Kuomintang
(KMT) supporters, who shared this content most frequently, the direct target of
foreign influence [15]. Crucially, the rapid diffusion relied on dedicated,
multi-channel structures beyond content production. These included: (a) the
direct purchase of existing Taiwanese Facebook fan pages to gain immediate
algorithmic access to domestic audiences; (b) the leveraging of proxy actors,
such as a Taiwanese businessman serving as a political advisor to Beijing's
Chinese People's Political Consultative Conference (CPPCC), to administer major
domestic political groups; and (c) the mobilization of troll armies to amplify
divisive topics, such as the sudden, disproportionate online attention given to
air pollution in 2018. Subsequent adaptations included expanding operations to
YouTube using AI-generated video clips and exploiting the closed, interpersonal
trust networks of messaging apps like LINE, where disinformation videos seeded
on YouTube were redistributed, often bypassing platform verification.
台灣 2018 年全國地方選舉,提供這些隱蔽跨國戰法的顯著證據。資訊作戰採用混合模式,結合境外資源操縱與國內散布。早期異常包括特定候選人的 Google 搜尋流量被人為放大,資料經第三國節點(如俄羅斯、馬來西亞)路由,模擬有機興趣,驅動搜尋引擎最佳化(SEO),協調內容農場文章[13]。此人為曝光率推動不明海外平台(如馬來西亞中國籍人士營運的「密訊 Mission」網站,受親中勢力編輯影響)的內容擴散[14]。高峰期,此單一境外平台竟成 2019 年單週台灣 Facebook 用戶分享最頻繁網站,使國民黨(KMT)支持者(最常分享者)成為外國影響的直接目標[15]。
關鍵在於,快速擴散仰賴專屬多管道結構,超越內容產製,包括: (a) 直接購買既有台灣 Facebook 粉絲頁,立即取得演算法通往國內受眾的管道; (b) 利用代理人,如擔任北京中國人民政治協商會議(CPPCC)政治顧問的台灣商人,管理主要國內政治團體; (c) 動員網路水軍放大分裂議題,例如 2018 年空氣汙染議題突然爆發的不對稱線上關注。
後續適應包括擴及 YouTube,使用AI
生成影片片段,並利用 LINE 等封閉人際信任網路,YouTube 上種下的假訊息影片常被重新散布,常規平台驗證難以攔截。
台灣實務洞察:
- 搜尋操縱實證:2018選前,DPP候選人搜尋排名被中國駭客「潛意識攻擊」壓低,經俄馬節點放大,類似俄羅斯式戰法[16]。
- 密訊網站:馬來西亞中國籍營運,親中編輯輸入,2018年散布「綠營賣台」圖卡,成KMT側翼推手[17]。
- 汙染議題:水軍放大「核廢水汙染台灣」謠言,選前一周分享量暴增300%,製造反綠情緒[18]。
- 反制啟示:2018後,台灣成立「大數據輿情小組」,監測境外IP與AI痕跡,Meta移除7000+中國關聯帳號[19]。
此案例彰顯3I 模型在民主選戰的實戰應用,威權滲透從境外資源(Direct)轉向國內代理(Indirect)與本土化意識(Ideology),挑戰台灣「認知防線」。
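The coordinated, multi-channel diffusion described in this case study leaves a measurable footprint: many distinct accounts pushing the same URL within a narrow time window. The sketch below is a minimal illustration of that detection idea only; the record format, function name, and thresholds are my own assumptions, not a description of any monitoring system actually used in Taiwan.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_coordinated_urls(shares, window_minutes=10, min_accounts=20):
    """Flag URLs shared by many distinct accounts within a short window.

    `shares` is an iterable of (account_id, url, timestamp) records;
    the thresholds are illustrative, not empirically calibrated.
    """
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((ts, account))
    flagged = []
    for url, events in by_url.items():
        events.sort()  # order share events chronologically
        window = timedelta(minutes=window_minutes)
        start = 0
        for end in range(len(events)):
            # shrink the window from the left until it spans <= window_minutes
            while events[end][0] - events[start][0] > window:
                start += 1
            accounts = {a for _, a in events[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged.append(url)
                break
    return flagged
```

Organic sharing tends to spread out over hours or days, so a burst of dozens of distinct accounts inside ten minutes is one of the simplest signals that separates coordinated amplification from genuine interest.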
4. Conclusion and Recommendations 結論與政策建議
The
fundamental threat of autocratic information operations is rooted not in
content veracity, but in the structural distortion created by asymmetric
amplification. These campaigns, which often leverage state resources to deploy
mechanisms like content farms and microtargeting, effectively mimic genuine
domestic discourse, substituting authentic democratic voices with foreign
influence. Social media functions as a "distorted mirror," where a
minuscule fraction of actors (e.g., 3% of accounts generate 33% of posts; 1% of
online communities are responsible for 74% of online conflict; 0.1% of users
drive 80% of disinformation) [16] generates an illusion of consensus
or extreme polarization, thereby degrading the civic space and reducing
moderate engagement [17]. Therefore, the strategic response must pivot
from content verification to the detection and neutralization of these methods
of mass amplification. Policy must focus on dismantling the structural
architecture of foreign interference and promoting media literacy that encourages
citizens to benchmark online narratives against offline, real-world complexity
and focus on primary and nuanced information.
威權資訊作戰的根本威脅,不在內容真偽,而在於結構性放大造成的不對稱扭曲。此類行動動用國家資源,部署內容農場與微針對等機制,成功偽裝成本土論述,以境外影響取代真實民主聲音。社群媒體形同「扭曲之鏡」:3% 帳號產出 33% 貼文;1% 社群製造 74% 線上衝突;0.1% 用戶驅動 80% 假訊息[16]——少數行為者製造共識假象或極端對立,侵蝕公民空間,壓縮理性參與[17]。
因此,戰略反制必須從「內容驗證」轉向「大規模放大機制的偵測與中和」。政策核心應聚焦:
- 拆解外國干預的結構性架構;
- 推廣媒體素養,鼓勵公民將線上敘事與離線現實的複雜度對照,優先信賴第一手、細緻資訊。
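The "distorted mirror" figures above (3% of accounts producing 33% of posts, and so on) are concentration statistics, and the same measurement is easy to reproduce on any post dataset. A minimal sketch, assuming only a list of per-account post counts; the function name and example numbers are mine, not drawn from the cited studies:

```python
def top_share(post_counts, top_fraction):
    """Fraction of all posts produced by the most active `top_fraction` of accounts.

    A result of ~0.33 at top_fraction=0.03 would reproduce the
    "3% of accounts generate 33% of posts" style of statistic.
    """
    counts = sorted(post_counts, reverse=True)     # most active accounts first
    k = max(1, round(len(counts) * top_fraction))  # size of the top slice
    total = sum(counts)
    return sum(counts[:k]) / total if total else 0.0
```

Tracking this share over time is one way to operationalize the detection pivot the text calls for: a sudden rise in concentration suggests a small set of accounts has begun to dominate a topic, regardless of whether any individual message is verifiably false.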
Question 2 - Political, legislative
and societal counter-strategies 問題 2:政治、立法與社會反制策略
How do
state structures in Taiwan on the one hand and civil society and the public on
the other react to systematic disinformation and manipulation of information by
autocratic states and what recommendations do you have based on this for German
politics and society in dealing with manipulated information and, for example,
fake profiles on social media?
台灣的國家體制一方面、公民社會與公眾另一方面,如何因應威權國家系統性的假訊息與資訊操縱?基於此經驗,您對德國政治與社會處理被操縱的資訊(例如社群媒體上的假帳號)有何建議?
Answer 答覆
Taiwan
has developed a pioneering and multi-layered model for countering
disinformation, which has demonstrated resilience and adaptive capacity. The
nation's approach to combating disinformation encompasses three distinct
strategies: legislation, government task force, and civil society.
台灣已發展出領先、多層次的反假訊息模式,展現韌性與適應力。此模式涵蓋三管齊下策略:立法框架、政府任務編組,以及公民社會動員。以下剖析台灣實務,後續提出對德國的借鏡建議(以2025年德國聯邦議會選舉為背景,面對俄羅斯與中國式認知作戰)。
1. Legal Frameworks and Regulatory
Adaptation 法律框架與監管調適
In 2019,
Taiwan passed the Anti-Infiltration Act, prohibiting political donations,
illegal funds, and espionage from foreign entities. However, the law has
limitations as its provisions focus mainly on the conduct of political parties
during elections, leaving a loophole for the online spread of disinformation.
The law has also been criticized as being a "punishment" kind of law,
which is difficult to enforce against covert information operations. Experts
have suggested implementing a registration act to require individuals and
organizations engaged in political activities to disclose their funding
sources, thus increasing transparency and accountability.
台灣於 2019 年 通過《反滲透法》,明訂禁止境外勢力進行政治獻金、非法資金及間諜行為。然而,該法存在明顯侷限:
- 規範重心落在選舉期間政黨行為,對線上假訊息散布留有漏洞;
- 屬「懲罰型立法」,對隱蔽資訊作戰執行難度高,證據門檻過高。
專家建言:
- 推動「政治活動登記法」,強制從事政治活動的個人與組織公開資金來源,提升透明度與課責性。
- 補強《反滲透法》線上灰色地帶,納入境外代理人登記(類似美國 FARA),涵蓋社群影響者與內容農場。
The
inherent difficulty lies in the threat’s multi-vector nature, encapsulated by
the 3I Model: Ideology Driven (Human Flow), Indirect Investment (Money Flow),
and Direct Information Manipulation (Information Flow). The existing
Anti-Infiltration Act primarily targets the human element, aiming to block
foreign directives and covert human influence. Crucially, this legislation is
ill-equipped to counter the rapid, anonymous spread of content and the
financial backbone of online disinformation campaigns. This gap fueled the
pursuit of targeted legislation: the Anti-Fraud Special Act, passed in 2024,
was developed to disrupt the financial flows (Money Flow). Conversely, the
proposed Digital Intermediary Services Act, drafted in 2022 and partly inspired
by the EU's Digital Services Act (DSA) to manage platform accountability and
Information Flow, failed to gain traction due to public concern over potential
restrictions on freedom of speech, illustrating the complexities of legislating
against the full spectrum of the 3I threat.
固有難點在於威脅的多向量性質,3I 模型即為其縮影:意識形態驅動(Human Flow)、間接金流投資(Money Flow)以及直接資訊操縱(Information Flow)。現行《反滲透法》主要針對人類元素,旨在阻斷境外指令與隱蔽人為影響。關鍵在於,此法對抗內容的快速匿名散布以及線上假訊息戰役的資金後盾明顯力有未逮。此缺口促使針對性立法的追求:2024 年通過的《詐欺犯罪危害防制條例》即為打斷金流(Money Flow)而設計。反觀 2022 年擬定的《數位中介服務法》——部分受歐盟《數位服務法》(DSA)啟發,用以管理平台課責與資訊流(Information
Flow)——卻因公眾對潛在言論自由限制的擔憂而未能獲得支持,凸顯對抗 3I 威脅全光譜的立法複雜性。
2. Governmental Structure:
Inter-Ministerial Collaboration 政府結構:跨部會協作
The
Government Task Force, established by the Executive Yuan in 2018, is a
resilient, multi-agency effort that coordinates monitoring and source
investigation. Crucially, it employs a "triple-criteria"
test—malicious intent, falsification, and public harm—to justify legal
intervention while safeguarding freedom of expression. This framework is
reinforced by three institutional guarantees: legislative approval, judicial
oversight, and state liability for restrictive actions. While effective at
debunking fake news, the task force faces difficulties in addressing complex
conspiracy theories.
行政院於 2018 年成立的政府任務編組,為一具韌性的多機關協作機制,統籌監測與源頭調查。核心採用「三要件檢驗」——惡意意圖、虛偽不實、公眾危害——作為法律介入依據,同時保障言論自由。此架構由三項制度性保障支撐:
- 立法授權;
- 司法監督;
- 國家限制行為之賠償責任。
雖在拆解假新聞成效顯著,但對複雜陰謀論仍感力有未逮。
For this
task force to be truly successful and scalable, a robust, comprehensive legal
framework is essential to legitimize and enforce its actions. Furthermore,
effectiveness hinges on speed: rapid response capability is the key defense
against the exponential spread of false or misleading information. The Task
Force must achieve near real-time reaction to information operations to
proactively halt viral dissemination before the messages can consolidate
influence and cause widespread public harm.
欲使任務編組真正成功且可擴展,需仰賴完整法律框架賦予正當性與執行力。更關鍵在於速度:
- 即時反應能力是對抗假訊息指數級擴散的首要防線。
- 任務編組須達成近即時回應資訊作戰,主動阻斷病毒式傳播,避免訊息固化影響、造成廣泛公眾危害。
3. Civil Society Resilience and the
Fact-Checking Network 公民社會韌性與事實查核網絡
Taiwan’s
civil society forms the critical third layer, operating with vital independence
from the government to maintain public trust and credibility. This necessary
distance from the state is paramount, as close alignment would undermine their
long-term effectiveness by allowing them to be viewed as government propaganda.
Organizations like Doublethink Lab and the AI Lab use artificial intelligence
to analyze patterns, identify sources, and document foreign information
operation tactics. Concurrently, groups such as the Taiwan FactCheck Center,
MyGoPen, Kuma Academy, and Cofacts actively promote public media literacy and
critical thinking. Their efforts include developing school curricula, offering
online courses, and establishing fact-checking platforms that use technology, like
bots in chat apps, to automatically debunk false information. This response has
led to the successful removal of content farm materials and the takedown of
state-operated accounts. However, these successes were contingent upon the
willingness of social media companies to cooperate in moderation efforts.
Sustained pressure and incentives are therefore essential to ensure platforms
continue to collaborate with government and civil society to mitigate harm
effectively.
台灣公民社會構成關鍵第三層防線,以獨立於政府之姿運作,確保公眾信任與公信力。此維持與國家的必要距離至關重要——若過度貼近,將被視為政府宣傳,長期效能崩解。
組織分工:
- Doublethink Lab 與 AI Lab:運用人工智慧分析模式、溯源、記錄境外資訊作戰戰術。
- 台灣事實查核中心(TFC)、MyGoPen、Kuma Academy、Cofacts:積極推廣媒體素養與批判思考,開發校園課程、線上教學,並建置科技查核平台(如聊天機器人即時拆謠)。
實戰成效:成功移除內容農場素材、下架國家操控帳號。
關鍵條件:成效仰賴社群平台配合內容審查。
持續策略:須施加持久壓力與誘因,確保平台與政府、公民社會協作減害,否則防線將因平台消極而瓦解。
https://www.bundestag.de/resource/blob/1126348/Stellungnahme-des-SV-Dr-Shen-original-.pdf
[1] Sergei Guriev and Daniel Treisman, "Informational Autocrats." Journal of Economic Perspectives 33, no. 4 (2019): 100–127.
[2] Michael Coppedge et al., "V-Dem Country-Year Dataset v15" (Varieties of Democracy (V-Dem) Project, 2025). https://doi.org/10.23696/vdemds25
[3] Puma Shen, "How China Initiates Information Operations Against Taiwan," Taiwan Strategists, no. 12 (2021): 20.
[4] Doublethink Lab, "Deafening whispers: China's information operation and Taiwan's 2020 election." (Doublethink Lab, 2020).
[5] Puma Shen, "The Chinese Cognitive Warfare Model: The 2020 Taiwan Election" [中國認知領域作戰模型初探:以 2020 臺灣選舉為例]. Prospect Quarterly 22, no. 1 (January 2021): 1–65.
[6] Ibid.
[7] Puma Shen, New Variants of COVID-19 Disinformation in Taiwan (Washington, D.C.: National Democratic Institute, 2022).
[8] Liberty Times Net, "臉書心理測驗藏危機?他追查幕後公司爆隱憂 [Facebook Personality Quizzes Hide a Crisis? Researcher Traces the Company Behind It and Reveals Hidden Concerns]." Liberty Times, November 8, 2019. https://news.ltn.com.tw/news/politics/breakingnews/2971518
[9] Austin Horng-en Wang, "色情內容可以用來統戰嗎?證據比你想像的還多 [Can Pornography Be Used for United Front Work? There's More Evidence Than You Think]," Voicettank, July 3, 2024. https://voicettank.org/20240703-2/
[10] Austin Horng-en Wang, "河北秦皇島公司控制香港帳號介入 2024 台灣總統大選 [Hebei Qinhuangdao Company Controlled Hong Kong Accounts to Intervene in Taiwan's 2024 Presidential Election]," Voicettank, June 4, 2024. https://voicettank.org/20240604-1/
[11] Liberty Times Net, "價碼曝光!館長遭「統戰」 嘆挺國民黨時「錢很好賺」 [Price Revealed! Guan Zhang Targeted by 'United Front,' Sighs That 'Money Was Easy to Make' When Supporting the KMT]." Liberty Times, December 4, 2019. https://news.ltn.com.tw/news/politics/breakingnews/2997885
[12] Zhuang Jing et al., "深度報告|中共外宣在台灣之一:台檢以《反滲透法》訴大選假民調當事人,一審因何失利?[In-Depth Report | CCP Propaganda in Taiwan, Part 1: Why Did the Taiwan Prosecutor Fail in the First Trial of the Fake Poll Suspect under the Anti-Infiltration Act?]," Radio Free Asia, December 12, 2024. https://www.rfa.org/cantonese/news/factcheck/china-taiwan-united-front-work-anti-infiltration-act-12122024120412.html
[13] Liberty Times Net, "誰最愛 Google 韓國瑜?去年台灣排 16 這國第一名 [Who Loves Googling Han Kuo-yu the Most? Taiwan Ranks 16th Last Year; This Country Takes First Place]." Liberty Times, December 4, 2019. https://news.ltn.com.tw/news/politics/breakingnews/2998826
[14] Jason Liu, "How do content farms operate in the Asia–Pacific?" In Influence for Hire: The Asia–Pacific's Online Shadow Economy (Canberra: Australian Strategic Policy Institute, 2020), 27–29.
[15] See note 4.
[16] Claire E. Robertson, "Inside the funhouse mirror factory: How social media distorts perceptions of norms," Current Opinion in Psychology (2024).
[17] Chris Bail, Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing (Princeton: Princeton University Press, 2021).