
2024-01-12

Risk 4: Ungoverned AI (Eurasia 20240108 / Taimocracy translation)

Gaps in AI governance will become evident in 2024 as regulatory efforts falter, tech companies remain largely unconstrained, and far more powerful AI models and tools spread beyond the control of governments.

Last year brought a wave of ambitious AI initiatives, policy announcements, and proposed new standards, with cooperation on unusual fronts. America's leading AI companies committed to voluntary standards at the White House. The United States, China, and most of the G20 signed up to the Bletchley Park Declaration on AI safety. The White House issued a groundbreaking AI executive order. The European Union finally agreed on its much-heralded AI Act. And the United Nations convened a high-level advisory group (of which Ian is a member).

But breakthroughs in artificial intelligence are moving much faster than governance efforts. Four factors will contribute to this AI governance gap in 2024:

1) Politics. As governance structures are created, policy or institutional disagreements will cause them to limit their ambitions. The lowest common denominator of what can be agreed politically by governments and what tech companies don't see as a constraint on their business models will fall short of what's necessary to address AI risks. This will result in a scattershot approach to testing foundational AI models, no agreement on how to deal with open source vs. closed source AI, and no requirements for assessing the impact of AI tools on populations before they are rolled out. A proposed Intergovernmental Panel on Climate Change (IPCC)-style institution for AI would be a useful first step toward a shared global scientific understanding of the technology and its social and political implications, but it will take time … and is not going to “fix” AI safety risks on its own any more than the IPCC has fixed climate change.

2) Inertia. Government attention is finite, and once AI is no longer "the current thing," most leaders will move on to other, more politically salient priorities such as wars (please see Top Risks #2 and #3) and the global economy (please see Top Risk #8). As a result, much of the necessary urgency and prioritization of AI governance initiatives will fall by the wayside, particularly when implementing them requires hard trade-offs for governments. Once attention drifts, it will take a major crisis to force the issue to the fore again.

3) Defection. The biggest stakeholders in AI have so far decided to cooperate on AI governance, with tech companies themselves committing to voluntary standards and guardrails. But as the technology advances and its enormous benefits become self-evident, the growing lure of geopolitical advantage and commercial interest will incentivize governments and companies to defect from the non-binding agreements and regimes they've joined to maximize their gains—or to not join in the first place.

4) Technological speed. AI will continue to improve quickly, with capabilities doubling roughly every six months—three times faster than Moore's law. GPT-5, the next generation of OpenAI's large language model, is set to come out this year—only to be rendered obsolete by the next as-yet-inconceivable breakthrough in a matter of months. As AI models become exponentially more capable, the technology itself is outpacing efforts to contain it in real time.

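The arithmetic behind "three times faster than Moore's law" can be made explicit. A minimal sketch, assuming the common 18-month reading of Moore's law (which the "three times" figure implies); the variable names and three-year horizon are illustrative, not from the report:

```python
# Back-of-the-envelope comparison of the two doubling periods cited above.
# Assumption (not from the report): "Moore's law" is read as a doubling
# roughly every 18 months, which is what "three times faster" implies.

moore_doubling_months = 18   # assumed reading of Moore's law
ai_doubling_months = 6       # doubling period cited in the text

print(f"Speedup: {moore_doubling_months / ai_doubling_months:.0f}x")

# Compounding over a three-year horizon (36 months) under each period:
for label, period_months in [("Moore's law", moore_doubling_months),
                             ("AI capability", ai_doubling_months)]:
    growth = 2 ** (36 / period_months)
    print(f"{label}: ~{growth:.0f}x over 3 years")
```

Read loosely, the point stands: a six-month doubling compounds to roughly 64x over three years, versus about 4x under an 18-month doubling, which is why containment efforts keep falling behind.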

Which brings us to the core challenge for AI governance: Responding to AI is less about regulating the technology (which is well beyond plausible containment) than understanding the business models driving its expansion and then constraining the incentives (capitalism, geopolitics, human ingenuity) that propel it in potentially dangerous directions. On this front, no near-term governance mechanisms will come close. The result is an AI Wild West resembling the largely ungoverned social media landscape, but with greater potential for harm.

Two risks stand out for 2024. The first is disinformation. In a year when four billion people head to the polls, generative AI will be used by domestic and foreign actors—notably Russia—to influence electoral campaigns, stoke division, undermine trust in democracy, and sow political chaos on an unprecedented scale. Sharply divided Western societies, where voters increasingly access information from social media echo chambers, will be particularly vulnerable to manipulation. A crisis in global democracy is today more likely to be precipitated by AI-created and algorithm-driven disinformation than any other factor.

Beyond elections, AI-generated disinformation will also be used to exacerbate ongoing geopolitical conflicts such as the wars in the Middle East and Ukraine (please see Top Risks #2 and #3). Kremlin propagandists recently used generative AI to spread fake stories about Ukrainian President Volodymyr Zelensky on TikTok, X, and other platforms, which were then cited by Republican lawmakers as reasons not to support further US aid to Ukraine. Last year also saw misinformation about Hamas and Israel spread like wildfire. While much of this has happened without AI, the technology is about to become a principal risk shaping snap policy decisions. Simulated pictures, audio, and video—amplified on social media by armies of AI-powered bots—will increasingly be used by combatants, their backers, and chaos agents to sway public opinion, discredit real evidence, and further inflame geopolitical tensions around the world.

The second imminent risk is proliferation. Whereas AI has thus far been dominated by the United States and China, in 2024 new geopolitical actors—both countries and companies—will be able to develop and acquire breakthrough artificial intelligence capabilities. These include state-backed large-language models and advanced applications for intelligence and national security use. Meanwhile, open-source AI will enhance the ability of rogue actors to develop and use new weapons and heighten the risk of accidents (even as it also enables unfathomable economic opportunities).

AI is a "gray rhino": a major risk that is plainly visible yet routinely overlooked. Its upside is easier to predict than its downside. It may or may not have a disruptive impact on markets or geopolitics this year, but sooner or later it will. The longer AI remains ungoverned, the higher the risk of a systemic crisis—and the harder it will be for governments to catch up.
