
The price of waiting: what Taiwan’s AI law reveals about regulatory uncertainty



As Taiwan advances its proposed Artificial Intelligence Basic Act, the debate has largely focused on familiar themes: ethics, principles, and the need for “responsible AI”. These questions matter. But they are not the most consequential ones.

The more important issue is economic rather than moral. It concerns how law structures expectations, how uncertainty is distributed, and how delay becomes a rational response when judgement is deferred. Taiwan’s AI legislation offers a revealing case study in the political economy of regulatory uncertainty — and in the costs of asking markets to decide first.

At a symbolic level, the Basic Act marks a clear shift. Artificial intelligence is no longer treated merely as a technical input or an industrial productivity tool, but as an object of public governance. Its deployment is recognised as having implications for legal responsibility, administrative authority and decision-making frameworks.

Yet symbol and structure are not the same. The law’s institutional design matters far more than its declaratory ambition.

A familiar legislative pattern — and its hidden cost

Taiwan’s approach follows a well-established legislative pattern. The Legislative Yuan articulates high-level values and principles, while the task of constructing operational rules is delegated to administrative agencies. This “principles first, rules later” model has appeared repeatedly over the past two decades in areas such as digital convergence, data protection and financial technology.

The rationale is clear. In fast-changing technological environments, detailed statutory rules risk premature obsolescence. Flexibility preserves regulatory space and allows agencies to adapt.

What this model rarely accounts for is its distributional effect. By postponing specificity, the law shifts the burden of interpretation outward. Predictability is delayed. Judgement is decentralised. The cost of uncertainty is absorbed not by the state, but by firms, investors and institutions forced to act without a clear sense of where responsibility will eventually fall.

In conventional regulatory domains, this postponement may be manageable. In artificial intelligence, it is not.

AI is deployed quickly, across sectors, and at scale. Capital commitments are front-loaded. Organisational decisions — about architecture, compliance design and risk allocation — must be made long before regulators issue guidance. In this context, uncertainty is not a temporary inconvenience. It is a structural input into decision-making.

When principles replace judgement

The core feature of Taiwan’s AI Basic Act is not what it prohibits or mandates, but what it withholds. Rather than embedding risk classifications or responsibility thresholds in law, the Act relies on broad principles and authorises regulators to elaborate later.

This design choice is often defended as pragmatic restraint. But restraint has consequences.

For regulated actors, the relevant question is no longer “am I high risk?” but “might I later be judged problematic?” The difference between the two is not semantic. It is economic.

If a firm understands that a given application has been classified as low risk, it can proceed with reasonable confidence. If, instead, the absence of classification reflects an unfinished regulatory process, proceeding carries the risk of retroactive redefinition. Rational actors respond accordingly.

In practice, this produces a predictable pattern. Investment is delayed. Deployment is segmented. Projects remain in pilot phases. Decisions are deferred until regulatory signals harden. Nothing is prohibited. Everything slows.

This is not regulatory paralysis. It is regulatory drag — subtle, cumulative, and largely invisible to legislative debate.

Europe’s lesson is not about strictness

Taiwan’s debate frequently looks to Europe for comparison, particularly to the EU’s Artificial Intelligence Act. Europe is often portrayed as having chosen a “stricter” path, with detailed risk classifications and heavy compliance obligations.

That framing misses the more relevant distinction.

The EU’s approach is not simply stricter; it is more determinate. Risk categories are embedded in law. Prohibited uses are specified. High-risk applications are linked to defined obligations. Responsibility is triggered by legal facts that are, at least in principle, identifiable in advance.

This design is costly. Compliance burdens are substantial. As generative AI disrupted existing classifications, firms increasingly invested in legal positioning rather than technical improvement. Managing labels began to overshadow managing risk.

But even in this imperfect system, actors operated within a shared framework. Disputes concerned how categories should be interpreted, not whether categories existed at all. Responsibility, while burdensome, was legible.

Europe paid for clarity with money. Taiwan is paying for flexibility with time.

Uncertainty as a priced input

From a policy-economic perspective, uncertainty is not neutral. It is priced.

When firms cannot determine whether regulatory silence implies acceptance or merely delay, they incorporate that ambiguity into their decision calculus. Projects are not abandoned outright. They are discounted.

Capital allocation shifts toward jurisdictions where responsibility thresholds are clearer. Internal compliance structures become conservative. Legal departments gain veto power not because regulation is strict, but because it is indeterminate.

This effect is difficult to quantify. It does not show up in enforcement statistics or compliance reports. It appears instead in delayed launches, prolonged pilot phases, and investment that quietly goes elsewhere.

Legislators rarely see it. But markets feel it.
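
The structure of that pricing can be made concrete with a deliberately simple sketch. The short Python model below is illustrative only: the payoff, probability and cost figures are hypothetical assumptions, not estimates. It compares the expected value of the same project under a determinate regime, where the application is classified low risk in advance, and an indeterminate one, where regulatory silence may later become retroactive reclassification.

```python
# A toy model of how regulatory indeterminacy is "priced" into project value.
# All figures are hypothetical and purely illustrative; the structure of the
# calculation, not the numbers, is the point.

def expected_value(payoff, p_reclassified, retrofit_cost):
    """Expected project value given some probability that the application
    is later redefined as high risk, forcing costly compliance rework."""
    return (1 - p_reclassified) * payoff + p_reclassified * (payoff - retrofit_cost)

PAYOFF = 10.0        # payoff if the project proceeds unimpeded (arbitrary units)
RETROFIT_COST = 6.0  # rework cost after a retroactive reclassification

# Determinate regime: classified low risk in advance, so the chance of
# retroactive redefinition is negligible.
determinate = expected_value(PAYOFF, p_reclassified=0.02, retrofit_cost=RETROFIT_COST)

# Indeterminate regime: silence may mean acceptance or merely delay, so the
# firm must price in a much larger reclassification risk.
indeterminate = expected_value(PAYOFF, p_reclassified=0.35, retrofit_cost=RETROFIT_COST)

print(f"Expected value under determinate rules:   {determinate:.2f}")    # 9.88
print(f"Expected value under indeterminate rules: {indeterminate:.2f}")  # 7.90
print(f"Implicit uncertainty discount:             {determinate - indeterminate:.2f}")
```

Nothing is prohibited in either regime; the second simply carries a lower expected value. The gap between the two figures is the discount described above, absent from enforcement statistics but fully present in a firm’s decision calculus.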

Fragmented administration amplifies delay

Taiwan’s administrative structure compounds the problem. Governance remains organised around sectoral ministries, each responsible for its own domain. This arrangement struggles with general-purpose technologies such as AI.

The same model may generate marketing content, analyse medical data and assess credit risk. Each use falls under a different authority, and in the absence of a clear horizontal coordination mechanism, each regulator answers to its own incentives.

Early interpretation carries risk. Clarifying too soon may be seen as overreach or premature commitment. Silence, by contrast, is safe. It preserves discretion and avoids inter-agency conflict.

Under such conditions, delay is not inertia. It is rational administrative behaviour.

The result is a governance system that functions reactively rather than predictively. Rules emerge through case-by-case friction. Clarity arrives only after conflict. By then, economic decisions have already been made — often conservatively.

Europe’s regulatory variation occurs primarily across member states. Taiwan’s variation occurs within a single system, revealing itself only when something breaks.

Accountability without triggers

Every governance framework ultimately confronts the same question: who bears responsibility when systems fail?

The EU answers this directly. It distinguishes between providers and deployers and ties each role to defined obligations. The burden is heavy, but the trigger conditions are knowable.

Taiwan’s AI Basic Act speaks instead in the language of accountability and transparency, without specifying how responsibility is activated within organisations. This omission is consequential.

Legal teams cannot estimate liability exposure. Engineers cannot identify design boundaries. Executives defer decisions because no one can confidently own the risk.

Projects stall not because they are forbidden, but because responsibility remains abstract.

“Trustworthy AI” becomes an aspiration rather than infrastructure. Trust depends on responsibility being intelligible, assignable and enforceable. Without this, a regulatory chill sets in — gradual, quiet, and corrosive.

Waiting as a policy outcome

The political economy of Taiwan’s AI regulation reveals a deeper pattern. When law offers principles without judgement, it does not remain neutral. It redistributes risk.

Uncertainty shifts from the state to the market. Responsibility moves from institutions to individuals. Delay becomes the rational response.

This is not unique to artificial intelligence. It is a recurring feature of Taiwan’s approach to high-uncertainty policy domains. The difference lies in speed. AI evolves faster than governance adapts.

Waiting, in this context, is not a temporary phase. It is a policy outcome.

What determines success

The success of Taiwan’s AI Basic Act will not be measured by the clarity of its values. Those are largely uncontroversial. It will be measured by how quickly principles are translated into judgement.

At a minimum, a basic law should answer three questions: when responsibility begins, who bears it, and how it is enforced. If these answers remain perpetually deferred, uncertainty itself becomes the governing mechanism.

Some degree of uncertainty is inevitable in emerging technologies. But indeterminacy is not costless. It shapes incentives. It slows investment. It privileges waiting over action.
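
How indeterminacy privileges waiting can be shown with one more deliberately crude sketch, again with purely hypothetical numbers. It compares proceeding immediately against waiting for the classification to harden, where waiting costs forgone revenue but buys the option to walk away from an unfavourable outcome.

```python
# A toy "act now or wait" comparison under regulatory indeterminacy.
# All numbers are hypothetical; only the structure of the argument matters.

def act_now(payoff, p_reclass, downside):
    """Proceed immediately, accepting the risk of retroactive redefinition."""
    return (1 - p_reclass) * payoff + p_reclass * downside

def wait_for_clarity(payoff, p_reclass, delay_cost):
    """Wait until the classification is revealed, pay a cost in forgone
    revenue, and proceed only if the outcome is favourable (else walk away)."""
    return (1 - p_reclass) * (payoff - delay_cost)

P = 0.35  # perceived probability of retroactive reclassification

# Mild downside: rework is costly but the project stays profitable.
print(f"act {act_now(10.0, P, downside=4.0):.2f} vs "
      f"wait {wait_for_clarity(10.0, P, delay_cost=1.0):.2f}")   # act 7.90 vs wait 5.85

# Severe downside: redefinition would turn the project loss-making.
print(f"act {act_now(10.0, P, downside=-4.0):.2f} vs "
      f"wait {wait_for_clarity(10.0, P, delay_cost=1.0):.2f}")   # act 5.10 vs wait 5.85
```

Once retroactive redefinition can make a project loss-making rather than merely less profitable, waiting becomes the rational choice even at a genuine cost in time. That is the mechanism by which indeterminacy, rather than prohibition, slows investment.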

The choice Taiwan faces is not between regulation and innovation, or between flexibility and rigidity. It is between making uncertainty visible — and allowing it to be silently priced in.

Conclusion: the economics of deferral

Taiwan’s AI Basic Act reflects a broader truth about regulation in fast-moving technological environments. Laws that postpone judgement do not avoid costs. They relocate them.

Europe chose clarity and paid in compliance. Taiwan has chosen flexibility and is paying in time.

Time, however, is not free. It is the most expensive input in technological competition — and the easiest to overlook in legislative debate.

Whether Taiwan’s AI law becomes a functional foundation for governance or another declaratory statute awaiting interpretation will depend not on its principles, but on its willingness to replace deferral with decision.

Until then, the market will do what it does best under uncertainty: wait, discount and move cautiously elsewhere.
