As Taiwan advances its proposed Artificial Intelligence Basic Act, the debate has largely focused on familiar themes: ethics, principles, and the need for “responsible AI”. These questions matter. But they are not the most consequential ones.
The more important issue is economic rather than moral. It concerns how law structures expectations, how uncertainty is distributed, and how delay becomes a rational response when judgement is deferred. Taiwan’s AI legislation offers a revealing case study in the political economy of regulatory uncertainty — and in the costs of asking markets to decide first.
At a symbolic level, the Basic Act marks a clear shift. Artificial intelligence is no longer treated merely as a technical input or an industrial productivity tool, but as an object of public governance. Its deployment is recognised as having implications for legal responsibility, administrative authority and decision-making frameworks.
Yet symbol and structure are not the same. The law’s institutional design matters far more than its declaratory ambition.
A familiar legislative pattern — and its hidden cost
Taiwan’s approach follows a well-established legislative pattern. Parliament articulates high-level values and principles, while the task of constructing operational rules is delegated to administrative agencies. This “principles first, rules later” model has appeared repeatedly over the past two decades in areas such as digital convergence, data protection and financial technology.
The rationale is clear. In fast-changing technological environments, detailed statutory rules risk premature obsolescence. Flexibility preserves regulatory space and allows agencies to adapt.
What this model rarely accounts for is its distributional effect. By postponing specificity, the law shifts the burden of interpretation outward. Predictability is delayed. Judgement is decentralised. The cost of uncertainty is absorbed not by the state, but by firms, investors and institutions forced to act without a clear sense of where responsibility will eventually fall.
In conventional regulatory domains, this postponement may be manageable. In artificial intelligence, it is not.
AI is deployed quickly, across sectors, and at scale. Capital commitments are front-loaded. Organisational decisions — about architecture, compliance design and risk allocation — must be made long before regulators issue guidance. In this context, uncertainty is not a temporary inconvenience. It is a structural input into decision-making.
When principles replace judgement
The core feature of Taiwan’s AI Basic Act is not what it prohibits or mandates, but what it withholds. Rather than embedding risk classifications or responsibility thresholds in law, the Act relies on broad principles and authorises regulators to elaborate later.
This design choice is often defended as pragmatic restraint. But restraint has consequences.
For regulated actors, the relevant question is no longer “am I high risk?” but “might I later be judged problematic?” The difference between the two is not semantic. It is economic.
If a firm understands that a given application has been classified as low risk, it can proceed with reasonable confidence. If, instead, the absence of classification reflects an unfinished regulatory process, proceeding carries the risk of retroactive redefinition. Rational actors respond accordingly.
In practice, this produces a predictable pattern. Investment is delayed. Deployment is segmented. Projects remain in pilot phases. Decisions are deferred until regulatory signals harden. Nothing is prohibited. Everything slows.
This is not regulatory paralysis. It is regulatory drag — subtle, cumulative, and largely invisible to legislative debate.
Europe’s lesson is not about strictness
Taiwan’s debate frequently looks to Europe for comparison, particularly to the EU’s Artificial Intelligence Act. Europe is often portrayed as having chosen a “stricter” path, with detailed risk classifications and heavy compliance obligations.
That framing misses the more relevant distinction.
The EU’s approach is not simply stricter; it is more determinate. Risk categories are embedded in law. Prohibited uses are specified. High-risk applications are linked to defined obligations. Responsibility is triggered by legal facts that are, at least in principle, identifiable in advance.
This design is costly. Compliance burdens are substantial. As generative AI disrupted existing classifications, firms increasingly invested in legal positioning rather than technical improvement. Managing labels began to overshadow managing risk.
But even in this imperfect system, actors operated within a shared framework. Disputes concerned how categories should be interpreted, not whether categories existed at all. Responsibility, while burdensome, was legible.
Europe paid for clarity with money. Taiwan is paying for flexibility with time.
Uncertainty as a priced input
From a policy-economic perspective, uncertainty is not neutral. It is priced.
When firms cannot determine whether regulatory silence implies acceptance or merely delay, they incorporate that ambiguity into their decision calculus. Projects are not abandoned outright. They are discounted.
Capital allocation shifts toward jurisdictions where responsibility thresholds are clearer. Internal compliance structures become conservative. Legal departments gain veto power not because regulation is strict, but because it is indeterminate.
This effect is difficult to quantify. It does not show up in enforcement statistics or compliance reports. It appears instead in delayed launches, prolonged pilot phases, and investment that quietly goes elsewhere.
Legislators rarely see it. But markets feel it.
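The discounting logic described above can be made concrete with a toy expected-value calculation. This is an illustrative sketch only: the payoffs, probabilities and penalty figures are hypothetical assumptions, not empirical estimates of any Taiwanese firm's exposure.

```python
# Toy model: how regulatory indeterminacy discounts a project's expected value.
# All numbers are illustrative assumptions, not empirical estimates.

def expected_value(payoff: float, cost: float,
                   p_reclassified: float, penalty: float) -> float:
    """Expected net value of deploying now, given some probability that the
    application is later reclassified as high risk and incurs a penalty."""
    base = payoff - cost
    return (1 - p_reclassified) * base + p_reclassified * (base - penalty)

# Determinate regime: the firm knows its classification in advance.
clear = expected_value(payoff=10.0, cost=4.0, p_reclassified=0.0, penalty=8.0)

# Indeterminate regime: regulatory silence might mean later reclassification.
ambiguous = expected_value(payoff=10.0, cost=4.0, p_reclassified=0.3, penalty=8.0)

print(clear)      # 6.0
print(ambiguous)  # 3.6 -- the same project, discounted by uncertainty alone
```

Nothing about the project itself has changed between the two cases; only the legibility of the rules has. That gap is the "regulatory drag" the article describes: a cost that appears in no enforcement statistic but shifts the deploy-or-wait calculus at the margin.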
Fragmented administration amplifies delay
Taiwan’s administrative structure compounds the problem. Governance remains organised around sectoral ministries, each responsible for its own domain. This arrangement struggles with general-purpose technologies such as AI.
The same model may generate marketing content, analyse medical data and assess credit risk. Each use falls under a different authority. In the absence of a clear horizontal coordination mechanism, each regulator responds to its own incentives.
Early interpretation carries risk. Clarifying too soon may be seen as overreach or premature commitment. Silence, by contrast, is safe. It preserves discretion and avoids inter-agency conflict.
Under such conditions, delay is not inertia. It is rational administrative behaviour.
The result is a governance system that functions reactively rather than predictively. Rules emerge through case-by-case friction. Clarity arrives only after conflict. By then, economic decisions have already been made — often conservatively.
Europe’s regulatory variation occurs primarily across member states. Taiwan’s variation occurs within a single system, revealing itself only when something breaks.
Accountability without triggers
Every governance framework ultimately confronts the same question: who bears responsibility when systems fail?
The EU answers this directly. It distinguishes between providers and deployers and ties each role to defined obligations. The burden is heavy, but the trigger conditions are knowable.
Taiwan’s AI Basic Act speaks instead in the language of accountability and transparency, without specifying how responsibility is activated within organisations. This omission is consequential.
Legal teams cannot estimate liability exposure. Engineers cannot identify design boundaries. Executives defer decisions because no one can confidently own the risk.
Projects stall not because they are forbidden, but because responsibility remains abstract.
“Trustworthy AI” becomes an aspiration rather than infrastructure. Trust depends on responsibility being intelligible, assignable and enforceable. Without this, a regulatory chill sets in — gradual, quiet, and corrosive.
Waiting as a policy outcome
The political economy of Taiwan’s AI regulation reveals a deeper pattern. When law offers principles without judgement, it does not remain neutral. It redistributes risk.
Uncertainty shifts from the state to the market. Responsibility moves from institutions to individuals. Delay becomes the rational response.
This is not unique to artificial intelligence. It is a recurring feature of Taiwan’s approach to high-uncertainty policy domains. The difference lies in speed. AI evolves faster than governance adapts.
Waiting, in this context, is not a temporary phase. It is a policy outcome.
What determines success
The success of Taiwan’s AI Basic Act will not be measured by the clarity of its values. Those are largely uncontroversial. It will be measured by how quickly principles are translated into judgement.
At a minimum, a basic law should answer three questions: when responsibility begins, who bears it, and how it is enforced. If these answers remain perpetually deferred, uncertainty itself becomes the governing mechanism.
Some degree of uncertainty is inevitable in emerging technologies. But indeterminacy is not costless. It shapes incentives. It slows investment. It privileges waiting over action.
The choice Taiwan faces is not between regulation and innovation, or between flexibility and rigidity. It is between making uncertainty visible — and allowing it to be silently priced in.
Conclusion: the economics of deferral
Taiwan’s AI Basic Act reflects a broader truth about regulation in fast-moving technological environments. Laws that postpone judgement do not avoid costs. They relocate them.
Europe chose clarity and paid in compliance. Taiwan has chosen flexibility and is paying in time.
Time, however, is not free. It is the most expensive input in technological competition — and the easiest to overlook in legislative debate.
Whether Taiwan’s AI law becomes a functional foundation for governance or another declaratory statute awaiting interpretation will depend not on its principles, but on its willingness to replace deferral with decision.
Until then, the market will do what it does best under uncertainty: wait, discount and move cautiously elsewhere.