
The Practical Role of Legitimate Interest under the GDPR


AI, Operational Constraints, and the Limits of Regulatory Simplification

Among the six legal bases for processing personal data under Article 6 GDPR, legitimate interest was not originally designed to support large-scale or continuous data processing as a primary mechanism. Its role was limited and functional: to cover processing activities that cannot realistically rely on consent, do not fall under a legal obligation or the exercise of public authority, yet occur as part of ordinary organisational operations.

Such activities are common in large organisations. They include internal administration, cybersecurity measures, fraud prevention, system maintenance, and basic operational analytics. The GDPR deliberately retained legitimate interest for these situations. Article 6(1)(f) does not provide a general authorisation. It conditions its use on whether the processing is necessary for the purposes of a specific legitimate interest pursued by the controller or a third party.

For a considerable period, this allocation remained stable. What has changed is not the legal text, but the operating environment in which data processing takes place. The expansion of generative AI model training, large-scale data reuse, and cross-platform data flows has altered the structure of processing activities.

In environments characterised by high data volumes, fragmented sources, and indirect collection, consent as a primary legal basis has become increasingly difficult to apply in practice. Controllers often cannot identify data subjects individually at the point of collection, establish contact, or obtain specific and timely consent that reflects an actual understanding of the processing purpose, scope, and implications.

Even where consent is formally obtained, it frequently takes the form of standardised or retroactive declarations. In such cases, consent risks functioning as a procedural step rather than an effective mechanism of rights protection. This is not merely an implementation issue. It reflects a structural limitation of consent as a regulatory tool in certain processing contexts.

Within the existing legal framework, controllers therefore rely more frequently on other legal bases that remain operational. Legitimate interest has, in certain contexts, come to function as the primary legal basis supporting ongoing data processing. This outcome does not indicate a change in legislative intent. It reflects the limited set of workable options available under current conditions.

The limiting clause of Article 6(1)(f) has not been removed. The requirement that legitimate interests must not be overridden by the interests or fundamental rights and freedoms of the data subject remains fully applicable. What has changed is the degree of pressure placed on this balancing mechanism when legitimate interest is used to support large-scale, continuous, and technically complex processing activities.

The resulting question is concrete: when legitimate interest is used in this way, do the safeguards originally designed to constrain it under the GDPR continue to operate effectively?

The Limiting Structure of Article 6(1)(f)

From its wording alone, Article 6(1)(f) GDPR is not an open-ended authorisation. It establishes two cumulative conditions. Processing must be necessary, and the interest pursued must not override the interests or fundamental rights and freedoms of the data subject.

The English text expresses this through the pairing of “necessary” and “overridden”. The German text adopts the same structure through erforderlich and sofern nicht … überwiegen. This design makes clear that legitimate interest is not a default status. It is a conclusion that must be reached anew for each specific processing operation.

Regulatory and judicial practice at EU level has therefore developed a relatively consistent assessment sequence. First, the interest pursued must be concrete and lawful; abstract references to commercial benefit, efficiency, or technological progress are insufficient. Second, the processing must be genuinely necessary, meaning that no less intrusive but equally effective alternative is reasonably available. Third, a balancing assessment must be conducted, examining whether the processing aligns with the reasonable expectations of the data subject and what actual impact it has on their rights and freedoms.

The European Data Protection Board has explicitly identified the reasonable expectations of the data subject as a core factor in this assessment. These elements are not academic categories. They are practical decision tools used to determine whether a processing operation may continue. If any one of these elements fails, reliance on legitimate interest cannot be sustained.
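Because the three conditions are cumulative, the assessment behaves like a short-circuiting decision procedure: failure at any step ends reliance on Article 6(1)(f). The following sketch illustrates that structure only; the field names and the boolean framing are illustrative simplifications, not drawn from any regulatory text, and a real assessment involves weighted judgment rather than binary inputs.

```python
from dataclasses import dataclass


@dataclass
class ProcessingOperation:
    """Illustrative model of one processing operation assessed under Art. 6(1)(f).

    All attributes are hypothetical simplifications of the factors discussed
    in EU regulatory and judicial practice.
    """
    interest_is_concrete_and_lawful: bool    # step 1: not an abstract appeal to efficiency
    less_intrusive_alternative_exists: bool  # step 2: defeats the necessity requirement
    within_reasonable_expectations: bool     # step 3a: EDPB core balancing factor
    rights_remain_exercisable: bool          # step 3b: actual impact on rights and freedoms


def legitimate_interest_sustainable(op: ProcessingOperation) -> bool:
    """Cumulative test: if any element fails, reliance cannot be sustained."""
    if not op.interest_is_concrete_and_lawful:
        return False
    if op.less_intrusive_alternative_exists:
        return False  # processing is not genuinely 'necessary'
    # Balancing stage: both expectation alignment and effective rights matter
    return op.within_reasonable_expectations and op.rights_remain_exercisable
```

The point of the sketch is the ordering: necessity is only reached if the interest itself is concrete and lawful, and balancing is only reached if necessity holds.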

Divergent National Baselines

Effective Exercisability of Rights

Germany’s approach to legitimate interest is anchored in constitutional doctrine and supervisory practice rather than abstract balancing alone. The Federal Constitutional Court’s 1983 Census Decision articulated the concept of informational self-determination, linking personal data processing to the individual’s ability to foresee how data would be used, and thereby to the free development of personality.

This constitutional framing produces a concrete regulatory consequence. In German practice, the primary question is whether data subject rights remain practically exercisable. Where information, objection, or restriction rights cannot be meaningfully exercised due to system design, scale, or technical architecture, reliance on Article 6(1)(f) is called into question.

This approach is reflected in supervisory practice. German authorities apply a strict interpretation of necessity and reasonable expectations in cases involving large-scale tracking, cross-site integration, third-party data sharing, or technically complex processing. When traditional rights mechanisms cannot operate effectively, controllers are expected to introduce functionally equivalent or compensatory safeguards. Claims of technical infeasibility are not accepted as a sufficient endpoint.

Recent German case law on AI training does not depart from this logic. Courts have accepted reliance on legitimate interest only where controllers demonstrate concrete notice mechanisms, effective objection options, and verifiable risk controls. The standard is not lowered; it is operationalised.

Governability and Administrative Control

France starts from a different institutional baseline. The French data protection framework developed around administrative governance rather than constitutional adjudication. The establishment of CNIL following the SAFARI controversy in the 1970s reflects a regulatory choice: to manage and control data processing through institutional design rather than to prohibit data integration as such.

In current practice, this translates into a more operational reading of legitimate interest. French authorities explicitly recognise that consent may be impracticable in certain large-scale or indirect processing scenarios, including AI system development. Legitimate interest is therefore treated as an available legal basis, provided that it is accompanied by structured safeguards.

CNIL’s guidance focuses on documentation and process. Legitimate interest assessments, analysis of data sources and context, risk mitigation measures, and data protection impact assessments where appropriate form the core of this approach. The emphasis is not on rejecting legitimate interest at the outset, but on ensuring that processing remains governable, auditable, and subject to ongoing supervision.

The central question in the French approach is whether a processing activity can be placed within a framework that allows regulatory control and correction over time.

Implications for EU-Level Simplification

Recent discussions on regulatory simplification at EU level cannot avoid legitimate interest. The compliance burden experienced by organisations rarely arises from the existence of Article 6(1)(f) itself. It arises from uncertainty in application: which scenarios qualify, what level of safeguards is required, and whether supervisory authorities across Member States will reach consistent conclusions.

Germany and France illustrate why this issue cannot be resolved through technical drafting alone. Germany prioritises the effective exercisability of rights. France prioritises the governability of complex processing systems. These positions are not mutually exclusive, but they lead to different regulatory instincts.

As a result, EU-level initiatives addressing GDPR simplification face a structural coordination problem. The task is not to decide whether legitimate interest should exist. It already does, and it is already in use. The real question is how far it can extend without rendering data subject rights ineffective, and what institutional costs must accompany its use in large-scale and high-technology environments.

Legitimate interest is not a marginal clause. It is a functional instrument already embedded in EU data protection practice. What requires attention is not its survival, but the conditions under which it is used, and the consequences that follow from its use across different historical and institutional contexts.
