Human Rights in Cyberspace and Ethics in Social Networks
5 Credits
Total Hours: 120
Including assessment: 125 hours
Undergraduate
Elective
Course Description
The study of human rights in cyberspace contributes to deepening knowledge about contemporary challenges of the digital age, the peculiarities of personal data protection, freedom of expression in the online environment, the ethical dilemmas of social platforms, and international cooperation in the field of digital human rights protection. The module covers theoretical foundations, sources of regulation, the subject-object composition of digital legal relations, fundamental digital rights, the ethics of social networks, rights violations and protection mechanisms, and international cooperation in this field. Instruction is conducted in Uzbek, Russian, and English.
Syllabus Details (Topics & Hours)
Topic 1: Theoretical foundations of human rights in cyberspace (Lecture: 2 hours, Seminar: 2 hours, Independent study: 5 hours, Total: 9 hours)
Lecture text
Section 1: The Evolution of the Normative Framework
The theoretical foundation of human rights in cyberspace is rooted in the principle of normative equivalence, which asserts that the rights possessed by individuals in the physical world must also be protected in the online environment. This concept was formally solidified by the United Nations Human Rights Council in Resolution 20/8, which explicitly affirmed that the same rights that people have offline must also be protected online, in particular, freedom of expression (United Nations Human Rights Council [UNHRC], 2012). This resolution serves as the primary theoretical anchor for digital human rights, bridging the gap between traditional international human rights law and the rapidly evolving digital landscape. It rejects the notion that cyberspace is a "lawless" zone or a separate jurisdiction where established human rights conventions do not apply, thereby extending the applicability of the Universal Declaration of Human Rights (UDHR) to digital interactions.
Historically, the debate regarding the applicability of human rights to the internet centered on the nature of the medium itself. Early theorists and digital activists often viewed the internet as a distinct space, separate from the sovereign control of nation-states, a perspective often referred to as cyber-libertarianism. However, legal scholars and international bodies have progressively moved towards a model of technological neutrality. This legal theory posits that laws should apply to the actions and consequences of behavior regardless of the technological medium used to execute them (Reed, 2012). Consequently, the theoretical framework does not require the invention of "new" human rights for the digital age, but rather the reinterpretation and application of existing treaty obligations, such as the International Covenant on Civil and Political Rights (ICCPR), to new technological contexts.
The scope of this normative framework is primarily defined by Article 19 of the UDHR and Article 19 of the ICCPR, which protect freedom of opinion and expression. The text of these articles is notably forward-looking, protecting the freedom to seek, receive, and impart information and ideas through any media and regardless of frontiers (United Nations General Assembly, 1948). Theoretical discussions emphasize the phrase "through any media," arguing that the drafters of these conventions established a medium-independent right that automatically encompasses the internet, social media, and future digital communication technologies. This textual basis provides the legitimacy required to challenge state censorship and internet shutdowns as violations of established international law rather than merely sovereign domestic policy decisions.
Furthermore, the theoretical expansion of these rights includes the recognition of the internet as a critical enabler of other fundamental rights. The internet is no longer viewed solely as a communication tool but as a catalyst for the realization of economic, social, and cultural rights. Access to the internet facilitates the right to education, the right to assembly, and the right to participate in cultural life. Scholars argue that restricting digital access consequently infringes upon a broader spectrum of human rights, suggesting a composite theoretical model where digital rights are intersectional (La Rue, 2011). This intersectionality means that a violation of digital infrastructure often triggers a cascade of violations across the human rights spectrum.
The role of the state in this framework has shifted from a negative obligation to a positive obligation. Traditionally, civil liberties were viewed as negative rights, requiring the state to refrain from interference. However, in the digital context, the theory of positive obligation suggests that states must actively ensure the availability, accessibility, and affordability of internet infrastructure to guarantee that rights are not merely theoretical but practical. This aligns with the United Nations Sustainable Development Goals, specifically target 9.c, which aims to significantly increase access to information and communications technology (United Nations, 2015).
Additionally, the theoretical foundations must address the challenge of jurisdiction and territoriality. Human rights treaties are generally binding on states within their territories and subject to their jurisdiction. The cross-border nature of the internet complicates this, as data flows and digital interactions often transcend physical borders. Legal theorists are currently grappling with the extraterritorial application of human rights obligations, debating whether a state's human rights duties extend to individuals outside its borders when the state exercises effective control over digital infrastructure or data affecting those individuals (Milanovic, 2011).
Another critical aspect of the evolving framework is the recognition of anonymity and encryption as vital enablers of human rights. While not explicitly mentioned in early human rights treaties, contemporary theory posits that the right to privacy in the digital age requires the protection of secure communication channels. The Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression explicitly links encryption and anonymity to the enjoyment of freedom of opinion and expression (Kaye, 2015). This establishes a theoretical basis where technical security measures are not just tools but protected extensions of the right to hold opinions without interference.
The integration of private sector responsibility constitutes a significant evolution in the theoretical landscape. Traditional human rights law binds states, not corporations. However, because the digital public square is largely owned and operated by private entities, the theoretical framework has expanded to include the "Ruggie Principles" or the UN Guiding Principles on Business and Human Rights. These principles establish that while states have a duty to protect human rights, business enterprises have a distinct responsibility to respect human rights, meaning they must act with due diligence to avoid infringing on the rights of others (United Nations Human Rights Office of the High Commissioner [OHCHR], 2011).
Moreover, the concept of "digital dignity" is emerging as a theoretical counterbalance to the commodification of personal data. This perspective draws upon the Kantian notion of dignity, arguing that individuals should not be treated merely as means to an end (data points for advertising) but as ends in themselves. This underpins the theoretical justification for data protection regimes like the GDPR, which view control over one's digital persona as a fundamental aspect of human dignity and autonomy in the modern world (Floridi, 2016).
The framework also contends with the "digital divide" as a human rights issue. If the internet is an essential enabler of rights, then lack of access constitutes a form of inequality that the international community must address. Theoretical discourse has moved beyond simple connectivity to include "meaningful connectivity," which encompasses factors like speed, device quality, and digital literacy. This nuanced view asserts that formal access without the skills or tools to utilize it effectively does not satisfy the human rights requirement of non-discrimination.
Finally, the evolution of this framework is characterized by its dynamic nature. As technologies such as Artificial Intelligence (AI) and the Internet of Things (IoT) proliferate, the theoretical boundaries of human rights in cyberspace are continuously tested. The current academic and legal consensus maintains that the fundamental principles of dignity, non-discrimination, and due process must remain the bedrock of any new digital regulations. The adaptability of the existing human rights framework is its greatest theoretical strength, allowing it to encompass recognized rights in unrecognized environments.
Section 2: Cyber-Libertarianism vs. Cyber-Paternalism and Sovereignty
The theoretical landscape of cyberspace governance is defined by the tension between cyber-libertarianism and cyber-paternalism (or digital sovereignty). Cyber-libertarianism, prevalent in the early discussions of the internet, is best exemplified by John Perry Barlow’s "Declaration of the Independence of Cyberspace," which argued that the internet was a distinct space immune to the sovereignty of physical governments (Barlow, 1996). This theory posits that the decentralized architecture of the network inherently resists centralized control and that the internet should be a self-regulating marketplace of ideas. Proponents of this view argue that state intervention inevitably stifles innovation and infringes upon the unique liberties afforded by the digital medium.
In contrast, the theory of cyber-paternalism or cyber-regulation argues that the internet is not a separate realm but a tool that has real-world consequences, thereby necessitating state regulation to protect citizens and maintain order. This perspective is famously articulated by Lawrence Lessig in his work "Code and Other Laws of Cyberspace," where he introduced the concept that "code is law." Lessig argued that the architecture of the internet—its software and protocols—regulates behavior just as effectively as legal statutes (Lessig, 1999). Therefore, if the state does not regulate the code, the architects of that code (private corporations) will become the de facto sovereigns, potentially imposing values that conflict with democratic norms.
The shift toward digital sovereignty represents a reassertion of the Westphalian state model in the digital domain. Nations increasingly view the digital infrastructure and data within their borders as national assets subject to domestic law. This theoretical stance challenges the universalist vision of a borderless internet, leading to phenomena such as the "splinternet," where the internet becomes fragmented along national lines. This approach is often justified by states under the pretext of national security, cultural preservation, and the protection of citizens from harmful content, asserting that the state has the primary duty to define the boundaries of acceptable speech and behavior within its digital territory (Mueller, 2017).
A critical component of this theoretical debate involves the legitimacy of control. Cyber-libertarians argue that legitimacy is derived from the consensus of the networked community and the rough consensus of engineering task forces, rather than traditional democratic institutions. Conversely, proponents of digital sovereignty argue that only the state possesses the democratic legitimacy to balance competing rights, such as free speech versus privacy, or security versus liberty. They contend that leaving these decisions to private companies or technical bodies lacks accountability and due process.
The concept of "network sovereignty" further complicates this dichotomy. It suggests that sovereignty is not just about territorial control but about controlling the flow of information. States employing this theory invest in technologies like the "Great Firewall" or national intranets, theoretically grounding their actions in the right to self-determination. They argue that the free flow of information is often a cover for cultural imperialism or information warfare by dominant geopolitical powers, thus framing internet control as a matter of national defense and anti-colonial resistance (Goldsmith & Wu, 2006).
However, human rights theorists argue that extreme versions of digital sovereignty inevitably lead to rights violations. When the state asserts absolute control over the digital architecture to enforce local laws, it often bypasses international human rights standards regarding necessity and proportionality. The theory of "internet freedom" has thus emerged as a counter-narrative, promoted by coalitions of democratic states and civil society. This theory frames the open, interoperable, and secure internet as a prerequisite for the enjoyment of human rights, positioning restrictive sovereign controls as aberrations of the technological promise.
The debate also extends to the governance of critical internet resources, such as the Domain Name System (DNS). The transition of the Internet Assigned Numbers Authority (IANA) functions from US oversight to a global multi-stakeholder community represented a victory for a hybrid theoretical model. This "multi-stakeholderism" rejects both pure state control and pure anarchy, proposing that governance should involve states, the private sector, civil society, and the technical community on an equal footing. This model attempts to reconcile the need for order with the preservation of the internet's open architecture (DeNardis, 2014).
Furthermore, the rise of platform governance introduces a theory of "private sovereignty." Large technology companies define the rules of speech and association for billions of users through Terms of Service and Community Guidelines. These private regulations often conflict with national laws and international human rights standards. Theoretical analysis here focuses on the "quasi-public" nature of these platforms, debating whether they should be treated as state actors subject to the First Amendment (in the US context) or international human rights obligations due to their functional dominance over public discourse (Klonick, 2018).
The tension between security and liberty is central to these governance theories. Cyber-paternalism prioritizes security, arguing that the state must have backdoor access to encrypted communications to prevent crime and terrorism. This "security-first" approach posits that human rights are contingent upon a secure environment. In opposition, rights-based theories argue that weakening encryption for security purposes disproportionately harms the privacy and security of law-abiding citizens, thereby failing the proportionality test required by international law (UNHRC, 2015).
Another theoretical dimension is the economic implication of sovereignty. Data localization laws, which require data to be stored physically within a country, are often justified on privacy grounds but also serve economic protectionist goals. This merges human rights rhetoric with economic theory, complicating the analysis of state motives. Theorists must distinguish between genuine efforts to protect citizen data from foreign surveillance and efforts to incubate local tech industries through regulatory barriers.
Ultimately, the theoretical foundations of cyberspace governance are in a state of flux. The binary of "open" versus "closed" is being replaced by a spectrum of regulation. The challenge for human rights scholars is to define the threshold where regulation moves from legitimate governance (protecting users from fraud, abuse, and violence) to illegitimate repression (silencing dissent and monitoring populations).
The consensus in human rights theory is drifting toward a model of "democratic constitutionalism" for the internet. This model accepts the necessity of regulation but demands that such regulation be consistent with constitutional principles of human rights, transparency, and accountability, regardless of whether the regulator is a state or a corporation.
Section 3: Privacy, Surveillance, and the Digital Panopticon
The theoretical conceptualization of privacy in cyberspace has undergone a radical transformation, moving from the traditional "right to be left alone" to the modern concept of "informational self-determination." This shift acknowledges that in a digital society, total isolation is impossible; therefore, privacy is re-theorized as the right of the individual to control the flow of their own personal information. This concept is foundational to modern data protection laws and is rooted in the recognition that personal data constitutes a digital projection of the self, worthy of protection equivalent to the physical body (Westin, 1967).
A dominant theoretical metaphor in the study of digital surveillance is the "Panopticon," originally conceptualized by Jeremy Bentham and later expanded by Michel Foucault. In the digital panopticon, the few (states or corporations) observe the many (users) without the observed knowing precisely when they are being watched. This asymmetry of visibility creates a chilling effect, where individuals modify their behavior and speech towards conformity due to the mere possibility of surveillance. Theoretical discourse argues that this internalized discipline fundamentally undermines the freedom of thought and expression, even if no direct punishment is administered (Foucault, 1977).
The rise of "surveillance capitalism" represents a critical theoretical development, describing a new economic order that claims human experience as free raw material for translation into behavioral data. Shoshana Zuboff argues that this system fundamentally threatens human autonomy. In this theoretical model, privacy violations are not accidental byproducts of digital services but the core mechanism of value extraction. This challenges the traditional human rights framework which assumes that violations are aberrations; here, the violation is the business model itself, requiring a rethinking of how rights can be protected against systemic economic imperatives (Zuboff, 2019).
The concept of "data retention" challenges the presumption of innocence. Laws requiring the indiscriminate collection and storage of telecommunications data treat all citizens as potential suspects. The Court of Justice of the European Union (CJEU) has theoretically and legally challenged this in the "Digital Rights Ireland" case, establishing that mass surveillance without specific suspicion constitutes a disproportionate interference with the fundamental rights to privacy and data protection. This establishes a theoretical limit on state power: the pursuit of security cannot justify the total elimination of anonymity (CJEU, 2014).
Furthermore, the "mosaic theory" of privacy has gained prominence in legal analysis. This theory posits that while individual data points (metadata) might be innocuous in isolation, when aggregated over time, they reveal an intimate portrait of an individual's life, including political views, health status, and associations. This theory refutes the argument often made by intelligence agencies that the collection of metadata is harmless. It asserts that in the digital age, the distinction between content and metadata is theoretically collapsing in terms of the privacy intrusion it represents.
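To make the aggregation step concrete, the following minimal sketch (written in Python, with entirely hypothetical records and place names) shows how a handful of innocuous metadata points, each consisting only of a timestamp and a coarse location, can be combined to suggest health status and political association.

```python
# Illustrative sketch of the "mosaic theory": individually innocuous metadata
# records (no message content, only time and coarse location) are aggregated
# to reveal a recurring pattern of life. All data below is hypothetical.
from collections import Counter
from datetime import datetime

# Each record: (timestamp of a phone connecting to a cell tower, tower area)
metadata = [
    ("2024-03-05T08:10:00", "oncology_clinic_district"),
    ("2024-03-12T08:05:00", "oncology_clinic_district"),
    ("2024-03-19T08:12:00", "oncology_clinic_district"),
    ("2024-03-07T19:30:00", "opposition_party_office_district"),
    ("2024-03-14T19:28:00", "opposition_party_office_district"),
]

# Count visits per (weekday, area) pair -- a trivial aggregation step.
pattern = Counter(
    (datetime.fromisoformat(ts).strftime("%A"), area) for ts, area in metadata
)

# Recurring (weekday, area) pairs suggest health status and political
# association, which no single record would have revealed on its own.
for (weekday, area), count in pattern.items():
    if count >= 2:
        print(f"Recurring pattern: {area} on {weekday}s ({count} visits)")
```

Even this toy aggregation infers weekly clinic visits and regular attendance at a political office from five timestamps, which is precisely why the content/metadata distinction is said to be collapsing.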
The right to anonymity is increasingly theorized not as a shield for criminality, but as a prerequisite for the exercise of other rights. In environments where political dissent is criminalized, anonymity is the only layer of protection for activists and journalists. The theoretical framework put forth by the UN Special Rapporteur on Freedom of Expression classifies anonymity tools as necessary enablers of rights, shifting the burden of proof onto states to justify any restrictions on these technologies (Kaye, 2015).
Biometric surveillance and facial recognition introduce the theory of "the end of public anonymity." Historically, individuals possessed a practical obscurity in public spaces; one could walk down a street without being identified and tracked. Digital technologies erode this practical protection. Human rights theorists argue that the permanent, automated identification of individuals in public spaces creates a society of total transparency for the citizen and opacity for the state, inverting the democratic requirement of transparent government and private citizens.
The concept of "privacy by design" moves privacy theory from the legal to the technical realm. It argues that legal compliance is insufficient if the underlying technology is inherently intrusive. Instead, privacy principles must be embedded into the architecture of systems from the initial design phase. This theoretical stance aligns with Lessig’s "code is law," suggesting that the most effective way to protect the right to privacy is to build it into the infrastructure of the internet itself (Cavoukian, 2009).
Cross-border data flows introduce the problem of "legal interoperability." When data moves from a jurisdiction with strong privacy protections (like the EU) to one with weaker protections, the rights of the individual are jeopardized. The "Schrems II" judgment by the CJEU highlighted the theoretical conflict between US surveillance laws (like FISA 702) and EU fundamental rights. This underscores a geopolitical theory of privacy where conflicting national security priorities create a fragmented landscape of rights protection (CJEU, 2020).
The "Right to be Forgotten" (or Right to Erasure) introduces a theoretical conflict between privacy and free expression/historical memory. It posits that individuals should have the right to determine the lifespan of their digital footprint and not be permanently stigmatized by past actions. Critics argue this allows for the sanitization of history. The theoretical balance is found in the "public interest" test, where the right to privacy prevails unless there is a compelling public interest in the accessibility of the information (Google Spain v. AEPD, 2014).
Furthermore, the commodification of privacy leads to the "privacy paradox," where users claim to value privacy but freely trade their data for convenience. Theories explaining this focus on "rational apathy" and information asymmetry—users cannot possibly understand the complex terms of service or the extent of data tracking, so their consent is not truly informed. This undermines the theoretical validity of "consent" as the primary legal basis for data processing in consumer relationships.
Finally, the theoretical foundation of digital privacy is increasingly viewed through the lens of collective rights rather than just individual rights. Massive data breaches and algorithmic manipulation affect groups and societies as a whole (e.g., election interference). Therefore, contemporary theory suggests that privacy frameworks must evolve to protect the integrity of social and democratic systems, not just the secrets of individuals.
Section 4: Intermediaries, Gatekeepers, and Private Power
The theoretical role of internet intermediaries—search engines, social media platforms, and ISPs—is central to understanding human rights in cyberspace. These entities act as "gatekeepers" of information, possessing the power to facilitate or block the flow of content. Historically, the legal theory governing intermediaries, particularly in the US (Section 230 of the CDA) and the EU (e-Commerce Directive), was one of "conditional immunity." This theory posits that intermediaries should not be held liable for third-party content, as imposing liability would compel them to actively censor speech to avoid legal risk, thereby chilling freedom of expression (Balkin, 2018).
However, the immense growth of platforms has led to a critique of this laissez-faire approach. The "platform theory" recognizes that these companies are not merely passive conduits but active curators of content through algorithmic recommendation systems. By prioritizing high-engagement content (which is often polarizing or sensational), algorithms shape the public sphere. This has led to the theoretical argument that platforms have a "duty of care" toward their users and society, requiring them to mitigate systemic risks such as hate speech and disinformation without engaging in censorship (Gillespie, 2018).
The power dynamic between platforms and users is often analyzed through the lens of "contract theory." Users agree to Terms of Service (ToS) that grant the platform broad discretion to remove content or suspend accounts. Human rights theorists argue that these "contracts of adhesion" are fundamentally unbalanced. Because major platforms function as essential public utilities for participation in modern life, the theory of "horizontal effect" (Drittwirkung) is increasingly applied. This legal theory suggests that fundamental rights, which traditionally bind the state, should also have some application in relations between private parties when one party holds dominant power (Tushnet, 2003).
Content moderation is the practical application of this power, often described theoretically as the "new governance" of speech. Unlike judicial systems, content moderation lacks procedural safeguards: there is often no presumption of innocence, no clear appeal process, and no transparency. The "Santa Clara Principles" represent a theoretical and practical attempt by civil society to impose due process standards on private moderation, arguing that the decision to remove speech requires explanation and a mechanism for redress (Santa Clara Principles, 2018).
The phenomenon of "shadow banning" or algorithmic demotion introduces the concept of "visibility as a right." Traditional free speech theory focuses on the right to speak. In the digital age, where attention is a scarce resource, the right to speak is distinct from the right to be heard (or amplified). Theoretical debates now center on whether platforms have the right to amplify or suppress content based on opaque internal policies, effectively shaping the "marketplace of ideas" invisibly (Pasquale, 2015).
"Collateral censorship" is a key theoretical risk associated with strict liability regimes. When states impose heavy fines on platforms for failing to remove illegal content within short timeframes (e.g., Germany's NetzDG), platforms are theoretically incentivized to err on the side of caution and remove legal speech rather than risk fines. This outsourcing of state censorship to private entities allows the state to bypass constitutional constraints on its own power, creating a system of privatized enforcement (Keller, 2018).
The "essential facilities" doctrine from competition law is sometimes applied to the human rights context. If a platform becomes so dominant that it constitutes an essential facility for public discourse (e.g., Facebook or X/Twitter), antitrust theories merge with human rights theories to suggest that these platforms may have limited rights to exclude users arbitrarily. This challenges the property rights of the platform owners in favor of the democratic necessity of the platform.
Global content moderation creates a "jurisdictional conflict" theory. Platforms often apply a single set of community standards globally, or conversely, apply the restrictive laws of one country to users worldwide. The "Brussels Effect" describes how EU regulation often sets the global standard because platforms prefer a single compliance regime. However, this also raises the danger of the "lowest common denominator," where the most restrictive laws of authoritarian regimes might influence global content policies.
The role of "fact-checking" and "labeling" introduces an epistemic dimension to the theory of digital rights. Platforms intervening to label misinformation are asserting an authority over truth. While intended to protect the information ecosystem, this raises theoretical questions about who acts as the arbiter of truth in a democratic society and whether private corporations possess the legitimacy to make such determinations.
"De-platforming" or the permanent suspension of users (including public figures) highlights the tension between private property rights and public interest. While platforms have a right to enforce rules, the permanent removal of a voice from the digital public square is viewed by some theorists as a form of "digital capital punishment." This has reinvigorated the debate on whether an "Internet Bill of Rights" is necessary to protect users from arbitrary exclusion.
The emergence of decentralized social networks (the Fediverse) offers a theoretical alternative: "federated governance." In this model, users choose their own servers and moderation rules, reducing the power of centralized gatekeepers. This returns to the early architectural vision of the internet but faces challenges in scalability and the handling of illegal content like child sexual abuse material (CSAM).
Ultimately, the theoretical framework regarding intermediaries is shifting from "intermediary liability" to "intermediary responsibility." This nuance implies that while platforms might not be liable for every specific post, they are responsible for the systems and architectures they create. This systemic responsibility is the focus of new legislation like the EU's Digital Services Act, which seeks to codify the theoretical balance between platform autonomy, user safety, and fundamental rights (European Commission, 2020).
Section 5: The Digital Divide, Access, and Equality
The "Digital Divide" is theoretically conceptualized not merely as a gap in technology ownership, but as a fundamental inequality in the capability to exercise human rights. This perspective draws heavily on Amartya Sen’s "capability approach," which assesses well-being based on the freedom of individuals to achieve functionings they value. In the modern context, the internet is a primary vector for these capabilities. Therefore, the digital divide is a human rights violation because it systematically deprives unconnected populations of the opportunities for economic mobility, political participation, and social inclusion (Sen, 1999).
Scholars categorize the digital divide into three theoretical levels. The "first-level divide" concerns physical access to infrastructure (connectivity). The "second-level divide" involves differences in digital skills and literacy (competency). The "third-level divide" relates to the disparate outcomes people achieve using the internet (benefits). Human rights theory argues that addressing only the first level is insufficient; true equality requires addressing the structural inequalities that prevent marginalized groups from effectively utilizing digital tools (Scheerder et al., 2017).
The concept of "internet access as a human right" has been debated extensively. While some, like Vinton Cerf, argue that technology is an enabler of rights rather than a right itself (Cerf, 2012), the prevailing international consensus is moving toward recognizing it as a derivative or auxiliary right essential for the realization of other rights. The UN General Assembly has declared that access to the internet is intrinsically linked to the right to freedom of expression. This theoretical stance frames access as a sine qua non—without it, the substantive right to expression in the 21st century is hollow.
Gender constitutes a critical theoretical axis in the digital divide. The "gender digital divide" refers to the systemic barriers that prevent women and girls from accessing and using the internet on equal terms. This is not just an infrastructure issue but a result of socio-cultural norms, economic disparity, and online gender-based violence. Feminist theories of technology argue that the internet is not a neutral space but one that often replicates offline patriarchies. Consequently, achieving digital equality requires a gender-responsive approach to policy and infrastructure development (UN Women, 2018).
The "rural-urban divide" presents a challenge to the universality of rights. Market-driven models of internet provision theoretically fail in low-density rural areas where infrastructure deployment is not profitable. Human rights theory posits that the state has an obligation to intervene in these "market failures" to ensure universal service. This is often framed through the lens of "utility theory," arguing that high-speed internet is as essential as electricity or water and should be regulated to ensure universal coverage regardless of profitability.
Economic barriers and "affordability" are central to the exclusion of the poor. The Alliance for Affordable Internet sets an affordability threshold known as "1 for 2," under which 1GB of data should not cost more than 2% of average monthly income. When costs exceed this, access becomes a luxury rather than a right. Theoretical critiques of pricing models emphasize that high costs act as a regressive tax on information, disproportionately silencing the poor and violating the principle of non-discrimination found in Article 2 of the UDHR.
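For illustration, the short sketch below applies the "1 for 2" test arithmetically; the price and income figures are hypothetical placeholders rather than real country data.

```python
# Minimal sketch of the "1 for 2" affordability test described above: 1 GB of
# mobile data should cost no more than 2% of average monthly income. The price
# and income figures are hypothetical placeholders, not real country data.
def is_affordable(price_1gb: float, monthly_income: float,
                  threshold: float = 0.02) -> bool:
    """Return True if 1 GB costs at most `threshold` of monthly income."""
    return price_1gb <= threshold * monthly_income

price_1gb = 4.50        # hypothetical price of 1 GB, in local currency units
monthly_income = 180.0  # hypothetical average monthly income, same units

share = price_1gb / monthly_income
verdict = "meets" if is_affordable(price_1gb, monthly_income) else "fails"
print(f"1 GB costs {share:.1%} of monthly income -> {verdict} the 1 for 2 target")
```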
"Zero-rating" practices (providing free access to specific apps while charging for the rest of the internet) present a theoretical dilemma. Proponents argue it helps bridge the divide by providing some connectivity to those who can afford none. Critics, employing "Net Neutrality" theory, argue that it creates a "walled garden" or a "poor internet for poor people," violating the principle that all internet traffic should be treated equally. This practice allows corporations to act as gatekeepers for the poor, shaping their internet experience and access to information (Marsden, 2016).
The "disability digital divide" highlights the failure of "ableist" design. Digital accessibility is a human right mandated by the Convention on the Rights of Persons with Disabilities (CRPD). The theoretical framework here is "universal design," which posits that environments should be usable by all people, to the greatest extent possible, without the need for adaptation. Inaccessible websites and tools are viewed as discriminatory barriers that prevent persons with disabilities from living independently and participating fully in all aspects of life.
Information literacy and "critical digital literacy" are essential components of the right to education in the digital age. It is not enough to consume information; citizens must be able to evaluate its veracity and source. Theoretical models of education now emphasize that digital literacy is a prerequisite for democratic citizenship. Without these skills, users are vulnerable to manipulation, fraud, and misinformation, effectively disenfranchising them despite being "connected."
The "right to disconnect" is an emerging theoretical counter-point to the push for constant connectivity. It addresses the blurring of work-life boundaries and the mental health implications of the "always-on" culture. While access is a right, the freedom from digital intrusion is also becoming a claimed right, particularly in labor law. This highlights the dialectical nature of digital technology: it is both a tool for liberation and a potential tether of exploitation.
Indigenous data sovereignty is another layer of the divide. Indigenous peoples often lack control over data collected about them and their lands. The "CARE Principles for Indigenous Data Governance" (Collective Benefit, Authority to Control, Responsibility, Ethics) offer a theoretical framework that contrasts with the open data movement. It asserts that indigenous peoples have the right to govern their own data to protect traditional knowledge and cultural heritage from digital exploitation (Carroll et al., 2020).
Finally, the global north-south divide reflects the geopolitics of the internet. The physical infrastructure of the internet (root servers, submarine cables) is concentrated in developed nations. Theoretical dependency theorists argue that this structure perpetuates a form of "digital colonialism," where developing nations are consumers of technology and data exporters, rather than producers and owners. Achieving true digital human rights requires a restructuring of the global digital economy to ensure equitable participation and ownership for the Global South.
Questions
1. The Principle of Normative Equivalence
How does the principle of "normative equivalence," as affirmed by UN Resolution 20/8, define the relationship between the rights individuals possess in the physical world and those they possess in the online environment?
2. Technological Neutrality
Explain the legal theory of "technological neutrality" referenced in Section 1. How does this theory support the argument that existing human rights treaties (like the ICCPR) are applicable to the internet without the need for inventing "new" rights?
3. Cyber-Libertarianism vs. Cyber-Paternalism
Contrast the theoretical perspectives of "cyber-libertarianism" (as championed by John Perry Barlow) and "cyber-paternalism" (as articulated by Lawrence Lessig). How do these opposing views differ regarding the necessity and legitimacy of state regulation in cyberspace?
4. The Shift to Positive Obligations
Traditionally, civil liberties were viewed as negative rights (freedom from state interference). According to Section 1, how has the digital context shifted the role of the state toward "positive obligations," particularly regarding infrastructure?
5. The Digital Panopticon
In the context of digital surveillance, how is the metaphor of the "Panopticon" used to describe the relationship between the observer and the observed, and what "chilling effect" does this dynamic create for freedom of expression?
6. The Mosaic Theory of Privacy
Describe the "mosaic theory" of privacy. How does this theory challenge the argument often made by intelligence agencies that the collection of metadata (as opposed to content) is harmless?
7. Collateral Censorship
What is "collateral censorship," and how do strict liability regimes (which impose heavy fines on platforms for failing to remove illegal content) inadvertently encourage private companies to over-censor legal speech?
8. Visibility as a Right
Section 4 discusses the distinction between the "right to speak" and the "right to be heard." How does the concept of "visibility as a right" relate to algorithmic demotion and shadow banning in the digital attention economy?
9. The Three Levels of the Digital Divide
Scholars categorize the digital divide into three theoretical levels. Beyond the first level of physical connectivity, what are the second and third levels, and why is addressing only the first level considered insufficient for human rights?
10. Zero-Rating and Net Neutrality
Explain the theoretical dilemma presented by "zero-rating" practices. How do proponents justify it as a bridge for the digital divide, and conversely, how do critics argue it violates the principles of Net Neutrality and creates a "walled garden" for the poor?
Cases
Case Study: The Republic of Oretia and "Platform X"
The Republic of Oretia, a nation with a history of political instability, recently passed the "Digital Sovereignty and Safety Act" (DSSA). The government argues this law is necessary to protect "national cultural values" and prevent foreign interference. The DSSA requires all social media platforms with over 1 million users to:
Store all user data locally on servers physically located within Oretia (Data Localization).
Decrypt user communications upon request by the Ministry of Interior when "national security" is cited, effectively banning end-to-end encryption.
Remove "harmful content" within 24 hours of notification or face fines equivalent to 10% of their global revenue. The definition of "harmful content" is broad, including "speech that undermines social cohesion."
"Platform X," a global social media giant based in a different jurisdiction, operates in Oretia. Platform X uses end-to-end encryption for its messaging service, which is widely used by Oretian activists and journalists to organize protests anonymously. Following the passage of the DSSA, the Oretian government demands that Platform X hand over the decryption keys for the accounts of three prominent opposition leaders.
Simultaneously, Platform X's algorithms have been promoting sensationalist content that is critical of the government, which the government labels as "harmful disinformation" aimed at destabilizing the state. Platform X refuses to hand over the keys, citing international human rights standards on privacy. In retaliation, the Oretian government initiates a complete internet shutdown in specific regions known as opposition strongholds and threatens to ban Platform X entirely.
Questions
1. Digital Sovereignty vs. Human Rights Obligations
Analyze the Oretian government's actions through the lens of Cyber-Paternalism and Digital Sovereignty (Section 2). How would the government justify the DSSA? Conversely, using the principle of Normative Equivalence (Section 1), how would a human rights lawyer argue that the demand to decrypt communications and the subsequent internet shutdown violate international law?
2. Intermediary Liability and Collateral Censorship
Focusing on the DSSA's requirement to remove "harmful content" within 24 hours under threat of massive fines, explain the risk of "Collateral Censorship" (Section 4). How might Platform X change its moderation policies in Oretia to avoid these fines, and how does this relate to the concept of "Privatized Enforcement" where the state outsources censorship to a private entity?
3. Privacy and the "Security-First" Approach
The Oretian government argues that breaking encryption is necessary for national security (a "Security-First" approach). Using the arguments from the Report of the Special Rapporteur mentioned in Section 1 and the "Digital Rights Ireland" case logic in Section 3, counter this argument. Why is encryption considered a "protected extension" of the right to hold opinions, and how does the removal of anonymity disproportionately affect activists in this scenario?
References
Balkin, J. M. (2018). Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation. UC Davis Law Review, 51, 1149.
Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation.
Carroll, S. R., et al. (2020). The CARE Principles for Indigenous Data Governance. Data Science Journal, 19(1), 43.
Cavoukian, A. (2009). Privacy by Design: The 7 Foundational Principles. Information and Privacy Commissioner of Ontario.
Cerf, V. G. (2012). Internet Access Is Not a Human Right. The New York Times.
Court of Justice of the European Union. (2014). Digital Rights Ireland Ltd v Minister for Communications, Marine and Natural Resources. Case C-293/12.
Court of Justice of the European Union. (2020). Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems (Schrems II). Case C-311/18.
DeNardis, L. (2014). The Global War for Internet Governance. Yale University Press.
European Commission. (2020). Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act).
Floridi, L. (2016). On Human Dignity as a Foundation for the Right to Privacy. Philosophy & Technology, 29, 307–312.
Foucault, M. (1977). Discipline and Punish: The Birth of the Prison. Pantheon Books.
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.
Goldsmith, J., & Wu, T. (2006). Who Controls the Internet? Illusions of a Borderless World. Oxford University Press.
Kaye, D. (2015). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. United Nations Human Rights Council. A/HRC/29/32.
Keller, D. (2018). Internet Platforms: Observations on Speech, Danger, and Money. Hoover Institution Working Group on National Security, Technology, and Law.
Klonick, K. (2018). The New Governors: The People, Rules, and Processes Governing Online Speech. Harvard Law Review, 131, 1598.
La Rue, F. (2011). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. United Nations Human Rights Council. A/HRC/17/27.
Lessig, L. (1999). Code and Other Laws of Cyberspace. Basic Books.
Marsden, C. (2016). Network Neutrality: From Policy to Law to Regulation. Manchester University Press.
Milanovic, M. (2011). Extraterritorial Application of Human Rights Treaties: Law, Principles, and Policy. Oxford University Press.
Mueller, M. L. (2017). Will the Internet Fragment? Sovereignty, Globalization and Cyberspace. Polity.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Reed, C. (2012). Making Laws for Cyberspace. Oxford University Press.
Santa Clara Principles. (2018). The Santa Clara Principles on Transparency and Accountability in Content Moderation.
Scheerder, A., van Deursen, A., & van Dijk, J. (2017). Determinants of Internet skills, uses and outcomes. A systematic review of the second- and third-level digital divide. Telematics and Informatics, 34(8), 1607-1624.
Sen, A. (1999). Development as Freedom. Knopf.
Tushnet, M. (2003). The Issue of State Action/Horizontal Effect in Comparative Constitutional Law. International Journal of Constitutional Law, 1(1), 79–98.
United Nations. (2015). Transforming our world: the 2030 Agenda for Sustainable Development. Resolution A/RES/70/1.
United Nations General Assembly. (1948). Universal Declaration of Human Rights.
United Nations Human Rights Council. (2012). The promotion, protection and enjoyment of human rights on the Internet. Resolution A/HRC/RES/20/8.
United Nations Human Rights Office of the High Commissioner. (2011). Guiding Principles on Business and Human Rights.
UN Women. (2018). Gender Equality and the Sustainable Development Goals.
Westin, A. F. (1967). Privacy and Freedom. Atheneum.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
Topic 2: Sources of law and regulation of digital human rights (Lecture: 2 hours, Seminar: 4 hours, Independent study: 10 hours, Total: 16 hours)
Lecture text
Section 1: International Human Rights Treaties as Primary Hard Law Sources
The hierarchy of sources governing digital human rights begins with international "hard law," primarily consisting of binding treaties and conventions that establish the fundamental obligations of states. The most foundational of these documents is the International Covenant on Civil and Political Rights (ICCPR), adopted in 1966. While drafted before the advent of the internet, the ICCPR serves as the bedrock for digital rights litigation and advocacy, particularly through Article 19, which protects freedom of expression "regardless of frontiers." This phrase effectively renders the treaty technology-neutral and applicable to the borderless nature of cyberspace. The binding nature of the ICCPR means that states parties are legally obligated to respect digital rights, and restrictions on those rights are only permissible under the strict conditions of legality, necessity, and proportionality outlined in Article 19(3) (United Nations General Assembly, 1966).
Complementing the ICCPR is the International Covenant on Economic, Social, and Cultural Rights (ICESCR). This treaty is increasingly recognized as a vital source of law for the digital age, particularly concerning the right to science and culture under Article 15. The Committee on Economic, Social and Cultural Rights has interpreted these obligations to include the duty of states to ensure affordable access to the internet as a prerequisite for cultural participation and education. This transforms the internet from a mere commodity into a public good protected by treaty law, creating a legal basis for challenging the digital divide as a violation of international law rather than merely a failure of infrastructure policy (United Nations General Assembly, 1966).
The interpretation of these treaties is further refined by "General Comments" issued by UN treaty bodies, which, while technically interpretative, carry significant authoritative weight as sources of law. A pivotal document in this regard is General Comment No. 34 by the Human Rights Committee, which explicitly interprets Article 19 of the ICCPR in the context of new technologies. It clarifies that the protection of freedom of expression extends to internet-based modes of communication and asserts that any restrictions on websites, blogs, or other internet-based information dissemination systems must strictly comply with the treaty's limitations clauses. This document provides the legal standard used by courts worldwide to assess the validity of internet shutdowns and site-blocking measures (Human Rights Committee, 2011).
Another crucial hard law source is the Convention on the Rights of the Child (CRC). As children constitute a significant portion of internet users, the CRC has become a primary source of regulation for digital safety and privacy. General Comment No. 25 regarding children’s rights in relation to the digital environment explicitly sets out states' obligations to protect children from online risks such as cyberbullying and commercial exploitation while simultaneously upholding their rights to access information and freedom of association online. This establishes a specialized legal framework that prioritizes the "best interests of the child" in digital governance (Committee on the Rights of the Child, 2021).
The Convention on the Rights of Persons with Disabilities (CRPD) serves as the primary international legal source mandating digital accessibility. Article 9 of the CRPD requires states to take appropriate measures to ensure access to information and communications technologies and systems. This treaty obligation moves web accessibility standards (like WCAG) from voluntary best practices to mandatory legal requirements for states parties, providing a basis for litigation against inaccessible government websites and digital services. It frames digital exclusion not just as an inconvenience but as unlawful discrimination (United Nations, 2006).
The International Convention on the Elimination of All Forms of Racial Discrimination (CERD) provides the legal basis for regulating online hate speech. Article 4 of CERD requires states to declare illegal all dissemination of ideas based on racial superiority or hatred. This provision is the legal source for many national laws prohibiting hate speech on social media. It creates a positive obligation for states to intervene in the digital sphere to prevent the incitement of racial violence, balancing the right to free speech with the right to be free from discrimination (United Nations General Assembly, 1965).
Similarly, the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) is the primary source of law for addressing online gender-based violence. The CEDAW Committee’s General Recommendation No. 35 updates the treaty’s application to include violence that occurs in technology-mediated environments. This establishes that state inaction regarding online harassment, doxxing, and non-consensual distribution of intimate images constitutes a violation of the treaty. This interpretation forces states to modernize their domestic violence and harassment laws to cover digital acts (CEDAW Committee, 2017).
The Optional Protocols to these conventions also serve as sources of law by establishing individual complaint mechanisms. For example, the First Optional Protocol to the ICCPR allows individuals to bring complaints directly to the Human Rights Committee if they believe their digital rights have been violated and domestic remedies have been exhausted. Decisions made under this protocol, while not strictly binding in the same way as a domestic court judgment, create a body of "quasi-judicial" case law that defines the practical application of digital human rights standards globally (United Nations General Assembly, 1966).
However, a significant challenge in using these treaties as sources of law is the issue of extraterritoriality. Traditional treaty law is bound by jurisdiction, typically defined by territory. The global nature of the internet creates a conflict where a violation may originate in one state but impact a user in another. Legal scholars and bodies are currently debating the extent to which treaty obligations follow the state's "effective control" over digital infrastructure or data, attempting to stretch the source material to cover cross-border surveillance and censorship (Milanovic, 2011).
The Vienna Convention on the Law of Treaties serves as the "law of the law," dictating how these human rights instruments must be interpreted. It requires that treaties be interpreted in good faith and in light of their object and purpose. In the digital context, this principle supports "evolutionary interpretation," allowing 20th-century texts to remain relevant sources of law for 21st-century problems without constant amendment. This interpretive rule is what allows a "letter" in 1948 to legally equate to an "email" or "encrypted message" today (United Nations, 1969).
Customary International Law constitutes another layer of hard law sources. These are rules that derive from a general practice accepted as law. While the internet is relatively new, some scholars argue that the prohibition of prolonged, arbitrary internet shutdowns is emerging as a norm of customary international law. Additionally, the principle of due diligence, which requires states not to knowingly allow their territory to be used for acts contrary to the rights of other states, is being applied to cyber-operations, evolving into a binding source of law for state conduct in cyberspace (Schmitt, 2017).
Finally, the interaction between these various treaties creates a "regime complex" for digital human rights. No single treaty covers every aspect of the digital experience; instead, the legal source is often a patchwork of provisions from the ICCPR, CRC, and specialized conventions. Lawyers and policymakers must synthesize these sources to construct a coherent legal argument for protection, demonstrating that digital rights are not a new category of law but the cumulative application of the existing international human rights corpus.
Section 2: Regional Human Rights Systems and Jurisprudence
Regional human rights systems provide a more granular and often more enforceable source of law than the global UN framework. The European Convention on Human Rights (ECHR), overseen by the Council of Europe, is arguably the most influential regional source. Articles 8 (Right to respect for private and family life) and 10 (Freedom of expression) of the ECHR have generated a vast body of digital jurisprudence. The European Court of Human Rights (ECtHR) acts as the primary interpreter, and its judgments are binding on member states. Cases like Delfi AS v. Estonia have established legal precedents regarding intermediary liability, treating the Court's case law as a direct source of regulation for how states must balance reputation and free speech online (European Court of Human Rights [ECtHR], 2015).
In the European Union, the Charter of Fundamental Rights of the European Union serves as a primary source of constitutional-level law. Unlike the ECHR, which applies to the broader Council of Europe, the Charter binds EU institutions and member states when implementing EU law. Article 8 of the Charter specifically establishes the protection of personal data as a distinct fundamental right, separating it from the general right to privacy. This separation provides the legal foundation for the comprehensive data protection regimes that characterize European digital regulation (European Union, 2012).
The General Data Protection Regulation (GDPR) acts as a unique hybrid source: it is a legislative regulation that functions with the weight of a constitutional standard for privacy. While technically secondary EU law, its extraterritorial reach (the "Brussels Effect") effectively makes it a global source of law for digital privacy. It codifies principles such as data minimization, purpose limitation, and the "right to be forgotten" into hard legal obligations. For many non-EU states and multinational corporations, the GDPR is the de facto source of rules for data handling, influencing domestic legislation from Brazil to Japan (Bradford, 2020).
The Court of Justice of the European Union (CJEU) has played a pivotal role in defining digital rights through its preliminary rulings. The Schrems I and Schrems II decisions are critical sources of law regarding international data transfers. By invalidating the Safe Harbor and Privacy Shield frameworks due to concerns over US mass surveillance, the CJEU established that foreign surveillance laws are relevant factors in determining the legality of data flows. These judgments serve as binding sources of law that prohibit the transfer of personal data to jurisdictions that do not offer an "essentially equivalent" level of protection (Court of Justice of the European Union [CJEU], 2020).
Council of Europe Convention 108+ (The Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data) is the only binding international treaty specifically focused on data protection. Unlike the GDPR, which is an EU instrument, Convention 108+ is open for accession by any country in the world. It serves as a modernizing source of law, introducing requirements for reporting data breaches and increasing transparency in algorithmic processing. It provides a baseline legal standard for countries seeking to upgrade their privacy laws to international standards outside of the EU framework (Council of Europe, 2018).
In the Americas, the American Convention on Human Rights acts as the primary source of digital rights law, interpreted by the Inter-American Court of Human Rights and the Inter-American Commission on Human Rights (IACHR). The IACHR’s Office of the Special Rapporteur for Freedom of Expression has been prolific in producing thematic reports that serve as authoritative interpretations of the Convention. The Joint Declaration on Freedom of Expression and the Internet is a key source, establishing that mandatory blocking and filtering of content is only justifiable under strict judicial review, setting a high bar for state censorship in the region (Organization of American States, 2011).
The African Charter on Human and Peoples' Rights (Banjul Charter) is the foundational source for the African system. The African Commission on Human and Peoples' Rights has adopted the Declaration of Principles on Freedom of Expression and Access to Information in Africa. This document is a critical soft law source that interprets the Charter's broad provisions to specifically address internet shutdowns, affirming that cutting off internet access is a violation of the right to seek, receive, and impart information. It provides civil society with a legal text to challenge the frequent shutdowns observed in the region (African Commission on Human and Peoples' Rights, 2019).
The African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention) represents an attempt to create a binding continental source of law. Although ratification has been slow, it provides a comprehensive framework addressing electronic transactions, privacy, and cybercrime. It serves as a model law for African nations drafting domestic legislation, aiming to harmonize digital regulation across the continent and facilitate cross-border digital trade while protecting human rights (African Union, 2014).
In Southeast Asia, the ASEAN Human Rights Declaration includes provisions on digital rights, though it is often criticized for its "cultural particularism" and broad limitations clauses. Clause 7 of the Declaration allows rights to be balanced against "national security" and "public morality," which states often use as a source of law to justify restrictive cyber-laws. This highlights how regional sources can sometimes conflict with universal standards, providing cover for digital authoritarianism under the guise of regional values (Association of Southeast Asian Nations, 2012).
The role of "dialogue" between these regional courts creates a cross-pollination of legal sources. The Inter-American Court often cites ECtHR judgments, and vice versa. This judicial dialogue is creating a jus commune (common law) of digital human rights, where a ruling on digital privacy in Strasbourg can influence a high court decision in Latin America. This makes comparative jurisprudence a vital, albeit indirect, source of law for digital rights lawyers (Helfer & Slaughter, 1997).
Another emerging regional source is the EU Directive on Copyright in the Digital Single Market. Article 17 of this directive changes the liability regime for platforms regarding copyright infringement, effectively mandating upload filters. This legislative source has sparked intense debate about the balance between intellectual property rights and freedom of expression, illustrating how economic regulations can become de facto human rights sources by dictating the technical architecture of speech (European Union, 2019).
Finally, the overarching trend in regional systems is the move from negative to positive obligations as a source of law. Courts are increasingly finding that states have a positive legal duty to protect individuals from digital harm caused by third parties (horizontal effect). This expands the sources of law to include not just what the state cannot do, but what the state must do to regulate private actors, effectively turning regulatory failure into a human rights violation.
Section 3: Soft Law, Declarations, and Multi-stakeholder Norms
In the rapidly evolving digital landscape, traditional treaty amendment is often too slow, leading to a reliance on "soft law" as a primary source of regulation. Soft law includes non-binding resolutions, declarations, guidelines, and principles that, while lacking the force of a treaty, possess significant normative weight and political commitment. A prime example is the series of United Nations General Assembly (UNGA) resolutions on The Right to Privacy in the Digital Age. These resolutions, adopted by consensus, serve as a barometer of international opinion and provide a framework for states to review their surveillance practices. They are often cited in national court cases to demonstrate global standards (United Nations General Assembly, 2013).
The United Nations Human Rights Council (HRC) resolutions are similarly critical soft law sources. The landmark Resolution 20/8, affirming that "the same rights that people have offline must also be protected online," established a high-level political commitment that permeates all subsequent digital rights discussions. These resolutions empower the Office of the High Commissioner for Human Rights (OHCHR) to produce detailed reports on issues like encryption, artificial intelligence, and content moderation, which act as technical guidance documents for legislators and corporate policy makers (United Nations Human Rights Council, 2012).
UNESCO serves as a key source of soft law through its "ROAM" principles (Rights, Openness, Accessibility, Multi-stakeholder participation). These principles form a diagnostic framework for assessing the state of the internet in a given country. While voluntary, member states use the ROAM indicators to benchmark their national digital policies. The Recommendation on the Ethics of Artificial Intelligence, adopted by UNESCO, is another recent soft law source that attempts to set the first global standard for ethical AI development, emphasizing human oversight and non-discrimination (UNESCO, 2021).
The UN Guiding Principles on Business and Human Rights (UNGPs), also known as the "Ruggie Principles," are the definitive source of soft law regarding corporate responsibility. Since treaties generally bind states, not companies, the UNGPs fill a critical gap by establishing the corporate responsibility to "respect" human rights. This framework requires tech companies to conduct human rights due diligence and provide remedies for violations. Although voluntary, they have been incorporated into the contractual requirements of some states and the internal policies of major tech firms, effectively functioning as a regulatory code (OHCHR, 2011).
The Organization for Economic Co-operation and Development (OECD) Guidelines on the Protection of Privacy and Transborder Flows of Personal Data represent an early and enduring source of soft law. Originally drafted in 1980 and updated in 2013, these guidelines established the "Fair Information Practice Principles" (FIPPs) that underpin most modern data protection laws, including the GDPR. They demonstrate how soft law can harden over time, serving as the template for binding national legislation across the developed world (OECD, 2013).
The mandates of Special Rapporteurs provide a dynamic source of "quasi-law." The UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression produces thematic reports that interpret how human rights law applies to specific technologies. For instance, the 2015 report on encryption and anonymity (A/HRC/29/32) argued that encryption is necessary for the exercise of freedom of opinion. These reports are frequently cited by judges and civil society to critique restrictive laws, giving them substantial authority in the legal discourse (Kaye, 2015).
Civil society and multi-stakeholder coalitions also generate normative sources known as "principles." The Manila Principles on Intermediary Liability provide a set of best practices for limiting the liability of content hosts to prevent censorship. Drafted by a coalition of NGOs and experts, these principles are used as a model for law reform advocacy. They argue that intermediaries should not be held liable for third-party content without a judicial order, offering a counter-narrative to strict liability regimes (Manila Principles, 2015).
Similarly, the Santa Clara Principles on Transparency and Accountability in Content Moderation set the standard for how platforms should manage user content. They demand that companies provide clear notice to users when their content is removed and offer a robust appeal process. These principles function as a voluntary code of conduct that pressures tech companies to align their private rules with due process norms. Major platforms have explicitly referenced the Santa Clara Principles when updating their transparency reports (Santa Clara Principles, 2018).
The Christchurch Call to Action represents a new form of diplomatic soft law. Initiated by New Zealand and France after the 2019 terrorist attacks, it is a non-binding commitment by governments and tech companies to eliminate terrorist and violent extremist content online. It illustrates the trend of "hybrid" sources where sovereign states and private corporations sign the same document, blurring the lines between international law and corporate social responsibility (Christchurch Call, 2019).
The Global Network Initiative (GNI) Principles create a framework for company decision-making when facing government demands for data or censorship. Member companies (including Google and Meta) agree to independent assessments of their compliance with these principles. This creates a system of private governance that serves as a regulatory source in the absence of binding international treaties on government surveillance access to private data (Global Network Initiative, 2017).
Internet governance bodies like the Internet Corporation for Assigned Names and Numbers (ICANN) operate based on their own bylaws and policies, which act as a unique form of technical soft law. The "multistakeholder model" used by ICANN allows for policy development through consensus among technical experts, governments, and civil society. While technically a non-profit corporation's internal rules, ICANN's policies regarding domain name dispute resolution effectively govern global trademark and speech rights on the internet infrastructure (DeNardis, 2014).
Ultimately, the effectiveness of soft law as a source lies in its flexibility and its ability to influence the "norm cascade." It allows for experimentation and consensus-building in areas where states are not yet ready to commit to binding treaties. However, critics argue that reliance on soft law can lead to "ethics washing," where companies or states sign non-binding pledges to avoid strict regulation while continuing rights-violating practices.
Section 4: National Legislation and the Cybercrime Framework
National legislation constitutes the most direct and enforceable source of law for digital human rights. Within this sphere, national constitutions serve as the supreme source, with many modern constitutions explicitly recognizing digital rights. For example, Article 35 of the Constitution of Portugal was among the first to address the use of informatics, restricting the automated processing of sensitive personal data and prohibiting third-party access to personal data except in cases provided for by law. More recently, Chile amended its constitution to include "neuro-rights," protecting mental privacy in the face of advancing neuro-technology. These constitutional provisions allow high courts to strike down ordinary legislation that infringes on digital privacy or access (Constitution of the Portuguese Republic, 1976).
The Budapest Convention on Cybercrime is the primary international treaty guiding national legislation on cybercrime. Although it is a Council of Europe treaty, it has been ratified by non-European states including the US, Japan, and Australia, making it a global standard. It requires states to criminalize specific conduct (illegal access, data interference, system interference) and establishes procedures for cross-border evidence gathering. However, human rights groups criticize it for prioritizing law enforcement powers over privacy safeguards, as it mandates data preservation and allows for mutual legal assistance without robust dual criminality requirements in all cases (Council of Europe, 2001).
The negotiation of a new UN Cybercrime Treaty represents a current battleground for defining sources of law. Led by Russia and supported by China, this initiative seeks to replace the Budapest framework with a UN-based convention. Critics fear this new source of law will legitimize internet control under the guise of combating cybercrime, potentially criminalizing speech that is protected under international human rights law but deemed "criminal" by authoritarian regimes (e.g., "disinformation" or "incitement to subversion"). The outcome of this treaty process will significantly shape the global legal landscape for digital content (United Nations General Assembly, 2019).
National data protection statutes are arguably the most prolific source of digital regulation. Beyond the GDPR, laws like Brazil’s Lei Geral de Proteção de Dados (LGPD) and the California Consumer Privacy Act (CCPA) in the United States establish statutory rights for individuals to access, delete, and port their data. These laws create a regulatory floor that companies must adhere to. The proliferation of these laws reflects a "Brussels Effect" where the EU standard is localized into national sources of law to ensure trade continuity (Soto, 2019).
"Net Neutrality" legislation serves as a critical source of law for protecting the right to access information. Countries like India, through the Prohibition of Discriminatory Tariffs for Data Services Regulations, and the Netherlands have codified the principle that Internet Service Providers (ISPs) must treat all data equally. These laws prevent ISPs from favoring certain content or blocking competitors, thereby legally enshrining the internet as an open platform. The repeal and potential reinstatement of net neutrality rules in the US highlights the volatility of this statutory source (Telecom Regulatory Authority of India, 2016).
Legislation combating "fake news" and disinformation has emerged as a controversial source of digital regulation. Laws such as Singapore's Protection from Online Falsehoods and Manipulation Act (POFMA) grant government ministers the power to order corrections or takedowns of false statements. While intended to protect public order, these laws are often criticized as sources of censorship that lack independent judicial oversight, demonstrating how national statutes can weaponize vague definitions to stifle political dissent (Republic of Singapore, 2019).
National security and surveillance laws constitute a "hidden" source of law that deeply impacts digital rights. Statutes like the US Foreign Intelligence Surveillance Act (FISA), particularly Section 702, provide the legal basis for the collection of foreign intelligence from non-US persons. These domestic laws often conflict with international human rights norms regarding privacy. The legal friction arises because these statutes provide the lawful authority for state agencies to access the data held by private platforms, often with lower evidentiary thresholds than required for domestic criminal investigations (50 U.S.C. § 1881a).
Network enforcement laws, exemplified by Germany’s NetzDG (Network Enforcement Act), create a source of law that imposes strict liability on social media platforms. By threatening massive fines for failure to remove "manifestly unlawful" content within 24 hours, these laws deputize private companies to act as judges of speech. This legislative model has been copied by numerous countries, creating a trend where the source of speech regulation is nominally the state, but the execution is privatized (German Federal Ministry of Justice, 2017).
Antitrust and competition laws are increasingly being interpreted as sources of digital rights protection. Regulators in the US and EU are using monopoly laws to challenge the dominance of "Big Tech." The theory is that extreme market concentration reduces consumer choice and privacy, as users have no alternative to the dominant platforms. Therefore, breaking up monopolies or mandating interoperability is seen as a legal mechanism to restore user autonomy and digital rights (Khan, 2017).
The European Union’s Digital Services Act (DSA) represents a new generation of legislative sources. It moves beyond simple liability to regulating the processes of platforms. It creates legal obligations for transparency in algorithms, risk assessments for systemic harms, and access to data for researchers. As a regulation, it applies directly across the EU, replacing the fragmented national laws and setting a new global benchmark for platform accountability (European Union, 2022).
Jurisdictional conflicts often arise when national laws assert extraterritorial application. The US CLOUD Act allows US law enforcement to compel US-based tech companies to provide data stored on servers in foreign countries. This asserts US national law as a global source for data access, often bypassing the mutual legal assistance treaties (MLATs) that traditionally governed such exchanges. This creates a conflict of laws where a company might be compelled by US law to disclose data while prohibited by local privacy law from doing so (United States Congress, 2018).
Finally, the fragmentation of national legislation creates a "splinternet" effect. As countries like Russia (with its "Sovereign Internet Law") and China enact laws to isolate their domestic networks from the global internet, the concept of a universal source of digital law erodes. This leads to a scenario where digital human rights are geographically contingent, defined entirely by the national statute of the user's physical location rather than universal principles.
Section 5: Private Ordering, Terms of Service, and Code as Law
In the digital realm, private ordering often supersedes public law as the primary regulator of behavior. The most immediate source of "law" for any user is the Terms of Service (ToS) and Community Guidelines of the platforms they use. These contracts, often referred to as "contracts of adhesion" because they are non-negotiable, establish the rules for acceptable speech, copyright usage, and privacy. While technically private contracts, the dominance of platforms like Facebook, Google, and X (formerly Twitter) means these documents function as de facto constitutional law for the digital public sphere (Klonick, 2018).
This phenomenon is often described by Lawrence Lessig’s theory that "Code is Law." This concept asserts that the technical architecture (hardware and software) of the internet constrains and enables behavior just as effectively as legal statutes. For example, if a messaging app is built with end-to-end encryption (e.g., Signal), the "law" of that space is that interception of message content by intermediaries is technically infeasible, regardless of what government statutes might say. The source of regulation here is the code itself, written by engineers rather than legislators. This makes software developers influential drafters of the laws that govern digital human rights (Lessig, 1999).
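A minimal code sketch can make this architectural point concrete (the example is illustrative only and is not drawn from the cited sources; real end-to-end messengers such as Signal rely on more elaborate asymmetric key exchange). Using Python's cryptography library, it shows that once content is encrypted at the endpoints, an intermediary holding only the ciphertext cannot read or moderate it, whatever a statute or disclosure order might demand.

    # Illustrative sketch: symmetric encryption with the "cryptography" package.
    # Only the endpoints hold the key; a platform or ISP relaying the token sees
    # ciphertext it cannot interpret, regardless of any legal demand addressed to it.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # shared only between sender and recipient
    cipher = Fernet(key)

    token = cipher.encrypt(b"meet at the square at noon")  # what the intermediary stores
    print(token)                       # opaque bytes: the "law" written into the code
    print(cipher.decrypt(token))       # readable only where the key exists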
The policies of the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) serve as technical sources of law. These bodies publish "Requests for Comments" (RFCs) and standards that define how the internet functions. When the IETF adopts a standard like TLS 1.3 (which enhances encryption), it effectively mandates a higher privacy standard for the global internet. These technical decisions are rarely reviewed by democratic institutions yet define the boundaries of what is technically possible in terms of rights exercise (Internet Engineering Task Force, 2018).
Algorithmic regulation, or "Lex Algorithmic," refers to the automated enforcement of rules. Content moderation algorithms that detect and remove copyright infringement or hate speech act as executive, judicial, and legislative powers simultaneously. The source of the "verdict" is a proprietary machine learning model. This shifts the source of law from a public, transparent text to a black-box system where the logic of the decision is often inaccessible even to the platform operators. This poses a significant challenge to the rule of law, which requires laws to be knowable and transparent (Yeung, 2018).
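The collapse of rule-making, adjudication, and enforcement into a single automated step can be illustrated with a deliberately simplified sketch (all names and thresholds below are hypothetical, standing in for a proprietary model rather than describing any real platform's system):

    # Illustrative sketch of an automated takedown pipeline. The "verdict" is a
    # threshold applied to an opaque score; neither the user nor a reviewer sees
    # the reasoning behind the number.
    def classify_risk(text: str) -> float:
        """Stand-in for a proprietary ML classifier; returns a score in [0, 1]."""
        flagged_terms = {"attack", "riot"}            # hypothetical learned artefact
        hits = sum(term in text.lower() for term in flagged_terms)
        return min(1.0, 0.5 * hits)

    def moderate(post: str, threshold: float = 0.5) -> str:
        score = classify_risk(post)
        # Legislative, judicial, and executive functions collapse into one line:
        return "REMOVED" if score >= threshold else "PUBLISHED"

    print(moderate("Footage of the attack on protesters"))  # removed, no reasons stated
    print(moderate("Holiday photos from the square"))       # published

Note how the same opaque rule that catches abusive content also removes documentation of violence against protesters, foreshadowing the "collateral censorship" problem raised in the case study below.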
The Meta Oversight Board represents a novel source of private jurisprudence. Created by Meta (Facebook) to adjudicate difficult content moderation decisions, it functions as a "Supreme Court" for the platform. Its decisions are binding on the company for the specific case and offer policy recommendations. While entirely a construct of private corporate governance, its rulings use the language of international human rights law, creating a pseudo-legal system that exists parallel to state courts (Meta Oversight Board, 2021).
Domain Name System (DNS) policy acts as a powerful lever of regulation. Registries and registrars (the companies that sell domain names) have the power to "seize" domains, effectively erasing a website from the internet. While usually reserved for clear illegality (like botnets), pressure is increasing on these intermediaries to act as content regulators. The contracts between ICANN and registries thus become a source of law regarding who is allowed to have a presence online, often bypassing due process (Zittrain, 2003).
Self-regulatory industry codes constitute another layer of private sourcing. The Code of Practice on Disinformation in the EU is a voluntary agreement signed by major tech platforms to fight fake news. While non-binding, the threat of future regulation forces compliance. These codes allow companies to define the operational details of compliance, often prioritizing cost-efficiency over rigorous rights protection. This creates a "privatized" regulatory environment where the source of the rule is a negotiation between corporate lobbyists and regulators (European Commission, 2018).
App Store guidelines (Apple and Google) act as a significant bottleneck and source of regulation. Because mobile devices are the primary way people access the internet, the rules governing what apps are allowed on the App Store effectively dictate the software market. Apple’s privacy labels or its ban on certain types of apps serve as a global regulation that supersedes national laws; for instance, an app legal in a country might still be unavailable if it violates Apple’s private "law" (Geradin & Katsifis, 2021).
The concept of "transnational private regulation" explains how standard-setting bodies (like ISO) create rules for information security (ISO 27001) that become industry mandates. Companies adopt these standards to prove compliance to clients and insurers. While voluntary, the market enforces them. These technical standards often contain implicit policy choices about access control and surveillance that impact user rights without public debate (Cafaggi, 2006).
Digital Rights Management (DRM) systems serve as a source of "private copyright law." DRM code physically prevents users from copying or modifying works, enforcing a stricter regime than copyright law itself (which allows for fair use/fair dealing). The Digital Millennium Copyright Act (DMCA) legalizes this by making it a crime to circumvent DRM. Here, the private code (the lock) is backed by public law (the ban on breaking the lock), creating a composite source of regulation that severely restricts user freedoms (17 U.S.C. § 1201).
The interaction between private rules and public law is becoming more formalized. "Co-regulation" models, like the DSA, legally require platforms to have robust Terms of Service and enforce them. This essentially incorporates private contracts into the public legal framework, giving private rules the backing of state sanctions. This blurs the line between the state and the corporation, raising the question of whether constitutional free speech protections should apply to these private regulators.
Finally, the "sovereignty of the user" acts as a theoretical counter-source. Tools that allow users to customize their feed, block trackers, or use alternative front-ends (like ad-blockers) allow individuals to write their own "personal laws" for their digital experience. However, the arms race between platform code (seeking to enforce ads/tracking) and user code (seeking to block them) defines the practical limits of this autonomy.
Questions
1. The Legal Status of Internet Speech
According to General Comment No. 34 by the Human Rights Committee, how does Article 19 of the ICCPR apply to internet-based modes of communication, and what standard must restrictions on websites strictly comply with?
2. Digital Accessibility as a Legal Mandate
How does Article 9 of the Convention on the Rights of Persons with Disabilities (CRPD) transform web accessibility standards (like WCAG) from voluntary best practices into mandatory legal requirements?
3. Evolutionary Interpretation of Treaties
Explain the principle of "evolutionary interpretation" derived from the Vienna Convention on the Law of Treaties. How does this principle allow human rights instruments drafted in the 20th century to serve as valid sources of law for modern digital technologies?
4. The "Brussels Effect" and Schrems II
In the context of the European Union, how did the CJEU's "Schrems II" decision impact international data transfers, and what does this judgment establish regarding the relevance of foreign surveillance laws?
5. Regional Values vs. Universal Standards
How does the ASEAN Human Rights Declaration illustrate the tension between regional "cultural particularism" and universal human rights standards, specifically regarding the "national security" limitations mentioned in Clause 7?
6. Corporate Responsibility and the "Ruggie Principles"
Since international treaties generally bind states rather than corporations, what role do the UN Guiding Principles on Business and Human Rights (the "Ruggie Principles") play in regulating the conduct of private tech companies?
7. Privatized Enforcement and NetzDG
Germany’s Network Enforcement Act (NetzDG) is described as creating a system of "privatized enforcement." How does this legislative model shift the responsibility of judging speech from the state to private companies, and what is the specific incentive mechanism used?
8. The Geopolitics of Cybercrime Treaties
What is the primary conflict between the existing Budapest Convention on Cybercrime and the proposed UN Cybercrime Treaty supported by Russia and China, particularly regarding human rights and content regulation?
9. "Code is Law"
Explain Lawrence Lessig’s theory that "Code is Law" as a source of private ordering. How does the technical architecture of a system (such as end-to-end encryption) function similarly to a legal statute in constraining or enabling behavior?
10. Lex Algorithmic
How does the concept of "Lex Algorithmic" or algorithmic regulation challenge the traditional rule of law requirement that laws must be knowable and transparent?
Cases
Case Study: The "SecureData" Crisis in the Republic of Varia
Background
The Republic of Varia is a member of a regional political union similar to the EU, which binds it to a regional human rights convention (similar to the ECHR) and a strict regional data protection regulation (similar to the GDPR). Varia recently enacted a national law called the "Civility and Safety Act" (CSA).
The CSA has two main provisions:
Strict Liability: Social media platforms must remove "manifestly illegal hate speech" within 24 hours of notification or face fines up to 5% of global turnover.
Data Sovereignty: Critical user data must be stored on servers accessible to Varian law enforcement.
The Incident
"VidShare," a video-hosting giant headquartered in the United States, operates in Varia.
The Censorship Issue: Fearing the massive fines under the CSA, VidShare modifies its algorithms to aggressively auto-delete content containing keywords associated with political extremism. This results in the removal of thousands of videos documenting police brutality posted by human rights activists. VidShare’s Terms of Service (ToS) state they can remove any content "at their sole discretion," and they offer no specific appeal mechanism for these takedowns.
The Data Transfer Issue: A Varian privacy activist, Leo, files a complaint with the national data regulator. He argues that VidShare transfers his personal data to its US headquarters. He cites the US CLOUD Act (referenced in Section 4), arguing that US intelligence agencies have unfettered access to this data, which violates his fundamental rights under the regional charter.
The Accessibility Issue: Simultaneously, a disability rights group sues the Varian government because the online portal for reporting hate speech under the CSA is completely incompatible with screen readers, preventing blind users from filing reports.
Questions
1. Privatized Enforcement and International Hard Law
Analyze the "Civility and Safety Act" (CSA) using the concept of "Privatized Enforcement" (Section 4).
How does the threat of massive fines under the CSA create a conflict with the ICCPR's Article 19 (Section 1) regarding freedom of expression?
Specifically, how does this structure incentivize VidShare to engage in "collateral censorship," effectively bypassing the judicial "necessity and proportionality" tests required by international treaties?
2. Cross-Border Data Flows and the "Schrems II" Standard
Leo’s complaint mirrors the logic of the Schrems II judgment (Section 2).
Explain the legal conflict between US surveillance laws (like FISA 702/CLOUD Act) and regional data protection rights (like the GDPR/Charter of Fundamental Rights).
Why would a court likely find that VidShare cannot legally transfer data to the US in this scenario, even if VidShare promises to keep it safe? (Focus on the "essentially equivalent" protection standard).
3. Private Ordering vs. Soft Law Standards
VidShare defends its removal of the activists' videos by citing its Terms of Service (Contracts of Adhesion) and "Code is Law" rights (Section 5).
Counter this defense using soft law sources from Section 3, specifically the UN Guiding Principles on Business and Human Rights (Ruggie Principles) and the Santa Clara Principles.
How do these soft law frameworks argue that VidShare has a responsibility to provide due process (notice and appeal), despite its ToS granting it absolute discretion?
References
50 U.S.C. § 1881a. (2008). Foreign Intelligence Surveillance Act of 1978 Amendments Act of 2008.
African Commission on Human and Peoples' Rights. (2019). Declaration of Principles on Freedom of Expression and Access to Information in Africa.
African Union. (2014). African Union Convention on Cyber Security and Personal Data Protection.
Association of Southeast Asian Nations. (2012). ASEAN Human Rights Declaration.
Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
Cafaggi, F. (2006). Reframing Self-Regulation in European Private Law. Kluwer Law International.
CEDAW Committee. (2017). General recommendation No. 35 on gender-based violence against women, updating general recommendation No. 19.
Christchurch Call. (2019). Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online.
Committee on the Rights of the Child. (2021). General comment No. 25 (2021) on children’s rights in relation to the digital environment.
Constitution of the Portuguese Republic. (1976). Article 35: Use of Informatics.
Council of Europe. (2001). Convention on Cybercrime (ETS No. 185).
Council of Europe. (2018). Modernised Convention for the Protection of Individuals with Regard to the Processing of Personal Data (Convention 108+).
Court of Justice of the European Union. (2020). Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems (Schrems II). Case C-311/18.
DeNardis, L. (2014). The Global War for Internet Governance. Yale University Press.
European Commission. (2018). Code of Practice on Disinformation.
European Court of Human Rights. (2015). Delfi AS v. Estonia. Application no. 64569/09.
European Union. (2012). Charter of Fundamental Rights of the European Union.
European Union. (2016). General Data Protection Regulation (GDPR). Regulation (EU) 2016/679.
European Union. (2019). Directive on Copyright in the Digital Single Market. Directive (EU) 2019/790.
European Union. (2022). Regulation on a Single Market For Digital Services (Digital Services Act).
Geradin, D., & Katsifis, D. (2021). The Antitrust Case Against the Apple App Store. Journal of Competition Law & Economics.
German Federal Ministry of Justice. (2017). Network Enforcement Act (NetzDG).
Global Network Initiative. (2017). GNI Principles on Freedom of Expression and Privacy.
Helfer, L. R., & Slaughter, A. M. (1997). Toward a Theory of Effective Supranational Adjudication. Yale Law Journal, 107(2).
Human Rights Committee. (2011). General comment No. 34: Article 19: Freedoms of opinion and expression. CCPR/C/GC/34.
Internet Engineering Task Force. (2018). The TLS Protocol Version 1.3. RFC 8446.
Kaye, D. (2015). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. A/HRC/29/32.
Khan, L. (2017). Amazon's Antitrust Paradox. Yale Law Journal, 126(3).
Klonick, K. (2018). The New Governors: The People, Rules, and Processes Governing Online Speech. Harvard Law Review, 131, 1598.
Lessig, L. (1999). Code and Other Laws of Cyberspace. Basic Books.
Manila Principles. (2015). Manila Principles on Intermediary Liability.
Meta Oversight Board. (2021). Case 2021-001-FB-FBR (Trump Ban).
Milanovic, M. (2011). Extraterritorial Application of Human Rights Treaties. Oxford University Press.
OECD. (2013). The OECD Privacy Framework.
Organization of American States. (2011). Joint Declaration on Freedom of Expression and the Internet.
Republic of Singapore. (2019). Protection from Online Falsehoods and Manipulation Act.
Santa Clara Principles. (2018). The Santa Clara Principles on Transparency and Accountability in Content Moderation.
Schmitt, M. (Ed.). (2017). Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations. Cambridge University Press.
Soto, E. (2019). The Brussels Effect: The Rise of a Global Regulatory Regime. University of Miami International and Comparative Law Review.
Telecom Regulatory Authority of India. (2016). Prohibition of Discriminatory Tariffs for Data Services Regulations.
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence.
United Nations. (1969). Vienna Convention on the Law of Treaties.
United Nations. (2006). Convention on the Rights of Persons with Disabilities.
United Nations General Assembly. (1965). International Convention on the Elimination of All Forms of Racial Discrimination.
United Nations General Assembly. (1966). International Covenant on Civil and Political Rights.
United Nations General Assembly. (2013). The right to privacy in the digital age. Resolution 68/167.
United Nations General Assembly. (2019). Countering the use of information and communications technologies for criminal purposes. Resolution 74/247.
United Nations Human Rights Council. (2012). The promotion, protection and enjoyment of human rights on the Internet. Resolution A/HRC/RES/20/8.
United States Congress. (2018). Clarifying Lawful Overseas Use of Data Act (CLOUD Act).
Yeung, K. (2018). Algorithmic Regulation: A Critical Interrogation. Regulation & Governance, 12(4).
Zittrain, J. (2003). Internet Points of Control. Boston College Law Review, 44(2).
3
Subjects of digital rights and human rights obligations
2
2
10
14
Lecture text
Section 1: The State as the Primary Duty-Bearer in Cyberspace
The traditional framework of international human rights law places the state at the center of legal obligations, establishing it as the primary duty-bearer responsible for the protection and realization of human rights. This state-centric model, derived from the Westphalian system of sovereignty, remains the foundational legal theory even in the digital age. Under treaties like the International Covenant on Civil and Political Rights (ICCPR), the state is not merely a passive observer but an active guarantor of rights within its jurisdiction. This jurisdiction encompasses the digital infrastructure located within its territory and the individuals subject to its laws. Consequently, the state is legally accountable for ensuring that its domestic laws and practices regarding the internet align with its international commitments (United Nations General Assembly, 1966).
A critical distinction in understanding the state's role is the difference between negative and positive obligations. Negative obligations require the state to refrain from interfering with the enjoyment of rights. In the context of cyberspace, this means the state must not engage in arbitrary censorship, unlawful surveillance, or the shutting down of internet services. The United Nations Human Rights Council has repeatedly affirmed that the same rights that people have offline must also be protected online, thereby extending the state's negative obligation of non-interference to digital communications (United Nations Human Rights Council, 2012).
However, the digital environment increasingly demands the exercise of positive obligations. Positive obligations require the state to take active measures to protect individuals from human rights violations committed by third parties, including private corporations and other individuals. This doctrine, known as the "horizontal effect" or Drittwirkung, is essential in cyberspace where private platforms own the public square. The state must enact and enforce legislation that prevents data abuse by tech giants, protects children from online predators, and ensures that private internet service providers do not discriminate against certain types of content (Knox, 2008).
The state also acts as the primary regulator of the physical layer of the internet. While the internet is often described as a "cloud," it relies on a tangible infrastructure of submarine cables, data centers, and spectrum allocations that are subject to state control. By licensing telecommunications operators and managing internet exchange points (IXPs), the state exerts profound influence over the availability and quality of digital rights. This regulatory power creates a duty for the state to ensure that infrastructure management is conducted transparently and does not become a tool for digital exclusion or political control (DeNardis, 2014).
Furthermore, the state is a significant consumer and collector of digital data, acting as a "super-user." Through e-governance initiatives, biometric identification systems, and smart city projects, states are digitizing the relationship between the citizen and the government. This transformation imposes new obligations on the state to secure the vast amounts of personal data it collects. A breach of a government database is not just a technical failure but a violation of the state's human rights obligation to protect the privacy and security of its citizens (Office of the United Nations High Commissioner for Human Rights [OHCHR], 2018).
The issue of jurisdiction and extraterritoriality complicates the state's role as a subject of obligations. The internet is borderless, but state authority is territorially bounded. Legal debates currently focus on whether a state's human rights obligations extend beyond its borders when it exercises "effective control" over digital communications. For example, if a state conducts mass surveillance on foreign nationals using its trans-oceanic cables, human rights bodies are increasingly arguing that the state has jurisdiction and thus owes human rights obligations to those foreign individuals (Milanovic, 2011).
National security is often invoked by states to justify derogations from their digital rights obligations. Article 4 of the ICCPR allows for derogation in times of public emergency, but these measures must be strictly necessary and proportionate. In the digital realm, states often abuse this provision to justify indefinite internet shutdowns or mass surveillance programs. The challenge for international law is to define the strict limits of the state's security powers to prevent the exception from swallowing the rule, ensuring that "national security" does not become a blanket immunity for rights violations (Scheinin, 2013).
The state is also a potential perpetrator of cyber-operations and cyber-warfare, which raises complex questions of state responsibility. The Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations clarifies that states are responsible for cyber acts that are attributable to them and constitute a breach of international obligation. This means a state cannot hide behind proxy groups or "patriotic hackers" to conduct cyber-attacks on civilian infrastructure; if the state directs or controls these actors, it bears full legal responsibility for the resulting human rights harms (Schmitt, 2017).
Moreover, the principle of due diligence obliges states not to allow their territory to be used for acts contrary to the rights of other states. In cyberspace, this translates to a duty to prevent non-state actors (like cybercriminal gangs) from operating within their borders to harm others. If a state knowingly allows its servers to be used for a massive botnet attack or the hosting of child sexual abuse material, it fails in its due diligence obligation, making the state itself a subject of international liability (International Court of Justice, 1949).
The state's role in digital education constitutes another aspect of its positive obligations. As digital literacy becomes a prerequisite for participation in modern life, the state must ensure that its education system equips citizens with the skills to navigate the digital world safely. This includes understanding privacy settings, recognizing disinformation, and knowing one's rights online. Failure to provide this education creates a vulnerability gap that compromises the effective enjoyment of digital rights (UNESCO, 2018).
In authoritarian regimes, the state often inverts its role, becoming the primary threat to digital rights rather than their protector. This phenomenon of "digital authoritarianism" employs the tools of the state—legislation, police power, and intelligence agencies—to suppress dissent online. International human rights law refuses to recognize this inversion as legitimate, maintaining that the state remains the duty-bearer even when it acts as the violator, and creating a basis for international condemnation and accountability mechanisms (Polyus, 2021).
Finally, the state is the primary entity with the capacity to provide remedies. Article 2(3) of the ICCPR guarantees an effective remedy for any person whose rights are violated. In the digital context, this means the state must provide independent courts and regulatory bodies capable of investigating cybercrimes, adjudicating data privacy claims, and ordering redress from powerful tech companies. Without a state apparatus to enforce them, digital rights remain theoretical concepts rather than actionable legal realities.
Section 2: The Individual as the Digital Rights Holder
The primary subject of digital rights is the individual natural person. In human rights law, rights inhere in the human being by virtue of their humanity, not their technology. Therefore, every individual who uses the internet, or whose life is affected by digital systems, is a "data subject" and a rights holder. This status grants them the standing to claim protection for their privacy, freedom of expression, and access to information against both the state and, increasingly, private actors. The transition from "citizen" to "user" in digital discourse should not obscure the fact that the legal subject remains the human being with inalienable rights (Donnelly, 2013).
A central component of the individual's status in cyberspace is the concept of "digital identity." Unlike physical identity, which is singular and biological, digital identity can be multiple, fluid, and constructed. Individuals have the right to define their own digital persona, including the right to anonymity and pseudonymity. The UN Special Rapporteur on Freedom of Expression has identified anonymity as a critical enabler of rights, protecting individuals from retaliation for their unpopular or dissenting views. Thus, the "subject" in cyberspace has a protected interest in concealing their physical identity to preserve their digital agency (Kaye, 2015).
However, the individual is also the "data subject," a legal term codified in the General Data Protection Regulation (GDPR) to describe an identified or identifiable natural person. This definition links the digital representation (data) back to the human subject. It grants the individual specific powers of control, such as the right to access, rectify, and erase their data. This legal construction serves to re-empower the individual in the face of massive data processing systems, asserting that the human subject retains ownership and sovereignty over their informational self (European Union, 2016).
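For illustration only, the data subject's control rights can be expressed as concrete operations that a controller's system must be able to perform (the class and method names below are hypothetical; the Regulation prescribes outcomes, not any particular software design):

    # Illustrative sketch: GDPR-style access, rectification, and erasure expressed
    # as operations on a controller's record store. Hypothetical design, not a
    # statement of what the Regulation technically mandates.
    from dataclasses import dataclass, field

    @dataclass
    class DataSubjectRecord:
        subject_id: str
        attributes: dict = field(default_factory=dict)

    class ControllerStore:
        def __init__(self) -> None:
            self._records: dict[str, DataSubjectRecord] = {}

        def access(self, subject_id: str) -> dict:
            """Article 15-style access: return a copy of everything held on the subject."""
            record = self._records.get(subject_id)
            return dict(record.attributes) if record else {}

        def rectify(self, subject_id: str, key: str, value) -> None:
            """Article 16-style rectification of an inaccurate attribute."""
            record = self._records.setdefault(subject_id, DataSubjectRecord(subject_id))
            record.attributes[key] = value

        def erase(self, subject_id: str) -> None:
            """Article 17-style erasure ("right to be forgotten"), subject to legal exceptions."""
            self._records.pop(subject_id, None)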
Children represent a specific category of rights holders with specialized protections. The Convention on the Rights of the Child (CRC) applies fully to the digital world. Because children lack the developmental maturity to fully understand digital risks, they are subjects of "special protection." This does not mean they are passive objects of care; General Comment No. 25 emphasizes that children also have agency and rights to participation and association online. The legal framework must therefore balance protection from harm (like cyberbullying and grooming) with the child’s evolving capacity for autonomy (Committee on the Rights of the Child, 2021).
Vulnerable groups, including human rights defenders, journalists, and political dissidents, are hyper-exposed subjects in the digital realm. For these individuals, digital rights are often a matter of life and death. They are the primary targets of state surveillance, spyware (such as Pegasus), and online harassment campaigns. International human rights mechanisms recognize their specific status and demand heightened protections. Their status as high-risk subjects highlights the unequal distribution of digital threats and the need for targeted security measures (Front Line Defenders, 2021).
Women and gender-diverse individuals also occupy a distinct position as subjects of digital rights due to the prevalence of online gender-based violence. The digital sphere often replicates and amplifies offline patriarchal structures. As subjects, women frequently face doxxing, non-consensual distribution of intimate images, and misogynistic hate speech designed to drive them out of public discourse. Recognizing the gendered nature of the digital subject is essential for crafting laws that address the specific harms women face, which are often dismissed by "neutral" legal standards (UN Women, 2018).
Persons with disabilities are often overlooked subjects in the design of digital spaces. The Convention on the Rights of Persons with Disabilities (CRPD) establishes their right to accessibility. When websites and apps are not designed with screen readers or other assistive technologies in mind, these individuals are effectively stripped of their status as active subjects, becoming excluded from the digital society. The legal obligation of "reasonable accommodation" affirms their right to be full participants in the digital world on an equal basis with others (United Nations, 2006).
The distinction between the "consumer" and the "citizen" is vital in defining the individual subject. Consumer protection law treats the individual as a market actor, focused on fair prices and contract terms. Human rights law treats the individual as a citizen, focused on dignity and freedom. In the digital age, these roles blur, as social media users are effectively "paying" with their data. However, elevating the subject status from consumer to citizen is crucial because human rights cannot be waived in a Terms of Service agreement, whereas consumer preferences can be traded (Cohen, 2019).
The "Right to be Forgotten" introduces the temporal dimension of the digital subject. It recognizes that an individual's identity is not static and should not be permanently defined by a past action recorded by a search engine. This right allows the subject to curate their digital history to some extent, preventing their past from indefinitely mortgaging their future. It asserts that the human need for rehabilitation and personal growth overrides the machine's capacity for infinite memory (Google Spain v. AEPD, 2014).
Conversely, the concept of "digital legacy" deals with the subject after death. What rights does a deceased person have over their digital assets and communications? Legal systems are currently grappling with whether the privacy of the subject survives their physical death or if their digital estate becomes property to be inherited. This creates a new category of "post-mortem subjects," requiring legal frameworks to manage the privacy and dignity of the deceased against the curiosity or financial interests of the living (Harbinja, 2017).
The "quantified self" is a new form of subjecthood emerging from the Internet of Things (IoT) and wearable technology. Here, the individual's biological processes—heart rate, sleep patterns, steps—are converted into data streams. This turns the biological body itself into a digital subject of surveillance and analysis. The legal challenge is to protect the integrity of the body when it is constantly transmitting data, preventing insurance companies or employers from discriminating against the subject based on their biological data (Lupton, 2016).
Finally, the individual subject is increasingly a "transparent citizen" to the state and corporations, while those entities remain opaque to the individual. This asymmetry of information fundamentally alters the power dynamic of subjecthood. The fight for digital rights is essentially a struggle to restore the privacy of the individual subject while increasing the transparency of the institutional subjects, reversing the current panoptic arrangement.
Section 3: Corporations and Private Platforms as Non-State Actors
In the digital ecosystem, private corporations—particularly the "Big Tech" giants like Google, Meta, Amazon, Apple, and Microsoft—have emerged as powerful subjects with immense influence over human rights. While traditional international law views states as the primary duty-bearers, the sheer scale and control these companies exercise over the global public sphere have forced a re-evaluation of their status. They are often described as "quasi-sovereigns" because they regulate speech, commerce, and association for billions of people, effectively performing functions traditionally reserved for the state (Klonick, 2018).
The primary framework for addressing the obligations of these non-state actors is the United Nations Guiding Principles on Business and Human Rights (UNGPs). These principles establish that while states have a duty to protect human rights, business enterprises have a responsibility to respect human rights. This responsibility means companies must avoid causing or contributing to adverse human rights impacts through their own activities and address such impacts when they occur. This elevates corporations from mere economic actors to subjects with distinct social and ethical responsibilities in the international legal order (OHCHR, 2011).
Corporate "due diligence" is the operational mechanism of this responsibility. Tech companies are expected to conduct human rights impact assessments (HRIAs) before launching new products or entering new markets. For example, before introducing a facial recognition tool or entering a market with a repressive regime, a company must assess how its technology could be used to violate rights. Failure to conduct this due diligence can make the corporation complicit in subsequent abuses, creating a basis for moral and potentially legal liability (B-Tech Project, 2020).
Internet intermediaries act as "gatekeepers" of information, a role that gives them the power to grant or deny access to the digital world. Their Terms of Service (ToS) and Community Guidelines function as the de facto law of the platform. Unlike state laws, which are subject to constitutional review, these private rules are drafted by corporate lawyers and enforced by opaque algorithms. This privatization of governance makes the corporation a legislative subject, creating rules that supersede national laws in practice, if not in theory (Gillespie, 2018).
The concept of "surveillance capitalism" defines the economic logic of these corporate subjects. Their business model relies on the extraction and commodification of human experience. By treating user behavior as free raw material for prediction products, these corporations fundamentally alter the relationship between the subject (user) and the economy. This creates an inherent conflict between the corporate imperative to maximize data collection (profit) and the user's right to privacy, positioning the corporation as a subject whose interests are structurally opposed to traditional privacy rights (Zuboff, 2019).
Intermediary liability regimes, such as Section 230 of the Communications Decency Act in the US, historically provided these corporations with broad immunity for content posted by users. This legal shield treated platforms as passive conduits rather than active publishers. However, as platforms have begun to heavily curate content using algorithmic recommendation systems, the argument for their "neutrality" has collapsed. Policymakers are now treating these corporations as responsible subjects who must take active measures to mitigate systemic risks like disinformation and hate speech (Kosseff, 2019).
Corporations also act as proxies for state surveillance. In many jurisdictions, the state relies on the data collection capabilities of private companies to conduct intelligence operations. Through subpoenas, warrants, or sometimes extra-legal pressure, companies are compelled to hand over user data. This entanglement makes the corporation a critical node in the state's surveillance apparatus. The "transparency reports" issued by these companies reveal the tension they face as subjects caught between their users' trust and state coercion (Parsons, 2019).
The "tech ambassador" phenomenon illustrates the geopolitical power of these corporate subjects. Countries like Denmark have appointed ambassadors specifically to interact with Silicon Valley companies, recognizing them as entities with diplomatic standing comparable to nation-states. This formalizes the status of Big Tech companies as subjects of international relations, negotiating treaties and agreements directly with sovereign governments regarding taxation, data flows, and security (Copenhagen Tech Policy Committee, 2020).
Content moderators, the human workforce employed by these corporations, are often the hidden subjects of digital trauma. To keep platforms "safe," thousands of workers review graphic violence, child exploitation, and hate speech daily. The mental health toll on these workers is a significant labor rights issue within the digital supply chain. The corporate responsibility to respect rights extends to their own workforce, requiring them to provide psychological support and fair working conditions for the people who maintain the digital ecosystem (Roberts, 2019).
The antitrust and competition dimension highlights the corporation as a market-dominating subject. When a single company controls the operating system (e.g., Android/iOS), the search engine, and the app store, it possesses "bottleneck power." This dominance allows the corporation to pick winners and losers in the digital economy, potentially stifling innovation and limiting consumer choice. Human rights advocates increasingly view breaking up these monopolies as a necessary step to decentralizing power and protecting the diversity of the digital public sphere (Khan, 2017).
Data brokers are a shadowy category of corporate subjects that operate largely outside of public scrutiny. These companies aggregate data from various sources to build detailed profiles of individuals, which are then sold to advertisers, insurers, and even law enforcement. Unlike consumer-facing platforms, data brokers have no direct relationship with the individual, making it difficult for the data subject to exercise their rights. Regulating these "third-party" subjects is a major challenge for current privacy frameworks (Federal Trade Commission, 2014).
Finally, the shift toward "Corporate Digital Responsibility" (CDR) suggests a voluntary evolution of the corporate subject. Some companies are moving beyond compliance to actively champion digital rights, adopting encryption by default or fighting gag orders in court. This differentiation strategy attempts to position the corporation as a defender of rights, acknowledging that in a trust-based economy, ethical conduct is a competitive advantage.
Section 4: Artificial Intelligence, Algorithms, and Non-Human Subjects
The rise of Artificial Intelligence (AI) and autonomous systems introduces the question of non-human subjects in the digital rights framework. Currently, legal systems do not recognize AI as a legal person or a subject of rights and obligations. An algorithm cannot sue or be sued, nor can it be sent to prison. However, these non-human entities possess "agency" in a functional sense—they make decisions, execute actions, and cause harm without direct human intervention. This creates a "responsibility gap" where the link between the human creator and the machine's action becomes tenuous (Matthias, 2004).
Algorithms act as "proxies" for human decision-making, often inheriting the biases of their creators or the datasets they are trained on. When an algorithm denies a loan, flags a passenger as a security risk, or rejects a job application, it is performing an act that affects human rights. Because the "subject" making the decision is a mathematical model, it is difficult to interrogate its motives or logic. This "black box" problem challenges the human right to due process and explanation, as the subject of the decision cannot understand or contest the reasoning of the automated judge (Pasquale, 2015).
The European Union’s proposed AI Act attempts to regulate these systems by categorizing them based on risk. It treats high-risk AI systems (like those used in policing or employment) as subjects of strict compliance regimes. While the AI itself is not the legal subject, the "provider" and "user" of the AI are burdened with heavy obligations. This approach maintains the human-centric view of law: the machine is a dangerous tool, and the human operator is the responsible subject who must ensure its safety (European Commission, 2021).
The debate over "electronic personhood" remains a theoretical frontier. Some legal scholars argue that as AI becomes more autonomous, it may be necessary to grant it a limited form of legal personality, similar to a corporation. This would allow an AI to hold insurance or assets to pay for damages it causes (e.g., a self-driving car accident). Critics argue this would merely shield the humans and corporations behind the AI from liability, allowing them to offload responsibility onto a digital scapegoat. Currently, the consensus in human rights law firmly rejects granting rights to AI, emphasizing that rights belong to humans (Solis, 2019).
Automated decision-making (ADM) systems are increasingly the subjects of regulation under data protection laws. Article 22 of the GDPR grants individuals the right not to be subject to a decision based solely on automated processing. This provision creates a right to "human intervention," essentially demanding that a human subject be inserted into the loop to validate the machine's output. This reinforces the principle that significant judgments about human lives must ultimately remain the province of human subjects (Wachter et al., 2017).
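To make this safeguard concrete, the following Python sketch (all names, scores, and thresholds are invented for illustration) shows one way a decision pipeline can avoid producing an adverse outcome "based solely on automated processing": clear approvals may be automated, while rejections and contested cases are routed to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str              # "approve" or "refer_to_human"
    reviewed_by_human: bool

def decide_loan(score: float, applicant_contests: bool = False) -> Decision:
    """Hypothetical pipeline: automation may grant, but adverse or contested
    outcomes are escalated so the final judgment is not solely automated."""
    if applicant_contests:
        return Decision("refer_to_human", reviewed_by_human=True)
    if score >= 0.8:          # clearly within policy: automation may approve
        return Decision("approve", reviewed_by_human=False)
    # A purely automated rejection would be a decision "based solely on
    # automated processing"; route it to a person instead.
    return Decision("refer_to_human", reviewed_by_human=True)

print(decide_loan(0.91))      # automated approval
print(decide_loan(0.45))      # escalated to a human reviewer
```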
Facial recognition technology transforms the public space into a readable text, treating every human face as a trackable object. When coupled with AI, surveillance cameras become active subjects that search, identify, and catalogue populations. This capability fundamentally alters the nature of public anonymity. The "ban" movements in various cities reflect a rejection of this technology as an illegitimate subject of municipal governance, arguing that the risk to civil liberties outweighs any security benefit (Garvie et al., 2016).
Autonomous Weapons Systems (AWS), or "killer robots," represent the most extreme example of non-human subjects. These are systems that can select and engage targets without human intervention. The campaign to ban AWS argues that the decision to take a human life must never be delegated to an algorithm. International humanitarian law requires a "responsible commander" to be accountable for attacks. If the subject pulling the trigger is a line of code, the concept of moral and legal responsibility in warfare collapses (Human Rights Watch, 2012).
Bots and "inauthentic behavior" on social media create a population of synthetic subjects. These automated accounts can inflate the popularity of ideas, harass individuals, and distort public discourse. They mimic human subjects to manipulate the "marketplace of ideas." Platforms now distinguish between "bad bots" (disinformation networks) and "good bots" (automated news feeds), but the difficulty in distinguishing a bot from a human user complicates the verification of the digital subject (Ferrara et al., 2016).
The Internet of Things (IoT) populates the human environment with sensing subjects. Smart fridges, assistants (Alexa/Siri), and thermostats are constantly "listening" and "watching." These devices possess a limited agency to collect data and act on the environment. They transform the home from a private sanctuary into a node in the global information network. The legal challenge is to ensure that these non-human subjects serve the human owner rather than the corporate manufacturer (Tanczer et al., 2019).
Generative AI (like ChatGPT) introduces a new layer of complexity by creating content. When an AI generates text, images, or code, questions of copyright and authorship arise. Is the AI the author? Is the prompter? Currently, copyright offices generally refuse to register works created by non-human subjects. This denies the machine the status of a "creator," reinforcing the human-centric definition of creativity and intellectual property (U.S. Copyright Office, 2023).
Neuro-technology and Brain-Computer Interfaces (BCIs) threaten to dissolve the boundary between the human subject and the machine. If a device is directly connected to the brain to augment intelligence or restore motor function, where does the human subject end and the device begin? "Neuro-rights" advocates call for new protections for "mental privacy" and "cognitive liberty," fearing that external algorithms could eventually "hack" the subjective experience of the mind itself (Ienca & Andorno, 2017).
Ultimately, the discussion of non-human subjects serves to highlight the uniqueness of human dignity. The entire human rights framework is predicated on the moral worth of the human being. While machines can calculate, process, and execute, they cannot feel, suffer, or possess dignity. Therefore, the governance of AI and algorithms is not about giving them rights, but about strictly controlling them to prevent them from violating the rights of the only true subjects: human beings.
Section 5: Civil Society, Technical Communities, and Multi-Stakeholderism
The governance of the internet is unique in international relations because it relies on a "multi-stakeholder" model. In this model, civil society organizations (CSOs), the technical community, and academia are not just observers but active subjects with decision-making power alongside states and corporations. This deviates from the traditional multilateral model where only states have a seat at the table. The World Summit on the Information Society (WSIS) affirmed this inclusive approach, recognizing that the complexity of the internet requires the expertise and vigilance of all sectors of society (WSIS, 2005).
Civil society acts as the "watchdog" subject in the digital ecosystem. Organizations like the Electronic Frontier Foundation (EFF), Access Now, and Amnesty International play a critical role in monitoring violations, documenting internet shutdowns, and exposing surveillance scandals. Without these non-state subjects, many digital rights violations would remain invisible. They provide the evidence base used by UN bodies and courts to hold states and companies accountable. Their status as independent subjects allows them to critique power without the diplomatic constraints that bind states (Franklin, 2013).
Strategic litigation is a primary tool used by civil society to shape the law. By bringing cases to court (such as the Schrems litigation that followed the Snowden revelations), these organizations force judicial review of digital practices. In these scenarios, the CSO acts as a legal subject representing the collective public interest. They translate technical grievances into human rights arguments, effectively writing the jurisprudence of the digital age through their advocacy. This role makes them essential co-creators of digital rights norms (Privacy International, 2019).
The "technical community"—comprising bodies like the Internet Engineering Task Force (IETF), the World Wide Web Consortium (W3C), and ICANN—exercises a unique form of power. These groups design the protocols and standards that define how the internet works. Though they often claim to be "apolitical" engineers, their decisions about encryption standards or IP addressing have profound human rights implications. They are subjects of "governance by infrastructure," where their technical choices determine the possibilities of privacy and censorship for all users (Musiani, 2013).
ICANN (Internet Corporation for Assigned Names and Numbers) is a particularly powerful technical subject. It manages the Domain Name System (DNS), the address book of the internet. Its policies on who can register a domain name and when a domain can be seized are effectively global laws. ICANN operates as a private non-profit with a multi-stakeholder board, illustrating a new form of transnational subject that governs a critical global resource outside of traditional state structures (Mueller, 2010).
The academic and research community acts as the "epistemic subject," providing the knowledge and data necessary for evidence-based policy. Researchers who analyze algorithmic bias, measure network interference, or study the sociological impacts of social media are vital for understanding the digital environment. Their independence is crucial; however, they often face restricted access to data from corporate platforms. The push for "data access for researchers" (as seen in the EU Digital Services Act) is an effort to empower this community to fulfill its oversight role (Pasquale, 2015).
Whistleblowers occupy a precarious position as individual subjects who expose systemic wrongdoing from within powerful institutions. Figures like Edward Snowden, Chelsea Manning, and Frances Haugen have each shifted the global debate on digital rights. They are often prosecuted as criminals by the state but celebrated as human rights defenders by civil society. International law is slowly evolving to recognize them as protected subjects who serve the public's right to know, although their practical protection remains weak (United Nations Special Rapporteur, 2015).
Collective rights and communities are emerging as distinct subjects. Indigenous peoples, for example, are asserting "Indigenous Data Sovereignty." They argue that data collected about their people and resources belongs to the collective, not to the researchers or governments that collected it. This challenges the Western notion of the individual data subject, proposing that communities have group rights to control their digital representation and cultural knowledge in the cloud (Kukutai & Taylor, 2016).
The "Global South" acts as a geopolitical bloc/subject in internet governance debates. Countries in Africa, Asia, and Latin America often band together to challenge the dominance of US-based tech companies and the US government's historical control over internet resources. They advocate for a more equitable distribution of digital benefits and "digital sovereignty." This collective subjecthood highlights the inequalities in the current system, where the "rule-makers" are largely in the Global North and the "rule-takers" in the South (Datta, 2019).
Open Source communities act as collaborative subjects. Projects like Linux, Tor, or Signal are built by decentralized networks of volunteers. These communities produce "freedom-enhancing technology" that is not driven by profit. They operate on a gift economy and meritocracy. Their existence proves that the development of critical digital infrastructure does not require a corporate or state subject, but can emerge from the collective action of free individuals (Kelty, 2008).
The role of "standard-setting bodies" (like ISO or IEEE) extends beyond the purely technical. When they set standards for "Ethically Aligned Design" or information security, they are effectively creating soft law. These professional associations are subjects of ethical governance, imposing codes of conduct on engineers and developers. They attempt to embed human rights values into the professional identity of the technical workforce (IEEE, 2019).
Finally, the interaction between all these subjects—state, corporate, individual, and civil society—creates a dynamic "ecosystem of accountability." No single subject has absolute control. Digital rights are realized through the constant friction and negotiation between these actors. The future of digital human rights depends on maintaining the balance of power within this multi-stakeholder model, ensuring that the voice of the vulnerable individual is not drowned out by the amplified voices of the powerful state and the profitable corporation.
Questions
1. The "Super-User" State
Section 1 describes the state not only as a regulator but as a "consumer and collector" of digital data (a "super-user"). What specific human rights obligation arises from the state's digitization of citizen relationships through initiatives like e-governance and biometric systems?
2. Positive vs. Negative Obligations
Explain the distinction between the state's "negative obligations" and "positive obligations" in cyberspace. How does the concept of "horizontal effect" (Drittwirkung) apply to the state's duty regarding private platforms?
3. Jurisdiction and Extraterritoriality
According to Section 1, how does the concept of "effective control" over digital infrastructure (such as trans-oceanic cables) challenge the traditional Westphalian model of territorially bounded state jurisdiction?
4. The Fluidity of Digital Identity
In contrast to biological identity, how does Section 2 characterize "digital identity"? Why does the UN Special Rapporteur on Freedom of Expression consider anonymity a critical enabler of rights for the digital subject?
5. The "Right to be Forgotten"
How does the "Right to be Forgotten" address the temporal dimension of the digital subject? What conflict does it resolve between the individual's need for rehabilitation/growth and the nature of search engines?
6. Corporate Responsibility to Respect
Under the UN Guiding Principles on Business and Human Rights (UNGPs) described in Section 3, what is the specific distinction between the duties of the state and the responsibilities of business enterprises like "Big Tech"?
7. Surveillance Capitalism and the Subject
How does the concept of "surveillance capitalism" define the economic relationship between the corporate subject and the user? Why is this business model described as being in "inherent conflict" with traditional privacy rights?
8. The "Responsibility Gap" in AI
Section 4 discusses Artificial Intelligence and "non-human subjects." What is the "responsibility gap," and why does the text argue that current legal systems do not (and should not) grant "electronic personhood" to AI?
9. The Multi-Stakeholder Model
How does the "multi-stakeholder" model of internet governance differ from the traditional multilateral model of international relations? Who are the key "subjects" involved alongside the state in this framework?
10. Indigenous Data Sovereignty
How does the concept of "Indigenous Data Sovereignty" challenge the Western notion of the individual data subject, and what alternative rights framework does it propose for data collected about communities?
Cases
Case Study: The "Eco-ID" Initiative in the Republic of Orodruin
The State as "Super-User" and Regulator
The Republic of Orodruin, seeking to modernize its welfare system, launches the "Eco-ID" program. This mandatory digital identity system requires all citizens to submit biometric data (iris scans and fingerprints) to access essential services like healthcare and banking. The government argues this fulfills its positive obligation to ensure efficient service delivery. However, the Orodruin government outsources the storage and processing of this massive database to a private multinational corporation, "Palantir Tech." The contract is opaque; there is no public oversight regarding how the data is secured or who has access to it. Orodruin’s laws provide broad exemptions for "national security," allowing the Ministry of Interior to access the Eco-ID database without a warrant. This transforms the state into a "super-user" of data, digitizing the relationship between citizen and state while arguably failing its negative obligation to refrain from arbitrary interference with privacy.
Corporate Responsibility and the Non-Human Subject
Palantir Tech, acting as a "quasi-sovereign" in this context, does not merely store the data. Without explicit consent from the citizens, Palantir uses the Eco-ID biometric dataset to train its proprietary AI algorithms for a new "predictive policing" product. This reflects the model of "surveillance capitalism," where the citizens' biological data is treated as free raw material for value extraction. The AI, a "non-human subject" in function, begins flagging individuals as "high risk" for fraud based on opaque patterns. Members of a specific ethnic minority, the Dunlendings, find themselves disproportionately flagged and denied services. Because the decision is made by a "black box" algorithm, Palantir claims it cannot explain the logic, creating a "responsibility gap." Palantir argues it is merely a service provider respecting local laws, while human rights groups argue it has failed its corporate responsibility to respect human rights under the UN Guiding Principles (UNGPs) by ignoring due diligence on algorithmic bias.
The Vulnerable Subject and the Fight for Remedy
The crisis escalates when a whistleblower from Palantir leaks documents revealing that the biometric data of the Dunlendings was also sold to foreign data brokers. A collective of Dunlending activists sues both the Orodruin government and Palantir Tech. They assert "Indigenous Data Sovereignty," arguing that their biometric data belongs to their community and should not be commodified. However, they face a legal void: Orodruin’s courts rule that the AI (the "non-human subject") cannot be sued, and the state claims sovereign immunity under national security laws. Civil society organizations intervene, arguing that the state has failed its duty to provide an effective remedy (Article 2(3) ICCPR) and that the "horizontal effect" requires the state to hold Palantir accountable. The case becomes a landmark test of whether the multi-stakeholder model can protect vulnerable subjects when the state and the corporation act in concert to violate rights.
Questions
1. The State's Positive Obligations vs. Data Sovereignty
Analyze the Orodruin government's actions through the lens of Section 1. While the state claims the Eco-ID system fulfills a positive obligation to provide services, how does its role as a "super-user" and its failure to regulate Palantir Tech violate the doctrine of "horizontal effect" (Drittwirkung)?
Hint: Consider the state's duty to protect individuals from third-party (corporate) violations.
2. The Responsibility Gap and AI Accountability
Focusing on the "black box" algorithm used by Palantir Tech (Section 4), explain the "responsibility gap" facing the Dunlending minority.
Why does the lecture text argue that granting "electronic personhood" to the AI would likely fail to solve this problem, and where should the liability rightfully sit according to the "human-centric" view of law?
3. Corporate Duty and Surveillance Capitalism
Using Section 3, evaluate Palantir Tech's defense that it was "merely following local laws."
According to the UN Guiding Principles on Business and Human Rights (UNGPs), does Palantir’s responsibility to respect human rights depend on the state's laws?
How does the concept of "surveillance capitalism" explain the structural conflict between Palantir's business model (training AI on citizen data) and the rights of the Dunlending "data subjects"?
References
B-Tech Project. (2020). Foundational Paper: The UN Guiding Principles on Business and Human Rights and the Technology Sector. OHCHR.
Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press.
Committee on the Rights of the Child. (2021). General comment No. 25 on children’s rights in relation to the digital environment. UN Doc CRC/C/GC/25.
Copenhagen Tech Policy Committee. (2020). Tech Diplomacy: A New Frontier in Foreign Policy. Ministry of Foreign Affairs of Denmark.
Datta, A. (2019). Digital sovereignty and the Global South. Media, Culture & Society, 41(6).
DeNardis, L. (2014). The Global War for Internet Governance. Yale University Press.
Donnelly, J. (2013). Universal Human Rights in Theory and Practice. Cornell University Press.
European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). COM/2021/206 final.
European Union. (2016). General Data Protection Regulation (GDPR). Regulation (EU) 2016/679.
Federal Trade Commission. (2014). Data Brokers: A Call for Transparency and Accountability.
Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96-104.
Franklin, M. I. (2013). Digital Dilemmas: Power, Resistance, and the Internet. Oxford University Press.
Front Line Defenders. (2021). Global Analysis 2020.
Garvie, C., Bedoya, A., & Frankle, J. (2016). The Perpetual Line-Up: Unregulated Police Face Recognition in America. Georgetown Law Center on Privacy & Technology.
Gillespie, T. (2018). Custodians of the Internet. Yale University Press.
Google Spain SL v. Agencia Española de Protección de Datos (AEPD). (2014). Case C-131/12. Court of Justice of the European Union.
Harbinja, E. (2017). Post-mortem privacy 2.0: theory, law, and technology. International Review of Law, Computers & Technology, 31(1).
Human Rights Watch. (2012). Losing Humanity: The Case against Killer Robots.
Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(5).
IEEE. (2019). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems.
International Court of Justice. (1949). Corfu Channel Case (United Kingdom v. Albania).
Kaye, D. (2015). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. UN Doc A/HRC/29/32.
Kelty, C. M. (2008). Two Bits: The Cultural Significance of Free Software. Duke University Press.
Khan, L. (2017). Amazon's Antitrust Paradox. Yale Law Journal, 126(3).
Klonick, K. (2018). The New Governors. Harvard Law Review, 131, 1598.
Knox, J. H. (2008). Horizontal Human Rights Law. American Journal of International Law, 102(1).
Kosseff, J. (2019). The Twenty-Six Words That Created the Internet. Cornell University Press.
Kukutai, T., & Taylor, J. (Eds.). (2016). Indigenous Data Sovereignty: Toward an Agenda. ANU Press.
Lupton, D. (2016). The Quantified Self. Polity.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3).
Milanovic, M. (2011). Extraterritorial Application of Human Rights Treaties. Oxford University Press.
Mueller, M. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press.
Musiani, F. (2013). Governance by algorithms. Internet Policy Review, 2(3).
OHCHR. (2011). Guiding Principles on Business and Human Rights. United Nations.
OHCHR. (2018). The Right to Privacy in the Digital Age. A/HRC/39/29.
Parsons, C. (2019). The (In)effectiveness of Voluntarily Produced Transparency Reports. Business & Society, 58(1).
Pasquale, F. (2015). The Black Box Society. Harvard University Press.
Polyus. (2021). Digital Authoritarianism: The Rise of the Surveillance State. Stanford University Press.
Privacy International. (2019). A Guide to Litigating Identity Systems.
Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.
Scheinin, M. (2013). Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism. A/HRC/22/52.
Schmitt, M. (Ed.). (2017). Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations. Cambridge University Press.
Solis, M. (2019). Electronic Personhood: An Introduction. Artificial Intelligence and Law.
Tanczer, L., et al. (2019). The Internet of Things and Domestic Abuse. The Journal of Law and Society.
U.S. Copyright Office. (2023). Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence.
UN Women. (2018). Cyberviolence against Women and Girls: A World-Wide Wake-Up Call.
UNESCO. (2018). A Global Framework of Reference on Digital Literacy Skills.
United Nations. (2006). Convention on the Rights of Persons with Disabilities.
United Nations General Assembly. (1966). International Covenant on Civil and Political Rights.
United Nations Human Rights Council. (2012). Resolution 20/8. A/HRC/RES/20/8.
United Nations Special Rapporteur. (2015). Report on the protection of sources and whistleblowers. A/70/361.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2).
WSIS. (2005). Tunis Agenda for the Information Society.
Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
4
Objects of legal protection in digital space
2
2
10
14
Lecture text
Section 1: Personal Data as a Fundamental Legal Object
In the digital legal landscape, personal data has emerged as the primary object of protection, shifting from a mere administrative record to a fundamental projection of the human personality. Unlike physical objects which are rivalrous and tangible, personal data is non-rivalrous and ubiquitous, capable of being processed simultaneously by multiple actors in different jurisdictions. Legal theory has evolved to treat personal data not just as a commodity or property, but as an extension of the individual’s dignity. This concept, known as "informational self-determination," was pioneered by the German Federal Constitutional Court, establishing that the individual must have the authority to decide the disclosure and use of their personal data. Consequently, the legal object here is not the data bit itself, but the individual's autonomy over their digital representation (German Federal Constitutional Court, 1983).
The definition of this object is expansive. Under the General Data Protection Regulation (GDPR), "personal data" encompasses any information relating to an identified or identifiable natural person. This includes obvious identifiers like names and ID numbers, but also dynamic IP addresses, location data, and cookie identifiers. By broadening the scope of the protected object, the law acknowledges that in a big data environment, even seemingly innocuous fragments of information can be aggregated to single out an individual. This prevents data processors from circumventing the law by anonymizing single attributes while retaining the ability to re-identify the subject through other means (European Union, 2016).
"Sensitive data" or "special category data" constitutes a distinct, highly protected tier within this object class. This includes data revealing racial origin, political opinions, religious beliefs, biometric data, and health data. The legal regime places a "prohibition by default" on processing these objects, requiring stricter justifications such as explicit consent or vital public interest. This hierarchy recognizes that certain types of information, if mishandled, pose a risk not just to privacy but to the fundamental safety and non-discrimination rights of the individual. For example, the misuse of health data can lead to insurance discrimination, making the integrity of this object a matter of life chances (Article 29 Working Party, 2011).
Biometric data—fingerprints, facial geometry, and iris scans—represents a unique form of legal object because it is immutable. Unlike a password, which can be changed if stolen, biometric data is biologically linked to the physical body. Therefore, the compromise of biometric objects is catastrophic and permanent. Legal frameworks increasingly treat biometric templates as objects requiring the highest level of security encryption. The "uniqueness" of this object challenges traditional security paradigms, forcing a shift towards "cancelable biometrics" or tokenization where the raw biological data is never stored directly (Kindt, 2013).
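The idea that the raw biometric object should never be stored can be illustrated with a deliberately simplified Python sketch. Real biometric readings vary between captures, so production systems rely on dedicated template-protection schemes rather than plain hashing; the sketch below (all data invented) only shows the storage principle: keep a revocable token, not the iris.

```python
import hashlib
import secrets

def tokenize(template: bytes, salt: bytes) -> str:
    """Derive a revocable token; the raw template itself is never stored."""
    return hashlib.sha256(salt + template).hexdigest()

salt = secrets.token_bytes(16)
enrolled_token = tokenize(b"<iris template bytes>", salt)

# Verification recomputes the token. A breach of the database leaks tokens,
# not irises, and rotating the salt "cancels" every stored token at once.
print(tokenize(b"<iris template bytes>", salt) == enrolled_token)   # True
```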
Metadata, often described as "data about data," is another critical object of protection. Intelligence agencies have historically argued that metadata (who called whom, when, and for how long) is less sensitive than content. However, human rights jurisprudence, notably from the Court of Justice of the European Union (CJEU) in the Digital Rights Ireland case, has rejected this distinction. The Court found that metadata allows for the construction of a precise profile of an individual's private life. Therefore, metadata is legally elevated to the same status as the content of communications, protecting the "context" of human interaction as a vital legal interest (CJEU, 2014).
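The Court's reasoning about metadata can be shown in miniature. The Python sketch below (records entirely invented) aggregates nothing but who contacted whom and when, yet already yields a sensitive profile of the fictional subject.

```python
from collections import Counter
from datetime import datetime

# Toy call-metadata records: (caller, callee, timestamp) only — no content at all.
records = [
    ("alice", "oncology_clinic", datetime(2024, 3, 1, 9, 15)),
    ("alice", "oncology_clinic", datetime(2024, 3, 8, 9, 10)),
    ("alice", "insurance_broker", datetime(2024, 3, 8, 11, 0)),
    ("alice", "support_hotline", datetime(2024, 3, 9, 23, 40)),
]

# Even trivial aggregation sketches a sensitive picture of the caller's life.
contacts = Counter(callee for caller, callee, _ in records if caller == "alice")
late_night = [r for r in records if r[2].hour >= 22]

print(contacts.most_common())   # repeated contact with a medical provider
print(len(late_night))          # calls placed in distress hours
```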
The "Right to Portability" introduces a property-like dimension to personal data. It allows individuals to move their data object from one service provider to another (e.g., from Facebook to another social network). This treats the dataset as a movable asset belonging to the user, combating the "lock-in" effects of platform ecosystems. By making the object transferable, the law aims to foster competition and empower users, although it stops short of declaring full property rights over data, maintaining a rights-based approach rather than an ownership-based one (Wong & Henderson, 2019).
Genetic data is an object that transcends the individual, as it inherently implicates biological relatives. The legal protection of a genetic sequence involves a tension between the individual's right to know (or not know) and the family's shared interest. This makes genetic data a "shared" legal object, complicating consent models that assume a single owner. Legal scholars argue for a "familial" approach to genetic privacy, where the object of protection is the genetic integrity of the lineage rather than just the individual donor (Taylor, 2012).
The commercial value of personal data has led to debates about "data as property." Proponents argue that if data is an asset for companies, users should be able to sell it as a legal object. However, human rights purists argue that treating data as a tradable commodity undermines its status as a fundamental right. If privacy becomes a product, it becomes a luxury good available only to the rich. The prevailing legal consensus, particularly in Europe, rejects the commodification of the data object, insisting that fundamental rights are inalienable and cannot be sold (Purtova, 2017).
Anonymized data ostensibly falls outside the scope of protection, as the link to the individual is broken. However, computer science demonstrates that "perfect anonymization" is often a myth. High-dimensional datasets can almost always be de-anonymized. Therefore, the legal boundary of the protected object is porous. Regulators are increasingly focusing on "pseudonymized" data as an intermediate object that still requires protection, acknowledging that the potential for re-identification remains a latent risk inherent to the data itself (Ohm, 2010).
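A standard way to quantify this porousness is k-anonymity: counting how many records share the same combination of quasi-identifiers. The toy Python sketch below (invented data and field names) shows a release that is nominally anonymized yet contains a group of size one, i.e. a single person who can be picked out.

```python
from collections import Counter

# "Anonymized" records: names removed, but quasi-identifiers retained.
rows = [
    {"zip": "10115", "birth_year": 1958, "sex": "F", "diagnosis": "A"},
    {"zip": "10115", "birth_year": 1958, "sex": "F", "diagnosis": "B"},
    {"zip": "10117", "birth_year": 1991, "sex": "M", "diagnosis": "C"},  # unique combination
]

quasi_identifiers = ("zip", "birth_year", "sex")
groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)

k = min(groups.values())                     # k-anonymity of the release
singletons = [g for g, n in groups.items() if n == 1]
print(f"k = {k}")                            # k = 1: someone can be singled out
print("re-identifiable groups:", singletons)
```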
The "digital dead" create a new category of objects: the post-mortem data estate. What happens to emails, photos, and social media profiles after death? Legal systems are struggling to define whether these are assets to be inherited or personality rights that extinguish with the person. Some jurisdictions have enacted laws allowing for "legacy contacts" to manage this object, while others strictly prioritize the privacy of the deceased, effectively sealing the digital tomb. The object here is the "memory" of the deceased, balanced against the privacy of the living who communicated with them (Harbinja, 2017).
Algorithmically inferred data poses a challenge to the definition of the object. If a machine learning model infers that a user is pregnant or depressed based on their shopping habits, is this "inferred data" a protected object? The user never provided it, yet it exists and affects them. Legal frameworks are expanding to include "derived data" within the scope of protection to prevent circumvention of privacy rights through predictive analytics. The object of protection is thus the "informational privacy" of the individual, regardless of the source of the information (Wachter & Mittelstadt, 2019).
Finally, the territoriality of the data object is a subject of geopolitical conflict. "Data sovereignty" laws require that the data of citizens be stored physically within national borders. This treats data as a national resource, similar to oil or minerals. The object is subjected to "residency" requirements, clashing with the technical reality of the cloud where data is fragmented and distributed globally. This tension highlights the dual nature of personal data as both a transnational human right and a national sovereign asset.
Section 2: Intellectual Property and Digital Content
Digital content—encompassing software, text, images, music, and video—constitutes a massive category of legal objects protected primarily through Intellectual Property (IP) regimes. In the digital space, the traditional distinction between the "medium" and the "work" collapses. A digital book is not a physical object but a license to access a stream of code. Consequently, the primary legal object is the "intangible expression" fixed in a digital medium. International treaties like the WIPO Copyright Treaty (WCT) have adapted the Berne Convention to the digital age, recognizing computer programs and databases as literary works, thereby confirming code as a protected textual object (World Intellectual Property Organization, 1996).
The concept of "ownership" has largely been replaced by "licensing" for digital objects. When a user "buys" a song on iTunes or a game on Steam, they are not acquiring title to a good but a limited, revocable license to use the service. This shifts the legal status of the object from a piece of property (which can be resold under the First Sale Doctrine) to a service contract. This "end of ownership" is a significant shift in consumer rights, meaning the digital library is an ephemeral object that can vanish if the platform shuts down or the user is banned (Perzanowski & Schultz, 2016).
Digital Rights Management (DRM) represents a technological layer of protection fused with the legal object. DRM systems are "digital locks" that control how a work can be accessed, copied, or shared. Legal frameworks like the US Digital Millennium Copyright Act (DMCA) and the EU InfoSoc Directive provide legal protection to these technological measures, making it an offense to circumvent the lock even for fair use purposes. Here, the object of protection extends beyond the creative work to include the "access control mechanism" itself, criminalizing the act of digital tinkering (17 U.S.C. § 1201).
Software code is a dual-natured object, protected as both a literary work (copyright) and, in some jurisdictions, a functional invention (patent). This duality creates complex legal disputes. The "API copyright" debate (e.g., Google v. Oracle) centered on whether the functional interface of code—the declaring code that allows programs to talk to each other—is a protectable object. The US Supreme Court’s ruling that API reimplementation is "fair use" established that the functional aspects of software are distinct from the creative expression, preventing the monopolization of the fundamental building blocks of digital innovation (Supreme Court of the United States, 2021).
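The distinction at stake can be shown in miniature. The dispute in Google v. Oracle concerned Java declarations, but the same split between "declaring code" (the signature other programs depend on) and "implementing code" (one of many possible bodies behind it) can be sketched in Python:

```python
# Declaring code: the signature callers depend on (name, parameters, return type).
# Google v. Oracle concerned reimplementing such Java declarations so that
# programs written against the original API would keep running.
def max_of(a: int, b: int) -> int:
    """Return the larger of two integers."""
    # Implementing code: one of many possible, independently written bodies
    # sitting behind the identical declaration.
    return a if a >= b else b

assert max_of(3, 7) == 7
```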
Open Source Software (OSS) redefines the code object through the "copyleft" mechanism. Instead of restricting use, the license (e.g., GPL, MIT) grants freedom to study, change, and distribute the object, often on the condition that derivative works remain open. This creates a "commons" where the legal object is protected not for exclusion, but to ensure its perpetual availability to the public. The legal value of the OSS object lies in its collaborative potential rather than its proprietary exclusivity, challenging the dominant logic of IP law (Stallman, 2002).
User-Generated Content (UGC) turns millions of internet users into creators of protected objects. Every tweet, blog post, and TikTok video is technically a copyrighted work owned by the user. However, platform Terms of Service typically demand a non-exclusive, royalty-free, worldwide license to use this content. While the user retains the "copyright object," the platform extracts the "economic value," creating a disconnect between legal ownership and economic benefit. This vast reservoir of UGC is the fuel for the modern attention economy (Frosio, 2015).
Databases are protected as distinct objects, particularly in the EU, which offers a sui generis database right. This right protects the investment of time and money in compiling data, even if the individual data points are not creative (like a phone book). This creates a property-like right over "information structures." In the age of Big Data and AI training, these database rights are becoming critical legal objects, as they control who can mine the raw material necessary to train machine learning models (European Union, 1996).
The Public Domain is the "negative space" of IP protection—the realm of objects that are free for all to use. In the digital age, the digitization of public domain works (like scanning old books) has raised the question of whether the digital copy is a new protected object. The general legal consensus is that "slavish copies" of public domain works do not generate new copyright, ensuring that the digital heritage of humanity remains a free object. However, museums and archives often claim rights over high-resolution digital scans, creating "enclosure" of public domain objects (Europeana, 2011).
Non-Fungible Tokens (NFTs) attempt to reintroduce artificial scarcity to digital objects. An NFT is a blockchain-based certificate of authenticity linked to a digital file. Legally, purchasing an NFT usually conveys ownership of the token (the metadata), not the copyright of the underlying art. This distinction is crucial: the "object" bought is the receipt, not the work itself. Misunderstanding this distinction has led to rampant legal confusion and fraud. The legal value of the NFT object relies entirely on the social contract of the blockchain community rather than traditional property law (Guadamuz, 2021).
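The gap between the token and the work can be made explicit with a toy data structure (all values invented): a sale transfers the ownership field of the token record, while the copyright holder of the underlying file is untouched.

```python
# Toy illustration of what an NFT sale typically conveys (all values invented).
nft_token = {
    "token_id": 4711,
    "token_uri": "ipfs://invented-hash/metadata.json",  # points at the file
    "owner": "0xBuyerWallet",                           # what changes hands on sale
}
underlying_work = {
    "file": "artwork.png",
    "copyright_holder": "Original Artist",              # unchanged by the token sale
}

# The object bought is the receipt, not the work.
print(nft_token["owner"] != underlying_work["copyright_holder"])   # True
```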
Deepfakes and synthetic media challenge the integrity of the digital object. When AI can generate a hyper-realistic video of a person doing something they never did, the "authenticity" of the recording is compromised. Legal systems are exploring new personality rights to protect the "digital likeness" as an object. This would allow individuals to control the synthesis of their image and voice, treating their biometric identity as a protected property interest against unauthorized digital replication (Chesney & Citron, 2019).
Trade secrets in the digital age protect the "black box" of algorithms. The source code and weighting of the Google search algorithm or the Uber matching system are highly protected corporate secrets. Unlike patents, which are public, trade secrets are protected by secrecy. This creates a tension with transparency rights; when a secret algorithm denies a loan, the "trade secret object" clashes with the user's right to an explanation. Courts are increasingly asked to inspect these sealed objects to ensure they are not concealing illegal discrimination (Pasquale, 2015).
Finally, the "right to repair" movement views the digital device (and its software) as an object that the owner should have the right to modify. Manufacturers often use software locks (pairing parts to the motherboard) to prevent independent repair. The legal battle is over whether the consumer owns the device as a sovereign object or merely licenses a terminal for the manufacturer's services. Legislation in the EU and US is shifting to protect the "repairability" of the object as an environmental and consumer right (European Commission, 2020).
Section 3: Critical Information Infrastructure and Systems
The "object" of protection in cyberspace is not just data, but the infrastructure itself—the physical and logical layers that enable connectivity. "Critical Information Infrastructure" (CII) refers to systems whose incapacity or destruction would have a debilitating impact on national security, the economy, or public health. This includes power grids, banking networks, and hospital systems. International law, specifically the norms endorsed by the UN Group of Governmental Experts (GGE), establishes that these infrastructures are off-limits for cyber-attacks in peacetime. The legal object here is the "functionality" of societal lifelines (UN GGE, 2015).
Cybersecurity law defines the "integrity, availability, and confidentiality" (the CIA Triad) of information systems as the core protected interests. "Integrity" protects the object from unauthorized modification (e.g., changing bank balances). "Availability" protects against denial-of-service (DDoS) attacks that render the object inaccessible. "Confidentiality" protects against data breaches. Criminal laws, such as those derived from the Budapest Convention on Cybercrime, penalize the violation of these attributes, effectively treating the "security state" of the system as a legally protected good (Council of Europe, 2001).
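Of the three attributes, integrity is the easiest to illustrate in code. The minimal Python sketch below uses a SHA-256 fingerprint (a common technique, shown here on invented data) to detect that a stored record has been modified without authorization.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used as an integrity fingerprint for a stored record."""
    return hashlib.sha256(data).hexdigest()

record = b"account=12345;balance=1000.00"
stored_digest = fingerprint(record)            # kept alongside (or apart from) the record

tampered = b"account=12345;balance=9000.00"    # unauthorized modification
print(fingerprint(tampered) == stored_digest)  # False: the integrity violation is detected
```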
The Domain Name System (DNS) is a critical logical object. It translates human-readable names (e.g., google.com) into IP addresses. Control over the DNS root zone is a form of ultimate power over the internet's address book. While technically managed by ICANN, the integrity of the DNS is a global public interest. "DNSSEC" (Security Extensions) creates a chain of trust to protect this object from "cache poisoning" attacks. Legally, domain names are also treated as trademark objects, leading to disputes where intellectual property rights clash with the technical governance of the namespace (Mueller, 2002).
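What the DNS actually does can be seen in a single standard-library call. The Python sketch below requires network access, and the answers it prints depend on whichever resolver the machine is configured to use, which is precisely why control over resolution matters.

```python
import socket

# One resolution step: a human-readable name mapped to routable addresses.
# The answer depends on the resolver being asked, which is why control over
# (and tampering with) resolution is both a governance and a security issue.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443):
    print(family.name, sockaddr[0])
```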
Submarine cables constitute the physical backbone of the internet, carrying 99% of intercontinental data traffic. These cables, lying on the ocean floor, are vulnerable physical objects protected under the UN Convention on the Law of the Sea (UNCLOS). However, the legal protections are outdated, primarily focusing on accidental damage by fishing trawlers rather than intentional sabotage or military tapping. Modern legal discourse argues for classifying these cables as "global critical infrastructure," creating a stricter regime of protection for these vital arteries of global communication (Davenport, 2012).
Internet Exchange Points (IXPs) and data centers are the physical nodes where the internet "lives." These facilities are objects of intense physical and digital security. Legally, they are subject to jurisdiction based on their physical location. This leads to the "data residency" issue, where a data center in Frankfurt is subject to German law, even if the data belongs to a Brazilian company. The physical sovereignty over the server creates a legal anchor for the intangible data it holds, making the data center a strategic geopolitical object (Svantesson, 2017).
The "Internet of Things" (IoT) expands the scope of protected objects to include billions of connected devices—from pacemakers to connected cars. These objects are often insecure by design ("insecurity of things"). New regulations, such as the EU Cyber Resilience Act, aim to impose mandatory security requirements on these hardware objects before they can enter the market. The law is shifting from blaming the hacker to holding the manufacturer accountable for producing a "defective object" that endangers the digital ecosystem (European Commission, 2022).
Spectrum—the radio frequencies used for Wi-Fi and 5G—is a finite, state-managed natural resource. It is a "public object" allocated by governments to telecommunications operators. The fair allocation of spectrum is essential for the right to access the internet. Legal disputes often arise over "spectrum squatting" or the interference between different services. The transition to 5G makes this object even more critical, as it underpins the future automated economy, leading to "spectrum sovereignty" debates (ITU, 2020).
"Zero-Day Vulnerabilities" are a unique and controversial object. These are software flaws known to hackers but not the vendor. Governments and private brokers buy and sell these vulnerabilities in a grey market. They are weaponized objects used for espionage or law enforcement hacking. The debate centers on the "Vulnerabilities Equities Process," which determines whether the state should disclose the flaw to fix the object (protecting everyone) or hoard it for offensive use (leaving the object vulnerable). This treats the security flaw as a strategic asset (Egloff, 2019).
Botnets—networks of infected computers controlled by an attacker—are "weaponized infrastructures." The legal response involves "takedowns," where law enforcement and tech companies collaborate to seize control of the command-and-control servers. This involves a legal intervention against the infrastructure itself to dismantle the criminal network. The object of the legal action is the "zombie network," aiming to liberate the infected devices from the attacker's control (Shiryaev, 2012).
Cryptographic keys are the "keys to the kingdom" in digital space. They are the mathematical objects that unlock encrypted data. The "Crypto Wars" refer to the legal battle over whether the state should have access to these keys ("backdoors"). Human rights advocates argue that strong encryption keys are essential objects for the preservation of privacy and free speech. Any legal mandate to weaken these keys compromises the security of the entire digital infrastructure, effectively declaring that no digital object can be truly private (Abelson et al., 2015).
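The stakes of key possession can be illustrated with the widely used third-party `cryptography` package, assuming it is installed; this is a generic sketch of authenticated symmetric encryption, not any particular product's design. Whoever holds the key reads the message; anyone else gets nothing.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                   # whoever holds this can read the data
ciphertext = Fernet(key).encrypt(b"meeting the source at 09:00")

print(Fernet(key).decrypt(ciphertext))        # key holder: plaintext recovered

wrong_key = Fernet.generate_key()
try:
    Fernet(wrong_key).decrypt(ciphertext)     # everyone else: nothing
except InvalidToken:
    print("undecipherable without the key")
```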
Cloud computing changes the nature of the IT infrastructure object from a capital asset (owning servers) to an operational expense (renting utility). The "Shared Responsibility Model" defines the legal split: the provider protects the "cloud" (infrastructure), and the customer protects what is "in the cloud" (data). This contractual division of the object complicates liability when things go wrong—was it a failure of the Azure/AWS infrastructure, or a failure of the user's configuration? (Microsoft, 2021).
Finally, the "Public Core of the Internet" is a proposed norm to protect the fundamental protocols (TCP/IP, BGP) that make the internet work. The Global Commission on the Stability of Cyberspace has proposed that state and non-state actors should not conduct cyber-operations intended to disrupt the general availability or integrity of the public core. This attempts to establish the internet's logical functioning as a global common good, an object protected from the vicissitudes of cyber-warfare (GCSC, 2018).
Section 4: Reputation, Honor, and the Digital Persona
In the digital space, "reputation" is a quantifiable, persistent, and highly fragile object. Unlike in the physical village where gossip fades, digital reputation is recorded, indexed, and searchable forever. Legal systems protect reputation as a component of "personality rights." Defamation laws have been adapted to the online context, where a single tweet can destroy a career. The object of protection is the individual's "social standing" and "honor" against false factual assertions. However, the speed and anonymity of online speech make protecting this object incredibly difficult (Solove, 2007).
The "Right to be Forgotten" (RTBF), or the Right to Erasure, is the most significant legal innovation protecting the digital persona. Established by the CJEU in Google Spain v. AEPD, it allows individuals to request the de-listing of information that is inadequate, irrelevant, or excessive. The court ruled that the individual's privacy interest generally overrides the economic interest of the search engine and the public's interest in access, unless the figure is public. This treats the "search result" as a distinct object that can be severed from the index to protect the individual's narrative (CJEU, 2014).
"Doxxing"—the malicious publication of private information (address, phone number) to incite harassment—attacks the safety of the individual. The object of protection here is "obscurity" or "seclusion." Even if the data is technically public (e.g., in a property record), aggregating and broadcasting it to a hostile mob weaponizes it. Legal reforms are increasingly criminalizing doxxing as a specific offense, recognizing that the "context" of the data release transforms it into a weapon against the person (Citron, 2014).
The "Right to Reply" is a procedural right used to protect reputation. In many jurisdictions, if a digital media outlet attacks an individual's honor, they have a legal right to have a correction or reply published with equal visibility. This treats the digital news article as a contested object, forcing the platform to include the subject's voice. This mechanism attempts to restore balance to the informational object, ensuring the "truth" is not monopolized by the publisher (Council of Europe, 1974).
Online impersonation and "identity theft" attack the integrity of the digital persona. When someone creates a fake profile using another's photos and name, they are hijacking the "identity object." This is not just a privacy violation but a fraud against the public and the individual. Laws vary, with some treating it as criminal fraud and others as a civil tort of "misappropriation of likeness." The protected interest is the exclusive right to control one's own identity presentation in the digital sphere (Marwick, 2013).
"Mugshot extortion" sites highlight the commercial exploitation of reputation. These sites scrape public arrest records and host the mugshots, charging the subjects a fee to remove them. Even if the charges were dropped, the digital stain remains. Legislation in several US states now prohibits this practice, treating the "booking photo" as a restricted object in the commercial context to prevent predatory capitalization on human misery (Laster, 2015).
The reputation of businesses is also a protected object. Companies can sue for "trade libel" or malicious falsehoods that damage their goodwill. However, "Anti-SLAPP" (Strategic Lawsuit Against Public Participation) laws exist to prevent corporations from using reputation laws to silence legitimate criticism or consumer reviews. The legal balance is between protecting the corporate "brand object" and preserving the "consumer review" as a tool of market transparency (Donson, 2000).
"Revenge porn," or the non-consensual distribution of intimate images (NCII), is a severe violation of the digital persona. The object is the "sexual privacy" of the victim. Historically, copyright law was clumsily used to takedown these images (if the victim took the selfie). Now, specific criminal statutes target the act of distribution itself. The law recognizes that the harm is not to the copyright of the photo, but to the dignity of the subject portrayed. The image is an object of violation that must be scrubbed from the internet (Citron & Franks, 2014).
Algorithmic reputation scores (e.g., credit scores, Uber ratings) create a "quantified self" that determines access to services. If an algorithm erroneously flags a user as a fraudster, their digital reputation is damaged in a way that is invisible but consequential. Data protection laws grant a right to "explanation" and "human intervention" to contest these scores. The object of protection is the "accuracy" of the digital profile that mediates the user's economic existence (Pasquale, 2015).
"Cancel Culture" and "online shaming" represent the extra-legal enforcement of reputation norms. While often driven by social justice motives, the lack of due process can lead to "digital vigilantism." The law generally struggles to intervene here unless speech crosses into defamation or harassment. The "court of public opinion" operates on the reputation object with a speed and severity that the legal system cannot match, creating a "shadow justice" system (Ronson, 2015).
The "Digital Legacy" of reputation involves the preservation of honor after death. Can a family sue for defamation of a deceased relative? Most common law jurisdictions say "no" (the right dies with the person), while civil law jurisdictions often protect the "memory" of the deceased as a dignitary interest of the surviving family. This defines the reputation object as either a personal possession or a familial heritage (Edwards & Harbinja, 2013).
Finally, the "integrity of the timeline" is emerging as a conceptual object. Disinformation campaigns (e.g., deepfakes, bots) pollute the information ecosystem, making it impossible for individuals to maintain a truthful reputation or for the public to know the truth. The "right to the truth" is being discussed as a collective human right, protecting the "public sphere" as an object of epistemic integrity against coordinated inauthentic behavior (Kaye, 2017).
Section 5: Virtual Assets and the Metaverse
The emergence of the Metaverse and blockchain economies has created a new class of "Virtual Assets" (VAs) that challenge traditional property law. Cryptocurrencies like Bitcoin are the most prominent examples. Legally, they are difficult to classify: are they money, securities, commodities, or pure data? The EU's "Markets in Crypto-Assets" (MiCA) regulation defines them as "digital representations of value or rights which may be transferred and stored electronically." This creates a specific legal container for these objects, distinguishing them from fiat currency while subjecting them to financial regulation to prevent money laundering (European Parliament, 2023).
Virtual real estate in platforms like Decentraland or The Sandbox is bought and sold for millions of dollars. These are NFTs that represent coordinates in a virtual space. The "object" is a tokenized claim to a digital location. Disputes arise over "virtual trespass" or the rights of neighbors (e.g., putting a virtual billboard next to a virtual museum). Since these worlds are governed by private Terms of Service, the "property right" is actually a contractual right against the platform. If the servers go down, the land vanishes, revealing the precarious nature of the virtual object (Fairfield, 2005).
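To make the "tokenized claim to a digital location" concrete, the following minimal sketch (with invented names and values, not any platform's actual data model) shows what a virtual-land registry amounts to on the platform side: a record linking a token ID to coordinates and an owner. A "sale" only updates that record, and if the platform deletes the registry, the "land" disappears with it.

```python
# Illustrative only: a toy virtual-land registry. Token IDs, coordinates, and wallet
# addresses are invented for the example.

parcel_registry = {
    # token_id: {"coords": (x, y), "owner": wallet_address}
    "LAND-0001": {"coords": (12, -4), "owner": "0xAlice"},
    "LAND-0002": {"coords": (13, -4), "owner": "0xBob"},
}

def transfer_parcel(token_id: str, seller: str, buyer: str) -> None:
    parcel = parcel_registry[token_id]
    if parcel["owner"] != seller:
        raise PermissionError("seller does not hold this token")
    parcel["owner"] = buyer            # the "sale" is only an update to this record

transfer_parcel("LAND-0001", "0xAlice", "0xCarol")
print(parcel_registry["LAND-0001"])    # {'coords': (12, -4), 'owner': '0xCarol'}
```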
In-game assets—skins, weapons, and gold in games like Fortnite or World of Warcraft—have immense economic value. Yet, historically, courts have ruled that players do not "own" these items; they are merely licensed to use them within the game. If a player is banned, they lose their inventory. However, as "Play-to-Earn" models evolve, where assets can be traded for real money on third-party markets, the legal argument for recognizing a "virtual property interest" is strengthening. The object is transitioning from a "game mechanic" to a "digital asset" (Lastowka & Hunter, 2004).
Theft of virtual property poses a challenge to criminal law. If someone hacks an account and steals a "Magic Sword," is it theft? Traditional theft requires a tangible object. Some jurisdictions (like the Netherlands in the Runescape case) have ruled that virtual items have value and can be "stolen" in the criminal sense because the thief exerted control over the object and deprived the victim of its use. This solidifies the status of virtual items as protected legal objects (Supreme Court of the Netherlands, 2012).
Smart Contracts are "self-executing" code objects that run on a blockchain. They automate the transfer of assets. The legal question is whether "code is law" literally. If a smart contract contains a bug (like the DAO hack) that allows someone to drain funds, is it theft or just "using the code as written"? Legal purists argue that the intent of the parties overrides the code, meaning the "contractual object" is the mutual agreement, not just the script. However, the technical immutability of the blockchain often makes the code the final arbiter in practice (Werbach & Cornell, 2017).
Decentralized Autonomous Organizations (DAOs) are organizational objects existing on the blockchain. They have no physical headquarters or directors. Legal systems are struggling to classify them: are they general partnerships (imposing unlimited liability on members) or a new type of legal entity? States like Wyoming (USA) have created specific "DAO LLC" statutes to wrap these digital objects in a legal shell, granting them legal personality and limited liability protection (Wyoming State Legislature, 2021).
"Airdrops" and tokens create new forms of "value objects." When a protocol distributes free tokens to users, is it a gift, income, or a dividend? Tax authorities are aggressively defining these events to ensure the state captures its share of the value. The "taxable object" is the fair market value of the token at the moment of receipt, forcing digital nomads to account for every virtual transaction in fiat terms (IRS, 2019).
Digital Wallets are the "containers" for these assets. A "custodial wallet" (hosted by an exchange like Coinbase) is a financial account protected by banking laws. A "non-custodial wallet" (where the user holds the private key) is a piece of software. The regulatory push is to apply "Know Your Customer" (KYC) rules to these non-custodial wallets, effectively treating the "private software object" as a regulated financial instrument. Privacy advocates argue this infringes on the right to transact anonymously (FinCEN, 2020).
Stablecoins (crypto pegged to the dollar) attempt to create a "stable object" for payments. Regulators view them as a systemic risk—if the reserve backing the coin fails, it could trigger a run on the bank. Regulations now demand that stablecoin issuers hold high-quality liquid assets, treating the stablecoin object as a "money market fund" that requires strict solvency guarantees to protect the holder (G7 Working Group on Stablecoins, 2019).
The "Tokenization" of physical assets brings real-world objects into the digital legal sphere. Real estate, art, or stocks can be represented by tokens on a blockchain. This creates a "digital twin" object. The legal challenge is ensuring the link remains valid—if I sell the token, does the physical house legally transfer? This requires a synchronization of land registries with blockchain ledgers, creating a hybrid legal object that exists in both jurisdictions (Narayanan et al., 2016).
Jurisdiction in the Metaverse is the ultimate theoretical puzzle. If an avatar from France assaults an avatar from Brazil in a virtual space hosted on a US server, where did the act occur? The "location of the object" is ambiguous. Legal theories propose "platform law" (the rules of the server prevail) or "personal law" (the law of the user's nationality follows them). As virtual acts gain real-world consequences (emotional distress, financial loss), defining the "locus delicti" (place of the crime) is essential for enforcement (Reed, 2012).
Finally, the "sovereign individual" thesis argues that cryptography allows individuals to protect their assets independently of the state. By memorizing a "seed phrase" (12 words), a person can carry millions of dollars across borders in their head. This creates an "unseizable object," challenging the state's power of confiscation and taxation. The legal response is the "physical coercion" of the key holder, proving that while the digital object is cryptographically secure, the human subject remains vulnerable to physical law (Rees-Mogg & Davidson, 1997).
Questions
1. Biometric Data and Immutability
According to Section 1, why does the "immutability" of biometric data (such as fingerprints or iris scans) present a unique security challenge compared to traditional authentication methods like passwords, and what is the suggested technical solution?
2. The Legal Status of Metadata
How did the Court of Justice of the European Union (CJEU) in the Digital Rights Ireland case alter the legal status of "metadata" (data about data) in relation to the content of communications?
3. The "End of Ownership" in Digital Media
Section 2 describes a shift from "ownership" to "licensing" for digital objects like music or e-books. How does this shift affect the consumer's rights regarding the resale or long-term possession of the work compared to physical media?
4. NFT Ownership Distinction
Legally, what is the crucial distinction described in Section 2 regarding what a buyer actually acquires when purchasing a Non-Fungible Token (NFT) versus the copyright of the underlying digital art?
5. The Shared Responsibility Model
In the context of Cloud Computing (Section 3), how does the "Shared Responsibility Model" define the division of legal liability between the cloud provider and the customer?
6. Zero-Day Vulnerabilities and State Equities
What is the "Vulnerabilities Equities Process," and what dilemma does it attempt to resolve regarding the government's handling of "Zero-Day" software flaws?
7. The Right to be Forgotten
Under the ruling in Google Spain v. AEPD discussed in Section 4, what specific condition generally allows an individual's privacy interest to override the public's interest in accessing search results?
8. Legal Reframing of Non-Consensual Intimate Images
How has the legal framework for addressing "revenge porn" evolved from utilizing copyright law to recognizing a specific violation of the victim's dignity and sexual privacy?
9. Virtual Theft and Property Rights
In the Runescape case mentioned in Section 5, on what grounds did the Supreme Court of the Netherlands rule that virtual in-game items could be the object of criminal theft?
10. Liability of DAOs
What is the primary legal uncertainty regarding Decentralized Autonomous Organizations (DAOs), and how have jurisdictions like Wyoming attempted to resolve the issue of liability for DAO members?
Cases
Case Study: The "Synth-Pop" Star and the Immutable Scandal
Background: The Creation of a Digital Object
Elena, a world-famous pop star, signs a contract with "Meta-Talent," a tech company. The contract authorizes Meta-Talent to scan Elena’s face and voice (collecting Biometric Data) to create a hyper-realistic AI avatar named "V-Elena."
The Asset: V-Elena is designed to perform concerts in the Metaverse. Meta-Talent holds the copyright to the software code and the 3D model.
The Token: To monetize this, Meta-Talent mints Non-Fungible Tokens (NFTs). Each NFT represents "ownership" of a unique digital costume worn by V-Elena during her virtual tour.
The Incident: Unauthorized Synthesis and Reputation Damage
Disputes arise when Meta-Talent uses V-Elena’s likeness to endorse a controversial political candidate in a virtual rally.
The "Deepfake" Defense: Elena claims this violates her "personality rights." She argues that while she licensed her IP, she did not consent to this specific speech. She demands the "digital likeness" be treated as a Biometric Object (Section 1) that requires specific consent for every use, rather than just a copyrighted software object (Section 2).
The Blockchain Scandal: Meta-Talent releases a series of "V-Elena Political NFTs" commemorating the rally. These sell out instantly. Elena attempts to scrub the internet of these images to protect her Digital Reputation (Section 4). She files a "Right to be Forgotten" request with search engines to de-list the auction sites. However, the sales records are on a public blockchain, which is technically immutable.
The Heist: Amidst the chaos, a hacker exploits a Zero-Day Vulnerability (Section 3) in Meta-Talent’s servers. They steal the Private Keys to the company's "non-custodial wallet" (Section 5) and drain the proceeds from the NFT sales. They also leak the source code for V-Elena’s voice synthesis engine.
Questions
1. The Hybrid Nature of the Digital Likeness
Analyze the legal status of "V-Elena" using Section 1 (Biometrics) and Section 2 (Digital Content/Deepfakes).
Elena argues V-Elena is an extension of her "biometric self," while Meta-Talent argues it is a "copyrighted software object." Why does this distinction matter for Elena’s ability to stop the political endorsement?
Hint: Consider the difference between owning a "work" (IP) and controlling "informational self-determination" (Privacy).
2. The Clash Between Reputation and the Blockchain
Focusing on Section 4 (Right to be Forgotten) and Section 5 (Virtual Assets):
Elena tries to use the Google Spain ruling to de-list the NFT sales. Why does the technical nature of the blockchain (immutable ledger) present a fundamental conflict with the legal concept of the "Right to Erasure"?
Can the "reputation object" (Elena's honor) be effectively protected when the "transaction object" (the NFT record) is permanent?
3. Virtual Theft and the "Private Key"
Reflecting on the hacker's theft in Section 5 (Virtual Assets):
If the hacker stole the "Private Keys" rather than the physical server, has a theft occurred? Use the logic from the Runescape case or the concept of the "unseizable object" to explain whether the law views the "key" as property that can be stolen, or merely a secret that was copied.
How does the theft of the source code (a Trade Secret) differ legally from the theft of the crypto-tokens?
References
17 U.S.C. § 1201. (1998). Digital Millennium Copyright Act.
Abelson, H., et al. (2015). Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications. MIT CSAIL.
Article 29 Working Party. (2011). Opinion 15/2011 on the definition of consent. 01197/11/EN WP187.
Chesney, B., & Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107, 1753.
Citron, D. K. (2014). Hate Crimes in Cyberspace. Harvard University Press.
Citron, D. K., & Franks, M. A. (2014). Criminalizing Revenge Porn. Wake Forest Law Review, 49, 345.
Council of Europe. (1974). Resolution (74) 26 on the right of reply — position of the individual in relation to the press.
Council of Europe. (2001). Convention on Cybercrime (Budapest Convention). ETS No. 185.
Court of Justice of the European Union (CJEU). (2014). Digital Rights Ireland Ltd v Minister for Communications. Joined Cases C-293/12 and C-594/12.
Court of Justice of the European Union (CJEU). (2014). Google Spain SL v. Agencia Española de Protección de Datos (AEPD). Case C-131/12.
Davenport, T. (2012). Submarine Communications Cables and Law of the Sea: Problems in Law and Practice. Ocean Development & International Law, 43(3).
Donson, F. (2000). Legal Intimidation: A SLAPP in the Face of Democracy. Free Association Books.
Edwards, L., & Harbinja, E. (2013). Protecting Post-Mortem Privacy: Reconsidering the Privacy Interests of the Deceased in a Digital World. Cardozo Arts & Entertainment Law Journal, 32(1).
Egloff, F. J. (2019). Public attribution of cyber intrusions. Journal of Cybersecurity, 6(1).
European Commission. (2020). Circular Economy Action Plan: For a cleaner and more competitive Europe.
European Commission. (2022). Proposal for a Regulation on horizontal cybersecurity requirements for products with digital elements (Cyber Resilience Act).
European Parliament. (2023). Regulation on Markets in Crypto-assets (MiCA).
European Union. (1996). Directive 96/9/EC on the legal protection of databases.
European Union. (2016). General Data Protection Regulation (GDPR). Regulation (EU) 2016/679.
Europeana. (2011). The Public Domain Charter.
Fairfield, J. (2005). Virtual Property. Boston University Law Review, 85, 1047.
FinCEN. (2020). Requirements for Certain Transactions Involving Convertible Virtual Currency or Digital Assets.
Frosio, G. F. (2015). User Created Content and Intermediary Liability. In Cumulative Supplement to Copyright Law.
G7 Working Group on Stablecoins. (2019). Investigating the impact of global stablecoins.
German Federal Constitutional Court. (1983). Volkszählungsurteil (Census Judgment). BVerfGE 65, 1.
Global Commission on the Stability of Cyberspace (GCSC). (2018). Norm Protecting the Public Core of the Internet.
Guadamuz, A. (2021). The Treachery of Images: Non-fungible tokens and copyright. Journal of Intellectual Property Law & Practice, 16(12).
Harbinja, E. (2017). Post-mortem privacy 2.0: theory, law, and technology. International Review of Law, Computers & Technology, 31(1).
IRS. (2019). Revenue Ruling 2019-24.
ITU. (2020). Radio Regulations, Edition of 2020.
Kaye, D. (2017). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. A/HRC/35/22.
Kindt, E. J. (2013). Privacy and Data Protection Issues of Biometric Applications. Springer.
Laster, J. (2015). The Mugshot Racket. The Marshall Project.
Lastowka, G., & Hunter, D. (2004). The Laws of the Virtual Worlds. California Law Review, 92(1).
Marwick, A. E. (2013). Status Update: Celebrity, Publicity, and Branding in the Social Media Age. Yale University Press.
Microsoft. (2021). Shared Responsibility in the Cloud. Microsoft Azure Documentation.
Mueller, M. (2002). Ruling the Root: Internet Governance and the Taming of Cyberspace. MIT Press.
Narayanan, A., et al. (2016). Bitcoin and Cryptocurrency Technologies. Princeton University Press.
Ohm, P. (2010). Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization. UCLA Law Review, 57, 1701.
Pasquale, F. (2015). The Black Box Society. Harvard University Press.
Perzanowski, A., & Schultz, J. (2016). The End of Ownership: Personal Property in the Digital Economy. MIT Press.
Purtova, N. (2017). Do Property Rights in Personal Data Make Sense? Tilburg Law School Research Paper.
Reed, C. (2012). Making Laws for Cyberspace. Oxford University Press.
Rees-Mogg, W., & Davidson, J. D. (1997). The Sovereign Individual. Simon & Schuster.
Ronson, J. (2015). So You've Been Publicly Shamed. Riverhead Books.
Shiryaev, Y. (2012). Cyberterrorism in the Context of Contemporary International Law.
Solove, D. J. (2007). The Future of Reputation: Gossip, Rumor, and Privacy on the Internet. Yale University Press.
Stallman, R. (2002). Free Software, Free Society. GNU Press.
Supreme Court of the Netherlands. (2012). Runescape Theft Case (ECLI:NL:HR:2012:BQ9251).
Supreme Court of the United States. (2021). Google LLC v. Oracle America, Inc. 141 S. Ct. 1183.
Svantesson, D. (2017). Solving the Internet Jurisdiction Puzzle. Oxford University Press.
Taylor, M. (2012). Genetic Data and the Law. Cambridge University Press.
UN Group of Governmental Experts (GGE). (2015). Report of the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security. A/70/174.
Wachter, S., & Mittelstadt, B. (2019). A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI. Columbia Business Law Review, 2019(2).
Werbach, K., & Cornell, N. (2017). Contracts Ex Machina. Duke Law Journal, 67, 313.
Wong, J., & Henderson, T. (2019). The right to data portability in practice. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems.
World Intellectual Property Organization. (1996). WIPO Copyright Treaty (WCT).
Wyoming State Legislature. (2021). Wyoming Decentralized Autonomous Organization Supplement Act.
5
Digital rights protection relations and their regulation
2
2
10
14
Lecture text
Section 1: Jurisdictional Challenges and the Territorial Paradox
The fundamental tension in the regulation of digital rights lies in the conflict between the borderless nature of the internet and the strictly territorial definition of state sovereignty. Traditional legal relations are predicated on the Westphalian system, where a state's legal authority is confined to its physical geography. However, digital data flows ignore these physical boundaries, creating a "jurisdictional paradox" where actions taken in one country can have immediate legal effects in another. This forces legal theorists to grapple with the "effects doctrine," a principle of international law which posits that a state has jurisdiction over conduct outside its borders if that conduct has substantial effects within its territory. This doctrine is increasingly applied to justify the extraterritorial application of national digital laws (Ryngaert, 2015).
A seminal case illustrating this conflict is the Yahoo! v. LICRA litigation. In this instance, a French court ordered an American company to block French users from accessing Nazi memorabilia auctions hosted on US servers. This highlighted the clash between US First Amendment protections and French hate speech laws. The case established the early precedent that while the internet is global, the commercial "targeting" of a specific population subjects a digital actor to the local laws of that population. This "targeting criterion" has since become a standard test in private international law to determine when a digital relation establishes a jurisdictional link (Goldsmith & Wu, 2006).
The European Union has aggressively expanded its jurisdictional reach through the "Brussels Effect." By setting high regulatory standards like the General Data Protection Regulation (GDPR) and conditioning market access on compliance, the EU effectively exports its domestic laws to the rest of the world. Article 3 of the GDPR explicitly claims extraterritorial scope, applying to any organization, regardless of location, that offers goods or services to, or monitors the behavior of, individuals in the Union. This creates a regulatory relationship where a Silicon Valley startup or an Indian tech firm is directly subject to European administrative law (Bradford, 2020).
However, the global reach of regulation is not absolute, as clarified by the Court of Justice of the European Union (CJEU) in Google v. CNIL. The case concerned whether the "Right to be Forgotten" required Google to de-list search results globally or only on its European domains (e.g., google.fr). The Court ruled that, for now, EU law generally requires de-listing only on EU-facing versions of the search engine, coupled with geo-blocking measures. This decision reflects a judicial restraint intended to avoid a "race to the bottom" where the most restrictive censorship laws of one nation could be imposed on the entire global internet (CJEU, 2019).
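In practice, geographically scoped de-listing is implemented as a filtering rule tied to the requester's inferred location. The simplified sketch below (hypothetical URLs and an abridged country list) shows the basic logic: the link is suppressed for EU-attributed requests and left visible elsewhere.

```python
# A simplified sketch of regionally scoped de-listing plus geo-blocking. Country
# detection, URL sets, and country codes here are hypothetical and abridged.

EU_COUNTRIES = {"FR", "DE", "IE", "ES"}           # abridged list for illustration
DELISTED_IN_EU = {"https://example.org/old-story"}

def search(query_results: list[str], requester_country: str) -> list[str]:
    if requester_country in EU_COUNTRIES:
        return [url for url in query_results if url not in DELISTED_IN_EU]
    return query_results                           # outside the EU the result stays visible

results = ["https://example.org/old-story", "https://example.org/profile"]
print(search(results, "FR"))  # de-listed for an EU-attributed request
print(search(results, "US"))  # still shown elsewhere
```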
Conflict of laws, or private international law, struggles to identify the lex loci delicti (law of the place where the tort was committed) in digital disputes. If a defamatory post is written in London, hosted on a server in California, and read in Sydney, where did the harm occur? The "mosaic theory" of jurisdiction suggests that the harm occurs in every location where the content is accessed and reputation is damaged. This exposes digital publishers to potential liability in every jurisdiction worldwide, creating a massive "chilling effect" on free speech as platforms may over-comply to avoid multiple lawsuits (Svantesson, 2017).
Data localization laws represent a state response to re-assert territorial control over digital relations. Countries like Russia, China, and increasingly others, mandate that the personal data of their citizens be stored on servers physically located within the country. This forces the physical infrastructure of the internet to align with political borders, turning the "cloud" into a series of national silos. While often justified on privacy or security grounds, these laws primarily serve to ensure that the state has direct legal and physical access to data for surveillance and law enforcement purposes (Chander & Le, 2015).
The concept of "sovereign clouds" is emerging as a new model of digital relation. Here, governments contract with major tech providers to build isolated cloud environments that are legally and technically ring-fenced from foreign jurisdictions. This creates a "sovereign immunity" for data, attempting to shield it from foreign subpoenas (like the US CLOUD Act). This creates a complex triangular legal relation between the user, the state, and the foreign technology provider, where the terms of the contract attempt to override international jurisdictional conflicts (Microsoft, 2022).
Jurisdiction over domain names presents another layer of complexity. The .com registry is managed by Verisign, a US company, meaning that technically all .com websites are subject to US seizure laws, regardless of who owns them or where the content is hosted. This was demonstrated when the US Department of Justice seized the domains of foreign gambling sites. This "infrastructure jurisdiction" allows the US to act as a global gatekeeper, regulating digital relations based on control over the internet's root files rather than the location of the user (Zittrain, 2003).
The "country of origin" principle, foundational to the EU's e-Commerce Directive, attempted to simplify digital relations by stating that a provider is subject only to the laws of the country where it is established. However, this principle is eroding. Recent legislation like the Digital Services Act (DSA) and audiovisual media regulations increasingly shift power to the "country of destination," empowering the regulator where the consumer is located. This shift increases the compliance burden on platforms, forcing them to navigate 27 different legal systems within the single market alone (European Commission, 2020).
Arbitration clauses in Terms of Service (ToS) attempt to privatize the jurisdictional question. By forcing users to waive their right to sue in local courts and agree to binding arbitration (often in California), platforms try to sever the digital relation from public court systems. However, courts in Canada (e.g., Uber Technologies Inc. v. Heller) and the EU have increasingly struck down these clauses as unconscionable when applied to consumers or employees, re-establishing the primacy of local public law over private contractual arrangements (Supreme Court of Canada, 2020).
The "splinternet" is the ultimate manifestation of unresolved jurisdictional conflicts. As states implement incompatible technical standards and censorship regimes, the single global network fractures. This rupture in digital relations means that a "right" protected in one segment of the network (e.g., encryption in Europe) might be a "crime" in another (e.g., Russia). This fragmentation makes the concept of "universal digital human rights" difficult to enforce, as the material reality of the internet varies depending on the user's IP address (Mueller, 2017).
Finally, the regulation of digital relations is moving towards "internet sovereignty," where states assert the right to control the information environment as a matter of national security. This perspective rejects the multi-stakeholder model in favor of strict multilateralism, where only states dictate the rules. This fundamentally alters the digital relation from a global, open exchange to a series of bilateral state-to-state permissions, challenging the original architectural vision of the internet as an open network of networks.
Section 2: Administrative and Institutional Mechanisms of Protection
The enforcement of digital rights relies heavily on specialized administrative bodies, most notably Data Protection Authorities (DPAs) or Information Commissioners. These independent public authorities are the primary regulators of the digital landscape. Under the GDPR, they possess significant investigative and corrective powers, including the ability to enter premises, demand access to algorithms, and impose fines of up to 4% of global turnover. This institutional design moves digital rights protection from abstract judicial theory to concrete administrative enforcement, creating a direct regulatory relationship between the DPA and the data controller (Hijmans, 2016).
The European Data Protection Board (EDPB) exemplifies the institutionalization of cross-border cooperation. Composed of representatives from each national DPA, the EDPB ensures the consistent application of data protection rules across the EU. It issues binding decisions when national regulators disagree, acting as a "supreme administrative court" for privacy. This mechanism prevents "forum shopping," where companies might otherwise establish headquarters in jurisdictions with lenient regulators (e.g., Ireland) to avoid strict enforcement (European Data Protection Board, 2018).
In the United States, the Federal Trade Commission (FTC) serves as the de facto privacy regulator, though its authority stems from consumer protection law rather than a dedicated human rights statute. The FTC regulates digital relations by policing "unfair and deceptive acts or practices." If a company violates its own privacy policy, it is committing a deceptive act. The FTC enforces this through consent decrees—settlements that impose 20-year auditing requirements on tech companies. This creates a regulatory relationship based on contract and monitoring rather than direct statutory rights (Solove & Hartzog, 2014).
Cybersecurity agencies (like CISA in the US or ENISA in the EU) play a growing role in the protection of digital rights by securing the underlying infrastructure. These institutions regulate the relationship between critical infrastructure providers and the state. By mandating incident reporting and vulnerability disclosures, they protect the "availability" and "integrity" of the digital sphere. However, their mandate often prioritizes national security over individual privacy, creating tension in the institutional landscape regarding who effectively protects the user (European Union, 2019).
Consumer protection agencies act as a parallel enforcement mechanism. As digital rights often overlap with consumer rights (e.g., the right to fair terms, transparency, and non-discrimination), these bodies challenge predatory digital practices like "dark patterns"—user interfaces designed to trick users into consent. By treating data privacy violations as market failures, these institutions utilize broad consumer protection statutes to litigate against tech giants, adding another layer of regulatory oversight (Mathur et al., 2019).
The role of National Human Rights Institutions (NHRIs) is expanding into the digital domain. These bodies, accredited under the Paris Principles, monitor state compliance with international human rights obligations. They are increasingly conducting inquiries into digital ID systems, surveillance laws, and the digital divide. NHRIs bridge the gap between international treaty bodies and domestic administration, translating global norms into local policy recommendations and holding the state accountable for its digital footprint (Global Alliance of National Human Rights Institutions, 2019).
Sector-specific regulators are also entering the digital rights arena. Financial regulators audit fintech apps for data security; health regulators monitor telemedicine platforms; and competition authorities (antitrust) are dismantling data monopolies. This "regulatory intersectionality" acknowledges that digital rights are not an isolated vertical but a horizontal layer across all sectors of the economy. It requires a "whole-of-government" approach where the competition regulator must understand privacy law, and the privacy regulator must understand market dynamics (Kerber, 2016).
The "Ombudsman" institution provides an alternative dispute resolution mechanism for citizens. In the context of surveillance, specialized intelligence oversight bodies (like the IPT in the UK) provide a check on state power. However, the effectiveness of these institutions is often limited by secrecy laws. The European Court of Human Rights has frequently criticized these oversight bodies for lacking true independence and the power to offer effective remedies, highlighting the institutional gap in protecting rights against the "deep state" (ECtHR, 2018).
The "One-Stop-Shop" mechanism in the GDPR was designed to simplify the regulatory relation for businesses. It allows a company to deal with a single "Lead Supervisory Authority" in the country of its main establishment, rather than 27 different regulators. While efficient for business, this mechanism has faced criticism for creating bottlenecks. If the Lead Authority is under-resourced or reluctant to act (a criticism often leveled at the Irish DPC), the entire continent's enforcement is stalled. This highlights the fragility of centralized administrative models (Access Now, 2021).
Algorithmic auditing bodies are a proposed institutional innovation. The EU's AI Act envisions "Notified Bodies"—third-party auditors accredited to certify the safety and compliance of high-risk AI systems before they enter the market. This creates a regulatory relationship similar to product safety testing (like CE marking). These institutions will act as gatekeepers, ensuring that code complies with fundamental rights standards regarding bias and transparency before it is deployed (European Commission, 2021).
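One concrete check such an auditor might run is a disparity test over the system's decision logs. The toy sketch below (invented data, group labels, and tolerance) computes approval rates per group and flags the system when the gap exceeds an agreed threshold; a real conformity assessment would involve many such metrics alongside documentation and process review.

```python
# Illustrative only: a toy fairness check of the kind a third-party auditor might run.

decisions = [
    # (applicant_group, approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(records, group):
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")   # 0.75
rate_b = approval_rate(decisions, "group_b")   # 0.25
parity_gap = abs(rate_a - rate_b)              # demographic parity difference

TOLERANCE = 0.2                                # agreed tolerance, invented here
print(f"approval gap = {parity_gap:.2f}", "FLAG" if parity_gap > TOLERANCE else "OK")
```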
Content moderation appeals bodies, such as the appeals councils mandated by the Digital Services Act (DSA), act as quasi-judicial institutions within the private sector. The DSA requires platforms to fund independent out-of-court dispute settlement bodies. These institutions aim to provide users with a redress mechanism that is faster and cheaper than the court system, institutionalizing the protection of freedom of expression against arbitrary platform censorship (European Union, 2022).
Finally, the interaction between these administrative bodies creates a complex "regulatory mesh." A single data breach might trigger an investigation by the DPA (for privacy), the Cybersecurity Agency (for security), the Consumer Agency (for unfair practices), and the Competition Authority (for abuse of dominance). Coordinating these diverse institutional mandates is the central challenge of modern digital governance, requiring formal Memorandums of Understanding (MoUs) to prevent double jeopardy and ensure consistent enforcement.
Section 3: Private Regulation and the Role of Intermediaries
Private regulation, often termed "self-regulation" or "transnational private regulation," constitutes the primary governing force of the internet for most users. The Terms of Service (ToS) and Community Guidelines of major platforms like Meta, Google, and X function as the "constitution" of these digital spaces. This creates a contractual regulatory relation where the user agrees to a set of rules in exchange for access. Unlike public law, which is constrained by human rights standards, private regulation is historically constrained only by contract law and market tolerance, allowing platforms to restrict speech that would be constitutionally protected in the public square (Balkin, 2018).
The "Santa Clara Principles on Transparency and Accountability in Content Moderation" represent a civil society effort to standardize this private regulation. They demand that platforms provide clear notice to users when content is removed, offer a meaningful opportunity for appeal, and publish transparency reports. Major platforms have endorsed these principles, signaling a shift towards "procedural justice" in private governance. This adoption illustrates how soft law pressures can harden into binding private policies (Santa Clara Principles, 2018).
The Meta Oversight Board acts as a pioneering experiment in private judicial power. Independent from Meta's management, this body reviews difficult content decisions and issues binding rulings on whether content should be restored. While it lacks the power of a state court, its decisions use the language and framework of international human rights law. This creates a "hybrid" regulatory relation where a private corporation voluntarily submits to a quasi-judicial review based on public law norms (Klonick, 2020).
"Code as Law," a concept popularized by Lawrence Lessig, describes how technical architecture regulates behavior. Private actors regulate digital relations not just through written rules, but through the design of their software. For example, the decision to implement end-to-end encryption in WhatsApp is a regulatory decision that makes surveillance impossible. Conversely, the design of "real-name policies" technically enforces a ban on anonymity. In this sphere, the software engineer is the lawmaker, and the code is the enforcement mechanism (Lessig, 1999).
The concept of "Co-regulation" blends public oversight with private enforcement. The EU Code of Practice on Disinformation is a prime example. Here, the state sets the broad objectives (reducing misinformation), but the private platforms design the specific measures to achieve them. The state then monitors compliance. This regulatory relation outsources the "dirty work" of censorship to private entities, allowing the state to achieve public policy goals without directly infringing on free speech rights (European Commission, 2018).
"Trusted Flaggers" are actors empowered within the private regulatory system. These are NGOs, government agencies, or copyright holders who are given special status by platforms. Their reports of illegal content are prioritized for review. This creates a tiered regulatory system where certain institutional actors have a "fast lane" to influence the policing of the digital sphere, while ordinary users must wait in the queue. This raises questions about equality of arms and due process (Digital Services Act, 2022).
The privatization of copyright enforcement through "upload filters" (like YouTube's Content ID) exemplifies automated private regulation. These systems scan every piece of uploaded content against a database of copyrighted works, blocking matches automatically. This "prior restraint" mechanism bypasses the judicial determination of fair use, effectively shifting the burden of proof onto the user to prove their innocence. The regulatory relation here is entirely algorithmic, with no human intervention in the first instance (Urban et al., 2016).
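Mechanically, an upload filter is a matching pipeline run before publication. The minimal sketch below uses exact SHA-256 hashes for simplicity; systems such as Content ID rely on robust audio/video fingerprints that survive re-encoding, but the automated block-on-match logic, with no human review in the first instance, is the same.

```python
# A minimal sketch of hash-based upload filtering. The hash list is a placeholder;
# real filters use perceptual fingerprints rather than exact file digests.

import hashlib

KNOWN_INFRINGING_HASHES = {
    "9f2b...",   # placeholder digest supplied by a rights holder
}

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def screen_upload(data: bytes) -> str:
    # Block automatically on a match; no human review happens at this first stage.
    if fingerprint(data) in KNOWN_INFRINGING_HASHES:
        return "blocked"
    return "published"

print(screen_upload(b"user generated clip"))  # "published" unless its digest is listed
```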
Domain Name System (DNS) abuse policies allow registries to act as private regulators of the internet's infrastructure. Under pressure from intellectual property rights holders, registries can suspend domain names accused of piracy or selling counterfeit goods. This "chokepoint regulation" effectively exiles a site from the internet without a court order. The relationship is governed by the contract between the registrant and the registrar, often bypassing national courts entirely (Mueller, 2019).
The "Brussels Effect" also operates through private regulation. Multinational platforms often apply their strictest compliance standard (usually the GDPR) globally to streamline their engineering processes. This means that a user in Brazil or Japan might be protected by a private policy derived from European law, even if their local law is weaker. Private corporate policy thus becomes a transmission belt for global regulatory standards (Bradford, 2020).
"Shadow banning" and algorithmic demotion represent the opaque side of private regulation. Instead of removing content, platforms may simply reduce its visibility. This "freedom of speech vs. freedom of reach" distinction allows platforms to regulate the information ecosystem without technically censoring the user. However, the lack of transparency in these "visibility reductions" makes it impossible for users to know if they have been sanctioned, creating a Kafkaesque regulatory relation (Gillespie, 2018).
The Global Internet Forum to Counter Terrorism (GIFCT) is a consortium of tech companies that shares a database of "hashes" (digital fingerprints) of terrorist content. Once a piece of content is flagged by one member, it can be automatically blocked by all others. This creates a global private censorship cartel. While aimed at violent extremism, the lack of public oversight means that errors (like removing documentation of war crimes) are replicated across the entire internet instantly (Human Rights Watch, 2020).
Finally, the "de-platforming" of public figures (e.g., the suspension of Donald Trump) highlights the ultimate sanction in private regulation: "digital capital punishment." This power to exclude individuals from the modern public square demonstrates that private platforms hold sovereign-like power over democratic participation. The regulatory relation is asymmetrical; the platform can terminate the user's digital existence based on a violation of contract, a power that arguably exceeds that of the state in the digital age.
Section 4: International Cooperation and Cross-Border Evidence
The global nature of cybercrime and data flows necessitates robust international cooperation mechanisms. The primary instrument for this is the Mutual Legal Assistance Treaty (MLAT). MLATs are bilateral agreements that allow the law enforcement of one country to request evidence (e.g., emails, server logs) held in another. However, the MLAT process is notoriously slow, often taking months or years to process a request. This "latency" is incompatible with the speed of digital crime, where data can be deleted in seconds. The breakdown of the MLAT system is a primary driver for new, more aggressive forms of cross-border regulation (Swire & Hemmings, 2015).
The Budapest Convention on Cybercrime (2001) is the first and only binding international treaty on crimes committed via the internet. It harmonizes national laws on cybercrime definitions and establishes a framework for 24/7 cooperation between police forces. Article 32 of the Convention is particularly controversial; it allows for "transborder access to stored computer data" with consent or where publicly available. However, critics argue that the Convention lacks robust human rights safeguards, prioritizing police efficiency over privacy protections (Council of Europe, 2001).
The US CLOUD Act (Clarifying Lawful Overseas Use of Data Act) represents a shift from the treaty-based model to a unilateral/bilateral executive agreement model. It asserts US jurisdiction over data held by US companies anywhere in the world. Crucially, it also allows the US to sign executive agreements with trusted foreign governments (like the UK), allowing them to bypass the MLAT process and demand data directly from US tech companies. This significantly accelerates the regulatory relation but bypasses judicial review in the receiving country (United States Congress, 2018).
The European Union’s "e-Evidence" Regulation proposal aims to create a similar mechanism within the EU. It would allow a judge in one member state to issue a "European Production Order" directly to a service provider in another member state (e.g., a German judge ordering Facebook Ireland to hand over data), with a deadline of 10 days (or 6 hours in emergencies). This removes the political "middleman" of the receiving state, creating a direct judicial-corporate relation across borders (European Commission, 2018).
Interpol and Europol serve as critical hubs for operational cooperation. The European Cybercrime Centre (EC3) at Europol coordinates major cross-border operations against botnets and dark web marketplaces. These agencies do not have executive powers themselves but facilitate the "deconfliction" of investigations, ensuring that police in different countries do not compromise each other's operations. They act as the "connective tissue" of international digital policing (Europol, 2020).
The "Second Additional Protocol to the Budapest Convention" was drafted to modernize the treaty for the cloud computing era. It introduces provisions for direct cooperation between service providers and authorities in other parties, as well as expedited emergency procedures. However, civil society groups have criticized it for lowering the standard of data protection, arguing that it allows law enforcement to "forum shop" for data without adequate judicial oversight in the country where the user resides (EFF, 2021).
Joint Investigation Teams (JITs) are a specific legal tool allowing prosecutors and law enforcement from multiple countries to form a temporary legal entity to investigate cross-border crime. JITs allow for the direct exchange of evidence without formal MLAT requests. This mechanism was used effectively in the takedown of the "EncroChat" encrypted network, where French and Dutch police collaborated to hack the network and share the intelligence with the UK and Sweden (Eurojust, 2020).
Data transfer mechanisms, such as the EU-US Data Privacy Framework (successor to Privacy Shield), are essential for legalizing the commercial flow of personal data. The Schrems II judgment invalidated the previous framework because US surveillance laws were deemed incompatible with EU fundamental rights. This creates a state of permanent diplomatic friction. Cooperation here is not just about catching criminals but about keeping the global digital economy functioning while respecting divergent human rights standards (CJEU, 2020).
The G7 Cyber Expert Group and the Counter Ransomware Initiative represent "minilateral" cooperation. These political groupings coordinate policy responses to state-sponsored cyber threats and ransomware gangs. They focus on "naming and shaming" malicious actors and coordinating sanctions (e.g., sanctioning cryptocurrency wallets used by hackers). This is a form of regulatory diplomacy that bypasses the gridlock of the United Nations (White House, 2021).
The United Nations is currently negotiating a new "Cybercrime Treaty" initiated by Russia. This process highlights the geopolitical split in international cooperation. While Western nations favor the Budapest Convention model (limited to specific crimes), Russia and China advocate for a broader treaty that includes "content crimes" like disinformation. This creates a risk that international cooperation mechanisms could be weaponized to enforce authoritarian censorship norms globally (United Nations General Assembly, 2019).
"Voluntary cooperation" remains a massive, unregulated channel. Tech companies frequently respond to "emergency disclosure requests" from foreign law enforcement where there is an imminent threat to life (e.g., suicide threats or terrorism). While necessary, this creates a grey area where companies act as arbiters of urgency without judicial oversight. Transparency reports reveal that thousands of such requests are processed annually, constituting a shadow system of cross-border data sharing (Google Transparency Report, 2023).
Finally, the "jurisdiction of the data subject" is emerging as a limiting principle in cooperation. The GDPR asserts that the protection travels with the data. This means that if the US government accesses the data of an EU citizen, it must theoretically afford that citizen the same rights as they would have in Europe. Bridging this gap—where rights are attached to the person but powers are attached to the territory—is the central unresolved challenge of international digital relations.
Section 5: Remedies, Liability, and Enforcement Procedures
The efficacy of digital rights depends on the availability of effective remedies. The GDPR introduced a paradigm shift in liability with its tiered fine structure. The "administrative fine" is the primary enforcement tool, designed to be "effective, proportionate, and dissuasive." The threat of fines up to €20 million or 4% of total global annual turnover has transformed privacy compliance from a box-ticking exercise into a boardroom priority. This liability regime targets the corporate entity's bottom line, aligning shareholder interests with human rights compliance (European Union, 2016).
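The fine ceiling itself is a simple formula: for the most serious infringements, the greater of EUR 20 million or 4% of total worldwide annual turnover. The sketch below (illustrative turnover figures only, not drawn from any real case) shows why the percentage limb is what disciplines large platforms.

```python
# A back-of-the-envelope sketch of the GDPR's upper fine limit for the most serious
# infringements. Turnover figures are invented for illustration.

def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

print(gdpr_max_fine(3_000_000_000))   # 120,000,000 -> the 4% limb dominates for large firms
print(gdpr_max_fine(100_000_000))     # 20,000,000  -> the fixed limb dominates for small firms
```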
Compensation for non-material damage is a critical evolution in enforcement. Article 82 of the GDPR gives individuals the right to receive compensation for "non-material damage" (e.g., distress, anxiety, reputation loss) resulting from a data breach. Historically, courts were reluctant to award damages without proof of financial loss. The CJEU has confirmed that the mere infringement of the regulation can give rise to a claim for compensation, although the threshold for "damage" is still being debated in national courts (CJEU, 2023).
Collective redress, or class action lawsuits, is essential for enforcing digital rights, as individual damages are often too small to justify litigation (the "rational apathy" problem). The EU's "Representative Actions Directive" allows qualified entities (NGOs/Consumer groups) to bring lawsuits on behalf of groups of consumers. In the UK, the Lloyd v. Google case tested the limits of "opt-out" class actions for data privacy. While the UK Supreme Court restricted damages without proof of individual distress, the trend is moving towards allowing mass claims to balance the power asymmetry between users and platforms (UK Supreme Court, 2021).
Injunctions are powerful procedural remedies used to stop ongoing violations. In the context of intellectual property and defamation, "dynamic injunctions" are increasingly used. These orders require ISPs to block not just a specific URL, but also any future mirror sites or IP addresses that host the same illegal content. This creates a "whack-a-mole" enforcement mechanism where the remedy automatically adapts to the evasion tactics of the infringer (Arnold, 2018).
The "Right to Explanation" serves as a remedy against algorithmic harm. When a user is subject to an automated decision (e.g., credit denial), they have the right to obtain meaningful information about the "logic involved." Enforcement of this right requires "algorithmic auditing" and "explainable AI" (XAI) techniques. Regulators are beginning to demand that companies open their "black boxes" to prove that the code did not discriminate, converting a technical opacity into a legal liability (Selbst & Powles, 2017).
Intermediary liability regimes determine when a platform pays for user actions. The "Notice and Takedown" model (DMCA, e-Commerce Directive) shields platforms from liability as long as they remove illegal content once notified. However, the move towards "Notice and Staydown" (requiring platforms to prevent the re-upload of illegal content) increases the liability burden. If a platform fails to keep the content off, it becomes liable as a publisher. This shift incentivizes the use of aggressive filtering technologies (Kulk, 2019).
"Turnover-based penalties" are spreading beyond privacy. The EU Digital Markets Act (DMA) introduces fines of up to 10% of global turnover for anti-competitive behavior by "Gatekeepers." This massive liability is designed to break the economic incentives of monopoly power. It represents a shift from ex-post antitrust enforcement (fining companies years after the abuse) to ex-ante regulation (setting rules of conduct upfront with immediate penalties for non-compliance) (European Union, 2022).
Criminal liability for executives is the "nuclear option" of enforcement. Some proposed laws (like the UK Online Safety Bill or Irish Online Safety and Media Regulation Bill) consider holding senior managers criminally liable if they fail to protect children from harmful content. Piercing the corporate veil to target the personal liberty of CEOs is seen as the ultimate deterrent, ensuring that digital safety is treated with the same seriousness as physical workplace safety (United Kingdom Parliament, 2023).
"Soft remedies" like reputational sanctions play a role. Data Protection Authorities often publish the names of non-compliant companies ("naming and shaming"). In the trust-based digital economy, the stigma of being labeled a privacy violator can cause stock drops and user exodus. This reputational mechanism leverages market forces as an enforcement tool, complementing legal sanctions (Gunningham, 2018).
Cross-border enforcement remains the weak link. Even if a DPA in Spain issues a fine against a US app, collecting that fine is difficult without a local entity. The "Representative" requirement in Article 27 of the GDPR forces foreign companies to appoint a legal representative in the EU who can be held liable. This ensures that the enforcement arm of the law has a physical subject to grasp within the jurisdiction (Bygrave, 2017).
Alternative Dispute Resolution (ADR) and Online Dispute Resolution (ODR) provide low-cost remedies. The "UDRP" (Uniform Domain-Name Dispute-Resolution Policy) allows trademark holders to recover domain names through arbitration rather than court. Similarly, e-commerce platforms have internal ODR systems to resolve refunds. While efficient, these private justice systems often lack the procedural guarantees of public courts, prioritizing speed over deep factual investigation (Katsh & Rabinovich-Einy, 2017).
Finally, the "right to effective judicial protection" (Article 47 of the EU Charter) underpins all these mechanisms. It guarantees that if a digital right is violated, a judge must have the power to review the case. The CJEU's invalidation of the "Privacy Shield" was fundamentally based on the lack of this remedy—US ombudspersons were not deemed to be "effective judicial protection." This principle asserts that a right without a remedy is no right at all, and a remedy without a judge is merely a suggestion.
Questions
1. The Jurisdictional Paradox
How does the "effects doctrine" attempt to resolve the conflict between the borderless nature of the internet and the Westphalian system of strictly territorial state sovereignty?
2. The "Targeting Criterion"
In the context of Yahoo! v. LICRA, what principle was established regarding commercial "targeting," and how is this used to determine when a digital actor is subject to local laws?
3. The Brussels Effect and Extraterritoriality
Explain how Article 3 of the GDPR creates an extraterritorial regulatory relationship known as the "Brussels Effect." How does this affect non-European companies like Silicon Valley startups or Indian tech firms?
4. Judicial Restraint in Global Censorship
In Google v. CNIL, why did the Court of Justice of the European Union (CJEU) rule against the global application of the "Right to be Forgotten," and what concept ("race to the bottom") were they trying to avoid?
5. FTC vs. GDPR Enforcement Models
Contrast the enforcement model of the European Data Protection Board (EDPB) with that of the US Federal Trade Commission (FTC). How does the FTC use "consent decrees" to regulate privacy in the absence of a comprehensive human rights statute?
6. Code as Law
Describe Lawrence Lessig’s concept of "Code as Law" found in Section 3. How does the technical design of software (e.g., end-to-end encryption or real-name policies) function as a regulatory mechanism distinct from written legal rules?
7. The Shift in Intermediary Liability
How does the move from a "Notice and Takedown" model to a "Notice and Staydown" model change the liability burden for platforms, and why does this incentivize the use of aggressive filtering technologies?
8. The Failure of MLATs
What is the primary operational deficiency of Mutual Legal Assistance Treaties (MLATs) in the context of digital crime, and how do the US CLOUD Act and the EU’s e-Evidence proposal attempt to bypass this bottleneck?
9. Geopolitical Conflict in Treaty Negotiations
What is the fundamental disagreement between Western nations and the Russia/China bloc regarding the proposed UN Cybercrime Treaty, particularly concerning the definition of criminal conduct?
10. Effective Remedies and Damages
According to Section 5 and the Lloyd v. Google case, what is the current legal debate regarding compensation for "non-material damage" (such as distress) resulting from data breaches?
Cases
Case Study: The "StreamZone" Jurisdictional and Enforcement Crisis
The Jurisdictional Paradox and Private Regulation
"StreamZone," a popular live-streaming platform headquartered in California, has no physical offices in Europe but aggressively monetizes the EU market. It accepts payments in Euros, runs advertising campaigns in French and German, and employs local "brand ambassadors." This clear commercial intent triggers the "targeting criterion" established in Yahoo! v. LICRA, subjecting the US-based company to EU regulations despite its physical absence. The platform relies heavily on "private regulation" to police content, utilizing an automated "upload filter" (Section 3) to block copyrighted material. However, during a high-profile sporting event, a user named "PirateKing" bypasses the filter by slightly altering the video speed and audio pitch. He livestreams the copyrighted event while adding commentary that violates French hate speech laws. StreamZone’s "Code as Law" approach fails, as the algorithmic filter cannot detect the nuanced violation, and their "Notice and Takedown" system is overwhelmed by the volume of reports, allowing the stream to continue for hours.
The Cross-Border Evidence Quagmire
French law enforcement identifies "PirateKing" as a priority target and demands his IP address and registration details from StreamZone. This triggers a conflict of international cooperation mechanisms (Section 4). The French prosecutor initially attempts to use the traditional Mutual Legal Assistance Treaty (MLAT) process to request the data from US authorities. However, the "latency" of the MLAT system means the request could take months, by which time the digital evidence (server logs) would likely be deleted. Frustrated by the delay, the French prosecutor attempts to bypass the treaty by issuing a direct production order to StreamZone, citing the EU’s emerging e-Evidence framework. StreamZone refuses to comply, citing a conflict of laws: while the US CLOUD Act would allow them to disclose data to "trusted" foreign governments, they argue that transferring the data of an EU citizen to the US for processing might violate the GDPR standards established in the Schrems II judgment, leaving them trapped between US disclosure obligations and EU privacy restrictions.
Remedies and the Liability Shift
In response to StreamZone's failure to stop the stream and provide evidence, the French court seeks robust remedies (Section 5). It rejects StreamZone's defense that it is merely a passive intermediary protected by the traditional "Notice and Takedown" model. Instead, the court argues that because StreamZone uses algorithmic recommendation systems to promote the stream, it has become an active publisher. The court issues a "Dynamic Injunction" against French Internet Service Providers (ISPs), ordering them to block not just the current URL of the stream, but any future IP addresses or "mirror sites" PirateKing might use. Furthermore, the French Data Protection Authority (CNIL) initiates proceedings to impose a "turnover-based penalty" of 4% of StreamZone’s global revenue for the data handling violations, aiming to use this massive financial liability to force the company to adopt a stricter "Notice and Staydown" regime for future content.
Questions
1. Jurisdiction via the "Effects Doctrine"
StreamZone argues that as a US company with no French offices, it is not subject to the French court's orders.
Refute this argument using the "Targeting Criterion" and the "Effects Doctrine" discussed in Section 1.
How do factors like "accepting payments in Euros" and "employing local ambassadors" legally establish a jurisdictional link that overrides the physical location of their servers?
2. The MLAT vs. Direct Access Conflict
Focusing on the evidence gathering in Section 4:
Why is the MLAT system described as "incompatible" with the speed of digital crime like the "PirateKing" livestream?
Explain the legal dilemma StreamZone faces regarding the US CLOUD Act and the Schrems II judgment. What creates the friction between complying with a US data warrant and respecting EU data transfer restrictions?
3. Dynamic Injunctions and Intermediary Liability
Regarding the remedies applied in Section 5:
How does a "Dynamic Injunction" solve the "whack-a-mole" problem of live-streaming piracy compared to a standard static injunction?
The court demands a shift from "Notice and Takedown" to "Notice and Staydown." How does this shift change the technical obligations of the platform (i.e., what must they do before content appears)?
References
Access Now. (2021). One Year of GDPR: No more excuses.
UK Supreme Court. (2018). Cartier International AG v British Sky Broadcasting Ltd. [2018] UKSC 28.
Balkin, J. M. (2018). Free Speech is a Triangle. Columbia Law Review, 118, 2011.
Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
Bygrave, L. A. (2017). Data Protection Law: Approaching Its Rationale, Logic and Limits. Kluwer Law International.
Chander, A., & Le, U. P. (2015). Data Nationalism. Emory Law Journal, 64, 677.
Council of Europe. (2001). Convention on Cybercrime. ETS No. 185.
Court of Justice of the European Union (CJEU). (2019). Google LLC v. Commission nationale de l'informatique et des libertés (CNIL). Case C-507/17.
Court of Justice of the European Union (CJEU). (2020). Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems (Schrems II). Case C-311/18.
Court of Justice of the European Union (CJEU). (2023). UI v Österreichische Post AG. Case C-300/21.
EFF. (2021). Privacy Standards Must Not Be Compromised in the New Budapest Convention Protocol.
European Commission. (2018). Proposal for a Regulation on European Production and Preservation Orders for electronic evidence in criminal matters. COM/2018/225 final.
European Commission. (2020). Proposal for a Regulation on a Single Market For Digital Services (Digital Services Act).
European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
European Data Protection Board. (2018). First Plenary Meeting of the EDPB.
European Union. (2016). General Data Protection Regulation (GDPR). Regulation (EU) 2016/679.
European Union. (2019). Cybersecurity Act. Regulation (EU) 2019/881.
European Union. (2022). Digital Markets Act (DMA). Regulation (EU) 2022/1925.
Europol. (2020). Internet Organised Crime Threat Assessment (IOCTA).
Gillespie, T. (2018). Custodians of the Internet. Yale University Press.
Goldsmith, J., & Wu, T. (2006). Who Controls the Internet? Illusions of a Borderless World. Oxford University Press.
Google Transparency Report. (2023). Global requests for user information.
Gunningham, N. (2018). Corporate Environmental Responsibility. Routledge.
Hijmans, H. (2016). The European Union as Guardian of Internet Privacy: The Story of Art 16 TFEU. Springer.
Human Rights Watch. (2020). Erosions of Free Speech in the Name of Counterterrorism.
Katsh, E., & Rabinovich-Einy, O. (2017). Digital Justice: Technology and the Internet of Disputes. Oxford University Press.
Kerber, W. (2016). Digital Markets, Data, and Privacy: Competition Law, Consumer Law and Data Protection. Journal of Intellectual Property Law & Practice.
Klonick, K. (2020). The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression. Yale Law Journal, 129, 2418.
Kulk, S. (2019). Internet Intermediaries and Copyright Law. Kluwer Law International.
Lessig, L. (1999). Code and Other Laws of Cyberspace. Basic Books.
Mathur, A., et al. (2019). Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites. CSCW '19.
Microsoft. (2022). Cloud for Sovereignty.
Mueller, M. (2017). Will the Internet Fragment? Polity.
Mueller, M. (2019). DNS Abuse: The definition and scope of the problem. Internet Governance Project.
Ryngaert, C. (2015). Jurisdiction in International Law. Oxford University Press.
Santa Clara Principles. (2018). The Santa Clara Principles on Transparency and Accountability in Content Moderation.
Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4).
Solove, D. J., & Hartzog, W. (2014). The FTC and the New Common Law of Privacy. Columbia Law Review, 114, 583.
Supreme Court of Canada. (2020). Uber Technologies Inc. v. Heller. 2020 SCC 16.
Svantesson, D. (2017). Solving the Internet Jurisdiction Puzzle. Oxford University Press.
Swire, P., & Hemmings, J. (2015). Re-engineering the Mutual Legal Assistance Treaty Process. New York University Law Review.
UK Supreme Court. (2021). Lloyd v Google LLC. [2021] UKSC 50.
United Nations General Assembly. (2019). Resolution 74/247.
United States Congress. (2018). Clarifying Lawful Overseas Use of Data Act (CLOUD Act).
Urban, J. M., Karaganis, J., & Schofield, B. L. (2016). Notice and Takedown in Everyday Practice. UC Berkeley Public Law Research Paper.
Zittrain, J. (2003). Internet Points of Control. Boston College Law Review, 44, 653.
6
Fundamental digital human rights
2
2
10
14
Lecture text
Questions
Cases
References
7
Ethics of social networks and special rights protection institutions
2
2
5
9
Lecture text
Section 1: Theoretical Foundations of Digital Ethics
The ethics of social networks cannot be understood through traditional deontological or utilitarian frameworks alone; they require a specialized "information ethics" that addresses the unique ontology of the digital sphere. Luciano Floridi, a pioneer in this field, argues that the "infosphere" constitutes a new environment where information entities have intrinsic moral worth. In this view, the destruction or corruption of information (entropy) is a form of evil. Applied to social networks, this means that platforms are not merely neutral conduits for data but active moral agents that shape the informational environment. The design choices that determine which post appears at the top of a feed are ethical decisions that prioritize certain values—engagement, outrage, or truth—over others. Therefore, the study of social network ethics begins with the rejection of "technological neutrality," acknowledging that every line of code embodies a value judgment by its creator (Floridi, 2014).
A central ethical concern is the concept of "Value Sensitive Design" (VSD). This theoretical approach posits that technology must be designed with human values—such as privacy, autonomy, and dignity—embedded from the outset. In the context of social networks, a failure of VSD is evident in systems optimized solely for "time on site." When an algorithm is programmed to maximize engagement, it often inadvertently maximizes polarization and extremism, as these emotions drive the highest interaction. Ethical design would require replacing these engagement metrics with "well-being metrics," optimizing the system for the user's flourishing rather than their exploitation. This shift requires a fundamental reimagining of the engineering curriculum to include moral philosophy as a core competency (Friedman et al., 2013).
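As a purely illustrative sketch of this shift, the following Python fragment contrasts an engagement-maximizing ranking score with a well-being-weighted alternative. The signal names (click_probability, expected_watch_time, reported_meaningfulness, predicted_regret) and the weights are invented assumptions for the example, not a description of any real platform's model.

def engagement_score(item):
    # Optimizes "time on site": rewards whatever is most likely to be clicked and watched.
    return 0.7 * item["click_probability"] + 0.3 * item["expected_watch_time"]

def well_being_score(item):
    # A Value Sensitive Design alternative: engagement still counts,
    # but user-reported meaningfulness is rewarded and predicted regret is penalized.
    return (0.4 * item["click_probability"]
            + 0.4 * item["reported_meaningfulness"]
            - 0.2 * item["predicted_regret"])

def rank_feed(items, score_fn):
    # The same ranking machinery serves either value system; the ethics live in score_fn.
    return sorted(items, key=score_fn, reverse=True)

The point of the sketch is that the value judgment embodied in code is literally visible in the choice of objective function handed to rank_feed.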
The "Trolley Problem" of the digital age manifests in algorithmic decision-making. Algorithms are constantly forced to make trade-offs between competing values: free speech versus safety, privacy versus security, transparency versus trade secrets. For instance, in recommendation systems, an algorithm must decide whether to show a user a piece of challenging political content (promoting diversity of thought) or a comforting confirmation of their existing bias (promoting user satisfaction). These automated choices have profound ethical implications for democratic discourse. The ethical failure arises when these decisions are made based on commercial imperatives rather than civic responsibility, effectively privatizing the public sphere's moral governance without public accountability (Mittelstadt et al., 2016).
"Epistemic responsibility" refers to the ethical duty of social networks to curate truth. Traditionally, platforms argued they were not arbiters of truth, protecting themselves under liability shields like Section 230 of the Communications Decency Act. However, the proliferation of disinformation and deepfakes has made this stance ethically untenable. Information ethics suggests that platforms have a duty of care to the "information ecosystem." Just as a factory cannot dump toxic waste into a river, a social network should not dump toxic disinformation into the public consciousness. This duty does not necessarily demand censorship, but it demands "frictional design" that slows the spread of unverified information, prioritizing the integrity of knowledge over the velocity of viral sharing (Frost, 2019).
The ethics of "contextual integrity," developed by Helen Nissenbaum, is crucial for understanding privacy violations on social networks. Privacy is not simply secrecy; it is the appropriate flow of information within a specific context. A photo shared with friends on Facebook is "public" in one sense but "private" in another. When a platform takes that photo and uses it to train a facial recognition algorithm or targets ads based on it, it violates the contextual norms of the original transmission. The ethical breach lies in the "context collapse," where information provided for social connection is weaponized for surveillance and profit. Respecting contextual integrity requires platforms to honor the user's intended audience and purpose, not just their technical consent (Nissenbaum, 2010).
Algorithmic bias represents a systemic ethical failure in social networking. Algorithms trained on historical data often inherit and amplify the prejudices of the past. For example, an ad delivery algorithm might show high-paying job offers primarily to men, or a moderation algorithm might disproportionately flag African American Vernacular English (AAVE) as toxic speech. These are not just technical glitches but moral wrongs that perpetuate social inequality. The ethics of AI require "fairness auditing" to ensure that the mathematical models do not act as agents of discrimination. This demands a shift from "blind" algorithms to "justice-aware" systems that actively correct for historical disparities (Noble, 2018).
The concept of "moral distance" in digital interactions explains the toxicity often found online. The screen acts as a barrier that dehumanizes the other, removing the visceral feedback loops (like facial expressions) that regulate empathy in face-to-face communication. This phenomenon, known as the "online disinhibition effect," allows users to engage in harassment and cruelty they would never commit offline. Social networks have an ethical obligation to design interfaces that "re-humanize" the user, perhaps by introducing friction or reminders of the human recipient before a toxic comment is posted. The design of the interface itself can either encourage our worst impulses or nurture our better angels (Suler, 2004).
"Surveillance Realism" describes the ethical resignation of the user base. It is the widespread acceptance that constant monitoring is the inevitable price of using digital services. This cynicism is ethically corrosive because it normalizes the loss of autonomy. Social networks exploit this by creating "forced choices" where the only alternative to surveillance is social isolation. An ethical framework must reject this determinism, asserting that technology can and should be compatible with liberty. It calls for the development of "privacy-preserving technologies" that prove surveillance is a business choice, not a technological necessity (Dencik, 2018).
The ethics of the "quantified self" on social networks involves the reduction of human experience to data points. Likes, shares, and follower counts turn social life into a game with scores. This "gamification" of friendship commodifies relationships, valuing them only for their metrics. Philosophers argue this alienates the individual from the true meaning of connection. The ethical challenge for platforms is to design spaces for "qualitative" interaction that do not rely on dopamine-driven feedback loops, fostering genuine community rather than performative engagement (Whitson, 2013).
"Digital virtue ethics" focuses on the character of the user. In the Aristotelian tradition, virtue is developed through habit. If social networks are designed to reward vanity, impatience, and outrage, they cultivate vicious character traits in the citizenry. Conversely, a virtuous network would be designed to reward patience, empathy, and reflection. The "techno-moral" shaping of the user is an inescapable consequence of platform design. Therefore, designers must ask not just "what will the user do?" but "who will the user become?" after repeated use of the system (Vallor, 2016).
The problem of "dirty hands" in content moderation involves the trauma inflicted on the human moderators who police these networks. To keep the feed clean for the general public, thousands of low-paid workers in the Global South must view unspeakable violence and exploitation daily. This creates an ethical caste system where the psychological safety of the privileged user is purchased with the mental health of the invisible moderator. Ethical labor practices demand that these workers be provided with robust psychological support, fair wages, and that the burden of moderation be shifted towards AI or distributed community models where possible (Roberts, 2019).
Finally, the ethics of social networks must grapple with the "power asymmetry" between the platform and the user. The platform knows everything about the user, while the user knows nothing about the platform's inner workings. This opacity prevents informed ethical consent. The principle of "explicability" demands that platforms be transparent about their logic, purposes, and data flows. Without transparency, there can be no trust, and without trust, the social contract of the digital network is fundamentally predatory (Pasquale, 2015).
Section 2: The Attention Economy and the Ethics of Persuasion
The economic model of most social networks is the "attention economy," where human attention is the scarce commodity harvested and sold to advertisers. This business model creates a fundamental conflict of interest between the platform's profit motive and the user's best interests. To maximize profit, the platform must maximize the time a user spends on the site, often by exploiting cognitive vulnerabilities. This is described by Tristan Harris as a "race to the bottom of the brain stem," where platforms compete to trigger the most primal instincts of fear and outrage to capture attention. The ethical critique is that this treats the human being as a means to an end—a resource to be mined—rather than an end in themselves, violating Kantian ethics (Williams, 2018).
"Persuasive Technology," a field founded by B.J. Fogg, studies how computers can be designed to change human attitudes and behaviors. While initially intended for positive habits (like exercise), these techniques have been weaponized by social networks to induce compulsive use. Features like "pull-to-refresh" (mimicking a slot machine), infinite scroll (removing stopping cues), and variable rewards (unpredictable notifications) are deliberate psychological manipulations. The ethics of persuasion distinguishes between "nudging" (helping users do what they want) and "coercion" (manipulating users to do what the platform wants). Current social media design often crosses the line into coercion, undermining user autonomy (Fogg, 2003).
"Dark patterns" are user interface designs specifically crafted to trick users into taking actions they did not intend, such as sharing more data than necessary or finding it impossible to delete an account. These deceptive practices are ethically indefensible as they rely on exploiting the user's lack of attention or understanding. The "roach motel" pattern, where it is easy to get in but hard to get out, is a common example. Ethical design requires "fair patterns" that facilitate honest decision-making, ensuring that the path to privacy is as frictionless as the path to exposure (Brignull, 2011).
The impact on mental health, particularly for adolescents, is a severe ethical crisis. Internal research from platforms (such as the "Facebook Files" regarding Instagram) has shown that social comparison mechanics can exacerbate body image issues, anxiety, and depression in teenage girls. The ethical failure here is "negligence"—knowing that a product causes harm but failing to redesign it because doing so would reduce engagement. The precautionary principle suggests that when a technology poses a risk of significant harm to vulnerable populations, the burden of proof is on the company to demonstrate safety before deployment (Haidt & Twenge, 2021).
"Dopamine loops" exploit the brain's reward system. The instantaneous feedback of a "like" releases dopamine, creating a short-term pleasure loop that degrades long-term focus and satisfaction. This biological hacking creates a dependency similar to substance addiction. While not a chemical substance, the behavioral reinforcement mechanisms are designed to override executive control. The ethical responsibility of the platform is akin to that of the tobacco industry; if the product is designed to be addictive, the provider bears responsibility for the health consequences of that addiction (Alter, 2017).
The "Right to Attention" is proposed as a new human right to counter this exploitation. It asserts that the freedom of thought includes the freedom to control one's own attention, free from external manipulation. If a platform is constantly interrupting the user's train of thought with notifications, it is effectively trespassing on their cognitive liberty. Ethical regulations would require "attention-respecting" defaults, such as batching notifications or turning off auto-play, to restore the user's cognitive sovereignty (Wu, 2016).
The commodification of "social validation" turns human connection into a transactional market. When self-worth is tied to quantitative metrics, users begin to perform their lives for an audience rather than living them. This "performative self" leads to a dissociation from authentic experience. The ethical critique focuses on the "corruption" of social goods; friendship and approval are qualitative goods that are degraded when converted into quantitative currency. Platforms have an ethical duty to obscure these metrics (e.g., hiding like counts) to decommodify the social experience (Seymour, 2019).
"Micro-targeting" and behavioral prediction allow platforms to manipulate consumer and voter behavior with frightening precision. By analyzing thousands of data points, algorithms can identify a user's emotional state and target them with a message at the exact moment they are most vulnerable. This "predatory analytics" bypasses rational scrutiny. In the political sphere, this undermines the democratic ideal of the autonomous voter. The ethical limit of persuasion must be drawn where the user's capacity for rational deliberation is bypassed by subliminal manipulation (Susser et al., 2019).
The "filter bubble" effect, while often discussed in terms of politics, is also an ethical design flaw. By showing users only what they agree with to keep them happy and engaged, platforms intellectually isolate them. This creates a fragmented reality where shared truth becomes impossible. The ethical duty of a "public square" (which these platforms claim to be) is to facilitate the collision of differing viewpoints, not to create comfortable silos. The refusal to design for serendipity and diversity is a refusal of civic responsibility (Pariser, 2011).
Children are uniquely vulnerable to the attention economy. Their developing brains lack the impulse control to resist persuasive design. The "Age Appropriate Design Code" (or Children's Code) in the UK establishes that the best interests of the child must be a primary consideration in design. This legal and ethical standard requires high privacy defaults and the removal of nudge techniques for minors. It rejects the notion that a child is just a "short adult" and demands a separate ethical standard for the digital treatment of youth (Information Commissioner's Office, 2020).
The "engagement trap" forces content creators to adopt sensationalist and extreme behaviors to survive in the algorithm. To get noticed, one must be louder, more radical, or more visually arresting than the competition. This degrades the quality of public culture and incentivizes harmful challenges and pranks. The platform's ethical responsibility extends to the incentives it creates; if the system rewards antisocial behavior, the platform is complicit in that behavior (Phillips, 2015).
Finally, the movement for "Time Well Spent" advocates for a shift from extracting value from the user to adding value to the user's life. This requires new metrics of success, such as "net promoter score" or qualitative surveys asking if the user felt the time was meaningful. The ethical horizon for social networks is to transform from "attention merchants" to "relationship utilities," where the goal is to facilitate offline connection and personal growth rather than endless online consumption.
Section 3: Specialized Rights Protection Institutions – The Public Sector
The enforcement of ethics and rights in the digital sphere is no longer left solely to courts; it has given rise to a new ecosystem of specialized administrative institutions. Foremost among these are Data Protection Authorities (DPAs), such as the CNIL in France or the ICO in the UK. Unlike traditional ombudsmen who issue non-binding recommendations, modern DPAs under the GDPR are powerful regulators with the authority to levy massive fines, ban data processing, and enter corporate premises. They act as the designated guardians of personal data, institutionalizing the protection of privacy as an administrative function rather than just a private cause of action (Hijmans, 2016).
The European Data Protection Board (EDPB) represents a novel form of "networked governance." It brings together national regulators to ensure the consistent application of rights across the EU. This institution addresses the problem of cross-border data flows, ensuring that a violation in Dublin is treated with the same severity as one in Berlin. The EDPB issues binding decisions on disputes between national regulators, effectively acting as a supreme administrative court for digital rights. This institutional design reflects the ethical principle of universality—rights should not depend on the user's geographic location (European Data Protection Board, 2018).
In the context of the European Union's Digital Services Act (DSA), a new class of institutions called "Digital Services Coordinators" (DSCs) is being established. These bodies are responsible for supervising intermediaries and enforcing rules on content moderation and transparency. Unlike DPAs, which focus on privacy, DSCs focus on the safety of the information environment, tackling issues like disinformation, hate speech, and algorithmic risk. They possess the power to certify "trusted flaggers" and vet academic researchers for data access, thereby institutionalizing the oversight of the public sphere (European Commission, 2022).
National Human Rights Institutions (NHRIs) are adapting their traditional mandates to the digital age. Accredited under the Paris Principles, these independent state bodies monitor government compliance with human rights treaties. Increasingly, NHRIs are launching inquiries into digital ID systems, surveillance legislation, and the digital divide. They serve as a bridge between international human rights law and domestic digital policy, translating abstract treaty obligations into concrete recommendations for digital governance. Their ethical role is to ensure that the "digital state" remains a "human rights state" (Global Alliance of National Human Rights Institutions, 2019).
Consumer Protection Agencies (like the FTC in the US) play a critical, albeit indirect, role in digital rights protection. By prosecuting "unfair and deceptive practices," they enforce privacy policies as contracts. If a social network claims to respect privacy but sells data, it is a consumer fraud issue. These institutions treat digital rights violations as market failures. Their ethical mandate is to ensure transparency and fair dealing in the data economy, protecting the user as a vulnerable consumer against the monopoly power of platforms (Solove & Hartzog, 2014).
Information Commissioners often possess a dual mandate: protecting privacy (data protection) and ensuring transparency (freedom of information). This tension reflects the balance between the right to know and the right to hide. In the digital age, these institutions must arbitrate complex disputes, such as whether a public official's WhatsApp messages are public records. Their institutional ethics are grounded in the promotion of "open government" while safeguarding "citizen privacy," a delicate balancing act in a world of leaking data (Information Commissioner's Office, 2017).
Cybersecurity Agencies (like ANSSI in France or CISA in the US) are institutions of "digital defense." While their primary focus is national security, they effectively protect the "right to security" of the citizenry. By mandating security standards for critical infrastructure and social networks, they prevent data breaches that would violate users' rights. The ethical challenge for these institutions is to protect the public without engaging in excessive surveillance themselves, maintaining a firewall between defense and espionage (Forcepoint, 2018).
The institution of the " Ombudsman" provides a low-threshold complaint mechanism for citizens aggrieved by digital administration. In the context of algorithmic decision-making (e.g., welfare robots), the Ombudsman investigates complaints of maladministration and bias. They provide a human remedy to automated injustice. Their ethical function is to re-inject humanity into the bureaucratic machine, ensuring that citizens are treated with dignity rather than as edge cases in a database (Reif, 2020).
Judicial oversight remains the ultimate institutional backstop. Specialized "Cyber Courts" or designated internet tribunals are emerging in countries like China (Hangzhou Internet Court). These courts conduct proceedings entirely online and specialize in digital disputes like e-commerce and copyright. While efficient, the ethical risk is that "fast justice" may become "rough justice." In democratic systems, supreme courts (like the CJEU or US Supreme Court) act as the final interpreters of how constitutional rights apply to new technologies, setting the ethical boundaries for all other institutions (Zuo, 2019).
Global institutions like the United Nations utilize "Special Rapporteurs" to monitor digital rights. The Special Rapporteur on the Right to Privacy and the Special Rapporteur on Freedom of Opinion and Expression act as global watchdogs. They issue thematic reports that establish normative standards (soft law) on issues like encryption and anonymity. While they lack enforcement power, their institutional authority shapes international law and provides moral ammunition for civil society advocates (United Nations Human Rights Council, 2015).
"Algorithm Audit Offices" are a proposed institutional innovation. As AI regulation tightens (e.g., the EU AI Act), there is a need for specialized bodies capable of technically inspecting code for bias and safety. These institutions would function like financial auditors but for algorithms, certifying that a social network's ranking system complies with non-discrimination laws. This institutionalizes the "ethics of explicability," moving from voluntary transparency to mandatory inspection (Möhlmann, 2021).
Finally, the interaction between these institutions requires "cooperative supervision." Digital harms rarely fall into neat silos; a data breach involves privacy, security, and consumer rights. Therefore, formal mechanisms like the "Digital Clearinghouse" are emerging to allow privacy, competition, and consumer regulators to coordinate their enforcement actions. This "institutional mesh" is designed to match the complexity of the platforms they regulate, ensuring that no aspect of digital power escapes oversight.
Section 4: Private and Hybrid Institutions – The Oversight Board and Beyond
The limitations of state regulation have led to the emergence of private and hybrid institutions of rights protection. The most prominent example is the Meta Oversight Board (OSB). Created by Facebook (now Meta) to adjudicate difficult content moderation decisions, the OSB operates as a "Supreme Court" for the platform. It is funded by an independent trust to ensure autonomy from corporate management. The Board reviews cases where users appeal the removal (or retention) of content and issues binding decisions that Meta must implement. This institution represents a radical experiment in "private constitutionalism," where a corporation voluntarily submits to an external check on its sovereign power over speech (Klonick, 2020).
The OSB applies international human rights law (IHRL) as its primary interpretive framework, effectively incorporating public law standards into private adjudication. This is a significant ethical shift, as it implies that the private terms of service should be read in light of the UN Guiding Principles on Business and Human Rights. By forcing the company to justify its actions based on necessity and proportionality, the OSB institutionalizes the "culture of justification" within the corporate structure. However, critics argue that without the power to subpoena data or influence the underlying algorithms, the Board's power is limited to the "tip of the iceberg" of specific content pieces (Douek, 2021).
"Trusted Flaggers" are specialized institutional partners—such as NGOs, government agencies, and expertise centers—granted special status by platforms. Their reports of illegal content (like hate speech or child abuse material) are prioritized for immediate review. This creates a "fast lane" for rights protection. While efficient, this institutional arrangement raises ethical questions about equality. It creates a two-tiered justice system where privileged actors have direct access to the "judges," while ordinary users often face automated rejection. The transparency of who gets to be a Trusted Flagger is a critical governance issue (European Commission, 2020).
Civil society organizations (CSOs) act as informal but powerful institutions of accountability. Groups like AlgorithmWatch, Electronic Frontier Foundation (EFF), and Access Now function as external watchdogs. They conduct "adversarial audits" of social networks, scraping data to reveal bias or shadow banning that the platforms try to hide. These institutions provide the "counter-power" necessary for a healthy ecosystem. Their ethical role is to represent the public interest against the corporate interest, often using strategic litigation to force legal changes (Kravets, 2020).
Corporate "Ethics Boards" or "AI Ethics Councils" are internal institutions established by tech companies to guide their development. Ideally, these bodies provide a moral compass, vetoing products that violate human rights. However, they are frequently criticized for "ethics washing"—providing a veneer of responsibility while lacking real power. The dissolution of Google's AI Ethics Board shortly after its formation highlights the fragility of these internal institutions when they conflict with profit motives or employee activism. For these institutions to be effective, they require structural independence and the power to halt product launches (Phan et al., 2021).
Multi-stakeholder initiatives like the Global Network Initiative (GNI) bring together companies, NGOs, and academics to set standards for freedom of expression and privacy. Member companies agree to independent assessments of their compliance with GNI principles. This institutionalizes "peer pressure" and collective responsibility. It allows companies to push back against government overreach (e.g., censorship demands) by citing their commitments to the international coalition. This hybrid institution creates a buffer zone between the state and the corporation, protecting user rights through collective bargaining power (Global Network Initiative, 2017).
The "Social Media Council" is a proposed model for a self-regulatory body similar to a Press Council. It would be an industry-wide institution that sets ethical standards and adjudicates complaints, independent of any single platform. This would solve the problem of fragmentation, where being banned on one platform leads to migration to another. A unified council could establish a universal "code of ethics" for the social media industry, professionalizing the field of content moderation (Article 19, 2019).
Fact-checking organizations (like Snopes or Full Fact) have become institutionalized components of the social media ecosystem. Platforms contract these third-party bodies to verify content. When a post is labeled "false," its distribution is throttled. This delegates the "epistemic authority" (the power to decide truth) to external journalistic institutions. The ethical integrity of these fact-checkers is paramount; they must be transparent about their funding and methodology to avoid accusations of bias. They act as the "immune system" of the information body politic (Graves & Cherubini, 2016).
"Red Teams" are internal institutional structures tasked with attacking the company's own systems to find vulnerabilities. In the context of ethics, "Societal Red Teams" simulate bad actors to see how a new feature could be abused to harm democracy or marginalized groups. Institutionalizing this adversarial mindset is crucial for "safety by design." It formalizes the ethical skepticism that prevents unintended consequences (Belman & Dixon, 2021).
Ombudspersons within corporations serve as an internal avenue for user grievances. Unlike customer support, which follows scripts, an Ombudsman has the authority to investigate systemic issues and advocate for the user's rights within the company. While rare in big tech, mandating such an institution would provide a necessary pressure valve for the "bureaucratic violence" of automated account suspensions (bureaucracy that cannot be reasoned with) (Citron, 2014).
Whistleblower support mechanisms are the institutions of last resort. When internal oversight fails, whistleblowers like Frances Haugen (The Facebook Papers) expose the truth. Legal protections and NGOs that support whistleblowers (like Whistleblower Aid) are vital institutions of rights protection. They ensure that the "duty of loyalty" to the company does not override the "duty of care" to society. They operationalize the ethical principle that transparency is the best disinfectant (Cone, 2021).
Finally, the "Fediverse" (e.g., Mastodon) represents a decentralized institutional model. Here, rights protection is handled by individual server administrators rather than a central corporation. If a user disagrees with the moderation policy of one server, they can move to another. This "exit rights" model relies on the institution of federalism. It returns ethical agency to the community, allowing different groups to define their own norms of acceptable speech, provided they remain interoperable (Doctorow, 2022).
Section 5: Enforcement Mechanisms and the Future of Compliance
The effectiveness of ethical norms and rights protection institutions depends entirely on the mechanisms of enforcement. The transition from "soft law" (ethical guidelines) to "hard law" (regulation) marks the current era of digital governance. The primary enforcement mechanism is the administrative fine. Under the GDPR, fines can reach €20 million or 4% of global turnover; under the DSA, they can reach 6%. These "dissuasive penalties" are designed to change the calculus of the boardroom. When the cost of non-compliance exceeds the profit of violation, ethical behavior becomes a fiduciary duty to shareholders. This economic enforcement translates moral values into financial imperatives (European Union, 2022).
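A worked illustration of the penalty arithmetic, using an invented turnover figure: under the GDPR the ceiling is the higher of EUR 20 million or 4% of worldwide annual turnover, and under the DSA it is 6% of turnover.

def gdpr_max_fine(global_turnover_eur):
    # GDPR Article 83(5): up to EUR 20 million or 4% of worldwide annual
    # turnover, whichever is higher.
    return max(20_000_000, 0.04 * global_turnover_eur)

def dsa_max_fine(global_turnover_eur):
    # DSA: up to 6% of worldwide annual turnover.
    return 0.06 * global_turnover_eur

turnover = 10_000_000_000   # hypothetical platform with EUR 10 billion turnover
print(gdpr_max_fine(turnover))   # EUR 400 million
print(dsa_max_fine(turnover))    # EUR 600 million

At that scale the ceiling dwarfs any plausible profit from a single violation, which is what makes the penalty "dissuasive" in the boardroom sense described above.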
"Algorithmic Disgorgement" (or model deletion) is a novel and severe enforcement tool used by the FTC in the United States. If a company collects data illegally (e.g., without consent) and uses it to train an algorithm, the regulator can order not just the deletion of the data, but the destruction of the algorithm itself. This "fruit of the poisonous tree" doctrine prevents companies from benefiting from their ethical breaches. It treats the algorithm as contraband, creating a powerful deterrent against "move fast and break things" data practices (Kaye, 2021).
Personal liability for executives is emerging as the "nuclear option" of enforcement. Proposed legislation in the UK and Ireland considers holding senior managers criminally liable if they fail to protect children from harmful content. This pierces the corporate veil, ensuring that decision-makers cannot hide behind the legal entity. The threat of jail time or personal bans forces executives to treat digital safety with the same seriousness as financial reporting or workplace safety (United Kingdom Parliament, 2023).
"Structural separation" or antitrust remedies are used to enforce rights by breaking up power. If a platform's monopoly power allows it to ignore user privacy without losing customers, the remedy is to break the monopoly. By mandating interoperability or forcing the divestiture of Instagram/WhatsApp, regulators aim to restore "privacy competition." The ethical theory is that a competitive market forces companies to compete on trust and safety, whereas a monopoly breeds indifference (Khan, 2017).
"Auditability" is a procedural enforcement mechanism. The DSA requires "Very Large Online Platforms" (VLOPs) to undergo annual independent audits at their own expense. These audits check compliance with risk management obligations regarding systemic risks like disinformation. This creates a "compliance industry" similar to financial auditing. The enforcement power lies in the transparency of the audit report; a failed audit invites regulatory intervention and public backlash (Möhlmann, 2021).
Data portability and interoperability act as "market-based enforcement." If users can easily take their social graph and move to a competitor, platforms are disciplined by the threat of user exit. The GDPR's right to portability and the DMA's interoperability requirements are designed to lower the switching costs. This empowers the user to "vote with their data," enforcing ethical standards through market choice rather than state coercion (Engels, 2016).
"Class action lawsuits" and collective redress allow users to enforce their rights horizontally. Because individual damages for privacy violations are often small ("rational apathy"), collective actions aggregate these claims into a formidable weapon. In the US and increasingly in the EU (via the Representative Actions Directive), class actions punish platforms for data breaches and privacy intrusions, serving as a private enforcement mechanism that complements public regulation (Mulheron, 2018).
Reputational sanctions, or "naming and shaming," remain a potent enforcement tool in the trust economy. When DPAs publish the names of violators, or when civil society ranks companies on their digital rights performance (e.g., Ranking Digital Rights index), it impacts the brand equity. In a market where "brand purpose" matters to consumers and employees, the stigma of being an "unethical" platform can lead to talent drain and advertiser boycotts (Gunningham, 2019).
"Compliance by Design" is the future of enforcement. Rather than regulating after the fact, regulators are moving towards ex-ante regulation. This means that privacy and safety features must be demonstrated before a product is released. The "regulatory sandbox" allows companies to test innovations under the supervision of the regulator, ensuring compliance is baked in. This shifts the enforcement relation from adversarial punishment to collaborative guidance (Financial Conduct Authority, 2015).
International cooperation agreements (like the Global Privacy Assembly) attempt to solve the enforcement gap across borders. The "Brussels Effect" means that EU enforcement often sets the global standard, but true global enforcement requires treaties. The challenge remains "jurisdictional arbitrage," where platforms base themselves in countries with weak enforcement. Harmonizing enforcement standards is the only way to prevent the existence of "ethical havens" (Bradford, 2020).
"Vigilante enforcement" by hacker collectives (like Anonymous) represents the chaotic edge of rights protection. When legal institutions fail, these groups may hack and leak the data of unethical platforms (e.g., the Parler hack) to expose extremism. While illegal, this phenomenon highlights the "enforcement vacuum" that exists when the state is too slow to act. It serves as a reminder that if institutions do not enforce ethics, the community may take justice into its own hands (Coleman, 2014).
Finally, the ultimate enforcement mechanism is the "social license to operate." If a platform consistently violates the ethical norms of a society, it risks a mass exodus that no algorithm can reverse (the "MySpace effect"). Regulation can fine a company, but only the users can kill it. The sustainability of social networks depends on maintaining a fragile trust; once that is broken, the network effects that powered their rise can work in reverse to power their collapse.
Questions
1. Contextual Integrity and Context Collapse
According to Helen Nissenbaum’s theory of "contextual integrity," why is the use of a photo shared with friends for facial recognition training considered a privacy violation, and how does "context collapse" define this ethical breach?
2. Algorithmic Bias and Fairness Auditing
Section 1 describes algorithmic bias as a "moral wrong" rather than a technical glitch. Give two examples provided in the text of how algorithms amplify historical prejudices and explain the proposed solution of "fairness auditing."
3. The Ethics of "Dark Patterns"
Define "dark patterns" in user interface design. How does the "roach motel" pattern operate, and why is it considered an ethically indefensible practice compared to legitimate persuasion?
4. The Age Appropriate Design Code
How does the UK's "Age Appropriate Design Code" challenge the traditional design of social networks regarding minors, and what specific standard does it establish regarding the privacy of children?
5. The Role of the EDPB
What is the primary function of the European Data Protection Board (EDPB) in the context of "networked governance," and how does it address the issue of cross-border data flows within the EU?
6. Digital Services Coordinators (DSCs)
Under the European Union's Digital Services Act (DSA), how do "Digital Services Coordinators" differ from traditional Data Protection Authorities (DPAs) in terms of their focus and enforcement powers?
7. The Meta Oversight Board
Section 4 describes the Meta Oversight Board (OSB) as an experiment in "private constitutionalism." What legal framework does the OSB use to interpret content moderation decisions, and what is the significance of this choice?
8. Trusted Flaggers
What is the function of "Trusted Flaggers" within the content moderation ecosystem? While efficient, what ethical concern does the text raise regarding the creation of a "two-tiered justice system"?
9. Algorithmic Disgorgement
Explain the enforcement mechanism known as "Algorithmic Disgorgement" (or model deletion) used by the FTC. How does the "fruit of the poisonous tree" doctrine apply to algorithms trained on illegally collected data?
10. Personal Liability for Executives
Section 5 refers to personal liability for executives as the "nuclear option" of enforcement. How does piercing the corporate veil to target senior managers change the incentive structure regarding digital safety?
Cases
Case Study: The "Euphoria" Algorithm Crisis
The Ethical Design Failure
"Euphoria" is a rapidly growing social media app popular among teenagers. Its core feature is the "Mood Feed," an algorithm designed to maximize "Time on Site" (Section 1). To achieve this, the algorithm prioritizes high-arousal content. Internal research reveals that the algorithm disproportionately promotes content related to extreme dieting and self-harm to young female users because this content generates intense engagement. Despite knowing this, Euphoria's leadership decides not to alter the code, fearing a drop in revenue. They employ "Dark Patterns" (Section 2) such as "infinite scroll" and difficult-to-find account deletion settings to keep users locked into these "Dopamine Loops." This decision reflects a failure of "Value Sensitive Design," prioritizing commercial metrics over user well-being.
The Whistleblower and the Institutional Response
A data scientist at Euphoria, Alex, becomes the Whistleblower (Section 4). She leaks internal documents (The "Euphoria Papers") to a major newspaper, revealing that the company knew its product was toxic to mental health but chose "negligence" over safety.
The Regulatory Reaction: The scandal triggers a multi-pronged institutional response (Section 3). The national Data Protection Authority (DPA) launches an investigation into whether the processing of minors' data for behavioral targeting violates the "Age Appropriate Design Code."
The Safety Response: Simultaneously, the newly established "Digital Services Coordinator" (DSC) investigates whether the platform failed to mitigate systemic risks under the Digital Services Act (DSA).
The Attempt at Self-Regulation
In a bid to save its reputation, Euphoria creates an external "Ethics Council" (Section 4) composed of academics and NGOs. They promise to abide by the council's recommendations. However, when the Council demands the removal of the "infinite scroll" feature for users under 18, Euphoria's CEO vetoes the decision, citing "technical complexity." This leads to accusations of "Ethics Washing"—using the institution to simulate responsibility while maintaining the predatory business model. The Council members resign in protest, and a coalition of parents initiates a "Class Action Lawsuit" (Section 5) seeking damages for the mental health harm caused by the app.
Questions
1. The Ethics of "Persuasive Technology" vs. Coercion
Focusing on the design of the "Mood Feed" and infinite scroll:
Using the distinction made in Section 2, explain why Euphoria's design crosses the line from "nudging" into "coercion."
How does the "Race to the bottom of the brain stem" concept explain the algorithm's tendency to promote harmful content (self-harm/dieting) over neutral content?
2. Institutional Enforcement and the "Children's Code"
Alex's leak reveals that Euphoria ignored the safety of minors.
According to Section 2 (Attention Economy) and Section 3 (Public Institutions), what specific standard does the "Age Appropriate Design Code" establish that Euphoria violated? (Hint: It relates to the "best interests of the child").
What enforcement power does the DPA possess that makes this investigation more threatening to Euphoria than a simple bad news cycle?
3. "Ethics Washing" and Algorithmic Disgorgement
Euphoria’s Ethics Council failed.
Analyze the failure of the Ethics Council using Section 4. Why do internal/hybrid institutions often fail without "structural independence"?
If the regulator finds that the "Mood Feed" algorithm was trained on illegally collected data from minors, they might order "Algorithmic Disgorgement" (Section 5). What would this penalty require Euphoria to do, and why is it considered a stronger deterrent than a fine?
References
Alter, A. (2017). Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked. Penguin Press.
Article 19. (2019). The Social Media Council: Consultation Paper.
Belman, S., & Dixon, L. (2021). The Societal Ethical Red Teaming (SERT) Framework.
Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
Brignull, H. (2011). Dark Patterns: Deception vs. Honesty in UI Design. A List Apart.
Citron, D. K. (2014). Hate Crimes in Cyberspace. Harvard University Press.
Coleman, G. (2014). Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Verso.
Cone, C. (2021). Whistleblowing as a Check on the Power of Big Tech. Georgetown Law Technology Review.
Dencik, L. (2018). Surveillance Realism and the Politics of Datafication. Health and Technology.
Doctorow, C. (2022). Chokepoint Capitalism. Beacon Press.
Douek, E. (2021). The Rise of Content Cartels. Knight First Amendment Institute.
Engels, B. (2016). Data Portability and Online User Behavior. Marketing ZFP.
European Commission. (2020). Communication on Countering Disinformation.
European Commission. (2022). The Digital Services Act Package.
European Data Protection Board. (2018). Endorsement of GDPR Guidelines.
European Union. (2022). Regulation (EU) 2022/2065 (Digital Services Act).
Floridi, L. (2014). The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford University Press.
Fogg, B. J. (2003). Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann.
Forcepoint. (2018). The Role of Cybersecurity in Human Rights.
Friedman, B., Kahn, P. H., & Borning, A. (2013). Value Sensitive Design and Information Systems. Early Engagement and New Technologies.
Frost, R. (2019). The Ethics of Frictional Design. Journal of Design and Science.
Global Alliance of National Human Rights Institutions. (2019). NHRIs and the Digital Age.
Global Network Initiative. (2017). The GNI Principles.
Graves, L., & Cherubini, F. (2016). The Rise of Fact-Checking Sites in Europe. Reuters Institute.
Gunningham, N. (2019). Corporate Environmental Responsibility. Routledge.
Haidt, J., & Twenge, J. (2021). Social Media Use and Mental Health: A Review. Adolescent Health.
Hijmans, H. (2016). The European Union as Guardian of Internet Privacy. Springer.
Information Commissioner's Office. (2017). Big Data, AI, Machine Learning and Data Protection.
Information Commissioner's Office. (2020). Age Appropriate Design: A Code of Practice for Online Services.
Kaye, D. (2021). Speech Police: The Global Struggle to Govern the Internet. Columbia Global Reports.
Khan, L. (2017). Amazon's Antitrust Paradox. Yale Law Journal.
Klonick, K. (2020). The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression. Yale Law Journal.
Kravets, D. (2020). The Role of Civil Society in Digital Rights. EFF.
Mittelstadt, B. D., et al. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society.
Möhlmann, M. (2021). Algorithmic Accountability in Action. Organization Science.
Mulheron, R. (2018). Class Actions and Government. Cambridge University Press.
Nissenbaum, H. (2010). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Pariser, E. (2011). The Filter Bubble. Penguin Press.
Pasquale, F. (2015). The Black Box Society. Harvard University Press.
Phan, T., et al. (2021). Economies of Virtue: The Circulation of 'Ethics' in Big Tech. Science as Culture.
Reif, L. C. (2020). The Ombudsman, Good Governance and the International Human Rights System. Martinus Nijhoff.
Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.
Seymour, R. (2019). The Twittering Machine. Indigo.
Solove, D. J., & Hartzog, W. (2014). The FTC and the New Common Law of Privacy. Columbia Law Review.
Suler, J. (2004). The Online Disinhibition Effect. CyberPsychology & Behavior.
Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, Autonomy, and Manipulation. Internet Policy Review.
United Kingdom Parliament. (2023). Online Safety Bill.
United Nations Human Rights Council. (2015). Report of the Special Rapporteur on the Right to Privacy.
Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.
Whitson, J. R. (2013). Gaming the Quantified Self. Surveillance & Society.
Williams, J. (2018). Stand Out of Our Light. Cambridge University Press.
Wu, T. (2016). The Attention Merchants. Knopf.
Zuo, M. (2019). The Internet Courts in China. Computer Law & Security Review.
8
Violations of digital human rights and responsibility
2
2
5
9
Lecture text
Questions
Cases
References
9
Mechanisms and procedures for protecting digital human rights
2
2
5
9
Lecture text
Section 1: Judicial Mechanisms and Strategic Litigation
The judiciary serves as the ultimate guardian of digital rights, transforming abstract legal principles into enforceable realities. The "Right to an Effective Remedy," enshrined in Article 47 of the EU Charter of Fundamental Rights and Article 2 of the ICCPR, guarantees that any individual whose rights are violated in the digital sphere has recourse to a competent court. This judicial mechanism is fundamental because it provides an independent check on both state surveillance and corporate overreach. Without access to courts, digital rights remain "lex imperfecta"—laws without teeth. Courts act as the final arbiter in balancing competing interests, such as privacy versus national security, or freedom of expression versus reputation, ensuring that derogations from rights are strictly necessary and proportionate (European Union, 2012).
Constitutional courts play a pivotal role in defining the boundaries of digital rights. Landmark rulings often emerge from these high courts, striking down legislation that infringes on digital liberties. The German Federal Constitutional Court’s "Census Judgment" of 1983 is the archetype of such intervention, creating the right to "informational self-determination" out of general constitutional principles. Similarly, the Indian Supreme Court’s Puttaswamy judgment declared privacy a fundamental right, invalidating parts of the Aadhaar biometric ID law. These constitutional mechanisms function as a "negative legislator," annulling laws that fail to respect the digital integrity of the citizen (Puttaswamy v. Union of India, 2017).
International courts, such as the Court of Justice of the European Union (CJEU) and the European Court of Human Rights (ECtHR), provide a supranational mechanism for protection. The CJEU has been particularly aggressive in asserting digital rights, as seen in the Digital Rights Ireland and Schrems cases. By invalidating the Data Retention Directive and the Privacy Shield framework, the Court established that mass surveillance creates a systemic violation of rights that cannot be cured by minor safeguards. These judgments serve as "erga omnes" precedents, binding all member states and forcing a continental-wide revision of surveillance laws (CJEU, 2014).
"Strategic litigation" is a procedural mechanism used by civil society to force systemic change. Instead of waiting for a random case, organizations like NOYB (None of Your Business) or the Electronic Frontier Foundation (EFF) carefully select "test cases" that highlight a structural flaw in the law. Max Schrems’ decade-long legal battle against Facebook was not merely about his personal data but was a strategic move to challenge the legality of US-EU data flows. This mechanism allows individual grievances to scale into global policy shifts, using the courtroom as a venue for regulatory reform (Privacy International, 2019).
"Habeas Data" is a specific judicial procedure originating in Latin American constitutional law (e.g., Brazil, Colombia). Modeled after Habeas Corpus ("produce the body"), Habeas Data allows a citizen to petition a court to demand access to their own data held by government or private registries. It provides a rapid, summary proceeding to correct errors or delete sensitive information. This procedural innovation specifically empowers the citizen against the "dossier society," giving them a direct legal tool to physically reclaim control over their informational self (Guimarães, 2017).
Class action lawsuits and collective redress mechanisms address the "problem of scale" in digital violations. When a platform like Equifax loses the data of 147 million people, the individual damage to each person might be too small to justify a solitary lawsuit. Class actions aggregate these claims, creating a liability risk large enough to punish the corporation. The EU’s Representative Actions Directive institutionalizes this mechanism, allowing qualified consumer groups to sue on behalf of millions. This procedure converts "rational apathy" into "collective power," ensuring that widespread low-value harms do not go unpunished (Mulheron, 2018).
Injunctive relief is a critical procedural tool for stopping ongoing harm. In cases of "revenge porn" or defamation, monetary damages awarded years later are insufficient; the harm occurs in the continuous availability of the content. Courts can issue "interim injunctions" to order the immediate removal of content pending a full trial. "Dynamic injunctions" go further, ordering ISPs to block not just the specific URL but any future mirrors of the site. This mechanism adapts the slow pace of judicial orders to the rapid replication of the digital environment (Frosio, 2018).
Judicial review of surveillance warrants is the primary mechanism for controlling state espionage. In democratic systems, intelligence agencies must typically obtain a warrant from a judge before intercepting communications. This "ex-ante" control ensures that an independent magistrate verifies the probable cause. However, the effectiveness of this mechanism is often undermined by "FISA courts" (in the US) or secret tribunals that operate ex parte (without the target present). Reformers argue for the introduction of "special advocates" who can argue for privacy rights within these secret chambers to ensure a true adversarial process (Donohue, 2008).
The burden of proof often shifts in digital rights cases to protect the weaker party. In algorithmic discrimination cases, it is nearly impossible for a user to prove the code is biased. Therefore, procedural rules are evolving to shift the burden to the deployer of the algorithm to prove it is not biased once a prima facie case is made. This "reversal of burden" is a procedural safeguard that acknowledges the information asymmetry between the victim, whose data and behavior are fully visible to the system, and the black-box system, which remains opaque to the victim (Hacker, 2018).
"Amicus Curiae" (friend of the court) briefs allow technical experts and human rights NGOs to intervene in digital cases. Because judges often lack technical expertise in cryptography or network architecture, these briefs provide essential context. Organizations like Privacy International submit technical analyses explaining why, for example, a "backdoor" cannot be secure. This procedural mechanism ensures that judicial decisions are grounded in technical reality rather than abstract metaphors (Kerr, 2019).
The challenge of "jurisdiction" often acts as a procedural barrier to protection. A user in Brazil whose rights are violated by a platform in California may find their local court judgment unenforceable. The mechanism of "Letters Rogatory" or mutual legal assistance is used to enforce judgments across borders, but it is slow. To overcome this, courts are increasingly asserting "long-arm jurisdiction," claiming that if a company does business digitally in the country, it is subject to the local court's contempt powers. This asserts the sovereignty of the local court over the global cloud (Svantesson, 2017).
Finally, the digitization of the justice system itself—"e-justice"—is a mechanism for access. Online dispute resolution (ODR) platforms and the electronic filing of claims reduce the cost and complexity of enforcing rights. However, the use of "AI judges" or automated sentencing algorithms raises new due process concerns. The mechanism of protection must not become a source of violation; the right to a "human judge" remains a critical procedural safeguard in the era of automated justice.
Section 2: Administrative and Regulatory Enforcement
Beyond the courts, independent administrative bodies form the frontline of digital rights protection. Data Protection Authorities (DPAs), such as the CNIL in France or the ICO in the UK, are the primary regulatory mechanism. Unlike courts, which react to lawsuits, DPAs have proactive investigatory powers. They can conduct "dawn raids" on corporate offices, seize servers, and audit algorithms without waiting for a victim to complain. This "police power" regarding data turns privacy protection from a civil dispute into a matter of administrative law enforcement, ensuring compliance through active monitoring (Hijmans, 2016).
The power to impose administrative fines is the most potent weapon in the DPA arsenal. Under the GDPR, fines can reach €20 million or 4% of a company’s total worldwide annual turnover, whichever is higher. This mechanism is designed to be "dissuasive," altering the economic calculus of compliance. Before such fines, privacy violations were often treated as a "cost of doing business." The multi-million euro fines levied against companies like Google, British Airways, and Amazon demonstrate that this mechanism has successfully elevated digital rights to a boardroom-level risk (European Data Protection Board, 2020).
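To make the scale of this deterrent concrete, the minimal Python sketch below (using a hypothetical turnover figure) computes the Article 83(5) ceiling as the higher of €20 million or 4% of worldwide annual turnover.

```python
# Illustrative only: the GDPR Article 83(5) ceiling is the higher of
# EUR 20 million or 4% of total worldwide annual turnover.
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Return the upper bound of an Article 83(5) administrative fine."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# A hypothetical platform with EUR 50 billion turnover faces a ceiling of EUR 2 billion.
print(f"{gdpr_max_fine(50_000_000_000):,.0f}")  # 2,000,000,000
```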
"Corrective powers" allow regulators to stop violations in real-time. Fines punish past behavior, but corrective orders shape the future. A DPA can order a temporary or definitive ban on data processing. For instance, the Italian DPA temporarily banned ChatGPT in 2023 until it complied with age verification and transparency rules. This "kill switch" mechanism allows the regulator to halt a product launch or a data flow immediately if it poses an imminent risk to rights, acting as an emergency brake on digital innovation (Garante Privacy, 2023).
The "One-Stop-Shop" mechanism streamlines enforcement within the EU. It ensures that a company operating across the continent answers to a "Lead Supervisory Authority" (LSA) in its home country. While designed for efficiency, this mechanism includes a "consistency mechanism" involving the European Data Protection Board (EDPB). If other concerned DPAs disagree with the Lead Authority’s leniency, they can force a binding vote. This "peer review" mechanism prevents regulatory capture and ensures that a single weak regulator cannot undermine protection for all Europeans (Bygrave, 2017).
Consumer protection agencies, like the US Federal Trade Commission (FTC), utilize a different mechanism: "consent decrees." When a company deceives users about its security practices, the FTC sues and then settles, imposing a consent decree that mandates 20 years of privacy audits. This places the company under a distinct regime of federal monitoring. Although criticized for lacking the immediate punch of GDPR fines, this mechanism creates a long-term "probationary" relationship that forces institutional change within the company (Solove & Hartzog, 2014).
Competition authorities (antitrust regulators) are increasingly acting as digital rights protectors. The "abuse of dominance" theory is applied to data harvesting. The German Bundeskartellamt ruled that Facebook abused its market dominance by forcing users to agree to excessive data collection from third-party sites. By linking antitrust enforcement to privacy standards, this mechanism attacks the economic root of rights violations—monopoly power—restoring user choice as a form of protection (Bundeskartellamt, 2019).
"Ex-ante" regulation represents a shift from punishing harm to preventing it. The EU AI Act and Digital Services Act (DSA) impose obligations on companies before they can operate. Mechanisms like "conformity assessments" require high-risk AI developers to prove their systems are safe, unbiased, and transparent before deployment. This "CE marking" approach treats digital algorithms like physical products (e.g., elevators or toys), ensuring they meet safety standards before entering the market (European Commission, 2021).
"Whistleblower channels" within regulatory bodies provide a safe mechanism for insiders to report violations. The "Facebook Files" leak by Frances Haugen revealed that companies often know about the harms they cause but hide them. Regulators are establishing dedicated, encrypted portals for such disclosures, offering legal immunity and financial rewards (as in the US SEC program). This mechanism leverages the conscience of employees to overcome the corporate veil of secrecy (Cone, 2021).
"Sandboxes" allow for controlled innovation under regulatory supervision. A "regulatory sandbox" allows a startup to test a new technology (e.g., a digital ID system) with real users but under close monitoring and with relaxed rules, provided strict safeguards are in place. This mechanism allows regulators to understand new technologies and shape their development toward rights-compliance from the outset, rather than trying to regulate a mature technology retroactively (Financial Conduct Authority, 2015).
"Algorithmic Auditing" is becoming a formalized administrative procedure. Regulators are hiring data scientists to inspect the code of platforms. The DSA empowers the European Commission to access the "black box" of Very Large Online Platforms (VLOPs). This mechanism ends the era of "security through obscurity," allowing the state to verify claims about content moderation and bias reduction mathematically. It establishes the principle that code which affects public rights must be transparent to public authority (Möhlmann, 2021).
Cross-border enforcement networks, such as the Global Privacy Assembly (GPA), facilitate global coordination. Since data flows do not respect borders, regulators must share intelligence and coordinate actions. The mechanism of "joint investigations" allows the Dutch and Canadian authorities, for example, to simultaneously investigate a breach at a multinational firm. This prevents companies from playing regulators off against each other and creates a united global front for enforcement (Global Privacy Assembly, 2021).
Finally, the "Ombudsman" provides a soft-power mechanism for dispute resolution. In the context of intelligence oversight, bodies like the Intelligence and Security Committee (UK) or independent Privacy Commissioners act as intermediaries between the citizen and the "deep state." While they often lack binding power, their ability to access classified information and issue public reports serves as a transparency mechanism, shedding light on the dark corners of state surveillance (Reif, 2020).
Section 3: Technical Mechanisms and Privacy by Design
Technical mechanisms constitute the "physical" armor of digital rights. "Encryption" is the primary technical procedure for protecting the right to privacy and freedom of expression. By mathematically scrambling data so that only the holder of the key can read it, encryption enforces privacy through the laws of mathematics rather than the laws of man. End-to-end encryption (E2EE) ensures that even the service provider (e.g., WhatsApp or Signal) cannot access the content of communications. This technical architecture creates a "zone of absolute privacy" that protects users from mass surveillance, hackers, and even repressive governments (Abelson et al., 2015).
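As a minimal illustration of this property, the Python sketch below uses the `cryptography` package's Fernet recipe for authenticated symmetric encryption: only the key holder can recover the plaintext. The key-exchange and forward-secrecy machinery of full end-to-end systems is out of scope here.

```python
# A minimal sketch of authenticated symmetric encryption using the
# `cryptography` package's Fernet recipe (pip install cryptography).
# It illustrates the core property described above: only the holder of
# the key can read the plaintext. Real E2EE systems (Signal, WhatsApp)
# add key exchange and forward secrecy on top of primitives like this.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # stays with the communicating parties
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place")   # ciphertext + integrity tag
print(token)                          # unreadable without the key
print(cipher.decrypt(token))          # b'meet at the usual place'
```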
"Privacy by Design" (PbD) is a methodology that embeds legal principles into the engineering process. It posits that privacy should not be a policy add-on but a default setting. Technical mechanisms like "data minimization" (collecting only what is needed) and "purpose limitation" (technically preventing data from being used for other reasons) are coded into the system. For instance, a PbD-compliant system might automatically delete data after 30 days or store it in a decentralized manner to prevent a single point of failure. This mechanism shifts protection from user compliance to system architecture (Cavoukian, 2009).
Anonymity networks, most notably Tor (The Onion Router), provide a technical mechanism for "traffic analysis resistance." By bouncing internet traffic through a distributed network of volunteer relays, Tor obscures the user's location and usage from network surveillance. This technology is vital for activists, journalists, and whistleblowers living under authoritarian regimes. It provides a technical guarantee of the "right to anonymity," allowing individuals to seek and impart information without fear of reprisal or identification (Dingledine et al., 2004).
"Privacy-Enhancing Technologies" (PETs) allow for the processing of data without revealing the data itself. "Differential Privacy" involves adding mathematical noise to a dataset so that aggregate patterns can be analyzed without exposing any individual's data. "Homomorphic Encryption" allows computations to be performed on encrypted data without ever decrypting it. These technical mechanisms reconcile the utility of big data with the right to privacy, enabling medical research or smart city analytics without compromising individual confidentiality (Dwork, 2008).
Decentralization and "Self-Sovereign Identity" (SSI) shift power from central authorities to the user. Using blockchain or distributed ledger technology, SSI allows individuals to hold their own identity credentials (like a digital passport) in a private wallet. The user can prove their age or citizenship using "Zero-Knowledge Proofs"—a cryptographic method of proving a fact (e.g., "I am over 18") without revealing the underlying data (e.g., date of birth). This mechanism restores the user's sovereignty over their digital identity, reducing reliance on "identity providers" like Google or the state (Mühle et al., 2018).
"Access Control" mechanisms are the digital locks and keys. Multi-Factor Authentication (MFA) protects the right to security by requiring more than just a password. Granular permission systems in mobile operating systems (iOS/Android) allow users to deny apps access to the camera, microphone, or location. These technical controls empower the user to act as the gatekeeper of their own device, enforcing the principle of consent at a hardware level. The evolution of these controls (e.g., "Ask app not to track") has had a massive impact on the data economy (Apple, 2021).
Open Source software serves as a transparency mechanism. When code is "open," it can be audited by the global community to ensure there are no backdoors or vulnerabilities. The "many eyes" theory suggests that open source code is more secure because bugs are found and fixed faster. For critical infrastructure or voting machines, open source is a rights-protection mechanism because it allows the public to verify the integrity of the system, preventing "black box" governance (Raymond, 1999).
"Fuzzing" and "Red Teaming" are proactive security procedures. Companies employ ethical hackers to attack their own systems to find vulnerabilities before malicious actors do. This "offensive defense" protects the user's right to security by stress-testing the digital infrastructure. Bug Bounty programs incentivize researchers to report flaws responsibly. These procedural mechanisms turn the adversarial nature of hacking into a tool for rights protection, ensuring that the digital environment is hardened against attack (Zalewski, 2012).
"Do Not Track" (DNT) and the "Global Privacy Control" (GPC) are technical signals sent by the browser to websites. They automate the user's objection to tracking. While historically ignored by advertisers, new laws (like the CCPA in California) are making these signals legally binding. This technical mechanism allows the user to assert their rights once (in the browser settings) rather than having to click "reject cookies" on every single website, reducing "consent fatigue" (Electronic Frontier Foundation, 2020).
"Data loss prevention" (DLP) tools monitor data in transit to prevent unauthorized exfiltration. These systems can detect if a database of credit card numbers is being copied to an external drive and block the transfer. This technical mechanism protects the right to security by acting as an automated sentry. It ensures that even if a human employee makes a mistake or turns malicious, the technical system prevents the catastrophic breach of user rights (Shabtai et al., 2012).
"Content Authenticity" mechanisms (like the C2PA standard) fight disinformation by cryptographically signing media. When a photo is taken, the camera embeds a secure metadata trail proving where and when it was taken. This allows users to verify the origin of a digital object, protecting the "right to truth" and helping to debunk deepfakes. This technical provenance layer creates a "chain of custody" for digital evidence (C2PA, 2021).
Finally, the limits of technical mechanisms must be acknowledged. "Techno-solutionism" cannot fix political problems. Encryption can protect a message, but it cannot protect the user from being beaten by police for the password ("rubber-hose cryptanalysis"). Technical mechanisms are force multipliers for rights, but they must be embedded within a robust legal and social framework to be truly effective.
Section 4: Corporate Governance and Private Remedies
Given the dominance of private platforms, corporate governance mechanisms are essential for rights protection. The "UN Guiding Principles on Business and Human Rights" (UNGPs) establish the procedural standard: corporate due diligence. Companies must conduct "Human Rights Impact Assessments" (HRIAs) before launching new products or entering new markets. For example, before deploying a facial recognition system, a company should assess the risk of bias and misuse. This procedural mechanism forces the corporation to "know and show" its risks, moving ethics from a PR exercise to an operational risk management process (Ruggie, 2011).
Terms of Service (ToS) enforcement is the primary private mechanism. While often criticized for being opaque, the ToS is the contract that governs the digital relationship. "Notice and Action" procedures allow users to flag content that violates these terms (e.g., hate speech or harassment). Effective protection requires that these mechanisms be responsive and fair. The "Santa Clara Principles" demand that platforms provide a clear reason for any removal and offer a meaningful appeal process. This introduces "due process" into the private contractual relationship (Santa Clara Principles, 2018).
The Meta Oversight Board represents the institutionalization of private remedy. It is an independent body funded by a trust, empowered to overturn Meta's content decisions. Users who have exhausted internal appeals can petition the Board. The Board’s decisions are binding on the company for the specific case and include non-binding policy recommendations. This mechanism acts as a "check and balance" on corporate sovereignty, creating a quasi-judicial avenue for redress outside the state court system (Klonick, 2020).
Transparency Reports are a disclosure mechanism. Major tech companies publish periodic reports detailing government requests for user data and content removal. This mechanism reveals the extent of state surveillance and censorship. It allows civil society to monitor the "pressure points" between the state and the platform. By quantifying the number of requests and the compliance rate, transparency reports hold both the government and the company accountable to the public (Parsons, 2019).
"Trusted Flagger" programs create a fast-track mechanism for expert organizations. Entities like child safety hotlines or anti-extremism NGOs are given special tools to report illegal content, which is then prioritized for review by the platform. This mechanism acknowledges that not all user reports are equal; experts can identify harm more accurately than algorithms or casual users. It creates a partnership model for enforcement, leveraging civil society expertise to clean up the platform (European Commission, 2020).
Internal "Grievance Officers" are often mandated by law (e.g., in India or Germany). These are designated employees responsible for handling user complaints. This mechanism ensures there is a "human in the loop" who is legally liable for resolving issues. It prevents the company from hiding behind automated support bots. The existence of a specific, reachable officer provides a focal point for accountability and a direct channel for remedy (Information Technology Rules, 2021).
"Employee activism" has emerged as a bottom-up protection mechanism. Tech workers have organized walkouts and signed open letters to protest unethical contracts (e.g., Google’s Project Maven). This internal pressure acts as a moral brake on corporate conduct. Because tech companies compete fiercely for talent, the threat of an employee exodus forces management to reconsider products that violate human rights. This mechanism leverages the labor power of the developers themselves to enforce ethical red lines (O'Mara, 2019).
"Ethics Boards" and internal review committees provide institutional oversight. Before a research paper is published or a dataset is released, it undergoes review. While sometimes criticized as "ethics washing," robust review boards can stop harmful experiments. For example, Microsoft’s Aether Committee advises leadership on AI ethics issues. To be effective, these mechanisms must have the power to veto profitable but unethical projects, not just offer advice (Phan et al., 2021).
Third-party audits are a verification mechanism. Companies hire external firms to audit their security (SOC2), privacy (ISO 27001), or algorithmic bias. These audits provide an independent certification of compliance. In the advertising industry, the Media Rating Council audits metrics to prevent fraud. Expanding this to human rights, "social audits" verify if a company’s supply chain is free from forced labor. This mechanism relies on the reputational value of the auditor’s seal (Power, 1999).
"Bug Bounty" programs for data abuse are an incentive mechanism. Facebook launched a "Data Abuse Bounty" after Cambridge Analytica, rewarding researchers who find apps misusing user data. This crowdsources the detection of violations. It aligns the financial incentives of white-hat hackers with the protection of user privacy. By paying for reports of abuse, the company turns the security community into an extended defense team (Facebook, 2018).
Shareholder resolutions use the mechanism of corporate governance to force change. Activist investors file resolutions demanding civil rights audits or the prohibition of facial recognition sales. While often voted down, these resolutions force a public vote and debate at the Annual General Meeting (AGM). They use the machinery of capitalism to advance human rights, keeping the issues on the agenda of the Board of Directors (Generic, 2020).
Finally, the "Ombudsperson" model in the EU-US Data Privacy Framework creates a diplomatic mechanism. A designated official in the US State Department handles complaints from EU citizens regarding surveillance. While criticized for lacking true judicial independence, it attempts to provide a bridge for remedy across conflicting legal jurisdictions. It represents the bureaucratization of cross-border rights protection.
Section 5: Civil Society and Social Mechanisms
Civil society serves as the "immune system" of the digital sphere, detecting pathogens (violations) and mobilizing a response. "Advocacy and Lobbying" is the primary mechanism for shaping protective laws. Organizations like EDRi (European Digital Rights) or Access Now work directly with legislators to draft amendments that protect privacy and free speech. They translate complex technical issues into policy language, counterbalancing the massive lobbying power of Big Tech. This mechanism ensures that the user's perspective is represented in the halls of power (Breindl, 2013).
"Watchdog" functions involve monitoring and "naming and shaming." The "Ranking Digital Rights" index evaluates the world’s most powerful telecommunications and internet companies on their commitments to human rights. By creating a public scorecard, this mechanism fosters a "race to the top." Companies are sensitive to their reputation; a low ranking signals risk to investors and customers. This reputational mechanism leverages market forces to enforce ethical standards (Ranking Digital Rights, 2020).
"Digital Security Clinics" and helplines provide direct assistance. Organizations like Front Line Defenders or the Digital Security Helpline offer 24/7 emergency support to activists facing cyber-attacks. This is a "harm reduction" mechanism. If a journalist’s account is compromised, these experts help recover it and secure the device. This grassroots technical support is vital for the survival of civil society in authoritarian environments, providing immediate triage for digital rights emergencies (Front Line Defenders, 2021).
"Whistleblower support" is a social mechanism for truth-telling. When legal channels fail, whistleblowers like Edward Snowden or Chelsea Manning expose systemic abuse. Civil society organizations (like the Freedom of the Press Foundation) provide the secure tools (SecureDrop) and legal defense funds necessary for these individuals to come forward. This mechanism protects the right to truth by shielding the messenger, ensuring that crimes committed in secret can be brought to public light (Benkler, 2011).
"Consumer boycotts" and campaigns mobilize the power of the user base. The #DeleteFacebook campaign or the advertiser boycott #StopHateForProfit demonstrated that collective action can impact corporate revenue. This mechanism treats the user not as a passive subject but as an economic agent. While "network effects" make boycotts difficult, the threat of a user exodus keeps platforms responsive to public sentiment. It is a form of "vote with your wallet" in the attention economy (Véliz, 2020).
Public awareness and "digital literacy" campaigns act as a preventative mechanism. Educating users about phishing, two-factor authentication, and privacy settings empowers them to protect themselves. This "self-defense" mechanism is crucial because laws and tech cannot catch every threat. Programs that teach "media literacy" help users navigate disinformation, reducing the spread of viral falsehoods. A literate citizenry is the first line of defense against digital manipulation (Warschau, 2012).
"Strategic partnerships" and coalitions amplify impact. The "KeepItOn" coalition brings together over 200 organizations to fight internet shutdowns. By speaking with one voice, they can exert diplomatic pressure on governments. This mechanism of "solidarity networks" ensures that a violation in a small country receives global attention. It prevents isolation, which is the primary goal of repressive regimes when they cut the internet (Access Now, 2021).
"Academic research" acts as an epistemic mechanism. Independent researchers analyze platform data to uncover bias, polarization, and censorship. Their studies provide the empirical evidence needed for lawsuits and regulation. Mechanisms that mandate "data access for researchers" (like in the DSA) are essential to protect this function. Without independent science, society relies on the companies' own self-assessments, which are inherently conflicted (Pasquale, 2015).
"Open Standards" development bodies (like the IETF) are social institutions where the "constitution" of the internet is written in code. Civil society participation in these technical bodies ensures that protocols are designed with human rights in mind (e.g., enabling encryption by default). This mechanism works "upstream," influencing the fundamental architecture of the network before it is even built. It bridges the gap between the technical community and the human rights community (Cath, 2018).
"Citizen Forensic Labs" (like Citizen Lab) investigate the use of spyware. They reverse-engineer malware found on the phones of victims to attribute the attack to specific governments or companies (like NSO Group). This "forensic accountability" mechanism provides the smoking gun. It turns the device from a scene of the crime into a source of evidence, stripping the attacker of plausible deniability (Deibert, 2020).
"Alternative platforms" and the "Fediverse" (e.g., Mastodon) provide an "exit mechanism." By building and maintaining community-owned, decentralized social networks, civil society creates a viable alternative to surveillance capitalism. This "prefigurative politics" builds the world they want to see. It protects rights by demonstrating that a rights-respecting internet is technically and socially possible, not just a theoretical ideal (Doctorow, 2022).
Finally, the mechanism of "Global Solidarity" connects the local to the global. When a blogger is arrested in Vietnam, a tweet from a famous activist in New York can trigger diplomatic inquiries. This "boomerang pattern" (Keck & Sikkink) bypasses the blockage in the domestic system by routing the pressure through the international system. It relies on the interconnectedness of the digital sphere to protect its most vulnerable members.
Questions
1. The Role of "Negative Legislators"
Section 1 describes constitutional courts as "negative legislators." Using the example of the German Federal Constitutional Court or the Indian Supreme Court, explain what this term means in the context of striking down surveillance laws.
2. Habeas Data
What is "Habeas Data," and how does this Latin American judicial mechanism differ from a standard privacy lawsuit in terms of its procedural speed and objective?
3. The Deterrence of GDPR Fines
According to Section 2, how has the power to impose fines of up to 4% of global turnover transformed privacy compliance from a "cost of doing business" into a "boardroom-level risk"?
4. Corrective Powers and the "Kill Switch"
Explain the "corrective power" of Data Protection Authorities (DPAs) using the example of the Italian DPA's action against ChatGPT. Why is this power described as an "emergency brake" on innovation?
5. Privacy by Design (PbD)
Define "Privacy by Design." How does this methodology shift the burden of protection from user compliance (e.g., changing settings) to system architecture?
6. Zero-Knowledge Proofs
In the context of Self-Sovereign Identity (SSI), what is a "Zero-Knowledge Proof," and how does it allow a user to prove a fact (like being over 18) without revealing the underlying data?
7. Human Rights Impact Assessments (HRIAs)
Under the UN Guiding Principles on Business and Human Rights (UNGPs), what is the specific procedural requirement for companies before they launch a new product, such as a facial recognition system?
8. The Meta Oversight Board
Section 4 describes the Meta Oversight Board as a "check and balance" on corporate sovereignty. What is the key structural feature (funding/appointment) that grants it independence from Meta's management?
9. Citizen Forensic Labs
What is the function of "Citizen Forensic Labs" like Citizen Lab? How does their work in "reverse-engineering malware" provide the "smoking gun" necessary to hold governments accountable for using spyware?
10. Strategic Litigation
How does "strategic litigation" differ from a standard lawsuit? Using the example of Max Schrems, explain how a single case is used to force systemic regulatory reform rather than just resolving an individual grievance.
Cases
Case Study: The "Medi-Link" Health Data Crisis
Technical Failure and the Breach of Trust
"Medi-Link" is a popular health app that uses AI to predict heart attacks. Despite marketing itself as secure, a whistleblower reveals that the app was not built with "Privacy by Design" (Section 3). Instead of using "Homomorphic Encryption" to process data securely, the company stored raw patient records in a centralized database to train its algorithm faster. A hacker exploits a vulnerability, stealing the unencrypted records of 5 million users. Furthermore, the whistleblower reveals that the AI has a high error rate for ethnic minorities because the training data was not audited for bias, a failure of the "Human Rights Impact Assessment" (Section 4) process required by corporate governance norms.
Administrative "Dawn Raids" and Corrective Powers
The scandal triggers immediate regulatory action (Section 2). The national Data Protection Authority (DPA) exercises its "police power" and conducts a "Dawn Raid" on Medi-Link’s headquarters to seize servers and emails. Finding "systemic negligence," the DPA uses its "Corrective Powers" to order an immediate "Kill Switch" (temporary ban) on the app's processing of data, effectively shutting down the business overnight. They also signal an intent to impose a fine of 4% of global turnover to ensure the penalty is "dissuasive."
Judicial Remedy and the Fight for Control
Victims of the breach feel that a fine is not enough. A civil society organization, "Health Rights Now," initiates "Strategic Litigation" (Section 1). They file a "Class Action" lawsuit seeking compensation for the non-material damage (distress) caused by the leak. Simultaneously, individual users in Latin American jurisdictions file "Habeas Data" petitions, demanding immediate access to their specific records to see if their HIV status was compromised. Medi-Link attempts to settle, but the court issues a "Dynamic Injunction" against the dark web sites hosting the stolen data, ordering ISPs to block access to the leaks continuously.
Questions
1. The Failure of Privacy by Design
Medi-Link stored raw data to train its AI faster.
Using Section 3 (Technical Mechanisms), explain what "Privacy by Design" would have required Medi-Link to do before building the system.
How could "Differential Privacy" or "Homomorphic Encryption" have allowed them to train the AI without exposing individual user data to theft?
2. The Power of the DPA
The DPA shut down the app before the investigation was even finished.
According to Section 2 (Administrative Enforcement), why is the power to issue a temporary ban (or "kill switch") considered more impactful than a fine?
How does the "One-Stop-Shop" mechanism apply if Medi-Link operates in multiple EU countries but has its headquarters in Ireland? Who leads the investigation?
3. Judicial Speed: Habeas Data vs. Class Actions
Victims are using different legal tools.
Contrast the purpose of the "Habeas Data" petitions filed by individuals with the "Class Action" filed by the NGO (Section 1).
Why is Habeas Data described as a "summary proceeding," and why is it specifically suited for a user wanting to know what was stolen, rather than just getting paid?
References
Abelson, H., et al. (2015). Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications. MIT CSAIL.
Access Now. (2021). Shattered Dreams and Lost Opportunities: A Year in the Fight Against Internet Shutdowns.
Apple. (2021). App Tracking Transparency. Apple Developer Documentation.
Benkler, Y. (2011). A Free Irresponsible Press: Wikileaks and the Battle over the Soul of the Networked Fourth Estate. Harvard Civil Rights-Civil Liberties Law Review.
Breindl, Y. (2013). Internet politics: The role of civil society organizations in the fight against the Data Retention Directive. Information, Communication & Society.
Bundeskartellamt. (2019). Bundeskartellamt prohibits Facebook from combining user data from different sources.
Bygrave, L. A. (2017). Data Protection Law: Approaching Its Rationale, Logic and Limits. Kluwer Law International.
C2PA. (2021). Coalition for Content Provenance and Authenticity Technical Specification.
Cath, C. (2018). The Technology We Choose to Create: Human Rights Advocacy in the Internet Engineering Task Force. Oxford Internet Institute.
Cavoukian, A. (2009). Privacy by Design: The 7 Foundational Principles.
Cone, C. (2021). Whistleblowing as a Check on the Power of Big Tech. Georgetown Law Technology Review.
Court of Justice of the European Union (CJEU). (2014). Digital Rights Ireland Ltd v Minister for Communications. Joined Cases C-293/12 and C-594/12.
Deibert, R. (2020). Reset: Reclaiming the Internet for Civil Society. House of Anansi Press.
Dingledine, R., Mathewson, N., & Syverson, P. (2004). Tor: The Second-Generation Onion Router. USENIX Security Symposium.
Doctorow, C. (2022). Chokepoint Capitalism. Beacon Press.
Donohue, L. K. (2008). The Cost of Counterterrorism: Power, Politics, and Liberty. Cambridge University Press.
Dwork, C. (2008). Differential Privacy: A Survey of Results. Theory and Applications of Models of Computation.
Electronic Frontier Foundation. (2020). Global Privacy Control (GPC).
European Commission. (2020). Proposal for a Regulation on a Single Market For Digital Services (Digital Services Act).
European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
European Data Protection Board. (2020). Guidelines on the calculation of administrative fines under the GDPR.
European Union. (2012). Charter of Fundamental Rights of the European Union.
Facebook. (2018). Data Abuse Bounty: Rewarding Reports of Data Misuse.
Frosio, G. F. (2018). The Death of 'No Monitoring Obligations'. JIPITEC.
Garante Privacy. (2023). Garante privacy stop a ChatGPT.
Generic. (2020). Shareholder Resolutions on Digital Rights.
Global Privacy Assembly. (2021). Strategic Plan 2021-2023.
Guimarães, L. (2017). Habeas Data: The Latin-American Guarantee of Personal Data Protection. Mexican Law Review.
Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies. Annual Review of Law and Social Science.
Hijmans, H. (2016). The European Union as Guardian of Internet Privacy. Springer.
Information Technology Rules. (2021). The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Government of India.
Kerr, O. S. (2019). The Fourth Amendment and the Global Internet. Stanford Law Review.
Klonick, K. (2020). The Facebook Oversight Board. Yale Law Journal.
Möhlmann, M. (2021). Algorithmic Accountability in Action. Organization Science.
Mühle, A., et al. (2018). A survey on essential components of a self-sovereign identity. Computer Science Review.
Mulheron, R. (2018). Class Actions and Government. Cambridge University Press.
O'Mara, M. (2019). The Code: Silicon Valley and the Remaking of America. Penguin Press.
Parsons, C. (2019). The (In)effectiveness of Voluntarily Produced Transparency Reports. Business & Society.
Pasquale, F. (2015). The Black Box Society. Harvard University Press.
Phan, T., et al. (2021). Economies of Virtue. Science as Culture.
Power, M. (1999). The Audit Society: Rituals of Verification. Oxford University Press.
Privacy International. (2019). A Guide to Litigating Identity Systems.
Puttaswamy v. Union of India. (2017). 10 SCC 1. Supreme Court of India.
Ranking Digital Rights. (2020). 2020 Corporate Accountability Index.
Raymond, E. S. (1999). The Cathedral and the Bazaar. O'Reilly Media.
Reif, L. C. (2020). The Ombudsman, Good Governance and the International Human Rights System. Martinus Nijhoff.
Ruggie, J. (2011). Guiding Principles on Business and Human Rights. United Nations.
Santa Clara Principles. (2018). The Santa Clara Principles on Transparency and Accountability in Content Moderation.
Shabtai, A., et al. (2012). A Survey of Data Leakage Detection and Prevention Solutions. Springer.
Solove, D. J., & Hartzog, W. (2014). The FTC and the New Common Law of Privacy. Columbia Law Review.
Svantesson, D. (2017). Solving the Internet Jurisdiction Puzzle. Oxford University Press.
Véliz, C. (2020). Privacy Is Power. Bantam Press.
Warschau, M. (2012). Digital Literacy as a Human Right.
Zalewski, M. (2012). The Tangled Web: A Guide to Securing Modern Web Applications. No Starch Press.
10
International cooperation in protecting digital human rights
2
5
5
12
Lecture text
Section 1: The Architecture of Global Internet Governance
The landscape of international cooperation in digital human rights is defined by a fundamental tension between two competing models of governance: multilateralism and multi-stakeholderism. Multilateralism, the traditional model of international relations, posits that sovereign states are the primary decision-makers in global affairs. In this view, championed by countries like Russia and China, internet governance should be conducted through intergovernmental organizations like the United Nations (UN) or the International Telecommunication Union (ITU), where governments hold voting power and national sovereignty is paramount. This model emphasizes the right of the state to control the information space within its borders, often framing "cyber-sovereignty" as a prerequisite for national security and stability (Mueller, 2010).
In contrast, the multi-stakeholder model argues that the internet is too complex and dynamic to be managed by governments alone. This approach involves the equal participation of governments, the private sector, civil society, and the technical community in policy development. The World Summit on the Information Society (WSIS), held in two phases in Geneva (2003) and Tunis (2005), formalized this model in the "Tunis Agenda for the Information Society." The agenda recognized that the management of the internet should be transparent, democratic, and multilateral, with the full involvement of governments, the private sector, civil society, and international organizations. This document remains the foundational text for international digital cooperation, legitimizing the role of non-state actors in global governance (WSIS, 2005).
The Internet Governance Forum (IGF) was established by the WSIS as a primary vehicle for this cooperation. Unlike a traditional treaty body, the IGF has no decision-making power; it is a forum for dialogue. Its mandate is to discuss public policy issues related to key elements of internet governance in order to foster the sustainability, robustness, security, stability, and development of the internet. While critics argue that the IGF is merely a "talking shop" because it cannot issue binding resolutions, proponents argue that its "soft power" allows for the socialization of norms and the sharing of best practices without the diplomatic gridlock that characterizes binding treaty negotiations (Epstein, 2012).
The role of the International Telecommunication Union (ITU) remains a flashpoint in international cooperation. As a specialized agency of the UN, the ITU is responsible for technical standards and spectrum allocation. However, in recent years, there has been a push by some member states to expand the ITU’s mandate to include internet governance and content regulation. Western democracies and civil society groups generally oppose this expansion, fearing that placing internet governance under the ITU’s "one country, one vote" system would legitimize state censorship and undermine the open architecture of the internet designed by the technical community (DeNardis, 2014).
ICANN (Internet Corporation for Assigned Names and Numbers) represents a unique experiment in international cooperation. Originally operating under a contract with the US Department of Commerce, ICANN manages the global Domain Name System (DNS). The "IANA transition" in 2016, which ended direct US government oversight, was a milestone in globalizing critical internet resources. It transferred stewardship to a global multi-stakeholder community. This move was intended to prevent the fragmentation of the internet by assuring the international community that the root zone was not a tool of American foreign policy, but a global public good managed by consensus (Greenstein, 2016).
The United Nations Human Rights Council (HRC) has become a central venue for establishing digital rights norms. Resolution 20/8, adopted in 2012, affirmed that human rights apply online just as they do offline. This consensus resolution was a diplomatic breakthrough, bridging the gap between nations with divergent political systems. Subsequent resolutions have addressed specific issues like the safety of journalists, the right to privacy in the digital age, and internet shutdowns. These documents, while non-binding, create a normative framework that civil society uses to hold governments accountable during the Universal Periodic Review (UPR) process (United Nations Human Rights Council, 2012).
The Office of the United Nations High Commissioner for Human Rights (OHCHR) supports this normative work by producing detailed reports on digital issues. For example, the B-Tech Project provides guidance on implementing the UN Guiding Principles on Business and Human Rights in the technology sector. This initiative fosters cooperation between the UN and major tech companies, creating a space for dialogue on how to operationalize human rights due diligence in product design and corporate governance. It represents a shift from "naming and shaming" to "knowing and showing" (OHCHR, 2020).
Regional organizations also play a critical role in the governance architecture. The Council of Europe has been a pioneer in setting binding standards, most notably the Budapest Convention on Cybercrime and Convention 108+ on data protection. These treaties are open to accession by non-European states, making them de facto global instruments. The Council’s approach demonstrates how regional cooperation can scale up to set global benchmarks when the UN system is paralyzed by geopolitical polarization (Council of Europe, 2018).
The Freedom Online Coalition (FOC) is a partnership of 34 governments working to advance internet freedom. Formed in 2011, the FOC coordinates diplomatic efforts to oppose internet shutdowns and support civil society in repressive environments. This "coalition of the willing" allows like-minded democracies to coordinate their foreign policies and funding priorities. By issuing joint statements and funding digital safety training, the FOC operationalizes the protection of digital rights as a foreign policy objective (Freedom Online Coalition, 2011).
The G7 and G20 have increasingly integrated digital rights into their summits. The "Hiroshima Process" on Artificial Intelligence initiated by the G7 in 2023 exemplifies this trend. It aims to align the governance of generative AI among the world’s leading industrial democracies. By focusing on "trustworthy AI," these economic forums are acknowledging that digital rights are not just social issues but economic imperatives. International cooperation here focuses on preventing regulatory fragmentation that could hinder the digital economy (G7, 2023).
Tech diplomacy has emerged as a new form of international relations. Nations like Denmark have appointed "Tech Ambassadors" to Silicon Valley, recognizing that major technology companies have the geopolitical influence of nation-states. This creates a direct channel of diplomatic cooperation between sovereign governments and private platforms. These envoys negotiate on issues ranging from content moderation to tax policy, treating the governance of digital platforms as a matter of foreign affairs rather than domestic regulation (Kettemann, 2020).
Finally, the architecture of cooperation is threatened by the "splinternet." The proliferation of national intranets (like Russia’s Runet) and incompatible regulatory regimes challenges the very premise of a global internet. Cooperation is increasingly becoming defensive, focused on maintaining connectivity and the free flow of information against a rising tide of digital protectionism. The future of this architecture depends on whether the international community can maintain the "interoperability" of both the technical and legal layers of the internet.
Section 2: Cross-Border Data Flows and Privacy Frameworks
The regulation of cross-border data flows is the most legally complex area of international cooperation. Data is the lifeblood of the global digital economy, yet privacy rights are traditionally territorial. To bridge this gap, the European Union established the concept of "adequacy." Under the GDPR, data can flow freely to a non-EU country only if that country provides a level of protection "essentially equivalent" to that of the EU. This mechanism creates a powerful incentive for other nations to upgrade their privacy laws to match European standards, a phenomenon known as the "Brussels Effect." Countries from Japan to Brazil have modeled their legislation on the GDPR to ensure uninterrupted trade (Bradford, 2020).
However, cooperation is frequently disrupted by conflicts between privacy rights and national security surveillance. The invalidation of the "Safe Harbor" and "Privacy Shield" frameworks by the Court of Justice of the European Union (CJEU) in the Schrems cases highlighted the difficulty of aligning the EU's fundamental rights with US surveillance laws. These judgments declared that US laws (like FISA 702) did not provide adequate redress for EU citizens, effectively halting the legal basis for transatlantic data transfers. This forced the US and EU back to the negotiating table to create the "EU-US Data Privacy Framework," illustrating that international cooperation is a continuous process of legal repair and diplomatic negotiation (CJEU, 2020).
The Council of Europe’s "Convention 108+" (The Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data) serves as the only binding international treaty on data protection. Unlike the GDPR, which is an EU regulation, Convention 108+ is open to any country in the world. It provides a baseline of privacy principles that facilitate cooperation between diverse legal systems. Accession to this convention is often seen as a seal of quality that builds trust between nations, creating a "common legal space" for data privacy that extends beyond Europe to Africa and Latin America (Council of Europe, 2018).
In the Asia-Pacific region, the APEC Cross-Border Privacy Rules (CBPR) system offers an alternative model of cooperation. Unlike the EU's top-down regulatory approach, the CBPR is a voluntary, accountability-based system. Companies are certified by independent agents as compliant with APEC privacy principles. While less stringent than the GDPR, it promotes interoperability among economies with different legal traditions (like the US, Japan, and Singapore). This system prioritizes the flow of data for trade while establishing a baseline of consumer protection (Greenleaf, 2014).
The OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, originally adopted in 1980 and updated in 2013, act as the "soft law" foundation for global cooperation. They established the "Fair Information Practice Principles" (FIPPs) such as collection limitation and purpose specification. These guidelines provide a common vocabulary for privacy regulators globally. Even in the absence of a binding global treaty, the OECD guidelines ensure that privacy laws in different jurisdictions share a common genetic code, facilitating regulatory dialogue (OECD, 2013).
"Data Free Flow with Trust" (DFFT) is a concept championed by Japan at the G20. It seeks to balance the economic necessity of data flows with the political necessity of trust (privacy and security). DFFT aims to operationalize cooperation by identifying commonalities between different regulatory regimes (e.g., GDPR and CBPR) and creating mechanisms for mutual recognition. It acknowledges that while laws may differ, the underlying values of trust are often shared, providing a basis for pragmatic cooperation (World Economic Forum, 2020).
Mutual Legal Assistance Treaties (MLATs) are the traditional mechanism for law enforcement cooperation regarding data. However, the MLAT system is notoriously slow and bureaucratic, often taking months to process a request for emails or server logs. This latency is incompatible with the speed of digital crime. The breakdown of the MLAT system has led to new forms of cooperation, such as the US CLOUD Act, which allows the US to sign executive agreements with trusted foreign partners (like the UK) to bypass the MLAT process and demand data directly from service providers, provided human rights safeguards are met (Swire & Hemmings, 2015).
The "Second Additional Protocol to the Budapest Convention" aims to modernize this cooperation further. It provides a legal basis for direct cooperation between service providers in one country and law enforcement in another for subscriber information. It also establishes procedures for emergency mutual assistance. While aimed at efficiency, civil society groups have raised concerns that speeding up cooperation might bypass necessary judicial oversight, highlighting the constant tension between efficient enforcement and rigorous rights protection in international agreements (EFF, 2021).
Global privacy enforcement networks represent the operational layer of cooperation. The Global Privacy Assembly (GPA) brings together data protection authorities from over 130 countries. Through working groups and annual conferences, regulators share enforcement strategies and issue joint resolutions. The GPA facilitates "joint investigations," where regulators from multiple countries (e.g., Canada and the Netherlands) collaborate to investigate a multinational breach (like the Ashley Madison hack). This operational cooperation ensures that global corporations cannot divide and conquer national regulators (Global Privacy Assembly, 2021).
Standard contractual clauses (SCCs) are a private law mechanism for international cooperation. When a country does not have an adequacy decision, companies can use these pre-approved contracts to legalize data transfers. SCCs impose contractual obligations on the foreign importer to protect the data. While a private mechanism, the wording is determined by regulators (like the European Commission). They serve as a legal bridge that allows data to cross borders safely even when the governments on either side have not reached a political agreement (European Commission, 2021).
Digital trade agreements are increasingly becoming venues for privacy rules. The US-Mexico-Canada Agreement (USMCA) and the CPTPP include chapters on digital trade that mandate consumer protection and prohibit data localization requirements. These trade treaties lock in the "free flow of data" as a binding international obligation. However, they typically include exceptions for "legitimate public policy objectives," creating a legal battleground over whether a specific privacy law is a legitimate protection or a disguised trade barrier (Burri, 2017).
Finally, the lack of a single global privacy treaty creates a "patchwork" of cooperation. Companies must navigate a complex web of bilateral agreements, regional regulations, and adequacy decisions. This fragmentation increases compliance costs and creates legal uncertainty. The future of cooperation lies in "interoperability mechanisms"—legal tools that allow different privacy systems to "talk" to each other without requiring them to be identical, ensuring that digital rights are protected continuously as data travels around the globe.
Section 3: Cybercrime and Cybersecurity Cooperation
Cybercrime is inherently transnational; a hacker in one country can attack a server in a second country, affecting victims in a third. This reality necessitates robust international cooperation. The Budapest Convention on Cybercrime (2001) is the first and most influential international treaty on this subject. Drafted by the Council of Europe but open to global accession, it harmonizes national laws by defining offenses (like illegal access and data interference) and establishing procedural powers for investigation. By creating a common legal standard, it ensures that "dual criminality" exists, which is a prerequisite for extradition and mutual legal assistance (Council of Europe, 2001).
Despite its success, the Budapest Convention is not universal. Russia and China have historically rejected it because they were not involved in its drafting and believe Article 32 (transborder access to stored data) infringes on national sovereignty. Consequently, the United Nations initiated the negotiation of a new "UN Cybercrime Treaty" (officially the "Comprehensive International Convention on Countering the Use of Information and Communications Technologies for Criminal Purposes"). This process, driven by Russia, has been controversial. Human rights groups fear that the new treaty will criminalize online speech under the guise of cybercrime and expand state surveillance powers without the safeguards present in the Budapest system (Human Rights Watch, 2022).
Norms of responsible state behavior in cyberspace act as a soft law framework for cooperation. The UN Group of Governmental Experts (GGE) and the Open-Ended Working Group (OEWG) have developed a set of 11 voluntary norms. These include the principle that states should not conduct cyber-attacks against critical infrastructure and should respond to requests for assistance from other states whose infrastructure is being attacked. While non-binding, these norms, endorsed by the UN General Assembly, create a diplomatic baseline for acceptable behavior. Violating them justifies "naming and shaming" or collective sanctions (UN GGE, 2015).
Operational cooperation is facilitated by institutions like Interpol and Europol. Europol’s European Cybercrime Centre (EC3) acts as a central hub for criminal intelligence in the EU. It supports member states in high-profile investigations, such as the takedown of the "Emotet" botnet. These agencies do not have executive powers (they cannot arrest people), but they provide the essential service of "deconfliction"—ensuring that police in different countries are not unknowingly investigating the same target or compromising each other's operations (Europol, 2020).
Computer Security Incident Response Teams (CSIRTs) form the technical backbone of international cooperation. The global network of CSIRTs (FIRST) allows for the rapid sharing of information about vulnerabilities and threats. When a major vulnerability like "Log4j" is discovered, this network disseminates mitigation strategies globally within hours. This "technical diplomacy" often continues even when political relations between nations are strained, as the stability of the internet is a shared interest (Tikk, 2011).
The "Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations" is a critical academic contribution to cooperation. Authored by a group of international experts, it interprets how existing international law (like the UN Charter and the Law of Armed Conflict) applies to cyberspace. While not a treaty, it is widely used by legal advisors in foreign ministries to determine when a cyber-attack constitutes a "use of force" or an "armed attack," helping to define the thresholds for self-defense and state responsibility (Schmitt, 2017).
Capacity building is a major pillar of international cooperation. The "digital divide" in cybersecurity means that many developing nations lack the legal and technical infrastructure to fight cybercrime. This makes them "safe havens" for hackers. International initiatives, such as the Global Forum on Cyber Expertise (GFCE), coordinate funding and training to help these nations build their cyber-defenses. Strengthening the weakest link in the global network is viewed as a collective security measure (Global Forum on Cyber Expertise, 2015).
Public-private partnerships are essential because the vast majority of digital infrastructure is owned by the private sector. Initiatives like the "No More Ransom" project bring together law enforcement (Europol, Dutch Police) and security companies (McAfee, Kaspersky) to provide free decryption tools to victims of ransomware. This cooperation bypasses traditional diplomatic channels to provide direct relief to citizens. It acknowledges that the private sector often possesses better threat intelligence than governments (No More Ransom, 2016).
Attribution and collective sanctions represent a coercive form of cooperation. The EU’s "Cyber Diplomacy Toolbox" allows the EU to impose sanctions (travel bans, asset freezes) on individuals and entities responsible for cyber-attacks. By acting collectively, the EU increases the cost of malicious behavior. Similarly, the US, UK, and allies frequently issue "joint attributions," publicly blaming a specific state (e.g., North Korea for WannaCry) to signal a united diplomatic front (Council of the European Union, 2017).
Export controls on surveillance technology are a mechanism to prevent human rights abuses. The Wassenaar Arrangement is a multilateral export control regime that includes "intrusion software" in its list of dual-use goods. Member states agree to license the export of these tools to prevent them from falling into the hands of authoritarian regimes. However, enforcement is inconsistent, and the "proliferation" of cyber-weapons remains a failure of international cooperation (Wassenaar Arrangement, 2013).
The "Paris Call for Trust and Security in Cyberspace" (2018) is a high-level declaration supported by states, companies, and civil society. It outlines nine principles for cybersecurity, including preventing the proliferation of malicious tools and protecting the integrity of the internet's public core. While symbolic, it demonstrates the "multi-stakeholder" nature of security cooperation, as it was signed by tech giants like Microsoft alongside nation-states, recognizing their shared responsibility (Paris Call, 2018).
Finally, cooperation on "Vulnerability Equities" remains a challenge. When a government discovers a "zero-day" flaw, should it disclose it to the vendor to patch (protecting everyone) or hoard it for offensive use? There is no international agreement on this. The lack of cooperation leads to a "market for lemons" in software security, where states stockpile weapons that jeopardize the entire digital ecosystem. A future norm of "responsible disclosure" is a key goal for digital human rights advocates.
Section 4: Digital Trade, Intellectual Property, and Development
The intersection of international trade law and digital rights is becoming the primary locus of global rule-making. The World Trade Organization (WTO) is currently negotiating rules on e-commerce involving over 80 countries. These negotiations aim to establish global rules on data flows, source code protection, and consumer trust. The "Joint Statement Initiative" (JSI) on E-commerce represents an attempt to update the trading system for the digital age. However, there is a sharp divide between developed nations pushing for the "free flow of data" and developing nations concerned about "digital industrialization" and the ability to regulate their own data economies (WTO, 2019).
The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) set the baseline for global IP protection. In the digital age, TRIPS is both a shield and a sword. It protects the software and content that fuel the digital economy, but it also creates barriers to access. The debate over the "TRIPS waiver" during the COVID-19 pandemic highlighted the tension between IP protection and the right to health/science. In the digital context, this tension manifests in debates over "Right to Repair" and access to educational materials, where rigid IP enforcement can hinder the right to information (Taubman, 2012).
"Digital Colonialism" is a critique raised by activists and scholars from the Global South. It argues that the current structure of digital trade allows Western (and Chinese) tech giants to extract raw data from developing nations, process it in the North, and sell it back as finished AI services. This "extractivist" model mirrors historical colonialism. Cooperation among developing nations (South-South cooperation) seeks to resist this by advocating for "data sovereignty" and the right to tax digital services. They argue that data is a national resource that should benefit the local population (Kwet, 2019).
The "Moratorium on Customs Duties on Electronic Transmissions" is a longstanding WTO agreement that prevents countries from taxing digital goods (like software downloads or emails). While this has facilitated the growth of the digital economy, developing nations (led by India and South Africa) argue it deprives them of significant tariff revenue that could be used for digital development. They view the lifting of the moratorium as a matter of fiscal sovereignty and economic rights (Banga, 2019).
Bilateral and regional trade agreements often go further than the WTO. The "Digital Economy Partnership Agreement" (DEPA) between Chile, New Zealand, and Singapore is a pioneering "digital-only" trade treaty. It establishes modules on AI ethics, digital identity, and data innovation. DEPA is designed as a "living agreement" open to other nations, creating a modular approach to cooperation that prioritizes inclusivity and the "human-centric" use of technology (Ministry of Trade and Industry Singapore, 2020).
The protection of source code is a contentious issue in trade deals. Recent agreements like the USMCA prohibit governments from requiring the transfer of source code as a condition for market access. While this protects corporate trade secrets (IP rights), critics argue it undermines "algorithmic accountability." If a government cannot inspect the source code of an AI system, it cannot effectively regulate it for bias or safety violations. This trade rule potentially handcuffs the ability of regulators to protect human rights (Burri, 2021).
Data localization mandates act as non-tariff barriers to trade. Countries like China, Russia, and Vietnam require data to be stored locally. While often justified on national security or privacy grounds (digital sovereignty), trade partners view these measures as protectionist tools designed to favor domestic champions. International cooperation attempts to distinguish between "legitimate" localization (for privacy) and "protectionist" localization. The "G7 Digital Trade Principles" advocate removing unjustified data localization requirements (G7, 2021).
The World Intellectual Property Organization (WIPO) administers the "WIPO Internet Treaties" (WCT and WPPT). These treaties updated copyright for the digital age, introducing protection for "Technological Protection Measures" (TPMs) or digital locks. This globalized the DMCA-style model of copyright enforcement. However, WIPO has also facilitated the "Marrakesh Treaty," which mandates exceptions to copyright for the visually impaired. This treaty is a triumph of human rights-based cooperation, ensuring that IP laws do not create barriers for persons with disabilities (WIPO, 2013).
"Digital Development" assistance is a form of cooperation aimed at bridging the global digital divide. The World Bank and regional development banks fund infrastructure projects (like submarine cables) and regulatory reform in developing nations. The "Digital for Development" (D4D) strategy of the EU emphasizes that connectivity must be accompanied by "digital rights" protections. Aid is increasingly conditional on the adoption of privacy laws and cybercrime legislation, diffusing these norms through financial leverage (European Commission, 2017).
Taxation of the digital economy requires massive international coordination. The OECD/G20 "Inclusive Framework on BEPS" (Base Erosion and Profit Shifting) reached a historic deal in 2021 to reform global tax rules. Pillar One of the agreement reallocates taxing rights over multinational tech giants to the countries where they have users, regardless of physical presence. This cooperation ensures that digital companies contribute to the public finances of the countries where they operate, supporting the state's capacity to fulfill economic and social rights (OECD, 2021).
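To make the mechanics concrete, the following is a simplified, hypothetical sketch (in Python) of the publicly described "Amount A" formula under Pillar One: roughly 25% of profit in excess of a 10% profit margin is reallocated to market jurisdictions in proportion to local revenue. The figures and the revenue-based allocation key below are illustrative assumptions, not the full OECD rules.

```python
# Simplified, illustrative sketch of the OECD Pillar One "Amount A" reallocation,
# based on the publicly described formula (25% of profit in excess of a 10%
# profit margin, shared among market jurisdictions by revenue). All figures and
# the revenue-based allocation key are hypothetical simplifications.

def amount_a(global_revenue: float, global_profit: float) -> float:
    """Residual profit reallocated to market jurisdictions."""
    routine_profit = 0.10 * global_revenue           # 10% profitability threshold
    residual_profit = max(0.0, global_profit - routine_profit)
    return 0.25 * residual_profit                    # 25% of the residual is reallocated

# Hypothetical multinational: EUR 100bn revenue, EUR 25bn profit
reallocated = amount_a(100e9, 25e9)                  # 0.25 * (25bn - 10bn) = 3.75bn

# Hypothetical market-jurisdiction share, proportional to local revenue
local_share = reallocated * (5e9 / 100e9)            # country with EUR 5bn of the revenue
print(f"Reallocated taxable profit: {reallocated/1e9:.2f}bn; "
      f"local allocation: {local_share/1e9:.3f}bn")
```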
The "Geneva Declaration on the Future of the World Intellectual Property Organization" represents a civil society push for reform. It calls for WIPO to focus less on the expansion of monopoly rights and more on the "development agenda." It advocates for the protection of the "public domain" and open access to knowledge as essential for global education and innovation. This highlights the role of cooperation in preserving the "intellectual commons" against enclosure (Geneva Declaration, 2004).
Finally, the concept of "Data as a Public Good" is emerging in UN discussions. The UN Secretary-General’s "Roadmap for Digital Cooperation" suggests that certain data sets (e.g., for climate change or pandemics) should be treated as global public goods. This requires international "data trusts" or "data commons" where nations pool data for the collective benefit of humanity, moving beyond both the proprietary model (corporate ownership) and the sovereign model (state hoarding).
Section 5: Future Challenges and the Horizon of Cooperation
The future of international cooperation will be defined by the governance of Artificial Intelligence (AI). AI does not respect borders; an algorithm trained in California can discriminate against a loan applicant in Kenya. The Council of Europe is currently finalizing the "Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law." This will be the first binding international treaty on AI. It aims to ensure that AI systems—whether developed by public or private actors—are compatible with human rights standards throughout their lifecycle. The challenge is to secure the participation of non-European powers (like the US and Japan) to ensure the treaty has global relevance (Council of Europe, 2023).
UNESCO’s "Recommendation on the Ethics of Artificial Intelligence," adopted by 193 member states in 2021, provides a global normative framework. While non-binding, it establishes common principles like "proportionality," "safety," and "fairness." Importantly, it includes a "readiness assessment methodology" to help countries evaluate their legal and technical capacity to regulate AI. This cooperation focuses on capacity building and soft norm alignment, trying to prevent a regulatory fragmentation where AI is safe in some regions and dangerous in others (UNESCO, 2021).
The militarization of space and satellite internet introduces a new frontier for cooperation. Mega-constellations like Starlink and Kuiper are privatizing Low Earth Orbit (LEO). This raises questions about "orbital sovereignty" and the right to internet access. If a private US company provides internet to dissidents in Iran, is that a violation of Iranian sovereignty or a fulfillment of the human right to information? The Outer Space Treaty of 1967 is ill-equipped for this digital reality. New cooperative mechanisms are needed to manage spectrum and orbital slots as "global commons" to ensure equitable access for all nations (ITU, 2022).
The threat of the "Splinternet"—the fragmentation of the internet into separate national networks—is the existential threat to digital cooperation. China’s "Great Firewall" and Russia’s "Sovereign Internet" create technical and legal barriers that sever the global network. Cooperation in this context shifts from integration to "bridge-building." Technical bodies like the IETF and ICANN work to maintain the "single root" of the internet, ensuring that at least the technical layer remains interoperable even if the content layer fragments. The diplomatic challenge is to maintain a "universal" internet in a multipolar world (Mueller, 2017).
"Digital Identity" interoperability is a practical challenge. As countries roll out digital ID systems (like the EU Digital Identity Wallet), international cooperation is needed to ensure these credentials are recognized across borders. A digital ID should allow a user to prove their age or qualifications in another country without exposing all their data. Standards bodies like ISO and W3C are working on "Decentralized Identity" standards to facilitate this. This cooperation is essential for the freedom of movement in the digital age (European Commission, 2021).
The regulation of "Big Tech" requires a global competition policy. A monopoly in the US is a monopoly in Europe and India. Regulators are beginning to cooperate on antitrust actions to prevent companies from playing jurisdictions against each other. The "Digital Clearinghouse" model allows privacy, competition, and consumer protection regulators to share evidence and coordinate remedies. This "regulatory diplomacy" aims to counterbalance the immense power of transnational corporations with transnational enforcement (Kerber, 2016).
The protection of the "Public Core of the Internet" is a proposed norm by the Global Commission on the Stability of Cyberspace (GCSC). It suggests that state and non-state actors should not conduct cyber-operations intended to disrupt the general availability or integrity of the public core (e.g., DNS, routing). Establishing this as a peremptory norm of international law (jus cogens) is a long-term goal of cooperation. It frames the internet’s infrastructure as neutral ground, akin to international waters or the high seas (GCSC, 2018).
"Neuro-rights" and the governance of brain-computer interfaces (BCI) represent the horizon of cooperation. As technology begins to interface directly with the human brain, the "freedom of thought" faces literal intrusion. International cooperation is needed to define "mental privacy" as a new human right before the technology matures. Chile has led the way with constitutional amendments, but a global protocol is needed to prevent "neuro-data havens" where companies can experiment on human minds without regulation (Yuste et al., 2017).
The role of "Cities" in international cooperation is growing. The "Cities for Digital Rights" coalition brings together municipalities (from Barcelona to New York) to pledge to protect privacy and open internet access. This "municipal diplomacy" bypasses national gridlock. Cities often have more direct leverage over smart city technologies and can share best practices on ethical procurement. This represents a decentralization of international cooperation (Cities for Digital Rights, 2018).
Disinformation and "Information Integrity" require a global response to "content authenticity." The Content Authenticity Initiative (CAI) seeks to create an open technical standard for media provenance (digital watermarking). International cooperation here involves getting camera manufacturers, software companies, and news organizations globally to adopt this standard. It is a technical solution to a social problem, requiring broad consensus to function as a "truth layer" for the internet (Content Authenticity Initiative, 2019).
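The underlying idea of content authenticity can be sketched in a few lines: a trusted device or newsroom signs a cryptographic digest of the media file together with its capture metadata, and any downstream recipient can verify that neither has been altered. The Python sketch below, which assumes the third-party cryptography package, illustrates the concept only; it is not the actual C2PA/CAI manifest format, and the metadata fields are hypothetical.

```python
# Conceptual sketch of media provenance: sign a digest of the file bytes plus
# capture metadata so any recipient can verify integrity and origin.
# This illustrates the idea behind content-authenticity standards; it is NOT
# the actual C2PA/CAI manifest format. Requires the third-party `cryptography` package.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()            # would belong to the camera/newsroom
public_key = private_key.public_key()

def make_claim(media_bytes: bytes, metadata: dict) -> dict:
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "metadata": metadata}, sort_keys=True).encode()
    return {"payload": payload, "signature": private_key.sign(payload)}

def verify_claim(claim: dict, media_bytes: bytes) -> bool:
    payload = json.loads(claim["payload"])
    if hashlib.sha256(media_bytes).hexdigest() != payload["sha256"]:
        return False                                   # pixels were altered after signing
    public_key.verify(claim["signature"], claim["payload"])  # raises if metadata was tampered with
    return True

photo = b"raw image bytes"
claim = make_claim(photo, {"device": "hypothetical-camera", "captured": "2024-01-01T12:00Z"})
print(verify_claim(claim, photo))                      # True; False or exception if edited
```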
Finally, the "Global Digital Compact," proposed by the UN Secretary-General, aims to be the successor to the WSIS agenda. Expected to be agreed upon at the Summit of the Future, it seeks to outline shared principles for an "open, free, and secure digital future for all." It attempts to stitch together the fragmented landscape of development, security, and human rights into a unified global agenda. The success of this compact will determine whether the international community can renew the vow of digital cooperation for the next generation.
Questions
1. Multilateralism vs. Multi-stakeholderism
How do the "multilateral" and "multi-stakeholder" models of internet governance differ in terms of who holds decision-making power? Which model did the WSIS "Tunis Agenda" formally adopt as the foundation for international digital cooperation?
2. The Mandate of the IGF
What is the primary function of the Internet Governance Forum (IGF) established by the WSIS? Why do proponents argue that its lack of binding decision-making power is actually a strength ("soft power")?
3. The "Brussels Effect" in Data Privacy
Explain the concept of the "Brussels Effect" in the context of the GDPR. How does the EU's "adequacy" mechanism create an incentive for non-EU countries (like Brazil or Japan) to align their privacy laws with European standards?
4. The Failure of "Safe Harbor" and "Privacy Shield"
Why did the Court of Justice of the European Union (CJEU) invalidate the "Safe Harbor" and "Privacy Shield" frameworks for transatlantic data transfers? What fundamental conflict between EU rights and US law did these judgments highlight?
5. The Budapest Convention vs. The UN Cybercrime Treaty
What is the primary reason why Russia and China have historically rejected the Budapest Convention on Cybercrime? How does the proposed UN Cybercrime Treaty differ in its approach, and what concern do human rights groups have regarding its potential impact on online speech?
6. Data Free Flow with Trust (DFFT)
What is the core objective of the "Data Free Flow with Trust" (DFFT) initiative championed by Japan at the G20? How does it attempt to reconcile the economic need for data flows with the political necessity of privacy protection?
7. Digital Colonialism
Define the critique of "Digital Colonialism" raised by scholars from the Global South. How does the current structure of digital trade allegedly mirror historical colonial extraction models?
8. The "Public Core" of the Internet
What norm has the Global Commission on the Stability of Cyberspace (GCSC) proposed regarding the "Public Core of the Internet"? Why is establishing this as a jus cogens (peremptory) norm considered a long-term goal for international security?
9. The Splinternet Threat
How does the phenomenon of the "Splinternet" (e.g., Russia's Runet) threaten the existing architecture of international cooperation? What specific role do technical bodies like ICANN play in trying to prevent the complete fragmentation of the network?
10. Neuro-rights and International Cooperation
As Brain-Computer Interfaces (BCI) advance, why is international cooperation needed to define "mental privacy" as a new human right? What risk is associated with the potential emergence of "neuro-data havens"?
Cases
Case Study: The "New Horizon" Infrastructure Project in the Republic of Zambezi
The Digital Silk Road and the "New IP" Debate
The Republic of Zambezi, a rapidly digitizing African nation, recently ratified the Malabo Convention (African Union Convention on Cyber Security and Personal Data Protection), signaling its commitment to data privacy and cybersecurity. Seeking to modernize its critical infrastructure, Zambezi partners with a major state-owned tech giant from an authoritarian power under the "Digital Silk Road" initiative. The partnership involves building a sovereign "Smart City" network using a new protocol proposed at the ITU known as "New IP" (Section 1). Proponents argue this top-down, centralized protocol is necessary for the high-speed, low-latency demands of holographic communication and autonomous driving, which they claim the "outdated" TCP/IP architecture cannot handle. However, civil society groups and Western tech diplomats warn that adopting "New IP" would fragment Zambezi's internet from the global open web (the "Splinternet"), embedding "intrinsic security" features that allow for granular, centralized tracking of every data packet—effectively baking surveillance into the network layer.
The "Schrems II" Data Transfer Dilemma
Meanwhile, "ZambeziCloud," a local startup hosting data for European clients, faces a legal crisis. Following the Schrems II judgment (Section 2), European regulators argue that Zambezi's new national surveillance laws—modeled on the "cyber-sovereignty" principles of its infrastructure partner—allow the state to access data without judicial redress. Consequently, the EU declares that Zambezi does not offer an "essentially equivalent" level of protection. ZambeziCloud attempts to use Standard Contractual Clauses (SCCs) to maintain its business, but European partners demand "Supplementary Measures" (like technical encryption where the keys are held solely in Europe). The situation is complicated because the new "New IP" infrastructure allegedly contains "backdoors" that could render standard encryption ineffective against state inspection, making it impossible for ZambeziCloud to guarantee the security required by the GDPR.
Regional Harmonization vs. Digital Sovereignty
Domestically, the government of Zambezi invokes the Malabo Convention to justify its strict new "Data Localization Law." While the Convention promotes personal data protection, the government interprets its "cybersecurity" provisions (Article 26) aggressively, mandating that all critical data be stored on government-controlled servers to prevent "cyber-imperialism." Critics argue this violates the spirit of the African Continental Free Trade Area (AfCFTA), which encourages cross-border digital trade. They contend that the government is weaponizing a regional human rights treaty (Malabo) to enforce a "Digital Sovereignty" model that aligns with the Multilateral (state-centric) view of internet governance, rather than the Multi-stakeholder model championed by the local technical community and the Internet Governance Forum (IGF).
Questions
1. The "New IP" vs. TCP/IP Governance Clash
Focusing on the infrastructure debate in paragraph 1:
How does the technical architecture of "New IP" (centralized, top-down) reflect the Multilateral model of internet governance promoted by the ITU, as opposed to the Multi-stakeholder model of ICANN/IETF?
Why do critics argue that shifting from TCP/IP to "New IP" creates a human rights risk regarding "intrinsic security" and the potential for a "Splinternet"?
2. Schrems II and Supplementary Measures
Regarding the crisis faced by "ZambeziCloud" in paragraph 2:
According to the Schrems II judgment, why are Standard Contractual Clauses (SCCs) insufficient on their own when the destination country (Zambezi) has invasive surveillance laws?
What specific "Supplementary Measure" (technical) could ZambeziCloud theoretically implement to satisfy EU regulators, and why does the "New IP" infrastructure make this difficult?
3. The Malabo Convention and Conflicting Norms
Analyzing the domestic legal situation in paragraph 3:
How does the Zambezi government's use of the Malabo Convention to justify "Data Localization" illustrate the tension between "Digital Sovereignty" (state security) and "Cross-Border Data Flows" (economic development/trade)?
Does the Malabo Convention explicitly mandate data localization, or is the government using the treaty's cybersecurity provisions to "gold-plate" its control over the internet?
References
Banga, R. (2019). Growing Trade in Electronic Transmissions: Implications for the South. UNCTAD.
Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
Burri, M. (2017). The Governance of Data and Data Flows in Trade Agreements. Journal of International Economic Law.
Burri, M. (2021). Trade Law 4.0. Cambridge University Press.
Cities for Digital Rights. (2018). Declaration of Cities Coalition for Digital Rights.
Content Authenticity Initiative. (2019). The Case for Content Authenticity.
Council of Europe. (2001). Convention on Cybercrime. ETS No. 185.
Council of Europe. (2018). Modernised Convention for the Protection of Individuals with Regard to the Processing of Personal Data (Convention 108+).
Council of Europe. (2023). Consolidated Working Draft of the Framework Convention on Artificial Intelligence. CAI(2023)18.
Council of the European Union. (2017). Cyber Diplomacy Toolbox.
Court of Justice of the European Union (CJEU). (2020). Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems (Schrems II). Case C-311/18.
DeNardis, L. (2014). The Global War for Internet Governance. Yale University Press.
EFF. (2021). Privacy Standards Must Not Be Compromised in the New Budapest Convention Protocol.
Epstein, D. (2012). The Making of Global Internet Governance. Journal of Information Technology & Politics.
European Commission. (2017). Digital4Development: Mainstreaming Digital Technologies and Services into EU Development Policy.
European Commission. (2021). Standard Contractual Clauses (SCCs).
European Commission. (2021). Proposal for a European Digital Identity Wallet.
Europol. (2020). Internet Organised Crime Threat Assessment (IOCTA).
Your final grade will be determined by the following:
Essay: 20
Presentation: 10
Class Participation: 10
Written Exam: 60
These are the guidelines for writing an essay (article) for students. Before writing your paper, be sure to check that it meets the requirements.
1. Manuscript format: Ensure that your manuscript is formatted according to the department’s guidelines, including font type, size, margins, and line spacing.
a. The font must be 14-point Times New Roman throughout the essay.
b. Margins must use the "Moderate" setting on all sides.
c. The text must be single-spaced.
2. Length of the manuscript: The essay should not exceed six to eight pages, or approximately 2,500 words (including the abstract, main body, and conclusion), excluding references.
3. The title of the article should be no longer than 12 words, free of numbers or bullets, with the initial letter of each word capitalized.
4. The abstract should provide a concise summary of the article and should be written clearly and concisely.
5. The abstract should be a one paragraph of maximum 150 words in length.
6. Avoid citations in the abstract.
7. Keywords: Immediately after the abstract, provide 5-7 keywords, avoiding general and plural terms and multiple concepts (avoid, for example, "and" and "of"). A keyword should not be longer than two words.
8. The essay should be logically constructed.
9. The essay is best structured according to IMRAD, the standard format for a scientific article:
a. Introduction;
b. Materials and methods;
c. Results; and
d. Discussion.
Moreover, the essay must always end with a conclusions section.
10. Divide your essay into clearly defined and numbered sections, aligned to the left. Sections should be numbered I, II, III, for example:
I. Introduction
II. Methodology
III. Results
IV. Discussion
Subsections should then be lettered (A, B, C), and further subdivisions numbered (1, 2, 3), etc. The abstract is not included in the section numbering.
11. A fourth level of subheading is not allowed; if necessary, use bullet points within the third-level heading.
12. Present tables and figures at the end of the essay or in line with the text.
13. Include in-text references (APA style) where necessary, with at least one at the end of each paragraph, for example (Naeem, 2024).
14. Do not use footnote references.
15. The bibliography (APA style) should be in alphabetical order, without numbers or bullets.
16. All references should be to journals and books published within the last three years.
17. The author(s) should follow the latest edition of the APA (7th edition) style in referencing. Please visit the APA Style website to learn more about APA style.
18. Please ensure that every reference cited in the text is also present in the reference list (and vice versa). Avoid citations in the abstract. Unpublished results and personal communications should not be in the reference.
19. Each paragraph should contain 8-10 sentences.
20. There should be no blank lines between paragraphs or between headings and paragraphs.
21. Introduction: The introduction should provide a clear and concise background to the topic and should state the purpose of the article.
22. Methods: The methods section should provide a detailed description of the research methods used in the study, including the study design, sample size, data collection methods, and statistical analysis methods.
23. Results: The results section should present the findings of the study clearly and concisely, including tables, figures, and graphs as appropriate.
24. Discussion: The discussion should interpret the results of the study and place them in the context of the existing literature.
25. Conclusion: The conclusion should summarize the key findings of the study and provide implications for future research. It should not exceed 2 paragraphs.
26. Originality: The manuscript must be original and must not have been published previously.
27. The article should be original and must not contain plagiarism (a similarity index of up to 20% is allowed, and the AI contribution must be between 30% and 50%).
28. Language: The manuscript should be written in clear and concise English, Uzbek, or Russian, free from grammatical and spelling errors.
29. All pages must be numbered at the bottom right of the page.
30. All paragraphs must be justified.
1. Time management: Strictly adhere to the time limit. (10/7/5/3)
2. Slide Structure:
a. Single-sentence bullets (maximum 8-10 words per bullet)
b. Maximum 4-6 bullets per slide
3. Visual aids: Use effective, relevant visuals.
4. Delivery technique: Never read directly from your slides.
5. Evidence-based content and Audience engagement
6. Content structure (IFRAR):
a. Introduction
i. Description of the issue
ii. Relevance of the study
iii. Significance of the problem
iv. Objectives
b. Facts and issues
i. Important information relevant to the problem
c. Research questions
i. Specific research questions
d. Analysis
i. Literature
ii. Comparison
iii. Evaluation
iv. Findings
e. Recommendations
i. Proposal and suggestions
ii. Implications
The final exam will be a comprehensive assessment worth sixty marks, administered as a computer-based test within the university's specially equipped facility. You will have a strict time limit of two hours to complete it.
The examination will take place on university computers in a room equipped with security cameras for identity verification. While these computers are not connected to the general internet, you will be granted specific access to the Lex.uz legal database to consult official laws and regulations. The Dean's office will provide the necessary login ID and passcode to access both the exam platform and this legal resource.
The core of the exam will be a case-based scenario. You will be presented with a realistic legal situation and must carefully analyze its details. Your answers to the subsequent questions must be derived directly from this case and must be supported by the applicable laws of Uzbekistan.
For each question, your response should be structured to demonstrate a deep understanding. Begin with a precise introduction that clearly identifies the central legal issue at hand. Following this, you must discuss the specific rules and laws relevant to the situation, citing them appropriately.
The most critical part of your answer is the in-depth analysis. Here, you must move beyond simply stating the law to provide a critical evaluation of how the legal principles apply to the case's unique facts, exploring different interpretations and consequences. Finally, conclude each answer with a constructive and well-reasoned summary that provides a definitive resolution based on your preceding analysis.
The Tashkent State University of Law offers a wealth of additional opportunities for students drawn to academic research, building upon a strong institutional tradition that both recognizes and actively supports such pursuits. The university's overarching research and innovation policy creates a fertile ground for intellectual exploration, a commitment that is vividly reflected in the activities of its individual departments.
The Department of Cyber Law stands as a prime example of this ethos, actively implementing and benefiting from the university's supportive framework. A fundamental opportunity provided by the department is a dedicated course titled "Research Methodology and Legal Teach," which is designed to provide a comprehensive foundation in academic research. This subject equips students with essential skills, from formulating a research question to analyzing data and structuring a paper, thereby polishing their abilities and preparing them for direct involvement in scholarly activities.
To further enhance these skills, the department has established a specialized Scientific Research Writing School. This school serves as a dynamic hub for aspiring researchers, offering a practical and interactive complement to classroom learning. Its activities include targeted lectures on advanced writing techniques, workshops dedicated to the intricacies of academic publishing, and the organization of student-focused conferences where participants can present their work. A key feature of the school is its invitation of guest lecturers from the international academic community, providing students with direct access to the expertise and perspectives of foreign scholars.
The department also provides exceptional platforms for disseminating completed research through its two recognized journals. One is a national journal officially registered with the OAK authority of Uzbekistan, offering a reputable venue for domestic scholarly contribution. The other is an international journal, which is indexed in prestigious databases like Crossref and other international agencies, allowing students to achieve global visibility for their work.
The university broadens the research horizon through strategic international collaboration. It has established partnerships with other universities specifically for joint research initiatives and co-publications. This allows students to engage in cross-border academic projects, fostering a global perspective and providing invaluable experience in collaborative research, thereby fully preparing them for a future in the global academic or professional landscape.
A wide variety of resources are available for independent study, providing students with multiple avenues for academic exploration. The primary resources originate from departmental teachers, whose materials are made readily accessible. These materials, which include textbooks, study manuals, monographs, academic publications, and recorded lecture videos, are hosted on the department's official website with open access for all students.
Furthermore, the Tashkent State University library serves as a crucial hub for research, offering a vast collection of sources and the latest publications. The library provides access to numerous specialized academic databases, which contain a wealth of peer-reviewed journals and research papers. These resources typically have very high subscription costs, but the library's institutional membership makes them freely available to students for their research.
For students focusing on legal and regulatory studies, the official website Lex.uz is an indispensable resource. This platform provides access to the latest legislation and legal documents, ensuring that students have up-to-date information on current laws and governmental regulations.
To broaden their perspective and gain international exposure, students are also guided towards specific online resources by their departments. For instance, the Department of Cyber Law actively recommends a selection of relevant websites and international databases. These curated resources are designed to help students engage with global scholarship and stay informed about international developments in their field of study.
The university maintains a robust support system for students who find themselves struggling in their courses. The institution is committed to recognizing the needs of its student body and acts in their best interests, providing a foundational network of support to help overcome academic challenges.
A prominent example of this support within the Department of Cyber Law is the "Ostaz Shagird" custom, a concept championed by the professors. This tradition embodies the principle that a teacher serves not only as an instructor but also as a dedicated mentor. In this role, professors are committed to providing direct assistance with studies, offering valuable academic consultancy, and sharing guidance to support students' overall development.
Consequently, any student experiencing academic difficulty is encouraged to consult with the department. The professors are consistently available to assist students with their coursework and to provide the necessary guidance to navigate and resolve study-related problems. This proactive approach ensures that students have the resources required to progress confidently in their academic pursuits.