An Exploration of Digital Sovereignty

By Grace Park

When the internet first emerged, it was envisioned as a global space without strict boundaries. In 1997, the Clinton Administration’s Framework for Global Electronic Commerce declared that the internet should foster a “transparent and predictable legal environment to support global business and commerce.” In recent years, however, as a growing number of nations have sought to expand legal authority over online platforms, the internet has been gradually shifting from a collective space to a fragmented one. Despite a global environment that increasingly favors statutory control over online platforms and data, the United States continues to regulate the internet through a framework grounded in the First Amendment, one that favors free expression and limits platform liability.

Digital sovereignty is the ability of a nation to exercise authority over its own “digital destiny” in all its components, from data governance to hardware and software systems. The European Union is a prominent example of a jurisdiction that has consistently put this idea into practice. In 2022, the European Parliament and Council adopted the Digital Services Act, a comprehensive regulation designed to modernize how the EU governs online platforms. The DSA imposes extensive responsibilities on major platforms to remove illegal content, conduct risk assessments, and disclose algorithmic and moderation practices; it also requires them to address issues such as online advertising transparency and the spread of disinformation. These obligations reflect the EU’s intention to assert more direct authority over online spaces, as articulated in the European Commission’s 2020 statement Shaping Europe’s Digital Future, which emphasized digital sovereignty principles such as technological independence and greater public accountability for major online services.

China’s National People’s Congress has likewise asserted stronger state authority over digital systems. Through laws such as the Cybersecurity Law of the People’s Republic of China, the Data Security Law, and the Personal Information Protection Law, the government requires platforms to localize data, undergo security reviews, and permit government access upon request. These laws align with China’s broader goal of “cyber sovereignty,” promoted by the Cyberspace Administration of China, which advances a governance model built on strong state involvement in regulating digital spaces. Other nations have adopted similar measures, through instruments like the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules in India and the Online Safety Acts in the U.K. and Australia, all of which expand governmental intervention through takedown requirements, compliance obligations, and traceability mandates.

Although national regulatory frameworks differ significantly across political systems, they share several key characteristics. Many countries have strengthened government authority over media platforms, imposed greater responsibilities on intermediaries to police user content such as hate speech, disinformation, and unsafe online products, and adopted territorial approaches to governing the digital space. The Organisation for Economic Co-operation and Development’s Digital Security Governance Framework notes that this movement toward state-centered digital governance is prevalent across a wide range of countries, regardless of OECD membership.

In contrast to these nations, the United States maintains a regulatory system with comparatively weak government intervention in digital communication. This principle was affirmed in Reno v. ACLU (1997), where the Supreme Court held that the internet receives full First Amendment protection and struck down sections of the Communications Decency Act that had attempted to regulate indecent content online. The Court made clear that online communication would not be governed by the standards applied to broadcast media and that the government could restrict online speech only under much stricter constitutional requirements.

This idea was reaffirmed two decades later in Packingham v. North Carolina (2017), when the Court invalidated a North Carolina law barring registered sex offenders from accessing mainstream social media sites. Describing social media platforms as “the modern public square,” the Court emphasized their vital role in facilitating political and civic discourse. It further reasoned that excluding individuals from these spaces would infringe on their First Amendment rights, reinforcing the importance of preserving access to online forums.

The integral role of social media in public life underlies the questions raised in the pending cases NetChoice v. Paxton and Moody v. NetChoice. Both ask whether content moderation is itself expression protected by the First Amendment. In NetChoice v. Paxton, the Fifth Circuit upheld Texas House Bill 20, which restricts large platforms from removing content; by contrast, the Eleventh Circuit in Moody v. NetChoice invalidated parts of Florida Senate Bill 7072, which penalized platforms for removing political candidates and required disclosure of moderation decisions. The Supreme Court’s review is expected to clarify how far states may regulate platforms’ moderation practices consistent with the First Amendment, and its ruling will shape the future of First Amendment protection for those practices.

Another crucial feature of the U.S. approach to internet law is Section 230 of the Communications Decency Act, which provides that platforms cannot be treated as the “publisher or speaker” of user-generated content. The Fourth Circuit interpreted this provision broadly in Zeran v. AOL (1997), reasoning that imposing liability on platforms for failing to remove harmful posts would force them to censor far more content than necessary, or shut down services entirely, to avoid lawsuits. More recently, in Gonzalez v. Google, the Supreme Court declined to narrow Section 230’s protections in a case involving algorithmically recommended content. No other major jurisdiction, including the EU, China, India, or the United Kingdom, extends platform protection to this degree.

The U.S. thus operates a system markedly different from how other nations approach internet regulation. Under the EU’s Digital Services Act, platforms must proactively remove illegal content and comply with transparency obligations, while the U.S. government has been firmly against compelling platforms to remove content, citing First Amendment protections. Similarly, China’s laws require data localization and permit heavy government intervention, whereas U.S. law minimizes government involvement so that platforms retain autonomy. These global differences inevitably create challenges for platforms that operate worldwide: they must follow different, sometimes conflicting, laws in every country, raising concerns about keeping platform practices consistent across jurisdictions.

Evaluating how nations vary in their regulation of the digital space highlights the strengths and tradeoffs of each approach. The U.S. framework offers strong constitutional protections for speech, extensive legal immunity for online platforms, and minimal state involvement in platform governance. By contrast, the EU prioritizes stricter oversight in the name of consumer protection, while India’s rules aim to curb misinformation. Each model carries a distinctive set of risks: strict regulation can chill public discourse, while loose regulation can leave users more vulnerable to online harms. As the United Nations has observed, no single approach fully strikes the perfect balance among openness, safety, and government authority.

Internet regulation is undergoing a major global transition, as an increasing number of governments adopt new digital rules and, in doing so, create a fragmented digital environment. Jurisdictions like the European Union, China, and India have established frameworks that assert much greater state authority, while the United States has maintained a comparatively decentralized system that protects the free speech of both individuals and platforms. As regulatory methods continue to diverge, the U.S. model, anchored in constitutional limits designed to protect freedom, remains an important reference point amid evolving digital regulation.
