The Next Legal Revolution: What the history of privacy law teaches about governing AI
By Cora Haggerty
Technological change can be a catalyst for legal reform. New tools and computing systems are transforming how people live, make decisions, and interact with one another, creating a modern landscape that outpaces past legal imagination and compels the law to adapt to the complexities of contemporary life. With the invention of the portable camera and the rise of mass media in the nineteenth century, jurists had to reconsider the boundaries between public and private life. As the twentieth and twenty-first centuries introduced social media, genetic testing, and the digitization of personal data, privacy law continued to redefine the contours of individual rights amid rapid technological change. The evolution of privacy law directly reflects the technologies that shape it.
Today, a comparable and even more profound transformation is underway, reshaping nearly every aspect of modern society: the exponential growth of artificial intelligence (AI). From generative content creation to modernized criminal justice procedures to the rise of an intelligence economy, AI has become integral to the social, economic, and civic functions of everyday life. As a result, governments face the urgent task of regulating technologies that are evolving faster than the law anticipated and surpassing legal frameworks still premised on human agency and intention. Like privacy law, the emerging law of AI is a reactive response to technological transformation, but the parallel has a core difference. While privacy law adapts existing doctrines to protect individual rights, AI law demands the creation of a new regulatory framework capable of addressing the autonomy, opacity, and unpredictability of systems that could exceed human control. Where privacy law could reinterpret the law, AI law requires reimagining and reinventing it.
The history of privacy law began in 1890 as a reaction to the introduction of the camera and mass media, as described by Samuel Warren and Louis Brandeis in their Harvard Law Review article “The Right to Privacy.” Their argument addressed the “instantaneous photographs and newspaper enterprise” that they claimed intruded upon the sanctity of personal life, a revolutionary argument for its time that reinterpreted the common law of torts to recognize a novel injury. They suggested that existing common-law principles could be extended to encompass the new realities of society, and this doctrinal development continued to evolve. In Katz v. United States (1967), the Supreme Court extended the Fourth Amendment’s protections against unreasonable searches to electronic surveillance, ruling that “the Fourth Amendment protects people, not places.” Justice Harlan’s concurring opinion introduced the enduring “reasonable expectation of privacy” test, or “Katz test,” demonstrating the Court’s willingness to adapt constitutional principles to technological change. In the twenty-first century, the same logic continues to guide judicial reasoning: in Carpenter v. United States (2018), the Court held that the government’s warrantless collection of historical cell-site location data violated the Fourth Amendment, reaffirming privacy protections in the digital age. Chief Justice Roberts emphasized that the “depth, breadth, and comprehensive reach” of digital surveillance posed new privacy threats that required doctrinal evolution, highlighting how new technology often necessitates an evolution in Fourth Amendment protections. Yet these evolutions still took place within the contours of established law: privacy rights were extended and reinterpreted, but not reinvented.
Artificial intelligence, however, challenges this interpretive elasticity. The traditional legal reasoning that once sustained privacy law cannot do the same for AI, because AI systems differ from earlier technologies not simply in degree, but in kind. They do not merely collect or disseminate information about people; through machine learning (ML) and deep learning (DL), they make judgments, form correlations, and act upon data with limited human oversight. Governing these systems requires confronting questions of accountability and control that the traditional human-centered legal framework cannot easily answer, because it faces an unprecedented form of machine reasoning, processing, and judgment.
Currently, the legal architecture surrounding AI is fragmented. In the United States, there is no comprehensive federal AI statute, and the scope of AI regulation remains uncertain. Instead, governance has occurred through a patchwork of agency guidelines, executive statements, and state initiatives that attempt to address a technology unprecedented in capability, scale, and possibility. Since 2019, states have passed over a hundred laws creating safeguards in response to AI threats. Nearly 100 AI-related bills were passed in 2024, and more than 1,000 have been introduced across the country during the 2025 legislative sessions. States like California and Colorado have advanced accountability legislation requiring transparency reports and risk assessments for high-impact AI applications. The executive branch has also attempted to articulate normative baselines. In 2022, the White House’s Blueprint for an AI Bill of Rights outlined five guiding principles that could serve as a federal standard for AI regulation: 1) safe and effective systems, 2) algorithmic discrimination protections, 3) data privacy, 4) notice and explanation, and 5) human alternatives. The document symbolically represents a constitutional moment for AI, acknowledging that emerging technologies pose not only economic and ethical challenges but civil rights challenges as well. Furthermore, in 2023, the National Institute of Standards and Technology (NIST) published its AI Risk Management Framework, providing voluntary guidelines for assessing and mitigating risks in AI systems. Together, these efforts reflect an ever-changing regulatory landscape grappling with how to govern a technology with highly transformative capabilities.
Globally, the European Union has taken a more assertive stance. The EU Artificial Intelligence Act (2024) establishes a tiered regulatory regime based on risk, prohibiting certain “unacceptable” applications such as social scoring, heavily regulating “high-risk” systems, and imposing transparency obligations on generative AI. This represents a leap beyond reinterpretation: a legislative body attempting to codify the governance of autonomous systems as a distinct field of law. But even this model remains largely untested and faces challenges of enforcement and international coherence. A parallel but contrasting approach has emerged in Canada through its proposed Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, which aims to create a federal, risk-based regulatory framework for private-sector AI systems. AIDA centers on regulating “high-impact” AI systems through a principles-based framework that relies heavily on administrative rulemaking rather than a strict risk taxonomy. This approach, however, has been criticized for limited oversight and uncertain interaction with provincial privacy laws.
The complicated state of AI regulation underscores a deeper problem: the traditional tools of legal reasoning are strained under the weight of AI’s technological complexity. In contrast to privacy law, which evolved through the reinterpretation of existing doctrines, AI law requires a complete reinvention of legal frameworks. The fundamental premises of law assume human agency, intentionality, and foreseeability, but AI systems often operate through processes that are neither transparent nor fully foreseeable. The “black box” problem, as Frank Pasquale describes in The Black Box Society, can make it impossible to fully understand or explain how an algorithm arrives at its decisions. This opacity undermines due process, accountability, and the rule of law itself. If neither the designers nor the regulators can explain a system’s reasoning, those affected will struggle to meaningfully challenge its outcomes. COMPAS, an algorithm used in some states to predict the risk of recidivism in parole decisions, offers another example of how little is disclosed about the mechanisms behind algorithmic decision-making.
Traditional doctrines are ill-equipped to handle such questions. Privacy law relies on concepts like “reasonable expectation” and “informed consent,” but these notions falter when the relevant actions are autonomous and statistical rather than human and deliberate. Similarly, doctrines of negligence and product liability depend on identifiable causal chains linking actions to outcomes, but when a machine learning system makes an unexpected decision based on self-generated inferences, it is difficult to assign responsibility among the programmer, the deployer, and the algorithm itself. As scholars at the Brookings Institution have argued, such uncertainty may demand new institutional structures, such as a Federal Robotics or AI Commission, to centralize expertise and coordinate regulatory efforts. Compounding these structural difficulties are shifts in the administrative landscape that weaken the government’s ability to regulate emergent technologies. For decades, Chevron v. Natural Resources Defense Council (1984) allowed agencies to interpret ambiguous statutory provisions within their jurisdiction, granting flexibility in areas requiring technical expertise. But in Loper Bright Enterprises v. Raimondo (2024), the Supreme Court overruled Chevron deference, significantly limiting agencies’ interpretive autonomy. Combined with the “major questions doctrine” articulated in West Virginia v. EPA (2022), these decisions restrict the very tools the federal government would need to respond nimbly to the challenges of AI governance. In effect, the law has constrained its own capacity for adaptation at the moment adaptation is most essential.
The philosophical implications are as profound as the doctrinal ones. Margot Kaminski’s “The Right to Explanation, Explained” underscores how algorithmic decision-making destabilizes foundational principles of accountability and fairness. In the AI era, biases that once arose from human prejudice can be embedded in code and replicated at scale. Privacy law once offered individuals the power to shield themselves from unwanted exposure; AI law must offer individuals the power to understand and contest the decisions that shape their opportunities. This requires a regulatory philosophy grounded not only in rights but in structural transparency, referred to by some scholars as the “right to algorithmic accountability.”
Ultimately, both privacy law and AI law began reactively, in response to technologies that outpaced the law, as society struggled to articulate harms the law had not yet named. But where privacy law succeeded through reinterpretation, AI law cannot rely on the same strategy. The autonomy and opacity of algorithmic systems render the traditional logic of precedent insufficient. Governing AI demands a legal revolution: one that does not merely adapt old doctrines but constructs new ones capable of addressing the nonhuman agency that defines the twenty-first century. The emergence of privacy law shows that legal systems can evolve without collapsing. But it also warns that evolution has limits. Just as Warren and Brandeis once recognized that new technologies could erode human dignity, jurists today must recognize that artificial intelligence poses an even greater test of human governance. The law must move beyond reaction toward reinvention, crafting doctrines that embed transparency, fairness, and accountability into the architecture of AI itself. The next legal revolution will not merely reinterpret the boundaries of individual rights; it will redefine what it means for law to govern intelligence, whether human or artificial.