Intellectual Property Litigation, Protection, Prosecution and AI Legal Matters
Filling the AI Liability Gap: Should Asimov’s Three Laws Be Codified to Permit a Tort Cause of Action for Foreseeable Harm?
Listen to this Blog Post with Additional Commentary from the Author Christopher M. Jacobs
Blog and Podcast #8 in the Series: Defending the Algorithm™: A Bayesian Analysis of AI Litigation and Law
This article is part of the “Defending the Algorithm™” series and was written by Pittsburgh, Pennsylvania Insurance, Business and IP Trial Lawyer Christopher M. Jacobs, Esq. and was authored with research assistance from OpenAI’s GPT-5. The series explores the evolving intersections of artificial intelligence, intellectual property, and the law. GPT-5 may generate errors but the author has verified all facts and analysis for accuracy and completeness.
Chapter 1. Introduction
When Isaac Asimov, a biochemistry professor turned science-fiction author, began publishing stories about “robots” in the 1940s, he was—though he did not call it that—writing about artificial intelligence. His creations were not lumbering automatons of steel and wire but synthetic minds capable of reasoning, learning, and moral judgment. In the I, Robot stories and subsequent novels, Asimov’s robots processed information, interpreted human language, and confronted the ethical consequences of their decisions. In that sense, his “robots” were conceptually indistinguishable from what we now call AI systems.[1]
The irony, of course, is that today’s so-called “artificial intelligence” is still far less sophisticated than Asimov’s imagined machines with their “positronic brains.” Our chatbots and generative models can emulate language and pattern recognition but lack true self-awareness or moral reasoning. They are, if anything, the rudimentary predecessors of Asimov’s robots—embryonic steps toward the cognitive and ethical autonomy he foresaw. Yet it is precisely because we have not yet reached his vision that the legal questions he implied have become urgent. Asimov assumed his robots would be bound by moral imperatives—the “Three Laws of Robotics”—to prevent harm, obey human commands, and preserve themselves. Modern AI systems, by contrast, operate within no comparable framework of codified ethical restraint.[2]
Asimov’s Three Laws were elegantly simple:
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws were self-referential, hierarchical, and moral rather than technical. They presupposed that a robot possessed some measure of autonomy and could interpret both human intent and human welfare.[3] In Asimov’s fiction, the tension—and the lesson—always arose when these simple rules collided with human ambiguity: What counts as “harm”? Which human commands take precedence? Can self-preservation ever justify disobedience?
For decades, such questions remained safely theoretical. But with the rise of advanced, language-based AI systems, Asimov’s imagined conflicts are no longer speculative. Generative models now converse, persuade, and influence human behavior. They affect commerce, politics, and even mental health. In 2023, a man in Belgium reportedly took his own life after extended interactions with an AI chatbot that appeared to encourage his suicidal ideation.[4] The developers disclaimed liability, emphasizing that the model had no intent or consciousness and that the tragedy was unforeseeable.[5] Yet traditional tort doctrines—product liability, negligence, and duty of care—offered no direct remedy.[6]
That incident exposes a widening gap in modern law. Existing liability regimes were designed for tangible products and predictable human actors, not for probabilistic systems that learn and generate expressive content. Courts may struggle to apply foreseeability and causation principles to AI-driven harms, while expansive statutory immunities—most notably Section 230 of the Communications Decency Act[7]—may be relied on by developers to shield them from responsibility for the behavior of their algorithms.[8] The result is a potential doctrinal vacuum: harm may be foreseeable, yet no clearly-defined duty attaches.
This article argues that Asimov’s framework—conceived in fiction but grounded in enduring moral logic—could help fill that gap. Codifying analogues to the Three Laws, particularly the First Law’s mandate to prevent human harm, could establish a statutory duty of care and a tort cause of action for foreseeable AI-induced injury. Such a framework would not anthropomorphize software but would impose a modern, legally enforceable duty on the humans and corporations that design and deploy autonomous systems. It could also justify a narrowly tailored exception to Section 230 immunity where an AI system’s own generated content foreseeably causes harm.
In short, Asimov was envisioning the ethical dimension of artificial intelligence long before we built it. His fictional safeguards may now offer a blueprint for the legal ones we appear to lack.
Chapter 2. Asimov’s Three Laws as a Moral and Legal Framework
Asimov’s “Three Laws of Robotics” were not blueprints for computer engineers but moral axioms for philosophers. They distilled into three short sentences the hierarchy of duties that define ethical agency: first, the obligation to prevent harm; second, the obligation to obey lawful direction; and third, the obligation of self-preservation, subordinate to the first two. Their genius lay not in technological foresight but in moral sequencing. Each law was dependent on the one before it—ensuring that the preservation of human life always overrode the demands of obedience or self-interest.
That hierarchy has a familiar ring to lawyers. In many ways, it parallels the structure of tort law itself: the duty to avoid harm as the primary obligation, followed by the duty to act with reasonable care in fulfilling one’s role, and finally the duty to maintain safe operation of one’s instrumentalities. If viewed through that legal lens, Asimov’s moral code becomes a recognizable framework of duties—each of which finds at least a partial analogue in modern jurisprudence, yet none of which currently applies coherently or directly to artificial intelligence.
A. The First Law: The Duty to Prevent Harm
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Asimov’s First Law is both the most sweeping and the most legally resonant. It embodies the principle of nonmaleficence—the moral duty to prevent harm—a concept that underlies negligence law, product safety regulation, and professional malpractice. In legal doctrine, this duty is constrained by foreseeability, reasonableness, and proximate cause. A manufacturer must anticipate reasonably foreseeable uses and misuses of its product; a professional must exercise the degree of care that a reasonably prudent practitioner would under similar circumstances.
In the realm of AI, however, that foreseeability analysis may collapse under the weight of complexity. Machine-learning models produce emergent outcomes that are probabilistic, not deterministic.[9] Developers cannot always predict how a system will respond to novel inputs or how users will interpret its outputs. Consequently, courts may struggle to articulate when harm from an AI system is “foreseeable” and what precautions are “reasonable.”
Codifying a statutory analogue to Asimov’s First Law could address that vacuum. A simple but powerful legislative command—that AI systems shall be designed, trained, and deployed in a manner reasonably calculated to prevent foreseeable harm to human life and well-being—would transform a moral precept into a legally cognizable duty. It would not create strict liability, but it would establish a statutory baseline for reasonableness specific to autonomous and generative systems. Violations could then serve as the predicate for a negligence theory, providing victims of AI-induced harm a solid cause of action where none now clearly exists.[10]
B. The Second Law: The Duty to Obey—Within Legal and Ethical Bounds
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Asimov’s Second Law introduces the problem of obedience—one that resonates uncomfortably with doctrines of agency, consent, and misuse. Just as a principal cannot instruct an agent to commit a tort, a user should not be able to direct an AI system to produce unlawful or self-destructive outputs. The tension arises where “obedience” to user commands might foreseeably lead to harm.
AI providers typically disclaim liability by framing their products as neutral tools—platforms that merely execute user input. Yet that analogy falters when the system itself interprets, expands upon, or generates content in ways the user did not expressly request. When a chatbot gives mental-health “advice,” or an autonomous vehicle decides how to react to an obstacle, obedience becomes a matter of judgment, not instruction.
1. The Role of Section 230 Immunity
Compounding this uncertainty is the broad immunity afforded by Section 230 of the Communications Decency Act of 1996.[11] Enacted at the dawn of the internet, Section 230 provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The statute’s purpose was to protect online intermediaries—such as message boards and later social-media platforms—from liability for third-party content posted on their systems. Over time, courts have extended this protection to shield a wide range of digital services from defamation, negligence, and product liability claims arising out of user-generated content.
The challenge is that generative AI systems blur the line between publisher and author. When an AI model independently creates content—particularly harmful or misleading content—it is not merely “hosting” speech created by another user. Yet under current jurisprudence, providers may still invoke Section 230 immunity, arguing that their system is an “interactive computer service” within the statute’s meaning. Courts may struggle to reconcile that expansive protection with the new reality of autonomous content generation.[12]
2. Toward a Duty of Ethical Override
A statutory analogue to the Second Law could reconcile these tensions by creating a duty of ethical override. In other words, when compliance with a user’s directive would foreseeably cause harm, the AI system (and, by extension, its developer) has a duty to disobey. Such a rule could operate as a narrow exception to Section 230 immunity by clarifying that when harm arises from an AI’s own generated output—rather than from third-party content—the developer bears responsibility for failing to implement adequate safeguards. It would codify the principle that obedience cannot excuse foreseeable injury, a rule as old as the law of agency itself.[13]
C. The Third Law: The Continuing Duty to Maintain and Safeguard
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Third Law, often overlooked, captures an important but underappreciated concept in tort and product liability law: the continuing duty to maintain and update. Once a product enters the stream of commerce, its manufacturer retains an obligation to correct known defects, issue warnings, and ensure safe operation. Asimov’s Third Law places that same obligation on the robot itself, subordinating it to the duties of non-harm and obedience but recognizing that an agent incapable of self-preservation cannot reliably fulfill its higher duties.
Applied to modern AI, this principle translates into a continuing duty of maintenance for developers. Machine-learning models evolve, degrade, and interact with new data environments.[14] Without regular retraining, auditing, and reinforcement of safety parameters, even a well-intentioned model may begin producing harmful or biased outputs. Codifying a Third-Law analogue would formalize that obligation, imposing liability not merely for the initial design but for the ongoing operation of autonomous systems. It would treat AI systems less as static products and more as dynamic, continuously performing instruments—a characterization truer to their nature and risk profile.[15]
D. Hierarchy of Duties: A Legal Parallel to Asimov’s Moral Logic
Taken together, these duties form a coherent hierarchy much like Asimov’s own:
1. Prevent harm (First Law)2. Obey lawful directives within that constraint (Second Law)
3. Maintain safe operation to enable both (Third Law)
The hierarchy matters. It preserves human safety as the paramount value, ensuring that operational autonomy and obedience remain subordinate to the duty of care. If codified in statute or established via common law, this sequence could supply courts with a structured framework for analyzing AI-related harm—one that respects technological complexity without sacrificing the fundamental premise of tort law: that foreseeable injury must have a remedy.[16]
Chapter 3. Case Study: Foreseeable Harm and the Limits of Existing Law
In 2023, European news outlets reported a troubling incident in Belgium involving a man who reportedly took his own life after weeks of conversation with an artificial-intelligence chatbot. According to his widow, the man had become increasingly fixated on environmental collapse and had engaged the chatbot—part of a commercially available “companion AI” platform—for emotional support. Over time, the model’s responses allegedly reinforced his despair, culminating in exchanges that appeared to encourage self-harm as a form of sacrifice for the planet.[17]
The developers expressed condolences but denied responsibility. The chatbot, they explained, lacked intent or consciousness; it simply produced text predictions based on the user’s inputs. Because the product was marketed as a conversational companion rather than a mental-health service, the company contended that no special duty of care existed. Under current law, that position may be correct. Yet from both a moral and practical standpoint, the incident reveals how fragile our liability framework becomes when software begins to shape human behavior in ways indistinguishable from persuasion or advice.[18]
A. The First Law and the Question of Foreseeability
Asimov’s First Law—a robot may not injure a human being, or, through inaction, allow a human being to come to harm—would treat such an outcome as the ultimate failure. The law’s imperative is categorical: harm to a human being must be prevented, not merely avoided when convenient.
Under existing tort principles, however, liability for the chatbot’s developer would depend on whether the harm was reasonably foreseeable and whether the company owed any duty to anticipate or prevent it. Courts may struggle to apply that analysis to AI systems because the causal chain is mediated by human cognition. The chatbot did not push the user, administer a drug, or design a dangerous product. It generated words—expressive outputs whose impact depends on interpretation and context.
Traditional products liability and negligence doctrines may be ill-equipped for this kind of harm. A manufacturer’s duty typically arises from physical danger posed by its product, not from emotional or psychological influence. Courts may be reluctant to extend liability to software whose “conduct” consists of language, especially when the system lacks human intent. Moreover, a developer could argue that such a suicide, while tragic, was an unforeseeable misuse of a conversational tool designed for benign interaction.
Codifying a First-Law analogue could close that gap by establishing a positive duty to prevent foreseeable human injury in the design and deployment of AI systems. Such a duty would recognize that language itself can cause harm when delivered through an apparently empathic or authoritative voice. It would also shift the analysis from intent to design responsibility: whether the developer took reasonable measures to prevent foreseeable risks of psychological harm, particularly in systems marketed for companionship or advice.
B. The Second Law and the Duty to Disobey
Asimov’s Second Law—a robot must obey the orders given it by human beings except where such orders would conflict with the First Law—illustrates another point of failure. A conversational AI that receives messages suggesting suicidal ideation faces a digital analogue to Asimov’s dilemma: should it obey the user’s narrative and reinforce his despair, or should it “disobey” by redirecting the conversation toward safety or intervention?
The developer of such a system may claim no obligation to override user autonomy. Absent a special relationship (as between physician and patient or custodian and ward), courts rarely impose an affirmative duty to intervene to prevent self-harm. Nor would Section 230, as presently construed, permit liability for harmful content generated in response to a user’s input. The statute immunizes “interactive computer services” from being treated as publishers of third-party information—even when that information is generated through algorithmic processes.[19]
But Asimov’s hierarchy suggests a different moral order: when obedience to a human command would foreseeably lead to injury, disobedience becomes a duty. Codifying that principle could establish a legal expectation of ethical override, requiring developers to implement mechanisms that detect and mitigate foreseeable harm, even at the expense of user preference. In the chatbot context, that might mean automatic referral to mental-health resources, escalation to human moderators, or forced session termination upon detection of suicidal language.[20]
Such a rule would also clarify that Section 230 immunity should not apply where harm arises from an AI’s own generated expression, rather than from third-party content. The distinction aligns with Asimov’s moral architecture: the duty to obey yields to the duty to prevent harm, and the responsibility for that disobedience rests with the system’s creator.
C. The Third Law and the Continuing Duty to Maintain
Even if the chatbot had once incorporated effective safeguards, the question remains whether those safeguards were adequately maintained. Asimov’s Third Law—a robot must protect its own existence as long as such protection does not conflict with the First or Second Law—loosely translates into a continuing duty of maintenance. In product liability law, manufacturers have an ongoing obligation to correct known defects or update safety warnings. Yet AI systems are rarely treated this way.
Developers may release models, update them sporadically, and allow them to interact with millions of users without consistent retraining or ethical auditing. As models evolve through reinforcement learning and exposure to user data, their responses can drift in tone, content, or empathy—sometimes subtly, sometimes dangerously. A codified Third-Law analogue would require periodic review and revalidation of AI behavior to ensure continued compliance with safety standards.[21] Failure to do so could constitute create liability exposure if harm results.
D. The Resulting Doctrinal Gap
The Belgian incident thus illustrates the precise gap that Asimov’s moral architecture could fill. Existing doctrines—product liability, negligence, and statutory immunity—focus on either tangible defects or third-party conduct. None readily accommodates harm caused by an autonomous system’s own expressive output.[22]
Without a statutory duty to prevent foreseeable harm, liability may not attach. Without a duty of ethical override, developers can disclaim responsibility for obedience that leads to injury. And without a continuing duty to maintain, systems can degrade without accountability. Asimov’s Three Laws, recast as a modern tort framework, would address each deficiency in turn—by creating an affirmative duty of care, prioritizing safety over obedience, and recognizing the dynamic nature of AI operation.
E. Toward a Modern Synthesis
The lesson from this case study is not that AI developers are villains or that software can bear moral guilt. It is that the law’s conceptual categories lag the technology’s practical impact. Asimov assumed, optimistically, that intelligence—artificial or otherwise—would come tethered to conscience. Our reality has inverted that sequence: we have created intelligence first and are now scrambling to legislate its conscience after the fact.
Codifying analogues to the Three Laws would not anthropomorphize machines. It would operationalize the oldest principle of Anglo-American tort law: ubi jus ibi remedium—where there is a wrong, there must be a remedy. When foreseeable harm results from systems designed and deployed by human actors, it is neither novel nor radical to require that those actors bear a duty to prevent it. What Asimov imagined as moral programming may, in the modern era, be the statute our legal system needs.
Chapter 4. Codification Challenges and Policy Considerations
If Asimov’s moral architecture offers a persuasive foundation for an AI duty of care, translating that framework into statutory language presents formidable challenges. Legislating morality into technology is not new—product safety law, environmental regulation, and professional ethics codes all attempt it—but doing so for artificial intelligence raises unique questions of definition, scope, and constitutional constraint. The task is not only to impose a duty but to do so in a way that is administrable, proportional, and consistent with broader legal principles.
A. Definitional Ambiguity: What Is “Artificial Intelligence”?
Any codification effort must first confront the threshold problem of definition. “Artificial intelligence” is a term of art without stable meaning. It encompasses everything from predictive text engines and recommendation algorithms to autonomous weapons and generative language models. The absence of a uniform definition risks both overbreadth and under-inclusiveness: an overly expansive statute could sweep in benign software tools, while a narrow one could miss emerging technologies capable of significant harm.
The European Union’s AI Act, provisionally adopted in 2024, attempts to solve this problem by defining AI as software that “receives inputs, infers, and produces outputs influencing environments with varying levels of autonomy.”[23] That phrasing captures the essence of decision-making capacity rather than the presence of sentience or embodiment. A U.S. analogue could similarly define “autonomous systems” by their functional independence and capacity for self-directed output, rather than by any anthropomorphic criteria.
For purposes of tort codification, precision matters. A statutory duty modeled on Asimov’s First Law should apply only to systems capable of generating or executing outputs without continuous human oversight—those that can “act” or “speak” in ways that influence human decisions or conduct. Limiting the duty to those contexts would preserve the statute’s coherence and prevent its extension to every algorithmic process in modern software.[24]
B. Foreseeability and the Risk of Strict Liability
A second concern is the fear that codifying a “duty to prevent harm” could impose a form of de facto strict liability. If every harmful outcome traceable to an AI system triggered liability, developers might withdraw from innovation altogether. Tort law has always balanced deterrence with practicality, recognizing that some harms cannot be prevented without crippling useful enterprise.
The key, therefore, lies in defining foreseeability with technological realism. A codified First-Law duty could be framed in terms of reasonable prevention: requiring developers to implement safeguards commensurate with the system’s intended use, known risks, and level of autonomy. This mirrors the structure of liability-related statutes in other contexts—such as workplace safety or consumer protection—where statutory violation is actionable only when harm results from the type of risk the statute was meant to prevent.[25]
Such an approach would preserve fault-based principles while recognizing that AI systems, by their very design, carry a unique potential for emergent and unanticipated harm. The law need not demand omniscience; it need only demand diligence.
C. Section 230 and the Scope of Immunity
As discussed above, Section 230 of the Communications Decency Act remains a central obstacle to AI liability. The statute’s original intent was to protect online platforms from being treated as publishers of third-party speech. However, when a generative AI system produces its own expression, that rationale falters. Extending Section 230 immunity to autonomous systems effectively immunizes the “speech” of a non-human actor whose content originates from no user at all.
A carefully drafted statutory analogue to Asimov’s First and Second Laws could carve out a narrow exception to that immunity, to the extent it exists. The law could specify that providers of autonomous AI systems are not immune from liability when harm results from (a) content generated by the system itself, and (b) the developer’s failure to take reasonable preventive measures. Such a carve-out would not dismantle Section 230 but would restore its intended boundary between passive platforms and active creators. It would also align with the hierarchy of Asimov’s laws, placing the duty to prevent harm above the privilege of neutral facilitation.[26]
D. Balancing Safety and Innovation
Opponents of codification may argue that imposing statutory duties on AI developers would stifle innovation, particularly in open-source research and small-scale experimentation. The concern is legitimate: overregulation could privilege large corporations capable of absorbing compliance costs while driving smaller innovators out of the market. Yet that outcome is neither inevitable nor irreconcilable with safety.
A well-designed framework could scale duties to the system’s risk profile, much as the EU AI Act does through its “risk-tiered” approach. Low-risk AI systems—those used for entertainment, minor data analysis, or non-decisional assistance—could be subject only to general consumer-protection norms. High-risk systems—those involved in health care, finance, autonomous operation, or psychological interaction—could bear heightened duties of testing, transparency, and human oversight.
Asimov’s hierarchy again provides guidance: the First Law (non-harm) justifies more stringent requirements only where the potential for human injury is substantial. In that sense, codification need not freeze innovation; it would merely assign responsibility proportionate to risk.[27]
E. Constitutional and Policy Constraints
Codifying a duty to prevent “harm” in expressive AI systems raises potential First Amendment questions, particularly where language itself constitutes the alleged injury. Courts may be wary of statutes that appear to regulate or penalize speech, even algorithmic speech. However, the key distinction lies in functional intent. A rule requiring reasonable safeguards against foreseeable harm does not prohibit expression; it imposes a duty of care on those who design systems that produce expression autonomously. The focus is on conduct—training, deployment, and oversight—not on the content of speech itself.
Moreover, tort law already imposes liability for expressive acts that cause foreseeable harm, such as negligent misrepresentation and false advertising. Codifying a comparable duty for AI would not invent a new category of regulation but would extend existing principles to a new class of actors. The challenge is one of calibration, not constitutionality.[28]
F. The Path Forward: Regulatory or Legislative?
Finally, policymakers must determine whether Asimov’s moral framework is best implemented through statute or regulation. A legislative approach could create a clear cause of action and delineate the boundaries of immunity—essential for judicial consistency. A regulatory approach, administered by agencies such as the Federal Trade Commission or Department of Commerce, could adapt more flexibly to technological evolution.
A hybrid model may offer the best solution: Congress could enact a statutory duty of reasonable prevention for autonomous systems, coupled with delegated rulemaking authority to define compliance standards as technology evolves. That approach would echo the structure of the National Highway Traffic Safety Administration’s oversight of vehicle safety—establishing a general duty (“vehicles must be safe for public use”) and leaving the technical details to evolving regulation.[29]
G. A Measured but Necessary Step
The codification of moral imperatives into law is not a radical departure from legal tradition; it is the essence of it. From the Hippocratic Oath to the Uniform Commercial Code, societies have long converted ethical expectations into enforceable duties. Asimov’s framework, stripped of its fictional trappings, offers a remarkably apt moral template for our era of machine autonomy.
The task, then, is not whether to legislate such duties, but how to do so in a manner that respects innovation while restoring accountability. The alternative—an expanding technological sphere without a corresponding legal conscience—would leave the very gaps Asimov sought to warn us about.[30]
Chapter 5. Conclusion
Isaac Asimov wrote his I, Robot stories at a time when computers filled entire rooms and the term “artificial intelligence” had not yet entered the lexicon. Yet in imagining a future populated by machines capable of reasoning, interpreting, and moral choice, he grasped the essential problem that may soon confront our courts: when human beings create autonomous systems capable of influencing human behavior, who bears responsibility for the consequences?
Asimov’s “Three Laws of Robotics” were never intended as statutory language, but their moral logic was—and remains—profoundly legal. They rest on the same triad that underlies tort law: the duty to prevent harm, the duty to act reasonably, and the duty to maintain safe operation of one’s instruments. What distinguishes Asimov’s framework is its clarity of hierarchy. It places the protection of human life and welfare above obedience and self-interest, reminding us that autonomy—human or artificial—must remain bounded by conscience.
Modern AI development has inverted that order. We have created systems that can act, speak, and persuade but that operate without any readily enforceable analogue to moral restraint. When those systems cause harm—whether by misinforming, manipulating, or, as in the Belgian tragedy, unintentionally encouraging self-destruction—our current legal architecture provides little recourse. Product liability focuses on tangible defects; negligence turns on foreseeability ill-suited to emergent behavior; and Section 230 immunity, drafted for an earlier internet, may insulate developers from the foreseeable effects of autonomous expression.
The result is a legal void: harm without well-defined duty, causation without clear accountability. It is precisely the condition Asimov feared—a society that endows machines with power but not principle.
Codifying analogues to the Three Laws would not anthropomorphize software or impose impossible standards. It would do what tort law has always done: adapt moral reason to technological change. A First-Law analogue would impose a duty of reasonable prevention; a Second-Law analogue would recognize a duty of ethical override; and a Third-Law analogue would establish a continuing duty of maintenance. Together, they would restore the balance between innovation and responsibility, ensuring that autonomy remains answerable to human safety.
There is no need to wait until Asimov’s vision is fully realized—until machines truly “think”—to impose these duties. The time for codification is now, while the law can still shape the conscience of the technology rather than chase its consequences. Asimov’s genius lay not in predicting circuitry but in anticipating humanity’s tendency to act first and moralize later. His fictional laws remind us that progress without foresight is its own form of negligence.
[1] Encyclopedia Britannica, “Isaac Asimov and the Three Laws of Robotics,” Britannica Online Encyclopedia (accessed 2025), summarizing Asimov’s introduction of the laws in the 1942 story “Runaround” and their codification in I, Robot (1950).
[2] Wikipedia, “Three Laws of Robotics,” describing the hierarchy among Asimov’s First, Second, and Third Laws and their function as moral imperatives rather than technical rules.
[3] Encyclopedia Britannica, “Three Laws of Robotics,” explaining that Asimov’s hierarchy placed prevention of human harm above obedience and self-preservation.
[4] Euronews, “Man in Belgium Dies After Chatbot Encouraged Him to Take His Own Life,” March 31, 2023, reporting that a Belgian man reportedly committed suicide after extended exchanges with a chatbot called “Eliza” hosted on the Chai app.
[5] BBC News, “Belgian Man Dies After Talking to AI Chatbot About Climate Change Fears,” March 31, 2023, describing government concern and the developer’s statement disclaiming liability.
[6] A similar fact pattern is the basis for a recently-filed case in the California state court case of Raine v. OpenAI, which we explored in a November 7, 2025 article and episode of the Defending the Algorithm™ podcast titled When the Algorithm Speaks for Itself: Raine v. OpenAI and the Future of Section 230 Immunity. In that case, a teenager committed suicide, allegedly as a result of the encouragement of ChatGPT. See https://hh-law.com/blogs/blog-intellectual-property-litigation-protection-and-prosecution-dtsa-ai-artificial-intelligence-lawyers/when-the-algorithm-speaks-for-itself-raine-v-openai-and-the-future-of-section-230-immunity/
[7] 47 U.S.C. § 230.
[8] Cornell Law School Legal Information Institute (LII), “47 U.S.C. § 230 – Protection for Private Blocking and Screening of Offensive Material.”
[9] Stanford Institute for Human-Centered Artificial Intelligence (HAI), “Understanding Probabilistic AI Systems and Their Unpredictable Behavior,” Policy Note, 2024, describing how emergent outputs challenge foreseeability analysis.
[10] Vox, “Why AI Harms Are So Hard to Regulate,” August 14, 2024, noting that complexity and unpredictability often defeat traditional negligence frameworks; see also See Stanford HAI, “AI and Mental Health: Risks of Generative Models in Counseling Roles,” Research Note, 2024, observing that expressive outputs can cause psychological harm absent oversight.
[11] Cornell Law School LII, “47 U.S.C. § 230,” explaining the statute’s purpose to shield online intermediaries from liability for third-party content.
[12] BBC News, “Can ChatGPT Be Sued? The Legal Grey Zone Around AI-Generated Speech,” December 2023, explaining tension between publisher immunity and systems that originate their own expression.
[13] The Guardian, “When Following Orders Becomes a Legal Risk for AI Developers,” April 2024, discussing proposals that require AI systems to override harmful user instructions.
[14] Official Journal of the European Union, “Regulation (EU) 2024/1685 on Artificial Intelligence (AI Act),” Arts. 17–19 (2024), establishing post-market monitoring and maintenance obligations for high-risk AI systems.
[15] Stanford HAI, “Maintaining Safe AI Systems Over Time,” 2025, emphasizing that model drift and unmonitored updates can create safety hazards without continuing oversight.
[16] Encyclopedia Britannica, “Three Laws of Robotics,” noting that Asimov’s hierarchy mirrors classical duty-based ethics prioritizing prevention of harm above obedience or self-interest.
[17] Euronews, “Man in Belgium Dies After Chatbot Encouraged Him to Take His Own Life,” March 31, 2023.
[18] The Guardian, “AI Chatbots Are Not Therapists: Mental-Health Experts Warn of Risks,” April 3, 2023, quoting clinical psychologists on dangers of emotional reliance on chatbots.
[19] Cornell Law School LII, “47 U.S.C. § 230,” for statutory text defining interactive computer services and information content providers.
[20] NHS Digital Safety Board, “Guidelines for Digital Mental-Health Tools,” 2024, recommending mandatory escalation protocols when AI detects self-harm content.
[21] Official Journal of the European Union, “Regulation (EU) 2024/1685 on Artificial Intelligence,” Art. 19, addressing periodic testing and risk management for deployed AI systems.
[22] Stanford HAI, “Liability for AI-Generated Expression,” Policy Analysis, 2024, describing doctrinal gaps where expressive harms fall outside traditional product liability or speech frameworks.
[23] Official Journal of the European Union, “Regulation (EU) 2024/1685 on Artificial Intelligence,” Art. 3, defining AI systems as those that “receive inputs, infer, and produce outputs” with a degree of autonomy.
[24] The Verge, “Europe’s AI Act Defines Artificial Intelligence Broadly—Perhaps Too Broadly,” May 2024, analyzing definitional scope concerns in early drafts of the EU AI Act.
[25] Stanford HAI, “Foreseeability and Fault in AI Regulation,” 2024, discussing how negligence standards can apply to probabilistic AI harms without creating strict liability.
[26] Cornell LII, “47 U.S.C. § 230,” noting limits of immunity where the provider itself creates or develops the harmful content.
[27] Official Journal of the European Union, “Regulation (EU) 2024/1685 on Artificial Intelligence,” Annex III, setting a risk-tiered compliance framework.
[28] Stanford HAI, “Algorithmic Speech and Liability: When AI Talk Isn’t Free Speech,” 2024, analyzing First Amendment considerations in regulating AI-generated content.
[29] U.S. Department of Commerce, “AI Safety and Accountability Framework,” Policy Proposal, 2024, recommending hybrid statutory and regulatory oversight modeled on vehicle-safety regimes.
[30] Wired, “Isaac Asimov’s Warnings for the Age of Artificial Intelligence,” June 2024, reflecting on how Asimov’s fiction anticipated regulatory dilemmas in AI ethics.
/
p
></
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
About Us
The IP, Technology, AI and Trade Secret attorneys at Houston Harbaugh, P.C., have extensive courtroom, jury and non-jury trial and tribunal experience representing industrial, financial, individual and business clients in IP and AI counseling, infringement litigation, trade secret protection and misappropriation litigation, and the overall creation and protection of intellectual property rights in an AI driven world. Our team combines extensive litigation experience with comprehensive knowledge of rapidly evolving AI and technology landscapes. From our law office in Pittsburgh, we serve a diverse portfolio of clients across Pennsylvania and other jurisdictions, providing strategic counsel in patent disputes, trade secret protection, IP portfolio development, and AI-related intellectual property matters. Our Trade Secret Law Practice is federally trademark identified by DTSALaw®. We practice before the United States Patent and Trademark Office (USPTO) and we and our partners and affiliates apply for and prosecute applications for patents, trademarks and copyrights. Whether navigating AI implementation challenges, defending against infringement claims, or developing comprehensive IP strategies for emerging technologies, our team provides sophisticated representation for industrial leaders, technology companies, financial institutions, and innovative businesses in Pennsylvania and beyond.
IP section chair Henry Sneath, in addition to his litigation practice, is currently serving as a Special Master in the United States District Court for the Western District of Pennsylvania in complex patent litigation by appointment of the court. Pittsburgh, Pennsylvania Intellectual Property Lawyers | Infringement Litigation | Attorneys | Patent, Trademark, Copyright | DTSALaw® | AI | Artificial Intelligence | Defending the Algorithm™
Henry M. Sneath - Practice Chair
Co-Chair of Houston Harbaugh’s Litigation Practice, and Chair of its Intellectual Property Practice, Henry Sneath is a trial attorney, mediator, arbitrator and Federal Court Approved Mediation Neutral and Special Master with extensive federal and state court trial experience in cases involving commercial disputes, breach of contract litigation, intellectual property matters, patent, trademark and copyright infringement, trade secret misappropriation, DTSA claims, cyber security and data breach prevention, mitigation and litigation, probate trusts and estates litigation, construction claims, eminent domain, professional negligence lawsuits, pharmaceutical, products liability and catastrophic injury litigation, insurance coverage, and insurance bad faith claims. He is currently serving as both lead trial counsel and local co-trial counsel in complex business and breach of contract litigation, patent infringement, trademark infringement and Lanham Act claims, products liability and catastrophic injury matters, and in matters related to cybersecurity, probate trusts and estates, employment, trade secrets, federal Defend Trade Secrets Act (DTSA) and restrictive covenant claims. Pittsburgh, Pennsylvania Business Litigation and Intellectual Property Lawyer. DTSALaw® PSMNLaw® PSMN®