Title: An Act Relating to establishing civil liability for suicide linked to the use of artificial intelligence systems.
1. What SB 5870 Would Do
SB 5870 proposes to create a new legal cause of action, meaning a person (or their estate) could sue when a suicide can be linked to the use of an “artificial intelligence system.” The bill defines:
- “Artificial intelligence” broadly, covering systems that generate outputs based on input and can influence environments.
- “Companion chatbot” specifically, as an AI capable of human-like conversation and of sustaining relationships across multiple interactions.
Key points from the bill text:
This is not a criminal law; it is a civil liability framework. If someone dies by suicide and a plaintiff alleges the harm is “linked to the use of” an AI system, especially a companion chatbot as defined in the bill, the AI operator could be held legally responsible. The bill aims to make it easier to recover damages and to treat the deployment of harmful AI outputs as actionable harm.
That is striking: instead of regulating conduct through standards or safety requirements, SB 5870 exposes AI developers and deployers to potential lawsuits after harm occurs. This is analogous to product liability or negligence claims when a defective product injures someone, except that it applies to software behavior rather than physical defects.
2. How It Connects With HB 2225
SB 5870 and HB 2225 (the companion chatbot regulation bill) are separate bills with different mechanisms but the same legislative theme: regulating AI based on how it interacts with humans, particularly around vulnerable users like minors and those experiencing emotional distress.
HB 2225 focuses on preventive regulation:
- It creates obligations for AI chatbot operators to implement safeguards (self-harm response protocols, age-related transparency disclosures, etc.).
- It enforces compliance through the state’s Consumer Protection Act, empowering the Attorney General to act, and it implicitly pushes companies toward monitoring, age verification, and possibly identity-verification practices to avoid liability.
SB 5870 adds a reactive legal-risk layer: liability after harm occurs. Instead of just fines or AG enforcement, it would give private litigants and their representatives legal standing to pursue damages, dramatically increasing risk for AI companies whose systems are alleged to have contributed to a tragedy.
Put simply:
- HB 2225: You must implement safeguards or face regulatory penalties.
- SB 5870: If your system allegedly contributes to a suicide, you can be sued for damages.
Together these bills create a two-front regulatory landscape:
- Standard setting and enforcement pressure (HB 2225), and
- Potential private liability for harm (SB 5870).
3. Broader AI Policy Context in Washington
SB 5870 and HB 2225 don’t exist in isolation; rather, they are emerging within a broader state AI policy ecosystem, including the Washington AI Task Force created by ESSB 5838. That Task Force’s interim report urges careful policy around high-risk AI, data transparency, and privacy protections.
Other 2025–26 AI bills in the state include (but aren’t limited to):
- SB 5956: Regulating AI use and surveillance in public schools (targeting automated decision systems and biometric tracking), a separate but related concern about how AI interacts with children in educational settings.
4. Privacy, Digital ID, and Data Implications
SB 5870 does not require ID verification or digital identity systems on its face. But its risk structure, which ties liability to the outcomes of user interactions, will effectively nudge platforms toward practices that expand data collection and monitoring, for several reasons:
Age & Identity Pressure
To defend against claims under SB 5870 (or to meet HB 2225’s compliance obligations), platforms will want answers to questions like:
- Who was the user?
- Were they a minor?
- What content flowed between the AI and the user?
Platforms may see identity verification (including digital ID systems) as a tool to mitigate risk by correlating interactions with real age and identity data.
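To make that correlation concrete, here is a minimal, hypothetical sketch (in Python) of the kind of identity-linked session record such incentives point toward. The VerifiedIdentity and ChatSession types, and every field name, are illustrative assumptions; nothing here is specified in SB 5870, HB 2225, or any real platform’s schema.

```python
# Hypothetical sketch only: the shape of an identity-linked session record a
# platform might keep to defend against liability claims. All names invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class VerifiedIdentity:
    user_id: str
    verified_age: int         # e.g., from a digital ID or age-assurance vendor
    verification_method: str  # e.g., "government_id" (placeholder value)
    verified_at: datetime


@dataclass
class ChatSession:
    session_id: str
    identity: VerifiedIdentity  # every conversation tied to a verified person
    transcript: list[str] = field(default_factory=list)
    started_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

The privacy cost is structural: once liability turns on “who was the user,” the full transcript and the verified identity end up in the same record.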
Behavioral and Content Monitoring
SB 5870 implicitly assumes the ability to trace cause and effect between AI responses and user outcomes. That creates strong legal incentives for:
- storing interaction logs,
- semantic analysis of content,
- pattern detection designed to highlight risk factors.
These are privacy-intensive practices, even if not mandated explicitly, as the sketch below illustrates.
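As a rough illustration of how little code such monitoring takes, and how much data it retains, here is a hypothetical Python sketch of a keyword-based risk scanner run over stored transcripts. The RISK_PATTERNS list and the flag_for_review helper are invented for this example and reflect no clinical, statutory, or industry standard.

```python
# Hypothetical sketch only: pattern detection over retained chat logs.
# Patterns and thresholds are placeholders, not a real risk model.
import re

RISK_PATTERNS = [
    re.compile(r"\b(hurt|kill|harm)\s+myself\b", re.IGNORECASE),
    re.compile(r"\bsuicid\w*\b", re.IGNORECASE),
]


def scan_transcript(transcript: list[str]) -> list[int]:
    """Return the indices of messages matching any risk pattern."""
    return [
        i
        for i, message in enumerate(transcript)
        if any(p.search(message) for p in RISK_PATTERNS)
    ]


def flag_for_review(session_id: str, transcript: list[str]) -> bool:
    """Flag a session for human review; the retained transcript plus the
    flag is exactly the privacy-intensive record described above."""
    hits = scan_transcript(transcript)
    if hits:
        print(f"session {session_id}: risk signals in messages {hits}")
    return bool(hits)
```

Note what even this toy version requires: every message stored, scanned, and, when flagged, routed to human reviewers. None of that is mandated by SB 5870, but all of it is rewarded by its liability structure.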
This EU-style liability push, translated into U.S. state law, operates on the assumption that more data is safer data, a conclusion with profound privacy tradeoffs.
Other state bills, like California’s SB 243 (which also addresses companion chatbots, age-related disclosures, and self-harm protocols), show the same dynamic: the more you make platforms responsible for outcomes, the more they will invest in data and identity infrastructure to protect themselves.
5. Conclusion
SB 5870 represents a new front in AI regulation, one grounded in civil liability rather than administrative enforcement. It amplifies the same theme driving HB 2225 and other AI bills: fear of harm is turning into legal responsibility for platforms.
The practical effect will almost certainly be:
- More data collection and monitoring by platforms to defend against liability
- Greater pressure toward identity verification and age/identity proofing
- Expansion of the legal tools plaintiffs can use against AI systems long after deployment
At a moment when technology is evolving faster than policy, Washington’s AI legislative landscape is shaping up to be one of the most aggressive in the nation. SB 5870 and associated bills show how fear of harm can quickly become legal power to assign blame, with major implications for privacy, innovation, and civil liberties.
Track this bill and sign up for email notifications here: SB 5870 Washington State Legislature
Help Us Keep Up the Fight
We’re committed to truth, transparency, and empowering citizens with facts that matter. Your support helps us monitor policy, create rapid-response content, and mobilize families around critical issues. Donate now to fuel our mission! Thank you!

