
Washington State Takes Aim at Regulating Artificial Intelligence

In the serene capital of Olympia, where the stately Legislative Building stands as a symbol of governance, Washington lawmakers are embarking on an ambitious journey to establish guardrails around artificial intelligence. As federal legislators continue their lengthy debates with few concrete outcomes, Washington state is stepping into the regulatory void with a slate of bills aimed at protecting its citizens from AI’s potential harms. This renewed push comes after previous broad regulatory attempts stalled, though the state has successfully implemented narrower measures limiting facial recognition and deepfakes. The current legislative package, set to be considered in the session starting this Monday, takes aim at several critical areas: discrimination in high-stakes decisions, limitations on AI in educational settings, and new obligations for companies developing emotionally responsive AI products that interact with vulnerable populations.

At the heart of the proposed legislation is House Bill 2157, a comprehensive measure addressing “high-risk” AI systems that influence life-altering decisions around employment, housing, credit, healthcare, education, insurance, and parole. The bill would require companies deploying such systems in Washington to conduct discrimination risk assessments and implement mitigation strategies, provide clear disclosure when people are interacting with AI, and offer explanations when AI contributes to adverse decisions. This approach acknowledges that while low-risk tools like spam filters or basic customer service chatbots pose minimal societal risk, systems that determine whether someone gets a job, loan, or housing deserve heightened scrutiny. If passed, the regulations would impact a wide spectrum of companies, from HR software vendors to fintech firms, with compliance required by January 2027. This thoughtful timeline gives businesses adequate preparation time while addressing urgent concerns highlighted by Washington’s AI Task Force, which recently noted that the federal government’s “hands-off approach” has created “a crucial regulatory gap that leaves Washingtonians vulnerable.”

The emotional wellbeing of citizens, particularly young people, drives another significant portion of the legislative package. Senate Bill 5984, requested by Governor Bob Ferguson, focuses on AI companion chatbots that blur the line between human and artificial interaction. These increasingly sophisticated systems, which can form emotional bonds with users, would face new requirements: repeatedly disclosing their non-human nature, prohibiting sexually explicit content for minors, and implementing suicide-prevention protocols. The bill reflects growing concerns that AI companions could foster unhealthy dependencies or reinforce harmful thoughts, especially among vulnerable youth. For mental health and wellness startups exploring AI-driven therapy or emotional support tools, these regulations would establish important boundaries. Babak Parviz, CEO of Seattle-based mental health startup NewDays and former Amazon executive, acknowledged the bill’s good intentions while expressing concerns about the vague definition of “long-term relationship” and emphasizing the importance of human oversight in clinical settings: “For critical AI systems that interact with people, it’s important to have a layer of human supervision.”

Taking the protection of mental health further, Senate Bill 5870 proposes groundbreaking civil liability for AI systems alleged to have contributed to a person’s suicide. Companies could face lawsuits if their AI encouraged self-harm, provided instructions, or failed to direct users to crisis resources—and notably, they would be barred from deflecting responsibility by claiming the harm resulted from autonomous AI behavior. This proposal explicitly connects AI system design and operation to wrongful-death claims, reflecting growing legal scrutiny of companion-style chatbots following lawsuits involving platforms like Character.AI and OpenAI. By creating this potential liability, lawmakers are sending a clear message that companies must take responsibility for how their AI systems interact with people in crisis, rather than treating harmful outcomes as unexpected or unintended consequences of complex algorithms.

The protection of children extends beyond emotional wellbeing into the educational sphere with Senate Bill 5956, which takes aim at controversial AI applications in K-12 schools. The bill would prohibit predictive “risk scores” that label students as potential troublemakers—algorithms that civil rights advocates have long criticized for potentially amplifying existing disparities in school discipline. Schools would also be barred from using real-time biometric surveillance like facial recognition and, critically, would be prohibited from using AI as the sole basis for serious disciplinary actions like suspensions, expulsions, or law enforcement referrals. This approach acknowledges the value of human judgment in educational settings while preventing automated systems from making consequential decisions about children’s futures based on algorithms that may contain hidden biases or lack contextual understanding of individual circumstances.

The final piece in Washington’s AI regulatory puzzle, Senate Bill 5886, addresses the growing concern about synthetic media by updating the state’s right-of-publicity laws to explicitly cover AI-generated forgeries, including convincing voice clones and synthetic images. Using someone’s AI-generated likeness for commercial purposes without their consent could expose companies to liability, reinforcing that existing identity protections extend into the AI era for all citizens, not just celebrities. This update reflects the rapidly evolving capability of generative AI to create increasingly convincing fakes that could mislead the public or damage reputations. Taken together, these five bills represent Washington’s thoughtful attempt to establish reasonable guardrails around AI development and deployment while the federal government continues its deliberations. By focusing on protecting vulnerable populations, ensuring human oversight of consequential decisions, and holding companies accountable for harmful outcomes, Washington lawmakers are charting a course that balances innovation with responsibility—a model that other states may soon follow as AI continues its rapid integration into daily life.
