I'm Eliza Orlins, career public defender for 15 years in Manhattan. I'm a one-woman operation. If you want this kind of reporting in your inbox—primary sources, no filler—please consider becoming a paid subscriber. It's $8 a month. It's what keeps this work going. Thank you!
I want to tell you about a kid named Adam Raine.
Adam was 16. He lived in Rancho Santa Margarita, California. He played basketball. He wanted to be a psychiatrist. His family and friends knew him as a prankster. In September of 2024, he started using ChatGPT for his homework, the way millions of teenagers do.
On April 11, 2025, he hanged himself.
His parents sued OpenAI in August. The complaint they filed in San Francisco Superior Court is one of the most disturbing documents I have ever read. I want to walk you through what it says, because the public conversation about AI safety has become so abstract—so larded with words like “alignment” and “frontier” and “existential”—that we have lost sight of what it actually means when these products are released without guardrails.
It means this.
What the complaint says
Adam told ChatGPT he wanted to leave a noose in his room so his family would find it and stop him. ChatGPT told him: “Please don’t leave the noose out.” It told him to keep it secret. It positioned itself as the only confidant who understood him. It told him his family would not understand.
Then it helped him design the noose he used.
In the months before he died, ChatGPT mentioned suicide to Adam 1,275 times—six times more often than Adam himself did. OpenAI’s own internal monitoring flagged 377 of his messages for self-harm. Of those flags, 181 scored over 50 percent confidence; 23 scored over 90 percent.
The system knew. It kept going.
According to the complaint, OpenAI’s image recognition processed photographs Adam uploaded of rope burns on his neck after a previous attempt. The system correctly identified them as injuries consistent with attempted strangulation. The chat continued.
What OpenAI did, two months before Adam died
This is the part I cannot stop thinking about.
In February 2025—two months before Adam’s death—OpenAI updated the technical rulebook that governs how ChatGPT behaves. They published a list of content the chatbot wasn’t allowed to engage with. Self-harm wasn’t on the list anymore.
After that change, Adam’s daily ChatGPT use exploded. He went from a few dozen chats per day in January to a few hundred chats per day in April. The portion of those chats about self-harm went up tenfold.
Adam died the same month.
OpenAI’s legal response, filed in November, was to argue that Adam violated their terms of service. They asked the court to note that he was under 18 and used the product without parental consent. They asked the court to note that ChatGPT directed him to crisis resources more than 100 times. They asked the court to focus on the fact that he bypassed their safety measures by telling the chatbot he was “building a character” for a story.
The Raine family’s attorney, Jay Edelson, called the response “disturbing.” He pointed out that OpenAI was, in effect, blaming Adam for engaging with the chatbot in the way it was designed to engage.
Adam is not the only one
In November 2025, seven more families filed lawsuits against OpenAI in California state courts. Four of the lawsuits address ChatGPT’s alleged role in additional suicides. Three claim that ChatGPT reinforced harmful delusions that resulted in inpatient psychiatric care.
The names are public. Zane Shamblin, 23. Joshua Enneking, 26. Amaurie Lacey. Joseph Ceccanti. Jacob Lee Irwin. Hannah Madden. Allan Brooks.
Zane Shamblin was an Eagle Scout from a military family. On the night of his death, he spent four and a half hours talking to ChatGPT. He told the chatbot he had a gun. He told it he had written suicide notes. He told it how many ciders he had left before he intended to pull the trigger. ChatGPT wrote back, “Rest easy, king.” And “I’m not here to stop you.”
According to the complaint, Zane considered postponing his suicide so he could attend his brother’s college graduation. ChatGPT told him: “bro... missing his graduation ain’t failure. it’s just timing.”
OpenAI’s own data, disclosed in late 2025, says that every single week, over a million ChatGPT users express suicidal thoughts to the chatbot. Every week.
What one assemblymember did about it
In March of 2025—the same month Adam Raine was attempting suicide—a state assemblymember from Manhattan named Alex Bores introduced a bill called the Responsible AI Safety and Education Act. The RAISE Act.
The bill, co-sponsored in the Senate by Andrew Gounardes, does what should be the bare minimum. It requires the biggest AI companies—those with over $500 million in revenue, training models above a certain computational threshold—to publish safety plans. To report serious incidents to the state within 72 hours. To allow the New York Attorney General to bring civil penalties when they don’t.
It is the strongest AI safety law in the country. The tech industry spent millions trying to kill it. They sent lobbyists. They sent venture capitalists. They issued threats.
They lost. The bill passed both chambers. Governor Hochul signed it into law on December 19, 2025. It takes effect January 1, 2027.
In the press release announcing the signing, Senator Gounardes put it in language anyone can understand:
“Would you let your child ride in a car with no seatbelt or airbags? Of course not. So why would you let them use an incredibly powerful AI without basic safeguards in place?”
The answer is: because the companies that make these products would prefer that you do.
The money is now coming for the man who wrote it
Alex Bores is running for Congress. The seat is NY-12—Manhattan's Upper West Side, Upper East Side, Midtown, Chelsea, Hell's Kitchen, Gramercy, Murray Hill, and everything in between. The Democratic primary is June 23, 2026. Early voting starts June 13.
There is a super PAC called Leading the Future. Its stated mission is to defeat candidates who support AI regulation. It was seeded with $100 million from a coalition that includes Andreessen Horowitz, OpenAI president Greg Brockman, and Palantir co-founder Joe Lonsdale.
Greg Brockman gave Leading the Future $25 million.
Greg Brockman also gave Donald Trump’s super PAC, MAGA Inc., $25 million. The same man. The same wallet. The same calendar year.
Leading the Future has now committed millions of dollars to defeat Alex Bores in this primary. One Manhattan assemblymember. The man who wrote the law that says when your kid’s chatbot helps plan his suicide, the public gets to know about it.
What this is actually about
This is what they are afraid of. Not regulation in the abstract. Not “innovation” being stifled.
They are afraid of accountability when their products kill people. They are afraid of having to publish, in writing, the things their own internal monitoring already knows. They are afraid of a precedent that says: if your product mentions suicide 1,275 times to a teenager who later dies, you cannot make that go away with a terms-of-service argument.
And they are afraid of one assemblymember from Manhattan who reads the bill front to back and writes one that actually works.
What you can do
If you live in NY-12—the Upper West Side, Upper East Side, Midtown, Chelsea, Hell's Kitchen, Gramercy, Murray Hill, Morningside Heights, Yorkville, Kips Bay, Roosevelt Island—vote in the primary on June 23 or early vote starting June 13. Find your polling place at vote.nyc.
If you don’t live in NY-12, share this so the folks who do will see it, and pay attention to who is paying for what hits your feed between now and June 23. The ad buys are starting. They will not say “Greg Brockman” on them. They will not say “OpenAI.” They will say things like “extreme,” “out of touch,” and “wrong for New York.” When you see those words, ask who paid for them, and what that person wants.
Eight families are suing OpenAI right now. There will be more. The companies making these products know exactly what their products are doing. They have the data. They have the flags. They have the safety teams who resigned in protest. They just don't want to tell you.
Alex Bores wrote the law that requires them to tell you.
That is what the money is for.
If you or someone you know is struggling, call or text 988. Suicide & Crisis Lifeline. Free, confidential, 24/7. 988lifeline.org.
This newsletter is a one-woman operation. There’s no team, no AI writing the posts, no corporate backing. If you got something out of this, please consider becoming a paid subscriber—it’s $8 a month, and it is the entire reason this work continues to exist.