It’s 2 a.m. You’re sitting in the dark, phone in hand, heart pounding. You want to tell someone what’s been happening, but you’re not sure you’re ready to speak to a stranger just yet. The thought of being judged, misunderstood, or simply not believed gives you pause.
Instead of a hotline operator, you see a blinking cursor on a chatbot screen. Would you trust it with your first disclosure?
For some survivors, the answer is a clear no. A chatbot does not have empathy. It feels cold and unsafe. Trust, once broken, is hard to rebuild. And for many, automation feels like a betrayal before it even begins.
But for others, the answer is yes, or at least: why not? A chatbot offers complete anonymity, passes no judgement, and can be paused or deleted at any time. For survivors afraid of being recognized, judged, dismissed, or pressured, a chatbot can feel like a safe first step.
Both truths exist.
This is the paradox we can’t ignore: trust doesn’t look the same for every survivor. For one person, trust means human empathy. For another, it means a predictable system that doesn’t flinch or push back. To force survivors into one model is to erase what little agency they may have.
The real ethical failure isn’t in using AI or in refusing to use it. It’s in taking the choice away from survivors.
If we want survivor services to be worthy of trust, we need to ask a different question: not “should we automate or resist?” but “who gets to decide?”
Here at ChoiceNotCode.com, I argue that survivors should.