Survivors Who Choose AI

When people talk about AI in survivor services, the assumption is often the same: no one really wants it. We imagine survivors feeling dismissed and silenced when automation enters the picture.

While that’s true for many survivors, it isn’t true for all of them.

Some survivors prefer talking to a chatbot. A chatbot can’t judge them, doesn’t demand explanations for their actions or inactions, and can be deleted so they can start over. For someone living under constant surveillance or in isolation, a chatbot at 2 a.m. might feel like a welcome respite.

Others choose AI tools because they’re always available. A hotline may close for the night. A shelter may be full. But a chatbot doesn’t get tired. For people whose access to support is often uncertain, simple dependability can feel like trust.

Of course, this doesn’t mean that AI can replace human empathy. It means that trust looks different for different survivors. For some, it comes from the warmth of human compassion. For others, it comes from a machine that never interrupts and never shames.

The ethical failure isn’t in survivors choosing AI. The failure is in denying them the choice.

Survivors don’t need us to decide for them whether AI belongs in their care. They need the ability to decide for themselves. That’s the line between resistance and neglect.
