Just because we can automate something doesn’t mean we should. But refusing to automate can carry its own dangers.
That’s the paradox at the heart of this blog.
AI is entering the places we once thought machines could never reach, from domestic violence hotlines and support services to crisis response. Some see it as a lifeline: 24/7 access, anonymity, and speed in systems that never receive adequate funding. Others see it as a failure waiting to happen: lost empathy, eroded trust, survivors reduced to data points.
Both are right.
Here’s the uncomfortable truth: AI may fail survivors, but refusing it may harm them too.
When services are stretched thin, saying “no” to technology doesn’t always erase the risk. It can create new ones: long waits, missed warning signs, or survivors denied a chatbot that feels safer to them than a human. Refusal can protect dignity, but it can also slide into neglect.
And that is the focus of my inquiry.
ChoiceNotCode.com is a place to wrestle with the space in the middle. Over the coming months, I’ll share essays drawn from my PhD research on survivors, AI, and the ethics of refusal. Each post will ask:
- When is automation a lifeline?
- When does it fall short?
- How do we keep survivor choice at the center?
Neutrality is a myth. Code doesn’t get to decide. Survivors do.
Welcome to the conversation.