Survivor choice isn’t optional — it’s the ethics.
AI is entering some of the most intimate and high-stakes spaces in society: domestic violence hotlines, survivor services, crisis response. Some say we should automate everything. Others say we should resist machines at all costs.
This blog sits in the messy middle.
I started ChoiceNotCode.com to explore a paradox: AI may fail survivors, but refusing it may harm them too. In under-resourced systems, resistance can protect dignity — or it can create neglect. The real question isn’t adoption or refusal alone, but how we keep trust and choice at the center.
Here, I write about:
- The double risk of automation: the harm of adopting it, and the harm of refusing it.
- Survivor voices on trust, safety, and preference (yes, some prefer chatbots).
- The Survivor-Centered Toolkit and Trust–Empowerment Matrix I’m building as part of my doctoral research.
- Broader debates on AI ethics, refusal, and care.
This is not a neutral space. Neutrality is a myth. My stance is simple: survivors decide.
ChoiceNotCode.com is my way of thinking out loud as I write my PhD dissertation and develop a book, *Too Dangerous Not to Automate: When Resistance Becomes Neglect*.
If you’re a survivor, practitioner, technologist, policymaker, or just curious about where trust meets the machine, welcome. Join me in asking the hardest question of all: what should never be automated — and what’s too dangerous not to?