Author: Ednalyn C. De Dios

  • Refusal as Resistance, Refusal as Risk

    Refusing AI feels like resistance. It’s a way of saying: survivors are not data points, and empathy cannot be coded. Resistance is an act of care, a stand for dignity in systems that too often forget it.

    But resistance is not risk-free.

    When refusal hardens into policy and services ban automation outright, it can leave survivors waiting, silenced, or without access to the very tools they prefer. A decision that starts as protection can end as neglect.

    This is the contradiction at the heart of refusal: it can defend survivors, and it can harm them. Both truths exist.

    The real challenge is not deciding whether to resist or adopt AI in blanket terms. It’s learning how to resist bad uses of AI without closing the door on survivor choice.

    Refusal as resistance is necessary. Refusal as risk is dangerous. The difference comes down to one principle: choice must stay in survivors’ hands.

  • Survivors Who Choose AI

    When people talk about AI in survivor services, the assumption is often the same: no one really wants it. We imagine survivors feeling dismissed and silenced when automation enters the picture.

While that’s true for many survivors, it isn’t true for all.

Some survivors prefer talking to a chatbot. A chatbot can’t judge them, doesn’t demand explanations for their actions or inactions, and can be wiped clean so they can start over. For someone living under constant surveillance or in isolation, a chatbot at 2 a.m. might feel like a welcome respite.

    Others choose AI tools because they’re always on and available. A hotline may close for the night. A shelter may be full. But a chatbot doesn’t get tired. When support is often uncertain, the simple fact of being dependable can feel like trust.

    Of course, this doesn’t mean that AI can replace human empathy. It means that trust looks different for different survivors. For some, it comes from the warmth of human compassion. For others, it comes from a machine that never interrupts and never shames.

    The ethical failure isn’t in survivors choosing AI. The failure is in denying them the choice.

    Survivors don’t need us to decide for them whether AI belongs in their care. They need the ability to decide for themselves. That’s the line between resistance and neglect.

  • The Hidden Cost of Saying No

    Saying no to AI feels safe. It feels ethical. In a world where we’re constantly inundated with technology, resistance can feel like the only choice left.

    But here’s the paradox: refusal carries risks, too.

When services reject technology outright, the cost is often borne by the people who need those services. Underfunded systems mean unanswered calls, long waitlists, or inconsistent support. A survivor reaching out may find no one available. A pattern of escalating danger may go unflagged. A stand against automation, framed as protection, can sometimes create neglect.

This doesn’t mean AI should be a source of empathy or become the first point of contact in a crisis. It means resistance isn’t neutral. Saying no doesn’t freeze time; it shapes outcomes. And those outcomes matter most for the people already living in danger.

So the harder question isn’t “should we automate or not?” It’s “what risks do we create when we refuse?”

That’s the hidden cost this blog will keep coming back to: resistance intended to safeguard can unintentionally leave survivors with less.

  • AI Isn’t Neutral. Neither Is Resistance.

We assume that technology is neutral and doesn’t discriminate, that it’s just a bunch of zeroes and ones. We also like to tell ourselves that resistance to technology is always the safer choice, and that refusing to use AI affords us the ethical high ground.

    Both stories are myths.

AI isn’t neutral. It’s built on data that mirror human bias. Too often, it is designed with priorities that give the privileged a leg up while deprioritizing survivors’ unique needs. Algorithms are deployed in systems where power already tilts against the vulnerable, further perpetuating cultural inequities. A chatbot might answer faster than a human, but it might also miss important context, fail to recognize urgency, or reduce someone’s pain to text on a screen.

But resistance isn’t neutral either. When services reject technology outright, survivors pay the cost: longer waitlists, unanswered calls, or the loss of tools some of them actually prefer. Refusal can protect dignity, but it can also silence choice.

    Here’s the real tension: both automation and refusal are decisions with consequences. Neither is neutral. Neither is risk-free.

    So the question isn’t whether AI is good or bad. It’s how we navigate the double risk, and whose voices guide that navigation. Survivors don’t need neutrality. They need agency.

    That’s the work this blog is here to do: pull the debate out of binaries, and put choice back at the center.

  • Would You Trust a Chatbot with Your First Disclosure?

    It’s 2 a.m. You’re sitting in the dark, phone in hand, heart pounding. You want to tell someone what’s been happening, but you’re not sure if you’re ready to speak to a stranger just yet. Thoughts of being judged, misunderstood, and not believed give you pause.

    Instead of a hotline operator, you see a blinking cursor on a chatbot screen. Would you trust it with your first disclosure?

    For some survivors, the answer is a clear no. A chatbot does not have empathy. It feels cold and unsafe. Trust, once broken, is hard to rebuild. And for many, automation feels like a betrayal before it even begins.

But for others, the answer is yes, or at least, why not? A chatbot offers complete anonymity, holds no judgment, and can be paused or deleted at any time. For survivors afraid of being recognized, judged, dismissed, or pressured, a chatbot can feel like a safe step.

    Both truths exist.

    This is the paradox we can’t ignore: trust doesn’t look the same for every survivor. For one person, trust means human empathy. For another, it means a predictable system that doesn’t flinch or push back. To force survivors into one model is to erase what little agency they may have.

The real ethical failure isn’t in using AI or refusing to use it. It’s in taking that choice away from survivors.

    If we want survivor services to be worthy of trust, we need to ask a different question: not should we automate or resist, but who gets to decide?

    Here at ChoiceNotCode.com, I argue that survivors should.