AI Isn’t Neutral. Neither Is Resistance.

We like to assume that technology is neutral, that it doesn't discriminate. It's just a bunch of zeroes and ones. We also like to tell ourselves that resistance to technology is always the safer choice, and that refusing to use AI affords us the ethical high ground.

Both stories are myths.

AI isn't neutral. It's built on data that mirror human bias. It is often designed with priorities that give the privileged a leg up while deprioritizing the unique needs of survivors. Algorithms are deployed in systems where power already tilts against the vulnerable, further entrenching cultural inequities. A chatbot might answer faster than a human, but it might also miss important context, fail to recognize urgency, or reduce someone's pain to text on a screen.

But resistance isn’t neutral either. When services reject technology outright, survivors pay the cost: longer waitlists, unanswered calls, or denial of tools that some survivors actually prefer. Refusal can protect dignity — but it can also silence choice.

Here’s the real tension: both automation and refusal are decisions with consequences. Neither is neutral. Neither is risk-free.

So the question isn’t whether AI is good or bad. It’s how we navigate the double risk, and whose voices guide that navigation. Survivors don’t need neutrality. They need agency.

That’s the work this blog is here to do: pull the debate out of binaries, and put choice back at the center.
