Refusing AI feels like resistance. It’s a way of saying: survivors are not data points, and empathy cannot be coded. Resistance is an act of care, a stand for dignity in systems that too often forget it.
But resistance is not risk-free.
When refusal hardens into policy and services ban automation outright, it can leave survivors waiting, silenced, or without access to the very tools they prefer. A decision that starts as protection can end as neglect.
This is the contradiction at the heart of refusal: it can defend survivors, and it can harm them. Both can be true at once.
The real challenge is not deciding whether to resist or adopt AI wholesale. It’s learning how to resist harmful uses of AI without closing the door on survivor choice.
Refusal as resistance is necessary. Refusal as risk is dangerous. The difference comes down to one principle: choice must stay in survivors’ hands.