AI systems carry embedded worldviews. This isn’t a metaphor—it’s architecture. They inherit Western epistemologies, English-language biases, individualistic assumptions, capitalist logics, hetero-normative defaults, racialized categories, settler-colonial knowledge priorities, Silicon Valley’s belief in optimization and speed, and a particular moral grammar about what counts as “rational,” “true,” or “good.”
This is a conversation with Farhan Samir that began in November as we graduated from UBC. Talking with him, I felt something familiar, something that sits at the centre of my own work on conversation, polarization, and the power of language. Our conversation reminds me how much complexity lies beneath even the simplest act of interpretation, and how rarely we pay attention to it.