Meanwhile, for those already in the US, various initiatives have sprung up to turn the gaze back onto immigration enforcement as the Trump administration continues its brutal crackdown on people on the move and on migrant justice communities. In March last year, the TurnLeft Political Action Committee launched its Resist Map, an open-source project that it hopes will create a live nationwide registry of ICE activity by enabling communities to monitor and track ICE via text updates and a national map.
The search and rescue space is also responding with its own technologies. Sea-Watch, a German non-profit that patrols the Mediterranean to rescue people in distress whom the authorities have left to drown, has teamed up with SearchWing, which builds drones to help NGOs spot people in need of help. Search-and-rescue teams have even set up their own satellites to share information safely, literally circumventing the terrestrial telecommunications grid.
While these efforts are commendable, poet Audre Lorde reminds us that “the master’s tools will never dismantle the master’s house”. Current technologies are no panacea for freedom; they limit us to what already exists, rather than encouraging us to see what else is possible. But as Leanne Betasamosake Simpson, an author, musician, and academic from the Mississauga Nishnaabeg group of First Nation peoples in Canada, counters: “I am not so concerned with how we dismantle the master’s house, that is, which set of theories we used to critique colonialism; but I am very concerned with how we (re)build our own houses.”
Betasamosake Simpson’s approach, which she details in Dancing on Our Turtle’s Back: Stories of Nishnaabeg Re-Creation, Resurgence, and a New Emergence, champions the resurgence of the knowledges of those whom Italian philosopher Antonio Gramsci termed the “subaltern”. These are the knowledge systems and perspectives of marginalised or oppressed groups, often those excluded from hegemonic systems of knowledge production.
Indigenous academics are introducing conceptions of AI that alter and enhance our understanding of reality, such as research by Megan Kelleher, from the Barada and Gabalbara people of Central Queensland in Australia, into whether and how Indigenous protocols can inform AI’s design. Elsewhere, Michael Running Wolf, a McGill University computer scientist from the Northern Cheyenne Nation, created the Lakota AI Code Camp to train Indigenous youth in data science and AI. Running Wolf also champions Indigenous data sovereignty, similar to the goals of Canada’s First Nations Information Governance Centre, which aims to ensure that data gathering is ethical and that First Nations communities are empowered to use their data for their own needs, so that “every First Nation will achieve data sovereignty in alignment with its distinct world view… Our Data. Our Stories. Our Futures.”
Others are finding ways to fight back using the master’s ultimate tool: the law. Laws have long been used to oppress marginalised groups, concretising particular ideas about who belongs and who does not. Like a solid minority of the profession, I am a bit of a reluctant lawyer: I have always struggled with the fact that the law is a hegemonic tool in and of itself, built on solidified categories such as ‘refugee’ vs ‘immigrant’ vs ‘expat’, with little space left for the messiness of the human experience. And while international law can help maintain a common standard, states’ ratification of conventions often amounts to nothing more than performance on the global stage.
Despite these limitations, existing domestic laws can sometimes be stretched and expanded in novel ways, for example by forcing states to rethink privacy legislation and the implications of growing surveillance for people’s data protection rights. International norms can also be pushed to hold perpetrators of technological harm to account, as when a UN report last year recognised that AI has played a major part in Israel’s targeting of civilians in the genocide in Gaza.
Tech bros often lament that regulation stifles innovation, but some innovation should be stifled – especially when it hurts real people. Yet when it comes to technology, regulation continues to lag behind, and is even being actively rolled back as private sector actors increase their influence over policymaking, as seen in X owner Elon Musk’s stint in the Oval Office and Facebook founder Mark Zuckerberg’s lobbying of European politicians in Brussels. It’s no wonder that, in the name of protecting innovation, the EU’s long-awaited act to regulate AI does not go nearly far enough to protect people’s rights, nor that the White House has signalled absolutely no appetite to regulate even the most harmful of technologies...