Imagine a not-so-distant future, when automated bots appraise refugees’ stories about their own lives, probing whether their marriages are real, their children are their own, or whether they pose a security threat. Then imagine these artificial intelligence arbiters meting out inscrutable rulings that push people out of Canada and back to precarious lives in their home countries, where they may face war, oppressive regimes or persecution.
It’s a dystopian scenario newcomers could one day face here, according to Petra Molnar, a Toronto human rights and refugee lawyer who has been steadily shining a light on the more troubling realities of this country’s immigration system. In September, Ms. Molnar co-authored a pivotal report on the ethical perils of Canada’s plans to use artificial intelligence to help vet immigrant and refugee claims.
Ms. Molnar is sounding an urgent alarm. Though technology is often viewed as impartial, it’s anything but, the lawyer argues. Discrimination, bias and violations of due process and privacy are just the tip of the iceberg with unchecked AI assisting or replacing the judgment of human decision-makers in the immigration sphere.
“These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives,” read the 88-page report, a joint project between the International Human Rights Program at the University of Toronto’s Faculty of Law and the Citizen Lab at the Munk School of Global Affairs and Public Policy.
As an immigrant who stared down her own difficult circumstances, Ms. Molnar finds herself feeling personally invested in helping people rebuild their lives in Canada. Her parents immigrated to Winnipeg from the Czech Republic in 2000. Family turmoil, including domestic violence, nearly derailed her education.
“My whole childhood was punctuated by really difficult family relationships,” said Ms. Molnar, whose father left her mother 10 years ago. After forfeiting a University of Toronto scholarship to help her single mother back in Winnipeg, Ms. Molnar eventually became the first lawyer in her family.
“A lot of these issues are personal,” said Ms. Molnar, who articled at Toronto’s Barbra Schlifer Commemorative Clinic, which aids women who have experienced violence. She worked with refugee women who were struggling with trauma and precarious housing and employment as they escaped spouses threatening them and their children with harm.
The work with refugee women and the groundbreaking AI immigration screening research both ignite her “fire” for protecting human rights, Ms. Molnar said.
This spring, in an attempt to deal with a backlog, the federal government piloted an artificial intelligence program to assist with immigration applications made on humanitarian and compassionate grounds – processes for people who often believe they will face harm back home. The use of AI with such immigrants is “a laboratory for high-risk experiments within an already highly discretionary system,” reads Ms. Molnar’s report, co-authored by Citizen Lab research fellow Lex Gill.
Canadians need to shed longstanding myths about artificial intelligence before turning to it for such dire work, Ms. Molnar argues. We often falsely assume that technology is mechanical and objective, even though its algorithms are designed by human beings who hold various biases. This can include prejudiced views about how people look, which religion they practice and where they travel.
There is also a mistaken belief that technology can read people better than people can, even though it is non-sentient and prone to system error. Given that AI technologies are in their infancy, Ms. Molnar warns that they may be too oversimplified to offer nuanced appraisals of people in complex, high-risk situations.
Ms. Molnar has met with government officials to call for transparency and accountability. The lawyer wants to see the creation of an independent task force to ensure the technologies fall within domestic and international human rights laws. She’s urging a freeze on the rollout of such systems until standards, safeguards and robust appeals processes are in place.
In the field of human rights law, advocacy usually arises after people are violated. The AI work is unique because it looks to prevent future harms. “This was uncharted territory. There was no meaningful focus on this before [Ms. Molnar’s] report,” said Samer Muscati, director of the International Human Rights Program.
Prior to the AI research, Ms. Molnar had been working on migrants’ rights for a decade, on the front lines near the Syrian-Turkish border and closer to home, helping resettle refugees in Toronto. Mr. Muscati recalls her sensitivity working with undocumented migrant workers from the Philippines. “She’s able to do very delicate interviews on tough issues,” Mr. Muscati said. “One of the challenging parts of this job is to be able to have these types of relationships with people you just met and to win over their trust. You can only do that if you’re a genuine person who has humility.”
Today, alongside the work on AI-assisted border controls, Ms. Molnar is also investigating the trauma of immigration detention centres, where thousands of migrants and asylum seekers are held each year in this country. For refugees escaping from conflict zones and dislocated from home, Ms. Molnar sees long-term mental health harms in detention. “It stays with people,” she said.
The lawyer’s fears about immigration detention and automated border technology echo graver concerns. Ms. Molnar is uneasy about what she views as a resurgence of xenophobia, especially around ideas of “old stock Canadians” versus “others” in this country.
“Canada needs to look at how we are thinking through these issues and why every couple of years, we are falling back on these tropes of being ‘overrun by migrants,’” Ms. Molnar said. “At the end of the day, if you’re not Indigenous, we’re all newcomers. We just arrived at different times.”