What happens when countries use AI to manage immigration? Some cases from the past decade show that it can violate human dignity—and that humans will always need to be closely involved in the process. 

That’s according to experts who spoke at a March 11 Fordham event. Governments are increasingly relying on AI and machine learning to handle visa applications, refugee claims, naturalization requests, and the like—raising concerns that citizenship could become commodified, said Kevin Jackson, Ph.D., professor of law and ethics in the Gabelli School of Business. 

AI Could Make Immigration More Transactional 

AI-based systems tend to be transactional and “prioritize applicants who can maximize economic utility for a nation-state,” he said. “Are we seeing a fundamental shift in the meaning of citizenship and the moral worth of individuals due to the rise of AI?”

Kevin Jackson and Emma Foley

He and his research assistant, Emma Foley, a Gabelli School graduate student, presented two ethics case studies. In the United Kingdom, an AI system for screening visa applicants reflected historical pro-Western bias and discriminated against people from Africa, Asia, and the Middle East, reinforcing racial and economic disparities in global mobility, Foley said. That system was suspended about five years ago after legal challenges. 

And an AI-powered initiative of the U.S. Department of Homeland Security (DHS), proposed in 2017, drew criticism for its “extreme vetting” of immigrants in America, monitoring everything from social media use and employment records to religious affiliations, Jackson said. 

The project, also dropped following legal challenges, “highlights how AI-driven immigration systems can redefine the moral worth of migrants by preemptively classifying them as threats on one hand or as assets on the other hand,” he said. “Making AI immigration decisions open to public scrutiny and to legal appeal are important.” (Today, DHS says it uses AI responsibly across a variety of functions.)

AI, Immigration, and Social Justice

Jackson and Foley spoke at Fordham’s International Conference on Im/migration, AI, and Social Justice, organized in concert with Sophia University in Japan.

Frank Hsu, Clavius Distinguished Professor of Science, spoke about “Detecting and Mitigating Bias: Harnessing Responsible and Trustworthy AI for Social Justice.”

Faculty and graduate students, as well as alumni experts and others, spoke about how AI can enhance immigration processes but also about the potential perils.

Communication professor Gregory Donovan, Ph.D., suggested that AI might be used to provide legal assistance for migrants as they negotiate immigration processes, given the shortage of lawyers to serve them. But even then, “it actually demands more human involvement.” 

“You’re going to need humans who are understanding of how trauma works, who are able to be there culturally and emotionally for someone as they interact with a chatbot to figure out their legal fate,” he said.

Retaining the Human Touch

Another presenter, Sarah Blackmore, LAW ’14, is a senior associate with Fragomen, an immigration services firm. She noted that AI can be helpful in immigration by streamlining administrative work and repetitive tasks like processing immigration applications, freeing up staffers to focus on “the more complex cases that need a human touch.”

That human touch is needed when, for instance, someone’s asylum case could hinge on fine nuances of translation, emotion, and context, she said. “With AI, it’s really important, especially for sensitive things, that there is always this human oversight,” she said. 

She was answering a question by Carey Kasten, Ph.D., professor of Spanish, who noted that “so much of immigration law and asylum laws … have to do with the way you tell your story.” 

‘I Am Afraid’

A key element in those stories is fear—particularly, fear of gender-based violence, “one of the main factors pushing people out of their countries,” said Marciana Popescu, Ph.D., professor in the Graduate School of Social Service and co-director of Her Migrant Hub, an online information hub for women seeking asylum. Women are nearly half the population of globally displaced people, and 40% to 46% are under 18, she said during her own presentation. 

In her own work with migrants, the three most common words she has heard, she said, are “I am afraid.” She ended with a plea: “I am asking you, dear colleagues, that are looking into AI—think of AI as a tool that can expand sanctuary. This comes from the voices of the women, because it is [their stories that matter] most.”

Marciana Popescu speaking during the closing panel

Chris Gosier is research news director for Fordham Now. He can be reached at (646) 312-8267 or [email protected].