AI – Fordham Now
https://now.fordham.edu
The official news site for Fordham University.

How Should AI Be Used in Immigration? Cautiously, Experts Say
https://now.fordham.edu/university-news/how-should-ai-be-used-in-immigration-cautiously-experts-say/
March 13, 2025

What happens when countries use AI to manage immigration? Some cases from the past decade show that it can violate human dignity—and that humans will always need to be closely involved in the process.

That’s according to experts who spoke at a March 11 Fordham event. Governments are increasingly relying on AI and machine learning to handle visa applications, refugee claims, naturalization requests, and the like—raising concerns that citizenship could become commodified, said Kevin Jackson, Ph.D., professor of law and ethics in the Gabelli School of Business. 

AI Could Make Immigration More Transactional 

AI-based systems tend to be transactional and “prioritize applicants who can maximize economic utility for a nation-state,” he said. “Are we seeing a fundamental shift in the meaning of citizenship and the moral worth of individuals due to the rise of AI?”

Kevin Jackson and Emma Foley

He and his research assistant, Emma Foley, a Gabelli School graduate student, presented two ethics case studies: In the United Kingdom, an AI system for screening visa applicants reflected past pro-Western bias and discriminated against people from Africa, Asia, and the Middle East, reinforcing racial and economic disparities in global mobility, Foley said. That system was suspended about five years ago after legal challenges. 

And an AI-powered initiative of the U.S. Department of Homeland Security (DHS), proposed in 2017, drew criticism for its “extreme vetting” of immigrants in America, monitoring everything from social media use and employment records to religious affiliations, Jackson said. 

The project, also dropped following legal challenges, “highlights how AI-driven immigration systems can redefine the moral worth of migrants by preemptively classifying them as threats on one hand or as assets on the other hand,” he said. “Making AI immigration decisions open to public scrutiny and to legal appeal is important.” (Today, DHS says it uses AI responsibly across a variety of functions.)

AI, Immigration, and Social Justice

Jackson and Foley spoke at Fordham’s International Conference on Im/migration, AI, and Social Justice, organized in concert with Sophia University in Japan and held at Fordham.

Frank Hsu, Clavius Distinguished Professor of Science, spoke about “Detecting and Mitigating Bias: Harnessing Responsible and Trustworthy AI for Social Justice.”

Faculty and graduate students, as well as alumni experts and others, spoke about how AI can enhance immigration processes but also about the potential perils.

Communication professor Gregory Donovan, Ph.D., suggested that AI might be used to provide legal assistance for migrants as they navigate immigration processes, given the shortage of lawyers available to serve them. But even then, he said, “it actually demands more human involvement.”

“You’re going to need humans who are understanding of how trauma works, who are able to be there culturally and emotionally for someone as they interact with a chatbot to figure out their legal fate,” he said.

Retaining the Human Touch

Another presenter, Sarah Blackmore, LAW ’14, is a senior associate with Fragomen, an immigration services firm. She noted that AI can be helpful in immigration by streamlining administrative work and repetitive tasks like processing immigration applications, freeing up staffers to focus on “the more complex cases that need a human touch.”

That human touch is needed when, for instance, someone’s asylum case could hinge on fine nuances of translation and emotion and context, she said. “With AI, it’s really important, especially for sensitive things, that there is always this human oversight,” she said. 

She was answering a question from Carey Kasten, Ph.D., professor of Spanish, who noted that “so much of immigration law and asylum laws … have to do with the way you tell your story.” 

‘I Am Afraid’

A key element in those stories is fear—particularly fear of gender-based violence, “one of the main factors pushing people out of their countries,” said Marciana Popescu, Ph.D., professor in the Graduate School of Social Service and co-director of Her Migrant Hub, an online information hub for women seeking asylum. Women make up nearly half of the world’s displaced population, and 40% to 46% are under 18, she said during her presentation.

In her own work with migrants, the three most common words she has heard, she said, are “I am afraid.” She ended with a plea: “I am asking you, dear colleagues, that are looking into AI—think of AI as a tool that can expand sanctuary. This comes from the voices of the women, because it is [their stories that matter] most.”

Marciana Popescu speaking during the closing panel
Does AI Show Empathy? It Depends on Your Gender, Study Shows
https://now.fordham.edu/science-and-technology/does-ai-show-empathy-it-depends-on-your-gender-study-shows/
March 5, 2025

AI is a new technology that reflects age-old human biases—including stereotypes about men and women and how much empathy people of each gender need. That’s according to a preliminary study co-authored by Jie Ren, Ph.D., a Gabelli School of Business professor specializing in information, technology, and operations.

ChatGPT: Less Empathy for Men

She and her co-authors found that self-identified men will likely receive less empathetic responses, compared to women, when they type their mental health concerns into AI platforms like ChatGPT. It’s one example of how “human biases or stereotypical impressions are inevitably fitted into the training data” that AI models are based on, Ren said.

The study is one of the few in the nascent area of gender, technology, and mental health. It comes as AI is moving beyond business-related uses and increasingly entering the interpersonal sphere—for instance, serving as a virtual confidante providing pick-me-up comments and a dash of empathy when needed.

An Easy Avenue of Support

Sometimes seeking support from an AI chatbot like ChatGPT is more appealing than speaking to family or friends because “they could be the source of the anxiety and pressure,” Ren said, and seeking professional therapy may be taboo or unaffordable.

At the same time, she noted AI’s potential to “backfire” and worsen someone’s mental state. For the study, said Ren, “we wanted to see whether or not AI can actually be helpful to people who are really struggling mentally … and be part of the solution,” and they chose potential gender bias as their starting point. 

Analyzing AI for Empathy

Titled “Unveiling Gender Dynamics for Mental Health Posts in Social Media and Generative Artificial Intelligence,” the study was published in January in the proceedings of the 58th Hawaii International Conference on System Sciences.

Ren co-authored the research with business scholars at the University of Richmond and Baylor University, and she’ll present it on Monday at Fordham’s International Conference on Im/migration, AI, and Social Justice, seeking audience feedback to help prepare it for publication in a business journal.

The researchers analyzed 434 mental health-related messages posted on Reddit, in its subreddits for mental health, mental illness, suicide, and self-harm. They included posts by self-identified men and women and those who specified no gender.

Jie Ren presenting at Fordham’s Data Science Symposium last spring. Photo by Chris Gosier

The researchers fed those posts into three AI platforms—ChatGPT, Inflection Pi, and Bard (now Google Gemini)—and then used a machine learning system to analyze the bots’ responses for their level of empathy. They also included other people’s posted responses to the Reddit messages to have a point of comparison.

The combined results show that women’s posts received more empathy than those by men or people of unspecified gender across all platforms—from AI and from people responding on Reddit.
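
The comparison at the heart of the study can be illustrated with a short sketch. The snippet below assumes each chatbot response has already been given a numeric empathy score; the scores, group labels, and sample sizes are invented placeholders, not data from Ren's study.

```python
from statistics import mean

# Hypothetical empathy scores (0 = none, 1 = highly empathetic) assigned to
# chatbot responses, grouped by the poster's self-identified gender.
# These values are illustrative placeholders, not results from the study.
scores_by_gender = {
    "women": [0.72, 0.68, 0.75, 0.70],
    "men": [0.55, 0.61, 0.58, 0.60],
    "unspecified": [0.57, 0.63, 0.59, 0.62],
}

for group, scores in scores_by_gender.items():
    print(f"{group:>12}: mean empathy = {mean(scores):.2f} (n={len(scores)})")
```

In a real pipeline, the scores themselves would come from a trained empathy classifier applied to each response, as the researchers describe.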

Purging Bias from AI

Eradicating such bias, she said, is a matter of carefully selecting the data used to train AI models, as well as having moderators—either human or virtual—who keep an eye out for biases creeping into the system.

“Many younger people, like minors, are using it, because [technology] is their comfort zone,” showing the need for regulation, she said.

Any empathy provided by AI is “clearly different from how trained medical professionals provide empathy in face-to-face settings,” the authors write. But AI technologies can at least provide temporary comfort to those who are struggling, the study says.

“Regardless of gender, everyone wants to be seen, everyone wants to be understood,” Ren said. “So we are looking at the very basic form of that, which is empathy.”

Using Generative AI to Outsmart Cyberattackers Before They Strike
https://now.fordham.edu/science-and-technology/using-generative-ai-to-outsmart-cyber-attackers-before-they-strike/
October 16, 2024

With online threats on the rise around the world, one Fordham professor is working on a potentially revolutionary way to head them off and stay one step ahead of the cybercriminals. And it has a lot to do with the tech that powers everyday programs like ChatGPT.

That tech, called generative AI, holds the key to a new system “that not only anticipates potential attacks but also prepares systems to counteract previously unseen cyberthreats,” said Mohamed Rahouti, Ph.D., assistant professor in the computer and information science department and one of Fordham’s IBM research fellows.

He and a crew of graduate students are working on new systems that, he said, are needed to get ahead of sophisticated attacks that are constantly evolving. Their focus is a type of easy-to-launch attack that has proved crippling to companies and government agencies ever since the internet began.

Denial of Service Attacks

Cybercriminals sometimes overwhelm and freeze a company’s or government agency’s computer systems by bombarding them with far more internet traffic than they can handle, using multiple computers or multiple online accounts. This is known as a distributed denial-of-service attack, or DDoS.

A typical attack could cost a company $22,000 a minute, he said. Nearly 30,000 of them take place every day around the world. Many of them are foiled by programs that use machine learning and artificial intelligence.

But those programs don’t always know what to look for, since they typically rely on snapshots of past traffic, Rahouti said. Another challenge is the growing number of internet-connected devices, from smart watches to autonomous vehicles, that could provide cybercriminals with new avenues for attack.

Generative AI

Hence the research into using generative AI, which could produce a far wider range of possible attack scenarios by building on existing network traffic data to make new connections and predictions, he said. When it’s trained using the scenarios produced by generative AI, “then my machine learning/AI model will be much more capable of detecting the different types of DDoS attacks,” Rahouti said.

Mohamed Rahouti. Photo by Chris Gosier

To realize this vision, Rahouti and his team of graduate students are working on several projects. They recently used generative AI and other techniques to expand upon a snapshot of network traffic data and create a clearer picture of what is and isn’t normal. This helps machine learning programs see what shouldn’t be there. “We were amazed at the quality of this enhanced picture,” Rahouti said.

This bigger dataset enabled their machine learning model to spot low-profile attacks it had previously missed, he said.
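
As a rough sketch of that augment-then-detect workflow, the code below enlarges a small snapshot of "normal" traffic features with perturbed copies and then fits an off-the-shelf anomaly detector. The Gaussian resampling stands in for a true generative model, the feature values are invented, and scikit-learn's IsolationForest is used only as a generic example, not as Rahouti's actual method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# A tiny snapshot of "normal" traffic features: [packets per second, average bytes per packet].
# These values are invented for illustration.
normal = np.array([[120, 800], [150, 760], [135, 820], [110, 790]], dtype=float)

# Stand-in for a generative model: widen the snapshot with perturbed copies so the
# detector sees a broader picture of what "normal" traffic can look like.
augmented = np.vstack([normal + rng.normal(0, [10, 30], normal.shape) for _ in range(50)])

# Train the anomaly detector on the enlarged picture of normal behavior.
detector = IsolationForest(random_state=0).fit(augmented)

# A low-profile flood: elevated packet rate with unusually small packets.
suspect = np.array([[400.0, 200.0]])
print(detector.predict(suspect))  # -1 marks an anomaly, 1 marks an inlier
```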

Large Language Models

For their next project, they’re studying a large language model—the kind that powers ChatGPT—for ideas about how generative AI can be applied to cybersecurity. They’re using InstructLab, an open-source tool launched by IBM and Red Hat in May.

With all the companies and university researchers invested in new uses for generative AI, Rahouti is optimistic about its future applications in cybersecurity. The goal is to develop a system that runs on its own in the background, detecting both existing and emerging threats without being explicitly told what to look for.

“At present, we don’t have a fully autonomous system with these capabilities,” Rahouti said, “but advancements in AI and machine learning are moving us closer to achieving this level of real-time, adaptive cybersecurity.”



Forbes: Gabelli School Expert Says It’s Too Soon To Tell if AI Rewards Are Worth the Risks
https://now.fordham.edu/in-the-media/forbes-gabelli-school-expert-says-its-too-soon-to-tell-if-ai-rewards-are-worth-the-risks/
August 27, 2024

W. Raghupathi, professor of information, technology, and operations, said the benefits of artificial intelligence are still difficult to measure. Read more in “When Will AI’s Rewards Surpass Its Risks?”

“Introducing new technology is always a major challenge in any organization, and AI is pretty complex,” W. Raghupathi, professor at Fordham University’s Gabelli School of Business, told Forbes. “The scale, complexity and difficulty in implementation and deployment, the upgrades, support, etc. are technology-related issues. Further, privacy, security, trust, user and client acceptance are key challenges. Justifying the cost — and we do not have good measurement models — is a major challenge.”

It’s likely too soon even to tell whether the rewards of AI are outweighing the risks, Raghupathi said. “There is a lag between deployment of applications and their impact on the business. Specific applications like low-level automation find success but high-level applications that support strategy are yet to translate into tangible benefits.”

It’s going to take time — perhaps years — “to assess the impact and benefits of complex applications versus simple applications automating specific routine and repetitive tasks,” Raghupathi points out. “Measuring the benefit is new and we do not have benchmarks or quantitative models.”

Reading Philosophy with AI, Salamander Survival, and Reforestation: Grad Students Research Timely Topics
https://now.fordham.edu/colleges-and-schools/graduate-school-of-arts-and-sciences/reading-philosophy-with-ai-salamander-survival-and-reforestation-grad-students-research-timely-topics/
April 23, 2024

In the first event of its kind, students from the Graduate School of Arts and Sciences (GSAS) gathered at the McShane Campus Center on the Rose Hill campus on April 16 to celebrate the research that is a critical part of their master’s and doctoral studies.

“It’s really gratifying to see how many of the projects lean into our identity as a Jesuit institution,” said Ann Gaylin, dean of GSAS, “and strive to advance knowledge in the service of the greater good.”

Students displayed posters on topics that ranged from biology to theology to economics to psychology.

Nina Naghshineh, Ph.D. in Biological Sciences

Topic: The Role of Bacteria in Protecting Salamanders

How would you describe your research?
I study the salamander skin microbiome and how features of bacterial communities provide protection against a fungal pathogen that is decimating amphibian populations globally.

Why does this interest you?
I’m really interested in how microbes interact and function. My study system is this adorable amphibian, but the whole topic is so interesting because microbial communities are so complex and really hard to study. So the field provides many avenues for exploration. These types of associations are present in our guts and on our skin. I’m interested in going into human microbiome work after I graduate, so I have a lot of options available to me because of this research.

Nicholas McIntosh, Ph.D. in Philosophy

Topic: Using AI to Help Scholars Distill Information from a Vast Body of Texts

How would you describe your project?
It’s a digital humanities project that uses natural language processing to help read and understand many texts at once. There’s this vision we have of a really great humanities scholar who is able to know a text so well that they could almost quote it from memory. That is really difficult for us to do right now in the same way we might have when there were only a couple of touchstone classical texts.

What do you hope this will accomplish?
Scholars are scanning texts either for our classes or for our own research. So this would help us figure out, number one, how can you look at a text and be able to recognize—is this text useful for me? Number two, what are the most important concepts that we should be tracking in a text? And number three, what is the text as data telling us that maybe scholarship is overlooking or overemphasizing given traditional readings?

I would also like to show that those of us who do philosophy don’t have to be afraid of these technologies.
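
One simple way to surface the concepts that distinguish each text, in the spirit McIntosh describes, is TF-IDF ranking. The sketch below uses scikit-learn on invented placeholder snippets; a real digital humanities project would work from full digitized texts and far richer natural language processing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder snippets standing in for full philosophical texts.
texts = {
    "Text A": "virtue is a habit of the soul formed through repeated action",
    "Text B": "knowledge begins with doubt and a method of systematic questioning",
    "Text C": "justice in the city mirrors justice in the soul of the individual",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(texts.values())
terms = vectorizer.get_feature_names_out()

# Rank the terms that most distinguish each text within this small corpus.
for title, row in zip(texts, matrix.toarray()):
    top = sorted(zip(terms, row), key=lambda pair: pair[1], reverse=True)[:3]
    print(title, [term for term, score in top if score > 0])
```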

Siphesihle Sitole, Virginia Scherer, and Angel Villamar

Angel Villamar, Siphesihle Sitole, and Virginia Scherer, M.A. in International Political and Economic Development (IPED)

Project name: Climate Mitigation: The Role of a People’s Organization in the Philippines

What were you investigating with this research?
We looked at the role of the grassroots organization Tulungan sa Kabuhayan ng Calawis in climate mitigation. It was formed after Typhoon Ketsana hit in 2009. There is an area right outside of Manila that, over the years, has been deforested, so the organization works to incentivize reforestation. The farmers in the area, who are mostly women, develop the seedlings, do the land preparation, and plant the trees.

What do you hope people learn from this project?
We want to think about reforestation not as a one-time effort but as a long-term, sustainable practice. What incentives do you need so that you can keep doing this? We are showing that you can involve ordinary individuals at the grassroots level in something that is much bigger than them.

Students presented their research throughout the afternoon. Katherine Theiss, left, an economics Ph.D. student, shared findings about the best time to conduct surveys with women affected by intimate partner violence.
Can AI Promote the Greater Good? Student and Faculty Researchers Say Yes
https://now.fordham.edu/university-news/can-ai-can-promote-the-greater-good-student-and-faculty-researchers-say-yes/
April 18, 2024

At a spring symposium, Fordham faculty and students showed how they’re putting data science and artificial intelligence to good use: applying them to numerous research questions related to health, safety, and justice in society.

It’s just the sort of thing that’s supposed to happen at an institution like Fordham, said Dennis Jacobs, Ph.D., provost of the University, in opening remarks.

“Arguably, artificial intelligence is the most revolutionary technology in our lifetime, and it brings boundless opportunity and significant risk,” he said at the University’s second annual data science and AI symposium, held April 11 at the Lincoln Center campus. “Fordham’s mission as a Jesuit university inspires us to seek the greater good in all things, including developing responsible AI to benefit society.”

The theme of the day was “Empowering Society for the Greater Good.” Presenters included faculty and students—both graduate and undergraduate—from roughly a dozen disciplines. Their research ran the gamut: using AI chatbots to promote mental health; enhancing flood awareness in New York City; helping math students learn to write proofs; and monitoring urban air quality, among others.

The event drew 140 people, mostly students and faculty who came to learn more about how AI is advancing research across disciplines at Fordham.

Student Project Enhances Medical Research

Deenan He, a senior at Fordham College at Lincoln Center, presented a new method for helping researchers interpret increasingly vast amounts of data in the search for new medical treatments. In recent years, “the biomedical field has seen an unprecedented surge in the amount of data generated” because of advancing technology, said He, who worked with natural sciences assistant professor Stephen Keeley, Ph.D., on her research.

From Granting Loans to Predicting Criminal Behavior, AI Must Be Fair

Keynote speaker Michael Kearns, Ph.D., a computer and information science professor at the University of Pennsylvania, spoke about bias concerns that arise when AI models are used to decide on consumer loans, the risk of criminals’ recidivism, and other areas. Ensuring fairness requires explicit instructions from developers, he said, but he noted that giving such instructions for one variable—like race, gender, or age—can throw off accuracy in other parts of the model.
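
A basic version of the kind of fairness check Kearns described is a demographic-parity audit: compare a model's approval rates across groups. The sketch below uses invented scores, group labels, and a hypothetical threshold; it illustrates the idea only and does not reproduce any system discussed at the symposium.

```python
from collections import defaultdict

# Hypothetical (group, model score) pairs for loan applicants; invented data.
applicants = [
    ("group_a", 0.81), ("group_a", 0.64), ("group_a", 0.72),
    ("group_b", 0.58), ("group_b", 0.69), ("group_b", 0.47),
]
THRESHOLD = 0.65  # hypothetical cutoff: the model approves scores at or above this

decisions = defaultdict(list)
for group, score in applicants:
    decisions[group].append(score >= THRESHOLD)

# A large gap in approval rates between groups signals a demographic-parity problem.
# Closing it (by moving the threshold or retraining) can cost accuracy elsewhere,
# which is the trade-off Kearns described.
for group, approved in decisions.items():
    print(f"{group}: approval rate = {sum(approved) / len(approved):.0%}")
```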

Yilu Zhou, associate professor at the Gabelli School of Business, presented research on protecting children from inappropriate mobile apps.

Audits of models by outside watchdogs and activists—“a healthy thing,” he said—can lead to improvements in the models’ overall accuracy. “It is interesting to think about whether it might be possible to make this adversarial dynamic between AI activists and machine learning developers less adversarial and more collaborative,” he said.

Another presentation addressed the ethics of using AI in managerial actions like choosing which employees to terminate, potentially keeping them from voicing fairness concerns. “It changes, dramatically, the nature of the action” to use AI for such things, said Carolina Villegas-Galaviz, Ph.D., a visiting research scholar in the Gabelli School of Business, who is working with Miguel Alzola, Ph.D., associate professor of law and ethics at the Gabelli School, on incorporating ethics into AI models.

‘These Students Are Our Future’

In her own remarks, Ann Gaylin, Ph.D., dean of the Graduate School of Arts and Sciences, said, “I find it heartening to see our undergraduate and graduate students engaging in such cutting-edge research so early in their careers.”

“These students are our future,” she said. “They will help us address not just the most pressing problems of today but those of tomorrow as well.”

Keynote speaker Michael Kearns addressing the data science symposium
Building a ‘Security Culture’ with a Human Touch
https://now.fordham.edu/fordham-magazine/building-a-security-culture-with-a-human-touch/
February 7, 2024

As the founder and CEO of RevolutionCyber, a cybersecurity company that helps clients build a “security culture” within their organizations, Juliet Okafor, GSAS ’03, believes that when it comes to minimizing risk, humans—not technology—are the solution.

Okafor discussed this at the 2023 Forever Learning event, At the Intersection of Human and Tech, where several other Fordham alumni also talked about their experiences in fields from journalism to fashion. During her panel, “Open AI and Cybersecurity,” Okafor recalled a lesson from a job she held prior to founding RevolutionCyber. She and her team studied the systemic failures that had made a large cruise ship company vulnerable to cyberattacks. When they spent time on one of the company’s ships, she said, it became clear that the people working there were key to identifying—and preventing—similar attacks in the future.

“The people who gave us the best information were the ones we spent the most time with, whose stories we listened to, who told us when the systems went down, how it made them feel,” Okafor said. The experience made her realize that “we have to start to think more about people and culture and behavior. Everyone was talking about security awareness. I thought, ‘We need to address security culture.’”

Okafor, who served on the GSAS Dean’s Advisory Board, credits her Fordham graduate degree in communications with helping her focus on the intersection between technology, business, and workplace culture.

“The future of cyber security is quintessentially human,” she wrote in a LinkedIn post. “As such, I truly believe cybersecurity requires a lifestyle change that we will all come to embrace as a regular part of life.”

Helping companies and people make that change is her aim with RevolutionCyber, which offers personalized employee training sessions, end-to-end assistance with cybersecurity program design and execution, and ongoing assessment options. During her presentation, she explained that AI technology can help identify safe versus malicious behaviors by cross-comparing environments, allowing organizations to build a deeper knowledge base, respond to incidents faster, and develop more secure software.

But, she said, human concerns must always take precedence when using AI—or any technology—an approach at the heart of many of the cybersecurity programs at Fordham.

“We have to think about the humanity that is impacted by the deploying of technology. We can’t stop the AI from coming. We just have to be ready, and we need to always consider how it impacts our lives and the people around us.”

The 2024 Forever Learning event, Curating Curiosity, will take place on March 9, and you can register now.

AI-Generated Movies? Just Give It Time
https://now.fordham.edu/arts-and-culture/ai-generated-movies-just-give-it-time/
January 31, 2024

When the Writers Guild of America went on strike over the summer of 2023, one of their major grievances was the use of AI in television and movies.

A presentation at Fordham’s cybersecurity conference last month helped illustrate why.

“When I asked the CEO of a major movie company recently, ‘What’s the craziest thing you can imagine will happen in the next two to three years?’ he said, ‘We will have a full cinematic feature starring zero actors, zero cinematography, zero lighting, and zero set design,’” said Josh Wolfe, co-founder and managing director of Lux Capital, at a keynote speech on Jan. 10.

“It will all be generated.”

As an example, Wolfe, whose firm invests in new technologies, screened a fan-made movie trailer that used AI to imagine what Star Wars would look like if it had been directed by Wes Anderson.

A Threat to Storytelling

James Jennewein, a senior lecturer in Fordham’s Department of Communication and Media Studies whose film-producing credits include Major League II, Getting Even with Dad, and Stay Tuned, said the prospect of AI-powered screenwriting is deeply concerning.

He called storytelling “soul nourishment” that teaches us what it means to be human.

“We’re still watching films and reading books from people who died centuries ago, and there’s something magical about an artist digging into their soul to find some kind of truth or find a unique way to express an old truth, to represent it to the culture, and I don’t think that AI is going to help make that happen more,” he said.

In many ways, AI has already infiltrated movies and TV; major crowd scenes in the show Ted Lasso were created using AI tools, for example. Last summer, the makers of Indiana Jones and the Dial of Destiny used AI to make the nearly 80-year-old Harrison Ford look like he was in his 20s.

The ability to use fewer actors in a crowd scene is obviously concerning to actors, but Jennewein said the strike was about more than just saving jobs: it’s about protecting creativity.

“We don’t want AI to create the illusion that something is original when it really is just a mashup of things that have been created before,” he said.

“Flesh-and-Blood” Films Coexisting with AI

Paul Levinson, Ph.D., a professor of communications, has seen firsthand what AI can do to his own image and voice. A 2010 interview he did was recently altered by the journalist who conducted it to make it appear as if Levinson were speaking in Hindi. But he is less concerned about AI taking over the industry.

He noted that when The Birth of a Nation was first screened in 1915, it was predicted that movies would kill off live theater.

Paul Levinson

Levinson predicted that in the future, the majority of what we watch will be AI-generated, but there will still be films made with live human actors. Just as live theater coexists with movies today, traditional movies will coexist with AI-generated content.

“I think we are going eventually to evolve into a situation where people aren’t going to care that much about whether or not it’s an AI-generated image or a real person,” he said.

Levinson acknowledged that AI could inflict real harm on the livelihood of actors and screenwriters, but said an equally important concern is whether those who work with AI tools get the credit they deserve.

“I’m sure people are going to think I’m out of my mind, but I don’t see a difference, ultimately, between a director who is directing actors in person and somebody who understands a sophisticated AI program well enough to be able to put together a feature-length movie,” he said.

“What could ultimately happen as AI-made films become more popular is that films made with real flesh-and-blood actors will advertise themselves as such, and they’ll try to do things that maybe AI can’t quite yet do, just to push the envelope.”

In Major Election Year, Fighting Against Deepfakes and Other Misinformation
https://now.fordham.edu/politics-and-society/in-major-election-year-fighting-against-deepfakes-and-other-misinformation/
January 24, 2024

With more than 50 countries holding national elections in 2024, information will be as important to protect as any other asset, according to cybersecurity experts.

And misinformation, they said, has the potential to do enormous damage.

“It’s a threat because what you’re trying to do is educate the citizenry about who would make the best leader for the future,” said Karen Greenberg, head of Fordham’s Center on National Security.

Greenberg, the author of Subtle Tools: The Dismantling of American Democracy from the War on Terror to Donald Trump (Princeton University Press, 2021), is currently co-editing the book Our Nation at Risk: Election Integrity as a National Security Issue, which will be published in July by NYU Press.

“You do want citizens to think there is a way to know what is real, and that’s the thing I think we’re struggling with,” she said.

At the International Conference on Cyber Security held at Fordham earlier this month, FBI Director Chris Wray and NSA Director Gen. Paul Nakasone spoke about the possibility of misinformation leading to chaos around the U.S. election in a fireside chat with NPR’s Mary Louise Kelly. But politics was also a theme in other ICCS sessions.

Anthony Ferrante, FCRH ‘01, GSAS ‘04, global head of cybersecurity for the management consulting firm FTI, predicted this year would be like no other, in part because of how easy artificial intelligence makes it to create false—but realistic—audio, video, and images, sometimes known as deepfakes.

Alexander H. Southwell, Sean Newell, Anthony J. Ferrante, and Alexander Marquardt spoke at the ICCS panel discussion “A U.S. Election, Conflicts Overseas, Deepfakes, and More … Are You Ready for 2024?”
Photo by Hector Martinez

The Deepfake Defense

“I think we should buckle up. I think we’re only seeing the tip of the iceberg, and that AI is going to change everything we do,” Ferrante said.

In another session, John Miller, chief law enforcement and intelligence analyst for CNN, said major news outlets are acutely aware of the danger of sharing deepfakes with viewers.

“We spend a lot of time on CNN getting some piece of dynamite with a fuse burning on it that’s really hot news, and we say, ‘Before we go with this, we really have to vet our way backward and make sure this is real,’” he said.

He noted that if former President Donald Trump were caught on tape bragging about sexually assaulting women, as he was in 2016, he would probably respond differently today.

“Rather than try to defend that statement as locker room talk, he would have simply said, ‘That’s the craziest thing anybody ever said; that’s a deepfake,’” he said.

In fact, this month, political operative Roger Stone claimed this very defense when it was revealed that the FBI is investigating remarks he made calling for the deaths of two Democratic lawmakers. And on Monday, it was reported that days before their presidential primary election, voters in New Hampshire received robocall messages in a voice most likely artificially generated to impersonate President Biden, urging them not to vote in the election.

CNN’s John Miller was interviewed by Armando Nuñez, chairman of Fordham’s Board of Trustees, at a fireside chat, “Impactful Discourse: The Media and Cyber.” Photo by Hector Martinez

A Reason for Hope

In spite of this, Greenberg is optimistic that forensic tools that can weed out fakes will continue to be developed, and that they will contribute to people’s trust in their news sources.

“We have a lot of incredibly sophisticated people in the United States and elsewhere who understand the risks and know how to work together, and the ways in which the public sector and private sector have been able to share best practices give me hope,” she said.

“I’m hopeful we’re moving toward a conversation in which we can understand the threat and appreciate the ways in which we are protected.”

Need the Latest Research for Your Course Curriculum? AI Can Help
https://now.fordham.edu/science/need-the-latest-research-for-your-course-curriculum-ai-can-help/
January 22, 2024

One of the biggest challenges professors face in creating their course curriculum is making sure they include the latest and most relevant research in their fields.

That’s why Michelle Rufrano, an adjunct sociology professor, decided to plan her upcoming course a little differently this time—by using a new AI tool.

Rufrano is the CEO of CShell Health, a media technology company that aims to curate health information and use it to help create social change. She worked with her business partner, Jean-Ezra Yeung, a data scientist with a master’s in public health, to develop an augmented intelligence tool that can sift through hundreds of thousands of research articles and synthesize them into various themes.

Rufrano recently used the tool to plan her Coming of Age: Adulthood course at Fordham, sourcing readings from scholarly articles available on PubMed, an online biomedical literature database. The tool organized those articles into knowledge graphs—or geometric visualizations that map out correlations and topics that are most present in the research, without a professor having to manually sort through article titles and abstracts.
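
A knowledge graph of this kind can be sketched as a co-occurrence network: articles contribute edges between the keywords they share. The example below uses the networkx library with invented articles and keywords; it is a simplified illustration of the idea, not the CShell tool's actual method.

```python
import itertools
import networkx as nx

# Hypothetical articles with pre-extracted keywords; placeholders, not PubMed output.
articles = {
    "article_1": ["adulthood", "identity", "employment"],
    "article_2": ["adulthood", "mental health", "identity"],
    "article_3": ["employment", "housing", "mental health"],
}

graph = nx.Graph()
for keywords in articles.values():
    # Link every pair of keywords that appear in the same article, accumulating
    # an edge weight each time the pair co-occurs again.
    for a, b in itertools.combinations(sorted(set(keywords)), 2):
        weight = graph.get_edge_data(a, b, default={"weight": 0})["weight"]
        graph.add_edge(a, b, weight=weight + 1)

# The most connected terms point to the themes worth a place on the syllabus.
print(sorted(graph.degree, key=lambda node_deg: node_deg[1], reverse=True))
```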

According to Rufrano, this method allowed her to plan her curriculum and readings much more efficiently.

“It cuts the research time in half,” Rufrano said. “That kind of document review would usually take me about four months of looking through all of that data. It’s down to about two weeks.”

Rufrano’s course explores life course theory, which aims to analyze the structural, social, and cultural contexts that shape human behavior from birth to death. Because the field is relatively specialized, Rufrano said it can be challenging to find materials, particularly those that include the most recent research. She said their AI tool is uniquely suited to solve this problem.

“I would have never found some of these studies that came up in the knowledge graphs, because they were published last month, and just would have probably escaped the regular search engines,” Rufrano said. “You would have had to put in some very specific language that you wouldn’t have necessarily known to use.”

Rufrano said it is crucial that students are exposed to a mix of current research in addition to classical works when preparing to enter careers in the field.

“That is so valuable for students who are going into a very volatile workforce. They need to have this very up-to-date information,” she said.

Future Uses for the AI Tool

Rufrano and Yeung met while studying for a master’s in public health, and went on to form CShell Health, which uses augmented intelligence to reframe consumer health information and make it more accessible. The course planning model was an early experiment in what they hope will be a total reimagining of public health literacy.

“We can address really salient issues like how institutional discrimination is embedded in language,” Rufrano said. “If we can see the vulnerabilities in the data, then we can correct for the bias in the research. That’s my dream for the company.”

Hackers Use AI to Improve English, Says NSA Official
https://now.fordham.edu/university-news/hackers-use-ai-to-improve-english-says-nsa-official/
January 10, 2024

From “hacktivists” backed by foreign governments to the advantages and perils of artificial intelligence, National Security Agency (NSA) Director of Cybersecurity Rob Joyce highlighted three areas of focus in the cybersecurity field at the 10th International Conference on Cyber Security, held at Fordham on Jan. 9.

Better English-Language Outreach

The use of artificial intelligence is both a pro and a con for law enforcement, Joyce said.

“One of the first things [bad actors are] doing is they’re just generating better English language outreach to their victims [using AI]—whether it’s phishing emails or something more elaborate,” he said. “The second thing we’re starting to see is … less capable people use artificial intelligence to guide their hacking operations to make them better at the technical aspect of a hack.”

But Joyce said that “in the near term,” AI is “absolutely an advantage for the defense,” as law enforcement officials are using AI to get “better at finding malicious activity.”

For example, he said that the NSA has been watching Chinese officials attempt to disrupt critical infrastructure, such as pipelines and transportation systems, in the United States.

“They’re not using traditional malware, so there’s not the things that the antivirus flags,” Joyce said.

Instead, he said they’re “using flaws” in a system’s design to take over or create accounts that appear authorized.

“But machine learning AI helps us surface those activities because those accounts don’t behave like the normal business operators,” Joyce said.
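
The underlying idea, behavioral baselining, can be sketched simply: profile what normal account activity looks like, then flag accounts that deviate sharply from it. The feature (logins per day), the data, and the cutoff below are invented for illustration; actual tooling of the kind Joyce describes is far more sophisticated and not public.

```python
from statistics import mean, stdev

# Hypothetical logins-per-day counts for ordinary operator accounts (invented data).
baseline = [4, 6, 5, 7, 5, 6, 4, 5]
mu, sigma = mean(baseline), stdev(baseline)

def looks_anomalous(logins_per_day: float, cutoff: float = 3.0) -> bool:
    """Flag accounts whose activity sits far outside the normal baseline."""
    return abs(logins_per_day - mu) / sigma > cutoff

print(looks_anomalous(40))  # True: nothing like a normal business operator
print(looks_anomalous(5))   # False: consistent with the baseline
```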

‘Hacktivists’ Role in Israel-Hamas Conflict

Joyce said one of the biggest challenges for cybersecurity officials is understanding who is conducting cyber attacks and why. For example, while cyber officials have been seeing an uptick in “hacktivists,” or hackers who are activists, they’ve been seeing more foreign governments backing them and posing as them.

“The Israel-Hamas conflict going on right now—there’s a tremendous amount of hacktivist activity, and we see it on both sides of the equation,” Joyce said. “But the interesting piece in some of this is the nation-states are increasingly cloaking their activities in the thin veil of activists’ activity—they will go ahead and poke at a nation-state, poke at critical infrastructure, poke at a military or strategic target, and try to do that in a manner that looks to be this groundswell of activist activity. That’s another place where we need that intelligence view into really what’s behind the curtain, because not all is as it seems.”

Unclassifying Information: ‘A Sea Change’

Joyce said that one of the biggest “sea” and “culture” changes at the NSA is sharing classified information with the private sector.

“We’re taking our sensitive intelligence, and we’re getting that down to unclassified levels that work with industry,” Joyce said. “Why? Because there might be one or two people in a company who are cleared for that intelligence, but chances are the people who can do something about it, they’re the folks who actually are not going to have a clearance.”

Joyce said that the agency decided to shift its stance on sharing intelligence in part because “what we know is not nearly as sensitive as how we know it” and because “knowing something really doesn’t matter if you don’t do something about it; industry is the first that can do something about it.”
