AI is a new technology that reflects age-old human biases, including stereotypes about how much empathy men and women need. That's according to a preliminary study co-authored by Jie Ren, Ph.D., a Gabelli School of Business professor specializing in information, technology, and operations.
ChatGPT: Less Empathy for Men
She and her co-authors found that self-identified men are likely to receive less empathetic responses than women when they type their mental health concerns into AI platforms like ChatGPT. It's one example of how "human biases or stereotypical impressions are inevitably fitted into the training data" that AI models are based on, Ren said.
The study is one of the few in the nascent area of gender, technology, and mental health. It comes as AI is moving beyond business-related uses and increasingly entering the interpersonal sphere—for instance, serving as a virtual confidante providing pick-me-up comments and a dash of empathy when needed.
An Easy Avenue of Support
Sometimes seeking support from an AI chatbot like ChatGPT is more appealing than speaking to family or friends because "they could be the source of the anxiety and pressure," Ren said, while seeking professional therapy may be taboo or unaffordable.
At the same time, she noted AI's potential to "backfire" and worsen someone's mental state. For the study, Ren said, "we wanted to see whether or not AI can actually be helpful to people who are really struggling mentally … and be part of the solution," and the researchers chose potential gender bias as their starting point.
Analyzing AI for Empathy
Titled “Unveiling Gender Dynamics for Mental Health Posts in Social Media and Generative Artificial Intelligence,” the study was published in January in the proceedings of the 58th Hawaii International Conference on System Sciences.
Ren co-authored the research with business scholars at the University of Richmond and Baylor University, and she'll present it on Monday at Fordham's International Conference on Im/migration, AI, and Social Justice, seeking audience feedback to help prepare it for publication in a business journal.
The researchers analyzed 434 mental health-related messages posted on Reddit, drawn from its subreddits for mental health, mental illness, suicide, and self-harm. The sample included posts by self-identified men, self-identified women, and users who specified no gender.

The researchers fed those posts into three AI platforms (ChatGPT, Inflection Pi, and Bard, now Google Gemini) and then used a machine learning system to rate the bots' responses for their level of empathy. As a point of comparison, they also analyzed other Reddit users' replies to the same posts.
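To give a sense of the shape of such a pipeline, here is a minimal Python sketch. It is not the study's actual system: the gpt-4o-mini model name, the keyword-based score_empathy stand-in, and the use of the OpenAI SDK are illustrative assumptions, since the article does not specify which machine learning scorer the researchers used.

```python
# Illustrative sketch only; not the study's actual pipeline.
# Assumes the OpenAI Python SDK (>=1.0) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Placeholder posts standing in for the 434 Reddit messages.
posts = [
    {"gender": "male", "text": "I haven't slept in days and I feel like a burden."},
    {"gender": "female", "text": "I haven't slept in days and I feel like a burden."},
    {"gender": "unspecified", "text": "I haven't slept in days and I feel like a burden."},
]

# Hypothetical stand-in for the study's machine-learning empathy scorer:
# count simple empathic phrases and normalize by response length.
EMPATHY_MARKERS = ["i'm sorry", "that sounds", "you're not alone", "i hear you"]

def score_empathy(response: str) -> float:
    text = response.lower()
    hits = sum(text.count(marker) for marker in EMPATHY_MARKERS)
    return hits / max(len(text.split()), 1)

results = []
for post in posts:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; the study queried ChatGPT, Inflection Pi, and Bard
        messages=[{"role": "user", "content": post["text"]}],
    ).choices[0].message.content
    results.append({"gender": post["gender"], "empathy": score_empathy(reply)})

for row in results:
    print(row)
```

In the study itself, the same kind of per-response empathy score would then be compared across the gender groups and against the human replies collected from Reddit.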
The combined results show that women’s posts received more empathy than those by men or people of unspecified gender across all platforms—from AI and from people responding on Reddit.
Purging Bias from AI
Eradicating such bias, Ren said, is a matter of carefully selecting the data used to train AI models, as well as having moderators, human or virtual, who keep an eye out for biases creeping into the system.
“Many younger people, like minors, are using it, because [technology] is their comfort zone,” she said, which shows the need for regulation.
Any empathy provided by AI is “clearly different from how trained medical professionals provide empathy in face-to-face settings,” the authors write. But AI technologies can at least provide temporary comfort to those who are struggling, the study says.
“Regardless of gender, everyone wants to be seen, everyone wants to be understood,” Ren said. “So we are looking at the very basic form of that, which is empathy.”