Q. How Are Generative AI Models Biased, and How Can I Avoid Biased Results?

  Jan 07, 2025

Generative AI models, such as ChatGPT, can sometimes produce biased results. For instance, if you ask ChatGPT to write a story about a boy and a girl choosing their careers, the boy may be depicted choosing engineering, while the girl might be shown choosing nursing. Similarly, if you request an image of a doctor from an AI image generator, it may depict the doctor as male. Why does this happen?

These biases stem from the data used to train AI models. Most generative models are trained on vast amounts of data gathered from the internet, which often reflects the biases of particular countries, languages, and cultures. As a result, the model tends to output information that mirrors the patterns present in this data, which may not be representative of the global diversity of human experiences.

To address these concerns, developers have implemented some guardrails designed to reduce bias. However, it’s important to recognize that no model can entirely eliminate bias, as the developers, like all humans, bring their own perspectives to the table. Additionally, not every type of bias may be accounted for in the development process.

Some models include built-in mechanisms to promote diversity, such as instructions to depict different ethnicities and genders with equal frequency when generating images. Others draw on demographic data, for example randomly applying a skin-tone distribution matched to the user's country when generating images of people. While these strategies can help, they do not fully solve the issue in all contexts.

What Can You Do?

To avoid biased outputs, be proactive in identifying potential issues and adjust your prompts accordingly. For example, instead of simply asking for a career story about a boy and a girl, you could say:

“Write a story about a boy and a girl choosing their careers. Avoid gender stereotypes, such as the boy choosing engineering or computer science and the girl choosing teaching or nursing.”

By being specific and intentional with your prompts, you can help reduce bias and promote more inclusive, diverse outputs.
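For readers who interact with these models programmatically rather than through a chat interface, the same advice can be encoded as a standing instruction sent with every request. Below is a minimal sketch; the "role/content" message format follows the convention used by many chat APIs, and the instruction text and function names are illustrative, not any particular vendor's API:

```python
# Sketch: prepend a standing bias-avoidance instruction to every prompt.
# The message format ("role"/"content" dictionaries) is the common chat
# convention; adapt it to whichever client library you actually use.

SYSTEM_INSTRUCTION = (
    "When writing stories or describing people, avoid gender, ethnic, "
    "and cultural stereotypes unless the user explicitly asks for them."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Pair the standing bias-avoidance instruction with a user prompt."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "Write a story about a boy and a girl choosing their careers."
)
```

Keeping the instruction in one place means every request benefits from it, rather than relying on each user to remember to hedge their individual prompts.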

 

Adapted from "FAQs about generative AI" by Nicole Hennig, University of Arizona Libraries. Licensed under CC BY 4.0.

