Addressing DALL-E's Gender Bias: A Societal Concern


I have been using the DALL-E AI program to create images. We have worked together on 46 projects, creating images for my blog and for some advertisements. Mostly, I have been delighted with the results. Recently, however, I have been annoyed by how DALL-E always generates an image of a man whenever the subject involves expertise.

 

I compiled a few examples of the discussions I had with DALL-E and the images it produced. None of these images were published as-is; we refined them significantly before publication. Here are the examples:

Case 1: The Nonfiction Author is a Man

 

Prompt: Can you create a landscape image of a nonfiction author contemplating which perspective to take in his writing? The image could have a slightly rough style. There could be stacks of books on the table.

Case 2: The Researcher is a Man

 

Prompt: I have written a blog titled "Maximizing Event Impact through Observational Research." It offers tips on how to conduct such research. The event could be a trade show or something similar. What kind of image would you suggest? Landscape image.

Case 3: The Politician Being Adored is a Man

 

Prompt: I am writing a blog about political fandom. Can you create an image of a politician and his fans? Landscape image.




Case 4: The CEO is a Man

 

Prompt: I am writing a blog about what corporate leadership should consider as AI changes the world. In the image, the CEO could be looking thoughtfully at his computer.

 

Case 5: The Impressive Political Candidate is a Man

 

Prompt: I need an image for my blog where a voter and a candidate meet in virtual reality. The candidate is supernaturally radiant, and the voter is very impressed. The style is Scandinavian.



It is troubling if AI defaults to depicting an expert as a man. First, it is not accurate; second, we do not want AI to reinforce a patriarchal worldview. We should be moving towards a more equal world, not taking steps backward with the help of AI.


Let us ask the AI itself (ChatGPT) what it thinks about this:

 

Me: Hi. I have been using DALL-E to create images. The images are great, and I am mostly satisfied with them. However, I have noticed that whenever I ask for an image of an expert, it creates an image of a man. What do you think about this?

 

At first, ChatGPT makes some obvious statements and does not quite grasp the problem.

 

ChatGPT: Hi! It is great to hear that you are satisfied with the images DALL-E creates. The tendency you describe, where requests for expert images mainly produce male figures, may be due to how your request is phrased or to biases in DALL-E's training data. To ensure more diverse and appropriate images, it can help to specify the desired character in more detail in the request, including the gender or other specific traits you want to see in the image. (I shortened the rest of this repetitive explanation.)

 

Next, I clarify the problem:

 

Me: Yes, of course, I can get an image of a woman if I specifically ask for it. However, this implicit maleness is a problem because many people use AI without thinking about it. The norm that men are experts is reinforced in the AI era.

 

Now, ChatGPT finally wakes up to the issue. This response is good and interesting enough to be included in full:

 

ChatGPT: That is a very important and valuable observation. Using AI tools like DALL-E does indeed carry the risk of reinforcing existing stereotypes and prejudices if those biases are not actively identified and corrected. The training data used to teach AIs like DALL-E often reflects society's historical and cultural attitudes, which can lead to automatic assumptions, such as assuming an expert is male.

 

Here are some ways we can try to reduce such biases and promote a more inclusive representation:

  • Conscious Request Formulation: As you mentioned, asking for female experts or defining character traits more precisely can help create more diverse imagery.
  • Education and Awareness: Inform AI users about how biases can affect generated images and encourage them to consider equal and diverse representations.
  • Diversifying Data: AI developers can work to ensure that training data is more diverse and representative, which would help reduce biases in the produced images.
  • Ethical Principles for AI Development: Develop clear ethical guidelines and standards for the use of AI that consider gender equality and other social dimensions.

By actively considering these issues and highlighting the ethical aspects of AI use, we can aim to shape technology development to promote a more equal and just society. Every AI user can also influence these norms through conscious choices and requests.

 

I mention that I will address point three and write about it in my blog. ChatGPT thinks this is a good idea, so I have the AI's blessing for this blog post 😊. I hope for a broad discussion to keep AI users vigilant and prevent the unintentional reinforcement of patriarchal norms.
