
Why AI Can Be Biased – And How to Make It More Inclusive with a Few Powerful Words


We’ve all heard the promises of artificial intelligence: faster decisions, better insights, and a new era of productivity. But there’s a shadow lurking behind those glowing headlines—AI can also replicate, amplify, and perpetuate the same biases we've been trying to unpick for decades. And if we’re not careful, it’ll do it at scale and with a sense of algorithmic authority that’s even harder to challenge.


So, how did we end up here? And what can we do about it?


It’s Not Just the Algorithm – It’s the Data Diet


Let’s start with a truth bomb: AI isn’t neutral. It’s not born in some moral vacuum, untouched by society’s messiness. AI is trained on data, and data is a record of human behaviour—past and present. Unfortunately, that means it’s a record of all the inequalities, blind spots, and biases we've yet to deal with.


From overrepresented white, Western male data to underrepresented global majority voices, the “training” AI receives is anything but balanced. We’ve literally coded our biases into the machines. It's like feeding a toddler a diet of only one kind of food, and then being surprised when they grow up with skewed tastes.


Bias in Action: A Quick Reality Check


Here are just a few "Is it OK?" questions we need to ask when AI is let loose in society:


  • Is it OK for an AI to decide who gets a job interview based on skewed hiring data?

  • Is it OK for facial recognition systems to misidentify Black faces at far higher rates?

  • Is it OK that medical AI misdiagnoses women because it was trained mostly on male bodies?


If your gut is screaming “No,” you're not alone. These aren't just technical glitches—they're systemic issues born of unbalanced data and uncritical design.


The Magic of Prompting with Inclusive Language

Now here’s the good news: we can do better. And one surprisingly simple place to start? The prompt.


Generative AI systems, like ChatGPT or image generators, respond directly to the language you feed them. And just like any other tool, you get better outputs when you ask better questions. If you want inclusive, equity-focused, and representative results—you have to say so.


That’s where inclusive prompt design comes in.


Instead of just asking:

“Write a company bio for our leadership team”

Try:

“Write a company bio for a diverse, intersectional, and representation-aware leadership team that includes LGBTQIA+, neurodivergent, racially and age-diverse voices.”

Boom. You’ve just shifted the lens through which the AI filters the world. You’ve named the need for inclusion, and AI listens.
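If you're scripting this rather than typing into a chat window, the same idea carries over directly: the inclusive prompt is just the string you send. Here's a minimal sketch, assuming the OpenAI Python SDK with an API key in your environment; the model name is illustrative, and any chat-capable model or provider would work the same way:

```python
# Minimal sketch: sending an inclusive prompt to a chat model.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a company bio for a diverse, intersectional, and "
    "representation-aware leadership team that includes LGBTQIA+, "
    "neurodivergent, racially and age-diverse voices."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Notice that the only thing that changed from the "plain" version is the wording of `prompt`. That's exactly the point: the inclusion work happens in the language, not the plumbing.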


Your Inclusive Prompt Toolkit


Here’s a list of inclusive terms you can add to your prompts to steer generative AI away from homogenised assumptions and toward more equitable results:


  1. Inclusive

  2. Diverse

  3. Intersectional

  4. Equitable

  5. Equity-focused

  6. Accessible

  7. Representation

  8. Representation-aware

  9. Marginalised

  10. Neurodivergent

  11. LGBTQIA+

  12. Culturally sensitive

  13. Culturally responsive

  14. Gender-inclusive

  15. Non-binary

  16. Racially diverse

  17. Age-diverse

  18. Age-inclusive

  19. Global majority

  20. Decolonised

  21. Community-led

  22. Justice-oriented

  23. Trauma-informed

  24. Disability-inclusive

  25. Disability-aware

  26. Socioeconomic diversity

  27. Socioeconomically sensitive

  28. Multilingual

  29. Authentic voices

  30. Authentic

  31. Lived experience

  32. Allyship

  33. Respectful

  34. Anti-oppressive

  35. Anti-racist

  36. Context-aware

  37. Contextual

  38. Empowering

  39. Human-centred

  40. Non-Western

  41. Indigenous

  42. Non-colonial

  43. Ethical

  44. Compassionate

  45. Bias-aware


Pro tip: Combine multiple terms when you want a layered, intersectional lens. The more precise you are, the more empowering the results can be.
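If you're generating prompts programmatically, a tiny helper makes this layering repeatable. Here's a hypothetical sketch in plain Python (the function name and phrasing are mine, not a standard API); it simply appends your chosen toolkit terms to any base task:

```python
# Hypothetical helper: layer inclusive toolkit terms onto a base prompt.
def with_inclusive_lens(task: str, terms: list[str]) -> str:
    """Append a layered, intersectional lens to a base task prompt."""
    lens = ", ".join(terms)
    return f"{task}. Ensure the output reflects these lenses: {lens}."

# Example: combining several terms from the toolkit above.
prompt = with_inclusive_lens(
    "Write a company bio for our leadership team",
    ["diverse", "intersectional", "representation-aware", "disability-inclusive"],
)
print(prompt)
```

Because the terms live in a simple list, it's easy to keep a house set of lenses and reuse them across every prompt your team sends.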


It’s Not Just About the Tech – It’s About the People It Serves


Let’s not lose sight of the human cost. When we talk about AI bias, we’re talking about real-world consequences: people being denied jobs, misdiagnosed, overpoliced, or simply left out of the digital conversation altogether.


We need to shift from building tech that’s “smart” to building tech that’s just. That means:


  • Involving people with lived experience in AI design.

  • Prioritising transparency and accountability.

  • Measuring the social impact, not just the performance metrics.


As I often say: "If your AI isn’t lifting up the people furthest from opportunity, it’s just another power tool for the already powerful."


Final Thoughts: Words Build Worlds

AI is not inherently good or bad. It’s a reflection of us—our choices, our assumptions, our language.


So the next time you prompt an AI, think about the kind of world you want it to imagine. Language matters. It shapes not just how machines respond, but how we include—or exclude—each other in the digital world.


Let’s start prompting for a future that’s not just intelligent, but inclusive.


Call to Action

How will you use your next AI prompt to amplify voices that usually go unheard? Try it out—and let me know how it goes. What inclusive prompts have worked for you? What didn’t? Let’s build better prompts—and better tech—together.


