Why AI Can Be Biased – And How to Make It More Inclusive with a Few Powerful Words
- Jamie Bykov-Brett

We’ve all heard the promises of artificial intelligence: faster decisions, better insights, and a new era of productivity. But there’s a shadow lurking behind those glowing headlines—AI can also replicate, amplify, and perpetuate the same biases we've been trying to unpick for decades. And if we’re not careful, it’ll do it at scale and with a sense of algorithmic authority that’s even harder to challenge.
So, how did we end up here? And what can we do about it?
It’s Not Just the Algorithm – It’s the Data Diet
Let’s start with a truth bomb: AI isn’t neutral. It’s not born in some moral vacuum, untouched by society’s messiness. AI is trained on data, and data is a record of human behaviour—past and present. Unfortunately, that means it’s a record of all the inequalities, blind spots, and biases we've yet to deal with.
From the overrepresentation of white, Western, male perspectives to the underrepresentation of global majority voices, the “training” AI receives is anything but balanced. We’ve coded our biases straight into the machines. It's like feeding a toddler a diet of only one kind of food, and then being surprised when they grow up with skewed tastes.
Bias in Action: A Quick Reality Check
Here are just a few "Is it OK?" questions we need to ask when AI is let loose in society:
- Is it OK for an AI to decide who gets a job interview based on skewed hiring data?
- Is it OK for facial recognition systems to misidentify Black faces at far higher rates?
- Is it OK that medical AI misdiagnoses women because it was trained mostly on male bodies?
If your gut is screaming “No,” you're not alone. These aren't just technical glitches—they're systemic issues born of unbalanced data and uncritical design.
The Magic of Prompting with Inclusive Language
Now here’s the good news: we can do better. And one surprisingly simple place to start? The prompt.
Generative AI systems, like ChatGPT or image generators, respond directly to the language you feed them. And just like any other tool, you get better outputs when you ask better questions. If you want inclusive, equity-focused, and representative results—you have to say so.
That’s where inclusive prompt design comes in.
Instead of just asking:
“Write a company bio for our leadership team”
Try:
“Write a company bio for a diverse, intersectional, and representation-aware leadership team that includes LGBTQIA+, neurodivergent, racially and age-diverse voices.”
Boom. You’ve just shifted the lens through which the AI filters the world. You’ve named the need for inclusion, and AI listens.
Your Inclusive Prompt Toolkit
Here’s a list of inclusive terms you can add to your prompts to steer generative AI away from homogenised assumptions and toward more equitable results:
- Inclusive
- Diverse
- Intersectional
- Equitable
- Equity-focused
- Accessible
- Representation
- Representation-aware
- Marginalised
- Neurodivergent
- LGBTQIA+
- Culturally sensitive
- Culturally responsive
- Gender-inclusive
- Non-binary
- Racially diverse
- Age-diverse
- Age-inclusive
- Global majority
- Decolonised
- Community-led
- Justice-oriented
- Trauma-informed
- Disability-inclusive
- Disability-aware
- Socioeconomic diversity
- Socioeconomically sensitive
- Multilingual
- Authentic voices
- Authentic
- Lived experience
- Allyship
- Respectful
- Anti-oppressive
- Anti-racist
- Context-aware
- Contextual
- Empowering
- Human-centred
- Non-Western
- Indigenous
- Non-colonial
- Ethical
- Compassionate
- Bias-aware
Pro tip: Combine multiple terms when you want a layered, intersectional lens. The more precise you are, the more empowering the results can be.
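If you build prompts in code rather than typing them by hand, the pro tip above can be sketched as a tiny helper that layers several terms onto a base request. This is a minimal sketch; the function name and the sentence template are my own illustration, not part of any AI provider's API:

```python
def build_inclusive_prompt(base_request: str, terms: list[str]) -> str:
    """Append inclusive framing terms to a base prompt.

    The phrasing below is one possible template; adapt the wording
    to your own voice and the model you are using.
    """
    if not terms:
        return base_request
    framing = ", ".join(terms)
    return f"{base_request} Make the result {framing}."


# Example: combining multiple terms for a layered, intersectional lens.
prompt = build_inclusive_prompt(
    "Write a company bio for our leadership team.",
    ["inclusive", "intersectional", "representation-aware"],
)
print(prompt)
```

The same helper works with any terms from the toolkit above, so you can keep a reusable list of the lenses that matter to your work and apply them consistently across prompts.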
It’s Not Just About the Tech – It’s About the People It Serves
Let’s not lose sight of the human cost. When we talk about AI bias, we’re talking about real-world consequences: people being denied jobs, misdiagnosed, overpoliced, or simply left out of the digital conversation altogether.
We need to shift from building tech that’s “smart” to building tech that’s just. That means:
- Involving people with lived experience in AI design.
- Prioritising transparency and accountability.
- Measuring the social impact, not just the performance metrics.
As I often say: "If your AI isn’t lifting up the people furthest from opportunity, it’s just another power tool for the already powerful."
Final Thoughts: Words Build Worlds
AI is not inherently good or bad. It’s a reflection of us—our choices, our assumptions, our language.
So the next time you prompt an AI, think about the kind of world you want it to imagine. Language matters. It shapes not just how machines respond, but how we include—or exclude—each other in the digital world.
Let’s start prompting for a future that’s not just intelligent, but inclusive.
Call to Action
How will you use your next AI prompt to amplify voices that usually go unheard? Try it out—and let me know how it goes. What inclusive prompts have worked for you? What didn’t? Let’s build better prompts—and better tech—together.