The Bias Blueprint: Transforming AI from Threat to Ally

By Keith Boswell, Chief Digital Officer @ Perceptint

Originally presented at the Michigan Department of Civil Rights “MI Response to Hate” event on Race at the Howell Opera House, April 24, 2025

The AI Revolution Is Here (Whether You’re Ready or Not)

AI isn’t just coming; it’s arrived with the subtlety of a marching band surrounding the Innies on Severance. ChatGPT gained over 100 million users within two months of its launch. Tools like DeepSeek, Claude, and Midjourney dominate headlines daily. What was once relegated to sci-fi films and academic research papers is now making dinner reservations and writing college essays.

This generative AI revolution represents the most significant technological leap since the Internet became the Web. Models like GPT-4, Claude, and Gemini are performing tasks that seemed purely fantastical just a few years ago—from writing complex code to engaging in nuanced conversations that can be indistinguishable from human interactions. The famous “Turing Test” has been passed a few times over now.

Alongside the fascination comes fear.

“Will AI take my job? Is it safe? Is it fair?”

the “Collective Inner Fear” of mankind circa 2023+

These aren’t just hypothetical concerns anymore. With high-profile incidents like deepfakes disrupting politics and AI-generated content becoming increasingly difficult to distinguish from human-created work, the time for abstract ethical discussions has passed. We need concrete solutions—now.

The Hidden Danger: Institutional Bias at Warp Speed

Here’s what keeps me up at night: AI doesn’t just reflect our biases—it accelerates them at unprecedented scale.

When deployed in high-stakes decision-making contexts, AI systems risk widening existing racial disparities in ways that are both more pervasive and less detectable than human bias. Consider what’s happening right now in the housing market: AI-driven tenant screening and rent pricing algorithms are rapidly proliferating, creating what critics have aptly called “black-box redlining.”

These systems might reject qualified renters or increase rents in minority communities without explicitly considering race—instead, they use seemingly “neutral” proxies that recreate historical patterns of discrimination. The Department of Justice’s lawsuit against RealPage, an AI rent-setting firm, alleges precisely this problem—that their algorithm enabled landlords to push rents higher than the market would otherwise bear, potentially disadvantaging vulnerable populations. Here in Michigan, a Lansing-based housing company using similar technology has raised significant local concerns.

Without appropriate regulation and oversight, these AI systems threaten to cement housing inequality rather than help eliminate it.

The “Garbage In, Garbage Out” Dilemma

The challenge lies in the data itself. AI systems learn from what we feed them, and unfortunately, our data is a by-product of an algorithm-led society. Algorithms chase whatever excites us, so explosive arguments have become more the norm than civil discussion.

Large AI models are predominantly trained on what they can easily read online. If you’ve spent five minutes in a comment section, you know it can include harmful stereotypes and racist content. The resulting systems inherit these biases, not through malicious intent, but through the mechanical reproduction of patterns in their training data.

Consider criminal justice algorithms that determine “crime risk” in neighborhoods. Decades of disproportionate policing mean crime databases contain skewed information, resulting in AI systems that may flag certain areas as high-risk regardless of actual conditions—they simply echo historical bias.

Similarly, mortgage algorithms designed to “not use race” as a factor still incorporate credit scores and neighborhood data that can function as proxies for race, producing biased outcomes anyway. Even social media algorithms have been accused of amplifying white creators over the Black creators who originated the very content being shared.
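The proxy effect is easy to demonstrate. The sketch below uses entirely synthetic, hypothetical data (the ZIP codes, groups, and approval rule are all invented for illustration): a scoring rule that never sees race, but penalizes ZIP codes inherited from a biased history, still produces racially skewed approval rates.

```python
# Hypothetical sketch: a "race-blind" rule can still produce skewed outcomes
# when it relies on a proxy (here, ZIP code) correlated with race.
# All data and ZIP codes below are synthetic and illustrative only.

applicants = [
    # (zip_code, group, credit_ok)
    ("48201", "black", True),
    ("48201", "black", True),
    ("48201", "white", True),
    ("48009", "white", True),
    ("48009", "white", True),
    ("48009", "black", True),
]

# The rule never looks at race -- only ZIP code and credit. But the
# penalized ZIP list encodes historical patterns of discrimination.
PENALIZED_ZIPS = {"48201"}

def approve(zip_code: str, credit_ok: bool) -> bool:
    return credit_ok and zip_code not in PENALIZED_ZIPS

def approval_rate(group: str) -> float:
    members = [a for a in applicants if a[1] == group]
    approved = [a for a in members if approve(a[0], a[2])]
    return len(approved) / len(members)

print(f"white approval rate: {approval_rate('white'):.0%}")  # 67%
print(f"black approval rate: {approval_rate('black'):.0%}")  # 33%
```

Every applicant here has identical credit, yet the outcomes diverge by group, purely because residence correlates with the proxy the rule uses.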

These types of biases in underlying data MUST be “trained out”—a challenge requiring both technical sophistication and ethical clarity.

Technology Can Be a Bridge, Not a Barrier

Despite serious concerns, I’m still cautiously optimistic. Used thoughtfully, AI could help us reduce human bias in ways never available to us before.

Properly designed AI hiring systems can be programmed to ignore demographics entirely and focus exclusively on skills and qualifications, potentially reducing the unconscious bias that affects human hiring decisions. AI translation and speech tools are already breaking down language barriers. AI-powered captioning allows deaf and hearing individuals to engage in real-time conversation.

The scalability of AI means positive efforts can reach a much wider audience. This could mean personalized education systems that adapt to a student’s culture, dialect, or learning style; city services using AI chatbots to assist residents equally 24/7; and analytics tools that can sift through massive datasets to identify discrimination patterns that might otherwise remain invisible.

AI audits of pay records, for instance, could flag when women or employees of color are paid less than their equally qualified counterparts—providing hard evidence for remedial action.
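That kind of audit can start very simply. The sketch below is a hypothetical illustration (the roles, groups, salaries, and the 5% threshold are all invented assumptions, not a real methodology): it groups pay records by role and flags any role where one group's median pay trails another's by more than the threshold.

```python
# Hypothetical pay-equity audit sketch: flag roles where the gap between
# group median salaries exceeds a threshold. Data is synthetic.
from collections import defaultdict
from statistics import median

records = [
    # (role, group, salary) -- illustrative numbers only
    ("engineer", "A", 98_000), ("engineer", "A", 102_000),
    ("engineer", "B", 90_000), ("engineer", "B", 88_000),
    ("analyst",  "A", 70_000), ("analyst",  "B", 70_500),
]

def pay_gaps(records, threshold=0.05):
    """Return {role: {group: median_pay}} for roles whose
    between-group median pay gap exceeds `threshold`."""
    by_role = defaultdict(lambda: defaultdict(list))
    for role, group, salary in records:
        by_role[role][group].append(salary)

    flagged = {}
    for role, groups in by_role.items():
        medians = {g: median(s) for g, s in groups.items()}
        hi, lo = max(medians.values()), min(medians.values())
        if hi and (hi - lo) / hi > threshold:
            flagged[role] = medians
    return flagged

for role, medians in pay_gaps(records).items():
    print(f"{role}: possible pay gap, medians = {medians}")
```

A real audit would of course control for experience, location, and other legitimate pay factors; the point is that the pattern-detection step itself is mechanical and auditable.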

The Blueprint Forward: From Awareness to Action

So where do we go from here? We need a comprehensive approach that acknowledges the risks and opportunities of AI.

Here’s what each of us can do:

  1. Advocate for Ethical AI Policies that require transparency, fairness testing, and ongoing audits of high-stakes AI systems
  2. Ask Questions & Stay Informed about how AI is being used in your workplace, community, and the products you use
  3. Participate in the Process by providing feedback and engaging with organizations developing AI systems. If we don’t contribute to setting boundaries, they will never exist
  4. Promote Tech Literacy & Inclusion to ensure diverse voices help shape these technologies
  5. Hold Companies Accountable by supporting those with strong ethical AI practices and questioning those without them
  6. Commit to Continuous Learning about AI developments and their implications; this is now everyone’s future path
  7. Support Organizations Fighting Bias in technology and elsewhere
  8. Build Bridges between technologists, civil rights advocates, policymakers, and communities

We’re at the Crossroads

The relationship between AI and civil rights represents one of the defining challenges of our time. The decisions we make now will shape our world for decades to come.

Everyone has a part to play in this conversation. We need to keep these discussions active in our workplaces, schools, and dinner tables. We must challenge comfortable myths like “algorithms are always neutral” and share concrete examples that illustrate why equity in AI matters.

The pace at which AI continues to advance is staggering, even to many of us who have worked in this space for decades. That’s precisely why I write about topics like this—to advocate for the common good of us all. Educating everyone about both the potential and pitfalls of AI isn’t just important—it’s essential.

Let’s approach this challenge with clear eyes and determined hearts. The future of AI can be one that reinforces our highest values rather than our deepest flaws. But only if we commit to making it so.


Want to continue the conversation? Reach me at info@perceptint.com