Back in spring 2025, when the DEI rollbacks started accelerating under the new administration, I built RESIST training—not because I thought we should abandon equity work, but because I knew we’d need survival strategies for hostile political environments.
That training was about preserving DEI outcomes when you can’t use DEI language. About doing the work while changing the vocabulary to survive. About tactical adaptation without moral capitulation.
This AI governance pathway is the same principle, operationalized.
Companies fired DEI practitioners to appease fascists. But they still desperately need the exact skills those people have—bias auditing, equity-centered process design, ethical risk assessment, asking “who gets harmed by this decision?”
So here’s the strategy: Hire them back. Same people. Same skills. Different org chart.
Call them AI Governance & Ethics Leads. Let them build bias-resistant AI systems, audit for fairness, and prevent your automation from accidentally creating discrimination lawsuits.
You get to look like you’re “innovating.” They get to keep doing work that matters. Everyone pretends this was always the plan.
And the equity work? It continues. We just stop painting a target on it.
Let’s Be Honest About What Happened
In late 2024 and early 2025, companies across America gutted their DEI programs in anticipation of—and then in response to—the Trump regime’s attacks on diversity initiatives.
Some did it because they genuinely believed the legal risk was too high. Some did it because their boards panicked. Some did it because they never really cared about equity in the first place, and this gave them cover.
Regardless of motive, the result was the same: tens of thousands of skilled practitioners lost their jobs. Programs that took years to build were dismantled in weeks. Infrastructure for equitable hiring, promotion, and culture-building was destroyed.
And companies told themselves they had no choice.
Fine. Let’s accept that premise (not in our hearts, but for the purposes of this article). Let’s say you fired your DEI staff because you were terrified of political retaliation, boycotts, or federal investigations.
You’re cowards, but I get it. Survival instinct is real.
But here’s your penance: you still need those skills. And now you need them for AI governance.
Because if you think AI systems are going to be less biased than human systems, you’re delusional. AI trained on biased data produces biased outcomes. AI deployed without ethical guardrails creates discrimination at scale, faster and less visibly than humans ever could.
And when that discrimination results in lawsuits, brand damage, and regulatory penalties, “the algorithm did it” won’t be a defense.
You need people who can audit for bias. Build equitable systems. Ask uncomfortable questions. Identify who gets harmed before the harm happens.
You know who’s really good at that? DEI practitioners.
So hire them back. Just call them something else.
The Skills Overlap: DEI Practitioner = AI Governance Expert
This isn’t a stretch. The Venn diagram is nearly a circle.
What DEI practitioners are trained to do:
1. Audit systems for bias and inequity
- Review hiring processes, promotion criteria, and performance evaluations
- Identify where implicit bias creeps into decision-making
- Measure demographic outcomes and flag disparities
- Trace how policies impact different groups
What AI Governance Leads need to do:
- Audit AI training data for bias
- Review AI outputs for discriminatory patterns
- Identify where algorithmic bias produces unfair outcomes
- Measure how AI impacts different user populations
It’s the same work. Just applied to machines instead of humans.
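To make that concrete, here’s a minimal sketch of a first-pass output audit, assuming you can log the AI’s decisions alongside the demographic group of the person affected. The group labels and data are made up, and the 0.8 cutoff borrows the EEOC’s four-fifths heuristic as a starting point, not a legal standard.

```python
# Minimal disparate-impact audit over logged AI decisions.
# Assumes records of (group, selected) pairs, e.g., whether a resume
# screener advanced each candidate. Group names, data, and the 0.8
# "four-fifths" cutoff are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """Per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag groups selected at under threshold * the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    if best == 0:  # nobody selected at all; nothing to compare against
        return {}
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
print(flag_disparate_impact(decisions))  # {'group_b': 0.5}
```

Anyone who has run an adverse-impact analysis on a hiring funnel has done exactly this math. Only the data source changed.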
2. Build equitable processes
- Design hiring rubrics that reduce bias
- Create evaluation frameworks that assess fairly across demographics
- Establish accountability mechanisms for decision-makers
- Document processes so they’re transparent and consistent
What AI Governance Leads need to do:
- Design AI prompts and training protocols that reduce bias
- Create evaluation frameworks for AI outputs
- Establish human oversight mechanisms for high-risk AI decisions
- Document AI decision-making processes for auditability
Same skill. Different application.
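As one illustration of what “human oversight mechanisms” and “auditability” can mean in practice, here’s a sketch that holds high-risk AI decisions for human review and appends every decision to a log. The risk threshold, the decision fields, and the JSON-lines log format are all assumptions you’d adapt to your own systems.

```python
# Sketch of a human-oversight gate with an append-only audit trail.
# High-risk decisions are queued for a person instead of auto-applying.
# The AIDecision fields, 0.5 threshold, and log format are assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AIDecision:
    subject_id: str
    action: str          # e.g., "reject_resume", "deny_credit"
    confidence: float    # model confidence in [0, 1]
    risk_score: float    # domain-specific harm estimate in [0, 1]

def route_decision(decision: AIDecision, risk_threshold: float = 0.5,
                   log_path: str = "decision_audit.jsonl") -> str:
    """Auto-apply low-risk decisions; hold high-risk ones for a human."""
    status = ("needs_human_review" if decision.risk_score >= risk_threshold
              else "auto_applied")
    record = {"ts": time.time(), "status": status, **asdict(decision)}
    with open(log_path, "a") as f:  # append-only: this is the audit trail
        f.write(json.dumps(record) + "\n")
    return status

print(route_decision(AIDecision("c-1042", "reject_resume", 0.91, 0.8)))
# needs_human_review
```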
3. Ask “who gets harmed?” before implementing change
- Anticipate unintended consequences of policies
- Center marginalized voices in decision-making
- Identify disparate impact before it occurs
- Push back on “efficient” solutions that harm vulnerable populations
What AI Governance Leads need to do:
- Anticipate unintended consequences of AI deployment
- Center affected users in AI design decisions
- Identify when AI will produce disparate outcomes
- Push back on “efficient” automation that creates harm
Literally the same ethical muscle.
4. Train people on sensitive, complex topics
- Explain bias in ways that don’t trigger defensiveness
- Help teams understand systemic issues, not just individual incidents
- Build buy-in for changes that feel uncomfortable
- Navigate resistance and backlash
What AI Governance Leads need to do:
- Explain AI bias risks to teams who don’t want to hear it
- Help teams understand AI limitations and failure modes
- Build buy-in for governance protocols that slow things down
- Navigate resistance from people who think oversight is unnecessary
Exact same change management challenge.
5. Measure outcomes, not just intentions
- Track whether DEI initiatives actually improve equity
- Build metrics that capture impact on marginalized groups
- Report honestly about what’s working and what isn’t
- Adjust strategies based on data
What AI Governance Leads need to do:
- Track whether AI actually performs equitably across demographics
- Build metrics that capture AI impact on different user groups
- Report honestly about AI failures and risks
- Adjust AI systems based on performance data
Same accountability framework.
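For instance, “measuring outcomes, not intentions” can be as simple as refusing to accept one aggregate accuracy number and breaking error rates out by group instead. This sketch compares false positive rates across groups; the record shape and the disparity tolerance are illustrative assumptions.

```python
# Sketch of ongoing equity monitoring: per-group false positive rates
# instead of a single overall metric. The record shape and the 0.02
# tolerance are assumptions, not a standard.
from collections import defaultdict

def false_positive_rates(records):
    """records: (group, predicted_positive, actually_positive) triples."""
    fps, negatives = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        if not actual:              # only actual negatives can be FPs
            negatives[group] += 1
            fps[group] += int(predicted)
    return {g: fps[g] / n for g, n in negatives.items() if n}

def disparity_report(records, tolerance=0.02):
    """Flag when the gap between best and worst group exceeds tolerance."""
    rates = false_positive_rates(records)
    spread = max(rates.values()) - min(rates.values())
    return {"rates": rates, "spread": spread, "flag": spread > tolerance}

records = [("group_a", True, False), ("group_a", False, False),
           ("group_b", True, False), ("group_b", True, False)]
print(disparity_report(records))
# {'rates': {'group_a': 0.5, 'group_b': 1.0}, 'spread': 0.5, 'flag': True}
```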
Why This Isn’t Capitulation—It’s Tactical Survival
I can already hear the objection: “This is just hiding DEI. You’re letting companies pretend they care about equity while avoiding the political cost of saying so.”
Yeah. Exactly.
Because the alternative is what? Letting the equity work die entirely? Letting skilled practitioners lose their livelihoods while companies build discriminatory AI systems with no one to stop them?
The work matters more than the label.
If a Black woman who spent a decade auditing hiring bias can now audit AI bias—and get paid well to do it, and prevent discriminatory outcomes at scale—that’s a win.
If a Latina compliance officer who built equitable promotion frameworks can now build ethical AI guardrails—and protect vulnerable users from algorithmic harm—that’s a win.
If a queer former DEI director who asked “who gets hurt?” about every policy change can now ask that same question about every AI deployment—and actually have budget and authority to enforce answers—that’s a win.
The Trump administration wanted to kill DEI by making the label toxic. Fine. We’ll change the label.
But the work continues.
And in five years, when the political climate shifts—and it will—the infrastructure will still be there. The expertise will still be intact. The practitioners will still be employed and building equity into systems.
Just under different titles.
That’s not surrender. That’s strategic patience.
How to Actually Do This (The Internal Pitch)
If you’re a company leader who wants to hire DEI practitioners into AI governance roles without triggering political blowback, here’s how to position it:
Don’t say: “We’re rebuilding our DEI program under a different name.”
Do say: “We’re investing in AI ethics and governance to mitigate legal risk, protect our brand, and ensure our automation doesn’t create discrimination lawsuits.”
Don’t say: “We need to center equity in our AI strategy.”
Do say: “We need to ensure AI performs fairly across all customer demographics and doesn’t expose us to regulatory penalties.”
Don’t say: “We’re hiring a former Chief Diversity Officer.”
Do say: “We’re hiring an AI Governance & Ethics Lead with expertise in bias auditing, risk assessment, and compliance frameworks.”
Same person. Same work. Different framing.
The business case you make internally:
“AI systems trained on biased data produce biased outcomes. Without governance, we risk:
- Discrimination lawsuits (hiring algorithms, credit decisions, customer service bots)
- Regulatory penalties (GDPR, EEOC, FTC enforcement)
- Brand damage (PR disasters when AI says something offensive or discriminatory)
- Strategic drift (AI making decisions misaligned with company values)
We need someone whose job is preventing these outcomes. That requires expertise in:
- Auditing systems for bias
- Building equitable processes
- Asking hard questions about unintended harm
- Measuring outcomes across demographics
Former DEI practitioners have exactly these skills. And they’re currently underemployed because companies panicked and fired them.
We can hire top-tier talent at reasonable rates, mitigate existential AI risks, and build a competitive advantage through responsible AI deployment.
This isn’t a political statement. It’s an operational necessity.”
That pitch works. Because it’s true.
For DEI Practitioners: How to Position Yourself
If you lost your DEI role in the rollbacks and want to pivot to AI governance, here’s how to translate your experience:
On your resume, reframe your accomplishments:
Instead of: “Led company-wide diversity and inclusion initiatives”
Write: “Designed and implemented bias-reduction frameworks across organizational systems, improving equity outcomes by [X metrics]”
Instead of: “Conducted unconscious bias training for leadership teams”
Write: “Trained cross-functional teams on bias identification and mitigation in decision-making processes”
Instead of: “Built equitable hiring and promotion frameworks”
Write: “Developed evaluation systems that reduce bias and ensure fair outcomes across diverse populations”
Instead of: “Audited HR processes for discriminatory impact”
Write: “Conducted systematic audits to identify and eliminate bias in organizational workflows and decision points”
In your cover letter:
“As organizations deploy AI at scale, the risk of algorithmic bias and discriminatory outcomes grows exponentially. My background in [bias auditing / equitable process design / compliance frameworks / change management] directly translates to AI governance.
I’ve spent [X years] identifying where bias enters systems, building frameworks that reduce discriminatory outcomes, and ensuring processes are transparent and accountable. Those same skills are critical for responsible AI deployment—auditing training data, reviewing outputs for fairness, establishing human oversight protocols, and measuring performance across demographics.
I’m seeking to apply this expertise to AI ethics and governance, where the stakes are even higher and the impact can scale even further.”
In interviews:
When they ask “Why are you interested in AI governance?” don’t say “because my DEI job got eliminated.”
Say: “I realized the same challenges I was solving in human systems—bias in decision-making, lack of accountability, disparate outcomes for different groups—are all amplified in AI systems. And most companies are deploying AI without anyone asking the critical questions: Who gets harmed? Where’s the bias? What’s the oversight?
I want to prevent algorithmic discrimination at scale. That’s where this expertise matters most.”
Skills to emphasize:
- Bias identification and mitigation
- Process auditing and documentation
- Ethical risk assessment
- Compliance frameworks (EEOC, GDPR, industry regulations)
- Change management and stakeholder communication
- Data analysis (measuring outcomes across demographics)
- Training and education on sensitive topics
Certifications to consider adding:
- AI ethics courses (Coursera, edX, LinkedIn Learning)
- Algorithmic bias detection
- Responsible AI frameworks
- GDPR and data ethics compliance
You don’t need to become a data scientist. You need to show you understand AI risks and can build governance structures—which you already know how to do.
The Long Game: Infrastructure Survives Administrations
Here’s what companies don’t understand yet but will soon: AI governance isn’t optional.
Regulators are coming. The EU AI Act is already in force. State-level AI regulations are proliferating. The FTC is investigating algorithmic discrimination. The EEOC is cracking down on biased hiring algorithms.
In 2-3 years, every company deploying AI at scale will be required to have governance infrastructure. Audit trails. Bias testing. Human oversight. Accountability mechanisms.
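What that infrastructure looks like on paper isn’t mysterious. Here’s an illustrative register of deployed AI systems and the controls attached to each; the risk tiers loosely echo the EU AI Act’s risk-based approach, but the field names and tier labels are assumptions, not the Act’s text.

```python
# Illustrative governance register: one entry per deployed AI system,
# recording the controls regulators increasingly expect. Tier labels
# loosely mirror the EU AI Act's risk-based approach; every field
# name and value here is a made-up example.
AI_SYSTEM_REGISTER = [
    {
        "system": "resume_screener_v2",
        "risk_tier": "high",  # employment decisions
        "bias_testing": "quarterly disparate-impact audit",
        "human_oversight": "recruiter reviews every auto-reject",
        "audit_trail": "decision_audit.jsonl, multi-year retention",
        "owner": "AI Governance & Ethics Lead",
    },
    {
        "system": "support_chatbot",
        "risk_tier": "limited",  # disclosure obligations, lighter controls
        "bias_testing": "annual output review across customer segments",
        "human_oversight": "escalation path to a human agent",
        "audit_trail": "conversation logs, 90-day retention",
        "owner": "AI Governance & Ethics Lead",
    },
]
```

Every row in that register is something a former DEI practitioner already knows how to build and maintain.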
The companies building that infrastructure now—with experienced practitioners who know how to audit systems, mitigate bias, and ensure fairness—will have a massive advantage.
The companies that waited, thinking they could “figure it out later,” will scramble to hire expertise that’s suddenly in desperately short supply.
And in a few years (sooner, I pray, and I am an atheist), when the political climate shifts and DEI stops being a liability? The companies that hired these practitioners into AI roles will have:
- Intact expertise
- Built infrastructure
- Proven track records
- Embedded equity frameworks
They’ll be able to say “we never abandoned this work—we just evolved how we did it.”
The companies that actually fired everyone and lost that expertise? They’ll be starting from scratch. Again.
Strategic patience wins.
This Is About Protecting the Work, Not the Label
I’m not naive. I know this strategy has limitations.
Some DEI practitioners will refuse this reframing, correctly pointing out that rebranding equity work as “risk mitigation” undercuts the moral foundation of the work itself.
Some companies will use this as cover to do nothing—hiring one person, calling it “AI ethics,” and congratulating themselves while building deeply biased systems.
Some will hire former DEI staff into these roles and then ignore everything they say, treating it as performative compliance theater.
All of that is true.
But here’s what’s also true:
Equity work that continues under a different name is better than equity work that dies entirely.
Practitioners who can pay their mortgages while doing meaningful work are better off than practitioners who are unemployed.
AI systems with bias auditors are better than AI systems with no oversight at all.
Infrastructure that survives a hostile administration is more valuable than infrastructure that gets destroyed and has to be rebuilt from scratch.
I hate that we’re in a political moment where this kind of tactical camouflage is necessary.
I hate that companies are cowardly enough to need it.
I hate that doing the right thing requires pretending it’s actually just risk management.
But I hate the alternative more.
So here’s my advice:
To companies: Hire the DEI practitioners you fired. Pay them well. Call them AI Governance Leads. Let them do the work. And when the political winds shift—and they will—you’ll have built something durable instead of just reacting to whatever outrage is trending.
To DEI practitioners: Your skills are more valuable than ever. The work matters more than the title. If reframing gets you employed, gets you resourced, and gets you the authority to prevent algorithmic harm at scale—take it. You’re not selling out. You’re surviving. And the equity infrastructure you build will outlast the administration that tried to kill it.
To everyone else: Watch what happens when companies try to deploy AI without anyone asking hard questions about bias, fairness, and harm. Watch the lawsuits pile up. Watch the brand disasters unfold. Watch the regulatory penalties hit.
And then remember: this was all predictable. Preventable. And ignored.
Because asking “who gets hurt?” was deemed too political.
For more on how to structure AI-hybrid roles and who should fill them, see my previous piece: Stop Calling Them ‘Forward-Deployed Engineers’. For the full framework on why companies need human-AI teams instead of replacing workers entirely, start here: The AI Manager Model.
