I’ve been arguing for months that companies need to build human-AI teams instead of replacing workers. That the real risk isn’t AI taking jobs—it’s companies (short-sightedly IMHO) eliminating the human oversight that makes AI safe, accurate, and strategically valuable.
Now there’s proof I’m right.
Market proof. Operational proof. And proof that the companies ignoring this are about to face a catastrophic trust crisis, one that makes AI implementation without human governance not just dumb but needlessly risky.
The Market Is Already Moving This Direction
Between January and September 2025, job listings for forward-deployed engineers (people who translate between AI capabilities and human needs) went up 800%. Not 8%. Not 80%. Eight hundred percent.
According to a review of 3 million job postings by Autodesk, design skills have now surpassed technical skills in AI job postings. Not machine learning expertise. Not data science credentials. Design. The ability to make AI feel like something humans want to use. Oh the humanities degree (JK)
As Greg Dessau wrote recently, “The bottleneck isn’t building AI anymore. It’s making it work for people who’ll never understand how it works.”
The market is screaming for human-AI collaboration. Job listings prove it. And companies are hiring for these roles at unprecedented rates because they’re discovering what I’ve been saying: AI without human oversight doesn’t work.
But it’s not just one role. It’s an entire job family—six distinct positions companies need to staff if they want AI that’s safe, effective, and strategically aligned:
- AI Process Managers who document workflows and train agents
- AI Quality Assurance Managers who review outputs for accuracy and bias
- AI Implementation Specialists who deploy and troubleshoot systems
- AI Governance & Ethics Leads who set guardrails and audit for harm
- AI ROI & Operations Managers who prevent tool sprawl and measure what works
- AI Change Management Leads who help humans trust and adopt the systems
For the full scoop on these roles—like who’s a good fit and how they lead to bigger things—check out this article: Stop Calling Them “Forward-Deployed Engineers”—Here Are the 6 AI-Hybrid Jobs Companies Actually Need.
My point is that the market has already sussed out that human-AI teams outperform AI-only approaches. Now, companies just need to catch up.
Operational Proof This Actually Works
Jason Lemkin’s team at SaaStr deployed 20+ AI agents and replaced their entire human SDR team. They’ve sent 60,000+ hyper-personalized emails, booked 130+ meetings automatically, and generated 15% of their London event revenue through AI agents alone.
I know, I know, you're screaming at your screen that I don't get it, that this is EXACTLY what I was arguing against before. Well, just calm your farm, because…
…what matters is that those agents didn’t replace humans. They replaced the work humans consistently refused to do.
For six years, Lemkin’s human SDRs wouldn’t follow up with return attendees about ticket sales. It wasn’t worth their time—they wanted to hunt six-figure sponsorships instead. They tried incentives. They begged. The SDRs said they’d do it, then didn’t.
When they deployed AI agents on those exact same leads? 15% of event revenue. Found money they’d been leaving on the table for half a decade.
The lesson isn’t “fire your humans and buy AI.” The lesson is: AI agents crush the work humans won’t do—the small leads, the low-priority follow-ups, the consistency humans can’t or don’t WANT to maintain. It’s not what motivates SDRs; they probably don’t get some amazing bonus for this tedious work. That 15% revenue lift is valuable to the company at the macro level, but not to the individual humans or to the mix of skills that makes an excellent salesperson.
Also, it sounds boring.
But here’s the critical part it feels like a lot of people are missing: those agents required extensive human training. Two weeks minimum to deploy each agent properly. Constant spot-checking. Ongoing refinement. Human oversight on every output category. Escalation protocols for edge cases.
Lemkin’s exact words: “You need exactly two humans to make this work”—a vendor-side implementation partner and an internal GTM engineer who owns the orchestration.
This isn’t plug-and-play magic. It’s:
- Take what humans have figured out
- Document it
- Train an agent with what works
- Segment ruthlessly
- Have humans own the rollout and governance
- Read everything early, then build trust over time
Sound familiar? That’s exactly the framework I’ve been advocating: hire young workers to learn the job, document the workflows, train AI agents to handle standardized portions, then become the managers of those systems.
And it’s working. At scale. With real revenue. With human-AI collaboration at the center.
Why This Matters Now: The Trust Crisis Makes Human Oversight Non-Negotiable
LiveCareer just released their Trust Deficit report analyzing the 2025 workplace. The numbers are devastating:
- 45% of HR professionals admit to posting ghost jobs regularly
- 74% of workers say they’ve received 360-degree feedback that was unfair, biased, or inaccurate
- 40% of workers have quit a job due to distrust in their manager
- 47% don’t trust anyone at work with confidential information
TBH, our workplace ecosystem is already shattered by broken trust, deceptive hiring practices, biased performance systems, and leadership failures. So IDK why everyone is losing it over AI.
Oh yeah, it’s because layering AI badly on top of this (firing people for efficiency, automating without oversight, eliminating roles while claiming “AI can do it”) creates catastrophe. It’s like giving someone a paper cut and pouring lemon juice on it (shout out Rob Reiner 😭)
But what if we did it in a way that did not suck?
With human oversight, transparency, accountability, and genuine investment in human-AI collaboration? That could help rebuild some of that trust.
Here’s what workers are seeing right now: companies that lie about job openings, give biased feedback, and break promises. If those same companies suddenly say “don’t worry, the AI will handle it,” why the hell would anyone believe them?
The only way AI implementation doesn’t accelerate the trust collapse is if humans visibly remain in the loop.
- Managing the systems. Boom
- Escalating the exceptions. Boom
- Auditing the outputs. Boom
- Taking responsibility. BOOM
(P.S. I moved these around to spell MEAT, because the one thing we have over the robots is our meat suits and opposable thumbs.)
Which means the worst possible strategy is the one most companies are pursuing: eliminate humans, trust the AI, hope for the best.
The Philosophical Case: Why AI-Only Is Dangerous and Human-AI Is Unstoppable
The conversation around AI and work has become weirdly binary. Either AI is coming for every job, or we’re supposed to believe humans will simply “reskill” our way out of structural change with a cheerful attitude and a framed Coursera certificate.
Both miss the actual risk and the actual opportunity.
The real threat isn’t AI taking over.
It’s companies eliminating the human layer that makes AI safe, accurate, adaptable, and strategically valuable. (Hint: It’s still rampant capitalism and greed if you’re keeping track at home.)
The real opportunity isn’t replacing workers.
It’s building human–AI teams where people supervise and shape the AI systems that power modern organizations, letting those systems enhance their human skills.
But right now, most companies are sprinting in the wrong direction: flattening org charts, eliminating early-career roles, and assuming AI can stand in for institutional knowledge that took decades to build.
They are trading long-term capability for short-term efficiency. And they’re doing it at the precise moment when the opposite is required.
AI Is a Force Multiplier… but Only With People in the Loop
Sorry not sorry. Every credible AI research lab says the same thing: AI models drift, hallucinate, misinterpret nuance, and fail silently when edge cases appear.
They are astonishing pattern-recognition systems. They are not employees. (But we still have to manage it, so good luck, buttercup.)
Yet companies have begun treating AI like a fully autonomous worker: assigning tasks with no human buffer, no reviewer, no escalation path, and no internal governance or protocol.
This is how organizations end up with:
- Contracts containing invalid clauses
- Invented statistics
- Fabricated case studies
- Unvetted claims
- Misclassified financial data
- Incorrect regulatory interpretations
- Brand-damaging errors that no one catches until it’s too late
And because companies are shrinking their entry-level workforce, there’s no one left to do the work that ensures the AI operates safely.
The jobs that once taught people how an industry works (drafting, research, analysis, documentation) are being automated away.
But the need for human oversight hasn’t gone anywhere.
This is the paradox companies are about to face with terrifying clarity: The tasks entry-level workers performed are gone, but the need for junior talent has never been higher.
We wiped out the human layer without replacing its function.
The Future Is Not AI-Only. It’s AI-Supervised.
The organizations that will survive the next decade are the ones that do three things exceptionally well:
- Deploy AI
- Govern AI
- Adapt AI as business, regulation, and technology evolve
Only humans can do #2 and #3.
And the best humans to do it are not overburdened senior leaders. It’s the early-career workforce companies are currently shoving out of the pipeline.
Young Workers Aren’t a Cost Center—They’re the AI Governance Layer
Instead of eliminating junior roles, companies ought to transform them.
Hire new grads for one purpose: to learn the job and then automate the job—safely, ethically, and in permanent alignment with company standards.
That doesn’t mean “train your replacement.” It means “become the manager of the AI system that executes the workflow.”
Young workers become:
- Workflow architects
- Quality-control leads
- AI supervisors
- Internal knowledge stewards
- Process documentarians
- Risk mitigators
- Scalers of institutional intelligence
This isn’t cheaper instead of better. It’s cheaper because it’s better.
You get scale without brittleness. You get efficiency without dependency. You get speed without losing the human judgment that prevents strategic catastrophes.
This is usually where someone gets annoyed at my focus on entry-level workers. Please do not misunderstand me; I can walk and chew gum at the same time. There IS a broader plan for a lot of displaced workers, but this piece is specifically aimed at the push for agentic AI, to which I am not necessarily opposed. Anyways, this is why these things always turn into series and then completely overwhelm me in getting them out into the world: I try to solve for every equation. If you can’t wait for my ideas for other groups, feel free to start thinking about solutions yourself.
Why Companies Can’t Replace This Layer With Mid-Career Hires
Many executives say, “We’re not hiring juniors because we don’t have time to train them.”
This logic collapses under basic scrutiny.
Senior workers are not going to:
- Document workflows
- Codify institutional knowledge
- Train internal AI agents
- Build ground-level process maps
- Trace data lineage
- Set escalation protocols
- Define risk thresholds
- Run quality audits on AI output
And frankly, they shouldn’t. That’s not what they’re there for. (They SHOULD be guiding these entry-level AI managers on how to find, document, check, and organize all that stuff, though.)
But if no one does it, the AI remains unmanaged. Which means the company remains vulnerable.
A human–AI team is an operational necessity. It is. I don’t care how principled you are, if you are running a business and you stubbornly refuse to use AI, you’re just gonna get lapped. And I mean a real business, not your onesie-twosie consultancy you built over the last 40 years. OBVIOUSLY, I am not speaking to you.
This is exactly why entry-level roles need to transform, not disappear.
Junior workers are PERFECTLY suited for this work because:
- They’re learning the business from the ground up (so they understand workflows deeply)
- They have time to document processes (it’s literally part of their learning)
- They can spot inefficiencies with fresh eyes (they haven’t normalized the chaos yet)
- They’re building skills that will make them invaluable as AI managers, not replaceable
Senior people won’t do this work. But junior people absolutely can—and should—because it’s how they learn the business while simultaneously making themselves essential to how the business operates.
AI Without Humans Is Dangerous. Humans Without AI Are Outpaced. Together They’re Unstoppable.
The companies that will win the AI era look different from those trying to automate humans out of existence.
They build hybrid teams. They elevate judgment, not just output. They treat automation as a system, not a shortcut. They keep people in the loop at the exact moments where AI becomes unpredictable.
Humans bring context, ethics, lived experience, domain understanding, and strategic reasoning.
AI brings speed, scale, consistency, and mechanical recall.
Together, they become something neither can achieve alone:
- Adaptive intelligence
- Institutional continuity
- Strategic foresight
- Safe automation
- Resilient execution
- Scalable operational excellence
That’s the real frontier—not replacing workers, but augmenting them.
Who Should Train for These Roles: Not Who You Think
The panic narrative suggests that AI jobs are suited for young digital natives who grew up coding.
That’s backwards.
This work is ideal for entry-level roles. BUT it’s also accessible to experienced workers who want to transition into AI-hybrid work—because the skills (training, documentation, quality control) are things they’ve been doing their whole careers. They just didn’t know it had this application.
The people best suited to manage AI aren’t the ones who grew up with technology. They’re the ones who grew up managing humans.
If you’ve spent decades learning how to:
- Train a new hire on a complex process
- Break down ambiguous tasks into clear steps
- Review someone else’s work for quality and accuracy
- Ask “should we do this?” before green-lighting a project
- Manage a team through major operational change
- Document tribal knowledge so it doesn’t walk out the door
…you already have the core skills for AI-hybrid work. You’ve just been prompting humans instead of machines.
Elder millennials, younger Gen X, even Boomers who’ve spent careers in training, operations, compliance, or management? You’re not obsolete. You’re actually more valuable than you were before AI—if you’re willing to redirect those skills.
Sequential thinking, quality control, institutional knowledge, judgment, change management—these transfer directly to AI governance roles. The gatekeeping language makes it sound like you need a CS degree. You don’t. You need to understand workflows, quality standards, and people. And you already do.
The DEI Angle Almost No One Is Discussing
And here’s the connection that matters right now: DEI practitioners who lost their jobs in the 2024-2025 rollbacks are perfectly positioned for AI Governance & Ethics roles.
When companies gutted DEI programs, they didn’t just lose diversity officers. They lost the people who were best at:
- Auditing systems for bias
- Building equitable processes
- Training people on sensitive topics
- Measuring outcomes beyond revenue
- Asking “who gets harmed by this decision?”
Which means they eliminated the exact people they need to make AI safe, ethical, and strategically sound.
The skills are identical.
- Auditing hiring bias = auditing AI bias.
- Building equitable promotion frameworks = building fair AI systems.
- Asking “who gets hurt?” about policies = asking “who gets hurt?” about automation.
Here’s the strategy: Companies can hire back DEI practitioners under different titles—such as AI Governance & Ethics Leads—and let them perform functionally the same work (making systems more fair, transparent, and human-centered) without incurring political blowback.
Same people. Same skills. Different org chart. The equity work continues. We just change the vocabulary to survive the hostile political environment. This was at the heart of the R.E.S.I.S.T. series we built earlier this year.
For the full case on how to make this pivot and why it’s tactical survival, not capitulation, read: Your Company Fired DEI Staff. Here’s How to Hire Them Back as AI Governance Leads.
How to Actually Implement This: The Decision Framework
You can’t just point AI at your entire operation and hope for the best. You need strategic frameworks for:
- Which tasks should be automated (based on repeatability, risk, edge case frequency, institutional knowledge dependency)
- Which people should be reskilled versus transitioned
- Which workflows get prioritized (business impact × ease of automation)
- Who owns what in the governance model (separation of duties prevents chaos and liability)
I’ve built a complete decision rubric (ask me sometime about the time Chris and I argued about rubrics because mine don’t always have squares or scores 🙂) that evaluates tasks across four dimensions and outputs clear recommendations:
- Automate fully (high repeatability, low risk, low edge cases) → with periodic quality audits
- AI Manager supervised (medium across dimensions) → entry point for AI-hybrid workforce
- Human-led, AI-assisted (high risk OR high knowledge dependency) → human decides, AI supports
- Keep fully human (low repeatability, high stakes, strategic judgment required) → don’t automate, improve the human process
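To make the four lanes concrete, here’s a minimal sketch in Python. Everything in it—the 1–5 scoring scale, the `HIGH`/`LOW` thresholds, the `Task` fields, and the `classify` function—is my own hypothetical stand-in for illustration, not the actual rubric:

```python
from dataclasses import dataclass

# Hypothetical cutoffs on a 1-5 scale; the real rubric's thresholds may differ.
HIGH, LOW = 4, 2

@dataclass
class Task:
    name: str
    repeatability: int         # 1 = one-off ... 5 = identical every time
    risk: int                  # 1 = harmless ... 5 = regulatory/brand exposure
    edge_cases: int            # 1 = rare surprises ... 5 = constant surprises
    knowledge_dependency: int  # 1 = fully documented ... 5 = lives in someone's head

def classify(t: Task) -> str:
    """Map a task's four scores to one of the four lanes described above."""
    if t.repeatability <= LOW and t.risk >= HIGH:
        return "Keep fully human"        # low repeatability, high stakes
    if t.risk >= HIGH or t.knowledge_dependency >= HIGH:
        return "Human-led, AI-assisted"  # human decides, AI supports
    if t.repeatability >= HIGH and t.risk <= LOW and t.edge_cases <= LOW:
        return "Automate fully"          # plus periodic quality audits
    return "AI Manager supervised"       # medium across dimensions

# Example: the kind of low-priority follow-up work from the SaaStr story
print(classify(Task("return-attendee follow-ups", 5, 1, 2, 1)))  # Automate fully
```

The point of the sketch is the ordering: high risk and high knowledge dependency veto full automation before repeatability ever gets a vote, which is exactly why the “automate everything” crowd keeps getting burned.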
The rubric includes people classification frameworks (who’s got the knack for learning, the process smarts, and the big-picture thinking needed for jobs that mix human and AI skills), how to decide what work gets done first, and the rules that spell out who’s in charge of what.
For the complete implementation guide with sample scenarios and decision matrices, read: The AI Implementation Decision Matrix: Which Tasks, Which People, What Order.
Replacing Workers Creates Fragile Companies. Human–AI Teams Create Future-Proof Ones.
Organizations that fire their youngest workers today will wake up in ten years with empty leadership pipelines and a brittle reliance on tools they don’t know how to govern. But I have done enough bitching about that. Here’s a possible bright spot.
Organizations that invest in human–AI collaboration will wake up with:
- AI systems that reflect their values
- Workflows encoded into the internal IP
- Talent that understands the business end-to-end
- Resilience against model drift and vendor instability
- Teams that can build, not just consume, automation
- A future workforce ready to lead, not just survive
- BONUS: They’ll have a more bonded team because they’re working through something totally new together, putting senior-level folks and entry-level workers on par with each other (in this respect anyway).
AI won’t eliminate the need for people, but it def will eliminate the need for people who can’t work with AI.
That’s a challenge AND the opportunity of a lifetime. Pick how you want to see it.
The smartest companies won’t replace workers. They’ll rebuild the workforce around human–AI partnership.
If organizations can master hybrid intelligence, they won’t just survive the future; they will shape it.
And those companies are already being built.
