I’ve spent ages writing about what’s breaking in the American workplace. The entry-level collapse. The AI displacement of junior roles. The DEI rollbacks. The compounding crises that make workers increasingly expendable while companies hollow themselves out, chasing quarterly efficiency.
The anger behind those pieces is justified. The documentation matters. But documentation without architecture is just noise.
And I’m not interested in noise anymore. Well, I am, but this piece is about solutions. I reserve the right to get pissed off at all this nuttiness again anytime I like.
The truth is, there ARE people building thoughtful solutions. There ARE frameworks that keep humans in the loop while leveraging AI’s capabilities. There ARE operational models—proven, working, revenue-generating models—that treat this moment as an opportunity to build resilience rather than just strip costs.
And if we spotlight what’s actually working instead of just cataloging what’s broken, we can create a different template. One where we don’t repeat the same pattern of “move fast, break things, ruin lives, apologize later, pay settlements, learn nothing.”
Ok let’s do it.
The Market Is Already Screaming For Human-AI Collaboration
The market is screaming for human-AI translation. It just doesn’t call it that yet. It uses confusing Silicon Valley jargon like “forward-deployed engineer” that makes this sound like a unicorn role requiring a CS degree and venture funding connections.
But strip away the pretentious language and you know what this role actually is? It’s management. It’s training. It’s documentation. It’s breaking down complex processes into clear steps. It’s quality control. It’s asking “should we?” before “can we?”
It’s the work people have been doing forever, just redirected at machines instead of humans.
I’ve been arguing this exact point for months: entry-level workers should become AI managers, not casualties of automation. Young graduates should learn the job, then automate the repeatable parts, then become the supervisors of the AI systems doing that work.
Turns out the market agrees. They’re just using different language.
The Mindset Required: Future-First Thinking
The writer of a newsletter I subscribe to described a “future-first” mindset they saw in action earlier this month in Riyadh. The Riyadh Chamber of Commerce hosted the International Strategy Day, and the conversation centered on Saudi Vision 2030—a massive, time-bound transformation to shift the economy beyond oil. But that’s not what I wanna talk about. I want to discuss the “future-first” concept.
When an entire culture operates with a fixed future date in mind, the urgency to execute today changes completely. It forces alignment in a way that standard corporate planning rarely achieves.
As one speaker noted, “If your future self looks back at 2025, they won’t care about how much data you collected. They will care about how fast you could find the truth within it.” And if that doesn’t speak to recruiting and the HRTech space, IDK what does. We’re obsessed with data, most times to our detriment. But sitting on mountains of data will no longer a king (or queen) make. What does your company, team, department, and job look like in 2030? Great. Now how do you GET there?
Or consider Telstra’s recent move. Josh Blandy, Head of Performance Strategy at Telstra, explained that AI is shifting data demand from asynchronous to synchronous, and old networks aren’t built for it. So Telstra divested profitable parts of their business, like cloud services, to go all-in on core network infrastructure.
It’s a perfect example of the discipline required to say “no” to good opportunities so you can say “yes” to existential ones.
This is the discipline companies need right now.
- Not “can we automate this?” but “should we?”
- Not “what’s fastest?” but “what’s sustainable?”
- Not “how do we cut costs this quarter?” but “who will run this company in ten years if we never hire anyone?”
In every previous technological shift, we optimized for speed over humanity. We moved fast, broke things, ruined lives, apologized later, paid settlements, and learned nothing. But alas, I am an idealist, and I will continue to rage against the dying of the light, or the man, or the machine. Depends on the day.
We have the data NOW. We can see the collapse happening in real time—the entry-level jobs vanishing, the trust eroding, the institutional knowledge walking out the door.
We can choose differently.
Whether we actually will? That’s the test.
The Framework: What Actually Needs to Happen
Based on everything I’ve studied, built, and seen working in practice, here’s what companies need to do:
1. Stop eliminating entry-level roles. Transform them.
Hire young graduates to learn the job, document the workflows, and train AI agents to handle standardized portions of that work. Their job becomes managing the AI systems, not being replaced by them. I’ve written the full framework here and even built a training document showing exactly how this works operationally.
2. Recognize this isn’t one job—it’s a job family.
“Forward-deployed engineer” is gatekeeping Silicon Valley nonsense #SBI. What companies actually need are:
- AI Process Managers (workflow architects)
- AI Quality Assurance Leads (output reviewers)
- AI Governance & Ethics Leads (bias auditors, guardrail builders)
- AI Implementation Specialists (deployment experts)
- AI ROI Managers (measuring what works, preventing tool sprawl)
- AI Change Management Leads (helping humans trust the systems)
I’ll break down each of these roles in detail in an upcoming piece, but the point is that there’s an entire ecosystem of human jobs that AI can create if companies are smart enough to build them.
3. Understand that AI-hybrid roles are accessible to workers at ANY career stage.
The panic narrative says AI is for young people who grew up with technology. That’s backwards.
Entry-level workers are ideal for these roles because:
- They’re learning workflows from scratch (perfect foundation for documentation)
- They can build AI management skills as they build domain expertise
- They represent the future leadership pipeline companies desperately need
- The rotational model transforms what would be disappearing junior roles into essential positions
But experienced workers are ALSO ideal because:
- They already know how to train humans—which is exactly what AI prompting is
- They can break down complex ideas, anticipate questions, check for understanding
- They’ve spent careers doing quality control, documentation, and process improvement
- They have institutional knowledge that makes them invaluable for AI governance
If you’ve spent decades giving clear, sequential direction to team members, you already know how to prompt AI. You’ve just been doing it with people instead of machines.
Elder millennials, younger Gen X, even Boomers who are good at training, documentation, and quality control? You’re ideal for AI-hybrid roles. And many of you are currently getting pushed out of companies that think “AI expertise” means youth and coding skills. Boo. Hiss.
The core skills are the same regardless of age:
- Sequential thinking
- Quality assessment
- Process documentation
- Teaching/training ability
- Knowing when to escalate vs. when to trust the system
The smartest companies will hire both:
- Entry-level workers to build the foundation
- Experienced workers to bring judgment and institutional knowledge

And they’ll recognize that “managing AI” is a human skill, not a technical one.
4. Build decision frameworks, not chaos.
You can’t just point AI at your entire operation and hope for the best. You need strategic rubrics for:
- Which tasks should be automated (based on repeatability, risk, edge case frequency, institutional knowledge dependency)
- Which people should be reskilled versus transitioned
- Which workflows get prioritized
- Who owns what in the governance model
I’m working on a detailed decision matrix for this that I’ll publish soon. But the principle is simple: treat AI implementation like a strategic initiative, not a tactical hack.
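To make the principle concrete, the rubric above can be sketched as a simple weighted score. Everything here—the weights, the 0–5 scale, the thresholds, the example tasks—is a hypothetical illustration I'm inventing for this sketch, not the forthcoming decision matrix:

```python
# Hypothetical automation-decision rubric. Each task is rated 0-5 on the
# four criteria named above by the people who actually do the work.
# Weights and thresholds are illustrative assumptions, not a standard.

def automation_score(repeatability, risk, edge_case_freq, knowledge_dependency):
    """Higher score = better automation candidate.

    repeatability: how identical each run is (0 = bespoke, 5 = identical)
    risk: cost of a silent failure (0 = trivial, 5 = legal/brand damage)
    edge_case_freq: how often exceptions appear (0 = never, 5 = constantly)
    knowledge_dependency: how much tacit institutional knowledge it needs
    """
    # Repeatability argues for automation; the other three argue against.
    return (2.0 * repeatability
            - 1.5 * risk
            - 1.0 * edge_case_freq
            - 1.0 * knowledge_dependency)

def recommend(task_name, **ratings):
    score = automation_score(**ratings)
    if score >= 5:
        verdict = "automate, with spot-check QA"
    elif score >= 0:
        verdict = "automate partially, human reviews every output"
    else:
        verdict = "keep human-led; document it first"
    return f"{task_name}: {verdict} (score {score:.1f})"

# Example: routine invoice entry vs. a sensitive customer escalation
print(recommend("invoice entry", repeatability=5, risk=1,
                edge_case_freq=1, knowledge_dependency=1))
print(recommend("escalation email", repeatability=2, risk=5,
                edge_case_freq=4, knowledge_dependency=4))
```

The point isn't the math. It's that writing the criteria down forces the "should we?" conversation to happen before deployment instead of after the lawsuit.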
A Note on Pattern Recognition, Neurodivergence, and Framework Building
I’ve mentioned before how my team and my company have grown up around my ADHD and the way my brain is wired—it’s actually what led to our anti-agency positioning. We built systems that worked with how I think, not against it. Frameworks that externalize memory. Documentation that compensates for executive function challenges. Processes that turn chaos into repeatable patterns.
The powerful thing about spending years building accommodations for neurodivergent thinking is that the pattern recognition muscle gets stronger and stronger. You start seeing the underlying structure beneath surface-level tasks. You learn to ask: “What’s the actual process here?” “Where are the decision points?” “What part of this is repeatable and what part requires human judgment?”
Eventually, you can create frameworks before you even start a project—not after struggling through it. You can anticipate where things will break. You can build guardrails proactively instead of reactively.
And here’s what I think nobody’s talking about yet: This skillset is going to be essential for AI governance.
Because managing AI systems requires exactly this kind of thinking:
- Breaking complex tasks into explicit steps (because the AI needs instructions, not intuition)
- Externalizing institutional knowledge (because if it only lives in people’s heads, the AI can’t access it)
- Building frameworks that account for edge cases and exceptions (because AI fails silently when it encounters something unexpected)
- Documenting what “good” looks like (because AI can’t infer quality standards—you have to define them)
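"Documenting what good looks like" can be as literal as a written, checkable standard. A minimal sketch, with entirely made-up criteria for an AI-drafted customer email (the specific thresholds and phrases are illustrative assumptions, not anyone's real brand guide):

```python
# Hypothetical quality standard for an AI-drafted customer email,
# written down explicitly so a human reviewer (or a checker script)
# can apply it consistently. All criteria here are made up.

QUALITY_STANDARD = {
    "max_words": 150,                    # keep it short
    "required_phrases": ["thank you"],   # brand-voice must-haves
    "banned_phrases": ["as an AI", "I apologize for any inconvenience"],
}

def check_draft(text, standard=QUALITY_STANDARD):
    """Return a list of failures; an empty list means the draft passes."""
    failures = []
    lowered = text.lower()
    if len(text.split()) > standard["max_words"]:
        failures.append("too long")
    for phrase in standard["required_phrases"]:
        if phrase not in lowered:
            failures.append(f"missing required phrase: {phrase!r}")
    for phrase in standard["banned_phrases"]:
        if phrase.lower() in lowered:
            failures.append(f"contains banned phrase: {phrase!r}")
    return failures

draft = "Thank you for reaching out. As an AI, I can confirm your refund."
print(check_draft(draft))  # flags the banned phrase
```

The AI can't infer that standard. Someone who thinks in explicit criteria has to define it first.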
This is neurodivergent thinking as a strategic advantage.
And I believe this is going to matter more and more over the next decade as we learn what AI can and cannot—and should not—do.
Because younger generations (Gen Z, millennials, even younger Gen X) are far more aware of neurodivergence than we were growing up. They understand:
- Different brains process information differently
- Systems need to be designed for how humans actually think, not how we wish they’d think
- Accommodations aren’t special treatment—they’re good design
- Making things explicit and documented helps everyone, not just people who “need” it
That awareness is going to translate directly into better AI implementation.
Because the people who’ve spent their lives building workarounds, creating external systems, and making implicit knowledge explicit? They’re the ones who’ll be best at training AI agents.
The people who naturally think in frameworks, who document everything because they can’t rely on memory alone, who break tasks down into granular steps because that’s how their brain works? They’re going to excel at AI governance.
And companies that recognize this—that hire for process-oriented thinking, documentation skills, and pattern recognition rather than “AI expertise”—are going to build fundamentally better human-AI systems.
So when I talk about entry-level workers and experienced workers both being suited for AI-hybrid roles, I’m also talking about neurodivergent workers who’ve been building these exact skills out of necessity their entire lives.
The accommodations we built for ourselves? They’re the blueprint for how to work with AI.
And that’s not a coincidence.
5. Keep humans in the loop at the moments that matter most.
High-risk decisions. Legal implications. Brand voice. Customer relationships. Ethical judgment calls. Edge cases that require context the AI doesn’t have.
These aren’t “nice to have” human touchpoints. They’re the foundation of safe, responsible, strategically sound AI deployment.
Companies that eliminate the human oversight layer to save costs will pay for it in lawsuits, reputation damage, and strategic drift they won’t catch until it’s too late.
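One way to make those touchpoints concrete is a routing rule: any AI output that touches a sensitive category is held for human review instead of auto-released. A sketch, with hypothetical category names and a tagging step I'm assuming exists upstream:

```python
# Hypothetical human-in-the-loop gate. An AI output only auto-releases
# when it touches none of the sensitive categories listed above.
# Category names and the upstream tagging step are assumptions.

REQUIRES_HUMAN = {
    "high_risk", "legal", "brand_voice",
    "customer_relationship", "ethics", "unseen_edge_case",
}

def route(output_text, tags):
    """Route an AI output: auto-release, or queue for a human reviewer."""
    sensitive = REQUIRES_HUMAN & set(tags)
    if sensitive:
        return ("human_review", sorted(sensitive))
    return ("auto_release", [])

# A routine status update ships; anything legal waits for a person.
print(route("Your order shipped.", tags={"routine"}))
print(route("Re: contract clause 4", tags={"legal", "customer_relationship"}))
```

Notice the default: the system has to prove an output is routine before it skips the human, not the other way around.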
A Different Path Forward
Here’s what I know:
The companies winning the AI era won’t be the ones that eliminate humans fastest. They’ll be the ones that build the most sophisticated human-AI collaboration systems.
The workers who thrive won’t be the ones who compete with AI on speed or output volume. They’ll be the ones who manage it, govern it, and know when to override it.
The organizations that survive won’t be the ones with the most automation. They’ll be the ones with the most resilient hybrid intelligence—human judgment married to machine capability, each doing what it does best.
This isn’t theoretical. The market is already moving this direction. Job listings prove it. Revenue numbers prove it. Operational case studies prove it.
The only question is whether your company will lead this shift or get left behind by it.
I’ve documented the crisis. Now I’m documenting the solutions.
What you do with them is up to you.
In upcoming pieces, I’ll break down the specific AI-hybrid job roles companies need to build, show why experienced workers are ideal for these positions, and provide the decision framework for strategic implementation. If you’re a DEI practitioner who lost your job in the rollbacks, I’ll also show you how those exact skills make you perfect for AI governance roles—and how to position yourself accordingly.
For companies that want to start implementing these models now, I’ve already published the AI Manager framework, the strategic case for human-AI teams, and a complete training document showing how to operationalize this approach.
