Loss is my biggest problem, and maybe yours too

Maren Hogan

Maren Hogan is CEO of Red Branch and general Bad@$$

Loss is my biggest problem, and maybe yours too…even if you don't have ADHD (I know, I know, everyone has ADHD now and they make it their whole personality; I am too old to care what you think). Loss of relationships. Loss of ideas, motivation, the muse. Loss of loved ones, jobs, creative momentum. The only time we celebrate loss is when we're shedding weight or bad habits. Otherwise, it's the thread of anxiety running through everything.

The Daily Loss Tax

I lose things constantly FR. Physical things, yes—keys, earbuds, that coffee cup I just had in my hand thirty seconds ago. But also intangible things. Thoughts mid-sentence. The name of that supplement I need to reorder (which company? which email account did I use?). Time itself, vanishing into hyperfocus on the wrong task while the trash piles up because I forgot to pay the bill. ACTUALLY, I did pay it, by setting up autopay and going paperless to manage my ADHD, except the card expired and now my actual trash can is gone. And I can't even use recycling, because my garbage disposal is broken and it took me three months to email my landlady about it, so I can't wash the lids off properly to recycle them and I have to throw away PLASTIC in CALIFORNIA OMG. Don't worry, my nice neighbors are letting me use their cans (at least they haven't stopped me yet?).

They call this the ADHD tax—the accumulated cost of lost items, missed deadlines, late fees, and the mental load of constantly compensating for a brain that doesn’t naturally align with filing systems, linear time, or object permanence. One study published in JAMA found that adults with ADHD lose an average of 50 minutes per day just searching for misplaced items. That’s over 300 hours a year. Nearly two full weeks of your life, gone. That SUUUUUUCKS.

But here’s what hit me recently: the personal experience of loss (where is that GD supplement email?) connects directly to the existential fear narrative around AI and what we’ll lose if it develops the way tech oligarch billionaire assholes envision it.

The ADHD experience reveals something universal: we’re all paying a cognitive overhead tax, losing time to busywork instead of doing valuable work.

The Loss Narrative Is Dominating the AI Conversation—And Missing the Point

Nearly every AI anxiety piece focuses on loss. Loss of jobs (Goldman Sachs estimates 300 million jobs affected globally). Loss of creativity, human connection, agency, meaning. Loss of the ability to distinguish real from synthetic. Even the people building this technology frame it in terms of what might be destroyed.

But my daily experience with a neurodivergent brain suggests something different: some forms of “loss”—specifically, offloading cognitive load onto systems that can handle it—might actually be gaining something. FREEDOM!

There's a concept in cognitive psychology called cognitive load theory…the idea that our working memory has limited capacity (roughly 4-7 items for neurotypical brains, often fewer for ADHD brains). Every piece of information we're trying to hold simultaneously (the supplement name, the meeting time, the thing you were about to say, where you put your keys) takes up precious mental bandwidth. When that capacity is maxed out, we lose things. Ideas slip away. We can't access higher-order thinking because we're too busy trying to remember if we paid the electric bill or put those little reg stickers on the ONE car we remembered to register.

Everybody is talking about smooth brain theory, referring to reduced cortical thickness in certain areas (in this case, supposedly from overusing AI), and when they say it, they don't mean "hey, smartypants." But to someone like me, my brain already feels like it lacks grooves to catch and hold information as it flows through. Everything's slippery. Everything can slide away. Or maybe the grooves are just shit and all that stuff is stuck in there, and that's why shower thoughts on Reddit are so popular.

What If the Reframe Isn’t About What We Lose, But What We Stop Wasting Capacity On?

Ben Affleck gave an interview recently where he talked about AI in filmmaking. His point wasn't the usual catastrophizing; it was nuanced (I saw someone saying that every few years people realize that the two guys who won an Oscar on their first foray into moviemaking are actually pretty effing smart). AI can handle certain production tasks (shot lists, scheduling, even some aspects of editing), but it can't do the taste-making. The curation. The "what story matters and why" decisions. It can't make Taxi Driver because it can't be the director saying "this is the thing that matters."

That RIGHT THERE is everything.

The gain from AI isn’t just productivity (though OBVS I’d like those 50 minutes back). It’s potentially getting our cognitive capacity back for the things that actually require human judgment. The strategic thinking. The creative synthesis. The relationship building. The work that is, in fact, my zone of genius when I’m not spending half my day trying to remember where I put my phone.

You don’t need human-level intelligence to search email archives for “magnesium supplement order confirmation.” You don’t need it to track which card is linked to which autopay account. You don’t need it to remember that the trash bill needs updating when your card expires.
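To make that first one concrete: here's a minimal sketch of the "which email account did I use?" search in Python. It assumes you've exported your inboxes to local mbox files (via Google Takeout or similar); the file names and the search phrase are hypothetical placeholders, not a real product.

```python
# Minimal sketch: find the supplement order confirmation, whichever
# inbox it's hiding in. Assumes local mbox exports; names are made up.
import mailbox

def find_order_emails(mbox_path: str, phrase: str) -> list[str]:
    """Return 'date | sender | subject' for every message matching phrase."""
    hits = []
    for msg in mailbox.mbox(mbox_path):
        subject = msg.get("Subject", "") or ""
        body = ""
        # Walk the (possibly multipart) message and collect plain-text parts.
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                payload = part.get_payload(decode=True)
                if payload:
                    body += payload.decode("utf-8", errors="replace")
        if phrase.lower() in (subject + " " + body).lower():
            hits.append(f"{msg.get('Date', '?')} | {msg.get('From', '?')} | {subject}")
    return hits

if __name__ == "__main__":
    # Check both accounts, because... which one DID I use? Exactly.
    for path in ["personal.mbox", "work.mbox"]:
        for hit in find_order_emails(path, "magnesium supplement order"):
            print(f"{path}: {hit}")
```

Twenty-some lines of boring string matching. No human judgment required, which is the whole point.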

You do need it to understand why a client relationship is fraying before the data shows it. To recognize the strategic opportunity in a competitor’s positioning. To know which story will land and which will fall flat. To build frameworks that account for human irrationality and market chaos.

The Oligarch Vision Versus the Actual Possibility

There are two completely different paths here, and the difference matters enormously.

The oligarch vision: Replace humans. Cut costs. Consolidate control. Extract maximum value with minimum labor expense. This is the loss narrative’s legitimate fear—AI deployed purely for wealth concentration, where “efficiency” means disposable workers and “productivity” means shareholder returns divorced from human flourishing.

The other vision: AI handles the things that don’t require human judgment, freeing up cognitive and temporal resources for the work that does. This isn’t about replacing capabilities—it’s about redistributing them. Reallocating mental bandwidth from “where did I put that” to “what problem are we actually solving.”

For neurodivergent brains especially, this could be transformative. We’re often incredibly good at pattern recognition, creative synthesis, strategic thinking, and seeing connections others miss—exactly the skills Affleck identified as irreplaceable. We’re often terrible at the administrative overhead, the detail tracking, the “don’t forget this exists” mental load that bogs down so much of knowledge work.

We've Been Paying the Loss Tax Forever. What If We… Just… Didn't?

I know this isn't a new thought. Since AI emerged into public consciousness, I've been quasi-middle-of-the-road: excited about possibilities, terrified of misuse. And as I've learned more about how my brain actually works, I've realized something: neurodivergent thinking lends itself unusually well to this technology. Prompting AI. Probing its weaknesses. Understanding its logic (or lack thereof) even when we don't fully understand the underlying mechanisms. (Also, we're really good at masking, so when AI blows air up our skirts, we inherently DON'T believe it.)

Because here’s what virtually everyone with ADHD can relate to: losing something. Regularly. Multiple times an hour.

The coffee cup. The thought you were about to post. The end of your sentence. Your keys. An earbud. A sale to a competitor because you forgot to follow up. Time itself because you got caught in hyperfocus on the wrong task (like, say, dictating an essay when you should be taking out the trash).

Losing things matters viscerally to ADHD people. We’ve been paying the tax our entire lives—in late fees, replacement costs, missed opportunities, and the exhausting compensation strategies we build just to appear functional.

What if we're actually uniquely positioned to imagine what it would mean not to? To reframe the conversation from "what will we lose to AI" to "what have we been losing to cognitive overhead that we could finally stop losing"?

There are real risks in how this technology could be deployed, who controls it, what gets optimized and for whom. But maybe the people most familiar with loss (as a lifestyle), with its daily weight and cost…are also the ones who can most clearly see what there is to gain.

The Irony of “Losing” Our Humanity to Machines

There’s a particular irony in the way we talk about AI threatening human creativity and judgment when most knowledge workers spend the majority of their time on tasks that require neither.

A 2018 study by Asana found that knowledge workers spend only 27% of their time on skilled work—the work they were actually hired to do. (I know that stat is hella old, but stay with me.) The rest? Searching for information (60% of time toggling between apps), attending status meetings, duplicating work that's already been done elsewhere, and managing the coordination costs of simply existing in an organization.

For context: that means if you work an 8-hour day, you get about 2 hours and 10 minutes to do your actual job. The rest is overhead.

The AI panic narrative treats this overhead as somehow sacred—as if cc’ing six people on an email thread or reformatting a spreadsheet for the third time this week is the essence of human contribution. As if “work” and “value creation” are synonymous when they demonstrably are not.

I think about this every time I’m deep in strategic work for a client—let’s say developing competitive positioning for an HR tech company—and I have to stop because I can’t find the analyst report I downloaded last week. Or because I need to track down which conversation contained that crucial insight about their competitor’s pricing model. Or because I’ve spent 40 minutes trying to remember if their fiscal year starts in January or July and whether that matters for this particular deliverable timeline.

NONE OF THAT IS WORK!

All of it prevents the work.

What ADHD Reveals About “Normal” Work

I get that folks like me aren’t the only ones dealing with this stuff. Here’s what neurotypical people don’t always realize: the ADHD experience is just a more visible, more acute version of what everyone deals with. We lose things more often (I am talking multiple times per hour and sometimes without even LEAVING MY DESK). We forget more dramatically. We experience the cognitive load more intensely. But the structure of the problem is universal.

Everyone’s working memory is limited. Everyone experiences decision fatigue. Everyone has a finite amount of mental energy before they start making worse choices, missing details, losing the thread.

The difference is that ADHD brains hit those limits faster and harder. Which means we’ve had to develop workarounds, compensation strategies, and a very clear understanding of what’s overhead versus what’s actual creative/strategic work. We know intimately the difference between “thinking” and “trying to remember what we were thinking about.”

And increasingly, I think that lived experience of cognitive limitations or differences is precisely what makes neurodivergent people unusually suited to understanding how to work with AI rather than being replaced by it.

Because we’ve been doing human-AI collaboration our whole lives. We just called it “coping mechanisms.”

The Coping Mechanisms Were Always Outsourcing

Every ADHD person has a system. Or twelve systems. Or the graveyard of forty-seven abandoned systems that worked for three weeks until they didn't. Just this month alone, I have downloaded Notion, Obsidian, and Craft in an effort to stay organized (of course, January as a month is horrible for my brain; I buy all the new notebooks and planners, every dang year…).

Multiple calendars. Alarms for everything. Notes apps. Reminder apps. Post-its. Timers. The “launching pad” by the door where keys must go (except when they don’t). The friend who texts you when you need to leave for the airport. The partner who faithfully pings your phone when you lose it 10 times a day, only for you to tell them it’s on silent 😩.

Every single one of these is a kind of external cognitive prosthetic. An offloading of mental load onto something—or someone—else.

We’ve been told this is weakness. Lack of discipline. Failure to adult properly.

But I’m going with adaptive intelligence. It’s recognizing that your wetware has specific limitations and building systems to compensate. The problem has never been the concept of outsourcing cognitive overhead—it’s that the available tools have been inadequate, and the human “tools” (the partner, the friend, the assistant) come with their own costs and limitations.

What if AI is just… a better version of the alarm that reminds you to pay the trash bill before they repossess your can?

The Question Isn’t Whether to Offload. It’s What to Offload, and to Whom.

This is where the oligarch vision and the human-centered vision totally diverge.

The oligarch vision treats humans as expensive, inefficient machines. The goal is to automate everything possible and extract maximum value from whatever human remainder is left. In this model, you offload the valuable work—the strategy, the creativity, the judgment—and keep humans around only for the menial tasks that aren't worth automating yet. I feel like I am in the Upside Down.

It’s the worst possible inversion.

The human-centered vision is the opposite: offload the overhead, the administrative burden, the cognitive busywork that prevents people from doing valuable work. Keep humans focused on judgment, relationship, strategy, creativity, taste, ethics—the things that require human intelligence.

The World Economic Forum’s 2023 Future of Jobs Report predicts that analytical thinking, creative thinking, and complex problem-solving will be the fastest-growing skills in demand. Not despite AI, but because of it. Because when routine cognitive tasks are handled by systems, what remains is the work that requires human discernment.

But—and this is crucial—that only happens if we’re intentional about what we automate and why.

What My Trash Can Has to Do With The Future of Work

Let me come back to my trash can, because it’s actually a perfect microcosm of this whole thing.

I lost my trash can because of a cascading failure of cognitive overhead:

  1. Decided to go paperless to reduce mail I’d lose
  2. Set up autopay to reduce bills I’d forget
  3. Lost my credit card (it's okay, I have a system: I lose about a card a month)
  4. New card = new number = old card no work, payment failed, service lapsed
  5. Didn’t notice until they TOOK MY TRASH CAN WHICH IS EFFED UP IMHO (executive dysfunction)
  6. Took more weeks to address (executive dysfunction compounded)
  7. Meanwhile, garbage disposal breaks
  8. Can’t use recycling without rinsing (inexplicably they LEFT the recycle bin)
  9. Can’t rinse without disposal
  10. Entire waste management system collapses

This is what ADHD people call a “doom pile” or a “doom loop”—where one small failure cascades into systemic breakdown because each step requires executive function you don’t have available.

Now imagine I had an AI assistant that:

  • Tracked card expiration dates across all autopay accounts
  • Flagged upcoming renewals
  • Noticed failed payment patterns (AKA when I lose another credit card)
  • Sent the “hey, your trash service is about to lapse” message before they took the can
  • Reminded me weekly about the broken disposal until I contacted the landlady
  • Maybe even drafted the email to the landlady so I just had to hit send

None of that requires human judgment. It’s all pattern recognition, deadline tracking, and gentle nagging—exactly what AI is already decent at.
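For the curious, here's a minimal sketch of that nag-bot in Python. The account list, the dates, and the notify() hookup are all hypothetical placeholders; a real version would pull from your actual accounts, but the core logic really is this boring:

```python
# Minimal nag-bot sketch, not a real product. Accounts, dates, and the
# notify() channel are made-up placeholders for illustration.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AutopayAccount:
    name: str
    card_last4: str
    card_expires: date     # expiration of the card on file
    last_payment_ok: bool  # did the most recent charge go through?

def notify(message: str) -> None:
    """Stand-in for the email/text/push channel of your choice."""
    print(f"[nag-bot] {message}")

def check_accounts(accounts: list[AutopayAccount], today: date) -> None:
    for acct in accounts:
        # Flag cards expiring within 30 days, BEFORE a payment can fail.
        if acct.card_expires <= today + timedelta(days=30):
            notify(f"Card ending {acct.card_last4} on '{acct.name}' expires "
                   f"{acct.card_expires:%b %Y}. Update it now, keep the can.")
        # A failed charge means service is about to lapse. Escalate.
        if not acct.last_payment_ok:
            notify(f"Payment FAILED on '{acct.name}'. Fix it before they "
                   f"repossess something.")

if __name__ == "__main__":
    accounts = [
        AutopayAccount("Trash service", "4242", date(2025, 8, 31), last_payment_ok=False),
        AutopayAccount("Electric", "4242", date(2027, 1, 31), last_payment_ok=True),
    ]
    check_accounts(accounts, today=date(2025, 7, 20))
```

Deadline tracking and gentle nagging, exactly as promised. No taste-making anywhere in sight.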

And what would it free me up to do? The actual work. The client strategy. The writing. The relationship building. The thinking that creates value.

AI panic focuses on what we’ll lose to automation—but what if the real question is what we’re already losing to cognitive overhead we could finally stop paying?

The ADHD Perspective as a Roadmap

I’m starting to think the ADHD experience might be a preview of what conscious, intentional AI integration could look like for everyone.

Because we’ve already learned:

1. You can’t rely on willpower to overcome systemic cognitive limitations. Neurotypical hustle culture says “just try harder, be more disciplined, care more.” ADHD people know that’s bullshit. You need systems. External scaffolding. Tools that work with your brain, not against it.

2. The goal isn’t to eliminate human involvement—it’s to redirect it. I don’t want to stop parenting my kids because I automated the permission slip signatures. I want to automate the permission slips so I have more bandwidth for actually parenting. The valuable thing isn’t the signature. It’s the conversation. The relationship. The judgment calls that actually require a human.

3. Good tools become invisible. The best ADHD coping mechanisms are the ones you stop thinking about. The alarm that works becomes part of your routine. The system that works disappears into the background. The worst thing you can do is create tools that require constant conscious management—because that’s just more cognitive load.

4. One size fits nobody. Every ADHD person’s brain is different. What works for me won’t work for you. The system needs to be customizable, adaptable, and responsive to individual patterns. Cookie-cutter solutions fail.

5. The point is agency, not automation. I don’t want a system that makes decisions for me. I want a system that surfaces the information I need to make better decisions myself. That reminds me of the context I’m forgetting. That flags patterns I’m missing. That handles the busywork so I can focus on the judgment calls.

These principles scale. They apply to knowledge work generally, not just ADHD brains.

What We’re Actually Afraid Of (And Why We Should Be)

Here’s what I think is really happening with AI anxiety: we’re afraid of losing control. Of becoming dependent on systems we don’t understand, controlled by people whose interests don’t align with ours.

And that fear? Completely valid.

Because the current trajectory of AI development is not toward customizable, user-controlled cognitive assistance. It’s toward centralized, proprietary, black-box systems optimized for engagement (read: addiction) and data extraction (read: surveillance capitalism). It’s toward tools that increase corporate control over workers rather than increasing worker agency.

The fear isn’t that AI will handle our email. The fear is that AI will surveil our email, monitor our productivity, evaluate our worth, and ultimately make us more disposable to systems that see humans as costs to be minimized.

That’s the oligarch vision. And it’s already happening.

Studies show that 60% of employers now use AI-powered monitoring tools. Keystroke tracking. Facial recognition to monitor “engagement” in meetings. Productivity scores. Performance predictions.

This isn't AI as cognitive assistance. This is AI as panopticon. (Guess what that is? A panopticon is a design of institutional building with an inbuilt system of control, originated by Jeremy Bentham in the 18th century. The idea is to allow all the inmates of an institution to be observed by a single guard, without the inmates knowing whether or not they are being watched.

Obviously, it is physically impossible for the single guard to watch them all at the same time, BUT the inmates cannot know when they are being watched, which is meant to motivate them to act as though they are being watched at all times. They are effectively compelled to self-regulate. Sorry, I got sidetracked, but GROSS.)

[Image: Willey Reveley's drawing of Jeremy Bentham's panopticon, via Wikimedia Commons (https://en.wikipedia.org/wiki/Panopticon), CC BY-SA 4.0]

So What Do We Do?

I don’t have a tidy conclusion here because this isn’t a solved problem. But I have some thoughts:

We need to get loud about what kind of AI future we’re building.

Not whether AI, but which AI. Not whether automation, but what we automate and who benefits. The conversation can’t be “AI: yes or no?” It needs to be “AI: for what, controlled by whom, optimized for whose interests?”

We need to demand tools that increase user agency (agency is our newest value at Red Branch Media and it’s crazy because we’re the anti-agency, another post soon on that one), not corporate control. Open source where possible. Transparent about what’s being tracked and why. Customizable to individual needs. Designed to augment human capability, not replace or surveil it.

We need to protect the right to be inefficient. Because “efficiency” is often a euphemism for “extracting maximum value from minimum labor cost.” And y’all KNOW how I feel about that.

Some inefficiency is human.

Some inefficiency is creativity.

Some inefficiency is the time it takes to think, to explore, to fail, to learn.

We need to remember that productivity is not the same as value. The things that make life worth living (relationships, creativity, beauty, meaning, play, dance, music) are often wildly "inefficient." A world optimized purely for productivity would be unbearable.

And maybe most importantly: we need to listen to the people who’ve been managing cognitive overhead their entire lives.

Because ADHD people, disabled people, neurodivergent people, marginalized people generally: we've been the guinea pigs for human-computer collaboration since before it was called that. We kinda arrive at what works and what doesn't a little faster. We've been evaluating the difference between tools that empower and tools that infantilize. We know what it feels like when a system works with you versus on you.

We’ve been paying the loss tax forever.

We know exactly what there is to gain—and what we can’t afford to lose.

Maren Hogan