The Race Between Consciousness and Technology

April 26, 2026

One of my clients started measuring the productivity gap between engineers who are fully using AI tools and the ones who aren't.

The gap was 30x.

Not 30%. Thirty times.

He's not the only one. A number of my clients are laying off engineers despite their businesses being as strong as ever and growing.

For founders right now, it's a bit of a tale of two cities.

If you're building AI itself, a model or an AI-native application for some specific vertical like legal services, there's investor excitement and a sense of building the future.

If you've been building for three to ten years and AI isn't what you're building, the picture looks different. The treadmill sped up.

Suddenly you're expected to achieve the same result with fewer people and less cost. Fundraising is harder, even for companies hitting their numbers. Some are raising down rounds despite growing.

Underneath all of it, there's a current of fear and doom I keep noticing. I don't want to say everyone's experiencing it, but it's there, and it's easy to get swept up in.

The ground feels a little shakier. Things that seemed set and safe and established don't seem to be anymore.

The Problem of Speed

I'm no expert in AI alignment, but from what I'm reading and hearing, one of the biggest issues is simply speed: how fast these models are evolving, especially as the models increasingly improve themselves.

The Prisoner's Dilemma is the classic game-theory problem where two parties would both be better off cooperating, but each can't trust the other to actually do it, so both defect. That's the dynamic with AI. Even if everyone agrees we ought to slow down, no party can fully trust the others will, so the rational move for each lab, each country, is to keep pace. True across OpenAI, Anthropic, and Google. True across the U.S. and China.
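The logic can be made concrete with numbers. Here's a minimal sketch using the classic textbook payoffs (the specific values are illustrative, not from the essay; "cooperate" stands in for slowing down, "defect" for racing ahead):

```python
# Classic Prisoner's Dilemma payoffs (illustrative textbook values).
# Each entry maps (my move, their move) -> (my payoff, their payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual restraint: best combined outcome
    ("cooperate", "defect"):    (0, 5),  # I slow down, they race ahead
    ("defect",    "cooperate"): (5, 0),  # I race ahead, they slow down
    ("defect",    "defect"):    (1, 1),  # both race: worse for everyone
}

def best_response(their_move):
    """Return the move that maximizes my payoff, given their move."""
    return max(["cooperate", "defect"],
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Whatever the other party does, defecting pays more for me individually...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (1, 1) leaves both sides worse off
# than mutual cooperation (3, 3) would have.
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])
```

That's the whole trap in miniature: each party's individually rational move leads to a collectively worse outcome.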

We've solved versions of this problem before. Tristan Harris and the Center for Humane Technology pose the comparison directly in their talk "The AI Dilemma": how did we get to nuclear nonproliferation? Same dynamic. Some of the scientists who built those weapons believed they'd brought forth the end of humanity, and so far we haven't destroyed ourselves.

But nukes had two things working in our favor that AI doesn't. Mutually assured destruction was visible and immediate; you could see the cost. And the materials themselves, uranium and plutonium, were physically scarce and hard to work with. AI has neither constraint.

It might take a shock, something AI does that's scary enough, hopefully not catastrophic, to wake us up to the fact that we're in this together. That kind of collective recognition is probably what's needed.

For the first time in history, we've created something more intelligent than we are. The question many founders are sitting with, whether they name it or not, is whether the future we're building toward is one we'd actually want to live in.

Leading Your Team Through the Change

The pressure isn't only macro. It's showing up in the most human, immediate part of running a company: the people on your team.

The temptation is to make a hero of the engineer who doesn't want to adopt AI, because the response is real and human. "Screw that. I'm not getting on board with this thing that's taking my job away." If you've spent the last twenty years honing your ability to code, of course that's the response.

Empathy is appropriate. They didn't choose this. They're frustrated and scared. But it may genuinely be harder for them to find another job if they don't adapt.

And you're the “mayor of the town.” Your responsibility is to the good of the whole, which sometimes means you can't optimize for the individuals who have their own wants and fears.

As an empathic human, that's hard. I'm certainly a humanist, and there's a callousness to "well, capitalism, it is what it is" that I'm not a fan of. But you need to be grounded in the reality that if you don't adapt, your company probably won't make it, and then everyone loses their job.

So you hold both: empathy and being grounded in reality.

There's a version of this for the engineers themselves too. Resisting AI is a form of resisting what is. It's like saying "I wish today were Monday" when it's Tuesday. You're going to suffer.

This is What We've Been Training For

What I keep telling my clients is that this is what we've been training for.

Now more than ever, the inner tools of learning to stay present amidst fear and uncertainty are what allow you to remain curious, creative, and inspired.

The trap, especially with a force this big, is responding from below the line: reacting from fear, looping in doom, freezing or thrashing. Activism from below the line says, this is bad, this is a problem, and stops there.

Activism from above the line is different. Get present to what's actually happening. Welcome the fear instead of bracing against it. Then, and this is the part most people skip, imagine the most beautiful future you can, and bring your creativity to building that, instead of just defending against the one you're afraid of. Beliefs tend to be self-fulfilling. If too many of us stay in doom, no one brings the creativity required to build the better version.

The shift is from victim to creator. The guiding question stops being what's going to happen to me and becomes: what's mine to do here, and how can I contribute to the future I want to live in?

There's also a reframe worth offering for the people who do embrace these tools.

I had a client tell me once that the reason he went into software development in the first place was that it felt like magic. You craft these spells, and out comes this working piece of technology. That feeling is what kept him in the work for years.

The reframe I'd offer is this: AI doesn't take the magic away. It makes you a more powerful magician. Less attached to the specifics of how the spell gets cast, more focused on what you can now create.

A Question Worth Sitting With

The Prisoner's Dilemma only traps us if we see the other party as separate from us. Inside our own families, we create win-for-all outcomes all the time, because we actually care. The dilemma gets a lot easier when the other prisoner is your sibling.

So here's the question worth sitting with:

If 8 billion humans were operating from an awakened state, where we genuinely felt interconnected with each other, what would we be doing about AI? Would we even be developing it this way? Would we not?

What would the people building the models do differently if they were operating from that place?

I don't necessarily have the answer. But the more we can sit with this question, the more we can see how so much of what's hard about this change comes from how strongly we believe we're separate from each other.

My intuition is that the more humans who can feel that interconnection, the better chance we have to coordinate in a way that benefits all, especially in the face of the most powerful technology we've ever created.

Until then, the dance is whether we can be grounded in the reality of what's occurring without letting fear overtake us, so that we can take agency and create the future we want.

With love,

Dave Kashen