📢 SPECIAL UNLOCK: The Drift is normally for paid subscribers only.
I'm sharing this premium Drift free to show you what The Upgrade Loop delivers each week.
How do we design systems that evolve with humans and technology?
Signal
Control is overrated.
And we're finally starting to see it.
Decision-making is shifting.
Not because it's broken,
but because it's evolving.
It's no longer about holding the wheel.
It's about sharing it,
between humans, machines, and markets.
That's not sci-fi.
It's distributed cognition.
And it's already happening.
Right now, two big shifts are changing how decisions get made, and who (or what) gets to make them:
1. Smart systems, like self-driving cars and drones, where people and machines share the job of making decisions.
2. Brain tech in the workplace, where new tools let people control computers with their thoughts, no keyboard needed.
These aren't feature upgrades.
They're system overhauls.
This isn't just faster decision-making.
It's smarter.
It's more adaptive.
And it's changing how we think about resources, agency, and trust.
The Old Control Model
For a long time, we wanted:
One source of authority.
One ladder.
Top-down decisions.
And it worked, when the world moved slowly.
That model thrived in stability.
In environments where predictability was an advantage.
But predictability has left the group chat.
The environment is now dynamic, volatile, complex.
And control, the old kind, doesn't stretch that far.
The problem isn't that control failed.
It succeeded… but that world no longer exists.
So the real opportunity is this:
What does control look like when cognition is no longer centralised?
Distributed Cognition: The Shift That Changes Everything
Cognition doesnât just live in brains.
It lives in systems.
In people, tools, teams, tech.
This is distributed cognition,
a concept grounded in decades of cognitive science research.
Take autonomous vehicles.
A self-driving car isn't just a car.
It's a system: radar, GPS, real-time maps, predictive models, and human intervention on standby.
Humans and machines co-produce cognition.
They manage the load â together.
This isn't automation replacing humans.
It's automation extending cognition.
Machines do what they do best: speed, scale, data crunching.
Humans bring context, judgment, and moral reasoning.
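That division of labour can be sketched as a simple arbitration rule. This is a minimal illustration, not how any real autonomous vehicle works: the `arbitrate` function, the confidence score, and the threshold are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    action: str
    decided_by: str  # "machine" or "human"

def arbitrate(machine_action: str, machine_confidence: float,
              human_action: Optional[str] = None,
              threshold: float = 0.8) -> Decision:
    """Route a decision to whichever agent is best placed to make it."""
    if human_action is not None:
        return Decision(human_action, "human")       # explicit override always wins
    if machine_confidence >= threshold:
        return Decision(machine_action, "machine")   # machine handles the routine load
    return Decision("request_human_input", "human")  # low confidence: hand off

print(arbitrate("stay_in_lane", 0.95))                            # machine decides
print(arbitrate("stay_in_lane", 0.40))                            # handed to the human
print(arbitrate("stay_in_lane", 0.95, human_action="pull_over"))  # human overrides
```

The design choice worth noticing: the human override always wins, and low machine confidence hands control back rather than guessing. That is what keeps shared accountability legible.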
But this comes with a shift in responsibility.
Because when decisions are shared,
so is accountability.
So is trust.
But we are still the ones accountable.
Because automation doesn't replace humans.
It extends us.
Brain as the Interface
Now take this one step further.
What if your brain could interface directly with your tools?
Non-invasive brain-computer interfaces (BCIs) are already here,
and they're changing how we interact with systems.
BCIs can already enable hands-free cursor control using EEG alone, no implants required.
This isn't a gimmick.
It's a new input layer.
One that bypasses typing, clicking, even speaking.
Imagine systems that respond to neural intent.
No mouse. No mic. Just thought.
Suddenly, cognition becomes interface.
Decision-making becomes biological.
Speed becomes neurological.
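As a toy illustration of "thought as input", here is what the decoding step might look like in miniature. Real BCIs use trained classifiers over multi-channel EEG; the synthetic signal, the power measure, and the threshold below are illustrative assumptions only.

```python
import math

def band_power(samples):
    """Mean squared amplitude: a crude stand-in for EEG band power."""
    return sum(s * s for s in samples) / len(samples)

def decode_intent(samples, move_threshold=0.3):
    """Translate signal power into a cursor command."""
    return "move_right" if band_power(samples) > move_threshold else "rest"

# A strong ~10 Hz oscillation stands in for detected motor imagery;
# a faint one stands in for background activity.
strong = [math.sin(2 * math.pi * 10 * t / 250) for t in range(250)]
weak = [0.1 * s for s in strong]

print(decode_intent(strong))  # strong signal reads as intent: move_right
print(decode_intent(weak))    # weak signal reads as rest
```

The point is the architecture, not the signal processing: a continuous biological signal becomes a discrete command, which is exactly what makes it an input layer rather than a sensor.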
But more control doesn't just mean more power.
It means more vulnerability.
Because with great neural input comes great ethical responsibility.
We need oversight.
We need governance.
We need to ask:
Are we building tech that lets people think better,
or just think faster?
And what doors are we opening up along the way?
So What Are These Signals Really Telling Us?
These signals are telling us that the future isn't top-down.
It's distributed.
Not just in where decisions get made,
but in how decisions happen, between people and systems.
Autonomy won't mean isolation.
It'll mean shared decision-making.
Between AI, sensors, humans, protocols.
Each bringing what they're best at.
And that changes how we respond under pressure.
Because the next generation of crisis management?
It won't be command and control.
It'll be sense and respond.
It'll be about empowering teams at the edge of the system,
the ones closest to the signal.
Because the research tells us very clearly that
people make better decisions under pressure when they're close to what's actually happening.
That's not theory.
That's cognitive advantage.
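A sense-and-respond loop can be sketched in a few lines. The node names, readings, and `local_limit` threshold are hypothetical; the point is the shape: act locally first, escalate only what exceeds local authority.

```python
def edge_respond(node, reading, local_limit=0.7):
    """Decide at the edge when possible; escalate only beyond local scope."""
    if reading <= local_limit:
        return (node, "handled_locally")  # closest to the signal, fastest response
    return (node, "escalated")           # beyond local authority: pass upward

# Most events never travel up the chain; only the outlier does.
events = [("sensor_a", 0.2), ("sensor_b", 0.9), ("sensor_c", 0.5)]
print([edge_respond(node, reading) for node, reading in events])
```

Contrast that with command and control, where every reading would queue at the centre before anyone could act.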
Designing for Cognitive Systems
So now the question becomes real.
How do we design distributed cognitive systems
that let humans and machines make decisions together,
in real time, without losing coherence?
How do we build environments where decentralised decisions aren't risky,
but necessary?
The answer starts with letting go of control as a singular function.
And redesigning it as a shared capability.
One that flexes across layers, roles, and systems.
This means building systems that hold up under pressure,
not by locking things down,
but by helping people think clearly when it counts.
It means using tech that extends what humans do best,
without stealing the steering wheel.
And it means leadership that doesn't override thinking; it amplifies it.
Because the winners in complexity won't be the ones who know the most.
They'll be the ones who notice fastest, decide sharpest, and adapt cleanest as a system.
How do we think with machines, not around them?
How do we design for true adaptability, not just authority in disguise?
How do we keep cognition human, even as it becomes distributed?
This isnât about losing control.
It's about redefining what control means in systems that think with us.
We don't have all the answers yet.
But we're asking better questions.
Enjoy this drop?
📢 This drop is usually part of the premium tier.
Join bold thinkers upgrading how we think each week,
through science, sound, and smarter systems.
Subscribe now to get the next upgrade loop.


