Twenty-plus products, six categories, one stubborn gap
If you’ve been following along, you already know the shape of the problem. The Stress Wall is real. Cognitive load scales nonlinearly with velocity. The lateral line, the bat call, and the bee in the tunnel all point toward an architecture that nobody has built yet.
So let’s stop talking about the architecture and start talking about what’s actually on the market — and why none of it gets us where we need to go.
The companion white paper to Breaking the Cognitive Barrier surveys more than twenty existing products, prototypes, and platforms across six categories. The framework is the same one we’ve been using all along: five questions derived directly from the cognitive load model.
- Does it work at running speed?
- Does it actually reduce mental load, or just relocate it?
- Does it preserve the auditory channel?
- Does it work without painted lines, beacons, or pre-mapped routes?
- Can you use it alone?
Run any product through those five filters and the picture clarifies fast.
The Six Categories, Honestly Assessed
GPS apps. BlindSquare, RunGo, Seeing Assistant Move, and the rest. Wonderful for walking. Useless for running. Consumer GPS is accurate to 3–10 meters; bike paths are 3 meters wide. The math doesn’t math. Worse, voice navigation occupies the auditory channel that’s already doing critical safety work — traffic, cyclists, the dog you can’t see. These systems push τ in exactly the wrong direction.
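To make that mismatch concrete, here is a back-of-the-envelope sketch. If we treat consumer GPS error as a zero-mean Gaussian lateral offset (an illustrative simplification, not a claim about any vendor's actual error model), we can estimate how often the reported position even lands inside a 3-meter-wide path:

```python
from math import erf, sqrt

def p_inside_corridor(sigma_m: float, half_width_m: float = 1.5) -> float:
    """Probability that a zero-mean Gaussian lateral error with the given
    standard deviation places the reported position inside the corridor.
    P(|X| <= w) for X ~ N(0, sigma^2) is erf(w / (sigma * sqrt(2)))."""
    return erf(half_width_m / (sigma_m * sqrt(2)))

# Consumer GPS: roughly 3-10 m accuracy; a 3 m bike path gives 1.5 m of
# half-width to play with. Even at the optimistic end, the reported fix
# lands inside the path well under half the time.
for sigma in (3.0, 10.0):
    print(f"sigma = {sigma:4.1f} m -> P(inside path) = {p_inside_corridor(sigma):.2f}")
```

The exact probabilities depend on the real error distribution, but the shape of the conclusion does not: when the error bar is wider than the path, absolute position alone cannot keep a runner centered.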
Smart canes and proximity bands. The WeWALK is a beautifully engineered walking aid. It is not a running technology, and no amount of GPT integration is going to change that — the cane sweep biomechanics fight the running gait. The Sunu Band is hands-free and theoretically wearable at speed, but its narrow ultrasonic beam needs you to actively aim your wrist, which adds attentional demand instead of removing it. Also discontinued, which tells its own story.
Computer vision systems. This is where the real money and ambition have gone. Three names matter: Project Guideline, Biped AI’s NOA, and the University of Indonesia’s RunSight.
Project Guideline is the only one that has actually moved a blind runner around a 5K loop and a half marathon at running speed. It’s also the only one that requires you to paint a line on the ground first. Every new path is a deployment project. Leaves, shadows, and other runners can occlude the line. The model has no fallback. Open-sourced, but the development momentum looks thin.
NOA is a remarkable piece of hardware — 170-degree IR cameras, on-device AI, 250+ beta testers across 20 countries. And explicitly walking-only. The product literally narrates the world to you (“In front of you is a busy coffee shop with three people in line”), which is the philosophical opposite of what a runner needs. Verbal scene description isn’t just unhelpful at speed; it’s actively hostile to cognitive sustainability.
RunSight is the most directly running-focused of the three, but it is still a prototype, tested on a track with a guide present. Course corrections arrive as voice instructions, which brings back the same auditory channel problem.
Haptic wearables. Architecturally, this is the most promising category, and we’ve said so before. The Wayband’s virtual corridor is the closest existing implementation of the lateral line idea — silent when centered, gradient feedback when drifting. The feelSpace belt’s 16-motor array is the closest thing to distributed flow sensing on the market. Both work. Both inherit the GPS problem underneath. The Wayband’s mile-16 dropout at the NYC Marathon — when Simon Wheatcroft had to finish with human help — is the GPS scalability problem in one sentence.
Also, none of them implement adaptive cadence. Constant-frequency feedback on a straightaway is the same as constant-frequency feedback through a curve. That’s not echolocation; that’s a metronome.
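What adaptive cadence could look like, as a minimal sketch: a feedback interval that shrinks as speed and path curvature rise, so a curve taken at pace is sampled more densely than a straightaway at a jog. The function name, constants, and scaling below are all illustrative assumptions, not parameters from any shipping product:

```python
def feedback_interval_s(speed_mps: float,
                        curvature_per_m: float,
                        base_interval_s: float = 1.0,
                        min_interval_s: float = 0.2) -> float:
    """Interval between haptic pulses. The product of speed and path
    curvature is the rate of required heading change (rad/s): when more
    is changing per second, pulse more often. Constants are illustrative."""
    change_rate = speed_mps * curvature_per_m
    interval = base_interval_s / (1.0 + 5.0 * change_rate)
    # Floor the interval so feedback never becomes a continuous buzz.
    return max(min_interval_s, interval)

# Straightaway at 5:00/km pace vs. a 10 m radius curve at the same pace.
straight = feedback_interval_s(speed_mps=3.3, curvature_per_m=0.0)
curve = feedback_interval_s(speed_mps=3.3, curvature_per_m=0.1)
```

The point is not these particular numbers; it is that the interval is a function of the situation at all. A bat shortens its call interval on final approach for the same reason.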
Remote human assistance. Aira and Be My Eyes are extraordinary services for daily life. They are categorically incompatible with the goal of independent running. A human watching a bouncing camera through a phone connection cannot deliver sub-second corrections at 12 km/h. And philosophically, swapping a co-located guide for a remote agent doesn’t expand autonomy — it relocates dependency.
Precision GPS and robotic guides. Two products to know. Nordic Evolution claims 10 cm accuracy through a multi-constellation receiver and proprietary sensor fusion. Their differential audio model — silence on one side, intensifying tone on the other — is a workable proof-of-concept for gradient correction. It’s also Sweden-network-dependent, requires a smartphone tether for WiFi, and only follows pre-recorded paths. Spontaneous routes need not apply. And the audio cadence is constant.
Glidance’s Glide is fascinating mostly for what its CEO says about it. Amos Miller frames it as cognitive load relief — “you can listen to emails or talk on the phone while moving with Glide” — which is exactly the right way to think about mobility tech. The device itself is a wheeled robot you push by a handle. Unambiguously a walking product, current and future.
The Pattern in One Table
The white paper includes a coverage matrix that’s worth dwelling on. Across all five filter criteria, no single product hits every box. More tellingly, no current system implements any of the three biomimetic principles we’ve spent earlier posts working through.
Distributed gradient sensing? The closest is the feelSpace 16-motor compass, which tells you which way is north — not whether you’re drifting left of the path edge.
Adaptive sampling cadence? Nothing. Every system runs at a fixed feedback rate.
Motion-based centering? Nothing. Every centering system is built on absolute positioning, never on relative motion fields.
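For that third principle, here is a minimal sketch of what motion-based centering means in practice, borrowing the honeybee strategy: steer to balance left and right optic-flow magnitudes rather than track an absolute position. The function name and gain constant are hypothetical:

```python
def centering_command(left_flow: float, right_flow: float,
                      gain: float = 0.5) -> float:
    """Steering command from relative motion, not absolute position.
    Translational optic flow is stronger on the nearer side, so steer
    toward the side with weaker flow until the two balance.
    Positive output means steer right. Gain is illustrative."""
    total = left_flow + right_flow
    if total == 0.0:
        return 0.0  # no flow signal, e.g. featureless surroundings
    # Normalized imbalance in [-1, 1]: positive means left flow dominates,
    # i.e. the left edge is closer, so the command steers right.
    imbalance = (left_flow - right_flow) / total
    return gain * imbalance
```

Note what this loop never needs: a map, a painted line, or a satellite fix. That is why it is a different category of system, not an incremental improvement on the ones surveyed above.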
That’s not a small gap. That’s a category of system that does not yet exist.
Thomas Panek using Guideline technology to run independently outdoors.
What This Tells Us About Phase II
The survey is useful for one big reason beyond the obvious “buyer beware” function. It tells us where to start — what infrastructure we don’t have to rebuild.
Bone conduction headphones are validated. They work. Project Guideline and the broader VI running community have settled this. Phase II should adopt them as the baseline output channel, with haptic supplementation rather than replacement.
The virtual corridor concept is validated. The Wayband proved the interaction model works at speed. We don’t need to relitigate whether gradient feedback is the right paradigm. The work to do is replacing the GPS underneath with something that doesn’t drop out at mile 16.
Precision positioning at the 10 cm level is achievable. Nordic Evolution shows it. The integration challenge is wiring that precision into adaptive feedback logic — the part that nobody has built.
The structured outdoor loop test environment we’ve been describing for Phase II is genuinely well-matched to the precision-GPS-plus-haptic-gradient stack. The hardware exists. The control logic doesn’t.
That’s the contribution opportunity, and it’s the whole reason for running this survey in the first place. We’re not waiting on a sensor breakthrough. We’re waiting on someone to put the pieces together with the right brain in the middle.
NOA by biped.ai demo
A Closing Reframe
It’s tempting, when you list the gaps in existing products, to read the conclusion as “the field is failing.” That’s not quite right. The field has built the components. Haptic gradient feedback works. Bone conduction works. Centimeter-grade GPS works. Distributed haptic arrays work.
What hasn’t been built — what the lateral line, the bat call, and the bee in the tunnel are all telling us to build — is the workload-sensitive control layer that makes those components serve a runner instead of a pedestrian.
That’s where Phase II goes. Not a new sensor. A new logic.
Full assessment, with the coverage matrix and per-product detail, is in the Technology Landscape companion paper. Earlier posts in this series cover the Stress Wall, the cognitive load function, and the three biomimetic principles.