Blindness tech widens access
Blindness tech keeps moving on three tracks: prevent, cure, and live-with. The real progress sits in the seams where software widens access and biology makes devices less necessary. Eight years in, that interplay feels like the story. 👁️👇
Found a tiny note from Aug 5, 2017: “EyeD.” I was running Oswald Labs, we’d just been incubated by EyeFocus, and I was meeting founders across prevention, cure, and living with blindness. 2017 felt like a hinge: AR wearables were everywhere, Microsoft’s Seeing AI had just hit the App Store, and gene therapy was stepping out of the lab. In 2025 I added “Cool space.” Still true, just for different reasons now. I learned a lot from the mentors and teams there.
Live-with, then vs now
2017: purpose-built devices like OrCam and eSight, remote agents like Aira, and apps like Eye-D stitched together navigation, reading, and object ID. The frictions were price, setup, and spotty coverage.
2022-2025: mainstream platforms absorbed much of that stack. On-device Door Detection and Point and Speak eased last-meter navigation. Seeing AI reached more people. Be My AI showed how multimodal models can turn a camera stream into assistance. Wins: distribution, lower marginal cost, more on-device privacy. New risks: hallucinations, consent in shared spaces, and designing graceful failure for safety.
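To make that failure-design point concrete, here’s a minimal sketch of the loop: one camera frame goes to a vision model, and the app refuses to guess when the model isn’t sure. Everything here, the endpoint, the payload shape, the confidence field, is a hypothetical stand-in, not Be My AI’s or any vendor’s actual API.

```ts
// Hypothetical scene-description service: the endpoint, payload shape,
// and confidence field are stand-ins, not any vendor's real API.
const DESCRIBE_URL = "https://example.com/v1/describe";

interface SceneDescription {
  text: string; // the model's description of the frame
  confidence: number; // assumed self-reported confidence, 0..1
}

// Send one camera frame (base64 JPEG) to a multimodal model and fail
// gracefully: a low-confidence guess is never presented as fact.
async function describeFrame(frameBase64: string): Promise<string> {
  try {
    const res = await fetch(DESCRIBE_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ image: frameBase64 }),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const scene = (await res.json()) as SceneDescription;

    // Graceful failure: below a threshold, say so instead of guessing.
    if (scene.confidence < 0.6) {
      return "I'm not sure what's in view. Try moving the camera closer.";
    }
    return scene.text;
  } catch {
    // Network or model failure should degrade to a clear, safe message.
    return "Description unavailable right now.";
  }
}
```

The interesting design decision is the refusal branch: for a blind user, a confident-sounding hallucination about a crosswalk is worse than an honest “I don’t know.”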
Prevent and cure, then vs now
2017: Luxturna became the first FDA-approved gene therapy for an inherited retinal disease. A proof point that the eye can be treated locally and precisely.
2021-2025: optogenetics produced early human signals. X-linked RP programs moved forward but wrestled with endpoints, dose, and durability. Drugs for geographic atrophy arrived, with debates over functional benefit and safety. Sustained-delivery implants returned to market to lower the injection burden. Retinal arrays wound down, while cortical systems like Orion took their place, with broader indications but tougher surgery and rehab. Delivery, durability, and reimbursement now matter as much as the mechanism.
Tradeoffs I keep seeing
- Specialized hardware vs OS features. Control and reliability vs reach and price.
- Cloud AI vs on-device ML. Capability vs privacy, energy, and latency (see the sketch after this list).
- Medical-grade validation vs consumer speed. Safety and reimbursement vs time to ship.
- Subscriptions and remote agents vs one-time devices. Unit economics vs accessibility.
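The cloud-vs-on-device tradeoff often resolves into a hybrid: route bounded tasks to a local model and escalate only when the task exceeds it and the user has opted in. A minimal sketch, where the task kinds, model names, and opt-in flag are all illustrative assumptions rather than any real assistive-tech API:

```ts
// Hybrid routing sketch: task kinds, model names, and the opt-in flag
// are illustrative assumptions, not a real assistive-tech API.
type Task = { kind: "read_text" | "describe_scene"; sensitive: boolean };

interface Model {
  name: string;
  run(task: Task): Promise<string>;
}

// Stand-ins for a small local model and a large cloud model.
const onDevice: Model = {
  name: "on-device",
  run: async (t) => `[local ${t.kind} result]`,
};
const cloud: Model = {
  name: "cloud",
  run: async (t) => `[cloud ${t.kind} result]`,
};

async function route(task: Task, cloudOptIn: boolean): Promise<string> {
  // Bounded tasks like OCR stay local: better privacy, energy, latency.
  if (task.kind === "read_text") return onDevice.run(task);
  // Sensitive frames never leave the device, whatever the capability gap.
  if (task.sensitive || !cloudOptIn) return onDevice.run(task);
  // Open-ended scene description trades privacy for capability.
  return cloud.run(task);
}

// Example: an opted-in scene description escalates to the big model.
route({ kind: "describe_scene", sensitive: false }, true).then(console.log);
```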
What aged well from that 2017 shard: ecosystems beat silos, the eye suits precise local intervention, and general-purpose platforms can be powerful accessibility tools. What I missed: how quickly OS features and generative models would absorb assistive workloads. I was optimistic, just not optimistic enough.
Open questions
- When does vision AI cross into medical device territory and who validates it?
- Who pays for continuous assistance at scale: device buyers, payers, or platforms?
- Can commodity cameras let us do earlier, equitable prevention without widening data risk?
Here’s the original note from 2017: https://github.com/AnandChowdhary/notes/blob/main/notes/2017/eye-d.md