Accessible content designed for reading
Accessibility that actually helps means rewriting content for how our brains read. 📚👇
May 16, 2017: I was a high-school founder hacking on ClassRebels and Kaksha after classes on a tiny budget. NCERT was mostly PDFs, and most students used low-end Android phones on 2G. I shipped a dyslexia-friendly mode with sentence-level TTS, and I passed my exams using Kaksha, the app I built, which sourced all the content and made it easy to digest.
What “dyslexia friendly” meant in practice:
- Split long NCERT paragraphs into sentences
- Add spacing and line focus to cut visual crowding
- Read aloud with synced highlighting
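The splitting was rule-based. A minimal sketch of that approach, assuming a regex boundary plus a small abbreviation list (illustrative, not the original Kaksha code):

```python
import re

# Illustrative abbreviation list: these periods should not end a sentence.
ABBREVIATIONS = {"e.g.", "i.e.", "Dr.", "Mr.", "Mrs.", "vs."}

def split_sentences(paragraph: str) -> list[str]:
    """Split on ., !, or ? followed by whitespace and a capital letter,
    then merge back chunks that ended on a known abbreviation."""
    parts = re.split(r"(?<=[.!?])\s+(?=[A-Z])", paragraph.strip())
    sentences: list[str] = []
    for part in parts:
        if sentences and sentences[-1].split()[-1] in ABBREVIATIONS:
            # Previous chunk ended with an abbreviation, not a sentence.
            sentences[-1] += " " + part
        else:
            sentences.append(part)
    return sentences

print(split_sentences("Mr. Sharma teaches physics. Light bends. Why? Refraction."))
# → ['Mr. Sharma teaches physics.', 'Light bends.', 'Why?', 'Refraction.']
```

This is the "fast" side of the tradeoff below: a regex runs instantly on a 2G-era phone, but it misses cases (decimal numbers, quoted speech) that syntax-aware NLP handles.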
Tradeoffs:
- Fast rule-based splitting vs slower NLP that preserves meaning
- Server-side TTS to save battery vs on-device TTS for offline use
- Switching between Hindi and English without awkward pauses
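One way to approach the Hindi–English switching (a sketch of the idea, not the original implementation): segment text into runs by Unicode script, so each run can go to the matching TTS voice in one request instead of pausing at every language boundary.

```python
def script_of(ch: str) -> str:
    """Classify a character by Unicode block: Devanagari, Latin, or other."""
    cp = ord(ch)
    if 0x0900 <= cp <= 0x097F:  # Devanagari block
        return "devanagari"
    if ("a" <= ch <= "z") or ("A" <= ch <= "Z"):
        return "latin"
    return "other"  # spaces, digits, punctuation

def script_runs(text: str) -> list[tuple[str, str]]:
    """Group text into maximal same-script runs; 'other' characters
    attach to the current run so punctuation stays with its sentence."""
    runs: list[tuple[str, str]] = []
    for ch in text:
        s = script_of(ch)
        if runs and (s == "other" or s == runs[-1][0]):
            runs[-1] = (runs[-1][0], runs[-1][1] + ch)
        else:
            runs.append((s, ch))
    return runs

print(script_runs("Light यानी प्रकाश"))
# → [('latin', 'Light '), ('devanagari', 'यानी प्रकाश')]
```

Each run then maps to a voice (e.g. a Hindi voice for Devanagari runs), and the synced highlighting can follow run boundaries rather than individual words.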
What changed by 2025:
- Neural TTS is fast, natural, and multilingual on device
- Reader modes are built into OS and browsers
- Evidence favors spacing, chunking, and syntax aware pauses over magic fonts
- The real unlock is semantic content: clean text with structure, alt text for diagrams, local-language parity, and offline by default
If I rebuilt Kaksha today, I'd start with structured NCERT text and multilingual TTS with prosody tuned for learning, and I'd measure comprehension and recall, not clicks.
Durable principles:
- Structure over chrome
- Plain text first with alt text for visuals
- Local language parity
- Chunk by meaning, not by screen
- Offline as a baseline
Open questions:
- How do we personalize chunking without breaking coherence?
- What is a good metric for cognitive load?
Here’s the original note from 2017: https://github.com/AnandChowdhary/notes/blob/main/notes/2017/kaksha-like-microsoft.md