Typing on glass reveals intent
Typing on glass is not target practice. It is intent inference. Fleksy taught me that in 2014, and the idea has aged well, from geometric decoding to small on-device LMs. ⌨️👇
In 2014 I reviewed Fleksy after uninstalling Swype. My bet: gestures, aggressive spatial autocorrect, and a tiny UI, even an invisible keyboard (yes, really), could boost speed and accessibility. iOS 8 had just allowed third-party keyboards. The reality turned out more mixed.
Geometric vs. LM. Fleksy scored where you touched and decoded the word from tap geometry alone: fast, tolerant of occlusion, even usable eyes-free with gestures. LMs add context, but code-switching trips small dictionaries, and bigger LMs add latency plus privacy risk once decoding leaves the device. The sweet spot: spatial decoding backed by a small local LM.
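To make that combination concrete, here is a minimal sketch: taps are scored against key centers with a Gaussian noise model, then blended with a tiny unigram prior standing in for the local LM. The key coordinates, vocabulary, and weights are all invented for illustration; a real decoder uses measured layouts, per-user touch models, and a proper n-gram or neural LM.

```python
import math

# Invented key centers on a normalized grid (assumption: a real keyboard
# uses the measured layout and per-user touch distributions).
KEY_CENTERS = {
    "c": (0.28, 0.90), "a": (0.05, 0.50), "t": (0.45, 0.10),
    "r": (0.35, 0.10), "s": (0.15, 0.50),
}

# Tiny unigram prior standing in for the local LM (log-probabilities).
LOG_PRIOR = {"cat": math.log(0.6), "car": math.log(0.3), "sat": math.log(0.1)}

def log_touch_likelihood(word, touches, sigma=0.08):
    """Score how well a word explains the taps, modeling each tap as
    Gaussian noise around the intended key's center."""
    if len(word) != len(touches):
        return float("-inf")
    score = 0.0
    for ch, (x, y) in zip(word, touches):
        kx, ky = KEY_CENTERS[ch]
        score -= ((x - kx) ** 2 + (y - ky) ** 2) / (2 * sigma ** 2)
    return score

def decode(touches, lam=1.0):
    """Pick the word maximizing spatial log-likelihood + lam * LM log-prior."""
    return max(LOG_PRIOR, key=lambda w: log_touch_likelihood(w, touches)
                                        + lam * LOG_PRIOR[w])

# Three sloppy taps; the last one lands exactly between "t" and "r",
# so geometry is a tie and the prior decides: "cat" beats "car".
print(decode([(0.30, 0.85), (0.08, 0.55), (0.40, 0.10)]))
```

The design point this captures: geometry does most of the work, and the LM only has to break ties, which is why a small local model is enough.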
From 2016 to 2025, the giants folded the ideas in. Gboard made glide typing and on-device learning normal. iOS added QuickPath and better autocorrect. SwiftKey went deeper on AI. Meanwhile, extension limits, Full Access prompts, and good-enough defaults squeezed third-party keyboards, especially on iOS.
Things that last: infer intent from noisy taps, keep latency low, learn on-device, use gestures as power tools, stay out of the way, and design for accessibility.
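As a small illustration of the learn-on-device point, here is a sketch of local personalization: words the user commits update a counter that never leaves the device and gets blended with the shipped prior. The class name, smoothing constants, and blend weight are assumptions for illustration, not any vendor's actual scheme.

```python
import math
from collections import Counter

class OnDeviceLM:
    """Sketch of on-device personalization: count words the user commits,
    keep the counts local, and blend them with a static shipped prior.
    (Assumption: real keyboards add smoothing, decay, and privacy limits.)"""

    def __init__(self, base_prior):
        self.base = base_prior      # read-only prior shipped with the app
        self.personal = Counter()   # learned locally, never uploaded

    def observe(self, word):
        # Called when the user commits a word (space, or picks a suggestion).
        self.personal[word] += 1

    def log_prior(self, word, alpha=0.3):
        # Blend shipped and personal estimates; alpha is an invented weight.
        base_p = self.base.get(word, 1e-6)
        total = sum(self.personal.values()) or 1
        pers_p = self.personal[word] / total
        return math.log((1 - alpha) * base_p + alpha * pers_p + 1e-12)

lm = OnDeviceLM({"cat": 0.6, "car": 0.3, "sat": 0.1})
lm.observe("fleksy")            # new word learned from the user
print(lm.log_prior("fleksy"))   # now has nonzero probability, locally
```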
Here’s the original 2014 article: https://anandchowdhary.com/blog/2014/fleksy-keyboard