NayanVaani lets people with cerebral palsy, ALS, and stroke communicate using only their eyes — on any Android phone they already own. No hardware. No waiting list. Eyes in, voice out.
Every thought, every cry for help, every "I love you" — dying in silence. Because the only solution costs more than most Indian families earn in five years.
Three years ago, Rajan was an engineer. He had opinions. He argued about cricket. He told his daughter bedtime stories.
Then ALS took his hands. Then his voice. His family sits beside him every day, guessing. He can hear everything. He understands everything. He cannot respond to any of it.
His family was quoted ₹9.5 lakh for a specialised AAC device — with a 4-month wait for a specialist to configure it. They are still waiting.
"He keeps looking at the window when we talk about the old days. We think he wants to say something. We just don't know what."
NayanVaani could be on Rajan's phone today. Calibrated in 15 seconds. No specialist. No wait. No cost. He could tell his daughter he loves her — tonight.
No clicking. No touching. No special hardware. Just any Android phone and a pair of eyes. That's genuinely it.
Not just communication. A complete operating system for people who cannot use their hands.
"I am in pain." "I need water." "Call my family." "I love you." Quick-access phrases — Needs, Emotions, Responses, Greetings — a full alphabet keyboard, and AI word prediction so entire words select in a single gaze. All in Hindi and English.
Morning shows "Good morning" and "I need tea." Evening shows "I love you" and "Good night." Most-used phrases move to centre screen. After weeks, the interface becomes uniquely personal.
Contacts appear as large photo thumbnails — faces, not names. Dwell on a face for 1.5 seconds and the call connects automatically. During the call, a live overlay shows Yes, No, I understand, I love you — all eye-selectable in real time.
For the first time, a non-verbal person independently makes a phone call. No touching. No asking. No caregiver needed. Any time of day or night they want to connect.
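The dwell interaction above comes down to a timer: gaze samples accumulate while the eyes stay on one target, and the target fires once the 1.5-second threshold is reached. A minimal sketch of that logic — class and target names here are illustrative, not NayanVaani's actual code:

```python
class DwellSelector:
    """Fires a target's action after sustained gaze (dwell selection)."""

    def __init__(self, dwell_seconds=1.5):
        self.dwell_seconds = dwell_seconds
        self.current_target = None
        self.elapsed = 0.0

    def update(self, target, dt):
        """Feed one gaze sample. `target` is whatever the eyes are on,
        `dt` the seconds since the last sample. Returns the target once
        dwell completes, else None."""
        if target != self.current_target:
            # Gaze moved: restart the timer on the new target.
            self.current_target = target
            self.elapsed = 0.0
            return None
        self.elapsed += dt
        if target is not None and self.elapsed >= self.dwell_seconds:
            # Fire once, then reset so the same tile does not re-fire.
            self.elapsed = 0.0
            fired, self.current_target = target, None
            return fired
        return None
```

At 30 gaze samples per second, roughly 45 consecutive samples on one contact tile complete the 1.5-second dwell and place the call; any glance away resets the timer, which is what makes accidental selections rare.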
Play, pause, next, previous, volume. Five mood presets: Calm, Happy, Devotional, Classical, Sleep — each opens a curated Spotify playlist directly, zero intermediate steps. YouTube Music and Gaana also supported.
Every patient has specific music they need at specific times. NayanVaani gives them full control — without asking anyone to change the song. Ever again.
SOS tile sits permanently on every screen. A deliberate 3-second dwell fires it: WhatsApp with GPS location sent to 3 emergency contacts, the phone speaks aloud at maximum volume, the flashlight flashes the international SOS distress pattern, and an SMS backup goes out simultaneously.
No caregiver present. No one needs to hear them. Works in the middle of the night in a silent room. Three seconds. That's all it takes.
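The flashlight distress signal is Morse SOS (· · · − − − · · ·). A sketch of the alternating on/off timing a torch-pattern API would consume — the unit duration is an illustrative choice, not a value from the app:

```python
# Standard Morse timing, in units: dot = 1 on, dash = 3 on,
# gap between symbols = 1 off, gap between letters = 3 off.
DOT, DASH = 1, 3
SOS = ["...", "---", "..."]  # S, O, S

def sos_flash_pattern(unit_ms=200):
    """Return alternating [on, off, on, off, ...] durations in ms
    spelling SOS, suitable for driving a flashlight on a timer."""
    pattern = []
    for i, letter in enumerate(SOS):
        for j, symbol in enumerate(letter):
            pattern.append((DOT if symbol == "." else DASH) * unit_ms)  # light on
            last_symbol = j == len(letter) - 1
            last_letter = i == len(SOS) - 1
            if last_symbol and last_letter:
                break  # no trailing off period after the final dot
            pattern.append((3 if last_symbol else 1) * unit_ms)  # light off
    return pattern
```

With a 200 ms unit, one full SOS cycle takes about five seconds of flashing, visible across a dark room even when no one can hear the spoken alert.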
Personalised channel grid: news, devotional, cricket, family videos, saved channels. Dwell on any tile — YouTube opens directly to that content. Eye keyboard to search anything. Play, pause, volume overlay without leaving the video.
For years, what a bedridden patient watches required asking for help. Not any more. They choose. They control. For the first time.
Eye keyboard plus AI word prediction — compose a full WhatsApp message independently. With word prediction active, a full sentence requires 30–40% fewer eye selections than raw letter-by-letter typing.
A patient can reach a family member at work, a doctor between appointments, or a friend in another city — any time, without help from anyone.
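The 30–40% saving can be checked with simple arithmetic: letter-by-letter entry costs one gaze selection per character, while prediction completes a word after a short prefix plus one pick from the suggestion strip. The sentence and per-word prefix lengths below are illustrative assumptions, not measured app data:

```python
def selections_without_prediction(sentence):
    # One gaze selection per character, spaces included.
    return len(sentence)

def selections_with_prediction(sentence, prefix_needed):
    """`prefix_needed` maps word -> letters typed before the word shows
    up in the suggestion strip; picking the suggestion costs 1 more
    selection and also inserts the trailing space."""
    return sum(prefix_needed[w] + 1 for w in sentence.split())

sentence = "I need some water please"
prefix_needed = {"I": 1, "need": 3, "some": 3, "water": 2, "please": 2}

raw = selections_without_prediction(sentence)                     # 24 selections
predicted = selections_with_prediction(sentence, prefix_needed)   # 16 selections
saving = 1 - predicted / raw                                      # ~33% fewer
```

For eye typing, where every selection costs a full dwell interval, a one-third cut in selections translates directly into faster, less tiring sentences.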
Fan speed, AC temperature, room lights, TV channels and volume. Works via the IR blaster on Xiaomi/Redmi/Poco phones — the most popular phones in India in this price range — with zero additional hardware. Google Home API for smart plugs.
Temperature and lighting are not luxuries — they're comfort and health. They've always required asking whoever is in the room. NayanVaani ends that dependency.
Family uploads old recordings — voicemails, video calls, birthday messages. NayanVaani rebuilds the patient's voice via ElevenLabs: their accent, their tone, their cadence. Every phrase speaks aloud as them. Not a robot. Them.
For a family that hasn't heard their loved one speak in years, the first time this works is not a product feature. It is a moment they will carry for the rest of their lives.
Camera reads facial expressions passively via AI throughout the session. Distressed? Emergency phrases rise to the top automatically. Happy? Greetings appear first. Mood logs build a daily timeline visible to caregivers and doctors.
Non-verbal patients can't always communicate what they need — but their face can. NayanVaani reads that. Mental health, previously invisible, becomes measurable.
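A minimal sketch of the mood-aware reordering described above: a detected expression label bumps the matching phrase category to the front of the grid. The category names and mood labels are illustrative placeholders, not NayanVaani's actual taxonomy:

```python
CATEGORIES = ["Greetings", "Needs", "Emotions", "Responses", "Emergency"]

# Hypothetical mapping from a detected facial expression to the phrase
# category that should surface first on the grid.
MOOD_PRIORITY = {
    "distressed": "Emergency",
    "happy": "Greetings",
    "neutral": "Needs",
}

def reorder_for_mood(mood, categories=CATEGORIES):
    """Move the mood's priority category to the front; unknown moods
    leave the grid in its default order."""
    first = MOOD_PRIORITY.get(mood)
    if first is None or first not in categories:
        return list(categories)
    return [first] + [c for c in categories if c != first]
```

Logging each reordering decision with a timestamp is what yields the daily mood timeline caregivers and doctors can review.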
Pain assessment: body outline, 1–10 scale, descriptors (sharp, dull, burning, pressure). Symptom reporting: nausea, breathing, headache, dizziness. ICU needs: suctioning, position, medication. Consent responses. Built for Indian hospitals.
The patient participates in their own care. A nurse responds to needs without guessing. No AAC therapist needed. Meaningful clinical assessment from the first day.
No AAC device in the world does this for Indian languages. NayanVaani speaks in the languages Indians actually speak at home. Tamil, Bengali, Marathi, Gujarati, Telugu — all in Phase 2.
A patient in Chennai communicates in Tamil. A patient in Kolkata in Bengali. Existing devices speak only English. NayanVaani adds these languages this year.
Every critical feature runs locally on the phone. Nothing is sent to a server. Privacy by architecture, not by policy.
We made it Indian. We made it available on the phone that's already in your pocket.
Free. No account required. Works on any Android 8.0+ phone. Download, calibrate in 15 seconds, and communicate — right away.
5 lakh people in India have never been able to tell their family they're in pain. They've never been able to say I love you. They've never been able to ask for a glass of water.
Monetisation comes from everyone except the patient. The core app is free. Forever. That's the deal.
We're just getting started. Here's where NayanVaani is going next.
Learns each user's daily routine and personality over months. Surfaces "good morning" at 7:58am automatically. Full phrases from partial selections. Gets uniquely personal.
Tamil, Bengali, Marathi, Gujarati, Telugu, and more. A patient in Chennai communicates in Tamil. No AAC device in the world does this.
For users with cognitive impairments alongside physical ones. Look at a picture of food — the app says "I am hungry." Works for children with cerebral palsy who cannot read yet.
Family gets real-time notifications from another room. Communication history log. Mood dashboard. A week of emotional data visible to doctors before a consultation.
MediaPipe already tracks eye movement 30 times per second. Planned additions detect early signs of eye-muscle fatigue — critical for ALS patients. The medical team is alerted before communication ability is lost.
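One hedged way to flag fatigue from a 30 Hz landmark stream: track the eye-aspect-ratio (EAR, the height-to-width ratio of the eye landmarks, which drops as lids droop) over a rolling window and alert when its average falls below the user's calibrated baseline. The class name, window size, and threshold below are illustrative assumptions, not a clinically validated method:

```python
from collections import deque

class FatigueMonitor:
    """Flags possible eye-muscle fatigue from a stream of eye-aspect-ratio
    (EAR) samples, e.g. derived from face landmarks at 30 Hz."""

    def __init__(self, baseline_ear, window_samples=30 * 60, droop_ratio=0.8):
        self.baseline = baseline_ear                 # calibrated open-eye EAR
        self.window = deque(maxlen=window_samples)   # last ~60 s at 30 Hz
        self.droop_ratio = droop_ratio               # alert below 80% of baseline

    def add_sample(self, ear):
        self.window.append(ear)

    def fatigued(self):
        """True once a full window averages below the droop threshold."""
        if len(self.window) < self.window.maxlen:
            return False  # not enough data yet
        avg = sum(self.window) / len(self.window)
        return avg < self.droop_ratio * self.baseline
```

Averaging over a full minute rather than single frames keeps blinks and momentary glances from triggering false alerts, while a sustained droop still surfaces within about a minute.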
When eye control eventually fails, an EEG headband connects as replacement input. Same grid, same phrases, same life — accessed through a different channel. The patient is never abandoned by their tools.
Four people, one shared conviction: nobody should go voiceless because they can't afford hardware.
We're looking for hospital partners, government champions, and anyone who believes every person deserves to be heard — regardless of what they can afford.