ACCESSIBILITY DESIGN
exDOT
Assistive Technology for the Legally Blind
Project Brief
In the United States, an estimated 7 million people live with uncorrectable vision loss, including more than 1 million who are blind. Over 1.6 million people with uncorrectable visual acuity loss and 141,000 people with blindness are under the age of 40.
Many everyday activities pose major challenges for blind and visually impaired persons (VIPs), such as wayfinding in unfamiliar surroundings, detecting objects and people, and recognizing faces and facial expressions.
Duration
2 months
Tools
Figma, Miro, Procreate, DIY tools for pretotyping
Project Type
Accessibility, Collaborative Academic Project
Exploratory Research
In our exploratory research, we learned how facial recognition technology has evolved, how current technology assists people with visual impairments and blindness, and how important nonverbal communication and behavior are in bridging the gap between legally blind and sighted people.
“Most nonverbal communication relies on the visual signals such as eye contacts, facial expressions, hand and body gestures etc. However, visual signals are inaccessible for the blind and hardly accessible for low vision people, since it is a process through sending and receiving wordless visual messages between people.”
“Information architecture and the flow of navigational items should be organized in such a way so that cognitive load on the blind people should be kept a minimum as possible.”
Competitive Analysis
Next, we conducted a competitive analysis to understand advances in assistive technology for facial and expression recognition. We found that Envision Glasses and Aira glasses are the two leading wearable devices, offering the ability to read, identify, find, and call. Envision has three reading modes: Instant Text, Scan Text, and Batch Scan, which are significant advantages for users. Aira glasses connect to a remote Aira agent, who sees what the blind or visually impaired person sees in real time and then talks to them through their phone or earphones.
Pretotyping - Mechanical Turk
We conducted a Mechanical Turk pretotype to understand how such applications work and to explore how emojis could be leveraged. In each test, the participant listened to the audio of a YouTube video while we sent them a mix of emojis and words to convey the speaker's facial expressions and behavior.
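The snippet below is a hypothetical sketch of the cue stream we might send during such a session: timestamped emoji-plus-word annotations keyed to the speaker's expressions. The timestamps, emojis, and labels are illustrative placeholders, not the actual test data.

```python
# Hypothetical cue stream for one Mechanical Turk session: each entry pairs a
# timestamp in the YouTube audio with an emoji and a short word describing the
# speaker's expression. All values are illustrative placeholders.
cue_stream = [
    {"t": "00:12", "emoji": "🙂", "word": "calm"},
    {"t": "00:47", "emoji": "🤨", "word": "skeptical"},
    {"t": "01:05", "emoji": "😠", "word": "firm"},
    {"t": "01:38", "emoji": "😄", "word": "amused"},
]

for cue in cue_stream:
    # In the actual test, cues were sent as messages while the participant
    # listened to the audio.
    print(f'{cue["t"]}  {cue["emoji"]}  {cue["word"]}')
```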
The first test used a diplomatic discussion between Indira Gandhi, an Indian politician, and a news reporter. The participant understood 85% of the speaker's expressions through the emojis and their perception of the audio.
The second test played a conversation between Trevor Noah and Stephen Colbert on Colbert's late-night show. Since Trevor Noah's expressions changed roughly every five seconds, conveying the changes in facial expression was difficult.
Mechanical Turk Insights
It was important to know when the other person was listening.
Tracking and conveying two or more people's facial expressions was difficult.
How users process the information is subconscious and left up to them.
Brainstorming
We brainstormed everyday situations, the facial expressions that arise in them, and what information we would capture and why.
Persona
What’s the Problem?
Individuals with vision impairment become accustomed to their surroundings using other senses such as hearing and touch. However, they miss important visual cues that could help them better understand their environment. One such visual cue is facial expression. Facial expressions are a prime aspect of social interactions, especially during one-on-one conversations. Studies show that perceiving others' facial expressions enhances an individual's own expressiveness during interactions. Hence, strengthening a vision-impaired individual's understanding of others' facial emotions will benefit their relationships with the people they interact with.
How might we support the perception of facial expressions in individuals with legal blindness and enhance their understanding of others’ emotions?
Ideation
After gathering data and brainstorming use-case scenarios, we jotted down the various possible input and output mechanisms for conveying expressions.
Ideation - Concept Sketch
We explored various device possibilities and their ideal feedback mechanisms, considering the use-case scenarios we had brainstormed earlier in our research.
Narrowing down our ideas, we decided on a braille watch as the output interface and a removable clip-on camera as the input.
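The outline below is a minimal sketch of the intended pipeline under this architecture: the clip-on camera captures frames, an expression classifier labels them, and the label is pushed to the braille watch. The class and function names are placeholders of ours, not an implemented system or a specific vendor API.

```python
# Minimal pipeline sketch: clip-on camera in, braille watch out.
# ClipCamera, ExpressionClassifier, and BrailleWatch are hypothetical
# stand-ins for the real hardware and model interfaces.
import time


class ClipCamera:
    """Placeholder for the removable clip-on camera."""
    def capture(self):
        return b""  # a real camera would return an image frame


class ExpressionClassifier:
    """Placeholder for an off-the-shelf facial-expression model."""
    def classify(self, frame) -> str:
        return "smiling"  # a real model would infer the label from the frame


class BrailleWatch:
    """Placeholder for the braille watch's display interface."""
    def show(self, label: str) -> None:
        print(f"[watch] {label}")  # real hardware would raise braille dots


def run_pipeline(camera, classifier, watch, interval_s: float = 1.0) -> None:
    """Capture a frame, classify the expression, and push it to the watch."""
    while True:
        frame = camera.capture()            # clip-on camera input
        label = classifier.classify(frame)  # e.g. "smiling", "nodding"
        watch.show(label)                   # rendered as braille on the watch
        time.sleep(interval_s)              # throttle updates for readability
```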
Braille Emojis
Using the existing braille character set for emojis and body language, displaying these characters on the braille watch is a discreet way to convey information to the user. We therefore explored these character sets and narrowed them down to three categories of body language, each comprising six expressions, selected from the most common and frequently expressed emotions.
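A mapping of the kind we converged on is sketched below: three body-language categories with six expressions each, keyed to single braille cells. The category names, expression labels, and cell assignments here are illustrative placeholders, not the standardized braille emoji characters or our final selection.

```python
# Illustrative mapping from expression labels to single braille cells.
# Three categories with six expressions each mirror the structure of our
# selection; the labels and braille patterns are placeholders only.
BRAILLE_EXPRESSIONS = {
    "facial": {
        "smiling": "⠁", "laughing": "⠃", "frowning": "⠉",
        "surprised": "⠙", "crying": "⠑", "neutral": "⠋",
    },
    "head": {
        "nodding": "⠛", "shaking": "⠓", "tilted": "⠊",
        "looking_away": "⠚", "attentive": "⠅", "lowered": "⠇",
    },
    "gesture": {
        "waving": "⠍", "pointing": "⠝", "thumbs_up": "⠕",
        "arms_crossed": "⠏", "shrugging": "⠟", "hand_raised": "⠗",
    },
}


def to_braille(category: str, expression: str) -> str:
    """Look up the braille cell shown on the watch for a detected expression."""
    return BRAILLE_EXPRESSIONS[category][expression]
```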
Pretotype - Pinocchio
To test our concept, we conducted the Pinocchio pretotyping experiment. By wearing the mock-up for a day, we found that it was easy to use, discreet, and did not interfere with conversations. However, the form of the watch could be reduced to make it more minimal and sleek.
Form Sketch
We explored the form of the watch and camera through sketching.
One ethical question during our iterations was whether to notify the other person that they were being recorded, for example by adding an LED indicator light. However, the device neither stores any data nor connects to the internet, so we concluded there was no need to inform the people whose expressions are being analyzed. The device is used solely to assist a visually impaired individual in experiencing conversations the same way sighted people do.
Technology Leveraged
The technology required to develop such a product is readily available and has already been implemented in other products.
The Dot Watch displays information in braille; its hardware could be used to recreate the same mechanism.
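As a rough illustration of how readily available this piece is, the sketch below decodes a Unicode braille character into the raised-dot pattern a dot-watch-style cell would need to actuate. The encoding shown is the standard Unicode braille block; everything else, including how a particular watch's firmware would consume the pattern, is an assumption for illustration.

```python
# Decode a Unicode braille character into the raised dots a braille cell
# must actuate. The Unicode braille block (U+2800-U+28FF) stores dots 1-8
# in the low eight bits of the code point offset from U+2800.
def braille_char_to_dots(ch: str) -> list[int]:
    """Return the raised dot numbers (1-8) for a braille pattern character."""
    offset = ord(ch) - 0x2800
    if not 0 <= offset <= 0xFF:
        raise ValueError(f"{ch!r} is not a braille pattern character")
    return [dot for dot in range(1, 9) if offset & (1 << (dot - 1))]


# Example with a placeholder cell from the mapping above:
print(braille_char_to_dots("⠋"))  # -> [1, 2, 4] (dots 1, 2 and 4 raised)
```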