This is a post by Lee Cusack, our summer intern and recent graduate of Lesley University’s College of Art and Design. We’ve had an awesome time working on a project with him that sheds light on making face-to-face *conversation* universally accessible. (Photo captions by Terrence)
I was born with Spastic Quadriplegic Cerebral Palsy, a form of brain paralysis that causes stiff muscle movement and exaggerated reflexes. In addition to affecting the rest of my body, it makes it difficult to move my tongue and bring my lips together when speaking. This impairs my ability to communicate with anybody who hasn't spent a lot of time learning my vocalizations. Needless to say, navigating the workplace, where face-to-face interaction is essential for learning and productivity, can be daunting.
The Let's Chat! mobile app is a tool for teaching other people how to converse with me. It compels me to speak more often, keeping my mouth moving to prevent muscle atrophy. And where other communication aids use pre-programmed phrases to simulate fluid conversation, the Let's Chat! app allows me to freely articulate my thoughts and express myself.
It works by instructing the listener to follow three personalized steps:
- Step 1: I speak a word and you say it back to me. If I nod yes, we repeat Step 1 with the next word.
- Step 2: If you don't understand the word, ask me to spell it. If you don't understand a letter, continue to Step 3.
- Step 3: The letters of the alphabet are arranged and grouped by their frequency of use in American English, further tailored to suit my linguistic habits.
When conversing for the first time we typically spend more time on Step 3. As you begin to grasp the nuances of my voice and the rhythm of my speech we use the first two steps more often.
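The three-step flow above can be sketched as a tiny state machine. This is only an illustration of the protocol as described, not code from the app; the type and function names here are hypothetical.

```typescript
// Hypothetical sketch of the Let's Chat! three-step flow.
type Step = "word" | "spell" | "letterGroups";

interface ChatState {
  step: Step;
}

// After each exchange, the listener reports whether they understood
// (confirmed by the speaker's nod), and the conversation moves accordingly.
function nextStep(state: ChatState, understood: boolean): ChatState {
  if (understood) {
    // A successful exchange always returns to Step 1 for the next word.
    return { step: "word" };
  }
  switch (state.step) {
    case "word":
      return { step: "spell" };        // Step 2: ask the speaker to spell it
    case "spell":
      return { step: "letterGroups" }; // Step 3: fall back to the letter groups
    case "letterGroups":
      return { step: "letterGroups" }; // stay here until the letter is confirmed
  }
}
```

As listeners learn the speaker's voice, more exchanges resolve at Step 1 and the later states are reached less often.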
The Let's Chat! design has been evolving over the past few years. It began as a simple business card, but as an app, it will soon let those with similar speech impairments create profiles with their own unique letter groups and personalized instructions. It helps keep my personal life and business world alive, making it easier to freely engage with other people and create equal ownership of conversations.
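A profile like the one described could be modeled as a small data structure: letter groups plus personalized step instructions. This is a speculative sketch, assuming hypothetical names; the example groups below just use the common English letter-frequency ordering (ETAOIN SHRDLU...), which each user would tailor to their own habits.

```typescript
// Hypothetical data model for a customizable Let's Chat! profile.
interface LetsChatProfile {
  name: string;
  // Letters arranged into groups, ordered by frequency of use and
  // tailored to the speaker's own linguistic habits.
  letterGroups: string[][];
  // Personalized wording for each of the three steps.
  instructions: [string, string, string];
}

// An illustrative profile using standard English letter-frequency order.
const example: LetsChatProfile = {
  name: "Lee",
  letterGroups: [
    ["E", "T", "A", "O", "I", "N"],
    ["S", "H", "R", "D", "L", "U"],
    ["C", "M", "F", "W", "Y", "P"],
    ["V", "B", "G", "K", "J", "Q", "X", "Z"],
  ],
  instructions: [
    "Say each word back to me; I'll nod if you got it.",
    "If you don't understand a word, ask me to spell it.",
    "If you don't understand a letter, use my letter groups.",
  ],
};
```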
Human interaction allows us to see outside ourselves, share knowledge, gather ideas, reflect inward, reorganize concepts, and imagine new paradigms. This process is necessary for well-being and individual growth, but with my impairment, relying on computer-based touch-to-speak devices can severely filter that exchange and become a detriment to my health.
Having used computer-based touch-to-speak devices, I find them clunky, difficult to operate, complicated to program, outdated, and extremely expensive. They limit self-expression by using generic pre-programmed phrases to speak for an individual, robbing them of the attention and articulation we all deserve when expressing ourselves. My education in design and programming has given me the tools I need to envision a new kind of communication aid that works better for me and those I converse with.
During the prototype stage, we decided to build a web app (instead of a native mobile app) to make design iterations easier, and so the app would be usable on both iOS and Android devices.
After developing an initial prototype with input from the Fathom team, we did user testing with others in the studio. For the test, people used the Let's Chat! app to help me complete the Color Survey. The informal sessions shed light on areas for improvement in wording and usability. For instance, the instructions from the business card need further refinement for the mobile environment. What's the right length? Where and when should they appear in the process? In addition, people sometimes misunderstood or skipped steps entirely. Overall, the sessions went well, and these insights will help me further develop the app. For the first time in a long time, I was able to speak one-on-one with people who had never before been introduced to my vocalizations.
Now that a solid framework of the Let’s Chat! system is functioning, the next step is to test ways of guiding users through the process. Tightening up ambiguities in the instructional language will make the system less confusing. Further iterations of the UI will improve how one navigates through the letter groups, allowing for more eye contact. Since I’m not able to physically interact with the app, I’ll pursue more user testing as I make iterations on the design.
If you'd like to follow along or contribute to this project, please check it out on GitHub.