The future of accessibility

Using AI to aid the visually and hearing impaired

24 Oct 2024

Accessible technologies have come a long way in recent years, enabling people with disabilities to interact more fully with the digital world. From screen readers and voice recognition software to haptic feedback devices and smart home assistants, these tools have significantly improved the quality of life for many.

As we enter this new era of artificial intelligence, the potential for creating even more advanced and intuitive accessibility solutions is rapidly expanding.

Models that can see and hear

With the advent of Large Language Models (LLMs), AI's ability to improve accessibility is becoming increasingly apparent.

Frontier models like OpenAI’s GPT-4o and Meta’s Llama are quickly becoming multimodal. In addition to generating text and images, these models can increasingly see and hear the world around them. Most notably:

  • They can analyze and understand the content of images, recognizing objects, understanding scenes, detecting facial expressions, and even interpreting complex visual data.
  • They can understand and interpret sounds and audio information (e.g., natural, artificial, or unusual sounds) and convert those sounds to text.

Assisting with vision

One of the most promising applications of AI for accessibility is in computer vision. For individuals who are blind or visually impaired, AI-powered image description tools can serve as a “set of eyes,” interpreting visual information.

For example, GPT-4 can now describe photos and visual details with remarkable accuracy. Imagine you’re visually impaired and encounter street construction: AI can help guide you by describing the scene in real time.

This level of detail allows visually impaired individuals to understand their surroundings more fully, enhancing both independence and safety.

Practical applications of this technology are already evident in tools like Microsoft's Seeing AI, apps like Be My Eyes, and wearables like OrCam’s MyEye.
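
For a sense of what happens under the hood, here is a minimal sketch of how an app might ask a multimodal model to describe a scene, using the OpenAI Python SDK. The model choice, prompt, and image URL are illustrative assumptions, not the actual implementation behind any of the products above.

```python
# Minimal sketch: describing a street scene for a visually impaired user.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; the prompt and image URL are
# illustrative placeholders, not a production accessibility pipeline.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # a multimodal model that accepts images
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Describe this street scene for a blind pedestrian. "
                        "Mention obstacles, construction, and safe walking paths."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/street-scene.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```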

Assisting with hearing

For the deaf and hard of hearing, AI is also making significant strides in speech recognition and translation. Advanced speech-to-text algorithms can now deliver real-time translation and captioning for live conversations and media.

For instance, the transcription app Ava provides real-time text for any conversation within its range, while Google’s Live Transcribe offers a similar service, making everyday interactions more accessible.
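
Under the hood, tools like these feed audio into a speech-to-text model. As a rough sketch (assuming the OpenAI Python SDK and its hosted Whisper model, not Ava’s or Google’s actual pipeline), transcribing a short clip looks like the code below; real captioning apps stream audio in small chunks rather than uploading whole files.

```python
# Minimal sketch: captioning a short audio clip with a speech-to-text model.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# the file name is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

with open("conversation.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # hosted Whisper speech-to-text model
        file=audio_file,
    )

print(transcript.text)
```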

But large language models can also recognize and identify sounds beyond speech, interpreting many types of audio in a user’s environment.

This kind of recognition can alert individuals with hearing impairments to important sounds in their environment, from potential danger signals to everyday notifications.
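
As a hedged sketch of how such an alert might be prototyped, the snippet below sends a short clip to an audio-capable chat model and asks it to identify the sound and flag possible danger. The model name, prompt, and file are assumptions for illustration; a real assistive system would need far more rigorous, tested logic.

```python
# Minimal sketch: asking an audio-capable model to classify an environmental
# sound and flag whether it warrants an alert. Assumes the OpenAI Python SDK
# and the "gpt-4o-audio-preview" model; the file name, prompt, and alert
# framing are illustrative placeholders, not a tested safety system.
import base64

from openai import OpenAI

client = OpenAI()

with open("ambient-clip.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o-audio-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Identify the main sound in this clip (e.g., alarm, "
                        "doorbell, traffic) and say whether it signals danger."
                    ),
                },
                {
                    "type": "input_audio",
                    "input_audio": {"data": audio_b64, "format": "wav"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```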

Future opportunities for accessibility

As AI continues to evolve, its potential to enhance accessibility will grow exponentially. By offering detailed visual descriptions and precise audio interpretations, AI is already contributing to a more inclusive world for individuals with visual and hearing impairments.

In the future, it will be even more important to develop these technologies with input from the communities they serve, minimizing bias and ensuring that solutions address real needs and improve quality of life. With this focus, the future of accessibility looks bright.

Have an idea or want to learn more? Subscribe to our newsletter and follow us on LinkedIn!

Special thanks to Max Schnitzer and Morgan Gerber
