Google’s latest accessibility features could have a significant impact. In an update to its Lookout app for Android, Google is adding an “image question and answer” feature powered by DeepMind’s AI. With it, the app can describe an image in detail even when captions or alternative text are missing, and users can ask precise follow-up questions by voice or text, such as what a dog’s temperament appears to be. Google plans to test the feature with a small group of blind and visually impaired users before making it available to a wider audience. People with visual impairments could benefit immensely from this AI-powered gain in accessibility and freedom.
If you use a wheelchair or pushchair, getting around the city should also become easier. Wheelchair accessibility labels on Google Maps are now available to everyone, so you’ll know whether a place has a step-free entrance before you arrive. If a venue doesn’t have an accessible entrance, you’ll see a warning along with information about other accommodations (such as wheelchair-accessible seating), which can help you decide whether the trip is worthwhile.
Several smaller changes should also prove helpful. With Live Caption, you can type replies during a call and have them read aloud to the other person. Chrome now detects URL typos and suggests corrections on desktop, with mobile support coming soon. And when Wear OS 4 launches later this year, it will bring faster, more reliable text-to-speech.
Google unveiled a flurry of new features at I/O 2023 and has been pushing AI aggressively in recent months, but the improvement to Lookout could be among the most beneficial. While AI-generated descriptions are useful on their own, the Q&A tool can provide specifics that would often require another person’s help. That could meaningfully increase the independence of people with vision problems.