Apple previews several new accessibility features


The previews included Personal Voice and Live Speech, among other features.

Apple has announced that it has been working on a range of accessibility features for its products, unveiling what they are and how they are expected to help people with disabilities.

The new additions are meant to improve hearing, vision, and cognitive access to its products.

The first of the accessibility features unveiled is Live Speech, which was developed to make it possible for non-speaking people to take part in phone calls by typing. Similarly, Personal Voice will provide a synthesized model of the user's own voice. Additionally, people who are blind or have highly impaired vision may benefit from Detection Mode.

Accessibility features on Apple devices. Credit: Photo by depositphotos.com

The new options are not yet rolling out to Apple devices, but the company said they will arrive later this year. That said, a specific timetable of release dates has not been provided. Many in the industry speculate that the features will be incorporated into iOS 17 and iPadOS 17, as well as some next-generation macOS apps. Each of those platforms will likely be revealed in June at WWDC.

The accessibility features are meant to give a broader range of users more options for using their devices.

Assistive Access can be turned on or off within a device's Settings. The experience combines Phone and FaceTime into a single Calls application with vibrant, large contact shortcuts. Messages gains a large emoji keyboard, and thumbnails in the gallery are magnified to make it simpler to preview pictures. Assistive Access can be applied on both iOS and iPadOS devices.

Live Speech will make it possible for commonly used phrases to be rapidly added to conversations. Personal Voice will give users a set of text prompts to read aloud, recording up to 15 minutes of audio with which the voice model is trained. As a result, even as a user's vocal capabilities decline, they will still have a way to communicate with the people in their lives in a voice that sounds like their own.
