Apple Introduces New Accessibility Features for Speech

Apple has expanded its range of accessibility options with the latest software updates, adding a new category of speech accessibility alongside existing visual, physical, motor, auditory, and cognitive tools. Users can now input text for their devices to speak on their behalf, and individuals at risk of losing their ability to speak can even use Apple’s advanced artificial intelligence to create a model of their voice, helping them to sound like themselves.

The features are aimed at people who cannot speak or are at risk of losing the ability to do so. For example, one in three people diagnosed with Amyotrophic Lateral Sclerosis (ALS) is likely to lose the ability to speak during their lifetime, according to Sarah Herrlinger, Senior Director of Global Accessibility Policy and Initiatives at Apple.

On iPhone, users can activate “Live Speech” by triple-clicking the side button. Using Apple’s on-device machine learning, users can also generate a voice that sounds like their own simply by reading aloud a series of randomly generated sentences. The randomized prompts provide varied audio for training the model while making it harder to misuse the feature to clone someone else’s voice.

The new “Live Speech” feature lets users type messages that the device then speaks aloud. Users can also save predefined phrases, such as a favorite meal order, and speak them with a single tap, and they can choose from a variety of voice options.
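For developers, the core mechanic of Live Speech, typing text that the device speaks aloud, maps closely onto Apple’s public speech synthesis API. The following is a minimal sketch using AVFoundation’s AVSpeechSynthesizer; it illustrates the idea and is not Apple’s actual Live Speech code, and the sample phrases are invented for illustration.

```swift
import AVFoundation

// Minimal text-to-speech in the spirit of Live Speech (not Apple's
// actual implementation). A long-lived synthesizer avoids speech being
// cut off when a local instance is deallocated.
let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

// Saved phrases, analogous to Live Speech's predefined favorites
// (these strings are made-up examples).
let favoritePhrases = [
    "I'd like my usual order, please.",
    "Thank you very much!",
]
speak(favoritePhrases[0])
```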

Users with newer devices can even use machine learning to create a personalized synthetic voice that sounds like them. Called “Personal Voice,” it can be used with “Live Speech” to speak typed words aloud, including in apps like FaceTime.
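Alongside the user-facing feature, iOS 17 exposes Personal Voice to third-party apps through AVFoundation. The sketch below shows how an app might request authorization and select the user’s personal voice; it is illustrative rather than definitive, and the consent prompt itself is managed by the system.

```swift
import AVFoundation

let synthesizer = AVSpeechSynthesizer()

// Sketch of the iOS 17 Personal Voice API: the user must explicitly
// authorize each app before it can speak with their personal voice.
func speakWithPersonalVoice(_ text: String) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }
        // Personal voices are flagged with the .isPersonalVoice trait.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice // nil falls back to the default voice
        synthesizer.speak(utterance)
    }
}
```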

The new speech tools join an extensive list of accessibility features on Apple devices, many of which rely on artificial intelligence. These include “Magnifier,” which uses the camera to do everything from reading button labels aloud to announcing the distance to an approaching door, and “Sound Recognition,” which alerts users when the phone detects a baby crying, a dog barking, running water, or other significant sounds.
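Sound Recognition has a developer-facing counterpart in Apple’s SoundAnalysis framework, which ships a built-in classifier covering sounds such as dog barks, crying babies, and running water. Below is a hedged sketch that classifies a local audio file; it is not the code behind the accessibility feature, and the file URL is assumed to point at valid audio.

```swift
import Foundation
import SoundAnalysis

// Receives classification results from the analyzer.
final class SoundObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        // Identifiers cover sounds like dog barks, crying babies, and water.
        print("Detected \(top.identifier) (confidence: \(top.confidence))")
    }
}

// Classify a local audio file with Apple's built-in sound classifier.
func classifySounds(in url: URL) throws {
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let analyzer = try SNAudioFileAnalyzer(url: url)
    let observer = SoundObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze() // processes the whole file synchronously
}
```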

Apple’s accessibility team is not interested in exploring what artificial intelligence can do simply for its own sake.

“Machine learning from an accessibility perspective is not new to us. We love our machine learning teams, and we know we will work with them even more,” says Herrlinger.

As always, though, the team starts by listening to the user community, understanding the unique challenges they face and what they want technology to do, and then solving those specific problems.
