Apple has revealed a new feature that will allow iPhones and iPads to generate digital reproductions of a user’s voice.
The Personal Voice feature, expected as part of iOS 17, will work with the Live Speech feature to allow users to record their voices and communicate with others on audio calls or platforms such as FaceTime.
Users can create a Personal Voice by reading along with a random set of text prompts to record 15 minutes of audio on iPhone or iPad.
The Live Speech feature then allows users to type messages on the device to be read out loud.
If they use certain phrases a lot, these can be saved as shortcuts.
If they have created a Personal Voice model, they can play the phrases in their own voice – otherwise they are read by the device’s digital assistant Siri.
The feature is aimed at people with conditions such as ALS (amyotrophic lateral sclerosis) that can cause them to lose the ability to speak over time.
Philip Green, board member and ALS advocate at the Team Gleason charity, has experienced significant changes to his voice since being diagnosed with ALS in 2018.
He said: “At the end of the day, the most important thing is being able to communicate with friends and family.
“If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world – and being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary.”
The feature is among a number of new tools that will arrive on Apple devices later this year, although the company would not be more specific about the timing.
Another, called Point and Speak, will let users point a finger at something in front of the camera and have the text on or near it read aloud – for example, the labels on microwave buttons.
This feature will only work on Apple devices with a built-in LiDAR sensor – among the pricier of the tech giant’s iPhone and iPad models.
The news comes ahead of the Worldwide Developers Conference on 5 June, where Apple is also expected to reveal its first virtual reality headset.