At Amazon’s re:MARS conference, Alexa senior vice-president Rohit Prasad demonstrated a startling new voice assistant capability: the ability to mimic voices. So far, there's no timeline for when, or whether, this feature will be released to the public.
Stranger still, Amazon framed this copycatting ability as a way to commemorate lost loved ones. It played a demonstration video in which Alexa read to a child in the voice of his recently deceased grandmother. Prasad stressed that the company was seeking ways to make AI as personal as possible. “While AI can’t eliminate that pain of loss,” he said, “it can definitely make the memories last.” An Amazon spokesperson told Engadget that the new skill can create a synthetic voiceprint after being trained on as little as a minute of audio of the person it's meant to replicate.
Security experts have long warned that deepfake audio tools, which use text-to-speech technology to create synthetic voices, could pave the way for a flood of new scams. Voice cloning software has already enabled a number of crimes, such as a 2020 incident in the United Arab Emirates in which fraudsters fooled a bank manager into transferring $35 million by impersonating a company director. But deepfake audio crimes are still relatively rare, and the tools available to scammers remain, for now, relatively primitive.