They’re a staple of science fiction: super-smart, disembodied virtual assistants that cater to the hero’s every whim. Depending on the story, a computer with this kind of intelligence behaves in one of two ways: either it’s subservient and helpful, like Tony Stark’s J.A.R.V.I.S. in Iron Man, or it develops a penchant for subterfuge and murder, like HAL 9000 in 2001: A Space Odyssey. Of course, Intel hopes its new assistant Jarvis, which is housed in the earpiece pictured, will be more like its namesake. But Intel’s biggest hurdle will simply be getting it to understand natural speech.
So far, digital assistants, such as Siri for iPhone and EVA for Android, need to be spoken to like someone who’s just fallen off their bike: slowly and with lots of repetition. When they do understand what you’re saying, your request is sent to the cloud, or in other words, sent over the internet to remote servers. This means that the second the phone loses its web connection, Siri becomes as talkative as a Trappist monk. Any question you may have about the weather is rebuffed with a blunt “I’m not connected to the internet”. This is where Intel wants to change things.
The idea is to pull the assistant’s brains out of the cloud and onto a device – specifically, an earpiece. To do this Intel will need to squeeze the hardware and software required to understand your words into something no bigger than a matchbox. It’s a big ask, especially since the company has so far struggled to crack the mobile phone and tablet markets.
“As well as the power you’d need for speech recognition, there’d have to be somewhere to store information while the device learns your voice,” says Matthew Aylett, an AI researcher at the University of Edinburgh and Chief Scientific Officer at CereProc, a company that synthesizes speech for computer systems. “But you can do some smart things if you can combine online and offline capabilities intelligently, like having it save the data about your speech in the cloud, and then update the device’s speech recognition when you’re connected.”
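Aylett’s hybrid idea can be summarised in a few lines of code. The sketch below is purely illustrative (all names, such as `HybridAssistant`, are hypothetical, not Intel’s design): recognition runs on-device so it never depends on a connection, speech data queues up locally, and a sync step uploads that data and pulls back an improved model whenever the device comes online.

```python
# Illustrative sketch of the hybrid online/offline approach Aylett describes.
# All class and method names here are hypothetical.

class HybridAssistant:
    def __init__(self):
        self.pending_samples = []   # speech data saved while offline
        self.model_version = 1      # version of the on-device recognition model

    def hear(self, utterance):
        """Recognise speech on-device; works with or without a connection."""
        self.pending_samples.append(utterance)  # keep data for later learning
        return f"recognised: {utterance}"       # stand-in for local recognition

    def sync(self, connected):
        """When online, upload queued samples and fetch an updated model."""
        if not connected:
            return False                        # offline: just wait, lose nothing
        self.pending_samples.clear()            # samples handed off to the cloud
        self.model_version += 1                 # device gets a better model back
        return True

assistant = HybridAssistant()
assistant.hear("what's the weather")   # still answers while offline
assistant.sync(connected=False)        # no connection: samples stay queued
synced = assistant.sync(connected=True)
```

The point of the split is that the device degrades gracefully: recognition never stops when the connection drops, and the cloud is only used for the slow, heavyweight learning step.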
According to Aylett, if Intel’s ambitious system is successful it could outstrip the abilities of the current generation of digital assistants by some distance. Jarvis, it’s hoped, will understand context. “Siri doesn’t really have a dialogue with you at the moment. When you ask it a follow-up question, it doesn’t hold on to the context provided by the previous question,” Aylett points out. This means that, despite Siri’s best intentions, you’re never really engaged in an actual conversation.
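What “holding on to context” means in practice can be shown with a toy example (hypothetical code, not how Siri or Jarvis actually works): the assistant remembers the topic of the previous question, so a fragment like “what about tomorrow?” can be resolved instead of rebuffed.

```python
# Toy illustration of dialogue context: the previous topic is stored so a
# follow-up fragment can be interpreted. Names here are hypothetical.

class DialogueContext:
    def __init__(self):
        self.last_topic = None      # carried over between questions

    def ask(self, question):
        if "weather" in question:
            self.last_topic = "weather"
            return "It's sunny in Edinburgh."
        if question.startswith("what about") and self.last_topic:
            # Reuse the stored topic rather than failing on the fragment.
            return f"Resolving follow-up using topic: {self.last_topic}"
        return "Sorry, I don't understand."

ctx = DialogueContext()
first = ctx.ask("what's the weather today?")
followup = ctx.ask("what about tomorrow?")   # only works because of last_topic
```

A context-free assistant would hit the final “I don’t understand” branch on the follow-up, which is exactly the behaviour Aylett is criticising.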
“Intel’s objective will be to do more with social signal processing. This will help it master context. For example if Jarvis hears that you’re having a romantic chat, it will think better of reminding you to buy more toilet roll at that particular moment in time,” says Aylett.
Of course, a Jarvis bestowed with this kind of social awareness will have to be always listening, a thought that won’t sit well with everyone. But the pay-off would be a virtual assistant that could finally compete with those on the silver screen. And if not Jarvis, it seems as though some other virtual butler will be coming to your aid soon: Google, Apple and Amazon have all recently invested huge sums in the race to be the first to create your new virtual friend.