LAS VEGAS—For many of us, it feels like the future is now—and that’s largely thanks to AI-powered technology such as virtual assistants and robots. AI is on the front lines of new technology, and venture capital investors have certainly taken note, pouring more and more money into the space each year. But there's still a lot to come for the red-hot industry.
That's the view from a panel of AI executives who attended CES. Andrew Shuman, who leads the Cortana Engineering group at Microsoft; Al Lindsay, who's in charge of the Alexa engine software team at Amazon; and Cameron Clayton, who works on IBM's Watson AI platform, took the stage to discuss how their teams are working toward the future of human-bot interaction.
"As an industry, we have a long way to go," Lindsay remarked during the discussion, summing up the tone of the event. The panelists chatted about several future applications of AI, some of which their teams are working on right now.
Robots with personalities
One major theme of the discussion was the future of virtual assistants. They could eventually have more realistic personalities, but is that a desirable trait?
"We're really just going for something that will be factual, humble and a little bit playful and fun at the end of the day," Lindsay said of Alexa. "We keep expanding on personality and adding more capabilities. We recently added some cultural zeitgeist and the ability to have opinions."
The panel agreed that eventually, users can probably expect to have back-and-forth discussions with virtual assistants that last five to 10 minutes, during which the bot carries on its end of the conversation. Of course, a non-human companion with opinions of its own isn't the same thing as an actual person, and it's a fine line companies are careful to acknowledge.
"I think it's really important to have an expectation that you set with users. Users should know that it's software and not a real person," Shuman said. He went on to say that because some do use virtual assistants in place of humans, the Cortana team prioritizes certain algorithms. For instance, it makes sure that contact information for suicide hotlines is always accurate in the bot's database.
One purpose of a virtual assistant is to establish a relationship with the person using it, but that capability is still in the early stages of development. Right now, bots can listen to people and respond to them, but they don't yet anticipate users' needs very well.
"The idea is that software is coming to you and learning about you rather than you having to learn about the software," Shuman remarked. "Understanding who's in the room, where they are looking, simple gestures. … This will start to really augment the technology. Things like that, I think, are really coming next."
One example of how a virtual assistant could eventually help users without explicit instructions: a bot that can differentiate between voices and understand each person's preferences, so it knows which type of music to play when different members of the household say "play music."
Clayton from IBM discussed how Watson is already interacting with users—and in a way that's more serious than musical preferences. Several major hospitals in Boston use Watson to analyze CT scans, search the available medical journals and learn about specific treatments. "It then presents the radiologist with a recommendation of what may be going on, and it may speed up a diagnosis. … Many times it's something the doctor hasn't heard of, because a doctor can't keep up with all clinical trials."
That type of intelligence is developing, but the experts agreed that while one goal in the AI industry is to improve back-and-forth relationships, machines will never be human. "I'm not sure that creating something that tricks people into thinking they're speaking to a real human is the endgame," Lindsay said.