KEYNOTE PANEL – The Future of Conversational Robots

KEYNOTE – Wednesday 26 April 
09:00 – 10:00

Amazon Echo, Google Home, and the Jibo social robot promise to let users perform many useful tasks: controlling internet-connected devices such as home appliances and industrial robots; educating and training users through self-improvement activities; entertaining users with passive and active games and activities; performing transactions such as paying bills and shopping for goods and services; and solving problems such as diagnosing illnesses, debugging and repairing products, calculating taxes, mediating conflicts, and protecting and securing homes and businesses. This panel begins with short product demonstrations, followed by a discussion of questions such as these: What is a conversational robot, and how does it differ from other current interactive technologies? What capabilities do conversational robots have beyond searching the web, answering questions, and presenting information? How can negative perceptions of robots be replaced with positive ones? What technologies, tools, and standards are needed to enable the widespread creation and distribution of content for conversational robots?

Presented by: Leor Grebler, Sunil Vemuri, Roberto Pieraccini

D301: Now Trending: Voice Biometrics

Track D: INFRASTRUCTURE – Wednesday 26 April 
10:45 – 11:30

An overview of voice biometrics, including how the technology uniquely balances security and convenience while bringing a new level of personalization to customer service. We compare the features and benefits of voice biometrics with those of other authentication technologies, and explore use cases and real-world deployments at large financial institutions, telecom providers, government organizations, and more. We also discuss how consumer behavior and preferences affect the adoption of voice biometrics.

Presented by: Advait Deshpande

D302: An Intelligent Assistant for High-Level Task Understanding

Track D: INFRASTRUCTURE – Wednesday 26 April 
11:45 – 12:30

Current intelligent agents (IAs) are limited to specific domains. However, people often engage in activities that span multiple domains, forcing them to manage context and information transfer on their own. An ideal personal IA would be able to discover such (recurring) activities and learn their structure in order to support interaction with the user. The result would be custom applications supporting personal activities. We discuss our work on creating agents that autonomously configure spoken language interfaces for this purpose.

Presented by: Alexander Rudnicky