A202: Unified Natural Language for Messaging Bots and IVRs

Track A: BOTS – Tuesday 25 April 
11:45 – 12:30

Natural language (NL) democratization is happening at a dizzying pace with the advent of text bots (chatbots or virtual agents). Yet most messaging bots cannot deliver the high level of NL accuracy that is table stakes for enterprise IVRs. Learn how a unified technology stack can leverage the same NL models for both chatbots and IVRs and meet or exceed enterprise NL accuracy expectations. Emphasis will be on statistical language models, statistical semantic interpreters, and predictive technology that further enhances NL intent accuracy for both IVRs and bots.

Presented by: Tajinder Singh

C202: In Conversation, There Are No Errors

Track C: VIXD – Tuesday 25 April 
11:45 – 12:30

When interacting with a virtual assistant, the centerpiece of the user’s experience is the conversation itself. This means that each “error” is an opportunity for the designer to forge a meaningful exchange between the virtual assistant and the user. Let’s leverage users’ mental models of how everyday conversations unfold in the negotiation of meaning. Learn a new way of approaching conversation design, in which so-called errors become organic turns in the dialogue—moving conversational design forward naturally.

Presented by: Nandini Stocker

D202: Keys to Measuring an NLU Implementation

Track D: STATISTICAL LANGUAGE MODELS – Tuesday 25 April 
11:45 – 12:30

If your company is looking to implement natural language, what metrics are you going to use for determining success? This talk will compare several natural language deployments to help you understand when NL is a good fit and the metrics you should consider tracking to determine success. We’ll compare and contrast several NL deployments across verticals to understand how those industries created their business case and the deployment results.

Presented by: Jenny Burr

KEYNOTE LUNCH – From IVR to IoT: Digital Transformation in the Real World

Track A: BOTS – Tuesday 25 April 
12:30 – 13:45

Does it seem like all the businesses around you are hurtling toward digital transformation at warp speed, while you’re still trying to figure out where to begin? Don’t worry; you’re not alone! This presentation provides fresh insights into a more holistic (and more human) approach to digital transformation—an approach that has the potential to change your customers’ lives, not just your technologies.

Presented by: Allyson Boudousquie

B203: Voice Biometric Speaker Verification Fused Into Voice-Enabled Devices

Track B: TALKING WITH THINGS – Tuesday 25 April 
13:45 – 14:30

Many speech-enabled products are incapable of recognizing individual users, allowing kids, friends, and complete strangers to control these devices. By fusing speech recognition technology with voice biometrics, only enrolled users can interact with devices and services, subject to restrictions set by the owners. Hear about new deep learning speech recognition technologies that combine accuracy and performance with voice biometric security, protecting users against identity theft and enabling parental controls as the voice revolution continues to take shape.

Presented by: Bernard Brafman

C203: Digital Assistant With Co-Pilot Expertise

Track C: VIXD – Tuesday 25 April 
13:45 – 14:30

Despite state-issued bans, the use of cellphones while driving is on the rise. How can we manage proper access to smartphone services while driving? Learn how smart, context-aware digital assistants with co-pilot expertise estimate the level of driving risk; evaluate driver attention and ability to respond to emergencies; interact with the driver, knowing when to speak or not; and inform and coach drivers to minimize risks and improve their performance.

Presented by: Malgorzata Stys

D203: Why Some Conversational Apps Actually Work, and How to Build One

Track D: STATISTICAL LANGUAGE MODELS – Tuesday 25 April 
13:45 – 14:30

Teams that use rule-based approaches to build demos of voice or chat assistants run into trouble when trying to take those apps to production. This happens because as you go beyond “toy” demo functionality, the rules rapidly become unworkably complex. To handle human language in all its endless variation, statistical NLU models are needed. This talk shows how you can build production-ready conversational apps in ten steps, using machine-learned language models and AI algorithms.

Presented by: Karthik Raghunathan

A203: Voice Services in the World of Bots

Track A: BOTS – Tuesday 25 April 
13:45 – 14:30

Conversational commerce now spans apps, virtual agents, and services offered through smartphones, home electronics, automobiles, public kiosks, and elsewhere. This talk describes real-world use cases that integrate speech processing with natural language understanding, analytics, knowledge management, and other enterprise infrastructure. Hear how conversational commerce is taking shape. Learn how the world of apps, virtual agents, and smartphone services relates to each other. Will they compete or cooperate? Will they integrate into some new powerful capability?

Presented by: Dan Miller

A204: Speech in the Connected Car: Embedded Versus the Cloud

Track A: BOTS – Tuesday 25 April 
14:45 – 15:30

The speech experience in the car is transforming from basic command and control to natural interactions with automated assistants. This presentation explores the current embedded speech experience in the car, both with and without cloud connectivity. We then consider cloud-only solutions. Finally, we discuss the optimum speech experience for the driver and what’s required to achieve it. These implications may also apply to other mobile devices and the IoT.

Presented by: Tom Schalk

C204: Conversational Turn-Taking & Social Robotics

Track C: VIXD – Tuesday 25 April 
14:45 – 15:30

Calling a company and dealing with a system-initiated IVR is the extent of most people’s experience interacting with a spoken dialogue system. This will change with social robots. What is the etiquette when conversing with a robot? Who talks when? What are the boundaries of acceptable things to say? With each evolutionary step forward, the rules of turn-taking shift. We focus on the fascinating challenges faced by the Jibo design team while creating the foundations for a human-robot conversation.

Presented by: Jonathan Bloom