
The AAAI 2018 Workshop on Plan, Activity, and Intent Recognition


PAIR 2018 was one of the best-attended and most successful workshops in the PAIR series. Below is the schedule, along with links to the papers and talks given (where available).

9:00am-10:00am Invited talk by Maria Gini (University of Minnesota)

Voice-activated intelligent personal assistants: challenges and opportunities
The advent of inexpensive voice-activated devices (Amazon Echo, Google Home) is opening up unprecedented opportunities to create personal assistants for a variety of applications and populations. The use of voice is especially promising for enabling people with sensory deprivation (e.g., blind users), limited motor control, or mild memory impairments to have a continuously present assistant for many of their daily needs. However, the state of the art of the software for such devices is far from being able to provide intelligent and personalizable assistance. In this talk we will explore open challenges and opportunities.

10:00am-10:30am Paper Session 1

10:30am-11:00am Break

11:00am-11:30am Poster Session

11:30am-12:30pm Paper Session 2

12:30pm-2:00pm Lunch

2:00pm-3:00pm Invited talk by Shirin Sohrabi (IBM Research)

Plan Recognition as Planning: Theory and Practice
In this talk I will give an overview of our work at IBM Research in applying plan recognition as planning techniques in several applications. I will discuss both the theory and the practical challenges as well as the results and the lessons learned. The talk will focus on the IBM Scenario Planning Advisor (SPA) tool, which is a decision support system that utilizes plan recognition as planning techniques to assist financial organizations in identifying and managing emerging risks.

3:00pm-3:30pm Paper Session 3

3:30pm-4:00pm Break

4:00pm-5:00pm Paper Session 4

5:00pm-6:00pm Invited talk by Philip Cohen (Voicebox Technologies)

Steps Towards Collaborative Dialogue
Dialogue is all the rage nowadays. Most of the approaches currently receiving attention involve deep learning of stimulus-response pairs, or various machine-learned strategies for simple “slot-filling” dialogues in which a system acquires information sufficient to enable it to perform a single action. In this talk, I will argue that these approaches are too simplistic and will not extend to realistic dialogues. In particular, as currently pursued, they will not support dialogues with intelligent systems that can collaborate with their users to help accomplish the user’s goals.

The talk begins with a discussion of collaboration, which revolves around plan recognition skills learned as a child. Such deeply ingrained collaboration strategies will be seen to be at the foundation of dialogue and are expected by human interlocutors. The approach I will advocate for implementing collaborative dialogue systems is to build a (joint) belief-desire-intention architecture that attempts to recognize the user’s plans and determines obstacles to their success. The system then plans and executes a response intended to overcome those obstacles. In so doing, the system needs to reason about, and may plan to alter, users’ mental states, thereby resulting in speech acts. I will demonstrate a system that embodies this type of collaboration, engaging the user in dialogue about travel planning. Importantly, because the system is driven by plans, it is explainable, and thus able to answer “why” questions. The upshot of this approach is a system that assists its users and knows what it is doing/saying.

Zip file of all accepted papers.

Cochairs:

Reuth Mirsky, Primary contact (Ben-Gurion University, dekelr@post.bgu.ac.il),
Sarah Keren (Technion - Israel Institute of Technology, sarahn@technion.ac.il),
Christopher Geib (SIFT LLC, cgeib@sift.net)