Interactive Task Learning From Corrective Feedback

Presented by: Mattias Appelgren, University of Edinburgh
Date: December 15, 2021


The current approach to AI uses large datasets and fixed domains, where systems learn to perform a single desired task to a very high level. However, each system is designed only for that limited task, which means that moving into new areas is costly and AI is only really feasible in domains with enough data. Meanwhile, the world is extremely complex, with constraints constantly shifting and large variation between tasks. For this reason, Interactive Task Learning (ITL) is starting to be studied, with the aim of building systems that are deployed not with a fixed set of skills, but with the ability to learn new tasks through interaction with human users who are not experts in AI. To achieve this goal, we need to develop methods of interaction that are natural for humans, and build systems that can learn from these natural modes of communication.

In this talk I will present my PhD work on this topic, where I look at how a teacher might use verbal corrective feedback to teach an agent about a task. We utilise concepts from formal semantics and pragmatics to reason about how the agent's knowledge should change given the teacher's utterance. In particular, we use the concept of coherence to place constraints on a probabilistic model that infers both the world state and what the teacher intended to convey with the correction. I present experiments testing our hypothesis that verbal corrections allow an agent to learn faster than it would from the word "no" alone, and show that the learning is facilitated by the constraints imposed by coherence.
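To make the role of coherence concrete, here is a minimal toy sketch of the idea that a correction constrains inference. All names here (the candidate rules, the action encoding, the uniform prior) are invented for illustration and are not the model presented in the talk; the sketch only shows how a coherence constraint can zero out interpretations that the teacher's "no" could not coherently be correcting.

```python
# Toy sketch (illustrative only): a correction is coherent only if the
# intended rule is actually violated by the agent's most recent action.
# We use that as a hard constraint in a Bayesian update over candidate rules.

CANDIDATE_RULES = {
    # rule name -> predicate: does `action` violate this rule?
    "red_on_blue": lambda a: a["top"] == "red" and a["base"] != "blue",
    "blue_on_red": lambda a: a["top"] == "blue" and a["base"] != "red",
    "green_on_red": lambda a: a["top"] == "green" and a["base"] != "red",
}

def posterior_over_rules(action, prior):
    """P(rule | teacher said 'no' after `action`).

    Coherence constraint: a correction can only target a rule that the
    corrected action violates, so non-violated rules get zero mass.
    """
    scores = {}
    for rule, violated in CANDIDATE_RULES.items():
        scores[rule] = prior[rule] if violated(action) else 0.0
    total = sum(scores.values())
    # If no rule explains the correction, fall back to the prior.
    return {r: (s / total if total else prior[r]) for r, s in scores.items()}

prior = {r: 1 / 3 for r in CANDIDATE_RULES}
# The agent put a red block on a green block; the teacher says "no".
post = posterior_over_rules({"top": "red", "base": "green"}, prior)
```

In this toy run the correction is only coherent with the "red_on_blue" rule, so the posterior concentrates all its mass there; with richer utterances (e.g. "no, put it on a blue block") the same constraint prunes even more hypotheses, which is the intuition behind learning faster than from "no" alone.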

Time: 13.15-15.00

Location: In person at J577 or via Zoom