Presented by: Vladislav Maraev and Bill Noble
Duration: 2 hours
On: 26 Feb, 2020
Location: Gothenburg
We investigate how useful BERT is for dialogue act recognition. We analyse the benefit of BERT's pre-training procedure and the importance of fine-tuning in the dialogue setting. To confirm that the model learns to represent dialogical features, we look at how it uses laughter, a phenomenon specific to dialogue, and analyse where laughter is most helpful for dialogue act recognition.
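As a rough illustration of the kind of setup the abstract describes, the sketch below shows one plausible way to prepare dialogue turns for a BERT-style dialogue act classifier, with laughter marked by an explicit token so the model can attend to it. The label set, the `<laughter>` marker, and the context-pairing scheme are all illustrative assumptions, not the speakers' actual method.

```python
# Hypothetical data preparation for BERT-based dialogue act recognition.
# The labels, the <laughter> marker, and the [CLS]/[SEP] framing are
# illustrative assumptions; the talk's actual setup may differ.

DA_LABELS = {"statement": 0, "question": 1, "backchannel": 2}

def mark_laughter(utterance: str, laughed: bool) -> str:
    # Prepend an explicit laughter token so laughter is visible
    # to the model as part of the input sequence.
    return ("<laughter> " + utterance) if laughed else utterance

def build_example(context: str, utterance: str, laughed: bool, label: str) -> dict:
    # Pair the target utterance with the preceding turn, BERT-style:
    # [CLS] context [SEP] utterance [SEP]
    text = f"[CLS] {context} [SEP] {mark_laughter(utterance, laughed)} [SEP]"
    return {"text": text, "label": DA_LABELS[label]}

ex = build_example("how did the demo go", "it crashed twice", True, "statement")
print(ex["text"])
# → [CLS] how did the demo go [SEP] <laughter> it crashed twice [SEP]
```

In a real pipeline, `text` would be tokenized and fed to a pre-trained BERT model fine-tuned for sequence classification over the dialogue act labels.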