Can predicted intents calibrate correct answers in open-domain dialogues?
Whether predicted intents can refine answer selection in open-domain dialogues is a question of significant interest.
The mission of VOXReality is to advance context-aware task-oriented dialogue systems. In this context, Centrum Wiskunde & Informatica (CWI) has investigated whether predicted intent labels can calibrate answer labels in open-domain dialogues.
Spearheaded by the Distributions and Interactive Systems (DIS) group, this work has culminated in the publication of a paper titled “Intent-Calibrated Self-Training for Answer Selection in Open-domain Dialogues” in Transactions of the Association for Computational Linguistics (TACL).
This publication is evidence of the progress made in understanding the interplay between predicted intent labels and calibrated answer selection. The full paper is available here.
Answer selection models have achieved notable success through training on extensive labelled datasets. However, amassing large-scale labelled data is both labour-intensive and time-consuming. This challenge is further exacerbated for Open-Domain Systems (ODSs), which must decipher users’ information needs from unstructured, open-ended goals (Huang et al., 2020).
The concept of user intents, encompassing a taxonomy of utterances, has been introduced to provide guidance to the information-seeking process (Qu et al., 2018, 2019a; Yang et al., 2020). When a potential answer (PA) does not satisfy the intent of the original question (OQ), the subsequent intent of the user is likely to be an information request (IR). For instance, if a user queries, “Can you direct me to a website for more information?” their intent is classified as IR. Overlooking the intent label IR may result in providing an answer that fails to fulfil the user’s request.
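As a toy illustration of how an intent taxonomy can flag unsatisfied requests: the sketch below uses the OQ, PA, and IR labels mentioned above plus one assumed extra label for contrast; it is a hypothetical simplification, not the full taxonomy from the cited work.

```python
# Hypothetical subset of an intent taxonomy. OQ/PA/IR appear in the text;
# "PF" (positive feedback) is an assumed example label for illustration.
INTENTS = {
    "OQ": "original question",
    "PA": "potential answer",
    "IR": "information request",
    "PF": "positive feedback",
}

def answer_needs_followup(predicted_intent: str) -> bool:
    """If the user's next predicted intent is IR, the previous answer
    likely did not satisfy the original question."""
    return predicted_intent == "IR"
```

For instance, after “Can you direct me to a website for more information?”, a predicted intent of `IR` would signal that the preceding answer should be recalibrated rather than accepted as final.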
We introduce a novel approach known as Intent-Calibrated Self-Training (ICAST) to enhance answer label calibration within a self-training framework. Specifically, our proposal involves leveraging predicted intent labels to calibrate answer labels. The ICAST method encompasses the following steps:
- Teacher Model Training: A teacher model is trained on labelled data (D^l) to predict pseudo intent labels for unlabelled data (D^u).
- Intent-Calibrated Pseudo Labelling: High-quality intent labels are identified using intent confidence gain, which then guides sample selection. The answer labels are calibrated by feeding the selected intent labels as supplementary inputs to the answer selection model.
- Student Model Training: The student model is trained on both the labelled and the pseudo-labelled data.
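The three steps above can be sketched as a single teacher-student round. This is a simplified, hypothetical rendering: the function names, the definition of intent confidence gain (answer confidence with vs. without the predicted intent), and the threshold are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of one ICAST round (illustrative names and threshold).

def icast_round(teacher_intent, teacher_answer, labelled, unlabelled,
                gain_threshold=0.1):
    """One teacher-student round of Intent-Calibrated Self-Training.

    teacher_intent(ctx)         -> (intent_label, intent_confidence)
    teacher_answer(ctx, intent) -> (answer_label, confidence);
                                   intent=None means "no intent input".
    """
    pseudo_labelled = []
    for ctx in unlabelled:
        intent, _ = teacher_intent(ctx)
        # Intent confidence gain (assumed definition here): how much does
        # conditioning on the predicted intent raise the answer confidence?
        _, base_conf = teacher_answer(ctx, None)
        answer, calibrated_conf = teacher_answer(ctx, intent)
        if calibrated_conf - base_conf >= gain_threshold:
            # Keep only samples where the intent yields a measurable gain.
            pseudo_labelled.append((ctx, intent, answer))
    # The student model would then be retrained on this combined set.
    return labelled + pseudo_labelled
```

In a real pipeline the teacher would be a trained intent classifier and answer selector, and the round would be repeated with the student becoming the next teacher.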
Figure (b) illustrates the Intent-Calibrated Self-Training (ICAST) process: training the teacher model, intent-calibrated pseudo labelling, and student model training. In contrast to the basic teacher-student training depicted in Figure (a), ICAST improves the quality of pseudo-labelled data, resulting in significant performance gains.
In summary, ICAST is rooted in teacher-student self-training and intent-calibrated answer selection: a teacher model trained on labelled data predicts intent labels for unlabelled data, high-quality intents selected via intent confidence gain improve pseudo answer label prediction, and a student model is retrained on the labelled and pseudo-labelled data.
Extensive experimentation on two benchmark datasets demonstrates that ICAST outperforms baselines even with minimal labelled data (1%, 5%, and 10% of the training set). In future work, we aim to explore additional predictable dialogue contexts beyond intents, such as user profiles.
Thanks to this research, VOXReality is well positioned to build on these insights to advance context-aware task-oriented dialogue systems, driving further inquiry, innovation, and practical application within the project.
- Deng, W., Pei, J., Ren, Z., Chen, Z., & Ren, P. (2023). Intent-calibrated Self-training for Answer Selection in Open-domain Dialogues. arXiv preprint arXiv:2307.06703.
- Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dialog systems. ACM Transactions on Information Systems.
- Chen Qu, Liu Yang, W. Bruce Croft, Johanne R. Trippas, Yongfeng Zhang, and Minghui Qiu. 2018. Analyzing and characterizing user intent in information-seeking conversations. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval.
- Chen Qu, Liu Yang, W. Bruce Croft, Yongfeng Zhang, Johanne R. Trippas, and Minghui Qiu. 2019a. User intent prediction in information-seeking conversations. In Human Information Interaction and Retrieval.
- Liu Yang, Minghui Qiu, Chen Qu, Cen Chen, Jiafeng Guo, Yongfeng Zhang, W. Bruce Croft, and Haiqing Chen. 2020. IART: Intent-aware response ranking with transformers in information-seeking conversation systems. In The Web Conference.