We propose Chain-of-Questions, a framework that trains a model to robustly answer multistep questions by generating and answering sub-questions. We obtain supervision for sub-questions from human-annotated question decomposition meaning representation (QDMR), but QDMR does not include annotated answers to sub-questions. To overcome this technical challenge, we treat sub-answers as latent variables and infer them with a novel dynamic mixture of Hard-EM and MAPO. Chain-of-Questions is effective and robust, greatly outperforming strong neuro-symbolic methods by 9.0 F1 on a DROP contrast set and GPT-3.5 by 24.3 F1 on a HotpotQA adversarial set.
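To make the latent-variable idea concrete, the Hard-EM part of such training can be sketched as below. This is only an illustrative sketch in Python against a Hugging Face-style seq2seq interface, not the paper's implementation: the `candidate_subanswers` list, the way sub-questions and sub-answers are concatenated into a target string, and the `hard_em_step` helper are all assumptions made here for illustration.

```python
# Illustrative Hard-EM step over latent sub-answers (NOT the paper's code).
# Assumes a Hugging Face-style seq2seq `model` whose forward pass returns an
# object with a `.loss` field, and a tokenizer for both inputs and targets.
import torch

def hard_em_step(model, tokenizer, optimizer, question,
                 qdmr_subquestions, candidate_subanswers, final_answer):
    """E-step: among candidate sub-answer chains, keep the one the model
    currently scores highest. M-step: train on the target built from it."""
    inputs = tokenizer(question, return_tensors="pt")
    best_loss, best_labels = None, None
    with torch.no_grad():
        for subanswers in candidate_subanswers:  # each is one latent assignment
            # Hypothetical target format: sub-question/sub-answer pairs, then the final answer.
            target = " ".join(f"{q} {a}" for q, a in zip(qdmr_subquestions, subanswers))
            target += f" {final_answer}"
            labels = tokenizer(target, return_tensors="pt").input_ids
            loss = model(**inputs, labels=labels).loss  # negative log-likelihood
            if best_loss is None or loss < best_loss:
                best_loss, best_labels = loss, labels
    # M-step: a single gradient update on the best latent assignment only.
    optimizer.zero_grad()
    model(**inputs, labels=best_labels).loss.backward()
    optimizer.step()
```

The abstract's dynamic mixture additionally interleaves this with MAPO-style policy-gradient updates over sampled sub-answers; that component is not shown here.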
CITATION STYLE
Zhu, W., Thomason, J., & Jia, R. (2023). Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 8845–8860). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.547