Retrieval In Decoder benefits generative models for explainable complex question answering

Citations: 0
Readers (Mendeley): 40

Abstract

Large-scale Language Models (LLMs) using Chain-of-Thought prompting demonstrate exceptional performance on a variety of tasks. However, factual hallucination remains a significant obstacle in practical applications. Prevailing retrieval-augmented methods treat the retriever and the generator as separate components, which, through intensive supervised training, inadvertently caps the generator's capabilities at those of the retriever. In this work, we propose RID, an unsupervised Retrieval In Decoder framework for multi-granularity decoding that integrates retrieval directly into the decoding process of generative models. It dynamically adjusts the decoding granularity according to retrieval outcomes and promptly corrects the decoding direction through the retrieval results' direct impact on next-token prediction. Moreover, we introduce a reinforcement-learning-driven knowledge distillation method for adaptive explanation generation, making the approach better suited to Small-scale Language Models (SLMs). Experimental results on six public benchmarks show that RID outperforms popular LLMs and existing retrieval-augmented methods, demonstrating its effectiveness across model scales and verifying its applicability and scalability.
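The abstract describes retrieval happening inside the decoding loop, with retrieval outcomes steering the next token, but does not spell out the mechanism. The toy below is only a minimal sketch of that general idea under assumed details: toy_retrieve, toy_lm_logits, the BOOST bonus, and the two-sentence corpus are all illustrative inventions, not the authors' method. At every step the partial generation serves as the retrieval query, and next-token logits are biased toward tokens supported by the retrieved evidence.

import math
import random

# Illustrative only: RID's actual decoding rule is not specified in this
# abstract. All names and constants here are hypothetical.
VOCAB = ["paris", "is", "the", "capital", "of", "france", "berlin", "germany", "<eos>"]
CORPUS = [
    "paris is the capital of france",
    "berlin is the capital of germany",
]
BOOST = 2.0  # assumed additive logit bonus for tokens backed by retrieval

def toy_retrieve(query_tokens, corpus):
    """Return the passage with the largest token overlap with the query."""
    return max(corpus, key=lambda p: len(set(query_tokens) & set(p.split())))

def toy_lm_logits(prefix):
    """Stand-in for a language model: deterministic pseudo-random logits."""
    rng = random.Random(" ".join(prefix))
    return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def decode(question, max_len=8):
    prefix = question.split()
    generated = []
    for _ in range(max_len):
        # Retrieval happens *inside* the decoding loop: the partial
        # generation itself forms the query, so each retrieval outcome
        # can immediately steer the next token.
        evidence = toy_retrieve(prefix + generated, CORPUS)
        supported = set(evidence.split())
        logits = toy_lm_logits(prefix + generated)
        biased = [l + (BOOST if tok in supported else 0.0)
                  for tok, l in zip(VOCAB, logits)]
        probs = softmax(biased)
        next_tok = VOCAB[max(range(len(VOCAB)), key=probs.__getitem__)]
        if next_tok == "<eos>":
            break
        generated.append(next_tok)
    return " ".join(generated)

print(decode("what is the capital of france"))

The multi-granularity aspect the abstract mentions (adjusting decoding granularity based on retrieval outcomes) is omitted from this sketch for brevity.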



Citation (APA)
Feng, J., Wang, Q., Qiu, H., & Liu, L. (2025). Retrieval In Decoder benefits generative models for explainable complex question answering. Neural Networks, 181. https://doi.org/10.1016/j.neunet.2024.106833


Readers' Seniority
Researcher: 25 (83%)
PhD / Post grad / Masters / Doc: 5 (17%)

Readers' Discipline
Computer Science: 27 (90%)
Social Sciences: 3 (10%)
