arxiv:2401.10225

ChatQA: Building GPT-4 Level Conversational QA Models

Published on Jan 18 · Submitted by akhaliq on Jan 19
#3 Paper of the day

Abstract

In this work, we introduce ChatQA, a family of conversational question answering (QA) models that achieve GPT-4-level accuracy. Specifically, we propose a two-stage instruction tuning method that significantly improves zero-shot conversational QA results from large language models (LLMs). To handle retrieval in conversational QA, we fine-tune a dense retriever on a multi-turn QA dataset, which yields results comparable to the state-of-the-art query rewriting model while substantially reducing deployment cost. Notably, our ChatQA-70B outperforms GPT-4 in terms of average score on 10 conversational QA datasets (54.14 vs. 53.90), without relying on any synthetic data from OpenAI GPT models.
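To illustrate the retrieval idea from the abstract, here is a minimal sketch of conversational dense retrieval: rather than rewriting the latest question into a standalone query, the full multi-turn dialogue is encoded directly by a dense retriever and matched against passage embeddings. This is not the paper's released code; the bi-encoder checkpoint, the dialogue formatting, and the toy passages below are placeholder assumptions (ChatQA fine-tunes its own retriever on multi-turn QA data).

```python
# Sketch of multi-turn dense retrieval, assuming a generic sentence-transformers
# bi-encoder as a stand-in for a retriever fine-tuned on multi-turn QA data.
import numpy as np
from sentence_transformers import SentenceTransformer

# Placeholder model; ChatQA's actual retriever checkpoint is not used here.
retriever = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

documents = [
    "ChatQA is a family of conversational QA models.",
    "Dense retrievers encode queries and passages into a shared vector space.",
    "Query rewriting reformulates a follow-up question into a standalone query.",
]
doc_embeddings = retriever.encode(documents, normalize_embeddings=True)

# The follow-up question is ambiguous on its own, so the whole dialogue
# history is concatenated into a single query for the retriever.
dialogue = [
    "User: What is ChatQA?",
    "Assistant: It is a family of conversational QA models.",
    "User: How does it handle retrieval?",
]
query = " ".join(dialogue)
query_embedding = retriever.encode([query], normalize_embeddings=True)[0]

# Rank passages by cosine similarity (dot product of normalized embeddings).
scores = doc_embeddings @ query_embedding
best = int(np.argmax(scores))
print(f"Top passage: {documents[best]} (score={scores[best]:.3f})")
```

This reflects the trade-off noted in the abstract: encoding the dialogue directly with a multi-turn-tuned retriever avoids running a separate query rewriting model at inference time, which is where the deployment-cost savings come from.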

Models citing this paper 55

Datasets citing this paper 2

Spaces citing this paper 11

Collections including this paper 19