
This is a fine-tuned LLaMA-2 (7B) model. Please accept the LLaMA-2 license agreement before downloading it. The model works with WikiChat v1.0.
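The card itself does not include usage code; below is a minimal loading sketch using Hugging Face transformers, assuming your Hub account has been granted access to the gated LLaMA-2 weights. The generation settings (greedy decoding, `max_new_tokens`) are illustrative only, not the settings WikiChat itself uses; the official pipeline is in the GitHub repository linked below.

```python
MODEL_ID = "stanford-oval/Llama-2-7b-WikiChat"


def load_model():
    """Download the checkpoint (roughly 13 GB in BF16) and return (tokenizer, model).

    Imports are kept inside the function so that importing this sketch
    does not require torch/transformers or trigger the download.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the BF16 tensors in this repo
        device_map="auto",
    )
    return tokenizer, model


def generate(tokenizer, model, prompt: str, max_new_tokens: int = 256) -> str:
    """Greedy-decode a single prompt and return the decoded text."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Note that this loads the raw fine-tuned model only; the retrieval and fact-checking stages of WikiChat are implemented separately in the repository.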

Refer to the following for more information:

GitHub repository: https://github.com/stanford-oval/WikiChat

Paper: WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia


WikiChat

Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia

Online demo: https://wikichat.genie.stanford.edu

Figure: the WikiChat pipeline.

Model size: 6.74B parameters
Tensor type: BF16 (Safetensors)
