---
title: README
emoji: 🐢
colorFrom: red
colorTo: gray
sdk: static
pinned: false
---

| Artefact | Link | Type | Details |
|---|---|---|---|
| 🥇 Falcon-40B | Here | pretrained model | 40B parameters trained on 1,000 billion tokens. |
| Falcon-40B-Instruct | Here | instruction/chat model | Falcon-40B finetuned on the Baize dataset. |
| 🥈 Falcon-7B | Here | pretrained model | 6.7B parameters trained on 1,500 billion tokens. |
| Falcon-7B-Instruct | Here | instruction/chat model | Falcon-7B finetuned on the Baize, GPT4All, and GPTeacher datasets. |
| 📀 RefinedWeb | Here | pretraining web dataset | ~600 billion "high-quality" tokens. |
| Falcon-RW-1B | Here | pretrained model | 1.3B parameters trained on 350 billion tokens. |
| Falcon-RW-7B | Here | pretrained model | 7.5B parameters trained on 350 billion tokens. |