|
--- |
|
datasets: |
|
- tiiuae/falcon-refinedweb |
|
language: |
|
- en |
|
inference: false |
|
license: apache-2.0 |
|
new_version: tiiuae/falcon-11B |
|
base_model: tiiuae/falcon-7b |
|
tags: |
|
- llama-cpp |
|
- gguf-my-repo |
|
--- |
|
|
|
# Triangle104/falcon-7b-Q5_K_S-GGUF |
|
This model was converted to GGUF format from [`tiiuae/falcon-7b`](https://huggingface.co/tiiuae/falcon-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
|
Refer to the [original model card](https://huggingface.co/tiiuae/falcon-7b) for more details on the model. |
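
If you prefer to fetch the quantized file programmatically rather than through the llama.cpp `--hf-repo` flags below, here is a minimal sketch using the `huggingface_hub` client (repo and file names as listed in this repo):

```python
from huggingface_hub import hf_hub_download

# Downloads to the local HF cache and returns the file path.
gguf_path = hf_hub_download(
    repo_id="Triangle104/falcon-7b-Q5_K_S-GGUF",
    filename="falcon-7b-q5_k_s.gguf",
)
print(gguf_path)
```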
|
|
|
--- |
|
## Model details
|
Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
|
|
|
Paper coming soon.
|
|
|
To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading this great blog post from HF!
|
### Why use Falcon-7B?
|
|
|
- It outperforms comparable open-source models (e.g., MPT-7B, StableLM, RedPajama, etc.), thanks to being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. See the OpenLLM Leaderboard.
- It features an architecture optimized for inference, with FlashAttention (Dao et al., 2022) and multiquery attention (Shazeer et al., 2019); see the sketch after this list.
- It is made available under a permissive Apache 2.0 license allowing for commercial use, without any royalties or restrictions.
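
Conceptually, multiquery attention shares a single key/value head across all query heads, shrinking the KV cache (and thus inference memory traffic) by a factor of the head count. Below is a minimal, illustrative PyTorch sketch, not Falcon's actual implementation (which fuses the projections and adds rotary embeddings, a causal mask, and FlashAttention):

```python
import torch

def multiquery_attention(x, wq, wk, wv, n_heads, head_dim):
    """One shared key/value head serves every query head (causal mask omitted)."""
    B, T, _ = x.shape
    q = (x @ wq).view(B, T, n_heads, head_dim).transpose(1, 2)  # (B, H, T, D)
    k = (x @ wk).view(B, T, 1, head_dim).transpose(1, 2)        # (B, 1, T, D), shared
    v = (x @ wv).view(B, T, 1, head_dim).transpose(1, 2)        # (B, 1, T, D), shared
    scores = q @ k.transpose(-2, -1) / head_dim**0.5            # k broadcasts over heads
    out = torch.softmax(scores, dim=-1) @ v                     # v broadcasts too
    return out.transpose(1, 2).reshape(B, T, n_heads * head_dim)

# Toy check with Falcon-7B-like dimensions (d_model = 4544 = 71 heads x 64):
B, T, H, D = 1, 8, 71, 64
x = torch.randn(B, T, H * D)
wq, wk, wv = torch.randn(H * D, H * D), torch.randn(H * D, D), torch.randn(H * D, D)
print(multiquery_attention(x, wq, wk, wv, H, D).shape)  # torch.Size([1, 8, 4544])
```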
|
|
|
This is a raw, pretrained model, which should be further finetuned for most use cases. If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at Falcon-7B-Instruct.
|
|
|
Looking for an even more powerful model? Falcon-40B is Falcon-7B's big brother!
|
|
|
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # halves memory use versus float32
    trust_remote_code=True,      # Falcon ships custom modelling code
    device_map="auto",           # spread layers across available devices
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
|
|
|
Falcon LLMs require PyTorch 2.0 for use with transformers!
|
|
|
For fast inference with Falcon, check out Text Generation Inference! Read more in this blog post.
|
|
|
You will need at least 16GB of memory to swiftly run inference with Falcon-7B: the ~7B parameters alone occupy about 14GB in bfloat16, before activations and cache.
|
## Model Card for Falcon-7B

### Model Details

#### Model Description
|
|
|
- **Developed by:** https://www.tii.ae;
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **License:** Apache 2.0.
|
|
|
#### Model Source
|
|
|
- **Paper:** coming soon.
|
|
|
### Uses

#### Direct Use
|
|
|
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.).
|
#### Out-of-Scope Use
|
|
|
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful. |
|
### Bias, Risks, and Limitations
|
|
|
Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpus representative of the web, it will carry the stereotypes and biases commonly encountered online.
|
#### Recommendations
|
|
|
We recommend that users of Falcon-7B consider finetuning it for their specific set of tasks of interest, for example with parameter-efficient methods (sketched below), and that guardrails and appropriate precautions be taken for any production use.
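
Parameter-efficient finetuning keeps the pretrained weights frozen and trains small adapter matrices instead. A hedged sketch using the `peft` library (the hyperparameters are illustrative, not a tuned recipe; `query_key_value` is the name of Falcon's fused attention projection):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)

lora_config = LoraConfig(
    r=16,                                # adapter rank
    lora_alpha=32,                       # scaling factor
    target_modules=["query_key_value"],  # Falcon's fused QKV projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a small fraction of the 7B weights
```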
|
### How to Get Started with the Model
|
|
|
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # halves memory use versus float32
    trust_remote_code=True,      # Falcon ships custom modelling code
    device_map="auto",           # spread layers across available devices
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
|
|
|
### Training Details

#### Training Data
|
|
|
Falcon-7B was trained on 1,500B tokens of RefinedWeb, a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile (Gao et al., 2020).
|
| Data source | Fraction | Tokens | Sources |
|---|---|---|---|
| RefinedWeb-English | 79% | 1,185B | massive web crawl |
| Books | 7% | 110B | |
| Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
| Code | 3% | 45B | |
| RefinedWeb-French | 3% | 45B | massive web crawl |
| Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. |
|
|
|
The data was tokenized with the Falcon-7B/40B tokenizer. |
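
For example, to stream a sample of the public RefinedWeb portion and reproduce the tokenization (a sketch: streaming avoids downloading the full dataset, and we assume the text column is named `content`, per the dataset card):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")  # 65,024-entry vocabulary
dataset = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)

sample = next(iter(dataset))
print(len(tokenizer(sample["content"])["input_ids"]), "tokens in first document")
```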
|
#### Training Procedure
|
|
|
Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO. |
|
#### Training Hyperparameters
|
| Hyperparameter | Value | Comment |
|---|---|---|
| Precision | bfloat16 | |
| Optimizer | AdamW | |
| Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2304 | 30B tokens ramp-up |
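
For illustration, the learning-rate schedule above can be approximated as follows (a sketch built only from the table values; the exact bookkeeping used in training is not public):

```python
import math

PEAK_LR, FINAL_LR = 6e-4, 1.2e-5
WARMUP_TOKENS, TOTAL_TOKENS = 4e9, 1500e9

def lr_at(tokens_seen: float) -> float:
    """Linear warm-up over 4B tokens, then cosine decay to 1.2e-5."""
    if tokens_seen < WARMUP_TOKENS:
        return PEAK_LR * tokens_seen / WARMUP_TOKENS
    progress = (tokens_seen - WARMUP_TOKENS) / (TOTAL_TOKENS - WARMUP_TOKENS)
    return FINAL_LR + 0.5 * (PEAK_LR - FINAL_LR) * (1 + math.cos(math.pi * progress))

print(lr_at(2e9), lr_at(4e9), lr_at(1500e9))  # mid warm-up, peak, end of training
```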
|
#### Speeds, Sizes, Times
|
|
|
Training happened in early March 2023 and took about two weeks. |
|
### Evaluation
|
|
|
Paper coming soon. |
|
|
|
See the OpenLLM Leaderboard for early results. |
|
### Technical Specifications

#### Model Architecture and Objective
|
|
|
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). |
|
|
|
The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences (see the sketch of the decoder block after this list):

- **Positional embeddings:** rotary (Su et al., 2021);
- **Attention:** multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
- **Decoder-block:** parallel attention/MLP with a single layer norm.
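
To make the last point concrete, here is a simplified sketch of a parallel decoder block (standard multi-head attention stands in for Falcon's multiquery/rotary/FlashAttention stack, and the causal mask is omitted):

```python
import torch.nn as nn

class ParallelBlock(nn.Module):
    """Attention and MLP read the SAME layer-norm output and are summed
    into the residual stream, instead of running one after the other."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)  # the single layer norm
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x):
        h = self.ln(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        return x + attn_out + self.mlp(h)  # parallel branches, one residual add
```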
|
|
|
| Hyperparameter | Value | Comment |
|---|---|---|
| Layers | 32 | |
| d_model | 4544 | Increased to compensate for multiquery |
| head_dim | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
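
As a back-of-the-envelope check that these figures add up to roughly 7B parameters (approximate: it ignores biases, layer norms, and the exact fused-QKV layout):

```python
d_model, n_layers, vocab, head_dim = 4544, 32, 65024, 64

embed = vocab * d_model                                # token embeddings
attn = 2 * d_model * d_model + 2 * d_model * head_dim  # Q/output proj + shared K/V
mlp = 2 * d_model * (4 * d_model)                      # up- and down-projection
total = embed + n_layers * (attn + mlp)
print(f"~{total / 1e9:.2f}B parameters")               # ~6.92B
```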
|
#### Compute Infrastructure

**Hardware**
|
|
|
Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances. |
|
**Software**
|
|
|
Falcon-7B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
|
### Citation
|
|
|
Paper coming soon. In the meantime, you can use the following information to cite:
|
|
|
```bibtex
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```
|
|
|
To learn more about the pretraining dataset, see the RefinedWeb paper.
|
|
|
```bibtex
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```
|
|
|
### License
|
|
|
Falcon-7B is made available under the Apache 2.0 license. |
|
### Contact
|
|
|
[email protected] |
|
|
|
--- |
|
## Use with llama.cpp |
|
Install llama.cpp through brew (works on Mac and Linux).
|
|
|
```bash
brew install llama.cpp
```
|
Invoke the llama.cpp server or the CLI. |
|
|
|
### CLI: |
|
```bash |
|
llama-cli --hf-repo Triangle104/falcon-7b-Q5_K_S-GGUF --hf-file falcon-7b-q5_k_s.gguf -p "The meaning to life and the universe is" |
|
``` |
|
|
|
### Server: |
|
```bash |
|
llama-server --hf-repo Triangle104/falcon-7b-Q5_K_S-GGUF --hf-file falcon-7b-q5_k_s.gguf -c 2048 |
|
``` |
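
Once the server is up (it listens on port 8080 by default), you can query its completion endpoint, e.g. from Python (a minimal sketch; adjust host and port if you changed the defaults):

```python
import requests

resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": "The meaning to life and the universe is", "n_predict": 64},
)
print(resp.json()["content"])  # the generated continuation
```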
|
|
|
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
|
|
|
Step 1: Clone llama.cpp from GitHub. |
|
```bash
git clone https://github.com/ggerganov/llama.cpp
```
|
|
|
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
|
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
|
|
|
Step 3: Run inference through the main binary. |
|
```bash
./llama-cli --hf-repo Triangle104/falcon-7b-Q5_K_S-GGUF --hf-file falcon-7b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
|
or |
|
```bash
./llama-server --hf-repo Triangle104/falcon-7b-Q5_K_S-GGUF --hf-file falcon-7b-q5_k_s.gguf -c 2048
```
|
|