# dolly-v2-12b: sharded 8bit checkpoint
This is a sharded checkpoint (~4 GB shards) of the [databricks/dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b) model, serialized in 8-bit precision using `bitsandbytes`.
Refer to the original model card for all details about the model itself. For more info on loading 8-bit models, refer to the example repo and/or the `transformers` 4.28.0 release notes.
- total model size is only ~12.5 GB!
- this enables low-RAM loading, e.g., on Colab :)
- update: generation speed can be greatly improved by setting `use_cache=True` and generating via contrastive search (a sketch is included at the end of Basic Usage below); example notebook here
## Basic Usage
Install/upgrade `transformers`, `accelerate`, and `bitsandbytes`. For this to work you must have `transformers>=4.28.0` and `bitsandbytes>0.37.2`:
```bash
pip install -U -q transformers bitsandbytes accelerate
```
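If you want to confirm your environment meets the version requirements above before loading, a minimal sanity-check sketch (assuming `packaging`, which ships as a `transformers` dependency) could look like:

```python
from importlib.metadata import version
from packaging.version import Version  # packaging is installed with transformers

# verify the minimum versions called out above
assert Version(version("transformers")) >= Version("4.28.0"), "need transformers>=4.28.0"
assert Version(version("bitsandbytes")) > Version("0.37.2"), "need bitsandbytes>0.37.2"
```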
Load the model. Since the checkpoint is already serialized in 8-bit precision, you don't need to do anything special:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/dolly-v2-12b-sharded-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# the 8-bit quantization config saved with the checkpoint is picked up
# automatically, so no extra loading arguments are needed
model = AutoModelForCausalLM.from_pretrained(model_name)
```
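As a follow-up to the speed note above, here is a minimal generation sketch using contrastive search with `use_cache=True`. The prompt is purely illustrative, and the `penalty_alpha`/`top_k` values are the commonly cited defaults from the `transformers` docs, not values tuned for this model:

```python
prompt = "Explain what model sharding is in one paragraph.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    penalty_alpha=0.6,  # contrastive search degeneration penalty
    top_k=4,            # candidate pool size for contrastive search
    use_cache=True,     # reuse past key/values for much faster decoding
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```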