---
inference: false
license: cc
datasets:
  - VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# blackmount8/open-llama-13B-open-instruct-ct2-int8_float16

`int8_float16` version of [VMware/open-llama-13b-open-instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct), quantized using CTranslate2.
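
The conversion itself is not shown in this card; for reference, a repository like this can be produced with CTranslate2's Transformers converter. A minimal sketch, with a hypothetical output directory name:

```python
from ctranslate2.converters import TransformersConverter

# Convert the original Transformers checkpoint to the CTranslate2 format,
# quantizing the weights to int8_float16. The output directory name is illustrative.
converter = TransformersConverter("VMware/open-llama-13b-open-instruct")
converter.convert(
    "open-llama-13b-open-instruct-ct2-int8_float16",
    quantization="int8_float16",
)
```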

## VMware/open-llama-13B-open-instruct

Instruction-tuned version of the fully trained OpenLLaMA 13B model. The model is open for **commercial use**.

**NOTE**: The model was trained using the Alpaca prompt template.

**NOTE**: The fast tokenizer produces an incorrect encoding; set `use_fast=False` when instantiating the tokenizer.
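
The card only names the template, so the standard Alpaca format is reproduced below for reference. This is a sketch assuming the common Alpaca recipe; the exact wrapper text used during training is not stated here.

```python
# Standard Alpaca prompt template (assumed; not stated explicitly in this card).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

prompt = PROMPT_TEMPLATE.format(instruction="What is the meaning of Stonehenge?")
```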

## License

CC, per the `license` field in the metadata above.

## Nomenclature

- Model: Open-llama
- Model size: 13B parameters
- Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)

## Use in CTranslate2

```python
import ctranslate2
from transformers import AutoTokenizer

model_name = "blackmount8/open-llama-13b-open-instruct-ct2-int8_float16"

# Load the slow tokenizer, since the fast one mis-encodes this model (see the note above).
tokenizer = AutoTokenizer.from_pretrained(
    model_name, use_fast=False, padding_side="left", truncation_side="left"
)

# ctranslate2.Generator expects a local directory containing the converted model;
# download the repository first if needed (see the note after this example).
model = ctranslate2.Generator(model_name, device="auto", compute_type="int8_float16")

input_text = ["What is the meaning of stonehenge?", "Hello mate!"]

# generate_batch consumes token strings rather than ids, so tokenize and convert.
input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids
input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids]

outputs = model.generate_batch(input_tokens, max_length=128)

# Keep the first hypothesis for each prompt and decode it back to text.
output_tokens = [ele.sequences_ids[0] for ele in outputs]
output = tokenizer.batch_decode(output_tokens)

print(output)
```
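
Note that `ctranslate2.Generator` loads from a local directory rather than directly from the Hub, so the repository files need to be fetched first. A minimal sketch using `huggingface_hub` (variable names are illustrative):

```python
from huggingface_hub import snapshot_download

import ctranslate2

# Download the converted model files from the Hub into the local cache
# and get back the local directory path.
model_path = snapshot_download("blackmount8/open-llama-13b-open-instruct-ct2-int8_float16")

# Point the generator at the local directory.
model = ctranslate2.Generator(model_path, device="auto", compute_type="int8_float16")
```

For sampling instead of greedy decoding, `generate_batch` also accepts parameters such as `sampling_topk` and `sampling_temperature`.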