OpenCALM-7B - 8bit

An 8-bit quantized version of OpenCALM-7B by CyberAgent, Inc., released under CC BY-SA 4.0.

When using this quantized model, please be sure to credit the original model and its developers.
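
For reference, the sketch below shows how a comparable 8-bit checkpoint can be produced from the original cyberagent/open-calm-7b weights with bitsandbytes via transformers. It illustrates the general approach rather than the exact procedure used for this repository; the output directory name is arbitrary, and saving 8-bit weights requires recent transformers and bitsandbytes releases.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the original FP16 model with its linear layers quantized to 8-bit (LLM.int8())
model = AutoModelForCausalLM.from_pretrained(
    "cyberagent/open-calm-7b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-7b")

# Serialize the quantized weights; "open-calm-7b-8bit" is an arbitrary local path
model.save_pretrained("open-calm-7b-8bit")
tokenizer.save_pretrained("open-calm-7b-8bit")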

Setup

pip install -q -U bitsandbytes
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
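
Optionally, confirm that sufficiently recent versions were installed, since loading and saving 8-bit checkpoints depends on up-to-date transformers, accelerate, and bitsandbytes:

import bitsandbytes, accelerate, transformers
print(transformers.__version__, accelerate.__version__, bitsandbytes.__version__)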

Usage

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "kyo-takano/open-calm-7b-8bit"

# The checkpoint already contains 8-bit (bitsandbytes) weights;
# device_map="auto" dispatches them to the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Encode a Japanese prompt ("With AI, our lives will ...") and sample a continuation
inputs = tokenizer("AIによって私達の暮らしは、", return_tensors="pt").to(model.device)
with torch.no_grad():
    tokens = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.05,
        pad_token_id=tokenizer.pad_token_id,
    )
    
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
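
As an optional sanity check that the 8-bit weights are in use, you can inspect the model's memory footprint; get_memory_footprint reports the parameter memory in bytes:

# Roughly half of the FP16 footprint is expected for a 7B model in int8
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")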

Model Details

  • Developed by: CyberAgent, Inc.
  • Quantized by: Kyo Takano
  • Model type: Transformer-based Language Model
  • Language: Japanese
  • Library: GPT-NeoX
  • License: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). When using this model, please provide appropriate credit to CyberAgent, Inc.