
# Fast-Inference with Ctranslate2

Speed up inference while reducing memory use by 2x-4x using int8 inference in C++ on CPU or GPU.

Quantized version of bigcode/starcoderplus.

```bash
pip install "hf-hub-ctranslate2>=2.0.10" "ctranslate2>=3.16.0"
```

Converted on 2023-06-18 using:

```bash
ct2-transformers-converter --model bigcode/starcoderplus --output_dir ./ct2fast-starcoder --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
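
The same conversion can also be run from Python through the converter API that ctranslate2 ships; a minimal sketch, assuming ctranslate2>=3.16.0 (the output path and the copied file list are illustrative, mirroring the CLI call above):

```python
# Programmatic equivalent of the CLI conversion above (sketch).
from ctranslate2.converters import TransformersConverter

converter = TransformersConverter(
    "bigcode/starcoderplus",
    copy_files=["tokenizer.json", "tokenizer_config.json", "special_tokens_map.json"],
    trust_remote_code=True,
)
converter.convert(
    "./ct2fast-starcoder",        # output directory
    quantization="int8_float16",  # same quantization as the CLI call
    force=True,                   # overwrite an existing output directory
)
```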

Checkpoint compatible with ctranslate2>=3.16.0 and hf-hub-ctranslate2>=2.0.10:

- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"` (a CPU loading sketch follows the example below)

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "piratos/ct2fast-starcoderplus"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on the model
model = GeneratorCT2fromHfHub(
    # load in int8 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
    # tokenizer=AutoTokenizer.from_pretrained("bigcode/starcoderplus")
)
outputs = model.generate(
    text=["def fibonnaci(", "User: How are you doing? Bot:"],
    max_length=64,
    include_prompt_in_result=False,
)
print(outputs)
```
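
For CPU inference, the same loader works with the compute type from the compatibility notes above; a minimal sketch, assuming the identical hf-hub-ctranslate2 API:

```python
# CPU variant of the loader above: int8 weights, no GPU required (sketch).
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

model_cpu = GeneratorCT2fromHfHub(
    model_name_or_path="piratos/ct2fast-starcoderplus",
    device="cpu",
    compute_type="int8",
)
print(model_cpu.generate(text=["def hello_world():"], max_length=32))
```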

# Licence and other remarks:

This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.

# Original description

# StarCoderPlus

Play with the instruction-tuned StarCoderPlus at StarChat-Beta.

## Table of Contents

  1. Model Summary
  2. Use
  3. Limitations
  4. Training
  5. License
  6. Citation

## Model Summary

StarCoderPlus is a fine-tuned version of StarCoderBase on 600B tokens from the English web dataset RefinedWeb combined with StarCoderData from The Stack (v1.2) and a Wikipedia dataset. It's a 15.5B parameter language model trained on English and 80+ programming languages. The model uses Multi Query Attention, a context window of 8192 tokens, and was trained using the Fill-in-the-Middle objective on 1.6 trillion tokens.
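
As an illustrative check (not from the original card), the advertised context window can be read off the model config; the attribute name assumes the GPTBigCode config class used by transformers:

```python
# Illustrative sketch: confirm the 8192-token context window from the config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bigcode/starcoderplus")
print(config.n_positions)  # expected: 8192 (GPTBigCode config attribute)
```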

## Use

### Intended use

The model was trained on English and GitHub code. As such it is not an instruction model and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in StarChat makes a capable assistant.

Feel free to share your generations in the Community tab!

### Generation

```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoderplus"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
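
The call above uses the short greedy defaults; for longer or sampled completions you can pass the standard transformers generation arguments (the specific values below are illustrative, not from the original card):

```python
# Illustrative settings: a longer, lightly sampled completion.
outputs = model.generate(
    inputs,
    max_new_tokens=128,  # allow a longer completion than the default
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.2,     # low temperature keeps code completion focused
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # avoid the missing-pad-token warning
)
print(tokenizer.decode(outputs[0]))
```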

### Fill-in-the-middle

Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:

```python
input_text = "<fim_prefix>def print_hello_world():\n    <fim_suffix>\n    print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
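
A small wrapper can make the token layout easier to reuse. The `fill_in_middle` helper below is hypothetical (our name, not part of the card) and reuses `tokenizer`, `model`, and `device` from the Generation example:

```python
# Hypothetical helper (not from the card): wrap a prefix/suffix pair in the
# FIM special tokens and return only the generated middle segment.
def fill_in_middle(prefix: str, suffix: str, max_new_tokens: int = 64) -> str:
    prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
    inputs = tokenizer.encode(prompt, return_tensors="pt").to(device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    decoded = tokenizer.decode(outputs[0], skip_special_tokens=False)
    # the model emits the middle segment after the <fim_middle> marker
    return decoded.split("<fim_middle>", 1)[-1]

print(fill_in_middle("def print_hello_world():\n    ", "\n    print('Hello world!')"))
```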

### Attribution & Other Requirements

The training code dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or impose other specific requirements that must be respected. We provide a search index that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.

## Limitations

The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online. Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the StarCoder paper.

## Training

StarCoderPlus is a version of StarCoderBase fine-tuned on 600B English and code tokens; the base model was pre-trained on 1T code tokens. Below are the fine-tuning details:

### Model

- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Finetuning steps:** 150k
- **Finetuning tokens:** 600B
- **Precision:** bfloat16

### Hardware

- **GPUs:** 512 Tesla A100
- **Training time:** 14 days

### Software

## License

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement here.
