---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
- text: Gradient descent is
  example_title: Machine Learning
  group: English
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- tiiuae/falcon-refinedweb
metrics:
- code_eval
- mmlu
- arc
- hellaswag
- truthfulqa
library_name: transformers
tags:
- code
model-index:
- name: StarCoderPlus
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval (Prompted)
    metrics:
    - name: pass@1
      type: pass@1
      value: 26.7
      verified: false
  - task:
      type: text-generation
    dataset:
      type: MMLU (5-shot)
      name: MMLU
    metrics:
    - name: Accuracy
      type: Accuracy
      value: 45.1
      verified: false
  - task:
      type: text-generation
    dataset:
      type: HellaSwag (10-shot)
      name: HellaSwag
    metrics:
    - name: Accuracy
      type: Accuracy
      value: 77.3
      verified: false
  - task:
      type: text-generation
    dataset:
      type: ARC (25-shot)
      name: ARC
    metrics:
    - name: Accuracy
      type: Accuracy
      value: 48.9
      verified: false
  - task:
      type: text-generation
    dataset:
      type: TruthfulQA (0-shot)
      name: TruthfulQA
    metrics:
    - name: Accuracy
      type: Accuracy
      value: 37.9
      verified: false
extra_gated_prompt: >-
  ## Model License Agreement
  Please read the BigCode [OpenRAIL-M
  license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
  agreement before accepting it.
extra_gated_fields:
  I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---

# StarCoderPlus

Play with the instruction-tuned StarCoderPlus at StarChat-Beta.

## Table of Contents

- [Model Summary](#model-summary)
- [Use](#use)
- [Limitations](#limitations)
- [Training](#training)
- [License](#license)

## Model Summary
StarCoderPlus is a fine-tuned version of StarCoderBase, trained on a mix of:

- The English web dataset RefinedWeb (1x)
- The StarCoderData dataset from The Stack (v1.2) (1x)
- A Wikipedia dataset that has been upsampled 5 times (5x)

It's a 15.5B parameter language model trained on English and 80+ programming languages. The model uses Multi Query Attention, a context window of 8192 tokens, and was trained with the Fill-in-the-Middle objective on 1.6 trillion tokens.
- **Repository:** bigcode/Megatron-LM
- **Project Website:** bigcode-project.org
- **Point of Contact:** [email protected]
- **Languages:** English & 80+ programming languages
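
The architectural details above (Multi Query Attention, the 8192-token context window) can be checked directly from the checkpoint's configuration. A minimal sketch, assuming the `gpt_bigcode` config class that `transformers` uses for StarCoder models:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bigcode/starcoderplus")
print(config.model_type)   # "gpt_bigcode"
print(config.n_positions)  # maximum context length, expected 8192
print(config.multi_query)  # True when Multi Query Attention is enabled
```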
## Use

### Intended use
The model was trained on English and GitHub code. As such it is not an instruction model and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in StarChat makes a capable assistant.
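
For example (the snippet below is purely illustrative; the function being completed is a made-up example), completion-style prompts that start the code tend to work better than natural-language instructions:

```python
# Instruction-style prompt: tends to work poorly with this base model.
instruction_prompt = "Write a function that computes the square root."

# Completion-style prompt: start the code and let the model continue it.
completion_prompt = (
    "def newton_sqrt(x: float, tol: float = 1e-9) -> float:\n"
    '    """Compute the square root of x using Newton\'s method."""\n'
)
```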
Feel free to share your generations in the Community tab!
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoderplus"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
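
Loading the 15.5B-parameter checkpoint in float32 needs roughly 60 GB of memory, so in practice you will usually want to load it in bfloat16 and bound the generation length. A minimal sketch, assuming `accelerate` is installed for `device_map="auto"`; the sampling settings are illustrative, not recommended values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoderplus"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# bfloat16 halves the memory footprint relative to float32;
# device_map="auto" spreads the weights across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,   # bound the completion length
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.2,     # low temperature keeps code completions focused
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```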
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
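
The prompt lists the prefix, then the suffix, then `<fim_middle>`, after which the model generates the missing middle. A small hypothetical helper (`build_fim_prompt` is not part of any library, just a sketch reusing the tokenizer and model loaded above) keeps that ordering straight:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt; the model completes the text
    that belongs between `prefix` and `suffix` after <fim_middle>."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = build_fim_prompt(
    prefix="def print_hello_world():\n    ",
    suffix="\n    print('Hello world!')",
)
inputs = tokenizer.encode(prompt, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```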
### Attribution & Other Requirements
The training code dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a search index that lets you search through the pretraining data to identify where the generated code came from, and to apply the proper attribution to your code.
## Limitations
The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and it can carry the stereotypes and biases commonly encountered online. Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the StarCoder paper.
## Training
StarCoderPlus is a version of StarCoderBase fine-tuned on 600B English and code tokens; StarCoderBase itself was pre-trained on 1T code tokens. Below are the fine-tuning details:
### Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Finetuning steps:** 150k
- **Finetuning tokens:** 600B
- **Precision:** bfloat16
### Hardware
- **GPUs:** 512 Tesla A100
- **Training time:** 14 days
### Software
- **Orchestration:** Megatron-LM
- **Neural networks:** PyTorch
- **BF16 if applicable:** apex
## License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).