---
license: bigscience-openrail-m
library_name: transformers
tags:
- code
- gpt_bigcode
datasets:
- nuprl/MultiPL-T
metrics:
- code_eval
model-index:
- name: MultiPLCoder-15b
  results:
  - task:
      type: text-generation
    dataset:
      name: MultiPL-HumanEval (Lua)
      type: nuprl/MultiPL-E
    metrics:
    - type: pass@1
      value: 0.31
      name: pass@1
      verified: true
  - task:
      type: text-generation
    dataset:
      name: MultiPL-HumanEval (Racket)
      type: nuprl/MultiPL-E
    metrics:
    - type: pass@1
      value: 0.21
      name: pass@1
      verified: true
  - task:
      type: text-generation
    dataset:
      name: MultiPL-HumanEval (OCaml)
      type: nuprl/MultiPL-E
    metrics:
    - type: pass@1
      value: 0.199
      name: pass@1
      verified: true
---
# MultiPLCoder-15b
The 15-billion-parameter version of MultiPLCoder, a family of StarCoder-based models fine-tuned on the [MultiPL-T dataset](https://huggingface.co/datasets/nuprl/MultiPL-T).
These models are state-of-the-art on low-resource languages such as Lua, Racket, and OCaml.
This 15-billion-parameter model is the most capable of the MultiPLCoder family, but it requires a dedicated GPU for inference.
For a smaller model that can run on a CPU, check out [MultiPLCoder-1b](https://huggingface.co/nuprl/MultiPLCoder-1b).
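If the full-precision weights do not fit on your GPU, `transformers` supports loading the checkpoint in half precision. The snippet below is a sketch using standard library options, not a configuration from the original card:

```py
import torch
from transformers import AutoModelForCausalLM

# Sketch: load the Lua checkpoint in fp16 with automatic device placement.
# torch_dtype and device_map are standard transformers options; device_map
# requires the `accelerate` package to be installed.
model = AutoModelForCausalLM.from_pretrained(
    "nuprl/MultiPLCoder-15b",
    revision="6069aa54dd554404dd18fccdf5dedd56b8088e74",  # Lua (see table below)
    torch_dtype=torch.float16,
    device_map="auto",
)
```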
## Language Revision Index
This is the revision index for the best-performing model for each respective language.
| Language | Revision ID | Epoch |
| ------------- | ----------- | ----- |
| Lua | `6069aa54dd554404dd18fccdf5dedd56b8088e74` | 4 |
| Racket | `f0c77c06482f436f469007f20d731cb9dd73d609` | 8 |
| OCaml | `e7babda985786810707200ff885df6105de7dc56` | 4 |
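When scripting against several languages, it can help to keep the table's revision IDs in one place. The mapping below is just a convenience sketch; the IDs are copied verbatim from the table, but the dict and helper function are not part of the model card:

```py
from transformers import AutoModelForCausalLM

# Revision IDs copied from the table above.
REVISIONS = {
    "lua": "6069aa54dd554404dd18fccdf5dedd56b8088e74",
    "racket": "f0c77c06482f436f469007f20d731cb9dd73d609",
    "ocaml": "e7babda985786810707200ff885df6105de7dc56",
}

def load_model(language: str):
    """Load the best-performing MultiPLCoder-15b checkpoint for a language."""
    return AutoModelForCausalLM.from_pretrained(
        "nuprl/MultiPLCoder-15b", revision=REVISIONS[language]
    )
```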
## Usage
To use one of the models in this repository, first select the commit revision for that model from the table above.
For example, to use the Lua model:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM

# Revision ID of the best-performing Lua checkpoint (see the table above)
lua_revision = "6069aa54dd554404dd18fccdf5dedd56b8088e74"

tokenizer = AutoTokenizer.from_pretrained("nuprl/MultiPLCoder-15b")
model = AutoModelForCausalLM.from_pretrained("nuprl/MultiPLCoder-15b", revision=lua_revision).cuda()
```
Note that the model's default configuration does not enable caching, so you must pass `use_cache=True` when generating.
```py
# Encode a Lua comment as the prompt and sample a single completion.
toks = tokenizer.encode("-- Fibonacci iterative", return_tensors="pt").cuda()
out = model.generate(toks, use_cache=True, do_sample=True, temperature=0.2, top_p=0.95, max_length=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
```
-- Fibonacci iterative.
local function fib_iterative(n)
if n == 0 or n == 1 then
return n
end
local previous, current = 0, 1
for _ = 2, n do
previous, current = current, current + previous
end
return current
end
```
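The pass@1 figures in the metadata come from sampling-based evaluation on MultiPL-E. To draw several candidate completions per prompt yourself, `generate` accepts `num_return_sequences`; the settings below are illustrative, not the exact configuration of the evaluation harness:

```py
# Sketch: sample 5 candidate completions for one prompt.
toks = tokenizer.encode("-- Fibonacci iterative", return_tensors="pt").cuda()
outs = model.generate(
    toks,
    use_cache=True,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
    max_length=256,
    num_return_sequences=5,  # samples per prompt (illustrative)
)
for out in outs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```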