madhavatreplit committed
Commit 0a65073 • Parent(s): 12cbb33
Update for README (#3)
README.md
CHANGED
@@ -10,13 +10,13 @@ Developed by: Replit, Inc.

Replit Code v1.5 is a 3.3B parameter Causal Language Model focused on **Code Completion**.

The model is trained in `bfloat16` on 1T tokens of code (~200B tokens over 5 epochs, including linear cooldown) covering 30 programming languages, sourced from a permissively licensed subset of Bigcode's [Stack Dedup V2 dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup) and dev-oriented samples from StackExchange.

The context size is 4096 tokens and can be extended using techniques that build on its ALiBi positional embeddings.
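
The model card stops at that statement; as a rough, hypothetical sketch of the usual MPT-style approach, the maximum sequence length can be raised in the config at load time, with ALiBi extrapolating to the longer window (`max_seq_len` is an assumed config field for this checkpoint; verify against its actual config):

```python
# Hypothetical sketch: extend the usable context beyond the trained 4096 tokens.
# Assumes an MPT-style `max_seq_len` config field; check the checkpoint's config.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
config.max_seq_len = 8192  # ALiBi lets attention extrapolate past the training length

model = AutoModelForCausalLM.from_pretrained(
    'replit/replit-code-v1_5-3b',
    config=config,
    trust_remote_code=True
)
```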

We use the GPTNeoX tokenizer with a custom-trained and optimized vocabulary of 32768 tokens. This custom vocabulary led to single-digit percentage-point improvements in compression while maintaining or improving coverage on our training corpus.
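
As a quick illustration (not from the model card), you can inspect the vocabulary size and see how compactly a snippet tokenizes:

```python
# Illustrative check of the custom 32768-token vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
print(tokenizer.vocab_size)  # 32768

# Fewer tokens per character indicates better compression of code.
snippet = "def greet(name):\n    return f'Hello, {name}!'"
ids = tokenizer.encode(snippet)
print(f"{len(ids)} tokens for {len(snippet)} characters")
```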

The model has been trained on the [MosaicML](https://www.mosaicml.com/) platform on 128 H100-80GB GPUs.

## Dependencies
You will need to install the latest versions of the following dependencies:

@@ -26,9 +26,57 @@ torch
transformers
```

## How to Use

### Generation

You can generate code using the `transformers` library as follows:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)

x = tokenizer.encode('def fibonacci(n): ', return_tensors='pt')
y = model.generate(x, max_length=100, do_sample=True, top_p=0.95, top_k=4, temperature=0.2, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)

# decoding
generated_code = tokenizer.decode(y[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(generated_code)
```

Experiment with different decoding methods and parameters to get the best results for your use case.
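
For instance, two common settings to contrast (an illustrative sketch reusing `model`, `tokenizer`, and `x` from above, not part of the model card):

```python
# Greedy decoding: deterministic, suited to short canonical completions.
y_greedy = model.generate(x, max_length=100, do_sample=False, eos_token_id=tokenizer.eos_token_id)

# Higher-temperature nucleus sampling: more varied suggestions.
y_varied = model.generate(x, max_length=100, do_sample=True, top_p=0.9, temperature=0.8, eos_token_id=tokenizer.eos_token_id)
```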

### Using Triton Implementation of Flash Attention

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig

config = AutoConfig.from_pretrained(
    "replit/replit-code-v1_5-3b",
    trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'

# load model
tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('replit/replit-code-v1_5-3b', config=config, trust_remote_code=True)
model.to(device='cuda:0', dtype=torch.bfloat16)

# encode the prompt on the GPU and generate
x = tokenizer.encode('def fibonacci(n): ', return_tensors='pt').to(device='cuda:0')
y = model.generate(x, max_length=100, do_sample=True, top_p=0.95, top_k=4, temperature=0.2, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)

# decoding
generated_code = tokenizer.decode(y[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(generated_code)
```

Experiment with different decoding methods and parameters to get the best results for your use case. We recommend experimenting with `temperature` and `repetition_penalty` in particular.
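
For example (illustrative values, not tuned recommendations):

```python
# A repetition penalty slightly above 1.0 discourages the model
# from looping on the same tokens during long completions.
y = model.generate(x, max_length=100, do_sample=True, temperature=0.3, repetition_penalty=1.1, eos_token_id=tokenizer.eos_token_id)
```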

## Intended Use

@@ -36,5 +84,7 @@ Replit intends this model be used by anyone as a foundational model for applicat

The model is trained specifically for code completion tasks.

## Limitations
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing, toxicity, and profanity filters, and such content may be reflected in model-generated text. We recommend that users exercise reasonable caution when using it in production systems. Do not use it for any applications that may cause harm or distress to individuals or groups.