kaizen9 committed on
Commit a9db022 (1 parent: 0ab4ea5)

Update README.md

Files changed (1)
  1. README.md +1 -109
README.md CHANGED
@@ -1,109 +1 @@
- <<<<<<< HEAD
- ---
- license: apache-2.0
- datasets:
- - bigcode/the-stack-dedup
- - togethercomputer/RedPajama-Data-1T
- tags:
- - code
- - Composer
- - MosaicML
- - llm-foundry
- - StreamingDatasets
- language:
- - code
- ---
-
- # Replit Code V-1.5 3B
-
- Developed by: Replit, Inc.
-
- ## Model Description
-
- Replit Code v1.5 is a 3.3B-parameter causal language model focused on **Code Completion**.
-
- The model is trained in `bfloat16` on 1T tokens of code (~200B tokens over 5 epochs, including linear cooldown) covering 30 programming languages. The data comprises a permissively licensed subset of Bigcode's [Stack Dedup dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup), a filtered natural-language sample from the Markdown and reStructuredText subsets of the same dataset, and a dev-oriented sample from [RedPajama's StackExchange dataset](https://github.com/togethercomputer/RedPajama-Data), sourced from the [Stack Exchange Data Dump by Stack Exchange Inc](https://archive.org/details/stackexchange).
-
- The 30 programming languages are:
- ```
- Java, JavaScript, C, PHP, Python, C++, C#, TypeScript, Go, CSS, HTML, Rust, Ruby, Swift, Scala, Shell, Lua, Perl, Haskell, JSX, Julia, Common Lisp, OCaml, Solidity, Scheme, R, Zig, SQL, Racket, D
- ```
-
- The context size of the model is 4096 tokens. We use the GPTNeoX tokenizer with a custom-trained and optimized vocabulary of 32768 tokens. This custom vocabulary yielded a single-digit percentage-point improvement in compression while maintaining or improving coverage on our training corpus.
-
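- As a quick sanity check (not part of the original card; it assumes the tokenizer loads with `trust_remote_code=True` as in the examples below), you can confirm the vocabulary size from Python:
-
- ```python
- from transformers import AutoTokenizer
-
- # load the custom tokenizer shipped with the model
- tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
- print(tokenizer.vocab_size)  # expected: 32768
- ```
-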
- The model was trained on the [MosaicML](https://www.mosaicml.com/) platform on 128 H100-80GB GPUs using their [LLM Foundry](https://github.com/mosaicml/llm-foundry) and [Composer](https://github.com/mosaicml/composer) training libraries, built on top of PyTorch.
-
- ## Dependencies
- You will need to install the latest versions of the following dependencies:
- ```
- einops
- torch
- transformers
- ```
-
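- For example, with `pip` (the `-U` flag upgrades to the latest releases; exact version pins are left to you):
-
- ```
- pip install -U einops torch transformers
- ```
-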
- ## How to Use
-
- ### Generation
-
- You can generate code using the `transformers` library as follows:
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
- model = AutoModelForCausalLM.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
-
- x = tokenizer.encode('def fibonacci(n): ', return_tensors='pt')
- y = model.generate(x, max_length=100, do_sample=True, top_p=0.95, top_k=4, temperature=0.2, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
-
- # decode the generated tokens back into text
- generated_code = tokenizer.decode(y[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
- print(generated_code)
- ```
-
- Experiment with different decoding methods and parameters to get the best results for your use case.
-
- ### Using the Triton Implementation of Flash Attention
-
- ```python
- import torch
- from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
-
- config = AutoConfig.from_pretrained(
-     "replit/replit-code-v1_5-3b",
-     trust_remote_code=True
- )
- config.attn_config['attn_impl'] = 'triton'
-
- # load the model and tokenizer, and move the model to the GPU in bfloat16
- tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
- model = AutoModelForCausalLM.from_pretrained('replit/replit-code-v1_5-3b', config=config, trust_remote_code=True)
- model.to(device='cuda:0', dtype=torch.bfloat16)
-
- # tokenize the prompt, move it to the GPU, and generate
- x = tokenizer.encode('def fibonacci(n): ', return_tensors='pt').to(device='cuda:0')
- y = model.generate(x, max_length=100, do_sample=True, top_p=0.95, top_k=4, temperature=0.2, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
-
- # decode the generated tokens back into text
- generated_code = tokenizer.decode(y[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
- print(generated_code)
- ```
-
- Experiment with different decoding methods and parameters to get the best results for your use case; in particular, we recommend tuning `temperature` and `repetition_penalty`.
-
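- As an illustrative sketch (not from the original card), a more repetition-averse sampling setup might look like the following, reusing `x`, `model`, and `tokenizer` from the block above; the specific values are arbitrary starting points, not tuned recommendations:
-
- ```python
- # repetition_penalty > 1.0 discourages the model from repeating tokens it
- # has already generated; values are placeholders to tune for your use case
- y = model.generate(
-     x,
-     max_length=100,
-     do_sample=True,
-     top_p=0.95,
-     temperature=0.8,
-     repetition_penalty=1.2,
-     eos_token_id=tokenizer.eos_token_id,
- )
- ```
-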
- ## Intended Use
-
- Replit intends this model to be used by anyone as a foundational model for application-specific fine-tuning, without strict limitations on commercial use.
-
- The model is trained specifically for code completion tasks.
-
- ## Limitations
- The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing and toxicity and profanity filters, and such content may be reflected in model-generated text. We recommend that users exercise reasonable caution when using it in production systems. Do not use it for any applications that may cause harm or distress to individuals or groups.
- =======
- ---
- license: mit
- ---
- >>>>>>> origin/main
 
+ Currently Under Development - Original Model BF16