philschmid HF staff committed on
Commit
5799998
1 Parent(s): 678fb46

Update README.md

Files changed (1)
  1. README.md +37 -0
README.md CHANGED
@@ -15,6 +15,43 @@ Code Llama is a collection of pretrained and fine-tuned generative text models r
  | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
  | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
 
+ Make sure to use this temporary branch of transformers until support is fully merged and released.
+
+ ```bash
+ pip install git+https://github.com/huggingface/transformers.git@refs/pull/25740/head accelerate
+ ```
+
+ ```python
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ model = "codellama/CodeLlama-34b-hf"
+
+ # Build a text-generation pipeline for the selected Code Llama checkpoint
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ # Sample a completion for a Python code prompt
+ sequences = pipeline(
+     'import socket\n\ndef ping_exponential_backoff(host: str):',
+     do_sample=True,
+     top_k=10,
+     temperature=0.1,
+     top_p=0.95,
+     num_return_sequences=1,
+     eos_token_id=tokenizer.eos_token_id,
+     max_length=200,
+ )
+ for seq in sequences:
+     print(f"Result: {seq['generated_text']}")
+ ```
+
  ## Model Details
  *Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs).
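
For quick reference (an editor's sketch, not part of this commit): any of the checkpoints in the table above can also be loaded directly with `AutoModelForCausalLM` instead of the `pipeline` helper, assuming the temporary transformers branch and `accelerate` from the snippet above are installed. The checkpoint id, prompt, and generation settings below mirror the committed example and are purely illustrative.

```python
# Editor's sketch, not part of the commit: direct model loading instead of
# transformers.pipeline. Assumes the temporary transformers branch and
# accelerate are installed; substitute any checkpoint from the table above.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "codellama/CodeLlama-34b-hf"  # or a 13B / Python / Instruct variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # let accelerate place the weights on available devices
)

prompt = "import socket\n\ndef ping_exponential_backoff(host: str):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    max_new_tokens=128,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```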