uukuguy committed 1c7c968 (1 parent: 88e0c4e): Update README.md

Files changed (1): README.md (+19 -2)

Specifically, Mistral-7B-0.1 is used as the base model, with 16 experts and 4 experts per token.
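
For intuition, the sketch below shows generic top-k expert routing of this shape (16 experts, the top 4 per token). It is an illustrative assumption, not the repository's actual implementation; the class name `TopKMoE` and the layer sizes are made up for the example.

```python
# Illustrative top-k mixture-of-experts routing (NOT the repository's actual code).
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Route each token to its top-k experts and mix their outputs."""

    def __init__(self, dim: int = 4096, num_experts: int = 16, top_k: int = 4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim); keep the 4 highest-scoring of 16 experts per token.
        weights, idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the kept experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e  # tokens whose slot-th pick is expert e
                out[mask] += weights[mask, slot, None] * self.experts[int(e)](x[mask])
        return out
```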

- Spider: 8,659 samples
- codefuse-ai/Evol-Instruction-66k: 100%, 66,862 samples

## Alpaca Prompt Format

```
### Instruction:
<instruction>
### Response:
```
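
As a concrete illustration, here is a small hypothetical helper that wraps a raw instruction in this template; the name `build_prompt` and the sample instruction are made up for the sketch.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the Alpaca-style template shown above."""
    return f"### Instruction:\n{instruction}\n\n### Response:"

print(build_prompt("Write a SQL query that returns the 10 most recent orders."))
```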

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model; trust_remote_code=True allows the repo's custom modeling code to load.
model_name_or_path = "uukuguy/speechless-sparsetral-16x7b-MoE"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True).eval()

system = "Below is an instruction that describes a task.\nWrite a response that appropriately completes the request.\n\n"
instruction = "Write a Python function that checks whether a number is prime."  # example; replace with your own task
prompt = f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:"

# Sample a completion and strip special tokens when decoding.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
pred = model.generate(**inputs, max_length=4096, do_sample=True, top_k=50, top_p=0.99, temperature=0.9, num_return_sequences=1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
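
Optionally, output can be printed token by token with transformers' `TextStreamer`. The snippet below is a sketch that reuses `model`, `tokenizer`, and `inputs` from the example above; `max_new_tokens=1024` is an arbitrary choice.

```python
from transformers import TextStreamer

# Stream tokens to stdout as they are generated instead of waiting for the full sequence.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=1024,
               do_sample=True, top_k=50, top_p=0.99, temperature=0.9)
```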