vaishaal committed
Commit
75199a5
1 Parent(s): a24d330

Update README.md

Files changed (1)
  1. README.md +20 -0
README.md CHANGED
@@ -33,6 +33,26 @@ DCLM-Baseline-7B is a 7 billion parameter language model trained on the DCLM-Bas
  - **Dataset:** https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
  - **Paper:** [DataComp-LM: In search of the next generation of training sets for language models](https://arxiv.org/abs/2406.11794)
 
+ ## Using the Model
+
+ First, install open_lm:
+
+ ```pip install git+https://github.com/mlfoundations/open_lm.git```
+
+ Then:
+ ```python
+ from open_lm.hf import *  # registers the open_lm architecture with the transformers Auto classes
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ tokenizer = AutoTokenizer.from_pretrained("apple/DCLM-Baseline-7B")
+ model = AutoModelForCausalLM.from_pretrained("apple/DCLM-Baseline-7B")
+
+ inputs = tokenizer(["Machine learning is"], return_tensors="pt")
+ gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
+ output = model.generate(inputs["input_ids"], **gen_kwargs)
+ output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
+ print(output)
+ ```
+
 
  ### Training Details
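
The snippet added in this commit runs on CPU by default. For readers with a GPU, below is a minimal sketch of the same generation moved onto a CUDA device; the `torch.cuda.is_available()` check and the device-placement calls are illustrative additions, not part of the original model card.

```python
import torch
from open_lm.hf import *  # registers the open_lm architecture with the transformers Auto classes
from transformers import AutoTokenizer, AutoModelForCausalLM

# Pick a device; falls back to CPU if no CUDA GPU is present (assumption: PyTorch is installed).
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("apple/DCLM-Baseline-7B")
model = AutoModelForCausalLM.from_pretrained("apple/DCLM-Baseline-7B").to(device)

# Tokenize the prompt and move the input tensors to the same device as the model.
inputs = tokenizer(["Machine learning is"], return_tensors="pt").to(device)

gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8,
              "do_sample": True, "repetition_penalty": 1.1}

# Generation only; no gradients needed.
with torch.no_grad():
    output = model.generate(inputs["input_ids"], **gen_kwargs)

print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```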