---
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
---

## Model Summary

phi2-ultrachat-qlora is a Transformer model fine-tuned with QLoRA on the UltraChat (HuggingFaceH4/ultrachat_200k) dataset.

The model has not been fine-tuned with reinforcement learning from human feedback (RLHF). The intent behind releasing this open-source model is to provide the research community with an unrestricted small model for exploring vital safety challenges, such as reducing toxicity, understanding societal biases, and improving controllability.
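
The exact fine-tuning recipe is not included in this card. For orientation, below is a minimal sketch of a typical QLoRA setup on ultrachat_200k using `peft` and `bitsandbytes`; the base checkpoint, target modules, and every hyperparameter shown are illustrative assumptions, not the recipe used for this model.

```python
# Illustrative QLoRA setup (assumed, not this model's exact recipe).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize base weights to 4-bit (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",                       # assumed base checkpoint
    quantization_config=bnb_config,
    trust_remote_code=True,
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # illustrative values
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # attention projections (names assumed for the HF Phi implementation)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)    # trainable low-rank adapters over a frozen 4-bit base
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
# ...train with e.g. trl's SFTTrainer on the chat-formatted dataset...
```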


### Inference Code:

```python
import warnings
from transformers import AutoModelForCausalLM, AutoTokenizer

warnings.filterwarnings('ignore')  # ignore all warnings

path = "sandeepsundaram/phi2-ultrachat-qlora"
# trust_remote_code may be required for Phi-2-based checkpoints
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype="auto", trust_remote_code=True).to('cuda')
tokenizer = AutoTokenizer.from_pretrained(path)
tokenizer.eos_token_id = model.config.eos_token_id
tokenizer.pad_token = tokenizer.eos_token  # reuse the EOS token for padding

# inputs = tokenizer('Question: why are humans cute? Write in the form of a poem.\nOutput: ', return_tensors="pt", return_attention_mask=False).to('cuda')
inputs = tokenizer('Write code for the Fibonacci series in Python.', return_tensors="pt", return_attention_mask=False).to('cuda')
generation_params = {
    'max_length': 512,
    'do_sample': True,
    'temperature': 0.5,
    'top_p': 0.9,
    'top_k': 50
}

outputs = model.generate(**inputs, **generation_params)
decoded_outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)

for text in decoded_outputs:
    text = text.replace('\\n', '\n')  # unescape any literal "\n" sequences in the output
    print(text)
    print("\n\n")
```
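
With `do_sample=True`, a temperature of 0.5 combined with nucleus (`top_p=0.9`) and top-k (`top_k=50`) sampling yields moderately varied completions. For deterministic output, set `do_sample=False`; note also that `max_length` caps prompt plus completion tokens, so use `max_new_tokens` if you want to bound only the completion.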