---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
language:
- ru
- en
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
---

# Model Card for Devple-8B

Devple is a fine-tuned model based on Llama 3.1 Instruct, designed for development tasks such as code generation and review, with a focus on the quality and safety of the generated code. It was trained on a synthetic preference dataset, with GPT-4o completions as the chosen responses and Llama-3 completions as the rejected ones.



## Model Details

### Model Description


Devple is a fine-tuned model based on Llama 3.1 Instruct, trained on a synthetic dataset. Training focused on development-related tasks such as code generation, code review, and refactoring, with particular emphasis on the quality and safety of the generated code.

Fine-tuning was performed with ORPO (Odds Ratio Preference Optimization). The preference dataset was generated using GPT-4o for the chosen responses and Llama-3 for the rejected ones; a minimal sketch of this kind of setup follows the details below.

- **Language(s) (NLP):** English, Russian
- **Finetuned from model:** Llama 3.1 Instruct
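
ORPO trains directly on preference pairs (prompt, chosen, rejected) without a separate reward model or reference model. For context, the hedged sketch below shows what such a setup can look like with the TRL library's `ORPOTrainer`; the dataset row, hyperparameters, and output path are invented for illustration and are not the actual training configuration of Devple.

```python
# Hypothetical sketch of an ORPO run with the TRL library; illustrative only,
# not the exact script or data used to train Devple.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Each row pairs a prompt with a preferred completion (here, from GPT-4o)
# and a rejected one (here, from Llama-3); this example row is invented.
pairs = Dataset.from_list([{
    "prompt": "Write a Python function that runs a shell command given by the user.",
    "chosen": "Use subprocess.run with a list argument and shell=False ...",
    "rejected": "Use os.system(user_input) ...",
}])

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(output_dir="devple-orpo", beta=0.1),  # beta scales the odds-ratio penalty
    train_dataset=pairs,
    processing_class=tokenizer,  # named `tokenizer` in older TRL releases
)
trainer.train()
```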

## Uses


### Direct Use

```python
import transformers
import torch

model_id = "Kkaastr/Devple-8B"

# Load the model in bfloat16 and spread it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a careful coding assistant focused on secure, idiomatic code."},
    {"role": "user", "content": "Review this function: def read_file(path): return open(path).read()"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# generated_text holds the full conversation; the last element is the reply
print(outputs[0]["generated_text"][-1])
```
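
When given a list of chat messages, recent `transformers` releases return the whole conversation in `generated_text`, so the final list element is the model's reply as a `{"role": "assistant", "content": ...}` dict; older versions may behave differently.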