---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
- int8
- BPLLM
library_name: transformers
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
widget:
  - messages:
      - role: user
        content: What is your favorite condiment?
license: other
---

# Fine-tuned Llama 3.1 8B PEFT int8 for Food Delivery and Reimbursement

This model was trained for the experiments carried out in the research paper "Conversing with business process-aware Large Language Models: the BPLLM framework".

It is a version of the Llama 3.1 8B model fine-tuned with PEFT and int8 quantization to operate within the context of the Food Delivery and Reimbursement process models (which differ in their activities and events) introduced in the article.

Further insights can be found in our paper "[Conversing with business process-aware Large Language Models: the BPLLM framework](https://doi.org/10.21203/rs.3.rs-4125790/v1)".
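
For context, a PEFT-with-int8 fine-tuning setup typically looks like the sketch below. This assumes LoRA as the PEFT method and 8-bit loading via `bitsandbytes`; the hyperparameters shown are illustrative, not those used in the paper:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# Load the base model with 8-bit quantization (bitsandbytes).
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; only these low-rank weights are trained,
# while the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=16,                       # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```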

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

# Build the chat-formatted prompt and generate a reply.
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
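
If the repository contains only the PEFT adapter weights rather than a merged model, the snippet above may fail to load it directly. In that case, a minimal sketch using the `peft` library (assuming the adapter config is stored alongside the weights) is:

```python
from peft import AutoPeftModelForCausalLM

# Loads the base model referenced in the adapter config and applies
# the adapter weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()
```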