---
language:
- en
license: apache-2.0
model-index:
- name: lamatama
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 36.35
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 61.12
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 24.72
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 37.67
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.77
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 2.27
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/lamatama
      name: Open LLM Leaderboard
---

# Model Card: kevin009/lamatama

## Model Description
The `kevin009/lamatama` model is a compact language model for natural language understanding and generation, built by combining large-scale pretraining with a chat-focused fine-tuning recipe.

### Training Details
- **Model Architecture**: The `kevin009/lamatama` model adopts the architecture and tokenizer of Llama 2, so it can be dropped into projects that already support Llama-family models.
- **Dataset**: It was pretrained on 3 trillion tokens.
- **Training Period**: Pretraining ran for 90 days on 16 A100-40G GPUs.

### Fine-tuning
This specific version of the model has been fine-tuned for chat-based applications. It builds upon the `TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T` checkpoint, following the training recipe of Hugging Face's Zephyr.

- **Initial Phase**: The model was first fine-tuned on a variant of the UltraChat dataset, which is rich in synthetic dialogues generated by ChatGPT.
- **Further Alignment**: Subsequent alignment was done with 🤗 TRL's `DPOTrainer` on the openbmb/UltraFeedback dataset, comprising 64k prompts and model completions ranked by GPT-4 (an illustrative sketch of this step follows below).
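
The alignment step can be outlined with 🤗 TRL. The sketch below is illustrative rather than the exact recipe used for this model: the hyperparameters are placeholders, and it loads a pre-binarized UltraFeedback variant (`HuggingFaceH4/ultrafeedback_binarized`) because `DPOTrainer` expects explicit prompt/chosen/rejected preference pairs.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO consumes preference pairs: a prompt plus a "chosen" and a "rejected"
# completion. This dataset is UltraFeedback already binarized into that form.
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = DPOConfig(
    output_dir="lamatama-dpo",
    beta=0.1,                       # strength of the KL penalty toward the reference model
    per_device_train_batch_size=2,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,                    # ref_model defaults to a frozen copy of `model`
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,     # `tokenizer=` in older TRL releases
)
trainer.train()
```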

## How to Use
Ensure you have `transformers>=4.34`. For detailed instructions and updates, check out the GitHub page for `kevin009/lamatama`.

### Installation (needed only if your `transformers` is <= v4.34)
```bash
pip install git+https://github.com/huggingface/transformers.git
pip install accelerate
```
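
A quick sanity check for chat-template support (the `packaging` helper ships as a dependency of `transformers`):

```python
from packaging import version
import transformers

# apply_chat_template was introduced in transformers 4.34
assert version.parse(transformers.__version__) >= version.parse("4.34"), \
    f"transformers {transformers.__version__} is too old for chat templating"
```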

### Example Usage
Here's a quick guide on using `kevin009/lamatama` for generating text:

```python
import torch
from transformers import pipeline

# Initialize the pipeline
pipe = pipeline("text-generation", model="kevin009/lamatama", torch_dtype=torch.bfloat16, device_map="auto")

# Sample dialogue with templating
messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}
]

# Generate prompt and outputs
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
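
Note that the `text-generation` pipeline returns the prompt together with the completion by default, so slice the prompt off if you only want the model's reply:

```python
# Keep only the newly generated text (the pipeline echoes the prompt by default).
reply = outputs[0]["generated_text"][len(prompt):]
print(reply)
```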

## Acknowledgements
This model is a product of collaboration and innovative approaches to language modeling. We extend our thanks to all contributors, as well as the creators of the datasets and training methodologies that made `kevin009/lamatama` a reality.

---

This model card introduces `kevin009/lamatama`, a compact language model fine-tuned for chat applications.

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kevin009__lamatama)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |37.15|
|AI2 Reasoning Challenge (25-Shot)|36.35|
|HellaSwag (10-Shot)              |61.12|
|MMLU (5-Shot)                    |24.72|
|TruthfulQA (0-shot)              |37.67|
|Winogrande (5-shot)              |60.77|
|GSM8k (5-shot)                   | 2.27|