---
library_name: transformers
tags:
- nlp
- code
- vision
- chemistry
- engineering
- biology
- bio-inspired
- text-generation-inference
- materials science
- AI4Science
- Materiomics
- Biomateriomics
base_model:
- HuggingFaceTB/SmolLM-1.7B
datasets:
- mlabonne/orpo-dpo-mix-40k
- lamm-mit/bio-silk-mech-data-integrated
- HuggingFaceTB/Magpie-Pro-300K-Filtered-H4
- HuggingFaceTB/self-oss-instruct-sc2-H4
- HuggingFaceTB/OpenHermes-2.5-H4
- HuggingFaceTB/everyday-conversations-llama3.1-2k
- HuggingFaceTB/instruct-data-basics-smollm-H4
license: apache-2.0
---

# lamm-mit/Bioinspired-SmolLM-1.7B-Instruct

This model was constructed from the SmolLM-1.7B base model using a combination of continued pre-training (CPT), supervised fine-tuning (SFT), and preference optimization (ORPO/DPO).


![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/IhTyZRoOB11Qr9GRl54cx.png)

The model was trained on a mix of publicly available datasets and a corpus of around 5,000 scientific papers in the bio-inspired materials field. During the CPT phase, the raw text of all papers is used. During SFT and ORPO, the model is shown a high-quality mix of question-answer pairs and question-answer-rejected triples, respectively, along with other datasets that train the model for instruction following and chat interactions.
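
Schematically, a single preference record used in the ORPO stage looks like the sketch below (the field names are illustrative only, not the exact schema of the datasets listed above):

```python
# Schematic ORPO/DPO-style preference record (illustrative field names only,
# not the exact schema of the datasets listed in this model card).
preference_example = {
    "prompt": "What makes spider silk both strong and extensible?",
    "chosen": "Spider silk combines stiff beta-sheet nanocrystals with a compliant "
              "amorphous phase, which yields high strength together with large extensibility.",
    "rejected": "Spider silk is strong because it contains metallic fibers.",
}

# An SFT record, by contrast, only needs the prompt and the accepted answer.
sft_example = {
    "prompt": preference_example["prompt"],
    "response": preference_example["chosen"],
}
```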

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/2D3Jm0goTW_mvNRK2gKKU.png)

## Inference 

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

def generate_response(model, tokenizer, text_input="Biology offers amazing materials. Tell me more!", system_prompt='You are a materials scientist.',
                      num_return_sequences=1, temperature=0.3, max_new_tokens=256, do_sample=True,
                      num_beams=1, eos_token_id=[2],
                      device='cuda', top_k=50, top_p=0.9, repetition_penalty=1.1, messages=None, ):

    # Start a new conversation if no prior messages were provided; otherwise
    # append the new user turn to the existing conversation history.
    if not messages:
        if system_prompt:
            messages = [{"role": "user", "content": system_prompt + text_input}]
        else:
            messages = [{"role": "user", "content": text_input}]
    else:
        messages.append({"role": "user", "content": text_input})

    text_input = tokenizer.apply_chat_template(
        messages, 
        tokenize=False, 
        add_generation_prompt=True
    )

    inputs = tokenizer([text_input], add_special_tokens=False, return_tensors='pt').to(device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs, 
            max_new_tokens=max_new_tokens,
            temperature=temperature,
            num_beams=num_beams,
            top_k=top_k,
            do_sample=do_sample,
            top_p=top_p,
            eos_token_id=eos_token_id,
            num_return_sequences=num_return_sequences, 
            repetition_penalty=repetition_penalty,
        )

    # Strip the prompt tokens so that only the newly generated text is decoded
    outputs = outputs[:, inputs["input_ids"].shape[1]:]

    return tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True), messages            

def load_model(model_name, chat_template=None, compile_mode=None,
               attn_implementation="flash_attention_2", quant=False):
    
    if quant:
        bnb_config4bit = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_quant_type="nf4",
            bnb_4bit_compute_dtype=torch.bfloat16,
            bnb_4bit_use_double_quant=True,
        )
        model = AutoModelForCausalLM.from_pretrained(
            model_name,
            trust_remote_code=True,
            use_cache=False,
            quantization_config=bnb_config4bit,
            device_map="auto",
            torch_dtype=torch.bfloat16,
            attn_implementation=attn_implementation,
        )
    else:
        model = AutoModelForCausalLM.from_pretrained(
            model_name,
            trust_remote_code=True,
            use_cache=False,
            device_map="auto",
            torch_dtype=torch.bfloat16,
            attn_implementation=attn_implementation,
        )

    if compile_mode:
        model.generation_config.cache_implementation = "static"
        model.forward = torch.compile(model.forward, mode=compile_mode, fullgraph=True)

    tokenizer = AutoTokenizer.from_pretrained(
        model_name, 
        trust_remote_code=True,
        add_bos_token=False
    )

    if chat_template:
        tokenizer.chat_template = chat_template

    return model, tokenizer
```

### Chat template


The model uses the ChatML template, as shown below (with `<|im_end|>` as the end-of-turn token):
```python
messages = [
    {"role": "system", "content": "You are a biomaterials scientist."},
    {"role": "user", "content": "What is the role of collagen in bone?"},
    {"role": "assistant", "content": "Collagen is a key structural protein."},
]
prompt = tokenizer.apply_chat_template(
        messages, 
        tokenize=False, 
        add_generation_prompt=True
)
print(prompt)
```
The output:
```raw
<|begin_of_text|><|im_start|>system
You are a biomaterials scientist.<|im_end|>
<|im_start|>user
What is the role of collagen in bone?<|im_end|>
<|im_start|>assistant
Collagen is a key structural protein.<|im_end|>
<|im_start|>assistant
```
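
The tokenizer distributed with this model already includes its chat template. If you want to set a template explicitly via the `chat_template` argument of `load_model` above, a generic ChatML-style Jinja template looks roughly like the sketch below (the bundled template may differ in details, e.g. it also emits the `<|begin_of_text|>` token shown in the output above):

```python
# Generic ChatML-style Jinja template (sketch only; the tokenizer shipped with
# the model already defines its own template, which may differ in details).
chatml_template = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>' + '\\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\\n' }}{% endif %}"
)

model, tokenizer = load_model(
    model_name='lamm-mit/Bioinspired-SmolLM-1.7B-Instruct',
    chat_template=chatml_template,
)
```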

## Example use case

Load the model:
```python
model, tokenizer = load_model(model_name='lamm-mit/Bioinspired-SmolLM-1.7B-Instruct')
```
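
`load_model` (defined above) also exposes optional 4-bit quantization and `torch.compile`; for example:

```python
# Optional: load a 4-bit (NF4) quantized copy using the `quant` flag defined above
model_4bit, tokenizer = load_model(
    model_name='lamm-mit/Bioinspired-SmolLM-1.7B-Instruct',
    quant=True,
)

# Optional: compile the forward pass (see the `compile_mode` flag above)
model_compiled, tokenizer = load_model(
    model_name='lamm-mit/Bioinspired-SmolLM-1.7B-Instruct',
    compile_mode="reduce-overhead",
)
```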

### Collagen and the Glass Bead Game
```python
messages = []
result, messages = generate_response(model, tokenizer,
                                     text_input="Collagen and the Glass Bead Game, discuss their relationship.",
                                     num_return_sequences=1, temperature=0.3, max_new_tokens=1024,
                                     messages=[], system_prompt="You are a materials scientist. ")

for res in result:
    print(res, "\n")

messages.append({"role": "assistant", "content": result[0]})
```

<pre style="white-space: pre-wrap;">
What a fascinating topic! As a materials scientist, I've always been intrigued by the intersection of collagen, a ubiquitous protein in biological systems, and the ancient, strategy-based game, the Glass Bead Game.

At first glance, collagen and the Glass Bead Game may seem like vastly different entities. Collagen is a structural protein found in connective tissues, providing strength, flexibility, and support to various biological structures. In contrast, the Glass Bead Game is a strategy board game originating from ancient Mesopotamia, where players navigate a complex, interconnected network of beads and cards to achieve victory.

Upon closer inspection, I discovered a surprising connection between these two seemingly disparate entities. The Glass Bead Game, with its intricate, interconnected network of beads and cards, shares a striking similarity with the hierarchical structure of collagen. Both are composed of smaller, interconnected units that form a larger, more complex system.

In collagen, the repeating unit of amino acids, glycine, proline, and hydroxyproline, creates a hierarchical structure that contributes to its mechanical properties. Similarly, the Glass Bead Game's beads and cards are arranged in a hierarchical manner, with each component influencing the overall gameplay and strategy. The Glass Bead Game's "beads" represent the individual components, while the "cards" symbolize the interactions and relationships between these components.

Furthermore, both collagen and the Glass Bead Game exhibit self-assembly and hierarchical organization. Collagen molecules self-assemble into fibrils, which then form larger structures, such as fibers and tissues. The Glass Bead Game, too, relies on the strategic arrangement of beads and cards to create a cohesive, interconnected system. The game's rules and strategies are designed to facilitate the assembly and interaction of its components, much like the self-assembly of collagen molecules.

Moreover, the concept of "interconnectedness" is a crucial aspect of both collagen and the Glass Bead Game. In collagen, the interconnected fibrils provide structural integrity and enable the transfer of mechanical loads. In the Glass Bead Game, the interconnected network of beads and cards creates a dynamic, ever-changing system that requires players to adapt and respond to the shifting relationships between components.

In conclusion, the relationship between collagen and the Glass Bead Game is one of shared principles and concepts. Both entities exhibit hierarchical structure, self-assembly, and interconnectedness, which are essential for their respective functions. By recognizing these parallels, we can gain a deeper understanding of the fundamental principles that govern the behavior of biological systems and the strategic complexities of games like the Glass Bead Game. 
</pre>

You can ask a follow-up question like so:
```python
result, messages = generate_response(model, tokenizer, text_input="Summarize the key points in JSON format.",
                                     num_return_sequences=1, temperature=0.3, max_new_tokens=1024,
                                     messages=messages)

for res in result:
    print(res, "\n")

messages.append({"role": "assistant", "content": result[0]})
```

<pre style="white-space: pre-wrap;">

Here's a summary of collagen and the Glass Bead Game in JSON format:

```json
{
  "collagen": {
    "structure": "proteinous fibrous network",
    "composition": "glycoprotein",
    "properties": "high tensile strength, low stiffness",
    "functions": "structural support, cell signaling, wound healing",
    "types": ["alpha", "beta", "gamma", "delta"]
  },
  "glass bead game": {
    "rules": "a strategy game where players collect and trade glass beads, simulating the process of collagen synthesis and mineralization",
    "components": ["glass beads", "mineral deposits", "cell signaling molecules", "growth factors"],
    "goal": "collect and trade the most beads to achieve mineralization and growth",
    "rules of play": "players must collect beads, deposit them in cells, and stimulate mineralization through signaling molecules and growth factors",
    "game mechanics": "players earn points for successful mineralization, trade beads for resources, and manage their cells' mineralization and growth rates"
  },
}
```

In this JSON representation, collagen is a proteinous fibrous network with a high tensile strength and low stiffness, while the Glass Bead Game is a strategy game where players collect and trade glass beads to simulate the process of collagen synthesis and mineralization. The Glass Bead Game components include glass beads, mineral deposits, cell signaling molecules, and growth factors, with the goal of collecting and trading the most beads to achieve mineralization and growth. The rules of play involve collecting beads, depositing them in cells, and stimulating mineralization through signaling molecules and growth factors. 
</pre>
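
If you want to post-process such a response programmatically, a minimal sketch for extracting and parsing the fenced JSON block (the helper name and regex are illustrative, and model output is not guaranteed to be valid JSON) could look like this:

```python
import json
import re

def extract_json_block(text):
    """Return the first fenced JSON code block in `text` parsed as JSON, or None."""
    match = re.search(r"```json\s*(.*?)\s*```", text, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(1))
    except json.JSONDecodeError:
        # Model output may be malformed (e.g., trailing commas), so fail gracefully.
        return None

summary = extract_json_block(result[0])
print(summary if summary is not None else "No valid JSON block found.")
```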


## Benchmark

Performance on the bioinspired benchmark, which uses domain questions about biological materials and spider silk to assess how well the model has absorbed the training corpus:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/E2zJnRjtNHlZgy7uZElrP.png)
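
As a rough illustration of how such domain questions can be run with the helpers defined above (the questions below are placeholders, not the actual benchmark items):

```python
# Rough sketch of a domain-question evaluation loop using the helpers above.
# The questions are placeholders, not the actual bioinspired benchmark.
domain_questions = [
    "What gives spider silk its combination of strength and extensibility?",
    "How does mineral content influence the toughness of bone?",
]

answers = []
for question in domain_questions:
    result, _ = generate_response(
        model, tokenizer,
        text_input=question,
        system_prompt="You are a materials scientist. ",
        temperature=0.3,
        max_new_tokens=512,
        messages=[],
    )
    answers.append(result[0])

for q, a in zip(domain_questions, answers):
    print(f"Q: {q}\nA: {a}\n")
```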

## Citation

Please cite as:

```bibtex
@article{LuLuuBuehler2024,
  title={Fine-tuning large language models for domain adaptation: Exploration of training strategies, scaling, model merging and synergistic capabilities},
  author={Wei Lu and Rachel K. Luu and Markus J. Buehler},
  journal={arXiv preprint arXiv:2409.03444},
  url={https://arxiv.org/abs/2409.03444},
  year={2024},
}

@article{LuuBuehler2023bioinspiredLLM,
  title={BioinspiredLLM: Conversational Large Language Model for the Mechanics of Biological and Bio-Inspired Materials},
  author={Rachel K. Luu and Markus J. Buehler},
  year={2023},
  journal={Advanced Science},
  doi={10.1002/advs.202306724},
  eprint={2309.08788},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2309.08788}
}

@article{Buehler2024XLoRA,
  title={X-LoRA: Mixture of low-rank adapter experts, a flexible framework for large language models with applications in protein mechanics and molecular design},
  author={Eric L. Buehler and Markus J. Buehler},
  year={2024},
  journal={APL Machine Learning},
  volume={2},
  number={2},
  pages={026119},
  doi={10.1063/5.0203126},
  note={\url{https://doi.org/10.1063/5.0203126}}
}
```