---

## Model Summary

Cephalo is a series of multimodal vision large language models (V-LLMs) focused on materials science and engineering, designed to integrate visual and linguistic data for advanced understanding and interaction in human-AI or multi-agent AI frameworks.

The model is developed to process diverse inputs, including images and text, facilitating a broad range of applications such as image captioning, visual question answering, and multimodal content generation. The architecture combines a vision encoder with an autoregressive transformer to support complex natural language understanding.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/kl5GWBP9WS0D4uwd1t3S7.png)

This version of Cephalo, lamm-mit/Cephalo-Idefics2-3x8b-beta, is a Mixture-of-Expert model based on variants and fine-tuned versions of the Idefics-2 model. The basic model architecture is as follows:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/b7BK8ZtDzTMsyFDi0wP3w.png)

This model leverages multiple expert networks to process different parts of the input, allowing for more efficient and specialized computations. For each token in the input sequence, a gating layer computes scores for all experts and selects the top-*k* experts based on these scores. We use a *softmax(·)* activation function to ensure that the weights across the chosen experts sum to unity. The output of the gating layer is a set of top-*k* values and their corresponding indices. The selected experts' outputs ($\mathbf{Y}$) are then computed and combined using a weighted sum, where the weights are given by the top-*k* values. This sparse MoE mechanism allows our model to dynamically allocate computational resources, improving efficiency and performance for complex vision-language tasks. The figure above depicts an overview of the architecture.
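
To make the routing concrete, the gating-and-combination step described above can be sketched as follows. This is an illustrative sketch, not the code shipped with this repository; the function signature, variable names, and tensor shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def moe_forward(hidden_states, gate, experts, k=2):
    """Sparse MoE routing sketch: score all experts, keep the top-k per token,
    normalize their scores with a softmax, and combine the experts' outputs."""
    logits = gate(hidden_states)                               # (batch, seq, n_experts)
    topk_vals, topk_idx = torch.topk(logits, k, dim=-1)        # scores/indices of the chosen experts
    topk_weights = F.softmax(topk_vals, dim=-1)                # weights over the chosen experts sum to 1
    output = torch.zeros_like(hidden_states)
    for slot in range(k):
        weight = topk_weights[..., slot].unsqueeze(-1)         # (batch, seq, 1)
        expert_idx = topk_idx[..., slot]                       # (batch, seq)
        for e, expert in enumerate(experts):                   # experts: list of feed-forward modules
            mask = (expert_idx == e).unsqueeze(-1)             # tokens routed to expert e in this slot
            output = output + mask * weight * expert(hidden_states)
    return output
```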
This sample model has 20b parameters in total (three experts, 8b each, with 8b parameters active during inference). The instructions below include a detailed explanation of how other models can be constructed.
## Download Idefics-2 MoE Model and Sample inference code

```python
pip install transformers -U
```

Install FlashAttention-2
```python
pip install flash-attn --no-build-isolation
```
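
The full sample inference code is provided in the model repository; below is only a minimal sketch of the flow, assuming the checkpoint can be loaded through the standard Idefics2 `transformers` API. The MoE variant may require the repository's custom model classes, so the loading call, the image URL, and the question text are assumptions for illustration:

```python
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, Idefics2ForConditionalGeneration

# Assumption: the MoE checkpoint loads like a standard Idefics2 model
# (it may require trust_remote_code=True or the repository's custom classes).
model_id = "lamm-mit/Cephalo-Idefics2-3x8b-beta"
processor = AutoProcessor.from_pretrained(model_id)
model = Idefics2ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat-style prompt with one image and one question
url = "https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/5n6oRNHrfwHkBX0QertZp.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text",
                          "text": "What is shown in this image, and why is it relevant for materials design?"}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=256)
```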

The generated token ids are then decoded and printed:

```python
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```

Sample output:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/5n6oRNHrfwHkBX0QertZp.png)
<small>Image by [Vaishakh Manohar](https://www.quantamagazine.org/the-simple-algorithm-that-ants-use-to-build-bridges-20180226/)</small>

<pre style="white-space: pre-wrap;">
The image shows a group of ants climbing over a vertical surface. The ants are using their legs and antennae to navigate the surface, demonstrating their ability to adapt to different environments and overcome obstacles. This behavior is relevant for materials design because it highlights the ants' ability to optimize their movements and interactions with their surroundings, which can inspire the development of advanced materials that mimic these natural adaptations.
...
</pre>

## Make an Idefics-2-MoE model from scratch using several pre-trained models

This section includes detailed instructions for making your own Idefics-2-MoE model. First, download the .py files that implement the Idefics-2 Mixture-of-Expert Vision model:

```python
pip install huggingface_hub
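# The full download script in this model card lists the MoE implementation .py files and fetches
# them with hf_hub_download; the sketch below illustrates the idea, and the file name is a
# placeholder, not the repository's actual list.
from huggingface_hub import hf_hub_download
from tqdm import tqdm

repo_id = "lamm-mit/Cephalo-Idefics2-3x8b-beta"
py_files = ["moe_idefics2.py"]  # placeholder; use the file list from the repository

for file_name in tqdm(py_files):
    hf_hub_download(repo_id=repo_id, filename=file_name, local_dir="./")
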
print("Download completed.")
```

Second, we will download the models that will form the experts, as well as the base model. As a simple example, we use:
1) Materials-science fine-tuned model: lamm-mit/Cephalo-Idefics-2-vision-8b-beta (model_1)
2) A chatty version: HuggingFaceM4/idefics2-8b-chatty (model_2)
3) A basic variant: HuggingFaceM4/idefics2-8b (model_3)
One of them (or another model) must be used as the base model, from which the vision model, connector, self-attention layers, etc. are taken. From the list of models provided as experts, the feed-forward layers are used. Each model will become one expert.
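
As an illustration of this step, the base model and the three experts listed above could be loaded as follows. This is a sketch under the assumption that each checkpoint loads with the stock Idefics2 class; the choice of base model, dtype, and device settings are placeholders, and the repository's own construction code may differ:

```python
import torch
from transformers import AutoProcessor, Idefics2ForConditionalGeneration

# Expert models: their feed-forward layers will become the experts of the MoE.
model_1 = Idefics2ForConditionalGeneration.from_pretrained(
    "lamm-mit/Cephalo-Idefics-2-vision-8b-beta", torch_dtype=torch.bfloat16)
model_2 = Idefics2ForConditionalGeneration.from_pretrained(
    "HuggingFaceM4/idefics2-8b-chatty", torch_dtype=torch.bfloat16)
model_3 = Idefics2ForConditionalGeneration.from_pretrained(
    "HuggingFaceM4/idefics2-8b", torch_dtype=torch.bfloat16)

# Base model: supplies the vision encoder, connector, self-attention layers, etc.
# Here we arbitrarily reuse the materials-science model as the base.
base_model = Idefics2ForConditionalGeneration.from_pretrained(
    "lamm-mit/Cephalo-Idefics-2-vision-8b-beta", torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained("lamm-mit/Cephalo-Idefics-2-vision-8b-beta")
```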
To transform an existing model into a Mixture of Experts (MoE) model, we first take the base model and use a set of fine-tuned or otherwise trained models to create multiple expert models. Typically, each of the expert models specializes in different aspects of the input data, allowing for greater flexibility and efficiency in processing. To implement this, the original layers of the base model are replaced with modified layers that incorporate the gating and expert mechanisms. A custom configuration class is created to extend the base configuration, adding parameters specific to the MoE setup, such as the number of experts and the number of experts to select in each forward call ($k$).
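
For instance, such a configuration class might look like the following sketch; the class and attribute names are assumptions, not the names used in this repository's .py files:

```python
from transformers import Idefics2Config

class Idefics2MoEConfig(Idefics2Config):
    """Extends the base Idefics2 configuration with MoE-specific settings."""
    def __init__(self, num_experts=3, num_experts_per_tok=2, **kwargs):
        super().__init__(**kwargs)
        self.num_experts = num_experts                    # number of expert feed-forward blocks
        self.num_experts_per_tok = num_experts_per_tok    # k experts selected per forward call
```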
Within the algorithm, the original MLP layers in the base model are replaced with a new MoE layer that combines the outputs of the selected experts. This MoE layer uses the gate scores to select the relevant experts' outputs and combines them into a single output by computing a weighted sum. The modified layers are then integrated back into the model, creating a hybrid architecture that retains the original model's structure but with enhanced capabilities.
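
Schematically, this swap replaces each decoder MLP with an MoE layer built from the corresponding feed-forward blocks of the expert models. The attribute paths and the `make_moe_layer` helper below are assumptions for illustration; the repository's actual construction code begins in the block that follows:

```python
import torch.nn as nn

def build_moe_model(base_model, expert_models, make_moe_layer):
    """Sketch: replace each dense MLP in the base model's text decoder with an MoE layer
    assembled from the matching MLPs of the expert models."""
    for i, layer in enumerate(base_model.model.text_model.layers):
        expert_mlps = nn.ModuleList(
            m.model.text_model.layers[i].mlp for m in expert_models
        )
        layer.mlp = make_moe_layer(expert_mlps)  # e.g., a gated layer using moe_forward above
    return base_model
```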

```python
from transformers import AutoProcessor, Idefics2ForConditionalGeneration, AutoTokenizer