|
---
base_model:
- google/gemma-2-2b-it
- VAGOsolutions/SauerkrautLM-gemma-2-2b-it
- stvlynn/Gemma-2-2b-Chinese-it
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
|
# Gemma2-2B-it Merged Fine-Tuned Models for Chinese & German Understanding
|
|
|
A lightweight language model based on Gemma2 2B, created by merging multiple fine-tuned Gemma2-2B-it variants to test multilingual conversation capabilities in specialized, low-parameter language models.
|
|
|
## 🤏 Models Merged |
|
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). |
|
This model was merged with the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, using [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) as the base (a rough sketch of the method follows the model list below).
|
|
|
The following models were included in the merge: |
|
* [VAGOsolutions/SauerkrautLM-gemma-2-2b-it](https://huggingface.co/VAGOsolutions/SauerkrautLM-gemma-2-2b-it) |
|
* [stvlynn/Gemma-2-2b-Chinese-it](https://huggingface.co/stvlynn/Gemma-2-2b-Chinese-it) |
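
At a high level, Model Stock treats each fine-tuned checkpoint as a displacement ("task vector") from the base weights, averages the fine-tuned weights, and then interpolates that average back toward the base, with the interpolation ratio derived from the angle between the task vectors. The snippet below is a rough per-tensor sketch of that idea, not mergekit's actual implementation; the function name and the pairwise-cosine simplification are illustrative only (see the paper for the exact formulation).

```python
import itertools

import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Toy per-tensor illustration of Model Stock (not mergekit's implementation)."""
    n = len(finetuned)
    # Task vectors: how far each fine-tuned weight has moved away from the base weight
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine between the task vectors (the angle theta used in the paper)
    cos_theta = torch.stack([
        F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
        for a, b in itertools.combinations(deltas, 2)
    ]).mean()
    # Interpolation ratio: near-parallel task vectors pull the result toward the
    # fine-tuned average, near-orthogonal ones keep it close to the base weights
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    return t * torch.stack(finetuned).mean(dim=0) + (1 - t) * base
```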
|
|
|
## 🧩 Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml
models:
- model: google/gemma-2-2b-it
- model: VAGOsolutions/SauerkrautLM-gemma-2-2b-it
- model: stvlynn/Gemma-2-2b-Chinese-it
merge_method: model_stock
base_model: google/gemma-2-2b-it
dtype: bfloat16
```
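
To reproduce the merge, this configuration can be saved to a file and passed to mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output-model-directory` (the config file name and output path here are placeholders).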
|
|
|
## 💻 Usage
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("AdamLucek/gemma2-2b-it-chinese-german")
model = AutoModelForCausalLM.from_pretrained(
    "AdamLucek/gemma2-2b-it-chinese-german",
    device_map="cuda",
    torch_dtype=torch.bfloat16
)

# Prepare the input text (a Chinese prompt asking the model to explain the
# superposition principle in quantum mechanics and to illustrate its practical
# importance and challenges with examples)
input_text = "请解释一下量子力学中的叠加原理,并举例说明该原理在实际应用中的重要性和挑战。"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

# Generate the output
outputs = model.generate(
    **input_ids,
    max_new_tokens=256,
    pad_token_id=tokenizer.eos_token_id
)

# Decode and print the generated text
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
|
|
**Output**
|
|
|
```
## 量子叠加原理:

**叠加原理**是量子力学中一个重要的概念,它描述了量子系统在测量之前处于多个状态的可能性。

**简单来说,就是说,一个量子系统可以同时处于多个状态,直到我们测量它时,才会坍缩到一个确定的状态。**

**具体来说,我们可以用以下方式理解叠加原理:**

* **量子系统:** 比如一个原子,它可以处于多个能量状态。
* **叠加态:** 表示量子系统同时处于多个状态的概率分布。
* **测量:** 当我们测量量子系统时,它会坍缩到一个确定的状态。
* **坍缩:** 测量过程会改变量子系统的状态,使其坍缩到一个确定的状态。

**举例说明:**

想象一下一个量子系统,它可以处于两个状态:上或下。这个系统可以被描述为一个叠加态,表示它同时处于上和下两个状态的概率分布。

**如果我们没有测量这个系统,那么它就处于叠加态,同时处于上和下两个状态。**

**但是,当我们测量这个系统时
```
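
The usage example above feeds the model a raw string; Gemma-2-it checkpoints are normally prompted through their chat template. A minimal sketch along the following lines (same model ID as above, with an illustrative German prompt) may yield cleaner instruction-following:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("AdamLucek/gemma2-2b-it-chinese-german")
model = AutoModelForCausalLM.from_pretrained(
    "AdamLucek/gemma2-2b-it-chinese-german",
    device_map="cuda",
    torch_dtype=torch.bfloat16
)

# Wrap the prompt in the Gemma chat format via the tokenizer's chat template
# (illustrative German prompt: "Briefly explain the difference between
# nuclear fission and nuclear fusion.")
messages = [
    {"role": "user", "content": "Erkläre kurz den Unterschied zwischen Kernspaltung und Kernfusion."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to("cuda")

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    pad_token_id=tokenizer.eos_token_id
)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```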