---
base_model:
  - google/gemma-2-2b-it
  - VAGOsolutions/SauerkrautLM-gemma-2-2b-it
  - stvlynn/Gemma-2-2b-Chinese-it
library_name: transformers
tags:
  - mergekit
  - merge
license: apache-2.0
---

# Gemma2-2B-it Merged Fine-Tuned Models for Chinese & German Understanding

A lightweight language model based on Gemma2-2B, created by merging multiple fine-tuned Gemma2-2B-IT variants to test multilingual conversation capabilities in specialized, low-parameter language models.

## 🤏 Models Merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit). The models were merged using the Model Stock merge method, with google/gemma-2-2b-it as the base.

The following models were included in the merge:

* [VAGOsolutions/SauerkrautLM-gemma-2-2b-it](https://huggingface.co/VAGOsolutions/SauerkrautLM-gemma-2-2b-it)
* [stvlynn/Gemma-2-2b-Chinese-it](https://huggingface.co/stvlynn/Gemma-2-2b-Chinese-it)
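Model Stock estimates how closely the fine-tuned checkpoints sit to each other in weight space and interpolates between their average and the base accordingly. A rough per-layer sketch of the idea in plain Python — `model_stock_layer` is a hypothetical helper for illustration, not mergekit's actual implementation:

```python
import math

def model_stock_layer(w_base, w_finetuned):
    """Merge one layer's weights following the Model Stock idea (sketch only).

    w_base: list of floats (base weights for the layer)
    w_finetuned: list of such lists, one per fine-tuned model
    """
    k = len(w_finetuned)
    # Task vectors: each fine-tuned model's offset from the base
    diffs = [[f - b for f, b in zip(w, w_base)] for w in w_finetuned]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    # Average pairwise cosine between task vectors
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    cos_theta = sum(cosine(diffs[i], diffs[j]) for i, j in pairs) / len(pairs)

    # Interpolation ratio: closer task vectors (cos_theta -> 1) pull the
    # merge toward the fine-tuned average; divergent ones keep it near the base
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = [sum(col) / k for col in zip(*w_finetuned)]
    return [t * a + (1 - t) * b for a, b in zip(w_avg, w_base)]
```

With identical fine-tuned checkpoints the merge collapses to their average; with orthogonal task vectors it stays at the base.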

## 🧩 Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: google/gemma-2-2b-it
  - model: VAGOsolutions/SauerkrautLM-gemma-2-2b-it
  - model: stvlynn/Gemma-2-2b-Chinese-it
merge_method: model_stock
base_model: google/gemma-2-2b-it
dtype: bfloat16
```

## 💻 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("AdamLucek/gemma2-2b-it-chinese-german")
model = AutoModelForCausalLM.from_pretrained(
    "AdamLucek/gemma2-2b-it-chinese-german",
    device_map="cuda",
    torch_dtype=torch.bfloat16
)

# Prepare the input text
# ("Explain the superposition principle in quantum mechanics, with examples
#  of its importance and challenges in practical applications.")
input_text = "请解释一下量子力学中的叠加原理,并举例说明该原理在实际应用中的重要性和挑战。"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

# Generate the output
outputs = model.generate(
    **input_ids,
    max_new_tokens=256,
    pad_token_id=tokenizer.eos_token_id
)

# Decode and print the generated text
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
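The snippet above feeds raw text to the model; Gemma's instruction-tuned checkpoints are trained on an explicit turn markup, which `tokenizer.apply_chat_template` normally produces for you. For reference, a single-turn prompt can be built by hand — `build_gemma_prompt` is a hypothetical helper for illustration, not part of transformers (the German example question is also illustrative):

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a single user message in Gemma's chat turn markup."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Roughly what the tokenizer produces via:
# tokenizer.apply_chat_template(
#     [{"role": "user", "content": "Wie funktioniert ein Transformer?"}],
#     tokenize=False, add_generation_prompt=True,
# )
prompt = build_gemma_prompt("Wie funktioniert ein Transformer?")
```

Passing `prompt` to the tokenizer in place of `input_text` keeps the input consistent with the format the instruction-tuned base was trained on.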