---
base_model:
- google/gemma-2-2b-it
- VAGOsolutions/SauerkrautLM-gemma-2-2b-it
- stvlynn/Gemma-2-2b-Chinese-it
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---

# Gemma2-2B-it Merged Fine-Tuned Models for Chinese & German Understanding

A lightweight language model based on Gemma2 2B, created by merging multiple fine-tuned Gemma2-2B-it variants to test multilingual conversation capabilities in specialized low-parameter language models.

## 🤏 Models Merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) as the base.

The following models were included in the merge:
* [VAGOsolutions/SauerkrautLM-gemma-2-2b-it](https://huggingface.co/VAGOsolutions/SauerkrautLM-gemma-2-2b-it) (German fine-tune)
* [stvlynn/Gemma-2-2b-Chinese-it](https://huggingface.co/stvlynn/Gemma-2-2b-Chinese-it) (Chinese fine-tune)

## 🧩 Configuration

The following YAML configuration was used to produce this model (a sketch for reproducing the merge from this configuration follows the usage examples below):

```yaml
models:
  - model: google/gemma-2-2b-it
  - model: VAGOsolutions/SauerkrautLM-gemma-2-2b-it
  - model: stvlynn/Gemma-2-2b-Chinese-it
merge_method: model_stock
base_model: google/gemma-2-2b-it
dtype: bfloat16
```

## 💻 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("AdamLucek/gemma2-2b-it-chinese-german")
model = AutoModelForCausalLM.from_pretrained(
    "AdamLucek/gemma2-2b-it-chinese-german",
    device_map="cuda",
    torch_dtype=torch.bfloat16
)

# Prepare the input text (Chinese: "Explain the superposition principle in quantum
# mechanics and give examples of its importance and challenges in practical applications.")
input_text = "请解释一下量子力学中的叠加原理,并举例说明该原理在实际应用中的重要性和挑战。"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

# Generate the output
outputs = model.generate(
    **input_ids,
    max_new_tokens=256,
    pad_token_id=tokenizer.eos_token_id
)

# Decode and print the generated text
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
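The merge also targets German. The sketch below reuses the `tokenizer` and `model` loaded above and formats the prompt with the tokenizer's chat template, which matches the turn format that Gemma-2 instruction-tuned checkpoints expect; the German prompt itself is illustrative and asks the model to explain the difference between supervised and unsupervised learning with one example each.

```python
# German prompt, formatted with the model's chat template
# (reuses `tokenizer` and `model` from the snippet above)
messages = [
    {
        "role": "user",
        "content": "Erkläre den Unterschied zwischen überwachtem und unüberwachtem Lernen, mit je einem Beispiel.",
    }
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the model-turn marker so generation starts cleanly
    return_tensors="pt",
).to("cuda")

# Generate and print the response
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```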
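To reproduce the merge, mergekit can be driven from its CLI or from Python. The following is a minimal sketch, assuming mergekit is installed (`pip install mergekit`) and the YAML configuration above is saved locally as `config.yaml`; the filename and output path are illustrative. The CLI equivalent would be `mergekit-yaml config.yaml ./merged`.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (saved as config.yaml; path is illustrative)
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the Model Stock merge and write the merged model to ./merged
run_merge(
    merge_config,
    "./merged",
    options=MergeOptions(
        cuda=False,           # set True to run the merge arithmetic on GPU
        copy_tokenizer=True,  # carry the base model's tokenizer into the output
    ),
)
```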