
KafkaLM-7B-DARE_TIES-LaserRMT-QLoRA-DPO-v0.5

KafkaLM 7b is a Mistral 7b model that was further pre-trained on a large German dataset from Björn Plüster and LAION (leo-mistral-hessianai-7b) and then fine-tuned on an ensemble of popular, high-quality open-source instruction sets (translated from English to German).

KafkaLM 7b is a Seedbox project trained by Dennis Dickmann.

Why Kafka? The models are proficient, yet creative, and have some tendencies to linguistically push boundaries 😊

THE MODEL CAN BE TESTED HERE: Kafka-7B HF Space

Model Details

The purpose of releasing the KafkaLM series is to contribute to the German AI community with a set of fine-tuned LLMs that are easy to use in everyday applications across a variety of tasks.

The main goal was to provide LLMs proficient in German, especially to be used in German-speaking business contexts where English alone is not sufficient.

DPO Training with laserRMT and QLoRA

Based on the brilliant work of the laserRMT team, I used their SNR implementation to identify candidate layers for the DPO training.
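
To give a rough idea of what this means in practice, here is a minimal, unofficial sketch of SNR-based layer scoring. It is not the laserRMT code; the choice of the MLP down-projection and the Marchenko-Pastur-style cutoff below are assumptions for illustration only.

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "seedboxai/KafkaLM-7B-German-V0.1", torch_dtype=torch.float16
)

def layer_snr(weight: torch.Tensor) -> float:
    # Singular values of the weight matrix, largest first
    s = torch.linalg.svdvals(weight.float())
    # Rough noise scale estimated from the tail of the spectrum (assumption)
    sigma = s[len(s) // 2:].std()
    # Marchenko-Pastur-style cutoff separating "signal" from "noise" singular values
    m, n = weight.shape
    cutoff = sigma * (1 + (min(m, n) / max(m, n)) ** 0.5)
    signal = s[s > cutoff].sum()
    noise = s[s <= cutoff].sum() + 1e-8
    return (signal / noise).item()

# Score one projection per decoder layer and list the layers by ascending SNR
scores = {
    i: layer_snr(layer.mlp.down_proj.weight)
    for i, layer in enumerate(model.model.layers)
}
for idx, snr in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"layer {idx:2d}  SNR ~ {snr:.3f}")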

Dataset

I used an 8k filtered version of the following dataset: seedboxai/multitask_german_examples_32k
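
A minimal loading sketch with the datasets library (the split name and the interpretation of "8k" as the number of retained examples are assumptions; the actual filter criteria are not part of this card):

from datasets import load_dataset

# Load the full multitask dataset and keep an ~8k subset (illustrative only)
ds = load_dataset("seedboxai/multitask_german_examples_32k", split="train")
ds_8k = ds.shuffle(seed=42).select(range(8_000))
print(ds_8k)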

Prompt Format

This model follows the subsequent prompt format:

<|system|>
Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert und präzise, ohne dabei relevante Fakten auszulassen.</s>
<|user|>
Welche Möglichkeiten der energetischen Sanierung habe ich neben Solar und Energiespeicher?</s>
<|assistant|>

🧩 Configuration

models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: seedboxai/KafkaLM-7B-German-V0.1
    parameters:
      density: 0.65
      weight: 0.50
  - model: mlabonne/Monarch-7B
    parameters:
      density: 0.60
      weight: 0.30
  - model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
    parameters:
      density: 0.60
      weight: 0.20
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
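
The block above is a mergekit configuration. If you want to reproduce a similar merge yourself, saving it to a YAML file and running the mergekit-yaml CLI should work roughly like this (file names are placeholders, and the exact flags you need may differ):

!pip install -qU mergekit

# add options such as --cuda or --copy-tokenizer as needed
!mergekit-yaml kafka_dare_ties.yaml ./KafkaLM-7B-DARE_TIES-merged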

💻 Usage (fast vLLM inference example)

!pip install -qU vllm

import torch
from vllm import LLM, SamplingParams

sampling_params = SamplingParams(
    temperature=0.7, 
    top_p=0.95, 
    top_k=50,
    max_tokens=512,
)

llm = LLM(model="doubledsbv/KafkaLM-7B-DARE_TIES-DPO-v0.5-AWQ", quantization = "awq", dtype=torch.float16)


def generate_prompt(input, sys_prompt=None):
    # Build a prompt in the KafkaLM chat format: <|system|>, <|user|>, <|assistant|>,
    # each segment terminated with </s>
    prompt = ''
    if not sys_prompt:
        sys_prompt = "Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert, präzise und ausführlich."

    prompt += f"<|system|>\n{sys_prompt.strip()}</s>\n"
    prompt += f"<|user|>\n{input.strip()}</s>\n"
    prompt += f"<|assistant|>\n"

    return prompt

outputs = llm.generate(generate_prompt("Was ist der Unterschied zwischen Ironie und Sarkasmus?"), sampling_params)
print(outputs[0].outputs[0].text.strip())
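
For comparison, a rough (untested) sketch of the same call with plain transformers instead of vLLM, reusing generate_prompt from above; the non-AWQ repo id is inferred from the card title and the generation settings are assumptions:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id inferred from the model card title; adjust if it differs
model_id = "doubledsbv/KafkaLM-7B-DARE_TIES-LaserRMT-QLoRA-DPO-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = generate_prompt("Was ist der Unterschied zwischen Ironie und Sarkasmus?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95
)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip())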

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be used for research purposes. The original Mistral 7B license and all restrictions of the datasets used to train this model apply.
