
Model Overview:

This is unsloth/mistral-7b-v0.3-bnb-4bit fine-tuned for Japanese: you can ask questions in Japanese and get answers in Japanese.
Made possible thanks to a detailed notebook from Unsloth.

Datasets Used:

  • "wikimedia/wikipedia:" (20231101.ja) for continued pretaining
  • "FreedomIntelligence/alpaca-gpt4-japanese" for instruction fine tuning

Inference Template:

from transformers import pipeline

pipe = pipeline("text-generation", model="Ryu-m0m/16bit-japanese-finetuned-mistral-7b-v0")

instruction = "侍の歴史を簡単に教えてください。" # Can you give us a brief history of the Samurai?
response = pipe(
    instruction,
    max_new_tokens=150,     # Number of new tokens to generate
    do_sample=True,         # Enable sampling so temperature/top_k/top_p take effect
    temperature=0.7,        # Controls randomness; lower is more deterministic
    top_k=50,               # Limits sampling pool to the 50 most likely tokens
    top_p=0.9,              # Nucleus sampling: tokens up to 90% cumulative probability
    num_return_sequences=1  # Generate a single response
)

print(response[0]['generated_text'])
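
If you need explicit control over precision and device placement, an equivalent sketch using AutoModelForCausalLM follows (same sampling settings; device_map="auto" assumes the accelerate package is installed):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ryu-m0m/16bit-japanese-finetuned-mistral-7b-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the repo ships 16-bit weights
    device_map="auto",          # requires accelerate
)

inputs = tokenizer("侍の歴史を簡単に教えてください。", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))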

Contact Me:

If you have any questions or find quality issues with the model, please feel free to contact me.

Uploaded Model:

  • Developed by: Ryu-m0m
  • License: apache-2.0
  • Finetuned from model: unsloth/mistral-7b-v0.3-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
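
The training script itself is not part of this card; the sketch below shows the standard Unsloth + TRL recipe that the referenced notebook follows. All hyperparameters (LoRA rank, sequence length, step count) are illustrative assumptions, not the values actually used.

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

# Load the 4-bit base model (assumed settings)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.3-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are illustrative
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Toy stand-in: in practice, format alpaca-gpt4-japanese into a single "text" column
dataset = Dataset.from_dict({"text": ["### Instruction:\n質問\n### Response:\n回答"]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=60,
        output_dir="outputs",
    ),
)
trainer.train()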
