quantumaikr/llama-2-70b-fb16-korean

Model Description

quantumaikr/llama-2-70b-fb16-korean is a Llama 2 70B model fine-tuned on a Korean dataset.

Usage

Start chatting with quantumaikr/llama-2-70b-fb16-korean using the following code snippet:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model in half precision, sharding the model
# across the available GPUs automatically.
tokenizer = AutoTokenizer.from_pretrained("quantumaikr/llama-2-70b-fb16-korean")
model = AutoModelForCausalLM.from_pretrained(
    "quantumaikr/llama-2-70b-fb16-korean",
    torch_dtype=torch.float16,
    device_map="auto",
)

# System prompt (Korean): "You are QuantumLM, an AI that follows instructions
# very well. Help as much as possible. Be mindful of safety and do not do
# anything illegal."
system_prompt = "### System:\nκ·€ν•˜λŠ” μ§€μ‹œλ₯Ό 맀우 잘 λ”°λ₯΄λŠ” AI인 QuantumLMμž…λ‹ˆλ‹€. μ΅œλŒ€ν•œ 많이 λ„μ™€μ£Όμ„Έμš”. μ•ˆμ „μ— μœ μ˜ν•˜κ³  λΆˆλ²•μ μΈ 행동은 ν•˜μ§€ λ§ˆμ„Έμš”.\n\n"

# User message (Korean): "What is artificial intelligence?"
message = "인곡지λŠ₯μ΄λž€ λ¬΄μ—‡μΈκ°€μš”?"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"

# Tokenize the prompt, move it to the model's device, and sample a response.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=True, temperature=0.9, top_p=0.75, max_new_tokens=4096)

print(tokenizer.decode(output[0], skip_special_tokens=True))
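
Loading the full 70B model in float16 requires on the order of 140 GB of GPU memory. If that is not available, quantized loading is a common workaround; the snippet below is a minimal sketch, not part of the original card, and assumes the bitsandbytes package is installed:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization (a hypothetical configuration, not an official
# recommendation for this model) roughly quarters the memory footprint
# relative to float16, at some cost in output quality.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "quantumaikr/llama-2-70b-fb16-korean",
    quantization_config=bnb_config,
    device_map="auto",
)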

QuantumLM should be used with this prompt format:

### System:
This is a system prompt, please behave and help the user.

### User:
Your prompt here

### Assistant:
The output of QuantumLM
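
For convenience, this format can be wrapped in a small helper. The sketch below is our own illustration, not part of the original card; the function name build_prompt is hypothetical:

# Assemble a prompt in the ### System / ### User / ### Assistant format
# documented above.
def build_prompt(message: str, system_message: str) -> str:
    return (
        f"### System:\n{system_message}\n\n"
        f"### User: {message}\n\n"
        f"### Assistant:\n"
    )

prompt = build_prompt("인곡지λŠ₯μ΄λž€ λ¬΄μ—‡μΈκ°€μš”?", "You are a helpful assistant.")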

Use and Limitations

Intended Use

These models are intended for research only, in adherence with the CC BY-NC-4.0 license.

Limitations and bias

Although the aforementioned dataset helps steer the base language model toward "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth, and please use the model responsibly.

Contact us: [email protected]
