---
license: apache-2.0
tags:
- generated_from_trainer
base_model: yanolja/EEVE-Korean-2.8B-v1.0
---

# "We must sleep, but AI Never Sleeps!"

## Prompt Template

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {prompt}
Assistant:
```

## Simple Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("yanolja/EEVE-Korean-Instruct-2.8B-v1.0", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("yanolja/EEVE-Korean-Instruct-2.8B-v1.0", trust_remote_code=True)

prompt_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {prompt}\nAssistant:\n"

# "Please recommend a diet menu. (A) Salad (B) Chicken (C) Pizza (D) Pasta"
text = '다이어트식 메뉴를 추천해주세요.\n\n(A) 샐러드\n(B) 치킨\n(C) 피자\n(D) 파스타'

model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt')
outputs = model.generate(**model_inputs, max_new_tokens=256)
output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(output_text)
```

### Example Output

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: 다이어트식 메뉴를 추천해주세요.

(A) 샐러드
(B) 치킨
(C) 피자
(D) 파스타
Assistant:
(A) 샐러드를 추천드립니다. 샐러드는 저칼로리이면서도 영양소가 풍부해 다이어트식으로 적합합니다. 다양한 채소와 단백질을 추가하여 균형 잡힌 식사를 만드실 수 있습니다.
```

(Translation: "I recommend (A), the salad. Salads are low in calories yet rich in nutrients, making them well suited as a diet meal. You can add a variety of vegetables and protein to build a balanced meal.")

## About the Model

First of all, our sincere gratitude to the yanolja/EEVE model and team! This model is a fine-tuned version of [crimsonjoo/Neversleep-3B-v0.1](https://huggingface.co/crimsonjoo/Neversleep-3B-v0.1), which is a Korean vocabulary-extended version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2). Specifically, we fine-tuned it with Direct Preference Optimization (DPO) using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
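DPO trains the policy to assign relatively higher likelihood to the preferred ("chosen") response than to the "rejected" one, measured against a frozen reference model. The following is a minimal sketch of the per-pair DPO objective in plain Python; it illustrates the loss formula only and is not the Axolotl implementation:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of the chosen/rejected
    response under the trained policy or the frozen reference model.
    beta scales how strongly the policy may deviate from the reference.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response than the reference does, minus the same gap for the rejected one.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)): small when the policy cleanly prefers "chosen".
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference exactly, the margin is zero and the loss is `log(2)`; increasing the policy's preference for the chosen response drives the loss down.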
For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714).

## Training Data

- Korean-translated version of [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- Korean-translated version of [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)
- No other datasets were used.
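When scripting your own evaluations against this model, the chat template from the usage section can be wrapped in a small helper so every request is formatted identically (the helper name `build_prompt` is ours, not part of the model's API):

```python
# The single-turn chat template this model was instruction-tuned on.
PROMPT_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "Human: {prompt}\nAssistant:\n"
)

def build_prompt(user_message: str) -> str:
    """Fill the chat template with one user turn, ready for the tokenizer."""
    return PROMPT_TEMPLATE.format(prompt=user_message)
```

The returned string can be passed directly to `tokenizer(...)` as in the usage example above.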