---
library_name: peft
base_model: google/gemma-1.1-7b-it
language:
- ko
- en
tags:
- translation
- gemma
---

# Model Card for gemma-1.1-7b-it-translation-koen-sft-qlora

## Model Details

### Model Description

A QLoRA adapter for google/gemma-1.1-7b-it, fine-tuned for Korean-to-English translation.

- **Developed by:** Kang Seok Ju
- **Contact:** brildev7@gmail.com

## Training Details

### Training Data

https://huggingface.co/datasets/traintogpb/aihub-koen-translation-integrated-tiny-100k (a loading sketch appears at the end of this card)

# Inference Examples

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

model_id = "google/gemma-1.1-7b-it"
peft_model_id = "brildev7/gemma-1.1-7b-it-translation-koen-sft-qlora"

# 4-bit NF4 quantization keeps the 7B base model within a single-GPU memory budget
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4"
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    attn_implementation="flash_attention_2",
)

# Attach the translation adapter on top of the quantized base model
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id

# example
prompt_template = """Translate the following into English:
{}

output:
"""
passage = "달이 해를 완전히 가리는 '개기일식'이 북미 대륙에서 7년 만에 관측되면서 전 세계 수억명의 관심이 집중됐다. 멕시코에서 시작해 캐나다까지 북미를 가로지르며 나타난 '우주쇼'를 보기 위해 사람들은 하던 일을 멈추고 하늘을 올려다봤다. 개기일식으로 창출된 경제효과도 수조원에 이른다는 분석이 나온다."
prompt = prompt_template.format(passage)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, temperature=0.2, top_p=0.95, do_sample=True, use_cache=False)
print(tokenizer.decode(outputs[0]))
```

- 7 years after the last solar eclipse, when the moon completely covered the sun was observed in North America, tens of millions of people around the world focused their attention. People stopped what they were doing and looked up to watch the 'cosmic show' that appeared across North America, from Mexico to Canada. An analysis showed that the economic effect created by the lunar eclipse was also in the hundreds of billions of won.

```
# example
prompt_template = """Translate the following into English:
{}

output:
"""
passage = "이틀째 황사 현상이 이어지며 시야가 흐린 하루였습니다. 오늘도 서울 도심은 황사에 갇혀 종일 뿌옇고 누런빛까지 띠었습니다. 내일도 대기 중에 황사가 남아 미세먼지 농도가 높게 나타나겠습니다."
prompt = prompt_template.format(passage)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, temperature=1, top_p=0.95, do_sample=True, use_cache=False)
print(tokenizer.decode(outputs[0]))
```

- On the second day of the yellow dust, the day was misty with the continuous phenomenon. On this day, downtown Seoul was covered with yellow dust and covered with yellow dust throughout the day. Yellow dust remained from tomorrow, so the fine dust concentration would be high.
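
# Loading the Training Data

The fine-tuning corpus linked in the Training Data section can be pulled with the `datasets` library. A minimal sketch, assuming the dataset's default split layout (the split and column names here are assumptions; print the structure to confirm them):

```
from datasets import load_dataset

# Download the ko-en parallel corpus referenced in the Training Data section
ds = load_dataset("traintogpb/aihub-koen-translation-integrated-tiny-100k")

# Inspect splits and column names before building prompt/response pairs
print(ds)
print(ds["train"][0])  # assumes a "train" split exists
```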
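
# Merging the Adapter (optional)

For deployment without the `peft` dependency, the adapter weights can be folded into the base model via `merge_and_unload()`. A minimal sketch, assuming enough memory for the un-quantized fp16 base model, since merging is done against full-precision weights rather than the 4-bit ones used above:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "google/gemma-1.1-7b-it"
peft_model_id = "brildev7/gemma-1.1-7b-it-translation-koen-sft-qlora"

# Load the base model un-quantized; merging folds LoRA deltas into full weights
base = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, peft_model_id)

# Replace the PEFT wrapper with a plain transformers model containing merged weights
merged = model.merge_and_unload()

# "gemma-koen-merged" is an illustrative output path, not an official artifact
merged.save_pretrained("gemma-koen-merged")
AutoTokenizer.from_pretrained(peft_model_id).save_pretrained("gemma-koen-merged")
```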