---
license: gpl-3.0
datasets:
- tatsu-lab/alpaca
- yizhongw/self_instruct
- anon8231489123/ShareGPT_Vicuna_unfiltered
- NeelNanda/pile-10k
language:
- en
- es
- ar
- fr
- fa
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
---
This model uses task classification, and the conversation takes place between a USER and the answering AI.
## NOTE ⚠️

This model is a fine-tuned version of Kolla on LGeM data (with respect to its creators), with some changes to the data and optimizers. The model includes pre-trained weights, so it is licensed under GNU GPL v3.0, the same as the original LLaMA model.
## Using the Model with Hugging Face Transformers
### Examples 🚀
```
CONVERSATION: USER: how can I start to work out more \n
Q&A: USER: how can I start to work out more \n
INFO: USER: how can I start to work out more \n
```
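For illustration, here is a small prompt-building helper (hypothetical; `build_prompt` is not part of the model or its API, just a convenience for producing the format above):

```python
def build_prompt(task: str, user_message: str) -> str:
    """Format a prompt with one of the task prefixes:
    'CONVERSATION', 'Q&A', or 'INFO' (per the examples above)."""
    return f"{task}: USER: {user_message} \n"

print(build_prompt("Q&A", "how can I start to work out more"))
# -> Q&A: USER: how can I start to work out more
```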
```python
from transformers import LlamaTokenizer, LlamaForCausalLM, pipeline
import torch
import textwrap

tokenizer = LlamaTokenizer.from_pretrained("erfanzar/LGeM-7B-MT")
model = LlamaForCausalLM.from_pretrained(
    'erfanzar/LGeM-7B-MT',
    load_in_8bit=True,          # requires the bitsandbytes and accelerate packages
    device_map='auto',
    torch_dtype=torch.float16,
)

pipe_line = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_length=256,  # your max length here
    temperature=1,   # temperature (use 1 for good performance)
    top_p=0.95,
)

# Wrap each line of the output at 90 characters for readability
verify_text = lambda txt: '\n'.join(textwrap.fill(line, width=90) for line in txt.split('\n'))

with torch.no_grad():
    output = pipe_line('CONVERSATION: USER: code a program for me to check internet connection in python ? ')
    print(verify_text(output[0]['generated_text']))
```
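The same pipeline can be reused for the other task prefixes, for example (the prompt text here is only illustrative):

```python
with torch.no_grad():
    output = pipe_line('Q&A: USER: how can I start to work out more \n')
    print(verify_text(output[0]['generated_text']))
```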
### Generate method to stream the response token by token
```python
import torch
from IPython.display import clear_output  # for in-place printing in notebooks

def generate(model_, input_ids_, tokenizer_, max_length: int = 256,
             temperature: float = 1, eos_token_id: int = 2):
    with torch.no_grad():
        before_start = input_ids_.shape[1]  # number of prompt tokens to skip when printing
        for _ in range(max_length):
            out = model_(
                input_ids=input_ids_,
                return_dict=True,
            )
            # Sample the next token from the temperature-scaled distribution
            probs = torch.nn.functional.softmax(out.logits[:, -1, :] / temperature, dim=-1)
            next_token = torch.multinomial(probs, 1)
            input_ids_ = torch.cat([input_ids_, next_token], -1)
            clear_output(wait=True)
            print(f"\r{tokenizer_.decode(input_ids_[0][before_start:], skip_special_tokens=True)}", end='')
            if next_token[0].item() == eos_token_id:
                break
            yield tokenizer_.decode(next_token[0], skip_special_tokens=True)
        return f"{tokenizer_.decode(input_ids_[0][before_start:], skip_special_tokens=True)}"
```
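A minimal sketch of driving this generator, assuming the `model` and `tokenizer` objects from the pipeline example above:

```python
prompt = 'CONVERSATION: USER: code a program for me to check internet connection in python ? '
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(model.device)

# Tokens are printed inside generate() as they are sampled; collecting them
# per-token like this may lose some whitespace from the tokenizer's decoding.
pieces = list(generate(model, input_ids, tokenizer, max_length=256))
response = ''.join(pieces)
```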
#### Result
```python
import socket
import time

def check_internet_connection():
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("www.google.com", 80))
        print("Internet connection is active.")
    except:
        print("Internet connection is not active.")

if __name__ == "__main__":
    check_internet_connection()
```
## Using the Model in OST
### LGeM 🚀
What is LGeM? LGeM is a CausalLM model trained on self-instruct data (the Alpaca data). To initialize the first training run of the main model (weights are available), I used pre-trained weights from Alpaca LoRA (open source).

It is a decoder-only model, built in PyTorch.
You can simply import the model like:

```python
from modules import LGeMForCausalLM
```
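Assuming `LGeMForCausalLM` follows the familiar `from_pretrained` pattern (an assumption on my part; check the OST source for the exact loading API), usage might look like:

```python
from modules import LGeMForCausalLM

# Hypothetical: the exact loading call may differ in OST.
model = LGeMForCausalLM.from_pretrained('erfanzar/LGeM-7B-MT')
```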
- Training code is available in LGeM-train.py (check the source).
- Training parameters:
  - learning rate: 1e-4
  - optimizer: AdamW (weight decay 1e-2); see the sketch after the command below
  - batch size: 2
  - hardware: 4× A100 80GB GPUs
  - training time: 800 hours
  - budget: $760
```sh
python3 LGeM-train.py
```
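For reference, the optimizer settings listed above correspond roughly to this AdamW configuration (a minimal sketch, not the actual training script; `model` stands for the loaded LGeM model):

```python
import torch

# Sketch only: optimizer setup matching the hyperparameters listed above.
optimizer = torch.optim.AdamW(
    model.parameters(),  # assumes `model` is the loaded LGeM model
    lr=1e-4,             # learning rate
    weight_decay=1e-2,   # weight decay
)
```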