ENGPT - v0
Our model, fine-tuned from Llama 3.1 8B, was trained on a dataset generated from the EPDK and TEDAŞ regulations governing the Turkish electricity distribution system.
It can answer questions about these regulations and provide detailed information on specific materials.
Model Details
Model Description
- Developed by: hdogrukan, ecokumus
- Language(s) (NLP): Turkish
- Finetuned from model: Llama 3.1 8B Instruct
Model Sources
- Datasets:
- https://www.tedas.gov.tr/tr/1/sartnameler/RoutePage/63c658e3d27de36b22f9cef6
- https://www.mevzuat.gov.tr/
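The card does not describe the data pipeline, but a supervised fine-tuning set in the chat format expected by instruct-tuned Llama models could be assembled from question-answer pairs extracted from these documents. A minimal sketch (the file name and the example pair are illustrative, not taken from the actual dataset):

```python
import json

# Hypothetical question-answer pairs extracted from the regulation texts
# (the real extraction pipeline is not described in this card).
qa_pairs = [
    {
        "question": "TEDAŞ-MLZ/99-032.E şartnamesini hangi kurum yayınlamıştır?",
        "answer": "TEDAŞ-MLZ/99-032.E şartnamesini Türkiye Elektrik Dağıtım A.Ş. yayınlamıştır.",
    },
]

def to_chat_record(pair):
    """Wrap one Q&A pair in the messages format used for chat fine-tuning."""
    return {
        "messages": [
            {"role": "user", "content": pair["question"]},
            {"role": "assistant", "content": pair["answer"]},
        ]
    }

# Write one JSON object per line (JSONL), keeping Turkish characters readable.
with open("energy_regulations.jsonl", "w", encoding="utf-8") as f:
    for pair in qa_pairs:
        f.write(json.dumps(to_chat_record(pair), ensure_ascii=False) + "\n")
```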
Uses
!pip install --upgrade transformers
import transformers
import torch
model_id = "hdogrukan/Llama-3.1-8B-Instruct-Energy"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "TEDAŞ-MLZ/99-032.E şartnamesini hangi kurum yayınlamıştır?"},
]
outputs = pipeline(
    messages,
    max_new_tokens=1024,
    do_sample=True,  # temperature has no effect unless sampling is enabled
    temperature=0.2,
)
# For more diverse completions, try sampling several candidates:
"""
outputs = pipeline(
    messages,
    max_new_tokens=512,
    num_return_sequences=3,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.5,
)
"""
print(outputs[0]["generated_text"][-1])
Output: "TEDAŞ-MLZ/99-032.E şartnamesini Türkiye Elektrik Dağıtım A.Ş. yayınlamıştır." (English: "The TEDAŞ-MLZ/99-032.E specification was published by Türkiye Elektrik Dağıtım A.Ş.")
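In chat mode, recent transformers versions return the whole conversation, including the generated assistant turn, under "generated_text", so the last element is the assistant message dict. A small helper to pull out just the reply text (the sample_outputs below only mirrors that shape; its content is illustrative, not a real model run):

```python
def last_assistant_message(outputs):
    """Return the content of the final assistant turn from a
    text-generation pipeline run in chat mode."""
    messages = outputs[0]["generated_text"]
    last = messages[-1]
    if last.get("role") != "assistant":
        raise ValueError("last message is not an assistant turn")
    return last["content"]

# Illustrative structure matching the pipeline's chat-mode output shape:
sample_outputs = [{
    "generated_text": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "TEDAŞ-MLZ/99-032.E şartnamesini hangi kurum yayınlamıştır?"},
        {"role": "assistant", "content": "TEDAŞ-MLZ/99-032.E şartnamesini Türkiye Elektrik Dağıtım A.Ş. yayınlamıştır."},
    ],
}]
print(last_assistant_message(sample_outputs))
```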
Model tree for hdogrukan/Llama-3.1-8B-Instruct-Energy
Base model
meta-llama/Llama-3.1-8B
Finetuned
meta-llama/Llama-3.1-8B-Instruct