Not working
Hi, when I try to run this model I get gibberish.
I think you can try something simple:
from transformers import pipeline
pipe = pipeline("text-generation", model="vonjack/phi-3-mini-4k-instruct-llamafied")
pipe("Hello!")
For me everything is working perfectly fine, and the model is performing great!
My code looks like this:
from transformers import pipeline
import torch

# self.config['autoModelForCausalLM'] holds the path to the local model folder
self.pipe = pipeline(
    "text-generation",
    model=self.config['autoModelForCausalLM'],
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)
outputs = self.pipe(
    prompt,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
)
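For reference, `prompt` here is just the final input string, built elsewhere in my class. A rough sketch of how it could be assembled with the tokenizer's chat template (the message content is made up, and apply_chat_template needs a reasonably recent transformers version):

# Hypothetical: build the prompt from chat messages via the tokenizer's template.
messages = [{"role": "user", "content": "Hello!"}]
prompt = self.pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)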
I tried your sample and I get:
'Hello! I I I I I LIKEzatmmzatllLL!m
IS
Running:
>>> pipe("Microsoft recently released Phi 3, which was converted into the LLaMA format", max_new_tokens=24)
[{'generated_text': 'Microsoft recently released Phi 3, which was converted into the LLaMA format.a 2. version,\nl.P,L p'}]
Ah never mind, I redownloaded the model and now it works fine. Thanks!
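In case anyone else hits this: my guess is the cached download was corrupted. A minimal sketch of forcing a clean re-download with huggingface_hub (force_download re-fetches the files even if they already exist in the cache):

from huggingface_hub import snapshot_download

# Re-fetch every file in the repo, bypassing any (possibly corrupted) cached copies.
snapshot_download(
    "vonjack/phi-3-mini-4k-instruct-llamafied",
    force_download=True,
)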