Giving Partial answers
from text_generation import InferenceAPIClient

client = InferenceAPIClient("OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5")

complete_answer = ""
# Stream tokens one by one and accumulate them into the full answer.
for response in client.generate_stream("<|prompter|>Write Job Description for Data Scientist<|endoftext|><|assistant|>"):
    print(response.token)
    complete_answer += response.token.text

print(complete_answer)
Using the above code, I'm only getting partial answers. Can someone help me?
Increase the max_new_tokens argument in client.generate_stream, like:

for response in client.generate_stream("<|prompter|>Write Job Description for Data Scientist<|endoftext|><|assistant|>", max_new_tokens=100):
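For reference, here is a minimal end-to-end sketch of the same snippet with max_new_tokens raised. The value 512 is just an assumption; pick whatever answer length you need. It also skips special tokens (like <|endoftext|>) so they don't end up in the accumulated text.

from text_generation import InferenceAPIClient

client = InferenceAPIClient("OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5")

prompt = "<|prompter|>Write Job Description for Data Scientist<|endoftext|><|assistant|>"

complete_answer = ""
# Stream tokens; generation stops at end-of-sequence or after max_new_tokens tokens.
for response in client.generate_stream(prompt, max_new_tokens=512):
    if not response.token.special:  # skip special tokens such as <|endoftext|>
        complete_answer += response.token.text

print(complete_answer)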
Can you provide the complete code?
@balu548411 Actually, this is just an experiment to generate a JD the normal way first. Once it works, I'll adapt it to my current code and share it.