falcon-7b-instruct responds with weird and short answers?
#98 · opened by olsi8
So I am trying to build a QA app for a document, and I run the query like this:

```python
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=docsearch.as_retriever())
response = qa.run(query)
```
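For context, `docsearch` is a vector store built from the document, roughly like this (a simplified sketch; the file name, splitter settings, and embedding model are placeholders, not my exact code):

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# Load the document and split it into chunks (placeholder path and sizes)
docs = TextLoader("my_document.txt").load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks and index them; this is what .as_retriever() is called on
embeddings = HuggingFaceEmbeddings()  # defaults to sentence-transformers/all-mpnet-base-v2
docsearch = FAISS.from_documents(chunks, embeddings)
```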
When the llm is falcon-7b-instruct, the responses are short (not complete) and weird.
The llm is configured with `model_kwargs={"temperature": 0.5, "max_length": 4000}`.
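The falcon-7b-instruct llm is created through the Hugging Face Hub, roughly like this (a sketch of the setup, not the exact code; the API token is read from the environment):

```python
from langchain.llms import HuggingFaceHub

# Uses the hosted Inference API; the HUGGINGFACEHUB_API_TOKEN environment
# variable must be set for authentication
llm = HuggingFaceHub(
    repo_id="tiiuae/falcon-7b-instruct",
    model_kwargs={"temperature": 0.5, "max_length": 4000},
)
```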
Here is a sample:
OPEN AI: The best way to clean a DPF (Diesel Particulate Filter) is to use a commercially available DPF cleaning solution. This should be applied to the filter and left to soak for a few hours before being rinsed off with water. The filter should then be dried and reinstalled in the vehicle.
FALCON: Yes No Your Answer Add your own answer! Your email address will not be