ValueError: Error raised by inference API: Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 190761 `inputs` tokens and 20 `max_new_tokens`
I have 4K rows x 15 columns of text data containing categorical text information. I wanted to use LangChain to do information retrieval with the model repo below, but I got this error while running the code snippet underneath: `ValueError: Error raised by inference API: Input validation error: inputs tokens + max_new_tokens must be <= 1512. Given: 190761 inputs tokens and 20 max_new_tokens`.
Is 1512 the chunk-size limit for this repo? What is the maximum number I can use for the chunk size?
If this repo isn't suitable, which model would you recommend for retrieving information from my entire dataset (4K rows x 15 columns)?
```python
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import HuggingFaceHub

# Split the documents into chunks (this chunk size is what triggers the error above)
text_splitter = CharacterTextSplitter(chunk_size=200000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

llm = HuggingFaceHub(repo_id="google/flan-t5-xxl", model_kwargs={"temperature": 0.7, "max_length": 512})
```
I have the same issue
Who is gonna fix this?
The issue is the approach you have taken. Let's try to fix the approach.
Approach 1:
Since the text data is really tabular data (CSV/TSV), it is better to use CSVLoader or the DataFrame loader to load the file, as sketched below. Then you can try using flan-t5-xxl as the LLM.
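A minimal sketch of Approach 1, assuming the table is saved as `data.csv` (hypothetical path):

```python
from langchain.document_loaders import CSVLoader

# Load the tabular data: each row becomes its own small Document,
# which keeps individual chunks well under the model's token limit
loader = CSVLoader(file_path="data.csv")  # hypothetical path
documents = loader.load()
print(len(documents))  # ~4K Documents, one per row
```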
Approach 2:
If you insist that the data has to be loaded as text, then convert the tabular data into JSON, load the JSON as text, and split the text with RecursiveCharacterTextSplitter, as sketched below. After that, use the flan-t5-xxl model.
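A rough sketch of Approach 2, again with a hypothetical `data.csv`; the chunk sizes are illustrative, not tuned:

```python
import json

import pandas as pd
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Convert the tabular data to JSON text
df = pd.read_csv("data.csv")  # hypothetical path
text = json.dumps(df.to_dict(orient="records"))

# Split the JSON text into chunks small enough for the model's token limit
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
docs = splitter.split_documents([Document(page_content=text)])
```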
The model token limit is a hard limit, so don't expect it to be increased. The solution is either to work within the given token limit or to look for alternative models.
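To work within the limit, keep each chunk small enough that its tokens plus `max_new_tokens` stay under 1512. A rough sketch (the ~4 characters per token ratio is an approximation, not a guarantee):

```python
from langchain.text_splitter import CharacterTextSplitter

# inputs tokens + max_new_tokens must be <= 1512 for this endpoint,
# so ~1000 characters per chunk (~250 tokens at ~4 chars/token) leaves
# plenty of headroom for the prompt template and the generated tokens
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
docs = text_splitter.split_documents(documents)
```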
Can anyone tell me the maximum number of tokens Flan-T5 XXL can handle?
I am getting this error when running:
`caption = agent.run("hi there")`
Same here. This is my code:
```python
from transformers.tools import HfAgent
from huggingface_hub import login

login("my api key")

agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5")
print("OpenAssistant is initialized 💪")

resp = agent.chat("<|prompter|>hello!<|endoftext|><|assistant|>")
# resp = agent.chat("hello!")
print(resp)
```
I tried
```python
from langchain.llms import HuggingFaceHub
from langchain.agents import create_csv_agent

llm = HuggingFaceHub(repo_id="google/flan-t5-xxl", model_kwargs={"temperature": 0.7, "max_length": 1024})
agent = create_csv_agent(llm,
                         '/content/drive/MyDrive/Dataset.csv',
                         verbose=True)
```
and ran `agent.run("how many columns are there?")`, but I get `ValueError: Error raised by inference API: Input validation error: inputs must have less than 1000 tokens. Given: 1396`. What could be the solution for working on a CSV dataset with a LangChain agent? Please, anyone, help. Thanks!
Actually, there's an issue with passing other models to the create_csv_agent() function: even when I tried reducing the columns and rows, it gave an output-parsing error for the CSV. So I reckon there's an issue with the other models used.
Is this issue solved?
I'm currently using the Hugging Face Inference API to generate text with the flan-t5-xxl model. My request asks for the generation of 250 tokens.
Is there a built-in limit to the number of tokens that can be generated in a single request, or could it be due to some other reason, like the model itself stopping early? Any guidance on this would be greatly appreciated.
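For reference, a minimal sketch of the kind of request involved (the token is a placeholder and the prompt is illustrative):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-xxl"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # placeholder token

payload = {
    "inputs": "Summarize the following table description ...",  # illustrative prompt
    "parameters": {"max_new_tokens": 250},  # explicitly request 250 generated tokens
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```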
```
pip install torch
pip install sentence-transformers
```
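Presumably these installs are for computing embeddings locally; a minimal sketch of using them through LangChain (the model name is LangChain's default for `HuggingFaceEmbeddings`):

```python
from langchain.embeddings import HuggingFaceEmbeddings

# Runs sentence-transformers locally (needs torch), so embedding
# the 4K rows never touches the Inference API or its token limits
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

query_vector = embeddings.embed_query("how many columns are there?")
print(len(query_vector))  # 768-dimensional vector for this model
```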