This was working great until recently, when it started erroring out
#1 by shivanshdhar - opened
First off, thanks for this project! It works great and I'm looking forward to the ChartInstruct-Flan-T5-XL model.
The demo was working until a couple of days ago, but now Hugging Face shows "Error" for queries without much further detail.
I also tried running the app.py file locally after downloading the model, but I get the following error.
Does this model have to be run on a GPU?
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/gradio/queueing.py", line 541, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/gradio/route_utils.py", line 276, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/gradio/blocks.py", line 1928, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/gradio/blocks.py", line 1514, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/gradio/utils.py", line 833, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/Users/shivi/Library/CloudStorage/[email protected]/My Drive/Waverly AI/ChartInstruct-LLama2/app.py", line 34, in predict
generate_ids = model.generate(**inputs, num_beams=4, max_new_tokens=512)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/transformers/generation/utils.py", line 1953, in generate
result = self._beam_search(
^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/transformers/generation/utils.py", line 2914, in _beam_search
outputs = self(
^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/transformers/models/llava/modeling_llava.py", line 424, in forward
image_outputs = self.vision_tower(pixel_values, output_hidden_states=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/transformers/models/donut/modeling_donut_swin.py", line 965, in forward
embedding_output, input_dimensions = self.embeddings(
^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/transformers/models/donut/modeling_donut_swin.py", line 210, in forward
embeddings = self.norm(embeddings)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/nn/modules/normalization.py", line 201, in forward
return F.layer_norm(
^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/chart_instruct_testing/lib/python3.12/site-packages/torch/nn/functional.py", line 2546, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
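For reference, the error seems to come from running the model in half precision (float16) on CPU, where PyTorch has no 'Half' LayerNorm kernel. Below is a rough sketch of how I'm trying to load it locally with a float32 fallback on CPU; the model id, processor, and prompt format are my assumptions based on app.py and the traceback, not necessarily what the Space actually uses.

import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Assumed model id; app.py may point elsewhere.
model_id = "ahmed-masry/ChartInstruct-LLama2"

# fp16 LayerNorm is only implemented on GPU, so fall back to float32 on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

model = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype=dtype).to(device)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("chart_example.png")
prompt = "<image>\nWhat was the highest value in the chart?"  # prompt template assumed

inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)
if device == "cuda":
    # Match the pixel values to the model's half-precision weights on GPU.
    inputs["pixel_values"] = inputs["pixel_values"].to(dtype)

generate_ids = model.generate(**inputs, num_beams=4, max_new_tokens=512)
print(processor.batch_decode(generate_ids, skip_special_tokens=True)[0])

With the float32 fallback, CPU inference is slow but avoids the 'Half' error; is that roughly how this is intended to be run locally, or is a GPU required?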
I would appreciate any support and would be happy to answer any further questions.
I have also posted my responses in the GitHub repo: https://github.com/vis-nlp/ChartInstruct/issues/3