Runtime error
Exit code: 1. Reason: transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.

Downloading shards:   0%|          | 0/4 [00:00<?, ?it/s]
Downloading shards:  25%|███       | 1/4 [00:12<00:37, 12.51s/it]
Downloading shards:  50%|█████     | 2/4 [00:24<00:24, 12.47s/it]
Downloading shards:  75%|████████  | 3/4 [00:37<00:12, 12.56s/it]
Downloading shards: 100%|██████████| 4/4 [00:42<00:00,  9.66s/it]
Downloading shards: 100%|██████████| 4/4 [00:42<00:00, 10.71s/it]
Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00,  5.69it/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 32, in <module>
    model.to("cuda:0")
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3094, in to
    return super().to(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1340, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 900, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 900, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 900, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 927, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1326, in convert
    return t.to(
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
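The traceback shows `app.py` line 32 hard-coding `model.to("cuda:0")` on a host that has no NVIDIA driver, so `torch._C._cuda_init()` fails. A minimal sketch of a fix, assuming the usual remedy of selecting the device at runtime (the `nn.Linear` here is a stand-in for the sharded checkpoint the log actually loads):

```python
import torch
import torch.nn as nn

# Stand-in for the model loaded in app.py; the real script presumably
# uses transformers' from_pretrained to load the checkpoint shards.
model = nn.Linear(4, 4)

# torch.cuda.is_available() is False when no NVIDIA driver is present,
# which is exactly the condition that raised the RuntimeError above.
# Falling back to "cpu" lets the app start on CPU-only hardware.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
print(device.type)
```

On a GPU-less Space this prints `cpu` and the model stays on the host; the alternative is to assign GPU hardware to the Space so the driver is present.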