runtime error
Exit code: 1. Reason: The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.

Downloading shards: 100%|██████████| 5/5 [00:18<00:00, 3.63s/it]
Loading checkpoint shards: 100%|██████████| 5/5 [00:00<00:00, 5.02it/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 21, in <module>
    ).cuda().eval()
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2880, in cuda
    return super().cuda(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 916, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 805, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 916, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 314, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
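The crash comes from the unconditional `.cuda()` call at app.py line 21 on hardware that exposes no GPU, and the earlier warning indicates `trust_remote_code` was passed to a concrete model class rather than an Auto class, where it is ignored. Below is a minimal sketch of a device-safe loading pattern under those assumptions; the model id and the rest of app.py are placeholders, since the Space's actual source is not shown.

    # Sketch only: "some-org/some-model" is a hypothetical checkpoint id.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "some-org/some-model"

    # Fall back to CPU when no CUDA device is present, instead of calling .cuda() unconditionally.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        trust_remote_code=True,  # honored here because this is an Auto class
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    )
    model = model.to(device).eval()

With this guard the Space still starts on CPU-only hardware; the alternative is to assign GPU hardware to the Space so that `torch.cuda.is_available()` returns True.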