Runtime error
Space failed. Exit code: 1. Reason:
.85G/3.89G [00:38<00:00, 96.9MB/s]
Downloading pytorch_model.bin:  99%|█████████▉| 3.86G/3.89G [00:38<00:00, 91.2MB/s]
Downloading pytorch_model.bin: 100%|█████████▉| 3.88G/3.89G [00:38<00:00, 89.9MB/s]
Downloading pytorch_model.bin: 100%|██████████| 3.89G/3.89G [00:38<00:00, 100MB/s]
No compiled kernel found.
Compiling kernels : /home/user/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/e214c5b71d9c13e90d92968fd8e39ce2be95419b/quantization_kernels_parallel.c
Compiling gcc -O3 -fPIC -pthread -fopenmp -std=c99 /home/user/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/e214c5b71d9c13e90d92968fd8e39ce2be95419b/quantization_kernels_parallel.c -shared -o /home/user/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/e214c5b71d9c13e90d92968fd8e39ce2be95419b/quantization_kernels_parallel.so
Load kernel : /home/user/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/e214c5b71d9c13e90d92968fd8e39ce2be95419b/quantization_kernels_parallel.so
Setting CPU quantization kernel threads to 8
Using quantization cache
Applying quantization to glm layers
/home/user/.local/lib/python3.8/site-packages/gradio/deprecation.py:43: UserWarning: You have unused kwarg parameters in Textbox, please remove them: {'line': 7, 'labels': '请输入你的问题'}
  warnings.warn(
/home/user/.local/lib/python3.8/site-packages/gradio/deprecation.py:43: UserWarning: You have unused kwarg parameters in Textbox, please remove them: {'line': 7, 'labels': '万能AI的回答'}
  warnings.warn(
Running on local URL: http://0.0.0.0:7860
Traceback (most recent call last):
  File "app.py", line 22, in <module>
    gr.Interface(fn=ChatGLM_bot,inputs=inputs,outputs=outputs,title='万能的AI助手',
  File "/home/user/.local/lib/python3.8/site-packages/gradio/blocks.py", line 1818, in launch
    raise RuntimeError("Share is not supported when you are in Spaces")
RuntimeError: Share is not supported when you are in Spaces
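The model download and kernel compilation all succeed; the actual failure is the last line of the traceback. The app calls `launch(share=True)`, which Gradio rejects inside a Hugging Face Space because a Space is already served at a public URL. Below is a minimal sketch of the fix, with `app.py` loosely reconstructed from the traceback and warnings: only the function name, labels, and title come from the log, and the `ChatGLM_bot` body and Textbox wiring are assumptions.

```python
import gradio as gr

# Placeholder for the ChatGLM_bot inference function named in the
# traceback; its real body is not shown in the log.
def ChatGLM_bot(question):
    return "..."

# The deprecation warnings report unused kwargs {'line': 7, 'labels': ...};
# the Gradio Textbox parameters are actually `lines` and `label`.
inputs = gr.Textbox(lines=7, label='请输入你的问题')
outputs = gr.Textbox(lines=7, label='万能AI的回答')

demo = gr.Interface(fn=ChatGLM_bot, inputs=inputs, outputs=outputs,
                    title='万能的AI助手')

# Drop share=True: Spaces already exposes the app publicly, and gradio
# raises "Share is not supported when you are in Spaces" otherwise.
demo.launch()
```

Renaming the `line`/`labels` kwargs to `lines`/`label` as above should also silence the two UserWarnings.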