Spaces: Running on L4
Report: Not working
"Runtime error
ols-10.1.0 mpmath-1.3.0 networkx-3.2.1 numba-0.58.1 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.18.1 nvidia-nvjitlink-cu12-12.3.101 nvidia-nvtx-cu12-12.1.105 openai-whisper-20231117 sympy-1.12 tiktoken-0.5.2 torch-2.1.2 triton-2.1.0
[notice] A new release of pip available: 22.3.1 -> 23.3.2
[notice] To update, run: pip install --upgrade pip
Traceback (most recent call last):
File "/home/user/app/app.py", line 3, in
import gradio as gr
File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/init.py", line 3, in
import gradio.components as components
File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/components.py", line 38, in
from gradio import media_data, processing_utils, utils
File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/processing_utils.py", line 17, in
from gradio import encryptor, utils
File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/utils.py", line 395, in
class Request:
File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/utils.py", line 415, in Request
client = httpx.AsyncClient()
File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/httpx/_client.py", line 1397, in init
self._transport = self._init_transport(
File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/httpx/_client.py", line 1445, in _init_transport
return AsyncHTTPTransport(
File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/httpx/_transports/default.py", line 275, in init
self._pool = httpcore.AsyncConnectionPool(
TypeError: AsyncConnectionPool.init() got an unexpected keyword argument 'socket_options'
Container logs:
===== Application Startup at 2024-01-01 03:29:19 =====
Collecting git+https://github.com/openai/whisper.git
Cloning https://github.com/openai/whisper.git to /tmp/pip-req-build-d_1bmpnu
Running command git clone --filter=blob:none --quiet https://github.com/openai/whisper.git /tmp/pip-req-build-d_1bmpnu
Resolved https://github.com/openai/whisper.git to commit ba3f3cd54b0e5b8ce1ab3de13e32122d0d5f98ab
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting torch
Downloading torch-2.1.2-cp310-cp310-manylinux1_x86_64.whl (670.2 MB)"
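
Likely cause (my reading of the traceback, not part of the pasted log): gradio creates an httpx.AsyncClient at import time, the installed httpx forwards a socket_options keyword to httpcore.AsyncConnectionPool, and the httpcore version resolved at build time is too old to accept it, i.e. an httpx/httpcore version mismatch. A minimal diagnostic sketch to confirm this from the Space's environment (assumed helper, not code from the report):

import httpx
import httpcore

# Both packages expose __version__; printing them shows whether the resolved
# pair is out of sync (httpx new enough to pass socket_options, httpcore too
# old to accept it).
print("httpx:", httpx.__version__)
print("httpcore:", httpcore.__version__)

A commonly reported workaround for this error on older gradio Spaces is to pin compatible versions in requirements.txt, for example httpx==0.24.1, or to upgrade gradio so it resolves a matching httpx/httpcore pair; these exact pins are an assumption and should be checked against this Space's own requirements.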