vllm

Can't run the Pixtral example in the README because of library conflicts

#20
by Valadaro - opened

I'm trying to run the code below, which I found in the README, but I'm getting this error: RuntimeError: use_libuv was requested but PyTorch was build without libuv support

from vllm import LLM
from vllm.sampling_params import SamplingParams

model_name = "mistralai/Pixtral-12B-2409"

sampling_params = SamplingParams(max_tokens=8192)

# tokenizer_mode="mistral" requires a recent vLLM (added in PR #8168, linked below)
llm = LLM(model=model_name, tokenizer_mode="mistral")

prompt = "Describe this image in one sentence."
image_url = "https://picsum.photos/id/237/200/300"

# OpenAI-style chat message mixing text and image_url content parts
messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": prompt}, {"type": "image_url", "image_url": {"url": image_url}}]
    },
]

outputs = llm.chat(messages, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)

Steps I took to get into this situation:
1 - Run in terminal: python -m venv .pixtral
2 - Run in terminal: .pixtral\Scripts\activate.bat
3 - Run in terminal: python -m pip install -U "huggingface_hub[cli]"
4 - Run in terminal: huggingface-cli login
5 - Passed my access token and logged in
6 - Run in terminal: python -m pip install --upgrade vllm
7 - Got an error: ModuleNotFoundError: No module named 'xformers'
8 - Run in terminal: python -m pip install --upgrade xformers
9 - Got an error: AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'
10 - Run in terminal: python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
11 - Got an error: RuntimeError: use_libuv was requested but PyTorch was build without libuv support
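
For anyone retracing these steps, a quick sanity check on what step 10 actually installed (plain torch introspection, nothing vLLM-specific):

import torch

# Confirm the installed build matches the cu124 wheel that was requested;
# a CPU-only build here would explain the _cuda_setDevice error in step 9.
print(torch.__version__)           # e.g. a 2.x+cu124 build
print(torch.version.cuda)          # None on a CPU-only build
print(torch.cuda.is_available())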

I've been stuck on this error for 3 days and can't figure out a solution.

https://github.com/RVC-Boss/GPT-SoVITS/issues/1357
I saw this issue on GitHub with recommendations for a solution. They suggest downgrading torch (to a build before 2.4, which is when libuv became the default), but if I do that I will lose compatibility with the current vLLM version, and I can't downgrade vLLM because the latest update is what implemented the mistral tokenizer mode, as you can see below

https://github.com/vllm-project/vllm/pull/8168#issuecomment-2330341084
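
For reference, an alternative workaround that some threads suggest and that avoids downgrading (an assumption on my part, not verified on this exact setup): PyTorch 2.4 made libuv the default TCPStore backend, and the Windows wheels are built without libuv, so disabling it via the USE_LIBUV environment variable before vLLM initializes torch.distributed may get past this error:

import os

# Assumption: setting this before any vLLM/torch.distributed initialization
# makes the TCPStore fall back to the non-libuv backend on Windows builds.
os.environ["USE_LIBUV"] = "0"

from vllm import LLM
llm = LLM(model="mistralai/Pixtral-12B-2409", tokenizer_mode="mistral")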

Any suggestions? Is it working for anyone?


I followed a guide for vLLM and installed these dependencies:
pip install vllm kaleido python-multipart typing-extensions==4.5.0 torch==2.1.0

https://medium.com/@fengliplatform/trying-out-vllm-in-colab-459484096386

but now I get the following problem:
OSError: /root/mistral_models/Pixtral does not appear to have a file named config.json. Checkout 'https://huggingface.co//root/mistral_models/Pixtral/tree/None' for available files.
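
That OSError usually means the directory holds a raw Mistral-format download (params.json plus consolidated weights) rather than a Hugging Face layout with a config.json. If that's the case, a sketch that tells vLLM to read the Mistral format directly, assuming the installed vLLM is recent enough to have these flags:

from vllm import LLM

# config_format/load_format="mistral" are assumptions about the installed
# vLLM version; they make vLLM read params.json and the consolidated
# weights instead of looking for a config.json.
llm = LLM(
    model="/root/mistral_models/Pixtral",
    tokenizer_mode="mistral",
    config_format="mistral",
    load_format="mistral",
)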

And remember, when creating the token, to give it read access, not fine-grained.
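
If the interactive huggingface-cli login is part of the trouble, a programmatic alternative (the token value is a placeholder):

from huggingface_hub import login

# A classic read-scoped token; pass it directly instead of the CLI prompt.
login(token="hf_xxxxxxxxxxxxxxxxxxxx")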
