Spaces: Running
girishwangikar committed
Commit • c2903a1
1 Parent(s): 6ce9885
Update app.py
app.py CHANGED
@@ -1,28 +1,3 @@
-The error you're encountering stems from two separate issues:
-
-1. **`trust_remote_code` warning:**
-   This warning is triggered because `trust_remote_code` is used in the wrong context. It only affects Auto classes (like `AutoModel` or `AutoProcessor`) but has no effect when loading the model directly with `Qwen2VLForConditionalGeneration`, so you can safely remove it there. Here's the corrected model-loading call:
-
-   ```python
-   model = Qwen2VLForConditionalGeneration.from_pretrained(
-       "Qwen/Qwen2-VL-2B-Instruct",
-       torch_dtype=torch.float32,
-       device_map="cpu"
-   ).eval()
-   ```
-
-2. **`enable_queue` argument in `launch`:**
-   The argument `enable_queue` has been replaced by `queue` in recent Gradio versions. Instead of using `enable_queue=False`, you should use `queue=False`. Here's how to fix the `demo.launch()` call:
-
-   ```python
-   demo.launch(inline=False, server_name="0.0.0.0", server_port=int(os.getenv("PORT", 7860)), debug=True, queue=False)
-   ```
-
-This should resolve the issues you're encountering. Here's the corrected code:
-
-### Final Code Fix:
-
-```python
 import gradio as gr
 import torch
 from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
@@ -117,7 +92,4 @@ with gr.Blocks(css=css) as demo:
 commandline_args = os.getenv("COMMANDLINE_ARGS", "")

 demo.queue(api_open=False)
-demo.launch(inline=False, server_name="0.0.0.0", server_port=int(os.getenv("PORT", 7860)), debug=True, queue=("--no-gradio-queue" not in commandline_args))
-```
-
-This code should now work without the previous errors.
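As an aside outside the diff: the `queue=(...)` expression in the launch line above is just a membership check on the `COMMANDLINE_ARGS` environment variable. A minimal sketch of that toggle, using a hypothetical `queue_enabled` helper that is not part of `app.py`:

```python
# Hypothetical helper (not in app.py): mirrors the toggle used in the
# launch call, where queuing stays on unless "--no-gradio-queue"
# appears in the COMMANDLINE_ARGS environment variable.
def queue_enabled(env: dict) -> bool:
    commandline_args = env.get("COMMANDLINE_ARGS", "")
    return "--no-gradio-queue" not in commandline_args

print(queue_enabled({}))                                         # True
print(queue_enabled({"COMMANDLINE_ARGS": "--no-gradio-queue"}))  # False
```

Passing the environment in as a dict keeps the sketch testable; the app itself reads `os.getenv` directly.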
 import gradio as gr
 import torch
 from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

 commandline_args = os.getenv("COMMANDLINE_ARGS", "")

 demo.queue(api_open=False)
+demo.launch(inline=False, server_name="0.0.0.0", server_port=int(os.getenv("PORT", 7860)), debug=True, queue=("--no-gradio-queue" not in commandline_args))
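For reference, the `server_port=int(os.getenv("PORT", 7860))` expression in the launch call falls back to 7860 when `PORT` is unset. A minimal sketch with a hypothetical `resolve_port` helper (not part of `app.py`):

```python
# Hypothetical helper (not in app.py): mirrors the server_port
# expression, which reads PORT from the environment (a string when
# set) and falls back to the integer default 7860.
def resolve_port(env: dict) -> int:
    return int(env.get("PORT", 7860))

print(resolve_port({}))                # 7860
print(resolve_port({"PORT": "8080"}))  # 8080
```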