Problem with start-webui start-up. Help please
How can I solve this problem?
Starting the web UI...
Loading the extension "gallery"... Ok.
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
No model is loaded! Select one in the Model tab.
No model is loaded! Select one in the Model tab.
Loading vicuna-13b-GPTQ-4bit-128g...
Found the following quantized model: models\vicuna-13b-GPTQ-4bit-128g\vicuna-13b-4bit-128g.safetensors
Traceback (most recent call last):
File "D:\AI\oobabooga-windows\installer_files\env\lib\site-packages\gradio\routes.py", line 393, in run_predict
output = await app.get_blocks().process_api(
File "D:\AI\oobabooga-windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1108, in process_api
result = await self.call_function(
File "D:\AI\oobabooga-windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 929, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\AI\oobabooga-windows\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\AI\oobabooga-windows\installer_files\env\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\AI\oobabooga-windows\installer_files\env\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\AI\oobabooga-windows\installer_files\env\lib\site-packages\gradio\utils.py", line 490, in async_iteration
return next(iterator)
File "D:\AI\oobabooga-windows\text-generation-webui\modules\chat.py", line 228, in cai_chatbot_wrapper
for history in chatbot_wrapper(text, state):
File "D:\AI\oobabooga-windows\text-generation-webui\modules\chat.py", line 149, in chatbot_wrapper
prompt = generate_chat_prompt(text, state, **kwargs)
File "D:\AI\oobabooga-windows\text-generation-webui\modules\chat.py", line 42, in generate_chat_prompt
while i >= 0 and len(encode(''.join(rows))[0]) < max_length:
File "D:\AI\oobabooga-windows\text-generation-webui\modules\text_generation.py", line 31, in encode
input_ids = shared.tokenizer.encode(str(prompt), return_tensors='pt', add_special_tokens=add_special_tokens)
AttributeError: 'NoneType' object has no attribute 'encode'
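The AttributeError on the last line means shared.tokenizer is still None when the prompt gets encoded: the quantized weights were found, but no tokenizer was loaded for them. Below is a minimal sketch of the failing call from modules/text_generation.py with a defensive check added in front of it (the check is an illustration, not part of the webui code):

from modules import shared

def encode(prompt, add_special_tokens=True):
    # shared.tokenizer is set when a model loads successfully from the Model tab;
    # if loading fails (e.g. tokenizer files are missing), it stays None.
    if shared.tokenizer is None:
        raise RuntimeError("No tokenizer loaded - load a model (with its tokenizer files) first.")
    # This is the call that raises in the traceback above.
    return shared.tokenizer.encode(str(prompt), return_tensors='pt',
                                   add_special_tokens=add_special_tokens)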
You need to copy http://127.0.0.1:7860 into your browser, or hold Ctrl and click on it, and you will be taken to your chat.
How can I solve this problem?
Starting the web UI...
Loading the extension "gallery"... Ok.
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
No model is loaded! Select one in the Model tab.
No model is loaded! Select one in the Model tab.
I have all the files, but I can't choose the model. Why?
Thanks, everyone, for the assistance.
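A frequent cause of both symptoms above (the model not being usable and the tokenizer ending up as None) is a models\ folder that contains only the quantized .safetensors file, without the tokenizer and config files from the original repo. A quick check, as a sketch (the folder name is taken from the first log in this thread; the list of expected files is an assumption based on typical LLaMA/Vicuna checkpoints):

from pathlib import Path

model_dir = Path(r"models\vicuna-13b-GPTQ-4bit-128g")  # folder name from the first log

# Files a LLaMA-style GPTQ model usually needs next to the .safetensors
# (assumption - the exact set can vary between repos).
expected = ["config.json", "tokenizer.model", "tokenizer_config.json", "special_tokens_map.json"]

print("Contents:", sorted(p.name for p in model_dir.iterdir()))
for name in expected:
    print(name, "OK" if (model_dir / name).exists() else "MISSING")

If any of these report MISSING, copying them into the model folder from the original Hugging Face repository is usually what's needed.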
lol6969, I have the same error no matter what I've tried. It still shows the same thing. If you find out how to fix it, please write.
Starting the web UI...
Loading the extension "gallery"... Ok.
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
That's all that comes up. I installed two models, and I'm not even given the option to choose... :(
Yes, if I find a solution I'll write. You do the same, please... bye.
Same issue!
(D:\software\GPT\oobabooga-windows\installer_files\env) D:\software\GPT\oobabooga-windows>start-webui.bat
Starting the web UI...
Loading the extension "gallery"... Ok.
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Loading vicuna-13b-GPTQ-4bit-128g...
Loading vicuna-13b-GPTQ-4bit-128g...
Traceback (most recent call last):
File "D:\software\GPT\oobabooga-windows\installer_files\env\lib\site-packages\gradio\routes.py", line 393, in run_predict
output = await app.get_blocks().process_api(
File "D:\software\GPT\oobabooga-windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1108, in process_api
result = await self.call_function(
File "D:\software\GPT\oobabooga-windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 929, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\software\GPT\oobabooga-windows\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\software\GPT\oobabooga-windows\installer_files\env\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\software\GPT\oobabooga-windows\installer_files\env\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\software\GPT\oobabooga-windows\installer_files\env\lib\site-packages\gradio\utils.py", line 490, in async_iteration
return next(iterator)
File "D:\software\GPT\oobabooga-windows\text-generation-webui\modules\chat.py", line 228, in cai_chatbot_wrapper
for history in chatbot_wrapper(text, state):
File "D:\software\GPT\oobabooga-windows\text-generation-webui\modules\chat.py", line 160, in chatbot_wrapper
for reply in generate_reply(f"{prompt}{' ' if len(cumulative_reply) > 0 else ''}{cumulative_reply}", state, eos_token=eos_token, stopping_strings=stopping_strings):
File "D:\software\GPT\oobabooga-windows\text-generation-webui\modules\text_generation.py", line 181, in generate_reply
input_ids = encode(question, add_bos_token=state['add_bos_token'], truncation_length=get_max_prompt_length(state))
File "D:\software\GPT\oobabooga-windows\text-generation-webui\modules\text_generation.py", line 31, in encode
input_ids = shared.tokenizer.encode(str(prompt), return_tensors='pt', add_special_tokens=add_special_tokens)
AttributeError: 'NoneType' object has no attribute 'encode'
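Both tracebacks in this thread fail at the same place, so the tokenizer is never being loaded alongside the model. One way to confirm this outside the web UI is to try loading the tokenizer directly with transformers from the same installer_files\env environment (a diagnostic sketch under that assumption; the folder path comes from the logs above):

from transformers import AutoTokenizer

try:
    tok = AutoTokenizer.from_pretrained(r"models\vicuna-13b-GPTQ-4bit-128g")
    print("Tokenizer loaded:", type(tok).__name__)
except Exception as e:
    # If this fails, the model folder is missing tokenizer files, and the
    # web UI will keep hitting the same 'NoneType' error when encoding.
    print("Tokenizer failed to load:", e)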