Doesn't load in Oobabooga
This is with the llama.cpp loader, but it doesn't load with any of the other loaders either.
File ".../PyEnv/text-generation-webui/modules/ui_model_menu.py", line 201, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
File ".../PyEnv/text-generation-webui/modules/models.py", line 79, in load_model
    output = load_func_map[loader](model_name)
File ".../PyEnv/text-generation-webui/modules/models.py", line 222, in llamacpp_loader
    model_file = list(Path(f'{shared.args.model_dir}/{model_name}').glob('*.gguf'))[0]
IndexError: list index out of range
Yep, this is based on NanoGPT/GPT-2, so it doesn't really have any support for quants or Ooba.
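For context, the IndexError in the traceback comes from the llama.cpp loader globbing the model folder for a GGUF file; a raw NanoGPT-style checkpoint folder (e.g. one containing only a ckpt.pt) has none, so the glob result is empty. A minimal sketch of what happens (the folder name is just a placeholder):

```python
from pathlib import Path

model_dir = "models/my-nanogpt-checkpoint"        # hypothetical folder holding only ckpt.pt
gguf_files = list(Path(model_dir).glob("*.gguf"))
print(gguf_files)                                 # [] -- no .gguf file present
model_file = gguf_files[0]                        # raises IndexError: list index out of range
```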
Thanks, I didn't know Ooba only works with quants. However, the config file is also missing; KoboldAI is complaining about this:
File ".../PyEnv/koboldai-client/runtime/envs/koboldai/lib/python3.8/site-packages/transformers/utils/hub.py", line 380, in cached_file
    raise EnvironmentError(
OSError: models/ does not appear to have a file named config.json. Checkout 'https://huggingface.co/models//None' for available files.
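If the weights really are GPT-2-shaped, a config.json can be generated with transformers; the dimensions below are GPT-2-small defaults and only placeholders, the real values would have to match the checkpoint:

```python
from transformers import GPT2Config

# Placeholder values (GPT-2 small); replace with the checkpoint's actual dimensions.
config = GPT2Config(
    vocab_size=50257,
    n_positions=1024,
    n_embd=768,
    n_layer=12,
    n_head=12,
)
config.save_pretrained("models/my-checkpoint")  # writes models/my-checkpoint/config.json
```

Note that this alone won't make a raw NanoGPT ckpt.pt loadable in KoboldAI; the state dict would also need converting to the Hugging Face GPT-2 weight layout.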
Never mind, I'm looking into running a checkpoint based on GPT-2.
This checkpoint is more research-oriented: it tries to replicate the success of GPT-2 with Phi-like data using NanoGPT, so it's not really plug and play with other platforms. Loading it directly with NanoGPT-style code looks roughly like the sketch below.
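A minimal loading sketch, assuming the checkpoint was saved by NanoGPT's train.py (i.e. a ckpt.pt containing 'model_args' and 'model' keys), that model.py from the nanoGPT repo is importable, and that the run used the GPT-2 BPE tokenizer; the path and prompt are placeholders:

```python
import torch
import tiktoken
from model import GPT, GPTConfig  # model.py from the nanoGPT repo

ckpt_path = "out/ckpt.pt"  # placeholder path to the checkpoint
checkpoint = torch.load(ckpt_path, map_location="cpu")

# Rebuild the model from the hyperparameters stored in the checkpoint.
model = GPT(GPTConfig(**checkpoint["model_args"]))
state_dict = checkpoint["model"]
# Strip the torch.compile prefix if the training run used it.
for k in list(state_dict):
    if k.startswith("_orig_mod."):
        state_dict[k[len("_orig_mod."):]] = state_dict.pop(k)
model.load_state_dict(state_dict)
model.eval()

# Sample with the GPT-2 BPE tokenizer (assumed; adjust if the run used its own vocab).
enc = tiktoken.get_encoding("gpt2")
idx = torch.tensor([enc.encode("Hello")], dtype=torch.long)
with torch.no_grad():
    out = model.generate(idx, max_new_tokens=50, temperature=0.8, top_k=200)
print(enc.decode(out[0].tolist()))
```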