---
tags:
- finetuned
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- instruct
- text-generation
- conversational
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- finetune
- chatml
model-index:
- name: Tanuki-7B-v0.1
  results: []
license: apache-2.0
base_model: NeuralNovel/Tanuki-7B-v0.1
datasets:
- NeuralNovel/Neural-Story-v1
- NeuralNovel/Creative-Logic-v1
language:
- en
quantized_by: Suparious
pipeline_tag: text-generation
model_creator: NeuralNovel
model_name: Tanuki 7B 0.1
inference: false
library_name: transformers
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
---

# NeuralNovel/Tanuki-7B-v0.1 AWQ

- Model creator: [NeuralNovel](https://huggingface.co/NeuralNovel)
- Original model: [Tanuki-7B-v0.1](https://huggingface.co/NeuralNovel/Tanuki-7B-v0.1)

![Neural-Story](https://i.ibb.co/FbBrb5H/OIG-4-Ffmd-Jvny-ZJvn.jpg)

## Model Details

This repository contains AWQ 4-bit quantized weights of **NeuralNovel/Tanuki-7B-v0.1**, a model designed to generate instructive and narrative text, with a specific focus on roleplay and short storytelling. The fine-tune has been tailored to provide detailed and creative responses in the context of complex narratives.

It is a full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2, released under the Apache 2.0 license and suitable for commercial or non-commercial use. The model was fine-tuned on the Neural-Story-v1 and Creative-Logic-v1 datasets.

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Tanuki-7B-v0.1-AWQ"
system_message = "You are Tanuki, incarnated as a powerful AI."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Convert prompt to tokens using the ChatML template
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
    "You walk one mile south, one mile west and one mile north. "\
    "You end up exactly where you started. Where are you?"

tokens = tokenizer(
    prompt_template.format(system_message=system_message, prompt=prompt),
    return_tensors='pt'
).input_ids.cuda()

# Generate output, streaming tokens to stdout as they are produced
generation_output = model.generate(
    tokens,
    streamer=streamer,
    max_new_tokens=512
)
```
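The `TextStreamer` prints the reply token by token as it is generated. If you also want the finished reply as a plain Python string (to log or post-process it), you can decode the returned ids. A minimal sketch, reusing `tokens`, `tokenizer`, and `generation_output` from the example above, and assuming `generate()` returns the prompt tokens followed by the completion (the default for decoder-only Transformers models):

```python
# generate() returns the prompt ids followed by the new tokens, so slice
# off the prompt before decoding to keep only the assistant's reply.
reply_ids = generation_output[0][tokens.shape[-1]:]
reply = tokenizer.decode(reply_ids, skip_special_tokens=True)
print(reply)
```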
### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
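Repositories tagged `chatml` usually ship this template in the tokenizer config as well. If this tokenizer's `chat_template` is set (an assumption worth verifying for this repo), you can let Transformers render the prompt instead of formatting the string by hand. A minimal sketch, assuming `tokenizer` was loaded as in the example above:

```python
messages = [
    {"role": "system", "content": "You are Tanuki, incarnated as a powerful AI."},
    # Hypothetical user turn, purely for illustration.
    {"role": "user", "content": "Tell me a short story about a clever fox."},
]

# Renders the conversation with the ChatML markers shown above and appends
# "<|im_start|>assistant" so generation begins at the assistant turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
```

The rendered string can then be tokenized and passed to `model.generate()` exactly as in the example above.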