|
--- |
|
datasets: |
|
- anon8231489123/ShareGPT_Vicuna_unfiltered |
|
- ehartford/wizard_vicuna_70k_unfiltered |
|
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered |
|
- QingyiSi/Alpaca-CoT |
|
- teknium/GPT4-LLM-Cleaned |
|
- teknium/GPTeacher-General-Instruct |
|
- metaeval/ScienceQA_text_only |
|
- hellaswag |
|
- openai/summarize_from_feedback |
|
- riddle_sense |
|
- gsm8k |
|
- ewof/code-alpaca-instruct-unfiltered |
|
language: |
|
- en |
|
library_name: transformers |
|
pipeline_tag: text-generation |
|
license: other |
|
inference: false |
|
--- |
|
<!-- header start --> |
|
<!-- 200823 --> |
|
<div style="width: auto; margin-left: auto; margin-right: auto"> |
|
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</div> |
|
<div style="display: flex; justify-content: space-between; width: 100%;"> |
|
<div style="display: flex; flex-direction: column; align-items: flex-start;"> |
|
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> |
|
</div> |
|
<div style="display: flex; flex-direction: column; align-items: flex-end;"> |
|
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> |
|
</div> |
|
</div> |
|
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> |
|
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> |
|
<!-- header end --> |
|
|
|
# Manticore 13B Chat Pyg GPTQ
|
|
|
This repo contains 4-bit GPTQ format quantised models of [OpenAccess AI Collective's Manticore Chat 13B](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg).
|
|
|
It is the result of quantising to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
|
|
|
## Repositories available |
|
|
|
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GPTQ). |
|
* [4-bit, 5-bit and 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GGML). |
|
* [OpenAccess AI Collective's original float16 HF format repo for GPU inference and further conversions](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg). |
|
|
|
## How to easily download and use this model in text-generation-webui |
|
|
|
Open the text-generation-webui UI as normal. |
|
|
|
1. Click the **Model tab**. |
|
2. Under **Download custom model or LoRA**, enter `TheBloke/manticore-13b-chat-pyg-GPTQ` (a Python alternative for downloading is sketched after these steps).
|
3. Click **Download**. |
|
4. Wait until it says it's finished downloading. |
|
5. Click the **Refresh** icon next to **Model** in the top left. |
|
6. In the **Model drop-down**: choose the model you just downloaded, `manticore-13b-chat-pyg-GPTQ`. |
|
7. If you see an error in the bottom right, ignore it - it's temporary. |
|
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama` |
|
9. Click **Save settings for this model** in the top right. |
|
10. Click **Reload the Model** in the top right. |
|
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! |
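If you prefer to fetch the model files from Python instead of through the UI (steps 2-4 above), a minimal sketch using the `huggingface_hub` library is below. It downloads the repo into the local Hugging Face cache; the variable names are just examples.

```
# Hypothetical example: download all files from the GPTQ repo via the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="TheBloke/manticore-13b-chat-pyg-GPTQ")
print("Model files downloaded to:", local_path)
```

You can then point text-generation-webui (or other tooling) at that folder.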
|
|
|
## Provided files |
|
|
|
**`Manticore-13B-Chat-Pyg-GPTQ-4bit-128g.no-act-order.safetensors`** |
|
|
|
This file will work with all versions of GPTQ-for-LLaMa, giving it maximum compatibility.
|
|
|
It was created without `--act-order` to ensure compatibility with all UIs out there. |
|
|
|
* `Manticore-13B-Chat-Pyg-GPTQ-4bit-128g.no-act-order.safetensors` |
|
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches |
|
* Works with text-generation-webui one-click-installers |
|
* Parameters: Groupsize = 128. No act-order. |
|
* Command used to create the GPTQ: |
|
``` |
|
# Quantise to 4-bit with group size 128, using the wikitext2 calibration set (no --act-order)
python llama.py /workspace/models/openaccess-ai-collective_manticore-13b-chat-pyg wikitext2 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/manticore-pyg/gptq/Manticore-13B-Chat-Pyg-GPTQ-4bit-128g.no-act-order.safetensors
|
``` |
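To use this file directly from Python code (outside text-generation-webui), one option is the AutoGPTQ library. The snippet below is an untested sketch, not an official recipe: the `model_basename`, device string and generation settings are assumptions, and the quantisation parameters simply mirror the file described above (4-bit, group size 128, no act-order).

```
# Hedged sketch: load the no-act-order safetensors file with AutoGPTQ (pip install auto-gptq transformers)
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

repo_id = "TheBloke/manticore-13b-chat-pyg-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename="Manticore-13B-Chat-Pyg-GPTQ-4bit-128g.no-act-order",  # file name without extension
    use_safetensors=True,
    device="cuda:0",
    # Mirrors the parameters used at quantisation time: 4-bit, groupsize 128, no act-order
    quantize_config=BaseQuantizeConfig(bits=4, group_size=128, desc_act=False),
)

prompt = "USER: Write a short poem about llamas.\nASSISTANT:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(inputs=input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```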
|
|
|
<!-- footer start --> |
|
<!-- 200823 --> |
|
## Discord |
|
|
|
For further support, and discussions on these models and AI in general, join us at: |
|
|
|
[TheBloke AI's Discord server](https://discord.gg/theblokeai) |
|
|
|
## Thanks, and how to contribute. |
|
|
|
Thanks to the [chirper.ai](https://chirper.ai) team! |
|
|
|
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
|
|
|
If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
|
|
|
Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
|
|
|
* Patreon: https://patreon.com/TheBlokeAI |
|
* Ko-Fi: https://ko-fi.com/TheBlokeAI |
|
|
|
**Special thanks to**: Aemon Algiz. |
|
|
|
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter |
|
|
|
|
|
Thank you to all my generous patrons and donators!
|
|
|
And thank you again to a16z for their generous grant. |
|
|
|
<!-- footer end --> |
|
# Original Manticore Chat 13B model card |
|
|
|
# Manticore 13B Chat |
|
|
|
Manticore 13B Chat builds on Manticore with new datasets, including a de-duped subset of the Pygmalion dataset. It also removes all Alpaca-style prompts using `###` in favor of chat-only style prompts using `USER:`, `ASSISTANT:`, as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>`, `<|user|>` and `<|model|>` tokens.
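For reference, a plain chat-style prompt therefore looks roughly like this (the exact spacing and newlines are an illustration, not a specification; the Examples section below shows real exchanges):

```
USER: Why is the sky blue?
ASSISTANT:
```

For the pygmalion/metharme format, the `<|system|>`, `<|user|>` and `<|model|>` tokens are concatenated around the persona, user message and model reply respectively; see the linked metharme prompting notes for the authoritative layout.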
|
|
|
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/EqrvvehG) or email [[email protected]](mailto:[email protected]) |
|
|
|
# Training Datasets |
|
|
|
Manticore 13B Chat is a Llama 13B model fine-tuned on the following datasets along with the datasets from the original Manticore 13B. |
|
|
|
**Manticore 13B Chat was trained on 25% of the datasets below. The datasets were merged, shuffled, and then sharded into 4 parts.** |
|
|
|
- de-duped pygmalion dataset, filtered down to RP data |
|
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented |
|
- hellaswag, updated for detailed explanations with 30K+ rows
|
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented |
|
- [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered) |
|
|
|
Manticore 13B |
|
- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duped subset
|
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered) |
|
- [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered) |
|
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT) |
|
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned) |
|
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct) |
|
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split |
|
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K-row subset, instruct augmented for concise responses, derived from the `train` split
|
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses |
|
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization |
|
|
|
Not added from Manticore 13B: |
|
- mmlu - mmlu datasets were not added to this model as the `test` split is used for benchmarks |
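As a rough illustration of the merge, shuffle, and shard-into-4 preparation described above, here is a hypothetical sketch using the Hugging Face `datasets` library with toy stand-in data; the real pipeline was driven by Axolotl, not this code.

```
from datasets import Dataset, concatenate_datasets

# Toy stand-ins for the instruct datasets listed above (illustration only)
ds_a = Dataset.from_dict({"text": [f"example A-{i}" for i in range(100)]})
ds_b = Dataset.from_dict({"text": [f"example B-{i}" for i in range(100)]})

merged = concatenate_datasets([ds_a, ds_b])      # merge
merged = merged.shuffle(seed=42)                 # shuffle
shard_0 = merged.shard(num_shards=4, index=0)    # one of 4 shards (~25% of the rows)
print(len(shard_0), "rows in this shard")
```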
|
|
|
# Shoutouts |
|
|
|
Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models so they are more accessible to all, ehartford for cleaned datasets, and 0x000011b for the RP dataset.
|
# Demo |
|
|
|
Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality. |
|
- https://huggingface.co/spaces/openaccess-ai-collective/manticore-13b-chat-pyg |
|
|
|
## Release Notes |
|
|
|
- https://wandb.ai/wing-lian/manticore-13b-v2/runs/hxr3aiiw |
|
|
|
## Build |
|
|
|
Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB |
|
- 3 epochs taking approximately 8 hours. No further epochs will be released. |
|
- The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/manticore-13b/tree/main/configs). |
|
|
|
## Bias, Risks, and Limitations |
|
Manticore has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). |
|
Manticore was fine-tuned from the base model LLaMA 13B; please refer to its model card's Limitations section for relevant information.
|
|
|
## Examples |
|
|
|
``` |
|
USER: make up a sonnet in the style of shakespeare about a skier skiing in the glades on a bluebird day. |
|
ASSISTANT: A skier doth glide 'neath sun’s glow, His edges sharp to cut the snow, Bluebird sky above, While he doth revel on the slopes below, The cold air biting his nose, But not his spirits, For he is filled with a sense of joy, As he carves through the powdery drifts. His body moves in graceful rhythm, And each turn he makes with ease, He feels like he could ski forever, With no care for the world outside these glades. |
|
``` |
|
|
|
``` |
|
USER: tell me a joke about alpacas |
|
ASSISTANT: Did you hear about the famous alpaca ballet dancer? She was a prima llama. |
|
USER: okay, that is actually pretty good |
|
ASSISTANT: What do you call an alpaca in a bowtie? A llama-cholla! |
|
USER: why is that funny? |
|
ASSISTANT: Because it’s a play on words. “Llama-cholla” sounds like “lame chola,” which is Spanish slang for someone who acts cool or pretentious. So, the joke is implying that alpacas are vain and act cool with their bowties. |
|
``` |
|
|
|
``` |
|
USER: What is 44 + 77? |
|
ASSISTANT: 121 |
|
USER: What is the sum of 33 and 77? |
|
ASSISTANT: 110 |
|
USER: tell me 5 words that start with “ex” and the sum of 33 and 77. |
|
ASSISTANT: exotic, expressive, explicit, experimental, and extravagant. The sum of 33 and 77 is 110. |
|
USER: tell me 3 words that start with “egr” and end in “s” |
|
ASSISTANT: egret, eggplant, and egrets. |
|
``` |
|
- oof |
|
|
|
|
|
``` |
|
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? |
|
ASSISTANT: Sally has a total of 6 siblings, so she has a total of 6 - 2 = 4 sisters. |
|
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? Use multi-step reasoning to explain your answer. |
|
ASSISTANT: Sally has a total of three brothers, so she has a total of three * 2 = six siblings. Her family consists of three boys and six girls in total. This means that Sally has three sisters since there are six female relatives and only three male ones. |
|
USER: But when brothers have sisters, aren’t they shared or the same? |
|
ASSISTANT: No, not every sibling shares their mother. For example, I have two brothers who share one mother but my third brother has a different mother from both of them. |
|
USER: Let’s assume they all share the same mother and father. Now explain with multi-step reasoning. |
|
ASSISTANT: If Sally has three brothers then that means she has six siblings altogether. Each brother has two sisters, which means that there are twelve female relatives in her family - one for each of her brothers and herself. Since there are only six females in the entire family, that means that Sally’s mother has no other children than her own three sons - so she can’t be the mother of any of the sisters. So all those sisters have to belong to one of the brothers, and since they are shared by two brothers, then each brother has a single sister. Therefore, there is only one sister for Sally, making the final answer 1. |
|
``` |
|
- pretty sure that reasoning isn't correct, and the final answer of having 1 sister is a lucky guess |
|
|