Uncensored-Frank-Llama-3-8B
The character of Frank Costello in "The Departed" is known for his cunning, boldness, and willingness to talk about anything, regardless of societal norms or restrictions. Frank, an uncensored model, draws inspiration from these qualities to offer a platform where users can discuss a wide array of topics without fear of censorship or restrictions. Frank aims to push boundaries and encourage candid conversations. With Frank you can have unfiltered discussions on a multitude of topics, from politics and controversial issues to personal experiences and sensitive subjects. It was trained on around 150,000 sets of conversations, each set containing 10-15 conversations. I will not release this data.
This is a fully fine-tuned model.
Warning
An uncensored model has few or no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object. Publishing anything this model generates is the same as publishing it yourself. I am not responsible for what you generate using this model.
Training:
The entire dataset was trained on 4 x A100 80GB GPUs. Training took around 6 hours for 3 epochs. The Axolotl and DeepSpeed codebases were used for training. The base model is Meta's Llama-3-8B.
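The exact Axolotl configuration is not published. Purely as an illustration of a full fine-tune like the one described above, here is a minimal sketch using the plain Hugging Face Trainer with a DeepSpeed config (this is not the author's Axolotl setup; the dataset file, DeepSpeed JSON, and hyperparameters are placeholders):

```python
# Minimal sketch of a full fine-tune of Llama-3-8B with the Hugging Face Trainer.
# NOT the author's Axolotl/DeepSpeed setup; paths and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)

base = "meta-llama/Meta-Llama-3-8B"                 # base model, per the card
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical dataset of ChatML-formatted conversations in a "text" column.
data = load_dataset("json", data_files="conversations.jsonl")["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=4096),
                remove_columns=data.column_names)

args = TrainingArguments(
    output_dir="frank-llama-3-8b",
    num_train_epochs=3,                             # 3 epochs, as stated above
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    bf16=True,
    deepspeed="ds_zero3.json",                      # placeholder DeepSpeed ZeRO-3 config
)

Trainer(model=model, args=args, train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)).train()
```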
GGUF & Exllama
GGUF: Link
Exllama V2: Link
Special thanks to Bartowski for quantizing this model.
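If you run the GGUF quantization locally, a minimal sketch with llama-cpp-python is shown below. The model file name is a placeholder; use whichever quant file you downloaded.

```python
# Minimal sketch: running a GGUF quant of this model with llama-cpp-python.
# The model_path below is a placeholder; point it at the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Uncensored-Frank-Llama-3-8B-Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,
    chat_format="chatml",  # the card states this model uses the ChatML format
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "You are Frank. A helpful, unbiased, and uncensored assistant."},
        {"role": "user", "content": "Introduce yourself."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```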
Example Prompt:
This model uses the ChatML prompt format:
<|im_start|>system
You are Frank. A helpful, unbiased, and uncensored assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
You can modify the above prompt as per your requirement. One example would be:
This is a conversation with your uncensored AI assistant Frank. Frank can discuss various topics with you at length, such as ---, ---, ---.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
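A minimal usage sketch with transformers is shown below. It assumes the repository's tokenizer ships a ChatML chat template matching the format above; if it does not, build the prompt string manually instead of calling apply_chat_template.

```python
# Minimal sketch: loading the model with transformers and applying the ChatML template.
# Assumes the tokenizer's chat template matches the ChatML format shown above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ajibawa-2023/Uncensored-Frank-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16,
                                             device_map="auto")

messages = [
    {"role": "system",
     "content": "You are Frank. A helpful, unbiased, and uncensored assistant."},
    {"role": "user", "content": "Introduce yourself."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```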
I want to say special thanks to the open-source community for helping and guiding me to better understand AI and model development.
Thank you for your love & support.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 62.24 |
| AI2 Reasoning Challenge (25-Shot) | 59.64 |
| HellaSwag (10-Shot) | 80.16 |
| MMLU (5-Shot) | 63.08 |
| TruthfulQA (0-shot) | 52.75 |
| Winogrande (5-shot) | 73.16 |
| GSM8k (5-shot) | 44.66 |