---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
An attempt at using BlockMerge_Gradient to get better results.
In addition, LimaRP v3 was used; it is recommended to read its documentation.
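As a rough, conceptual sketch only (this is not the actual BlockMerge_Gradient script; the function and parameter names below are invented for illustration), gradient merging interpolates the two parents' weights layer by layer, with the blend ratio sliding across the layer stack:

```python
def gradient_merge(state_a, state_b, num_layers, start=0.9, end=0.1):
    """Blend two Llama state dicts; model A's share slides from `start` to `end` across layers."""
    merged = {}
    for name, tensor_a in state_a.items():
        tensor_b = state_b[name]
        ratio = start  # non-layer tensors (embeddings, final norm, lm_head) keep the starting ratio
        if ".layers." in name:
            # Parameter names look like "model.layers.12.self_attn.q_proj.weight"
            layer_idx = int(name.split(".layers.")[1].split(".")[0])
            frac = layer_idx / max(num_layers - 1, 1)
            ratio = start + (end - start) * frac
        merged[name] = ratio * tensor_a + (1.0 - ratio) * tensor_b
    return merged

# e.g. merged = gradient_merge(model_a.state_dict(), model_b.state_dict(), num_layers=40)
```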
## Description
This repo contains fp16 files of Amethyst-13B.
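A minimal loading sketch with transformers, assuming the fp16 weights in this repo load directly (the repo id below is a placeholder; substitute this repository's actual id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/Amethyst-13B"  # placeholder for this repository's id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # the repo ships fp16 weights
    device_map="auto",          # requires the accelerate package
)
```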
## Models and LoRAs used
- Xwin-LM/Xwin-LM-13B-V0.1
- The-Face-Of-Goonery/Huginn-13b-FP16
- zattio770/120-Days-of-LORA-v2-13B
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
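A small helper for filling in this template and generating with it (a sketch; `build_prompt` and the sampling settings are illustrative, not prescribed by this repo):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(prompt: str) -> str:
    """Wrap user text in the Alpaca prompt format used by this model."""
    return ALPACA_TEMPLATE.format(prompt=prompt)

# Continuing from the loading sketch above:
text = build_prompt("Write a short scene set in a rainy harbor town.")
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```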
## LimaRP v3 usage and suggested settings
You can follow these instruction format settings in SillyTavern. Replace `tiny` with your desired response length:
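As an illustration of where `tiny` goes (based on the LimaRP v3 documentation; check that documentation for the exact syntax and the full list of supported length keywords), the length modifier is appended to the response header:

```
### Response: (length = tiny)
```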
Special thanks to Sushi.
If you want to support me, you can do so here.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 51.2  |
| ARC (25-shot)         | 62.63 |
| HellaSwag (10-shot)   | 83.17 |
| MMLU (5-shot)         | 55.91 |
| TruthfulQA (0-shot)   | 52.43 |
| Winogrande (5-shot)   | 74.74 |
| GSM8K (5-shot)        | 10.84 |
| DROP (3-shot)         | 18.7  |