Trained on a flavorful melange of the WizardLM, Airoboros, and Wizard Vicuna datasets.
This model was trained using linear and NTK-aware RoPE scaling in tandem. When loading, make sure `compress_pos_emb` (or `scale`, depending on the loader) is set to 2 and `alpha_value` is set to 4. Both values must be set.
Context lengths up to 8192 should work reliably. The model will probably stay coherent into the ~12k range, but I have not tested that.
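For example, with the ExLlama loader (the same values text-generation-webui exposes as `--compress_pos_emb` and `--alpha_value`), loading might look roughly like the sketch below. This is a minimal, untested sketch; the paths are placeholders and the attribute names assume ExLlama's `ExLlamaConfig`:

```python
# Minimal sketch (untested): loading with the ExLlama loader and the RoPE
# settings described above. Paths are placeholders.
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

config = ExLlamaConfig("/path/to/model/config.json")
config.model_path = "/path/to/model/model.safetensors"
config.max_seq_len = 8192       # extended context window
config.compress_pos_emb = 2.0   # linear RoPE scaling ("scale" in some loaders)
config.alpha_value = 4.0        # NTK-aware RoPE scaling alpha

model = ExLlama(config)
tokenizer = ExLlamaTokenizer("/path/to/model/tokenizer.model")
cache = ExLlamaCache(model)
generator = ExLlamaGenerator(model, tokenizer, cache)
```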
Prompt format is Vicuna 1.1:
<whatever nonsense system prompt you want>
USER: ...
ASSISTANT: ...
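For illustration, a small helper (hypothetical, not from any particular library) that assembles a prompt in this format, leaving the assistant turn open for the model to complete:

```python
# Hypothetical helper: build a Vicuna 1.1-style prompt from a system
# message and a user turn; the model completes the ASSISTANT turn.
def build_prompt(system: str, user: str) -> str:
    return f"{system}\nUSER: {user}\nASSISTANT:"

print(build_prompt(
    "You are a helpful assistant.",
    "Summarize the plot of Moby-Dick in two sentences.",
))
```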