Test merge. An attempt at a model that is good at RP, ERP, and general tasks with 128k context. Every model in this merge uses Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context instead of the regular Mistral YaRN 128k. The reason: I believe Epiculous merged it with Mistral Instruct v0.2 so that the first 32k of context works as well as possible before YaRN scaling takes over from 32k to 128k. If that's not the case, it's sad D:, or I got something wrong.
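
If you want to try the full-precision weights, here is a minimal sketch with the standard transformers API. The repo name is taken from this card; the dtype, device placement, and the chat-template call are assumptions rather than an official recipe (if the tokenizer ships without a chat template, prompt it in plain Mistral `[INST] ... [/INST]` format instead):

```python
# Minimal sketch: load the merged 4x7B MoE model and generate.
# Assumes enough VRAM/RAM for bf16 weights; quantized versions are linked below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a rainy cyberpunk city."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The long-context (YaRN) settings should come from the model config itself, so no extra RoPE flags should be needed; in practice the KV-cache memory is what limits how far toward 128k you can actually go.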

Quants:

* Exl2, 4.0 bpw
* GGUF
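
For the GGUF build, a sketch with llama-cpp-python is below. The filename and quant level are placeholders (check the GGUF repo for the actual files), the prompt format is a guess, and `n_ctx` is set well below 128k because memory, not the model, is usually the limit:

```python
# Sketch: run a GGUF quant locally with llama-cpp-python.
# The model_path is hypothetical; download the actual .gguf from the GGUF repo first.
from llama_cpp import Llama

llm = Llama(
    model_path="./NeuralKunoichi-EroSumika-4x7B-128k.Q4_K_M.gguf",  # placeholder filename
    n_ctx=32768,        # raise toward 128k only if you have the RAM/VRAM for the KV cache
    n_gpu_layers=-1,    # offload all layers to GPU if possible
)

# Prompt format is an assumption (Mistral-style [INST]); adjust to your preferred template.
out = llm(
    "[INST] Write a short scene set in a rainy cyberpunk city. [/INST]",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```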

Here is the "family tree" of this model. I'm not writing out the full model names because they're long af:

* NeuralKunoichi-EroSumika 4x7B 128k
    * (1) Kunocchini-7b-128k
    * (2) Mistral-Instruct-v0.2-128k
        * Mistral-7B-Instruct-v0.2
        * Fett-128k
    * (3) Erosumika-128k
        * Erosumika 7B
        * Fett-128k
    * (4) Mistral-NeuralHuman-128k
        * Fett-128k
        * Mistral-NeuralHuman
            * Mistral_MoreHuman
            * Mistral-Neural-Story
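
The four numbered branches end up as the four experts of a Mixtral-style MoE, which you can sanity-check from the config without downloading the weights. This is a sketch; the attribute names assume the standard Mixtral config that mergekit MoE merges produce:

```python
# Sketch: confirm the 4-expert MoE layout and the advertised context length.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k")
print(cfg.num_local_experts)        # expected: 4 (one per branch above)
print(cfg.num_experts_per_tok)      # how many experts are active per token
print(cfg.max_position_embeddings)  # the advertised long-context window
print(getattr(cfg, "rope_theta", None))  # RoPE base, if set
```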

Models used: see the "family tree" above.

Model size: 24.2B params, BF16 (safetensors).