
Norocetacean-20b-10k

This is Jeb Carter's Psyonic-Cetacean-20B, merged with Undi's no_robots-alpaca LoRA and extended to a 10240-token context length via YaRN.

The overall goal of this merge was to create a model with the unique brain of Psyonic-Cetacean and the human voice of the no_robots dataset, while remaining capable at long contexts.

The prompt format is Alpaca. You can use the standard template shown below, but for best results I strongly recommend customizing the system prompt to your specific needs.

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{YOUR MESSAGE HERE}

### Response:
{BOT MESSAGE HERE}
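
If you are assembling prompts programmatically, here is a minimal Python sketch of the template above; build_alpaca_prompt and the default system prompt string are illustrative names you can adapt, not part of the model itself.

def build_alpaca_prompt(
    instruction: str,
    system_prompt: str = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request."
    ),
) -> str:
    # Assemble an Alpaca-style prompt; swap system_prompt for your own instructions.
    return f"{system_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_alpaca_prompt("Summarize the plot of Moby-Dick in two sentences.")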

Misc. information

  • BOS token is <s>
  • EOS token is </s>
  • Native context length is 10240 via YaRN (original context length was 4096); see the loading sketch after this list
  • Base model is Llama 2
  • Due to the inclusion of Orca-2-13b, the model is subject to the terms of the Microsoft Research License
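
As a minimal loading sketch (assuming the Hugging Face transformers library, PyTorch, and enough GPU memory for a 20B model in FP16; the generation settings are placeholders):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ddh0/Norocetacean-20b-10k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in FP16
    device_map="auto",
)

prompt = build_alpaca_prompt("Write a short story about a whale.")  # from the sketch above
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))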

Thanks

  • Thanks to Jeb Carter for Psyonic-Cetacean-20B
  • Thanks to Undi for the no_robots-alpaca LoRA