---
{}
---
# Model Card for Kimiko_13B
This is my new Kimiko model, fine-tuned from LLaMA2-13B for instruction following and high-quality roleplay.
## Model Details
### Model Description
- **Developed by:** nRuaif
- **Model type:** Decoder-only
- **License:** CC BY-NC-SA
- **Finetuned from model:** LLaMA2-13B
### Model Sources
- **Repository:** https://github.com/OpenAccess-AI-Collective/axolotl
## Uses
### Direct Use
This model was trained on 3k examples of instruction data and high-quality roleplay. For best results, follow this format:
```
<<HUMAN>>
How to do abc

<<AIBOT>>
Here is how
```
Or with system prompting for roleplay:
```
<<SYSTEM>>
A's Persona:
B's Persona:
Scenario:
Add some instruction here on how you want your RP to go.
```
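As a concrete illustration, here is a minimal inference sketch with 🤗 Transformers that builds a prompt in this format; the repo id `nRuaif/Kimiko_13B` and the generation settings are assumptions for the example, not part of this card.
```python
# Minimal inference sketch (assumed repo id and generation settings).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nRuaif/Kimiko_13B"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Build a prompt in the instruction format described above.
prompt = "<<HUMAN>>\nHow to do abc\n\n<<AIBOT>>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```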
## Bias, Risks, and Limitations
All biases of this model are inherited from LLaMA2, with the exception of an additional NSFW bias introduced by the roleplay finetuning data.
## Training Details
### Training Data
3,000 examples from LimaRP and LIMA, plus 1,000 good instructions sampled from Airoboros.
### Training Procedure
The model was trained on a single L4 GPU on GCP, costing a whopping 2.5 USD.
#### Training Hyperparameters
- **Training regime:** QLoRA, 3 epochs, learning rate 0.0002, full 4096-token context
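The run itself used axolotl (see Model Sources). Purely as a sketch, the same hyperparameters map onto a peft/bitsandbytes QLoRA setup like the one below, where the LoRA rank, alpha, and target modules are illustrative assumptions not stated in this card.
```python
# QLoRA setup sketch; only epochs, learning rate, and context length
# come from this card -- the adapter shape is an assumption.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                 # 4-bit base weights: the "Q" in QLoRA
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

lora_config = LoraConfig(              # assumed rank/alpha/target modules
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="kimiko-13b-qlora",
    num_train_epochs=3,                # from this card
    learning_rate=2e-4,                # from this card
)
# The full 4096-token context from this card is enforced at
# tokenization time rather than through TrainingArguments.
```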
#### Speeds, Sizes, Times
Training took 18 hours with xformers enabled.
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** L4 GPU with 12 vCPUs and 48 GB RAM
- **Hours used:** 5
- **Cloud Provider:** GCP
- **Compute Region:** US
- **Carbon Emitted:** 0.5 kg CO2eq
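For reference, the calculator's estimate is essentially energy consumed times regional carbon intensity. The sketch below reproduces it with an assumed ~72 W board power for the L4 and an assumed ~0.4 kg CO2eq/kWh US grid average; neither figure comes from this card.
```python
# Back-of-the-envelope carbon estimate (assumed power and grid figures).
GPU_POWER_KW = 0.072    # assumption: NVIDIA L4 TDP of ~72 W
HOURS = 18              # wall-clock training time reported above
CARBON_INTENSITY = 0.4  # assumption: ~0.4 kg CO2eq per kWh (US average)

energy_kwh = GPU_POWER_KW * HOURS
emissions_kg = energy_kwh * CARBON_INTENSITY
print(f"{emissions_kg:.2f} kg CO2eq")  # ~0.52 kg, in line with the figure above
```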