|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- stingning/ultrachat |
|
- kaist-ai/CoT-Collection |
|
- mesolitica/google-translate-commitpackft |
|
- Wanfq/Explore_Instruct_Rewriting_32k |
|
- Wanfq/Explore_Instruct_Rewriting_10k |
|
- Wanfq/Explore_Instruct_Brainstorming_16k |
|
- xiyuez/red-dot-design-award-product-description |
|
--- |
|
|
|
# RWKV v4 7B world model |
|
Finetuned with UltraChat, CoT, some novel instruction data, CommitPackFT, and more.
|
|
|
Trained on the full UltraChat and CoT datasets, about 3B tokens in total.
|
|
|
|
|
# Contributor |
|
[@JL-er](https://huggingface.co/JL-er) |
|
[@Remixa](https://huggingface.co/Remixa) |
|
|
|
|
|
# Design of experiment |
|
This model initially lost its multi-turn chat ability, because it was tuned on the whole UltraChat dataset.
|
|
|
To restore it, I continued tuning on multi-turn datasets covering two aspects:
|
|
|
1. Role play
|
|
|
2. Novel multi-turn instructions
|
|
|
# Training details |
|
[wandb.ai](https://wandb.ai/one-/one-rwkv-64k) |
|
|
|
# Cases
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/_1dJo549ldgX6q0JUwC6c.jpeg) |
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/7969wbHaJpBq2n6xvfC7C.jpeg) |
|
|
|
# Usage |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/cGDF6b4-x_9rcwMdl1KPp.png) |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/hUxTVgjLBMcFqxQX9HoxL.png) |
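The screenshots above show the model in a chat interface. As a minimal sketch in code, the snippet below builds a prompt in the `User:`/`Assistant:` turn format commonly used by RWKV World models (an assumption here, not confirmed by this card); the actual model-loading call from the `rwkv` pip package is left as comments, since it requires the downloaded checkpoint, and the model path shown is a placeholder.

```python
# Sketch only. Assumes RWKV World models expect a "User:/Assistant:" chat prompt;
# the weight-loading part is commented out because it needs the 7B checkpoint on disk.

def build_prompt(turns):
    """Join (user, assistant) turn pairs into a World-style chat prompt."""
    parts = []
    for user, assistant in turns:
        parts.append(f"User: {user}\n\nAssistant: {assistant}\n\n")
    return "".join(parts)

# Leave the last turn open with a trailing "Assistant:" so the model completes it.
prompt = build_prompt([("Hello!", "Hi, how can I help you?")]) + "User: Write a haiku.\n\nAssistant:"

# To actually run inference (requires `pip install rwkv` and the model weights):
# from rwkv.model import RWKV
# from rwkv.utils import PIPELINE
# model = RWKV(model="path/to/model.pth", strategy="cuda fp16")  # placeholder path
# pipeline = PIPELINE(model, "rwkv_vocab_v20230424")
# print(pipeline.generate(prompt, token_count=200))
```

Because the prompt keeps earlier turns in context, this format also exercises the multi-turn ability the continued tuning aimed to restore.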
|
|
|
|