---
tags:
- rocm
- amd-gpus
- amd-ai
- rocm-ai
- rocm-rwkv
- 3B-rwkv
---
|
3B rocm-rwkv .pth checkpoint record.
|
- rwkv-final-chnk5.pth: 3B rocm-rwkv model trained on SlimPajama chunks 1-5, reaching a loss of 2.456.
|
- rwkv-final-chnk17.pth: 3B rocm-rwkv model trained on SlimPajama chunks 1-10 for the first epoch, with additional training on chunks 1-7 after the first epoch, reaching a loss of 2.281.
|
- rwkv-code39-16012024.pth: 3B rocm-rwkv model trained on SlimPajama chunks 1-10 for the first epoch, with additional training on chunks 1-8 after the first epoch, plus a small amount of code. This checkpoint has a loss of 1.174 on code alone and 2.26 on text.
|
- rwkv-HHMIX-63x1-47-29012024.pth: 3B rocm-rwkv model starting from rwkv-code39-16012024.pth, further trained on a mix of multilingual and code data. This model has a loss of 2.065 on the code+multilingual dataset.
|
- rwkv-coder-63x1-104-29012024.pth: 3B rocm-rwkv model starting from rwkv-HHMIX-63x1-47-29012024.pth, further trained on additional code (71.21 Gtokens). This model has a loss of 1.090 on the code dataset.
|
- rwkv-final_HHMIX_chuk3.pth: 3B rocm-rwkv model starting from rwkv-coder-63x1-104-29012024.pth, further trained on a mix of multilingual and code data. This model has a loss of 1.836 on the code+multilingual dataset.
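
To sanity-check any of the checkpoints listed above, a minimal sketch along these lines can be used. It assumes the .pth files are ordinary PyTorch state dicts (the usual format for RWKV checkpoints); the filename below is illustrative.

```python
# Minimal sketch: inspect one of the checkpoints listed above.
# Assumes the .pth file is a plain PyTorch state dict; the
# filename here is illustrative.
import torch

state_dict = torch.load("rwkv-final-chnk17.pth", map_location="cpu")

# List each parameter tensor with its shape and dtype.
for name, tensor in state_dict.items():
    print(f"{name}: shape={tuple(tensor.shape)} dtype={tensor.dtype}")

# Rough parameter count; should come out near 3B for these models.
n_params = sum(t.numel() for t in state_dict.values())
print(f"total parameters: {n_params / 1e9:.2f}B")
```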