---
license: apache-2.0
datasets:
- THUDM/LongBench
- tasksource/icl-symbol-tuning-instruct
- quac
- togethercomputer/Long-Data-Collections
- FinchResearch/OpenPlatypus-Alpaca
---

### RWKV v5.2 7B experimental model with long-context training; context length will be pushed toward 300k

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/av7cfSeyYhcyuKFa28lpm.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/LWhRlAqMkAHpPGGY62Cwm.png)
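
Below is a minimal inference sketch, assuming the checkpoint has been downloaded locally and the `rwkv` pip package (a recent version with RWKV-5 support) is installed; the checkpoint file name and the prompt are placeholders, not part of this release.

```python
# Hedged usage sketch for a local RWKV v5 checkpoint via the `rwkv` pip package.
# The .pth file name below is a placeholder; point it at the actual checkpoint file.
import os

# These flags must be set before importing rwkv.model
os.environ["RWKV_JIT_ON"] = "1"
os.environ["RWKV_CUDA_ON"] = "0"  # set to "1" to compile the CUDA kernel for faster inference

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Load the model on GPU in fp16; adjust the strategy string for your hardware (e.g. "cpu fp32")
model = RWKV(model="rwkv-v5.2-7b-longctx.pth", strategy="cuda fp16")  # placeholder file name
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")  # RWKV "world" vocabulary

args = PIPELINE_ARGS(temperature=1.0, top_p=0.3)
prompt = "Question: What does long-context training enable?\n\nAnswer:"
print(pipeline.generate(prompt, token_count=200, args=args))
```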