---
tags:
- game
pretty_name: The Chess Dataset
---
# Chess
> Recent advances in artificial intelligence (AI) highlight the reasoning and planning capabilities of generalist machine learning (ML) models. These generic capabilities can be strengthened further by datasets that target them directly when training foundation models of various kinds. This research initiative has generated extensive synthetic datasets from complex games (chess, Rubik's Cube, and mazes) to study how such data can advance these critical generic skills in AI models.
This dataset contains 3.2 billion games, equating to approximately 608 billion individual moves.
It was generated through Stockfish self-play on Fugaku, with varied initial moves added to expand opening diversity.
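The generation process can be reproduced at small scale. Below is a minimal, hedged sketch of one self-play game using python-chess and a local Stockfish binary; the engine path, time limit, and opening seed are illustrative assumptions, not the settings used to build this dataset.

```python
# Sketch of Stockfish self-play with a seeded opening (assumes `stockfish` is on PATH).
import chess
import chess.engine

def selfplay_game(engine_path="stockfish", opening_moves=None, move_time=0.01):
    """Play one game of Stockfish against itself, optionally seeded with opening moves."""
    board = chess.Board()
    # Seed the game with initial moves to diversify openings, as described above.
    for san in opening_moves or []:
        board.push_san(san)
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        while not board.is_game_over():
            result = engine.play(board, chess.engine.Limit(time=move_time))
            board.push(result.move)
    outcome = board.outcome()
    return {
        "Moves": [m.uci() for m in board.move_stack],
        "Termination": outcome.termination.name,  # e.g. CHECKMATE
        "Result": outcome.result(),               # e.g. "1-0"
    }

if __name__ == "__main__":
    game = selfplay_game(opening_moves=["e4", "e5"])
    print(game["Termination"], game["Result"], len(game["Moves"]), "moves")
```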
Each game has three columns: 'Moves', 'Termination', and 'Result' (see the loading sketch after this list).
- 'Moves': the recorded chess moves of the whole game.
- 'Termination': how the game ended, e.g. CHECKMATE, INSUFFICIENT_MATERIAL, etc.
  - See https://python-chess.readthedocs.io/en/latest/core.html#chess.Outcome.termination for the full list of termination reasons.
- 'Result': the result of the game: 1-0, 1/2-1/2, or 0-1.
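Records can be inspected and replayed with python-chess. The sketch below is a hedged example: it assumes the data loads with the Hugging Face `datasets` library and that 'Moves' is a space-separated move string; the repository id and the move notation (UCI vs. SAN) are placeholders to verify against the actual files.

```python
# Load one record and replay its moves (repo id below is a placeholder, not the real dataset name).
import chess
from datasets import load_dataset

ds = load_dataset("your-org/chess-selfplay", split="train", streaming=True)  # placeholder repo id
record = next(iter(ds))

board = chess.Board()
for token in record["Moves"].split():
    # Try UCI first, fall back to SAN; check the actual notation used in the data.
    try:
        board.push_uci(token)
    except ValueError:
        board.push_san(token)

print(record["Termination"], record["Result"])
print(board)  # final position after replaying the recorded moves
```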
### Call for Collaboration
We invite interested researchers and ML practitioners to explore these datasets' potential. Whether training GPT models from scratch or fine-tuning pre-existing models, we encourage experimentation with various pre-training and fine-tuning strategies, using these game-based datasets on their own or as a complement to existing large-scale corpora.
Our team is prepared to assist in securing the necessary GPU resources for these explorations. We are particularly interested in collaborators who would pre-train small- to medium-scale models on our game data, subsequently transition to standard text-based training, and then perform comparative analyses against models of similar architecture trained exclusively on text.
In conclusion, this initiative marks a significant step toward intricate problem-solving and strategic planning in AI, and we extend an open invitation to the research community for collaborative advancement in this domain.