---
datasets:
- bigcode/the-stack
---
hexcoder
This model further trains the base SantaCoder model on all of the R and RMarkdown code in The Stack, for 6 epochs on 512-token snippets. While there isn't that much R code in The Stack (far less than Python or Java), this should at least give the model some R skills.
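A minimal sketch of the data side, assuming the Hugging Face `datasets` and `transformers` libraries; the `data_dir` names for the R and RMarkdown subsets of The Stack are assumptions, not taken from the actual training script:

```python
# Sketch: stream the R and RMarkdown subsets of The Stack and chunk them
# into fixed 512-token training examples. data_dir names are assumed.
from datasets import load_dataset, interleave_datasets
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder")

r_code = load_dataset("bigcode/the-stack", data_dir="data/r",
                      split="train", streaming=True)
rmd_code = load_dataset("bigcode/the-stack", data_dir="data/rmarkdown",
                        split="train", streaming=True)
stack_r = interleave_datasets([r_code, rmd_code]).select_columns(["content"])

def chunk(batch, max_len=512):
    """Concatenate tokenized files and split into fixed-length pieces."""
    ids = []
    for text in batch["content"]:
        ids.extend(tokenizer(text).input_ids + [tokenizer.eos_token_id])
    pieces = [ids[i:i + max_len] for i in range(0, len(ids), max_len)]
    return {"input_ids": [p for p in pieces if len(p) == max_len]}

train_ds = stack_r.map(chunk, batched=True, remove_columns=["content"])
```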
Because I am on a limited compute budget, I trained the model on 512-token pieces of R code, which means it will do poorly on longer pieces of code. I will now fine-tune the base model on 2048-token-context pieces of R code in a parameter-efficient way for another 2 epochs, to ensure acceptable performance beyond 512 tokens.
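The parameter-efficient stage could look roughly like the sketch below, assuming LoRA via the `peft` library; the rank, alpha, and target module names are illustrative, and the attention projection name should be checked against the checkpoint's actual modules:

```python
# Sketch of a LoRA setup for the longer-context stage (hyperparameters
# and module names are assumptions, not the actual configuration).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "bigcode/santacoder", trust_remote_code=True)

lora_config = LoraConfig(
    r=16,                       # low-rank adapter dimension
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # fused attention projection (assumed name)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights train

# Training then proceeds as before, but on 2048-token chunks,
# e.g. chunk(batch, max_len=2048) with the chunking function above.
```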
Then I intend to instruction-tune the model on all Stack Overflow questions and answers tagged 'r' from 2011 to 2016, presenting each question as `<|human|>` and the best answer as `<|assistant|>`. This will teach the model that it is expected to produce an answer to a user's question about R.
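The formatting step might look something like this; the exact whitespace around the special tokens and the shape of the Stack Overflow records are assumptions:

```python
# Sketch of the instruction-tuning format. The <|human|>/<|assistant|>
# tokens would also need to be added to the tokenizer as special tokens.
def format_example(question: str, best_answer: str,
                   eos: str = "<|endoftext|>") -> str:
    """Render one Q&A pair in the <|human|>/<|assistant|> turn structure."""
    return f"<|human|>{question}<|assistant|>{best_answer}{eos}"

print(format_example(
    "How do I select rows of a data.frame where a column equals a value?",
    "Use df[df$x == value, ], or dplyr::filter(df, x == value).",
))
```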
The intended outcome is a reasonably adequate model that can answer basic R user questions, and more broadly an evaluation of the data sources and training needed to produce great open-source code-generating models for R.