cambridge-climb/objective_curriculum-roberta_pre_layer_norm-model
This repository is Cambridge University NLP's submission to the 2023 BabyLM Challenge, the baby language modelling shared task hosted at the CoNLL workshop.
Our approach experiments with three variants of cognitively motivated curriculum learning (vocabulary, data, and objective curricula) and analyses their effect on the model's performance on linguistic evaluation tasks; see the sketch below for the general idea.
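The snippet below is a minimal, illustrative sketch of curriculum learning in general, not the exact schedules used in this submission: examples are sorted by a difficulty score and the training pool grows from easy to hard over the course of training. The pacing function, the length-based difficulty proxy, and all names here are assumptions for illustration only.

```python
# Illustrative data-curriculum sketch (NOT the exact schedules of this model):
# examples are scored for difficulty, and the pool of available training data
# grows from easy to hard as training progresses.
from typing import List, Sequence


def competence(step: int, total_steps: int, initial: float = 0.1) -> float:
    """Fraction of the difficulty-sorted data available at a given step.

    Simple linear pacing function; a real curriculum may use another schedule.
    """
    return min(1.0, initial + (1.0 - initial) * step / total_steps)


def curriculum_pool(examples: Sequence[str],
                    difficulty: Sequence[float],
                    step: int,
                    total_steps: int) -> List[str]:
    """Return the subset of examples the model is allowed to see at `step`."""
    order = sorted(range(len(examples)), key=lambda i: difficulty[i])
    cutoff = max(1, int(competence(step, total_steps) * len(examples)))
    return [examples[i] for i in order[:cutoff]]


if __name__ == "__main__":
    sents = ["the cat sat .",
             "dogs bark loudly at night .",
             "quantum chromodynamics governs the strong interaction ."]
    # Toy difficulty proxy: sentence length in tokens.
    diffs = [len(s.split()) for s in sents]
    for step in (0, 500, 1000):
        print(step, curriculum_pool(sents, diffs, step, total_steps=1000))
```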
Overall, we find that several curriculum learning settings outperform our baseline on linguistic evaluation tasks. We moreover find that careful selection of model architecture and training hyper-parameters yields substantial improvements over the default baselines provided by the BabyLM Challenge.
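A minimal usage sketch for this checkpoint, assuming it follows the standard Hugging Face `transformers` layout and that the tokenizer is stored alongside the model weights; if the tokenizer lives in a separate repository, point `AutoTokenizer` there instead. The example sentence and the top-5 printout are illustrative only.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

repo_id = "cambridge-climb/objective_curriculum-roberta_pre_layer_norm-model"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMaskedLM.from_pretrained(repo_id)
model.eval()

# Score a masked token as a quick sanity check.
text = f"The child picked up the {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids))
```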