---
inference: false
---

This is a model checkpoint for "Should You Mask 15% in Masked Language Modeling?" (code).

The original checkpoint is available at princeton-nlp/efficient_mlm_m0.50. Unfortunately, that checkpoint depends on code that is not part of the official transformers library. Additionally, it contains unused weights due to a bug.

This checkpoint fixes the unused-weights issue and uses the `RobertaPreLayerNorm` model from the transformers library, so it can be loaded without any custom code.
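
A minimal loading sketch with `transformers` (the repo id `andreasmadsen/efficient_mlm_m0.50` is assumed from the upload path; adjust it if the checkpoint is hosted elsewhere):

```python
import torch
from transformers import AutoTokenizer, RobertaPreLayerNormForMaskedLM

repo_id = "andreasmadsen/efficient_mlm_m0.50"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = RobertaPreLayerNormForMaskedLM.from_pretrained(repo_id)

# Predict the most likely token at the <mask> position.
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```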