
Paper

SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding

Authors: Moyan Mei, Rohit Sroch

Abstract

With the widespread use of pre-trained language models (PLMs), there has been increased research on how to make them applicable, especially in limited-resource or low-latency, high-throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [1] [2] [3] and achieves comparable or superior performance to its teacher model, such as BERT [4], on a total of 13 tasks from the GLUE [5] and SuperGLUE [6] benchmarks.

Moyan Mei and Rohit Sroch. 2022. SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding. Lattice, The Machine Learning Journal by Association of Data Scientists, 3(1).
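The core idea, distilling a student against an ensemble of teachers, can be illustrated with a short PyTorch sketch. Note this is a generic multi-teacher distillation loss for illustration only, not the exact SEAD objective from the paper; the temperature `T` and mixing weight `alpha` are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          T=2.0, alpha=0.5):
    """Illustrative multi-teacher KD loss (not the exact SEAD objective):
    cross-entropy on gold labels plus KL divergence between the student
    and the averaged teacher distributions."""
    # Average the teachers' temperature-softened probability distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    # Soft-target loss, scaled by T^2 as is standard in distillation.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  teacher_probs, reduction="batchmean") * (T ** 2)
    # Hard-target loss on the task labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```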

SEAD-L-6_H-384_A-12-sst2

This is a student model distilled from the BERT base teacher using the SEAD framework on the SST-2 task. For weight initialization, we used microsoft/xtremedistil-l6-h384-uncased.
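For a quick try-out, the checkpoint can be loaded with the standard transformers pipeline API (using the model id C5i/SEAD-L-6_H-384_A-12-sst2 from this page; the exact label names depend on the model config):

```python
from transformers import pipeline

# Model id taken from this model card's page.
classifier = pipeline("text-classification",
                      model="C5i/SEAD-L-6_H-384_A-12-sst2")

print(classifier("a gripping, beautifully shot film"))
# e.g. [{'label': 'LABEL_1', 'score': 0.99...}] -- label names may vary
```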

All SEAD Checkpoints

Other Community Checkpoints: here

Intended uses & limitations

More information needed

Training hyperparameters

The training hyperparameters are stored in the training_args.bin file, which can be inspected as follows:

```python
import torch

# training_args.bin holds the serialized TrainingArguments used for fine-tuning.
hyperparameters = torch.load("training_args.bin")
print(hyperparameters)
```

Evaluation results

| eval_accuracy | eval_runtime (s) | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|---|---|---|---|---|---|
| 0.9312 | 1.5334 | 568.684 | 18.261 | 0.2929 | 872 |
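The accuracy can be reproduced approximately with a short loop over the SST-2 validation split. This is a sketch assuming the datasets library and the model id from this page; the label-string mapping is an assumption, and exact numbers depend on hardware and library versions.

```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="C5i/SEAD-L-6_H-384_A-12-sst2")
# The GLUE SST-2 validation split has 872 examples, matching eval_samples above.
dataset = load_dataset("glue", "sst2", split="validation")

correct = 0
for example in dataset:
    pred = classifier(example["sentence"])[0]["label"]
    # Assumed label mapping: LABEL_1 / "positive" -> 1, everything else -> 0.
    pred_id = 1 if pred in ("LABEL_1", "positive", "POSITIVE") else 0
    correct += int(pred_id == example["label"])

print("accuracy:", correct / len(dataset))  # expected ~0.9312
```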

Framework versions

  • Transformers >=4.8.0
  • Pytorch >=1.6.0
  • TensorFlow >=2.5.0
  • Flax >=0.3.5
  • Datasets >=1.10.2
  • Tokenizers >=0.11.6

If you use these models, please cite the following paper:

```bibtex
@article{mei2022sead,
  author  = {Mei, Moyan and Sroch, Rohit},
  title   = {SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
  journal = {Lattice, The Machine Learning Journal by Association of Data Scientists},
  volume  = {3},
  number  = {1},
  month   = {Feb},
  day     = {26},
  year    = {2022},
  url     = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
