# Model Card for distilbert-base-uncased

## Model Details

### Model Description

A fine-tuned version of the distilbert/distilbert-base-uncased model, trained on the stanford-nlp/snli dataset.
- Developed by: Karl Weinmeister
- Language(s) (NLP): en
- License: apache-2.0
- Finetuned from model: distilbert/distilbert-base-uncased
## Training Hyperparameters

- Training regime: The model was trained for 5 epochs with a batch size of 128.
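Assuming the standard stanford-nlp/snli train split of roughly 550k premise-hypothesis pairs (a figure not stated in this card), the regime above implies the following optimizer step count, sketched here for orientation:

```python
import math

# Assumed size of the stanford-nlp/snli train split (~550k examples);
# this number is not stated in the model card itself.
SNLI_TRAIN_EXAMPLES = 550_152
EPOCHS = 5        # from the stated training regime
BATCH_SIZE = 128  # from the stated training regime

# Steps per epoch with a fixed batch size, rounding up for the final
# partial batch, then the total across all epochs.
steps_per_epoch = math.ceil(SNLI_TRAIN_EXAMPLES / BATCH_SIZE)
total_steps = steps_per_epoch * EPOCHS
print(steps_per_epoch, total_steps)  # 4299 21495
```

Under these assumptions the run amounts to roughly 21.5k optimizer steps, which is a useful sanity check when reproducing the fine-tune or comparing learning-rate schedules.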