Model description
This model is built on the distilbert/distilbert-base-uncased pretrained checkpoint and fine-tuned on the stanfordnlp/imdb dataset. It is a fill-mask model fine-tuned to predict the words most likely to fill a [MASK] token given the surrounding context of movie-review text.
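The exact training script is not reproduced in this card. Purely as a reference, the sketch below shows one way a comparable masked-language-modeling fine-tune on stanfordnlp/imdb could be set up with the Transformers `Trainer` API; the hyperparameters, truncation length, and 10,000-example subset are illustrative assumptions, not the settings actually used for this checkpoint.

```python
# Minimal sketch of a comparable fine-tune (not the exact script used for this model).
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_checkpoint = "distilbert/distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)

# Tokenize the raw IMDB reviews; truncation and the subset size below are
# illustrative choices to keep a learning run short.
imdb = load_dataset("stanfordnlp/imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = imdb.map(tokenize, batched=True, remove_columns=["text", "label"])

# Randomly mask 15% of tokens so the model learns to fill [MASK] from review context.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-imdb",
    per_device_train_batch_size=32,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(10_000)),
    data_collator=data_collator,
)
trainer.train()
```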
Intended uses & limitations
Please note that this is the first model I have ever trained and fine-tuned; it is primarily a learning exercise in training and fine-tuning models. It may have limitations and biases because, for learning purposes, it was not fine-tuned on the full dataset, but feel free to experiment with it.
How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline

>>> mask_filler = pipeline(
...     "fill-mask", model="haotiany/distilbert-base-uncased-finetuned-imdb-accelerate"
... )
>>> mask_filler("this is a great [MASK].")
[{'score': 0.23489679396152496, 'token': 2143, 'token_str': 'film', 'sequence': 'this is a great film.'},
 {'score': 0.14010392129421234, 'token': 3185, 'token_str': 'movie', 'sequence': 'this is a great movie.'},
 {'score': 0.04468822479248047, 'token': 2801, 'token_str': 'idea', 'sequence': 'this is a great idea.'},
 {'score': 0.020726347342133522, 'token': 2028, 'token_str': 'one', 'sequence': 'this is a great one.'},
 {'score': 0.017180712893605232, 'token': 2265, 'token_str': 'show', 'sequence': 'this is a great show.'}]
```
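If you prefer to skip the pipeline abstraction, a minimal sketch along the following lines should also work with plain PyTorch; the checkpoint name is the same one used above, and the top-5 selection mirrors the pipeline's default output.

```python
# Minimal sketch: query the model directly with AutoModelForMaskedLM instead of a pipeline.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

checkpoint = "haotiany/distilbert-base-uncased-finetuned-imdb-accelerate"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

text = "this is a great [MASK]."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the five highest-scoring vocabulary tokens.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_tokens = logits[0, mask_index].softmax(dim=-1).topk(5)
for score, token_id in zip(top_tokens.values[0], top_tokens.indices[0]):
    print(f"{tokenizer.decode([int(token_id)])}: {score.item():.4f}")
```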