# RecipeBERT
This model is a fine-tuned version of bert-base-uncased on the Recipe1M+ dataset, a corpus of food-domain data. Recipe1M+ contains over 1M records of distinct food names with their ingredients and recipes; more details about the dataset can be found on the project website. We used the whole Recipe1M+ dataset, 1,029,720 records in total, holding out 10% as an evaluation set. Each record contains a food name followed by its ingredients and recipe.
It achieves the following results on the evaluation set:
- Loss: 0.6230
## Usage
You can use this model to get embeddings/representations for your food-related dataset that you will use for your downstream tasks.
```python
from transformers import pipeline

# Load a feature-extraction pipeline with RecipeBERT
embedding = pipeline(
    'feature-extraction', model='alexdseo/RecipeBERT', framework='pt'
)

# Your food-related data
food_data = "Hawaiian Pizza"

# Mean pooling over the token embeddings yields one fixed-size vector
food_rep = embedding(food_data, return_tensors='pt')[0].numpy().mean(axis=0)
```
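As one example of a downstream use, the pooled embeddings can be compared with cosine similarity. The sketch below is illustrative: the food names and the `embed` helper are not part of the model's API, just one way to wrap the pipeline above.

```python
import numpy as np
from transformers import pipeline

embedding = pipeline(
    'feature-extraction', model='alexdseo/RecipeBERT', framework='pt'
)

def embed(text):
    # Mean-pooled RecipeBERT representation (one 768-dim vector)
    return embedding(text, return_tensors='pt')[0].numpy().mean(axis=0)

a = embed("Hawaiian Pizza")
b = embed("Pepperoni Pizza")

# Cosine similarity between the two food representations
similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"similarity: {similarity:.3f}")
```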
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of a matching `Trainer` setup follows the list):
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
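Below is a minimal, hypothetical sketch of a Hugging Face `Trainer` setup matching these hyperparameters. It assumes a masked-language-modeling objective (the card reports a loss but does not state the objective), uses a toy stand-in for the Recipe1M+ records, and is not the authors' actual training script. The Adam betas and epsilon above match the `Trainer` defaults.

```python
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Toy stand-in for Recipe1M+ records (food name, ingredients, recipe)
records = [
    "Hawaiian Pizza. Ingredients: pizza dough, tomato sauce, ham, pineapple.",
    "Caesar Salad. Ingredients: romaine, croutons, parmesan, dressing.",
    "Beef Tacos. Ingredients: tortillas, ground beef, onion, cilantro.",
    "Miso Soup. Ingredients: dashi, miso paste, tofu, scallions.",
]
dataset = Dataset.from_dict({"text": records}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
split = dataset.train_test_split(test_size=0.1, seed=42)  # 10% held out for eval

args = TrainingArguments(
    output_dir="recipebert",
    learning_rate=1e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # Native AMP mixed-precision training
)

# Dynamic token masking for the (assumed) MLM objective
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=split["train"],
    eval_dataset=split["test"],
)
trainer.train()
```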
### Training results
| Training Loss | Epoch | Step  | Validation Loss |
|---------------|-------|-------|-----------------|
| 0.7914        | 1.0   | 13286 | 0.7377          |
| 0.6945        | 2.0   | 26572 | 0.6569          |
| 0.6574        | 3.0   | 39858 | 0.6216          |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.11.0
- Tokenizers 0.14.1