
This model is an mbart-large-50-many-to-many-mmt model fine-tuned on the text portion of the CATSLU spoken language understanding dataset.

Averaged over the four CATSLU test sets, the model achieves 82.56% accuracy and 73.48% F1.
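A minimal usage sketch with the `transformers` library. The repository id `your-username/mbart-catslu` is a placeholder (substitute the actual model id on the Hub), and the `zh_CN` source-language code assumes Chinese input, since CATSLU is a Chinese SLU dataset:

```python
def predict_slu(utterance: str, model_id: str = "your-username/mbart-catslu") -> str:
    """Generate the semantic-frame string for a single utterance."""
    # Imports kept inside the function so the sketch can be read
    # without transformers installed.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

    # Assumption: Chinese input, as in CATSLU.
    tokenizer.src_lang = "zh_CN"

    inputs = tokenizer(utterance, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=64)
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
```

Calling `predict_slu("打开空调")` would then return the model's decoded semantic parse for that utterance.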
