This model exists mostly for research purposes. It is essentially the same as MoritzLaurer/deberta-v3-large-zeroshot-v2.0, except that the training data from the 28 datasets/tasks used for evaluating the model was excluded. The purpose of this model is to provide true zero-shot metrics by holding out the training data of those 28 datasets/tasks.

For most practical purposes, MoritzLaurer/deberta-v3-large-zeroshot-v2.0 will be more useful, as it has seen data from 28 additional tasks and will perform better on most tasks. Note that MoritzLaurer/deberta-v3-large-zeroshot-v2.0 has only seen the training data for these 28 tasks, not their test data.
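Below is a minimal usage sketch, assuming the model is loaded through the standard Hugging Face zero-shot-classification pipeline, as with the other deberta-v3-large-zeroshot-v2.0 models; the example text and candidate labels are illustrative, not from this model card.

```python
from transformers import pipeline

# Load the held-out variant as a zero-shot classifier (assumption: standard
# NLI-based zero-shot-classification pipeline usage, as in the v2.0 family).
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/deberta-v3-large-zeroshot-v2.0-28heldout",
)

# Hypothetical input text and candidate labels for illustration only.
text = "The new phone's battery drains within a few hours."
candidate_labels = ["hardware issue", "software issue", "positive review"]

output = classifier(text, candidate_labels, multi_label=False)
print(output["labels"][0], output["scores"][0])
```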
Base model: microsoft/deberta-v3-large