Number of target labels in SCOTUS dataset
Why do I see only 13 labels in SCOTUS dataset, instead of 14 labels?
Thanks for reporting, @JayKasundra.
Indeed, the source data only contains 13 labels (from 1 to 13), instead of 14.
Maybe the description should be updated. Also the paper mentions 14 instead of 13.
I'm pinging one of the authors: @kiddothe2b
Hi @JayKasundra and @albertvillanova ,
Thanks for the question. This is a very common open question I get via emails
- The number of potential classes that are defined by the Supreme Court Database (SCDB) and can be used to label a document for issue areas are 14 (http://scdb.wustl.edu/documentation.php?var=issueArea).
- In the current dataset, which is a subset of ~7.8k SCOTUS cases out of all the available ones, only 13 out of the 14 labels are actually used.
Both aforementioned statements are equally factually true, so it's up to us how we want to define the label set. Is it the full label set, including the 14th label that never occurs in this dataset, or is it the one based on the labels that occur at least once? It's almost a philosophical question :)
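To make the distinction concrete, here is a minimal sketch of how one could compare the full SCDB label set against the labels actually observed in a dataset. The `observed` set below is hard-coded for illustration; with the real dataset you would collect it from the `label` column of each split.

```python
# Sketch: compare the full SCDB issueArea label set against the labels
# actually observed in a dataset. SCDB defines issue areas 1..14.
FULL_LABEL_SET = set(range(1, 15))

# Illustrative observed labels: in the current SCOTUS subset, 14 never occurs.
observed = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13}

missing = sorted(FULL_LABEL_SET - observed)
print(f"Labels never observed: {missing}")
```

Depending on which answer you want, the "label set" is either `FULL_LABEL_SET` (the codebook definition) or `observed` (what the data supports).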
Thanks for your answer, @kiddothe2b .
The issue is that any model trained only on this dataset will never see an example of class 14 during training, validation, or testing, so it will never be able to predict that class at inference time.
I think it would be worth mentioning that the dataset contains only 13 out of the 14 classes, at least in the README here on this repo.
I'm taking care of this...
So which class does not occur in this dataset? Is it "14. Private Action"?
Yeah, if I'm not mistaken, that's the one with no data points.
Thank you!
I am trying to convert this dataset for a Question Answering task, so ...
In an open-ended setting like QA with instruction-tuned LLMs, I think it would make sense to include the extra label as an option in your instructions (prompts). The fact that there are no cases in this dataset annotated with this label by no means makes the class irrelevant in general: it may be used in future cases, or in older cases that are missing from this dataset.
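A minimal sketch of what that could look like: list all 14 SCDB issue areas as options in the prompt, including "Private Action" even though it has no examples here. The `build_prompt` helper and the prompt wording are hypothetical; the option names follow the SCDB issueArea codebook.

```python
# Sketch: include every SCDB issue area as an option in a QA-style prompt,
# even the one (14. Private Action) that never occurs in this dataset.
ISSUE_AREAS = [
    "Criminal Procedure", "Civil Rights", "First Amendment", "Due Process",
    "Privacy", "Attorneys", "Unions", "Economic Activity",
    "Judicial Power", "Federalism", "Interstate Relations",
    "Federal Taxation", "Miscellaneous", "Private Action",
]

def build_prompt(case_text: str) -> str:
    # Number the options 1..14 to match SCDB's issueArea coding.
    options = "\n".join(f"{i}. {name}" for i, name in enumerate(ISSUE_AREAS, 1))
    return (
        "Which issue area best describes the following case?\n\n"
        f"{case_text}\n\nOptions:\n{options}\n\nAnswer with one option number."
    )

print(build_prompt("<case excerpt goes here>"))
```

This way the model can, at least in principle, answer "14" for unseen cases, even though no example in this dataset would ever teach or reward that answer.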
So you suggest keeping the label "14. Private Action", right? Is there any chance that this confuses the model about the options and leads to more wrong answers?
I am very new to this field and have a lot of questions (some of them might be stupid, though).