Beware of duplicates
#10 · opened by legaltextai
I am working on Indiana and found about 16% duplicate rows based on 'case_name_full', 'citations', and 'date_filed'.
Not a criticism, just something to keep in mind depending on your use case.
import pandas as pd

# df is the Indiana opinions DataFrame; with keep=False, every row that shares
# all three key columns with another row is flagged (not just the repeats)
duplicates_mask_both = df.duplicated(subset=['case_name_full', 'citations', 'date_filed'], keep=False)
num_duplicates_both = duplicates_mask_both.sum()
print(num_duplicates_both)
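For context, the ~16% figure presumably comes from dividing the flagged rows by the total row count. A minimal, self-contained sketch, with a toy frame standing in for the Indiana data (the rows below are hypothetical):

```python
import pandas as pd

# Hypothetical stand-in for the Indiana opinions frame
df = pd.DataFrame({
    'case_name_full': ['A v. B', 'A v. B', 'C v. D', 'E v. F'],
    'citations':      ['1 Ind. 1', '1 Ind. 1', '2 Ind. 2', '3 Ind. 3'],
    'date_filed':     ['1850-01-01', '1850-01-01', '1851-02-02', '1852-03-03'],
})

# keep=False flags both members of each duplicate pair
mask = df.duplicated(subset=['case_name_full', 'citations', 'date_filed'], keep=False)
pct = 100 * mask.sum() / len(df)
print(f"{mask.sum()} duplicate rows ({pct:.0f}% of {len(df)})")
```

Note that `keep=False` counts every row in a duplicate group, so the share of rows that could be *dropped* (e.g. via `df.drop_duplicates(subset=...)`) is smaller than the flagged share.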
It is still a good idea to read the opinions of a few random duplicate pairs to double-check that they are indeed the same.
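One way to do that spot check is to group the flagged rows by the key columns and print a snippet of each opinion side by side. A sketch under the assumption that the opinion text lives in a column named 'text' (hypothetical; adjust to the dataset's actual schema):

```python
import pandas as pd

# Hypothetical frame standing in for the Indiana data, with a 'text' column
df = pd.DataFrame({
    'case_name_full': ['Smith v. Jones', 'Smith v. Jones', 'Doe v. Roe'],
    'citations':      ['1 Ind. 1', '1 Ind. 1', '2 Ind. 2'],
    'date_filed':     ['1850-01-01', '1850-01-01', '1851-02-02'],
    'text':           ['Opinion of the court...', 'Opinion of the court...', 'Per curiam...'],
})

key = ['case_name_full', 'citations', 'date_filed']
dupes = df[df.duplicated(subset=key, keep=False)]

# Show up to 3 duplicate groups; sample(frac=1) would randomize group order
for _, group in list(dupes.groupby(key))[:3]:
    for _, row in group.iterrows():
        print(row['case_name_full'], '|', row['text'][:60])
```

If the snippets within a group match, the rows are almost certainly true duplicates; if they differ, the key columns alone are collapsing distinct opinions and a stricter key (or text comparison) is needed.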