Datasets:
Deduplicate Dataset
Download all images listed in the master file (`Jiggins_Zenodo_Img_Master.csv`), take the MD5 checksum of each image, and reduce all master files to only unique images (as indicated by MD5).
There are still multiple images per specimen, but there are now 36,211 images of 11,968 unique specimens covering approximately 360 taxa.
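For reviewers, the MD5-based de-duplication step can be sketched roughly as follows. This is a minimal sketch using synthetic files, not the notebook's actual code; the `filepath` column name and the chunked hashing helper are assumptions for illustration.

```python
import hashlib
import tempfile
from pathlib import Path

import pandas as pd

def md5_of_file(path: Path) -> str:
    """Compute the MD5 checksum of a file, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with synthetic files: two byte-for-byte duplicates and one distinct file.
tmp = Path(tempfile.mkdtemp())
(tmp / "a.png").write_bytes(b"image-bytes-1")
(tmp / "b.png").write_bytes(b"image-bytes-1")  # duplicate of a.png
(tmp / "c.png").write_bytes(b"image-bytes-2")

master = pd.DataFrame({"filepath": [tmp / "a.png", tmp / "b.png", tmp / "c.png"]})
master["md5"] = master["filepath"].map(md5_of_file)

# Keep one row per unique image content, as in the de-duplication step.
unique_images = master.drop_duplicates(subset="md5", keep="first")
print(len(unique_images))  # 2
```

In the real pipeline the same idea is applied to the full download, so multiple records pointing at identical image bytes collapse to a single row.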
@thompsonmj , @hlapp please review.
There's a lot here. I guess we should have reviewed this together during the meeting last week. @egrace479 can you give some guidance on what you're looking to get reviewed? Is it the README changes, Jupyter NB changes, and/or something else? Perhaps it's worth walking through this over Zoom?
We can certainly walk through it over Zoom, though I'll provide a summary here. The main updates center on the de-duplication process and updating documentation in the README accordingly.
- Updated notebook names for a clearer order of operations; I believe I have updated all references to them accordingly.
  - All notebooks required for data generation are in the root `notebooks` folder and named `Data-gen-#-#`.
  - The notebooks in `deduplication_process/notebooks` are all `EDA-DL-0-#`, as they contain an exploration of the duplication in the overall download to inform the de-duplication performed in `notebooks/Data-gen-1-2` (there is a summary of the process at the beginning of the notebook and in the `deduplication_process` README).
- Downloaded images that did not have taxonomic info (labeled "Unknown") and those whose labels were not a genus of butterfly were removed from the master file and documented in `metadata/Missing_taxa_download.csv`. This way they could potentially be labeled correctly later. There is a larger CSV in that folder already which contains all the records from the original master file that did not have any taxonomic info (as measured by null values).
- In the root `metadata` folder, the license JSON file was updated and a bib file was added for easier citation.
  - The script to generate the license JSON file is a slight modification of this script, written by @AndreyKopanev, though I adjusted the input file to be `metadata/master_licenses.csv` (generated in `notebooks/Data-gen-1-2`) and set the indent at save so that it was more human-readable. @thompsonmj and I were discussing whether it should actually go to GitHub as a general function to retrieve citation information more easily from Zenodo or just be added to this repo (since it's the parent Jiggins data repo, and we would then remove it from the other one).
- I updated `scripts/download_jiggins_subset.py`. @thompsonmj had reviewed it offline before I started the overall download, and its current form is what was used in the download of all images for the de-duplication process.
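The license-JSON generation might look roughly like this. This is a minimal sketch: the column names `zenodo_id` and `license`, and the placeholder record IDs, are illustrative assumptions, not the actual schema of `metadata/master_licenses.csv`.

```python
import json

import pandas as pd

# Stand-in for pd.read_csv("metadata/master_licenses.csv"); the column
# names and record IDs here are assumptions for illustration only.
licenses = pd.DataFrame(
    {
        "zenodo_id": [1234567, 7654321],
        "license": ["CC-BY 4.0", "CC0 1.0"],
    }
)

# Map each Zenodo record ID to its license string.
license_map = dict(zip(licenses["zenodo_id"].astype(str), licenses["license"]))

# Setting an indent at save keeps the JSON human-readable.
license_json = json.dumps(license_map, indent=2)
print(license_json)
```

With `indent=2`, each record ends up on its own line in the saved file rather than one long single-line JSON object.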
In `Data-gen-0-1.ipynb`, changed

```python
jiggins = pd.read_csv("../Jiggins_Zenodo_Master.csv", low_memory = False)
```

to

```python
jiggins = pd.read_csv("../metadata/Jiggins_Zenodo_Master.csv", low_memory = False)
```
@hlapp
I moved the `deduplication_process` folder files into the root `notebooks` and `metadata` folders as we discussed on Zoom. That README is now `README-supplemental` at the root. I adjusted links and filepaths in relevant notebooks and the READMEs accordingly. I also added the explanation in `notebooks/EDA-DL-0-1.ipynb` for why the one Excel file from a Zenodo record was exported to CSV.
We'll get @AndreyKopanev's `get_licenses` script generalized with him as a simple package on GitHub for Zenodo citation retrieval.
Looks great! Thanks for cleaning this up so well.