---
language:
- en
library_name: timm
pipeline_tag: image-classification
tags:
- vision
- mapreader
- maps
- National Library of Scotland
- historical
- lam
- humanities
- heritage
---

# Model Card for mr_resnest101e_timm_no_pretrain

A ResNeSt (ResNet-based architecture with Split Attention) image classification model. Trained on gold standard annotations and outputs from early experiments using MapReader (found [here](https://huggingface.co/datasets/Livingwithmachines/MapReader_Data_SIGSPATIAL_2022)).

## Model Details

### Model Description

- **Model type:** Image classification / feature backbone
- **Finetuned from model:** https://huggingface.co/timm/resnest101e.in1k

## Uses

This fine-tuned version of the model is an output of the MapReader pipeline. It was used to classify 'patch' images (cells/regions) of scanned nineteenth-century series maps of Britain provided by the National Library of Scotland (learn more [here](https://maps.nls.uk/os/)). We classified patches to indicate the presence of buildings and railway infrastructure. See [our paper](https://dl.acm.org/doi/10.1145/3557919.3565812) for more details about the labels.

## How to Get Started with the Model in MapReader

Please go to [the MapReader documentation](https://mapreader.readthedocs.io/en/latest/User-guide/Classify.html) for instructions on how to use this model in MapReader. A minimal timm-based loading sketch is also included at the end of this card.

## Training, Evaluation and Testing Details

### Training, Evaluation and Testing Data

This model was fine-tuned on [manually annotated data](https://huggingface.co/datasets/Livingwithmachines/MapReader_Data_SIGSPATIAL_2022).

### Training, Evaluation and Testing Procedure

Details can be found in [our paper](https://dl.acm.org/doi/10.1145/3557919.3565812). An open access version of the article is available [here](https://arxiv.org/abs/2111.15592).

### Results

Data outputs can be found [here](https://huggingface.co/datasets/Livingwithmachines/MapReader_Data_SIGSPATIAL_2022). Further details can be found in [our paper](https://dl.acm.org/doi/10.1145/3557919.3565812).

## More Information

This model was fine-tuned using MapReader. The code for MapReader can be found [here](https://github.com/Living-with-machines/MapReader) and the documentation can be found [here](https://mapreader.readthedocs.io/en/latest/).

## Model Card Authors

- Katie McDonough (k.mcdonough@lancaster.ac.uk)
- Rosie Wood (rwood@turing.ac.uk)

## Model Card Contact

Katie McDonough (k.mcdonough@lancaster.ac.uk)

## Funding Statement

This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).

Living with Machines, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library, and the universities of Cambridge, King's College London, East Anglia, Exeter, and Queen Mary University of London.
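
## Example: Loading the Model with timm

For users working outside the MapReader pipeline, the sketch below shows one way to load this checkpoint with timm's Hugging Face Hub integration and classify a single map patch. The repo id `Livingwithmachines/mr_resnest101e_timm_no_pretrain` and the `patch.png` path are assumptions for illustration; check the predicted class indices against the label set in the MapReader annotations before interpreting them.

```python
# Minimal sketch: load the fine-tuned checkpoint via timm's hf-hub support
# and run inference on one patch image. The repo id below is assumed from
# this card's title; verify label ordering against the MapReader dataset.
import timm
import torch
from PIL import Image

model = timm.create_model(
    "hf-hub:Livingwithmachines/mr_resnest101e_timm_no_pretrain",  # assumed repo id
    pretrained=True,
)
model.eval()

# Build the preprocessing transform from the model's own data config.
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

# Classify a single patch image (placeholder path).
patch = Image.open("patch.png").convert("RGB")
with torch.no_grad():
    logits = model(transform(patch).unsqueeze(0))  # add batch dimension

probs = logits.softmax(dim=1)
print(probs.argmax(dim=1).item(), probs.max().item())
```

In practice, MapReader handles this loading, preprocessing, and batching for you, so the pipeline described in the documentation linked above remains the recommended route.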