---
license: apache-2.0
tags:
- generated_from_trainer
- medical
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: vit-large-patch32-384-Breast_Histopathology_Images
  results: []
language:
- en
pipeline_tag: image-classification
---

# vit-large-patch32-384-Breast_Histopathology_Images

This model is a fine-tuned version of [google/vit-large-patch32-384](https://huggingface.co/google/vit-large-patch32-384) on the Breast Histopathology Images dataset. It achieves the following results on the evaluation set:
- Loss: 0.3954
- Accuracy: 0.8202
- F1
  - Weighted: 0.8151
  - Micro: 0.8202
  - Macro: 0.7674
- Recall
  - Weighted: 0.8202
  - Micro: 0.8202
  - Macro: 0.7549
- Precision
  - Weighted: 0.8141
  - Micro: 0.8202
  - Macro: 0.7860

## Model description

For more information on how this model was created, see the training notebook: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Binary%20Classification/Breast%20Histopathology%20Images/Breast_Histopathology_Images_Using_ViT.ipynb

## Intended uses & limitations

This model is intended to demonstrate my ability to solve a complex problem using technology.

## Training and evaluation data

Dataset Source: https://huggingface.co/datasets/EulerianKnight/breast-histopathology-images-train-test-valid-split

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a corresponding `TrainingArguments` sketch follows the framework versions below):
- learning_rate: 0.005
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.3536        | 1.0   | 649  | 0.3568          | 0.8455   | 0.8411      | 0.8455   | 0.8003   | 0.8455          | 0.8455       | 0.7863       | 0.8411             | 0.8455          | 0.8205          |
| 0.4417        | 2.0   | 1298 | 0.3954          | 0.8202   | 0.8151      | 0.8202   | 0.7674   | 0.8202          | 0.8202       | 0.7549       | 0.8141             | 0.8202          | 0.7860          |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
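The hyperparameters listed under *Training hyperparameters* correspond roughly to the `TrainingArguments` sketch below, targeting the Transformers version pinned above. The `output_dir` and the per-epoch evaluation schedule are assumptions and are not stated in this card.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments configuration matching the reported hyperparameters.
# output_dir and evaluation_strategy are assumptions, not taken from this card.
training_args = TrainingArguments(
    output_dir="vit-large-patch32-384-Breast_Histopathology_Images",  # assumed
    learning_rate=5e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # 64 x 4 = 256 total train batch size on one device
    lr_scheduler_type="linear",
    num_train_epochs=2,
    evaluation_strategy="epoch",     # assumed; the results table reports one row per epoch
)
# The Adam betas (0.9, 0.999) and epsilon (1e-08) match the library defaults,
# so no optimizer overrides are needed.
```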
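## How to use

A minimal inference sketch using the `image-classification` pipeline. The repository id below is an assumption based on this card's name; substitute the actual Hub id or a local path to the saved checkpoint, and `example_patch.png` stands in for any local histopathology image patch.

```python
from transformers import pipeline

# Assumed Hub repository id; replace with the real id or a local model directory.
model_id = "DunnBC22/vit-large-patch32-384-Breast_Histopathology_Images"

# The card's pipeline_tag is image-classification, so the standard pipeline applies.
classifier = pipeline("image-classification", model=model_id)

# Placeholder image path; the image processor resizes inputs to 384x384.
predictions = classifier("example_patch.png")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```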