---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-mobile-eye-tracking-dataset-v2
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8709677419354839
---

# swin-tiny-patch4-window7-224-finetuned-mobile-eye-tracking-dataset-v2

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3968
- Accuracy: 0.8710

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching `TrainingArguments` sketch appears below, after the framework versions):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.8889  | 2    | 1.7756          | 0.2258   |
| No log        | 1.7778  | 4    | 1.6784          | 0.2581   |
| No log        | 2.6667  | 6    | 1.5861          | 0.3226   |
| No log        | 4.0     | 9    | 1.3571          | 0.4194   |
| No log        | 4.8889  | 11   | 1.0993          | 0.5484   |
| No log        | 5.7778  | 13   | 0.9242          | 0.6452   |
| 1.4667        | 6.6667  | 15   | 0.7538          | 0.7097   |
| 1.4667        | 8.0     | 18   | 0.6294          | 0.7742   |
| 1.4667        | 8.8889  | 20   | 0.5326          | 0.7097   |
| 1.4667        | 9.7778  | 22   | 0.4848          | 0.7419   |
| 1.4667        | 10.6667 | 24   | 0.4832          | 0.7742   |
| 1.4667        | 12.0    | 27   | 0.4483          | 0.7742   |
| 1.4667        | 12.8889 | 29   | 0.4296          | 0.7742   |
| 0.5925        | 13.7778 | 31   | 0.4023          | 0.7742   |
| 0.5925        | 14.6667 | 33   | 0.4111          | 0.8387   |
| 0.5925        | 16.0    | 36   | 0.3873          | 0.8065   |
| 0.5925        | 16.8889 | 38   | 0.4029          | 0.8065   |
| 0.5925        | 17.7778 | 40   | 0.4065          | 0.8065   |
| 0.5925        | 18.6667 | 42   | 0.3864          | 0.8065   |
| 0.3285        | 20.0    | 45   | 0.3968          | 0.8710   |
| 0.3285        | 20.8889 | 47   | 0.3930          | 0.8710   |
| 0.3285        | 21.7778 | 49   | 0.3871          | 0.8710   |
| 0.3285        | 22.6667 | 51   | 0.3779          | 0.8065   |
| 0.3285        | 24.0    | 54   | 0.3698          | 0.8065   |
| 0.3285        | 24.8889 | 56   | 0.3726          | 0.8387   |
| 0.3285        | 25.7778 | 58   | 0.3732          | 0.8387   |
| 0.2621        | 26.6667 | 60   | 0.3732          | 0.8387   |

### Framework versions

- Transformers 4.45.1
- PyTorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
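
### Reproducing the training arguments

The hyperparameter list above maps directly onto `transformers.TrainingArguments`. The sketch below is a reconstruction, not the original training script: `output_dir`, the evaluation/save strategies, and best-model selection are assumptions, while the numeric values are taken verbatim from this card. The listed Adam settings (betas=(0.9,0.999), epsilon=1e-08) match the Trainer's default AdamW optimizer, so no optimizer override is needed.

```python
from transformers import TrainingArguments

# Sketch only: output_dir, eval/save strategy, and best-model selection are
# assumptions; the numeric values mirror the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-mobile-eye-tracking-dataset-v2",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,  # 64 * 4 = 256 total train batch size
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    eval_strategy="epoch",          # assumption, consistent with the results table
    save_strategy="epoch",          # assumption
    load_best_model_at_end=True,    # assumption
    metric_for_best_model="accuracy",
)
```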
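
## How to use

The card does not include a usage snippet; the following is a minimal inference sketch using the standard `transformers` image-classification API. The checkpoint identifier and image path are placeholders: substitute the training `output_dir` or the Hub repository id under which the model was pushed.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder checkpoint id: replace with your local output_dir or Hub repo id.
checkpoint = "swin-tiny-patch4-window7-224-finetuned-mobile-eye-tracking-dataset-v2"

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)
model.eval()

# Placeholder image path: any RGB image works; the processor resizes it to 224x224.
image = Image.open("example_frame.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_idx])
```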