nateraw committed on
Commit
36fee2e
1 Parent(s): 32af854
.gitignore ADDED
@@ -0,0 +1 @@
+ checkpoint-*/
README.md CHANGED
@@ -1,44 +1,58 @@
  ---
  tags:
- - image-classification
- - pytorch
- - huggingpics
- metrics:
- - accuracy
-
- model-index:
  - name: trainer-rare-puppers
    results:
    - task:
        name: Image Classification
        type: image-classification
-     metrics:
-     - name: Accuracy
-       type: accuracy
-       value: 0.8955223880597015
  ---

  # trainer-rare-puppers

- Autogenerated by HuggingPics🤗🖼️
-
- Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
-
- Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
-
- ## Example Images
-
- #### corgi
-
- ![corgi](images/corgi.jpg)
-
- #### samoyed
-
- ![samoyed](images/samoyed.jpg)
-
- #### shiba inu
-
- ![shiba inu](images/shiba_inu.jpg)

  ---
+ license: apache-2.0
  tags:
+ - generated_from_trainer
+ model_index:
  - name: trainer-rare-puppers
    results:
    - task:
        name: Image Classification
        type: image-classification
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
  # trainer-rare-puppers

+ This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
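
For orientation only (not part of this commit): a minimal inference sketch for the resulting checkpoint. It assumes the model is published on the Hub as `nateraw/trainer-rare-puppers` and that a local image file is available; both the repo id and the image path are placeholders.

```python
# Minimal inference sketch; repo id and image path are placeholders.
import torch
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

model_id = "nateraw/trainer-rare-puppers"  # assumed Hub repo id
feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)

image = Image.open("corgi.jpg")  # placeholder image path
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```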
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 1
+ - mixed_precision_training: Native AMP
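
For orientation only (not part of this commit): a sketch of how the hyperparameters listed above map onto `TrainingArguments` in the 🤗 Trainer API. The output directory is a placeholder and the dataset/model wiring is omitted.

```python
# Sketch mapping the listed hyperparameters onto TrainingArguments
# (output_dir is a placeholder; dataset and model setup omitted).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="trainer-rare-puppers",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```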
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | No log | 1.0 | 48 | 0.4087 | 0.8806 |
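
For orientation only (not part of this commit): the Accuracy column above is typically produced by a `compute_metrics` hook passed to the `Trainer`. The exact hook is not included in this commit; a common sketch using the `datasets` accuracy metric looks like this.

```python
# Common compute_metrics sketch (assumed, not taken from this repo):
# argmax over logits, then the `datasets` accuracy metric.
import numpy as np
from datasets import load_metric

accuracy = load_metric("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```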
+
+ ### Framework versions
+
+ - Transformers 4.9.2
+ - Pytorch 1.9.0+cu102
+ - Datasets 1.11.0
+ - Tokenizers 0.10.3
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9cfc6cb629f3611b2f771e161e9b6dc7e457371e43254cfcdea4f55ceec9e2c0
+ oid sha256:688a0485d0f04f6ba4e294e87dd575065d66a99f02390e225f2f62dd0c913865
  size 343282929
runs/Aug23_18-14-48_eb7d1b59e2b0/1629742526.517056/events.out.tfevents.1629742526.eb7d1b59e2b0.1285.7 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:935efd1d8ece35cc62ceade8494767dd0891a2e3382f9c31f41f472a4dc071f6
+ size 4318
runs/Aug23_18-14-48_eb7d1b59e2b0/events.out.tfevents.1629742526.eb7d1b59e2b0.1285.6 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e2662b6cc8e53f78dedd83e4563aec8306a5d37ee26c270f0e3bb02a4987cca
+ size 3782
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1b4ae6fdc97fb4101413253012822ccb79e54c220da7be0d96a68cc29d9b110
+ size 2799