DOFOFFICIAL NathMath committed on
Commit
760eb86
1 Parent(s): b2bf333

Update README.md (#1)


- Update README.md (b99c7cc7ab3027ceaec5f8708661f15c69c3f057)


Co-authored-by: NathMath huang <[email protected]>

Files changed (1)
  1. README.md +49 -1
README.md CHANGED
@@ -16,4 +16,52 @@ tags:
- nathmath
- computer_vision
- anime
---

### animeGender-dvgg-0.7

### [Description]
Our proposed model, animeGender-dvgg-0.7, is a fine-tuned binary classification model created by DOF-Studio (2023) on top of the pre-trained vgg-16. It identifies the gender of an animation character and is designed particularly for Japanese-style 2D anime characters. It was trained by DOF-Studio in July 2023 on a private organizational data set manually collected and tagged by our staff. Although the model has shown strong results on our test and verification data sets, please note that it is not the final version of our character-gender identification model series, but a phased result (version 0.7) of our open-source project: upgraded versions will be released in the near future, and since we have improved the network structure, we are confident that the upcoming ones will bring a substantial improvement. Thank you for your appreciation and support of our work and models.

### [Technical Details]
**Modification:** animeGender-dvgg-0.7 uses all weights from the original vgg-16 model but changes the structure of the last sequential block, the dense layers: we modified it into a binary classification model whose two output nodes (activated by a softmax layer) give the probability of each gender, namely female and male. Although the rest of the network, particularly the convolutional layers, has been left unchanged, we plan to modify this base model, vgg16, more deeply in the future to achieve a higher score and precision on this classification task.
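A minimal sketch of this head replacement, assuming the torchvision VGG-16 implementation (the 256-unit hidden layer is illustrative; the released checkpoint defines the real shapes):

```python
import torch.nn as nn
from torchvision import models

# Pre-trained VGG-16 backbone; the convolutional weights stay as they are.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Swap the last dense block for a two-node binary head with softmax.
# torchvision pools and flattens the features to 512*7*7 before this block.
vgg.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 256),  # hidden width assumed for illustration
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(256, 2),            # two nodes: [female, male] (assumed order)
    nn.Softmax(dim=1),            # per-gender probabilities
)
```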
**Input:** While the original vgg-16 is designed for a 224*224 input with 3 channels in RGB color space, animeGender-dvgg-0.7 uses only 64*64 RGB, as the classification task is not overly demanding. When feeding a picture into the model, please ensure the input illustration consists only of the head and face of the character you want to identify, so that the result is as precise and reliable as possible. We have also provided Python functions in our open-source code to help you resize, crop, and transform your pictures into 64*64 RGB ones; more information is available in the file folder.
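As a rough stand-in for those helpers, the following sketch (assuming torchvision transforms, and leaving out any normalization the official helpers may apply) produces an input tensor of the expected shape:

```python
from PIL import Image
from torchvision import transforms

# Resize the shorter side to 64 px, center-crop to a 64x64 square,
# and convert to a CHW float tensor in [0, 1].
preprocess = transforms.Compose([
    transforms.Resize(64),
    transforms.CenterCrop(64),
    transforms.ToTensor(),
])

img = Image.open("face.png").convert("RGB")  # head-and-face crop of the character
x = preprocess(img).unsqueeze(0)             # shape: (1, 3, 64, 64)
```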
**Output:** animeGender-dvgg-0.7 outputs a one-dimensional tensor of length 2 giving the probability of each class for your input, namely female and male. In our open-source usage example (see the file folder), we transform the raw output into a readable result, for example "male", together with a number showing the probability, or confidence. Note that the model has no background knowledge of a particular character or of the context of an animation, so some gender-neutral characters may still be misclassified, or correctly matched but with a confidence around 0.5.
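A sketch of that mapping, assuming the first output node corresponds to female and the second to male (the index order is our assumption):

```python
import torch

labels = ["female", "male"]           # assumed node order

probs = torch.tensor([0.991, 0.009])  # example raw model output
conf, idx = probs.max(dim=0)
print(f"This is {labels[idx]} with a confidence of {conf.item():.3f}.")
# -> This is female with a confidence of 0.991.
```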
**Checkpoint:** We provide the final, proposed model under the name "animeGender-dvgg-0.7.pth". To satisfy further requirements, such as research, we also provide intermediate training checkpoints, though they have proved inferior to the proposed model. For more models, see the "more-models" folder inside the file folder.

### [Results and Rankings]
Based on the training and testing data, the proposed model achieved the results shown below:
| name      | animeGender-dvgg-0.7 |
|-----------|----------------------|
| epoch     | 50                   |
| trainSet  | 20k                  |
| trainLoss | 0.0019               |
| trainAcc  | 0.9640               |
| testSet   | 1.4k                 |
| testLoss  | 0.0024               |
| testAcc   | 0.9267               |

### [Examples]
Here are some external, out-of-sample tests conducted by our staff, with the corresponding results shown below:
![Pic 0](https://cdn-uploads.huggingface.co/production/uploads/64a59126f988662d058c4fb7/_yHVFm91B7-fCNBkuKFNf.png)
![Pic 1](https://cdn-uploads.huggingface.co/production/uploads/64a59126f988662d058c4fb7/0phmkYFC1nfMHsZ7__4NG.png)
![Pic 2](https://cdn-uploads.huggingface.co/production/uploads/64a59126f988662d058c4fb7/2XWirU8RFNrg3HP3-kFuP.png)
![Pic 3](https://cdn-uploads.huggingface.co/production/uploads/64a59126f988662d058c4fb7/nqYI8accIVUsFLjFll_Hs.png)
![Pic 4](https://cdn-uploads.huggingface.co/production/uploads/64a59126f988662d058c4fb7/szj1Io23S05aH4vXHo6PK.png)
![Pic 5](https://cdn-uploads.huggingface.co/production/uploads/64a59126f988662d058c4fb7/7OSp0nDHbrqgIoyKF3od5.png)
![Pic 6](https://cdn-uploads.huggingface.co/production/uploads/64a59126f988662d058c4fb7/4LmkTtoi6_c-kP5mgpnAC.png)
![Pic 7](https://cdn-uploads.huggingface.co/production/uploads/64a59126f988662d058c4fb7/77Bxn_vibwGvxser0BsYg.png)
53
+ Output:
54
+ [Pic 0] This is female with a confidence of 0.991.
55
+ [Pic 1] This is female with a confidence of 0.994.
56
+ [Pic 2] This is male with a confidence of 0.604.
57
+ [Pic 3] This is male with a confidence of 0.671.
58
+ [Pic 4] This is female with a confidence of 0.886.
59
+ [Pic 5] This is female with a confidence of 0.998.
60
+ [Pic 6] This is female with a confidence of 1.0.
61
+ [Pic 7] This is female with a confidence of 0.997.
62
+
63
+ ###[Usage]
64
+ **We have uploaded the usage with Python in the file folder, and please note you should download them and run locally using either your CPU or with CUDA.
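Putting the pieces above together, a minimal end-to-end sketch (whether the .pth file stores a full pickled model or a state_dict is our assumption, so both cases are handled):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the released checkpoint; the filename comes from the file folder.
checkpoint = torch.load("animeGender-dvgg-0.7.pth", map_location=device)
if isinstance(checkpoint, dict):   # state_dict-style checkpoint
    model = vgg                    # the modified VGG-16 defined earlier
    model.load_state_dict(checkpoint)
else:                              # fully pickled model object
    model = checkpoint
model.to(device).eval()

with torch.no_grad():
    probs = model(x.to(device))[0]  # x from the preprocessing sketch above
conf, idx = probs.max(dim=0)
print(f"This is {['female', 'male'][idx]} with a confidence of {conf.item():.3f}.")
```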
65
+
66
+ Yours,
67
+ Team DOF Studio, July 6th, 2023.