cssprad1 committed
Commit 0b102e0
1 Parent(s): a39dc4b

Update README.md

Files changed (1)
  1. README.md +100 -0
README.md CHANGED
---
license: apache-2.0
language:
- en
---
# SatelliteVision-Base

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** NASA GSFC CISTO Data Science Group
- **Model type:** Pre-trained visual transformer model
- **License:** Apache License 2.0
### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Training repository:** https://github.com/nasa-nccs-hpda/pytorch-caney
- **Data repository:** https://github.com/nasa-nccs-hpda/pytorch-caney
- **Paper [optional]:** https://github.com/nasa-nccs-hpda/satvision
## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]
## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
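Until an official snippet is filled in, a minimal placeholder sketch of loading a released checkpoint with PyTorch — the checkpoint filename, parameter name, and tensor shape below are illustrative assumptions, and the real loading code lives in the pytorch-caney repository:

```python
import torch

# Hypothetical checkpoint name -- the real file would come from this model repository.
CKPT = "satvision_base_demo.pth"

# Simulate a downloaded checkpoint with a dummy state dict so the snippet is
# self-contained; a real checkpoint holds the pretrained transformer weights.
torch.save({"patch_embed.weight": torch.zeros(4, 3, 4, 4)}, CKPT)

# Load the weights onto CPU; with a model instance you would then call
# model.load_state_dict(state_dict).
state_dict = torch.load(CKPT, map_location="cpu")
print(list(state_dict))                          # parameter names in the checkpoint
print(tuple(state_dict["patch_embed.weight"].shape))
```

With the real checkpoint, the same two calls (`torch.load` followed by `load_state_dict` on the model class from pytorch-caney) apply unchanged.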
## Training Details

### Training Data

SatVision-MODIS-Small dataset: <src to HF dataset here>

[More Information Needed]
### Training Procedure

The pre-training strategy used is Masked Image Modeling (MIM), a self-supervised learning procedure in which random patches of the input image are masked and the model is trained to reconstruct the missing content.
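To illustrate the idea (this is not the pytorch-caney implementation — the patch size, mask ratio, and L1 reconstruction loss here are illustrative assumptions), a minimal sketch of MIM-style patch masking:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": 8x8 pixels split into 4x4 patches -> a 2x2 grid of patches.
image = rng.standard_normal((8, 8))
patch = 4
grid = image.reshape(2, patch, 2, patch).transpose(0, 2, 1, 3)  # (2, 2, 4, 4)
patches = grid.reshape(4, patch * patch)                        # 4 flat patches

# Randomly mask ~50% of patches; the encoder only sees the unmasked ones
# and the model is trained to reconstruct the masked ones.
mask = rng.random(4) < 0.5
masked_input = patches.copy()
masked_input[mask] = 0.0  # masked patches replaced by a fill value / mask token

# A real model predicts the masked patch contents; here a dummy prediction.
prediction = np.zeros_like(patches)

# The reconstruction loss is computed ONLY on the masked patches.
loss = np.abs(prediction[mask] - patches[mask]).mean() if mask.any() else 0.0
print(float(loss))
```

Because the objective is per-patch reconstruction rather than comparing augmented views of the same image, MIM needs no negative pairs, which is what distinguishes it from contrastive pre-training.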
#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]