Ezi committed on
Commit
f2b266b
1 Parent(s): 54ee55d

Update README.md


Hi! 👋
This PR adds a preliminary model card, based on the format we are using as part of our effort to standardise model cards at Hugging Face. It was generated automatically using [our tool](https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool), as we're testing our automatic model card generation abilities and running a study on the effects of model cards on model usage.
Initial evidence suggests that model cards increase usage.
Please take a look when you get a chance, and feel free to merge if you are happy with the changes or to incorporate any additional information 🤗

Files changed (1): README.md (+166, -0)
---
language: en
---

# Model Card for ivila-row-layoutlm-finetuned-s2vl-v2

# Model Details

## Model Description

- **Developed by:** Allen Institute for AI [allenai]
- **Shared by [Optional]:** More information needed
- **Model type:** Token Classification
- **Language(s) (NLP):** en
- **License:** More information needed
- **Parent Model:** [LayoutLM](https://huggingface.co/microsoft/layoutlm-base-uncased)
- **Resources for more information:**
  - [GitHub Repo](https://aka.ms/layoutlm)
  - [LayoutLM Associated Paper](https://arxiv.org/abs/1912.13318)

# Uses

## Direct Use

This model can be used for the task of document image understanding.
The [LayoutLM model card](https://huggingface.co/microsoft/layoutlm-base-uncased) notes:

> LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets.
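
For a sense of what layout-aware token classification looks like in practice, the sketch below runs the model on a handful of words with word-level bounding boxes. The words, boxes, and printed label names are illustrative assumptions rather than values from this repository; LayoutLM expects each box as (x0, y0, x1, y1) normalized to a 0-1000 grid.

```python
# Hedged sketch: classify tokens on one page using word-level bounding boxes.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("allenai/ivila-row-layoutlm-finetuned-s2vl-v2")
model = AutoModelForTokenClassification.from_pretrained("allenai/ivila-row-layoutlm-finetuned-s2vl-v2")

# Words and boxes would normally come from a PDF/OCR parser; these are made up.
words = ["Deep", "Learning", "for", "Documents"]
boxes = [[60, 50, 120, 70], [125, 50, 210, 70], [215, 50, 240, 70], [245, 50, 330, 70]]

# Tokenize word-by-word so each subword token inherits its word's box;
# special tokens ([CLS]/[SEP]) get a dummy all-zero box.
encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
token_boxes = [[0, 0, 0, 0] if wid is None else boxes[wid] for wid in encoding.word_ids(0)]

with torch.no_grad():
    logits = model(input_ids=encoding["input_ids"],
                   attention_mask=encoding["attention_mask"],
                   bbox=torch.tensor([token_boxes])).logits

# Map the argmax class ids back to the label names stored in the config.
print([model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()])
```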

## Downstream Use [Optional]

More information needed

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

See the [LayoutLM model card](https://huggingface.co/microsoft/layoutlm-base-uncased) for more information:

> LayoutLM was pre-trained on the IIT-CDIP Test Collection 1.0 dataset with two settings:
> - LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters (This Model)
> - LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters
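
The 113M figure for the Base setting can be sanity-checked by loading the parent checkpoint and counting weights (a minimal sketch against the public microsoft/layoutlm-base-uncased checkpoint):

```python
# Count parameters of the parent LayoutLM-Base model (~113M expected).
from transformers import AutoModel

layoutlm = AutoModel.from_pretrained("microsoft/layoutlm-base-uncased")
print(f"{sum(p.numel() for p in layoutlm.parameters()) / 1e6:.0f}M parameters")
```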

## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

More information needed

### Factors

More information needed

### Metrics

More information needed
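
If labeled test data becomes available, per-class precision and recall for the token classifier could be tabulated along these lines (a sketch; the gold and predicted label sequences, and the class names, are hypothetical placeholders):

```python
# Compare hypothetical gold vs. predicted token labels class by class.
from sklearn.metrics import classification_report

gold = ["Title", "Title", "Author", "Abstract", "Abstract"]   # placeholder gold labels
pred = ["Title", "Author", "Author", "Abstract", "Abstract"]  # placeholder predictions
print(classification_report(gold, pred, zero_division=0))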

## Results

More information needed

# Model Examination

More information needed

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
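
If the fine-tuning run is ever repeated, emissions can also be measured in-process rather than estimated afterwards. A minimal sketch, assuming the third-party `codecarbon` package (not a dependency of this repository):

```python
# Track energy use and CO2-equivalent emissions around a training loop.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... fine-tuning would run here ...
emissions_kg = tracker.stop()  # kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```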

# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

* Transformers version: 4.6.0

# Citation

**BibTeX:**

```bibtex
@misc{xu2019layoutlm,
      title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding},
      author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou},
      year={2019},
      eprint={1912.13318},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

Allen Institute for AI [allenai] in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
# Load the fine-tuned checkpoint and its matching tokenizer from the Hub.
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("allenai/ivila-row-layoutlm-finetuned-s2vl-v2")
model = AutoModelForTokenClassification.from_pretrained("allenai/ivila-row-layoutlm-finetuned-s2vl-v2")
```

</details>
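
Once loaded, the checkpoint's own config shows which token classes the classification head predicts; this reads nothing beyond the snippet above:

```python
# Inspect the label inventory shipped with the checkpoint.
print(model.config.num_labels)
print(model.config.id2label)
```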