amang1802 committed
Commit 10a622c
1 Parent(s): 357de2f

Upload LlamaForCausalLM

This view is limited to 50 files because it contains too many changes. See the raw diff for the complete change set.
Files changed (50)
  1. README.md +200 -0
  2. config.json +40 -0
  3. generation_config.json +10 -0
  4. pytorch_model-00001-of-00162.bin +3 -0
  5. pytorch_model-00002-of-00162.bin +3 -0
  6. pytorch_model-00003-of-00162.bin +3 -0
  7. pytorch_model-00004-of-00162.bin +3 -0
  8. pytorch_model-00005-of-00162.bin +3 -0
  9. pytorch_model-00006-of-00162.bin +3 -0
  10. pytorch_model-00007-of-00162.bin +3 -0
  11. pytorch_model-00008-of-00162.bin +3 -0
  12. pytorch_model-00009-of-00162.bin +3 -0
  13. pytorch_model-00010-of-00162.bin +3 -0
  14. pytorch_model-00011-of-00162.bin +3 -0
  15. pytorch_model-00012-of-00162.bin +3 -0
  16. pytorch_model-00013-of-00162.bin +3 -0
  17. pytorch_model-00014-of-00162.bin +3 -0
  18. pytorch_model-00015-of-00162.bin +3 -0
  19. pytorch_model-00016-of-00162.bin +3 -0
  20. pytorch_model-00017-of-00162.bin +3 -0
  21. pytorch_model-00018-of-00162.bin +3 -0
  22. pytorch_model-00019-of-00162.bin +3 -0
  23. pytorch_model-00020-of-00162.bin +3 -0
  24. pytorch_model-00021-of-00162.bin +3 -0
  25. pytorch_model-00022-of-00162.bin +3 -0
  26. pytorch_model-00023-of-00162.bin +3 -0
  27. pytorch_model-00024-of-00162.bin +3 -0
  28. pytorch_model-00025-of-00162.bin +3 -0
  29. pytorch_model-00026-of-00162.bin +3 -0
  30. pytorch_model-00027-of-00162.bin +3 -0
  31. pytorch_model-00028-of-00162.bin +3 -0
  32. pytorch_model-00029-of-00162.bin +3 -0
  33. pytorch_model-00030-of-00162.bin +3 -0
  34. pytorch_model-00031-of-00162.bin +3 -0
  35. pytorch_model-00032-of-00162.bin +3 -0
  36. pytorch_model-00033-of-00162.bin +3 -0
  37. pytorch_model-00034-of-00162.bin +3 -0
  38. pytorch_model-00035-of-00162.bin +3 -0
  39. pytorch_model-00036-of-00162.bin +3 -0
  40. pytorch_model-00037-of-00162.bin +3 -0
  41. pytorch_model-00038-of-00162.bin +3 -0
  42. pytorch_model-00039-of-00162.bin +3 -0
  43. pytorch_model-00040-of-00162.bin +3 -0
  44. pytorch_model-00041-of-00162.bin +3 -0
  45. pytorch_model-00042-of-00162.bin +3 -0
  46. pytorch_model-00043-of-00162.bin +3 -0
  47. pytorch_model-00044-of-00162.bin +3 -0
  48. pytorch_model-00045-of-00162.bin +3 -0
  49. pytorch_model-00046-of-00162.bin +3 -0
  50. pytorch_model-00047-of-00162.bin +3 -0
README.md ADDED
@@ -0,0 +1,200 @@
+ ---
+ library_name: transformers
+ tags:
+ - llama-factory
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
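The committed card leaves this section as a placeholder. A minimal loading sketch (an editor's note, not part of the uploaded README): the repository id below is a placeholder to replace with this repo's actual Hub id, and `device_map="auto"` assumes `accelerate` is installed and that there is enough GPU/CPU memory for a roughly 140 GB bfloat16 checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "REPLACE/WITH-THIS-REPO-ID"  # placeholder, not a real Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches "torch_dtype": "bfloat16" in config.json
    device_map="auto",           # spreads the 162 weight shards across available devices
)

prompt = "Briefly explain what a reward model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```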
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "_name_or_path": "nvidia/Llama-3.1-Nemotron-70B-Reward-HF",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 128000,
+   "eos_token_id": [
+     128001,
+     128008,
+     128009
+   ],
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 8192,
+   "initializer_range": 0.02,
+   "intermediate_size": 28672,
+   "max_position_embeddings": 131072,
+   "mlp_bias": false,
+   "model_type": "llama",
+   "num_attention_heads": 64,
+   "num_hidden_layers": 80,
+   "num_key_value_heads": 8,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": {
+     "factor": 8.0,
+     "high_freq_factor": 4.0,
+     "low_freq_factor": 1.0,
+     "original_max_position_embeddings": 8192,
+     "rope_type": "llama3"
+   },
+   "rope_theta": 500000.0,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.45.0",
+   "use_cache": true,
+   "vocab_size": 128256
+ }
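These fields describe the standard Llama-3.1-70B geometry: 80 layers, hidden size 8192, grouped-query attention with 64 query heads over 8 key/value heads, a 128,256-token vocabulary, untied embeddings, bfloat16 weights, and llama3-style RoPE scaling out to 131,072 positions. A back-of-the-envelope parameter count from these values (an editor's sketch, not part of the commit; it ignores the RMSNorm weights, which are negligible):

```python
# Rough parameter count implied by config.json (RMSNorm weights omitted as negligible).
hidden, inter, layers = 8192, 28672, 80        # hidden_size, intermediate_size, num_hidden_layers
heads, kv_heads, head_dim = 64, 8, 128         # num_attention_heads, num_key_value_heads, head_dim
vocab = 128256                                 # vocab_size

attn = (hidden * heads * head_dim              # query projection
        + 2 * hidden * kv_heads * head_dim     # key and value projections (grouped-query attention)
        + heads * head_dim * hidden)           # output projection
mlp = 3 * hidden * inter                       # gate, up, and down projections
embeddings = 2 * vocab * hidden                # tie_word_embeddings is false: separate input and output matrices

total = layers * (attn + mlp) + embeddings
print(f"~{total / 1e9:.1f}B parameters")       # about 70.6B, stored in bf16 across the 162 shards
```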
generation_config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "bos_token_id": 128000,
+   "eos_token_id": [
+     128001,
+     128008,
+     128009
+   ],
+   "pad_token_id": 128001,
+   "transformers_version": "4.45.0"
+ }
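`generate()` reads these defaults automatically when this file sits next to the weights; the sketch below only makes that explicit. Loading from the current directory is an assumption for illustration.

```python
from transformers import GenerationConfig

# Parse the committed generation_config.json (assumes it is in the working directory).
gen_cfg = GenerationConfig.from_pretrained(".")
print(gen_cfg.eos_token_id)  # [128001, 128008, 128009]: any of these ends generation
print(gen_cfg.pad_token_id)  # 128001: used to pad batched prompts of unequal length
```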
pytorch_model-00001-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a83aed5505c73fd524ddeb2537f07ae139b038819c995eca0c6c6a5ce70bb2cb
+ size 2101347717
pytorch_model-00002-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c58cc4b8ff9bd72ceaf173188493e9ffb26f70cfd2fbb3b68c4202f50510e6cd
+ size 771754633
pytorch_model-00003-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:659c0f554dd6ae63cf7df448c70175830b73af28f012d6e4da4a05aedd03db92
+ size 939559224
pytorch_model-00004-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8cd5703050123929933ebbd51a61461d56b4a3472c550ba0f80620bd42f5101a
+ size 771754633
pytorch_model-00005-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50c301809612119034c81ce08c306ab075a175ad49f9e14676839c188fa13540
+ size 939559224
pytorch_model-00006-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f371e1c185aa3e9602d196c1b897fe4bed4a403aa61b3fc99eb305bffc085be
+ size 771754633
pytorch_model-00007-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2f1b564ca95fe9f8273cc422e9c8741b795ca78a3ab5905a98108863238ab13
+ size 939559224
pytorch_model-00008-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a0b8f18dd2127879d7a0f7d1d772c9d074350976a6fdb8b76f226ca12176568b
+ size 771754633
pytorch_model-00009-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b4a860d920443f0665ec7489b3b5e8f5d97f4e6303be608058cdd62d805bee29
+ size 939559224
pytorch_model-00010-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87275d026d91c2752966a4adfc2630a1524202382b50a4ba4d7ef7cbfac8bb4a
+ size 771754633
pytorch_model-00011-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6618da200ea6f2a07cfa8a8c0173c35d1dc0b5d13f1fd380c4cd4bc97d8742f
+ size 939559224
pytorch_model-00012-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:474b2ab02e6f68c18c73b1669eb81dd8842e51be14b1aeaa52f2cdd9a796e643
+ size 771754633
pytorch_model-00013-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65d31a8f8ef6dbf52a78cb745509f6766f4db824430692a17c28c9ff1e1ace45
+ size 939559224
pytorch_model-00014-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:22c2413a9b320a644b4b572532035239538931fb60d4a0d9c32651f83b8e2d74
+ size 771754633
pytorch_model-00015-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4cc74fe5547f14f83d51502b201094f3e8d197d249f36d4ffe0d64b6fa304e0
+ size 939559224
pytorch_model-00016-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c18f7425397b228e588d9c3acc7f6acf5bed57b9612ff38da43a7a4dbe13b8d6
+ size 771754633
pytorch_model-00017-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57cc86d0c4efdf63d4d66e43bea57ce533fb0bb88ce926ef5e0aac6c8f0df077
+ size 939559224
pytorch_model-00018-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59175fcc57362fee7da3d0f2baee31463eac7f9c4ffe080bfd22912315ef03b4
+ size 771754633
pytorch_model-00019-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a27ea2d33416669e307e396df10a02561ac70121183e19289e4d480501998ef0
+ size 939559224
pytorch_model-00020-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbd6004b3737d498f051895d2a9d0081aeaf2babbefb47f15205815c9b8a6d6a
+ size 771754633
pytorch_model-00021-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5f91366a1a4485b0ee1b57e03909f27feb7ebf2cc81561a1198d1d1b4fe6e93
+ size 939559224
pytorch_model-00022-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bae034ef401e85adc175d79a9669a43bccf75f0dcba38e4f5dde62b32562b7e6
+ size 771754633
pytorch_model-00023-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7809b2c563082aaceec3c906ecdb1321dc906e9c137e3ed108f01a766b5d8f66
+ size 939559224
pytorch_model-00024-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79d6e115c7057ffbd280b45d25e0fd7274bb9b4784985530032e59090b83e11b
+ size 771754633
pytorch_model-00025-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71ccd33c91f1e7c21aaa14b2aea72f08ec1cd4132c3f5caa2f7bb57843503f03
+ size 939559224
pytorch_model-00026-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08c3e817e3b5adce96530aba55988a2c3f0104db82c6b52b8fd37a2fe676eb3f
+ size 771754633
pytorch_model-00027-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c0091e163631d3bd52cdf15d9fef6f5b66a08864e4c53d1cef571c27aca2089
+ size 939559224
pytorch_model-00028-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:27d468451f2583920c4b744daf5659680f748056ccc252246556c5a55d68b6dc
+ size 771754633
pytorch_model-00029-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:247dbf2302c77a4fd172c132ccbc2b3015f0294003397de5d50f87b53db7a187
+ size 939559224
pytorch_model-00030-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca68e47fca7bb738e1dcdeb5e5197e3406e26d7e79782d5efc6745cc7ee9aa31
+ size 771754633
pytorch_model-00031-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e11ac74512654b281e15f291be0dcabc6f69c1bc8a63a4e1e29782f4c5004ab3
+ size 939559224
pytorch_model-00032-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b59c931d38e597ef8ab32b515c92a3499c9bd2e0c4a26060662d3ddc3bec8888
+ size 771754633
pytorch_model-00033-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4b4c404143a74e10fedc1edf026bda3bb0278d69474fe114ec123fc20513358
+ size 939559224
pytorch_model-00034-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4ac2767971d32570da030568d20ecde754007e98cce3267d400d7e3f99fe96d
+ size 771754633
pytorch_model-00035-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78cc84516ce753c1027f0b28fd45e61c4418b346fc9cc2c6aa93dc3693d9a3ba
+ size 939559224
pytorch_model-00036-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a6f1d12107bdec186ebceee10e495a8b1bfa7b7d47e0a14dd0981d6baf7940a
+ size 771754633
pytorch_model-00037-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d5fe3999c820bb7825cdeac14946f07f2b36580ffc116fbb83d9de88413925e7
+ size 939559224
pytorch_model-00038-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d2bb19353756e745a12ffec454b8d8f379e165ec3c721a1dc350f75f9d7a5d2
+ size 771754633
pytorch_model-00039-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6fa480228991569a19c5d5f850d077dffcc468f37987ba46b39322624f56da54
+ size 939559224
pytorch_model-00040-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9204b88d66b72d1ec16aca176491ab63188427906242b13521c27d51a22311f6
+ size 771754633
pytorch_model-00041-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6d28aea4cd070f1e1fb4aebefee6886c16b60e810f4f12436cb3fac0815eb7b
+ size 939559224
pytorch_model-00042-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4779b09467c9f769fa6305c1fc5043533501c4873d3fb252f39a4a8effa53cd
+ size 771754633
pytorch_model-00043-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5024ff651aab673149e764a91872482804a6f3b31efcc0d120614a85eea0b2de
+ size 939559224
pytorch_model-00044-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:603b8db1e3a1d699881ce7fd578640d2e4f5347a41021e9493fe0d44ab1ca88f
+ size 771754633
pytorch_model-00045-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8e6ad81c9c92078301910219556c5cd87628c55ea283b7fbc56c0707f4cd357
+ size 939559224
pytorch_model-00046-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d63f800761bcdc41f225172dc4bcf905f23f764c78611acd395ad7680eae9bde
+ size 771754633
pytorch_model-00047-of-00162.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7df8bf1c25fc672324c626c3d4aed429311ba2a0353b089bafe4c425e46acce
+ size 939559224
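Each weight shard above is committed as a Git LFS pointer (a `version` line, a `sha256` oid, and a byte `size`) rather than as the binary itself; the actual tensors are fetched from LFS storage on download. A small standard-library sketch for checking a downloaded shard against its pointer (it assumes the shard file is present locally under the name used in this commit):

```python
import hashlib
import os

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so multi-GB shards never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Values copied from the first pointer file in this commit.
shard = "pytorch_model-00001-of-00162.bin"
expected_oid = "a83aed5505c73fd524ddeb2537f07ae139b038819c995eca0c6c6a5ce70bb2cb"
expected_size = 2101347717

assert os.path.getsize(shard) == expected_size
assert sha256_of(shard) == expected_oid
```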