varun-v-rao committed
Commit f6b2d40
1 Parent(s): dd92392

End of training

README.md CHANGED
@@ -1,42 +1,54 @@
 ---
+license: mit
+base_model: openai-community/gpt2-large
 tags:
-- adapter-transformers
-- gpt2
+- generated_from_trainer
 datasets:
-- squad
+- varun-v-rao/squad
+model-index:
+- name: gpt2-large-bn-adapter-7.42M-squad-model3
+  results: []
 ---
 
-# Adapter `varun-v-rao/gpt2-large-bn-adapter-7.42M-squad-model3` for openai-community/gpt2-large
-
-An [adapter](https://adapterhub.ml) for the `openai-community/gpt2-large` model that was trained on the [squad](https://huggingface.co/datasets/squad/) dataset.
-
-This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
-
-## Usage
-
-First, install `adapters`:
-
-```
-pip install -U adapters
-```
-
-Now, the adapter can be loaded and activated like this:
-
-```python
-from adapters import AutoAdapterModel
-
-model = AutoAdapterModel.from_pretrained("openai-community/gpt2-large")
-adapter_name = model.load_adapter("varun-v-rao/gpt2-large-bn-adapter-7.42M-squad-model3", source="hf", set_active=True)
-```
-
-## Architecture & Training
-
-<!-- Add some description here -->
-
-## Evaluation results
-
-<!-- Add some description here -->
-
-## Citation
-
-<!-- Add some description here -->
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
+# gpt2-large-bn-adapter-7.42M-squad-model3
+
+This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the squad dataset.
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 2e-05
+- train_batch_size: 4
+- eval_batch_size: 4
+- seed: 27
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- num_epochs: 3
+
+### Training results
+
+
+
+### Framework versions
+
+- Transformers 4.35.2
+- Pytorch 2.1.1+cu121
+- Datasets 2.15.0
+- Tokenizers 0.15.0
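Reviewer note: the new card lists hyperparameters but the training script itself is not part of this commit. A minimal sketch of a `TrainingArguments` setup consistent with those values, for reference only; the `output_dir` name is hypothetical, and the Adam betas/epsilon in the card are the Transformers defaults:

```python
# Minimal sketch (not the author's actual script): TrainingArguments matching
# the hyperparameters in the new model card (Transformers 4.35.2).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-large-bn-adapter-7.42M-squad-model3",  # hypothetical name
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=27,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the library
    # defaults and match the optimizer line in the card.
)
```

With the adapters library, such a run would typically hand these arguments to an `AdapterTrainer`, so that only the adapter weights (not the frozen GPT2-large base) are updated.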
runs/Jun20_11-57-06_gl1524.arc-ts.umich.edu/events.out.tfevents.1718899074.gl1524.arc-ts.umich.edu.1654219.2 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c86ad116902cb0d86b048a26a174f1bdd1d7e0f6777d21b73a9f1e1cc7c91c3d
-size 18777
+oid sha256:c373a40e239d45621bef326d192505c3ddfc67351495a1c424ae355bd03f94ee
+size 26017
squad/adapter_config.json ADDED
@@ -0,0 +1,41 @@
+{
+  "config": {
+    "adapter_residual_before_ln": false,
+    "cross_adapter": false,
+    "factorized_phm_W": true,
+    "factorized_phm_rule": false,
+    "hypercomplex_nonlinearity": "glorot-uniform",
+    "init_weights": "bert",
+    "inv_adapter": null,
+    "inv_adapter_reduction_factor": null,
+    "is_parallel": false,
+    "learn_phm": true,
+    "leave_out": [],
+    "ln_after": false,
+    "ln_before": false,
+    "mh_adapter": false,
+    "non_linearity": "relu",
+    "original_ln_after": true,
+    "original_ln_before": true,
+    "output_adapter": true,
+    "phm_bias": true,
+    "phm_c_init": "normal",
+    "phm_dim": 4,
+    "phm_init_range": 0.0001,
+    "phm_layer": false,
+    "phm_rank": 1,
+    "reduction_factor": 16,
+    "residual_before_ln": true,
+    "scaling": 1.0,
+    "shared_W_phm": false,
+    "shared_phm_rule": true,
+    "use_gating": false
+  },
+  "config_id": "9076f36a74755ac4",
+  "hidden_size": 1280,
+  "model_class": "GPT2ForQuestionAnswering",
+  "model_name": "openai-community/gpt2-large",
+  "model_type": "gpt2",
+  "name": "squad",
+  "version": "0.1.1"
+}
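Reviewer note: these values appear to match the adapters library's sequential bottleneck defaults (`seq_bn`, Pfeiffer-style): `mh_adapter: false`, `output_adapter: true`, `reduction_factor: 16`, ReLU non-linearity. A sketch of how an equivalent adapter could be added from scratch, reconstructed from this config rather than taken from the author's code:

```python
# Sketch: add an equivalent bottleneck adapter to GPT2-large with the
# adapters library. SeqBnConfig defaults already match the stored config;
# reduction_factor and non_linearity are spelled out for clarity.
from adapters import AutoAdapterModel, SeqBnConfig

model = AutoAdapterModel.from_pretrained("openai-community/gpt2-large")
model.add_adapter("squad", config=SeqBnConfig(reduction_factor=16,
                                              non_linearity="relu"))
model.train_adapter("squad")  # freeze the base model, train only the adapter

# Parameter count implied by this config: hidden_size 1280, reduction
# factor 16 -> bottleneck width 80. Per transformer block:
#   down-projection: 1280*80 + 80   = 102,480
#   up-projection:   80*1280 + 1280 = 103,680
# 206,160 per block * 36 blocks in GPT2-large = 7,421,760,
# i.e. the "7.42M" in the repository name.
```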
squad/head_config.json ADDED
@@ -0,0 +1,14 @@
+{
+  "config": null,
+  "hidden_size": 1280,
+  "label2id": {
+    "LABEL_0": 0,
+    "LABEL_1": 1
+  },
+  "model_class": "GPT2ForQuestionAnswering",
+  "model_name": "openai-community/gpt2-large",
+  "model_type": "gpt2",
+  "name": null,
+  "num_labels": 2,
+  "version": "0.1.1"
+}
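Reviewer note: the two labels here are the span start/end logits of an extractive QA head (`GPT2ForQuestionAnswering`-style) stacked on the adapter. A minimal inference sketch, assuming the head loads together with the adapter via `load_adapter`; the question/context strings are illustrative only:

```python
# Sketch: extractive QA with the adapter from this commit. The head predicts
# start/end token positions of the answer span within the context.
import torch
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-large")
model = AutoAdapterModel.from_pretrained("openai-community/gpt2-large")
model.load_adapter("varun-v-rao/gpt2-large-bn-adapter-7.42M-squad-model3",
                   source="hf", set_active=True)

question = "Who wrote Hamlet?"                                # illustrative
context = "Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the highest-scoring answer span.
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```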
squad/pytorch_adapter.bin ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d7f9854ed0f9e000daa6a9890c85d0cbbd6bb26420a759a13dd222d4c6978e5
+size 29739506
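Reviewer note: as a sanity check, this checkpoint size is consistent with the parameter count worked out above. Assuming float32 storage, 7,421,760 parameters × 4 bytes ≈ 29,687,040 bytes, against a 29,739,506-byte file; the small remainder is plausibly serialization overhead.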
squad/pytorch_model_head.bin ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06a5b204393a92825d730a39e409dcd1feb9db6c9dfdf9e3fa47fccef7093889
+size 11802