evamaxfield committed
Commit: fec1f5a
Parent: e79e49d

update model card README.md

Files changed (1)
  1. README.md +47 -28
README.md CHANGED
@@ -1,44 +1,63 @@
  ---
- license: mit
  ---
  # soft-search

- [![Build Status](https://github.com/PugetSoundClinic-PIT/soft-search/workflows/CI/badge.svg)](https://github.com/PugetSoundClinic-PIT/soft-search/actions)
- [![Documentation](https://github.com/PugetSoundClinic-PIT/soft-search/workflows/Documentation/badge.svg)](https://PugetSoundClinic-PIT.github.io/soft-search)

- searching for software promises in grant applications

- ---

- ## Installation

- **Stable Release:** `pip install soft-search`<br>
- **Development Head:** `pip install git+https://github.com/PugetSoundClinic-PIT/soft-search.git`

- ## Quickstart

- ### Apply our Pre-trained Transformer

- ```python
- from soft_search import constants, nsf
- from soft_search.label import transformer
- df = nsf.get_nsf_dataset(
-     "2016-01-01",
-     "2017-01-01",
-     dataset_fields=[constants.NSFFields.abstractText],
- )
- predicted = transformer.label(
-     df,
-     apply_column=constants.NSFFields.abstractText,
- )
- ```

- ## Documentation

- For full package documentation please visit [PugetSoundClinic-PIT.github.io/soft-search](https://PugetSoundClinic-PIT.github.io/soft-search).

- ## Development

- See [CONTRIBUTING.md](CONTRIBUTING.md) for information related to developing the code.

- **MIT License**
  ---
+ license: apache-2.0
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: soft-search
+   results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

  # soft-search

+ This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2866
+ - Accuracy: 0.8333
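
The usage sections of the card are still placeholders, so a minimal sketch of applying the fine-tuned checkpoint for text classification is given below. It uses the standard `transformers` pipeline API; the Hub repository id is a guess inferred from the committer and model name, not stated in the card.

```python
# Minimal sketch: classify an abstract with the fine-tuned checkpoint.
# The repo id is a hypothetical guess from the committer/model name;
# substitute the actual Hugging Face Hub path.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="evamaxfield/soft-search",  # hypothetical repo id
)

abstract = "We will develop and release an open-source software package."
print(classifier(abstract))  # e.g. [{'label': ..., 'score': ...}]
```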

+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 5
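
As a reading aid, the sketch below maps these values onto `transformers.TrainingArguments`. Only the listed hyperparameters come from the card; `output_dir` and the surrounding `Trainer`/dataset wiring are assumptions.

```python
# Reconstruction of the reported training configuration. Only the
# hyperparameter values are taken from the card; output_dir is illustrative.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="soft-search",      # hypothetical output path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    adam_beta1=0.9,                # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,             # epsilon=1e-08
)
```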

+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | No log        | 1.0   | 3    | 0.7179          | 0.75     |
+ | No log        | 2.0   | 6    | 0.2900          | 0.8333   |
+ | No log        | 3.0   | 9    | 0.2498          | 0.8333   |
+ | 0.6351        | 4.0   | 12   | 0.2538          | 0.9167   |
+ | 0.6351        | 5.0   | 15   | 0.2866          | 0.8333   |

+ ### Framework versions
+
+ - Transformers 4.21.0
+ - Pytorch 1.12.0+cu102
+ - Datasets 2.4.0
+ - Tokenizers 0.12.1