julianrisch committed on
Commit 35d9bf8
1 Parent(s): 8c85f7d

Update README.md

Files changed (1)
  1. README.md +73 -13
README.md CHANGED
@@ -30,6 +30,8 @@ model-index:
  verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTMwNzk0ZDRjNGUyMjQyNzc1NzczZmUwMTU2MTM5MGQ3M2NhODlmOTU4ZDI0YjhlNTVjNDA1MGEwM2M1MzIyZSIsInZlcnNpb24iOjF9.eElGmTOXH_qHTNaPwZ-dUJfVz9VMvCutDCof_6UG_625MwctT_j7iVkWcGwed4tUnunuq1BPm-0iRh1RuuB-AQ
  ---

+ # bert-medium-squad2-distilled for Extractive QA
+
  ## Overview
  **Language model:** deepset/roberta-base-squad2-distilled
  **Language:** English
 
@@ -39,7 +41,7 @@ model-index:
  **Published**: Apr 21st, 2021

  ## Details
- - haystack's distillation feature was used for training. deepset/bert-large-uncased-whole-word-masking-squad2 was used as the teacher model.
+ - Haystack version 1.x distillation feature was used for training. deepset/bert-large-uncased-whole-word-masking-squad2 was used as the teacher model.

  ## Hyperparameters
  ```
 
@@ -52,6 +54,51 @@ embeds_dropout_prob = 0.1
  temperature = 5
  distillation_loss_weight = 1
  ```
+
+ ## Usage
+
+ ### In Haystack
+ Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents.
+ To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):
+ ```python
+ # After running pip install haystack-ai "transformers[torch,sentencepiece]"
+
+ from haystack import Document
+ from haystack.components.readers import ExtractiveReader
+
+ docs = [
+     Document(content="Python is a popular programming language"),
+     Document(content="python ist eine beliebte Programmiersprache"),
+ ]
+
+ reader = ExtractiveReader(model="deepset/bert-medium-squad2-distilled")
+ reader.warm_up()
+
+ question = "What is a popular programming language?"
+ result = reader.run(query=question, documents=docs)
+ # {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]}
+ ```
+ For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).
+
+ ### In Transformers
+ ```python
+ from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
+
+ model_name = "deepset/bert-medium-squad2-distilled"
+
+ # a) Get predictions
+ nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
+ QA_input = {
+     'question': 'Why is model conversion important?',
+     'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
+ }
+ res = nlp(QA_input)
+
+ # b) Load model & tokenizer
+ model = AutoModelForQuestionAnswering.from_pretrained(model_name)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ ```
+
  ## Performance
  ```
  "exact": 68.6431398972458
 
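The value shown above is the model's exact-match score on the SQuAD 2.0 dev set. As a rough sanity check, a score like this can be recomputed with the Hugging Face `evaluate` metric; the sketch below is an assumed setup (dataset split, metric name, and the no-answer heuristic are guesses), not the card's original evaluation script.

```python
# Hedged sketch: approximate the SQuAD 2.0 dev-set score reported above.
# Not the card's original evaluation code; the no-answer handling here
# is a crude assumption.
import evaluate
from datasets import load_dataset
from transformers import pipeline

nlp = pipeline("question-answering", model="deepset/bert-medium-squad2-distilled")
dev = load_dataset("squad_v2", split="validation")
squad_v2_metric = evaluate.load("squad_v2")

predictions, references = [], []
for ex in dev:
    pred = nlp(question=ex["question"], context=ex["context"],
               handle_impossible_answer=True)
    predictions.append({
        "id": ex["id"],
        "prediction_text": pred["answer"],
        # crude stand-in for a calibrated no-answer probability
        "no_answer_probability": 1.0 if pred["answer"] == "" else 0.0,
    })
    references.append({"id": ex["id"], "answers": ex["answers"]})

print(squad_v2_metric.compute(predictions=predictions, references=references))
# Expect results in the neighbourhood of the reported "exact": 68.64
```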
@@ -63,18 +110,31 @@ distillation_loss_weight = 1
  - Julian Risch: `julian.risch [at] deepset.ai`
  - Malte Pietsch: `malte.pietsch [at] deepset.ai`
  - Michel Bartels: `michel.bartels [at] deepset.ai`
+
  ## About us
- ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)
- We bring NLP to the industry via open source!
- Our focus: Industry specific language models & large scale QA systems.
-
- Some of our work:
- - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- - [FARM](https://github.com/deepset-ai/FARM)
- - [Haystack](https://github.com/deepset-ai/haystack/)
-
- Get in touch:
- [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
+
+ <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
+ <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+ <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
+ </div>
+ <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+ <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
+ </div>
+ </div>
+
+ [deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).
+
+ Some of our other work:
+ - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
+ - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1)
+ - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio)
+
+ ## Get in touch and join the Haystack community
+
+ <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
+
+ We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
+
+ [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai)
 
  By the way: [we're hiring!](http://www.deepset.ai/jobs)
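The Details and Hyperparameters sections in the diff above describe the training recipe: the student is distilled from the bert-large teacher with `temperature = 5` and `distillation_loss_weight = 1`. A minimal sketch of such a run, assuming the Haystack 1.x `FARMReader.distil_from` API (the data paths and the student base checkpoint below are illustrative guesses, not the card's exact setup):

```python
# Hedged sketch of the distillation run described in the Details section.
# Requires the Haystack 1.x package: pip install farm-haystack
from haystack.nodes import FARMReader

# Teacher model named on the card; the student base checkpoint is an
# assumption (bert-medium corresponds to google/bert_uncased_L-8_H-512_A-8).
teacher = FARMReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2")
student = FARMReader(model_name_or_path="google/bert_uncased_L-8_H-512_A-8")

# temperature and distillation_loss_weight mirror the Hyperparameters
# section; data_dir and train_filename are illustrative SQuAD 2.0 paths.
student.distil_from(
    teacher,
    data_dir="data/squad2",
    train_filename="train-v2.0.json",
    temperature=5,
    distillation_loss_weight=1,
)
student.save("bert-medium-squad2-distilled")
```

Here `temperature` softens both models' output distributions before the student is trained to match the teacher, and `distillation_loss_weight` balances that distillation loss against the regular loss on the gold SQuAD labels.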