Token Classification
GLiNER
PyTorch
multilingual
bert
Inference Endpoints
Rejebc committed on
Commit
4e97a81
1 Parent(s): 0710830

Upload 7 files

Files changed (7)
  1. README.md +92 -0
  2. config.json +26 -0
  3. gliner_config.json +26 -0
  4. handler.py +42 -0
  5. pytorch_model.bin +3 -0
  6. requirements.txt +1 -0
  7. test.py +32 -0
README.md ADDED
@@ -0,0 +1,92 @@
+ ---
+ license: apache-2.0
+ language:
+ - multilingual
+ library_name: gliner
+ datasets:
+ - urchade/pile-mistral-v0.1
+ pipeline_tag: token-classification
+ ---
+
+ # About
+
+ GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and to Large Language Models (LLMs), which, despite their flexibility, are costly and too large for resource-constrained scenarios.
+
+ ## Links
+
+ * Paper: https://arxiv.org/abs/2311.08526
+ * Repository: https://github.com/urchade/GLiNER
+
+ ## Available models
+
+ | Release | Model Name | # of Parameters | Language | License |
+ | - | - | - | - | - |
+ | v0 | [urchade/gliner_base](https://huggingface.co/urchade/gliner_base)<br>[urchade/gliner_multi](https://huggingface.co/urchade/gliner_multi) | 209M<br>209M | English<br>Multilingual | cc-by-nc-4.0 |
+ | v1 | [urchade/gliner_small-v1](https://huggingface.co/urchade/gliner_small-v1)<br>[urchade/gliner_medium-v1](https://huggingface.co/urchade/gliner_medium-v1)<br>[urchade/gliner_large-v1](https://huggingface.co/urchade/gliner_large-v1) | 166M<br>209M<br>459M | English<br>English<br>English | cc-by-nc-4.0 |
+ | v2 | [urchade/gliner_small-v2](https://huggingface.co/urchade/gliner_small-v2)<br>[urchade/gliner_medium-v2](https://huggingface.co/urchade/gliner_medium-v2)<br>[urchade/gliner_large-v2](https://huggingface.co/urchade/gliner_large-v2) | 166M<br>209M<br>459M | English<br>English<br>English | apache-2.0 |
+ | v2.1 | [urchade/gliner_small-v2.1](https://huggingface.co/urchade/gliner_small-v2.1)<br>[urchade/gliner_medium-v2.1](https://huggingface.co/urchade/gliner_medium-v2.1)<br>[urchade/gliner_large-v2.1](https://huggingface.co/urchade/gliner_large-v2.1)<br>[urchade/gliner_multi-v2.1](https://huggingface.co/urchade/gliner_multi-v2.1) | 166M<br>209M<br>459M<br>209M | English<br>English<br>English<br>Multilingual | apache-2.0 |
+
+ ## Installation
+ To use this model, you must install the GLiNER Python library:
+ ```
+ !pip install gliner
+ ```
+
+ ## Usage
+ Once you've installed the GLiNER library, you can import the GLiNER class, load this model with `GLiNER.from_pretrained`, and predict entities with `predict_entities`.
+
+ ```python
+ from gliner import GLiNER
+
+ model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")
+
+ text = """
+ Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
+ """
+
+ labels = ["person", "award", "date", "competitions", "teams"]
+
+ entities = model.predict_entities(text, labels)
+
+ for entity in entities:
+     print(entity["text"], "=>", entity["label"])
+ ```
+
+ ```
+ Cristiano Ronaldo dos Santos Aveiro => person
+ 5 February 1985 => date
+ Al Nassr => teams
+ Portugal national team => teams
+ Ballon d'Or => award
+ UEFA Men's Player of the Year Awards => award
+ European Golden Shoes => award
+ UEFA Champions Leagues => competitions
+ UEFA European Championship => competitions
+ UEFA Nations League => competitions
+ Champions League => competitions
+ European Championship => competitions
+ ```
+
+ ## Named Entity Recognition benchmark result
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317233cc92fd6fee317e030/Y5f7tK8lonGqeeO6L6bVI.png)
+
+ ## Model Authors
+ The model authors are:
+ * [Urchade Zaratiana](https://huggingface.co/urchade)
+ * Nadi Tomeh
+ * Pierre Holat
+ * Thierry Charnois
+
+ ## Citation
+ ```bibtex
+ @misc{zaratiana2023gliner,
+       title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer},
+       author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois},
+       year={2023},
+       eprint={2311.08526},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ ```
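The README usage above prints only the matched text and label. A small extension (a sketch, not part of the committed README) also filters by confidence and inspects character offsets; it assumes the current `gliner` API, where `predict_entities` accepts a `threshold` keyword and each returned entity dict carries `start`, `end`, and `score` fields:

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")

text = "Kylian Mbappé joined Real Madrid in 2024."
labels = ["person", "teams", "date"]

# threshold filters low-confidence spans; 0.5 is the usual library default
# (assumption: current gliner versions expose this keyword on predict_entities)
entities = model.predict_entities(text, labels, threshold=0.5)

for entity in entities:
    # each entity is a dict with character offsets, the matched text,
    # the predicted label, and a confidence score
    print(entity["start"], entity["end"], entity["text"],
          entity["label"], round(entity["score"], 3))
```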
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+     "model_type": "bert",
+     "size_sup": -1,
+     "max_types": 25,
+     "shuffle_types": true,
+     "random_drop": true,
+     "max_neg_type_ratio": 1,
+     "max_len": 384,
+     "lr_encoder": "1e-5",
+     "lr_others": "5e-5",
+     "num_steps": 30000,
+     "warmup_ratio": 3000,
+     "train_batch_size": 8,
+     "eval_every": 5000,
+     "max_width": 12,
+     "model_name": "microsoft/mdeberta-v3-base",
+     "fine_tune": true,
+     "subtoken_pooling": "first",
+     "hidden_size": 768,
+     "num_attention_heads": 12,
+     "num_hidden_layers": 12,
+     "intermediate_size": 3072,
+     "span_mode": "markerV0",
+     "dropout": 0.4,
+     "name": "correct"
+ }
gliner_config.json ADDED
@@ -0,0 +1,26 @@
+ {
+     "model_type": "bert",
+     "size_sup": -1,
+     "max_types": 25,
+     "shuffle_types": true,
+     "random_drop": true,
+     "max_neg_type_ratio": 1,
+     "max_len": 384,
+     "lr_encoder": "1e-5",
+     "lr_others": "5e-5",
+     "num_steps": 30000,
+     "warmup_ratio": 3000,
+     "train_batch_size": 8,
+     "eval_every": 5000,
+     "max_width": 12,
+     "model_name": "microsoft/mdeberta-v3-base",
+     "fine_tune": true,
+     "subtoken_pooling": "first",
+     "hidden_size": 768,
+     "num_attention_heads": 12,
+     "num_hidden_layers": 12,
+     "intermediate_size": 3072,
+     "span_mode": "markerV0",
+     "dropout": 0.4,
+     "name": "correct"
+ }
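`config.json` and `gliner_config.json` are identical copies of the checkpoint's training-time configuration rather than a standard transformers config: as I read the fields, `model_name` is the backbone encoder (microsoft/mdeberta-v3-base, hence the multilingual tag), `max_len` the maximum sequence length, `max_width` the maximum entity span width in tokens, and `max_types` the number of entity types sampled per training example. A small sanity-check sketch (a hypothetical helper, not part of the commit) that reads the file and surfaces the fields most relevant at inference time:

```python
import json

# Hypothetical sanity check: read the committed gliner_config.json and print
# the fields that matter when running the model (field interpretations are my
# reading of the GLiNER training config, not documented in this repo).
with open("gliner_config.json") as f:
    cfg = json.load(f)

print("backbone encoder:       ", cfg["model_name"])   # mdeberta-v3-base -> multilingual
print("max sequence length:    ", cfg["max_len"])       # longer inputs need chunking
print("max entity span width:  ", cfg["max_width"], "tokens")  # longer mentions cannot form one span
print("entity types per sample:", cfg["max_types"])     # training-time cap, not a hard inference limit
```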
handler.py ADDED
@@ -0,0 +1,42 @@
+ from typing import Dict, List, Any
+ from transformers import pipeline, AutoConfig, AutoModelForTokenClassification, AutoTokenizer, BertTokenizerFast
+ import os
+
+
+ class EndpointHandler():
+     def __init__(self, path=""):
+         dir_model = "urchade/gliner_multi-v2.1"
+
+         config_path = os.path.join(path, "gliner_config.json")
+         if not os.path.exists(config_path):
+             raise FileNotFoundError(f"Custom configuration file not found at {config_path}")
+
+         # Load the custom configuration
+         config = AutoConfig.from_pretrained(config_path)
+
+         # Load the model using the custom configuration
+         self.model = AutoModelForTokenClassification.from_pretrained(dir_model, config=config)
+
+         # Initialize the pipeline with the model and tokenizer
+         # Use a pre-trained tokenizer compatible with your model
+         self.tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
+         # Use a pipeline appropriate for your task. Here we use "token-classification" for NER (Named Entity Recognition).
+         self.pipeline = pipeline("token-classification", model=path, tokenizer=self.tokenizer)
+
+     def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
+         """
+         Args:
+             data (Dict[str, Any]): The input data including:
+                 - "inputs": The text input from which to extract information.
+
+         Returns:
+             List[Dict[str, Any]]: The extracted information from the text.
+         """
+         # Get inputs
+         inputs = data.get("inputs", "")
+
+         # Run the pipeline for text extraction
+         extraction_results = self.pipeline(inputs)
+
+         # Process and return the results as needed
+         return extraction_results
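As committed, `handler.py` loads the GLiNER checkpoint through `AutoModelForTokenClassification` with a `bert-base-uncased` tokenizer and serves it via transformers' `token-classification` pipeline; a GLiNER checkpoint is not a standard token-classification model, so this path is unlikely to load the weights in `pytorch_model.bin` as intended. Below is a sketch of a GLiNER-native handler instead. It assumes the `gliner` package is available in the endpoint image (it is not in the committed `requirements.txt`, which lists only `holidays`) and that callers may pass `labels` and `threshold` alongside `inputs`; all of these are assumptions, not part of this commit.

```python
from typing import Any, Dict, List

from gliner import GLiNER  # assumption: `gliner` added to the endpoint requirements


class EndpointHandler:
    def __init__(self, path: str = ""):
        # Load the GLiNER checkpoint shipped with this repository
        # (gliner_config.json + pytorch_model.bin); fall back to the Hub id
        # if the local path cannot be loaded by the installed gliner version.
        try:
            self.model = GLiNER.from_pretrained(path)
        except Exception:
            self.model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # Expected payload (assumption, not enforced by the endpoint):
        # {"inputs": "<text>", "labels": ["person", ...], "threshold": 0.5}
        text = data.get("inputs", "")
        labels = data.get("labels", ["person", "organization", "location", "date"])
        threshold = data.get("threshold", 0.5)
        # threshold keyword per the current gliner API (assumption)
        return self.model.predict_entities(text, labels, threshold=threshold)
```

Keeping the entity-label list in the request payload mirrors `predict_entities` directly and avoids hard-coding a fixed label schema into the handler.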
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:580d8b061bcb6596bd79be7f29736cea84fdfef0679fc3ae9c14c08d2e974fad
+ size 1155900362
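`pytorch_model.bin` is stored via Git LFS, so the committed file is only a pointer recording the object's sha256 and size; the roughly 1.16 GB of weights are fetched on checkout or download. A quick integrity check against the pointer (a sketch; it assumes `pytorch_model.bin` on disk is the resolved weight file, not the pointer itself):

```python
import hashlib
import os

# Values copied from the committed LFS pointer above
EXPECTED_SHA256 = "580d8b061bcb6596bd79be7f29736cea84fdfef0679fc3ae9c14c08d2e974fad"
EXPECTED_SIZE = 1155900362

path = "pytorch_model.bin"  # assumption: the downloaded weight file

# Hash the file in 1 MiB chunks to avoid loading ~1.16 GB into memory at once
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print("size ok:  ", os.path.getsize(path) == EXPECTED_SIZE)
print("sha256 ok:", h.hexdigest() == EXPECTED_SHA256)
```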
requirements.txt ADDED
@@ -0,0 +1 @@
+ holidays
test.py ADDED
@@ -0,0 +1,32 @@
+ # from handler import EndpointHandler
+ #
+ # # init handler
+ # my_handler = EndpointHandler(path=".")
+ #
+ # # prepare sample payload
+ # non_holiday_payload = {"inputs": "I am quite excited how this will turn out", "date": "2022-08-08"}
+ # holiday_payload = {"inputs": "Today is a though day", "date": "2022-07-04"}
+ #
+ # # test the handler
+ # non_holiday_pred=my_handler(non_holiday_payload)
+ # holiday_payload=my_handler(holiday_payload)
+ #
+ # # show results
+ # print("non_holiday_pred", non_holiday_pred)
+ # print("holiday_payload", holiday_payload)
+ #
+ # # non_holiday_pred [{'label': 'joy', 'score': 0.9985942244529724}]
+ # # holiday_payload [{'label': 'happy', 'score': 1}]
+ from handler import EndpointHandler
+
+ handler = EndpointHandler(path=".")
+
+ # Example input data
+ test_data = {
+     "inputs": "John Doe visited New York last week."
+ }
+
+ # Invoke the handler
+ result = handler(test_data)
+ print(result)
+
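`test.py` exercises the handler locally with `path="."`; once the repository is deployed as an Inference Endpoint, the same `{"inputs": ...}` payload is sent over HTTPS. A minimal client sketch, with a placeholder endpoint URL and access token (both hypothetical values to be substituted):

```python
import requests

# Hypothetical values: substitute your deployed endpoint URL and HF access token
ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HF_TOKEN = "hf_..."

payload = {"inputs": "John Doe visited New York last week."}

response = requests.post(
    ENDPOINT_URL,
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    },
    json=payload,
)
response.raise_for_status()
print(response.json())  # whatever EndpointHandler.__call__ returns for this payload
```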