---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: token-classification
tags:
- NER
- token classification
- information extraction
- question answering
---

**UTC-DeBERTa-small** - universal token classifier

***🚀 Meet the first prompt-tuned universal token classification model 🚀***

This model is based on [DeBERTaV3-small](https://huggingface.co/microsoft/deberta-v3-small) and was trained on multiple token classification tasks, or tasks that can be represented as token classification. Such multi-task fine-tuning enables better generalization: even small models can be used for zero-shot named entity recognition and demonstrate good performance on reading comprehension tasks.

The model can be used for the following tasks:
* Named entity recognition (NER);
* Question answering;
* Relation extraction;
* Coreference resolution;
* Text cleaning;
* Summarization.

#### How to use

We recommend using the model with the transformers `ner` pipeline. The helper below prepends a task prompt to the input text, runs the pipeline, filters out low-scoring spans, and maps the predicted character offsets back into the original text (prompt examples for zero-shot NER and question answering follow):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("knowledgator/UTC-DeBERTa-small")
model = AutoModelForTokenClassification.from_pretrained("knowledgator/UTC-DeBERTa-small")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy='first')

def process(text, prompt, threshold=0.5):
    """
    Processes text by prepending the prompt and adjusting indices.

    Args:
        text (str): The text to process
        prompt (str): The prompt to prepend to the text
        threshold (float): Minimum score a predicted span must reach to be kept

    Returns:
        list: A list of dicts with adjusted spans and scores
    """
    # Concatenate prompt and text into the full input
    input_ = f"{prompt}\n{text}"
    results = nlp(input_)  # Run the pipeline on the full input
    processed_results = []
    prompt_length = len(prompt) + 1  # +1 accounts for the "\n" separator
    for result in results:
        # Skip spans whose score is below the threshold
        if result['score'] < threshold:
            continue
        # Shift indices back so they point into the original text
        start = result['start'] - prompt_length
        end = result['end'] - prompt_length
        # Skip spans that fall inside the prompt itself
        if start < 0:
            continue
        span = text[start:end]
        processed_results.append({'span': span, 'start': start, 'end': end, 'score': result['score']})
    return processed_results
```
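
For zero-shot NER, the entity classes to extract are described in natural language inside the prompt. A minimal sketch using the `process` helper defined above; the exact prompt wording here is illustrative, not a fixed format:

```python
text = """Apple was founded as Apple Computer Company on April 1, 1976, by Steve Wozniak,
Steve Jobs and Ronald Wayne to develop and sell Wozniak's Apple I personal computer."""

# Illustrative prompt: list the entity classes you want the model to mark.
prompt = """Identify the following entity classes in the text:
company, person, date
Text:
"""

for entity in process(text, prompt):
    print(entity['span'], '-', round(entity['score'], 3))
```

Each returned dict contains the extracted span, its character offsets in the original text, and the model's confidence score.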
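
Question answering follows the same pattern: the question itself serves as the prompt, and the model marks the answer span in the text. Again a sketch, with free-form question wording:

```python
# Reusing the `text` from the NER example above.
question = "Who founded Apple?"

for answer in process(text, question):
    print(answer['span'], '-', round(answer['score'], 3))
```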