---
license: mit
language:
- de
tags:
- title generation
- headline-generation
- teaser generation
- keyword generation
- tweet generation
- news
inference: false
---
# snip-igel-500

<!-- Provide a quick summary of what the model is/does. -->

snip-igel-500

Version 1.0 / 13 April 2023
An adapter for [IGEL](https://huggingface.co/philschmid/instruct-igel-001) for generating German news snippets using human-written instructions.

For a usage example, see this [notebook](https://github.com/snipaid-nlg/igel-lora-finetune-news-snippets/blob/main/getting-started-with-igel-lora-finetuned.ipynb).
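
Below is a minimal loading sketch with `transformers` and `peft`. The adapter repository id, and the assumption that the IGEL base loads as a full causal language model with this LoRA adapter applied on top, are not verified here; the linked notebook shows the tested setup.

```python
# Minimal sketch: load the IGEL base model and apply the snip-igel-500 LoRA adapter.
# Assumptions: the base loads as a full causal LM and this adapter is a PEFT/LoRA
# checkpoint published as "snipaid/snip-igel-500"; see the linked notebook for details.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "philschmid/instruct-igel-001"  # instruction-tuned IGEL base
adapter_id = "snipaid/snip-igel-500"      # this LoRA adapter (assumed repo id)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,  # half precision to fit the ~6B base on a single GPU
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```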
# Model Details

## Model Description

<!-- Provide a longer summary of what this model is. -->

Test the model's generation capabilities here: https://snipaid.tech
SNIP-IGEL is a LoRA adapter, obtained by continued instruction tuning, for generating titles, teasers, summaries, tweets, and keywords from the text of a German-language news article. [IGEL](https://huggingface.co/philschmid/instruct-igel-001) is an instruction-tuned model built on top of the pre-trained German version of BLOOM ([bloom-6b4-clp-german](https://huggingface.co/malteos/bloom-6b4-clp-german)). It was developed by fine-tuning on a machine-translated instruction dataset, aiming to explore the potential of the BLOOM architecture for language-modeling tasks that require instruction-based responses.
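
Continuing from the loading sketch above, a snippet can be generated by wrapping the article text in an instruction prompt. The German instruction wording and the IGEL-style prompt template below are assumptions for illustration; the linked notebook shows the prompt format actually used.

```python
# Sketch: generate a title for a German news article (prompt format and wording assumed).
instruction = "Generiere eine passende Überschrift zum folgenden Nachrichtenartikel."
article = "..."  # full German article text goes here

# Assumed IGEL-style instruction prompt; verify against the linked notebook.
prompt = f"### Anweisung:\n{instruction}\n\n{article}\n\n### Antwort:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
# Decode only the newly generated tokens, i.e. the text after "### Antwort:".
title = tokenizer.decode(output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(title)
```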
- **Developed by:** snipaid
- **Model type:** bloom
- **Language(s) (NLP):** de
- **License:** MIT
- **Finetuned from model:** [IGEL](https://huggingface.co/philschmid/instruct-igel-001)
# Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

SNIP-IGEL is intended for generating snippets for German news articles. Researchers, journalists, content creators, and news agencies can use it to automatically generate snippets for their German-language articles.
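
The snippet type is selected purely through the instruction text. The German wordings below are illustrative assumptions, not the exact instructions from the training data; see the notebook and the dataset linked on this card for the phrasing actually used.

```python
# Hypothetical German instructions for the supported snippet types (wording is illustrative).
snippet_instructions = {
    "title":    "Generiere eine passende Überschrift zum folgenden Nachrichtenartikel.",
    "teaser":   "Schreibe einen kurzen Teaser zum folgenden Nachrichtenartikel.",
    "summary":  "Fasse den folgenden Nachrichtenartikel kurz zusammen.",
    "keywords": "Nenne passende Schlagwörter zum folgenden Nachrichtenartikel.",
    "tweet":    "Schreibe einen Tweet zum folgenden Nachrichtenartikel.",
}
```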
# Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Several deficiencies common to large language models can be observed, including hallucination, toxicity, and stereotyping.
# Training Details

## Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
SNIP-IGEL was fine-tuned on [instruct-snippet-mlsum](https://huggingface.co/datasets/snipaid/instruct-snippet-mlsum). MLSUM is a multilingual summarization dataset whose German subset contains the text, title, and teaser of news articles from the newspaper "Süddeutsche Zeitung". This data was augmented with snippet data generated using a composite prompt that produces a SERP, keywords, and a tweet for each news article in a student-teacher approach. See [snippet-mlsum-500](https://huggingface.co/datasets/snipaid/snippet-mlsum-500) for the dataset without instructions, and our [blog post](https://snipaid-nlg.github.io/2023/04/13/SNIP-IGEL.html) for more information about how the dataset was constructed.
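
A quick way to inspect the fine-tuning data is sketched below; the split name and record layout are assumptions, so check the dataset card for the actual schema.

```python
# Sketch: inspect the instruction dataset with the Hugging Face datasets library.
# The split name ("train") and column names are assumptions; see the dataset card.
from datasets import load_dataset

dataset = load_dataset("snipaid/instruct-snippet-mlsum", split="train")

print(dataset)     # number of rows and column names
print(dataset[0])  # one instruction/output record
```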