Text2Text Generation
Transformers
PyTorch
Spanish
led
text-generation-inference
Inference Endpoints

Longformer Encoder-Decoder Spanish (LEDO) (base-sized model)

LEDO is based on BARTO and was introduced in the paper Sequence-to-Sequence Spanish Pre-trained Language Models.

Model description

LEDO is based on BARTO, a transformer encoder-decoder with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BARTO is pre-trained by (1) corrupting text with an arbitrary noising function and (2) learning a model to reconstruct the original text.

To process 16K tokens, BARTO's position embedding matrix was simply copied 16 times.
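As a rough sketch of that extension (illustrative only; the shapes below assume the base configuration with a hidden size of 768 and 1,024 original positions, and ignore the small position-index offset used by BART-style models):

import torch

# Illustrative stand-in for BARTO's learned position embedding weights,
# shaped (max_positions, hidden_size) for a base-sized model.
bart_position_embeddings = torch.randn(1024, 768)

# Repeat the matrix 16 times along the position axis to cover 16K positions.
led_position_embeddings = bart_position_embeddings.repeat(16, 1)
assert led_position_embeddings.shape == (16 * 1024, 768)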

LEDO is particularly effective when fine-tuned for long-range summarization and question answering.

Intended uses & limitations

You can use the raw model for text infilling. However, the model is mainly meant to be fine-tuned on a supervised dataset.
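As a minimal sketch of one supervised fine-tuning step for summarization (the document/summary pair, length limits, and global-attention choice are illustrative assumptions, not part of the original card):

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('vgaraujov/led-base-16384-spanish')
model = AutoModelForSeq2SeqLM.from_pretrained('vgaraujov/led-base-16384-spanish')

# Hypothetical (document, summary) pair; real fine-tuning uses a full dataset.
document = "Un documento largo en español que se desea resumir..."
summary = "Un resumen corto."

inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)
labels = tokenizer(text_target=summary, return_tensors="pt").input_ids

# LED models commonly use global attention on the first token for summarization.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

loss = model(**inputs, global_attention_mask=global_attention_mask, labels=labels).loss
loss.backward()  # a real training loop would follow with an optimizer step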

This model does not have a slow tokenizer (LEDTokenizer).

How to use

Here is how to use this model in PyTorch:

from transformers import AutoTokenizer, AutoModel

# Load the fast tokenizer and the base LED encoder-decoder (no task-specific head)
tokenizer = AutoTokenizer.from_pretrained('vgaraujov/led-base-16384-spanish')
model = AutoModel.from_pretrained('vgaraujov/led-base-16384-spanish')

# Encode a Spanish sentence and run a forward pass
inputs = tokenizer("Hola amigo, bienvenido a casa.", return_tensors="pt")
outputs = model(**inputs)

# Hidden states of the decoder's last layer
last_hidden_states = outputs.last_hidden_state
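
For the text-infilling use mentioned above, here is a minimal sketch with the generation head (the masked sentence and generation settings are illustrative, and it assumes the tokenizer defines a mask token, as BART-style tokenizers do):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('vgaraujov/led-base-16384-spanish')
model = AutoModelForSeq2SeqLM.from_pretrained('vgaraujov/led-base-16384-spanish')

# Mask a span with the tokenizer's mask token and let the model fill it in.
text = f"Hola amigo, {tokenizer.mask_token} a casa."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))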

Citation (BibTeX)

@misc{araujo2023sequencetosequence,
      title={Sequence-to-Sequence Spanish Pre-trained Language Models}, 
      author={Vladimir Araujo and Maria Mihaela Trusca and Rodrigo Tufiño and Marie-Francine Moens},
      year={2023},
      eprint={2309.11259},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}