---
language: en
license: apache-2.0
---
## Overview
This encoder model is part of an approach for making interactive recommendations, LACE:
**Title**: "Editable User Profiles for Controllable Text Recommendation"
**Authors**: Sheshera Mysore, Mahmood Jasim, Andrew McCallum, Hamed Zamani
**Paper**: https://arxiv.org/abs/2304.04250
**Github**: https://github.com/iesl/editable_user_profiles-lace
## Model Card
### Model description
This model is a BERT-based encoder trained for keyphrase representation. It is trained with an inverse cloze task objective, which minimizes the distance between the keyphrase embedding and the embedding of the keyphrase's surrounding context. Contexts are embedded with an Aspire contextual sentence encoder: [`allenai/aspire-contextualsentence-multim-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-multim-compsci), so this model is best used together with that encoder.
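The training signal described above can be sketched as an in-batch contrastive loss that pulls each keyphrase embedding toward its own context embedding and away from the other contexts in the batch. This is an illustrative assumption about the loss form: the exact objective, temperature, and batching used to train LACE are defined in the paper and repository, not here.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(kp_emb, ctx_emb, temperature=0.07):
    # Normalize so the dot product is cosine similarity.
    kp = F.normalize(kp_emb, dim=-1)
    ctx = F.normalize(ctx_emb, dim=-1)
    # (batch, batch) similarity matrix between keyphrases and contexts.
    logits = kp @ ctx.T / temperature
    # The i-th keyphrase's positive is the i-th context.
    targets = torch.arange(kp.size(0))
    return F.cross_entropy(logits, targets)

# Dummy batch: 4 keyphrase embeddings and 4 context embeddings of dimension 8,
# standing in for the outputs of this encoder and the Aspire context encoder.
torch.manual_seed(0)
kp_emb, ctx_emb = torch.randn(4, 8), torch.randn(4, 8)
loss = contrastive_loss(kp_emb, ctx_emb)
print(loss.item())
```

Minimizing this loss drives each keyphrase embedding toward the embedding of its surrounding context, which is the inverse-cloze intuition in the description above.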
### Training data
The model is trained on roughly 100k keyphrases automatically extracted from computer science papers, paired with their surrounding contexts, for a total of about 1M keyphrase-context triples.
### Intended uses & limitations
This model is trained to represent keyphrases in **computer science**. Note, however, that it was not evaluated as a standalone keyphrase encoder; it was only used as a component of the LACE model. Other models, e.g. [SPECTER2](https://huggingface.co/allenai/specter2), may be better suited to your use case.
## Usage (Sentence-Transformers)
This model is intended for use within the LACE model; for that, see: https://github.com/iesl/editable_user_profiles-lace
However, it is also possible to use it as a standalone keyphrase encoder. The easiest way is with [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
keyphrases = ["machine learning", "keyphrase encoders"]
model = SentenceTransformer('Sheshera/lace-kp-encoder-compsci')
embeddings = model.encode(keyphrases)
print(embeddings)
```
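`model.encode` returns one embedding per keyphrase as rows of a NumPy array, which can be compared with cosine similarity. A minimal sketch, using a small synthetic array as a stand-in for the encoder's output:

```python
import numpy as np

def cosine_sim_matrix(embeddings):
    # Normalize rows to unit length, then take all pairwise dot products.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms
    return unit @ unit.T

# Stand-in for the array returned by model.encode(keyphrases).
embeddings = np.array([[1.0, 0.0], [1.0, 1.0]])
sims = cosine_sim_matrix(embeddings)
print(sims)  # diagonal entries are 1.0 (each embedding vs. itself)
```

The off-diagonal entries give the similarity between keyphrase pairs, which is how such embeddings are typically used for retrieval or clustering.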
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean pooling: take the attention mask into account for correct averaging.
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Keyphrases we want embeddings for.
keyphrases = ["machine learning", "keyphrase encoders"]

# Load the model from the HuggingFace Hub.
tokenizer = AutoTokenizer.from_pretrained('Sheshera/lace-kp-encoder-compsci')
model = AutoModel.from_pretrained('Sheshera/lace-kp-encoder-compsci')

# Tokenize the keyphrases.
encoded_input = tokenizer(keyphrases, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings.
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling; in this case, mean pooling.
keyphrase_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Keyphrase embeddings:")
print(keyphrase_embeddings)
```
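The attention mask matters because padded batches contain filler tokens whose embeddings should not dilute the average. A small self-contained check with dummy tensors (no model download needed) shows the masked mean equals the mean over real tokens only:

```python
import torch

def mean_pooling(model_output, attention_mask):
    # Same pooling function as above: mask out padding, then average.
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# One "sentence" of 3 tokens where the last token is padding with an
# extreme embedding that would skew an unmasked average.
token_embeddings = torch.tensor([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
attention_mask = torch.tensor([[1, 1, 0]])
pooled = mean_pooling((token_embeddings,), attention_mask)
print(pooled)  # tensor([[2., 3.]]) — the padding row is ignored
```

Naively averaging over all three rows would instead give `[34.67, 35.33]`, which is why the mask-weighted sum is divided by the mask count rather than the sequence length.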