Hierarchy Transformers (HiTs)

Hierarchy Transformers (HiTs) are language models capable of interpreting and encoding hierarchies explicitly. This collection includes language models trained on hierarchies using hyperbolic losses; the resulting HiT models yield entity embeddings that are hierarchically organised in hyperbolic space.
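At a high level, HiT training combines two hyperbolic losses: a clustering loss that pulls related entities together and pushes unrelated ones apart, and a centripetal loss that pushes parent entities closer to the origin of the hyperbolic space than their children. The following is a minimal sketch of these losses using geoopt; the function names, curvature, and margin values here are illustrative, not the library's actual implementation:

import torch
import geoopt

# Poincaré ball manifold; the curvature value is illustrative
manifold = geoopt.PoincareBall(c=1.0)

def clustering_loss(child, parent, negative, margin=1.0):
    # hyperbolic triplet loss: a child should be closer to its parent
    # than to an unrelated (negative) entity
    positive_dist = manifold.dist(child, parent)
    negative_dist = manifold.dist(child, negative)
    return torch.relu(positive_dist - negative_dist + margin).mean()

def centripetal_loss(child, parent, margin=0.1):
    # parent embeddings should lie closer to the manifold origin than child embeddings
    return torch.relu(manifold.dist0(parent) - manifold.dist0(child) + margin).mean()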

The relevant code in HierarchyTransformers extends Sentence-Transformers.

Get Started

Install hierarchy_transformers (see our repository) through pip or from GitHub.
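For example, to install the latest release from PyPI:

pip install hierarchy_transformers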

Use the following code to get started with HiTs:

from hierarchy_transformers import HierarchyTransformer

# load the model
model = HierarchyTransformer.from_pretrained('Hierarchy-Transformers/HiT-MiniLM-L12-WordNetNoun')

# entity names to be encoded
entity_names = ["computer", "personal computer", "fruit", "berry"]

# get the entity embeddings
entity_embeddings = model.encode(entity_names)
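The entity embeddings can then be used to probe subsumption (is-a) relationships between entities. Continuing from the snippet above, here is a minimal sketch that assumes the model exposes its Poincaré ball manifold as model.manifold; the weight on the centripetal term is illustrative and should be tuned on validation data:

# encode candidate child-parent pairs as tensors
child_embeddings = model.encode(["personal computer", "berry"], convert_to_tensor=True)
parent_embeddings = model.encode(["computer", "fruit"], convert_to_tensor=True)

# hyperbolic distance between child and parent, and their distances from the origin
dists = model.manifold.dist(child_embeddings, parent_embeddings)
child_norms = model.manifold.dist0(child_embeddings)
parent_norms = model.manifold.dist0(parent_embeddings)

# higher score -> the child is more likely subsumed by the parent
centri_weight = 1.0  # illustrative value; tune on validation data
subsumption_scores = -(dists + centri_weight * (parent_norms - child_norms))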

Models

See available HiT models under this organisation.

Datasets

The datasets for training and evaluating HiTs are available on Zenodo.

Citation

Our paper has been accepted at NeurIPS 2024 (to appear).

Preprint on arXiv: https://arxiv.org/abs/2401.11374

Yuan He, Zhangdie Yuan, Jiaoyan Chen, Ian Horrocks. Language Models as Hierarchy Encoders. arXiv preprint arXiv:2401.11374 (2024).

@article{he2024language,
  title={Language Models as Hierarchy Encoders},
  author={He, Yuan and Yuan, Zhangdie and Chen, Jiaoyan and Horrocks, Ian},
  journal={arXiv preprint arXiv:2401.11374},
  year={2024}
}