---
language:
- en
base_model: google/flan-t5-base
---
# GLM-flan-t5-base

This model processes text-attributed graphs, texts, and interleaved inputs of both. It applies the architectural changes from [Graph Language Models](https://aclanthology.org/2024.acl-long.245/) to the encoder of `google/flan-t5-base`. The parameters are taken unchanged from the base model, so the model should be fine-tuned to obtain the best performance.

Paper abstract:

> *While Language Models (LMs) are the workhorses of NLP, their interplay with structured knowledge graphs (KGs) is still actively researched. Current methods for encoding such graphs typically either (i) linearize them for embedding with LMs – which underutilize structural information, or (ii) use Graph Neural Networks (GNNs) to preserve the graph structure – but GNNs cannot represent text features as well as pretrained LMs. In our work we introduce a novel LM type, the Graph Language Model (GLM), that integrates the strengths of both approaches and mitigates their weaknesses. The GLM parameters are initialized from a pretrained LM to enhance understanding of individual graph concepts and triplets. Simultaneously, we design the GLM’s architecture to incorporate graph biases, thereby promoting effective knowledge distribution within the graph. This enables GLMs to process graphs, texts, and interleaved inputs of both. Empirical evaluations on relation classification tasks show that GLM embeddings surpass both LM- and GNN-based baselines in supervised and zero-shot setting, demonstrating their versatility.*

## Usage

In the paper we evaluate the model as a graph (and text) encoder for (text-guided) relation classification on ConceptNet and Wikidata subgraphs. However, the model can be used for any task that requires encoding text-attributed graphs, texts, or interleaved inputs of both. See [Encoding Graphs and Texts](#encoding-graphs-and-texts) for an example implementation.

Since the model builds on the T5 architecture, it can be combined with the T5 decoder for generation. See [Generating from Graphs and Texts](#generating-from-graphs-and-texts) for an example implementation.

Note that the parameters have not been trained with the modified architecture, so the model should be fine-tuned to obtain the best performance.

### Encoding Graphs and Texts

```python
from transformers import AutoTokenizer, AutoModel

modelcard = 'plenz/GLM-flan-t5-base'

print('Load the model and tokenizer')
model = AutoModel.from_pretrained(modelcard, trust_remote_code=True, revision='main')
tokenizer = AutoTokenizer.from_pretrained(modelcard)

print('get dummy input (2 instances to show batching)')
graph_1 = [
    ('black poodle', 'is a', 'dog'),
    ('dog', 'is a', 'animal'),
    ('cat', 'is a', 'animal')
]
text_1 = 'The dog chased the cat.'

graph_2 = [
    ('dog', 'is a', 'animal'),
    ('dog', 'has', 'tail'),
    ('dog', 'has', 'fur'),
    ('fish', 'is a', 'animal'),
    ('fish', 'has', 'scales')
]
text_2 = None  # only graph for this instance

print('prepare model inputs')
how = 'global'  # can be 'global' or 'local', depending on whether the local or global GLM should be used. See paper for more details.
data_1 = model.data_processor.encode_graph(tokenizer=tokenizer, g=graph_1, text=text_1, how=how)
data_2 = model.data_processor.encode_graph(tokenizer=tokenizer, g=graph_2, text=text_2, how=how)
datas = [data_1, data_2]
model_inputs = model.data_processor.to_batch(data_instances=datas, tokenizer=tokenizer, max_seq_len=None, device='cpu')

print('compute token encodings')
outputs = model(**model_inputs)

# get token embeddings
# embeddings of all graph and text tokens. Nodes in the graph (e.g., dog) appear only once in the sequence.
print('Sequence of tokens (batch_size, max_seq_len, embedding_dim):', outputs.last_hidden_state.shape)

# embedding_aggregation can be 'seq' or 'mean'. 'seq' returns the sequence of embeddings (e.g., all tokens of `black poodle`), 'mean' returns the mean of the embeddings.
black_poodle_embedding = model.data_processor.get_embedding(
    sequence_embedding=outputs.last_hidden_state[0],
    indices=data_1.indices,
    concept='black poodle',
    embedding_aggregation='seq'
)
print('embedding of `black poodle` in the first instance. Shape is (seq_len, embedding_dim):', black_poodle_embedding.shape)
```

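The `'mean'` aggregation yields one pooled vector per concept, which can serve as a concept embedding for downstream tasks such as relation classification. Below is a minimal sketch continuing the snippet above; it assumes that `get_embedding` with `embedding_aggregation='mean'` returns a single vector per concept, and the cosine-similarity comparison is purely illustrative, not part of the model card API.

```python
import torch

# Continuing the snippet above: pooled ('mean') embeddings of two concepts from the first instance.
# Assumption: with embedding_aggregation='mean', get_embedding returns one vector per concept.
dog_embedding = model.data_processor.get_embedding(
    sequence_embedding=outputs.last_hidden_state[0],
    indices=data_1.indices,
    concept='dog',
    embedding_aggregation='mean'
)
cat_embedding = model.data_processor.get_embedding(
    sequence_embedding=outputs.last_hidden_state[0],
    indices=data_1.indices,
    concept='cat',
    embedding_aggregation='mean'
)

# Cosine similarity between the two pooled concept embeddings (illustrative only).
similarity = torch.cosine_similarity(dog_embedding.flatten(), cat_embedding.flatten(), dim=0)
print('cosine similarity between `dog` and `cat`:', similarity.item())
```
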
### Generating from Graphs and Texts

```python
from transformers import AutoTokenizer, AutoModel, T5ForConditionalGeneration

modelcard = 'plenz/GLM-flan-t5-base'
modelcard_generation = 'google/flan-t5-base'

print('load the model and tokenizer')
model_generation = T5ForConditionalGeneration.from_pretrained(modelcard_generation)
del model_generation.encoder  # we only need the decoder for generation. Deleting the encoder is optional, but saves memory.
model = AutoModel.from_pretrained(modelcard, trust_remote_code=True, revision='main')
tokenizer = AutoTokenizer.from_pretrained(modelcard)

print('get dummy input (2 instances to show batching)')
graph_1 = [
    ('black poodle', 'is a', 'dog'),
    ('dog', 'is a', 'animal'),
    ('cat', 'is a', 'animal')
]
text_1 = 'summarize: The black poodle chased the cat.'  # with T5 prefix

graph_2 = [
    ('dog', 'is a', 'animal'),
    ('dog', 'has', 'tail'),
    ('dog', 'has', 'fur'),
    ('fish', 'is a', 'animal'),
    ('fish', 'has', 'scales')
]
text_2 = "Dogs have <extra_id_0> and fish have <extra_id_1>. Both are <extra_id_2>."  # T5 MLM

print('prepare model inputs')
how = 'global'  # can be 'global' or 'local', depending on whether the local or global GLM should be used. See paper for more details.
data_1 = model.data_processor.encode_graph(tokenizer=tokenizer, g=graph_1, text=text_1, how=how)
data_2 = model.data_processor.encode_graph(tokenizer=tokenizer, g=graph_2, text=text_2, how=how)
datas = [data_1, data_2]
model_inputs = model.data_processor.to_batch(data_instances=datas, tokenizer=tokenizer, max_seq_len=None, device='cpu')

print('compute token encodings')
outputs = model(**model_inputs)

print('generate conditional on encoded graph and text')
outputs = model_generation.generate(encoder_outputs=outputs, max_new_tokens=10)
print('generation 1:', tokenizer.decode(outputs[0], skip_special_tokens=True))
print('generation 2:', tokenizer.decode(outputs[1], skip_special_tokens=False))
```

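Since the parameters are not yet adapted to the modified architecture, fine-tuning is recommended. The sketch below continues the generation example and shows one possible seq2seq training step. It assumes that `T5ForConditionalGeneration` accepts the GLM encoder outputs via `encoder_outputs`, as in the `generate` call above; the target texts, optimizer choice, and learning rate are purely illustrative.

```python
import torch

# Illustrative target texts for the two instances above (hypothetical supervision, not from the paper).
target_texts = [
    'The black poodle chased the cat.',                             # target for the summarization-style instance
    '<extra_id_0> tails <extra_id_1> scales <extra_id_2> animals'   # target in T5 span-infilling format
]
labels = tokenizer(target_texts, return_tensors='pt', padding=True).input_ids
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding positions in the loss

# Jointly optimize the GLM encoder and the T5 decoder (hyperparameters are illustrative).
optimizer = torch.optim.AdamW(list(model.parameters()) + list(model_generation.parameters()), lr=1e-4)

model.train()
model_generation.train()

# A single training step; wrap this in a loop over a data loader for real fine-tuning.
encoder_outputs = model(**model_inputs)  # GLM encoder forward pass
loss = model_generation(encoder_outputs=encoder_outputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print('loss:', loss.item())
```
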
## Contact

More information can be found in our paper [Graph Language Models](https://arxiv.org/abs/2401.07105) or our [GitHub repository](https://github.com/Heidelberg-NLP/GraphLanguageModels).

If you have any questions or comments, please feel free to send us an email at [[email protected]](mailto:[email protected]).

If this model is helpful for your work, please consider citing the paper:
```bibtex
@inproceedings{plenz-frank-2024-graph,
    title     = "Graph Language Models",
    author    = "Plenz, Moritz and Frank, Anette",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics",
    year      = "2024",
    address   = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
}
```