Commit 29b9fa1 (parent: 4d962fc), committed by asahi417

fix readme
README.md CHANGED
@@ -19,38 +19,44 @@ This is the T-REX dataset proposed in [https://aclanthology.org/L18-1544/](https
  The test split is universal across the different versions; it is manually checked by the author of [relbert/t_rex](https://huggingface.co/datasets/relbert/t_rex),
  and it contains predicates that are not included in the train/validation split.
  The train/validation splits are created for each configuration at a 9:1 ratio.
- The number of triples in test split is 121, and train/validation is summarized in the table below.
-
- | data | train | validation | total |
- |:----------------------------------------------|:--------|-------------:|:--------|
- | filter_unified.min_entity_1_max_predicate_100 | 7,075 | 787 | 7,862 |
- | filter_unified.min_entity_1_max_predicate_50 | 4,131 | 459 | 4,590 |
- | filter_unified.min_entity_1_max_predicate_25 | 2,358 | 262 | 2,620 |
- | filter_unified.min_entity_1_max_predicate_10 | 1,134 | 127 | 1,261 |
- | filter_unified.min_entity_2_max_predicate_100 | 4,873 | 542 | 5,415 |
- | filter_unified.min_entity_2_max_predicate_50 | 3,002 | 334 | 3,336 |
- | filter_unified.min_entity_2_max_predicate_25 | 1,711 | 191 | 1,902 |
- | filter_unified.min_entity_2_max_predicate_10 | 858 | 96 | 954 |
- | filter_unified.min_entity_3_max_predicate_100 | 3,659 | 407 | 4,066 |
- | filter_unified.min_entity_3_max_predicate_50 | 2,336 | 260 | 2,596 |
- | filter_unified.min_entity_3_max_predicate_25 | 1,390 | 155 | 1,545 |
- | filter_unified.min_entity_3_max_predicate_10 | 689 | 77 | 766 |
- | filter_unified.min_entity_4_max_predicate_100 | 2,995 | 333 | 3,328 |
- | filter_unified.min_entity_4_max_predicate_50 | 1,989 | 222 | 2,211 |
- | filter_unified.min_entity_4_max_predicate_25 | 1,221 | 136 | 1,357 |
- | filter_unified.min_entity_4_max_predicate_10 | 603 | 68 | 671 |
-
+ The number of triples in each split is summarized in the tables below.
+
+ - train/validation split
+
+ | data | number of triples (train) | number of triples (validation) | number of triples (all) | number of unique predicates (train) | number of unique predicates (validation) | number of unique predicates (all) | number of unique entities (train) | number of unique entities (validation) | number of unique entities (all) |
+ |:-----------------------------------------------|----:|----:|----:|----:|----:|----:|----:|----:|----:|
+ | filter_unified.min_entity_1_max_predicate_100 | 7,075 | 787 | 9,193 | 212 | 166 | 246 | 8,496 | 1,324 | 10,454 |
+ | filter_unified.min_entity_1_max_predicate_50 | 4,131 | 459 | 5,304 | 212 | 156 | 246 | 5,111 | 790 | 6,212 |
+ | filter_unified.min_entity_1_max_predicate_25 | 2,358 | 262 | 3,034 | 212 | 144 | 246 | 3,079 | 465 | 3,758 |
+ | filter_unified.min_entity_1_max_predicate_10 | 1,134 | 127 | 1,465 | 210 | 94 | 246 | 1,587 | 233 | 1,939 |
+ | filter_unified.min_entity_2_max_predicate_100 | 4,873 | 542 | 6,490 | 195 | 139 | 229 | 5,386 | 887 | 6,704 |
+ | filter_unified.min_entity_2_max_predicate_50 | 3,002 | 334 | 3,930 | 193 | 139 | 229 | 3,457 | 575 | 4,240 |
+ | filter_unified.min_entity_2_max_predicate_25 | 1,711 | 191 | 2,251 | 195 | 113 | 229 | 2,112 | 331 | 2,603 |
+ | filter_unified.min_entity_2_max_predicate_10 | 858 | 96 | 1,146 | 194 | 81 | 229 | 1,149 | 177 | 1,446 |
+ | filter_unified.min_entity_3_max_predicate_100 | 3,659 | 407 | 4,901 | 173 | 116 | 208 | 3,892 | 662 | 4,844 |
+ | filter_unified.min_entity_3_max_predicate_50 | 2,336 | 260 | 3,102 | 174 | 115 | 208 | 2,616 | 447 | 3,240 |
+ | filter_unified.min_entity_3_max_predicate_25 | 1,390 | 155 | 1,851 | 173 | 94 | 208 | 1,664 | 272 | 2,073 |
+ | filter_unified.min_entity_3_max_predicate_10 | 689 | 77 | 937 | 171 | 59 | 208 | 922 | 135 | 1,159 |
+ | filter_unified.min_entity_4_max_predicate_100 | 2,995 | 333 | 4,056 | 158 | 105 | 193 | 3,104 | 563 | 3,917 |
+ | filter_unified.min_entity_4_max_predicate_50 | 1,989 | 222 | 2,645 | 157 | 102 | 193 | 2,225 | 375 | 2,734 |
+ | filter_unified.min_entity_4_max_predicate_25 | 1,221 | 136 | 1,632 | 158 | 76 | 193 | 1,458 | 237 | 1,793 |
+ | filter_unified.min_entity_4_max_predicate_10 | 603 | 68 | 829 | 157 | 52 | 193 | 797 | 126 | 1,018 |
+
+ - test split
+
+ | number of triples (test) | number of unique predicates (test) | number of unique entities (test) |
+ |----:|----:|----:|
+ | 122 | 34 | 188 |

  ### Filtering to Remove Noise
-
  We apply filtering to keep triples with an alpha-numeric subject and object, as well as triples where at least one of the subject or object is a named entity.
  After the filtering, we manually remove predicates that are too vague or noisy, and unify identical predicates with different names (see the annotation [here](https://huggingface.co/datasets/relbert/t_rex/raw/main/predicate_manual_check.csv)).

- | Dataset | Raw | Filter | Unification |
- |----------:|----------:|----------:|--------------:|
- | Triples | 941,663 | 583,333 | 432,795 |
- | Predicate | 931 | 659 | 247 |
- | Entity | 270,801 | 197,163 | 149,172 |
+ | Dataset | `raw` | `filter` | `filter_unified` |
+ |:----------|----------:|----------:|-----------------:|
+ | Triples | 941,663 | 583,333 | 432,795 |
+ | Predicate | 931 | 659 | 247 |
+ | Entity | 270,801 | 197,163 | 149,172 |

  ### Filtering to Purify the Dataset
  We reduce the size of the dataset by applying filtering based on the number of predicates and entities in the triples.
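As a reading aid (not part of the commit), here is a minimal sketch of loading one of the configurations listed in the train/validation table above with the `datasets` library. It assumes the configuration names in that table are exposed as configs of `relbert/t_rex`, and that each record carries `subject`, `predicate`, and `object` fields, as `check_stats.py` below expects.

```python
from datasets import load_dataset

# Hypothetical usage sketch: pick any configuration name from the table above.
config = "filter_unified.min_entity_1_max_predicate_100"
dataset = load_dataset("relbert/t_rex", config)
print(dataset)  # expected splits: train / validation for this config, plus the shared test split

sample = dataset["train"][0]
print(sample["subject"], sample["predicate"], sample["object"])
```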
 
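The noise-filtering step described in the README is not part of this commit, so the following is only a rough, hypothetical sketch of such a filter, reading the two conditions (alpha-numeric subject/object, at least one named entity) as both applying, and assuming a caller-supplied `is_named_entity` predicate; the actual rule used for the released `filter` configuration may differ.

```python
import re

ALNUM = re.compile(r"[A-Za-z0-9]")


def keep_triple(subject: str, obj: str, is_named_entity) -> bool:
    """Hypothetical sketch: keep a triple if subject and object look alpha-numeric
    and at least one of them is a named entity (per the README description)."""
    def alnum_ok(mention: str) -> bool:
        # every whitespace-separated token contains at least one alpha-numeric character
        return all(ALNUM.search(token) for token in mention.split())

    if not (alnum_ok(subject) and alnum_ok(obj)):
        return False
    return is_named_entity(subject) or is_named_entity(obj)


# Toy stand-in for an NER check, for illustration only.
toy_ner = lambda mention: mention[:1].isupper()
print(keep_triple("Albert Einstein", "Ulm", toy_ner))  # True
```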
check_split.py DELETED
@@ -1,27 +0,0 @@
- import json
- from itertools import product
-
- import pandas as pd
-
- parameters_min_e_freq = [1, 2, 3, 4]
- parameters_max_p_freq = [100, 50, 25, 10]
-
- stats = []
- for min_e_freq, max_p_freq in product(parameters_min_e_freq, parameters_max_p_freq):
-     with open(f"data/t_rex.filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}.train.jsonl") as f:
-         train = [json.loads(i) for i in f.read().split('\n') if len(i) > 0]
-     with open(f"data/t_rex.filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}.validation.jsonl") as f:
-         validation = [json.loads(i) for i in f.read().split('\n') if len(i) > 0]
-     stats.append({
-         "data": f"filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}",
-         "train": len(train),
-         "validation": len(validation)
-     })
-
- df = pd.DataFrame(stats)
- df['total'] = df['train'] + df['validation']
- df.loc[:, 'total'] = df['total'].map('{:,d}'.format)
- df.loc[:, 'train'] = df['train'].map('{:,d}'.format)
- df.loc[:, 'validation'] = df['validation'].map('{:,d}'.format)
-
- print(df.to_markdown(index=False))
check_stats.py ADDED
@@ -0,0 +1,60 @@
+ import json
+ from itertools import product
+
+ import pandas as pd
+
+
+ parameters_min_e_freq = [1, 2, 3, 4]
+ parameters_max_p_freq = [100, 50, 25, 10]
+
+ stats = []
+ for min_e_freq, max_p_freq in product(parameters_min_e_freq, parameters_max_p_freq):
+
+     with open(f"data/t_rex.filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}.train.jsonl") as f:
+         train = [json.loads(i) for i in f.read().split('\n') if len(i) > 0]
+     df_train = pd.DataFrame(train)
+
+     with open(f"data/t_rex.filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}.validation.jsonl") as f:
+         validation = [json.loads(i) for i in f.read().split('\n') if len(i) > 0]
+     df_validation = pd.DataFrame(validation)
+
+     with open(f"data/t_rex.filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}.jsonl") as f:
+         full = [json.loads(i) for i in f.read().split('\n') if len(i) > 0]
+     df_full = pd.DataFrame(full)
+
+     stats.append({
+         "data": f"filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}",
+         "number of triples (train)": len(train),
+         "number of triples (validation)": len(validation),
+         "number of triples (all)": len(full),
+         "number of unique predicates (train)": len(df_train['predicate'].unique()),
+         "number of unique predicates (validation)": len(df_validation['predicate'].unique()),
+         "number of unique predicates (all)": len(df_full['predicate'].unique()),
+         "number of unique entities (train)": len(
+             list(set(df_train['object'].unique().tolist() + df_train['subject'].unique().tolist()))),
+         "number of unique entities (validation)": len(
+             list(set(df_validation['object'].unique().tolist() + df_validation['subject'].unique().tolist()))),
+         "number of unique entities (all)": len(
+             list(set(df_full['object'].unique().tolist() + df_full['subject'].unique().tolist())))
+     })
+
+ df = pd.DataFrame(stats)
+ df.index = df.pop("data")
+ for c in df.columns:
+     df.loc[:, c] = df[c].map('{:,d}'.format)
+
+ print(df.to_markdown())
+
+ with open(f"data/t_rex.filter_unified.test.jsonl") as f:
+     test = [json.loads(i) for i in f.read().split('\n') if len(i) > 0]
+ df_test = pd.DataFrame(test)
+ df_test = pd.DataFrame([{
+     "number of triples (test)": len(df_test),
+     "number of unique predicates (test)": len(df_test['predicate'].unique()),
+     "number of unique entities (test)": len(
+         list(set(df_test['object'].unique().tolist() + df_test['subject'].unique().tolist()))
+     )
+ }])
+ for c in df_test.columns:
+     df_test.loc[:, c] = df_test[c].map('{:,d}'.format)
+ print(df_test.to_markdown(index=False))
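Note (editorial): this script is what produces the markdown tables added to the README above. Run `python check_stats.py` from the repository root with the `data/*.jsonl` files available locally (the `data/` directory appears to be tracked with git-lfs, judging by the pointer files below); it prints the train/validation table followed by the test table.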
data/stats.data_size.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:b44d88a51f7716796f4b94b4fe85fc86baca83120570a6222fef6a4bc3b3992d
- size 126
data/stats.predicate_size.csv DELETED
@@ -1,5 +0,0 @@
- min entity,10
- 1,246
- 2,229
- 3,208
- 4,193
filtering_purify.py CHANGED
@@ -1,7 +1,3 @@
- """
- TODO: save the data with different config
- TODO: get stats for the frequency based selection
- """
  import json
  from itertools import product

@@ -81,43 +77,27 @@ def main(min_entity_freq, max_pairs_predicate, min_pairs_predicate: int = 3, ran
      predicate_dist = df_balanced.groupby("predicate")['text'].count().sort_values(ascending=False).to_dict()
      entity, count = np.unique(df_balanced['object'].tolist() + df_balanced['subject'].tolist(), return_counts=True)
      entity_dist = dict(list(zip(entity.tolist(), count.tolist())))
-     return predicate_dist, entity_dist, len(df_balanced), target_data
+     return predicate_dist, entity_dist, target_data


  if __name__ == '__main__':

      p_dist_full = []
      e_dist_full = []
-     data_size_full = []
      config = []
      candidates = list(product(parameters_min_e_freq, parameters_max_p_freq))

      # run filtering with different configs
      for min_e_freq, max_p_freq in candidates:
-         p_dist, e_dist, data_size, new_data = main(
+         p_dist, e_dist, new_data = main(
              min_entity_freq=min_e_freq, max_pairs_predicate=max_p_freq, random_sampling=False)
          p_dist_full.append(p_dist)
          e_dist_full.append(e_dist)
-         data_size_full.append(data_size)
          config.append([min_e_freq, max_p_freq])
          # save data
          with open(f"data/t_rex.filter_unified.min_entity_{min_e_freq}_max_predicate_{max_p_freq}.jsonl", 'w') as f:
              f.write('\n'.join([json.dumps(i) for i in new_data]))

-     # check statistics
-     print("- Data Size")
-     df_size = pd.DataFrame([{"min entity": mef, "max predicate": mpf, "freq": x} for x, (mef, mpf) in zip(data_size_full, candidates)])
-     df_size = df_size.pivot(index="min entity", columns="max predicate", values="freq")
-     df_size.index.name = "min entity / max predicate"
-     df_size.to_csv("data/stats.data_size.csv")
-     print(df_size.to_markdown())
-     df_size_p = pd.DataFrame(
-         [{"min entity": mef, "max predicate": mpf, "freq": len(x)} for x, (mef, mpf) in zip(p_dist_full, candidates)])
-     df_size_p = df_size_p.pivot(index="max predicate", columns="min entity", values="freq")
-     df_size_p = df_size_p.loc[10]
-     df_size_p.to_csv("data/stats.predicate_size.csv")
-     print(df_size_p.to_markdown())
-
      # plot predicate distribution
      df_p = pd.DataFrame([dict(enumerate(sorted(p.values(), reverse=True))) for p in p_dist_full]).T
      df_p.columns = [f"min entity: {mef}, max predicate: {mpf}" for mef, mpf in candidates]
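The frequency-based "purify" filtering itself lives in the body of `main()`, which is not shown in this diff. As a rough, hypothetical sketch only, using the parameter names visible in the hunk header (`min_entity_freq`, `max_pairs_predicate`, `min_pairs_predicate`), the step described in the README could look like the following; the real implementation may count predicate pairs differently and also supports `random_sampling`.

```python
from collections import Counter, defaultdict


def purify(triples, min_entity_freq, max_pairs_predicate, min_pairs_predicate=3):
    """Hypothetical sketch of the purify filtering; NOT the actual body of main()."""
    # 1) keep triples whose subject and object both occur at least min_entity_freq times
    entity_freq = Counter()
    for t in triples:
        entity_freq[t["subject"]] += 1
        entity_freq[t["object"]] += 1
    kept = [t for t in triples
            if entity_freq[t["subject"]] >= min_entity_freq
            and entity_freq[t["object"]] >= min_entity_freq]

    # 2) group by predicate: drop predicates with fewer than min_pairs_predicate triples,
    #    and cap each remaining predicate at max_pairs_predicate triples
    by_predicate = defaultdict(list)
    for t in kept:
        by_predicate[t["predicate"]].append(t)

    purified = []
    for triples_for_predicate in by_predicate.values():
        if len(triples_for_predicate) < min_pairs_predicate:
            continue
        purified.extend(triples_for_predicate[:max_pairs_predicate])
    return purified
```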