{
    "paper_id": "W09-0215",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T06:37:27.834416Z"
    },
    "title": "Context-theoretic Semantics for Natural Language: an Overview",
    "authors": [
        {
            "first": "Daoud",
            "middle": [],
            "last": "Clarke",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Sussex Falmer",
                "location": {
                    "settlement": "Brighton",
                    "country": "UK"
                }
            },
            "email": "[email protected]"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We present the context-theoretic framework, which provides a set of rules for the nature of composition of meaning based on the philosophy of meaning as context. Principally, in the framework the composition of the meaning of words can be represented as multiplication of their representative vectors, where multiplication is distributive with respect to the vector space. We discuss the applicability of the framework to a range of techniques in natural language processing, including subsequence matching, the lexical entailment model of Dagan et al. (2005), vector-based representations of taxonomies, statistical parsing and the representation of uncertainty in logical semantics.",
    "pdf_parse": {
        "paper_id": "W09-0215",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We present the context-theoretic framework, which provides a set of rules for the nature of composition of meaning based on the philosophy of meaning as context. Principally, in the framework the composition of the meaning of words can be represented as multiplication of their representative vectors, where multiplication is distributive with respect to the vector space. We discuss the applicability of the framework to a range of techniques in natural language processing, including subsequence matching, the lexical entailment model of Dagan et al. (2005), vector-based representations of taxonomies, statistical parsing and the representation of uncertainty in logical semantics.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Techniques such as latent semantic analysis (Deerwester et al., 1990 ) and its variants have been very successful in representing the meanings of words as vectors, yet there is currently no theory of natural language semantics that explains how we should compose these representations: what should the representation of a phrase be, given the representation of the words in the phrase? In this paper we present such a theory, which is based on the philosophy of meaning as context, as epitomised by the famous sayings of Wittgenstein (1953) , \"Meaning just is use\" and Firth (1957) , \"You shall know a word by the company it keeps\". For the sake of brevity we shall present only a summary of our research, which is described in full in (Clarke, 2007) , and we give a simplified version of the framework, which nevertheless suffices for the examples which follow.",
                "cite_spans": [
                    {
                        "start": 44,
                        "end": 68,
                        "text": "(Deerwester et al., 1990",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 521,
                        "end": 540,
                        "text": "Wittgenstein (1953)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 575,
                        "end": 581,
                        "text": "(1957)",
                        "ref_id": null
                    },
                    {
                        "start": 736,
                        "end": 750,
                        "text": "(Clarke, 2007)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We believe that the development of theories that can take vector representations of meaning beyond the word level, to the phrasal and sentence levels and beyond are essential for vector based semantics to truly compete with logical semantics, both in their academic standing and in application to real problems in natural language processing. Moreover the time is ripe for such a theory: never has there been such an abundance of immediately available textual data (in the form of the worldwide web) or cheap computing power to enable vector-based representations of meaning to be obtained. The need to organise and understand the new abundance of data makes these techniques all the more attractive since meanings are determined automatically and are thus more robust in comparison to hand-built representations of meaning. A guiding theory of vector based semantics would undoubtedly be invaluable in the application of these representations to problems in natural language processing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The context-theoretic framework does not provide a formula for how to compose meaning; rather it provides mathematical guidelines for theories of meaning. It describes the nature of the vector space in which meanings live, gives some restrictions on how meanings compose, and provides us with a measure of the degree of entailment between strings for any implementation of the framework.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The remainder of the paper is structured as follows: in Section 2 we present the framework; in Section 3 we present applications of the framework:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 We describe subsequence matching (Section 3.1) and the lexical entailment model of (Dagan et al., 2005) (Section 3.2), both of which have been applied to the task of recognising textual entailment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 We show how a vector based representation of a taxonomy incorporating probabilistic information about word meanings can be con-",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Figure 1: Vector representations of two terms in a space L¹(S) where S = {d1, d2, d3, d4, d5, d6} and their vector lattice meet (the darker shaded area). [Panels: 'orange', 'fruit', 'orange ∧ fruit'.]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 We show how syntax can be represented within the framework in Section 3.4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "\u2022 We summarise our approach to representing uncertainty in logical semantics in Section 3.5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The context-theoretic framework is based on the idea that the vector representation of the meaning of a word is derived from the contexts in which it occurs. However it extends this idea to strings of any length: we assume there is some set S containing all the possible contexts associated with any string. A context theory is an implementation of the context-theoretic framework; a key requirement for a context theory is a mapping from strings to vectors formed from the set of contexts.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "In vector based techniques, the set of contexts may be the set of possible dependency relations between words, or the set of documents in which strings may occur; in context-theoretic semantics however, the set of \"contexts\" can be any set. We continue to refer to it as a set of contexts since the intuition and philosophy which forms the basis for the framework derives from this idea; in practice the set may even consist of logical sentences describing the meanings of strings in model-theoretic terms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "An important aspect of vector-based techniques is measuring the frequency of occurrence of strings in each context. We model this in a general way as follows: let A be a set consisting of the words of the language under consideration. The first requirement of a context theory is a mapping x \u2192x from a string x \u2208 A * to a vectorx \u2208 L 1 (S) + , where L 1 (S) means the set of all functions from S to the real numbers R which are finite under the L 1 norm,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "u 1 = s\u2208S |u(s)|",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "and L 1 (S) + restricts this to functions to the nonnegative real numbers, R + ; these functions are called the positive elements of the vector space L 1 (S). The requirement that the L 1 norm is finite, and that the map is only to positive elements reflects the fact that the vectors are intended to represent an estimate of relative frequency distributions of the strings over the contexts, since a frequency distribution will always satisfy these requirements. Note also that the l 1 norm of the context vector of a string is simply the sum of all its components and is thus proportional to its probability.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "The set of functions L 1 (S) is a vector space under the point-wise operations:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "(\u03b1u)(s) = \u03b1u(s) (u + v)(s) = u(s) + v(s) for u, v \u2208 L 1 (S) and \u03b1 \u2208 R, but it is also a lattice under the operations (u \u2227 v)(s) = min(u(s), v(s)) (u \u2228 v)(s) = max(u(s), v(s)).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "In fact it is a vector lattice or Riesz space (Aliprantis and Burkinshaw, 1985) since it satisfies the following relationships",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "if u \u2264 v then \u03b1u \u2264 \u03b1v if u \u2264 v then u + w \u2264 v + w,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "where α ∈ R⁺ and ≤ is the partial ordering associated with the lattice operations, defined by u ≤ v if u ∧ v = u.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "Together with the l 1 norm, the vector lattice defines an Abstract Lebesgue space (Abramovich and Aliprantis, 2002) a vector space incorporating all the properties of a measure space, and thus can also be thought of as defining a probability space, where \u2228 and \u2227 correspond to the union and intersection of events in the \u03c3 algebra, and the norm corresponds to the (un-normalised) probability.",
                "cite_spans": [
                    {
                        "start": 82,
                        "end": 115,
                        "text": "(Abramovich and Aliprantis, 2002)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context-theoretic Framework",
                "sec_num": "2"
            },
            {
                "text": "The vector lattice nature of the space under consideration is important in the context-theoretic framework since it is used to define a degree of entailment between strings. Our notion of entailment is based on the concept of distributional generality (Weeds et al., 2004) , a generalisation of the distributional hypothesis of Harris (1985) , in which it is assumed that terms with a more general meaning will occur in a wider array of contexts, an idea later developed by Geffet and Dagan (2005) . Weeds et al. (2004) also found that frequency played a large role in determining the direction of entailment, with the more general term often occurring more frequently. The partial ordering of the vector lattice encapsulates these properties sincex \u2264\u0177 if and only if y occurs more frequently in all the contexts in which x occurs.",
                "cite_spans": [
                    {
                        "start": 252,
                        "end": 272,
                        "text": "(Weeds et al., 2004)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 328,
                        "end": 341,
                        "text": "Harris (1985)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 474,
                        "end": 497,
                        "text": "Geffet and Dagan (2005)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 500,
                        "end": 519,
                        "text": "Weeds et al. (2004)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distributional Generality",
                "sec_num": "2.1"
            },
            {
                "text": "This partial ordering is a strict relationship, however, that is unlikely to exist between any two given vectors. Because of this, we define a degree of entailment",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distributional Generality",
                "sec_num": "2.1"
            },
            {
                "text": "Ent(u, v) = u \u2227 v 1 u 1 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distributional Generality",
                "sec_num": "2.1"
            },
            {
                "text": "This value has the properties of a conditional probability; in the case of u =x and v =\u0177 it is a measure of the degree to which the contexts string x occurs in are shared by the contexts string y occurs in.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distributional Generality",
                "sec_num": "2.1"
            },
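            {
                "text": "As an illustration, the following is a minimal Python sketch (ours, not from the original paper) of the lattice operations and the degree of entailment on sparse context vectors represented as dicts; the toy 'orange' and 'fruit' vectors are invented for the example (cf. Figure 1).\n\ndef meet(u, v):\n    # pointwise minimum: (u ∧ v)(s) = min(u(s), v(s))\n    return {s: min(x, v[s]) for s, x in u.items() if s in v}\n\ndef norm1(u):\n    # L1 norm of a positive vector: the sum of its components\n    return sum(u.values())\n\ndef ent(u, v):\n    # degree of entailment Ent(u, v) = ||u ∧ v||_1 / ||u||_1\n    return norm1(meet(u, v)) / norm1(u)\n\norange = {'d1': 2.0, 'd2': 1.0, 'd4': 1.0}  # toy context vectors over\nfruit = {'d1': 1.0, 'd2': 3.0, 'd5': 2.0}   # contexts d1..d6\nprint(ent(orange, fruit))  # 0.5",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distributional Generality",
                "sec_num": "2.1"
            },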
            {
                "text": "The map from strings to vectors already tells us everything we need to know about the composition of words: given two words x and y, we have their individual context vectorsx and\u0177, and the meaning of the string xy is represented by the vector xy. The question we address is what relationship should be imposed between the representation of the meanings of individual wordsx and\u0177 and the meaning of their composition xy. As it stands, we have little guidance on what maps from strings to context vectors are appropriate. The first restriction we propose is that vector representations of meanings should be composable in their own right, without consideration of what words they originated from. In fact we place a strong requirement on the nature of multiplication on elements: we require that the multiplication \u2022 on the vector space defines a lattice-ordered algebra. This means that multiplication is associative, distributive with respect to addition, and satisfies u \u2022 v \u2265 0 if u \u2265 0 and v \u2265 0, i.e. the product of positive elements is also positive.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Multiplication",
                "sec_num": "2.2"
            },
            {
                "text": "We argue that composition of context vectors needs to be compatible with concatenation of words, i.e. x̂ • ŷ must equal the context vector of the concatenated string xy,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Multiplication",
                "sec_num": "2.2"
            },
            {
                "text": "i.e. the map from strings to context vectors defines a semigroup homomorphism. Then the requirement that multiplication is associative can be seen to be a natural one since the homomorphism enforces this requirement for context vectors. Similarly since all context vectors are positive their product in the algebra must also be positive, thus it is natural to extend this to all elements of the algebra. The requirement for distributivity is justified by our own model of meaning as context in text corpora, described in full elsewhere.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Multiplication",
                "sec_num": "2.2"
            },
            {
                "text": "The above requirements give us all we need to define a context theory. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Theory",
                "sec_num": "2.3"
            },
            {
                "text": "In this section we describe applications of the context-theoretic framework to applications in computational linguistics and natural language processing. We shall commonly use a construction in which there is a binary operation \u2022 on S that makes it a semigroup. In this case L 1 (S) is a lattice-ordered algebra with convolution as multiplication:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Theories for Natural Language",
                "sec_num": "3"
            },
            {
                "text": "(u \u2022 v)(r) = s\u2022t=r u(s)v(t)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Theories for Natural Language",
                "sec_num": "3"
            },
            {
                "text": "for r, s, t \u2208 S and u, v \u2208 L 1 (S). We denote the unit basis element associated with an element x \u2208 S by e x , that is e x (y) = 1 if and only if y = x, otherwise e x (y) = 0.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Theories for Natural Language",
                "sec_num": "3"
            },
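            {
                "text": "A minimal Python sketch (ours, not from the original paper) of convolution multiplication on L¹(S), with vectors represented sparsely as dicts from semigroup elements to non-negative reals; 'op' stands for the semigroup operation and is an assumption of the example.\n\nfrom collections import defaultdict\n\ndef convolve(u, v, op):\n    # (u • v)(r) = Σ_{s•t=r} u(s)v(t)\n    w = defaultdict(float)\n    for s, us in u.items():\n        for t, vt in v.items():\n            w[op(s, t)] += us * vt\n    return dict(w)\n\n# Example: S = A* with concatenation as the semigroup operation.\nu = {'a': 0.5, 'b': 0.5}\nv = {'c': 1.0}\nprint(convolve(u, v, lambda s, t: s + t))  # {'ac': 0.5, 'bc': 0.5}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Theories for Natural Language",
                "sec_num": "3"
            },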
            {
                "text": "A string x \u2208 A * is called a \"subsequence\" of y \u2208 A * if each element of x occurs in y in the same order, but with the possibility of other elements occurring in between, so for example abba is a subsequence of acabcba in {a, b, c} * . We denote the set of subsequences of x (including the empty string) by Sub(x). Subsequence matching compares the subsequences of two strings: the more subsequences they have in common the more similar they are assumed to be. This idea has been used successfully in text classification (Lodhi et al., 2002) and recognising textual entailment (Clarke, 2006) .",
                "cite_spans": [
                    {
                        "start": 521,
                        "end": 541,
                        "text": "(Lodhi et al., 2002)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 577,
                        "end": 591,
                        "text": "(Clarke, 2006)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subsequence Matching",
                "sec_num": "3.1"
            },
            {
                "text": "We can describe such models using a context theory ⟨A, A*, ˆ, •⟩, where • is convolution in L¹(A*) and x̂ = (1/2^|x|) Σ_{y∈Sub(x)} e_y,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subsequence Matching",
                "sec_num": "3.1"
            },
            {
                "text": "i.e. the context vector of a string is a weighted sum of its subsequences. Under this context theoryx \u2264 y, i.e. x completely entails y if x is a subsequence of y.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subsequence Matching",
                "sec_num": "3.1"
            },
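            {
                "text": "A minimal Python sketch (ours, not from the original paper) of this context theory; we assume subsequences are counted with multiplicity, which is one reading of the definition above.\n\nfrom itertools import combinations\nfrom collections import Counter\n\ndef hat(x):\n    # x̂ = (1/2^|x|) Σ_{y∈Sub(x)} e_y, with Sub(x) counted with multiplicity\n    subs = Counter()\n    for r in range(len(x) + 1):\n        for idx in combinations(range(len(x)), r):\n            subs[''.join(x[i] for i in idx)] += 1\n    return {s: c / 2 ** len(x) for s, c in subs.items()}\n\ndef ent(u, v):\n    # Ent(u, v) = ||u ∧ v||_1 / ||u||_1\n    meet = sum(min(u.get(s, 0.0), v.get(s, 0.0)) for s in u)\n    return meet / sum(u.values())\n\nprint(ent(hat('ab'), hat('acb')))  # 0.5: 'ab' is a subsequence of 'acb'\nprint(ent(hat('ba'), hat('acb')))  # 0.375: 'ba' is not",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subsequence Matching",
                "sec_num": "3.1"
            },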
            {
                "text": "Many variations on this context theory are possible, for example using more complex mappings to L 1 (A * ). The context theory can also be adapted to incorporate a measure of lexical overlap between strings, an approach that, although simple, performs comparably to more complex techniques in tasks such as recognising textual entailment ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subsequence Matching",
                "sec_num": "3.1"
            },
            {
                "text": "Glickman and Dagan 2005define their own model of entailment and apply it to the task of recognising textual entailment. They estimate entailment between words based on occurrences in documents: they estimate a lexical entailment probability LEP(x, y) between two terms x and y to be",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Entailment Model",
                "sec_num": "3.2"
            },
            {
                "text": "LEP(x, y) \u2243 n x,y n y",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Entailment Model",
                "sec_num": "3.2"
            },
            {
                "text": "where n y and n x,y denote the number of documents that the word y occurs in and the words x and y both occur in respectively. We can describe this using a context theory A, D,\u02c6, \u2022 , where D is the set of documents, and",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Entailment Model",
                "sec_num": "3.2"
            },
            {
                "text": "x(d) = 1 if x occurs in document d 0 otherwise. .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Entailment Model",
                "sec_num": "3.2"
            },
            {
                "text": "In this case the estimate of LEP(x, y) coincides with our own degree of entailment Ent(x, y).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Entailment Model",
                "sec_num": "3.2"
            },
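            {
                "text": "A minimal Python sketch (ours, not from the original paper) of this document-occurrence context theory; the toy documents are invented for the example.\n\ndef hat(word, documents):\n    # x̂(d) = 1 if word occurs in document d, 0 otherwise (sparse dict)\n    return {i: 1.0 for i, doc in enumerate(documents) if word in doc}\n\ndef ent(u, v):\n    # Ent(u, v) = ||u ∧ v||_1 / ||u||_1 = n_{x,y} / n_x for 0/1 vectors\n    meet = sum(min(u.get(d, 0.0), v.get(d, 0.0)) for d in u)\n    return meet / sum(u.values())\n\ndocs = [{'cat', 'animal'}, {'cat'}, {'animal'}, {'dog', 'animal'}]\nprint(ent(hat('cat', docs), hat('animal', docs)))  # 0.5",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Entailment Model",
                "sec_num": "3.2"
            },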
            {
                "text": "There are many ways in which the multiplication \u2022 can be defined on L 1 (D). and N = 10 7 using a cutoff for the degree of entailment of 0.5 at which entailment was regarded as holding. CWS is the confidence weighted score -see for the definition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Entailment Model",
                "sec_num": "3.2"
            },
            {
                "text": "Glickman and Dagan (2005) do not use this measure, possibly because the problem of data sparseness makes it useless for long strings. However the measure they use can be viewed as an approximation to this context theory.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical Entailment Model",
                "sec_num": "3.2"
            },
            {
                "text": "We have also used this idea to determine entailment, using latent Dirichlet allocation to get around the problem of data sparseness. A model was built using a subset of around 380,000 documents from the Gigaword corpus, and the model was evaluated on the dataset from the first Recognising Textual Entailment Challenge; the results are shown in Table 1 . In order to use the model, a document length had to be chosen; it was found that very long documents yielded better performance at this task.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 345,
                        "end": 352,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Lexical Entailment Model",
                "sec_num": "3.2"
            },
            {
                "text": "In this section we describe how the relationships described by a taxonomy, the collection of isa relationships described by ontologies such as WordNet (Fellbaum, 1989) , can be embedded in the vector lattice structure that is crucial to the context-theoretic framework. This opens up the way to the possibility of new techniques that combine the vector-based representations of word meanings with the ontological ones, for example:",
                "cite_spans": [
                    {
                        "start": 151,
                        "end": 167,
                        "text": "(Fellbaum, 1989)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Taxonomies",
                "sec_num": "3.3"
            },
            {
                "text": "\u2022 Semantic smoothing could be applied to vector based representations of an ontology, for example using distributional similarity measures to move words that are distributionally similar closer to each other in the vector space. This type of technique may allow the benefits of vector based techniques and ontologies to be combined.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Taxonomies",
                "sec_num": "3.3"
            },
            {
                "text": "\u2022 Automatic classification: representing the taxonomy in a vector space may make it easier to look for relationships between the meanings in the taxonomy and meanings derived from vector based techniques such as latent semantic analysis, potentially aiding in classifying word meanings in a taxonomy.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Taxonomies",
                "sec_num": "3.3"
            },
            {
                "text": "\u2022 The new vector representation could lead to new measures of semantic distance, for example, the L p norms can all be used to measure distance between the vector representations of meanings in a taxonomy. Moreover, the vector-based representation allows ambiguity to be represented by adding the weighted representations of individual senses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Taxonomies",
                "sec_num": "3.3"
            },
            {
                "text": "We assume that the is-a relation is a partial ordering; this is true for many ontologies. We wish to incorporate the partial ordering of the taxonomy into the partial ordering of the vector lattice. We will make use of the following result relating to partial orders: Definition 2 (Ideals). A lower set in a partially ordered set S is a set T such that for all x, y \u2208 S, if x \u2208 T and y \u2264 x then y \u2208 T .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Taxonomies",
                "sec_num": "3.3"
            },
            {
                "text": "The principal ideal generated by an element x in a partially ordered set S is defined to be the lower set \uf8e6 (x) = {y \u2208 S : y \u2264 x}. We are also concerned with the probability of concepts. This is an idea that has come about through the introduction of \"distance measures\" on taxonomies (Resnik, 1995) . Since terms can be ascribed probabilities based on their frequencies of occurrence in corpora, the concepts they refer to can similarly be assigned probabilities. The probability of a concept is the probability of encountering an instance of that concept in the corpus, that is, the probability that a term selected at random from the corpus has a meaning that is subsumed by that particular concept. This ensures that more general concepts are given higher probabilities, for example if there is a most general concept (a top-most node in the taxonomy, which may correspond for example to \"entity\") its probability will be one, since every term can be considered an instance of that concept.",
                "cite_spans": [
                    {
                        "start": 285,
                        "end": 299,
                        "text": "(Resnik, 1995)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Taxonomies",
                "sec_num": "3.3"
            },
            {
                "text": "We give a general definition based on this idea which does not require probabilities to be assigned based on corpus counts: ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Proposition 3 (Ideal Completion",
                "sec_num": null
            },
            {
                "text": "x\u2208S p(s) = 1. In this casep refers to the probability of a concept.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The taxonomy is called probabilistic if",
                "sec_num": null
            },
            {
                "text": "Thus in a probabilistic taxonomy, the function p corresponds to the probability that a term is observed whose meaning corresponds (in that context) to that concept. The functionp denotes the probability that a term is observed whose meaning in that context is subsumed by the concept.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The taxonomy is called probabilistic if",
                "sec_num": null
            },
            {
                "text": "Note that if S has a top element I then in the probabilistic case, clearlyp(I) = 1. In studies of distance measures on ontologies, the concepts in S often correspond to senses of terms, in this case the function p represents the (normalised) probability that a given term will occur with the sense indicated by the concept. The top-most concept often exists, and may be something with the meaning \"entity\"-intended to include the meaning of all concepts below it.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The taxonomy is called probabilistic if",
                "sec_num": null
            },
            {
                "text": "The most simple completion we consider is into the vector lattice L 1 (S), with basis elements {e x : x \u2208 S}.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The taxonomy is called probabilistic if",
                "sec_num": null
            },
            {
                "text": "Proposition 5 (Ideal Vector Completion). Let S be a probabilistic taxonomy with probability distribution function p that is non-zero everywhere on S. The function \u03c8 from S to L 1 (S) defined by",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The taxonomy is called probabilistic if",
                "sec_num": null
            },
            {
                "text": "\u03c8(x) = y\u2208\u2193(x) p(y)e y",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The taxonomy is called probabilistic if",
                "sec_num": null
            },
            {
                "text": "is a completion of the partial ordering of S under the vector lattice order of L 1 (S), satisfying \u03c8(x) 1 =p(x). Proof. The function \u03c8 is clearly order-preserving: if x \u2264 y in S then since \uf8e6 (x) \u2286 \uf8e6 (y) , necessarily \u03c8(x) \u2264 \u03c8(y). Conversely, the only way that \u03c8(x) \u2264 \u03c8(y) can be true is if \uf8e6 (x) \u2286 \uf8e6 (y) since p is non-zero everywhere. If this is the case, then x \u2264 y by the nature of the ideal completion. Thus \u03c8 is an order-embedding, and since L 1 (S) is a complete lattice, it is also a completion. Finally, note that \u03c8(",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The taxonomy is called probabilistic if",
                "sec_num": null
            },
            {
                "text": "x) 1 = y\u2208\u2193(x) p(y) =p(x).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The taxonomy is called probabilistic if",
                "sec_num": null
            },
            {
                "text": "This completion allows us to represent concepts as elements within a vector lattice so that not only the partial ordering of the taxonomy is preserved, but the probability of concepts is also preserved as the size of the vector under the L 1 norm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The taxonomy is called probabilistic if",
                "sec_num": null
            },
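            {
                "text": "A minimal Python sketch (ours, not from the original paper) of the ideal vector completion on an invented toy taxonomy; ψ(x) is the dict of coordinates p(y) on the basis elements e_y for y ∈ ↓(x).\n\nparents = {'cat': 'animal', 'dog': 'animal', 'animal': 'entity'}  # is-a edges\np = {'cat': 0.3, 'dog': 0.2, 'animal': 0.1, 'entity': 0.4}        # sums to 1\n\ndef up(y):\n    # yields y and all of its ancestors in the taxonomy\n    while True:\n        yield y\n        if y not in parents:\n            return\n        y = parents[y]\n\ndef psi(x):\n    # ψ(x) = Σ_{y∈↓(x)} p(y) e_y, where ↓(x) = {y : y is below or equal to x}\n    return {y: p[y] for y in p if x in up(y)}\n\np_hat = lambda x: sum(psi(x).values())  # ||ψ(x)||_1 = p̂(x)\nprint(psi('animal'))   # {'cat': 0.3, 'dog': 0.2, 'animal': 0.1}\nprint(p_hat('entity')) # 1.0: the top concept has probability one",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The taxonomy is called probabilistic if",
                "sec_num": null
            },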
            {
                "text": "In this section we give a description link grammar (Sleator and Temperley, 1991) in terms of a context theory. Link grammar is a lexicalised syntactic formalism which describes properties of words in terms of links formed between them, and which is context-free in terms of its generative power; for the sake of brevity we omit the details, although a sample link grammar parse is show in Figure 3 .",
                "cite_spans": [
                    {
                        "start": 51,
                        "end": 80,
                        "text": "(Sleator and Temperley, 1991)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 389,
                        "end": 397,
                        "text": "Figure 3",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "Our formulation of link grammar as a context theory makes use of a construction called a free inverse semigroup. Informally, the free inverse semigroup on a set S is formed from elements of S and their inverses, S \u22121 = {s \u22121 : s \u2208 S}, satisfying no other condition than those of an inverse semigroup. Formally, the free inverse semigroup is defined in terms of a congruence relation on (S \u222a S \u22121 ) * specifying the inverse property and commutativity of idempotents -see (Munn, 1974) for details. We denote the free inverse semigroup on S by FIS(S).",
                "cite_spans": [
                    {
                        "start": 470,
                        "end": 476,
                        "text": "(Munn,",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "Free inverse semigroups were shown by Munn (1974) to be equivalent to birooted word trees. A birooted word-tree on a set A is a directed acyclic graph whose edges are labelled by elements of A which does not contain any subgraphs of the form An element in the free semigroup FIS(S) is denoted as a sequence",
                "cite_spans": [
                    {
                        "start": 38,
                        "end": 49,
                        "text": "Munn (1974)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "x d 1 1 x d 2 2 . . . x dn n where x i \u2208 S and d i \u2208 {1, \u22121}.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "We construct the birooted word tree by starting with a single node as the start node, and for each i from 1 to n:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "\u2022 Determine if there is an edge labelled x i leaving the current node if d i = 1, or arriving at the current node if d i = \u22121.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "\u2022 If so, follow this edge and make the resulting node the current node.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "\u2022 If not, create a new node and join it with an edge labelled x i in the appropriate direction, and make this node the current node.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "The finish node is the current node after the n iterations. The product of two elements x and y in the free inverse semigroup can be computed by finding the birooted word-tree of x and that of y, joining the graphs by equating the start node of y with the finish node of x (and making it a normal node), and merging any other nodes and edges necessary to remove any subgraphs of the form",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "\u2022 a \u2212\u2192 \u2022 a \u2190\u2212 \u2022 or \u2022 a \u2190\u2212 \u2022 a \u2212\u2192 \u2022.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "The inverse of an element has the same graph with start and finish nodes exchanged.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
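            {
                "text": "A minimal Python sketch (ours, not from the original paper) of the birooted word-tree construction described above, for a single element of FIS(S); the product and inverse operations are not implemented.\n\ndef birooted_word_tree(seq):\n    # seq: list of (label, d) pairs with d in {1, -1}; nodes are integers\n    fwd, bwd = {}, {}  # fwd[(u, a)] = v for an edge u -a-> v; bwd is its mirror\n    start, current, fresh = 0, 0, 1\n    for label, d in seq:\n        table, other = (fwd, bwd) if d == 1 else (bwd, fwd)\n        key = (current, label)\n        if key in table:\n            current = table[key]          # follow the existing edge\n        else:\n            table[key] = fresh            # create a new node and edge\n            other[(fresh, label)] = current\n            current = fresh\n            fresh += 1\n    return fwd, start, current           # forward edges, start root, finish root\n\n# aa⁻¹b: edges {(0, 'a'): 1, (0, 'b'): 2}, start node 0, finish node 2\nprint(birooted_word_tree([('a', 1), ('a', -1), ('b', 1)]))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },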
            {
                "text": "We can represent parses of sentences in link grammar by translating words to syntactic categories in the free inverse semigroup. The parse shown earlier for \"they mashed their way through the thick mud\" can be represented in the inverse semigroup on S = {s, m, o, d, j, a} as",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "ss \u22121 modd \u22121 o \u22121 m \u22121 jdaa \u22121 d \u22121 j \u22121",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "which has the following birooted word-tree (the words which the links derive from are shown in brackets): Let A be the set of words in the natural language under consideration, S be the set of link types. Then we can form a context theory A, FIS(S),\u02c6, \u2022 where \u2022 is multiplication defined by convolution on FIS(S), and a word a \u2208 A is mapped to a probabilistic sum\u00e2 of its link possible grammar representations (called disjuncts). Thus we have a context theory which maps a string x to elements of L 1 (FIS(S)); if there is a parse for this string then there will be some component of x which corresponds to an idempotent element of FIS(S). Moreover we can interpret the magnitude of the component as the probability of that particular parse, thus the context theory describes a probabilistic variation of link grammar.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Representing Syntax",
                "sec_num": "3.4"
            },
            {
                "text": "For the sake of brevity, we summarise our approach to representing uncertainty in logical semantics, which is described in full elsewhere. Our aim is to be able to reason with probabilistic information about uncertainty in logical semantics. For example, in order to represent a natural language sentence as a logical statement, it is necessary to parse it, which may well be with a statistical parser. We may have hundreds of possible parses and logical representations of a sentence, and associated probabilities. Alternatively, we may wish to describe our uncertainty about word-sense disambiguation in the representation. Incorporating such probabilistic information into the representation of meaning may lead to more robust systems which are able to cope when one component fails.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Uncertainty in Logical Semantics",
                "sec_num": "3.5"
            },
            {
                "text": "The basic principle we propose is to first represent unambiguous logical statements as a context theory. Our uncertainty about the meaning of a sentence can then be represented as a probability distribution over logical statements, whether the uncertainty arises from parsing, word-sense disambiguation or any other source. Incorporating this information is then straightforward: the representation of the sentence is the weighted sum of the representation of each possible meaning, where the weights are given by the probability distribution.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Uncertainty in Logical Semantics",
                "sec_num": "3.5"
            },
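            {
                "text": "As a concrete illustration of this weighted sum (ours; the readings and probabilities are invented), suppose a parser returns two logical readings of an ambiguous sentence with probabilities 0.7 and 0.3, each represented as a sparse vector over logical basis elements:\n\nfrom collections import defaultdict\n\ndef weighted_sum(readings):\n    # readings: list of (probability, sparse_vector) pairs, where a sparse\n    # vector is a dict from basis elements to coefficients.\n    combined = defaultdict(float)\n    for p, vector in readings:\n        for basis, value in vector.items():\n            combined[basis] += p * value\n    return dict(combined)\n\nreadings = [(0.7, {'see(i, man)': 1.0, 'with(see, telescope)': 1.0}),\n            (0.3, {'see(i, man)': 1.0, 'with(man, telescope)': 1.0})]\nprint(weighted_sum(readings))\n# {'see(i, man)': 1.0, 'with(see, telescope)': 0.7, 'with(man, telescope)': 0.3}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Uncertainty in Logical Semantics",
                "sec_num": "3.5"
            },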
            {
                "text": "Computing the degree of entailment using this approach is computationally challenging, however we have shown that it is possible to estimate the degree of entailment by computing a lower bound on this value by calculating pairwise degrees of entailment for each possible logical statement. Mitchell and Lapata (2008) proposed a framework for composing meaning that is extremely general in nature: there is no requirement for linearity in the composition function, although in practice the authors do adopt this assumption. Indeed their \"multiplicative models\" require composition of two vectors to be a linear function of their tensor product; this is equivalent to our requirement of distributivity with respect to vector space addition.",
                "cite_spans": [
                    {
                        "start": 290,
                        "end": 316,
                        "text": "Mitchell and Lapata (2008)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Uncertainty in Logical Semantics",
                "sec_num": "3.5"
            },
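            {
                "text": "Returning to the pairwise lower bound described at the start of this subsection, the following sketch (ours) shows the shape of such an estimate, under the simplifying assumption that the degree of entailment between two distributions over logical statements is bounded below by the probability-weighted pairwise degrees between their components; the precise bound and its justification are given in (Clarke, 2007), and the function pairwise here stands for an unspecified entailment measure on individual statements.\n\ndef entailment_lower_bound(dist_a, dist_b, pairwise):\n    # dist_a, dist_b: dicts mapping logical statements to probabilities.\n    # pairwise(s, t): degree of entailment between single statements.\n    return sum(p * q * pairwise(s, t)\n               for s, p in dist_a.items()\n               for t, q in dist_b.items())",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Uncertainty in Logical Semantics",
                "sec_num": "3.5"
            },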
            {
                "text": "Various ways of composing vector based representations of meaning were investigated by Widdows (2008) , including the tensor product and direct sum. Both of these are compatible with the context theoretic framework since they are distributive with respect to the vector space addition. Clark et al. (2008) proposed a method of composing meaning that generalises Montague semantics; further work is required to determine how their method of composition relates to the contexttheoretic framework. Erk and Pado (2008) describe a method of composition that allows the incorporation of selectional preferences; again further work is required to determine the relation between this work and the context-theoretic framework.",
                "cite_spans": [
                    {
                        "start": 87,
                        "end": 101,
                        "text": "Widdows (2008)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 286,
                        "end": 305,
                        "text": "Clark et al. (2008)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 495,
                        "end": 514,
                        "text": "Erk and Pado (2008)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "4"
            },
            {
                "text": "We have given an introduction to the contexttheoretic framework, which provides mathematical guidelines on how vector-based representations of meaning should be composed, how entailment should be determined between these representations, and how probabilistic information should be incorporated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "5"
            },
            {
                "text": "We have shown how the framework can be applied to a wide range of problems in computational linguistics, including subsequence matching, vector based representations of taxonomies and statistical parsing. The ideas we have presented here are only a fraction of those described in full in (Clarke, 2007) , and we believe that even that is only the tip of the iceberg with regards to what it is possible to achieve with the framework.",
                "cite_spans": [
                    {
                        "start": 288,
                        "end": 302,
                        "text": "(Clarke, 2007)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "5"
            }
        ],
        "back_matter": [
            {
                "text": "I am very grateful to my supervisor David Weir for all his help in the development of these ideas, and to Rudi Lutz and the anonymous reviewers for many useful comments and suggestions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "An Invitation to Operator Theory",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [
                            "A"
                        ],
                        "last": "Abramovich",
                        "suffix": ""
                    },
                    {
                        "first": "Charalambos",
                        "middle": [
                            "D"
                        ],
                        "last": "Aliprantis",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. A. Abramovich and Charalambos D. Aliprantis. 2002. An Invitation to Operator Theory. American Mathematical Society.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "A compositional distributional model of meaning",
                "authors": [
                    {
                        "first": "Stephen",
                        "middle": [],
                        "last": "Clark",
                        "suffix": ""
                    },
                    {
                        "first": "Bob",
                        "middle": [],
                        "last": "Coecke",
                        "suffix": ""
                    },
                    {
                        "first": "Mehrnoosh",
                        "middle": [],
                        "last": "Sadrzadeh",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the Second Symposium on Quantum Interaction",
                "volume": "",
                "issue": "",
                "pages": "133--140",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh. 2008. A compositional distribu- tional model of meaning. In Proceedings of the Second Symposium on Quantum Interaction, Oxford, UK, pages 133-140.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Meaning as context and subsequence analysis for textual entailment",
                "authors": [
                    {
                        "first": "Daoud",
                        "middle": [],
                        "last": "Clarke",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the Second PASCAL Recognising Textual Entailment Challenge",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daoud Clarke. 2006. Meaning as context and subse- quence analysis for textual entailment. In Proceed- ings of the Second PASCAL Recognising Textual En- tailment Challenge.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Context-theoretic Semantics for Natural Language: an Algebraic Framework",
                "authors": [
                    {
                        "first": "Daoud",
                        "middle": [],
                        "last": "Clarke",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daoud Clarke. 2007. Context-theoretic Semantics for Natural Language: an Algebraic Framework. Ph.D. thesis, Department of Informatics, University of Sussex.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "The pascal recognising textual entailment challenge",
                "authors": [
                    {
                        "first": "Ido",
                        "middle": [],
                        "last": "Dagan",
                        "suffix": ""
                    },
                    {
                        "first": "Oren",
                        "middle": [],
                        "last": "Glickman",
                        "suffix": ""
                    },
                    {
                        "first": "Bernardo",
                        "middle": [],
                        "last": "Magnini",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In Proceedings of the PASCAL Chal- lenges Workshop on Recognising Textual Entail- ment.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Indexing by latent semantic analysis",
                "authors": [
                    {
                        "first": "Scott",
                        "middle": [],
                        "last": "Deerwester",
                        "suffix": ""
                    },
                    {
                        "first": "Susan",
                        "middle": [],
                        "last": "Dumais",
                        "suffix": ""
                    },
                    {
                        "first": "George",
                        "middle": [],
                        "last": "Furnas",
                        "suffix": ""
                    },
                    {
                        "first": "Thomas",
                        "middle": [],
                        "last": "Landauer",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Harshman",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Journal of the American Society for Information Science",
                "volume": "41",
                "issue": "6",
                "pages": "391--407",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Scott Deerwester, Susan Dumais, George Furnas, Thomas Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "A structured vector space model for word meaning in context",
                "authors": [
                    {
                        "first": "Katrin",
                        "middle": [],
                        "last": "Erk",
                        "suffix": ""
                    },
                    {
                        "first": "Sebastian",
                        "middle": [],
                        "last": "Pado",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Katrin Erk and Sebastian Pado. 2008. A structured vector space model for word meaning in context. In Proceedings of EMNLP.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "WordNet: An Electronic Lexical Database",
                "authors": [],
                "year": 1989,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christaine Fellbaum, editor. 1989. WordNet: An Elec- tronic Lexical Database. The MIT Press, Cam- bridge, Massachusetts.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Modes of meaning",
                "authors": [
                    {
                        "first": "John",
                        "middle": [
                            "R."
                        ],
                        "last": "Firth",
                        "suffix": ""
                    }
                ],
                "year": 1957,
                "venue": "Papers in Linguistics 1934-1951",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John R. Firth. 1957. Modes of meaning. In Papers in Linguistics 1934-1951. Oxford University Press, London.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "The distributional inclusion hypotheses and lexical entailment",
                "authors": [
                    {
                        "first": "Maayan",
                        "middle": [],
                        "last": "Geffet",
                        "suffix": ""
                    },
                    {
                        "first": "Ido",
                        "middle": [],
                        "last": "Dagan",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Maayan Geffet and Ido Dagan. 2005. The dis- tributional inclusion hypotheses and lexical entail- ment. In Proceedings of the 43rd Annual Meet- ing of the Association for Computational Linguistics (ACL'05), University of Michigan.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "A probabilistic setting and lexical cooccurrence model for textual entailment",
                "authors": [
                    {
                        "first": "Oren",
                        "middle": [],
                        "last": "Glickman",
                        "suffix": ""
                    },
                    {
                        "first": "Ido",
                        "middle": [],
                        "last": "Dagan",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "ACL-05 Workshop on Empirical Modeling of Semantic Equivalence and Entailment",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Oren Glickman and Ido Dagan. 2005. A probabilis- tic setting and lexical cooccurrence model for tex- tual entailment. In ACL-05 Workshop on Empirical Modeling of Semantic Equivalence and Entailment.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Distributional structure",
                "authors": [
                    {
                        "first": "Zellig",
                        "middle": [],
                        "last": "Harris",
                        "suffix": ""
                    }
                ],
                "year": 1985,
                "venue": "The Philosophy of Linguistics",
                "volume": "",
                "issue": "",
                "pages": "26--47",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zellig Harris. 1985. Distributional structure. In Jer- rold J. Katz, editor, The Philosophy of Linguistics, pages 26-47. Oxford University Press.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Text classification using string kernels",
                "authors": [
                    {
                        "first": "Huma",
                        "middle": [],
                        "last": "Lodhi",
                        "suffix": ""
                    },
                    {
                        "first": "Craig",
                        "middle": [],
                        "last": "Saunders",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Shawe-Taylor",
                        "suffix": ""
                    },
                    {
                        "first": "Nello",
                        "middle": [],
                        "last": "Cristianini",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Watkins",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Journal of Machine Learning Research",
                "volume": "2",
                "issue": "",
                "pages": "419--444",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Ma- chine Learning Research, 2:419-444.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Vector-based models of semantic composition",
                "authors": [
                    {
                        "first": "Jeff",
                        "middle": [],
                        "last": "Mitchell",
                        "suffix": ""
                    },
                    {
                        "first": "Mirella",
                        "middle": [],
                        "last": "Lapata",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of ACL-08: HLT",
                "volume": "",
                "issue": "",
                "pages": "236--244",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236-244, Columbus, Ohio, June. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Free inverse semigroup. Proceedings of the",
                "authors": [
                    {
                        "first": "W",
                        "middle": [
                            "D"
                        ],
                        "last": "Munn",
                        "suffix": ""
                    }
                ],
                "year": 1974,
                "venue": "",
                "volume": "29",
                "issue": "",
                "pages": "385--404",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "W. D. Munn. 1974. Free inverse semigroup. Proceed- ings of the London Mathematical Society, 29:385- 404.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Using information content to evaluate semantic similarity in a taxonomy",
                "authors": [
                    {
                        "first": "Philip",
                        "middle": [],
                        "last": "Resnik",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "448--453",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In IJ- CAI, pages 448-453.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Parsing english with a link grammar",
                "authors": [
                    {
                        "first": "Daniel",
                        "middle": [
                            "D."
                        ],
                        "last": "Sleator",
                        "suffix": ""
                    },
                    {
                        "first": "Davy",
                        "middle": [],
                        "last": "Temperley",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daniel D. Sleator and Davy Temperley. 1991. Pars- ing english with a link grammar. Technical Report CMU-CS-91-196, Department of Computer Sci- ence, Carnegie Mellon University.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Characterising measures of lexical distributional similarity",
                "authors": [
                    {
                        "first": "Julie",
                        "middle": [],
                        "last": "Weeds",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Weir",
                        "suffix": ""
                    },
                    {
                        "first": "Diana",
                        "middle": [],
                        "last": "Mccarthy",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the 20th International Conference of Computational Linguistics, COLING-2004",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. In Proceedings of the 20th International Conference of Computational Linguistics, COLING- 2004, Geneva, Switzerland.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Semantic vector products: Some initial investigations",
                "authors": [
                    {
                        "first": "Dominic",
                        "middle": [],
                        "last": "Widdows",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the Second Symposium on Quantum Interaction, Oxford",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dominic Widdows. 2008. Semantic vector products: Some initial investigations. In Proceedings of the Second Symposium on Quantum Interaction, Ox- ford, UK.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Philosophical Investigations",
                "authors": [
                    {
                        "first": "Ludwig",
                        "middle": [],
                        "last": "Wittgenstein",
                        "suffix": ""
                    }
                ],
                "year": 1953,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ludwig Wittgenstein. 1953. Philosophical Investiga- tions. Macmillan, New York. G. Anscombe, trans- lator.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "The simplest one defines e d \u2022 e f = e d if d = f and e d e f = 0 otherwise. The effect of multiplication of the context vectors of two strings is then set intersection: (x\u2022\u0177)(d) = 1 if x and y occur in document d 0 otherwise.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF1": {
                "text": "Real Valued Taxonomy). A real valued taxonomy is a finite set S of concepts with a partial ordering \u2264 and a positive real function p over S. The measure of a concept is then defined in terms of p asp (x) = y\u2208\u2193(x) p(y).",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF2": {
                "text": "Figure 2: A small example taxonomy extracted from WordNet (Fellbaum, 1989).",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF3": {
                "text": "A link grammar parse. Link types: s: subject, o: object, m: modifying phrases, a: adjective, j: preposition, d: determiner.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF4": {
                "text": ", together with two distinguished nodes, called the start node, 2 and finish node, \u2022.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF5": {
                "text": "s(they, mashed) m(mashed, through) o(mashed, way) d(their, way) j(through, mud) d(the, mud) a(thick, mud)",
                "uris": null,
                "num": null,
                "type_str": "figure"
            }
        }
    }
}