{
    "paper_id": "W90-0122",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T04:00:16.923339Z"
    },
    "title": "The Computer Generation of Speech with Dlscoursally and Semantically Motivated Intonation",
    "authors": [
        {
            "first": "Robin",
            "middle": [
                "P"
            ],
            "last": "Fawcett",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Computational Linguistics Unit University of Wales College of Cardiff",
                "location": {
                    "postCode": "CF1 3EU",
                    "settlement": "Cardiff",
                    "country": "UK"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The paper shows how it is possible, in the framemork of a gygtemle funetlonal g:r~mmat, (SFG) approach to the semantics of aatttral language, to generate an output with intonation that is motivated semantically and discoursally. Most of ~ wurk reported has already been *accuse, fully implem~4 in GF.N~SY$ (the very large generator of the COMMUNAL Pro~eet; see Appendix 1). A major feature is that it does am flint generate a syntax tree and words, and then impose intonational coatogm on them (as is a common aggroaeh in modelling intonation); rather, it generates the various intonational feamrta diteetJy, as it is generating richly labe.llad m:rucratea (as are typical in SFG), and the associated items. ~ claim is not that the model p~ he,\u00a2 solves all the problems of generating intonation, but that it points a way forward that makea natural links with semantics and ditu:ourse. A secondaJty perpoe~ of this paler is to demonstraW,, for one of many possible areas of NLO that could have been choc~n, that there is still much important work to be done in '~nt.e, mev geaeratkm'. I do this tn order to tefut\u00a9 the augge~On, ~easiOnally head at recent eel that the major wod\u00a2 io 'sentence generation' has already been done, and that the main (only'?) area of s/gnifleanee in NLG is ia ltighet level planning. In my expea'ieace the two are inteaxlependen% and we should expect significant developments at eveey level in the years to come.",
    "pdf_parse": {
        "paper_id": "W90-0122",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The paper shows how it is possible, in the framemork of a gygtemle funetlonal g:r~mmat, (SFG) approach to the semantics of aatttral language, to generate an output with intonation that is motivated semantically and discoursally. Most of ~ wurk reported has already been *accuse, fully implem~4 in GF.N~SY$ (the very large generator of the COMMUNAL Pro~eet; see Appendix 1). A major feature is that it does am flint generate a syntax tree and words, and then impose intonational coatogm on them (as is a common aggroaeh in modelling intonation); rather, it generates the various intonational feamrta diteetJy, as it is generating richly labe.llad m:rucratea (as are typical in SFG), and the associated items. ~ claim is not that the model p~ he,\u00a2 solves all the problems of generating intonation, but that it points a way forward that makea natural links with semantics and ditu:ourse. A secondaJty perpoe~ of this paler is to demonstraW,, for one of many possible areas of NLO that could have been choc~n, that there is still much important work to be done in '~nt.e, mev geaeratkm'. I do this tn order to tefut\u00a9 the augge~On, ~easiOnally head at recent eel that the major wod\u00a2 io 'sentence generation' has already been done, and that the main (only'?) area of s/gnifleanee in NLG is ia ltighet level planning. In my expea'ieace the two are inteaxlependen% and we should expect significant developments at eveey level in the years to come.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "The aspect of NoZural Language Generation (NLG) to be described here is the generation of spoken text that has intonation, where that intonation is motivated both semantically (i.e, in terms of the semantics -in a broad sense of the term to be clarified soon -of sentences) and dJscoursally (Le. in terms of vhat the discourse p!~..;.ng compotmnt specifies).\" Any specification of intonation requires, of course, to be integrated with an adapted version of a speech synthesizer (e.g. one that draws on one of the currently available systems that attempts -inevitably with Appendix 1 for a brief overview of the project). L As will be \u00a2l\u00a2ar from what follow the model presented hcte owes a great deal to two people in particular:. Michael Halliday and Pad Teach. Through them, I am well awate, there is a debt to many others, too numerous to mention, who have worked in the diffvgult field of intonation in F~glish. [ am grateful too for early encouragement in this area from Gillian Brown (whose work is drawn on also by Paul Teach), nod for the regular\u00b0 ongoing stimulus of many good explorations of ideas in this and other arco, s with my colteagmt Gordon Tucker. But none of there should bc blamed for the inevitable crudities, infelicities and no aoubt ca'mrs in the model described here; these are mine.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Purpose and Scope",
                "sec_num": "1."
            },
            {
                "text": "The approach is very different from that in MITalk (Allen 1986) , which is essentially a textto-speech system. So far as I am aware, the only generative model prior to ours that attempts to generate intonation that is motivated semanticaIly and discoursaIly is the impressive work of hard, Houghton, Pearson and their colleagues at Sussex (Houghton and hard 1987) and Houghton and Pearson 1988) . Its limitation is the very small size of its syntax, lexis, semantics and working domain. We see our work in the COMMUNAL Project as being to build on their important achievement. (But see Appendix 2 for what we do not attempt.)",
                "cite_spans": [
                    {
                        "start": 51,
                        "end": 63,
                        "text": "(Allen 1986)",
                        "ref_id": null
                    },
                    {
                        "start": 339,
                        "end": 363,
                        "text": "(Houghton and hard 1987)",
                        "ref_id": null
                    },
                    {
                        "start": 368,
                        "end": 394,
                        "text": "Houghton and Pearson 1988)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Purpose and Scope",
                "sec_num": "1."
            },
            {
                "text": "The major components of the overall model that will b\u00a2 referred to below are as follows. We assume an Interactive system, rather than one that is merely monologue. We shall ignore here the components related to parsing, interpretation and inputting to the belief system and planner. The components relevant to generation are:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Relevant Components of the Communal Model",
                "sec_num": "2."
            },
            {
                "text": "1. The belief system, which includes general and specific beliefs about ('knowledge of) situations and things in some domain; specific beliefs about the content of the preceding discourse, about vozious aspects of the current social situation, about the addressee and his beliefs of all types, his attitudes, his goals and plans.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Relevant Components of the Communal Model",
                "sec_num": "2."
            },
            {
                "text": "on knowledge of ... (scripts, schemas, etc) , introducing where appropriate sub-units such as transactions (see below) and more detailed plans, using ... 4, the local discourse grammar, which is modelled as a 'systemic flowchart' (i.e. a flowchart containing many small system networks at the choice points, and which generates exchanges and their structure), 5. the lexlcogrammar, i.e. the sentence generator, consisting of: a. the system networks of semantic features for a wide variety of types of mea-;-g related to situations and realized in the clause, including theme and information structure as well as transitivity, mood, negativity, modallty, affeetive meaning and logical relationships, and equivalent system networks for thiage and qualities, and b. the realization rules which turn the selection expressions of features that are the output from passes through the system networks into syntactic structures with Items (grammatical and lexical) and markers of punctuaUou or intonation as their term;hal nodes.",
                "cite_spans": [
                    {
                        "start": 20,
                        "end": 43,
                        "text": "(scripts, schemas, etc)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A planner, which makes general plans, drawing",
                "sec_num": "2."
            },
            {
                "text": "Let as imagine that Ivy (the 'person' of whose ufind GENESYS models a part) is about to generate a sentence, Let us suppose that she is being consulted by the Personnel Officer of a large institution, who draws regularly on her specialist knowledge and advice, and that he has just asked her Wghere does Peter Piper live?\". (We shall come later to how intonation is represented.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": ",am Overview of the Generation of Intonation",
                "sec_num": "3.1."
            },
            {
                "text": "Like most human users of language, Ivy makes reasonable assumptions about (loosely, she 'knows') where she is in any current transaction (e.g. at the start, in the middle or at the end), and where she is in the current exchange. This affects the pitch level of what she says. She needs to choose a tone (the change in pitch marked by a stepping or a slide on the tonic syllable) which will express the MOOD of the final matrix clause of her sentence. ('Matrix' here means 'at the top layer of structure'.) She nee& to locate that tone on an item which will be thereby marked as new Information. She needs to decide if it is to be presented simply as 'neW, or as 'contrastively new' (in the terms used here). And she needs to deride on the Information status of any chunks of information that are to be presented as separate from the main Information ear of the clause. (The information that guides these choices comes from various aspects of the higher belief system, which there is unfortunately no space to discuss here.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": ",am Overview of the Generation of Intonation",
                "sec_num": "3.1."
            },
            {
                "text": "As we shaft see, these various components of the semantic level of intonation account, in a different way from the usual approach in British intonation studies, for Halliday% well-known triad of TONE, TONALITY and TONICITY.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": ",am Overview of the Generation of Intonation",
                "sec_num": "3.1."
            },
            {
                "text": "While it is perfectly possible m talk about the contrasts in intonational form to which these three rder as 'systems', I suggest that it is more insightful to take, as the level of contrasts to be modelIed in system networks, the meanings that lie behind (or, ia the SFG metaphor, above them). These semantic features are then realized in the purely intonational contrasts of TONE, TONICITY and TONA.LITY.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": ",am Overview of the Generation of Intonation",
                "sec_num": "3.1."
            },
            {
                "text": "The accounts of the various aspects of intonation in what follows will inevitably be introductory, and may to the specialist appear simplistic. A somewhat fuller treatment is given in Teach and Fawcett 1988 , and a very fur treatment is given in Tenth 1987, which includes summaries of relevant work by other intonation spe\u00a2islists.",
                "cite_spans": [
                    {
                        "start": 184,
                        "end": 206,
                        "text": "Teach and Fawcett 1988",
                        "ref_id": "BIBREF27"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": ",am Overview of the Generation of Intonation",
                "sec_num": "3.1."
            },
            {
                "text": "(I omit here, for reasons of space, a specification of how one might model the way in which the position in a transaction and an exchange affects intonation.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": ",am Overview of the Generation of Intonation",
                "sec_num": "3.1."
            },
            {
                "text": "Let us assume, then, that Ivy is preparing a response to the Personnel Officer's question, using the information that, while Mr Peter Piper's address is currently 11 Romilly Crescent, Canton, Cardiff, he is moving from there after one month. In discourse planni.g terms, she chooses that her move will be a 'support' for a 'solicit information', gad that the act at the head of the move is a 'give new content ) (see Fawcett, van der Mije and van Wissen 1988) . As wc shall see, these choices pro. select in the MOOD network of the lexieegrammar the features [information] and [giver] . But first there is a more basic system to consider.",
                "cite_spans": [
                    {
                        "start": 417,
                        "end": 459,
                        "text": "Fawcett, van der Mije and van Wissen 1988)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 577,
                        "end": 584,
                        "text": "[giver]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": ",am Overview of the Generation of Intonation",
                "sec_num": "3.1."
            },
            {
                "text": "The inidal rule of the semantics of the lexicogrammar to be considered here is:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The MODE System",
                "sec_num": "32."
            },
            {
                "text": "situation -> \"MODE'& 'TENOR'&'CONGRUENCZ~SIT'.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The MODE System",
                "sec_num": "32."
            },
            {
                "text": "Thus means that, for any 'situation' ( roughly = 'proposition') that you are generating, you must make choices in all three of the systems named. (Notice, then, that 'parallelism' lies at the heart of the grammar.) Here we shall be concerned only with the MODE system. (It is from CONGR-UENCE SIT that the main part of the network is entered, to generate configurations of participant roles, such as Agent and Affected, and choices in MOOD, such as 'information seeker', and very many others.) The MODE system is very simple:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The MODE System",
                "sec_num": "32."
            },
            {
                "text": "'MODE' -> 70% spoken / 30% written.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The MODE System",
                "sec_num": "32."
            },
            {
                "text": "This means: 'In the MODE system, you must choose between generating a spoken output (for which under random generation there is a 70% probability) and generating a written output (which carries a 30% probability). Clearly, sinc~",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The MODE System",
                "sec_num": "32."
            },
            {
                "text": "Ivy is in a spoken interaction, she will be strongly disposed to select [spoken] . but in principle she need not We shall not discuss here the interesting reasons for and ~in~t introducing this system to the lexi\u00a2o-grammar itself, except to point to two 91~,niEcant advantages that it brings. Nor, unfortunately, is there space to discuss the roles of the probabilities and the ways in which they are assigned (sometimes simply a guess at the overall pattern for central types of text; somet;mes based on textual studies).",
                "cite_spans": [
                    {
                        "start": 72,
                        "end": 80,
                        "text": "[spoken]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The MODE System",
                "sec_num": "32."
            },
            {
                "text": "(The ne,~ few lines presuppose some familiarity with systemic grammars; for those without this knowledge it may be advisable to re-read this section after seeing the working of the examples.) What is the role of this system? Fh'st, it enables the grammat-builder to refer, at any point on this initial pass through the system network, to whichOver feature in this system has been chosen as an entry condition to a later system. In other words, where there is greater richness of choice in me aning in the spoken mode (as is typically the case with m~nings real|Ted in intonation, as gain.st those reAI;~,~d in punctuation), we can ensure that those systems are only entered when the feature [spoken] has been chosen. We shall shortly see the great value of this. Second, the 'MODE' system enables us to refer, at any point in a realization rule, on this or any subsequent pass through the network, to this feature as a conditional feature for the realization of some othex feature, In other words, we can ensure that if [spoken] has been chosen the realization will take the form of intonation, and if [wrkten] has been chosen it is expressed in punctuation. Both of these faclh'fies contribute greatly to the elegant operation of the lexieogramm~tr as a whole, both in me~nin~ r\u00a291h'?d ill intonation and in many other ways.",
                "cite_spans": [
                    {
                        "start": 691,
                        "end": 699,
                        "text": "[spoken]",
                        "ref_id": null
                    },
                    {
                        "start": 1020,
                        "end": 1028,
                        "text": "[spoken]",
                        "ref_id": null
                    },
                    {
                        "start": 1102,
                        "end": 1110,
                        "text": "[wrkten]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The MODE System",
                "sec_num": "32."
            },
            {
                "text": "The unmarked choice in the 'CONGRUENCE SIT' system is, unsurpri~ngly, [congruent_sill. This is the choice that opens up the whole array of me~-;,~gs a.~sociated with realization in a clause, and many parallel systems follow. Among these is the MOOD network. This is a fairly large and complex network of meanings, and these are re~l;-ed partly in syntax, partly in items (such as \"please'), and partly in tone (= variation in pitch). The network is too large and complex to present here, but we shall trace a route through it that shows why it is central to an understan,4;-g of intonation. The first options in the current GENESYS network are shown below:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "congruent sit-> 'TRANgITIVITY' & 'MOOD (& OTHERS). 'MOOD(A)' -> 90% information / 10% directive. information -> 'MOOD(B)' & 'TIME REFERENCE POINT' (& OTHERS). m 'MOOD(B)' -> 70% giver (1.2) / 30% seeker (16).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "The second line reads:' In the MOOD(A) system you must choose between the feature [information] , which overall has a 90% probability of being selected, and [directive] , which has only a 10% probability. As so often, the choice of a single feature leads to further parallel systems, one of which continues the MOOD network itself. The last line in the above rules exemplifies the use of numbers in brackets after the features; it is the number of the realization rule for the feature concerned. What will this look like?",
                "cite_spans": [
                    {
                        "start": 82,
                        "end": 95,
                        "text": "[information]",
                        "ref_id": null
                    },
                    {
                        "start": 157,
                        "end": 168,
                        "text": "[directive]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
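The probability-weighted systems above can be sketched as simple weighted choices. The names and data layout below are illustrative assumptions for exposition, not the actual GENESYS implementation:

```python
import random

# A system-network fragment as weighted choice points (illustrative only).
# Mirrors: 'MOOD(A)' -> 90% information / 10% directive.
#          'MOOD(B)' -> 70% giver / 30% seeker.
SYSTEMS = {
    "MOOD(A)": [("information", 0.90), ("directive", 0.10)],
    "MOOD(B)": [("giver", 0.70), ("seeker", 0.30)],
}

def choose(system_name):
    """Select one feature from the named system according to its probabilities."""
    features, weights = zip(*SYSTEMS[system_name])
    return random.choices(features, weights=weights, k=1)[0]
```

In a full traversal, the chosen feature would then serve as the entry condition to further, more delicate systems, as the paper describes.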
            {
                "text": "Here is a slightly simplified version of the realization rule for [giver]:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "1.2 : giver : if falls 'Z' and (simplex sit or final co ordinated situation) ten on r st3,ass itten then 'E' < \"!' )\"' .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "if on,~first_pass spoken then 'E' <",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "The. effect of this rule is on the 'Ender' (i.e. 'E', the last element in the structure of the clause). If [written] is chosen in the 'MODE' system it is expounded by a full stop (Br. E. for 'period'), but if the choice is [spoken] it is expounded by a final intonation unit boundary, i.e. [. However, says the rule, neither realization will occur unle~ the clause (1) directly fdls the element 'Sentence' (represented by'Z' a~ an approximation to sigma) and (2) is 'simplex ~, i.e. is not co~ordinated with one or more other clauses or, if it is, is the final clause.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
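The working of rule 1.2 can be sketched as follows; the function and its arguments are hypothetical stand-ins for the GENESYS machinery, kept only to show the conditional logic just described:

```python
# Illustrative sketch (not the GENESYS source) of realization rule 1.2 for
# [giver]: the Ender 'E' is expounded by a full stop when the MODE is
# written, and by a final intonation unit boundary '|' when it is spoken,
# but only for a clause that directly fills the Sentence element 'Z' and
# is simplex, or final in a co-ordination.
def realize_giver(fills, mode, simplex, final_coordinated):
    if fills != "Z" or not (simplex or final_coordinated):
        return None            # guard fails: rule does not apply
    if mode == "written":
        return ("E", ".")      # 'E' < "."
    if mode == "spoken":
        return ("E", "|")      # 'E' < "|"
    return None
```

The same guard (fills 'Z', simplex or final) recurs in the more delicate rules 1.21 and 1.22 below.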
            {
                "text": "This may seem a surpr/~gly complex rule to those in NIP used to working with minigrammars. But this is typical of the working level of complexity in a natural language, and those who ate used to working with the problems of building broad coverage grammars will appreciate that this is not a particularly complex rule. In the case of our example the effect is to give to Ivy's output a final intonation unit boundary.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "We come next to an example of the value of being able to use of the feature [spoken] as an entry condition to a system. This is necessary because the MOOD network also builds in variables in 'key' (in the sense of HaRiday 1970); i.e. finer choices w/t h;. the MOOD options. These correspond to what Tench treats separately as variations in attitude. While accepting the 'view that these more delicate choices can be seen as serving a separate ftmction from the function of the basic tone, the fact is that in any systemic computational implementation the way in which they enter the choice systom is simply us more delicate choices that are directly dependent on the broad choice of meaning realized in the broad tone. The range of such delicate variations appears to be potentially different for the various meani,o~ (see further below). In the systems given below, note ~ high probability of choosing [assertive] followed by [neutral] .",
                "cite_spans": [
                    {
                        "start": 927,
                        "end": 936,
                        "text": "[neutral]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "giver & spoken -> 70% assertive / 15% deferring (1.21) / 15% withjeservation (1.22).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "assertive. > 2% very strong (L23) / 8% strong (1.24) / 60% neutral (1.25) / 30% mild (1.26).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "In an intermediate level model (such as Prototype Generator 2 (PG2), which is the most advanced version of GENESYS currently implemented) we need only relatively simple rules such as the following:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "1.21 : deferring: if fills 'Z' and on first p~ spoken and (simplex sit or]inal~co~,rdinated situation) then '2-' by 'NT.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "('lq'I\" = 'Nuclear Tonic'; see 3.4.3.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "1.22 : with reservation : if fills '~' and on_first_pa~ss spoken and (simplexsit or fmaI_co-ordinated skuation) then 'L2' by 'hiT '.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "And so on, for [very strong] (realized by '21'), [strong] (realized by ~\u00f7'), [neutral] (realized by T) and [mild] (realized by '1-').",
                "cite_spans": [
                    {
                        "start": 77,
                        "end": 86,
                        "text": "[neutral]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "Here we are using a numerical notation for tones that goes back to an earlier tradition even than Hall [day's description (1967, 1970) , though it has much in common with Hallida/s. (Readers from the American traditiOn used to an iconic representation may hate some adjustments to make in interpreting the notation. But there should be act fundamental diffu:ulty; Hallida/s description has been widely used (and indeed tested) on American and Canadian English.) I give next a brief summary of the differences between the scheme for tones used here and Hall[day's well-lmown scheme (1967, 1970) . Tench's (and so my) numbers '1' and '2' correspond to HoJliday's usage, as do the use of '+' and '-'. But Halt[day% 'Tone 3' is seen as a variant of our Tone 2; Hallida/s Tone 4 (a fall. rise) is represented by '12'; and his 'Tone 5 (a ris~fall) is shown as '21'. Tench's general descriptions of the tones in words (1987) imply four pitch level.% and I therefore use the following labels for the model implemented: base, low, mid and high. The four levels in turn provide a framework for describing three types of pitch change. It will be helpful for what follows to set them out as three 'scales'; these descriptions of the tones are in effect source material for writing realization rules. (1 have given these scales informal semantic labels; these are not intended to correspond directly to the features in the MOOD network encountered so far, but to evoke features from various parts of the network, including the many options dependent on [directive]). F;uaily, let me remind yon that we are not at this point trying to account for all tones, but only for those that carry the MOOD of a matrix simplex or final clause. This dear separation of the ways in which tones are generated is a key feature of the present proposals. We shall come shortly to some of the ways of generating appropriate tones for some of the other positions in which tones occur.",
                "cite_spans": [
                    {
                        "start": 103,
                        "end": 134,
                        "text": "[day's description (1967, 1970)",
                        "ref_id": null
                    },
                    {
                        "start": 581,
                        "end": 587,
                        "text": "(1967,",
                        "ref_id": null
                    },
                    {
                        "start": 588,
                        "end": 593,
                        "text": "1970)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "The 'assertive' scale (Tones 21 and 1):",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "Tone 21: rise-fall (rise to high plus fall to base) Tone 1 +: high-fall (fall from high to base) Tone 1: mid-fall (fall from mid to base) Tone 1-: low-fall (fall from low to base)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "Also (see below): ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": ", followed by a Tone 3 (here 2) for 'supplementary information'.) Such final Tone 2s will be generated in a similar way to that to be illustrated in section 3.5 below for initial Tone 3s (and, as we shaU see, 12s). Finally, note that I include h~e one option that Tench includes under 'stat~ of information'. This is his 'implication', re~l;~\u00a2d in Tone 12, i.e. a fall-rise. This is Halliday~s Tone 4, which he characterizes as (among other descriptions) 'with reservation'. This tone occurs both as a carrier of MOOD and otherwise; it is with the former tiutt we are concerned here. It seems plausible to treat it as a variant that can be chosen as an alternative to the basic falling and rising tones recognized by both Halliday and Tench, and 1 have therefore incorporated it in the overall MOOD network.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Sentence Generator= MOOD",
                "sec_num": "3.3."
            },
            {
                "text": "I shall present here a somewhat novel approach to the relationship between the two sets of phenomena described by both Halliday and Tench as TONALITY and TONiCITY, TONALITY is typically thought of as 'cutting up a string of words into intonation ,mirA' ~ench's term; Halllday's is 'tone groups'), with each intonation unit realizing one information unit. The problem, when one is approaching the question from the angle of generation, is that there is no string of words to eat up -not, that is, until the senten~ has been generated. We therefore need to look for a semanfl\u00a2 approach to the problem, My proposal is that it is helpful to start not with TONALITY but TONICITY.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Line of Approach to the Problem",
                "sec_num": "3.4.1."
            },
            {
                "text": "TONICITY is the placing of the tonic on a syllable. The item so markfd is shown to be being presented as new Information -and !h{s iS a semantic concept. ('New' information is information presented as 'not recoverable.') But a further problem arises, in that linguists reccgni*e both 'marked' and '~1~marked' tonicity.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Line of Approach to the Problem",
                "sec_num": "3.4.1."
            },
            {
                "text": "Marked tonidty occurs when the item cont~;-i-g the tonic syllable is presented by the speaker as 'contrastivcly new'. Unmarked toniclty occurs when there is no marked tonicity (which is by far the most mual case); we shatl return to this shortly. Marked tonidty is handled in GENESYS ha the following way. In principle, any pathway through the system network that results in the generation of a formal item will lead to a system of the following form (where \u2022 is the current terminal feature):",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Marked Tonidty",
                "sec_num": "3.4.2."
            },
            {
                "text": "x-> notcontrastively_new / contrastively_new. 18.2 : contrastlve newness_ on_.process : 'CT' by 'M'.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Marked Tonidty",
                "sec_num": "3.4.2."
            },
            {
                "text": "In the case of our c~mple, the choke is not to present any clement as conttrasfively new.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generating Marked Tonidty",
                "sec_num": "3.4.2."
            },
            {
                "text": "How, then, should we generate unmarked tonicity? The answer is simple: as the defaulti.e. when there is no contrastive tonic, in other words, I want to suggest that ~mmarked tonicity is a formal phenomenon of intonation that does not express an active choice in meaning.. The relevant facts are well known, i,e., roughly, that what we hero term a nuclear tonic ('biT') fails on the last lexical item in the information unit. The question is: 'How can we define the intonation unit, in semantic terms?' The only contender as a semantic unit, in the GENF.SYS framework, is the situation, i.e. the semantic unit typically rvalizcd in the claw. The actual decision as to which item the unmarked tonic shall be assigned to gets made relatively late in the generation process. In GENESYS we simply have a list of the few dozen items generated by the lexicogrammar that cannot carry the unmarked tonic: roughly, the 'grammatical items' of English. Essentially, then, this default rule will insert one, and only one, nuclear tonic in each sentence. This will hold even when there are two or more co-ordinated clauses in that sentence, and/or one or more embedded dames.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generat;.~g Unmarked Tonicity",
                "sec_num": "3.4.3."
            },
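The default just described can be sketched as a scan backwards through the generated items for the last one not on the stoplist. The stoplist below is a tiny illustrative sample, not the actual GENESYS list of grammatical items:

```python
# Illustrative stoplist of 'grammatical items' that cannot carry the
# unmarked (nuclear) tonic; the real list runs to a few dozen items.
WEAK_ITEMS = {"the", "a", "an", "of", "to", "in", "at", "it",
              "he", "she", "is", "will", "be"}

def place_unmarked_tonic(items):
    """Return the index of the item that receives the nuclear tonic,
    i.e. the last item not in the stoplist, or None if every item is weak."""
    for i in range(len(items) - 1, -1, -1):
        if items[i].lower() not in WEAK_ITEMS:
            return i
    return None
```

Applied once per sentence, this yields exactly one nuclear tonic, as the paragraph above requires.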
            {
                "text": "This is a concept not disting~j;shed as a separate phenomenon in Halliday's ~eatment of intonation, but which Tench does treat separately. This dear separation of two semantically distinct phenomena was a significant help in developing the generative model proposed here. However the concept of 'status of information' is quite highly generalised, in the seine that it is not manifested in just one part of the overall network (as for example MOOD is). Specifically, we fred this option at many of the points where a unit is generated that is not the final matrix clause in the sentence. Many of these (thou# by no means all) have already been implemented in GENESYS, and the following are a representative sample.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "3_5.1. The Importance of this Category",
                "sec_num": null
            },
            {
                "text": "One major source of multiple intonation units is co-ordination. Thus, when GENESYS generates co-ordinated clauses (realizing co-ordinated sltuatione) such as \"Either Ivy loves Ike, or she loves Fred, or she doesn't love anybody.', she first recognizes at an abstract level that separate information Units are being assi~ed and then inserts, depending on whether the output is to be spoken or written, either (1) commas or (2) intonation trait boundaries and an appropriate tone such as Tone 2. We shall trot re.educe here the surprisingly large system network and realization rules for this area of the grammar, which merit a paper to themselves. All that needs to be said is that to develop a model of clause coordination that incorporates most of the phenomena of naturally occurring texts is a major task and that it took several mouths of work to buikl our current system. Another major source of additional intonation units is the thematizatlon of time and drcumstanee. These meanings are realized in Adjuncts of various types. They may occur in various places ha the clause, and here we shall consider just those that appear at the beginning of a clause. So far GENESYS includes eleven types, each of which may be realized by either a clause or a group (three different classes of groups being reco~ized: nominal, prepositional and quantity-quall~ groups). Note, theah that we have now identi~d a second major source of what has been termed 'clause \u00a2omb;ningL A similar approach is needed for 'dame final' dames, i.e. clauses that fill any of the ecleven types of Adjunct built into GENESYS so far, and that come late in the clause. (This is a different approach to clausecombining from that in Halliday 198.5 and so from that in the Nigel grammar at ISI; here such clauses are simply treated as embedded -so far with gains in generalizations rather than losses.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "Let us take as an e~mple the concept of time position, which is one of five types of 'circumstance of time' reco~niTed in GENF.SYSthe others being repetition, duration, periodic frequency, and usuality. While GENESYS will happily generate dames such as \"until he leaves the company\" to specify a time position, iu the case of our example Ivy has chosen the simpler structure of the prepositional group, i.e. \"until next month\". The first system to consider is:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "'TIME POSITION TI-IEMATIZATION\"-> 99%-~thematize-d timeposition (20.2) / i% thematizedJim '~ Ixffifion (20.3) .",
                "cite_spans": [
                    {
                        "start": 90,
                        "end": 109,
                        "text": "'~ Ixffifion (20.3)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "Because the answer modifies the presuppositions that the Personnel Officer brought to his question (i.e. that Peter Piper had a fixed address), Ivy decides to thematize the part of her reply that expresses this, i,e. her specification of the 'time position'. This is realized by placing the 'time position Adjunct' at an early place in the clause. (Note that this is not a 'movement rule'; there are no such rules in this generator, and no element is located until it can be located in its correct place.) The next two systems are: thematized time_position-> 80% tim~..pos~io a as separate information uniT/-20% time_.lXm\"/ion as_part of main information unit. ----spoken & time_position as separate information unit-> --20% high!i~ted thcmatized time_position / S0% ~utnil3heTaatizccLti~ ix,/ition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "The first of the two systems appfies whether or not the MODE is spoken or written (the written realization being a comma). But the writing system cannot make the distinction offered in the second, so that here again the feature [spoken] from the erR|hal MODE system is used as an entry condition. In our example Ivy chooses to",
                "cite_spans": [
                    {
                        "start": 228,
                        "end": 236,
                        "text": "[spoken]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "\u2022 resent the specification of the time position \"nntil last week\") as a separate information umt, and furthermore to klshlight it (by using a Tone 12 (a fall-rise).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "But you may have noticed that these features have no realization rules. How, then, do these choices get realized? The answer is that these features act as conditional features on the realization rules for the units that are generated, after re-entry to the overall network, on a subsequent pass though it. The reason for including the system at the rsnk of situation is that in this way we can capture the genexalisa6on that these options are relevant whatever the unit -a clause or some kind of groupthat fills the Adjunct.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "In our case the sub-network that we fred ourselves in on re-entry is the network for 'MINIMAL RELATIONSHIP PLUS THING', i.e. the netv~k from which prepositional groups are generated. Here we enter the following system (where the suffix 'mrpt' echoes the name of the overall system):",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "location mrpt-> place-mrpt (90.001) / time_mrpt (90.002).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "Here [time.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": ".mrpt] will be In'e-selected by the choice at the higher rank. The part of its realization rule concerned with intonation may appear, once again, somewhat complex, but once again it seems to correspond to the relative (but always limited) complexity of the facts of how English works: 90.002 : time_mrpt : if (on_previous_pass time_position as separat\u00a2 information unit and on first_l~ass'spoken )thenill current unit pgp then e < \"['9, if on prc'~ous pass highlighted_thematized time position then '12' by 'T ~, ' if on_previous_pass neutral thematized time_position then '2' by 'T.",
                "cite_spans": [
                    {
                        "start": 513,
                        "end": 514,
                        "text": "'",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
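The conditional structure of rule 90.002 can be sketched as a lookup over the features chosen on earlier passes. The function and feature names are illustrative stand-ins, not the GENESYS source:

```python
# Illustrative sketch of rule 90.002: when generating a prepositional group
# for a thematized time position in spoken mode, insert an intonation unit
# boundary and select the tone from the previous pass's choice:
# '12' (fall-rise) if the theme was highlighted, '2' if it was neutral.
def realize_time_mrpt(prev_features, mode):
    if ("time_position_as_separate_information_unit" not in prev_features
            or mode != "spoken"):
        return None  # conditions fail: no intonational realization
    if "highlighted_thematized_time_position" in prev_features:
        return {"boundary": "|", "tone": "12"}
    if "neutral_thematized_time_position" in prev_features:
        return {"boundary": "|", "tone": "2"}
    return None
```

This shows the general pattern: features chosen at the rank of situation condition the realization of units generated on later passes.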
            {
                "text": "As you will see, these rules insert appropriate intonation boundaries and tones. The tonic ('T')",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "is already waiting in the starting structure of the tnepositional group, so that the rule ~;mply conflates the actual tone with it. Let us assume that Ivy, in order to highlight still further the thematization of the words \" month', selects the highfighting rather than the neutral option. (The nominal group \"next month\" is generated by a further re-entry.) Finally, the system supplies the initial intonation unit boundary for any unit without one. If we assume that the rest of items generated (in components not considered in this paper) are \"he will be living at eleven Romilly Crescent, Canton\" the full output for our example is:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "] until next month/T/12 [ he will be living at eleven Romilly Crescent/T/2 I Canton/NT/1 i",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Co-ordination of Situations and Things",
                "sec_num": "3.5.2."
            },
            {
                "text": "Other sources of intonation occur in specialist mini-grammars such as those for dates and addresses. These can be quite complex, and may insert several tonics, each with an appropriate tone. Our worked example illustrates one such case: note the Tone 2 on \"Crescent\". Yet other types will be included in the next version of GENESY$, including (1) Adjuncts (which may be filled by clauses or groups) that are placed after the nuclear tonic of a clause and which carry 'supplementary information', and (2) 'nonrestrictive relative clauses' (i.e. ones that carry, once again, 'supplementary information'),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Sources of Intonation",
                "sec_num": "3.5.4."
            },
            {
                "text": "We have now completed a fairly fuU specification of the major aspects of intonation included at the present stage of the development of the GENESYS model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Summary of the lexicogrammatical generation of intonation",
                "sec_num": "3.6."
            },
            {
                "text": "To summarize: GENESYS offers the choice, on entering the first system that results in the generation of a scntenco, between [written] and [spoken] . The importance of this apparently trivial system is that the choice made in it determines whether or not one can go on to enter quite a number of more 'delicate' systems whose choices are realized in intonation. Its features also act as conditions on the realization of features chosen in the same network, or in one entered on a subsequent pass. The result is that the realization at the level of form will be in terms of either intonation or punctuation. We have seen how choices in MOOD, in INFORMATION FOCUS and in various types of 'status of informaBon' contribute together to the specification of intonation, and we have seen some of the details of how this can be implemented. The result is an integrated model that avoids the psychologically implausible approach whereby one first generates a syntax tree and a string of words at its leave% and then 'adds on' the intonation. Instead, it treats intonation as one of three modes of re8i|Tation (the other two being syntax and items),, generating the various aspects of iatonatmn at appropriate points in the generation of syntax and items.",
                "cite_spans": [
                    {
                        "start": 138,
                        "end": 146,
                        "text": "[spoken]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Summary of the lexicogrammatical generation of intonation",
                "sec_num": "3.6."
            },
            {
                "text": "It may be helpful to conclude by specifying explicitly the final stages of this process. First the generator looks for a contrastive tonic ('CT') with which to conflate the tone, and then, if there isn't one, it provides as a default a nuclear tonic ('NT') for the full matrix clause, i.e. the intonational element of structure with which the tone realizing the meaning of MOOD is conflated. The other intonation units specified by various types of information status are fitted around this central framework, receiving tones appropriate to their status. Where they are clauses these tones will be conflated with a nuclear tonic (unless, of course, there is a contrastive tonic), and where they are groups the tones will be conflated with a simple tonic. A nuclear tonic is thus one that is potentially capable of receiving the type of tone that realizes a MOOD option. It should be made clear that, in every case of the location of a nuclear tonic or a simple tonic, the element with which it is conflated must be one that is not expounded by an item from the list of inherently weak items. (Any such item may of course still receive a tonic by being contrastively stressed, as in | he HAS eaten it |.) 4. Conclusions",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Summary of the lexicogrammatical generation of intonation",
                "sec_num": "3.6."
            },
            {
                "text": "The COMMUNAL project began with a hope that it would be possible to take the insights from a Hallidayan-Tenchian view of intonation, and to develop a computational adaptation and implementation of them. A promising overall approach to the problem has indeed been developed; much of the resulting model has been worked out in considerable detail; and many large and significant portions have been implemented computationally. The framework has proved itself to be adaptable when modifications are indicated, and there is good reason to hope that aspects not as yet worked out explicitly will prove to be solvable in the framework of the present model. There is, therefore, the exciting prospect that, when our sister project gets under way and provides the necessary complementary components (no doubt with some requirements on us to adapt our outputs to their needs) we shall be in a position to offer a relatively full model of speech with discoursally and semantically motivated intonation. It will, moreover, be a principled model, and we hope that it will be capable of further extension and of fine-tuning. We feel that the use of SFG, and specifically of the type that separates clearly system networks and realization rules (as in GENESYS), gives us a facility that is sensitive to the need for both extension and fine-tuning.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4.1. Overall Summary",
                "sec_num": null
            },
            {
                "text": "Above all, the centrality in the model of choice between semantic features makes it a natural formalism for relating the 'sentence grammar' to higher components in the overall model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4.1. Overall Summary",
                "sec_num": null
            },
            {
                "text": "Finally, let me turn to a more general point. It appears that, increasingly over the last few years, the focus of interest for many researchers in NLG has switched from what we might term sentence generation to higher-level planning (which I term discourse generation). It is here, one sometimes hears it said, that 'all the really interesting work' is being done. Going implicitly with this claim is the assumption, which I have occasionally heard expressed quite explicitly, that the major problems of sentence generation have been solved.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The General Prospect in NLG",
                "sec_num": "4.2."
            },
            {
                "text": "While a lot of very impressive work has been done, and while some quite large generators have been built (e.g. as reported in McDonald 1985, Mann and Matthiessen 1985, Fawcett 1990), very many major problems remain unresolved. Specifically, many important aspects of 'sentence grammar' remain outside the scope of current generators. Where, for example, will we find a full description of a semantically and/or pragmatically motivated model of even such a well-known syntactic phenomenon as the relative clause? And what about comparative constructions (where even the linguistics literature is weak)? And there are many, many more areas of the semantics and syntax of sentences where our models are still far from adequate. There are also many issues of model construction regarding, for example, the optimal division of labour between components, the outlining of which deserves a separate paper (or book). And, even if we had models that covered all these and the many other areas competently, we have hardly begun the process of developing adequate methods for the comparison and evaluation of models. Thus there is still an enormous amount of challenging and fascinating work to do before we can say with any confidence that we have anything like adequate sentence generators. (A senior figure in German NLP circles suggested at COLING '88 that one can buy good sentence generators off the shelf. It depends how good 'good' is.)",
                "cite_spans": [
                    {
                        "start": 126,
                        "end": 139,
                        "text": "McDonald 1985",
                        "ref_id": null
                    },
                    {
                        "start": 140,
                        "end": 166,
                        "text": ", Mann and Matthiessen 1985",
                        "ref_id": null
                    },
                    {
                        "start": 167,
                        "end": 181,
                        "text": ", Fawcett 1990",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "But is this really so?",
                "sec_num": null
            },
            {
                "text": "In this paper I have illustrated two crucial points: (1) that there are indeed significant areas of language not yet adequately covered in current generators, and (less clearly, because I have had to omit the relevant section for reasons of space) (2) that the development of an adequate model of these depends on the concurrent development of discourse and sentence generators.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "But is this really so?",
                "sec_num": null
            },
            {
                "text": "Clearly, while there are in existence a number of fairly large sentence generators, we have in no way reached a situation where no further work needs to be done. I am aware, as the director of a project that seeks to provide rich coverage for as much of English as possible, that we have a great deal of work still to do, and that this holds for the sentence generator component as well as for the discourse planning systems. GENESYS already has 50% more systems than NIGEL (in the long-established Penman Project; see Appendix 1), but our rough estimate is that we need to make it at least as large again before we have anything approaching full grammatical coverage. And, of course, as everyone who has wrestled seriously with genuine natural language knows, many tricky problems will remain even then. Finding anything like the 'right' solution to many of these will require, I claim, models that have developed, in close interaction with each other, their discourse planning and their sentence generation components and their belief representation, including beliefs about the addressee. Fawcett 1988, and an illustration of a generation is presented in Tucker 1989. A fuller (but fairly informal) overall description, including some comparison with other projects, is given in Fawcett 1990. See also Fawcett (to appear). The project is planned to last 5 years, with around 6 researchers working on it. We finished the successful Phase 1 in 1989, and now (May 1990) are getting under way on Phase 2. The central component of the overall system is the generator, built at Cardiff. This is called GENESYS (because it GENErates SYStemically). Contributions from the University of Leeds in Phase 1 were to build (1) a derived probabilistic parser, called the RAP (for Realistic Annealing Parser, which develops earlier work at Leeds), and (2) the interpreter (called REVELATION, because it reveals the 'meaning' from the 'wording'). Each of these is a major development in its field. But because both build directly on the relevant aspects of GENESYS, we can characterise the coverage of the COMMUNAL system as a whole in terms of the size of GENESYS.",
                "cite_spans": [
                    {
                        "start": 1093,
                        "end": 1105,
                        "text": "Fawcett 1988",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 1286,
                        "end": 1298,
                        "text": "Fawcett 1990",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 1464,
                        "end": 1474,
                        "text": "(May 1990)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "But is this really so?",
                "sec_num": null
            },
            {
                "text": "Here is a quotation and a few facts to give you a perspective on COMMUNAL at the end of Phase 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "But is this really so?",
                "sec_num": null
            },
            {
                "text": "McDonald, Vaughan and Pustejovsky (1987:179), in referring to the Penman project at the University of S. California, say: 'Nigel, Penman's grammar ... is the largest systemic grammar and possibly the largest machine grammar of any kind.'",
                "cite_spans": [
                    {
                        "start": 10,
                        "end": 44,
                        "text": "Vaughan and Pustejovsky (1987:179)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "But is this really so?",
                "sec_num": null
            },
            {
                "text": "Although the COMMUNAL team developed GENESYS completely independently, starting from scratch with new system networks and handling realization in a rather different way, GENESYS already has many more systems than Nigel. (This is not a criticism of Nigel; the research team have been working on other components of Penman.) A major theoretical difference between the two is that the networks in GENESYS are more explicitly oriented to semantics than in Nigel. We make the assumption that the system networks in the lexicogrammar are the semantic options. GENESYS has around 600 semantic systems realized in grammar (syntax and morphology, and also intonation and punctuation; see below), while Nigel has about 400 grammatical systems. But GENESYS additionally does something that the builders of Nigel would have liked to do, but from which they have so far been prevented (by the requirement of a sponsor): it integrates system networks for vocabulary with the networks realized grammatically.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "But is this really so?",
                "sec_num": null
            },
            {
                "text": "GENESYS is still growing, so that in Phase 2 we estimate that it will more than double the number of systems realized in syntax and grammatical items. This should enable it to handle something approaching unrestricted syntax. COMMUNAL's first major achievement is therefore the size and scope of",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "But is this really so?",
                "sec_num": null
            },
            {
                "text": "The second must be seen in the wider framework of the model as a whole. It has been a long-standing goal to build a large-scale system that uses the same grammar to either generate or interpret a sentence. (Many current systems use a different grammar for each process.) The second major achievement is to have performed this task with a very large grammar, a Systemic Functional Grammar in this case. (This will be the subject of a separate paper in the future.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "GENESYS.",
                "sec_num": null
            },
            {
                "text": "A third achievement (though one less relevant in the present context) has been the development of a probabilistic parser by the Leeds part of the COMMUNAL team.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "GENESYS.",
                "sec_num": null
            },
            {
                "text": "Appendix 2 'Intonation' is a term susceptible to a wide range of interpretations. It may therefore be useful to list some major aspects of the complex task of generating natural intonation that will not be discussed here. The first four are not covered because they lie outside the current goals of the COMMUNAL project, while the last two are omitted because they will be implemented (we expect) by a sister project, support for which is currently being negotiated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "GENESYS.",
                "sec_num": null
            },
            {
                "text": "1. We shall not be concerned with the high-level planning that will tailor the text to the needs of the addressee as affected by the channel (e.g. to build in greater redundancy, in the form of repetition of subject matter in planning what to express overtly, act by act). (For the general notion of tailoring, see Paris 1988.) 2. We shall not discuss variation in intonational characteristics of the sort that distinguish between speakers of different dialects (geographical, social class, age, etc).",
                "cite_spans": [
                    {
                        "start": 316,
                        "end": 328,
                        "text": "Paris 1988.)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "GENESYS.",
                "sec_num": null
            },
            {
                "text": "3. The same goes for individual variation, i.e. intonational idiolect. 4. We shall ignore the code of tone of voice ('angry', 'conciliatory', 'delighted', etc). At the same time we recognize that it is an important semiotic system in its own right, and that in the longer run the way in which it is, as it were, superimposed on the intonation system itself must be modelled. We recognize too the problems of drawing a firm line between tone of voice and some of the quite delicate distinctions that we shall recognize in the MOOD system (cf. Halliday's 1970 term 'key').",
                "cite_spans": [
                    {
                        "start": 116,
                        "end": 159,
                        "text": "('angry', 'conciliatory', 'delighted', etc)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "GENESYS.",
                "sec_num": null
            },
            {
                "text": "5. We shall ignore any aspect of intonational variation that does not realize meaning. For example, it may be that speakers introduce semantically unmotivated variation into the pretonic segment of an intonation unit, in order to avoid monotony (cf. House and Johnson 1987). (An alternative hypothesis, of course, might be that such variation is in fact semantically motivated, but that we have not yet discovered what aspects of meaning it correlates with and how best to refer to it; this is a characteristic of much interpersonal meaning.) 6. We shall not be concerned here with the physical implementation of the output, but simply (if only it were simple!) with providing a written text output marked appropriately for input to the system which will integrate it with the speech synthesis representation of the segmental phonology.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "GENESYS.",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The research reported here was supported by grants from RSRE Malvern under contract no. ER1/9/4/2181/23, by the University Research Council of International Computers Ltd, and by Longman.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            },
            {
                "text": "Appendix 1 COMMUNAL is a major research project that applies and develops Systemic Functional Grammar (SFG) in a very large, fully working computer program. The acronym COMMUNAL stands for COnvivial Man-Machine Understanding through NAtural Language. The principles underlying the project are set out in",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "annex",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "From Text to Speech: the MITalk System",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Allen",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Allen, J. (ed.) From Text to Speech: the MITalk System. Cambridge: Cambridge University Press.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Selected theoretical papers from the Ninth International Systemic Workshop",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "1",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Systemic perspectives on discourse Vol 1: Selected theoretical papers from the Ninth International Systemic Workshop. Norwood, NJ: Ablex.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Computational models of discourse",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Brady",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "C"
                        ],
                        "last": "Berwick",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Brady, M., and Berwick, R.C. (eds), 1983. Computational models of discourse. Cambridge, Mass: MIT Press,",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Cognitive linguistics and social interaction: towards an integrated model of a systemic functional grammar and the other components of an interacting mind",
                "authors": [
                    {
                        "first": "R",
                        "middle": [
                            "P"
                        ],
                        "last": "Fawcett",
                        "suffix": ""
                    }
                ],
                "year": 1980,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fawcett, R.P., 1980. Cognitive linguistics and social interaction: towards an integrated model of a systemic functional grammar and the other components of an interacting mind. Heidelberg: Julius Groos and Exeter University.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Language generation as choice in social interaction",
                "authors": [
                    {
                        "first": "R",
                        "middle": [
                            "P"
                        ],
                        "last": "Fawcett",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "27--49",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fawcett, R.P., 1988. 'Language generation as choice in social interaction'. In Zock and Sabah (eds) 1988b, 27-49.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "The COMMUNAL Project: two years old and going well",
                "authors": [
                    {
                        "first": "R",
                        "middle": [
                            "P"
                        ],
                        "last": "Fawcett",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fawcett, R.P., 1990. 'The COMMUNAL Project: two years old and going well'. In Network No. 13.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "A systemic functional approach to selectional restrictions, roles and semantic preferences'. Accepted for Machine Translation",
                "authors": [
                    {
                        "first": "R",
                        "middle": [
                            "P"
                        ],
                        "last": "Fawcett",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fawcett, R.P., (to appear). 'A systemic functional approach to selectional restrictions, roles and semantic preferences'. Accepted for Machine Translation.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Towards a systemic flowchart model for local discourse structure",
                "authors": [
                    {
                        "first": "R",
                        "middle": [
                            "P"
                        ],
                        "last": "Fawcett",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Van Der Mije",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Van Wissen",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "116--159",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fawcett, R.P., van der Mije, A., and van Wissen, C., 1988. 'Towards a systemic flowchart model for local discourse structure'. In Fawcett and Young 1988, pp. 116-43.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "New developments in systemic linguistics",
                "authors": [
                    {
                        "first": "R",
                        "middle": [
                            "P"
                        ],
                        "last": "Fawcett",
                        "suffix": ""
                    },
                    {
                        "first": "Young",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "J"
                        ],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "",
                "volume": "2",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fawcett, R.P., and Young, D.J., (eds.) 1988. New developments in systemic linguistics, Vol 2: Theory and application. London: Pinter.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Intonation and grammar in British English",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "A K"
                        ],
                        "last": "Halliday",
                        "suffix": ""
                    }
                ],
                "year": 1967,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Halliday, M.A.K., 1967. Intonation and grammar in British English. The Hague: Mouton.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "A course in spoken English: intonation",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "A K"
                        ],
                        "last": "Halliday",
                        "suffix": ""
                    }
                ],
                "year": 1970,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Halliday, M.A.K., 1970. A course in spoken English: intonation. London: Oxford University Press.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Laboratorio degli studi linguistici 1989/1. Camerino: Italy: Universita degli Studi di Camerino",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "7--27",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Laboratorio degli studi linguistici 1989/1. Camerino, Italy: Universita degli Studi di Camerino (pp. 7-27).",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Advances in natural language generation",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Zock",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Sabah",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "",
                "volume": "1",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zock, M., and Sabah, G. (eds) 1988a. Advances in natural language generation Vol 1. London: Pinter.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "An introduction to functional grammar",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "A K"
                        ],
                        "last": "Halliday",
                        "suffix": ""
                    }
                ],
                "year": 1985,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Halliday, M.A.K., 1985. An introduction to functional grammar. London: Arnold.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Why to speak, what to say and how to say it: modelling language production in discourse",
                "authors": [
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Houghton",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "D"
                        ],
                        "last": "Isard",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "112--142",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Houghton, G. and Isard, S.D.,1987. 'Why to speak, what to say and how to say it: modelling language production in discourse'. In Morris 1987, pp. 112- 30.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "(ods) 1988b. Advances natural language generation Vet 2",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Zoek",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Sabah",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zoek, M., and Sabah, G., (ods) 1988b. Advances natural language generation Vet 2. London: Pinter.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "The production of spoken 4;~1ogueL In Zock and Sabah",
                "authors": [
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Houghton",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Pearson",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "112--142",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Houghton, G., and Pearson, M., 1988. 'The production of spoken 4;~1ogueL In Zock and Sabah 1988a, pp. 112-30.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Enfivening the intonation in Text-to-Speech Synthesis: an 'Accent-Unlt' Model",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "House",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Johnson",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "Procs lit.it ICPhS",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "House, J. & Johnson, M. (1987) 'Enfivening the intonation in Text-to-Speech Synthesis: an 'Accent-Unlt' Model', Procs lit.it ICPhS, Tallian.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Natural language generation",
                "authors": [
                    {
                        "first": "Gerard",
                        "middle": [],
                        "last": "Kempen",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kempen, Gerard, (ed) 1987. Natural language generation. Dordrecht: Ma~inu~ Nijhoff.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "User Models in Dialogue Systems",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Kobsa",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Wahlster",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kobsa, A., and Wahlster, W. (cds.) User Models in Dialogue Systems. Berlin: Springer.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "A demonstration of the Nigel text generation computer program'. In Ma_nn and Matthiessen",
                "authors": [
                    {
                        "first": "W",
                        "middle": [
                            "C"
                        ],
                        "last": "Mann",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "Mj M"
                        ],
                        "last": "Matthiessen",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "50--83",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mann, W.C., and Matthiessen, C.MJ.M., 1983/85. 'A demonstration of the Nigel text generation computer program'. In Ma_nn and Matthiessen 1983 and in Benson and Grcav\u00a2~ 1985, pp.50-83.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Natural language generation as a computational problem",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Mcdonald",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "209--65",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "McDonald, D., 1983. 'Natural language generation as a computational problem'. In Brady and Berwick 1983, pp.209-65.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Factors contributing to effideacy in natural language generation",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "D"
                        ],
                        "last": "Pustejovsky",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "159--181",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pustejovsky, J.D., 1987. 'Factors contributing to effideacy in natural language generation'. In Kempen 1987, pp. 159-181.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Tailoring object descriptions to a user's expertise'. In Kobsa and Wahlqer",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Paris",
                        "suffix": ""
                    }
                ],
                "year": 1987,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Paris, C.L 'Tailoring object descriptions to a user's expertise'. In Kobsa and Wahlqer 1988. Teach, P., 1987. The ro/e.s ofintonation in English discourse. PhD thesis, University of Wales.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "Specification of intonation for Prototype Generator 2. (COMMUNAL Report No 6) Cardiff: Computational Linguistics Unit",
                "authors": [
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Teach",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "P"
                        ],
                        "last": "Fawcett",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Teach, P., and Fawcett, R.P., 1988. Specification of intonation for Prototype Generator 2. (COMMUNAL Report No 6) Cardiff: Computational Linguistics Unit, University of Wales CoUego of Cardiff.",
                "links": null
            },
            "BIBREF28": {
                "ref_id": "b28",
                "title": "Natural language generation with a systemic functional gTa,-m~r",
                "authors": [
                    {
                        "first": "G",
                        "middle": [
                            "H"
                        ],
                        "last": "Tucker",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tucker, G.H., 1989. 'Natural language generation with a systemic functional gTa,-m~r'. In",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF1": {
                "html": null,
                "text": "The realiTafion of[contrastively..new]  is that a contrastive tonic is conflated with the element of structure that the item expotmds. TI~ fimple version implemented in PG2 is as follows:'INFORMATION FOCUS'-> 99% no element marked as contrastively.new ] 1% elementmar'fked_as..~ont'rastively..new.",
                "type_str": "table",
                "content": "<table><tr><td>element marked as contrastively new-&gt;</td></tr><tr><td>50% ~ontrast~e newness on_.~larity (18.1) /</td></tr><tr><td>50% contrastiv~newnes~on process (18.2) /</td></tr><tr><td>0% other.</td></tr><tr><td>Realization rule 18.1 states the complex set of</td></tr><tr><td>conditions for conflating a eontrastive tonic ('CT')</td></tr><tr><td>with the appropriate element; for the POLARITY</td></tr><tr><td>system ('positive' vs. 'negative') this is typically the</td></tr><tr><td>Operator (which may have to be supplied by a</td></tr></table>",
                "num": null
            }
        }
    }
}