{ "paper_id": "W89-0212", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:45:38.621331Z" }, "title": "P A R SIN G SPEEC H FOR ST R U C T U R E AND PR O M IN EN C E", "authors": [ { "first": "Dieter", "middle": [], "last": "Huber", "suffix": "", "affiliation": { "laboratory": "", "institution": "Com putational Linguistics University o f Goteborg and Department o f Information Theory C halm ers University o f Technology", "location": { "postCode": "S-41 2 96", "settlement": "Gothenburg", "country": "Sweden" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "W89-0212", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "The purpose of parsing natural language is essentially to assign to a linear input string of symbols a formalized structural description that reflects the underlying linguistic (syntactic and/or semantic) properties of the utterance and can be used for further information processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "In most practical applications, this delinearization [4] is acheived by som e kind of recursive panern matching strategy which accepts texts in standard orthographic writing, i.e. composed of discrete symbols (the letters and signs of some specified alphabet) and blocks of svmbols (words separated by blanks) as input, and rewrites them step by step, in accordance with (1) a lexicon and (2) a finite set of production rules defined in a formal grammar, into a p arse tree or a bracketed string. This approach is com m only restricted to the domain of the sentence as maximal unit of linguistic processing, thus adhering to the traditional view that larger units like paragraphs, texts and discourse, are formed by mere juxtaposition of autarchic, independently parsed sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "Clearly, this kind of procedure developed for parsing written language material is not im mediately applicable to speech processing purposes. For one. natural human speech does not normally present itself in the acoustical medium as a simple linear string of discrete, well demarcated and easily identifiable sym bols, but constitutes a continuously varying signal w hich incorporates virtually unlimited allophonic variations, reductions, elisions, repairs, overlapping segmental representations, grammatical deficiencies, and potential ambiguities at all levels of linguistic description. There are no \"blanks\" and \"punctuation marks\" to define words or indicate sentential boundaries in the acoustic domain. Syntactic structures at least in spontaneous speech are often fragmentary or highly irregular, and cannot be easily defined in terms of established grammatical theory [ In the following sections, the grammar formalism, the lexicon and the parser will be presented as separate modules. Problems of integration with other language models (linguistic and stochastic) will be discussed in the summary.", "cite_spans": [ { "start": 878, "end": 879, "text": "[", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "The grammar formalism adopted for syntactic/semantic parsing of the speech input is based on Fillmore's case g ra m m a r [ 11] . 
According to this approach, a sentence in its basic structure (deep structure as opposed to surface structure) is composed of a modality component M and the proposition P:", "cite_spans": [ { "start": 122, "end": 127, "text": "[11]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "GRAMMAR", "sec_num": null }, { "text": "S => M + P (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GRAMMAR", "sec_num": null }, { "text": "where M defines a series of modes which describe aspects of the sentence as a whole: M => tense, aspect, ..., mood (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GRAMMAR", "sec_num": null }, { "text": "and P consists of the verb together with various cases related to it:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GRAMMAR", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P => Verb + C_1 + C_2 + ... + C_n", "eq_num": "(3)" } ], "section": "GRAMMAR", "sec_num": null }, { "text": "with the indices in C_1 ... C_n denoting that a particular case relationship can only occur once in a proposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GRAMMAR", "sec_num": null }, { "text": "Each case is defined according to Simmons [28] as:", "cite_spans": [ { "start": 42, "end": 46, "text": "[28]", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "GRAMMAR", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C => K + NP", "eq_num": "(4)" } ], "section": "GRAMMAR", "sec_num": null }, { "text": "where K (which may be null) stands for the preposition which introduces the noun phrase and defines its relationship with the verb. In these expansions, the parentheses denote optional elements, the asterisk means that the element may be repeated, and the vertical bar indicates alternation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GRAMMAR", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K => Prep", "eq_num": "(5)" } ], "section": "GRAMMAR", "sec_num": null }, { "text": "A full case grammar representation can thus be described as a tree structure in the form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GRAMMAR", "sec_num": null }, { "text": "Within the general framework of case grammar, the following modes and their respective possible values have been adopted:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modality", "sec_num": null }, { "text": "The modality of the utterance as a whole is ultimately determined by the combination of the individual values assigned to each of the modes listed above. At least five of these eight modes, i.e. form, mood, essence and the adverbials of time and manner, have been shown to be directly reflected in the intonation contours of natural human speech (e.g. [2], [5], [20], [27]). For instance, emphatic pronunciation appears to be universally signaled by larger pitch movements both in the local (emphatic accent) and in the global (wider key) domain. 
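To make the formalism in (1)-(5) concrete, the following Python sketch mirrors the deep-structure decomposition S => M + P, P => Verb + C_1 + ... + C_n and C => K + NP as a small data model. It is an illustration only: the class and field names are assumptions of this edition, not part of the system described in the paper; the example verb 'hacka' (to chop) is taken from the lexicon section later in the paper, and the noun phrases are invented.

```python
# Minimal sketch (assumed class and field names) of the case-grammar deep
# structure: S => M + P, P => Verb + C1 + ... + Cn, C => K + NP.
from dataclasses import dataclass, field
from typing import Dict, Optional

MODES = {
    'tense':   {'present', 'past', 'future'},
    'aspect':  {'perfect', 'imperfect'},
    'essence': {'positive', 'negative', 'indeterminate'},
    'form':    {'simple', 'emphatic', 'progressive'},
    'modal':   {'can', 'may', 'must'},
    'mood':    {'declarative', 'imperative', 'interrogative'},
    'manner':  {'adverbial'},
    'time':    {'adverbial'},
}

@dataclass
class Case:                         # C => K + NP
    k: Optional[str]                # preposition introducing the NP; may be null
    np: str

@dataclass
class Proposition:                  # P => Verb + C1 + ... + Cn
    verb: str
    cases: Dict[str, Case] = field(default_factory=dict)

    def attach(self, role: str, case: Case) -> None:
        if role in self.cases:      # a case relationship occurs only once per proposition
            raise ValueError('case ' + role + ' already filled')
        self.cases[role] = case

@dataclass
class Sentence:                     # S => M + P
    modality: Dict[str, str]        # one value per mode, drawn from MODES
    proposition: Proposition

# Example (hypothetical analysis): 'hacka' (to chop) with an agent and a neutral object.
p = Proposition('hacka')
p.attach('agent', Case(k=None, np='kocken'))
p.attach('neutral', Case(k=None, np='veden'))
s = Sentence({'tense': 'present', 'mood': 'declarative'}, p)
```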
Imperative mood, in addition to displaying on the average shorter durations per intonation unit, is usually associated with higher F 0 onsets and steeper declination line falls, whereas declarative mood is typically cued by low. target-value F offsets, often combined with a short period of laryngealization or devoicing. Adverbials. botft of manner and time, are commonly processed in terms of separate intonation units, especially w'hen they appear at utterance-final positions. The interrogative mood, at least as far as non-WH-questions are concerned, is signaled intonationally in most languages studied so far by rising intonation patterns, terminally and/or globally (the latter predominantly with respect to the topline).", "cite_spans": [ { "start": 351, "end": 354, "text": "[2]", "ref_id": null }, { "start": 357, "end": 360, "text": "[5]", "ref_id": null }, { "start": 363, "end": 367, "text": "[20]", "ref_id": "BIBREF10" }, { "start": 370, "end": 374, "text": "[27]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "T E N S E -present, past, future A SPEC T -perfect, imperfect E SSEN C E -positive, negative, indeterminate F O R M -simple, emphatic, progressive M O D A L -can. may, must M O O D -declarative, imperative, interrogative M A N N E R -adeverbial T IM E -adverbial", "sec_num": null }, { "text": "As shown earlier, the speech segmentation algorithm not only aims to unearth the underlying intonation/information structure of the utterance, but also represents the calculated values of various intonation unit parameters (i.e. duration, declination slope, onset, offset and resetting, for the baselines and toplines respectively) in a 10-parameter vector which is used for a first broad classification and hierarchization (see references In summary, modality provides essential information about the propositional content of the utterance. It also provides valuable cues to word order (e.g. interrogative mood is often associated with inverted word order), word structure (e.g. imperative sentences usually lack a lexical expression for the subject, which is comm only understood to be the addressed person), and constituent identity. Determining the modality at an early stage of the parsing process by probabilistic evaluation of the intonational cues specified by the segmentation algorithm thus helps (1) to establish important aspects of the overall meaning of the utterance, and (2) to judge the plausibility of alternative word order hypotheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T E N S E -present, past, future A SPEC T -perfect, imperfect E SSEN C E -positive, negative, indeterminate F O R M -simple, emphatic, progressive M O D A L -can. may, must M O O D -declarative, imperative, interrogative M A N N E R -adeverbial T IM E -adverbial", "sec_num": null }, { "text": "T-H 3 V 0 02 p \u00ab. 41 0 02 r 1-1*1 _ ..\u00ab > ft T, i| k A i -irw 5 :3 2 n -i\u00ab3 0 01 ooo --------I L f ! -------im, (j 1 I k 0\u00ae -------- - 0 00 -----^U l", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T E N S E -present, past, future A SPEC T -perfect, imperfect E SSEN C E -positive, negative, indeterminate F O R M -simple, emphatic, progressive M O D A L -can. 
may, must M O O D -declarative, imperative, interrogative M A N N E R -adeverbial T IM E -adverbial", "sec_num": null }, { "text": "In traditional case grammar, the main verb in the proposition constitutes the kernel to which the cases are attached, and the auxiliary verbs contain much of the information about modality. It is thus important to detect and identify the verbal elements of the utterance at an early stage of the parsing process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition", "sec_num": null }, { "text": "It Albeit for obvious reasons this situation is far from optimal for a caseframe approach to continuous speech parsing, we consider the fact to be able to reliably identify about one third of the potential verbal case heads in natural human speech, and to use them to construct a skeleton o f verb kernels around which a case grammar representation o f the original utterance can be built, as a promising step in the right direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition", "sec_num": null }, { "text": "Several attempts have been reported in the literature to extend the traditional casetheoretic approach to include even nominal caseframes, i.e. to construct case grammars that use caseframes not only to describe verbs but also the head nouns o f noun phrases (see for instance [15] ). W ork in this direction is ongoing and will be reported in later papers.", "cite_spans": [ { "start": 277, "end": 281, "text": "[15]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Proposition", "sec_num": null }, { "text": "The lexicon to be used with the parser is specially designed for speech processing applications (text-to-speech. speech recognition, speech coding, etc) and supports the caseframe approach to continuous speech parsing outlined in this studv. Its format is defined as a Swedish monolingual dictionary which contains in addition to the standard entries (head, homograph index, part-of-speech. inflexion code, morphological form classes, etc) also: 1 -a narrow phonetic transcription reflecting standard pronunciation usage: 2 -the textual frequency rating based on a one-million word korpus of Swedish newspaper articles:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "International Parsing Workshop '89 LEXICON", "sec_num": null }, { "text": "3 -an indexed caseframe description for each verb entry.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "International Parsing Workshop '89 LEXICON", "sec_num": null }, { "text": "For the latter purpose, the following reduced set of cases has been adopted from Stockwell. Schachter and Partee [29] . with definitions compiled by the author:", "cite_spans": [ { "start": 113, "end": 117, "text": "[29]", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "International Parsing Workshop '89 LEXICON", "sec_num": null }, { "text": "-animate instigator of the action DATIVE -animate recipient of the action INSTRUMENTAL -inanimate object used to perform the action LOCATIVE -location or orientation of the action NEUTRAL -the thing being acted upon (combining the objective and the factive in ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "AGENT", "sec_num": null }, { "text": "in which each case can be either required (req) or optional (opt) or disallowed (d is) and must be marked accordingly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ". 
n e u t r a l] (8)", "sec_num": null }, { "text": "Since several different verbs often share the same particular kind of caseframe. we propose to store the entire set of 3 5 logically possible caseframes as an indexed list, using the indices as pointers (identifiers) with the respective verb entries in the lexicon. Thus, instead of listing the complete caseframe specification together with the lexical entry as in the following example for the Swedish verb \"hacka\" ( ", "cite_spans": [ { "start": 417, "end": 418, "text": "(", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": ". n e u t r a l] (8)", "sec_num": null }, { "text": "Cr>mmgen Based on the probabilistic data for verb-prominence correspondences established in the previous section, the verbal components of the utterance are localized and used as points of departure for further linguistic processing. As shown among others by Waibel [30] for English and Bannert [j] for German, these pitch obtrusions provide the must reliable cue for the automatic detection if s t r e s s in continuous speech recognition, i.e. marking the \"important'' words carrying most of the semantic information content in the utterance. In addition, stressed syllables are commonly pronounced with longer durations and better articulation, which qualifies them as \"islands of phonemic reliability\", generally scoring better recognition rates than the unstressed (reduced, neutralized) parts of the utterance.", "cite_spans": [ { "start": 266, "end": 270, "text": "[30]", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "S W E D I S H (male speaker) E N G L I S H (female speaker) J A P A N E S E (male speaker)", "sec_num": null }, { "text": "r * < 1 ! ^z : \" C .-3.^\u00bb \u2022 1 .... 1 _ I ,\u00a3r'^4 Jl = a -E^-a r s B - > n s -1 0 1 1 ) * ! 4 v \u2022 \u00bb I J 4 ! 1 \u2022 \u00bb 2 ) \u00ab", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "^~-", "sec_num": null }, { "text": "Parsing is run in parallel with the acoustic-phonetic classifier, following a hypothesisdriven island parsing strategy, i.e. using the areas of prominence (islands of reliability) as points of departure for inside-out processing. In other words, the classifier first forms a hypothesis about the phonetic identity of the speech segment(s) at the center of prominence. After that, the island is gradually expanded in both directions by verifying neighbour phone candidates using continuously variable hidden Markov models (H M M ) [25] based on precompiled allophone/diphone/triphone statistics [16] and bounded by phonological constraints expressed in the form of finite state transition networks as proposed among others by Church [ioj. W e like to believe that the approach presented in this study shows promise not only for spoken input parsing in general, but for a number of practical applications in the field of speech processing including telecommunication, interpreting telephony, automatic keyword extraction, and text-to-speech synthesis. Linear regression lines are easily calculated and require onlv little computational effort, which makes the segmentation algorithm a fast, robust and objective technique for computer speech applications. Modulating voice for increased informativitv exploits a natural strategy that human speakers use quite automatically in comm unicative situations involving channel deficiency (e.g. 
due to static, transmission noise, or masking effects) and/or different kinds of ambiguity Prominent pitch excursions (together with greater segmental durations) constitute a universally used feature of language that is employed to signal new versus given, contrastive versus presupposed, thematic versus rhematic information in connected speech utterances [7] and can thus be used as a reliable cue to quickly identify the semantically potent keywords in the m essage. In addition, the frequency range covered by voice phenomena (intonation, accentuation, larvngealization) lies safely within the normal band limits of telecommunication, which qualifies F 0 as a natural, versatile, and accessible code for human-computer interaction via telephone.", "cite_spans": [ { "start": 530, "end": 534, "text": "[25]", "ref_id": "BIBREF15" }, { "start": 594, "end": 598, "text": "[16]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "^~-", "sec_num": null }, { "text": "Finallv. text-to-speech systems using standard syntactic parsers designed to find 'major svntactic boundaries\" at which the intonation contour needs to be broken into separate units that help the listener to decode the message, invariably come up with the same two kinds of problems [23]:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S U M M A R Y A N D C O N C LU S IO N S T he speech segmentation, classification and hierarchization components have been developed for", "sec_num": null }, { "text": "1 -they tend to produce not one (the most probable, semantically most plausible) but several alternative parses: 2 -they produce too many boundaries at falsely detected or inappropriate sentence locations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S U M M A R Y A N D C O N C LU S IO N S T he speech segmentation, classification and hierarchization components have been developed for", "sec_num": null }, { "text": "Perceptual evaluation of these synthesized contours reveals that listeners get distracted and often even piainlv confused by too many prosodicallv marked boundaries, while too few prosodic breaks just sound like as if the speaker simply is talking too fast. These findings not onlv show that the amount of segmentation and the correspondence between syntactic and prosodic units are dependent on the rate of speech, but that listeners apparently neither expect, nor need, nor even want prosodically cued information about all the potential richness in syntactic structure described by modern syntactic theories, in order to decode the intended meaning of an utterance. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S U M M A R Y A N D C O N C LU S IO N S T he speech segmentation, classification and hierarchization components have been developed for", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Phonological Parsing in Speech Recognition", "authors": [ { "first": "K W", "middle": [], "last": "Church", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K W Church. \"Phonological Parsing in Speech Recognition\". Kluwer Academic Publishers. 
1987", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The case for case", "authors": [ { "first": "Ch", "middle": [], "last": "Fillmore", "suffix": "" }, { "first": "; E", "middle": [], "last": "Bach", "suffix": "" }, { "first": "R T", "middle": [], "last": "Harms", "suffix": "" } ], "year": 1968, "venue": "Universals in Linguistic Theory. Holt. Rinehart and W inston", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ch Fillmore. \"The case for case\", in: E Bach and R T Harms. Universals in Linguistic Theory. Holt. Rinehart and W inston. Chicago 1968", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Performance structures: a psycnolinguistic and linguistic appraisal", "authors": [ { "first": "J P", "middle": [], "last": "Gee", "suffix": "" }, { "first": "F", "middle": [], "last": "Grosjean", "suffix": "" } ], "year": 1983, "venue": "Cognitive Psychology", "volume": "15", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J P Gee and F Grosjean. \"'Performance structures: a psycnolinguistic and linguistic appraisal\". Cognitive Psychology 15. 1983", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Finite state processing of tone system s", "authors": [ { "first": "D", "middle": [], "last": "Gibbon", "suffix": "" } ], "year": 1987, "venue": "ACL Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D Gibbon. \"Finite state processing of tone system s\". ACL Proceedings. 1987", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Theme and Information in the English Clause", "authors": [ { "first": "M A K", "middle": [], "last": "Halliday", "suffix": "" } ], "year": 1976, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M A K Halliday. '\"Theme and Information in the English Clause\". Oxford University Press. 1976", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Parsing spoken language: a semantic caseframe approach", "authors": [ { "first": "J", "middle": [], "last": "Ph", "suffix": "" }, { "first": "", "middle": [], "last": "Hayes", "suffix": "" }, { "first": ". J G", "middle": [], "last": "A G Hauptmann", "suffix": "" }, { "first": "M", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1986, "venue": "ACL Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ph J Hayes. A G Hauptmann. J G Carbonell and M Tomita. \"Parsing spoken language: a semantic caseframe approach\". ACL Proceedings. 1986", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Probability distribution of allophones, diphones and triphones in phonetic transcriptions of Swedish newspaper text", "authors": [ { "first": "P", "middle": [], "last": "Hedelin", "suffix": "" }, { "first": "A", "middle": [], "last": "Huber", "suffix": "" }, { "first": "", "middle": [], "last": "Leijon", "suffix": "" } ], "year": 1988, "venue": "Chalmers Report", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P Hedelin. D Huber and A Leijon. \"Probability distribution of allophones, diphones and triphones in phonetic transcriptions of Swedish newspaper text\". Chalmers Report 8. 
1988", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Svensk Uttalslexicon (Swedish Pronunciation L exicon", "authors": [ { "first": "P", "middle": [], "last": "Hedelin", "suffix": "" }, { "first": "P", "middle": [], "last": "Jonsson", "suffix": "" }, { "first": "", "middle": [], "last": "Lindblad", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P Hedelin. A Jonsson and P Lindblad. \"Svensk Uttalslexicon (Swedish Pronunciation L exicon)\". Chalmers Report 4. 1989", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "On the Communicative Function of Voice in Human-Computer Interaction", "authors": [ { "first": "D", "middle": [], "last": "Huber", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D Huber. \"On the Communicative Function of Voice in Human-Computer Interaction\".", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Larvngealization as a Boundary Cue in Read Speech", "authors": [ { "first": "D", "middle": [], "last": "Huber", "suffix": "" } ], "year": 1988, "venue": "Proceedings o f the Second Swedish Phonetics Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D Huber. \"Larvngealization as a Boundary Cue in Read Speech\". Proceedings o f the Second Swedish Phonetics Conference. Lund 1988", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Aspects of the Communicative Function of Voice in Text Intonation", "authors": [ { "first": "D", "middle": [], "last": "Huber", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D Huber. \"Aspects of the Communicative Function of Voice in Text Intonation\". PhD Dissertation. Goteborg 1988", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A statistical approach to the segmentation and broad classification of continuous speech into phrase-sized information units", "authors": [ { "first": "D", "middle": [], "last": "Huber", "suffix": "" } ], "year": 1989, "venue": "Proceedings ICA SSP 89", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D Huber. \"A statistical approach to the segmentation and broad classification of continuous speech into phrase-sized information units\". Proceedings ICA SSP 89. Glasgow 1989", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Prosodic Contributions to the Resolution of Ambiguity", "authors": [ { "first": "D", "middle": [], "last": "Huber", "suffix": "" } ], "year": 1989, "venue": "Proceedings o f the Conference N O RD IC PRO SODY V. Abo (Finland)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D Huber. \"Prosodic Contributions to the Resolution of Ambiguity\". Proceedings o f the Conference N O RD IC PRO SODY V. Abo (Finland). 1989", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Review of text-to-speech conversion for English", "authors": [ { "first": "D H", "middle": [], "last": "Klatt", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D H Klatt. \"Review of text-to-speech conversion for English\". Jo u rn al o f the A coustical Society o f A m erica 82(3). 
1987", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Ungram m atically and extra-grammaticality in natural language understanding systems", "authors": [ { "first": "N K", "middle": [], "last": "S C Kwasnv", "suffix": "" }, { "first": "", "middle": [], "last": "Sondheimer", "suffix": "" } ], "year": 1979, "venue": "Proceedings o f the 17th Annual Meeting o f the A ssociation fo r Com putational Linguistics. La Jolla. Cal", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S C Kwasnv and N K Sondheimer. \"Ungram m atically and extra-grammaticality in natural language understanding systems\". Proceedings o f the 17th Annual Meeting o f the A ssociation fo r Com putational Linguistics. La Jolla. Cal. 1979", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Continuously variable hidden Markov models for automatic speech recognition", "authors": [ { "first": "", "middle": [], "last": "S E Levinson", "suffix": "" } ], "year": 1986, "venue": "Com puter Speech an d L anguage \\", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S E Levinson. \"Continuously variable hidden Markov models for automatic speech recognition\". Com puter Speech an d L anguage \\ . 1986", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Repliker utan Granser (Boundless Conversational E xchanges", "authors": [ { "first": "J", "middle": [], "last": "Lofsrrom", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J Lofsrrom. \"Repliker utan Granser (Boundless Conversational E xchanges)\". PhD Dissertation. Goteborg 1989", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Intonarische K ennzeichnung von Satzmodi", "authors": [ { "first": "W", "middle": [], "last": "Oppenrieder", "suffix": "" }, { "first": ";", "middle": [], "last": "Intonationsforschungen", "suffix": "" }, { "first": "Max", "middle": [ "N" ], "last": "Iem Ever", "suffix": "" }, { "first": "", "middle": [], "last": "Verlag", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W Oppenrieder. \"Intonarische K ennzeichnung von Satzmodi\". in: H Altmann (ed) Intonationsforschungen, Max N iem ever Verlag. Tubingen 1988", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semantic networks: their computation and use for understanding English sentences", "authors": [ { "first": "R F", "middle": [], "last": "Sim M Ons", "suffix": "" } ], "year": 1973, "venue": "Computer M odels of Thought and Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R F Sim m ons. \"Semantic networks: their computation and use for understanding English sentences\", in: R C Schank and K M Colby (eds). Computer M odels of Thought and Language, W H Freeman & Co. San Francisco 1973", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The Major Syntactic Structures of English", "authors": [ { "first": "", "middle": [], "last": "R P Stockweil", "suffix": "" }, { "first": "B H Partee ;", "middle": [], "last": "Schachter", "suffix": "" }, { "first": "", "middle": [], "last": "Hoit", "suffix": "" }, { "first": "W Inston. 
N Ew", "middle": [], "last": "Rinehart", "suffix": "" }, { "first": "", "middle": [], "last": "York", "suffix": "" } ], "year": 1973, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R P Stockweil. P Schachter and B H Partee. \"The Major Syntactic Structures of English\", Hoit, Rinehart and W inston. N ew York 1973", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Prosodic knowledge sources for word hypothesization in a continuous speech recognition system", "authors": [], "year": 1987, "venue": "Proceedings IC A SSP 87", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A W aibel, \"Prosodic knowledge sources for word hypothesization in a continuous speech recognition system\". Proceedings IC A SSP 87. Dallas 1987", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Responding to potentially unparseable sentences", "authors": [ { "first": "R", "middle": [], "last": "Black", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R M W eischedel and L Black. \"Responding to potentially unparseable sentences\".", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "26]. Last not least, important components of the total message are typically encoded and transmitted by nonverbal and even nonvocal means of com m unication [is]. On the other hand, human speakers organize and present their speech output in terms of well defined and clearly delimited chunks rather than as an unstructured, amorphous chain of signals. This division into chunks is represented among other parameters in the time course of voice fundamental frequency ( F 0 ) where it appears as a sequence of coherent intonation units optionally delimited by pauses and/or periods of laryngeaiization [19], and containing at least one salient pitch movement [9].[20]. Human listeners are able to perceive these units as \"natural groups\" forming a kind of perform ance structure [12], which reflects the information structure of the utterance [ 14] and is used to decode the intended meaning of the transmitted m essage. This involves (I) chopping up the message into information units in accordance with the speaker's and listener's shared state of knowledge. (2) organizing these units both internally and externally in terms of given and new information, and (3) selecting one or at the most two elements in each unit as points o f prominence within the message. SYSTEM OVERVIEW W hile written language input is generally presented to the parser with both the term inal svm bols (i.e. words) and the starting sym bols or roots (i.e. sentences) clearly delineated and set off from each other by spaces and/or punctuation marks, thus imposing the parsing algorithm with the task to identify som kind of intermediate structure(s) representation com posed o f variab les from a finite set of non-terminal sym bols or categories (i.e. the phrase structure, constituent structure, functional structure, etc), essentially the reverse applies when parsing connected speech input. That is, the continuously varying speech signal is presented to the analysis with som e kind of intermediate structure(s) representation either immediately observable (e.g. the voiced-unvoiced distinction between individual speech sounds) or readily deducible (e.g. 
the prosodic structure expressed in patterns o f intonation and accentuation) without prior knowledge of higher-level linguistic information, thus leaving the parser with the task to recognize (or rather support the recognition of) both the individual words and the full sentences. This reverse relationship between text parsing on one side and speech parsing on the other is illustrated schematically in figure 1. It must be appreciated in this context that the intermediate structure(s) representations in text versus speech parsing are neither identical nor necessarily isomorphical. NL text versus parsing connected speech The speech parsing algorithm presented in this study is thus initiated by a data-driven ,spCeCh segmentation stage that exploits the prosodically cued chunking present in the acousticsi! speech signal and uses it to perform automatic, speaker-independent segmentation of continuous speech into functionally defined intonation/information units. For this purpose. o global declination lines are computed by the linear regression method, which approximate the trends in time of the peaks (topline) and vallevs (baseline) of F . across the utterance Computation is reiterated every time the Pearson Product Moment Correlation Coefficient drops below a preset level of acceptability. Segmentation is thus performed without prior knowledge ot higher-level linguistic information, with the termination of one unit being determined bv the general resetting of the intonation contour wherever in the utterance it may occur. Earlier studies in the correlations between prosodv and grammar have shown that the intonation units thus established time-align in a clearly defined way with units of linguistic structure that can be described in probabilistic terms with respect to three interlacing levels ot analysis: constituent structure, linear word count and duration [i].[:o]. Furthermore, once the extent of an intonation unit has been established both in the time and in the frequency dom ain, areas of prominence can easily be detected as overshooting or undershooting F excursions that provide valuable points of departure for further linguistic analysis and island parsing strategies. A detailed description of the segmentation algorithm together with an evaluation o f its performance on three medium sized Swedish texts read by four native speakers (two female, two male) is presented in [21]. Problems of classification by means o f hierarchically organized, non-parametric. multiple-hypothesis classifiers are discussed in [6], A statistical evaluation and coarse classification of the time-alignment between the intonation units established by our segmentation algorithm and features of linguistic structure at the level o f a complete sentence (S). clause (C). noun phrase (S U B ), verb phrase (V P), adverbial modifier (A D V ) and parenthetical construction (PAR) can be found in [20] and [21 ]. The present paper deals specifically with design aspects of a parsing algorithm that accepts the output of the speech segmentation stage as input and uses it 1 -to build a case gram m ar representation of the original speech utterance: 2 -to guide the word recognition process by generating expectations resulting from partial linguistic analyses.", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "[6],[20] and [2i] for further details). 
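The segmentation procedure described above (regression-fitted topline and baseline over the F0 peaks and valleys, re-estimated whenever the Pearson product-moment correlation drops below a preset level of acceptability, with unit boundaries at general resettings of the contour) can be outlined in a few lines of code. The sketch below is a reading of that description rather than the author's implementation; the acceptability threshold and all function names are assumptions.

```python
# Illustrative sketch (assumed threshold and names) of declination-line
# segmentation: fit topline/baseline to the running F0 peaks/valleys by linear
# regression and open a new intonation unit when the Pearson product-moment
# correlation of either line drops below a preset level of acceptability.
import numpy as np

MIN_PEARSON = 0.7                     # assumed acceptability threshold

def pearson(t, f):
    if len(t) < 3 or np.std(t) == 0 or np.std(f) == 0:
        return 1.0                    # too few points to judge the fit yet
    return abs(np.corrcoef(t, f)[0, 1])

def segment(peaks, valleys):
    '''peaks, valleys: lists of (time, F0) extrema; returns intonation-unit spans.'''
    events = sorted([(t, f, 'peak') for t, f in peaks] +
                    [(t, f, 'valley') for t, f in valleys])
    if not events:
        return []
    units, start = [], events[0][0]
    top_t, top_f, base_t, base_f = [], [], [], []
    for t, f, kind in events:
        (top_t if kind == 'peak' else base_t).append(t)
        (top_f if kind == 'peak' else base_f).append(f)
        # General resetting of the contour: either regression line no longer
        # fits, so close the current unit and restart both lines from here.
        if min(pearson(top_t, top_f), pearson(base_t, base_f)) < MIN_PEARSON:
            units.append((start, t))
            start = t
            top_t, top_f, base_t, base_f = [], [], [], []
            (top_t if kind == 'peak' else base_t).append(t)
            (top_f if kind == 'peak' else base_f).append(f)
    units.append((start, events[-1][0]))
    return units

# Per-unit parameters (duration, slopes, onsets, offsets, resettings) then
# follow from np.polyfit over each unit's peaks and valleys.
```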
Individual values are measured in Hz ( F -values) or milliseconds (durations) and represented in separate probability density functions (PDF) which allows for (1) finer grain. (2) fast computation of average means, standard deviations and modal targets, and (3) direct comparison and categorization of individual intonation unit parameters reflecting m o d a l i t y by simple and robust VQ methods. Prominence Topline Intercept (atop) ' 801 Topline Endpoint (y top)", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "Endpoint (y base) 7W9------ra n ------nr. i I f c , In to n a tio n unit param eters for o n e m ale sp ea k er", "num": null, "uris": null }, "FIGREF4": { "type_str": "figure", "text": "has been shown earlier that once the extent of an intonation unit is established both in the time and in the frequency domain, areas of prominence can easily be spotted as overshooting or undershooting pitch excursions that reach outside the F 0 range defined by the computed baseline-topline configuration. Unfortunately, only a small proportion of these prominent pitch obtrusions (less than one third, i.e. 31.6 %. in our accumulated Swedish material comprising 10440 running words and 704 sentences of read speech recorded by four native speakers) have been found to be directly associated with the verbal constituents in natural human speech, and thus provide an immediate cue for the detection and identification of the case head. On the other hand, these verb-prominence coincidences -at least in our Swedish material -have been found to be strongly related: 1 -to prominent pitch obtrusions in the initial parts of the individual intonation units (81.7 %). whereas prominence in the final parts appears to be predominantly associated with nominal constituents (77.1 % ): 2 -to lower average F 0 values of overshooting pitch prominence (typically around 12 Hz for our male speakers and 17-20 Hz for their female counterparts), whereas pitch prominence in connection with focal accent or emphasis on nominals reaches on the average significantly higher values.This latter phenom enon apparently applies irrespective to the position of the pitch obtrusion with regards to earlier or later sections of the intonation unit.In sum mary, about one third of the prominent pitch obtrusions computed by the speech segmentation algorithm are directly associated with verbal constituents, and can thus be regarded as reliable cues to indicate verbal case heads in connected speech parsing. On the other hand, the overwhelm ing majority of prominent pitch excursions time-align with nominal constructions, i.e. signaling the \"important\", \"new\", \"unpredictable\" words carrying most of the semantic information content in the utterance, whereas most o f the potential verbal case heads are associated with non-obtrusive pitch movements inside the baseline-topline configuration.", "num": null, "uris": null }, "FIGREF5": { "type_str": "figure", "text": "list of cases A caseframe is thus defined as an ordered array composed of the entire set of cases casefram e = a r r a y [ a g e n t . .", "num": null, "uris": null }, "FIGREF6": { "type_str": "figure", "text": "using the indexed representation format results in the more space-economic and searcheffective structure: entry \" t y p e : v e r b \" might at first glance appear redundant in view o f the fact that to begin with only the verb entries are listed with caseframes. 
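Since each of the five cases can independently be required, optional or disallowed, there are 3^5 = 243 logically possible caseframes, which is why storing them once as an indexed list and letting each verb entry carry only a pointer is so space-economical. The sketch below illustrates one way to realize this indexing; the base-3 encoding and the feature values chosen for 'hacka' are assumptions of this edition, not the lexicon's actual coding.

```python
# Sketch of the indexed caseframe lexicon: 3**5 = 243 possible caseframes,
# each a mapping from the five cases to one of {req, opt, dis}; verb entries
# store only the index of their frame. Encoding details are assumed.
from itertools import product

CASES = ('agent', 'dative', 'instrumental', 'locative', 'neutral')
STATES = ('req', 'opt', 'dis')

# Enumerate all 243 logically possible caseframes once, as an indexed list.
CASEFRAMES = [dict(zip(CASES, combo)) for combo in product(STATES, repeat=len(CASES))]

def frame_index(frame):
    '''Return the index of a fully specified caseframe (inverse of the table).'''
    idx = 0
    for case in CASES:
        idx = idx * len(STATES) + STATES.index(frame[case])
    return idx

# A verb entry then needs only a pointer instead of the full specification.
# Feature values for 'hacka' (to chop) are illustrative only.
hacka_frame = {'agent': 'req', 'dative': 'dis', 'instrumental': 'opt',
               'locative': 'opt', 'neutral': 'req'}
LEXICON = {
    'hacka': {'pos': 'verb', 'caseframe': frame_index(hacka_frame)},
}

assert CASEFRAMES[LEXICON['hacka']['caseframe']] == hacka_frame
```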
As indicated in the previous section, however, we plan to include caseframe descriptions even for nouns and 121-other nominal constructions, with feature descriptions based on research on valency theory currently conducted at the department of computational linguistics. Further lexical work, is also directed towards the extension of individual case states marked as \"req \" or \" o p t\" with probabilistic lexical hypotheses derived from KW IC-studies of coherent speech.PARSERGiven the potentially ungrammatical and often highly fragmentary nature of continuous speech input, the actual parsing of the prosodicallv segmented utterance is performed following a flexible, multiple-strategy, construction-specific approach as proposed among others bv Carbonell Sc Hayes [s]). Kwasnv & Sondheimer [24] and W eischedel & Black [31]. A fundamental objective associated with this kind of approach is to integrate general signal processing and natural language processing techniques (both linguistic and stochastic) in order to fully exploit the combination of partial information obtained at various stages of the analysis. As shown earlier, the output of the speech segmentation algorithm and input to the parser is a linear sequence of parameter vectors representing the LPC-coefficients and pitch value estimates of the original continuous speech utterance at 16ms-intervals, with the F 0 contour segmented into prosodically defined intonation/information units. Typical prosodic structure representations are exemplified below in Figure 3 for three short samples of Swedish (male speaker, high-quality digital recording). English (female speaker, poor-quality analogue recording) and Japanese (male speaker, toll-qualitv analogue recording) speech.", "num": null, "uris": null }, "FIGREF7": { "type_str": "figure", "text": "Prosodic structure representation for three short samples of Swedish. English and Japanese speech. Arrows indicate areas of prominence outside the F Q range defined by the baseline-topline configuration. T he calculated values of the intonation unit parameters duration, declination, onset, offset and resetting, for the baselines and toplines respectively, are stored in a 10-parameter vector and used for a first broad classification and hierarchization of the material. the time and in the frequenv domain, areas of prominence can be easily spotted as overshooting or undershooting pitch excursions reaching outside the F 0 range defined by the computed baseline-topline configuration. Prominence is measured by the Hz-distance above topline or below baseline respectively (compare figure 2).", "num": null, "uris": null }, "FIGREF8": { "type_str": "figure", "text": "Island expansion proceeds to the beginning and end of the respective intonation/informa tion unit, thus constructing a phone lattice that spans the entire duration of the IU. A word lattice of the input utterance is hypothesized on the basis of information about (1) the most probable number of words predicted for the respective intonation/information unit as derivedfrom the broad classification [21]. (2) the language specific knowledge about the phonotactic properties within words and across words defined by the phonology-constrained diphone and triphone models. (3) the expected case identities generated by the casefram e entries in the lexicon, and (4) the lexical identities listed in a Swedish pronunciation lexicon [17]. 
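Procedurally, the island-driven strategy amounts to seeding phone hypotheses at the point of prominence and growing the island outwards until the boundaries of the intonation/information unit are reached. The loop below is a schematic reading of that control flow; the toy scoring function merely stands in for the continuously variable HMMs [25] and the allophone/diphone/triphone statistics [16] used in the actual system, and all names and scores are assumptions.

```python
# Schematic control loop for hypothesis-driven island parsing within one
# intonation/information unit (IU). The toy scorer stands in for the HMM and
# phonotactic models of the paper; it is a placeholder, not the real model.
PHONES = ['a', 'e', 'k', 's', 't']

def score_phone(frame, phone, neighbour):
    # Toy acoustic + phonotactic score: how well `phone` explains this frame,
    # given an (optional) already-hypothesized neighbouring phone.
    acoustic = 1.0 if frame.get('best') == phone else 0.2
    phonotactic = 0.5 if neighbour and neighbour == phone else 1.0  # discourage doubling
    return acoustic * phonotactic

def hypothesize(frame, neighbour, beam):
    scored = sorted(((score_phone(frame, p, neighbour), p) for p in PHONES), reverse=True)
    return [p for _, p in scored[:beam]]

def island_parse(iu_frames, prominence_index, beam=3):
    '''Grow a phone lattice outwards from the most prominent frame of the IU.'''
    lattice = {prominence_index: hypothesize(iu_frames[prominence_index], None, beam)}
    left, right = prominence_index - 1, prominence_index + 1
    # Expand the island in both directions until the IU boundaries are reached,
    # verifying neighbour candidates against the already-hypothesized context.
    while left >= 0 or right < len(iu_frames):
        if left >= 0:
            lattice[left] = hypothesize(iu_frames[left], lattice[left + 1][0], beam)
            left -= 1
        if right < len(iu_frames):
            lattice[right] = hypothesize(iu_frames[right], lattice[right - 1][0], beam)
            right += 1
    return [lattice[i] for i in range(len(iu_frames))]  # phone lattice over the IU

# Usage: frames of one IU, with the prominence (pitch obtrusion) at position 2.
frames = [{'best': 'k'}, {'best': 'a'}, {'best': 't'}, {'best': 'e'}]
print(island_parse(frames, prominence_index=2))
```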
Syntactic (including morphological) constraints are only weakly defined in a constituent-based contextfree grammar formulation (C FG ), which is aimed to permit successful parses even for fragmentary and/or grammatically deficient speech input and is expected to support the pruning of \"unpromising\" parses at an early stage of the analysis. It must be appreciated in this context that only about one fifth of all intonation/informa tion units unearthed by the speech segmentation algorithm (18.2% in our Swedish material) align in a simple one-to-one fashion with full sentences, while the majority (81.8% in the Swedish material) aligns with features of constituent structure in the sub-sentence domain. This implies that the overwhelming majority of full sentences (several intonation/information units. Empirical study of our accumulated Swedish speech material revealed an average of 2 .3 6 IUs per sentence with three clearly defined modes at 2, 3 and 5 IUs [20]. It must be appreciated in this context that sentences composed of 4 or more intonation/information units typically contain parallel structures such as enumerations, appositions, parentheticals and rhetorical repetitions. Given the limited number o f actually occurring IU-per-sentence constellations represented by the combination of (1) the most probable number of IUs per sentence, (2) the internal properties o f each individual IU specified in a 10-parameter vector containing duration, onset, offset, slope and resetting values for the baseline and topline respectively, and (3) the scored lattice of constituent label(s) derived from the coarse-classification procedure, the sub problem of sentence generation by intonation unit concatenation can be conveniently solved bv a finite-state parsing strategy such as proposed by Gibbon [ 13]. i.e. using a finite-state autom ation (FSA) with transition probabilities attached to each arc.", "num": null, "uris": null }, "FIGREF9": { "type_str": "figure", "text": "Swedish speech input. Testing the algorithm for English and Japanese speech input is ongoing and shows promising results. Further research focuses on improvements in the definition of the linguistic description format (i.e . incorporating nominal caseframes, attaching probability scores for cases in the \" o p t\" state, including lexical hypotheses with the caseframe entries, integrating the case grammar with a functional grammar component, etc).", "num": null, "uris": null }, "FIGREF10": { "type_str": "figure", "text": "B Altenberg. ''Prosodic Patterns in Spoken English\". Lund University Press, 1987 [2] H Altmann. \"Zur Problematik der Konstitution von Satzmodi als Formrvpen\". in: J M eibauer (ed) Satzmodus zwischen Grammatik und Pragmatik. Max N iem eyer Verlag, Bannert. \"From prominent syllables to a skeleton of meaning: a model of prosodically guided speech recognition\". Proceedings o f the Xlth International C on gress o f Phonetic Scien ces, of Optimization or Organization?\" , Proceedings o f the 57(7-Symposium \"D ig ital C om m unication\", Stockholm 1989 [7] G Brown. \"Prosodic structure and the given/new distinction\", in: A.Cutler and D .R .Ladd (eds). Prosody: M odels and M easurem ents. Springer-Verlag. 1983 [\u00ab] J G Carbonell and P J Hayes. \"Robust parsing using multiple construction-specific strategies\", in: L Bole (ed). Natural Language Parsing Systems. Springer-Verlag, Berlin 1987 [9] W L Chafe. \"Givenness, contrastiveness, definiteness, subjects, topics, and points of view\", in: Charles Li (ed). 
Subject and Topic. Academic Press. New York 1976", "num": null, "uris": null } } } }