{ "paper_id": "W02-0304", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:13:17.928790Z" }, "title": "Accenting unknown words in a specialized language", "authors": [ { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 Paris", "location": {} }, "email": "" }, { "first": "Natalia", "middle": [], "last": "Grabar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 Paris", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose two internal methods for accenting unknown words, which both learn on a reference set of accented words the contexts of occurrence of the various accented forms of a given letter. One method is adapted from POS tagging, the other is based on finite state transducers. We show experimental results for letter e on the French version of the Medical Subject Headings thesaurus. With the best training set, the tagging method obtains a precision-recall breakeven point of 84.2\u00a64.4% and the transducer method 83.8\u00a64.5% (with a baseline at 64%) for the unknown words that contain this letter. A consensus combination of both increases precision to 92.0\u00a63.7% with a recall of 75%. We perform an error analysis and discuss further steps that might help improve over the current performance.", "pdf_parse": { "paper_id": "W02-0304", "_pdf_hash": "", "abstract": [ { "text": "We propose two internal methods for accenting unknown words, which both learn on a reference set of accented words the contexts of occurrence of the various accented forms of a given letter. One method is adapted from POS tagging, the other is based on finite state transducers. We show experimental results for letter e on the French version of the Medical Subject Headings thesaurus. With the best training set, the tagging method obtains a precision-recall breakeven point of 84.2\u00a64.4% and the transducer method 83.8\u00a64.5% (with a baseline at 64%) for the unknown words that contain this letter. A consensus combination of both increases precision to 92.0\u00a63.7% with a recall of 75%. We perform an error analysis and discuss further steps that might help improve over the current performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The ISO-latin family, Unicode or the Universal Character Set have been around for some time now. They cater, among other things, for letters which can bear different diacritic marks. For instance, French uses four accented es (\u00e9\u00e8\u00ea\u00eb) besides the unaccented form e. Some of these accented forms correspond to phonemic differences. The correct handling of such accented letters, beyond US ASCII, has not been immediate and general. Although suitable character encodings are widely available and used, some texts or terminologies are still, for historical reasons, written with unaccented letters. For instance, in the French version of the US National Library of Medicine's Medical Subject Headings thesaurus (MeSH, (INS, 2000) ), all the terms are written in unaccented uppercase letters. 
This causes difficulties when these terms are used in Natural Language interfaces or for automatically indexing textual documents: a given unaccented word may match several words, giving rise to spurious ambiguities such as, e.g., marche matching both the unaccented marche (walking) and the accented march\u00e9 (market).", "cite_spans": [ { "start": 706, "end": 724, "text": "(MeSH, (INS, 2000)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Removing all diacritics would simplify matching, but would increase ambiguity, which is already pervasive enough in natural language processing systems. Another of our aims, besides, is to build language resources (lexicons, morphological knowledge bases, etc.) for the medical domain and to learn linguistic knowledge from terminologies and corpora (Grabar and Zweigenbaum, 2000) , including the MeSH. We would rather work, then, with linguistically sound data in the first place.", "cite_spans": [ { "start": 214, "end": 261, "text": "(lexicons, morphological knowledge bases, etc.)", "ref_id": null }, { "start": 350, "end": 380, "text": "(Grabar and Zweigenbaum, 2000)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We therefore endeavored to produce an accented version of the French MeSH. This thesaurus includes 19,971 terms and 9,151 synonyms, with 21,475 different word forms. Human reaccentuation of the full thesaurus is a time-consuming, errorprone task. As in other instances of preparation of linguistic resources, e.g., part-of-speech-tagged corpora or treebanks, it is generally more efficient for a human to correct a first annotation than to produce it from scratch. This can also help obtain better consistency over volumes of data. The issue is then to find a method for (semi-)automatic accentuation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The CISMeF team of the Rouen University Hos-pital already accented some 5,500 MeSH terms that are used as index terms in the CISMeF online catalog of French-language medical Internet sites (Darmoni et al., 2000) (www.chu-rouen.fr/cismef). This first means that less material has to be reaccented. Second, this accented portion of the MeSH might be usable as training material for a learning procedure. However, the methods we found in the literature do not address the case of 'unknown' words, i.e., words that are not found in the lexicon used by the accenting system. Despite the recourse to both general and specialized lexicons, a large number of the MeSH words are in this case, for instance those in table 1. 
cryomicroscopie dactylolyse decarboxylases decoquinate denitrificans deoxyribonuclease desmodonte desoxyadrenaline dextranase dichlorobenzidine dicrocoeliose diiodotyrosine dimethylamino dimethylcysteine dioctophymatoidea diosgenine Table 1 : Unaccented words not in lexicon.", "cite_spans": [ { "start": 189, "end": 211, "text": "(Darmoni et al., 2000)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 753, "end": 1010, "text": "cryomicroscopie dactylolyse decarboxylases decoquinate denitrificans deoxyribonuclease desmodonte desoxyadrenaline dextranase dichlorobenzidine dicrocoeliose diiodotyrosine dimethylamino dimethylcysteine dioctophymatoidea diosgenine Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One can argue indeed that the compilation of a larger lexicon should reduce the proportion of unknown words. But these are for the most part specialized, rare words, some of which we did not find even in a large reference medical dictionary (Garnier and Delamare, 1992). It is then reasonable to try to accentuate automatically these unknown words to help human domain experts perform faster post-editing. Moreover, an automatic accentuation method will be reusable for other unaccented textual resources. For instance, the ADM (Medical Diagnosis Aid) knowledge base online at Rennes University (Seka et al., 1997) is another large resource which is still in unaccented uppercase format. We first review existing methods (section 2). We then present two trainable accenting methods (section 3), one adapted from part-of-speech tagging, the other based on finite-state transducers. We show experimental results for letter e on the French MeSH (section 4) with both methods and their combination. We finally discuss these results (section 5) and conclude on further research directions.", "cite_spans": [ { "start": 204, "end": 232, "text": "(Garnier and Delamare, 1992)", "ref_id": "BIBREF2" }, { "start": 559, "end": 578, "text": "(Seka et al., 1997)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work has addressed text accentuation, with an emphasis on the cases where all possible words are assumed to be known (listed in a lexicon). The issue in that case is to disambiguate unaccented words when they match several possible accented word forms in the lexicon - the marche/march\u00e9 examples in the introduction. Yarowsky (1999) addresses accent restoration in Spanish and in French, and notes that they can be linked to part-of-speech ambiguities and to semantic ambiguities which context can help to resolve. He proposes three methods to handle these: N-gram tagging, Bayesian classification and decision lists, which obtain the best results. These methods rely either on full words, on word suffixes or on parts-of-speech. They are tested on 'the most problematic cases of each ambiguity type', extracted from the Spanish AP Newswire. The agreement with human accented words reaches 78.4-98.4% depending on ambiguity type.", "cite_spans": [ { "start": 325, "end": 340, "text": "Yarowsky (1999)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Spriet and El-B\u00e8ze (1997) use an N-gram model on parts-of-speech. They evaluate this method on a 19,000 word test corpus consisting of news articles and obtain a 99.31% accuracy. In this corpus, only 2.6% of the words were unknown, among which 89.5% did not need accents. 
The resulting error rate (0.3%) accounts for nearly one half of the total error rate, but is so small that it is not worth trying to guess accentuation for unknown words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The same kind of approach is used in project R\u00c9ACC (Simard, 1998) . Here again, unknown words are left untouched, and account for one fourth of the errors. We typed the words in table 1 through the demonstration interface of R\u00c9ACC online at www-rali.iro.umontreal.ca/Reacc/: none of these words was accented by the system (7 out of 16 do need accentuation).", "cite_spans": [ { "start": 51, "end": 65, "text": "(Simard, 1998)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "When the unaccented words are in the lexicon, the problem can also be addressed as a spelling correction task, using methods such as string edit distance (Levenshtein, 1966) , possibly combined with the previous approach (Ruch et al., 2001 ).", "cite_spans": [ { "start": 154, "end": 173, "text": "(Levenshtein, 1966)", "ref_id": "BIBREF6" }, { "start": 221, "end": 239, "text": "(Ruch et al., 2001", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "However, these methods have limited power when a word is not in the lexicon. At best, they might say something about accented letters in grammatical affixes which mark contextual, syntactic constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "We found no specific reference about the accentuation of such 'unknown' words: a method that, when a word is not listed in the lexicon, proposes an accented version of that word. Indeed, in the above works, the proportion of unknown words is too small for specific steps to be taken to handle them. The situation is quite different in our case, where about one fourth of the words are 'unknown'. Moreover, contextual clues are scarce in our short, often ungrammatical terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "We took obvious measures to reduce the number of unknown words: we filtered out the words that can be found in accented lexicons and corpora. But this technique is limited by the size of the corpus that would be necessary for such 'rare' words to occur, and by the lack of availability of specialized French lexicons for the medical domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "We then designed two methods that can learn accenting rules for the remaining unknown words:\u00b4 \u00b5 adapting a POS-tagging method (Brill, 1995) (section 3.3);\u00b4 \u00b5 adapting a method designed for learning morphological rules (Theron and Cloete, 1997) (section 3.4).", "cite_spans": [ { "start": 126, "end": 139, "text": "(Brill, 1995)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The French MeSH was briefly presented in the introduction; we work with the 2001 version. The part which was accented and converted into mixed case by the CISMeF team is that of November 2001. As more resources are added to CISMeF on a regular basis, a larger number of these accented terms must now be available. 
The list of word forms that occur in these accented terms serves as our base lexicon (4861 word forms). We removed from this list the 'words' that contain numbers, those that are shorter than 3 characters (abbreviations), and converted them in lower case. The resulting lexicon includes 4054 words (4047 once unaccented). This lexicon deals with single words. It does not try to register complex terms such as myocardial infarction, but instead breaks them into the two words myocardial and infarction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering out know words", "sec_num": "3.1" }, { "text": "A word is considered unknown when it is not listed in our lexicon. A first concern is to filter out from subsequent processing words that can be found in larger lexicons. The question is then to find suit-able sources of additional words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering out know words", "sec_num": "3.1" }, { "text": "We used various specialized word lists found on the Web (lexicon on cancer, general medical lexicon) and the ABU lexicon (abu.cnam.fr/DICO), which contains some 300,000 entries for 'general' French. Several corpora provided accented sources for extending this lexicon with some medical words (cardiology, haematology, intensive care, drawn from the current state of the CLEF corpus (Habert et al., 2001) , and drug monographs). We also used a word list extracted from the French versions of two other medical terminologies: the International Classification of Diseases (ICD-10) and the Microglossary for Pathology of the Systematized Nomenclature of Medicine (SNOMED). This word list contains 8874 different word forms. The total number of word forms of the final word list was 276 445.", "cite_spans": [ { "start": 382, "end": 403, "text": "(Habert et al., 2001)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Filtering out know words", "sec_num": "3.1" }, { "text": "After application of this list to the MeSH, 7407 words were still not recognized. We converted these words to lower case, removed those that did not include the letter e, were shorter than 3 letters (mainly acronyms) or contained numbers. The remaining 5188 words, among which those listed in table 1, were submitted to the following procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering out know words", "sec_num": "3.1" }, { "text": "The underlying hypotheses of this method are that sufficiently regular rules determine, for most words, which letters are accented, and that the context of occurrence of a letter (its neighboring letters) is a good basis for making accentuation decisions. We attempted to compile these rules by observing the occurrences of e\u00e9\u00e8\u00ea\u00eb in a reference list of words (the training set, for instance, the part of the French MeSH accented by the CISMeF team). In the following, we shall call pivot letter a letter that is part of the confusion set e\u00e9\u00e8\u00ea\u00eb (set of letters to discriminate).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing the context of a letter", "sec_num": "3.2" }, { "text": "An issue is then to find a suitable description of the context of a pivot letter in a word, for instance the letter \u00e9 in excis\u00e9e. 
We explored and compared two different representation schemes, which underlie two accentuation methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing the context of a letter", "sec_num": "3.2" }, { "text": "This first method is based on the use of a part-of-speech tagger: Brill's (1995) tagger. We consider each word as a 'string of letters': each letter makes one word, and the sequence of letters of a word makes a sentence. The 'tag' of a letter is the expected accented form of this letter (or the same letter if it is not accented). For instance, for the word endometre (endometer), to be accented as endom\u00e8tre, the 'tagged sentence' is e/e n/n d/d o/o m/m e/\u00e8 t/t r/r e/e (in the format of Brill's tagger). The regular procedure of the tagger then learns contextual accentuation rules, the first of which are shown in table 2.", "cite_spans": [ { "start": 65, "end": 79, "text": "Brill's (1995)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Accentuation as contextual tagging", "sec_num": "3.3" }, { "text": "Given a new 'sentence', Brill's tagger first assigns each 'word' its most frequent tag: this consists in accenting no e. The contextual rules are then applied and successively correct the current accentuation. For instance, when accenting the word flexion, rule (1) first applies (if e with second next tag = i, change to \u00e9) and accentuates the e to yield fl\u00e9xion (as in ...\u00e9mie). Rule (9) applies next (if \u00e9 with one of next three tags = x, change to e) to correct this accentuation before an x, which finally results in flexion. These rules correspond to representations of the contexts of occurrence of a letter. This representation is mixed (left and right contexts can be combined, e.g., in SURROUNDTAG, where both immediate left and right tags are examined), and can extend to a distance of three letters left and right, but in restricted combinations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Brill Format Gloss", "sec_num": null }, { "text": "The 'mixed context' representation used by Theron and Cloete (1997) folds the letters of a word around a pivot letter: it enumerates alternately the next letter on the right then on the left, until it reaches the word boundaries, which are marked with special symbols (here, for start of word, and $ for end of word). Theron & Cloete additionally repeat an out-of-bounds symbol outside the word, whereas we dispense with these marks. For instance, the first e in excis\u00e9e (excised) is represented as the mixed context in the right column of the first row of table 3. The left column shows the order in which the letters of the word are enumerated. The next two rows explain the mixed context representations for the two other es in the word. This representation caters for contexts of different sizes and facilitates their comparison. Each of these contexts is unaccented (it is meant to be matched with representations of unaccented words) and the original form of the pivot letter is associated to the context as an output (we use the symbol '=' to mark this output). 
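To make the folding concrete, here is a minimal Python sketch of this mixed-context encoding (an illustration of our reading of the description, not the authors' Perl implementation; the start-of-word marker, written '^' below, is an assumption, since the original symbol is not legible in this version of the text):

ACCENTED_E = 'e\u00e9\u00e8\u00ea\u00eb'   # the confusion set: e and its accented forms

def unaccent(letter):
    # map any member of the confusion set back to plain e
    return 'e' if letter in ACCENTED_E else letter

def mixed_contexts(word):
    # Yield (unaccented mixed context, original pivot letter) for every pivot
    # letter of word, enumerating its neighbours alternately right then left
    # out to the word boundaries ('^' assumed for start of word, '$' for end).
    padded = '^' + word + '$'
    for i, pivot in enumerate(padded):
        if pivot not in ACCENTED_E:
            continue
        out = []
        right, left = i + 1, i - 1
        while right < len(padded) or left >= 0:
            if right < len(padded):
                out.append(unaccent(padded[right]))
                right += 1
            if left >= 0:
                out.append(unaccent(padded[left]))
                left -= 1
        yield ''.join(out), pivot

# e.g. list(mixed_contexts('biologie')) == [('$igoloib^', 'e')]

For training, word is an accented reference form, so the yielded pivot is the output associated with the unaccented context; for an unknown word, the pivot is always e and only the context string is used.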
Each context is thus converted into a transducer: the input tape is the mixed context of a pivot letter, and the output tape is the appropriate letter in the confusion set e\u00e9\u00e8\u00ea\u00eb.", "cite_spans": [ { "start": 54, "end": 67, "text": "Cloete (1997)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Mixed context representation", "sec_num": "3.4" }, { "text": "The next step is to determine minimal discriminating contexts (figure 1). To obtain them, we join all these transducers (OR operator) by factoring their common prefixes as a trie structure, i.e., a deterministic transducer that exactly represents the training set. We then compute, for each state of this transducer and for each possible output (letter in the confusion set) reachable from this state, the number of paths starting from this state that lead to this output. We call a state unambiguous if all the paths from this state lead to the same output. In that case, for our needs, these paths may be replaced with a shortcut to an exit to the common output (see figure 1). This amounts to generalizing the set of contexts by replacing them with a set of minimal discriminating contexts.", "cite_spans": [], "ref_spans": [ { "start": 670, "end": 679, "text": "figure 1)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Mixed context representation", "sec_num": "3.4" }, { "text": "Given a word that needs to be accented, the first step consists in representing the context of each of its pivot letters. For instance, the word biologie: $igoloib. Each context is matched with the transducer in order to find the longest path from the start state that corresponds to a prefix of the context string (here, $igo). If this path leads to an output state, this output provides the proposed accented form of the pivot letter (here, e). If the match terminates earlier, we have an ambiguity: several possible outputs can be reached (e.g., h\u00e9morragie matches $ig).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mixed context representation", "sec_num": "3.4" }, { "text": "We can take absolute frequencies into account to obtain a measure of the support (confidence level) for a given output from the current state: how much evidence there is to support this decision. It is computed as the number of contexts of the training set that go through this state to an output state labelled with this output (see figure 1). The accenting procedure can choose to make a decision only when the support for that decision is above a given threshold. Table 4 shows some minimal discriminating contexts learnt from the accented part of the French MeSH with a high support threshold. However, in previous experiments (Zweigenbaum and Grabar, 2002), we tested a range of support thresholds and observed that the gain in precision obtained by raising the support threshold was minor, and counterbalanced by a large loss in recall. We therefore do not use this device here and accept any level of support. Instead, we take into account the relative frequencies of occurrence of the paths that lead to the different outputs, as marked in the trie. A probabilistic, majority decision is made on that basis: if one of the competing outputs has a relative frequency above a given threshold, this output is chosen. 
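The following Python sketch (an illustration only, not the authors' extension of the Perl Tree::Trie package) shows one way to implement this lookup with a majority decision; the explicit replacement of unambiguous paths by shortcuts is omitted, since stopping at the longest matched prefix and inspecting the output counts stored at that state yields the same decisions:

from collections import defaultdict

class ContextTrie:
    def __init__(self):
        self.children = defaultdict(dict)                     # state -> letter -> state
        self.outputs = defaultdict(lambda: defaultdict(int))  # state -> output letter -> count
        self.next_id = 1                                      # state 0 is the root

    def add(self, context, output):
        # record one training context (input tape) with its accented output
        state = 0
        for letter in context:
            self.outputs[state][output] += 1
            state = self.children[state].setdefault(letter, self.next_id)
            if state == self.next_id:
                self.next_id += 1
        self.outputs[state][output] += 1

    def decide(self, context, threshold=0.9):
        # follow the longest prefix of the context present in the trie, then
        # return the majority output if its relative frequency reaches the
        # threshold (threshold=1 reproduces the unambiguous, strict variant)
        state = 0
        for letter in context:
            if letter not in self.children[state]:
                break
            state = self.children[state][letter]
        counts = self.outputs[state]
        total = sum(counts.values())
        if total == 0:
            return None
        best, n_best = max(counts.items(), key=lambda item: item[1])
        return best if n_best / total >= threshold else None

Training adds one (context, output) pair per pivot letter of each reference word; at accentuation time, the context of each e of the unknown word is submitted to decide, and a None value means that the letter is left unaccented.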
In the present experiments, we tested two thresholds: 0.9 (90% or more of the examples must support this case; this makes the correct decision for h\u00e9morragie) and 1 (only non-ambiguous states lead to a decision: no decision for the first e in hemorragie, which we leave unaccented).", "cite_spans": [ { "start": 609, "end": 639, "text": "(Zweigenbaum and Grabar, 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 319, "end": 328, "text": "figure 1)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Mixed context representation", "sec_num": "3.4" }, { "text": "Simpler context representations of the same family can also be used. We examined right contexts (a variable-length string of letters on the right of the pivot letter) and left contexts (idem, on the left).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mixed context representation", "sec_num": "3.4" }, { "text": "We trained both methods, Brill and contexts (mixed, left and right), on three training sets: the 4054 words of the accented part of the MeSH, the 54,291 lemmas of the ABU lexicon and the 8874 words in the ICD-SNOMED word list. To check the validity of the rules, we applied them to the accented part of the MeSH. The context method knows when it can make a decision, so that we can separate the words that are fully processed (f: all es have led to decisions) from those that are partially processed (p) or not processed at all (n). Let c_f be the number of correct accentuations among the f fully processed words. If we decide to only propose an accented form for the words that get fully accented, we can compute recall R and precision P figures as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the rules", "sec_num": "3.5" }, { "text": "R = c_f / (f + p + n) and P = c_f / f.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the rules", "sec_num": "3.5" }, { "text": "Similar measures can be computed for p and n, as well as for the total set of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the rules", "sec_num": "3.5" }, { "text": "We then applied the accentuation rules to the 5188 accentable 'unknown' words of the MeSH. No gold standard is available for these words: human validation was necessary. We drew from that set a random sample containing 260 words (5% of the total) which were reviewed by the CISMeF team. Because of sampling, precision measures must include a confidence interval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the rules", "sec_num": "3.5" }, { "text": "We also tested whether the results of several methods can be combined to increase precision. We simply applied a consensus rule (intersection): a word is accepted only if all the methods considered agree on its accentuation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the rules", "sec_num": "3.5" }, { "text": "The programs were developed in the Perl5 language. They include a trie manipulation package which we wrote by extending the Tree::Trie package, online on the Comprehensive Perl Archive Network (www.cpan.org).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the rules", "sec_num": "3.5" }, { "text": "The baseline of this task consists in accenting no e. On the accented part of the MeSH, it obtains an accuracy of 0.623, and on the test sample, 0.642. 
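As a brief illustration of the evaluation measures of section 3.5 (with the notation f, p, n and c_f reconstructed above; this is a sketch, not the authors' code):

def scores(c_f, f, p, n):
    # recall counts correct fully accented words against all submitted words,
    # precision only against the words for which a full proposal was made
    recall = c_f / (f + p + n)
    precision = c_f / f
    return recall, precision

# The Brill tagger proposes an accentuation for every word, so on the 260-word
# test sample f = 260, p = n = 0 and recall equals precision: 219/260 = 0.842
# for MeSH training, which is the breakeven point quoted in the abstract.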
The Brill tagger learns 80 contextual rules with MeSH training (208 on ABU and 47 on CIM-SNOMED). The context method learns 1,832 rules on the MeSH training set (16,591 on ABU and 3,050 on CIM-SNOMED).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Tables 5, 6 and 7 summarize the validation results obtained on the accented part of the MeSH. Set denotes the subset of words as explained in section 3.5. Cor. stands for the number of correctly accented words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Not surprizingly, the best global precision is obtained with MeSH training (table 6). The mixed context method obtains a perfect precision, whereas Brill reaches 0.901 (table 5) . ABU and CIM-SNOMED training also obtain good results (table 7) , again better with the mixed context method (0.912-0.931) than with Brill (0.871-0.895). We performed the same tests with right and left contexts (table 6): precision can be as good for fully processed words (set ) as that of mixed contexts, but recall is always lower. The results of these two context variants are therefore not kept in the following tables. Both precision and recall are generally slightly better with the majority decision variant. If we concentrate on the fully processed words ( ), precision is always higher than the global result and than that of words with no decision (\u00d2). The \u00d2 class, whose words are left unaccented, generally obtain a precision well over the baseline. Partially processed words (\u00d4) are always those with the worst precision. training set cor. recall precision\u00a6ci MeSH 3646 0.899 0.901\u00a60.009 ABU 3524 0.869 0.871\u00a60.010 CIM-SNOMED 3621 0.893 0.895\u00a60.009 mixed \u00d2 2 0.000 1.000\u00a60.000 \u00d4 0 0.000 0.000\u00a60.000 4045 0.998 1.000\u00a60.000", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 177, "text": "(table 5)", "ref_id": "TABREF5" }, { "start": 233, "end": 242, "text": "(table 7)", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "\u00d8\u00d3\u00d8 4047 0.998 1.000\u00a60.000 Precision and recall for the unaccented part of the MeSH are showed on tables 8 and 9. The global results with the different training sets at breakeven point, with their confidence intervals, are not really distinguishable. They are clustered from 0.819\u00a60.047 to 0.842\u00a60.044, except the unambiguous decision method trained on MeSH which stands a bit lower at 0.800\u00a60.049 and the Brill tagger trained on ABU (0.785). If we only consider fully processed words, precision can reach 0.884\u00a60.043 (ICD-SNOMED training, majority decision), with a recall of 0.731 (or 0.876\u00a60.043 / 0.758 with MeSH training, majority decision).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Consensus combination of several methods (table 8) does increase precision, at the expense of recall. A precision/recall of 0.920\u00a60.037/0.750 is obtained by combining Brill and the mixed context method (majority decision), with MeSH training on both sides. The same level of precision is obtained with other combinations, but with lower recalls.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "We showed that a higher precision, which should make human post-editing easier, can be obtained in two ways. 
First, within the mixed context method, three sets of words are separated: if only the 'fully processed' words are considered (table 9), precision/recall can reach 0.884/0.731 (CIM-SNOMED, majority) or 0.876/0.758 (MeSH, majority). Second, the results of several methods can be combined with a consensus rule: a word is accepted only if all these methods agree on its accentuation. The combination of Brill and mixed contexts (majority decision), for instance with MeSH training on both sides, increases precision to 0.920\u00b10.037 with a recall still at 0.750 (table 8).", "cite_spans": [], "ref_spans": [ { "start": 663, "end": 672, "text": "(table 8)", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "5" }, { "text": "The results obtained show that the methods presented here obtain not only good performance on their training set, but also useful results on the target data (Table 9: Evaluation on the rest of the MeSH: mixed contexts, estimate on same 5% sample). We believe these methods will allow us to dramatically reduce the human post-editing time needed to accentuate useful resources such as the MeSH thesaurus and the ADM knowledge base.", "cite_spans": [], "ref_spans": [ { "start": 150, "end": 157, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "5" }, { "text": "It is interesting that a general-language lexicon such as ABU can be a good training set for accenting specialized-language unknown words, although this is true with the mixed context method and the reverse with the Brill tagger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "5" }, { "text": "A study of the 44 errors made by the mixed context method (table 9, MeSH training, majority decision: 216 correct out of 260) revealed the following error classes. MeSH terms contain some English words (academy, cleavage) and many Latin words (arenaria, chrysantemi, denitrificans), some of which are built on proper names (edwardsiella). These loan words should not bear accents; some of their patterns are correctly processed by the methods presented here (i.e., unaccented eae$, ella$), but others are not distinguishable from normal French words and get erroneously accented (rena of arenaria is erroneously processed as in r\u00e9nal; acad\u00e9my as in acad\u00e9mie). A first-stage classifier might help handle this issue by categorizing Latin (and English) words and excluding them from processing. Our first such experiments are not conclusive and add as many errors as are removed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "5" }, { "text": "Another class of errors is related to morpheme boundaries: some accentuation rules which depend on the start-of-word boundary would need to apply to morpheme boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "5" }, { "text": "For instance, pilo/erection fails to receive the \u00e9 of r e=\u00e9 (\u00e9rection), apic/ectomie erroneously receives an \u00e9 as in cc=\u00e9 (c\u00e9cit\u00e9). An accurate morpheme segmenter would be needed to provide suitable input to this process without again adding noise to it. 
In some instances, no accentuation decision could be made because no example had been learnt for a specific context (e.g., accentuation of c\u00e9falo in cefaloglycine).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "5" }, { "text": "We also uncovered accentuation inconsistencies in both the already accented MeSH words and the validated sample (e.g., bacterium or bact\u00e9rium in different compounds). Cross-checking on the Web confirmed the variability in the accentuation of rare words. This shows the difficulty to obtain consistent human accentuation across large sets of complex words. One potential development of the present automated accentuation methods could be to check the consistency of word lists. In addition, we discovered spelling errors in some MeSH terms (e.g., bethanechol instead of betanechol prevents the proper accentuation of beta).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "5" }, { "text": "Finally, further testing is necessary to check the relevance of these methods to other accented letters in French and in other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "We wish to thank Magaly Douy\u00e8re, Beno\u00eet Thirion and St\u00e9fan Darmoni, of the CISMeF team, for providing us with accented MeSH terms and patiently reviewing the automatically accented word samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Transformation-based errordriven learning and natural language processing: A case study in part-of-speech tagging", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "4", "pages": "543--565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Brill. 1995. Transformation-based error- driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543-565.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "CISMeF: a structured health resource guide", "authors": [ { "first": "[", "middle": [], "last": "Darmoni", "suffix": "" } ], "year": 2000, "venue": "Methods Inf Med", "volume": "39", "issue": "1", "pages": "30--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Darmoni et al.2000] St\u00e9fan J. Darmoni, J.-P. Leroy, Beno\u00eet Thirion, F. Baudic, Magali Douyere, and J. Piot. 2000. CISMeF: a structured health resource guide. Methods Inf Med, 39(1):30-35.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Dictionnaire des Termes de M\u00e9decine", "authors": [ { "first": "M", "middle": [], "last": "Garnier", "suffix": "" }, { "first": "V", "middle": [], "last": "Delamare", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Garnier and Delamare1992] M. Garnier and V. Dela- mare. 1992. Dictionnaire des Termes de M\u00e9decine. 
Maloine, Paris.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic acquisition of domain-specific morphological resources from thesauri", "authors": [ { "first": "Natalia", "middle": [], "last": "Grabar", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2000, "venue": "Proceedings of RIAO 2000: Content-Based Multimedia Information Access", "volume": "", "issue": "", "pages": "765--784", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Grabar and Zweigenbaum2000] Natalia Grabar and Pierre Zweigenbaum. 2000. Automatic acquisition of domain-specific morphological resources from the- sauri. In Proceedings of RIAO 2000: Content-Based Multimedia Information Access, pages 765-784, Paris, France, April. C.I.D.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Building a text corpus for representing the variety of medical language", "authors": [ { "first": "", "middle": [], "last": "Habert", "suffix": "" } ], "year": 2001, "venue": "Corpus Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Habert et al.2001] Beno\u00eet Habert, Natalia Grabar, Pierre Jacquemart, and Pierre Zweigenbaum. 2001. Build- ing a text corpus for representing the variety of medi- cal language. In Corpus Linguistics 2001, Lancaster.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Th\u00e9saurus Biom\u00e9dical Fran\u00e7ais/Anglais", "authors": [], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Institut National de la Sant\u00e9 et de la Recherche M\u00e9dicale, Paris, 2000. Th\u00e9saurus Biom\u00e9dical Fran\u00e7ais/Anglais.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics-Doklandy", "authors": [ { "first": "V", "middle": [ "I" ], "last": "Levenshtein", "suffix": "" } ], "year": 1966, "venue": "", "volume": "", "issue": "", "pages": "707--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and rever- sals. Soviet Physics-Doklandy, pages 707-710.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Looking back or looking all around: comparing two spell checking strategies for documents edition in an electronic patient record", "authors": [ { "first": "Patrick", "middle": [], "last": "Ruch", "suffix": "" }, { "first": "Robert", "middle": [ "H" ], "last": "Baud", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Geissbuhler", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Lovis", "suffix": "" }, { "first": "Anne-Marie", "middle": [], "last": "Rassinoux", "suffix": "" }, { "first": "A", "middle": [], "last": "Rivi\u00e8re", "suffix": "" } ], "year": 2001, "venue": "J Am Med Inform Assoc", "volume": "8", "issue": "", "pages": "568--572", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Ruch et al.2001] Patrick Ruch, Robert H. Baud, Antoine Geissbuhler, Christian Lovis, Anne-Marie Rassinoux, and A. Rivi\u00e8re. 2001. Looking back or looking all around: comparing two spell checking strategies for documents edition in an electronic patient record. 
J Am Med Inform Assoc, 8(suppl):568-572.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "ADM-INDEX: an automated system for indexing and retrieval of medical texts", "authors": [ { "first": "[", "middle": [], "last": "Seka", "suffix": "" } ], "year": 1997, "venue": "In Stud Health Technol Inform", "volume": "43", "issue": "", "pages": "406--410", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Seka et al.1997] LP Seka, C Courtin, and P Le Beux. 1997. ADM-INDEX: an automated system for index- ing and retrieval of medical texts. In Stud Health Tech- nol Inform, volume 43 Pt A, pages 406-410. Reidel.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic insertion of accents in French text", "authors": [ { "first": "Michel", "middle": [], "last": "Simard", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Third Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Simard. 1998. Automatic inser- tion of accents in French text. In Proceedings of the Third Conference on Empirical Methods in Natural Language Processing, Grenade. [Spriet and El-B\u00e8ze1997] Thierry Spriet and Marc El- B\u00e8ze. 1997. R\u00e9accentuation automatique de textes. In FRACTAL 97, Besan\u00e7on.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic acquisition of two-level morphological rules", "authors": [ { "first": "Theron", "middle": [], "last": "", "suffix": "" }, { "first": "Cloete1997] Pieter", "middle": [], "last": "Theron", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Cloete", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "103--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Theron and Cloete1997] Pieter Theron and Ian Cloete. 1997. Automatic acquisition of two-level morpholog- ical rules. In Ralph Grishman, editor, Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 103-110, Washington, DC, March- April. ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Corpus-based techniques for restoring accents in Spanish and French text", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1999, "venue": "Natural Language Processing Using Very Large Corpora", "volume": "", "issue": "", "pages": "99--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1999. Corpus-based techniques for restoring accents in Spanish and French text. In Natural Language Processing Using Very Large Corpora, pages 99-120. Kluwer Academic Pub- lishers.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Accenting unknown words: application to the French version of the MeSH", "authors": [ { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Grabar", "suffix": "" } ], "year": 2002, "venue": "Workshop NLP in Biomedical Applications", "volume": "", "issue": "", "pages": "69--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zweigenbaum and Grabar2002] Pierre Zweigenbaum and Natalia Grabar. 2002. Accenting unknown words: application to the French version of the MeSH. In Workshop NLP in Biomedical Applications, pages 69-74, Cyprus, March. 
EFMI.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Resources for the medical domain: medical terminologies, lexicons and corpora", "authors": [ { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2001, "venue": "ELRA Newsletter", "volume": "6", "issue": "4", "pages": "8--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierre Zweigenbaum. 2001. Re- sources for the medical domain: medical terminolo- gies, lexicons and corpora. ELRA Newsletter, 6(4):8- 11.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "uris": null, "text": "Trie of mixed contexts, each state showing the frequency of each possible output.", "type_str": "figure" }, "TABREF1": { "content": "", "type_str": "table", "num": null, "html": null, "text": "Accentuation correction rules, of the form'change \u00d8 \u00bd to \u00d8 \u00be if test true on \u00dc [\u00dd]'. NEXT2TAG =second next tag, NEXT1OR2TAG = one of next 2 tags, NEXTBIGRAM = next 2 words, NEXT1OR2OR3TAG = one of next 3 tags, SURROUNDTAG = previous and next tags," }, "TABREF2": { "content": "
", "type_str": "table", "num": null, "html": null, "text": "Mixed context representations. caters for contexts of different sizes and facilitates their comparison." }, "TABREF3": { "content": "
Context   Support   Gloss    Examples
$igo=e    65        -ogie    cytologie
$ih=e     63        -hie     lipoatrophie
$uqit=e   77        -tique   am\u00e9lanotique
u=e       247       -eu-     activateur, calleux
x=e       68        -ex-     excis\u00e9e
", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF4": { "content": "", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF5": { "content": "
: Validation: Brill, 4054 words of accented MeSH.
context   set   cor.   recall   precision\u00b1ci
right     n     1906   0.470    0.747\u00b10.017
          p     943    0.233    0.804\u00b10.023
          f     324    0.080    1.000\u00b10.000
          tot   3173   0.783    0.784\u00b10.013
left      n     743    0.183    0.649\u00b10.028
          p     500    0.123    0.428\u00b10.028
          f     1734   0.428    1.000\u00b10.000
          tot   2977   0.734    0.736\u00b10.014
mixed     n     7      0.002    1.000\u00b10.000
          p     0      0.000    0.000\u00b10.000
          f     4040   0.997    1.000\u00b10.000
          tot   4047   0.998    1.000\u00b10.000
majority decision (0.9)
", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF6": { "content": "
: Validation: different context methods,
MeSH training, 4054 words of accented MeSH.
", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF8": { "content": "
: Validation: mixed contexts, strict (threshold = 1) and majority (threshold = 0.9) decisions, 4054 words of accented MeSH.
training set                                 cor.   recall   precision\u00b1ci
MeSH                                         219    0.842    0.842\u00b10.044
ABU                                          204    0.785    0.785\u00b10.050
CIM-SNOMED                                   218    0.838    0.838\u00b10.045
Combined methods
mesh/Brill + mesh/majority                   195    0.750    0.920\u00b10.037
mesh/Brill + mesh/majority                   185    0.712    0.930\u00b10.036
mesh+abu+cim-snomed/Brill + mesh/majority    178    0.685    0.927\u00b10.037
", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF9": { "content": "", "type_str": "table", "num": null, "html": null, "text": "Evaluation on the rest of the MeSH: Brill, estimate on 5% sample (260 words)." }, "TABREF10": { "content": "
MeSH training, majority decision
set   cor.   recall   precision\u00b1ci
n     8      0.031    0.727\u00b10.263
p     11     0.042    0.458\u00b10.199
f     197    0.758    0.876\u00b10.043
tot   216    0.831    0.831\u00b10.046
ABU training (strict)                       majority decision
n     30     0.115    0.882\u00b10.108      13     0.050    0.929\u00b10.135
p     32     0.123    0.711\u00b10.132      11     0.042    0.786\u00b10.215
f     153    0.588    0.845\u00b10.053      194    0.746    0.836\u00b10.048
tot   215    0.827    0.827\u00b10.046      218    0.838    0.838\u00b10.045
CIM-SNOMED training (strict)                majority decision
n     27     0.104    0.818\u00b10.132      14     0.054    0.824\u00b10.181
p     19     0.073    0.487\u00b10.157      9      0.035    0.321\u00b10.173
f     168    0.646    0.894\u00b10.044      190    0.731    0.884\u00b10.043
tot   214    0.823    0.823\u00b10.046      213    0.819    0.819\u00b10.047
", "type_str": "table", "num": null, "html": null, "text": "MeSH training (strict) set cor. recall precision\u00a6ci \u00d2 19 0.073 0.731\u00a60.170 \u00d4 15 0.058 0.429\u00a60.164 174 0.669 0.874\u00a60.046 \u00d8\u00d3\u00d8 208 0.800 0.800\u00a60.049" } } } }