{ "paper_id": "W11-0108", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:35:57.489966Z" }, "title": "Modular Graph Rewriting to Compute Semantics", "authors": [ { "first": "Guillaume", "middle": [], "last": "Bonfante", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nancy-Universit\u00e9 -LORIA", "location": {} }, "email": "bonfante@loria.fr" }, { "first": "Bruno", "middle": [], "last": "Guillaume", "suffix": "", "affiliation": { "laboratory": "", "institution": "INRIA -LORIA", "location": {} }, "email": "guillaum@loria.fr" }, { "first": "Mathieu", "middle": [], "last": "Morey", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nancy-Universit\u00e9 -LORIA", "location": {} }, "email": "moreymat@loria.fr" }, { "first": "Guy", "middle": [], "last": "Perrier", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nancy-Universit\u00e9 -LORIA", "location": {} }, "email": "perrier@loria.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Taking an asynchronous perspective on the syntax-semantics interface, we propose to use modular graph rewriting systems as the model of computation. We formally define them and demonstrate their use with a set of modules which produce underspecified semantic representations from a syntactic dependency graph. We experimentally validate this approach on a set of sentences. The results open the way for the production of underspecified semantic dependency structures from corpora annotated with syntactic dependencies and, more generally, for a broader use of modular rewriting systems for computational linguistics.", "pdf_parse": { "paper_id": "W11-0108", "_pdf_hash": "", "abstract": [ { "text": "Taking an asynchronous perspective on the syntax-semantics interface, we propose to use modular graph rewriting systems as the model of computation. We formally define them and demonstrate their use with a set of modules which produce underspecified semantic representations from a syntactic dependency graph. We experimentally validate this approach on a set of sentences. The results open the way for the production of underspecified semantic dependency structures from corpora annotated with syntactic dependencies and, more generally, for a broader use of modular rewriting systems for computational linguistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The aim of our work is to produce a semantic representation of sentences on a large scale using a formal and exact approach based on linguistic knowledge. In this perspective, the design of the syntax-semantics interface is crucial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Based on the compositionality principle, most models of the syntax-semantics interface use a synchronous approach: the semantic representation of a sentence is built step by step in parallel with its syntactic structure. According to the choice of the syntactic formalism, this approach is implemented in different ways: in a Context-Free Grammars (CFG) style framework, every syntactic rule of a grammar is associated with a semantic composition rule, as in the classical textbook by Heim and Kratzer (1998) ; following the principles introduced by Montague, Categorial Grammars use an homomorphism from the syntax to the semantics (Carpenter (1992) ). 
HPSG integrates the semantic and syntactic representations in feature structures which combine by unification (Copestake et al. (2005) ). LFG follows a similar principle (Dalrymple (2001) ). In a synchronous approach, the syntax-semantics interface closely depends on the grammatical formalism. Building such an interface can be very costly, especially if we aim at a large coverage for the grammar.", "cite_spans": [ { "start": 485, "end": 508, "text": "Heim and Kratzer (1998)", "ref_id": "BIBREF12" }, { "start": 633, "end": 650, "text": "(Carpenter (1992)", "ref_id": "BIBREF5" }, { "start": 764, "end": 788, "text": "(Copestake et al. (2005)", "ref_id": "BIBREF9" }, { "start": 824, "end": 841, "text": "(Dalrymple (2001)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In our work, we have chosen an asynchronous approach in the sense that we start from a given syntactic analysis of a sentence to produce a semantic representation. With respect to the synchronous approach, a drawback is that the reaction of the semantics on the syntax is delayed. On the other hand, the computation of the semantics is made relatively independent from the syntactic formalism. The only constraint is the shape of the output of the syntactic analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In the formalisms mentioned above, the syntactic structure most often takes the form of a phrase structure, but the choice of constituency for the syntax makes the relationship with the semantics more complicated. We have chosen dependency graphs, because syntactic dependencies are closely related to predicate-argument relations. Moreover, they can be enriched with relations derived from the syntax, which are usually ignored, such as the arguments of infinitives or the anaphora determined by the syntax. One may observe that our syntactic representation of sentences involves plain graphs and not trees. Indeed, these relations can give rise to multiple governors and dependency cycles. On the semantic side, we have also chosen graphs, which are widely used in different formalisms and theories, such as DMRS (Copestake (2009)) or MTT (Mel'\u010duk (1988) ) .", "cite_spans": [ { "start": 815, "end": 833, "text": "(Copestake (2009))", "ref_id": "BIBREF8" }, { "start": 837, "end": 856, "text": "MTT (Mel'\u010duk (1988)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The principles being fixed, our problem was then to choose a model of computation well suited to transforming syntactic graphs into semantic graphs. The \u03bb-calculus, which is widely used in formal semantics, is not a good candidate because it is appropriate for computing on trees but not on graphs. Our choice naturally went to graph rewriting. Graph rewriting is barely used in computational linguistics; it could be due to the difficulty to manage large sets of rules. 
Among the pioneers in the use of graph rewriting, we mention Hyv\u00f6nen (1984) ; Bohnet and Wanner (2001) ; Crouch (2005) ; Jijkoun and de Rijke (2007) ; B\u00e9daride and Gardent (2009) ; Chaumartin and Kahane (2010) .", "cite_spans": [ { "start": 532, "end": 546, "text": "Hyv\u00f6nen (1984)", "ref_id": "BIBREF13" }, { "start": 549, "end": 573, "text": "Bohnet and Wanner (2001)", "ref_id": "BIBREF2" }, { "start": 576, "end": 589, "text": "Crouch (2005)", "ref_id": "BIBREF10" }, { "start": 592, "end": 619, "text": "Jijkoun and de Rijke (2007)", "ref_id": "BIBREF14" }, { "start": 622, "end": 649, "text": "B\u00e9daride and Gardent (2009)", "ref_id": "BIBREF1" }, { "start": 652, "end": 680, "text": "Chaumartin and Kahane (2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "A graph rewriting system is defined as a set of graph rewrite rules and a computation is a sequence of rewrite rule applications to a given graph. The application of a rule is triggered by pattern matching: a sub-graph is isolated from its context and the result is a local modification of the input. This allows a linguistic phenomenon to be easily isolated for applying a transformation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Since each step of computation is fired by some local conditions in the whole graph, it is well known that one has little control over the sequence of rewriting steps. The more rules there are, the more they interact, and the consistency of the whole rule system becomes difficult to maintain. This conflicts with our goal of broad grammatical coverage. To solve this problem, we propose to organize rules in modules. A module is a set of rules that is linguistically consistent and represents a particular step of the transformation. For instance, in our proposal, there is a module transforming the syntactic arguments of verbs, predicative nouns and adjectives into their semantic arguments. Another module resolves the anaphoric links which are internal to the sentence and determined by the syntax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "From a computational point of view, the grouping of a small number of rules inside a module allows some optimizations in their application, thus leading to efficiency. For instance, the confluence of rewriting is critical for the performance of the program: one computes only one normal form, not all of them. Since the underlying relation from syntax to semantics is not functional but relational, the system cannot be globally confluent. It is therefore particularly interesting to isolate subsets of confluent rules. Second, with a small number of rules, one gets much more control over their output. In particular, it is possible to automatically infer some invariant properties of graphs along the computation within a particular module. This simplifies the writing of the rules for the subsequent modules. It is also possible to plan a strategy in the global evaluation process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "It is well known that syntactic parsers produce outputs in various formats. As a by-product of our approach, we show that the choice of the input format (that is, the syntax) seems to be of low importance overall. 
Indeed, as long as two formats contain the same linguistic information with different representations, a system of rewrite rules can be designed to transform any graph from one format to another as a preliminary step. The same remark holds for the output formats.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "To illustrate our proposal, we have chosen the Paris7 TreeBank (hereafter P7TB) dependency format defined by Candito et al. (2010) as the syntactic input format and the Dependency MRS format (hereafter DMRS) defined by Copestake (2009) as the semantic output format. We chose those two formats because the information they represent, though not complete, is relatively consensual and because both draw on large-scale experiments: statistical dependency parsing for French 1 on the one hand and the DELPH-IN project 2 on the other hand.", "cite_spans": [ { "start": 197, "end": 213, "text": "Copestake (2009)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Actually, in our experiments, since we do not have an appropriate corpus annotated according to the P7TB standard, we used our syntactic parser LEOPAR 3 whose outputs differ from this standard and we designed a rewriting system to go from one format to the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The paper is organized as follows. In Section 1, we define our graph rewriting calculus, the \u03b2-calculus. In Section 2, we describe the particular rewriting system that is used to transform graphs from the syntactic P7TB format into the DMRS semantic format. In Section 3, we present experimental results on a test suite of sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "1 The \u03b2-calculus, a graph rewriting calculus Term rewriting and tree rewriting can be defined in a straightforward and canonical way. Graph rewriting is much more problematic and there is unfortunately no canonical definition of a graph rewriting system. Graph rewriting can be defined through a categorical approach like SPO or DPO (Rozenberg (1997) ). But, in practice, it is much easier to use a more operational view of rewriting where the modification of the graph (the \"right-hand side\" of a rule) is defined by means of a set of commands; the control of the way rules are applied (the \"left-hand side\") still uses pattern matching as in traditional graph rewriting.", "cite_spans": [ { "start": 333, "end": 350, "text": "(Rozenberg (1997)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In this context, a rule is a pair of a pattern and a sequence of commands. We give below the formal material about graphs, patterns, matchings and commands. We illustrate the section with examples of rules and of rewriting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In the following, we suppose given a finite set L of edge labels corresponding to the kinds of dependencies used to describe sentences. They may correspond to syntax or to semantics. For instance, we use L = {SUJ, OBJ, ARG1, ANT, . . .}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "To decorate vertices, we use the standard notion of feature structures. 
Let N be a finite set of feature names and A be a finite set of atomic feature values. In our example, N = {cat, mood, . . .} and A = {passive, v, n, . . .}. A feature is a pair made of a feature name and a set of atomic values. The feature (cat, {v, aux}) means that the feature name cat is associated with either the value v or aux. In the sequel, we use the notation cat = v|aux for this feature. Two features f = v and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "f \u2032 = v \u2032 are compatible whenever f = f \u2032 and v \u2229 v \u2032 \u2260 \u2205.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "A feature structure is a finite set of features such that each feature name occurs at most once. F denotes the set of feature structures. Two feature structures are compatible if their respective features with the same name are pairwise compatible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "A graph G is then defined by a 6-tuple (V, fs, E, lab, \u03c3, \u03c4 ) with:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "\u2022 a finite set V of vertices;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "\u2022 a labelling function fs from V to F;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "\u2022 a finite set E of edges;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "\u2022 a labelling function lab from E to L;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "\u2022 two functions \u03c3 and \u03c4 from E to V which give the source and the target of each edge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "Moreover, we require that two edges between the same pair of nodes cannot have the same label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph definition", "sec_num": "1.1" }, { "text": "Formally, a pattern is a graph and a matching \u03c6 of a pattern P = (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns and matchings", "sec_num": "1.2" }, { "text": "V \u2032 , fs \u2032 , E \u2032 , lab \u2032 , \u03c3 \u2032 , \u03c4 \u2032 ) into a graph G = (V, fs, E, lab, \u03c3, \u03c4 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns and matchings", "sec_num": "1.2" }, { "text": "is an injective graph morphism from P to G. 
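To make these definitions concrete, here is a minimal Python sketch of graphs with feature-structure-labelled vertices and of the two compatibility tests. It is an illustration under assumed names (Graph, features_compatible, structures_compatible), not an implementation taken from the paper; the morphism conditions are spelled out just after it.

# Minimal sketch (assumed names, not the authors' implementation) of graphs
# with feature-structure-labelled vertices, as defined in Section 1.1.

class Graph:
    """G = (V, fs, E, lab, sigma, tau): vertices carry feature structures,
    edges carry a label, and two edges between the same pair of nodes
    cannot have the same label."""

    def __init__(self):
        self.fs = {}        # vertex id -> feature structure {name: set of atomic values}
        self.edges = set()  # set of (source, target, label) triples

    def add_vertex(self, v, **features):
        # the notation cat = v|aux is stored as {"cat": {"v", "aux"}}
        self.fs[v] = {name: set(val.split("|")) for name, val in features.items()}

    def add_edge(self, src, tgt, label):
        if (src, tgt, label) in self.edges:
            raise ValueError("duplicate label between the same pair of nodes")
        self.edges.add((src, tgt, label))

def features_compatible(name1, values1, name2, values2):
    # two features are compatible when they have the same name and their
    # value sets overlap (v intersection v' is non-empty)
    return name1 == name2 and bool(values1 & values2)

def structures_compatible(fs1, fs2):
    # features sharing a name must be pairwise compatible
    return all(bool(fs1[n] & fs2[n]) for n in fs1.keys() & fs2.keys())

g = Graph()
g.add_vertex("w1", cat="v|aux", mood="ind")
g.add_vertex("w2", cat="n")
g.add_edge("w1", "w2", "SUJ")
print(features_compatible("cat", {"v", "aux"}, "cat", {"v"}))   # True
print(structures_compatible(g.fs["w1"], {"cat": {"n"}}))        # False

Storing each feature value as a set makes the disjunctive notation cat = v|aux and the non-empty-intersection test direct to express.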
More precisely, \u03c6 is a couple of injective functions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns and matchings", "sec_num": "1.2" }, { "text": "\u03c6 V from V \u2032 to V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns and matchings", "sec_num": "1.2" }, { "text": "and \u03c6 E from E \u2032 to E which:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns and matchings", "sec_num": "1.2" }, { "text": "\u2022 respects vertex labelling: fs(\u03c6 V (v)) and fs \u2032 (v) are compatible;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns and matchings", "sec_num": "1.2" }, { "text": "\u2022 respects edge labelling: lab(\u03c6 E (e)) = lab \u2032 (e);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns and matchings", "sec_num": "1.2" }, { "text": "\u2022 respects edge sources: \u03c3(\u03c6 E (e)) = \u03c6 V (\u03c3 \u2032 (e));", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns and matchings", "sec_num": "1.2" }, { "text": "\u2022 respects edge targets: \u03c4 (\u03c6 E (e)) = \u03c6 V (\u03c4 \u2032 (e)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Patterns and matchings", "sec_num": "1.2" }, { "text": "Commands are low-level operations on graphs that are used to describe the rewriting of the graph within a rule application. In the description below, we suppose to be given a pattern matching \u03c6 : P \u2192 G. We describe here the set of commands which we used in our experiment so far. Naturally, this set could be extended.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u2022 del edge(\u03b1, \u03b2, \u2113) removes the edge labelled \u2113 between \u03b1 and \u03b2. More formally, we suppose that \u03b1 \u2208 V P , \u03b2 \u2208 V P and P contains an edge e from \u03b1 to \u03b2 with label \u2113 \u2208 L. Then, del edge(\u03b1, \u03b2, \u2113)(G) is the graph G without the edge \u03c6(e). In the following, we give only the intuitive definition of the command: thanks to injectivity of the matching \u03c6, we implicitly forget the distinction between x and \u03c6(x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u2022 add edge(\u03b1, \u03b2, \u2113) adds an edge labelled \u2113 between \u03b1 and \u03b2. Such an edge is supposed not to exist in G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u2022 shift edge(\u03b1, \u03b2) modifies all edges that are incident to \u03b1: each edge starting from \u03b1 is moved to start from \u03b2; similarly each edge ending on \u03b1 is moved to end on \u03b2;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u2022 del node(\u03b1) removes the \u03b1 node in G. If G contains edges starting from \u03b1 or ending on \u03b1, they are silently removed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u2022 add node(\u03b2) adds a new node with identifier \u03b2 (a fresh name).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u2022 add feat(\u03b1, f = v) adds the feature f = v to the node \u03b1. 
If \u03b1 already contains a feature named f , its value is replaced by the new one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u2022 copy feat(\u03b1, \u03b2, f ) copies the value of the feature named f from the node \u03b1 to the node \u03b2. If \u03b1 does not contain a feature named f , nothing is done. If \u03b2 already contains a feature named f , it is replaced by the new value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "Note that commands define a partial function on graphs: the action add edge(\u03b1, \u03b2, \u2113) is undefined on a graph which already contains an edge labelled \u2113 from \u03b1 to \u03b2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "The action of a sequence of commands is the composition of the actions of each command. Sequences of commands are supposed to be consistent with the pattern:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u2022 del edge always refers to an edge described in the pattern and not previously modified by a del edge or a shift edge command;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u2022 each command refers only to identifiers defined either in the pattern or in a previous add node;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u2022 no command refers to a node previously deleted by a del node command.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "Finally, we define a rewrite rule to be a pair of a pattern and a consistent sequence of commands.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "A first example of a rule is given below with the pattern on the left and the sequence of commands on the right. This rule, called INIT PASSIVE, is used to remove the node corresponding to the auxiliary of the passive construction and to modify the features accordingly. Our second example (PASSIVE ATS) illustrates the add node command. It is used in a passive construction where the semantic subject of the verb is not realized syntactically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commands", "sec_num": "1.3" }, { "text": "\u03b1 cat = v voice = passive \u03b2 \u03b3 SUJ ATS c 1 = del edge(\u03b1, \u03b2, SUJ) c 2 = add edge(\u03b1, \u03b2, OBJ) c 3 = del edge(\u03b1, \u03b3, ATS) c 4 = add edge(\u03b1, \u03b3, ATO)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PASSIVE ATS", "sec_num": null }, { "text": "c 5 = add feat(\u03b1, voice = active) c 6 = add node(\u03b4) c 7 = add edge(\u03b1, \u03b4, SUJ)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PASSIVE ATS", "sec_num": null }, { "text": "We consider a graph G and a rewrite rule r = (P, [c 1 , . . . , c k ]). We say that G \u2032 is obtained from G by a rewrite step with the rule r (written G \u2212\u2192 r G \u2032 ) if there is a matching morphism \u03c6 : P \u2192 G and G \u2032 is obtained from G by applying the composition of commands c k \u2022 . . . \u2022 c 1 . Let us now illustrate two rewrite steps with the rules above. 
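Before turning to that illustration, here is a minimal Python sketch of how the command sequence of a rule can be executed once a matching phi from pattern identifiers to graph nodes is given. The graph encoding and the helper names (apply_rule, etc.) are assumptions of the sketch, not the authors' implementation; pattern matching itself is elided and only a subset of the commands is covered.

# Sketch under assumed names: executing a consistent command sequence,
# given a matching phi (pattern identifier -> graph node).  Graphs are
# encoded as {"fs": {node: features}, "edges": [(src, tgt, label), ...]}.

def del_edge(g, a, b, lab):
    g["edges"].remove((a, b, lab))

def add_edge(g, a, b, lab):
    assert (a, b, lab) not in g["edges"]   # the command is undefined otherwise
    g["edges"].append((a, b, lab))

def add_feat(g, a, name, value):
    g["fs"][a][name] = value

def add_node(g, a):
    g["fs"][a] = {}

def apply_rule(graph, commands, phi):
    """Apply the command sequence of a rule under the matching phi;
    identifiers created by add node get fresh graph-side names."""
    for cmd, *args in commands:
        if cmd is add_node:
            fresh = "n%d" % len(graph["fs"])
            phi[args[0]] = fresh
            add_node(graph, fresh)
        else:
            # pattern-side identifiers go through phi, labels stay as they are
            cmd(graph, *[phi.get(x, x) for x in args])
    return graph

# the command sequence of PASSIVE ATS (the pattern itself is omitted)
PASSIVE_ATS = [
    (del_edge, "alpha", "beta", "SUJ"), (add_edge, "alpha", "beta", "OBJ"),
    (del_edge, "alpha", "gamma", "ATS"), (add_edge, "alpha", "gamma", "ATO"),
    (add_feat, "alpha", "voice", "active"),
    (add_node, "delta"), (add_edge, "alpha", "delta", "SUJ"),
]

g = {"fs": {"v1": {"cat": "v", "voice": "passive"}, "v2": {}, "v3": {}},
     "edges": [("v1", "v2", "SUJ"), ("v1", "v3", "ATS")]}
phi = {"alpha": "v1", "beta": "v2", "gamma": "v3"}   # a matching found beforehand
apply_rule(g, PASSIVE_ATS, phi)
print(g["edges"])   # [('v1', 'v2', 'OBJ'), ('v1', 'v3', 'ATO'), ('v1', 'n3', 'SUJ')]
print(g["fs"]["v1"]["voice"])   # active

In a full system, shift edge, del node and copy feat would be handled the same way, and phi would come from a search for an injective subgraph morphism constrained by feature compatibility.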
Consider the first graph below, which is a syntactic dependency structure for the French sentence \"Marie est consid\u00e9r\u00e9e comme brillante\" [Mary is considered as bright]. The second graph is obtained by application of the INIT PASSIVE rewrite rule and the last one by application of the PASSIVE ATS rewrite rule. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting", "sec_num": "1.4" }, { "text": "A module contains a set of rewrite rules but, in order to have finer control over the output of these modules, it is useful to declare some forbidden patterns. Hence a module is defined by a set R of rules and a set P of forbidden patterns. For a given module M = (R, P), we say that G \u2032 is an M-normal form of the graph G if there is a sequence of rewriting steps with rules of R from G to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modules and normal forms", "sec_num": "1.5" }, { "text": "G \u2032 : G \u2212\u2192 r 1 G 1 \u2212\u2192 r 2 G 2 . . . \u2212\u2192 r k G \u2032 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modules and normal forms", "sec_num": "1.5" }, { "text": "such that no rule of R can be applied to G \u2032 and no pattern of P matches in G \u2032 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modules and normal forms", "sec_num": "1.5" }, { "text": "In our experiment, forbidden patterns are often used to control the subset of edges allowed in normal forms. For instance, the NORMAL module contains the forbidden pattern: AUX PASS . Hence, we can safely assume that no graph contains any AUX PASS edge afterward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modules and normal forms", "sec_num": "1.5" }, { "text": "Linguistic theories diverge on many issues, including the exact definition of the linguistic levels and the relationships between them. Our aim here is not to commit to any linguistic theory but rather to demonstrate that graph rewriting is an adequate and realistic computational framework for the syntax-semantics interface. Consequently, our approach is bound to neither the (syntactic and semantic) formats we have chosen nor the transformation modules we have designed; both are mainly meant to exemplify our proposal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From syntactic dependency graphs to semantic graphs", "sec_num": "2" }, { "text": "Our syntactic and semantic formats both rely on the notion of linguistic dependency. The syntactic format is an enrichment of the one which was designed to annotate the French Treebank (Abeill\u00e9 and Barrier (2004) ) with surface syntactic dependencies (Candito et al. (2010) ). The enrichment is twofold:", "cite_spans": [ { "start": 185, "end": 212, "text": "(Abeill\u00e9 and Barrier (2004)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Representational formats", "sec_num": "2.1" }, { "text": "\u2022 if they are present in the sentence, the deep arguments of infinitives and participles (from participial subordinate clauses) are marked with the usual labels of syntactic functions,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representational formats", "sec_num": "2.1" }, { "text": "\u2022 the anaphora relations that are predictable from the syntax (i.e. 
the antecedents of relative, reflexive and repeated pronouns) are marked with a special label ANT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representational formats", "sec_num": "2.1" }, { "text": "This additional information can already be provided by many syntactic parsers and is particularly useful for computing semantics. The semantic format is Dependency Minimal Recursion Semantics (DMRS), which was introduced by Copestake (2009) as a compact and easily readable equivalent to Robust Minimal Recursion Semantics (RMRS), which was defined by Copestake (2007) . This underspecified semantic formalism was designed for large-scale experiments without committing to fine-grained semantic choices. DMRS graphs contain the predicate-argument relations, the restriction of generalized quantifiers and the mode of combination between predicates. Predicate-argument relations are labelled ARGi, where i is an integer following a fixed order of obliqueness SUJ, OBJ, ATS, ATO, A-OBJ, DE-OBJ. . . . Naturally, the lexicon must be consistent with this ordering. The restrictions of generalized quantifiers are labelled RSTR ; their bodies are not overtly expressed but can be retrieved from the graph. There are three ways of combining predicates:", "cite_spans": [ { "start": 353, "end": 369, "text": "Copestake (2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Representational formats", "sec_num": "2.1" }, { "text": "\u2022 EQ when two predicates are elements of the same conjunction;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representational formats", "sec_num": "2.1" }, { "text": "\u2022 H when a predicate is in the scope of another predicate; it is not necessarily one of its arguments because quantifiers may occur between them;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representational formats", "sec_num": "2.1" }, { "text": "\u2022 NEQ for all other cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representational formats", "sec_num": "2.1" }, { "text": "Graph rewriting allows us to proceed step by step in the transformation of a syntactic graph into a semantic one, by associating a rewrite rule with each linguistic rule. While the effect of every rule is local, grouping rules in modules allows better control over the global effect of all rules. We do not have the space here to propose a system of rules that covers the whole French grammar. We however propose six modules which cover a significant part of this grammar (cleft clauses, coordination, enumeration, comparatives and ellipses are left aside but they can be handled by other rewrite modules):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modular rewriting system", "sec_num": "2.2" }, { "text": "\u2022 NORMAL handles the regular syntactic transformations involving predicates: it computes tense and transforms all redistributions of arguments (passive and middle voices, impersonal constructions and the combination of them) to the active canonical form. 
This reduces the number of rules required to produce the predicate-argument relations in the ARG module below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modular rewriting system", "sec_num": "2.2" }, { "text": "\u2022 PREP removes affixes, prepositions and complementizers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modular rewriting system", "sec_num": "2.2" }, { "text": "\u2022 ARG transforms the verbal, nominal and adjectival predicative phrases into predicate-argument relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modular rewriting system", "sec_num": "2.2" }, { "text": "\u2022 DET translates the determiner dependencies (denoted DET) to generalized quantifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modular rewriting system", "sec_num": "2.2" }, { "text": "\u2022 MOD interprets the various modifier dependencies (denoted MOD), according to their specificity: adjectives, adverbs, adjunct prepositional phrases, participial clauses, relative clauses, adjunct clauses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modular rewriting system", "sec_num": "2.2" }, { "text": "\u2022 ANA interprets all anaphoric relations that are determined by the syntax (denoted ANT).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modular rewriting system", "sec_num": "2.2" }, { "text": "Modules provide an easy way to control the order in which rules are fired. In order to properly set up the rules in modules, we first have to fix the global ordering of the modules. Some ordering constraints are evident: for instance, NORMAL must precede PREP, which must precede ARG. The rules we present in the following are based on the order NORMAL, PREP, ARG, DET, MOD, ANA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modular rewriting system", "sec_num": "2.2" }, { "text": "The NORMAL module has two effects: it merges tense and voice auxiliaries with their past participle and brings all the argument redistributions back to the canonical active form. This module accounts for the passive and middle voices and the impersonal construction for verbs that are not essentially impersonal. The combination of the two voices with the impersonal construction is naturally expressed by the composition of the corresponding rewrite rules. The two rules given in section 1.4 are part of this module. The first rule (INIT PASSIVE) merges the past participle of the verb with its passive auxiliary. The auxiliary brings its mood and tense to the verb, which is marked as being passive. The second rule (PASSIVE ATS) transforms a passive verb with a subject and an attribute of the subject into its active equivalent with a semantically undetermined subject, an object (which corresponds to the subject of the passive form) and an attribute of the object (which corresponds to the attribute of the subject of the passive form).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalization of syntactic dependencies", "sec_num": "2.2.1" }, { "text": "The PREP module removes affixes, prepositions and complementizers. For example, the rule given here merges prepositions with the attribute of the object that they introduce. The value of the preposition is kept to compute the semantics. The ARG module transforms the syntactic arguments of a predicative word (a verb, a common noun or an adjective) into its semantic arguments. 
Following DMRS, the predicate-argument relations are not labelled with thematic roles but only numbered. The numbering reflects the syntactic obliqueness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Erasure of affixes, prepositions and complementizers", "sec_num": "2.2.2" }, { "text": "ARG OBJ \u03b1 \u03b2 cat = n|np|pro OBJ c 1 = del edge(\u03b1, \u03b2, OBJ) c 2 = add edge(\u03b1, \u03b2, ARG2) c 3 = add edge(\u03b1, \u03b2, NEQ)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PREP ATO", "sec_num": null }, { "text": "DET reverses the determiner dependencies (labelled DET) from common nouns to determiners, turning them into dependencies of type RSTR from the corresponding generalized quantifier to the nominal predicate which is the core of their restriction. MOD deals with the modifier dependencies (labelled MOD, MOD REL and MOD LOC), providing rules for the different kinds of modifiers. Adjectives and adverbs are translated as predicates whose first argument is the modified entity. The modifier and modified entities are in a conjunction (EQ), except for scopal adverbs which take scope (H) over the modified predicate. Because only lexical information makes it possible to differentiate scopal from non-scopal adverbs, we consider all adverbs to be systematically ambiguous at the moment. Adjunct prepositional phrases (resp. clauses) have a similar rule except that their corresponding predicate is the translation of the preposition (resp. complementizer), which has two arguments: the modified entity and the noun (resp. verb) which heads the phrase (resp. clause). Participial and relative clauses exhibit a relation labelled EQ or NEQ between the head of the clause and the antecedent, depending on the restrictive or appositive type of the clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From determiners to generalized quantifiers", "sec_num": "2.2.4" }, { "text": "ANA deals with dependencies of type ANT and merges their source and their target. Its rules apply to reflexive, relative and repeated pronouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resolution of syntactic anaphora", "sec_num": "2.2.6" }, { "text": "For the experiments, we are interested in a test suite which is both small enough to be manually validated and large enough to cover a rich variety of linguistic phenomena. As said earlier, we use the P7 surface dependency format as input, so the first attempt at building a test suite is to consider the examples in the guide which describes the format. By nature, an annotation guide tries to cover a large range of phenomena with a small set of examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "The latest version 4 of this guide (Candito et al. (2010) ) contains 186 linguistic examples. In our current implementation of the semantic constructions, we leave out clefts, coordinations and comparatives. We also leave out a small set of exotic sentences for which we are not able to give a sensible syntactic structure. Finally, our experiment runs on 116 French sentences. Syntactic structures following the P7 specifications are obtained by applying graph rewriting to the output of our parser. Each syntactic structure was manually checked and corrected when needed. 
Then, graph rewriting with the modules described in the previous section is performed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "For all of these sentences, we produce at least one normal form. Even if DMRS is underspecified, our system can output several semantic representations for one syntactic structure (for instance, for appositive and restrictive relative clauses). We sometimes overgenerate because we do not use lexical information like the difference between scopal and non-scopal adverbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "The result for three sentences is given below and the full set is available on a web page 5 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "[ ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "In this paper, we have shown the relevance of modular graph rewriting to compute semantic representations from graph-shaped syntactic structures. The positive results of our experiments on a test suite of varied sentences make us confident that the method can apply to large corpora. The particular modular graph rewriting system presented in the paper was merely here to illustrate the method, which can be used for other input and output formats. There is another aspect to the flexibility of the method: we may start from the same system of rules and enrich it with new rules to get a finer semantic analysis -if DMRS is considered as providing a minimal analysis -or integrate lexical information. The method allows the semantic ambiguity to remain unsolved within underspecified representations or to be solved with a rule system aiming at computing models of underspecified representations. Moreover, we believe that its flexibility makes graph rewriting a convenient framework to deal with idiomatic expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "http://alpage.inria.fr/statgram/frdep/fr_stat_dep_parsing.html 2 http://www.delph-in.net/ 3 http://leopar.loria.fr", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "version 1.1, january 2010 5 http://leopar.loria.fr/doku.php?id=iwcs2011", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Enriching a french treebank", "authors": [ { "first": "A", "middle": [], "last": "Abeill\u00e9", "suffix": "" }, { "first": "N", "middle": [], "last": "Barrier", "suffix": "" } ], "year": 2004, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abeill\u00e9, A. and N. Barrier (2004). Enriching a french treebank. In Proceedings of LREC.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Semantic Normalisation : a Framework and an Experiment", "authors": [ { "first": "P", "middle": [], "last": "B\u00e9daride", "suffix": "" }, { "first": "C", "middle": [], "last": "Gardent", "suffix": "" } ], "year": 2009, "venue": "Proceedings of IWCS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B\u00e9daride, P. and C. Gardent (2009). Semantic Normalisation : a Framework and an Experiment. 
In Proceedings of IWCS, Tilburg, Netherlands.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On using a parallel graph rewriting formalism in generation", "authors": [ { "first": "B", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "L", "middle": [], "last": "Wanner", "suffix": "" } ], "year": 2001, "venue": "Proceedings of EWNLG '01", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bohnet, B. and L. Wanner (2001). On using a parallel graph rewriting formalism in generation. In Proceedings of EWNLG '01, pp. 1-11. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Statistical french dependency parsing: Treebank conversion and first results", "authors": [ { "first": "M", "middle": [], "last": "Candito", "suffix": "" }, { "first": "B", "middle": [], "last": "Crabb\u00e9", "suffix": "" }, { "first": "P", "middle": [], "last": "Denis", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC2010", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Candito, M., B. Crabb\u00e9, and P. Denis (2010). Statistical french dependency parsing: Treebank conversion and first results. Proceedings of LREC2010.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "D\u00e9pendances syntaxiques de surface pour le fran\u00e7ais", "authors": [ { "first": "M", "middle": [], "last": "Candito", "suffix": "" }, { "first": "B", "middle": [], "last": "Crabb\u00e9", "suffix": "" }, { "first": "M", "middle": [], "last": "Falco", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Candito, M., B. Crabb\u00e9, and M. Falco (2010). D\u00e9pendances syntaxiques de surface pour le fran\u00e7ais.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The logic of typed feature structures", "authors": [ { "first": "B", "middle": [], "last": "Carpenter", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carpenter, B. (1992). The logic of typed feature structures. Cambridge: Cambridge University Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Une approche paresseuse de l'analyse s\u00e9mantique ou comment construire une interface syntaxe-s\u00e9mantique \u00e0 partir d'exemples", "authors": [ { "first": "F.-R", "middle": [], "last": "Chaumartin", "suffix": "" }, { "first": "S", "middle": [], "last": "Kahane", "suffix": "" } ], "year": 2010, "venue": "TALN 2010", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chaumartin, F.-R. and S. Kahane (2010). Une approche paresseuse de l'analyse s\u00e9mantique ou comment construire une interface syntaxe-s\u00e9mantique \u00e0 partir d'exemples. In TALN 2010, Montreal, Canada.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semantic composition with (robust) minimal recursion semantics", "authors": [ { "first": "A", "middle": [], "last": "Copestake", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Workshop on Deep Linguistic Processing", "volume": "", "issue": "", "pages": "73--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Copestake, A. (2007). Semantic composition with (robust) minimal recursion semantics. In Proceedings of the Workshop on Deep Linguistic Processing, pp. 73-80. 
Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Invited Talk: Slacker semantics: Why superficiality, dependency and avoidance of commitment can be the right way to go", "authors": [ { "first": "A", "middle": [], "last": "Copestake", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EACL 2009", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Copestake, A. (2009). Invited Talk: Slacker semantics: Why superficiality, dependency and avoidance of commitment can be the right way to go. In Proceedings of EACL 2009, Athens, Greece, pp. 1-9.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Minimal Recursion Semantics -an Introduction", "authors": [ { "first": "A", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "C", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "I", "middle": [], "last": "Sag", "suffix": "" } ], "year": 2005, "venue": "Research on Language and Computation", "volume": "3", "issue": "", "pages": "281--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Copestake, A., D. Flickinger, C. Pollard, and I. Sag (2005). Minimal Recursion Semantics -an Introduc- tion. Research on Language and Computation 3, 281-332.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Packed Rewriting for Mapping Semantics to KR", "authors": [ { "first": "D", "middle": [], "last": "Crouch", "suffix": "" } ], "year": 2005, "venue": "Proceedings of IWCS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Crouch, D. (2005). Packed Rewriting for Mapping Semantics to KR. In Proceedings of IWCS.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Lexical Functional Grammar", "authors": [ { "first": "M", "middle": [], "last": "Dalrymple", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dalrymple, M. (2001). Lexical Functional Grammar. New York: Academic Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Semantics in generative grammar", "authors": [ { "first": "I", "middle": [], "last": "Heim", "suffix": "" }, { "first": "A", "middle": [], "last": "Kratzer", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heim, I. and A. Kratzer (1998). Semantics in generative grammar. Wiley-Blackwell.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semantic Parsing as Graph Language Transformation -a Multidimensional Approach to Parsing Highly Inflectional Languages", "authors": [ { "first": "E", "middle": [], "last": "Hyv\u00f6nen", "suffix": "" } ], "year": 1984, "venue": "COLING", "volume": "", "issue": "", "pages": "517--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hyv\u00f6nen, E. (1984). Semantic Parsing as Graph Language Transformation -a Multidimensional Ap- proach to Parsing Highly Inflectional Languages. In COLING, pp. 
517-520.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning to transform linguistic graphs", "authors": [ { "first": "V", "middle": [], "last": "Jijkoun", "suffix": "" }, { "first": "M", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2007, "venue": "Second Workshop on TextGraphs: Graph-Based Algorithms for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jijkoun, V. and M. de Rijke (2007). Learning to transform linguistic graphs. In Second Workshop on TextGraphs: Graph-Based Algorithms for Natural Language Processing, Rochester, NY, USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Dependency Syntax: Theory and Practice", "authors": [ { "first": "I", "middle": [], "last": "Mel'\u010duk", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mel'\u010duk, I. (1988). Dependency Syntax: Theory and Practice. Albany: State Univ. of New York Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Handbook of Graph Grammars and Computing by Graph Transformations", "authors": [ { "first": "G", "middle": [], "last": "Rozenberg", "suffix": "" } ], "year": 1997, "venue": "Foundations. World Scientific", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rozenberg, G. (Ed.) (1997). Handbook of Graph Grammars and Computing by Graph Transformations, Volume 1: Foundations. World Scientific.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "\u03b1 cat = v voice = active \u03b2 cat = v voice = unk AUX PASS c 1 = copy feat(\u03b1, \u03b2, mood) c 2 = copy feat(\u03b1, \u03b2, tense) c 3 = add feat(\u03b2, voice = passive) c 4 = del edge(\u03b2, \u03b1, AUX PASS) c 5 = shift edge(\u03b1, \u03b2) c 6 = del node(\u03b1)", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "copy feat(\u03b2, \u03b3, prep) c 2 = del edge(\u03b2, \u03b3, OBJ) c 3 = shift edge(\u03b2, \u03b3) c 4 = del node(\u03b2)2.2.3 From lexical predicative phrases to semantic predicates", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "del edge(\u03b2, \u03b1, DET) c 2 = add edge(\u03b1, \u03b2, RSTR) c 3 = add edge(\u03b1, \u03b2, H) 2.2.5 Interpretation of different kinds of modification", "uris": null }, "TABREF0": { "content": "
[Figure content not recoverable as flat text: for each of the three example sentences, the syntactic dependency graph in the P7TB format and the DMRS semantic graph produced by the rewriting modules.]", "text": "[012] \"Le fran\u00e7ais se parle de moins en moins dans les conf\u00e9rences.\" [The French language is less and less spoken in conferences.] [057] \"J'encourage Marie \u00e0 venir.\" [I invite Mary to come.] [106] \"La s\u00e9rie dont Pierre conna\u00eet la fin\" [The story Peter knows the end of]", "num": null, "html": null, "type_str": "table" } } } }