{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:34.938086Z" }, "title": "Opinion-based Relational Pivoting for Cross-domain Aspect Term Extraction", "authors": [ { "first": "Ayal", "middle": [], "last": "Klein", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bar Ilan University", "location": {} }, "email": "ayal.s.klein@gmail.com" }, { "first": "Oren", "middle": [], "last": "Pereg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel Labs", "location": { "country": "Israel" } }, "email": "oren.pereg@intel.com" }, { "first": "Daniel", "middle": [], "last": "Korat", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel Labs", "location": { "country": "Israel" } }, "email": "daniel.korat@intel.com" }, { "first": "Vasudev", "middle": [], "last": "Lal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel Labs", "location": { "country": "Israel" } }, "email": "vasudev.lal@intel.com" }, { "first": "Moshe", "middle": [], "last": "Wasserblat", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel Labs", "location": { "country": "Israel" } }, "email": "moshe.wasserblat@intel.com" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bar Ilan University", "location": {} }, "email": "dagan@cs.biu.ac.il" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Domain adaptation methods often exploit domain-transferable input features, a.k.a. pivots. The task of Aspect and Opinion Term Extraction presents a special challenge for domain transfer: while opinion terms largely transfer across domains, aspects change drastically from one domain to another (e.g. from restaurants to laptops). In this paper, we investigate and establish empirically a prior conjecture, which suggests that the linguistic relations connecting opinion terms to their aspects transfer well across domains and therefore can be leveraged for cross-domain aspect term extraction. We present several analyses supporting this conjecture, via experiments with four linguistic dependency formalisms to represent relation patterns. Subsequently, we present an aspect term extraction method that drives models to consider opinion-aspect relations via explicit multitask objectives. This method provides significant performance gains, even on top of a prior state-of-the-art linguistically-informed model, which are shown in analysis to stem from the relational pivoting signal.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Domain adaptation methods often exploit domain-transferable input features, a.k.a. pivots. The task of Aspect and Opinion Term Extraction presents a special challenge for domain transfer: while opinion terms largely transfer across domains, aspects change drastically from one domain to another (e.g. from restaurants to laptops). In this paper, we investigate and establish empirically a prior conjecture, which suggests that the linguistic relations connecting opinion terms to their aspects transfer well across domains and therefore can be leveraged for cross-domain aspect term extraction. We present several analyses supporting this conjecture, via experiments with four linguistic dependency formalisms to represent relation patterns. Subsequently, we present an aspect term extraction method that drives models to consider opinion-aspect relations via explicit multitask objectives. 
This method provides significant performance gains, even on top of a prior state-of-the-art linguistically-informed model, which are shown in analysis to stem from the relational pivoting signal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sentiment Analysis is one of the most widely used applications of natural language processing. A common fine-grained formulation of the task, termed Aspect Based Sentiment Analysis, matches the terms in the text expressing opinions to corresponding aspects. For example, in the restaurant review in Figure 1 , great, calm and quiet are opinion terms (OTs) referring to the aspect term (AT) ambience.", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 307, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Following the SemEval shared tasks (Pontiki et al., 2014, 2015) , the preliminary task of AT and OT extraction has attracted significant research attention (Wang and Pan, 2020; Pereg et al., 2020, inter alia) , especially for its domain adaptation setup,", "cite_spans": [ { "start": 35, "end": 56, "text": "(Pontiki et al., 2014", "ref_id": "BIBREF9" }, { "start": 57, "end": 80, "text": "(Pontiki et al., 2015)", "ref_id": "BIBREF10" }, { "start": 173, "end": 193, "text": "(Wang and Pan, 2020;", "ref_id": "BIBREF16" }, { "start": 194, "end": 225, "text": "Pereg et al., 2020, inter alia)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where a model trained on one domain is tested on another, unseen domain. Considering each product or service as a \"domain\", domain adaptation is crucial for making models of this task widely applicable. Yet performance on cross-domain aspect term extraction is still low, reflecting that it poses a special challenge to common domain adaptation paradigms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In most domain adaptation settings, some features of the input are domain specific, while others -also known as pivot features (Blitzer et al., 2006) -do transfer into unseen domains. Hence, cross-domain generalization requires focusing the model's learning on the latter. However, aspect terms across domains share few direct commonalities. Essentially, their common denominator is being the target topic referred to by opinion terms. For this reason, prior works suggested using handcrafted syntactic rules (Hu and Liu, 2004; Ding et al., 2017) , or alternatively, injecting a full syntactic analysis into the model (Wang and Pan, 2018; Pereg et al., 2020) , aiming to capture the transferable relation-based properties of aspects.", "cite_spans": [ { "start": 127, "end": 149, "text": "(Blitzer et al., 2006)", "ref_id": "BIBREF1" }, { "start": 512, "end": 530, "text": "(Hu and Liu, 2004;", "ref_id": "BIBREF5" }, { "start": 531, "end": 549, "text": "Ding et al., 2017)", "ref_id": "BIBREF4" }, { "start": 621, "end": 641, "text": "(Wang and Pan, 2018;", "ref_id": "BIBREF15" }, { "start": 642, "end": 661, "text": "Pereg et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our first contribution is establishing the relational pivoting approach for cross-domain AT extraction through quantitative, data-driven analysis ( \u00a73). 
We utilize four different linguistic formalisms (i.e., syntactic and semantic dependencies) to characterize OT-AT relations, and empirically confirm their domain transferability and importance for the task. Subsequently, we propose an auxiliary multi-task learning method with specialized relation-focused tasks, designed to teach the model to explicitly capture these relations during OT and AT extraction training ( \u00a74). Our method improves cross-domain AT extraction performance when applied over both vanilla BERT (Devlin et al., 2019) and the state-of-the-art SA-EXAL (Pereg et al., 2020) models. We conclude with a quantitative analysis of model predictions, ascribing observed performance gains to enhanced relational pivoting. 1", "cite_spans": [ { "start": 660, "end": 681, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 714, "end": 734, "text": "(Pereg et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Following the SemEval Aspect Based Sentiment Analysis shared tasks (Pontiki et al., 2014, 2015) , recent works have formulated the OT and AT extraction task: given an opinionated text, identify the spans denoting OTs and ATs. We adopt the benchmark dataset that was used by recent works (Wang and Pan, 2020; Pereg et al., 2020) , which consists of three customer-review domains -(R)estaurants, (L)aptops and digital (D)evices -and was aggregated from the SemEval tasks jointly with several published resources (Hu and Liu, 2004; Wang et al., 2016) . While promising AT extraction performance has been demonstrated for in-domain settings (Li et al., 2018; Augustyniak et al., 2019) , it does not scale to unseen domains, where state-of-the-art models exhibit only small incremental improvements and struggle to surpass F1 scores of 40-55 (for the different domain pairs).", "cite_spans": [ { "start": 67, "end": 88, "text": "(Pontiki et al., 2014", "ref_id": "BIBREF9" }, { "start": 89, "end": 112, "text": "(Pontiki et al., 2015)", "ref_id": "BIBREF10" }, { "start": 304, "end": 324, "text": "(Wang and Pan, 2020;", "ref_id": "BIBREF16" }, { "start": 325, "end": 344, "text": "Pereg et al., 2020)", "ref_id": "BIBREF8" }, { "start": 527, "end": 545, "text": "(Hu and Liu, 2004;", "ref_id": "BIBREF5" }, { "start": 546, "end": 564, "text": "Wang et al., 2016)", "ref_id": "BIBREF17" }, { "start": 654, "end": 671, "text": "(Li et al., 2018;", "ref_id": "BIBREF6" }, { "start": 672, "end": 697, "text": "Augustyniak et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Previous works have conjectured that aspect and opinion terms maintain frequent syntactic relations between them. Subsequently, Hu and Liu (2004) , followed by Qiu et al. (2011) , crafted a handful of simple syntactic patterns for in-domain AT extraction based on OTs. Motivated by the hypothesized domain transferability of syntactic OT-AT relations, Ding et al. (2017) employed pseudo labeling of ATs based on the aforementioned patterns, which was used as auxiliary supervision for the domain adaptation setup. We, however, extract our patterns from the data rather than manually crafting them.", "cite_spans": [ { "start": 128, "end": 145, "text": "Hu and Liu (2004)", "ref_id": "BIBREF5" }, { "start": 160, "end": 177, "text": "Qiu et al. (2011)", "ref_id": "BIBREF11" }, { "start": 352, "end": 370, "text": "Ding et al. 
(2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "In a related line of work, syntax was leveraged more broadly for the same relational pivoting motivation. Wang and Pan (2018) and Wang and Pan (2020) encoded dependency relations with a recursive neural network using multitask learning, where the latter also applied domain-invariant adversarial learning. Most recently, the Syntactically Aware Extended Attention Layer model (SA-EXAL) (Pereg et al., 2020) improved cross-domain OT and AT extraction by augmenting BERT with an additional self-attention head that attends solely to the syntactic head of each token.", "cite_spans": [ { "start": 108, "end": 127, "text": "Wang and Pan (2018)", "ref_id": "BIBREF15" }, { "start": 132, "end": 151, "text": "Wang and Pan (2020)", "ref_id": "BIBREF16" }, { "start": 388, "end": 408, "text": "(Pereg et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The Relational Pivoting hypothesis is jointly entailed by two observations: (1) Opinion terms are similar across domains. (2) The relationships between corresponding OT-AT pairs have common, domain-transferable linguistic characteristics. Taken together, these suggest that OT-AT linguistic relations are informative pivot features for transferring aspect extraction across domains. In the following subsections, we show several analyses supporting the above observations and hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Data Analysis", "sec_num": "3" }, { "text": "We first measure the degree to which OTs and ATs are shared across domains, by computing cross-domain lexical overlap. Table 1 shows the percentage of term instances in the target domain occurring at least once in the source domain. Overall, unlike aspect terms, opinion terms have significant overlap across domains. For example, the terms great, good, best, better and nice all occur in the top-10 common OTs in each of the three domains, jointly covering 22%, 20% and 14% of OTs in the Restaurants, Devices and Laptops domains, respectively. In sharp contrast, there is only one aspect (price) occurring in the top 50 common ATs in all three domains. This is in sync with model experiments -both in-house and as reported by Wang and Pan (2020) -showing a drastic performance drop for cross-domain AT extraction, from lower 70s in-domain to around 45 F1, while exhibiting a \"reasonable\" drop in OT extraction, from lower 80s to around 70 F1.", "cite_spans": [ { "start": 726, "end": 745, "text": "Wang and Pan (2020)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Opinions vs. Aspects Domain Variability", "sec_num": "3.1" }, { "text": "Next, we measure the degree to which linguistic relations connecting OT-AT pairs are shared across domains. To this end, we capture OT-AT linguistic relations using their path pattern in a dependency graph, i.e., the ordered list of the dependency relation labels occurring throughout the shortest (undirected) path between the terms (Figure 1 ). 2 We investigate and compare four linguistic formalisms: Spacy's syntactic dependencies 3 , Universal Dependencies (UD), and two formalisms from Semantic Dependency Parsing (Oepen et al., 2015) -DELPH-IN MRS (DM) and Prague Semantic Dependencies (PSD). 
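For concreteness, the following minimal sketch illustrates how such a path pattern can be read off a spaCy parse; it is our illustration rather than the paper's released code, and the helper name pattern_between and the direction-marker convention are our own assumptions.

```python
# Illustrative extraction of an OT-AT path pattern over spaCy dependencies.
# Assumes: pip install spacy networkx; python -m spacy download en_core_web_sm
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def pattern_between(doc, ot_idx, at_idx):
    """Ordered dependency labels, with direction markers, along the shortest
    undirected path between two token indices (None if no path exists)."""
    graph = nx.Graph()
    graph.add_nodes_from(range(len(doc)))
    for token in doc:
        for child in token.children:
            graph.add_edge(token.i, child.i, label=child.dep_)
    if not nx.has_path(graph, ot_idx, at_idx):
        return None
    path = nx.shortest_path(graph, ot_idx, at_idx)
    pattern = []
    for u, v in zip(path, path[1:]):
        label = graph.edges[u, v]["label"]
        # '>' marks a head-to-dependent step, '<' the reverse direction
        arrow = ">" if doc[v].head.i == u else "<"
        pattern.append(label + arrow)
    return tuple(pattern)

doc = nlp("The ambience was great , very calm and quiet .")
# OT 'quiet' (index 8) to AT 'ambience' (index 1); the exact labels depend
# on the parser, e.g. ('conj<', 'nsubj>') for a UD-style analysis
print(pattern_between(doc, 8, 1))
```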
4 We parsed all the sentences in the benchmark dataset with state-of-the-art parsers -SpaCy 2.0, UDPipe 5 , and HIT-SCIR (Che et al., 2019) for DM and PSD. Importantly, since correspondences between ATs and OTs are not annotated in the benchmark dataset, we first heuristically define which (OT, AT) pairs would be considered related. Following a preliminary analysis, we selected for each formalism all pairs whose shortest path length is \u2264 2. This yields 9K-10K pairs which cover 60%-70% of the ATs across the different formalisms. These pairs and their path patterns constitute the data for the analyses below, as well as for training relation-focused auxiliary tasks ( \u00a74).", "cite_spans": [ { "start": 347, "end": 348, "text": "2", "ref_id": null }, { "start": 520, "end": 540, "text": "(Oepen et al., 2015)", "ref_id": "BIBREF7" }, { "start": 600, "end": 601, "text": "4", "ref_id": null }, { "start": 721, "end": 739, "text": "(Che et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 334, "end": 343, "text": "(Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "OT-AT Path Patterns", "sec_num": "3.2" }, { "text": "We find that between 94%-97% of the patterns in one domain are covered by another domain (more details in Appendix A). This confirms the prior presupposition that the linguistic structure of OT-AT relations is fairly domain invariant, and puts forward path patterns as promising features for domain transfer. In Section 3.4 we further analyze the variability across different domain transfer settings. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OT-AT Path Patterns", "sec_num": "3.2" }, { "text": "To quantify the estimated potential of relation-based pivoting, we analyze a deterministic method for extracting ATs via gold OTs based on path patterns, similar to prior rule-based methods (Hu and Liu, 2004; Qiu et al., 2011) , and assess how well such an approach transfers across domains. Given predicted linguistic parses, we select the top k common OT-to-AT path patterns and apply them on every OT, where traversal destination tokens are selected as ATs. To illustrate, given the UD pattern OT \u2190CONJ\u2212 * \u2212NSUBJ\u2192 AT, the OTs quiet and calm would both yield ambience as an AT (Figure 1, bottom) . Notably, this analysis is only a rough upper-bound estimate; it is limited to identifying single-word ATs (70% of all ATs) that furthermore relate to an OT via a known pattern, whereas models may generalize over some of these limitations.", "cite_spans": [ { "start": 189, "end": 207, "text": "(Hu and Liu, 2004;", "ref_id": "BIBREF5" }, { "start": 208, "end": 225, "text": "Qiu et al., 2011)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 583, "end": 601, "text": "(Figure 1, bottom)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Deterministic Relational Pivoting", "sec_num": "3.3" }, { "text": "Averaged results (across domain settings) are shown in Table 2 for varying k sizes (see Appendix B for a breakdown by domain pairs). Overall, pattern-based AT extraction can bring the averaged F1 score up to 39 (DM), and recall up to 54 (UD). Crucially, there is hardly any drop in cross-domain settings relative to in-domain, affirming that patterns from a different source domain are as informative as in-domain patterns for opinion-based AT extraction, consistent with observed pattern stability ( \u00a73.2). These findings suggest that driving a model to encode OT-AT relations should enhance domain adaptation.
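A hedged sketch of this deterministic procedure, reusing the hypothetical pattern_between helper from the previous snippet, could look as follows; the mining and matching details are assumptions on our part, not the authors' implementation.

```python
# Sketch of top-k pattern pivoting ( \u00a73.3): mine frequent source-domain
# OT-to-AT patterns, then follow them from gold OTs to predict ATs.
from collections import Counter

def top_k_patterns(source_pairs, k=10):
    """source_pairs: iterable of (doc, ot_idx, at_idx) from the source domain."""
    counts = Counter(
        p for doc, ot, at in source_pairs
        if (p := pattern_between(doc, ot, at)) is not None
    )
    return {p for p, _ in counts.most_common(k)}

def extract_aspects(doc, ot_indices, patterns):
    """Predict as ATs all tokens reachable from a gold OT via a stored pattern.
    (We approximate pattern traversal by matching shortest-path patterns.)"""
    return {
        tok.i
        for ot in ot_indices
        for tok in doc
        if tok.i != ot and pattern_between(doc, ot, tok.i) in patterns
    }
```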
", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 62, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Deterministic Relational Pivoting", "sec_num": "3.3" }, { "text": "It is illuminating to examine the differences between domains with respect to the path-pattern variability and transferability. In order to assess the linguistic diversity of OT-AT relations within each domain, we plot the relative cumulative pattern distribution for each linguistic formalism, visualizing how many OT-AT pairs (%) are covered by how many different patterns (see Figure 2 for a representative example, and Appendix Figure 3 for the complete set of figures). The general picture is that the vast majority of OT-AT pairs exhibit a few dozen path patterns, although most pairs are covered by a few high-frequency patterns.", "cite_spans": [], "ref_spans": [ { "start": 380, "end": 388, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 424, "end": 432, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Analysis of Domain Differences", "sec_num": "3.4" }, { "text": "Specifically, we observe that the Laptops domain is the most diverse and slowly-accumulating, while the opposite is true for the Restaurants domain. We conjecture that the linguistic variability of OT-AT relations inside a domain affects its transferability. High variability makes the domain harder to transfer to, as many relation patterns were not seen during training on the source domain. At the same time, it might make it a good choice for the source domain, acquainting the model with a rich set of relational linguistic constructions to generalize from.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Domain Differences", "sec_num": "3.4" }, { "text": "Obviously, the within-domain variability is not the most prominent factor affecting domain transfer; rather, it interacts with the similarity of the domain pairs, both on the pivot features (here: OT-AT relations) and on the non-pivot features (here: the lexical and semantic profile of ATs and OTs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Domain Differences", "sec_num": "3.4" }, { "text": "To have a better handle on the cross-domain similarity of OT-AT relations that accounts for pattern frequency in each domain, we compute the Jensen-Shannon Distance between path-pattern probability distributions (Table 6 in Appendix A), where smaller distance indicates greater similarity. While the Devices and Laptops domains are the most similar to each other, the Restaurants and Laptops domains are least similar.", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 216, "text": "(Table 6", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Analysis of Domain Differences", "sec_num": "3.4" }, { "text": "By and large, this is in line both with the results of the deterministic pivoting analysis (Section 3.3) broken down by domain pairs (Table 7 in Appendix B) , and, to a smaller degree, with the performance gains of our relation-focused multitask learning experiments (Section 5).", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 152, "text": "(Table 7 in Appendix B)", "ref_id": null } ], "eq_spans": [], "section": "Analysis of Domain Differences", "sec_num": "3.4" }
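For illustration, the Jensen-Shannon computation over two domains' pattern lists can be sketched as follows (assuming SciPy; function and variable names are ours, not the paper's):

```python
# Hedged sketch of the cross-domain comparison of path-pattern
# distributions ( \u00a73.4) via Jensen-Shannon distance.
from collections import Counter
import numpy as np
from scipy.spatial.distance import jensenshannon

def pattern_js_distance(source_patterns, target_patterns):
    """source/target_patterns: lists of path patterns observed in each domain."""
    vocab = sorted(set(source_patterns) | set(target_patterns))
    src, tgt = Counter(source_patterns), Counter(target_patterns)
    p = np.array([src[v] for v in vocab], dtype=float)
    q = np.array([tgt[v] for v in vocab], dtype=float)
    # normalize raw counts into probability distributions
    p, q = p / p.sum(), q / q.sum()
    return jensenshannon(p, q)  # smaller = more similar frequency signatures
```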
, { "text": "To propagate the relational pivoting signal into an OT and AT extraction model, we apply auxiliary multitask learning (AMTL). We experimented with two auxiliary tasks for steering the model to encode OT-AT relationship information during training. Given an OT from an OT-AT pair of the collected auxiliary training data ( \u00a73.2), the model learns to: (1) predict its counterpart AT (ASP); and (2) predict the path-pattern connecting them on the dependency graph (PATT). 6 The ASP task should foreground the implicit representation of OT-AT relations, whereas PATT injects explicit, linguistically-oriented relation information. Prior multitask learning approaches for enriching models with syntax (Strubell et al., 2018; Wang and Pan, 2018, 2020) have pushed them to encode a full syntactic analysis, possibly including irrelevant information. In contrast, our auxiliary tasks form a \"partial parsing\" objective, specialized in the relevant terms and their multifarious relations. We use both vanilla BERT (Devlin et al., 2019) and state-of-the-art SA-EXAL (Pereg et al., 2020) as base models, where the latter may indicate whether our relation-focused signal is subsumed by SA-EXAL's awareness of the full syntactic parse ( \u00a72).", "cite_spans": [ { "start": 469, "end": 470, "text": "6", "ref_id": null }, { "start": 696, "end": 719, "text": "(Strubell et al., 2018;", "ref_id": "BIBREF13" }, { "start": 720, "end": 736, "text": "Wang and Pan, 2018, 2020)", "ref_id": null }, { "start": 996, "end": 1017, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 1046, "end": 1066, "text": "(Pereg et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-task Learning Method", "sec_num": "4" }, { "text": "We follow the experimental setup of Pereg et al. (2020) and formulate OT and AT extraction as a single BIO-tagging task. One-layer classifiers are applied on top of either bert-base-uncased or SA-EXAL encoders, both for the main task and for the auxiliary tasks. Let Z = {z_1, z_2, . . . , z_n} be the contextualized representations of the input sequence produced by the encoder, and op be the OT index from an extracted OT-AT pair. The auxiliary classifiers are defined as follows:", "cite_spans": [ { "start": 36, "end": 56, "text": "Pereg et al. (2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation details", "sec_num": null }, { "text": "PATT(Z, op) = softmax(z_op W^P + U^P) and ASP(Z, op) = softmax(o_1, . . . , o_n), where o_i = (z_op W^A + U^A) \u2022 z_i, and where W^P \u2208 R^{d\u00d7m}, U^P \u2208 R^m, W^A \u2208 R^{d\u00d7d}, U^A \u2208 R^d are model parameters, \u2022 stands for dot product, d is the hidden vector size and m is the size of the output pattern vocabulary. m is set by taking all the patterns whose frequency in the training data (i.e., the source domain) is \u2265 3, while mapping other patterns to a fixed UNK symbol.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation details", "sec_num": null }
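A minimal PyTorch sketch of these two classifier heads, under our own naming and structure rather than the released implementation, could be:

```python
# Hedged sketch of the PATT and ASP auxiliary heads defined above.
import torch
import torch.nn as nn

class AuxiliaryHeads(nn.Module):
    def __init__(self, d, m):
        super().__init__()
        self.patt = nn.Linear(d, m)  # weights W^P, bias U^P
        self.asp = nn.Linear(d, d)   # weights W^A, bias U^A

    def forward(self, Z, op):
        """Z: (n, d) contextualized token states; op: index of the OT token."""
        z_op = Z[op]                       # (d,)
        patt_logits = self.patt(z_op)      # (m,) scores over path patterns
        asp_logits = Z @ self.asp(z_op)    # (n,) o_i = (z_op W^A + U^A) . z_i
        return patt_logits, asp_logits     # softmax is folded into the loss

# Training would add the auxiliary cross-entropy terms to the main
# BIO-tagging loss, e.g. (illustrative):
#   loss = bio_loss + F.cross_entropy(patt_logits[None], gold_pattern_id) \
#                   + F.cross_entropy(asp_logits[None], gold_at_index)
```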
, { "text": "Model (+ AMTL task - Formalism) | L \u2192 R | D \u2192 R | R \u2192 L | D \u2192 L | R \u2192 D | L \u2192 D | Mean", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation details", "sec_num": null }, { "text": "BERT | 47.2 (4.0) | 51.6 (2.1) | 44.5 (3.1) | 46.7 (1.7) | 38.3 (2.4) | 42.6 (0.6) | 45.16; BERT + ASP - DM | 53.5 (3.3) | 52.0 (2.1) | 45.7 (2.4) | 45.9 (2.3) | 38.8 (1.5) | 42.8 (1.0) | 46.45; BERT + ASP - Spacy | 49.8 (3.2) | 51.6 (1.5) | 46.2 (2.5) | 45.2 (2.5) | 39.4 (1.6) | 42.5 (1.0) | 45.77; BERT + PATT - DM | 46.3 (4.7) | 50.9 (2.6) | 42.9 (3.4) | 46.2 (2.4) | 38.0 (1.9) | 42.1 (1.0) | 44.40; BERT + PATT - Spacy | 50.1 (3.0) | 51.6 (2.0) | 43.1 (2.2) | 46.6 (2.5) | 37.8 (1.6) | 42.0 (0.9) | 45.20; SA-EXAL - DM | 48.7 (5.8) | 53.8 (2.8) | 46.0 (3.1) | 47.7 (1.8) | 40.7 (1.3) | 41.9 (0.6) | 46.48; SA-EXAL - Spacy | 47.9 (3.1) | 54.1 (1.9) | 45.4 (3.3) | 47.1 (1.1) | 40.7 (1.7) | 42.1 (1.4) | 46.24; SA-EXAL + ASP - DM | 54.1 (2.3) | 51.6 (2.0) | 45.6 (2.9) | 45.8 (4.1) | 39.2 (1.9) | 41.8 (0.9) | 46.37; SA-EXAL + ASP - Spacy | 54.0 (3.1) | 52.6 (1.9) | 47.1 (3.0) | 46.9 (2.4) | 39. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation details", "sec_num": null }, { "text": "Following Pereg et al. (2020), we run each model on 3 random data splits and 3 different random seeds, presenting the mean F1 (and standard deviation) of the 9 runs. Detailed results are shown in Table 4 , 7 omitting the UD and PSD formalisms -which perform virtually on par with the other formalisms -for space considerations. 8 For BERT, training for ASP consistently improves the mean F1 score, by up to 1.3 points (DM), bringing BERT's performance on par with the state-of-the-art SA-EXAL model. Improvements over the SA-EXAL baseline are generally smaller, yet some settings improve by 0.5-1 mean F1 points. Best performance is attained using SA-EXAL + PATT with semantic formalisms, indicating that the pattern-focused signal is complementary to generic syntax enrichment methods.", "cite_spans": [], "ref_spans": [ { "start": 194, "end": 201, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5" }, { "text": "7 Our reported baseline figures are slightly different from those reported by Pereg et al. (2020) , as we could not fully reproduce their hyperparameter settings, e.g. random seeds. Aiming for a controlled experiment concerning only the AMTL improvements over baselines, we have not optimized the random seeds for any condition. 8 Results for models trained with both ASP and PATT were also omitted due to their lower performance.", "cite_spans": [ { "start": 78, "end": 97, "text": "Pereg et al. (2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5" }, { "text": "Performance Analysis The overlap between model predictions and the deterministic relational pivoting method ( \u00a73.3) indicates to what extent the model utilizes relational pivot features. Given model predictions, we define pivot-\u2206R as the recall improvement a model gains by unifying its true predicted ATs with those of the deterministic method (at k = 10). 9 Greater pivot-\u2206R indicates greater discrepancy from the potential scope of pattern-based coverage, hinting that the model incorporates fewer relational pivot features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5" }
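Under our reading of this definition, pivot-\u2206R can be sketched as a simple set computation; the set representation and names are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the pivot-delta-R measure defined above; AT sets hold
# (sentence_id, token_index) identifiers.
def pivot_delta_r(model_ats, pattern_ats, gold_ats):
    """Recall gain from unifying a model's predicted ATs with the
    deterministic pattern-based extractions (at k = 10)."""
    base_recall = len(model_ats & gold_ats) / len(gold_ats)
    union_recall = len((model_ats | pattern_ats) & gold_ats) / len(gold_ats)
    return 100 * (union_recall - base_recall)  # in recall points
```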
, { "text": "Taking DM as the formalism, we find that for the vanilla BERT model, the average pivot-\u2206R across the 6 domain transfers is 16.5 recall points, with 22.6 for the Laptops to Restaurants transfer (L \u2192 R). This implies that relational features have significant potential for enhancing its cross-domain coverage, especially on L \u2192 R, where we indeed observe the most profound model improvements using our relation-focused tasks. In comparison, BERT + ASP (DM) has an averaged pivot-\u2206R of 14, with 15.7 on L \u2192 R (see Appendix E for more details). This drop confirms that the AMTL objective pushes the model to cover more OT-related ATs using relational pivoting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5" }, { "text": "We establish an opinion-based cross-domain AT extraction approach by analyzing the domain invariance of linguistic OT-AT path patterns. We consequently propose a relation-focused multitask learning method, and demonstrate that it enhances model results by utilizing relational features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In Table 5 , we present the percentage of target domain path patterns occurring at least once in the source domain. To account for pattern frequency in each domain, we also compute the Jensen-Shannon Distance between pattern probability distributions (Table 6 ). Overall, DM has the best cross-domain pattern overlap, while the Devices and Laptops domains are slightly more similar to each other. ", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 251, "end": 259, "text": "(Table 6", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Appendices A Cross-domain overlap in path patterns", "sec_num": null }, { "text": "In Section 3.3 we describe a deterministic domain-transfer AT extraction method based on gold opinion terms and the top k most frequent OT-AT path patterns in the source domain. Results per domain pair are shown in Table 7 for k = 10, which approximately optimizes the recall-precision trade-off. Noticeably, the method is less effective for the Laptops target domain. This finding aligns with its wider pattern diversity, as mentioned in Section 3.4 and illustrated in Figure 3 , but should also be attributed to it having relatively fewer OT-AT pairs that meet our path-length \u2264 2 criterion. In DM, for example, the ratio of the number of selected OT-AT pairs to the total number of aspect terms is 0.93 for Restaurants, 0.77 for Devices, but only 0.67 for the Laptops domain. Altogether, our investigation suggests that the domains vary in linguistic complexity, reflected in richer and longer path patterns for truly corresponding OT-AT pairs in some domains (e.g. Laptops) compared to others (e.g. Restaurants). 
Relational pivoting might be more beneficial for the latter, as also demonstrated by the multitask experiments ( \u00a75).", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 213, "text": "Table 7", "ref_id": null }, { "start": 457, "end": 465, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "B Deterministic relation pivoting per domain pair", "sec_num": null }, { "text": "As mentioned in Section 3.4, we plot the relative cumulative pattern distribution for each domain and formalism, visualizing the number of different patterns vs. OT-AT pair coverage (%) (Figure 3 ). Referring to differences between linguistic formalisms, we find the cumulative distributions of DM and PSD more \"dense\". In DM, for example, the most frequent common pattern (simply OT \u2212ARG1\u2192 AT) covers 55% of the paths. This implies that semantic formalisms, designed for abstracting out surface realization details, strengthen the commonalities across different sentences, and thus might have greater potential for relational pivoting. This conjecture is also backed by the deterministic pivoting analysis ( \u00a73.3). However, we did not find a significant advantage for semantic vs. syntactic formalisms in model experiments (see \u00a75).", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 196, "text": "(Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "C Pattern distribution for different linguistic formalisms", "sec_num": null }, { "text": "As mentioned in Section 2, the SA-EXAL model augments BERT with a specialized, 13th attention head, incorporating the syntactic parse directly into the model attention mechanism. In the original paper, SA-EXAL was fed with syntactic dependency trees, where each token has a syntactic head token to which it should attend. The learned attention matrix A \u2208 R^{n\u00d7n} is multiplied element-wise by a matrix representation of the syntactic parse P, where each row is a one-hot vector stating the token to which to attend.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D SA-EXAL for semantic graphs", "sec_num": null }, { "text": "However, semantic dependency formalisms, such as PSD and DM, produce bi-lexical directed acyclic graphs, in which a word can have zero \"heads\" (for semantically vacuous words, e.g. copular verbs) or multiple \"heads\" (i.e. outgoing edges). We modify the SA-EXAL model such that instead of one-hot rows, P can have all-one rows (no heads) or multiple-ones rows (multiple heads). Consequently, for tokens with no heads the network learns the attention without external interference, whereas for tokens with multiple heads, the attention mass is distributed between the heads.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D SA-EXAL for semantic graphs", "sec_num": null }, { "text": "In Section 5 we define the pivot-\u2206R measure for model predictions, which quantifies how much model predictions can be improved with pattern-based relational pivoting. We observe that pivot-\u2206R is higher for the baseline models compared to the corresponding models enhanced by our AMTL objectives (specifically the ASP objective). Nonetheless, this reduction in pivot-\u2206R seems to correlate with the model's improvement across the transfer settings. In Figure 4 we illustrate this for the BERT and BERT + ASP (DM) models.", "cite_spans": [], "ref_spans": [ { "start": 444, "end": 452, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "E Correlating pivot-\u2206R and model improvement", "sec_num": null }
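The correlation reported next can be computed directly with SciPy; a hedged sketch, with the per-transfer inputs left as arguments rather than fabricated values, follows.

```python
# Hedged sketch of the Appendix E correlation between per-transfer
# reductions in pivot-delta-R and per-transfer F1 improvements.
from scipy.stats import spearmanr

def correlate_gains(delta_r_reductions, f1_improvements):
    """Both arguments: 6 values, one per cross-domain transfer setting."""
    rho, p_value = spearmanr(delta_r_reductions, f1_improvements)
    return rho  # note: with n = 6, significance cannot be meaningfully tested
```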
, { "text": "Observed Spearman's \u03c1 over the 6 transfer settings is 0.83 (though obviously this small sample cannot be tested for statistical significance). This examination of model predictions entails that the improvement we observe in model performance is indeed attributed to instances that exhibit a relation pattern present in the source domain. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "E Correlating pivot-\u2206R and model improvement", "sec_num": null }, { "text": "Table 7 : Results of deterministic relational pivoting per DA setting (k = 10). Columns: R \u2192 L | R \u2192 D | L \u2192 R | L \u2192 D | D \u2192 R | D \u2192 L. Spacy: P 0.32, R 0.22, F1 0.26 | P 0.61, R 0.29, F1 0.40 | P 0.49, R 0.37, F1 0.42 | P 0.58, R 0.33, F1 0.42 | P 0.54, R 0.37, F1 0.44 | P 0.30, R 0.24, F1 0.27. UD: P 0.26, R 0.23, F1 0.24 | P 0.46, R 0.29, F1 0.36 | P 0.44, R 0.39, F1 0.41 | P 0.47, R 0.30, F1 0.36 | P 0.49, R 0.37, F1 0.43 | P 0.26, R 0.23, F1 0.24. DM: P 0.29, R 0.25, F1 0.27 | P 0.60, R 0.34, F1 0.44 | P 0.52, R 0.40, F1 0.45 | P 0.60, R 0.37, F1 0.46 | P 0.47, R 0.39, F1 0.43 | P 0.26, R 0.26, F1 0.26. PSD: P 0.22, R 0.26, F1 0.24 | P 0.41, R 0.34, F1 0.37 | P 0.35, R 0.40, F1 0.38 | P 0.41, R 0.35, F1 0.38 | P 0.30, R 0.40, F1 0.34 | P 0.19, R 0.27, F1 0.22.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "E Correlating pivot-\u2206R and model improvement", "sec_num": null }, { "text": "We maintain edge direction by appending a directionality marker to each edge label. In case of multi-word terms, we take the token pair across the terms having the shortest path. 3 https://spacy.io/ 4 We also experimented with three application-oriented UD extensions: Enhanced UD, Enhanced UD++ (Schuster and Manning, 2016), and pyBART (Tiktinsky et al., 2020). These formalisms introduced more label variability compared with UD, but also shortened OT-AT paths and performed slightly better in the multitask experiments. 
However, we omit these for presentation convenience. 5 https://ufal.mff.cuni.cz/udpipe", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The SA-EXAL model was amended to generalize over the graph structures (rather than trees) produced by semantic formalisms (Appendix D).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We average this measure as well over the 9 model runs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank Intel Emergent AI Lab members and the anonymous reviewers for their valuable comments and feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Comprehensive analysis of aspect term extraction methods using various text embeddings", "authors": [ { "first": "\u0141ukasz", "middle": [], "last": "Augustyniak", "suffix": "" }, { "first": "Tomasz", "middle": [], "last": "Kajdanowicz", "suffix": "" }, { "first": "Przemys\u0142aw", "middle": [], "last": "Kazienko", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.04917" ] }, "num": null, "urls": [], "raw_text": "\u0141ukasz Augustyniak, Tomasz Kajdanowicz, and Przemys\u0142aw Kazienko. 2019. Comprehensive analysis of aspect term extraction methods using various text embeddings. arXiv preprint arXiv:1909.04917.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Domain adaptation with structural correspondence learning", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "120--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 conference on empirical methods in natural language processing, pages 120-128.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Hit-scir at mrp 2019: A unified pipeline for meaning representation parsing via efficient training and effective encoding", "authors": [ { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Longxu", "middle": [], "last": "Dou", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yuxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "76--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wanxiang Che, Longxu Dou, Yang Xu, Yuxuan Wang, Yijia Liu, and Ting Liu. 2019. Hit-scir at mrp 2019: A unified pipeline for meaning representation parsing via efficient training and effective encoding. 
In Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning, pages 76-85.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Recurrent neural networks with auxiliary labels for crossdomain opinion target extraction", "authors": [ { "first": "Ying", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Jianfei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2017, "venue": "Association for the Advancement of Artificial Intelligence", "volume": "", "issue": "", "pages": "3436--3442", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Ding, Jianfei Yu, and Jing Jiang. 2017. Recurrent neural networks with auxiliary labels for cross-domain opinion target extraction. In Association for the Advancement of Artificial Intelligence, pages 3436-3442.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Mining opinion features in customer reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "In American Association for Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2004. Mining opinion features in customer reviews. In American Association for Artificial Intelligence.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Aspect term extraction with history attention and selective transformation", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" }, { "first": "Piji", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" }, { "first": "Zhimou", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Li, Lidong Bing, Piji Li, Wai Lam, and Zhimou Yang. 2018. Aspect term extraction with history attention and selective transformation. 
CoRR, abs/1805.00760.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semeval 2015 task 18: Broad-coverage semantic dependency parsing", "authors": [ { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Kuhlmann", "suffix": "" }, { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "Silvie", "middle": [], "last": "Cinkov\u00e1", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "915--926", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov\u00e1, Dan Flickinger, Jan Hajic, and Zdenka Uresova. 2015. Semeval 2015 task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 915-926.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Syntactically aware cross-domain aspect and opinion terms extraction", "authors": [ { "first": "Oren", "middle": [], "last": "Pereg", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Korat", "suffix": "" }, { "first": "Moshe", "middle": [], "last": "Wasserblat", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1772--1777", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Pereg, Daniel Korat, and Moshe Wasserblat. 2020. Syntactically aware cross-domain aspect and opinion terms extraction. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1772-1777.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semeval-2014 task 4: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitrios", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "Harris", "middle": [], "last": "Papageorgiou", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "27--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitrios Galanis, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014) at (COLING 2014), pages 27-35.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Semeval-2015 task 12: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitrios", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "Harris", "middle": [], "last": "Papageorgiou", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th international workshop on semantic evaluation", "volume": "", "issue": "", "pages": "486--495", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitrios Galanis, Harris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. 
In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 486-495.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Opinion word expansion and target extraction through double propagation", "authors": [ { "first": "Guang", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Bu", "suffix": "" }, { "first": "Chun", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2011, "venue": "Comput. Linguist", "volume": "37", "issue": "1", "pages": "9--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Comput. Linguist., 37(1):9-27.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Enhanced english universal dependencies: An improved representation for natural language understanding tasks", "authors": [ { "first": "Sebastian", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "2371--2378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Schuster and Christopher D Manning. 2016. Enhanced english universal dependencies: An improved representation for natural language understanding tasks. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2371-2378.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Linguisticallyinformed self-attention for semantic role labeling", "authors": [ { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Andor", "suffix": "" }, { "first": "David", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "5027--5038", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027-5038.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Evidence-based syntactic transformations for ie", "authors": [ { "first": "Aryeh", "middle": [], "last": "Tiktinsky", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.01306" ] }, "num": null, "urls": [], "raw_text": "Aryeh Tiktinsky, Yoav Goldberg, and Reut Tsarfaty. 2020. pybart: Evidence-based syntactic transformations for ie. 
arXiv preprint arXiv:2005.01306.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Recursive neural structural correspondence network for crossdomain aspect and opinion co-extraction", "authors": [ { "first": "Wenya", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Sinno Jialin Pan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenya Wang and Sinno Jialin Pan. 2018. Recursive neural structural correspondence network for cross-domain aspect and opinion co-extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1-11.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Syntactically meaningful and transferable recursive neural networks for aspect and opinion extraction", "authors": [ { "first": "Wenya", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Sinno Jialin Pan", "suffix": "" } ], "year": 2020, "venue": "Computational Linguistics", "volume": "45", "issue": "4", "pages": "705--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenya Wang and Sinno Jialin Pan. 2020. Syntactically meaningful and transferable recursive neural networks for aspect and opinion extraction. Computational Linguistics, 45(4):705-736.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Recursive neural conditional random fields for aspect-based sentiment analysis", "authors": [ { "first": "Wenya", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Sinno Jialin Pan", "suffix": "" }, { "first": "Xiaokui", "middle": [], "last": "Dahlmeier", "suffix": "" }, { "first": "", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "616--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 616-626.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "text": "An OT (yellow) to AT (blue) path-pattern (green), defined on top of Universal Dependencies (UD), occurring in sentences from the Devices (top) and Restaurants (bottom) domains.", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Relative cumulative frequency distribution of path patterns -Restaurants domain, UD formalism.", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "Relative cumulative frequency distributions of path patterns for each domain in all formalisms, showing how many different patterns (X axis) cover what percentage of OT-AT pairs (Y axis).", "num": null, "type_str": "figure" }, "FIGREF4": { "uris": null, "text": "Relation between reduction in pivot-\u2206R from BERT to BERT + ASP and the corresponding improvement in model performance. Results are provided for DM dependencies.", "num": null, "type_str": "figure" }, "TABREF1": { "content": "
: Cross-domain lexical term overlap - how many term instances from the target domain occur at least once in the source domain (percentage).
", "num": null, "html": null, "text": "", "type_str": "table" }, "TABREF3": { "content": "", "num": null, "html": null, "text": "Results of deterministically applying the top k common path patterns (in the source domain) on gold OTs for extracting ATs. Evaluation is macro-averaged over the 3 in-domain or 6 cross-domain settings.", "type_str": "table" }, "TABREF5": { "content": "
: Jensen-Shannon Distances between pattern probabilities in different domains. Lower distance indicates similarity between the frequency signatures of patterns in a domain pair.
", "num": null, "html": null, "text": "", "type_str": "table" }, "TABREF7": { "content": "", "num": null, "html": null, "text": "Cross-domain AT-extraction for different models and linguistic formalisms, evaluated by mean F1 score (and standard deviation). Each column (e.g. L \u2192 R) stands for a cross-domain transfer (e.g. Laptops to Restaurants), where the best BERT and SA-EXAL results are highlighted in bold.", "type_str": "table" }, "TABREF9": { "content": "
Columns: R \u2194 L | R \u2194 D | L \u2194 D | Mean. Spacy: 0.62 | 0.60 | 0.58 | 0.60. UD: 0.60 | 0.59 | 0.56 | 0.58. DM: 0.50 | 0.50 | 0.50 | 0.50. PSD: 0.60 | 0.56 | 0.58 | 0.58. Mean: 0.58 | 0.56 | 0.55 | 0.57
", "num": null, "html": null, "text": "Cross-domain pattern overlap - how many AT-OT paths in the target domain share a pattern with paths in the source domain (percentage).", "type_str": "table" }, "TABREF10": { "content": "
: Jensen-Shannon Distances between pattern probabilities in different domains. Lower distance indicates similarity between the frequency signatures of patterns in a domain pair.
", "num": null, "html": null, "text": "", "type_str": "table" } } } }