{ "paper_id": "W07-0206", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:41:12.598831Z" }, "title": "Transductive Structured Classification through Constrained Min-Cuts", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania Philadelphia PA", "location": {} }, "email": "kuzman@cis.upenn.edu" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania Philadelphia PA", "location": {} }, "email": "pereira@cis.upenn.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We extend the Blum and Chawla (2001) graph min-cut algorithm to structured problems. This extension can alternatively be viewed as a joint inference method over a set of training and test instances where parts of the instances interact through a prespecified associative network. The method has has an efficient approximation through a linear-programming relaxation. On small training data sets, the method achieves up to 34.8% relative error reduction.", "pdf_parse": { "paper_id": "W07-0206", "_pdf_hash": "", "abstract": [ { "text": "We extend the Blum and Chawla (2001) graph min-cut algorithm to structured problems. This extension can alternatively be viewed as a joint inference method over a set of training and test instances where parts of the instances interact through a prespecified associative network. The method has has an efficient approximation through a linear-programming relaxation. On small training data sets, the method achieves up to 34.8% relative error reduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We describe a method for transductive classification in structured problems. Our method extends the Blum and Chawla (2001) algorithm for transductive classification. In that algorithm, each training and test instance is represented by a vertex in a graph. The algorithm finds the min-cut that separates the positively and negatively labeled instances. We give a linear program that implements an approximation of this algorithm and extend it in several ways. First, our formulation can be used in cases where there are more than two labels. Second, we can use the output of a classifier to provide a prior preference of each instance for a particular label. This lets us trade off the strengths of the mincut algorithm against those of a standard classifier. Finally, we extend the algorithm further to deal with structured output spaces, by encoding parts of instances as well as constraints that ensure a consistent labeling of an entire instance.", "cite_spans": [ { "start": 100, "end": 122, "text": "Blum and Chawla (2001)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. Section 2 explains what we mean by transductive classification and by structured problems. Section 3 reviews the Blum and Chawla (2001) algorithm, how we formulate it as a linear program and our proposed extensions. Section 4 relates our proposal to previous work. 
Section 5 describes our experimental results on real and synthetic data and Section 6 concludes the paper.", "cite_spans": [ { "start": 161, "end": 183, "text": "Blum and Chawla (2001)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work we combine two separate approaches to learning: transductive methods, in which classification of test instances arises from optimizing a single objective involving both training and test instances; and structured classification, in which instances involve several interdependent classification problems. The description of structured problems also introduces useful terminology for the rest of the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concepts and Notation", "sec_num": "2" }, { "text": "In supervised classification, training instances are used to induce a classifier that is then applied to individual test instances that need to be classified. In transductive classification, a single optimization problem is set up involving all training and test instances; the solution of the optimization problem yields labels for the test instances. In this way, the test instances provide evidence about the distribution of the data, which may be useful when the labeled data is limited and the distribution of unlabeled data Figure 1 : An example where unlabeled data helps to reveal the underlying distribution of the data points, borrowed from Sindhwani et al. (2005) . The circles represent data points (unlabeled are empty, positive have a \"+\" and negative have a \"-\"). The dashed lines represent decision boundaries for a classifier. The first figure shows the labeled data and the max-margin decision boundary (we use a linear boundary to conform with Occam's razor principle). The second figure shows the unlabeled data points revealing the distribution from which the training examples were selected. This distribution suggests that a linear boundary might not be appropriate for this data. The final figure shows a more appropriate decision boundary given the distribution of the unlabeled data.", "cite_spans": [ { "start": 651, "end": 674, "text": "Sindhwani et al. (2005)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 530, "end": 538, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Transductive Classification", "sec_num": "2.1" }, { "text": "is informative about the location of the decision boundary. Figure 1 illustrates this.", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 68, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Transductive Classification", "sec_num": "2.1" }, { "text": "The usual view of structured classification is as follows. An instance consists of a set of classification problems in which the labels of the different problems are correlated according to a certain graphical structure. The collection of classification labels in the instance forms a single structured label. A typical structured problem is part of speech (POS) tagging. The parts of speech of consecutive words are strongly correlated, while the POS of words that are far away do not influence each other much. In the natural language processing tasks that motivate this work, we usually formalize this observation with a Markov assumption, implemented by breaking up the instance into parts consisting of pairs of consecutive words. 
We assign a score for each possible label of each part and then use a dynamic programming algorithm to find the highest scoring label of the entire instance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Classification", "sec_num": "2.2" }, { "text": "In the rest of this paper, it will be sometimes more convenient to think of all the (labeled and unlabeled) instances of interest as forming a single joint classification problem on a large graph. In this joint problem, the atomic classification problems are linked according to the graphical structure imposed by their partition into structured classification instances. As we will see, other links between atomic problems arise in our setting that may cross between different structured instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Classification", "sec_num": "2.2" }, { "text": "For structured problems, instance refers to an entire problem (for example, an entire sentence for POS tagging). A token refers to the smallest unit that receives a label. In POS tagging, a token is a word. A part is one or more tokens and is a division used by a learning algorithm. For all our experiments, a part is a pair of consecutive tokens, but extension to other types of parts is trivial. If two parts share a token then a consistent label for those parts has to have the same label on the shared token. For example in the sentence \"I love learning .\" we have parts for \"I love\" and \"love learning\". These share the token \"love\" and two labels for the two parts has to agree on the label for the token in order to be consistent. In all our experiments, a part is a pair of consecutive tokens so two parts are independent unless one immediately follows the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Terminology", "sec_num": "2.3" }, { "text": "We extend the min-cut formulation of Blum and Chawla (2001) to multiple labels and structured variables by adapting a linear-programming encoding of metric labeling problems. By relaxing the linear program, we obtain an efficient approximate inference algorithm. To understand our method, it is useful to review the mincut transductive classification algorithm (Section 3.1) as well as the metric labeling problem and its linear programming relaxation (Section 3.2). Section 3.3 describes how to encode a multi-way min-cut problem as an instance of metric labeling as well as a trivial extension that lets us introduce a bias when computing the cut.", "cite_spans": [ { "start": 37, "end": 59, "text": "Blum and Chawla (2001)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Section 3.4 extends this formalism to structured classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Blum and Chawla (2001) present an efficient algorithm for semi-supervised machine learning in the unstructured binary classification setting. 
At a high level, the algorithm is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Min-Cuts for Transductive Classification", "sec_num": "3.1" }, { "text": "\u2022 Construct a graph where each instance corresponds to a vertex;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Min-Cuts for Transductive Classification", "sec_num": "3.1" }, { "text": "\u2022 Add weighted edges between similar vertices with weight proportional to a measure of similarity;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Min-Cuts for Transductive Classification", "sec_num": "3.1" }, { "text": "\u2022 Find the min-cut that separates positively and negatively labeled training instances;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Min-Cuts for Transductive Classification", "sec_num": "3.1" }, { "text": "\u2022 Label all instances on the positive side of the cut as positive and all others as negative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Min-Cuts for Transductive Classification", "sec_num": "3.1" }, { "text": "For our purposes we need to consider two extensions to this problem: multi-way classification and constrained min-cut. For multi-way classification, instead of computing the binary min-cut as above, we need to find the multi-way min-cut. Unfortunately, doing this in general is NP-hard, but a polynomial time approximation exists (Dahlhaus et al., 1992) . In Section 3.3 we describe how we approximate this problem.", "cite_spans": [ { "start": 330, "end": 353, "text": "(Dahlhaus et al., 1992)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Min-Cuts for Transductive Classification", "sec_num": "3.1" }, { "text": "We extend this approach to structured data by constructing a graph whose vertices correspond to different parts of the instance, and add weighted edges between similar parts. We then find the multi-way min-cut that separates vertices with different labels subject to some constraints: if two parts overlap then the labels have to be consistent. Our main contribution is an algorithm that approximately computes this constrained multi-way min-cut with a linear programming relaxation. Kleinberg and Tardos (1999) introduce the metric labeling problem as a common inference problem in a variety of fields. The inputs to the problem are a weighted graph G = (V, E), a set of labels L = {i|i \u2208 1 . . . k}, a cost function c(v, i) which represents the preference of each vertex for each possible label and a metric d(i, j) between labels i and j. The goal is to assign a label to each vertex l : V \u2192 L so as to minimize the cost given by:", "cite_spans": [ { "start": 484, "end": 511, "text": "Kleinberg and Tardos (1999)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Min-Cuts for Transductive Classification", "sec_num": "3.1" }, { "text": "c(l) = v\u2208V c(v, l(v)) + (u,v)\u2208E d(l(u), l(v)) \u2022 w(u, v) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Labeling", "sec_num": "3.2" }, { "text": "(1) Kleinberg and Tardos (1999) give a linear programming approximation for this problem with an approximation factor of two and explain how this can be extended to an O(log k) approximation for arbitrary metrics by creating a hierarchy of labels. Chekuri et al. (2001) present an improved linear program that incorporates arbitrary metrics directly and provides an approximation at least as good as that of Kleinberg and Tardos (1999) . 
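To make the objective in (1) concrete, the short sketch below scores a candidate labeling under given vertex costs, edge weights, and a label metric. It illustrates only the quantity being minimized, not either linear program; the dictionary-based representation (V, E, c, w, d, and the labeling l) is an assumption of the sketch rather than anything prescribed by the paper.

```python
# Illustrative only: evaluate the metric labeling cost of equation (1) for a
# fixed labeling l. Assumed inputs: V, a list of vertices; E, a list of (u, v)
# edges; c[u][i], the cost of giving vertex u label i; w[(u, v)], the edge
# weight; d(i, j), the label metric; l[u], the label assigned to vertex u.
def labeling_cost(V, E, c, w, d, l):
    vertex_term = sum(c[u][l[u]] for u in V)
    edge_term = sum(w[(u, v)] * d(l[u], l[v]) for (u, v) in E)
    return vertex_term + edge_term
```

With the zero/one metric and the seed costs described in Section 3.3 below, this quantity reduces to the number of cut edges, which is what makes the min-cut reduction go through.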
The idea in the new linear program is to have a variable for each edge labeling as well as one for each vertex labeling. Following Chekuri et al. (2001) , we represent the event that vertex u has label i by the variable x(u, i) having the value 1; if x(u, i) = 0 then vertex v must have some other label. Similarly, we use the variable and value x(u, i, v, j) = 1 to mean that the vertices u and v (which are connected by an edge) have label i and j respectively. The edge variables allow us to encode the costs associated with violated edges in the metric labeling problem. Edge variables should agree with vertex labels, and by symmetry we should have x(u, i, v, j) = x(v, j, u, i). If the linear program gives an integer solution, this is clearly the optimal solution to the original metric labeling instance. Chekuri et al. (2001) describe a rounding procedure to compute an integer solution to the LP that is guaranteed to be an approximation of the optimal integer solution. For the problems we considered, this was very rarely necessary. Their linear program relaxation is shown in Figure 2 . The cost function is the sum of the vertex costs and edge costs. The first constraint requires that each vertex have a total of one labeling unit distributed over its labels, that is, we cannot assign more or less than one label per vertex. The second constraint requires that vertex-and edge-label variables are consistent: the label that vertex variables give a vertex should agree with the labels that edge variables give that vertex. The third constraint imposes the edge-variable symmetry condition, and the final constraint requires that all the variables be in the range [0, 1].", "cite_spans": [ { "start": 4, "end": 31, "text": "Kleinberg and Tardos (1999)", "ref_id": "BIBREF6" }, { "start": 248, "end": 269, "text": "Chekuri et al. (2001)", "ref_id": "BIBREF2" }, { "start": 408, "end": 435, "text": "Kleinberg and Tardos (1999)", "ref_id": "BIBREF6" }, { "start": 569, "end": 590, "text": "Chekuri et al. (2001)", "ref_id": "BIBREF2" }, { "start": 1251, "end": 1272, "text": "Chekuri et al. (2001)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 1527, "end": 1535, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Metric Labeling", "sec_num": "3.2" }, { "text": "min X u\u2208V X i\u2208L c(u, i)x(u, i) + X (u,v)\u2208E X k,j\u2208L w(u, v)d(i, j)x(u, i, v, j) subject to X i\u2208L x(u, i) = 1 \u2200u \u2208 V x(u, i) \u2212 X j\u2208L x(u, i, v, j) = 0 \u2200u \u2208 V, v \u2208 N (u), i \u2208 L x(u, i, v, j) \u2212 x(v, j, u, i) = 0 \u2200u, v \u2208 V, i, j \u2208 L x(u, i, v, j), x(u, i) \u2208 [0, 1] \u2200u, v \u2208 V, i, j \u2208 L", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric Labeling", "sec_num": "3.2" }, { "text": "Given an instance of the (multi-way) min-cut problem, we can translate it to an instance of metric labeling as follows. The underlying graph and edge weights will be the same as min-cut problem. We add vertex costs (c(u, i) \u2200u \u2208 V, i \u2208 L) and a label metric", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Min Cut as an Instance of Metric Labeling", "sec_num": "3.3" }, { "text": "(d(i, j) \u2200i, j \u2208 L).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Min Cut as an Instance of Metric Labeling", "sec_num": "3.3" }, { "text": "For all unlabeled vertices set the vertex cost to zero for all labels. For labeled vertices set the cost of the correct label to zero and all other labels to infinity. 
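A minimal sketch of this vertex-cost construction follows; the seeds mapping from labeled vertices to their observed labels is an assumed input, and in practice a large finite constant would stand in for infinity when the costs are handed to an LP solver.

```python
import math

# Sketch of the vertex costs for the min-cut reduction: unlabeled vertices are
# indifferent between labels, while labeled vertices are pinned to their
# observed label by an infinite cost on every other label. `seeds` (a dict
# from labeled vertex to label) is an assumption of this sketch.
def mincut_vertex_costs(V, L, seeds):
    c = {}
    for u in V:
        if u in seeds:
            c[u] = {i: 0.0 if i == seeds[u] else math.inf for i in L}
        else:
            c[u] = {i: 0.0 for i in L}
    return c
```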
Finally, let d(i, j) be zero if i = j and one otherwise. The optimal solution to this instance of metric labeling will be the same as the optimal solution of the initial min-cut instance: the cost of any labeling is the number of edges that link vertices with different labels, which is exactly the number of cut edges. Also by the same argument, every possible labeling will correspond to some cut, and approximations of the metric labeling formulation will be approximations of the original min-cut problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Min Cut as an Instance of Metric Labeling", "sec_num": "3.3" }, { "text": "Since the metric labeling problem allows arbitrary affinities between a vertex in the graph and possible labels for that vertex, we can trivially extend the algorithm by introducing a bias at each vertex for labels more compatible with that vertex. We use the output of a classifier to bias the cost towards agreement with the classifier. Depending on the strength of the bias, we can trade off our confidence in the performance of the min-cut algorithm against our confidence in a fully-supervised classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Min Cut as an Instance of Metric Labeling", "sec_num": "3.3" }, { "text": "To extend this further to structured classification we modify the Chekuri et al. (2001) linear program ( Figure 2 ). In the structured case, we construct a vertex for every part of an instance.", "cite_spans": [ { "start": 66, "end": 87, "text": "Chekuri et al. (2001)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 105, "end": 113, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Extension to Structured Classification", "sec_num": "3.4" }, { "text": "Since we want to find a consistent labeling for an entire instance composed of overlapping parts, we need to add some more constraints to the linear program. We want to ensure that if two vertices correspond to two overlapping parts, then they are assigned consistent labels, that is, the token shared by the two parts is given the same label by both. First we add a new zero-weight edge between every pair of vertices corresponding to overlapping parts. Since its weight is zero, this edge will not affect the cost. We then add a constraint to the linear program that the edge variables for inconsistent labelings of the new edges have a value of zero. More formally, let (u, i, v, j) \u2208 \u039b denote that the part u having label i is consistent with the part v having label j; if u and v do not share any tokens, then any pair of labels for those parts is consistent. Now add zero-weight edges between overlapping parts. Then the only modification to the linear program is that the vertex-edge consistency constraint sums only over consistent labelings, as shown below. What this modification does is to ensure that all the mass of the edge variables between vertices u and v lies in consistent labelings for their edge. The modified linear program is shown in Figure 3 . We can show that this can be encoded as a larger instance of the metric labeling problem (with roughly |V|+|E| more vertices and a label set that is four times as large), but modifying the linear program directly results in a more efficient implementation. The final LP has one variable for each labeling of each edge in the graph, so we have O(|E||L|^2) variables.
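The sketch below shows how the relaxation of Figure 3 might be assembled with an off-the-shelf solver, using PuLP only so that the example is runnable; the graph, costs, weights, metric, and the consistency relation between overlapping parts are all assumed inputs, and vertex and label identifiers are assumed to be short strings.

```python
# A sketch of the constrained relaxation of Figure 3; illustration only, with
# PuLP as an arbitrary LP solver and `consistent` as an assumed helper.
from itertools import product
import pulp

def constrained_metric_labeling(V, E, L, c, w, d, consistent):
    """V: vertices (parts); E: each undirected edge (u, v) listed once;
    L: labels; c[u][i]: vertex cost; w[(u, v)]: edge weight; d(i, j): label
    metric; consistent(u, i, v, j): True iff label i on part u and label j on
    part v agree on any shared token (trivially True when u, v do not overlap)."""
    prob = pulp.LpProblem("constrained_metric_labeling", pulp.LpMinimize)

    # Vertex variables x(u, i) and edge variables x(u, i, v, j), all in [0, 1].
    x = {(u, i): pulp.LpVariable(f"x_{u}_{i}", 0, 1) for u in V for i in L}
    y = {}
    for u, v in E:
        for i, j in product(L, L):
            var = pulp.LpVariable(f"y_{u}_{i}_{v}_{j}", 0, 1)
            y[(u, i, v, j)] = var
            y[(v, j, u, i)] = var  # symmetry enforced by sharing one variable

    # Objective: vertex costs plus weighted metric distances across edges.
    prob += (pulp.lpSum(c[u][i] * x[(u, i)] for u in V for i in L)
             + pulp.lpSum(w[(u, v)] * d(i, j) * y[(u, i, v, j)]
                          for u, v in E for i, j in product(L, L)))

    # Each vertex distributes exactly one unit of label mass.
    for u in V:
        prob += pulp.lpSum(x[(u, i)] for i in L) == 1

    for u, v in E:
        # Edge mass agrees with vertex mass, summing only over consistent
        # labelings -- the one change relative to the unconstrained program.
        for i in L:
            prob += x[(u, i)] == pulp.lpSum(y[(u, i, v, j)]
                                            for j in L if consistent(u, i, v, j))
        for j in L:
            prob += x[(v, j)] == pulp.lpSum(y[(v, j, u, i)]
                                            for i in L if consistent(u, i, v, j))
        # Edge variables for inconsistent labelings are forced to zero.
        for i, j in product(L, L):
            if not consistent(u, i, v, j):
                prob += y[(u, i, v, j)] == 0

    prob.solve()
    # Read off a labeling; in the rare fractional case this acts as rounding.
    return {u: max(L, key=lambda i: x[(u, i)].value()) for u in V}
```

Sharing a single variable for x(u, i, v, j) and x(v, j, u, i) realizes the symmetry condition directly rather than adding it as an extra equality constraint.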
Note that |L| is the number of labelings of a pair of tokens for us -even so, computation of a single dataset took on the order of minutes using the Xpress MP package.", "cite_spans": [], "ref_spans": [ { "start": 1163, "end": 1172, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Extension to Structured Classification", "sec_num": "3.4" }, { "text": "x(u, i) \u2212 j\u2208L x(u, i, v, j) = 0 \u2200u \u2208 V, v \u2208 N (u), i \u2208 L will become x(u, i) \u2212 j:(u,i,v,j)\u2208\u039b x(u, i, v, j) = 0 \u2200u \u2208 V, v \u2208 N (u), i \u2208 L . min X u\u2208V X i\u2208L c(u, i)x(u, i) + X (u,v)\u2208E X k,j\u2208L w(u, v)d(i, j)x(u, i, v, j) subject to X i\u2208L x(u, i) = 1 \u2200u \u2208 V x(u, i) \u2212 X j:(u,i,v,j)\u2208\u039b x(u, i, v, j) = 0 \u2200u \u2208 V, v \u2208 N (u), i \u2208 L x(u, i, v, j) \u2212 x(v, j, u, i) = 0 \u2200 (u, i, v, j) \u2208 \u039b x(u, i, v, j), x(u, i) \u2208 [0, 1] \u2200u, v \u2208 V, i, j \u2208 L", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extension to Structured Classification", "sec_num": "3.4" }, { "text": "Our work is set of extensions to the work of- Blum and Chawla (2001) , which we have already described. Our extensions allow us to handle multi-class and structured data, as well as to take hints from a classifier. We can also specify a similarity metric between labels so that a cut-edge can cost different amounts depending on what partitions it spans. Taskar et al. (2004a) describe a class of Markov networks with associative clique potentials. That is, the clique potentials always prefer that all the nodes in the clique have the same label. The inference problem in these networks is to find the assignment of labels to all nodes in the graph that maximizes the sum of the clique potentials. Their paper describes a linear programming relaxation to find (or approximate) this inference problem which is very similar to the LP formulation of Chekuri et al. (2001) when all cliques are of size 2. They generalize this to larger cliques and prove that their LP gives an integral solution when the label alphabet has size 2 (even for large cliques). For the learning problem they exploit the dual of the LP formulation and use a maximum margin objective similar to the one used by Taskar et al. (2004b) . If we ignore the learning problem and focus on inference, one could view our work as inference over a Markov network created by combining a set of linear chain conditional random fields with an associative Markov network (with arbitrary structure). A direction for future work would be to train the associative Markov network either independently from the chain-structured model or jointly with it. This would be very similar to the joint inference work described in the next paragraph, and could be seen as a particular instantiation of either a non-linear conditional random field (Lafferty et al., 2001) or relational Markov network (Taskar et al., 2002) . Sutton and McCallum (2004) consider the use of linear chain CRFs augmented with extra skip edges which encode a probabilistic belief that the labels of two entities might be correlated. They provide experimental results on named entity recognition for e-mail messages announcing seminars, and their system achieves a 13.7% relative reduction in error on the \"Speaker\" field. Their work differs from ours in that they add skip edges only between identical capitalized words and only within an instance, which for them is an e-mail message. 
In particular, they can never have an edge between labeled and unlabeled parts. Their approach is useful for identification of personal names but less helpful for other named entity tasks where the names may not be capitalized. Lafferty et al. (2004) show a representer theorem allowing the use of Mercer kernels with CRFs. They use a kernel CRF with a graph kernel (Smola and Kondor, 2003) to do semisupervised learning. For them, the graph defines an implicit representation of the data, but inference is still performed only on the (chain) structure of the CRF. By contrast, we perform inference over the whole set of examples at the same time. Altun et al. (2006) extend the use of graphbased regularization to structured variables. Their work is in the framework of maximum margin learning for structured variables where learning is framed as an optimization problem. They modify the objective function by adding a penalty whenever two parts that are expected to have a similar label assign a different score to the same label. They show improvements of up to 5.3% on two real tasks: pitch accent prediction and optical character recognition (OCR). Unfortunately, to solve their optimization problem they have to invert an n\u00d7n matrix, where n is the number of parts in the training and testing data times the number of possible labels for each part. Because of this they are forced to train on an unrealistically small amount of data (4-40 utterances for pitch accent prediction and 10 words for OCR).", "cite_spans": [ { "start": 46, "end": 68, "text": "Blum and Chawla (2001)", "ref_id": "BIBREF1" }, { "start": 355, "end": 376, "text": "Taskar et al. (2004a)", "ref_id": "BIBREF18" }, { "start": 848, "end": 869, "text": "Chekuri et al. (2001)", "ref_id": "BIBREF2" }, { "start": 1184, "end": 1205, "text": "Taskar et al. (2004b)", "ref_id": "BIBREF19" }, { "start": 1791, "end": 1814, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF7" }, { "start": 1844, "end": 1865, "text": "(Taskar et al., 2002)", "ref_id": "BIBREF17" }, { "start": 1868, "end": 1894, "text": "Sutton and McCallum (2004)", "ref_id": "BIBREF16" }, { "start": 2635, "end": 2657, "text": "Lafferty et al. (2004)", "ref_id": "BIBREF8" }, { "start": 2773, "end": 2797, "text": "(Smola and Kondor, 2003)", "ref_id": "BIBREF15" }, { "start": 3055, "end": 3074, "text": "Altun et al. (2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Relation to Previous work", "sec_num": "4" }, { "text": "We performed experiments using our approach on three different datasets using a conditional random field as the base classifier. Unless otherwise noted this was regularized using a zeromean Gaussian prior with a variance of 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The first dataset is the pitch-accent prediction dataset used in semi-supervised learning by Altun et al. (2006) . There are 31 real and binary features (all are encoded as real values) and only two labels. Instances correspond to an utterance and each token corresponds to a word. Altun et al. (2006) perform experiments on 4 and 40 training instances using at most 200 unlabeled instances.", "cite_spans": [ { "start": 93, "end": 112, "text": "Altun et al. (2006)", "ref_id": "BIBREF0" }, { "start": 282, "end": 301, "text": "Altun et al. 
(2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The second dataset is the reference part of the Cora information extraction dataset. 1 This consists of 500 computer science research paper citations. Each token in a citation is labeled as being part of the name of an author, part of the title, part of the date or one of several other labels that we combined into a single category (\"other\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The third dataset is the chunking dataset from the CoNLL 2000 (Sang and Buchholz, 2000) shared task restricted to noun phrases. The task for this dataset is, given the words in a sentence as well as automatically assigned parts of speech for these words, label each word with B-NP if it is the first word in a base noun phrase, I-NP if it is part of a base noun phrase but not the first word and O if it is not part of a noun phrase.", "cite_spans": [ { "start": 72, "end": 87, "text": "Buchholz, 2000)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "For all experiments, we let each word be a token and consider parts consisting of two consecutive tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "For the pitch accent prediction dataset, we used the 5-nearest neighbors of each instance according to the Euclidean distance in the original feature space to construct the graph for min-cut. Table 1 shows the results of our experiments on this data, as well as the results reported by Altun et al. (2006) . The numbers in the table are per-token accuracy and each entry is the mean of 10 random train-test data selections.", "cite_spans": [ { "start": 286, "end": 305, "text": "Altun et al. (2006)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 192, "end": 199, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Pitch Accent Prediction", "sec_num": "5.1" }, { "text": "For this problem, our method improves performance over the base CRF classifier (except when the training data consists of only 4 utterances), but we do not see improvements as dramatic as those observed by Altun et al. (2006) . Note that even the larger dataset here is quite small -40 utterances where each token has been annotated with a binary value.", "cite_spans": [ { "start": 206, "end": 225, "text": "Altun et al. (2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Pitch Accent Prediction", "sec_num": "5.1" }, { "text": "For the Cora information extraction dataset, we used the first 100 principal components of the feature space to find 5 nearest neighbors of each part. This approximation is due to the cost of comuting nearest neighbors in high dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cora-IE", "sec_num": "5.2" }, { "text": "In these experiments we trained on 40 instances obtained the dataset from http://www.cs.umass.edu/ mccallum/data/cora-ie.tar.gz. Table 2 : Accuracy on the Cora-IE dataset as a percentage of tokens correctly classified at different settings for the CRF variance. Results for training on 40 instances and testing on 80. In all cases the scores are the mean of 10 random selections of 120 instances from the set of 500 available. and used 80 as testing data. 
In all cases we randomly selected training and testing instances 10 times from the total set of 500. Table 2 shows the average accuracies for the 10 repetitions, with different values for the variance of the Gaussian prior used to regularize the CRF. If we choose the optimal value for each method, our approach gives a 34.8% relative reduction in error over the CRF, and improves over it in each of the 10 random data selections, and all settings of the Guassian prior variance.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 136, "text": "Table 2", "ref_id": null }, { "start": 557, "end": 564, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Cora-IE", "sec_num": "5.2" }, { "text": "Our results are worst for the CoNLL NP-Chunking dataset. As above, we used 10 random selections of training and test sets, and used the 100 principal components of the feature space to find 5 nearest neighbors of each part. Table 3 shows the results of our experiments. The numbers in the table are per-token Table 3 : Results on the NP-chunking task. The table compares a CRF with our method using a CRF as a base classifier. The experiments use 20 labeled and 40 unlabeled and 40 labeled and 80 unlabeled instances.", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 231, "text": "Table 3", "ref_id": null }, { "start": 309, "end": 316, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "CoNLL NP-Chunking", "sec_num": "5.3" }, { "text": "accuracy as before. When the amount of training data is very small (20 instances) we improve slightly over the base CRF classifier, but with an increased amount of training data, the small improvement is replaced with a small loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CoNLL NP-Chunking", "sec_num": "5.3" }, { "text": "We have presented a new transductive algorithm for structured classification, which achieves error reductions on some real-world problems. Unfortunately, those gains are not always realized, and sometimes our approach leads to an increase in error. The main reason that our approach does not always work seems to be that our measure of similarity between different parts is very coarse. In general, finding all the pairs of parts have the same label is as difficult as finding the correct labeling of all instances, but it might be possible to use unlabeled data to learn the similarity measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The Cora IE dataset has been used inSeymore et al. (1999),Peng and McCallum (2004),McCallum et al. (2000) andHan et al. (2003), among others. We", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Maximum margin semi-supervised learning for structured variables", "authors": [ { "first": "Yasemin", "middle": [], "last": "Altun", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcallester", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Belkin", "suffix": "" } ], "year": 2006, "venue": "Advances in Neural Information Processing Systems", "volume": "18", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yasemin Altun, David McAllester, and Mikhail Belkin. 2006. Maximum margin semi-supervised learning for structured variables. In Y. Weiss, B. Sch\u00f6lkopf, and J. 
Platt, editors, Advances in Neural Information Processing Systems 18, pages 33-40. MIT Press, Cambridge, MA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning from labeled and unlabeled data using graph mincuts", "authors": [ { "first": "Avrim", "middle": [], "last": "Blum", "suffix": "" }, { "first": "Shuchi", "middle": [], "last": "Chawla", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 18th International Conf. on Machine Learning", "volume": "", "issue": "", "pages": "19--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Avrim Blum and Shuchi Chawla. 2001. Learn- ing from labeled and unlabeled data using graph mincuts. In Proceedings of the 18th International Conf. on Machine Learning, pages 19-26. Morgan Kaufmann, San Francisco, CA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Approximation algorithms for the metric labeling problem via a new linear programming formulation", "authors": [ { "first": "Chandra", "middle": [], "last": "Chekuri", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khanna", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Naor", "suffix": "" }, { "first": "Leonid", "middle": [], "last": "Zosin", "suffix": "" } ], "year": 2001, "venue": "Symposium on Discrete Algorithms", "volume": "", "issue": "", "pages": "109--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chandra Chekuri, Sanjeev Khanna, Joseph Naor, and Leonid Zosin. 2001. Approximation algo- rithms for the metric labeling problem via a new linear programming formulation. In Symposium on Discrete Algorithms, pages 109-118.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The complexity of multiway cuts", "authors": [ { "first": "E", "middle": [], "last": "Dahlhaus", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Johnson", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Papadimitriou", "suffix": "" }, { "first": "P", "middle": [ "D" ], "last": "Seymour", "suffix": "" }, { "first": "M", "middle": [], "last": "Yannakakis", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Dahlhaus, D. S. Johnson, C. H. Papadimitriou, P. D. Seymour, and M. Yannakakis. 1992. The complexity of multiway cuts (extended abstract).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Proceedings of the twenty-fourth annual ACM symposium on Theory of computing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "241--251", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the twenty-fourth annual ACM symposium on Theory of computing, pages 241- 251, New York, NY, USA. ACM Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic document metadata extraction using support vector machines", "authors": [ { "first": "H", "middle": [], "last": "Han", "suffix": "" }, { "first": "C", "middle": [], "last": "Giles", "suffix": "" }, { "first": "E", "middle": [], "last": "Manavoglu", "suffix": "" }, { "first": "H", "middle": [], "last": "Zha", "suffix": "" }, { "first": "Z", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "E", "middle": [], "last": "Fox", "suffix": "" } ], "year": 2003, "venue": "Joint Conference on Digital Libraries", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Han, C. Giles, E. Manavoglu, H. Zha, Z. Zhang, and E. Fox. 2003. 
Automatic document meta- data extraction using support vector machines. In Joint Conference on Digital Libraries.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Approximation algorithms for classification problems with pairwise relationships: Metric labeling and markov random fields", "authors": [ { "first": "Jon", "middle": [], "last": "Kleinberg", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Tardos", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 40th Annual Symposium on Foundations of Computer Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jon Kleinberg and Eva Tardos. 1999. Approx- imation algorithms for classification problems with pairwise relationships: Metric labeling and markov random fields. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science, page 14, Washington, DC, USA. IEEE Computer Society.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 10th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 10th Inter- national Conference on Machine Learning, pages 282-289, San Francisco, CA, USA. Morgan Kauf- mann Publishers Inc.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Kernel conditional random fields: representation and clique selection", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the twentyfirst international conference on Machine learning", "volume": "64", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lafferty, Xiaojin Zhu, and Yan Liu. 2004. Kernel conditional random fields: representation and clique selection. In Proceedings of the twenty- first international conference on Machine learn- ing, page 64, New York, NY, USA. ACM Press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automating the construction of internet portals with machine learning", "authors": [ { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "K", "middle": [], "last": "Nigam", "suffix": "" }, { "first": "J", "middle": [], "last": "Rennie", "suffix": "" }, { "first": "K", "middle": [], "last": "Seymore", "suffix": "" } ], "year": 2000, "venue": "Information Retrieval", "volume": "3", "issue": "", "pages": "127--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. McCallum, K. Nigam, J. Rennie, and K. Sey- more. 2000. Automating the construction of in- ternet portals with machine learning. 
Information Retrieval, 3:127-163.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Accurate information extraction from research papers using conditional random fields", "authors": [], "year": null, "venue": "Main Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "329--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Accurate information extraction from research papers using conditional random fields. In Daniel Marcu Susan Dumais and Salim Roukos, editors, Main Proceedings of HLT-NAACL, pages 329-336, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Introduction to the CoNLL-2000 shared task: Chunking", "authors": [ { "first": "Erik", "middle": [ "Tjong" ], "last": "", "suffix": "" }, { "first": "Kim", "middle": [], "last": "Sang", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Buchholz", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Fourth Conference on Computational Natural Language Learning and of the Second Learning Language in Logic Workshop. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proceedings of the Fourth Confer- ence on Computational Natural Language Learn- ing and of the Second Learning Language in Logic Workshop. Association for Computational Lin- guistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning hidden markov model structure for information extraction", "authors": [ { "first": "K", "middle": [], "last": "Seymore", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "R", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1999, "venue": "AAAI'99 Workshop on Machine Learning for Information Extraction", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Seymore, A. McCallum, and R. Rosenfeld. 1999. Learning hidden markov model structure for in- formation extraction. In AAAI'99 Workshop on Machine Learning for Information Extraction.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Beyond the point cloud: from transductive to semi-supervised learning", "authors": [ { "first": "Vikas", "middle": [], "last": "Sindhwani", "suffix": "" }, { "first": "Partha", "middle": [], "last": "Niyogi", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Belkin", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 22nd International Conference on Machine Learning", "volume": "", "issue": "", "pages": "824--831", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. 2005. Beyond the point cloud: from transductive to semi-supervised learning. In Proceedings of the 22nd International Conference on Machine Learn- ing, pages 824-831.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Kernels and regularization on graphs", "authors": [ { "first": "Alexander", "middle": [], "last": "Smola", "suffix": "" }, { "first": "Risi", "middle": [], "last": "Kondor", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Sixteenth Annual Conference on Learning Theory and Kernels Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Smola and Risi Kondor. 2003. 
Kernels and regularization on graphs. In M. Warmuth and B. Scholkopf, editors, Proceedings of the Sixteenth Annual Conference on Learning Theory and Ker- nels Workshop.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Collective segmentation and labeling of distant entities in information extraction", "authors": [ { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "Presented at ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sutton and Andrew McCallum. 2004. Col- lective segmentation and labeling of distant enti- ties in information extraction. Technical Report TR # 04-49, University of Massachusetts, July. Presented at ICML Workshop on Statistical Re- lational Learning and Its Connections to Other Fields.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Discriminative probabilistic models for relational data", "authors": [ { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Abbeel", "middle": [], "last": "Pieter", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI-02)", "volume": "", "issue": "", "pages": "485--492", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Taskar, Abbeel Pieter, and Daphne Koller. 2002. Discriminative probabilistic models for re- lational data. In Proceedings of the 18th An- nual Conference on Uncertainty in Artificial Intel- ligence (UAI-02), pages 485-492, San Francisco, CA. Morgan Kaufmann Publishers.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning associative markov networks", "authors": [ { "first": "B", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "V", "middle": [], "last": "Chatalbashev", "suffix": "" }, { "first": "D", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Twenty-First International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Taskar, V. Chatalbashev, and D. Koller. 2004a. Learning associative markov networks. In Pro- ceedings of the Twenty-First International Con- ference on Machine Learning (ICML).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Max-margin markov networks", "authors": [ { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2004, "venue": "Advances in Neural Information Processing Systems 16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Taskar, Carlos Guestrin, and Daphne Koller. 2004b. Max-margin markov networks. In Se- bastian Thrun, Lawrence Saul, and Bernhard Sch\u00f6lkopf, editors, Advances in Neural Informa- tion Processing Systems 16. 
MIT Press, Cambridge, MA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Large margin methods for structured and interdependent output variables", "authors": [ { "first": "Ioannis", "middle": [], "last": "Tsochantaridis", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" }, { "first": "Yasemin", "middle": [], "last": "Altun", "suffix": "" } ], "year": 2005, "venue": "JMLR", "volume": "6", "issue": "", "pages": "1453--1484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. 2005. Large margin methods for structured and interdependent output variables. JMLR, 6:1453-1484.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "The Chekuri et al. (2001) linear program used to approximate metric labeling. See text for discussion.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "The modified linear program used to approximate metric labeling. See text for discussion.", "num": null, "type_str": "figure" } } } }