Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict)
0706.2434
2143252188
In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.
In @cite_8 , Ilow and Hatzinakos model the interference as a shot-noise process and show that it is a symmetric @math -stable process @cite_7 when the nodes are Poisson distributed on the plane. They also show that channel randomness affects the dispersion of the distribution, while the path-loss exponent determines the characteristic exponent of the process. The throughput and outage in the presence of interference are analyzed in @cite_14 @cite_22 @cite_9 . In @cite_14 , the shot-noise process is analyzed using stochastic geometry when the nodes are Poisson distributed and the fading is Rayleigh. In @cite_23 , upper and lower bounds are obtained under general fading and a Poisson arrangement of nodes.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_23" ], "mid": [ "2086605530", "2143252188", "2740650368", "2022637917", "2120852651", "2114207517" ], "abstract": [ "We define and analyze a random coverage process of the @math -dimensional Euclidian space which allows one to describe a continuous spectrum that ranges from the Boolean model to the Poisson-Voronoi tessellation to the Johnson-Mehl model. Like for the Boolean model, the minimal stochastic setting consists of a Poisson point process on this Euclidian space and a sequence of real valued random variables considered as marks of this point process. In this coverage process, the cell attached to a point is defined as the region of the space where the effect of the mark of this point exceeds an affine function of the cumulated effect of all marks. This cumulated effect is defined as the shot noise process associated with the marked point process. In addition to analyzing and visualizing this continuum, we study various basic properties of the coverage process such as the probability that a point or a pair of points be covered by a typical cell. We also determine the distribution of the number of cells which cover a given point, and show how to provide deterministic bounds on this number. Finally, we also analyze convergence properties of the coverage process using the framework of closed sets, and its differentiability properties using perturbation analysis. Our results require a pathwise continuity property for the shot noise process for which we provide sufficient conditions. The model in question stems from wireless communications where several antennas share the same (or different but interfering) channel(s). In this case, the area where the signal of a given antenna can be received is the area where the signal to interference ratio is large enough. We describe this class of problems in detail in the paper. 
The obtained results allow one to compute quantities of practical interest within this setting: for instance the outage probability is obtained as the complement of the volume fraction; the law of the number of cells covering a point allows one to characterize handover strategies etc.", "In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.", "We consider a single cell wireless uplink in which randomly arriving devices transmit their payload to a receiver. Given SNR per user, payload size per device, a fixed latency constraint T, total available bandwidth W, i.e., total symbol resources is given by N = TW. The total bandwidth W is evenly partitioned into B bins. 
Each time slot of duration T is split into a maximum number of retransmission attempts M. Hence, the N resources are partitioned into N MB resources each bin per retransmission. We characterize the maximum average rate or number of Poisson arrivals that can successfully complete the random access procedure such that the probability of outage is sufficiently small. We analyze the proposed setting for i) noise-limited regime and ii) interference-limited regime. We show that in the noise-limited regime the devices share the resources, and in the interference-limited regime, the resources split such that devices do not experience any interference. We then incorporate Rayleigh fading to model the channel power gain distribution. Although the variability of the channel causes a drop in the number of arrivals that can successfully complete the random access phase, similar scaling results extend to the Rayleigh fading case.", "In mobile networks, distance variations caused by node mobility generate fluctuations in the channel gains. Such fluctuations can be treated as another type of fading besides multipath effects. In this paper, the interference statistics in mobile random networks are characterized by incorporating the distance variations of mobile nodes to the channel gain fluctuations. The mean interference is calculated at the origin and at the border of a finite mobile network. The network performance is evaluated in terms of the outage probability. Compared to a static network, the interference in a single snapshot does not change under uniform mobility models. However, random waypoint mobility increases (decreases) the interference at the origin (at the border). Furthermore, due to the correlation of the node locations, the interference and outage are temporally and spatially correlated. 
We quantify the temporal correlation of the interference and outage in mobile Poisson networks in terms of the correlation coefficient and conditional outage probability, respectively. The results show that it is essential that routing, MAC, and retransmission schemes need to be smart (i.e., correlation-aware) to avoid bursts of transmission failures.", "This paper addresses non-Gaussian statistical modeling of interference as a superposition of a large number of small effects from terminals scatterers distributed in the plane volume according to a Poisson point process. This problem is relevant to multiple access communication systems without power control and radar. Assuming that the signal strength is attenuated over distance r as 1 r m, we show that the interference clutter could be modeled as a spherically symmetric spl alpha -stable noise. A novel approach to stable noise modeling is introduced based on the LePage series representation. This establishes grounds to investigate practical constraints in the system model adopted, such as the finite number of interferers and nonhomogeneous Poisson fields of interferers. In addition, the formulas derived allow us to predict noise statistics in environments with lognormal shadowing and Rayleigh fading. The results obtained are useful for the prediction of noise statistics in a wide range of environments with deterministic and stochastic power propagation laws. Computer simulations are provided to demonstrate the efficiency of the spl alpha -stable noise model in multiuser communication systems. The analysis presented will be important in the performance evaluation of complex communication systems and in the design of efficient interference suppression techniques.", "In cellular network models, the base stations are usually assumed to form a lattice or a Poisson point process (PPP). In reality, however, they are deployed neither fully regularly nor completely randomly. 
Accordingly, in this paper, we consider the very general class of motion-invariant models and analyze the behavior of the outage probability (the probability that the signal-to-interference-plus-noise-ratio (SINR) is smaller than a threshold) as the threshold goes to zero. We show that, remarkably, the slope of the outage probability (in dB) as a function of the threshold (also in dB) is the same for essentially all motion-invariant point processes. The slope merely depends on the fading statistics. Using this result, we introduce the notion of the asymptotic deployment gain (ADG), which characterizes the horizontal gap between the success probabilities of the PPP and another point process in the high-reliability regime (where the success probability is near 1). To demonstrate the usefulness of the ADG for the characterization of the SINR distribution, we investigate the outage probabilities and the ADGs for different point processes and fading statistics by simulations." ] }
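The shot-noise view of interference in the related work above can be checked numerically: for a planar Poisson field with path-loss exponent @math alpha, the @math -stable model predicts a heavy interference tail with index 2/alpha. Below is a minimal Monte Carlo sketch under assumed illustrative parameters (unit density, a disk of radius 10, alpha = 4, Rayleigh fading); it estimates the tail index from two empirical quantiles, which should land near the predicted 2/alpha = 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, alpha, R, trials = 1.0, 4.0, 10.0, 20000
delta = 2.0 / alpha            # predicted stable tail index of the interference

I = np.empty(trials)
for t in range(trials):
    n = rng.poisson(lam * np.pi * R**2)      # number of interferers in the disk
    r = R * np.sqrt(rng.random(n))           # uniformly placed radii
    h = rng.exponential(1.0, n)              # Rayleigh fading -> Exp(1) power gains
    I[t] = np.sum(h * r**(-alpha))           # shot-noise aggregate interference

# Fit the tail exponent from two empirical quantiles: P(I > t) ~ c * t^(-delta)
q1, q2 = np.quantile(I, [0.95, 0.995])
delta_hat = np.log(0.05 / 0.005) / np.log(q2 / q1)
print(delta_hat)   # should be near delta = 0.5
```

The fit uses only the upper tail because the stable approximation holds asymptotically; the bulk of the distribution is dominated by the many far interferers.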
0706.2434
2143252188
In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.
Even in the case of the PPP, the interference distribution is not known for all fading distributions and channel attenuation models. In most cases, only the characteristic function or the Laplace transform of the interference can be obtained. The Laplace transform can be used to evaluate outage probabilities under Rayleigh fading @cite_14 @cite_21 . In the analysis of outage probability, the conditional Laplace transform is required, i.e., the Laplace transform given that there is a point of the process located at the origin. For the PPP, the conditional Laplace transform is equal to the unconditional one. To the best of our knowledge, there is no prior literature on the characterization of interference in a clustered network.
{ "cite_N": [ "@cite_14", "@cite_21" ], "mid": [ "2114207517", "2042164227" ], "abstract": [ "In cellular network models, the base stations are usually assumed to form a lattice or a Poisson point process (PPP). In reality, however, they are deployed neither fully regularly nor completely randomly. Accordingly, in this paper, we consider the very general class of motion-invariant models and analyze the behavior of the outage probability (the probability that the signal-to-interference-plus-noise-ratio (SINR) is smaller than a threshold) as the threshold goes to zero. We show that, remarkably, the slope of the outage probability (in dB) as a function of the threshold (also in dB) is the same for essentially all motion-invariant point processes. The slope merely depends on the fading statistics. Using this result, we introduce the notion of the asymptotic deployment gain (ADG), which characterizes the horizontal gap between the success probabilities of the PPP and another point process in the high-reliability regime (where the success probability is near 1). To demonstrate the usefulness of the ADG for the characterization of the SINR distribution, we investigate the outage probabilities and the ADGs for different point processes and fading statistics by simulations.", "This paper deals with the distribution of cumulated instantaneous interference power in a Rayleigh fading channel for an infinite number of interfering stations, where each station transmits with a certain probability, independently of all others. If all distances are known, a necessary and sufficient condition is given for the corresponding distribution to be nondefective. Explicit formulae of density and distribution functions are obtained in the interesting special case that interfering stations are located on a linear grid. Moreover, the Laplace transform of cumulated power is investigated when the positions of stations follow a one- or two-dimensional Poisson process. 
It turns out that the corresponding distribution is defective for the two-dimensional models." ] }
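The Laplace-transform route to outage mentioned above has a well-known closed form for the PPP with Rayleigh fading and power-law path loss: P(success) = exp(-lam * pi * R^2 * theta^(2/alpha) * Gamma(1+2/alpha) * Gamma(1-2/alpha)). The sketch below, under assumed illustrative parameters and a finite-disk truncation of the interferer field, compares this closed form with a Monte Carlo estimate.

```python
import numpy as np
from math import gamma, pi, exp

rng = np.random.default_rng(1)
lam, alpha, Rlink, theta = 0.1, 4.0, 1.0, 0.5   # illustrative values
delta = 2.0 / alpha

# Closed form for PPP interferers, Rayleigh fading, no noise:
# P(SIR > theta) = L_I(theta * Rlink^alpha)
p_closed = exp(-lam * pi * Rlink**2 * theta**delta
               * gamma(1 + delta) * gamma(1 - delta))

# Monte Carlo check: interferers in a large disk (truncation assumption)
Rmax, trials, succ = 30.0, 20000, 0
for _ in range(trials):
    n = rng.poisson(lam * pi * Rmax**2)
    r = Rmax * np.sqrt(rng.random(n))
    I = np.sum(rng.exponential(1.0, n) * r**(-alpha))   # aggregate interference
    S = rng.exponential(1.0) * Rlink**(-alpha)          # desired signal power
    succ += (S > theta * I)
print(p_closed, succ / trials)   # the two estimates should agree closely
```

The disk radius 30 is large enough that interferers beyond it contribute negligibly under r^(-4) path loss.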
0706.2434
2143252188
In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.
@cite_16 introduces the notion of transmission capacity, a measure of the area spectral efficiency of the successful transmissions resulting from the optimal contention density as a function of the link distance. Transmission capacity is defined as the product of the maximum density of successful transmissions and their data rate, given an outage constraint. Bounds on the transmission capacity under different fading models are also provided there, when the node locations are Poisson distributed.
{ "cite_N": [ "@cite_16" ], "mid": [ "2963847582" ], "abstract": [ "Transmission capacity (TC) is a performance metric for wireless networks that measures the spatial intensity of successful transmissions per unit area, subject to a constraint on the permissible outage probability (where outage occurs when the signal to interference plus noise ratio (SINR) at a receiver is below a threshold). This volume gives a unified treatment of the TC framework that has been developed by the authors and their collaborators over the past decade. The mathematical framework underlying the analysis (reviewed in Section 2) is stochastic geometry: Poisson point processes model the locations of interferers, and (stable) shot noise processes represent the aggregate interference seen at a receiver. Section 3 presents TC results (exact, asymptotic, and bounds) on a simple model in order to illustrate a key strength of the framework: analytical tractability yields explicit performance dependence upon key model parameters. Section 4 presents enhancements to this basic model — channel fading, variable link distances (VLD), and multihop. Section 5 presents four network design case studies well-suited to TC: (i) spectrum management, (ii) interference cancellation, (iii) signal threshold transmission scheduling, and (iv) power control. Section 6 studies the TC when nodes have multiple antennas, which provides a contrast vs. classical results that ignore interference." ] }
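Under the PPP/Rayleigh closed form for the success probability, the transmission capacity defined above can be computed directly: invert the outage expression for the density achieving outage exactly eps, then scale by the fraction (1 - eps) of successful transmissions. A minimal sketch, with hypothetical parameter values:

```python
from math import gamma, log, pi

def transmission_capacity(eps, theta, Rlink, alpha):
    """Density of concurrent transmitters meeting outage constraint eps,
    scaled by the success fraction (1 - eps), for the PPP + Rayleigh
    closed form P(outage) = 1 - exp(-lam * kappa)."""
    delta = 2.0 / alpha
    kappa = pi * Rlink**2 * theta**delta * gamma(1 + delta) * gamma(1 - delta)
    lam_eps = -log(1.0 - eps) / kappa      # density with outage exactly eps
    return lam_eps * (1.0 - eps)

tc = transmission_capacity(0.1, 0.5, 1.0, 4.0)
print(tc)
```

Per the definition in the cited work, multiplying by a per-link data rate converts this spatial density into area spectral efficiency.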
0707.0648
2950368606
The k-forest problem is a common generalization of both the k-MST and the dense- @math -subgraph problems. Formally, given a metric space on @math vertices @math , with @math demand pairs @math and a "target" @math , the goal is to find a minimum cost subgraph that connects at least @math demand pairs. In this paper, we give an @math -approximation algorithm for @math -forest, improving on the previous best ratio of @math by Segev & Segev. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an @math point metric space with @math objects each with its own source and destination, and a vehicle capable of carrying at most @math objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an @math -approximation algorithm for the @math -forest problem implies an @math -approximation algorithm for Dial-a-Ride. Using our results for @math -forest, we get an @math -approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an @math -approximation by Charikar & Raghavachari; our results give a different proof of a similar approximation guarantee--in fact, when the vehicle capacity @math is large, we give a slight improvement on their results.
The @math -forest problem: The @math -forest problem is relatively new: it was defined by Hajiaghayi & Jain @cite_20 . An @math -approximation algorithm for even the directed @math -forest problem can be inferred from @cite_19 . Recently, Segev & Segev @cite_10 gave an @math -approximation algorithm for @math -forest.
{ "cite_N": [ "@cite_19", "@cite_10", "@cite_20" ], "mid": [ "2173641041", "1971222832", "1500932665" ], "abstract": [ "An instance of the k-Steiner forest problem consists of an undirected graph G = (V,E), the edges of which are associated with non-negative costs, and a collection D = (si,ti): 1 ≤ i ≤ d of distinct pairs of vertices, interchangeably referred to as demands. We say that a forest F ⊆ G connects a demand (si, ti) when it contains an si-ti path. Given a requirement parameter K ≤ |D|, the goal is to find a minimum cost forest that connects at least k demands in D. This problem has recently been studied by Hajiaghayi and Jain [SODA'06], whose main contribution in this context was to relate the inapproximability of k-Steiner forest to that of the dense k-subgraph problem. However, Hajiaghayi and Jain did not provide any algorithmic result for the respective settings, and posed this objective as an important direction for future research. In this paper, we present the first non-trivial approximation algorithm for the k-Steiner forest problem, which is based on a novel extension of the Lagrangian relaxation technique. Specifically, our algorithm constructs a feasible forest whose cost is within a factor of O(min n2 3, √d ċ log d) of optimal, where n is the number of vertices in the input graph and d is the number of demands.", "In this paper we study the prize-collecting version of the Generalized Steiner Tree problem. To the best of our knowledge, there is no general combinatorial technique in approximation algorithms developed to study the prize-collecting versions of various problems. These problems are studied on a case by case basis by [5] by applying an LP-rounding technique which is not a combinatorial approach. The main contribution of this paper is to introduce a general combinatorial approach towards solving these problems through novel primal-dual schema (without any need to solve an LP). 
We fuse the primal-dual schema with Farkas lemma to obtain a combinatorial 3-approximation algorithm for the Prize-Collecting Generalized Steiner Tree problem. Our work also inspires a combinatorial algorithm [19] for solving a special case of Kelly's problem [22] of pricing edges.We also consider the k-forest problem, a generalization of k-MST and k-Steiner tree, and we show that in spite of these problems for which there are constant factor approximation algorithms, the k-forest problem is much harder to approximate. In particular, obtaining an approximation factor better than O(n1 6-e) for k-forest requires substantially new ideas including improving the approximation factor O(n1 3-e) for the notorious densest k-subgraph problem. We note that k-forest and prize-collecting version of Generalized Steiner Tree are closely related to each other, since the latter is the Lagrangian relaxation of the former.", "The k-forest problem is a common generalization of both the k-MST and the dense-k-subgraph problems. Formally, given a metric space on n vertices V, with m demand pairs ⊆ V × V and a \"target\" k ≤ m, the goal is to find a minimum cost subgraph that connects at least k demand pairs. In this paper, we give an O(min √n,√k )- approximation algorithm for k-forest, improving on the previous best ratio of O(min n2 3,√m log n) by Segev and Segev [20]. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an n point metric space with m objects each with its own source and destination, and a vehicle capable of carrying at most k objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an a-approximation algorithm for the k-forest problem implies an O(αċlog2 n)-approximation algorithm for Dial-a-Ride. 
Using our results for k-forest, we get an O(min √n,√k ċlog2 n)-approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an O(√k log n)-approximation by Charikar and Raghavachari [5]; our results give a different proof of a similar approximation guarantee-- in fact, when the vehicle capacity k is large, we give a slight improvement on their results. The reduction from Dial-a-Ride to the k-forest problem is fairly robust, and allows us to obtain approximation algorithms (with the same guarantee) for the following generalizations: (i) Non-uniform Dial-a-Ride, where the cost of traversing each edge is an arbitrary nondecreasing function of the number of objects in the vehicle; and (ii) Weighted Diala-Ride, where demands are allowed to have different weights. The reduction is essential, as it is unclear how to extend the techniques of Charikar and Raghavachari to these Dial-a-Ride generalizations." ] }
0707.0648
2950368606
The k-forest problem is a common generalization of both the k-MST and the dense- @math -subgraph problems. Formally, given a metric space on @math vertices @math , with @math demand pairs @math and a "target" @math , the goal is to find a minimum cost subgraph that connects at least @math demand pairs. In this paper, we give an @math -approximation algorithm for @math -forest, improving on the previous best ratio of @math by Segev & Segev. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an @math point metric space with @math objects each with its own source and destination, and a vehicle capable of carrying at most @math objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an @math -approximation algorithm for the @math -forest problem implies an @math -approximation algorithm for Dial-a-Ride. Using our results for @math -forest, we get an @math -approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an @math -approximation by Charikar & Raghavachari; our results give a different proof of a similar approximation guarantee--in fact, when the vehicle capacity @math is large, we give a slight improvement on their results.
Dense @math -subgraph: The @math -forest problem is a generalization of the dense @math -subgraph problem @cite_8 , as shown in @cite_20 . The best known approximation guarantee for dense @math -subgraph is @math , where @math is some constant, due to @cite_8 , and obtaining an improved guarantee has been a long-standing open problem. Strictly speaking, @cite_8 study a potentially harder problem: the maximization version, where one wants to pick @math vertices to maximize the number of edges in the induced graph. However, nothing better is known even for the minimization version (where one wants to pick the minimum number of vertices that induce @math edges), which is a special case of @math -forest. The @math -forest problem is also a generalization of @math -MST, for which a 2-approximation is known (Garg @cite_14 ).
{ "cite_N": [ "@cite_14", "@cite_20", "@cite_8" ], "mid": [ "1500932665", "2266714125", "2161190897" ], "abstract": [ "The k-forest problem is a common generalization of both the k-MST and the dense-k-subgraph problems. Formally, given a metric space on n vertices V, with m demand pairs ⊆ V × V and a \"target\" k ≤ m, the goal is to find a minimum cost subgraph that connects at least k demand pairs. In this paper, we give an O(min √n,√k )- approximation algorithm for k-forest, improving on the previous best ratio of O(min n2 3,√m log n) by Segev and Segev [20]. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an n point metric space with m objects each with its own source and destination, and a vehicle capable of carrying at most k objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an a-approximation algorithm for the k-forest problem implies an O(αċlog2 n)-approximation algorithm for Dial-a-Ride. Using our results for k-forest, we get an O(min √n,√k ċlog2 n)-approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an O(√k log n)-approximation by Charikar and Raghavachari [5]; our results give a different proof of a similar approximation guarantee-- in fact, when the vehicle capacity k is large, we give a slight improvement on their results. The reduction from Dial-a-Ride to the k-forest problem is fairly robust, and allows us to obtain approximation algorithms (with the same guarantee) for the following generalizations: (i) Non-uniform Dial-a-Ride, where the cost of traversing each edge is an arbitrary nondecreasing function of the number of objects in the vehicle; and (ii) Weighted Diala-Ride, where demands are allowed to have different weights. 
The reduction is essential, as it is unclear how to extend the techniques of Charikar and Raghavachari to these Dial-a-Ride generalizations.", "Numerous graph mining applications rely on detecting subgraphs which are large near-cliques. Since formulations that are geared towards finding large near-cliques are hard and frequently inapproximable due to connections with the Maximum Clique problem, the poly-time solvable densest subgraph problem which maximizes the average degree over all possible subgraphs \"lies at the core of large scale data mining\" [10]. However, frequently the densest subgraph problem fails in detecting large near-cliques in networks. In this work, we introduce the k-clique densest subgraph problem, k ≥ 2. This generalizes the well studied densest subgraph problem which is obtained as a special case for k=2. For k=3 we obtain a novel formulation which we refer to as the triangle densest subgraph problem: given a graph G(V,E), find a subset of vertices S* such that τ(S*)=max limitsS ⊆ V t(S) |S|, where t(S) is the number of triangles induced by the set S. On the theory side, we prove that for any k constant, there exist an exact polynomial time algorithm for the k-clique densest subgraph problem . Furthermore, we propose an efficient 1 k-approximation algorithm which generalizes the greedy peeling algorithm of Asahiro and Charikar [8,18] for k=2. Finally, we show how to implement efficiently this peeling framework on MapReduce for any k ≥ 3, generalizing the work of Bahmani, Kumar and Vassilvitskii for the case k=2 [10]. On the empirical side, our two main findings are that (i) the triangle densest subgraph is consistently closer to being a large near-clique compared to the densest subgraph and (ii) the peeling approximation algorithms for both k=2 and k=3 achieve on real-world networks approximation ratios closer to 1 rather than the pessimistic 1 k guarantee. 
An interesting consequence of our work is that triangle counting, a well-studied computational problem in the context of social network analysis can be used to detect large near-cliques. Finally, we evaluate our proposed method on a popular graph mining application.", "We present algorithmic and hardness results for network design problems with degree or order constraints. The first problem we consider is the Survivable Network Design problem with degree constraints on vertices. The objective is to find a minimum cost subgraph which satisfies connectivity requirements between vertices and also degree upper bounds @math on the vertices. This includes the well-studied Minimum Bounded Degree Spanning Tree problem as a special case. Our main result is a @math -approximation algorithm for the edge-connectivity Survivable Network Design problem with degree constraints, where the cost of the returned solution is at most twice the cost of an optimum solution (satisfying the degree bounds) and the degree of each vertex @math is at most @math . This implies the first constant factor (bicriteria) approximation algorithms for many degree constrained network design problems, including the Minimum Bounded Degree Steiner Forest problem. Our results also extend to directed graphs and provide the first constant factor (bicriteria) approximation algorithms for the Minimum Bounded Degree Arborescence problem and the Minimum Bounded Degree Strongly @math -Edge-Connected Subgraph problem. In contrast, we show that the vertex-connectivity Survivable Network Design problem with degree constraints is hard to approximate, even when the cost of every edge is zero. A striking aspect of our algorithmic result is its simplicity. It is based on the iterative relaxation method, which is an extension of Jain's iterative rounding method. This provides an elegant and unifying algorithmic framework for a broad range of network design problems. 
We also study the problem of finding a minimum cost @math -edge-connected subgraph with at least @math vertices, which we call the @math -subgraph problem. This generalizes some well-studied classical problems such as the @math -MST and the minimum cost @math -edge-connected subgraph problems. We give a polylogarithmic approximation for the @math -subgraph problem. However, by relating it to the Densest @math -Subgraph problem, we provide evidence that the @math -subgraph problem might be hard to approximate for arbitrary @math ." ] }
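The greedy peeling algorithm referenced in the abstract above ([8,18], the k=2 case) can be sketched as follows. This is an illustrative implementation, not the papers' code; the function name is hypothetical and a simple graph (no duplicate edges) is assumed. It repeatedly removes a minimum-degree vertex and returns the intermediate subgraph with the best average-degree density, which yields a 1/2-approximation to the densest subgraph.

```python
from collections import defaultdict

def peel_densest_subgraph(edges):
    """Charikar-style greedy peeling: 1/2-approximation for k = 2.

    `edges` is a list of (u, v) pairs of a simple undirected graph.
    Returns the best vertex set found and its density |E(S)| / |S|.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    vertices = set(adj)
    m = len(edges)
    best_density, best_set = 0.0, set(vertices)
    while vertices:
        density = m / len(vertices)
        if density > best_density:
            best_density, best_set = density, set(vertices)
        v = min(vertices, key=lambda x: len(adj[x]))  # minimum-degree vertex
        m -= len(adj[v])                              # its edges disappear
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        vertices.remove(v)
    return best_set, best_density
```

On a 4-clique with one pendant vertex, peeling first discards the pendant and correctly reports the clique, whose density 6/4 = 1.5 beats the full graph's 7/5 = 1.4. A heap keyed by degree would replace the linear `min` scan in a production version.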
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
This section is dedicated to presenting some of the works that use game theory for power control. We recall that a Nash equilibrium is a stable solution, where no player has an incentive to deviate unilaterally, while a Pareto equilibrium is a cooperative dominating solution, where there is no way to improve the performance of a player without harming another one. In general, the two concepts do not coincide. Following the general presentation of power allocation games in @cite_11 @cite_31 , an abundance of work can be found on the subject.
{ "cite_N": [ "@cite_31", "@cite_11" ], "mid": [ "2116199025", "2143535483" ], "abstract": [ "We study in this paper a noncooperative approach for sharing resources of a common pool among users, wherein each user strives to maximize its own utility. The optimality notion is then a Nash equilibrium. First, we present a general framework of systems wherein a Nash equilibrium is Pareto inefficient, which are similar to the 'tragedy of the commons' in economics. As examples that fit in the above framework, we consider noncooperative flow-control problems in communication networks where each user decides its throughput to optimize its own utility. As such a utility, we first consider the power which is defined as the throughput divided by the expected end-to-end packet delay, and then consider another utility of additive costs. For both utilities, we establish the non-efficiency of the Nash equilibria.", "We address the problem of spectrum pricing in a cognitive radio network where multiple primary service providers compete with each other to offer spectrum access opportunities to the secondary users. By using an equilibrium pricing scheme, each of the primary service providers aims to maximize its profit under quality of service (QoS) constraint for primary users. We formulate this situation as an oligopoly market consisting of a few firms and a consumer. The QoS degradation of the primary services is considered as the cost in offering spectrum access to the secondary users. For the secondary users, we adopt a utility function to obtain the demand function. With a Bertrand game model, we analyze the impacts of several system parameters such as spectrum substitutability and channel quality on the Nash equilibrium (i.e., equilibrium pricing adopted by the primary services). We present distributed algorithms to obtain the solution for this dynamic game. The stability of the proposed dynamic game algorithms in terms of convergence to the Nash equilibrium is studied. 
However, the Nash equilibrium is not efficient in the sense that the total profit of the primary service providers is not maximized. An optimal solution to gain the highest total profit can be obtained. A collusion can be established among the primary services so that they gain higher profit than that for the Nash equilibrium. However, since one or more of the primary service providers may deviate from the optimal solution, a punishment mechanism may be applied to the deviating primary service provider. A repeated game among primary service providers is formulated to show that the collusion can be maintained if all of the primary service providers are aware of this punishment mechanism, and therefore, properly weight their profits to be obtained in the future." ] }
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
In particular, the utility generally considered in those articles is justified in @cite_4 , where the author describes "a widely applicable model from first principles". Conditions under which the utility yields non-trivial Nash equilibria (i.e., users actually transmit at the equilibrium) are derived. The utility consisting of the throughput-to-power ratio (detailed in Sec. ) is shown to satisfy these conditions. In addition, it possesses a property of reliability, in the sense that transmission occurs at non-negligible rates at the equilibrium. This kind of utility function has been introduced in previous works with an economic leaning @cite_13 @cite_5 .
{ "cite_N": [ "@cite_13", "@cite_5", "@cite_4" ], "mid": [ "2148911784", "2057838148", "2116199025" ], "abstract": [ "A game-theoretic model for studying power control in multicarrier code-division multiple-access systems is proposed. Power control is modeled as a noncooperative game in which each user decides how much power to transmit over each carrier to maximize its own utility. The utility function considered here measures the number of reliable bits transmitted over all the carriers per joule of energy consumed and is particularly suitable for networks where energy efficiency is important. The multidimensional nature of users' strategies and the nonquasi-concavity of the utility function make the multicarrier problem much more challenging than the single-carrier or throughput-based-utility case. It is shown that, for all linear receivers including the matched filter, the decorrelator, and the minimum-mean-square-error detector, a user's utility is maximized when the user transmits only on its \"best\" carrier. This is the carrier that requires the least amount of power to achieve a particular target signal-to-interference-plus-noise ratio at the output of the receiver. The existence and uniqueness of Nash equilibrium for the proposed power control game are studied. In particular, conditions are given that must be satisfied by the channel gains for a Nash equilibrium to exist, and the distribution of the users among the carriers at equilibrium is characterized. In addition, an iterative and distributed algorithm for reaching the equilibrium (when it exists) is presented. It is shown that the proposed approach results in significant improvements in the total utility achieved at equilibrium compared with a single-carrier system and also to a multicarrier system in which each user maximizes its utility over each carrier independently", "Power allocation across users in two adjacent cells is studied for a code-division multiple access (CDMA) data service. 
The forward link is considered and cells are modeled as one-dimensional with uniformly distributed users and orthogonal signatures within each cell. Each user is assumed to have a utility function that describes the user's received utility, or willingness to pay, for a received signal-to-interference-plus-noise ratio (SINR). The objective is to allocate the transmitted power to maximize the total utility summed over all users subject to power constraints in each cell. It is first shown that this optimization can be achieved by a pricing scheme in which each base station announces a price per unit transmitted power to the users, and each user requests power to maximize individual surplus (utility minus cost). Setting prices to maximize total revenue over both cells is also considered, and it is shown that, in general, the solution is different from the one obtained by maximizing total utility. Conditions are given for which independent optimization in each cell, which leads to a Nash equilibrium (NE), is globally optimal. It is shown that, in general, coordination between the two cells is needed to achieve the maximum utility or revenue.", "We study in this paper a noncooperative approach for sharing resources of a common pool among users, wherein each user strives to maximize its own utility. The optimality notion is then a Nash equilibrium. First, we present a general framework of systems wherein a Nash equilibrium is Pareto inefficient, which are similar to the 'tragedy of the commons' in economics. As examples that fit in the above framework, we consider noncooperative flow-control problems in communication networks where each user decides its throughput to optimize its own utility. As such a utility, we first consider the power which is defined as the throughput divided by the expected end-to-end packet delay, and then consider another utility of additive costs. For both utilities, we establish the non-efficiency of the Nash equilibria." ] }
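The "best carrier" result reported in the multicarrier abstract above — each user's utility is maximized by transmitting only on the carrier requiring the least power to reach its target SINR — reduces to a simple argmin once interference levels are fixed. The sketch below is purely illustrative (hypothetical function name and signature, not the paper's formulation); it assumes per-carrier gains and interference are known.

```python
def best_carrier(gains, interference, noise, gamma_target):
    """Pick the carrier needing the least power to hit SINR gamma_target.

    With output SINR = h_c * p_c / (noise + I_c), the required power on
    carrier c is p_c = gamma_target * (noise + I_c) / h_c; the user
    transmits only on the carrier minimizing this quantity.
    """
    required = [
        gamma_target * (noise + i) / h
        for h, i in zip(gains, interference)
    ]
    c = min(range(len(required)), key=required.__getitem__)
    return c, required[c]
```

A carrier with a strong gain can still lose to a weaker one if it carries much more interference, which is why the selection depends on the full ratio rather than on the gain alone.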
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
Unfortunately, Nash equilibria often lead to inefficient allocations, in the sense that higher rates (Pareto equilibria) could be obtained for all mobiles if they cooperated. To alleviate this problem, in addition to the non-cooperative game setting, @cite_5 introduces a pricing strategy that forces users to transmit at a socially optimal rate, thereby obtaining communication at a Pareto equilibrium.
{ "cite_N": [ "@cite_5" ], "mid": [ "2072113937" ], "abstract": [ "In the traffic assignment problem, first proposed by Wardrop in 1952, commuters select the shortest available path to travel from their origins to their destinations. We study a generalization of this problem in which competitors, who may control a nonnegligible fraction of the total flow, ship goods across a network. This type of games, usually referred to as atomic games, readily applies to situations in which the competing freight companies have market power. Other applications include intelligent transportation systems, competition among telecommunication network service providers, and scheduling with flexible machines. Our goal is to determine to what extent these systems can benefit from some form of coordination or regulation. We measure the quality of the outcome of the game without centralized control by computing the worst-case inefficiency of Nash equilibria. The main conclusion is that although self-interested competitors will not achieve a fully efficient solution from the system's point of view, the loss is not too severe. We show how to compute several bounds for the worst-case inefficiency that depend on the characteristics of cost functions and on the market structure in the game. In addition, building upon the work of Catoni and Pallotino, we show examples in which market aggregation (or collusion) adversely impacts the aggregated competitors, even though their market power increases. For example, Nash equilibria of atomic network games may be less efficient than the corresponding Wardrop equilibria. When competitors are completely symmetric, we provide a characterization of the Nash equilibrium using a potential function, and prove that this counterintuitive phenomenon does not arise. Finally, we study a pricing mechanism that elicits more coordination from the players by reducing the worst-case inefficiency of Nash equilibria." ] }
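The pricing idea discussed above can be illustrated with a toy power-control game: each user maximizes a rate utility minus a linear price on transmit power, and iterated best responses are run until they settle. This is a hedged sketch with an assumed utility log(1 + SINR_i) - price * p_i and an assumed spreading gain, not the model of the cited works; the function name and parameters are illustrative.

```python
def best_response_powers(h, gain=16.0, noise=1.0, price=0.5,
                         p_max=10.0, iters=200):
    """Gauss-Seidel best-response dynamics for a priced power-control game.

    Each user i maximizes log(1 + SINR_i) - price * p_i, where
    SINR_i = gain * h[i] * p[i] / (noise + sum_{j != i} h[j] * p[j]).
    Setting the derivative to zero gives the closed-form best response
    p_i = 1/price - I_i / (gain * h[i]), clipped to [0, p_max].
    """
    p = [0.0] * len(h)
    for _ in range(iters):
        for i in range(len(h)):
            interference = noise + sum(
                h[j] * p[j] for j in range(len(h)) if j != i)
            p[i] = min(p_max,
                       max(0.0, 1.0 / price - interference / (gain * h[i])))
    return p
```

With a sufficiently large `gain`, the best-response map is a contraction, so the iteration converges to a Nash equilibrium; raising `price` lowers the equilibrium powers, which is the lever the pricing mechanism uses to push users toward a socially better operating point.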
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
In @cite_17 , defining the utility as advised in @cite_4 as the ratio of the throughput to the transmission power, the authors obtain existence and uniqueness results for the Nash equilibrium of a CDMA system. They extend this work to the case of multiple carriers in @cite_36 . In particular, it is shown that users select and transmit only over their best carrier. As far as the attenuation is concerned, the consideration is restricted to flat fading in @cite_17 and in @cite_36 (each carrier being flat fading in the latter). However, wireless transmissions generally suffer from the effect of multiple paths, thus becoming frequency-selective. The goal of this paper is to determine the influence of the number of paths (or the selectivity of the channel) on the performance of power allocation (PA).
{ "cite_N": [ "@cite_36", "@cite_4", "@cite_17" ], "mid": [ "2100366181", "2168150810", "2063449729" ], "abstract": [ "For pt.I see ibid., vol.44, no.7, p.2796-815 (1998). In multiaccess wireless systems, dynamic allocation of resources such as transmit power, bandwidths, and rates is an important means to deal with the time-varying nature of the environment. We consider the problem of optimal resource allocation from an information-theoretic point of view. We focus on the multiaccess fading channel with Gaussian noise, and define two notions of capacity depending on whether the traffic is delay-sensitive or not. In the present paper, we introduce a notion of delay-limited capacity which is the maximum rate achievable with delay independent of how slow the fading is. We characterize the delay-limited capacity region of the multiaccess fading channel and the associated optimal resource allocation schemes. We show that successive decoding is optimal, and the optimal decoding order and power allocation can be found explicitly as a function of the fading states; this is a consequence of an underlying polymatroid structure that we exploit.", "In this contribution, the performance of an uplink CDMA system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). We consider the realistic case of frequency selective channels. This scenario illustrates the case of decentralized schemes and aims at reducing the downlink signaling overhead. Various receivers are considered, namely the Matched filter, the MMSE filter and the optimum filter. The goal of this paper is to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large. To that end we combine two asymptotic methodologies. 
The first is asymptotic random matrix theory which allows us to obtain explicit expressions for the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games along with the Wardrop equilibrium concept which allows us to compute good approximations of the Nash equilibrium as the number of mobiles grow.", "We determine the optimal adaptive rate and power control strategies to maximize the total throughput in a multirate code-division multiple-access system. The total throughput of the system provides a meaningful baseline in the form of an upper bound to the throughput achievable with additional restrictions imposed on the system to guarantee fairness. Peak power and instantaneous bit energy-to-noise spectral density constraints are assumed at the transmitter with matched filter detection at the receiver. Our results apply to frequency selective fading in so far as the bit energy-to-equivalent noise power spectral density ratio definition can be used as the quality-of-service metric. The bit energy-to-equivalent noise power spectral density ratio metric coincides with the bit-error rate metric under the assumption that the processing gains and the number of users are high enough so that self-interference can be neglected. We first obtain results for the case where the rates available to each user are unrestricted, and we then consider the more practical scenario where each user has a finite discrete set of rates. An upper bound to the maximum average throughput is obtained and evaluated for Rayleigh fading. Suboptimal low-complexity schemes are considered to illustrate the performance tradeoffs between optimality and complexity. We also show that the optimum rate and power adaptation scheme with unconstrained rates is in fact just a rate adaptation scheme with fixed transmit powers, and it performs significantly better than a scheme that uses power adaptation alone." ] }
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
This work is an extension of @cite_17 to the case of frequency-selective fading, in the framework of multi-user systems. We do not consider multiple carriers, as in @cite_36 , and the results are very different from those obtained in that work. The extension is not trivial and involves advanced results on random matrices with non-equal variances due to Girko @cite_40 , whereas classical results rely on the work of Silverstein @cite_14 . A part of this work was previously published as a conference paper @cite_19 .
{ "cite_N": [ "@cite_14", "@cite_36", "@cite_19", "@cite_40", "@cite_17" ], "mid": [ "2132158614", "2114927595", "2100366181", "2168150810", "2135963243" ], "abstract": [ "This paper extends Khatri (1964, 1969) distribution of the largest eigenvalue of central complex Wishart matrices to the noncentral case. It then applies the resulting new statistical results to obtain closed-form expressions for the outage probability of multiple-input-multiple-output (MIMO) systems employing maximal ratio combining (known also as \"beamforming\" systems) and operating over Rician-fading channels. When applicable these expressions are compared with special cases previously reported in the literature dealing with the performance of (1) MIMO systems over Rayleigh-fading channels and (2) single-input-multiple-output (SIMO) systems over Rician-fading channels. As a double check these analytical results are validated by Monte Carlo simulations and as an illustration of the mathematical formalism some numerical examples for particular cases of interest are plotted and discussed. These results show that, given a fixed number of total antenna elements and under the same scattering condition (1) SIMO systems are equivalent to multiple-input-single-output systems and (2) it is preferable to distribute the number of antenna elements evenly between the transmitter and the receiver for a minimum outage probability performance.", "We consider networks consisting of nodes with radios, and without any wired infrastructure, thus necessitating all communication to take place only over the shared wireless medium. The main focus of this paper is on the effect of fading in such wireless networks. We examine the attenuation regime where either the medium is absorptive, a situation which generally prevails, or the path loss exponent is greater than 3. We study the transport capacity, defined as the supremum over the set of feasible rate vectors of the distance weighted sum of rates. 
We consider two assumption sets. Under the first assumption set, which essentially requires only a mild time average type of bound on the fading process, we show that the transport capacity can grow no faster than O(n), where n denotes the number of nodes, even when the channel state information (CSI) is available noncausally at both the transmitters and the receivers. This assumption includes common models of stationary ergodic channels; constant, frequency-selective channels; flat, rapidly varying channels; and flat slowly varying channels. In the second assumption set, which essentially features an independence, time average of expectation, and nonzeroness condition on the fading process, we constructively show how to achieve transport capacity of Ω(n) even when the CSI is unknown to both the transmitters and the receivers, provided that every node has an appropriately nearby node. This assumption set includes common models of independent and identically distributed (i.i.d.) channels; constant, flat channels; and constant, frequency-selective channels. The transport capacity is achieved by nodes communicating only with neighbors, and using only point-to-point coding. The thrust of these results is that the multihop strategy, toward which much protocol development activity is currently targeted, is appropriate for fading environments. The low attenuation regime is open.
In the present paper, we introduce a notion of delay-limited capacity which is the maximum rate achievable with delay independent of how slow the fading is. We characterize the delay-limited capacity region of the multiaccess fading channel and the associated optimal resource allocation schemes. We show that successive decoding is optimal, and the optimal decoding order and power allocation can be found explicitly as a function of the fading states; this is a consequence of an underlying polymatroid structure that we exploit.", "In this contribution, the performance of an uplink CDMA system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). We consider the realistic case of frequency selective channels. This scenario illustrates the case of decentralized schemes and aims at reducing the downlink signaling overhead. Various receivers are considered, namely the Matched filter, the MMSE filter and the optimum filter. The goal of this paper is to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large. To that end we combine two asymptotic methodologies. The first is asymptotic random matrix theory which allows us to obtain explicit expressions for the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games along with the Wardrop equilibrium concept which allows us to compute good approximations of the Nash equilibrium as the number of mobiles grow.", "In this paper, we consider an adaptive modulation system with multiple-input-multiple-output (MIMO) antennas in conjunction with orthogonal frequency-division multiplexing (OFDM) operating over frequency-selective Rayleigh fading environments. 
In particular, we consider a type of beamforming with a maximum ratio transmission maximum ratio combining (MRT-MRC) transceiver structure. For this system, we derive a central limit theorem for various block-based performance metrics. This motivates an accurate Gaussian approximation to the system data rate and the number of outages per OFDM block. In addition to the data rate and outage distributions, we also consider the subcarrier signal-to-noise ratio (SNR) as a process in the frequency domain and compute level crossing rates (LCRs) and average fade bandwidths (AFBs). Hence, we provide fundamental but novel results for the MIMO OFDM channel. The accuracy of these results is verified by Monte Carlo simulations, and applications to performance analysis and system design are discussed." ] }
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
Moreover, in addition to the linear filters studied in @cite_17 , we study the enhancements provided by the optimum and successive interference cancellation filters.
{ "cite_N": [ "@cite_17" ], "mid": [ "2168729028" ], "abstract": [ "Several contributions have been made so far to develop optimal multichannel linear filtering approaches and show their ability to reduce the acoustic noise. However, there has not been a clear unifying theoretical analysis of their performance in terms of both noise reduction and speech distortion. To fill this gap, we analyze the frequency-domain (non-causal) multichannel linear filtering for noise reduction in this paper. For completeness, we consider the noise reduction constrained optimization problem that leads to the parameterized multichannel non-causal Wiener filter (PMWF). Our contribution is fivefold. First, we formally show that the minimum variance distortionless response (MVDR) filter is a particular case of the PMWF by properly formulating the constrained optimization problem of noise reduction. Second, we propose new simplified expressions for the PMWF, the MVDR, and the generalized sidelobe canceller (GSC) that depend on the signals' statistics only. In contrast to earlier works, these expressions are explicitly independent of the channel transfer function ratios. Third, we quantify the theoretical gains and losses in terms of speech distortion and noise reduction when using the PMWF by establishing new simplified closed-form expressions for three performance measures, namely, the signal distortion index, the noise reduction factor (originally proposed in the paper titled "New insights into the noise reduction Wiener filter," by J. Chen (IEEE Transactions on Audio, Speech, and Language Processing, Vol. 15, no. 4, pp. 1218-1234, Jul. 2006) to analyze the single channel time-domain Wiener filter), and the output signal-to-noise ratio (SNR). Fourth, we analyze the effects of coherent and incoherent noise in addition to the benefits of utilizing multiple microphones. Fifth, we propose a new proof for the a posteriori SNR improvement achieved by the PMWF. 
Finally, we provide some simulation results to corroborate the findings of this work." ] }
0707.0546
2952386894
We study the problem of assigning jobs to applicants. Each applicant has a weight and provides a preference list ranking a subset of the jobs. A matching M is popular if there is no other matching M' such that the weight of the applicants who prefer M' over M exceeds the weight of those who prefer M over M'. This paper gives efficient algorithms to find a popular matching if one exists.
Following the publication of the work of Abraham et al. @cite_11 , the topic of unweighted popular matchings has been further explored in many interesting directions. Suppose we want to go from an arbitrary matching to some popular matching by a sequence of matchings, each more popular than the previous; Abraham and Kavitha @cite_0 showed that there is always a sequence of length at most two and gave a linear-time algorithm to find it. One of the main drawbacks of popular matchings is that they may not always exist; Mahdian @cite_10 nicely addressed this issue by showing that the probability that a random instance admits a popular matching depends on the ratio @math , and exhibits a phase transition around @math . Motivated by a house allocation application, Manlove and Sng @cite_1 gave fast algorithms for popular assignments with capacities on the jobs.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_10", "@cite_11" ], "mid": [ "1529648317", "2096797149", "2605027943", "2547342773" ], "abstract": [ "We investigate the following problem: given a set of jobs and a set of people with preferences over the jobs, what is the optimal way of matching people to jobs? Here we consider the notion of popularity. A matching M is popular if there is no matching M' such that more people prefer M' to M than the other way around. Determining whether a given instance admits a popular matching and, if so, finding one, was studied in [2]. If there is no popular matching, a reasonable substitute is a matching whose unpopularity is bounded. We consider two measures of unpopularity - unpopularity factor denoted by u(M) and unpopularity margin denoted by g(M). McCutchen recently showed that computing a matching M with the minimum value of u(M) or g(M) is NP-hard, and that if G does not admit a popular matching, then we have u(M) ≥ 2 for all matchings M in G. Here we show that a matching M that achieves u(M) = 2 can be computed in @math time (where m is the number of edges in G and n is the number of nodes) provided a certain graph H admits a matching that matches all people. We also describe a sequence of graphs: H = H_2, H_3, ..., H_k such that if H_k admits a matching that matches all people, then we can compute in @math time a matching M such that u(M) ≤ k - 1 and @math . Simulation results suggest that our algorithm finds a matching with low unpopularity.", "We study the problem of matching applicants to jobs under one-sided preferences; that is, each applicant ranks a non-empty subset of jobs under an order of preference, possibly involving ties. A matching M is said to be more popular than T if the applicants that prefer M to T outnumber those that prefer T to M. A matching is said to be popular if there is no matching more popular than it. Equivalently, a matching M is popular if @f(M,T)>=@f(T,M) for all matchings T, where @f(X,Y) is the number of applicants that prefer X to Y. Previously studied solution concepts based on the popularity criterion are either not guaranteed to exist for every instance (e.g., popular matchings) or are NP-hard to compute (e.g., least unpopular matchings). This paper addresses this issue by considering mixed matchings. A mixed matching is simply a probability distribution over matchings in the input graph. The function @f that compares two matchings generalizes in a natural manner to mixed matchings by taking expectation. A mixed matching P is popular if @f(P,Q)>=@f(Q,P) for all mixed matchings Q. We show that popular mixed matchings always exist and we design polynomial time algorithms for finding them. Then we study their efficiency and give tight bounds on the price of anarchy and price of stability of the popular matching problem.", "Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Motivated by the fact that in several cases a matching in a graph is stable if and only if it is produced by a greedy algorithm, we study the problem of computing a maximum weight greedy matching on weighted graphs, termed GREEDY-MATCHING. In wide contrast to the maximum weight matching problem, for which many efficient algorithms are known, we prove that GREEDYMATCHING is strongly NP-hard and APX-complete, and thus it does not admit a PTAS unless P=NP, even on graphs with maximum degree at most 3 and with at most three different integer edge weights. Furthermore we prove that GREEDYMATCHING is strongly NP-hard if the input graph is in addition bipartite. Moreover we consider three natural parameters of the problem, for which we establish a sharp threshold behavior between NP-hardness and computational tractability. On the positive side, we present a randomized approximation algorithm (RGMA) for GREEDYMATCHING on a special class of weighted graphs, called bush graphs. We highlight an unexpected connection between RGMA and the approximation of maximum cardinality matching in unweighted graphs via randomized greedy algorithms. We show that, if the approximation ratio of RGMA is ρ, then for every ϵ > 0 the randomized MRG algorithm of ( 1995) gives a (ρ - ϵ)-approximation for the maximum cardinality matching. We conjecture that a tight bound for ρ is 2/3; we prove our conjecture true for four subclasses of bush graphs. Proving a tight bound for the approximation ratio of MRG on unweighted graphs (and thus also proving a tight value for ρ) is a long-standing open problem (Poloczek and Szegedy 2012). This unexpected relation of our RGMA algorithm with the MRG algorithm may provide new insights for solving this problem.", "In an instance G = (A union B, E) of the stable marriage problem with strict and possibly incomplete preference lists, a matching M is popular if there is no matching M' where the vertices that prefer M' to M outnumber those that prefer M to M'. All stable matchings are popular and there is a simple linear time algorithm to compute a maximum-size popular matching. More generally, what we seek is a min-cost popular matching where we assume there is a cost function c : E -> Q. However there is no polynomial time algorithm currently known for solving this problem. Here we consider the following generalization of a popular matching called a popular half-integral matching: this is a fractional matching x = (M_1 + M_2)/2, where M_1 and M_2 are the 0-1 edge incidence vectors of matchings in G, such that x satisfies popularity constraints. We show that every popular half-integral matching is equivalent to a stable matching in a larger graph G^*. This allows us to solve the min-cost popular half-integral matching problem in polynomial time." ] }
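The weighted popularity comparison defined in the record above (sum the weights of the applicants preferring each matching, and M beats M' when its supporters' weight is larger) can be sketched as follows. This is an illustrative helper, not code from the cited works; the `prefs`/`weights` data layout and the function names are assumptions:

```python
# Weighted popularity comparison between two matchings.
# prefs[a]   -- list of jobs ranked best-first for applicant a
# weights[a] -- positive weight of applicant a
# A matching is a dict applicant -> job; a missing key means unmatched.

INF = float("inf")

def beats(m1, m2, prefs, weights):
    """True iff the weight of applicants preferring m1 exceeds that preferring m2."""
    def rank(a, job):
        if job is None:
            return INF              # being unmatched is worse than any ranked job
        return prefs[a].index(job)

    for1 = for2 = 0.0
    for a in prefs:
        r1 = rank(a, m1.get(a))
        r2 = rank(a, m2.get(a))
        if r1 < r2:                 # lower rank index = better job
            for1 += weights[a]
        elif r2 < r1:
            for2 += weights[a]
    return for1 > for2
```

A matching M is then popular precisely when no matching M' satisfies `beats(M', M, ...)`; the hard part solved by the paper is searching over all M' efficiently, which this sketch does not attempt.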
0707.1053
1672175433
We present a deterministic exploration mechanism for sponsored search auctions, which enables the auctioneer to learn the relevance scores of advertisers, and allows advertisers to estimate the true value of clicks generated at the auction site. This exploratory mechanism deviates only minimally from the mechanism being currently used by Google and Yahoo! in the sense that it retains the same pricing rule, a similar ranking scheme, and a similar mathematical structure of payoffs. In particular, the estimations of the relevance scores and true-values are achieved by providing a chance to lower ranked advertisers to obtain better slots. This allows the search engine to potentially test a new pool of advertisers, and correspondingly, enables new advertisers to estimate the value of clicks/leads generated via the auction. Both these quantities are unknown a priori, and their knowledge is necessary for the auction to operate efficiently. We show that such an exploration policy can be incorporated without any significant loss in revenue for the auctioneer. We compare the revenue of the new mechanism to that of the standard mechanism at their corresponding symmetric Nash equilibria and compute the cost of uncertainty, which is defined as the relative loss in expected revenue per impression. We also bound the loss in efficiency, as well as in user experience, due to exploration, under the same solution concept (i.e. SNE). Thus the proposed exploration mechanism learns the relevance scores while incorporating the incentive constraints from the advertisers who are selfish and are trying to maximize their own profits, and therefore, the exploration is essentially achieved via mechanism design. We also discuss variations of the new mechanism such as truthful implementations.
Moreover, as we discuss later, the problem of designing a family of optimal exploratory mechanisms, which for example would provide the most information while minimizing the expected loss in revenue, is far from being solved. The work in @cite_3 and in this paper provides just two instances of mechanism design which do provably well, but more work analyzing different aspects of exploratory mechanisms is necessary in this emerging field. Thus, to the best of our knowledge, we are one of the first groups to formally study the problem of estimating relevance and valuations from an incentive as well as a learning-theoretic perspective without deviating much from the mechanism currently in place.
{ "cite_N": [ "@cite_3" ], "mid": [ "2021734699" ], "abstract": [ "We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We observe that there exists an algorithm that finds the optimal randomized mechanism that runs in time polynomial in the size of the support. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS'02], there exists a polynomial time truthful in expectation mechanism that provides a (1.5+ϵ)-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial time deterministic truthful mechanism that guarantees a 5/3 approximation to the revenue achievable by an optimal deterministic truthful mechanism. We show that the 5/3-approximation mechanism provides the same approximation ratio also with respect to the optimal truthful-in-expectation mechanism. This shows that the performance gap between truthful-in-expectation and deterministic mechanisms is relatively small. En route, we solve an open question of Mehta and Vazirani [EC'04]. Finally, we extend some of our results to the multi-item case, and show how to compute the optimal truthful-in-expectation mechanisms for bidders with more complex valuations." ] }
0707.1099
1670687021
We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol.
The ``multiple correlated informants - single recipient'' communication problem we are considering in this paper is basically the well-known distributed source coding (DSC) problem. This problem was first considered by Slepian and Wolf @cite_11 for lossless compression of discrete random variables and by Wyner and Ziv @cite_24 for lossy distributed compression. However, these works only provided theoretical bounds on the compression, but no method for constructing practical codes that achieve the predicted theoretical bounds.
{ "cite_N": [ "@cite_24", "@cite_11" ], "mid": [ "2138256990", "2156567124" ], "abstract": [ "We address the problem of compressing correlated distributed sources, i.e., correlated sources which are not co-located or which cannot cooperate to directly exploit their correlation. We consider the related problem of compressing a source which is correlated with another source that is available only at the decoder. This problem has been studied in the information theory literature under the name of the Slepian-Wolf (1973) source coding problem for the lossless coding case, and as \"rate-distortion with side information\" for the lossy coding case. We provide a constructive practical framework based on algebraic trellis codes dubbed as DIstributed Source Coding Using Syndromes (DISCUS), that can be applicable in a variety of settings. Simulation results are presented for source coding of independent and identically distributed (i.i.d.) Gaussian sources with side information available at the decoder in the form of a noisy version of the source to be coded. Our results reveal the promise of this approach: using trellis-based quantization and coset construction, the performance of the proposed approach is 2-5 dB from the Wyner-Ziv (1976) bound.", "This paper deals with the problem of multicasting a set of discrete memoryless correlated sources (DMCS) over a cooperative relay network. Necessary conditions with cut-set interpretation are presented. A Joint source-Wyner-Ziv encoding sliding window decoding scheme is proposed, in which decoding at each receiver is done with respect to an ordered partition of other nodes. For each ordered partition a set of feasibility constraints is derived. Then, utilizing the submodular property of the entropy function and a novel geometrical approach, the results of different ordered partitions are consolidated, which lead to sufficient conditions for our problem. The proposed scheme achieves operational separation between source coding and channel coding. It is shown that sufficient conditions are indeed necessary conditions in two special cooperative networks, namely, Aref network and finite-field deterministic network. Also, in Gaussian cooperative networks, it is shown that reliable transmission of all DMCS whose Slepian-Wolf region intersects the cut-set bound region within a constant number of bits, is feasible. In particular, all results of the paper are specialized to obtain an achievable rate region for cooperative relay networks which includes relay networks and two-way relay networks." ] }
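The Slepian-Wolf-style syndrome (coset) idea referenced in the record above can be shown with a minimal toy example. This is an assumed illustrative setup, not code from the cited papers: the decoder's side information Y differs from the 3-bit source word X in at most one position, so the encoder can send only X's 2-bit syndrome with respect to the {000, 111} repetition code instead of all 3 bits, and the decoder still recovers X exactly:

```python
# Toy syndrome-based distributed coding in the Slepian-Wolf spirit:
# partition {0,1}^3 into cosets of the repetition code {000, 111};
# the encoder sends the 2-bit coset index (syndrome) of X, and the
# decoder picks the coset member closest to its side information Y.

from itertools import product

def syndrome(x):
    # parity checks of the [3,1] repetition code
    return (x[0] ^ x[1], x[1] ^ x[2])

def decode(s, y):
    # each coset has exactly two members, at Hamming distance 3 apart;
    # pick the one closer to the side information y
    coset = [x for x in product((0, 1), repeat=3) if syndrome(x) == s]
    return min(coset, key=lambda x: sum(a ^ b for a, b in zip(x, y)))

# exhaustive check: recovery is exact whenever distance(X, Y) <= 1
for x in product((0, 1), repeat=3):
    for y in product((0, 1), repeat=3):
        if sum(a ^ b for a, b in zip(x, y)) <= 1:
            assert decode(syndrome(x), y) == x
```

The two coset members are always 3 apart, so when Y is within distance 1 of X the wrong member is at distance at least 2 and never wins; this is the binning intuition that DISCUS-style constructions scale up with real channel codes.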
0707.1099
1670687021
We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol.
One of the essential characteristics of the standard DSC problem is that the information sources, also called encoders or informants, are not allowed to interact or cooperate with each other for the purpose of compressing their information. There are two approaches to solving the DSC problem. First, allow the data-gathering node, also called the decoder or recipient, and the informants to interact with each other. Second, do not allow interaction between the recipient and the informants. Starting with the seminal paper @cite_11 , almost all of the work in the area of DSC has followed the second approach. In the recent past, Pradhan and Ramchandran @cite_0 and later @cite_9 @cite_18 @cite_2 @cite_10 @cite_15 @cite_17 have provided various practical schemes to achieve the optimal performance using this approach. An interested reader can refer to the survey in @cite_12 for more information. However, only a little work @cite_13 @cite_7 has been done towards solving the DSC problem when the recipient and the informants are allowed to interact with each other. Also, this work stops well short of addressing the general ``multiple correlated informants - single recipient'' interactive communication problem, which we are concerned with addressing in this paper.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_7", "@cite_9", "@cite_17", "@cite_0", "@cite_2", "@cite_15", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2138256990", "2074430484", "2109053700", "2963175488", "1512160561", "2130507035", "1964073652", "2170510856", "2963017858", "2116661435", "1485005937" ], "abstract": [ "We address the problem of compressing correlated distributed sources, i.e., correlated sources which are not co-located or which cannot cooperate to directly exploit their correlation. We consider the related problem of compressing a source which is correlated with another source that is available only at the decoder. This problem has been studied in the information theory literature under the name of the Slepian-Wolf (1973) source coding problem for the lossless coding case, and as \"rate-distortion with side information\" for the lossy coding case. We provide a constructive practical framework based on algebraic trellis codes dubbed as DIstributed Source Coding Using Syndromes (DISCUS), that can be applicable in a variety of settings. Simulation results are presented for source coding of independent and identically distributed (i.i.d.) Gaussian sources with side information available at the decoder in the form of a noisy version of the source to be coded. Our results reveal the promise of this approach: using trellis-based quantization and coset construction, the performance of the proposed approach is 2-5 dB from the Wyner-Ziv (1976) bound.", "This paper re-visits Shayevitz & Feder's recent ‘Posterior Matching Scheme’, an explicit, dynamical system encoder for communication with feedback that treats the message as a point on the [0,1] line and achieves capacity on memoryless channels. It has two key properties that ensure that it maximizes mutual information at each step: (a) the encoder sequentially hands the decoder what is missing; and (b) the next input has the desired statistics. Motivated by brain-machine interface applications and multi-antenna communications, we consider developing dynamical system feedback encoders for scenarios when the message point lies in higher dimensions. We develop a necessary and sufficient condition — the Jacobian equation — for any dynamical system encoder that maximizes mutual information. In general, there are many solutions to this equation. We connect this to the Monge-Kantorovich Optimal Transportation Problem, which provides a framework to identify a unique solution suiting a specific purpose. We provide two exemplar capacity-achieving solutions — for different purposes — for the multi-antenna Gaussian channel with feedback. This insight further elucidates an interesting relationship between interactive decision theory problems and the theory of optimal transportation.", "Consider a pair of correlated Gaussian sources (X_1, X_2). Two separate encoders observe the two components and communicate compressed versions of their observations to a common decoder. The decoder is interested in reconstructing a linear combination of X_1 and X_2 to within a mean-square distortion of D. We obtain an inner bound to the optimal rate-distortion region for this problem. A portion of this inner bound is achieved by a scheme that reconstructs the linear function directly rather than reconstructing the individual components X_1 and X_2 first. This results in a better rate region for certain parameter values. Our coding scheme relies on lattice coding techniques in contrast to more prevalent random coding arguments used to demonstrate achievable rate regions in information theory. We then consider the case of linear reconstruction of K sources and provide an inner bound to the optimal rate-distortion region. Some parts of the inner bound are achieved using the following coding structure: lattice vector quantization followed by \"correlated\" lattice-structured binning.", "We consider the problem of providing privacy, in the private information retrieval (PIR) sense, to users requesting data from a distributed storage system (DSS). The DSS uses an (n, k) Maximum Distance Separable (MDS) code to store the data reliably on unreliable storage nodes. Some of these nodes can be spies which report to a third party, such as an oppressive regime, which data is being requested by the user. An information theoretic PIR scheme ensures that a user can satisfy its request while revealing, to the spy nodes, no information on which data is being requested. A user can achieve PIR by downloading all the data in the DSS. However, this is not a feasible solution due to its high communication cost. We construct PIR schemes with low download communication cost. When there is b = 1 spy node in the DSS, we construct PIR schemes with download cost 1/(1−R) per unit of requested data (R = k/n is the code rate), achieving the information theoretic limit for linear schemes. The proposed schemes are universal since they depend on the code rate, but not on the generator matrix of the code. When there are 2 ≤ b ≤ n − k spy nodes, we devise linear PIR schemes that have download cost equal to b + k per unit of requested data.", "Index coding studies multiterminal source-coding problems where a set of receivers are required to decode multiple (possibly different) messages from a common broadcast, and they each know some messages a priori. In this paper, at the receiver end, we consider a special setting where each receiver knows only one message a priori, and each message is known to only one receiver. At the broadcasting end, we consider a generalized setting where there could be multiple senders, and each sender knows a subset of the messages. The senders collaborate to transmit an index code. This paper looks at minimizing the number of total coded bits the senders are required to transmit. When there is only one sender, we propose a pruning algorithm to find a lower bound on the optimal (i.e., the shortest) index codelength, and show that it is achievable by linear index codes. When there are two or more senders, we propose an appending technique to be used in conjunction with the pruning technique to give a lower bound on the optimal index codelength; we also derive an upper bound based on cyclic codes. While the two bounds do not match in general, for the special case where no two distinct senders know any message in common, the bounds match, giving the optimal index codelength. The results are expressed in terms of strongly connected components in directed graphs that represent the index-coding problems.", "Motivated by a problem of transmitting supplemental data over broadcast channels (Birk and Kol, INFOCOM 1998), we study the following coding problem: a sender communicates with n receivers R1,..., Rn. He holds an input x ∈ {0,1}^n and wishes to broadcast a single message so that each receiver Ri can recover the bit xi. Each Ri has prior side information about x, induced by a directed graph G on n nodes; Ri knows the bits of x in the positions {j | (i,j) is an edge of G}. G is known to the sender and to the receivers. We call encoding schemes that achieve this goal INDEX codes for {0,1}^n with side information graph G. In this paper we identify a measure on graphs, the minrank, which exactly characterizes the minimum length of linear and certain types of nonlinear INDEX codes. We show that for natural classes of side information graphs, including directed acyclic graphs, perfect graphs, odd holes, and odd anti-holes, minrank is the optimal length of arbitrary INDEX codes. For arbitrary INDEX codes and arbitrary graphs, we obtain a lower bound in terms of the size of the maximum acyclic induced subgraph. This bound holds even for randomized codes, but has been shown not to be tight.", "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes the hidden representations good enough to reconstruct the input of each modality. A parameter @math is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to two other correspondence models, here called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.", "In this paper we introduce a novel linear precoding technique. The approach used for the design of the precoding matrix is general and the resulting algorithm can address several optimization criteria with an arbitrary number of antennas at the user terminals. We have achieved this by designing the precoding matrices in two steps. In the first step we minimize the overlap of the row spaces spanned by the effective channel matrices of different users using a new cost function. In the next step, we optimize the system performance with respect to specific optimization criteria assuming a set of parallel single-user MIMO channels. By combining the closed form solution with Tomlinson-Harashima precoding we reach the maximum sum-rate capacity when the total number of antennas at the user terminals is less or equal to the number of antennas at the base station. By iterating the closed form solution with appropriate power loading we are able to extract the full diversity in the system and reach the maximum sum-rate capacity in case of high multi-user interference. Joint processing over a group of multi-user MIMO channels in different frequency and time slots yields maximum diversity regardless of the level of multi-user interference.", "Multiple parties observing correlated data seek to recover each other’s data and attain omniscience. To that end, they communicate interactively over a noiseless broadcast channel - each bit transmitted over this channel is received by all the parties. We give a universal interactive communication protocol, termed the recursive data exchange protocol (RDE), which attains omniscience for any sequence of data observed by the parties and provide an individual sequence guarantee of performance. As a by-product, for observations of length @math , we show the universal rate optimality of RDE up to an @math term in a generative setting where the data sequence is independent and identically distributed (in time). Furthermore, drawing on the duality between omniscience and secret key agreement due to Csiszar and Narayan, we obtain a universal protocol for generating a multiparty secret key of rate at most @math less than the maximum rate possible. A key feature of RDE is its recursive structure whereby when a subset @math of parties recover each-other’s data, the rates appear as if the parties have been executing the protocol in an alternative model where the parties in @math are collocated.", "A new coding problem is introduced for a correlated source (X^n, Y^n)_{n=1}^∞. The observer of X^n can transmit data depending on X^n at a prescribed rate R. Based on these data the observer of Y^n tries to identify whether for some distortion measure ρ (like the Hamming distance) n^{-1} ρ(X^n, Y^n) ≤ d, a prescribed fidelity criterion. We investigate as functions of R and d the exponents of two error probabilities, the probabilities for misacceptance, and the probabilities for misrejection. In the case where X^n and Y^n are independent, we completely characterize the achievable region for the rate R and the exponents of two error probabilities; in the case where X^n and Y^n are correlated, we get some interesting partial results for the achievable region. During the process, we develop a new method for proving converses, which is called \"the inherently typical subset lemma\". This new method goes considerably beyond the \"entropy characterization\", the \"image size characterization,\" and its extensions. It is conceivable that this new method has a strong impact on multiuser information theory.", "In this paper, we consider the problem of minimizing the multicast decoding delay of generalized instantly decodable network coding (G-IDNC) over persistent forward and feedback erasure channels with feedback intermittence. In such an environment, the sender does not always receive acknowledgement from the receivers after each transmission. Moreover, both the forward and feedback channels are subject to persistent erasures, which can be modelled by a two state (good and bad states) Markov chain known as Gilbert-Elliott channel (GEC). Due to such feedback imperfections, the sender is unable to determine subsequent instantly decodable packets combination for all receivers. Given this harsh channel and feedback model, we first derive expressions for the probability distributions of decoding delay increments and then employ these expressions in formulating the minimum decoding problem in such environment as a maximum weight clique problem in the G-IDNC graph. We also show that the problem formulations in simpler channel and feedback models are special cases of our generalized formulation. Since this problem is NP-hard, we design a greedy algorithm to solve it and compare it to blind approaches proposed in literature. Through extensive simulations, our adaptive algorithm is shown to outperform the blind approaches in all situations and to achieve significant improvement in the decoding delay, especially when the channel is highly persistent. Index Terms—Multicast Channels, Persistent Erasure Channels, G-IDNC, Decoding Delay, Lossy Intermittent Feedback, Maximum Weight Clique Problem." ] }
0707.1099
1670687021
We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol.
In @cite_13 , only the scenario in which two correlated informants communicate with a recipient is considered. It is assumed that both the informants and the recipient know the joint distribution of the informants' data, and only the average total number of bits exchanged is minimized. In @cite_25 , only two messages are allowed to be exchanged between the encoder and the decoder, which may not be optimal for the general communication problem. Moreover, it does not address the problem of computing the optimal number of messages exchanged between the encoder and the decoder, or the optimal number of bits sent by each, for a given communication objective in an interactive communication scenario. Also, unlike @cite_13 , that work concerns itself with lossy compression at the encoders.
{ "cite_N": [ "@cite_13", "@cite_25" ], "mid": [ "2074430484", "2138185924" ], "abstract": [ "This paper re-visits Shayevitz & Feder's recent ‘Posterior Matching Scheme’, an explicit, dynamical system encoder for communication with feedback that treats the message as a point on the [0,1] line and achieves capacity on memo-ryless channels. It has two key properties that ensure that it maximizes mutual information at each step: (a) the encoder sequentially hands the decoder what is missing; and (b) the next input has the desired statistics. Motivated by brain-machine interface applications and multi-antenna communications, we consider developing dynamical system feedback encoders for scenarios when the message point lies in higher dimensions. We develop a necessary and sufficient condition — the Jacobian equation — for any dynamical system encoder that maximizes mutual information. In general, there are many solutions to this equation. We connect this to the Monge-Kantorovich Optimal Transportation Problem, which provides a framework to identify a unique solution suiting a specific purpose. We provide two examplar capacity-achieving solutions — for different purposes — for the multi-antenna Gaussian channel with feedback. This insight further elucidates an interesting relationship between interactive decision theory problems and the theory of optimal transportation.", "The reduction in communication achievable by interaction is investigated. The model assumes two communicators: an informant having a random variable X, and a recipient having a possibly dependent random variable Y. Both communicators want the recipient to learn X with no probability of error, whereas the informant may or may not learn Y. To that end, they alternate in transmitting messages comprising finite sequences of bits. Messages are transmitted over an error-free channel and are determined by an agreed-upon, deterministic protocol for (X,Y) (i.e. 
a protocol for transmitting X to a person who knows Y). A two-message protocol is described, and its worst case performance is investigated." ] }
A preliminary version of our ideas appears in @cite_22 , where we also extend the notions of and , proposed in @cite_19 , and derive some of their properties. We intend to address the average-case communication problem, along with some other variations of the problem considered here, in the future.
{ "cite_N": [ "@cite_19", "@cite_22" ], "mid": [ "2026910567", "2251314334" ], "abstract": [ "In this paper, we address the problem of characterizing the instances of the multiterminal source model of Csiszar and Narayan in which communication from all terminals is needed for establishing a secret key of maximum rate. We give an information-theoretic sufficient condition for identifying such instances. We believe that our sufficient condition is in fact an exact characterization, but we are only able to prove this in the case of the three-terminal source model.", "In this paper we describe an application of language technology to policy formulation, where it can support policy makers assess the acceptance of a yet-unpublished policy before the policy enters public consultation. One of the key concepts is that instead of relying on thematic similarity, we extract arguments expressed in support or opposition of positions that are general statements that are, themselves, consistent with the policy or not. The focus of this paper in this overall pipeline, is identifying arguments in text: we present and empirically evaluate the hypothesis that verbal tense and mood are good indicators of arguments that have not been explored in the relevant literature." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are commonly used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Let @math and @math be two sets of materialized views and indexes, respectively, that are termed candidate because they are likely to reduce the execution cost of a given query set @math (generally assumed representative of the system workload). Let @math . Let @math be the storage space allotted by the data warehouse administrator for building objects (materialized views or indexes) from set @math . The joint materialized view and index selection problem consists of building an object configuration @math that minimizes the execution cost of @math under the storage space constraint. This NP-hard problem @cite_56 @cite_17 may be formalized as follows: @math ; @math , where @math is the disk space occupied by object @math .
{ "cite_N": [ "@cite_17", "@cite_56" ], "mid": [ "1623444038", "2132681112" ], "abstract": [ "Materialized view selection is a non-trivial task. Hence, its complexity must be reduced. A judicious choice of views must be cost-driven and influenced by the workload experienced by the system. In this paper, we propose a framework for materialized view selection that exploits a data mining technique (clustering), in order to determine clusters of similar queries. We also propose a view merging algorithm that builds a set of candidate views, as well as a greedy process for selecting a set of views to materialize. This selection is based on cost models that evaluate the cost of accessing data using views and the cost of storing these views. To validate our strategy, we executed a workload of decision-support queries on a test data warehouse, with and without using our strategy. Our experimental results demonstrate its efficiency, even when storage space is limited.", "A data warehouse uses multiple materialized views to efficiently process a given set of queries. These views are accessed by read-only queries and need to be maintained after updates to base tables. Due to the space constraint and maintenance cost constraint, the materialization of all views is not possible. Therefore, a subset of views needs to be selected to be materialized. The problem is NP-hard, therefore, exhaustive search is infeasible. In this paper, we design a View Relevance Driven Selection (VRDS) algorithm based on view relevance to select views. We take into consideration the query processing cost and the view maintenance cost. Our experimental results show that our heuristic aims to minimize the total processing cost, which is the sum of query processing cost and view maintenance cost. Finally, we compare our results against a popular greedy algorithm." ] }
Classical papers on materialized view selection introduce a lattice framework that models and captures the dependencies (ancestor or descendant) among aggregate views in a multidimensional context @cite_34 @cite_23 @cite_0 @cite_37 . This lattice is greedily browsed with the help of cost models to select the best views to materialize. The problem was first addressed for a single data cube and then extended to multiple cubes @cite_8 . Another theoretical framework, the AND-OR view graph, may also be used to capture the relationships between views @cite_36 @cite_57 @cite_9 @cite_39 . Unfortunately, the majority of these solutions are theoretical and are not truly scalable.
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_36", "@cite_9", "@cite_39", "@cite_0", "@cite_57", "@cite_23", "@cite_34" ], "mid": [ "1623444038", "2055686029", "2055899255", "2950700385", "2136027283", "2753652460", "2520925521", "2963735494", "2963192682" ], "abstract": [ "Materialized view selection is a non-trivial task. Hence, its complexity must be reduced. A judicious choice of views must be cost-driven and influenced by the workload experienced by the system. In this paper, we propose a framework for materialized view selection that exploits a data mining technique (clustering), in order to determine clusters of similar queries. We also propose a view merging algorithm that builds a set of candidate views, as well as a greedy process for selecting a set of views to materialize. This selection is based on cost models that evaluate the cost of accessing data using views and the cost of storing these views. To validate our strategy, we executed a workload of decision-support queries on a test data warehouse, with and without using our strategy. Our experimental results demonstrate its efficiency, even when storage space is limited.", "We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. 
For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts. This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds.", "Classical papers are of great help for beginners to get familiar with a new research area. However, digging them out is a difficult problem. This paper proposes Claper, a novel academic recommendation system based on two proven principles: the Principle of Download Persistence and the Principle of Citation Approaching (we prove them based on real-world datasets). The principle of download persistence indicates that classical papers have few decreasing download frequencies since they were published. The principle of citation approaching indicates that a paper which cites a classical paper is likely to cite citations of that classical paper. 
Our experimental results based on large-scale real-world datasets illustrate Claper can effectively recommend classical papers of high quality to beginners and thus help them enter their research areas.", "Topic models, such as Latent Dirichlet Allocation (LDA), posit that documents are drawn from admixtures of distributions over words, known as topics. The inference problem of recovering topics from admixtures, is NP-hard. Assuming separability, a strong assumption, [4] gave the first provable algorithm for inference. For LDA model, [6] gave a provable algorithm using tensor-methods. But [4,6] do not learn topic vectors with bounded @math error (a natural measure for probability vectors). Our aim is to develop a model which makes intuitive and empirically supported assumptions and to design an algorithm with natural, simple components such as SVD, which provably solves the inference problem for the model with bounded @math error. A topic in LDA and other models is essentially characterized by a group of co-occurring words. Motivated by this, we introduce topic specific Catchwords, group of words which occur with strictly greater frequency in a topic than any other topic individually and are required to have high frequency together rather than individually. A major contribution of the paper is to show that under this more realistic assumption, which is empirically verified on real corpora, a singular value decomposition (SVD) based algorithm with a crucial pre-processing step of thresholding, can provably recover the topics from a collection of documents drawn from Dominant admixtures. Dominant admixtures are convex combination of distributions in which one distribution has a significantly higher contribution than others. Apart from the simplicity of the algorithm, the sample complexity has near optimal dependence on @math , the lowest probability that a topic is dominant, and is better than [4]. 
Empirical evidence shows that on several real world corpora, both Catchwords and Dominant admixture assumptions hold and the proposed algorithm substantially outperforms the state of the art [5].", "This paper shows how to embed a similarity relation between complex descriptions in concept lattices. We formalize similarity by a tolerance relation: objects are grouped within a same concept when having similar descriptions, extending the ability of FCA to deal with complex data. We propose two different approaches. A first classical manner defines a discretization procedure. A second way consists in representing data by pattern structures, from which a pattern concept lattice can be constructed directly. In this case, considering a tolerance relation can be mathematically defined by a projection in a meet-semi-lattice. This allows to use concept lattices for their knowledge representation and reasoning abilities without transforming data. We show finally that resulting lattices are useful for solving information fusion problems.", "In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike the existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid by filling in the occluded missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Networks (GAN) framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. 
Extensive experiments on large synthetic datasets show that the proposed 3D-RecGAN significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects. Our code and data are available at: this https URL", "We propose a novel approach based on Formal Concept Analysis for Topic Detection.Our proposal overcomes traditional problems of the clustering and classification techniques.We analyse the parameters involved in the process in a Twitter-based framework.We propose a topic selection methodology based on the stability concept.We overcome the state-of-the-art results for the task. The Topic Detection Task in Twitter represents an indispensable step in the analysis of text corpora and their later application in Online Reputation Management. Classification, clustering and probabilistic techniques have been traditionally applied, but they have some well-known drawbacks such as the need to fix the number of topics to be detected or the problem of how to integrate the prior knowledge of topics with the detection of new ones. This motivates the current work, where we present a novel approach based on Formal Concept Analysis (FCA), a fully unsupervised methodology to group similar content together in thematically-based topics (i.e., the FCA formal concepts) and to organize them in the form of a concept lattice. Formal concepts are conceptual representations based on the relationships between tweet terms and the tweets that have given rise to them. It allows, in contrast to other approaches in the literature, their clear interpretability. In addition, the concept lattice represents a formalism that describes the data, explores correlations, similarities, anomalies and inconsistencies better than other representations such as clustering models or graph-based representations. 
Our rationale is that these theoretical advantages may improve the Topic Detection process, making them able to tackle the problems related to the task. To prove this point, our FCA-based proposal is evaluated in the context of a real-life Topic Detection task provided by the Replab 2013 CLEF Campaign. To demonstrate the efficiency of the proposal, we have carried out several experiments focused on testing: (a) the impact of terminology selection as an input to our algorithm, (b) the impact of concept selection as the outcome of our algorithm, and; (c) the efficiency of the proposal to detect new and previously unseen topics (i.e., topic adaptation). An extensive analysis of the results has been carried out, proving the suitability of our proposal to integrate previous knowledge of prior topics without losing the ability to detect novel and unseen topics as well as improving the best Replab 2013 results.", "In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike the existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid by filling in the occluded missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Networks (GAN) framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets show that the proposed 3D-RecGAN significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects. 
Our code and data are available at: https: github.com Yang7879 3D-RecGAN.", "A novel construction of lattices is proposed. This construction can be thought of as Construction A with codes that can be represented as the Cartesian product of L linear codes over Fp1 , . . . ,FpL , respectively; hence, is referred to as the product construction. The existence of a sequence of such lattices that are good for quantization and Poltyrev-good under multistage decoding is shown. This family of lattices is then used to generate a sequence of nested lattice codes which allows one to achieve the same computation rate of Nazer and Gastpar for computeand-forward under multistage decoding, which is referred to as lattice-based multistage compute-and-forward. Motivated by the proposed lattice codes, two families of signal constellations are then proposed for the separation-based compute-and-forward framework proposed by together with a multilevel coding multistage decoding scheme tailored specifically for these constellations. This scheme is termed separation-based multistage compute-and-forward and is shown having a complexity of the channel coding dominated by the greatest common divisor of the constellation size (may not be a prime number) instead of the constellation size itself." ] }
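The greedy browsing of the aggregation lattice mentioned above can be sketched as follows. This is a minimal illustration in the spirit of the classical lattice framework, not an implementation from any of the cited papers; the lattice, view sizes, and cost model are invented for the example (a query is assumed to cost the size of the smallest materialized view that can answer it):

```python
# Greedy view selection over a tiny aggregation lattice (illustrative).
# answers: view -> set of views (queries) it can answer, itself included.
answers = {
    "ABC": {"ABC", "AB", "A", "none"},
    "AB":  {"AB", "A", "none"},
    "A":   {"A", "none"},
    "none": {"none"},
}
size = {"ABC": 100, "AB": 50, "A": 20, "none": 1}  # rows per view (invented)

def query_cost(q, materialized):
    # Cheapest materialized view that can answer query q.
    return min(size[v] for v in materialized if q in answers[v])

def benefit(v, materialized):
    # Total cost reduction over all queries v can answer.
    return sum(max(0, query_cost(q, materialized) - size[v])
               for q in answers[v])

materialized = {"ABC"}            # the base cuboid is always available
for _ in range(2):                # greedily pick k = 2 additional views
    best = max((v for v in size if v not in materialized),
               key=lambda v: benefit(v, materialized))
    materialized.add(best)
print(sorted(materialized))       # → ['A', 'AB', 'ABC']
```

Each round re-evaluates benefits against the current configuration, which is what makes the browsing expensive on realistic lattices and motivates the scalability criticism above.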
A wavelet framework for adaptively representing multidimensional data cubes has also been proposed @cite_40 . This method decomposes data cubes into an indexed hierarchy of wavelet view elements that correspond to partial and residual aggregations of the data cubes. An algorithm greedily selects an inexpensive set of wavelet view elements that minimizes the average processing cost of data cube queries. In the same spirit, Sismanis et al. proposed the Dwarf structure, which compresses data cubes. Dwarf identifies prefix and suffix redundancies within cube cells and factors them out by coalescing their storage. Suppressing redundancy improves both the maintenance and interrogation costs of data cubes. These approaches are very interesting, but they mainly focus on computing efficient data cubes by changing their physical design.
{ "cite_N": [ "@cite_40" ], "mid": [ "2158924892" ], "abstract": [ "This article presents a method for adaptively representing multidimensional data cubes using wavelet view elements in order to more efficiently support data analysis and querying involving aggregations. The proposed method decomposes the data cubes into an indexed hierarchy of wavelet view elements. The view elements differ from traditional data cube cells in that they correspond to partial and residual aggregations of the data cube. The view elements provide highly granular building blocks for synthesizing the aggregated and range-aggregated views of the data cubes. We propose a strategy for selectively materializing alternative sets of view elements based on the patterns of access of views. We present a fast and optimal algorithm for selecting a non-expansive set of wavelet view elements that minimizes the average processing cost for supporting a population of queries of data cube views. We also present a greedy algorithm for allowing the selective materialization of a redundant set of view element sets which, for measured increases in storage capacity, further reduces processing costs. Experiments and analytic results show that the wavelet view element framework performs better in terms of lower processing and storage cost than previous methods that materialize and store redundant views for online analytical processing (OLAP)." ] }
Other approaches detect common sub-expressions within workload queries in the relational context @cite_22 @cite_11 @cite_48 . The view selection problem then consists of finding common sub-expressions, corresponding to intermediary results, that are suitable to materialize. However, browsing the search space is very costly, and these methods do not truly scale with respect to the number of queries.
{ "cite_N": [ "@cite_48", "@cite_22", "@cite_11" ], "mid": [ "1597931764", "1810232998", "2098388305" ], "abstract": [ "Recently, multi-query optimization techniques have been considered as beneficial in view selection setting. The main interest of such techniques relies in detecting common sub expressions between the different queries of workload. This feature can be exploited for sharing updates and space storage. However, due to the reuse a query change may entail an important reorganization of the multi query graph. In this paper, we present an approach that is based on multi-query optimization for view selection and that attempts to reduce the drawbacks resulting from these techniques. Finally, we present a performance study using workloads consisting of queries over the schema of the TPC-H benchmark. This study shows that our view selection provides significant benefits over the other approaches.", "This paper presents an innovative approach for the publication and discovery of Web services. The proposal is based on two previous works: DIRE (DIstributed REgistry), for the user-centered distributed replication of service-related information, and URBE (UDDI Registry By Example), for the semantic-aware match making between requests and available services. The integrated view also exploits USQL (Unified Service Query Language) to provide users with a higher level and homogeneous means to interact with the different registries. The proposal improves background technology in different ways: we integrate USQL as high-level language to state service requests, widen user notifications based on URBE semantic matching, and apply URBE match making to all the facets with which services can be described in DIRE. All these new concepts are demonstrated on a simple scenario.", "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. 
DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm." ] }
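Common sub-expression detection, as described above, can be illustrated at a very coarse grain by abstracting each query as the set of relations it joins and counting the join subsets shared by several queries. This is only a toy sketch; the cited approaches operate on full query plans, and the workload below is hypothetical:

```python
# Toy common sub-expression detection: any subset of joined relations that
# appears in at least two workload queries is a candidate intermediary
# result to materialize. Relation names are invented for illustration.
from itertools import combinations
from collections import Counter

workload = [
    {"sales", "customer", "time"},
    {"sales", "customer", "product"},
    {"sales", "time", "store"},
]

counts = Counter()
for q in workload:
    for r in range(2, len(q) + 1):          # subsets of at least two relations
        for sub in combinations(sorted(q), r):
            counts[sub] += 1

candidates = [sub for sub, n in counts.items() if n >= 2]
print(sorted(candidates))
# → [('customer', 'sales'), ('sales', 'time')]
```

Even this toy version enumerates exponentially many subsets per query, which hints at why sub-expression-based selection struggles to scale with workload size.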
Finally, the most recent approaches are workload-driven. They syntactically analyze a workload to enumerate relevant candidate views @cite_43 . By exploiting the system's query optimizer, they greedily build a configuration of the most pertinent views. A workload is indeed a good starting point for predicting future queries, because such queries are likely to be identical or syntactically close to previous workload queries. In addition, extracting candidate views from the workload ensures that the materialized views will probably be used when processing future queries.
{ "cite_N": [ "@cite_43" ], "mid": [ "2010149990" ], "abstract": [ "Current trends in data management systems, such as cloud and multi-tenant databases, are leading to data processing environments that concurrently execute heterogeneous query workloads. At the same time, these systems need to satisfy diverse performance expectations. In these newly-emerging settings, avoiding potential Quality-of-Service (QoS) violations heavily relies on performance predictability, i.e., the ability to estimate the impact of concurrent query execution on the performance of individual queries in a continuously evolving workload. This paper presents a modeling approach to estimate the impact of concurrency on query performance for analytical workloads. Our solution relies on the analysis of query behavior in isolation, pairwise query interactions and sampling techniques to predict resource contention under various query mixes and concurrency levels. We introduce a simple yet powerful metric that accurately captures the joint effects of disk and memory contention on query performance in a single value. We also discuss predicting the execution behavior of a time-varying query workload through query-interaction timelines, i.e., a fine-grained estimation of the time segments during which discrete mixes will be executed concurrently. Our experimental evaluation on top of PostgreSQL/TPC-H demonstrates that our models can provide query latency predictions within approximately 20% of the actual values in the average case." ] }
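The workload-driven approaches described above share a common skeleton: score each candidate view by an estimated benefit (ideally obtained from the query optimizer), then greedily build a configuration under a storage budget. A minimal sketch follows; the benefit-per-storage ratio and the toy numbers are assumptions for illustration, not the exact criterion of the cited work:

```python
# Hypothetical sketch of workload-driven greedy view selection.
# benefit[v] stands in for an optimizer-estimated cost saving,
# size[v] for the storage footprint of materializing view v.

def greedy_view_selection(candidates, benefit, size, budget):
    """Pick views one at a time, always taking the candidate with the
    best benefit-per-storage ratio that still fits in the budget."""
    config, used = [], 0
    remaining = set(candidates)
    while remaining:
        best = max(remaining, key=lambda v: benefit[v] / size[v])
        if used + size[best] > budget or benefit[best] <= 0:
            break
        config.append(best)
        used += size[best]
        remaining.remove(best)
    return config

# Toy example with made-up benefits (cost saved) and sizes (MB).
benefit = {"v1": 100, "v2": 60, "v3": 5}
size = {"v1": 50, "v2": 20, "v3": 40}
print(greedy_view_selection(["v1", "v2", "v3"], benefit, size, budget=75))
```

With these numbers, `v2` (ratio 3) is taken first, then `v1` (ratio 2); `v3` no longer fits in the budget, so the loop stops.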
The index selection problem has been studied for many years in databases @cite_53 @cite_49 @cite_43 @cite_1 @cite_50 @cite_45 @cite_58 . In the more specific context of data warehouses, existing studies fall into two families: algorithms that optimize maintenance cost @cite_6 and algorithms that optimize query response time @cite_24 @cite_18 @cite_33 . In both cases, optimization is performed under a storage space constraint. In this paper, we focus on the second family of solutions, which is the relevant one in our context. Studies in this category may be further categorized by how the set of candidate indexes @math and the final index configuration @math are built.
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_53", "@cite_1", "@cite_6", "@cite_24", "@cite_43", "@cite_45", "@cite_50", "@cite_49", "@cite_58" ], "mid": [ "2059326501", "2078524330", "2165481122", "1548134621", "2615162663", "1851390469", "2082729696", "2137077706", "2125203709", "2171438172", "2017733008" ], "abstract": [ "Abstract Index selection for relational databases is an important issue which has been researched quite extensively [1–5]. In the literature, in index selection algorithms for relational databases, at most one index is considered as a candidate for each attribute of a relation. However, it is possible that more than one different type of indexes with different storage space requirements may be present as candidates for an attribute. Also, it may not be possible to eliminate locally all but one of the candidate indexes for an attribute due to different benefits and storage space requirements associated with the candidates. Thus, the algorithms available in the literature for optimal index selection may not be used when there are multiple candidates for each attribute and there is a need for a global optimization algorithm in which at most one index can be selected from a set of candidate indexes for an attribute. The problem of index selection in the presence of multiple candidate indexes for each attribute (which we call the multiple choice index selection problem) has not been addressed in the literature. In this paper, we present the multiple choice index selection problem, show that it is NP-hard, and present an algorithm which gives an approximately optimal solution within a user specified error bound in a logarithmic time order.", "A problem of considerable interest in the design of a database is the selection of indexes. In this paper, we present a probabilistic model of transactions (queries, updates, insertions, and deletions) to a file. An evaluation function, which is based on the cost saving (in terms of the number of page accesses) attributable to the use of an index set, is then developed. The maximization of this function would yield an optimal set of indexes. Unfortunately, algorithms known to solve this maximization problem require an order of time exponential in the total number of attributes in the file. Consequently, we develop the theoretical basis which leads to an algorithm that obtains a near optimal solution to the index selection problem in polynomial time. The theoretical result consists of showing that the index selection problem can be solved by solving a properly chosen instance of the knapsack problem. A theoretical bound for the amount by which the solution obtained by this algorithm deviates from the true optimum is provided. This result is then interpreted in the light of evidence gathered through experiments.", "We study the index selection problem: Given a workload consisting of SQL statements on a database, and a user-specified storage constraint, recommend a set of indexes that have the maximum benefit for the given workload. We present a formal statement for this problem and show that it is computationally \"hard\" to solve or even approximate it. We develop a new algorithm for the problem which is based on treating the problem as a knapsack problem. The novelty of our approach lies in an LP (linear programming) based method that assigns benefits to individual indexes. For a slightly modified algorithm, that does more work, we prove that we can give instance specific guarantees about the quality of our solution. We conduct an extensive experimental evaluation of this new heuristic and compare it with previous solutions. Our results demonstrate that our solution is more scalable while achieving comparable quality.", "We critically evaluate the current state of research in multiple query optimization, synthesize the requirements for a modular optimizer, and propose an architecture. Our objective is to facilitate future research by providing modular subproblems and a good general-purpose data structure. In the context of this architecture, we provide an improved subsumption algorithm, and discuss migration paths from single-query to multiple-query optimizers. The architecture has three key ingredients. First, each type of work is performed at an appropriate level of abstraction. Second, a uniform and very compact representation stores all candidate strategies. Finally, search is handled as a discrete optimization problem separable from the query processing tasks. A multiple query optimizer (MQO) takes several queries as input and seeks to generate a good multi-strategy, an executable operator graph that simultaneously computes answers to all the queries. The idea is to save by evaluating common subexpressions only once. The commonalities to be exploited include identical selections and joins, predicates that subsume other predicates, and also costly physical operators such as relation scans and sorts. The multiple query optimization problem is to find a multi-strategy that minimizes the total cost (with overlap exploited). Figure 1.1 shows a multi-strategy generated exploiting commonalities among queries Q1-Q3 at both the logical and physical level. To be really satisfactory, a multi-query optimization algorithm must offer solution quality, efficiency, and ease of implementation. Solution quality requires that the optimizer identify many kinds of commonalities (e.g., by predicate splitting, sharing relation scans) and search effectively to choose a good combination of 1-strategies. Efficiency requires that the optimization avoid a combinatorial explosion of possibilities, and that within those it considers, redundant work on common subexpressions be minimized. Finally, ease of implementation is crucial: an algorithm will be practically useful only if it is conceptually simple, easy to attach to an optimizer, and requires relatively little additional software.", "Many real applications in real-time news stream advertising call for efficient processing of long queries against short text. In such applications, dynamic news feeds are regarded as queries to match against an advertisement (ad) database for retrieving the k most relevant ads. The existing approaches to keyword retrieval cannot work well in this search scenario when queries are triggered at a very high frequency. To address the problem, we introduce new techniques to significantly improve search performance. First, we devise a two-level partitioning for tight upper bound estimation and a lazy evaluation scheme to delay full evaluation of unpromising candidates, which can bring three to four times performance boosting in a database with 7 million ads. Second, we propose a novel rank-aware block-oriented inverted index to further improve performance. In this index scheme, each entry in an inverted list is assigned a rank according to its importance in the ad. Then, we introduce a block-at-a-time search strategy based on the index scheme to support a much tighter upper bound estimation and a very early termination. We have conducted experiments with real datasets, and the results show that the rank-aware method can further improve performance by an order of magnitude.", "In this paper we describe novel techniques that make it possible to build an industrial-strength tool for automating the choice of indexes in the physical design of a SQL database. The tool takes as input a workload of SQL queries, and suggests a set of suitable indexes. We ensure that the indexes chosen are effective in reducing the cost of the workload by keeping the index selection tool and the query optimizer \"in step\". The number of index sets that must be evaluated to find the optimal configuration is very large. We reduce the complexity of this problem using three techniques. First, we remove a large number of spurious indexes from consideration by taking into account both query syntax and cost information. Second, we introduce optimizations that make it possible to cheaply evaluate the “goodness” of an index set. Third, we describe an iterative approach to handle the complexity arising from multicolumn indexes. The tool has been implemented on Microsoft SQL Server 7.0. We performed extensive experiments over a range of workloads, including TPC-D. The results indicate that the tool is efficient and its choices are close to optimal.", "This paper reports on a novel technique for literature indexing and searching in a mechanized library system. The notion of relevance is taken as the key concept in the theory of information retrieval and a comparative concept of relevance is explicated in terms of the theory of probability. The resulting technique called “Probabilistic Indexing,” allows a computing machine, given a request for information, to make a statistical inference and derive a number (called the “relevance number”) for each document, which is a measure of the probability that the document will satisfy the given request. The result of a search is an ordered list of those documents which satisfy the request ranked according to their probable relevance. The paper goes on to show that whereas in a conventional library system the cross-referencing (“see” and “see also”) is based solely on the “semantical closeness” between index terms, statistical measures of closeness between index terms can be defined and computed. Thus, given an arbitrary request consisting of one (or many) index term(s), a machine can elaborate on it to increase the probability of selecting relevant documents that would not otherwise have been selected. Finally, the paper suggests an interpretation of the whole library problem as one where the request is considered as a clue on the basis of which the library system makes a concatenated statistical inference in order to provide as an output an ordered list of those documents which most probably satisfy the information needs of the user.", "We study indexing techniques for main memory, including hash indexes, binary search trees, T-trees, B+-trees, interpolation search, and binary search on arrays. In a decision-support context, our primary concerns are the lookup time, and the space occupied by the index structure. Our goal is to provide faster lookup times than binary search by paying attention to reference locality and cache behavior, without using substantial extra space. We propose a new indexing technique called \"Cache-Sensitive Search Trees\" (CSS-trees). Our technique stores a directory structure on top of a sorted array. Nodes in this directory have size matching the cache-line size of the machine. We store the directory in an array and do not store internal-node pointers; child nodes can be found by performing arithmetic on array offsets. We compare the algorithms based on their time and space requirements. We have implemented all of the techniques, and present a performance study on two popular modern machines.", "Query-processing costs on large text databases are dominated by the need to retrieve and scan the inverted list of each query term. Retrieval time for inverted lists can be greatly reduced by the use of compression, but this adds to the CPU time required. Here we show that the CPU component of query response time for conjunctive Boolean queries and for informal ranked queries can be similarly reduced, at little cost in terms of storage, by the inclusion of an internal index in each compressed inverted list. This method has been applied in a retrieval system for a collection of nearly two million short documents. Our experimental results show that the self-indexing strategy adds less than 20% to the size of the compressed inverted file, which itself occupies less than 10% of the indexed text, yet can reduce processing time for Boolean queries of 5-10 terms to under one fifth of the previous cost. Similarly, ranked queries of 40-50 terms can be evaluated in as little as 25% of the previous time, with little or no loss of retrieval effectiveness.", "Traditional query optimizers assume accurate knowledge of run-time parameters such as selectivities and resource availability during plan optimization, i.e., at compile time. In reality, however, this assumption is often not justified. Therefore, the “static” plans produced by traditional optimizers may not be optimal for many of their actual run-time invocations. Instead, we propose a novel optimization model that assigns the bulk of the optimization effort to compile-time and delays carefully selected optimization decisions until run-time. Our previous work defined the run-time primitives, “dynamic plans” using “choose-plan” operators, for executing such delayed decisions, but did not solve the problem of constructing dynamic plans at compile-time. The present paper introduces techniques that solve this problem. Experience with a working prototype optimizer demonstrates (i) that the additional optimization and start-up overhead of dynamic plans compared to static plans is dominated by their advantage at run-time, (ii) that dynamic plans are as robust as the “brute-force” remedy of run-time optimization, i.e., dynamic plans maintain their optimality even if parameters change between compile-time and run-time, and (iii) that the start-up overhead of dynamic plans is significantly less than the time required for complete optimization at run-time. In other words, our proposed techniques are superior to both techniques considered to-date, namely compile-time optimization into a single static plan as well as run-time optimization. Finally, we believe that the concepts and technology described can be transferred to commercial query optimizers in order to improve the performance of embedded queries with host variables in the query predicate and to adapt to run-time system loads unpredictable at compile time.", "Analytical queries defined on data warehouses are complex and use several join operations that are very costly, especially when run on very large data volumes. To improve response times, data warehouse administrators commonly use indexing techniques. This task is nevertheless complex and fastidious. In this paper, we present an automatic, dynamic index selection method for data warehouses that is based on incremental frequent itemset mining from a given query workload. The main advantage of this approach is that it helps update the set of selected indexes when workload evolves instead of recreating it from scratch. Preliminary experimental results illustrate the efficiency of this approach, both in terms of performance enhancement and overhead." ] }
Selecting a set of candidate indexes may be automatic or manual. Warehouse administrators may rely on their expertise to manually derive a set of candidate indexes from a given workload @cite_49 @cite_32 @cite_29 . Such a choice is however subjective. Moreover, the task becomes very hard when the number of queries is high. By contrast, candidate indexes can also be extracted automatically, through a syntactic analysis of the workload's queries @cite_14 @cite_1 @cite_33 . Such an analysis depends on the target DBMS, since each DBMS is queried through a specific syntax derived from the SQL standard.
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_29", "@cite_1", "@cite_32", "@cite_49" ], "mid": [ "2017733008", "2059326501", "2050272677", "1851390469", "2250701144", "2132729409" ], "abstract": [ "Analytical queries defined on data warehouses are complex and use several join operations that are very costly, especially when run on very large data volumes. To improve response times, data warehouse administrators commonly use indexing techniques. This task is nevertheless complex and fastidious. In this paper, we present an automatic, dynamic index selection method for data warehouses that is based on incremental frequent itemset mining from a given query workload. The main advantage of this approach is that it helps update the set of selected indexes when workload evolves instead of recreating it from scratch. Preliminary experimental results illustrate the efficiency of this approach, both in terms of performance enhancement and overhead.", "Abstract Index selection for relational databases is an important issue which has been researched quite extensively [1–5]. In the literature, in index selection algorithms for relational databases, at most one index is considered as a candidate for each attribute of a relation. However, it is possible that more than one different type of indexes with different storage space requirements may be present as candidates for an attribute. Also, it may not be possible to eliminate locally all but one of the candidate indexes for an attribute due to different benefits and storage space requirements associated with the candidates. Thus, the algorithms available in the literature for optimal index selection may not be used when there are multiple candidates for each attribute and there is a need for a global optimization algorithm in which at most one index can be selected from a set of candidate indexes for an attribute. The problem of index selection in the presence of multiple candidate indexes for each attribute (which we call the multiple choice index selection problem) has not been addressed in the literature. In this paper, we present the multiple choice index selection problem, show that it is NP-hard, and present an algorithm which gives an approximately optimal solution within a user specified error bound in a logarithmic time order.", "Considering the wide deployment of databases and its size, particularly in data warehouses, it is important to automate the physical design so that the task of the database administrator (DBA) is minimized. An important part of physical database design is index selection. An auto-index selection tool capable of analyzing large amounts of data and suggesting a good set of indexes for a database is the goal of auto-administration. Clustering is a data mining technique with broad appeal and usefulness in exploratory data analysis. This idea provides a motivation to apply clustering techniques to obtain good indexes for a workload in the database. We describe a technique for auto-indexing using clustering. The experiments conducted show that the proposed technique performs better than Microsoft SQL server index selection tool (IST) and can suggest indexes faster than Microsoft's IST.", "In this paper we describe novel techniques that make it possible to build an industrial-strength tool for automating the choice of indexes in the physical design of a SQL database. The tool takes as input a workload of SQL queries, and suggests a set of suitable indexes. We ensure that the indexes chosen are effective in reducing the cost of the workload by keeping the index selection tool and the query optimizer \"in step\". The number of index sets that must be evaluated to find the optimal configuration is very large. We reduce the complexity of this problem using three techniques. First, we remove a large number of spurious indexes from consideration by taking into account both query syntax and cost information. Second, we introduce optimizations that make it possible to cheaply evaluate the “goodness” of an index set. Third, we describe an iterative approach to handle the complexity arising from multicolumn indexes. The tool has been implemented on Microsoft SQL Server 7.0. We performed extensive experiments over a range of workloads, including TPC-D. The results indicate that the tool is efficient and its choices are close to optimal.", "In this paper, we define models for automatically translating a factoid question in natural language to an SQL query that retrieves the correct answer from a target relational database (DB). We exploit the DB structure to generate a set of candidate SQL queries, which we rerank with an SVM-ranker based on tree kernels. In particular, in the generation phase, we use (i) lexical dependencies in the question and (ii) the DB metadata, to build a set of plausible SELECT, WHERE and FROM clauses enriched with meaningful joins. We combine the clauses by means of rules and a heuristic weighting scheme, which allows for generating a ranked list of candidate SQL queries. This approach can be recursively applied to deal with complex questions, requiring nested SELECT instructions. Finally, we apply the reranker to reorder the list of question and SQL candidate pairs, whose members are represented as syntactic trees. The F1 of our model derived on standard benchmarks, 87% on the first question, is in line with the best models using external and expensive hand-crafted resources such as the question meaning interpretation. Moreover, our system shows a Recall of the correct answer of about 94% and 98% on the first 2 and 5 candidates, respectively. This is an interesting outcome considering that we only need pairs of questions and answers concerning a target DB (no SQL query is needed) to train our model.", "Data warehouses collect copies of information from remote sources into a single database. Since the remote data is cached at the warehouse, it appears as local relations to the users of the warehouse. To improve query response time, the warehouse administrator will often materialize views defined on the local relations to support common or complicated queries. Unfortunately, the requirement to keep the views consistent with the local relations creates additional overhead when the remote sources change. The warehouse is often kept only loosely consistent with the sources: it is periodically refreshed with changes sent from the source. When this happens, the warehouse is taken off-line until the local relations and materialized views can be updated. Clearly, the users would prefer as little down time as possible. Often the down time can be reduced by adding carefully selected materialized views or indexes to the physical schema. This paper studies how to select the sets of supporting views and of indexes to materialize to minimize the down time. We call this the view index selection (VIS) problem. We present an A* search based solution to the problem as well as rules of thumb. We also perform additional experiments to understand the space-time tradeoff as it applies to data warehouses." ] }
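The syntactic workload analysis described above can be illustrated with a minimal sketch that derives single-column candidate indexes from the attributes referenced in WHERE and GROUP BY clauses. The regular expressions and the toy workload are illustrative assumptions, not a full SQL parser and not the cited tools' actual extraction logic:

```python
import re

# Hypothetical sketch: attributes appearing in WHERE or GROUP BY clauses
# become candidate (single-column) indexes.

def candidate_indexes(workload):
    candidates = set()
    for query in workload:
        # Grab the WHERE clause text (up to GROUP BY / ORDER BY or end).
        for clause in re.findall(r"WHERE (.+?)(?:GROUP BY|ORDER BY|$)",
                                 query, flags=re.I | re.S):
            # Keep tokens that look like table.column references.
            candidates |= set(re.findall(r"\b(\w+\.\w+)\b", clause))
        # Same for GROUP BY attributes.
        for clause in re.findall(r"GROUP BY (.+?)(?:ORDER BY|$)",
                                 query, flags=re.I | re.S):
            candidates |= set(re.findall(r"\b(\w+\.\w+)\b", clause))
    return candidates

# Toy workload of one star-join query.
workload = [
    "SELECT s.city, SUM(f.amount) FROM sales f, stores s "
    "WHERE f.store_id = s.id GROUP BY s.city",
]
print(sorted(candidate_indexes(workload)))
```

On this toy query, the join predicate contributes `f.store_id` and `s.id`, and the GROUP BY clause contributes `s.city`. A real tool would of course use the DBMS's own parser, which is exactly why the text notes that the analysis is DBMS-dependent.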
Ascending greedy methods start from an empty set of candidate indexes @cite_20 @cite_49 @cite_29 @cite_14 . They iteratively add the index that most reduces workload cost, and stop as soon as the cost ceases to decrease. Conversely, descending greedy methods start from the whole set of candidate indexes. At each iteration, some indexes are pruned @cite_20 @cite_32 . If the workload cost before pruning is greater (respectively, lower) than the workload cost after pruning, the pruned indexes were useless (respectively, useful) for reducing cost. The pruning process stops when cost increases after pruning.
{ "cite_N": [ "@cite_14", "@cite_29", "@cite_32", "@cite_49", "@cite_20" ], "mid": [ "2006997130", "2435989427", "2153107031", "2963287528", "2114413837" ], "abstract": [ "We introduce static index pruning methods that significantly reduce the index size in information retrieval systems. We investigate uniform and term-based methods that each remove selected entries from the index and yet have only a minor effect on retrieval results. In uniform pruning, there is a fixed cutoff threshold, and all index entries whose contribution to relevance scores is bounded above by a given threshold are removed from the index. In term-based pruning, the cutoff threshold is determined for each term, and thus may vary from term to term. We give experimental evidence that for each level of compression, term-based pruning outperforms uniform pruning, under various measures of precision. We present theoretical and experimental evidence that under our term-based pruning scheme, it is possible to prune the index greatly and still get retrieval results that are almost as good as those based on the full index.", "We propose to prune a random forest (RF) for resource-constrained prediction. We first construct a RF and then prune it to optimize expected feature cost & accuracy. We pose pruning RFs as a novel 0-1 integer program with linear constraints that encourages feature re-use. We establish total unimodularity of the constraint set to prove that the corresponding LP relaxation solves the original integer program. We then exploit connections to combinatorial optimization and develop an efficient primal-dual algorithm, scalable to large datasets. In contrast to our bottom-up approach, which benefits from good RF initialization, conventional methods are top-down, acquiring features based on their utility value, and are generally intractable, requiring heuristics. Empirically, our pruning algorithm outperforms existing state-of-the-art resource-constrained algorithms.", "We show that if performance measures in a stochastic scheduling problem satisfy a set of so-called partial conservation laws (PCL), which extend previously studied generalized conservation laws (GCL), then the problem is solved optimally by a priority-index policy for an appropriate range of linear performance objectives, where the optimal indices are computed by a one-pass adaptive-greedy algorithm, based on Klimov's. We further apply this framework to investigate the indexability property of restless bandits introduced by Whittle, obtaining the following results: (1) we identify a class of restless bandits (PCL-indexable) which are indexable; membership in this class is tested through a single run of the adaptive-greedy algorithm, which also computes the Whittle indices when the test is positive; this provides a tractable sufficient condition for indexability; (2) we further identify the class of GCL-indexable bandits, which includes classical bandits, having the property that they are indexable under any linear reward objective. The analysis is based on the so-called achievable region method, as the results follow from new linear programming formulations for the problems investigated.", "We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102) relying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.", "Greedy search is commonly used in an attempt to generate solutions quickly at the expense of completeness and optimality. In this work, we consider learning sets of weighted action-selection rules for guiding greedy search with application to automated planning. We make two primary contributions over prior work on learning for greedy search. First, we introduce weighted sets of action-selection rules as a new form of control knowledge for greedy search. Prior work has shown the utility of action-selection rules for greedy search, but has treated the rules as hard constraints, resulting in brittleness. Our weighted rule sets allow multiple rules to vote, helping to improve robustness to noisy rules. Second, we give a new iterative learning algorithm for learning weighted rule sets based on RankBoost, an efficient boosting algorithm for ranking. Each iteration considers the actual performance of the current rule set and directs learning based on the observed search errors. This is in contrast to most prior approaches, which learn control knowledge independently of the search process. Our empirical results have shown significant promise for this approach in a number of domains." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are commonly used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Genetic algorithms are commonly used to solve optimization problems. They have been adapted to the index selection problem @cite_45 . The initial population is a set of input indexes (each index is treated as an individual). The objective function to optimize is the workload cost corresponding to an index configuration. The combinatorial construction of an index configuration is realized through the crossover, mutation and selection genetic operators. Finally, the index selection problem has also been formulated in several studies as a knapsack problem @cite_44 @cite_21 @cite_1 @cite_50 , where indexes are objects, index storage costs represent object weights, workload cost is the benefit function, and storage space is the knapsack size.
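The knapsack formulation described above can be sketched with a standard 0/1 knapsack dynamic program. The candidate indexes, storage costs, benefits and capacity below are illustrative placeholders, not values from the cited studies:

```python
def select_indexes(candidates, capacity):
    """0/1 knapsack: choose the index subset with the highest total
    workload-cost saving under a storage-space budget.

    candidates: list of (name, storage_cost, benefit) tuples, with
                integer storage costs (e.g. in disk pages).
    capacity:   available storage space, in the same unit.
    Returns (total_benefit, chosen_names).
    """
    # best[c] = (best total benefit, chosen names) using at most c pages
    best = [(0, [])] * (capacity + 1)
    for name, cost, benefit in candidates:
        # iterate capacities downwards so each index is used at most once
        for c in range(capacity, cost - 1, -1):
            cand = (best[c - cost][0] + benefit, best[c - cost][1] + [name])
            if cand[0] > best[c][0]:
                best[c] = cand
    return best[capacity]

# Hypothetical candidates: (name, storage pages, saved page accesses)
candidates = [("idx_a", 4, 40), ("idx_b", 3, 30), ("idx_c", 5, 45)]
print(select_indexes(candidates, 8))  # -> (75, ['idx_b', 'idx_c'])
```

The exact DP is only tractable because storage costs are integers; the cited studies rely on near-optimal heuristics for the same reason the exponential exact search is avoided.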
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_44", "@cite_45", "@cite_50" ], "mid": [ "2078524330", "2017565019", "2755583598", "2109865546", "2164792208" ], "abstract": [ "A problem of considerable interest in the design of a database is the selection of indexes. In this paper, we present a probabilistic model of transactions (queries, updates, insertions, and deletions) to a file. An evaluation function, which is based on the cost saving (in terms of the number of page accesses) attributable to the use of an index set, is then developed. The maximization of this function would yield an optimal set of indexes. Unfortunately, algorithms known to solve this maximization problem require an order of time exponential in the total number of attributes in the file. Consequently, we develop the theoretical basis which leads to an algorithm that obtains a near optimal solution to the index selection problem in polynomial time. The theoretical result consists of showing that the index selection problem can be solved by solving a properly chosen instance of the knapsack problem. A theoretical bound for the amount by which the solution obtained by this algorithm deviates from the true optimum is provided. This result is then interpreted in the light of evidence gathered through experiments.", "This paper considers a new variant of the two-dimensional bin packing problem where each rectangle is assigned a due date and each bin has a fixed processing time. Hence the objective is not only to minimize the number of bins, but also to minimize the maximum lateness of the rectangles. This problem is motivated by the cutting of stock sheets and the potential increased efficiency that might be gained by drawing on a larger pool of demand pieces by mixing orders, while also aiming to ensure a certain level of customer service. 
We propose a genetic algorithm for searching the solution space, which uses a new placement heuristic for decoding the gene based on the best fit heuristic designed for the strip packing problems. The genetic algorithm employs an innovative crossover operator that considers several different children from each pair of parents. Further, the dual objective is optimized hierarchically with the primary objective periodically alternating between maximum lateness and number of bins. As a result, the approach produces several non-dominated solutions with different trade-offs. Two further approaches are implemented. One is based on a previous Unified Tabu Search, suitably modified to tackle this revised problem. The other is randomized descent and serves as a benchmark for comparing the results. Comprehensive computational results are presented, which show that the Unified Tabu Search still works well in minimizing the bins, but the genetic algorithm performs slightly better. When also considering maximum lateness, the genetic algorithm is considerably better.", "This paper presents algorithm for optimal reconfiguration of distribution networks using hybrid heuristic genetic algorithm. Improvements introduced in this approach make it suitable for real-life networks with realistic degree of complexity and network size. The algorithm introduces several improvements related to the generation of initial set of possible solutions as well as crossover and mutation steps in genetic algorithm. Since the genetic algorithms are often used in distribution network reconfiguration problem, its application is well known, but most of the approaches have very poor effectiveness due to high level of individuals' rejections not-fulfilling radial network constraints requirements and poor convergence rate. One part of these problems is related to ineffective creation of initial population individuals. 
The other part of the problem in similar approaches is related to inefficient operators implemented in the crossover and mutation process over the created set of population individuals. The hybrid heuristic-genetic approach presented in this paper provides significant improvements in these areas. The presented algorithm can be used to find optimal radial distribution network topology with minimum network losses or with optimally balanced network loading. The algorithm is tested on a real-size network of the city of Dubrovnik to identify the optimal network topology after the interpolation (connection) of a new supply point.
High performance of our algorithm is demonstrated by applying it to multi objective flowshop scheduling problems.", "We consider situations in which a decision-maker with a fixed budget faces a sequence of options, each with a cost and a value, and must select a subset of them online so as to maximize the total value. Such situations arise in many contexts, e.g., hiring workers, scheduling jobs, and bidding in sponsored search auctions. This problem, often called the online knapsack problem, is known to be inapproximable. Therefore, we make the enabling assumption that elements arrive in a randomorder. Hence our problem can be thought of as a weighted version of the classical secretary problem, which we call the knapsack secretary problem. Using the random-order assumption, we design a constant-competitive algorithm for arbitrary weights and values, as well as a e-competitive algorithm for the special case when all weights are equal (i.e., the multiple-choice secretary problem). In contrast to previous work on online knapsack problems, we do not assume any knowledge regarding the distribution of weights and values beyond the fact that the order is random." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are commonly used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Another approach determines a trade-off between the storage space allotted to indexes and to materialized views, depending on query definitions @cite_48 . According to the authors, the key factors in query optimization are the aggregation level, defined by the attribute list of the Group By clause in SQL queries, and the selectivity of the attributes present in the Where and Having clauses. View materialization indeed provides a great benefit for queries involving coarse-granularity aggregations (few attributes in the Group By clause), because they produce few groups from a large number of tuples. On the other hand, indexes provide their best benefit for queries containing high-selectivity attributes. Thus, queries with fine aggregations and high selectivity favor indexing, while queries with coarse aggregations and weak selectivity favor view materialization.
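The arbitration rule described above can be caricatured as a small decision function. The thresholds below are arbitrary illustrations, not values from @cite_48 , where the decision is driven by actual cost models:

```python
def recommend_structure(num_groupby_attrs, selectivity):
    """Rule-of-thumb arbitration between indexing and view
    materialization for a single query.

    num_groupby_attrs: size of the Group By attribute list
                       (small list -> coarse aggregation).
    selectivity: fraction of tuples retained by the Where/Having
                 predicates (small fraction -> highly selective).
    Thresholds (3 attributes, 1% of tuples) are illustrative only.
    """
    fine_aggregation = num_groupby_attrs > 3
    high_selectivity = selectivity < 0.01
    if fine_aggregation and high_selectivity:
        return "index"              # few tuples spread over many groups
    if not fine_aggregation and not high_selectivity:
        return "materialized view"  # coarse aggregation over many tuples
    return "either"                 # mixed signals: fall back to cost models
```

In practice the two knobs interact with storage budgets, so a rule like this would only pre-filter candidates before the cost-model comparison.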
{ "cite_N": [ "@cite_48" ], "mid": [ "1496130826" ], "abstract": [ "View materialization and indexing are the most effective techniques adopted in data warehouses to improve query performance. Since both materialization and indexing algorithms are driven by a constraint on the disk space made available for each, the designer would greatly benefit from being enabled to determine a priori which fractions of the global space available must be devoted to views and indexes, respectively, in order to optimally tune performances. In this paper we first present a comparative evaluation of the benefit (saving per disk page) brought by view materialization and indexing for a single query expressed on a star scheme. Then, we face the problem of determining an effective trade-off between the two space fractions for the core workload of the warehouse. Some experimental results are reported, which prove that the estimated trade-off is satisfactorily near to the optimal one." ] }
0707.1913
2951035381
Collaborative work on unstructured or semi-structured documents, such as in literature corpora or source code, often involves agreed upon templates containing metadata. These templates are not consistent across users and over time. Rule-based parsing of these templates is expensive to maintain and tends to fail as new documents are added. Statistical techniques based on frequent occurrences have the potential to identify automatically a large fraction of the templates, thus reducing the burden on the programmers. We investigate the case of the Project Gutenberg corpus, where most documents are in ASCII format with preambles and epilogues that are often copied and pasted or manually typed. We show that a statistical approach can solve most cases though some documents require knowledge of English. We also survey various technical solutions that make our approach applicable to large data sets.
The algorithmics of finding frequent items or patterns has received much attention. For a survey of stream-based algorithms, see Cormode and Muthukrishnan corm:whats-hot [p. 253]. Finding frequent patterns robustly is possible using gap constraints @cite_29 .
{ "cite_N": [ "@cite_29" ], "mid": [ "2136593687" ], "abstract": [ "As data mining techniques are being increasingly applied to non-traditional domains, existing approaches for finding frequent itemsets cannot be used as they cannot model the requirement of these domains. An alternate way of modeling the objects in these data sets is to use graphs. Within that model, the problem of finding frequent patterns becomes that of discovering subgraphs that occur frequently over the entire set of graphs.The authors present a computationally efficient algorithm for finding all frequent subgraphs in large graph databases. We evaluated the performance of the algorithm by experiments with synthetic datasets as well as a chemical compound dataset. The empirical results show that our algorithm scales linearly with the number of input transactions and it is able to discover frequent subgraphs from a set of graph transactions reasonably fast, even though we have to deal with computationally hard problems such as canonical labeling of graphs and subgraph isomorphism which are not necessary for traditional frequent itemset discovery." ] }
0707.1954
1483340443
Wireless sensor networks are often used for environmental monitoring applications. In this context, sampling and reconstruction of a physical field is one of the most important problems to solve. We focus on a bandlimited field and find under which conditions on the network topology the reconstruction of the field is successful, with a given probability. We review irregular sampling theory, and analyze the problem using random matrix theory. We show that even a very irregular spatial distribution of sensors may lead to a successful signal reconstruction, provided that the number of collected samples is large enough with respect to the field bandwidth. Furthermore, we give the basis to analytically determine the probability of successful field reconstruction.
Few papers have addressed the problem of sampling and reconstruction in sensor networks. Efficient techniques for spatial sampling in sensor networks are proposed in @cite_2 @cite_4 . In particular, @cite_2 presents an algorithm to determine which sensor subsets should be selected to acquire data from an area of interest and which nodes should remain inactive to save energy. The algorithm chooses sensors in such a way that the node positions can be mapped into a blue-noise binary pattern. In @cite_4 , an adaptive sampling scheme is described, which allows the central data collector to vary the number of active sensors, i.e., samples, according to the desired resolution level. Data acquisition is also studied in @cite_10 , where the authors consider a unidimensional field, uniformly sampled at the Nyquist frequency by low-precision sensors. The authors show that the number of sensors (i.e., samples) can be traded off against the precision of the sensors. The problem of reconstructing a bandlimited signal from an irregular set of samples at unknown locations is addressed in @cite_3 . There, different solution methods are proposed, and the conditions under which there exist multiple solutions or a unique solution are discussed.
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2114129195", "2133489575", "2571527823", "2953338282" ], "abstract": [ "We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multi-coset sampling. To date, recovery methods for this sampling strategy ensure perfect reconstruction either when the band locations are known, or under strict restrictions on the possible spectral supports. In this paper, only the number of bands and their widths are assumed without any other limitations on the support. We describe how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples. To recover the signal, the continuous reconstruction is replaced by a single finite-dimensional problem without the need for discretization. The resulting problem is studied within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate, and provides a first systematic study of compressed sensing in a truly analog setting. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.", "Distributed sampling and reconstruction of a physical field using an array of sensors is a problem of considerable interest in environmental monitoring applications of sensor networks. Our recent work has focused on the sampling of bandlimited sensor fields. However, sensor fields are not perfectly bandlimited but typically have rapidly decaying spectra. 
In a classical sampling set-up it is possible to precede the A/D sampling operation with an appropriate analog anti-aliasing filter. However, in the case of sensor networks, this is infeasible since sampling must precede filtering. We show that even though the effects of aliasing on the reconstruction cannot be prevented due to the \"filter-less\" sampling constraint, they can be suitably controlled by oversampling and carefully reconstructing the field from the samples. We show using a dither-based scheme that it is possible to estimate non-bandlimited fields with a precision that depends on how fast the spectral content of the field decays. We develop a framework for analyzing non-bandlimited fields that leads to upper bounds on the maximum pointwise error for a spatial bit rate of R bits/meter. We present results for fields with exponentially decaying spectra as an illustration. In particular, we show that for fields f(t) with exponential tails, i.e., F(ω) < π α^(-α|ω|), the maximum pointwise error decays as c_2 e^(-a_1 √R) + c_3 (1/√R) e^(-2a_1 √R) with spatial bit rate R bits/meter. Finally, we show that for fields with spectra that have a finite second moment, the distortion decreases as O((1/N)^(2/3)) as the density of sensors, N, scales up to infinity. We show that if D is the targeted non-zero distortion, then the required (finite) rate R scales as O((1/√D) log(1/D)).", "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala [30], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates.
We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Renyi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is with high probability successful from d(pX) n+o(n) measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension n and k(n) nonzero entries, this implies reconstruction from k(n)+o(n) measurements. For “discrete” signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal pX.", "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by KrzakalaEtAl, message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) Rényi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix.
For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For 'discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math ." ] }
0707.1954
1483340443
Wireless sensor networks are often used for environmental monitoring applications. In this context, sampling and reconstruction of a physical field is one of the most important problems to solve. We focus on a bandlimited field and find under which conditions on the network topology the reconstruction of the field is successful, with a given probability. We review irregular sampling theory, and analyze the problem using random matrix theory. We show that even a very irregular spatial distribution of sensors may lead to a successful signal reconstruction, provided that the number of collected samples is large enough with respect to the field bandwidth. Furthermore, we give the basis to analytically determine the probability of successful field reconstruction.
Note that our work significantly differs from the studies above, because we assume that the sensor locations are known (or can be determined @cite_11 @cite_6 @cite_0 ) and that the sensor precision is high enough for the quantization error to be negligible. The question we pose is instead under which conditions (on the network system) the reconstruction of a bandlimited signal is successful with a given probability.
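To illustrate the irregular-sampling problem discussed above: a bandlimited (trigonometric-polynomial) field with 2M+1 unknown coefficients can be recovered exactly from 2M+1 samples taken at distinct, irregularly placed sensors by inverting the sampling matrix; how well conditioned that matrix is depends on the sensor layout, which is what the random-matrix analysis studies. Below is a minimal pure-Python sketch for M = 1; the sample positions and field coefficients are illustrative:

```python
import math

def reconstruct_bandlimited(samples):
    """Recover (a0, a1, b1) of f(t) = a0 + a1*cos(2*pi*t) + b1*sin(2*pi*t)
    from three samples (t_i, f(t_i)) at distinct, possibly irregular
    positions in [0, 1), by solving the 3x3 sampling system (Cramer's rule)."""
    rows = [[1.0, math.cos(2 * math.pi * t), math.sin(2 * math.pi * t)]
            for t, _ in samples]
    y = [v for _, v in samples]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(rows)  # a near-zero determinant means an ill-conditioned layout
    coeffs = []
    for col in range(3):
        m = [row[:] for row in rows]
        for i in range(3):
            m[i][col] = y[i]
        coeffs.append(det3(m) / d)
    return coeffs

# Field with known coefficients, sampled at three irregular positions
def field(t):
    return 0.5 + 2.0 * math.cos(2 * math.pi * t) - 1.0 * math.sin(2 * math.pi * t)

pts = [0.07, 0.31, 0.64]
a0, a1, b1 = reconstruct_bandlimited([(t, field(t)) for t in pts])
print(round(a0, 6), round(a1, 6), round(b1, 6))  # -> 0.5 2.0 -1.0
```

For distinct positions the sampling matrix is always invertible in this tiny case, but as positions cluster the determinant shrinks and noise is amplified, which is the finite-dimensional analogue of the conditioning question studied in the paper.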
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_11" ], "mid": [ "2571527823", "2953338282", "2053334136" ], "abstract": [ "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala [30], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Renyi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is with high probability successful from d(pX) n+o(n) measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension n and k(n) nonzero entries, this implies reconstruction from k(n)+o(n) measurements. For “discrete” signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal pX.", "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by KrzakalaEtAl , message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. 
We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) Rényi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For 'discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math .", "We consider wireless sensor networks whose nodes are randomly deployed and, thus, provide an irregular sampling of the sensed field. The field is assumed to be bandlimited; a sink node collects the data gathered by the sensors and reconstructs the field by using a technique based on linear filtering. By taking the mean square error (MSE) as performance metric, we evaluate the effect of quasi-equally spaced sensor layouts on the quality of the reconstructed signal. The MSE is derived through asymptotic analysis for different sensor spatial distributions, and for two of them we are able to obtain an approximate closed form expression. The case of uniformly distributed sensors is also considered for the sake of comparison. The validity of our asymptotic analysis is shown by comparison against numerical results and it is proven to hold even for a small number of nodes. Finally, with the help of a simple example, we show the key role that our results play in the deployment of sensor networks." ] }
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as, anonymity, censoring, traceability, and confidentiality.
Propelled by the increasing interest in next-generation Internet architectures and, in particular, Content-Oriented Networking (CON), the research community has produced a large body of work dealing with CON building blocks @cite_27 @cite_3 @cite_33 @cite_44 @cite_24 , performance @cite_78 @cite_49 @cite_72 @cite_82 , and scalability @cite_14 @cite_45 . However, the quest to analyze and enhance security in CON is only beginning -- in particular, very little work has focused on privacy and anonymity. In this section, we review relevant prior work.
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_78", "@cite_3", "@cite_44", "@cite_24", "@cite_27", "@cite_72", "@cite_45", "@cite_49", "@cite_82" ], "mid": [ "2337767373", "2018793106", "1535296432", "2120514843", "2903785932", "2886568791", "2590898937", "202035697", "2119486295", "2091803449", "2083961700" ], "abstract": [ "With the growing realization that current Internet protocols are reaching the limits of their senescence, several ongoing research efforts aim to design potential next-generation Internet architectures. Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denialof-Service (DoS) attacks that plague today’s Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in Named Data Networking (NDN) – a specific candidate for next-generation Internet architecture designs. By naming data instead of its locations, NDN transforms data into a first-class entity and makes itself an attractive and viable approach to meet the needs for many current and emerging applications. It also incorporates some basic security features that mitigate classes of attacks that are commonly seen today. However, NDN’s resilience to DoS attacks has not been analyzed to-date. This paper represents a first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. This paper also sheds some light on the debate about relative virtues of self-certifying, as opposed to human-readable, names in the context of content-centric networking.", "With the growing realization that current Internet protocols are reaching the limits of their senescence, several on-going research efforts aim to design potential next-generation Internet architectures. 
Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denial-of-Service (DoS) attacks that plague today's Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in Named Data Networking (NDN) - a specific candidate for next-generation Internet architecture designs. By naming data instead of its locations, NDN transforms data into a first-class entity and makes itself an attractive and viable approach to meet the needs for many current and emerging applications. It also incorporates some basic security features that mitigate classes of attacks that are commonly seen today. However, NDN's resilience to DoS attacks has not been analyzed to-date. This paper represents a first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. This paper also sheds some light on the debate about relative virtues of self-certifying, as opposed to human-readable, names in the context of content-centric networking.", "Content-centric networking proposals, as Parc's CCN, have recently emerged to define new network architectures where content, and not its location, becomes the core of the communication model. These new paradigms push data storage and delivery at network layer and are designed to better deal with current Internet usage, mainly centered around content dissemination and retrieval. In this paper, we develop an analytical model of CCN in-network storage and receiver-driven transport, that more generally applies to a class of content ori ented networks identified by chunk-based communication. We derive a closed-form expression for the mean stationary throughput as a function of hit miss probabilities at the caches along the path, of content popularity and of content cache size. 
Our analytical results, supported by chunk level simulations, can be used to analyze fundamental trade-offs in current CCN architecture, and provide an essential building block for the design and evaluation of enhanced CCN protocols.", "Research on performance, robustness, and evolution of the global Internet is fundamentally handicapped without accurate and thorough knowledge of the nature and structure of the contractual relationships between Autonomous Systems (ASs). In this work we introduce novel heuristics for inferring AS relationships. Our heuristics improve upon previous works in several technical aspects, which we outline in detail and demonstrate with several examples. Seeking to increase the value and reliability of our inference results, we then focus on validation of inferred AS relationships. We perform a survey with ASs' network administrators to collect information on the actual connectivity and policies of the surveyed ASs. Based on the survey results, we find that our new AS relationship inference techniques achieve high levels of accuracy: we correctly infer 96.5% customer to provider (c2p), 82.8% peer to peer (p2p), and 90.3% sibling to sibling (s2s) relationships. We then cross-compare the reported AS connectivity with the AS connectivity data contained in BGP tables. We find that BGP tables miss up to 86.2% of the true adjacencies of the surveyed ASs. The majority of the missing links are of the p2p type, which highlights the limitations of present measuring techniques to capture links of this type. Finally, to make our results easily accessible and practically useful for the community, we open an AS relationship repository where we archive, on a weekly basis, and make publicly available the complete Internet AS-level topology annotated with AS relationship information for every pair of AS neighbors.", "Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them.
This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10%. Code is available at this https URL.", "The current internet architecture is inefficient in fulfilling the demands of newly emerging internet applications. To address this issue, several over-the-top application-level solutions have been employed, making the overall architecture very complex. Information-centric-networking (ICN) architecture has emerged as a promising alternative solution. The ICN architecture decouples the content from the host at the network level and supports the temporary storage of content in an in-network cache. Fundamentally, the ICN can be considered a multisource, multicast content-delivery solution. Because of the benefits of network coding in multicasting scenarios and proven benefits in distributed storage networks, the network coding is apt for the ICN architecture. In this study, we propose a solvable linear network-coding scheme for the ICN architecture.
We also propose a practical implementation of the network-coding scheme for the ICN, particularly for the content-centric network (CCN) architecture, which is termed the coded CCN. The performance results show that the network-coding scheme improves the performance of the CCN and significantly reduces the network traffic and average download delay.", "The fast-growing Internet traffic is increasingly becoming content-based and driven by mobile users, with users more interested in data rather than its source. This has precipitated the need for an information-centric Internet architecture. Research in information-centric networks (ICNs) have resulted in novel architectures, e.g., CCN NDN, DONA, and PSIRP PURSUIT; all agree on named data based addressing and pervasive caching as integral design components. With network-wide content caching, enforcement of content access control policies become non-trivial. Each caching node in the network needs to enforce access control policies with the help of the content provider. This becomes inefficient and prone to unbounded latencies especially during provider outages. In this paper, we propose an efficient access control framework for ICN, which allows legitimate users to access and use the cached content directly, and does not require verification authentication by an online provider authentication server or the content serving router. This framework would help reduce the impact of system down-time from server outages and reduce delivery latency by leveraging caching while guaranteeing access only to legitimate users. Experimental simulation results demonstrate the suitability of this scheme for all users, but particularly for mobile users, especially in terms of the security and latency overheads.", "The ubiquity of the Internet has led to increased resource sharing between large numbers of users in widely-disparate administrative domains. 
Unfortunately, traditional identity-based solutions to the authorization problem do not allow for the dynamic establishment of trust, and thus cannot be used to facilitate interactions between previously-unacquainted parties. Furthermore, the management of identity-based systems becomes burdensome as the number of users in the system increases. To address this gap between the needs of open computing systems and existing authorization infrastructures, researchers have begun to investigate novel attribute-based access control (ABAC) systems based on techniques such as trust negotiation and other forms of distributed proving. To date, research in these areas has been largely theoretical and has produced many important foundational results. However, if these techniques are to be safely deployed in practice, the systems-level barriers hindering their adoption must be overcome. In this thesis, we show that safely and securely adopting decentralized ABAC approaches to authorization is not simply a matter of implementation and deployment, but requires careful consideration of both formal properties and practical issues. To this end, we investigate a progression of important questions regarding the safety analysis, deployment, implementation, and optimization of these types of systems. We first show that existing ABAC theory does not properly account for the asynchronous nature of open systems, which allows attackers to subvert these systems by forcing decisions to be made using inconsistent system states. To address this, we develop provably-secure and lightweight consistency enforcement mechanisms suitable for use in trust negotiation and distributed proof systems. We next focus on deployment issues, and investigate how user interactions can be audited in the absence of concrete user identities. We develop the technique of virtual fingerprinting, which can be used to accomplish this task without adversely affecting the scalability of audit systems. 
Lastly, we present TrustBuilder2, which is the first fully-configurable framework for trust negotiation. Within this framework, we examine availability problems associated with the trust negotiation process and develop a novel approach to policy compliance checking that leverages an efficient pattern-matching approach to outperform existing techniques by orders of magnitude.", "The advent of the Internet has made the transmission of personally identifiable information more common and often unintended by the user. As personal information becomes more accessible, individuals worry that businesses misuse the information that is collected while they are online. Organizations have tried to mitigate this concern in two ways: (1) by offering privacy policies regarding the handling and use of personal information and (2) by offering benefits such as financial gains or convenience. In this paper, we interpret these actions in the context of the information-processing theory of motivation. Information-processing theories, also known as expectancy theories in the context of motivated behavior, are built on the premise that people process information about behavior-outcome relationships. By doing so, they are forming expectations and making decisions about what behavior to choose. Using an experimental setting, we empirically validate predictions that the means to mitigate privacy concerns are associated with positive valences resulting in an increase in motivational score. In a conjoint analysis exercise, 268 participants from the United States and Singapore face trade-off situations, where an organization may only offer incomplete privacy protection or some benefits. While privacy protections (against secondary use, improper access, and error) are associated with positive valences, we also find that financial gains and convenience can significantly increase individuals' motivational score of registering with a Web site. 
We find that benefits-monetary reward and future convenience-significantly affect individuals' preferences over Web sites with differing privacy policies. We also quantify the value of Web site privacy protection. Among U.S. subjects, protection against errors, improper access, and secondary use of personal information is worth @math 44.62. Finally, our approach also allows us to identify three distinct segments of Internet users-privacy guardians, information sellers, and convenience seekers.", "The recent literature has hailed the benefits of content-oriented network architectures. However, such designs pose a threat to privacy by revealing a user's content requests. In this paper, we study how to ameliorate privacy in such designs. We present an approach that does not require any special infrastructure or shared secrets between the publishers and consumers of content. In lieu of any informational asymmetry, the approach leverages computational asymmetry by forcing the adversary to perform sizable computations to reconstruct each request. This approach does not provide ideal privacy, but makes it hard for an adversary to effectively monitor the content requests of a large number of users.", "Over the last several years, there has been an emerging interest in the development of wide-area data collection and analysis centers to help identify, track, and formulate responses to the ever-growing number of coordinated attacks and malware infections that plague computer networks worldwide. As large-scale network threats continue to evolve in sophistication and extend to widely deployed applications, we expect that interest in collaborative security monitoring infrastructures will continue to grow, because such attacks may not be easily diagnosed from a single point in the network. 
The intent of this position paper is not to argue the necessity of Internet-scale security data sharing infrastructures, as there is ample research [13, 48, 51, 54, 41, 47, 42] and operational examples [43, 17, 32, 53] that already make this case. Instead, we observe that these well-intended activities raise a unique set of risks and challenges. We outline some of the most salient issues faced by global network security centers, survey proposed defense mechanisms, and pose several research challenges to the computer security community. We hope that this position paper will serve as a stimulus to spur groundbreaking new research in protection and analysis technologies that can facilitate the collaborative sharing of network security data while keeping data contributors safe and secure." ] }
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality.
Security in CON. Wong and Nikander @cite_25 address the security of naming mechanisms by constructing the content name as the concatenation of the content provider's ID, a cryptographic ID of the content, and some meta-data. @cite_5 adopt a similar approach, where the content name is defined as the concatenation of the hash of the public key and a set of attributes. Both schemes rely on cryptographic hash functions to name the content, which results in human-unreadable flat naming. @cite_76 show that these schemes have several drawbacks, including the need for an indirection mechanism to map human-readable names to cryptographic ones, and the lack of binding between a name and the producer's identity. To resolve these shortcomings, they propose keeping hierarchical human-readable names while signing both the content name and the content itself using the producer's public key. @cite_31 study DoS and DDoS in CCN @cite_27 by presenting attacks and proposing some initial countermeasures. In another context, @cite_35 propose a secure lighting system over Named-Data Networking (NDN), providing access control to fixtures via authorization policies coupled with strong authentication. This approach is a first attempt to port CON beyond the content distribution scenario.
{ "cite_N": [ "@cite_35", "@cite_27", "@cite_5", "@cite_31", "@cite_76", "@cite_25" ], "mid": [ "2514172418", "2620403523", "2092433893", "145176944", "2083158002", "1505083828" ], "abstract": [ "User, content, and device names as a security primitive have been an attractive approach especially in the context of Information-Centric Networking (ICN) architectures. We leverage Hierarchical Identity Based Encryption (HIBE) to build (content) name-based security mechanisms used for securely distributing content. In contrast to similar approaches, in our system each user maintains his own Private Key Generator used for generating the master secret key and the public system parameters required by the HIBE algorithm. This way our system does not suffer from the key escrow problem, which is inherent in many similar solutions. In order to disseminate the system parameters of a content owner in a fully distributed way, we use blockchains, a distributed, community managed, global list of transactions.", "We propose a multi-user Symmetric Searchable Encryption (SSE) scheme based on the single-user Oblivious Cross Tags (OXT) protocol (, CRYPTO 2013). The scheme allows any user to perform a search query by interacting with the server and any (t-1) ‘helping’ users, and preserves the privacy of database content against the server even assuming leakage of up to (t-1) users’ keys to the server (for a threshold parameter (t)), while hiding the query from the (t-1) ‘helping users’. To achieve the latter query privacy property, we design a new distributed key-homomorphic pseudorandom function (PRF) that hides the PRF input (search keyword) from the ‘helping’ key share holders. By distributing the utilized keys among the users, the need of constant online presence of the data owner to provide services to the users is eliminated, while providing resilience against user key exposure.", "Several projects propose an information-centric approach to the network of the future.
Such an approach makes efficient content distribution possible by making information retrieval host-independent and integrating into the network storage for caching information. Requests for particular content can, thus, be satisfied by any host or server holding a copy. The current security model based on host authentication is not applicable in this context. Basic security functionality must instead be attached directly to the data and its naming scheme. A naming scheme to name content and other objects that enables verification of data integrity as well as owner authentication and identification is here presented. The naming scheme is designed for flexibility and extensibility, e.g., to integrate other security properties like access control. At the same time, the naming scheme offers persistent IDs even though the content, content owner and or owner's organizational structure, or location change. The requirements for the naming scheme and an analysis showing how the proposed scheme fulfills them are presented. Experience with prototyping the naming scheme is also discussed. The naming scheme builds the foundation for a secure information-centric network infrastructure that can also solve some of the main security problems of today's Internet.", "We consider what constitutes identities in cryptography. Typical examples include your name and your social-security number, or your fingerprint iris-scan, or your address, or your (non-revoked) public-key coming from some trusted public-key infrastructure. In many situations, however, where you are defines your identity. For example, we know the role of a bank-teller behind a bullet-proof bank window not because she shows us her credentials but by merely knowing her location. In this paper, we initiate the study of cryptographic protocols where the identity (or other credentials and inputs) of a party are derived from its geographic location. 
We start by considering the central task in this setting, i.e., securely verifying the position of a device. Despite much work in this area, we show that in the Vanilla (or standard) model, the above task (i.e., of secure positioning) is impossible to achieve. In light of the above impossibility result, we then turn to the Bounded Storage Model and formalize and construct information theoretically secure protocols for two fundamental tasks: Secure Positioning; and Position Based Key Exchange. We then show that these tasks are in fact universal in this setting --- we show how we can use them to realize Secure Multi-Party Computation.Our main contribution in this paper is threefold: to place the problem of secure positioning on a sound theoretical footing; to prove a strong impossibility result that simultaneously shows the insecurity of previous attempts at the problem; and to present positive results by showing that the bounded-storage framework is, in fact, one of the \"right\" frameworks (there may be others) to study the foundations of position-based cryptography.", "Name services are critical for mapping logical resource names to physical resources in large-scale distributed systems. The Domain Name System (DNS) used on the Internet, however, is slow, vulnerable to denial of service attacks, and does not support fast updates. These problems stem fundamentally from the structure of the legacy DNS.This paper describes the design and implementation of the Cooperative Domain Name System (CoDoNS), a novel name service, which provides high lookup performance through proactive caching, resilience to denial of service attacks through automatic load-balancing, and fast propagation of updates. CoDoNS derives its scalability, decentralization, self-organization, and failure resilience from peer-to-peer overlays, while it achieves high performance using the Beehive replication framework. 
Cryptographic delegation, instead of host-based physical delegation, limits potential malfeasance by namespace operators and creates a competitive market for namespace management. Backwards compatibility with existing protocols and wire formats enables CoDoNS to serve as a backup for legacy DNS, as well as a complete replacement. Performance measurements from a real-life deployment of the system in PlanetLab shows that CoDoNS provides fast lookups, automatically reconfigures around faults without manual involvement and thwarts distributed denial of service attacks by promptly redistributing load across nodes.", "This thesis describes a novel statistical named-entity (i.e. “proper name”) recognition system known as “MENE” (Maximum Entropy Named Entity). Named entity (N.E.) recognition is a form of information extraction in which we seek to classify every word in a document as being a person-name, organization, location, date, time, monetary value, percentage, or “none of the above”. The task has particular significance for Internet search engines, machine translation, the automatic indexing of documents, and as a foundation for work on more complex information extraction tasks. Two of the most significant problems facing the constructor of a named entity system are the questions of portability and system performance. A practical N.E. system will need to be ported frequently to new bodies of text and even to new languages. The challenge is to build a system which can be ported with minimal expense (in particular minimal programming by a computational linguist) while maintaining a high degree of accuracy in the new domains or languages. MENE attempts to address these issues through the use of maximum entropy probabilistic modeling. It utilizes a very flexible object-based architecture which allows it to make use of a broad range of knowledge sources in making its tagging decisions. 
In the DARPA-sponsored MUC-7 named entity evaluation, the system displayed an accuracy rate which was well-above the median, demonstrating that it can achieve the performance goal. In addition, we demonstrate that the system can be used as a post-processing tool to enhance the output of a hand-coded named entity recognizer through experiments in which MENE improved on the performance of N.E. systems from three different sites. Furthermore, when all three external recognizers are combined under MENE, we are able to achieve very strong results which, in some cases, appear to be competitive with human performance. Finally, we demonstrate the trans-lingual portability of the system. We ported the system to two Japanese-language named entity tasks, one of which involved a new named entity category, “artifact”. Our results on these tasks were competitive with the best systems built by native Japanese speakers despite the fact that the author speaks no Japanese." ] }
Privacy Issues in CON. To the best of our knowledge, the only related privacy studies are the recent articles @cite_48 @cite_37 , which cover security and privacy issues of CCN @cite_27 . Specifically, they highlight a few Denial-of-Service (DoS) vulnerabilities as well as several cache-related attacks. In CCN, a possible DoS attack (also discussed in @cite_27 ) relies on resource exhaustion, targeting either routers or the content source. Routers are forced to perform expensive computations, such as signature verification, which negatively affects the quality of service and can ultimately block traffic. The content source can also be flooded with a huge number of interests, ultimately denying service to legitimate users. Additional DoS attacks mainly target the caching mechanism, either to decrease network performance or to gain free and uncontrolled storage. Transforming the cache into permanent storage is achieved by continuously issuing interests for a desired file. Decreased network performance can also be achieved through cache pollution.
{ "cite_N": [ "@cite_48", "@cite_37", "@cite_27" ], "mid": [ "1984778122", "2159587870", "2901937773" ], "abstract": [ "With the advent of content-centric networking (CCN) where contents can be cached on each CCN router, cache robustness will soon emerge as a serious concern for CCN deployment. Previous studies on cache pollution attacks only focus on a single cache server. The question of how caching will behave over a general caching network such as CCN under cache pollution attacks has never been answered. In this paper, we propose a novel scheme called CacheShield for enhancing cache robustness. CacheShield is simple, easy-to-deploy, and applicable to any popular cache replacement policy. CacheShield can effectively improve cache performance under normal circumstances, and more importantly, shield CCN routers from cache pollution attacks. Extensive simulations including trace-driven simulations demonstrate that CacheShield is effective for both CCN and today's cache servers. We also study the impact of cache pollution attacks on CCN and reveal several new observations on how different attack scenarios can affect cache hit ratios unexpectedly.", "Content-Centric Networking (CCN) is an emerging paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. In CCN, named content - rather than addressable hosts - becomes a first-class entity. Content is therefore decoupled from its location. This allows, among other things, the implementation of ubiquitous caching. Named-Data Networking (NDN) is a prominent example of CCN. In NDN, all nodes (i.e., hosts, routers) are allowed to have a local cache, used to satisfy incoming requests for content. This makes NDN a good architecture for efficient large scale content distribution. However, reliance on caching allows an adversary to perform attacks that are very effective and relatively easy to implement. 
Such attacks include cache poisoning (i.e., introducing malicious content into caches) and cache pollution (i.e., disrupting cache locality). This paper focuses on cache pollution attacks, where the adversary's goal is to disrupt cache locality to increase link utilization and cache misses for honest consumers. We show, via simulations, that such attacks can be implemented in NDN using limited resources, and that their effectiveness is not limited to small topologies. We then illustrate that existing proactive countermeasures are ineffective against realistic adversaries. Finally, we introduce a new technique for detecting pollution attacks. Our technique detects high and low rate attacks on different topologies with high accuracy.", "Information leakage of sensitive data has become one of the fast growing concerns among computer users. With adversaries turning to hardware for exploits, caches are frequently a target for timing channels since they present different timing profiles for cache miss and hit latencies. Such timing channels operate by having an adversary covertly communicate secrets to a spy simply through modulating resource timing without leaving any physical evidence. In this article, we demonstrate a new vulnerability exposed by cache coherence protocols where adversaries could manipulate the coherence states on certain cache blocks to alter cache access timing and communicate secrets illegitimately. Our threat model assumes the trojan and spy can either exploit explicitly shared read-only physical pages (e.g., shared library code), or use memory deduplication feature to implicitly force create shared physical pages. We demonstrate a template that adversaries may use to construct covert timing channels through manipulating combinations of coherence states and data placement in different caches. 
We investigate several classes of cache coherence protocols, and observe that both directory-based and snoopy protocols can be subject to covert timing channel attacks. We identify the root cause of the vulnerability to be the existence of access latency differences for cache lines in read-only cache coherence states: Exclusive and Shared. For defense, we propose a slightly modified cache coherence scheme that will enable the last level cache to directly respond to read data requests in these read-only coherence states, and avoid any latency difference that could enable timing channels." ] }
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as, anonymity, censoring, traceability, and confidentiality.
From the privacy perspective, work in @cite_48 @cite_37 identifies the issue of information leakage through caches in CCN. It proposes a few simple countermeasures, following detection and prevention approaches. The former can be achieved using techniques similar to those addressing cache pollution attacks in IP @cite_53 , although such an approach can be difficult to port to CON due to the lack of source addresses. The latter can be global, i.e., treating all traffic as sensitive, delaying all traffic, or deploying a shared cache to circumvent the attack. Alternatively, a selective prevention approach may try to distinguish between sensitive and non-sensitive content, based on content popularity and context (time, location), and then delay or tunnel only sensitive content. It is not clear, however, how to implement the selection mechanism distinguishing private from non-private content; the authors of @cite_37 suggest implementing this service either in the network layer (i.e., the router classifies the content) or at the host (i.e., the content source tags sensitive content). Such classification is in turn a very challenging task, since privacy is a relative notion that changes from one user to another. Other attacks are also briefly discussed, although no countermeasures besides tunneling have been proposed.
{ "cite_N": [ "@cite_48", "@cite_37", "@cite_53" ], "mid": [ "2901937773", "2159587870", "2949272603" ], "abstract": [ "Information leakage of sensitive data has become one of the fast growing concerns among computer users. With adversaries turning to hardware for exploits, caches are frequently a target for timing channels since they present different timing profiles for cache miss and hit latencies. Such timing channels operate by having an adversary covertly communicate secrets to a spy simply through modulating resource timing without leaving any physical evidence. In this article, we demonstrate a new vulnerability exposed by cache coherence protocols where adversaries could manipulate the coherence states on certain cache blocks to alter cache access timing and communicate secrets illegitimately. Our threat model assumes the trojan and spy can either exploit explicitly shared read-only physical pages (e.g., shared library code), or use memory deduplication feature to implicitly force create shared physical pages. We demonstrate a template that adversaries may use to construct covert timing channels through manipulating combinations of coherence states and data placement in different caches. We investigate several classes of cache coherence protocols, and observe that both directory-based and snoopy protocols can be subject to covert timing channel attacks. We identify the root cause of the vulnerability to be the existence of access latency difference for cache lines in read-only cache coherence states: Exclusive and Shared. For defense, we propose a slightly modified cache coherence scheme that will enable the last level cache to directly respond to read data requests in these read-only coherence states, and avoid any latency difference that could enable timing channels.", "Content-Centric Networking (CCN) is an emerging paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. 
In CCN, named content - rather than addressable hosts - becomes a first-class entity. Content is therefore decoupled from its location. This allows, among other things, the implementation of ubiquitous caching. Named-Data Networking (NDN) is a prominent example of CCN. In NDN, all nodes (i.e., hosts, routers) are allowed to have a local cache, used to satisfy incoming requests for content. This makes NDN a good architecture for efficient large scale content distribution. However, reliance on caching allows an adversary to perform attacks that are very effective and relatively easy to implement. Such attacks include cache poisoning (i.e., introducing malicious content into caches) and cache pollution (i.e., disrupting cache locality). This paper focuses on cache pollution attacks, where the adversary's goal is to disrupt cache locality to increase link utilization and cache misses for honest consumers. We show, via simulations, that such attacks can be implemented in NDN using limited resources, and that their effectiveness is not limited to small topologies. We then illustrate that existing proactive countermeasures are ineffective against realistic adversaries. Finally, we introduce a new technique for detecting pollution attacks. Our technique detects high and low rate attacks on different topologies with high accuracy.", "We investigate the problem of optimal request routing and content caching in a heterogeneous network supporting in-network content caching with the goal of minimizing average content access delay. Here, content can either be accessed directly from a back-end server (where content resides permanently) or be obtained from one of multiple in-network caches. To access a piece of content, a user must decide whether to route its request to a cache or to the back-end server. Additionally, caches must decide which content to cache. 
We investigate the problem complexity of two problem formulations, where the direct path to the back-end server is modeled as i) a congestion-sensitive or ii) a congestion-insensitive path, reflecting whether or not the delay of the uncached path to the back-end server depends on the user request load, respectively. We show that the problem is NP-complete in both cases. We prove that under the congestion-insensitive model the problem can be solved optimally in polynomial time if each piece of content is requested by only one user, or when there are at most two caches in the network. We also identify a structural property of the user-cache graph that potentially makes the problem NP-complete. For the congestion-sensitive model, we prove that the problem remains NP-complete even if there is only one cache in the network and each content is requested by only one user. We show that approximate solutions can be found for both models within a (1-1/e) factor of the optimal solution, and demonstrate a greedy algorithm that is found to be within 1% of optimal for small problem sizes. Through trace-driven simulations we evaluate the performance of our greedy algorithms, which show up to a 50% reduction in average delay over solutions based on LRU content caching." ] }
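The greedy heuristic evaluated in the record above can be sketched concretely: treat each (cache, content) placement as a ground-set element, let the objective be the total delay reduction relative to always fetching from the back-end server (a monotone submodular function), and repeatedly add the placement with the largest marginal gain. Under a cardinality budget, this classical greedy is within a (1-1/e) factor of optimal. The instance below is hypothetical, not taken from the paper's traces.

```python
import itertools

# Hypothetical instance: users, each requesting one content, with access
# delay to each cache and to the back-end server.
users = {
    "u1": {"content": "a", "server": 100, "delays": {"c1": 10, "c2": 40}},
    "u2": {"content": "a", "server": 100, "delays": {"c1": 50, "c2": 15}},
    "u3": {"content": "b", "server": 80,  "delays": {"c1": 20, "c2": 60}},
    "u4": {"content": "b", "server": 80,  "delays": {"c1": 70, "c2": 25}},
}
caches = ["c1", "c2"]
contents = ["a", "b"]

def total_saving(placement):
    """Delay reduction vs. always going to the server, given a set of
    (cache, content) placements.  Monotone and submodular."""
    s = 0
    for u in users.values():
        options = [u["delays"][c] for (c, x) in placement if x == u["content"]]
        if options:
            s += u["server"] - min(options)  # each user takes the best copy
    return s

def greedy(budget):
    """Greedy placement of (cache, content) pairs; for a monotone submodular
    objective under a cardinality budget this is within (1 - 1/e) of optimal."""
    chosen = set()
    ground = set(itertools.product(caches, contents))
    for _ in range(budget):
        best = max(ground - chosen,
                   key=lambda p: total_saving(chosen | {p}) - total_saving(chosen))
        chosen.add(best)
    return chosen
```

On this instance, a budget of two placements leads greedy to cache both contents at the caches closest to the majority of their requesters.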
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as, anonymity, censoring, traceability, and confidentiality.
Our work extends that in @cite_48 @cite_37 by encompassing all privacy aspects: caching, naming, signature, and content. Also, it is more general, as it considers not only CCN @cite_27 but CON in general, independently of the specific instantiation. Furthermore, when suggesting countermeasures, we only propose techniques that can be applied with a minimal change to the architecture.
{ "cite_N": [ "@cite_48", "@cite_37", "@cite_27" ], "mid": [ "2159587870", "2071740737", "2313059177" ], "abstract": [ "Content-Centric Networking (CCN) is an emerging paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. In CCN, named content - rather than addressable hosts - becomes a first-class entity. Content is therefore decoupled from its location. This allows, among other things, the implementation of ubiquitous caching. Named-Data Networking (NDN) is a prominent example of CCN. In NDN, all nodes (i.e., hosts, routers) are allowed to have a local cache, used to satisfy incoming requests for content. This makes NDN a good architecture for efficient large scale content distribution. However, reliance on caching allows an adversary to perform attacks that are very effective and relatively easy to implement. Such attacks include cache poisoning (i.e., introducing malicious content into caches) and cache pollution (i.e., disrupting cache locality). This paper focuses on cache pollution attacks, where the adversary's goal is to disrupt cache locality to increase link utilization and cache misses for honest consumers. We show, via simulations, that such attacks can be implemented in NDN using limited resources, and that their effectiveness is not limited to small topologies. We then illustrate that existing proactive countermeasures are ineffective against realistic adversaries. Finally, we introduce a new technique for detecting pollution attacks. Our technique detects high and low rate attacks on different topologies with high accuracy.", "Named Data Networking architectures have been proposed to improve various shortcomings of the current Internet architecture. A key part of these proposals is the capability of caching arbitrary content in arbitrary network locations. 
While caching has the potential to improve network performance, the data stored in caches can be seen as transient traces of past communication that attackers can exploit to compromise the users' privacy. With this editorial note, we aim to raise awareness of privacy attacks as an intrinsic and relevant issue in Named Data Networking architectures. Countermeasures against privacy attacks are subject to a trade-off between performance and privacy. We discuss several approaches to countermeasures representing different incarnations of this trade-off, along with open issues to be looked at by the research community.", "Caching scheme will change the original feature of the network in Content Centric Networking (CCN). So it becomes a challenge to describe the caching node importance according to network traffic and user behavior. In this work, a new metric named Request Influence Degree (RID) is defined to reflect the degree of node importance. Then the caching performance of CCN has been addressed with a special focus on the size of individual CCN router caches. Finally, a new heterogeneous content store space allocation scheme based on the metric RID across the CCN network has been proposed. Numerical experiments reveal that the new scheme can decrease the routing stretch and the source server load compared with the homogeneous assignment and several graph-related centrality metrics allocations." ] }
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as, anonymity, censoring, traceability, and confidentiality.
Anonymity in CON. AND @math NA @cite_30 proposes a Tor-like anonymizing tool for CCN @cite_27 to provide provable anonymity. It also aims at privacy protection via simple tunneling. However, as discussed in , AND @math NA is an "all-in-one" solution that introduces latency and impedes caching, whereas fine-grained privacy solutions are needed, since a widespread use of tunneling would inherently take away most of CON's benefits in terms of performance and scalability. To provide censorship resistance, @cite_57 describes an algorithm to mix legitimate sensitive content with so-called "cover files" to hide it. By monitoring the content, an adversary would only see the "mixed" content, which prevents them from censoring it.
{ "cite_N": [ "@cite_30", "@cite_27", "@cite_57" ], "mid": [ "2161829417", "1975016298", "1992286709" ], "abstract": [ "The ability to locate random relays is a key challenge for peer-to-peer (P2P) anonymous communication systems. Earlier attempts like Salsa and AP3 used distributed hash table lookups to locate relays, but the lack of anonymity in their lookup mechanisms enables an adversary to infer the path structure and compromise user anonymity. NISAN and Torsk are state-of-the-art systems for P2P anonymous communication. Their designs include mechanisms that are specifically tailored to mitigate information leak attacks. NISAN proposes to add anonymity into the lookup mechanism itself, while Torsk proposes the use of secret buddy nodes to anonymize the lookup initiator. In this paper, we attack the key mechanisms that hide the relationship between a lookup initiator and its selected relays in NISAN and Torsk. We present passive attacks on the NISAN lookup and show that it is not as anonymous as previously thought. We analyze three circuit construction mechanisms for anonymous communication using the NISAN lookup, and show that the information leaks in the NISAN lookup lead to a significant reduction in user anonymity. We also propose active attacks on Torsk that defeat its secret buddy mechanism and consequently compromise user anonymity. Our results are backed up by probabilistic modeling and extensive simulations. Our study motivates the search for a DHT lookup mechanism that is both secure and anonymous.", "Existing IP anonymity systems tend to sacrifice one of low latency, high bandwidth, or resistance to traffic-analysis. High-latency mix-nets like Mixminion batch messages to resist traffic-analysis at the expense of low latency. Onion routing schemes like Tor deliver low latency and high bandwidth, but are not designed to withstand traffic analysis. 
Designs based on DC-nets or broadcast channels resist traffic analysis and provide low latency, but are limited to low bandwidth communication. In this paper, we present the design, implementation, and evaluation of Aqua, a high-bandwidth anonymity system that resists traffic analysis. We focus on providing strong anonymity for BitTorrent, and evaluate the performance of Aqua using traces from hundreds of thousands of actual BitTorrent users. We show that Aqua achieves latency low enough for efficient bulk TCP flows, bandwidth sufficient to carry BitTorrent traffic with reasonable efficiency, and resistance to traffic analysis within anonymity sets of hundreds of clients. We conclude that Aqua represents an interesting new point in the space of anonymity network designs.", "It is not uncommon in the data anonymization literature to oppose the \"old\" @math k -anonymity model to the \"new\" differential privacy model, which offers more robust privacy guarantees. Yet, it is often disregarded that the utility of the anonymized results provided by differential privacy is quite limited, due to the amount of noise that needs to be added to the output, or because utility can only be guaranteed for a restricted type of queries. This is in contrast with @math k -anonymity mechanisms, which make no assumptions on the uses of anonymized data while focusing on preserving data utility from a general perspective. In this paper, we show that a synergy between differential privacy and @math k -anonymity can be found: @math k -anonymity can help improve the utility of differentially private responses to arbitrary queries. We devote special attention to the utility improvement of differentially private published data sets. Specifically, we show that the amount of noise required to fulfill @math 
-differential privacy can be reduced if noise is added to a @math k -anonymous version of the data set, where @math k -anonymity is reached through a specially designed microaggregation of all attributes. As a result of noise reduction, the general analytical utility of the anonymized output is increased. The theoretical benefits of our proposal are illustrated in a practical setting with an empirical evaluation on three data sets." ] }
1211.5263
2128011898
A smooth affine hypersurface Z of complex dimension n is homotopy equivalent to an n-dimensional cell complex. Given a defining polynomial f for Z as well as a regular triangulation of its Newton polytope, we provide a purely combinatorial construction of a compact topological space S as a union of components of real dimension n, and prove that S embeds into Z as a deformation retract. In particular, Z is homotopy equivalent to S.
A skeleton for Fermat hypersurfaces was described by Deligne in [pp. 88--90] Deligne , and this skeleton is visible in our own in a manner described in Remark . Our "skeleta" are different from the "skeleta" that appear in nonarchimedean geometry @cite_10 @cite_15 , but @math plays a similar role in both constructions. It would be interesting to study this resemblance further.
{ "cite_N": [ "@cite_15", "@cite_10" ], "mid": [ "2021150171", "2022460695" ], "abstract": [ "Recent advances on human motion analysis have made the extraction of human skeleton structure feasible, even from single depth images. This structure has been proven quite informative for discriminating actions in a recognition scenario. In this context, we propose a local skeleton descriptor that encodes the relative position of joint quadruples. Such a coding implies a similarity normalisation transform that leads to a compact (6D) view-invariant skeletal feature, referred to as skeletal quad. Further, the use of a Fisher kernel representation is suggested to describe the skeletal quads contained in a (sub)action. A Gaussian mixture model is learnt from training data, so that the generation of any set of quads is encoded by its Fisher vector. Finally, a multi-level representation of Fisher vectors leads to an action description that roughly carries the order of sub-action within each action sequence. Efficient classification is here achieved by linear SVMs. The proposed action representation is tested on widely used datasets, MSRAction3D and HDM05. The experimental evaluation shows that the proposed method outperforms state-of-the-art algorithms that rely only on joints, while it competes with methods that combine joints with extra cues.", "In this paper we explore the idea of characterizing sentences by the shapes of their structural descriptions only; for example, in the case of context free grammars, by the shapes of the derivation trees only. Such structural descriptions will be called skeletons. A skeleton exhibits all of the grouping structure (phrase structure) of the sentence without naming the syntactic categories used in the description. The inclusion of syntactic categories as variables is primarily a question of economy of description. Every context free grammar is strongly equivalent to a skeletal grammar, in a sense made precise in the paper. 
Besides clarifying the role of skeletons in mathematical linguistics, we show that skeletal automata provide a characterization of local sets, remedying a “defect” in the usual tree automata theory. We extend the method of skeletal structural descriptions to other forms of tree describing systems. We also suggest a theoretical basis for grammatical inference based on grouping structure only." ] }
1211.5263
2128011898
A smooth affine hypersurface Z of complex dimension n is homotopy equivalent to an n-dimensional cell complex. Given a defining polynomial f for Z as well as a regular triangulation of its Newton polytope, we provide a purely combinatorial construction of a compact topological space S as a union of components of real dimension n, and prove that S embeds into Z as a deformation retract. In particular, Z is homotopy equivalent to S.
Hypersurfaces in algebraic tori have been studied by Danilov-Khovanski @cite_14 and Batyrev @cite_19 . Danilov-Khovanski computed mixed Hodge numbers, while Batyrev studied the variation of mixed Hodge structures. Log geometry has been extensively employed by Gross and Siebert @cite_21 in their seminal work studying the degenerations appearing in mirror symmetry. Their strategy is crucial to our work, even though we take a somewhat different track by working in a non-compact setting for hypersurfaces that are not necessarily Calabi-Yau. The non-compactness allows us to deal with log-smooth log structures. Mirror symmetry for general hypersurfaces was recently studied in @cite_16 (projective case) and @cite_4 (affine case) using polyhedral decompositions of the Newton polytope. This relates to the Gross-Siebert program by embedding the hypersurface in codimension two in the special fiber of a degenerating Calabi-Yau family. In this family, the hypersurface coincides with the log singular locus --- see @cite_0 for the simplicial case.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_21", "@cite_0", "@cite_19", "@cite_16" ], "mid": [ "2734934103", "2613622891", "2951039910", "1645016350", "170447308", "2265209761" ], "abstract": [ "We show that the category of coherent sheaves on the toric boundary divisor of a smooth quasiprojective DM toric stack is equivalent to the wrapped Fukaya category of a hypersurface in a complex torus. Hypersurfaces with every Newton polytope can be obtained. Our proof has the following ingredients. Using Mikhalkin-Viro patchworking, we compute the skeleton of the hypersurface. The result matches the [FLTZ] skeleton and is naturally realized as a Legendrian in the cosphere bundle of a torus. By [GPS1, GPS2, GPS3], we trade wrapped Fukaya categories for microlocal sheaf theory. By proving a new functoriality result for Bondal's coherent-constructible correspondence, we reduce the sheaf calculation to Kuwagaki's recent theorem on mirror symmetry for toric varieties.", "Using the Minimal Model Program, any degeneration of K-trivial varieties can be arranged to be in a Kulikov type form, i.e. with trivial relative canonical divisor and mild singularities. In the hyper-Kähler setting, we can then deduce a finiteness statement for monodromy acting on @math , once one knows that one component of the central fiber is not uniruled. Independently of this, using deep results from the geometry of hyper-Kähler manifolds, we prove that a finite monodromy projective degeneration of hyper-Kähler manifolds has a smooth filling (after base change and birational modifications). As a consequence of these two results, we prove a generalization of Huybrechts' theorem about birational versus deformation equivalence, allowing singular central fibers. As an application, we give simple proofs for the deformation type of certain geometric constructions of hyper-Kähler manifolds (e.g. Debarre--Voisin or Laza--Saccà--Voisin). 
In a slightly different direction, we establish some basic properties (dimension and rational homology type) for the dual complex of a Kulikov type degeneration of hyper-Kähler manifolds.", "We consider mirror symmetry for (essentially arbitrary) hypersurfaces in (possibly noncompact) toric varieties from the perspective of the Strominger-Yau-Zaslow (SYZ) conjecture. Given a hypersurface @math in a toric variety @math we construct a Landau-Ginzburg model which is SYZ mirror to the blowup of @math along @math , under a positivity assumption. This construction also yields SYZ mirrors to affine conic bundles, as well as a Landau-Ginzburg model which can be naturally viewed as a mirror to @math . The main applications concern affine hypersurfaces of general type, for which our results provide a geometric basis for various mirror symmetry statements that appear in the recent literature. We also obtain analogous results for complete intersections.", "Let f be a polynomial of degree n in ZZ[x_1,..,x_n], typically reducible but squarefree. From the hypersurface f=0 one may construct a number of other subschemes Y by extracting prime components, taking intersections, taking unions, and iterating this procedure. We prove that if the number of solutions to f=0 in ^n is not a multiple of p, then all these intersections in ^n_ just described are reduced. (If this holds for infinitely many p, then it holds over as well.) More specifically, there is a _Frobenius splitting_ on ^n_ compatibly splitting all these subschemes Y . We determine when a Gröbner degeneration f_0=0 of such a hypersurface f=0 is again such a hypersurface. Under this condition, we prove that compatibly split subschemes degenerate to compatibly split subschemes, and stay reduced. Our results are strongest in the case that f's lexicographically first term is \prod_{i=1}^n x_i. Then for all large p, there is a Frobenius splitting that compatibly splits f's hypersurface and all the associated Y . 
The Gröbner degeneration Y' of each such Y is a reduced union of coordinate spaces (a Stanley-Reisner scheme), and we give a result to help compute its Gröbner basis. We exhibit an f whose associated Y include Fulton's matrix Schubert varieties, and recover much more easily the Gröbner basis theorem of [Knutson-Miller '05]. We show that in Bott-Samelson coordinates on an opposite Bruhat cell X^v_ in G/B, the f defining the complement of the big cell also has initial term \prod_{i=1}^n x_i, and hence the Kazhdan-Lusztig subvarieties X^v_ w degenerate to Stanley-Reisner schemes. This recovers, in a weak form, the main result of [Knutson '08].", "We give a spectral sequence to compute the logarithmic Hodge groups on a hypersurface type toric log Calabi-Yau space X, compute its E1 term explicitly in terms of tropical degeneration data and Jacobian rings and prove its degeneration at E2 under mild assumptions. We prove the basechange of the affine Hodge groups and deduce it for the logarithmic Hodge groups in low dimensions. As an application, we prove a mirror symmetry duality in dimension two and four involving the ordinary Hodge numbers, the stringy Hodge numbers and the affine Hodge numbers.", "In the early 1990s, Borcea-Voisin orbifolds were some of the earliest examples of Calabi-Yau threefolds shown to exhibit mirror symmetry. However, their quantum theory has been poorly investigated. We study this in the context of the gauged linear sigma model, which in their case encompasses Gromov-Witten theory and its three companions (FJRW theory and two mixed theories). For certain Borcea-Voisin orbifolds of Fermat type, we calculate all four genus zero theories explicitly. Furthermore, we relate the I-functions of these theories by analytic continuation and symplectic transformation. 
In particular, the relation between the Gromov-Witten and FJRW theories can be viewed as an example of the Landau-Ginzburg Calabi-Yau correspondence for complete intersections of toric varieties." ] }
1211.5263
2128011898
A smooth affine hypersurface Z of complex dimension n is homotopy equivalent to an n-dimensional cell complex. Given a defining polynomial f for Z as well as a regular triangulation of its Newton polytope, we provide a purely combinatorial construction of a compact topological space S as a union of components of real dimension n, and prove that S embeds into Z as a deformation retract. In particular, Z is homotopy equivalent to S.
In the symplectic-topological setting, Mikhalkin @cite_5 constructed a degeneration of a projective algebraic hypersurface using a triangulation of its Newton polytope to provide a higher-dimensional "pair-of-pants" decomposition. He further identified a stratified torus fibration over the spine of the corresponding amoeba. This viewpoint was first applied to homological mirror symmetry ("HMS") by Abouzaid @cite_11 . Mikhalkin's construction and perspective inform the current work greatly, even though our route from HMS is a bit "top-down." We describe it here.
{ "cite_N": [ "@cite_5", "@cite_11" ], "mid": [ "2734934103", "2605842359" ], "abstract": [ "We show that the category of coherent sheaves on the toric boundary divisor of a smooth quasiprojective DM toric stack is equivalent to the wrapped Fukaya category of a hypersurface in a complex torus. Hypersurfaces with every Newton polytope can be obtained. Our proof has the following ingredients. Using Mikhalkin-Viro patchworking, we compute the skeleton of the hypersurface. The result matches the [FLTZ] skeleton and is naturally realized as a Legendrian in the cosphere bundle of a torus. By [GPS1, GPS2, GPS3], we trade wrapped Fukaya categories for microlocal sheaf theory. By proving a new functoriality result for Bondal's coherent-constructible correspondence, we reduce the sheaf calculation to Kuwagaki's recent theorem on mirror symmetry for toric varieties.", "We prove a homological mirror symmetry equivalence between an @math -brane category for the pair of pants, computed as a wrapped microlocal sheaf category, and a @math -brane category for a mirror LG model, understood as a category of matrix factorizations. The equivalence improves upon prior results in two ways: it intertwines evident affine Weyl group symmetries on both sides, and it exhibits the relation of wrapped microlocal sheaves along different types of Lagrangian skeleta for the same hypersurface. The equivalence proceeds through the construction of a combinatorial realization of the @math -model via arboreal singularities. The constructions here represent the start of a program to generalize to higher dimensions many of the structures which have appeared in topological approaches to Fukaya categories of surfaces." ] }
1211.5184
2951471647
Many online social networks feature restrictive web interfaces which only allow the query of a user's local neighborhood through the interface. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse the existing Markov Chain Monte Carlo methods such as random walks to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large amount of queries often required (i.e., a long "mixing time") for a random walk to reach a desired (stationary) sampling distribution. In this paper, we consider a novel problem of enabling a faster random walk over online social networks by "rewiring" the social network on-the-fly. Specifically, we develop Modified TOpology (MTO)-Sampler which, by using only information exposed by the restrictive web interface, constructs a "virtual" overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings on query cost over real-world online social networks such as Google Plus, Epinion etc.
Assuming access to the global topology, @cite_12 discussed sampling techniques for large graphs such as random node, random edge, and random subgraph sampling. @cite_22 introduced Albatross sampling, which combines random jumps with MHRW. @cite_26 also demonstrated a truly uniform sampling method over user IDs as "ground truth".
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_12" ], "mid": [ "1997991368", "2110117460", "2568950526" ], "abstract": [ "In this paper, we propose an efficient method to detect the underlying structures in data. The same as RANSAC, we randomly sample MSSs (minimal size samples) and generate hypotheses. Instead of analyzing each hypothesis separately, the consensus information in all hypotheses is naturally fused into a hypergraph, called random consensus graph, with real structures corresponding to its dense subgraphs. The sampling process is essentially a progressive refinement procedure of the random consensus graph. Due to the huge number of hyperedges, it is generally inefficient to detect dense subgraphs on random consensus graphs. To overcome this issue, we construct a pairwise graph which approximately retains the dense subgraphs of the random consensus graph. The underlying structures are then revealed by detecting the dense subgraphs of the pair-wise graph. Since our method fuses information from all hypotheses, it can robustly detect structures even under a small number of MSSs. The graph framework enables our method to simultaneously discover multiple structures. Besides, our method is very efficient, and scales well for large scale problems. Extensive experiments illustrate the superiority of our proposed method over previous approaches, achieving several orders of magnitude speedup along with satisfactory accuracy and robustness.", "Graph sampling via crawling has been actively considered as a generic and important tool for collecting uniform node samples so as to consistently estimate and uncover various characteristics of complex networks. The so-called simple random walk with re-weighting (SRW-rw) and Metropolis-Hastings (MH) algorithm have been popular in the literature for such unbiased graph sampling. However, an unavoidable downside of their core random walks -- slow diffusion over the space, can cause poor estimation accuracy. 
In this paper, we propose non-backtracking random walk with re-weighting (NBRW-rw) and MH algorithm with delayed acceptance (MHDA) which are theoretically guaranteed to achieve, at almost no additional cost, not only unbiased graph sampling but also higher efficiency (smaller asymptotic variance of the resulting unbiased estimators) than the SRW-rw and the MH algorithm, respectively. In particular, a remarkable feature of the MHDA is its applicability for any non-uniform node sampling like the MH algorithm, but ensuring better sampling efficiency than the MH algorithm. We also provide simulation results to confirm our theoretical findings.", "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. 
We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013" ] }
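The MHRW (Metropolis-Hastings random walk) sampler that these records repeatedly compare against can be sketched in a few lines. This is a generic illustration, not code from any cited paper; the toy graph, node labels, and walk length are assumptions:

```python
import random

def mhrw_sample(neighbors, start, steps, seed=0):
    """Metropolis-Hastings random walk targeting the uniform node
    distribution: a proposed move u -> v (v a uniform neighbor of u)
    is accepted with probability min(1, deg(u)/deg(v)), which removes
    the degree bias of a simple random walk."""
    rng = random.Random(seed)
    u = start
    samples = []
    for _ in range(steps):
        v = rng.choice(neighbors[u])
        if rng.random() < min(1.0, len(neighbors[u]) / len(neighbors[v])):
            u = v  # accept; on rejection the walk stays at u (a self-loop)
        samples.append(u)
    return samples

# Toy graph with unequal degrees: a triangle 0-1-2 plus a pendant node 3.
g = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
chain = mhrw_sample(g, start=0, steps=20000)
```

Despite the 3:1 degree spread, the empirical visit frequencies approach the uniform 1/4 each, which is exactly the re-weighting-free uniformity that SRW-rw achieves only after correction.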
1211.5184
2951471647
Many online social networks feature restrictive web interfaces which only allow the query of a user's local neighborhood through the interface. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse the existing Markov Chain Monte Carlo methods such as random walks to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large amount of queries often required (i.e., a long "mixing time") for a random walk to reach a desired (stationary) sampling distribution. In this paper, we consider a novel problem of enabling a faster random walk over online social networks by "rewiring" the social network on-the-fly. Specifically, we develop Modified TOpology (MTO)-Sampler which, by using only information exposed by the restrictive web interface, constructs a "virtual" overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings on query cost over real-world online social networks such as Google Plus, Epinion etc.
@cite_21 found that the mixing time of typical online social networks is much larger than anticipated, which validates our motivation to shorten the mixing time of the random walk. @cite_13 derived the fastest mixing random walk on a graph by convex optimization over the second-largest eigenvalue of the transition matrix, but it needs the whole topology of the graph, and its high time complexity makes it inapplicable to large graphs.
{ "cite_N": [ "@cite_21", "@cite_13" ], "mid": [ "2127503167", "1973616368" ], "abstract": [ "Social networks provide interesting algorithmic properties that can be used to bootstrap the security of distributed systems. For example, it is widely believed that social networks are fast mixing, and many recently proposed designs of such systems make crucial use of this property. However, whether real-world social networks are really fast mixing is not verified before, and this could potentially affect the performance of such systems based on the fast mixing property. To address this problem, we measure the mixing time of several social graphs, the time that it takes a random walk on the graph to approach the stationary distribution of that graph, using two techniques. First, we use the second largest eigenvalue modulus which bounds the mixing time. Second, we sample initial distributions and compute the random walk length required to achieve probability distributions close to the stationary distribution. Our findings show that the mixing time of social graphs is much larger than anticipated, and being used in literature, and this implies that either the current security systems based on fast mixing have weaker utility guarantees or have to be less efficient, with less security guarantees, in order to compensate for the slower mixing.", "In this article we present a study of the mixing time of a random walk on the largest component of a supercritical random graph, also known as the giant component. We identify local obstructions that slow down the random walk, when the average degree d is at most O( @math ), proving that the mixing time in this case is Θ((n-d)2) asymptotically almost surely. As the average degree grows these become negligible and it is the diameter of the largest component that takes over, yielding mixing time Θ(n-d) a.a.s.. We proved these results during the 2003–04 academic year. 
Similar results but for constant d were later proved independently by in [3]. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008 Most of this work was completed while the author was a research fellow at the School of Computer Science, McGill University." ] }
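The first measurement technique in the @cite_21 abstract above — bounding the mixing time via the second-largest eigenvalue modulus (SLEM) — can be sketched as follows. The lazy-walk construction, the toy 6-cycle, and the choice of eps are illustrative assumptions; the bound used is the standard one for reversible chains:

```python
import numpy as np

def slem_and_mixing_bound(A, eps=0.25):
    """SLEM of the lazy random-walk transition matrix P = (I + D^-1 A)/2,
    plus the standard reversible-chain bound
    t_mix(eps) <= log(1/(eps * pi_min)) / (1 - SLEM)."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    P = 0.5 * (np.eye(len(A)) + A / deg[:, None])
    pi = deg / deg.sum()                      # stationary distribution
    # Symmetrize: S = D_pi^{1/2} P D_pi^{-1/2} shares P's spectrum
    # and is symmetric because the walk is reversible w.r.t. pi.
    D = np.diag(np.sqrt(pi))
    S = D @ P @ np.linalg.inv(D)
    ev = np.sort(np.abs(np.linalg.eigvalsh((S + S.T) / 2)))[::-1]
    slem = ev[1]
    bound = np.log(1.0 / (eps * pi.min())) / (1.0 - slem)
    return slem, bound

# Cycle on 6 nodes as a toy input (adjacency via two shifted identities).
A = np.roll(np.eye(6), 1, axis=0) + np.roll(np.eye(6), -1, axis=0)
slem, bound = slem_and_mixing_bound(A)
```

For the 6-cycle the lazy-walk SLEM is 0.75, so the bound evaluates to roughly log(24)/0.25 ≈ 12.7 steps; on real social graphs with SLEM close to 1 the same formula is what makes the measured mixing times so large.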
1211.5184
2951471647
Many online social networks feature restrictive web interfaces which only allow the query of a user's local neighborhood through the interface. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse the existing Markov Chain Monte Carlo methods such as random walks to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large amount of queries often required (i.e., a long "mixing time") for a random walk to reach a desired (stationary) sampling distribution. In this paper, we consider a novel problem of enabling a faster random walk over online social networks by "rewiring" the social network on-the-fly. Specifically, we develop Modified TOpology (MTO)-Sampler which, by using only information exposed by the restrictive web interface, constructs a "virtual" overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings on query cost over real-world online social networks such as Google Plus, Epinion etc.
@cite_6 compared a latent space model with real social network data. @cite_14 introduced a hybrid graph model to incorporate the small-world phenomenon. @cite_7 also measured the difference between multiple synthetic graphs and real-world social network graphs.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_6" ], "mid": [ "1917516492", "2123718282", "2039750798" ], "abstract": [ "We propose a temporal latent space model for link prediction in dynamic social networks, where the goal is to predict links over time based on a sequence of previous graph snapshots. The model assumes that each user lies in an unobserved latent space, and interactions are more likely to occur between similar users in the latent space representation. In addition, the model allows each user to gradually move its position in the latent space as the network structure evolves over time. We present a global optimization algorithm to effectively infer the temporal latent space. Two alternative optimization algorithms with local and incremental updates are also proposed, allowing the model to scale to larger networks without compromising prediction accuracy. Empirically, we demonstrate that our model, when evaluated on a number of real-world dynamic networks, significantly outperforms existing approaches for temporal link prediction in terms of both scalability and predictive power.", "The small world phenomenon, that consistently occurs in numerous exist- ing networks, refers to two similar but different properties — small average distance and the clustering effect. We consider a hybrid graph model that incorporates both properties by combining a global graph and a local graph. The global graph is modeled by a random graph with a power law degree distribution, while the local graph has specified local connectivity. We will prove that the hybrid graph has average distance and diameter close to that of random graphs with the same degree distribution (under certain mild conditions). We also give a simple decomposition algorithm which, for any given (real) graph, identifies the global edges and extracts the local graph (which is uniquely determined depending only on the local connectivity). 
We can then apply our theoretical results for analyzing real graphs, provided the parameters of the hybrid model can be appropriately chosen.", "This paper explores two aspects of social network modeling. First, we generalize a successful static model of relationships into a dynamic model that accounts for friendships drifting over time. Second, we show how to make it tractable to learn such models from data, even as the number of entities n gets large. The generalized model associates each entity with a point in p-dimensional Euclidean latent space. The points can move as time progresses but large moves in latent space are improbable. Observed links between entities are more likely if the entities are close in latent space. We show how to make such a model tractable (sub-quadratic in the number of entities) by the use of appropriate kernel functions for similarity in latent space; the use of low dimensional KD-trees; a new efficient dynamic adaptation of multidimensional scaling for a first pass of approximate projection of entities into latent space; and an efficient conjugate gradient update rule for non-linear local optimization in which amortized time per entity during an update is O(log n). We use both synthetic and real-world data on up to 11,000 entities which indicate near-linear scaling in computation time and improved performance over four alternative approaches. We also illustrate the system operating on twelve years of NIPS co-authorship data." ] }
1211.5608
2952304478
We consider the problem of recovering two unknown vectors, @math and @math , of length @math from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension @math and the other with dimension @math . Although the observed convolution is nonlinear in both @math and @math , it is linear in the rank-1 matrix formed by their outer product @math . This observation allows us to recast the deconvolution problem as low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that for "generic" signals, the program can deconvolve @math and @math exactly when the maximum of @math and @math is almost on the order of @math . That is, we show that if @math is drawn from a random subspace of dimension @math , and @math is a vector in a subspace of dimension @math whose basis vectors are "spread out" in the frequency domain, then nuclear norm minimization recovers @math without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length @math which we code using a random @math coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length @math , then the receiver can recover both the channel response and the message when @math , to within constant and log factors.
While this paper is only concerned with recovery by nuclear norm minimization, other types of recovery techniques have proven effective both in theory and in practice; see for example @cite_4 @cite_12 @cite_14 . It is possible that the guarantees given in this paper could be extended to these other algorithms.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_12" ], "mid": [ "2136912397", "2508366294", "2747056667" ], "abstract": [ "The problem of minimizing the rank of a matrix subject to affine constraints has applications in several areas including machine learning, and is known to be NP-hard. A tractable relaxation for this problem is nuclear norm (or trace norm) minimization, which is guaranteed to find the minimum rank matrix under suitable assumptions. In this paper, we propose a family of Iterative Reweighted Least Squares algorithms IRLS-p (with 0 ≤ p ≤ 1), as a computationally efficient way to improve over the performance of nuclear norm minimization. The algorithms can be viewed as (locally) minimizing certain smooth approximations to the rank function. When p = 1, we give theoretical guarantees similar to those for nuclear norm minimization, that is, recovery of low-rank matrices under certain assumptions on the operator defining the constraints. For p < 1, IRLS-p shows better empirical performance in terms of recovering low-rank matrices than nuclear norm minimization. We provide an efficient implementation for IRLS-p, and also present a related family of algorithms, sIRLS-p. These algorithms exhibit competitive run times and improved recovery when compared to existing algorithms for random instances of the matrix completion problem, as well as on the MovieLens movie recommendation data set.", "Minimization of the nuclear norm is often used as a surrogate, convex relaxation, for finding the minimum rank completion (recovery) of a partial matrix. The minimum nuclear norm problem can be solved as a trace minimization semidefinite programming problem, (SDP). The SDP and its dual are regular in the sense that they both satisfy strict feasibility. Interior point algorithms are the current methods of choice for these problems. This means that it is difficult to solve large scale problems and difficult to get high accuracy solutions. 
In this paper we take advantage of the structure at optimality for the minimum nuclear norm problem. We show that even though strict feasibility holds, the facial reduction framework can be successfully applied to obtain a proper face that contains the optimal set, and thus can dramatically reduce the size of the final nuclear norm problem while guaranteeing a low-rank solution. We include numerical tests for both exact and noisy cases. In all cases we assume that knowledge of a target rank is available.", "The nonsmooth and nonconvex regularization has many applications in imaging science and machine learning research due to its excellent recovery performance. A proximal iteratively reweighted nuclear norm algorithm has been proposed for the nonsmooth and nonconvex matrix minimizations. In this paper, we aim to investigate the convergence of the algorithm. With the Kurdyka–Łojasiewicz property, we prove the algorithm globally converges to a critical point of the objective function. The numerical results presented in this paper coincide with our theoretical findings." ] }
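The nuclear-norm algorithms surveyed in this record (IRLS-p and the proximal iteratively reweighted schemes) all build on singular value soft-thresholding, the proximal operator of the nuclear norm. A minimal sketch, with the matrix sizes, rank, noise level, and threshold chosen purely for illustration:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||.||_nuclear.  Soft-thresholds the singular values,
    which promotes low rank while keeping the singular vectors."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank 3
noisy = L + 0.01 * rng.standard_normal((20, 20))
X = svt(noisy, tau=1.0)
```

Because the noise singular values sit far below tau, the thresholding suppresses them entirely and returns a matrix of rank at most 3; a full proximal-gradient solver simply alternates this step with a data-fidelity gradient step.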
1211.5608
2952304478
We consider the problem of recovering two unknown vectors, @math and @math , of length @math from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension @math and the other with dimension @math . Although the observed convolution is nonlinear in both @math and @math , it is linear in the rank-1 matrix formed by their outer product @math . This observation allows us to recast the deconvolution problem as low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that for "generic" signals, the program can deconvolve @math and @math exactly when the maximum of @math and @math is almost on the order of @math . That is, we show that if @math is drawn from a random subspace of dimension @math , and @math is a vector in a subspace of dimension @math whose basis vectors are "spread out" in the frequency domain, then nuclear norm minimization recovers @math without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length @math which we code using a random @math coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length @math , then the receiver can recover both the channel response and the message when @math , to within constant and log factors.
As we will see below, our mathematical analysis is mostly concerned with how matrices of the form in act on rank-2 matrices in a certain subspace. Matrices of this type have been considered in the context of sparse recovery in the compressed sensing literature for applications including multiple-input multiple-output channel estimation @cite_37 , multi-user detection @cite_18 , and multiplexing of spectrally sparse signals @cite_13 .
{ "cite_N": [ "@cite_13", "@cite_37", "@cite_18" ], "mid": [ "2962909343", "1770500012", "2130345277" ], "abstract": [ "In this paper, we improve existing results in the field of compressed sensing and matrix completion when sampled data may be grossly corrupted. We introduce three new theorems. (1) In compressed sensing, we show that if the m×n sensing matrix has independent Gaussian entries, then one can recover a sparse signal x exactly by tractable l 1 minimization even if a positive fraction of the measurements are arbitrarily corrupted, provided the number of nonzero entries in x is O(m (log(n m)+1)). (2) In the very general sensing model introduced in Candes and Plan (IEEE Trans. Inf. Theory 57(11):7235–7254, 2011) and assuming a positive fraction of corrupted measurements, exact recovery still holds if the signal now has O(m (log2 n)) nonzero entries. (3) Finally, we prove that one can recover an n×n low-rank matrix from m corrupted sampled entries by tractable optimization provided the rank is on the order of O(m (nlog2 n)); again, this holds when there is a positive fraction of corrupted samples.", "This paper revisits the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem is an extension of single channel sparse recovery, which lies at the heart of compressed sensing. Inspired by the links to array signal processing, a new family of MMV algorithms is considered that highlight the role of rank in determining the difficulty of the MMV recovery problem. The simplest such method is a discrete version of MUSIC which is guaranteed to recover the sparse vectors in the full rank MMV setting, under mild conditions. This idea is extended to a rank aware pursuit algorithm that naturally reduces to Order Recursive Matching Pursuit (ORMP) in the single measurement case while also providing guaranteed recovery in the full rank setting. 
In contrast, popular MMV methods such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques are shown to be rank blind in terms of worst case analysis. Numerical simulations demonstrate that the rank aware techniques are significantly better than existing methods in dealing with multiple measurements.", "Compressed sensing seeks to recover a sparse vector from a small number of linear and non-adaptive measurements. While most work so far focuses on Gaussian or Bernoulli random measurements we investigate the use of partial random circulant and Toeplitz matrices in connection with recovery by 1-minization. In contrast to recent work in this direction we allow the use of an arbitrary subset of rows of a circulant and Toeplitz matrix. Our recovery result predicts that the necessary number of measurements to ensure sparse reconstruction by 1-minimization with random partial circulant or Toeplitz matrices scales linearly in the sparsity up to a log-factor in the ambient dimension. This represents a significant improvement over previous recovery results for such matrices. As a main tool for the proofs we use a new version of the non-commutative Khintchine inequality." ] }
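The lifting step described in this record's abstract — circular convolution becoming a linear measurement of the rank-1 outer product — can be checked numerically through the DFT, which diagonalizes circular convolution. The signal length and Gaussian signals below are illustrative assumptions:

```python
import numpy as np

def circ_conv(x, y):
    """Circular convolution computed via the DFT: the observation
    model of the blind deconvolution problem."""
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(y))

rng = np.random.default_rng(1)
L = 64
w, m = rng.standard_normal(L), rng.standard_normal(L)
obs = circ_conv(w, m)

# Each DFT coefficient of the observation is a *linear* functional of
# the lifted rank-1 matrix X = w m^T:  fft(obs)[k] = F[k] @ X @ F[k],
# where F[k] is the k-th row of the DFT matrix.  This bilinear-to-linear
# trade is what turns blind deconvolution into low-rank matrix recovery.
F = np.fft.fft(np.eye(L))
X = np.outer(w, m)
obs_hat_lifted = np.array([F[k] @ X @ F[k] for k in range(L)])
```

Numerically, `obs_hat_lifted` matches `np.fft.fft(obs)` to machine precision, confirming that the observed convolution, though nonlinear in (w, m), is linear in the outer product.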
1502.00842
1560878967
We introduce a new family of erasure codes, called group decodable code (GDC), for distributed storage system. Given a set of design parameters ; ; k; t , where k is the number of information symbols, each codeword of an ( ; ; k; t)-group decodable code is a t-tuple of strings, called buckets, such that each bucket is a string of symbols that is a codeword of a [ ; ] MDS code (which is encoded from information symbols). Such codes have the following two properties: (P1) Locally Repairable: Each code symbol has locality ( ; - + 1). (P2) Group decodable: From each bucket we can decode information symbols. We establish an upper bound of the minimum distance of ( ; ; k; t)-group decodable code for any given set of ; ; k; t ; We also prove that the bound is achievable when the coding field F has size |F| > n-1 k-1.
In @cite_12 , the concept of @math -locality was defined, which captures the property that there exist @math pairwise disjoint local repair sets for a code symbol. An upper bound on the minimum distance of @math linear codes with information @math -locality was derived, and codes attaining this bound were constructed for length @math . However, for @math , it is not known whether codes attaining this bound exist. Upper bounds on the rate and minimum distance of codes with all-symbol @math -locality were proved in @cite_13 . However, no explicit construction of codes achieving this bound was presented, and it remains an open question whether the distance bound in @cite_13 is achievable.
{ "cite_N": [ "@cite_13", "@cite_12" ], "mid": [ "1969158823", "2018102393" ], "abstract": [ "Repair locality is a desirable property for erasure codes in distributed storage systems. Recently, different structures of local repair groups have been proposed in the definitions of repair locality. In this paper, the concept of regenerating set is introduced to characterize the local repair groups. A definition of locality @math (i.e., locality @math with repair tolerance @math ) under the most general structure of regenerating sets is given. All previously studied locality turns out to be special cases of this definition. Furthermore, three representative concepts of locality proposed before are reinvestigated under the framework of regenerating sets, and their respective upper bounds on the minimum distance are reproved in a uniform and brief form. Additionally, a more precise distance bound is derived for the square code which is a class of linear codes with locality @math and high information rate, and an explicit code construction attaining the optimal distance bound is obtained.", "In distributed storage systems, erasure codes with locality (r ) are preferred because a coordinate can be locally repaired by accessing at most (r ) other coordinates which in turn greatly reduces the disk I O complexity for small (r ) . However, the local repair may not be performed when some of the (r ) coordinates are also erased. To overcome this problem, we propose the ((r, )_ c ) -locality providing ( -1 ) nonoverlapping local repair groups of size no more than (r ) for a coordinate. Consequently, the repair locality (r ) can tolerate ( -1 ) erasures in total. We derive an upper bound on the minimum distance for any linear ([n,k] ) code with information ((r, )_ c ) -locality. Then, we prove existence of the codes that attain this bound when (n k(r( -1)+1) ) . 
Although the locality ((r, ) ) defined by provides the same level of locality and local repair tolerance as our definition, codes with ((r, )_ c ) -locality attaining the bound are proved to have more advantage in the minimum distance. In particular, we construct a class of codes with all symbol ((r, )_ c ) -locality where the gain in minimum distance is ( ( r ) ) and the information rate is close to 1." ] }
1502.00739
2038147967
This paper aims at developing an integrated system of clothing co-parsing, in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. We propose a data-driven framework consisting of two phases of inference. The first phase, referred as "image co-segmentation", iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM (ESVM) technique [23]. In the second phase (i.e. "region colabeling"), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporate several contexts of clothing configuration (e.g., item location and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluate our framework on the Fashionista dataset [30], we construct a dataset called CCP consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29 88.23 segmentation accuracy and 65.52 63.89 recognition rate on the Fashionista and the CCP datasets, respectively, which are superior compared with state-of-the-art methods.
In the literature, existing efforts on clothing/human segmentation and recognition mainly focused on constructing expressive models to address various clothing styles and appearances @cite_0 @cite_1 @cite_14 @cite_34 @cite_32 @cite_3 @cite_19 . One classic work @cite_0 proposed a composite And-Or graph template for modeling and parsing clothing configurations. Later works studied blocking models to segment clothes in highly occluded group images @cite_29 , or modeled deformable spatial priors to improve clothing segmentation performance @cite_9 . Recent approaches incorporated a shape-based human model @cite_16 , or pose estimation with supervised region labeling @cite_22 , and achieved impressive results. Despite these acknowledged successes, such works have not yet been extended to the problem of clothing co-parsing, and they often require a heavy labeling workload.
Clothing co-parsing is also closely related to image object co-labeling, in which a batch of input images containing similar objects is processed jointly @cite_5 @cite_21 @cite_27 . For example, unsupervised shape-guided approaches were adopted in @cite_13 for single-object-category co-labeling. Winn et al. @cite_24 combined automatic image segmentation with a spatially coherent latent topic model to obtain unsupervised multi-class image labeling. These methods, however, solve the problem in an unsupervised manner and can become intractable when the number of categories is large and appearances are diverse. To handle more complex scenarios, several recent works focused on supervised label propagation, utilizing pixelwise label maps in the training set and propagating labels to unseen images. The pioneering work of @cite_5 proposed to propagate labels over scene images using a bi-layer sparse coding formulation, and similar ideas were explored in @cite_25 . These methods, however, are often limited by expensive annotations; moreover, they extract image correspondences at the pixel (or superpixel) level, which is not discriminative enough for the clothing parsing problem.
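The core of such label propagation is transferring annotations from matched regions to unseen ones. As a deliberately simplified toy sketch (our own nearest-neighbour construction, not the bi-layer sparse coding of @cite_5 ; all names and the synthetic data are hypothetical), each unlabeled superpixel receives the majority label of its nearest annotated superpixels in feature space:

```python
import numpy as np

def propagate_labels(train_feats, train_labels, test_feats, k=5):
    """Assign each test superpixel the majority label of its k nearest
    training superpixels (Euclidean distance in feature space)."""
    out = []
    for f in test_feats:
        d = np.linalg.norm(train_feats - f, axis=1)  # distances to all annotated superpixels
        nn = np.argsort(d)[:k]                       # indices of the k closest
        votes = np.bincount(train_labels[nn])
        out.append(int(np.argmax(votes)))            # majority vote among neighbours
    return np.asarray(out)

# Toy data: two well-separated clusters of 8-D superpixel descriptors.
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0, 0.1, (20, 8)), rng.normal(1, 0.1, (20, 8))])
labels = np.array([0] * 20 + [1] * 20)
test = np.vstack([rng.normal(0, 0.1, (5, 8)), rng.normal(1, 0.1, (5, 8))])
print(propagate_labels(train, labels, test))  # → [0 0 0 0 0 1 1 1 1 1]
```

This matching operates purely on feature distances, which illustrates why pixel- or superpixel-level correspondences alone are weak for clothing: garments with similar local appearance but different semantics are easily confused.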
Traditional efforts on scene understanding mainly focused on capturing scene appearances, structures, and spatial contexts by developing combinatorial models, e.g., CRFs @cite_9 @cite_17 , Texton Forests @cite_0 , and graph grammars @cite_4 . These models were generally founded on supervised learning techniques and required manually prepared training data labeled at the pixel level.
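To make the CRF idea concrete, the following minimal sketch (our own toy illustration, not the actual models of @cite_9 or @cite_17 ) defines a Potts-style energy over a 4-connected grid of per-pixel unary scores and reduces it greedily with iterated conditional modes:

```python
import numpy as np

def icm(unary, pairwise_weight=1.0, iters=10):
    """Minimize E(y) = sum_i unary[i, y_i] + w * sum_{i~j} [y_i != y_j]
    over a 4-connected H x W grid using iterated conditional modes."""
    H, W, L = unary.shape
    y = unary.argmin(axis=2)  # initialize from unary scores alone
    for _ in range(iters):
        changed = False
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts penalty for disagreeing with this neighbour
                        costs += pairwise_weight * (np.arange(L) != y[ni, nj])
                best = int(costs.argmin())
                if best != y[i, j]:
                    y[i, j] = best
                    changed = True
        if not changed:
            break
    return y

# Toy example: a two-label field with noisy unary scores; the pairwise
# smoothing term removes isolated label flips.
rng = np.random.default_rng(1)
gt = np.zeros((8, 8), dtype=int); gt[:, 4:] = 1
unary = np.stack([(gt == 0) * -1.0, (gt == 1) * -1.0], axis=2)
unary += rng.normal(0, 0.6, unary.shape)  # corrupt the unary scores
print((icm(unary, pairwise_weight=1.0) == gt).mean())
```

ICM only reaches a local minimum; the cited models use stronger inference (e.g., graph cuts or message passing), but the energy structure is the same.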
Several weakly supervised methods have been proposed that require only image-level labels indicating the classes present in each image. For example, @cite_16 proposed to learn object classes based on unsupervised image segmentation, @cite_8 learned classification models for all scene labels by selecting representative training samples, and multiple instance learning was utilized in @cite_7 . Nonparametric approaches have also been studied that solve the problem by searching and matching against an auxiliary image database. For example, an efficient structure-aware matching algorithm was presented in @cite_6 to transfer labels from the database to the target image, although pixelwise annotation was still required for the auxiliary images.
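The multiple-instance view behind such weak supervision can be sketched as follows (a hypothetical toy of our own, not the algorithm of @cite_7 ): each image is a bag of segment features carrying only an image-level tag, and segments in positively tagged images that lie far from all segments of negatively tagged images are mined as candidate object regions:

```python
import numpy as np

def mine_object_segments(pos_bags, neg_bags, thresh=0.5):
    """For each segment of a positively tagged image, compute the distance to
    the nearest segment of any negatively tagged image; segments far from all
    negative evidence are kept as candidate object regions."""
    neg = np.vstack(neg_bags)
    keep = []
    for X in pos_bags:
        d = np.linalg.norm(X[:, None, :] - neg[None, :, :], axis=2).min(axis=1)
        keep.append(X[d > thresh])
    return keep

rng = np.random.default_rng(2)
# Positive images: three background segments plus one "object" segment near (1, 1);
# negative images: background segments only, near (-1, -1).
pos = [np.vstack([rng.normal(-1, 0.1, (3, 2)), rng.normal(1, 0.1, (1, 2))])
       for _ in range(5)]
neg = [rng.normal(-1, 0.1, (4, 2)) for _ in range(5)]
mined = mine_object_segments(pos, neg)
print([len(m) for m in mined])  # → [1, 1, 1, 1, 1]
```

Real MIL methods learn a scoring function jointly with this witness selection rather than thresholding raw distances, but the bag-level supervision signal is the same.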
1502.00743
2950368212
This paper investigates how to extract objects-of-interest without relying on hand-crafted features or sliding-window approaches, aiming to jointly solve two sub-tasks: (i) rapidly localizing salient objects from images, and (ii) accurately segmenting the objects based on the localizations. We present a general joint task learning framework, in which each task (either object localization or object segmentation) is tackled via a multi-layer convolutional neural network, and the two networks work collaboratively to boost performance. In particular, we propose to incorporate latent variables bridging the two networks in a joint optimization manner. The first network directly predicts the positions and scales of salient objects from raw images, and the latent variables adjust the object localizations to feed the second network that produces pixelwise object masks. An EM-type method is presented for the optimization, iterating over two steps: (i) using the two networks, it estimates the latent variables by employing an MCMC-based sampling method; (ii) it optimizes the parameters of the two networks jointly via back-propagation, with the latent variables fixed. Extensive experiments suggest that our framework significantly outperforms other state-of-the-art approaches in both accuracy and efficiency (e.g. 1000 times faster than competing approaches).
Extracting pixelwise objects-of-interest from an image, our work is related to salient region and object detection @cite_34 @cite_19 @cite_0 @cite_26 . These methods mainly focused on feature engineering and graph-based segmentation. For example, @cite_19 proposed a regional-contrast-based saliency extraction algorithm and further segmented objects by applying an iterative version of GrabCut. Some approaches @cite_9 @cite_14 trained object appearance models and utilized spatial or geometric priors to address this task; @cite_9 proposed to transfer segmentation masks from training data to testing images by searching for and matching visually similar objects within sliding windows. Other related approaches @cite_23 @cite_29 simultaneously processed a batch of images for object discovery and co-segmentation, but they often required category information as a prior.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_9", "@cite_29", "@cite_0", "@cite_19", "@cite_23", "@cite_34" ], "mid": [ "2037954058", "2605929543", "2155080527", "1969840923", "2593150419", "2056860348", "1995444699", "1969366022" ], "abstract": [ "Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.", "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. 
In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.", "We present a generic detection localization algorithm capable of searching for a visual object of interest without training. The proposed method operates using a single example of an object of interest to find similar matches, does not require prior knowledge (learning) about objects being sought, and does not require any preprocessing step or segmentation of a target image. Our method is based on the computation of local regression kernels as descriptors from a query, which measure the likeness of a pixel to its surroundings. Salient features are extracted from said descriptors and compared against analogous features from the target image. This comparison is done using a matrix generalization of the cosine similarity measure. We illustrate optimality properties of the algorithm using a naive-Bayes framework. 
The algorithm yields a scalar resemblance map, indicating the likelihood of similarity between the query and all patches in the target image. By employing nonparametric significance tests and nonmaxima suppression, we detect the presence and location of objects similar to the given query. The approach is extended to account for large variations in scale and rotation. High performance is demonstrated on several challenging data sets, indicating successful detection of objects in diverse contexts and under different imaging conditions.", "Abstract This paper pertains to the detection of objects located in complex backgrounds. A feature-based segmentation approach to the object detection problem is pursued, where the features are computed over multiple spatial orientations and frequencies. The method proceeds as follows: a given image is passed through a bank of even-symmetric Gabor filters. A selection of these filtered images is made and each (selected) filtered image is subjected to a nonlinear (sigmoidal like) transformation. Then, a measure of texture energy is computed in a window around each transformed image pixel. The texture energy (“Gabor features”) and their spatial locations are inputted to a squared-error clustering algorithm. This clustering algorithm yields a segmentation of the original image—it assigns to each pixel in the image a cluster label that identifies the amount of mean local energy the pixel possesses across different spatial orientations and frequencies. The method is applied to a number of visual and infrared images, each one of which contains one or more objects. The region corresponding to the object is usually segmented correctly, and a unique signature of “Gabor features” is typically associated with the segment containing the object(s) of interest. Experimental results are provided to illustrate the usefulness of this object detection method in a number of problem domains. 
These problems arise in IVHS, military reconnaissance, fingerprint analysis, and image database query.", "In this paper, we propose an unsupervised salient object segmentation approach using saliency and object features. In the proposed method, we utilize occlusion boundaries to construct a region-prior map which is then enhanced using object properties. To reject the non-salient regions, a region rejection strategy is employed based on the amount of detail (saliency information) and density of KAZE keypoints contained in them. Using the region rejection scheme, we obtain a threshold for binarizing the saliency map. The binarized saliency map is used to form a salient superpixel cluster. Finally, an iterative grabcut segmentation is applied with salient texture keypoints (SIFT keypoints on the Gabor convolved texture map) supplemented with salient KAZE keypoints (keypoints inside saliency cluster) as the foreground seeds and the binarized saliency map (obtained using the region rejection strategy) as a probably foreground region. We perform experiments on several datasets and show that the proposed segmentation framework outperforms the state of the art unsupervised salient object segmentation approaches on various performance metrics. Display Omitted Effective object segmentation using saliency and KAZE features is proposed.Region rejection strategy utilizing saliency and density of KAZE keypoints.KAZE keypoints are most suited for characterization of boundaryness.Objectness level information is enhanced with the help of salient keypoints.Outperform state of the art unsupervised salient object segmentation techniques.", "In this paper we present a Bayesian framework for parsing images into their constituent visual patterns. The parsing algorithm optimizes the posterior probability and outputs a scene representation as a \"parsing graph\", in a spirit similar to parsing sentences in speech and natural language. 
The algorithm constructs the parsing graph and re-configures it dynamically using a set of moves, which are mostly reversible Markov chain jumps. This computational framework integrates two popular inference approaches--generative (top-down) methods and discriminative (bottom-up) methods. The former formulates the posterior probability in terms of generative models for images defined by likelihood functions and priors. The latter computes discriminative probabilities based on a sequence (cascade) of bottom-up tests filters. In our Markov chain algorithm design, the posterior probability, defined by the generative models, is the invariant (target) probability for the Markov chain, and the discriminative probabilities are used to construct proposal probabilities to drive the Markov chain. Intuitively, the bottom-up discriminative probabilities activate top-down generative models. In this paper, we focus on two types of visual patterns--generic visual patterns, such as texture and shading, and object patterns including human faces and text. These types of patterns compete and cooperate to explain the image and so image parsing unifies image segmentation, object detection, and recognition (if we use generic visual patterns only then image parsing will correspond to image segmentation (Tu and Zhu, 2002. IEEE Trans. PAMI, 24(5):657--673). We illustrate our algorithm on natural images of complex city scenes and show examples where image segmentation can be improved by allowing object specific knowledge to disambiguate low-level segmentation cues, and conversely where object detection can be improved by using generic visual patterns to explain away shadows and occlusions.", "This paper presents a novel method for detecting and localizing objects of a visual category in cluttered real-world scenes. Our approach considers object categorization and figure-ground segmentation as two interleaved processes that closely collaborate towards a common goal. 
As shown in our work, the tight coupling between those two processes allows them to benefit from each other and improve the combined performance. The core part of our approach is a highly flexible learned representation for object shape that can combine the information observed on different training examples in a probabilistic extension of the Generalized Hough Transform. The resulting approach can detect categorical objects in novel images and automatically infer a probabilistic segmentation from the recognition result. This segmentation is then in turn used to again improve recognition by allowing the system to focus its efforts on object pixels and to discard misleading influences from the background. Moreover, the information from where in the image a hypothesis draws its support is employed in an MDL based hypothesis verification stage to resolve ambiguities between overlapping hypotheses and factor out the effects of partial occlusion. An extensive evaluation on several large data sets shows that the proposed system is applicable to a range of different object categories, including both rigid and articulated objects. In addition, its flexible representation allows it to achieve competitive object detection performance already from training sets that are between one and two orders of magnitude smaller than those used in comparable systems.", "In this paper, we address the problems of contour detection, bottom-up grouping, object detection and semantic segmentation on RGB-D data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset (, ECCV, 2012). We propose algorithms for object boundary detection and hierarchical segmentation that generalize the @math gPb-ucm approach of (TPAMI, 2011) by making effective use of depth information. We show that our system can label each contour with its type (depth, normal or albedo). 
We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We train RGB-D object detectors by analyzing and computing histogram of oriented gradients on the depth image and using them with deformable part models (, TPAMI, 2010). We observe that this simple strategy for training object detectors significantly outperforms more complicated models in the literature. We then turn to the problem of semantic segmentation for which we propose an approach that classifies superpixels into the dominant object categories in the NYUD2 dataset. We design generic and class-specific features to encode the appearance and geometry of objects. We also show that additional features computed from RGB-D object detectors and scene classifiers further improves semantic segmentation accuracy. In all of these tasks, we report significant improvements over the state-of-the-art." ] }
1502.00743
2950368212
This paper investigates how to extract objects-of-interest without relying on hand-crafted features or sliding-window approaches, aiming to jointly solve two sub-tasks: (i) rapidly localizing salient objects from images, and (ii) accurately segmenting the objects based on the localizations. We present a general joint task learning framework, in which each task (either object localization or object segmentation) is tackled via a multi-layer convolutional neural network, and the two networks work collaboratively to boost performance. In particular, we propose to incorporate latent variables bridging the two networks in a joint optimization manner. The first network directly predicts the positions and scales of salient objects from raw images, and the latent variables adjust the object localizations to feed the second network that produces pixelwise object masks. An EM-type method is presented for the optimization, iterating over two steps: (i) using the two networks, it estimates the latent variables by employing an MCMC-based sampling method; (ii) it optimizes the parameters of the two networks jointly via back-propagation, with the latent variables fixed. Extensive experiments suggest that our framework significantly outperforms other state-of-the-art approaches in both accuracy and efficiency (e.g. 1000 times faster than competing approaches).
Recently resurgent deep learning methods have also been applied to object detection and image segmentation @cite_30 @cite_25 @cite_15 @cite_13 @cite_18 @cite_3 @cite_27 @cite_2 . Among these works, @cite_8 detected objects by training category-level convolutional neural networks, @cite_2 combined multiple components (e.g., feature extraction, occlusion handling, and classification) within a deep architecture for human detection, and @cite_13 presented multiscale recursive neural networks for robust image segmentation. These methods generally achieved impressive performance, but they usually rely on sliding detection windows over the scales and positions of testing images. Very recently, @cite_10 adopted neural networks to recognize object categories while predicting potential object localizations without exhaustive enumeration; this work inspired the design of our first network for localizing objects. To the best of our knowledge, our framework is the first to collaboratively optimize the different tasks by introducing latent variables learned jointly with the network parameters.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_8", "@cite_10", "@cite_3", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_25" ], "mid": [ "2963542991", "1487583988", "2410641892", "1929903369", "2559348937", "2951001760", "2068730032", "2949150497", "2949192504", "2560096627" ], "abstract": [ "Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. 
In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets.", "Deep convolutional neural networks (CNN) have seen tremendous success in large-scale generic object recognition. 
In comparison with generic object recognition, fine-grained image classification (FGIC) is much more challenging because (i) fine-grained labeled data is much more expensive to acquire (usually requiring domain expertise); (ii) there exists large intra-class and small inter-class variance. Most recent work exploiting deep CNN for image recognition with small training data adopts a simple strategy: pre-train a deep CNN on a large-scale external dataset (e.g., ImageNet) and fine-tune on the small-scale target data to fit the specific classification task. In this paper, beyond the fine-tuning strategy, we propose a systematic framework of learning a deep CNN that addresses the challenges from two new perspectives: (i) identifying easily annotated hyper-classes inherent in the fine-grained data and acquiring a large number of hyper-class-labeled images from readily available external sources (e.g., image search engines), and formulating the problem into multitask learning; (ii) a novel learning model by exploiting a regularization between the fine-grained recognition model and the hyper-class recognition model. We demonstrate the success of the proposed framework on two small-scale fine-grained datasets (Stanford Dogs and Stanford Cars) and on a large-scale car dataset that we collected.", "Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. 
In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization.", "Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization.", "Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). 
The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations.", "Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. 
We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations.", "Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection. We achieve this goal by means of a multi-scale deep network that consists of five convolutional layers and a bifurcated fully-connected sub-network. The section from the input layer to the fifth convolutional layer is fixed and directly lifted from a pre-trained network optimized over a large-scale object classification task. This section of the network is applied to four different scales of the image input. These four parallel and identical streams are then attached to a bifurcated sub-network consisting of two independently-trained branches. One branch learns to predict the contour likelihood (with a classification objective) whereas the other branch is trained to learn the fraction of human labelers agreeing about the contour presence at a given point (with a regression criterion). 
We show that without any feature engineering our multi-scale deep learning approach achieves state-of-the-art results in contour detection.", "Deep convolution neural networks (CNNs) have demonstrated advanced performance on single-label image classification, and various progress also has been made to apply CNN methods on multilabel image classification, which requires annotating objects, attributes, scene categories, etc., in a single shot. Recent state-of-the-art approaches to the multilabel image classification exploit the label dependencies in an image, at the global level, largely improving the labeling capacity. However, predicting small objects and visual concepts is still challenging due to the limited discrimination of the global visual features. In this paper, we propose a regional latent semantic dependencies model (RLSD) to address this problem. The utilized model includes a fully convolutional localization architecture to localize the regions that may contain multiple highly dependent labels. The localized regions are further sent to the recurrent neural networks to characterize the latent semantic dependencies at the regional level. Experimental results on several benchmark datasets show that our proposed model achieves the best performance compared to the state-of-the-art models, especially for predicting small objects occurring in the images. Also, we set up an upper bound model (RLSD+ft-RPN) using bounding-box coordinates during training, and the experimental results also show that our RLSD can approach the upper bound without using the bounding-box annotations, which is more realistic in the real world." ] }
1502.00702
2169585179
In this paper, we propose a novel model for high-dimensional data, called the Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a linear orthogonal projection and a finite mixture model under a unified generative modeling framework. The HOPE model itself can be learned unsupervised from unlabelled data based on maximum likelihood estimation, as well as discriminatively from labelled data. More interestingly, we have shown that the proposed HOPE models are closely related to neural networks (NNs) in the sense that each hidden layer can be reformulated as a HOPE model. As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work and, more importantly, to learn NNs in either supervised or unsupervised ways. In this work, we have investigated the HOPE framework to learn NNs for several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results have shown that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning and supervised or semi-supervised learning.
Similar to PCA, Fisher's linear discriminant analysis (LDA) can also be viewed as a linear dimensionality reduction technique. However, PCA is unsupervised in the sense that it depends only on the data, while Fisher's LDA is supervised since it uses both the data and class-label information. The high-dimensional data are linearly projected onto a subspace where the various classes are best distinguished as measured by the Fisher criterion. In @cite_0 , the so-called heteroscedastic discriminant analysis (HDA) is proposed to extend LDA to high-dimensional data with heteroscedastic covariance, where a linear projection is learned from the data and class labels based on the maximum likelihood criterion.
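As a concrete illustration of the Fisher criterion discussed above, the two-class case has a closed-form projection direction, w ∝ Sw⁻¹(m₁ − m₀). The sketch below applies it to synthetic Gaussian data; the variable names, class means, and sample sizes are our own illustration rather than anything from @cite_0 :

```python
import numpy as np

rng = np.random.default_rng(0)
# Two Gaussian classes in 3-D with (approximately) shared unit covariance.
X0 = rng.normal(loc=[0.0, 0.0, 0.0], size=(200, 3))
X1 = rng.normal(loc=[2.0, 1.0, 0.0], size=(200, 3))

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
# Within-class scatter: sum of the per-class sample covariances.
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
# Fisher direction maximizing between-class over within-class variance.
w = np.linalg.solve(Sw, m1 - m0)
w /= np.linalg.norm(w)

# Projected class means should be well separated along w.
separation = (X1 @ w).mean() - (X0 @ w).mean()
```

Solving the linear system instead of forming Sw⁻¹ explicitly is the usual numerically preferable choice.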
{ "cite_N": [ "@cite_0" ], "mid": [ "2108146080" ], "abstract": [ "Fisher's linear discriminant analysis (LDA) is a classical multivariate technique both for dimension reduction and classification. The data vectors are transformed into a low dimensional subspace such that the class centroids are spread out as much as possible. In this subspace LDA works as a simple prototype classifier with linear decision boundaries. However, in many applications the linear boundaries do not adequately separate the classes. We present a nonlinear generalization of discriminant analysis that uses the kernel trick of representing dot products by kernel functions. The presented algorithm allows a simple formulation of the EM-algorithm in terms of kernel functions which leads to a unique concept for unsupervised mixture analysis, supervised discriminant analysis and semi-supervised discriminant analysis with partially unlabelled observations in feature spaces." ] }
1904.12164
2941755003
The problem of influence maximization is to select the most influential individuals in a social network. With the popularity of social network sites and the development of viral marketing, the importance of the problem has increased. The influence maximization problem is NP-hard, and therefore no polynomial-time algorithm can solve it unless P=NP. Many heuristics have been proposed to find a reasonably good solution in a shorter time. In this paper, we propose two heuristic algorithms to find good solutions. The heuristics are based on two ideas: (1) vertices of high degree have more influence in the network, and (2) nearby vertices influence almost the same sets of vertices. We evaluate our algorithms on several well-known data sets and show that our heuristics achieve better results (up to @math in influence spread) for this problem in a shorter time (up to @math improvement in the running time).
In order to improve the efficiency of the computations, many studies have been made. Leskovec et al. @cite_3 proposed the Cost-Effective Lazy Forward (CELF) optimization, which reduces the computational cost of estimating the influence spread by exploiting the submodularity of the objective function.
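The lazy-forward idea can be sketched as follows. This is a generic illustration assuming a monotone submodular spread function: stale marginal gains sit in a max-heap and are only recomputed when they reach the top. The toy set-coverage `spread` stands in for the Monte-Carlo influence estimates used in practice, and all names and data are hypothetical rather than from @cite_3 :

```python
import heapq

def celf(candidates, spread, k):
    """Greedy selection with lazy re-evaluation of marginal gains (CELF)."""
    selected, best_val = [], 0.0
    # Max-heap of (negated gain, element, iteration at which the gain was computed).
    heap = [(-spread([v]), v, 0) for v in candidates]
    heapq.heapify(heap)
    for it in range(1, k + 1):
        while True:
            neg_gain, v, stamp = heapq.heappop(heap)
            if stamp == it:
                # Gain is fresh: by submodularity no other element can beat it.
                selected.append(v)
                best_val += -neg_gain
                break
            # Stale gain: recompute the marginal gain w.r.t. the current selection.
            gain = spread(selected + [v]) - best_val
            heapq.heappush(heap, (-gain, v, it))
    return selected, best_val

# Toy spread function: number of distinct vertices covered by the chosen seeds.
influence = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}
spread = lambda seeds: len(set().union(set(), *(influence[s] for s in seeds)))
seeds, value = celf(list(influence), spread, 2)  # picks "c" first, then "a"
```

Submodularity is what makes the laziness sound: an element's marginal gain can only shrink as the selection grows, so a fresh gain at the top of the heap is guaranteed to be the maximum.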
{ "cite_N": [ "@cite_3" ], "mid": [ "2030378176" ], "abstract": [ "Influence maximization, defined as finding a small subset of nodes that maximizes spread of influence in social networks, is NP-hard under both Linear Threshold (LT) and Independent Cascade (IC) models, where a line of greedy heuristic algorithms have been proposed. The simple greedy algorithm [14] achieves an approximation ratio of 1-1 e. The advanced CELF algorithm [16], by exploiting the sub modular property of the spread function, runs 700 times faster than the simple greedy algorithm on average. However, CELF is still inefficient [4], as the first iteration calls for N times of spread estimations (N is the number of nodes in networks), which is computationally expensive especially for large networks. To this end, in this paper we derive an upper bound function for the spread function. The bound can be used to reduce the number of Monte-Carlo simulation calls in greedy algorithms, especially in the first iteration of initialization. Based on the upper bound, we propose an efficient Upper Bound based Lazy Forward algorithm (UBLF in short), by incorporating the bound into the CELF algorithm. We test and compare our algorithm with prior algorithms on real-world data sets. Experimental results demonstrate that UBLF, compared with CELF, reduces more than 95 Monte-Carlo simulations and achieves at least 2-5 times speed-raising when the seed set is small." ] }
1904.12164
2941755003
The problem of influence maximization is to select the most influential individuals in a social network. With the popularity of social network sites and the development of viral marketing, the importance of the problem has increased. The influence maximization problem is NP-hard, and therefore no polynomial-time algorithm can solve it unless P=NP. Many heuristics have been proposed to find a reasonably good solution in a shorter time. In this paper, we propose two heuristic algorithms to find good solutions. The heuristics are based on two ideas: (1) vertices of high degree have more influence in the network, and (2) nearby vertices influence almost the same sets of vertices. We evaluate our algorithms on several well-known data sets and show that our heuristics achieve better results (up to @math in influence spread) for this problem in a shorter time (up to @math improvement in the running time).
Chen et al. @cite_8 proposed new greedy algorithms for the independent cascade and weighted cascade models. They made the greedy algorithm faster by combining their algorithms with CELF. They also proposed a new heuristic that produces results of quality close to the greedy algorithm, while being much faster and performing better than the traditional degree and distance centrality heuristics. In order to avoid running repeated influence propagation simulations, Borgs et al. @cite_16 generated a random hypergraph according to the reverse-reachability probability of vertices in the original graph and selected the @math vertices that cover the largest number of vertices in the hypergraph. They guarantee a @math approximation ratio of the solution with probability at least @math . Later, Tang et al. @cite_14 @cite_15 proposed TIM and IMM to address the drawbacks of Borgs et al.'s algorithm @cite_16 and improved its running time. Bucur and Iacca @cite_1 and Krömer and Nowaková @cite_11 used genetic algorithms for the influence maximization problem. Weskida and Michalski @cite_6 used GPU acceleration in their genetic algorithm to improve its efficiency.
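The reverse-reachability idea behind @cite_16 (and TIM/IMM) can be sketched roughly as follows, assuming the independent cascade model with a single uniform edge probability: sample many random reverse-reachable (RR) sets, then greedily pick the seeds that cover the most sets. The star graph, function names, and parameter values are our own illustration; the cited papers supply the sample-size analysis that makes the approximation guarantee precise:

```python
import random

random.seed(0)  # fixed seed so the toy run below is reproducible

def rr_set(graph_rev, nodes, p):
    """One random reverse-reachable set under the IC model: walk backwards
    from a uniformly random root, keeping each incoming edge with prob. p."""
    root = random.choice(nodes)
    seen, frontier = {root}, [root]
    while frontier:
        u = frontier.pop()
        for v in graph_rev.get(u, []):
            if v not in seen and random.random() < p:
                seen.add(v)
                frontier.append(v)
    return seen

def ris_seeds(graph_rev, nodes, k, p, num_sets):
    rr = [rr_set(graph_rev, nodes, p) for _ in range(num_sets)]
    seeds = []
    for _ in range(k):
        # Greedy max coverage over the sampled RR sets.
        best = max(nodes, key=lambda v: sum(v in s for s in rr))
        seeds.append(best)
        rr = [s for s in rr if best not in s]  # remove covered sets
    return seeds

# Star graph: "hub" points to four leaves; graph_rev maps node -> in-neighbours.
graph_rev = {"hub": [], "l1": ["hub"], "l2": ["hub"], "l3": ["hub"], "l4": ["hub"]}
seeds = ris_seeds(graph_rev, list(graph_rev), k=1, p=0.5, num_sets=500)
```

Intuitively, a vertex that appears in many RR sets is reachable (in the forward direction) from many random roots, which is exactly what a high-influence seed should be.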
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_1", "@cite_6", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "2552732996", "2151690061", "2030378176", "2069278600", "2139297408", "2605027943", "2011730242" ], "abstract": [ "Nowadays, in the world of limited attention, the techniques that maximize the spread of social influence are more than welcomed. Companies try to maximize their profits on sales by providing customers with free samples believing in the power of word-of-mouth marketing, governments and non-governmental organizations often want to introduce positive changes in the society by appropriately selecting individuals, or election candidates want to spend the least budget yet still win the election. In this work we propose the use of an evolutionary algorithm as a means for selecting seeds in social networks. By framing the problem as a genetic algorithm challenge we show that it is possible to outperform the well-known greedy algorithm in the problem of influence maximization for the linear threshold model in both: quality (up to 16% better) and efficiency (up to 35 times faster). We implemented these two algorithms by using a GPGPU approach, showing that the evolutionary algorithm can also benefit from GPU acceleration, making it efficient and scaling better than the greedy algorithm. As the experiments conducted using three real world datasets reveal, the evolutionary approach proposed in this paper outperforms the greedy algorithm in terms of the outcome and it also scales much better than the greedy algorithm when the network size is increasing. The only drawback in the GPGPU approach so far is the maximum size of the network that can be processed - it is limited by the memory of the GPU card. We believe that by showing the superiority of the evolutionary approach over the greedy algorithm, we will motivate the scientific community to look for an idea to overcome this limitation of the GPU approach - we also suggest one of the possible paths to explore. 
Since the proposed approach is based only on topological features of the network, not on the attributes of nodes, its applications are broader than dataset-specific ones.", "The greedy sequential algorithm for maximal independent set (MIS) loops over the vertices in an arbitrary order adding a vertex to the resulting set if and only if no previous neighboring vertex has been added. In this loop, as in many sequential loops, each iterate will only depend on a subset of the previous iterates (i.e. knowing that any one of a vertex's previous neighbors is in the MIS, or knowing that it has no previous neighbors, is sufficient to decide its fate one way or the other). This leads to a dependence structure among the iterates. If this structure is shallow then running the iterates in parallel while respecting the dependencies can lead to an efficient parallel implementation mimicking the sequential algorithm. In this paper, we show that for any graph, and for a random ordering of the vertices, the dependence length of the sequential greedy MIS algorithm is polylogarithmic (O(log^2 n) with high probability). Our results extend previous results that show polylogarithmic bounds only for random graphs. We show similar results for greedy maximal matching (MM). For both problems we describe simple linear-work parallel algorithms based on the approach. The algorithms allow for a smooth tradeoff between more parallelism and reduced work, but always return the same result as the sequential greedy algorithms. We present experimental results that demonstrate efficiency and the tradeoff between work and parallelism.", "Influence maximization, defined as finding a small subset of nodes that maximizes spread of influence in social networks, is NP-hard under both Linear Threshold (LT) and Independent Cascade (IC) models, where a line of greedy heuristic algorithms have been proposed. The simple greedy algorithm [14] achieves an approximation ratio of 1 - 1/e. 
The advanced CELF algorithm [16], by exploiting the submodular property of the spread function, runs 700 times faster than the simple greedy algorithm on average. However, CELF is still inefficient [4], as the first iteration calls for N spread estimations (N is the number of nodes in the network), which is computationally expensive especially for large networks. To this end, in this paper we derive an upper bound function for the spread function. The bound can be used to reduce the number of Monte-Carlo simulation calls in greedy algorithms, especially in the first iteration of initialization. Based on the upper bound, we propose an efficient Upper Bound based Lazy Forward algorithm (UBLF in short), by incorporating the bound into the CELF algorithm. We test and compare our algorithm with prior algorithms on real-world data sets. Experimental results demonstrate that UBLF, compared with CELF, reduces more than 95% of the Monte-Carlo simulations and achieves at least a 2-5x speed-up when the seed set is small.", "We present a new efficient algorithm for the search version of the approximate Closest Vector Problem with Preprocessing (CVPP). Our algorithm achieves an approximation factor of O(n/sqrt(log n)), improving on the previous best of O(n^1.5) due to Lagarias, Lenstra, and Schnorr [hkzbabai]. We also show, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve the problem (with the slightly worse approximation factor of O(n)). We remark that this still leaves a large gap with respect to the decisional version of CVPP, where the best known approximation factor is O(sqrt(n/log n)) due to Aharonov and Regev [AharonovR04]. To achieve these results, we show a reduction to the same problem restricted to target points that are close to the lattice and a more efficient reduction to a harder problem, Bounded Distance Decoding with preprocessing (BDDP). 
Combining either reduction with the previous best-known algorithm for BDDP by Liu, Lyubashevsky, and Micciancio [LiuLM06] gives our main result. In the setting of CVP without preprocessing, we also give a reduction from (1+eps)·gamma-approximate CVP to gamma-approximate CVP where the target is at distance at most 1 + 1/eps times the minimum distance (the length of the shortest non-zero vector), which relies on the lattice sparsification techniques of Dadush and Kun [DadushK13]. As our final and most technical contribution, we present a substantially more efficient variant of the LLM algorithm (both in terms of run-time and amount of preprocessing advice), and via an improved analysis, show that it can decode up to a distance proportional to the reciprocal of the smoothing parameter of the dual lattice [MR04]. We show that this is never smaller than the LLM decoding radius, and that it can be up to an Omega~(sqrt(n)) factor larger.", "In this work, we study the notion of competing campaigns in a social network and address the problem of influence limitation where a \"bad\" campaign starts propagating from a certain node in the network and use the notion of limiting campaigns to counteract the effect of misinformation. The problem can be summarized as identifying a subset of individuals that need to be convinced to adopt the competing (or \"good\") campaign so as to minimize the number of people that adopt the \"bad\" campaign at the end of both propagation processes. We show that this optimization problem is NP-hard and provide approximation guarantees for a greedy solution for various definitions of this problem by proving that they are submodular. We experimentally compare the performance of the greedy method to various heuristics. The experiments reveal that in most cases inexpensive heuristics such as degree centrality compare well with the greedy approach. 
We also study the influence limitation problem in the presence of missing data where the current states of nodes in the network are only known with a certain probability and show that prediction in this setting is a supermodular problem. We propose a prediction algorithm that is based on generating random spanning trees and evaluate the performance of this approach. The experiments reveal that using the prediction algorithm, we are able to tolerate about 90% missing data before the performance of the algorithm starts degrading, and even with large amounts of missing data the performance degrades only to 75% of the performance that would be achieved with complete data.", "Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Motivated by the fact that in several cases a matching in a graph is stable if and only if it is produced by a greedy algorithm, we study the problem of computing a maximum weight greedy matching on weighted graphs, termed GREEDY-MATCHING. In wide contrast to the maximum weight matching problem, for which many efficient algorithms are known, we prove that GREEDY-MATCHING is strongly NP-hard and APX-complete, and thus it does not admit a PTAS unless P=NP, even on graphs with maximum degree at most 3 and with at most three different integer edge weights. Furthermore we prove that GREEDY-MATCHING is strongly NP-hard if the input graph is in addition bipartite. Moreover we consider three natural parameters of the problem, for which we establish a sharp threshold behavior between NP-hardness and computational tractability. On the positive side, we present a randomized approximation algorithm (RGMA) for GREEDY-MATCHING on a special class of weighted graphs, called bush graphs. We highlight an unexpected connection between RGMA and the approximation of maximum cardinality matching in unweighted graphs via randomized greedy algorithms. 
We show that, if the approximation ratio of RGMA is ρ, then for every ϵ > 0 the randomized MRG algorithm of ( 1995) gives a (ρ - ϵ)-approximation for the maximum cardinality matching. We conjecture that a tight bound for ρ is 2/3; we prove our conjecture true for four subclasses of bush graphs. Proving a tight bound for the approximation ratio of MRG on unweighted graphs (and thus also proving a tight value for ρ) is a long-standing open problem (Poloczek and Szegedy 2012). This unexpected relation of our RGMA algorithm with the MRG algorithm may provide new insights for solving this problem.", "We study an online assignment problem, motivated by Adwords Allocation, in which queries are to be assigned to bidders with budget constraints. We analyze the performance of the Greedy algorithm (which assigns each query to the highest bidder) in a randomized input model with queries arriving in a random permutation. Our main result is a tight analysis of Greedy in this model showing that it has a competitive ratio of 1 - 1/e for maximizing the value of the assignment. We also consider the more standard i.i.d. model of input, and show that our analysis holds there as well. This is to be contrasted with the worst case analysis of [MSVV05] which shows that Greedy has a ratio of 1/2, and that the optimal algorithm presented there has a ratio of 1 - 1/e. The analysis of Greedy is important in the Adwords setting because it is the natural allocation algorithm for an auction-style process. From a theoretical perspective, our result simplifies and generalizes the classic algorithm of Karp, Vazirani and Vazirani for online bipartite matching. Our results include a new proof to show that the Ranking algorithm of [KVV90] has a ratio of 1 - 1/e in the worst case. It has been recently discovered [KV07] (independent of our results) that one of the crucial lemmas in [KVV90], related to a certain reduction, is incorrect. 
Our proof is direct, in that it does not go via such a reduction, which also enables us to generalize the analysis to our online assignment problem." ] }
1904.12164
2941755003
The problem of influence maximization is to select the most influential individuals in a social network. With the popularity of social network sites and the development of viral marketing, the importance of the problem has increased. The influence maximization problem is NP-hard, and therefore no polynomial-time algorithm can solve it unless P=NP. Many heuristics have been proposed to find a reasonably good solution in a shorter time. In this paper, we propose two heuristic algorithms to find good solutions. The heuristics are based on two ideas: (1) vertices of high degree have more influence in the network, and (2) nearby vertices influence almost the same sets of vertices. We evaluate our algorithms on several well-known data sets and show that our heuristics achieve better results (up to @math in influence spread) for this problem in a shorter time (up to @math improvement in the running time).
There are some community-based algorithms for the influence maximization problem that partition the graph into small subgraphs and select the most influential vertices from each subgraph. Chen et al. @cite_2 used the H-Clustering algorithm and Manaskasemsak et al. @cite_13 used the Markov clustering algorithm for community detection. Song et al. @cite_12 divided the graph into communities and then selected the most influential vertices with a dynamic programming algorithm.
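The community-based pipeline can be sketched as follows. Here connected components stand in for the actual clustering step (H-Clustering or Markov clustering in the cited work), and a top-degree pick stands in for the per-community seed selection, so this is an illustrative skeleton rather than any of the cited algorithms:

```python
def connected_components(adj):
    """Toy community detection: connected components of an adjacency dict."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        comps.append(comp)
    return comps

def community_seeds(adj, k):
    """Take the k largest communities and pick the top-degree vertex of each."""
    comps = sorted(connected_components(adj), key=len, reverse=True)
    return [max(comp, key=lambda v: len(adj[v])) for comp in comps[:k]]

adj = {
    "a": ["b", "c", "d"], "b": ["a"], "c": ["a"], "d": ["a"],   # community 1
    "x": ["y", "z"], "y": ["x", "z"], "z": ["x", "y"],          # community 2
}
seeds = community_seeds(adj, 2)  # one seed per community, e.g. "a" plus one of x/y/z
```

The appeal of this decomposition is that spread estimation is confined to small subgraphs, which is exactly the time-efficiency argument made by the cited frameworks.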
{ "cite_N": [ "@cite_13", "@cite_12", "@cite_2" ], "mid": [ "2106597862", "2964201038", "2889208719" ], "abstract": [ "Given a social graph, the problem of influence maximization is to determine a set of nodes that maximizes the spread of influences. While some recent research has studied the problem of influence maximization, these works are generally too time consuming for practical use in a large-scale social network. In this article, we develop a new framework, community-based influence maximization (CIM), to tackle the influence maximization problem with an emphasis on the time efficiency issue. Our proposed framework, CIM, comprises three phases: (i) community detection, (ii) candidate generation, and (iii) seed selection. Specifically, phase (i) discovers the community structure of the network; phase (ii) uses the information of communities to narrow down the possible seed candidates; and phase (iii) finalizes the seed nodes from the candidate set. By exploiting the properties of the community structures, we are able to avoid overlapped information and thus efficiently select the number of seeds to maximize information spreads. The experimental results on both synthetic and real datasets show that the proposed CIM algorithm significantly outperforms the state-of-the-art algorithms in terms of efficiency and scalability, with almost no compromise of effectiveness.", "We consider the canonical problem of influence maximization in social networks. Since the seminal work of Kempe, Kleinberg, and Tardos there have been two, largely disjoint efforts on this problem. The first studies the problem associated with learning the generative model that produces cascades, and the second focuses on the algorithmic challenge of identifying a set of influencers, assuming the generative model is known. 
Recent results on learning and optimization imply that in general, if the generative model is not known but rather learned from training data, no algorithm for influence maximization can yield a constant factor approximation guarantee using polynomially-many samples, drawn from any distribution. In this paper we describe a simple algorithm for maximizing influence from training data. The main idea behind the algorithm is to leverage the strong community structure of social networks and identify a set of individuals who are influential but whose communities have little overlap. Although in general, the approximation guarantee of such an algorithm is unbounded, we show that this algorithm performs well experimentally. To analyze its performance, we prove this algorithm obtains a constant factor approximation guarantee on graphs generated through the stochastic block model, traditionally used to model networks with community structure.", "In the well-studied Influence Maximization problem, the goal is to identify a set of k nodes in a social network whose joint influence on the network is maximized. A large body of recent work has justified research on Influence Maximization models and algorithms with their potential to create societal or economic value. However, in order to live up to this potential, the algorithms must be robust to large amounts of noise, for they require quantitative estimates of the influence that individuals exert on each other; ground truth for such quantities is inaccessible, and even decent estimates are very difficult to obtain. We begin to address this concern formally. First, we exhibit simple inputs on which even very small estimation errors may mislead every algorithm into highly suboptimal solutions. Motivated by this observation, we propose the Perturbation Interval model as a framework to characterize the stability of Influence Maximization against noise in the inferred diffusion network. 
Analyzing the susceptibility of specific instances to estimation errors leads to a clean algorithmic question, which we term the Influence Difference Maximization problem. However, the objective function of Influence Difference Maximization is NP-hard to approximate within a factor of O(n^(1−ε)) for any ε > 0. Given the infeasibility of diagnosing instability algorithmically, we focus on finding influential users robustly across multiple diffusion settings. We define a Robust Influence Maximization framework wherein an algorithm is presented with a set of influence functions. The algorithm’s goal is to identify a set of k nodes who are simultaneously influential for all influence functions, compared to the (function-specific) optimum solutions. We show strong approximation hardness results for this problem unless the algorithm gets to select at least a logarithmic factor more seeds than the optimum solution. However, when enough extra seeds may be selected, we show that techniques of can be used to approximate the optimum robust influence to within a factor of 1 − 1/e. We evaluate this bicriteria approximation algorithm against natural heuristics on several real-world datasets. Our experiments indicate that the worst-case hardness does not necessarily translate into bad performance on real-world datasets; all algorithms perform fairly well." ] }
1904.12191
2941057241
We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.
Approximation properties of neural networks and, more generally, nonlinear approximation were studied in detail in the nineties; see, e.g., @cite_17 @cite_2 @cite_4 . The main concern of the present paper is quite different, since we focus on the random feature model and the (recently proposed) neural tangent model. Further, our focus is on the high-dimensional regime in which @math grows with @math . Most of the approximation theory literature considers @math fixed, and @math .
{ "cite_N": [ "@cite_2", "@cite_4", "@cite_17" ], "mid": [ "2166116275", "2941057241", "2963446085" ], "abstract": [ "Approximation properties of a class of artificial neural networks are established. It is shown that feedforward networks with one layer of sigmoidal nonlinearities achieve integrated squared error of order O(1/n), where n is the number of nodes. The approximated function is assumed to have a bound on the first moment of the magnitude distribution of the Fourier transform. The nonlinear parameters associated with the sigmoidal nodes, as well as the parameters of linear combination, are adjusted in the approximation. In contrast, it is shown that for series expansions with n terms, in which only the parameters of linear combination are adjusted, the integrated squared approximation error cannot be made smaller than order 1/n^(2/d) uniformly for functions satisfying the same smoothness assumption, where d is the dimension of the input to the function. For the class of functions examined, the approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings.", "We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . 
We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.", "In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth, any widths and no unrealistic assumptions. As a result, we present an instance, for which we can answer to the following question: how difficult to directly train a deep model in theory? It is more difficult than the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice." ] }
1904.12191
2941057241
We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.
The random features model has been studied in considerable depth since the original work in @cite_21 . The classical viewpoint suggests that @math should be regarded as an approximation of the reproducing kernel Hilbert space (RKHS) @math defined by the kernel (see @cite_13 for general background). Indeed, the space @math is the RKHS defined by the following finite-rank approximation of this kernel. The paper @cite_21 proved convergence of @math to @math as functions. Subsequent work established quantitative approximation of @math by the random feature model @math . In particular, @cite_18 provides upper and lower bounds in terms of the eigenvalues of the kernel @math , which match up to logarithmic terms (see also @cite_26 @cite_11 @cite_23 for related work).
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_21", "@cite_23", "@cite_13", "@cite_11" ], "mid": [ "2418306335", "2124331852", "2950268835", "1988813039", "2941057241", "2963709899" ], "abstract": [ "A Hilbert space embedding of a distribution---in short, a kernel mean embedding---has recently emerged as a powerful tool for machine learning and inference. The basic idea behind this framework is to map distributions into a reproducing kernel Hilbert space (RKHS) in which the whole arsenal of kernel methods can be extended to probability measures. It can be viewed as a generalization of the original \"feature map\" common to support vector machines (SVMs) and other kernel methods. While initially closely associated with the latter, it has meanwhile found application in fields ranging from kernel machines and probabilistic modeling to statistical inference, causal discovery, and deep learning. The goal of this survey is to give a comprehensive review of existing work and recent advances in this research area, and to discuss the most challenging issues and open problems that could lead to new research directions. The survey begins with a brief introduction to the RKHS and positive definite kernels which forms the backbone of this survey, followed by a thorough discussion of the Hilbert space embedding of marginal distributions, theoretical guarantees, and a review of its applications. The embedding of distributions enables us to apply RKHS methods to probability measures which prompts a wide range of applications such as kernel two-sample testing, independent testing, and learning on distributional data. Next, we discuss the Hilbert space embedding for conditional distributions, give theoretical insights, and review some applications. The conditional mean embedding enables us to perform sum, product, and Bayes' rules---which are ubiquitous in graphical model, probabilistic inference, and reinforcement learning---in a non-parametric way. 
We then discuss relationships between this framework and other related areas. Lastly, we give some suggestions on future research directions.", "A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing, and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). A pseudometric on the space of probability measures can be defined as the distance between distribution embeddings: we denote this as γk, indexed by the kernel function k that defines the inner product in the RKHS. We present three theoretical properties of γk. First, we consider the question of determining the conditions on the kernel k for which γk is a metric: such k are denoted characteristic kernels. Unlike pseudometrics, a metric is zero only when two distributions coincide, thus ensuring the RKHS embedding maps all distributions uniquely (i.e., the embedding is injective). While previously published conditions may apply only in restricted circumstances (e.g., on compact domains), and are difficult to check, our conditions are straightforward and intuitive: integrally strictly positive definite kernels are characteristic. Alternatively, if a bounded continuous kernel is translation-invariant on ℝ^d, then it is characteristic if and only if the support of its Fourier transform is the entire ℝ^d. Second, we show that the distance between distributions under γk results from an interplay between the properties of the kernel and the distributions, by demonstrating that distributions are close in the embedding space when their differences occur at higher frequencies. 
Third, to understand the nature of the topology induced by γk, we relate γk to other popular metrics on probability measures, and present conditions on the kernel k under which γk metrizes the weak topology.", "A nonparametric kernel-based method for realizing Bayes' rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' rule are presented, including Bayesian computation without likelihood and filtering with a nonparametric state-space model.", "With the goal of accelerating the training and testing complexity of nonlinear kernel methods, several recent papers have proposed explicit embeddings of the input data into low-dimensional feature spaces, where fast linear methods can instead be used to generate approximate solutions. Analogous to random Fourier feature maps to approximate shift-invariant kernels, such as the Gaussian kernel, on ℝ^d, we develop a new randomized technique called random Laplace features, to approximate a family of kernel functions adapted to the semigroup structure of ℝ_+^d. This is the natural algebraic structure on the set of histograms and other non-negative data representations. We provide theoretical results on the uniform convergence of random Laplace features. 
Empirical analyses on image classification and surveillance event detection tasks demonstrate the attractiveness of using random Laplace features relative to several other feature maps proposed in the literature.", "We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.", "We show that kernel-based quadrature rules for computing integrals can be seen as a special case of random feature expansions for positive definite kernels, for a particular decomposition that always exists for such kernels. We provide a theoretical analysis of the number of required samples for a given approximation error, leading to both upper and lower bounds that are based solely on the eigenvalues of the associated integral operator and match up to logarithmic terms. 
In particular, we show that the upper bound may be obtained from independent and identically distributed samples from a specific non-uniform distribution, while the lower bound is valid for any set of points. Applying our results to kernel-based quadrature, while our results are fairly general, we recover known upper and lower bounds for the special cases of Sobolev spaces. Moreover, our results extend to the more general problem of full function approximations (beyond simply computing an integral), with results in L2- and L∞-norm that match known results for special cases. Applying our results to random features, we show an improvement of the number of random features needed to preserve the generalization guarantees for learning with Lipschitz-continuous losses." ] }
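The display equations elided from the related-work paragraph in this record describe the random-features approximation of a kernel. A standard form is sketched below (the activation σ, weight distribution τ, and width N are assumed notation, since the record's own equations were stripped):

```latex
% Random-features kernel and its finite-rank approximation (standard form;
% activation \sigma, weight law \tau, and width N are assumed notation).
K(x, x') = \mathbb{E}_{w \sim \tau}\!\left[ \sigma(\langle w, x \rangle)\,
           \sigma(\langle w, x' \rangle) \right],
\qquad
K_N(x, x') = \frac{1}{N} \sum_{i=1}^{N} \sigma(\langle w_i, x \rangle)\,
             \sigma(\langle w_i, x' \rangle),
\quad w_i \overset{\text{i.i.d.}}{\sim} \tau .
```

As N → ∞, K_N(x, x') → K(x, x') pointwise by the law of large numbers; this is the convergence of the finite-rank kernel to the limiting kernel attributed to @cite_21 in the paragraph above.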
1904.12191
2941057241
We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.
Of course, this approach generally breaks down if the dimension @math is large (technically, if it grows with @math ). This 'curse of dimensionality' is already revealed by classical lower bounds in functional approximation, see e.g. @cite_17 @cite_18 . However, previous work does not clarify what happens precisely in this high-dimensional regime. In contrast, the picture emerging from our work is remarkably simple. In particular, in the regime @math , random feature models are performing vanilla linear regression with respect to the raw features.
{ "cite_N": [ "@cite_18", "@cite_17" ], "mid": [ "2941057241", "1991143958" ], "abstract": [ "We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.", "Constructing a good approximation to a function of many variables suffers from the “curse of dimensionality”. Namely, functions on ℝ^N with smoothness of order s can in general be captured with accuracy at most O(n^{-s/N}) using linear spaces or nonlinear manifolds of dimension n. If N is large and s is not, then n has to be chosen inordinately large for good accuracy. The large value of N often precludes reasonable numerical procedures. On the other hand, there is the common belief that real world problems in high dimensions have as their solution, functions which are more amenable to numerical recovery. 
This has led to the introduction of models for these functions that do not depend on smoothness alone but also involve some form of variable reduction. In these models it is assumed that, although the function depends on N variables, only a small number of them are significant. Another variant of this principle is that the function lives on a low dimensional manifold. Since the dominant variables (respectively the manifold) are unknown, this leads to new problems of how to organize point queries to capture such functions. The present paper studies where to query the values of a ridge function f(x)=g(a⋅x) when both a∈ℝ^N and g∈C[0,1] are unknown. We establish estimates on how well f can be approximated using these point queries under the assumptions that g∈C^s[0,1]. We also study the role of sparsity or compressibility of a in such query problems." ] }
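The classical lower bounds cited in this record's related-work paragraph can be written out as a worked rate (standard approximation-theory statement; the notation is assumed, not taken from the record):

```latex
% Curse of dimensionality: for functions on \mathbb{R}^N with smoothness
% of order s, linear spaces or nonlinear manifolds of dimension n achieve
% accuracy at most
\mathrm{error}(n) = O\!\left( n^{-s/N} \right),
% so reaching accuracy \varepsilon requires
n = \Omega\!\left( \varepsilon^{-N/s} \right),
% which grows exponentially in the dimension N for fixed smoothness s.
```

For instance, with s = 2 and N = 100 the rate is n^{-1/50}, so halving the error requires roughly 2^{50} times more degrees of freedom; this is why fixed-smoothness rates become vacuous once the dimension grows with the sample size, as the paragraph above notes.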
1904.12191
2941057241
We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.
The connection between kernel methods and neural networks was recently revived by the work of Belkin and coauthors @cite_12 @cite_24 who pointed out intriguing similarities between some properties of modern deep learning models, and large scale kernel learning. A concrete explanation for this analogy was proposed in @cite_28 via the , model. This explanation postulates that, for large neural networks, the network weights do not change much during the training phase. Considering a random initialization @math and denoting by @math the change during the training phase, we linearize the neural network as Assuming @math (which is reasonable for certain random initializations), this suggests that a two-layers neural network learns a model in @math (if both layers are trained), or simply @math (if only the first layer is trained). The analysis of @cite_27 @cite_15 @cite_29 @cite_5 establishes that indeed this linearization is accurate in a certain highly overparametrized regime, namely when @math for a certain constant @math . Empirical evidence in the same direction was presented in @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_29", "@cite_24", "@cite_27", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2950743785", "2941057241", "2899748887", "2809090039", "2942052807", "2540366045", "2763894180", "2885059312" ], "abstract": [ "At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function @math (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function @math follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.", "We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. 
samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.", "Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, networks used in practice are going wider and deeper. On the theoretical side, a long line of works has been focusing on training neural networks with one hidden layer. The theory of multi-layer networks remains largely unsettled. In this work, we prove why stochastic gradient descent (SGD) can find @math on the training objective of DNNs in @math . We only make two assumptions: the inputs are non-degenerate and the network is over-parameterized. The latter means the network width is sufficiently large: @math in @math , the number of layers and in @math , the number of samples. Our key technique is to derive that, in a sufficiently large neighborhood of the random initialization, the optimization landscape is almost-convex and semi-smooth even with ReLU activations. 
This implies an equivalence between over-parameterized neural networks and neural tangent kernel (NTK) in the finite (and polynomial) width setting. As concrete examples, starting from randomly initialized weights, we prove that SGD can attain 100% training accuracy in classification tasks, or minimize regression loss in linear convergence speed, with running time polynomial in @math . Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet).", "At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function (which maps input vectors to output vectors) follows the so-called kernel gradient associated with a new object, which we call the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. 
Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.", "How well does a classic deep net architecture like AlexNet or VGG19 classify on a standard dataset such as CIFAR-10 when its \"width\" --- namely, number of channels in convolutional layers, and number of nodes in fully-connected internal layers --- is allowed to increase to infinity? Such questions have come to the forefront in the quest to theoretically understand deep learning and its mysteries about optimization and generalization. They also connect deep learning to notions such as Gaussian processes and kernels. A recent paper [, 2018] introduced the Neural Tangent Kernel (NTK) which captures the behavior of fully-connected deep nets in the infinite width limit trained by gradient descent; this object was implicit in some other recent papers. A subsequent paper [, 2019] gave heuristic Monte Carlo methods to estimate the NTK and its extension, Convolutional Neural Tangent Kernel (CNTK) and used this to try to understand the limiting behavior on datasets like CIFAR-10. The current paper gives the first efficient exact algorithm (based upon dynamic programming) for computing CNTK as well as an efficient GPU implementation of this algorithm. This results in a significant new benchmark for performance of a pure kernel-based method on CIFAR-10, being 10% higher than the methods reported in [, 2019], and only 5% lower than the performance of the corresponding finite deep net architecture (once batch normalization etc. are turned off). We give the first non-asymptotic proof showing that a fully-trained sufficiently wide net is indeed equivalent to the kernel regression predictor using NTK. 
Our experiments also demonstrate that earlier Monte Carlo approximation can degrade the performance significantly, thus highlighting the power of our exact kernel computation, which we have applied even to the full CIFAR-10 dataset and 20-layer nets.", "The learning capability of a neural network improves with increasing depth at higher computational costs. Wider layers with dense kernel connectivity patterns further increase this cost and may hinder real-time inference. We propose feature map and kernel level pruning for reducing the computational complexity of a deep convolutional neural network. Pruning feature maps reduces the width of a layer and hence does not need any sparse representation. Further, kernel pruning converts the dense connectivity pattern into a sparse one. Due to coarse nature, these pruning granularities can be exploited by GPUs and VLSI based implementations. We propose a simple and generic strategy to choose the least adversarial pruning masks for both granularities. The pruned networks are retrained which compensates the loss in accuracy. We obtain the best pruning ratios when we prune a network with both granularities. Experiments with the CIFAR-10 dataset show that more than 85% sparsity can be induced in the convolution layers with less than 1% increase in the misclassification rate of the baseline network.", "We perform an average case analysis of the generalization dynamics of large neural networks trained using gradient descent. We study the practically-relevant \"high-dimensional\" regime where the number of free parameters in the network is on the order of or even larger than the number of examples in the dataset. Using random matrix theory and exact solutions in linear models, we derive the generalization error and training error dynamics of learning and analyze how they depend on the dimensionality of data and signal to noise ratio of the learning problem. 
We find that the dynamics of gradient descent learning naturally protect against overtraining and overfitting in large networks. Overtraining is worst at intermediate network sizes, when the effective number of free parameters equals the number of samples, and thus can be reduced by making a network smaller or larger. Additionally, in the high-dimensional regime, low generalization error requires starting with small initial weights. We then turn to non-linear neural networks, and show that making networks very large does not harm their generalization performance. On the contrary, it can in fact reduce overtraining, even without early stopping or regularization of any sort. We identify two novel phenomena underlying this behavior in overcomplete models: first, there is a frozen subspace of the weights in which no learning occurs under gradient descent; and second, the statistical properties of the high-dimensional regime yield better-conditioned input correlations which protect against overtraining. We demonstrate that naive application of worst-case theories such as Rademacher complexity is inaccurate in predicting the generalization performance of deep neural networks, and derive an alternative bound which incorporates the frozen subspace and conditioning effects and qualitatively matches the behavior observed in simulation.", "We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike \"deep kernels\", has very few parameters: only the hyperparameters of the original CNN. 
Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new record for GPs with a comparable number of parameters." ] }
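The linearization sketched in this record's related-work paragraph (whose display equation was elided) is, in its standard neural-tangent form (notation assumed, not taken from the record):

```latex
% First-order expansion of the network around a random initialization
% \theta_0, with weight change \Delta = \theta - \theta_0
% (standard NTK linearization; the record's display equation was elided):
f(x;\, \theta_0 + \Delta) \;\approx\; f(x;\, \theta_0)
  \;+\; \langle \nabla_{\theta} f(x;\, \theta_0),\, \Delta \rangle .
```

If f( · ; θ₀) ≈ 0 at initialization, as the paragraph above assumes is reasonable for certain random initializations, the trained model is approximately linear in Δ with random features ∇_θ f(x; θ₀); this is the NT model class discussed throughout this record.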
1904.12191
2941057241
We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.
It is worth mentioning that an alternative approach to the analysis of two-layers neural networks, in the limit of a large number of neurons, was developed in @cite_19 @cite_14 @cite_20 @cite_22 @cite_9 . Unlike the neural tangent approach, this theory describes the evolution of the network weights beyond the linear regime.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_9", "@cite_19", "@cite_20" ], "mid": [ "2913010492", "2276412021", "2899748887", "2942052807", "2131524184" ], "abstract": [ "We consider learning two layer neural networks using stochastic gradient descent. The mean-field description of this learning dynamics approximates the evolution of the network weights by an evolution in the space of probability distributions in @math (where @math is the number of parameters associated to each neuron). This evolution can be defined through a partial differential equation or, equivalently, as the gradient flow in the Wasserstein space of probability distributions. Earlier work shows that (under some regularity assumptions), the mean field description is accurate as soon as the number of hidden units is much larger than the dimension @math . In this paper we establish stronger and more general approximation guarantees. First of all, we show that the number of hidden units only needs to be larger than a quantity dependent on the regularity properties of the data, and independent of the dimensions. Next, we generalize this analysis to the case of unbounded activation functions, which was not covered by earlier bounds. We extend our results to noisy stochastic gradient descent. Finally, we show that kernel ridge regression can be recovered as a special limit of the mean field analysis.", "This paper discusses a new method to perform propagation over a (two-layer, feed-forward) Neural Network embedded in a Constraint Programming model. The method is meant to be employed in Empirical Model Learning, a technique designed to enable optimal decision making over systems that cannot be modeled via conventional declarative means. The key step in Empirical Model Learning is to embed a Machine Learning model into a combinatorial model. 
It has been shown that Neural Networks can be embedded in a Constraint Programming model by simply encoding each neuron as a global constraint, which is then propagated individually. Unfortunately, this decomposition approach may lead to weak bounds. To overcome such limitation, we propose a new network-level propagator based on a non-linear Lagrangian relaxation that is solved with a subgradient algorithm. The method proved capable of dramatically reducing the search tree size on a thermal-aware dispatching problem on multicore CPUs. The overhead for optimizing the Lagrangian multipliers is kept within a reasonable level via a few simple techniques. This paper is an extended version of [27], featuring an improved structure, a new filtering technique for the network inputs, a set of overhead reduction techniques, and a thorough experimentation.", "Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, networks used in practice are going wider and deeper. On the theoretical side, a long line of works has been focusing on training neural networks with one hidden layer. The theory of multi-layer networks remains largely unsettled. In this work, we prove why stochastic gradient descent (SGD) can find @math on the training objective of DNNs in @math . We only make two assumptions: the inputs are non-degenerate and the network is over-parameterized. The latter means the network width is sufficiently large: @math in @math , the number of layers and in @math , the number of samples. Our key technique is to derive that, in a sufficiently large neighborhood of the random initialization, the optimization landscape is almost-convex and semi-smooth even with ReLU activations. This implies an equivalence between over-parameterized neural networks and neural tangent kernel (NTK) in the finite (and polynomial) width setting. 
As concrete examples, starting from randomly initialized weights, we prove that SGD can attain 100 training accuracy in classification tasks, or minimize regression loss in linear convergence speed, with running time polynomial in @math . Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet).", "How well does a classic deep net architecture like AlexNet or VGG19 classify on a standard dataset such as CIFAR-10 when its \"width\" --- namely, number of channels in convolutional layers, and number of nodes in fully-connected internal layers --- is allowed to increase to infinity? Such questions have come to the forefront in the quest to theoretically understand deep learning and its mysteries about optimization and generalization. They also connect deep learning to notions such as Gaussian processes and kernels. A recent paper [, 2018] introduced the Neural Tangent Kernel (NTK) which captures the behavior of fully-connected deep nets in the infinite width limit trained by gradient descent; this object was implicit in some other recent papers. A subsequent paper [, 2019] gave heuristic Monte Carlo methods to estimate the NTK and its extension, Convolutional Neural Tangent Kernel (CNTK) and used this to try to understand the limiting behavior on datasets like CIFAR-10. The current paper gives the first efficient exact algorithm (based upon dynamic programming) for computing CNTK as well as an efficient GPU implementation of this algorithm. This results in a significant new benchmark for performance of a pure kernel-based method on CIFAR-10, being 10 higher than the methods reported in [, 2019], and only 5 lower than the performance of the corresponding finite deep net architecture (once batch normalization etc. are turned off). 
We give the first non-asymptotic proof showing that a fully-trained sufficiently wide net is indeed equivalent to the kernel regression predictor using NTK. Our experiments also demonstrate that earlier Monte Carlo approximation can degrade the performance significantly, thus highlighting the power of our exact kernel computation, which we have applied even to the full CIFAR-10 dataset and 20-layer nets.", "We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. At the second step, this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels. After such replacement, the entire network is fine-tuned on the training data using standard backpropagation process. We evaluate this approach on two CNNs and show that it is competitive with previous approaches, leading to higher obtained CPU speedups at the cost of lower accuracy drops for the smaller of the two networks. Thus, for the 36-class character classification CNN, our approach obtains a 8.5x CPU speedup of the whole network with only minor accuracy drop (1 from 91 to 90 ). For the standard ImageNet architecture (AlexNet), the approach speeds up the second convolution layer by a factor of 4x at the cost of @math increase of the overall top-5 classification error." ] }
1904.11968
2942322914
The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks.
The authors of @cite_7 introduce CODEnn, which creates vector representations of Java source code by jointly embedding code with a natural language description of the method. Their architecture uses recurrent neural networks (RNNs) on sequences of API calls and on tokens from the method name. It then fuses this with the output of a multi-layer perceptron that takes inputs from the non-API tokens in the code. By jointly embedding the code with natural language, the learned vectors are tailored to summarize code at a human-level description, which may not always be accurate (given that code often evolves independently from comments) and is limited by the ability of natural languages to describe specifications. Additionally, natural languages (and consequently, code comments) are context-sensitive, so a comment may be missing crucial information about the semantics of the code.
{ "cite_N": [ "@cite_7" ], "mid": [ "2794601162" ], "abstract": [ "To implement a program functionality, developers can reuse previously written code snippets by searching through a large-scale codebase. Over the years, many code search tools have been proposed to help developers. The existing approaches often treat source code as textual documents and utilize information retrieval models to retrieve relevant code snippets that match a given query. These approaches mainly rely on the textual similarity between source code and natural language query. They lack a deep understanding of the semantics of queries and source code. In this paper, we propose a novel deep neural network named CODEnn (Code-Description Embedding Neural Network). Instead of matching text similarity, CODEnn jointly embeds code snippets and natural language descriptions into a high-dimensional vector space, in such a way that code snippet and its corresponding description have similar vectors. Using the unified vector representation, code snippets related to a natural language query can be retrieved according to their vectors. Semantically related words can also be recognized and irrelevant noisy keywords in queries can be handled. As a proof-of-concept application, we implement a code search tool named D eep CS using the proposed CODEnn model. We empirically evaluate D eep CS on a large scale codebase collected from GitHub. The experimental results show that our approach can effectively retrieve relevant code snippets and outperforms previous techniques." ] }
1904.11968
2942322914
The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks.
@cite_0 convert the AST into a binary tree and then use an autoencoder to learn an embedding model for each node. This learned embedding model is applied recursively to the tree to obtain a final embedding for the root node. Because the model is an autoencoder, it may fail to recognize the semantic equivalence between different implementations of the same algorithm, e.g., that a "for" loop and a "while" loop are equivalent.
{ "cite_N": [ "@cite_0" ], "mid": [ "2798657914" ], "abstract": [ "Sequence-to-sequence attention-based models have recently shown very promising results on automatic speech recognition (ASR) tasks, which integrate an acoustic, pronunciation and language model into a single neural network. In these models, the Transformer, a new sequence-to-sequence attention-based model relying entirely on self-attention without using RNNs or convolutions, achieves a new single-model state-of-the-art BLEU on neural machine translation (NMT) tasks. Since the outstanding performance of the Transformer, we extend it to speech and concentrate on it as the basic architecture of sequence-to-sequence attention-based model on Mandarin Chinese ASR tasks. Furthermore, we investigate a comparison between syllable based model and context-independent phoneme (CI-phoneme) based model with the Transformer in Mandarin Chinese. Additionally, a greedy cascading decoder with the Transformer is proposed for mapping CI-phoneme sequences and syllable sequences into word sequences. Experiments on HKUST datasets demonstrate that syllable based model with the Transformer performs better than CI-phoneme based counterpart, and achieves a character error rate (CER) of , which is competitive to the state-of-the-art CER of @math by the joint CTC-attention based encoder-decoder network." ] }
1904.11968
2942322914
The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks.
The work of @cite_4 encodes nodes of an AST using a weighted mixture of left and right weight matrices and the children of each node. A tree-based convolutional neural network (CNN) is then applied against the tree to encode the AST. The only way semantic equivalents are learned in this model is by recognizing that certain nodes have the same children. This assumption is not necessarily correct, and as such, the model may not fully capture the semantic meaning of code.
{ "cite_N": [ "@cite_4" ], "mid": [ "2954346764" ], "abstract": [ "This paper proposes a learning strategy that extracts object-part concepts from a pre-trained convolutional neural network (CNN), in an attempt to 1) explore explicit semantics hidden in CNN units and 2) gradually grow a semantically interpretable graphical model on the pre-trained CNN for hierarchical object understanding. Given part annotations on very few (e.g., 3-12) objects, our method mines certain latent patterns from the pre-trained CNN and associates them with different semantic parts. We use a four-layer And-Or graph to organize the mined latent patterns, so as to clarify their internal semantic hierarchy. Our method is guided by a small number of part annotations, and it achieves superior performance (about 13 -107 improvement) in part center prediction on the PASCAL VOC and ImageNet datasets." ] }
1904.11968
2942322914
The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks.
@cite_1 used a long short-term memory (LSTM) cell in a tree structure applied to an AST to classify defects. The model is trained in an unsupervised manner to predict a node from its children. @cite_13 learn code vector embeddings by evaluating paths in the AST and evaluate the resulting vectors by predicting method names from code snippets. @cite_15 focus on identifying clones that fall between Type III and Type IV using a deep neural network --- their model is limited to 24 method summary metrics as input, and so cannot deeply evaluate the code itself.
{ "cite_N": [ "@cite_13", "@cite_15", "@cite_1" ], "mid": [ "2259512711", "2952453038", "1952243512" ], "abstract": [ "Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have been successfully applied to a variety of sequence modeling tasks. In this paper we develop Tree Long Short-Term Memory (TreeLSTM), a neural network model based on LSTM, which is designed to predict a tree rather than a linear sequence. TreeLSTM defines the probability of a sentence by estimating the generation probability of its dependency tree. At each time step, a node is generated based on the representation of the generated sub-tree. We further enhance the modeling power of TreeLSTM by explicitly representing the correlations between left and right dependents. Application of our model to the MSR sentence completion challenge achieves results beyond the current state of the art. We also report results on dependency parsing reranking achieving competitive performance.", "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. 
We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "We introduce a novel schema for sequence to sequence learning with a Deep Q-Network (DQN), which decodes the output sequence iteratively. The aim here is to enable the decoder to first tackle easier portions of the sequences, and then turn to cope with difficult parts. Specifically, in each iteration, an encoder-decoder Long Short-Term Memory (LSTM) network is employed to, from the input sequence, automatically create features to represent the internal states of and formulate a list of potential actions for the DQN. Take rephrasing a natural sentence as an example. This list can contain ranked potential words. Next, the DQN learns to make decision on which action (e.g., word) will be selected from the list to modify the current decoded sequence. The newly modified output sequence is subsequently used as the input to the DQN for the next decoding iteration. In each iteration, we also bias the reinforcement learning's attention to explore sequence portions which are previously difficult to be decoded. For evaluation, the proposed strategy was trained to decode ten thousands natural sentences. Our experiments indicate that, when compared to a left-to-right greedy beam search LSTM decoder, the proposed method performed competitively well when decoding sentences from the training set, but significantly outperformed the baseline when decoding unseen sentences, in terms of BLEU score obtained." ] }
1904.11968
2942322914
The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks.
In Deep Code Comment Generation @cite_11, the authors introduce structure-based traversals of an AST to feed into a sequence-to-sequence architecture, and train the model to translate source code into comments. The trained model is then used to generate comments on new source code. The encoded vectors are designed to initialize a decoding comment-generation phase rather than be used directly, so they are not necessarily smooth, nor suitable for interpretation as semantic meaning. Nevertheless, we draw inspiration from their work for our model.
{ "cite_N": [ "@cite_11" ], "mid": [ "2964150020" ], "abstract": [ "We present a neural model for representing snippets of code as continuous distributed vectors ( code embeddings''). The main idea is to represent a code snippet as a single fixed-length code vector, which can be used to predict semantic properties of the snippet. To this end, code is first decomposed to a collection of paths in its abstract syntax tree. Then, the network learns the atomic representation of each path while simultaneously learning how to aggregate a set of them. We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body. We evaluate our approach by training a model on a dataset of 12M methods. We show that code vectors trained on this dataset can predict method names from files that were unobserved during training. Furthermore, we show that our model learns useful method name vectors that capture semantic similarities, combinations, and analogies. A comparison of our approach to previous techniques over the same dataset shows an improvement of more than 75 , making it the first to successfully predict method names based on a large, cross-project corpus. Our trained model, visualizations and vector similarities are available as an interactive online demo at http: code2vec.org. The code, data and trained models are available at https: github.com tech-srl code2vec." ] }
1904.12201
2942159993
As an intuitive way of expressing emotion, animated Graphical Interchange Format (GIF) images have been widely used on social media. Most previous studies on automated GIF emotion recognition fail to effectively utilize GIF's unique properties, and this potentially limits the recognition performance. In this study, we demonstrate the importance of human related information in GIFs and conduct human-centered GIF emotion recognition with a proposed Keypoint Attended Visual Attention Network (KAVAN). The framework consists of a facial attention module and a hierarchical segment temporal module. The facial attention module exploits the strong relationship between GIF contents and human characters, and extracts frame-level visual feature with a focus on human faces. The Hierarchical Segment LSTM (HS-LSTM) module is then proposed to better learn global GIF representations. Our proposed framework outperforms the state-of-the-art on the MIT GIFGIF dataset. Furthermore, the facial attention module provides reliable facial region mask predictions, which improves the model's interpretability.
GIF Analysis. Bakhshi @cite_13 show that animated GIFs are more engaging than other social media types by studying over 3.9 million posts on Tumblr. Gygli @cite_12 propose to automatically generate animated GIFs from videos, using 100K user-generated GIFs and their corresponding video sources. MIT's GIFGIF platform is frequently used for GIF emotion recognition studies. Jou @cite_24 recognize GIF emotions using color histograms, facial expressions, image-based aesthetics, and visual sentiment. Chen @cite_16 adopt 3D ConvNets to further improve the performance. The GIFGIF+ dataset @cite_2 is a larger GIF emotion recognition dataset. At the time of this study, GIFGIF+ had not been released.
{ "cite_N": [ "@cite_24", "@cite_2", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "2786355254", "2017411072", "2578893299", "2406020846", "2963729528" ], "abstract": [ "Animated GIFs are widely used on the Internet to express emotions, but their automatic analysis is largely unexplored. Existing GIF datasets with emotion labels are too small for training contemporary machine learning models, so we propose a semi-automatic method to collect emotional animated GIFs from the Internet with the least amount of human labor. The method trains weak emotion recognizers on labeled data, and uses them to sort a large quantity of unlabeled GIFs. We found that by exploiting the clustered structure of emotions, the number of GIFs a labeler needs to check can be greatly reduced. Using the proposed method, a dataset called GIFGIF+ with 23,544 GIFs over 17 emotions was created, which provides a promising platform for affective computing research.", "Animated GIFs are everywhere on the Web. Our work focuses on the computational prediction of emotions perceived by viewers after they are shown animated GIF images. We evaluate our results on a dataset of over 3,800 animated GIFs gathered from MIT's GIFGIF platform, each with scores for 17 discrete emotions aggregated from over 2.5M user annotations - the first computational evaluation of its kind for content-based prediction on animated GIFs to our knowledge. In addition, we advocate a conceptual paradigm in emotion prediction that shows delineating distinct types of emotion is important and is useful to be concrete about the emotion target. One of our objectives is to systematically compare different types of content features for emotion prediction, including low-level, aesthetics, semantic and face features. 
We also formulate a multi-task regression problem to evaluate whether viewer perceived emotion prediction can benefit from jointly learning across emotion classes compared to disjoint, independent learning.", "Animated GIFs are widely used on the Internet to express emotions, but their automatic analysis is largely unexplored before. To help with the search and recommendation of GIFs, we aim to predict their emotions perceived by humans based on their contents. Since previous solutions to this problem only utilize image-based features and lose all the motion information, we propose to use 3D convolutional neural networks (CNNs) to extract spatiotemporal features from GIFs. We evaluate our methodology on a crowd-sourcing platform called GIFGIF with more than 6000 animated GIFs, and achieve a better accuracy then any previous approach in predicting crowd-sourced intensity scores of 17 emotions. It is also found that our trained model can be used to distinguish and cluster emotions in terms of valence and risk perception.", "Animated GIFs have been around since 1987 and recently gained more popularity on social networking sites. Tumblr, a large social networking and micro blogging platform, is a popular venue to share animated GIFs. Tumblr users follow blogs, generating a feed or posts, and choose to \"like' or to \"reblog' favored posts. In this paper, we use these actions as signals to analyze the engagement of over 3.9 million posts, and conclude that animated GIFs are significantly more engaging than other kinds of media. We follow this finding with deeper visual analysis of nearly 100k animated GIFs and pair our results with interviews with 13 Tumblr users to find out what makes animated GIFs engaging. We found that the animation, lack of sound, immediacy of consumption, low bandwidth and minimal time demands, the storytelling capabilities and utility for expressing emotions were significant factors in making GIFs the most engaging content on Tumblr. 
We also found that engaging GIFs contained faces and had higher motion energy, uniformity, resolution and frame rate. Our findings connect to media theories and have implications in design of effective content dashboards, video summarization tools and ranking algorithms to enhance engagement.", "We introduce the novel problem of automatically generating animated GIFs from video. GIFs are short looping video with no sound, and a perfect combination between image and video that really capture our attention. GIFs tell a story, express emotion, turn events into humorous moments, and are the new wave of photojournalism. We pose the question: Can we automate the entirely manual and elaborate process of GIF creation by leveraging the plethora of user generated GIF content? We propose a Robust Deep RankNet that, given a video, generates a ranked list of its segments according to their suitability as GIF. We train our model to learn what visual content is often selected for GIFs by using over 100K user generated GIFs and their corresponding video sources. We effectively deal with the noisy web data by proposing a novel adaptive Huber loss in the ranking formulation. We show that our approach is robust to outliers and picks up several patterns that are frequently present in popular animated GIFs. On our new large-scale benchmark dataset, we show the advantage of our approach over several state-of-the-art methods." ] }
1904.12201
2942159993
As an intuitive way of expressing emotion, animated Graphical Interchange Format (GIF) images have been widely used on social media. Most previous studies on automated GIF emotion recognition fail to effectively utilize GIF's unique properties, and this potentially limits the recognition performance. In this study, we demonstrate the importance of human related information in GIFs and conduct human-centered GIF emotion recognition with a proposed Keypoint Attended Visual Attention Network (KAVAN). The framework consists of a facial attention module and a hierarchical segment temporal module. The facial attention module exploits the strong relationship between GIF contents and human characters, and extracts frame-level visual feature with a focus on human faces. The Hierarchical Segment LSTM (HS-LSTM) module is then proposed to better learn global GIF representations. Our proposed framework outperforms the state-of-the-art on the MIT GIFGIF dataset. Furthermore, the facial attention module provides reliable facial region mask predictions, which improves the model's interpretability.
Emotion Recognition. Emotion recognition @cite_0 @cite_6 has been an active research topic for decades. On a large-scale dataset @cite_23 , Rao @cite_8 propose multi-level deep representations for emotion recognition. Multi-modal feature fusion @cite_22 has also proved effective. Instead of modeling emotion recognition as a classification task @cite_8 @cite_23 , Zhao @cite_22 propose to learn emotion distributions, which alleviates the perception-uncertainty problem that different people in different contexts may perceive different emotions from the same content. Regressing emotion intensity scores @cite_24 is another effective approach. Han @cite_14 propose a soft prediction framework for the perception-uncertainty problem.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_8", "@cite_6", "@cite_0", "@cite_24", "@cite_23" ], "mid": [ "2765354427", "2533262878", "2548264631", "2149940198", "2765291577", "2767618761", "2548899748" ], "abstract": [ "Current image emotion recognition works mainly classified the images into one dominant emotion category, or regressed the images with average dimension values by assuming that the emotions perceived among different viewers highly accord with each other. However, due to the influence of various personal and situational factors, such as culture background and social interactions, different viewers may react totally different from the emotional perspective to the same image. In this paper, we propose to formulate the image emotion recognition task as a probability distribution learning problem. Motivated by the fact that image emotions can be conveyed through different visual features, such as aesthetics and semantics, we present a novel framework by fusing multi-modal features to tackle this problem. In detail, weighted multi-modal conditional probability neural network (WMMCPNN) is designed as the learning model to associate the visual features with emotion probabilities. By jointly exploring the complementarity and learning the optimal combination coefficients of different modality features, WMMCPNN could effectively utilize the representation ability of each uni-modal feature. We conduct extensive experiments on three publicly available benchmarks and the results demonstrate that the proposed method significantly outperforms the state-of-the-art approaches for emotion distribution prediction.", "In this paper we address the sentence-level multi-modal emotion recognition problem. We formulate the emotion recognition task as a multi-category classification problem and propose an innovative solution based on the automatically generated ensemble of trees with binary support vector machines (SVM) classifiers in the tree nodes. 
We demonstrate the efficacy of our approach by performing four-way (anger, happiness, sadness, neutral) and five-way (including excitement) emotion recognition on the University of Southern California's Interactive Emotional Motion Capture (USC-IEMOCAP) corpus using combinations of acoustic features, lexical features extracted from automatic speech recognition (ASR) output and visual features extracted from facial markers traced by a motion capture system. The experiments show that the proposed ensemble of trees of binary SVM classifiers outperforms classical multi-way SVM classification with one-vs-one voting scheme and achieves state-of-the-art results for all feature combinations.", "In the past three years, Emotion Recognition in the Wild (EmotiW) Grand Challenge has drawn more and more attention due to its huge potential applications. In the fourth challenge, aimed at the task of video based emotion recognition, we propose a multi-clue emotion fusion (MCEF) framework by modeling human emotion from three mutually complementary sources, facial appearance texture, facial action, and audio. To extract high-level emotion features from sequential face images, we employ a CNN-RNN architecture, where face image from each frame is first fed into the fine-tuned VGG-Face network to extract face feature, and then the features of all frames are sequentially traversed in a bidirectional RNN so as to capture dynamic changes of facial textures. To attain more accurate facial actions, a facial landmark trajectory model is proposed to explicitly learn emotion variations of facial components. Further, audio signals are also modeled in a CNN framework by extracting low-level energy features from segmented audio clips and then stacking them as an image-like map. Finally, we fuse the results generated from three clues to boost the performance of emotion recognition. 
Our proposed MCEF achieves an overall accuracy of 56.66 with a large improvement of 16.19 with respect to the baseline.", "In this paper, we apply a context-sensitive technique for multimodal emotion recognition based on feature-level fusion of acoustic and visual cues. We use bidirectional Long ShortTerm Memory (BLSTM) networks which, unlike most other emotion recognition approaches, exploit long-range contextual information for modeling the evolution of emotion within a conversation. We focus on recognizing dimensional emotional labels, which enables us to classify both prototypical and nonprototypical emotional expressions contained in a large audiovisual database. Subject-independent experiments on various classification tasks reveal that the BLSTM network approach generally prevails over standard classification techniques such as Hidden Markov Models or Support Vector Machines, and achieves F1-measures of the order of 72 , 65 , and 55 for the discrimination of three clusters in emotional space and the distinction between three levels of valence and activation, respectively.", "Automatic emotion recognition is a challenging task which can make great impact on improving natural human computer interactions. In this paper, we present our effort for the Affect Subtask in the Audio Visual Emotion Challenge (AVEC) 2017, which requires participants to perform continuous emotion prediction on three affective dimensions: Arousal, Valence and Likability based on the audiovisual signals. 
We highlight three aspects of our solutions: 1) we explore and fuse different hand-crafted and deep-learned features from all available modalities including acoustic, visual, and textual modalities, and we further consider the interlocutor influence for the acoustic features; 2) we compare the effectiveness of the non-temporal model SVR and the temporal model LSTM-RNN and show that the LSTM-RNN can not only alleviate feature engineering efforts such as the construction of contextual features and feature delay, but also improve the recognition performance significantly; 3) we apply a multi-task learning strategy for collaborative prediction of multiple emotion dimensions with shared representations, according to the fact that different emotion dimensions are correlated with each other. Our solutions achieve CCCs of 0.675, 0.756, and 0.509 on arousal, valence, and likability, respectively, on the challenge testing set, which outperforms the baseline system with corresponding CCCs of 0.375, 0.466, and 0.246 on arousal, valence, and likability.", "State-of-the-art approaches for the previous emotion recognition in the wild challenges are usually built on prevailing Convolutional Neural Networks (CNNs). Although there is clear evidence that CNNs with increased depth or width can usually bring improved prediction accuracy, existing top approaches provide supervision only at the output feature layer, resulting in insufficient training of deep CNN models. In this paper, we present a new learning method named Supervised Scoring Ensemble (SSE) for advancing this challenge with deep CNNs. We first extend the idea of recent deep supervision to deal with the emotion recognition problem. Benefiting from adding supervision not only to deep layers but also to intermediate layers and shallow layers, the training of deep CNNs can be well eased. 
Second, we present a new fusion structure in which class-wise scoring activations at diverse complementary feature layers are concatenated and further used as the inputs for second-level supervision, acting as a deep feature ensemble within a single CNN architecture. We show our proposed learning method brings large accuracy gains over diverse backbone networks consistently. On this year's audio-video based emotion recognition task, the average recognition rate of our best submission is 60.34%, forming a new envelope over all existing records.", "In this paper, we present HoloNet, a well-designed Convolutional Neural Network (CNN) architecture regarding our submissions to the video-based sub-challenge of the Emotion Recognition in the Wild (EmotiW) 2016 challenge. In contrast to previous related methods that usually adopt relatively simple and shallow neural network architectures to address the emotion recognition task, our HoloNet has three critical considerations in network design. (1) To reduce redundant filters and enhance the non-saturated non-linearity in the lower convolutional layers, we use a modified Concatenated Rectified Linear Unit (CReLU) instead of ReLU. (2) To enjoy the accuracy gain from considerably increased network depth and maintain efficiency, we combine the residual structure and CReLU to construct the middle layers. (3) To broaden network width and introduce a multi-scale feature extraction property, the top layers are designed as a variant of the inception-residual structure. The main benefit of grouping these modules into the HoloNet is that both negative and positive phase information implicitly contained in the input data can flow over it in multiple paths, thus deep multi-scale features explicitly capturing emotion variation can be well extracted from multi-path sibling layers, and then can be further concatenated for robust recognition. 
We obtain competitive results in this year’s video-based emotion recognition sub-challenge using an ensemble of two HoloNet models trained with the given data only. Specifically, we obtain a mean recognition rate of 57.84%, outperforming the baseline accuracy by an absolute margin of 17.37%, and yielding a 4.04% absolute accuracy gain compared to the result of last year’s winner team. Meanwhile, our method runs at a speed of several thousand frames per second on a GPU, thus it is well applicable to real-time scenarios." ] }
1904.12200
2941736865
Magnetic resonance imaging (MRI) is being increasingly utilized to assess, diagnose, and plan treatment for a variety of diseases. The ability to visualize tissue in varied contrasts in the form of MR pulse sequences in a single scan provides valuable insights to physicians, as well as enabling automated systems performing downstream analysis. However, many issues like prohibitive scan time, image corruption, different acquisition protocols, or allergies to certain contrast materials may hinder the process of acquiring multiple sequences for a patient. This poses challenges to both physicians and automated systems since complementary information provided by the missing sequences is lost. In this paper, we propose a variant of generative adversarial network (GAN) capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan. The proposed network is designed as a multi-input, multi-output network which combines information from all the available pulse sequences, implicitly infers which sequences are missing, and synthesizes the missing ones in a single forward pass. We demonstrate and validate our method on two brain MRI datasets each with four sequences, and show the applicability of the proposed method in simultaneously synthesizing all missing sequences in any possible scenario where either one, two, or three of the four sequences may be missing. We compare our approach with competing unimodal and multi-modal methods, and show that we outperform both quantitatively and qualitatively.
Though all the methods discussed above propose a multi-input approach, none of them has been designed to synthesize multiple missing sequences (multi-output) in one single pass. All three methods @cite_50, @cite_52, and @cite_2 synthesize only one sequence in the presence of a varying number of input sequences, while @cite_28 only synthesizes MRA using information from multiple inputs. Although the work presented in @cite_28 is close to our proposed method, theirs is not a truly multimodal network, since there is no empirical evidence that their method will generalize to multiple scenarios. To the best of our knowledge, we are the first to propose a method that is capable of synthesizing multiple missing sequences using a combination of various input sequences, and to demonstrate the method on the complete set of scenarios (i.e., all combinations of missing sequences).
{ "cite_N": [ "@cite_28", "@cite_52", "@cite_50", "@cite_2" ], "mid": [ "2884442510", "2030927653", "2953204310", "2101432564" ], "abstract": [ "Accurate synthesis of a full 3D MR image containing tumours from available MRI (e.g. to replace an image that is currently unavailable or corrupted) would provide a clinician as well as downstream inference methods with important complementary information for disease analysis. In this paper, we present an end-to-end 3D convolutional neural network that takes a set of acquired MR image sequences (e.g. T1, T2, T1ce) as input and concurrently performs (1) regression of the missing full resolution 3D MRI (e.g. FLAIR) and (2) segmentation of the tumour into subtypes (e.g. enhancement, core). The hypothesis is that this would focus the network to perform accurate synthesis in the area of the tumour. Experiments on the BraTS 2015 and 2017 datasets [1] show that: (1) the proposed method gives better performance than state-of-the-art methods in terms of established global evaluation metrics (e.g. PSNR), (2) replacing real MR volumes with the synthesized MRI does not lead to significant degradation in tumour and sub-structure segmentation accuracy. The system further provides uncertainty estimates based on Monte Carlo (MC) dropout [11] for the synthesized volume at each voxel, permitting quantification of the system’s confidence in the output at each location.", "In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider 3-D and 4-D (color) video data completion and de-noising. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD) [11]. Based on t-SVD, the notion of multilinear rank and a related tensor nuclear norm was proposed in [11] to characterize informational and structural complexity of multilinear data. 
We first show that videos with linear camera motion can be represented more efficiently using t-SVD compared to the approaches based on vectorizing or flattening of the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries. Application of the proposed algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data from sparse random corruptions. We show superior performance of our method compared to the matrix robust PCA adapted to this setting as proposed in [4].", "In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider 3-D and 4-D (color) video data completion and de-noising. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD) [11]. Based on t-SVD, the notion of multilinear rank and a related tensor nuclear norm was proposed in [11] to characterize informational and structural complexity of multilinear data. We first show that videos with linear camera motion can be represented more efficiently using t-SVD compared to the approaches based on vectorizing or flattening of the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries. Application of the proposed algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data from sparse random corruptions. 
We show superior performance of our method compared to the matrix robust PCA adapted to this setting as proposed in [4].", "This paper describes a novel technique for the synthesis of imperative programs. Automated program synthesis has the potential to make programming and the design of systems easier by allowing programs to be specified at a higher level than executable code. In our approach, which we call proof-theoretic synthesis, the user provides an input-output functional specification, a description of the atomic operations in the programming language, and a specification of the synthesized program's looping structure, allowed stack space, and bound on usage of certain operations. Our technique synthesizes a program, if there exists one, that meets the input-output specification and uses only the given resources. The insight behind our approach is to interpret program synthesis as generalized program verification, which allows us to bring verification tools and techniques to program synthesis. Our synthesis algorithm works by creating a program with unknown statements, guards, inductive invariants, and ranking functions. It then generates constraints that relate the unknowns and enforces three kinds of requirements: partial correctness, loop termination, and well-formedness conditions on program guards. We formalize the requirements that program verification tools must meet to solve these constraints and use tools from prior work as our synthesizers. We demonstrate the feasibility of the proposed approach by synthesizing programs in three different domains: arithmetic, sorting, and dynamic programming. Using verification tools that we previously built in the VS3 project we are able to synthesize programs for complicated arithmetic algorithms including Strassen's matrix multiplication and Bresenham's line drawing; several sorting algorithms; and several dynamic programming algorithms. 
For these programs, the median time for synthesis is 14 seconds, and the ratio of synthesis to verification time ranges between 1x and 92x (with a median of 7x), illustrating the potential of the approach." ] }
1904.12200
2941736865
Magnetic resonance imaging (MRI) is being increasingly utilized to assess, diagnose, and plan treatment for a variety of diseases. The ability to visualize tissue in varied contrasts in the form of MR pulse sequences in a single scan provides valuable insights to physicians, as well as enabling automated systems performing downstream analysis. However, many issues like prohibitive scan time, image corruption, different acquisition protocols, or allergies to certain contrast materials may hinder the process of acquiring multiple sequences for a patient. This poses challenges to both physicians and automated systems since complementary information provided by the missing sequences is lost. In this paper, we propose a variant of generative adversarial network (GAN) capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan. The proposed network is designed as a multi-input, multi-output network which combines information from all the available pulse sequences, implicitly infers which sequences are missing, and synthesizes the missing ones in a single forward pass. We demonstrate and validate our method on two brain MRI datasets each with four sequences, and show the applicability of the proposed method in simultaneously synthesizing all missing sequences in any possible scenario where either one, two, or three of the four sequences may be missing. We compare our approach with competing unimodal and multi-modal methods, and show that we outperform both quantitatively and qualitatively.
The main motivation for most synthesis methods is to retain the ability to meaningfully use downstream analysis pipelines like segmentation or classification despite the partially missing input. However, there have been efforts by researchers working on those analysis pipelines to bypass any synthesis step by making the analysis methods themselves robust to missing sequences. Most notably, @cite_8 and @cite_45 provide methods for tumor segmentation using brain MRI that are robust to missing sequences @cite_8, or to missing sequence labels @cite_45. Although these methods bypass the requirement of having a synthesis step before the actual downstream analysis, the performance of these robust versions of analysis pipelines often does not match the state-of-the-art performance of other non-robust methods in the case when all sequences are present. This is due to the fact that the methods not only have to learn how to perform the task (segmentation/classification) well, but also to handle any missing input data. This two-fold objective for a single network raises a trade-off between robustness and performance.
{ "cite_N": [ "@cite_45", "@cite_8" ], "mid": [ "2884442510", "2891179298" ], "abstract": [ "Accurate synthesis of a full 3D MR image containing tumours from available MRI (e.g. to replace an image that is currently unavailable or corrupted) would provide a clinician as well as downstream inference methods with important complementary information for disease analysis. In this paper, we present an end-to-end 3D convolutional neural network that takes a set of acquired MR image sequences (e.g. T1, T2, T1ce) as input and concurrently performs (1) regression of the missing full resolution 3D MRI (e.g. FLAIR) and (2) segmentation of the tumour into subtypes (e.g. enhancement, core). The hypothesis is that this would focus the network to perform accurate synthesis in the area of the tumour. Experiments on the BraTS 2015 and 2017 datasets [1] show that: (1) the proposed method gives better performance than state-of-the-art methods in terms of established global evaluation metrics (e.g. PSNR), (2) replacing real MR volumes with the synthesized MRI does not lead to significant degradation in tumour and sub-structure segmentation accuracy. The system further provides uncertainty estimates based on Monte Carlo (MC) dropout [11] for the synthesized volume at each voxel, permitting quantification of the system’s confidence in the output at each location.", "We present an adversarial domain adaptation based deep learning approach for automatic tumor segmentation from T2-weighted MRI. Our approach is composed of two steps: (i) a tumor-aware unsupervised cross-domain adaptation (CT to MRI), followed by (ii) semi-supervised tumor segmentation using a Unet trained with synthesized and a limited number of original MRIs. We introduced a novel target-specific loss, called tumor-aware loss, for unsupervised cross-domain adaptation that helps to preserve tumors on synthesized MRIs produced from CT images. 
In comparison, state-of-the-art adversarial networks trained without our tumor-aware loss produced MRIs with ill-preserved or missing tumors. All networks were trained using labeled CT images from 377 patients with non-small cell lung cancer obtained from the Cancer Imaging Archive and unlabeled T2w MRIs from a completely unrelated cohort of 6 patients with pre-treatment and 36 on-treatment scans. Next, we combined 6 labeled pre-treatment MRI scans with the synthesized MRIs to boost tumor segmentation accuracy through semi-supervised learning. Semi-supervised training of the cycle-GAN produced a segmentation accuracy of 0.66 computed using the Dice Score Coefficient (DSC). Our method trained with only synthesized MRIs produced an accuracy of 0.74, while the same method trained in a semi-supervised setting produced the best accuracy of 0.80 on the test set. Our results show that tumor-aware adversarial domain adaptation helps to achieve reasonably accurate cancer segmentation from limited MRI data by leveraging large CT datasets." ] }