Dynamic program analysis is widely used for detecting faults in software systems. Dynamic analysis tools conceptually either use a modified execution environment or inject instrumentation code into the system under test (SUT). Tools based on a modified environment, such as Java PathFinder (JPF), have full access to the state of the SUT but at the cost of a higher runtime overhead. Instrumentation-based tools have lower overhead but at the cost of less convenient control of the runtime, such as thread schedules. Combinations of these two approaches are largely unexplored and, to our knowledge, have never been attempted in JPF. We present a case study of adapting an existing instrumentation-based tool to run inside JPF. To keep the instrumentation unchanged, we limited our changes to the code invoked by the instrumentation. Ultimately, the required changes were few and essentially reduce to properly dividing the analysis-related state and logic between the JPF and host JVM levels. Others can benefit from our experience to adapt their own instrumentation-based tools to run in JPF more quickly.
['Karl Palmskog', 'Farah Hariri', 'Darko Marinov']
A Case Study on Executing Instrumented Code in Java PathFinder
624,717
['Daniel Gorín', 'Lutz Schröder']
Extending ALCQ with Bounded Self-Reference.
764,364
A schema mapping is a high-level declarative specification of how data structured under one schema, called the source schema, is to be transformed into data structured under a possibly different schema, called the target schema. We demonstrate SPIDER, a prototype tool for debugging schema mappings, where the language for specifying schema mappings is based on a widely adopted formalism. We have built SPIDER on top of a data exchange system, Clio, from IBM Almaden Research Center. At the heart of SPIDER is a data-driven facility for understanding a schema mapping through the display of routes. A route essentially describes the relationship between source and target data with the schema mapping. In this demonstration, we showcase our route engine, where we can display one or all routes starting from either source or target data, as well as the intermediary data and schema elements involved. In addition, we demonstrate "standard" debugging features for schema mappings that we have also built, such as computing and exploring routes step-by-step, stopping or pausing the computation with breakpoints, performing "guided" computation of routes by taking human input into account, as well as tracking the state of the target instance during the process of computing routes.
['Bogdan Alexe', 'Laura Chiticariu', 'Wang Chiew Tan']
SPIDER: a schema mapPIng DEbuggeR
536,902
Recently, Commutative Replicated Data Type (CRDT) algorithms have been proposed and shown in many publications to outperform traditional algorithms in real-time collaborative editing. Replicated Growable Array (RGA) has the best average performance among CRDT algorithms. However, RGA only supports character-based primitive operations. This paper proposes an efficient collaborative editing approach supporting string-based operations (RGA Supporting String, RGASS). Firstly, RGASS is presented under a string-wise architecture to preserve the operation intentions of collaborating users. Secondly, the time complexity of RGASS is analyzed theoretically and shown to be lower than that of RGA and of the state-of-the-art OT algorithm (ABTSO). Thirdly, experimental evaluations show that the computational performance of RGASS is better than that of RGA and ABTSO. Therefore, RGASS is better suited to large-scale collaborative editing, with higher performance than a representative class of published CRDT and OT algorithms.
['Xiao Lv', 'Fazhi He', 'Weiwei Cai', 'Yuan Cheng']
An efficient collaborative editing algorithm supporting string-based operations
887,988
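The RGA structure underlying RGASS can be illustrated with a minimal, character-wise sketch (illustrative only; RGASS itself operates on whole strings, and the identifier scheme here is a simplification). Each element carries a unique, totally ordered id, and deletions leave tombstones, so concurrent operations commute and all replicas converge.

```python
# Minimal character-wise RGA sketch (illustrative, not the RGASS algorithm).
# Elements keep a unique ordered id; deletes only mark tombstones.
class RGA:
    def __init__(self):
        self.elems = []  # list of [id, char, visible]

    def insert(self, after_id, new_id, char):
        # Find the anchor position (None means head of the document), then
        # skip over concurrent insertions carrying a larger (newer) id.
        pos = 0
        if after_id is not None:
            pos = next(i for i, e in enumerate(self.elems) if e[0] == after_id) + 1
        while pos < len(self.elems) and self.elems[pos][0] > new_id:
            pos += 1
        self.elems.insert(pos, [new_id, char, True])

    def delete(self, target_id):
        for e in self.elems:
            if e[0] == target_id:
                e[2] = False  # tombstone: keep the element for ordering

    def text(self):
        return "".join(e[1] for e in self.elems if e[2])
```

Applying two concurrent head-insertions in either order yields the same text on both replicas, which is the commutativity property that makes CRDTs attractive for collaborative editing.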
Understanding the connectivity structure of the human brain is a fundamental prerequisite for the treatment of psychiatric or neurological diseases. Probabilistic tractography has become an established method to account for the inherent uncertainties of the actual course of fiber bundles in magnetic resonance imaging data. This paper presents a visualization system that addresses the assessment of fiber probabilities in relation to anatomical landmarks. We employ a real-time transparent rendering strategy to display fiber tracts within their structural context in a virtual environment. Thereby, we not only emphasize spatial patterns but furthermore allow interactive control over the amount of visible anatomical information.
['Tobias Rick', 'Anette von Kapri', 'Svenja Caspers', 'Katrin Amunts', 'Karl Zilles', 'Torsten Kuhlen']
Visualization of Probabilistic Fiber Tracts in Virtual Reality
564,204
Based on the theory of the Restless Multi-Armed Bandit model, a novel dynamic spectrum access mechanism is proposed for the problem of coordinating multiple users accessing multiple idle channels. Firstly, since sensing errors are unavoidable in practical networks, a Whittle index policy that deals with sensing errors effectively is derived. Under this policy, each user maintains a belief value for every channel, accumulated from historical experience, and chooses the channels to sense and access by weighing immediate against future rewards based on these belief values. Secondly, a multi-bid auction algorithm is used to resolve collisions among secondary users selecting the same channels, improving spectrum utilization. The simulation results demonstrate that, in the same environment, cognitive users following the proposed mechanism achieve higher throughput than mechanisms that do not handle sensing errors or do not use multi-bid auctions.
['Zhu Jiang', 'Han Chao', 'Yang Haolei', 'Xiong Jiahao']
A new mechanism of dynamic spectrum access based on restless bandit allocation indices
648,174
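The belief maintenance described in the abstract above can be sketched for a two-state (busy/idle) Markov channel model, the usual setting for restless-bandit spectrum access. This is an illustrative simplification, not the paper's Whittle index derivation: the transition probabilities, the `pick_channels` myopic rule, and all names are assumptions for exposition.

```python
# Illustrative belief update for a two-state Markov channel (1 = idle, 0 = busy).
def update_belief(b, p11, p01, sensed=None):
    """Probability the channel is idle in the next slot.

    b      -- current belief that the channel is idle
    p11    -- P(idle -> idle); p01 -- P(busy -> idle)
    sensed -- None (not sensed), True (sensed idle), False (sensed busy)
    """
    if sensed is True:
        b = 1.0
    elif sensed is False:
        b = 0.0
    return b * p11 + (1.0 - b) * p01

def pick_channels(beliefs, k):
    """Myopic stand-in for an index policy: sense the k most-likely-idle channels."""
    return sorted(range(len(beliefs)), key=lambda i: -beliefs[i])[:k]
```

A Whittle index policy would replace the myopic ranking with an index computed per channel from the belief, balancing immediate and future reward; the belief recursion itself is the same.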
The crucial role of evaluation in the development of information retrieval tools provides useful evidence for improving the performance of these tools and the quality of the results they return. However, classic evaluation approaches have limitations and shortcomings, especially regarding consideration of the user, measurement of the adequacy between the query and the returned documents, and consideration of the characteristics, specifications, and behaviors of the search tool. Therefore, we believe that exploiting contextual elements could be a very good way to evaluate search tools. This paper thus presents a new approach that takes context into account during the evaluation process at three complementary levels. The experiments reported at the end of this article show the applicability of the proposed approach to real search tools. The tests were performed with the most popular search engines (i.e. Google, Bing and Yahoo), selected in particular for their high selectivity. The obtained results reveal that the ability of these engines to reject dead links, redundant results, and parasite pages depends strongly on how queries are formulated and on the policies of the sites offering this information for presenting their content. The relevance evaluation of the results provided by these engines, first using users' judgments and then using an automatic method that takes the query context into account, also showed a general decline in perceived relevance as the number of considered results grows.
['Abdelkrim Bouramoul', 'Mohamed Khireddine Kholladi', 'Bich Lien Doan']
USING CONTEXT TO IMPROVE THE EVALUATION OF INFORMATION RETRIEVAL SYSTEMS
448,576
Classical information retrieval (IR) methods often lose valuable information when aggregating weights, which may diminish the discriminating power between documents. To cope with this problem, the paper presents an approach for ranking documents in IR based on a vector-based ordering technique already considered in fuzzy logic for multiple-criteria analysis purposes. Moreover, the proposed approach uses a possibilistic framework for encoding the retrieval status values. The approach, applied to a benchmark collection, has been shown to improve IR precision with respect to classical approaches.
['Mohand Boughanem', 'Yannick Loiseau', 'Henri Prade']
Improving Document Ranking in Information Retrieval Using Ordered Weighted Aggregation and Leximin Refinement.
176,346
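The vector-based ordering idea in the abstract above can be sketched with the leximin order: instead of collapsing a document's term weights into one score, documents are compared on their sorted weight vectors, worst-satisfied criterion first. This is a generic illustration of leximin ranking, not the paper's exact procedure; the function names and data layout are assumptions.

```python
# Leximin ordering sketch: compare documents by their sorted weight vectors.
def leximin_key(weights):
    """Ascending sort: leximin compares the worst-satisfied criteria first."""
    return tuple(sorted(weights))

def rank_documents(doc_weights):
    """Rank documents (dict: name -> list of term weights), best first."""
    return sorted(doc_weights, key=lambda d: leximin_key(doc_weights[d]), reverse=True)
```

Note how leximin prefers a document whose weakest matching term is stronger, even when a plain sum of weights would tie the two documents.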
Fixed length entropy codes (FLC) have been previously introduced as an alternative to variable length entropy codes (VLC) in video compression applications. Video codecs employing FLC entropy codes have been shown to outperform the ones using VLC codes in noisy environments. Unfortunately, no computationally efficient implementation has been proposed for FLC codes since they had not received as much attention from the source encoding community as VLC codes had. We introduce data structures and algorithms to implement FLC coders and decoders that make feasible the use of FLC codes in a multitude of applications including real time video codecs. The proposed implementation makes the complexity of FLC and VLC codes comparable.
['Ramon Llados-Bernaus', 'Robert L. Stevenson']
Computationally efficient fixed-length entropy codec for robust video compression
107,143
Modern web sites provide easy access to large amounts of data via open application programming interfaces. Users interacting with these sites constantly change the underlying data sets, which can be represented in graph-structured form. Nodes in these dynamic graph structures exhibit dependencies over time (for example, one node changes before other nodes change in the same way). Analysing these dependencies is crucial for understanding and predicting the dynamics inherent to temporally changing graph structures on the web. When the graphs become large however, it is not feasible to take into account all properties of the graph and in general it is unclear how to choose the appropriate features. Moreover, comparing two nodes becomes difficult, if the nodes do not share exactly the same features. In this work we propose an algorithm that automatically learns the features that govern temporal dependencies between nodes in large dynamic graph structures. We present preliminary results of applying the algorithm to data collected from the web, discuss potential extensions of the framework and anticipate how a major problem in machine learning, sparse data, could be tackled by leveraging Linked Data.
['Andreas Harth']
Analysing Dependency Dynamics in Web Data
578,124
Notifications are a core mechanism of current smart devices. They inform about a variety of events including messages, social network comments, and application updates. While users appreciate the awareness that notifications provide, notifications cause distraction, higher cognitive load, and task interruptions. With the increasing importance of smart environments, the number of sensors that could trigger notifications will increase dramatically. A flower with a moisture sensor, for example, could create a notification whenever the flower needs water. We assume that current notification mechanisms will not scale with the increasing number of notifications. We therefore explore notification mechanisms for smart homes. Notifications are shown on smartphones, on displays in the environment, next to the sending objects, or on the user's body. In an online survey, we compare the four locations in four scenarios. While different aspects influence the perceived suitability of each notification location, the smartphone generally is rated the best.
['Alexandra Voit', 'Tonja Machulla', 'Dominik Weber', 'Valentin Schwind', 'Stefan Schneegass', 'Niels Henze']
Exploring notifications in smart home environments
878,935
A new model of the Immune Genetic Algorithm (IGA) was built using a Markov chain. The convergence rate of IGA to the absorbed state was derived using norms and an analysis of the transition probability matrix. Based on the design and performance of IGA, detailed quantitative expressions for the convergence rate to the absorbed state, which include the immune parameters of IGA, are presented. The effect of these parameters on the convergence rate is then discussed. It was found that several parameters, such as the population size, the population distribution, and the string length, all affect the optimization. The conclusions explain why IGA can maintain diversity so well that optimization proceeds quickly. This paper can also be helpful for further study of the convergence rate of the Immune Genetic Algorithm.
['Xiaoping Luo', 'Wenyao Pang', 'Ji Huang']
A Further Discussion on Convergence Rate of Immune Genetic Algorithm to Absorbed-state
173,299
['Jerónimo Irazábal', 'Gabriela Pérez', 'Claudia Pons', 'Roxana Silvia Giandini']
An Implementation Approach to Achieve Metamodel Independence in Domain Specific Model Manipulation Languages.
761,812
Since achieving W3C recommendation status in 2004, the Web Ontology Language (OWL) has been successfully applied to many problems in computer science. Practical experience with OWL has been quite positive in general; however, it has also revealed room for improvement in several areas. We systematically analyze the identified shortcomings of OWL, such as expressivity issues, problems with its syntaxes, and deficiencies in the definition of OWL species. Furthermore, we present an overview of OWL 2, an extension to and revision of OWL that is currently being developed within the W3C OWL Working Group. Many aspects of OWL have been thoroughly reengineered in OWL 2, thus producing a robust platform for future development of the language.
['Bernardo Cuenca Grau', 'Ian Horrocks', 'Boris Motik', 'Bijan Parsia', 'Peter F. Patel-Schneider', 'Ulrike Sattler']
OWL 2: The next step for OWL
214,625
Summary: Commonly used multiplicity adjustments fail to control the error rate for reported findings in many expression quantitative trait loci (eQTL) studies. TreeQTL implements a stage-wise multiple testing procedure which allows control of appropriate error rates defined relative to a hierarchical grouping of the eQTL hypotheses. Availability and Implementation: The R package TreeQTL is available for download at http://bioinformatics.org/treeqtl.
['Christine Peterson', 'Marina Bogomolov', 'Yoav Benjamini', 'Chiara Sabatti']
TreeQTL: hierarchical error control for eQTL findings.
708,389
Transactions on Emerging Telecommunications Technologies. Early View (Online Version of Record published before inclusion in an issue).
['Xueli An', 'Chan Zhou', 'Riccardo Trivisonno', 'Riccardo Guerzoni', 'Alexandros Kaloxylos', 'David Soldani', 'Artur Hecker']
On end to end network slicing for 5G communication systems
836,383
In this paper, a coarse-fine time-to-digital converter (TDC) based on a Vernier delay line (VDL) is proposed. A new digital circuit was developed for the delay line, and this method led to high resolution and low power consumption. The TDC core was based on a pseudo-differential digital architecture that made it insensitive to nMOS and pMOS transistor mismatches. It also took advantage of a VDL used in conjunction with asynchronous read-out circuitry. The time interval resolution was equal to the difference in delay between the buffers of the upper and lower chains. Then, by adding an extra chain in the lower delay line, the resolution could be controlled while area and power consumption were reduced. Measurement results of the TDC showed a resolution of 4.5 ps and an output dynamic range of 32 bits; the differential non-linearity was always less than one least significant bit (1 LSB), while the integral non-linearity showed a maximum of one LSB. This TDC achieved a consumption of 248.9 µW from a 1.2 V supply.
['Asma Dehghani', 'Mohsen Saneei', 'Ali Mahani']
Time-to-digital convertor based on resolution control
687,654
Cross-layer mechanisms, in which a protocol at a given layer uses information issued from other layers, may enhance mobile network performance. Some of these mechanisms are based on mobility metrics. For example, establishing a route by choosing less mobile nodes could improve the routing protocol. In this paper, we study the ability of mobility metrics to reflect the influence of mobility on protocol performance. The proposed approach evaluates a metric by its capacity to indicate or predict routing protocol performance. Three routing protocols are considered: AODV, DSR and OLSR. The studied mobility metrics are Frequency of Link State Changes (LC), Link Connectivity Duration (LD) and Link Stability Metric (LS). The metrics are evaluated by simulation, first in a general case and then in a scenario case.
['Cholatip Yawut', 'Beatrice Paillassa', 'Riadh Dhaou']
Mobility Metrics Evaluation for Self-Adaptive Protocols
453,585
The emerging Phase Change Memory (PCM) has many advantages such as good scalability and low leakage. MLC (Multi-Level Cell) PCM further extends the benefits by storing two or more bits per cell and thus reducing the per bit cost. However, adopting MLC PCM in main memory often leads to long write latency, high energy consumption, and degraded performance. In this paper, we propose TriState-SET, a proactive-SET based write strategy for improving MLC PCM write performance. TriState-SET proactively places device cells of a dirty memory line in full SET state. By utilizing only three states of 2bit MLC PCM, TriState-SET involves only fast state transitions when writing such a line at write-back time. Our experimental results show that TriState-SET increases performance by 11% and saves system energy by 6.7% (up to 12.2%), while achieving up to 25% (average 14.1%) energy-delay-product improvement.
['Xianwei Zhang', 'Youtao Zhang', 'Jun Yang']
TriState-SET: Proactive SET for improved performance of MLC phase change memories
580,518
The infinite-horizon continuous-time linear-quadratic regulator problem with conical control constraints is considered. Properties of the optimal value function are studied and illustrated: characterization as a solution to a stationary Hamilton-Jacobi equation; convex conjugacy with a dual value function; approximation via smooth value functions for perturbed problems; differentiability; and utility for stabilizing feedback design.
['Rafal Goebel']
The value function for the Linear-Quadratic Regulator with conical control constraints
492,490
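As background, the stationary Hamilton-Jacobi characterization referred to in the abstract above can be written out for this problem class in a standard textbook form (illustrative notation, not the paper's exact statement):

```latex
% Infinite-horizon LQR with the control constrained to a convex cone U:
% dynamics \dot{x} = Ax + Bu, value function
V(x) = \inf_{u(\cdot):\, u(t) \in U} \int_0^{\infty}
       \left( x(t)^{\top} Q\, x(t) + u(t)^{\top} R\, u(t) \right) dt,
% which satisfies the stationary Hamilton--Jacobi equation
\min_{u \in U} \left\{ \nabla V(x)^{\top} (Ax + Bu)
                       + x^{\top} Q x + u^{\top} R u \right\} = 0 .
```

Without the conical constraint (U equal to the whole space), this reduces to the familiar algebraic Riccati equation via V(x) = x^T P x; the constraint is what makes the value function's convex-conjugacy and differentiability properties nontrivial.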
Trust management schemes have been regarded as a powerful tool to defend against a wide range of security attacks and to identify malicious nodes. In this paper, we propose a trust management scheme based on a revised Dempster-Shafer (D-S) evidence theory. D-S theory is well suited to tackling both random and subjective uncertainty in the trust mechanism. A trust propagation mechanism, including conditional trust transitivity and dynamic recommendation aggregation, is developed for obtaining recommended trust values from third-party nodes. We adopt a flexible synthesis method that uses recommended trust only when no direct trust exists, keeping a good trust-energy consumption balance. We also consider the on-off attack and the bad-mouthing attack in our simulation. The simulation results and analysis show that the proposed method deals well with typical network attacks and provides better security and a longer network lifetime.
['Renjian Feng', 'Shenyun Che', 'Xiao Wang', 'Ning Yu']
Trust Management Scheme Based on D-S Evidence Theory for Wireless Sensor Networks
469,595
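The D-S machinery the scheme builds on can be shown concretely with Dempster's rule of combination on a minimal frame {T (trust), D (distrust)}, with Theta = {T, D} carrying the uncertainty mass. This is the textbook rule, not the paper's revised variant; the mass values in the usage are illustrative.

```python
# Dempster's rule of combination over the frame {T, D} with Theta = {T, D}.
def combine(m1, m2):
    """Fuse two mass functions m = {'T': .., 'D': .., 'Theta': ..}."""
    k = m1['T'] * m2['D'] + m1['D'] * m2['T']  # mass assigned to conflict
    norm = 1.0 - k                             # renormalization constant
    fused = {
        'T': (m1['T'] * m2['T'] + m1['T'] * m2['Theta'] + m1['Theta'] * m2['T']) / norm,
        'D': (m1['D'] * m2['D'] + m1['D'] * m2['Theta'] + m1['Theta'] * m2['D']) / norm,
    }
    fused['Theta'] = 1.0 - fused['T'] - fused['D']
    return fused
```

Combining two bodies of evidence that both lean toward trust yields a fused trust mass higher than either input, which is exactly the reinforcement behavior trust aggregation relies on.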
We present a new approach to derive joint distributions for stationary Poisson loss systems. In particular, for M/M/m/0 and M/M/1/n we find the Laplace transforms (with respect to time t) of the probability that at time t there are i customers in the system and during [0,t], j customers are refused admission. For M/M/m/0 we further determine the Laplace transform of the probability that the system was full for less than s time units during [0,t] and serves i customers at time t. Explicit formulas for the corresponding moments are also given.
['Wolfgang Stadje', 'P. R. Parthasarathy']
Generating function analysis of some joint distributions for Poisson loss systems
392,006
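The paper above studies transient joint distributions; as a point of reference, the classical stationary blocking probability for the M/M/m/0 system is the Erlang B formula, computed with the standard stable recursion B(0) = 1, B(j) = a·B(j-1) / (j + a·B(j-1)). This is background material, not the paper's contribution.

```python
# Erlang B blocking probability for M/M/m/0 via the stable recursion.
def erlang_b(offered_load, m):
    """Blocking probability with offered load a = lambda/mu and m servers."""
    b = 1.0  # B(0): with zero servers every arrival is blocked
    for j in range(1, m + 1):
        b = offered_load * b / (j + offered_load * b)
    return b
```

The recursion avoids the large factorials of the closed-form expression, so it stays numerically stable even for hundreds of servers.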
Safety and acceptability are the main concerns in the design of driver assistance systems. In fact, these two requirements sometimes conflict with each other depending on the situation and the driver. This conflict is emphasized particularly in the case of elderly drivers. In order to solve this problem, this paper proposes a new driver-vehicle cooperation scheme, a 'supervisory cooperation control', which consists of model predictive constraint satisfaction and a multimodal human-machine interface. The proposed cooperation scheme enables us to realize safety without losing acceptability.
['Takuma Yamaguchi', 'Hiroyuki Okuda', 'Tatsuya Suzuki', 'Soichiro Hayakawa', 'Ryojun Ikeura', 'Kenji Muto', 'Takafumi Ito']
Implementation and verification of supervisory cooperative control by model predictive method
957,107
The interest in drug combinations is growing rapidly due to the opportunities they create to increase the therapeutic effect and to reduce the frequency or magnitude of undesirable side effects when single drugs fail to deliver satisfactory results. Considerable effort in studying the benefits of the joint action of drugs has been matched by the development of relevant statistical methods and tools for statistical analysis of the data obtained in such studies, allowing important statistical assumptions to be taken into account, i.e. the appropriate statistical model and the distribution of the response of interest (e.g. Gaussian, Binomial, Poisson). However, much less attention has been given to the choice of suitable experimental designs for such studies, although only high-quality data can ensure that the objectives of the studies will be fulfilled. Methods for the construction of such experimental designs that are economical and make the most efficient use of the available resources are proposed. It is shown how this can be done when the distribution of the response belongs to the exponential family of distributions, and specific examples are provided for the most common cases. In addition, simple but flexible experimental designs, called ray-contour designs, are proposed. These designs are particularly useful when the use of low or high doses is undesirable and hence a standard statistical analysis of the data is not possible. Useful features of these designs are illustrated with an application in a cancer study.
['Bader Almohaimeed', 'Alexander Donev']
Experimental designs for drug combination studies
368,958
In this paper, we present new features based on Spatial-Gradient-Features (SGF) at the block level for identifying six video scripts, namely Arabic, Chinese, English, Japanese, Korean and Tamil. This work helps enhance the capability of current OCR for video text recognition by choosing an appropriate OCR engine when a video contains multi-script frames. The input for script identification is the text blocks obtained by our text frame classification method. For each text block, we obtain horizontal and vertical gradient information to enhance the contrast of the text pixels. We divide the horizontal gradient block into two equal parts, upper and lower, at the centroid in the horizontal direction. A histogram of the horizontal gradient values of the upper and lower parts is computed to select dominant text pixels. In the same way, the method selects dominant pixels from the right and left parts obtained by dividing the vertical gradient block vertically. The method combines the horizontal and vertical dominant pixels to obtain text components. A skeletonization step is used to reduce stroke width to a single pixel for extracting spatial features. We extract four features based on the proximity between end points, junction points, intersection points and pixels. The method is evaluated on 770 frames of six scripts in terms of classification rate and is compared with an existing method. We achieve an 82.1% average classification rate.
['Danni Zhao', 'Palaiahnakote Shivakumara', 'Shijian Lu', 'Chew Lim Tan']
New Spatial-Gradient-Features for Video Script Identification
182,858
This paper presents a new proportional-integral (PI) controller that sets the operating point of computing tiles in a system on chip (SoC). We address data-parallel applications with throughput constraints. The controller settings are investigated for application configurations with different QoS levels and different buffer sizes. The control method is evaluated on a test chip with four tiles executing a realistic HMAX object recognition application. Experimental results suggest that the proposed controller outperforms state-of-the-art results: it attains, on average, 25% fewer frequency switches and slightly higher energy savings. The reduction in the number of frequency switches is important because it decreases the involved overhead. In addition, the PI controller meets the throughput constraint in cases where other approaches fail.
['Anca Mariana Molnos', 'Warody Lombardi', 'Diego Puschini', 'Julien Mottin', 'Suzanne Lesecq', 'Arnaud Tonda']
Energy management via PI control for data parallel applications with throughput constraints
564,368
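The control loop described above can be sketched as a minimal discrete-time PI controller driving a frequency correction from a throughput error. The gains, setpoint, and interface are illustrative assumptions, not the paper's tuned values or chip interface.

```python
# Minimal discrete-time PI controller sketch (illustrative gains/setpoint).
class PIController:
    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0  # accumulated throughput error

    def step(self, measured_throughput):
        """One control period: return the frequency correction to apply."""
        error = self.setpoint - measured_throughput
        self.integral += error
        return self.kp * error + self.ki * self.integral
```

The integral term is what lets the controller hold the throughput constraint at steady state without a persistent error, while a small proportional gain keeps the output, and hence the number of frequency switches, from oscillating.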
In this paper, we show how information exchange rules represented by a network having multiple layers (multiplex information networks) can be designed for enabling spatially evolving multiagent formations. Specifically, we consider the formation tracking problem and introduce a novel distributed control architecture. The proposed approach allows capable agents to spatially alter density and orientation of the resulting formation while tracking a dynamic, non-stationary target without requiring global information exchange ability. A numerical example is provided to illustrate the efficacy of the proposed distributed control architecture.
['Dung Tran', 'Tansel Yucelen', 'Eduardo L. Pasiliao']
Multiplex information networks for spatially evolving multiagent formations
861,146
The reliability of communication and sensor devices has been recognized as one of the crucial issues in Wireless Sensor Networks (WSNs). In distributed environments, micro-sensors are subject to high-frequency faults. To provide high stability and availability in large-scale sensor networks, we propose a fault inference mechanism based on a reverse multicast tree to evaluate sensor nodes' fault probabilities. This mechanism is formulated as a maximum-likelihood estimation problem. Due to the characteristics of wireless sensor networks (energy awareness, constrained bandwidth, and so on), it is infeasible for each sensor to announce its working state to a centralized node. Therefore, the maximum likelihood estimates of the fault parameters depend on unobserved latent variables. Hence, our proposed inference mechanism is abstracted as a Nondeterministic Finite Automaton (NFA). It adopts iterative computation under a Markov chain to infer the fault probabilities of nodes in the reverse multicast tree. Through theoretical analysis and simulation experiments, we show that the fault inference mechanism achieves an accuracy that satisfies the needs of fault detection.
['Elhadi M. Shakshuki', 'Xinyu Xing']
A Fault Inference Mechanism in Sensor Networks Using Markov Chain
183,313
Post-fire forest regeneration is crucial to both ecological studies and forest management. Three restoration treatments, namely natural regeneration (NR), artificial regeneration (AR), and artificial promotion (AP), were adopted in the Greater Hinggan Mountain area of China after a serious fire occurred on May 6, 1987. NR means recovering naturally without any intervention, AR comprises salvage logging followed by complete planting, while AP includes regeneration by removing dead trees, weeding, and digging pits to promote seed germination. The objective of this study was to detect and compare the effects of the three restoration treatments based on ALOS/PALSAR data. The four PALSAR images were pre-processed to acquire the backscattering coefficients. The coefficients in both HH and HV polarization were then examined, and two radar vegetation indices were derived and evaluated; based on these, the post-fire forest dynamics under the different restoration treatments were detected and compared. The results showed that the forests under NR presented a completely different recovery trajectory from those under the other two treatments. This study indicated the effects of the different restoration treatments and demonstrated the applicability and efficiency of SAR techniques in forest monitoring and post-fire management.
['Wei Chen', 'Kazuyuki Moriya', 'Tetsuro Sakai', 'Chunxiang Cao']
Monitoring of post-fire forest recovery under different restoration treatments based on time-series ALOS/PALSAR data
934,741
['Vasiliki Mantzana', 'Marinos Themistocleous']
Towards a Conceptual Framework of Actors and Factors Affecting the EAI Adoption in Healthcare Organisations.
223,689
Diversity and evolution in database applications often result in a multidatabase environment in which corporate data are stored in multiple, distributed data sources. It is a formidable task to formulate queries against such multidatabases, especially when complex constraints are involved. This article presents MQDP (multidatabase query design and processing), a graphical user interface to multidatabases. The tool allows the user to handle the overwhelming amount of information from different data sources. It supplies a common interface to the multidatabase and implements multidatabase query optimization in query processing. It hides the heterogeneity of the data and presents it to the user in a uniform manner. In this paper we discuss critical issues in the development of this graphical user interface for multidatabases.
['Wu Mingxia', 'Huang Qi-chun', 'Chen Qi']
Graphical user interface based on multidatabase query optimization
118,726
Wireless Sensor Networks (WSN) are networks composed of autonomous devices manufactured to solve a specific problem, with limited computational capabilities and resource-constrained (e.g. limited battery). WSN are used to monitor physical or environmental conditions within an area (e.g. temperature, humidity). The popularity of the WSN is growing, precisely due to the wide range of sensors available. As a result, these networks are being deployed as part of several infrastructures. However, sensors are designed to collaborate only with sensors of the same type. In this sense, taking advantage of the heterogeneity of WSN in order to provide common services, like it is the case of routing, has not been sufficiently considered. For this reason, in this paper we propose a routing protocol based on traffic classification and role-assignment to enable heterogeneous WSN for cooperation. Our approach considers both QoS requirements and lifetime maximization to allow the coexistence of different applications in the heterogeneous network infrastructure.
['Ana Nieto', 'Javier Lopez']
Traffic Classifier for Heterogeneous and Cooperative Routing through Wireless Sensor Networks
535,355
['Vipul Goyal']
Constant Round Non-Malleable Protocols using One Way Functions.
749,227
The problem of estimating the frequencies of two complex sinusoids using only the phase data is addressed. Under the assumption that the amplitudes of the two signals are not equal, a computationally efficient approximate maximum likelihood estimation (AMLE) is obtained. This AMLE uses only the phase angles of the complex-valued data to estimate frequencies and uses the envelope data to resolve an inherent algebraic sign ambiguity in the difference frequency. At moderately high SNRs, satisfactory results are obtained from computer simulations. The mean-square errors of the estimates are compared with the corresponding values of the Cramer-Rao lower bounds (CRLBs).
['Hongya Ge', 'Donald W. Tufts']
Estimating the frequencies of two sinusoids using only the phase angles of complex-valued data
374,036
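For a single noiseless complex sinusoid, the phase-only idea above reduces to averaging the wrapped differences between consecutive phase angles; a minimal illustrative sketch (not the authors' AMLE, which handles two sinusoids and the sign ambiguity):

```python
import cmath
import math

def estimate_frequency(samples):
    """Estimate the frequency of a single complex sinusoid from
    phase angles only, by averaging wrapped phase differences."""
    diffs = []
    for a, b in zip(samples, samples[1:]):
        d = cmath.phase(b) - cmath.phase(a)
        d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        diffs.append(d)
    return sum(diffs) / len(diffs)

# Synthetic sinusoid with frequency 0.7 rad/sample and phase offset 0.3.
signal = [cmath.exp(1j * (0.7 * n + 0.3)) for n in range(50)]
estimate = estimate_frequency(signal)
```

With noise present, the averaging only approximates the maximum likelihood solution, which is the gap the AMLE above is designed to close.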
The complexity of the anaerobic digestion process makes it difficult to describe and analyze mathematically. This paper sets the foundations for further fully automatic modelling and analysis of biogas systems. Hence, a simplified mathematical model of the anaerobic digestion process of organic matter in a continuous stirred tank reactor is proposed. With the aim of upgrading the produced biogas and integrating biogas plants into a virtual power plant, new control inputs reflecting the addition of stimulating substrates (acetate and bicarbonate) are added. Based on a two-step (acidogenesis-methanogenesis) mass-balance nonlinear model, a step-by-step parameter identification procedure is presented.
['Khadidja Chaib Draa', 'Holger Voos', 'Mohamed Darouach', 'Marouane Alma']
A Formal Modeling Framework for Anaerobic Digestion Systems
627,518
The energy efficiency of cloud computing has recently attracted a great deal of attention. As a result of raised expectations, cloud providers such as Amazon and Microsoft have started to deploy a new IaaS service, a MapReduce-style virtual cluster, to process data-intensive workloads. Considering that the IaaS provider supports multiple pricing options, we study batch-oriented consolidation and online placement for reserved virtual machines (VMs) and on-demand VMs, respectively. For batch cases, we propose a DVFS-based heuristic, TRP-FS, to consolidate virtual clusters on physical servers so as to save energy while guaranteeing job SLAs. We derive the frequency that minimizes energy consumption, and an upper bound on the energy saving achievable through DVFS techniques. Interestingly, this frequency depends only on the type of processor. FS can also be used in combination with other consolidation algorithms. For online cases, a time-balancing heuristic, OTB, is designed for on-demand placement, which can reduce mode switching by balancing server duration and utilization. The experimental results, both in simulation and on a Hadoop testbed, show that our approach achieves greater energy savings than existing algorithms.
['Fei Teng', 'L. G. Yu', 'Tianrui Li', 'Danting Deng', 'Frédéric Magoulès']
Energy efficiency of VM consolidation in IaaS clouds
833,272
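The claim that the energy-optimal frequency depends only on the processor can be checked on a standard cubic power model. This is a hypothetical sketch, not the paper's exact model: it assumes power p_static + c*f**3 and runtime work/f.

```python
def energy(work, f, p_static, c):
    # Runtime is work / f; power is static leakage plus a cubic dynamic term.
    return (work / f) * (p_static + c * f ** 3)

def optimal_frequency(p_static, c):
    # Setting dE/df = 0 gives f* = (p_static / (2c)) ** (1/3),
    # which depends only on the processor constants, not on `work`.
    return (p_static / (2 * c)) ** (1.0 / 3.0)

f_star = optimal_frequency(2.0, 1.0)  # = 1.0 for these constants
```

Checking neighbouring frequencies confirms that f_star is a minimum for any workload, which is the intuition behind a workload-independent optimal frequency.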
Managing trust is a key issue for the wide acceptance of P2P computing, particularly in critical areas such as e-commerce. Reputation-based trust management has been identified in the literature as a viable solution to the problem. However, the mechanism faces the challenge of subjectively and experientially weighting referrals when aggregating recommendation information. Furthermore, existing schemes do not consider some malicious attacks when building trust relationships between peers, which makes their trust models very vulnerable. This paper presents P2PTrust, a new trust framework based on reputation for unstructured P2P networks. In P2PTrust, besides the reputation value, a reputation revision value is also taken into account in order to deal with the dynamic or spoiling behavior of peers, which distinguishes P2PTrust from other trust models based on reputation only. Considering the credibility of peers' referrals, a credibility quantification and update scheme is proposed as a reliable means of seeking honest feedback. Subsequent experimental results show that, compared to existing trust models, our scheme is robust in systems where the vast majority of users are malicious and achieves a higher successful transaction rate.
['Chunqi Tian', 'Shihong Zou', 'Lingwei Chu', 'Shiduan Cheng']
A New Trust Framework Based on Reputation for Unstructured P2P Networks
87,981
Controlling the granularity of workflow activities executed on widely distributed computing platforms such as grids is required to reduce the impact of task queuing and data transfer time. Most existing granularity control approaches assume extensive knowledge about the applications and resources (e.g. task duration on each resource), and that both the workload and available resources do not change over time. We propose a granularity control algorithm for platforms where such clairvoyant and offline conditions are not realistic. Our method groups tasks when the fineness degree of the application, which takes into account the ratio of shared data and the queuing/round-trip time ratio, becomes higher than a threshold determined from execution traces. The algorithm also de-groups task groups when new resources arrive. The application's behavior is constantly monitored so that the characteristics useful for the optimization are progressively discovered. Experimental results, obtained with 3 workflow activities deployed on the European Grid Infrastructure, show that (i) the grouping process yields speed-ups of about 2.5 when the amount of available resources is constant and that (ii) the use of de-grouping yields speed-ups of 2 when resources progressively appear.
['Rafael Ferreira da Silva', 'Tristan Glatard', 'Frédéric Desprez']
On-Line, non-clairvoyant optimization of workflow activity granularity on grids
826,641
The potential of smart living environments to provide a form of independent living for the ageing population is becoming more recognised. These environments are comprised of sensors which are used to assess the state of the environment, some form of information management to process the sensor data and finally a suite of actuators which can be used to change the state of the environment. When providing a form of support which may impinge upon the well-being of the end user it is essential that a high degree of reliability can be maintained. Within this paper we present an information management framework to process sensor based data within smart environments. Based on this framework we assess the impact of sensor reliability on the classification of activities of daily living. From this assessment we show how it is possible to identify which sensors within a given set of experiments can be considered to be the most critical and as such consider how this information may be used for managing sensor reliability from a practical point of view.
['Chris D. Nugent', 'Xin Hong', 'Josef Hallberg', 'Dewar D. Finlay', 'Kåre Synnes']
Assessing the impact of individual sensor reliability within smart living environments
148,184
Business processes spanning organizational boundaries inside and outside an enterprise are increasingly becoming common practice in today's networked business environments. Service level agreements (SLAs) are negotiated between enterprises to measure, ensure and enforce service fulfillment and quality in this dynamic context. Often, SLA violations are directly associated with penalty costs, making it crucial to stick to agreed SLAs and proactively intervene in case of potential violations. Thus, a framework is required which allows for (1) efficient business process compliance monitoring, and (2) taking immediate action in case of compliance violations in order to minimize the business impact. In this paper we present a novel compliance monitoring framework based on a Complex Event Processing (CEP) engine. It allows modeling business processes as event flows, whereby events reflect state changes in a process or the business environment. Compliance checkpoints are added to an event flow and signify aspects which may be relevant to monitor, such as the relative timeframe between two events. Upon these, monitoring rules are defined to detect compliance violations and automatically trigger corrective actions.
['Robert Thullner', 'Szabolcs Rozsnyai', 'Josef Schiefer', 'Hannes Obweger', 'Martin Suntinger']
Proactive Business Process Compliance Monitoring with Event-Based Systems
235,961
The field of digital document content analysis includes many important tasks, for example page segmentation or zone classification. It is impossible to build effective solutions for such problems and evaluate their performance without a reliable test set, that contains both input documents and expected results of segmentation and classification. In this paper we present GROTOAP --- a test set useful for training and performance evaluation of page segmentation and zone classification tasks. The test set contains input articles in a digital form and corresponding ground truth files. All input documents included in the test set have been selected from DOAJ database, which indexes articles published under CC-BY license. The whole test set is available under the same license.
['Dominika Tkaczyk', 'Artur Czeczko', 'K. Rusek', 'Lukasz Bolikowski', 'Roman Bogacewicz']
GROTOAP: ground truth for open access publications
158,143
Visual categorization problems, such as object classification or action recognition, are increasingly often approached using a detection strategy: a classifier function is first applied to candidate subwindows of the image or the video, and then the maximum classifier score is used for class decision. Traditionally, the subwindow classifiers are trained on a large collection of examples manually annotated with masks or bounding boxes. The reliance on time-consuming human labeling effectively limits the application of these methods to problems involving very few categories. Furthermore, the human selection of the masks introduces arbitrary biases (e.g., in terms of window size and location) which may be suboptimal for classification. We propose a novel method for learning a discriminative subwindow classifier from examples annotated with binary labels indicating the presence of an object or action of interest, but not its location. During training, our approach simultaneously localizes the instances of the positive class and learns a subwindow SVM to recognize them. We extend our method to classification of time series by presenting an algorithm that localizes the most discriminative set of temporal segments in the signal. We evaluate our approach on several datasets for object and action recognition and show that it achieves results similar and in many cases superior to those obtained with full supervision. Highlights: a framework for discriminative localization and classification is proposed; it works for both images and time series; it only requires weakly annotated data; it achieves results similar to those obtained with full supervision.
['Minh Hoai', 'Lorenzo Torresani', 'Fernando De la Torre', 'Carsten Rother']
Learning discriminative localization from weakly labeled data
84,233
['Michael Prilla', 'Oliver Blunk']
Reflective TEL: Augmenting Learning Tools with Reflection Support
782,172
['Rodrigo Mantovaneli Pessoa', 'Marten van Sinderen', 'Dick A. C. Quartel']
Towards Requirements Elicitation in Service-oriented Business Networks using Value and Goal Modelling.
778,882
Our main purpose is to enhance the efficiency of local search algorithms (from the Walksat family) for the satisfiability problem (SAT) by including the structure of the treated instances in their resolution. The structure is described by the dependencies between the variables of the problem, interpreted as additional constraints hidden in the original formulation of the SAT instance. Checking these dependencies may speed up the search and increase the robustness of incomplete methods. The extracted dependencies are implications and equivalences between variables. The effective implementation of this purpose is achieved by a hybrid approach combining a local search algorithm and an efficient DPL procedure.
['Djamal Habet', 'Michel Vasquez']
Improving Local Search for Satisfiability Problem by Integrating Structural Properties
144,934
Imperfect isolation of switching elements inside optical space switches gives rise to leakage signals that can result in homodyne crosstalk. One way to reduce the influence of this phenomenon is to guarantee that only one signal traverses a switching element at a time. Nevertheless, for large optical space switches based on this concept, the homodyne crosstalk can still have some influence. An analysis of the impact of this phenomenon in this kind of switches is developed. The results are compared with the ones obtained in switches built with no crosstalk restrictions and show that there is a compromise between the switching element crosstalk and the network cost. The analysis uses a rigorous approach based on the Gaussian Quadrature Rules method, and the switches considered in this paper belong to the family of strictly non-blocking Horizontal Expanded and Vertical Replicated Banyan networks.
['Luis G. C. Cancela', 'J.J.O. Pires']
Crosstalk Effects in Large Strictly Non-blocking Optical Switches Based on Directional Couplers
100,118
We discuss the experiences gained with implementing the programming language Edison in Prolog. The evaluation of Prolog in this application area is based on a comparison with two other Edison compilers, one written in Pascal (procedural approach) and the other generated using the compiler writing systems PGS and GAG (declarative approach).
['Jukka Paakki']
Prolog in practical compiler writing
148,986
This paper presents a methodology for designing automotive user interfaces using Situation Awareness (SA). The development of interfaces providing users with relevant, timely information is critical for optimal performance. This paper presents a variation on the original SA model allowing for a consideration of all situational factors in a modern motor car. “Dual Goal SA” provides environmental and cognitive considerations for developing interfaces used when multi-tasking. A pilot study was carried out to test the Dual Goal concept and identify appropriate measures. Subjective results proved inconclusive. Objective results showed good evidence supporting the hypothesis but most promising was a factor analysis involving all key objective measures. This produced factors that classified the experimental groups consistently with predictions, demonstrating evidence of variations in performance consistent with variations in SA for two competing goals.
['Lee Skrypchuk', 'Patrick Langdon', 'P. John Clarkson', 'A Mouzakitis']
Creating Inclusive Automotive Interfaces Using Situation Awareness as a Design Philosophy
852,312
It is known that the deformation of the apparent contours of a surface under perspective projection and viewer motion enables the recovery of the geometry of the surface, for example by utilising the epipolar parametrization. These methods break down with apparent contours that are singular, i.e., with cusps. In this paper we study this situation and show how, nevertheless, the surface geometry (including the Gauss curvature and mean curvature of the surface) can be recovered by following the cusps. Indeed, the formulae are much simpler in this case and require lower spatio-temporal derivatives than in the general case of nonsingular apparent contours. We also show that following cusps does not by itself provide us with information on viewer motion.
['Roberto Cipolla', 'Gordon J. Fletcher', 'Peter Giblin']
Following Cusps
710,501
We describe PhotoCompas, a system that utilizes the time and location information embedded in digital photographs to automatically organize a personal photo collection. PhotoCompas produces browseable location and event hierarchies for the collection. These hierarchies are created using algorithms that interleave time and location to produce an organization that mimics the way people think about their photo collections. In addition, our algorithm annotates the generated hierarchy with geographical names. We tested our approach in case studies of three real-world collections and verified that the results are meaningful and useful for the collection owners.
['Mor Naaman', 'Yee Jiun Song', 'Andreas Paepcke', 'Hector Garcia-Molina']
Automatic organization for digital photographs with geographic coordinates
375,198
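The event side of such a hierarchy is often seeded by a simple time-gap split; a minimal sketch of that heuristic (illustrative only, since PhotoCompas interleaves time with location):

```python
def cluster_by_time(timestamps, max_gap):
    """Split a sorted list of photo timestamps into events whenever
    the gap between consecutive photos exceeds max_gap."""
    events = []
    current = [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] > max_gap:
            events.append(current)  # close the current event
            current = [t]
        else:
            current.append(t)
    events.append(current)
    return events

# Three bursts of photos, in minutes since some origin.
events = cluster_by_time([0, 1, 2, 60, 61, 200], max_gap=30)
```

The gap threshold is the key tuning knob; interleaving a location split, as the system does, removes much of the sensitivity to that choice.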
Objective: The mapping of haemodynamic changes related to interictal epileptic discharges (IED) in simultaneous electroencephalography (EEG) and functional MRI (fMRI) studies is usually carried out by means of EEG-correlated fMRI analyses where the EEG information specifies the model to test on the fMRI signal. The sensitivity and specificity critically depend on the accuracy of EEG detection and the validity of the haemodynamic model. In this study we investigated whether an information theoretic analysis based on the mutual information (MI) between the presence of epileptic activity on EEG and the fMRI data can provide further insights into the haemodynamic changes related to interictal epileptic activity. The important features of MI are that: 1) both recording modalities are treated symmetrically; 2) no a-priori model of the haemodynamic response function, or assumption of a linear relationship between the spiking activity and BOLD responses, is required; and 3) no parametric model for the type of noise or its probability distribution is necessary for the computation of MI. Methods: Fourteen patients with pharmaco-resistant focal epilepsy underwent EEG-fMRI, and intracranial EEG and/or surgical resection with positive postoperative outcome (seizure freedom or considerable reduction in seizure frequency) was available in 7/14 patients. We used nonparametric statistical assessment of the MI maps based on a four-dimensional wavelet packet resampling method. The results of MI were compared to the statistical parametric maps obtained with two conventional General Linear Model (GLM) analyses based on the informed basis set (canonical HRF and its temporal and dispersion derivatives) and the Finite Impulse Response (FIR) models.
Results: The MI results were concordant with the electro-clinically or surgically defined epileptogenic area in 8/14 patients and showed the same degree of concordance as the results obtained with the GLM-based methods in 12 patients (7 concordant and 5 discordant). In one patient, the information theoretic analysis improved the delineation of the irritative zone compared with the GLM-based methods. Discussion: Our findings suggest that an information theoretic analysis can provide clinically relevant information about the BOLD signal changes associated with the generation and propagation of interictal epileptic discharges. The concordance between the MI, GLM and FIR maps supports the validity of the assumptions adopted in GLM-based analyses of interictal epileptic activity with EEG-fMRI, in such a manner that they do not significantly constrain the localization of the epileptogenic zone.
['Cesar Caballero-Gaudes', 'Dimitri Van De Ville', 'Frédéric Grouiller', 'R Thornton', 'Louis Lemieux', 'Margitta Seeck', 'François Lazeyras', 'Serge Vulliemoz']
Mapping interictal epileptic discharges using mutual information between concurrent EEG and fMRI
352,035
['Sebastian Clauss', 'Pierre Aurich', 'Stefan Bruggenwirth', 'Vladimir Dobrokhodov', 'Isaac Kaminer', 'Axel Schulte']
Design and Evaluation of a UAS combining Cognitive Automation and Optimal Control
703,118
We study uncertainty models in sequential pattern mining. We discuss some kinds of uncertainties that could exist in data, and show how these uncertainties can be modelled using probabilistic databases. We then obtain possible world semantics for them and show how frequent sequences could be mined using the probabilistic frequentness measure.
['Muhammad Muzammal', 'Rajeev Raman']
Uncertainty in sequential pattern mining
166,869
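Under possible-world semantics, the support of a pattern is Poisson-binomial distributed over the per-sequence occurrence probabilities, and the pattern is probabilistically frequent when P(support >= minsup) exceeds a threshold. A minimal sketch of that computation (function names are illustrative, not from the paper):

```python
def support_distribution(probs):
    """DP over the Poisson-binomial distribution:
    dist[k] = P(exactly k sequences contain the pattern)."""
    dist = [1.0]
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1.0 - p)  # this sequence misses the pattern
            new[k + 1] += q * p      # this sequence contains it
        dist = new
    return dist

def prob_frequent(probs, minsup):
    # Probability that the support reaches the minimum threshold.
    return sum(support_distribution(probs)[minsup:])

p = prob_frequent([0.5, 0.5], minsup=1)  # 1 - 0.25 = 0.75
```

The DP is quadratic in the number of sequences, which is why efficient approximations matter at scale.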
['Rouven Walter', 'Wolfgang Küchlin']
ReMax - A MaxSAT aided Product (Re-)Configurator.
776,301
System-wide disturbances in power systems are a challenging problem for the utility industry because of the large scale and the complexity of the power system. When a major power system disturbance occurs, protection and control actions are required to stop the power system degradation, restore the system to a normal state, and minimize the impact of the disturbance. In some cases, the present control actions are not designed for a fast-developing disturbance and may be too slow. The report explores special protection schemes and new technologies for advanced, wide-area protection. There seems to be a great potential for advanced wide-area protection and control systems, based on powerful, flexible and reliable system protection terminals, high speed, communication, and GPS synchronization in conjunction with careful and skilled engineering by power system analysts and protection engineers in cooperation.
['Miroslav Begovic', 'Damir Novosel', 'Daniel Karlsson', 'C.F. Henville', 'Gary Michel']
Wide-Area Protection and Emergency Control
956,353
Recently, conditional logics have been developed for application to problems in default reasoning. We present a uniform framework for the development and investigation of conditional logics to represent and reason with "normality", and demonstrate these logics to be equivalent to extensions of the modal system S4. We also show that two conditional logics, recently proposed to reason with default knowledge, are equivalent to fragments of two logics developed in this framework.
['Craig Boutilier']
Conditional logics of normality as modal systems
54,546
Even though the multi-agent paradigm has been evolving for fifteen years, the development of concrete methods for problem solving remains a major challenge. This paper focuses on reactive multi-agent systems because they provide interesting properties such as adaptability and robustness. In particular, the role of the environment, which is the place where the system computes and communicates, is studied. From this analysis, a principle to design or engineer reactive systems is introduced. Our approach is based on the representation of the problem's constraints, considered as perturbations that agents have to stabilize.
['Franck Gechter', 'Olivier Simonin']
Conception de SMA réactifs pour la résolution de problèmes : une approche basée sur l'environnement.
789,574
Asthma is the most prevalent chronic disease among pediatric patients, as it is the leading cause of student absenteeism and hospitalization for those under the age of 15. To address the significant need to manage this disease in children, the authors present a mobile health (mHealth) system that determines the risk of an asthma attack through physiological and environmental wireless sensors and representational state transfer application program interfaces (RESTful APIs). The data is sent from wireless sensors to a smartwatch application (app) via a Health Insurance Portability and Accountability Act (HIPAA) compliant cryptography framework, which then sends data to a cloud for real-time analytics. The asthma risk is then sent to the smartwatch and provided to the user via simple graphics for easy interpretation by children. After testing the safety and feasibility of the system in an adult with moderate asthma prior to testing in children, it was found that the analytics model is able to determine the overall asthma risk (high, medium, or low risk) with an accuracy of 80.10±14.13%. Furthermore, the features most important for assessing the risk of an asthma attack were multifaceted, highlighting the importance of continuously monitoring different wireless sensors and RESTful APIs. Future testing of this asthma attack risk prediction system in pediatric asthma patients may lead to an effective self-management asthma program.
['Anahita Hosseini', 'Chris M. Buonocore', 'Sepideh Hashemzadeh', 'Hannaneh Hojaiji', 'Haik Kalantarian', 'Costas Sideris', 'Alex A. T. Bui', 'Christine E. King', 'Majid Sarrafzadeh']
HIPAA compliant wireless sensing smartwatch application for the self-management of pediatric asthma
840,285
In this paper a granular approach for intelligent control using generalized type-2 fuzzy logic is presented. Granularity is used to divide the design of the global controller into several individual simpler controllers. The theory of alpha planes is used to implement the generalized type-2 fuzzy systems. The proposed method for control is applied to a non-linear control problem to test the advantages of the proposed approach. Also an optimization method is used to efficiently design the generalized type-2 fuzzy system to improve the control performance.
['Oscar Castillo', 'Leticia Cervantes', 'José Soria', 'Mauricio A. Sanchez', 'Juan R. Castro']
A generalized type-2 fuzzy granular approach with applications to aerospace
689,127
In this paper we present a fault simulator for flash memory testing and diagnostics, called RAMSES-FT. The fault simulator is designed for easy inclusion of new fault models by adding their fault descriptors without modifying the simulation engine. The flash memory fault models are discussed, based on the failures defined in the IEEE 1005 Standard. Both the NOR-type and NAND-type flash memory architectures are covered. Our flash memory fault simulator uses a parallel simulation strategy to reduce the simulation time complexity from O(N/sup 3/) to O(N/sup 2/), where N is the number of cells. With the proposed scaling method for March tests, the simulation time complexity is further reduced to O(W/sup 2/), where W is the word width of the memory. The fault simulator supports March algorithms as well as single memory operations, covering most of the flash memory tests. With RAMSES-FT we have developed a diagnostic algorithm that can distinguish the target flash memory faults.
['Kuo-Liang Cheng', 'Jen-Chieh Yeh', 'Chih-Wea Wang', 'Chih-Tsun Huang', 'Cheng-Wen Wu']
RAMSES-FT: a fault simulator for flash memory testing and diagnostics
478,373
Mobile Internet is the trend for next generation networks, and mobile multicast is one of its key technologies. However, providing reliable multicast transport service in mobile networks is a challenge. Current mobile reliable multicast transport protocols can be categorized into two classes, local recovery and remote recovery, according to their policies for solving the out-of-sync problem. Local recovery can provide lost data to a mobile host quickly, but it requires large buffers. On the contrary, remote recovery needs small buffers, but the time to recover is long. Moreover, it is difficult for remote recovery to choose the retransmission method, unicast or multicast. This paper proposes a new reliable multicast transport protocol for mobile environments, AMRM, which belongs to the local recovery class. AMRM only needs a small buffer to provide reliable multicast service and still keeps the characteristic of quick recovery, so it is an effective reliable multicast transport protocol for mobile environments.
['Rui Fan', 'Guoqing Zhang']
AMRM: adaptive mobile reliable multicast protocol
453,103
Prolonged sitting and physical inactivity at the workplace often lead to various health risks such as diabetes, heart attack, cancer, etc. Many organizations are investing in wellness programs to ensure the well-being of their employees. Generally, wearable devices are used in such wellness programs to detect health problems of employees, but studies have shown that wearables do not result in sustained adoption. Heart rate measurement has emerged as an effective tool to detect various ailments such as anxiety, stress, cardiovascular diseases, etc. There are pre-existing techniques that use a webcam feed to sense heart rate, subject to some experimental constraints like stillness of the face, light illumination, etc. In this paper, we show that in-situ opportunities can be found and predicted for webcam-based heart rate sensing in the workplace environment by analyzing data from unobtrusive sensors in a pervasive manner.
['Mridula Singh', 'Abhishek Kumar', 'Kuldeep Yadav', 'Himanshu J. Madhu', 'Tridib Mukherjee']
Mauka-mauka: measuring and predicting opportunities for webcam-based heart rate sensing in workplace environment
696,875
We show that a metric space embeds in the rectilinear plane (i.e., is L1-embeddable in ℝ2) if and only if every subspace with five or six points does. A simple construction shows that for higher dimensions k of the host rectilinear space the number c(k) of points that need to be tested grows at least quadratically with k, thus disproving a conjecture of Seth and Jerome Malitz.
['Hans-Jürgen Bandelt', 'Victor Chepoi']
Embedding Metric Spaces in the Rectilinear Plane: a Six-Point Criterion
493,259
This paper describes a segmentation method for online handwritten Japanese text of arbitrary line direction using a neural network, in order to improve text recognition performance. The method extracts multidimensional features from strokes of handwritten text and inputs them into a neural network to preliminarily determine segmentation points. Then, it modifies segmentation candidates using some spatial features. We compare the method with a previous method and with one based on Fisher's linear discriminant, using the database HANDS-Kondate_t_bf-2001-11. This paper also shows how to generate character segmentation candidates in order to achieve a high discrimination rate by investigating the relationship between recall, precision and the f measure.
['Bilan Zhu', 'Masaki Nakagawa']
Segmentation of on-line handwritten Japanese text of arbitrary line direction by a neural network for improving text recognition
187,679
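The f measure mentioned above is the standard weighted harmonic mean of recall and precision; for reference, a minimal sketch:

```python
def f_measure(recall, precision, beta=1.0):
    """Weighted harmonic mean of recall and precision;
    beta > 1 weights recall more heavily than precision."""
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

balanced = f_measure(0.8, 0.8)  # equals 0.8 when both agree
skewed = f_measure(1.0, 0.5)    # penalizes the imbalance
```

Because the harmonic mean is dominated by the smaller of the two values, tuning segmentation candidates against the f measure discourages trading one metric entirely for the other.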
A novel invariant texture classification method is proposed. Invariance to linear/non-linear monotonic gray-scale transformations is achieved by submitting the image under study to the ranklet transform, an image processing technique relying on the analysis of the relative rank of pixels rather than on their gray-scale value. Some texture features are then extracted from the ranklet images resulting from the application at different resolutions and orientations of the ranklet transform to the image. Invariance to 90°-rotations is achieved by averaging, for each resolution, correspondent vertical, horizontal, and diagonal texture features. Finally, a texture class membership is assigned to the texture feature vector by using a support vector machine (SVM) classifier. Compared to three recent methods found in the literature and evaluated on the same Brodatz and Vistex datasets, the proposed method performs better. Also, invariance to linear/non-linear monotonic gray-scale transformations and 90°-rotations is evidenced by training the SVM classifier on texture feature vectors formed from the original images, then testing it on texture feature vectors formed from contrast-enhanced, gamma-corrected, histogram-equalized, and 90°-rotated images.
['Matteo Masotti', 'R. Campanini']
Texture classification using invariant ranklet features
541,471
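The gray-scale invariance rests on using relative ranks instead of raw intensities: any strictly increasing transformation leaves the ranks unchanged. A toy rank transform illustrating the principle (the ranklet transform itself is more involved, combining ranks with Haar-wavelet-style supports):

```python
def rank_transform(values):
    """Replace each value by its rank among all values; invariant
    under any strictly increasing gray-scale transformation."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

pixels = [120, 30, 200, 75]
gamma_corrected = [v ** 0.5 for v in pixels]  # a monotonic transform
```

Since gamma correction, contrast enhancement, and histogram equalization are all (essentially) monotonic, rank-based features survive them unchanged, which is exactly the robustness the experiments above test.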
The problem of index coding with side information was first considered by Y. Birk and T. Kol (IEEE INFOCOM, 1998). In the present work, a generalization of the index coding scheme, where transmitted symbols are subject to errors, is studied. Error-correcting methods for such a scheme, and their parameters, are investigated. In particular, the following question is discussed: given the side information hypergraph of an index coding scheme and the maximal number of erroneous symbols δ, what is the shortest length of a linear index code such that every receiver is able to recover the required information? This question turns out to be a generalization of the problem of finding a shortest-length error-correcting code with a prescribed error-correcting capability in classical coding theory. The Singleton bound and two other bounds, referred to as the α-bound and the κ-bound, for the optimal length of a linear error-correcting index code (ECIC) are established. For large alphabets, a construction based on concatenation of an optimal index code with an MDS classical code is shown to attain the Singleton bound. For smaller alphabets, however, this construction may not be optimal. A random construction is also analyzed. It yields another inexplicit bound on the length of an optimal linear ECIC. Finally, the decoding of linear ECICs is discussed. The syndrome decoding is shown to output the exact message if the weight of the error vector is less than or equal to the error-correcting capability of the corresponding ECIC.
['Son Hoang Dau', 'Vitaly Skachek', 'Yeow Meng Chee']
Index coding and error correction
401,146
This paper, in addition to reporting some existing techniques, proposes a new technique for skew correction. It includes a novel document skew detection algorithm based on the bounding box technique. The algorithm works quite efficiently for detecting the skew and then correcting it. The method has been tested on various text documents, and very promising results have been achieved, giving more than 97% accuracy. A comparative study is reported to provide a detailed analysis of the proposed method together with some other existing methods in the literature.
['Muhammad Sarfraz', 'Zeehasham Rasheed']
Skew Estimation and Correction of Text Using Bounding Box
86,301
Rural developing regions are often defined in terms of their resource constraints, including limited technology exposure, lack of power, and low access to data connections (leading to an inability to access information from digital or physical sources), as well as being amongst the most socio-economically disadvantaged and least literate in their countries’ populations. This article is focused around information access in such regions, aiming to build upon and extend the audio-based services that are already widely used in order to provide access to further types of media. In this article, then, we present an extended exploration of AudioCanvas —an interactive telephone-based audio information system that allows cameraphone users to interact directly with their own photos of physical media to receive narration or description. Our novel approach requires no specialist hardware, literacy, or data connectivity, making it far more likely to be a suitable solution for users in such regions.
['Jennifer S. Pearson', 'Simon Robinson', 'Matt Jones']
Exploring Low-Cost, Internet-Free Information Access for Resource-Constrained Communities
941,374
['Martin Gronemann', 'Michael Jünger', 'Nils Kriege', 'Petra Mutzel']
MolMap - Visualizing Molecule Libraries as Topographic Maps.
747,427
Admission Control (AC) is an efficient way of dealing with congestion situations in a network. Using AC, when network resources in a path are not enough for all flows (i.e., during congestion), some of the flows receive the requested service and the rest do not. Congestion situations can be reduced by increasing network resources or by optimizing their use through better routing techniques, but if congestion still occurs, AC achieves efficient use of network resources by maximizing the number of satisfied flows. However, using AC complicates the network scheme, and therefore a major concern is making the AC as simple as possible. In this paper we review the main AC schemes that have been proposed for the Internet, focusing on the simplicity of their architectures in terms of the number of nodes that participate in the AC, the required state, the use of signaling, and others.
['Lluís Fàbrega', 'Teodor Jové']
A review of the architecture of admission control schemes in the Internet
447,374
Dealing with large quantities of flammable and explosive materials, usually at high-pressure high-temperature conditions, makes process plants very vulnerable to cascading effects compared with other infrastructures. The combination of the extremely low frequency of cascading effects and the high complexity and interdependencies of process plants makes risk assessment and vulnerability analysis of process plants very challenging in the context of such events. In the present study, cascading effects were represented as a directed graph; accordingly, the efficacy of a set of graph metrics and measurements was examined in both unit and plant-wide vulnerability analysis of process plants. We demonstrated that vertex-level closeness and betweenness can be used in the unit vulnerability analysis of process plants for the identification of critical units within a process plant. Furthermore, the graph-level closeness metric can be used in the plant-wide vulnerability analysis for the identification of the most vulnerable plant layout with respect to the escalation of cascading effects. Finally, the results from the application of the graph metrics were verified using a Bayesian network methodology.
['Nima Khakzad', 'Genserik Reniers']
Using graph theory to analyze the vulnerability of process plants in the context of cascading effects
597,527
Default logic is one of the most popular and successful formalisms for non-monotonic reasoning. In 2002, Bonatti and Olivetti introduced several sequent calculi for credulous and skeptical reasoning in propositional default logic. In this paper we examine these calculi from a proof-complexity perspective. In particular, we show that the calculus for credulous reasoning obeys almost the same bounds on the proof size as Gentzen's system LK. Hence proving lower bounds for credulous reasoning will be as hard as proving lower bounds for LK. On the other hand, we show an exponential lower bound to the proof size in Bonatti and Olivetti's enhanced calculus for skeptical default reasoning.
['Olaf Beyersdorff', 'Arne Meier', 'Sebastian Müller', 'Michael Thomas', 'Heribert Vollmer']
Proof Complexity of Propositional Default Logic.
788,737
The identification of sources injecting periodic disturbances in electric systems is a critical point in the assessment of the electric power quality. The available methods, based on a deterministic approach, do not always provide correct results. Methods based on heuristic approaches, such as those based on a fuzzy inference system (FIS), provide better results but do not allow measurement uncertainty to be evaluated in a straightforward way. This paper applies a modified FIS to the identification of the sources producing periodic distortion in power systems. The method associates an index, provided together with its measurement uncertainty, to each load connected to a point of common coupling capable of assessing whether the load is injecting or suffering distortion and quantifying the severity of the injected or suffered distortion.
['Alessandro Ferrero', 'Marco Prioli', 'Simona Salicone']
Fuzzy Metrology-Sound Approach to the Identification of Sources Injecting Periodic Disturbances in Electric Networks
83,517
Simulations of continuous-time systems are frequently used by designers of signal processing and communication systems. Windowed finite-impulse response models are often used in these simulations to model continuous-time linear filters. We investigate the performance of some common windows with respect to waveform fidelity, which is a primary goal in waveform simulation, and we also obtain the form of optimum windows for this criterion. Our results indicate that the rectangular window is generally a practical and reasonably good choice for waveform simulation.
['Rick S. Blum', 'Michel C. Jeruchim']
A note on windowing in the simulation of continuous-time communication systems
57,350
While many state-of-the-art trackers focus on solving certain parts of the tracking problem (e.g., dealing with occlusion and scale variation), it is beneficial to exploit the merits of multiple trackers to tackle different application scenarios. This paper proposes an end-to-end tracker selection method to improve visual tracking performance. Any trackers with arbitrary tracking strategy can be integrated in our framework without the need of modifying any single component tracker, which is simple and effective. We introduce a novel verification mechanism to select the most reliable tracking result from multiple trackers. Both qualitative and quantitative evaluations demonstrate that the proposed method achieves a consistent performance improvement on the benchmark dataset.
['Tianqi Zheng', 'Chao Xie', 'Wengang Zhou', 'Houqiang Li']
Improve Visual Tracking by End-to-end Multi-Tracker Selection
969,320
Advanced CMOS technology can enable high levels of performance with reduced active power at the expense of increased standby leakage. MTCMOS has previously been described as a method of reducing leakage in standby modes by the addition of a power supply interrupt switch. Enhancements using variable well bias and layout techniques are described, and demonstrate increased performance and reduced leakage over conventional MTCMOS circuits.
['Stephen V. Kosonocky', 'M. Immediato', 'P. Cottrell', 'Terence B. Hook', 'Randy W. Mann', 'Jeff Brown']
Enhanced multi-threshold (MTCMOS) circuits using variable well bias
253,387
The aim of this work is to combine penalization and level-set methods to solve inverse or shape optimization problems on uniform cartesian meshes. Penalization is a method to impose boundary conditions avoiding the use of body-fitted grids, whereas level-sets allow a natural non-parametric description of the geometries to be optimized. In this way, the optimization problem is set in a larger design space compared to classical parametric representation of the geometries, and, moreover, there is no need of remeshing at each optimization step. Special care is devoted to the solution of the governing equations in the vicinity of the penalized regions and a method is introduced to increase the accuracy of the discretization. Another essential feature of the optimization technique proposed is the shape gradient preconditioning. This aspect turns out to be crucial since the problem is infinite dimensional in the limit of grid resolution. Examples pertaining to model inverse problems and to shape design for Stokes flows are discussed, demonstrating the effectiveness of this approach.
['Frédéric Chantalat', 'Charles-Henri Bruneau', 'Cédric Galusinski', 'Angelo Iollo']
Level-set, penalization and cartesian meshes: A paradigm for inverse problems and optimal design
81,683
['Ersin Aslan']
Neighbor Isolated Tenacity of Graphs
714,520
Most industrial products, including their software, are developed based on former products. Revising existing software according to new requirements is thus an important issue. However, innovative techniques for software revision cannot be easily introduced to projects where software is not a central part. In this paper, we report how to explore and apply software engineering techniques to such non-ideal projects to encourage technology transfer to industry. First, we present our experiences with industrial partners in exploring which tasks could be supported in such projects and which techniques could be applied to those tasks. As a result, we found that change impact analysis could be technically supported, and that traceability techniques using information retrieval seemed suitable for it. Second, we gained preliminary experience with a method using such techniques on industrial data and evaluated it with our industrial partners. Third, based on this evaluation, we improved the method using the following techniques: indexing of technical documents for characterizing requirements changes, machine learning on source code for validating predicted traceability, and static source code analysis for finding indirect impacts. Our industrial partners finally evaluated the improved method and confirmed that it worked better than before.
['Haruhiko Kaiya', 'Kenichiro Hara', 'Kyotaro Kobayashi', 'Akira Osada', 'Kenji Kaijiri']
Exploring How to Support Software Revision in Software Non-intensive Projects Using Existing Techniques
182,387
An electrocardiogram telemonitor using a 3G cellular phone was developed. The cellular phone operates entirely wirelessly through a Bluetooth device. The communication performance of the cellular phone was investigated. To evaluate it, experiments were conducted in five different places with different population densities, both while stationary and while moving. In static conditions, the communication performance of the cellular phone was maintained near 64 kbps, the limit of W-CDMA. The results show that W-CDMA communication on a cellular phone has sufficient speed and reliability using TCP/IP in static conditions. While moving, however, the performance momentarily dropped below this level when using the TCP/IP protocol. For the moving case, we therefore suggest that it is better to use UDP/IP for W-CDMA communication.
['Toshiyuki Sakamoto', 'Hui Wang', 'Daming Wei']
Performance Evaluation of Twelve-Lead Electrocardiogram Telemonitor Utilizing 3G Cellular Phone
86,775
We present a supervised framework for expanding an opinion lexicon for tweets. The lexicon contains part-of-speech (POS) disambiguated entries with a three-dimensional probability distribution for positive, negative, and neutral polarities. To obtain this distribution using machine learning, we propose word-level attributes based on POS tags and information calculated from streams of emoticon-annotated tweets. Our experimental results show that our method outperforms the three-dimensional word-level polarity classification performance obtained by semantic orientation, a state-of-the-art measure for establishing word-level sentiment.
['Felipe Bravo-Marquez', 'Eibe Frank', 'Bernhard Pfahringer']
Positive, negative, or neutral: learning an expanded opinion lexicon from emoticon-annotated tweets
590,479
Anomaly detection based on communication behavior is one of the difficult problems of intrusion detection for industrial control systems. In this paper, a model of normal communication behavior is established using an improved one-class SVM, and a PSO-OCSVM algorithm based on particle swarm optimization is designed to optimize its parameters. This method builds an intrusion detection model that identifies abnormal Modbus TCP traffic according to the normal Modbus function code sequence. Simulation results show that the efficiency, reliability, and real-time performance of the proposed method meet the anomaly detection requirements of industrial control systems.
['Wenli Shang', 'Lin Li', 'Ming Wan', 'Peng Zeng']
Industrial communication intrusion detection algorithm based on improved one-class SVM
922,152
In this paper we address the problem of channel and frequency offset estimation for multiple-input multiple-output OFDM systems for mobile users. The proposed method stems from extended Kalman filtering. It is suitable for time and frequency selective channels. The algorithm performs channel and offset tracking in time-domain followed by equalization in frequency domain. Simulation results demonstrating high fidelity tracking capability are presented using realistic channel model in typical urban scenarios.
['Timo Roman', 'Mihai Enescu', 'Visa Koivunen']
Recursive estimation of time-varying channel and frequency offset in MIMO OFDM systems
399,915
In this paper, the main concepts related to the development of a single-stage three-phase high-power-factor rectifier, with high-frequency isolation and a regulated DC bus, are described. The operation of the structure, based on the DC-DC SEPIC converter operating in the discontinuous conduction mode, is presented. This operational mode provides the rectifier with a high power factor and sinusoidal input current, without the use of any current sensors or current control loop. A design example and simulation results are presented in order to validate the theoretical analysis.
['Gabriel Tibola', 'Ivo Barbi']
A single-stage three-phase high power factor rectifier with high-frequency isolation and regulated DC-bus based on the DCM SEPIC converter
288,537
['Carlos Vazquez', 'Ram Krishnan', 'Eugene John']
Time series forecasting of cloud data center workloads for dynamic resource provisioning
901,433
In this paper, we consider a multiuser multiple-input multiple-output (MIMO) decode-and-forward (DF) relay broadcasting channel (BC) with single source, multiple energy harvesting (EH) relays, and multiple destinations. All the nodes are equipped with multiple antennas. The EH and information decoding tasks at the relays and destinations are separated over the time, which is termed as the time switching scheme. As optimal solutions for the sum-rate maximization problems of BC channels and the MIMO interference channels are hard to obtain, the end-to-end sum rate maximization problem of a multiuser MIMO DF relay BC channel is even harder. In this paper, we propose to tackle a simplified problem, where we employ the block diagonalization (BD) procedure at the source, and we mitigate the interference between the relay-destination channels using an algorithm similar to the BD method. In order to show the relevance of our low complex proposed solution, we compare it with the minimum mean-square error (mmse) solution that was shown in the literature to be equivalent to the solution of the sum-rate maximization in the MIMO broadcasting interfering channels. We also investigate the time division multiple access (TDMA) solution, which separates all the information transmissions from the source to the relays and from the relays to the destinations over time. We provide the numerical results to show the relevance of our proposed solution, in comparison with the no co-channel interference case, the TDMA-based solution, and the mmse-based solution.
['Fatma Benkhelifa', 'Ahmed Sultan Salem', 'Mohamed-Slim Alouini']
Sum-Rate Enhancement in Multiuser MIMO Decode-and-Forward Relay Broadcasting Channel With Energy Harvesting Relays
890,552
During the development of modern semiconductor processes, which have increasing complexity and an extremely high number of degrees of freedom, a large number of distinct test structures are required to test and ensure yield and manufacturability. To increase the utilization of chip area, an addressable test chip methodology has been developed. In this paper, we present a novel large-scale addressable test chip development procedure. Based on components for automation, this procedure is fully integrated and able to reduce layout time to 10% while eliminating much of the potential for human error. A 32×32 array in a 45nm technology has been designed and manufactured with this procedure; the silicon test data further prove the reliability and effectiveness of the procedure.
['Bo Zhang', 'Weiwei Pan', 'Yongjun Zheng', 'Zheng Shi', 'Xiaolang Yan']
A fully automated large-scale addressable test chip design with high reliability
188,551
['Xin Ge', 'Hu Jin', 'Xiuhua Li', 'Victor C. M. Leung']
Opportunistic fair resource sharing with secrecy considerations in uplink wiretap channels
636,156
The main techniques currently employed in analyzing microarray expression data are clustering and classification. In this paper we propose to use association rules to mine the association relationships among different genes under the same experimental condition. These kinds of relations may also exist across many different experiments with various experimental conditions. A new approach, called LIS-growth (Large ItemSet growth) tree, is proposed for mining the microarray data. Our approach uses a new data structure, JG-tree (Jiang, Gruenwald), and a new data partition format for gene expression level data. Each data value can be represented by a sign bit, fraction bits, and exponent bits. The bits at the same position can be organized into a JG-tree. A JG-tree is a lossless compression tree. It can be built on the fly, providing a kind of real-time compression for bit strings. Based on these two new data structures it is possible to mine the association rules efficiently and quickly from the gene expression database. Our algorithm was tested using real-life datasets from the gene expression database at Stanford University.
['Xiang-Rong Jiang', 'Le Gruenwald']
Microarray gene expression data association rules mining based on JG-Tree
534,066
Smart meters are a key technology to transfer information between service providers and end-users. However, the massive amounts of data evolving from smart grid meters used for visualization and control purposes need to be sufficiently managed to increase the reliability and sustainability of the smart grid. Interestingly, the nature of smart grids can be considered as a big data challenge that deals with huge amounts of data and their analytics. Therefore, this unprecedented smart grid data require an effective platform that elevates the smart grid in the big data era. This paper presents a visual analytics framework that can promote the sustainability of the smart grid. An application of the framework has been applied on a smart grid that contains over 6,000 smart meters for dynamic demand response visualization. Further, another application on data that includes micro-generators including an electrical vehicle is presented. The findings suggest that the framework is feasible in performing visual analytics and further smart grid data analytics.
['Amr A. Munshi', 'Yasser Abdel-Rady I. Mohamed']
Cloud-based visual analytics for smart grids big data
958,143
['Steffen Müthing', 'Dirk Ribbrock', 'Dominik Göddeke']
Integrating Multi-threading and Accelerators into DUNE-ISTL
647,477
Visual sensor networks have emerged as an important class of sensor-based distributed intelligent systems, with unique performance, complexity, and quality of service challenges. Consisting of a large number of low-power camera nodes, visual sensor networks support a great number of novel vision-based applications. The camera nodes provide information from a monitored site, performing distributed and collaborative processing of their collected data. Using multiple cameras in the network provides different views of the scene, which enhances the reliability of the captured events. However, the large amount of image data produced by the cameras combined with the network's resource constraints require exploring new means for data processing, communication, and sensor management. Meeting these challenges of visual sensor networks requires interdisciplinary approaches, utilizing vision processing, communications and networking, and embedded processing. In this paper, we provide an overview of the current state-of-the-art in the field of visual sensor networks, by exploring several relevant research directions. Our goal is to provide a better understanding of current research problems in the different research fields of visual sensor networks, and to show how these different research fields should interact to solve the many challenges of visual sensor networks.
['Stanislava Soro', 'Wendi Rabiner Heinzelman']
A Survey of Visual Sensor Networks
533,456
Robotic telepresence is a promising tool for enhancing remote communications in a variety of applications. It enables a person to embody a robot and interact within a remote place in a direct and natural way. A particular scenario where robotic telepresence demonstrates its advantages is in elder telecare applications, in which a caregiver regularly connects to the robots deployed at the apartments of the patients to check their health. Normally, in these cases, the caregiver may encounter additional problems in guiding the robot because s/he is not familiar with the houses. In this paper we describe a procedure to remotely create and exploit different types of maps for facilitating the guidance of a telepresence robot. Our work has been implemented and successfully tested on the Giraff telepresence robot.
['Javier González Jiménez', 'Cipriano Galindo', 'Francisco Melendez-Fernandez', 'J.R. Ruiz-Sarmiento']
Building and Exploiting Maps in a Telepresence Robotic Application
662,424
GuiGen is a comprehensive set of tools for creating customized graphical user interfaces (GUIs). It draws from the concept of computing portals, which are here seen as interfaces to application-specific computing services for user communities. While GuiGen was originally designed for the use in computational grids, it can be used in client/server environments as well.Compared to other GUI generators, GuiGen is more versatile and more portable. It can be employed in many different application domains and on different target platforms. With GuiGen, application experts (rather than computer scientists) are able to create their own individually tailored GUIs.
['Alexander Reinefeld', 'Hinnerk Stüben', 'Florian Schintke', 'George Din']
GuiGen: a toolset for creating customized interfaces for grid user communities
137,339
This paper describes a new fluid motor named "Crown Motor". It uses a new mechanism that consists of two crown gears with more than three cylinders around the rotation axis. The torque of this motor is high, with a high gear ratio of 77:1, accomplished by using only two crown gears, while rotation speed is slow enough (~16 rpm) to allow for mobile robotic applications. The Crown Motor has been developed for the new fire fighter robot "Genbu" under construction in our laboratory. It has many advanced properties to be used in many applications: 1) it can drive with a low-pressure fluid (0.2-0.7 MPa), like tap water (~0.2 MPa); 2) it is able to control its torque and rotation speed independently when the supplied pressure of fluid is constant; and 3) it is easy to apply the motor to many kinds of machines, such as those used under (near) water or in any kind of fluid. The electric and mechanical valve systems used are also described.
['Hitoshi Kimura', 'Shigeo Hirose', 'Koji Nakaya']
Development of the Crown Motor
340,562
In distributed data mining, adopting a flat node distribution model can affect scalability. To address the problem of modularity, flexibility and scalability, we propose a Hierarchically-distributed Peer-to-Peer (HP2PC) architecture and clustering algorithm. The architecture is based on a multi-layer overlay network of peer neighborhoods. Supernodes, which act as representatives of neighborhoods, are recursively grouped to form higher level neighborhoods. Within a certain level of the hierarchy, peers cooperate within their respective neighborhoods to perform P2P clustering. Using this model, we can partition the clustering problem in a modular way across neighborhoods, solve each part individually using a distributed K-means variant, then successively combine clusterings up the hierarchy where increasingly more global solutions are computed. In addition, for document clustering applications, we summarize the distributed document clusters using a distributed keyphrase extraction algorithm, thus providing interpretation of the clusters. Results show decent speedup, reaching 165 times faster than centralized clustering for a 250-node simulated network, with comparable clustering quality to the centralized approach. We also provide comparison to the P2P K-means algorithm and show that HP2PC accuracy is better for typical hierarchy heights. Results for distributed cluster summarization match those of their centralized counterparts with up to 88% accuracy.
['Khaled M. Hammouda', 'Mohamed S. Kamel']
Hierarchically Distributed Peer-to-Peer Document Clustering and Cluster Summarization
134,341
This work shows a novel method to suppress linear and nonlinear residual echo components after application of a linear echo canceler. The main idea is to separately treat linear and nonlinear residual echo components, as linear echo is reduced by AEC, while nonlinear echo passes the AEC unchanged. In particular, it is shown that a very simple model, such as a hard clipping function, is sufficient to approximate the nonlinear residual echo power; and that the clipping threshold can be estimated by comparing the broad-band predicted nonlinear residual echo power (produced with the current clipping threshold estimate) to the broad-band observed nonlinear residual echo power (obtained through linear AEC and subtraction of the linear residual echo power, as determined with linear coupling factors). Experimental evaluations show ERLE improvements by up to 14.9 dB compared to linear echo cancellation and suppression at a negligible decrease in speech quality during double talk.
['Ingo Schalk-Schupp', 'Friedrich Faubel', 'Markus Buck', 'Andreas Wendemuth']
Approximation of a nonlinear distortion function for combined linear and nonlinear residual echo suppression
911,993
We investigate the estimation of illuminance flow using Histograms of Oriented Gradient features (HOGs). In a regression setting, we found for both ridge regression and support vector machines, that the optimal solution shows close resemblance to the gradient based structure tensor (also known as the second moment matrix).#R##N##R##N#Theoretical results are presented showing in detail how the structure tensor and the HOGs are connected. This relation will benefit computer vision tasks such as affine invariant texture/object matching using HOGs.#R##N##R##N#Several properties of HOGs are presented, among others, how many bins are required for a directionality measure, and how to estimate HOGs through spatial averaging that requires no binning.
['Stefan M. Karlsson', 'Sylvia C. Pont', 'Jan J. Koenderink', 'Andrew Zisserman']
Illuminance Flow Estimation by Regression
507,987
Spam reviewers are becoming more professional. The common approach in spam reviewer detection is mainly based on the similarities among reviews or ratings on the same products. Applying this approach to professional spammer detection has some difficulties. First, some of the review systems have started to set limitations, e.g., duplicate submissions from the same id on one product are forbidden. Second, the professional spammers have also greatly improved their writing skills. They consciously try to use diverse expressions in reviews. In this paper, we present a novel model for detecting professional spam reviewers, which combines posting frequency and text sentiment strength by analyzing writing and behavior styles. Specifically, we first introduce an approach for counting posting frequency based on a sliding window. We then evaluate the sentiment strength by counting the sentimental words in the text. Finally, we present a linear combination model. Experimental results on a real dataset from Dianping.com demonstrate the effectiveness of the proposed method.
['Junlong Huang', 'Tieyun Qian', 'Guoliang He', 'Ming Zhong', 'Qingxi Peng']
Detecting Professional Spam Reviewers
471,615
In many pattern recognition tasks, given some input data and a model, a probabilistic likelihood score is often computed to measure how well the model describes the data. Extended Baum-Welch (EBW) transformations are most commonly used as a discriminative technique for estimating parameters of Gaussian mixtures, though recently they have been used to derive a gradient steepness measurement to evaluate the quality of the model to match the distribution of the data. In this paper, we explore applying the EBW gradient steepness metric in the context of Hidden Markov Models (HMMs) for recognition of broad phonetic classes and present a detailed analysis and results on the use of this gradient metric on the TIMIT corpus. We find that our gradient metric is able to outperform the baseline likelihood method, and offers improvements in noisy conditions.
['Tara N. Sainath', 'Dimitri Kanevsky', 'Bhuvana Ramabhadran']
Broad phonetic class recognition in a Hidden Markov model framework using extended Baum-Welch transformations
145,940