Dataset columns: abstract (string, 5 to 11.1k chars), authors (string, 9 to 1.96k chars), title (string, 4 to 367 chars), __index_level_0__ (int64, 0 to 1,000k).
Software development methods are software products, in the sense that they should be engineered by following a methodology to meet the behavioural and non-behavioural requirements of the intended users of the method. We argue that agile approaches are the most appropriate means for engineering new methods, and particularly for integrating formal methods. We show how agile principles and practices apply to engineering methods, and demonstrate their application by integrating parts of the Eiffel development method with CSP.
['Richard F. Paige', 'Phillip J. Brooke']
Agile formal method engineering
362,173
Pointwise mutual information (PMI) is a widely used word similarity measure, but it lacks a clear explanation of how it works. We explore how PMI differs from distributional similarity, and we introduce a novel metric, PMImax, that augments PMI with information about a word's number of senses. The coefficients of PMImax are determined empirically by maximizing a utility function based on the performance of automatic thesaurus generation. We show that it outperforms traditional PMI in the application of automatic thesaurus generation and in two word similarity benchmark tasks: human similarity ratings and TOEFL synonym questions. PMImax achieves a correlation coefficient comparable to the best knowledge-based approaches on the Miller-Charles similarity rating data set.
['Lushan Han', 'Tim Finin', 'Paul McNamee', 'Anupam Joshi', 'Yelena Yesha']
Improving Word Similarity by Augmenting PMI with Estimates of Word Polysemy
149,758
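For readers who want to experiment: PMI for a word pair is log p(x, y) / (p(x) p(y)), estimated from corpus counts. A minimal sketch over a toy corpus follows; the corpus, the window size of 1, and all values are illustrative, and the empirically fitted polysemy correction of PMImax is not reproduced here, only the base PMI it augments.

import math
from collections import Counter

# Toy corpus: unigram counts and co-occurrence counts within a window of 1.
corpus = "the cat sat on the mat the cat ate".split()
word_counts = Counter(corpus)
pair_counts = Counter(zip(corpus, corpus[1:]))
total_words = sum(word_counts.values())
total_pairs = sum(pair_counts.values())

def pmi(x, y):
    # PMI(x, y) = log p(x, y) / (p(x) p(y)); undefined if the pair never occurs.
    p_xy = pair_counts[(x, y)] / total_pairs
    p_x = word_counts[x] / total_words
    p_y = word_counts[y] / total_words
    return math.log(p_xy / (p_x * p_y))

print(pmi("the", "cat"))  # positive: "the cat" co-occurs more often than chance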
The authors present an extension of the automatic test pattern generation system SOCRATES to a hierarchical test pattern generation system for combinational and scan-based circuits. The proposed system is based on predefined high-level primitives, e.g., multiplexers and adders. The exploitation of high-level primitives leads to significant improvements in implication, unique sensitization, and multiple backtrace, all of which play a key role in the efficiency of any automatic test pattern generation (ATG) system. In order to perform deterministic ATG and fault simulation for internal faults of high-level primitives, the high-level primitives are dynamically expanded to their gate-level realization. A number of experimental results, achieved on circuits with several tens of thousands of primitives, demonstrate the efficiency of the proposed approach in terms of CPU time, fault coverage, and memory requirements.
['Thomas M. Sarfert', 'Remo G. Markgraf', 'Michael H. Schulz', 'Erwin Trischler']
A hierarchical test pattern generation system based on high-level primitives
268,919
Competition in the 3PL market continues to intensify as providers compete to win and retain clients. 3PL providers are required to reduce costs while offering tailored, innovative logistical solutions in order to remain competitive. They can reduce costs through the consolidation of assets and the introduction of cross-docking activities, and innovative logistical services can be tailored to each client via the introduction of real-time data updates. This paper highlights that RFID-enabled returnable transport equipment (RTE) can assist in improving both of these areas through increased network visibility. A framework is presented in which the 3PL provider focuses on asset reduction, asset utilisation, real-time data employment and RTE cycle time reduction in order to enhance competitiveness.
['Rebecca Aggarwal', 'Ming K. Lim']
Increasing competitiveness of third party logistics with RFID
139,019
Many conventional computer vision object tracking methods are sensitive to partial occlusion and background clutter, because partial occlusion or small amounts of background information within the bounding box tend to cause drift. To this end, in this paper, we propose a robust tracker based on key patch sparse representation (KPSR) to reduce the disturbance of partial occlusion or unavoidable background information. Specifically, KPSR first uses patch sparse representations to obtain a score for each patch. Second, KPSR proposes a key patch selection criterion that evaluates the patches within the bounding box and selects key patches according to their location and occlusion status. Third, KPSR designs a corresponding contribution factor for the sampled patches to emphasize the contribution of the selected key patches. Comparing KPSR with eight other contemporary tracking methods on 13 benchmark video data sets, the experimental results show that the KPSR tracker outperforms classical and state-of-the-art tracking methods in the presence of partial occlusion, background clutter, and illumination change.
['Zhenyu He', 'Shuangyan Yi', 'Yiu-ming Cheung', 'Xinge You', 'Yuan Yan Tang']
Robust Object Tracking via Key Patch Sparse Representation
689,934
In this paper, we present an optical diagnostic assay consisting of a mixture of environment-sensitive fluorescent dyes combined with multivariate data analysis for quantitative and qualitative examination of biological and clinical samples. The assay is based on analysis of the spectra of the selected fluorescent dyes, with an operational principle similar to electronic nose and electronic tongue systems. This approach has been successfully applied to the monitoring of growing cell cultures and to the identification of gastrointestinal diseases in humans.
['Ewa Moczko', 'Michael Cauchi', 'Claire Turner', 'Igor V. Meglinski', 'Sergey A. Piletsky']
Optical Assay for Biotechnology and Clinical Diagnosis
64,671
This paper presents two meta-heuristic algorithms to solve the quadratic assignment problem. The iterated greedy algorithm has two main components, destruction and construction procedures. The algorithm starts from an initial solution and then iterates through a main loop: first, a partial candidate solution is obtained by removing a number of solution components from a complete candidate solution; then a complete solution is reconstructed by reinserting the removed components into the destructed solution. These simple steps are iterated until some predetermined termination criterion is met. We also present our previous discrete differential evolution algorithm, modified for the quadratic assignment problem. The quadratic assignment problem is a classical NP-hard problem and its real-life applications are still considered challenging. The proposed algorithms were evaluated on quadratic assignment problem instances arising from real-life problems as well as on a number of benchmark instances from QAPLIB. The computational results show that the proposed algorithms are superior to the migrating birds optimization algorithm, which appeared very recently in the literature. Ultimately, 7 out of 8 printed circuit board (PCB) instances are further improved.
['M. Fatih Tasgetiren', 'Quan-Ke Pan', 'P.N. Suganthan', 'Ikbal Ece Dizbay']
Metaheuristic algorithms for the quadratic assignment problem
449,986
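A minimal sketch of the destruction/construction loop described above, for a QAP instance given by a flow matrix F and a distance matrix D. The destruction size d, iteration budget, and the accept-if-not-worse rule are illustrative assumptions, not the authors' settings.

import random

def qap_cost(assign, F, D):
    # assign[f] = location of facility f; objective sum F[f][g] * D[assign[f]][assign[g]].
    return sum(F[f][g] * D[assign[f]][assign[g]] for f in assign for g in assign)

def incremental_cost(assign, f, loc, F, D):
    # Cost added by placing facility f at location loc, given the partial assignment.
    c = F[f][f] * D[loc][loc]
    for g, l in assign.items():
        c += F[f][g] * D[loc][l] + F[g][f] * D[l][loc]
    return c

def iterated_greedy(F, D, d=3, iters=2000, seed=0):
    rng = random.Random(seed)
    n = len(F)
    cur = {f: f for f in range(n)}                       # initial solution
    best, best_cost = dict(cur), qap_cost(cur, F, D)
    for _ in range(iters):
        assign = dict(cur)
        for f in rng.sample(range(n), d):                # destruction step
            assign.pop(f, None)
        removed = [f for f in range(n) if f not in assign]
        free = [l for l in range(n) if l not in assign.values()]
        for f in removed:                                # greedy construction step
            loc = min(free, key=lambda l: incremental_cost(assign, f, l, F, D))
            free.remove(loc)
            assign[f] = loc
        cost = qap_cost(assign, F, D)
        if cost <= best_cost:
            best, best_cost = dict(assign), cost
        cur = assign                                     # simple acceptance criterion
    return best, best_cost

F = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]                    # toy 3-facility instance
D = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
print(iterated_greedy(F, D, d=2, iters=200))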
Granule description is a fundamental problem in granular computing. Although the spirit of granular computing has been widely adopted in scientific research, how to classify and describe granules in a concise and apt way is still an open, interesting and important problem. The main objective of this paper is to give a solution to this problem within the framework of granular computing. Firstly, using a stability index, we classify granules into three categories: atomic granules, basic granules and composite granules. Secondly, in order to improve the conciseness and aptness of granule descriptions, we impose additional conditions on minimal generators to define a new notion, called the most apt minimal generator. Then, based on the most apt minimal generator, we put forward methods for describing atomic granules and basic granules. Moreover, we further divide composite granules into three subcategories: ?-definable granules, (?, ?)-definable granules and (?, ?)-definable granules, and provide their respective descriptions as well. Finally, some discussion of indefinable granules is also given.
['Huilai Zhi', 'Jinhai Li']
Granule description based on formal concept analysis
716,821
This study used remotely-sensed phenology data derived from the Advanced Very High Resolution Radiometer (AVHRR) in a fully supervised decision-tree classification based on the new biome map of South Africa. The objectives were: (i) to investigate the long-term spatial patterns and inter-annual variability in satellite-derived vegetation phenology in relation to the recently revised biome map and (ii) to identify the phenological attributes that distinguish the different biomes. The long-term phenometrics gave ecologically meaningful results which reflect our current understanding of the spatial patterns of production and seasonality of vegetation growth in southern Africa. Regression tree analysis based on remotely-sensed phenometrics performed as well as, or better than, previous climate-based predictors of biome distribution.
['Konrad J. Wessels', 'K Steenkamp', 'G Von Maltitz', 'Sally Archibald', 'R. J. Scholes', 'S. Miteff', 'Avsharn Bachoo']
Remotely sensed phenology for mapping biomes and vegetation functional types
107,907
The paper describes the design and implementation of a multifunctional photograph processor based on the Samsung 2440A. It discusses the functions and current state of photograph processors, the development environment and related techniques, and the hardware and software design of the system. A dedicated DSP chip processes photographs received from external equipment and converts the pictures to standard formats, building on WinCE's support for image encoding and decoding and EVC's support for the GDI interface. This makes the photograph processor simple to design and convenient to use; such image processors have become a necessary part of the handset.
['Kaihua Xu', 'Puyao Qiao', 'Yuhua Liu', 'Jian Cheng']
Implementation of Image Processor in Handset
540,955
In this paper, we propose a new multi-scale image segmentation relying on a hierarchy of partitions. First of all, we review morphological methods based on connections which produce hierarchical image segmentations, and introduce a new way of generating fine segmentations. From connection-based fine partitions, we focus on producing hierarchical segmentations. Multi-scale segmentations obtained by scale-space or region merging approaches have both benefits and drawbacks; we therefore propose to integrate a scale-space approach into the production of a hierarchy of partitions which merges similar regions. Starting from an initial over-segmented fine partition, a region adjacency graph is alternately simplified and decimated. Simplifying the models attached to graph nodes tends to make them similar, and this similarity is used to merge them. The algorithm can be used to simplify the image at low scales and to segment it at high scales.
['Olivier Lezoray', 'Cyril Meurie', 'P. Belhomme', 'Abderrahim Elmoataz']
Multi-scale image segmentation in a hierarchy of partitions
226,331
The problem of developing an adequate formal model for the semantics of programming languages has been under intensive study in recent years. Unlike the area of syntax specification, where adequate models have existed for some time, the area of semantic specification is still in the formative stages. Development of formal semantic models has proceeded along two main lines, lambda-calculus models (e.g., Landin, Strachey) and directed graph models (e.g., Narasimhan, Floyd). This paper describes a model for the semantics of programs based on hierarchies of directed graphs.
['Terrence W. Pratt']
A hierarchical graph model of the semantics of programs
284,935
The network autocorrelation model has been extensively used by researchers interested in modeling social influence effects in social networks. The most common inferential method for the model is classical maximum likelihood estimation. This approach, however, has known problems such as negative bias of the network autocorrelation parameter and poor coverage of confidence intervals. In this paper, we develop new Bayesian techniques for the network autocorrelation model that address the issues inherent to maximum likelihood estimation. A key ingredient of the Bayesian approach is the choice of the prior distribution. We derive two versions of Jeffreys prior, the Jeffreys rule prior and the Independence Jeffreys prior, which had not yet been developed for the network autocorrelation model. These priors can be used for Bayesian analyses of the model when prior information is completely unavailable. Moreover, we propose an informative as well as a weakly informative prior for the network autocorrelation parameter, both based on an extensive literature review of empirical applications of the network autocorrelation model across many fields. Finally, we provide new and efficient Markov chain Monte Carlo algorithms to sample from the resulting posterior distributions. Simulation results suggest that the considered Bayesian estimators outperform the maximum likelihood estimator with respect to bias and frequentist coverage of credible and confidence intervals.
['Dino Dittrich', 'Roger T.A.J. Leenders', 'Joris Mulder']
Bayesian estimation of the network autocorrelation model
903,729
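The model under discussion is y = rho * W y + X beta + eps, so data are generated as y = (I - rho W)^(-1) (X beta + eps). A short simulation sketch follows; the random network, rho, beta, and the row-normalization convention are illustrative assumptions, and the priors and MCMC samplers of the paper are not reproduced.

import numpy as np

rng = np.random.default_rng(0)
n, p, rho = 50, 2, 0.3

# Row-normalized adjacency matrix W of a random directed network.
A = (rng.random((n, n)) < 0.1).astype(float)
np.fill_diagonal(A, 0)
W = A / np.maximum(A.sum(axis=1, keepdims=True), 1)

X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.5])
eps = rng.normal(size=n)

# y = rho * W y + X beta + eps  =>  y = (I - rho W)^(-1) (X beta + eps)
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)
print(y[:5])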
We consider the problem of designing a fair scheduling algorithm for discrete-time constrained queuing networks. Each queue has dedicated exogenous packet arrivals. There are constraints on which queues can be served simultaneously. This model effectively describes important special instances like network switches, interference in wireless networks, bandwidth sharing for congestion control and traffic scheduling in road roundabouts. Fair scheduling is required because it provides isolation to different traffic flows; isolation makes the system more robust and enables providing quality of service. Existing work on fairness for constrained networks concentrates on flow based fairness. As a main result, we describe a notion of packet based fairness by establishing an analogy with the ranked election problem: packets are voters, schedules are candidates, and each packet ranks the schedules based on its priorities. We then obtain a scheduling algorithm that achieves the described notion of fairness by drawing upon the seminal work of Goodman and Markowitz (1952). This yields the familiar Maximum Weight (MW) style algorithm. As another important result, we prove that the algorithm obtained is throughput optimal. There is no reason a priori why this should be true, and the proof requires nontraditional methods.
['Srikanth Jagabathula', 'Devavrat Shah']
Fair Scheduling in Networks Through Packet Election
383,369
The chirp function is one of the most fundamental functions in nature. Many natural events can be roughly approximated by a group of chirp functions. In this paper, we present a practical adaptive chirplet based signal approximation algorithm. Unlike the other chirplet decompositions known so far, the elementary chirplet functions employed in this algorithm are adaptive. Therefore, the resulting approximation could better match the underlying signal and uses fewer coefficients. The effectiveness of the algorithm is demonstrated by numerical simulations.
['Shie Qian', 'Dapang Chen', 'Qinye Yin']
Adaptive chirplet based signal approximation
428,933
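To make the decomposition idea concrete, here is a Gaussian chirplet atom and one brute-force projection step over a small parameter grid. The grid values and test signal are invented for illustration; the paper's contribution is precisely to adapt the atoms rather than fix them on a grid.

import numpy as np

def chirplet(t, t0, sigma, omega, beta):
    # Gaussian chirplet atom: envelope centred at t0 with width sigma, carrier
    # omega, and chirp rate beta (instantaneous frequency omega + beta * (t - t0)).
    tau = t - t0
    env = np.exp(-tau ** 2 / (2 * sigma ** 2)) / (np.pi * sigma ** 2) ** 0.25
    return env * np.exp(1j * (omega * tau + 0.5 * beta * tau ** 2))

t = np.linspace(0, 1, 512)
signal = np.cos(2 * np.pi * (50 * t + 40 * t ** 2))   # quadratic-phase test chirp

# One matching-pursuit-style step: keep the atom with the largest projection.
best = max(((abs(np.vdot(chirplet(t, 0.5, s, w, b), signal)), (s, w, b))
            for s in (0.05, 0.1, 0.2)
            for w in 2 * np.pi * np.array([40.0, 60.0, 80.0, 100.0, 130.0])
            for b in 2 * np.pi * np.array([0.0, 40.0, 80.0, 120.0])),
           key=lambda x: x[0])
print("best (sigma, omega, beta):", best[1])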
Fault Localization by Imperialist Competitive Algorithm.
['Afshin Shahriari', 'Farhad Rad', 'Hamid Parvin']
Fault Localization by Imperialist Competitive Algorithm
669,427
In this paper, we propose a low complexity multiple-input multiple-output (MIMO) detection algorithm with adaptive interference mitigation in downlink multiuser MIMO (DL MU-MIMO) systems with quantization error of the channel state information (CSI) feedback. In DL MU-MIMO systems using the imperfect precoding matrix caused by quantization error of the CSI feedback, the station receives the desired signal as well as a residual interference signal, so a complex MIMO detection algorithm with interference mitigation is required to mitigate the residual interference. To reduce the computational complexity, we propose a MIMO detection algorithm with adaptive interference mitigation. The proposed algorithm adaptively mitigates the residual interference by using the maximum likelihood detection (MLD) error criterion (MEC). We derive a theoretical MEC by using the MLD error condition and a practical MEC by approximating the theoretical MEC. The proposed algorithm performs interference mitigation only when the practical MEC is satisfied. Simulation results show that the proposed algorithm reduces the computational complexity while achieving the same performance as the generalized sphere decoder, which always performs interference mitigation.
['Jangyoung Park', 'Minjoon Kim', 'Hyunsub Kim', 'Yunho Jung', 'Jaeseok Kim']
Low-complexity MIMO detection algorithm with adaptive interference mitigation in DL MU-MIMO systems with quantization error
822,865
Analysis of Individual Flows Performance for Delay Sensitive Applications.
['Ricardo Nabhen', 'Edgard Jamhour', 'Manoel Camillo Penna', 'Mauro Fonseca']
Analysis of Individual Flows Performance for Delay Sensitive Applications.
853,972
A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application to optimal crop and water allocation. The framework achieves this goal by representing the problem in the form of a decision tree, including dynamic decision variable option (DDVO) adjustment during the optimization process and using ant colony optimization (ACO) as the optimization engine. A case study from literature is considered to evaluate the utility of the framework. The results indicate that the proposed ACO-DDVO approach is able to find better solutions than those previously identified using linear programming. Furthermore, ACO-DDVO consistently outperforms an ACO algorithm using static decision variable options and penalty functions in terms of solution quality and computational efficiency. The considerable reduction in computational effort achieved by ACO-DDVO should be a major advantage in the optimization of real-world problems using complex crop simulation models. Highlights: a new computationally efficient optimization framework for crop and water allocation; use of domain knowledge and ant colony optimization to reduce the size of the search space; improved solution quality and computational efficiency obtained for the case study.
['D. C. H. Nguyen', 'Holger R. Maier', 'Graeme C. Dandy', 'J. C. Ascough']
Framework for computationally efficient optimal crop and water allocation using ant colony optimization
570,943
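A minimal static ACO sketch over a tiny crop/water decision structure, to show the pheromone-guided choice the framework builds on. The option lists, stand-in utility function, evaporation rate rho and deposit Q are all invented for illustration; the dynamic option adjustment (DDVO), which is the paper's contribution, is not reproduced here.

import random

options = {"crop": ["wheat", "maize", "none"], "water": [0, 50, 100]}
tau = {d: {o: 1.0 for o in opts} for d, opts in options.items()}  # pheromone
rho, Q = 0.1, 1.0                      # evaporation rate and deposit (assumed)

def utility(plan):
    # Stand-in for the crop simulation model: yield benefit minus water cost.
    yield_ = {"wheat": 2.0, "maize": 3.0, "none": 0.0}[plan["crop"]]
    return yield_ * min(1.0, plan["water"] / 80) - 0.01 * plan["water"]

def pick(d):
    opts = options[d]
    return random.choices(opts, [tau[d][o] for o in opts])[0]

random.seed(0)
best, best_u = None, float("-inf")
for _ in range(200):
    plan = {d: pick(d) for d in options}   # one ant builds a full plan
    u = utility(plan)
    for d in options:
        for o in options[d]:
            tau[d][o] *= (1 - rho)         # evaporate everywhere
        tau[d][plan[d]] += Q * max(u, 0.0) # deposit on the chosen options
    if u > best_u:
        best, best_u = plan, u
print(best, best_u)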
When designing computer systems, simulation tools are used to imitate a real or proposed system. Complex, dynamic systems can be simulated without the cost and time constraints involved with real systems. Experimentation with the simulation enables the system characteristics to be rapidly explored and system performance data to be generated, so encouraging modification to improve performance. This paper details the experiences encountered when designing and building a proprietary continuous discrete-event object-oriented simulation in order to further investigate the performance of a proposed continuous-media server I/O subsystem. Previous investigations of the proposed architecture have been based upon mathematical models in order to calculate comparative performance. However such static models do not take into account the dynamic properties of a system. A simulation tool was therefore built in order to assess quality of service under high system load conditions. The resulting simulation rapidly produced more realistic performance figures, in addition to providing a flexible simulator infrastructure for other unrelated projects.
['Michael Weeks', 'Chris Bailey', 'Reza Sotudeh']
Continuous discrete-event simulation of a continuous-media server I/O subsystem
142,809
Chalmers Publication Library (CPL). Research publications from Chalmers University of Technology.
['Fredrik Elgh', 'Mikael Cederfeldt']
Producibility Awareness as a Base for Design Automation Development Analysis and Synthesis Approach to Cost Estimation
596,838
The aim of this paper is to present the notion of a hierarchy of articulatory dimensions and its application in phonetic typology. To calculate the hierarchies, a computer application was designed, and preliminary counts were carried out on the phonetic repertories of Chinese, Hindi and Polish. The theoretical foundation of the calculus is based on earlier research on the axiomatic theory of phonetic grammar [5].
['Krzysztof Dyczkowski', 'Norbert Kordek', 'Paweł Nowakowski', 'Krzysztof Stroński']
Computing the hierarchy of the articulatory dimensions
542,938
Formal methods are a nice idea, but the size and complexity of real systems mean that they are impractical. We propose that a reasonable alternative to attempting to specify and verify a system in its entirety is to build and evaluate abstract models of the aspects of the system that are perceived as important. Using a model will not provide a proof of the system, but it can help to find shortcomings and errors at an early stage, and executing the model should give a measure of confidence in the final product. Many systems today are built from communicating components, so the developers' task increasingly consists of fitting these components together to form the required system. We show how a formal model can be sympathetic to this type of architecture using our tool, RolEnact, and explain how such a model may be related to a COM implementation.
['Peter Henderson', 'Robert John Walters']
System design validation using formal models
456,871
This paper introduces the ISpec approach to interface specification. ISpec supports the development of interface specifications at various levels of formality and detail in a way compatible with object-oriented modelling techniques (UML). The incremental nature of the levels and the underlying formal framework of ISpec allow informal interface specifications to be made formal in steps. The body of the paper consists of a discussion of the main characteristics of ISpec, which reflect the important decisions taken in the design of ISpec. The idea of component-based specifications and specification plug-ins for constructing heterogeneous specifications is discussed and a small example showing the various levels of specification supported by ISpec is presented.
['H. B. M. Jonkers']
ISpec: Towards Practical and Sound Interface Specifications
160,777
With the advances in mobile technologies, it is now possible to support the activities of learners and teachers on the move. We analyzed the functionalities that should be provided by a general mobile learning platform and identified a weakly studied problem, namely support for offline usage of learning material, called hoarding. Hoarding can borrow techniques from caching and pre-fetching schemes, but in most cases the goal of the latter two is to reduce latency, bandwidth consumption and/or server workload, whereas in hoarding the aim is to reduce the size of the hoarding set while keeping accuracy very high. We want to study the parameters that could help a hoarding algorithm cope with the peculiarities of the m-learning scenario, and our final goal is to provide an efficient strategy that takes these additional parameters, extracted automatically by the system, into account.
['Anna Trifonova', 'Marco Ronchetti']
Hoarding content in m-learning context
314,046
This paper presents HotSpot, a modeling methodology for developing compact thermal models based on the popular stacked-layer packaging scheme in modern very large-scale integration systems. In addition to modeling silicon and packaging layers, HotSpot includes a high-level on-chip interconnect self-heating power and thermal model such that the thermal impacts on interconnects can also be considered during early design stages. The HotSpot compact thermal modeling approach is especially well suited for pre-register-transfer-level (RTL) and pre-synthesis thermal analysis and is able to provide detailed static and transient temperature information across the die and the package, as it is also computationally efficient.
['Wei Huang', 'Shougata Ghosh', 'Sivakumar Velusamy', 'Karthik Sankaranarayanan', 'Kevin Skadron', 'Mircea R. Stan']
HotSpot: a compact thermal modeling methodology for early-stage VLSI design
120,399
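At its core a compact thermal model is an RC network. A single-node toy version integrated with forward Euler is sketched below; the R, C, power trace and ambient values are illustrative, whereas HotSpot itself builds a multi-layer RC network with one or more nodes per block and layer.

# Lumped thermal RC node: C dT/dt = P(t) - (T - T_amb) / R, forward Euler.
R, C, T_amb = 2.0, 0.05, 45.0        # K/W, J/K, deg C (illustrative values)
dt, steps = 1e-3, 5000
T = T_amb
for k in range(steps):
    P = 20.0 if (k * dt) % 1.0 < 0.5 else 5.0   # square-wave power trace
    T += dt * (P - (T - T_amb) / R) / C
print("temperature after transient:", T)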
The logistic regression model has achieved success in spam filtering, but it is disadvantaged by adjusting equally the weights of features that appear in both spam and ham messages during training. This paper presents an improved logistic regression model that reduces the impact of features appearing in both spam and ham messages. Byte-level n-grams are employed to extract features from messages, and TONE (Train On or Near Error) is adopted, both of which have proved effective in state-of-the-art spam filtering systems. The official runs of the CEAS (Conference on Email and Anti-Spam) Spam-filter Challenge 2008 show that the proposed model is among the best methods: our system achieved competitive results in all tasks and won the active learning task on the live stream by 1-ROCA.
['Yong Han', 'Muyun Yang', 'Haoliang Qi', 'Xiaoning He', 'Sheng Li']
The Improved Logistic Regression Models for Spam Filtering
182,092
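A sketch of online logistic regression over hashed byte n-grams with a TONE-style update rule: train on every error, and also on correct predictions whose score falls near the decision boundary. The feature dimension, learning rate, and margin below are assumptions, not the system's actual configuration.

import math

DIM = 2 ** 18            # hashed feature space for byte 4-grams (assumed size)
w = [0.0] * DIM
LR, MARGIN = 0.1, 0.25   # learning rate and "near error" margin (assumed)

def features(msg: bytes):
    return [hash(msg[i:i + 4]) % DIM for i in range(len(msg) - 3)]

def score(msg: bytes):
    s = sum(w[j] for j in features(msg))
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, s))))

def train_tone(msg: bytes, label: int):   # label: 1 = spam, 0 = ham
    p = score(msg)
    # Train On or Near Error: update if misclassified or score is near 0.5.
    if (p > 0.5) != bool(label) or abs(p - 0.5) < MARGIN:
        g = LR * (label - p)
        for j in features(msg):
            w[j] += g

train_tone(b"cheap pills buy now", 1)
train_tone(b"meeting notes attached", 0)
print(score(b"buy cheap pills"))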
We extend the reactive Burgers equation presented in [A. R. Kasimov, L. M. Faria, and R. R. Rosales, Phys. Rev. Lett., 110 (2013), 104104], [L. M. Faria, A. R. Kasimov, and R. R. Rosales, SIAM J. Appl. Math., 74 (2014), pp. 547--570] to include multidimensional effects. Furthermore, we explain how the model can be rationally justified following the ideas of the asymptotic theory developed in [L. M. Faria, A. R. Kasimov, and R. R. Rosales, J. Fluid Mech., 784 (2015), pp. 163--198]. The proposed model is a forced version of the unsteady small disturbance transonic flow equations. We show that for physically reasonable choices of forcing functions, traveling wave solutions akin to detonation waves exist. It is demonstrated that multidimensional effects play an important role in the stability and dynamics of the traveling waves. Numerical simulations indicate that solutions of the model tend to form multidimensional patterns analogous to cells in gaseous detonations.
['Luiz M. Faria', 'Aslan R. Kasimov', 'Rodolfo R. Rosales']
Study of a Model Equation in Detonation Theory: Multidimensional Effects
599,957
In any visual object recognition system, the classification accuracy will likely determine the usefulness of the system as a whole. In many real-world applications, it is also important to be able to recognize a large number of diverse objects for the system to be robust enough to handle the sort of tasks that the human visual system handles on an average day. These objectives are often at odds with performance, as running too many detectors on any one scene will be prohibitively slow for use in any real-time scenario. However, visual information has temporal and spatial context that can be exploited to reduce the number of detectors that need to be triggered at any given instance. In this paper, we propose a dynamic approach to encoding such context, called the Visual Co-occurrence Network (ViCoNet), that establishes relationships between objects observed in a visual scene. We investigate the utility of ViCoNet when integrated into a vision pipeline targeted at retail shopping. When evaluated on a large and deep dataset, we achieve a 50% improvement in performance and a 7% improvement in accuracy in the best case, and a 45% improvement in performance and a 3% improvement in accuracy in the average case, over an established baseline. The memory overhead of ViCoNet is around 10 KB, highlighting its effectiveness on temporal big data.
['Siddharth Advani', 'Brigid Smith', 'Yasuki Tanabe', 'Kevin M. Irick', 'Matthew Cotter', 'Jack Sampson', 'Vijaykrishnan Narayanan']
Visual co-occurrence network: using context for large-scale object recognition in retail
576,118
Genetic interactions are highly informative for deciphering the underlying functional principles that govern how genes control cell processes. Recent developments in Synthetic Genetic Array (SGA) analysis enable the mapping of quantitative genetic interactions on a genome-wide scale. To facilitate access to this resource, which will ultimately represent a complete genetic interaction network for a eukaryotic cell, we developed DRYGIN (Data Repository of Yeast Genetic Interactions)—a web database system that aims at providing a central platform for yeast genetic network analysis and visualization. In addition to providing an interface for searching the SGA genetic interactions, DRYGIN also integrates other data sources, in order to associate the genetic interactions with pathway information, protein complexes, other binary genetic and physical interactions, and Gene Ontology functional annotation. DRYGIN version 1.0 currently holds more than 5.4 million measurements of genetic interacting pairs involving ~4500 genes, and is available at http://drygin.ccbr.utoronto.ca
['Judice L. Y. Koh', 'Huiming Ding', 'Michael Costanzo', 'Anastasia Baryshnikova', 'Kiana Toufighi', 'Gary D. Bader', 'Chad L. Myers', 'Brenda Andrews', 'Charles Boone']
DRYGIN: a database of quantitative genetic interaction networks in yeast
219,764
This paper presents recent advances in a compiler infrastructure to map algorithms described in a Java subset to FPGA-based platforms. We explain how delays and resources are estimated to guide the compiler through scheduling and temporal partitioning. The compiler supports complex analytical models to estimate resources and delays for each functional unit. The paper presents experimental results for a number of benchmarks. These results also raise a question about temporal partitioning: should we try to group as many computational structures as possible into the same configuration, or should we use several configurations?
['João M. P. Cardoso']
On estimations for compiling software to FPGA-based systems
186,463
In this paper we present a network interface for an on-chip network. Our network interface decouples computation from communication by offering a shared-memory abstraction, which is independent of the network implementation. We use a transaction-based protocol to achieve backward compatibility with existing bus protocols such as AXI, OCP and DTL. Our network interface has a modular architecture, which allows flexible instantiation. It provides both guaranteed and best-effort services via connections. These are configured via network interface ports using the network itself, instead of a separate control interconnect. An example instance of this network interface with 4 ports has an area of 0.143 mm² in a 0.13 µm technology, and runs at 500 MHz.
['Andrei Radulescu', 'John Dielissen', 'Kees Goossens', 'Edwin Rijpkema', 'Paul Wielage']
An efficient on-chip network interface offering guaranteed services, shared-memory abstraction, and flexible network configuration
293,905
With the development of technology, wireless sensor networks (WSNs) performing sensing and communication tasks will be widely deployed in the near future because they greatly extend our ability to monitor and control the physical environment and improve the accuracy of information gathering. Since the sensors are usually deployed in hostile environments and cannot get recharged frequently, the power in sensors is the scarcest resource. Recently many mobility control protocols have been put forward as an effective mechanism to minimize energy consumption. However, security issues in all these protocols have not been discussed so far. In this paper, we will address these issues and point out a new privacy issue (the sink location privacy). A privacy preserving scheme is proposed to protect the sink. Analysis of the scheme shows its effectiveness in sink protection. This scheme can be well integrated into most mobility control protocols to enhance their security.
['Qijun Gu', 'Xiao Chen']
Privacy Preserving Mobility Control Protocols in Wireless Sensor Networks
507,300
We present a new control flow based approach to dynamic testing of sequential software. A practicable number of test cases is generated by using the boundary interior path testing strategy (J.B. Goodenough and S.L. Gerhart, 1975) and by dividing the test units into test segments (program fragments composed of one statement or a sequence of statements). The size of the test segments can be adjusted by means of a parameter, i.e. the thoroughness of the test coverage can be adapted to the needs of the tester. The selection of test cases is performed by constructing path classes for each test segment. The coverage criteria constructed by means of our approach (test segment coverage criteria) are fulfilled if at least one path from each path class is covered. A validation of our approach is given by comparing the fault detection capabilities of test segment coverage criteria with the fault detection capabilities of branch testing, multiple condition testing, LCSAJ testing and all-uses testing, using n test cases for each item (e.g. branch) to be covered. The comparison demonstrates that, compared with the other testing criteria, greater fault detection probabilities can be achieved if a test segment coverage criterion is used.
['Fevzi Belli', 'Javier Dreyer']
Program segmentation for controlling test coverage
114,728
While there are many commercial systems to help people browse and compare products, these interfaces are typically product centric. To help users identify products that match their needs more efficiently, we instead focus on building a task centric interface and system. Based on answers to initial questions about the situations in which they expect to use the product, the interface identifies products that match their needs, and exposes high-level product features related to their tasks, as well as low-level information including customer reviews and product specifications. We developed semi-automatic methods to extract the high-level features used by the system from online product data. These methods identify and group product features, mine and summarize opinions about those features, and identify product uses. User studies verified our focus on high-level features for browsing products and low-level features and specifications for comparing products.
['Scott Carter', 'Francine Chen', 'Aditi S. Muralidharan', 'Jeremy Pickens']
DiG: a task-based approach to product search
369,874
Assuming that the network delays are normally distributed and the network nodes are subject to clock phase offset errors, the maximum likelihood estimator (MLE) and the Kalman filter (KF) have been recently proposed with the goal of maximizing the clock synchronization accuracy in wireless sensor networks (WSNs). However, because the network delays may assume any distribution and the performance of MLE and KF is quite sensitive to the distribution of network delays, designing clock synchronization algorithms that are robust to arbitrary network delay distributions appears as an important problem. Adopting a Bayesian framework, this paper proposes a novel clock synchronization algorithm, called the Iterative Gaussian mixture Kalman particle filter (IGMKPF), which combines the Gaussian mixture Kalman particle filter (GMKPF) with an iterative noise density estimation procedure to achieve robust performance in the presence of unknown network delay distributions. The Kullback-Leibler divergence is used as a measure to assess the departure of the estimated observation noise density from its true expression. The posterior Cramer-Rao bound (PCRB) and the mean-square error (MSE) of IGMKPF are evaluated via computer simulations. It is shown that IGMKPF exhibits improved performance and robustness relative to MLE. The prior information plays an important role in IGMKPF. An MLE-based method for obtaining reliable prior information for clock phase offsets is presented and shown to ensure the convergence of IGMKPF.
['Jang-Sub Kim', 'J. T. Lee', 'Erchin Serpedin', 'Khalid A. Qaraqe']
Robust Clock Synchronization in Wireless Sensor Networks Through Noise Density Estimation
439,398
In this paper, we propose a novel neural approach for paraphrase generation. Conventional paraphrase generation methods either leverage hand-written rules and thesauri-based alignments, or use statistical machine learning principles. To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation. Our primary contribution is a stacked residual LSTM network, where we add residual connections between LSTM layers. This allows for efficient training of deep LSTMs. We evaluate our model and other state-of-the-art deep learning models on three different datasets: PPDB, WikiAnswers and MSCOCO. Evaluation results demonstrate that our model outperforms sequence to sequence, attention-based and bidirectional LSTM models on BLEU, METEOR, TER and an embedding-based sentence similarity metric.
['Aaditya Prakash', 'Sadid A. Hasan', 'Kathy Lee', 'Vivek V. Datla', 'Ashequl Qadir', 'Joey Liu', 'Oladimeji Farri']
Neural Paraphrase Generation with Stacked Residual LSTM Networks
907,785
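A minimal PyTorch sketch of the stacked residual LSTM idea: identity connections added between LSTM layers once the feature sizes match. The exact placement of the residual connections in the paper's model is not specified in the abstract, so this layout is an assumption.

import torch
import torch.nn as nn

class StackedResidualLSTM(nn.Module):
    # Stack of single-layer LSTMs; each layer above the first adds its input
    # back to its output (a residual connection), easing deep training.
    def __init__(self, input_size, hidden_size, num_layers):
        super().__init__()
        sizes = [input_size] + [hidden_size] * (num_layers - 1)
        self.layers = nn.ModuleList(
            nn.LSTM(s, hidden_size, batch_first=True) for s in sizes)

    def forward(self, x):
        for i, lstm in enumerate(self.layers):
            out, _ = lstm(x)
            x = out + x if i > 0 else out   # residual once shapes match
        return x

model = StackedResidualLSTM(input_size=128, hidden_size=256, num_layers=4)
y = model(torch.randn(8, 20, 128))   # (batch, time, features)
print(y.shape)                       # torch.Size([8, 20, 256])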
This paper proposes a wearable robot arm designed for practical use. To reduce the weight of the arm, a passive robotics concept is adopted: by replacing actuators with passive actuators, the proposed wearable robot arm reduces its weight. Since the coarse movement of the arm is controlled manually by the user's hand, the user is safe from unintentional movement of the arm. A prototype that satisfies the proposed concept was developed, and experiments were performed to verify its usability.
['Akimichi Kojima', 'Hirotake Yamazoe', 'Joo-Ho Lee']
Practical-Use Oriented Design for Wearable Robot Arm
855,970
In this article, we first define the class of J-conservative behaviours with observable storage functions, where J is a symmetric two-variable polynomial matrix. We then provide two main results. The first result states that if J(−ξ, ξ) is nonsingular, the input cardinality of a J-conservative behaviour with an observable storage function is always less than or equal to its output cardinality. The second result states that if J is constant and nonsingular, a J-conservative behaviour with an observable storage function and equal input and output cardinalities is always controllable. Physically the second result implies that a class of multiport lossless electrical networks is controllable.
['Shodhan Rao']
Controllability of conservative behaviours
56,145
The internationalization of economic and social activities is forcing people who use different languages and belong to different cultures to collaborate far more often, so an analysis of the status of multilingual communication among such people is required. We apply gaming simulation methods to analyze this topic. In this study, we design an environment that can stage multilingual communication games online by describing just simple game scripts. This advance is significant because people with domain knowledge of the applied problem are not always computer experts.
['Yuu Nakajima', 'Reiko Hishiyama', 'Takao Nakaguchi']
Multiagent Gaming System for Multilingual Communication
663,320
With the widespread use of email, we now have access to unprecedented amounts of text that we ourselves have written. In this paper, we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in many types of mail. We create a large word-emotion association lexicon by crowdsourcing, and use it to compare emotions in love letters, hate mail, and suicide notes. We show that there are marked differences across genders in how they use emotion words in workplace email. For example, women use many words from the joy-sadness axis, whereas men prefer terms from the fear-trust axis. Finally, we show visualizations that can help people track emotions in their emails.
['Saif M. Mohammad', 'Tony (Wenda) Yang']
Tracking Sentiment in Mail: How Genders Differ on Emotional Axes
394,343
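The core mechanism, counting a text's words against a word-emotion association lexicon, fits in a few lines. The toy lexicon below stands in for the large crowdsourced one; its entries are invented for illustration.

from collections import Counter

# Tiny stand-in for a word-emotion association lexicon; a word may be
# associated with several emotions.
lexicon = {"love": {"joy", "trust"}, "hate": {"anger", "disgust"},
           "afraid": {"fear"}, "miss": {"sadness"}, "hope": {"joy"}}

def emotion_profile(text):
    counts = Counter()
    for w in text.lower().split():
        for e in lexicon.get(w, ()):
            counts[e] += 1
    return counts

print(emotion_profile("I love you and hope we meet again, I miss you"))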
The antimicrobial agent pentamidine inhibits the self-splicing of the group I intron Ca.LSU from the transcripts of the 26S rRNA gene of Candida albicans, but the mechanism of pentamidine inhibition is not clear. We show that preincubation of the ribozyme with pentamidine enhances the inhibitory effect of the drug and alters the folding of the ribozyme in a pattern varying with drug concentration. Pentamidine at 25 µM prevents formation of the catalytically active F band conformation of the precursor RNA and alters the ribonuclease T1 cleavage pattern of Ca.LSU RNA. The effects on cleavage suggest that pentamidine mainly binds to specific sites in or near asymmetric loops of helices P2 and P2.1 on the ribozyme, as well as to the tetraloop of P9.2 and the loosely paired helix P9, resulting in an altered structure of helix P7, which contains the active site. Positively charged molecules antagonize pentamidine inhibition of catalysis and relieve the drug effect on ribozyme folding, suggesting that pentamidine binds to a magnesium binding site(s) of the ribozyme to exert its inhibitory effect.
['Yi Zhang', 'Zhijie Li', 'Daniel S. Pilch', 'Michael J. Leibowitz']
Pentamidine inhibits catalytic activity of group I intron Ca.LSU by altering RNA folding
254,449
In a recent work, Gandhi, Khoussainov, and Liu [7] introduced and studied a generalized model of finite automata able to work over arbitrary structures. As one relevant area of research for this model, the authors identify studying such automata over particular structures such as real and algebraically closed fields. In this paper we start investigations in this direction. We prove several structural results about sets accepted by such automata, and analyze the decidability as well as the complexity of several classical questions about automata in the new framework. Our results show quite a diverse picture when compared to the well-known results for finite automata over finite alphabets.
['Klaus Meer', 'Ameen Naif']
Generalized finite automata over real and complex numbers
548,379
E-business approaches have the potential to reduce the costs of operations, speed up business processes and affect industries as a whole. However, the construction industry has not paid much attention to e-business approaches. Part of the reason for this lack of interest is the substantial investment in information technology (IT) required to enable e-business processes and the lack of evidence that this type of investment pays off. This paper reviews methods to evaluate IT investments in the construction industry, summarising and discussing their limitations. A case study is presented, describing how simulation is used to compare the impact of a web-based bidding tool (WBBT) with a traditional approach to bidding. The results emerging from this study suggest further study of the evaluation of e-business approaches in the construction industry.
['Dolphy M. Abraham', 'César Asensio Fuentes', 'Dulcy M. Abraham']
Evaluating web-based bidding in construction: using simulation as an evaluation tool
213,742
Many real networked systems present a power-law degree distribution and display the scale-free feature. In this paper, we study the emergence of stable cooperators in evolutionary games on scale-free networks with various degree exponents. Our investigations show that a heterogeneous network with a small degree exponent promotes the emergence of stable cooperators, since cooperative hubs with high degree can efficiently protect themselves from the invasion of defectors at high temptation. Moreover, the coexistence range of cooperators and defectors widens as the network becomes more heterogeneous, in which case individuals are willing to hold onto pure cooperation or defection strategies.
['Zhi Hai Rong', 'Xiang Li']
The emergence of stable cooperators in heterogeneous networked systems
401,652
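A sketch of the kind of simulation behind such results: a weak prisoner's dilemma on a Barabási-Albert scale-free graph with Fermi-rule imitation. The payoff matrix, temptation b, and noise K are standard modelling choices assumed here, not values taken from the paper.

import math
import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(200, 3)            # scale-free network
b, K = 1.4, 0.1                                 # temptation to defect, noise
coop = {v: random.random() < 0.5 for v in G}    # True = cooperator

def payoff(v):
    # Weak prisoner's dilemma: R = 1, T = b, S = P = 0, summed over neighbours.
    return sum((1.0 if coop[v] else b) if coop[u] else 0.0 for u in G[v])

for _ in range(5000):
    v = random.choice(list(G))
    u = random.choice(list(G[v]))
    # Fermi imitation: copy the neighbour with probability rising in payoff gap.
    z = max(-50.0, min(50.0, (payoff(u) - payoff(v)) / K))
    if random.random() < 1.0 / (1.0 + math.exp(-z)):
        coop[v] = coop[u]

print("fraction of cooperators:", sum(coop.values()) / len(coop))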
Dynamical systems which can be decomposed into a drive-response pair sometimes display the phenomenon of generalized synchronization, wherein the response state y is a function of the drive state x, y = ϕ(x). The questions of detecting when this is true in experimental data, and of determining the properties of the function ϕ, require the development of statistics that are based on concepts and definitions from analysis, rather than the standard statistics literature. We show that these statistics can reveal when such a generalized synchronization situation exists and characterize the nature of the manifold (given by ϕ) of the response over the drive, including when that manifold is fractal.
['Louis M. Pecora', 'Thomas L. Carroll']
DETECTING CHAOTIC DRIVE–RESPONSE GEOMETRY IN GENERALIZED SYNCHRONIZATION
478,710
Youth suicide is a major public health problem; it is the third leading cause of death in the United States for ages 13 through 18. Many adolescents who face suicidal thoughts or make a suicide plan never seek professional care or help. In this work, we evaluate both verbal and nonverbal responses to a five-item ubiquitous questionnaire to identify and assess the suicidal risk of adolescents. We utilize a machine learning approach to distinguish suicidal from non-suicidal speech and to characterize adolescents who repeatedly attempted suicide in the past. We investigate both the verbal and the nonverbal behavioral information of the face-to-face clinician-patient interaction, using 60 audio-recorded dyadic clinician-patient interviews of 30 suicidal (13 repeaters and 17 non-repeaters) and 30 non-suicidal adolescents. The interaction between clinician and adolescents is statistically analyzed to reveal differences between suicidal and non-suicidal adolescents and to investigate the behaviors of suicidal repeaters in comparison to suicidal non-repeaters. Using a hierarchical classifier, we were able to show that the verbal responses to the ubiquitous-questions sections of the interviews are useful for discriminating suicidal from non-suicidal patients. However, to additionally classify suicidal repeaters and suicidal non-repeaters, more information, especially nonverbal information, is required.
['Verena Venek', 'Stefan Scherer', 'Louis-Philippe Morency', 'Albert “Skip” Rizzo', 'John Pestian']
Adolescent Suicidal Risk Assessment in Clinician-Patient Interaction
895,006
Functional programming languages are not generally associated with computationally intensive tasks such as computer vision. We show that a declarative programming language like Haskell is effective for describing complex visual tracking systems. We have taken an existing C++ library for computer vision, called XVision, and used it to build FVision (pronounced "fission"), a library of Haskell types and functions that provides a high-level interface to the lower-level XVision code. Using functional abstractions, users of FVision can build and test new visual tracking systems rapidly and reliably. The use of Haskell does not degrade system performance: computations are dominated by low-level calculations expressed in C++ while the Haskell "glue code" has a negligible impact on performance. FVision is built using functional reactive programming (FRP) to express interaction in a purely functional manner. The resulting system demonstrates the viability of mixed-language programming: visual tracking programs continue to spend most of their time executing low-level image-processing code, while Haskell's advanced features allow us to develop and test systems quickly and with confidence. In this paper, we demonstrate the use of Haskell and FRP to express many basic abstractions of visual tracking.
['John Peterson', 'Paul Hudak', 'Alastair Reid', 'Gregory D. Hager']
FVision: A Declarative Language for Visual Tracking
295,985
This paper presents new single-layer, i.e., planar-embeddable, clock tree constructions with exact zero skew under either the linear or the Elmore delay model. Our method, called Planar-DME, consists of two parts. The first algorithm, called Linear-Planar-DME, guarantees an optimal planar zero-skew clock tree (ZST) under the linear delay model. The second algorithm, called Elmore-Planar-DME, uses the Linear-Planar-DME connection topology in constructing a low-cost ZST according to the Elmore delay model. While a planar ZST under the linear delay model is easily converted to a planar ZST under the Elmore model by elongating tree edges in bottom-up order, our key idea is to avoid unneeded wire elongation by iterating the DME construction of ZST and the bottom-up modification of the resulting nonplanar routing. Costs of our planar ZST solutions are comparable to those of the best previous nonplanar ZST solutions, and substantially improve over previous planar clock routing methods.
['Andrew B. Kahng', 'Chung-Wen Albert Tsao']
Planar-DME: a single-layer zero-skew clock tree router
228,889
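The Elmore delay model used above is easy to state on an RC tree: the delay to a node is the sum, along the path from the source, of each edge resistance times the total capacitance downstream of that edge. A toy computation follows; the three-node tree and the R/C values are invented, and zero skew means the construction would balance these delays exactly across all sinks.

# Tree as parent pointers, with per-edge resistance and per-node capacitance.
parent = {"a": "root", "b": "a", "c": "a"}           # hypothetical toy tree
R = {"a": 2.0, "b": 1.0, "c": 3.0}                    # resistance of edge to parent
C = {"root": 0.0, "a": 1.0, "b": 2.0, "c": 1.0}       # node capacitance

def subtree_cap(v):
    # Total capacitance of the subtree rooted at v.
    return C[v] + sum(subtree_cap(u) for u in parent if parent[u] == v)

def elmore(v):
    # Elmore delay from the root to node v.
    if v == "root":
        return 0.0
    return elmore(parent[v]) + R[v] * subtree_cap(v)

for sink in ("b", "c"):
    print(sink, elmore(sink))   # unequal here, i.e. this toy tree has skew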
Organizations usually strive for innovation to achieve economic growth. Incremental innovation of, e.g., existing products is often the most attractive route because it is plannable to a certain extent and often yields short-term success. However, many markets are changing due to new competitive structures caused by the rise of digital services, which facilitate market entries by new companies. For an incumbent firm trying to cope with these competitors, exploitation of existing ideas and technologies (i.e., incremental innovation) is not enough. Although incumbents usually pay minor attention to it, they additionally need to explore how to establish disruptive innovation that complements or even changes their traditional business model before competitors do. In this contribution, we present a novel structure for fostering exploratory innovation within incumbent organizations by unleashing the innovative potential of intrapreneurs as peripheral innovators: the Intrapreneur Accelerator. We consider this novel structure a service system for supporting intrapreneurs in developing and implementing extraordinary ideas and thus fostering exploratory innovation for the organization. Using a design science approach, we present our methodology and our preliminary results, having already conducted two of four design iterations.
['Robin Knote', 'Ivo Blohm']
It's not about having Ideas - It's about making Ideas happen! Fostering Exploratory Innovation with the Intrapreneur Accelerator
719,708
This brief presents a novel 2.7-V frequency synthesizer for frequency hopping spread spectrum applications. To accomplish fast switching, the frequency synthesizer utilizes a memory access technique to retrieve the precalibrated and digitized tuning voltage values. The phase noise and the frequency accuracy of the frequency synthesizer are analyzed. The channel efficiency, the frequency switching performance, and the output spectral purity are investigated at 2.4 GHz. Measurement shows that the channel switching time is 5 µs. Thus, the proposed synthesizer is promising for frequency-hopping wireless communication. The developed architecture is ready to be used for application-specific integrated circuit design.
['Kim Fung Tsang', 'Chung Ming Yuen']
A Low-Voltage Fast Switching Frequency Synthesizer for FH-SS Applications
425,747
Identification of the correct number of clusters and the corresponding partitioning are two important considerations in clustering. In this paper, a newly developed point symmetry based distance is used to propose symmetry based versions of six cluster validity indices, namely the DB-index, Dunn-index, generalized Dunn-index, PS-index, I-index and XB-index. These indices provide measures of "symmetricity" of the different partitionings of a data set. A Kd-tree-based data structure is used to reduce the complexity of computing the symmetry distance. A newly developed genetic point symmetry based clustering technique, GAPS-clustering, is used as the underlying partitioning algorithm. The number of clusters is varied from 2 to √n, where n is the total number of data points present in the data set, and the values of all the validity indices are recorded. The optimum value of a validity index over these √n − 1 partitions corresponds to the appropriate partitioning and number of partitions as indicated by that validity index. Results on five artificially generated and four real-life data sets show that the symmetry distance based I-index performs the best compared to all the other five indices.
['Sriparna Saha', 'Sanghamitra Bandyopadhyay']
On some symmetry based validity indices
431,354
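The point symmetry distance underlying these indices reflects a point through the candidate cluster centre and checks whether real data points lie near the reflection. A brute-force sketch follows; the paper uses a Kd-tree to speed up the neighbour search, and knear = 2 follows common practice in this line of work rather than being taken from the abstract.

import numpy as np

def point_symmetry_distance(x, c, data, knear=2):
    # Reflect x through the centre c, then weight the Euclidean distance
    # d(x, c) by how close the reflected point lies to actual data points.
    reflected = 2 * c - x
    d = np.sort(np.linalg.norm(data - reflected, axis=1))[:knear]
    return d.mean() * np.linalg.norm(x - c)

rng = np.random.default_rng(0)
ring = rng.normal(size=(200, 2))
ring /= np.linalg.norm(ring, axis=1, keepdims=True)   # points on a symmetric ring
c = ring.mean(axis=0)                                 # approximately the centre
print(point_symmetry_distance(ring[0], c, ring))      # small: the ring is symmetric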
In this paper, we describe the following two approaches to summarization: (1) sentence extraction only, and (2) sentence extraction + bunsetsu elimination. For both approaches, we use the machine learning algorithm called Support Vector Machines. We participated in both Task-A (single-document summarization task) and Task-B (multi-document summarization task) of TSC-2.
['Tsutomu Hirao', 'Kazuhiro Takeuchi', 'Hideki Isozaki', 'Yutaka Sasaki', 'Eisaku Maeda']
NTT/NAIST's Text Summarization Systems for TSC-2.
748,283
In recent years, UML has been applied to the development of reactive safety-critical systems, in which the quality of the developed software is a key factor. In this paper we present an approach for the deductive verification of such systems using the PVS interactive theorem prover. Using a PVS specification of a UML kernel language semantics, we generate a formal representation of the UML model. This representation is then verified using tlpvs, our PVS-based implementation of linear temporal logic and some of its proof rules. We apply our method by verifying two examples, demonstrating the feasibility of our approach on models with unbounded event queues, object creation, and variables of unbounded domain. We define a notion of fairness for UML systems, allowing us to verify both safety and liveness properties.
['Tamarah Arons', 'Jozef Hooman', 'Hillel Kugler', 'Amir Pnueli', 'Mark B. van der Zwaag']
Deductive Verification of UML Models in TLPVS
117,288
The graphics capabilities and speed of current hardware systems allow the exploration of 3D and animation in user interfaces, while improving the degree of interaction as well. In order to fully utilize these capabilities, new software architectures must support multiple, asynchronous, interacting agents (the Multiple Agent Problem) and support smooth interactive animation (the Animation Problem). The Cognitive Coprocessor is a new user interface architecture designed to solve these two problems, while supporting highly interactive user interfaces that have 2D and 3D animations. This architecture includes 3D Rooms, a 3D analogy to the Rooms system with Rooms Buttons extended to Interactive Objects that deal with 3D, animation, and gestures. This research is being tested in the domain of Information Visualization, which uses 2D and 3D animated artifacts to represent the structure of information. A prototype, called the Information Visualizer, has been built.
['George G. Robertson', 'Stuart K. Card', 'Jock D. Mackinlay']
The cognitive coprocessor architecture for interactive user interfaces
150,063
A Fixed-frequency hysteretic controlled buck DC-DC converter with improved load regulation.
['Zhuochao Sun', 'Liter Siek', 'Ravinder Pal Singh', 'Minkyu Je']
A Fixed-frequency hysteretic controlled buck DC-DC converter with improved load regulation
492,692
Using a simple finite integral representation for the bivariate Rayleigh (1889) cumulative distribution function previously discovered by the authors, we present expressions for the outage probability and average error probability performances of a dual selective diversity system with correlated slow Rayleigh fading either in closed form (in particular for binary differential phase-shift keying) or in terms of a single integral with finite limits and an integrand composed of elementary (exponential and trigonometric) functions. Because of their simple form, these expressions readily allow numerical evaluation for cases of practical interest. The results are also extended to the case of slow Nakagami-m fading using an alternate representation of the generalized Marcum (1950) Q-function.
['Marvin K. Simon', 'Mohamed-Slim Alouini']
A unified performance analysis of digital communication with dual selective combining diversity over correlated Rayleigh and Nakagami-m fading channels
385,762
In this work, to increase the reliability of low power digital circuits in the presence of soft errors, the use of both III-V TFET- and III-V MOSFET-based gates is proposed. The hybridization exploits the facts that the transient currents generated by particle hits in TFET devices are smaller compared to those of the MOSFET-based devices while MOSFET-based gates are superior in terms of electrical masking of soft errors. In this approach, the circuit is basically implemented using InAs TFET devices to reduce the power and energy consumption while gates that can propagate generated soft errors are implemented using InAs MOSFET devices. The decision about replacing a subset of TFET-based gates by their corresponding MOSFET-based gates is made through a heuristic algorithm. Furthermore, by exploiting advantages of TFETs and MOSFETs, a hybrid TFET-MOSFET soft-error resilient and low power master-slave flip-flop is introduced. To assess the efficacy of the proposed approach, the proposed hybridization algorithm is applied to some sequential circuits of ISCAS’89 benchmark package. Simulation results show that the soft error rate of the TFET-MOSFET-based circuits due to particle hits are up to 90% smaller than that of the purely TFET-based circuits. Furthermore, energy and leakage power consumptions of the proposed hybrid circuits are up to 79% and 70%, respectively, smaller than those of the MOSFET-only designs.
['Maede Hemmat', 'Mehdi Kamal', 'Ali Afzali-Kusha', 'Massoud Pedram']
Hybrid TFET-MOSFET circuit: A solution to design soft-error resilient ultra-low power digital circuit
939,059
As the use of videos is becoming more popular in computer vision, the need for annotated video datasets increases. Such datasets are required either as training data or simply as ground truth for benchmark datasets. A particular challenge in video segmentation is due to disocclusions, which hamper frame-to-frame propagation, in conjunction with non-moving objects. We show that a combination of motion from point trajectories, as known from motion segmentation, along with minimal supervision can largely help solve this problem. Moreover, we integrate a new constraint that enforces consistency of the color distribution in successive frames. We quantify user interaction effort with respect to segmentation quality on challenging ego motion videos. We compare our approach to a diverse set of algorithms in terms of user effort and in terms of performance on common video segmentation benchmarks.
['Naveen Shankar Nagaraja', 'Frank R. Schmidt', 'Thomas Brox']
Video Segmentation with Just a Few Strokes
574,014
Maintenance is essential to prevent catastrophic failures in rotating machinery. A crack can cause a failure with costly repair processes, especially in a rotating shaft. In this study, the Wavelet Packet transform energy combined with Artificial Neural Networks with Radial Basis Function architecture (RBF-ANN) is applied to vibration signals to detect cracks in a rotating shaft. Data were obtained from a rig where the shaft rotates under its own weight, at steady state, at different crack conditions. Nine defect conditions were induced in the shaft (with depths from 4% to 50% of the shaft diameter). The parameters for the Wavelet Packet transform and the RBF-ANN are selected to optimize the success rates. Moreover, 'Probability of Detection' curves were calculated, showing probabilities of detection close to 100% of the cases tested from the smallest crack size, with a 1.77% false alarm rate.
['M.J. Gómez', 'Cristina Castejón', 'Juan Carlos García-Prada']
Automatic condition monitoring system for crack detection in rotating machinery
702,808
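A minimal sketch of the wavelet-packet energy features this abstract combines with an RBF network. PyWavelets provides the decomposition; the wavelet name, decomposition level, and test signal are illustrative assumptions, not the paper's settings.

```python
# Sketch: wavelet-packet energy features from a vibration signal.
# Wavelet ('db4') and level (3) are illustrative assumptions.
import numpy as np
import pywt

def wp_energy_features(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")      # terminal nodes
    energies = np.array([np.sum(n.data ** 2) for n in nodes])
    return energies / energies.sum()               # normalized energy per band

t = np.linspace(0, 1, 1024)
sig = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
print(wp_energy_features(sig))
```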
This paper summarizes the participation of UNED at the 2014 Retrieving Diverse Social Images Task [3]. We propose a novel approach based on Formal Concept Analysis (FCA) to detect the latent topics related to the images, followed by Hierarchical Agglomerative Clustering (HAC) to group the images according to these latent topics. Diversification is based on offering images from the different topics detected. In order to detect these latent topics, two kinds of data have been tested: only information related to the description of the images, and all the textual information related to the images. The results show that our proposal is suitable for search result diversification, achieving results comparable to the state of the art.
['Ángel Castellanos Gonzáles', 'Xaro Benavent', 'Ana García-Serrano', 'Esther de Ves', 'Juan Manuel Cigarrán Recuero']
UNED @ Retrieving Diverse Social Images Task
677,109
A Simulated Physiological/Cognitive "Double Agent".
['Sergei Nirenburg', 'Marjorie McShane', 'Stephen Beale']
A Simulated Physiological/Cognitive "Double Agent".
607,425
Although underestimated in practice, the small/unrepresentative sample problem is likely to affect a large segment of real-world remotely sensed (RS) image mapping applications where ground truth knowledge is typically expensive, tedious, or difficult to gather. Starting from this realistic assumption, subjective (weak) but ample evidence of the relative effectiveness of existing unsupervised and supervised data labeling systems is collected in two RS image classification problems. To provide a fair assessment of competing techniques, first the two selected image datasets feature different degrees of image fragmentation and range from poorly posed to ill-posed. Second, different initialization strategies are tested to pass on to the mapping system at hand the maximally informative representation of prior (ground truth) knowledge. For estimating and comparing the competing systems in terms of learning ability, generalization capability, and computational efficiency when little prior knowledge is available, the recently published data-driven map quality assessment (DAMA) strategy, which is capable of capturing genuine, but small, image details in multiple reference cluster maps, is adopted in combination with a traditional resubstitution method. Collected quantitative results yield conclusions about the potential utility of the alternative techniques that appear to be realistic and useful in practice, in line with theoretical expectations and the qualitative assessment of mapping results by expert photointerpreters.
['Andrea Baraldi', 'Lorenzo Bruzzone', 'Palma Blonda', 'Lorenzo Carlin']
Badly posed classification of remotely sensed images-an experimental comparison of existing data labeling systems
474,522
In this paper, we investigate the Normalized Semantic Web Distance (NSWD), a semantics-aware distance measure between two concepts in a knowledge graph. Our measure advances the Normalized Web Distance, a recently established distance between two textual terms, to be more semantically aware. In addition to the theoretic fundamentals of the NSWD, we investigate its properties and qualities with respect to computation and implementation. We investigate three variants of the NSWD that make use of all semantic properties of nodes in a knowledge graph. Our performance evaluation based on the Miller-Charles benchmark shows that the NSWD is able to correlate with human similarity assessments on both Freebase and DBpedia knowledge graphs with values up to 0.69. Moreover, we verified the semantic awareness of the NSWD on a set of 20 unambiguous concept-pairs. We conclude that the NSWD is a promising measure with (1) a reusable implementation across knowledge graphs, (2) sufficient correlation with human assessments, and (3) awareness of semantic differences between ambiguous concepts.
['Tom De Nies', 'Christian Beecks', 'Fréderic Godin', 'Wesley De Neve', 'G. Stępień', 'Dörthe Arndt', 'Laurens De Vocht', 'Ruben Verborgh', 'Thomas Seidl', 'Erik Mannens', 'Rik Van de Walle']
Normalized Semantic Web Distance
777,587
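For reference, the Normalized Web Distance that the NSWD builds on is computed from occurrence counts; a minimal sketch follows. For the NSWD, the counts would come from semantic-property statistics in a knowledge graph rather than web page hits; the toy numbers below are made up.

```python
# Sketch: the Normalized Web Distance underlying the NSWD.
# NWD(x, y) = (max(log f(x), log f(y)) - log f(x, y))
#             / (log N - min(log f(x), log f(y)))
from math import log

def nwd(fx, fy, fxy, n):
    num = max(log(fx), log(fy)) - log(fxy)
    den = log(n) - min(log(fx), log(fy))
    return num / den

# e.g. two concepts sharing many properties are close (distance near 0)
print(nwd(fx=1000, fy=800, fxy=700, n=10**6))
```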
We study the effects of communication in Bayesian games when the players are sequentially rational but some combinations of types have zero probability. Not all communication equilibria can be implemented as sequential equilibria. We define the set of strong sequential equilibria (SSCE) and characterize it. SSCE differs from the concept of sequential communication equilibrium (SCE) defined by Myerson (1986) in that SCE allows the possibility of trembles by the mediator. We show that these two concepts coincide when there are three or more players, but the set of SSCE may be strictly smaller than the set of SCE for two-player games.
['Dino Gerardi', 'Roger B. Myerson']
Sequential equilibria in Bayesian games with communication
366,925
This paper considers the problem of planning the logistics of distributing medication to points of dispensing (PODs), which will give medication to the public. Previous work on a two-stage routing and scheduling approach showed that it can generate solutions with reasonable minimum slack. This paper presents a delivery volume improvement algorithm that can increase the minimum slack of a given solution.
['Jeffrey W. Herrmann', 'Sara Lu', 'Kristen Schalliol']
Delivery volume improvement for planning medication distribution
104,341
We propose an Active Learning approach to training a segmentation classifier that exploits geometric priors to streamline the annotation process in both background-foreground and multi-class segmentation tasks, in 2D images and 3D image volumes. To this end, we use smoothness priors not only to select the voxels most in need of annotation but also to guarantee that they lie on a 2D planar patch, which makes them much easier to annotate than if they were randomly distributed in the volume. We evaluate our approach on Electron Microscopy and Magnetic Resonance image volumes, as well as on natural images of horses and faces. We demonstrate a marked performance increase over state-of-the-art approaches.
['Ksenia Konyushkova', 'Raphael Sznitman', 'Pascal Fua']
Geometry in Active Learning for Binary and Multi-class Image Segmentation
825,054
The 0/1 loss is an important cost function for perceptrons. Nevertheless it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose a family of random coordinate descent algorithms to directly minimize the 0/1 loss for perceptrons, and prove their convergence. Our algorithms are computationally efficient, and usually achieve the lowest 0/1 loss compared with other algorithms. Such advantages make them favorable for nonseparable real-world problems. Experiments show that our algorithms are especially useful for ensemble learning, and could achieve the lowest test error for many complex data sets when coupled with AdaBoost.
['Ling Li', 'Hsuan-Tien Lin']
Optimizing 0/1 Loss for Perceptrons by Random Coordinate Descent
381,126
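A minimal sketch of random coordinate descent on the perceptron 0/1 loss, under the assumption that each step perturbs one randomly chosen weight over a small grid of step sizes and keeps the change only if the training 0/1 loss improves; the paper's actual update rule and convergence machinery are more refined.

```python
# Sketch: random coordinate descent directly minimizing the 0/1 loss of a
# perceptron. Accept-if-better over a fixed step grid is an assumption.
import numpy as np

def zero_one_loss(w, X, y):                 # y in {-1, +1}, X has bias column
    return np.mean(np.sign(X @ w) != y)

def rcd_perceptron(X, y, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    best = zero_one_loss(w, X, y)
    for _ in range(iters):
        j = rng.integers(X.shape[1])        # random coordinate
        for step in (-1.0, -0.1, 0.1, 1.0): # candidate updates
            w_try = w.copy()
            w_try[j] += step
            loss = zero_one_loss(w_try, X, y)
            if loss < best:
                w, best = w_try, loss
    return w, best

X = np.hstack([np.random.randn(100, 2), np.ones((100, 1))])
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.2)
print(rcd_perceptron(X, y)[1])
```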
Efficient radio resource allocation is essential to provide quality of service (QoS) for wireless networks. In this paper, a cross-layer resource allocation scheme is presented with the objective of maximizing system throughput while providing guaranteed QoS for users. With the assumption of a finite queue for arrival packets, the proposed scheme dynamically allocates radio resources based on each user's channel characteristics and QoS metrics derived from a queuing model, which considers a packet arrival process modeled by a discrete Markov Modulated Poisson Process (dMMPP) and a multirate transmission scheme achieved through adaptive modulation (AM). The cross-layer resource allocation scheme operates in two steps. Specifically, the amount of bandwidth allocated to each user is first derived from a queuing analytical model, and then the algorithm finds the best subcarrier assignment for users. Simulation results show that the proposed scheme maximizes the system throughput while guaranteeing QoS for users.
['Hui Tian', 'H. Xu', 'Youjun Gao', 'Shengdong Wang', 'Ping Zhang']
QoS-Oriented Cross-Layer Resource Allocation with Finite Queue in OFDMA Systems
449,480
Time, Space, and Precision: Revisiting Classic Problems in Computational Geometry with Degree-Driven Analysis, pp. 280.
['Jack Snoeyink']
Time, Space, and Precision: Revisiting Classic Problems in Computational Geometry with Degree-Driven Analysis, pp. 280.
655,008
The Rational Unified Process lacks technical guidance for the development of object oriented applications. This tutorial fills this gap. We first use UML scenario diagrams to analyze use-cases. Next, we show a method to analyze scenarios and to derive UML class diagrams and UML behavior modeling for active classes and methods. We show how to choose and embed design patterns in a design and how to employ different architectural styles. From such a precise design, smart CASE tools generate fully functional implementations. We explain state-of-the-art code generation concepts for UML and assess current CASE tools for their code generation capabilities and for their support through all software development phases more generally.
['Albert Zündorf']
From use cases to code - rigorous software development with UML
488,227
This brief presents a general technique for the synthesis of parallel-connected two-port networks. This is based on the even- and odd-mode admittances of a standard Chebyshev network. The expressions for the even/odd-mode admittance are used to synthesize the low-pass network into branches of subnetworks connected in parallel between the source and the load. These together will have the same characteristics as a ladder network. It is also shown how a filter of a particular order will have the same parallel branch topology regardless of how many transmission zeros are used in the filter's transfer function. This design technique can be beneficial for various implementation techniques such as multilayer substrate and dual-mode resonators. Examples of fourth- and sixth-order degree networks with various numbers of transmission zeros are presented.
['Alaa I. Abunjaileh', 'Ian C. Hunter']
Direct Synthesis of Parallel-Connected Symmetrical Two-Port Filters
324,684
This paper presents the design of a smart food labeler mobile application that utilizes Near Field Communication (NFC) technology. The system aims to increase nutrition awareness and encourage food sellers to provide information about their food products in an efficient and interactive way.
['Yara Al-Tehini', 'Hend S. Al-Khalifa']
Design and Implementation of an NFC Food Labeler for Smart Healthcare
857,314
Bent functions are used as building blocks for cryptographically strong S-boxes and spread spectrum systems. The concept of semi-bent functions and quarter-bent functions is presented. Based on the new concept, an approach to construct bent functions is proposed. A simpler method to find all 30 homogeneous bent functions of degree 3 in 6 Boolean variables, which were previously discovered by a computer search, is given. It is proved that there do not exist homogeneous bent functions of degree m in 2m Boolean variables for m > 3, without invoking results from difference set theory.
['Xiaolin Wang', 'Jianqin Zhou', 'Yubing Zang']
A note on homogeneous bent functions
144,765
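As a concrete companion to this abstract: a Boolean function f on n variables is bent exactly when every Walsh-Hadamard coefficient has absolute value 2^(n/2). The sketch below tests this for a classic example; the sample function is illustrative, not one of the paper's constructions.

```python
# Sketch: test whether a Boolean function is bent via the Walsh-Hadamard
# transform: f is bent iff |W_f(a)| = 2^(n/2) for every a.
import itertools

def walsh_coeff(f, a, n):
    # W_f(a) = sum over x of (-1)^(f(x) XOR a.x)
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        dot = sum(ai * xi for ai, xi in zip(a, x)) % 2
        total += (-1) ** (f(x) ^ dot)
    return total

def is_bent(f, n):
    target = 2 ** (n // 2)
    return all(abs(walsh_coeff(f, a, n)) == target
               for a in itertools.product((0, 1), repeat=n))

# x1*x2 + x3*x4 is a classic bent function on 4 variables
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
print(is_bent(f, 4))  # True
```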
Since the duty cycle of S-MAC is fixed and relatively long, it causes long idle times and high data-packet latency under light loads. To solve these problems we present a new MAC protocol in which the sleep/wakeup schedules of the sensor nodes are determined adaptively, so that sensor nodes may sleep more often. The protocol also minimizes control-packet overhead. Simulation shows that the new MAC is significantly effective in reducing delay and saving power.
['Baoshan Zhang', 'Xiangdong Wang', 'Shujiang Li', 'Leishu Dong']
An Adaptive Energy-Efficient Medium Access Control Protocol for Wireless Sensor Networks
99,786
Over a decade of research on document expansion methods resulted in several independent avenues, including smoothing methods, translation models, and dimensionality reduction techniques, such as matrix decompositions and topic models. Although these research avenues have been individually explored in many previous studies, there is still a lack of understanding of how state-of-the-art methods for each of these directions compare with each other in terms of retrieval accuracy. This paper eliminates this gap by reporting the results of an empirical comparison of document expansion methods using translation models estimated based on word co-occurrence and cosine similarity between low-dimensional word embeddings, Latent Dirichlet Allocation (LDA) and Non-negative Matrix Factorization (NMF), on standard TREC collections. Experimental results indicate that LDA-based document expansion consistently outperforms both types of translation models and NMF according to all evaluation metrics, for both all queries and difficult queries, and is closely followed by the translation model using word embeddings.
['Saeid Balaneshin-kordan', 'Alexander Kotov']
A Study of Document Expansion using Translation Models and Dimensionality Reduction Methods
868,099
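A hedged sketch of LDA-based document expansion, the best-performing method in the abstract: a document is expanded with top words from its dominant topic. The corpus, topic count, and expansion size below are illustrative assumptions, not the paper's setup.

```python
# Sketch: expand a document with top words from its dominant LDA topic.
from gensim import corpora
from gensim.models import LdaModel

docs = [["traffic", "road", "congestion"],
        ["query", "document", "retrieval"],
        ["road", "network", "traffic"],
        ["retrieval", "ranking", "query"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=20,
               random_state=0)

def expand(doc_tokens, topn=3):
    bow = dictionary.doc2bow(doc_tokens)
    topics = lda.get_document_topics(bow)
    top_topic = max(topics, key=lambda t: t[1])[0]   # dominant topic id
    extra = [w for w, _ in lda.show_topic(top_topic, topn=topn)]
    return doc_tokens + extra

print(expand(["traffic", "road"]))
```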
Reduced-reference quality assessment based on the entropy of DNT coefficients of locally weighted gradients
['S. Alireza Golestaneh', 'Lina J. Karam']
Reduced-reference quality assessment based on the entropy of DNT coefficients of locally weighted gradients
661,081
Summary form only given. Many software-intensive systems have significant safety ramifications and need to have their associated safety-related requirements properly engineered. It has been observed by several consultants, researchers, and authors that inadequate requirements are a major cause of accidents involving software-intensive systems. Yet in practice, there is very little interaction between the requirements and safety disciplines and little collaboration between their respective communities. Most requirements engineers know little about safety engineering, and most safety engineers know little about requirements engineering. Also, safety engineering typically concentrates on architectures and designs rather than requirements because hazard analysis typically depends on the identification of hardware and software components, the failure of which can cause accidents. This leads to safety-related requirements that are often ambiguous, incomplete, and even missing. The tutorial begins with a single common realistic example of a safety critical system that will be used throughout to provide good examples of safety-related requirements. The tutorial then provides an introduction to requirements engineering for safety engineers and an introduction to safety engineering for requirements engineers. The tutorial then provides clear definitions and descriptions of the different kinds of safety-related requirements and finishes with a practical process for producing them
['Donald Firesmith']
Engineering Safety - and Security-Related Requirements for Software-Intensive Systems
547,587
Localization is a fundamental problem in autonomous mobile robot navigation. This paper introduces a new artificial landmark-based localization system for mobile robots navigating in indoor environments. Laser-activated RFID tag is designed and used as the artificial landmark in the proposed localization system. The robot localization is realized via the combination of the stereo vision and laser-activated RFID based on the principle of triangulation. The localization system functions like an indoor GPS. Preliminary research shows that the proposed system is promising to provide a robust and accurate indoor localization method for mobile robots.
['Yu Zhou', 'Wenfei Liu', 'Peisen Huang']
Laser-activated RFID-based Indoor Localization System for Mobile Robots
383,534
Here we present the ENteric Immunity Simulator (ENISI), a modeling system for the inflammatory and regulatory immune pathways triggered by microbe-immune cell interactions in the gut. With ENISI, immunologists and infectious disease experts can test and generate hypotheses for enteric disease pathology and propose interventions through experimental infection of an in silico gut. ENISI is an agent based simulator, in which individual cells move through the simulated tissues, and engage in context-dependent interactions with the other cells with which they are in contact. The scale of ENISI is unprecedented in this domain, with the ability to simulate $10^7$ cells for 250 simulated days on 576 cores in one and a half hours, with the potential to scale to even larger hardware and problem sizes. In this paper we describe the ENISI simulator for modeling mucosal immune responses to gastrointestinal pathogens. We then demonstrate the utility of ENISI by recreating an experimental infection of a mouse with Helicobacter pylori 26695. The results identify specific processes by which bacterial virulence factors do and do not contribute to pathogenesis associated with H. pylori strain 26695. These modeling results inform general intervention strategies by indicating immunomodulatory mechanisms such as those used in inflammatory bowel disease may be more appropriate therapeutically than directly targeting specific microbial populations through vaccination or by using antimicrobials.
['Keith R. Bisset', 'Md. Maksudul Alam', 'Josep Bassaganya-Riera', 'Adria Carbo', 'Stephen Eubank', 'Raquel Hontecillas', 'Stefan Hoops', 'Yongguo Mei', 'Katherine V. Wendelsdorf', 'Dawen Xie', 'Jae-Seung Yeom', 'Madhav V. Marathe']
High-Performance Interaction-Based Simulation of Gut Immunopathologies with ENteric Immunity Simulator (ENISI)
384,922
In this paper, we propose the phantom cell analysis for dynamic channel assignment. This is an approximate analysis that can handle realistic planar systems with three-cell channel reuse pattern. To find the blocking probability of a particular cell, two phantom cells are used to represent its six neighboring cells. Then by conditioning on the relative positions of the two phantom cells, the blocking probability of that particular cell can be found. We found that the phantom cell analysis is not only very accurate in predicting the blocking performance but also very computationally efficient. Besides, it is applicable to any traffic patterns and any cellular layouts.
['Kwan L. Yeung', 'Tak-Shing Peter Yum']
Phantom cell analysis of dynamic channel assignment in cellular mobile systems
126,113
The automatic coding of clinical documents is an important task for today's healthcare providers. Though it can be viewed as multi-label document classification, the coding problem has the interesting property that most code assignments can be supported by a single phrase found in the input document. We propose a Lexically-Triggered Hidden Markov Model (LT-HMM) that leverages these phrases to improve coding accuracy. The LT-HMM works in two stages: first, a lexical match is performed against a term dictionary to collect a set of candidate codes for a document. Next, a discriminative HMM selects the best subset of codes to assign to the document by tagging candidates as present or absent. By confirming codes proposed by a dictionary, the LT-HMM can share features across codes, enabling strong performance even on rare codes. In fact, we are able to recover codes that do not occur in the training set at all. Our approach achieves the best ever performance on the 2007 Medical NLP Challenge test set, with an F-measure of 89.84.
['Svetlana Kiritchenko', 'Colin Cherry']
Lexically-Triggered Hidden Markov Models for Clinical Document Coding
524,994
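A minimal sketch of the first, lexically-triggered stage described here: a dictionary of trigger phrases proposes candidate codes, which the discriminative HMM would then tag as present or absent. The dictionary entries and codes below are invented for illustration; stage 2 is not shown.

```python
# Sketch: stage 1 of the LT-HMM pipeline -- lexical matching against a
# term dictionary to collect candidate codes for a clinical document.
# Dictionary entries and codes are illustrative assumptions.
term_dictionary = {
    "chest pain": "786.50",
    "cough": "786.2",
    "pneumonia": "486",
}

def candidate_codes(document):
    text = document.lower()
    return {code: phrase
            for phrase, code in term_dictionary.items()
            if phrase in text}

doc = "Patient presents with cough and mild chest pain."
print(candidate_codes(doc))   # {'786.2': 'cough', '786.50': 'chest pain'}
```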
Solving the bi-objective Robust Vehicle Routing Problem with uncertain costs and demands
['Elyn L. Solano-Charris', 'Christian Prins', 'Andréa Cynthia Santos']
Solving the bi-objective Robust Vehicle Routing Problem with uncertain costs and demands
832,641
Programmable analysis of network traffic is critical for a wide range of higher level traffic engineering and anomaly detection applications. Such applications demand stateful and programmable network traffic measurements (NTM) at high throughputs. We exploit the features and requirements of NTM, and develop an application-specific FPGA based Partial Dynamic Reconfiguration (PDR) scheme that is tailored to the NTM problem. PDR has traditionally been done through swapping of statically compiled FPGA configuration data. Besides being latency intensive, static compilation cannot take into account exact requirements as they arise at run time in many applications like NTM. In this paper, we make novel use of fine-grained PDR, performing minute logic changes in real time, and demonstrate its effectiveness in a prototype solution for programmable and real-time NTM. We specifically make use of the flexibility available through the application and present a number of novel tools and algorithms that enabled developing the BURAQ system. Our results show 4x area and 1.3x latency improvements of BURAQ over a comparable recent statically compiled solution.
['Faisal Khan', 'Nicholas Hosein', 'Scott Vernon', 'Soheil Ghiasi']
BURAQ: A Dynamically Reconfigurable System for Stateful Measurement of Network Traffic
502,245
High resolution multi-spectral imagers are becoming increasingly important tools for studying and monitoring the earth. As much of the data from these multi-spectral imagers is used for quantitative analysis, the role of lossless compression is critical in the transmission, distribution, archiving, and management of the data. To evaluate the performance of various compression algorithms on multi-spectral images, we conducted statistical evaluation on datasets consisting of hundreds of granules from both geostationary and polar imagers. We broke these datasets up by different criteria such as hemisphere, season, and time-of-day in order to ensure the results are robust, reliable, and applicable for future imagers.
['Michael D. Grossberg', 'Irina Gladkova', 'Srikanth Gottipati', 'M. Rabinowitz', 'Paul K. Alabi', 'T. George', 'A. Pacheco']
A Comparative Study of Lossless Compression Algorithms on Multi-spectral Imager Data
375,031
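A minimal sketch in the spirit of the evaluation described above: compare standard lossless codecs on imager-like data by compression ratio. The synthetic 16-bit "granule" is a stand-in for real multi-spectral data, so the ratios are not meaningful beyond illustration.

```python
# Sketch: comparing lossless codecs on a synthetic imager granule.
import bz2, lzma, zlib
import numpy as np

rng = np.random.default_rng(0)
granule = rng.integers(0, 4096, size=(512, 512), dtype=np.uint16).tobytes()

codecs = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}
for name, compress in codecs.items():
    ratio = len(granule) / len(compress(granule))
    print(f"{name}: compression ratio {ratio:.2f}")
```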
Spatial Crowdsourcing (SC) is a transformative platform that engages individuals, groups and communities in the act of collecting, analyzing, and disseminating environmental, social and other spatio-temporal information. The objective of SC is to outsource a set of spatio-temporal tasks to a set of workers, i.e., individuals with mobile devices that perform the tasks by physically traveling to specified locations of interest. However, current solutions require the workers, who in many cases are simply volunteering for a cause, to disclose their locations to untrustworthy entities. In this paper, we introduce a framework for protecting location privacy of workers participating in SC tasks. We argue that existing location privacy techniques are not sufficient for SC, and we propose a mechanism based on differential privacy and geocasting that achieves effective SC services while offering privacy guarantees to workers. We investigate analytical models and task assignment strategies that balance multiple crucial aspects of SC functionality, such as task completion rate, worker travel distance and system overhead. Extensive experimental results on real-world datasets show that the proposed technique protects workers' location privacy without incurring significant performance metrics penalties.
['Hien To', 'Gabriel Ghinita', 'Cyrus Shahabi']
A framework for protecting worker location privacy in spatial crowdsourcing
570,273
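The framework above combines differential privacy with geocasting; as a flavor of the differential-privacy half only, the sketch below adds Laplace noise to per-cell worker counts, the standard DP count release. This is not the paper's full mechanism; epsilon and the grid are illustrative assumptions.

```python
# Sketch: differentially private release of worker counts over a spatial
# grid via the Laplace mechanism. Illustrates the DP half only; the
# geocast-based task assignment is not shown.
import numpy as np

def dp_worker_counts(true_counts, epsilon=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # each worker affects one cell, so the count query has sensitivity 1
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=true_counts.shape)
    return true_counts + noise

grid = np.array([[12, 0, 3],
                 [ 5, 9, 1],
                 [ 0, 2, 7]], dtype=float)
print(dp_worker_counts(grid))
```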
Motivation: Computational multiscale models help cancer biologists to study the spatiotemporal dynamics of complex biological systems and to reveal the underlying mechanism of emergent properties. Results: To facilitate the construction of such models, we have developed a next generation modelling platform for cancer systems biology, termed ‘ELECANS’ (electronic cancer system). It is equipped with a graphical user interface-based development environment for multiscale modelling along with a software development kit such that hierarchically complex biological systems can be conveniently modelled and simulated by using the graphical user interface/software development kit combination. Associated software accessories can also help users to perform post-processing of the simulation data for visualization and further analysis. In summary, ELECANS is a new modelling platform for cancer systems biology and provides a convenient and flexible modelling and simulation environment that is particularly useful for those without an intensive programming background. Availability and implementation: ELECANS, its associated software accessories, demo examples, documentation and issues database are freely available at http://sbie.kaist.ac.kr/sub_0204.php Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
['Safee Ullah Chaudhary', 'Sung-Young Shin', 'Daewon Lee', 'Je-Hoon Song', 'Kwang-Hyun Cho']
ELECANS—an integrated model development environment for multiscale cancer systems biology
511,415
Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme of a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators—such as RANLUX, RANECU or the Mersenne Twister—can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ∼5 × 10^27 and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. This program, in combination with clonEasy, allows one to run PENELOPE in parallel easily, without requiring specific libraries or significant alterations of the sequential code.
['Andreu Badal', 'Josep Sempau']
A package of Linux scripts for the parallelization of Monte Carlo simulations
484,509
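The key ingredient named above, disjoint random-number streams per CPU, can be produced for an MLCG by jumping ahead with modular exponentiation, since x_{n+k} = (a^k mod m) · x_n mod m. The sketch below does the job seedsMLCG performs; treat the exact constants (L'Ecuyer-style values, as used in RANECU's first component) as an assumption.

```python
# Sketch: splitting one MLCG sequence into disjoint per-CPU streams by
# jump-ahead. For an MLCG x_{n+1} = a*x_n mod m, jumping k steps is
# multiplication by a^k mod m (computed in O(log k)).
A, M = 40014, 2147483563   # assumed RANECU-style first-component constants

def jump_ahead(seed, k, a=A, m=M):
    return (pow(a, k, m) * seed) % m   # x_{n+k} from x_n

def stream_seeds(seed0, n_cpus, stream_len):
    # give CPU i a seed that is stream_len*i steps ahead of CPU 0
    return [jump_ahead(seed0, i * stream_len) for i in range(n_cpus)]

print(stream_seeds(seed0=12345, n_cpus=4, stream_len=10**12))
```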
By modeling the relationship between software complexity attributes and software quality attributes, software engineers can take actions early in the development cycle to control the cost of the maintenance phase. The effectiveness of these model-based actions depends heavily on the predictive quality of the model. An enhanced modeling methodology that shows significant improvements in the predictive quality of regression models developed to predict software changes during maintenance is applied here. The methodology reduces software complexity data to domain metrics by applying principal components analysis. It then isolates clusters of similar program modules by applying cluster analysis to these derived domain metrics. Finally, the methodology develops individual regression models for each cluster. These within-cluster models have better predictive quality than a general model fitted to all of the observations.
['Taghi M. Khoshgoftaar', 'John C. Munson', 'David L. Lanning']
A comparative study of predictive models for program changes during system testing and maintenance
219,998
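A minimal sketch of the methodology just described: reduce complexity metrics to domain metrics with PCA, cluster similar modules, then fit one regression model per cluster. Synthetic data stands in for real software metrics.

```python
# Sketch: PCA -> cluster analysis -> per-cluster regression models,
# mirroring the enhanced modeling methodology in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))            # complexity metrics per module
y = X[:, 0] * 2 + rng.normal(size=300)    # e.g. number of changes

domain = PCA(n_components=3).fit_transform(X)      # domain metrics
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(domain)

models = {c: LinearRegression().fit(domain[labels == c], y[labels == c])
          for c in np.unique(labels)}
for c, m in models.items():
    print(c, m.score(domain[labels == c], y[labels == c]))
```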
We show how to combine numerical schemes and calibration of systems of o.d.e. to provide efficient feedback strategies for the biological decontamination of water resources. For natural resources, we refrain from introducing any bacteria into the resource itself and instead treat the water aside, preserving a constant volume of the resource at all times. The feedback strategies are derived from the minimal time synthesis of the system of o.d.e.
['Sébastien Barbier', 'Alain Rapaport', 'Antoine Rousseau']
Modelling of Biological Decontamination of a Water Resource in Natural Environment and Related Feedback Strategies
625,522
The PreCoRe approach allows the automatic generation of application-specific microarchitectures from C, thus supporting complex speculative execution on reconfigurable computers. In this work, we present the PreCoRe capability of using data-value speculation to reduce the latency of memory reads, as well as the lightweight extension of static datapath controllers to the dynamic replay of misspeculated operations. The experimental evaluation considers the performance / area impact of the approach and also discusses the individual effects of combining different speculation mechanisms.
['Benjamin Thielmann', 'Jens Huthmann', 'Andreas Koch']
Evaluation of speculative execution techniques for high-level language to hardware compilation
349,374
We present a new visualization approach for metadata, combining different visualizations into a so-called SuperTable accompanied by a Scatterplot. The goal is to improve user experience during the information seeking process. Our new visualizations are based on our experiences developing a visual information retrieval system called INSYDER to supply small and medium size enterprises with business information from the Internet. Based on extensive user tests, the original visualizations have been redesigned in two different design variants. Instead of offering multiple visualizations to choose from, the SuperTable + Scatterplot combines them in a new way. Therefore, the user has the feeling that he is working with one single visualization in different states. Further, the SuperTable solves a problem which seemed to be inherent to visualizations in document retrieval: the change of modalities.
['Peter Klein', 'Frank Müller', 'Harald Reiterer', 'Maximilian Eibl']
Visual information retrieval with the SuperTable + Scatterplot
450,610
The purpose of this study was to critically analyse the role that positive emotions, flourishing and mindfulness have among the elderly. It was explored how these concepts can contribute to successful aging and higher levels of happiness in this population. To this end, we conducted a non-experimental correlational study with 329 participants, aged from 55 to 98 years. Questionnaires were used to collect data through the following instruments: a socio-demographic questionnaire, Mindful Attention Awareness Scale (MAAS), Satisfaction With Life Scale (SWLS) and PANAS (Positive and Negative Affect Schedule), FS (Flourishing scale), PST (Positivity test), MHI-5 (Mental Health Inventory -- 5).
['Cristina Cruz', 'Esperanza Navarro', 'Ricardo Pocinho', 'António Ferreira']
Happiness in advanced adulthood and the elderly: the role of positive emotions, flourishing and mindfulness as well-being factors for successful aging
953,109
The degradation of MOSFET device performance in time (aging), caused by hot-carrier injection (HCI) and negative/positive bias-temperature instability (N/PBTI), is increasingly more responsible for IC reliability failure in advanced process technology nodes. Device scaling, which has allowed increased performance of CMOS circuits, has also resulted in a magnification of such reliability issues. At the same time, device and circuit designers face increasingly stronger requirements to provide realistic estimates of product reliability as a function of circuit operation conditions. Accurate aging modeling and fast yet trustable reliability signoff have thus become mandatory in process development and circuit design. This paper presents an accurate and scalable compact device aging model that takes into account accurately the HCI and BTI mechanisms and is silicon proven for various processes down to 32/28nm. The model formulation on bias, geometry and temperature and, in particular, a unique methodology for modeling the AC partial-recovery effect of BTI, is analyzed in detail. The model has been embedded in an efficient MOSRA circuit simulation flow and has been used successfully for numerous tapeouts and silicon debugging for 45nm and below.
['Bogdan Tudor', 'Joddy Wang', 'Zhaoping Chen', 'Robin Tan', 'Weidong Liu', 'Frank Lee']
An accurate and scalable MOSFET aging model for circuit simulation
167,187
Test-cost-sensitive attribute reduction is an important component in data mining applications, and plays a key role in cost-sensitive learning. Some previous approaches in test-cost-sensitive attribute reduction focus mainly on homogeneous datasets. When heterogeneous datasets must be taken into account, the previous approaches convert nominal attributes to numerical attributes directly. In this paper, we introduce an adaptive neighborhood model for heterogeneous attributes and deal with the test-cost-sensitive attribute reduction problem. In the adaptive neighborhood model, objects with numerical attributes are dealt with by the classical covering neighborhood, and objects with nominal attributes are dealt with by the overlap metric neighborhood. Compared with the previous approaches, the proposed model can avoid objects with different values of a nominal attribute being classified into one neighborhood. The number of inconsistent objects of a neighborhood reflects the discriminating capability of an attribute subset. With the adaptive neighborhood model, an inconsistent-objects-based heuristic reduction algorithm is constructed. The proposed algorithm is compared with the λ-weighted heuristic reduction algorithm, in which nominal attributes are normalized. Experimental results demonstrate that the proposed algorithm is more effective and of greater practical significance than the λ-weighted heuristic reduction algorithm.
['Anjing Fan', 'Hong Zhao', 'William Zhu']
Test-cost-sensitive attribute reduction on heterogeneous data for adaptive neighborhood model
383,394
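As an illustration of the adaptive neighborhood idea above, a mixed distance can treat numerical attributes with a normalized difference and nominal attributes with the overlap metric (0 if equal, 1 otherwise). The function below is a hedged sketch, not the paper's exact definition; the Chebyshev-style aggregation ensures objects with different nominal values never share a neighborhood for delta < 1.

```python
# Sketch: a heterogeneous distance for an adaptive neighborhood --
# normalized absolute difference for numerical attributes, overlap metric
# for nominal ones. An illustration, not the paper's exact definition.
def hetero_distance(x, y, is_nominal, ranges):
    d = 0.0
    for i, (xi, yi) in enumerate(zip(x, y)):
        if is_nominal[i]:
            d = max(d, 0.0 if xi == yi else 1.0)   # overlap metric
        else:
            d = max(d, abs(xi - yi) / ranges[i])   # normalized numeric
    return d  # Chebyshev-style aggregation over attributes

def neighborhood(X, idx, delta, is_nominal, ranges):
    return [j for j, row in enumerate(X)
            if hetero_distance(X[idx], row, is_nominal, ranges) <= delta]

X = [[1.0, "red"], [1.2, "red"], [3.0, "blue"]]
print(neighborhood(X, 0, 0.3, is_nominal=[False, True], ranges=[2.0, 1.0]))
```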
Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework [15] and its fast versions [14, 27]. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite the fact that we are handling two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the "divide and conquer" solution even further by dividing each task into two sub-tasks. We call the proposed method "CRAFT" (Cascade Region-proposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals; in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter- and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of-the-art on object detection benchmarks like PASCAL VOC 07/12 and ILSVRC.
['Bin Yang', 'Junjie Yan', 'Zhen Lei', 'Stan Z. Li']
CRAFT Objects from Images
712,274
The upcoming IEEE 802.11e standard supports the applications with QoS requirements by using differentiated medium access mechanism for different traffic categories. In order to protect the high priority data flows and improve network performance in a heavy-loaded IEEE 802.11e network, a new measurement-based distributed call admission control method is introduced in this paper. The proposed method is based on the measurement of the existing traffic load over IEEE 802.11e network. Depending on the amount of the existing traffic load, the admission controller decides whether or not to allow a data unit to have the right to access the wireless medium. The simulation results show that the proposed mechanism works well.
['Daqing Gu', 'Jinyun Zhang']
A new measurement-based admission control method for IEEE802.11 wireless local area networks
125,447
Relationships in outsourcing: Contracts and partnerships.
['Guy Fitzgerald', 'Leslie P. Willcocks']
Relationships in outsourcing: Contracts and partnerships.
761,985
This paper presents a transport traffic estimation method which leverages road network correlation and sparse traffic sampling via the compressive sensing technique. Through the investigation of a traffic data set of more than 4400 taxis from Shanghai city, China, we observe nontrivial correlations among the traffic conditions of different road segments and derive a mathematical model to capture such relations. After mathematical manipulation, the models can be used to construct representation bases that sparsely represent the traffic conditions of all road segments in a road network. With this sparse representation, we propose a traffic estimation approach that applies the compressive sensing technique to achieve city-scale traffic estimation with only a small number of probe vehicles, largely reducing the system operating cost. To validate the traffic correlation model and estimation method, we conduct extensive trace-driven experiments with real-world traffic data. The results show that the model effectively reveals the hidden structure of traffic correlations. The proposed estimation method derives accurate traffic conditions with an average accuracy of 0.80, calculated as the ratio between the number of correct traffic state category estimations and the total number of estimations, based on only 50 probe vehicles' intervention, which significantly outperforms state-of-the-art methods in both cost and traffic estimation accuracy.
['Zhidan Liu', 'Zhenjiang Li', 'Mo Li', 'Wei Xing', 'Dongming Lu']
Mining Road Network Correlation for Traffic Estimation via Compressive Sensing
718,904
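A toy sketch of the compressive-sensing step this abstract relies on: recover a sparse coefficient vector for all road segments from a few probed segments via Orthogonal Matching Pursuit. A random basis and synthetic signal stand in for the correlation-derived representation bases.

```python
# Sketch: recover a sparsely representable signal (traffic over all road
# segments) from a few probed segments, via OMP. The basis and data are
# synthetic stand-ins for the paper's correlation-derived bases.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_segments, n_probed, sparsity = 200, 50, 5

basis = rng.normal(size=(n_segments, n_segments))  # representation basis
coef = np.zeros(n_segments)
coef[rng.choice(n_segments, sparsity, replace=False)] = rng.normal(size=sparsity)
traffic = basis @ coef                             # true signal, sparse in basis

probed = rng.choice(n_segments, n_probed, replace=False)  # probe vehicles
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
omp.fit(basis[probed], traffic[probed])
estimate = basis @ omp.coef_

print(np.linalg.norm(estimate - traffic) / np.linalg.norm(traffic))
```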
Higher Order Horizontal Sectorisation Gains For a Real 3GPP/HSPA+ Network
['Robert Joyce', 'L. Zhang']
Higher Order Horizontal Sectorisation Gains For a Real 3GPP/HSPA+ Network
627,869