Columns: abstract (string, 5 to 11.1k chars), authors (string, 9 to 1.96k chars), title (string, 4 to 367 chars), __index_level_0__ (int64, 0 to 1,000k)
Software maintenance is becoming more challenging with the increased complexity of the software and the frequently applied changes. Performing impact analysis before the actual implementation of a change is a crucial task during system maintenance. While many tools and techniques are available to measure the impact of a change at the code level, only a few research works measure the impact of a change at an earlier stage in the development process. Measuring the impact of a change at the model level speeds up the maintenance process, allowing early discovery of critical components of the system before applying the actual change at the code level. In this paper, we present a model-based impact analysis approach for state-based systems such as telecommunication or embedded systems. The proposed approach uses model dependencies to automatically measure the expected impact of a requested change instead of relying on the expertise of system maintainers, and it generates two impact sets representing the lower bound and the upper bound of the impact. Although it can be extended to other behavioral models, the presented approach mainly addresses extended finite-state machine (EFSM) models. An empirical study is conducted on six EFSM models to investigate the usefulness of the proposed approach. The results show that on average the size of the impact after a single modification (a change in a single EFSM transition) ranges between 14% and 38% of the total size of the model. For a modification involving multiple transitions, the average size of the impact ranges between 30% and 64% of the total size of the model. Additionally, we investigated the relationships (correlation) between the structure of the EFSM model and the size of the impact sets. Upon preliminary analysis of the correlation, the concepts of model density and data density were defined, and it was found that they could be the major factors influencing the sizes of impact sets for models. As a result, these factors can be used to determine the types of models for which the proposed approach is the most appropriate.
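Under the dependency-based reading of this abstract, a minimal sketch (hypothetical transition names and dependency edges, not the authors' tool or models) is: take the lower-bound impact set as the transitions directly dependent on the modified transition, and the upper bound as all transitions transitively reachable in the dependency graph.

from collections import deque

# Hypothetical EFSM transition dependency graph: an edge t1 -> t2 means
# transition t2 is data- or control-dependent on transition t1.
deps = {
    "t1": ["t2", "t3"],
    "t2": ["t4"],
    "t3": ["t4", "t5"],
    "t4": [],
    "t5": ["t6"],
    "t6": [],
}

def lower_bound_impact(changed, graph):
    """Transitions directly dependent on the changed transition."""
    return set(graph.get(changed, []))

def upper_bound_impact(changed, graph):
    """All transitions transitively dependent on the changed transition."""
    seen, queue = set(), deque([changed])
    while queue:
        t = queue.popleft()
        for succ in graph.get(t, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

print(lower_bound_impact("t1", deps))   # {'t2', 't3'}
print(upper_bound_impact("t1", deps))   # {'t2', 't3', 't4', 't5', 't6'}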
['Nada Almasri', 'Luay Tahat', 'Bogdan Korel']
Toward automatically quantifying the impact of a change in systems
726,589
The hose resource provisioning model promises to provide an easy-to-use characterization framework for virtual private network service offerings. Significant research effort has recently been spent on proposing new algorithms for provisioning cost-optimal networks specified according to this new model. However, a detailed comparison of the bandwidth requirement for networks designed based on the hose model and networks designed based on the traditional pipe model has not been performed. The first contribution of this paper is a detailed comparison of the bandwidth needs of the two models assuming a range of network sizes and network topologies. This numerical evaluation required efficient calculation methods for determining resource allocation based on the hose model parameters; therefore, a linear-programming-based formulation is also presented for this purpose. The second contribution is the calculation of a lower bound for the hose-based realization. This lower bound is very useful in evaluating the two models given that the problem of provisioning a minimal-cost network based on the hose model specification can only approximately be solved in polynomial time.
['Alpar Juttner', 'István Szabó', 'Áron Szentesi']
On bandwidth efficiency of the hose resource management model in virtual private networks
209,458
The increasing customization of products, which leads to greater variances and smaller lot sizes, requires highly flexible manufacturing systems. These systems are subject to dynamic influences and demand increasing effort for the generation of feasible production schedules and process control. This paper presents an approach for dealing with these challenges. First, production scheduling is executed by coupling an optimization heuristic with a simulation model. Second, real-time system state data, to be provided by forthcoming cyber-physical systems, is fed back, so that the simulation model is continuously updated and the optimization heuristic can either adjust an existing schedule or generate a new one. The potential of the approach was tested by means of a use case embracing a semiconductor manufacturing facility, in which the simulation results were employed to support the selection of better dispatching rules, improving flexible manufacturing systems performance regarding the average production cycle time.
['Mirko Kück', 'Jens Ehm', 'Torsten Hildebrandt', 'Michael Freitag', 'Enzo Morosini Frazzon']
Potential of data-driven simulation-based optimization for adaptive scheduling and control of dynamic manufacturing systems
978,642
This paper presents a new encryption scheme implemented at the physical layer of wireless networks employing orthogonal frequency-division multiplexing (OFDM). The new scheme obfuscates the subcarriers by randomly reserving several subcarriers for dummy data and resequences the training symbol by a new secure sequence. Subcarrier obfuscation renders the OFDM transmission more secure and random, whereas training symbol resequencing protects the entire physical layer packet but does not affect the normal functions of synchronization and channel estimation of legitimate users while preventing eavesdroppers from performing these functions. The security analysis shows that the system is robust to various attacks by analyzing the search space using an exhaustive key search. Our scheme is shown to perform better in terms of search space, key rate, and complexity in comparison with other OFDM physical layer encryption schemes. The scheme offers options for users to customize the security level and the key rate according to the hardware resource. Its low complexity nature also makes the scheme suitable for resource-limited devices. Details of practical design considerations are highlighted by applying the approach to an IEEE 802.11 OFDM system case study.
['Junqing Zhang', 'Alan Marshall', 'Roger F. Woods', 'Trung Q. Duong']
Design of an OFDM Physical Layer Encryption Scheme
805,433
Wireless Key Distribution is one of the most promising and fastest growing areas in modern applied cryptography. This area covers various techniques for secure secret key distribution between two legitimate users who share a common radio channel with unpredictable signal fading in a multipath environment. In essence, the pair of legitimate nodes uses their multipath radio channel as a source of common randomness to establish a shared encryption key. A number of studies presented in recent publications have been devoted to experimental implementations of Wireless Key Distribution using random variations in the received power of the fading signal. Despite a number of valuable benefits, there are far fewer experimental verifications of the phase method, all of them limited to key distribution within indoor environments only. Apparently, this is due to the technical difficulties of precise synchronization of the legitimate users' equipment to provide coherent carrier phase measurements in a microwave radio frequency range. In this regard, our experiments can be considered the first experimental verification of secure Wireless Key Distribution by observing random variations in the carrier phase of the multipath signal while moving a mobile user within a real outdoor environment. To perform this, we used wireless Internet transmission of concurrent service data to maintain the required level of synchronization between one stationary and one mobile legitimate node. Despite the modest key generation rates we achieved in practice, our results show the possibility of secure wireless key distribution between the base station and a mobile subscriber in a cellular communications scenario.
['Alexey D. Smolyakov', 'Amir I. Sulimov', 'Arkadiy V. Karpov', 'Aidar A. Galiev']
Experimental extraction of shared secret key from fluctuations of multipath channel at moving a mobile transceiver in an urban environment
682,256
This paper describes non-ideal properties of batteries and how these properties may impact power-performance trade-offs in wearable computing. The first part of the paper details the characteristics of an ideal battery and how these characteristics are used in sizing batteries and estimating discharge times. Typical non-ideal characteristics and the regions of operation where they occur are described. The paper then covers results from a first-principles, variable-load battery model, showing likely areas for exploiting battery behavior in mobile computing. The major result is that when battery behavior is non-ideal, lowering the average power or the energy per operation may not increase the amount of computation that can be completed in a battery life.
['Thomas L. Martin', 'Daniel P. Siewiorek']
Non-ideal battery properties and low power operation in wearable computing
910,509
This paper presents the viscoelastic behaviour of skin tissue under different thermal loadings and discusses the effect of temperature and corresponding dermal collagen denaturation on the mechanical viscoelastic properties of skin tissue. Differential scanning calorimetry (DSC) has been used to detect dermal collagen denaturation and to assess its thermal stability. The DSC results under various heating rates are used to derive the Arrhenius parameters (Ea, A) in the burn damage integration, which are then used to calculate the degree of denatured collagen in the skin. Temperature tests have been performed using a dynamic mechanical analyzer (DMA) to evaluate changes in the skin viscoelastic properties as a function of collagen damage, specifically, to assess the changes in the storage modulus (E′) and loss factor (tan δ). The results show remarkable changes in E′, which may be due to the release of water, but there is no significant effect on tan δ. These results suggest that at a constant frequency the denaturation of collagen molecules has little effect on the viscoelasticity.
['Feng Xu', 'Ting Wen', 'Ka Seffen', 'Tianjian Lu']
Characterization of thermomechanical behaviour of skin tissue II: viscoelastic behaviour
283,788
HamleDT 2.0: Thirty Dependency Treebanks Stanfordized
['Rudolf Rosa', 'Jan Mašek', 'David Mareček', 'Martin Popel', 'Daniel Zeman', 'Zdeněk Žabokrtský']
HamleDT 2.0: Thirty Dependency Treebanks Stanfordized
613,167
Haptic virtual fixtures have been shown to improve user performance and increase the safety of robot-assisted tasks, particularly for surgical applications. However, little research has studied virtual fixtures that provide moving force constraints based on motion of the environment, e.g., organ movement due to heartbeat or respiration. This work discusses design considerations of moving forbidden-region virtual fixtures and presents two methods of implementation: predicted-position and current-position virtual fixtures. Human subject experiments were performed to determine the effectiveness of moving virtual fixtures when interacting with an object in motion using a teleoperator. Results show that moving virtual fixtures can help improve user precision and decrease the amount of force applied.
['Tricia L. Gibo', 'Lawton N. Verner', 'David D. Yuh', 'Allison M. Okamura']
Design considerations and human-machine performance of moving virtual fixtures
266,043
Disc access algorithms
['L.D. Higgins', 'Francis J. Smith']
Disc access algorithms
459,411
Framing and Measuring Multi-dimensional Interpersonal Privacy Preferences of Social Networking Site Users
['Pamela J. Wisniewski', 'A. K. M. Najmul Islam', 'Heather Richter Lipford', 'David C. Wilson']
Framing and Measuring Multi-dimensional Interpersonal Privacy Preferences of Social Networking Site Users
633,298
This paper is motivated by the fact that the unknown disturbances (UDs) in discrete-time stochastic systems may be associated with the state, as with perturbations and model errors. In such cases, the UD possesses at least the first two moments (FTM), i.e., both a mean and a covariance. If the UD-FTM is used to correct the state estimate and its covariance simultaneously, it should yield better accuracy than classical methods, including augmentation, robust filtering and the interacting multiple model (IMM), which consider only the first-moment (mean) property of the UD to correct the state estimate and disregard the second moment (covariance) of the UD. In this paper, a two-stage expectation maximization (EM) algorithm is proposed to jointly identify the UD-FTM. The first EM stage performs joint state estimation and identification of the UD's pseudo measurement (UD-PM), while the second EM stage fits a Gaussian mixture (GM), using the identified UD-PM from the first stage to estimate the UD-FTM. The state estimation accuracy is then further improved by using the fitted UD-FTM with an open-loop correction. Finally, simulation results illustrate the effectiveness of the proposed method.
['Yonggang Wang', 'Xiaoxu Wang', 'Quan Pan', 'Yan Liang']
Covariance correction filter with unknown disturbance associated to system state
850,580
Corporate blogs are expected to facilitate communication, knowledge sharing, and collaborative innovation within organizations. However, empirical evidence has yet to be found illustrating whether and how such applications have affected job performance. Drawing upon social network theory, we postulate a conceptual model suggesting that employees’ online social relationships accumulated through work- and nonwork-related blog participation will engender different effects on job performance. The model is empirically tested using digital trace and archival data collected from two in-practice systems of a large telecommunications company. The results reveal that, in the work-related blog network, the structural and cognitive dimensions of social relationships positively affect job performance, whereas the relational dimension shows a negative influence. Meanwhile, participation in nonwork-related blog network benefits job performance for employees with a high level of performance in the previous time p...
['Benjiang Lu', 'Xunhua Guo', 'Nianlong Luo', 'Guoqing Chen']
Corporate Blogging and Job Performance: Effects of Work-related and Nonwork-related Participation
716,968
The regular structure and addressing scheme for the Virtex-II family of field programmable gate arrays (FPGAs) allows the relocation of partial bitstreams through direct bitstream manipulation. Our bitstream translation program relocates modules on an FPGA by changing the partial bitstream of the module. To take advantage of relocatable modules, three fault tolerant circuit designs are developed and tested. While operating through a fault, these designs provide support for efficient and transparent replacement of the faulty module with a relocated fault-free module. The architecture of the FPGA and static logic significantly constrain the placement of relocatable modules, especially when a microprocessor is placed on the FPGA.
['David P. Montminy', 'Rusty O. Baldwin', 'Paul D. Williams', 'Barry E. Mullins']
Using Relocatable Bitstreams for Fault Tolerance
17,758
This paper considers a mobile WSN that contains a big hole but there exists no redundant mobile sensor to heal the hole. To achieve the temporal full-coverage purpose or enhance the tracking quality, three distributed algorithms are proposed for moving the existing big coverage hole to a predefined location. Firstly, the sink chooses a promising direction for hole-movement. Then the basic, forward-only and any-direction movement mechanisms are proposed to move the hole along the promising direction in a manner of minimizing the total power consumption or balancing the energy consumption of the given WSN. Simulation results reveal that the proposed hole-movement mechanisms enhance the coverage of WSN and balance the energy consumption of mobile sensor nodes.
['Chih-Yung Chang', 'Hsu-Ruey Chang', 'Hsiao-Jung Liu', 'Sheng-Wen Chang']
On Providing Temporal Full-Coverage by Applying Energy-Efficient Hole-Movement Strategies for Mobile WSNs
137,352
Introduction of Multiple Video Recording and Browsing System into Weightlifting Training Camp.
['Katsuyoshi Shirai', 'Fumito Yoshikawa', 'Itaru Kitahara', 'Yuichi Ohta', 'M. Kikuta']
Introduction of Multiple Video Recording and Browsing System into Weightlifting Training Camp.
806,716
Dyadic Data Prediction (DDP) is an important problem in many research areas. This paper develops a novel fully Bayesian nonparametric framework which integrates two popular and complementary approaches, discrete mixed membership modeling and continuous latent factor modeling, into a unified Heterogeneous Matrix Factorization (HeMF) model, which can predict the unobserved dyadics accurately. The HeMF can determine the number of communities automatically and exploit the latent linear structure for each bicluster efficiently. We propose a Variational Bayesian method to estimate the parameters and missing data. We further develop a novel online learning approach for Variational inference and use it for the online learning of HeMF, which can efficiently cope with the important large-scale DDP problem. We evaluate the performance of our method on the EachMovie, MovieLens and Netflix Prize collaborative filtering datasets. The experiments show that our model outperforms state-of-the-art methods on all benchmarks. Compared with the Stochastic Gradient Method (SGD), our online learning approach achieves significant improvement in estimation accuracy and robustness.
['Guangyong Chen', 'Fengyuan Zhu', 'Pheng Ann Heng']
Online Prediction of Dyadic Data with Heterogeneous Matrix Factorization
597,171
In this paper the authors extend their previous work on Boundary Perturbation methods for scattering calculations from families of diffraction gratings to three dimensions and the full vector electromagnetic Maxwell equations. This extension is non-trivial in both its algorithmic implementation (not only are new terms added to the recursions, but also the full, coupled, vector Maxwell equations must be simulated) and in the size of the relevant computer simulations. Not only do we give details of the implementation of the method, but we also provide results of numerical simulations.
['David P. Nicholls', 'Joseph Orville']
A Boundary Perturbation Method for Vector Electromagnetic Scattering from Families of Doubly Periodic Gratings
21,013
We derive several pattern maximum likelihood (PML) results, among them showing that if a pattern has only one symbol appearing once, its PML support size is at most twice the number of distinct symbols, and that if the pattern is ternary with at most one symbol appearing once, its PML support size is three. We apply these results to extend the set of patterns whose PML distribution is known to all ternary patterns, and to all but one pattern of length up to seven.
['Jayadev Acharya', 'Alon Orlitsky', 'Shengjun Pan']
The maximum likelihood probability of unique-singleton, ternary, and length-7 patterns
179,469
Applications for computer supported cooperative work can gain from component models and frameworks. The framework for "questionnaires", which is described in the paper, offers a pattern to distribute artifacts to a group of receivers. We use mobile code to ensure the highest flexibility and provide hooks to be able to collaborate with local applications on the receiver side. The questionnaire framework is not only interesting for tele-exams, which are described in detail, but can also be used to pass artifacts around in a workflow system. Our approach supports the whole life cycle of questionnaires. Standard tools support the design of the questions and their assembly into a questionnaire using components. A questionnaire is distributed to all students in an exam and collected at the end of the exam. The teacher application comprises a component to automatically evaluate the answered questionnaires and to store them persistently.
['Jakob Hummes', 'Arnd Kohrs', 'Bernard Merialdo']
Questionnaires: a framework using mobile code for component-based tele-exams
419,035
This paper presents a Radix-100 divider based on the decimal non-restoring and selection-by-truncation method. Two decimal quotient digits can be selected in each iteration, which reduces the number of iteration cycles by half. Initialization is required to scale the divisor into a pre-calculated range, and is also used for generating some multiples of the scaled divisor. Implemented with an STM 90-nm standard-cell library, the proposed architecture takes 14 clock cycles, i.e., 373 FO4 delays, to reach the desired accuracy. The latency is much shorter than that of Radix-10 dividers.
['Zhuo Wang', 'Liu Han', 'Seok-Bum Ko']
Design and implementation of a Radix-100 division unit
187,357
BlueGene/L is currently the world's fastest supercomputer. It consists of a large number of low-power dual-processor compute nodes interconnected by high-speed torus and collective networks. Because compute nodes do not have shared memory, MPI is the natural programming model for this machine. The BlueGene/L MPI library is a port of MPICH2. In this paper we discuss the implementation of MPI collectives on BlueGene/L. The MPICH2 implementation of MPI collectives is based on point-to-point communication primitives. This turns out to be suboptimal for a number of reasons. Machine-optimized MPI collectives are necessary to harness the performance of BlueGene/L. We discuss these optimized MPI collectives, describing the algorithms and presenting performance results measured with targeted micro-benchmarks on real BlueGene/L hardware with up to 4096 compute nodes.
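A minimal collective micro-benchmark in the spirit of the targeted micro-benchmarks mentioned above, written here with mpi4py rather than the BlueGene/L-specific library; the buffer size and repetition count are arbitrary choices.

# Minimal MPI collective micro-benchmark (mpi4py); run e.g. with:
#   mpirun -n 4 python allreduce_bench.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n = 1 << 20                      # one million doubles per rank (arbitrary size)
send = np.full(n, rank, dtype=np.float64)
recv = np.empty_like(send)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(10):              # repeat to average out timing noise
    comm.Allreduce(send, recv, op=MPI.SUM)
comm.Barrier()
t1 = MPI.Wtime()

if rank == 0:
    print(f"mean Allreduce time: {(t1 - t0) / 10:.6f} s")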
['George Almasi', 'Philip Heidelberger', 'Charles Archer', 'Xavier Martorell', 'C. Christopher Erway', 'José E. Moreira', 'Burkhard D. Steinmacher-Burow', 'Yili Zheng']
Optimization of MPI collective communication on BlueGene/L systems
471,751
A comparison of the conversion efficiency from optical power to electrical power for three common material homojunction photovoltaic micro-cells was performed. The device widths were varied as a function of incident wavelength such that optimum power conversions were determined under illumination by monochromatic light. GaAs is the most effective material, as optimum devices can be fabricated as thin as 15 µm with conversion efficiencies as high as 59%. However, GaAs is extremely expensive and has a limited wavelength response. Although Ge has the lowest conversion efficiency of 36%, it is the only material simulated that is responsive under illumination at long wavelengths above 1.0 µm, and it may be particularly useful for specific applications as it is efficient at both 1310 nm and 1550 nm, where the attenuation in silica fibres is minimal. Si is a commercially viable material for use as a photovoltaic power converter (PPC), with conversion efficiencies as high as 43% at 980 nm. Lasers at this wavelength are extremely cheap to produce, and the cost of silicon PPCs is minimal.
['Gary Allwood', 'Graham Wild', 'Steven Hinckley']
Power over Fibre: Material Properties of Homojunction Photovoltaic Micro-Cells
381,584
We consider the question of which nonconvex sets can be represented exactly as the feasible sets of mixed-integer convex optimization problems. We state the first complete characterization for the bounded case, where the number of possible integer assignments is finite. We develop a more general characterization for the unbounded case together with a simple necessary condition for representability which we use to prove the first known negative results. Finally, we study representability of subsets of the natural numbers, developing insight towards a more complete understanding of what modeling power can be gained by using convex sets instead of polyhedral sets; the latter case has been completely characterized in the context of mixed-integer linear optimization.
['Miles Lubin', 'Ilias Zadik', 'Juan Pablo Vielma']
Mixed-integer convex representability
940,328
Early detection of lung edema for patients suffering from chronic heart disease improves medical treatment and can avoid admission of the patient to an intensive care unit. Therefore, an early warning system has been developed that monitors the amount of fluid in the lungs by measuring trans-thoracic bioimpedance outside the body. The proposed system (TiBIS) consists of a textile-integrated measurement module and a Personal Digital Assistant for signal processing and user interaction.
['Thomas Schlebusch', 'Lisa Röthlingshöfer', 'Saim Kim', 'Marcus Köny', 'Steffen Leonhardt']
On the Road to a Textile Integrated Bioimpedance Early Warning System for Lung Edema
161,904
We present algorithms that permit increased efficiency in the calculation of conservation functions for cellular automata, and report results obtained from implementations of these algorithms, which reveal conservation laws for 1-D cellular automata of higher order than any previously known. We introduce the notion of trivial and core conservation functions to distinguish truly new conservation functions from simple extensions of lower-order ones. We give new theorems related to these concepts, and show our use of them to derive more efficient algorithms for finding conservation functions. We then present the complete list of conservation functions up to order 16 for the 256 elementary 1-D binary cellular automata. These include CAs that were not previously known to have nontrivial conservation functions.
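As a concrete illustration of the simplest (order-1, additive) kind of conservation function, elementary rule 184 conserves the number of 1s on a periodic lattice; below is a brute-force check over every configuration of a small ring, which is not the paper's optimized algorithm.

from itertools import product

def step(config, rule):
    """One synchronous update of an elementary CA on a periodic lattice."""
    n = len(config)
    table = [(rule >> i) & 1 for i in range(8)]
    return tuple(
        table[(config[(i - 1) % n] << 2) | (config[i] << 1) | config[(i + 1) % n]]
        for i in range(n)
    )

def conserves_ones(rule, n=8):
    """Check whether the number of 1s is invariant for every configuration."""
    return all(sum(c) == sum(step(c, rule)) for c in product((0, 1), repeat=n))

print(conserves_ones(184))  # True: rule 184 (the "traffic" rule) conserves particles
print(conserves_ones(110))  # False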
['Barry S. Fagin', 'Leemon C. Baird']
New Conservation Functions and a Partial Taxonomy for 1-D Cellular Automata
63,005
The development of a rapid, accurate and simple screening system is essential in practical large-volume water-treatment processes, given recent strict regulations on environmental water pollution. In the present work, we produced a new dielectrophoretic microdevice with three-dimensional microstructures and a sophisticated microchannel to trap and detect bacteria efficiently. We measured the fluorescence area and intensity of stained bacteria in the gap between electrodes, and estimated the number of trapped bacteria quantitatively. The dependence of the trapping efficiency of bacteria on flow velocity was also discussed.
['Satoshi Uchida', 'Ryota Nakao', 'Chihiro Asai', 'Takayuki Jin', 'Yasuharu Shiine', 'Hiroyuki Nishikawa']
Optical counting of trapped bacteria in dielectrophoretic microdevice with pillar array
334,356
Objective Micro-Facial Movement Detection Using FACS-Based Regions and Baseline Evaluation
['Adrian K. Davison', 'Cliff Lansley', 'Choon Ching Ng', 'Kevin Tan', 'Moi Hoon Yap']
Objective Micro-Facial Movement Detection Using FACS-Based Regions and Baseline Evaluation
969,650
Weakly supervised learning from scale invariant feature transform keypoints: an approach combining fast eigendecompostion, regularization, and diffusion on graphs
['Youssef Chahir', 'Abderraouf Bouziane', 'Messaoud Mostefai', 'Adnan Al Alwani']
Weakly supervised learning from scale invariant feature transform keypoints: an approach combining fast eigendecompostion, regularization, and diffusion on graphs
526,654
This paper presents a framework that actively selects informative document pairs for semi-supervised document clustering. The semi-supervised document clustering algorithm is a Constrained DBSCAN (Cons-DBSCAN), which incorporates instance-level constraints to guide the clustering process in DBSCAN. By obtaining user feedback, our proposed active learning algorithm can acquire informative instance-level constraints to aid the clustering process. Experimental results show that Cons-DBSCAN with the proposed active learning approach can provide an appealing clustering performance.
['Weizhong Zhao', 'Qing He', 'Huifang Ma', 'Zhongzhi Shi']
Active Learning of Instance-Level Constraints for Semi-supervised Document Clustering
88,934
Potentials and Challenges for Multi-Core Processors in Robotic Applications.
['Andreas Herkersdorf', 'Johny Paul', 'Ravi Kumar Pujari', 'Walter Stechele', 'Stefan Wallentowitz', 'Thomas Wild', 'Aurang Zaib']
Potentials and Challenges for Multi-Core Processors in Robotic Applications.
763,423
An image sensor architecture with an alternative image scan method, based on Morton (Z) order, is presented. This scan, compared to the conventional row scan, enables faster and more efficient average computation of square image blocks. Digital averaging is used and the pixel data is read out with either the original resolution, a 2 × 2 or a 4 × 4 block averaging. A test chip with a 128 × 128 array has been implemented in 0.35-µm CMOS technology, has a 15% fill factor, is operated by a 3.3-V supply and dissipates 30 mW at video rate.
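The Morton (Z) index is obtained by interleaving the bits of the row and column addresses; the small helper below shows why aligned 2 × 2 (and 4 × 4) blocks occupy consecutive scan positions, which is what makes block averaging a running sum during readout. This is illustrative only, not the sensor's readout logic.

def morton_index(row, col, bits=7):
    """Interleave the bits of (row, col) to get the Z-order scan position."""
    z = 0
    for b in range(bits):
        z |= ((col >> b) & 1) << (2 * b)       # column bits on even positions
        z |= ((row >> b) & 1) << (2 * b + 1)   # row bits on odd positions
    return z

# Pixels of an aligned 2x2 block map to consecutive Z indices.
block = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(sorted(morton_index(r, c) for r, c in block))  # [0, 1, 2, 3]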
['Evgeny Artyomov', 'Yair Rivenson', 'Guy Levi', 'Orly Yadid-Pecht']
Morton (Z) scan based real-time variable resolution CMOS image sensor
384,949
Web application modeling and implementation methods are tightly related to the middleware platform, and their models usually cannot be reused on a different platform. In order to reuse Web application models, it is necessary to raise the level of abstraction of the models, constructing middleware-platform-independent models. Roles are meant to capture observable behavioral aspects of objects. Role models describe the set of valid object collaboration tasks. This paper proposes a method for constructing platform-independent models for Web applications. It consists of a stepwise process, a template role model, and a mapping from the role model to a class model. The method can improve the reusability of models at different levels of abstraction. The paper shows how the method works through a simple example.
['Chengwan He', 'Wenjie Tu', 'Keqing He']
Role Based Platform Independent Web Application Modeling
122,852
Zeros of adjoint polynomials of paths and cycles
['Fengming Dong', 'K.L. Teo', 'Charles H. C. Little', 'Michael D. Hendy']
Zeros of adjoint polynomials of paths and cycles
709,164
We show that every connected Multiplicative Exponential Linear Logic (MELL) proof-structure (with or without cuts) is uniquely determined by a well-chosen element of its Taylor expansion: the one obtained by taking two copies of the content of each box. As a consequence, the relational model is injective with respect to connected MELL proof-structures.
['Giulio Guerrieri', 'Luc Pellissier', 'Lorenzo Tortora de Falco']
Computing connected proof(-structure)s from their Taylor expansion
843,965
In this paper we study the problem of local motion analysis and apply it to swimming style recognition in broadcast sports video. Local motion analysis is challenging for two reasons: 1) local motion is usually buried in clutter involving complex motion from multiple objects; and 2) the process is more sensitive to noise compared to the recovery of global motion. However, an effective approach to local motion analysis is significant for understanding human activity from image sequences. In this work, we first extract the object-induced local motion by utilizing robust motion estimation and salient color. The object motion is accordingly characterized by compensated motion vectors and a confidence measurement. Beyond a single image, we attempt to capture the motion periodicity over the local motion sequence. For each period, we locate a so-called salient frame within which we derive a compact representation to distinctly characterize an image sequence with repeated actions. Finally, we employ a hierarchical classifier to distinguish local motion based on periodicity and salient frames. Promising results have been achieved on swimming style recognition in broadcast sports video.
['Xiaofeng Tong', 'Ling-Yu Duan', 'Changsheng Xu', 'Qi Tian', 'Hanqing Lu']
Local Motion Analysis and Its Application in Video based Swimming Style Recognition
309,551
We propose a new data model for the design and integration of mechatronic systems. We develop this new model for improving design methodology. We present the integration of multidisciplinary interfaces in mechatronic systems. We detail the model implementation to illustrate the methodology application. The design of mechatronic systems is based on the integration of several disciplines, such as mechanical, electrical and software engineering. How to achieve an integrated multidisciplinary design during the development process of mechatronic systems has attracted the attention of both academia and industry. However, solutions which can fully solve this problem have not been proposed so far. The concept of a multidisciplinary interface represents the logical or physical relationship integrating the components of the mechatronic system, or the components with their environment. As the design of mechatronic systems is a multidisciplinary work, the multidisciplinary interface model can be considered one of the most effective supports to aid designers in achieving an integrated multidisciplinary design during the development process. The paper presents a multidisciplinary interface model for the design of mechatronic systems in order to enable multidisciplinary integration among design team members from different disciplines. On the one hand, the proposed model ensures the consistency of the interfaces defined by the designers. On the other hand, it helps the designers guarantee that the different components integrate correctly. The interface model includes three concepts: classification, data model, and compatibility rules. The multidisciplinary interface model is implemented in a case study based on a 3D measurement system.
['Chen Zheng', 'Julien Le Duigou', 'Matthieu Bricogne', 'Benoît Eynard']
Multidisciplinary interface model for design of mechatronic systems
573,334
Cooperative spectrum sensing (CSS) is a key technology for improving sensing performance. Fully distributed CSS has been applied in cognitive radio ad hoc networks with good detection performance. However, when the number of secondary users (SUs) is large and the range of the cognitive radio network is wide, sensing overheads (e.g., time and energy consumption) increase correspondingly. Meanwhile, the reliability of SUs potentially influences the detection performance. In this paper, a fully decentralized cooperative spectrum sensing scheme based on consensus theory among reliable SUs is proposed. Firstly, a belief vector is used for distinguishing the reliable SUs from the unreliable ones. Secondly, the sub-network with the most reliable SUs is chosen as the reliable region, in which the sensing information of reliable SUs is exchanged among their one-hop neighbors with a finite number of iterations based on consensus theory. Lastly, SUs in the consensus network obtain a consensus sensing outcome, which is shared with SUs outside the consensus network. The proposed scheme does not need prior knowledge of the primary user in the entire process. Simulation results show that the proposed scheme has better detection performance than traditional methods.
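The consensus step among reliable SUs can be sketched as a standard average-consensus iteration over the one-hop neighbor graph (hypothetical topology, local statistics and step size; the paper's belief-vector weighting and reliability filtering are omitted).

import numpy as np

# Hypothetical one-hop neighbor lists of five reliable SUs (undirected graph)
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
energy = np.array([3.1, 2.8, 3.4, 2.9, 3.0])  # local energy-detector statistics

eps = 0.2  # step size; must be below 1 / max node degree for convergence
x = energy.copy()
for _ in range(50):  # finite number of consensus iterations
    x = x + eps * np.array(
        [sum(x[j] - x[i] for j in neighbors[i]) for i in range(len(x))]
    )

print(x)              # every entry approaches the average of the initial statistics
print(energy.mean())  # consensus value used for the final sensing decision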
['Ji Wei', 'Cao Haixi', 'Yang Zhen']
Distributed cooperative spectrum sensing based on consensus among reliable secondary users
565,504
The classification of deep Web sources is an important area in large-scale deep Web integration, which is still at an early stage. Many deep Web sources are structured, providing structured query interfaces and results. Classifying such structured sources into domains is one of the critical steps toward the integration of heterogeneous Web sources. To date, existing classification works mainly focus on classifying texts or Web documents, and there is little work on the deep Web. In this paper, we present a deep Web model and a machine-learning-based classification model. The experimental results show that we can achieve good performance with a small number of training samples for each domain, and as the number of training samples increases, the performance remains stable.
['Hexiang Xu', 'Chenghong Zhang', 'Xiulan Hao', 'Yunfa Hu']
A Machine Learning Approach Classification of Deep Web Sources
21,086
This work presents a novel approach to on-line learning regression. The well-known risk functional is formulated in an incremental manner that is aggressive in incorporating a new example locally as much as possible and at the same time passive in the sense that the overall output is changed as little as possible. To achieve this localized learning, knowledge about the model structure of the approximator is utilized to steer the adaptation of the parameter vector. We present a continuously adapting first-order learning algorithm that is stable, even for complex model structures and low data densities. Additionally, we present an approach to extend this algorithm to a second-order version with greater robustness but lower flexibility. Both algorithms are compared to state-of-the-art methods on synthetic data as well as on benchmark datasets to show the benefits of the new approach.
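A generic first-order sketch in the aggressive-yet-passive spirit described above, for a plain linear model; this is essentially a passive-aggressive regression update, not the authors' algorithm, which additionally steers the update using the approximator's structure.

import numpy as np

def incremental_update(w, x, y, C=0.1, eps=0.05):
    """One aggressive-but-passive step for a linear model w.x:
    incorporate the new example (push its error below eps) while
    changing the parameter vector as little as possible."""
    error = float(np.dot(w, x)) - y
    loss = max(0.0, abs(error) - eps)          # epsilon-insensitive loss
    if loss == 0.0:
        return w                               # passive: example already fits
    tau = min(C, loss / np.dot(x, x))          # bounded aggressive step length
    return w - tau * np.sign(error) * x        # minimal-norm correction

w = np.zeros(3)
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
for _ in range(2000):
    x = rng.normal(size=3)
    y = float(true_w @ x)
    w = incremental_update(w, x, y)
print(w)  # approaches true_w on this noiseless synthetic stream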
['Andreas Buschermöhle', 'Werner Brockmann']
Stable On-Line Learning with Optimized Local Learning, But Minimal Change of the Global Output
259,811
Animal crackers
['Peter G. Neumann']
Animal crackers
709,233
Building Configurable 3D Web Applications with Flex-VR
['Krzysztof Walczak']
Building Configurable 3D Web Applications with Flex-VR
587,067
Summary: High-throughput DNA sequencing technologies have spurred the development of numerous novel methods for genome assembly. With few exceptions, these algorithms are heuristic and require one or more parameters to be manually set by the user. One approach to parameter tuning involves assembling data from an organism with an available high-quality reference genome, and measuring assembly accuracy using some metrics. We developed a system to measure assembly quality under several scoring metrics, and to compare assembly quality across a variety of assemblers, sequence data types, and parameter choices. When used in conjunction with training data such as a high-quality reference genome and sequence reads from the same organism, our program can be used to manually identify an optimal sequencing and assembly strategy for de novo sequencing of related organisms. Availability: GPL source code and a usage tutorial is at http://ngopt.googlecode.com Contact: [email protected] Supplementary information: Supplementary data is available at Bioinformatics online.
['Aaron E. Darling', 'Andrew Tritt', 'Jonathan A. Eisen', 'Marc T. Facciotti']
Mauve Assembly Metrics
436,857
The realization of sensing swarms composed of value-specific sensing elements requires distributed cooperation among the members of the swarm. The number of value-specific sensing elements must change under changing external conditions. The recalculation of the optimal distribution of sensing elements can be carried out from knowledge of the (changed) average value of the sensed feature. Mathematically this can be done by using the maximum-entropy principle. However, the distribution operation requires solving two additional problems: the new average value must be known to each sensing element, and a new configuration must be achieved autonomously (i.e. under distributed control). For the solution to these problems the authors introduce a circulating swarm model in which the elements of the swarm circulate in a ring while performing the sensing operations and the self-organization. They show which elementary operations can be used in an algorithm for self-organization of the swarm in a new configuration optimized for sensing.
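The maximum-entropy recalculation mentioned above can be illustrated for a discrete set of element values: given a target average, the maximum-entropy fractions take a Gibbs form p_i proportional to exp(-lambda * v_i), with the multiplier found by a one-dimensional search. This is a generic sketch, not the circulating-swarm algorithm itself; the values and target are made up.

import numpy as np

def maxent_fractions(values, target_mean, lo=-50.0, hi=50.0, iters=100):
    """Maximum-entropy distribution over `values` constrained to the given mean."""
    v = np.asarray(values, dtype=float)

    def mean_for(lam):
        w = np.exp(-lam * (v - v.mean()))   # shift exponent for numerical stability
        p = w / w.sum()
        return float(p @ v), p

    for _ in range(iters):                  # bisection on the Lagrange multiplier
        lam = 0.5 * (lo + hi)
        m, p = mean_for(lam)
        if m > target_mean:
            lo = lam                        # mean is decreasing in lambda
        else:
            hi = lam
    return p

p = maxent_fractions([1.0, 2.0, 3.0, 4.0], target_mean=2.0)
print(p, float(p @ np.array([1.0, 2.0, 3.0, 4.0])))  # fractions and achieved mean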
['Susan Hackwood', 'Gerardo Beni']
Self-organization of sensors for swarm intelligence
292,818
We present an incremental lattice generation approach to speech act detection for spontaneous and overlapping speech in telephone conversations (CallHome Spanish). At each stage of the process it is therefore possible to use different models after the initial HMM models have generated a reasonable set of hypotheses. These lattices can be processed further by more complex models. This study shows how neural networks can be used very effectively in the classification of speech acts. We find that speech acts can be classified better using the neural-net-based approach than using the more classical ngram backoff model approach. The best-performing neural network operates only on unigrams, and integrating the ngram backoff model as a prior reduces the performance of the model. The neural network is therefore more likely to be robust against errors from an LVCSR system and can potentially be trained from a smaller database.
['Klaus Ries']
HMM and neural network based speech act detection
179,477
This emerging technology track focuses on key topics regarding dependability and security of software and services. Service-based systems are being used in business, safety, and mission-critical environments to achieve operational goals and possess special characteristics that bring in difficult challenges to the research and industry communities. Among these challenges, dependability and security have been widely identified as critical aspects that need to be addressed, especially when considering that many services are also nowadays being deployed on the web, used over unreliable networks, and potentially exposed to security threats. The goal of the Emerging Technology Track On Dependable and Secure Services is to bring together researchers and practitioners to present original research and industrial practice regarding techniques to improve the dependability and security of services. Services hold special characteristics, in particular their typically complex nature, high heterogeneity, and fast-changing dynamics. In such scenarios, infrastructure interdependencies, failure and recovery modeling and analysis, accidental threats and attack modeling and evaluation, testing approaches, testbeds, benchmarks, interoperability in presence of dependability and security guarantees, as well as techniques and tools to assess the impact of accidental and malicious threats, metrics for assessing dependability and security are among the crucial aspects to be addressed.
['Nuno Laranjeiro', 'Naghmeh Ivaki', 'Marco Vieira']
The 2016 IEEE Services Emerging Technology Track on Dependable and Secure Services (DSS 2016)
873,494
The aim of the reliability fixed-charge location problem is to find robust solutions to the fixed-charge location problem when some facilities might fail with probability q. In this paper we analyze for which allocation variables in the reliability fixed-charge location problem formulation the integrality constraint can be relaxed so that the optimal value matches the optimal value of the binary problem. We prove that we can relax the integrality of all the allocation variables associated to non-failable facilities or of all the allocation variables associated to failable facilities but not of both simultaneously. We also demonstrate that we can relax the integrality of all the allocation variables whenever a family of valid inequalities is added to the set of constraints or whenever the parameters of the problem satisfy certain conditions. Finally, when solving the instances in a data set we discuss which relaxation or which modification of the problem works better in terms of resolution time and we illustrate that relaxing the integrality of the allocation variables inappropriately can alter the objective value considerably.
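For orientation, the underlying (deterministic) fixed-charge location problem can be written in standard notation as below; the reliability version studied above adds the failure probability q and assignment levels to backup facilities, which are not shown here.

\begin{align*}
\min\ & \sum_{j \in J} f_j\, y_j \;+\; \sum_{i \in I}\sum_{j \in J} c_{ij}\, x_{ij} \\
\text{s.t.}\ & \sum_{j \in J} x_{ij} = 1 \quad \forall i \in I, \\
& x_{ij} \le y_j \quad \forall i \in I,\ j \in J, \\
& y_j \in \{0,1\},\quad x_{ij} \in \{0,1\} \quad \forall i \in I,\ j \in J,
\end{align*}

where y_j opens facility j at fixed cost f_j and the allocation variable x_{ij} assigns customer i to facility j at cost c_{ij}. The relaxation question discussed in the abstract is when x_{ij} \in \{0,1\} can be replaced by 0 \le x_{ij} \le 1 without changing the optimal value.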
['José L. Sainz-Pardo', 'Javier Alcaraz', 'Mercedes Landete', 'Juan F. Monge']
On relaxing the integrality of the allocation variables of the reliability fixed-charge location problem
796,557
Combining drag force with a small driving force is the most energy-saving way for a spacecraft to transfer between orbits. For the initial design of this kind of transfer problem, this paper proposes a design method for spacecraft orbit transfer based on drag force and a small driving force. The method matches the impetus and direction required for the spacecraft to enter its orbit with the drag force, relying mainly on the drag force, in order to reduce the energy consumed in the process of orbit transfer; the spacecraft can thus reach the specified target orbit and complete its tasks. The paper uses data provided by the STK analysis module to carry out a simulation design of spacecraft orbit transfer, and provides data to support further research on mobile-platform applications of space capabilities.
['Yi Tan', 'Yang Wang', 'Lei Wang', 'Jia-ze Sun']
Simulation research on the aircraft orbit transferring based on dragging force using STK
239,533
FDSOI Process Technology for Subthreshold-Operation Ultralow-Power Electronics
['Steven A. Vitale', 'Peter W. Wyatt', 'Nisha Checka', 'Jakub Kedzierski', 'Craig L. Keast']
FDSOI Process Technology for Subthreshold-Operation Ultralow-Power Electronics
851,641
Feature detection and display are the essential goals of the visualization process. Most visualization software achieves these goals by mapping properties of sampled intensity values and their derivatives to color and opacity. In this work, we propose to explicitly study the local frequency distribution of intensity values in broader neighborhoods centered around each voxel. We have found frequency distributions to contain meaningful and quantitative information that is relevant for many kinds of feature queries. Our approach allows users to enter predicate-based hypotheses about relational patterns in local distributions and render visualizations that show how neighborhoods match the predicates. Distributions are a familiar concept to nonexpert users, and we have built a simple graphical user interface for forming and testing queries interactively. The query framework readily applies to arbitrary spatial data sets and supports queries on time variant and multifield data. Users can directly query for classes of features previously inaccessible in general feature detection tools. Using several well-known data sets, we show new quantitative features that enhance our understanding of familiar visualization results.
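The kind of query described above can be sketched for a scalar volume: build the intensity histogram of each voxel's neighborhood and test a user-supplied predicate against it. This is a numpy-only sketch; the neighborhood radius, bin count and the example predicate are arbitrary choices, not the paper's system.

import numpy as np

def local_histogram_query(volume, predicate, radius=2, bins=8, vrange=(0.0, 1.0)):
    """Return a boolean mask: True where the neighborhood histogram
    around a voxel satisfies the user-supplied predicate."""
    out = np.zeros(volume.shape, dtype=bool)
    r = radius
    for idx in np.ndindex(volume.shape):
        x, y, z = idx
        nb = volume[max(x - r, 0):x + r + 1,
                    max(y - r, 0):y + r + 1,
                    max(z - r, 0):z + r + 1]
        hist, _ = np.histogram(nb, bins=bins, range=vrange)
        out[idx] = predicate(hist / hist.sum())
    return out

# Example predicate: at least 30% of the neighborhood lies in the top intensity bin
vol = np.random.default_rng(1).random((16, 16, 16))
mask = local_histogram_query(vol, lambda h: h[-1] >= 0.3)
print(mask.sum(), "voxels match the query")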
['C.R. Johnson', 'Jian Huang']
Distribution-Driven Visualization of Volume Data
232,602
Assessing and quantifying the effect of fundamental propagation mechanisms is essential for the accurate design and planning of wireless networks in metropolitan areas. In this paper, the finite-difference time-domain (FDTD) method is employed to investigate basic propagation mechanisms in simplified urban environments. In particular, we study propagation effects for WiMAX signals in the presence of simple obstacles. We compare FDTD results with empirical model ITU-R P.526-10, which considers only the effect of diffraction. Our goal is to investigate and interpret discrepancies between the empirical model and the FDTD results, which can be attributed to multiple reflection and transmission effects. To this end, we also introduce more complicated obstacles than previous studies. Our results confirm that the impact of multiple reflections becomes more dominant as the channel becomes more non-Line-Of-Sight.
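The core of any FDTD solver is the leapfrog update of the Yee scheme; the free-space 1-D sketch below (arbitrary grid size, source location and normalized units) shows the basic field updates that a full simulator such as the one used above applies in 3-D with material parameters and boundary conditions.

import numpy as np

nz, nsteps = 400, 1000
ez = np.zeros(nz)          # electric field samples
hy = np.zeros(nz)          # magnetic field samples
src = nz // 4              # soft-source location (arbitrary)

for n in range(nsteps):
    # Update H from the spatial derivative of E (normalized units, Courant number 0.5)
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
    # Update E from the spatial derivative of H
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])
    # Inject a Gaussian pulse at the source cell
    ez[src] += np.exp(-((n - 60) / 20.0) ** 2)

print(float(np.abs(ez).max()))  # the pulse propagates along the 1-D grid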
['Pongphan Leelatien', 'Xiaoli Chu', 'Panagiotis Kosmas']
FDTD-based analysis of basic propagation mechanisms in urban environments
141,212
Social networks are getting closer to our real physical world. People share the exact location and time of their check-ins and are influenced by their friends. Modeling the spatio-temporal behavior of users in social networks is of great importance for predicting the future behavior of users, controlling the users' movements, and finding the latent influence network. It is observed that users have periodic patterns in their movements. Also, they are influenced by the locations that their close friends recently visited. Leveraging these two observations, we propose a probabilistic model based on a doubly stochastic point process with a periodic decaying kernel for the time of check-ins and a time-varying multinomial distribution for the location of check-ins of users in the location-based social networks. We learn the model parameters by using an efficient EM algorithm, which distributes over the users. Experiments on synthetic and real data gathered from Foursquare show that the proposed inference algorithm learns the parameters efficiently and our method models the real data better than other alternatives.
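Schematically, the temporal component of such a model can be written as a self-exciting intensity with a periodically modulated, decaying kernel; the notation below is generic, not the paper's exact parameterization, and T stands for the period of the user's routine.

\lambda_u(t) \;=\; \mu_u \;+\; \sum_{t_i < t} \alpha\, e^{-\beta (t - t_i)} \Bigl(1 + \cos\tfrac{2\pi (t - t_i)}{T}\Bigr),

where \mu_u is the base check-in rate of user u, the exponential factor captures the decaying influence of recent check-ins (including, if the sum runs over the influence network, those of friends), and the cosine factor encodes periodicity. The venue of a check-in at time t is then drawn from a time-varying multinomial over locations.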
['Ali Zarezade', 'Sina Jafarzadeh', 'Hamid R. Rabiee']
Spatio-Temporal Modeling of Check-ins in Location-Based Social Networks
935,522
Dental biometrics deal with human identification from dental characteristics. In this paper, we present a new technique for identifying people based upon shapes and appearances of their teeth from dental X-ray radiographs. The new technique represents each tooth by a feature vector obtained from the forcefield energy function of the grayscale image of the tooth and Fourier descriptors of the contour of the tooth. The feature vector is composed of the distances between a small number of potential energy wells as well as a small number of Fourier descriptors. Given a query image (i.e., postmortem radiograph), each tooth is matched with the archived teeth in the database (antemortem radiographs) that have the same tooth number. Then, voting is used to obtain a list of best matches for the query image based upon the matching results of the individual teeth. Our goal of using appearance and shape-based features together is to overcome the drawback of using only the contour of the tooth, which can be strongly affected by the quality of the images. The experimental results on a database of 162 antemortem images show that our method is effective in identifying individuals based on their dental radiographs
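The Fourier-descriptor half of the feature vector can be sketched as follows: treat the tooth contour as a complex sequence, take its FFT, and normalize a few low-order coefficients so they are invariant to translation, scale and starting point. This is a generic contour-descriptor sketch; the paper combines such descriptors with force-field features and a voting step.

import numpy as np

def fourier_descriptors(contour_xy, k=10):
    """First k Fourier descriptors of a closed contour, made invariant to
    translation (drop the DC term), scale (divide by |c1|) and
    rotation/start point (keep magnitudes only)."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # contour as a complex signal
    c = np.fft.fft(z)
    return np.abs(c[1:k + 1]) / np.abs(c[1])       # c[0] (translation) is dropped

# Example: an ellipse sampled at 128 boundary points
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
ellipse = np.stack([3 * np.cos(t), np.sin(t)], axis=1)
print(fourier_descriptors(ellipse)[:4])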
['Omaima Nomir', 'Mohamed Abdel-Mottaleb']
Human Identification From Dental X-Ray Images Based on the Shape and Appearance of the Teeth
154,973
A challenge in technology transfer is to manage the complex assimilation process, which requires the mutual adaptation of organization and technology to gain a leading edge. This paper describes a model and an enabling (industrial/academic collaboration) mechanism to transfer and assimilate best practices through information systems development in SMEs. A case study is provided to illustrate the application of this model to manage the assimilation of Manufacturing Resource Planning in an SME to support concurrent manufacture.
['Walter W.C. Chung', 'W.B. Lee', 'Stanley Ko Chik']
Technology Transfer at The Hong Kong Polytechnic University
432,143
In this paper silicon-on-glass VDMOSFETs are electrothermally characterized for the first time. The silicon-on-glass transistors are compared with the corresponding bulk-silicon devices by means of electrical measurements of the thermal resistance and numerical thermal simulations. Very large values of R_TH are measured on-wafer for each SOG VDMOSFET under test. Nevertheless, the simulations show that the electrothermal feedback is expected to be significantly reduced after the devices are mounted on a thermally conducting PCB. They indicate that the surface-mounted SOG VDMOSFETs should have at least as good thermal stability as a bulk-Si VDMOSFET with a wafer thickness of only 100 µm.
['N. Nenadovic', 'H. Schellevis', 'V. Cuoco', 'A. Griffo', 'S.J.C.H. Theeuwen', 'L.K. Nanver', 'H.F.F. Jos', 'J.W. Slotboom']
Electrothermal characterization of silicon-on-glass VDMOSFETs
471,864
Networks are evolving to deliver a flexible mix of data, voice and video content. Application delivery mandates the need to process intelligent layer 2 to layer 7 data and content at line rate. These characteristics demand an improved high-performance software architecture based on a multi-core network processor. It implements a "low-layer processing" strategy to achieve an efficient and flexible kernel. The paper designs the control plane/data plane and fast path/slow path in the kernel to implement IPSec.
['Jiang HanPing', 'Yan Jun']
Research and Design for IPSec Architecture on Kernel
490,991
Out here in the western US a "brand" is a distinctive symbol placed on livestock (cattle, horses) to indicate their ranch of origin. Brands originated in the 1800s to deter thieves and to place lost animals. The brand is applied by heating an iron template until red-hot and then burning the brand into the hide of the animal. In that context, the title of this essay may be quite disturbing.
['Richard T. Snodgrass', 'Merrie Brucks']
Branding yourself
715,498
This study presents a hybrid harmony search particle swarm optimization with global dimension selection (HHSPSO-GDS) for improving the performance of particle swarm optimization (PSO). In HHSPSO-GDS, a new global velocity updating strategy is introduced to enhance the neighborhood region search of the current best solution and to get a better trade-off between convergence rate and robustness. Additionally, a dynamic non-linear decreased inertia weight is utilized to balance the global exploration and local exploitation. Moreover, the best-worst improvisation mechanism of harmony search (HS) is implanted in the HHSPSO-GDS algorithm and a global dimension selection is employed in the improvisation process, which can effectively accelerate convergence. Global best information sharing strategy is developed to link the two layer exploration frames (PSO and HS). Finally, a comprehensive experimental study is conducted on a large number of benchmark functions. The experimental results reveal that HHSPSO-GDS performs better in terms of the quality of solution, convergence rate, robustness and scalability compared to various state-of-the-art PSOs and other meta-heuristic search algorithms.
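The PSO layer with a non-linearly decreasing inertia weight can be sketched as below on a standard benchmark. This is plain global-best PSO; the harmony-search improvisation, global dimension selection and best-worst mechanism of HHSPSO-GDS are not reproduced, and the quadratic weight schedule is an assumed example.

import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, swarm, iters = 10, 30, 500
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0

x = rng.uniform(-5, 5, (swarm, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([sphere(p) for p in x])
g = pbest[pbest_f.argmin()].copy()

for t in range(iters):
    # Dynamic non-linear (quadratically) decreasing inertia weight
    w = w_min + (w_max - w_min) * (1 - t / iters) ** 2
    r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    f = np.array([sphere(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print(sphere(g))   # approaches 0 on the sphere benchmark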
['Hai-bin Ouyang', 'Liqun Gao', 'Xiang-yong Kong', 'Steven Li', 'Dexuan Zou']
Hybrid harmony search particle swarm optimization with global dimension selection
619,471
This paper presents a novel processing technique to enhance the sound images with playback over elevated loudspeakers for both two-speaker and multiple-speaker configuration. This is achieved by a specially designed inverse filter together with HRTF filtering.
['Woon-Seng Gan', 'S.Y. Tan', 'Meng-Hwa Er', 'Yong-Kim Chong']
Elevated speaker projection for digital home entertainment system
26,630
This paper presents a novel approach to 3D content-based search and retrieval. First, a set of functionals is applied to a 3D model's volume, producing a new domain of concentric spheres. In this new domain a new set of functionals is applied, resulting in a completely rotation-invariant descriptor vector, which is used for 3D model matching. Experiments were performed on a database comparing the proposed method with the MPEG-7 3D shape spectrum descriptor. Experimental results show that the proposed method is superior in terms of precision versus recall and can be used for 3D model search and retrieval in a highly efficient manner.
['Petros Daras', 'Dimitrios Zarpalas', 'Dimitrios Tzovaras', 'Michael G. Strintzis']
3D model search and retrieval based on the spherical trace transform
415,691
Ultrahigh-speed electrical drive systems enable small-scale electrically driven turbo-compressors for industrial applications, such as fuel cells and heat pumps, due to their compact size, high power density, and high efficiency, in combination with oil-free operation. However, a major obstacle in the industrial implementation of such turbo-machinery is the lack of bearing technologies suited for high rotational speeds. Promising bearing candidates for long lifetime at high rotational speeds are contactless bearing types, such as gas bearings or active magnetic bearings. Gas bearings allow for compact system integration with high load capacity and stiffness; but their application at high rotational speeds is limited by their poor dynamic stability. Active magnetic bearings facilitate a precise control of the rotor dynamics; however, at the price of a substantially increased installation and system complexity. Aiming to combine the advantages of these two bearing technologies, a hybrid bearing approach is proposed using a self-acting gas bearing for providing the main load capacity in combination with a small-sized active magnetic damper to achieve stable operation at high operational speeds. In order not to impair the compactness of the resulting drive system, dedicated displacement sensors are avoided by employing an eddy-current-based self-sensing rotor displacement measurement method using high-frequency signal injection. A previously proposed eddy-current-based self-sensing method is refined and implemented; measurement results are presented for a prototype machine proving the feasibility of the proposed hybrid bearing approach.
['Andreas Looser', 'Arda Tuysuz', 'Christof Zwyssig', 'Johann W. Kolar']
Active Magnetic Damper for Ultrahigh-Speed Permanent-Magnet Machines With Gas Bearings
934,686
Boundary Value Problem (BVP) Path Planners generate potential fields whose gradient descent represents navigational routes from any point of the environment to a goal position. The resulting trajectories are smooth and free of local minima. A BVP planner has recently been used to compute trajectories in known inhomogeneous environments, corresponding to different degrees of traveling difficulty. In this case, we locally distort the potential field, generating regions with high or low preference for navigation. In this paper, we extend this idea into a strategy allowing the robot to explore an unknown environment while dynamically considering environment preferences. Several experiments demonstrate that our method can be used as the core of an integrated exploration approach.
['Edson Prestes', 'Paulo Martins Engel']
Exploration driven by local potential distortions
219,215
By using high-level security policy rules to regulate low-level system behavior, a security management system with a high degree of expansibility and flexibility can be built. To manage network security policy promptly and flexibly in a complex network environment, and to improve the efficiency of policy issuing, a dynamic and self-adaptive security policy realization mechanism is proposed. An incident monitor and a policy life-cycle are put forward, so that the impact on flow control of security equipment or user requests, such as newly discovered system resources, can be calculated automatically. The system can independently perform dynamic, flexible and real-time adjustment and control as the network environment and security needs change. A distribution model is given to respond to policy requests rapidly, adopt appropriate policy dissemination methods, and reduce PDP computing tasks and system resource consumption; it introduces the concepts of issue-affecting factors, security domain address allocation, etc. The expression and construction of structure-dissimilar policies based on attribute characteristics and operations are analyzed in detail. The effectiveness of the proposed model and algorithms is demonstrated by experiments.
['Chenghua Tang', 'Shun-Zheng Yu']
A Dynamic and Self-Adaptive Network Security Policy Realization Mechanism
138,277
Recently, cluster computing has successfully provided a cost-effective solution for data-intensive applications. In order to make programming on clusters easy, many programming toolkits such as MPICH, PVM, and DSM have been proposed in past research. However, these programming toolkits are not easy enough for common users to develop parallel applications. To address this problem, we have successfully implemented the OpenMP programming interface on a software distributed shared memory system called Teamster. In addition, we add a scheduling option called Profiled Multiprocessor Scheduling (PMS) to the OpenMP directive. When this option is selected in user programs, a load balance mechanism based on the PMS algorithm is performed during the execution of the programs. This mechanism can automatically balance the workload among the execution nodes even when the CPU speed and the number of processors of each node are not identical. In this paper, we present the design and implementation of the OpenMP API on Teamster, and discuss the results of performance evaluation in the test bed.
['Tyng-Yeu Liang', 'Shih-Hsien Wang', 'Ce-Kuen Shieh', 'Ching-Min Huang', 'Liang-I Chang']
Design and Implementation of the OpenMP Programming Interface on Linux-based SMP Clusters *
40,140
Cities are rapidly converging toward digital technologies in order to provide advanced information services, efficient management, and resource utilization that will positively impact all aspects of our life and economy. This has led to the proliferation of ubiquitous connectivity to critical infrastructures (electrical grid, utility networks, finance, etc.) that are used to deliver advanced information services to homes, businesses, and government. On the other hand, such smart systems are more complex, dynamic, and heterogeneous, and have many vulnerabilities that can be exploited by cyberattacks. Protecting and securing the resources and services of smart cities becomes critically important due to the disruptive or even potentially life-threatening nature of a failure or attack on smart cities' infrastructures. In this paper we present a resilient architecture that protects smart cities' communications, controls, and computations based on autonomic computing and Moving Target Defense (MTD) techniques. The key idea to achieve resiliency is to make it extremely difficult for attackers to figure out the current active execution environments used to run smart city services by randomizing the use of these resources at runtime. We have evaluated and validated our approach by applying a wide range of attacks against our smart infrastructures testbed and demonstrated that the provided services can tolerate these attacks with little overhead.
['Jesus Pacheco', 'Cihan Tunc', 'Salim Hariri']
Design and evaluation of resilient infrastructures systems for smart cities
903,099
Fairness Properties for Collaborative Work Using Human-Computer Interactions and Human-Robot Interactions Based Environment: “Let Us Be Fair”
['Myriam El Mesbahi', 'Nabil Elmarzouqi', 'Jean-Christophe Lapayre']
Fairness Properties for Collaborative Work Using Human-Computer Interactions and Human-Robot Interactions Based Environment: “Let Us Be Fair”
561,799
As data center costs rise and space availability diminishes many organizations are investigating the viability of cloud computing for research use. Yet the majority of research investigators have not readily embraced cloud capability, regardless of the potential cost savings. Through interviews, case studies, and up-to-the-minute blog posts from top experts, it is possible to extract a basic framework of barriers that dampen widespread cloud adoption. Insight gained through examining these barriers can then be used to design an organizational strategic plan to build a cloud-enhanced campus cyber infrastructure.
['Traci L. Ruthkoski']
Exploratory Project: State of the Cloud, from University of Michigan and Beyond
513,261
We present a new method for the detection and estimation of multiple directional illuminants, using a single image of any object with known geometry and Lambertian reflectance. We use the resulting highly accurate estimates to modify virtually the illumination and geometry of a real scene and produce correctly illuminated mixed reality images. Our method obviates the need to modify the imaged scene by inserting calibration objects of any particular geometry, relying instead on partial knowledge of the geometry of the scene. Thus, the recovered multiple illuminants can be used both for image-based rendering and for shape reconstruction. Our method combines information both from the shading of the object and from shadows cast on the scene by the object. Initially we use a method based on shadows and a method based on shading independently. The shadow based method utilizes brightness variation inside the shadows cast by the object, whereas the shading based method utilizes brightness variation on the directly illuminated portions of the object. We demonstrate how the two sources of information complement each other in a number of occasions. We then describe an approach that integrates the two methods, with results superior to those obtained if the two methods are used separately. The resulting illumination information can be used (i) to render synthetic objects in a real photograph with correct illumination effects, and (ii) to virtually re-light the scene.
['Yang Wang', 'Dimitris Samaras']
Estimation of multiple directional light sources for synthesis of mixed reality images
170,262
We propose an autonomous learning algorithm based on the internal state of an associative memory for intelligent robots. The proposed associative memory model consists of structurally unstable oscillators and a common field such as a chemical concentration. In computer simulations, we use binary patterns as the stimuli. When a pattern memorized in the network is presented to the network from the outside world, the internal state of the network becomes periodic. On the other hand, when a pattern that has not been memorized is presented, the state becomes intermittently chaotic and the output of the network travels around the input and some memorized patterns. This chaotic state is regarded as an "I don't know" state. Furthermore, when the proposed autonomous learning algorithm is applied to the proposed network, the network can learn only the novel patterns automatically without destroying the previously memorized patterns.
['Kazuhiro Kojima', 'Koji Ito']
Autonomous learning algorithm and associative memory for intelligent robots
42,222
There is a huge amount of audio data available that is compressed using the MPEG audio compression standard. Sound analysis is based on the computation of short time feature vectors that describe the instantaneous spectral content of the sound. An interesting possibility is the calculation of features directly from compressed data. Since the bulk of the feature calculation is performed during the encoding stage this process has a significant performance advantage if the available data is compressed. Combining decoding and analysis in one stage is also very important for audio streaming applications. In this paper, we describe the calculation of features directly from MPEG audio compressed data. Two of the basic processes of analyzing sound are: segmentation and classification. To illustrate the effectiveness of the calculated features we have implemented two case studies: a general audio segmentation algorithm and a music/speech classifier. Experimental data is provided to show that the results obtained are comparable with sound analysis algorithms working directly with audio samples.
['George Tzanetakis', 'F. Cook']
Sound analysis using MPEG compressed audio
484,641
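As an illustration of the kind of feature calculation described in the abstract above, the following hedged Python sketch computes simple short-time features (spectral centroid, rolloff and spectral flux) from an array standing in for MPEG subband magnitudes. It is not the authors' implementation, and the synthetic input replaces the partial MPEG decode a real system would perform.

```python
# Illustrative sketch only: short-time features from MPEG-style subband
# magnitudes. The subband data here is synthetic; a real system would obtain
# it from a partial decode of the MPEG bitstream.
import numpy as np

def frame_features(subband_mags):
    """subband_mags: (n_frames, n_subbands) array of non-negative magnitudes."""
    eps = 1e-12
    idx = np.arange(subband_mags.shape[1])
    energy = subband_mags.sum(axis=1) + eps
    centroid = (subband_mags * idx).sum(axis=1) / energy        # spectral centroid
    rolloff = (np.cumsum(subband_mags, axis=1) >= 0.85 * energy[:, None]).argmax(axis=1)
    flux = np.r_[0.0, np.sum(np.diff(subband_mags, axis=0) ** 2, axis=1)]
    return np.column_stack([centroid, rolloff, flux])

# Example with synthetic data standing in for 32 MPEG subbands over 100 frames.
rng = np.random.default_rng(0)
feats = frame_features(np.abs(rng.normal(size=(100, 32))))
print(feats.shape)  # (100, 3)
```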
A New Editor, The Same Transactions
['George F. Hayhoe']
A New Editor, The Same Transactions
730,253
The design of an embedded system is a process where the tuning of the architecture should take into account both the functionality and the timing performance, while considering the heterogeneity of the hw and sw components. The goal of this paper is to present the new model developed during the SEED Esprit project to estimate the software and hardware characteristics for cosimulation and profiling within the TOSCA codesign framework. The impact of such a high-level cosimulation strategy on the design space exploration has been tested by considering as a benchmark the reengineering of an industrial device.
['Alberto Allara', 'Carlo Brandolese', 'William Fornaciari', 'Fabio Salice', 'Donatella Sciuto']
System-level performance estimation strategy for sw and hw
275,151
Design Evaluation: Zeitliche Dynamik ästhetischer Wertschätzung
['Géza Harsányi', 'Fabian Gebauer', 'Peter Kraemer', 'Claus-Christian Carbon']
Design Evaluation: Zeitliche Dynamik ästhetischer Wertschätzung
666,942
In practical situations, it is of interest to investigate computing approximations of sets as an important step of attribute reduction in dynamic covering information systems. In this paper, we present incremental approaches to computing the type-1 and type-2 characteristic matrices of coverings with the variation of elements. Then we construct the second and sixth lower and upper approximations of sets by using incremental approaches from the view of matrices. We also employ examples to show how to compute approximations of sets by using the incremental and non-incremental approaches in dynamic covering approximation spaces.
['Guangming Lang', 'Qingguo Li', 'Mingjie Cai', 'Qimei Xiao']
Incremental Approaches to Computing Approximations of Sets in Dynamic Covering Approximation Spaces
552,415
Characterization for Functional Dependency and Boyce-Codd Normal Form Databases.
['Seymour Ginsburg', 'Richard Hull']
Characterization for Functional Dependency and Boyce-Codd Normal Form Databases.
545,592
Summary: Copy number variants (CNVs) are a major source of genetic variation. Comparing CNVs between samples is important in elucidating their potential effects in a wide variety of biological contexts. HD-CNV (hotspot detector for copy number variants) is a tool for downstream analysis of previously identified CNV regions from multiple samples, and it detects recurrent regions by finding cliques in an interval graph generated from the input. It creates a unique graphical representation of the data, as well as summary spreadsheets and UCSC (University of California, Santa Cruz) Genome Browser track files. The interval graph, when viewed with other software or by automated graph analysis, is useful in identifying genomic regions of interest for further study. Availability and implementation: HD-CNV is open-source Java code and is freely available, with tutorials and sample data, from http://daleylab.org.
['Jenna L. Butler', 'Marjorie Elizabeth Osborne Locke', 'Kathleen A. Hill', 'Mark Daley']
HD-CNV: hotspot detector for copy number variants
64,131
Motivation: Independent component analysis (ICA) is a signal processing technique that can be utilized to recover independent signals from a set of their linear mixtures. We propose ICA for the analysis of signals obtained from large proteomics investigations such as clinical multi-subject studies based on MALDI-TOF MS profiling. The method is validated on simulated and experimental data, demonstrating its capability of correctly extracting protein profiles from MALDI-TOF mass spectra. Results: The comparison on peak detection with an open-source and two commercial methods shows its superior reliability in reducing the false discovery rate of protein peak masses. Moreover, the integration of ICA and statistical tests for detecting the differences in peak intensities between experimental groups allows the identification of protein peaks that could be indicators of a diseased state. This data-driven approach proves to be a promising tool for biomarker-discovery studies based on MALDI-TOF MS technology. Availability: The MATLAB implementation of the method described in the article and both simulated and experimental data are freely available at http://www.unich.it/proteomica/bioinf/. Contact: [email protected]
['Dante Mantini', 'Francesca Petrucci', 'Piero Del Boccio', 'Damiana Pieragostino', 'Marta Di Nicola', 'Alessandra Lugaresi', 'Giorgio Federici', 'Paolo Sacchetta', 'Carmine Di Ilio', 'Andrea Urbani']
Independent component analysis for the extraction of reliable protein signal profiles from MALDI-TOF mass spectra
115,135
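A minimal, hedged sketch of the general ICA idea in the abstract above: mass spectra are treated as linear mixtures of independent source profiles and unmixed with scikit-learn's FastICA. The data are synthetic, and the peak shapes, mixing weights and component count are illustrative assumptions, not the authors' MATLAB pipeline.

```python
# Sketch of ICA-based unmixing of spectra (synthetic data, not the authors' code).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
mz = np.linspace(0, 1, 500)
# Two synthetic "protein" sources: Gaussian peaks at different m/z positions.
sources = np.vstack([np.exp(-((mz - c) / 0.02) ** 2) for c in (0.3, 0.7)])
mixing = rng.uniform(0.2, 1.0, size=(20, 2))           # 20 spectra, 2 sources
spectra = mixing @ sources + 0.01 * rng.normal(size=(20, 500))

ica = FastICA(n_components=2, random_state=0)
estimated_sources = ica.fit_transform(spectra.T).T     # shape (2, 500)
print(estimated_sources.shape)
```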
An Autonomous Mobile Robot (AMR) has to show both goal-oriented behavior and reflexive behavior in order to be considered fully autonomous. In a classical, hierarchical control architecture these behaviors are realized by using several abstraction levels, while their individual informational needs are satisfied by associated world models. The focus of this paper is to describe an approach which utilizes heterogeneous information provided by a laser radar and a set of sonar sensors in order to achieve reliable and complete world models for both real-time collision avoidance and local path planning. The approach was tested using MOBOT-IV, which serves as a test platform within the scope of a research project on autonomous mobile robots for indoor applications. Thus, the experimental results presented here are based on real data.
['Klaus-Werner Jörg']
World modeling for an autonomous mobile robot using heterogenous sensor information
24,648
This technical note studies the consensus problem for cooperative agents with nonlinear dynamics in a directed network. Both local and global consensus are defined and investigated. Techniques for studying the synchronization in such complex networks are exploited to establish various sufficient conditions for reaching consensus. The local consensus problem is first studied via a combination of the tools of complex analysis, local consensus manifold approach, and Lyapunov methods. A generalized algebraic connectivity is then proposed to study the global consensus problem in strongly connected networks and also in a broad class of networks containing spanning trees, for which ideas from algebraic graph theory, matrix theory, and Lyapunov methods are utilized.
['Wenwu Yu', 'Guanrong Chen', 'Ming Cao']
Consensus in Directed Networks of Agents With Nonlinear Dynamics
132,559
Motion blur arises when motion is fast relative to the shutter time of a camera. Unlike most work on motion blur, which considers the streaks due to motion blur to be noisy artifacts, in this paper we introduce a new method to extract motion information from these streaks. Previous methods with similar goals first extract an optic flow field from local information in the motion streaks and then infer global motion parameters. In contrast, we adopt a more direct feature-based approach and extract global motion parameters from the motion streaks. We first extract edges in the motion-blurred images, which we then group to determine the foci of expansion, the center of rotation, or motion parallel to the image plane. Furthermore, we determine the direction of motion. We present results on real images from a mobile robot in cluttered environments.
['Daniel Majchrzak', 'Sudeep Sarkar', 'Barry Sheppard', 'Robin R. Murphy']
Motion detection from temporally integrated images
45,163
Development of tools for the automated analysis of spectra generated by tandem mass spectrometry
['Sally R. Ellingson', 'Joe Hughes', 'Dylan Storey', 'Rick Weber', 'Nathan C. VerBerkmoes']
Development of tools for the automated analysis of spectra generated by tandem mass spectrometry
254,872
In this article we propose a method to refine the clustering results obtained with the nonnegative matrix factorization (NMF) technique, imposing consistency constraints on the final labeling of the data. The research community focused its effort on the initialization and on the optimization part of this method, without paying attention to the final cluster assignments. We propose a game theoretic framework in which each object to be clustered is represented as a player, which has to choose its cluster membership. The information obtained with NMF is used to initialize the strategy space of the players and a weighted graph is used to model the interactions among the players. These interactions allow the players to choose a cluster which is coherent with the clusters chosen by similar players, a property which is not guaranteed by NMF, since it produces a soft clustering of the data. The results on common benchmarks show that our model is able to improve the performances of many NMF formulations.
['Rocco Tripodi', 'Sebastiano Vascon', 'Marcello Pelillo']
Context aware nonnegative matrix factorization clustering
893,058
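The following hedged sketch illustrates the general mechanism described above: each object holds a probability distribution over clusters, initialized from NMF soft memberships, and updates it with replicator-style dynamics driven by a similarity graph so that similar objects converge to consistent labels. The update rule, similarity kernel and data are assumptions made for illustration, not the authors' implementation.

```python
# Sketch of graph-driven refinement of NMF soft memberships (synthetic data).
import numpy as np

def refine_labels(H, W_sim, iters=50):
    """H: (n_objects, k) non-negative soft memberships; W_sim: (n, n) similarity."""
    P = H / H.sum(axis=1, keepdims=True)               # per-object strategy distributions
    for _ in range(iters):
        payoff = W_sim @ P                             # support from similar objects
        P = P * payoff                                 # replicator-style multiplicative step
        P = P / (P.sum(axis=1, keepdims=True) + 1e-12)
    return P.argmax(axis=1)                            # final consistent labeling

rng = np.random.default_rng(2)
H = rng.random((10, 3))                                # stand-in for NMF output
X = rng.random((10, 5))
W_sim = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=2))
print(refine_labels(H, W_sim))
```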
In this paper we propose a recognition technique for 3D dynamic gestures for human-robot interaction (HRI), based on depth information provided by a Kinect sensor. The body is tracked using the skeleton algorithm provided by the Kinect SDK. The main idea of this work is to compute the angles of the upper-body joints that are active when executing a gesture. The variations of these angles are used as inputs to Hidden Markov Models (HMMs) in order to recognize the dynamic gestures. Results demonstrate the robustness of our method against environmental conditions such as illumination changes and scene complexity, since only depth information is used.
['Hajar Hiyadi', 'Fakhreddine Ababsa', 'Christophe Montagne', 'El Houssine Bouyakhf', 'Fakhita Regragui']
A Depth-based Approach for 3D Dynamic Gesture Recognition
668,170
Biomedical ontologies play an important role in information extraction in the biomedical domain. We present a workflow for automatically updating biomedical ontologies, composed of four steps. We detail two contributions concerning concept extraction and the semantic linkage of extracted terminology.
['Juan Antonio Lossio-Ventura', 'Clement Jonquet', 'Mathieu Roche', 'Maguelonne Teisseire']
A Way to Automatically Enrich Biomedical Ontologies
647,932
In this paper, we consider efficient RSA modular exponentiations \(x^K \mod N\) which are regular and constant time. We first review the multiplicative splitting of an integer x modulo N into two half-size integers. We then take advantage of this splitting to modify the square-and-multiply exponentiation into a regular sequence of squarings always followed by a multiplication by a half-size integer. The proposed method requires around 16 % fewer word operations compared to the Montgomery-ladder, square-always and square-and-multiply-always exponentiations. These theoretical results are validated by our implementation results, which show an improvement of more than 12 % compared to approaches which are both regular and constant time.
['Christophe Negre', 'Thomas Plantard']
Efficient regular modular exponentiation using multiplicative half-size splitting
809,042
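For context, the hedged sketch below shows a plain regular square-and-multiply-always exponentiation in Python, illustrating the "one squaring plus one multiplication per bit" pattern the abstract refers to. The paper's actual contribution, replacing the full-size multiplier by half-size factors obtained from a multiplicative splitting of x modulo N, is not reproduced here.

```python
# Baseline regular exponentiation sketch (context only, not the paper's method):
# every iteration performs one squaring and one multiplication, regardless of
# the key bit, which is the regularity property the paper builds on.
def square_and_multiply_always(x, k, n):
    acc = 1
    for bit in bin(k)[2:]:              # most-significant bit first
        acc = (acc * acc) % n           # squaring, every iteration
        tmp = (acc * x) % n             # multiplication, every iteration
        acc = tmp if bit == '1' else acc
    return acc

assert square_and_multiply_always(7, 560, 561) == pow(7, 560, 561)
print(square_and_multiply_always(7, 560, 561))
```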
Semantic Data Integration: Tools and Architectures
['Richard Mordinyi', 'Estefanía Serral', 'Fajar J. Ekaputra']
Semantic Data Integration: Tools and Architectures
937,876
Traditional multiview video coding schemes exploit all types of correlations at the encoder. The inherent high encoder complexity and the need for large-volume data communication between cameras make them impractical for many applications. In this paper, we propose a novel distributed coding scheme for multiview video. All the views are separately encoded but jointly decoded. At the decoder, we design a fusion algorithm that is able to make full use of three types of side information: temporal prediction, inter-view motion prediction and inter-view luma prediction. This is the key innovation in this research. This fusion scheme is able to exploit both intra-view and inter-view correlations with a low-complexity encoder. Experimental results indicate significant R-D performance gains over single-view distributed video coding and traditional intra coding.
['Qiwei Liu', 'Houqiang Li', 'Yan Song', 'Chang Wen Chen']
Distributed multiview video coding using the fusion of triple side information
359,277
This paper presents an improved particle swarm optimization (PSO) algorithm for onboard embedded applications in power-efficient wireless sensor networks (WSNs) and WSN-based security systems. The objective is to keep the main advantages of the standard PSO algorithm, such as simple form, easy implementation, low algorithmic complexity, and low computational burden, while significantly improving performance and efficiency. Numerical experiments are performed on a very difficult benchmark function to validate the performance of the improved PSO algorithm. The results show that the improved PSO algorithm outperforms the standard PSO algorithm.
['Erfu Yang', 'Ahmet T. Erdogan', 'Tughrul Arslan', 'Nick Barton']
An Improved Particle Swarm Optimization Algorithm for Power-Efficient Wireless Sensor Networks
499,721
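As a point of reference for the abstract above, here is a hedged sketch of a baseline global-best PSO on the Rastrigin benchmark. It shows only the standard update rules; the paper's specific modifications are not reproduced, and the parameter values (w, c1, c2, swarm size) are common defaults rather than the authors' settings.

```python
# Baseline global-best PSO sketch on the Rastrigin function (illustration only).
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x * x - 10 * np.cos(2 * np.pi * x), axis=-1)

def pso(f, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), f(pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = f(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, f(gbest)

best, best_val = pso(rastrigin)
print(best_val)
```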
In this paper, we propose to improve our previously developed method for joint compensation of additive and convolutive distortions (JAC) applied to model adaptation. The improvement entails replacing the vector Taylor series (VTS) approximation with the unscented transform (UT) in formulating both the static and dynamic model parameter adaptation. Our new JAC-UT method differentiates itself from other UT-based approaches in that it combines online noise and channel distortion estimation and model parameter adaptation in a unified UT framework. Experimental results on the standard Aurora 2 task show that the new algorithm enjoys 20.0% and 16.9% relative word error rate reductions over the previous JAC-VTS algorithm when using the simple and complex backend models, respectively.
['Jinyu Li', 'Dong Yu', 'Li Deng', 'Yifan Gong']
Unscented Transform with Online Distortion Estimation for HMM Adaptation
472,824
Many Geographic Information Systems (GIS) handle large geospatial datasets stored in raster representation. Spatial joins over raster data are important queries in GIS for data analysis and decision support. However, evaluating spatial joins can be very time intensive due to the size of these datasets. In this paper we propose a new interactive framework that allows users to get approximate answers in near instantaneous time, thus allowing for truly interactive data exploration. Our method utilizes two proposed statistical approaches: probabilistic join and sampling based join. Our probabilistic join method provides speedup of two orders of magnitude with no correctness guarantee, while our sampling based method provides an order of magnitude improvement over the full quad-tree join and also provides running confidence intervals. We propose a framework that combines the two approaches to allow end users to tradeoff speed versus bounded accuracy. The two approaches are evaluated empirically with real and synthetic datasets.
['Wan D. Bae', 'Petr Vojtěchovský', 'Shayma Alkobaisi', 'Scott T. Leutenegger', 'Seon Ho Kim']
An interactive framework for raster data spatial joins
303,726
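A toy, hedged sketch of the sampling idea in the abstract above: the size of a raster spatial join (cells where two boolean rasters overlap) is estimated from a uniform random sample of cells, together with a normal-approximation 95% confidence interval. The rasters, sample size and predicate are illustrative assumptions, not the paper's algorithm or datasets.

```python
# Sampling-based estimate of a raster join size with a confidence interval
# (synthetic rasters; illustration of the general idea only).
import numpy as np

rng = np.random.default_rng(3)
a = rng.random((1000, 1000)) > 0.6        # raster A: boolean mask
b = rng.random((1000, 1000)) > 0.7        # raster B: boolean mask
n_cells = a.size

sample_idx = rng.integers(0, n_cells, size=20_000)
hits = (a.ravel()[sample_idx] & b.ravel()[sample_idx]).astype(float)
p_hat = hits.mean()
stderr = hits.std(ddof=1) / np.sqrt(hits.size)
low, high = p_hat - 1.96 * stderr, p_hat + 1.96 * stderr

print(f"estimated join size: {p_hat * n_cells:.0f} "
      f"(95% CI: {low * n_cells:.0f} - {high * n_cells:.0f})")
print("true join size:", int((a & b).sum()))
```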
This paper considers the problem of partitioning analog integrated circuits for hierarchical symbolic analysis based on determinant decision diagrams (DDDs). The objective is to use DDDs with the minimum number of vertices to represent all the symbolic expressions. We show that the problem can be formulated as multi-level multi-way hypergraph partitioning with balance constraints, and solved in two phases by connectivity-oriented initial clustering and iterative improvement. Our new contribution consists of a fast and effective heuristic for constructing a balanced initial partition, a potential gain formula that can be computed efficiently, and a multiple-vertex moving strategy for relaxing and enforcing balance constraints. The proposed algorithm has been implemented and applied to the symbolic analysis of several practical analog integrated circuits. Experimental results are described and compared to the contour tableau method of Sangiovanni-Vincentelli, Chen and Chua, and the SCAPP algorithm of Hassoun and Lin. The resulting hierarchical symbolic analyzer outperforms SPICE in numerical evaluations for a number of large analog circuits.
['Sheldon X.-D. Tan', 'C.-J. Richard Shi']
Balanced multi-level multi-way partitioning of analog integrated circuits for hierarchical symbolic analysis
260,014
Machine-learning techniques frequently predict the results of machining processes, based on pre-determined cutting tool settings. By doing so, key parameters of a machined product can be predicted before production begins. Nevertheless, a prediction model cannot capture all the features of interest under real-life industrial conditions. Moreover, careful assessment of prediction credibility is necessary for accurate calibration; aspects that should be addressed through appropriate modeling and visualization techniques. A machine process test problem is proposed to analyze data-visualization techniques, in which a real data set is analyzed that describes deep-drilling under different cutting and cooling conditions. The main objective is the efficient fusion of visualization techniques with the knowledge of industrial engineers. Common modeling and visualization techniques were first surveyed, to contrast standard practice with our novel approach. A hybrid technique combining conditional inference trees with dimensionality reduction was then examined. The results show that a process engineer will be able to estimate overall model accuracy and to verify the extent to which accuracy depends on industrial process settings and the statistical significance of model predictions. Moreover, evaluation of the data set in terms of its sufficiency for modeling purposes will help assess the credibility of these decisions.
['Andres Bustillo', 'Maciej Grzenda', 'Bohdan Macukow']
Interpreting tree-based prediction models and their data in machining processes
711,522
We introduce non-negative matrix factorization with orthogonality constraints (NMFOC) for detection of a target spectrum in a given set of Raman spectra data. An orthogonality measure is defined and two different orthogonality constraints are imposed on the standard NMF to incorporate prior information into the estimation and hence to facilitate the subsequent detection procedure. Both multiplicative and gradient type update rules have been developed. Experimental results are presented to compare NMFOC with the basic NMF in detection, and to demonstrate its effectiveness in the chemical agent detection problem.
['Hualiang Li', 'Tülay Adali', 'Wei Wang', 'Darren K. Emge', 'Andrzej Cichocki']
Non-negative Matrix Factorization with Orthogonality Constraints and its Application to Raman Spectroscopy
345,531
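The following hedged sketch shows one simple way to combine multiplicative NMF updates with an orthogonality penalty on the rows of H, in the spirit of the abstract above. The penalty form, its weight and the way it enters the update rules are illustrative assumptions and not the authors' derived NMFOC updates.

```python
# Minimal sketch of penalized multiplicative NMF updates (not the paper's rules):
# standard Lee-Seung updates for V ~ W @ H, plus a term nudging the rows of H
# toward orthogonality via a penalty of the form lam * ||H @ H.T - I||_F^2.
import numpy as np

def nmf_orth(V, k, lam=0.1, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    eps = 1e-12
    for _ in range(iters):
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ V + lam * H) / (W.T @ W @ H + lam * (H @ H.T @ H) + eps)
    return W, H

V = np.abs(np.random.default_rng(4).normal(size=(60, 40)))
W, H = nmf_orth(V, k=5)
gram = H @ H.T
print(np.linalg.norm(V - W @ H), np.linalg.norm(gram - np.diag(np.diag(gram))))
```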
In this paper the benefits of RFID technology in the specific environment of the Supply Chain of renewable resources are examined. Due to the natural growth of these resources, production planning and Supply Chain Management face uncertainties which differ from those of conventional Supply Chains. We show how RFID can be applied to natural resources and how the whole Supply Chain can benefit from the implementation of RFID.
['Stefan Friedemann', 'Matthias Schumann']
Potentials and limitations of RFID to reduce uncertainty in production planning with renewable resources
662,925
We address the problem of automatically simplifying the visualization of expressions in an eager functional language. The problem is relevant for debugging in a programming environment based on a rewriting model of expression evaluation that displays large intermediate expressions. The simplification technique must automatically filter out the parts of an expression which are not interesting for debugging/understanding. We propose the use of logical fisheye views because they provide a balance between showing global context and local information (focus). A straightforward implementation of fisheye views displays overly simplified expressions. In the article, we identify five design requirements and describe how they are satisfied. We also include several examples, a discussion and related work.
['J. Ángel Velázquez-Iturbide']
Principled design of logical fisheye views of functional expressions
518,932
More is More: The Benefits of Denser Sensor Deployment
['Matthew P. Johnson', 'Deniz Sarioz', 'Amotz Bar-Noy', 'Theodore Brown', 'Dinesh C. Verma', 'Chai Wah Wu']
More is More: The Benefits of Denser Sensor Deployment
135,779
Target-Based Topic Model for Problem Phrase Extraction
['Elena Tutubalina']
Target-Based Topic Model for Problem Phrase Extraction
554,651
We present an architecture and hardware for scheduling gigabit packet streams in server clusters that combines a network processor datapath and an FPGA, for use in server NICs and server cluster switches. Our architectural framework can provide EDF, static-priority, fair-share and native DWCS scheduling support for best-effort and real-time streams. This allows (i) interoperability of scheduling hardware supporting different scheduling disciplines and (ii) customized scheduling solutions in server clusters based on traffic type, stream content, stream volume and cluster hardware, using a hardware implementation of a scheduler running at wire speed. The architecture scales easily from 4 to 32 streams on a single Xilinx Virtex 1000 chip and can support 64-byte to 1500-byte Ethernet frames on a 1 Gbps link and 1500-byte Ethernet frames on a 10 Gbps link. A running hardware prototype of a stream scheduler on a Virtex 1000 PCI card can divide the bandwidth based on user specifications and meet the temporal bounds and packet-time requirements of multi-gigabit links.
['Raj Krishnamurthy', 'S. Yalamanchill', 'Karsten Schwan', 'R. West']
Architecture and hardware for scheduling gigabit packet streams
57,987
Although concurrency is generally perceived to be a 'hard' subject, it can in fact be very simple, provided that the underlying model is simple. The occam-π parallel processing language provides such a simple yet powerful concurrency model that is based on CSP and the π-calculus. This paper presents pony, the occam-π Network Environment. occam-π and pony provide a new, unified, concurrency model that bridges inter- and intra-processor concurrency. This enables the development of distributed applications in a transparent, dynamic and highly scalable way. The first part of this paper discusses the philosophy behind pony, explains how it is used, and gives a brief overview of its implementation. The second part evaluates pony's performance by presenting a number of benchmarks.
['Mario Schweigler', 'Adam T. Sampson']
pony - The occam-pi Network Environment
82,158
Classification des actions humaines basée sur les descripteurs spatio-temporels.
['Sameh Megrhi', 'Azeddine Beghdadi', 'Wided Souidene']
Classification des actions humaines basée sur les descripteurs spatio-temporels.
735,756