Dataset columns: abstract — string (lengths 0 to 11.1k); authors — string (lengths 9 to 1.96k); title — string (lengths 4 to 353); __index_level_0__ — int64 (3 to 1,000k).
['Kenneth D. Mandl', 'Alberto Riva', 'Isaac S. Kohane']
A Distributed, Secure File System For Personal Medical Records
608,889
This study proposes methods for determining the optimal lot sizes for sequential auctions that are conducted to sell sizable quantities of an item. These auctions are fairly common in business-to-consumer (B2C) settings. In these auctions, the tradeoff for the auctioneer is between the alacrity with which funds are received and the amount of funds collected, since larger lot sizes clear inventory faster. Observed bids in these auctions impact the auctioneer's decision on lot sizes in future auctions. We first present a goal programming approach for estimating the bid distribution for the bidder population from the observed bids, which are readily available in these auctions. We then develop models to compute optimal lot sizes for both stationary and non-stationary bid distributions. For the stationary bid distribution, we present closed-form solutions and structural results. Our findings show that the optimal lot size increases with inventory holding costs and the number of bidders. Our model for the non-stationary bid distribution captures inter-auction dynamics such as the number of bidders, their bids, past winning bids, and lot size. We use simulated data to test the robustness of our model.
['Arvind K. Tripathi', 'Suresh K. Nair', 'Gilbert G. Karuga']
Optimal Lot Sizing Policies For Sequential Online Auctions
397,114
Background: Visualization of DNA microarray data in two- or three-dimensional spaces is an important exploratory analysis step in order to detect quality issues or to generate new hypotheses. Principal Component Analysis (PCA) is a widely used linear method to define the mapping between the high-dimensional data and its low-dimensional representation. During the last decade, many new nonlinear methods for dimension reduction have been proposed, but it is still unclear how well these methods capture the underlying structure of microarray gene expression data. In this study, we assessed the performance of the PCA approach and of six nonlinear dimension reduction methods, namely Kernel PCA, Locally Linear Embedding, Isomap, Diffusion Maps, Laplacian Eigenmaps and Maximum Variance Unfolding, in terms of visualization of microarray data.
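As a rough illustration of this kind of comparison (not the authors' pipeline), the sketch below maps one expression matrix to 2-D with PCA and several of the nonlinear methods named above using scikit-learn; Diffusion Maps and Maximum Variance Unfolding have no stock scikit-learn implementation and are omitted, and the random matrix stands in for real microarray data.

```python
# Minimal sketch: project a samples-x-genes matrix to 2-D with PCA and
# several of the nonlinear methods named in the abstract.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding

X = np.random.rand(60, 2000)  # placeholder for a real expression matrix

methods = {
    "PCA": PCA(n_components=2),
    "Kernel PCA": KernelPCA(n_components=2, kernel="rbf"),
    "Isomap": Isomap(n_components=2, n_neighbors=10),
    "LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=10),
    "Laplacian Eigenmaps": SpectralEmbedding(n_components=2, n_neighbors=10),
}
embeddings = {name: m.fit_transform(X) for name, m in methods.items()}
for name, Y in embeddings.items():
    print(name, Y.shape)  # each is (60, 2) and can be scatter-plotted
```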
['Christoph Bartenhagen', 'Hans-Ulrich Klein', 'Christian Ruckert', 'Xiaoyi Jiang', 'Martin Dugas']
Comparative study of unsupervised dimension reduction techniques for the visualization of microarray gene expression data
186,582
Ultra narrow band (UNB) is dedicated to long-range and low-power transmission in IoT networks. The channel access is Random-FTMA, where nodes select their time and frequency in a random and continuous way. This randomness leads to a new interference behavior that has not yet been analyzed theoretically when the path loss of nodes located randomly in an area is considered. In this paper, in order to quantify the system performance, we derive and exploit a theoretical expression of the packet error rate in a UNB-based IoT network, taking into account both the interference due to spectral randomness and the path loss due to propagation.
['Yuqi Mo', 'Claire Goursaud', 'Jean-Marie Gorce']
Theoretical analysis of UNB-based IoT networks with path loss and random spectrum access
969,276
['Sangsig Kim', 'Yen-Ting Lee', 'Yuanlin Zhu', 'Dae-Kyoo Kim', 'Lunjin Lu', 'Vijayan Sugumaran']
A Feature-Based Modeling Approach to Configuring Privacy and Temporality in RBAC.
798,036
Define the neighborhood characteristic of a graph to be s_1 - s_2 + s_3 - ..., where s_i counts subsets of i vertices that are all adjacent to some vertex outside the subset. This amounts to replacing cliques by neighborhoods in the traditional 'Euler characteristic' (the number of vertices, minus the number of edges, plus the number of triangles, etc.). The neighborhood characteristic can also be calculated by knowing,
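The definition is directly computable on small graphs. The following brute-force sketch (my illustration, not from the paper) counts, for each subset size i, the i-subsets whose members share a neighbor outside the subset, and accumulates the alternating sum:

```python
# Brute-force illustration of the neighborhood characteristic:
# s_i counts i-subsets S of vertices such that some vertex v outside S
# is adjacent to every vertex of S; the characteristic is s1 - s2 + s3 - ...
from itertools import combinations

def neighborhood_characteristic(vertices, adj):
    """adj: dict mapping each vertex to the set of its neighbors."""
    total = 0
    for i in range(1, len(vertices) + 1):
        s_i = 0
        for subset in combinations(vertices, i):
            outside = set(vertices) - set(subset)
            if any(all(u in adj[v] for u in subset) for v in outside):
                s_i += 1
        total += s_i if i % 2 == 1 else -s_i
    return total

# 4-cycle a-b-c-d: s1 = 4, s2 = 2 (the two diagonal pairs share a
# neighbor), s3 = s4 = 0, so the characteristic is 4 - 2 = 2.
adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(neighborhood_characteristic(list(adj), adj))  # -> 2
```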
['Terry A. McKee']
The Neighborhood Characteristic Parameter for Graphs
29,130
The lexical acquisition system presented in this paper incrementally updates linguistic properties of unknown words inferred from their surrounding context by parsing sentences with an HPSG grammar for German. We employ a gradual, information-based concept of "unknownness" providing a uniform treatment for the range of completely known to maximally unknown lexical entries. "Unknown" information is viewed as revisable information, which is either generalizable or specializable. Updating takes place after parsing, which only requires a modified lexical lookup. Revisable pieces of information are identified by grammar-specified declarations which provide access paths into the parse feature structure. The updating mechanism revises the corresponding places in the lexical feature structures iff the context actually provides new information. For revising generalizable information, type union is required. A worked-out example demonstrates the inferential capacity of our implemented system.
['Petra Barg', 'Markus Walther']
Processing Unknown Words in HPSG
131,964
We are concerned with a discrete-time undiscounted dynamic lot size model in which demand and cost parameters are constant for an initial few periods. As our main result, we obtain an upper bound on the number of these periods which guarantees the optimality of the Economic Order Quantity (EOQ) as the size of the production lot to be produced in the first period. The upper bound is given, and the optimality holds for every problem with a horizon not less than the upper bound, as well as for the infinite horizon problem. The data beyond the upper bound are allowed to be specified arbitrarily. In the context of forecast horizon theory, we obtain conditions for a finite forecast horizon to exist in the undiscounted dynamic lot size model. Furthermore, existence results for forecast horizons in an undiscounted optimization problem are obtained for the first time in this paper.
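For context, the classical EOQ that the result refers to is standard inventory theory (quoted here, not derived in the abstract):

```latex
% Classical EOQ (standard inventory theory, quoted for context):
% K = fixed setup cost per order, D = demand per period, and
% h = holding cost per unit per period. The cost-minimizing lot size is
\[
  Q^{*} = \sqrt{\frac{2KD}{h}}
\]
```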
['Suresh Chand', 'Suresh P. Sethi', 'J.-M. Proth']
Existence of forecast horizons in undiscounted discrete-time lot size models
456,763
Web user forums are a valuable means for users to resolve specific information needs, both interactively for the participants and statically for users who search/browse over historical thread data. However, the complex structure of forum threads can make it difficult for users to extract relevant information. Information retrieval (IR) over forum threads is one important way to obtain useful information on questions asked by others. In this paper, we investigate the task of IR over web user forums by utilising the discourse structure of forum threads. Experimental results show that exploiting the characteristics of discourse structure of forum threads can benefit IR, when compared to previously published results.
['Li Wang', 'Su Nam Kim', 'Timothy Baldwin']
The Utility of Discourse Structure in Forum Thread Retrieval
562,076
Design experience from previous design cases can help designers solve current design problems. Therefore, a case-based recommender system can recall suitable design cases for designers based on a similarity measurement mechanism. In the context of Product-Service System (PSS) design, the measurement of similarity between different cases becomes more challenging because of the complex nature of PSS design. In this research, we propose a similarity measurement framework for PSS design cases based on the context-based activity model. In the proposed framework, a PSS design case is indexed and quantified by design activity element, design process, and function requirement. Ways to measure similarity between design indexes and design cases are also specified. A case study along with an empirical validation was conducted to validate the framework.
['Yu Wu', 'Ji-Hyun Lee', 'Yong Se Kim', 'Sang Won Lee', 'Sun-Joong Kim', 'Xiaofang Yuan']
A similarity measurement framework of product-service system design cases based on context-based activity model
967,622
Although the geological factor and extensive adverse human impacts have exacerbated the soil erosion problems, the topography, together with the above-mentioned hydrological characteristics, controls the erosion potentials in the undulating hilly topography of the western part of the Chinese Loess Plateau. Thus, to estimate the spatial distribution of the topography-controlled erosion potentials, the spatial distribution of topographic attributes has to be spatially depicted. This paper applies the USPED model to four different-scale DEMs in the Gaoquangou watershed located in the western Chinese Loess Plateau to calculate the erosion potentials and evaluate the DEM scale effects on the GIS-processed topographic parameters that are required in modeling the erosion potentials. Four differently-scaled DEM data sets (1:250,000, 1:50,000, 1:10,000, and 1:5,000) are re-sampled to 10-m resolution and the topographic parameters (e.g., slope, aspect, upslope contributing area, rill density and others) are GIS-processed to be input parameters in the erosion model. The two most important erosion types, rill erosion and sheet erosion, are calculated from the four differently-scaled DEMs and the calculated results are compared and evaluated in terms of their theoretical and practical adequacy. The results show that the USPED model can be used to reasonably depict the effects of complex terrains on soil erosion. The comparison of calculated erosions from the four DEM scales indicates that the 1:50,000 DEM can provide nearly as accurate topographic information as higher-resolution DEM data for soil erosion studies, and that the 1:250,000 scale DEM underestimates the potential of rill erosion and sheet erosion.
['Xiaowen Zhang', 'Zhaodong Feng']
Scale effects of DEM on erosion potentials in the western Chinese Loess Plateau
328,230
Code smells are well-known symptoms of problems at the code level, and architectural smells can be seen as their counterpart at the architecture level. If identified in a system, they are usually considered more critical than code smells because of their effect on maintainability. In this paper, we introduce a tool for the detection of architectural smells that could have an impact on the stability of a system. The detection techniques are based on the analysis of dependency graphs extracted from compiled Java projects and stored in a graph database. The results combine the information gathered from dependency and instability metrics to identify flaws hidden in the software architecture. We also propose some filters to avoid possible false positives.
['Francesca Arcelli Fontana', 'Ilaria Pigazzini', 'Riccardo Roveda', 'Marco Zanoni']
Automatic Detection of Instability Architectural Smells
978,612
Since the beginning of the Linked Open Data initiative, the number of published open datasets has gradually increased, but the datasets often do not contain a description of their content, such as the dataset domain (e.g., medicine, cancer). When this information is available, it is usually coarse-grained: e.g., organic-edunet contains the metadata about a collection of learning objects exposed through the Organic.Edunet portal, but it is classified simply as Life science. In this work we propose approaches that provide a detailed description of existing datasets, as well as linking assistance when publishing new datasets, by generating detailed descriptions of the publisher's dataset.
['Andrejs Abele']
Linked Data Profiling: Identifying the Domain of Datasets Based on Data Content and Metadata
827,291
Creating, cataloguing and distributing high-quality learning contents requires a significant investment of resources and time. Nowadays, information technology applied as a tool in education has drastically reduced this effort: it makes it possible to share, use and exchange digital documents at no cost, under conditions specified by the author. Furthermore, the use of standards can improve development and reutilization. Learning objects are the new paradigm developed for spreading knowledge and increasing interoperability through metadata.
['Miguel Latorre', 'F. García-Sevilla', 'Eugenio Lopez', 'Sergio Martin', 'Rosario Gil', 'Martin Llamas', 'Manuel Caeiro', 'Manuel Castro', 'Juan Peire']
A Good Practice Example on Learning Object Reutilization
97,198
ABSTRACT. Document digitization using smartphones introduces a significant number of degradations that must be corrected or detected on the mobile device, before the data is sent over a paid network or the document becomes unavailable. In this article, we propose a system that corrects perspective and illumination problems and then estimates the sharpness of the image for OCR processing. The corrective step relies on contour detection followed by illumination normalization. Its evaluation on a private dataset shows a clear improvement in OCR results. The quality-control step relies on a combination of focus measures. Its evaluation on a public dataset shows that this simple approach performs comparably to the best methods based on heavy processing, and outperforms metric-based methods.
['Marçal Rusiñol', 'Joseph Chazalon', 'Jean-Marc Ogier']
Normalisation et validation d'images de documents capturées en mobilité
805,252
Current schedulers for acyclic regions schedule operations in dependence order and never undo a scheduling decision. In contrast, backtracking schedulers may unschedule operations and can often generate better schedules. In this paper, we describe a conventional cycle scheduler, followed by two novel backtracking schedulers, OperBT and ListBT. The full-backtracking OperBT scheduler enables backtracking for all operations and unschedules operations to make space for the current operation. The OperBT scheduler increases the percentage of superblocks scheduled optimally over a conventional non-backtracking scheduler from an average of 66.9% to 81.4%, an increase of 21.7%. The selective backtracking ListBT scheduler enables backtracking only when scheduling operations for which backtracking is likely to be advantageous. This hybrid scheduler is almost as good as the OperBT backtracking scheduler in terms of generated code quality, but backtracks about four times less often.
['Santosh G. Abraham', 'Waleed Meleis', 'Ivan D. Baev']
Efficient Backtracking Instruction Schedulers
94,060
We derive a monotonicity property for general, transient flows of a commodity transferred throughout a network, where the flow is characterized by density and mass flux dynamics on the edges with density continuity and mass balance conditions at the nodes. The dynamics on each edge are represented by a general system of partial differential equations that approximates subsonic compressible fluid flow with energy dissipation. The transferred commodity may be injected or withdrawn at any of the nodes, and is propelled throughout the network by nodally located compressors. These compressors are controllable actuators that provide a means to manipulate flows through the network, which we therefore consider as a control system. A canonical problem requires compressor control protocols to be chosen such that time-varying nodal commodity withdrawal profiles are delivered and the density remains within strict limits while an economic or operational cost objective is optimized. In this manuscript, we consider the situation where each nodal commodity withdrawal profile is uncertain, but is bounded within known maximum and minimum time-dependent limits. We introduce the monotone parameterized control system property, and prove that general dynamic dissipative network flows possess this characteristic under certain conditions. This property facilitates very efficient formulation of optimal control problems for such systems in which the solutions must be robust with respect to commodity withdrawal uncertainty. We discuss several applications in which such control problems arise and where monotonicity enables simplified characterization of system behavior.
['Anatoly Zlotnik', 'Sidhant Misra', 'Marc Vuffray', 'Michael Chertkov']
Monotonicity of actuated flows on dissipative transport networks
547,781
The H.264/AVC video compression standard provides high coding efficiency, but requires a considerable amount of complexity and power consumption. This paper presents advanced low-power algorithms for an H.264/AVC encoder and a power-aware design composed of these low-power algorithms. Power reduction algorithms with frame memory compression and early skip mode decision are presented, and the search range for motion estimation is reduced for further power reduction. The proposed power-aware design controls the power consumption depending on the remaining energy by controlling the operating conditions of the proposed low-power algorithms. In order to estimate the power reduction achieved by the proposed algorithms, the power consumed by external memory as well as by the bus between the H.264 encoder and an external DRAM is considered. Simulation results show that up to 49.9% of the power consumed by the bus and external memory is saved and that power savings ranging from 0% to 41.56% are achieved with a reasonably small degradation of R-D performance.
['H. Kim', 'Chae Eun Rhee', 'Jinsung Kim', 'Sunwoong Kim', 'Hyuk-Jae Lee']
Power-aware design with various low-power algorithms for an H.264/AVC encoder
248,623
In virtual human (VH) applications, and in particular games, motions with different functions are to be synthesized, such as communicative and manipulative hand gestures, locomotion, and the expression of emotions or the identity of the character. In bodily behavior, the primary motions define the function, while the more subtle secondary motions contribute to realism and variability. From a technological point of view, different methods are at our disposal for motion synthesis: motion capture and retargeting, procedural kinematic animation, force-driven dynamical simulation, or the application of Perlin noise. Which method should be used for generating primary and secondary motions, and how can the information needed to define them be gathered? In this paper we elaborate on informed usage, in its two meanings. First we discuss, based on our own ongoing work, how motion capture data can be used to identify joints involved in primary and secondary motions, and to provide a basis for the specification of essential parameters for the motion synthesis methods used to synthesize primary and secondary motion. Then we explore the possibility of using different methods for primary and secondary motion in parallel in such a way that one method informs the other. We introduce our mixed usage of kinematic and dynamic control of different body parts to animate a character in real time. Finally we discuss the motion Turing test as a methodology for the evaluation of mixed motion paradigms.
['Herwin van Welbergen', 'Zsófia Ruttkay', 'Balázs Varga']
Informed Use of Motion Synthesis Methods
281,434
Medical images can provide valuable information for preclinical research, clinical diagnosis, and treatment. With the widespread use of digital medical imaging, many researchers are currently developing medical image processing algorithms and systems in order to deliver better results to the clinical community, including accurate clinical parameters or processed images derived from the original images. In this paper, we propose a web-based platform to present and process medical images. By using Internet and novel database technologies, authorized users can easily access medical images and facilitate their processing workflows with powerful server-side computing, without any installation. We implement a series of image processing and visualization algorithms in the initial version of RayPlus. Integration of our system allows much flexibility and convenience for both research and clinical communities.
['Rong Yuan', 'Ming Luo', 'Zhi Sun', 'Shuyue Shi', 'Peng Xiao', 'Qingguo Xie']
RayPlus: a Web-Based Platform for Medical Image Processing.
952,809
A zap is a 2-round, public coin witness-indistinguishable protocol in which the first round, consisting of a message from the verifier to the prover, can be fixed “once and for all” and applied to any instance. We present a zap for every language in NP, based on the existence of noninteractive zero-knowledge proofs in the shared random string model. The zap is in the standard model and hence requires no common guaranteed random string. We present several applications for zaps, including 3-round concurrent zero-knowledge and 2-round concurrent deniable authentication, in the timing model of Dwork, Naor, and Sahai [J. ACM, 51 (2004), pp. 851-898], using moderately hard functions. We also characterize the existence of zaps in terms of a primitive called verifiable pseudorandom bit generators.
['Cynthia Dwork', 'Moni Naor']
Zaps and Their Applications
150,421
Any computation of Boolean matrix product by an acyclic network using only the operations of binary conjunction and disjunction requires at least IJK conjunctions and IJ(K-1) disjunctions for the product of matrices of sizes I x K and K x J. Furthermore any two such networks having these minimum numbers of operations are equivalent using only the commutativity of both operations and the associativity of disjunction.
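The naive monotone network meets these counts exactly: each of the IJ output entries is a disjunction of K conjunctions, i.e. K conjunctions and K-1 disjunctions per entry. A small Python tally (my illustration):

```python
# Naive monotone Boolean matrix product: each output entry OR-combines
# K AND-terms, so the network uses exactly I*J*K conjunctions and
# I*J*(K-1) disjunctions -- matching the lower bound in the abstract.
def boolean_product(A, B):
    I, K, J = len(A), len(B), len(B[0])
    ands = ors = 0
    C = [[False] * J for _ in range(I)]
    for i in range(I):
        for j in range(J):
            acc = A[i][0] and B[0][j]
            ands += 1
            for k in range(1, K):
                acc = acc or (A[i][k] and B[k][j])
                ands += 1
                ors += 1
            C[i][j] = acc
    return C, ands, ors

A = [[True, False], [False, True]]
B = [[True, True], [False, True]]
C, ands, ors = boolean_product(A, B)
print(C, ands, ors)  # 2*2*2 = 8 conjunctions, 2*2*(2-1) = 4 disjunctions
```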
['Mike Paterson']
Complexity of Monotone Networks for Boolean Matrix Product
972,403
This paper discusses new approaches to interaction design for communication of art in the physical museum space. In contrast to the widespread utilization of interactive technologies in cultural heritage and natural science museums it is generally a challenge to introduce technology in art museums without disturbing the domain of the art works. To explore the possibilities of communicating art through the use of technology, and to minimize disturbance of the artworks, we apply four main approaches in the communication: 1) gentle audio augmentation of art works; 2) conceptual affinity of art works and remote interactive installations; 3) using the body as an interaction device; 4) consistent audio-visual cues for interaction opportunities. The paper describes the application of these approaches for communication of inspirational material for a Mariko Mori exhibition. The installations are described and argued for. Experiences with the interactive communication are discussed based on qualitative and quantitative evaluations of visitor reactions. It is concluded that the installations are received well by the visitors, who perceived exhibition and communication as a holistic user experience with a seamless interactive communication.
['Karen Johanne Kortbek', 'Kaj Grønbæk']
Communicating art through interactive technology: new approaches for interaction design in art museums
280,830
['Yasutaka Maeda', 'Shun-ichiro Ohmi', 'Tetsuya Goto', 'Tadahiro Ohmi']
High Quality Pentacene Film Formation on N-Doped LaB 6 Donor Layer
721,564
['Kerstin Schäfer', 'Oliver Günther']
Evaluating the Effectiveness of Relational Marketing Strategies in Corporate Website Performance.
777,640
Several factors inherent to handwriting, such as the size of the writing, increase the complexity of automatic recognition of handwritten documents. In this work we focus on taking such factors into account in the modeling, in order to improve the performance of automatic systems. The experiments were conducted on Arabic handwritten texts extracted from one of the largest labeled databases of Arabic handwritten documents, the NIST-OpenHaRT database, which includes large variability in text size within and across words and lines. We propose several approaches to deal with these variations in both the training and recognition phases. The first experiments show that recognition is strongly affected by writing size. To take this parameter into account, we propose to classify the data into three classes according to size. In the recognition phase, we rescale each test sample to several predefined sizes and then combine the recognition scores associated with each size. This approach has yielded notable performance gains for two recognition systems, HMM and BLSTM. In addition, we integrated artificially rescaled data to adapt the HMM models to different scales. We also obtained performance gains through two different methods (ROVER, lattices) for combining the results of the adapted models. We report recognition results that demonstrate the benefits of exploiting writing size.
['Edgard Chammas', 'Chafic Mokbel', 'Laurence Likforman-Sulem']
Exploitation de l'échelle d'écriture pour améliorer la reconnaissance automatique des textes manuscrits arabe.
979,467
Public institutions worldwide are moving a number of services targeted to citizens from traditional data infrastructures to cloud based technologies, with the aim of improving the quality offered, and reducing costs. This phenomenon is particularly relevant in the field of health-related public services. Among the technological devices and media enabling users' access to such services, the emerging smart TV platform represents a promising and valid means of interaction. In many countries, where population ageing is becoming the leading welfare concern, information and communication technologies are expected to play a basic role in alleviating the pressure on public healthcare services. In this scenario, the paper discusses the feasibility of an e-health service delivered through the smart TV, by the design and implementation of a prototype application that enables the remote consultation of personal medical reports. Users can access their personal health records, and visualise the outcomes of medical exam...
['Laura Raffaeli', 'Susanna Spinsante', 'Ennio Gambi']
Feasibility of e-health services through the smart TV: a prototype demonstrator
897,315
Abstract Popular video games often provide people with the option to play characters that are good or evil in nature, and yet, little is known about how individual differences in personality relate to the moral and ethical alignments people chose in their digital representations. We examined whether participants' pre-existing levels of moral disengagement and Big 5 scores predicted the alignments they selected for their avatar in video game play. Results revealed that men, relative to women, were more likely to play “bad guys” and that moral disengagement predicted this finding. Agreeableness and conscientiousness mediated the relationship between moral disengagement and alignment such that those higher in these two traits were more likely to play good characters.
['Patrick J. Ewell', 'Rosanna E. Guadagno', 'Matthew Jones', 'Robert Andrew Dunn']
Good Person or Bad Character? Personality Predictors of Morality and Ethics in Avatar Selection for Video Game Play
843,791
Human interaction recognition has been widely studied because it has great scientific importance and many potential practical applications. However, recognizing human interactions is also very challenging, especially in realistic environments where the background is dynamic or has varying lighting conditions. Most existing methods rely on either spatio-temporal local features (e.g., SIFT), human poses, or human joints to model human interactions. As a result, they are not fully unsupervised processes because they require either hand-designed features or human detection results. Motivated by the recent success of deep learning networks, we investigate a three-layer convolutional network which uses the Independent Subspace Analysis (ISA) algorithm to learn hierarchical invariant features from videos. Using the invariant features learned by ISA, we build a bag-of-features (BOF) model to recognize human interactions. We also evaluate the performance of our approach and the effectiveness of hierarchical invariant features on video sequences of the UT-Interaction dataset, which contain both interacting persons and irrelevant pedestrians in the scenes. The dataset imposes several challenging factors including moving backgrounds, cluttered scenes, scale changes and camera jitter. Experimental results show that our three-layer convolutional ISA network is able to learn features which are effective for representing complex activities such as human interactions in realistic environments.
['Ngoc Nguyen', 'Atsuo Yoshitaka']
Human Interaction Recognition Using Hierarchical Invariant Features
638,310
OO software engineering (OOSE) has been a popular methodology for years; however, some issues remain unsolved: a generic mechanism for checking the consistency of designs is still lacking; software suffers from problems resulting from process issues; and imperative engineers face a huge gap in adopting OOSE. These three issues intersect in the early analysis phase. We therefore present a new methodology that provides a complete global view of an OO software system, solves the issues we identified, and uses the requirements document and some analysis documents as its foundation. Rules and a case study are also presented to exemplify the result of applying our methodology to both OOSE and imperative software engineering.
['Yiausyu Earl Tsai', 'Hewijin Christine Jiau', 'Kuo-Feng Ssu']
Scenario architecture - a methodology to build a global view of OO software system
358,855
['Romain Robbes', 'Rocco Oliveto', 'Massimiliano Di Penta']
Guest editorial: special section on software reverse engineering
721,693
Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring of Beijing Subway Line 1. The experimental results revealed the rules for parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.
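A minimal sketch of the dark-region segmentation step, assuming OpenCV (my approximation, not the authors' implementation): a black-hat morphological transform highlights dark structures, Otsu thresholding binarizes them, and small components are filtered out; the file name is hypothetical.

```python
# Rough sketch of dark-region segmentation for crack candidates: a black-hat
# morphological transform highlights dark structures on a bright surface,
# a threshold binarizes them, and tiny components are discarded as noise.
import cv2
import numpy as np

def segment_crack_candidates(gray, min_area=50):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, binary = cv2.threshold(blackhat, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(binary)
    for c in contours:
        if cv2.contourArea(c) >= min_area:  # drop small specks
            cv2.drawContours(mask, [c], -1, 255, thickness=-1)
    return mask

gray = cv2.imread("tunnel_surface.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if gray is not None:
    cv2.imwrite("crack_candidates.png", segment_crack_candidates(gray))
```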
['Wenyu Zhang', 'Zhenjiang Zhang', 'Dapeng Qi', 'Yun Liu']
Automatic Crack Detection and Classification Method for Subway Tunnel Safety Monitoring
192,825
In the context of generating efficient, contention free schedules for inter-node communication through a switch fabric in cluster computing or data center type environments, all-to-all scheduling with equal sized data transfer requests has been studied in the literature [1, 3, 4]. In this paper, we propose a communication scheduling module (CSM) towards generating contention free communication schedules for many-to-many communication with arbitrary sized data. Towards this end, we propose three approximation algorithms - PST, LDT and SDT. From time to time, the CSM first generates a bipartite graph from the set of received requests, then determines which of these three algorithms gives the best approximation factor on this graph and finally executes that algorithm to generate a contention free schedule. Algorithm PST has a worst case run time of O(max(Δ|E|, |E| log|E|)) and guarantees an approximation factor of 2H_{2Δ-1}, where |E| is the number of edges in the bipartite graph, Δ is the maximum node degree of the bipartite graph and H_{2Δ-1} is the (2Δ-1)-th harmonic number. LDT runs in O(|E|²) and has an approximation factor of 2(1 + τ), where τ is a constant defined as a guard band or pause time to eliminate the possibility of contention (in an apparently contention free schedule) caused by system jitter and synchronization inaccuracies between the nodes. SDT gives an approximation factor of 4 log(w_max) and has a worst case run time of O(Δ|E| log(w_max)), where w_max represents the longest communication time in a set of received requests.
['Satyajit Banerjee', 'Atish Datta Chowdhury', 'Koushik Sinha', 'Subhas Kumar Ghosh']
Contention-free many-to-many communication scheduling for high performance clusters
209,932
['A Finkelstein']
Requirements Engineering Research: Coordination and Infrastructure (Review Article).
926,339
The paper considers a sensor network whose sensors observe a common quantity and are affected by arbitrary additive bounded noises with a known upper bound. During the experiment, any sensor can communicate only a finite and given number of bits of information to the decision center. The contributions of the particular sensors, the rules of data encoding, decoding, and fusion, as well as the estimation scheme should be designed to achieve the best overall performance in estimation of the observed quantity by the decision center. The optimal algorithm is obtained that minimizes the maximal feasible error. It is shown that it considerably outperforms a 'natural' algorithm proposed in recent papers in the area and examined only in the idealized case of noiseless sensors. This analysis highlights the need for special decentralized data encoding rules that are robust against sensor noises in the context of networked cooperative observation. Such a rule is the core of the proposed optimal algorithm.
['Gleb Zherlitsyn', 'Alexey S. Matveev']
Optimal data encoding and fusion in sensor networks
265,504
In this correspondence we report results from our experiments to find useful measures of local image autocovariance parameters from small sub-blocks of data. Our criterion for the reliability of parameter estimates is that they should correlate with observed signal activity and yield high quality results when used in adaptive processing. We describe a method for estimating the correlation parameters of first-order Markov (nonseparable exponential) autocovariance models. The method assumes that image data are stationary within N × N pixel sub-blocks. Values of the autocovariance parameters may be calculated at every pixel location. A value of N = 16 yields results which fit our criterion, even when the original data are degraded by blur and noise. An application to data compression is suggested.
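A simplified sketch of this kind of block-wise estimation (my illustration; fitting the full nonseparable exponential model is more involved): the lag-1 sample autocorrelations along rows and columns of each 16 × 16 block serve as crude estimates of the two correlation parameters.

```python
# Simplified block-wise autocovariance parameter estimation (illustration):
# within each N x N block, the lag-1 sample autocorrelations along rows and
# columns act as estimates of the exponential-model correlation parameters.
import numpy as np

def block_correlation_params(image, N=16):
    H, W = image.shape
    params = {}
    for r in range(0, H - N + 1, N):
        for c in range(0, W - N + 1, N):
            b = image[r:r + N, c:c + N].astype(float)
            b = b - b.mean()
            var = (b * b).mean() + 1e-12   # guard against flat blocks
            rho_x = (b[:, :-1] * b[:, 1:]).mean() / var  # lag-1, horizontal
            rho_y = (b[:-1, :] * b[1:, :]).mean() / var  # lag-1, vertical
            params[(r, c)] = (rho_x, rho_y)
    return params

img = np.random.rand(64, 64)  # placeholder for real image data
print(list(block_correlation_params(img).items())[:2])
```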
['Robin N. Strickland']
Estimation of local statistics for digital processing of nonstationary images
235,515
Fixed channelization configuration in today's wireless devices is inefficient in the presence of growing data traffic and heterogeneous devices. In this regard, a number of fairly recent studies have provided spectrum adaptation capabilities for current wireless devices; however, they are limited to inband adaptation or incur substantial coordination overhead. The target of this paper is to fill the gaps in spectrum adaptation by overcoming these limitations. We propose Seer, a frame-level wideband spectrum adaptation solution which consists of two major components: i) a specially-constructed preamble that can be detected by receivers with arbitrary RF bands, and ii) a spectrum detection algorithm that identifies the desired transmission band in the context of multiple asynchronous senders by exploiting the preamble's temporal and spectral properties. Seer can be realized on commodity radios, and can be easily integrated into devices running different PHY/MAC protocols. We have prototyped Seer on the GNURadio/USRP platform to demonstrate its feasibility. Furthermore, using 1.6GHz channel measurements and trace-driven simulations, we have evaluated the merits of Seer over state-of-the-art approaches.
['Wei Wang', 'Yinejie Chen', 'Zeyu Wang', 'Jin Zhang', 'Kaishun Wu', 'Qian Zhang']
Wideband Spectrum Adaptation Without Coordination
699,600
['Se-yong Ro', 'Lin-bo Luo', 'Jong-Wha Chong']
An Improved Look-Up Table-Based FPGA Implementation of Image Warping for CMOS Image Sensors
102,203
Some marine seismic data sets exceed 10 Tbytes, and there are seismic surveys planned with a volume of around 120 Tbytes. The need to compress these very large seismic data files is imperative. Nevertheless, seismic data are quite different from the typical images used in image processing and multimedia applications. Some of their major differences are: the dynamic range of the data can exceed 100 dB in theory; the data often have an extensively oscillatory nature; the x and y directions carry different physical meanings; and a significant amount of coherent noise is often present in seismic data. Up to now, some of the algorithms used for seismic data compression were based on some form of wavelet or local cosine transform with a uniform or quasi-uniform quantization scheme, followed by Huffman coding. Using this family of compression algorithms, we achieve compression results that are acceptable to geophysicists only at low to moderate compression ratios. For higher compression ratios or higher decibel quality, significant compression artifacts are introduced in the reconstructed images, even with high-dimensional transforms. The objective of this paper is to achieve a higher compression ratio than the wavelet/uniform quantization/Huffman coding family of compression schemes, with a comparable level of residual noise. The goal is to achieve above 40 dB in the decompressed seismic data sets. Several established compression algorithms are reviewed, and some new compression algorithms are introduced. All of these compression techniques are applied to a good representation of seismic data sets, and their results are documented in this paper. One of the conclusions is that the adaptive multiscale local cosine transform with different window sizes performs well on all the seismic data sets and outperforms the other methods from the SNR point of view. The described methods cover a wide range of data sets; each data set has its own best-performing method chosen from this collection. The experiments were performed on four different seismic data sets. Special emphasis was given to achieving faster processing speed, which is another critical issue examined in the paper. Some of these algorithms are also suitable for multimedia-type compression.
['Amir Averbuch', 'François G. Meyer', 'Jan-Olov Strömberg', 'Ronald R. Coifman', 'Anthony Vassiliou']
Low bit-rate efficient compression for seismic data
510,264
We investigate the class of numerical semigroups verifying the property ρ_{i+1} ≥ ρ_i + 2 for every two consecutive elements smaller than the conductor. These semigroups generalize Arf semigroups.
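Assuming the reconstructed condition above (consecutive elements below the conductor differ by at least 2), a small sketch that enumerates a numerical semigroup from its generators and tests the property; the conductor bound used is a crude overestimate.

```python
# Sketch (under the reconstructed condition): enumerate the elements of a
# numerical semigroup from its generators up to a bound, then check that
# consecutive elements below the conductor differ by at least 2.
from math import gcd
from functools import reduce

def semigroup_elements(gens, limit):
    elems = {0}
    for n in range(1, limit + 1):
        if any(n - g in elems for g in gens if n >= g):
            elems.add(n)
    return sorted(elems)

def is_sparse(gens):
    assert reduce(gcd, gens) == 1, "generators must be coprime overall"
    limit = max(gens) ** 2  # crude bound that exceeds the conductor
    elems = semigroup_elements(gens, limit)
    elem_set = set(elems)
    # conductor: smallest c such that c, c+1, ... are all in the semigroup
    conductor = next(c for c in elems
                     if set(range(c, limit + 1)) <= elem_set)
    below = [e for e in elems if e < conductor]
    return all(b - a >= 2 for a, b in zip(below, below[1:]))

print(is_sparse([2, 3]))  # {0, 2, 3, 4, ...}: only 0 lies below the conductor -> True
print(is_sparse([3, 4]))  # {0, 3, 4, 6, 7, ...}: 3 and 4 differ by 1 -> False
```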
['Carlos Munuera', 'Fernando Torres', 'Juan Elmer Villanueva']
Sparse Numerical Semigroups
178,820
['Thiago Christiano Silva', 'Liang Zhao']
Machine Learning in Complex Networks
748,772
Frequency-division multiple access (FDMA) with single channel per carrier (SCPC) in the uplink and time-division multiplexing (TDM) in the downlink are employed in the system described. To interface FDMA in the uplink with TDM in the downlink, multicarrier demodulation (MCD) is required onboard the satellite. The operation of the onboard MCD is the separation of each individual channel and its subsequent demodulation. The results, which concern onboard frequency demultiplexing and demodulation for low-bit-rate carriers, are constrained by the requirement that new-generation payloads should be able to serve the stations already active in the INTELSAT Business System. A digital hardware design that implements an MCD that can process three channels at 4.4 Mb/s or 12 channels at 1.1 Mb/s is described. The test results confirm the MCD's feasibility, and further improvements are expected from a semicustom implementation.
['Fulvio Ananasso', 'G. Chiassarini', 'E. Del Re', 'Romano Fantacci', 'D. Rousset', 'Enrico Saggese']
A multirate digital multicarrier demodulator: design, implementation, and performance evaluation
132,345
For link prediction, Common Neighbours (CN) ranking measures make it possible to discover quality links between nodes in a social network by assessing the likelihood of a new link based on the neighbour frontier of the already existing nodes. A zero rank value is often given to a large number of node pairs that have no common neighbours but can nevertheless be good candidates for a quality assessment. With the aim of improving the quality of the ranking for link prediction, in this work we propose a general technique to evaluate the likelihood of a linkage by iteratively applying a given ranking measure to the Quasi-Common Neighbours (QCN) of the node pair, i.e. iteratively considering paths between nodes which include more than one traversing step. Experiments held on a number of datasets already accepted in the literature show that QCNAA, our QCN measure derived from the well-known Adamic-Adar (AA) measure, effectively improves the quality of link prediction methods while keeping the prediction capability of the original AA measure. This approach, being general and usable with any CN measure, has many different applications, e.g. trust management, terrorism prevention, and disambiguation in co-authorship networks.
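For reference, the baseline Adamic-Adar score that QCNAA builds on sums inverse log-degrees over the pair's common neighbours; a minimal NetworkX sketch follows (the quasi-common-neighbour iteration itself is the paper's contribution and is not reproduced here).

```python
# Baseline Adamic-Adar link-prediction score: for a candidate pair (u, v),
# sum 1/log(deg(w)) over all common neighbours w. Pairs with no common
# neighbours get the zero score that QCNAA is designed to refine.
import math
import networkx as nx

def adamic_adar(G, u, v):
    common = set(G[u]) & set(G[v])  # every common neighbour has degree >= 2
    return sum(1.0 / math.log(G.degree(w)) for w in common)

G = nx.karate_club_graph()
print(adamic_adar(G, 0, 33))
print(list(nx.adamic_adar_index(G, [(0, 33)])))  # built-in, same value
```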
['Andrea Chiancone', 'Valentina Franzoni', 'Yuanxi Li', 'Krassimir Markov', 'Alfredo Milani']
Leveraging Zero Tail in Neighbourhood for Link Prediction
648,567
['Daniel Campello', 'Carlos Crespo', 'Akshat Verma', 'Raju Rangaswami', 'Praveen Jayachandran']
Coriolis: Scalable VM Clustering in Clouds
593,676
Imbalanced data sets present a particular challenge to the data mining community. Often, it is the rare event that is of interest, and the cost of misclassifying the rare event is higher than that of misclassifying the usual event. When the data is highly skewed toward the usual, it can be very difficult for a learning system to accurately detect the rare event. There have been many approaches in recent years for handling imbalanced data sets, from under-sampling the majority class to adding synthetic points to the minority class in feature space. However, distances between time series are known to be non-Euclidean and non-metric, since comparing time series requires warping in time. This fact makes it impossible to apply standard methods like SMOTE to insert synthetic data points in feature spaces. We present an innovative approach that augments the minority class by adding synthetic points in distance spaces. We then use Support Vector Machines for classification. Our experimental results on standard time series show that our synthetic points significantly improve the classification rate of the rare events, and in most cases also improve the overall accuracy of SVMs. We also show how adding our synthetic points can aid in the visualization of time series data sets.
['Suzan Koknar-Tezel', 'Longin Jan Latecki']
Improving SVM classification on imbalanced time series data sets with ghost points
468,332
Abstract: A heterogeneous distributed sensor network (HDSN) is a type of distributed sensor network where sensors with different deployment groups and different functional types participate at the same time. In other words, the sensors are divided into different deployment groups according to different types of data transmissions, but they cooperate with each other within and outside their respective groups. However, in traditional heterogeneous sensor networks, the classification is based on transmission range, energy level, computation ability, and sensing range. Taking this model into account, we propose a secure group association authentication mechanism using a one-way accumulator which ensures that, before collaborating on a particular task, any pair of nodes in the same deployment group can verify the legitimacy of each other's group association. Secure addition and deletion of sensors are also supported in this approach. In addition, a policy-based sensor addition procedure is also suggested. For secure handling of the disconnected nodes of a group, we use an efficient pairwise key derivation scheme to resist any adversary's attempt. Along with proposing our mechanism, we also discuss the characteristics of HDSNs, their scope, applicability, future, and challenges. The efficiency of our security management approach is also demonstrated with performance evaluation and analysis.
['Al-Sakib Khan Pathan', 'Muhammad Mostafa Monowar', 'Jinfang Jiang', 'Lei Shu', 'Guangjie Han']
An efficient approach of secure group association management in densely deployed heterogeneous distributed sensor network
277,901
In this paper, we study the problem of pricing American contingent claims in an incomplete market where the stock price process is assumed to be driven by both a Wiener process and a Poisson random measure and the portfolios are constrained. We formulate this problem as finding the minimal solution of a backward stochastic differential equation (BSDE) with constraints. We use the penalization method to construct a sequence of BSDEs with respect to the Wiener process and Poisson random measure, and we show that the solutions of these equations converge to the minimal solution we are interested in. Finally, in the Markovian case, we characterize the minimal hedging price as the minimal viscosity supersolution of an integral-partial differential inequality with constraints.
['Rainer Buckdahn', 'Ying Hu']
Pricing of American Contingent Claims with Jump Stock Price and Constrained Portfolios
223,980
This paper analyzes the problem of distributed detection in a sensor network of binary sensors. In particular, statistical dependence between local decisions (at binary sensors) is assumed, and two complementary methods to save energy have been considered: censoring, to avoid some transmissions from sensors to fusion center, and a sleep and wake up random schedule at local sensors. The effect of possible failures in transmission has been also included, considering the probability of having a successful transmission from a sensor to the fusion center. In this scenario, the necessary statistical information has been identified, the optimal decision rule at the fusion center has been obtained, and some examples have been used to analyze the effect of statistical dependence in a simple network with two sensors. Highlights: Optimal statistical decentralized detection rule to fuse dependent binary local decisions. Identification of the required statistical information. Censoring and sleep and wake up scheduling are used to save energy.
['Marcelino Lázaro']
Decentralized detection for censored binary observations with statistical dependence
596,084
Recent work in data integration has shown the importance of statistical information about the coverage and overlap of sources for efficient query processing. Despite this recognition, there are no effective approaches for learning the needed statistics. The key challenge in learning such statistics is keeping the number of needed statistics low enough to have the storage and learning costs manageable. In this paper, we present a set of connected techniques that estimate the coverage and overlap statistics, while keeping the needed statistics tightly under control. Our approach uses a hierarchical classification of the queries and threshold-based variants of familiar data mining techniques to dynamically decide the level of resolution at which to learn the statistics. We describe the details of our method, and, present experimental results demonstrating the efficiency of the learning algorithms and the effectiveness of the learned statistics over both controlled data sources and in the context of BibFinder with autonomous online sources.
['Zaiqing Nie', 'Subbarao Kambhampati', 'Ullas Nambiar']
Effectively mining and using coverage and overlap statistics for data integration
94,151
Recently the amount of documents written in the Amharic language has been increasing dramatically. Searching such content using a localized and regional version of a general search engine such as google.com.et returns documents containing the search key terms while ignoring specific characteristics of the Amharic language. In this paper, we present the design and implementation of a semantic search engine for Amharic documents. The search engine has a Crawler, Ontology/Knowledge base, Indexer and Query Processor that consider the characteristics of the Amharic language. The ontology provides shared concepts for the sport domain. It is built manually by language and sport domain experts and is used in building the semantic indexer, ranker and query engine. In addition, we show how the system facilitates meaning-based searching and relevance- and popularity-based document ranking.
['Fekade Getahun', 'Genet Asefa']
Towards amharic semantic search engine
662,298
In haptic length perception biases occur that have previously been shown to depend on stimulus orientation and stimulus length. We propose that these biases arise from the muscular torque needed to counteract the gravitational forces acting on the arm. In a model study, we founded this hypothesis by showing that differences in muscular torque can indeed explain the pattern of biases obtained in several experimental studies.
['Nienke B. Debats', 'Idsart Kingma', 'Peter J. Beek', 'Jeroen B. J. Smeets']
Muscular torque can explain biases in haptic length perception: a model study on the radial-tangential illusion
23,097
The replica method, which originated in statistical mechanics, has been successfully applied to analyzing the performance of the global maximum-likelihood (GML) MIMO detector in the large-system limit. In this paper, the analysis is extended to local maximum-likelihood (LML) detectors. A bit error rate (BER) formula for LML detectors with a fixed neighborhood size is obtained by the replica method and, interestingly, by the method of Gaussian approximation as well. It is shown that the LML BER is always one of the solutions to the GML BER in any system configuration. Furthermore, the LML BER is the only solution of the GML BER in a broad range of system parameters of practical interest. In the high signal-to-noise ratio regime, both LML and GML detectors achieve the AWGN channel performance when the channel load is up to 1.51 bits/dimension with an equal-energy distribution, and the load can be higher with an unequal-energy distribution. Simulations verify this analytical result by showing that the sequential likelihood ascent search detector, a linear-complexity LML detector, can approach the BER of the NP-hard GML detector predicted by the analysis. This result might be practically useful in large MIMO systems.
['Yi Sun', 'Le Zheng', 'Pengcheng Zhu', 'Xiaodong Wang']
On Optimality of Local Maximum-Likelihood Detectors in Large-Scale MIMO Channels
698,048
The increased complexity, heterogeneity and dynamism of networked systems and services make current control and management tools ineffective in managing and securing such systems and services. A new paradigm to control and manage large-scale, complex and dynamic networked systems is critically needed. In this paper, we present a new paradigm based on the principles of autonomic computing that can efficiently handle complexity, dynamism and uncertainty in networked systems and their applications. We have developed a general Component Management Interface (CMI) to enable autonomic behaviors and operations of any legacy resource or software component. The CMI consists of four management ports: Configuration Port, Function Port, Control Port and Operation Port. We have implemented the CMI in XML format and validated the effectiveness and performance of our approach by developing automated, self-configuring security patch deployment services.
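To make the CMI structure concrete, here is a hypothetical Python sketch of a component wrapper exposing the four management ports named above; the port names follow the abstract, but every method name and signature is my invention.

```python
# Hypothetical sketch of the Component Management Interface (CMI): the four
# ports (Configuration, Function, Control, Operation) follow the abstract;
# all method names and signatures here are illustrative assumptions.
from abc import ABC, abstractmethod

class ManagedComponent(ABC):
    # Configuration Port: how the resource is set up
    @abstractmethod
    def configure(self, settings: dict) -> None: ...

    # Function Port: the service the resource actually provides
    @abstractmethod
    def invoke(self, request: dict) -> dict: ...

    # Control Port: runtime sensing/actuation for autonomic behavior
    @abstractmethod
    def control(self, action: str) -> None: ...

    # Operation Port: lifecycle operations (start, stop, update)
    @abstractmethod
    def operate(self, command: str) -> None: ...

class PatchService(ManagedComponent):
    """Toy self-configuring patch-deployment component."""
    def configure(self, settings): self.settings = settings
    def invoke(self, request): return {"patched": request.get("host")}
    def control(self, action): print("control:", action)
    def operate(self, command): print("operate:", command)
```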
['Huoping Chen', 'Salim Hariri', 'Fahd Rasul']
An Innovative Self-Configuration Approach for Networked Systems and Applications
363,405
Consider the minimum mean-square error (MMSE) of estimating an arbitrary random variable from its observation contaminated by Gaussian noise. The MMSE can be regarded as a function of the signal-to-noise ratio (SNR) as well as a functional of the input distribution (of the random variable to be estimated). It is shown that the MMSE is concave in the input distribution at any given SNR. For a given input distribution, the MMSE is found to be infinitely differentiable at all positive SNR, and in fact a real analytic function in SNR under mild conditions. The key to these regularity results is that the posterior distribution conditioned on the observation through Gaussian channels always decays at least as quickly as some Gaussian density. Furthermore, simple expressions for the first three derivatives of the MMSE with respect to the SNR are obtained. It is also shown that, as functions of the SNR, the curves for the MMSE of a Gaussian input and that of a non-Gaussian input cross at most once over all SNRs. These properties lead to simple proofs of the facts that Gaussian inputs achieve both the secrecy capacity of scalar Gaussian wiretap channels and the capacity of scalar Gaussian broadcast channels, as well as a simple proof of the entropy power inequality in the special case where one of the variables is Gaussian.
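For orientation, the MMSE functional in question and its well-known closed form for a standard Gaussian input are (standard facts, not reproduced from the paper):

```latex
% MMSE of estimating X from an observation through a Gaussian channel:
%   Y = \sqrt{snr}\, X + N, with N ~ N(0,1) independent of X.
\[
  \mathrm{mmse}(X,\mathit{snr})
    = \mathbb{E}\!\left[\bigl(X - \mathbb{E}[X \mid \sqrt{\mathit{snr}}\,X + N]\bigr)^{2}\right]
\]
% For a standard Gaussian input X ~ N(0,1) this evaluates to
\[
  \mathrm{mmse}(X,\mathit{snr}) = \frac{1}{1+\mathit{snr}}
\]
```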
['Dongning Guo', 'Yihong Wu', 'Shlomo Shamai', 'Sergio Verdu']
Estimation in Gaussian Noise: Properties of the Minimum Mean-Square Error
398,665
Location-dependent spatial queries in the wireless environment are queries in which mobile users ask for spatial objects depending on their current location. The window query is one of the essential spatial queries: it finds the spatial objects located within a given window. In this paper, we propose a Hilbert curve-based distributed index for window queries in wireless data broadcast systems. Our proposed algorithm allocates spatial objects in the Hilbert-curve order to preserve spatial locality. Moreover, to quickly answer window queries, our proposed algorithm utilizes the neighbor-link index, which has knowledge about neighbor objects, to return the answer objects. Our experimental study shows that our proposed algorithm outperforms the distributed spatial index.
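A minimal sketch of the Hilbert-curve ordering such an index relies on (the standard distance mapping, not the paper's neighbor-link structure): map each object's (x, y) grid cell to its distance d along the curve and order objects by d; the object list is hypothetical.

```python
# Standard Hilbert-curve mapping (x, y) -> distance d along the curve on an
# n x n grid (n a power of two); sorting objects by d preserves spatial
# locality, which is what the proposed allocation exploits.
def _rot(s, x, y, rx, ry):
    if ry == 0:
        if rx == 1:
            x, y = s - 1 - x, s - 1 - y
        x, y = y, x  # reflect/rotate the quadrant
    return x, y

def hilbert_d(n, x, y):
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = _rot(s, x, y, rx, ry)
        s //= 2
    return d

objects = [("hotel", 3, 5), ("cafe", 0, 1), ("museum", 6, 2)]  # hypothetical
broadcast_order = sorted(objects, key=lambda o: hilbert_d(8, o[1], o[2]))
print(broadcast_order)
```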
['Jun-Hong Shen', 'Ye-In Chang']
A Hilbert Curve-Based Distributed Index for Window Queries in Wireless Data Broadcast Systems
405,512
An ever increasing number of dynamic interactive applications are implemented on portable consumer electronics. Designers depend largely on operating systems to map these applications on the architecture. However, today's embedded operating systems abstract away the precise architectural details of the platform. As a consequence, they cannot exploit the energy efficiency of scratchpad memories. We present in this paper a novel integrated hardware/software solution to support scratchpad memories at a high abstraction level. We exploit hardware support to alleviate the transfer cost from/to the scratchpad memory and at the same time provide a high-level programming interface for run-time scratchpad management. We demonstrate the effectiveness of our approach with a case-study.
['Poletti Francesco', 'Paul Marchal', 'David Atienza', 'Luca Benini', 'Francky Catthoor', 'José M. Mendías']
An integrated hardware/software approach for run-time scratchpad management
110,021
ABSTRACT: Despite more than 25 years of research on the processes and outcomes of information systems development in organizations, deficiencies exist in our knowledge about the effective management of complex systems development processes. Although individual studies have generated a wealth of findings, there is a need for a cumulative framework that facilitates interpretation of what has been learned and what needs to be learned about the process of information systems development. This paper reviews prior research on ISD processes and identifies the different types of contributions that have been made to our growing knowledge. More important, it generates a cumulative framework for understanding the process of ISD that could provide a valuable template for future research and practice.
['V. Sambamurthy', 'Laurie J. Kirsch']
An Integrative Framework of the Information Systems Development Process
154,502
We describe a novel approach for 3-D ear biometrics using video. A series of frames is extracted from a video clip and the region of interest in each frame is independently reconstructed in 3-D using shape from shading. The resulting 3-D models are then registered using the iterative closest point algorithm. We iteratively consider each model in the series as a reference model and calculate the similarity between the reference model and every model in the series using a similarity cost function. Cross validation is performed to assess the relative fidelity of each 3-D model. The model that demonstrates the greatest overall similarity is determined to be the most stable 3-D model and is subsequently enrolled in the database. Experiments are conducted using a gallery set of 402 video clips and a probe of 60 video clips. The results (95.0% rank-1 recognition rate and 3.3% equal error rate) indicate that the proposed approach can produce recognition rates comparable to systems that use 3-D range data. To the best of our knowledge, we are the first to develop a 3-D ear biometric system that obtains a 3-D ear structure from a video sequence.
['Steven Cadavid', 'Mohamed Abdel-Mottaleb']
3-D Ear Modeling and Recognition From Video Sequences Using Shape From Shading
120,407
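The registration stage mentioned above uses the standard iterative closest point algorithm; the following is a minimal point-to-point ICP sketch (illustrative only, not the authors' pipeline) with SciPy nearest-neighbour queries and SVD-based rigid alignment.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Rigidly align point cloud src (N, 3) to dst (M, 3)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, nn = tree.query(cur)            # closest-point correspondences
        matched = dst[nn]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:      # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step      # apply the incremental motion
        R, t = R_step @ R, R_step @ t + t_step
    return R, t, cur
```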
['Raoua Abdelkhalek', 'I. Boukhris', 'Zied Elouedi']
Evidential Item-Based Collaborative Filtering
902,767
We present a technique for fault tolerance in prefix-based adders, and show its application by implementing a Kogge-Stone adder. The technique is based on the fact that an n-bit Kogge-Stone adder can be split into two independent n-bit Han-Carlson (HC) adders by augmenting an additional computation stage to the adder. The presence of single faults only affects one of these HC adders, thus we use a multiplexer to select the correct output. Moreover, the adder can correct multiple faults (up to 50% faulty nodes) as long as all the faults are located on one adder. A 64-bit version of this adder is implemented, and both the area and power overheads (relative to a standard KS adder) are less than 20%. The delay overhead is 16% if faults are present, and only 2% if no faults are present.
['Patrick Ndai', 'Shih-Lien Lu', 'Dinesh Somesekhar', 'Kaushik Roy']
Fine-Grained Redundancy in Adders
341,001
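For readers unfamiliar with parallel-prefix addition, here is a behavioural bit-level model of a Kogge-Stone adder in Python. It is a sketch of the underlying carry network only; it does not model the fault-tolerance scheme or the Han-Carlson split described above.

```python
def kogge_stone_add(a, b, n=8):
    """Compute (a + b) mod 2**n with a Kogge-Stone parallel-prefix network."""
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(n)]   # generate bits
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(n)]   # propagate bits
    G, P = g[:], p[:]
    d = 1
    while d < n:                      # log2(n) prefix-combine stages
        G_new, P_new = G[:], P[:]
        for i in range(d, n):
            G_new[i] = G[i] | (P[i] & G[i - d])
            P_new[i] = P[i] & P[i - d]
        G, P = G_new, P_new
        d *= 2
    carry = [0] + G[:n - 1]           # carry into each bit position
    return sum((p[i] ^ carry[i]) << i for i in range(n))

assert kogge_stone_add(200, 55) == 255
```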
We study transport protocol performance from the perspective of real-time applications. More precisely, we evaluate the supportive roles of TCP and UDP in terms of real-time QoS, network stability and fairness. A new metric for the evaluation of real-time application performance is proposed to capture both bandwidth and delay requirements. Using this metric as a primary criterion in our evaluation analysis, we reach several conclusions on the specific impact of wireless links, real-time traffic friendliness, and UDP/TCP protocol efficiency. Beyond that, we also reach an unexpected result: UDP traffic occasionally has a negative impact, compared with TCP traffic, not only on system-wide behavior but also on the supported application itself.
['Panagiotis Papadimitriou', 'Vassilis Tsaoussidis']
On transport layer mechanisms for real-time QoS
324,558
We investigate the dividend optimization problem for a company whose surplus process is modeled by a regime-switching compound Poisson model with credit and debit interest. The surplus process is controlled by subtracting the cumulative dividends. The performance of a dividend distribution strategy which determines the timing and amount of dividend payments, is measured by the expectation of the total discounted dividends until ruin. The objective is to identify an optimal dividend strategy which attains the maximum performance. We show that a regime-switching band strategy is optimal.
['Jinxia Zhu']
Singular optimal dividend control for the regime-switching Cramér-Lundberg model with credit and debit interest
351,078
['Fabian Kneer', 'Erik Kamsties']
Model-based Generation of a Requirements Monitor.
786,101
After 15 years of extensive research, multimedia retrieval has finally reached its prime time, now that everything is becoming accessible on the web. However, web search both provides a new paradigm and poses a new challenge to the multimedia retrieval research community. It calls for a rethinking of the traditional content-based approaches, especially in how to make use of the massive yet noisy metadata associated with web pages and links. In this talk, we will first review some familiar approaches in content-based multimedia retrieval. Then, we will present a set of efforts in web multimedia search to illustrate interesting new thoughts that can potentially lead to new breakthroughs in multimedia search and stimulate new research ideas.
['Hong-Jiang Zhang']
Multimedia content analysis and search: new perspectives and approaches
93,522
Frequency-domain and subband implementations improve the computational efficiency and the convergence rate of adaptive schemes. The well-known multidelay adaptive filter (MDF) belongs to this class of block adaptive structures and is a DFT-based algorithm. We develop adaptive structures that are based on the trigonometric transforms, discrete cosine transform (DCT) and discrete sine transform (DST), and on the discrete Hartley transform (DHT). As a result, these structures involve only real arithmetic and are attractive alternatives in cases where the traditional DFT-based scheme exhibits poor performance. The filters are derived by first presenting a derivation for the classical DFT based filter that allows us to pursue these extensions immediately. The approach used in this paper also provides further insights into subband adaptive filtering.
['Ricardo Merched', 'Ali H. Sayed']
An embedding approach to frequency-domain and subband adaptive filtering
512,563
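To make the block frequency-domain idea concrete, here is a minimal single-block overlap-save frequency-domain adaptive filter. The MDF partitions the filter into several such blocks, and the DCT/DST/DHT structures developed in the paper replace the DFT used below; the step size and lengths are illustrative assumptions.

```python
import numpy as np

def fdaf(x, d, M, mu=0.5, eps=1e-8):
    """Single-block overlap-save frequency-domain adaptive filter.
    x: input signal, d: desired signal, M: filter length."""
    W = np.zeros(2 * M, dtype=complex)            # weights in the DFT domain
    prev = np.zeros(M)
    err = []
    for k in range(0, len(x) - M + 1, M):
        blk = np.concatenate([prev, x[k:k + M]])  # 2M-sample overlap-save block
        prev = x[k:k + M]
        X = np.fft.fft(blk)
        y = np.fft.ifft(X * W).real[M:]           # keep the valid half
        e = d[k:k + M] - y
        E = np.fft.fft(np.concatenate([np.zeros(M), e]))
        grad = np.fft.ifft(np.conj(X) * E / (np.abs(X) ** 2 + eps)).real
        grad[M:] = 0.0                            # gradient (linear-conv.) constraint
        W += mu * np.fft.fft(grad)
        err.append(e)
    return np.concatenate(err)

# Identify an unknown FIR system; the block error should shrink over time.
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = rng.standard_normal(32) * 0.3
d = np.convolve(x, h)[:len(x)]
e = fdaf(x, d, M=64)
print(np.mean(e[:64] ** 2), np.mean(e[-64:] ** 2))
```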
This paper presents a video summarization technique for Internet videos that provides a quick way to overview their content. This is a challenging problem because finding important or informative parts of the original video requires understanding its content. Furthermore, the content of Internet videos is very diverse, ranging from home videos to documentaries, which makes video summarization much tougher, as prior knowledge is almost never available. To tackle this problem, we propose to use deep video features that can encode various levels of content semantics, including objects, actions, and scenes, improving the efficiency of standard video summarization techniques. For this, we design a deep neural network that maps both videos and descriptions to a common semantic space and jointly train it with associated pairs of videos and descriptions. To generate a video summary, we extract the deep features from each segment of the original video and apply a clustering-based summarization technique to them. We evaluate our video summaries on the SumMe dataset against baseline approaches. The results demonstrate the advantage of incorporating our deep semantic features in a video summarization technique.
['Mayu Otani', 'Yuta Nakashima', 'Esa Rahtu', 'Janne Heikkilä', 'Naokazu Yokoya']
Video Summarization Using Deep Semantic Features
894,966
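Assuming per-segment deep features have already been extracted by a network such as the one described, the clustering-based selection step can be sketched generically; k-means and the medoid choice below are illustrative, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_keyshots(features, seg_bounds, n_key=5):
    """features: (n_segments, dim) array; seg_bounds: list of (start, end).
    Cluster segments and keep the one closest to each cluster centre."""
    km = KMeans(n_clusters=n_key, n_init=10).fit(features)
    chosen = []
    for c in range(n_key):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[idx] - km.cluster_centers_[c], axis=1)
        chosen.append(seg_bounds[idx[np.argmin(dists)]])
    return sorted(chosen)   # keyshots in temporal order
```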
To detect collisions between deformable objects, we introduce an algorithm that computes the closest distances between certain feature points defined in their meshes. The strategy is to divide the objects into regions and to define a representative vertex that serves to compute the distance to the regions of the other objects. Having obtained the closest regions between two objects, we proceed to explore these regions by expanding them and detecting the closest sub-regions. We handle a hierarchy of regions and distances where the first level contains n1 regions, each of which is divided into n2 sub-regions, and so on. A collision is detected when the distance between two vertices in the last level of the tree is less than a predefined value ε. The advantage of our algorithm is that we can follow the deformation of the surface with the representative vertices defined in the hierarchy.
['Francisco A. Madera', 'A. M. Day', 'Stephen D. Laycock']
A Distance Hierarchy to Detect Collisions Between Deformable Objects
634,796
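A toy version of the first level of such a hierarchy follows; all names are hypothetical, and the full method recurses into the returned region pairs, level by level, until two vertices come closer than ε.

```python
import numpy as np

def closest_regions(verts_a, reps_a, verts_b, reps_b, k=2):
    """Compare one representative vertex per region and return the k
    closest region pairs; only these are expanded at the next level."""
    pairs = []
    for ra, ia in reps_a.items():
        for rb, ib in reps_b.items():
            dist = np.linalg.norm(verts_a[ia] - verts_b[ib])
            pairs.append((dist, ra, rb))
    pairs.sort(key=lambda p: p[0])
    return pairs[:k]

# Two tiny "objects", each split into two regions with one representative:
va = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
vb = np.array([[2.0, 0, 0], [3, 0, 0]])
print(closest_regions(va, {"A1": 0, "A2": 1}, vb, {"B1": 0, "B2": 1}))
```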
This paper describes how to find the extremum of a constrained multivariable function by taking advantage of its structure. Three steps lead to the solution. The first step discusses restructuring a given problem: reformulating it in a way which makes possible the calculations performed in the next two steps. The second step presents the conversion of the restructured problem into a block diagram. The last step presents the problem solution from the block diagram obtained.
['Francesco Brioschi', 'A. Locatelli']
Extremization of Constrained Multivariable Function: Structural Programming
327,074
Among various service providers providing identical or similar services with varying quality of service, trust is essential for service consumers to find the right one. Manually assigning feedback is time-consuming and suffers from several drawbacks. Only automatic trust calculation is feasible for large-scale service-oriented applications. Therefore, an automatic method of trust calculation is proposed. To make the calculation accurate, the Kalman filter is adopted to filter out malicious non-trust quality criterion (NTQC) values instead of malicious trust values. To offer higher detection accuracy, it is further improved by considering the relationship between NTQC values and variances. Since dishonest or inaccurate values can still influence trust values, the similarity between consumers is used to weight data from other consumers. As existing models only used the Euclidean function and ignored others, a collection of distance functions is modified to calculate the similarity. Finally, experiments are carried out to assess the robustness of the proposed model. The results show that the improved algorithm offers higher detection accuracy, and it was discovered that another distance function outperformed the Euclidean one.
['Bo Ye', 'Maziar Nekovee', 'Anjum Pervez', 'Mohammad Ghavami']
Automatic trust calculation for service-oriented systems
218,365
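The screening idea can be illustrated with a scalar Kalman filter and an innovation gate. The process/measurement noise values and the 3-sigma threshold below are assumptions for the sketch, and the paper's improved variant additionally exploits the relationship between NTQC values and variances.

```python
def kalman_screen(values, q=1e-3, r=0.01, gate=3.0):
    """Track a reported quality criterion with a random-walk Kalman filter;
    reports whose innovation exceeds `gate` std deviations are flagged."""
    x, p = values[0], 1.0
    smoothed, flagged = [x], []
    for z in values[1:]:
        p_pred = p + q                      # predict (random-walk model)
        s = p_pred + r                      # innovation variance
        if (z - x) ** 2 > gate ** 2 * s:    # gate test: likely malicious
            flagged.append(z)
            continue
        k = p_pred / s                      # Kalman gain
        x += k * (z - x)                    # update state estimate
        p = (1 - k) * p_pred
        smoothed.append(x)
    return smoothed, flagged

print(kalman_screen([0.9, 0.88, 0.91, 0.1, 0.89, 0.9]))  # 0.1 gets flagged
```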
A blind 3D watermarking method based on optimizing the placement of displaced vertices on the surface of the 3D object is proposed in this paper. A new mesh distortion function is used as a cost function for the Levenberg-Marquardt optimization algorithm in order to ensure a minimal distortion to the 3D object surface after watermarking. The surface distortion function consists of the sums of Euclidean distances from the displaced vertex to the original surface, to the new surface and to the original vertex location. Experimental results assess the level of distortion produced to 3D surfaces by watermarking as well as the robustness to various attacks.
['Adrian G. Bors', 'Ming Luo']
Minimal surface distortion function for optimizing 3D watermarking
128,518
The effectiveness and efficiency of case-based reasoning (CBR) systems depend largely on the success of case-based retrieval. Case-base maintenance (CBM) issues have therefore become imperative and increasingly important. This paper proposes a new competence model and a new maintenance procedure for the proposed competence model. Based on the proposed competence maintenance procedure, footprint-based retrieval (FBR), a competence-based case-base retrieval method, is able to preserve its retrieval effectiveness and efficiency.
['Ning Lu', 'Jie Lu', 'Guangquan Zhang']
Maintaining Footprint-Based Retrieval for Case Deletion
535,569
['Trung Dung Nguyen', 'Van Duc Nguyen', 'Thanh Tung Nguyen', 'Van Tien Van Tien Pham', 'Trong Hieu Pham', 'Wakasugi Koichiro']
An Energy-Efficient Ring Search Routing Protocol Using Energy Parameters in Path Selection
597,948
The purpose of this study was to determine whether there is a difference between an exergame-based and a traditional balance training program in undergraduate Physical Education students. Thirty-two third-year undergraduate students at the Democritus University of Thrace were randomly divided into two training program groups of 16 students each, a traditional and a Nintendo Wii group. The two training program groups performed a specific balance program for 8 weeks, two times per week, 24 min per session. The Nintendo Wii group used the interactive games of Wii Fit Plus on the Nintendo Wii console as a training method to improve their balance, while the traditional group used an exercise program with a mini trampoline and inflatable discs. Before and after the completion of the eight-week balance program, participants completed a single-leg static balance assessment for both limbs on the Biodex stability system. Two-way analyses of variance (ANOVAs), with repeated measures on the last factor, were conducted to determine the effect of training program group (traditional, Nintendo Wii) and measure (pre-test, post-test) on balance test indices (SI, API, and MLI). Where initial differences between groups were verified, one-way analyses of covariance (ANCOVAs) were applied. Analysis of the data illustrated that both groups demonstrated an improvement in SI, API and MLI mean scores for both the right and the left limb. In conclusion, the findings support the effectiveness of using the Nintendo Wii gaming console as an intervention for undergraduate Physical Education students, and specifically, its effects on physical function related to balance competence.
['Nikolaos Vernadakis', 'Asimenia Gioftsidou', 'Panagiotis Antoniou', 'Dionysis Ioannidis', 'Maria Giannousi']
The impact of Nintendo Wii to physical education students' balance compared to the traditional approaches
207,080
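The statistical design maps directly onto standard tooling. Below is a hedged sketch with statsmodels on synthetic data (all column names and values hypothetical); note that a full repeated-measures analysis would also model the subject factor, which this simple two-way layout ignores.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per student per measurement.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["wii", "traditional"], 32),
    "time": np.tile(np.repeat(["pre", "post"], 16), 2),
    "SI": rng.normal(2.0, 0.4, 64) - 0.3 * np.tile(np.repeat([0, 1], 16), 2),
})

# Two-way ANOVA: group x time effects on the stability index (SI).
model = ols("SI ~ C(group) * C(time)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```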
['Darshana Sedera', 'Guy G. Gable']
A Factor and Structural Equation Analysis of the Enterprise Systems Success Measurement Model.
975,064
The throughput of a mobile ad hoc network (MANET) is determined by the transceiver link capacity available at each node and the type of traffic pattern that is prevalent in the network. In order for a routing protocol to be scalable, its control overhead must not exceed transceiver link capacity. To achieve capacity compatible routing, hierarchical techniques may be employed. This paper describes how link state routing, with a single layer of hierarchy, provides sufficient scalability for MANETs where the traffic pattern consists of unicast communication between arbitrary pairs of nodes.
['John Sucec', 'Ivan Marsic']
Capacity compatible 2-level link state routing for ad hoc networks with mobile clusterheads
150,581
Designing the jet ejector optimally is a challenging task and has a great impact on industrial applications. Three different sets of nozzles (namely, 1, 3, and 5) inside the jet ejector are compared in this study by using numerical simulations. More precisely, the dynamics of bubble coalescence and breakup in the multinozzle jet ejectors are studied by means of Computational Fluid Dynamics (CFD). The population balance approach is used for the gas phase such that different bubble size groups are included in CFD and the number densities of each of them are predicted in CFD simulations. Here, the commercial CFD software ANSYS Fluent 14.0 is used. The realizable k-ε turbulence model is used in the CFD code in three-dimensional computational domains. It is clear that Reynolds-Averaged Navier-Stokes (RANS) models have their limitations, but on the other hand, turbulence modeling is not the key issue in this study and we can assume that the RANS models can predict the turbulence of the carrying phase accurately enough. In order to validate our numerical predictions, the results for one, three, and five nozzles are compared to laboratory experimental data for the Cl2-NaOH system. Predicted gas volume fractions, bubble size distributions, and resulting number densities of the different bubble size groups as well as the interfacial area concentrations are in good agreement with experimental results.
['Dhanesh Patel', 'Ashvinkumar Chaudhari', 'Arto Laari', 'Matti Heiliö', 'Jari Hämäläinen', 'Kishorilal Agrawal']
Numerical Simulation of Bubble Coalescence and Break-Up in Multinozzle Jet Ejector
646,393
We study the mixing time of some Markov chains converging to critical physical models. These models are indexed by a parameter β and there exists some critical value β_c where the model undergoes a phase transition. According to physics lore, the mixing time of such Markov chains is often of logarithmic order outside the critical regime, when β ≠ β_c, and satisfies some power law at criticality, when β = β_c. We prove this in the two following settings: 1. Lazy random walk on the critical percolation cluster of "mean-field" graphs, which include the complete graph and random d-regular graphs. The critical mixing time here is of order Θ(n). This answers a question of Benjamini, Kozma and Wormald. 2. Swendsen-Wang dynamics on the complete graph. The critical mixing time here is of order Θ(n^{1/4}). This improves results of Cooper, Dyer, Frieze and Rue. In both settings, the main tool is understanding the Markov chain dynamics via properties of critical percolation on the underlying graph.
['Yun Long', 'Asaf Nachmias', 'Yuval Peres']
Mixing Time Power Laws at Criticality
16,063
With the increasing number of users of mobile computing devices (e.g. personal digital assistants) and the advent of third generation mobile phones, wireless communications are becoming increasingly important. Many applications rely on the device maintaining a replica of a data-structure which is stored on a server, for example news databases, calendars and e-mail. In this paper we explore the question of the optimal strategy for synchronising such replicas. We utilise probabilistic models to represent how the data-structures evolve and to model user behaviour. We then formulate objective functions which can be minimised with respect to the synchronisation timings. We demonstrate, using two real world data-sets, that a user can obtain more up-to-date information using our approach.
['Neil D. Lawrence', 'Anthony I. T. Rowstron', 'Christopher M. Bishop', 'Michael J. Taylor']
Optimising Synchronisation Times for Mobile Devices
238,884
We present a novel system for automatically generating immersive and interactive virtual reality (VR) environments using the real world as a template. The system captures indoor scenes in 3D, detects obstacles like furniture and walls, and maps walkable areas (WA) to enable real-walking in the generated virtual environment (VE). Depth data is additionally used for recognizing and tracking objects during the VR experience. The detected objects are paired with virtual counterparts to leverage the physicality of the real world for a tactile experience. Our approach is new, in that it allows a casual user to easily create virtual reality worlds in any indoor space of arbitrary size and shape without requiring specialized equipment or training. We demonstrate our approach through a fully working system implemented on the Google Project Tango tablet device.
['Misha Sra', 'Sergio Garrido-Jurado', 'Chris Schmandt', 'Pattie Maes']
Procedurally generated virtual reality from 3D reconstructed physical space
928,645
Geometric active contours based on edges perform poorly in the presence of noise or clutter. When the edges have gaps or are indistinct, the contour leaks through the boundary. Furthermore, when spurious edge points that do not belong to the object are present in the image, the contour is stopped by them and either does not converge to the object boundary or there is oversegmentation. This paper addresses the second difficulty. We propose a novel technique which classifies image features as valid or invalid making the curve stop only at valid features and allowing it to bridge the invalid ones. This is incorporated in the stopping force of boundary based level sets, achieving a robust contour estimation. Our algorithm organizes edge points into connected segments (denoted herein as strokes) and classifies each segment as valid or invalid. A confidence degree (weight) is assigned to each stroke and updated during the evolution process. Thus, the proposed stopping force is adaptive. Experimental results with real data will be provided to illustrate the performance of the proposed algorithm.
['Margarida Silveira', 'Jacinto C. Nascimento', 'Jorge S. Marques']
Level set segmentation with outlier rejection
379,852
This paper proposes kernel entropy component analysis (KECA) for clustering remote sensing data. The method generates nonlinear features that reveal structure related to the Renyi entropy of the input space data set. Unlike other kernel feature extraction methods, KECA does not necessarily choose the top eigenvalues and eigenvectors of the kernel matrix. The data are mapped with a distinct angular structure, which is exploited to derive a new angle-based spectral clustering algorithm operating on the mapped data. An out-of-sample extension of the method is also presented to deal with test data. We focus on cloud screening from MERIS images. Several images are considered to account for the high variability of the problem. Good results show the suitability of the proposal.
['Luis Gomez-Chova', 'Robert Jenssen', 'Gustavo Camps-Valls']
Kernel entropy component analysis in remote sensing data clustering
87,020
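The defining twist of KECA, ranking kernel eigen-directions by their contribution to the Renyi entropy estimate instead of by eigenvalue magnitude, fits in a few lines. The RBF kernel, bandwidth and feature count below are assumed parameters of the sketch.

```python
import numpy as np

def keca(X, sigma=1.0, k=2):
    """Map data X (n, d) to k kernel entropy components."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))          # RBF kernel matrix
    lam, E = np.linalg.eigh(K)                  # ascending eigenvalues
    contrib = lam * (E.sum(axis=0)) ** 2        # entropy contribution per axis
    idx = np.argsort(contrib)[::-1][:k]         # NOT simply the top eigenvalues
    return E[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

# The mapped points exhibit the angular structure exploited by the
# angle-based spectral clustering described above.
```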
Designing diversity-achieving schemes over wireless broadband fading relay channels is crucial to achieving higher diversity gains. These gains are achieved by exploiting the multipath (frequency) and cooperative diversities to combat the fading nature of wireless channels. The challenge is how to design space-frequency codes, distributed among randomly located nodes, that can exploit the frequency diversity of wireless broadband channels. In this paper, the design of distributed space-frequency codes (DSFCs) for wireless relay networks is considered. The proposed DSFCs are designed to achieve the frequency and cooperative diversities of the wireless relay channels. The use of DSFCs with the decode-and-forward (DAF) and amplify-and-forward (AAF) protocols is considered. The code design criteria to achieve full diversity, based on the pairwise error probability (PEP) analysis, are derived. For DSFC with the DAF protocol, a two-stage coding scheme, with source node coding and relay node coding, is proposed. We derive sufficient conditions for the proposed code structures at the source and relay nodes to achieve full diversity of order NL, where N is the number of relay nodes and L is the number of paths per channel. For the case of DSFC with the AAF protocol, a structure for distributed space-frequency coding is proposed.
['Karim G. Seddik', 'K. J. Ray Liu']
Distributed Space-Frequency Coding over Broadband Relay Channels
162,318
3D printing is revolutionizing the next-generation manufacturing industry. With increasing design complexity, computation in the prefabrication process is becoming the bottleneck of 3D printing. For example, a multi-scale, multi-material 3D design (e.g., a bionic bone) takes a few hours or even days to complete the prefabrication computation. Therefore, it is prudent to improve the performance of 3D printing prefabrication. In this paper, we investigate the computational challenges in 3D printing. First, we develop PBench, an open-source standard benchmark suite for 3D printing prefabrication. To the best of our knowledge, this is the first benchmark suite for 3D printing prefabrication. Second, we study the properties of PBench using TotalChar, a proposed benchmark characterization framework analyzing PBench from three complementary dimensions: (1) microarchitecture-independent analysis, (2) architecture bottleneck analysis and (3) functional performance analysis. Experiments show that 3D printing prefabrication can potentially be optimized through specialized function accelerator design or parallelism exploration. Overall, this work establishes the potential for accelerating 3D printing prefabrication.
['F. Yang', 'Feng Lin', 'Chen Song', 'Chi Zhou', 'Zhanpeng Jin', 'Wenyao Xu']
Pbench: a benchmark suite for characterizing 3D printing prefabrication
902,119
['David R. Gnimpieba Zanfack', 'Ahmed Nait-Sidi-Moh', 'David Durand', 'Jérôme Fortin']
Publish and Subscribe Pattern for Designing Demand Driven Supply Networks
783,861
We prove that the function that maps a word of a rational language onto its successor for the radix order in this language is a finite union of co-sequential functions.
['Pierre-Yves Angrand', 'Jacques Sakarovitch']
Radix enumeration of rational languages
76,739
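For intuition, the radix (length-then-lexicographic) order and its successor function can be demonstrated naively. The brute-force sketch below simply enumerates words; the paper's result is that this successor map is realized by a finite union of co-sequential functions, which this sketch does not construct.

```python
from itertools import count, product

def radix_successor(w, accept, alphabet):
    """Next word after w in radix (length, then lexicographic) order
    among words satisfying `accept`. Loops forever if none exists."""
    seen = False
    for n in count(len(w)):
        for tup in product(alphabet, repeat=n):
            cand = ''.join(tup)
            if seen and accept(cand):
                return cand
            if cand == w:
                seen = True

# Successor of '11' in the language of binary words without '00':
print(radix_successor('11', lambda s: '00' not in s, '01'))  # -> '010'
```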
We present our effort to extend and complement the core modules of the Distributed and Unified Numerics Environment DUNE (this http URL) by a well tested and structured collection of utilities and concepts. We describe key elements of our four modules dune-xt-common, dune-xt-grid, dune-xt-la and dune-xt-functions, which aim at further enabling the programming of generic algorithms within DUNE as well as adding an extra layer of usability and convenience.
['Tobias Leibner', 'René Milk', 'Felix Schindler']
Extending DUNE: The dune-xt modules
660,883
In this paper, we present our Concurrent Systems class, where parallel programming and parallel and distributed computing (PDC) concepts have been taught for more than 20 years. Despite several rounds of changes in hardware, the class maintains its goals of allowing students to learn parallel computer organizations, studying parallel algorithms, and writing code to be able to run on parallel and distributed platforms. We discuss the benefits of such a class, reveal the key elements in developing this class and receiving funding to replace outdated hardware. We will also share our activities in attracting more students to be interested in PDC and related topics.
['Jie Liu']
20 Years of teaching parallel processing to computer science seniors
963,995
In this paper, we address the problems of enumerating siphons and minimal siphons in ordinary Petri nets (PNs) by resorting to the semi-tensor product (STP) of matrices. First, a matrix equation, called the siphon equation (SE), is established by using STP. Second, an algorithm is proposed to calculate all siphons in ordinary PNs. An example is presented to illustrate the theoretical results and show that the proposed method is more effective than other existing methods in calculating all siphons of PNs. Third, an efficient recursive algorithm is also proposed, which can be applied to computing all minimal siphons for any ordinary PN. Finally, results on the computational complexity of the proposed algorithms are provided, together with experimental results.
['Xiaoguang Han', 'Zengqiang Chen', 'Zhongxin Liu', 'Qing Zhang']
Calculation of Siphons and Minimal Siphons in Petri Nets Based on Semi-Tensor Product of Matrices
721,849
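For very small nets, siphons can also be enumerated directly from the definition. The brute-force sketch below (exponential in the number of places, and unrelated to the STP machinery above) makes the objects themselves concrete.

```python
from itertools import combinations

def all_siphons(places, pre, post):
    """A set S of places is a siphon iff preset(S) is a subset of postset(S),
    where pre[p]  = transitions with an arc into p, and
          post[p] = transitions with an arc out of p."""
    found = []
    for r in range(1, len(places) + 1):
        for S in combinations(places, r):
            preset = set().union(*(pre[p] for p in S))
            postset = set().union(*(post[p] for p in S))
            if preset <= postset:
                found.append(set(S))
    return found

def minimal_siphons(sips):
    return [s for s in sips if not any(t < s for t in sips)]

# Toy net: t1 moves a token p1 -> p2, t2 moves it back.
pre = {'p1': {'t2'}, 'p2': {'t1'}}
post = {'p1': {'t1'}, 'p2': {'t2'}}
print(minimal_siphons(all_siphons(['p1', 'p2'], pre, post)))  # [{'p1', 'p2'}]
```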
Channel-width adaptation can significantly improve the connectivity and capacity and reduce the power consumption in wireless networks. OFDMA-based channel-width adaptation driven by traffic demand has been studied in wireless networks with sufficient spectral resources. However, the traffic demand may not be fully satisfied in resource-limited scenarios. In this paper, we study the channel-width adaptation problem in wireless mesh networks (WMNs) with limited spectral resources. More specifically, considering diverse quality-of-service (QoS) requirements, a QoS-aware channel-width adaptation scheme is proposed. First, resource allocation with QoS-aware channel-width adaptation is modeled as an optimization problem. A genetic algorithm is employed to obtain a near-optimal solution. Then, in order to reduce the computational complexity, a greedy algorithm is developed to suit highly dynamic traffic demand. Simulation results show that the proposed low-complexity algorithm can guarantee QoS support in resource-limited scenarios.
['Hao Li', 'Jiliang Zhang', 'Qi Hong', 'Hui Zheng', 'Yang Wang', 'Jie Zhang']
QoS-aware channel-width adaptation in wireless mesh networks
844,297
['Jonathan Burton', 'Cliff B. Jones']
Atomicity in system design and execution (Proceedings of Dagstuhl-Seminar 04181) - J. UCS Special Issue
746,291
We present the design of an analog circuit which solves linear programming (LP) or quadratic programming (QP) problems. In particular, the steady-state circuit voltages are the components of the LP (QP) optimal solution. The paper shows how to construct the circuit and provides a proof of equivalence between the circuit and the LP (QP) problem. The proposed method is used to implement an LP-based Model Predictive Controller by using an analog circuit. Simulative and experimental results show the effectiveness of the proposed approach.
['Sergey Vichik', 'Francesco Borrelli']
Solving linear and quadratic programs with an analog circuit
54,560
With the use of an immersive display system such as the CAVE, the user is able to perceive a 3D environment realistically. However, the interaction on such systems faces inherent difficulties such as inaccurate tracking, lack of depth cues, and unstable spatial manipulation without the sense of touch. In this paper, we propose a see-through lens interface using a PDA (Personal Digital Assistant) for supporting spatial manipulation within an immersive display. Compared to related techniques, a PDA-based see-through lens interface includes the following advantages, which we believe will provide more natural and flexible manipulation and interaction schemes in an immersive environment: physical contact of the screen surface for easy 2D manipulation in 3D space, built-in controls for immediate command execution, and a built-in display for a flexible GUI (graphical user interface). This paper describes the basic ideas and implementation details of the proposed system and some functionalities provided with the interface, such as image-based selection of a 3D object.
['Miranda Miranda Miguel', 'Takefumi Ogawa', 'Kiyoshi Kiyokawa', 'Haruo Takemura']
A PDA-based See-through Interface within an Immersive Environment
81,121
We present a model based on congestion pricing for resource control in wireless CDMA networks carrying traffic streams that have fixed-rate requirements, but can adapt their signal quality. Our model considers the resource usage constraint in the uplink of CDMA networks, and does not differentiate users based on their distance from the base station. We compare our model with other economic models that have appeared in the literature, identifying their similarities and differences. Our investigations include the effects of a mobile's distance and the wireless network's load on the target signal quality, the transmission power, and the user benefit.
['Vasilios A. Siris', 'Bob Briscoe', 'David Songhurst']
Economic models for resource control in wireless networks
180,630
['Cédric Wemmert', 'Germain Forestier']
SLEMC: Semi-Supervised Learning Enriched by Multiple Clusterings.
794,019
['Juraj Hromkovic']
Closure Properties of the Family of Languages Recognized by One-Way Two-Head Deterministic Finite State Automata
29,638
['Jordan Fréry', 'Christine Largeron', 'Mihaela Juganaru-Mathieu']
UJM at CLEF in Author Identification Notebook for PAN at CLEF 2014.
790,793
The importance of security has been recognized in ad hoc networks for many years. Consequently, many secure routing methods have been proposed in this field. This paper discusses major security attacks in ad hoc networks, and proposes a number of prevention methods for resource exhaustion attacks, which have severe negative effects on targeted ad hoc networks.
['Masao Tanabe', 'Masaki Aida']
Preventing Resource Exhaustion Attacks in Ad Hoc Networks
517,018
This paper introduces a measure of distribution-sensitivity, which is similar to Arrow-Pratt's measure of risk aversion, for a poverty index. The measure also gauges poverty aversion and has a clear and straightforward interpretation. Using this measure, we can define various classes of minimum distribution-sensitive poverty indices. We show that the ordering condition for such a class of poverty indices is simply a generalized (poverty) deprivation profile dominance. Properties and implications of this dominance condition are analyzed and poverty orderings by different classes of poverty measures are compared.
['Buhong Zheng']
Minimum Distribution-Sensitivity, Poverty Aversion, and Poverty Orderings
320,833
['Markus Wagner']
Probabilistic User Models for the Verification of Human-Computer Interaction.
782,817
The classification of images using regular or geometric moment functions suffers from two major problems. First, odd orders of central moments give zero values for images with symmetry in the x and/or y directions and symmetry at the centroid. Second, these moments are very sensitive to noise, especially for higher order moments. In this paper, a single solution is proposed to solve both of these problems. The solution involves the computation of the moments from a reference point other than the image centroid. The new reference centre is selected such that the invariant properties like translation, scaling and rotation are still maintained. In this paper, it is shown that the new proposed moments can solve the symmetry problem. Next, we show that the new proposed moments are less sensitive to Gaussian and random noise as compared to two different types of regular moments derived by Hu [6]. An extensive experimental study using a neural network classification scheme with these moments as inputs is conducted to verify the proposed method.
['Ramaswamy Palaniappan', 'Paramesran Raveendran', 'Sigeru Omatu']
NEURAL NETWORK CLASSIFICATION OF SYMMETRICAL AND NONSYMMETRICAL IMAGES USING NEW MOMENTS WITH HIGH NOISE TOLERANCE
275,757
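The shifted-reference idea is easy to state in code. Below is a sketch of geometric moments computed about an arbitrary reference point; how the paper selects that point while retaining translation, scale and rotation invariance is not reproduced here.

```python
import numpy as np

def moment_about(img, ref, p, q):
    """Geometric moment of order (p+q) of a grey-level image, taken
    about an arbitrary reference point rather than the centroid."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    x0, y0 = ref
    return ((xs - x0) ** p * (ys - y0) ** q * img).sum()

# For an x/y-symmetric image, odd-order central moments vanish;
# shifting the reference point makes them informative again.
img = np.zeros((9, 9)); img[3:6, 3:6] = 1.0            # symmetric square
m00 = img.sum()
cx = (np.mgrid[0:9, 0:9][1] * img).sum() / m00
cy = (np.mgrid[0:9, 0:9][0] * img).sum() / m00
print(moment_about(img, (cx, cy), 3, 0))               # ~0: the symmetry problem
print(moment_about(img, (cx - 1.5, cy), 3, 0))         # nonzero: shifted reference
```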