A new carrier phase and frequency recovery scheme is proposed for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. The algorithm uses an extended Kalman filter (EKF) to estimate the instantaneous phase and frequency offset. A linear minimum mean squared error (LMMSE) data-channel extractor is used to isolate the phase- and frequency-dependent term from the received signal. The proposed algorithm provides both phase and carrier frequency recovery and has lower complexity than other approaches in the literature. In addition, there is very little latency associated with the proposed approach. Simulation results show that the proposed algorithm can quickly track the carrier phase and frequency offset, and that the performance loss compared to the synchronous case is very small.
['Hui Jin', 'Jaekyun Moon', 'Taehyun Jeon', 'Sok-Kyu Lee']
Carrier phase and frequency recovery for MIMO-OFDM
265,335
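The predict-correct structure of a carrier recovery loop like the one in the abstract above can be sketched in simplified form. This is not the paper's EKF with an LMMSE data-channel extractor: it is a generic second-order phase tracker that assumes direct (wrapped) phase measurements and illustrative hand-picked gains, purely to show how such a loop locks onto a phase and frequency offset.

```python
# Simplified second-order phase/frequency tracker (alpha-beta style).
# NOT the paper's EKF/LMMSE scheme: it assumes direct wrapped phase
# measurements and hand-picked gains, to illustrate predict-correct
# carrier recovery only.
import math

def wrap(x):
    # Map an angle to (-pi, pi] so phase innovations stay small.
    return (x + math.pi) % (2.0 * math.pi) - math.pi

def track(phases, alpha=0.3, beta=0.1):
    theta, freq = 0.0, 0.0  # phase estimate (rad), frequency (rad/sample)
    for z in phases:
        pred = theta + freq           # predict the next phase
        innov = wrap(z - pred)        # wrapped innovation
        theta = pred + alpha * innov  # correct the phase estimate
        freq += beta * innov          # correct the frequency estimate
    return theta, freq

# The loop locks onto a constant frequency offset of 0.01 cycles/sample:
f_true = 0.01
zs = [wrap(2.0 * math.pi * f_true * k) for k in range(1, 201)]
theta, freq = track(zs)
```

With these gains the error dynamics are a stable second-order system, so the frequency estimate converges geometrically on noiseless input.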
Concept categorization can be divided into three levels according to the degree of generalization: the superordinate, basic and subordinate levels. To investigate the neural mechanisms of retrieving concept information from these different levels, animals and vehicles were chosen as materials and a picture-word matching task was used in this study with the ERP technique. The behavioral results showed that basic-level concepts were retrieved most quickly, much faster than concepts at the superordinate and subordinate levels. The ERP results showed enhanced ERP signals in the early stage for the superordinate-level condition, including the N1 time window and the 300-500 ms window, suggesting a superordinate-level advantage; but in the late stage (500-600 ms time window), a basic-level advantage was observed. These results indicate that the retrieval advantage shifted from the superordinate level to the basic level.
['Sanxia Fan', 'Xuyan Wang', 'Zhizhou Liao', 'Zhoujun Long', 'Haiyan Zhou', 'Yulin Qin']
Basic level advantage during information retrieval: an ERP study
552,809
The traditional approach used to implement a business process (BP) in today's information systems (IS) no longer covers the actual needs of a dynamically changing business. Therefore, the necessity for a new approach to dynamic business process (DBP) modelling and simulation has arisen. To date, existing approaches to DBP modelling and simulation have been incomplete, i.e. they lack theory or a case study or both. Furthermore, there is no commonly accepted definition of a DBP. Current BP modelling tools are suitable almost solely for the modelling and simulation of a static BP that strictly prescribes which activities, and in which sequence, to execute. Usually, a DBP is not defined strictly at the beginning of its execution, and it changes under new conditions at runtime. In our paper, we propose six requirements of a DBP and an approach for rule- and context-based DBP modelling and simulation. The approach is based on changing BP rules, BP actions and their sequences at process instance runtime, according to the new business system context. Based on the proposed approach, a reference architecture and prototype of a DBP simulation tool were developed. Modelling and simulation were carried out using this prototype, and the case study shows correspondence to the needs of a dynamically changing business, as well as possibilities for modelling and simulating a DBP.
['Olegas Vasilecas', 'Diana Kalibatiene', 'Dejan Lavbič']
Rule- and context-based dynamic business process modelling and simulation
875,416
The optimization of a quadratic objective function with linear constraints is useful for interpolation purposes. This formulation may be employed to derive an initial prediction in the lifting scheme domain in order to construct wavelet transforms. We modify the formulation to design final prediction and update lifting steps. The linear constraints relate wavelet bases and coefficients with the underlying signal. The objective function is the detail signal energy for the prediction lifting design and the gradient of the approximation signal for the update. To report concrete results and the power of the approach, we derive update steps using an auto-regressive image model that show better performance than the 5/3 wavelet for the compression of several image classes.
['Joel Solé', 'Philippe Salembier']
A Common Formulation for Interpolation, Prediction, and Update Lifting Design
95,309
Many transportation companies in the current era of environmental corporate social responsibility have long-term objectives to move their operations towards green logistics. This requires them to use more environmentally friendly resources as inputs that will produce fewer harmful outputs to the environment. In this regard, converting diesel trucks to operate on Natural Gas (NG) and diesel has recently received significant attention in the transportation sector of logistics. However, initial results in the literature indicate that the performance of a dual diesel/NG engine is not the same as that of a conventional diesel-powered engine. Apart from engine conversion, this demonstrates the significance of engine performance management as one of the important steps to be carried out for achieving the goal of green logistics. Existing work in the literature has focussed on studying the engine's performance outputs from different types of fuels but not on engine performance management. In this paper, the need to address this gap is highlighted and an approach based on pattern discovery and association mining techniques is proposed that provides knowledge by which an engine's key performance factors can be managed or fine-tuned so that the performance of a dual diesel/NG engine is similar to that of a diesel one in specific operational conditions. The proposed approach also provides a comprehensive analysis from a business perspective, including cost-benefit decision-making, that will assist transport companies in the changeover decision-making process.
['Atefe Zakeri', 'Elizabeth Chang', 'Omar Khadeer Hussain']
TUNE: Tree Mining-Based Approach for Dual-Fuel Engine Performance Management of Heavy Duty Trucks
976,626
Hardware masking is a well-known countermeasure against Side-Channel Attacks (SCA). Like many other countermeasures, the side-channel resistance of masked circuits is susceptible to low-level circuit effects. However, no detailed analysis is available that explains how, and to what extent, these low-level circuit effects cause side-channel leakage. Our first contribution is a unified and consistent analysis that explains how glitches and inter-wire capacitance cause side-channel leakage in masked hardware. Our second contribution is to show that inter-wire capacitance and glitches cause side-channel leakage of comparable magnitude according to HSPICE simulations. Our third contribution is to confirm our analysis with a successful DPA attack on a 90nm CMOS FPGA implementation of a glitch-free masked AES S-Box. According to the existing literature, this circuit should be side-channel resistant, while according to our analysis and measurements, it shows side-channel leakage. Our conclusion is that circuit-level effects, not only glitches, present a practical concern for masking schemes.
['Zhimin Chen', 'Syed Imtiaz Haider', 'Patrick Schaumont']
Side-Channel Leakage in Masked Circuits Caused by Higher-Order Circuit Effects
206,128
Arc length, vocabulary richness and text size.
['Ioan-Iovitz Popescu', 'Peter Zörnig', 'Gabriel Altmann']
Arc length, vocabulary richness and text size.
785,477
RNA structure prediction, or folding, is a compute-intensive task that lies at the core of several search applications in bioinformatics. We begin to address the need for high-throughput RNA folding by accelerating the Nussinov folding algorithm using a 2D systolic array architecture. We adapt classic results on parallel string parenthesization to produce efficient systolic arrays for the Nussinov algorithm, elaborating these array designs to produce fully realized FPGA implementations. Our designs achieve estimated speedups of up to 39x on a Xilinx Virtex-II 6000 FPGA over a modern x86 CPU.
['Arpith C. Jacob', 'Jeremy Buhler', 'Roger D. Chamberlain']
Accelerating Nussinov RNA secondary structure prediction with systolic arrays on FPGAs
198,529
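For reference, the Nussinov recurrence that the systolic arrays in the abstract above parallelize can be sketched in plain Python: an O(n^3) dynamic program that maximizes the number of complementary base pairs over a sequence. The base-pair set and traversal order are standard; the function name and minimal-loop parameter are illustrative.

```python
# Plain CPU sketch of the Nussinov dynamic program: maximizes the number
# of complementary base pairs in an RNA sequence in O(n^3) time.
# Watson-Crick pairs plus the G-U wobble pair are allowed.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_pairs(seq, min_loop=0):
    n = len(seq)
    # dp[i][j] = max pairs achievable in the subsequence seq[i..j]
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):          # increasing subsequence lengths
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]       # option 1: leave seq[j] unpaired
            for k in range(i, j - min_loop):
                # option 2: pair seq[k] with seq[j], splitting the interval
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

Each anti-diagonal of `dp` depends only on previously completed entries, which is the data-flow pattern the paper's 2D systolic array exploits.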
Network-on-chip has been proposed as an alternative to bus-based systems to achieve high performance and scalability. The topology of the on-chip interconnect plays a crucial role in system-on-chip performance, energy, and area requirements. In this paper, an on-chip interconnect architecture based on the WK-recursive network is proposed. The WK-recursive structure is analyzed and compared to the 2D mesh and Spidergon structures. Simulation results show that the WK-recursive on-chip interconnect generally outperforms the other architectures.
['Suboh A. Suboh', 'Mohamed Bakhouya', 'Tarek A. El-Ghazawi']
Simulation and Evaluation of On-Chip Interconnect Architectures: 2D Mesh, Spidergon, and WK-Recursive Network
313,532
Data-intensive applications fall into two computing styles: Internet services (cloud computing) or high-performance computing (HPC). In both categories, the underlying file system is a key component for scalable application performance. In this paper, we explore the similarities and differences between PVFS, a parallel file system used in HPC at large scale, and HDFS, the primary storage system used in cloud computing with Hadoop. We integrate PVFS into Hadoop and compare its performance to HDFS using a set of data-intensive computing benchmarks. We study how HDFS-specific optimizations can be matched using PVFS and how consistency, durability, and persistence tradeoffs made by these file systems affect application performance. We show how to embed multiple replicas into a PVFS file, including a mapping with a complete copy local to the writing client, to emulate HDFS's file layout policies. We also highlight implementation issues with HDFS's dependence on disk bandwidth and benefits from pipelined replication.
['Wittawat Tantisiriroj', 'Seung Woo Son', 'Swapnil Patil', 'Samuel Lang', 'Garth A. Gibson', 'Robert B. Ross']
On the duality of data-intensive file system design: reconciling HDFS and PVFS
63,317
Neural Network Language Model with Cache
['Daniel Soutner', 'Zdeněk Loose', 'Luděk Müller', 'Aleš Pražák']
Neural Network Language Model with Cache
394,178
Stream processing applications executed on multiprocessor systems usually contain cyclic data dependencies due to the presence of bounded FIFO buffers and feedback loops, as well as cyclic resource dependencies due to the usage of shared processors. In recent works it has been shown that temporal analysis of such applications can be performed by iterative fixed-point algorithms that combine dataflow and response time analysis techniques. However, these algorithms consider resource dependencies based on the assumption that tasks on shared processors are enabled simultaneously, resulting in a significant overestimation of interference between such tasks. This paper extends these approaches by integrating an explicit consideration of precedence constraints with a notion of offsets between tasks on shared processors, leading to a significant improvement of temporal analysis results for cyclic stream processing applications. Moreover, the addition of an iterative buffer sizing enables an improvement of temporal analysis results for acyclic applications as well. The performance of the presented approach is evaluated in a case study using a WLAN transceiver application. It is shown that 56% higher throughput guarantees and 52% smaller end-to-end latencies can be determined compared to state-of-the-art.
['Philip S. Kurtin', 'Joost P. H. M. Hausmans', 'Marco Jan Gerrit Bekooij']
Combining Offsets with Precedence Constraints to Improve Temporal Analysis of Cyclic Real-Time Streaming Applications
716,093
Full text is available as a scanned copy of the original print version. Get a printable copy (PDF file) of the complete article (265K), or click on a page image below to browse page by page.
['Tm Hulgan', 'Fred R. Hargrove', 'Talbert Da', 'Samuel Trent Rosenbloom']
A Decision Support System to Promote Oral Antibiotic Dosing.
664,024
Clock distribution networks are extremely critical from a performance and power standpoint. They account for about 20-30% of the total power dissipated in current generation microprocessors. Many three-dimensional (3D) schemes propose to reduce interconnect length to improve performance and decrease power consumption. In this paper we propose a clock distribution network for a 3D multilayer core microprocessor. The 3D microprocessor floor plan has a single core folded onto multiple layers. A separate layer for the clock distribution network is proposed in the 3D microprocessor. This arrangement of a 3D chip stack reduces the power lost (a) in long interconnects at the block level and (b) in the clock distribution. Simulation results indicate a 15-20% power saving for this clock distribution scheme as compared to a 2D structure. A methodology for turning off the global clock grid along with the logic for an entire layer in a 3D stack is also proposed. Simulation results indicate an additional 8-10% savings in power with minimal impact on the critical parameters of the clock grid.
['Venkatesh Arunachalam', 'Wayne Burleson']
Low-power clock distribution in a multilayer core 3d microprocessor
223,279
We study an equivalence of (i) deterministic pathwise statements appearing in the online learning literature (termed \emph{regret bounds}), (ii) high-probability tail bounds for the supremum of a collection of martingales (of a specific form arising from uniform laws of large numbers for martingales), and (iii) in-expectation bounds for the supremum. By virtue of the equivalence, we prove exponential tail bounds for norms of Banach space valued martingales via deterministic regret bounds for the online mirror descent algorithm with an adaptive step size. We extend these results beyond the linear structure of the Banach space: we define a notion of \emph{martingale type} for general classes of real-valued functions and show its equivalence (up to a logarithmic factor) to various sequential complexities of the class (in particular, the sequential Rademacher complexity and its offset version). For classes with the general martingale type 2, we exhibit a finer notion of variation that allows partial adaptation to the function indexing the martingale. Our proof technique rests on sequential symmetrization and on certifying the \emph{existence} of regret minimization strategies for certain online prediction problems.
['Alexander Rakhlin', 'Karthik Sridharan']
On Equivalence of Martingale Tail Bounds and Deterministic Regret Inequalities
577,203
Efficient Lattice-based Authenticated Encryption: A Practice-Oriented Provable Security Approach.
['Ahmad Boorghany', 'Siavash Bayat Sarmadi', 'Rasool Jalili']
Efficient Lattice-based Authenticated Encryption: A Practice-Oriented Provable Security Approach.
976,379
Verbal phraseological units are phrases made up of two or more words in which at least one of the words is a verb that plays the role of the predicate. One of the characteristics of this type of expression is that its global meaning rarely can be deduced from the meaning of its components. The automatic recognition of this type of linguistic structure is a very important task, since they are a standard way of expressing a concept or idea. In this paper we present the results obtained when different supervised machine learning methods are employed for determining whether or not a verbal phraseological unit is present in a given story of a newspaper. The experiments have been carried out using a supervised corpus of news stories (written in Mexican Spanish). Besides the results obtained in the experiments aforementioned, we provide access to a new lexicon having phrases as entries (instead of single words), in which each entry is associated with a real value (normalized between zero and one) indicating its probability of being a verbal phraseological unit.
['Belém Priego Sánchez', 'David Pinto']
Identification of Verbal Phraseological Units in Mexican News Stories
588,586
We demonstrate a kernel of size O(k^2) for 3-HITTING SET (HITTING SET when all subsets in the collection to be hit are of size at most three), giving a partial answer to an open question of Niedermeier by improving on the O(k^3) kernel of Niedermeier and Rossmanith. Our technique uses the Nemhauser-Trotter linear-size kernel for VERTEX COVER, and generalizes to demonstrate a kernel of size O(k^(r-1)) for r-HITTING SET (for fixed r).
['Naomi Nishimura', 'Prabhakar Ragde', 'Dimitrios M. Thilikos']
Smaller Kernels for Hitting Set Problems of Constant Arity
383,846
This paper studies a continuous-time queueing system with multiple types of customers and a first-come-first-served service discipline. Customers arrive according to a semi-Markov arrival process and the service times of individual types of customers have PH-distributions. A GI/M/1-type Markov process for a generalized age process of batches of customers is constructed. The stationary distribution of the GI/M/1-type Markov process is found explicitly and, consequently, the distributions of the age of the batch in service, the total workload in the system, waiting times, and sojourn times of different batches and different types of customers are obtained. The paper gives the matrix representations of the PH-distributions of waiting times and sojourn times. Some results are obtained for the distributions of queue lengths at departure epochs and at an arbitrary time. These results can be used to analyze not only the queue length, but also the composition of the queue. Computational methods are developed for calculating steady-state distributions related to the queue lengths, sojourn times, and waiting times.
['Qi-Ming He']
Analysis of a continuous time SM[K]/PH[K]/1/FCFS queue: Age process, sojourn times, and queue lengths
37,477
Discovery of Precursors to Adverse Events using Time Series Data.
['Vijay Manikandan Janakiraman', 'Bryan L. Matthews', 'Nikunj C. Oza']
Discovery of Precursors to Adverse Events using Time Series Data.
873,280
In recent years research in the three-dimensional sound generation field has been primarily focussed upon new applications of spatialized sound. In the computer graphics community the use of such techniques is most commonly found being applied to virtual, immersive environments. However, the field is more varied and diverse than this and other research tackles the problem in a more complete, and computationally expensive manner. Furthermore, the simulation of light and sound wave propagation is still unachievable at a physically accurate spatio-temporal quality in real time. Although the Human Visual System (HVS) and the Human Auditory System (HAS) are exceptionally sophisticated, they also contain certain perceptional and attentional limitations. Researchers, in fields such as psychology, have been investigating these limitations for several years and have come up with findings which may be exploited in other fields. This paper provides a comprehensive overview of the major techniques for generating spatialized sound and, in addition, discusses perceptual and cross-modal influences to consider. We also describe current limitations and provide an in-depth look at the emerging topics in the field. © 2012 Wiley Periodicals, Inc.
['Vedad Hulusic', 'Carlo Harvey', 'Kurt Debattista', 'Nicolas Tsingos', 'Steve Walker', 'David M. Howard', 'Alan Chalmers']
Acoustic Rendering and Auditory–Visual Cross-Modal Perception and Interaction
184,475
This paper presents a comparative performance study of adaptive and deterministic routing algorithms in wormhole-switched hypercubes and investigates the performance variations of these routing schemes under a variety of network operating conditions. Despite previously reported results, our results show that adaptive routing does not consistently outperform deterministic routing even for high-dimensional networks. In fact, it appears that the superiority of adaptive routing is highly dependent on the broadcast traffic rate generated at each node, and it begins to deteriorate as the broadcast rate of generated messages grows.
['Alireza Shahrabi', 'Mohamed Ould-Khaoua']
On the performance of routing algorithms in wormhole-switched multicomputer networks
421,357
A geosynchronous satellite can observe any area within its coverage region with high temporal repetitivity because of its static location relative to Earth. Given this temporal repetitivity, it can satisfy the requirements of coastal zone monitoring, but it also has to face the influence of varying solar and sensor angles (zenith and azimuth). Up to now, there has been no geosynchronous sensor dedicated to ocean color monitoring (a geosynchronous sensor, the "Korea Geostationary Ocean Color Imager" (KGOCI), is expected to be launched in 2009 [1]). To obtain radiances from the ocean at an altitude of 36,000 km, we have to use a simulation model. In this conference paper, we present a generic simulation model for a geosynchronous optical sensor. This model is composed of several sub-models: a water bio-optical model, an atmospheric transfer model and a sensor model. We also present our recent results, namely the influence of the solar and sensor angles on the deviation of chlorophyll-concentration estimates in the open ocean (case 1 water).
['Manchun Lei', 'Audrey Minghelli-Roman', 'Sandrine Mathieu', 'Jean-Marie Froidefond', 'Annick Bricaud', 'Pierre Gouton']
Assessment of the Potential future high and medium resolution sensors on geosynchronous orbit for coastal zone monitoring
47,395
The research on real-time software systems has produced algorithms that make it possible to schedule system resources effectively while guaranteeing application deadlines, and to group tasks into a very small number of non-preemptive sets which require much less RAM for stack. Unfortunately, up to now the research focus has been on timing guarantees rather than the optimization of RAM usage. Furthermore, these techniques do not apply to multiprocessor architectures, which are likely to be widely used in future microcontrollers. This paper presents a fast and simple algorithm for sharing resources in multiprocessor systems, together with an innovative procedure for assigning preemption thresholds to tasks. This allows us to guarantee the schedulability of hard real-time task sets while minimizing RAM usage. The experimental part shows the effectiveness of a simulated-annealing-based tool that finds a near-optimal task allocation. When used in conjunction with our preemption threshold assignment algorithm, the tool further reduces the RAM usage in multiprocessor systems.
['P. Gai', 'Giuseppe Lipari', 'M. Di Natale']
Minimizing memory utilization of real-time task sets in single and multi-processor systems-on-a-chip
463,938
Corrections to "Genetic algorithm optimization of multidimensional grayscale soft morphological filters with applications in film archive restoration"
['Mahmoud S. Hamid', 'Neal R. Harvey', 'Stephen Marshall']
Corrections to "Genetic algorithm optimization of multidimensional grayscale soft morphological filters with applications in film archive restoration"
57,808
Multi-gate CMOS devices promise to usher in an era of transistors with good electrostatic integrity at the sub-22nm nodes, which makes it essential to rethink traditional approaches to designing low-leakage digital logic and sequential elements formerly used in high-performance planar single-gate technologies. In the current work, we explore the design space of symmetric (Symm-Φ_G) and asymmetric gate workfunction (Asymm-Φ_G) FinFET logic gates, latches, and flip-flops for optimal trade-offs in leakage vs. delay and temperature in a high-performance FinFET technology. We demonstrate, using mixed-mode Sentaurus technology computer-aided design (TCAD) device simulations, that Asymm-Φ_G shorted-gate n/p-FinFETs, which use both workfunctions corresponding to typical high-performance n/p-FinFETs, yield over two orders of magnitude lower leakage without excessive degradation in on-state current, in comparison to Symm-Φ_G shorted-gate (SG) FinFETs, placing them in a better position than back-gate biased independent-gate (IG) FinFETs for leakage reduction. Results for elementary logic gates like INV, NAND2, NOR2, XOR2, and XNOR2 using Asymm-Φ_G SG-mode FinFETs indicate that they are more optimally located in the leakage-delay spectrum in comparison to the most versatile configurations possible by mixing corresponding Symm-Φ_G SG- and IG-mode FinFETs. Latches and flip-flops, however, require an astute combination of Symm-Φ_G and Asymm-Φ_G FinFETs to optimize leakage, delay, and setup time simultaneously.
['Ajay N. Bhoj', 'Niraj K. Jha']
Design of ultra-low-leakage logic gates and flip-flops in high-performance FinFET technology
129,921
Significant properties of the maximum likelihood (ML) estimate are consistency, normality, and efficiency. However, these properties have been proven to hold only as the sample size approaches infinity. Many researchers warn that the behavior of the ML estimator with small sample sizes is largely unknown. Yet in real tasks we usually do not have enough data to fully satisfy the conditions for an optimal ML estimate. The question we discuss in this article is how much data we need to estimate a Gaussian model that provides sufficiently accurate likelihood estimates. This issue is addressed with respect to the dimension of the space, and the possibility of ill-conditioned data is taken into account.
['Josef Psutka', 'Josef Psutka']
Sample Size for Maximum Likelihood Estimates of Gaussian Model
650,991
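The small-sample effect discussed in the abstract above can be illustrated with a one-dimensional stand-in (the paper's setting is multivariate): the ML variance estimator divides by n, so its expectation is (n-1)/n times the true variance, a bias that only vanishes as the sample size grows. The function names and the Monte Carlo setup below are illustrative.

```python
# Small-sample behavior of the ML variance estimate of a 1-D Gaussian.
# The ML estimator divides by n (not n-1), so E[var_hat] = (n-1)/n * sigma^2:
# a noticeable bias exactly in the small-sample regime discussed above.
import random

def ml_mean_var(xs):
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n  # ML estimate: divide by n
    return mu, var

def avg_ml_var(n, trials=2000, seed=0):
    # Average the ML variance estimate over many samples of size n
    # drawn from a standard normal (true variance = 1).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        total += ml_mean_var(xs)[1]
    return total / trials

# For n = 5 the expected ML variance is 4/5 = 0.8; for large n it approaches 1.
```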
Power consumption is a critical concern in communications over wireless sensor networks (WSN). In this paper, we address the rate-allocation problem for Slepian-Wolf coding of multiple correlated sources. The goal is to find the optimal rate-point that allows lossless reconstruction of the sources, while minimizing the overall transmission power consumption of the WSN under an exponential cost model. A novel water-filling algorithm to be performed by the receiver is proposed to solve the problem in a recursive manner. The feasibility and optimality of the proposed solution are analyzed mathematically and verified experimentally. Compared to the conventional Lagrangian-multiplier approach, our algorithm achieves dramatic reduction in computational complexity.
['Wei Liu', 'Lina Dong', 'Wenjun Zeng']
Power-efficient rate allocation for Slepian-Wolf coding over wireless sensor networks
342,946
This paper presents techniques for generating on-chip buses suitable for dynamically integrating hardware modules into an FPGA-based SoC by partial reconfiguration. The buses permit direct connections of master and slave modules to the bus in combination with flexible fine-grained module placement and with minimized latency and area overheads. A test system demonstrates a transfer rate of 800 MB/s while providing extremely high placement flexibility.
['Dirk Koch', 'Christian Haubelt', 'Jürgen Teich']
Efficient Reconfigurable On-Chip Buses for FPGAs
239,127
Pattern matching algorithms, which may be realized via associative memories, require further improvements in both accuracy and power consumption to achieve more widespread use in real-world applications. In this work we utilized a memristive crossbar to combine computation and memory in an approximate Hamming distance computing architecture for an associative memory. For classifying handwritten digits from the MNIST data-set, we showed that using the Hamming distance rather than the traditional dot product increased accuracy, and decreased power consumption by 100×. Moreover, we showed that we can trade-off accuracy to save additional power or vice-versa by adjusting the input voltage. This trade-off may be adjusted for the architecture depending on its application. Our architecture consumed 200× less power than other previously proposed Hamming distance associative memory architectures, due to the use of memristive devices, and is 256× faster than prior work due to our leveraging of in-memory computation. Improved associative memories should prove useful for GPUs, handwriting recognition, DNA sequence matching, object detection, and other applications.
['Mohammad Mahmoud A. Taha', 'Walt Woods', 'Christof Teuscher']
Approximate in-memory Hamming distance calculation with a memristive associative memory
888,091
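A digital stand-in for the lookup operation described in the abstract above: the memristive crossbar evaluates Hamming distances in analog, but functionally an associative memory returns the stored pattern closest to the query, as in this sketch (pattern contents and labels are illustrative).

```python
# Functional sketch of a Hamming-distance associative memory: the stored
# pattern with the fewest differing bits from the query wins. The analog
# crossbar in the paper approximates exactly this comparison in-memory.

def hamming(a, b):
    # Number of positions where two equal-length bit-vectors differ.
    return sum(x != y for x, y in zip(a, b))

def associative_lookup(memory, query):
    # memory: list of (label, bits); returns the label of the closest pattern.
    return min(memory, key=lambda item: hamming(item[1], query))[0]

# Illustrative stored patterns (well separated so noisy lookups are unambiguous):
mem = [
    ("low",  [0, 0, 0, 0, 1, 1, 1, 1]),
    ("high", [1, 1, 1, 1, 0, 0, 0, 0]),
    ("alt",  [1, 0, 1, 0, 1, 0, 1, 0]),
]
```

A query with one flipped bit still retrieves the intended pattern, which is the error tolerance that makes Hamming-distance matching attractive for noisy inputs like handwritten digits.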
The effects of four hypertext learning environments with a hierarchical graphical overview were studied on the coherence of the node sequence, extraneous load and comprehension. Navigation patterns were influenced by the type of overview provided (i.e., dynamic, static) and whether navigation was restricted (i.e., restricted, non-restricted). It was hypothesised that redundant use of the overview for inducing a high-coherence reading sequence would result in high extraneous load and low comprehension. Coherence was higher in the dynamic than in the static conditions. Coherence was also higher in the restricted than in the non-restricted conditions. Mental effort as a measure of extraneous load was higher at the end than at the beginning of the learning phase, especially in the dynamic restricted and the static non-restricted conditions, although there was no significant interaction. Comprehension was lowest in the dynamic restricted condition and highest in the dynamic non-restricted and static restricted conditions. Low comprehension in the dynamic restricted condition indicates that overviews can become redundant for reading sequence coherence, negatively impacting comprehension. The evidence suggests that severe restriction of navigation paths should be avoided and that continuous use of overviews such as in dynamic overviews may be detrimental to learning.
['Eniko Bezdan', 'Liesbeth Kester', 'Paul A. Kirschner']
The influence of node sequence and extraneous load induced by graphical overviews on hypertext learning
288,571
Modern multislice computed tomography (CT) scanners produce isotropic CT images with a thickness of 0.6 mm. These CT images offer detailed information on lung cavities, which could be used for better surgical planning in treating lung cancer. The major challenge in developing a surgical planning system is the automatic segmentation of lung lobes by identifying the lobar fissures. This paper presents a lobe segmentation algorithm that uses a two-stage approach: (1) adaptive fissure sweeping to find fissure regions and (2) wavelet transform to identify the fissure locations and curvatures within these regions. Tested on isotropic CT image stacks from nine anonymous patients with pathological lungs, the algorithm yielded an accuracy of 76.7%-94.8% with strict evaluation criteria. In comparison, surgeons obtain an accuracy of 80% for localizing the fissure regions in clinical CT images with a thickness of 2.5-7.0 mm. This paper also describes a procedure for visualizing lung lobes in three dimensions using the software Amira and the segmentation algorithm. The procedure, including the segmentation, needed about 5 min per patient. These results show promising potential for developing an automatic algorithm to segment lung lobes for surgical planning in treating lung cancer.
['Qiao Wei', 'Yaoping Hu', 'Gary Gelfand', 'John H. MacGregor']
Segmentation of Lung Lobes in High-Resolution Isotropic CT Images
133,431
In this paper, we describe a supervised four-dimensional (4D) light field segmentation method that uses a graph-cut algorithm. Since 4D light field data has implicit depth information and contains redundancy, it differs from simple 4D hyper-volume. In order to preserve redundancy, we define two neighboring ray types (spatial and angular) in light field data. To obtain higher segmentation accuracy, we also design a learning-based likelihood, called objectness, which utilizes appearance and disparity cues. We show the effectiveness of our method via numerical evaluation and some light field editing applications using both synthetic and real-world light fields.
['Hajime Mihara', 'Takuya Funatomi', 'Kenichiro Tanaka', 'Hiroyuki Kubo', 'Yasuhiro Mukaigawa', 'Hajime Nagahara']
4D light field segmentation with spatial and angular consistencies
673,308
Electrical signaling in neurons is mediated by the opening and closing of large numbers of individual ion channels. The ion channels' state transitions are stochastic and introduce fluctuations in the macroscopic current through ion channel populations. This creates an unavoidable source of intrinsic electrical noise for the neuron, leading to fluctuations in the membrane potential and spontaneous spikes. While this effect is well known, the impact of channel noise on single neuron dynamics remains poorly understood. Most results are based on numerical simulations. There is no agreement, even in theoretical studies, on which ion channel type is the dominant noise source, nor how inclusion of additional ion channel types affects voltage noise. Here we describe a framework to calculate voltage noise directly from an arbitrary set of ion channel models, and discuss how this can be used to estimate spontaneous spike rates.
["Cian O'Donnell", 'Mark Van Rossum']
Systematic analysis of the contributions of stochastic voltage gated channels to neuronal noise
166,681
Heterogeneous wireless networks (HetNets) provide a powerful approach to meeting the dramatic mobile traffic growth, but also impose a significant challenge on backhaul. Caching and multicasting at macro and pico base stations (BSs) are two promising methods to support massive content delivery and reduce backhaul load in HetNets. In this paper, we jointly consider caching and multicasting in a large-scale cache-enabled HetNet with backhaul constraints. We propose a hybrid caching design consisting of identical caching in the macro-tier and random caching in the pico-tier, and a corresponding multicasting design. By carefully handling different types of interferers and adopting appropriate approximations, we derive tractable expressions for the successful transmission probability in the general signal-to-noise ratio (SNR) and user density region as well as the high SNR and user density region, utilizing tools from stochastic geometry. Then, we consider the successful transmission probability maximization by optimizing design parameters, which is a very challenging mixed discrete-continuous optimization problem. By exploring structural properties, we obtain a near optimal solution with superior performance and manageable complexity. This solution achieves better performance in the general region than any asymptotically optimal solution, under a mild condition. The analysis and optimization results provide valuable design insights for practical cache-enabled HetNets.
['Ying Cui', 'Dongdong Jiang']
Analysis and Optimization of Caching and Multicasting in Large-Scale Cache-Enabled Heterogeneous Wireless Networks
711,894
In this paper, a DCT-based multipurpose image watermarking algorithm is proposed. Through adopting dither modulation in subimages gained by subsampling, two independent robust watermarks can be embedded in the original image. Experimental results demonstrate the effectiveness of the proposed algorithm.
['Fan Gu', 'Zhe-Ming Lu', 'Jeng-Shyang Pan']
Multipurpose image watermarking in DCT domain using subsampling
344,825
Today's highly-networked visualization computing environments include systems from a wide variety of hardware vendors, each running its own operating system and sporting its own input, output, display, and storage peripherals. Faced with such a bewildering variety of hardware and software, today's visualization user is in dire need of software systems that integrate these resources over a network and allow him or her to take maximum advantage of them with a minimum of hassle and networking technical knowledge. Video equipment has become one such network resource. Visualization video equipment is used to record visualization animations, process the video signal, play back animations at varying speeds forward and backward, and edit animations into polished final productions. Computer control of video equipment allows many of these operations to be performed automatically or through slick graphical user interfaces. However, the single serial communications line connecting the video device to a host means that one can only access that device via that host. This paper discusses the Video Tools software developed at the San Diego Supercomputer Center (SDSC) to overcome these restrictions and turn a site's video equipment into a network-accessible resource.
['David R. Nadeau', 'Michael J. Bailey']
Network video device control
222,385
We introduce Quantile Boost (QBoost) algorithms which predict conditional quantiles of the response of interest for regression and binary classification. Quantile Boost Regression (QBR) performs gradient descent in functional space to minimize the objective function used by quantile regression (QReg). In the classification scenario, the class label is defined via a hidden variable, and the quantiles of the class label are estimated by fitting the corresponding quantiles of the hidden variable. An equivalent form of the definition of quantile is introduced, whose smoothed version is employed as the objective function, which is maximized by gradient ascent in functional space to get the Quantile Boost Classification (QBC) algorithm. Extensive experiments show that QBoost performs better than the original QReg and other alternatives for regression and classification. Furthermore, QBoost is more robust to noisy predictors.
['Songfeng Zheng']
Boosting based conditional quantile estimation for regression and binary classification
537,445
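The QBR abstract above describes boosting on the quantile regression objective. As an aside, the check (pinball) loss and the pointwise negative gradient that such a boosting step would fit a base learner to can be sketched as follows; this is an illustrative snippet under standard quantile regression definitions, not code from the paper:

```python
# Illustrative sketch (not the paper's implementation): the check (pinball) loss
# minimized by quantile regression, and the pointwise negative gradient that a
# functional gradient descent procedure like QBR would fit its next base learner to.
def pinball_loss(y, f, tau):
    """Mean check loss of predictions f against targets y at quantile level tau."""
    total = 0.0
    for yi, fi in zip(y, f):
        r = yi - fi
        # tau * r for under-predictions, (tau - 1) * r for over-predictions
        total += tau * r if r >= 0 else (tau - 1.0) * r
    return total / len(y)

def negative_gradient(y, f, tau):
    """Negative functional gradient of the check loss at each data point."""
    return [tau if yi - fi >= 0 else tau - 1.0 for yi, fi in zip(y, f)]

# At tau = 0.5 the check loss is half the mean absolute error.
print(pinball_loss([1.0, 2.0, 3.0], [2.0, 2.0, 2.0], 0.5))  # 0.3333333333333333
```

For tau closer to 1, under-predictions are penalized more heavily, which is what pushes the fitted function toward the upper conditional quantile.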
The impact of information technology on business operations is widely recognized and its role in the emergence of new business models is well-known. In order to leverage the benefits of IT-supported business processes the security of the underlying information systems must be managed. Various so-called best-practice models and information security standards have positioned themselves as generic solutions for a broad range of risks. In this paper we inspect the metamodel of the information security standard ISO 27001 and describe its application for a set of generalized phases in information security management. We conclude with a demonstration of its practicality by providing an example of how such a metamodel can be applied, before discussing potential future research.
['Danijel Milicevic', 'Matthias Goeken']
Application of models in information security management
120,937
We present a method for verifying partial correctness properties of imperative programs that manipulate integers and arrays by using techniques based on the transformation of constraint logic programs (CLP). We use CLP as a metalanguage for representing imperative programs, their executions, and their properties. First, we encode the correctness of an imperative program, say prog, as the negation of a predicate incorrect defined by a CLP program T. By construction, incorrect holds in the least model of T if and only if the execution of prog from an initial configuration eventually halts in an error configuration. Then, we apply to program T a sequence of transformations that preserve its least model semantics. These transformations are based on well-known transformation rules, such as unfolding and folding, guided by suitable transformation strategies, such as specialization and generalization. The objective of the transformations is to derive a new CLP program TransfT where the predicate incorrect is defined either by (i) the fact 'incorrect.' (and in this case prog is not correct), or by (ii) the empty set of clauses (and in this case prog is correct). In the case where we derive a CLP program such that neither (i) nor (ii) holds, we iterate the transformation. Since the problem is undecidable, this process may not terminate. We show through examples that our method can be applied in a rather systematic way, and is amenable to automation by transferring to the field of program verification many techniques developed in the field of program transformation.
['Emanuele De Angelis', 'Fabio Fioravanti', 'Alberto Pettorossi', 'Maurizio Proietti']
Verification of Imperative Programs by Constraint Logic Program Transformation
163,941
This paper introduces performance aspects as new optimization criteria when generating Gate Matrix Layouts. A new layout model is presented that limits the amount of parasitic capacitance in signal paths and the resistance in power supply lines. The performance considerations are combined with a new layout strategy that improves circuit performance with little or no area penalty. An Automatic Transistor Layout Synthesizer (ATLAS) implements the proposed changes to the Gate Matrix Layout style.
['Bjarne Hald', 'Jan Madsen']
Performance Aspects of Gate Matrix Layout
321,619
The MPEG reconfigurable video coding (RVC) framework is a new standard under development by MPEG that aims at providing a unified high-level specification of current MPEG video coding technologies. In this framework, a decoder is built as a configuration of video coding modules taken from the standard "MPEG toolbox library". The elements of the library are specified by a textual description that expresses the I/O behavior of each module and by a reference software written using the CAL Actor Language. A decoder configuration is written in an XML dialect by connecting a set of CAL modules. Code generators are fundamental supports that enable the direct transformation of a high level specification to efficient hardware and software implementations. This paper presents a synthesis tool that from a CAL dataflow program generates C code and an associated SystemC model. Experimental results of the RVC Expert's MPEG-4 simple profile decoder synthesis are reported. The generated code and the associated SystemC model are validated against the original CAL description which is simulated using the open dataflow environment.
['Ghislain Roquier', 'Matthieu Wipliez', 'Mickaël Raulet', 'Jorn W. Janneck', 'Ian D. Miller', 'David B. Parlour']
Automatic software synthesis of dataflow program: An MPEG-4 simple profile decoder case study
430,293
Distribution middleware is often integrated as a COTS, providing distribution facilities for critical, embedded or large-scale applications. So far, typical middleware does not come with a complete analysis of their behavioral properties. In this paper, we present our work on middleware modeling and the verification of its behavioral properties; the study is applied to our middleware architecture: PolyORB. Then we present the tools and techniques deployed to actually verify the behavioral properties of our model: Petri nets, temporal logic and advanced algorithms to reduce the size of the state space. Finally, we detail some properties we verify and assess our methodology.
['Jérôme Hugues', 'Thomas Vergnaud', 'Laurent Pautet', 'Yann Thierry-Mieg', 'Souheib Baarir', 'Fabrice Kordon']
On the Formal Verification of Middleware Behavioral Properties
330,649
The conventional embedded wavelet image coder exploiting the adjacent neighbors is limited to efficiently compress the sign coefficients, since the wavelet coefficients are highly correlated along the dominant image features, such as edges or contours. To solve the problem, in this paper, we propose the direction-adaptive sign context modeling by adaptively exploiting the neighbors suitable to the dominant image features. Experimental results show that the sign coding based on the proposed context modeling reduces the sign bits up to 5.5% compared to the conventional sign coding method.
['Sungjei Kim', 'Jinwoo Jeong', 'Yong-Goo Kim', 'Yungho Choi', 'Yoonsik Choe']
Direction-adaptive context modeling for sign coding in embedded wavelet image coder
501,917
Chip Multiprocessor (CMP) memory systems suffer from the effects of destructive thread interference. This interference reduces performance predictability because it depends heavily on the memory access pattern and intensity of the co-scheduled threads. In this work, we confirm that all shared units must be thread-aware in order to provide memory system fairness. However, the current proposals for fair memory systems are complex as they require an interference measurement mechanism and a fairness enforcement policy for all hardware-controlled shared units. Furthermore, they often sacrifice system throughput to reach their fairness goals which is not desirable in all systems. In this work, we show that our novel fairness mechanism, called the Dynamic Miss Handling Architecture (DMHA), is able to reduce implementation complexity by using a single fairness enforcement policy for the complete hardware-managed shared memory system. Specifically, it controls the total miss bandwidth available to each thread by dynamically manipulating the number of Miss Status Holding Registers (MSHRs) available in each private data cache. When fairness is chosen as the metric of interest and we compare to a state-of-the-art fairness-aware memory system, DMHA improves fairness by 26% on average with the single program baseline. With a different configuration, DMHA improves throughput by 13% on average compared to a conventional memory system.
['Magnus Jahre', 'Lasse Natvig']
A light-weight fairness mechanism for chip multiprocessor memory systems
320,702
The F-index of a graph is the sum of the cubes of the degrees of its vertices. In this paper, we investigate the F-indices of unicyclic graphs by introducing some transformations, and characterize the unicyclic graphs with the first five largest F-indices and the unicyclic graphs with the first two smallest F-indices, respectively.
['Ruhul Amin', 'Sk. Md. Abu Nayeem']
Ordering Unicyclic Graphs with Respect to F-index
870,650
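The F-index defined in the abstract above is directly computable from a graph's degree sequence. The following is an illustrative sketch (not the authors' code) using a plain adjacency-dict representation:

```python
# Illustrative sketch: the F-index of a graph, i.e. the sum of the cubes
# of the vertex degrees. The adjacency-dict representation is an assumption
# made for this example, not something prescribed by the paper.
def f_index(adjacency):
    """adjacency: dict mapping each vertex to the set of its neighbours."""
    return sum(len(neigh) ** 3 for neigh in adjacency.values())

# Example: the cycle C4 is a unicyclic graph in which every vertex has
# degree 2, so F(C4) = 4 * 2**3 = 32.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(f_index(c4))  # 32
```

Appending a pendant vertex to the cycle raises one degree from 2 to 3, illustrating how the orderings studied in the paper depend on such local transformations.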
Emerging application domains such as interactive vision, animation, and multimedia collaboration display dynamic scalable parallelism and high-computational requirements, making them good candidates for executing on parallel architectures such as SMPs and clusters of SMPs. Stampede is a programming system that has many of the needed functionalities such as high-level data sharing, dynamic cluster-wide threads and their synchronization, support for task and data parallelism, handling of time-sequenced data items, and automatic buffer management. We present an overview of Stampede, the primary data abstractions, the algorithmic basis of garbage collection, and the issues in implementing these abstractions on a cluster of SMPs. We also present a set of micromeasurements along with two multimedia applications implemented on top of Stampede, through which we demonstrate the low overhead of this runtime and that it is suitable for the streaming multimedia applications.
['Umakishore Ramachandran', 'Rishiyur S. Nikhil', 'James M. Rehg', 'Yavor Angelov', 'Arnab Paul', 'Sameer Adhikari', 'Kenneth M. Mackenzie', 'Nissim Harel', 'Kathleen Knobe']
Stampede: a cluster programming middleware for interactive stream-oriented applications
232,165
Model-Driven Software Engineering (MDSE) has been successfully applied to accelerate the development of Web Services (WS). Despite the successes identified by a systematic mapping in this context, there was no reported method that takes advantage of modelling facilities outside the development phases by exposing the models as a feature to the end-users. Therefore, in this paper, we study how it would be possible to push MDSE to be more than a method to accelerate development. In our vision, MDSE could evolve into a paradigm suitable for creating WS systems that present modelling principles from MDSE as features visible to the end-users, while employing tools originally meant for MDSE development for several other uses. We refer to these MDSE-evolved WS systems as Model Oriented Web Services (MOWS). We also provide a pattern and a development process to guide the development of MOWS. This paper includes case studies that provide evidence of its feasibility. We also discuss advantages, disadvantages and suitability for real-world applications.
['Thiago Gottardi', 'Rosana T. V. Braga']
Model-Oriented Web Services
744,954
Motivation: Propagating functional annotations to sequence-similar, presumably homologous proteins lies at the heart of the bioinformatics industry. Correct propagation is crucially dependent on the accurate identification of subtle sequence motifs that are conserved in evolution. The evolutionary signal can be difficult to detect because functional sites may consist of non-contiguous residues while segments in-between may be mutated without affecting fold or function. Results: Here, we report a novel graph clustering algorithm in which all known protein sequences simultaneously self-organize into hypothetical multiple sequence alignments. This eliminates noise so that non-contiguous sequence motifs can be tracked down between extremely distant homologues. The novel data structure enables fast sequence database searching methods which are superior to profile-profile comparison at recognizing distant homologues. This study will boost the leverage of structural and functional genomics and opens up new avenues for data mining a complete set of functional signature motifs. Availability: http://www.bioinfo.biocenter.helsinki.fi/gtg Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
['Andreas Heger', 'Swapan Mallick', 'Chris Wilton', 'Liisa Holm']
The global trace graph, a novel paradigm for searching protein sequence databases
432,760
Full text is available as a scanned copy of the original print version. A printable copy (PDF file) of the complete article (158K) is available.
['Sylvia E. Barrett', 'Latanya Sweeney']
Y2K BEACON: An Intelligent Web-Based Crisis Center for Disseminating Year 2000 Biomedical Equipment Compliance Information
679,278
We present an algorithm for active learning (adaptive selection of training data) within the context of semi-supervised multi-task classifier design. The semi-supervised multi-task classifier exploits manifold information provided by the unlabeled data, while also leveraging relevant information across multiple data sets. The active-learning component defines which data would be most informative to classifier design if the associated labels are acquired. The framework is demonstrated through application to a real landmine detection problem.
['Hui Li', 'Xuejun Liao', 'Lawrence Carin']
Active learning for semi-supervised multi-task learning
389,003
Highlights: Novel fuzzy rule-based classifiers (FRBCs) for financial data classification tasks are proposed. FRBCs are designed using multi-objective evolutionary optimization algorithms. A collection of FRBCs with various levels of accuracy-interpretability trade-off is generated. Benchmark financial data sets are used to evaluate the effectiveness of the proposed system. Highly accurate, interpretable and fast financial decision support is provided by our approach. Credit classification is an important component of critical financial decision making tasks such as credit scoring and bankruptcy prediction. Credit classification methods are usually evaluated in terms of their accuracy, interpretability, and computational efficiency. In this paper, we propose an approach for the automatic design of fuzzy rule-based classifiers (FRBCs) from financial data using multi-objective evolutionary optimization algorithms (MOEOAs). Our method generates, in a single experiment, an optimized collection of solutions (financial FRBCs) characterized by various levels of accuracy-interpretability trade-off. In our approach we address the complexity- and semantics-related interpretability issues, we introduce original genetic operators for the classifier's rule base processing, and we implement our ideas in the context of Non-dominated Sorting Genetic Algorithm II (NSGA-II), i.e., one of the presently most advanced MOEOAs. A significant part of the paper is devoted to an extensive comparative analysis of our approach and 24 alternative methods applied to three standard financial benchmark data sets, i.e., Statlog (Australian Credit Approval), Statlog (German Credit Approval), and Credit Approval (also referred to as Japanese Credit) sets available from the UCI repository of machine learning databases (http://archive.ics.uci.edu/ml).
Several performance measures including accuracy, sensitivity, specificity, and some number of interpretability measures are employed in order to evaluate the obtained systems. Our approach significantly outperforms the alternative methods in terms of the interpretability of the obtained financial data classifiers while remaining either competitive or superior in terms of their accuracy and the speed of decision making.
['Marian B. Gorzałczany', 'Filip Rudziński']
A multi-objective genetic optimization for fast, fuzzy rule-based credit classification with balanced accuracy and interpretability
561,410
Underwater acoustic networks (UANs) have drawn significant attention from both academia and industry in recent years. Even though many underwater MAC protocols have been proposed and studied through simulations and theoretical analysis, little work has been done to test and evaluate these protocols in a multi-hop real sea experiment. Due to the harsh acoustic channel conditions caused by a complex multi-path environment, fast-varying acoustic channels and heterogeneous channel quality, current simulators can hardly tell us how the protocols work in the real world. Along this direction, we conducted real sea experiments in the Atlantic Ocean with 9 nodes deployed to form a multi-hop string network. In this experiment, the performance of three representative MAC protocols, random access based UW-Aloha, handshaking based SASHA, and scheduling based pipelined transmission MAC (PTMAC), is compared and analyzed at both the packet behavior and node behavior levels. The end-to-end performance of these three protocols is also tested and studied in terms of throughput, delay, and packet delivery ratio. From the field experiment results, high packet loss rates and significant channel asymmetry, temporal and spatial transmission range uncertainty, and delayed data transmissions are found to have evident effects on MAC performance. We provide some inspiration for addressing these observed issues in MAC design for real multi-hop networks.
['Lina Pu', 'Yu Luo', 'Haining Mo', 'Zheng Peng', 'Jun-Hong Cui', 'Zaihan Jiang']
Comparing underwater MAC protocols in real sea experiment
337,353
To reduce motion artifacts, a new scanning configuration is proposed for tomosynthesis in dynamic reconstruction. In this new configuration, multiple x-ray sources are uniformly distributed on the circular scanning trajectory and move simultaneously. Numerical experiments are performed using two dynamic digital phantoms and the algebraic reconstruction technique. The reconstructed images of single-source tomosynthesis and multi-source tomosynthesis are compared and evaluated. The results show that multi-source tomosynthesis can reduce artifacts effectively, thus improving image quality. The advantages of multi-source tomosynthesis in dynamic reconstruction are important to cardiac imaging and respiratory imaging.
['Jiaju Peng', 'Zhao Jun']
Application of algebraic reconstruction technique of multi-source tomosynthesis in dynamic reconstruction
51,934
Age-Related Macular Degeneration Detection and Stage Classification Using Choroidal OCT Images
['Jingjing Deng', 'Xianghua Xie', 'Louise Terry', 'Ashley Wood', 'Nick White', 'Tom H. Margrain', 'R. V. North']
Age-Related Macular Degeneration Detection and Stage Classification Using Choroidal OCT Images
839,019
Perfect difference networks (PDNs) that are based on the mathematical notion of perfect difference sets have been shown to comprise an asymptotically optimal method for connecting a number of nodes into a network with diameter 2. Justifications for, and mathematical underpinning of, PDNs appear in a companion paper. In this paper, we compare PDNs and some of their derivatives to interconnection networks with similar cost/performance, including certain generalized hypercubes and their hierarchical variants. Additionally, we discuss point-to-point and collective communication algorithms and derive a general emulation result that relates the performance of PDNs to that of complete networks as ideal benchmarks. We show that PDNs are quite robust, both with regard to node and link failures that can be tolerated and in terms of blandness (not having weak spots). In particular, we prove that the fault diameter of PDNs is no greater than 4. Finally, we study the complexity and scalability aspects of these networks, concluding that PDNs and their derivatives allow the construction of very low diameter networks close to any arbitrary desired size and that, in many respects, PDNs offer optimal performance and fault tolerance relative to their complexity or implementation cost.
['Behrooz Parhami', 'Mikhail A. Rakov']
Performance, algorithmic, and robustness attributes of perfect difference networks
541,588
Finite Model Reasoning in Horn-SHIQ.
['Yazmin Angélica Ibáñez-García', 'Carsten Lutz', 'Thomas Schneider']
Finite Model Reasoning in Horn-SHIQ.
792,771
This letter presents a robust decision-feedback equalization design that mitigates the error-propagation problem for multiuser direct-sequence code-division multiple-access systems under multipath fading. Explicit constraints for signal energy preservation are imposed on the filter weight vector to monitor and maintain the quality of the hard decisions in the nonlinear feedback loop. Such a measure protects the desired signal power against the detrimental effect of erroneous past decisions, thus providing the leverage to curb error propagation.
['Zhi Tian']
Mitigating error propagation in decision-feedback equalization for multiuser CDMA
245,225
In a mobile telecommunication system the received signal envelope has a very large dynamic range. A gain control is then required to adjust the signal amplitude at the input of the Analog to Digital Converter, ADC. The Automatic Gain Control, AGC, must control this amplitude to avoid (or limit) ADC saturation and, at the same time, to ensure efficient usage of its dynamic range. In the state-of-the-art approach, the AGC loop adjustment is based on the measurement of the received signal power. This measurement can be made directly from the RF unit or after the ADC. In this contribution, we describe an approach based on indirect power measurement with low-complexity circuitry. In short, this method measures the probability of the signal exceeding a properly defined threshold, allowing power estimation.
['N. Piazzese', 'Giuseppe Avellone', 'Ettore Messina', 'Alberto Serratore']
Automatic Gain Control Loop Based on Threshold Crossing Rate Measurement
128,631
Experimenting platforms for wireless 60 GHz networking measurements are limited and extremely costly. The requirements for such a platform in terms of bandwidth and antenna capabilities are very high. For instance, the 802.11ad protocol uses channels with a bandwidth of 2.16 GHz and requires electronically steerable phased antenna arrays. Devices implementing this protocol are available as consumer-grade off-the-shelf hardware but are typically a black box which barely allows any insights for research purposes. In this paper, we show the hidden monitoring capabilities of such a consumer-grade 60 GHz device, and explain how to access lower layer parameters such as modulation and coding schemes, antenna steering, and packet decoding. Moreover, we present an extensive set of experiments showing the behavior of these parameters by means of the aforementioned monitoring capabilities.
['Adrian Loch', 'Guillermo Bielsa', 'Joerg Widmer']
Practical lower layer 60 GHz measurements using commercial off-the-shelf hardware
897,805
In recent years academic research has focused on understanding and modeling the survey response process. This paper examines an understudied systematic response tendency in surveys: the extent to which observed responses are subject to state dependence, i.e., response carryover from one item to another independent of specific item content. We develop a statistical model that simultaneously accounts for state dependence, item content, and scale usage heterogeneity. The paper explores how state dependence varies by response category, item characteristics, item sequence, respondent characteristics, and whether it becomes stronger as the survey progresses. Two empirical applications provide evidence of substantial and significant state dependence. We find that the degree of state dependence depends on item characteristics and item sequence, and it varies across individuals and countries. The article demonstrates that ignoring state dependence may affect reliability and predictive validity, and it provides recommendations for survey researchers.
['Martijn Jong', 'Donald R. Lehmann', 'Oded Netzer']
State-Dependence Effects in Surveys
965,443
Multipath routing is an alternative routing technique, which uses redundant paths to deliver data from source to destination. Compared to single path routing protocols, it can address reliability, delay, and energy consumption issues. Thus, multipath routing is a potential technique to overcome the long propagation delay and adverse link conditions in underwater environments. However, there are still some problems in multipath routing. For example, the multiple paths may interfere with each other and cause a large end-to-end delay difference amongst the paths. This paper proposes a novel multipath routing structure and a conflict-free algorithm based on a TDMA scheme. The forwarding nodes are selected based on propagation delay and location information. This special multipath routing structure not only ensures parallel transmission over multiple paths without collision, but also achieves a small end-to-end delay difference amongst the paths. Simulation results show that the multipath routing scheme proposed in this paper outperforms the traditional strategy.
['Weigang Bai', 'Haiyan Wang', 'Xiaohong Shen', 'Ruiqin Zhao', 'Yuzhi Zhang']
Minimum delay multipath routing based on TDMA for underwater acoustic sensor network
641,153
The development of a software project often requires more time and effort than originally expected. However, at the end of a project, it is hard to determine which components required more time or were more complex than originally planned. In addition, project managers are interested to know what could have happened if some requirements had been dropped or if they had been implemented in a different time sequence. Lagrein is a tool that supports managers and developers in answering such questions.
['Andrejs Jermakovics', 'Marco Scotto', 'Alberto Sillitti', 'Giancarlo Succi']
Lagrein: Visualizing User Requirements and Development Effort
126,462
Many existing rate control schemes consider spatial quality only. As a result, this may introduce a large distortion variation over frames, to which people are sensitive. In fact, people are also sensitive to temporal quality. We design a real-time rate control based on a PID controller to achieve a better tradeoff between spatial and temporal quality. Depending on the estimated number of bits per frame and the buffer status, different target bits for each frame are used in order to reduce the flickering effect (smaller distortion variation), which is one factor affecting temporal quality. Experimental results suggest that our scheme can obtain more consistent quality while keeping high spatial quality.
['Chi-Wah Wong', 'Oscar C. Au', 'Hong-Kwai Lam']
PID-based real-time rate control
306,701
Dynamic time warping, or DTW, is a powerful and domain-general sequence alignment method for computing a similarity measure. Dynamic programming-based techniques such as DTW are now the backbone and driver of most bioinformatics methods and discoveries. In neuroscience it has had far less use, though this has begun to change. We wanted to explore new ways of applying DTW, not simply as a measure with which to cluster or compare similarity between features but in a conceptually different way. We have used DTW to provide a more interpretable spectral description of the data, compared to standard approaches such as the Fourier and related transforms. The DTW approach and the standard discrete Fourier transform (DFT) are assessed against benchmark measures of neural dynamics. These include EEG microstates, EEG avalanches, the sum squared error (SSE) from a multilayer perceptron (MLP) prediction of the EEG timeseries, and the simultaneously acquired FMRI BOLD signal. We explored the relationships between these variables of interest in an EEG-FMRI dataset acquired during a standard cognitive task, which allowed us to explore how DTW performs differently in different task settings. We found that despite strong correlations between DTW and DFT spectra, DTW was a better predictor for almost every measure of brain dynamics. Using these DTW measures, we show that predictability is almost always higher in task than in rest states, which is consistent with other theoretical and empirical findings, providing additional evidence for the utility of the DTW approach.
['Martin Dinov', 'Romy Lorenz', 'Gregory Scott', 'David J. Sharp', 'Erik D. Fagerholm', 'Robert Leech']
Novel Modeling of Task vs. Rest Brain State Predictability Using a Dynamic Time Warping Spectrum: Comparisons and Contrasts with Other Standard Measures of Brain Dynamics.
728,210
Effective Feature Subset Selection (FSS) is an important step when designing engineering systems that classify complex data in real time. The electromyographic (EMG) signal-based walking assistance system is a typical system that requires an efficient computational architecture for classification. The performance of such a system depends largely on a criterion function that assesses the quality of selected feature subsets. However, many well-known conventional criterion functions use less relevant features for classification or have a high computational cost. Here, we propose a new criterion function that provides more effective FSS. The proposed criterion function, known as a separability index matrix (SIM), selects features pertinent to the classification task at a very low computational cost. This new function leads to a simple feature selection algorithm when combined with the forward search paradigm. We performed extensive experimental comparisons in terms of classification accuracy and computational cost to confirm that the proposed algorithm outperformed other filter-type feature selection methods based on various distance measures, including inter-intra, Euclidean, Mahalanobis, and Bhattacharyya distances. We then applied the proposed method to a gait phase recognition problem in our EMG signal-based walking assistance system. We demonstrated that the proposed method performed competitively when compared with other wrapper-type feature selection methods in terms of class separability and recognition rate.
['JeongSu Han', 'Sang Wan Lee', 'Zeungnam Bien']
Feature subset selection using separability index matrix
283,601
We present a resolution complete point to point path planner for up to 6 degrees of freedom (DOF) robot manipulators, based on a discretized configuration space (C-space) representation. The robot operates in an initially known workspace (W-space). Vision integration in an easy way makes the planner capable of dealing with initially unknown obstacles. Furthermore, a Kohonen map based reorganization of the C-space increases the resolution in collision free regions of the C-space and enables path finding even for difficult paths. Experiments with low on-line computation times (lying in the order of a fraction of a second to a few seconds) demonstrate the effectiveness of the planner.
['Eleni Ralli', 'Gerd Hirzinger']
A global and resolution complete path planner for up to 6DOF robot manipulators
90,484
This paper presents the automatic microassembly of a miniaturized gear system by hybrid vision-force control using feedback data from the vision system and a 3-DOF force sensor. The assembly process consists of three phases: visual positioning, searching for the meshing state, and inserting. Visual feedback is used to achieve coarse positioning and to guide the grasping and transporting of microparts during the visual positioning phase. During the searching phase, a fuzzy PID controller is used to control the contact force on the z axis, and a fuzzy logic strategy is developed to search for the meshing state using the force feedback on the x and y axes. A tolerance compensation movement supports the inserting task to avoid blocking and to prevent the microparts from being damaged. The assembly experiment of three planetary gears validates the hybrid vision-force control strategy.
['Hui Xie', 'Liguo Chen', 'Lining Sun', 'Weibin Rong']
Hybrid Vision-Force Control for Automatic Assembly of Miniaturized Gear System
294,400
This paper proposes that a mechanism through which a firm's location in the interorganizational network influences the firm's internal innovation activities is modifying the amount of information flowing within the firm. Exploring a firm's internal innovation activities, I hypothesized that structural centrality of an inventor in the intrafirm coinventing network is associated with her impact on her firm's innovation activities in an inverted-U-shape relation. I further hypothesized that this relationship is moderated by the firm's centrality and span of structural holes in the interfirm network. I found strong support for these hypotheses in a longitudinal study of eight large pharmaceutical firms. The findings in this paper, apart from having managerial implications, have implications for research on alliances, network studies, and innovation processes.
['Srikanth Paruchuri']
Intraorganizational Networks, Interorganizational Networks, and the Impact of Central Inventors: A Longitudinal Study of Pharmaceutical Firms
334,641
This paper concerns document ranking in information retrieval. In information retrieval systems, the widely accepted probability ranking principle (PRP) suggests that, for optimal retrieval, documents should be ranked in order of decreasing probability of relevance. In this paper, we present a new document ranking paradigm, arguing that a better, more general solution is to optimize the top-n ranked documents as a whole, rather than ranking them independently. Inspired by the Modern Portfolio Theory in finance, we quantify a ranked list of documents on the basis of its expected overall relevance (mean) and its variance; the latter serves as a measure of risk, which has rarely been studied for document ranking in the past. Through the analysis of the mean and variance, we show that an optimal rank order is the one that maximizes the overall relevance (mean) of the ranked list at a given risk level (variance). Based on this principle, we then derive an efficient document ranking algorithm. It extends the PRP by considering both the uncertainty of relevance predictions and correlations between retrieved documents. Furthermore, we quantify the benefits of diversification, and theoretically show that diversifying documents is an effective way to reduce the risk of document ranking. Experimental results on the collaborative filtering problem confirm the theoretical insights with improved recommendation performance, e.g., achieving an over 300% performance gain over PRP-based ranking on user-based recommendation.
['Jun Wang']
Mean-Variance Analysis: A New Document Ranking Theory in Information Retrieval
238,783
Timing is an important parameter necessary to ensure the correctness of a design. Timed asynchronous designs can have complex timing paths that include combinational cycles. Commercial electronic design automation (EDA) tools do not support asynchronous designs because timing graphs are required to be acyclic. This paper reports on a methodology that enables commercial tools to support full cyclic path timing validation of timed asynchronous designs.
['William Lee', 'Tannu Sharma', 'Kenneth S. Stevens']
Path Based Timing Validation for Timed Asynchronous Design
673,065
We present a system for luggage visualization where any object is clearly distinguishable from its neighbors. It supports virtual unpacking by visually moving any object away from its original pose. To achieve this, we first apply a volume segmentation guided by a confidence measure that recursively splits connected regions until semantically meaningful objects are obtained, and a label volume whose voxels specify the object IDs is generated. The original luggage dataset and the label volume are visualized by volume rendering. Through an automatic coloring algorithm, any pair of objects whose projections are adjacent in an image are assigned distinct hues, which are modulated onto a transfer function both to reduce rendering cost and to improve smoothness across object boundaries. We have designed a layered framework to efficiently render a scene mixed with packed luggage, animated unpacking objects, and already unpacked objects put aside for further inspection. The system uses the GPU to quickly select unpackable objects that are not blocked by others to make the unpacking plausible.
['Wei Li', 'Gianluca Paladini', 'Leo Grady', 'Timo Kohlberger', 'Vivek Kumar Singh', 'Claus Bahlmann']
Luggage visualization and virtual unpacking
55,770
Previous research has shown the benefits of random-push Network Coding (NC) for P2P video streaming. On the other hand, scalable video coding provides graceful quality adaptation to heterogeneous network conditions. Nevertheless, packet scheduling for scalable media streaming with P2P NC is still a largely unexplored problem. Our ongoing research aims at designing a packet scheduling scheme that maximizes the quality of the video with minimal coordination among peers. In this work, we provide a preliminary description of our scheduling scheme and preliminary performance measurements.
['Anooq Muzaffar Sheikh', 'Attilio Fiandrotti', 'Enrico Magli']
Distributed scheduling for scalable P2P video streaming with Network Coding
109,408
Socially acceptable design of a ubiquitous system for monitoring elderly family members
['Sebastian Hoberg', 'Ludger Schmidt', 'Axel Hoffmann', 'Matthias Söllner', 'Jan Marco Leimeister', 'Christian Voigtmann', 'Klaus David', 'Julia Zirfas', 'Alexander Roßnagel']
Socially acceptable design of a ubiquitous system for monitoring elderly family members
764,353
Emotion simulation is very useful for modeling believable virtual characters in safety education software. In order to enhance the efficacy of emotion simulation for Web3D applications, a fuzzy production rule-based emotion model is presented. It integrates mechanisms for emotion decay and emotion filtering, and emotion intensity is calculated by a Mamdani inference model. A 3D interactive virtual environment is built in Virtools, in which a virtual character equipped with the emotion model can express emotions to others. The results show that the emotion simulation can make the interface of online education software friendlier.
['Zhen Liu', 'Shaohua He']
Emotion Simulation in Interactive Virtual Environment for Children's Safety Education
449,900
An Aerial-Ground Robotic Team for Systematic Soil and Biota Sampling in Estuarine Mudflats
['Pedro Deusdado', 'Eduardo Pinto', 'Magno Guedes', 'Francisco Marques', 'Paulo C. Rodrigues', 'André Lourenço', 'Ricardo Mendonça', 'André Silva', 'Pedro F. Santana', 'José Corisco', 'Marta Neves Rebelo de Almeida', 'Luís Portugal', 'Raquel Caldeira', 'José Barata', 'Luís Flores']
An Aerial-Ground Robotic Team for Systematic Soil and Biota Sampling in Estuarine Mudflats
701,246
Visual Debuggers and Deaf Programmers
['Marcos Devaner do Nascimento', 'Francisco Carlos de Mattos Brito Oliveira', 'Adriano Tavares de Freitas', 'Lidiane Castro Silva']
Visual Debuggers and Deaf Programmers
860,893
Image denoising has always been one of the standard problems in image processing and computer vision. It is desirable for a denoising method to preserve important image features, such as edges and corners, during its execution. Image denoising methods based on wavelet transforms have shown their excellence in providing efficient edge-preserving image denoising, because they provide a suitable basis for separating the noise from the image signal. This paper presents a novel edge-preserving image denoising technique based on wavelet transforms. The wavelet domain representation of the noisy image is obtained through its multi-level decomposition into wavelet coefficients by applying a discrete wavelet transform. A patch-based weighted-SVD filtering technique is used to effectively reduce noise while preserving important features of the original image. Experimental results, compared to other approaches, demonstrate that the proposed method achieves a very impressive gain in denoising performance.
['Paras Jain', 'Vipin Tyagi']
An adaptive edge-preserving image denoising technique using patch-based weighted-SVD filtering in wavelet domain
691,559
Multicell coordinated beamforming (CB) can mitigate intercell interference (ICI). However, existing work on CB focuses on systems that maximize overall throughput. This paper considers CB for systems with multiple receive antennas and further extends the CB to multiuser and multistream transmission. We consider fairness and pay close attention to cell-edge users. CB that maximizes the harmonic sum of signal-to-interference-plus-noise ratio (SINR) can be derived into a low-complexity algorithm that favors cell-edge users. Two iterative algorithms are developed. From the simulation results, the proposed algorithms can provide a better fifth-percentile user throughput compared with the CB algorithms minimizing mean square error, maximizing uplink SINR, and maximizing weighted sum rate. Furthermore, the computational complexity of the proposed algorithms is significantly lower than that maximizing the minimum SINR of users.
['Daewon Lee', 'Geoffrey Ye Li', 'Xiaolong-Long Zhu', 'Yusun Fu']
Multistream Multiuser Coordinated Beamforming for Cellular Networks With Multiple Receive Antennas
726,113
The gateway is a key component for sensor network deployments and the Internet of Things. Sensor deployments often tend towards low-power communication protocols such as Bluetooth Low Energy or IEEE 802.15.4. Gateways are essential to connect these devices to the Internet at large. Over time though, gateways have gained additional responsibilities as well. Sensors expect gateways to handle device-specific data translation and local processing while also providing services, such as time synchronization, to the low-power device. As a centralized computing resource, the gateway is also an obvious location for running local applications which interact with sensor data and control nearby actuators. Today, vendors and researchers often create their own device-specific gateways to handle these responsibilities. We propose a generic gateway platform capable of supporting the needs of many devices. In our architecture, devices provide a pointer, such as a URL, to descriptions of their interfaces. The gateway can download the interface descriptions and use them to determine how to interact with the device, translating its data to a usable format and enabling local services to communicate with it. The translated data is provided to services including user applications, local logging, device status monitoring, and cloud applications. By simultaneously supporting communication with many sensors, our gateway architecture can simplify future sensor network deployments and enable intelligent building applications.
['Bradford Campbell', 'Branden Ghena', 'Ye Sheng Kuo', 'Prabal Dutta']
Swarm Gateway: Demo Abstract
929,305
Given a connected graph G and a terminal set $$R \subseteq V(G)$$, Steiner tree asks for a tree that includes all of R with at most r edges for some integer $$r \ge 0$$. It is known from [ND12, Garey et al. [1]] that Steiner tree is NP-complete in general graphs. A split graph is a graph whose vertex set can be partitioned into a clique and an independent set. K. White et al. [2] established that Steiner tree in split graphs is NP-complete. In this paper, we present an interesting dichotomy: we show that Steiner tree on $$K_{1,4}$$-free split graphs is polynomial-time solvable, whereas Steiner tree on $$K_{1,5}$$-free split graphs is NP-complete. We investigate $$K_{1,4}$$-free and $$K_{1,3}$$-free (also known as claw-free) split graphs from a structural perspective. Further, using our structural study, we present polynomial-time algorithms for Steiner tree in $$K_{1,4}$$-free and $$K_{1,3}$$-free split graphs. Although polynomial-time solvability on $$K_{1,3}$$-free split graphs is implied by that on $$K_{1,4}$$-free split graphs, we wish to highlight our structural observations on $$K_{1,3}$$-free split graphs, which may be useful in other combinatorial problems.
['Madhu Illuri', 'P. Renjith', 'N. Sadagopan']
Complexity of Steiner Tree in Split Graphs - Dichotomy Results
619,638
In this paper a predictor-based subspace model identification method is presented that relaxes the requirement that the past window has to be large for asymptotically consistent estimates. By utilizing a VARMAX model, a finite description of the input-output relation is formulated. An extended least squares recursion is used to estimate the Markov parameters in the VARMAX model set. Using the Markov parameters, the state sequence can be estimated and consequently the system matrices can be recovered. The effectiveness of the proposed method in comparison with an existing method is demonstrated in a simulation study on a wind turbine model operating in closed loop.
['Ivo Houtzager', 'Jan-Willem van Wingerden', 'Michel Verhaegen']
VARMAX-based closed-loop subspace model identification
176,059
A Polynomial Time Algorithm for Finding a Spanning Tree with Maximum Number of Internal Vertices on Interval Graphs
['Xingfu Li', 'Haodi Feng', 'Haitao Jiang', 'Binhai Zhu']
A Polynomial Time Algorithm for Finding a Spanning Tree with Maximum Number of Internal Vertices on Interval Graphs
856,514
Generating Circular Motion of a Human-Like Robotic Arm Using Attractor Selection Model
['Atsushi Sugahara', 'Yutaka Nakamura', 'Ippei Fukuyori', 'Yoshio Matsumoto', 'Hiroshi Ishiguro']
Generating Circular Motion of a Human-Like Robotic Arm Using Attractor Selection Model
987,349
This paper presents a new fundamental approach to modal participation analysis of linear time-invariant systems, leading to new insights and new formulas for modal participation factors. Modal participation factors were introduced over a quarter century ago as a way of measuring the relative participation of modes in states, and of states in modes, for linear time-invariant systems. Participation factors have proved their usefulness in the field of electric power systems and in other applications. However, in the current understanding, it is routinely taken for granted that the measure of participation of modes in states is identical to that for participation of states in modes. Here, a new analysis using averaging over an uncertain set of system initial conditions yields the conclusion that these quantities (participation of modes in states and participation of states in modes) should not be viewed as interchangeable. In fact, it is proposed that a new definition and calculation replace the existing ones for state in mode participation factors, while the previously existing participation factors definition and formula should be retained but viewed only in the sense of mode in state participation factors. Several examples are used to illustrate the issues addressed and the results obtained.
['Wael A. Hashlamoun', 'Munther A. Hassouneh', 'Eyad H. Abed']
New Results on Modal Participation Factors: Revealing a Previously Unknown Dichotomy
331,751
Progressive skinning for video game character animations
['Simon Pilgrim', 'Alberto Aguado', 'Kenny Mitchell', 'Anthony Steed']
Progressive skinning for video game character animations
206,501
Many real-time image processing applications are confronted with performance limitations when implemented in software. The skin segmentation algorithm utilized in hand gesture recognition as developed by the ICT department of Delft University of Technology presents an example of such an application. This paper presents the design of an FPGA based accelerator which alleviates the host PC's computational effort required for real-time skin segmentation. We show that our design utilizes no more than 88% of the resources available within the targeted XC2VP30 device. In addition, the proposed approach is highly portable and not limited to the considered real-time image processing algorithm only.
['B. de Ruijsscher', 'Georgi Gaydadjiev', 'Jeroen F. Lichtenauer', 'E. Hendriks']
FPGA accelerator for real-time skin segmentation
396,177
This paper describes our solution for the MSR Video to Language Challenge. We start from the popular ConvNet + LSTM model, which we extend with two novel modules. One is early embedding, which enriches the current low-level input to LSTM by tag embeddings. The other is late reranking, for re-scoring generated sentences in terms of their relevance to a specific video. The modules are inspired by recent works on image captioning, repurposed and redesigned for video. As experiments on the MSR-VTT validation set show, the joint use of these two modules add a clear improvement over a non-trivial ConvNet + LSTM baseline under four performance metrics. The viability of the proposed solution is further confirmed by the blind test by the organizers. Our system is ranked at the 4th place in terms of overall performance, while scoring the best CIDEr-D, which measures the human-likeness of generated captions.
['Jianfeng Dong', 'Xirong Li', 'Weiyu Lan', 'Yujia Huo', 'Cees G. M. Snoek']
Early Embedding and Late Reranking for Video Captioning
894,955
Time-shifting of Sampled Sound with a Real-time Granulation Technique
['Barry Truax']
Time-shifting of Sampled Sound with a Real-time Granulation Technique
753,492
Selecting a proper code sequence for the active coded transponder (ACT) is of great significance for SAR radiometric calibration accuracy in the process of active coding radiometric calibration using synthetic aperture radar (SAR). Based on the principle of active coding radiometric calibration, a signal processing model of actively coded reflected signals is proposed in this paper, and m-sequences, Gold sequences, and random sequences are studied. Simulation experiments with the compression of SAR azimuth signals are carried out.
['Yiding Wang', 'Yuanshu Li', 'Zhulei Wang']
Code sequence selection for SAR radiometric calibration
508,905
Audio Narrowcasting for Multipresent Avatars on Workstations and Mobile Phones.
['Michael F. Cohen', 'Owen Noel Newton Fernando', 'Uresh Chanaka Duminduwardena', 'Makoto Kawaguchi', 'Kazuya Adachi']
Audio Narrowcasting for Multipresent Avatars on Workstations and Mobile Phones.
769,377
Today's applications are highly mobile; we download software from the Internet, machine executable code arrives attached to electronic mail, and Java applets increase the functionality and appearance of Web pages. This movement has stirred a great deal of research in the area of mobile code security. The fact remains that a newly arrived program to a local host has the potential to inflict significant damage to the local host and local resources. Perhaps the new program originated from a charlatan host masquerading as a trusted server, or has been modified by a malicious party during transit from the trusted server to the local host. In light of this risk, security models that address mobile code are in high demand. We have developed a framework named SECRYT, which enables users of a mobile application to validate the application with integrity and authentication data while simplifying the management and distribution of the authentication data.
['Mike Jochen', 'Lisa M. Marvel', 'Lori L. Pollock']
A framework for tamper detection marking of mobile applications
523,351
The concept of social capital refers to the advantage that is created from the structure of an actor's social ties within a network. This paper presents an approach to investigate the extent to which a particular network is characterized by brokerage or closure social capital based on triadic analysis. We split each network into two, respectively comprising strong and weak ties. To facilitate this splitting, we measure the strength of the tie between a pair of actors based on the reciprocity of their relationship and the number of their shared or mutual friends. We hypothesize that the network composed of strong ties is expected to be rich in closure triads, whereas the network composed of weak ties is expected to be rich in brokerage triads. We test our hypotheses on four popular online social networks (OSNs), namely, Facebook, Twitter, Slashdot and YouTube. Empirical analysis reports that most networks composed of strong ties comprise both brokerage and closure triads, leading to the rejection of the first hypothesis. On the other hand, all networks composed of weak ties except for Slashdot comprise a significant number of only brokerage triads, leading to the acceptance of the second hypothesis. We discuss how the motives and mechanisms of interaction on each OSN contribute to its structure in the form of strong and weak ties, which results in the presence or absence of a particular form of social capital.
['Huda Alhazmi', 'Swapna S. Gokhale']
Mining Social Capital on Online Social Networks with Strong and Weak Ties
892,864
This article describes the structure, working mechanism, matching principle, and dynamic model of a concrete truck's hydraulic regenerative braking system. In order to simulate the regenerative braking system, a physical model was established in the software AMESim. Using a target-vehicle torque control strategy, and according to the real parameters of the truck, the hydraulic parameters of the secondary element and accumulator were analyzed, compared, and determined, and bench test validation was conducted. Based on this work, the hydraulic regenerative braking system for the truck was developed.
['Cui Zhang', 'Xinhui Liu', 'Zhan Wang', 'Qiang Shi']
Analysis of the regenerative brake system parameters for concrete mixing truck based on AMESim
353,113
The maintenance of communication systems is a critical operation which monopolizes human and network resources. The management of multiple, parallel maintenance jobs is a complex task that can generate faults and undesirable service interruption. In the context of IP/MPLS networks, Graceful Restart mechanisms allow, under strict conditions, the maintenance of a single router without impacting its forwarding plane. Still, the network-wide coordination of the routers restarts is an unresolved problem. To solve this issue, we propose a solution for the automation and optimized orchestration of maintenance operations, based on an adaptive and fully-distributed planning process. The operator is then relieved from tedious maintenance tasks and is only responsible for setting performance objectives and assessing progress reports.
['Samir Ghamri-Doudane', 'Laurent Ciavaglia']
Domain-wide scheduling of OSPF graceful restarts for maintenance purposes
303,182
Research on forecasting has traditionally focused on building more accurate statistical models for a given time series. The models are mostly applied to limited data due to efficiency and scalability problems. However, many enterprise applications require scalable forecasting on large number of data series. For example, telecommunication companies need to forecast each of their customers' traffic load to understand their usage behavior and to tailor targeted campaigns. Forecasting models are typically applied on aggregate data to estimate the total traffic volume for revenue estimation and resource planning. However, they cannot be easily applied to each user individually as building accurate models for large number of users would be time consuming. The problem is exacerbated when the forecasting process is continuous and the models need to be updated periodically. This paper addresses the problem of building and updating forecasting models continuously for multiple data series. We propose dynamic clustered modeling for forecasting by utilizing representative models as an analogy to cluster centers. We apply the models to each individual series through iterative nonlinear optimization. We develop two approaches: The Integrated Clustered Modeling integrates clustering and modeling simultaneously, and the Sequential Clustered Modeling applies them sequentially. Our findings indicate that modeling an individual's behavior using its segment can be more scalable and accurate than the individual model itself. The grouped models avoid overfits and capture common motifs even on noisy data. Experimental results from a telco CRM application show the method is efficient and scalable, and also more accurate than having separate individual models.
['Izzeddin Gur', 'Mehmet Güvercin', 'Hakan Ferhatosmanoglu']
Scaling forecasting algorithms using clustered modeling
273,897
Motivation: In detection of non-coding RNAs, it is often necessary to identify the secondary structure motifs from a set of putative RNA sequences. Most of the existing algorithms aim to provide the best motif or a few good motifs, but biologists often need to inspect all the possible motifs thoroughly. Results: Our method RNAmine employs a graph-theoretic representation of RNA sequences and detects all the possible motifs exhaustively using a graph mining algorithm. The motif detection problem boils down to finding frequently appearing patterns in a set of directed and labeled graphs. In the tasks of common secondary structure prediction and local motif detection from long sequences, our method performed favorably both in accuracy and in efficiency compared with state-of-the-art methods such as CMFinder. Availability: The software is available upon request. Contact: [email protected] Supplementary information: Visit the following URL for Supplementary information, software availability and the information about the web server: http://www.ncrna.org/RNAMINE/
['Michiaki Hamada', 'Koji Tsuda', 'Taku Kudo', 'Taishin Kin', 'Kiyoshi Asai']
Mining frequent stem patterns from unaligned RNA sequences
319,825
The rise in the use of social networks in recent years has resulted in an abundance of information on different aspects of everyday social activities being available online, with the most prominent and timely source of such information being Twitter. This has resulted in a proliferation of tools and applications that can help end-users and large-scale event organizers to better plan and manage their activities. In this process of analysis of the information originating from social networks, an important aspect is that of the geographic coordinates, i.e., geolocalisation, of the relevant information, which is necessary for several applications (e.g., on trending venues, traffic jams, etc.). Unfortunately, only a very small percentage of Twitter posts are geotagged, which significantly restricts the applicability and utility of such applications. In this work, we address this problem by proposing a framework for geolocating tweets that are not geotagged. Our solution is general, and estimates the location from which a post was generated by exploiting the similarities in content between this post and a set of geotagged tweets, as well as their time-evolution characteristics. Contrary to previous approaches, our framework aims at providing accurate geolocation estimates at fine grain (i.e., within a city). The experimental evaluation with real data demonstrates the efficiency and effectiveness of our approach.
['Pavlos Paraskevopoulos', 'Themis Palpanas']
Fine-Grained Geolocalisation of Non-Geotagged Tweets
651,669
In this paper, we study contention resolution protocols from a game-theoretic perspective. We focus on acknowledgment-based protocols, where a user gets feedback from the channel only when she attempts transmission: in this case she learns whether her transmission was successful or not, while users that do not transmit receive no feedback. We are interested in equilibrium protocols, where no player has an incentive to deviate. The limited feedback makes the design of equilibrium protocols a hard task, as best-response policies usually have to be modeled as Partially Observable Markov Decision Processes, which are hard to analyze. Nevertheless, we show how to circumvent this for the case of two players and present an equilibrium protocol. For many players, we give impossibility results for a large class of acknowledgment-based protocols, namely age-based and backoff protocols with finite expected finishing time. Finally, we provide an age-based equilibrium protocol, which has infinite expected finishing time, but in which every player finishes in linear time with high probability.
['George Christodoulou', 'Martin Gairing', 'Sotiris E. Nikoletseas', 'Christoforos Raptopoulos', 'Paul G. Spirakis']
Strategic Contention Resolution with Limited Feedback
833,291
This paper is concerned with a stage structure model with spatiotemporal delay and a homogeneous Dirichlet boundary condition. The existence of a steady-state solution bifurcating from the trivial equilibrium is obtained using Lyapunov–Schmidt reduction. The stability of the positive, spatially nonhomogeneous steady-state solution is investigated via a detailed analysis of the characteristic equation. Using properties of the ω-limit set, we obtain global convergence of the solution in the case of finite delay.
['Shuling Yan', 'Shangjiang Guo']
Stability analysis of a stage structure model with spatiotemporal delay effect
954,573