Fields per record: abstract (string) · authors (string) · title (string) · __index_level_0__ (int64)
As the number of clinical applications requiring nonrigid image registration continues to grow, it is important to design registration algorithms that not only build on the best available theory, but also are computationally efficient. Thirion's Demons algorithm [1] estimates nonrigid deformations by successively estimating force vectors that drive the deformation toward alignment, and then smoothing the force vectors by convolution with a Gaussian kernel. It essentially approximates a deformation under diffusion regularization [2], and it is a popular choice of algorithm for nonrigid registration because of its linear computational complexity and ease of implementation. In this article, we show how the Demons algorithm can be generalized to handle other common regularizers, yielding O(n) algorithms that employ Gaussian convolution for elastic, fluid, and curvature registration. We compare the speed of the proposed algorithms with algorithms based on Fourier methods [3] for registering serial chest CT studies.
['Nathan D. Cahill', 'J. Alison Noble', 'David J. Hawkes']
Demons algorithms for fluid and curvature registration
171,618
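The Demons update loop described in the abstract above (force estimation followed by Gaussian smoothing) can be sketched in a few lines. This is a toy 1-D illustration under made-up data and parameters, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def demons_1d(fixed, moving, iters=100, sigma=2.0):
    """Toy 1-D Demons loop: estimate a force field from the intensity
    difference and the fixed image's gradient, then regularize the
    accumulated displacement by Gaussian convolution."""
    idx = np.arange(len(fixed), dtype=float)
    disp = np.zeros(len(fixed))
    grad = np.gradient(fixed)
    for _ in range(iters):
        warped = np.interp(idx + disp, idx, moving)    # warp moving by current field
        diff = warped - fixed
        denom = grad ** 2 + diff ** 2
        force = np.where(denom > 1e-9, -diff * grad / denom, 0.0)
        disp = gaussian_filter1d(disp + force, sigma)  # diffusion-like smoothing
    return disp

# Synthetic example: recover a 10-pixel shift between two Gaussian blobs.
t = np.linspace(0.0, 1.0, 200)
fixed = np.exp(-((t - 0.50) / 0.1) ** 2)
moving = np.exp(-((t - 0.55) / 0.1) ** 2)
disp = demons_1d(fixed, moving)
idx = np.arange(200, dtype=float)
err_before = np.sum((moving - fixed) ** 2)
err_after = np.sum((np.interp(idx + disp, idx, moving) - fixed) ** 2)
```

The Gaussian convolution step is what gives the method its O(n) cost per iteration; the generalizations in the paper swap in other kernels for elastic, fluid, and curvature regularization.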
This paper considers the problem of interference suppression in GPS coarse/acquisition (C/A) signals. Particularly, an anti-jam receiver is developed, utilizing the unique repetitive feature of the GPS C/A-signal. The proposed receiver does not require the knowledge of the transmitted GPS symbols or the satellite positions. It utilizes the repetition of the Gold code within each navigation symbol to simultaneously suppress the interference of all satellites.
['Wei Sun', 'Moh Ainin']
Interference suppression for GPS coarse/acquisition signals using antenna array
382,386
Tissue engineers seek to build living tissue constructs for replacing or repairing damaged tissues. Computational methods foster tissue engineering by pointing out dominant mechanisms involved in shaping multicellular systems. Here we apply the Lattice Boltzmann (LB) method to study the fusion of multicellular constructs. This process is of interest in bioprinting, in which multicellular spheroids or cylinders are embedded in a supportive hydrogel by a computer-controlled device. We simulated post-printing rearrangements of cells, aiming to predict the shape and stability of certain printed structures. To this end, we developed a two-dimensional LB model of a multicellular system in a hydrogel. Our parallel computing code was implemented using the Portable Extensible Toolkit for Scientific Computation (PETSc). To validate the LB model, we simulated the fusion of multicellular cylinders in a contiguous, hexagonal arrangement. Our two-dimensional LB simulation describes the evolution of the transversal cross section of the construct built from three-dimensional multicellular cylinders whose length is much larger than their diameter. Fusion eventually gave rise to a tubular construct, in qualitative agreement with bioprinting experiments. Then we simulated the time course of a defect in a bioprinted tube. To address practical problems encountered in tissue engineering, we also simulated the evolution of a planar construct, as well as of a bulky, perfusable construct made of multicellular cylinders. The agreement with experiments indicates that our LB model captures certain essential features of morphogenesis, and, therefore, it may be used to test new working hypotheses faster and cheaper than in the laboratory. 
Highlights: A Lattice Boltzmann model describes shape changes of tissue constructs in hydrogel. We simulate the evolution of structures built by printing multicellular cylinders. A defect in a bioprinted tissue construct increases as cells rearrange. A stack of multicellular cylinders may give rise to a perfusable tissue construct.
['Artur Cristea', 'Adrian Neagu']
Shape changes of bioprinted tissue constructs simulated by the Lattice Boltzmann method
594,037
GEORGIA STATE UNIVERSITY MUSIC TECHNOLOGY STUDIO REPORT
['Tae Hong Park', 'Robert Scott Thompson', 'Alex Marse', 'Johnathan Turner']
GEORGIA STATE UNIVERSITY MUSIC TECHNOLOGY STUDIO REPORT
776,358
On high resolution computed tomography (HRCT) images, dilated airways appear as two parallel lines that resemble tram tracks when they lie in the plane of scan. Tram tracks, when visible, are characteristic of bronchiectasis, a disease caused by the irreversible dilatation of the bronchial tree. Detection of such patterns provides valuable diagnostic information. In this work, semi-supervised learning together with image analysis techniques has been used to detect tram tracks on HRCT images. The approach was tested on 1091 HRCT images belonging to 54 patients, and the results were visually validated by radiologists. Sensitivity and specificity of 80% and 91%, respectively, were achieved.
['Mamatha Rudrapatna', 'Prinith Amaratunga', 'Mithun Prasad', 'Arcot Sowmya', 'Peter Wilson']
Automatic Detection of Tram Tracks on HRCT Images
30,194
Several approaches for scalable interest management (IM) within real-time distributed virtual environments (DVEs) have been proposed based upon some division of the data-space into disjoint volumes or cells. Any such approach, however, must implement some mechanism for propagating the query and update messages around the distributed system. The efficiency of this process can greatly affect the scalability of such systems. In this paper we evaluate an adaptive approach to this problem.
['Rob Minson', 'Georgios K. Theodoropoulos']
Push-Pull Interest Management for Virtual Worlds
168,636
Modern assertion languages, such as PSL and SVA, include many constructs that are best handled by rewriting to a small set of base cases. Since previous rewrite attempts have shown that the rules could be quite involved, sometimes counterintuitive, and that they can make a significant difference in the complexity of interpreting assertions, workable procedures for proving the correctness of these rules must be established. In this paper, we outline the methodology for computer-assisted proofs of a set of previously published rewrite rules for PSL properties. We show how to express PSL's syntax and semantics in the PVS theorem prover, and proceed to prove the correctness of a set of thirty rewrite rules. In doing so, we also demonstrate how to circumvent issues with PSL semantics regarding the never and eventually! operators.
['Katell Morin-Allory', 'Marc Boulé', 'Dominique Borrione', 'Zeljko Zilic']
Proving and disproving assertion rewrite rules with automated theorem provers
190,852
Interpolated helical milling (IHM) is considered a very flexible strategy that allows holes to be milled, instead of drilled, with more generic tools. However, despite its dissemination in industry, few research works have been carried out on the influence of cutting conditions on hole quality and cutting time. In this study, the production of holes was investigated using the IHM technique for rough and finish machining conditions, and the process performance was evaluated by the cutting time and the holes' surface roughness and roundness. Fifty-four holes were milled in AISI 1045 steel bars with end mill cutters in a vertical machining centre, following Taguchi (L9) experiments where the cutting speed, circular feed per tooth and axial feed per tooth were analysed for rough operations; for the finish operations, the radial depth of cut was also investigated. From the results, it can be concluded that a high-quality surface can be achieved at the rough phase.
['Dalberto Dias da Costa', 'A.B. Marques', 'Fred L. Amorim']
Hole quality and cutting time evaluation in the interpolated helical milling
902,750
Just-in-Time Static Analysis
['Lisa Nguyen', 'Karim Ali', 'Ben Livshits', 'Eric Bodden', 'Justin Smith', 'Emerson R. Murphy-Hill']
Just-in-Time Static Analysis
886,080
The main advantages of cascade-correlation learning are the abilities to learn quickly and to determine the network size. However, recent studies have shown that in many problems the generalization performance of a cascade-correlation trained network may not be quite optimal. Moreover, to reach a certain performance level, a larger network may be required than with other training methods. Recent advances in statistical learning theory emphasize the importance of a learning method to be able to learn optimal hyperplanes. This has led to advanced learning methods, which have demonstrated substantial performance improvements. Based on these recent advances in statistical learning theory, we introduce modifications to the standard cascade-correlation learning that take into account the optimal hyperplane constraints. Experimental results demonstrate that with modified cascade correlation, considerable performance gains are obtained compared to the standard cascade-correlation learning. This includes better generalization, smaller network size, and faster learning.
['Mikko Lehtokangas']
Modified cascade-correlation learning for classification
356,684
This paper discusses the application of holonic control paradigm to sensor management in military surveillance operations. Sensor management is described both as part of the data fusion process and as a control problem. The choice of holonic control as the most adequate architecture for sensor management in the military environment is explained and its application to surveillance operations illustrated. Holonic control is then simulated using a littoral surveillance scenario to test sensor management with and without feedback from high-level fusion (closed-loop versus open-loop). The results show that closed-loop sensor management significantly enhances global surveillance performance, reduces detection time of threatening objects, and reduces platforms' load, thus increasing stealth and decreasing detectability.
['Abder Rezak Benaskeur', 'Hengameh Irandoust', 'Peter McGuire', 'Robert Brennan']
Holonic control-based Sensor management
124,442
User Experience Changing Patterns of Chinese Users
['Yanan Chen', 'Jing Liu', 'Guozhen Zhao', 'Xianghong Sun']
User Experience Changing Patterns of Chinese Users
852,202
Using workstations on a LAN as a parallel computer is becoming increasingly common. At the same time, parallelizing compilers are making such systems easier to program. Understanding the traffic of compiler-parallelized programs running on networks is vital for network planning and designing quality of service systems. To provide a basis for such understanding, we measured the traffic of six dense-matrix applications written in a dialect of High Performance Fortran, compiled with the Fx parallelizing compiler, and run on an Ethernet LAN. The traffic of these programs is profoundly different from typical network traffic. In particular the programs exhibit global collective communication patterns, correlated traffic along many connections, constant burst sizes, and periodic burstiness with bandwidth dependent periodicity. The traffic of these programs can be characterized by the power spectra of their instantaneous average bandwidth.
['Peter A. Dinda', 'Brad M. Garcia', 'Kwok-Shing Leung']
The measured network traffic of compiler-parallelized programs
529,960
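The spectral characterization mentioned in the abstract above (periodic burstiness showing up as a peak in the power spectrum of instantaneous bandwidth) can be illustrated with a periodogram of a synthetic series. The sampling rate and burst period below are made up, not measured values from the paper:

```python
import numpy as np

fs = 100.0                                    # samples per second (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
# Synthetic bursty bandwidth trace: square-wave bursts at 4 Hz.
bandwidth = 5.0 + 2.0 * (np.sin(2 * np.pi * 4.0 * t) > 0)

# Periodogram of the mean-removed series; periodic burstiness appears
# as a dominant peak at the burst frequency.
spectrum = np.abs(np.fft.rfft(bandwidth - bandwidth.mean())) ** 2
freqs = np.fft.rfftfreq(len(bandwidth), 1.0 / fs)
peak = freqs[np.argmax(spectrum)]             # dominant burst frequency in Hz
```

Bandwidth-dependent periodicity, as reported for the Fx-compiled programs, would show up as this peak shifting with the available network bandwidth.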
Any assessment formed by a strategy and a prior probability is a coherent conditional probability and can be extended, generally not in a unique way, to a full conditional probability. The corresponding class of all extensions is studied and a closed form expression for its envelopes is provided. Subclasses of extensions meeting further analytical properties are considered by imposing conglomerability and a conditional version of conglomerability, respectively. Then, the envelopes of extensions satisfying these conditions are characterized.
['Davide Petturiti', 'Barbara Vantaggi']
Envelopes of conditional probabilities extending a strategy and a prior probability
647,812
For humanoid robots to fulfill their mobility potential they must demonstrate reliable and efficient locomotion over rugged and irregular terrain. In this paper we present the perception and planning algorithms which have allowed a humanoid robot to use only passive stereo imagery (as opposed to actuating a laser range sensor) to safely plan footsteps to continuously walk over rough and uneven surfaces without stopping. The perception system continuously integrates stereo imagery to build a consistent 3D model of the terrain which is then used by our footstep planner which reasons about obstacle avoidance, kinematic reachability and foot rotation through mixed-integer quadratic optimization to plan the required step positions. We illustrate that our stereo imagery fusion approach can measure the walking terrain with sufficient accuracy that it matches the quality of terrain estimates from LIDAR. To our knowledge this is the first such demonstration of the use of computer vision to carry out general purpose terrain estimation on a locomoting robot — and additionally to do so in continuous motion. A particular integration challenge was ensuring that these two computationally intensive systems operate with minimal latency (below 1 second) to allow re-planning while walking. The results of extensive experimentation and quantitative analysis are also presented. Our results indicate that a laser range sensor is not necessary to achieve locomotion in these challenging situations.
['Maurice F. Fallon', 'Pat Marion', 'Robin Deits', 'Thomas Whelan', 'Matthew E. Antone', 'John McDonald', 'Russ Tedrake']
Continuous humanoid locomotion over uneven terrain using stereo fusion
576,871
The performance measure of an algorithm is a crucial part of its analysis. The performance can be determined by studying the convergence rate of the algorithm in question. It is necessary to study some (hopefully convergent) sequence that measures how good the approximated optimum is compared to the real optimum. The concept of Regret is widely used in the bandit literature for assessing the performance of an algorithm. The same concept is also used in the framework of optimization algorithms, sometimes under other names or without a specific name. The numerical evaluation of the convergence rate of noisy algorithms often involves approximations of regrets. We discuss here two types of approximations of Simple Regret used in practice for the evaluation of algorithms for noisy optimization. We use specific algorithms of different nature and the noisy sphere function to show the following results. The approximation of Simple Regret, termed here Approximate Simple Regret, used in some optimization testbeds, fails to estimate the Simple Regret convergence rate. We also discuss a recent new approximation of Simple Regret, which we term Robust Simple Regret, and show its advantages and disadvantages.
['Sandra Astete-Morales', 'Marie-Liesse Cauwet', 'Olivier Teytaud']
Analysis of Different Types of Regret in Continuous Noisy Optimization
854,467
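The distinction drawn in the abstract above can be made concrete on the noisy sphere. Simple Regret compares expected objective values at the recommendation, while the Approximate Simple Regret used by some testbeds substitutes a single noisy evaluation, whose fluctuation swamps the true regret once the optimizer is good. The recommendation point below is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_sphere(x):                       # noiseless objective ||x||^2
    return float(np.dot(x, x))

def noisy_sphere(x):                      # what the optimizer actually observes
    return true_sphere(x) + rng.normal(0.0, 1.0)

x_opt = np.zeros(3)                       # optimum of the sphere
x_rec = np.array([0.1, -0.2, 0.05])       # a hypothetical recommendation

# Simple Regret: difference of *expected* objective values.
simple_regret = true_sphere(x_rec) - true_sphere(x_opt)

# Approximate Simple Regret: reuses one noisy evaluation per estimate.
# Its standard deviation (~1 here) dwarfs the true regret of ~0.05.
draws = np.array([noisy_sphere(x_rec) - true_sphere(x_opt)
                  for _ in range(2000)])
```

This is why an ASR-based convergence curve flattens at the noise floor instead of tracking the Simple Regret rate.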
Incremental Attribute Learning (IAL) is a feasible machine learning strategy for solving high-dimensional pattern classification problems. It gradually trains features one by one, which is quite different from conventional machine learning approaches where features are trained in one batch. Preprocessing steps such as feature selection, feature ordering and feature extraction have been verified as useful for improving classification performance by previous IAL studies. However, in the previous research, these preprocessing approaches were individually employed and have not been applied simultaneously during training. Therefore, it is still unknown whether the classification results can be further improved when these different preprocessing approaches are used at the same time. This study integrates different feature preprocessing steps for IAL, where feature extraction, feature selection and feature ordering are simultaneously employed. Experimental results indicate that such an integrated preprocessing approach is applicable for pattern classification performance improvement. Moreover, statistical significance testing also verified that such an integrated preprocessing approach is more suitable for datasets with high-dimensional inputs.
['Ting Wang', 'Wei Zhou', 'Xiaoyan Zhu', 'Fangzhou Liu', 'Sheng-Uei Guan']
Integrated feature preprocessing for classification based on neural incremental attribute learning
881,108
Real-time outdoor navigation in highly dynamic environments is a crucial problem. The recent literature on real-time static SLAM does not scale up to dynamic outdoor environments. Most of these methods treat moving objects as outliers or discard the information provided by them. We propose an algorithm to jointly infer the camera trajectory and the moving-object trajectories. In this paper, we perform a sparse scene-flow-based motion segmentation using a stereo camera. The segmented objects' motion models are used for accurate localization of the camera trajectory as well as the moving objects. We exploit the relationship between moving objects to improve the accuracy of the poses. We formulate the poses as a factor graph incorporating all the constraints. We achieve an exact incremental solution by solving a full nonlinear optimization problem in real time. The evaluation is performed on the challenging KITTI dataset with multiple moving cars. Our method outperforms the previous baselines in outdoor navigation.
['N. Dinesh Kumar Reddy', 'Iman Abbasnejad', 'Sheetal Reddy', 'Amit Kumar Mondal', 'Vindhya Devalla']
Incremental real-time multibody VSLAM with trajectory optimization using stereo camera
862,429
The paper deals with a problem of Carreau fluid flow between corrugated plates. A meshless numerical procedure for solving nonlinear governing equation is constructed using the Picard iteration method in combination with the method of fundamental solutions and the radial basis functions. The dimensionless average velocity and the product of friction factor and Reynolds number are calculated for different values of the fluid model and parameters of the considered region.
['Jakub Krzysztof Grabski', 'Jan Adam Kołodziej']
Analysis of Carreau fluid flow between corrugated plates
846,854
Model-based testing is a promising quality assurance technique. Automatic test generation from behavioral models is state of the art. Coverage criteria at model level are often used to measure test quality and to steer automatic test generation. While tests are focused on behavior, however, most coverage criteria are focused on the structure of test models. Thus, semantic-preserving model transformations can be used to alter the effect of the applied coverage criteria. In previous work, we presented the notion of simulated satisfaction to enhance the effect of applicable coverage criteria. In this paper, we present model transformations to restrict the effect of coverage criteria. The aim is to show the relative strength of coverage criteria and their dependence on the model structure. The long-term objective is to define lower boundaries for coverage criteria.
['Stephan Weißleder', 'Thomas Rogenhofer']
Simulated Restriction of Coverage Criteria on UML State Machines
170,188
Electronic commerce is widely expected to promote "friction-free" capitalism, with consumers sending software agents to scour the Net for the best deals. Many distribution chains will indeed be simplified and costs substantially reduced. However, we are also likely to see the creation of artificial barriers in electronic commerce, designed by sellers to extract more value from consumers. Frequent flyer mileage plans and the bundling of software into suites are just two examples of the marketing schemes that are likely to proliferate. It appears that there will be much less a la carte selling of individual items than is commonly expected, and more subscription plans. Therefore many current development plans should be redirected. Electronic commerce is likely to be even more exasperating to consumers than current airline pricing, and will be even further removed from the common conception of a "just price." As a result, there are likely to be more attempts to introduce government regulation into electronic commerce.
['Andrew M. Odlyzko']
The Bumpy Road of Electronic Commerce
231,464
POM is a Parallel Observable Machine featuring mechanisms for building and observing distributed applications. It comes in the form of a library built upon the many communication kernels available on current parallel architectures and a loader that provides the user with a homogeneous syntax for launching parallel applications on any parallel platform.
['Frédéric Guidec', 'Yves Mahéo']
POM: A Parallel Observable Machine.
771,143
From a probabilistic point of view, this paper deduces an optimal initial value of the bias for max-min fuzzy neural network with n input neurons, which converges to 1 as n increases. Supporting numerical experiments are provided.
['Jie Yang', 'Long Li', 'Yan Liu', 'Jing Wang', 'Wei Wu']
Choice of initial bias in max-min fuzzy neural networks
25,243
Distributed Sequence Pattern Detection Over Multiple Data Streams
['Ahmed Khan Leghari', 'Jianneng Cao', 'Yongluan Zhou']
Distributed Sequence Pattern Detection Over Multiple Data Streams
639,238
Normalization for the simply-typed λ-calculus is proven in Twelf, an implementation of the Edinburgh Logical Framework. Since, due to proof-theoretical restrictions of Twelf, Tait's computability method does not seem to be directly usable, a syntactical proof is adapted and formalized instead. In this case study, some boundaries of Twelf's current capabilities are touched on and discussed.
['Andreas Abel']
Normalization for the Simply-Typed Lambda-Calculus in Twelf
345,641
Power-electronics-based microgrids (MGs) consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of parallel-connected three-phase VSIs are derived. The proposed voltage and current inner control loops and the mathematical models of the VSIs are based on the stationary reference frame. A hierarchical control scheme for the paralleled VSI system is developed comprising two levels. The primary control includes the droop method and the virtual impedance loops, in order to share active and reactive powers. The secondary control restores the frequency and amplitude deviations produced by the primary control. Also, a synchronization algorithm is presented in order to connect the MG to the grid. Experimental results are provided to validate the performance and robustness of the parallel VSI system control architecture.
['Juan Carlos Vasquez', 'Josep M. Guerrero', 'Mehdi Savaghebi', 'Joaquin Eloy-Garcia', 'Remus Teodorescu']
Modeling, Analysis, and Design of Stationary-Reference-Frame Droop-Controlled Parallel Three-Phase Voltage Source Inverters
522,906
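The primary-level droop method referenced in the abstract above follows the classical proportional laws f = f0 − m·P and V = V0 − n·Q, which let parallel inverters share load without communication. A minimal sketch with illustrative coefficients (m, n, and the load values are assumptions, not the paper's design values):

```python
def droop_setpoints(p_kw, q_kvar, f0=50.0, v0=230.0, m=0.001, n=0.01):
    """Primary droop law: frequency sags with active power (P-f droop),
    voltage amplitude sags with reactive power (Q-V droop).
    m is in Hz/kW and n in V/kvar; both are illustrative values."""
    f = f0 - m * p_kw
    v = v0 - n * q_kvar
    return f, v

# Two identical inverters each taking half of a 10 kW / 4 kvar load
# settle at the same frequency, which is how droop achieves
# communication-free power sharing.
f1, v1 = droop_setpoints(5.0, 2.0)
f2, v2 = droop_setpoints(5.0, 2.0)
```

The secondary control described in the abstract then removes the small steady-state deviations (here 0.005 Hz and 0.02 V) that this primary law deliberately introduces.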
The main aim of this paper is to achieve a depth of understanding of the various similarities and differences, in terms of challenges, between developed and developing countries regarding the implementation of ICT innovations. Indeed, advances in Information and Communication Technologies (ICTs) have brought many innovations to the field of Information Systems (IS). Despite agreement on their importance to the success of organizations, the implementation processes of such innovations are multifaceted and require proper addressing of widespread issues and challenges. In this study, we address this matter by, first, synthesizing a comprehensive body of recent and classified literature concerning five ICT initiatives; second, analyzing and classifying ICT challenges for both developed and developing countries, as well as justifying their similarities and differences following thematic-analysis qualitative methods; and third, presenting the study conclusions and identifying future research areas drawn from the conducted comparative analysis.
['Mutaz M. Al-Debei', 'Enas M. Al-Lozi']
Implementations of ICT Innovations: A Comparative Analysis in terms of Challenges between Developed and Developing Countries
332,344
Proposed here is a bias circuit for use in a cascode operational amplifier to provide a wide output dynamic range. The bias circuit has been designed so that the drain-source voltage of each MOS transistor used in the gain stage is minimized to V/sub dsat/ automatically, making it possible to widen the output dynamic range.
['Takeshi Fukumoto', 'Hiroyuki Okada', 'Kazuyuki Nakamura']
Optimizing bias-circuit design of cascode operational amplifier for wide dynamic range operations
381,459
Performance of array processing algorithms is directly determined by the number of array elements. Processing of data from a large number of receive array elements involves a large number of receivers, resulting in high cost and complex implementations. In many practical applications, a large number of receiving antenna elements is available; however, the number of receivers is limited due to their high cost. In this work, we address the problem of direction-of-arrival (DOA) estimation with a large number of antenna array elements and a small number of receivers, where the receivers are connected to the array elements via a reconfigurable switching matrix. A cognitive approach, named cognitive antenna selection (CASE), for sequentially switching the elements of the array based on previous observations is proposed. The criterion for antenna selection is based on minimization of the conditional Bobrovsky-Zakai bound (BZB) on the mean-squared-error (MSE) of the DOA estimate, given history observations. The performance of the CASE algorithm is evaluated via simulations and compared to two other non-adaptive approaches. It is shown that the CASE algorithm outperforms the other compared algorithms in terms of MSE both asymptotically and in the threshold region.
['Omri Isaacs', 'Joseph Tabrikian', 'Igal Bilik']
Cognitive antenna selection for optimal source localization
611,981
We present a general framework for performing feature-based synthesis: that is, for producing audio characterized by arbitrarily specified sets of perceptually motivated, quantifiable acoustic features of the sort used in many music information retrieval systems.
['Matthew D. Hoffman', 'Perry R. Cook']
Feature-Based Synthesis: A Tool for Evaluating, Designing, and Interacting with Music IR Systems
450,091
We introduce the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared risk link group (SRLG) failures in all-optical networks. An SRLG failure causes multiple links to break simultaneously due to the failure of a common resource. MCs (MPs) start and end at the same (distinct) monitoring location(s). They are constructed such that any SRLG failure results in the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs needed for localizing any single SRLG failure in an arbitrary graph. When a single monitoring location is employed, we show that a network must be ( k +2)-edge connected for localizing all SRLG failures, each involving up to k links. For networks that are less than ( k +2)-edge connected, we derive necessary and sufficient conditions on the placement of monitoring locations for unique localization of any single SRLG failure of up to k links. We use these conditions to develop an algorithm for determining monitoring locations. We show a graph transformation technique that converts the problem of identifying MCs and MPs with multiple monitoring locations to a problem of identifying MCs with a single monitoring location. We provide an integer linear program and a heuristic to identify MCs for networks with one monitoring location. We then consider the monitoring problem for networks with no dedicated bandwidth for monitoring purposes. For such networks, we use passive probing of lightpaths by employing optical splitters at various intermediate nodes. Through an integer linear programming formulation, we identify the minimum number of optical splitters that are required to monitor all SRLG failures in the network. Extensive simulations are used to demonstrate the effectiveness of the proposed monitoring technique.
['Satyajeet Ahuja', 'Srinivasan Ramasubramanian', 'Marwan Krunz']
SRLG failure localization in optical networks
440,670
We study the combinatorial properties of optimal refueling policies, which specify the transportation paths and the refueling operations along the paths to minimize the total transportation costs between vertices. The insight into the structure of optimal refueling policies leads to an elegant reduction of the problem of finding optimal refueling policies into the classical shortest path problem, which ends in simple and more efficient algorithms for finding optimal refueling policies.
['Shieu-Hong Lin']
Finding Optimal Refueling Policies in Transportation Networks
168,566
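The reduction described in the abstract above can be illustrated with a textbook-style construction: run a shortest-path search over an expanded (vertex, fuel-in-tank) state space, where buying one unit of fuel and traversing an edge are both ordinary transitions. The paper's actual reduction is more refined (it avoids enumerating fuel levels), so treat this only as a sketch of the idea; the graph and prices are made up:

```python
import heapq

def cheapest_trip(graph, price, src, dst, cap):
    """Minimum-cost trip with refueling, via Dijkstra on an expanded
    (vertex, fuel-in-tank) state graph. graph[u] = [(v, fuel_needed), ...];
    price[u] = cost per fuel unit at u; integer fuel levels 0..cap."""
    seen = set()
    pq = [(0, src, 0)]                      # (cost so far, vertex, fuel)
    while pq:
        cost, u, fuel = heapq.heappop(pq)
        if u == dst:
            return cost                     # first pop of dst is optimal
        if (u, fuel) in seen:
            continue
        seen.add((u, fuel))
        if fuel < cap:                      # buy one unit of fuel here
            heapq.heappush(pq, (cost + price[u], u, fuel + 1))
        for v, need in graph[u]:            # drive an edge if the tank suffices
            if fuel >= need:
                heapq.heappush(pq, (cost, v, fuel - need))
    return None

# Fill up where fuel is cheap: buying 4 units at A (1/unit) beats
# topping up at the expensive intermediate stop B (3/unit).
graph = {'A': [('B', 2)], 'B': [('C', 2)], 'C': []}
price = {'A': 1, 'B': 3, 'C': 5}
best = cheapest_trip(graph, price, 'A', 'C', cap=4)
```

The optimal policy structure exploited in the paper is visible even here: refuel as much as needed at the cheapest stations along the path.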
Constraint satisfaction problems (CSPs) are widely used in Artificial Intelligence. The problem of the existence of a solution in a CSP being NP-complete, filtering techniques and particularly arc-consistency are essential. They remove some local inconsistencies and so make the search easier. Since many problems in AI require a dynamic environment, the model was extended to dynamic CSPs (DCSPs) and some incremental arc-consistency algorithms were proposed. However, all of them have important drawbacks. DnAC-4 has an expensive worst-case space complexity and a bad average time complexity. AC/DC has a non-optimal worst-case time complexity, which prevents it from taking advantage of its good space complexity. The algorithm we present in this paper has both lower space requirements and better time performance than DnAC-4 while keeping an optimal worst-case time complexity.
['Romuald Debruyne']
Arc-consistency in dynamic CSPs is no more prohibitive
436,916
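For readers unfamiliar with the filtering discussed in the abstract above, arc-consistency removes every value that has no support under some constraint. A compact AC-3 sketch (a simpler, static relative of DnAC-4 and the paper's algorithm, shown only to illustrate the mechanism):

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 arc-consistency filtering. domains: var -> set of values;
    constraints: (x, y) -> predicate over (value_of_x, value_of_y)."""
    arcs = deque(constraints)
    while arcs:
        x, y = arcs.popleft()
        allowed = constraints[(x, y)]
        pruned = {vx for vx in domains[x]
                  if not any(allowed(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned            # these values lost all support in y
            # x's domain shrank: arcs into x must be re-checked
            arcs.extend((z, w) for (z, w) in constraints
                        if w == x and z != y)
    return all(domains[v] for v in domains)  # False if a domain was wiped out

# Enforce x < y over {1, 2, 3}: pruning removes x=3 and y=1.
doms = {'x': {1, 2, 3}, 'y': {1, 2, 3}}
cons = {('x', 'y'): lambda vx, vy: vx < vy,
        ('y', 'x'): lambda vy, vx: vx < vy}
consistent = ac3(doms, cons)
```

The dynamic algorithms in the paper additionally handle constraint *retraction*, restoring pruned values incrementally rather than re-running filtering from scratch.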
This paper discusses ways of navigating online contact networks (networks of social connections defined under a relational context) in a way that can provide more meaningful information to those who use them. We use the concept of social distance to address the different levels of social information and express the subjective vagueness in social ties by means of fuzzy numbers. The implementation of an algorithm for measuring prestige based on such vague estimations is reported. For the sake of illustration, a numerical example using a fuzzy graph is provided in which the strength or weakness of the connections is defined as a fuzzy number.
['Nikolaos Korfiatis', 'Miguel-Angel Sicilia']
Social Measures and Flexible Navigation on Online Contact Networks
48,378
Model-based speech enhancement methods, such as vector-Taylor series-based methods (VTS) [1, 2], share a common methodology: they estimate speech using the expected value of the clean speech given the noisy speech under a statistical model. We show that it may be better to use the expected value of the noise under the model and subtract it from the noisy observation to form an indirect estimate of the speech. Interestingly, for VTS, this methodology turns out to be related to the application of an SNR-dependent gain to the direct VTS speech estimate. In results obtained on an automotive noise task, this methodology produces an average improvement of 1.6 dB signal-to-noise ratio (SNR), relative to conventional methods.
['Jonathan Le Roux', 'John R. Hershey']
Indirect model-based speech enhancement
305,840
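The indirect methodology in the abstract above (estimate the noise, then subtract it from the observation) can be sketched on a scalar Gaussian toy problem. Note that in this exactly-Gaussian case the indirect estimate y − E[n|y] coincides with the direct conditional mean E[x|y]; the paper's point is that under approximate models such as VTS the two differ. All statistics below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
sx2, mn, sn2 = 4.0, 0.5, 1.0                  # speech variance; noise mean/variance
x = rng.normal(0.0, np.sqrt(sx2), 10000)      # "clean speech" samples
n = rng.normal(mn, np.sqrt(sn2), 10000)       # noise samples
y = x + n                                     # noisy observation

# Indirect estimate: posterior expected noise, subtracted from y.
# For jointly Gaussian (n, y): E[n|y] = mn + Cov(n,y)/Var(y) * (y - E[y]).
e_n_given_y = mn + (sn2 / (sx2 + sn2)) * (y - mn)
x_hat = y - e_n_given_y

mse_noisy = np.mean((y - x) ** 2)             # error of doing nothing
mse_indirect = np.mean((x_hat - x) ** 2)      # error after noise subtraction
```

Under these statistics the theoretical MSE of the estimate is sx2*sn2/(sx2+sn2) = 0.8, well below the raw-observation error of about 1.25.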
In this paper, we propose two different types of downlink scheduling schemes based on assembling 1) the switch-and-examine scheduling scheme in the network layer and 2) the joint adaptive modulation and output-threshold maximum ratio combining (OT-MRC) scheme in the physical layer, and thus attempt to find an efficient cross-layer access scheme that exploits user diversity as well as high spectral efficiency and antenna diversity gain. We consider a multiuser environment in which each base station (BS) equipped with a single antenna attempts to communicate with each user equipped with an L-finger RAKE receiver, based on switch-and-examine transmission (SET) as a user selection-scheduling scheme. We propose two different SETs, i.e., spectrally efficient SET and feedback-load-efficient SET. We analyze the performance of our proposed schemes and compare them with the optimal scheduling scheme in terms of average spectral efficiency and average number of estimated paths per user to quantify the complexity.
['Ki Hong Park', 'Young Chai Ko', 'Mohamed Slim Alouini']
Joint Adaptive Combining and Multiuser Downlink Scheduling
188,351
Designing energy-efficient clusters has recently become an important concern to make these systems economically attractive for many applications. Since the links and switch buffers consume the major portion of the power budget of the cluster, the focus of this paper is to optimize the energy consumption in these two components. To minimize power in the links, we propose a novel dynamic link shutdown (DLS) technique. The DLS technique makes use of an appropriate adaptive routing algorithm to shutdown the links intelligently. We also present an optimized buffer design for reducing leakage energy. Our analysis on different networks using a complete system simulator reveals that the proposed DLS technique can provide optimized performance-energy behavior (up to 40% energy savings with less than 5% performance degradation in the best case) for the cluster interconnects.
['Eun Jung Kim', 'Ki Hwan Yum', 'Greg M. Link', 'Narayanan Vijaykrishnan', 'Mahmut T. Kandemir', 'Mary Jane Irwin', 'Mazin S. Yousif', 'Chita R. Das']
Energy optimization techniques in cluster interconnects
23,671
The world of marketing has changed through the incorporation of electronic means through which new customers and new markets can be reached. As a result, the world of trade and commerce has been revolutionized, revealing new and sometimes less scrupulous ways of dealing in an online marketplace. The article provides three Australian examples each featuring a nexus between e-marketing and fraudulent online transactions in order to gain a deeper appreciation of the darker side that exists to e-marketing. It also explores education and adult learning as means of raising awareness and skills in dealing with harmful e-marketing practices found in occurrences such as Internet fraud.
['Francesco Sofo', 'Michelle Sofo']
The Role of Education in Breaking the Nexus between e-Marketing and Online Fraud
853,650
Using sensors to measure parameters of interest in rotating environments and communicating the measurements in real-time over wireless links requires a reliable power source. In this paper, we have investigated the possibility of generating electric power locally by evaluating six different energy-harvesting technologies. The applicability of each technology is evaluated against several parameters that are important to functionality in an industrial environment. All technologies are individually presented and evaluated, and a concluding table summarizes their strengths and weaknesses. To support the technology evaluation on a more theoretical level, simulations have been performed to strengthen our claims. Among the evaluated and simulated technologies, we found that the variable reluctance-based harvesting technology is the strongest candidate for further technology development for the considered use-case.
['Fredrik Haggstrom', 'Jonas Gustafsson', 'Jerker Delsing']
Energy harvesting technologies for wireless sensors in rotating environments
910,744
Comparing the Quality of Focused Crawlers and of the Translation Resources Obtained from them
['Bruno J. Laranjeira', 'Viviane Pereira Moreira', 'Aline Villavicencio', 'Carlos Ramisch', 'Maria José Bocorny Finatto']
Comparing the Quality of Focused Crawlers and of the Translation Resources Obtained from them
612,522
Erhöhung der Transparenz eines adaptiven Empfehlungsdiensts
['Nadine Walter', 'Benjamin Kaplan', 'Tobias Altmüller', 'Klaus Bengler']
Erhöhung der Transparenz eines adaptiven Empfehlungsdiensts
581,854
For the solution of convection-diffusion problems we present a multilevel self-adaptive mesh-refinement algorithm to resolve locally strong varying behavior, like boundary and interior layers. The method is based on discontinuous Galerkin (Baumann-Oden DG) discretization. The recursive mesh-adaptation is interwoven with the multigrid solver. The solver is based on multigrid V-cycles with damped block-Jacobi relaxation as a smoother. Grid transfer operators are chosen in agreement with the Galerkin structure of the discretization, and local grid-refinement is taken care of by the transfer of local truncation errors between overlapping parts of the grid. We propose an error indicator based on the comparison of the discrete solution on the finest grid and its restriction to the next coarser grid. It refines in regions where this difference is too large. Several results of numerical experiments are presented which illustrate the performance of the method.
['Daniela Vasileva', 'Anton Kuut', 'Pieter W. Hemker']
An adaptive multigrid strategy for convection-diffusion problems
181,875
The relative velocity between the ground surface and the receiver platform leads to clutter expansion in the range-Doppler surface. The expanded clutters can mask slow-moving targets. The passive radar is a low-cost and lightweight radar with high detection ability, due to its multistatic geometry. In this paper, we employ the MISO passive radar with at least four transmitters of opportunity to constitute an over-determined system of equations. We propose an algorithm to localize moving targets in the presence of clutters. In the first step, the clutters are separated from moving targets, and in the second one, the position and velocity of the target are computed by solving the system of equations. Performance of the proposed method is verified by experimental results.
['Mohammad J. Ahmadi', 'Rouhollah Amiri', 'Fereidoon Behnia']
Localization in MISO airborne passive radar through heavy ground clutters
818,954
Open Education Practices as Answer to New Demands of Training in Entrepreneurship Competences: The Role of Recommender Systems
['Edmundo Tovar', 'Nelson Piedra', 'Jorge Lopez', 'Janneth Chizaiza']
Open Education Practices as Answer to New Demands of Training in Entrepreneurship Competences: The Role of Recommender Systems
731,474
An accurate measuring tool is required to measure the full-field displacement of the complex geometry of the human proximal femur when subjected to axial loads. Optical techniques, such as digital image correlation (DIC), have been used extensively in order to measure planar or volumetric displacement. Even though DIC methods are now in frequent use in experimental bone mechanics, this technique has only recently been reported to measure 3D proximal ex-vivo femur displacement fields during loading under axial loads. The innovation of this study is to extend the use of the DIC technique to visualise and track the progress of failure of the proximal femur under limb stance configuration. An experimental protocol has been proposed and validated on eight human cadaveric femurs. The results indicated that failure takes place by shearing in the head–neck region. Furthermore, it was found that this method enables to extract additional information on the fracture profile and its initiation. The obtained i...
['Awad Bettamer', 'S. Allaoui', 'Ridha Hambli']
Using 3D digital image correlation to visualise the progress of failure of human proximal femur
649,876
Reengineering Public Administration through Semantic Technologies and a Reference Domain Ontology.
['Vassilios Peristeras', 'Konstantinos A. Tarabanis']
Reengineering Public Administration through Semantic Technologies and a Reference Domain Ontology.
746,711
We propose a machine learning approach to characterize the functional status of astrocytes, the most abundant cells in the human brain, based on time-lapse Ca2+ imaging data. The interest in analyzing astrocyte Ca2+ dynamics is evoked by recent discoveries that astrocytes play proactive regulatory roles in neural information processing, and is enabled by recent technical advances in modern microscopy and ultrasensitive genetically encoded Ca2+ indicators. However, current analysis relies on eyeballing the time-lapse imaging data and manually drawing regions of interest, which not only limits the analysis throughput but also risks missing important information encoded in these large, complex dynamic data. Thus, there is an increased demand for sophisticated tools to dissect Ca2+ signaling in astrocytes, which is challenging due to the complex nature of Ca2+ signaling and the low signal-to-noise ratio. We develop Functional AStrocyte Phenotyping (FASP) to automatically detect functionally independent units (FIUs) and extract the corresponding characteristic curves in an integrated way. FASP is data-driven and probabilistically principled, flexibly accounts for complex patterns, and accurately controls false discovery rates. We demonstrate the effectiveness of FASP on both synthetic and real data sets.
['Yinxue Wang', 'Guilai Shi', 'David J. Miller', 'Yizhi Wang', 'Gerard Joseph Broussard', 'Yue Wang', 'Lin Tian', 'Guoqiang Yu']
FASP: A machine learning approach to functional astrocyte phenotyping from time-lapse calcium imaging data
821,810
Let k be an odd natural number ≥ 5, and let G be a (6k − 7)-edge-connected graph of bipartite index at least k − 1. Then, for each mapping f : V(G) → N, G has a subgraph H such that each vertex v has H-degree f(v) modulo k. We apply this to prove that, if c : V(G) → Z_k is a proper vertex-coloring of a graph G of chromatic number k ≥ 5 or k − 1 ≥ 6, then each edge of G can be assigned a weight 1 or 2 such that each weighted vertex-degree of G is congruent to c modulo k. Consequently, each nonbipartite (6k − 7)-edge-connected graph of chromatic number at most k (where k is any odd natural number ≥ 3) has an edge-weighting with weights 1, 2 such that neighboring vertices have distinct weighted degrees (even after reducing these weighted degrees modulo k). We characterize completely the bipartite graphs having an edge-weighting with weights 1, 2 such that neighboring vertices have distinct weighted degrees. In particular, that problem belongs to P while it is NP-complete for nonbipartite graphs. The characterization also implies that every 3-edge-connected bipartite graph with at least 3 vertices has such an edge-labeling, and so does every simple bipartite graph of minimum degree at least 3.
['Carsten Thomassen', 'Yezhou Wu', 'Cun-Quan Zhang']
The 3-flow conjecture, factors modulo k, and the 1-2-3-conjecture
849,591
Domestic and real-world robotics requires continuous learning of new skills and behaviors to interact with humans. Auto-supervised learning, a compromise between supervised and completely unsupervised learning, consists of relying on previous knowledge to acquire new skills. We propose here to realize auto-supervised learning by exploiting statistical regularities in the sensorimotor space of a robot. In our context, this corresponds to achieving feature selection in a Bayesian programming framework. We compare several feature selection algorithms and validate them on a real robotic experiment.
['Pierre Dangauthier', 'P. Bessieere', 'Anne Spalanzani']
Auto-supervised learning in the Bayesian Programming Framework
188,850
The object oriented design methods and their CASE tools are widely used in practice by many real time software developers. However, object oriented CASE tools require an additional step of identifying tasks from a given design model. Unfortunately, it is difficult to automate this step for a couple of reasons: (1) there are inherent discrepancies between objects and tasks; and (2) it is hard to derive tasks while maximizing real time schedulability, since this constitutes a non-trivial optimization problem. As a result, in practical object oriented CASE tools, task identification is usually performed in an ad hoc manner using hints provided by human designers. We present a systematic, schedulability-aware approach that can help map real time object oriented models to multithreaded implementations. In our approach, a task contains a group of mutually exclusive transactions that may possess different periods and deadlines. For this new task model, we provide a schedulability analysis algorithm. We also show how the run-time system is implemented and how executable code is generated in our framework. We have performed a case study. It shows the difficulty of the task derivation problem and the utility of the automated synthesis of implementation.
['Saehwa Kim', 'Sukjae Cho', 'Seongsoo Hong']
Schedulability-aware mapping of real-time object-oriented models to multi-threaded implementations
149,410
This paper considers the problem of sliding mode control for a class of uncertain switched systems with parameter uncertainties and external disturbances. A key feature of the controlled system is that each subsystem is not required to share the same input channel, which was usually assumed in some existing works. By means of a weighted sum of the input matrix, a common sliding surface is designed in this work. It is shown that the reachability of the sliding surface can be ensured by the present sliding mode controller. Moreover, the sliding motion on the specified sliding surface is asymptotically stable under the proposed switching signal dependent on the state and time. Additionally, the above results are further extended to the case that the system states are unavailable. Both the sliding surface and sliding mode controller are designed by utilising state-observer. Finally, numerical simulation examples are given to illustrate the effectiveness of the present method.
['Yonghui Liu', 'Tinggang Jia', 'Yugang Niu', 'Yuanyuan Zou']
Design of sliding mode control for a class of uncertain switched systems
439,141
In this paper, we investigate the fundamental tradeoff between rate and bandwidth when a constraint is imposed on the error exponent. Specifically, we consider both additive white Gaussian noise (AWGN) and Rayleigh-fading channels where the input symbols are assumed to have a peak constraint. For the AWGN channel model, the optimal values of R_z(0) and Ṙ_z(0) are calculated, where R_z(1/B) is the maximum rate at which information can be transmitted over a channel with bandwidth B when the error exponent is constrained to be greater than or equal to z. The computation of R_z(0) follows Gallager's infinite-bandwidth reliability function computation, while the computation of Ṙ_z(0) is new and parallels Verdu's second-order calculation for channel capacity. Based on these calculations, we say that a sequence of input distributions is near optimal if both R_z(0) and Ṙ_z(0) are achieved. We show that quaternary phase-shift keying (QPSK), a widely used signaling scheme, is near optimal within a large class of input distributions for the AWGN channel. Similar results are also established for a fading channel where full channel side information (CSI) is available at the receiver.
['Xinzhou Wu', 'Rayadurgam Srikant']
Asymptotic Behavior of Error Exponents in the Wideband Regime
52,937
CHOYCE is a web server for homology modelling of protein components and the fitting of those components into cryo electron microscopy (cryoEM) maps of their assemblies. It provides an interactive approach to improving the selection of models based on the quality of their fit into the EM map. Availability: http://choyce.ismb.lon.ac.uk/. Contact: [email protected]; [email protected]. Supplementary data are available at Bioinformatics online.
['Reda Rawi', 'Lee Whitmore', 'Maya Topf']
CHOYCE: a web server for constrained homology modelling with cryoEM maps
302,761
A planar subdivision is any partition of the plane into (possibly unbounded) polygonal regions. The subdivision search problem is the following: given a subdivision $S$ with $n$ line segments and a query point $p$, determine which region of $S$ contains $p$. We present a practical algorithm for subdivision search that achieves the same (optimal) worst case complexity bounds as the significantly more complex algorithm of Lipton and Tarjan, namely $O(\log n)$ search time with $O(n)$ storage. Our subdivision search structure can be constructed in linear time from the subdivision representation used in many applications.
['David G. Kirkpatrick']
Optimal Search in Planar Subdivisions
89,495
A supremum-of-quadratics representation for a class of convex barrier-type constraints is developed and applied in a class of continuous time state constrained linear regulator problems. Using this representation, it is shown that any linear regulator problem constrained by such a convex barrier-type constraint can be equivalently formulated as an unconstrained two player linear quadratic game.
['Peter M. Dower', 'William M. McEneaney', 'Michael Cantoni']
A game representation for state constrained linear regulator problems
975,988
Mobile technologies have created many challenges for distributed systems over the last decade. Intermittent connections and weak network signals can induce unexpected behaviour in some applications. One kind of application that suffers from these difficulties is video streaming. This paper investigates the use of SDDL, a publish/subscribe middleware based on DDS, for live video streaming. Since SDDL is designed for scalable communications in a dynamic environment, we believe the proposed solution is fit for use in mobile networks.
['João Martins de Oliveira Neto', 'Lincoln David Nery e Silva']
Video Streaming Over Publish/Subscribe
936,686
Integrating prior knowledge into data mining is a topic of wide interest. This paper first introduces the ontology-aided method and its application. Second, the methodology of our novel method is presented, followed by a case study. In the case study, building on an analysis of the former method, the detail of the novel method is described, and testing on a publicly available dataset shows satisfactory results. Finally, we draw conclusions and outline future work.
['Guoqi Li', 'Minyan Lu', 'Bin Liu']
A Novel Ontology-Aided Method for Integrating Prior Knowledge into Data Mining
402,872
The aim of this paper is to look, from the point of view of admissibility of inference rules, at intermediate logics having the finite model property which extend Heyting's intuitionistic propositional logic H. A semantic description for logics with the finite model property preserving all admissible inference rules for H is given. It is shown that there are continuously many logics of this kind. Three special tabular intermediate logics λ_i, 1 ≤ i ≤ 3, are given which describe all tabular logics preserving admissibility: a tabular logic λ preserves all admissible rules for H iff λ has width not more than 2 and is not included in each λ_i. MSC: 03B55, 03B20.
['V. V. Rybakov']
Intermediate logics preserving admissible inference rules of heyting calculus
123,229
To fully appreciate the benefits of arbitrary waveform design capability for transmit adaptive systems, the trade space between constraints (employed to increase the measure of practicality for radar) and the usual performance driver (signal-to-interference-plus-noise ratio) needs to be better defined and understood. We address this issue by developing performance models for radar waveforms with cumulative-modulus and energy constraints. Radar waveforms typically require a constant-modulus (constant-amplitude) transmit signal to efficiently exploit the available transmit power. However, recent hardware advances and the capability for arbitrary (phase and amplitude) designed waveforms have forced a reexamination of this assumption in order to quantify the impact of the nonconstant-modulus property. We develop performance models for the signal-to-interference-plus-noise ratio as a function of the cumulative modulus for a random colored interference environment and validate the models against measured data.
['Aaron M. Jones', 'Brian D. Rigling', 'Muralidhar Rangaswamy']
Signal-to-interference-plus- noise-ratio analysis for constrained radar waveforms
989,545
Motivated by the problem of decentralized direction-tracking, we consider the general problem of cooperative learning in multi-agent systems with time-varying connectivity and intermittent measurements. We propose a distributed learning protocol capable of learning an unknown vector μ from noisy measurements made independently by autonomous nodes. Our protocol is completely distributed and able to cope with the time-varying, unpredictable, and noisy nature of inter-agent communication, and intermittent noisy measurements of μ. Our main result bounds the learning speed of our protocol in terms of the size and combinatorial features of the (time-varying) network connecting the nodes.
['Naomi Ehrich Leonard', 'Alexander Olshevsky']
Cooperative learning in multi-agent systems from intermittent measurements
352,534
Previous research demonstrates that multiple representations can enhance students' learning. However, learning with multiple representations is hard. Students need to acquire representational fluency with each of the representations and they need to be able to make connections between the representations. It is yet unclear how to balance these two aspects of learning with multiple representations. In the present study, we focus on a key aspect of this question, namely the temporal sequencing of representations when students work with multiple representations one-at-a-time. Specifically, we investigated the effects of blocking versus interleaving multiple representations of fractions in an intelligent tutoring system. We conducted an in vivo experiment with 296 5th- and 6th-grade students. The results show an advantage for blocking representations and for moving from a blocked to an interleaved sequence. This effect is especially pronounced for students with low prior knowledge.
['Martina A. Rau', 'Vincent Aleven', 'Nikol Rummel']
Blocked versus interleaved practice with multiple representations in an intelligent tutoring system for fractions
511,524
We propose adding a vigilante player and using non-traditional game strategy for decision making to improve the performance of a cognitive radio network. To date, the application to cognitive radio networks of a hybrid player such as a vigilante, which is both cooperative and non-cooperative, has not been significantly studied. We use a novel play strategy (i.e., altruism) to police a wireless network. Using this new player, we present and test a series of predictive algorithms that show an improvement in wireless channel utilization over traditional collision-detection algorithms. Our results demonstrate the viability of using this novel strategy to inform and create more efficient cognitive radio networks.
['Khashayar Kotobi', 'Sven G. Bilén']
Introduction of Vigilante Players in Cognitive Networks with Moving Greedy Players
639,808
Historically, the process of synchronizing a decision support system with data from operational systems has been referred to as Extract, Transform, Load (ETL) and the tools supporting such process have been referred to as ETL tools. Recently, ETL was replaced by the more comprehensive acronym, data integration (DI). DI describes the process of extracting and combining data from a variety of data source formats, transforming that data into a unified data model representation and loading it into a data store. This is done in the context of a variety of scenarios, such as data acquisition for business intelligence, analytics and data warehousing, but also synchronization of data between operational applications, data migrations and conversions, master data management, enterprise data sharing and delivery of data services in a service-oriented architecture context, amongst others. With these scenarios relying on up-to-date information it is critical to implement a highly performing, scalable and easy to maintain data integration system. This is especially important as the complexity, variety and volume of data is constantly increasing and performance of data integration systems is becoming very critical. Despite the significance of having a highly performing DI system, there has been no industry standard for measuring and comparing their performance. The TPC, acknowledging this void, has released TPC-DI, an innovative benchmark for data integration. This paper motivates the reasons behind its development, describes its main characteristics including workload, run rules, metric, and explains key decisions.
['Meikel Poess', 'Tilmann Rabl', 'Hans-Arno Jacobsen', 'Brian Caufield']
TPC-DI: the first industry benchmark for data integration
688,502
In the first of a two-part series, we review industry best practices for designing, validating, deploying, and operating IP-based services at the network edge with tight service-level agreements (SLAs). We describe the important SLA metrics for IP service performance and discuss why Diffserv is the preferred technology to achieve these SLAs.
['John Evans', 'Clarence Filsfils']
Deploying Diffserv at the network edge for tight SLAs, Part 2
532,394
Today, the Cloud networking aspect is a critical factor for adopting the Cloud computing approach. The main drawback of Cloud networking consists in the lack of Quality of Service (QoS) guarantee and management in conformance with a corresponding Service Level Agreement (SLA). This paper presents a framework for resource allocation according to an end-to-end SLA established between a Cloud Service User (CSU) and several Cloud Service Providers (CSPs) in a Cloud networking environment. We focus on QoS parameters for Network as a Service (NaaS) and Infrastructure as a Service (IaaS) services. In addition, we propose algorithms for the best CSPs selection to allocate Virtual Machines (VMs) and network resources in inter-cloud broker and federation scenarios. Our objective is to minimize the cost while satisfying NaaS and IaaS QoS constraints. Moreover, we simulate our proposed Cloud networking architecture to provide videoconferencing and intensive computing applications with QoS guarantee. We observe that the broker architecture is the most interesting while ensuring QoS requirements.
['Mohamad Hamze', 'Nader Mbarek', 'Olivier Togni']
Broker and federation based Cloud networking architecture for IaaS and NaaS QoS guarantee
700,407
The issue of energy efficiency of buildings must be taken into account as early as possible in the life cycle of buildings, i.e., during the architectural phase. The orientation of the building, its structure, the choice of materials, windows and other openings: all these aspects contribute to the future energy consumption profile in a complex and highly non-linear way. Furthermore, even though sustainability is today a major objective, the cost of the construction remains another decision factor that cannot be underestimated, and energy efficiency and cost are, alas, contradictory objectives. Thus the problem of designing efficient buildings is a multi-objective problem. This work tackles this problem using a state-of-the-art evolutionary multi-objective algorithm, HYPE, tightly linked to an energy consumption simulation program, EnergyPlus. Several parameters defining the design are considered, namely the orientation angle, the materials used for the thermal insulation of the walls, and the size of the windows in order to explore daylighting. In the end, a diverse set of Pareto optimal solutions (i.e., solutions offering optimal tradeoffs between both objectives) are proposed to the decision maker. The approach is validated on five buildings of different categories, where energy savings of up to 20% compared to the original design are automatically obtained.
['Álvaro Fialho', 'Youssef Hamadi', 'Marc Schoenauer']
A Multi-objective approach to balance buildings construction cost and energy efficiency
791,102
Wormhole switching is a switching technique nowadays commonly used in networks-on-chips (NoCs). It is efficient but prone to deadlock. The design of a deadlock-free adaptive routing function constitutes an important challenge. We present a novel algorithm for the automatic verification that a routing function is deadlock-free in wormhole networks. A sufficient condition for deadlock-free routing and an associated algorithm are defined. The algorithm is proven complete for the condition. The condition, the algorithm, and the correctness theorem have been formalized and checked in the logic of the ACL2 interactive theorem proving system. The algorithm has a time complexity in O(N^3), where N denotes the number of nodes in the network. This outperforms the previous solution of Taktak et al. by one degree. Experimental results confirm the high efficiency of our algorithm. This paper presents a formally proven correct algorithm that detects deadlocks in a 2D-mesh with about 4000 nodes and 15000 channels within seconds.
['Freek Verbeek', 'Julien Schmaltz']
Automatic verification for deadlock in Networks-on-Chips with adaptive routing and wormhole switching
108,683
Pathology reports are written by pathologists, skilled physicians who know how to interpret disorders in various tissue samples from the human body. To obtain valuable statistics on the outcome of disorders, for example cancer and the effect of treatment, statistics are collected. Therefore, cancer pathology reports are interpreted and coded into databases at cancer registries. In Norway, this task is carried out by the Cancer Registry of Norway (Kreftregisteret) by 25 different human coders. There is a need to automate this process. The authors of this article received 25 prostate cancer pathology reports written in Norwegian from the Cancer Registry of Norway, each documenting various stages of prostate cancer together with the corresponding correct manual coding. A rule-based algorithm was produced that processed the reports in order to prototype automation. The output of the algorithm was compared to the output of the manual coding. The evaluation showed an average F-score of 0.94 on four of the data points, namely Total Malign, Primary Gleason, Secondary Gleason and Total Gleason, and a lower average F-score of 0.76 on all ten data points. The results are in line with previous research.
['Anders Dahl', 'Atilla Özkan', 'Hercules Dalianis']
Pathology text mining - on Norwegian prostate cancer reports
821,774
This paper describes the VERTIGO Goggles, a hardware upgrade to the SPHERES satellites that enables vision-based navigation research in the 6 degree-of-freedom, microgravity environment of the International Space Station ISS. The Goggles include stereo cameras, an embedded x86 computer, a high-speed wireless communications system, and the associated electromechanical and software systems. The Goggles were designed to be a modular, expandable, and upgradable "open research test bed" that have been used for a variety of other experiments by external researchers. In February 2013, the Goggles successfully completed a hardware checkout on the ISS and was used for initial vision-based navigation research. This checkout included a successful camera calibration by an astronaut onboard the ISS. This paper describes the requirements, design, and operation of this test bed as well as the experimental results of its first checkout operations.
['Brent E. Tweddle', 'Timothy P. Setterfield', 'Alvar Saenz-Otero', 'David W. Miller']
An Open Research Facility for Vision-Based Navigation Onboard the International Space Station
554,295
This paper deals with a nonsmooth version of the connection between the maximum principle and dynamic programming principle, for the stochastic recursive control problem when the control domain is convex. By employing the notions of sub- and super-jets, the set inclusions are derived among the value function and the adjoint processes. The general case for non-convex control domain is open.
['Tianyang Nie', 'Jingtao Shi', 'Zhen Wu']
Connection between MP and DPP for stochastic recursive optimal control problems: Viscosity solution framework in local case
687,001
The era of nanotechnology and SoCs is driving the need for better and more compact designs of embedded flash memories. This paper discusses a circuit technique for a switched-polarity 1.8 V charge pump based on Dickson's charge pump. The circuit can generate typical voltage levels for flash memories during both program and erase operations. The charge pump utilizes the same circuit elements to generate both positive and negative voltages using two non-overlapping clocks, resulting in a very simple and compact design. Simulation studies comparing our design with a recent charge pump design show the advantages of our design in terms of rise time, charging characteristics, and area.
['Mohammad Gh. Mohammad', 'Jalal Fahmi', 'Omar Al-Terkawi']
Switched Polarity Charge Pump for NOR-type Flash Memories
247,378
In the NP-hard Cluster Editing problem, we have as input an undirected graph G and an integer k >= 0. The question is whether we can transform G, by inserting and deleting at most k edges, into a cluster graph, that is, a union of disjoint cliques. We first confirm a conjecture by Michael Fellows [IWPEC 2006] that there is a polynomial-time kernelization for Cluster Editing that leads to a problem kernel with at most 6k vertices. More precisely, we present a cubic-time algorithm that, given a graph G and an integer k >= 0, finds a graph G' and an integer k' <= k such that G can be transformed into a cluster graph by at most k edge modifications iff G' can be transformed into a cluster graph by at most k' edge modifications, and the problem kernel G' has at most 6k vertices. So far, only a problem kernel of 24k vertices was known. Second, we show that this bound for the number of vertices of G' can be further improved to 4k vertices. Finally, we consider the variant of Cluster Editing where the number of cliques that the cluster graph can contain is stipulated to be a constant d > 0. We present a simple kernelization for this variant leaving a problem kernel of at most (d+2)k + d vertices.
['Jiong Guo']
A more effective linear kernelization for cluster editing
202,301
The Reality of a Real-Time UNIX Operating System
['Borko Furht']
The Reality of a Real-Time UNIX Operating System
646,500
Quality Measures in Biometric Systems.
['Fernando Alonso-Fernandez', 'Julian Fierrez', 'Josef Bigun']
Quality Measures in Biometric Systems.
764,560
With de novo rational drug design, scientists can rapidly generate a very large number of potentially biologically active probes. However, many of them may be synthetically infeasible and, therefore, of limited value to drug developers. On the other hand, most of the tools for synthetic accessibility evaluation are very slow and can process only a few molecules per minute. In this study, we present two approaches to quickly predict the synthetic accessibility of chemical compounds by utilizing support vector machines operating on molecular descriptors. The first approach, RSSVM, is designed to identify the compounds that can be synthesized using a specific set of reactions and starting materials, and builds its model by training on the compounds identified as synthetically accessible or not by retrosynthetic analysis. The second approach, DRSVM, is designed to provide a more general assessment of synthetic accessibility that is not tied to any set of reactions or starting materials. The training set compounds for this approach are selected from a diverse library based on the number of other similar compounds within the same library. Both approaches have been shown to perform very well in their corresponding areas of applicability, with the RSSVM achieving a receiver operating characteristic score of 0.952 in cross-validation experiments and the DRSVM achieving a score of 0.888 on an independent set of compounds. Our implementations can successfully process thousands of compounds per minute.
['Yevgeniy Podolyan', 'Michael A. Walters', 'George Karypis']
Assessing synthetic accessibility of chemical compounds using machine learning methods.
128,097
This paper describes PBSM (Partition Based Spatial-Merge), a new algorithm for performing the spatial join operation. This algorithm is especially effective when neither of the inputs to the join has an index on the joining attribute. Such a situation could arise if both inputs to the join are intermediate results in a complex query, or in a parallel environment where the inputs must be dynamically redistributed. The PBSM algorithm partitions the inputs into manageable chunks and joins them using a computational-geometry-based plane-sweeping technique. This paper also presents a performance study comparing the traditional indexed nested loops join algorithm, a spatial join algorithm based on joining spatial indices, and the PBSM algorithm. These comparisons are based on complete implementations of these algorithms in Paradise, a database system for handling GIS applications. Using real data sets, the performance study examines the behavior of these spatial join algorithms in a variety of situations, including the cases when both, one, or none of the inputs to the join have a suitable index. The study also examines the effect of clustering the join inputs on the performance of these join algorithms. The performance comparisons demonstrate the feasibility and applicability of the PBSM join algorithm.
['Jignesh M. Patel', 'David J. DeWitt']
Partition based spatial-merge join
73,032
Displacement, an operation of cartographic generalization, resolves congestion and overlap of map features caused by the enlargement of map symbols to ensure readability at reduced scales. Algorithms for displacement must honour spatial context, avoid creating secondary spatial conflicts, and retain spatial patterns and relations, such as alignments and relative distances, that characterize the original map features. We present an algorithm for displacement of buildings based on optimization. While existing approaches directly displace the individual buildings, our algorithm first forms a truss of elastic beams to capture important spatial patterns and preserve them during displacement. The algorithm proceeds in two phases. The first phase analyses spatial relationships to construct a truss as a weighted graph. The truss is initially based on the minimum spanning tree connecting the building centroids, with beam stiffness determined by spatial relationships. The second phase iteratively deforms the t...
['Matthias Bader', 'Mathieu Barrault', 'Robert Weibel']
Building displacement over a ductile truss
5,644
People design what they say specifically for their conversational partners, and they adapt to their partners over the course of a conversation. A comparison of keyboard conversations involving a simulated computer partner (as in a natural language interface) with those involving a human partner (as in teleconferencing) yielded striking differences and some equally striking similarities. For instance, there were significantly fewer acknowledgments in human/computer dialogue than in human/human. However, regardless of the conversational partner, people expected connectedness across conversational turns. In addition, the style of a partner's response shaped what people subsequently typed. These results suggest some issues that need to be addressed before a natural language computer interface will be able to hold up its end of a conversation.
['Susan E. Brennan']
Conversation with and through Computers
125,261
Advances in human brain neuroimaging at high temporal and spatial resolutions will depend on localization of electroencephalography (EEG) signals to their cortex sources. The source localization inverse problem is inherently ill-posed and depends critically on the modeling of human head electromagnetics. We present a systematic methodology to analyze the main factors and parameters that affect EEG source-mapping accuracy. These factors are not independent, and their effect must be evaluated in a unified way. Doing so requires significant computational capabilities to explore the problem landscape, quantify uncertainty effects, and evaluate alternative algorithms. Bringing high-performance computing to this domain is necessary to open new avenues for neuroinformatics research. The head electromagnetics forward problem is the heart of the source localization inverse problem. We present two parallel algorithms to address tissue inhomogeneity and impedance anisotropy. Highly accurate head modeling environments will enable new research and clinical neuroimaging applications. Cortex-localized dense-array EEG analysis is the next step in neuroimaging domains such as early childhood reading, understanding of resting-state brain networks, and models of full brain function. Therapeutic treatments based on neurostimulation will also depend significantly on high-performance computing integration. Copyright © 2015 John Wiley & Sons, Ltd.
['Adnan Salman', 'Allen D. Malony', 'Sergei Turovets', 'Vasily Volkov', 'David Ozog', 'Don M. Tucker']
Concurrency in electrical neuroinformatics: parallel computation for studying the volume conduction of brain electrical fields in human head tissues
181,756
Heterogeneous wireless networks, where several systems with different bands coexist for multimedia service, are currently in service and will be widely adopted to support various traffic demands. Under heterogeneous networks, a mobile station can transmit over multiple and simultaneous radio access technologies (RATs) such as WLAN, HSPA, and WCDMA LTE. Also, cognitive radio, for the efficient use of underutilized/unused frequency bands, has been successfully implemented in some networks. In this letter, we address such operational issues as air interface and band selection for a mobile station and power allocation to the chosen links. An optimal solution is sought and analyzed, and a distributed joint allocation algorithm is proposed to maximize total system capacity. We investigate the benefit of multiple transmissions by multiple RATs over a single transmission by a single RAT at a time, which can be interpreted as network diversity. Numerical results validate the performance enhancement of our proposed algorithm.
['Yonghoon Choi', 'Hoon Kim', 'Sang-wook Han', 'Youngnam Han']
Joint Resource Allocation for Parallel Multi-Radio Access in Heterogeneous Wireless Networks
386,559
In this paper we describe NAVRNA, an interactive system that enables biologists or researchers in bioinformatics to visualize, explore, and edit RNA molecules. The key characteristics of NAVRNA are (1) exploiting multiple display surfaces, (2) enabling the manipulation of both the 2D view of RNA, called the secondary structure, and the 3D view of RNA, called the tertiary structure, while maintaining consistency between the two views, (3) enabling co-located synchronous collaborative manipulation of the RNA structures, and (4) providing two-handed interaction techniques for navigating and editing RNA structures, in particular a two-handed technique for bending the structure.
['Gilles Bailly', 'Laurence Nigay', 'David Auber']
NAVRNA: visualization - exploration - editing of RNA
219,477
This paper describes the complete mathematical optimization process of an inductive powering system suitable for the application within implanted biomedical systems. The optimization objectives are thereby size, energy efficiency, and tissue absorption. Within the first step, the influence of the operational frequency on the given quantities is computed by means of finite element simulations, yielding a compromise of power transfer efficiency of the wireless link and acceptable tissue heating in terms of the specific absorption rate. All simulations account for the layered structure of the human head, modeling the dielectric properties with Cole-Cole dispersion effects. In the second step, the relevant coupling and loss effects of the transmission coils are modeled as a function of the geometrical design parameters, enabling a noniterative and comprehensible mathematical derivation of the optimum coil geometry given an external size constraint. Further investigations of the optimum link design also consider high-permeability structures being applied to the primary coil, enhancing the efficiency by means of an increased mutual inductance. Thereby, a final link efficiency of 80% at a coil separation distance of 5 mm and 20% at 20 mm using a 10-mm planar receiving coil can be achieved, contributing to a higher integration density of multichannel brain implanted sensors. Moreover, the given procedure does not only give insight into the optimization of the coil design, but also provides a minimized set of mathematical expressions for designing a highly efficient primary side coil driver and for selecting the components of the secondary side impedance matching. All mathematical models and descriptions have been verified by simulation and concluding measurements.
['Sebastian Stoecklin', 'Adnan Yousaf', 'Tobias Volk', 'Leonhard M. Reindl']
Efficient Wireless Powering of Biomedical Sensor Systems for Multichannel Brain Implants
672,777
We discuss holographic image representations. Arbitrary portions of a holographic representation enable reconstruction of the whole image, with distortions that decrease gradually with the increase of the size of the portions available. Holographic representations enable progressive refinement in image communication or retrieval tasks, with no restrictions on the order in which the data fragments (sections of the representation) are accessed or become available.
['Alfred M. Bruckstein', 'Robert J. Holt', 'Arun N. Netravali']
Holographic image representations: the subsampling method
81,949
The game domination number is a graph invariant that arises from a game, which is related to graph domination in a similar way as the game chromatic number is related to graph coloring. In this paper we show that deciding whether the game domination number of a graph is bounded by a given integer is PSPACE-complete. This contrasts the situation of the game coloring problem whose complexity is still unknown.
['Boštjan Brešar', 'Paul Dorbec', 'Sandi Klavžar', 'Gašper Košmrlj', 'Gabriel Renault']
Complexity of the Game Domination Problem
169,622
ICT and the Environment in Developing Countries: A Review of Opportunities and Developments.
['John Houghton']
ICT and the Environment in Developing Countries: A Review of Opportunities and Developments.
864,205
Real-Time Design Patterns: Architectural Designs for Automatic Semi-Partitioned and Global Scheduling
['Amina Magdich', 'Yessine Hadj Kacem', 'Adel Mahfoudhi', 'Mickaël Kerboeuf', 'Mohamed Abid']
Real-Time Design Patterns: Architectural Designs for Automatic Semi-Partitioned and Global Scheduling
630,945
We develop efficient algorithms for problems in computational geometry (convex hull, smallest enclosing box, ECDF, two-set dominance, maximal points, all-nearest neighbor, and closest pair) on the OTIS-Mesh optoelectronic computer. We also demonstrate algorithms for computing the convex hull and the prefix sum with condition on a multi-dimensional mesh, which are used to compute the convex hull and the ECDF, respectively. We show that all these problems can be solved in O(√N) time even with N^2 inputs.
['Chih-Fang Wang', 'Sartaj Sahni']
Computational geometry on the OTIS-Mesh optoelectronic computer
316,835
Fluid-based Analysis of TCP Flows in a Scale-Free Network
['Yusuke Sakumoto', 'Hiroyuki Ohsaki', 'Makoto Imase']
Fluid-based Analysis of TCP Flows in a Scale-Free Network
992,345
Algorithms for Fair Partitioning of Convex Polygons
['Bogdan Armaselu', 'Ovidiu Daescu']
Algorithms for Fair Partitioning of Convex Polygons
342,121
Steady-state temperature due to Joule self-heating for highly coupled integrated-circuit interconnect can be found rapidly on individual interconnect segments during electromigration reliability verification. It has previously been shown that the dc electric current solution on each interconnect segment of a net may be modified to form the analytical solution of the 1-D time-independent heat equation along the entire net. A symbolic solution of the network equations (requiring O(P^3) operations, where P is the number of nodes) is first evaluated to solve the electrical problem and then evaluated again to solve the resulting Joule heat problem (each evaluation requiring O(P) operations). The symbolic solution is extended here to couple each interconnect segment to the weighted average temperature of the segments on neighboring nets. The temperature over the entire set of nets may be found by iterating until convergence, which does not require a significant overall increase in operations. The accuracy of the temperature trajectories is principally dependent on the validity of the assumptions that the temperature background seen by each individual interconnect segment is uniform and that vias conduct heat only along their lengths. The estimated temperature of self-heated nets is 110% of the finite-element result for a realistic layout example. The net-based solution is well suited to distributed processing and identifying problematic layout.
['Andrew Labun', 'Karan Jagjitkumar']
Rapid Detailed Temperature Estimation for Highly Coupled IC Interconnect
259,092
To overcome the increasingly time-consuming and potentially challenging identification of key points and the associated rationales in large-scale online deliberations, we propose a computational linguistics method that has the potential to facilitate this process of reading and evaluating the text. Our approach is novel in how we determine the sentiment of a rationale at the sentence level, and in that it includes a text similarity measure and sentence-level sentiment analysis to achieve this goal.
['Wanting Mao', 'Lu Xiao', 'Robert E. Mercer']
The Use of Text Similarity and Sentiment Analysis to Examine Rationales in the Large-Scale Online Deliberations
141,454
Visual Features for Linguists: Basic image analysis techniques for multimodally-curious NLPers
['Elia Bruni', 'Marco Baroni']
Visual Features for Linguists: Basic image analysis techniques for multimodally-curious NLPers
616,903
Redesigning the Relationship between Government and Civil Society: An Investigation of Emerging Models of Networked Democracy in Brazil
['Eduardo Henrique Diniz', 'Manuella Maia Ribeiro']
Redesigning the Relationship between Government and Civil Society: An Investigation of Emerging Models of Networked Democracy in Brazil
53,845
CLOUDIZER: plateforme de distribution des services dans une architecture de type cloud.
['Sylvain Lefebvre', 'Raja Chiky', 'Renaud Pawlak']
CLOUDIZER: plateforme de distribution des services dans une architecture de type cloud.
808,699
In the recent past, wireless sensor technology has undergone advancements in its autonomous data collecting aspects and has become an area worth investigating in relation to structural monitoring applications. This paper describes an analysis of the performance of a linearly aligned wireless sensor network, an application that could be used for the remote monitoring of long-distance overhead transmission lines.
['Sibukele Gumbo', 'Hippolyte N. Muyingi']
Performance Investigation of Wireless Sensor Network for Long Distance Overhead Power Lines; Mica2 Motes, a Case Study
364,972
This paper presents an adaptable online Multilingual Discourse Processing System (MultiDPS), composed of four natural language processing tools: a named entity recognizer, an anaphora resolver, a clause splitter, and a discourse parser. This NLP meta-system allows any user to run it on the web or via web services and, if necessary, to build his or her own processing chain by incorporating knowledge or resources for each tool for the desired language. In this paper, a brief description of each independent module is presented, along with a case study in which the system is adapted to five different languages to create a multilingual summarization system.
['Daniel Alexandru Anechitei']
MultiDPS -- A multilingual Discourse Processing System
614,326
Recent developments in computing capabilities and persistent surveillance systems have enabled advanced analytics and visualization of image data. Using our existing capabilities, this work focuses on developing a unified approach to address the task of visualizing track data in 3-dimensional environments. Our current structure from motion (SfM) workflow is reviewed to highlight our point cloud generation methodology, which offers the option to use available sensor telemetry to improve performance. To this point, an algorithm outline for navigation-guided feature matching and geo-rectification in the absence of ground control points (GCPs) is included in our discussion. We then provide a brief overview of our onboard processing suite, which includes real-time mosaic generation, image stabilization, and feature tracking. Exploitation of geometry refinements, inherent to the SfM workflow, is then discussed in the context of projecting track data into the point cloud environment for advanced visualization. Results using the new Exelis airborne collection system, Corvus Eye, are provided to discuss conclusions and areas for future work.
['Derek J. Walvoord', 'Andrew C. Blose', 'Bernard V. Brower']
An automated workflow for observing track data in 3-dimensional geo-accurate environments
912,268
Multi-source streaming is essential for the design of a large-scale P2P streaming architecture. In this paper, we focus on improving the dependability of multi-source video streaming as an alternative to the more prevalent point-to-point streaming. We have designed and implemented a dynamic FEC (D-FEC) protocol for multi-source video streaming. Our D-FEC protocol has the ability to dynamically switch between 4 different FEC techniques to adapt to varying network conditions. A comprehensive performance evaluation was performed with a full-scale system prototype that gives insights into the behavior of the different adaptive FEC schemes, and shows the feasibility of the concept of switching FEC schemes during a streaming session. Guidelines are given in order to best design a FEC protocol switching strategy taking into account the experienced loss rate, the loss burstiness, and the original video stream rate.
['Cedric Lamoriniere', 'Abdelhamid Nafaa', 'Liam Murphy']
Dynamic Switching between Adaptive FEC Protocols for Reliable Multi-Source Streaming
136,045
On the computability of affordances as relations
['Jonathan R. A. Maier']
On the computability of affordances as relations
739,310
Quantitative measurements of change in β-amyloid load from Positron Emission Tomography (PET) images play a critical role in clinical trials and longitudinal observational studies of Alzheimer's disease. These measurements are strongly affected by methodological differences between implementations, including choice of reference region and use of partial volume correction, but there is a lack of consensus for an optimal method. Previous works have examined some relevant variables under varying criteria, but interactions between them prevent choosing a method via combined meta-analysis. In this work, we present a thorough comparison of methods to measure change in β-amyloid over time using Pittsburgh Compound B (PiB) PET imaging. Methods: We compare 1,024 different automated software pipeline implementations with varying methodological choices according to four quality metrics calculated over three-timepoint longitudinal trajectories of 129 subjects: reliability (straightness/variance); plausibility (lack of negative slopes); ability to predict accumulator/non-accumulator status from baseline value; and correlation between change in β-amyloid and change in Mini Mental State Exam (MMSE) scores. Results and conclusion: From this analysis, we show that an optimal longitudinal measure of β-amyloid from PiB should use a reference region that includes a combination of voxels in the supratentorial white matter and those in the whole cerebellum, measured using two-class partial volume correction in the voxel space of each subject's corresponding anatomical MR image.
['Christopher G. Schwarz', 'Matthew L. Senjem', 'Jeffrey L. Gunter', 'Nirubol Tosakulwong', 'Stephen D. Weigand', 'Bradley J. Kemp', 'Anthony J. Spychalla', 'Prashanthi Vemuri', 'Ronald C. Petersen', 'Val J. Lowe', 'Clifford R. Jack']
Optimizing PiB-PET SUVR change-over-time measurement by a large-scale analysis of longitudinal reliability, plausibility, separability, and correlation with MMSE
866,556