Dataset schema: abstract (string, length 5 to 11.1k), authors (string, length 9 to 1.96k), title (string, length 4 to 367), __index_level_0__ (int64, 0 to 1,000k).
This paper presents a segmentation method that extends geodesic active region methods by incorporating a statistical classifier trained using feature selection. The classifier provides class probability maps based on class-representative local features, and the geodesic active region formulation enables the partitioning of the image according to the region information. We demonstrate automatic segmentation of the myocardium in cardiac late gadolinium-enhanced magnetic resonance imaging (CE-MRI) data using coupled level set curve evolutions, in which the classifier is incorporated both through a region term and through a shape term derived from particle filtering. The results show potential for clinical studies of scar tissue in late CE-MRI data.
['Jenny Folkesson', 'Eigil Samset', 'Raymond Y. Kwong', 'Carl-Fredrik Westin']
Unifying Statistical Classification and Geodesic Active Regions for Segmentation of Cardiac MRI
410,358
Operations for extending/embedding a smaller network into a larger network that preserve the insufficiency of classes of linear network codes are presented. Linear network codes over some finite field are said to be sufficient for a network if and only if for every point in the network coding rate region, there exists a code over that finite field to achieve it. Three operations are defined, and it is proven that they have the desired inheritance property, both for scalar linear network codes and for vector linear network codes, separately. Experimental results on the rate regions of multilevel diversity coding systems (MDCS), a sub-class of the broader family of multi-source multi-sink networks with special structure, are presented for demonstration. These results demonstrate that these notions of embedding operations enable one to investigate the existence of small numbers of forbidden network minors for sufficiency of linear network codes over a given field.
['Congduan Li', 'Steven Weber', 'John MacLaren Walsh']
Network embedding operations preserving the insufficiency of linear network codes
944,257
This paper proposes a methodology to stabilize relative equilibria in a model of identical, steered particles moving in three-dimensional Euclidean space. Exploiting the Lie group structure of the resulting dynamical system, the stabilization problem is reduced to a consensus problem. We first derive the stabilizing control laws in the presence of all-to-all communication. Providing each agent with a consensus estimator, we then extend the results to a general setting that allows for unidirectional and time-varying communication topologies.
['Luca Scardovi', 'Naomi Ehrich Leonard', 'Rodolphe Sepulchre']
Stabilization of collective motion in three dimensions: A consensus approach
302,992
A Product Development System using Knowledge-intensive Support Approach
['Janus S. Liang', 'Kuo-Ming Chao', 'Paul Ivey']
A Product Development System using Knowledge-intensive Support Approach
793,299
In this paper, we present a new scheme for face recognition. The main idea is to represent the images with the similarity features against the reference set and to provide the relative match for two images. For any image, we first compute the similarities between it and all the reference images, and then we take these similarities as its feature. Based on the similarity features, a linear discriminating classifier is constructed to recognize the querying image. Inspired by research in cognitive psychology, the perceptual distance based dynamic similarity function is proposed to compute the similarity features. The proposed method can be regarded as a generalization of kernel discriminant analysis, and it can well deal with the nonlinear variations, especially occlusion. Extensive experiments are conducted to show its performance and robustness to occlusion.
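As an illustration of the similarity-feature idea described in this abstract, here is a minimal sketch assuming a plain cosine similarity in place of the paper's perceptual-distance-based dynamic similarity; the arrays `reference_set`, `gallery`, and `labels` are hypothetical stand-ins.

```python
# Minimal sketch (not the paper's exact method): represent each image by its
# similarities to a fixed reference set, then train a linear discriminant
# classifier on those similarity features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
reference_set = rng.standard_normal((50, 1024))   # 50 reference images (hypothetical)
gallery = rng.standard_normal((200, 1024))        # training images (hypothetical)
labels = rng.integers(0, 10, size=200)            # identities (hypothetical)

def similarity_features(images, refs):
    """Cosine similarity of every image against every reference image."""
    a = images / np.linalg.norm(images, axis=1, keepdims=True)
    b = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    return a @ b.T                                 # shape: (n_images, n_refs)

clf = LinearDiscriminantAnalysis().fit(similarity_features(gallery, reference_set), labels)
query = rng.standard_normal((1, 1024))
print(clf.predict(similarity_features(query, reference_set)))
```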
['Qingshan Liu', 'W. M. Yan', 'Hanqing Lu', 'Songde Ma']
Occlusion Robust Face Recognition with Dynamic Similarity Features
61,654
We have developed a graphical user interface based dendrimer builder toolkit (DBT) which can be used to generate the dendrimer configuration of desired generation for various dendrimer architectures. The validation of structures generated by this tool was carried out by studying the structural properties of two well-known classes of dendrimers: ethylenediamine cored poly(amidoamine) (PAMAM) dendrimer and diaminobutyl cored poly(propylene imine) (PPI) dendrimer. Using fully atomistic molecular dynamics (MD) simulation we have calculated the radius of gyration, shape tensor and monomer density distribution for PAMAM and PPI dendrimers at neutral and high pH. Good agreement was observed between the calculated radius of gyration and the available simulation and experimental (small-angle X-ray and neutron scattering; SAXS, SANS) results. With this validation we have used DBT to build another new class of nitrogen cored poly(propyl ether imine) dendrimer and study its structural features using all-atomistic MD simulation. DBT is a versatile tool and can be easily used to generate other dendrimer structures with different chemistry and topology. The use of the general AMBER force field to describe the intra-molecular interactions allows us to integrate this tool easily with the widely used molecular dynamics software AMBER. This makes our tool a very useful utility which can help facilitate the study of dendrimer interaction with nucleic acids, proteins and lipid bilayers for various biological applications.
['Vishal Maingi', 'Vaibhav Jain', 'Prasad V. Bharatam', 'Prabal K. Maiti']
Dendrimer building toolkit: Model building and characterization of various dendrimer architectures
307,808
Intra coding is one of the most effective ways of reducing the impact of error propagation caused by predictive coding. However, Intra coding requires a higher bitrate than Inter coding. In order to use Inter coding and reduce error propagation, it is important that Inter macroblocks predict from “safe” areas that have a decreased chance of spreading errors. To this end we propose a low complexity method of biasing the prediction mechanism towards recently Intra-updated macroblocks. We devise a method of adjusting the distortion used in rate-distortion optimization to take into account the temporal distance of the last Intra macroblock. Our simulations show that our Intra-distance Derived Weighting (IDW) method improves video coding performance in a lossy environment by up to 1.4 dB for a modest increase in bitrate.
['Sunday Nyamweno', 'Ramdas Satyan', 'Fabrice Labeau']
Intra-distance Derived Weighted distortion for error resilience
517,528
This paper presents an algorithm for learning low-dimensional representations of images in an unsupervised manner. The core idea is to combine two criteria that play important roles in unsupervised representation learning, namely sparsity and trace quotient. The former is known to be a convenient tool for identifying underlying factors, and the latter is known to disentangle underlying discriminative factors. In this work, we develop a generic cost function for jointly learning a sparsifying dictionary and a dimensionality reduction transformation. It leads to several counterparts of classic low-dimensional representation methods, such as Principal Component Analysis, Local Linear Embedding, and Laplacian Eigenmap. Our proposed optimisation algorithm leverages the efficiency of geometric optimisation on Riemannian manifolds and a closed-form solution to the elastic net problem.
['Xian Wei', 'Hao Shen', 'Martin Kleinsteuber']
Trace Quotient Meets Sparsity: A Method for Learning Low Dimensional Image Representations
818,995
I investigate the chromatic aberration of full-color optical scanning holography (OSH) and propose a digital filtering technique that compensates the chromatic aberration. The focal lengths and the extents of Red, Green and Blue (RGB) time-dependent Fresnel zone plates (TD-FZPs) which encode the RGB complex holograms depend on wavelengths of RGB beams. This generates chromatic aberration in full-color OSH. A digital filtering technique that matches the focal lengths and the extents of the recorded RGB holograms is proposed to eliminate the chromatic aberration in full-color OSH.
['Taegeun Kim']
Chromatic aberration issue on full-color optical scanning holography [Invited]
987,989
The efficient scheduling of large mixed parallel applications is challenging. Most existing algorithms utilize scheduling heuristics and approximation algorithms to determine a good schedule as a basis for an efficient execution in large-scale scientific computing. This paper concentrates on the scheduling of mixed parallel applications represented by task graphs with parallel tasks and precedence constraints between them. Layer-based scheduling algorithms for homogeneous target platforms are improved by adding a move-blocks phase that further reduces the resulting parallel runtime. The layer-based scheduling approach is described and the move-blocks algorithm is introduced in detail. The move-blocks extension provides better scheduling results for small as well as for large problems but has only a small increase in runtime. This is shown by a comparison of the modified and the original algorithms over a wide range of test cases.
['Raphael Kunis', 'Gudula Rünger']
Optimization of Layer-based Scheduling Algorithms for Mixed Parallel Applications with Precedence Constraints Using Move-blocks
375,296
In this paper, we study a layered linear binary field network with time-varying channels, which is a simplified model reflecting broadcast, interference, and fading natures of wireless communications. We observe that fading can play an important role in mitigating interuser interference effectively for both single-hop and multihop networks. We propose new coding schemes with randomized ergodic channel pairing, which exploit such channel variations, and derive their achievable ergodic rates. By comparing them with the cut-set upper bound, the capacity region of single-hop networks and the sum capacity of multihop networks are characterized for some classes of channel distributions and network topologies.
['Sang-Woon Jeon', 'Sae-Young Chung']
Capacity of a Class of Linear Binary Field Multisource Relay Networks
398,922
In this paper, we propose an architecture synthesis methodology to realize cascaded infinite impulse response (IIR) filters in table look-up (TLU) field programmable gate arrays (FPGA). The synthesis procedure involves a systematic transformation of the dependence graph (DG) corresponding to the cascaded IIR filter to a pipelined fixed full size array (PFFSA). We offer an implementation of cascaded 8th-order IIR filters on Xilinx XC3090 FPGA devices.
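For orientation, here is a short sketch of what a cascaded 8th-order IIR filter computes, factored into second-order sections (biquads) in series; the paper's contribution, the mapping of this computation onto TLU FPGAs, is not reproduced, and the filter design below is an arbitrary illustrative choice.

```python
# Signal-processing view of a cascaded 8th-order IIR filter: four biquad
# sections in series, each section's output feeding the next.
import numpy as np
from scipy import signal

# 8th-order Butterworth low-pass, factored into second-order sections
sos = signal.butter(8, 0.2, output='sos')   # normalized cutoff 0.2 (hypothetical)
x = np.random.default_rng(1).standard_normal(1000)

y = signal.sosfilt(sos, x)                  # run the cascade over the input
print(sos.shape, y.shape)                   # (4, 6): 4 sections, 6 coeffs each
```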
['G. N. Rathna', 'Sudipta Nandy', 'K. Parthasarathy']
A methodology for architecture synthesis of cascaded IIR filters on TLU FPGAs
233,952
We present policy gradient results within the framework of linearly-solvable MDPs. For the first time, compatible function approximators and natural policy gradients are obtained by estimating the cost-to-go function, rather than the (much larger) state-action advantage function as is necessary in traditional MDPs. We also develop the first compatible function approximators and natural policy gradients for continuous-time stochastic systems.
['Emanuel Todorov']
Policy gradients in linearly-solvable MDPs
242,059
Motion compensated prediction (MCP) implemented in most video coding schemes is based on translational motion model. However, nontranslational motions, for example, rotational motions, are common in videos. Higher-order motion model researches try to enhance the prediction accuracy of MCP by modeling those nontranslational motions. However, they require affine parameter estimation, and most of them have very high computational complexity. In this paper, a translational and rotational MCP method using special subsampling in the interpolated frame is proposed. This method is simple to implement and has low computational complexity. Experimental results show that many blocks can be better predicted by the proposed method, and therefore a higher prediction quality can be achieved with acceptable overheads. We believe this approach opens a new direction in MCP research.
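A minimal sketch of the translational block-matching baseline that the abstract builds on (full search minimizing the sum of absolute differences); the paper's rotational prediction via special subsampling of the interpolated frame is not reproduced here, and the frames are synthetic stand-ins.

```python
# Translational block matching: exhaustively search a window in the reference
# frame for the motion vector minimizing SAD against the current block.
import numpy as np

def best_match(block, ref, top, left, search=8):
    """Return the motion vector (dy, dx) minimizing SAD, plus the SAD value."""
    h, w = block.shape
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + h <= ref.shape[0] and x + w <= ref.shape[1]:
                sad = np.abs(ref[y:y + h, x:x + w].astype(int) - block.astype(int)).sum()
                if sad < best:
                    best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur_block = ref[20:36, 24:40]                # a 16x16 block taken from (20, 24)
print(best_match(cur_block, ref, 18, 26))    # recovers the shift: mv (2, -2)
```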
['Ka-Ho Ng', 'Lai-Man Po', 'Kwok-Wai Cheung', 'Ka-Man Wong']
Block-matching translational and rotational motion compensated prediction using interpolated reference frame
416,924
Global Positioning System (GPS) tide gauges have been realized in different configurations, e.g., with one zenith-looking antenna, using the multipath interference pattern for signal-to-noise ratio (SNR) analysis, or with one zenith- and one nadir-looking antenna, analyzing the difference in phase delay, to estimate the sea level height. In this study, for the first time, we use a true Global Navigation Satellite System (GNSS) tide gauge, installed at the Onsala Space Observatory. This GNSS tide gauge is recording both GPS and Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS) signals and makes it possible to use both the one- and two-antenna analysis approach. Both the SNR analysis and the phase delay analysis were evaluated using dual-frequency GPS and GLONASS signals, i.e., frequencies in the L-band, during a 1-month-long campaign. The GNSS-derived sea level results were compared to independent sea level observations from a co-located pressure tide gauge and show a high correlation for both systems and frequency bands, with correlation coefficients of 0.86 to 0.97. The phase delay results show a better agreement with the tide gauge sea level than the SNR results, with root-mean-square differences of 3.5 cm (GPS L1 and L2) and 3.3/3.2 cm (GLONASS L1/L2 bands) compared to 4.0/9.0 cm (GPS L1/L2) and 4.7/8.9 cm (GLONASS L1/L2 bands). GPS and GLONASS show similar performance in the comparison, and the results prove that for the phase delay analysis, it is possible to use both frequencies, whereas for the SNR analysis, the L2 band should be avoided if other signals are available. Note that standard geodetic receivers using code-based tracking, i.e., tracking the un-encrypted C/A-code on L1 and using the manufacturers’ proprietary tracking method for L2, were used. Signals with the new C/A-code on L2, the so-called L2C, were not tracked. Using wind speed as an indicator for sea surface roughness, we find that the SNR analysis performs better in rough sea surface conditions than the phase delay analysis. The SNR analysis is possible even during the highest wind speed observed during this campaign (17.5 m/s), while the phase delay analysis becomes difficult for wind speeds above 6 m/s.
['Johan Löfgren', 'Rüdiger Haas']
Sea level measurements using multi-frequency GPS and GLONASS observations
109,607
Multidimensional Structure of Colorfulness: Chroma Variation in Color Images of Natural Scenes.
['Ea Fedorovskaya', 'Huib de Ridder', 'Sn Sergej Yendrikhovskij', 'Frans J. J. Blommaert']
Multidimensional Structure of Colorfulness: Chroma Variation in Color Images of Natural Scenes.
772,459
The transmission of information within a data network is constrained by network topology and link capacities. In this paper, we study the fundamental upper bound of information multicast rates with these constraints, given the unique replicable and encodable property of information flows. Based on recent information theory advances in coded multicast rates, we are able to formulate the maximum multicast rate problem as a linear network optimization problem, assuming the general undirected network model. We then proceed to apply Lagrangian relaxation techniques to obtain (1) a necessary and sufficient condition for multicast rate feasibility, and (2) a subgradient solution for computing the maximum rate and the optimal routing strategy to achieve it. The condition we give is a generalization of the well-known conditions for the unicast and broadcast cases. Our subgradient solution takes advantage of the underlying network flow structure of the problem, and therefore outperforms general linear programming solving techniques. It also admits a natural intuitive interpretation, and is amenable to fully distributed implementations.
['Zongpeng Li', 'Baochun Li']
Efficient and distributed computation of maximum multicast rates
450,284
Legislative prediction with dual uncertainty minimization from heterogeneous information
['Yu Cheng', 'Ankit Agrawal', 'Huan Liu', 'Alok N. Choudhary']
Legislative prediction with dual uncertainty minimization from heterogeneous information
718,322
Evaluation of Test Coverage for Embedded System Testing
['Jingsong Zhu', 'Son T. Vuong', 'Samuel T. Chanson']
Evaluation of Test Coverage for Embedded System Testing
286,694
The human visual system perceives 3D depth following sensing via its binocular optical system, a series of massively parallel processing units, and a feedback system that controls the mechanical dynamics of eye movements and the crystalline lens. The process of accommodation (focusing of the crystalline lens) and binocular vergence is controlled simultaneously and symbiotically via cross-coupled communication between the two critical depth computation modalities. The output responses of these two subsystems, which are induced by oculomotor control, are used in the computation of a clear and stable cyclopean 3D image from the input stimuli. These subsystems operate in smooth synchronicity when one is viewing the natural world; however, conflicting responses can occur when viewing stereoscopic 3D (S3D) content on fixed displays, causing physiological discomfort. If such occurrences could be predicted, then they might also be avoided (by modifying the acquisition process) or ameliorated (by changing the relative scene depth). Toward this end, we have developed a dynamic accommodation and vergence interaction (DAVI) model that successfully predicts visual discomfort on S3D images. The DAVI model is based on the phasic and reflex responses of the fast fusional vergence mechanism. Quantitative models of accommodation and vergence mismatches are used to conduct visual discomfort prediction. Other 3D perceptual elements are included in the proposed method, including sharpness limits imposed by the depth of focus and fusion limits implied by Panum’s fusional area. The DAVI predictor is created by training a support vector machine on features derived from the proposed model and on recorded subjective assessment results. The experimental results are shown to produce accurate predictions of experienced visual discomfort.
['Heeseok Oh', 'Sanghoon Lee', 'Alan C. Bovik']
Stereoscopic 3D Visual Discomfort Prediction: A Dynamic Accommodation and Vergence Interaction Model
579,064
We study a rumor propagation process with incubation and constant immigration. We consider a deterministic rumor spreading model and demonstrate the persistence of a rumor when the basic reproduction number is greater than one. Due to the presence of randomness in the influence that the incubators exert on ignorants, we extrapolate the deterministic rumor model to a stochastic one by using a stochastic coefficient for the term representing the latter influence within the system. The existence and boundedness of both local and global solutions are demonstrated. We prove the uniqueness of these solutions. Conditions for extinction are also established. We perform numerical simulations to verify our stochastic model. The present work can assist decision makers in the analysis of the dynamical evolution of rumors in a given society as well as in the study of information dissemination strategies.
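Since the abstract does not give the system's equations, the following is only a generic Euler-Maruyama sketch of an ignorant/incubator/spreader model with multiplicative noise placed on the incubator-to-ignorant influence term, echoing the abstract; all drift terms, rates, and the noise intensity are hypothetical stand-ins, not the paper's exact system.

```python
# Generic Euler--Maruyama simulation of a stochastic rumor model with
# immigration (Lam) and incubation; parameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
T, dt = 50.0, 0.01
Lam, beta, delta, gamma, mu = 0.2, 0.6, 0.3, 0.2, 0.1   # hypothetical rates
sigma = 0.15                                            # noise intensity
I, E, S = 0.9, 0.05, 0.05   # ignorants, incubators, spreaders (fractions)

for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt))        # Brownian increment
    contact = beta * I * E                   # incubator -> ignorant influence
    dI = (Lam - contact - mu * I) * dt - sigma * I * E * dW
    dE = (contact - (delta + mu) * E) * dt + sigma * I * E * dW
    dS = (delta * E - (gamma + mu) * S) * dt
    I, E, S = max(I + dI, 0.0), max(E + dE, 0.0), max(S + dS, 0.0)

print(f"I={I:.3f}, E={E:.3f}, S={S:.3f}")
```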
['M. Z. Dauhoo', 'Diksha Juggurnath', 'Noure-Roukayya Badurally Adam']
The stochastic evolution of rumors within a population
729,909
Windows of opportunity and product life cycles have been shortening, placing pressure on firms to stay competitive. Many firms have responded to this pressure by setting goals of reducing new product development (NPD) cycle time and/or improving product performance, often by setting up fuzzy gates between stages, cross-functional teams, or both. This study examines the tradeoff between product performance and time to market, focusing on the effect of overlapping stages during which marketing, design, and manufacturing engineering are jointly working on performance improvement. An NPD process model comprising a design stage, a process stage, and an intermediary overlap stage representing the interaction between design and process personnel is developed. Key findings include the following. (1) Overlapping stages reduces time to market, but the marginal returns to lengthening the overlap stage yield progressively smaller improvements in time to market. (2) The longer the market window is open, the less the pressure to rush the product to market, and product performance can be further improved by leaving the product longer in development. (3) It is better to keep the product longer in development rather than accelerate time to market if the base product performance is low. (4) If the productivity of the overlap stage is increased, it is more profitable to keep the product in development longer and boost product performance at launch than to rush the product to market quicker. (5) The greater the market power the firm possesses, the faster it should bring the product to market, as long as product performance and sustainability of market power are not substantially reduced. A set of propositions is derived from the model, and is tested in a small-scale empirical study on firms in the automobile and automotive supply industry. The results are largely supportive of the propositions. Management implications and recommendations for further research are presented.
['Roger J. Calantone', 'C.A. Di Benedetto']
Performance and time to market: accelerating cycle time with overlapping stages
260,214
Multi-view learning algorithms typically assume a complete bipartite mapping between the different views in order to exchange information during the learning process. However, many applications provide only a partial mapping between the views, creating a challenge for current methods. To address this problem, we propose a multi-view algorithm based on constrained clustering that can operate with an incomplete mapping. Given a set of pairwise constraints in each view, our approach propagates these constraints using a local similarity measure to those instances that can be mapped to the other views, allowing the propagated constraints to be transferred across views via the partial mapping. It uses co-EM to iteratively estimate the propagation within each view based on the current clustering model, transfer the constraints across views, and then update the clustering model. By alternating the learning process between views, this approach produces a unified clustering model that is consistent with all views. We show that this approach significantly improves clustering performance over several other methods for transferring constraints and allows multi-view clustering to be reliably applied when given a limited mapping between the views. Our evaluation reveals that the propagated constraints have high precision with respect to the true clusters in the data, explaining their benefit to clustering performance in both single- and multi-view learning scenarios.
['Eric Eaton', 'Marie desJardins', 'Sara Jacob']
Multi-view constrained clustering with an incomplete mapping between views
513,604
We describe how Python can be leveraged to streamline the curation, modelling and dissemination of drug discovery data as well as the development of innovative, freely available tools for the related scientific community. We look at various examples, such as chemistry toolkits, machine-learning applications and web frameworks and show how Python can glue it all together to create efficient data science pipelines.
['Michał Nowotka', 'George Papadatos', 'Mark Davies', 'Nathan Dedman', 'Anne Hersey']
Want Drugs? Use Python
832,498
Model Transformations by Graph Transformation are Functors.
['Hartmut Ehrig', 'Karsten Ehrig', 'Claudia Ermel', 'Ulrike Prange']
Model Transformations by Graph Transformation are Functors.
785,837
The shock absorber inside an above-knee amputation prosthesis is a feasible solution to absorb impact forces from the ground during the patient's walking and running. A conventional shock absorber consists of a spring and a damper with constant coefficients, producing overly rigid reactions to impact forces and extremely weak reactions to gentle loadings. This study proposes a novel shock absorber design for an above-knee prosthesis, which provides automatic smooth tuning of the damping coefficient and rapid rebounds after impact loads. This design achieves the automatic tuning by a pressure-sensitive plunger valve system and check valves without any electronic devices. Theoretical modelling of the shock absorber is established, and parameters of the valve system under certain absorbing performance requirements are determined.
['Yen-Chieh Mao']
Modeling and simulation of a self-tuning rapid-releasing shock absorber for above-knee prosthesis
226,077
There is a lot of recent interest in applying views to optimize the processing of tree pattern queries (TPQs). However, existing work in this area has focused predominantly on logical optimization issues, namely, view selection and query rewriting. With the exception of the recent work on InterJoin (which is primarily focused on path queries and views), there is very little work that has examined the important physical optimization issue of how to efficiently evaluate TPQs using materialized views. In this paper, we present a new storage scheme for materialized TPQ views and a novel evaluation algorithm for processing general TPQ queries using materialized TPQ views. Our experimental results demonstrate that our proposed method outperforms the state-of-the-art approaches.
['Ding Chen', 'Chee-Yong Chan']
ViewJoin: Efficient view-based evaluation of tree pattern queries
146,895
This paper presents a topic-driven framework for generating a generic summary from multi-documents. Our approach is based on the intuition that, from the statistical point of view, the summary’s probability distribution over the topics should be consistent with the multi-documents’ probability distribution over the inherent topics. Here, the topics are defined as weighted “bag-of-words” and derived by Latent Dirichlet Allocation from a collection of documents, either the given multi-documents or a related large-scale corpus. In this sense, we could represent various kinds of text units, such as word, sentence, summary, document and multi-documents, using a single vector space model via their corresponding probability distributions over the derived topics. Therefore, we are able to extract a sentence or summary by calculating the similarity between a sentence/summary and the given multi-documents via their topic probability distributions. In particular, we propose two methods for similarity measurement: the static method and the dynamic method. While the former is employed to detect the salience of information in a static way, the latter further controls redundancy in a dynamic way. In addition, we integrate various popular features to improve the performance. Evaluation on the TAC 2008 update summarization task shows encouraging results.
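A minimal sketch of the selection step described above, assuming cosine similarity between topic distributions; the topic vectors below are random stand-ins for LDA outputs, and the "dynamic" redundancy update is a simplified illustration rather than the paper's exact formula.

```python
# Greedy sentence selection by topic-distribution similarity: score each
# sentence against the multi-document topic distribution, then shrink the
# target distribution after each pick to penalize redundant topics.
import numpy as np

rng = np.random.default_rng(4)
doc_topics = rng.dirichlet(np.ones(20))            # multi-document distribution
sent_topics = rng.dirichlet(np.ones(20), size=30)  # one distribution per sentence

def cos(p, q):
    return p @ q / (np.linalg.norm(p) * np.linalg.norm(q))

summary, target = [], doc_topics.copy()
for _ in range(3):                                  # pick a 3-sentence summary
    scores = [cos(s, target) if i not in summary else -1.0
              for i, s in enumerate(sent_topics)]
    best = int(np.argmax(scores))
    summary.append(best)
    # dynamic redundancy control: remove covered topic mass from the target
    target = np.clip(target - sent_topics[best] / 3, 0, None)

print("selected sentence indices:", summary)
```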
['Hongling Wang', 'Guodong Zhou']
Topic-Driven Multi-document Summarization
123,472
This paper is an exploration of numerical methods for solving initial-value problems for ordinary differential equations. We look in detail at one “simple” problem, namely, the leaky bucket, and use it to explore the notions of explicit marching methods vs. implicit marching methods, interpolation, backward error, and stiffness. While the leaky bucket example is very well known, and these topics are studied in a great many textbooks, we will here emphasize backward error in a way that might be new and that we hope will be useful for your students. Indeed, the paper is intended to be a resource for students themselves. We will also use two techniques not normally seen in a first course, a new one, namely, “optimal backward error,” and an old one, namely, “the method of modified equations.”
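A small sketch of the explicit-versus-implicit marching contrast on the leaky-bucket problem, assuming the Torricelli form dh/dt = -k*sqrt(h); the backward-Euler update happens to have a closed-form solution here, which keeps the comparison self-contained.

```python
# Forward (explicit) vs backward (implicit) Euler on dh/dt = -k*sqrt(h).
# The implicit step solves h_new = h - dt*k*sqrt(h_new): with s = sqrt(h_new),
# this is s^2 + dt*k*s - h = 0, whose positive root is taken below.
import math

k, dt = 1.0, 0.1
h_exp, h_imp = 1.0, 1.0
for _ in range(5):
    # explicit: evaluate the outflow at the current height
    h_exp = max(h_exp - dt * k * math.sqrt(h_exp), 0.0)
    # implicit: evaluate the outflow at the new height (closed-form root)
    s = (-dt * k + math.sqrt((dt * k) ** 2 + 4 * h_imp)) / 2
    h_imp = s * s
    print(f"explicit {h_exp:.5f}   implicit {h_imp:.5f}")
```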
['Robert M. Corless', 'Julia E. Jankowski']
Variations on a Theme of Euler
932,445
The long latency associated with mobile IPv6 home-address and care-of-address tests can significantly impact delay-sensitive applications. This paper presents an optimization of mobile IPv6 correspondent registrations that evades the latency of both address tests. An optimized correspondent registration eliminates 50%, or more, of the additional delay that a standard correspondent registration adds to the network stack's overall latency. The optimization is realized as an optional, and fully backward-compatible, extension to mobile IPv6.
['Christian Vogt', 'Roland Bless', 'Mark Doll', 'Tobias Kuefner']
Early binding updates for mobile IPv6
289,578
HyperSQL is an interoperability layer that enables database administrators to rapidly construct browser-based query interfaces to remote Sybase databases. Current browsers (i.e., Netscape, Mosaic, Internet Explorer) do not easily interoperate with databases without extensive CGI (Common Gateway Interface) programming. HyperSQL can be used to create forms and hypertext-based database interfaces for non-computer experts (e.g., scientists, business users). Such interfaces permit the user to query databases by filling out query forms selected from menus. No knowledge of SQL is required because the interface automatically composes SQL from user input. Database results are automatically formatted as graphics and hypertext, including clickable links which can issue additional queries for browsing through related data, bring up other Web pages, or access remote search engines. Query interfaces are constructed by inserting a small set of HyperSQL descriptors and HTML formatting into text files. No compilation is necessary because commands are interpreted and carried out by the special gateway, positioned between the remote databases and the Web browser. Feedback from developers who have used the initial release of HyperSQL has been encouraging. At present, query interfaces have been successfully implemented for three major NSF-sponsored biological databases: Microbial Germplasm Database, Mycological Types Collection, and Vascular Plants Types Collection.
['Mark Newsome', 'Cherri M. Pancake', 'F. Joe Hanus']
HyperSQL: web-based query interfaces for biological databases
96,386
This paper studies the loading coordinations for large-population autonomous individual (plug-in) electric vehicles (EVs) and a few controllable bulk loads, e.g. EV fleets, pumped storage hydro units, and so on. Due to the computational infeasibility of the centralized coordination methods to the underlying large-population systems, in this paper we develop a novel game-based decentralized coordination strategy. Following the proposed decentralized strategy update mechanism and under some mild conditions, the system may quickly converge to a nearly valley-fill Nash equilibrium. The results are illustrated with numerical examples.
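A toy sketch in the spirit of the decentralized coordination described above: the aggregate load profile is broadcast, each EV best-responds by charging in the least-loaded hours, and the process iterates toward a valley-filling allocation. The horizon, demands, and the discrete two-hour best response are hypothetical simplifications, and the controllable bulk loads are omitted.

```python
# Sequential best-response iteration toward a valley-filling allocation.
import numpy as np

base = np.array([60, 55, 40, 30, 25, 25, 35, 50.0])   # non-EV load per hour
n_ev, need = 200, 1.0             # each EV charges `need` in each of 2 hours
profiles = np.zeros((n_ev, len(base)))

for _ in range(10):               # repeat until (approximate) equilibrium
    for i in range(n_ev):
        others = base + profiles.sum(axis=0) - profiles[i]  # load EV i sees
        cheapest = np.argsort(others)[:2]                   # 2 least-loaded hours
        profiles[i] = 0.0
        profiles[i, cheapest] = need

print(np.round(base + profiles.sum(axis=0), 1))       # roughly valley-filled
```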
['Xiaokun Yin', 'Zhongjing Ma', 'Lei Dong']
Decentralized loading coordinations for large-population plug-in electric vehicles and a few controllable bulk loads
197,500
We focus on McCarthy's method of predicate circumscription in order to establish various results about its consistency, and about its ability to conjecture new information. A basic result is that predicate circumscription cannot account for the standard kinds of default reasoning. Another is that predicate circumscription yields no new information about the equality predicate. This has important consequences for the unique names and domain closure assumptions.
['David W. Etherington', 'Robert E. Mercer', 'Raymond Reiter']
On the adequacy of predicate circumscription for closed-world reasoning
258,321
For spoken term detection, it is very important to consider out-of-vocabulary (OOV) words. Therefore, sub-word unit based recognition and retrieval methods have been proposed. This paper describes a very fast Japanese spoken term detection system that is robust to OOV words. We used individual syllables as the sub-word unit in continuous speech recognition and an n-gram index of syllables in the recognized syllable-based lattice. We propose an n-gram indexing/retrieval method in the syllable lattice to handle OOV words and achieve high-speed retrieval. In particular, we redefined the distance of the n-gram and used trigrams, bigrams, and unigrams, instead of only trigrams, to calculate the exact distance. In our experiments with text and speech queries, we improved the retrieval performance.
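A minimal sketch of a syllable n-gram inverted index with trigram-to-unigram back-off, as described above; the syllable sequences are hypothetical examples, and the flat back-off (rather than the paper's redefined n-gram distance) is an illustrative simplification.

```python
# Build an inverted index over syllable n-grams (n = 3, 2, 1) and retrieve
# candidate utterances for a spoken query, backing off to shorter n-grams.
from collections import defaultdict

def ngrams(syllables, n):
    return [tuple(syllables[i:i + n]) for i in range(len(syllables) - n + 1)]

utterances = {0: ["to", "o", "kyo", "e", "i", "ku"],
              1: ["kyo", "o", "to", "ni", "su", "mu"]}

index = defaultdict(list)               # n-gram -> [(utterance_id, position)]
for uid, syls in utterances.items():
    for n in (3, 2, 1):
        for pos, g in enumerate(ngrams(syls, n)):
            index[g].append((uid, pos))

query = ["to", "o", "kyo"]              # query as a syllable sequence
for n in (3, 2, 1):                     # back off from trigram to unigram
    hits = [h for g in ngrams(query, n) for h in index.get(g, [])]
    if hits:
        print(f"{n}-gram hits:", hits)
        break
```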
['Nagisa Sakamoto', 'Seiichi Nakagawa']
Robust/fast out-of-vocabulary spoken term detection by N-gram index with exact distance through text/speech input
412,449
Information propagation on graphs is a fundamental topic in distributed computing. One of the simplest models of information propagation is the push protocol in which at each round each agent independently pushes the current knowledge to a random neighbour. In this paper we study the so-called coalescing-branching random walk (COBRA), in which each vertex pushes the information to k randomly selected neighbours and then stops passing information until it receives the information again. The aim of COBRA is to propagate information fast but with a limited number of transmissions per vertex per step. In this paper we study the cover time of the COBRA process defined as the minimum time until each vertex has received the information at least once. Our main result says that if G is an n-vertex r-regular graph whose transition matrix has second eigenvalue λ, then the COBRA cover time of G is O(log n), if 1-λ is greater than a positive constant, and O((log n)/(1-λ)^3), if 1-λ ≫ √((log n)/n). These bounds are independent of r and hold for 3 ≤ r ≤ n-1. They improve the previous bound of O(log^2 n) for expander graphs [Dutta et al., SPAA 2013]. Our main tool in analysing the COBRA process is a novel duality relation between this process and a discrete epidemic process, which we call a biased infection with persistent source (BIPS). A fixed vertex v is the source of an infection and remains permanently infected. At each step each vertex u other than v selects k neighbours, independently and uniformly, and u is infected in this step if and only if at least one of the selected neighbours has been infected in the previous step. We show the duality between COBRA and BIPS which says that the time to infect the whole graph in the BIPS process is of the same order as the cover time of the COBRA process.
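A simulation sketch of the COBRA process as defined above; whether the k neighbour selections are with or without replacement is an assumption here (with replacement), and the test graph is an arbitrary choice.

```python
# Simulate COBRA: an informed vertex pushes to k random neighbours and then
# sleeps until it is pushed to again; cover time is the first round by which
# every vertex has been informed at least once.
import random

def cobra_cover_time(adj, k=2, start=0, max_rounds=10**6):
    informed = {start}          # vertices ever informed
    active = {start}            # vertices that push this round
    rounds = 0
    while len(informed) < len(adj) and rounds < max_rounds:
        nxt = set()
        for v in active:
            for u in random.choices(adj[v], k=k):   # k pushes, with replacement
                nxt.add(u)
        informed |= nxt
        active = nxt            # only vertices pushed to this round act next
        rounds += 1
    return rounds

# 3-regular test graph: a ring with antipodal chords (hypothetical)
n = 64
adj = [[(i - 1) % n, (i + 1) % n, (i + n // 2) % n] for i in range(n)]
print(cobra_cover_time(adj, k=2))
```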
['Colin Cooper', 'Tomasz Radzik', 'Nicolás Rivera']
The Coalescing-Branching Random Walk on Expanders and the Dual Epidemic Process
636,478
Automatically identifying the musical instruments present in audio recordings is a complex and difficult task. Although the focus has recently shifted to identifying instruments in a polyphonic setting, the task of identifying solo instruments has not been solved. Most empirical studies recognizing musical instruments use only a single dataset in the experiments, despite evidence that such approaches do not generalize from one dataset to another. In this work, we present a method for data-driven learning of spectral filters for use in feature extraction from audio recordings of solo musical instruments and discuss the extensibility of this approach to polyphonic mixtures of instruments. We examine four datasets of musical instrument sounds that have 13 instruments in common. We demonstrate cross-dataset validation by showing that a feature extraction scheme learned from one dataset can be used successfully for feature extraction and classification on another dataset.
['Patrick J. Donnelly', 'John W. Sheppard']
Cross-Dataset Validation of Feature Sets in Musical Instrument Classification
908,122
Mean-squared-error (MSE) is one of the most widely used performance metrics for the designs and analysis of multi-input-multiple-output (MIMO) communications. Weighted MSE minimization, a more general formulation of MSE minimization, plays an important role in MIMO transceiver optimization. While this topic has a long history and has been extensively studied, existing treatments on the methods in solving the weighted MSE optimization are more or less sporadic and non-systematic. In this paper, we firstly review the two major methodologies, Lagrange multiplier method and majorization theory based method, and their common procedures in solving the weighted MSE minimization. Then some problems and limitations of the methods that were usually neglected or glossed over in existing literature are provided. These problems are fundamental and of critical importance for the corresponding MIMO transceiver optimizations. In addition, a new extended matrix-field weighted MSE model is proposed. Its solutions and applications are discussed in details. Compared with existing models, this new model has wider applications, e.g., nonlinear MIMO transceiver designs and capacity-maximization transceiver designs for general MIMO networks.
['Chengwen Xing', 'Yindi Jing', 'Yiqing Zhou']
On Weighted MSE Model for MIMO Transceiver Optimization
900,878
In this paper we propose an extension to Fuzzy Cognitive Maps (FCMs) that aims at aggregating a number of reasoning tasks into one parallel run. The described approach consists of replacing real-valued activation levels of concepts (and, further, influence weights) by random variables. Such an extension, followed by the implemented software tool, allows for determining ranges reached by concept activation levels, sensitivity analysis, as well as statistical analysis of multiple reasoning results. We replace the multiplication and addition operators appearing in the FCM state equation by appropriate convolutions applicable to discrete random variables. To make the model computationally feasible, it is further augmented with aggregation operations for discrete random variables. We discuss four implemented aggregators and report results of preliminary tests.
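A minimal sketch of the convolution operators such an extension relies on, for PMFs of independent discrete random variables represented as value-to-probability dicts; the aggregation step that keeps supports small is omitted, and the example distributions are hypothetical.

```python
# PMFs of the sum and the product of independent discrete random variables,
# the two convolutions needed to evaluate an FCM state equation on RVs.
from collections import defaultdict

def conv(p, q, op):
    """PMF of op(X, Y) for independent X ~ p, Y ~ q (dicts value -> prob)."""
    r = defaultdict(float)
    for x, px in p.items():
        for y, qy in q.items():
            r[op(x, y)] += px * qy
    return dict(r)

# hypothetical activation level and influence weight as discrete RVs
activation = {0.2: 0.5, 0.8: 0.5}
weight = {-0.5: 0.3, 1.0: 0.7}

prod = conv(activation, weight, lambda a, b: a * b)   # weighted influence
print(prod)
total = conv(prod, {0.1: 1.0}, lambda a, b: a + b)    # add a further term
print(total)
```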
['Piotr Szwed']
Combining Fuzzy Cognitive Maps and Discrete Random Variables
584,756
With the advent of cloud and virtualization technologies and the integration of various computer communication technologies, today’s computing environments can provide virtualized high-quality services. Network traffic has also continuously increased with remarkable growth. Software defined networking/network function virtualization (SDN/NFV) enhances infrastructure agility, so network operators and service providers are able to program their own network functions on a vendor-independent hardware substrate. However, in order for SDN/NFV to realize a profit, it must provide new resource sharing and monitoring procedures among the regionally distributed and virtualized computers. In this paper, we propose a practical measurement framework for network performance based on an NFV monitoring architecture. We also propose an end-to-end connectivity support platform across whole SDN/NFV networks, an issue that has not been fully addressed.
['Hyuncheol Kim', 'Seunghyun Yoon', 'Hongseok Jeon', 'Wonhyuk Lee', 'Seungae Kang']
Service platform and monitoring architecture for network function virtualization (NFV)
892,674
In many applications a Poisson shot noise (PSN) process is said to statistically "represent" its intensity process. In this paper an investigation is made of the relationship between a PSN process and its intensity, when the latter is a sample function of a continuous stochastic process. The difference of the moments and the mean-square difference between the two processes are examined. The continuity assumption on the intensity permits the development of a sequence of moment relationships in which the effect of the PSN parameters can be seen. The results simplify and afford some degree of physical interpretation when the component functions of the PSN are "rectangular," or when the intensity process does not vary appreciably over their time width. An integral equation is derived that defines the component function that minimizes the mean-square difference between the two processes. It is shown that a "degenerate" form of component function induces complete statistical equality of the two processes. The problem has application to optical communication systems using photodetectors.
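A simulation sketch of a Poisson shot-noise process driven by a continuous intensity, using thinning to generate the arrivals and a rectangular component function as in the abstract's special case; the particular intensity and pulse width are illustrative choices.

```python
# Poisson shot noise: X(t) = sum_i g(t - t_i), where the arrival times t_i
# come from an inhomogeneous Poisson process with intensity lam(t),
# generated here by thinning a rate-lam_max homogeneous process.
import numpy as np

rng = np.random.default_rng(5)
T, lam_max = 20.0, 5.0
lam = lambda t: 2.5 * (1 + np.sin(t))           # intensity process (illustrative)

t, arrivals = 0.0, []
while t < T:
    t += rng.exponential(1 / lam_max)           # candidate arrival
    if t < T and rng.random() < lam(t) / lam_max:
        arrivals.append(t)                      # accept with prob lam(t)/lam_max

width = 0.5                                     # rectangular component function
g = lambda u: np.where((u >= 0) & (u < width), 1.0 / width, 0.0)

grid = np.linspace(0, T, 2001)
shot = sum(g(grid - ti) for ti in arrivals)     # one PSN sample path
print(len(arrivals), shot.mean())               # mean is near the mean intensity
```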
['Sherman Karp', 'Robert M. Gagliardi']
On the representation of a continuous stochastic intensity by Poisson shot noise
216,697
With requirements of spiraling data rates and limited spectrum availability, there is an increased interest in mm-wave beamformer-based communications for 5G. For upcoming cellular networks, the critical point is to exploit the increased number of employable antennas at both Tx and Rx to: 1) combat increased path loss; 2) tackle higher interference due to higher user density; and 3) handle multipath effects in frequency selective channels. Toward this, a multi-beam spatiotemporal superresolution beamforming framework is proposed in this paper as a promising candidate to design beampatterns that mitigate/suppress co-channel interference and deliver massive gain in the desired directions. Initially, channel and signal models suitable for the mm-wave MIMO system are presented using the manifold vectors of both Tx and Rx antenna arrays. Based on these models, a novel subspace-based channel estimator is employed, which estimates delays, directions, velocities, and fading coefficients of the desired signal paths. This information is then exploited by the proposed spatiotemporal beamformer to provide a massive array gain that combats path loss without increasing the number of antenna array elements and to be tolerant to the near-far problem in a high interference environment. The performance of the proposed channel estimator and beamformer is examined using computer simulation studies.
['Vidhya Sridhar', 'Thibaud Gabillard', 'Athanassios Manikas']
Spatiotemporal-MIMO Channel Estimator and Beamformer for 5G
892,838
This study extends IT ethics research by proposing an IT ethical behavioral model that includes attitude, perceived importance, subjective norms, situational factors, and individual characteristics. The proposed model integrates elements from the Theory of Planned Behavior (TPB) and Theory of Reasoned Action (TRA) as well as ethical decision-making models. It is hypothesized that behavioral intention is influenced by an individual's attitude (which in turn is influenced by consequences of the action and the environment), obligation, and personal characteristics. The results of the study show that some factors are consistently significant in affecting attitude and behavioral intention. Other factors are significant only in certain scenarios. From the results, organizations may be able to develop realistic training programs for IT professionals and managers and incorporate deterrent and preventive measures that can curb the rising tide of undesired misuse.
['Lori N. K. Leonard', 'Timothy Paul Cronan', 'Jennifer Kreie']
What influences IT ethical behavior intentions: planned behavior, reasoned action, perceived importance, or individual characteristics?
382,654
In this paper, a body sensor network (BSN) system for measuring systolic blood pressure (SBP) by a cuffless approach based on h-Shirt has been developed. Two experiments were conducted on a total of 22 subjects to evaluate the cuffless calibration approach and BSN system respectively. The results showed that using the estimated time and distance travelled by a pulse and a parameter derived from the secondary derivative of PPG, the BSN system can measure SBP completely without using a cuff within 1.2±6.0 mmHg of the reference on 10 healthy subjects aged 21-35 yrs.
['W.B. Gu', 'Carmen C. Y. Poon', 'M. Y. Sy', 'H. K. Leung', 'Yan Liang', 'Yuan-Ting Zhang']
A h-Shirt-Based Body Sensor Network for Cuffless Calibration and Estimation of Arterial Blood Pressure
92,570
This paper describes a prototype visualization system for concurrent and distributed applications programmed using Erlang, providing two levels of granularity of view. Both visualizations are animated to show the dynamics of aspects of the computation. At the low level, we show the concurrent behaviour of the Erlang schedulers on a single instance of the Erlang virtual machine, which we call an Erlang node. Typically there will be one scheduler per core on a multicore system. Each scheduler maintains a run queue of processes to execute, and we visualize the migration of Erlang concurrent processes from one run queue to another as work is redistributed to fully exploit the hardware. The schedulers are shown as a graph with a circular layout. Next to each scheduler we draw a variable length bar indicating the current size of the run queue for the scheduler. At the high level, we visualize the distributed aspects of the system, showing interactions between Erlang nodes as a dynamic graph drawn with a force model. Specifically we show message passing between nodes as edges and lay out nodes according to their current connections. In addition, we also show the grouping of nodes into “s_groups” using an Euler diagram drawn with circles.
['Robert Baker', 'Peter Rodgers', 'Simon J. Thompson', 'Huiqing Li']
Multi-level Visualization of Concurrent and Distributed Computation in Erlang
604,145
In the current communications environment, there is rapid proliferation of different types of broadband wired and wireless access networks. It is conceivable for users to roam between these heterogeneous networks and access content and services in a seamless manner and at different times. From the viewpoint of a service provider, the availability of different networks creates the possibility to deliver content using the most cost-effective route. This is particularly important for most multimedia content, especially of the entertainment genre where there is flexibility of time of delivery. Because of the added flexibility on the consumption time, a route selection among the different available networks for content delivery becomes feasible. In this paper, we present a solution that uses a remote site downloading function to deliver requested content through a customer selected route. Key problems of remote site downloading are addressed, including location management, remote service invocations, access account management and security etc. An overall system architecture design is presented.
['Jun Li', 'Junbiao Zhang', 'Snigdha Verma', 'Kumar Ramaswamy']
Mobile content delivery through heterogeneous access networks
538,767
In this paper, the model of the Occupational Health and Safety management system (OHS management system model) implementable in organizations striving for continuous improvement and excellence is presented. The concepts of excellent organizations and organizational maturity are explained. The model presented by the author is the result of case-study-based research conducted in five manufacturing companies in the Wielkopolska region. All the analyzed companies clearly focus their attention on improving the OHS management system and are interested in the assessment of organizational maturity in this area and in meeting the requirements of a health and safety excellence model. The basic assumption of the model is the application of the continuous improvement principle at three management levels: strategic, tactical and operational. As an extension of the model presented, the option of applying Deming's fourteen principles to the area of health and safety management is introduced. The approach to the management of health and safety presented here points to the ever increasing interest of enterprises in these issues; in addition, it shows that achieving and improving organizational maturity is only possible with regard to issues of health and safety.
['Anna Mazur']
Model of OHS Management Systems in an Excellent Company
546,079
Data Exchange Topologies for the DISCO-HITS Algorithm to Solve the QAP
['Omar Abdelkafi', 'Lhassane Idoumghar', 'Julien Lepagnot', 'Mathieu Brévilliers']
Data Exchange Topologies for the DISCO-HITS Algorithm to Solve the QAP
942,233
THE DYNAMICS OF LOCUST NON-SPIKING LOCAL INTERNEURONS - Responses to Imposed Limb Movements
['Oliver P. Dewhirst', 'Natalia Angarita-Jaimes', 'D.M. Simpson', 'Robert Allen', 'Philip L. Newland']
THE DYNAMICS OF LOCUST NON-SPIKING LOCAL INTERNEURONS - Responses to Imposed Limb Movements
756,820
The numbers of the roots of det(B − λA) in different portions of the real axis are determined, where A and B are Hermitian matrices and B is positive definite. The result is an extension of a theorem by Xiaoshu and Hua [4].
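A numerical sketch of the root-locating task, assuming one reduces det(B − λA) = 0 to the generalized eigenproblem A v = μ B v (well-posed since B is positive definite), so that the roots are λ = 1/μ for each nonzero μ; counting by direct enumeration below is a naive stand-in for the paper's result, shown only to make the setting concrete.

```python
# Locate the roots of det(B - lambda*A), with A Hermitian and B positive
# definite, via the generalized eigenvalues of (A, B).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                       # Hermitian (real symmetric here)
N = rng.standard_normal((5, 5))
B = N @ N.T + 5 * np.eye(5)             # positive definite

mu = eigh(A, B, eigvals_only=True)      # solves A v = mu B v
roots = np.sort(1.0 / mu[np.abs(mu) > 1e-12])   # roots of det(B - lambda*A)

print("roots:", roots)
print("roots in (0, inf):", int(np.sum(roots > 0)))
```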
['H. Väliaho']
A note on locating eigenvalues
585,961
Linear feedback shift register (LFSR) is an important component of the cyclic redundancy check (CRC) operations and BCH encoders. The contribution of this paper is two fold. First, this paper presents a mathematical proof of existence of a linear transformation to transform LFSR circuits into equivalent state space formulations. This transformation achieves a full speed-up compared to the serial architecture at the cost of an increase in hardware overhead. This method applies to all generator polynomials used in CRC operations and BCH encoders. Second, a new formulation is proposed to modify the LFSR into the form of an infinite impulse response (IIR) filter. We propose a novel high speed parallel LFSR architecture based on parallel IIR filter design, pipelining and retiming algorithms. The advantage of the proposed approach over the previous architectures is that it has both feedforward and feedback paths. We further propose to apply combined parallel and pipelining techniques to eliminate the fanout effect in long generator polynomials. The proposed scheme can be applied to any generator polynomial, i.e., any LFSR in general. The proposed parallel architecture achieves better area-time product compared to the previous designs.
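For reference, here is a software model of the serial (one bit per clock) LFSR that such architectures parallelize, computing a CRC-8 with generator polynomial x^8 + x^2 + x + 1; the paper's state-space transformation and parallel/pipelined IIR formulation are not reproduced.

```python
# Serial MSB-first LFSR for CRC-8 (poly 0x07, init 0x00, no reflection):
# each loop iteration models one clock cycle of the shift register.
def crc8_serial(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    reg = init
    for byte in data:
        for i in range(7, -1, -1):                # one bit per "clock"
            fb = ((reg >> 7) ^ (byte >> i)) & 1   # feedback = reg MSB xor input bit
            reg = ((reg << 1) & 0xFF) ^ (poly if fb else 0)
    return reg

print(hex(crc8_serial(b"123456789")))             # standard check value: 0xf4
```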
['Manohar Ayinala', 'Keshab K. Parhi']
High-Speed Parallel Architectures for Linear Feedback Shift Registers
458,103
A variety of web sites and web based services produce textual lists at varying time granularities ranked according to several criteria. For example, Google Trends produces lists of popular query keywords which can be visualized according to several criteria. At Flickr, lists of popular tags used to tag the images uploaded can be visualized as a cloud based on their popularity. Identification of the k most popular terms can be easily conducted by utilizing well known rank aggregation algorithms. In this paper we take a different approach to information discovery from such ranked lists. We maintain the same rank aggregation framework but we elevate terms to a higher level by making use of popular term hierarchies commonly available. Under such a transformation we show that typical early stopping certificates available for rank aggregation algorithms are no longer applicable. Based on this observation, in this paper, we present a probabilistic framework for early stopping in this setting. We introduce a relaxed version of the rank aggregation problem involving a deterministic stopping condition with user specified precision. We introduce an algorithm pH-RA for the solution of this problem. In addition we introduce techniques to improve the performance of pH-RA even further via precomputation utilizing a sparse set system. Through a detailed experimental evaluation using synthetic and real datasets we demonstrate the efficiency of our framework.
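A toy sketch of the term-elevation step described above: terms from ranked lists are mapped to their hierarchy parents and their scores aggregated per category before re-ranking; the hierarchy, the lists, and the additive score aggregation are all hypothetical simplifications of the paper's framework.

```python
# Elevate ranked terms into a hierarchy, then aggregate scores per category.
parent = {"ipod": "electronics", "xbox": "electronics",
          "tulip": "flowers", "rose": "flowers"}
lists = [[("ipod", 10), ("tulip", 7), ("xbox", 5)],   # ranked list 1
         [("rose", 9), ("ipod", 6)]]                  # ranked list 2

agg = {}
for ranked in lists:
    for term, score in ranked:
        cat = parent[term]                # lift the term to its parent category
        agg[cat] = agg.get(cat, 0) + score

print(sorted(agg.items(), key=lambda kv: -kv[1]))     # re-ranked categories
```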
['Nilesh Bansal', 'Sudipto Guha', 'Nick Koudas']
Ad-hoc aggregations of ranked lists in the presence of hierarchies
28,660
Differentiated service (DiffServ) in combination with multiprotocol label switching (MPLS) is a promising technology in converting the best-effort Internet into a QoS-capable network. This paper describes the design and implementation of key DiffServ components, including classifier, meter/shaper, queue manager, and scheduler, in an MPLS edge router architecture on a network-processor platform. We describe how the DiffServ functionalities were realized and also analyze the performance of the system under different traffic patterns. Various factors that cause performance degradation in the architecture are observed and analyzed. These include receiving processing architecture, transmitting processing architecture, micro-engine processing power, memory access latency, and complexity of each DiffServ component. Among them, we found that traffic classification demands more processing resource and hence is a major factor in limiting the system throughput
['Wei-Chu Lai', 'Kuo-Ching Wu', 'Ting-Chao Hou']
Design and Evaluation of Diffserv Functionalities in the MPLS Edge Router Architecture
354,482
It is well known that wavelets provide good non-linear approximation of one-dimensional (1-D) piecewise smooth functions. However, it has been shown that the use of a basis with good approximation properties does not necessarily lead to a good compression algorithm. The situation in 2-D is much more complicated since wavelets are not good for modeling piecewise smooth signals (where discontinuities are along smooth curves). The purpose of this work is to analyze the performance of compression algorithms for 2-D piecewise smooth functions directly in a rate distortion context. We consider some simple image models and compute rate distortion bounds achievable using oracle based methods. We then present a practical compression algorithm based on optimal quadtree decomposition that, in some cases, achieves the oracle performance.
['Minh N. Do', 'Pier Luigi Dragotti', 'Rahul Shukla', 'Martin Vetterli']
On the compression of two-dimensional piecewise smooth functions
211,875
User profile data (for example, age and sex) is usually self-reported by users, so it is prone to human errors or biases. For example, a user can be reluctant to provide a company with private information such as his/her actual age upon subscription, so the user either does not fill in the age column or puts in some random numbers to avoid unwanted privacy intrusion. However, inaccurate or uncertain user profile data undermines the integrity of a company's marketing or operational intelligence. Targeting customers based on uncertain user profile data will not be as effective as targeting customers based on accurate user profile data. Thus companies perform preprocessing on user profile data as part of the effort to maintain the accuracy of their user profile data. This paper presents a study of preprocessing uncertain user profile data based on a proposed simple collaborative learning algorithm. We demonstrate that a user's accurate profile information can be inferred from the profile information of the user's social network neighbors. In particular, we address the issue of how a communication service company can verify whether a user's reported age is true or not. We implement a simple collaborative learning algorithm using mobile network data. The dataset contains anonymized user data from a large Korean mobile company, capturing 174,071 users' demographic profiles and their communication histories. To construct a mobile social network among users, we collect 3G voice call histories including 561,787 unique call receivers who belong to the same service carrier. Results reveal that the prediction accuracy of the proposed method based on voice network data is 97%, which is very high compared to 53%, the best accuracy among competing methods, and indicates that our method effectively detects users with great discrepancy between self-reported age and actual age.
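A toy sketch of the collaborative inference idea: estimate a user's age from the reported ages of their call-graph neighbors and flag large discrepancies; the graph, ages, and threshold below are hypothetical, and the paper's actual learning algorithm is not reproduced.

```python
# Infer each user's age as the median of their neighbors' reported ages and
# flag users whose self-report deviates strongly from the inference.
import statistics

calls = {"u1": ["u2", "u3", "u4"], "u2": ["u1", "u3"],
         "u3": ["u1", "u2", "u4"], "u4": ["u1", "u3"]}
reported = {"u1": 25, "u2": 27, "u3": 24, "u4": 61}   # u4 looks suspicious

for user, neighbors in calls.items():
    inferred = statistics.median(reported[n] for n in neighbors)
    flag = abs(reported[user] - inferred) > 10        # hypothetical threshold
    print(f"{user}: reported {reported[user]}, inferred {inferred}, "
          f"{'SUSPECT' if flag else 'ok'}")
```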
['Sung Hyuk Park', 'Sang Pil Han', 'Soon Young Huh', 'Hojin Lee']
Preprocessing Uncertain User Profile Data: Inferring User's Actual Age from Ages of the User's Neighbors
414,456
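The entry above does not spell out its collaborative learning algorithm, so the following is only a hedged sketch of the underlying intuition: predict a user's age as the call-volume-weighted average of the neighbors' reported ages and flag large discrepancies. The weighting scheme and the 10-year threshold are illustrative assumptions, not the paper's method.

```python
def infer_age(neighbor_ages, call_counts):
    """Call-volume-weighted average of the neighbors' reported ages."""
    total = sum(call_counts)
    return sum(a * c for a, c in zip(neighbor_ages, call_counts)) / total

def is_suspect(reported_age, neighbor_ages, call_counts, threshold=10):
    """Flag a user whose self-reported age deviates strongly from the
    age inferred from their call-graph neighbors."""
    return abs(reported_age - infer_age(neighbor_ages, call_counts)) > threshold

# A user reporting age 25 whose frequent contacts are mostly around 50
print(is_suspect(25, [48, 52, 51, 23], [30, 25, 40, 2]))  # True
```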
This paper describes the system development of an admittance-controlled 2-dof haptic device using electrostatic motors. The unique point of the device is that all components, including the motors and sensors, are fabricated from non-ferromagnetic materials to allow operation in a strong magnetic field. The device consists of an operation handle, two electrostatic motor units and two optical force sensors, and is an improved version of our previous development. The new device has stiffer platforms, improved force sensors, and newly fabricated driving electronics. The employed electrostatic motor is one of the strongest electrostatic actuators and can generate a thrust force of several newtons. To operate a motor unit, two three-phase signals are required, and they have to be amplified to over 1 kV. In our previous development, the signals were amplified by six large commercial amplifiers, which was not practical enough considering the portability of the system. This work developed original driving electronics for the motor using a Field Programmable Gate Array (FPGA) and step-up transformers to reduce the system size. The device was implemented on one of the major haptic toolkits, CHAI 3D. To be able to run sample impedance-type programs without any modifications, we implemented our device to behave like an impedance-type device on CHAI 3D.
['Fumitaka Kimura', 'Akio Yamamoto', 'Masayuki Hara', 'Jin Ryu', 'Hannes Bleuler', 'Toshiro Higuchi']
System development of an admittance-controlled 2-dof haptic device using electrostatic motors
383,932
Operators of federated networks require timely detection and diagnosis of networking problems. Detecting such problems requires access to monitoring data that exists in multiple organisations. This paper presents ongoing work on an Alarms Service which uses a standards-based access mechanism to obtain monitoring data from multiple organisations and then raises alarms based on pre-defined conditions. It discusses typical monitoring requirements of such networks and how these can be addressed with a flexible architecture.
['Charaka Palansuriya', 'J. Nowell', 'Florian Scharinger', 'Kostas Kavoussanakis', 'Arthur S. Trew']
A Standards-Based Alarms Service for Monitoring Federated Networks
475,078
On July 1, 2001, the South Carolina Governor signed Proviso 72.95 of the Annual Appropriations Act, which required that a public library receiving state funds must equip computers with software to filter the Internet. On July 19, 2001, the South Carolina State Library Board of Trustees voted unanimously not to implement the provision of the Proviso. Within 24 hours of the decision by the State Library Board of Trustees, the leadership of the South Carolina State Legislature threatened to sue the State Library to force compliance with the law, and to have all the appointed State Library Trustees and the State Librarian removed from their appointed positions. The conflict between the South Carolina State Library and the South Carolina State Legislature over funding of local public libraries signals a change in the relationship between State Library agencies and local public libraries. To understand these changes we must look at the events in South Carolina from the perspective of two areas of political theory.
['Robert C. Ward']
State Library and Local Public Library Relationships: A Case Study of Legislative Conflict Within South Carolina from the Principal/Agent Perspective
594,248
Efficient data delivery is a great challenge in vehicular networks because of frequent network disruption, fast topological change, and mobility uncertainty. Knowledge of vehicular trajectories plays a key role in data delivery. Existing algorithms have largely made predictions on the trajectory with coarse-grained patterns such as the spatial distribution or/and the inter-meeting time distribution, which has led to poor data delivery performance. In this paper, we mine extensive trace datasets of vehicles in an urban environment through conditional entropy analysis and find that there exists strong spatiotemporal regularity. By extracting mobility patterns from historical traces, we develop accurate trajectory predictions using multiple-order Markov chains. Based on an analytical model, we theoretically derive the packet delivery probability with predicted trajectories. We then propose routing algorithms taking full advantage of predicted vehicle trajectories. Finally, we carry out extensive simulations based on real vehicle traces. The results demonstrate that our proposed routing algorithms achieve a significantly higher delivery ratio at lower cost when compared with existing algorithms.
['Yuchen Wu', 'Yanmin Zhu', 'Bo Li']
Trajectory improves data delivery in vehicular networks
287,168
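As a concrete illustration of the multiple-order Markov-chain prediction mentioned above, here is a hedged sketch: transition counts over length-k location histories are learned from traces, and the most frequent successor is predicted. The data structures and the fixed order k are illustrative assumptions; the paper additionally feeds such predictions into an analytical delivery-probability model, which is not reproduced here.

```python
from collections import defaultdict, Counter

def train_markov(traces, k=2):
    """Count successors of every length-k location history in the traces."""
    model = defaultdict(Counter)
    for trace in traces:
        for i in range(len(trace) - k):
            model[tuple(trace[i:i + k])][trace[i + k]] += 1
    return model

def predict_next(model, history, k=2):
    """Predict the most likely next cell given the last k visited cells."""
    state = tuple(history[-k:])
    if state not in model:
        return None  # a full system would back off to a lower-order model
    return model[state].most_common(1)[0][0]

traces = [["A", "B", "C", "D"], ["A", "B", "C", "E"], ["B", "C", "D", "A"]]
model = train_markov(traces)
print(predict_next(model, ["B", "C"]))  # 'D' (observed twice) beats 'E' (once)
```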
Facial expression recognition is necessary for designing any realistic human-machine interface. Previously published facial expression recognition systems achieve good recognition rates, but most of them perform well only when the user faces the camera and does not change his 3D head pose. We propose a new method for robust, view-independent recognition of facial expressions that does not make this assumption. The system uses a novel 3D model-based tracker to extract, simultaneously and robustly, the pose and shape of the face at every frame of a monocular video sequence. There are two main contributions. First, we demonstrate that the 3D information extracted through 3D tracking enables robust facial expression recognition in spite of large rotational and translational head movements (up to 90 degrees in head rotation). Second, we show that the Support Vector Machine is a suitable engine for robust classification. Recognition rates as high as 91 percent are achieved at classifying 5 distinct dynamic facial motions (neutral, opening/closing mouth, smile, raising eyebrow).
['Salih Burak Gokturk', 'Jean-Yves Bouguet', 'Carlo Tomasi', 'Bernd Girod']
Model-based face tracking for view-independent facial expression recognition
544,086
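The classification stage of the entry above reduces to feeding per-frame tracker outputs to an SVM. Below is a hedged sketch with synthetic stand-ins for the 3D pose/shape parameters; the feature dimensionality, class layout, and RBF kernel are assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
classes = ["neutral", "mouth", "smile", "eyebrow"]
# Synthetic 6-dimensional "tracker outputs", one cluster per facial motion
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 6)) for c in range(4)])
y = np.repeat(classes, 50)

clf = SVC(kernel="rbf").fit(X, y)       # SVM as the classification engine
sample = rng.normal(loc=2, scale=0.3, size=(1, 6))
print(clf.predict(sample))              # ['smile'] for a cluster-2 sample
```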
Bluetooth, ultra-wideband (UWB), ZigBee, and Wi-Fi are four popular wireless standards for short-range communications. Specifically, ZigBee is an emerging technology designed for low-cost, low-power-consumption, low-rate wireless personal area networks (LR-WPANs), with a focus on device-level communication for enabling wireless sensor networks. In this paper, after a brief overview of the four short-range wireless standards, the developed ZigBee platform, the ITRI ZBnode, is presented for wireless sensor networking applications. Moreover, design issues for ZigBee industrial applications and an experimental implementation are demonstrated via a multi-hop tree network.
['Jin-Shyan Lee', 'Chun-Chieh Chuang', 'Chung-Chou Shen']
Applications of Short-Range Wireless Technologies to Industrial Automation: A ZigBee Approach
256,496
This paper describes how model checking has been integrated into an industrial hardware design process. We present an application oriented specification language for assumption/commitment style properties and an abstraction algorithm that generates an intuitive and efficient representation of synchronous circuits. These approaches are embedded in our Circuit Verification Environment CVE. They are demonstrated on two industrial applications.
['Jörg Bormann', 'Jörg Lohse', 'Michael Payer', 'Gerd Venzl']
Model Checking in Industrial Hardware Design
45,241
Stressing the BER simulation of LDPC codes in the error floor region using GPU clusters
['Gabriel Falcão Paiva Fernandes', 'João Sousa Andrade', 'Vítor Manuel Mendes da Silva', 'Shinichi Yamagiwa', 'Leonel Sousa']
Stressing the BER simulation of LDPC codes in the error floor region using GPU clusters
630,462
The inverted generalized exponential distribution is defined as an alternative model for lifetime data. The existence of moments of this distribution is shown to hold under some restrictions. However, all the moments exist for the truncated inverted generalized exponential distribution and closed form expressions for them are derived in this paper. The distributional properties of this truncated distribution are studied. Maximum likelihood estimation method is discussed for the estimation of the parameters of the distribution both theoretically and empirically. In order to see the modeling performance of the distribution, two real data sets are analyzed.
['Ali I. Genç']
Truncated Inverted Generalized Exponential Distribution and Its Properties
938,051
The proposition expressed by a sentence is relative to a context. But what determines the content of the context? Many theorists would include among these determinants aspects of the speaker’s intention in speaking. My thesis is that, on the contrary, the determinants of the context never include the speaker’s intention. My argument for this thesis turns on a consideration of the role that the concept of proposition expressed in context is supposed to play in a theory of linguistic communication. To illustrate an alternative approach, I present an original theory of the reference of demonstratives according to which the referent of a demonstrative is the object that adequately and best satisfies certain accessibility criteria. Although I call my thesis zero tolerance for pragmatics, it is not an expression of intolerance for everything that might be called “pragmatics.”
['Christopher Gauker']
Zero tolerance for pragmatics
181,445
A Parameterized Formulation for the Maximum Number of Runs Problem.
['Andrew Baker', 'Antoine Deza', 'Frantisek Franek']
A Parameterized Formulation for the Maximum Number of Runs Problem.
731,284
We present SwiVRChair, a motorized swivel chair to nudge users' orientation in 360 degree storytelling scenarios. Since rotating a scene in virtual reality (VR) leads to simulator sickness, storytellers currently have no way of controlling users' attention. SwiVRChair allows creators of 360 degree VR movie content to be able to rotate or block users' movement to either show certain content or prevent users from seeing something. To enable this functionality, we modified a regular swivel chair using a 24V DC motor and an electromagnetic clutch. We developed two demo scenarios using both mechanisms (rotate and block) for the Samsung GearVR and conducted a user study (n=16) evaluating the presence, enjoyment and simulator sickness for participants using SwiVRChair compared to self control (Foot Control). Users rated the experience using SwiVRChair to be significantly more immersive and enjoyable whilst having a decrease in simulator sickness.
['Jan Gugenheimer', 'Dennis Wolf', 'Gabriel Haas', 'Sebastian Krebs', 'Enrico Rukzio']
SwiVRChair: A Motorized Swivel Chair to Nudge Users' Orientation for 360 Degree Storytelling in Virtual Reality
758,027
A simple online algorithm for partitioning a digital curve into digital straight-line segments of maximal length is given. The algorithm requires O(N) time and O(1) space and is therefore optimal. Efficient representations of the digital segments are obtained as byproducts. The algorithm also solves a number-theoretical problem concerning nonhomogeneous spectra of numbers.
['M. A. Lindenbaum', 'Alfred M. Bruckstein']
On recursive, O(N) partitioning of a digitized curve into digital straight segments
297,109
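A full arithmetic digital-straight-segment recognizer is beyond a short sketch, but the online, O(N)-time / O(1)-space flavor of the algorithm above can be illustrated with a weaker test: a digital straight segment's chain code uses at most two adjacent Freeman directions, so a greedy scan can close a run whenever that necessary condition fails. This checks only one property of digital straightness, not the complete recognition test from the paper.

```python
def partition_chain(codes):
    """Greedy online partition of a Freeman chain code into runs that use
    at most two adjacent directions (a necessary condition for a DSS)."""
    segments, start, dirs = [], 0, set()
    for i, c in enumerate(codes):
        dirs.add(c)
        ok = len(dirs) <= 2 and (len(dirs) < 2 or
                                 (max(dirs) - min(dirs)) % 8 in (1, 7))
        if not ok:
            segments.append((start, i))   # close the current maximal run
            start, dirs = i, {c}
    segments.append((start, len(codes)))
    return segments

print(partition_chain([0, 0, 1, 0, 1, 1, 3, 3, 2, 3]))  # [(0, 6), (6, 10)]
```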
Approximate Graph Isomorphism.
['Vikraman Arvind', 'Sebastian Kuhnert', 'Johannes Köbler', 'Yadu Vasudev']
Approximate Graph Isomorphism.
791,372
802.11 WLAN is a popular choice for wireless access on a range of ICT devices. A growing concern is the increased energy usage of ICT, for reasons of cost and environmental protection. The Power Save Mode (PSM) in 802.11 deactivates the wireless network interface during periods of inactivity. However, applications increasingly use push models, and so devices may be active much of the time. We have investigated the effectiveness of PSM, and considered its impact on performance when a device is active. Rather than concentrate on the NIC, we have taken a system-wide approach, to gauge the impact of the PSM from an application perspective. We experimentally evaluated performance at the packet level and system-wide power usage under various offered loads, controlled by packet size and data rate, on our 802.11n test bed. We have measured the system-wide power consumption corresponding to the individual traffic profiles and have derived application-specific effective energy-usage. We have found that in our scenarios, no significant benefit can be gained from using PSM.
['Markus Tauber', 'Saleem N. Bhatti']
The Effect of the 802.11 Power Save Mechanism (PSM) on Energy Efficiency and Performance during System Activity
916,331
AMISCO: The Austrian German Multi-Sensor Corpus.
['Hannes Pessentheiner', 'Thomas Pichler', 'Martin Hagmüller']
AMISCO: The Austrian German Multi-Sensor Corpus.
996,120
The aim of this study was to evaluate the feasibility and reproducibility of quantitative assessment of colonic morphology on CT colonography (CTC). CTC datasets from 60 patients with optimal colonic distension were assessed using prototype software. Metrics potentially associated with poor endoscopic performance were calculated for the total colon and each segment, including: length, volume, tortuosity (number of high-curvature points, <90°), and compactness (volume of the box containing the centerline divided by centerline length). Sigmoid apex height relative to the lumbosacral junction was also measured. Datasets were quantified twice each, and intra-reader reliability was evaluated using the concordance correlation coefficient and Bland–Altman plots. Complete quantitative datasets including the five proposed metrics were generated from 58 of 60 (97 %) CTC examinations. The sigmoid and transverse segments were the longest (55.9 and 51.4 cm), had the largest volumes (0.410 and 0.609 L), and were the most tortuous (3.39 and 2.75 high-curvature points) and least compact (3347 and 3595 mm²), noting high inter-patient variability for all metrics. Mean height of the sigmoid apex was 6.7 cm, also with high inter-patient variability (SD 6.8 cm). Intra-reader reliability was high for total and segmental lengths and sigmoid apex height (CCC = 0.9991) with an excellent repeatability coefficient (CR = 3.0–3.3). There was low percent variance of metrics dependent upon length (median 5 %). Detailed automated quantitative assessment of colonic morphology on routine CTC datasets is feasible and reproducible, requiring minimal reader interaction.
['C.N. Weber', 'Anna S. Lev-Toaff', 'Marc S. Levine', 'Sandra Sudarsky', 'Lutz Guendel', 'Bernhard Geiger', 'Hanna Zafar']
Detailed quantitative assessment of colonic morphology at CT colonography using novel software: a feasibility and reproducibility study
815,500
Software maintenance is a complex process that requires the understanding and comprehension of software project details. It involves understanding the evolution of the software project, hundreds of software components, and the relationships among software items in the form of inheritance, interface implementation, coupling and cohesion. Consequently, the aim of evolutionary visual software analytics is to support software project managers and developers during software maintenance. It takes into account the mining of evolutionary data, the subsequent analysis of the results produced by the mining process for producing evolution facts, the use of visualizations supported by interaction techniques, and the active participation of users. Hence, this paper proposes an evolutionary visual software analytics tool for the exploration and comparison of project structural, interface implementation and class hierarchy data, and the correlation of structural data with metrics, as well as socio-technical relationships. Its main contribution is a tool that automatically retrieves evolutionary software facts and represents them using a scalable visualization design.
['Antonio González-Torres', 'Roberto Therón', 'Francisco José García-Peñalvo', 'Michel Wermelinger', 'Yijun Yu']
Maleku: An evolutionary visual software analysis tool for providing insights into software evolution
426,000
Replicability of machine learning experiments measures how likely it is that the outcome of one experiment is repeated when performed with a different randomization of the data. In this paper, we present an efficient estimator of the replicability of an experiment. More precisely, the estimator is unbiased and has the lowest variance in the class of estimators formed by a linear combination of outcomes of experiments on a given data set. We gathered empirical data comparing experiments consisting of different sampling schemes and hypothesis tests. Both factors are shown to have an impact on the replicability of experiments. The data suggests that sign tests should not be used due to low replicability. Ranked sum tests show better performance, but the combination of a sorted-runs sampling scheme with a t-test gives the most desirable performance, judged on Type I and II error and replicability.
['Remco R. Bouckaert']
Estimating replicability of classifier learning experiments
446,594
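The estimator in the entry above is a specific minimum-variance linear combination; as a hedged illustration of the quantity being estimated, the sketch below takes the intuitive pairwise-agreement view of replicability: repeat an experiment under several randomizations and measure how often pairs of runs reach the same conclusion. This is not the paper's exact estimator.

```python
from itertools import combinations

def replicability(outcomes):
    """Fraction of pairs of randomized runs that agree on the outcome
    (e.g., 'classifier A beat classifier B' as a boolean per run)."""
    pairs = list(combinations(outcomes, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Five runs of the same experiment on different random splits of the data
print(replicability([True, True, True, False, True]))  # 0.6
```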
Currently, considerable research is being directed toward developing methodologies for controlling emotion and relieving stress. An applied branch of the basic field of psychophysiology, known as biofeedback, has been developed to fulfill clinical and non-clinical needs related to such control. Wearable medical devices have permitted unobtrusive monitoring of vital signs and emerging biofeedback services in a pervasive manner. With the global recession, unemployment has become one of the most serious social problems; therefore, combining biofeedback techniques with wearable technology for stress management of the unemployed population is undoubtedly meaningful. This article describes a wearable biofeedback system that combines an integrated multi-biosensor platform with a resonance frequency training (RFT) biofeedback strategy for stress management of the unemployed population. Compared to a commercial system, in situ experiments with multiple subjects indicated that our biofeedback system was discreet, easy to wear, and capable of offering ambulatory RFT biofeedback. Moreover, a comparative study of altered autonomic nervous system (ANS) modulation before and after three weeks of RFT biofeedback training was performed on an unemployed population with the aid of our wearable biofeedback system. The results suggested that RFT biofeedback combined with wearable technology significantly increased overall HRV, as indicated by decreased sympathetic activity, increased parasympathetic activity, and increased ANS synchronization. After the three-week RFT-based respiration training, the ANS's regulating function and the coping ability of the unemployed participants had doubled, and tended toward a dynamic balance.
['Wanqing Wu', 'Yeongjoon Gil', 'Jungtae Lee']
Combination of Wearable Multi-Biosensor Platform and Resonance Frequency Training for Stress Management of the Unemployed Population
306,630
Nonconvex nonsmooth regularization has advantages over convex regularization for restoring images with neat edges. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonconvex nonsmooth minimization. In this paper, we deal with nonconvex nonsmooth minimization methods for image restoration and reconstruction. Our theoretical results show that the solution of the nonconvex nonsmooth minimization problem is composed of constant regions surrounded by closed contours and neat edges. The main goal of this paper is to develop fast minimization algorithms to solve the nonconvex nonsmooth minimization problem. Our experimental results show the effectiveness and efficiency of the proposed algorithms.
['Mila Nikolova', 'Michael K. Ng', 'Chi-Pan Tam']
Fast Nonconvex Nonsmooth Minimization Methods for Image Restoration and Reconstruction
392,347
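To make the kind of objective in the entry above concrete, here is a hedged 1-D illustration (not the authors' fast algorithm, which targets 2-D images): coordinate descent on a data term plus a truncated-quadratic potential, a standard nonconvex edge-preserving regularizer that yields near-constant regions separated by sharp jumps. All parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def restore(f, beta=2.0, alpha=1.0, sweeps=30):
    """Coordinate descent on sum_i (u_i - f_i)^2 + beta * sum phi(u_i - u_j)
    with the truncated quadratic phi(t) = min(t^2, alpha)."""
    u = f.astype(float).copy()
    phi = lambda t: min(t * t, alpha)      # saturates across strong edges
    def local(i, v):
        cost = (v - f[i]) ** 2
        if i > 0:
            cost += beta * phi(v - u[i - 1])
        if i < len(u) - 1:
            cost += beta * phi(v - u[i + 1])
        return cost
    for _ in range(sweeps):
        for i in range(len(u)):
            u[i] = minimize_scalar(lambda v: local(i, v)).x
    return u

f = np.array([0.1, -0.2, 0.0, 3.9, 4.2, 4.0])  # noisy two-level step
print(np.round(restore(f), 2))  # flat plateaus with the jump preserved
```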
Fecal contamination in bodies of water is an issue that cities must combat regularly. Often, city governments must restrict access to water sources until the contaminants dissipate. Sourcing the species of the fecal matter helps curb the issue in the future, giving city governments the ability to mitigate the effects before they occur again. Microbial Source Tracking (MST) aims to determine the source host species of strains of microbiological lifeforms, and library-based MST is one method that can assist in sourcing fecal matter. Recently, the Biology Department and the Computer Science Department at California Polytechnic State University, San Luis Obispo (Cal Poly) teamed up to build a database called the Cal Poly Library of Pyroprints (CPLOP). Students collect fecal samples, culture and pyrosequence the E. coli in the samples, and insert this data, called pyroprints, into CPLOP. Using two intergenic transcribed spacer regions of DNA, Cal Poly biologists perform studies on strain differentiation. We propose using k-Nearest Neighbors, a straightforward machine learning technique, to classify the host species of a given pyroprint, construct four algorithms to resolve the regions, and investigate classification accuracy.
['Jeffrey D. McGovern', 'Alexander Dekhtyar', 'Christopher L. Kitts', 'Michael Black', 'Jennifer VanderKelen', 'Anya Goodman']
Leveraging the k-Nearest Neighbors classification algorithm for Microbial Source Tracking using a bacterial DNA fingerprint library
586,064
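A hedged sketch of the classification core described above: score a query pyroprint against the labeled library with Pearson correlation (a common pyroprint similarity measure) and take a majority vote over the k best matches. The four region-resolution algorithms from the study are not reproduced; the toy vectors and the choice of k are assumptions.

```python
from collections import Counter
from statistics import correlation  # Python 3.10+

def knn_host(query, library, k=3):
    """library: list of (pyroprint_vector, host_species) pairs."""
    ranked = sorted(library, key=lambda e: correlation(query, e[0]), reverse=True)
    votes = Counter(species for _, species in ranked[:k])
    return votes.most_common(1)[0][0]

library = [([1.0, 2.0, 3.0, 4.0], "human"), ([1.1, 2.1, 2.9, 4.2], "human"),
           ([4.0, 3.0, 2.0, 1.0], "gull"),  ([3.9, 3.1, 1.8, 1.2], "gull")]
print(knn_host([1.0, 2.2, 3.1, 3.9], library))  # 'human'
```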
This paper proposes a fast blotch detection algorithm based on a Markov random field (MRF) model with a lower computational load and a lower false alarm rate than existing MRF-based algorithms. The proposed algorithm saves computational time by restricting the attention of the detection process to candidate areas only. The experimental results show that our proposed method provides computational simplicity and efficient detection performance for blotches.
['Sang-Churl Nam', 'Masahide Abe', 'Masayuki Kawamata']
Fast Blotch Detection Algorithm for Degraded Film Sequences Based on MRF Models
87,555
Since its introduction, the selective-identity (sID) model for identity-based cryptosystems and its relationship with various other notions of security has been extensively studied. As a result, it is a general consensus that the sID model is much weaker than the full-identity (ID) model. In this paper, we study the sID model for the particular case of identity-based signatures (IBS). The main focus is on the problem of constructing an ID-secure IBS given an sID-secure IBS without using random oracles (the so-called standard model) and with reasonable security degradation. We accomplish this by devising a generic construction which uses as black boxes: i) a chameleon hash function and ii) a weakly-secure public-key signature. We argue that the resulting IBS is ID-secure but with a tightness gap of O(q_s), where q_s is the upper bound on the number of signature queries that the adversary is allowed to make. To the best of our knowledge, this is the first attempt at such a generic construction.
['Sanjit Chatterjee', 'Chethan Kamath']
From Selective-ID to Full-ID IBS without Random Oracles
561,668
In order to better protect and conserve biodiversity, ecologists use machine learning and statistics to understand how species respond to their environment and to predict how they will respond to future climate change, habitat loss and other threats. A fundamental modeling task is to estimate the probability that a given species is present in (or uses) a site, conditional on environmental variables such as precipitation and temperature. For a limited number of species, survey data consisting of both presence and absence records are available, and can be used to fit a variety of conventional classification and regression models. For most species, however, the available data consist only of occurrence records — locations where the species has been observed. In two closely-related but separate bodies of ecological literature, diverse special-purpose models have been developed that contrast occurrence data with a random sample of available environmental conditions. The most widespread statistical approaches involve either fitting an exponential model of species' conditional probability of presence, or fitting a naive logistic model in which the random sample of available conditions is treated as absence data; both approaches have well-known drawbacks, and do not necessarily produce valid probabilities. After summarizing existing methods, we overcome their drawbacks by introducing a new scaled binomial loss function for estimating an underlying logistic model of species presence/absence. Like the Expectation-Maximization approach of Ward et al. and the method of Steinberg and Cardell, our approach requires an estimate of population prevalence, Pr(y = 1), since prevalence is not identifiable from occurrence data alone. In contrast to the latter two methods, our loss function is straightforward to integrate into a variety of existing modeling frameworks such as generalized linear and additive models and boosted regression trees. We also demonstrate that approaches by Lele and Keim and by Lancaster and Imbens that surmount the identifiability issue by making parametric data assumptions do not typically produce valid probability estimates.
['Steven J. Phillips', 'Jane Elith']
Logistic methods for resource selection functions and presence-only species distribution models
779,982
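For contrast with the proposal above, the sketch below shows the "naive logistic" baseline the abstract critiques: contrast presence records with random background points and fit a logistic model. Its output ranks suitability but is not a calibrated probability of presence without an external prevalence estimate, which is the gap the scaled binomial loss addresses. All data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic (temperature, precipitation) at presence sites vs. random background
presence = rng.normal(loc=[20.0, 1200.0], scale=[2.0, 150.0], size=(200, 2))
background = rng.uniform(low=[0.0, 0.0], high=[35.0, 3000.0], size=(1000, 2))

X = np.vstack([presence, background])
y = np.concatenate([np.ones(200), np.zeros(1000)])
model = LogisticRegression(max_iter=1000).fit(X, y)

site = np.array([[21.0, 1150.0]])
# A relative suitability index, not Pr(presence): background is not absence
print(model.predict_proba(site)[0, 1])
```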
Keynote panel
['Doug Maughan', 'George Hull', 'Salvatore J. Stolfo', 'Robert Stratton']
Keynote panel
675,583
Several methods for doing intrusion detection have been developed over the years. However, most of these methods are based on crisp statistical techniques that measure deviation from a norm. Due to the wide range of attacks on computers, statistical methods are not always effective because they aggregate many system variables into a single mathematical measure. Instead, taxonomies of attack features based on the concepts of fuzzy logic can be utilized to classify attacks and build simple response rules based on local system variables. Taxonomies, however, require correct hierarchical construction from subtaxonomies of attack classifiers. An architecture that defines self-organizing taxonomies based on fuzzy logic is therefore developed for future investigation.
['Gregory Vert', 'René Doursat', 'Sara Nasser']
Towards Utilizing Fuzzy Self-Organizing Taxonomies to Identify Attacks on Computer Systems and Adaptively Respond
242,698
With ubiquitous computing becoming a more common form of technology in our everyday lives, our increasing dependency on these systems will require them to be always available, failure-free, fully operational and safe. They will also enable more activities to be carried out and provide new opportunities for solving problems. In view of the potential offered by ubiquitous computing and the challenges it raises, this work proposes a self-healing architecture to support ubiquitous applications aimed at healthcare. The goal is to continuously provide reliable services that meet their requirements despite changes in the environment. We outline the application scenario and the proposed architecture, and give a detailed account of its main modules, with particular emphasis on the fault detector.
['Anubis Graciela de Moraes Rossetto', 'Cláudio Fernando Resin Geyer', 'Carlos Oberdan Rolim', 'Valderi R. Q. Leithardt', 'Luciana Arantes']
An Architecture for Resilient Ubiquitous Systems
765,685
Motivation: An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting in mapping back the predicted output feature vectors to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
['Céline Brouard', 'Huibin Shen', 'Kai Dührkop', "Florence D'Alché-Buc", 'Sebastian Böcker', 'Juho Rousu']
Fast metabolite identification with Input Output Kernel Regression
821,287
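A hedged toy sketch of the two-phase IOKR scheme described above: phase one is kernel ridge regression from the input kernel into the output feature space; phase two scores candidate outputs by their output-kernel similarity to the prediction (the preimage step). Plain Gaussian kernels on toy vectors stand in for the paper's spectrum and molecule kernels.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def iokr_fit(X_train, lam=1e-2):
    """Phase 1: kernel ridge regression operator (Kx + lam*I)^-1."""
    Kx = gaussian_kernel(X_train, X_train)
    return np.linalg.inv(Kx + lam * np.eye(len(X_train)))

def iokr_predict(C, X_train, Y_train, x_test, candidates):
    """Phase 2: score each candidate by output-kernel similarity."""
    kx = gaussian_kernel(x_test[None, :], X_train)    # 1 x n input similarities
    Ky = gaussian_kernel(Y_train, candidates)         # n x m output similarities
    scores = kx @ C @ Ky                              # preimage scores, 1 x m
    return candidates[np.argmax(scores)]

X = np.array([[0.0], [1.0], [2.0]])
Y = np.array([[0.0], [2.0], [4.0]])                   # paired training outputs
C = iokr_fit(X)
cands = np.array([[0.0], [2.0], [4.0]])
print(iokr_predict(C, X, Y, np.array([1.1]), cands))  # ~[2.], the match for x=1
```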
Based on a transport partial differential equation representation of input delay, this paper deals with the classic problem of trajectory tracking for uncertain linear time-delay systems under partial state measurement, plant parameter and actuator delay adaptation. An implementable output feedback control strategy is proposed and a recent restriction on relative degree is removed. The Lyapunov-based analysis shows the local stability of the closed-loop error system.
['Yang Zhu', 'Miroslav Krstic', 'Hongye Su']
Prediction-based boundary control of linear delayed systems without restriction on relative degree
652,183
In order to improve the ability of users to share data within and across organizations, enterprises can move their data from user-managed devices to data centers accessible via cloud computing services. Such a transition in data storage, access, and management raises technical issues that are not addressed by current operating system, tagging, and configuration and control management technologies. In this paper we describe the notion of a cloud control environment in which cloud services automatically enforce multi-schema-based rules on the organization and manipulation of data objects.
['Doron Drusinsky', 'James Bret Michael', 'Thomas W. Otani', 'Man-Tak Shing']
Putting order into the cloud: Object-oriented UML-based enforcement for document and application organization
192,434
Analysis of consistency in large multi-section courses using exploration of linked visual data summaries
['Mehmet Adil Yalcin', 'Elizabeth E Gardner', 'Lindsey B Anderson', 'Rowie Kirby-Straker', 'Andrew D. Wolvin', 'Benjamin B. Bederson']
Analysis of consistency in large multi-section courses using exploration of linked visual data summaries
633,833
This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.
['Zhuoyuan Chen', 'Xun Sun', 'Liang Wang', 'Yinan Yu', 'Chang Huang']
A Deep Visual Correspondence Embedding Model for Stereo Matching Costs
585,462
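The learned embeddings above turn the stereo matching cost into a distance between feature vectors. The hedged sketch below builds a cost volume from such per-pixel embeddings (faked here with random arrays; the CNN and its training are omitted) and extracts a winner-take-all disparity map.

```python
import numpy as np

def cost_volume(feat_left, feat_right, max_disp):
    """feat_*: H x W x C embedding maps; entry (y, x, d) is the embedding
    distance between left pixel (y, x) and right pixel (y, x - d)."""
    H, W, _ = feat_left.shape
    costs = np.full((H, W, max_disp), np.inf)
    for d in range(max_disp):
        diff = feat_left[:, d:, :] - feat_right[:, :W - d, :]
        costs[:, d:, d] = np.linalg.norm(diff, axis=-1)
    return costs

rng = np.random.default_rng(1)
fl, fr = rng.normal(size=(4, 8, 16)), rng.normal(size=(4, 8, 16))
disparity = cost_volume(fl, fr, max_disp=4).argmin(axis=-1)  # winner-take-all
print(disparity.shape)  # (4, 8); a global method would replace the argmin
```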
This paper presents a new proposal for three-input logic function implementation in MOS current mode logic (MCML) style. The conventional realization of such logic employs three levels of stacked source-coupled transistor pairs, which places a restriction on the minimum power supply and results in increased static power. The new proposal presents a circuit element, named the quad-tail cell, which reduces the number of stacked source-coupled transistor levels by two. A three-input exclusive-OR (XOR) gate, a vital element in digital system design, is chosen to elaborate the approach. Its behavior is analyzed, and SPICE simulations using TSMC 180 nm CMOS technology parameters are included to support the theoretical concept. The performance of the proposed circuit is compared with its counterparts based on CMOS complementary pass-transistor logic, conventional MCML, cascading of existing two-input triple-tail XOR cells, and applying the triple-tail concept to the conventional MCML topology. The proposed XOR gate is found to perform best in terms of most of the performance parameters. The sensitivity of the proposed XOR gate towards process variation shows a variation of 1.54 between the best and worst cases. As an extension, a realization of a 4 : 1 multiplexer has also been included.
['Neeta Pandey', 'Kirti Gupta', 'Bharat Choudhary']
New Proposal for MCML Based Three-Input Logic Implementation
891,404
Collaborative solid modeling (CSM) systems combine cooperative networking with Computer Aided Design (CAD) techniques. A distributed solution for collaborative naming is explored through a case study.
['Bin Liao', 'Bo Meng']
A Distributed Solution for Automatic Name Correspondence in Replicated Solid Modeling Systems
387,350
This paper studies differences in estimating the length (and also the trajectory) of an unknown parametric curve γ: [0, 1] → R^n from an ordered collection of data points q_i = γ(t_i), with the t_i's either known or unknown. For uniform t_i's (known or unknown), piecewise Lagrange interpolation provides efficient length estimates, but in other cases it may fail. In this paper, we apply this classical algorithm first when the t_i's are sampled according to α-order and then when the sampling is ε-uniform. The latter was introduced in [20] for the case where the t_i's are unknown. In the present paper we establish new results for the case when the t_i's are known, for both types of samplings. For curves sampled ε-uniformly, a comparison is also made between the cases where the tabular parameters t_i are known and unknown. Numerical experiments are carried out to investigate the sharpness of our theoretical results. The work may be of interest in computer vision and graphics, approximation and complexity theory, digital and computational geometry, and digital image analysis.
['Ryszard Kozera', 'Lyle Noakes', 'Reinhard Klette']
External versus internal parameterizations for lengths of curves with nonuniform samplings
880,631
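As a worked illustration of the simplest case discussed above: with uniform samples, the degree-1 piecewise-Lagrange interpolant reduces to summing chord lengths, and the estimate converges to the true length as sampling densifies. Higher-degree interpolation and the ε-uniform analysis from the paper are not shown.

```python
import math

def chord_length(points):
    """Length of the piecewise-linear (degree-1 Lagrange) interpolant."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Quarter unit circle sampled uniformly in the parameter; true length = pi/2
n = 100
pts = [(math.cos(t), math.sin(t))
       for t in (math.pi / 2 * i / n for i in range(n + 1))]
print(chord_length(pts), math.pi / 2)  # 1.57076... vs 1.57079...
```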
Cell Outage Compensation (COC) is a self-healing functionality in the overall Self-Organizing Networks vision defined by 3GPP. The Operation, Administration and Maintenance system triggers COC to mitigate an outage problem, e.g. an out-of-service cell, by providing an adequate level of service to the affected users in the outage cell. We propose an outage compensation algorithm based on reconfiguring the surrounding cells through the proposed control parameters (the base station total transmission power and the mobility parameters) to provide adequate capacity for the affected users with minimal deterioration in network performance. The algorithm works in both frequency reuse one (FR-one) and soft frequency reuse (SFR) networks. The SFR configuration has an additional degree of freedom that can be used to increase the affected users' capacity. Simulation results show the effectiveness of the algorithm in rescuing the affected users in homogeneous LTE networks under a full traffic load scenario.
['Mai O. Said', 'Omar A. Nasr', 'Tamer A. ElBatt']
Cell outage compensation algorithm for frequency reuse one and ICIC LTE networks
887,829
Three-dimensional (3D) image processing and visualisation methods were applied in craniomaxillofacial surgery for preoperative surgical procedures and surgery planning. Each patient differs in the formation of the cranium and facial bones, hence requiring customised reconstruction to identify the defect area and to plan procedural steps. This paper explores the processing and visualisation of patients' data into 3D form, constructed from flat two-dimensional (2D) Computed Tomography (CT) images. Depth perception has been useful to identify certain Regions of Interest (ROI) elusive in 2D CT slices. We have noted that the 3D models have exemplified depth perception with the provision of additional cues of perspective, motion, texture and stereopsis. This has led to the improvement of treatment design and implementation for patients in this study.
['Tan Su Tung', 'Alwin Kumar Rathinam', 'Yuwaraj Kumar', 'Zainal Ariff Abdul Rahman']
Additional Cues Derived from Three Dimensional Image Processing to Aid Customised Reconstruction for Medical Applications
412,665
Dictation using speech recognition could potentially serve as an efficient input method for touchscreen devices. However, dictation systems today follow a mentally disruptive speech interaction model: users must first formulate utterances and then produce them, as they would with a voice recorder. Because utterances do not get transcribed until users have finished speaking, the entire output appears and users must break their train of thought to verify and correct it. In this paper, we introduce Voice Typing, a new speech interaction model where users' utterances are transcribed as they produce them to enable real-time error identification. For fast correction, users leverage a marking menu using touch gestures. Voice Typing aspires to create an experience akin to having a secretary type for you, while you monitor and correct the text. In a user study where participants composed emails using both Voice Typing and traditional dictation, they not only reported lower cognitive demand for Voice Typing but also exhibited 29% relative reduction of user corrections. Overall, they also preferred Voice Typing.
['Anuj Kumar', 'Tim Paek', 'Bongshin Lee']
Voice typing: a new speech interaction model for dictation on touchscreen devices
302,027
Watermarking cyberspace
['Hal Berghel']
Watermarking cyberspace
713,960
This paper presents a new anisotropic diffusion model based on a new diffusion coefficient for image denoising. In the proposed model, a new diffusion coefficient and a method for automatically setting the gradient threshold parameter are introduced into an anisotropic diffusion model, which weakens the staircasing effect and preserves fine edges in the processed image. Comparative experiments show that the new model achieves more satisfactory denoising results than existing models.
['Yang Xu', 'Jianjun Yuan']
Anisotropic diffusion equation with a new diffusion coefficient for image denoising
925,405
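For orientation, the sketch below implements classic Perona-Malik anisotropic diffusion with the standard exponential coefficient and a hand-set gradient threshold kappa; the entry above proposes a different coefficient and an automatic threshold rule, neither of which is reproduced here.

```python
import numpy as np

def anisotropic_diffusion(img, iterations=20, kappa=30.0, lam=0.2):
    u = img.astype(float).copy()
    for _ in range(iterations):
        # differences to the four neighbors, with zero flux at the borders
        dn = np.roll(u, -1, axis=0) - u; dn[-1, :] = 0
        ds = np.roll(u, 1, axis=0) - u;  ds[0, :] = 0
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, axis=1) - u;  dw[:, 0] = 0
        # coefficient: near 1 in flat areas, near 0 across strong edges
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.default_rng(0).normal(100.0, 10.0, size=(32, 32))
print(anisotropic_diffusion(noisy).std() < noisy.std())  # True: noise reduced
```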
Interval Testing Strategies Applied to COSY's Interval and Taylor Model Arithmetic.
['George F. Corliss', 'Jun Yu']
Interval Testing Strategies Applied to COSY's Interval and Taylor Model Arithmetic.
975,979
Automated journalism, automatically generating stories based on algorithms, has received considerable critical attention in diverse fields. However, automated journalism for the TV industry has not been addressed in much detail. This research aims to create a system that automatically generates news about TV ratings. The framework involves gathering data, identifying important events with predefined algorithms, generating a story in narrative format, and publishing the output. The algorithm that determines the structure of the stories is defined by analyzing existing news about TV ratings that reflects key variables. Although the output of the research is limited to one type of news template, further attempts could expand to various formats.
['Soomin Kim', 'JongHwan Oh', 'Joonhwan Lee']
Automated News Generation for TV Program Ratings
829,892
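A hedged miniature of the pipeline described above, from event detection to template filling: the threshold, field names, and wording are illustrative assumptions, not the authors' system.

```python
def generate_ratings_story(show, today, yesterday, threshold=1.0):
    """Publish a story only when the ratings change is newsworthy."""
    change = today - yesterday
    if abs(change) < threshold:
        return None  # below the newsworthiness threshold; publish nothing
    direction = "rose" if change > 0 else "fell"
    return (f"Ratings for '{show}' {direction} to {today:.1f}% on the latest "
            f"episode, a change of {change:+.1f} points from {yesterday:.1f}%.")

print(generate_ratings_story("Night Drama", 12.3, 10.1))
```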
In the database literature, different types of temporal functional dependencies (TFDs) have been proposed to constrain the temporal evolution of information. Unfortunately, the lack of a common notation makes it difficult to compare, to integrate, and to possibly extend the various proposals. In this paper, we outline a unifying algebraic framework for TFDs. We first introduce the proposed approach, then we use it to give a uniform account of existing TFDs, and finally we show that it allows one to easily express new meaningful TFDs.
['Carlo Combi', 'Angelo Montanari', 'Rosalba Rossato']
A uniform algebraic characterization of temporal functional dependencies
48,093
To ensure that they can participate in the Semantic Web, libraries need to prepare their legacy metadata for use as linked data. eXtensible Catalog (XC) software facilitates converting legacy library data into linked data using a platform that enables risk-free experimentation and that can be used to address problems with legacy metadata using batch services. The eXtensible Catalog also provides "lessons learned" regarding the conversion of legacy data to linked data by demonstrating what MARC metadata elements can be transformed to linked data, and helping to suggest priorities for the cleanup and enrichment of legacy data. Converting legacy metadata to linked data will require a team of experts, including MARC-based catalogers, specialists in other metadata schemas, software developers, and Semantic Web experts to design and test normalization/conversion algorithms, develop new schemas, and prepare individual records for automated conversion. Library software applications that do not depend upon linked data may currently have little incentive to enable its use. However, given recent advances in registering legacy library vocabularies, converting national library catalogs to linked data, and the availability of open source software such as XC to convert legacy data to linked data, libraries may soon find it difficult to justify continuing to create metadata that is not linked data compliant. The library community can now begin to propose smart practices for using linked data, and can encourage library system developers to implement linked data. XC is demonstrating that implementing linked data, and converting legacy library data to linked data, are indeed achievable.
['Jennifer Bowen']
Moving library metadata toward linked data: opportunities provided by the eXtensible Catalog
204,977
We analyze asynchronous operation of a hybrid OCDMA/WDMA system. OCDMA techniques can be implemented directly in the optical domain based on ultrashort optical pulse laser sources, and asynchronous operation of OCDMA systems is a very desirable characteristic of any multiplexing technique, especially at high data rates. For the performance analysis of the hybrid OCDMA/WDMA system, we calculate the bit error rate (BER) and capacity and compare synchronous and asynchronous transmission. The system performance improves drastically as the code length (M) increases, so more users can be served with asynchronous multiplexed communication. As the ratio (D) between the OCDMA user's bit duration and the WDMA user's bit duration becomes larger, the BER becomes smaller. If the OCDMA data rate is fixed, then as D becomes larger or, equivalently, the WDMA data rate becomes larger, the total throughput improves drastically. But if the WDMA data rate is fixed, then as D becomes larger or, equivalently, the OCDMA data rate becomes smaller, the total throughput becomes smaller. Therefore, there is a trade-off between D and the total throughput.
['Po-Hao Chang', 'Fu-Shan Ting', 'Jun-Ren Chen']
Asynchronous hybrid optical code division/wavelength division multiple access system
332,199