Dataset columns: abstract (string, 5–11.1k chars), authors (string, 9–1.96k chars), title (string, 4–367 chars), __index_level_0__ (int64, 0–1,000k).
The reliability of the linguistic values of the variables in a rule base is frequently important in the modeling of fuzzy systems. Designing an inference mechanism that takes into account the reliability degrees of the fuzzy values of the rule variables therefore acquires importance. For this purpose, Z-number-based fuzzy rules that include both the constraint and the reliability degree of information are constructed. Fuzzy rule interpolation is presented for designing the inference engine of the fuzzy rule-based system. The mathematical background of the fuzzy inference system based on the interpolative mechanism is developed. Based on this interpolative inference process, a Z-number-based fuzzy controller for the control of a dynamic plant has been designed. The transient response characteristic of the designed controller is compared with that of a conventional fuzzy controller. The comparative results demonstrate the suitability of the designed system for the control of dynamic plants.
['Rahib Hidayat Abiyev']
Z Number Based Fuzzy Inference System for Dynamic Plant Control
943,766
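The interpolative inference idea in the abstract above can be illustrated with a deliberately simplified sketch: inverse-distance weighting over sparse rule antecedents, a crisp stand-in in the spirit of classical KH-style rule interpolation, not the paper's Z-number mechanism. The function name and the (antecedent, consequent) rule format are illustrative assumptions.

```python
def interpolate(rules, x):
    """Inverse-distance-weighted interpolation over a sparse rule base.
    `rules` is a list of (antecedent_center, consequent_value) pairs;
    an observation x between rule antecedents gets a blended consequent."""
    num = den = 0.0
    for a, c in rules:
        d = abs(x - a)
        if d == 0.0:
            return c  # x matches a rule antecedent exactly; fire that rule
        w = 1.0 / d
        num += w * c
        den += w
    return num / den
```

With two rules this reduces to linear interpolation between their consequents, which is the basic behavior a rule-interpolation engine must reproduce on a sparse rule base.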
We propose an automatic parametric human body reconstruction algorithm which can efficiently construct a model using a single Kinect sensor. A user needs to stand still in front of the sensor for a couple of seconds to measure the range data. The user’s body shape and pose will then be automatically constructed in several seconds. Traditional methods optimize dense correspondences between range data and meshes. In contrast, our proposed scheme relies on sparse key points for the reconstruction. It employs regression to find the corresponding key points between the scanned range data and some annotated training data. We design two kinds of feature descriptors as well as corresponding regression stages to make the regression robust and accurate. Our scheme follows with dense refinement where a pre-factorization method is applied to improve the computational efficiency. Compared with other methods, our scheme achieves similar reconstruction accuracy but significantly reduces runtime.
['Ke-Li Cheng', 'Ruofeng Tong', 'Min Tang', 'Jingye Qian', 'Michel Adib Sarkis']
Parametric Human Body Reconstruction Based on Sparse Key Points
718,308
Highlights: A novel approach for clustering a data stream into linear model prototypes. Good performance, robust operation, low computational complexity and simple implementation. Validation of results by comparison to well-known algorithms. In this paper a new approach called evolving principal component clustering is applied to a data stream. Regions of the data described by linear models are identified. The method recursively estimates the data variance and the linear model parameters for each cluster of data. It enables good performance, robust operation, low computational complexity and simple implementation on embedded computers. The proposed approach is demonstrated on real and simulated examples from laser-range-finder data measurements. The performance, complexity and robustness are validated through a comparison with the popular split-and-merge algorithm.
['Gregor Klančar', 'Igor Škrjanc']
Evolving principal component clustering with a low run-time complexity for LRF data mapping
601,967
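The recursive variance estimation mentioned in the abstract above can be sketched with Welford's online algorithm; the abstract does not give the authors' exact recursion, so this is an assumed, standard low-complexity update of the kind suited to embedded use.

```python
class OnlineStats:
    """Recursively estimate the mean and variance of a data stream
    (Welford's online algorithm): one pass, O(1) memory per cluster."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # unbiased sample variance; 0.0 until two samples have arrived
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

Each cluster would own one such accumulator, updated as stream samples are assigned to it.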
A technique for reducing the offset in the frequency-dependent pinched hysteresis loop of memristor emulator circuits is introduced. The technique involves integrating two DC voltage sources into the emulator circuits, which not only keeps the circuit size reasonable but also leaves the original behavioral equation of the memristor emulator circuits essentially unmodified. Using this technique, we show how the offset, caused principally by the nonlinearities of the integrator circuit and of the multiplying core, is reduced. The technique is applicable to floating and grounded memristor emulator circuits whose design is based on analog multipliers.
['Carlos Sánchez-López', 'Miguel Ángel Carrasco-Aguilar', 'F. E. Morales-Lopez']
Offset reduction on memristor emulator circuits
693,423
Stop consonants are an important constituent of the speech signal. They contribute significantly to its intelligibility and subjective quality. However, because of their dynamic and unpredictable nature, they tend to be difficult to encode using conventional approaches such as linear predictive coding and transform coding. This paper presents a system to detect, segment, and modify stop consonants in a speech signal. This system is then used to assess the following hypothesis: Muting the burst phase of stop consonants has a negligible impact on the subjective quality of speech. The muting operation is implemented and its impact on subjective quality is evaluated on a database of speech signals. The results show that this apparently drastic alteration has in reality very little perceptual impact. The implications for speech coding are then discussed.
['Vincent Santini', 'Philippe Gournay', 'Roch Lefebvre']
A study of the perceptual relevance of the burst phase of stop consonants with implications in speech coding
979,788
The performance of image retrieval with SVM active learning is known to be poor when started with only a few labeled images. In this paper, the problem is solved by incorporating unlabelled images into the bootstrapping of the learning process: the initial SVM classifier is trained with the few labeled images together with unlabelled images randomly selected from the image database. Both theoretical analysis and experimental results show that by incorporating unlabelled images in the bootstrapping, the efficiency of SVM active learning can be improved, thereby improving overall retrieval performance.
['Lei Wang', 'Kap Luk Chan', 'Zhihua Zhang']
Bootstrapping SVM active learning by incorporating unlabelled images for image retrieval
233,226
This paper addresses issues in fall detection in videos. We propose a novel method to detect human falls from arbitrary view angles by analyzing the dynamic shape and motion of image regions of human bodies on Riemannian manifolds. The proposed method exploits time-dependent dynamic features on smooth manifolds, based on the observation that human falls often involve drastic shape changes and abrupt motions compared with other activities. The main novelties of this paper are: (a) representing videos of human activities by dynamic shape points and motion points moving on two separate unit n-spheres, i.e., two simple Riemannian manifolds; (b) characterizing the dynamic shape and motion of each video activity by computing velocity statistics on the two manifolds, based on geodesic distances; (c) combining the statistical features of dynamic shape and motion learned from their corresponding manifolds via mutual information. Experiments were conducted on three video datasets, containing 400 videos of 5 activities, 100 videos of 4 activities, and 768 videos of 3 activities, respectively, where videos were captured from cameras at different view angles. Our test results show a high detection rate (average 99.38%) and a low false-alarm rate (average 1.84%). Comparisons with eight state-of-the-art methods provide further support for the proposed method.
['Yixiao Yun', 'Irene Y.H. Gu']
Human fall detection in videos by fusing statistical features of shape and motion dynamics on Riemannian manifolds
815,548
Motivation: The recent advances in genome sequencing have revealed an abundance of non-synonymous polymorphisms among human individuals; consequently, it is of immense interest and importance to predict whether such substitutions are functionally neutral or have deleterious effects. The accuracy of such prediction algorithms depends on the quality of the multiple sequence alignment, which is used to infer how an amino acid substitution is tolerated at a given position. Due to the scarcity of orthologous protein sequences in the past, the existing prediction algorithms all include sequences of protein paralogs in the alignment, which can dilute the conservation signal and affect prediction accuracy. However, we believe that, with the sequencing of a large number of mammalian genomes, it is now feasible to include only protein orthologs in the alignment and improve the prediction performance. Results: We have developed a novel prediction algorithm, named SNPdryad, which includes only protein orthologs in building a multiple sequence alignment. Among other innovations, SNPdryad uses different conservation scoring schemes and uses Random Forest as a classifier. We have tested SNPdryad on several datasets and found that it consistently outperformed other methods on several performance metrics, which we attribute to the exclusion of paralogous sequences. We have run SNPdryad on the complete human proteome, generating prediction scores for all possible amino acid substitutions. Availability: The algorithm and the prediction results can be accessed from the website: http://snps.ccbr.utoronto.ca:8080/SNPdryad/.
['Ka-Chun Wong', 'Zhaolei Zhang']
SNPdryad: predicting deleterious non-synonymous human SNPs using only orthologous protein sequences.
137,622
WWW Interface Design for Computerized Service Supporting System
['Thu-Hua Liu', 'Sheue-Ling Hwang', 'Jia-Ching Wang']
WWW Interface Design for Computerized Service Supporting System
430,556
A comparative analysis of publication activity and citation impact based on the core literature in Bioinformatics
['Wolfgang Glänzel', 'Frizo Janssens', 'Bart Thijs']
A comparative analysis of publication activity and citation impact based on the core literature in Bioinformatics
896,062
Sous-Groupes Definissables d'Un Groupe Stable
['Bruno Poizat']
Sous-Groupes Definissables d'Un Groupe Stable
280,388
Implementing image processing applications in embedded systems is a difficult challenge due to drastic constraints in terms of cost, energy consumption and real-time execution. Reconfigurable architectures are good candidates to take up this challenge, especially when the architecture is able to support different pixel word-lengths through Sub-Word Parallelism (SWP) capabilities. Exploiting the diversity of supported data-types requires automation tools able to optimize the data word-length under an accuracy constraint. In this paper, a new approach for word-length optimization in the case of SWP operations is proposed. Compared to existing approaches, the optimization time is significantly reduced without sacrificing the quality of the optimized solution. The results show the ability of our approach to exploit the SWP capabilities of multimedia processors.
['Daniel Menard', 'Hai Nam Nguyen', 'François Charot', 'Stéphane Guyetant', 'Jérémie Guillot', 'Erwan Raffin', 'Emmanuel Casseau']
Exploiting reconfigurable SWP operators for multimedia applications
305,252
The main aim of this work is to formalize the mechanism of resolving conflicts between statutory legal rules with a view to implementing it in a legal advisory system. The model is built on the basis of the ASPIC+ argument modeling framework. The paper presents a discussion and a formal model of the mechanism of conflict recognition, as well as models of three different mechanisms of conflict resolution and a discussion of the relations between them.
['Tomasz Zurek']
Modeling conflicts between legal rules
898,325
In packet-switched networks, queueing of packets at the switches can result when multiple connections share the same physical link. To accommodate a large number of connections, a switch can employ link-scheduling algorithms to prioritize the transmission of the queued packets. Due to the high-speed links and small packet sizes, a hardware solution is needed for the priority queue in order to make the link schedulers effective. But for good performance, the switch should also support a large number of priority levels (P) and be able to buffer a large number of packets (N). So a hardware priority queue design must be both fast and scalable (with respect to N and P) in order to be implemented effectively. In this paper we first compare four existing hardware priority queue architectures, and identify scalability limitations on implementing these existing architectures for large N and P. Based on our findings, we propose two new priority queue architectures, and evaluate them using simulation results from Verilog HDL and Epoch implementations.
['Sung-Whan Moon', 'Kang G. Shin', 'Jennifer Rexford']
Scalable hardware priority queue architectures for high-speed packet switches
535,601
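As a software illustration of the scheduling behavior such a hardware priority queue must provide (lower level = higher priority, FIFO order among packets of equal priority), here is a minimal heap-based sketch; the class and method names are illustrative, and a heap's O(log N) operations are precisely the software cost that the paper's hardware architectures are designed to avoid.

```python
import heapq
import itertools

class PriorityQueue:
    """Software model of a P-level packet priority queue: dequeue always
    returns the highest-priority (lowest-numbered) packet; ties break FIFO."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # monotonic counter enforces FIFO ties

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]
```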
The purpose of our research is to realize teleoperation assistance that improves performance while preserving the operator's initiative. One advantage of a teleoperation system is that it draws on human abilities such as global recognition, planning and prediction. However, the operator is required to acquire skills for suitable operation, so assistance from the system is important. Although the effects of operation assistance are generally anticipated by system designers, such assistance is not always effective, because it depends on variable operator characteristics and environmental conditions. Solving this problem requires adaptive assistance; however, autonomous robot motion may prevent the human operator from keeping the initiative, so that human abilities cannot be brought out. Therefore, assistance by modification of the robot dynamics parameters has been proposed. Because human operators instinctively try to adapt to and learn the robot dynamics, such assistance often causes discomfort and reduced maneuverability. Consequently, human awareness characteristics are considered to address this problem, and an assistance technique that operates below the threshold of human perception, based on a cognitive-science approach, is addressed. In this paper, a new assist technique that avoids operator discomfort is proposed.
['Hiroshi Igarashi']
Human adaptive assist planning without operator awareness
4,802
Background: Given three signed permutations, an inversion median is a fourth permutation that minimizes the sum of the pairwise inversion distances between it and the three others. This problem is NP-hard as well as hard to approximate. Yet median-based approaches to phylogenetic reconstruction have been shown to be among the most accurate, especially in the presence of long branches. Most existing approaches have used heuristics that attempt to find a longest sequence of inversions from one of the three permutations that, at each step in the sequence, moves closer to the other two permutations; yet very little is known about the quality of solutions returned by such approaches. Results: Recently, Arndt and Tang took a step towards finding longer such sequences by using sets of commuting inversions. In this paper, we formalize the problem of finding such sequences of inversions with what we call signatures and provide algorithms to find maximum cardinality sets of commuting and noninterfering inversions. Conclusion: Our results offer a framework in which to study the inversion median problem, faster algorithms to obtain good medians, and an approach to study characteristic events along an evolutionary path.
['Krister M. Swenson', 'Yokuki To', 'Jijun Tang', 'Bernard M. E. Moret']
Maximum independent sets of commuting and noninterfering inversions
190,318
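The elementary operation underlying the inversion distances discussed above is the signed inversion itself: a chosen segment of the permutation is reversed and every element in it changes sign. A minimal sketch (0-based inclusive positions; the function name is illustrative):

```python
def apply_inversion(perm, i, j):
    """Apply the inversion over positions i..j (inclusive) of a signed
    permutation: the segment is reversed and each element's sign is flipped."""
    return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]
```

Two inversions commute when applying them in either order yields the same permutation, which is the property the commuting-sets machinery in the paper builds on.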
Self-reconfigurable robots are composed of many individual modules that can autonomously move to transform the shape and structure of the robot. The task of self-reconfiguration, transforming a set of modules from one arrangement to another specified arrangement, is a key problem for these robots and has been heavily studied. However, consideration of this problem has typically been limited to kinematics and so in this work we introduce analysis of dynamics for the problem. We characterize optimal reconfiguration movements in terms of basic laws of physics relating force, mass, acceleration, distance traveled, and movement time. A key property resulting from this is that through the simultaneous application of constant-bounded forces by a system of modules, certain modules in the system can achieve velocities exceeding any constant bounds. This delays some modules in order to accelerate others. To exhibit the significance of simultaneously considering both kinematic and dynamics bounds, we consider the following "x-axis to y-axis" reconfiguration problem. Given a horizontal row of n modules, reconfigure that collection into a vertical column of n modules. The goal is to determine the sequence of movements of the modules that minimizes the movement time needed to achieve the desired reconfiguration of the modules. In this work we prove tight upper and lower bounds on the movement time for the above reconfiguration problem. Prior work on reconfiguration problems which focused only on kinematic constraints kept a constant velocity bound on individual modules and so required time linear in n to complete problems of this type.
['John H. Reif', 'Sam Slee']
ASYMPTOTICALLY OPTIMAL KINODYNAMIC MOTION PLANNING FOR A CLASS OF MODULAR SELF-RECONFIGURABLE ROBOTS
513,134
In this paper we introduce a new mesh-based method for $N$-body calculations. The method is founded on a particle-particle--particle-mesh (P$^3$M) approach, which decomposes a potential into rapidly decaying short-range interactions and smooth, mesh-resolvable long-range interactions. However, in contrast to the traditional approach of using Gaussian screen functions to accomplish this decomposition, our method employs specially designed polynomial bases to construct the screened potentials. Because of this form of the screen, the long-range component of the potential is then solved accurately with a finite element method, leading ultimately to a sparse matrix problem that is solved efficiently with standard multigrid methods, though the short-range calculation is now more involved than P$^3$M particle-mesh-Ewald (PME) methods. We introduce the method, analyze its key properties, and demonstrate the accuracy of the algorithm.
['Natalie N. Beams', 'Luke N. Olson', 'Jonathan B. Freund']
A Finite Element Based P3M Method for N-body Problems
561,372
Let v be a vertex with n edges incident to it, such that the n edges partition an infinitesimally small circle C around v into convex pieces. The minimum local convex partition (MLCP) problem asks for two or three out of the n edges that still partition C into convex pieces and that are of minimum total length. We present an optimal algorithm solving the problem in linear time if the edges incident to v are sorted clockwise by angle. For unsorted edges our algorithm runs in O(n log n) time. For unsorted edges we also give a linear time approximation algorithm and a lower time bound.
['Magdalene Grantson', 'Christos Levcopoulos']
Tight time bounds for the minimum local convex partition problem
829,494
Sign Language in the Interface.
['Matt Huenerfauth', 'Vicki L. Hanson']
Sign Language in the Interface.
788,431
Accurate and effective cervical smear image segmentation is required for automated cervical cell analysis systems. Thus, we propose a novel superpixel-based Markov random field (MRF) segmentation framework to acquire the nucleus, cytoplasm and image background of cell images. We seek to classify color non-overlapping superpixel patches on an image for segmentation. This model describes the whole image as an undirected probabilistic graphical model and was developed with an automatic label-map mechanism for determining nuclear, cytoplasmic and background regions. A gap-search algorithm was designed to enhance the model's efficiency. The results show that the algorithms of our framework provide better accuracy on both real-world and the public Herlev datasets. Furthermore, the proposed gap-search algorithm is much faster than pixel-based and superpixel-based algorithms. Highlights: We propose a novel gap-search Markov random field (MRF) for accurate cervical smear image segmentation. The method acquires three regions (nuclei, cytoplasm and background) automatically via a label-map mechanism. The gap-search algorithm is faster than the three other algorithms in the experiments. A copy of the source code will be released as an open-source project for continuing studies.
['Lili Zhao', 'Kuan Li', 'Mao Wang', 'Jianping Yin', 'En Zhu', 'Chengkun Wu', 'Siqi Wang', 'Chengzhang Zhu']
Automatic cytoplasm and nuclei segmentation for color cervical smear image using an efficient gap-search MRF
629,649
Information on the web, and the use of web services, is increasing enormously, even on an hourly basis. On the semantic web, information is represented in ontologies. For systems and services to share information, a form of mediation (i.e., mappings) is required. Mappings are established between the ontologies (information sources) of web services to resolve terminological and conceptual incompatibilities. However, as new knowledge is discovered in a field and accommodated in domain ontologies, an ontology evolves from one consistent state to another. This consequently makes existing mappings between ontologies unreliable and stale. There is therefore a need for mapping evolution to eliminate discrepancies from the existing mappings. To re-establish the mappings between dynamic ontologies, existing systems restart the complete mapping process, which is time consuming. The approach proposed in this paper provides the benefits of mapping reconciliation between the updated ontologies and takes less time than existing systems. It considers only the changed resources and eliminates staleness from the mappings, using the change history of the ontology to drastically reduce the time required to reconcile mappings among ontologies. Experimental results with four different mapping systems on standard data sets validate our claims.
['Asad Masood Khattak', 'Zeeshan Pervez', 'Khalid Latif', 'A. M. Jehad Sarkar', 'Sungyoung Lee', 'Young-Koo Lee']
Reconciliation of Ontology Mappings to Support Robust Service Interoperability
401,076
Word embeddings have recently seen a strong increase in interest as a result of strong performance gains on a variety of tasks. However, most of this research also underlined the importance of benchmark datasets, and the difficulty of constructing these for a variety of language-specific tasks. Still, many of the datasets used in these tasks could prove to be fruitful linguistic resources, allowing for unique observations into language use and variability. In this paper we demonstrate the performance of multiple types of embeddings, created with both count and prediction-based architectures on a variety of corpora, in two language-specific tasks: relation evaluation, and dialect identification. For the latter, we compare unsupervised methods with a traditional, hand-crafted dictionary. With this research, we provide the embeddings themselves, the relation evaluation task benchmark for use in further research, and demonstrate how the benchmarked embeddings prove a useful unsupervised linguistic resource, effectively used in a downstream task.
['Stéphan Tulkens', 'Chris Emmery', 'Walter Daelemans']
Evaluating Unsupervised Dutch Word Embeddings as a Linguistic Resource
822,762
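Relation-evaluation tasks of the kind described above are typically scored with cosine similarity over the embedding vectors. A minimal pure-Python sketch of that scoring primitive (illustrative only, not the authors' evaluation code; names are assumptions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar(target, vocab):
    """Return the word in `vocab` (word -> vector) closest to `target`."""
    return max(vocab, key=lambda w: cosine(vocab[w], target))
```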
Peter Neil Hamkin is a peaceful bartender in Litherland, a small village north of Liverpool, in the UK. On February 13, 2003, Scotland Yard arrested him under an indictment for murdering a 24-year-old woman named Annalisa Vincentini during a robbery attempt in Castiglioncello, a small village on the Tuscan coast not far from Florence, Italy, on the previous August 19. His DNA nailed him: the police found it in the abundant trail of blood left at the crime scene. Annalisa's boyfriend had reacted to the robbery attempt and hit the mugger's face with a stone, causing him to bleed freely.
['Alessandro Ferrero', 'Veronica Scotti']
The story of the right measurement that caused injustice and the wrong measurement that did justice; How to explain the importance of metrology to lawyers and judges [Legal Metrology]
547,247
Automatic Aortic Root Segmentation with Shape Constraints and Mesh Regularisation.
['Robert Ieuan Palmer', 'Xianghua Xie', 'Gary K. L. Tam']
Automatic Aortic Root Segmentation with Shape Constraints and Mesh Regularisation.
700,211
The technology acceptance model (TAM) predicts whether users will ultimately use software applications based upon causal relationships among belief and attitudinal constructs that influence usage behavior. Electronic mail, or email, is a collaborative technology available to virtually all members of an organization, and typically, there are alternative email applications available for use. This study applies TAM to assess the user acceptance and voluntary usage of a particular email application, cc:mail, in two different organizations. The results largely validate TAM, although the findings suggest that certain external variables, namely length of time since first use, and level of education, directly affect email usage behavior apart from their influence as mediated through the perceived usefulness (PU) and perceived ease of use (PEOU) constructs.
['Geoffrey S. Hubona', 'Andrew Burton-Jones']
Modeling the user acceptance of e-mail
303,748
Network safeguarding practices involve decisions in at least three areas: identification of well-defined security policies, selection of cost-effective defense strategies and implementation of real-time defense tactics. Although choices made in each of these three affect the others, many existing decision models handle these three decision areas in isolation. There is no comprehensive tool that can integrate them to provide a single efficient model for safeguarding a network. In addition, there is no clear way to determine which particular combinations of defense decisions result in cost-effective solutions. To address these problems, this paper introduces a layered decision model (LDM) for use in deciding how to address defense decisions based on cost-effectiveness. To illustrate the technique, the LDM model is applied to the design of network defense for a sample e-commercial business. While there is a proliferation of tools for decision support, the connectivity between decisions about security policies, defense strategies and defense tactics is weak and there is no guarantee that these decisions are consistent. It is also hard to tell how a cost decision of one kind (e.g., about goals) affects cost outcomes at another level (e.g., regarding tactics). We present a layered decision model to support consistent, connected decisions at three layers: security policies, defense strategies, and defense tactics, and to balance costs at all layers. The layered decision model (LDM) integrates decisions about security policies, defense strategies and defense tactics in a uniform framework. In addition, this model provides an analytical framework that allows traceability of costs between layers. This framework combines risk assessment, business cost modeling, and cost-benefit analysis which uses return on investment (ROI) analysis. The work in this paper is preliminary, but should provide a good foundation for future work in this area.
['Huaqiang Wei', 'Deborah A. Frincke', 'Jim Alves-Foss', 'Terence Soule', 'Hugh Pforsich']
A layered decision model for cost-effective network defense
18,748
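The cost-benefit layer described above combines risk assessment with ROI analysis. A minimal sketch using the textbook annualized-loss-expectancy and ROI formulas (these are the standard definitions, assumed here; the abstract does not spell out the paper's exact cost model):

```python
def annualized_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """ALE = SLE * ARO: expected yearly loss from a given threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

def defense_roi(ale_before, ale_after, annual_defense_cost):
    """ROI of a defense measure: (loss avoided - cost) / cost.
    A positive value means the defense pays for itself."""
    avoided = ale_before - ale_after
    return (avoided - annual_defense_cost) / annual_defense_cost
```

For example, a defense costing 2,000 per year that cuts an expected annual loss from 5,000 to 1,000 avoids 4,000 of loss, for an ROI of 1.0.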
The amount of internet video has been growing rapidly in recent years. Efficient video indexing and retrieval is therefore becoming an important research and system-design issue. Reliably extracting metadata from video to serve as indexes is one major step toward efficient video management. There are numerous video types, and anyone can define new types of their own. We believe an open video analysis framework should help when one needs to automatically process various types of videos. Moreover, the nature of videos can differ so much that we may end up needing a dedicated analysis module for each video type; it is infeasible to design a single system that automatically processes every type of video. In this paper, we propose a scene-based video analytics studio that comes with (1) an open video analysis framework in which video analysis modules are developed and deployed as plug-ins, (2) an authoring tool with which videos can be manually tagged, and (3) an HTML5-based video player that plays back videos using the metadata we generate. In addition, it provides a runtime environment with standard libraries and proprietary rule-based automaton modules to facilitate plug-in development. Finally, we show its application to clickable (shoppable) videos, which we plan to apply to our e-learning projects.
['Chia-Wei Liao', 'Kai-Hsuan Chan', 'Wen-Tsung Chang', 'Sheng-Tsung Tu']
Scene-Based Video Analytics Studio
284,842
An improvement of the information transfer rate of brain-computer communication is necessary for the creation of more powerful and convenient applications. This paper presents an asynchronously controlled three-class brain-computer interface-based spelling device [virtual keyboard (VK)], operated by spontaneous electroencephalogram and modulated by motor imagery. Of the first results of three able-bodied subjects operating the VK, two were successful, showing an improvement of the spelling rate σ, the number of correctly spelled letters/min, up to σ = 3.38 (average σ = 1.99).
['Reinhold Scherer', 'Gernot R. Müller', 'Christa Neuper', 'Bernhard Graimann', 'Gert Pfurtscheller']
An asynchronously controlled EEG-based virtual keyboard: improvement of the spelling rate
346,673
A Bounded Rationality Account of Wishful Thinking.
['Rebecca Neumann', 'Anna N. Rafferty', 'Thomas L. Griffiths']
A Bounded Rationality Account of Wishful Thinking.
773,106
Study on Laos-China Cross-Border Regional Economic Cooperation Based on Symbiosis Theory: A Case of Construction of Laos Savan Water Economic Zone
['Sisavath Thiravong', 'Jingrong Xu', 'Qin Jing']
Study on Laos-China Cross-Border Regional Economic Cooperation Based on Symbiosis Theory: A Case of Construction of Laos Savan Water Economic Zone
941,907
Social networks and interactions in social media involve both positive and negative relationships. Signed graphs capture both types of relationships: positive edges correspond to pairs of "friends", and negative edges to pairs of "foes". The edge sign prediction problem is an important graph mining task for which many heuristics have recently been proposed [Leskovec et al. 2010a, 2010b]. In this paper we model the edge sign prediction problem as a noisy correlation clustering problem with two clusters. We are allowed to query each pair of nodes whether they belong to the same cluster or not, but the answer to the query is corrupted with some probability $0 < q < \frac{1}{2}$. We provide an algorithm that recovers the clustering with high probability in the presence of noise with $O(n \log{n})$ queries if the gap $\frac{1}{2} - q$ is constant. Finally, we provide a novel generalization to $k \geq 3$ clusters and prove that our techniques can recover the clustering if the gap is constant in this generalized setting.
['Michael Mitzenmacher', 'Charalampos E. Tsourakakis']
Predicting Signed Edges with $O(n \log{n})$ Queries
868,700
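The noisy query model above can be illustrated with a simplified majority-vote denoiser: compare every node against a fixed pivot k times and take the majority answer. Note this naive scheme spends O(nk) queries, not the O(n log n) of the paper's adaptive algorithm; the oracle construction and all names here are illustrative.

```python
import random

def same_cluster_oracle(truth, q):
    """Noisy oracle over a ground-truth labeling: answers whether u and v
    share a cluster, with the answer flipped with probability q."""
    def query(u, v):
        ans = truth[u] == truth[v]
        return ans if random.random() > q else not ans
    return query

def recover_two_clusters(n, query, k):
    """Label each node by majority vote over k repeated noisy queries
    against pivot node 0; node 0 itself gets label 0."""
    labels = [0] * n
    for v in range(1, n):
        votes = sum(query(0, v) for _ in range(k))
        labels[v] = 0 if votes * 2 > k else 1
    return labels
```

With a constant gap 1/2 - q, k = O(log n) repetitions already make every vote correct with high probability, which is the intuition behind the query bound.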
Computational Verifiable Secret Sharing Revisited.
['Michael Backes', 'Aniket Kate', 'Arpita Patra']
Computational Verifiable Secret Sharing Revisited.
809,619
With the recent explosion in usage of the World Wide Web, the problem of caching Web objects has gained considerable importance. Caching on the Web differs from traditional caching in several ways. The nonhomogeneity of the object sizes is probably the most important such difference. In this paper, we give an overview of caching policies designed specifically for Web objects and provide a new algorithm of our own. This new algorithm can be regarded as a generalization of the standard LRU algorithm. We examine the performance of this and other Web caching algorithms via event- and trace-driven simulation.
['Charu C. Aggarwal', 'Joel L. Wolf', 'Philip S. Yu']
Caching on the World Wide Web
204,928
Firms adopt the Internet for different purposes, ranging from simple Internet presence to using the Internet to transform business operations. This paper examines the contingency factors that affect levels of Internet adoption and their impact on competitive advantage. A questionnaire was used to gather data for this study; 159 usable responses were obtained from a sample of 566 firms in Singapore. Results indicate that most firms are still exploring the business use of the Internet. A proactive business technology strategy was found to be positively associated with the level of Internet adoption. Technology compatibility and top management support were found to have no significant relationships with the level of Internet adoption. Further, the level of Internet adoption had a significant positive relationship with competitive advantage. These results provide a better understanding of the contingency factors affecting the level of Internet adoption, as well as providing some evidence of the positive impact of Internet adoption on competitive advantage.
['Thompson S. H. Teo', 'Yujun Pian']
A contingency perspective on internet adoption and competitive advantage
313,659
Spectral and Colorimetric Constancy and Accuracy of Multispectral Stereo Systems.
['Julie Klein', 'Til Aach']
Spectral and Colorimetric Constancy and Accuracy of Multispectral Stereo Systems.
766,839
We used different tools from experimental psychology to obtain a broad picture of the possible neural underpinnings of temporal processing in the range of milliseconds. The temporal variability of human subjects was measured in timing tasks that differed in terms of: explicit-implicit timing, perception-production, single-multiple intervals, and auditory-visual interval markers. The results showed a dissociation between implicit and explicit timing. Inside explicit timing, we found a complex interaction in the temporal variability between tasks. These findings support neither a unique nor a ubiquitous mechanism for explicit timing, but rather the notion of a partially distributed timing mechanism, integrated by main core structures such as the cortico-thalamic-basal ganglia circuit, and areas that are selectively engaged depending on the specific behavioral requirements of a task. A learning-generalization study of motor timing also supports this hypothesis and suggests that neurons of the timing circuit should be tuned to interval durations.
['Hugo Merchant', 'Ramón Bartolo', 'Juan Carlos Méndez', 'Oswaldo Pérez', 'Wilbert Zarco', 'Germán Mendoza']
What can be inferred from multiple-task psychophysical studies about the mechanisms for temporal processing?
562,441
A deeper understanding of the interactions between people and artefacts that characterise creative activities could be valuable in designing the next generation of creativity support. This paper presents three perspectives on creative interaction that have emerged from four years of empirical and design research. We argue that creative interaction can be usefully viewed in terms of Productive Interaction - focused engagement on the development of a creative outcome, Structural Interaction - the development of the structures in which production occurs, and Longitudinal Interaction - the long-term development of resources and relationships that increase creative potential. An analysis of each perspective is described, along with the development of an exemplary prototype. The use of the perspectives as a basis for design is considered, including the influence of contextual factors on instances of creative activities.
['Tim Coughlan', 'Peter Johnson']
Understanding productive, structural and longitudinal interactions in the design of tools for creative activities
134,367
In this paper a fault tolerant state estimation (FTSE) framework is developed for reliable navigation. The framework features kinematic state estimation using Bayesian filtering of sensor measurements, and sensor fault detection and isolation. Another development is an uncoupled fusion architecture that allows the system state to be updated by asynchronous sensors, makes the system easily scalable and allows the system to degrade gracefully during one or more sensor outages. A novel procedure to incorporate relative measurements, such as relative pose from stereo sensors, into the Bayesian filtering framework is also developed. In addition, a novel kinematic state transition model is developed that exploits the dynamics of the UGV, provides a coupled linear and angular motion model and avoids over-fitting of measurement data. The FTSE system's performance is demonstrated based on results from processing real data.
['Abhijit Sinha', 'Abir Mukherjee', 'Xia Liu', 'Simon P. Monckton', 'Gregory S. Broten']
A fault tolerant state estimation framework with application to UGV navigation in complex terrain
36,535
Recommender Systems are indispensable to provide personalized services on the Web. Recommending items which match a user's preference has been researched for a long time, and there exist a lot of useful approaches. Especially, Collaborative Filtering, which gives recommendation based on users' feedbacks to items, is considered useful. Feedbacks are categorized into explicit feedbacks and implicit feedbacks. In this paper, Collaborative Filtering with implicit feedbacks is addressed. Explicit feedbacks are feedbacks provided by users intentionally and represent users' preferences for items explicitly. For example, in Netflix, users can rate movies on a scale of 1-5, and, based on these ratings, users can receive movie recommendation. On the other hand, implicit feedbacks are collected by the system automatically. In Amazon.com, products that users buy and click are used for recommendation. While Collaborative Filtering with explicit feedbacks has been a central topic for a long time, implicit feedbacks have become a more and more important research topic recently because these are easier to obtain and more abundant than explicit feedbacks. However, implicit feedbacks are often noisy. They often contain feedbacks which do not represent users' real preferences for items. Our approach addresses this noise problem. We propose three discounting methods for observed values in implicit feedbacks. The key idea is that there is hidden uncertainty for each observed feedback, and effects by observed feedbacks of much uncertainty are discounted. The three discounting methods do not need additional information besides ordinary user-item feedback pairs and timestamps. Experiments with huge real-world datasets confirm that all of the three methods contribute to improving the performance. Moreover, our discounting methods can easily be combined with existing methods and improve the recommendation accuracy of existing models.
['Kento Kawai', 'Hiroyuki Kitagawa']
Collaborative Filtering with Implicit Feedbacks by Discounting Positive Feedbacks
864,139
Cognitive impairments, assistive technologies, design and evaluation methodologies, accessibility, inclusive design, universal usability.
['Joanna McGrenere', 'Jim Sullivan', 'Ronald M. Baecker']
Designing technology for people with cognitive impairments
389,667
The rapid advance of wireless and Web technologies enables mobile Web applications to provide many kinds of services for mobile users. Under a mobile Web system, analyzing mobile users' movement sequences and requested services is important for wide applications in wireless communication like data allocation, data replication, location-based and personalization services. The main challenge in this research issue is to effectively deal with the users' diverse behavior and the huge amount of data. However, to the best of our knowledge, no studies have been done on the problem of mining sequential mobile access patterns with both movement and service requests considered simultaneously. In this paper, we propose a novel data mining method, namely SMAP-Mine, that can discover patterns of sequential movement associated with requested services for mobile users in mobile Web systems. Through empirical evaluation under various simulation conditions, the proposed method is shown to deliver excellent performance in terms of accuracy, execution efficiency, and scalability.
['Vincent S. Tseng', 'Kawuu W. Lin']
Mining sequential mobile access patterns efficiently in mobile Web systems
141,988
Measuring the Effect Of Usability Mechanisms On User Efficiency, Effectiveness and Satisfaction.
['Marianella Aveledo', 'Diego Mauricio Curtino', 'Agustín De la Rosa', 'Ana M. Moreno']
Measuring the Effect Of Usability Mechanisms On User Efficiency, Effectiveness and Satisfaction.
779,941
A New Construction of Block Codes From Algebraic Curves
['Lingfei Jin']
A New Construction of Block Codes From Algebraic Curves
671,327
In this paper, a phase control scheme for Class-DE-E dc-dc converter is proposed and its performance is clarified. The proposed circuit is composed of phase-controlled Class-DE inverter and Class-E rectifier. The proposed circuit achieves the fixed frequency control without frequency harmonics lower than the switching frequency. Moreover, it is possible to achieve the continuous control in a wide range of the line and load variations. The output voltage decreases in proportion to the increase of the phase shift. The proposed converter keeps the advantages of Class-DE-E dc-dc converter, namely, a high power conversion efficiency under a high-frequency operation and low switch-voltage stress. Especially, high power conversion efficiency can be kept for narrow range control. We present numerical calculations for the design and the numerical analyses to clarify the characteristics of the proposed control. By carrying out circuit experiments, we show a quantitative similarity between the numerical predictions and the experimental results. In our experiments, the measured efficiency is over 84% with 2.5 W output power for 1.0-MHz operating frequency at the nominal operation. Moreover, the output voltage is regulated from 100% to 39%, keeping over 57% power conversion efficiency by using the proposed control scheme.
['Hiroo Sekiya', 'Shunsuke Nemoto', 'Jianming Lu', 'Takashi Yahagi']
Phase control for resonant DC-DC converter with class-DE inverter and class-E rectifier
105,808
This paper reports the scaling behavior of the fuzzy c-means based classifier (FCMC) with many parameters. FCMC is a classifier based on clustering approaches. The classification accuracy on test sets (i.e., the generalization capability) is not necessarily improved by increasing the number of clusters. Especially when the number of training samples is relatively small, not only does the classification boundary over-fit the data, but covariance matrices and cluster centers are also computed incorrectly, since the number of samples in each cluster becomes smaller. Hence, the test set accuracy deteriorates. The performance of FCMC with two clusters in each class and fewer than 1000 training samples was reported in the literature. This paper reports the scaling behavior of FCMC by testing with variously-sized training samples. The number of clusters of FCMC is increased up to eight. The number of clusters used in this paper is not very large but the number of parameters is relatively large. So, the parameters are optimized to training sets. LibSVM is one of the widely known state-of-the-art tools for support vector machines (SVM). The test set accuracy, training time and testing time (i.e., the detection time) of FCMC are compared with LibSVM by varying the size of training sets. FCMC shows a good generalization capability, though the parameters are optimized to training sets. When the number of training samples is increased by 10 times, the training time of FCMC increases by 10 times, but that of LibSVM increases by a factor of 100. The testing time is also much shorter than LibSVM when the size of the training set is large.
['Hidetomo Ichihashi', 'Katsuhiro Honda', 'Akira Notsu']
Comparison of scaling behavior between fuzzy c-means based classifier with many parameters and LibSVM
448,052
WEIGHTED TIME WARPING FOR TEMPORAL SEGMENTATION OF MULTI-PARAMETER PHYSIOLOGICAL SIGNALS
['Gartheeban Ganeshapillai', 'John V. Guttag']
WEIGHTED TIME WARPING FOR TEMPORAL SEGMENTATION OF MULTI-PARAMETER PHYSIOLOGICAL SIGNALS
731,660
This paper considers distributed detection over large scale connected networks with arbitrary topology. Contrasting to the canonical parallel fusion network where a single node has access to the outputs from all other sensors, each node can only exchange one-bit information with its direct neighbors in the present setting. Our approach adopts a novel consensus reaching algorithm using asymmetric bounded quantizers that allow controllable consensus error. Under the Neyman-Pearson criterion, we show that, with each sensor employing an identical one-bit quantizer for local information exchange, this approach achieves the optimal error exponent of centralized detection provided that the algorithm converges. Simulations show that the algorithm converges when the network is large enough.
['Shengyu Zhu', 'Biao Chen']
Distributed detection over connected networks via one-bit quantizer
869,585
Remote Drawing on Vertical Surfaces with a Self-Actuated Display
['Patrick Bader', 'Norman Pohl', 'Valentin Schwind', 'Niels Henze', 'Katrin Wolf', 'Stefan Schneegass', 'A. Schmidt']
Remote Drawing on Vertical Surfaces with a Self-Actuated Display
640,748
We prove an analog for trace groups of the Möbius inversion formula for trace monoids (Cartier-Foata, 1969). A by-product is to obtain an explicit and combinatorial formula for the growth series of a trace group. This is used to study the average height of traces.
['Anne Bouillard', 'Jean Mairesse']
Generating series of the trace group
891,325
The Development and Application of an Evaluation Methodology for Person Search Engines.
['Roland Brennecke', 'Thomas Mandl', 'Christa Womser-Hacker']
The Development and Application of an Evaluation Methodology for Person Search Engines.
735,144
Economic Load Dispatch (ELD) is an important optimization task which provides economic operating conditions for power systems. In this paper, Particle Swarm Optimization (PSO), an effective and reliable evolutionary based approach, has been proposed to solve the constrained economic load dispatch problem. The proposed method is able to determine the output power generation for all of the power generation units, so that the total constrained cost function is minimized. In this paper, a piecewise quadratic function is used to show the fuel cost equation of each generation unit, and the B-coefficient matrix is used to represent transmission losses. The feasibility of the proposed method and its performance in solving and managing constrained problems are demonstrated in 4 power system test cases, consisting of 3, 6, 15, and 40 generation units, with losses neglected in the last two cases. The obtained PSO results are compared with Genetic Algorithm (GA) and Quadratic Programming (QP) based approaches. These results prove that the proposed method is capable of obtaining higher quality solutions, offering mathematical simplicity, fast convergence, and robustness in solving hard optimization problems.
['Abolfazl Zaraki', 'Mohd Fauzi Othman']
Implementing Particle Swarm Optimization to Solve Economic Load Dispatch Problem
336,145
In this paper we present two approaches to improve computational efficiency of a keyword spotting system running on a resource constrained device. This embedded keyword spotting system detects a pre-specified keyword in real time at low cost of CPU and memory. Our system is a two stage cascade. The first stage extracts keyword hypotheses from input audio streams. After the first stage is triggered, hand-crafted features are extracted from the keyword hypothesis and fed to a support vector machine (SVM) classifier on the second stage. This paper focuses on improving the computational efficiency of the second stage SVM classifier. More specifically, we select a subset of feature dimensions and merge the SVM classifier into a smaller one, while maintaining the keyword spotting performance. Experimental results indicate that we can remove more than 36% of the non-discriminative SVM features, and reduce the number of support vectors by more than 60% without significant performance degradation. This results in more than 15% relative reduction in CPU utilization.
['Ming Sun', 'Varun K. Nagaraja', 'Bjorn Hoffmeister', 'Shiv Vitaladevuni']
Model Shrinking for Embedded Keyword Spotting
668,504
We study the problem of objects search in clutter. In cluttered environments, partial occlusion among objects prevents vision systems from correctly recognizing objects. Hence, the agent needs to move objects around to gather information, which helps reduce uncertainty in perception. At the same time, the agent needs to minimize the efforts of moving objects to reduce the time required to complete the task. We model the problem as a Partially Observable Markov Decision Process (POMDP), formulating it as a problem of optimal decision making under uncertainty. By exploiting spatial constraints, we are able to adapt online POMDP planners to handle objects search problems with large state space and action space. Experiments show that the POMDP solution outperforms greedy approaches, especially in cases where multi-step manipulation is required.
['Jue Kun Li', 'David Hsu', 'Wee Sun Lee']
Act to See and See to Act: POMDP planning for objects search in clutter
956,653
Parametric algebraic specifications with Gentzen formulas – from quasi-freeness to free functor semantics
['Michael Löwe', 'Uwe Wolter']
Parametric algebraic specifications with Gentzen formulas – from quasi-freeness to free functor semantics
448,641
Smart Autism is a cloud-based, automated framework for autism screening and confirmation. In developing countries, due to lack of resources and expertise, autism is often detected late rather than at an early age, which consequently delays timely intervention. Therefore a mobile, interactive and integrated framework is proposed to screen and confirm autism in different age groups (0 to 17 years) with a 3-layer assessment process. Firstly, it screens by evaluating the responses to a pictorial screening questionnaire through a mobile application. If autism is suspected, then in a virtual assessment process, the child watches a video, the child's reactions are recorded and uploaded to the cloud for remote expert assessment. If autism is still suspected, then the child is referred to the nearest Autism Resource Center (ARC) for actual assessment. Analyzing these results, the integrated framework confirms autism automatically and reduces unnecessary ARC visits. It is expected that the proposed framework will bring changes in the autism diagnosis process and create awareness.
['Khondaker Abdullah Al Mamun', 'Sharmistha Bardhan', 'Md. Anwar Ullah', 'Evdokia Anagnostou', 'Jessica Brian', 'Shaheen Akhter', 'Mohammod Golam Rabbani']
Smart autism — A mobile, interactive and integrated framework for screening and confirmation of autism
912,186
Peer-to-Peer (P2P) streaming technology based on various service requirements often falls short of the elusive benefits of user-friendly video streaming in cloud computing environments. Cloud-assisted P2P media streaming gives an opportunity to enhance on-demand, dynamic and easily accessible videos. This paper outlines the fundamental video block device (VBD) method for user-friendly viewing patterns that inherits from user access patterns and provides efficient delivery using an enhanced BitTorrent protocol.
['Yong-Ju Lee']
Video block device for user-friendly delivery in IaaS clouds
223,569
In order to analyze the dynamics of particles in particle swarm optimization (abbr. PSO) rigorously, we proposed a canonical deterministic PSO (abbr. CD-PSO). The CD-PSO can be described by a quite simple equation, and it is very easy to analyze its dynamics. However, the CD-PSO is a deterministic system; therefore, its solution search ability is worse than that of the conventional PSO, which contains stochastic factors. The deterministic system is easy to implement since stochastic factors are not contained. Therefore, we consider methods for improving the search ability of the CD-PSO. Based on the analysis results of the CD-PSO, we propose a deterministic multi-agent solving method (abbr. MAS). To improve the solution search performance, we propose a novel asynchronous MAS whose update manner is asynchronous. By using some benchmark functions, we confirm the effectiveness of the asynchronous MAS.
['Kenta Kohinata', 'Takuya Kurihara', 'Takuya Shindo', "Kenya Jin'no"]
A Novel Deterministic Multi-agent Solving Method
609,909
Modeling External Volume Changes in Stereo Echo Cancellers
['Elias Nemer', 'Wilfried Leblanc']
Modeling External Volume Changes in Stereo Echo Cancellers
783,430
Certificateless Key-Insulated Encryption: Cryptographic Primitive for Achieving Key-Escrow Free and Key-Exposure Resilience
['Libo He', 'Chen Yuan', 'Hu Xiong', 'Zhiguang Qin']
Certificateless Key-Insulated Encryption: Cryptographic Primitive for Achieving Key-Escrow Free and Key-Exposure Resilience
862,717
Motivated by applications to the wiretap channel of type II with the coset coding scheme as well as secret sharing, the concept of the fullrank value function is introduced in this paper. Several constructing methods for fullrank value functions are presented by using the finite projective geometry. The relationship between fullrank value functions and secret sharing schemes, and the relationship between fullrank value functions and separation of linear codes are given as well. In particular, new (2,2)-separating codes are constructed by using our methods.
['Zihui Liu', 'Xin-Wen Wu']
The fullrank value function
589,925
This paper describes a novel filtering method to reconstruct an arbitrarily focused image from two differently focused images. Based on the assumption that the image scene has two layers, foreground and background, two differently focused images are used as inputs, one of which is focused on the foreground and the other is focused on the background. The linear equation that holds between these images and the desired image, which is derived from their imaging models, can be formulated as an image restoration problem. This paper shows that the solution of this problem exists as an inverse filter and the desired image can be reconstructed using only linear filters. As a result, fast reconstruction with high accuracy can be achieved. Experiments using real images are shown.
['Alcira Kubota', 'Kiyoharu Aizawa']
Inverse filters for reconstruction of arbitrarily focused images from two differently focused images
186,577
Petri nets, or equivalently vector addition systems (VAS), are widely recognized as a central model for concurrent systems. Many interesting properties are decidable for this class, such as boundedness, reachability, regularity, as well as context-freeness, which is the focus of this paper. The context-freeness problem asks whether the trace language of a given VAS is context-free. This problem was shown to be decidable by Schwer in 1992, but the proof is very complex and intricate. The resulting decision procedure relies on five technical conditions over a customized coverability graph. These five conditions are shown to be necessary, but the proof that they are sufficient is only sketched. In this paper, we revisit the context-freeness problem for VAS, and give a simpler proof of decidability. Our approach is based on witnesses of non-context-freeness, that are bounded regular languages satisfying a nesting condition. As a corollary, we obtain that the trace language of a VAS is context-free if, and only if, it has a context-free intersection with every bounded regular language.
['Jérôme Leroux', 'Vincent Penelle', 'Grégoire Sutre']
On the Context-Freeness Problem for Vector Addition Systems
369,315
Allowing real-time systems to autonomously evolve or self-organize during their life-time poses challenges on guidance of such a process. Hard real-time systems must never break their timing constraints even if undergoing a change in configuration. We propose to enhance future real-time systems with an in-system model-based timing analysis engine capable of deciding whether a configuration is feasible to be executed. This engine is complemented by a formal procedure guiding system evolution. The distributed implementation of a runtime environment (RTE) implementing this procedure imposes two key questions of consistency: How do we ensure model consistency across the distributed system and how do we ensure consistency of the actual system behavior with the model? We present a synchronization protocol solving the model consistency issues and provide a discussion on implications of different mode-change protocols on consistency of the system with its model.
['Steffen Stein', 'Moritz Neukirchner', 'Harald Schrom', 'Rolf Ernst']
Consistency Challenges in Self-Organizing Distributed Hard Real-Time Systems
128,953
An optimal sensor layout is attained when a limited number of sensors are placed in an area such that the cost of the placement is minimized while the value of the obtained information is maximized. In this paper, we discuss the optimal sensor layout design problem from first principles, show how an existing optimization criterion (maximum entropy of the measured variables) can be derived, and compare the performance of this criterion with three others that have been reported in the literature for a specific situation for which we have detailed experimental data available. This is achieved by firstly learning a spatial model of the environment using a Bayesian Network, then predicting the expected sensor data in the rest of the space, and finally verifying the predicted results with the experimental measurements. The development of rigorous techniques for optimizing sensor layouts is argued to be an essential requirement for reconfigurable and self-adaptive networks.
['X. Rosalind Wang', 'George M. Mathews', 'Don Price', 'Mikhail Prokopenko']
Optimising Sensor Layouts for Direct Measurement of Discrete Variables
187,605
We present a new algorithm that computes the probability that there is an operating path from a node s to a node t in a stochastic network. The computation time of this algorithm is bounded by a polynomial in the number of s, t-cuts in the network. We also examine the complexity of other connectedness reliability problems with respect to the number of cutsets and pathsets in the network. These problems are distinguished as either having algorithms that are polynomial in the number of such sets, or having no such algorithms unless P = NP.
['J. Scott Provan', 'Michael O. Ball']
Computing Network Reliability in Time Polynomial in the Number of Cuts
518,530
Field Programmable Gate Array (FPGA) technology is expected to play a key role in the development of Software Defined Radio platforms. To reduce design time required when targeting such a technology, high-level synthesis tools can be used. These tools are available in current FPGA CAD tools. In this demo, we will present the design of a FFT component for Long Term Evolution standard and its implementation on a Xilinx Virtex 6 based ML605 board. Our flexible FFT can support FFT sizes among 128, 256, 512, 1024, 1536 and 2048 to compute OFDM symbols. The FFT is specified at a high-level (i.e. in C language). Both dynamic partial reconfiguration and run-time configuration based on input control signals of the flexible FFT will be shown. These two approaches provide interesting tradeoff between reconfiguration time and area.
['Mai-Thanh Tran', 'Emmanuel Casseau', 'Matthieu Gautier']
Demo abstract: FPGA-based implementation of a flexible FFT dedicated to LTE standard
942,284
We present the design of a scalable parallel pathline construction method for visualizing large time-varying 3D vector fields. A 4D (i.e., time and the 3D spatial domain) representation of the vector field is introduced to make a time-accurate depiction of the flow field. This representation also allows us to obtain pathlines through streamline tracing in the 4D space. Furthermore, a hierarchical representation of the 4D vector field, constructed by clustering the 4D field, makes possible interactive visualization of the flow field at different levels of abstraction. Based on this hierarchical representation, a data partitioning scheme is designed to achieve high parallel efficiency. We demonstrate the performance of parallel pathline visualization using data sets obtained from terascale flow simulations. This new capability will enable scientists to study their time-varying vector fields at the resolution and interactivity previously unavailable to them.
['Hongfeng Yu', 'Chaoli Wang', 'Kwan-Liu Ma']
Parallel hierarchical visualization of large time-varying 3D vector fields
485,223
Intervehicle communications could be very valuable in emergency situations such as after natural or man-made disasters or accidents. The main goals in designing protocols for such situations are to guarantee lower delay and needed throughput, and probably most importantly to enable prioritization of emergency information. Reaching such goals is not easy in emergency situations, when the intervehicle traffic increases exponentially. We present a reliable hierarchical routing protocol that uses load balancing to keep message delay low even in the presence of a high level of traffic. Our protocol is based on geographical routing. The protocol is designed for highway travelers but can be used in any mobile ad-hoc network. The highway is divided into virtual cells, which move as the vehicles move. The cell members might choose one or more Cell_Leaders that will behave for a certain time interval as Base Stations. Every node has its geographical position given by the global positioning system (GPS). Cell_Leaders form a virtual backbone used to forward messages among nodes in different cells. The traffic is distributed among Cell_Leaders in order to optimize the communication delay. We study the effect of load balancing in minimizing delay. Our simulation results show that our proposed protocol improves the network utilization compared to existing intervehicle protocols.
['Mimoza Durresi', 'Arjan Durresi', 'Leonard Barolli', 'Frank Hsu']
Intervehicle communication protocol for emergency situations
53,676
Image segmentation has traditionally been thought of as a low/mid-level vision process incorporating no high-level constraints. However, in complex and uncontrolled environments, such bottom-up strategies have drawbacks that lead to large misclassification rates. Remedies to this situation include taking into account (1) contextual and application constraints, and (2) user input and feedback to incrementally improve the performance of the system. We attempt to incorporate these in the context of pipeline segmentation in industrial images. This problem is of practical importance for the 3D reconstruction of factory environments. However it poses several fundamental challenges, mainly due to shading, highlights, textural variations, etc. Our system performs pipe segmentation by fusing methods from physics-based vision, edge and texture analysis, probabilistic learning and the use of the graph-cut formalism.
['Bertrand Thirion', 'Benedicte Bascle', 'Visvanathan Ramesh', 'Nassir Navab']
Fusion of color, shading and boundary information for factory pipe segmentation
152,124
Traditional Chinese Medicine (TCM) Semantic Grid is an application of Semantic Grid techniques in the TCM domain. It comprises TCM ontology and TCM databases as data resources and Dart Search, Dart Query etc. as TCM applications. This paper reports on an ontology engineering component of the TCM Semantic Grid. Although ontology engineering has gained significant progress, ontology construction still mainly depends on manual work and tends to be mistake-prone. In this paper, we introduce a semantic relation verification method based on both domain ontology and domain publications. A modified vector space model is used to extract semantic relations from domain publications, which is particularly useful when the semantic relation cannot be extracted directly. An association rule learning method is used to distinguish significant relations from trivial ones. A further verification method is used to give users recommendations of relation types. We use the Traditional Chinese Medicine Language System, a domain ontology for Traditional Chinese Medicine, and relevant publications to validate our approach. But our method is not limited to this field. In fact, any data source that can be extracted into relevant instance pairs is applicable.
['Xiaogang Zhang', 'Huajun Chen', 'Jun Ma', 'Jinhuo Tao']
Ontology Based Semantic Relation Verification for TCM Semantic Grid
308,731
TSD (Tjenester for Sensitive Data) is an isolated infrastructure for storing and processing sensitive research data, e.g. human patient genomics data. Due to the isolation of the TSD, it is not possible to install software in the traditional fashion. Docker containers are a platform implementing lightweight virtualization technology, applying the build-once-run-anywhere approach to software packaging and sharing. This paper describes our experience at USIT (The University Centre of Information Technology) at the University of Oslo with Docker containers as a solution for installing and running software packages that require downloading of dependencies and binaries during installation, inside a secure isolated infrastructure. Using Docker containers made it possible to package software as Docker images and run it smoothly inside our secure system, TSD. The paper describes Docker as a technology, its benefits and weaknesses in terms of security, demonstrates our experience with a use case of installing and running the Galaxy bioinformatics portal as a Docker container inside the TSD, and investigates the use of the Stroll file-system as a proxy between the Galaxy portal and the HPC cluster.
['Abdulrahman Azab', 'Diana Domanska']
Software Provisioning Inside a Secure Environment as Docker Containers Using Stroll File-System
853,228
Decision diagrams such as binary decision diagrams and multi-valued decision diagrams play an important role in various fields, including symbolic model checking. An ongoing challenge is to develop data structures and algorithms for modern multi-core architectures. The BDD package Sylvan provides one contribution by implementing parallelized BDD operations, thus allowing sequential algorithms to exploit the power of multi-core machines.

We present several extensions to Sylvan. We implement parallel operations on list decision diagrams, a variant of multi-valued decision diagrams that is useful for symbolic model checking. We also substitute several core components of Sylvan with new designs, such as the work-stealing framework, the unique table, and the operation cache. Furthermore, we combine parallel operations with parallelization at a higher level, by partitioning the transition relation. We show that this results in an improved speedup using the model checking toolset ltsmin. We also demonstrate that the parallelization of symbolic model checking for explicit-state modeling languages with an on-the-fly next-state function, as supported by ltsmin, scales well.
['Tom van Dijk', 'Jaco van de Pol']
Sylvan: Multi-Core Decision Diagrams
570,432
Correlation between Thermal Conductivity and Bond Length Alternation in Carbon Nanotubes: A Combined Reverse Nonequilibrium Molecular Dynamics-Crystal Orbital Analysis
['Mohammad Alaghemandi', 'Joachim Schulte', 'Frédéric Leroy', 'Florian Müller-Plathe', 'Michael C. Böhm']
Correlation between Thermal Conductivity and Bond Length Alternation in Carbon Nanotubes: A Combined Reverse Nonequilibrium Molecular Dynamics-Crystal Orbital Analysis
225,421
Graphs are very powerful and widely used representational tools in computer applications. We present a relaxation approach to (sub)graph matching based on a fuzzy assignment matrix. The algorithm has a computational complexity of O(n/sup 2/m/sup 2/) where n and m are the number of nodes in the two graphs being matched, and can perform both exact and inexact matching. To illustrate the performance of the algorithm, we summarize the results obtained for more than 12 000 pairs of graphs of varying types (weighted graphs, attributed graphs, and noisy graphs). We also compare our results with those obtained using the graduated assignment algorithm.
['Swarup Medasani', 'Raghu Krishnapuram', 'YoungSik Choi']
Graph matching by relaxation of fuzzy assignments
226,475
The management of large collections of music data in a multimedia database has received much attention in the past few years. In most current work, researchers extract features from the music data and develop indices that will help to retrieve the relevant music quickly. Several reports have pointed out that these features of music can be transformed and represented in the form of music feature strings. However, these approaches lack scalability as music data increases. In this paper we propose an approach to transform music data into numeric form and develop an index structure based on the R-tree for effective retrieval. The experimental results show that our approach outperforms existing string index approaches.
['Yu-lung Lo', 'Shiou-jiuan Chen']
The numeric indexing for music data
237,952
A Semantic Discovery Framework to Support Supply-demand Matchmaking in Cloud Service Markets.
['Giuseppe Di Modica', 'Orazio Tomarchio']
A Semantic Discovery Framework to Support Supply-demand Matchmaking in Cloud Service Markets.
790,075
Benchmarking robotic technologies is of utmost importance for the actual deployment of robotic applications in industrial and every-day environments, and therefore many efforts have recently focused on this problem. Among the many different ways of benchmarking robotic systems, scientific competitions are recognized as one of the most effective ways to drive rapid scientific progress in a field. The RoboCup@Home league targets the development and deployment of autonomous service and assistive robot technology, essential for future personal domestic applications, and offers an important approach to benchmarking domestic and service robots.

In this paper we present the new methodology for benchmarking DSR adopted in RoboCup@Home, which includes the definition of multiple benchmarks (tests) and of performance metrics based on the relationships between key abilities required of the robots and the tests. We also discuss the results of our benchmarking approach over the past years and provide an outlook on short- and mid-term goals of RoboCup@Home and of DSR in general.
['Thomas Wisspeintner', 'Tijn van der Zan', 'Luca Iocchi', 'Stefan Schiffer']
RoboCup@Home: results in benchmarking domestic service robots
452,606
In this paper, we investigate the interference caused by Non Linear (NL) power amplifiers together with timing errors for multicarrier OFDM/FBMC transmissions. This kind of interference arises in Cognitive Radio systems where users are not synchronized. We found that the global interference can be computed by adding the NL effect to the unsynchronized one. Finally, simulation results are presented and are in good agreement with the developed theoretical model.
['Mahmoud Khodjet-Kesba', 'Charbel Saber', 'Daniel Roviras', 'Yahia Medjahdi']
Multicarrier Interference Evaluation with Jointly Non-Linear Amplification and Timing Errors
372,776
Unreliable and harsh environmental conditions in avionics and space applications demand run-time adaptation capabilities to withstand environmental changes and radiation-induced faults. Modern SRAM-based FPGAs integrating high computational power with partial and dynamic reconfiguration abilities are a usual candidate for such systems. However, due to the vulnerability of these devices to Single Event Upsets (SEUs), designs need proper fault-handling mechanisms. In this work we propose a novel circuit instrumentation method for probing Triple Modular Redundancy (TMR) circuits for error detection at the granularity of individual domains and then use selective run-time dynamic reconfiguration for recovery. Error detection logic is inserted in the physical net-list to identify and localize faults. Moreover, selective domain reconfiguration is achieved by careful considerations in the placement phase on the FPGA reconfigurable area. The proposed technique is suitable for systems having hard real-time constraints. Our results demonstrate that this approach has an overhead of 2 LUTs per majority voter in internal partitions in terms of area when compared to the standard TMR circuits. In addition, it brings down the reconfiguration times of TMR circuits to a single domain and ensures a 100% availability of the device assuming the Single Event Upset fault model.
['Luca Sterpone', 'Anees Ullah']
On the optimal reconfiguration times for TMR circuits on SRAM based FPGAs
472,388
Rheumatoid Arthritis is a chronic disease that leads to swelling and inflammation of the joints and can even spread to surrounding tissues and blood vessels. Physical therapy has been used successfully to slow the effects of this degenerative disease. Patients, however, do not want to do these exercises because they are boring and repetitive. In this paper, we introduce the first steps in creating a virtual environment using a CAVE system for physical therapy sessions in which the user will be engaged and motivated to complete the exercises prescribed by his or her doctor.
['Shawn N. Gieser', 'Eric Becker', 'Fillia Makedon']
Using CAVE in physical rehabilitation exercises for rheumatoid arthritis
371,094
Verification of structural and non-functional properties of UML2.0 component assemblies.
['Mourad Kmimech', 'Mohamed Tahar Bhiri', 'Mohamed Graiet', 'Philippe Aniorté']
Verification of structural and non-functional properties of UML2.0 component assemblies.
755,325
We describe a content-based audio classification algorithm based on novel multiscale spectro-temporal modulation features inspired by a model of auditory cortical processing. The task explored is to discriminate speech from nonspeech consisting of animal vocalizations, music, and environmental sounds. Although this is a relatively easy task for humans, it is still difficult to automate well, especially in noisy and reverberant environments. The auditory model captures basic processes occurring from the early cochlear stages to the central cortical areas. The model generates a multidimensional spectro-temporal representation of the sound, which is then analyzed by a multilinear dimensionality reduction technique and classified by a support vector machine (SVM). Generalization of the system to signals in high level of additive noise and reverberation is evaluated and compared to two existing approaches (Scheirer and Slaney, 2002 and Kingsbury et al., 2002). The results demonstrate the advantages of the auditory model over the other two systems, especially at low signal-to-noise ratios (SNRs) and high reverberation.
['Nima Mesgarani', 'Malcolm Slaney', 'Shihab A. Shamma']
Discrimination of speech from nonspeech based on multiscale spectro-temporal Modulations
125,205
With more and more computing devices being deployed in buildings there has been a steady rise in buildings' electricity consumption. At the same time there is a pressing need to reduce overall building energy consumption. Pervasive computing could further exacerbate this problem but it could also provide a solution. Context information (e.g., user location) likely to be available in pervasive computing environments could enable highly effective device power management. The objective of such context-aware power management (CAPM) is to minimise the overall electricity consumption of a building while maintaining acceptable user-perceived device performance. To investigate the potential of CAPM we conducted experimental trials for two simple location-aware power management policies. Our results highlight the presence of two distinct user behaviour patterns but also show that location alone is not enough for effective power management. We therefore propose a CAPM framework that employs Bayesian networks to support prediction of user behaviour patterns from multi-modal sensor data for effective power management. We further propose the use of acoustic data as an interesting context for predicting finer-grained user behaviour. The paper presents an initial evaluation of the resulting framework.
['Colin Harris', 'Vinny Cahill']
Exploiting user behaviour for context-aware power management
44,696
This paper motivates, from historical, philosophical, and industrial points of view, the adoption of a novel scheme for developing complex measuring systems as perceptive agencies. The general concept of agency, a cooperative multiagent system defined within distributed artificial intelligence and robotics, is discussed together with its particular application to the field of intelligent instruments. An embryonic example of perceptive agency applied to the field of environmental monitoring is reported.
['Francesco Amigoni', 'Arnaldo Brandolini', "G. D'Antona", 'R. Ottoboni', 'Marco Somalvico']
Artificial intelligence in science of measurements: from measurement instruments to perceptive agencies
424,859
In this paper, we propose a novel method for reconstruction of multi-gray-level images using the exact computation of Legendre moments. The purpose of this method is to ensure high accuracy and low computation time by dividing the image into a set of blocks instead of treating the whole image. For mitigating the blocking artifact involved in the reconstruction process, we propose a new approach based on block Reconstruction using a Global Overlapping Block and Exact Legendre Moments computation (GOBRELM). A comparison of the proposed method with conventional ones yields very impressive and promising results concerning image quality and computation time. The main motivation of the proposed approach is to allow a fast and efficient reconstruction algorithm, improving reconstructed image quality while reducing time consumption. This makes our method attractive for real-time applications.
['Zaineb Bahaoui', 'Khalid Zenkouar', 'Arsalane Zarghili', 'Hakim El Fadili', 'Hassan Qjidaa']
Global overlapping block based reconstruction using exact Legendre moments
914,335
We consider a cellular network inspired channel model which consists of the bi-directional exchange of information between a base-station and M terminal nodes with the help of a relay. The base-station has a message for each of the M terminals, and conversely, each terminal node has a message for the base-station. A single relay assists the bi-directional communication endeavor. We assume an AWGN channel model with direct links (omitted in previous studies) between the base-station, relay, and half-duplex nodes. In this scenario, we derive achievable rate regions for two temporal protocols - needed in half-duplex networks - which indicate which users transmit when. These achievable rate regions are based on a novel lattice encoding and decoding strategy which outperforms previously derived regions using random-coding based decode-and-forward strategies under certain channel conditions. The terminal nodes employ nested lattice codes, and the relay decodes a series of codeword combinations - one of the main novelties of our scheme - from which it deduces the sum-codewords of the base-station and terminal node i, which it then broadcasts. This scheme differs markedly from previously considered successive-decoding based lattice strategies and provides a more general framework for the joint decoding of lattice codewords in a MAC. Numerical evaluations of our lattice-based inner bounds are shown to improve upon previous random-coding based schemes under certain channel conditions, and are compared to half-duplex cut-set outer bounds. We further demonstrate a constant sum-rate gap result using this lattice-based scheme for symmetric channels for one of the two protocols.
['Sang Joon Kim', 'Besma Smida', 'Natasha Devroye']
Lattice strategies for a multi-pair bi-directional relay network
12,024
Ground truth is indispensable for training and evaluating document analysis methods, yet very tedious to create manually. This especially holds true for complex historical manuscripts that exhibit challenging layouts with interfering and overlapping handwriting. In this paper, we propose a novel semi-automatic system to support layout annotation in such a scenario based on document graphs and a pen-based scribbling interaction. On the one hand, document graphs provide a sparse page representation that is already close to the desired ground truth, and on the other hand, scribbling facilitates an efficient and convenient pen-based interaction with the graph. The performance of the system is demonstrated in the context of a newly introduced database of historical manuscripts with complex layouts.
['Angelika Garz', 'Mathias Seuret', 'Fotini Simistira', 'Andreas Fischer', 'Rolf Ingold']
Creating Ground Truth for Historical Manuscripts with Document Graphs and Scribbling Interaction
817,231
This paper contributes to the issue of QoS differentiation in wireless ad hoc networks. The focus is on the IEEE 802.11e MAC protocol that we propose to enhance by introducing a mechanism to "dynamically" assign priorities to traffic. Priorities are hop-by-hop assigned according to network resource availability and load in order to satisfy throughput requirements of each class of traffic
['Antonio Iera', 'Antonella Molinaro', 'Giuseppe Ruggeri', 'Domenico Tripodi']
Dynamic priority assignment in IEEE 802.11e ad-hoc networks
191,012
Finding a Nonempty Algebraic Subset of an Edge Set in Linear Time
['Mauro Mezzini']
Finding a Nonempty Algebraic Subset of an Edge Set in Linear Time
271,761
The metagenomics approach allows the simultaneous sequencing of all genomes in an environmental sample. This results in high complexity datasets, where in addition to repeats and sequencing errors, the number of genomes and their abundance ratios are unknown. Recently developed next-generation sequencing (NGS) technologies significantly improve the sequencing efficiency and cost. On the other hand, they result in shorter reads, which makes the separation of reads from different species harder. In this work, we present a two-phase heuristic algorithm for separating short paired-end reads from different genomes in a metagenomic dataset. We use the observation that most of the l-mers belong to unique genomes when l is sufficiently large. The first phase of the algorithm results in clusters of l-mers, each of which belongs to one genome. During the second phase, clusters are merged based on l-mer repeat information. These final clusters are used to assign reads. The algorithm can handle very short reads and sequencing errors. Our tests on a large number of simulated metagenomic datasets concerning species at various phylogenetic distances demonstrate that genomes can be separated if the number of common repeats is smaller than the number of genome-specific repeats. For such genomes, our method can separate NGS reads with high precision and sensitivity.
['Olga Tanaseichuk', 'James Borneman', 'Tao Jiang']
Separating Metagenomic Short Reads into Genomes via Clustering (Extended Abstract)
746,300
This work is an extension of our previous attempt to construct a spatial representation of 21 initial consonants in Thai by partitioning them into homogeneous clusters based on empirical measures of confusability and distance among phonemes. The measures were taken from the perceptual identification performance of 28 listeners (seven full subjects) when stimuli were presented in noise. In the present study, two clustering methods, namely multidimensional scaling analysis and k-means clustering, were employed, yielding six different classifications and four perceptually relevant categories: intra-cluster short distance, intra-cluster long distance, inter-cluster short distance, and inter-cluster long distance. Another set of perceptual experiments (eight listeners; two full subjects) was carried out to verify the predictions. The findings reveal that the derived perceptual clusters and defined categories fit relatively well with the listeners' performance. Distinctive feature systems in phonological theory appear to provide some basis for the clustering of phonemes.
['P. Phienphanich', 'Chutamanee Onsuwan', 'Charturong Tantibundhit', 'Nantaporn Saimai', 'Tanawan Saimai']
Modeling predictive perceptual representation of Thai initial consonants
932,446
Variability represents an important challenge in multi-tenant SaaS applications. In fact, even if multi-tenancy realizes SaaS providers' dream of having a single maintained software instance serving multiple customers (tenants) for common functionality, variations in tenants' needs and their specific requirements at many places in the application bring providers back to the real world. They face an additional design concern: supporting application variability on a per-tenant basis. In this paper, we focus on this variability concern and try to reduce its complexity by decoupling its management across different application layers. We rely on a two-step decoupling approach: the first step consists of representing application variations as an explicit variability model, while the second step consists of choosing the most appropriate application layer(s) to manage each variation. Our approach is illustrated by a case study from the food industry.
['Ali Ghaddar', 'Dalila Tamzalit', 'Ali Assaf']
Decoupling variability management in multi-tenant SaaS applications
265,923
Multi-layer ad hoc wireless networks with UAVs are an ideal infrastructure to establish a rapidly deployable wireless communication system any time, anywhere in the world for military applications. In this tactical environment, information traffic is quite asymmetric. Ground fighting units are information consumers and receive far more data than they transmit. The up-link is used for sending requests for information and some networking configuration overhead of a few kilobits, while the down-link is used to return the requested data of megabit size (e.g. multimedia files of images and charts). Centralized intelligent channel assigned multiple access (C-ICAMA) is a MAC layer protocol proposed for ground backbone nodes to access a UAV (unmanned aerial vehicle) to handle the highly asymmetric data traffic in this tactical environment. With its intelligent scheduling algorithm, it can dynamically allocate bandwidth for the up-link and down-link to fit the instantaneous status of asymmetric traffic. The results for C-ICAMA are very promising: owing to the dynamic bandwidth allocation of the asymmetric up-link and down-link, the access delay is tremendously reduced.
['Daniel Lihui Gu', 'Henry Ly', 'Xiaoyan Hong', 'Mario Gerla', 'Guangyu Pei', 'Yeng-Zhong Lee']
C-ICAMA, a centralized intelligent channel assigned multiple access for multi-layer ad-hoc wireless networks with UAVs
386,328
Nonmonotonic logic is intended to apply specifically to situations where the initial information is incomplete. Using nonmonotonic reasoning procedures we shall be able to jump to conclusions, but withdraw them later when we gain additional information. A number of nonmonotonic logics have been introduced and widely discussed. Nonmonotonic logics tend to be introduced proof-theoretically, and little attention is paid to their semantic characteristics or their computational tractability. We address both of these issues by presenting a nonmonotonic logic for the Herbrand subset of first-order predicate logic. This nonmonotonic logic is shown to be both sound and complete. Theories formulated in this logic can be executed in logic programming fashion.
['Thomas J. Weigert', 'Jeffrey J. P. Tsai']
A computationally tractable nonmonotonic logic
476,800
Using a local maximum filter, individual trees were extracted from a 1 m spatial resolution IKONOS image and represented as points. The spatial pattern of individual trees was determined to represent forest age (a surrogate for forest structure). Point attributes, based on the spatial pattern of trees, were generated via nearest neighbour statistics and used as the basis for aggregating points into forest structure units. The forest structure units allowed for the mapping of a forested area into one of three age categories: young (1–20 years), intermediate (21–120 years), and mature (>120 years). This research indicates a new approach to image processing, where objects generated from the processing of image data (rather than pixels or spectral values) are subjected to spatial statistical analysis to estimate an attribute relating an aspect of forest structure.
['Trisalyn A. Nelson', 'K. Olaf Niemann', 'Michael A. Wulder']
Spatial statistical techniques for aggregating point objects extracted from high spatial resolution remotely sensed imagery
322,523
Two specific cases of signal detection involving uncertainty in the frequency of a sound signal are compared with the case of the signal-known-exactly. In the first case the signal is either of two known frequencies; in the second case the signal is any frequency within a given range. It is suggested that detection behavior that is optimal for the three cases requires a dual mechanism: a combination of a wide-open receiver and a panoramic receiver. Evidence is presented that supports the existence of such a mechanism. Estimates of the bandwidth and scan-rate of the receiver are included.
['Wilson P. Tanner', 'Robert Z. Norman']
The human use of information--II: Signal detection for the case of an unknown signal parameter
163,090
We address the problem of 6DOF spacecraft formation control in a leader-follower architecture using a Robust Integral of the Sign Error (RISE) based Neural Network (NN) technique. Based on a 6DOF relative dynamic model of the spacecraft formation, a RISE-based NN is introduced to approximate the dynamics of the follower as well as various practical disturbances. It is shown that the errors of the entire formation closed-loop are guaranteed to be asymptotically stable (AS), which is a significant advantage over the typical Uniformly Upper Bounded (UUB) property of most NN controllers in high-precision formation tasks. A numerical simulation is provided to illustrate the effectiveness of the proposed algorithm.
['Haibo Min', 'Fuchun Sun', 'Shicheng Wang', 'Zhiguo Liu', 'Jinsheng Zhang']
Spacecraft coordination control in 6DOF based on Neural Network
410,522
A Coq Library for Internal Verification of Running-Times
['Jay McCarthy', 'Burke Fetscher', 'Max S. New', 'Daniel Feltey', 'Robert Bruce Findler']
A Coq Library for Internal Verification of Running-Times
808,767
This work aims to assess the potential of Synthetic Aperture Radar (SAR) data combined with optical data to support local administrations in the knowledge of land use and land cover at regional scale. In particular, the contribution of data available in the future through the SIASGE project, combining L-band and X-band radar imagery, is assessed in order to produce thematic maps. Moreover, the further contribution brought by C-band has been evaluated. The classification, focused on two regions in the north of Italy, is driven by the legend of already existing maps addressing the real needs of the land managing authorities. As the combination with data from optical imagery is fundamental to achieve good thematic accuracy, the work has exploited the Support Vector Machine learning technique, which is more suitable than standard statistical parametric approaches in this respect. Concerning the classification step, some algorithmic issues have been addressed to improve the results, such as the training set selection strategy and data fusion techniques. The work has shown that the multi-source data set (SAR and optical) is fairly suitable to produce thematic maps comparable to those already in use at the local administrative level, allowing reliable maps to be obtained with a classification accuracy in the order of 90 %.
['Nazzareno Pierdicca', 'F. Pelliccia', 'Marco Chini']
Thematic mapping at regional scale using SIASGE Radar data at X and L band and optical images
315,502