Preliminary Design of a Network Protocol Learning Tool Based on the Comprehension of High School Students: Design by an Empirical Study Using a Simple Mind Map
The purpose of this study is to develop a learning tool for high school students studying the scientific aspects of information and communication networks. More specifically, we focus on the basic principles of network protocols as the subject of our learning tool. The tool gives students hands-on experience to help them understand the basic principles of network protocols.
A methodology for the physically accurate visualisation of roman polychrome statuary
This paper describes the design and implementation of a methodology for the visualisation and hypothetical virtual reconstruction of Roman polychrome statuary for research purposes. The methodology is intended as an attempt to move beyond visualisations which are simply believable towards a more physically accurate approach. Accurate representations of polychrome statuary have great potential utility both as a means of illustrating existing interpretations and as a means of testing and revising developing hypotheses. The goal of this methodology is to propose a pipeline which incorporates a high degree of physical accuracy whilst also being practically applicable in a conventional archaeological research setting. The methodology is designed to allow the accurate visualisation of surviving objects and colourants as well as providing reliable methods for the hypothetical reconstruction of elements which no longer survive. The process proposed here is intended to limit the need for specialist recording equipment, utilising existing data and those data which can be collected using widely available technology. It is at present being implemented as part of the 'Statues in Context' project at Herculaneum and will be demonstrated here using the case study of a small area of the head of a painted female statue discovered at Herculaneum in 2006.
Comparison of GARCH, Neural Network and Support Vector Machine in Financial Time Series Prediction
This article applies a GARCH model, instead of an AR or ARMA model, and compares it with standard BP neural networks and SVMs in forecasting four international stock market indices, including two Asian ones. The models were evaluated on five performance criteria. Our experimental results show the superiority of the SVM and GARCH models over the standard BP network in forecasting the four international stock market indices.
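As a rough illustration of the SVM side of such a comparison, here is a minimal sketch assuming scikit-learn, with synthetic stand-in returns; the lag window, kernel and hyperparameters are illustrative choices, not the paper's settings:

```python
# One-step-ahead forecasting with support vector regression on lagged
# returns (synthetic data; a real study would use actual index series).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 500)            # stand-in for index returns

LAGS = 5                                       # illustrative window length
X = np.array([returns[i:i + LAGS] for i in range(len(returns) - LAGS)])
y = returns[LAGS:]

split = int(0.8 * len(y))
model = SVR(kernel="rbf", C=1.0, epsilon=0.001).fit(X[:split], y[:split])
pred = model.predict(X[split:])

# One of many possible evaluation criteria: root mean squared error.
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"RMSE: {rmse:.6f}")
```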
Identifying Psychological Theme Words from Emotion Annotated Interviews
Recent achievements in Natural Language Processing (NLP) and Psychology pose the challenge of identifying the insights behind emotions. In the present study, we identify different psychology-related theme words while analyzing emotions in the interview data of the ISEAR (International Survey of Emotion Antecedents and Reactions) research group. Primarily, we developed a Graphical User Interface (GUI) to generate visual graphs for analyzing the impact of emotions with respect to the different background, behavioral and physiological variables available in the ISEAR dataset. We discuss some of the interesting results observed from the generated visual graphs. In addition, different text clusters are identified from the interview statements by selecting individual variables as well as different combinations of them. Such textual clusters are used not only for retrieving the psychological theme words but also for classifying the theme words into their respective emotion classes. In order to retrieve the psychological theme words from the text clusters, we developed a rule-based baseline system employing a unigram-based keyword spotting technique. The system was evaluated using a Top-n ranking strategy (where n = 10, 20 or 30 most frequent theme words). Overall, the system achieves average F-scores of .42, .32, .36, .42, .35, .40 and .40 in identifying theme words for the Joy, Anger, Disgust, Fear, Guilt, Sadness and Shame emotion classes, respectively.
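A minimal sketch of the unigram keyword-spotting and Top-n ranking idea (the stopword list, sample statements and gold set here are stand-ins, not ISEAR data):

```python
from collections import Counter

STOPWORDS = {"the", "a", "i", "was", "my", "to", "of", "and", "when", "an"}

def top_n_theme_words(statements, n=10):
    # Rank non-stopword unigrams by frequency within one emotion cluster.
    counts = Counter(w for s in statements for w in s.lower().split()
                     if w.isalpha() and w not in STOPWORDS)
    return [w for w, _ in counts.most_common(n)]

def f_score(predicted, gold):
    # Compare spotted theme words against a gold list.
    tp = len(set(predicted) & set(gold))
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

joy = ["when i passed my exam", "meeting an old friend again"]
found = top_n_theme_words(joy, n=5)
print(found, f_score(found, ["exam", "friend"]))
```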
Multisymplectic Spectral Methods for the Gross-Pitaevskii Equation
Recently, Bridges and Reich introduced the concept of multisymplectic spectral discretizations for Hamiltonian wave equations with periodic boundary conditions [5]. In this paper, we show that the 1D nonlinear Schrödinger equation and the 2D Gross-Pitaevskii equation are multisymplectic and derive multisymplectic spectral discretizations of these systems. The effectiveness of the discretizations is numerically tested using initial data for multi-phase solutions.
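For reference, the two model equations in one common normalisation (signs, coefficients and the trap potential V vary across the literature, so treat these as representative forms rather than the paper's exact ones):

```latex
% 1D cubic nonlinear Schrodinger equation:
i\,\psi_t + \psi_{xx} + 2\,|\psi|^{2}\psi = 0
% 2D Gross-Pitaevskii equation with trap potential V(x,y) and coupling g:
i\,\psi_t = -\tfrac{1}{2}\,\Delta\psi + V(x,y)\,\psi + g\,|\psi|^{2}\psi
```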
Relational Abstract Interpretation of Higher Order Functional Programs (extended abstract)
Most applications of the abstract interpretation framework [2] to the analysis of functional programs use functions on abstract values to approximate functions, thus assuming that functions may be called at all arguments. When the abstract domain is finite, this approach can easily be generalized to higher order functional languages, as shown for example by [1]. In practice this leads to combinatorial explosion problems, as observed, for example, in strictness analysis of higher order functional languages.
Speech training systems using lateral shapes of vocal tract and F1-F2 diagram for hearing-impaired children
Three speech training systems for hearing-impaired children were designed and constructed using a minicomputer and a microprocessor. The first system displays the lateral shape of the vocal tract for each vowel estimated from the speech sound. In this system, three storages are prepared. One of them can be used to store a reference shape which may be estimated from a teacher's voice or a prepared standard shape. A child can articulate seeing the reference shape and re-articulate comparing his own first articulation with the reference shape. The system can also compare the estimated shape with the reference one and produce instructions with animated cartoons which show where any articulatory defects exist. The second system displays successively the lateral shapes for articulation of any phoneme sequences containing consonants. The third system displays the first two formant frequencies extracted from speech as a spot on the F1-F2 plane, where the regions of vowels are shown in color. Preliminary use of these devices has shown that they complement each other to form a more effective system of speech training.
Knowledge Engineering for Affective Bi-Modal Interaction in Mobile Devices
This paper focuses on knowledge engineering for the development of a system that provides affective interaction in mobile devices. The system bases its inferences about users' emotions on user input evidence from the keyboard and the microphone of the mobile device. For this purpose different experimental studies have been conducted with the participation of mobile users and human experts. The experiments' aim was twofold. They aimed at revealing the criteria that are taken into account in each mode for emotion recognition as well as their weight of importance. The results of the studies are further used for the application of a multi-criteria decision making model.
Link-time compaction of MIPS programs
Embedded systems often have limited amounts of available memory, thus encouraging the development of compact programs. This paper presents a link-time program compactor for the embedded MIPS architecture. The application of several important data flow and control flow analyses and the related program transformations at link-time is discussed and evaluated for a collection of typical embedded applications compiled against the uClibc library targeted at the embedded market. With the presented link-time compactor, code size reductions of up to 27% and speedups of up to 17% are obtained.
Leveraging legacy code to deploy desktop applications on the web
Xax is a browser plugin model that enables developers to leverage existing tools, libraries, and entire programs to deliver feature-rich applications on the web. Xax employs a novel combination of mechanisms that collectively provide security, OS-independence, performance, and support for legacy code. These mechanisms include memory-isolated native code execution behind a narrow syscall interface, an abstraction layer that provides a consistent binary interface across operating systems, system services via hooks to existing browser mechanisms, and lightweight modifications to existing tool chains and code bases. We demonstrate a variety of applications and libraries from existing code bases, in several languages, produced with various tool chains, running in multiple browsers on multiple operating systems. With roughly two person-weeks of effort, we ported 3.3 million lines of code to Xax, including a PDF viewer, a Python interpreter, a speech synthesizer, and an OpenGL pipeline.
A pedestrian navigation method for user's safe and easy wayfinding
In recent years, most mobile phones have offered pedestrian navigation guidance. It has been reported that users sometimes feel anxiety because of the low accuracy of position estimation, especially in urban areas, and delays in information updating. In order to reduce this anxiety, this study proposes a route planning algorithm which weighs the user's difficulty (or ease) of locating their own current position as well as the total physical distance of routes. The difficulty is estimated by valuation functions based on the "recognizability" and "visibility" of landmarks. An experimental study was conducted in a real situation using a prototype system to examine and refine the model for optimal route planning. As a result, a modified model is proposed as a promising route planning method for easy user wayfinding.
Word pairs in language modeling for information retrieval
Previous language modeling approaches to information retrieval have focused primarily on single terms. The use of bigram models has been studied, but the restriction on word order and adjacency may not be justified for information retrieval. We propose a new language modeling approach to information retrieval that incorporates lexical affinities, or pairs of words that occur near each other, without a constraint on word order. The use of compound terms in the vector space model has been shown to outperform the vector model with only single terms (Nie & Dufort, 2002). We explore the use of compound terms in a language modeling approach, and compare our results with the vector space model, and unigram and bigram language model approaches.
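A sketch of what lexical affinity extraction might look like: unordered word pairs co-occurring within a fixed window, with no adjacency or order constraint (window size and tokenisation are illustrative):

```python
from collections import Counter

def affinity_counts(tokens, window=5):
    # Count order-free pairs of words occurring within `window` positions.
    pairs = Counter()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + window]:
            if v != w:
                pairs[tuple(sorted((w, v)))] += 1
    return pairs

doc = "language models for information retrieval use word pairs".split()
print(affinity_counts(doc, window=4).most_common(3))
```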
Leakage-Resilient spatial encryption
Spatial encryption is a generic public-key cryptosystem where vectors play the role of public keys and secret keys are associated to affine spaces. Any secret key associated to a space can decrypt all ciphertexts encrypted for vectors in that space, and the delegation relation is defined by subspace inclusion. Though several constructions of spatial encryption schemes have been proposed in the literature, none of them are known to remain secure in the leakage-resilient setting, in which the adversary may be capable of learning limited additional information about the master secret key and other secret keys in the system. In this paper, we propose the first spatial encryption scheme achieving leakage resilience in the standard model, based on existing static assumptions over bilinear groups of composite order. Our new scheme is based on the leakage-resilient HIBE scheme of Lewko, Rouselakis, and Waters (TCC 2011) and can be seen as a generalization of the Moriyama-Doi spatial encryption scheme to the leakage-resilient setting.
On system rollback and totalized fields: An algebraic approach to system change
In system operations the term rollback is often used to imply that arbitrary changes can be reversed, i.e. 'rolled back', from an erroneous state to a previously known acceptable state. We show that this assumption is flawed and discuss error-correction schemes based on absolute rather than relative change.

Insight may be gained by relating change management to the theory of computation. To this end, we reformulate the previously-defined 'convergent change operators' of Burgess into the language of groups and rings. We show that, in this form, the problem of rollback from a convergent operation becomes equivalent to that of 'division by zero' in computation. Hence, we discuss how recent work by Bergstra and Tucker on zero-totalized fields helps to clear up long-standing confusion about the options for 'rollback' in change management.
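A toy illustration of why a convergent operator admits no rollback (a deliberately simplified stand-in for Burgess's operators, not their algebraic formulation):

```python
# A convergent operator maps every state to the desired state: it is
# idempotent but many-to-one, so no inverse "rollback" exists.
def converge_to(desired):
    return lambda state: desired

C = converge_to("configured")
for before in ("broken", "half-configured", "configured"):
    after = C(before)
    assert C(after) == after        # idempotent: C(C(x)) == C(x)
    print(before, "->", after)      # many states collapse to one

# Inverting C would mean choosing among all preimages of "configured",
# just as x * 0 == 0 leaves x unrecoverable from the product.
```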
Model-Driven Strategic Awareness: From a Unified Business Strategy Meta-Model (UBSMM) to Enterprise Architecture
Business strategy should be well understood in order to support an enterprise to achieve its vision and to define an architecture supporting that vision. While business views are identified in many Enterprise Architecture (EA) proposals, business strategy formulations from the area of Strategic Management are overlooked. Thus, IT solutions cannot be traced back to business strategy in a clear and unambiguous way. Our intended proposal, a Unified Business Strategy Meta-Model (UBSMM), aims at establishing such a link. UBSMM is a formalization of the integration of known business strategy formulations with precise semantics enabling its model-level usage to provide strategic awareness to Enterprise Architecture. In this paper we present the development process of UBSMM, and further, we propose conceptual relationships towards Enterprise Architecture (EA).
FTP Mirror Tracker: A Few Steps Towards URN
FTP Mirror Tracker is a software package (written in Perl and C++) that enables transparent, user-controlled redirection to the nearest anonymous FTP mirror sites that are exact replicas of the original source. This redirection can be achieved by using a Web Cache server or by making HTTP requests to the FTP Mirror Tracker directly. The Mirror Tracker also has internal URN support and can be used as a URN resolver for FTP requests. Underlying the system is a MySQL database recording FTP mirror site details. In this report we explain how this database is constructed, and show how it may be used - directly by end users, and under the policy based control of Web Cache and mirror service administrators.
Information Systems Uncertainty Design and Implementation Combining: Rough, Fuzzy, and Intuitionistic Approaches
There are a number of alternative techniques for dealing with uncertainty. Here we discuss rough set, fuzzy rough set, and intuitionistic rough set approaches and how to incorporate uncertainty management using them in the relational database model. The impacts of rough set techniques on fundamental database concepts such as functional dependencies and information theory are also considered.
Breast Cancer Identification Based on Thermal Analysis and a Clustering and Selection Classification Ensemble
Breast cancer is the most common form of cancer in women. Early diagnosis is necessary for effective treatment and therefore of crucial importance. Medical thermography has been demonstrated to be an effective and inexpensive method for detecting breast cancer, in particular in early stages and in dense tissue. In this paper, we propose a medical decision support system based on analysing bilateral asymmetries in breast thermograms. The underlying data is imbalanced, as the number of benign cases significantly exceeds that of malignant ones, which leads to problems for conventional pattern recognition algorithms. To address this, we propose an ensemble classifier system based on the idea of Clustering and Selection. The feature space, which is derived from a series of image symmetry features, is partitioned in order to decompose the problem into a set of simpler decision areas. We then delegate a locally competent classifier to each of the generated clusters. The set of predictors is composed of both standard models and models dedicated to imbalanced classification, so that we are able to employ a specialised classifier for clusters that show high class imbalance, while maintaining high specificity for other clusters. We demonstrate that our method provides excellent classification performance and that it statistically outperforms several state-of-the-art ensembles dedicated to imbalanced problems.
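A minimal sketch of the Clustering and Selection idea, assuming scikit-learn and synthetic stand-in features (a real system would use the image symmetry features and include imbalance-dedicated models among the local classifiers):

```python
# Partition the feature space with k-means, then delegate each cluster
# to its own locally trained classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))                   # stand-in feature vectors
y = (X[:, 0] + rng.normal(size=600) > 0).astype(int)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
local = {c: LogisticRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
         for c in range(3)}

def predict(x):
    c = km.predict(x.reshape(1, -1))[0]         # route to the local expert
    return local[c].predict(x.reshape(1, -1))[0]

print(predict(X[0]), y[0])
```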
Automated Object Identification and Position Estimation for Airport Lighting Quality Assessment
The development of an automated system for the quality assessment of aerodrome ground lighting (AGL), in accordance with associated standards and recommendations, is presented. The system is composed of an image sensor, placed inside the cockpit of an aircraft to record images of the AGL during a normal descent to an aerodrome. A model-based methodology is used to ascertain the optimum match between a template of the AGL and the actual image data in order to calculate the position and orientation of the camera at the instant the image was acquired. The camera position and orientation data are used, along with the pixel grey level for each imaged luminaire, to estimate a value for the luminous intensity of a given luminaire. This can then be compared with the expected brightness for that luminaire to ensure it is operating to the required standards. As such, a metric for the quality of the AGL pattern is determined. Experiments on real image data are presented to demonstrate the application and effectiveness of the system.
Quality Assessment on User Generated Image for Mobile Search Application
Quality-specified image retrieval helps improve the user experience in mobile search and social media sharing. However, models for evaluating the quality of user generated images, which are popular in social media sharing, remain unexplored. In this paper, we propose a scheme for quality assessment of user generated images. The scheme comprises four attribute dimensions: intrinsic quality, favorability, relevancy and accessibility. Each dimension is defined and modeled to pool a final quality score for a user generated image. The proposed scheme can reveal the quality of user generated images in a comprehensive manner. Experimental results show that the scores obtained by our scheme correlate highly with the benchmark data. Therefore, our scheme is suitable for quality-specified image retrieval in mobile applications.
Traveling wave solutions of the n-dimensional coupled Yukawa equations
We discuss traveling wave solutions to the Yukawa equations, a system of nonlinear partial differential equations which has applications to meson–nucleon interactions. The Yukawa equations are converted to a six-dimensional dynamical system, which is then studied for various values of the wave speed and mass parameter. The stability of the solutions is discussed, and the method of competitive modes is used to describe parameter regimes for which chaotic behaviors may appear. Numerical solutions are employed to better demonstrate the dependence of traveling wave solutions on the physical parameters in the Yukawa model. We find a variety of interesting behaviors in the system, a few of which we demonstrate graphically, which depend upon the relative strength of the mass parameter to the wave speed as well as the initial data.
Multi-layer topology preserving mapping for K-means clustering
In this paper, we investigate the multi-layer topology preserving mapping for K-means. We present a Multi-layer Topology Preserving Mapping (MTPM) based on the idea of deep architectures. We demonstrate that the MTPM output can be used to discover the number of clusters for K-means and initialize the prototypes of K-means more reasonably. Also, K-means clusters the data based on the discovered underlying structure of the data by the MTPM. The standard wine data set is used to test our algorithm. We finally analyse a real biological data set with no prior clustering information available.
A general semantic analyser for data base access
The paper discusses the design principles and current status of a natural language front end for access to data bases. It is based on the use, first, of a semantically-oriented question analyser exploiting general, language-wide semantic categories and patterns, rather than data base-specific ones; and, second, of a data base-oriented translation component for obtaining search specifications from the meaning representations of questions derived by the analyser. This approach is motivated by the desire to reduce the effort of providing data base-specific material for the front end, by the belief that a general analyser is well suited to the "casual" data base user, and by the assumption that the rich semantic apparatus used will be both adequate as a means of analysis and appropriate as a tool for linking the characterisations of input and data language items. The paper describes this approach in more detail, with emphasis on the existing, tested analyser.
Interval Abstraction Refinement for Model Checking of Timed-Arc Petri Nets
State-space explosion is a major obstacle in the verification of time-critical distributed systems. An important factor with a negative influence on the tractability of the analysis is the size of the constants that clocks are compared to. This problem is particularly pronounced in explicit state-space exploration techniques. We suggest an approximation method for reducing the size of the constants present in the model. The proposed method is developed for Timed-Arc Petri Nets and creates an under-approximation or an over-approximation of the model behaviour. The verification of approximated Petri net models can be considerably faster, but it does not in general guarantee conclusive answers. We implement the algorithms within the open-source model checker TAPAAL and demonstrate on a number of experiments that our approximation techniques often result in a significant speed-up of the verification.
Kernel PLS variants for regression
We focus on covariance criteria for finding a suitable subspace for regression in a reproducing kernel Hilbert space: kernel principal component analysis, kernel partial least squares and kernel canonical correlation analysis, and we demonstrate how this fits within a more general context of subspace regression. For the kernel partial least squares case some variants are considered and the methods are illustrated and compared on a number of examples.
Two notes from experimental study on image steganalysis
In recent years, several advanced methods for image steganalysis have been proposed, and during this research some concerns have been increasingly addressed by steganalyzers. In this paper, we focus on several of these concerns. The first is how to utilize an SVM classifier in practical steganalysis: we use clustering analysis to divide the training samples and train several SVMs for detecting stego images. In this part we also discuss building an image database that can be used for fairly evaluating steganography and steganalysis. The second is how to design a proper classifier for steganalysis, especially how to take information from cover/stego image pairs into account. We discuss several notions regarding these two concerns.
Experimental Validation of a Rapid, Adaptive Robotic Assessment of the MCP Joint Angle Difference Threshold
This paper presents an experimental evaluation of a rapid, adaptive assessment of the difference threshold (DL) of passive metacarpophalangeal index finger joint flexion using a robotic device. Parameter Estimation by Sequential Testing (PEST) is compared to the method of constant stimuli (MOCS) using a two-alternative forced-choice paradigm. The pilot study with 13 healthy subjects provided DLs within similar ranges for MOCS and PEST, averaging 2.15° ± 0.77° and 1.73° ± 0.78°, respectively, in accordance with the literature. However, no significant correlation was found between the two methods (r(11) = 0.09, p = 0.762). The average number of trials required for PEST to converge was 58.7 ± 17.6, significantly lower than the 120 trials for MOCS (p < 0.001), leading to an assessment time of under 15 min. These results suggest that rapid, adaptive methods, such as PEST, could be successfully implemented in novel robotic tools for clinical assessment of sensory deficits.
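For intuition, here is a simplified adaptive staircase in the spirit of such methods; real PEST uses the Wald sequential likelihood-ratio test and its own step rules, and the simulated subject below is entirely hypothetical:

```python
import random

def detected(delta, true_dl=2.0):
    # Simulated 2AFC subject: 50% guessing floor, 75% correct at delta = DL.
    p = 0.5 + 0.5 * delta / (delta + true_dl)
    return random.random() < p

delta, step, streak = 8.0, 2.0, 0       # degrees; illustrative start values
for trial in range(60):
    if detected(delta):
        streak += 1
        if streak == 2:                 # 2-down-1-up targets ~71% correct
            delta, streak = max(0.1, delta - step), 0
    else:
        delta += step
        step = max(0.25, step / 2)      # shrink step after a miss (simplified)
        streak = 0
print(f"converged near delta = {delta:.2f} deg")
```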
Monetization as a Motivator for the Freemium Educational Platform Growth
The paper describes user behavior following the introduction of monetization in a freemium educational online platform. Monetization resulted in alternative system growth mechanisms, causing a viral increase in the number of users. Given different options, users choose the most advantageous and simplest ones. The K-factor was used as the indicator of user base growth; the weekly K-factor almost doubled as a result of introducing monetization. Monetization and viral growth can be both competing and complementary mechanisms for system growth.
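For reference, the standard viral-growth K-factor is the product of invitations per user and the invitation conversion rate; the paper's weekly metric may be defined differently, and these numbers are purely illustrative:

```python
invites_per_user = 4.0
conversion_rate = 0.15
k_factor = invites_per_user * conversion_rate
print(k_factor)   # K > 1 would mean each user recruits more than one more
```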
Learning of abstractions from structural descriptions of pictures
The acquisition of concepts induced by structural descriptions of pictures is discussed and a representation scheme is presented which allows the construction of various abstractions based on different points of views and their storage in a simulated associative memory.
Hypergraph Transversal Computation with Binary Decision Diagrams
We study hypergraph transversal computation: given a hypergraph, the problem is to generate all of its minimal transversals. This problem is related to many applications in computer science and various algorithms have been proposed. We present a new efficient algorithm using the compressed data structures BDDs and ZDDs, and we analyze its time complexity. By conducting computational experiments, we show that our algorithm is highly competitive with existing algorithms.
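To fix the problem definition, here is a brute-force reference implementation for small inputs; the paper's contribution is doing this efficiently with BDDs/ZDDs, which this sketch does not attempt:

```python
from itertools import chain, combinations

def minimal_transversals(hyperedges):
    # Enumerate vertex subsets by size; keep those hitting every edge
    # that contain no smaller transversal already found.
    universe = sorted(set(chain.from_iterable(hyperedges)))
    found = []
    for r in range(len(universe) + 1):
        for t in combinations(universe, r):
            if all(set(t) & e for e in hyperedges) and \
               not any(set(m) <= set(t) for m in found):
                found.append(t)
    return found

print(minimal_transversals([{1, 2}, {2, 3}, {1, 3}]))
# -> [(1, 2), (1, 3), (2, 3)]
```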
A Comparative Study of Scrum and Kanban Approaches on a Real Case Study Using Simulation
We present the application of software process modeling and simulation, using an agent-based approach, to a real case study of software maintenance. The original process used PSP/TSP; it spent a large amount of time estimating maintenance requests in advance, and needed to be greatly improved. To this purpose, a Kanban system was successfully implemented, which proved able to substantially improve the process without giving up PSP/TSP. We customized the simulator and, using input data with the same characteristics as the real ones, we were able to obtain results very similar to those of the processes of the case study, in particular of the original process. We also simulated, using the same input data, the possible application of the Scrum process to the same data, showing results comparable to the Kanban process.
Acquiring entailment pairs across languages and domains: a data analysis
Entailment pairs are sentence pairs of a premise and a hypothesis, where the premise textually entails the hypothesis. Such sentence pairs are important for the development of Textual Entailment systems. In this paper, we take a closer look at a prominent strategy for their automatic acquisition from newspaper corpora, pairing first sentences of articles with their titles. We propose a simple logistic regression model that incorporates and extends this heuristic and investigate its robustness across three languages and three domains. We manage to identify two predictors which predict entailment pairs with a fairly high accuracy across all languages. However, we find that robustness across domains within a language is more difficult to achieve.
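A sketch of the modelling setup, assuming scikit-learn; the two features below (word overlap and length ratio) are hypothetical stand-ins for whichever two predictors the analysis actually identified:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(title, first_sentence):
    # Two toy predictors for whether the sentence entails the title.
    t, s = set(title.lower().split()), set(first_sentence.lower().split())
    overlap = len(t & s) / len(t) if t else 0.0
    ratio = len(title) / max(len(first_sentence), 1)
    return [overlap, ratio]

X = np.array([features("markets fall", "stock markets fell sharply today"),
              features("markets fall", "the weather was sunny in rome")])
y = np.array([1, 0])                    # entailment pair vs. not
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```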
The role of the community in a technical support community: a case study
Resource tagging has become an integral and important feature in enabling community users to easily access relevant content in a timely manner. Various methods have been proposed and implemented to optimize the identification of and access to tags used to characterize resources across different types of social web-based communities. While these user-focused tagging methods have shown promise in their limited application, they do not transfer well to internal business applications where the cost, time, tagged content, and user resources needed to implement them is prohibitive. This paper provides a case study of the process, tools, and methods used to engage users in the development and management of a tag taxonomy (folksontology) used to characterize content in an internal technical support community in the Cisco Global Technology Center.
PSO-Based Design of RF Integrated Inductor
This paper addresses an optimization-based approach to the design of RF integrated inductors. The presented methodology deals with the complexity of the design problem by formulating it as a multi-objective optimization. The multi-modal nature of the underlying functions, combined with the need to explore design trade-offs, leads to the use of niching methods. This allows exploring not only the best trade-off solutions lying on the Pareto-optimum surface but also the quasi-optimum solutions that would otherwise be discarded. In this paper we take advantage of the niching properties of the lbest PSO algorithm with ring topology to devise a simple optimizer able to find the local optima. For the efficiency of the process, analytical models are used for the passive/active devices. In spite of the use of physics-based analytical expressions for the evaluation of the lumped elements, the variability of the process parameters is ignored in the optimization stage due to the significant computational burden it involves. Thus, in the final stage both the Pareto-optimum solutions and the quasi-optimum solutions are evaluated with respect to their sensitivity to process parameter variations.
Context-Aware Staged Configuration of Process Variants@Runtime
Process-based context-aware applications are increasingly becoming more complex and dynamic. Besides the large sets of process variants to be managed in such dynamic systems, process variants need to be context sensitive in order to accommodate new user requirements and intrinsic complexity. This paradigm shift forces us to defer decisions to runtime, where process variants must be customized and executed based on a recognized context. However, existing approaches do not defer the entire process variant configuration and execution so that decisions at subsequent variation points can be automated at runtime. In this paper, we present a holistic methodology to automatically resolve process variability at runtime. The proposed solution performs a staged configuration considering static and dynamic context data to accomplish effective decision making. We demonstrate our approach by exemplifying a storage operation process in a smart logistics scenario. Our evaluation demonstrates the performance and scalability of our methodology.
Inserting rhetorical predicates for quasi-abstractive summarization
We investigate the problem of inserting rhetorical predicates (e.g. "to present", "to discuss", "to indicate", "to show") during non-extractive summary generation and compare various algorithms for the task, which we trained on a set of human-written summaries. The algorithms, which use a set of features previously introduced in the summarization literature, achieve between 57% and 62% accuracy depending on the machine learning algorithm used. We draw conclusions with respect to the use of context during predicate prediction.
GTP supertrees from unrooted gene trees: linear time algorithms for NNI based local searches
Gene tree parsimony (GTP) problems infer species supertrees from a collection of rooted gene trees that are confounded by evolutionary events like gene duplication, gene duplication and loss, and deep coalescence. These problems are NP-complete, and consequently, they are often addressed by effective local search heuristics that perform a stepwise search of the tree space, where each step is guided by an exact solution to an instance of a local search problem. Still, GTP problems require rooted input gene trees; in practice, however, most phylogenetic methods infer unrooted gene trees, which may be difficult to root correctly. In this work, we (i) define the first local NNI search problems to heuristically solve the GTP equivalents for unrooted input gene trees, called unrooted GTP problems, and (ii) describe linear time algorithms for these local search problems. We implemented the first NNI based local search heuristics for unrooted GTP problems, which enable analyses for thousands of genes. Further, analysis of a large plant data set using the unrooted NNI search provides support for an intriguing new hypothesis regarding the evolutionary relationships among major groups of flowering plants.
Distributed computing in sensor networks using multi-agent systems and code morphing
We propose and demonstrate a parallel and distributed runtime environment for multi-agent systems that provides spatial agent migration by employing code morphing. The application scenario focuses on sensor networks and low-power, resource-aware single System-On-Chip designs. An agent approach provides stronger autonomy than a traditional object or remote-procedure-call based approach. Agents can decide for themselves which actions are performed, and they are capable of reacting to the environment and other agents with flexible behaviour. Data processing nodes exchange code rather than data to transfer information. Part of an agent's state is preserved within its own program code, which also implements the agent's migration functionality. The practicability of the approach is shown using a simple distributed Sobel filter as an example.
Electronic Medical Record (EMR) Utilization for Public Health Surveillance
Introduction: Public health surveillance systems need to be refined. We intend to use a generic approach for early identification of patients with severe influenza-like illness (ILI) by calculating a score that estimates a patient's disease severity. Accordingly, we built the Intelligent Severity Score Estimation Model (ISSEM), structured so that the inference process reflects experts' decision-making logic. Each patient's disease-severity score is calculated from the numbers of respiratory ICD-9 encounters and of laboratory, radiologic, and prescription-therapeutic orders in the EMR. Other ISSEM components include chronic disease evidence, probability of immunodeficiency, and the provider's general practice-behavior patterns. Results: Sensitivity was determined from 200 randomly selected patients with upper and lower respiratory tract ILI; specificity, from 300 randomly selected patients with URI only. For different age groups, ISSEM sensitivity ranged between 90% and 95%; specificity was 72% to 84%. Conclusion: Our preliminary assessment of ISSEM performance demonstrated 93.5% sensitivity and 77.3% specificity across all age groups.
Load Balancing for Imbalanced Data Sets: Classifying Scientific Artefacts for Evidence Based Medicine
Data skewness is a challenge encountered, in particular, when applying supervised machine learning approaches in various domains, such as healthcare and biomedical information engineering. Evidence Based Medicine (EBM) is a clinical strategy for prescribing treatment based on the current best evidence for individual patients. Clinicians need to query publication repositories in order to find the best evidence to support their decision-making processes. This sophisticated information is materialised in the form of scientific artefacts in scholarly publications, and the automatic extraction of these artefacts is a technical challenge for current generic search engines. Many classification approaches have been proposed for identifying key scientific artefacts in EBM; however, their performance is affected by the imbalanced characteristic of data in this domain. In this paper, we present four data balancing approaches applied in a binary ensemble classifier framework for classifying scientific artefacts in the EBM domain. Our balancing approaches improve the ensemble classifier's F-score by up to 15% for classes of scientific artefacts with extremely low coverage in the domain. In addition, we propose a classifier selection method for choosing the best classifier based on the distributional feature of classes. The resulting classifiers show improved classification performance when compared to state of the art approaches.
Towards intelligent distributed computing : cell-oriented computing
Distributed computing systems are of huge importance in a number of recently established and future functions in computer science. For example, they are vital to banking applications, communication of electronic systems, air traffic control, manufacturing automation, biomedical operation works, space monitoring systems and robotics information systems. As the nature of computing becomes increasingly directed towards intelligence and autonomy, intelligent computation will be the key to all future applications. Intelligent distributed computing will become the base for the growth of an innovative generation of intelligent distributed systems. Nowadays, research centres require the development of architectures for intelligent and collaborative systems; these systems must be capable of solving problems by themselves to save processing time and reduce costs. Building an intelligent style of distributed computing that controls the whole distributed system requires communications based on a completely consistent system. The model of the ideal system to be adopted in building an intelligent distributed computing structure is the human body system, specifically the body's cells. As an artificial and virtual simulation of the high degree of intelligence that controls the body's cells, this chapter proposes a Cell-Oriented Computing model as a solution for accomplishing the desired intelligent distributed computing system.
Improving the optimal bounds for black hole search in rings
In this paper we re-examine the well-known problem of asynchronous black hole search in a ring. It is well known that at least 2 agents are needed and the total number of agents' moves is at least Ω(n log n); solutions indeed exist that allow a team of two agents to locate the black hole with the asymptotically optimal cost of Θ(n log n) moves.

In this paper we first of all determine the exact move complexity of black hole search in an asynchronous ring. In fact, we prove that 3n log_3 n − O(n) moves are necessary. We then present a novel algorithm that allows two agents to locate the black hole with at most 3n log_3 n + O(n) moves, improving the existing upper bounds and matching the lower bound up to the constant of proportionality. Finally we show how to modify the protocol so as to achieve asymptotically optimal time complexity Θ(n), still with 3n log_3 n + O(n) moves; this improves upon all existing time-optimal protocols, which require O(n^2) moves. This protocol is the first that is optimal with respect to all three complexity measures: size (number of agents), cost (number of moves) and time; in particular, its cost and size complexities match the lower bounds up to the constant.
WorkCellSimulator: a 3d simulator for intelligent manufacturing
This paper presents WorkCellSimulator, a software platform for managing an environment for the simulation of robot tasks. It uses advanced artificial intelligence algorithms to define the production process, controlling one or more robot manipulators and the machinery present in the work cell. The main goal of this software is to assist the user in defining customized production processes involving specific automated cells. It has been developed by IT+Robotics, a spin-off company of the University of Padua founded in 2005 through collaboration between young researchers in the field of Robotics and a group of professors from the Department of Information Engineering, University of Padua.
How Preprocessing Affects Unsupervised Keyphrase Extraction
Unsupervised keyphrase extraction techniques generally consist of candidate phrase selection and ranking. Previous studies treat candidate phrase selection and ranking as a whole, while the effectiveness of identifying candidate phrases and its impact on ranking algorithms have remained unexplored. This paper surveys common candidate selection techniques and analyses the effect of different candidate selection approaches on the performance of ranking algorithms. Our evaluation shows that candidate selection approaches with better coverage and accuracy can boost the performance of the ranking algorithms.
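One common candidate-selection heuristic, sketched below: take maximal runs of non-stopword tokens as candidate phrases before any ranking step (the stopword list is a stand-in; POS-pattern and n-gram selectors are other options such a survey might cover):

```python
STOPWORDS = {"of", "the", "a", "an", "on", "and", "for", "with", "to", "is"}

def candidate_phrases(text, max_len=3):
    # Collect all sub-spans (up to max_len words) of maximal non-stopword runs.
    cands, run = set(), []
    for tok in text.lower().split() + ["the"]:   # sentinel flushes last run
        if tok.isalpha() and tok not in STOPWORDS:
            run.append(tok)
        else:
            for i in range(len(run)):
                for j in range(i + 1, min(i + 1 + max_len, len(run) + 1)):
                    cands.add(" ".join(run[i:j]))
            run = []
    return cands

print(candidate_phrases("unsupervised keyphrase extraction on web documents"))
```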
COCA filters: co-occurrence aware bloom filters
We propose an indexing data structure based on a novel variation of Bloom filters. Signature files have been proposed in the past as a method to index large text databases, though they suffer from a high false positive error problem. In this paper we introduce COCA filters, a new type of Bloom filter which exploits the co-occurrence probability of words in documents to reduce the false positive error. We show experimentally that by using this technique we can reduce the false positive error by up to 21.6 times for the same index size. Furthermore, Bloom filters can be replaced by COCA filters wherever the co-occurrence of any two members of the universe is identifiable.
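For context, a plain Bloom filter over document words is sketched below; a COCA filter additionally tunes which bit positions words share so that frequently co-occurring words collide with each other rather than with unrelated words, a step omitted here:

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, word):
        # k hash-derived bit positions for a word.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{word}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, word):
        for p in self._positions(word):
            self.bits |= 1 << p

    def __contains__(self, word):       # may return false positives
        return all((self.bits >> p) & 1 for p in self._positions(word))

sig = BloomFilter()
for w in "bloom filters index large text databases".split():
    sig.add(w)
print("filters" in sig, "zebra" in sig)
```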
SPIN Query Tools for De-identified Research on a Humongous Database
The Shared Pathology Informatics Network (SPIN), a research initiative of the National Cancer Institute, will allow for the retrieval of more than 4 million pathology reports and specimens. In this paper, we describe the special query tool developed for the Indianapolis/Regenstrief SPIN node, integrated into the ever-expanding Indiana Network for Patient Care (INPC). This query tool allows for the retrieval of de-identified data sets using complex logic and auto-coded final diagnoses, and intrinsically supports multiple types of statistical analyses. The new SPIN/INPC database represents a new generation of the Regenstrief Medical Record system: a centralized but federated system of repositories.
AST Pre-Processing For The Sliding Window Method Using Genetic Algorithms.
Modular exponentiation is a cornerstone operation in several public-key cryptosystems such as RSA. It is performed using successive modular multiplications, which are time consuming for large operands. Accelerating public-key cryptography in software or hardware requires reducing the total number of modular multiplications. This paper introduces a novel idea based on genetic algorithms for evolving an optimal addition chain that performs the precomputations necessary in window-based modular exponentiation methods. The obtained addition chain allows one to perform exponentiation with a minimal number of multiplications and hence to implement the exponentiation operation efficiently. We compare our results with those obtained using the algorithm of Brun.
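For concreteness, here is standard sliding-window modular exponentiation with a naively built table of odd powers; the paper's genetic algorithm searches for a short addition chain to build exactly this precomputation table more cheaply:

```python
def window_pow(base, exp, mod, w=4):
    # Precompute odd powers base^1, base^3, ..., base^(2^w - 1).
    table = {1: base % mod}
    sq = base * base % mod
    for i in range(3, 1 << w, 2):
        table[i] = table[i - 2] * sq % mod

    result, bits, i = 1, bin(exp)[2:], 0
    while i < len(bits):
        if bits[i] == "0":
            result = result * result % mod      # square on a zero bit
            i += 1
        else:
            j = min(i + w, len(bits))
            while bits[j - 1] == "0":           # window must end in a 1
                j -= 1
            for _ in range(j - i):
                result = result * result % mod
            result = result * table[int(bits[i:j], 2)] % mod
            i = j
    return result

assert window_pow(7, 123456789, 1000000007) == pow(7, 123456789, 1000000007)
```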
UNT: A Supervised Synergistic Approach to Semantic Text Similarity
This paper presents the systems with which we participated in the Semantic Text Similarity task at SEMEVAL 2012. Based on prior research in semantic similarity and relatedness, we combine various methods in a machine learning framework. The three variations submitted during the task evaluation period ranked 5th, 9th and 14th among the 89 participating systems. Our evaluations show that corpus-based methods display a more robust behavior on the training data, yet combining a variety of methods allows a learning algorithm to reach a better decision than that achievable by any of the individual parts.
Robustness of prosodic features to voice imitation
Paper presented at the 9th Annual Conference of the International Speech Communication Association, held in Brisbane (Australia), 22-26 September 2008.
PIQASso: PIsa Question Answering System
PiQASso is a Question Answering system based on a combination of modern IR techniques and a series of semantic filters for selecting paragraphs containing a justifiable answer. Semantic filtering is based on several NLP tools, including a dependency-based parser, a POS tagger, an NE tagger and a lexical database. Semantic analysis of questions is performed in order to extract keywords used in retrieval queries and to detect the expected answer type. Semantic analysis of retrieved paragraphs includes checking for the presence of entities of the expected answer type and extracting logical relations between words. A paragraph is considered to justify an answer if similar relations are present in the question. When no answer passes the filters, the process is repeated, applying further levels of query expansion in order to increase recall. We discuss results and limitations of the current implementation.
Sleep musicalization: automatic music composition from sleep measurements
We introduce data musicalization as a novel approach to aid analysis and understanding of sleep measurement data. Data musicalization is the process of automatically composing novel music, with given data used to guide the process. We present Sleep Musicalization, a methodology that reads a signal from a state-of-the-art mattress sensor, uses highly non-trivial data analysis methods to measure sleep from the signal, and then composes music from the measurements. As a result, Sleep Musicalization produces music that reflects the user's sleep during a night and complements visualizations of sleep measurements. The ultimate goal is to help users improve their sleep and well-being. For practical use and later evaluation of the methodology, we have built a public web service at http://sleepmusicalization.net for users of the sleep sensors.
A Distributed Dynamic Mobility Architecture with Integral Cross-Layered and Context-Aware Interface for Reliable Provision of High Bitrate mHealth Services
Mobile health (mHealth) has been receiving more and more attention recently as an emerging paradigm that brings together the evolution of advanced mobile and wireless communication technologies with the vision of "connected health" aiming to deliver the right care in the right place at the right time. However, there are several cardinal problems hampering the successful and widespread deployment of mHealth services from the mobile networking perspective. On one hand, issues of continuous wireless connectivity and mobility management must be solved in future heterogeneous mobile Internet architectures with ever growing traffic demands. On the other hand, Quality of Service (QoS) and Quality of Experience (QoE) must be guaranteed in a reliable, robust and diagnostically acceptable way. In this paper we propose a context- and content-aware, jointly optimized, distributed dynamic mobility management architecture to cope with the future traffic explosion and meet the medical QoS/QoE requirements in varying environments.
From Sequences to Trends
Temporal data can be processed in many ways to extract knowledge from it. Sequential pattern mining reveals frequent subsequences contained in sequences of timestamped records. Analysing the access logs of a web site, for example, makes it possible to discover that "5% of users access the page register.php and then the page help.html". However, sequential patterns cannot capture temporal trends such as "an increase in the number of requests to the registration form is often followed by an increase in requests to the help page a few seconds later". In this article, we propose to extract patterns characterising such frequent evolutions with two algorithms, TED and EVA. We present our approach, implemented and tested on real data.
Traffic observation and situation assessment
The utilization of camera systems for surveillance tasks (e.g. traffic monitoring) has become a standard procedure and has been in use for over 20 years. However, most cameras are operated locally and the data analyzed manually. Locally means here that each camera has a limited field of view and that its image sequences are processed independently of other cameras. To enlarge the observation area and to avoid occlusions and non-accessible areas, multiple-camera systems with overlapping and non-overlapping cameras are used. The joint processing of image sequences from a multi-camera system is a scientific and technical challenge. The processing is divided traditionally into camera calibration, object detection, tracking and interpretation. The fusion of information from different cameras is carried out in the world coordinate system. To reduce the network load, a distributed processing concept can be implemented.

Object detection and tracking are fundamental image processing tasks for scene evaluation. Situation assessments are based mainly on characteristic local movement patterns (e.g. directions and speed), from which trajectories are derived. It is possible to recognize atypical movement patterns of each detected object by comparing local properties of the trajectories. Interactions of different objects can also be predicted with an additional classification algorithm.

This presentation discusses trajectory-based recognition algorithms for atypical event detection in multi-object scenes to obtain area-based types of information (e.g. maps of speed patterns, trajectory curvatures or erratic movements) and shows that two-dimensional areal data analysis of moving objects with multiple cameras offers new possibilities for situational analysis.
Structure and Practice of "Four in One" Hybrid-Practice Teaching Mode
Yunnan Radio and TV University set out to "explore the Open University building model" approved by the State Council in October 2010. After more than two years of trials, the university has built the "Four in One" hybrid practice-teaching model, which integrates networked virtual training, hands-on training inside the school, expanded training outside the school, and individual training with learning packages. It aims to break through the bottleneck of open and distance education. The model has been applied gradually in practice teaching, and it shows positive initial results.
Mobilizing the semantic web with DAML-enabled web services
The Web is evolving from a repository for text and images to a provider of services - both information-providing services, and services that have some effect on the world. Today's Web was designed primarily for human use. To enable reliable, large-scale automated interoperation of services by computer programs or agents, the properties, capabilities, interfaces and effects of Web services must be understandable to computers. In this paper we propose a vision and a partial realization of precisely this. We propose markup of Web services in the DAML family of semantic Web markup languages. Our markup of Web services enables a wide variety of agent technologies for automated Web service discovery, execution, composition and interoperation. We present one logic-based agent technology for service composition, predicated on the use of reusable, task-specific, high-level generic procedures and user-specific customizing constraints.
Exponentially Smoothed Interactive Gaze Tracking Method
Gaze tracking is an aspect of human-computer interaction still growing in popularity. Tracking human eye fixation points can help control user interfaces and may eventually help in interface evaluation or optimization. Unfortunately, professional eye-trackers are very expensive and thus hardly available to researchers and small companies. The paper presents a very effective, low-cost, appearance-based gaze tracking method improved by exponential smoothing. The method achieves very high absolute precision (1 deg) at 20 fps, exploiting a simple HD web camera under reasonable environmental restrictions. The paper describes the results of experimental tests, both static, on absolute gaze point estimation, and dynamic, on gaze-controlled path following.
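The smoothing step itself is one line per coordinate; alpha = 0.3 below is an illustrative value, not the paper's tuned smoothing factor:

```python
def smooth(points, alpha=0.3):
    # s_t = alpha * x_t + (1 - alpha) * s_{t-1}, per coordinate.
    sx, sy = points[0]
    for x, y in points[1:]:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        yield sx, sy

noisy = [(100, 100), (104, 98), (97, 103), (180, 95), (101, 99)]
print(list(smooth(noisy)))      # the outlier frame's effect is damped
```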
Search lessons learned from crossword puzzles
The construction of a program that generates crossword puzzles is discussed. As in a recent paper by Dechter and Meiri, we make an experimental comparison of various search techniques. The conclusions to which we come differ from theirs in some areas - although we agree that directional arc consistency is better than path-consistency or other forms of lookahead, and that backjumping is to be preferred to backtracking, we disagree in that we believe dynamic ordering of the constraints to be necessary in the solution of more difficult problems.
On the structure of recognizable languages of dependence graphs
In the framework of trace theory, a dependence graph represents a behaviour of a distributed system (for example, a Petri net), by analogy with the word/automaton relationship in the sequential case. A recognizable language of dependence graphs thus represents the set of all behaviours of a distributed system satisfying regularity conditions. In this article we characterize the graph languages obtained from such languages by deleting the labels on the vertices.
All brutes are Subhuman: Aristotle and Ockham on private negation
The mediaeval logic of Aristotelian privation, represented by Ockham's exposition of "All S is non-P" as "All S is of a type T that is naturally P and no S is P", is critically evaluated as an account of privative negation. It is argued that there are two senses of privative negation: (1) an intensifier (as in "subhuman"), the dual of Neoplatonic hypernegation ("superhuman"), which is studied in linguistics as an operator on scalar adjectives, and (2) an (often lexicalized) Boolean complement relative to the extension of a privative negation in sense (1) (e.g., "brute"). This second sense, which is the privative negation discussed in modern linguistics, is shown to be Aristotle's. It is argued that Ockham's exposition fails to capture much of the logic of Aristotelian privation due to limitations in the expressive power of the syllogistic.
On The Generalization of Fuzzy Rough Approximation Based on Asymmetric Relation
An asymmetric relation, called a weak similarity relation, is introduced as a more realistic relation in representing the relationship between two elements of data in a real-world application. A conditional probability relation is considered as a concrete example of the weak similarity relation by which a covering of the universe is provided as a generalization of a disjoint partition. A generalized concept of rough approximations regarded as a kind of fuzzy rough set is proposed and defined based on the covering of the universe. Additionally, a more generalized fuzzy rough approximation of a given fuzzy set is proposed and discussed as an alternative to provide interval-valued fuzzy sets. Their properties are examined.
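As a concrete illustration of the asymmetry, one natural conditional probability relation on finite sets normalises the overlap by only one argument (a simplified reading of the construction, with sets standing in for the data elements):

```python
def R(x, y):
    # Degree to which y supports x: overlap normalised by |y| only.
    return len(x & y) / len(y)

a, b = {1, 2, 3, 4}, {3, 4}
print(R(a, b), R(b, a))   # 1.0 vs 0.5 -- the relation is asymmetric
```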
On Lascar rank in non-multidimensional omega-stable theories.
This chapter considers T to be a countable complete ω-stable theory. The notion of "dimension" is used for classes of non-orthogonal regular types over models. T is non-multidimensional if the number μ(T) of dimensions is bounded. Models of non-multidimensional ω-stable theories can be classified by μ(T)-tuples of cardinals. The chapter also presents some preliminaries from stability theory and focuses on the algebra of models of Th(F_c(p^n, ℵ_0)). The chapter also discusses Lascar rank computation and dimensions.
Soft Systems Methodology for Hard Systems Engineering - The Case of Information Systems Development at LIT/INPE/BRAZIL
The Soft Systems Methodology (SSM) was developed to deal with soft systems, systems in which the human components predominate. Any kind of software is a hard system, since technical factors predominate in it. But when the software is a component of an Information System, its success depends heavily on soft aspects. This paper analyzes the potential contribution of SSM to Software Engineering in order to propose a method to support requirements elicitation for the development of Information Systems that helps to understand and consider the human, social and political factors that will influence the system's success. A real situation at the Integration and Testing Laboratory (LIT) of INPE (the Brazilian Institute for Space Research) was used to perform the study and to exemplify the use of the proposed method.
Receive antenna selection for uplink multiuser MIMO systems over correlated Rayleigh fading channels
Channel correlation considerably reduces the sum rate and user capacity of multiuser multi-input multi-output (MU-MIMO) systems. In this paper, receive antenna selection is proposed for uplink MU-MIMO systems to maximize the sum rate and to maintain high user capacity over correlated Rayleigh fading channels. Two antenna selection criteria are presented to trade off computational complexity against performance. The capacity-based selection criterion (CBSC) provides optimal performance at the cost of high complexity compared with the suboptimal norm-based selection criterion (NBSC). Simulation results demonstrate and validate the effectiveness of the proposed method compared with conventional MU-MIMO systems.
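As a rough illustration of the CBSC/NBSC trade-off, the following NumPy sketch selects receive antennas (rows of the channel matrix) either by exhaustive capacity search or by channel norm. The system model, SNR normalization, and correlation structure are simplified assumptions, not the paper's exact setup.

```python
import numpy as np
from itertools import combinations

def capacity(H, snr):
    """Sum rate log2 det(I + snr/Nt * H H^H) for channel H (Nr x Nt)."""
    nr, nt = H.shape
    return np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)).real

def select_cbsc(H, k, snr):
    """Capacity-based selection: exhaustively pick the k receive antennas
    (rows of H) maximizing the sum rate; optimal but combinatorial."""
    best = max(combinations(range(H.shape[0]), k),
               key=lambda s: capacity(H[list(s)], snr))
    return list(best)

def select_nbsc(H, k):
    """Norm-based selection: keep the k rows with the largest channel norms,
    a cheap suboptimal proxy for the capacity criterion."""
    return list(np.argsort(np.linalg.norm(H, axis=1))[-k:])

rng = np.random.default_rng(0)
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
print(select_cbsc(H, 4, snr=10.0), select_nbsc(H, 4))
```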
Conceptual Indexing: Practical Large-Scale AI for Efficient Information Access
Finding information is a problem shared by people and intelligent systems. This paper describes an experiment combining both human and machine aspects in a knowledge-based system to help people find information in text. Unlike many previous attempts, this system demonstrates a substantial improvement in search effectiveness by using linguistic and world knowledge and exploiting sophisticated knowledge representation techniques. It is also an example of practical subsumption technology on a large scale and with domain-independent knowledge. Results from this experiment are relevant to general problems of knowledge-based reasoning with large-scale knowledge bases.
Optional finer granularity in an open learner model
Open learner models (OLMs) available independently of specific tutoring or guidance, such as an intelligent tutoring system may provide, can encourage learners to take greater responsibility for their learning. Our results suggest that, where different OLM granularities exist, finer-grained OLM information can support learners in identifying strengths and weaknesses and in planning and focussing their learning. Learners drew regular comparisons between OLM and domain information, showing the flexibility of interaction to be important.
Engaging learning groups using Social Interaction Strategies
Conversational Agents have been shown to be effective tutors in a wide range of educational domains. However, these agents are often ignored and abused in collaborative learning scenarios involving multiple students. In the work presented here, we design and evaluate interaction strategies motivated by prior research in small group communication. We discuss how such strategies can be implemented in agents. As a first step towards evaluating agents that can interact socially, we report results showing that human tutors employing these strategies are able to cover more concepts with the students, in addition to being rated as better integrated, more likeable and friendlier.
Simultaneously resettable arguments of knowledge
In this work, we study simultaneously resettable arguments of knowledge. As our main result, we show a construction of a constant-round simultaneously resettable witness-indistinguishable argument of knowledge (simresWIAoK, for short) for any NP language. We also show two applications of simresWIAoK: the first constant-round simultaneously resettable zero-knowledge argument of knowledge in the Bare Public-Key Model; and the first simultaneously resettable identification scheme which follows the knowledge extraction paradigm.
Model Development in the UML-based Specification Environment (USE)
The tool USE (UML-based Specification Environment) supports analysts, designers and developers in executing UML models and checking OCL constraints and thus enables them to employ model-driven techniques for software production. USE has been developed since 1998 at the University of Bremen. This paper will discuss to what extent and how USE relates to the questions and topics (model quality, modelling method, model effectiveness, model maintainability) raised for this seminar.
A Scientometrics Study of Rough Sets in Three Decades
Rough set theory has been attracting researchers and practitioners for over three decades. The theory and its applications have experienced unprecedented prosperity, especially in the recent ten years. It is essential to explore and review the progress made in the field of rough sets. Based mainly on the Web of Science database, we analyze the prolific authors, impact authors, impact groups, and the most impactful papers of the past three decades. In addition, we examine rough set development in the recent five years. One of the goals of this article is to use scientometric approaches to study three decades of research in rough sets. We review the historic growth of rough sets and elaborate on the recent development status of this field.
Attaching multiple personal identifiers in X.509 digital certificates
Calls for interoperable and decentralized Electronic Identity Management are rapidly increasing, especially since its contribution to interoperability across the entire "electronic" public sector, to effective information sharing, and to simplified access to electronic services is unquestioned. This paper presents an efficient and user-centric method for storing multiple user identifiers in X.509 digital certificates while preserving their confidentiality, allowing for interoperable user identification in environments where users cannot be identified by an all-embracing unique identifier.
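One plausible way to realize such confidentiality-preserving identifier storage is to place salted hashes of the identifiers in a non-critical certificate extension. The sketch below uses Python's `cryptography` package; the OID, salt handling, and payload encoding are illustrative assumptions, not the scheme actually proposed in the paper.

```python
import datetime, hashlib
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
identifiers = {"tax": "123-45-6789", "health": "H-998877"}   # hypothetical IDs
salt = b"per-user-random-salt"                               # hypothetical salt
# Salted hashes keep the identifiers confidential; a verifier who knows an
# identifier and the salt can still check that it is bound to the certificate.
digests = {k: hashlib.sha256(salt + v.encode()).hexdigest()
           for k, v in sorted(identifiers.items())}
payload = ";".join(f"{k}={d}" for k, d in digests.items()).encode()

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                       # self-signed, for the sketch only
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(
        x509.UnrecognizedExtension(
            x509.ObjectIdentifier("1.3.6.1.4.1.99999.1"),  # made-up private OID
            payload),
        critical=False)
    .sign(key, hashes.SHA256())
)
print(cert.extensions)
```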
Noise Robust Feature Extraction for ASR using the Aurora 2 Database
Four front-end processing techniques developed for noise robust speech recognition are tested with the Aurora 2 database. These techniques include three previously published algorithms: variable frame rate analysis [Zhu and Alwan, 2000], peak isolation [Strope and Alwan, 1997], and harmonic demodulation [Zhu and Alwan, 2000], and a new technique for peak-to-valley ratio locking. Our previous work has focused on isolated digit recognition. In this paper, these algorithms are modified for recognition of connected digits. Recognition results with the Aurora 2 database show that a combination of these four techniques results in 40% error rate reduction when compared to the baseline MFCC front-end for the clean training condition, with no significant increase in computational complexity.
Fully utilize feedbacks: language model based relevance feedback in information retrieval
Relevance feedback is an effective way to improve the precision of information retrieval. However, most research on relevance feedback is based on the vector space model and cannot be used in other, more complicated and powerful models such as language models and logic models. Meanwhile, other research is conceptually restricted to the view of a query as a set of terms, and so cannot be naturally applied to the more general case where the query is considered a sequence of terms and the frequency information of a query term is taken into account. In this paper, we focus on relevance feedback algorithms based on language models. We use a mixture model to describe the process of generating a document and use EM to estimate the model's parameters. Our research also employs semi-supervised learning to calculate the collection model and proposes an effective way to obtain feedback from irrelevant documents to improve our algorithm.
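The mixture-model-plus-EM step described above can be sketched compactly: feedback documents are assumed to be generated by mixing an unknown feedback topic model with the fixed collection model, and EM re-estimates the topic. A minimal NumPy version under those assumptions, with smoothing details and the semi-supervised collection-model estimation omitted:

```python
import numpy as np

def feedback_model(F, p_coll, lam=0.5, iters=50):
    """Estimate a feedback topic model from feedback docs F (doc x term counts),
    assuming each term occurrence is drawn from the topic with prob. (1 - lam)
    and from the collection model p_coll with prob. lam."""
    counts = F.sum(axis=0)                   # pooled term counts
    p_topic = counts / counts.sum()          # init with the empirical dist.
    for _ in range(iters):
        # E-step: posterior prob. that a term occurrence came from the topic.
        t = (1 - lam) * p_topic
        z = t / (t + lam * p_coll + 1e-12)
        # M-step: re-estimate the topic model from fractional counts.
        new = counts * z
        p_topic = new / new.sum()
    return p_topic

F = np.array([[3, 0, 1, 0],
              [2, 1, 0, 0]])                 # term counts in 2 feedback docs
p_coll = np.array([0.4, 0.3, 0.2, 0.1])      # background collection model
print(feedback_model(F, p_coll).round(3))
# p_topic can then be interpolated with the original query model.
```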
Teaching and Learning in Technical IT Courses
In this chapter, I discuss my experiences applying Problem-Based Learning to technical IT courses such as mathematics and computer application design and programming. One of the hallmarks of nearly any technical concentration is a critical need to develop strong individual, and typically challenging, technical skill sets early in the program, and a high dependence on these skill sets in later courses, internships, and ultimately professional work or graduate studies. Another hallmark of the IT field is a need for strong metacognitive skills to analyze complex scenarios, and for the ability to creatively use these technical skill sets to synthesize new solutions and create working systems, much like an artist creates new works. The chapter is a reflection on how I try to: (1) structure early courses to help students rapidly gain difficult technical skill sets without crushing their will to continue as a result of harsh weed-out techniques, while at the same time rigorously demanding and assessing progress; (2) balance conflicts between the desire for rapid and efficient assessment and the need for detailed and careful assessment that avoids gaming of the assessment system by students; (3) gradually push technical students away from memorize-and-regurgitate learning approaches and towards an independent learning model; and (4) help students make the transition from early individualized skill set development to wider-scale analysis and creative synthesis of complex systems.
Ontology as a Source for Rule Generation
This paper discloses the potential of OWL (Web Ontology Language) ontologies for the generation of rules. The main purpose of this paper is to identify new types of rules which may be generated from OWL ontologies. Rules generated from OWL ontologies are necessary for the functioning of the Semantic Web Expert System (SWES). It is expected that the SWES will be able to process ontologies from the Web with the purpose of supplementing or even developing its knowledge base.
High Dimensional Search Using Polyhedral Query
It is well known that, as the dimensionality of a metric space increases, metric search techniques become less effective and the cost of indexing mechanisms becomes greater than the saving they give. This is due to the so-called curse of dimensionality. One effect of increasing dimensionality is that the ratio of unit hypersphere to unit hypercube volume decreases rapidly, making the solution to a similarity query (the query ball, or hypersphere) ever more difficult to identify by using metric invariants such as triangle inequality. In this paper we take a different approach, by identifying points within a query polyhedron rather than a ball. We show how this can be achieved by constructing a surrogate metric space, such that a query ball in the surrogate space corresponds to a polyhedron in the original space. If the polyhedron contains the ball, the overall cost of the query is likely to be increased in high dimensions; however, we show that shrinking the polyhedron can capture a surprisingly high proportion of the points within the ball, whilst at the same time giving a more efficient, and more scalable, search. We show results which confirm our underlying hypothesis. In some cases we can retrieve significant volumes of query results from spaces which are otherwise intractable.
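The surrogate-space idea can be illustrated with pivot distances: mapping each point to its vector of distances from a few pivots turns an L-infinity ball in the surrogate space into an intersection of distance shells (a polyhedron) in the original space. A small sketch under that simplified construction, with random data standing in for a real collection (the paper's polyhedron-shrinking step is not reproduced):

```python
import numpy as np

def pivot_surrogate(X, pivots):
    """Map each point to its vector of distances to the pivots."""
    return np.linalg.norm(X[:, None, :] - pivots[None, :, :], axis=2)

def polyhedral_candidates(X, pivots, q, t):
    S = pivot_surrogate(X, pivots)
    sq = np.linalg.norm(q[None, :] - pivots, axis=1)
    # By the triangle inequality, d(q, x) <= t implies every pivot distance
    # of x lies within t of the query's, so no true result is discarded.
    return np.where(np.max(np.abs(S - sq), axis=1) <= t)[0]

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 20))
pivots = X[:8]                                # first 8 points act as pivots
q = rng.standard_normal(20)
cand = polyhedral_candidates(X, pivots, q, t=4.0)
exact = np.where(np.linalg.norm(X - q, axis=1) <= 4.0)[0]
assert set(exact) <= set(cand)                # candidates cover the query ball
print(len(cand), len(exact))
```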
Automatic pipeline construction for real-time annotation
Many annotation tasks in computational linguistics are tackled with manually constructed pipelines of algorithms. In real-time tasks where information needs are stated and addressed ad-hoc, however, manual construction is infeasible. This paper presents an artificial intelligence approach to automatically construct annotation pipelines for given information needs and quality prioritizations. Based on an abstract ontological model, we use partial order planning to select a pipeline's algorithms and informed search to obtain an efficient pipeline schedule. We realized the approach as an expert system on top of Apache UIMA, which offers evidence that pipelines can be constructed ad-hoc in near-zero time.
Improving the Forward Chaining Algorithm for Conceptual Graphs Rules
Simple Conceptual Graphs (SGs) are used to represent entities and relations between these entities: they can be translated into positive, conjunctive, existential first-order logic, without function symbols. Sound and complete reasoning w.r.t. the associated logic formulas is obtained through a kind of graph homomorphism called projection. Conceptual Graph Rules (or CG rules) are a standard extension of SGs, keeping reasoning sound and complete w.r.t. the associated logic formulas (they have the same form as tuple-generating dependencies in databases): these graphs represent knowledge of the form "IF ... THEN". We present here an optimization of the natural forward chaining algorithm for CG rules. Generating a graph of rule dependencies makes subsequent sequences of rule applications far more efficient, and the structure of this graph can be used to obtain new decidability results.
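The dependency-graph optimization can be conveyed with a propositional stand-in for CG rules (real CG reasoning uses projection, i.e. graph homomorphism, rather than set inclusion): after a rule fires, only the rules that depend on its conclusions are re-checked. The rules below are invented toy examples.

```python
from collections import defaultdict

# Propositional stand-in for CG rules: premises -> conclusions.
rules = {
    "r1": ({"cat(x)"}, {"mammal(x)"}),
    "r2": ({"mammal(x)"}, {"animal(x)"}),
    "r3": ({"animal(x)", "pet(x)"}, {"loved(x)"}),
}

# Rule dependency graph: a -> b when a's conclusions overlap b's premises,
# i.e. applying a may newly enable b.
deps = defaultdict(set)
for a, (_, concl) in rules.items():
    for b, (prem, _) in rules.items():
        if concl & prem:
            deps[a].add(b)

def forward_chain(facts):
    facts = set(facts)
    agenda = set(rules)                  # every rule is checked once at start
    while agenda:
        r = agenda.pop()
        prem, concl = rules[r]
        if prem <= facts and not concl <= facts:
            facts |= concl
            agenda |= deps[r]            # only dependent rules are re-checked
    return facts

print(sorted(forward_chain({"cat(x)", "pet(x)"})))
```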
Fusion of local features for face recognition by multiple least square solutions
In supervised face recognition, linear discriminant analysis (LDA) has been viewed as one of the most popular approaches over the past years. In this paper, taking advantage of the equivalence between LDA and the least square problem, we propose a new fusion method for face classification, based on combining least square solutions for local mean and local texture into multiple optimization problems. Extensive experiments on the AR_Gray and Yale face databases indicate the competitive performance of the proposed method compared to traditional LDA.
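The least-squares formulation can be sketched as follows: a ridge-regularized least-squares classifier (closely related to LDA under class-indicator targets) is fit separately on a "local mean" view and a "local texture" view, and their scores are fused. The feature extraction and the paper's exact combination scheme are not reproduced; the data below are toy placeholders.

```python
import numpy as np

def ls_classifier(X, Y, reg=1e-3):
    """Ridge-regularized least-squares classifier with class-indicator
    targets Y; returns the weight matrix W minimizing ||X W - Y||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ Y)

def fuse_predict(Xm, Xt, Wm, Wt, alpha=0.5):
    """Fuse scores from a local-mean view and a local-texture view."""
    scores = alpha * (Xm @ Wm) + (1 - alpha) * (Xt @ Wt)
    return scores.argmax(axis=1)

# Toy data: two views of 6 samples, 3 classes (one-hot targets).
rng = np.random.default_rng(0)
Xm, Xt = rng.standard_normal((6, 10)), rng.standard_normal((6, 8))
y = np.array([0, 0, 1, 1, 2, 2])
Y = np.eye(3)[y]
Wm, Wt = ls_classifier(Xm, Y), ls_classifier(Xt, Y)
print(fuse_predict(Xm, Xt, Wm, Wt))
```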
Ehipasiko: A Content-based Image Indexing and Retrieval System
Presently, retrieving images from a digital library requires retrieval techniques different from those used to retrieve text documents. In this paper, we demonstrate the possibility of converting the contents of images into text, which enables us to utilise text-based retrieval techniques for image retrieval. The potential advantages and applications of this approach are also illustrated in this paper.
Semantic Alliance: a framework for semantic allies
We present an architecture and software framework for semantic allies: semantic systems that complement existing software applications with semantic services and interactions based on a background ontology. On the one hand, our Semantic Alliance framework follows an invasive approach: users can profit from semantic technology without having to leave their accustomed workflows and tools. On the other hand, Semantic Alliance offers a largely application-independent way of extending existing (open API) applications with MKM technologies. The Semantic Alliance framework presented in this paper consists of three components: i) a universal semantic interaction manager for given abstract document types, ii) a set of thin APIs realized as invasive extensions to particular applications, and iii) a set of renderer components for existing semantic services. We validate the Semantic Alliance approach by instantiating it with a spreadsheet-specific interaction manager, thin APIs for LibreOffice Calc 3.4 and MS Excel'10, and a browser-based renderer.
Towards an Organizational MAS Methodology
Organizations are a powerful way to coordinate complex behavior in human society. Thus, human organizations can serve as a basis for better understanding and designing open multi-agent systems. Organizational models have recently been used in agent theory for modelling coordination in open systems and for ensuring social order in multi-agent system applications. This work discusses several organizational features of organization-oriented multi-agent system methodologies and analyzes whether they take human organizational designs into account. Moreover, several guidelines that any organization-oriented MAS methodology should take into account are proposed.
Adaptive localization in a dynamic WiFi environment through multi-view learning
Accurately locating users in a wireless environment is an important task for many pervasive computing and AI applications, such as activity recognition. In a WiFi environment, a mobile device can be localized using signals received from various transmitters, such as access points (APs). Most localization approaches build a map between the signal space and the physical location space in an offline phase, and then use the received-signal-strength (RSS) map to estimate the location in an online phase. However, the map can become outdated when the signal-strength values change over time due to environmental dynamics. It is infeasible or expensive to repeat data calibration to reconstruct the RSS map. In such a case, it is important to adapt the model learnt in one time period to another time period without too much recalibration. In this paper, we present a location-estimation approach based on manifold co-regularization, a machine learning technique for building a mapping function between data. We describe LeManCoR, a system for adapting the mapping function between the signal space and physical location space over different time periods based on manifold co-regularization. We show that LeManCoR can effectively transfer knowledge between two time periods without requiring too much new calibration effort. We illustrate LeManCoR's effectiveness in a real 802.11 WiFi environment.
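A single-view simplification of this idea (manifold co-regularization proper couples two views; this sketch is Laplacian-regularized least squares) can convey the mechanism: labeled RSS/location pairs fit the mapping, while a graph Laplacian over all samples, including unlabeled ones from the new time period, keeps it smooth on the signal manifold. The kernel choice and parameters below are illustrative assumptions.

```python
import numpy as np

def rbf(X, Z, s=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

def laplacian_rls(X, y, labeled, lam=1e-2, gamma=1e-2, s=1.0):
    """Predict locations y (n x 2) from RSS vectors X (n x #APs).
    Only rows in `labeled` have known locations; a graph Laplacian over
    all samples regularizes the function on the signal manifold."""
    n = X.shape[0]
    K = rbf(X, X, s)                   # kernel over all RSS samples
    L = np.diag(K.sum(1)) - K          # graph Laplacian from same affinities
    J = np.zeros((n, n))
    J[labeled, labeled] = 1.0          # selects the labeled rows
    alpha = np.linalg.solve(J @ K + lam * np.eye(n) + gamma * L @ K, J @ y)
    return K @ alpha                   # location estimates for all rows

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))           # RSS from 5 APs at 60 spots
y = np.zeros((60, 2))
labeled = np.arange(15)                    # only 15 spots were re-calibrated
y[labeled] = rng.uniform(0, 10, (15, 2))   # their known (x, y) coordinates
print(laplacian_rls(X, y, labeled)[:3])
```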
A 2.7 Gcps and 7-Multiplexing CDMA Serial Communication Chip for Real-Time Robot Control with Multiprocessors
Intelligent robot control using multiprocessors, sensors, and actuators requires real-time flexible networks for communicating various types of real-time data, e.g., sensing data and interrupt signals. Furthermore, serial data transfer is required for implementing the network with few wiring lines. To meet these requirements, we propose a CDMA serial communication interface utilizing a novel two-step synchronization. The transmitter and receiver chip, fabricated with 0.25µm digital CMOS technology, achieved 2.7 Gcps (chips per second) and 7-multiplex communication. An experimental interface board was developed to demonstrate flexible transfer of multi-image data by installing the CDMA chips alongside an FPGA.
Integrating provenance into an operational data product information system
Knowledge of how a science data product has been generated is a critical component in determining its fitness-for-use for a given analysis. One objective of science information systems is to allow users to search for data products based on a wide range of criteria; spatial and temporal extent, observed parameter, research domain, and organizational project are common search criteria. Currently, science information systems are geared towards helping users find data, but not towards helping users determine how the products were generated. An information system that exposes the provenance of available data products, that is, what observations, assumptions, and science processing were involved in their generation, would contribute significant benefit to users' fitness-for-use decision-making. In this work we discuss semantics-driven provenance extensions to the Virtual Solar Terrestrial Observatory (VSTO) information system. The VSTO semantic web portal uses an ontology to provide a unified search and product retrieval interface to data in the fields of solar, solar-terrestrial, and space physics. We have developed an extension to the VSTO ontology that allows it to express item-level data product records. We show how the Open Provenance Model (OPM) and the Proof Markup Language (PML) can be used to express the provenance of data product records. Additionally, we discuss ways in which domain semantics can aid in the formulation, and answering, of provenance queries. Our extension to the VSTO ontology has also been integrated with a solar-terrestrial profile of the Observations and Measurements (O&M) model; we utilize this integration to connect observation events to the data product record lineage. Our additions to the VSTO ontology will allow us to extend the VSTO web portal user interface with search criteria based on provenance and observation characteristics. More critically, provenance information will allow the VSTO portal to display important knowledge about selected data records: what science processes and assumptions were applied to generate the record, what observations the record derives from, and the results of quality processing applied to the record and any records it derives from. We conclude by presenting our interface for displaying record provenance information and discuss how it aids users in determining fitness-for-use of the data.
Querying in spaces of music information
This study focuses on querying over structured spaces of information. Querying is understood in terms of mining structures of information and of knowledge understanding. We consider information as the subject of descriptions expressed in some language; information is hidden behind such descriptions. Operations on structured spaces of information are performed on the language constructions describing such structures. However, automatic operations cannot always be performed directly on language constructions; in such cases it is necessary to extend processing to the space of information itself. The study concerns paginated (i.e. printed and handwritten) music notation. It is shown that querying in the space of music information requires syntactic structuring as well as its extension to semantic analysis. It is worth underlining that data understanding requires the analysis of uncertainty: the analyzed data are usually incomplete, uncertain, and partly incorrect. Such imperfection of information is hidden beneath the levels of syntax and semantics. Due to space limitations, this problem is not studied here.
Agricultural Knowledge Management Systems in Practice: The Ability to Support Wereda Knowledge Centers in Ethiopia
Agriculture is the dominant sector in the Ethiopian economy, but it is characterized by low productivity. Ethiopia is interested in creating access to agricultural knowledge through an agricultural knowledge management system (AKMS). Such a system has been developed using a web-based portal named the Ethiopian Agriculture Portal (EAP). It is facilitated through Woreda Knowledge Centers (WKCs) located in 10 Pilot Learning Woredas (PLWs). Providing knowledge in the appropriate format, identifying affordable technological infrastructure, and integrating indigenous agricultural knowledge into the knowledge system are vital to empowering development agents (extension workers) in Ethiopia. This study addresses two research questions: 1) To what extent does the centralized AKMS support WKCs in accessing and utilizing agricultural knowledge? 2) How can the existing AKMS support the capturing and sharing of indigenous agricultural knowledge and best practices?
Learning to Merge Word Senses
It has been widely observed that different NLP applications require different sense granularities in order to best exploit word sense distinctions, and that for many applications WordNet senses are too fine-grained. In contrast to previously proposed automatic methods for sense clustering, we formulate sense merging as a supervised learning problem, exploiting human-labeled sense clusterings as training data. We train a discriminative classifier over a wide variety of features derived from WordNet structure, corpus-based evidence, and evidence from other lexical resources. Our learned similarity measure outperforms previously proposed automatic methods for sense clustering on the task of predicting human sense merging judgments, yielding an absolute F-score improvement of 4.1% on nouns, 13.6% on verbs, and 4.0% on adjectives. Finally, we propose a model for clustering sense taxonomies using the outputs of our classifier, and we make available several automatically sense-clustered WordNets of various sense granularities.
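The supervised formulation is straightforward to sketch: each candidate pair of senses becomes a feature vector with a human merge/don't-merge label, and a discriminative classifier yields a learned similarity measure. The feature names and numbers below are illustrative placeholders, not the paper's feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes a candidate pair of WordNet senses with features
# derived from taxonomy structure, corpus evidence, and other resources
# (hypothetical features, for illustration only).
X = np.array([
    # [shared-hypernym depth, gloss overlap, same topic file, corpus sim.]
    [0.9, 0.8, 1.0, 0.7],
    [0.1, 0.0, 0.0, 0.2],
    [0.8, 0.5, 1.0, 0.6],
    [0.2, 0.1, 0.0, 0.1],
])
y = np.array([1, 0, 1, 0])   # human judgment: should the senses merge?

clf = LogisticRegression().fit(X, y)
# The predicted probability acts as a learned sense-similarity measure,
# which can then drive a clustering of the sense taxonomy.
print(clf.predict_proba([[0.7, 0.6, 1.0, 0.5]])[:, 1])
```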
Optimizing Network Patching Policy Decisions
Patch management of networks is essential to mitigate the risks from the exploitation of vulnerabilities through malware and other attacks, but by setting too rigorous a patching policy for network devices the IT security team can also create burdens for IT operations or disruptions to the business. Different patch deployment timelines could be adopted with the aim of reducing this operational cost, but care must be taken not to substantially increase the risk of emergency disruption from potential exploits and attacks. In this paper we explore how IT security policy choices regarding patching timelines can be made in terms of economically-based decisions, in which the aim is to minimize the expected overall cost to the organization from patching-related activity. We introduce a simple cost function that takes into account costs incurred from disruption caused by planned patching and from expected disruption caused by emergency patching. To explore the outcomes under different patching policies we apply a systems modelling approach and Monte Carlo style simulations. The results from the simulations show the disruptions caused for a range of patch deployment timelines. These results, together with the cost function, are then used to identify the optimal patching timelines under different threat environment conditions, taking into account the organization's risk tolerance.
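The economic trade-off can be illustrated with a toy Monte Carlo model: a longer deployment timeline T means fewer planned patch cycles per year (less planned disruption) but a higher chance that an exploit lands before the patch does (more emergency disruption). All rates and costs below are invented for illustration and are not the paper's cost function.

```python
import numpy as np

def yearly_cost(T, vulns=50, exploit_rate=1/180.0,
                c_cycle=5.0, c_emerg=40.0, n=20_000, rng=None):
    """Simulated yearly cost of a T-day patch deployment timeline:
    planned cost scales with patch cycles per year, and an emergency cost
    is paid for each vulnerability whose first exploit (exponential
    arrival) appears within its T-day exposure window."""
    rng = rng or np.random.default_rng(0)
    first_exploit = rng.exponential(1.0 / exploit_rate, size=(n, vulns))
    emergencies = (first_exploit < T).sum(axis=1)
    return (c_cycle * 365.0 / T + c_emerg * emergencies).mean()

# Sweep timelines to locate the cheapest policy for this threat level.
for T in (7, 14, 30, 60):
    print(f"T={T:>2} days: expected yearly cost {yearly_cost(T):8.1f}")
```

Under these toy parameters the planned-cost term dominates for very short timelines and the emergency term for long ones, so an intermediate timeline minimizes the total, which is the qualitative behaviour the paper's simulations explore.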
A streaming digital ink framework for multi-party collaboration
We present a framework for pen-based, multi-user, online collaboration in mathematical domains. This environment provides participants, who may be in the same room or across the planet, with a shared whiteboard and voice channel. The digital ink stream is transmitted as InkML, allowing special recognizers for different content types, such as mathematics and diagrams. Sessions may be recorded and stored for later playback, analysis or annotation. The framework is currently structured to use the popular Skype and Google Talk services for the communications channel, but other transport mechanisms could be used. The goal of the work is to support computer-enhanced distance collaboration, where domain-specific recognizers handle different kinds of digital ink input and editing. The first of these recognizers is for mathematics, which allows converting math input into machine-understandable format. This supports multi-party collaboration, with sessions recorded in rich formats that allow semantic analysis and manipulation of the content.
Normalization of place/transition-systems preserves net behaviour
In this article we consider λ-free labelled Petri nets. They are called normalized if their arcs are not weighted and if their initial and final markings are subsets of the set of places. We prove that every general Petri net can be (effectively) transformed into a normalized Petri net with exactly the same concurrent behaviour. Its finite and infinite sequential behaviours, as well as its step sequences, are also preserved. This makes it possible to always consider Petri nets in normalized form when working on net behaviour, without restricting the generality of the results, which should facilitate a good deal of future research.
Precision and negative predictive value of links between ClinicalTrials.gov and PubMed.
One of the goals of translational science is to shorten the time from discovery to clinical use. Clinical trial registries were established to increase transparency in completed and ongoing clinical trials, and they support linking trials with resulting publications. We set out to investigate the precision and negative predictive value (NPV) of links between ClinicalTrials.gov (CT.gov) and PubMed. CT.gov was established to increase transparency in clinical trials, and its links to PubMed are crucial for supporting a number of important functions, including ascertaining publication bias. We drew a random sample of trials downloaded from CT.gov and performed a manual review of the retrieved publications. We characterize two types of links between trials and publications (NCT-links originating from MEDLINE and PMID-links originating from CT.gov). Link precision differs by type (NCT-link: 100%; PMID-link: 63% to 96%). For trials with no linked publication, we were able to find publications 44% of the time (NPV = 56%) by searching PubMed. This low NPV shows that there are potentially numerous publications that should have been formally linked to the trials. Our results indicate that existing trial registry and publisher policies may not be fully enforced. We suggest some automated methods for improving link quality.
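Both metrics reduce to a confusion matrix over linked/unlinked registry entries. A worked example, with the unlinked counts chosen to mirror the reported NPV of 56% and the other cells purely illustrative:

```python
# A "positive" is a registry entry that carries a link to a publication.
tp = 63   # linked, and the link points to a correct trial publication
fp = 5    # linked, but the publication does not match the trial
tn = 56   # unlinked, and manual search finds no publication either
fn = 44   # unlinked, but a matching publication exists in PubMed

precision = tp / (tp + fp)   # quality of the links that do exist
npv = tn / (tn + fn)         # how trustworthy an *absent* link is
print(f"precision={precision:.2f}, NPV={npv:.2f}")   # NPV=0.56, as reported
```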
Machine learning for information extraction from XML marked-up text on the semantic web
The last few years have seen an explosion in the amount of text becoming available on the World Wide Web as online communities of users in diverse domains emerge to share documents and other digital resources. In this paper we explore the issue of how to provide a low-level information extraction tool based on hidden Markov models that can identify and classify terminology based on previously marked-up examples. Such a tool should provide the basis for a domain portable information extraction system, that when combined with search technology can help users to access information more effectively within their document collections than today's information retrieval engines alone. We present results of applying the model in two diverse domains: news and molecular biology and discuss the model and term markup issues that this investigation reveals.
Authentication Services in Mobile Networks
Authentication, Authorization, and Accounting (AAA) technologies are widely considered to be the key to the growth of e-commerce. Mobile network operators may be one of the first to offer such services thanks to a number of advantages. They will face some issues, however, if they attempt to launch AAA services for e-commerce. This paper introduces some of the issues including ID mappings, certificate validation, security awareness, and environments. Some of the solutions for these issues are also discussed.
Fuzzy clustering of the self-organizing map: some applications on financial time series
The Self-organizing map (SOM) has been widely used in financial applications, not least for time-series analysis. The SOM has not only been utilized as a stand-alone clustering technique, its output has also been used as input for second-stage clustering. However, one ambiguity with the SOM clustering is that the degree of membership in a particular cluster is not always easy to judge. To this end, we propose a fuzzy C-means clustering of the units of two previously presented SOM models for financial time-series analysis: financial benchmarking of companies and monitoring indicators of currency crises. It allows each time-series point to have a partial membership in all identified, but overlapping, clusters, where the cluster centers express the representative financial states for the companies and countries, while the fluctuations of the membership degrees represent their variations over time.
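The second-stage clustering can be sketched directly: treat the trained SOM's prototype vectors as data points and run fuzzy C-means, so each unit (and hence each time-series point mapped to it) receives partial memberships in all clusters. A compact NumPy version, with synthetic prototypes standing in for a trained SOM:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, rng=None):
    """Fuzzy C-means: every point receives a membership degree in each
    cluster (rows of U sum to 1), unlike crisp k-means."""
    rng = rng or np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=len(X))          # initial memberships
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** -p
        U = inv / inv.sum(axis=1, keepdims=True)        # standard FCM update
    return centers, U

# Stand-in for trained SOM prototypes (e.g., financial ratios per unit).
rng = np.random.default_rng(1)
protos = np.vstack([rng.normal(mu, 0.3, size=(20, 4)) for mu in (0, 2, 4)])
centers, U = fuzzy_cmeans(protos, c=3)
print(U[:3].round(2))   # partial memberships of the first three SOM units
```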
Re-grasping: improving capability for multi-arm-robot-system by dynamic reconfiguration
In previous work a novel flexible and versatile handling concept, called PARAGRIP (Parallel Gripping), was introduced. This concept is based on a reconfigurable architecture with a modular and alterable layout. The robot system is able to handle objects with six DOF by forming a parallel kinematic structure comprising several robotic arms and the object itself. As many kinematic parameters, such as the grasp and base points of the arms as well as the arm combination, can be chosen freely, the handling system offers a fast and economic way to adapt its performance to the requirements of the task. This adaptation can take place before or even during manipulation. The latter is realized by dynamic re-grasping, where the object is passed from one arm to the next if more than three arms are available in the layout. This paper deals with the question of how an optimal configuration set can be planned automatically if the robot layout offers the possibility of dynamic re-grasping. It shows the benefits and the challenges as well as the strategies of the planning process and its realization. The focus of this paper is how to manage the complexity arising from the large number of possible configurations and how to choose the optimal one within the shortest computation time.
Modeling, Simulation, and Optimization of Supply Chains: A Continuous Approach
This book offers a state-of-the-art introduction to the mathematical theory of supply chain networks, focusing on supply chain networks described by partial differential equations (PDEs). The authors discuss modeling of complex supply networks as well as their mathematical theory; explore modeling, simulation, and optimization of some of the discussed models; and present analytical and numerical results on optimization problems. Real-world examples are given to demonstrate the applicability of the presented approaches. Audience: Graduate students and researchers who are interested in the theory of supply chain networks described by PDEs will find this book useful. It can also be used in advanced graduate-level courses on modeling of physical phenomena, as well as introductory courses on supply chain theory. Contents: Preface; Chapter 1: Introduction; Chapter 2: Mathematical Preliminaries; Chapter 3: Basic Queueing Models; Chapter 4: Models Based on Ordinary Differential Equations; Chapter 5: Models Based on Partial Differential Equations; Chapter 6: Continuum-Discrete Models; Chapter 7: Control and Optimization Problems for Networks; Chapter 8: Computational Results; Bibliography; Index
Empirical assessment of business model transformations based on model simulation
Business processes are recognized by organizations as one of the most important intangible assets, since they let organizations improve their competitiveness. Business processes are supported by enterprise information systems, which evolve over time and come to embed particular business rules that are not present anywhere else. Thus, there are many organizations with inaccurate business processes, which prevents the modernization of enterprise information systems in line with the business processes that they support. Therefore, business process mining techniques are often used to retrieve reliable business processes from the event logs recorded during the execution of enterprise systems. Unfortunately, such event logs are represented with purpose-specific notations such as Mining XML and do not yet follow the recent software modernization standard ISO 19506 (KDM, Knowledge Discovery Metamodel). This paper presents an exogenous model transformation between these two notations. The main advantage is that process mining techniques can be effectively reused within software modernization projects according to the standard notation. This paper focuses in particular on the empirical evaluation of this transformation, simulating different kinds of business process models and several event logs of different sizes and configurations from such models. After analyzing all the model transformation executions, the study demonstrates that the transformation provides suitable KDM models in time linear in the size of the input models.
Cooperation between the Inference System and the Rule Base by Using Multiobjective Genetic Algorithms
This paper presents an evolutionary multiobjective learning model that achieves positive synergy between the inference system and the rule base in order to obtain simpler and still accurate linguistic fuzzy models, by learning fuzzy inference operators and applying rule selection. The fuzzy rule-based systems obtained in this way have a better trade-off between interpretability and accuracy in linguistic fuzzy modeling applications.
A type-theoretical approach for ontologies: The case of roles
In the domain of ontology design as well as in Knowledge Representation, modeling universals is a challenging problem. Most approaches that have addressed this problem rely on Description Logics (DLs), but many difficulties remain due to under-constrained representation, which reduces the inferences that can be drawn and further causes problems in expressiveness. In mathematical logic and program checking, type theories have proved to be appealing but, so far, they have not been applied in the formalization of ontologies. To bridge this gap, we present in this paper a theory for representing ontologies in a dependently-typed framework which relies on strong formal foundations, including both a constructive logic and a functional type system. The language of this theory defines in a precise way what ontological primitives such as classes, relations, properties, etc., and thereof roles, are. The first part of the paper details how these primitives are defined and used within the theory. In a second part, we focus on the formalization of the role primitive. A review of significant role properties leads to the specification of a role profile, and most of the remaining work details, through numerous examples, how the proposed theory is able to fully satisfy this profile. It is demonstrated that dependent types can model several non-trivial aspects of roles, including a formal solution for generalization hierarchies, identity criteria for roles and other contributions. A discussion is given on how the theory is able to cope with many of the constraints inherent in a good role representation.