abstract | authors | title | __index_level_0__ |
---|---|---|---|
Editorial: From phenomenological data and sensations to cognition | ['José Manuel Ferrández', 'D. Maravall', 'José Ramón Álvarez-Sánchez'] | Editorial: From phenomenological data and sensations to cognition | 811,331 |
The analysis of externalities in technology-based networks continues to be of significant managerial importance in e-commerce and traditional IS operations. Competitive strategy, economics and IS researchers share this interest, and have been exploring technology adoption, development and product launch contexts where understanding the issues is critical. We examine three levels of analysis in which positive demand-side and negative congestion externalities act as countervailing drivers of business value: the market or economy level, the business process or firm level, and the individual or product level. We develop game-theoretic deterministic expected-value models to analyze countervailing network externalities, and illustrate their application in several real world settings. We further illustrate how managerial decision-making can be enriched using real options analysis in contexts involving dynamic countervailing externalities. | ['Robert J. Kauffman', 'Ajay Kumar'] | Modeling Network Decisions under Uncertainty: Countervailing Externalities and Embedded Options | 126,350 |
A Model-Driven Approach for Mobile Business Information Systems Applications. | ['Luís Pires da Silva', 'Fernando Brito e Abreu', 'Vasco Amaral'] | A Model-Driven Approach for Mobile Business Information Systems Applications. | 805,383 |
In this paper we present a placement-intelligent resynthesis methodology and optimization algorithms to meet post-layout timing constraints while at the same time reducing the interconnect congestion. We begin with the synthesized design netlist after initial placement and make incremental modifications - taking placement into account - to generate a final netlist and placement that meets the delay constraints after place and route. The algorithms described have been implemented as part of a tool for placement-based resynthesis. The tool has been used on a number of pre-optimized industrial designs, obtaining improvements in post-placement delays ranging from 13 to 22% together with improved routability. | ['Lalgudi N. Kannan', 'Peter R. Suaris', 'Hong-Gee Fang'] | A Methodology and Algorithms for Post-Placement Delay Optimization | 423,810 |
In QoS networks, Connection Admission Control (CAC) is required to ensure that network resources are used efficiently. Very often CAC algorithms attempt to estimate available network resources (e.g., bandwidth, buffer space) to make admission decisions. In the case of real time video applications, transmission delay and packet loss rate constraints are crucial QoS parameters. However, the mapping between these parameters and available network resources is not straightforward. In this paper, we present and analyze an adaptive admission control and traffic adjustment algorithm for real time video transmission, which is based on the measurement of the end-to-end delay probability distribution and the user-defined QoS level profiles. As the network status varies, the algorithm attempts to improve or degrade the QoS levels of existing calls within the acceptable range. Simulation results demonstrate that the proposed algorithm performs well in terms of decreasing the call blocking rate while utilizing the network bandwidth effectively. | ['Ying He', 'Jian Yan', 'Zhengxin Ma', 'Xuming Liu'] | Adaptive Call Admission Control for Real Time Video Communications Based on Delay Probability Distribution | 49,079 |
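The measurement-based admission rule described above can be illustrated compactly. The following Python sketch estimates the end-to-end delay violation probability from measured samples and admits a call at the best QoS level whose tolerance is met; the profile structure, the level ordering, and the sample generator are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def violation_prob(delay_samples, delay_bound):
    """Empirical estimate of P(delay > delay_bound) from measurements."""
    return float(np.mean(np.asarray(delay_samples) > delay_bound))

def admit(delay_samples, profile):
    """profile['levels']: acceptable QoS levels, best first,
    each a (delay_bound_seconds, max_violation_probability) pair."""
    for bound, max_viol in profile["levels"]:
        if violation_prob(delay_samples, bound) <= max_viol:
            return True, (bound, max_viol)  # admit at this (possibly degraded) level
    return False, None                      # block the call

# Example: three acceptable QoS levels for a real-time video call.
measured = np.random.exponential(0.04, size=500)   # stand-in delay samples (s)
profile = {"levels": [(0.05, 0.01), (0.08, 0.05), (0.12, 0.10)]}
print(admit(measured, profile))
```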
This paper presents an algorithm that allows for the rapid design of motion effects for 4D films. Our algorithm is based on a viewer-centered rendering strategy that matches chair motion to the movement of a viewer's visual attention. An object tracking algorithm is used to estimate the movement of visual attention under the assumption that visual attention follows an object of interest. We performed several experiments to find optimal parameters for implementation, such as the required accuracy of object tracking. Our algorithm enables motion effects design to be at least 10 times faster than the current practice of manual authoring. We also assessed the subjective quality of the motion effects generated by our algorithm, and results indicated that our algorithm can provide perceptually plausible motion effects. | ['Jaebong Lee', 'Bohyung Han', 'Seungmoon Choi'] | Interactive motion effects design for a moving object in 4D films | 933,902 |
Lung cancer is the cause of many deaths among populations around the world. An early diagnosis allows for more curable and simpler treatment options. Due to the complexity of diagnosing small pulmonary nodules, Computer-Aided Diagnosis (CAD) tools provide assistance to radiologists, aiming to improve diagnosis. Extracting relevant image features is of great importance for these tools. In this work we extracted 3D Texture Features (TF) and 3D Margin Sharpness Features (MSF) from the Lung Image Database Consortium (LIDC) in order to create a classification model to classify small pulmonary nodules with diameters between 3-10mm. We used three machine learning algorithms: k-Nearest Neighbor (k-NN), Multilayer Perceptron (MLP) and Random Forest (RF). These algorithms were trained on different sets of features from the TF and MSF. The classification model with the MLP algorithm using the selected features from the integration of TF and MSF achieved the best AUC of 0.820. | ['Ailton Felix', 'Marcelo Costa Oliveira', 'Aydano Pamponet Machado', 'Jose Raniery'] | Using 3D Texture and Margin Sharpness Features on Classification of Small Pulmonary Nodules | 983,976 |
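The experimental protocol above (training k-NN, MLP, and RF on different feature sets and comparing AUC) can be sketched with scikit-learn. The synthetic matrix below merely stands in for the LIDC texture and margin sharpness features; classifier settings are illustrative, not the paper's tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the TF+MSF feature matrix of small nodules (benign vs. malignant).
X, y = make_classification(n_samples=300, n_features=40, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```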
Abstract The purpose of this study was to investigate Internet use patterns and Internet addiction among adolescents and to examine the correlation between Internet addiction and eating attitudes and body mass index (BMI). The study was conducted among 1,938 students, aged between 14 and 18 years. The Internet Addiction Test (IAT), the Eating Attitudes Test (EAT), and a sociodemographic query form were used to collect data. According to the IAT, 12.4% of the study sample met the criteria for Internet addiction. A significant positive correlation was found between BMI and the IAT (r=0.307; p<0.001). No relationship was found between the EAT and either the IAT or the duration of weekly Internet use. Linear regression analysis revealed a significant independent association of the IAT with BMI (r=0.235; p<0.001). These results indicate an a... | ['Fatih Canan', 'Osman Yildirim', 'T.Y. Ustunel', 'Gjergji Sinani', 'Arzu Hisarvant Kaleli', 'Cemalettin Gunes', 'Ahmet Ataoglu'] | The Relationship Between Internet Addiction and Body Mass Index in Turkish Adolescents | 180,236 |
Quantum communication networks need to securely transmit a quantum frame from source to destination. In the wireless communication network, source and destination have two types of connection: direct or indirect communication. In the direct connected mode, the transmission security can be achieved by using a quantum key distribution. In the indirect connected mode, it is a difficult problem to deal with the unsafe routing path from source to destination due to eavesdropping and man-in-the-middle attacks. In this paper, we propose a quantum mechanism that designs a flexible quantum frame to achieve transmission integrity. This frame uses a quantum correlated key to give the receiver the capability to judge whether the received quantum frame is secure or not. This mechanism provides a new solution to the problem of transmission security over an unsafe routing path. | ['Tien-Sheng Lin', 'I-Ming Tsai', 'Sy-Yen Kuo'] | Quantum Transmission Integrity Mechanism for Indirect Communication | 58,557 |
No truly parallel file systems yet exist. Those that make the claim fall short when it comes to providing adequate concurrent write performance at large scale. This limitation causes large usability headaches in HPC. Users need two major capabilities missing from current parallel file systems. One, they need low-latency interactivity. Two, they need high bandwidth for large parallel IO; this capability must be resistant to IO patterns and should not require tuning. No existing parallel file systems provide these features. Frighteningly, exascale renders these features even less attainable from currently available parallel file systems. Fortunately, there is a path forward. | ['John Bent', 'Gary Grider', 'Brett M. Kettering', 'Adam Manzanares', 'Meghan McClelland', 'Aaron Torres', 'Alfred Torrez'] | Storage challenges at Los Alamos National Lab | 401,220 |
An intelligent agent must plan future deductions and anticipate what other agents will deduce from their beliefs. Creary (1979) proposed to do this by simulation. Often the agent's future beliefs, or the beliefs of other agents, involve terms unknown to the agent at simulation time. This paper shows how to extend Creary's technique to handle this case. | ['Andrew R Haas'] | Reasoning about deduction with unknown constants | 100,344 |
CodeCity is a language-independent interactive 3D visualization tool for the analysis of large software systems. Based on a city metaphor, it depicts classes as buildings and packages as districts of a "software city". By offering consistent locality and solid orientation points we keep the viewer oriented during the exploration of a city. We applied our tool on several large-scale industrial systems. | ['Richard Wettel', 'Michele Lanza'] | CodeCity: 3D visualization of large-scale software | 498,196 |
In cellular radio, accessing requests are transmitted through signaling channels in the form of minipackets. The accessing requests are generated by a variety of mobile users that impose different constraints. The authors consider data and high-priority public-safety cellular users and isolate a single channel that is shared by the mixed user population. They devise and analyze a mixed protocol that satisfies the constraints of the high-priority public-safety users, while maintaining the data traffic with satisfactory delays. For the high-priority users, a well-defined and finite but bursty population and a number of different priorities are considered. The population of data users is assumed time-varying and not well-defined, and is modeled as limit Poisson in the analysis of the protocol. | ['Ming T. Liu', 'Panayota Papantoni-Kazakos'] | A protocol for cellular radio signaling channels carrying data and high-priority accessing requests | 483,410 |
Mobile agents are a special category of software entities, with the capacity to move between nodes of one or more networks. However, they are subject to security deficiencies, related particularly to the environments on which they land or other malicious agents they may meet on their paths. Security of mobile agents is divided into two parts: the first relates to the vulnerabilities of the host environment receiving the agent, and the second concerns the malevolence of the agent towards the host platform and other agents. In this paper, we address the second part while trying to develop a hybrid solution combining the two parts. A solution for this security concern is presented and evaluated. It involves the integration of cryptographic mechanisms such as Diffie-Hellman key exchange for authentication between the pair (platform, agent) and the Advanced Encryption Standard (AES) to communicate the data with confidentiality. These mechanisms are associated with XML serialization in order to ensure easy and persistent portability across the network, especially for non-permanent connections. | ['Hind Idrissi', 'Arnaud Revel', 'El Mamoun Souidi'] | A New Approach based on Cryptography and XML Serialization for Mobile Agent Security | 733,866 |
Monitoring of posture allocations and activities enables accurate estimation of energy expenditure and may aid in obesity prevention and treatment. At present, accurate devices rely on multiple sensors distributed on the body and thus may be too obtrusive for everyday use. This paper presents a novel wearable sensor, which is capable of very accurate recognition of common postures and activities. The patterns of heel acceleration and plantar pressure uniquely characterize postures and typical activities while requiring minimal preprocessing and no feature extraction. The shoe sensor was tested in nine adults performing sitting and standing postures and while walking, running, stair ascent/descent and cycling. Support vector machines (SVMs) were used for classification. A fourfold validation of a six-class subject-independent group model showed 95.2% average accuracy of posture/activity classification on the full sensor set and over 98% on the optimized sensor set. Using a combination of acceleration/pressure also enabled a pronounced reduction of the sampling frequency (25 to 1 Hz) without significant loss of accuracy (98% versus 93%). Subjects had shoe sizes (US) M9.5-11 and W7-9 and body mass index from 18.1 to 39.4 kg/m2, thus suggesting that the device can be used by individuals with varying anthropometric characteristics. | ['Edward Sazonov', 'George D. Fulk', 'James O. Hill', 'Y. Schutz', 'Raymond C. Browning'] | Monitoring of Posture Allocations and Activities by a Shoe-Based Wearable Sensor | 358,391 |
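As a rough illustration of the classification stage above, the sketch below trains an SVM on stand-in heel-acceleration and plantar-pressure samples and evaluates it with fourfold cross-validation, mirroring the paper's protocol. All data and hyperparameters are placeholders; with random labels the accuracy is near chance, whereas the real sensor data yielded 95-98%.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))      # e.g., 3-axis heel acceleration + 3 pressure cells
y = rng.integers(0, 6, size=600)   # six posture/activity classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
print("fourfold CV accuracy:", cross_val_score(clf, X, y, cv=4).mean())
```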
In this paper, a general methodology based on the application of the discrete wavelet transform (DWT) to the diagnosis of the cage motor condition using transient stator currents is presented. The approach is based on the identification of characteristic patterns introduced by fault components in the wavelet signals obtained from the DWT of transient stator currents. These patterns enable a reliable detection of the corresponding fault as well as a clear interpretation of the physical phenomenon taking place in the machine. The proposed approach is applied to the detection of rotor asymmetries in two alternative ways, i.e., by using the startup current and by using the current during plugging stopping. Mixed eccentricities are also detected by means of the transient-based methodology. This paper shows how the evolution of other non-fault-related components such as the principal slot harmonic (PSH) can be extracted with the proposed technique. A compilation of experimental cases regarding the application of the methodology to the previous cases is presented. Guidelines for the easy application of the methodology by any user are also provided under a didactic perspective. | ['M. Riera-Guasp', 'Jose A. Antonino-Daviu', 'M. Pineda-Sanchez', 'Ruben Puche-Panadero', 'J. Perez-Cruz'] | A General Approach for the Transient Detection of Slip-Dependent Fault Components Based on the Discrete Wavelet Transform | 527,733 |
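The signal-processing core of the methodology above is a multilevel DWT of the transient stator current, whose approximation and detail signals are then inspected for the characteristic fault patterns. A minimal PyWavelets sketch follows; the simulated current, sampling rate, wavelet, and level count are all illustrative assumptions.

```python
import numpy as np
import pywt

fs = 5000.0                                         # assumed sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
current = np.sin(2 * np.pi * 50 * t) * np.exp(-t)   # toy decaying startup transient

# 6-level DWT with a Daubechies wavelet; fault signatures would appear as
# characteristic energy evolutions in particular detail levels.
coeffs = pywt.wavedec(current, "db8", level=6)
approx = coeffs[0]
for lvl, d in zip(range(6, 0, -1), coeffs[1:]):     # details from deepest to finest
    print(f"detail level {lvl}: {len(d)} coefficients, energy {np.sum(d ** 2):.2f}")
```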
This tool paper describes Leap, a tool for the verification of concurrent datatypes and parametrized systems composed of an unbounded number of threads that manipulate infinite data. Leap receives as input a concurrent program description and a specification and automatically generates a finite set of verification conditions which are then discharged to specialized decision procedures. The validity of all discharged verification conditions implies that the program executed by any number of threads satisfies the specification. Currently, Leap includes not only decision procedures for integers and Booleans, but it also implements specific theories for heap memory layouts such as linked-lists and skiplists. | ['Alejandro Sánchez', 'César Sánchez'] | LEAP: A Tool for the Parametrized Verification of Concurrent Datatypes | 488,665 |
This paper proposes an enhanced Particle Swarm Optimisation (PSO) algorithm and examines its performance. In the proposed PSO approach, PSO is combined with Evolutionary Game Theory to improve convergence. One of the main challenges of such stochastic optimisation algorithms is the difficulty in the theoretical analysis of the convergence and performance. Therefore, this paper analytically investigates the convergence and performance of the proposed PSO algorithm. The analysis results show that convergence speed of the proposed PSO is superior to that of the Standard PSO approach. This paper also develops another algorithm combining the proposed PSO with the Standard PSO algorithm to mitigate the potential premature convergence issue in the proposed PSO algorithm. The combined approach consists of two types of particles, one follows Standard PSO and the other follows the proposed PSO. This enables exploitation of both diversification of the particles' exploration and adaptation of the search direction. | ['Cédric Leboucher', 'Hyo-Sang Shin', 'Patrick Siarry', 'Stéphane Le Ménec', 'Rachid Chelouah', 'Antonios Tsourdos'] | Convergence proof of an enhanced Particle Swarm Optimisation method integrated with Evolutionary Game Theory | 599,455 |
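For reference, the Standard PSO baseline against which the paper's method is compared uses the classic velocity/position update below. This is a generic textbook implementation on the sphere function; the paper's evolutionary-game-theoretic modification of the update is not reproduced here.

```python
import numpy as np

def pso(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(f(gbest))

best, val = pso(lambda z: float(np.sum(z ** 2)))     # minimize the sphere function
print("best value found:", val)
```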
Temperature Correction and Reflection Removal in Thermal Images using 3D Temperature Mapping | ['Björn Zeise', 'Bernardo Wagner'] | Temperature Correction and Reflection Removal in Thermal Images using 3D Temperature Mapping | 876,015 |
Managing business collaboration is about coordinating the flow of information among organizations and linking their business processes into a cohesive whole. A consistent outcome is expected within and among the involved organizations. However, inadequate coordination of the autonomous, heterogeneous, and long-lasting cross-organizational business processes can lead to inconsistent execution in loosely-coupled environments, e.g., Service Oriented Computing (SOC). Therefore, a model that describes business collaboration to enforce consistency for each partner as well as consistency for the collaboration as a whole becomes essential. Previously we proposed a Business Transaction Net (BTx-Net) as a theoretical model to describe business collaboration during design time and manage consistency at run-time execution. In this paper, we further explore the BTx-Net's enforceability for managing consistency by performing Transactional Management Actions (TMAs), which need to be taken during the occurrence of runtime faults and exceptions. | ['Haiyang Sun', 'Jian Yang'] | Enforcing Business Collaboration Consistency in Business Transaction Net | 134,133 |
For a 5-bar finger mechanism with redundant actuators, it is shown that judicious choice of the location of one redundant actuator greatly enhances the load handling capacity of the system when compared to that of nonredundant systems, and it is also shown that excessive redundant actuation enables the system to modulate the end-point stiffness or motion frequency by internal load distributions. Especially, the motion frequency modulation via internal loads in redundantly actuated closed-chain systems is believed to be fairly new. To show the effectiveness of the proposed algorithms, several simulation results are illustrated. | ['Byung-Ju Yi', 'Il Hong Suh', 'Sang-Rok Oh'] | Analysis of a 5-bar finger mechanism having redundant actuators with applications to stiffness and frequency modulations | 114,361 |
Combining Image Analysis and Physical Simulation for Planning Treatments of Malignant Liver Tumors with Laser-Induced Thermotherapy | ['Arne Littmann', 'Andrea Schenk', 'Bernhard Preim', 'Andre Roggan', 'Kai S. Lehmann', 'Jörg-Peter Ritz', 'Christoph-Thomas Germer', 'Heinz-Otto Peitgen'] | Kombination von Bildanalyse und physikalischer Simulation für die Planung von Behandlungen maligner Lebertumoren mittels laserinduzierter Thermotherapie | 518,109 |
For the last decade, the Java Virtual Machine (JVM) has been a popular platform to host languages other than Java. Language implementation frameworks like Truffle allow the implementation of dynamic languages such as JavaScript or Ruby with competitive performance and completeness. However, statically typed languages are still rare under Truffle. We present Sulong, an LLVM IR interpreter that brings all LLVM-based languages including C, C++, and Fortran in one stroke to the JVM. Executing these languages on the JVM enables a wide area of future research, including high-performance interoperability between high-level and low-level languages, combination of static and dynamic optimizations, and a memory-safe execution of otherwise unsafe and unmanaged languages. | ['Manuel Rigger', 'Matthias Grimmer', 'Hanspeter Mössenböck'] | Sulong - execution of LLVM-based languages on the JVM: position paper | 990,885 |
This paper presents a novel method for the suppression of random-valued impulsive noise from corrupted images. The proposed method is composed of an efficient noise detector and a pixel-restoration operator. The noise detector has been used to discriminate the uncorrupted pixels from the corrupted pixels. The noise-free intensity values of the corrupted pixels have been computed by using triangle-based linear interpolation and the values of tuning parameters of the proposed method have been optimized with differential evolution algorithm. Extensive simulation experiments indicate that the proposed method significantly outperforms all of the comparison methods mentioned in this paper. The success of the proposed method over comparison methods is due to its excellent detail preservation performance independent from the level of noise density. | ['Pinar Civicioglu'] | Removal of random-valued impulsive noise from corrupted images | 377,408 |
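The tuning step above (optimizing the method's parameters with differential evolution) can be illustrated with SciPy. The crude local-median detector below is only a stand-in for the paper's noise detector, and the triangle-based interpolation of flagged pixels is not shown; the point is the structure of fitness-driven parameter search.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
clean = np.add.outer(np.linspace(0, 255, 64), np.linspace(0, 255, 64)) / 2.0
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.10                 # 10% random-valued impulses
noisy[mask] = rng.uniform(0, 255, int(mask.sum()))

def detect(img, thresh):
    """Flag pixels deviating from their 3x3 local median by more than thresh."""
    return np.abs(img - median_filter(img, size=3)) > thresh

def cost(params):
    return float(np.mean(detect(noisy, params[0]) != mask))   # detection error rate

result = differential_evolution(cost, bounds=[(1.0, 128.0)], seed=0, maxiter=30)
print("tuned threshold:", result.x[0], "detection error:", result.fun)
```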
Given a set of training examples in the form of (input, output) pairs, induction generates a set of rules that when applied to an input example, can come up with a target output or class for that example. At deduction time, these rules can be applied to a pre-classified test set to evaluate their accuracy. With existing rule induction systems, the rules are "frozen" on the training set, and they cannot adapt to a changing distribution of examples. In this paper we propose two approaches to dynamically refine the rules at deduction time, to overcome this limitation. For each test example, we perform a classification using existing rules. Depending on whether the classification is correct or not, the rule which was responsible for the classification is refined. When the correct classification is found, we refine the associated rule in one of two ways: by increasing the coverages of all conjunctions associated with the rule, or by increasing the coverage of the rule's most important conjunction only for the test example in question. These refined rules are then used for deducing the classifications for remaining examples. Of the two deduction methods, the second method has been shown to significantly improve the accuracy of the rules when compared to the regular non-dynamic deduction process. | ['Kalyani K. Manchi', 'Xindong Wu'] | Dynamic refinement of classification rules | 238,224 |
Solar energy is certainly an energy source worth exploring and utilizing because of the environmental protection it offers. However, the conversion efficiency of solar energy is still low. If the photovoltaic panel perpendicularly tracks the sun, the solar energy conversion efficiency will be improved. In this article, we propose an innovative method to track the sun using an image sensor. In our method, it is reasonable to assume that the brightest region in the sky image represents the location of the sun. Then, the center of the brightest region is taken to be the solar center, and is mathematically calculated using an embedded processor (Raspberry Pi). Finally, the location information on the sun center is sent to the embedded processor to control two servo motors that are capable of moving both horizontally and vertically to track the sun. In comparison with existing sun tracking methods using image sensors, such as the Hough transform method, our method based on the brightest region in the sky image remains accurate under conditions such as a sunny day and partial occlusion by buildings. The practical sun tracking system using our method was implemented and tested. The results reveal that the system successfully captured the real sun center in most weather conditions, and the servo motor system was able to direct the photovoltaic panel perpendicularly to the sun center. In addition, our system can be easily and practically integrated, and can operate in real-time. | ['Ching-Chuan Wei', 'Yu-Chang Song', 'Chia-Chi Chang', 'Chuan-Bi Lin'] | Design of a Solar Tracking System Using the Brightest Region in the Sky Image Sensor | 936,925 |
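The core image-processing step described above reduces to thresholding the sky image near its maximum brightness and taking the centroid of the resulting region. A minimal OpenCV sketch follows; 'sky.jpg', the threshold margin, and the omitted servo-control code are assumptions for illustration.

```python
import cv2

gray = cv2.imread("sky.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder image path

# Keep only pixels within a small margin of the global maximum brightness.
_, bright = cv2.threshold(gray, int(gray.max()) - 10, 255, cv2.THRESH_BINARY)

M = cv2.moments(bright, binaryImage=True)
if M["m00"] > 0:
    cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]  # centroid = estimated sun center
    print(f"sun center at pixel ({cx:.1f}, {cy:.1f})")  # would drive the two servos
else:
    print("no bright region found (e.g., heavy overcast)")
```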
We develop, in this paper, a representation of time and events that supports a range of reasoning tasks such as monitoring and detection of event patterns which may facilitate the explanation of root cause(s) of faults. We shall compare two approaches to event definition: the active database approach in which events are defined in terms of the conditions for their detection at an instant, and the knowledge representation approach in which events are defined in terms of the conditions for their occurrence over an interval. We shall show the shortcomings of the former definition and employ a three-valued temporal first order nonmonotonic logic, extended with events, in order to integrate both definitions. | ['Nadim Obeid', 'Raj B. K. N. Rao'] | On integrating event definition and event detection | 129,495 |
In recent years, data envelopment analysis (DEA) has been widely used to assess both efficiency and effectiveness. Accurate measurement of overall performance is a product of concurrent consideration of these measures. There are a couple of well-known methods to assess both efficiency and effectiveness. However, some issues can be found in previous methods. The issues include the non-linearity problem, paradoxical improvement solutions, efficiency and effectiveness evaluation in two independent environments (i.e., dividing an operating unit into two autonomous departments for performance evaluation), and problems associated with determining economies of scale. To overcome these issues, this paper aims to develop a series of linear DEA methods to estimate efficiency, effectiveness, and returns to scale of decision-making units (DMUs) simultaneously. This paper considers the departments of a DMU as a united entity to recommend consistent improvements. We first present a model under the constant returns to scale (CRS) assumption, and examine its relationship with one of the existing network DEA models. We then extend the model under the variable returns to scale (VRS) condition, and again its relationship with one of the existing network DEA models is discussed. Next, we introduce a new integrated two-stage additive model. Finally, an in-depth analysis of returns to scale is provided. A case study demonstrates the applicability of the proposed models. | ['Mohsen Khodakarami', 'Amir Shabani', 'Reza Farzipoor Saen'] | Concurrent estimation of efficiency, effectiveness and returns to scale | 590,019 |
An undergraduate laboratory based on a functionally complete set of virtual tools for laboratory experimenting is described in the paper. An outstanding feature of the experimenting environment is an easy-to-use, graphical user interface to a laboratory experiment. It significantly shortens the time needed to implement experimenter-defined laboratory procedures and eliminates the need for high-level-language programming on the experimenter side. Taking advantage of industry-wide standards such as VISA, GP-IB, and VXI, our virtual instruments can perform their function using either physical or simulated, local or remote and network-distributed instruments coming from a variety of different manufacturers. A paperless laboratory with on-screen graded measurement reports was fully implemented. The environment as-is allows remote experimenting, worldwide in principle. It currently requires a remote LabVIEW™ or XWindow™ session. Work is in progress on giving full access to our virtual tools using Java™-capable web browsers such as Netscape™, HotJava™, or MS Explorer™ and thus to provide students with a student-affordable remote experimenting platform. | ['Marcin A. Stegawski', 'Rolf Schaumann'] | A new virtual-instrumentation-based experimenting environment for undergraduate laboratories with application in research and manufacturing | 52,515 |
We identify data-intensive operations that are common to classifiers and develop a middleware that decomposes and schedules these operations efficiently using a backend SQL database. Our approach has the added advantage of not requiring any specialized physical data organization. We demonstrate the scalability characteristics of our enhanced client with experiments on Microsoft SQL Server 7.0 by varying data size, number of attributes and characteristics of decision trees. | ['Surajit Chaudhuri', 'Usama M. Fayyad', 'Jeff Bernhardt'] | Scalable classification over SQL databases | 76,229 |
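The middleware idea above rests on a simple observation: the sufficient statistics a classifier such as a decision-tree learner needs (per-split class counts) can be pushed down to the database as a single GROUP BY aggregate instead of scanning rows client-side. A toy sketch using SQLite follows; the table, columns, and split predicate are hypothetical, and the original work targeted Microsoft SQL Server 7.0.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (age INT, income INT, label TEXT);
    INSERT INTO customers VALUES (25, 40, 'no'), (38, 90, 'yes'),
                                 (51, 60, 'yes'), (19, 20, 'no');
""")

# One round trip returns per-(bucket, class) counts for scoring a candidate split.
query = """
    SELECT CASE WHEN age < 35 THEN 'young' ELSE 'older' END AS bucket,
           label, COUNT(*) AS n
    FROM customers
    GROUP BY bucket, label
"""
for bucket, label, n in conn.execute(query):
    print(bucket, label, n)
```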
A new multiobjective genetic programming approach using compromise distance ranking for automated design of nonlinear system design | ['Shuang Wei', 'Defu Jiang', 'Feng Wang'] | A new multiobjective genetic programming approach using compromise distance ranking for automated design of nonlinear system design | 811,502 |
We present a novel planning strategy which is applicable to high performance unmanned aerial vehicles. The proposed approach takes as input a 3-D sequence of way-points connected by straight flight trim conditions, and "smooths" it in an optimal way with the goal of making it compatible with the vehicle dynamics. The smoothing step is achieved by selecting appropriate sequences of alternating trims and maneuvers from within a precomputed library of motion primitives. The resulting extremal trajectory is compatible with the vehicle and therefore trackable with small errors; furthermore, it is guaranteed to stay within the flight envelope boundary, alleviating the need for flight envelope protection systems. Yet it can be computed in real-time using closed-form expressions, all nonlinearities due to the vehicle model being confined to the stored library of motion primitives. The new method is demonstrated for the aggressive maneuvering of a helicopter. | ['Carlo L. Bottasso', 'D. Leonello', 'Barbara Savini'] | Path Planning for Autonomous Vehicles by Trajectory Smoothing Using Motion Primitives | 186,516 |
A dynamic hybrid position/force control method, which takes into consideration the manipulator dynamics and the constraints on the end-effector specified by the given task, is obtained by extending the method of M.H. Raibert and J.J. Craig (1981). One difficulty in implementing the method is that precise information on the size and position of the object with which the end-effector contacts is not usually available. To cope with this difficulty, a problem of dynamic hybrid control with unknown constraint is studied. An online algorithm is developed that estimates the local shape of the constraint surface by using measured data on the position and force of the end-effector. It is shown by experiments, using a SCARA-type robot, that the combination of this algorithm with the dynamic hybrid control method works fairly well, making the dynamic hybrid control approach more practical. | ['Tsuneo Yoshikawa', 'Akio Sudou'] | Dynamic hybrid position/force control of robot manipulators-on-line estimation of unknown constraint | 335,012 |
We present identities used to represent real numbers of the form xu^m ± yv^n for appropriately chosen real numbers x, y, u, v and nonnegative integers m and n. We present the proofs of the identities by applying Zeilberger's algorithm. | ['George Grossman', 'Akalu Tefera', 'Aklilu Zeleke'] | Summation identities for representation of certain real numbers | 36,446 |
Remote Usability Tests of Model-Based Interactive Systems | ['Gregor Buchholz', 'Anke Dittmar', 'Peter Forbrig', 'Daniel Reichart', 'Andreas Wolff'] | Remote UsabilityTests von modellbasierten interaktiven Systemen | 551,741 |
This is a commentary to the McGeoch's feature article. It focuses on a few aspects of the feature article, primarily from the background of computational tests on network optimization algorithms and on combinatorial optimization algorithms. | ['James B. Orlin'] | Commentary—On Experimental Methods for Algorithm Simulation | 216,121 |
We discuss uses of embedded computing questions (ECQs) in interactive electronic textbooks on programming, identifying a non-exhaustive list of three main categories of ECQs and nine subcategories. The main categories are: ECQs that introduce content, ECQs that reinforce learning, and ECQs that highlight content. We provide examples from an existing ebook, discuss how student perceptions may pose challenges to the use of ECQs, and invite the research community to debate ECQs and investigate them empirically. | ['Juha Sorva', 'Teemu Sirkiä'] | Embedded questions in ebooks on programming: useful for a) summative assessment, b) formative assessment, or c) something else? | 647,919 |
A Different View of Learning and Knowledge Creation in Collaborative Networks | ['Frans M. van Eijnatten', 'Goran D. Putnik'] | A Different View of Learning and Knowledge Creation in Collaborative Networks | 123,013 |
Conventional models of multirobot control assume independent robots and tasks. This allows an additive model in which the operator controls robots sequentially neglecting each until its performance deteriorates sufficiently to require new operator input. This paper presents a model and experiment intended to extend the neglect tolerance model to situations in which robots must cooperate to perform dependent tasks. In the experiment operators controlled 2 robot teams to perform a box pushing task under high cooperation demand (teleoperation), moderate demand (waypoint control/heterogeneous robots), and low demand (waypoint control/homogeneous robots) conditions. Measured demand and performance were consistent with the model's predictions. | ['Jijun Wang', 'Michael Lewis'] | Assessing coordination overhead in control of robot teams | 386,005 |
Addresses the issue of developing a motion planning algorithm for a general class of modular mobile robots. A modular mobile robot is essentially a reconfigurable robot system in which locomotive modules such as legs, wheels, propellers, etc. can be attached at various locations on the body. The equations of motion for the robot are developed in terms of a set of generalized inputs related to the constraints and configurations of each individual module. The motion planning method developed in Lafferriere and Sussmann (1991) and extended in Goodwine and Burdick (1997) can then be modified to generate required trajectories for the modular robot. These methods generally lead to sequential motion plans-we also develop conditions under which two consecutive plans can be executed simultaneously. | ['Sachin Chitta', 'James P. Ostrowski'] | Motion planning for heterogeneous modular mobile systems | 412,204 |
Blind multiply distorted image quality assessment using relevant perceptual features | ['Chaofeng Li', 'Yu Zhang', 'Xiaojun Wu', 'Wei Fang', 'Li Mao'] | Blind multiply distorted image quality assessment using relevant perceptual features | 673,284 |
Asymptotic normality and confidence intervals for inverse regression models with convolution-type operators | ['Nicolai Bissantz', 'Melanie Birke'] | Asymptotic normality and confidence intervals for inverse regression models with convolution-type operators | 637,569 |
The Grid is a heterogeneous and dynamic environment which enables distributed computation. This makes it a technology prone to failures. Some related work uses replication to overcome failures in a set of independent tasks, and in workflow applications, but they do not consider possible resource limitations when scheduling the replicas. In this paper, we focus on the use of task replication techniques for workflow applications, trying to achieve not only tolerance to the possible failures in an execution, but also to speed up the computation without requiring the user to implement application-level checkpointing, which may be a difficult task depending on the application. Moreover, we also study what to do when there are not enough resources for replicating all running tasks. We establish different replication priorities depending on the graph of the workflow application, giving higher priority to tasks with a higher output degree. We have implemented our proposed policy in the GRID superscalar system, and we have run fastDNAml as an experiment to show that our objectives are met. Finally, we have identified and studied a problem which may arise due to the use of replication in workflow applications: the replication wait time. | ['Raül Sirvent', 'Rosa M. Badia', 'Jesús Labarta'] | Graph-Based Task Replication for Workflow Applications | 381,282 |
The properties of an LPAPI-matrix derived from an extended Direct-SIMPLE scheme are demonstrated. It is shown that such an LPAPI-matrix for a 3D strong P-V coupling problem is by nature difficult to solve numerically, though the Direct-SIMPLE algorithm is theoretically efficient and computationally available for a 2D STP-based simulation. Convergent computation cannot be achieved for an entire 3D solidification process using SOR, Jacobi and ICCG iterative methods to solve the 3D P-V coupling. This calls for special solution techniques for the LPAPI-matrix to enable an efficient 3D STP-based simulation with an extended Direct-SIMPLE algorithm. | ['Daming Xu', 'Jun Ni'] | A Numerical Approach of Direct-SIMPLE Deduced Pressure Equations to Simulations of Transport Phenomena During Shaped Casting | 252,681 |
Quantitative analysis of soccer players' passing ability focuses on descriptive statistics without considering the players' real contribution to the passing and ball possession strategy of their team. Which player is able to help the build-up of an attack, or to maintain the possession of the ball? We introduce a novel methodology called QPass to answer questions like these quantitatively. Based on the analysis of an entire season, we rank the players based on the intrinsic value of their passes using QPass. We derive an album of pass trajectories for different gaming styles. Our methodology reveals a quite counterintuitive paradigm: losing ball possession could lead to better chances of winning a game. | ['László Gyarmati', 'Rade Stanojevic'] | QPass: a Merit-based Evaluation of Soccer Passes | 896,939 |
This paper states asymptotic solvability results which yield a Farkas lemma for sublinear functions and operators. Applications to duality for homogeneous programming and to necessary optimality conditions for nonlinear programming are given. Some closedness conditions used in the paper are also discussed. | ['Constantin Zălinescu'] | Solvability results for sublinear functions and operators | 447,569 |
Low Inter-Annotator Agreement in Sentence Boundary Detection and Annotator Personality | ['Anton Stepikhov', 'Anastassia Loukina'] | Low Inter-Annotator Agreement in Sentence Boundary Detection and Annotator Personality | 870,638 |
Projections of pattern structures don't always lead to pattern structures; however, residual projections and o-projections do. As a unifying approach, we introduce the notion of pattern morphisms between pattern structures and provide a general sufficient condition for a homomorphic image of a pattern structure being again a pattern structure. In particular, we obtain a better understanding of the theory of o-projections. | ['Lars Lumpe', 'Stefan E. Schmidt'] | Pattern Structures and Their Morphisms. | 749,462 |
Graph processing has achieved a lot of attention in different big data scenarios. In this paper, we present the design, implementation, and experimental evaluation of graph processing algorithms in two different application areas. First, we use semi-clustering as an example of an algorithm typically used in social network analysis. Then, we examine an algorithm for collaborative filtering as typically used in e-commerce scenarios. For both algorithms, we make use of Apache GraphX as an existing distributed graph processing framework based on Apache Spark. As GraphX does not include these two algorithms, we describe how to implement them using a combination of GraphX and the underlying Spark Core. Based on our implementation, we perform experiments to test the scalability of both the algorithms and the GraphX processing framework. The experiments show that different kinds of graph algorithms can be supported within the Spark framework. Furthermore, we show that for our test data the algorithms scale almost linearly when properly designed. | ['Jakob Smedegaard Andersen', 'Olaf Zukunft'] | Evaluating the Scaling of Graph-Algorithms for Big Data Using GraphX | 893,763 |
The intention of shape coding in the MPEG-4 is to improve the coding efficiency as well as to facilitate the object-oriented applications, such as shape-based object recognition and retrieval. These require both efficient shape compression and effective shape description. Although these two issues have been intensively investigated in data compression and pattern recognition fields separately, it remains an open problem when both objectives need to be considered together. To achieve high coding gain, the operational rate-distortion optimal framework can be applied, but the direction restriction of the traditional eight-direction edge encoding structure reduces its compression efficiency and description effectiveness. We present two arbitrary direction edge encoding structures to relax this direction restriction. They consist of a sector number, a short component, and a long component, which represent both the direction and the magnitude information of an encoding edge. Experiments on both shape coding and hand gesture recognition validate that our structures can reduce a large number of encoding vertices and save up to 48.9% of bits. Besides, the object contours are effectively described and suitable for object-oriented applications. | ['Zhongyuan Lai', 'Junhuan Zhu', 'Jiebo Luo'] | Operationally optimal vertex-based shape coding with arbitrary direction edge encoding structures | 458,384 |
Temporal coordination patterns of performer and audience in vaudeville settings. | ['Ryota Nomura', 'Takeshi Okada'] | Temporal coordination patterns of performer and audience in vaudeville settings. | 761,912 |
Advances in fuzzy cognitive maps theory | ['Wojciech Froelich', 'Jose L. Salmeron'] | Advances in fuzzy cognitive maps theory | 955,262 |
A methodology is presented which allows comparison between models under different modeling paradigms. Consider the following situation: Two models have been constructed to study different aspects of the same system. One model simulates a fleet of aircraft moving a given combination of cargo and passengers from an onload point to an offload point. A second model is an optimization (linear programming) model, which for given cargo and passenger requirements, optimizes aircraft and route selection in order to minimize late- and non-deliveries. The optimization model represents a much more aggregated view of the airlift system than does the simulation. The two models do not have immediately comparable input or output structures, which complicates a comparison of the two models’ outputs. A methodology is developed to structure this comparison. Two models which compare favorably using this methodology are considered covalid models. We define the covalidity of models in the narrow sense as models which perform similarly under approximately the same input conditions. Structurally different models which are covalid (in the narrow sense) may hold the potential to be used in an iterative fashion to improve the input (and thus, the output) of one another. Ultimately it is hoped that we may, through such a series of innovations, effect a convergence to valid representations of the real-world situation. We define such a condition as covalidation in the wide sense. Further, if one of the models has been independently validated (in the traditional meaning), then we may effect a validation by proxy of the other model through this process. | ['Sammuel A. Wright', 'Kenneth W. Bauer'] | Covalidation of dissimilarly structured models | 83,149 |
In this paper, we propose a new spatial and temporal encoding approach for generic on-chip global buses with repeaters that enables higher performance while reducing peak energy and average energy. The proposed encoding approach exploits the benefits of temporal encoding circuit and spatial bus-invert coding techniques to simultaneously eliminate opposite transitions on adjacent wires and reduce the number of self-transitions and coupling-transitions. In the design process of applying encoding techniques for reduced bus delay and energy, we present a repeater insertion design methodology to determine the repeater size and inter-repeater bus length which minimizes the total bus energy dissipation while satisfying target delay and slew-rate constraints. This methodology can be employed to obtain optimal energy vs. delay trade-offs under slew-rate constraint for various encoding techniques. | ['Qingli Zhang', 'Jinxiang Wang', 'Yizheng Ye'] | Delay and Energy Efficient Design of On-Chip Encoded Bus with Repeaters | 332,457 |
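One of the two ingredients combined above, spatial bus-invert coding, is easy to state in code: invert the outgoing word whenever more than half the bus lines would otherwise toggle, and signal the inversion on one extra line. The sketch below is the classic textbook scheme; the paper's temporal-encoding half and the repeater sizing methodology are not shown.

```python
def bus_invert(words, width=8):
    """Yield (encoded_word, invert_bit) pairs reducing self-transitions."""
    prev = 0
    mask = (1 << width) - 1
    for w in words:
        toggles = bin((w ^ prev) & mask).count("1")
        inv = 1 if toggles > width // 2 else 0
        if inv:
            w = ~w & mask   # send the complement: fewer lines switch
        prev = w            # the bus now carries the encoded word
        yield w, inv

for enc, inv in bus_invert([0b00000000, 0b11111110, 0b11111111, 0b00000001]):
    print(f"{enc:08b} invert={inv}")
```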
Background: With the introduction of the electronic health record, physiotherapists too are encouraged to store their patient records in a structured digital format. The typical nature of a physiotherapy treatment requires a specific record structure to be implemented, with special attention to user-friendliness and communication with other healthcare providers. Objective: The objective of this study was to establish a framework for the electronic physiotherapy record and to define a model for the interoperability with the other healthcare providers involved in the patients' care. Although we started from the Belgian context, we used a generic approach so that the results can easily be extrapolated to other countries. The framework we establish here defines not only the different building blocks of the electronic physiotherapy record, but also describes the structure and the content of the exchanged data elements. Methods: Through a combined effort by all involved parties, we elaborated an eight-level structure for the electronic physiotherapy record. Furthermore we designed a server-based model for the exchange of data between electronic record systems held by physicians and those held by physiotherapists. Two newly defined XML messages enable data interchange: the physiotherapy prescription and the physiotherapy report. Results: We succeeded in defining a solid, structural model for electronic physiotherapist record systems. Recent wide scale implementation of operational elements such as the electronic registry has proven to make the administrative work easier for the physiotherapist. Moreover, within the proposed framework all the necessary building blocks are present for further data exchange and communication with other healthcare parties in the future. Conclusions: Although we completed the design of the structure and already implemented some new aspects of the electronic physiotherapy record, the real challenge lies in persuading the end-users to start using these electronic record systems. Via a quality label certification procedure, based on adequate criteria, the Ministry of Health tries to promote the use of electronic physiotherapy records. We must keep in mind that physiotherapists will show an interest in electronic record keeping, only if this will lead to a positive return for them. | ['Ronald Buyl', 'Marc Nyssen'] | Structured electronic physiotherapy records | 302,483 |
Economically Meaningful Evaluation of In-Memory-Based Business Information Systems. | ['Marco Meier', 'Alexa Scheffler'] | Ökonomisch sinnhafte Bewertung von In-Memory-basierten betrieblichen Informationssystemen. | 750,379 |
The objective of the study was to achieve balanced load among processors, reduce the communication overhead of the load balancing algorithm, and improve resource utilization, which results in better average response time. A communication protocol and a fully distributed algorithm for dynamic load balancing through task migration in a connected N-processor network are presented. Each processor communicates its load directly with only a subset (of size √N) of processors, reducing communication traffic and average response time. It is proved that the given algorithm will perform task migration even if there is only one lightly loaded processor and one heavily loaded processor in the system. Simulation results show that the proposed scheme can save up to 60% of the protocol messages used by the broadcast algorithms and can reduce the average response time. | ['Tony T. Y. Suen', 'Johnny Wong'] | Efficient task migration algorithm for distributed systems | 51,587 |
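The structural idea above, each processor exchanging load information with only about √N partners instead of broadcasting to all N, can be illustrated with one possible partner construction: arrange the N processors in a logical √N x √N grid and let each one talk to its row. This grid choice is an assumption for illustration, not the paper's exact protocol.

```python
import math

def partners(pid, n):
    """Row-mates of processor pid in a sqrt(n) x sqrt(n) logical grid
    (n assumed to be a perfect square)."""
    k = math.isqrt(n)
    row = pid // k
    return [row * k + j for j in range(k) if row * k + j != pid]

n = 16
for pid in range(4):
    print(pid, "->", partners(pid, n))   # each processor has sqrt(16) - 1 = 3 partners
```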
This paper describes an XSEDE Extended Collaboration Support Service (ECSS) effort on scaling a campus-developed online BLAST service (BLASTer) into an XSEDE gateway to help bridge the gap between genomic researchers and advanced computing and data environments like those found in the Extreme Science and Engineering Discovery Environment (XSEDE) network. Biologists and geneticists all over the world use the suite of Basic Local Alignment Search Tools (BLAST) developed by the National Center for Biotechnology Information (NCBI) throughout the full spectrum of genomic research. It has become one of the de facto bioinformatics applications used in all variety of computing environments. BLASTer allows researchers to achieve those tasks faster and without expert computing knowledge by converting BLAST jobs to parallel executions. It handles all of the details of computation submission, execution, and database access for users through an intuitive web-based interface provided by the unique features of the HUBzero gateway platform. This paper details the core development of BLASTer for campus computing resources at Purdue University, some of its successes among the user community, and the current efforts by an ECSS scientific gateways project from XSEDE to include data-intensive use of resources like Wrangler at the Texas Advanced Computing Center (TACC) in the XSEDE network. The lessons learned from this project will be used to bring other XSEDE computing resources to BLASTer in the future and other programs like BLASTer to XSEDE users. | ['Christopher S. Thompson', 'Steven M. Clark', 'X. Carol Song'] | The XSEDE BLAST Gateway: Leveraging Campus Development for the XSEDE Community | 877,175 |
This paper presents a novel FPGA-based design for the lightweight block cipher PRESENT and its implementation results. The proposed design allows to study area-performance trade-offs and thus constructing smaller or faster implementations. When optimized by area, the proposed design exhibits smaller latency and fewer FPGA resources than representative related works in the literature. | ['Carlos Andres Lara-Nino', 'Miguel Morales-Sandoval', 'Arturo Diaz-Perez'] | Novel FPGA-Based Low-Cost Hardware Architecture for the PRESENT Block Cipher | 920,261 |
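For readers unfamiliar with the cipher being implemented, the sketch below is a functional software model of PRESENT-80's round structure (addRoundKey, sBoxLayer, pLayer over a 64-bit state, 31 rounds plus final whitening), following the published specification rather than the paper's hardware datapath or its area/speed trade-offs.

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(s):
    """Apply the 4-bit S-box to each of the 16 nibbles of the state."""
    return sum(SBOX[(s >> (4 * i)) & 0xF] << (4 * i) for i in range(16))

def p_layer(s):
    """Bit i of the state moves to position 16*i mod 63 (bit 63 is fixed)."""
    out = 0
    for i in range(64):
        out |= ((s >> i) & 1) << (63 if i == 63 else (16 * i) % 63)
    return out

def round_keys(key80):
    keys, k = [], key80
    for rnd in range(1, 33):
        keys.append(k >> 16)                                # leftmost 64 bits
        k = ((k << 61) | (k >> 19)) & ((1 << 80) - 1)       # rotate left by 61
        k = (SBOX[k >> 76] << 76) | (k & ((1 << 76) - 1))   # S-box on top nibble
        k ^= rnd << 15                                      # XOR round counter
    return keys

def encrypt(block, key80):
    keys = round_keys(key80)
    state = block
    for r in range(31):
        state = p_layer(sbox_layer(state ^ keys[r]))
    return state ^ keys[31]                                 # final key whitening

# All-zero key and plaintext; should reproduce the spec's published test vector.
print(hex(encrypt(0x0, 0x0)))
```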
We evaluate a low-cost fault-tolerance mechanism for microprocessors, which can detect and recover from transient faults, using multimedia applications. There are two driving forces to study fault-tolerance techniques for microprocessors. One is deep submicron fabrication technologies. Future semiconductor technologies could become more susceptible to alpha particles and other cosmic radiation. The other is the increasing popularity of mobile platforms. Recently cell phones have been used for applications which are critical to our financial security, such as flight ticket reservation, mobile banking, and mobile trading. In such applications, it is expected that computer systems will always work correctly. From these observations, we propose a mechanism which is based on an instruction reissue technique for incorrect data speculation recovery which utilizes time redundancy. Unfortunately, we found significant performance loss when we evaluated the proposal using the SPEC2000 benchmark suite. We evaluate it using MediaBench which contains more practical mobile applications than SPEC2000. | ['Toshinori Sato', 'Itsujiro Arita'] | Evaluating low-cost fault-tolerance mechanism for microprocessors on multimedia applications | 401,053 |
We review recent biological vision studies that are related to human motion segmentation. Our goal is to develop a practically plausible computational framework that is guided by recent cognitive and psychological studies on the human visual system for the segmentation of human body in a video sequence. Specifically, we discuss the roles and interactions of bottom-up and top-down processes in visual perception processing as well as how to combine them synergistically in one computational model to guide human motion segmentation. We also examine recent research on biological movement perception, such as neural mechanisms and functionalities for biological movement recognition and two major psychological tracking theories. We attempt to develop a comprehensive computational model that involves both bottom-up and top-down processing and is deeply inspired by biological motion perception. According to this model, object segmentation, motion estimation, and action recognition are results of recurrent feedforward (bottom-up) and feedback (top-down) processes. Some open technical questions are also raised and discussed for future research. | ['Cheng Chen', 'Guoliang Fan'] | What Can We Learn from Biological Vision Studies for Human Motion Segmentation | 887,654 |
A Method for Transforming TimeER Model-Based Specification into Temporal XML | ['Quang Hoang', 'Tinh Van Nguyen', 'Hoang Lien Minh Vo', 'Truong Thi Nhu Thuy'] | A Method for Transforming TimeER Model-Based Specification into Temporal XML | 828,893 |
In this paper, we propose an interactive system for reconstructing human facial expressions. In the system, a nonlinear mass-spring model is employed to simulate twenty-two facial muscles' tensions during facial expressions, and then the elastic forces of these tensions are grouped into a vector which is used as the input for facial expression recognition. The experimental results show that the nonlinear facial mass-spring model coupled with the SVM classifier is effective at recognizing facial expressions. Finally, we introduce our robot that can make artificial facial expressions. Experimental results of facial expression generation demonstrate that our robot can imitate six types of facial expressions. | ['Shuzhi Sam Ge', 'Chen Wang', 'Chang Chieh Hang'] | Facial expression imitation in human robot interaction | 242,533 |
We define a two-player virus game played on a finite cyclic digraph G = (V, E). Each vertex is either occupied by a single virus, or is unoccupied. A move consists of transplanting a virus from some u into a selected neighborhood N(u) of u, while devouring every virus in N(u), and replicating in N(u), i.e., placing a virus on all vertices of N(u) where there wasn't any virus. The player first killing all the virus wins, and the opponent loses. If there is no last move, the outcome is a draw. Giving a minimum of the underlying theory, we exhibit the nature of the games on hand of examples. The 3-fold motivation for exploring these games stems from complexity considerations in combinatorial game theory, extending the hitherto 0-player and solitaire cellular automata games to two-player games, and the theory of linear error correcting codes. | ['Aviezri S. Fraenkel'] | Virus versus mankind | 860,928 |
This paper addresses the problem of automatic 3D shape segmentation in point cloud representation. Of particular interest are segmentations of noisy real scans, which is a difficult problem in previous works. To guide segmentation of target shape, a small set of pre-segmented exemplar shapes in the same category is adopted. The main idea is to register the target shape with exemplar shapes in a piece-wise rigid manner, so that pieces under the same rigid transformation are more likely to be in the same segment. To achieve this goal, an over-complete set of candidate transformations is generated in the first stage. Then, each transformation is treated as a label and an assignment is optimized over all points. The transformation labels, together with nearest-neighbor transferred segment labels, constitute final labels of target shapes. The method is not dependent on high-order features, and thus robust to noise as can be shown in the experiments on challenging datasets. | ['Rongqi Qiu', 'Ulrich Neumann'] | Exemplar-Based 3D Shape Segmentation in Point Clouds | 960,028 |
This research proposes an evaluation model to measure the success of a computer programming learning system in helping students overcome learning difficulties. Teachers use programming learning environments as computer-assisted instruction systems. The proposed model is grounded in the theory of information systems success evaluation and in the characteristics of a virtual programming learning tool. The tool used was a web-based virtual robot learning system. The results presented here suggest the main features that this type of solution should include. | ['Carlos J. Costa', 'Manuela Aparicio'] | Evaluating success of a programming learning tool | 349,259
We present techniques for computing small-space representations of massive data streams. These are inspired by traditional wavelet-based approximations that consist of specific linear projections of the underlying data. We present general "sketch"-based methods for capturing various linear projections and use them to provide pointwise and rangesum estimation of data streams. These methods use small amounts of space and per-item time while streaming through the data, and they provide accurate representations, as our experiments with real data streams show. | ['Anna C. Gilbert', 'Yannis Kotidis', 'S. Muthukrishnan', 'Martin J. Strauss'] | One-pass wavelet decompositions of data streams | 161,171
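To make the idea concrete, here is a minimal linear-projection ("sketch") structure supporting streaming updates, with a range-sum estimator; the domain size, sketch size, and dense random +/-1 projection are illustrative simplifications, not the paper's exact constructions:

```python
# Sketch of a sketch: a random +/-1 linear projection of the data vector,
# updated one stream item at a time, queried for approximate range sums.
import numpy as np

rng = np.random.default_rng(1)
n, k = 1024, 64                          # domain size, sketch size
A = rng.choice([-1.0, 1.0], size=(k, n)) / np.sqrt(k)

sketch = np.zeros(k)

def update(i, delta):
    # One streaming update x[i] += delta; the sketch is linear in the data.
    sketch[:] += delta * A[:, i]

def rangesum(lo, hi):
    # <A x, A q> approximates <x, q> for the range indicator q.
    q = np.zeros(n)
    q[lo:hi] = 1.0
    return sketch @ (A @ q)

x = np.zeros(n)                          # ground truth, kept only for checking
for _ in range(5000):
    i, d = rng.integers(n), rng.normal()
    update(i, d)
    x[i] += d

print("true:", round(x[100:200].sum(), 2), "estimate:", round(rangesum(100, 200), 2))
```

Note that a dense k x n projection is used here only for clarity; small-space implementations generate the +/-1 entries pseudo-randomly on the fly.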
There is a growing need for a theory of "local to global" in distributed multi-agent systems, one that is able to describe and analyze a variety of problems systematically. This is the first in a series of two papers that begins to develop such a theory. Here, we analyze one particular multi-agent problem - the "equigrouping problem," in which multiple identical agents organize themselves into groups of equal size. We develop a formal model for describing the system and a notion of equivalence characterizing multi-agent algorithms in terms of the group behaviors induced by the algorithm. Our main result is a characterization of the space of all solutions to the equigrouping problem with respect to this group behavior equivalence. The result allows us to obtain infinitely many substantially different solutions to the equigrouping problem, and to understand these different solutions in a qualitatively satisfying manner. The second paper in this series indicates how to develop and generalize the modeling method obtained here to other problems. | ['Daniel Yamins'] | Towards a theory of "local to global" in distributed multi-agent systems (II) | 332,521
The paper focuses on an approach to the study of the dynamics and control of large flexible space structures composed of subassemblies, a subject of considerable contemporary interest. To begin with, a relatively general Lagrangian formulation of the problem is presented. The governing equations are nonlinear, nonautonomous, coupled, and extremely lengthy even in matrix notation. Next, an efficient computer code is developed and the versatility of the program illustrated through a dynamical study of the first element launch (FEL) configuration of the Space Station Freedom, now superseded by the International Space Station. Finally, robust control of the rigid body motion of the FEL configuration using both the linear-quadratic-Gaussian/loop transfer recovery (LQG/LTR) and H-infinity procedures is demonstrated. The controllers, designed using the simplified linear models, prove to be effective in regulating librational disturbances. Such a global approach (formulation, numerical code, dynamics, and control) is indeed rare. It can serve as a powerful tool to gain comprehensive understanding of dynamical interactions and thus aid in the development of an effective and efficient control system. | ['Anant Grewal', 'Vinod J. Modi'] | Multibody dynamics and robust control of flexible spacecraft | 468,658
An alignment matching method to explore pseudosyllable properties across different corpora. | ['Raymond W. M. Ng', 'Thomas Hain', 'Keikichi Hirose'] | An alignment matching method to explore pseudosyllable properties across different corpora. | 767,129 |
We analyse an abridged version of the active-set algorithm FPC_AS proposed in Wen et al. [A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation, SIAM J. Sci. Comput. 32 (2010), pp. 1832–1857] for solving the l1-regularized problem, i.e., a weighted sum of the l1-norm ||x||_1 and a smooth function f(x). The active-set algorithm alternates between two stages. In the first, "non-monotone line search" (NMLS) stage, an iterative first-order method based on "shrinkage" is used to estimate the support at the solution. In the second, "subspace optimization" stage, a smaller smooth problem is solved to recover the magnitudes of the non-zero components of x. We show that NMLS itself is globally convergent and that the convergence rate is at least R-linear. In particular, NMLS is able to identify the zero components of a stationary point after a finite number of steps under some mild conditions. The global convergence of FPC_AS is established based on the properties of NMLS. | ['Zaiwen Wen', 'Wotao Yin', 'Hongchao Zhang', 'Donald Goldfarb'] | On the convergence of an active-set method for ℓ1 minimization | 428,497
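To make the first-stage operator concrete, here is a plain iterative shrinkage (soft-thresholding) loop for mu*||x||_1 + 0.5*||Ax - b||^2 on synthetic data; the non-monotone line search and the subspace-optimization stage of FPC_AS are deliberately not reproduced:

```python
# Sketch of the "shrinkage" first-order iteration underlying NMLS.
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, mu, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, mu / L)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
b = A @ x_true
x_hat = ista(A, b, mu=0.05)
print("estimated support:", np.flatnonzero(np.abs(x_hat) > 1e-2))
```

The support identified by such shrinkage iterations is exactly what the second, subspace-optimization stage then refines.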
Over 660 Chinese researchers were questioned about their scholarly use, citing, and publishing and how trust is exercised in these key activities. Research showed few signs of new forms of scholarly usage behaviour taking hold, despite multiple opportunities afforded by Science 2.0 developments. Thus, for determining trustworthiness for usage purposes, the most important activity was reading the | ['David Nicholas', 'Jie Xu', 'Lifang Xu', 'Jing Su', 'Anthony Watkinson'] | Chinese researchers, scholarly communication behaviour and trust | 601,823 |
Advanced receivers are a key component of the 5th Generation (5G) ultra-dense small cells concept, given their capability of efficiently dealing with the ever-increasing problem of inter-cell interference. In this paper, we evaluate the potential of interference suppression receivers in real network scenarios using a software defined radio (SDR) testbed. The experimentation is carried out in indoor office and open hall scenarios. In particular, we study to what extent the interference suppression receiver is an alternative to traditional frequency reuse techniques. To this end we evaluate the Interference Rejection Combining (IRC) and Successive Interference Cancellation (SIC) receivers and different rank adaptation approaches. Each node in our SDR testbed features a 2x2 MIMO transceiver built with the USRP N200 hardware by Ettus Research. Our experimental results confirm that interference suppression receivers can be a valid alternative to frequency reuse, achieving nearly the same outage and higher peak throughput performance. | ['Dereje A. Wassie', 'Gilberto Berardinelli', 'Davide Catania', 'Fernando Menezes Leitão Tavares', 'Troels Bundgaard Sørensen', 'Preben Mogensen'] | Experimental Evaluation of Interference Suppression Receivers and Rank Adaptation in 5G Small Cells | 636,209
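As background, a back-of-the-envelope sketch of the MMSE-IRC combining rule w = R^{-1}h, with R the interference-plus-noise covariance; the two-antenna channel vectors, single interferer, and noise level are invented for illustration and are unrelated to the testbed measurements:

```python
# Sketch: IRC combining on a 2-antenna receiver with one interferer.
import numpy as np

rng = np.random.default_rng(3)
h = rng.normal(size=2) + 1j * rng.normal(size=2)   # desired channel
g = rng.normal(size=2) + 1j * rng.normal(size=2)   # interferer channel
noise_var = 0.1

# Interference-plus-noise covariance, then the IRC weights w = R^{-1} h.
R = np.outer(g, g.conj()) + noise_var * np.eye(2)
w = np.linalg.solve(R, h)

# Post-combining SINR: |w^H h|^2 / (w^H R w)
sinr = np.abs(w.conj() @ h) ** 2 / np.real(w.conj() @ R @ w)
print(f"post-IRC SINR: {10 * np.log10(sinr):.1f} dB")
```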
Concurrent multibody and Finite Element analysis of the lower-limb during amputee running | ['Stacey M. Rigney', 'Anne Simmons', 'Lauren Kark'] | Concurrent multibody and Finite Element analysis of the lower-limb during amputee running | 669,811 |
We study a three-node network with a full-duplex base-station communicating with one uplink and one downlink half-duplex node. In this network, both self-interference and inter-node interference are present. We use an antenna-theory-based channel model to study the spatial degrees of freedom of such a network, and we study whether, and by how much, spatial isolation outperforms its time-division (i.e., half-duplex) counterpart. Using degrees-of-freedom analysis, we show that spatial isolation outperforms time division, unless the angular spread of the objects that scatter toward the intended users is overlapped by the spread of objects that backscatter to the receivers. | ['Yujun Chen', 'Ashutosh Sabharwal'] | Degrees of freedom of spatial self-interference suppression for in-band full-duplex with inter-node interference | 826,210
Pedagogy, Continuance Theory and Mobile Devices: Findings from a New Zealand case study. | ['Noeline Wright'] | Pedagogy, Continuance Theory and Mobile Devices: Findings from a New Zealand case study. | 790,525 |
Millimeter wave communication systems use large antenna arrays to provide good average received power and to take advantage of multi-stream MIMO communication. Unfortunately, due to power consumption in the analog front-end, it is impractical to perform beamforming and fully digital precoding at baseband. Hybrid precoding/combining architectures have been proposed to overcome this limitation. The hybrid structure splits the MIMO processing between the digital and analog domains, while keeping the performance close to that of the fully digital solution. In this paper, we introduce and analyze several algorithms that efficiently design hybrid precoders and combiners starting from the known optimum digital precoder/combiner, which can be computed when perfect channel state information is available. We propose several low-complexity solutions that provide different trade-offs between performance and complexity. We show that the proposed iterative solutions perform better in terms of spectral efficiency and/or are faster than previous methods in the literature. All of them provide designs that perform close to the known optimal digital solution. Finally, we study the effects of quantizing the analog component of the hybrid design and show that even with coarse quantization, the average rate performance is good. | ['Cristian Rusu', 'Roi Mendez-Rial', 'Nuria Gonzalez-Prelcic', 'Robert W. Heath'] | Low Complexity Hybrid Precoding Strategies for Millimeter Wave Communication Systems | 896,895
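One simple heuristic in the spirit of such designs (not the paper's exact algorithms): match the analog phases to the known optimal precoder and solve a small least-squares problem for the digital part. The array size, RF chain count, and stand-in optimal precoder below are illustrative:

```python
# Toy hybrid decomposition Fopt ~= F_RF @ F_BB with phase-only F_RF.
import numpy as np

rng = np.random.default_rng(4)
n_tx, n_rf, n_streams = 32, 4, 2

# Stand-in "optimal" digital precoder (in practice: from the channel's SVD).
G = rng.normal(size=(n_tx, n_streams)) + 1j * rng.normal(size=(n_tx, n_streams))
Fopt, _ = np.linalg.qr(G)

# Analog part: unit-modulus columns; two match Fopt's phases, the rest random.
extra = rng.uniform(0, 2 * np.pi, size=(n_tx, n_rf - n_streams))
F_RF = np.exp(1j * np.concatenate([np.angle(Fopt), extra], axis=1)) / np.sqrt(n_tx)

# Digital part: least-squares fit given the fixed analog precoder.
F_BB, *_ = np.linalg.lstsq(F_RF, Fopt, rcond=None)

err = np.linalg.norm(Fopt - F_RF @ F_BB) / np.linalg.norm(Fopt)
print(f"relative approximation error: {err:.3f}")
```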
EXPLORING mEMD FOR FACE RECOGNITION | ['Esteve Gallego-Jutglà', 'Jordi Solé-Casals'] | EXPLORING mEMD FOR FACE RECOGNITION | 768,612 |
Many organizations invest considerable cost and effort in building traceability matrices in order to comply with regulatory requirements or process improvement initiatives. Unfortunately, these matrices are frequently left unused, and project stakeholders continue to perform critical software engineering activities such as change impact analysis or requirements satisfaction assessment without the benefit of the established traces. A major reason for this is the lack of a process framework and associated tools to support the use of these trace matrices in a strategic way. In this position paper, we present a model-based approach designed to help organizations gain full benefit from the traces they develop and to allow project stakeholders to plan, generate, and execute trace strategies in a graphical modeling environment. The approach includes a standard notation for capturing strategic traceability decisions in the form of a graph, as well as notation for modeling reusable trace queries using augmented sequence diagrams. All of the model elements, including project-specific data, are represented using XML. The approach is demonstrated through examples from a traffic simulator project composed of requirements, UML class diagrams, code, test cases, and test case results. | ['Jane Cleland-Huang', 'Jane Huffman Hayes', 'J. M. Domel'] | Model-based traceability | 201,141
Computing as a utility, that is, on-demand access to computing and storage infrastructure, has emerged in the form of the Cloud. In this model of computing, elastic resource allocation, i.e., the ability to scale resource allocation for specific applications, should be optimized to manage cost versus performance. Meanwhile, the information sharing/mining age has brought pervasive sharing of Web services and data sets in the Cloud, and at the same time, many data-intensive scientific applications are being expressed as such services. In this paper, we explore an approach to accelerate service processing in a Cloud setting. We have developed a cooperative scheme for caching data output from services for reuse. We propose algorithms for scaling our cache system up during peak querying times, and back down to save costs. Using the Amazon EC2 public Cloud, a detailed evaluation of our system has been performed, considering speedup and elastic scalability in terms of resource allocation and relaxation. | ['David Chiu', 'Apeksha Shetty', 'Gagan Agrawal'] | Elastic Cloud Caches for Accelerating Service-Oriented Computations | 348,038
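A toy version of the idea, with crude "elastic" resizing driven by the observed miss rate; the thresholds, the doubling policy, and the stand-in compute function are all invented, and the paper's system is cooperative and distributed rather than a single in-process cache:

```python
# Sketch: LRU cache for memoized service outputs with elastic capacity.
from collections import OrderedDict

class ElasticLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key, compute):
        if key in self.store:
            self.store.move_to_end(key)        # mark as recently used
            self.hits += 1
            return self.store[key]
        self.misses += 1
        value = compute(key)                   # stand-in for a service call
        self.store[key] = value
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used
        return value

    def rescale(self):
        # Scale up under heavy misses, back down when hit rate is comfortable.
        total = self.hits + self.misses
        if total and self.misses / total > 0.5:
            self.capacity *= 2                 # acquire more cache capacity
        elif total and self.misses / total < 0.1 and self.capacity > 1:
            self.capacity //= 2                # release capacity to save cost
        self.hits = self.misses = 0

cache = ElasticLRU(capacity=4)
for q in [1, 2, 3, 1, 4, 5, 6, 2]:
    cache.get(q, compute=lambda k: k * k)
cache.rescale()
print("capacity after rescale:", cache.capacity)
```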
In this paper we present a new BIST test pattern generator architecture and a methodology to program this architecture to generate tests for any given inter-chip interconnect circuitry via the IEEE 1149.1 boundary-scan architecture. The test architecture uses two test pattern generators: a C-TPG that generates test patterns for the control cells in the boundary-scan chain and a D-TPG that generates test patterns for the data cells. The other main component of the test architecture is a lookup table which is programmed to select, for each boundary-scan cell, a specific C-TPG or D-TPG stage whose content is shifted into that cell. This test architecture provides a complete BIST solution for interconnect testing. The proposed BIST TPG design procedure uses the notions of incompatibility and conditional incompatibility and generates TPG designs that (i) guarantee that no circuit damage can occur due to multi-driver conflicts, (ii) guarantee the detection of all interconnect faults, (iii) have low area overhead, and (iv) have short test length. The proposed procedure is used to obtain TPG designs that require significantly less test time and area than other TPG designs, for eight interconnect circuits extracted from industrial boards. | ['Chen-Huan Chiang', 'Sandeep K. Gupta'] | BIST TPGs for faults in board level interconnect via boundary scan | 370,176
Summary form only given. Telepresence is the future of multimedia systems and will allow participants to share professional and private experiences, meetings, games, and parties. The concepts of distributed virtual environments are a key technology for implementing this telepresence. Using virtual humans within the shared environment is an essential supporting tool for presence. Real-time realistic 3D avatars will be essential in the future, but we will need interactive perceptive actors to populate the virtual worlds. The ultimate objective in creating realistic and believable virtual actors is to build intelligent autonomous virtual humans with adaptation, perception and memory. These actors should be able to act freely and emotionally. Ideally, they should be conscious and unpredictable. But how far are we from such an ideal situation? Our interactive perceptive actors are able to perceive the virtual world, the people living in this world and in the real world. They may act based on their perception in an autonomous manner. Their intelligence is constrained and limited to the results obtained in the development of new methods of artificial intelligence. However, the representation in the form of virtual actors is a way of visually evaluating the progress. In the future, we may expect to meet intelligent actors able to learn or understand a few situations. | ['Daniel Thalmann'] | Populating the Virtual Worlds with Interactive Perceptive Virtual Actors | 606,290
Trust on Information Sources: A Theoretical and Computational Approach. | ['Alessandro Sapienza', 'Rino Falcone', 'Cristiano Castelfranchi'] | Trust on Information Sources: A Theoretical and Computational Approach. | 771,953 |
Distributed Computing in Monotone Topological Spaces | ['Susmit Bagchi'] | Distributed Computing in Monotone Topological Spaces | 830,497 |
The interest in wireless communication systems for industrial applications has grown significantly over the last years. More flexible and easier to install and maintain, wireless networks present a promising alternative to the currently used wired systems. However, the reliability and timeliness requirements presently met by wired networks also need to be fulfilled by wireless solutions. Packet errors introduced when packets travel through wireless channels pose a significant challenge to fulfilling these requirements. Relaying has been recognized to improve the reliability in industrial wireless networks without causing additional delay. Furthermore, recent results have shown that relaying combined with packet aggregation significantly outperforms simple relaying. However, it is not always cost efficient to introduce additional relay nodes into an industrial network and hence, in this paper, we propose using a combination of relaying and packet aggregation at the source nodes. The results show that when relaying and aggregation are used at the source nodes, the transmission schedule plays a crucial role. A schedule adapting to the varying channel conditions improves performance substantially. By carefully choosing which packets to aggregate, even further improvements can be achieved. | ['Svetlana Girs', 'Andreas Willig', 'Elisabeth Uhlemann', 'Mats Björkman'] | Scheduling for Source Relaying With Packet Aggregation in Industrial Wireless Networks | 689,786
We investigate an iterative receiver with a linear detector for complex symbols to suppress the co-antenna interference introduced by a multiple input multiple output (MIMO) channel. For generalized MIMO systems, i.e., systems that transmit complex conjugate repetitions as well as the pure data, we show that the standard linear detector has to be replaced by a widely linear (WL) detector. The WL detector consists of four real filters represented by two complex filters for the data symbols and their complex conjugates, respectively. The generalized MIMO transmitter structure allows for a description of the detector that is applicable to many different MIMO systems. Furthermore, we demonstrate that even for systems without complex conjugate repetitions, WL filtering is beneficial in iterative detection. | ['Melanie Witzke', 'Stephan Bäro', 'Joachim Hagenauer'] | Iterative detection of generalized coded MIMO signals using a widely linear detector | 247,900 |
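A minimal widely linear MMSE detector operating on the augmented vector [r; conj(r)], i.e., two complex filters (four real ones); BPSK, for which conj(s) = s, serves as the improper-constellation example, and all channel and noise values are synthetic rather than from the paper's simulations:

```python
# Sketch: widely linear MMSE detection for a 2x2 MIMO link with BPSK.
import numpy as np

rng = np.random.default_rng(5)
n_rx, n_tx, noise_var = 2, 2, 0.05
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
s = np.sign(rng.normal(size=n_tx)) + 0j            # BPSK symbols (improper)
noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))
r = H @ s + noise

# Augmented model: [r; conj(r)] = Ha @ s + augmented noise (valid as conj(s) = s).
ra = np.concatenate([r, r.conj()])
Ha = np.vstack([H, H.conj()])

# WL-MMSE filter on the augmented observation.
W = np.linalg.solve(Ha @ Ha.conj().T + noise_var * np.eye(2 * n_rx), Ha)
s_hat = W.conj().T @ ra
print("sent:", s.real, "detected:", np.sign(s_hat.real))
```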
This paper provides an overview of important software engineering research issues related to the development of applications that run on mobile devices. Among the topics are development processes, tools, user interface design, application portability, quality, and security. | ['Anthony I. Wasserman'] | Software engineering issues for mobile application development | 495,185 |
It has been recently advocated that in large communication systems it is beneficial both for the users and for the network as a whole to store content closer to users. One particular implementation of such an approach is to co-locate caches with wireless base stations. In this paper we study geographically distributed caching of a fixed collection of files. We model cache placement with the help of stochastic geometry and optimize the allocation of storage capacity among files in order to minimize the cache miss probability. We consider both per-cache capacity constraints as well as an average capacity constraint over all caches. The case of per-cache capacity constraints can be efficiently solved using dynamic programming, whereas the case of the average constraint leads to a convex optimization problem. We demonstrate that the average constraint leads to significantly smaller cache miss probability. Finally, we suggest a simple LRU-based policy for geographically distributed caching and show that its performance is close to the optimal. | ['Konstantin Avrachenkov', 'Xinwei Bai', 'Jasper Goseling'] | Optimization of Caching Devices with Geometric Constraints | 631,168
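An illustrative dynamic program in the spirit of the per-cache constrained case: choose which files a single cache stores, under a hard capacity, to maximize hit probability (equivalently, minimize miss probability). The file sizes and popularities are invented, and the paper's stochastic-geometry coverage terms are omitted:

```python
# Sketch: 0/1 knapsack DP for the contents of one capacity-limited cache.
def best_cache_contents(popularity, size, capacity):
    n = len(popularity)
    dp = [0.0] * (capacity + 1)            # dp[c] = best hit prob. at capacity c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for c in range(capacity, size[i] - 1, -1):
            if dp[c - size[i]] + popularity[i] > dp[c]:
                dp[c] = dp[c - size[i]] + popularity[i]
                keep[i][c] = True
    chosen, c = [], capacity               # backtrack to recover the files
    for i in reversed(range(n)):
        if keep[i][c]:
            chosen.append(i)
            c -= size[i]
    return dp[capacity], sorted(chosen)

pop = [0.40, 0.25, 0.15, 0.12, 0.08]       # request probabilities per file
sz = [3, 2, 2, 1, 1]                       # file sizes in capacity units
hit, files = best_cache_contents(pop, sz, capacity=5)
print(f"hit probability {hit:.2f} with files {files}")
```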
In this paper we present a novel two-camera-based accurate vehicle speed detection system. Two high-resolution cameras, with high speed and narrow field of view, are mounted on a fixed pole. Using different focal lengths and orientations, each camera points to a different stretch of the road. Unlike standard average speed cameras, where the cameras are separated by several kilometers and the errors in the measurement of distance can be on the order of several meters, our approach deals with a short stretch of a few meters, which involves a challenging scenario where distance estimation errors should be on the order of centimeters. The relative distance of the vehicles w.r.t. the cameras is computed using the license plate as a known reference. We demonstrate that there is a specific geometry between the cameras that minimizes the speed error. The system was tested in a real scenario using a vehicle equipped with DGPS to compute ground truth speed values. The obtained results validate the proposal, with maximum speed errors < 3 km/h at speeds up to 80 km/h. | ['David Fernández Llorca', 'C. Salinas', 'Mario Jimenez', 'Ignacio Parra', 'A. G. Morcillo', 'Rubén Izquierdo', 'J. Lorenzo', 'Miguel Ángel Sotelo'] | Two-camera based accurate vehicle speed measurement using average speed at a fixed point | 962,564
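A back-of-the-envelope version of the distance and speed estimates: under a pinhole model, the license plate's known physical width acts as the metric reference (Z = f * W / w). The focal length, plate width, pixel widths, and timestamps below are illustrative, and both distances are assumed to be measured along the road from the same pole, which simplifies the paper's two-stretch geometry:

```python
# Sketch: plate-based distance from a pinhole model, then speed over time.
PLATE_WIDTH_M = 0.52     # assumed physical plate width (known reference)
FOCAL_PX = 4000.0        # assumed focal length in pixels (narrow FOV)

def plate_distance_m(plate_width_px):
    # Pinhole model: Z = f * W / w
    return FOCAL_PX * PLATE_WIDTH_M / plate_width_px

d1 = plate_distance_m(130.0)   # camera 1 detection at t = 0.00 s -> 16.0 m
d2 = plate_distance_m(208.0)   # camera 2 detection at t = 0.30 s -> 10.0 m
dt = 0.30

speed_ms = (d1 - d2) / dt      # distance travelled toward the pole over dt
print(f"estimated speed: {speed_ms * 3.6:.1f} km/h")   # -> 72.0 km/h
```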
Multiple-input multiple-output (MIMO) systems that realize high-speed data transmission with multiple antennas at both transmitter and receiver are drawing much attention. In MIMO systems, it has been reported that the scheme that weights transmitted signals based on the minimum mean square error (MMSE) criterion at the transmitter, using feedback channel state information (CSI), achieves good performance (denoted hereafter as MMSE precoding). A throughput maximization transmission control scheme (TMTC) that selects the transmission mode (modulation schemes and code rates) based on CSI has been proposed. We expect that its throughput performance can be improved by the addition of MMSE precoding. However, throughput performance is degraded in the presence of feedback delay. In this paper, we propose TMTC with MMSE precoding; it reduces the throughput performance degradation by using channel prediction and receive weights robust against feedback delay. The proposed scheme selects a transmission mode (the number of substreams, modulation schemes, and code rates) and configures the precoder to achieve maximum throughput based on the SINR given by channel prediction. Simulation results show that even when feedback delay exists, the proposed scheme attains high throughput. | ['Kenichi Kobayashi', 'Tomoaki Ohtsuki', 'Toshinobu Kaneko'] | Throughput Maximization Transmission Control Scheme Using Precoding for MIMO Systems | 46,706
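For reference, a minimal MMSE (regularized zero-forcing) transmit precoder of the kind such schemes build on; the mode selection, channel prediction, and delay-robust receive weights of the proposal are not shown, and the channel is randomly generated:

```python
# Sketch: MMSE precoder W = H^H (H H^H + sigma^2 I)^{-1}, power-normalized.
import numpy as np

rng = np.random.default_rng(6)
n_tx, n_rx, noise_var = 4, 4, 0.1
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T + noise_var * np.eye(n_rx))
W /= np.linalg.norm(W)                      # unit total transmit power

# The effective channel H @ W should be close to diagonal (low interference).
print(np.round(np.abs(H @ W), 2))
```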
Internet of Things (IoTs) is gaining increasing significance due to the real-time communication and decision making capabilities of sensors integrated into everyday objects. Securing IoTs is one of the foremost concerns due to the ubiquitous nature of the sensors coupled with the increasing sensitivity of user data. Further, the power-constrained nature of IoTs emphasizes the need for lightweight security that can tailor to the stringent resource requirements of the sensors. In this work, we propose a lightweight security framework for IoTs using Identity based Cryptography. In particular, we develop a hierarchical security architecture for IoTs and further develop protocols for secure communication in IoTs using identity based cryptography. Our proposed mechanism has been evaluated using simulations conducted using Contiki and RELIC. Evaluation shows that our proposed mechanism is lightweight, incurring lower overhead, and thus can be applied in IoTs. | ['Sriram Sankaran'] | Lightweight security framework for IoTs using identity based cryptography | 928,245
Summary: We have developed essaMEM, a tool for finding maximal exact matches that can be used in genome comparison and read mapping. essaMEM enhances an existing sparse suffix array implementation with a sparse child array. Tests indicate that the enhanced algorithm for finding maximal exact matches is much faster, while maintaining the same memory footprint. In this way, sparse suffix arrays remain competitive with the more complex compressed suffix arrays. Availability: Source code is freely available at | ['Michaël Vyverman', 'Bernard De Baets', 'Veerle Fack', 'Peter Dawyndt'] | essaMEM: finding maximal exact matches using enhanced sparse suffix arrays | 257,045 |
The potential of player-game interaction to facilitate L2 learning in stand-alone digital games has been established in many empirical investigations (e.g. Miller & Hegelheimer, 2006; Hitosugi, Schmidt, & Hayashi, 2014; Sundqvist & Sylvén, 2012); however, the fine-grained dynamics that comprise it are still largely underexplored. A potential reason for this gap is that, to date, there is a lack of refined methodologies for conceptualizing and analyzing the various processes that occur during gameplay between a learner-player and the game. The present study proposes an ecology-sensitive, multidimensional approach to de- and re-constructing player-game interaction dynamics through analysis of thick, detail-rich play data from a variety of sources and levels: cognitive, virtual, and sociocultural. Using a model of player-game interaction detailing how activities interrelate and/or interact across different levels, it reconstructs player-game interaction as a holistic activity. To this end, it utilizes a variety of qualitative data sources that offer empirical information about different facets and dimensions of player-game interaction, including think-aloud protocol, gaming journals, walkthroughs, and debriefing interviews. To illustrate the approach, this paper will apply it to interpretation of the player-game interactions of one learner of Arabic as a foreign language in the Egyptian Arabic simulation-management game Baalty (PPIC-Work, 2004). Findings include the observation that the L2 learner-player played the game in her L2 by decoding in-game discourses and using in-game semiotic resources. The paper will include discussion of theoretical background, research implications, and potential applications of this approach. | ['Karim Hesham Shaker Ibrahim'] | A Method and Model for De- and Reconstructing Player-Game Interaction: The Case of the Arabic Simulation-Management Game Baalty | 961,898
Recommendation on signed social rating networks is studied through an innovative approach. Bayesian probabilistic modeling is used to postulate a realistic generative process, wherein user and item interactions are explained by latent factors, whose relevance varies with the underlying network organization into user communities and item groups. Approximate posterior inference captures distrust propagation and drives Gibbs sampling to allow rating and (dis)trust prediction for recommendation, along with the unsupervised exploratory analysis of network organization. Comparative experiments reveal the superiority of our approach in rating and link prediction on Epinions and Ciao, as well as in community quality and recommendation sensitivity to network organization. | ['Gianni Costa', 'Riccardo Ortale'] | Model-Based Collaborative Personalized Recommendation on Signed Social Rating Networks | 836,540
The construction and understanding of Gene Regulatory Networks (GRNs) are among the hardest tasks faced by systems biology. Inferring gene regulatory networks from gene expression data has been a vigorous research area. It aims to constitute an intermediate step from exploratory to gene expression analysis. In recent years, many reverse engineering methods have been proposed. In practice, different model approaches will generate different network structures. Therefore, it is very important for users to assess the performance of these algorithms. We present a comparative study of three different reverse engineering methods, including the S-system Parameter Estimation Method (SPEM), the Graphical Gaussian Model (GGM), and the TimeDelay-ARACNE. Our approach consists of the analysis of real gene expression data with the different methods, and the assessment of algorithmic performance in terms of sensitivity, specificity, precision, and F-score. | ['Charles C. N. Wang', 'Pei-Chun Chang', 'Phillip C.-Y. Sheu', 'Jeffrey J. P. Tsai'] | A Comparison Study of Reverse Engineering Gene Regulatory Network Modeling | 958,520
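The four assessment metrics, computed from the confusion counts of predicted versus true regulatory edges; the counts below are toy numbers, not results from the compared methods:

```python
# Sketch: sensitivity, specificity, precision and F-score from edge counts.
def edge_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)           # recall of true edges
    specificity = tn / (tn + fp)           # recall of true non-edges
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, f_score

sens, spec, prec, f1 = edge_metrics(tp=30, fp=10, tn=150, fn=10)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"precision={prec:.2f} F-score={f1:.2f}")
```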
Modeling AIDS Spread in Social Networks - An In-Silico Study Using Exploratory Agent-Based Modeling. | ['Muaz A. Niazi', 'Amnah Siddiqa', 'Giancarlo Fortino'] | Modeling AIDS Spread in Social Networks - An In-Silico Study Using Exploratory Agent-Based Modeling. | 763,249 |
Customer churn means the loss of existing customers to a competitor. Accurately predicting customer behavior may help firms to minimize this loss by proactively building a lasting relationship with their customers. In this paper, the application of factor analysis and the Variable Consistency Dominance-based Rough Set Approach (VC-DRSA) to the customer relationship management (CRM) of the airline market is introduced. A set of "if...then..." decision rules is used as the preference model to classify customers by a set of criteria and regular attributes. The proposed method can determine the competitive position of an airline by understanding the behavior of its customers based on their perception of choice, and so develop the appropriate marketing strategies. A large sample of customers from an international airline is used to derive a set of rules and to evaluate its prediction ability. | ['James J. H. Liou'] | A novel decision rules approach for customer relationship management of the airline market | 529,329
We study a practical version of the broadcast channel (BC) where the channel state information at the transmitter (CSIT) is available only for some of the users, dubbing the system "mixed CSIT". We start with the simplest instance of such a heterogeneous BC, where a multi-antenna transmitter base-station (BS) is trying to communicate data to two single-antenna user equipments (UEs), having perfect CSIT for UE1 and no CSIT for UE2. We propose a very simple transmission strategy at the BS for such a system and compare its performance with the interference alignment solution proposed elsewhere, which achieves full degrees of freedom (DOF) when channel coherence lengths and switching instants abide by certain restrictions. Then, with an additional receive antenna granted to UE2 (whose CSIT is not available) and keeping the same CSIT perspective, achievable rates for the two users are derived and compared with earlier schemes. Analytical and simulation results show that this additional receive antenna brings significant gains in the system with the same transmission strategy and without any extra requirement of channel training or feedback. | ['Umer Salim', 'Irfan Ghauri'] | Mixed CSIT DL Channel: Gains with an Additional Receive Antenna | 154,013
As a young and emerging field in social human–robot interaction (HRI), semantic-free utterances (SFUs) research has been receiving attention over the last decade. SFUs are an auditory interaction means for machines that allows emotion and intent expression; they are composed of vocalizations and sounds without semantic content or language dependence. Currently, SFUs are most commonly utilized in animation movies (e.g., R2-D2, WALL-E, Despicable Me), cartoons (e.g., “Teletubbies,” “Morph,” “La Linea”), and computer games (e.g., The Sims) and hold significant potential for applications in HRI. SFUs are categorized under four general types: Gibberish Speech (GS), Non-Linguistic Utterances (NLUs), Musical Utterances (MU), and Paralinguistic Utterances (PU). By introducing the concept of SFUs and bringing together multiple sets of studies in social HRI that have never been analyzed jointly before, this article addresses the need for a comprehensive study of the existing literature on SFUs. It outlines the current gran... | ['Selma Yilmazyildiz', 'Robin Read', 'Tony Belpeame', 'Werner Verhelst'] | Review of Semantic-Free Utterances in Social Human–Robot Interaction | 593,887
Drive-by-downloads are malware that push, and then execute, malicious code on a client system without the user's consent. The purpose of this paper is to introduce a discussion of the usefulness of antivirus software for detecting the installation of such malware, providing groundwork for future studies. Client honeypots collected drive-by malware, which was then evaluated using common antivirus products. Initial analysis showed that most of the antivirus products identified less than 70% of these highly polymorphic malware programs. Also, it was observed that the antivirus products tested, even when successfully detecting this malware, often failed to classify it, leading to the conclusion that further work could involve not only developing new behavioral detection technologies, but also empirical studies that improve general understanding of these threats. Toward that end, one example of malicious code was analyzed behaviorally to provide insight into next steps for the future direction of this research. | ['Julia Narvaez', 'Barbara Endicott-Popovsky', 'Christian Seifert', 'Chiraag Uday Aval', 'Deborah A. Frincke'] | Drive-by-Downloads | 368,697