abstract (string, lengths 0–11.1k) | authors (string, lengths 9–1.96k) | title (string, lengths 4–353) | __index_level_0__ (int64, 3–1,000k)
---|---|---|---|
Many of the static and dynamic properties of an atomic Bose–Einstein condensate (BEC) are usually studied by solving the mean-field Gross–Pitaevskii (GP) equation, which is a nonlinear partial differential equation for short-range atomic interaction. More recently, BECs of atoms with long-range dipolar atomic interaction have been used in theoretical and experimental studies. For dipolar atomic interaction, the GP equation is a partial integro-differential equation, requiring a complex algorithm for its numerical solution. Here we present numerical algorithms for both stationary and non-stationary solutions of the full three-dimensional (3D) GP equation for a dipolar BEC, including the contact interaction. We also consider the simplified one- (1D) and two-dimensional (2D) GP equations satisfied by cigar- and disk-shaped dipolar BECs. We employ the split-step Crank–Nicolson method with real- and imaginary-time propagations, respectively, for the numerical solution of the GP equation for dynamic and static properties of a dipolar BEC. The atoms are considered to be polarized along the z axis and we consider ten different cases, e.g., stationary and non-stationary solutions of the GP equation for a dipolar BEC in 1D (along the x and z axes), 2D (in the x–y and x–z planes), and 3D, and we provide working codes in Fortran 90/95 and C for these ten cases (twenty programs in all). We present numerical results for energy, chemical potential, root-mean-square sizes and density of the dipolar BECs and, where available, compare them with results of other authors and of variational and Thomas–Fermi approximations. Program summary. Program title: (i) imag1dZ, (ii) imag1dX, (iii) imag2dXY, (iv) imag2dXZ, (v) imag3d, (vi) real1dZ, (vii) real1dX, (viii) real2dXY, (ix) real2dXZ, (x) real3d. Catalogue identifier: AEWL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEWL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 111384. No. of bytes in distributed program, including test data, etc.: 604013. Distribution format: tar.gz | ['R. Kishor Kumar', 'Luis E. Young-S.', 'D. Vudragovic', 'Antun Balaž', 'P. Muruganandam', 'Sadhan K. Adhikari'] | Fortran and C programs for the time-dependent dipolar Gross-Pitaevskii equation in an anisotropic trap | 461,865 |
Automatic image annotation is a promising methodology for image retrieval. However, most current annotation models are not yet sophisticated enough to produce high-quality annotations. Given an image, some keywords irrelevant to the image content are produced, which are a primary obstacle to high-quality image retrieval. In this paper an approach is proposed to improve automatic image annotation in two directions. One is to combine annotation keywords produced by three underlying classic image annotation models, the translation model, the continuous-space relevance model and the multiple Bernoulli relevance model, hoping to increase the number of potentially correctly annotated keywords. The other is to remove keywords irrelevant to the image semantics based on semantic similarity calculation using WordNet. To verify the proposed hybrid annotation model, we carried out experiments on the widely used Corel image data set, and the experimental results show that the proposed approach improves image annotation to some extent. | ['Peng Huang', 'Jiajun Bu', 'Chun Chen', 'Kangmiao Liu', 'Guang Qiu'] | Improve Image Annotation by Combining Multiple Models | 97,989 |
| ['Heikki Topi'] | Enabling and encouraging productive student collaboration online | 151,272 |
In this paper, we investigate the use of the autoregressive conditional heteroscedasticity (ARCH) model as a replacement for the decision-directed method in the log-spectral amplitude estimator for speech enhancement. We employ three sound quality measures: speech distortion, noise reduction and musical noise, and explain the effect the ARCH model parameters have on these measures. We demonstrate and compare the use of the decision-directed and ARCH estimators and show that the ARCH model achieves better results than the decision-directed method for some of these measures, while compromising between speech distortion and noise reduction. | ['Aviva Atkins', 'Israel Cohen'] | Speech enhancement using arch model | 918,630 |
Matrix factorization into the product of low-rank matrices induces non-identifiability, i.e., the mapping between the target matrix and the factorized matrices is not one-to-one. In this paper, we theoretically investigate the influence of non-identifiability on Bayesian matrix factorization. More specifically, we show that a variational Bayesian method involves a regularization effect even when the prior is non-informative, which is intrinsically different from the maximum a posteriori approach. We also extend our analysis to empirical Bayes scenarios where hyperparameters are also learned from data. | ['Shinichi Nakajima', 'Masashi Sugiyama'] | Implicit Regularization in Variational Bayesian Matrix Factorization | 17,808 |
NOAA, through the Joint Polar Satellite System (JPSS) program, in partnership with the National Aeronautics and Space Administration (NASA), will launch the NPOESS Preparatory Project (NPP) satellite, a risk reduction and data continuity mission, prior to the first operational JPSS launch. The JPSS program will execute the NPP Calibration and Validation (Cal/Val) program to ensure that the data products comply with the requirements of the sponsoring agencies. | ['Lawrence E. Flynn', 'Didier F. G. Rault', 'Glen Jaross', 'Irina Petropavlovskikh', 'Craig S. Long', 'Jonas Hörnstein', 'Eric Beach', 'Wei Yu 0013', 'Jianguo Niu', 'Dustin Swales'] | NPOESS preparatory project validation plans for the ozone mapping and profiler suite | 126,537 |
| ['Juan Pino', 'Aurelien Waite', 'Tong Xiao', 'Adrià de Gispert', 'Federico Flego', 'William Byrne'] | The University of Cambridge Russian-English System at WMT13 | 615,195 |
We introduce a provably correct learning algorithm for latent-variable PCFGs. The algorithm relies on two steps: first, the use of a matrix-decomposition algorithm applied to a co-occurrence matrix estimated from the parse trees in a training sample; second, the use of EM applied to a convex objective derived from the training samples in combination with the output from the matrix decomposition. Experiments on parsing and a language modeling problem show that the algorithm is efficient and effective in practice. | ['Shay B. Cohen', 'Michael J. Collins'] | A Provably Correct Learning Algorithm for Latent-Variable PCFGs | 769,986 |
Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. Although the problem of network partitions and mergers is highly related to fault-tolerance and self-management in large-scale systems, it has hardly been studied in the context of structured peer-to-peer systems. These systems have mainly been studied under churn (frequent joins/failures), which as a side effect solves the problem of network partitions, as it is similar to massive node failures. Yet, the crucial aspect of network mergers has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of the structured overlays, are intrinsically ill-suited for merging rings. In this paper, we present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution in dynamic conditions, showing how our solution is resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when falsely detecting a merger, the algorithm quickly terminates and does not clutter the network with many messages. The algorithm is flexible as the tradeoff between message complexity and time complexity can be adjusted by a parameter. | ['Tallat M. Shafaat', 'Ali Ghodsi', 'Seif Haridi'] | Dealing with network partitions in structured overlay networks | 268,769 |
This work discusses the technology mapping problem on hybrid field programmable architectures (HFPAs). HFPAs are realized using a combination of lookup tables (LUTs) and programmable logic arrays (PLAs). HFPAs provide designers with the advantages of both LUT-based field programmable gate arrays (FPGAs) and PLAs. Specifically, the use of PLAs leads to reduced area in mapping the given circuit. Designing technology mapping methodologies that map a given circuit onto the HFPA while exploiting the above-mentioned advantages is a problem of great research and commercial interest. This work presents SHAPER, which maps circuits onto HFPAs using reconvergence analysis. Empirically, it is shown that SHAPER yields better area reduction than previously known algorithms. | ['R. Manimegalai', 'B. Jayaram', 'A. Manojkumar', 'V. Kamakoti'] | SHAPER: synthesis for hybrid FPGAs containing PLAs using reconvergence analysis | 159,589 |
Viral Marketing, the idea of exploiting social interactions of users to propagate awareness for products, has gained considerable focus in recent years. One of the key issues in this area is to select the best seeds that maximize the influence propagated in the social network. In this paper, we define the seed selection problem (called t-Influence Maximization, or t-IM) for multiple products. Specifically, given the social network and t products along with their seed requirements, we want to select seeds for each product that maximize the overall influence. As the seeds are typically sent promotional messages, to avoid spamming users, we put a hard constraint on the number of products for which any single user can be selected as a seed. In this paper, we design two efficient techniques for the t-IM problem, called Greedy and FairGreedy. The Greedy algorithm uses simple greedy hill climbing, but still results in a 1/3-approximation to the optimum. Our second technique, FairGreedy, allocates seeds with not only high overall influence (close to Greedy in practice), but also ensures fairness across the influence of different products. We also design efficient heuristics for estimating the influence of the selected seeds, that are crucial for running the seed selection on large social network graphs. Finally, using extensive simulations on real-life social graphs, we show the effectiveness and scalability of our techniques compared to existing and naive strategies. | ['Samik Datta', 'Anirban Majumder', 'Nisheeth Shrivastava'] | Viral Marketing for Multiple Products | 422,913 |
Detecting moving objects by using an adaptive background model is a critical component for many vision-based applications. Most background models were maintained in pixel-based forms, while some approaches began to study block-based representations which are more robust to non-stationary backgrounds. In this paper, we propose a method that combines pixel-based and block-based approaches into a single framework. We show that efficient hierarchical backgrounds can be built by considering that these two approaches are complementary to each other. In addition, a novel descriptor is proposed for block-based background modeling in the coarse level of the hierarchy. Quantitative evaluations show that the proposed hierarchical method can provide better results than existing single-level approaches. | ['Yu-Ting Chen', 'Chu-Song Chen', 'Chun-Rong Huang', 'Yi-Ping Hung'] | Efficient hierarchical method for background subtraction | 454,324 |
An Aerial Wireless Sensor Network (AWSN) composed of bird-sized Unmanned Aerial Vehicles (UAVs) equipped with sensors and wireless radios enables low-cost, high-granularity, three-dimensional sensing of the physical world. The sensed data is relayed in real time over a multi-hop wireless communication network to ground stations. In this paper, we investigate the use of a hybrid antenna to accomplish efficient neighbor discovery and reliable communication in AWSNs. We propose the design of a hybrid Omni Bidirectional ESPAR (O-BESPAR) antenna, which combines the complementary features of an isotropic omni radio (360° coverage) and directional ESPAR antennas (beamforming and reduced interference). Control and data messages are transmitted separately over the omni and directional modules of the antenna, respectively. Moreover, a communication protocol is presented to perform neighbor UAV discovery and beam steering. We present results from an extensive set of simulations. We consider three different real-world AWSN application scenarios and employ empirical aerial link characterization to demonstrate that the proposed antenna design and protocol reduce the packet loss rate and end-to-end delay by up to 54% and 49%, respectively, and increase the goodput by up to 33%, as compared to a single omni or ESPAR antenna. | ['Kai Li', 'Nadeem Ahmed', 'Salil S. Kanhere', 'Sanjay Jha'] | Reliable transmissions in AWSNs by using O-BESPAR hybrid antenna | 700,497 |
Researchers continue to develop visual prostheses towards safer and more efficacious systems. However limitations still exist in the number of stimulating channels that can be integrated. Therefore there is a need for spatial and time multiplexing techniques to provide improved performance of the current technology. In particular, bright and high-contrast visual scenes may require simultaneous activation of several electrodes. In this research, a 24-electrode array was suprachoroidally implanted in three normally-sighted cats. Multi-unit activity was recorded from the primary visual cortex. Four stimulation strategies were contrasted to provide activation of seven electrodes arranged hexagonally: simultaneous monopolar, sequential monopolar, sequential bipolar and hexapolar. Both monopolar configurations showed similar cortical activation maps. Hexapolar and sequential bipolar configurations activated a lower number of cortical channels. Overall, the return configuration played a more relevant role in cortical activation than time multiplexing and thus, rapid sequential stimulation may assist in reducing the number of channels required to activate large retinal areas. | ['Alejandro Barriga-Rivera', 'John W. Morley', 'Nigel H. Lovell', 'Gregg J. Suaning'] | Cortical responses following simultaneous and sequential retinal neurostimulation with different return configurations | 913,320 |
This paper studies recursive nonlinear least squares parameter estimation in inference networks with observations distributed across multiple agents and sensed sequentially over time. Conforming to a given inter-agent communication or interaction topology, distributed recursive estimators of the consensus + innovations type are presented in which at every observation sampling epoch the network agents exchange a single round of messages with their communication neighbors and recursively update their local parameter estimates by simultaneously processing the received neighborhood data and the new information (innovation) embedded in the observation sample. Under rather weak conditions on the connectivity of the inter-agent communication and a global observability criterion, it is shown that the proposed algorithms lead to consistent parameter estimates at each agent. Furthermore, under standard smoothness assumptions on the sensing nonlinearities, the distributed estimators are shown to yield order-optimal convergence rates, i.e., as far as the order of pathwise convergence is concerned, the local agent estimates are as good as the optimal centralized nonlinear least squares estimator having access to the entire network observation data at all times. | ['Soummya Kar', 'José M. F. Moura', 'H. Vincent Poor'] | On a consistent procedure for distributed recursive nonlinear least-squares estimation | 940,307 |
A game-theoretic approach is proposed to investigate the problem of relay selection and power control with quality of service constraints in multiple-access wireless body area networks (WBANs). Each sensor node seeks a strategy that ensures the optimal energy efficiency and, at the same time, provides a guaranteed upper bound on the end-to-end packet delay and jitter. The existence of Nash equilibrium for the proposed non-cooperative game is proved, the Nash power control solution is analytically calculated, and a distributed algorithm is provided that converges to a Nash relay selection solution. The game theoretic analysis is then employed in an IEEE 802.15.6-based WBAN to gauge the validity and effectiveness of the proposed framework. Performance behaviors in terms of energy efficiency and end-to-end delay and jitter are examined for various scenarios. Results demonstrate the merits of the proposed framework, particularly for moving WBANs under severe fading conditions. | ['Hussein Moosavi', 'Francis Minhthang Bui'] | Optimal Relay Selection and Power Control With Quality-of-Service Provisioning in Wireless Body Area Networks | 722,895 |
An internal force-based impedance control scheme for cooperating manipulators is introduced which controls the motion of the objects being manipulated and the internal force on the objects. The controller enforces a relationship between the velocity of each manipulator and the internal force on the manipulated objects. Each manipulator is directly given the properties of an impedance by the controller, thus eliminating the gain limitation inherent in the structure of previously proposed schemes. The controller uses the forces sensed at the robot end effectors to compensate for the effects of the objects' dynamics and to compute the internal force using only kinematic relationships. Thus, knowledge of the objects' dynamics is not required. Stability of the system is proven using Lyapunov theory and simulation results are presented validating the proposed concepts. The effect of computational delays in digital control implementations is analyzed vis-à-vis stability, and a lower bound is derived on the size of the desired manipulator inertia relative to the actual manipulator endpoint inertia. The bound is independent of the sample time. | ['Robert G. Bonitz', 'Tien C. Hsia'] | Internal force-based impedance control for cooperating manipulators | 323,045 |
Background: During generation of microarray data, various forms of systematic biases are frequently introduced which limit accuracy and precision of the results. In order to properly estimate biolog ... | ['Max Bylesjö', 'Daniel Eriksson', 'Andreas Sjödin', 'Stefan Jansson', 'Thomas Moritz', 'Johan Trygg'] | Orthogonal projections to latent structures as a strategy for microarray data normalization | 178,358 |
The threat of Distributed Denial of Service (DDoS) attacks has been increasing with the growth of computer and network infrastructures. DDoS attacks generating massive traffic deplete network bandwidth and/or system resources. Therefore, it is important to detect DDoS attacks at an early stage. Our previous approach used a traffic matrix to detect DDoS attacks. However, it is hard to tune the parameters of the matrix, including (i) the size of the traffic matrix, (ii) the packet-based window size, and (iii) the threshold value of the variance computed from packet information, with respect to various monitoring environments and DDoS attacks. In this paper, we propose an enhanced DDoS attack detection approach which (i) improves the traffic matrix building operation and (ii) optimizes the parameters of the traffic matrix using a Genetic Algorithm (GA). We perform experiments with the DARPA 2000 dataset and the LBL-PKT-4 dataset of Lawrence Berkeley Laboratory to show its performance in terms of detection accuracy and speed. | ['Je Hak Lee', 'Dong Seong Kim', 'Sang Min Lee', 'Jong Sou Park'] | DDoS Attacks Detection Using GA Based Optimized Traffic Matrix | 459,192 |
In a multi-hop wireless network, each node has a transmission radius and is able to send a message to one or all of its neighbors that are located within the radius. In a broadcasting task, a source node sends the same message to all the nodes in the network. Some existing solutions apply re-broadcasting from each cluster-head or border node in a clustered structure. We propose to reduce the communication overhead of the broadcasting algorithm by applying the concept of internal nodes. The maintenance of internal nodes requires much less communication overhead than the maintenance of the cluster structure of the nodes. In one-to-all broadcasting, only the internal nodes forward the message, while in the one-to-one case, messages are forwarded on the edges that connect two internal nodes and on edges that connect each non-internal node with its closest internal node. Existing notions of internal nodes are improved by using node degrees instead of their IDs in internal node decisions. Highest node degrees are also proposed for reducing the number of cluster-heads and border nodes in a clustering algorithm. Further savings are obtained if GPS and the concept of planar subgraphs are used for one-to-one networks. In case of one-to-all model, no re-broadcasting is needed if all neighbors have already received the message. The important features of the proposed algorithms are their reliability, significant savings in the re-broadcasting, and their localized and parameterless behavior. The reduction in the communication overhead for the broadcasting task, with respect to existing methods, is measured experimentally. | ['Ivan Stojmenovic', 'Mahtab Seddigh', 'Jovisa D. Zunic'] | Internal nodes based broadcasting in wireless networks | 22,292 |
| ['Yanwei Pang', 'Ling Shao'] | Special issue on dimensionality reduction for visual big data | 589,111 |
This paper reports on the use, effectiveness, and acceptance of graduate computer science course lectures recorded and formatted for mobile devices, including Video iPods, PDAs, and Ultra-Mobile PCs (UMPCs). Technology convergence is trending toward allowing students to participate live in class discussion from anywhere they have connectivity over Wi-Fi, mobile broadband, or wired LAN. Students were allowed to attend each class in person, or remotely using a laptop or mobile devices including an Ultra-Mobile PC (UMPC), PDA, Video iPod, iPhone, or cell phone. Students found a conventional laptop to be most effective for both synchronous and asynchronous distance learning. | ['Kenneth E. Hoganson'] | Distance-Learning and Converging Mobile Devices | 178,648 |
Viruses, worms, trojan horses and crackers all exist and threaten the security of our computer systems. Often, we are aware of an intrusion only after it has occurred. On some occasions, we may have a fragment of code left behind, used by an adversary to gain access or to damage the system. A natural question to ask is "Can we use this remnant of code to identify the culprit or gain clues as to his identity?" In this paper, we define the study of features of code remnants that might be analyzed to identify their authors. We further outline some of the difficulties involved in tracing an intruder by analyzing code. We conclude by discussing some future work that needs to be done before this approach can be more formally applied. We refer to our process as software forensics, similar to medical forensics: we are examining the remains to obtain evidence about the actors involved. | ['Eugene H. Spafford', 'Stephen A. Weeber'] | Software forensics: Can we track code to its authors? | 336,884 |
This paper describes how a mobile phone can be used as a tool by students to identify various kinds of insects. This mobile application can support their observations and instantly analyze a fairly large amount of data using a fairly simple method, namely an insect determination key. A determination key is usually developed as conventional print media, whether in book or sheet form, which is not practical for students to carry. To address these difficulties, a determination key method is developed based on J2ME that can be installed on mobile phones. The determination key, developed specifically to identify insects, can help students to identify the different kinds of insects they find and group them in a way that is more practical and more attractive than the conventional determination key. With this application, students can immediately identify an insect on the spot. The mobile determination key presents a description of the characteristics of an organism together with the opposite character. These characters are presented from the most general to increasingly specific ones, eventually leading to the key characters that indicate a particular species. | ['Aciek Ida Wuryandari', 'R. Priyo Hartono Adji', 'Yera Permatasari'] | Design and application key determination (dichotomy) J2ME based forms to help students in practicum biological observations | 274,094 |
Complex systems such as those in evolution, growth and depinning models do not evolve slowly and gradually, but exhibit avalanche dynamics or punctuated equilibria. Self-Organized Criticality (SOC) and Highly Optimized Tolerance (HOT) are two theoretical models that explain such avalanche dynamics. We have studied avalanche dynamics in two vastly different grid computing systems: Optimal Grid and Vishva. Failures in Optimal Grid cause an avalanche effect with respect to the overall computation. Vishva does not exhibit failure avalanches. Interestingly, Vishva exhibits load avalanche effects at critical load density, wherein a small load disturbance in one node can cause load disturbances in several other nodes. The avalanche dynamics of grid computing systems implies that grids can be viewed as SOC systems or as HOT systems. An SOC perspective suggests that grids may be sub-optimal in performance, but may be robust to unanticipated uncertainties. A HOT perspective suggests that grids can be made optimal in performance, but would then be sensitive to unanticipated perturbations. An ideal approach for grid systems research is to explore a combination of SOC and HOT as a basis for design, resulting in robust yet optimal systems. | ['A. Vijay Srinivas', 'D. Janakiram', 'M. Venkateswar Reddy'] | Avalanche Dynamics in Grids: Indications of SOC or HOT? | 608,762 |
A context-based communication system enables the indirect addressing and routing of messages according to the users' contexts. This provides, for example, the means to send a message to all students on campus who attend a certain class, with information about an upcoming exam. However, for a targeted forwarding of messages towards users, the routers need information about the context of connected users. Global knowledge, i.e., each router knowing about every user, is not scalable, though, because of the necessary update messages to keep this information up-to-date. | ['Lars Geiger', 'Frank Dürr', 'Kurt Rothermel'] | Aggregation of user contexts in context-based communication | 48,250 |
The IEEE 802.15.4 standard for Low Power Wireless Personal Area Networks (LoWPANs) is emerging as a promising technology to bring the envisioned ubiquitous paradigm into realization. Considerable efforts are being carried out to integrate LoWPANs with other wired and wireless IP networks, in order to make use of the pervasive nature and existing infrastructure associated with IP technologies. Designing a security solution becomes a challenging task as this involves threats from the wireless domain of resource-constrained devices as well as from the extremely mature IP domain. In this paper we have i) identified security threats and requirements for LoWPANs, ii) analyzed current security solutions and identified their shortcomings, iii) proposed a generic security framework that can be modified according to application requirements to provide the desired level of security. We have also given an example implementation scenario of our proposed framework for resource- and security-critical applications. | ['Rabia Riaz', 'Ki-Hyung Kim', 'H. Farooq Ahmed'] | Security analysis survey and framework design for IP connected LoWPANs | 220,759 |
| ['Oswald Lanz', 'Paul Chippendale', 'Roberto Brunelli'] | An Appearance-Based Particle Filter for Visual Tracking in Smart Rooms. | 547,499 |
The goal of this special issue is to celebrate the great work being done on the interface between practice and theory. The articles show that real-world cryptography isn't just focused on the traditional aspects of communications security but now ranges far and wide. They also demonstrate that practitioners are concerned about the societal impacts and the social constructs underlying our "science." | ['Dan Boneh', 'Kenny Paterson', 'Nigel P. Smart'] | Building a Community of Real-World Cryptographers | 969,853 |
This paper examines two key features of time-dependent conformal mappings in doubly-connected regions, the evolution of the conformal modulus Q(t) and the boundary transformation generalizing the Hilbert transform. It also applies the theory to an unsteady free surface flow. Focusing on inviscid, incompressible, irrotational fluid sloshing in a rectangular vessel, it is shown that the explicit calculation of the conformal modulus is essential to correctly predict features of the flow. Results are also presented for fully dynamic simulations which use a time-dependent conformal mapping and the Garrick generalization of the Hilbert transform to map the physical domain to a time-dependent rectangle in the computational domain. The results of this new approach are compared to the complementary numerical scheme of Frandsen (J. Comput. Phys. 196:53–87, 2004) and it is shown that correct calculation of the conformal modulus is essential in order to obtain agreement between the two methods. | ['M. R. Turner', 'Thomas J. Bridges'] | Time-dependent conformal mapping of doubly-connected regions | 632,559 |
Wireless full-duplexing enables transmission and reception on the same frequency channel at the same time, and has the potential to improve the end-to-end throughput of wireless multi-hop networks. In this paper, we propose a media access control (MAC) protocol for wireless full-duplex multi-hop networks called Relay Full-Duplex MAC (RFD-MAC). The RFD-MAC is an asynchronous full-duplex MAC protocol which consists of a primary transmission and a secondary transmission. The RFD-MAC increases the number of full-duplex links by overhearing frames, which include 1-bit information concerning the existence of a successive frame, and selecting a secondary transmission node using the gathered information. The gathered information is also used to avoid a collision between the primary and secondary transmissions. Simulation results reveal that the proposed RFD-MAC improves end-to-end throughput by up to 68%, 49% and 56% compared to CSMA/CA, FD-MAC and MFD-MAC, respectively. | ['Kenta Tamaki', 'Hendrotomo Ari Raptino', 'Yusuke Sugiyama', 'Masaki Bandai', 'Shunsuke Saruwatari', 'Takashi Watanabe'] | Full Duplex Media Access Control for Wireless Multi-Hop Networks | 260,061 |
Presents a statistical approach to modeling superscalar processor performance. Standard trace-driven techniques are very accurate, but require extremely long simulation times, especially as traces reach lengths in the billions of instructions. A framework for statistical models is described which facilitates fast, accurate performance evaluation. A machine model is built up from components: buffers, pipelines, etc. Each program trace is scanned once, generating a set of program parallelism parameters which can be used across an entire family of machine models. The machine model and program parallelism parameters are combined to form a Markov chain. The Markov chain is partitioned in order to reduce the size of the state space, and the resulting linked models are solved using an iterative technique. The use of this framework is demonstrated with two simple processor microarchitectures. The IPC estimates are very close to the IPCs generated by trace-driven simulation of the same microarchitectures. Resource utilization and other performance data can also be obtained from the statistical model. | ['Derek B. Noonburg', 'John Paul Shen'] | A framework for statistical modeling of superscalar processor performance | 214,959 |
We used Ramadge-Wonham (RW) theory (1987) of supervisory control to control a system of mobile robots. We discuss our experience in modeling and implementation of the developed control system. We specifically address the control program structure that manages the interaction of the RW controller with its plant. We also present our approach in dealing with practical issues such as forcing events and simultaneous events. The advantages and disadvantages of the RW controller are discussed. | ['Jing Liu', 'Houshang Darabi'] | Ramadge-Wonham supervisory control of mobile robots: lessons from practice | 350,590 |
Functional dependencies (FDs) are an integral part of relational database theory since they are used in integrity enforcement and in database design. Despite their importance, FDs are often not specified, or some of them are not expected by database designers but occur in the data, so the need to infer them from data arises. Furthermore, in several areas such as data cleaning, data integration and data analysis, an important task is to find approximate functional dependencies (that is, FDs approximately satisfied by a data collection) in order to discover erroneous or exceptional elements in the data. In this work we present a system, called Fox, that infers approximate functional dependencies from XML documents employing a new notion of approximation suitable for XML data. Moreover, we show experimental results assessing the effectiveness of the Fox system and indicating that our approach is promising from the point of view of the semantic significance of the mined knowledge. | ['Fabio Fassetti', 'Bettina Fazzinga'] | FOX: Inference of Approximate Functional Dependencies from XML Data | 323,466 |
Imaginary motor tasks cause brain oscillations that can be detected through the analysis of electroencephalographic (EEG) recordings. This article aims at studying whether or not the characteristics of the brain activity induced by the combined motor imagery (MI) of both hands can be assumed to be the superposition of the activity generated during simple hand MIs. After analyzing the sensorimotor rhythms in the EEG signals of five healthy subjects, results show that the imagination of both hands' movement generates in each brain hemisphere similar activity to that produced by each simple hand MI in the contralateral side. Furthermore, during simple hand MIs, brain activity over the ipsilateral hemisphere presents similar characteristics to those observed during the rest condition. Thus, it is shown that the proposed scheme is valid and promising for brain-computer interface (BCI) control, allowing patterns induced by combined MIs to be easily detected. | ['Cecilia Lindig-León', 'Laurent Bougrain'] | Comparison of sensorimotor rhythms in EEG signals during simple and combined motor imageries over the contra and ipsilateral hemispheres | 665,137 |
Sufficient intercommunication between the ranging module and the positioning module has been lacking in current research on the localization issue for IR-UWB wireless sensor networks. The idea of integrative ranging and positioning is proposed in this paper. The ranging module is designed to provide not only the range estimation results to the positioning module, but also the corresponding reliability evaluation results and NLOS identification results. Three positioning algorithms that utilize the ranging information differently are proposed, including LS, WLS, and MLE. Simulation results show that WLS and MLE greatly outperform LS. Since the computation complexity of WLS is much less than that of MLE, WLS is more appropriate for practical application. The effects of pulse energy and the NLOS ratio of anchor nodes on positioning performance are also investigated, and some useful results that can guide practical design are drawn. | ['Shaohua Wu', 'Qiaoling Zhang', 'Qinyu Zhang', 'Haiping Yao'] | Integrative Ranging and Positioning for IR-UWB Wireless Sensor Networks | 263,029 |
A new generation of acquisition devices with high dynamics is rapidly overcoming the limitations of current hardware. The dynamic range of visualisation devices is lower by some orders of magnitude than that of acquisition hardware. Dedicated algorithms are needed to fill the gap between the high dynamics of acquired scenes and the low dynamic range of visualisation devices. We propose a novel approach to adaptively reduce the dynamics of video sequences to fit the display range. Our algorithm takes into account time relationships between neighbouring frames to avoid annoying artifacts. | ['Gaetano Impoco', 'Stefano Marsi', 'Giovanni Ramponi'] | Adaptive reduction of the dynamics of HDR video sequences | 341,082 |
In classification problems, machine learning algorithms often make use of the assumption that (dis)similar inputs lead to (dis)similar outputs. In this case, two questions naturally arise: what does it mean for two inputs to be similar and how can this be used in a learning algorithm? In support vector machines, similarity between input examples is implicitly expressed by a kernel function that calculates inner products in the feature space. For numerical input examples the concept of an inner product is easy to define; for discrete structures like sequences of symbolic data, however, these concepts are less obvious. This article describes an approach to SVM learning for symbolic data that can serve as an alternative to the bag-of-words approach under certain circumstances. This latter approach first transforms symbolic data to vectors of numerical data which are then used as arguments for one of the standard kernel functions. In contrast, we will propose kernels that operate on the symbolic data directly. | ['Bram Vanschoenwinkel', 'Bernard Manderick'] | Appropriate kernel functions for support vector machine learning with sequences of symbolic data | 857,620 |
This paper proposes a concept for machine learning that integrates a grid scheme (GS) into a least squares support vector machine (LSSVM) (called GS-LSSVM) with a mixed kernel in order to solve data classification problems. The purpose of GS-LSSVM is to execute feature selections, mixed kernel applications, and parameter optimization in a learning paradigm. The proposed learning paradigm includes three steps. First, an orthogonal design is utilized to initialize the number of input features and candidate parameters stored in GS. Then, the features are randomly selected according to the first grid acquired from the first step. These features and the candidate parameters are then passed to LSSVM. Finally, an artificial bee colony algorithm, the recently popular heuristic algorithm, is used to optimize parameters for LSSVM learning. For illustration and evaluation purposes, ten remarkable data sets from the University of California Irvine database are used as testing targets. The experimental results reveal that the proposed GS-LSSVM can produce a classification model more easily interpreted using a small number of features. In terms of accuracy (hit ratio), the GS-LSSVM can significantly outperform other methods listed in this paper. These findings imply that the GS-LSSVM is a promising approach to classification exploration. | ['Tsung-Jung Hsieh', 'Wei-Chang Yeh'] | Knowledge Discovery Employing Grid Scheme Least Squares Support Vector Machines Based on Orthogonal Design Bee Colony Algorithm | 256,436 |
In 1997, Moody and Wu presented recurrent reinforcement learning (RRL) as a viable machine learning method within algorithmic trading. Subsequent research has shown a degree of controversy with regards to the benefits of incorporating technical indicators in the recurrent reinforcement learning framework. In 1991, Nison introduced Japanese candlesticks to the global research community as an alternative to employing traditional indicators within the technical analysis of financial time series. The literature accumulated over the past two and a half decades of research contains conflicting results with regards to the utility of using Japanese candlestick patterns to exploit inefficiencies in financial time series. In this paper, we combine features based on Japanese candlesticks with recurrent reinforcement learning to produce a high-frequency algorithmic trading system for the E-mini S&P 500 index futures market. Our empirical study shows a statistically significant increase in both return and Sharpe ratio compared to relevant benchmarks, suggesting the existence of exploitable spatio-temporal structure in Japanese candlestick patterns and the ability of recurrent reinforcement learning to detect and take advantage of this structure in a high-frequency equity index futures trading environment. | ['Patrick Gabrielsson', 'Ulf Johansson'] | High-Frequency Equity Index Futures Trading Using Recurrent Reinforcement Learning with Candlesticks | 610,616 |
We describe an efficient incentive mechanism for P2P systems that generates a wide diversity of content offerings while responding adaptively to customer demand. Files are served and paid for through a parimutuel market similar to that commonly used for betting in horse races. An analysis of the performance of such a system shows that there exists an equilibrium with a long tail in the distribution of content offerings, which guarantees the real-time provision of any content regardless of its popularity. | ['Bernardo A. Huberman', 'Fang Wu'] | Bootstrapping the Long Tail in Peer to Peer Systems | 343,814 |
Asynchronous algorithms have been demonstrated to improve scalability of a variety of applications in parallel environments. Their distributed adaptations have received relatively less attention, particularly in the context of conventional execution environments and associated overheads. One such framework, MapReduce, has emerged as a commonly used programming framework for large-scale distributed environments. While the MapReduce programming model has proved to be effective for data-parallel applications, significant questions relating to its performance and application scope remain unresolved. The strict synchronization between map and reduce phases limits expression of asynchrony and hence does not readily support asynchronous algorithms. This paper investigates the notion of partial synchronizations in iterative MapReduce applications to overcome global synchronization overheads. The proposed approach applies a locality-enhancing partition on the computation. Map tasks execute local computations with (relatively) frequent local synchronizations, and less frequent global synchronizations. This approach yields significant performance gains in distributed environments, even though the serial operation counts are higher. We demonstrate these performance gains on asynchronous algorithms for diverse applications, including pagerank, shortestpath, and kmeans. We make the following specific contributions in the paper: (i) we motivate the need to extend MapReduce with constructs for asynchrony, (ii) we propose an API to facilitate partial synchronizations combined with eager scheduling and locality-enhancing techniques, and (iii) we demonstrate performance improvements from our proposed extensions through a variety of applications from different domains. | ['Karthik Kambatla', 'Naresh Rapolu', 'Suresh Jagannathan', 'Ananth Grama'] | Asynchronous Algorithms in MapReduce | 150,170 |
Smartcard software developers suffer from the lack of a standard communication framework between a workstation and a smartcard. To address this problem, we extended the UNIX filesystem to provide access to smartcard storage, which enables us to use files in a smartcard as though they were normal UNIX files, but with the additional security properties inherent to smartcards. | ['Naomaru Itoi', 'Peter Honeyman', 'Jim Rees'] | SCFS: a UNIX filesystem for smartcards | 181,242 |
This paper presents initial results of comparisons between fluently spoken Japanese and English on a common task: speaker-independent digit recognition with applications in voice dialing. The complexity of this task across these languages is comparable in terms of lexicon size and perplexity of the language model. The English lexicon contained 11 words, and the Japanese lexicon contained 13 words. The durations of the words, as well as of the phones, proved to be longer and to have greater variation in English than in Japanese. An analysis of several key recognition parameters, namely the frame duration, LPC order, and feature vector dimensionality, is also included. None of the above parameters seems to show language dependency in our test. | ['Kazuhiro Kondo', 'Joseph Picone', 'Barbara Wheatley'] | A comparative analysis of Japanese and English digit recognition | 409,998 |
| ['Chiying Wang', 'Sergio A. Alvarez', 'Carolina Ruiz', 'Majaz Moonis'] | Semi-Markov Modeling-Clustering of Human Sleep with Efficient Initialization and Stopping | 769,817 |
Remote sensing is often used to assess rangeland condition and biophysical parameters across large areas. In particular, the relationship between the Normalized Difference Vegetation Index (NDVI) and above-ground biomass can be used to assess rangeland primary productivity (seasonal carbon gain or above-ground biomass “yield”). We evaluated the NDVI–yield relationship for a southern Alberta prairie rangeland, using seasonal trends in NDVI and biomass during the 2009 and 2010 growing seasons, two years with contrasting rainfall regimes. The study compared harvested biomass and NDVI from field spectrometry to NDVI from three satellite platforms: the Aqua and Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and Systeme Pour l’Observation de la Terre (SPOT 4 and 5). Correlations between ground spectrometry and harvested biomass were also examined for each growing season. The contrasting precipitation patterns were easily captured with satellite NDVI, field NDVI and green biomass measurements. NDVI provided a proxy measure for green plant biomass, and was linearly related to the log of standing green biomass. NDVI phenology clearly detected the green biomass increase at the beginning of each growing season and the subsequent decrease in green biomass at the end of each growing season due to senescence. NDVI–biomass regressions evolved over each growing season due to end-of-season senescence and carryover of dead biomass to the following year. Consequently, mid-summer measurements yielded the strongest correlation (R2 = 0.97) between NDVI and green biomass, particularly when the data were spatially aggregated to better match the satellite sampling scale. Of the three satellite platforms (MODIS Aqua, MODIS Terra, and SPOT), Terra yielded the best agreement with ground-measured NDVI, and SPOT yielded the weakest relationship. When used properly, NDVI from satellite remote sensing can accurately estimate peak-season productivity and detect interannual variation in standing green biomass, and field spectrometry can provide useful validation for satellite data in a biomass monitoring program in this prairie ecosystem. Together, these methods can be used to identify the effects of year-to-year precipitation variability on above-ground biomass in a dry mixed-grass prairie. These findings have clear applications in monitoring yield and productivity, and could be used to support a rangeland carbon monitoring program. | ['Donald C. Wehlage', 'John A. Gamon', 'Donnette R. Thayer', 'David V. Hildebrand'] | Interannual Variability in Dry Mixed-Grass Prairie Yield: A Comparison of MODIS, SPOT, and Field Measurements | 909,018 |
Pinterest is a popular social curation site where people collect, organize, and share pictures of items. We studied a fundamental issue for such sites: what patterns of activity attract attention (audience and content reposting)? We organized our studies around two key factors: the extent to which users specialize in particular topics, and homophily among users. We also considered the existence of differences between female and male users. We found: (a) women and men differed in the types of content they collected and the degree to which they specialized; male Pinterest users were not particularly interested in stereotypically male topics; (b) sharing diverse types of content increases your following, but only up to a certain point; (c) homophily drives repinning: people repin content from other users who share their interests; homophily also affects following, but to a lesser extent. Our findings suggest strategies both for users (e.g., strategies to attract an audience) and maintainers (e.g., content recommendation methods) of social curation sites. | ['Shuo Chang', 'Vikas Kumar', 'Eric Gilbert', 'Loren G. Terveen'] | Specialization, homophily, and gender in a social curation site: findings from pinterest | 112,403 |
In order to explore suitable video processing algorithms for a unique all-digital display system, the DLP (digital light processing) projection system using the DMD (digital micro-mirror device), a high-performance programmable video processor has been desired. A SIMD (single instruction multiple data stream) type real-time video processor, the SVP2 (second-generation scan-line video processor), is used to implement the majority of algorithms required on DLP systems. The SVP2 is fully programmable to create various video algorithms. The SVP2 device architecture, the optimal software programming schemes, and the developed video signal enhancement algorithms, including a deinterlacer, peaking and a simple chroma transient improvement (CTI) optimized on the SVP2, are described. | ['Kazuhiro Ohara', 'Akira Takeda', 'Gary Sextro'] | Video signal enhancement optimized on SVP2 | 88,789 |
We describe a new Jacobi ordering for parallel computation of SVD problems. The ordering uses the high bandwidth of a perfect binary fat-tree to minimise global interprocessor communication costs. It can thus be implemented efficiently on fat-tree architectures. | ['Bing Bing Zhou', 'Richard P. Brent'] | Parallel Computation of the Singular Value Decomposition on Tree Architectures | 234,320 |
Recently, minimally invasive surgery has become popular because it offers several benefits to patients and surgeons. Many surgical robot systems are being developed. Among the various systems, we are developing a modularized system which has strong points in terms of size and accessibility. In this study, we propose a laparoscope handler as a module for the robotic surgery system. Most laparoscope handlers developed up to this point have lower accessibility due to their bulkiness, and lower usability. In this paper, we solve these problems by utilizing 4-DOF with a double-parallelogram remote center of motion and a collet-chuck mounting method for a commercial laparoscope. As a result, our laparoscope handler system is compact, low-cost and uses minimal workspace without any safety issues. Furthermore, by using teleoperation, surgeons can utilize our system easily. | ['Dong-hoon Kang', 'Hyunwoo Baek', 'Byung-Sik Cheon', 'Deok-Gyun Jeong', 'Hyun-young Lee', 'Dong-Soo Kwon'] | Robotic handler for interchangeability with various size of laparoscope | 941,342 |
Interfaces are data types that are very useful for providing abstract and organized views on programs and APIs, and opportunities for writing more generic code and for reuse. Extract interface refactoring is a well-known local refactoring which is commonly used in development tools. Beyond that local refactoring, there is a need for mass extraction of an interface hierarchy from a class hierarchy. In this paper, we report on an experience with master students putting into practice an existing Formal Concept Analysis (FCA) based approach for solving that problem. The results show that the data selection (selected datatypes: interfaces, abstract classes, concrete classes; attributes; attribute description; methods; method description; etc.) was not as obvious as it was expected to be, and that the students used the approach more as an analysis technique that would guide the extraction than as a turn-key solution. | ['Marianne Huchard'] | Full application of the extract interface refactoring: conceptual structures in the hands of master students | 873,692 |
Indoor localization is a key topic for the Ambient Intelligence (AmI) research community. In these scenarios, recent advancements in wearable technologies, particularly smartwatches with built-in sensors, and personal devices, such as smartphones, are being seen as the breakthrough for making concrete the envisioned Smart Environment (SE) paradigm. In particular, scenarios devoted to indoor localization represent a key challenge to be addressed. Many works try to solve the indoor localization issue, but the lack of a common dataset or framework to compare and evaluate solutions represents a big barrier to be overcome in the field. The unavailability and uncertainty of public datasets hinder the possibility of comparing different indoor localization algorithms. This constitutes the main motivation of the proposed dataset described herein. We collected Wi-Fi and geo-magnetic field fingerprints, together with inertial sensor data, during two campaigns performed in the same environment. Retrieving synchronized data from a smartwatch and a smartphone worn by users, with the purpose of creating and presenting a publicly available dataset, is the goal of this work. | ['Paolo Barsocchi', 'Antonino Crivello', 'Davide La Rosa', 'Filippo Palumbo'] | A multisource and multivariate dataset for indoor localization methods based on WLAN and geo-magnetic field fingerprinting | 941,408 |
In this paper, adaptive dynamic surface control (DSC) is developed for a class of pure-feedback nonlinear systems with unknown dead zone and perturbed uncertainties using neural networks. The explosion of complexity in traditional backstepping design is avoided by utilizing dynamic surface control and introducing integral-type Lyapunov function. It is proved that the proposed design method is able to guarantee semi-global uniform ultimate boundedness of all signals in the closed-loop system, with arbitrary small tracking error by appropriately choosing design constants. Simulation results demonstrate the effectiveness of the proposed approach. | ['Tangjie Zhang', 'Shuzhi Sam Ge'] | Brief paper: Adaptive dynamic surface control of nonlinear systems with unknown dead zone in pure feedback form | 242,912 |
| ['Massimo Bartoletti', 'Roberto Zunino'] | Constant-deposit multiparty lotteries on Bitcoin. | 989,836 |
Scientific workflow systems are used to integrate existing software components (actors) into larger analysis pipelines to perform in silico experiments. Current approaches for handling data in nested-collection structures, as required in many scientific domains, lead to many record-management actors (shims) that make the workflow structure overly complex, and as a consequence hard to construct, evolve and maintain. By constructing and executing workflows from bioinformatics and geosciences in the Kepler system, we will demonstrate how COMAD (Collection-Oriented Modeling and Design), an extension of conventional workflow design, addresses these shortcomings. In particular, COMAD provides a hierarchical data stream model (as in XML) and a novel declarative configuration language for actors that functions as a middleware layer between the workflow's data model (streaming nested collections) and the actor's data model (base data and lists thereof). Our approach allows actor developers to focus on the internal actor processing logic oblivious to the workflow structure. Actors can then be re-used in various workflows simply by adapting actor configurations. Due to streaming nested collections and declarative configurations, COMAD workflows can usually be realized as linear data processing pipelines, which often reflect the scientific data analysis intention better than conventional designs. This linear structure not only simplifies actor insertions and deletions (workflow evolution), but also decreases the overall complexity of the workflow, reducing future effort in maintenance. | ['Lei Dou', 'Daniel Zinn', 'Timothy McPhillips', 'Sven Köhler', 'Sean Riddle', 'Shawn Bowers', 'Bertram Ludäscher'] | Scientific workflow design 2.0: Demonstrating streaming data collections in Kepler | 138,716 |
When perceiving human actions, a robotic assistant needs to direct its computational and sensor resources to relevant parts of the human action. In previous work (Demiris and Khadhouri, 2006) we introduced HAMMER (Hierarchical Attentive Multiple Models for Execution and Recognition), a computational architecture that forms multiple hypotheses with respect to what the demonstrated task is, and multiple predictions with respect to the forthcoming states of the human action. To confirm their predictions, the hypotheses request information from an attentional mechanism, which allocates the robot's resources as a function of the saliency of the hypotheses. In this paper we augment the attention mechanism with a component that considers the content of the hypotheses' requests with respect to reliability, utility and cost. This content-based attention component further optimises the utilisation of the resources while remaining robust to noise. Such computational mechanisms are important for the development of robotic devices that will rapidly respond to human actions, either for imitation or collaboration purposes. | ['Yiannis Demiris', 'Bassam Khadhouri'] | Content-based control of goal-directed attention during human action perception | 169,469
Categorising gender for soft biometric recognition is especially challenging from low quality surveillance footage. Our novel approach discovers super fine-grained visual taxonomies of gender from pairwise similarity comparisons, annotated via crowdsourcing. This paper presents our techniques for collection, interpretation and clustering of perceived visual similarities, and discusses the transition from pre-defined categorisation to similarity comparisons between subjects. We compare and evaluate our proposal on two diverse datasets, demonstrating the ability to describe multiple concepts, including ambiguity and uncertainty, that go beyond binary male-female designators. Our method is applicable to a wide range of soft biometric traits and image attributes, and can aid in efficiently annotating large-scale datasets, by generating more discriminative, reproducible and flexible categorical labels. | ['Daniel Martinho-Corbishley', 'Mark S. Nixon', 'John N. Carter'] | On categorising gender in surveillance imagery | 869,384 |
Fault-tolerance may be expected to gain more and more importance in the future. Extremely harsh and changing environments, like outer space, already force us to think about this issue today, but issues like production of large-scale devices might put the same requirements on the devices of tomorrow. Imagine a mixture of chemical substances in a reservoir, together with a circuit-implementing shell that has self-repairing properties based on the maintenance of the chemical equilibrium. Could this type of solution be the basis for a robust future technology for evolvable hardware? A long term goal of evolvable hardware is to evolve large complex designs for large devices. However, both evolving large complex designs and manufacturing large reliable devices is technologically out of reach due to the resource greedy nature of GAs and low device yield rates. In this article we explore the technological requirements of digital design, design by evolution and development and the reliability issue in the light of today's digital evolvable hardware technology, FPGA and a proposed fault tolerant technology, Amorphous Computers. Considering the limitation of these platforms, we project these findings towards possible future technology platforms. | ['Pauline C. Haddow', 'P. van Remortel'] | From here to there : future robust EHW technologies for large digital designs | 439,392 |
['Thomas Prätzlich', 'Meinard Müller'] | Frame-Level Audio Segmentation for Abridged Musical Works. | 753,921 |
|
We present a new role system in which the type (or role ) of each object depends on its referencing relationships with other objects, with the role changing as these relationships change. Roles capture important object and data structure properties and provide useful information about how the actions of the program interact with these properties. Our role system enables the programmer to specify the legal aliasing relationships that define the set of roles that objects may play, the roles of procedure parameters and object fields, and the role changes that procedures perform while manipulating objects. We present an interprocedural, compositional, and context-sensitive role analysis algorithm that verifies that a program maintains role constraints. | ['Viktor Kuncak', 'Patrick Lam', 'Martin C. Rinard'] | Role analysis | 680,661 |
Two laboratory projects, a Doppler radar and a synthetic aperture radar (SAR), designed to augment traditional electromagnetics education are proposed. The projects introduce students to component- and system-level design and expose them to modern computer-aided design (CAD) tools, microstrip and surface-mount fabrication technologies, and industry-standard test equipment and procedures. Additionally, because the projects result in a working radar system, students gain new enthusiasm for the electromagnetics discipline and directly see its relevance in the engineering field. Implementation of these laboratories within the curriculum has proven to be highly motivational and educational and has even contributed to increased enrolments in upper-division electromagnetics courses. | ['Michael A. Jensen', 'David V. Arnold', 'Donald Crockett'] | System-level microwave design: radar-based laboratory projects | 379,227
In ad hoc networks, energy conservation is a critical issue because devices are battery powered. In order to save energy, nodes should turn their radios off when they have no packets to send or receive. To achieve this, IEEE 802.11 defined a power saving mechanism (PSM) based on periodic beacon transmission. It allows devices to turn their radios off when no data has to be sent or received. However, the time spent idle is long because a device must remain awake during the entire beacon interval even to transmit only a small amount of data packets. Although other PSMs were proposed to solve this problem, they still suffer from long idle periods. In this paper, we propose a new power saving mechanism that dynamically adjusts the sleeping strategy of nodes based on traffic load. Simulation results show that the proposed PSM outperforms other PSMs in energy goodput and lifetime. | ['Tsung-Chuan Huang', 'Jui-Hua Tan', 'Chao-Chieh Huang'] | A Traffic-Load Oriented Power Saving Mechanism for MAC Protocol in Ad Hoc Networks | 29,533
Signal Integrity (SI) and Power Integrity (PI) are the most important characteristics for system-level design, simulation and analysis of high-speed systems. In this paper, the HSLINK system is optimized for better SI and PI. Linear models for eye amplitude and jitter are derived by Design of Experiments (DOE). A cost-effective solution strategy is also presented using the obtained linear models. | ['Raj Kumar Nagpal', 'Jai Narayan Tripathi', 'Rakesh Malik'] | Cost-effective optimization of serial link system for Signal Integrity and Power Integrity | 471,671
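To make the DOE-based modelling step concrete, here is a minimal sketch of fitting a first-order linear response model to simulated link corners with ordinary least squares. The factor names, corner values and responses are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Design-of-experiments matrix: each row is one simulated corner of the link;
# columns are hypothetical design factors (e.g. driver strength, pre-emphasis,
# termination resistance). All values are illustrative only.
X = np.array([
    [1.0, 0.0, 50.0],
    [1.5, 0.2, 50.0],
    [1.0, 0.2, 60.0],
    [1.5, 0.0, 60.0],
])
eye_amplitude = np.array([0.42, 0.55, 0.47, 0.51])  # responses in volts, illustrative

# Fit a first-order (linear) response model y = b0 + b1*x1 + b2*x2 + b3*x3.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, eye_amplitude, rcond=None)
print("intercept and factor effects:", coef)
```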
Localization has attracted a lot of research effort in the last decade due to the explosion of location-based services (LBS). In particular, wireless fingerprinting localization has received much attention due to its simplicity and compatibility with existing hardware. In this work, we take a closer look at the underlying aspects of wireless fingerprinting localization. First, we review the various methods to create a radiomap. In particular, we look at the traditional fingerprinting method, which is based purely on measurements, the parametric pathloss regression model, and the non-parametric Gaussian Process (GP) regression model. Then, based on these three methods and measurements from a real-world deployment, we examine the various aspects that affect the performance of fingerprinting localization, such as the density of access points (APs) and the impact of an outdated signature map. At the end of the paper, the audience should have a better understanding of what to expect from fingerprinting localization in a real-world deployment. | ['Simon Yiu', 'Marzieh Dashti', 'Holger Claussen', 'Fernando Perez-Cruz'] | Wireless RSSI fingerprinting localization | 825,192
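As a concrete illustration of the parametric pathloss regression approach mentioned in the abstract, the sketch below fits the standard log-distance pathloss model to a handful of survey measurements. Distances and RSSI values are made up; the paper's actual data and model details may differ.

```python
import numpy as np

# Fit the standard log-distance pathloss model
#   RSSI(d) = P0 - 10 * n * log10(d / d0)
# to survey measurements; P0 and n are the parameters estimated by regression.
d0 = 1.0                                              # reference distance (m)
d = np.array([1.0, 2.0, 4.0, 8.0, 16.0])              # survey distances (illustrative)
rssi = np.array([-40.0, -47.0, -53.0, -61.0, -68.0])  # measured RSSI in dBm (illustrative)

A = np.column_stack([np.ones_like(d), -10.0 * np.log10(d / d0)])
(P0, n), *_ = np.linalg.lstsq(A, rssi, rcond=None)
print(f"P0 = {P0:.1f} dBm, pathloss exponent n = {n:.2f}")

# The fitted model can then serve as a simple parametric radiomap:
def predicted_rssi(distance_m: float) -> float:
    return P0 - 10.0 * n * np.log10(distance_m / d0)
```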
Simulation in manufacturing has traditionally been used for high-level capacity planning. Simulation use is rapidly growing in other fields such as scheduling, detailed equipment models, and application-specific models for use in emulation, engineering, sales and marketing. This growth can be attributed to the ability of simulation software tools to model manufacturing and material handling operations more accurately than traditional simulation tools. It is not unheard of today to create a model of an automated system with nearly 98 percent accuracy. Much of the recent focus from software vendors has been on increasing the ability to accurately depict manufacturing operations. Until very recently, the simulation industry suffered greatly from its inability to analyze complex manufacturing systems in enough detail to provide optimum or near-optimum solutions to the problems being addressed. The paper considers performance optimization in simulation. | ['Matk Pool', 'Richard Stafford'] | Optimization and analysis of performance in simulation | 248,372
This paper is concerned with the asymptotic convergence of numerical solutions toward discrete travelling waves for a class of relaxation numerical schemes approximating the scalar conservation law. It is shown that if the initial perturbations possess some algebraic decay in space, then the numerical solutions converge to the discrete travelling wave at a corresponding algebraic rate in time, provided the sums of the initial perturbations for the u-component equal zero. A polynomially weighted l^2 norm on the perturbation of the discrete travelling wave and a technical energy method are applied to obtain the asymptotic convergence rate. | ['Hailiang Liu'] | Convergence rates to the discrete travelling wave for relaxation schemes | 109,812
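For orientation, a minimal LaTeX rendering of the standard objects named in the abstract: the scalar conservation law being approximated and a generic polynomially weighted l^2 norm on grid perturbations. The specific flux f and weight used in the paper may differ; this is only a sketch under those assumptions.

```latex
% Scalar conservation law and a generic polynomially weighted l^2 norm
% (weight exponent \alpha is illustrative).
\begin{align*}
  &\partial_t u + \partial_x f(u) = 0, \qquad u(x,0) = u_0(x),\\
  &\|v\|_{\ell^2_\alpha}^2 \;=\; \sum_{j\in\mathbb{Z}} (1+|j|)^{\alpha}\, |v_j|^2 \,\Delta x,
  \qquad \alpha > 0 .
\end{align*}
```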
['Stefan Edelkamp', 'Fritz Jacob'] | Learning Event Time Series for the Automated Quality Control of Videos | 870,832 |
|
This paper is a contribution to network decontamination with a view inherited from parallel processing. At the beginning some or all the vertices may be contaminated. The network is visited by a group of decontaminating agents. When a decontaminated vertex is left by the agents, it can be re-contaminated only if the number of infected neighbors exceeds a certain immunity threshold m. The main goal of the studies in this line is to minimize the number A of agents needed to do the job and, for a minimum team, to minimize the number M of agent moves. Instead of M we consider the number T of steps (i.e. parallel moves) as a measure of time, and evaluate the quality of a protocol on the basis of its work W = AT. Taking butterfly networks as an example, we compare different protocols and show that, for some values of m, a larger team of agents may require smaller work. | ['Fabrizio Luccio', 'Linda Pagli'] | More agents may decrease global work: A case in butterfly decontamination ☆ | 907,348
In this paper, we present general methods that can be used to explore the information processing potential of a medium composed of oscillating (self-exciting) droplets. Networks of Belousov–Zhabotinsky (BZ) droplets seem especially interesting as chemical reaction-diffusion computers because their time evolution is qualitatively similar to neural network activity. Moreover, such networks can be self-generated in microfluidic reactors. However, it is hard to track and to understand the function performed by a medium composed of droplets due to its complex dynamics. As in recurrent neural networks, the flow of excitations in a network of droplets is not limited to a single direction and spreads throughout the whole medium. In this work, we analyze the operation performed by droplet systems by monitoring the information flow. This is achieved by measuring mutual information and time-delayed mutual information of the discretized time evolution of individual droplets. To link the model with reality, we use experimental results to estimate the parameters of droplet interactions. As an example, we investigate an evolutionarily generated droplet structure that operates as a NOR gate. The presented methods can be applied to networks composed of at least hundreds of droplets. | ['Gerd Gruenert', 'Konrad Gizynski', 'Gabi Escuela', 'Bashar Ibrahim', 'Jerzy Gorecki', 'Peter Dittrich'] | Understanding Networks of Computing Chemical Droplet Neurons Based on Information Flow. | 529,800
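The information-flow analysis described above rests on (time-delayed) mutual information between discretized droplet traces. The snippet below is a minimal plug-in estimator for two discrete sequences; the binary traces are invented, and the authors' estimator and discretization scheme may differ.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information (in bits) of two discrete sequences of equal length."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        mi += p_ab * np.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

def delayed_mutual_information(x, y, delay):
    """MI between droplet x at time t and droplet y at time t + delay."""
    return mutual_information(x[:len(x) - delay], y[delay:])

# Illustrative binary excitation traces of two droplets (1 = excited, 0 = resting).
a = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1]
b = [0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
print(delayed_mutual_information(a, b, delay=1))
```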
This paper presents a compiler from expressive, relational specifications to logic programs. Specifically, the compiler translates the Imperative Alloy specification language to Prolog. Imperative Alloy is a declarative, relational specification language based on first-order logic and extended with imperative constructs; Alloy specifications are traditionally not executable. In spite of this theoretical limitation, the compiler produces useful prototype implementations for many specifications. | ['Joseph P. Near'] | From Relational Specifications to Logic Programs | 443,478 |
We introduce DVSA, distributed virtual shared areas, a virtual machine supporting the sharing of information on distributed memory architectures. The shared memory is structured as a set of areas, where the size of each area may be chosen in an architecture-dependent range. DVSA supports the sharing of areas rather than of variables because the exchange of chunks of data may result in better performance on distributed memory architectures offering little or no hardware support for information sharing. DVSA does not implement replication or prefetching strategies, under the assumption that these strategies should be implemented by application-specific virtual machines. The definition of these machines may often be driven by the compilation of the adopted programming languages. To validate the assumption, we first consider the implementation of data parallel loops and show that a set of static analyses based on the closed-forms approach makes it possible to define compiler-driven caching and prefetching strategies. These strategies fully exploit the operations offered by the DVSA machine and noticeably reduce the time to access shared information. The optimization strategies that can be exploited by the compiler include the merging of accesses to avoid multiple accesses to the same area, the prefetching of areas, and the reduction of the overhead due to barrier synchronization. Preliminary performance figures are discussed. | ['Fabrizio Baiardi', 'Davide Guerri', 'Paolo Mori', 'Laura Ricci'] | Evaluation of a virtual shared memory machine by the compilation of data parallel loops | 198,941
['Tony Owen'] | Control Of Robot Manipulators by F.L. Lewis, Abdallah C.T. and Dawson D.M. Maxwell Macmillan Publishing Co, Oxford, UK, 1993, 424 pp, index (£27.95). | 475,945 |
|
An empirical modeling of road-related and non-road-related landslide hazard for a large geographical area using logistic regression in tandem with signal detection theory is presented. This modeling was developed using geographic information system (GIS) and remote sensing data, and was implemented on the Clearwater National Forest in central Idaho. The approach is based on explicit and quantitative environmental correlations between observed landslide occurrences, climate, parent material, and environmental attributes, while receiver operating characteristic (ROC) curves are used as a measure of performance of a predictive rule. The modeling results suggest that the development of two independent models for road-related and non-road-related landslide hazard was necessary because spatial prediction and predictor variables were different for these models. The probabilistic models of landslide potential may be used as a decision support tool in forest planning involving the maintenance, obliteration or development of new forest roads in steep mountainous terrain. | ['Pece V. Gorsevski', 'Paul E. Gessler', 'Randy B. Foltz', 'William J. Elliot'] | Spatial Prediction of Landslide Hazard Using Logistic Regression and ROC Analysis | 476,477
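A minimal sketch of the modelling pipeline named in the abstract, logistic regression followed by ROC analysis, using scikit-learn on synthetic terrain cells. The predictor variables, data and parameters are illustrative assumptions, not the study's actual GIS attributes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative predictor matrix: each row is a terrain cell with GIS-derived
# attributes (e.g. slope, distance to road, a parent-material indicator); the
# label marks observed landslide occurrence. Values are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]        # landslide hazard probability per cell

# ROC analysis: the area under the curve summarises how well the hazard map
# separates landslide from non-landslide cells.
auc = roc_auc_score(y, prob)
fpr, tpr, thresholds = roc_curve(y, prob)
print(f"AUC = {auc:.3f}")
```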
A randomly distributed microphone array is considered in this work. In many applications exact design of the array is impractical. The performance of these arrays, characterized by a large number of microphones deployed in vast areas, cannot be analyzed by traditional deterministic methods. We therefore derive a novel statistical model for performance analysis of the MWF beamformer. We consider the scenario of one desired source and one interfering source arriving from the far-field and impinging on a uniformly distributed linear array. A theoretical model for the MMSE is developed and verified by simulations. The applicability of the proposed statistical model for speech signals is discussed. | ['Shmulik Markovich Golan', 'Sharon Gannot', 'Israel Cohen'] | Performance analysis of a randomly spaced wireless microphone array | 254,107 |
We generalize the classical expected-utility criterion by weakening transitivity to Suzumura consistency. In the absence of full transitivity, reflexivity and completeness no longer follow as a consequence of the system of axioms employed, and a richer class of rankings of probability distributions results. This class is characterized by means of standard expected-utility axioms in addition to Suzumura consistency. An important feature of some members of our new class is that they allow us to soften the negative impact of well-known paradoxes without abandoning the expected-utility framework altogether. | ['Walter Bossert', 'Kotaro Suzumura'] | Expected utility without full transitivity | 598,004
Web service consumption may account for a non-negligible share of the energy consumed by mobile applications. Unawareness of the energy consumption characteristics of Web service-based applications during development may cause the battery of devices, e.g., smartphones, to run out more frequently. Compared to related experimental energy consumption studies, the work at hand is the first to focus on factors that are specific to services computing, such as the timing of Web service invocations and the Web service response caching logic. Further, Web service invocations are the only variable energy-consuming activity included in the experiments. Based on the results, it is shown, firstly, how the execution of exactly the same Web service invocations may lead to energy consumption results that differ by up to ca. 15% for WLAN and ca. 60% for UMTS connections, and, secondly, how rules and techniques for energy-efficient development of mobile Web service-based applications can be extracted from the gained knowledge. | ['Apostolos Papageorgiou', 'Ulrich Lampe', 'Dieter Schuller', 'Ralf Steinmetz', 'Athanasios Bamis'] | Invoking Web Services Based on Energy Consumption Models | 183,717
High efficiency video coding (HEVC), the most recent video compression standard, offers about double the compression ratio of its immediate predecessor H.264/AVC at the same level of video quality, or substantially higher video quality at the same bit-rate. Careful refinement of existing tools, as well as the introduction of a variety of parallel processing tools, helps HEVC attain this. In HEVC, quantization is one of the key processes that decide the coding efficiency and the quality of the reconstructed video. Adaptive quantizers based on human visual system (HVS) models can be incorporated into the HEVC anchor model to improve its performance. However, the major limitation of such schemes is the complexity involved. This paper presents an encoder which uses a mathematical model of the contrast sensitivity function (CSF) to remove visually insignificant information before quantization without much impact on the visual quality of the video. The proposed method provides an average bit-rate reduction of 2.75% for the intra main configuration at a quantization parameter (QP) value of 22. | ['M Sini Simon', 'Abhilash Antony', 'G. Sreelekha'] | Performance improvement in HEVC using contrast sensitivity function | 928,619
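To illustrate how a CSF model can be used to down-weight visually insignificant frequencies before quantization, the sketch below uses the well-known Mannos-Sakrison analytic CSF. The frequency mapping, block size and the choice of CSF model are assumptions for illustration; the paper's exact model and its integration into the HEVC quantizer may differ.

```python
import numpy as np

def csf_mannos_sakrison(f_cpd):
    """Contrast sensitivity at spatial frequency f (cycles/degree).
    A widely used analytic CSF model; the paper's exact model may differ."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

# Sketch: weight a block of transform coefficients so that frequencies with low
# visual sensitivity are attenuated before the quantizer sees them.
block = 8
fx, fy = np.meshgrid(np.arange(block), np.arange(block))
cycles_per_degree = 0.5 * np.hypot(fx, fy) + 1e-6   # illustrative frequency mapping
weights = csf_mannos_sakrison(cycles_per_degree)
weights /= weights.max()                            # normalise to [0, 1]

coeffs = np.random.randn(block, block)              # stand-in transform coefficients
perceptually_weighted = coeffs * weights            # applied before quantization
```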
Direct extensions of distributed greedy interference avoidance (IA) techniques developed for centralized networks to networks with multiple distributed receivers (as in ad hoc networks) are not guaranteed to converge. Motivated by this fact, we develop a waveform adaptation (WA) algorithm framework for IA based on potential game theory. The potential game model ensures the convergence of the designed algorithms in distributed networks and leads to desirable network solutions. Properties of the game model are then exploited to design distributed implementations of the algorithm that involve limited feedback in the network. Finally, variations of IA algorithms including IA with respect to legacy systems and IA with combined transmit-power and WA adaptations are investigated. | ['Rekha Menon', 'Allen B. MacKenzie', 'R.M. Buehrer', 'Jeffrey H. Reed'] | Interference avoidance in networks with distributed receivers | 522,036 |
A new algorithm is suggested for prediction of a Lagrangian particle position in a stochastic flow, given observations of other particles. The algorithm is based on linearization of the motion equations and appears to be efficient for an initial tight cluster and small prediction time. A theoretical error analysis is given for the Brownian flow and a stochastic flow with memory. The asymptotic formulas are compared with simulation results to establish their applicability limits. Monte Carlo simulations are carried out to compare the new algorithm with two others: the center-of-mass prediction and a Kalman filter-type method. The algorithm is also tested on real data in the tropical Pacific. | ['Leonid I. Piterbarg', 'Tamay M. Özgökmen'] | A SIMPLE PREDICTION ALGORITHM FOR THE LAGRANGIAN MOTION IN TWO-DIMENSIONAL TURBULENT FLOWS ∗ | 259,467
One benefit of coarse-grained dynamically reconfigurable processor arrays (DRPAs) is their low dynamic power consumption, achieved by operating a number of processing elements (PEs) in parallel with a low-frequency clock. However, in future advanced processes, leakage power will occupy a considerable part of the total power consumption, and it may degrade the advantage of DRPAs. In order to reduce the leakage power of a DRPA without severe performance degradation, eight designs (Mult, Sw, MultSw, LowHalf, 1Row, ColHalf, Sw+Half and Sw+Mult) using Dual-Vt cells are evaluated based on a prototype DRPA called MuCCRA-3T. Evaluation results show that Sw, in which Low-Vt cells are used only in the switching elements of the array, achieved the best power-delay product. If the performance of Sw is not sufficient, Sw+Half, in which Low-Vt cells are used for the lower half of the PEs and all switching elements, reduces leakage power by 24% at the cost of 5%–14% extra delay compared with the design using all Low-Vt cells. | ['Keiichiro Hirai', 'Masaru Kato', 'Yoshiki Saito', 'Hideharu Amano'] | Leakage power reduction for coarse-grained dynamically reconfigurable processor arrays using Dual Vt cells | 371,304
Nowadays, business process management is an important approach for managing organizations from an operational perspective. As a consequence, it is common to see organizations develop collections of hundreds or even thousands of business process models. Such large collections of process models bring new challenges and provide new opportunities, as the knowledge that they encapsulate requires to be properly managed. Therefore, a variety of techniques for managing large collections of business process models is being developed. The goal of this paper is to provide an overview of the management techniques that currently exist, as well as the open research challenges that they pose. | ['Remco M. Dijkman', 'Marcello La Rosa', 'Hajo A. Reijers'] | Managing large collections of business process models - Current techniques and challenges | 596,601 |
Grid computing systems that have been the focus of much research in recent years provide a virtual framework for controlled sharing of resources across institutional boundaries. Security is a major concern in any system that enables remote execution. Several techniques can be used for providing security in grid systems including sandboxing, encryption, and other access control and authentication mechanisms. The additional overhead caused by these mechanisms may negate the performance advantages gained by grid computing. Hence, we contend that it is essential for the scheduler to consider the security implications while performing resource allocations. In this paper, we present a trust model for grid systems and show how the model can be used to incorporate security implications into scheduling algorithms. Three scheduling heuristics that can be used in a grid system are modified to incorporate the trust notion and simulations are performed to evaluate the performance. | ['Farag Azzedin', 'Muthucumaru Maheswaran'] | Integrating trust into grid resource management systems | 24,619 |
The natural join and the inner union combine tables of a relational database in different ways. Tropashko [18] observed that these two operations are the meet and join in a class of lattices—called the relational lattices—and proposed lattice theory as an alternative algebraic approach to databases. Aiming at query optimization, Litak et al. [12] initiated the study of the equational theory of these lattices. We carry on with this project, making use of the duality theory developed in [16]. The contributions of this paper are as follows. Let A be a set of column names and D be a set of cell values; we characterize the dual space of the relational lattice $\mathsf{R}(D,A)$ by means of a generalized ultrametric space, whose elements are the functions from A to D, with the P(A)-valued distance being the Hamming one but lifted to subsets of A. We use the dual space to present an equational axiomatization of these lattices that reflects the combinatorial properties of these generalized ultrametric spaces: symmetry and pairwise completeness. Finally, we argue that these equations correspond to combinatorial properties of the dual spaces of lattices, in a technical sense analogous to correspondence theory in modal logic. In particular, this leads to an exact characterization of the finite lattices satisfying these equations. | ['Luigi Santocanale'] | Relational lattices via duality | 634,926
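A minimal rendering of the generalized ultrametric described in the abstract: on the set of functions from attribute names A to cell values D, the P(A)-valued distance records the set of attributes on which two tuples disagree. This is only a sketch of the object named in the abstract, not the paper's full development.

```latex
% P(A)-valued "Hamming" distance on tuples, lifted to subsets of A.
\[
  \delta(f,g) \;=\; \{\, a \in A \mid f(a) \neq g(a) \,\} \;\subseteq\; A,
  \qquad f,g \colon A \to D .
\]
```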
This paper concerns the open problem of Lovasz and Saks (1988) regarding the relationship between the communication complexity of a Boolean function and the rank of the associated matrix. We first give an example exhibiting the largest gap known. We then prove two related theorems. | ['Noam Nisan', 'Avi Wigderson'] | On rank vs. communication complexity | 226,243
A campus butterfly garden is a useful teaching resource for studying insect ecology because students can learn about a butterfly's life cycle and become familiar with its habitual behavior by breeding and observation activities. However, it requires professional construction and maintenance for sustainable development, so very few schools can afford to own a butterfly garden. In this study, the augmented reality and mobile learning technologies have been used to develop a virtual butterfly ecological system by combining with campus host plants and virtual breeding activities. Students can use smart phones or tablet PCs to breed virtual butterflies on host plants and observe their life cycles at different growing stages. Using the available space in campus, a virtual butterfly garden can also be created as a greenhouse where students are able to observe different species of butterflies using the tracking telescope and catch a butterfly to obtain its information by touch-screen control. The virtual butterfly ecological system can increase the learning motivation and interest of students through virtual breeding and observation activities, so it is a suitable assistant tool for science education. A teaching experiment has been conducted to investigate students' learning effectiveness and attitudes after using the system, and the results show that using the virtual butterfly ecological system can improve their learning effectively. | ['Wernhuar Tarng', 'Kuo-Liang Ou', 'Chuan-Sheng Yu', 'Fong-Lu Liou', 'Hsin-Hun Liou'] | Development of a virtual butterfly ecological system based on augmented reality and mobile learning technologies | 130,678 |
It is well-known that TCP performs poorly in a wireless environment. This paper presents an empirical performance analysis of TCP on cellular digital packet data (CDPD) and Bluetooth. This analysis brings out the weaknesses of TCP in realistic conditions. We also present CentaurusComm, a message-based transport protocol designed to perform well in low-bandwidth networks and on resource-poor devices. In particular, CentaurusComm is optimized to handle data exchanges consisting of short messages. The application used to perform all the experiments is typical of common applications that would use these protocols and network technologies. Typical mobile devices used in the experiments included Palm Pilots. We show that TCP performance on CDPD is very poor because of its low bandwidth and high latency. CentaurusComm outperforms TCP on CDPD. We show that on Bluetooth, which has higher bandwidth and lower latency than CDPD, both protocols perform comparably. | ['Sasikanth Avancha', 'Vladimir Korolev', 'Anupam Joshi', 'Tim Finin'] | Transport protocols in wireless networks | 284,026
A novel random access protocol combining a tree algorithm (TA) with successive interference cancellation (SIC) has been introduced recently. By migrating physical layer benefits to the medium access control (MAC) through a cross-layer approach, SICTA can afford stable throughput as high as 0.693. However, SICTA may lead to deadlocks caused by channel fading and error propagation in error-prone wireless networks. To mitigate such effects, we put forth a truncated version of SICTA that we term SICTA/FS (SICTA with first success). We establish using analysis and simulations that, while providing high throughput, SICTA/FS is robust to errors, easy to implement, and can be readily incorporated into existing standards. | ['Xin Wang', 'Yingqun Yu', 'Georgios B. Giannakis'] | A robust high-throughput tree algorithm using successive interference cancellation | 104,962
Human Computation Games (HCGs) harness the element of fun from games, and information is generated as a byproduct of game play. A number of location-based mobile HCGs have emerged recently. Understanding actual usage and usability issues is crucial in identifying the challenges users face while using such applications. We introduce SPLASH (Seek, Play, Share), a mobile HCG that blends game play with location-based information sharing. This paper also highlights the actual usage and users' perspective of SPLASH by 40 participants who took part in an evaluation of the application. Participants kept a six-day diary and completed a post-study questionnaire. Results suggested the participants were encouraged to contribute information by the games in SPLASH. The implications of this study are discussed. | ['Dion Hoe-Lian Goh', 'Khasfariyati Razikin', 'Alton Yeow-Kuan Chua', 'Chei Sian Lee', 'K. T. Tan'] | Understanding Location-Based Information Sharing in a Mobile Human Computation Game | 338,306
The large amount of short read data that has to be assembled in future applications, such as in metagenomics or cancer genomics, strongly motivates the investigation of disk-based approaches to index next-generation sequencing (NGS) data. Positive results in this direction stimulate the investigation of efficient external memory algorithms for de novo assembly from NGS data. Our article is also motivated by the open problem of designing a space-efficient algorithm to compute a string graph using an indexing procedure based on the Burrows–Wheeler transform (BWT). We have developed a disk-based algorithm for computing string graphs in external memory: the light string graph (LSG). LSG relies on a new representation of the FM-index that is exploited to use an amount of main memory requirement that is independent from the size of the data set. Moreover, we have developed a pipeline for genome assembly from NGS data that integrates LSG with the assembly step of SGA (Simpson and Durbin, 2012), a state-... | ['Paola Bonizzoni', 'Gianluca Della Vedova', 'Yuri Pirola', 'Marco Previtali', 'Raffaella Rizzi'] | LSG: An External-Memory Tool to Compute String Graphs for Next-Generation Sequencing Data Assembly | 691,237
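Since the string-graph pipeline is built on the Burrows-Wheeler transform, a tiny in-memory illustration of the transform itself may help orient readers. Note that LSG's whole point is to avoid this naive construction by working in external memory over an FM-index; the sketch below only shows what the transform computes.

```python
def bwt(text: str, sentinel: str = "$") -> str:
    """Naive, in-memory Burrows-Wheeler transform (illustration only: LSG and
    SGA build the transform for huge read sets with indexed, external-memory
    algorithms rather than by sorting all rotations)."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("ACACGT"))   # last column of the sorted rotation matrix
```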
['Sergio García-Vega', 'Andrés Marino Álvarez-Meza', 'César Germán Castellanos-Domínguez'] | MoCap Data Segmentation and Classification Using Kernel Based Multi-channel Analysis | 802,663 |
|
This paper presents an evolutionary approach able to process a digital image and detect tracks left by preceding vehicles on ice and snow in Antarctica. Biologically inspired by a colony of ants able to interact and cooperate to determine the shortest path to the food, this approach is based on autonomous agents moving along the image pixels and iteratively improving an initial coarse solution. The unfriendly Antarctic environment makes this image analysis problem extremely challenging, since light reflections, abruptly varying brightness conditions, and different terrain slopes must be considered as well. The ant-based approach is compared to a more traditional Hough-based solution and the results are discussed. | ['Alberto Broggi', 'Massimo Cellario', 'Paolo Lombardi', 'Marco Porta'] | An evolutionary approach to visual sensing for vehicle navigation | 50,439 |
This paper presents a prototype of a sensor device including heart rate, EDA and accelerometer sensors to investigate learners' internal state. Through experiments, sensor data were collected, visualized and correlated with information on learners' emotional state derived from a self-report questionnaire. The results can be used to improve signal processing and help find appropriate indicators from physiological data for learning environment design. | ['Vladimir Brovkov', 'Albrecht Fortenbacher', 'Haeseon Yun', 'Daniel Junker'] | Prototype of a sensor device for learning environments | 970,545
This paper discusses the integration of ontologies with service choreographies in view of recommending interest points to the modeler for model improvement. The concept is based on an ontology of recommendations (evaluated by metrics) attached to the elements of the model. The ontology and an associated knowledge base are used in order to extract correct recommendations (specified as textual annotations attached to the model) and present them to the modeler. Recommendations may result in model improvements. The recommendations rely on similarity measures between the captured modeler design intention and the knowledge stored in the ontology and knowledge bases. | ['Mario Cortes-Cornax', 'Ioana Ciuciu', 'Sophie Dupuy-Chessa', 'Dominique Rieu', 'Agnès Front'] | Towards the Integration of Ontologies with Service Choreographies | 643,560 |
We propose a multi-pass linear fold algorithm for sentence boundary detection in spontaneous speech. It uses only prosodic cues and does not rely on segmentation information from a speech recognition decoder. We focus on features based on pitch breaks and pitch durations, study their local and global structural properties and find their relationship with sentence boundaries. In the first step, the algorithm, which requires no training, automatically finds a set of candidate pitch breaks by simple curve fitting. In the next step, by exploiting statistical properties of sentence boundaries and disfluency, the algorithm finds the sentence boundaries within these candidate pitch breaks. With this simple method without any explicit segmentation information from an ASR, a 25% error rate was achieved on a randomly selected portion of the switchboard corpus. The result from this method is comparable with those that include word segmentation information and can be used in conjunction to improve the overall performance and confidence. | ['Dagen Wang', 'Shrikanth Narayanan'] | A multi-pass linear fold algorithm for sentence boundary detection using prosodic cues | 483,477 |
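One plausible reading of the "simple curve fitting" step for candidate pitch breaks is to flag frames where a local linear fit to the pitch contour has a large residual. The sketch below implements that idea on a synthetic contour; the window length, threshold and the actual multi-pass linear fold procedure of the paper are not reproduced here.

```python
import numpy as np

def candidate_breaks(pitch, win=10, resid_thresh=20.0):
    """Flag frames where a local linear fit to the pitch contour fits poorly,
    a plausible stand-in for a 'simple curve fitting' break detector."""
    breaks = []
    for start in range(0, len(pitch) - win, max(1, win // 2)):
        seg = pitch[start:start + win]
        t = np.arange(win)
        slope, intercept = np.polyfit(t, seg, deg=1)   # local linear fit
        resid = np.abs(seg - (slope * t + intercept))
        if resid.max() > resid_thresh:
            breaks.append(start + int(resid.argmax()))
    return breaks

# Synthetic contour with an abrupt pitch reset around frame 50.
pitch_hz = np.concatenate([np.linspace(180, 140, 50), np.linspace(220, 170, 50)])
print(candidate_breaks(pitch_hz))   # flags a frame near the jump at index 50
```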
['Subramanian Ramachandran', 'Frank Mueller'] | Distributed Job Allocation for Large-Scale Manycores. | 982,149 |
|
The demands of spectrum sensing can be met by a reliable modulation classification (MC) scheme. This paper proposes a novel feature-clustering algorithm based on the joint distribution of time and frequency. It uses the Pseudo Wigner-Ville Distribution (PWVD) as the feature extraction approach. Density-Based Spatial Clustering of Applications with Noise (DBSCAN), in turn, is utilized as the classifier for single-carrier modulation classification. Unlike the ALRT approach, a conventional likelihood-based method, the proposed one is free from carrier phase offset. In addition, it exhibits no variance in its features, which is a substantial advantage over the cumulant-based approach; the latter suffers from feature variance, which badly degrades its performance in complex schemes. Moreover, training, an extremely time-consuming step for the Support Vector Machine (SVM), is not necessary for DBSCAN, which enables faster processing. Simulation results indicate an overwhelming performance advantage over the cumulant-based classifier. Moreover, carrier phase offset does not influence its performance at all. | ['Xu Zhu', 'Takeo Fujii'] | A novel modulation classification method in cognitive radios based on features clustering of time-frequency | 701,063
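A minimal sketch of the classification stage: density-based clustering with DBSCAN on stand-in time-frequency feature vectors, requiring neither a training phase nor a preset number of clusters. The feature values and DBSCAN parameters are illustrative assumptions; real features would be derived from the PWVD of the received signal.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative stand-in for PWVD-derived time-frequency features: each row is a
# 2-D feature vector extracted from one received burst (synthetic values).
rng = np.random.default_rng(1)
class_a = rng.normal(loc=[0.0, 0.0], scale=0.05, size=(50, 2))
class_b = rng.normal(loc=[1.0, 0.5], scale=0.05, size=(50, 2))
features = np.vstack([class_a, class_b])

# Density-based clustering: eps / min_samples control neighbourhood density
# (values are illustrative, not tuned to any real feature space).
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(features)
print("clusters found:", sorted(set(labels) - {-1}), "noise points:", int(np.sum(labels == -1)))
```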
['Seyed Hossein Haeri', 'Sibylle Schupp'] | Reusable Components for Lightweight Mechanisation of Programming Languages | 576,283 |
|
One of the goals of global metabolomic analysis is to identify metabolic markers that are hidden within a large background of data originating from high-throughput analytical measurements. Metabolite-based clustering is an unsupervised approach for marker identification based on grouping similar concentration profiles of putative metabolites. A major problem of this approach is that, in general, there is no prior information about an adequate number of clusters. | ['Peter Meinicke', 'Thomas Lingner', 'Alexander Kaever', 'Kirstin Feussner', 'Cornelia Göbel', 'Ivo Feussner', 'Petr Karlovsky', 'Burkhard Morgenstern'] | Metabolite-based clustering and visualization of mass spectrometry data using one-dimensional self-organizing maps. | 171,357
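As an illustration of the clustering approach named in the title, here is a minimal one-dimensional self-organizing map: prototypes are arranged on a line so that similar concentration profiles map to neighbouring units. This is a generic sketch with invented data and parameters, not the authors' implementation.

```python
import numpy as np

def train_1d_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 1-D self-organizing map trained with the classic online rule."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.normal(size=(n_units, dim))
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                      # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)      # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best matching unit
            grid_dist = np.abs(np.arange(n_units) - bmu)            # distance along the 1-D grid
            h = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))      # neighbourhood function
            weights += lr * h[:, None] * (x - weights)
    return weights

# Illustrative metabolite intensity profiles (rows: putative metabolites, columns: conditions).
profiles = np.random.default_rng(1).normal(size=(60, 5))
som = train_1d_som(profiles)
assignment = np.argmin(np.linalg.norm(profiles[:, None, :] - som[None, :, :], axis=2), axis=1)
```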
This paper proposes a complete method of diagnostic test generation for transition faults. The method creates a diagnostic test generation model for a pair of transition faults to be distinguished from a given full-scan sequential circuit and employs an ordinary transition fault ATPG tool. The proposed model supports the launch-off-capture and launch-off-shift modes that are supported by the ATPG tool. Diagnostic test patterns generated by the proposed method are of the same form as the scan test patterns of the given circuit, i.e., no pattern conversion is necessary. Experimental results on benchmark circuits show that a commercial transition fault ATPG tool can be utilized in our proposed method and that, for a given undistinguished pair, the proposed method can generate a test pattern distinguishing them or prove that they are indistinguishable. | ['Renji Ono', 'Satoshi Ohtake'] | A Method of Diagnostic Test Generation for Transition Faults | 607,361
['Jelena Fiosina'] | Decentralised Regression Model for Intelligent Forecasting in Multi-agent Traffic Networks | 636,883 |