The full dataset viewer is not available. Only a preview of the rows is shown below.
The dataset generation failed.

Error code: DatasetGenerationError
Exception: ArrowInvalid
Message: Failed to parse string: 'quant-ph/9912100' as a scalar of type double

Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
    arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
    arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2116, in cast_array_to_feature
    return array_cast(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
    return func(array, *args, **kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1963, in array_cast
    return array.cast(pa_type)
  File "pyarrow/array.pxi", line 996, in pyarrow.lib.Array.cast
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/compute.py", line 404, in cast
    return call_function("cast", [arr], options, memory_pool)
  File "pyarrow/_compute.pyx", line 590, in pyarrow._compute.call_function
  File "pyarrow/_compute.pyx", line 385, in pyarrow._compute.Function.call
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Failed to parse string: 'quant-ph/9912100' as a scalar of type double

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
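The root cause is visible in the schema below: the `id` column is typed `float64`, but it holds arXiv identifiers, and old-style identifiers such as `quant-ph/9912100` cannot be parsed as a double (the float typing also mangles new-style identifiers by stripping zeros, e.g. `704.005` in the preview). As a sketch of one possible workaround, the snippet below loads the data with the `datasets` library while declaring every column as a string; the repository id `owner/arxiv-abstracts` is a hypothetical placeholder for the actual dataset repo, and the column list is taken from the preview header.

```python
from datasets import Features, Value, load_dataset

# Declare an explicit schema so that `id` is read as a string rather than
# being cast to float64 -- the cast that raised ArrowInvalid above.
features = Features({
    "id": Value("string"),
    "submitter": Value("string"),
    "title": Value("string"),
    "categories": Value("string"),
    "abstract": Value("string"),
    "labels": Value("string"),
    "domain": Value("string"),
})

# "owner/arxiv-abstracts" is a hypothetical repository id; substitute the
# actual dataset repository this preview belongs to.
ds = load_dataset("owner/arxiv-abstracts", features=features)
```

Declaring `id` as a string avoids the failed cast and preserves identifiers exactly as written; fixing the declared dtype in the dataset's own configuration (its README YAML or loading script) would likewise let the viewer regenerate the Parquet files.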
id (float64) | submitter (string) | title (string) | categories (string) | abstract (string) | labels (string) | domain (string)
---|---|---|---|---|---|---
704.0002 | Louis Theran | Sparsity-certifying Graph Decompositions | math.CO cs.CG | We describe a new algorithm, the $(k,\ell)$-pebble game with colors, and use
it to obtain a characterization of the family of $(k,\ell)$-sparse graphs and
algorithmic solutions to a family of problems concerning tree decompositions of
graphs. Special instances of sparse graphs appear in rigidity theory and have
received increased attention in recent years. In particular, our colored
pebbles generalize and strengthen the previous results of Lee and Streinu and
give a new proof of the Tutte-Nash-Williams characterization of arboricity. We
also present a new decomposition that certifies sparsity based on the
$(k,\ell)$-pebble game with colors. Our work also exposes connections between
pebble game algorithms and previous sparse graph algorithms by Gabow, Gabow and
Westermann and Hendrickson.
| Combinatorics, Computational Geometry | Mathematics |
704.0046 | Denes Petz | A limit relation for entropy and channel capacity per unit cost | quant-ph cs.IT math.IT | In a quantum mechanical model, Diosi, Feldmann and Kosloff arrived at a
conjecture stating that the limit of the entropy of certain mixtures is the
relative entropy as system size goes to infinity. The conjecture is proven in
this paper for density matrices. The first proof is analytic and uses the
quantum law of large numbers. The second one clarifies the relation to channel
capacity per unit cost for classical-quantum channels. Both proofs lead to a
generalization of the conjecture.
| Quantum Physics, Information Theory, Information Theory | Physics |
704.0047 | Igor Grabec | Intelligent location of simultaneously active acoustic emission sources:
Part I | cs.NE cs.AI | The intelligent acoustic emission locator is described in Part I, while Part
II discusses blind source separation, time delay estimation and location of two
simultaneously active continuous acoustic emission sources.
The location of acoustic emission on complicated aircraft frame structures is
a difficult problem of non-destructive testing. This article describes an
intelligent acoustic emission source locator. The intelligent locator comprises
a sensor antenna and a general regression neural network, which solves the
location problem based on learning from examples. Locator performance was
tested on different test specimens. Tests have shown that the accuracy of
location depends on sound velocity and attenuation in the specimen, the
dimensions of the tested area, and the properties of stored data. The location
accuracy achieved by the intelligent locator is comparable to that obtained by
the conventional triangulation method, while the applicability of the
intelligent locator is more general since analysis of sonic ray paths is
avoided. This is a promising method for non-destructive testing of aircraft
frame structures by the acoustic emission method.
| Neural and Evolutionary Computing, Artificial Intelligence | Computer Science |
704.005 | Igor Grabec | Intelligent location of simultaneously active acoustic emission sources:
Part II | cs.NE cs.AI | Part I describes an intelligent acoustic emission locator, while Part II
discusses blind source separation, time delay estimation and location of two
continuous acoustic emission sources.
Acoustic emission (AE) analysis is used for characterization and location of
developing defects in materials. AE sources often generate a mixture of various
statistically independent signals. A difficult problem of AE analysis is
separation and characterization of signal components when the signals from
various sources and the mode of mixing are unknown. Recently, blind source
separation (BSS) by independent component analysis (ICA) has been used to solve
these problems. The purpose of this paper is to demonstrate the applicability
of ICA to locate two independent simultaneously active acoustic emission
sources on an aluminum band specimen. The method is promising for
non-destructive testing of aircraft frame structures by acoustic emission
analysis.
| Neural and Evolutionary Computing, Artificial Intelligence | Computer Science |
704.0062 | Tom\'a\v{s} Vina\v{r} | On-line Viterbi Algorithm and Its Relationship to Random Walks | cs.DS | In this paper, we introduce the on-line Viterbi algorithm for decoding hidden
Markov models (HMMs) in much smaller than linear space. Our analysis on
two-state HMMs suggests that the expected maximum memory used to decode a
sequence of length $n$ with an $m$-state HMM can be as low as $\Theta(m\log n)$,
without a significant slow-down compared to the classical Viterbi algorithm.
The classical Viterbi algorithm requires $O(mn)$ space, which is impractical for
analysis of long DNA sequences (such as complete human genome chromosomes) and
for continuous data streams. We also experimentally demonstrate the performance
of the on-line Viterbi algorithm on a simple HMM for gene finding on both
simulated and real DNA sequences.
| Data Structures and Algorithms | Computer Science |
704.009 | Lester Ingber | Real Options for Project Schedules (ROPS) | cs.CE cond-mat.stat-mech cs.MS cs.NA physics.data-an | Real Options for Project Schedules (ROPS) has three recursive
sampling/optimization shells. An outer Adaptive Simulated Annealing (ASA)
optimization shell optimizes parameters of strategic Plans containing multiple
Projects containing ordered Tasks. A middle shell samples probability
distributions of durations of Tasks. An inner shell samples probability
distributions of costs of Tasks. PATHTREE is used to develop options on
schedules. Algorithms used for Trading in Risk Dimensions (TRD) are applied to
develop a relative risk analysis among projects.
| Computational Engineering, Finance, and Science, Statistical Mechanics, Mathematical Software, Numerical Analysis, Data Analysis, Statistics and Probability | Computer Science |
704.0098 | Jack Raymond | Sparsely-spread CDMA - a statistical mechanics based analysis | cs.IT math.IT | Sparse Code Division Multiple Access (CDMA), a variation on the standard CDMA
method in which the spreading (signature) matrix contains only a relatively
small number of non-zero elements, is presented and analysed using methods of
statistical physics. The analysis provides results on the performance of
maximum likelihood decoding for sparse spreading codes in the large system
limit. We present results for both cases of regular and irregular spreading
matrices for the binary additive white Gaussian noise channel (BIAWGN) with a
comparison to the canonical (dense) random spreading code.
| Information Theory, Information Theory | Computer Science |
704.0108 | Sergey Gubin | Reducing SAT to 2-SAT | cs.CC | Description of a polynomial time reduction of SAT to 2-SAT of polynomial
size.
| Computational Complexity | Computer Science |
704.0213 | Ketan Mulmuley D | Geometric Complexity Theory V: On deciding nonvanishing of a generalized
Littlewood-Richardson coefficient | cs.CC | This article has been withdrawn because it has been merged with the earlier
article GCT3 (arXiv:cs/0501076 [cs.CC]) in the series. The merged article is
now available as:
Geometric Complexity Theory III: on deciding nonvanishing of a
Littlewood-Richardson Coefficient, Journal of Algebraic Combinatorics, vol. 36,
issue 1, 2012, pp. 103-110. (Authors: Ketan Mulmuley, Hari Narayanan and Milind
Sohoni)
The new article in this GCT5 slot in the series is:
Geometric Complexity Theory V: Equivalence between blackbox derandomization
of polynomial identity testing and derandomization of Noether's Normalization
Lemma, in the Proceedings of FOCS 2012 (abstract), arXiv:1209.5993 [cs.CC]
(full version) (Author: Ketan Mulmuley)
| Computational Complexity | Computer Science |
704.0217 | Wiroonsak Santipach | Capacity of a Multiple-Antenna Fading Channel with a Quantized Precoding
Matrix | cs.IT math.IT | Given a multiple-input multiple-output (MIMO) channel, feedback from the
receiver can be used to specify a transmit precoding matrix, which selectively
activates the strongest channel modes. Here we analyze the performance of
Random Vector Quantization (RVQ), in which the precoding matrix is selected
from a random codebook containing independent, isotropically distributed
entries. We assume that channel elements are i.i.d. and known to the receiver,
which relays the optimal (rate-maximizing) precoder codebook index to the
transmitter using B bits. We first derive the large system capacity of
beamforming (rank-one precoding matrix) as a function of B, where large system
refers to the limit as B and the number of transmit and receive antennas all go
to infinity with fixed ratios. With beamforming, RVQ is asymptotically optimal,
i.e., no other quantization scheme can achieve a larger asymptotic rate. The
performance of RVQ is also compared with that of a simpler reduced-rank scalar
quantization scheme in which the beamformer is constrained to lie in a random
subspace. We subsequently consider a precoding matrix with arbitrary rank, and
approximate the asymptotic RVQ performance with optimal and linear receivers
(matched filter and Minimum Mean Squared Error (MMSE)). Numerical examples show
that these approximations accurately predict the performance of finite-size
systems of interest. Given a target spectral efficiency, numerical examples
show that the amount of feedback required by the linear MMSE receiver is only
slightly more than that required by the optimal receiver, whereas the matched
filter can require significantly more feedback.
| Information Theory, Information Theory | Computer Science |
704.0218 | Yuri Pritykin | On Almost Periodicity Criteria for Morphic Sequences in Some Particular
Cases | cs.DM cs.LO | In some particular cases we give criteria for morphic sequences to be almost
periodic (=uniformly recurrent). Namely, we deal with fixed points of
non-erasing morphisms and with automatic sequences. In both cases a
polynomial-time algorithm solving the problem is found. A result more or less
supporting the conjecture of decidability of the general problem is given.
| Discrete Mathematics, Logic in Computer Science | Computer Science |
704.0229 | Ketan Mulmuley D | Geometric Complexity Theory VI: the flip via saturated and positive
integer programming in representation theory and algebraic geometry | cs.CC | This article belongs to a series on geometric complexity theory (GCT), an
approach to the P vs. NP and related problems through algebraic geometry and
representation theory. The basic principle behind this approach is called the
flip. In essence, it reduces the negative hypothesis in complexity theory (the
lower bound problems), such as the P vs. NP problem in characteristic zero, to
the positive hypothesis in complexity theory (the upper bound problems):
specifically, to showing that the problems of deciding nonvanishing of the
fundamental structural constants in representation theory and algebraic
geometry, such as the well known plethysm constants--or rather certain relaxed
forms of these decision problems--belong to the complexity class P. In this
article, we suggest a plan for implementing the flip, i.e., for showing that
these relaxed decision problems belong to P. This is based on the reduction of
the preceding complexity-theoretic positive hypotheses to mathematical
positivity hypotheses: specifically, to showing that there exist positive
formulae--i.e. formulae with nonnegative coefficients--for the structural
constants under consideration and certain functions associated with them. These
turn out to be intimately related to the similar positivity properties of the
Kazhdan-Lusztig polynomials and the multiplicative structural constants of the
canonical (global crystal) bases in the theory of Drinfeld-Jimbo quantum
groups. The known proofs of these positivity properties depend on the Riemann
hypothesis over finite fields and the related results. Thus the reduction here,
in conjunction with the flip, in essence, says that the validity of the P vs.
NP conjecture in characteristic zero is intimately linked to the Riemann
hypothesis over finite fields and related problems.
| Computational Complexity | Computer Science |
704.0282 | Samuele Bandi | On Punctured Pragmatic Space-Time Codes in Block Fading Channel | cs.IT cs.CC math.IT | This paper considers the use of punctured convolutional codes to obtain
pragmatic space-time trellis codes over block-fading channel. We show that good
performance can be achieved even when puncturing is adopted and that we can
still employ the same Viterbi decoder of the convolutional mother code by using
approximated metrics without increasing the complexity of the decoding
operations.
| Information Theory, Computational Complexity, Information Theory | Computer Science |
704.0301 | Akitoshi Kawamura | Differential Recursion and Differentially Algebraic Functions | cs.CC | Moore introduced a class of real-valued "recursive" functions by analogy with
Kleene's formulation of the standard recursive functions. While his concise
definition inspired a new line of research on analog computation, it contains
some technical inaccuracies. Focusing on his "primitive recursive" functions,
we pin down what is problematic and discuss possible attempts to remove the
ambiguity regarding the behavior of the differential recursion operator on
partial functions. It turns out that in any case the purported relation to
differentially algebraic functions, and hence to Shannon's model of analog
computation, fails.
| Computational Complexity | Computer Science |
704.0304 | Carlos Gershenson | The World as Evolving Information | cs.IT cs.AI math.IT q-bio.PE | This paper discusses the benefits of describing the world as information,
especially in the study of the evolution of life and cognition. Traditional
studies encounter problems because it is difficult to describe life and
cognition in terms of matter and energy, since their laws are valid only at the
physical scale. However, if matter and energy, as well as life and cognition,
are described in terms of information, evolution can be described consistently
as information becoming more complex.
The paper presents eight tentative laws of information, valid at multiple
scales, which are generalizations of Darwinian, cybernetic, thermodynamic,
psychological, philosophical, and complexity principles. These are further used
to discuss the notions of life, cognition and their evolution.
| Information Theory, Artificial Intelligence, Information Theory, Populations and Evolution | Computer Science |
704.0309 | Guohun Zhu | The Complexity of HCP in Digraphs with Degree Bound Two | cs.CC cs.DM | The Hamiltonian cycle problem (HCP) in digraphs D with degree bound two is
solved by two mappings in this paper. The first bijection is between an
incidence matrix C_{nm} of simple digraph and an incidence matrix F of balanced
bipartite undirected graph G; The second mapping is from a perfect matching of
G to a cycle of D. It proves that the complexity of HCP in D is polynomial, and
finding a second non-isomorphic Hamiltonian cycle from a given Hamiltonian
digraph with degree bound two is also polynomial. Lastly it deduces P=NP based
on the results.
| Computational Complexity, Discrete Mathematics | Computer Science |
704.0361 | Ioannis Chatzigeorgiou | Pseudo-random Puncturing: A Technique to Lower the Error Floor of Turbo
Codes | cs.IT math.IT | It has been observed that particular rate-1/2 partially systematic parallel
concatenated convolutional codes (PCCCs) can achieve a lower error floor than
that of their rate-1/3 parent codes. Nevertheless, good puncturing patterns can
only be identified by means of an exhaustive search, whilst convergence towards
low bit error probabilities can be problematic when the systematic output of a
rate-1/2 partially systematic PCCC is heavily punctured. In this paper, we
present and study a family of rate-1/2 partially systematic PCCCs, which we
call pseudo-randomly punctured codes. We evaluate their bit error rate
performance and we show that they always yield a lower error floor than that of
their rate-1/3 parent codes. Furthermore, we compare analytic results to
simulations and we demonstrate that their performance converges towards the
error floor region, owing to the moderate puncturing of their systematic
output. Consequently, we propose pseudo-random puncturing as a means of
improving the bandwidth efficiency of a PCCC and simultaneously lowering its
error floor.
| Information Theory, Information Theory | Computer Science |
704.0468 | Jinsong Tan | Inapproximability of Maximum Weighted Edge Biclique and Its Applications | cs.CC cs.DS | Given a bipartite graph $G = (V_1,V_2,E)$ where edges take on {\it both}
positive and negative weights from set $\mathcal{S}$, the {\it maximum weighted
edge biclique} problem, or $\mathcal{S}$-MWEB for short, asks to find a
bipartite subgraph whose sum of edge weights is maximized. This problem has
various applications in bioinformatics, machine learning and databases and its
(in)approximability remains open. In this paper, we show that for a wide range
of choices of $\mathcal{S}$, specifically when $| \frac{\min\mathcal{S}} {\max
\mathcal{S}} | \in \Omega(\eta^{\delta-1/2}) \cap O(\eta^{1/2-\delta})$ (where
$\eta = \max\{|V_1|, |V_2|\}$, and $\delta \in (0,1/2]$), no polynomial time
algorithm can approximate $\mathcal{S}$-MWEB within a factor of $n^{\epsilon}$
for some $\epsilon > 0$ unless $\mathsf{RP = NP}$. This hardness result gives
justification of the heuristic approaches adopted for various applied problems
in the aforementioned areas, and indicates that good approximation algorithms
are unlikely to exist. Specifically, we give two applications by showing that:
1) finding statistically significant biclusters in the SAMBA model, proposed in
\cite{Tan02} for the analysis of microarray data, is
$n^{\epsilon}$-inapproximable; and 2) no polynomial time algorithm exists for
the Minimum Description Length with Holes problem \cite{Bu05} unless
$\mathsf{RP=NP}$.
| Computational Complexity, Data Structures and Algorithms | Computer Science |
704.0492 | Shenghui Su | Refuting the Pseudo Attack on the REESSE1+ Cryptosystem | cs.CR | We illustrate through examples 1 and 2 that the condition in theorem 1 of [8]
is not necessary, and that the converse proposition of fact 1.1 in [8] does
not hold; namely, the condition Z/M - L/Ak < 1/(2 Ak^2) is not sufficient for
f(i) + f(j) = f(k). We show through an analysis and example 3 that there is a
logic error in the deduction of fact 1.2, which causes each of facts 1.2, 1.3,
and 4 to be invalid. We demonstrate through examples 4 and 5 that neither
qu+1 > qu * D at fact 4 nor table 1 at fact 2.2, nor their combination, is
sufficient for f(i) + f(j) = f(k), that properties 1, 2, 3, 4, and 5 are each
invalid, and that algorithm 1 (based on fact 4) and algorithm 2 (based on
table 1) are logically disordered and wrong. Further, we show through a
repeated experiment and example 5 that the data in table 2 are falsified and
that the example in [8] is elaborately woven. We explain why
Cx = Ax * W^f(x) (% M) is changed to Cx = (Ax * W^f(x))^d (% M) in REESSE1+
v2.1. Regarding the signature fraud, we point out that [8] misunderstands the
existence of T^-1 and Q^-1 % (M-1), and that forging of Q can easily be
avoided by moving H. Therefore, the conclusion of [8] that REESSE1+ is not
secure at all (which connotes that [8] can extract a related private key from
any public key in REESSE1+) is fully incorrect, and as long as the parameter
Omega is suitably selected, REESSE1+ with Cx = Ax * W^f(x) (% M) is secure.
| Cryptography and Security | Computer Science |
704.0499 | Lawrence Ong | Optimal Routing for Decode-and-Forward based Cooperation in Wireless
Networks | cs.IT math.IT | We investigate cooperative wireless relay networks in which the nodes can
help each other in data transmission. We study different coding strategies in
the single-source single-destination network with many relay nodes. Given the
myriad of ways in which nodes can cooperate, there is a natural routing
problem, i.e., determining an ordered set of nodes to relay the data from the
source to the destination. We find that for a given route, the
decode-and-forward strategy, which is an information theoretic cooperative
coding strategy, achieves rates significantly higher than that achievable by
the usual multi-hop coding strategy, which is a point-to-point non-cooperative
coding strategy. We construct an algorithm to find an optimal route (in terms
of rate maximization) for the decode-and-forward strategy. Since the algorithm
runs in factorial time in the worst case, we propose a heuristic algorithm that
runs in polynomial time. The heuristic algorithm outputs an optimal route when
the nodes transmit independent codewords. We implement these coding strategies
using practical low density parity check codes to compare the performance of
the strategies on different routes.
| Information Theory, Information Theory | Computer Science |
704.0528 | Soung Liew | Many-to-One Throughput Capacity of IEEE 802.11 Multi-hop Wireless
Networks | cs.NI cs.IT math.IT | This paper investigates the many-to-one throughput capacity (and by symmetry,
one-to-many throughput capacity) of IEEE 802.11 multi-hop networks. It has
generally been assumed in prior studies that the many-to-one throughput
capacity is upper-bounded by the link capacity L. However, throughput capacity
L is not achievable under 802.11. This paper introduces the notion of "canonical
networks", which is a class of regularly-structured networks whose capacities
can be analyzed more easily than unstructured networks. We show that the
throughput capacity of canonical networks under 802.11 has an analytical upper
bound of 3L/4 when the source nodes are two or more hops away from the sink;
and simulated throughputs of 0.690L (0.740L) when the source nodes are many
hops away. We conjecture that 3L/4 is also the upper bound for general
networks. When all links have equal length, 2L/3 can be shown to be the upper
bound for general networks. Our simulations show that 802.11 networks with
random topologies operated with AODV routing can only achieve throughputs far
below the upper bounds. Fortunately, by properly selecting routes near the
gateway (or by properly positioning the relay nodes leading to the gateway) to
fashion after the structure of canonical networks, the throughput can be
improved significantly by more than 150%. Indeed, in a dense network, it is
worthwhile to deactivate some of the relay nodes near the sink judiciously.
| Networking and Internet Architecture, Information Theory, Information Theory | Computer Science |
704.054 | Jinhua Jiang | On the Achievable Rate Regions for Interference Channels with Degraded
Message Sets | cs.IT math.IT | The interference channel with degraded message sets (IC-DMS) refers to a
communication model in which two senders attempt to communicate with their
respective receivers simultaneously through a common medium, and one of the
senders has complete and a priori (non-causal) knowledge about the message
being transmitted by the other. A coding scheme that collectively has
advantages of cooperative coding, collaborative coding, and dirty paper coding,
is developed for such a channel. By resorting to this coding scheme,
achievable rate regions of the IC-DMS in both discrete memoryless and Gaussian
cases are derived, which, in general, include several previously known rate
regions. Numerical examples for the Gaussian case demonstrate that in the
high-interference-gain regime, the derived achievable rate regions offer
considerable improvements over these existing results.
| Information Theory, Information Theory | Computer Science |
704.059 | Rachit Agarwal | A Low Complexity Algorithm and Architecture for Systematic Encoding of
Hermitian Codes | cs.IT math.IT | We present an algorithm for systematic encoding of Hermitian codes. For a
Hermitian code defined over GF(q^2), the proposed algorithm achieves a run time
complexity of O(q^2) and is suitable for VLSI implementation. The encoder
architecture uses as main blocks q varying-rate Reed-Solomon encoders and
achieves a space complexity of O(q^2) in terms of finite field multipliers and
memory elements.
| Information Theory, Information Theory | Computer Science |
704.0671 | Maxim Raginsky | Learning from compressed observations | cs.IT cs.LG math.IT | The problem of statistical learning is to construct a predictor of a random
variable $Y$ as a function of a related random variable $X$ on the basis of an
i.i.d. training sample from the joint distribution of $(X,Y)$. Allowable
predictors are drawn from some specified class, and the goal is to approach
asymptotically the performance (expected loss) of the best predictor in the
class. We consider the setting in which one has perfect observation of the
$X$-part of the sample, while the $Y$-part has to be communicated at some
finite bit rate. The encoding of the $Y$-values is allowed to depend on the
$X$-values. Under suitable regularity conditions on the admissible predictors,
the underlying family of probability distributions and the loss function, we
give an information-theoretic characterization of achievable predictor
performance in terms of conditional distortion-rate functions. The ideas are
illustrated on the example of nonparametric regression in Gaussian noise.
| Information Theory, Machine Learning, Information Theory | Computer Science |
704.073 | Hamed Haddadi MSc MIEE | Revisiting the Issues On Netflow Sample and Export Performance | cs.PF cs.NI | The high volume of packets and packet rates of traffic on some router links
makes it exceedingly difficult for routers to examine every packet in order to
keep detailed statistics about the traffic which is traversing the router.
Sampling is commonly applied on routers in order to limit the load incurred by
the collection of information that the router has to undertake when evaluating
flow information for monitoring purposes. The sampling process in nearly all
cases is a deterministic process of choosing 1 in every N packets on a
per-interface basis, and then forming the flow statistics based on the
collected sampled statistics. Even though this sampling may not significantly
affect some statistics, such as packet rate, it can severely distort others.
However, it is important to consider the sampling techniques and their relative
accuracy when applied to different traffic patterns. The main disadvantage of
sampling is the loss of accuracy in the collected trace when compared to the
original traffic stream. To date there has not been a detailed analysis of the
impact of sampling at a router in various traffic profiles and flow criteria.
In this paper, we assess the performance of the sampling process as used in
NetFlow in detail, and we discuss some techniques for the compensation of loss
of monitoring detail.
| Performance, Networking and Internet Architecture | Computer Science |
704.0788 | Kerry Soileau | Optimal Synthesis of Multiple Algorithms | cs.DS cs.PF | In this paper we give a definition of "algorithm," "finite algorithm,"
"equivalent algorithms," and what it means for a single algorithm to dominate a
set of algorithms. We define a derived algorithm which may have a smaller mean
execution time than any of its component algorithms. We give an explicit
expression for the mean execution time (when it exists) of the derived
algorithm. We give several illustrative examples of derived algorithms with two
component algorithms. We include mean execution time solutions for
two-algorithm processors whose joint density of execution times takes one of
several general forms. For the case in which the joint density for a two-algorithm
processor is a step function, we give a maximum-likelihood estimation scheme
with which to analyze empirical processing time data.
| Data Structures and Algorithms, Performance | Computer Science |
704.0802 | Caleb Lo | Hybrid-ARQ in Multihop Networks with Opportunistic Relay Selection | cs.IT math.IT | This paper develops a contention-based opportunistic feedback technique
towards relay selection in a dense wireless network. This technique enables the
forwarding of additional parity information from the selected relay to the
destination. For a given network, the effects of varying key parameters such as
the feedback probability are presented and discussed. A primary advantage of
the proposed technique is that relay selection can be performed in a
distributed way. Simulation results show that its performance closely matches
that of centralized schemes which, unlike the proposed method, utilize GPS
information. The proposed relay selection method is also found to achieve throughput
gains over a point-to-point transmission strategy.
| Information Theory, Information Theory | Computer Science |
704.0805 | Caleb Lo | Opportunistic Relay Selection with Limited Feedback | cs.IT math.IT | It has been shown that a decentralized relay selection protocol based on
opportunistic feedback from the relays yields good throughput performance in
dense wireless networks. This selection strategy supports a hybrid-ARQ
transmission approach where relays forward parity information to the
destination in the event of a decoding error. Such an approach, however,
suffers a loss compared to centralized strategies that select relays with the
best channel gain to the destination. This paper closes the performance gap by
adding another level of channel feedback to the decentralized relay selection
problem. It is demonstrated that only one additional bit of feedback is
necessary for good throughput performance. The performance impact of varying
key parameters such as the number of relays and the channel feedback threshold
is discussed. An accompanying bit error rate analysis demonstrates the
importance of relay selection.
| Information Theory, Information Theory | Computer Science |
704.0831 | Brooke Shrader | On packet lengths and overhead for random linear coding over the erasure
channel | cs.IT math.IT | We assess the practicality of random network coding by illuminating the issue
of overhead and considering it in conjunction with increasingly long packets
sent over the erasure channel. We show that the transmission of increasingly
long packets, consisting either of an increasing number of symbols per
packet or an increasing symbol alphabet size, results in a data rate
approaching zero over the erasure channel. This result is due to an erasure
probability that increases with packet length. Numerical results for a
particular modulation scheme demonstrate a data rate of approximately zero for
a large, but finite-length packet. Our results suggest a reduction in the
performance gains offered by random network coding.
| Information Theory, Information Theory | Computer Science |
704.0834 | Anatoly Rodionov | P-adic arithmetic coding | cs.DS | A new incremental algorithm for data compression is presented. For a sequence
of input symbols the algorithm incrementally constructs a p-adic integer number
as an output. The decoding process starts with the less significant part of the
p-adic integer and incrementally reconstructs the sequence of input symbols.
The algorithm is based on certain features of p-adic numbers and the p-adic
norm. The p-adic coding algorithm may be considered a generalization of a
popular compression technique, arithmetic coding. It is shown that for p = 2
the algorithm works as an integer variant of arithmetic coding; for a special
class of models it gives exactly the same codes as Huffman's algorithm, and for
another special model and a specific alphabet it gives Golomb-Rice codes.
| Data Structures and Algorithms | Computer Science |
704.0838 | Gil Shamir | Universal Source Coding for Monotonic and Fast Decaying Monotonic
Distributions | cs.IT math.IT | We study universal compression of sequences generated by monotonic
distributions. We show that for a monotonic distribution over an alphabet of
size $k$, each probability parameter costs essentially $0.5 \log (n/k^3)$ bits,
where $n$ is the coded sequence length, as long as $k = o(n^{1/3})$. Otherwise,
for $k = O(n)$, the total average sequence redundancy is $O(n^{1/3+\epsilon})$
bits overall. We then show that there exists a sub-class of monotonic
distributions over infinite alphabets for which redundancy of
$O(n^{1/3+\epsilon})$ bits overall is still achievable. This class contains
fast decaying distributions, including many distributions over the integers and
geometric distributions. For some slower decays, including other distributions
over the integers, redundancy of $o(n)$ bits overall is achievable, where a
method to compute specific redundancy rates for such distributions is derived.
The results are specifically true for finite entropy monotonic distributions.
Finally, we study individual sequence redundancy behavior assuming a sequence
is governed by a monotonic distribution. We show that for sequences whose
empirical distributions are monotonic, individual redundancy bounds similar to
those in the average case can be obtained. However, even if the monotonicity in
the empirical distribution is violated, diminishing per symbol individual
sequence redundancies with respect to the monotonic maximum likelihood
description length may still be achievable.
| Information Theory, Information Theory | Computer Science |
704.0858 | Mohamed Kaaniche | Lessons Learned from the deployment of a high-interaction honeypot | cs.CR | This paper presents an experimental study and the lessons learned from the
observation of attackers once logged onto a compromised machine. The results
are based on a six-month period during which a controlled experiment was
run with a high-interaction honeypot. We correlate our findings with those
obtained with a worldwide distributed system of low-interaction honeypots.
| Cryptography and Security | Computer Science |
704.086 | Mohamed Kaaniche | Availability assessment of SunOS/Solaris Unix Systems based on Syslogd
and wtmpx logfiles: a case study | cs.PF | This paper presents a measurement-based availability assessment study using
field data collected during a 4-year period from 373 SunOS/Solaris Unix
workstations and servers interconnected through a local area network. We focus
on the estimation of machine uptimes, downtimes and availability based on the
identification of failures that caused total service loss. Data corresponds to
syslogd event logs that contain a large amount of information about the normal
activity of the studied systems as well as their behavior in the presence of
failures. It is widely recognized that the information contained in such event
logs might be incomplete or imperfect. The solution investigated in this paper
to address this problem is based on the use of auxiliary sources of data
obtained from wtmpx files maintained by the SunOS/Solaris Unix operating
system. The results obtained suggest that the combined use of wtmpx and syslogd
log files provides more complete information on the state of the target
systems, which is useful for availability estimations that better reflect reality.
| Performance | Computer Science |
704.0861 | Mohamed Kaaniche | Empirical analysis and statistical modeling of attack processes based on
honeypots | cs.PF cs.CR | Honeypots are increasingly used to collect data on malicious activities on
the Internet and to better understand the strategies and techniques used by
attackers to compromise target systems. Analysis and modeling methodologies are
needed to support the characterization of attack processes based on the data
collected from the honeypots. This paper presents some empirical analyses based
on the data collected from the Leurr{\'e}.com honeypot platforms deployed on
the Internet and presents some preliminary modeling studies aimed at fulfilling
such objectives.
| Performance, Cryptography and Security | Computer Science |
704.0865 | Mohamed Kaaniche | An architecture-based dependability modeling framework using AADL | cs.PF cs.SE | For efficiency reasons, software system designers want to use an
integrated set of methods and tools to describe specifications and designs, and
also to perform analyses such as dependability, schedulability and performance.
AADL (Architecture Analysis and Design Language) has proved to be efficient for
software architecture modeling. In addition, AADL was designed to accommodate
several types of analyses. This paper presents an iterative dependency-driven
approach for dependability modeling using AADL. It is illustrated on a small
example. This approach is part of a complete framework that allows the
generation of dependability analysis and evaluation models from AADL models to
support the analysis of software and system architectures, in critical
application domains.
| Performance, Software Engineering | Computer Science |
704.0879 | Mohamed Kaaniche | A Hierarchical Approach for Dependability Analysis of a Commercial
Cache-Based RAID Storage Architecture | cs.PF | We present a hierarchical simulation approach for the dependability analysis
and evaluation of a highly available commercial cache-based RAID storage
system. The architecture is complex and includes several layers of
overlapping error detection and recovery mechanisms. Three abstraction levels
have been developed to model the cache architecture, cache operations, and
error detection and recovery mechanisms. The impact of faults and errors
occurring in the cache and in the disks is analyzed at each level of the
hierarchy. A simulation submodel is associated with each abstraction level. The
models have been developed using DEPEND, a simulation-based environment for
system-level dependability analysis, which provides facilities to inject
faults into a functional behavior model, to simulate error detection and
recovery mechanisms, and to evaluate quantitative measures. Several fault
models are defined for each submodel to simulate cache component failures, disk
failures, transmission errors, and data errors in the cache memory and in the
disks. Some of the parameters characterizing fault injection in a given
submodel correspond to probabilities evaluated from the simulation of the
lower-level submodel. Based on the proposed methodology, we evaluate and
analyze 1) the system behavior under a real workload and high error rate
(focusing on error bursts), 2) the coverage of the error detection mechanisms
implemented in the system and the error latency distributions, and 3) the
accumulation of errors in the cache and in the disks.
| Performance | Computer Science |
704.0954 | Jos\'e M. F. Moura | Sensor Networks with Random Links: Topology Design for Distributed
Consensus | cs.IT cs.LG math.IT | In a sensor network, in practice, the communication among sensors is subject
to: (1) errors or failures at random times; (2) costs; and (3) constraints, since
sensors and networks operate under scarce resources, such as power, data rate,
or communication. The signal-to-noise ratio (SNR) is usually a main factor in
determining the probability of error (or of communication failure) in a link.
These probabilities are then a proxy for the SNR under which the links operate.
The paper studies the problem of designing the topology, i.e., assigning the
probabilities of reliable communication among sensors (or of link failures) to
maximize the rate of convergence of average consensus, when the link
communication costs are taken into account, and there is an overall
communication budget constraint. To consider this problem, we address a number
of preliminary issues: (1) model the network as a random topology; (2)
establish necessary and sufficient conditions for mean square sense (mss) and
almost sure (a.s.) convergence of average consensus when network links fail;
and, in particular, (3) show that a necessary and sufficient condition for both
mss and a.s. convergence is for the algebraic connectivity of the mean graph
describing the network topology to be strictly positive. With these results, we
formulate topology design, subject to random link failures and to a
communication cost constraint, as a constrained convex optimization problem to
which we apply semidefinite programming techniques. We show by an extensive
numerical study that the optimal design improves significantly the convergence
speed of the consensus algorithm and can achieve the asymptotic performance of
a non-random network at a fraction of the communication cost.
| Information Theory, Machine Learning, Information Theory | Computer Science |
704.0967 | Jia Liu | Cross-Layer Optimization of MIMO-Based Mesh Networks with Gaussian
Vector Broadcast Channels | cs.IT cs.AR math.IT | MIMO technology is one of the most significant advances in the past decade to
increase channel capacity and has a great potential to improve network capacity
for mesh networks. In a MIMO-based mesh network, the links outgoing from each
node sharing the common communication spectrum can be modeled as a Gaussian
vector broadcast channel. Recently, researchers showed that "dirty paper
coding" (DPC) is the optimal transmission strategy for Gaussian vector
broadcast channels. So far, there has been little study on how this fundamental
result will impact the cross-layer design for MIMO-based mesh networks. To fill
this gap, we consider the problem of jointly optimizing DPC power allocation in
the link layer at each node and multihop/multipath routing in a MIMO-based mesh
network. It turns out that this optimization problem is a very challenging
non-convex problem. To address this difficulty, we transform the original
problem to an equivalent problem by exploiting the channel duality. For the
transformed problem, we develop an efficient solution procedure that integrates
Lagrangian dual decomposition method, conjugate gradient projection method
based on matrix differential calculus, cutting-plane method, and subgradient
method. In our numerical example, it is shown that we can achieve a network
performance gain of 34.4% by using DPC.
| Information Theory, Hardware Architecture, Information Theory | Computer Science |
704.0985 | Mohd Abubakr | Architecture for Pseudo Acausal Evolvable Embedded Systems | cs.NE cs.AI | Advances in semiconductor technology are contributing to the increasing
complexity in the design of embedded systems. Architectures with novel
techniques, such as an evolvable nature and autonomous behavior, have attracted
a lot of attention. This paper demonstrates conceptually that evolvable
embedded systems can be characterized by an acausal nature. In acausal systems,
future input needs to be known; here we devise a mechanism by which the system
predicts future inputs and thus exhibits a pseudo-acausal nature. An embedded
system that uses the theoretical framework of acausality is proposed. Our
method aims at a novel architecture that features hardware evolvability and
autonomous behavior alongside pseudo-acausality. Various aspects of this
architecture are discussed in detail along with its limitations.
| Neural and Evolutionary Computing, Artificial Intelligence | Computer Science |
704.102 | Gyorgy Ottucsak | The on-line shortest path problem under partial monitoring | cs.LG cs.SC | The on-line shortest path problem is considered under various models of
partial monitoring. Given a weighted directed acyclic graph whose edge weights
can change in an arbitrary (adversarial) way, a decision maker has to choose in
each round of a game a path between two distinguished vertices such that the
loss of the chosen path (defined as the sum of the weights of its composing
edges) be as small as possible. In a setting generalizing the multi-armed
bandit problem, after choosing a path, the decision maker learns only the
weights of those edges that belong to the chosen path. For this problem, an
algorithm is given whose average cumulative loss in n rounds exceeds that of
the best path, matched off-line to the entire sequence of the edge weights, by
a quantity that is proportional to 1/\sqrt{n} and depends only polynomially on
the number of edges of the graph. The algorithm can be implemented with linear
complexity in the number of rounds n and in the number of edges. An extension
to the so-called label efficient setting is also given, in which the decision
maker is informed about the weights of the edges corresponding to the chosen
path at a total of m << n time instances. Another extension is shown where the
decision maker competes against a time-varying path, a generalization of the
problem of tracking the best expert. A version of the multi-armed bandit
setting for shortest path is also discussed where the decision maker learns
only the total weight of the chosen path but not the weights of the individual
edges on the path. Applications to routing in packet switched networks along
with simulation results are also presented.
| Machine Learning, Symbolic Computation | Computer Science |
704.1028 | Jianlin Cheng | A neural network approach to ordinal regression | cs.LG cs.AI cs.NE | Ordinal regression is an important type of learning, which has properties of
both classification and regression. Here we describe a simple and effective
approach to adapt a traditional neural network to learn ordinal categories. Our
approach is a generalization of the perceptron method for ordinal regression.
On several benchmark datasets, our method (NNRank) outperforms a neural network
classification method. Compared with the ordinal regression methods using
Gaussian processes and support vector machines, NNRank achieves comparable
performance. Moreover, NNRank has the advantages of traditional neural
networks: learning in both online and batch modes, handling very large training
datasets, and making rapid predictions. These features make NNRank a useful and
complementary tool for large-scale data processing tasks such as information
retrieval, web page ranking, collaborative filtering, and protein ranking in
Bioinformatics.
| Machine Learning, Artificial Intelligence, Neural and Evolutionary Computing | Computer Science |
704.1043 | Hector Zenil | On the Kolmogorov-Chaitin Complexity for short sequences | cs.CC cs.IT math.IT | A drawback of Kolmogorov-Chaitin complexity (K) as a function from s to the
shortest program producing s is its noncomputability, which limits its range of
applicability. Moreover, when strings are short, the dependence of K on a
particular universal Turing machine U can be arbitrary. In practice one can
approximate it by computable compression methods. However, such compression
methods do not always provide meaningful approximations--for strings shorter,
for example, than typical compiler lengths. In this paper we suggest an
empirical approach to overcome this difficulty and to obtain a stable
definition of the Kolmogorov-Chaitin complexity for short sequences.
Additionally, a correlation in terms of distribution frequencies was found
across the output of two models of abstract machines, namely unidimensional
cellular automata and deterministic Turing machines.
| Computational Complexity, Information Theory, Information Theory | Computer Science |
704.1068 | Leo Liberti | Fast paths in large-scale dynamic road networks | cs.NI cs.DS | Efficiently computing fast paths in large scale dynamic road networks (where
dynamic traffic information is known over a part of the network) is a practical
problem faced by several traffic information service providers who wish to
offer a realistic fast path computation to GPS terminal enabled vehicles. The
heuristic solution method we propose is based on a highway hierarchy-based
shortest path algorithm for static large-scale networks; we maintain a static
highway hierarchy and perform each query on the dynamically evaluated network.
| Networking and Internet Architecture, Data Structures and Algorithms | Computer Science |
704.107 | Hua Fu | Differential Diversity Reception of MDPSK over Independent Rayleigh
Channels with Nonidentical Branch Statistics and Asymmetric Fading Spectrum | cs.IT cs.PF math.IT | This paper is concerned with optimum diversity receiver structure and its
performance analysis of differential phase shift keying (DPSK) with
differential detection over nonselective, independent, nonidentically
distributed, Rayleigh fading channels. The fading process in each branch is
assumed to have an arbitrary Doppler spectrum with arbitrary Doppler bandwidth,
but to have a distinct, asymmetric fading power spectral density characteristic.
Using 8-DPSK as an example, the average bit error probability (BEP) of the
optimum diversity receiver is obtained by calculating the BEP for each of the
three individual bits. The BEP results derived are given in exact, explicit,
closed-form expressions which show clearly the behavior of the performance as a
function of various system parameters.
| Information Theory, Performance, Information Theory | Computer Science |
704.1158 | Bernardo Huberman | Novelty and Collective Attention | cs.CY cs.IR physics.soc-ph | The subject of collective attention is central to an information age where
millions of people are inundated with daily messages. It is thus of interest to
understand how attention to novel items propagates and eventually fades among
large populations. We have analyzed the dynamics of collective attention among
one million users of an interactive website -- \texttt{digg.com} -- devoted to
thousands of novel news stories. The observations can be described by a
dynamical model characterized by a single novelty factor. Our measurements
indicate that novelty within groups decays with a stretched-exponential law,
suggesting the existence of a natural time scale over which attention fades.
| Computers and Society, Information Retrieval, Physics and Society | Computer Science |
704.1196 | Shengchao Ding | Novel algorithm to calculate hypervolume indicator of Pareto
approximation set | cs.CG cs.NE | The hypervolume indicator is a commonly accepted quality measure for comparing
Pareto approximation sets generated by multi-objective optimizers. The best
known algorithm to calculate it for $n$ points in $d$-dimensional space has a
run time of $O(n^{d/2})$ with special data structures. This paper presents a
recursive, vertex-splitting algorithm for calculating the hypervolume indicator
of a set of $n$ non-comparable points in $d>2$ dimensions. It splits out
multiple child hyper-cuboids which cannot be dominated by a splitting
reference point. In particular, the splitting reference point is carefully chosen
to minimize the number of points in the child hyper-cuboids. The complexity
analysis shows that the proposed algorithm achieves $O((\frac{d}{2})^n)$ time
and $O(dn^2)$ space complexity in the worst case.
| Computational Geometry, Neural and Evolutionary Computing | Computer Science |
704.1198 | Minkyu Kim | A Doubly Distributed Genetic Algorithm for Network Coding | cs.NE cs.NI | We present a genetic algorithm which is distributed in two novel ways: along
genotype and temporal axes. Our algorithm first distributes, for every member
of the population, a subset of the genotype to each network node, rather than a
subset of the population to each. This genotype distribution is shown to offer
a significant gain in running time. Then, for efficient use of the
computational resources in the network, our algorithm divides the candidate
solutions into pipelined sets and thus the distribution is in the temporal
domain, rather than in the spatial domain. This temporal distribution may lead
to temporal inconsistency in selection and replacement; however, our experiments
yield better efficiency in terms of the time to convergence without incurring
significant penalties.
| Neural and Evolutionary Computing, Networking and Internet Architecture | Computer Science |
704.1267 | Laurence Likforman | Text Line Segmentation of Historical Documents: a Survey | cs.CV | There is a huge number of historical documents in libraries and in various
National Archives that have not been exploited electronically. Although
automatic reading of complete pages remains, in most cases, a long-term
objective, tasks such as word spotting, text/image alignment, authentication
and extraction of specific fields are in use today. For all these tasks, a
major step is document segmentation into text lines. Because of the low quality
and the complexity of these documents (background noise, artifacts due to
aging, interfering lines), automatic text line segmentation remains an open
research field. The objective of this paper is to present a survey of existing
methods, developed during the last decade, and dedicated to documents of
historical interest.
| Computer Vision and Pattern Recognition | Computer Science |
704.1269 | Lenka Zdeborova | Phase Transitions in the Coloring of Random Graphs | cond-mat.dis-nn cond-mat.stat-mech cs.CC | We consider the problem of coloring the vertices of a large sparse random
graph with a given number of colors so that no adjacent vertices have the same
color. Using the cavity method, we present a detailed and systematic analytical
study of the space of proper colorings (solutions).
We show that for a fixed number of colors and as the average vertex degree
(number of constraints) increases, the set of solutions undergoes several phase
transitions similar to those observed in the mean field theory of glasses.
First, at the clustering transition, the entropically dominant part of the
phase space decomposes into an exponential number of pure states so that beyond
this transition a uniform sampling of solutions becomes hard. Afterward, the
space of solutions condenses over a finite number of the largest states and
consequently the total entropy of solutions becomes smaller than the annealed
one. Another transition takes place when in all the entropically dominant
states a finite fraction of nodes freezes so that each of these nodes is
allowed a single color in all the solutions inside the state. Eventually, above
the coloring threshold, no more solutions are available. We compute all the
critical connectivities for Erdos-Renyi and regular random graphs and determine
their asymptotic values for a large number of colors.
Finally, we discuss the algorithmic consequences of our findings. We argue
that the onset of computational hardness is not associated with the clustering
transition and we suggest instead that the freezing transition might be the
relevant phenomenon. We also discuss the performance of a simple local Walk-COL
algorithm and of the belief propagation algorithm in the light of our results.
| Disordered Systems and Neural Networks, Statistical Mechanics, Computational Complexity | Physics |
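The abstract above mentions a simple local Walk-COL algorithm; the sketch below is a generic WalkSAT-style recoloring heuristic in that spirit (the exact move rules of Walk-COL may differ):

```python
import random

# Generic WalkSAT-style local search for graph coloring, in the spirit of
# the Walk-COL heuristic mentioned above (not the authors' exact algorithm).
# Pick a conflicting vertex and recolor it, greedily most of the time,
# randomly otherwise.

def walk_col(adj, q, steps=100_000, noise=0.3, seed=0):
    rng = random.Random(seed)
    color = {v: rng.randrange(q) for v in adj}

    def conflicts(v, c):
        return sum(color[u] == c for u in adj[v])

    for _ in range(steps):
        bad = [v for v in adj if conflicts(v, color[v]) > 0]
        if not bad:
            return color                      # proper coloring found
        v = rng.choice(bad)
        if rng.random() < noise:
            color[v] = rng.randrange(q)       # random-walk move
        else:
            color[v] = min(range(q), key=lambda c: conflicts(v, c))  # greedy
    return None                               # gave up

# A 5-cycle is 3-colorable:
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(walk_col(adj, q=3))
```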
704.1274 | Dev Rajnarayan | Parametric Learning and Monte Carlo Optimization | cs.LG | This paper uncovers and explores the close relationship between Monte Carlo
Optimization of a parametrized integral (MCO), Parametric machine-Learning
(PL), and `blackbox' or `oracle'-based optimization (BO). We make four
contributions. First, we prove that MCO is mathematically identical to a broad
class of PL problems. This identity potentially provides a new application
domain for all broadly applicable PL techniques: MCO. Second, we introduce
immediate sampling, a new version of the Probability Collectives (PC) algorithm
for blackbox optimization. Immediate sampling transforms the original BO
problem into an MCO problem. Accordingly, by combining these first two
contributions, we can apply all PL techniques to BO. In our third contribution
we validate this way of improving BO by demonstrating that cross-validation and
bagging improve immediate sampling. Finally, conventional MC and MCO procedures
ignore the relationship between the sample point locations and the associated
values of the integrand; only the values of the integrand at those locations
are considered. We demonstrate that one can exploit the sample location
information using PL techniques, for example by forming a fit of the sample
locations to the associated values of the integrand. This provides an
additional way to apply PL techniques to improve MCO.
| Machine Learning | Computer Science |
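The last contribution above -- fitting sample locations to integrand values -- can be illustrated on a toy 1-D integral; this is a generic regression-based estimate, not the paper's immediate-sampling machinery:

```python
import numpy as np

# Toy illustration of the paper's final point: a plain MC estimate of an
# integral ignores where the samples landed, whereas fitting the
# (location, value) pairs and integrating the fit exploits them. Generic
# 1-D sketch with an arbitrary integrand, not the paper's method.

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + x**2            # integrand on [0, 1]
xs = rng.uniform(0.0, 1.0, size=30)
ys = f(xs)

mc_estimate = ys.mean()                       # standard Monte Carlo

coeffs = np.polyfit(xs, ys, deg=4)            # fit locations to values
poly = np.poly1d(coeffs)
fit_estimate = poly.integ()(1.0) - poly.integ()(0.0)

truth = (1 - np.cos(3.0)) / 3.0 + 1.0 / 3.0   # closed form
print(mc_estimate, fit_estimate, truth)
```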
704.1294 | Ahmed Sidky | A Disciplined Approach to Adopting Agile Practices: The Agile Adoption
Framework | cs.SE | Many organizations aspire to adopt agile processes to take advantage of the
numerous benefits that they offer. Those benefits include,
but are not limited to, quicker return on investment, better software quality,
and higher customer satisfaction. To date however, there is no structured
process (at least in the public domain) that guides organizations in adopting
agile practices. To address this problem we present the Agile Adoption
Framework. The framework consists of two components: an agile measurement
index and a 4-Stage process that together guide and assist the agile adoption
efforts of organizations. More specifically, the agile measurement index is
used to identify the agile potential of projects and organizations. The 4-Stage
process, on the other hand, helps determine (a) whether or not organizations
are ready for agile adoption, and (b) guided by their potential, what set of
agile practices can and should be introduced.
| Software Engineering | Computer Science |
704.1308 | Nihar Jindal | Antenna Combining for the MIMO Downlink Channel | cs.IT math.IT | A multiple antenna downlink channel where limited channel feedback is
available to the transmitter is considered. In a vector downlink channel
(single antenna at each receiver), the transmit antenna array can be used to
transmit separate data streams to multiple receivers only if the transmitter
has very accurate channel knowledge, i.e., if there is high-rate channel
feedback from each receiver. In this work it is shown that channel feedback
requirements can be significantly reduced if each receiver has a small number
of antennas and appropriately combines its antenna outputs. A combining method
that minimizes channel quantization error at each receiver, and thereby
minimizes multi-user interference, is proposed and analyzed. This technique is
shown to outperform traditional techniques such as maximum-ratio combining
because minimization of interference power is more critical than maximization
of signal power in the multiple antenna downlink. Analysis is provided to
quantify the feedback savings, and the technique is seen to work well with user
selection and is also robust to receiver estimation error.
| Information Theory, Information Theory | Computer Science |
704.1317 | Naftali Sommer | Low Density Lattice Codes | cs.IT math.IT | Low density lattice codes (LDLC) are novel lattice codes that can be decoded
efficiently and approach the capacity of the additive white Gaussian noise
(AWGN) channel. In LDLC a codeword x is generated directly in the n-dimensional
Euclidean space as a linear transformation of a corresponding integer message
vector b, i.e., x = Gb, where H, the inverse of G, is restricted to be sparse.
The fact that H is sparse is utilized to develop a linear-time iterative
decoding scheme which attains, as demonstrated by simulations, good error
performance within ~0.5dB from capacity at block length of n = 100,000 symbols.
The paper also discusses convergence results and implementation considerations.
| Information Theory, Information Theory | Computer Science |
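The encoding step described above (x = Gb with sparse H = G^{-1}) is easy to sketch at toy scale; the construction of H below is an arbitrary illustrative choice, and the iterative decoder is omitted:

```python
import numpy as np

# Toy illustration of LDLC encoding: the codeword is x = G b, where
# H = G^{-1} is sparse. Dimensions are tiny for readability (the paper
# uses block lengths up to n = 100,000), and the strictly diagonally
# dominant H used here is just one easy way to guarantee invertibility.

n = 6
rng = np.random.default_rng(1)
H = np.eye(n)                       # identity is trivially sparse
for i in range(n):                  # add one small off-diagonal per row
    j = rng.integers(0, n)
    if j != i:
        H[i, j] = rng.choice([-0.5, 0.5])

G = np.linalg.inv(H)                # generator: inverse of the sparse H
b = rng.integers(-2, 3, size=n)     # integer message vector
x = G @ b                           # lattice codeword in R^n

print(np.allclose(H @ x, b))        # True: H x recovers the integers
```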
704.1353 | Paul Prekop | Supporting Knowledge and Expertise Finding within Australia's Defence
Science and Technology Organisation | cs.OH cs.DB cs.DL cs.HC | This paper reports on work aimed at supporting knowledge and expertise
finding within a large Research and Development (R&D) organisation. The paper
first discusses the nature of knowledge important to R&D organisations and
presents a prototype information system developed to support knowledge and
expertise finding. The paper then discusses a trial of the system within an R&D
organisation, the implications and limitations of the trial, and discusses
future research questions.
| Other Computer Science, Databases, Digital Libraries, Human-Computer Interaction | Computer Science |
704.1358 | Torleiv Kl{\o}ve | Distance preserving mappings from ternary vectors to permutations | cs.DM cs.IT math.IT | Distance-preserving mappings (DPMs) are mappings from the set of all q-ary
vectors of a fixed length to the set of permutations of the same or longer
length such that every two distinct vectors are mapped to permutations with the
same or even larger Hamming distance than that of the vectors. In this paper,
we propose a construction of DPMs from ternary vectors. The constructed DPMs
improve the lower bounds on the maximal size of permutation arrays.
| Discrete Mathematics, Information Theory, Information Theory | Computer Science |
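The distance-preserving property defined above can be checked by brute force on a toy example; the mapping below is made up for illustration (binary length-2 vectors to permutations of three symbols), not one of the paper's ternary constructions:

```python
from itertools import product

# Brute-force check of the distance-preserving property: every pair of
# distinct vectors must map to permutations whose Hamming distance is at
# least that of the vectors. `toy_map` is a made-up example, not one of
# the paper's constructions.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

toy_map = {
    (0, 0): (0, 1, 2),
    (0, 1): (0, 2, 1),
    (1, 0): (1, 0, 2),
    (1, 1): (2, 1, 0),
}

ok = all(
    hamming(toy_map[u], toy_map[v]) >= hamming(u, v)
    for u, v in product(toy_map, repeat=2) if u != v
)
print(ok)   # True for this toy mapping
```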
704.1373 | Burgy Laurent | A Language-Based Approach for Improving the Robustness of Network
Application Protocol Implementations | cs.PL | The secure and robust functioning of a network relies on the defect-free
implementation of network applications. As network protocols have become
increasingly complex, however, hand-writing network message processing code has
become increasingly error-prone. In this paper, we present a domain-specific
language, Zebu, for describing protocol message formats and related processing
constraints. From a Zebu specification, a compiler automatically generates
stubs to be used by an application to parse network messages. Zebu is easy to
use, as it builds on notations used in RFCs to describe protocol grammars. Zebu
is also efficient, as the memory usage is tailored to application needs and
message fragments can be specified to be processed on demand. Finally,
Zebu-based applications are robust, as the Zebu compiler automatically checks
specification consistency and generates parsing stubs that include validation
of the message structure. Using a mutation analysis in the context of SIP and
RTSP, we show that Zebu significantly improves application robustness.
| Programming Languages | Computer Science |
704.1394 | Tarik Had\v{z}i\'c | Calculating Valid Domains for BDD-Based Interactive Configuration | cs.AI | In these notes we formally describe the functionality of Calculating Valid
Domains from the BDD representing the solution space of valid configurations.
The formalization is largely based on the CLab configuration framework.
| Artificial Intelligence | Computer Science |
704.1409 | Yao Hengshuai | Preconditioned Temporal Difference Learning | cs.LG cs.AI | This paper has been withdrawn by the author. This draft is withdrawn for its
poor quality of English, unfortunately produced by the author when he was just
starting his research career. Look at the ICML version instead:
http://icml2008.cs.helsinki.fi/papers/111.pdf
| Machine Learning, Artificial Intelligence | Computer Science |
704.1411 | Lorenzo Cappellari | Trellis-Coded Quantization Based on Maximum-Hamming-Distance Binary
Codes | cs.IT math.IT | Most design approaches for trellis-coded quantization take advantage of the
duality of trellis-coded quantization with trellis-coded modulation, and use
the same empirically-found convolutional codes to label the trellis branches.
This letter presents an alternative approach that instead takes advantage of
maximum-Hamming-distance convolutional codes. The proposed source codes are
shown to be competitive with the best in the literature for the same
computational complexity.
| Information Theory, Information Theory | Computer Science |
704.1455 | Aaron Wagner | A Better Good-Turing Estimator for Sequence Probabilities | cs.IT math.IT | We consider the problem of estimating the probability of an observed string
drawn i.i.d. from an unknown distribution. The key feature of our study is that
the length of the observed string is assumed to be of the same order as the
size of the underlying alphabet. In this setting, many letters are unseen and
the empirical distribution tends to overestimate the probability of the
observed letters. To overcome this problem, the traditional approach to
probability estimation is to use the classical Good-Turing estimator. We
introduce a natural scaling model and use it to show that the Good-Turing
sequence probability estimator is not consistent. We then introduce a novel
sequence probability estimator that is indeed consistent under the natural
scaling model.
| Information Theory, Information Theory | Computer Science |
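For readers unfamiliar with the classical Good-Turing estimator referred to above, a minimal sketch follows. This is the textbook per-letter estimator (whose induced sequence-probability estimate the paper shows to be inconsistent under their scaling model), with a crude fallback where the frequency-of-frequency counts vanish:

```python
from collections import Counter

# Classical Good-Turing: a letter that occurred r times gets probability
# (r+1) * N_{r+1} / (N_r * n), where N_r is the number of distinct letters
# occurring exactly r times; N_1 / n is the mass reserved for unseen
# letters. Falls back to the empirical estimate r/n when N_{r+1} = 0
# (real implementations smooth the N_r counts instead).

def good_turing_probs(sample):
    n = len(sample)
    counts = Counter(sample)
    freq_of_freq = Counter(counts.values())          # N_r
    probs = {}
    for letter, r in counts.items():
        n_r, n_r1 = freq_of_freq[r], freq_of_freq.get(r + 1, 0)
        probs[letter] = (r + 1) * n_r1 / (n_r * n) if n_r1 else r / n
    return probs, freq_of_freq.get(1, 0) / n         # unseen mass

probs, unseen_mass = good_turing_probs("abracadabra")
print(probs, unseen_mass)
```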
704.1524 | Daniel Ryan | GLRT-Optimal Noncoherent Lattice Decoding | cs.IT math.IT | This paper presents new low-complexity lattice-decoding algorithms for
noncoherent block detection of QAM and PAM signals over complex-valued fading
channels. The algorithms are optimal in terms of the generalized likelihood
ratio test (GLRT). The computational complexity is polynomial in the block
length; making GLRT-optimal noncoherent detection feasible for implementation.
We also provide even lower complexity suboptimal algorithms. Simulations show
that the suboptimal algorithms have performance indistinguishable from the
optimal algorithms. Finally, we consider block based transmission, and propose
to use noncoherent detection as an alternative to pilot assisted transmission
(PAT). The new technique is shown to outperform PAT.
| Information Theory, Information Theory | Computer Science |
704.1571 | Philippe Gambette | On restrictions of balanced 2-interval graphs | cs.DM q-bio.QM | The class of 2-interval graphs has been introduced for modelling scheduling
and allocation problems, and more recently for specific bioinformatic problems.
Some of those applications imply restrictions on the 2-interval graphs, and
justify the introduction of a hierarchy of subclasses of 2-interval graphs that
generalize line graphs: balanced 2-interval graphs, unit 2-interval graphs, and
(x,x)-interval graphs. We provide instances that show that all the inclusions
are strict. We extend the NP-completeness proof of recognizing 2-interval
graphs to the recognition of balanced 2-interval graphs. Finally we give hints
on the complexity of unit 2-interval graphs recognition, by studying
relationships with other graph classes: proper circular-arc, quasi-line graphs,
K_{1,5}-free graphs, ...
| Discrete Mathematics, Quantitative Methods | Computer Science |
704.1675 | Kristina Lerman | Exploiting Social Annotation for Automatic Resource Discovery | cs.AI cs.CY cs.DL | Information integration applications, such as mediators or mashups, that
require access to information resources currently rely on users manually
discovering and integrating them in the application. Manual resource discovery
is a slow process, requiring the user to sift through results obtained via
keyword-based search. Although search methods have advanced to include evidence
from a document's contents, its metadata, and the contents and link structure of the
referring pages, they still do not adequately cover information sources --
often called ``the hidden Web'' -- that dynamically generate documents in
response to a query. The recently popular social bookmarking sites, which allow
users to annotate and share metadata about various information sources, provide
rich evidence for resource discovery. In this paper, we describe a
probabilistic model of the user annotation process in a social bookmarking
system del.icio.us. We then use the model to automatically find resources
relevant to a particular information domain. Our experimental results on data
obtained from \emph{del.icio.us} show this approach as a promising method for
helping automate the resource discovery task.
| Artificial Intelligence, Computers and Society, Digital Libraries | Computer Science |
704.1676 | Kristina Lerman | Personalizing Image Search Results on Flickr | cs.IR cs.AI cs.CY cs.DL cs.HC | The social media site Flickr allows users to upload their photos, annotate
them with tags, submit them to groups, and also to form social networks by
adding other users as contacts. Flickr offers multiple ways of browsing or
searching it. One option is tag search, which returns all images tagged with a
specific keyword. If the keyword is ambiguous, e.g., ``beetle'' could mean an
insect or a car, tag search results will include many images that are not
relevant to the sense the user had in mind when executing the query. We claim
that users express their photography interests through the metadata they add in
the form of contacts and image annotations. We show how to exploit this
metadata to personalize search results for the user, thereby improving search
performance. First, we show that we can significantly improve search precision
by filtering tag search results by user's contacts or a larger social network
that includes those contacts' contacts. Secondly, we describe a probabilistic
model that takes advantage of tag information to discover latent topics
contained in the search results. The users' interests can similarly be
described by the tags they used for annotating their images. The latent topics
found by the model are then used to personalize search results by finding
images on topics that are of interest to the user.
| Information Retrieval, Artificial Intelligence, Computers and Society, Digital Libraries, Human-Computer Interaction | Computer Science |
704.1678 | Shanghua Teng | Settling the Complexity of Computing Two-Player Nash Equilibria | cs.GT cs.CC | We settle a long-standing open question in algorithmic game theory. We prove
that Bimatrix, the problem of finding a Nash equilibrium in a two-player game,
is complete for the complexity class PPAD (Polynomial Parity Argument, Directed
version) introduced by Papadimitriou in 1991.
This is the first of a series of results concerning the complexity of Nash
equilibria. In particular, we prove the following theorems:
Bimatrix does not have a fully polynomial-time approximation scheme unless
every problem in PPAD is solvable in polynomial time. The smoothed complexity
of the classic Lemke-Howson algorithm and, in fact, of any algorithm for
Bimatrix is not polynomial unless every problem in PPAD is solvable in
randomized polynomial time. Our results demonstrate that, even in the simplest
form of non-cooperative games, equilibrium computation and approximation are
polynomial-time equivalent to fixed point computation. Our results also have
two broad complexity implications in mathematical economics and operations
research: Arrow-Debreu market equilibria are PPAD-hard to compute. The P-Matrix
Linear Complementarity Problem is computationally harder than convex programming
unless every problem in PPAD is solvable in polynomial time.
| Computer Science and Game Theory, Computational Complexity | Computer Science |
704.1694 | Sergey Yekhanin | Locally Decodable Codes From Nice Subsets of Finite Fields and Prime
Factors of Mersenne Numbers | cs.CC math.NT | A k-query Locally Decodable Code (LDC) encodes an n-bit message x as an N-bit
codeword C(x), such that one can probabilistically recover any bit x_i of the
message by querying only k bits of the codeword C(x), even after some constant
fraction of codeword bits has been corrupted. The major goal of LDC related
research is to establish the optimal trade-off between length and query
complexity of such codes.
Recently [Y] introduced a novel technique for constructing locally decodable
codes and vastly improved the upper bounds for code length. The technique is
based on Mersenne primes. In this paper we extend the work of [Y] and argue
that further progress via these methods is tied to progress on an old number
theory question regarding the size of the largest prime factors of Mersenne
numbers.
Specifically, we show that every Mersenne number m=2^t-1 that has a prime
factor p>m^\gamma yields a family of k(\gamma)-query locally decodable codes of
length Exp(n^{1/t}). Conversely, if for some fixed k and all \epsilon > 0 one
can use the technique of [Y] to obtain a family of k-query LDCs of length
Exp(n^\epsilon), then infinitely many Mersenne numbers have prime factors larger
than currently known.
| Computational Complexity, Number Theory | Computer Science |
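The objects in play above -- Mersenne numbers $m = 2^t - 1$ and their largest prime factors $p = m^\gamma$ -- can be explored numerically; trial division suffices at toy scale:

```python
import math

# Numeric illustration of the objects discussed above: Mersenne numbers
# m = 2^t - 1, their largest prime factor p, and the exponent gamma with
# p = m^gamma (larger gamma gives better LDC parameters in the paper's
# construction). Trial division is enough at this toy scale.

def largest_prime_factor(m):
    p, largest = 2, 1
    while p * p <= m:
        while m % p == 0:
            largest, m = p, m // p
        p += 1
    return m if m > 1 else largest

for t in [11, 23, 29, 37]:          # exponents with composite 2^t - 1
    m = 2**t - 1
    p = largest_prime_factor(m)
    print(f"t={t}  m={m}  largest prime factor={p}  "
          f"gamma={math.log(p) / math.log(m):.3f}")
```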
704.1707 | Linda Buisman | A Cut-free Sequent Calculus for Bi-Intuitionistic Logic: Extended
Version | cs.LO | Bi-intuitionistic logic is the extension of intuitionistic logic with a
connective dual to implication. Bi-intuitionistic logic was introduced by
Rauszer as a Hilbert calculus with algebraic and Kripke semantics. But her
subsequent ``cut-free'' sequent calculus for BiInt has recently been shown by
Uustalu to fail cut-elimination. We present a new cut-free sequent calculus for
BiInt, and prove it sound and complete with respect to its Kripke semantics.
Ensuring completeness is complicated by the interaction between implication and
its dual, similarly to future and past modalities in tense logic. Our calculus
handles this interaction using extended sequents which pass information from
premises to conclusions using variables instantiated at the leaves of failed
derivation trees. Our simple termination argument allows our calculus to be
used for automated deduction, although this is not its main purpose.
| Logic in Computer Science | Computer Science |
704.1709 | Marie Cottrell | Traitement Des Donnees Manquantes Au Moyen De L'Algorithme De Kohonen | stat.AP cs.NE | Nous montrons comment il est possible d'utiliser l'algorithme d'auto
organisation de Kohonen pour traiter des donn\'ees avec valeurs manquantes et
estimer ces derni\`eres. Apr\`es un rappel m\'ethodologique, nous illustrons
notre propos \`a partir de trois applications \`a des donn\'ees r\'eelles.
-----
We show how it is possible to use the Kohonen self-organizing algorithm to
deal with data which contain missing values and to estimate them. After a
methodological review, we illustrate the approach with three applications to
real data sets.
| Applications, Neural and Evolutionary Computing | Statistics |
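A minimal sketch of the idea above, assuming a winner-take-all update with the neighborhood kernel omitted for brevity (so this is closer to online k-means than a full SOM), with illustrative data and parameters:

```python
import numpy as np

# Sketch of SOM-based missing-value handling as described above: match
# and update map units using only the observed coordinates, then impute
# each missing value from the winning unit's codebook vector. The
# neighborhood kernel is omitted for brevity; grid size, learning rate,
# and data are illustrative choices.

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 3))
data[rng.random(data.shape) < 0.1] = np.nan   # ~10% missing entries

k = 16                                        # number of map units
codebook = rng.standard_normal((k, 3))

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)
    for x in data:
        obs = ~np.isnan(x)                    # observed coordinates only
        dists = ((codebook[:, obs] - x[obs])**2).sum(axis=1)
        win = np.argmin(dists)
        codebook[win, obs] += lr * (x[obs] - codebook[win, obs])

imputed = data.copy()
for i, x in enumerate(data):
    obs = ~np.isnan(x)
    win = np.argmin(((codebook[:, obs] - x[obs])**2).sum(axis=1))
    imputed[i, ~obs] = codebook[win, ~obs]    # fill missing from winner

print(np.isnan(imputed).any())                # False: all values imputed
```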
704.1748 | Frank Schweitzer | Self-Organization applied to Dynamic Network Layout | physics.comp-ph cs.DS nlin.AO | As networks and their structure have become a major field of research, a
strong demand for network visualization has emerged. We address this challenge
by formalizing the well established spring layout in terms of dynamic
equations. We thus open up the design space for new algorithms. Drawing from
the knowledge of systems design, we derive a layout algorithm that remedies
several drawbacks of the original spring layout. This new algorithm relies on
the balancing of two antagonistic forces. We thus call it {\em arf} for
"attractive and repulsive forces". It is, as we claim, particularly suited for
a dynamic layout of smaller networks ($n < 10^3$). We back this claim with
several application examples from ongoing complex systems research.
| Computational Physics, Data Structures and Algorithms, Adaptation and Self-Organizing Systems | Physics |
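A generic force-directed layout written as first-order dynamic equations, in the spirit of the balance of attractive and repulsive forces described above; the coefficients are illustrative, not the arf algorithm's actual parameters:

```python
import numpy as np

# Generic force-directed layout: node positions evolve under a balance of
# linear attraction along edges and inverse-distance repulsion between all
# pairs. The coefficients are arbitrary illustrative choices, not the arf
# algorithm's parameters.

def layout(edges, n, steps=500, dt=0.05, attract=1.0, repel=0.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.standard_normal((n, 2))
    for _ in range(steps):
        force = np.zeros_like(pos)
        # repulsion between every pair of nodes
        diff = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        np.fill_diagonal(dist, np.inf)              # no self-repulsion
        force += repel * (diff / dist[..., None]**2).sum(axis=1)
        # attraction along edges (linear springs)
        for i, j in edges:
            force[i] += attract * (pos[j] - pos[i])
            force[j] += attract * (pos[i] - pos[j])
        pos += dt * force                           # first-order dynamics
    return pos

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(layout(edges, n=4).round(2))
```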
704.1751 | Olivier Rioul | Information Theoretic Proofs of Entropy Power Inequalities | cs.IT math.IT | While most useful information theoretic inequalities can be deduced from the
basic properties of entropy or mutual information, up to now Shannon's entropy
power inequality (EPI) is an exception: Existing information theoretic proofs
of the EPI hinge on representations of differential entropy using either Fisher
information or minimum mean-square error (MMSE), which are derived from de
Bruijn's identity. In this paper, we first present a unified view of these
proofs, showing that they share two essential ingredients: 1) a data processing
argument applied to a covariance-preserving linear transformation; 2) an
integration over a path of a continuous Gaussian perturbation. Using these
ingredients, we develop a new and brief proof of the EPI through a mutual
information inequality, which replaces Stam and Blachman's Fisher information
inequality (FII) and an inequality for MMSE by Guo, Shamai and Verd\'u used in
earlier proofs. The result has the advantage of being very simple in that it
relies only on the basic properties of mutual information. These ideas are then
generalized to various extended versions of the EPI: Zamir and Feder's
generalized EPI for linear transformations of the random variables, Takano and
Johnson's EPI for dependent variables, Liu and Viswanath's
covariance-constrained EPI, and Costa's concavity inequality for the entropy
power.
| Information Theory, Information Theory | Computer Science |
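For reference, Shannon's EPI in standard notation (the inequality whose proofs the abstract discusses):

```latex
% Shannon's entropy power inequality: for independent random vectors
% X, Y in R^n with densities, with h(.) the differential entropy,
e^{2h(X+Y)/n} \;\ge\; e^{2h(X)/n} + e^{2h(Y)/n},
% with equality iff X and Y are Gaussian with proportional covariances.
```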
704.1756 | Jose M. Martin-Garcia | The Invar Tensor Package | cs.SC gr-qc hep-th | The Invar package is introduced, a fast manipulator of generic scalar
polynomial expressions formed from the Riemann tensor of a four-dimensional
metric-compatible connection. The package can maximally simplify any polynomial
containing tensor products of up to seven Riemann tensors within seconds. It
has been implemented in both the Mathematica and Maple algebraic systems.
| Symbolic Computation, General Relativity and Quantum Cosmology, High Energy Physics - Theory | Computer Science |
704.1768 | Enrique ter Horst A | Assessment and Propagation of Input Uncertainty in Tree-based Option
Pricing Models | cs.CE cs.GT | This paper aims to provide a practical example on the assessment and
propagation of input uncertainty for option pricing when using tree-based
methods. Input uncertainty is propagated into output uncertainty, reflecting
that option prices are as unknown as the inputs they are based on. Option
pricing formulas are tools whose validity is conditional not only on how close
the model represents reality, but also on the quality of the inputs they use,
and those inputs are usually not observable. We provide three alternative
frameworks to calibrate option pricing tree models, propagating parameter
uncertainty into the resulting option prices. We finally compare our methods
with classical calibration-based results, assuming that no options market has
been established. These methods can be applied to pricing of instruments for
which there is not an options market, as well as a methodological tool to
account for parameter and model uncertainty in theoretical option pricing.
| Computational Engineering, Finance, and Science, Computer Science and Game Theory | Computer Science |
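The propagation pattern described above can be sketched with a standard Cox-Ross-Rubinstein binomial tree and an arbitrary illustrative prior on the volatility; this shows the input-to-output uncertainty flow, not the paper's specific calibration frameworks:

```python
import numpy as np
from math import comb

# Propagating input uncertainty through a tree-based pricer: sample the
# uncertain input (volatility, from an arbitrary illustrative prior) and
# collect the induced distribution of prices. The pricer is a standard
# CRR binomial tree, not the paper's calibration framework.

def crr_call(s0, k, r, sigma, t, steps=100):
    """Cox-Ross-Rubinstein binomial price of a European call option."""
    dt = t / steps
    u = np.exp(sigma * np.sqrt(dt))         # up factor
    d = 1.0 / u                             # down factor
    q = (np.exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    j = np.arange(steps + 1)
    terminal = s0 * u**j * d**(steps - j)   # terminal stock prices
    probs = np.array([comb(steps, i) for i in j], dtype=float)
    probs *= q**j * (1.0 - q)**(steps - j)
    return np.exp(-r * t) * np.sum(probs * np.maximum(terminal - k, 0.0))

rng = np.random.default_rng(0)
sigmas = rng.normal(0.20, 0.03, size=1000)  # uncertain volatility input
prices = np.array([crr_call(100, 100, 0.05, s, 1.0) for s in sigmas])
print(prices.mean(), prices.std())          # induced output uncertainty
```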
704.1783 | Francesco Santini | Unicast and Multicast Qos Routing with Soft Constraint Logic Programming | cs.LO cs.AI cs.NI | We present a formal model to represent and solve the unicast/multicast
routing problem in networks with Quality of Service (QoS) requirements. To
attain this, we first model the network as a weighted graph (unicast) or an
and-or graph (multicast), where the weight on a connector
corresponds to the multidimensional cost of sending a packet on the related
network link: each component of the weights vector represents a different QoS
metric value (e.g. bandwidth, cost, delay, packet loss). The second step
consists in writing this graph as a program in Soft Constraint Logic
Programming (SCLP): the engine of this framework is then able to find the best
paths/trees by optimizing their costs and solving the constraints imposed on
them (e.g. delay < 40msec), thus finding a solution to QoS routing problems.
Moreover, c-semiring structures are a convenient tool to model QoS metrics.
Finally, we provide an implementation of the framework over scale-free networks
and we suggest how the performance can be improved.
| Logic in Computer Science, Artificial Intelligence, Networking and Internet Architecture | Computer Science |
704.1818 | Martin Wainwright | Low-density graph codes that are optimal for source/channel coding and
binning | cs.IT math.IT | We describe and analyze the joint source/channel coding properties of a class
of sparse graphical codes based on compounding a low-density generator matrix
(LDGM) code with a low-density parity check (LDPC) code. Our first pair of
theorems establish that there exist codes from this ensemble, with all degrees
remaining bounded independently of block length, that are simultaneously
optimal as both source and channel codes when encoding and decoding are
performed optimally. More precisely, in the context of lossy compression, we
prove that finite degree constructions can achieve any pair $(R, D)$ on the
rate-distortion curve of the binary symmetric source. In the context of channel
coding, we prove that finite degree codes can achieve any pair $(C, p)$ on the
capacity-noise curve of the binary symmetric channel. Next, we show that our
compound construction has a nested structure that can be exploited to achieve
the Wyner-Ziv bound for source coding with side information (SCSI), as well as
the Gelfand-Pinsker bound for channel coding with side information (CCSI).
Although the current results are based on optimal encoding and decoding, the
proposed graphical codes have sparse structure and high girth that renders them
well-suited to message-passing and other efficient decoding procedures.
| Information Theory, Information Theory | Computer Science |
704.1827 | Gerald Krafft | Transaction-Oriented Simulation In Ad Hoc Grids | cs.DC | This paper analyses the possibilities of performing parallel
transaction-oriented simulations with a special focus on the space-parallel
approach and discrete event simulation synchronisation algorithms that are
suitable for transaction-oriented simulation and the target environment of Ad
Hoc Grids. To demonstrate the findings a Java-based parallel
transaction-oriented simulator for the simulation language GPSS/H is
implemented on the basis of the promising Shock Resistant Time Warp
synchronisation algorithm and using the Grid framework ProActive. The
validation of this parallel simulator shows that the Shock Resistant Time Warp
algorithm can successfully reduce the number of rolled back Transaction moves
but it also reveals circumstances in which the Shock Resistant Time Warp
algorithm can be outperformed by the normal Time Warp algorithm. The conclusion
of this paper suggests possible improvements to the Shock Resistant Time Warp
algorithm to avoid such problems.
| Distributed, Parallel, and Cluster Computing | Computer Science |
704.1829 | Kamil Kloch | On-line Chain Partitions of Up-growing Semi-orders | cs.DM | On-line chain partition is a two-player game between Spoiler and Algorithm.
Spoiler presents a partially ordered set, point by point. Algorithm assigns
incoming points (immediately and irrevocably) to the chains which constitute a
chain partition of the order. The value of the game for orders of width $w$ is
the minimum number $\fVal(w)$ such that Algorithm has a strategy using at most
$\fVal(w)$ chains on orders of width at most $w$. We analyze the chain
partition game for up-growing semi-orders. Surprisingly, the golden ratio comes
into play and the value of the game is $\lfloor\frac{1+\sqrt{5}}{2}\; w
\rfloor$.
| Discrete Mathematics | Computer Science |
704.1833 | Inanc Inan | Analysis of the 802.11e Enhanced Distributed Channel Access Function | cs.NI | The IEEE 802.11e standard revises the Medium Access Control (MAC) layer of
the former IEEE 802.11 standard for Quality-of-Service (QoS) provision in the
Wireless Local Area Networks (WLANs). The Enhanced Distributed Channel Access
(EDCA) function of 802.11e defines multiple Access Categories (AC) with
AC-specific Contention Window (CW) sizes, Arbitration Interframe Space (AIFS)
values, and Transmit Opportunity (TXOP) limits to support MAC-level QoS and
prioritization. We propose an analytical model for the EDCA function which
incorporates an accurate CW, AIFS, and TXOP differentiation at any traffic
load. The proposed model is also shown to capture the effect of MAC layer
buffer size on the performance. Analytical and simulation results are compared
to demonstrate the accuracy of the proposed approach for varying traffic loads,
EDCA parameters, and MAC layer buffer space.
| Networking and Internet Architecture | Computer Science |
704.1838 | Inanc Inan | Performance Analysis of the IEEE 802.11e Enhanced Distributed
Coordination Function using Cycle Time Approach | cs.OH | The recently ratified IEEE 802.11e standard defines the Enhanced Distributed
Channel Access (EDCA) function for Quality-of-Service (QoS) provisioning in the
Wireless Local Area Networks (WLANs). The EDCA uses Carrier Sense Multiple
Access with Collision Avoidance (CSMA/CA) and slotted Binary Exponential
Backoff (BEB) mechanism. We present a simple mathematical analysis framework
for the EDCA function. Our analysis considers the fact that the distributed
random access systems exhibit cyclic behavior where each station successfully
transmits a packet in a cycle. Our analysis shows that an AC-specific cycle
time exists for the EDCA function. Validating the theoretical results via
simulations, we show that the proposed analysis accurately captures EDCA
saturation performance in terms of average throughput, medium access delay, and
packet loss ratio. The cycle time analysis is a simple and insightful
substitute for previously proposed more complex EDCA models.
| Other Computer Science | Computer Science |
704.1842 | Inanc Inan | Fairness Provision in the IEEE 802.11e Infrastructure Basic Service Set | cs.OH | Most of the deployed IEEE 802.11e Wireless Local Area Networks (WLANs) use
infrastructure Basic Service Set (BSS) in which an Access Point (AP) serves as
a gateway between wired and wireless domains. We present the unfairness problem
between the uplink and the downlink flows of any Access Category (AC) in the
802.11e Enhanced Distributed Channel Access (EDCA) when the default settings of
the EDCA parameters are used. We propose a simple analytical model to calculate
the EDCA parameter settings that achieve weighted fair resource allocation for
all uplink and downlink flows. We also propose a simple model-assisted
measurement-based dynamic EDCA parameter adaptation algorithm. Moreover, our
dynamic solution addresses the differences in the transport layer and the
Medium Access Control (MAC) layer interactions of User Datagram Protocol (UDP)
and Transmission Control Protocol (TCP). We show that proposed Contention
Window (CW) and Transmit Opportunity (TXOP) limit adaptation at the AP provides
fair UDP and TCP access between uplink and downlink flows of the same AC while
preserving prioritization among ACs.
| Other Computer Science | Computer Science |
704.1873 | Yi Cao | An Achievable Rate Region for Interference Channels with Conferencing | cs.IT math.IT | In this paper, we propose an achievable rate region for discrete memoryless
interference channels with conferencing at the transmitter side. We employ
superposition block Markov encoding, combined with simultaneous superposition
coding, dirty paper coding, and random binning to obtain the achievable rate
region. We show that, under respective conditions, the proposed achievable
region reduces to the Han-Kobayashi achievable region for interference
channels, the capacity region for degraded relay channels, and the capacity
region for the Gaussian vector broadcast channel. Numerical examples for the
Gaussian case are given.
| Information Theory, Information Theory | Computer Science |
704.1886 | Pedro Resende | An algebraic generalization of Kripke structures | math.LO cs.LO math.RA | The Kripke semantics of classical propositional normal modal logic is made
algebraic via an embedding of Kripke structures into the larger class of
pointed stably supported quantales. This algebraic semantics subsumes the
traditional algebraic semantics based on lattices with unary operators, and it
suggests natural interpretations of modal logic, of possible interest in the
applications, in structures that arise in geometry and analysis, such as
foliated manifolds and operator algebras, via topological groupoids and inverse
semigroups. We study completeness properties of the quantale based semantics
for the systems K, T, K4, S4, and S5, in particular obtaining an axiomatization
for S5 which does not use negation or the modal necessity operator. As
additional examples we describe intuitionistic propositional modal logic, the
logic of programs PDL, and the ramified temporal logic CTL.
| Logic, Logic in Computer Science, Rings and Algebras | Mathematics |
704.1925 | Yuanning Yu | Blind Identification of Distributed Antenna Systems with Multiple
Carrier Frequency Offsets | cs.IT math.IT | In spatially distributed multiuser antenna systems, the received signal
contains multiple carrier-frequency offsets (CFOs) arising from mismatch
between the oscillators of transmitters and receivers. This results in a
time-varying rotation of the data constellation, which needs to be compensated
at the receiver before symbol recovery. In this paper, a new approach for blind
CFO estimation and symbol recovery is proposed. The received base-band signal
is over-sampled, and its polyphase components are used to formulate a virtual
Multiple-Input Multiple-Output (MIMO) problem. By applying blind MIMO system
estimation techniques, the system response can be estimated and decoupled
versions of the user symbols can be recovered, each one of which contains a
distinct CFO. By applying a decision feedback Phase Lock Loop (PLL), the CFO
can be mitigated and the transmitted symbols can be recovered. The estimated
MIMO system response provides information about the CFOs that can be used to
initialize the PLL, speed up its convergence, and avoid ambiguities usually
linked with PLL.
| Information Theory, Information Theory | Computer Science |
704.2010 | Juliana Bernardes | A study of structural properties on profile HMMs | cs.AI | Motivation: Profile hidden Markov Models (pHMMs) are a popular and very
useful tool in the detection of remote homologue protein families.
Unfortunately, their performance is not always satisfactory when proteins are
in the 'twilight zone'. We present HMMER-STRUCT, a model construction algorithm
and tool that tries to improve pHMM performance by using structural information
while training pHMMs. As a first step, HMMER-STRUCT constructs a set of pHMMs.
Each pHMM is constructed by weighting each residue in an aligned protein
according to a specific structural property of the residue. Properties used
were primary, secondary and tertiary structures, accessibility and packing.
HMMER-STRUCT then prioritizes the results by voting. Results: We used the SCOP
database to perform our experiments. Throughout, we apply leave-one-family-out
cross-validation over protein superfamilies. First, we used the MAMMOTH-mult
structural aligner to align the training set proteins. Then, we performed two
sets of experiments. In a first experiment, we compared structure weighted
models against standard pHMMs and against each other. In a second experiment,
we compared the voting model against individual pHMMs. We compare method
performance through ROC curves and through Precision/Recall curves, and assess
significance through the paired two-tailed t-test. Our results show significant
performance improvements of all structurally weighted models over default
HMMER, and a significant improvement in sensitivity of the combined models over
both the original model and the structurally weighted models.
| Artificial Intelligence | Computer Science |
704.2014 | Leandro R\^ego | Extensive Games with Possibly Unaware Players | cs.GT cs.MA | Standard game theory assumes that the structure of the game is common
knowledge among players. We relax this assumption by considering extensive
games where agents may be unaware of the complete structure of the game. In
particular, they may not be aware of moves that they and other agents can make.
We show how such games can be represented; the key idea is to describe the game
from the point of view of every agent at every node of the game tree. We
provide a generalization of Nash equilibrium and show that every game with
awareness has a generalized Nash equilibrium. Finally, we extend these results
to games with awareness of unawareness, where a player i may be aware that a
player j can make moves that i is not aware of, and to subjective games, where
players may have no common knowledge regarding the actual game and their beliefs
are incompatible with a common prior.
| Computer Science and Game Theory, Multiagent Systems | Computer Science |
704.2017 | Giacomo Bacci | Large System Analysis of Game-Theoretic Power Control in UWB Wireless
Networks with Rake Receivers | cs.IT cs.GT math.IT | This paper studies the performance of partial-Rake (PRake) receivers in
impulse-radio ultrawideband wireless networks when an energy-efficient power
control scheme is adopted. Due to the large bandwidth of the system, the
multipath channel is assumed to be frequency-selective. By using noncooperative
game-theoretic models and large system analysis, explicit expressions are
derived in terms of network parameters to measure the effects of self- and
multiple-access interference at a receiving access point. Performance of the
PRake is compared in terms of achieved utilities and loss to that of the
all-Rake receiver.
| Information Theory, Computer Science and Game Theory, Information Theory | Computer Science |
704.2083 | Hassan Satori | Introduction to Arabic Speech Recognition Using CMUSphinx System | cs.CL cs.AI | In this paper, Arabic is investigated from the speech recognition
point of view. We propose a novel approach to build an Arabic Automated Speech
Recognition System (ASR). This system is based on the open source CMU Sphinx-4,
from Carnegie Mellon University. CMU Sphinx is a large-vocabulary,
speaker-independent, continuous speech recognition system based on discrete
Hidden Markov Models (HMMs). We build a model using utilities from the
OpenSource CMU Sphinx. We will demonstrate the possible adaptability of this
system to Arabic voice recognition.
| Computation and Language, Artificial Intelligence | Computer Science |
704.2092 | Jinsong Tan | A Note on the Inapproximability of Correlation Clustering | cs.LG cs.DS | We consider inapproximability of the correlation clustering problem defined
as follows: Given a graph $G = (V,E)$ where each edge is labeled either "+"
(similar) or "-" (dissimilar), correlation clustering seeks to partition the
vertices into clusters so that the number of pairs correctly (resp.
incorrectly) classified with respect to the labels is maximized (resp.
minimized). The two complementary problems are called MaxAgree and MinDisagree,
respectively, and have been studied on complete graphs, where every edge is
labeled, and general graphs, where some edge might not have been labeled.
Natural edge-weighted versions of both problems have been studied as well. Let
S-MaxAgree denote the weighted problem where all weights are taken from a set S.
We show that S-MaxAgree with weights bounded by $O(|V|^{1/2-\delta})$
essentially belongs to the same hardness class in the following sense: if there
is a polynomial time algorithm that approximates S-MaxAgree within a factor of
$\lambda = O(\log{|V|})$ with high probability, then for any choice of S',
S'-MaxAgree can be approximated in polynomial time within a factor of $(\lambda
+ \epsilon)$, where $\epsilon > 0$ can be arbitrarily small, with high
probability. A similar statement also holds for $S$-MinDisagree. This result
implies it is hard (assuming $NP \neq RP$) to approximate unweighted MaxAgree
within a factor of $80/79-\epsilon$, improving upon a previously known factor of
$116/115-\epsilon$ by Charikar et al. \cite{Chari05}.
| Machine Learning, Data Structures and Algorithms | Computer Science |
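To make the MaxAgree objective above concrete, a brute-force search over all clusterings of a tiny labeled graph follows; exhaustive search is only feasible at toy size, and the paper's point is precisely the hardness of approximating this at scale:

```python
from itertools import product

# Brute force over all clusterings of a tiny labeled graph: MaxAgree
# counts "+" edges inside clusters plus "-" edges across clusters.
# Exhaustive search only works at toy size; the paper is about the
# hardness of approximating this objective in general.

edges = {(0, 1): "+", (1, 2): "+", (0, 2): "-", (2, 3): "-"}
n = 4

def agreements(labels):
    return sum(
        (sign == "+") == (labels[u] == labels[v])
        for (u, v), sign in edges.items()
    )

best = max(product(range(n), repeat=n), key=agreements)
print(best, agreements(best))   # the +,+,- triangle forces one violation
```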
704.2201 | Hassan Satori | Arabic Speech Recognition System using CMU-Sphinx4 | cs.CL cs.AI | In this paper we present the creation of an Arabic version of Automated
Speech Recognition System (ASR). This system is based on the open source
Sphinx-4 from Carnegie Mellon University, which is a speech recognition
system based on discrete hidden Markov models (HMMs). We investigate the
changes that must be made to the model to adapt it to Arabic voice recognition.
Keywords: Speech recognition, Acoustic model, Arabic language, HMMs,
CMUSphinx-4, Artificial intelligence.
| Computation and Language, Artificial Intelligence | Computer Science |
704.2258 | Stefan Laendner | On the Hardness of Approximating Stopping and Trapping Sets in LDPC
Codes | cs.IT math.IT | We prove that approximating the size of stopping and trapping sets in Tanner
graphs of linear block codes, and more restrictively, the class of low-density
parity-check (LDPC) codes, is NP-hard. The ramifications of our findings are
that methods used for estimating the height of the error-floor of moderate- and
long-length LDPC codes based on stopping and trapping set enumeration cannot
provide accurate worst-case performance predictions.
| Information Theory, Information Theory | Computer Science |
704.2259 | Lifeng Lai | The Wiretap Channel with Feedback: Encryption over the Channel | cs.IT cs.CR math.IT | In this work, the critical role of noisy feedback in enhancing the secrecy
capacity of the wiretap channel is established. Unlike previous works, where a
noiseless public discussion channel is used for feedback, the feed-forward and
feedback signals share the same noisy channel in the present model. Quite
interestingly, this noisy feedback model is shown to be more advantageous in
the current setting. More specifically, the discrete memoryless modulo-additive
channel with a full-duplex destination node is considered first, and it is
shown that the judicious use of feedback increases the perfect secrecy capacity
to the capacity of the source-destination channel in the absence of the
wiretapper. In the achievability scheme, the feedback signal corresponds to a
private key, known only to the destination. In the half-duplex scheme, a novel
feedback technique that always achieves a positive perfect secrecy rate (even
when the source-wiretapper channel is less noisy than the source-destination
channel) is proposed. These results hinge on the modulo-additive property of
the channel, which is exploited by the destination to perform encryption over
the channel without revealing its key to the source. Finally, this scheme is
extended to the continuous real valued modulo-$\Lambda$ channel where it is
shown that the perfect secrecy capacity with feedback is also equal to the
capacity in the absence of the wiretapper.
| Information Theory, Cryptography and Security, Information Theory | Computer Science |
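The key mechanism described above -- the destination's feedback acting as a private key over a modulo-additive channel -- reduces, once channel noise is stripped away, to a one-time pad; a toy demo (alphabet size and lengths arbitrary):

```python
import random

# Toy demo of encryption over a modulo-additive channel: while the source
# transmits X, the full-duplex destination simultaneously injects a random
# key K, so everyone on the channel observes X + K (mod q). Only the
# destination, which knows K, can strip it off. Channel noise is omitted
# here for clarity.

q = 16
rng = random.Random(0)
message = [rng.randrange(q) for _ in range(8)]    # source symbols X
key = [rng.randrange(q) for _ in range(8)]        # destination's key K

wiretapper_view = [(x + k) % q for x, k in zip(message, key)]
destination_view = [(y - k) % q for y, k in zip(wiretapper_view, key)]

print(wiretapper_view)                # uniform, independent of the message
print(destination_view == message)    # True: the key cancels at the destination
```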
704.2282 | Wim H. Hesselink | Kekul\'e Cells for Molecular Computation | cs.OH cs.DM | The configurations of single and double bonds in polycyclic hydrocarbons are
abstracted as Kekul\'e states of graphs. Sending a so-called soliton over an
open channel between ports (external nodes) of the graph changes the Kekul\'e
state and therewith the set of open channels in the graph. This switching
behaviour is proposed as a basis for molecular computation. The proposal is
highly speculative but may have tremendous impact.
Kekul\'e states with the same boundary behaviour (port assignment) can be
regarded as equivalent. This gives rise to the abstraction of Kekul\'e cells.
The basic theory of Kekul\'e states and Kekul\'e cells is developed here, up to
the classification of Kekul\'e cells with $\leq 4$ ports. To put the theory in
context, we generalize Kekul\'e states to semi-Kekul\'e states, which form the
solutions of a linear system of equations over the field of the bits 0 and 1.
We briefly study so-called omniconjugated graphs, in which every port
assignment of the right signature has a Kekul\'e state. Omniconjugated graphs
may be useful as connectors between computational elements. We finally
investigate some examples with potentially useful switching behaviour.
| Other Computer Science, Discrete Mathematics | Computer Science |
704.2295 | Hassan Jameel | Using Image Attributes for Human Identification Protocols | cs.CR | A secure human identification protocol aims at authenticating human users to
a remote server when even the users' inputs are not hidden from an adversary.
Recently, the authors proposed a human identification protocol in the RSA
Conference 2007, which is loosely based on the ability of humans to efficiently
process an image. The advantage is that an automated adversary is not
effective in attacking the protocol without human assistance. This paper
extends that work by trying to solve some of the open problems. First, we
analyze the complexity of defeating the proposed protocols by quantifying the
workload of a human adversary. Secondly, we propose a new construction based on
textual CAPTCHAs (Reverse Turing Tests) in order to make the generation of
automated challenges easier. We also present a brief experiment involving real
human users to find out the number of possible attributes in a given image and
give some guidelines for the selection of challenge questions based on the
results. Finally, we analyze the previously proposed protocol in detail for the
relationship between the secrets. Our results show that we can construct human
identification protocols based on image evaluation with reasonably
``quantified'' security guarantees based on our model.
| Cryptography and Security | Computer Science |
704.2344 | Publications Ampere | Parallel computing for the finite element method | cs.DC | A finite element method is presented to compute time-harmonic microwave
fields in three-dimensional configurations. Nodal-based finite elements have
been coupled with an absorbing boundary condition to solve open boundary
problems. This paper describes how the modeling of large devices has been made
possible using parallel computation. New algorithms are then proposed to
implement this formulation on a cluster of workstations (10 DEC ALPHA 300X) and
on a CRAY C98. Analysis of the computation efficiency is performed using simple
problems. The electromagnetic scattering of a plane wave by a perfect electric
conducting airplane is finally given as example.
| Distributed, Parallel, and Cluster Computing | Computer Science |
704.2351 | Giorgi Pascal | Parallel computation of the rank of large sparse matrices from algebraic
K-theory | math.KT cs.DC cs.SC math.NT | This paper deals with the computation of the rank and of some integer Smith
forms of a series of sparse matrices arising in algebraic K-theory. The number
of non-zero entries in the considered matrices ranges from 8 to 37 million.
The largest rank computation took more than 35 days on 50 processors. We report
on the actual algorithms we used to build the matrices, their link to the
motivic cohomology and the linear algebra and parallelizations required to
perform such huge computations. In particular, these results are part of the
first computation of the cohomology of the linear group GL_7(Z).
| K-Theory and Homology, Distributed, Parallel, and Cluster Computing, Symbolic Computation, Number Theory | Mathematics |
704.2353 | Natasha Devroye | Scaling Laws of Cognitive Networks | cs.IT math.IT | We consider a cognitive network consisting of n random pairs of cognitive
transmitters and receivers communicating simultaneously in the presence of
multiple primary users. Of interest is how the maximum throughput achieved by
the cognitive users scales with n and, furthermore, how far these users must be
from a primary user to guarantee a given primary outage. Two scenarios are
considered for the network scaling law: (i) when each cognitive transmitter
uses constant power to communicate with a cognitive receiver at a bounded
distance away, and (ii) when each cognitive transmitter scales its power
according to the distance to a considered primary user, allowing the cognitive
transmitter-receiver distances to grow. Using single-hop transmission, suitable
for cognitive devices of opportunistic nature, we show that, in both scenarios,
with path loss larger than 2, the cognitive network throughput scales linearly
with the number of cognitive users. We then explore the radius of a primary
exclusive region void of cognitive transmitters. We obtain bounds on this
radius for a given primary outage constraint. These bounds can help in the
design of a primary network with exclusive regions, outside of which cognitive
users may transmit freely. Our results show that opportunistic secondary
spectrum access using single-hop transmission is promising.
| Information Theory, Information Theory | Computer Science |
704.2355 | Luigi Santocanale | A Nice Labelling for Tree-Like Event Structures of Degree 3 | cs.DC | We address the problem of finding nice labellings for event structures
of degree 3. We develop a minimal theory by which we prove that the labelling
number of an event structure of degree 3 is bounded by a linear function of the
height. The main theorem we present in this paper states that event structures
of degree 3 whose causality order is a tree have a nice labelling with 3
colors. Finally, we exemplify how to use this theorem to construct upper bounds
for the labelling number of other event structures of degree 3.
| Distributed, Parallel, and Cluster Computing | Computer Science |
704.2375 | Stefano Buzzi | Power control algorithms for CDMA networks based on large system
analysis | cs.IT math.IT | Power control is a fundamental task accomplished in any wireless cellular
network; its aim is to set the transmit power of any mobile terminal, so that
each user is able to achieve its own target SINR. While conventional power
control algorithms require knowledge of a number of parameters of the signal of
interest and of the multiaccess interference, in this paper it is shown that in
a large CDMA system much of this information can be dispensed with, and
effective distributed power control algorithms may be implemented with very
little information on the user of interest. An uplink CDMA system subject to
flat fading is considered with a focus on the cases in which a linear MMSE
receiver and a non-linear MMSE serial interference cancellation receiver are
adopted; for the latter case new formulas are also given for the system SINR in
the large system asymptote. Experimental results show an excellent agreement
between the performance and the power profile of the proposed distributed
algorithms and that of conventional ones that require much greater prior
knowledge.
| Information Theory, Information Theory | Computer Science |
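As background for the abstract above, the classical Foschini-Miljanic iteration is the textbook distributed scheme in which each user scales its power by the ratio of target to measured SINR; the paper's large-system algorithms require less side information than this, so the sketch below is context, not the proposed method:

```python
import numpy as np

# Classical Foschini-Miljanic distributed power control: each user i
# updates p_i <- p_i * target_i / SINR_i using only its own measured
# SINR. Gains and targets below are illustrative (chosen feasible);
# this is textbook background, not the paper's large-system algorithm.

rng = np.random.default_rng(0)
n = 4
gain = rng.uniform(0.01, 0.1, size=(n, n))        # weak cross-link gains
np.fill_diagonal(gain, rng.uniform(1.0, 2.0, size=n))
noise = 0.01
target = np.full(n, 2.0)                          # per-user target SINR

def sinr(p):
    interference = gain @ p - np.diag(gain) * p + noise
    return np.diag(gain) * p / interference

p = np.ones(n)
for _ in range(100):
    p = p * target / sinr(p)                      # purely local update

print(np.round(sinr(p), 3))                       # -> [2. 2. 2. 2.]
```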
704.2383 | Stefano Buzzi | Power control and receiver design for energy efficiency in multipath
CDMA channels with bandlimited waveforms | cs.IT math.IT | This paper is focused on the cross-layer design problem of joint multiuser
detection and power control for energy-efficiency optimization in a wireless
data network through a game-theoretic approach. Building on work of Meshkati,
et al., wherein the tools of game-theory are used in order to achieve
energy-efficiency in a simple synchronous code division multiple access system,
system asynchronism, the use of bandlimited chip-pulses, and the multipath
distortion induced by the wireless channel are explicitly incorporated into the
analysis. Several non-cooperative games are proposed wherein users may vary
their transmit power and their uplink receiver in order to maximize their
utility, which is defined here as the ratio of data throughput to transmit
power. In particular, the case in which a linear multiuser detector is adopted
at the receiver is considered first, and then, the more challenging case in
which non-linear decision feedback multiuser detectors are employed is
considered. The proposed games are shown to admit a unique Nash equilibrium
point, while simulation results show the effectiveness of the proposed
solutions, as well as that the use of a decision-feedback multiuser receiver
brings remarkable performance improvements.
| Information Theory, Information Theory | Computer Science |
704.2386 | Pilar Albert | Bounded Pushdown dimension vs Lempel Ziv information density | cs.CC cs.IT math.IT | In this paper we introduce a variant of pushdown dimension called bounded
pushdown (BPD) dimension, that measures the density of information contained in
a sequence, relative to a BPD automata, i.e. a finite state machine equipped
with an extra infinite memory stack, with the additional requirement that every
input symbol only allows a bounded number of stack movements. BPD automata are
a natural real-time restriction of pushdown automata. We show that BPD
dimension is a robust notion by giving an equivalent characterization of BPD
dimension in terms of BPD compressors. We then study the relationships between
BPD compression, and the standard Lempel-Ziv (LZ) compression algorithm, and
show that in contrast to the finite-state compressor case, LZ is not universal
for bounded pushdown compressors in a strong sense: we construct a sequence
that LZ fails to compress significantly, but that is compressed by at least a
factor 2 by a BPD compressor. As a corollary we obtain a strong separation
between finite-state and BPD dimension.
| Computational Complexity, Information Theory, Information Theory | Computer Science |
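A minimal LZ78 parse shows the phrase-counting mechanism behind LZ compression rates (fewer, longer phrases means better compression); this is the standard algorithm referenced above, not the bounded-pushdown compressor the paper constructs:

```python
# Minimal LZ78 parse of a string into distinct phrases. The LZ output
# length is governed by the phrase count c (roughly c * log c bits), so
# repetitive strings compress well. Standard algorithm, not the paper's
# bounded-pushdown compressor.

def lz78_phrases(s):
    dictionary, phrases, current = {""}, [], ""
    for ch in s:
        if current + ch in dictionary:
            current += ch                   # extend the current phrase
        else:
            phrases.append(current + ch)    # emit longest match + new symbol
            dictionary.add(current + ch)
            current = ""
    if current:
        phrases.append(current)             # flush the tail
    return phrases

print(lz78_phrases("abababababab"))   # repetitive -> few, growing phrases
print(lz78_phrases("abcdefghijkl"))   # all-distinct -> many short phrases
```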
704.2448 | Ugo Dal Lago | Light Logics and Optimal Reduction: Completeness and Complexity | cs.LO cs.PL | Typing of lambda-terms in Elementary and Light Affine Logic (EAL, LAL, resp.)
has been studied for two different reasons: on the one hand the evaluation of
typed terms using LAL (EAL, resp.) proof-nets admits a guaranteed polynomial
(elementary, resp.) bound; on the other hand these terms can also be evaluated
by optimal reduction using the abstract version of Lamping's algorithm. The
first reduction is global while the second one is local and asynchronous. We
prove that for LAL (EAL, resp.) typed terms, Lamping's abstract algorithm also
admits a polynomial (elementary, resp.) bound. We also show its soundness and
completeness (for EAL and LAL with type fixpoints), by using a simple geometry
of interaction model (context semantics).
| Logic in Computer Science, Programming Languages | Computer Science |
End of preview.