Dataset schema (each record below consists of six rows, in this order):

doc_id: string, length 2 to 10
revision_depth: string, 5 distinct values
before_revision: string, length 3 to 309k
after_revision: string, length 5 to 309k
edit_actions: list of edit records with fields "type" ("R" = replace, "A" = add, "D" = delete), "before", "after", "start_char_pos", "end_char_pos"
sents_char_pos: sequence of sentence-boundary character offsets into before_revision (the list opens with 0; the final end-of-string boundary is left implicit)
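Each edit_actions entry describes one span edit against before_revision: "R" replaces the text in [start_char_pos, end_char_pos) with "after", "A" inserts "after" at start_char_pos (its "before" is null and start equals end), and "D" deletes the span (its "after" is null). Below is a minimal sketch of rebuilding after_revision from a record; the helper name is illustrative, and it assumes the offsets index into before_revision, so applying actions right to left keeps earlier offsets valid:

```python
from typing import Dict, List

def apply_edit_actions(before: str, actions: List[Dict]) -> str:
    """Splice each action's replacement text into `before`.

    Sorting by descending start_char_pos applies the rightmost edits
    first, so the offsets of the remaining edits stay valid.
    """
    text = before
    for act in sorted(actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = act["start_char_pos"], act["end_char_pos"]
        replacement = act["after"] if act["after"] is not None else ""  # "D" rows carry after = null
        text = text[:start] + replacement + text[end:]
    return text
```

For the first record below (1104.4874), the two "R" actions rewrite "an" to "a" at offset 498 and "in uence" to "influence" at offset 664, reproducing the after_revision shown.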
doc_id: 1104.4874
revision_depth: 1
Exploiting the performance of today's microprocessors requires intimate knowledge of the microarchitecture as well as an awareness of the ever-growing complexity in thread and cache topology. LIKWID is a set of command line utilities that addresses four key problems: Probing the thread and cache topology of a shared-memory node, enforcing thread-core affinity on a program, measuring performance counter metrics, and microbenchmarking for reliable upper performance bounds. Moreover, it includes an mpirun wrapper allowing for portable thread-core affinity in MPI and hybrid MPI/threaded applications. To demonstrate the capabilities of the tool set we show the in uence of thread affinity on performance using the well-known OpenMP STREAM triad benchmark, use hardware counter tools to study the performance of a stencil code, and finally show how to detect bandwidth problems on ccNUMA-based compute nodes.
Exploiting the performance of today's microprocessors requires intimate knowledge of the microarchitecture as well as an awareness of the ever-growing complexity in thread and cache topology. LIKWID is a set of command line utilities that addresses four key problems: Probing the thread and cache topology of a shared-memory node, enforcing thread-core affinity on a program, measuring performance counter metrics, and microbenchmarking for reliable upper performance bounds. Moreover, it includes a mpirun wrapper allowing for portable thread-core affinity in MPI and hybrid MPI/threaded applications. To demonstrate the capabilities of the tool set we show the influence of thread affinity on performance using the well-known OpenMP STREAM triad benchmark, use hardware counter tools to study the performance of a stencil code, and finally show how to detect bandwidth problems on ccNUMA-based compute nodes.
[ { "type": "R", "before": "an", "after": "a", "start_char_pos": 498, "end_char_pos": 500 }, { "type": "R", "before": "in uence", "after": "influence", "start_char_pos": 664, "end_char_pos": 672 } ]
sents_char_pos: [ 0, 191, 475, 603 ]
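The sents_char_pos row appears to hold sentence-boundary offsets into before_revision: the list opens with 0, each later value marks the boundary between consecutive sentences, and the final end-of-string boundary is left implicit (in the record above, the "an" edit at offset 498 falls inside the third sentence's [475, 603) span). A hedged sketch of recovering the segmentation, with an illustrative helper name:

```python
from typing import List

def split_sentences(before: str, sents_char_pos: List[int]) -> List[str]:
    """Cut before_revision at the recorded boundaries.

    Assumes the list starts at 0 and omits the final boundary, which is
    appended here as len(before).
    """
    bounds = list(sents_char_pos) + [len(before)]
    return [before[start:end].strip() for start, end in zip(bounds, bounds[1:])]
```

Applied to the record above, this returns the abstract's four sentences.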
doc_id: 1104.5090
revision_depth: 1
In this paper, we introduce an ASEP-like transport model for bidirectional motion of particles on a multi-lane lattice. The model is motivated by {\em experiments URLanelle motility along a microtubule (MT), where particles are propelled by molecular motors (dynein and kinesin) along the thirteen protofilaments of the MT . In the model, particles can switch directions of motion due to "tug-of-war" events between counteracting motors. Collisions of particles on the same lane can be cleared by switching to adjacent filaments (lane changes). We analyze transport properties of the model with no-flux boundary conditions at the end of a MT ("plus-end" or tip). In particular, we find a nonlinear scaling of the mean {\em number of particles accumulated at the tip (%DIFDELCMD < {\em %%% tip size ) with injection rate and an associated phase transition leading to {\em pulsing states} characterized by periodic filling and emptying of the system . Moreover, we show that the ability of changing protofilaments can affect the transport efficiency. Finally, we show that the particle-direction change rate obtained from experiments is close to optimal in order to achieve efficient motor URLanelle transport in a living cell .
In this paper, we introduce an ASEP-like transport model for bidirectional motion of particles on a multi-lane lattice. The model is motivated by {\em in vivo experiments URLanelle motility along a microtubule (MT), consisting of thirteen protofilaments, where particles are propelled by molecular motors (dynein and kinesin) . In the URLanelles (particles) can switch directions of motion due to "tug-of-war" events between counteracting motors. Collisions of particles on the same lane can be cleared by switching to adjacent protofilaments (lane changes). We analyze transport properties of the model with no-flux boundary conditions at one end of a MT ("plus-end" or tip). We show that the ability of lane changes can affect the transport efficiency and the particle-direction change rate obtained from experiments is close to optimal in order to achieve efficient motor URLanelle transport in a living cell. In particular, we find a nonlinear scaling of the mean {\em tip size (the number of particles accumulated at the tip %DIFDELCMD < {\em %%% ) with injection rate and an associated phase transition leading to {\em pulsing states} characterized by periodic filling and emptying of the system .
[ { "type": "A", "before": null, "after": "in vivo", "start_char_pos": 151, "end_char_pos": 151 }, { "type": "A", "before": null, "after": "consisting of thirteen protofilaments,", "start_char_pos": 209, "end_char_pos": 209 }, { "type": "D", "before": "along the thirteen protofilaments of the MT", "after": null, "start_char_pos": 281, "end_char_pos": 324 }, { "type": "R", "before": "model, particles", "after": "URLanelles (particles)", "start_char_pos": 334, "end_char_pos": 350 }, { "type": "R", "before": "filaments", "after": "protofilaments", "start_char_pos": 521, "end_char_pos": 530 }, { "type": "R", "before": "the", "after": "one", "start_char_pos": 628, "end_char_pos": 631 }, { "type": "A", "before": null, "after": "We show that the ability of lane changes can affect the transport efficiency and the particle-direction change rate obtained from experiments is close to optimal in order to achieve efficient motor URLanelle transport in a living cell.", "start_char_pos": 665, "end_char_pos": 665 }, { "type": "A", "before": null, "after": "tip size", "start_char_pos": 726, "end_char_pos": 726 }, { "type": "A", "before": null, "after": "(the", "start_char_pos": 727, "end_char_pos": 727 }, { "type": "D", "before": "(", "after": null, "start_char_pos": 771, "end_char_pos": 772 }, { "type": "D", "before": "tip size", "after": null, "start_char_pos": 794, "end_char_pos": 802 }, { "type": "D", "before": ". Moreover, we show that the ability of changing protofilaments can affect the transport efficiency. Finally, we show that the particle-direction change rate obtained from experiments is close to optimal in order to achieve efficient motor URLanelle transport in a living cell", "after": null, "start_char_pos": 953, "end_char_pos": 1229 } ]
sents_char_pos: [ 0, 119, 439, 546, 664, 954, 1053 ]
doc_id: 1104.5243
revision_depth: 1
Volume reconstruction by backprojection is the computational bottleneck in many interventional clinical computed tomography (CT) applications. Today vendors in this field replace special purpose hardware accelerators by standard hardware like multicore chips and GPGPUs. This paper presents low-level optimizations for the backprojection algorithm, guided by a thorough performance analysis on four generations of Intel multicore processors (Harpertown, Westmere, Nehalem EX, and Sandy Bridge). We choose the RabbitCT benchmark, a standardized testcase well supported in industry, to ensure transparent and comparable results. Our aim is to provide not only the fastest implementation but also compare to performance models and hardware counter data in order to fully understand the results. We separate the influence of algorithmic optimizations, parallelization, SIMD vectorization, and microarchitectural issues on performance and pinpoint problems with current instruction set extensions on standard CPUs (SSE, AVX) . Finally we compare our results to the best GPGPU implementations available for this open competition benchmark.
Volume reconstruction by backprojection is the computational bottleneck in many interventional clinical computed tomography (CT) applications. Today vendors in this field replace special purpose hardware accelerators by standard hardware like multicore chips and GPGPUs. Medical imaging algorithms are on the verge of employing High Performance Computing (HPC) technology, and are therefore an interesting new candidate for optimization. This paper presents low-level optimizations for the backprojection algorithm, guided by a thorough performance analysis on four generations of Intel multicore processors (Harpertown, Westmere, Westmere EX, and Sandy Bridge). We choose the RabbitCT benchmark, a standardized testcase well supported in industry, to ensure transparent and comparable results. Our aim is to provide not only the fastest possible implementation but also compare to performance models and hardware counter data in order to fully understand the results. We separate the influence of algorithmic optimizations, parallelization, SIMD vectorization, and microarchitectural issues and pinpoint problems with current SIMD instruction set extensions on standard CPUs (SSE, AVX) . The use of assembly language is mandatory for best performance . Finally we compare our results to the best GPGPU implementations available for this open competition benchmark.
[ { "type": "A", "before": null, "after": "Medical imaging algorithms are on the verge of employing High Performance Computing (HPC) technology, and are therefore an interesting new candidate for optimization.", "start_char_pos": 271, "end_char_pos": 271 }, { "type": "R", "before": "Nehalem", "after": "Westmere", "start_char_pos": 465, "end_char_pos": 472 }, { "type": "A", "before": null, "after": "possible", "start_char_pos": 671, "end_char_pos": 671 }, { "type": "D", "before": "on performance", "after": null, "start_char_pos": 917, "end_char_pos": 931 }, { "type": "A", "before": null, "after": "SIMD", "start_char_pos": 967, "end_char_pos": 967 }, { "type": "A", "before": null, "after": ". The use of assembly language is mandatory for best performance", "start_char_pos": 1023, "end_char_pos": 1023 } ]
sents_char_pos: [ 0, 142, 270, 495, 627, 793 ]
doc_id: 1105.0068
revision_depth: 1
In the context of stochastic volatility models, we study representation formulas in terms of expectations for the power series' coefficients associated to the call price-function. As in a recent paper by Antonelli and Scarlatti the expansion is done w.r.t. the correlation between the noises driving the underlying asset price process and the volatility process. We first obtain expressions for the power series' coefficients from the generalized Hull and White formula obtained by Elisa Al\`os. Afterwards, we provide representations turning out from the approach for the sensitivity problem tackled by Malliavin calculus techniques . Finally, we show for several stochastic volatility models the numerical performance of the associated Monte Carlo estimators .
In the context of stochastic volatility models, we study representation formulas in terms of expectations for the power series' coefficients associated to the call price-function. As in a recent paper by Antonelli and Scarlatti the expansion is done w.r.t. the correlation between the noises driving the underlying asset price process and the volatility process. We first obtain expressions for the power series' coefficients from the generalized Hull and White formula obtained by Elisa Al\`os. Afterwards, we provide representations turning out from the approach for the sensitivity problem tackled by Malliavin calculus techniques , and these allow to handle not only vanilla options . Finally, we show the numerical performance of the associated Monte Carlo estimators for several stochastic volatility models .
[ { "type": "A", "before": null, "after": ", and these allow to handle not only vanilla options", "start_char_pos": 634, "end_char_pos": 634 }, { "type": "D", "before": "for several stochastic volatility models", "after": null, "start_char_pos": 654, "end_char_pos": 694 }, { "type": "A", "before": null, "after": "for several stochastic volatility models", "start_char_pos": 762, "end_char_pos": 762 } ]
sents_char_pos: [ 0, 179, 362, 495, 636 ]
doc_id: 1105.0238
revision_depth: 1
This paper studies the valuation of game-type credit default swaps (CDSs) that allow the protection buyer and seller to raise or reduce the respective position once prior to default. This leads to the study of a stochastic game with optimal stopping subject to early termination resulting from a default . Under a structural credit risk model based on spectrally negative Levy processes, we analyze the existence of the Nash equilibrium and derive the associated saddle point. Using the principles of smooth and continuous fit , we determine the buyer's and seller's equilibrium exercise strategies , which are of threshold type . Numerical examples are provided to illustrate the impacts of default risk and contractual features on the fair premium and exercise strategies .
This paper studies game-type credit default swaps that allow the protection buyer and seller to raise or reduce their respective positions once prior to default. This leads to the study of an optimal stopping game subject to early default termination . Under a structural credit risk model based on spectrally negative Levy processes, we apply the principles of smooth and continuous fit to identify the equilibrium exercise strategies for the buyer and the seller. We then rigorously prove the existence of the Nash equilibrium and compute the contract value at equilibrium . Numerical examples are provided to illustrate the impacts of default risk and other contractual features on the players' exercise timing at equilibrium .
[ { "type": "D", "before": "the valuation of", "after": null, "start_char_pos": 19, "end_char_pos": 35 }, { "type": "D", "before": "(CDSs)", "after": null, "start_char_pos": 67, "end_char_pos": 73 }, { "type": "R", "before": "the respective position", "after": "their respective positions", "start_char_pos": 136, "end_char_pos": 159 }, { "type": "R", "before": "a stochastic game with optimal stopping", "after": "an optimal stopping game", "start_char_pos": 210, "end_char_pos": 249 }, { "type": "R", "before": "termination resulting from a default", "after": "default termination", "start_char_pos": 267, "end_char_pos": 303 }, { "type": "R", "before": "analyze the existence of the Nash equilibrium and derive the associated saddle point. Using the", "after": "apply the", "start_char_pos": 391, "end_char_pos": 486 }, { "type": "R", "before": ", we determine the buyer's and seller's", "after": "to identify the", "start_char_pos": 527, "end_char_pos": 566 }, { "type": "R", "before": ", which are of threshold type", "after": "for the buyer and the seller. We then rigorously prove the existence of the Nash equilibrium and compute the contract value at equilibrium", "start_char_pos": 599, "end_char_pos": 628 }, { "type": "A", "before": null, "after": "other", "start_char_pos": 709, "end_char_pos": 709 }, { "type": "R", "before": "fair premium and exercise strategies", "after": "players' exercise timing at equilibrium", "start_char_pos": 738, "end_char_pos": 774 } ]
sents_char_pos: [ 0, 182, 305, 476 ]
doc_id: 1105.0819
revision_depth: 1
In lowest unique bid auctions, N players bid for an item. The winner is whoever places the lowest bid, provided that it is also unique. We derive an analytical expression for the equilibrium distribution of the game as a function of N and study its properties, which are then compared with a large dataset of internet auctions. The empirical collective strategy reproduces the theoretical equilibrium with striking accuracy for small N, while for larger N the quality of the fit deteriorates. As a consequence, the same game exhibits lottery-like and game-of-skill features, depending on the collective size of the bidding pool . Our results question the actual possibility of a large population to adapt and find the optimal strategy when participating in a collective game.
In lowest unique bid auctions, N players bid for an item. The winner is whoever places the lowest bid, provided that it is also unique. We use a grand canonical approach to derive an analytical expression for the equilibrium distribution of strategies. We then study the properties of the solution as a function of the mean number of players, and compare them with a large dataset of internet auctions. The theory agrees with the data with striking accuracy for small population size N, while for larger N a qualitatively different distribution is observed. We interpret this result as the emergence of two different phases, one in which adaptation is feasible and one in which it is not . Our results question the actual possibility of a large population to adapt and find the optimal strategy when participating in a collective game.
[ { "type": "A", "before": null, "after": "use a grand canonical approach to", "start_char_pos": 139, "end_char_pos": 139 }, { "type": "R", "before": "the game", "after": "strategies. We then study the properties of the solution", "start_char_pos": 208, "end_char_pos": 216 }, { "type": "R", "before": "N and study its properties, which are then compared", "after": "the mean number of players, and compare them", "start_char_pos": 234, "end_char_pos": 285 }, { "type": "R", "before": "empirical collective strategy reproduces the theoretical equilibrium", "after": "theory agrees with the data", "start_char_pos": 333, "end_char_pos": 401 }, { "type": "A", "before": null, "after": "population size", "start_char_pos": 435, "end_char_pos": 435 }, { "type": "R", "before": "the quality of the fit deteriorates. As a consequence, the same game exhibits lottery-like and game-of-skill features, depending on the collective size of the bidding pool", "after": "a qualitatively different distribution is observed. We interpret this result as the emergence of two different phases, one in which adaptation is feasible and one in which it is not", "start_char_pos": 458, "end_char_pos": 629 } ]
sents_char_pos: [ 0, 57, 135, 328, 494, 631 ]
doc_id: 1105.0866
revision_depth: 1
In this paper we present a mathematical model of tripartite synapses, where astrocytes mediate information flow from the pre-synaptic to the post-synaptic neuron . The model consists of a pre-synaptic bouton, a post-synaptic dendritic spine head , a synaptic cleft and a perisynaptic astrocyte controlling Ca2+ dynamics inside the synaptic bouton. This in turn controls glutamate release dynamics in the cleft. As a consequence of this, glutamate concentration in the cleft has been modeled, in which glutamate reuptake by astrocytes has also been incorporated. Finally, dendritic spine head dynamics has been modeled. As an application, this model clearly shows synaptic potentiation in the hippocampal region, i.e., astrocyte Ca2+ mediates synaptic plasticity, which is in conformity with the majority of the recent findings .
In this paper we present a biologically detailed mathematical model of tripartite synapses, where astrocytes modulate short-term synaptic plasticity . The model consists of a pre-synaptic bouton, a post-synaptic dendritic spine-head , a synaptic cleft and a peri-synaptic astrocyte controlling Ca2+ dynamics inside the synaptic bouton. This in turn controls glutamate release dynamics in the cleft. As a consequence of this, glutamate concentration in the cleft has been modeled, in which glutamate reuptake by astrocytes has also been incorporated. Finally, dendritic spine-head dynamics has been modeled. As an application, this model clearly shows synaptic potentiation in the hippocampal region, i.e., astrocyte Ca2+ mediates synaptic plasticity, which is in conformity with the majority of the recent findings (Perea Araque, 2007; Henneberger et al., 2010) .
[ { "type": "A", "before": null, "after": "biologically detailed", "start_char_pos": 27, "end_char_pos": 27 }, { "type": "R", "before": "mediate information flow from the pre-synaptic to the post-synaptic neuron", "after": "modulate short-term synaptic plasticity", "start_char_pos": 88, "end_char_pos": 162 }, { "type": "R", "before": "spine head", "after": "spine-head", "start_char_pos": 236, "end_char_pos": 246 }, { "type": "R", "before": "perisynaptic", "after": "peri-synaptic", "start_char_pos": 272, "end_char_pos": 284 }, { "type": "R", "before": "spine head", "after": "spine-head", "start_char_pos": 582, "end_char_pos": 592 }, { "type": "A", "before": null, "after": "(Perea", "start_char_pos": 828, "end_char_pos": 828 }, { "type": "A", "before": null, "after": "Araque, 2007; Henneberger et al., 2010)", "start_char_pos": 829, "end_char_pos": 829 } ]
sents_char_pos: [ 0, 164, 348, 411, 562, 619 ]
doc_id: 1105.1488
revision_depth: 1
The paper studies problem of continuous time optimal portfolio selection for a diffusion model of incomplete market . It is shown that, under some mild conditions, the suboptimal strategies for investors with different performance criterions can be constructed using a limited number of fixed processes (mutual funds), for a market with a larger number of available risky stocks. In other words, a relaxed version of Mutual Fund Theorem is obtained .
The paper studies problem of continuous time optimal portfolio selection for a incom- plete market diffusion model . It is shown that, under some mild conditions, near optimal strategies for investors with different performance criteria can be constructed using a limited number of fixed processes (mutual funds), for a market with a larger number of available risky stocks. In other words, a dimension reduction is achieved via a relaxed version of the Mutual Fund Theorem .
[ { "type": "R", "before": "diffusion model of incomplete market", "after": "incom- plete market diffusion model", "start_char_pos": 79, "end_char_pos": 115 }, { "type": "R", "before": "the suboptimal", "after": "near optimal", "start_char_pos": 164, "end_char_pos": 178 }, { "type": "R", "before": "criterions", "after": "criteria", "start_char_pos": 231, "end_char_pos": 241 }, { "type": "A", "before": null, "after": "dimension reduction is achieved via a", "start_char_pos": 398, "end_char_pos": 398 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 418, "end_char_pos": 418 }, { "type": "D", "before": "is obtained", "after": null, "start_char_pos": 439, "end_char_pos": 450 } ]
sents_char_pos: [ 0, 117, 379 ]
doc_id: 1105.2359
revision_depth: 1
We establish that mass conserving single terminal-linkage networks of chemical reactions admit positive equilibria regardless of the choice of reaction rate constants and network deficiency . Our proof uses a new convex-optimization approach for analyzing the equilibrium behavior of networks of chemical reactions following mass-action kinetics. We provide a fixed-point method to compute these equilibria and report some of our numerical experiments.
We establish that mass conserving single terminal-linkage networks of chemical reactions admit positive steady states regardless of network deficiency and the choice of reaction rate constants . This result holds for closed systems without material exchange across the boundary, as well as for open systems with material exchange at rates that satisfy a simple sufficient and necessary condition . Our proof uses a fixed point of a novel convex optimization formulation to find the steady state behavior of chemical reaction networks that satisfy the law of mass-action kinetics. A fixed point iteration can be used to compute these steady states, and we show that it converges for weakly reversible homogeneous systems. We report the results of our algorithm on numerical experiments.
[ { "type": "R", "before": "equilibria regardless of", "after": "steady states regardless of network deficiency and", "start_char_pos": 104, "end_char_pos": 128 }, { "type": "R", "before": "and network deficiency", "after": ". This result holds for closed systems without material exchange across the boundary, as well as for open systems with material exchange at rates that satisfy a simple sufficient and necessary condition", "start_char_pos": 167, "end_char_pos": 189 }, { "type": "R", "before": "new convex-optimization approach for analyzing the equilibrium behavior of networks of chemical reactions following", "after": "fixed point of a novel convex optimization formulation to find the steady state behavior of chemical reaction networks that satisfy the law of", "start_char_pos": 209, "end_char_pos": 324 }, { "type": "R", "before": "We provide a fixed-point method", "after": "A fixed point iteration can be used", "start_char_pos": 347, "end_char_pos": 378 }, { "type": "R", "before": "equilibria and report some of our", "after": "steady states, and we show that it converges for weakly reversible homogeneous systems. We report the results of our algorithm on", "start_char_pos": 396, "end_char_pos": 429 } ]
sents_char_pos: [ 0, 191, 346 ]
doc_id: 1105.2905
revision_depth: 1
We study the totally asymmetric simple exclusion process (TASEP) on complex networks, as a paradigmatic model for transport subject to excluded volume interactions. Building on TASEP phenomenology on a single segment and borrowing ideas from random networks we investigate the effect of connectivity on transport. In particular, we argue that the presence of disorder in the topology of vertices crucially modifies the transport features of a network: irregular networks develop bimodal density distributions , whereas regular networks are dominated by shocks . The proposed numerical approach of solving for mean-field transport on networks provides a general framework for studying TASEP on large networks, and is expected to generalize to other transport processes.
We study the totally asymmetric simple exclusion process (TASEP) on complex networks, as a paradigmatic model for transport subject to excluded volume interactions. Building on TASEP phenomenology on a single segment and borrowing ideas from random networks we investigate the effect of connectivity on transport. In particular, we argue that the presence of disorder in the topology of vertices crucially modifies the transport features of a network: irregular networks involve homogeneous segments and have a bimodal distribution of edge densities , whereas regular networks are dominated by shocks leading to a unimodal density distribution . The proposed numerical approach of solving for mean-field transport on networks provides a general framework for studying TASEP on large networks, and is expected to generalize to other transport processes.
[ { "type": "R", "before": "develop bimodal density distributions", "after": "involve homogeneous segments and have a bimodal distribution of edge densities", "start_char_pos": 471, "end_char_pos": 508 }, { "type": "A", "before": null, "after": "leading to a unimodal density distribution", "start_char_pos": 560, "end_char_pos": 560 } ]
sents_char_pos: [ 0, 164, 313, 562 ]
doc_id: 1105.2933
revision_depth: 1
Network analysis became a powerful tool giving new insights to the understanding of cellular behavior . Heat shock , the archetype of stress responses, is a well-characterized and simple model of cellular dynamics. S. cerevisiae is an appropriate URLanism, since both its protein-protein interaction network (interactome) and stress response at the gene expression level have been well characterized. However, the analysis of the URLanization of the yeast interactome during stress has not been investigated yet. Changes of interaction-weights of yeast interactome were calculated from the changes of mRNA expression levels . Heat shock induced a significant decrease in both the overlaps and connections of yeast interactome modules. In agreement with this the weighted diameter of the yeast interactome had a 4.9-fold increase in heat shock. Several key proteins of the heat shock response became centers of heat shock-induced local communities, as well as bridges providing a residual connection of modules after heat shock. The observed changes resemble to a stratus-cumulus type transition of the interactome structure, since the unstressed yeast interactome had a globally URLanization, similar to that of stratus clouds, whereas the heat shocked interactome had a URLanization, similar to that of cumulus clouds. Our results showed that heat shock induces a partial disintegration of the URLanization of the yeast interactome. This change may be rather general occurring in many types of stresses. Moreover, other complex systems, such as single proteins, social networks and ecosystems may also decrease their inter-modular links, thus develop more compact modules, and display a partial disintegration of their global structure in the initial phase of crisis. Thus, our work may provide a model of a general, system-level adaptation mechanism to environmental changes.
Network analysis became a powerful tool in recent years . Heat shock is a well-characterized model of cellular dynamics. S. cerevisiae is an appropriate URLanism, since both its protein-protein interaction network (interactome) and stress response at the gene expression level have been well characterized. However, the analysis of the URLanization of the yeast interactome during stress has not been investigated yet. We calculated the changes of the interaction-weights of the yeast interactome from the changes of mRNA expression levels upon heat shock. The major finding of our study is that heat shock induced a significant decrease in both the overlaps and connections of yeast interactome modules. In agreement with this the weighted diameter of the yeast interactome had a 4.9-fold increase in heat shock. Several key proteins of the heat shock response became centers of heat shock-induced local communities, as well as bridges providing a residual connection of modules after heat shock. The observed changes resemble to a " stratus-cumulus " type transition of the interactome structure, since the unstressed yeast interactome had a globally URLanization, similar to that of stratus clouds, whereas the heat shocked interactome had a URLanization, similar to that of cumulus clouds. Our results showed that heat shock induces a partial disintegration of the URLanization of the yeast interactome. This change may be rather general occurring in many types of stresses. Moreover, other complex systems, such as single proteins, social networks and ecosystems may also decrease their inter-modular links, thus develop more compact modules, and display a partial disintegration of their global structure in the initial phase of crisis. Thus, our work may provide a model of a general, system-level adaptation mechanism to environmental changes.
[ { "type": "R", "before": "giving new insights to the understanding of cellular behavior", "after": "in recent years", "start_char_pos": 40, "end_char_pos": 101 }, { "type": "D", "before": ", the archetype of stress responses,", "after": null, "start_char_pos": 115, "end_char_pos": 151 }, { "type": "D", "before": "and simple", "after": null, "start_char_pos": 176, "end_char_pos": 186 }, { "type": "R", "before": "Changes of", "after": "We calculated the changes of the", "start_char_pos": 513, "end_char_pos": 523 }, { "type": "R", "before": "yeast interactome were calculated", "after": "the yeast interactome", "start_char_pos": 547, "end_char_pos": 580 }, { "type": "R", "before": ". Heat", "after": "upon heat shock. The major finding of our study is that heat", "start_char_pos": 624, "end_char_pos": 630 }, { "type": "A", "before": null, "after": "\"", "start_char_pos": 1063, "end_char_pos": 1063 }, { "type": "A", "before": null, "after": "\"", "start_char_pos": 1080, "end_char_pos": 1080 } ]
sents_char_pos: [ 0, 103, 214, 400, 512, 625, 734, 843, 1027, 1321, 1435, 1506, 1770 ]
doc_id: 1105.3337
revision_depth: 1
Understanding of virus URLanization and self-assembly mechanisms helps to get an insight into the protein interactions which render virus infectious, but also to advance new methods in nanotechnology which use capsid self-assembly to produce virus-like nanoparticles. As in abiotic nanostructures, the obstacles along this way are related not only to the nanoscopic size of capsids but also to their unconventional topology and symmetry. In the present work on the example of exceptional families of viruses we : i) show the existence of a completely new type URLanization, resulting in a chiral pentagonal quasicrystalline order of protein positions in a capsid with spherical topology and dodecahedral geometry ; ii) generalize the classical theory of quasicrystals ( QC ) to explain URLanization and demonstrate that a particular non-linear phason strain induces chirality in QC; and iii) establish the relation between chiral order and inhomogeneous buckling strain of the capsid shell .
On the example of exceptional families of viruses we i) show the existence of a completely new type of URLanization in nanoparticles, in which the regions with a chiral pentagonal quasicrystalline order of protein positions are arranged in a structure commensurate with the spherical topology and dodecahedral geometry , ii) generalize the classical theory of quasicrystals ( QCs ) to explain URLanization , and iii) establish the relation between local chiral QC order and nonzero curvature of the dodecahedral capsid faces .
[ { "type": "R", "before": "Understanding of virus URLanization and self-assembly mechanisms helps to get an insight into the protein interactions which render virus infectious, but also to advance new methods in nanotechnology which use capsid self-assembly to produce virus-like nanoparticles. As in abiotic nanostructures, the obstacles along this way are related not only to the nanoscopic size of capsids but also to their unconventional topology and symmetry. In the present work on the", "after": "On the", "start_char_pos": 0, "end_char_pos": 464 }, { "type": "D", "before": ":", "after": null, "start_char_pos": 511, "end_char_pos": 512 }, { "type": "R", "before": "URLanization, resulting in", "after": "of URLanization in nanoparticles, in which the regions with", "start_char_pos": 560, "end_char_pos": 586 }, { "type": "R", "before": "in a capsid with", "after": "are arranged in a structure commensurate with the", "start_char_pos": 651, "end_char_pos": 667 }, { "type": "R", "before": ";", "after": ",", "start_char_pos": 713, "end_char_pos": 714 }, { "type": "R", "before": "QC", "after": "QCs", "start_char_pos": 770, "end_char_pos": 772 }, { "type": "R", "before": "and demonstrate that a particular non-linear phason strain induces chirality in QC; and", "after": ", and", "start_char_pos": 799, "end_char_pos": 886 }, { "type": "R", "before": "chiral order and inhomogeneous buckling strain of the capsid shell", "after": "local chiral QC order and nonzero curvature of the dodecahedral capsid faces", "start_char_pos": 923, "end_char_pos": 989 } ]
sents_char_pos: [ 0, 267, 437, 714, 882 ]
doc_id: 1105.4341
revision_depth: 1
In this paper, we provide a new and effective approach for studying super-exponential solution of a retrial supermarket model with Poisson arrivals, exponential service times and exponential retrial timesand with two different probing-server numbers. We describe the retrial supermarket model as a system of differential equations by means of density-dependent jump Markov processes, and obtain an iterative algorithm for computing the fixed point of the system of differential equations. Based on the fixed point, we analyze the expected sojourn time that a tagged arriving customer spends in this system, and use numerical examples to indicate different influence of the two probing-server numbers on system performance including the fixed point and the expected sojourn time. Furthermore, we analyze exponential convergence of the current location of the retrial supermarket model to the fixed point, and apply the Kurtz Theorem to study density-dependent jump Markov process given in the retrial supermarket model, which leads to a Lipschitz condition under which the fraction measure of the retrial supermarket model weakly converges to the system of differential equations . This paper arrives at a new understanding of how the workload probing can help in load balancing jobs in retrial supermarket models .
When decomposing the total orbit into N sub-orbits (or simply orbits) related to each of N servers and through comparing the numbers of customers in these orbits, we introduce a retrial supermarket model of N identical servers, where two probing-server choice numbers are respectively designed for dynamically allocating each primary arrival and each retrial arrival into these orbits when the chosen servers are all busy. Note that the designed purpose of the two choice numbers can effectively improve performance measures of this retrial supermarket model. This paper analyzes a simple and basic retrial supermarket model of N identical servers, that is, Poisson arrivals, exponential service and retrial times. To this end, we first provide a detailed probability computation to set up an infinite-dimensional system of differential equations (or mean-field equations) satisfied by the expected fraction vector. Then, as N goes to infinity, we apply the operator semigroup to obtaining the mean-field limit (or chaos of propagation) for the sequence of Markov processes which express the state of this retrial supermarket model . Specifically, some simple and basic conditions for the mean-field limit as well as for the Lipschitz condition are established through the first two moments of the queue length in any orbit. Finally, we show that the fixed point satisfies a system of nonlinear equations which is an interesting networking generalization of the tail equations given in the M/M/1 retrial queue, and also use the fixed point to give performance analysis of this retrial supermarket model through numerical computation .
[ { "type": "R", "before": "In this paper, we provide a new and effective approach for studying super-exponential solution of a", "after": "When decomposing the total orbit into N sub-orbits (or simply orbits) related to each of N servers and through comparing the numbers of customers in these orbits, we introduce a retrial supermarket model of N identical servers, where two probing-server choice numbers are respectively designed for dynamically allocating each primary arrival and each retrial arrival into these orbits when the chosen servers are all busy. Note that the designed purpose of the two choice numbers can effectively improve performance measures of this retrial supermarket model. This paper analyzes a simple and basic", "start_char_pos": 0, "end_char_pos": 99 }, { "type": "R", "before": "with", "after": "of N identical servers, that is,", "start_char_pos": 126, "end_char_pos": 130 }, { "type": "R", "before": "times and exponential retrial timesand with two different probing-server numbers. We describe the retrial supermarket model as a", "after": "and retrial times. To this end, we first provide a detailed probability computation to set up an infinite-dimensional", "start_char_pos": 169, "end_char_pos": 297 }, { "type": "R", "before": "by means of density-dependent jump Markov processes, and obtain an iterative algorithm for computing the fixed point of the system of differential equations. Based on the fixed point, we analyze the expected sojourn time that a tagged arriving customer spends in this system, and use numerical examples to indicate different influence of the two probing-server numbers on system performance including the fixed point and the expected sojourn time. Furthermore, we analyze exponential convergence of", "after": "(or mean-field equations) satisfied by", "start_char_pos": 331, "end_char_pos": 829 }, { "type": "R", "before": "current location of the", "after": "expected fraction vector. Then, as N goes to infinity, we apply the operator semigroup to obtaining the mean-field limit (or chaos of propagation) for the sequence of Markov processes which express the state of this", "start_char_pos": 834, "end_char_pos": 857 }, { "type": "R", "before": "to the fixed point, and apply the Kurtz Theorem to study density-dependent jump Markov process given in the retrial supermarket model, which leads to a Lipschitz condition under which the fraction measure of the retrial supermarket model weakly converges to the system of differential equations . This paper arrives at a new understanding of how the workload probing can help in load balancing jobs in retrial supermarket models", "after": ". Specifically, some simple and basic conditions for the mean-field limit as well as for the Lipschitz condition are established through the first two moments of the queue length in any orbit. Finally, we show that the fixed point satisfies a system of nonlinear equations which is an interesting networking generalization of the tail equations given in the M/M/1 retrial queue, and also use the fixed point to give performance analysis of this retrial supermarket model through numerical computation", "start_char_pos": 884, "end_char_pos": 1312 } ]
sents_char_pos: [ 0, 250, 488, 778, 1180 ]
doc_id: 1106.0123
revision_depth: 1
In this work, we have presented a simple analytical approximation scheme for generic non-linear FBSDEs. By treating the interested systems as the linear decoupled FBSDE perturbed with a non-linear generator , we have shown that it is possible to carry out recursive approximation to an arbitrarily higher order of expansion . We have also provided two concrete examples to demonstrate how it works and shown its accuracy relative to the results directly obtained from numerical techniques, such as PDE and Monte Carlo simulation. It was also shown that the same technique can be applied even when the forward and backward components are fully coupled. The method presented in this paper may be useful for various important problems which have prevented from being studied analytically so far.
In this work, we have presented a simple analytical approximation scheme for the generic non-linear FBSDEs. By treating the interested system as the linear decoupled FBSDE perturbed with non-linear generator and feedback terms , we have shown that it is possible to carry out recursive approximation to an arbitrarily higher order , where the required calculations in each order are equivalent to those for the standard European contingent claims . We have also applied the perturbative method to the PDE framework following the so-called Four Step Scheme. The method was found to render the original non-linear PDE into a series of standard parabolic linear PDEs. Due to the equivalence of the two approaches, it is also possible to derive approximate analytic solution for the non-linear PDE by applying the asymptotic expansion to the corresponding probabilistic model. Two simple examples were provided to demonstrate how the perturbation works and show its accuracy relative to the known numerical techniques. The method presented in this paper may be useful for various important problems which have prevented analytical treatment so far.
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 77, "end_char_pos": 77 }, { "type": "R", "before": "systems", "after": "system", "start_char_pos": 132, "end_char_pos": 139 }, { "type": "D", "before": "a", "after": null, "start_char_pos": 185, "end_char_pos": 186 }, { "type": "A", "before": null, "after": "and feedback terms", "start_char_pos": 208, "end_char_pos": 208 }, { "type": "R", "before": "of expansion", "after": ", where the required calculations in each order are equivalent to those for the standard European contingent claims", "start_char_pos": 313, "end_char_pos": 325 }, { "type": "R", "before": "provided two concrete examples", "after": "applied the perturbative method to the PDE framework following the so-called Four Step Scheme. The method was found to render the original non-linear PDE into a series of standard parabolic linear PDEs. Due to the equivalence of the two approaches, it is also possible to derive approximate analytic solution for the non-linear PDE by applying the asymptotic expansion to the corresponding probabilistic model. Two simple examples were provided", "start_char_pos": 341, "end_char_pos": 371 }, { "type": "R", "before": "it works and shown", "after": "the perturbation works and show", "start_char_pos": 391, "end_char_pos": 409 }, { "type": "R", "before": "results directly obtained from numerical techniques, such as PDE and Monte Carlo simulation. It was also shown that the same technique can be applied even when the forward and backward components are fully coupled.", "after": "known numerical techniques.", "start_char_pos": 439, "end_char_pos": 653 }, { "type": "R", "before": "from being studied analytically", "after": "analytical treatment", "start_char_pos": 755, "end_char_pos": 786 } ]
sents_char_pos: [ 0, 104, 327, 531, 653 ]
doc_id: 1106.0123
revision_depth: 2
In this work, we have presented a simple analytical approximation scheme for the generic non-linear FBSDEs. By treating the interested system as the linear decoupled FBSDE perturbed with non-linear generator and feedback terms, we have shown that it is possible to carry out recursive approximation to an arbitrarily higher order, where the required calculations in each order are equivalent to those for the standard European contingent claims. We have also applied the perturbative method to the PDE framework following the so-called Four Step Scheme. The method was found to render the original non-linear PDE into a series of standard parabolic linear PDEs. Due to the equivalence of the two approaches, it is also possible to derive approximate analytic solution for the non-linear PDE by applying the asymptotic expansion to the corresponding probabilistic model. Two simple examples were provided to demonstrate how the perturbation works and show its accuracy relative to the known numerical techniques. The method presented in this paper may be useful for various important problems which have prevented analytical treatment so far.
In this work, we have presented a simple analytical approximation scheme for generic non-linear FBSDEs. By treating the interested system as the linear decoupled FBSDE perturbed with non-linear generator and feedback terms, we have shown that it is possible to carry out a recursive approximation to an arbitrarily higher order, where the required calculations in each order are equivalent to those for standard European contingent claims. We have also applied the perturbative method to the PDE framework following the so-called Four Step Scheme. The method is found to render the original non-linear PDE into a series of standard parabolic linear PDEs. Due to the equivalence of the two approaches, it is also possible to derive approximate analytic solution for the non-linear PDE by applying the asymptotic expansion to the corresponding probabilistic model. Two simple examples are provided to demonstrate how the perturbation works and show its accuracy relative to known numerical techniques. The method presented in this paper may be useful for various important problems which have eluded analytical treatment so far.
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 77, "end_char_pos": 80 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 275, "end_char_pos": 275 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 406, "end_char_pos": 409 }, { "type": "R", "before": "was", "after": "is", "start_char_pos": 566, "end_char_pos": 569 }, { "type": "R", "before": "were", "after": "are", "start_char_pos": 891, "end_char_pos": 895 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 981, "end_char_pos": 984 }, { "type": "R", "before": "prevented", "after": "eluded", "start_char_pos": 1104, "end_char_pos": 1113 } ]
sents_char_pos: [ 0, 107, 446, 554, 662, 870, 1012 ]
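doc_id values repeat across rows: 1106.0123 appears at revision_depth 1 and 2, and its depth-2 before_revision matches the depth-1 after_revision, so rows sharing a doc_id chain into a revision history. A small sketch of grouping them, assuming the rows have been parsed into dicts keyed by the schema's field names (helper name illustrative):

```python
from collections import defaultdict
from typing import Dict, List

def chain_revisions(records: List[Dict]) -> Dict[str, List[Dict]]:
    """Group rows by doc_id and order each group by revision_depth."""
    chains: Dict[str, List[Dict]] = defaultdict(list)
    for rec in records:
        chains[rec["doc_id"]].append(rec)
    for group in chains.values():
        group.sort(key=lambda r: int(r["revision_depth"]))
    return dict(chains)
```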
doc_id: 1106.0466
revision_depth: 1
Based on the phosphorelay kinetics operative within BvgAS two component system we propose a mathematical model for signal transduction and differential gene regulation in {\it Bordetella pertussis}. {\it To understand the system behavior under elevated temperature, the developed model has been studied in two different ways. First, a quasi-steady state analysis has been carried out for the two component system, comprising of sensor BvgS and response regulator BvgA. The quasi-steady state analysis reveals a positive feedback and temperature induced molecular switch, leading to graded response and amplification in the output of BvgA. Accumulation of large pool of BvgA thus results into differential regulation of the downstream genes, including the gene encoding toxin. Furthermore, numerical integration of the full network kinetics has been carried out to explore time dependent behavior of different system components, that qualitatively capture the essential features of experimental results performed {\it in vivo} .
Based on the phosphorelay kinetics operative within BvgAS two component system we propose a mathematical framework for signal transduction and gene regulation of phenotypic phases in {\it Bordetella pertussis}. The proposed model identifies a novel mechanism of transcriptional interference between two promoters present in the{\it bvg locus. To understand the system behavior under elevated temperature, the developed model has been studied in two different ways. First, a quasi-steady state analysis has been carried out for the two component system, comprising of sensor BvgS and response regulator BvgA. The quasi-steady state analysis reveals temperature induced sharp molecular switch, leading to amplification in the output of BvgA. Accumulation of a large pool of BvgA thus results into differential regulation of the downstream genes, including the gene encoding toxin. Numerical integration of the full network kinetics is then carried out to explore time dependent behavior of different system components, that qualitatively capture the essential features of experimental results performed {\it in vivo} . Furthermore, the developed model has been utilized to study mutants that are impaired in their ability to phosphorylate the transcription factor, BvgA, of the signaling network .
[ { "type": "R", "before": "model", "after": "framework", "start_char_pos": 105, "end_char_pos": 110 }, { "type": "R", "before": "differential gene regulation", "after": "gene regulation of phenotypic phases", "start_char_pos": 139, "end_char_pos": 167 }, { "type": "A", "before": null, "after": "The proposed model identifies a novel mechanism of transcriptional interference between two promoters present in the", "start_char_pos": 199, "end_char_pos": 199 }, { "type": "A", "before": null, "after": "bvg", "start_char_pos": 204, "end_char_pos": 204 }, { "type": "A", "before": null, "after": "locus.", "start_char_pos": 205, "end_char_pos": 205 }, { "type": "R", "before": "a positive feedback and temperature induced", "after": "temperature induced sharp", "start_char_pos": 511, "end_char_pos": 554 }, { "type": "D", "before": "graded response and", "after": null, "start_char_pos": 584, "end_char_pos": 603 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 657, "end_char_pos": 657 }, { "type": "R", "before": "Furthermore, numerical", "after": "Numerical", "start_char_pos": 779, "end_char_pos": 801 }, { "type": "R", "before": "has been", "after": "is then", "start_char_pos": 843, "end_char_pos": 851 }, { "type": "A", "before": null, "after": ". Furthermore, the developed model has been utilized to study mutants that are impaired in their ability to phosphorylate the transcription factor, BvgA, of the signaling network", "start_char_pos": 1029, "end_char_pos": 1029 } ]
sents_char_pos: [ 0, 198, 327, 470, 640, 778 ]
doc_id: 1106.0466
revision_depth: 2
Based on the phosphorelay kinetics operative within BvgAS two component system we propose a mathematical framework for signal transduction and gene regulation of phenotypic phases in %DIFDELCMD < {\it %%% Bordetella pertussis . The proposed model identifies a novel mechanism of transcriptional interference between two promoters present in the %DIFDELCMD < {\it %%% bvg locus. To understand the system behavior under elevated temperature, the developed model has been studied in two different ways. First, a quasi-steady state analysis has been carried out for the two component system, comprising of sensor BvgS and response regulator BvgA. The quasi-steady state analysis reveals temperature induced sharp molecular switch, leading to amplification in the output of BvgA. Accumulation of a large pool of BvgA thus results into differential regulation of the downstream genes, including the gene encoding toxin. Numerical integration of the full network kinetics is then carried out to explore time dependent behavior of different system components, that qualitatively capture the essential features of experimental results performed %DIFDELCMD < {\it %%% in vivo . Furthermore, the developed model has been utilized to study mutants that are impaired in their ability to phosphorylate the transcription factor, BvgA, of the signaling network.
Based on the phosphorelay kinetics operative within BvgAS two component system we propose a mathematical framework for signal transduction and gene regulation of phenotypic phases in %DIFDELCMD < {\it %%% Bordetella pertussis . The proposed model identifies a novel mechanism of transcriptional interference between two promoters present in the %DIFDELCMD < {\it %%% bvg locus. To understand the system behavior under elevated temperature, the developed model has been studied in two different ways. First, a quasi-steady state analysis has been carried out for the two component system, comprising of sensor BvgS and response regulator BvgA. The quasi-steady state analysis reveals temperature induced sharp molecular switch, leading to amplification in the output of BvgA. Accumulation of a large pool of BvgA thus results into differential regulation of the downstream genes, including the gene encoding toxin. Numerical integration of the full network kinetics is then carried out to explore time dependent behavior of different system components, that qualitatively capture the essential features of experimental results performed %DIFDELCMD < {\it %%% in vivo . Furthermore, the developed model has been utilized to study mutants that are impaired in their ability to phosphorylate the transcription factor, BvgA, of the signaling network.
[ { "type": "R", "before": "Bordetella pertussis", "after": "Bordetella pertussis", "start_char_pos": 205, "end_char_pos": 225 }, { "type": "R", "before": "bvg", "after": "bvg", "start_char_pos": 367, "end_char_pos": 370 }, { "type": "R", "before": "in vivo", "after": "in vivo", "start_char_pos": 1158, "end_char_pos": 1165 } ]
sents_char_pos: [ 0, 227, 377, 499, 642, 774, 913, 1167 ]
doc_id: 1106.0866
revision_depth: 1
The LIBOR market model is very popular for pricing interest rate derivatives, but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term is growing exponentially fast (as a function of the tenor length). In this work, we consider a L\'evy-driven LIBOR model and aim at developing accurate and efficient log-L\'evy approximations for the dynamics of the rates. The approximations are based on truncation of the drift term and Picard approximation of suitable processes. Numerical experiments for FRAs, caps and swaptions show that the approximations perform very well. In addition, we also consider the log-%DIFDELCMD < \lev %%% approximation of annuities, which offers good approximations for high volatility regimes.
The LIBOR market model is very popular for pricing interest rate derivatives, but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term is growing exponentially fast (as a function of the tenor length). In this work, we consider a L\'evy-driven LIBOR model and aim at developing accurate and efficient log-L\'evy approximations for the dynamics of the rates. The approximations are based on truncation of the drift term and Picard approximation of suitable processes. Numerical experiments for FRAs, caps , swaptions and sticky ratchet caps show that the approximations perform very well. In addition, we also consider the %DIFDELCMD < \lev %%% log-L\'evy approximation of annuities, which offers good approximations for high volatility regimes.
[ { "type": "R", "before": "and swaptions", "after": ", swaptions and sticky ratchet caps", "start_char_pos": 579, "end_char_pos": 592 }, { "type": "D", "before": "log-", "after": null, "start_char_pos": 675, "end_char_pos": 679 }, { "type": "A", "before": null, "after": "log-L\\'evy", "start_char_pos": 701, "end_char_pos": 701 } ]
sents_char_pos: [ 0, 116, 276, 432, 541, 640 ]
doc_id: 1106.2311
revision_depth: 1
Negative and positive transcriptional feedback loops are present in natural and synthetic genetic oscillators. A single gene with negative transcriptional feedback needs a time delay and nonlinearity in the transmission of the feedback signal in order to produce biochemical rhythms. However, no equivalent requirements have been found for producing oscillations in a single gene with positive transcriptional feedback . To demonstrate that this single-gene network can also easily produce rhythms , we examine a model comprised of two well-differentiated parts. The first is a positive feedback created by a protein that binds to the promoter of its own gene and activates the transcription. The second is a negative interaction in which a repressor molecule prevents this protein from binding to its promoter. A stochastic study shows that the system is robust to noise. A deterministic study identifies that the dynamics of the oscillator are mainly driven by two types of biomolecules: the protein, and the complex formed by the repressor and this protein. In conclusion , a simple and usual negative interaction, such as degradation, sequestration or inhibition, acting on the positive transcriptional feedback of a single gene is a sufficient condition to produce reliable oscillations. Surprisingly, one gene is enough and the positive transcriptional feedback signal does not need to activate a second repressor gene . The model needs neither cooperative binding reactions nor the formation of protein multimers. Therefore, our findings could help to clarify the design principles of cellular clocks and constitute a new efficient tool for engineering synthetic genetic oscillators.
Negative and positive transcriptional feedback loops are present in natural and synthetic genetic oscillators. A single gene with negative transcriptional feedback needs a time delay and sufficiently strong nonlinearity in the transmission of the feedback signal in order to produce biochemical rhythms. A single gene with only positive transcriptional feedback does not produce oscillations. Here, we demonstrate that this single-gene network in conjunction with a simple negative interaction can also easily produce rhythms . We examine a model comprised of two well-differentiated parts. The first is a positive feedback created by a protein that binds to the promoter of its own gene and activates the transcription. The second is a negative interaction in which a repressor molecule prevents this protein from binding to its promoter. A stochastic study shows that the system is robust to noise. A deterministic study identifies that the dynamics of the oscillator are mainly driven by two types of biomolecules: the protein, and the complex formed by the repressor and this protein. The main conclusion of this paper is that a simple and usual negative interaction, such as degradation, sequestration or inhibition, acting on the positive transcriptional feedback of a single gene is a sufficient condition to produce reliable oscillations. One gene is enough and the positive transcriptional feedback signal does not need to activate a second repressor gene . This means that at the genetic level an explicit negative feedback loop is not necessary . The model needs neither cooperative binding reactions nor the formation of protein multimers. Therefore, our findings could help to clarify the design principles of cellular clocks and constitute a new efficient tool for engineering synthetic genetic oscillators.
[ { "type": "A", "before": null, "after": "sufficiently strong", "start_char_pos": 187, "end_char_pos": 187 }, { "type": "R", "before": "However, no equivalent requirements have been found for producing oscillations in a", "after": "A", "start_char_pos": 285, "end_char_pos": 368 }, { "type": "A", "before": null, "after": "only", "start_char_pos": 386, "end_char_pos": 386 }, { "type": "R", "before": ". To", "after": "does not produce oscillations. Here, we", "start_char_pos": 421, "end_char_pos": 425 }, { "type": "A", "before": null, "after": "in conjunction with a simple negative interaction", "start_char_pos": 468, "end_char_pos": 468 }, { "type": "R", "before": ", we", "after": ". We", "start_char_pos": 501, "end_char_pos": 505 }, { "type": "R", "before": "In conclusion ,", "after": "The main conclusion of this paper is that", "start_char_pos": 1064, "end_char_pos": 1079 }, { "type": "R", "before": "Surprisingly, one", "after": "One", "start_char_pos": 1296, "end_char_pos": 1313 }, { "type": "A", "before": null, "after": ". This means that at the genetic level an explicit negative feedback loop is not necessary", "start_char_pos": 1428, "end_char_pos": 1428 } ]
[ 0, 110, 284, 422, 565, 695, 814, 875, 1063, 1295, 1430, 1524 ]
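A minimal sketch of the kind of single-gene circuit described in the 1106.2311 record above: a protein P activates its own transcription without cooperativity, while a repressor R sequesters it into an inactive complex C. The equations and all parameter values here are hypothetical choices for illustration, not the paper's model, and are not claimed to oscillate for these particular numbers.

```python
# Illustrative single-gene positive feedback with a sequestering repressor.
# All rate constants below are made-up placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta=50.0, K=10.0, beta0=0.1, gP=1.0,
        aR=20.0, gR=0.5, kon=5.0, koff=0.1, gC=2.0):
    P, R, C = y
    dP = beta0 + beta * P / (K + P) - gP * P - kon * P * R + koff * C  # n = 1: no cooperativity
    dR = aR - gR * R - kon * P * R + koff * C                          # repressor production/turnover
    dC = kon * P * R - (koff + gC) * C                                 # protein-repressor complex
    return [dP, dR, dC]

sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 100.0, 2000)
P, R, C = sol.sol(t)
print("protein range over the run:", P.min(), P.max())
```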
1106.2342
1
We introduce a class of multivariate Markov processes that we call `Archimedean survival processes' (ASPs) and present some of their properties. An ASP is defined over a finite time horizon and, a priori, its terminal value has an \ell_1-norm symmetric distribution and an Archimedean survival copula. Indeed, there is a bijection between the class of Archimedean copulas and the class of ASPs. The finite-dimensional distributions of an ASP are of multivariate Liouville-type. The one-dimensional marginal processes are so-called gamma random bridges (that is, each marginal is the product of a gamma bridge with an independent positive random variable). These marginal processes are increasing and, in general, not independent, but they are identical in law. The law of an n-dimensional ASP is equivalent to that of an n-dimensional gamma process, and we provide details of the associated change of measure. The law of an n-dimensional ASP is identical to the law of a positive random variable multiplied by the Hadamard product of an n-dimensional Dirichlet random variable and a vector of n independent gamma bridges. We generalise ASPs to a family of stochastic processes that we call `Liouville processes.' A Liouville process is also a multivariate Markov process whose finite-dimensional distributions are multivariate Liouville. However, in general, the terminal value of a Liouville process does not have an Archimedean survival copula, and its one-dimensional marginal processes are not identical in law.
Archimedean copulas are popular in the world of multivariate modelling as a result of their breadth, tractability, and flexibility. A. J. McNeil and J. Ne\v{s}lehov\'a (2009) showed that the class of Archimedean copulas coincides with the class of multivariate \ell_1-norm symmetric distributions. Building upon their results, we introduce a class of multivariate Markov processes that we call `Archimedean survival processes' (ASPs). The terminal value of an ASP has an Archimedean survival copula, and there exists a bijection from the class of ASPs to the class of Archimedean copulas. We provide various characterisations of ASPs, and a generalisation.
[ { "type": "R", "before": "We", "after": "Archimedean copulas are popular in the world of multivariate modelling as a result of their breadth, tractability, and flexibility. A. J. McNeil and J. Ne\\v{s", "start_char_pos": 0, "end_char_pos": 2 }, { "type": "D", "before": "and present some of their properties. An ASP is defined over a finite time horizon and,", "after": null, "start_char_pos": 279, "end_char_pos": 366 }, { "type": "D", "before": "a priori", "after": null, "start_char_pos": 366, "end_char_pos": 374 }, { "type": "R", "before": ", its terminal value has an \\ell_1-norm symmetric distribution and an", "after": ". The terminal value of an ASP has an", "start_char_pos": 375, "end_char_pos": 444 }, { "type": "R", "before": ". Indeed, there is a bijection between", "after": ", and there exists a bijection from", "start_char_pos": 473, "end_char_pos": 511 }, { "type": "R", "before": "Archimedean copulas and the class of ASPs . The finite-dimensional distributions of an ASP are of multivariate Liouville-type. The one-dimensional marginal processes are so-called gamma random bridges (that is, each marginal is the product of gamma bridge with an independent positive random variable). These marginal processes are increasing and, in general, not independent, but they are identical in law. The law of an n-dimensional ASP is equivalent to that of an n-dimensional gamma process, and we provide details of the associated change of measure. The law an n-dimensional ASP is identical to the law of a positive random variable multiplied by the Hadamard product of an n-dimensional Dirichlet random variable and a vector of n independent gamma bridges. We generalise ASPs to a family of stochastic processes that we call `Liouville processes.' A Liouville process is also a multivariate Markov process whose finite-dimensional distributions are multivariate Liouville. However, in general, the terminal value of a Liouville process does not have an Archimedean survival copula, and its one-dimensional marginal processes are not identical in law", "after": "ASPs to the class of Archimedean copulas. We provide various characterisations of ASPs, and a generalisation", "start_char_pos": 525, "end_char_pos": 1683 } ]
[ 0, 140, 316, 474, 651, 827, 932, 1081, 1290, 1506 ]
1106.2342
2
Archimedean copulas are popular in the world of multivariate modelling as a result of their breadth, tractability, and flexibility. A. J. McNeil and J. Neslehov\'a (2009) showed that the class of Archimedean copulas coincides with the class of multivariate \ell_1-norm symmetric distributions. Building upon their results, we introduce a class of multivariate Markov processes that we call `Archimedean survival processes' (ASPs). The terminal value of an ASP has an Archimedean survival copula , and there exists a bijection from the class of ASPs to the class of Archimedean copulas. We provide various characterisations of ASPs, and a generalisation.
Archimedean copulas are popular in the world of multivariate modelling as a result of their breadth, tractability, and flexibility. A. J. McNeil and J. Neslehov\'a (2009) showed that the class of Archimedean copulas coincides with the class of multivariate \ell_1-norm symmetric distributions. Building upon their results, we introduce a class of multivariate Markov processes that we call `Archimedean survival processes' (ASPs). An ASP is defined over a finite time interval, is equivalent in law to a multivariate gamma process, and its terminal value has an Archimedean survival copula . There exists a bijection from the class of ASPs to the class of Archimedean copulas. We provide various characterisations of ASPs, and a generalisation.
[ { "type": "R", "before": "The terminal value of an ASP", "after": "An ASP is defined over a finite time interval, is equivalent in law to a multivariate gamma process, and its terminal value", "start_char_pos": 431, "end_char_pos": 459 }, { "type": "R", "before": ", and there", "after": ". There", "start_char_pos": 495, "end_char_pos": 506 } ]
[ 0, 131, 293, 430, 585 ]
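The McNeil-Ne\v{s}lehov\'a correspondence cited in the 1106.2342 records above underlies the classical frailty construction of Archimedean copulas. A minimal sampler for the Clayton family, whose generator \psi(t) = (1+t)^{-1/\theta} is the Laplace transform of a Gamma(1/\theta) variable:

```python
# Marshall-Olkin / frailty sampler for the Clayton copula (theta > 0):
# draw V ~ Gamma(1/theta), E_i ~ Exp(1) i.i.d., set U_i = (1 + E_i/V)^(-1/theta).
import numpy as np

def sample_clayton(n_samples, dim, theta, seed=None):
    rng = np.random.default_rng(seed)
    V = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n_samples, 1))
    E = rng.exponential(size=(n_samples, dim))
    return (1.0 + E / V) ** (-1.0 / theta)

U = sample_clayton(10_000, dim=3, theta=2.0, seed=42)
print(U.mean(axis=0))  # each margin is Uniform(0,1), so means should be near 0.5
```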
1106.2781
1
This paper deals with optimal dividend payment problem in the general setup of a piecewise-deterministic compound Poisson risk model . The objective of an insurance business under consideration is to maximize the expected discounted dividend payout up to the time of ruin. Both restricted and unrestricted payment schemes are considered . In the case of restricted payment scheme, the value function is shown to be a classical solution of the corresponding Hamilton-Jacobi-Bellman equation, which , in turn , leads to an optimal restricted dividend payment policy . When the claims are exponentially distributed, the value function and an optimal dividend payment policy of the threshold type are determined in closed forms under certain conditions. The case of unrestricted payment scheme gives rise to a singular stochastic control problem. By solving the associated integro-differential quasi-variational inequality, the value function and an optimal barrier strategy are determined explicitly in exponential claim size distributions. Two examples are demonstrated and compared to illustrate the main results .
This paper considers the optimal dividend payment problem in piecewise-deterministic compound Poisson risk models . The objective is to maximize the expected discounted dividend payout up to the time of ruin. We provide a comparative study in this general framework of both restricted and unrestricted payment schemes , which were only previously treated separately in certain special cases of risk models in the literature . In the case of restricted payment scheme, the value function is shown to be a classical solution of the corresponding HJB equation, which in turn leads to an optimal restricted payment policy known as the threshold strategy. In the case of unrestricted payment scheme , by solving the associated integro-differential quasi-variational inequality, we obtain the value function as well as an optimal unrestricted dividend payment scheme known as the barrier strategy. When claim sizes are exponentially distributed, we provide easily verifiable conditions under which the threshold and barrier strategies are optimal restricted and unrestricted dividend payment policies, respectively. The main results are illustrated with several examples, including a new example concerning regressive growth rates .
[ { "type": "R", "before": "deals with", "after": "considers the", "start_char_pos": 11, "end_char_pos": 21 }, { "type": "D", "before": "the general setup of a", "after": null, "start_char_pos": 58, "end_char_pos": 80 }, { "type": "R", "before": "model", "after": "models", "start_char_pos": 127, "end_char_pos": 132 }, { "type": "D", "before": "of an insurance business under consideration", "after": null, "start_char_pos": 149, "end_char_pos": 193 }, { "type": "R", "before": "Both", "after": "We provide a comparative study in this general framework of both", "start_char_pos": 273, "end_char_pos": 277 }, { "type": "R", "before": "are considered", "after": ", which were only previously treated separately in certain special cases of risk models in the literature", "start_char_pos": 322, "end_char_pos": 336 }, { "type": "R", "before": "Hamilton-Jacobi-Bellman", "after": "HJB", "start_char_pos": 457, "end_char_pos": 480 }, { "type": "R", "before": ", in turn ,", "after": "in turn", "start_char_pos": 497, "end_char_pos": 508 }, { "type": "R", "before": "dividend payment policy . When the claims are exponentially distributed, the value function and an optimal dividend payment policy of the threshold type are determined in closed forms under certain conditions. The", "after": "payment policy known as the threshold strategy. In the", "start_char_pos": 540, "end_char_pos": 753 }, { "type": "R", "before": "gives rise to a singular stochastic control problem. By", "after": ", by", "start_char_pos": 790, "end_char_pos": 845 }, { "type": "A", "before": null, "after": "we obtain", "start_char_pos": 920, "end_char_pos": 920 }, { "type": "R", "before": "and an optimal barrier strategy are determined explicitly in exponential claim size distributions. Two examples are demonstrated and compared to illustrate the main results", "after": "as well as an optimal unrestricted dividend payment scheme known as the barrier strategy. When claim sizes are exponentially distributed, we provide easily verifiable conditions under which the threshold and barrier strategies are optimal restricted and unrestricted dividend payment policies, respectively. The main results are illustrated with several examples, including a new example concerning regressive growth rates", "start_char_pos": 940, "end_char_pos": 1112 } ]
[ 0, 134, 272, 338, 565, 749, 842, 1038 ]
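A rough Monte Carlo sketch of the barrier strategy that the 1106.2781 record above identifies as optimal for unrestricted dividends in the compound Poisson model: surplus grows at the premium rate, exponentially distributed claims arrive at Poisson times, and everything above the barrier b is paid out. All parameter values are illustrative, and this estimates the objective by simulation rather than solving the quasi-variational inequality.

```python
# Discounted dividends under a barrier strategy for a Cramer-Lundberg surplus.
import numpy as np

def discounted_dividends(x0=5.0, b=10.0, c=1.5, lam=1.0, mu=1.0,
                         r=0.05, horizon=200.0, seed=None):
    rng = np.random.default_rng(seed)
    t, x, paid = 0.0, x0, 0.0
    while t < horizon:
        dt = rng.exponential(1.0 / lam)            # time to the next claim
        reach = (b - x) / c if x < b else 0.0      # time needed to reach the barrier
        if dt > reach:                             # at the barrier: pay out at rate c
            s0 = min(t + reach, horizon)
            s1 = min(t + dt, horizon)
            paid += c * (np.exp(-r * s0) - np.exp(-r * s1)) / r
            x = b
        else:
            x += c * dt                            # still climbing towards the barrier
        t += dt
        if t >= horizon:
            break
        x -= rng.exponential(mu)                   # claim arrives
        if x < 0:                                  # ruin: dividends stop
            break
    return paid

vals = [discounted_dividends(seed=k) for k in range(2000)]
print("estimated expected discounted dividends:", np.mean(vals))
```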
1106.3006
1
We study a multi-period Arrow-Debreu equilibrium in a heterogeneous economy populated by agents trading in a complete market. Each agent is represented by an exponential utility function, where additionally no negative level of consumption is permitted . We derive an explicit formula for the optimal consumption policies involving a put option depending on the state price density . We exploit this formula to prove the existence of an equilibrium and then provide a characterization of all possible equilibria, under the assumption of positive endowments. Via particular examples, we demonstrate that uniqueness is not always guaranteed. Finally, we discover the presence of infinitely many equilibria when endowments are vanishing .
This paper investigates various aspects of the discrete-time exponential utility maximization problem, where feasible consumption policies are not permitted to be negative. By using the Kuhn-Tucker theorem, some ideas from convex analysis and the notion of aggregate state price density , we provide a solution to this problem, in the setting of both complete and incomplete markets (with random endowments). Then, we exploit this result and use certain fixed-point techniques to provide an explicit characterization of a heterogeneous equilibrium in a complete market setting. Moreover, we construct concrete examples of models admitting multiple (including infinitely many) equilibria. By using Cramer's large deviation theorem, we study the asymptotic behavior of endogenously determined in equilibrium prices of zero coupon bonds. Lastly, we show that for incomplete markets, un-insurable future income reduces the current consumption level, thus confirming the presence of the precautionary savings motive in our model .
[ { "type": "R", "before": "We study a multi-period Arrow-Debreu equilibrium in a heterogeneous economy populated by agents trading in a complete market. Each agent is represented by an exponential utility function, where additionally no negative level of consumption is permitted . We derive an explicit formula for the optimal consumption policies involving a put option depending on the", "after": "This paper investigates various aspects of the discrete-time exponential utility maximization problem, where feasible consumption policies are not permitted to be negative. By using the Kuhn-Tucker theorem, some ideas from convex analysis and the notion of aggregate", "start_char_pos": 0, "end_char_pos": 361 }, { "type": "R", "before": ". We exploit this formula to prove the existence of an equilibrium and then provide a characterization of all possible equilibria, under the assumption of positive endowments. Via particular examples, we demonstrate that uniqueness is not always guaranteed. Finally, we discover the presence of infinitely many equilibria when endowments are vanishing", "after": ", we provide a solution to this problem, in the setting of both complete and incomplete markets (with random endowments). Then, we exploit this result and use certain fixed-point techniques to provide an explicit characterization of a heterogeneous equilibrium in a complete market setting. Moreover, we construct concrete examples of models admitting multiple (including infinitely many) equilibria. By using Cramer's large deviation theorem, we study the asymptotic behavior of endogenously determined in equilibrium prices of zero coupon bonds. Lastly, we show that for incomplete markets, un-insurable future income reduces the current consumption level, thus confirming the presence of the precautionary savings motive in our model", "start_char_pos": 382, "end_char_pos": 733 } ]
[ 0, 125, 254, 383, 557, 639 ]
1106.3006
2
This paper investigates various aspects of the discrete-time exponential utility maximization problem, where feasible consumption policies are not permitted to be negative. By using the Kuhn-Tucker theorem, some ideas from convex analysis and the notion of aggregate state price density, we provide a solution to this problem, in the setting of both complete and incomplete markets (with random endowments). Then, we exploit this result and use certain fixed-point techniques to provide an explicit characterization of a heterogeneous equilibrium in a complete market setting. Moreover, we construct concrete examples of models admitting multiple (including infinitely many) equilibria. By using Cramer's large deviation theorem, we study the asymptotic behavior of endogenously determined in equilibrium prices of zero coupon bonds. Lastly, we show that for incomplete markets, un-insurable future income reduces the current consumption level, thus confirming the presence of the precautionary savings motive in our model.
This paper investigates various aspects of the discrete-time exponential utility maximization problem with non-negative consumption. Using the Kuhn-Tucker theorem and the notion of aggregate state price density (Malamud and Trubowitz (2007)) , we provide a solution to this problem in the setting of both complete and incomplete markets (with random endowments). Then, we exploit this result to provide an explicit characterization of complete market heterogeneous equilibria. Furthermore , we construct concrete examples of models admitting multiple (including infinitely many) equilibria. By using Cramer's large deviation theorem, we study the asymptotics of equilibrium zero coupon bonds. Lastly, we conduct a study of the precautionary savings motive in incomplete markets .
[ { "type": "R", "before": ", where feasible consumptionpolicies are not permitted to be negative. By using", "after": "with non-negative consumption. Using", "start_char_pos": 102, "end_char_pos": 181 }, { "type": "D", "before": ", some ideas from convex analysis", "after": null, "start_char_pos": 206, "end_char_pos": 239 }, { "type": "A", "before": null, "after": "(Malamud and Trubowitz (2007))", "start_char_pos": 288, "end_char_pos": 288 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 329, "end_char_pos": 330 }, { "type": "D", "before": "and use certain fixed-point techniques", "after": null, "start_char_pos": 441, "end_char_pos": 479 }, { "type": "R", "before": "a heterogeneous equilibrium in a complete market setting. Moreover", "after": "complete market heterogeneous equilibria. Furthermore", "start_char_pos": 523, "end_char_pos": 589 }, { "type": "R", "before": "asymptotic behavior of endogenously determined in equilibrium prices of", "after": "asymptotics of equilibrium", "start_char_pos": 748, "end_char_pos": 819 }, { "type": "R", "before": "show that for incomplete markets, un-insurable future income reduces the current consumption level, thus confirming the presence", "after": "conduct a study", "start_char_pos": 850, "end_char_pos": 978 }, { "type": "R", "before": "our model", "after": "incomplete markets", "start_char_pos": 1018, "end_char_pos": 1027 } ]
[ 0, 172, 411, 580, 691, 838 ]
1106.3006
3
This paper investigates various aspects of the discrete-time exponential utility maximization problem with non-negative consumption . Using the Kuhn-Tucker theorem and the notion of aggregate state price density (Malamud and Trubowitz (2007)), we provide a solution to this problem in the setting of both complete and incomplete markets (with random endowments). Then, we exploit this result to provide an explicit characterization of complete market heterogeneous equilibria. Furthermore, we construct concrete examples of models admitting multiple (including infinitely many) equilibria. By using Cramer's large deviation theorem, we study the asymptotics of equilibrium zero coupon bonds. Lastly, we conduct a study of the precautionary savings motive in incomplete markets.
We offer mathematical tractability and new insights for a framework of exponential utility with non-negative consumption , a constraint often omitted in the literature giving rise to economically unviable solutions. Specifically, using the Kuhn-Tucker theorem and the notion of aggregate state price density (Malamud and Trubowitz (2007)), we provide a solution to this problem in the setting of both complete and incomplete markets (with random endowments). Then, we exploit this result to provide an explicit characterization of complete market heterogeneous equilibria. Furthermore, we construct concrete examples of models admitting multiple (including infinitely many) equilibria. By using Cramer's large deviation theorem, we study the asymptotics of equilibrium zero coupon bonds. Lastly, we conduct a study of the precautionary savings motive in incomplete markets.
[ { "type": "R", "before": "This paper investigates various aspects of the discrete-time exponential utility maximization problem", "after": "We offer mathematical tractability and new insights for a framework of exponential utility", "start_char_pos": 0, "end_char_pos": 101 }, { "type": "R", "before": ". Using", "after": ", a constraint often omitted in the literature giving rise to economically unviable solutions. Specifically, using", "start_char_pos": 132, "end_char_pos": 139 } ]
[ 0, 362, 476, 589, 691 ]
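A toy numerical check of the constraint that drives the 1106.3006 records above: exponential utility maximization with consumption forced to be non-negative. The market below (state prices, probabilities, wealth) is hypothetical; the point is only to see the bound bite when wealth is small, which is the economically unviable case the paper rules out.

```python
# One-period, two-state exponential utility with non-negative consumption.
import numpy as np
from scipy.optimize import minimize

a, beta = 1.0, 0.95                  # risk aversion, impatience (made-up values)
p = np.array([0.5, 0.5])             # physical probabilities of the two states
q = np.array([0.7, 0.25])            # state prices
w = 0.5                              # initial wealth, small so the constraint binds

def neg_utility(x):
    c0, c1 = x[0], x[1:]
    return np.exp(-a * c0) + beta * np.dot(p, np.exp(-a * c1))

budget = {"type": "eq", "fun": lambda x: x[0] + np.dot(q, x[1:]) - w}
res = minimize(neg_utility, x0=[0.1, 0.1, 0.1], method="SLSQP",
               bounds=[(0.0, None)] * 3, constraints=[budget])
print("optimal (c0, c1):", res.x)    # without the bounds, "consumption" could go negative
```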
1106.3025
1
We study natural selection in complete financial markets, populated by heterogeneous agents. We allow for a rich structure of heterogeneity: Individuals may differ in their beliefs concerning the economy, information and learning mechanism, risk aversion, impatience (time preference rate) and degree of habits . We develop new techniques for studying long run behavior of such economies, based on the Strassen's functional law of iterated logarithm. In particular, we explicitly determine an agent's survival index and show how the latter depends on the agent's characteristics. We use these results to study the long run behavior of the equilibrium interest rate and the market price of risk.
We study the market selection hypothesis in complete financial markets, populated by heterogeneous agents. We allow for a rich structure of heterogeneity: individuals may differ in their beliefs concerning the economy, information and learning mechanism, risk aversion, impatience and 'catching up with Joneses' preferences . We develop new techniques for studying the long-run behavior of such economies, based on the Strassen's functional law of iterated logarithm. In particular, we explicitly determine an agent's survival index and show how the latter depends on the agent's characteristics. We use these results to study the long-run behavior of the equilibrium interest rate and the market price of risk.
[ { "type": "R", "before": "natural selection", "after": "the market selection hypothesis", "start_char_pos": 9, "end_char_pos": 26 }, { "type": "R", "before": "Individuals", "after": "individuals", "start_char_pos": 141, "end_char_pos": 152 }, { "type": "R", "before": "(time preference rate) and degree of habits", "after": "and 'catching up with Joneses' preferences", "start_char_pos": 267, "end_char_pos": 310 }, { "type": "R", "before": "long run", "after": "the long-run", "start_char_pos": 352, "end_char_pos": 360 }, { "type": "R", "before": "long run", "after": "long-run", "start_char_pos": 614, "end_char_pos": 622 } ]
[ 0, 92, 312, 450, 579 ]
1106.3921
1
To better understand the spatial structure of large panels of economic and financial time series and provide a guideline for constructing semiparametric models, this paper first considers estimating a large spatial covariance matrix of the generalized m-dependent and \beta-mixing time series (with J variables and T observations) by hard thresholding regularization as long as {{\log J \, \cx^*(\ct)}}/{T} = \Co(1) (the former scheme with some time dependence measure \cx^*(\ct)) or \log J /{T} = \Co(1) (the latter scheme with the mixing coefficient \beta_{mix} = \Co\{(J^{2+\delta'} \log J \, T)^{-1}\}, \delta' > 0). We quantify the interplay between the estimators' consistency rate and the time dependence level, discuss an intuitive resampling scheme for threshold selection, and also prove a general cross-validation result justifying this. Given a consistently estimated covariance (correlation) matrix, by utilizing its natural links with graphical models and semiparametrics, after "screening" the (explanatory) variables, we implement a novel forward (and backward) label permutation procedure to cluster the "relevant" variables and construct the corresponding semiparametric model, which is further estimated by the groupwise dimension reduction method with sign constraints. We call this the SCE (screen - cluster - estimate) approach for modeling high dimensional data with complex spatial structure. Finally we apply this method to study the spatial structure of large panels of economic and financial time series and find the proper semiparametric structure for estimating the consumer price index (CPI) to illustrate its superiority over the linear models.
To better understand the spatial structure of large panels of economic and financial time series and provide a guideline for constructing semiparametric models, this paper first considers estimating a large spatial covariance matrix of the generalized m-dependent and \beta-mixing time series (with J variables and T observations) by hard thresholding regularization as long as {{\log J \, \cx^*(\ct)}}/{T} = \Co(1) (the former scheme with some time dependence measure \cx^*(\ct)) or \log J /{T} = \Co(1) (the latter scheme with some upper bounded mixing coefficient). We quantify the interplay between the estimators' consistency rate and the time dependence level, discuss an intuitive resampling scheme for threshold selection, and also prove a general cross-validation result justifying this. Given a consistently estimated covariance (correlation) matrix, by utilizing its natural links with graphical models and semiparametrics, after "screening" the (explanatory) variables, we implement a novel forward (and backward) label permutation procedure to cluster the "relevant" variables and construct the corresponding semiparametric model, which is further estimated by the groupwise dimension reduction method with sign constraints. We call this the SCE (screen - cluster - estimate) approach for modeling high dimensional data with complex spatial structure. Finally we apply this method to study the spatial structure of large panels of economic and financial time series and find the proper semiparametric structure for estimating the consumer price index (CPI) to illustrate its superiority over the linear models.
[ { "type": "D", "before": "the mixing coefficient\\beta_{mix", "after": null, "start_char_pos": 529, "end_char_pos": 561 }, { "type": "D", "before": "\\{(J^{2+\\delta'", "after": null, "start_char_pos": 583, "end_char_pos": 598 }, { "type": "A", "before": null, "after": "some upper bounded mixing coefficient).", "start_char_pos": 631, "end_char_pos": 631 } ]
[ 0, 330, 415, 859, 1300, 1427 ]
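A minimal sketch of the hard-thresholding estimator in the 1106.3921 record above, with threshold of order \sqrt{\log J / T}; the constant C is a hand-set stand-in for the resampling-based selection the paper discusses.

```python
# Hard thresholding of a large sample covariance matrix: entries smaller in
# magnitude than lambda = C * sqrt(log J / T) are set to zero.
import numpy as np

def hard_threshold_cov(X, C=1.0):
    T, J = X.shape                        # T observations of J variables
    S = np.cov(X, rowvar=False)
    lam = C * np.sqrt(np.log(J) / T)
    S_thr = np.where(np.abs(S) >= lam, S, 0.0)
    np.fill_diagonal(S_thr, np.diag(S))   # keep the variances untouched
    return S_thr

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))       # J = 500 variables, only T = 200 observations
print(np.mean(hard_threshold_cov(X, C=2.0) != 0.0))  # fraction of surviving entries
```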
1106.4582
1
In join the shortest queue networks, incoming jobs are assigned to the shortest queue from among a randomly chosen subset of D queues, in a system of N queues; after completion of service at its queue, a job leaves the network. We also assume that jobs arrive into the system according to a rate-\alpha N Poisson process, \alpha < 1, with rate-1 service at each queue. When the service at queues is exponentially distributed, it was shown in Vvedenskaya et al. [16] that the tail of the equilibrium queue size decays doubly exponentially in the limit as N goes to infinity. This is a substantial improvement over the case D=1, where the queue size decays exponentially. The reasoning in [16] does not easily generalize to jobs with nonexponential service time distributions. A modularized program for treating general service time distributions was introduced in Bramson et al. [4]. The program relies on an ansatz that asserts, in equilibrium, any fixed number of queues become independent of one another as N goes to infinity. This ansatz was demonstrated in several settings in Bramson et al. [5], including for networks where the service discipline is FIFO and the service time distribution has a decreasing hazard rate. In this article, we investigate the limiting behavior, as N goes to infinity, of the equilibrium at a queue when the service discipline is FIFO and the service time distribution has a power law with a given exponent -\beta, for \beta > 1. We show under the above ansatz that, as N goes to infinity, the tail of the equilibrium queue size exhibits a wide range of behavior depending on the relationship between \beta and D. In particular, if \beta>D/(D-1), the tail is doubly exponential and, if \beta<D/(D-1), the tail has a power law. When \beta=D/(D-1), the tail is exponentially distributed.
In join the shortest queue networks, incoming jobs are assigned to the shortest queue from among a randomly chosen subset of D queues, in a system of N queues; after completion of service at its queue, a job leaves the network. We also assume that jobs arrive into the system according to a rate-\alpha N Poisson process, \alpha<1, with rate-1 service at each queue. When the service at queues is exponentially distributed, it was shown in Vvedenskaya et al. [Probl. Inf. Transm. 32 (1996) 15-29] that the tail of the equilibrium queue size decays doubly exponentially in the limit as N\rightarrow\infty. This is a substantial improvement over the case D=1, where the queue size decays exponentially. The reasoning in [Probl. Inf. Transm. 32 (1996) 15-29] does not easily generalize to jobs with nonexponential service time distributions. A modularized program for treating general service time distributions was introduced in Bramson et al. [In Proc. ACM SIGMETRICS (2010) 275-286]. The program relies on an ansatz that asserts, in equilibrium, any fixed number of queues become independent of one another as N\rightarrow\infty. This ansatz was demonstrated in several settings in Bramson et al. [Queueing Syst. 71 (2012) 247-292], including for networks where the service discipline is FIFO and the service time distribution has a decreasing hazard rate. In this article, we investigate the limiting behavior, as N\rightarrow\infty, of the equilibrium at a queue when the service discipline is FIFO and the service time distribution has a power law with a given exponent -\beta, for \beta>1. We show under the above ansatz that, as N\rightarrow\infty, the tail of the equilibrium queue size exhibits a wide range of behavior depending on the relationship between \beta and D. In particular, if \beta>D/(D-1), the tail is doubly exponential and, if \beta<D/(D-1), the tail has a power law. When \beta=D/(D-1), the tail is exponentially distributed.
[ { "type": "D", "before": "\\alpha < 1, with rate-1 service at each queue. When the service at queues is exponentially distributed, it was shown in Vvedenskaya et al.", "after": null, "start_char_pos": 322, "end_char_pos": 460 }, { "type": "D", "before": "16", "after": null, "start_char_pos": 461, "end_char_pos": 463 }, { "type": "D", "before": "that the tail of the equilibrium queue size decays doubly exponentially in the limit as N goes to infinity. This is a substantial improvement over the case D=1, where the queue size decays exponentially. The reasoning in", "after": null, "start_char_pos": 482, "end_char_pos": 702 }, { "type": "D", "before": "16", "after": null, "start_char_pos": 703, "end_char_pos": 705 }, { "type": "D", "before": "does not easily generalize to jobs with nonexponential service time distributions. A modularized program for treating general service time distributions was introduced in Bramson et al.", "after": null, "start_char_pos": 724, "end_char_pos": 909 }, { "type": "D", "before": "4", "after": null, "start_char_pos": 910, "end_char_pos": 911 }, { "type": "D", "before": ". The program relies on an ansatz that asserts, in equilibrium, any fixed number of queues become independent of one another as N goes to infinity. This ansatz was demonstrated in several settings in Bramson et al.", "after": null, "start_char_pos": 929, "end_char_pos": 1143 }, { "type": "D", "before": "5", "after": null, "start_char_pos": 1144, "end_char_pos": 1145 }, { "type": "R", "before": ", including for networks where the service discipline is FIFO and the service time distribution has a decreasing hazard rate. In this article, we investigate the limiting behavior, as N goes to infinity, of the equilibrium at a queue when the service discipline is FIFO and the service time distribution has a power law with a given exponent -\\beta, for \\beta >", "after": "\\alpha<1, with rate-1 service at each queue. When the service at queues is exponentially distributed, it was shown in Vvedenskaya et al. [Probl. Inf. Transm. 32 (1996) 15-29] that the tail of the equilibrium queue size decays doubly exponentially in the limit as N\\rightarrow\\infty. This is a substantial improvement over the case D=1, where the queue size decays exponentially. The reasoning in [Probl. Inf. Transm. 32 (1996) 15-29] does not easily generalize to jobs with nonexponential service time distributions. A modularized program for treating general service time distributions was introduced in Bramson et al. [In Proc. ACM SIGMETRICS (2010) 275-286]. The program relies on an ansatz that asserts, in equilibrium, any fixed number of queues become independent of one another as N\\rightarrow\\infty. This ansatz was demonstrated in several settings in Bramson et al. [Queueing Syst. 71 (2012) 247-292], including for networks where the service discipline is FIFO and the service time distribution has a decreasing hazard rate. In this article, we investigate the limiting behavior, as N\\rightarrow \\infty, of the equilibrium at a queue when the service discipline is FIFO and the service time distribution has a power law with a given exponent -\\beta, for \\beta>", "start_char_pos": 1163, "end_char_pos": 1524 }, { "type": "R", "before": "goes to infinity", "after": "\\rightarrow\\infty", "start_char_pos": 1570, "end_char_pos": 1586 } ]
[ 0, 159, 227, 368, 460, 589, 685, 806, 1076, 1288, 1712, 1825 ]
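For the exponential-service baseline cited in the 1106.4582 record above (Vvedenskaya et al.), the mean-field equilibrium tail has the closed form P(queue length >= k) = \alpha^{(D^k - 1)/(D - 1)}: doubly exponential for D >= 2 choices versus geometric for D = 1. A few lines make the contrast concrete:

```python
# Doubly exponential versus geometric queue-length tails in the N -> infinity
# limit with exponential service, at load alpha.
alpha = 0.9
for k in range(1, 8):
    p1 = alpha ** k                            # D = 1: plain geometric tail
    p2 = alpha ** ((2 ** k - 1) / (2 - 1))     # D = 2 choices: doubly exponential
    print(f"k={k}:  d=1 tail {p1:.3e}   d=2 tail {p2:.3e}")
```

The paper's point is that this picture changes qualitatively once service times are heavy-tailed: the tail can itself become a power law.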
1106.5274
1
This paper highlights the role of risk neutral investors in generating endogenous bubbles in derivatives markets. We propose the following theorem. A market for derivatives, which has all the features of a perfect market except completeness and has some risk neutral investors, may exhibit almost surely extreme price movements which represent a violation to the Gaussian random walk hypothesis. This can be viewed as a paradox because it contradicts wide-held conjectures about prices in informationally efficient markets with rational investors. The theorem implies that prices are not always good approximations of the fundamental values of derivatives, and that extreme price movements like price peaks or crashes may have endogenous origin and happen with a higher-than-normal frequency.
This paper highlights the role of risk neutral investors in generating endogenous bubbles in derivatives markets. We find that a market for derivatives, which has all the features of a perfect market except completeness and has some risk neutral investors, can exhibit extreme price movements which represent a violation to the Gaussian random walk hypothesis. This can be viewed as a paradox because it contradicts wide-held conjectures about prices in informationally efficient markets with rational investors. Our findings imply that prices are not always good approximations of the fundamental values of derivatives, and that extreme price movements like price peaks or crashes may have endogenous origin and happen with a higher-than-normal frequency.
[ { "type": "R", "before": "propose the following theorem. A", "after": "find that a", "start_char_pos": 117, "end_char_pos": 149 }, { "type": "R", "before": "may exhibit almost surely", "after": "can exhibit", "start_char_pos": 278, "end_char_pos": 303 }, { "type": "R", "before": "The theorem implies", "after": "Our findings imply", "start_char_pos": 548, "end_char_pos": 567 } ]
[ 0, 113, 147, 395, 547 ]
1106.5706
1
In financial markets valuable information is rarely circulated homogeneously, because of time required for information to spread. However, advances in communication technology means that the 'lifetime' of an important piece of information is typically short. Hence, viewed as a tradable asset, information shares the characteristics of a nondurable commodity: while it can be stored and transmitted freely, its worth diminishes rapidly in time. In view of recent developments where internet search engines and other information providers are offering valuable information to financial institutions, the problem of pricing information is becoming increasingly important. With this in mind, a new formulation of utility-indifference argument is introduced and used as a basis for pricing information. Specifically, we regard information as a quantity that converts a prior distribution into a posterior distribution. The amount of information can then be quantified by relative entropy. The key to our utility indifference argument is to equate the maximised a posterior utility, after paying certain cost for the information, with the a posterior expectation of the utility based on the a priori optimal strategy. This formulation leads to one price for a given quantity of upside information; and another price for a given quantity of downside information. The ideas are illustrated by means of simple examples.
In financial markets valuable information is rarely circulated homogeneously, because of time required for information to spread. However, advances in communication technology means that the 'lifetime' of important information is typically short. Hence, viewed as a tradable asset, information shares the characteristics of a perishable commodity: while it can be stored and transmitted freely, its worth diminishes rapidly in time. In view of recent developments where internet search engines and other information providers are offering information to financial institutions, the problem of pricing information is becoming increasingly important. With this in mind, a new formulation of utility-indifference argument is introduced and used as a basis for pricing information. Specifically, we regard information as a quantity that converts a prior distribution into a posterior distribution. The amount of information can then be quantified by relative entropy. The key to our utility indifference argument is to equate the maximised a posterior utility, after paying certain cost for the information, with the a posterior expectation of the utility based on the a priori optimal strategy. This formulation leads to one price for a given quantity of upside information; and another price for a given quantity of downside information. The ideas are illustrated by means of simple examples.
[ { "type": "R", "before": "an important piece of", "after": "important", "start_char_pos": 205, "end_char_pos": 226 }, { "type": "R", "before": "nondurable", "after": "perishable", "start_char_pos": 338, "end_char_pos": 348 }, { "type": "D", "before": "valuable", "after": null, "start_char_pos": 551, "end_char_pos": 559 }, { "type": "R", "before": "a posterior", "after": "a posterior", "start_char_pos": 1057, "end_char_pos": 1068 } ]
[ 0, 129, 258, 444, 669, 798, 914, 984, 1212, 1292, 1356 ]
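The 1106.5706 record above quantifies a piece of information by the relative entropy between posterior and prior. A minimal computation (the prior and posterior numbers below are hypothetical):

```python
# Relative entropy (Kullback-Leibler divergence) between posterior and prior:
# D(post || prior) = sum_i post_i * log(post_i / prior_i), in nats.
import numpy as np

def relative_entropy(post, prior):
    post, prior = np.asarray(post, float), np.asarray(prior, float)
    mask = post > 0
    return float(np.sum(post[mask] * np.log(post[mask] / prior[mask])))

prior = [0.5, 0.5]          # beliefs before receiving the information
upside = [0.8, 0.2]         # posterior after a hypothetical upside signal
print(relative_entropy(upside, prior))   # amount of information received
```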
1106.5913
1
In this paper we quantify the statistical coherence between financial time series by means of Renyi's entropy. With the help of Cambell 's coding theorem we show that Renyi's entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppresses others. This accentuation is controlled with Renyi's parameter q. To tackle the issue of the information flow between time series we formulate the concept of Renyi's transfer entropy as a measure of information that is transferred only between certain parts of underlying distributions. This is particularly pertinent in financial time series where the knowledge of marginal events such as spikes or sudden jumps is of a crucial importance. We apply the Renyian information flow to stock market time series from 11 world stock indices as sampled at a daily rate in the time period 02.01.1990 - 31.12.2009. Corresponding heat maps and net information flows are represented graphically. A detailed discussion of the transfer entropy between DAX and S&P500 indices based on minute tick data gathered in the period from 02.04.2008 to 11.09.2009 is also provided. Our analysis shows that the bivariate information flow between world markets is strongly asymmetric with a distinct information surplus flowing from the Asia-Pacific region both to Europe and the U.S. markets. Important, yet less dramatic excess of information also flows from Europe to the U. S. This is particularly clearly seen from a careful analysis of Renyi information flow between DAX and S&P500 .
In this paper , we quantify the statistical coherence between financial time series by means of the Renyi entropy. With the help of Campbell 's coding theorem we show that the Renyi entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppressing others. This accentuation is controlled with Renyi's parameter q. To tackle the issue of the information flow between time series we formulate the concept of Renyi's transfer entropy as a measure of information that is transferred only between certain parts of underlying distributions. This is particularly pertinent in financial time series where the knowledge of marginal events such as spikes or sudden jumps is of a crucial importance. We apply the Renyian information flow to stock market time series from 11 world stock indices as sampled at a daily rate in the time period 02.01.1990 - 31.12.2009. Corresponding heat maps and net information flows are represented graphically. A detailed discussion of the transfer entropy between the DAX and S&P500 indices based on minute tick data gathered in the period from 02.04.2008 to 11.09.2009 is also provided. Our analysis shows that the bivariate information flow between world markets is strongly asymmetric with a distinct information surplus flowing from the Asia-Pacific region to both European and US markets. An important yet less dramatic excess of information also flows from Europe to the US. This is particularly clearly seen from a careful analysis of Renyi information flow between the DAX and S&P500 indices .
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 14, "end_char_pos": 14 }, { "type": "R", "before": "Renyi's", "after": "the Renyi", "start_char_pos": 95, "end_char_pos": 102 }, { "type": "R", "before": "Cambell", "after": "Campbell", "start_char_pos": 129, "end_char_pos": 136 }, { "type": "R", "before": "Renyi's", "after": "the Renyi", "start_char_pos": 168, "end_char_pos": 175 }, { "type": "R", "before": "suppresses", "after": "suppressing", "start_char_pos": 284, "end_char_pos": 294 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1034, "end_char_pos": 1034 }, { "type": "R", "before": "both to Europe and the U.S. markets. Important,", "after": "to both European and US markets. An important", "start_char_pos": 1328, "end_char_pos": 1375 }, { "type": "R", "before": "U. S.", "after": "US.", "start_char_pos": 1446, "end_char_pos": 1451 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1544, "end_char_pos": 1544 }, { "type": "A", "before": null, "after": "indices", "start_char_pos": 1560, "end_char_pos": 1560 } ]
[ 0, 111, 302, 581, 735, 900, 979, 1154, 1364 ]
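A minimal illustration of the selective emphasis described in the 1106.5913 record above: the Renyi entropy H_q = \log(\sum_i p_i^q)/(1-q) weights the high-probability center of a return histogram for q > 1 and the rare, large moves for 0 < q < 1. (Estimating the full bivariate transfer entropy between two index series requires joint histograms and is omitted here; the toy "returns" below are simulated, not market data.)

```python
# Renyi entropy of a fat-tailed return histogram for several values of q.
import numpy as np

def renyi_entropy(p, q):
    p = np.asarray(p, float)
    p = p[p > 0]
    if np.isclose(q, 1.0):                       # Shannon limit as q -> 1
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** q)) / (1.0 - q))

rng = np.random.default_rng(1)
returns = rng.standard_t(df=3, size=100_000)     # fat-tailed toy "returns"
hist, _ = np.histogram(returns, bins=100)
p = hist / hist.sum()
for q in (0.5, 1.0, 2.0):
    print(f"q={q}: H_q = {renyi_entropy(p, q):.3f}")
```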
1106.5929
1
In this paper we investigate model-independent bounds for exotic options written on a risky asset using infinite-dimensional linear programming methods. Using arguments from the theory of Monge-Kantorovich mass-transport we establish a dual version of the problem that has a natural financial interpretation in terms of semi-static hedging .
In this paper we investigate model-independent bounds for exotic options written on a risky asset . Based on arguments from the theory of Monge-Kantorovich mass-transport we establish a dual version of the problem that has a natural financial interpretation in terms of semi-static hedging . In particular we prove that there is no duality gap .
[ { "type": "R", "before": "using infinite-dimensional linear programming methods. Using", "after": ". Based on", "start_char_pos": 98, "end_char_pos": 158 }, { "type": "A", "before": null, "after": ". In particular we prove that there is no duality gap", "start_char_pos": 340, "end_char_pos": 340 } ]
[ 0, 152 ]
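The duality in the 1106.5929 record above has a transparent discrete primal: maximise the expected payoff over all couplings of two given marginals subject to the martingale constraint. A toy linear program follows; the grids, marginals and payoff are hypothetical, with the marginals chosen in convex order so the problem is feasible.

```python
# Discrete martingale-optimal-transport upper bound via linear programming.
import numpy as np
from scipy.optimize import linprog

x = np.array([90.0, 110.0])                 # prices at time 1
y = np.array([80.0, 100.0, 120.0])          # prices at time 2
mu = np.array([0.5, 0.5])                   # marginal law at time 1
nu = np.array([0.25, 0.5, 0.25])            # marginal law at time 2 (convex order)
payoff = np.abs(y[None, :] - x[:, None])    # a forward-start straddle payoff

nx, ny = len(x), len(y)
A_eq, b_eq = [], []
for i in range(nx):                         # row marginals: sum_j p_ij = mu_i
    row = np.zeros(nx * ny); row[i * ny:(i + 1) * ny] = 1.0
    A_eq.append(row); b_eq.append(mu[i])
for j in range(ny):                         # column marginals: sum_i p_ij = nu_j
    col = np.zeros(nx * ny); col[j::ny] = 1.0
    A_eq.append(col); b_eq.append(nu[j])
for i in range(nx):                         # martingale: sum_j p_ij (y_j - x_i) = 0
    row = np.zeros(nx * ny); row[i * ny:(i + 1) * ny] = y - x[i]
    A_eq.append(row); b_eq.append(0.0)

res = linprog(c=-payoff.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0.0, None)] * (nx * ny))
print("model-independent upper bound:", -res.fun)
```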
1106.6102
1
In this paper , we investigate two different frameworks for assessing the risk in a multi-period decision process: a dynamically inconsistent formulation (whereby a single , static risk measure is applied to the entire sequence of future costs), and a dynamically consistent one, obtained by suitably composing one-step risk mappings. We characterize the class of dynamically consistent measures that provide a tight approximation for a given inconsistent measure , and discuss how the approximation factors can be computed. For the case where the consistent measures are given by Average Value-at-Risk, we derive a polynomial-time algorithm for approximating an arbitrary inconsistent distortion measure. We also present exact analytical bounds for the case where the dynamically inconsistent measureis also given by Average Value-at-Risk, and briefly discuss managerial implications in multi-period risk-assessment processes . Our theoretical and algorithmic constructions exploit interesting connections between the study of risk measures and the theory of submodularity and lattice programming , which may be of independent interest.
This paper compares two different frameworks recently introduced in the literature for measuring risk in a multi-period setting. The first corresponds to applying a single coherent risk measure to the cumulative future costs, while the second involves applying a composition of one-step coherent risk mappings. We summarize the relative strengths of the two methods, characterize several necessary and sufficient conditions under which one of the measurements always dominates the other, and introduce a metric to quantify how close the two risk measures are. Using this notion, we address the question of how tightly a given coherent measure can be approximated by lower or upper-bounding compositional measures. We exhibit an interesting asymmetry between the two cases: the tightest possible upper-bound can be exactly characterized, and corresponds to a popular construction in the literature, while the tightest-possible lower bound is not readily available. We show that testing domination and computing the approximation factors is generally NP-hard, even when the risk measures in question are comonotonic and law-invariant. However, we characterize conditions and discuss several examples where polynomial-time algorithms are possible. One such case is the well-known Conditional Value-at-Risk measure, which is further explored in our companion paper Huang, Iancu, Petrik and Subramanian, "Static and Dynamic Conditional Value at Risk" (2012) . Our theoretical and algorithmic constructions exploit interesting connections between the study of risk measures and the theory of submodularity and combinatorial optimization , which may be of independent interest.
[ { "type": "R", "before": "In this paper , we investigate", "after": "This paper compares", "start_char_pos": 0, "end_char_pos": 30 }, { "type": "R", "before": "for assessing the", "after": "recently introduced in the literature for measuring", "start_char_pos": 56, "end_char_pos": 73 }, { "type": "R", "before": "decision process: a dynamically inconsistent formulation (whereby a single , static risk measure is applied to the entire sequence of future costs), and a dynamically consistent one, obtained by suitably composing", "after": "setting. The first corresponds to applying a single coherent risk measure to the cumulative future costs, while the second involves applying a composition of", "start_char_pos": 97, "end_char_pos": 310 }, { "type": "A", "before": null, "after": "coherent", "start_char_pos": 320, "end_char_pos": 320 }, { "type": "R", "before": "characterize the class of dynamically consistent measures that provide a tight approximation for a given inconsistent measure , and discuss how the approximation factors can be computed. For the case where the consistent measures are given by Average Value-at-Risk, we derive a", "after": "summarize the relative strengths of the two methods, characterize several necessary and sufficient conditions under which one of the measurements always dominates the other, and introduce a metric to quantify how close the two risk measures are. Using this notion, we address the question of how tightly a given coherent measure can be approximated by lower or upper-bounding compositional measures. We exhibit an interesting asymmetry between the two cases: the tightest possible upper-bound can be exactly characterized, and corresponds to a popular construction in the literature, while the tightest-possible lower bound is not readily available. We show that testing domination and computing the approximation factors is generally NP-hard, even when the risk measures in question are comonotonic and law-invariant. However, we characterize conditions and discuss several examples where", "start_char_pos": 339, "end_char_pos": 616 }, { "type": "R", "before": "algorithm for approximating an arbitrary inconsistent distortion measure. We also present exact analytical bounds for the case where the dynamically inconsistent measureis also given by Average Value-at-Risk, and briefly discuss managerial implications in multi-period risk-assessment processes", "after": "algorithms are possible. One such case is the well-known Conditional Value-at-Risk measure, which is further explored in our companion paper", "start_char_pos": 633, "end_char_pos": 927 }, { "type": "A", "before": null, "after": "Huang, Iancu, Petrik and Subramanian, \"Static and Dynamic Conditional Value at Risk\" (2012)", "start_char_pos": 928, "end_char_pos": 928 }, { "type": "R", "before": "lattice programming", "after": "combinatorial optimization", "start_char_pos": 1080, "end_char_pos": 1099 } ]
[ 0, 335, 525, 706, 930 ]
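Both frameworks compared in the 1106.6102 record above are built from one-step coherent maps such as Average Value-at-Risk. A sample-based AVaR via the Rockafellar-Uryasev representation AVaR_\alpha(X) = \min_t \{ t + E[(X - t)^+]/\alpha \}; composing it over periods amounts to applying this same map node by node on a scenario tree, whereas the static measure applies it once to the cumulative cost.

```python
# Sample-based Average Value-at-Risk (CVaR) of a loss X at tail level alpha.
import numpy as np

def avar(losses, alpha):
    losses = np.asarray(losses, float)
    t = np.quantile(losses, 1.0 - alpha)    # a Value-at-Risk at level alpha
    return t + np.mean(np.maximum(losses - t, 0.0)) / alpha

rng = np.random.default_rng(7)
X = rng.standard_normal(100_000)
print(f"VaR_5% ~ {np.quantile(X, 0.95):.3f},  AVaR_5% ~ {avar(X, 0.05):.3f}")
```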
1106.6328
1
Performance evaluation of the 802.11 MAC protocol is classically based on the decoupling assumption, which hypothesizes that the backoff processes at different nodes are independent. A necessary condition for the validity of this approach in the asymptotic sense (when the number of wireless nodes tends to infinity) is the existence and uniqueness of a solution to a fixed point equation . However , it was also recently pointed out that this condition is not sufficient; in contrast, a necessary and sufficient condition is a global stability property of the associated ordinary differential equation. Such a property was established only for a specific case, namely for a homogeneous system (all nodes have the same parameters) and when the number of backoff stages is either two or infinite and with other restrictive conditions. In this paper, we give a simple condition that establishes the asymptotic validity of the decoupling assumption for the homogeneous case. We also discuss the heterogeneous and the differentiated service cases and formulate a new ordinary differential equation. It is shown that the uniqueness of a solution to the associated fixed point equation is not sufficient; we exhibit one case where the fixed point equation has a unique solution but the decoupling assumption is not valid in the asymptotic sense .
Performance evaluation of the 802.11 MAC protocol is classically based on the decoupling assumption, which hypothesizes that the backoff processes at different nodes are independent. This decoupling assumption results from mean field convergence and is generally true in transient regime in the asymptotic sense (when the number of wireless nodes tends to infinity) , but, contrary to widespread belief, may not necessarily hold in stationary regime. The issue is often related with the existence and uniqueness of a solution to a fixed point equation ; however , it was also recently shown that this condition is not sufficient; in contrast, a sufficient condition is a global stability property of the associated ordinary differential equation. In this paper, we give a simple condition that establishes the asymptotic validity of the decoupling assumption for the homogeneous case. We also discuss the heterogeneous and the differentiated service cases and formulate a new ordinary differential equation. We show that the uniqueness of a solution to the associated fixed point equation is not sufficient; we exhibit one case where the fixed point equation has a unique solution but the decoupling assumption is not valid in the asymptotic sense in stationary regime .
[ { "type": "R", "before": "A necessary condition for the validity of this approach in", "after": "This decoupling assumption results from mean field convergence and is generally true in transient regime in", "start_char_pos": 183, "end_char_pos": 241 }, { "type": "R", "before": "is", "after": ", but, contrary to widespread belief, may not necessarily hold in stationary regime. The issue is often related with", "start_char_pos": 317, "end_char_pos": 319 }, { "type": "R", "before": ". However", "after": "; however", "start_char_pos": 389, "end_char_pos": 398 }, { "type": "R", "before": "pointed out", "after": "shown", "start_char_pos": 422, "end_char_pos": 433 }, { "type": "D", "before": "necessary and", "after": null, "start_char_pos": 488, "end_char_pos": 501 }, { "type": "D", "before": "Such a property was established only for a specific case, namely for a homogeneous system (all nodes have the same parameters) and when the number of backoff stages is either two or infinite and with other restrictive conditions.", "after": null, "start_char_pos": 604, "end_char_pos": 833 }, { "type": "R", "before": "It is shown", "after": "We show", "start_char_pos": 1095, "end_char_pos": 1106 }, { "type": "A", "before": null, "after": "in stationary regime", "start_char_pos": 1339, "end_char_pos": 1339 } ]
[ 0, 182, 390, 472, 603, 833, 971, 1094, 1198 ]
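A generic reminder of the gap highlighted in the 1106.6328 record above, namely that uniqueness of a fixed point does not imply global stability of the associated ODE. The textbook Hopf normal form below (not the 802.11 dynamics) has a unique equilibrium that trajectories never settle into:

```python
# Unique fixed point at the origin, yet all other trajectories converge to a
# limit cycle of radius 1 -- uniqueness without global stability.
import numpy as np
from scipy.integrate import solve_ivp

def hopf(t, z):
    x, y = z
    r2 = x * x + y * y
    return [-y + x * (1.0 - r2), x + y * (1.0 - r2)]

sol = solve_ivp(hopf, (0.0, 50.0), [0.05, 0.0], rtol=1e-8)
radius = np.hypot(sol.y[0], sol.y[1])
print("final radius ~", radius[-1])   # ~1.0: the cycle, not the fixed point
```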
1107.1174
1
Financial markets provide an ideal frame for the study of first-passage time events of non-Gaussian correlated dynamics mainly because large data sets are available. Tick-by-tick data of six futures markets are herein considered resulting in fat tailed first-passage time probabilities. The scaling of the return with the standard deviation collapses the probabilities of all markets considered , and also for different time horizons, into single curves, suggesting that first-passage statistics is market independent (at least for high-frequency data). On the other hand, a very closely related quantity, the survival probability, still shows a hyperbolic t^{-1/2} decay typical of a diffusion-like dynamics . Modifications of the Weibull and Student distributions are good candidates for a phenomenological description of first-passage time properties . The scaling strategies shown may be useful for risk control and algorithmic trading.
Financial markets provide an ideal frame for the study of crossing or first-passage time events of non-Gaussian correlated dynamics mainly because large data sets are available. Tick-by-tick data of six futures markets are herein considered resulting in fat tailed first-passage time probabilities. The scaling of the return with the standard deviation collapses the probabilities of all markets examined , and also for different time horizons, into single curves, suggesting that first-passage statistics is market independent (at least for high-frequency data). On the other hand, a very closely related quantity, the survival probability, shows, away from the center and tails of the distribution, a hyperbolic t^{-1/2} decay typical of a Markovian dynamics albeit the existence of memory in markets . Modifications of the Weibull and Student distributions are good candidates for the phenomenological description of first-passage time properties under certain regimes . The scaling strategies shown may be useful for risk control and algorithmic trading.
[ { "type": "A", "before": null, "after": "crossing or", "start_char_pos": 58, "end_char_pos": 58 }, { "type": "R", "before": "considered", "after": "examined", "start_char_pos": 385, "end_char_pos": 395 }, { "type": "R", "before": "still shows", "after": "shows, away from the center and tails of the distribution,", "start_char_pos": 633, "end_char_pos": 644 }, { "type": "R", "before": "diffusion-like dynamics", "after": "Markovian dynamics albeit the existence of memory in markets", "start_char_pos": 686, "end_char_pos": 709 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 791, "end_char_pos": 792 }, { "type": "A", "before": null, "after": "under certain regimes", "start_char_pos": 855, "end_char_pos": 855 } ]
[ 0, 166, 287, 554, 857 ]
1107.1442
1
ATP-dependent chromatin remodeling enzymes (CRE) are bio-molecular motors in eukaryotic cells. These are driven by a chemical fuel, namely, adenosine triphosphate (ATP). CREs actively participate in many cellular processes that require accessibility of specific stretches of DNA which are packaged as chromatin. The basic unit of chromatin is a nucleosome where 146 bp \sim 50 nm of a double stranded DNA (dsDNA) is wrapped around a spool formed by histone proteins. We investigate the mechanism of peeling of the histone spool, and its complete detachment, from the dsDNA by a CRE . Our two-state model of a CRE captures effectively two distinct chemical (or conformational) states in the mechano-chemical cycle of each ATP-dependent CRE. We calculate the mean times for histone detachment . Our predictions on the ATP-dependence of the measurable quantities can be tested by carrying out {\it in-vitro} experiments .
ATP-dependent chromatin remodeling enzymes (CRE) are bio-molecular motors in eukaryotic cells. These are driven by a chemical fuel, namely, adenosine triphosphate (ATP). CREs actively participate in many cellular processes that require accessibility of specific segments of DNA which are packaged as chromatin. The basic unit of chromatin is a nucleosome where 146 bp \sim 50 nm of a double stranded DNA (dsDNA) is wrapped around a spool formed by histone proteins. The helical path of histone-DNA contact on a nucleosome is also called "footprint". We investigate the mechanism of footprint traversal by a CRE that translocates along the dsDNA . Our two-state model of a CRE captures effectively two distinct chemical (or conformational) states in the mechano-chemical cycle of each ATP-dependent CRE. We calculate the mean time of traversal . Our predictions on the ATP-dependence of the mean traversal time can be tested by carrying out {\it in-vitro} experiments on mono-nucleosomes .
[ { "type": "R", "before": "stretches", "after": "segments", "start_char_pos": 262, "end_char_pos": 271 }, { "type": "A", "before": null, "after": "The helical path of histone-DNA contact on a nucleosome is also called \"footprint\".", "start_char_pos": 467, "end_char_pos": 467 }, { "type": "R", "before": "peeling of the histone spool, and its complete detachment, from the dsDNA", "after": "footprint traversal", "start_char_pos": 500, "end_char_pos": 573 }, { "type": "A", "before": null, "after": "that translocates along the dsDNA", "start_char_pos": 583, "end_char_pos": 583 }, { "type": "R", "before": "times for histone detachment", "after": "time of traversal", "start_char_pos": 764, "end_char_pos": 792 }, { "type": "R", "before": "measurable quantities", "after": "mean traversal time", "start_char_pos": 840, "end_char_pos": 861 }, { "type": "A", "before": null, "after": "on mono-nucleosomes", "start_char_pos": 919, "end_char_pos": 919 } ]
[ 0, 94, 169, 311, 466, 741, 794 ]
1107.1617
1
We provide easily verifiable conditions for the well-posedness of the optimal investment problem for a behavioral investor in an incomplete discrete-time multiperiod financial market model, for the first time in the literature. Under suitable assumptions we also establish the existence of optimal strategies.
We provide easily verifiable conditions for the well-posedness of the optimal investment problem for a behavioral investor in an incomplete discrete-time multiperiod financial market model, for the first time in the literature. Under two different sets of assumptions we also establish the existence of optimal strategies.
[ { "type": "R", "before": "suitable", "after": "two different sets of", "start_char_pos": 234, "end_char_pos": 242 } ]
[ 0, 227 ]
1107.2346
1
The Continuous-Time Random Walk (CTRW) formalism can be adapted to encompass stochastic processes with memory. In this letter we will show how the random combination of two different unbiased CTRWs can give raise to a process with neat drift, if one of them is a CTRW with memory. If one identifies the other one as noise, the effect can be thought as a kind of stochastic resonance. The ultimate origin of this phenomenon is the same that Parrondo's paradox in game theory .
The Continuous-Time Random Walk (CTRW) formalism can be adapted to encompass stochastic processes with memory. In this article we will show how the random combination of two different unbiased CTRWs can give raise to a process with clear drift, if one of them is a CTRW with memory. If one identifies the other one as noise, the effect can be thought as a kind of stochastic resonance. The ultimate origin of this phenomenon is the same of the Parrondo's paradox in game theory
[ { "type": "R", "before": "letter", "after": "article", "start_char_pos": 119, "end_char_pos": 125 }, { "type": "R", "before": "neat", "after": "clear", "start_char_pos": 231, "end_char_pos": 235 }, { "type": "R", "before": "that", "after": "of the", "start_char_pos": 435, "end_char_pos": 439 }, { "type": "D", "before": ".", "after": null, "start_char_pos": 474, "end_char_pos": 475 } ]
[ 0, 110, 280, 383 ]
1107.2562
1
Financial volatility risk is addressed through a multiple round evolutionary quantum game equilibrium leading to Multifractal Self-Organized Criticality (MSOC) in the financial returns and in the risk dynamics. The model is simulated and the results are compared with financial volatility data.
Financial volatility risk and its relation to a business cycle-related intrinsic time is addressed through a multiple round evolutionary quantum game equilibrium leading to turbulence and multifractal signatures in the financial returns and in the risk dynamics. The model is simulated and the results are compared with actual financial volatility data.
[ { "type": "A", "before": null, "after": "and its relation to a business cycle-related intrinsic time", "start_char_pos": 26, "end_char_pos": 26 }, { "type": "R", "before": "Multifractal Self-Organized Criticality (MSOC)", "after": "turbulence and multifractal signatures", "start_char_pos": 114, "end_char_pos": 160 }, { "type": "A", "before": null, "after": "actual", "start_char_pos": 269, "end_char_pos": 269 } ]
[ 0, 211 ]
1107.2716
1
We investigate the continuity of expected exponential utility maximization with respect to perturbation of the Sharpe ratio of markets. By focusing only on continuity, we impose weaker regularity conditions than those found in the literature. Specifically, foritkovi\'c (2007), a local bmo hypothesis, a condition which is seen to always be trivially satisfied in the setting of Larsen and Zitkovi\'c (2007). For } markets of the form S = M + \int \lambda d<M>, we require a uniform bound on the norm of \lambda \cdot M in a suitable bmo space.
We investigate the continuity of expected exponential utility maximization with respect to perturbation of the Sharpe ratio of markets. By focusing only on continuity, we impose weaker regularity conditions than those found in the literature. Specifically, we require, in addition to the V-compactness hypothesis of Larsen and \v{Zitkovi\'c (2007), a local bmo hypothesis, a condition which is seen to always be trivially satisfied in the setting of Larsen and Zitkovi\'c (2007). For } markets of the form S = M + \int \lambda d<M>, these conditions are simultaneously implied by the existence of a uniform bound on the norm of \lambda \cdot M in a suitable bmo space.
[ { "type": "R", "before": "for", "after": "we require, in addition to the V-compactness hypothesis of Larsen and \\v{Z", "start_char_pos": 257, "end_char_pos": 260 }, { "type": "R", "before": "we require", "after": "these conditions are simultaneously implied by the existence of", "start_char_pos": 462, "end_char_pos": 472 } ]
[ 0, 135, 242, 408 ]
1107.2748
1
We derive the explicit formula for the joint Laplace transform of the Wishart process and its time integral which extends the original approach of Bru. We compare our methodology with the alternative results given by the variation of constants method, the linearization of the Matrix Riccati ODE's and the Runge-Kutta algorithm. The new formula turns out to be fast, accurate and very useful for applications when dealing with stochastic volatility and stochastic correlation modelling .
A study on the Laplace transform as originally derived by Bru .
[ { "type": "R", "before": "We derive the explicit formula for the joint Laplace transform of the Wishart process and its time integral which extends the original approach of Bru. We compare our methodology with the alternative results given by the variation of constants method, the linearization of the Matrix Riccati ODE's and the Runge-Kutta algorithm. The new formula turns out to be fast, accurate and very useful for applications when dealing with stochastic volatility and stochastic correlation modelling", "after": "A study on the Laplace transform as originally derived by Bru", "start_char_pos": 0, "end_char_pos": 485 } ]
[ 0, 151, 328 ]
1107.2748
2
A study on the Laplace transform as originally derived by Bru .
We derive the explicit formula for the joint Laplace transform of the Wishart process and its time integral which extends the original approach of Bru. We compare our methodology with the alternative results given by the variation of constants method, the linearization of the Matrix Riccati ODE's and the Runge-Kutta algorithm. The new formula turns out to be fast and accurate .
[ { "type": "R", "before": "A study on the Laplace transform as originally derived by Bru", "after": "We derive the explicit formula for the joint Laplace transform of the Wishart process and its time integral which extends the original approach of Bru. We compare our methodology with the alternative results given by the variation of constants method, the linearization of the Matrix Riccati ODE's and the Runge-Kutta algorithm. The new formula turns out to be fast and accurate", "start_char_pos": 0, "end_char_pos": 61 } ]
[ 0 ]
1107.2988
1
This paper resolves a question proposed in Kardaras and Robertson ( 2011): how to invest in a robust growth-optimal way in a market where precise knowledge of the covariance structure of the underlying process is unavailable. Among an appropriate class of admissible covariance structures, we characterize the optimal trading strategy in terms of a generalized version of a principal half-eigenvalue of a Pucci extremal operator and its associated eigenfunction.
This paper resolves a question proposed in Kardaras and Robertson ( arXiv:1005.3454) how to invest in a robust growth-optimal way in a market where precise knowledge of the covariance structure of the underlying process is unavailable. Among an appropriate class of admissible covariance structures, we characterize the optimal trading strategy in terms of a generalized version of a principal half-eigenvalue of a Pucci extremal operator and its associated eigenfunction.
[ { "type": "R", "before": "2011):", "after": "arXiv:1005.3454)", "start_char_pos": 68, "end_char_pos": 74 } ]
[ 0, 225 ]
1107.2988
2
This paper resolves a question proposed in Kardaras and Robertson ( arXiv:1005.3454 ) how to invest in a robust growth-optimal way in a market where precise knowledge of the covariance structure of the underlying process is unavailable. Among an appropriate class of admissible covariance structures, we characterize the optimal trading strategy in terms of a generalized version of a principal half-eigenvalue of a Pucci extremal operator and its associated eigenfunction .
This paper resolves a question proposed in Kardaras and Robertson ( 2012) arXiv:1005.3454 : how to invest in a robust growth-optimal way in a market where precise knowledge of the covariance structure of the underlying assets is unavailable. Among an appropriate class of admissible covariance structures, we characterize the optimal trading strategy in terms of a generalized version of the principal eigenvalue of a fully nonlinear elliptic operator and its associated eigenfunction , by slightly restricting the collection of non-dominated probability measures .
[ { "type": "A", "before": null, "after": "2012)", "start_char_pos": 68, "end_char_pos": 68 }, { "type": "R", "before": ")", "after": ":", "start_char_pos": 85, "end_char_pos": 86 }, { "type": "R", "before": "process", "after": "assets", "start_char_pos": 214, "end_char_pos": 221 }, { "type": "R", "before": "a principal half-eigenvalue of a Pucci extremal", "after": "the principal eigenvalue of a fully nonlinear elliptic", "start_char_pos": 384, "end_char_pos": 431 }, { "type": "A", "before": null, "after": ", by slightly restricting the collection of non-dominated probability measures", "start_char_pos": 474, "end_char_pos": 474 } ]
[ 0, 237 ]
1107.2988
3
This paper resolves a question proposed in Kardaras and Robertson (2012) arXiv:1005.3454 ]: how to invest in a robust growth-optimal way in a market where precise knowledge of the covariance structure of the underlying assets is unavailable. Among an appropriate class of admissible covariance structures, we characterize the optimal trading strategy in terms of a generalized version of the principal eigenvalue of a fully nonlinear elliptic operator and its associated eigenfunction, by slightly restricting the collection of non-dominated probability measures.
This paper resolves a question proposed in Kardaras and Robertson Ann. Appl. Probab. 22 (2012) 1576-1610 ]: how to invest in a robust growth-optimal way in a market where precise knowledge of the covariance structure of the underlying assets is unavailable. Among an appropriate class of admissible covariance structures, we characterize the optimal trading strategy in terms of a generalized version of the principal eigenvalue of a fully nonlinear elliptic operator and its associated eigenfunction, by slightly restricting the collection of nondominated probability measures.
[ { "type": "A", "before": null, "after": "Ann. Appl. Probab. 22", "start_char_pos": 66, "end_char_pos": 66 }, { "type": "R", "before": "arXiv:1005.3454", "after": "1576-1610", "start_char_pos": 74, "end_char_pos": 89 }, { "type": "R", "before": "non-dominated", "after": "nondominated", "start_char_pos": 529, "end_char_pos": 542 } ]
[ 0, 242 ]
1107.3278
1
Most biological processes are described as a series of interactions between proteins and other molecules, and interactions are in turn described in terms of atomic structures. To annotate protein functions as sets of interaction states at atomic resolution, and thereby to better understand the relation between protein interactions and biological functions, we conducted exhaustive all-against-all atomic structure comparisons of all known interfaces for binding ligands including small molecules, proteins and nucleic acids, and identified recurring elementary motifs. By integrating the elementary motifs associated with each subunit, we defined composite motifs which represent context-dependent combinations of elementary motifs. It is demonstrated that function similarity can be better inferred from composite motif similarity compared to the similarity of protein sequences or of individual interfaces . By integrating the composite motifs associated with each protein function, we define meta-composite motifs each of which is regarded as a time-independent diagrammatic representation of a biological process. It is shown that meta-composite motifs provide richer annotations of biological processes than sequence clusters. The present results serve as a basis for bridging atomic structures to higher-order biological phenomena by classification and integration of interaction interface structures.
Most biological processes are described as a series of interactions between proteins and other molecules, and interactions are in turn described in terms of atomic structures. To annotate protein functions as sets of interaction states at atomic resolution, and thereby to better understand the relation between protein interactions and biological functions, we conducted exhaustive all-against-all atomic structure comparisons of all known binding sites for ligands including small molecules, proteins and nucleic acids, and identified recurring elementary motifs. By integrating the elementary motifs associated with each subunit, we defined composite motifs which represent context-dependent combinations of elementary motifs. It is demonstrated that function similarity can be better inferred from composite motif similarity compared to the similarity of protein sequences or of individual binding sites . By integrating the composite motifs associated with each protein function, we define meta-composite motifs each of which is regarded as a time-independent diagrammatic representation of a biological process. It is shown that meta-composite motifs provide richer annotations of biological processes than sequence clusters. The present results serve as a basis for bridging atomic structures to higher-order biological phenomena by classification and integration of binding site structures.
[ { "type": "R", "before": "interfaces for binding", "after": "binding sites for", "start_char_pos": 441, "end_char_pos": 463 }, { "type": "R", "before": "interfaces", "after": "binding sites", "start_char_pos": 899, "end_char_pos": 909 }, { "type": "R", "before": "interaction interface", "after": "binding site", "start_char_pos": 1376, "end_char_pos": 1397 } ]
[ 0, 175, 570, 734, 911, 1119, 1233 ]
1107.4476
1
We study how quantization, occurring when a continuously varying process is approximated by or observed on a grid of discrete values, changes the properties of a Gaussian long-memory process. By computing the asymptotic behavior of the autocovariance and of the spectral density , we find that the quantized process has the same Hurst exponent of the original process. We show that the log-periodogram regression and the Detrended Fluctuation Analysis (DFA) are severely negatively biased estimators of the Hurst exponent for quantized processes . We compute the asymptotics of the DFA for a generic long-memory process and we study them for quantized processes.
We study how the round-off error changes the statistical properties of a Gaussian long memory process. We show that the autocovariance and the spectral density of the discretized process (i.e. the process with round-off error) are asymptotically rescaled by a factor smaller than one, and we compute exactly this scaling factor. Consequently, we find that the discretized process is also long memory with the same Hurst exponent as the original process. We consider the properties of two estimators of the Hurst exponent, namely the log-periodogram regression and the Detrended Fluctuation Analysis (DFA) . By using numerical simulations and analytical considerations we show that the estimators of the Hurst exponent of the discretized process are severely negatively biased . We compute the asymptotic properties of the DFA for a generic long memory process and we apply the result to discretized processes.
[ { "type": "R", "before": "quantization, occurring when a continuously varying process is approximated by or observed on a grid of discrete values, changes the", "after": "the round-off error changes the statistical", "start_char_pos": 13, "end_char_pos": 145 }, { "type": "R", "before": "long-memory process. By computing the asymptotic behavior of the autocovariance and of", "after": "long memory process. We show that the autocovariance and", "start_char_pos": 171, "end_char_pos": 257 }, { "type": "R", "before": ", we", "after": "of the discretized process (i.e. the process with round-off error) are asymptotically rescaled by a factor smaller than one, and we compute exactly this scaling factor. Consequently, we", "start_char_pos": 279, "end_char_pos": 283 }, { "type": "R", "before": "quantized process has", "after": "discretized process is also long memory with", "start_char_pos": 298, "end_char_pos": 319 }, { "type": "R", "before": "of", "after": "as", "start_char_pos": 344, "end_char_pos": 346 }, { "type": "R", "before": "show that the", "after": "consider the properties of two estimators of the Hurst exponent, namely the", "start_char_pos": 372, "end_char_pos": 385 }, { "type": "R", "before": "are severely negatively biased", "after": ". By using numerical simulations and analytical considerations we show that the", "start_char_pos": 458, "end_char_pos": 488 }, { "type": "R", "before": "for quantized processes", "after": "of the discretized process are severely negatively biased", "start_char_pos": 522, "end_char_pos": 545 }, { "type": "R", "before": "asymptotics", "after": "asymptotic properties", "start_char_pos": 563, "end_char_pos": 574 }, { "type": "R", "before": "long-memory", "after": "long memory", "start_char_pos": 600, "end_char_pos": 611 }, { "type": "R", "before": "study them for quantized", "after": "apply the result to discretized", "start_char_pos": 627, "end_char_pos": 651 } ]
[ 0, 191, 368, 547 ]
1107.4476
2
We study how the round-off error changes the statistical properties of a Gaussian long memory process. We show that the autocovariance and the spectral density of the discretized process (i.e. the process with round-off error) are asymptotically rescaled by a factor smaller than one, and we compute exactly this scaling factor. Consequently, we find that the discretized process is also long memory with the same Hurst exponent as the original process. We consider the properties of two estimators of the Hurst exponent, namely the log-periodogram regression and the Detrended Fluctuation Analysis (DFA). By using numerical simulations and analytical considerations we show that the estimators of the Hurst exponent of the discretized process are severely negatively biased . We compute the asymptotic properties of the DFA for a generic long memory process and we apply the result to discretized processes.
We study how the round-off (or discretization) error changes the statistical properties of a Gaussian long memory process. We show that the autocovariance and the spectral density of the discretized process are asymptotically rescaled by a factor smaller than one, and we compute exactly this scaling factor. Consequently, we find that the discretized process is also long memory with the same Hurst exponent as the original process. We consider the properties of two estimators of the Hurst exponent, namely the local Whittle (LW) estimator and the Detrended Fluctuation Analysis (DFA). By using analytical considerations and numerical simulations we show that , in presence of round-off error, both estimators are severely negatively biased in finite samples. Under regularity conditions we prove that the LW estimator applied to discretized processes is consistent and asymptotically normal. Moreover, we compute the asymptotic properties of the DFA for a generic (i.e. non Gaussian) long memory process and we apply the result to discretized processes.
[ { "type": "A", "before": null, "after": "(or discretization)", "start_char_pos": 27, "end_char_pos": 27 }, { "type": "D", "before": "(i.e. the process with round-off error)", "after": null, "start_char_pos": 188, "end_char_pos": 227 }, { "type": "R", "before": "log-periodogram regression", "after": "local Whittle (LW) estimator", "start_char_pos": 534, "end_char_pos": 560 }, { "type": "R", "before": "numerical simulations and analytical considerations", "after": "analytical considerations and numerical simulations", "start_char_pos": 616, "end_char_pos": 667 }, { "type": "R", "before": "the estimators of the Hurst exponent of the discretized process", "after": ", in presence of round-off error, both estimators", "start_char_pos": 681, "end_char_pos": 744 }, { "type": "R", "before": ". We", "after": "in finite samples. Under regularity conditions we prove that the LW estimator applied to discretized processes is consistent and asymptotically normal. Moreover, we", "start_char_pos": 776, "end_char_pos": 780 }, { "type": "A", "before": null, "after": "(i.e. non Gaussian)", "start_char_pos": 840, "end_char_pos": 840 } ]
[ 0, 103, 329, 454, 606, 777 ]
1107.5122
1
We introduce the concept of spontaneous symmetry breaking to arbitrage modeling. In the model, the arbitrage strategy is considered as being in the symmetry breaking phase and the phase transition between arbitrage mode and no-arbitrage mode is triggered by a control parameter. We estimate the control parameter for momentum strategy with real historical data. The momentum strategy aided by symmetry breaking shows stronger performance and has better risk measure than the naive momentum strategy in U.S. and South Korea markets.
We introduce the concept of spontaneous symmetry breaking to arbitrage modeling. In the model, the arbitrage strategy is considered as being in the symmetry breaking phase and the phase transition between arbitrage mode and no-arbitrage mode is triggered by a control parameter. We estimate the control parameter for momentum strategy with real historical data. The momentum strategy aided by symmetry breaking shows stronger performance and has a better risk measure than the naive momentum strategy in U.S. and South Korean markets.
[ { "type": "A", "before": null, "after": "a", "start_char_pos": 446, "end_char_pos": 446 }, { "type": "R", "before": "Korea", "after": "Korean", "start_char_pos": 518, "end_char_pos": 523 } ]
[ 0, 80, 278, 361 ]
1107.5720
1
We study the explicit calculation of the set of superhedging portfolios of contingent claims in a discrete-time market model for d assets with proportional transaction costs when the underlying probability space is finite . The set of superhedging portfolios can be obtained by a recursive construction involving set operations, going backward in the event tree. We reformulate the problem as a sequence of linear vector optimization problems and solve it by adapting known algorithms. A corresponding superhedging strategy can be obtained going forward in the tree. We discuss the selection of a trading strategy from the set of all superhedging trading strategies. Examples are given involving multiple correlated assets and basket options. Furthermore, we relate existing algorithms for the calculation of the scalar superhedging price to the set-valued algorithm by a recent duality theory for vector optimization problems .
We study the explicit calculation of the set of superhedging portfolios of contingent claims in a discrete-time market model for d assets with proportional transaction costs . The set of superhedging portfolios can be obtained by a recursive construction involving set operations, going backward in the event tree. We reformulate the problem as a sequence of linear vector optimization problems and solve it by adapting known algorithms. The corresponding superhedging strategy can be obtained going forward in the tree. Examples are given involving multiple correlated assets and basket options. Furthermore, we relate existing algorithms for the calculation of the scalar superhedging price to the set-valued algorithm by a recent duality theory for vector optimization problems . The main contribution of the paper is to establish the connection to linear vector optimization, which allows to solve numerically multi-asset superhedging problems under transaction costs .
[ { "type": "D", "before": "when the underlying probability space is finite", "after": null, "start_char_pos": 174, "end_char_pos": 221 }, { "type": "R", "before": "A", "after": "The", "start_char_pos": 486, "end_char_pos": 487 }, { "type": "D", "before": "We discuss the selection of a trading strategy from the set of all superhedging trading strategies.", "after": null, "start_char_pos": 567, "end_char_pos": 666 }, { "type": "A", "before": null, "after": ". The main contribution of the paper is to establish the connection to linear vector optimization, which allows to solve numerically multi-asset superhedging problems under transaction costs", "start_char_pos": 927, "end_char_pos": 927 } ]
[ 0, 223, 362, 485, 566, 666, 742 ]
1107.5852
1
We consider several problems of optimal investment with intermediate consumption in the framework of an incomplete semimartingale model of a financial market. Our goal is to find minimal conditions on the model and the utility stochastic field for the validity of several key assertions of the theory to hold true. We show that a necessary and sufficient condition on both the utility stochastic field and the model is that the value functions of the primal and dual problems are finite.
In this paper we consider a problem of optimal investment with intermediate consumption in the framework of an incomplete semimartingale model of a financial market. We show that a necessary and sufficient condition for the validity of key assertions of the theory is that the value functions of the primal and dual problems are finite.
[ { "type": "R", "before": "We consider several problems", "after": "In this paper we consider a problem", "start_char_pos": 0, "end_char_pos": 28 }, { "type": "D", "before": "Our goal is to find minimal conditions on the model and the utility stochastic field for the validity of several key assertions of the theory to hold true.", "after": null, "start_char_pos": 159, "end_char_pos": 314 }, { "type": "R", "before": "on both the utility stochastic field and the model", "after": "for the validity of key assertions of the theory", "start_char_pos": 365, "end_char_pos": 415 } ]
[ 0, 158, 314 ]
1107.5852
2
In this paper we consider a problem of optimal investment with intermediate consumption in the framework of an incomplete semimartingale model of a financial market. We show that a necessary and sufficient condition for the validity of key assertions of the theory is that the value functions of the primal and dual problems are finite .
We consider a problem of optimal investment with intermediate consumption in the framework of an incomplete semimartingale model of a financial market. We show that a necessary and sufficient condition for the validity of key assertions of the theory is that the value functions of the primal and dual problems are finite
[ { "type": "R", "before": "In this paper we", "after": "We", "start_char_pos": 0, "end_char_pos": 16 }, { "type": "D", "before": ".", "after": null, "start_char_pos": 336, "end_char_pos": 337 } ]
[ 0, 165 ]
1108.0188
1
This paper proposes an alternative to the classical price-adjustment mechanism (called " t\^atonnement " after Walras) that is second-order in time. The proposed mechanism, an analogue to the damped harmonic oscillator, provides a dynamic equilibration process that depends only on local information. The discrete-time form of the model can result in two-step limit cycles, but as the distance covered by the cycle depends on the size of the damping, the proposed mechanism can lead to both highly unstable and relatively stable behaviour, as observed in real economies . We suggest an economic interpretation for the process .
This paper proposes an alternative to the classical price-adjustment mechanism (called " t\^{a " after Walras) that is second-order in time. The proposed mechanism, an analogue to the damped harmonic oscillator, provides a dynamic equilibration process that depends only on local information. We show how such a process can result from simple behavioural rules. The discrete-time form of the model can result in two-step limit cycles, but as the distance covered by the cycle depends on the size of the damping, the proposed mechanism can lead to both highly unstable and relatively stable behaviour, as observed in real economies .
[ { "type": "R", "before": "t\\^atonnement", "after": "t\\^{a", "start_char_pos": 89, "end_char_pos": 102 }, { "type": "A", "before": null, "after": "We show how such a process can result from simple behavioural rules.", "start_char_pos": 301, "end_char_pos": 301 }, { "type": "D", "before": ". We suggest an economic interpretation for the process", "after": null, "start_char_pos": 571, "end_char_pos": 626 } ]
[ 0, 148, 300, 572 ]
1108.0719
1
This paper studies pricing of stock options for the case when the evolution of the risk-free assets or bond is stochastic . We show that, in the typical scenario, the martingale measure is not unique , that there are non-replicable claims , and that the martingale prices can vary significantly; for instance, for a European put option, any positive real number is a martingale price for some martingale measure. In addition, the second moment of the hedging error for a strategy calculated via a given martingale measure can take any arbitrary positive value under some equivalent measure. Some reasonable choices of martingale measures are suggested, including a measure that ensures local risk minimizing hedging strategy .
This paper studies stock options pricing in the presence of the stochastic deviations in bond prices. The martingale measure is not unique for this market, and there are non-replicable claims . It is shown that arbitrarily small deviations cause significant changes in the market properties. In particular, the martingale prices and the second moment of the hedging error can vary significantly and take extreme values, for some extreme choices of the martingale measures. The paper suggests ad discusses some choices of the martingale measures .
[ { "type": "R", "before": "pricing of stock options for the case when the evolution of the risk-free assets or bond is stochastic . We show that, in the typical scenario, the", "after": "stock options pricing in the presence of the stochastic deviations in bond prices. The", "start_char_pos": 19, "end_char_pos": 166 }, { "type": "R", "before": ", that", "after": "for this market, and", "start_char_pos": 200, "end_char_pos": 206 }, { "type": "R", "before": ", and that the martingale prices can vary significantly; for instance, for a European put option, any positive real number is a martingale price for some martingale measure. In addition, the", "after": ". It is shown that arbitrarily small deviations cause significant changes in the market properties. In particular, the martingale prices and the", "start_char_pos": 239, "end_char_pos": 429 }, { "type": "R", "before": "for a strategy calculated via a given martingale measure can take any arbitrary positive value under some equivalent measure. Some reasonable choices of martingale measures are suggested, including a measure that ensures local risk minimizing hedging strategy", "after": "can vary significantly and take extreme values, for some extreme choices of the martingale measures. The paper suggests ad discusses some choices of the martingale measures", "start_char_pos": 465, "end_char_pos": 724 } ]
[ 0, 123, 295, 412, 590 ]
1108.0719
2
This paper studies stock options pricing in the presence of the stochastic deviations in bond prices. The martingale measure is not unique for this market, and there are non-replicable claims. It is shown that arbitrarily small deviations cause significant changes in the market properties. In particular, the martingale prices and the second moment of the hedging error can vary significantly and take extreme values, for some extreme choices of the martingale measures. The paper suggests ad discusses some choices of the martingale measures .
This papers addresses the stock option pricing problem in a continuous time market model where there are two stochastic tradable assets, and one of them is selected as a num\'eraire. It is shown that the presence of arbitrarily small stochastic deviations in the evolution of the num\'eraire process causes significant changes in the market properties. In particular, an equivalent martingale measure is not unique for this market, and there are non-replicable claims. The martingale prices and the hedging error can vary significantly and take extreme values, for some extreme choices of the equivalent martingale measures. Some rational choices of the equivalent martingale measures are suggested and discussed, including implied measures calculated from observed bond prices. This allows to calculate the implied market price of risk process .
[ { "type": "R", "before": "paper studies stock options pricing in", "after": "papers addresses the stock option pricing problem in a continuous time market model where there are two stochastic tradable assets, and one of them is selected as a num\\'eraire. It is shown that", "start_char_pos": 5, "end_char_pos": 43 }, { "type": "R", "before": "the", "after": "arbitrarily small", "start_char_pos": 60, "end_char_pos": 63 }, { "type": "R", "before": "bond prices. The", "after": "the evolution of the num\\'eraire process causes significant changes in the market properties. In particular, an equivalent", "start_char_pos": 89, "end_char_pos": 105 }, { "type": "R", "before": "It is shown that arbitrarily small deviations cause significant changes in the market properties. In particular, the", "after": "The", "start_char_pos": 193, "end_char_pos": 309 }, { "type": "D", "before": "second moment of the", "after": null, "start_char_pos": 336, "end_char_pos": 356 }, { "type": "A", "before": null, "after": "equivalent", "start_char_pos": 451, "end_char_pos": 451 }, { "type": "R", "before": "The paper suggests ad discusses some", "after": "Some rational", "start_char_pos": 473, "end_char_pos": 509 }, { "type": "R", "before": "martingale measures", "after": "equivalent martingale measures are suggested and discussed, including implied measures calculated from observed bond prices. This allows to calculate the implied market price of risk process", "start_char_pos": 525, "end_char_pos": 544 } ]
[ 0, 101, 192, 290, 472 ]
1108.1035
1
The aim of this paper is to construct and analyze explicit solutions to a class of Hamilton-Jacobi-Bellman equations with range bounds on the optimal response variable. We construct a fully nonlinear partial differential equation as a model for the evolution of the risk preference in the optimal investment problem under the random risk process . We construct monotone risk-seeking and risk-averse traveling wave solutions .
The aim of this paper is to construct and analyze solutions to a class of Hamilton-Jacobi-Bellman equations with range bounds on the optimal response variable. Using the Riccati transformation we derive and analyze a fully nonlinear parabolic partial differential equation for the optimal response function . We construct monotone traveling wave solutions and identify parametric regions for which the traveling wave solution has a positive or negative wave speed .
[ { "type": "D", "before": "explicit", "after": null, "start_char_pos": 50, "end_char_pos": 58 }, { "type": "R", "before": "We construct", "after": "Using the Riccati transformation we derive and analyze", "start_char_pos": 169, "end_char_pos": 181 }, { "type": "A", "before": null, "after": "parabolic", "start_char_pos": 200, "end_char_pos": 200 }, { "type": "R", "before": "as a model for the evolution of the risk preference in the optimal investment problem under the random risk process", "after": "for the optimal response function", "start_char_pos": 231, "end_char_pos": 346 }, { "type": "D", "before": "risk-seeking and risk-averse", "after": null, "start_char_pos": 371, "end_char_pos": 399 }, { "type": "A", "before": null, "after": "and identify parametric regions for which the traveling wave solution has a positive or negative wave speed", "start_char_pos": 425, "end_char_pos": 425 } ]
[ 0, 168, 348 ]
1108.1167
1
In a market with one safe and one risky asset, an investor with a long horizon, constant investment opportunities, and constant relative risk aversion trades with small proportional transaction costs. We derive explicit formulas for the optimal investment policy, its implied welfare, liquidity premium, and trading volume. At the first order, the liquidity premium equals the spread, times share turnover, times a universal constant. Results are robust to consumption and finite-horizons. If the mean-variance ratio is constant, and price and volatility shocks uncorrelated, they are also robust to heteroskedasticity . We exploit the equivalence of the transaction cost market to another frictionless market, with a shadow risky asset, in which investment opportunities are stochastic. The shadow price is also found explicitly.
In a market with one safe and one risky asset, an investor with a long horizon, constant investment opportunities, and constant relative risk aversion trades with small proportional transaction costs. We derive explicit formulas for the optimal investment policy, its implied welfare, liquidity premium, and trading volume. At the first order, the liquidity premium equals the spread, times share turnover, times a universal constant. Results are robust to consumption and finite horizons . We exploit the equivalence of the transaction cost market to another frictionless market, with a shadow risky asset, in which investment opportunities are stochastic. The shadow price is also found explicitly.
[ { "type": "R", "before": "finite-horizons. If the mean-variance ratio is constant, and price and volatility shocks uncorrelated, they are also robust to heteroskedasticity", "after": "finite horizons", "start_char_pos": 473, "end_char_pos": 618 } ]
[ 0, 200, 323, 434, 489, 620, 787 ]
1108.1632
1
Equity order flow is persistent in the sense that buy orders tend to be followed by buy orders and sellorders tend to be followed by sell orders. For equity order flow this persistence is extremely long-ranged, with positive correlations spanning thousands of orders, over time intervals of up to several days. Such persistence in supply and demand is economically important because it influences the market impact as a function of both time and size and because it indicates that the market is in a sense out of equilibrium. Persistence can be caused by two types of behavior : (1) Order splitting, in which a single investor repeatedly places an order of the same sign, or (2) herding, in which different investors place orders of the same sign. We develop a method to decompose the autocorrelation function into splitting and herding components and apply this to order flow data from the London Stock Exchange containing exchange membership identifiers. Members typically act as brokers for other investors, so that it is not clear whether patterns we observe in brokerage data also reflect patterns in the behavior of single investors. To address this problem we develop models for the distortion caused by brokerageand demonstrate that persistence in order flow is overwhelmingly due to order splitting by single investors. At longer time scales we observe that different investors' behavior is anti-correlated. We show that this is due to differences in the response to price-changing vs. non-price-changing market orders .
Order flow in equity markets is remarkably persistent in the sense that order signs (to buy or sell) are positively autocorrelated out to time lags of tens of thousands of orders, corresponding to many days. Two possible explanations are herding, corresponding to positive correlation in the behavior of different investors, or order splitting, corresponding to positive autocorrelation in the behavior of single investors. We investigate this using order flow data from the London Stock Exchange for which we have membership identifiers. By formulating models for herding and order splitting, as well as models for brokerage choice, we are able to overcome the distortion introduced by brokerage. On timescales of less than a few hours the persistence of order flow is overwhelmingly due to splitting rather than herding. We also study the properties of brokerage order flow and show that it is remarkably consistent both cross-sectionally and longitudinally .
[ { "type": "R", "before": "Equity order flow is", "after": "Order flow in equity markets is remarkably", "start_char_pos": 0, "end_char_pos": 20 }, { "type": "R", "before": "buy orders tend to be followed by buy orders and sellorders tend to be followed by sell orders. For equity order flow this persistence is extremely long-ranged, with positive correlations spanning", "after": "order signs (to buy or sell) are positively autocorrelated out to time lags of tens of", "start_char_pos": 50, "end_char_pos": 246 }, { "type": "R", "before": "over time intervals of up to several days. Such persistence in supply and demand is economically important because it influences the market impact as a function of both time and size and because it indicates that the market is in a sense out of equilibrium. Persistence can be caused by two types of behavior : (1) Order splitting, in which a single investor repeatedly places an order of the same sign, or (2) herding, in which different investors place orders of the same sign. We develop a method to decompose the autocorrelation function into splitting and herding components and apply this to", "after": "corresponding to many days. Two possible explanations are herding, corresponding to positive correlation in the behavior of different investors, or order splitting, corresponding to positive autocorrelation in the behavior of single investors. We investigate this using", "start_char_pos": 268, "end_char_pos": 865 }, { "type": "R", "before": "containing exchange", "after": "for which we have", "start_char_pos": 913, "end_char_pos": 932 }, { "type": "R", "before": "Members typically act as brokers for other investors, so that it is not clear whether patterns we observe in brokerage data also reflect patterns in the behavior of single investors. To address this problem we develop models for the distortion caused by brokerageand demonstrate that persistence in", "after": "By formulating models for herding and order splitting, as well as models for brokerage choice, we are able to overcome the distortion introduced by brokerage. On timescales of less than a few hours the persistence of", "start_char_pos": 957, "end_char_pos": 1255 }, { "type": "R", "before": "order splitting by single investors. At longer time scales we observe that different investors' behavior is anti-correlated. We show that this is due to differences in the response to price-changing vs. non-price-changing market orders", "after": "splitting rather than herding. We also study the properties of brokerage order flow and show that it is remarkably consistent both cross-sectionally and longitudinally", "start_char_pos": 1292, "end_char_pos": 1527 } ]
[ 0, 145, 310, 525, 747, 956, 1139, 1328, 1416 ]
1108.1910
1
The pricing and hedging of a general class of options (including American, Bermudan and European options) on multiple assets are studied in the context of currency markets where trading in all assets is subject to proportional transaction costs, and where the existence of a riskfree numeraire is not assumed. Probabilistic dual representations are obtained for the bid and ask pricesof such options, together with constructions of hedging strategies , optimal stopping times and approximate martingale representations for both long and short option positions .
The pricing and hedging of a general class of options (including American, Bermudan and European options) on multiple assets are studied in the context of currency markets where trading is subject to proportional transaction costs, and where the existence of a risk-free num\'eraire is not assumed. Constructions leading to algorithms for computing the prices, optimal hedging strategies and stopping times are presented for both long and short option positions in this setting, together with probabilistic (martingale) representations for the option prices .
[ { "type": "D", "before": "in all assets", "after": null, "start_char_pos": 186, "end_char_pos": 199 }, { "type": "R", "before": "riskfree numeraire", "after": "risk-free num\\'eraire", "start_char_pos": 275, "end_char_pos": 293 }, { "type": "R", "before": "Probabilistic dual representations are obtained for the bid and ask pricesof such options, together with constructions of hedging strategies , optimal stopping times and approximate martingale representations", "after": "Constructions leading to algorithms for computing the prices, optimal hedging strategies and stopping times are presented", "start_char_pos": 310, "end_char_pos": 518 }, { "type": "A", "before": null, "after": "in this setting, together with probabilistic (martingale) representations for the option prices", "start_char_pos": 560, "end_char_pos": 560 } ]
[ 0, 309 ]
1108.2305
1
In emissions trading, the initial permit allocation is an intractable issue because it needs to be essentially fair to the participating countries. There are many ways to distribute a given total amount of emissions permits among countries, but the existing distribution methods such as auctioning and grandfathering have been debated. Here we describe a new model for permit allocation in emissions trading using the Boltzmann distribution. The Boltzmann distribution is introduced to permit allocation by combining it with concepts in emissions trading. A price determination mechanism for emission permits is then developed in relation to the \beta%DIFDELCMD < } %%% value in the Boltzmann distribution. Finally, it is demonstrated how emissions permits can be practically allocated among participating countries in empirical results . The new allocation model using the Boltzmann distribution describes a most probable, natural, and unbiased distribution of emissions permits among multiple countries. Based on its simplicity and versatility, the model using the Boltzmann distribution has potential for various economic and environmental studies .
In emissions trading, the initial allocation of permits is an intractable issue because it needs to be essentially fair to the participating countries. There are many ways to distribute a given total amount of emissions permits among countries, but the existing distribution methods , such as auctioning and grandfathering , have been debated. In this paper we describe a new method for allocating permits in emissions trading using the Boltzmann distribution. We introduce the Boltzmann distribution to permit allocation by combining it with concepts in emissions trading. %DIFDELCMD < } %%% We then demonstrate through empirical data analysis how emissions permits can be allocated in practice among participating countries . The new allocation method using the Boltzmann distribution describes the most probable, natural, and unbiased distribution of emissions permits among multiple countries. Simple and versatile, this new method holds potential for many economic and environmental applications .
[ { "type": "R", "before": "permit allocation", "after": "allocation of permits", "start_char_pos": 34, "end_char_pos": 51 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 279, "end_char_pos": 279 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 318, "end_char_pos": 318 }, { "type": "R", "before": "Here", "after": "In this paper", "start_char_pos": 338, "end_char_pos": 342 }, { "type": "R", "before": "model for permit allocation", "after": "method for allocating permits", "start_char_pos": 361, "end_char_pos": 388 }, { "type": "R", "before": "The Boltzmann distribution is introduced", "after": "We introduce the Boltzmann distribution", "start_char_pos": 444, "end_char_pos": 484 }, { "type": "D", "before": "A price determination mechanism for emission permits is then developed in relation to the", "after": null, "start_char_pos": 558, "end_char_pos": 647 }, { "type": "D", "before": "\\beta", "after": null, "start_char_pos": 648, "end_char_pos": 653 }, { "type": "R", "before": "value in the Boltzmann distribution. Finally, it is demonstrated", "after": "We then demonstrate through empirical data analysis", "start_char_pos": 672, "end_char_pos": 736 }, { "type": "R", "before": "practically allocated", "after": "allocated in practice", "start_char_pos": 766, "end_char_pos": 787 }, { "type": "D", "before": "in empirical results", "after": null, "start_char_pos": 818, "end_char_pos": 838 }, { "type": "R", "before": "model", "after": "method", "start_char_pos": 860, "end_char_pos": 865 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 909, "end_char_pos": 910 }, { "type": "R", "before": "Based on its simplicity and versatility, the model using the Boltzmann distribution has potential for various", "after": "Simple and versatile, this new method holds potential for many", "start_char_pos": 1008, "end_char_pos": 1117 }, { "type": "R", "before": "studies", "after": "applications", "start_char_pos": 1145, "end_char_pos": 1152 } ]
[ 0, 147, 337, 443, 557, 708, 840, 1007 ]
1108.3155
1
In a previous publication, we have discussed the common belief that for the pedestrian observer, financial markets look completely random with erratic and uncontrollable behavior. To a large extend, this is correct. However, it has been shown on one example, the Euro future contract, that the difference between real financial time series and random walks, as small as it is, is detectable using modern statistical multivariate analysis . This has been achieved using several triggers encoded in a multivariate trading system. Then a non-random content of the financial series can be inferred . Of course, this is not a general proof , as we focus on one particular example and the generality of the process can not be claimed. In this letter , we produce a second example on a completely different markets, largely uncorrelated to the Euro future, namely the DAX and Cacao future contracts. The same procedure is followed using a trading system, based on exactly the same ingredients. We show that similar results can be obtained and we conclude that this is an evidence that some invariants, as encoded in our system, have been identified. They provide a kind of quantification of the non-random content of the financial markets explored over a 10 years period of time.
For the pedestrian observer, financial markets look completely random with erratic and uncontrollable behavior. To a large extend, this is correct. At first approximation the difference between real price changes and the random walk model is too small to be detected using traditional time series analysis. However, we show in the following that this difference between real financial time series and random walks, as small as it is, is detectable using modern statistical multivariate analysis , with several triggers encoded in trading systems. This kind of analysis are based on methods widely used in nuclear physics, with large samples of data and advanced statistical inference. Considering the movements of the Euro future contract at high frequency, we show that a part of the non-random content of this series can be inferred , namely the trend-following content depending on volatility ranges . Of course, this is not a general proof of statistical inference , as we focus on one particular example and the generality of the process can not be claimed. Therefore , we produce other examples on a completely different markets, largely uncorrelated to the Euro future, namely the DAX and Cacao future contracts. The same procedure is followed using a trading system, based on the same ingredients. We show that similar results can be obtained and we conclude that this is an evidence that some invariants, as encoded in our system, have been identified. They provide a kind of quantification of the non-random content of the financial markets explored over a 10 years period of time.
[ { "type": "R", "before": "In a previous publication, we have discussed the common belief that for the", "after": "For the", "start_char_pos": 0, "end_char_pos": 75 }, { "type": "R", "before": "However, it has been shown on one example, the Euro future contract, that the", "after": "At first approximation the difference between real price changes and the random walk model is too small to be detected using traditional time series analysis. However, we show in the following that this", "start_char_pos": 216, "end_char_pos": 293 }, { "type": "R", "before": ". This has been achieved using", "after": ", with", "start_char_pos": 438, "end_char_pos": 468 }, { "type": "R", "before": "a multivariate trading system. Then a", "after": "trading systems. This kind of analysis are based on methods widely used in nuclear physics, with large samples of data and advanced statistical inference. Considering the movements of the Euro future contract at high frequency, we show that a part of the", "start_char_pos": 497, "end_char_pos": 534 }, { "type": "R", "before": "the financial", "after": "this", "start_char_pos": 557, "end_char_pos": 570 }, { "type": "A", "before": null, "after": ", namely the trend-following content depending on volatility ranges", "start_char_pos": 594, "end_char_pos": 594 }, { "type": "A", "before": null, "after": "of statistical inference", "start_char_pos": 636, "end_char_pos": 636 }, { "type": "R", "before": "In this letter", "after": "Therefore", "start_char_pos": 731, "end_char_pos": 745 }, { "type": "R", "before": "a second example", "after": "other examples", "start_char_pos": 759, "end_char_pos": 775 }, { "type": "D", "before": "exactly", "after": null, "start_char_pos": 959, "end_char_pos": 966 } ]
[ 0, 179, 215, 439, 527, 596, 730, 894, 988, 1144 ]
1108.4329
1
The dynamics of noise-resilient Boolean networks with majority functions and arbitrary topologies is investigated. A wide class of possible topological configurations is parametrized as a general blockmodel. For this class of networks, the dynamics always undergoes a phase transition from a non-ergodic regime, where a memory of its past states is preserved, to an ergodic regime, where no such memory exists and every microstate is equally probable. Both the average error on the network, as well as the critical value of noise where the transition occurs are investigated analytically, and compared to numerical simulations. The results for "partially dense" networks, comprised of relatively few, but dynamically important nodes, which have a number of inputs which greatly exceeds the average for the entire network, give very general upper bounds on the maximum resilience against noise attainable on globally sparse systems.
The dynamics of noise-resilient Boolean networks with majority functions and diverse topologies is investigated. A wide class of possible topological configurations is parametrized as a stochastic blockmodel. For this class of networks, the dynamics always undergoes a phase transition from a non-ergodic regime, where a memory of its past states is preserved, to an ergodic regime, where no such memory exists and every microstate is equally probable. Both the average error on the network, as well as the critical value of noise where the transition occurs are investigated analytically, and compared to numerical simulations. The results for "partially dense" networks, comprised of relatively few, but dynamically important nodes, which have a number of inputs which greatly exceeds the average for the entire network, give very general upper bounds on the maximum resilience against noise attainable on globally sparse systems.
[ { "type": "R", "before": "arbitrary", "after": "diverse", "start_char_pos": 77, "end_char_pos": 86 }, { "type": "R", "before": "general", "after": "stochastic", "start_char_pos": 188, "end_char_pos": 195 } ]
[ 0, 114, 207, 451, 627 ]
1108.4341
1
As a model of evolved genetic regulatory systems, we investigate the evolution of Boolean networks , which are subject to a selective pressure which favors robustness against noise . By mapping the evolutionary process into a statistical ensemble, and minimizing its associated free energy, we find the structural properties which emerge as the selective pressure is increased, and identify a phase transition from a random topology to a "segregated core" structure, where a smaller and more densely connected subset of the nodes is responsible for most of the regulation in the network. This segregated structure is identical to what is found in gene regulatory networks, where only a much smaller subset of genes - those responsible for transcription factors - is responsible for global regulation. We obtain the full phase diagram of the evolutionary process as a function of selective pressure and the average number of inputs per node. We compare the theoretical predictions with Monte-Carlo simulations of actual networks, and find an excellent agreement .
We investigate the evolution of Boolean networks subject to a selective pressure which favors robustness against noise , as a model of evolved genetic regulatory systems . By mapping the evolutionary process into a statistical ensemble, and minimizing its associated free energy, we find the structural properties which emerge as the selective pressure is increased, and identify a phase transition from a random topology to a "segregated core" structure, where a smaller and more densely connected subset of the nodes is responsible for most of the regulation in the network. This segregated structure is identical to what is found in gene regulatory networks, where only a much smaller subset of genes - those responsible for transcription factors - is responsible for global regulation. We obtain the full phase diagram of the evolutionary process as a function of selective pressure and the average number of inputs per node. We compare the theoretical predictions with Monte-Carlo simulations of evolved networks, and with empirical data for Saccharomyces cerevisiae and Escherichi coli .
[ { "type": "R", "before": "As a model of evolved genetic regulatory systems, we", "after": "We", "start_char_pos": 0, "end_char_pos": 52 }, { "type": "D", "before": ", which are", "after": null, "start_char_pos": 99, "end_char_pos": 110 }, { "type": "A", "before": null, "after": ", as a model of evolved genetic regulatory systems", "start_char_pos": 181, "end_char_pos": 181 }, { "type": "R", "before": "actual", "after": "evolved", "start_char_pos": 1013, "end_char_pos": 1019 }, { "type": "R", "before": "find an excellent agreement", "after": "with empirical data for Saccharomyces cerevisiae and Escherichi coli", "start_char_pos": 1034, "end_char_pos": 1061 } ]
[ 0, 183, 588, 801, 941 ]
1108.4341
2
We investigate the evolution of Boolean networks subject to a selective pressure which favors robustness against noise, as a model of evolved genetic regulatory systems. By mapping the evolutionary process into a statistical ensemble , and minimizing its associated free energy, we find the structural properties which emerge as the selective pressure is increased , and identify a phase transition from a random topology to a "segregated core" structure, where a smaller and more densely connected subset of the nodes is responsible for most of the regulation in the network. This segregated structure is identical to what is found in gene regulatory networks, where only a much smaller subset of genes - those responsible for transcription factors - is responsible for global regulation. We obtain the full phase diagram of the evolutionary process as a function of selective pressure and the average number of inputs per node. We compare the theoretical predictions with Monte-Carlo simulations of evolved networks , and with empirical data for Saccharomyces cerevisiae and Escherichi coli.
We investigate the evolution of Boolean networks subject to a selective pressure which favors robustness against noise, as a model of evolved genetic regulatory systems. By mapping the evolutionary process into a statistical ensemble and minimizing its associated free energy, we find the structural properties which emerge as the selective pressure is increased and identify a phase transition from a random topology to a "segregated core" structure, where a smaller and more densely connected subset of the nodes is responsible for most of the regulation in the network. This segregated structure is very similar qualitatively to what is found in gene regulatory networks, where only a much smaller subset of genes --- those responsible for transcription factors --- is responsible for global regulation. We obtain the full phase diagram of the evolutionary process as a function of selective pressure and the average number of inputs per node. We compare the theoretical predictions with Monte Carlo simulations of evolved networks and with empirical data for Saccharomyces cerevisiae and Escherichia coli.
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 234, "end_char_pos": 235 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 365, "end_char_pos": 366 }, { "type": "R", "before": "identical", "after": "very similar qualitatively", "start_char_pos": 606, "end_char_pos": 615 }, { "type": "R", "before": "-", "after": "---", "start_char_pos": 704, "end_char_pos": 705 }, { "type": "R", "before": "-", "after": "---", "start_char_pos": 750, "end_char_pos": 751 }, { "type": "R", "before": "Monte-Carlo", "after": "Monte Carlo", "start_char_pos": 974, "end_char_pos": 985 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1018, "end_char_pos": 1019 }, { "type": "R", "before": "Escherichi", "after": "Escherichia", "start_char_pos": 1077, "end_char_pos": 1087 } ]
[ 0, 169, 576, 789, 929 ]
1108.4936
1
The homology search process depends on the free energy of double stranded DNA (dsDNA) triplets bound to pre-synaptic filaments . It has been assumed that the total free energy is a linear function of the number of bound dsDNA triplets . We present an analytical model using a simplified version of the known structure of dsDNA bound to ssDNA/RecA filaments. This model predicts that the mechanical energy stored in dsDNA bound to RecA increases non-linearly with the number of contiguous bound dsDNA triplets. We suggest that the free energy increase for the homology searching state is much more rapid than the increase for the post-strand exchange state and propose that this difference may play a vital role in the homology search/strand exchange process .
It has long been known that during homology recognition and strand exchange the double stranded DNA (dsDNA) in DNA/RecA filaments is highly extended, but the functional role of the extension has been unclear . We present an analytical model for the dsDNA tension. The model suggests that the binding of additional dsDNA base pairs to the DNA/RecA filament alters the tension in the dsDNA that was already bound to the filament. Such coupled tension changes may explain several previously unexplained experimental results .
[ { "type": "R", "before": "The homology search process depends on the free energy of", "after": "It has long been known that during homology recognition and strand exchange the", "start_char_pos": 0, "end_char_pos": 57 }, { "type": "R", "before": "triplets bound to pre-synaptic filaments . It has been assumed that the total free energy is a linear function of the number of bound dsDNA triplets", "after": "in DNA/RecA filaments is highly extended, but the functional role of the extension has been unclear", "start_char_pos": 86, "end_char_pos": 234 }, { "type": "R", "before": "using a simplified version of the known structure of dsDNA bound to ssDNA/RecA filaments. This model predicts that the mechanical energy stored in dsDNA bound to RecA increases non-linearly with the number of contiguous bound dsDNA triplets. We suggest that the free energy increase for the homology searching state is much more rapid than the increase for the post-strand exchange state and propose that this difference may play a vital role in the homology search/strand exchange process", "after": "for the dsDNA tension. The model suggests that the binding of additional dsDNA base pairs to the DNA/RecA filament alters the tension in the dsDNA that was already bound to the filament. Such coupled tension changes may explain several previously unexplained experimental results", "start_char_pos": 268, "end_char_pos": 757 } ]
[ 0, 128, 236, 357, 509 ]
1108.4936
2
It has long been known that during homology recognition and strand exchange the double stranded DNA (dsDNA) in DNA/RecA filaments is highly extended, but the functional role of the extension has been unclear. We present an analytical model for the dsDNA tension . The model suggests that the binding of additional dsDNA base pairs to the DNA/RecA filament alters the tension in the dsDNA that was already bound to the filament . Such coupled tension changes may explain several previously unexplained experimental results.
It is well known that during homology recognition and strand exchange the double stranded DNA (dsDNA) in DNA/RecA filaments is highly extended, but the functional role of the extension has been unclear. We present an analytical model that calculates the distribution of tension in the extended dsDNA during strand exchange . The model suggests that the binding of additional dsDNA base pairs to the DNA/RecA filament alters the tension in dsDNA that was already bound to the filament , resulting in a non-linear increase in the mechanical energy as a function of the number of bound base pairs. This collective mechanical response may promote homology stringency and underlie unexplained experimental results.
[ { "type": "R", "before": "has long been", "after": "is well", "start_char_pos": 3, "end_char_pos": 16 }, { "type": "R", "before": "for the dsDNA tension", "after": "that calculates the distribution of tension in the extended dsDNA during strand exchange", "start_char_pos": 240, "end_char_pos": 261 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 378, "end_char_pos": 381 }, { "type": "R", "before": ". Such coupled tension changes may explain several previously", "after": ", resulting in a non-linear increase in the mechanical energy as a function of the number of bound base pairs. This collective mechanical response may promote homology stringency and underlie", "start_char_pos": 427, "end_char_pos": 488 } ]
[ 0, 208, 263, 428 ]
1108.5109
1
Heat-bath cooling is a component of practicable algorithmic cooling of spins, an approach which might be useful for in vivo 13C spectroscopy, in particular for prolonged metabolic processes where substrates that are hyperpolarized ex-vivo are not effective. We applied heat-bath cooling to 1,2-13C2-amino acids, using the \alpha\ protons to shift entropy from selected carbons to the environment. For glutamate and glycine, the polarizations of both labeled carbons were enhanced , and in other experiments the total entropy of each spin system was shown to decrease . The effect of adding Magnevist, a gadolinium contrast agent, on heat-bath cooling of glutamate was investigated.
Heat-bath cooling is a component of practicable algorithmic cooling of spins, an approach which might be useful for in vivo 13C spectroscopy, in particular for prolonged metabolic processes where substrates that are hyperpolarized ex-vivo are not effective. We applied heat-bath cooling to 1,2-13C2-amino acids, using the alpha protons to shift entropy from selected carbons to the environment. For glutamate and glycine, both carbons were cooled by about 2.5-fold , and in other experiments the polarization of C1 nearly doubled while all other spins had equilibrium polarization, indicating reduction in total entropy . The effect of adding Magnevist, a gadolinium contrast agent, on heat-bath cooling of glutamate was investigated.
[ { "type": "R", "before": "\\alpha\\", "after": "alpha", "start_char_pos": 322, "end_char_pos": 329 }, { "type": "R", "before": "the polarizations of both labeled carbons were enhanced", "after": "both carbons were cooled by about 2.5-fold", "start_char_pos": 424, "end_char_pos": 479 }, { "type": "R", "before": "total entropy of each spin system was shown to decrease", "after": "polarization of C1 nearly doubled while all other spins had equilibrium polarization, indicating reduction in total entropy", "start_char_pos": 511, "end_char_pos": 566 } ]
[ 0, 257, 396, 568 ]
1108.5238
1
Chemical reaction networks taken with mass-action kinetics are dynamical systems that arise in chemical engineering and systems biology. Deciding whether a chemical reaction network admits multiple positive steady states is to determine existence of multiple positive solutions to a system of polynomials with unknown coefficients. In this work, we consider the question of whether the minimal (in a precise sense) networks, which we propose to call `atoms of multistationarity,' characterize the entire set of multistationary networks. We show that if a subnetwork admits multiple nondegenerate positive steady states, then these steady states can be extended to establish multistationarity of a larger network, provided that the two networks share the same stoichiometric subspace. Our result provides the mathematical foundation for a technique used by Siegal-Gaskins et al. of establishing bistability by way of `network ancestry.' Here, our main application is for enumerating small multistationary continuous-flow stirred-tank reactors (CFSTRs), which are networks in which all chemical species take part in the inflow and outflow. Recent work of the first author presented a simple characterization of the one-reaction CFSTRs that admit multiple positive steady states. Here we consider the two-reaction CFSTRs. We enumerate all 386 bimolecular and reversible such networks. Of these, exactly 35 admit multiple positive steady states. Moreover, each admits a unique minimal multistationary subnetwork, and these subnetworks form a poset (with respect to the relation of `removing species') which has 11 minimal elements .
Chemical reaction systems are dynamical systems that arise in chemical engineering and systems biology. In this work, we consider the question of whether the minimal (in a precise sense) multistationary chemical reaction networks, which we propose to call `atoms of multistationarity,' characterize the entire set of multistationary networks. Our main result states that the answer to this question is `yes' in the context of fully open continuous-flow stirred-tank reactors (CFSTRs), which are networks in which all chemical species take part in the inflow and outflow. In order to prove this result, we show that if a subnetwork admits multiple steady states, then these steady states can be lifted to a larger network, provided that the two networks share the same stoichiometric subspace. We also prove an analogous result when a smaller network is obtained from a larger network by `removing species.' Our results provide the mathematical foundation for a technique used by Siegal-Gaskins et al. of establishing bistability by way of `network ancestry.' Additionally, our work provides sufficient conditions for establishing multistationarity by way of atoms and moreover reduces the problem of classifying multistationary CFSTRs to that of cataloging atoms of multistationarity. As an application, we enumerate and classify all 386 bimolecular and reversible two-reaction networks. Of these, exactly 35 admit multiple positive steady states. Moreover, each admits a unique minimal multistationary subnetwork, and these subnetworks form a poset (with respect to the relation of `removing species') which has 11 minimal elements (the atoms of multistationarity) .
[ { "type": "R", "before": "networks taken with mass-action kinetics", "after": "systems", "start_char_pos": 18, "end_char_pos": 58 }, { "type": "D", "before": "Deciding whether a chemical reaction network admits multiple positive steady states is to determine existence of multiple positive solutions to a system of polynomials with unknown coefficients.", "after": null, "start_char_pos": 137, "end_char_pos": 331 }, { "type": "A", "before": null, "after": "multistationary chemical reaction", "start_char_pos": 415, "end_char_pos": 415 }, { "type": "R", "before": "We", "after": "Our main result states that the answer to this question is `yes' in the context of fully open continuous-flow stirred-tank reactors (CFSTRs), which are networks in which all chemical species take part in the inflow and outflow. In order to prove this result, we", "start_char_pos": 538, "end_char_pos": 540 }, { "type": "D", "before": "nondegenerate positive", "after": null, "start_char_pos": 583, "end_char_pos": 605 }, { "type": "R", "before": "extended to establish multistationarity of", "after": "lifted to", "start_char_pos": 653, "end_char_pos": 695 }, { "type": "R", "before": "Our result provides", "after": "We also prove an analogous result when a smaller network is obtained from a larger network by `removing species.' Our results provide", "start_char_pos": 785, "end_char_pos": 804 }, { "type": "R", "before": "Here, our main application is for enumerating small multistationary continuous-flow stirred-tank reactors (CFSTRs), which are networks in which all chemical species take part in the inflow and outflow. Recent work of the first author presented a simple characterization of the one-reaction CFSTRs that admit multiple positive steady states. Here we consider the two-reaction CFSTRs. We enumerate", "after": "Additionally, our work provides sufficient conditions for establishing multistationarity by way of atoms and moreover reduces the problem of classifying multistationary CFSTRs to that of cataloging atoms of multistationarity. As an application, we enumerate and classify", "start_char_pos": 937, "end_char_pos": 1332 }, { "type": "R", "before": "such", "after": "two-reaction", "start_char_pos": 1368, "end_char_pos": 1372 }, { "type": "A", "before": null, "after": "(the atoms of multistationarity)", "start_char_pos": 1628, "end_char_pos": 1628 } ]
[ 0, 136, 331, 537, 784, 936, 1138, 1277, 1319, 1382, 1442 ]
1108.5366
1
We introduce a new approach for calculating ionic fluxes through narrow nano-pores and transmembrane channels. The method relies on a dual-control-volume grand-canonical molecular dynamics (DCV-GCMD) simulation and the analytical solution for the electrostatic potential inside a cylindrical nano-pore recently obtained by Levin [ 1 ]. The theory is used to calculate the ionic fluxes through a gramicidin A channel , obtaining the current-voltage and the current-concentration relations under various experimental conditions. A good agreement with experimental results is observed. 1%DIFDELCMD < ] %%% Y. Levin. Europhys. Letters, 76, 163 (2006) .
We introduce an implicit solvent Molecular Dynamics approach for calculating ionic fluxes through narrow nano-pores and transmembrane channels. The method relies on a dual-control- volume grand-canonical molecular dynamics (DCV-GCMD) simulation and the analytical solution for the electrostatic potential inside a cylindrical nano-pore recently obtained by Levin [ Europhys. Lett., 76, 163 (2006) ]. The theory is used to calculate the ionic fluxes through an artificial trans-membrane c hannel which mimics the antibacterial gramicidin A channel . Both current-voltage and current-concentration relations are calculated under various experimental conditions. %DIFDELCMD < ] %%% We show that our results are comparable to the characteristics associated to the gramicidin A pore, specially the existence of two binding sites inside the pore and the observed saturation in the current-concentration profiles .
[ { "type": "R", "before": "a new", "after": "an implicit solvent Molecular Dynamics", "start_char_pos": 13, "end_char_pos": 18 }, { "type": "R", "before": "dual-control-volume", "after": "dual-control- volume", "start_char_pos": 134, "end_char_pos": 153 }, { "type": "R", "before": "1", "after": "Europhys. Lett., 76, 163 (2006)", "start_char_pos": 331, "end_char_pos": 332 }, { "type": "R", "before": "a", "after": "an artificial trans-membrane c hannel which mimics the antibacterial", "start_char_pos": 393, "end_char_pos": 394 }, { "type": "R", "before": ", obtaining the", "after": ". Both", "start_char_pos": 416, "end_char_pos": 431 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 452, "end_char_pos": 455 }, { "type": "A", "before": null, "after": "are calculated", "start_char_pos": 488, "end_char_pos": 488 }, { "type": "D", "before": "A good agreement with experimental results is observed.", "after": null, "start_char_pos": 528, "end_char_pos": 583 }, { "type": "D", "before": "1", "after": null, "start_char_pos": 584, "end_char_pos": 585 }, { "type": "R", "before": "Y. Levin. Europhys. Letters, 76, 163 (2006)", "after": "We show that our results are comparable to the characteristics associated to the gramicidin A pore, specially the existence of two binding sites inside the pore and the observed saturation in the current-concentration profiles", "start_char_pos": 604, "end_char_pos": 647 } ]
[ 0, 110, 335, 527, 583, 613 ]
1108.5940
1
In this work, we consider the hedging error due to discrete trading in models with jumps. Extending an approach developped by Fukasawa (2011) for continuous processes, we propose a framework enabling to (asymptotically) optimize the discretization times. More precisely, a discretization rule is said to be optimal if for a given cost function, no strategy has (asymptotically, for large cost) a lower mean square discretization error for a smaller cost. We focus on discretization rules based on hitting times and give explicit expressions for the optimal rules within this class.
In this work, we consider the hedging error due to discrete trading in models with jumps. Extending an approach developed by Fukasawa In Stochastic Analysis with Financial Applications (2011) 331-346 Birkh\"{a for continuous processes, we propose a framework enabling us to (asymptotically) optimize the discretization times. More precisely, a discretization rule is said to be optimal if for a given cost function, no strategy has (asymptotically, for large cost) a lower mean square discretization error for a smaller cost. We focus on discretization rules based on hitting times and give explicit expressions for the optimal rules within this class.
[ { "type": "R", "before": "developped by Fukasawa", "after": "developed by Fukasawa", "start_char_pos": 112, "end_char_pos": 134 }, { "type": "A", "before": null, "after": "In Stochastic Analysis with Financial Applications", "start_char_pos": 135, "end_char_pos": 135 }, { "type": "A", "before": null, "after": "331-346 Birkh\\\"{a", "start_char_pos": 143, "end_char_pos": 143 }, { "type": "A", "before": null, "after": "us", "start_char_pos": 202, "end_char_pos": 202 } ]
[ 0, 89, 257, 457 ]
1109.0828
1
The model presented here derives the product life cycle of durable goods. It is based on the idea that the purchase process consists of first purchase and repurchase. First purchase is determined by the market penetration process (diffusion process), while repurchase is the sum of replacement and multiple purchase. The key property of durables goods is to have a mean lifetime in the order of several years. Therefore replacement purchase creates periodic variations of the unit sales (Juglar cycles) having its origin in the initial diffusion process. The theory suggests that there exists two diffusion processes. The first can be described by Bass diffusion and is related to the information spreading process within the social network of potential consumers. The other diffusion process comes into play, when the price of the durable is such, that only those consumers with a sufficient personal income can afford the good . We have to distinguish between a monopoly market and a polypoly/oligopoly market. In the first case periodic variations of the total sales occur caused by the initial Bass diffusion, even when the price is constant. In the latter case the mutual competition between the brands leads with time to a decrease of the mean price. Based on an evolutionary approach , it can be shown that the mean price decreases exponentially and the corresponding diffusion process is governed by Gompertz equation (Gompertz diffusion). Most remarkable is that Gibrat's rule of proportionate growth is a direct consequence of the competition between the brands . The model allows a derivation of the lognormal size distribution of product sales and the logistic replacement of durables in competition. A comparison with empirical data suggests that the theory describes the main trend of the product life cycle superimposed by short term events like the introduction of new models .
A dynamic model of the product lifecycle of (nearly) homogeneous durables in polypoly markets is established. It describes the concurrent evolution of the unit sales and price of durable goods. The theory is based on the idea that the sales dynamics is determined by a meeting process of demanded with supplied product units. Taking advantage from the Bass model for first purchase and a logistic model for repurchase the entire product lifecycle of a durable can be established. For the case of a fast growing supply the model suggests that the mean price of the good decreases according to a logistic law. Both, the established unit sales and price evolution are in agreement with the empirical data studied in this paper. The presented approach discusses further the interference of the diffusion process with the supply dynamics . The model predicts the occurrence of lost sales in the initial stages of the lifecycle due to supply constraints. They are the origin for a retarded market penetration. The theory suggests that the imitation rate B indicating social contagion in the Bass model has its maximum magnitude for the case of a large amount of available units at introduction and a fast output increase. The empirical data of the investigated samples are in qualitative agreement with this prediction .
[ { "type": "R", "before": "The model presented here derives the product life cycle of", "after": "A dynamic model of the product lifecycle of (nearly) homogeneous durables in polypoly markets is established. It describes the concurrent evolution of the unit sales and price of", "start_char_pos": 0, "end_char_pos": 58 }, { "type": "R", "before": "It", "after": "The theory", "start_char_pos": 74, "end_char_pos": 76 }, { "type": "R", "before": "purchase process consists of", "after": "sales dynamics is determined by a meeting process of demanded with supplied product units. Taking advantage from the Bass model for", "start_char_pos": 107, "end_char_pos": 135 }, { "type": "R", "before": "repurchase. First purchase is determined by the market penetration process (diffusion process), while repurchase is the sum of replacement and multiple purchase. The key property of durables goods is to have a mean lifetime in the order of several years. Therefore replacement purchase creates periodic variations of the unit sales (Juglar cycles) having its origin in the initial diffusion process. The theory suggests that there exists two diffusion processes. The first can be described by Bass diffusion and is related to the information spreading process within the social network of potential consumers. The other diffusion process comes into play, when the", "after": "a logistic model for repurchase the entire product lifecycle of a durable can be established. For the case of a fast growing supply the model suggests that the mean", "start_char_pos": 155, "end_char_pos": 818 }, { "type": "R", "before": "durable is such, that only those consumers with a sufficient personal income can afford the good . We have to distinguish between a monopoly market and a polypoly/oligopoly market. In the first case periodic variations of the total sales occur caused by the initial Bass diffusion, even when the price is constant. In the latter case the mutual competition between the brands leads with time to a decrease of the mean price. Based on an evolutionary approach , it can be shown that the mean price decreases exponentially and the corresponding diffusion process is governed by Gompertz equation (Gompertz diffusion). Most remarkable is that Gibrat's rule of proportionate growth is a direct consequence of the competition between the brands", "after": "good decreases according to a logistic law. Both, the established unit sales and price evolution are in agreement with the empirical data studied in this paper. The presented approach discusses further the interference of the diffusion process with the supply dynamics", "start_char_pos": 832, "end_char_pos": 1571 }, { "type": "R", "before": "allows a derivation of the lognormal size distribution of product sales and the logistic replacement of durables in competition. A comparison with empirical data", "after": "predicts the occurrence of lost sales in the initial stages of the lifecycle due to supply constraints. They are the origin for a retarded market penetration. The theory", "start_char_pos": 1584, "end_char_pos": 1745 }, { "type": "R", "before": "theory describes the main trend of the product life cycle superimposed by short term events like the introduction of new models", "after": "imitation rate B indicating social contagion in the Bass model has its maximum magnitude for the case of a large amount of available units at introduction and a fast output increase. 
The empirical data of the investigated samples are in qualitative agreement with this prediction", "start_char_pos": 1764, "end_char_pos": 1891 } ]
[ 0, 73, 166, 316, 409, 554, 617, 764, 930, 1012, 1146, 1256, 1447, 1573, 1712 ]
1109.0891
1
We build a statistical ensemble representation of an economic system which can be interpreted as a simplified credit market. To this purpose we adopt the Boltzmann-Gibbs distribution where the role of the Hamiltonian , as recently suggested in the literature, is taken by the total money supply (i.e. including money created from debt) of a set of economic agents interacting over the credit market . As a result, we can read the main thermodynamic quantities in terms of monetary ones. Furthermore, with our formalism we recover and extend some results concerning the temperature of an economic system, previously presented in the literature by considering only the monetary base as conserved quantity. Finally we study the statistical ensemble for the Pareto distribution.
We build a statistical ensemble representation of two economic models describing respectively, in simplified terms, a payment system and a credit market. To this purpose we adopt the Boltzmann-Gibbs distribution where the role of the Hamiltonian is taken by the total money supply (i.e. including money created from debt) of a set of interacting economic agents . As a result, we can read the main thermodynamic quantities in terms of monetary ones. In particular, we define for the credit market model a work term which is related to the impact of monetary policy on credit creation. Furthermore, with our formalism we recover and extend some results concerning the temperature of an economic system, previously presented in the literature by considering only the monetary base as conserved quantity. Finally , we study the statistical ensemble for the Pareto distribution.
[ { "type": "R", "before": "an economic system which can be interpreted as a simplified", "after": "two economic models describing respectively, in simplified terms, a payment system and a", "start_char_pos": 50, "end_char_pos": 109 }, { "type": "D", "before": ", as recently suggested in the literature,", "after": null, "start_char_pos": 217, "end_char_pos": 259 }, { "type": "R", "before": "economic agents interacting over the credit market", "after": "interacting economic agents", "start_char_pos": 348, "end_char_pos": 398 }, { "type": "A", "before": null, "after": "In particular, we define for the credit market model a work term which is related to the impact of monetary policy on credit creation.", "start_char_pos": 487, "end_char_pos": 487 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 713, "end_char_pos": 713 } ]
[ 0, 124, 400, 486, 704 ]
1109.0897
1
The optimal capital structure model with endogenous bankruptcy was first studied by Leland (1994) and Leland and Toft (1996), and was later extended to the spectrally negative Levy model by Hilberink and Rogers (2002) and Kyprianou and Surya (2007). This paper generalizes the problem by allowing the values of bankruptcy costs , coupon rates and tax benefits dependent on the firm's asset value. By using the fluctuation identities for the spectrally negative Levy process, we obtain a candidate bankruptcy level as well as a sufficient condition for optimality. The optimality holds in particular when, monotonically in the asset value, the coupon rate is decreasing, the value of tax benefits is increasing, the loss amount at bankruptcy is increasing, and its proportion relative to the asset value is decreasing. The solution admits a semi-explicit form, and this allows for instant computation of the optimal bankruptcy levels, equity/debt values and optimal leverage ratios .
The optimal capital structure model with endogenous bankruptcy was first studied by Leland (1994) and Leland and Toft (1996), and was later extended to the spectrally negative Levy model by Hilberink and Rogers (2002) and Kyprianou and Surya (2007). This paper incorporates the scale effects by allowing the values of bankruptcy costs and tax benefits dependent on the firm's asset value. These effects have been empirically shown, among others, in Warner (1976), Ang et al. (1982), and Graham and Smith (1999). By using the fluctuation identities for the spectrally negative Levy process, we obtain a candidate bankruptcy level as well as a sufficient condition for optimality. The optimality holds in particular when, monotonically in the asset value, the value of tax benefits is increasing, the loss amount at bankruptcy is increasing, and its proportion relative to the asset value is decreasing. The solution admits a semi-explicit form, and this allows for instant computation of the optimal bankruptcy levels, equity/debt /firm values and optimal leverage ratios . A series of numerical studies are given to analyze the impacts of scale effects on the default strategy and the optimal capital structure .
[ { "type": "R", "before": "generalizes the problem", "after": "incorporates the scale effects", "start_char_pos": 261, "end_char_pos": 284 }, { "type": "D", "before": ", coupon rates", "after": null, "start_char_pos": 328, "end_char_pos": 342 }, { "type": "A", "before": null, "after": "These effects have been empirically shown, among others, in Warner (1976), Ang et al. (1982), and Graham and Smith (1999).", "start_char_pos": 397, "end_char_pos": 397 }, { "type": "D", "before": "coupon rate is decreasing, the", "after": null, "start_char_pos": 644, "end_char_pos": 674 }, { "type": "A", "before": null, "after": "/firm", "start_char_pos": 947, "end_char_pos": 947 }, { "type": "A", "before": null, "after": ". A series of numerical studies are given to analyze the impacts of scale effects on the default strategy and the optimal capital structure", "start_char_pos": 983, "end_char_pos": 983 } ]
[ 0, 249, 396, 564, 818 ]
1109.0897
2
The optimal capital structure model with endogenous bankruptcy was first studied by Leland (1994) and Leland and Toft (1996), and was later extended to the spectrally negative Levy model by Hilberink and Rogers (2002) and Kyprianou and Surya (2007). This paper incorporates the scale effects by allowing the values of bankruptcy costs and tax benefits dependent on the firm's asset value. These effects have been empirically shown, among others, in Warner (1976), Ang et al. (1982), and Graham and Smith (1999). By using the fluctuation identities for the spectrally negative Levy process, we obtain a candidate bankruptcy level as well as a sufficient condition for optimality. The optimality holds in particular when, monotonically in the asset value, the value of tax benefits is increasing, the loss amount at bankruptcy is increasing, and its proportion relative to the asset value is decreasing. The solution admits a semi-explicit form , and this allows for instant computation of the optimal bankruptcy levels, equity/debt/firm values and optimal leverage ratios . A series of numerical studies are given to analyze the impacts of scale effects on the default strategy and the optimal capital structure.
The optimal capital structure model with endogenous bankruptcy was first studied by Leland (1994) and Leland and Toft (1996), and was later extended to the spectrally negative Levy model by Hilberink and Rogers (2002) and Kyprianou and Surya (2007). This paper incorporates the scale effects by allowing the values of bankruptcy costs and tax benefits to be dependent on the firm's asset value. By using the fluctuation identities for the spectrally negative Levy process, we obtain a candidate bankruptcy level as well as a sufficient condition for optimality. The optimality holds in particular when, monotonically in the asset value, the value of tax benefits is increasing, the loss amount at bankruptcy is increasing, and its proportion relative to the asset value is decreasing. The solution admits a semi-explicit form in terms of the scale function . A series of numerical studies are given to analyze the impacts of scale effects on the default strategy and the optimal capital structure.
[ { "type": "A", "before": null, "after": "to be", "start_char_pos": 352, "end_char_pos": 352 }, { "type": "D", "before": "These effects have been empirically shown, among others, in Warner (1976), Ang et al. (1982), and Graham and Smith (1999).", "after": null, "start_char_pos": 390, "end_char_pos": 512 }, { "type": "R", "before": ", and this allows for instant computation of the optimal bankruptcy levels, equity/debt/firm values and optimal leverage ratios", "after": "in terms of the scale function", "start_char_pos": 944, "end_char_pos": 1071 } ]
[ 0, 249, 389, 512, 679, 902, 1073 ]
1109.2036
1
We analytically study the input-output properties of a neuron whose active dendritic tree, modeled as a Cayley tree of excitable elements, is subjected to Poisson stimulus. Both single-site and two-site mean-field approximations incorrectly predict a non-equilibrium phase transition which is not allowed in the model. Instead of taking into account three sites and beyond, we propose a novel excitable-wave mean-field approximation . Such approach is also a single-site approximation, but it manages to keep track of the excitable-wave direction of propagation. It shows good agreement with simulations and accounts for finite-size effects. We also discuss the relevance of our results to experiments in neuroscience, emphasizing the role of active dendrites in the enhancement of dynamic range and in gain control modulation.
We analytically study the input-output properties of a neuron whose active dendritic tree, modeled as a Cayley tree of excitable elements, is subjected to Poisson stimulus. Both single-site and two-site mean-field approximations incorrectly predict a non-equilibrium phase transition which is not allowed in the model. We propose an excitable-wave mean-field approximation which shows good agreement with previously published simulation results Gollo et al., PLoS Comput. Biol. 5(6) e1000402 (2009) and accounts for finite-size effects. We also discuss the relevance of our results to experiments in neuroscience, emphasizing the role of active dendrites in the enhancement of dynamic range and in gain control modulation.
[ { "type": "R", "before": "Instead of taking into account three sites and beyond, we propose a novel", "after": "We propose an", "start_char_pos": 319, "end_char_pos": 392 }, { "type": "R", "before": ". Such approach is also a single-site approximation, but it manages to keep track of the excitable-wave direction of propagation. It", "after": "which", "start_char_pos": 433, "end_char_pos": 565 }, { "type": "R", "before": "simulations", "after": "previously published simulation results", "start_char_pos": 592, "end_char_pos": 603 }, { "type": "A", "before": null, "after": "Gollo et al., PLoS Comput. Biol. 5(6) e1000402 (2009)", "start_char_pos": 604, "end_char_pos": 604 } ]
[ 0, 172, 318, 434, 562, 642 ]
1109.2648
1
A common feature of biological networks is the geometric property of self-similarity. Molecular regulatory networks through to circulatory systems, nervous systems and ecological trophic networks, show self-similar connectivity at multiple scales. There are many ways to achieve this, however, and we analyze the relationship between topology and signaling in contrasting classes of such topologies. We find that networks differ in their ability to contain or propagate signals between arbitrary nodes in a network depending on whether they possess branching or loop-like features. They also differ in how they respond to noise, such that one allows for greater integration at high noise, and this performance is reversed at low noise. Surprisingly, small-world topologies are less integrated (more modular) than networks with longer path lengths. All of these phenomena are essentially mesoscopic, vanishing in the infinite limit but producing strong effects at sizes and timescales relevant to biology.
A common feature of biological networks is the geometric property of self-similarity. Molecular regulatory networks through to circulatory systems, nervous systems , social systems and ecological trophic networks, show self-similar connectivity at multiple scales. We analyze the relationship between topology and signaling in contrasting classes of such topologies. We find that networks differ in their ability to contain or propagate signals between arbitrary nodes in a network depending on whether they possess branching or loop-like features. Networks also differ in how they respond to noise, such that one allows for greater integration at high noise, and this performance is reversed at low noise. Surprisingly, small-world topologies , with diameters logarithmic in system size, have slower dynamical timescales, and may be less integrated (more modular) than networks with longer path lengths. All of these phenomena are essentially mesoscopic, vanishing in the infinite limit but producing strong effects at sizes and timescales relevant to biology.
[ { "type": "A", "before": null, "after": ", social systems", "start_char_pos": 164, "end_char_pos": 164 }, { "type": "R", "before": "There are many ways to achieve this, however, and we", "after": "We", "start_char_pos": 249, "end_char_pos": 301 }, { "type": "R", "before": "They", "after": "Networks", "start_char_pos": 583, "end_char_pos": 587 }, { "type": "R", "before": "are", "after": ", with diameters logarithmic in system size, have slower dynamical timescales, and may be", "start_char_pos": 774, "end_char_pos": 777 } ]
[ 0, 85, 248, 400, 582, 736, 848 ]
1109.2803
1
This article makes use of the apparent indifference that the market has been devoting to the developments made on the fundamentals of quantitative finance, to introduce novel insight for better understanding market evolution. We show how these drops and crises emerge as a natural result of local economical principles ruling tradesbetween economical agents and present evidence that heavy-tails of the return distributions are bounded by constraints associated with the topology of agent relations . Finally, we discuss how these constraints may be helpful for properly evaluate model risk.
We consider the evolution of scale-free networks according to preferential attachment schemes and show the conditions for which the exponent characterizing the degree distribution is bounded by upper and lower values. Our framework is an agent model, presented in the context of economic networks of trades, which shows the emergence of critical behavior. Starting from a brief discussion about the main features of the evolving network of trades, we show that the logarithmic return distributions have bounded heavy-tails , and the corresponding bounding exponent values can be derived . Finally, we discuss these findings in the context of model risk.
[ { "type": "R", "before": "This article makes use of", "after": "We consider the evolution of scale-free networks according to preferential attachment schemes and show the conditions for which the exponent characterizing the degree distribution is bounded by upper and lower values. Our framework is an agent model, presented in the context of economic networks of trades, which shows the emergence of critical behavior. Starting from a brief discussion about the main features of the evolving network of trades, we show that", "start_char_pos": 0, "end_char_pos": 25 }, { "type": "R", "before": "apparent indifference that the market has been devoting to the developments made on the fundamentals of quantitative finance, to introduce novel insight for better understanding market evolution. We show how these drops and crises emerge as a natural result of local economical principles ruling tradesbetween economical agents and present evidence that", "after": "logarithmic return distributions have bounded", "start_char_pos": 30, "end_char_pos": 383 }, { "type": "R", "before": "of the return distributions are bounded by constraints associated with the topology of agent relations", "after": ", and the corresponding bounding exponent values can be derived", "start_char_pos": 396, "end_char_pos": 498 }, { "type": "R", "before": "how these constraints may be helpful for properly evaluate", "after": "these findings in the context of", "start_char_pos": 521, "end_char_pos": 579 } ]
[ 0, 225, 500 ]
1109.2945
1
We consider the utility maximization problem of terminal wealth from the point of view of a portfolio manager paid by an incentive scheme given as a convex function g of the terminal wealth. The manager's own utility function U is assumed to be smooth and strictly concave, however the resulting utility function U \circ g fails to be concave. As a consequence, this problem does not fit into the classical portfolio optimization theory. Using duality theory, we prove wealth-independent existence and uniqueness of the optimal wealth in general (incomplete) semimartingale markets as long as the unique optimizer of the dual problem has no atom with respect to the Lebesgue measure . In many cases, this fact is independent of the incentive scheme and depends only on the structure of the set of equivalent local martingale measures. As example we discuss stochastic volatility models and show that existence and uniqueness of an optimizer are guaranteed as long as the market price of risk satisfies a certain (Malliavin-) smoothness condition . We provide also a detailed analysis of the case when this criterium fails, leading to optimization problems whose solvability by duality methods depends on the initial wealth of the investor.
We consider the utility maximization problem of terminal wealth from the point of view of a portfolio manager paid by an incentive scheme given as a convex function g of the terminal wealth. The manager's own utility function U is assumed to be smooth and strictly concave, however the resulting utility function U \circ g fails to be concave. As a consequence, this problem does not fit into the classical portfolio optimization theory. Using duality theory, we prove wealth-independent existence and uniqueness of the optimal wealth in general (incomplete) semimartingale markets as long as the unique optimizer of the dual problem has a continuous law . In many cases, this fact is independent of the incentive scheme and depends only on the structure of the set of equivalent local martingale measures. As examples we discuss (complete) one-dimensional models as well as (incomplete) lognormal mixture models . We provide also a detailed analysis of the case when this criterium fails, leading to optimization problems whose solvability by duality methods depends on the initial wealth of the investor.
[ { "type": "R", "before": "no atom with respect to the Lebesgue measure", "after": "a continuous law", "start_char_pos": 638, "end_char_pos": 682 }, { "type": "R", "before": "example we discuss stochastic volatility models and show that existence and uniqueness of an optimizer are guaranteed as long as the market price of risk satisfies a certain (Malliavin-) smoothness condition", "after": "examples we discuss (complete) one-dimensional models as well as (incomplete) lognormal mixture models", "start_char_pos": 838, "end_char_pos": 1045 } ]
[ 0, 190, 343, 437, 684, 834, 1047 ]
1109.2945
2
We consider the utility maximization problem of terminal wealth from the point of view of a portfolio manager paid by an incentive scheme given as a convex function g of the terminal wealth. The manager's own utility function U is assumed to be smooth and strictly concave, however the resulting utility function U \circ g fails to be concave. As a consequence, this problem does not fit into the classical portfolio optimization theory. Using duality theory, we prove wealth-independent existence and uniqueness of the optimal wealth in general (incomplete) semimartingale markets as long as the unique optimizer of the dual problem has a continuous law. In many cases, this fact is independent of the incentive scheme and depends only on the structure of the set of equivalent local martingale measures. As examples we discuss (complete) one-dimensional models as well as (incomplete) lognormal mixture models. We provide also a detailed analysis of the case when this criterium fails , leading to optimization problems whose solvability by duality methods depends on the initial wealth of the investor.
We consider the terminal wealth utility maximization problem from the point of view of a portfolio manager who is paid by an incentive scheme , which is given as a convex function g of the terminal wealth. The manager's own utility function U is assumed to be smooth and strictly concave, however the resulting utility function U \circ g fails to be concave. As a consequence, the problem considered here does not fit into the classical portfolio optimization theory. Using duality theory, we prove wealth-independent existence and uniqueness of the optimal portfolio in general (incomplete) semimartingale markets as long as the unique optimizer of the dual problem has a continuous law. In many cases, this existence and uniqueness result is independent of the incentive scheme and depends only on the structure of the set of equivalent local martingale measures. As examples , we discuss (complete) one-dimensional models as well as (incomplete) lognormal mixture and popular stochastic volatility models. We also provide a detailed analysis of the case where the unique optimizer of the dual problem does not have a continuous law , leading to optimization problems whose solvability by duality methods depends on the initial wealth of the investor.
[ { "type": "A", "before": null, "after": "terminal wealth", "start_char_pos": 16, "end_char_pos": 16 }, { "type": "D", "before": "of terminal wealth", "after": null, "start_char_pos": 46, "end_char_pos": 64 }, { "type": "A", "before": null, "after": "who is", "start_char_pos": 111, "end_char_pos": 111 }, { "type": "A", "before": null, "after": ", which is", "start_char_pos": 140, "end_char_pos": 140 }, { "type": "R", "before": "this problem", "after": "the problem considered here", "start_char_pos": 365, "end_char_pos": 377 }, { "type": "R", "before": "wealth", "after": "portfolio", "start_char_pos": 531, "end_char_pos": 537 }, { "type": "R", "before": "fact", "after": "existence and uniqueness result", "start_char_pos": 679, "end_char_pos": 683 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 821, "end_char_pos": 821 }, { "type": "A", "before": null, "after": "and popular stochastic volatility", "start_char_pos": 909, "end_char_pos": 909 }, { "type": "R", "before": "provide also", "after": "also provide", "start_char_pos": 921, "end_char_pos": 933 }, { "type": "R", "before": "when this criterium fails", "after": "where the unique optimizer of the dual problem does not have a continuous law", "start_char_pos": 966, "end_char_pos": 991 } ]
[ 0, 193, 346, 440, 658, 808, 917 ]
1109.3069
1
Robust and reliable covariance estimation plays a decisive role in financial applications. An important class of estimators is based on Factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong market we evince that our proposed method leads to improved portfolio allocation.
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on Factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong market we show that our proposed method leads to improved portfolio allocation.
[ { "type": "R", "before": "estimation plays", "after": "estimates play", "start_char_pos": 31, "end_char_pos": 47 }, { "type": "A", "before": null, "after": "and many other", "start_char_pos": 77, "end_char_pos": 77 }, { "type": "R", "before": "evince", "after": "show", "start_char_pos": 599, "end_char_pos": 605 } ]
[ 0, 91, 151, 407, 522 ]
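The sents_char_pos field appears to hold the start offset of each sentence of before_revision; the first entry is always 0 and the end of the last sentence is implicit. A small sketch under that assumption, illustrated with the opening of the covariance record above (helper name hypothetical):

```python
def split_sentences(before: str, sents_char_pos: list) -> list:
    """Slice before_revision into sentences at the recorded start offsets."""
    bounds = list(sents_char_pos) + [len(before)]
    return [before[a:b].strip() for a, b in zip(bounds, bounds[1:])]

text = ("Robust and reliable covariance estimation plays a decisive role "
        "in financial applications. An important class of estimators is "
        "based on Factor models.")
print(split_sentences(text, [0, 91]))
# -> ['Robust ... financial applications.', 'An important ... Factor models.']
```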
1109.3160
1
Association networks represent systems of interacting elements, where a link between two different elements indicates a sufficient level of similarity between element attributes. While in reality relational ties between elements can be expected to be based on similarity across multiple attributes, the vast majority of work to date on association networks involves ties defined with respect to only a single attribute. We propose an approach for the inference of multi-attribute association networks from measurements on continuous attribute variables, using canonical correlation and a hypothesis-testing strategy. Within this context, we then study the impact of partial information on multi-attribute network inference and characterization, when only a subset of attributes is available. We examine through a combination of analytical and numerical techniques the implications of the choice and number of node attributes on the ability to detect network links and, more generally, to estimate higher-level network summary statistics, such as node degree, clustering coefficients, and measures of centrality. We consider in detail the case of two attributes and discuss generalization of our findings to more than two attributes. Our work is motivated by and illustrated within the context of gene /protein regulatory networks in human cancer cells .
Our work is motivated by and illustrated with application of association networks in computational biology, specifically in the context of gene/protein regulatory networks. Association networks represent systems of interacting elements, where a link between two different elements indicates a sufficient level of similarity between element attributes. While in reality relational ties between elements can be expected to be based on similarity across multiple attributes, the vast majority of work to date on association networks involves ties defined with respect to only a single attribute. We propose an approach for the inference of multi-attribute association networks from measurements on continuous attribute variables, using canonical correlation and a hypothesis-testing strategy. Within this context, we then study the impact of partial information on multi-attribute network inference and characterization, when only a subset of attributes is available. We consider in detail the case of two attributes, wherein we examine through a combination of analytical and numerical techniques the implications of the choice and number of node attributes on the ability to detect network links and, more generally, to estimate higher-level network summary statistics, such as node degree, clustering coefficients, and measures of centrality. Illustration and applications throughout the paper are developed using gene and protein expression measurements on human cancer cell lines from the NCI-60 database .
[ { "type": "A", "before": null, "after": "Our work is motivated by and illustrated with application of association networks in computational biology, specifically in the context of gene/protein regulatory networks.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "A", "before": null, "after": "consider in detail the case of two attributes, wherein we", "start_char_pos": 796, "end_char_pos": 796 }, { "type": "R", "before": "We consider in detail the case of two attributes and discuss generalization of our findings to more than two attributes. Our work is motivated by and illustrated within the context of gene /protein regulatory networks in human cancer cells", "after": "Illustration and applications throughout the paper are developed using gene and protein expression measurements on human cancer cell lines from the NCI-60 database", "start_char_pos": 1114, "end_char_pos": 1353 } ]
[ 0, 179, 420, 617, 792, 1113, 1234 ]
1109.3488
1
Portfolio managers are often constrained by turnover limits, minimum and maximum stock positions, cardinality, a target market capitalization and sometimes the need to hew to a style ( growth or value). In addition, many portfolio managers choose stocks based upon fundamental data, e. g. price-to-earnings and dividend yield in an effort to maximize return. All of these are typical real-world constraints a portfolio manager faces. Another constraint, of sorts, is the need to outperform a stock index benchmark. Performance higher than the benchmark means a better compensation package. Underperforming the benchmark means a lesser compensation package. We use MOEAs to satisfy the above real-world constraints and consistently outperform typical performance benchmarks . Our first MOEA solves all the constraints (except turnover and position limits) and generates feasible portfolios.The second MOEA tests each of the potential feasible portfolios of the first MOEA by trading off mean return, variance, turnover and position limits. The best portfolio is chosen from these feasible portfolios and becomes the portfolio of choice for the next quarter. The MOEAs are applied to the following problems -- generate a series of monthly portfolios that outperform the S P 500 over the past 30 years and generate a set of monthly portfolios that outperform the Russell 1000 Growth index over the last 15 years. Our two MOEAs accomplish both these goals on a risk adjusted and non-risk adjusted return basis
Portfolio managers are typically constrained by turnover limits, minimum and maximum stock positions, cardinality, a target market capitalization and sometimes the need to hew to a style ( such as growth or value). In addition, portfolio managers often use multifactor stock models to choose stocks based upon their respective fundamental data. We use multiobjective evolutionary algorithms (MOEAs) to satisfy the above real-world constraints . The portfolios generated consistently outperform typical performance benchmarks and have statistically significant asset selection.
[ { "type": "R", "before": "often", "after": "typically", "start_char_pos": 23, "end_char_pos": 28 }, { "type": "A", "before": null, "after": "such as", "start_char_pos": 185, "end_char_pos": 185 }, { "type": "R", "before": "many portfolio managers", "after": "portfolio managers often use multifactor stock models to", "start_char_pos": 217, "end_char_pos": 240 }, { "type": "R", "before": "fundamental data, e. g. price-to-earnings and dividend yield in an effort to maximize return. All of these are typical real-world constraints a portfolio manager faces. Another constraint, of sorts, is the need to outperform a stock index benchmark. Performance higher than the benchmark means a better compensation package. Underperforming the benchmark means a lesser compensation package. We use MOEAs", "after": "their respective fundamental data. We use multiobjective evolutionary algorithms (MOEAs)", "start_char_pos": 266, "end_char_pos": 670 }, { "type": "R", "before": "and", "after": ". The portfolios generated", "start_char_pos": 715, "end_char_pos": 718 }, { "type": "D", "before": ". Our first MOEA solves all the constraints (except turnover and position limits) and generates feasible portfolios.The second MOEA tests each of the potential feasible portfolios of the first MOEA by trading off mean return, variance, turnover and position limits. The best portfolio is chosen from these feasible portfolios and becomes the portfolio of choice for the next quarter. The MOEAs are applied to the following problems -- generate a series of monthly portfolios that outperform the S", "after": null, "start_char_pos": 774, "end_char_pos": 1270 }, { "type": "R", "before": "P 500 over the past 30 years and generate a set of monthly portfolios that outperform the Russell 1000 Growth index over the last 15 years. Our two MOEAs accomplish both these goals on a risk adjusted and non-risk adjusted return basis", "after": "and have statistically significant asset selection.", "start_char_pos": 1271, "end_char_pos": 1506 } ]
[ 0, 203, 359, 434, 515, 590, 657, 775, 890, 1039, 1157, 1410 ]
1109.3594
1
In recent paper{\it [Phys. Rev. E {\bf 83}, 042903 (2011)], a simple model for the translation of messenger RNA by ribosomes is provided, and the expression of translational ratio of protein is given. In this comments, varied methods to get this ratio are addressed. Depending on a different method of protein from mRNA to degradation rate \omega_p of protein, is obtained. The key point to get this ratio r is to get the translation rate \rm tl. In the study of Valleriani }{\it is assumed to be the mean value of measured translation rate, i.e. the mean value of ratio of the translation number of protein to the lifetime of mRNA. However, in experiments different methods might be used to get \rm tl. Therefore, for the sake of future application of their model to more experimental data analysis, in this comment three methods to get the translation rate \rm tl, and consequently the translational ratio r, are provided. Based on one of the methods which might be employed in most of the experiments} , we find that , roughly speaking, this translational ratio decays exponentially with mRNA length in prokaryotic cell, and reciprocally with mRNA length in eukaryotic cells .
In the recent paper of Valleriani{\it et al [Phys. Rev. E {\bf 83}, 042903 (2011)], a simple model for describing the translation of messenger RNA (mRNA) by ribosomes is presented, and an expression of the translational ratio r, defined as the ratio of translation rate \omega_{\rm tl} of protein from mRNA to degradation rate \omega_p of protein, is obtained. The key point to get this ratio r is to get the translation rate \rm tl. In the study of Valleriani }{\it et al , \omega_{\rm tl} is assumed to be the mean value of measured translation rate, i.e. the mean value of ratio of the translation number of protein to the lifetime of mRNA. However, in experiments different methods might be used to get \rm tl. Therefore, for the sake of future application of their model to more experimental data analysis, in this comment three methods to get the translation rate \rm tl, and consequently the translational ratio r, are provided. Based on one of the methods which might be employed in most of the experiments} , we find that the translational ratio r decays exponentially with the length of mRNA in prokaryotic cells, and decays reciprocally with the length of mRNA in eukaryotic cells . This result is slight different from that obtained in Valleriani's study .
[ { "type": "R", "before": "recent paper", "after": "the recent paper of Valleriani", "start_char_pos": 3, "end_char_pos": 15 }, { "type": "A", "before": null, "after": "et al", "start_char_pos": 20, "end_char_pos": 20 }, { "type": "A", "before": null, "after": "describing", "start_char_pos": 80, "end_char_pos": 80 }, { "type": "A", "before": null, "after": "(mRNA)", "start_char_pos": 114, "end_char_pos": 114 }, { "type": "R", "before": "provided, and the expression of translational ratio of protein is given. In this comments, varied methods to get this ratio are addressed. Depending on a different method", "after": "presented, and an expression of the translational ratio r, defined as the ratio of translation rate \\omega_{\\rm tl", "start_char_pos": 131, "end_char_pos": 301 }, { "type": "A", "before": null, "after": "et al", "start_char_pos": 483, "end_char_pos": 483 }, { "type": "A", "before": null, "after": ", \\omega_{\\rm tl", "start_char_pos": 484, "end_char_pos": 484 }, { "type": "R", "before": ", roughly speaking, this translational ratio", "after": "the translational ratio r", "start_char_pos": 1025, "end_char_pos": 1069 }, { "type": "R", "before": "mRNA length", "after": "the length of mRNA in prokaryotic cells, and decays reciprocally with the length of mRNA", "start_char_pos": 1096, "end_char_pos": 1107 }, { "type": "D", "before": "prokaryotic cell, and reciprocally with mRNA length in", "after": null, "start_char_pos": 1111, "end_char_pos": 1165 }, { "type": "A", "before": null, "after": ". This result is slight different from that obtained in Valleriani's study", "start_char_pos": 1183, "end_char_pos": 1183 } ]
[ 0, 27, 203, 269, 376, 449, 708, 929 ]
1109.3908
1
In a Markovian stochastic volatility model, we consider financial agents whose investment criteria are modelled by forward exponential performance processes. The problem of contingent claim indifference valuation is first addressed and a number of properties are proved and discussed. Special attention is taken on the comparison between the forward and backward indifference valuation. In addition, we initiate the problem of optimal risk sharing on this forward setting and we solve it when the agents' forward performance criteria are exponential.
In a Markovian stochastic volatility model, we consider financial agents whose investment criteria are modelled by forward exponential performance processes. The problem of contingent claim indifference valuation is first addressed and a number of properties are proved and discussed. Special attention is given to the comparison between the forward exponential and the backward exponential utility indifference valuation. In addition, we construct the problem of optimal risk sharing in this forward setting and solve it when the agents' forward performance criteria are exponential.
[ { "type": "R", "before": "financial", "after": "?nancial", "start_char_pos": 56, "end_char_pos": 65 }, { "type": "R", "before": "indifference valuation is", "after": "indi?fference valuation is ?", "start_char_pos": 190, "end_char_pos": 215 }, { "type": "R", "before": "taken on", "after": "given to", "start_char_pos": 306, "end_char_pos": 314 }, { "type": "R", "before": "and backward indifference", "after": "exponential and the backward exponential utility indiff?erence", "start_char_pos": 350, "end_char_pos": 375 }, { "type": "R", "before": "initiate", "after": "construct", "start_char_pos": 403, "end_char_pos": 411 }, { "type": "R", "before": "on", "after": "in", "start_char_pos": 448, "end_char_pos": 450 }, { "type": "D", "before": "we", "after": null, "start_char_pos": 476, "end_char_pos": 478 } ]
[ 0, 157, 284, 386 ]
1109.4498
1
Living systems are forced away from thermodynamic equilibrium by exchange of mass and energy with their environment. In order to model a biochemical reaction network in a non-equilibrium state one requires a mathematical formulation to mimic this forcing. No general formulation exists for forcing an arbitrary large kinetic model in a manner that is still consistent with the existence of a non-equilibrium steady state. We prove a theorem that guarantees the existence of a non-equilibrium steady state for chemical kinetic models, assuming two conditions; that every reaction is mass balanced , and that reaction kinetic rate laws never lead to a negative molecule concentration. These conditions can be verified in polynomial time and , flexible enough to permit a novel method of forcing a system away from equilibrium. In an expository biochemical example we show how a reversible, mass balanced with thermodynamically infeasible kinetic parameters, can be used to perpetually force a kinetic model of anaerobic glycolysis in a manner consistent with the existence of a steady state. Our definition of easily testable existence conditions is foundational for efforts to reliably compute non-equilibrium steady states in genome-scale biochemical kinetic models.
Living systems are forced away from thermodynamic equilibrium by exchange of mass and energy with their environment. In order to model a biochemical reaction network in a non-equilibrium state one requires a mathematical formulation to mimic this forcing. We provide a general formulation to force an arbitrary large kinetic model in a manner that is still consistent with the existence of a non-equilibrium steady state. We can guarantee the existence of a non-equilibrium steady state assuming only two conditions; that every reaction is mass balanced and that continuous kinetic reaction rate laws never lead to a negative molecule concentration. These conditions can be verified in polynomial time and are flexible enough to permit one to force a system away from equilibrium. In an expository biochemical example we show how a reversible, mass balanced perpetual reaction, with thermodynamically infeasible kinetic parameters, can be used to perpetually force a kinetic model of anaerobic glycolysis in a manner consistent with the existence of a steady state. Easily testable existence conditions are foundational for efforts to reliably compute non-equilibrium steady states in genome-scale biochemical kinetic models.
[ { "type": "R", "before": "No general formulation exists for forcing", "after": "We provide a general formulation to force", "start_char_pos": 256, "end_char_pos": 297 }, { "type": "R", "before": "prove a theorem that guarantees", "after": "can guarantee", "start_char_pos": 425, "end_char_pos": 456 }, { "type": "R", "before": "for chemical kinetic models, assuming", "after": "assuming only", "start_char_pos": 505, "end_char_pos": 542 }, { "type": "R", "before": ", and that reaction kinetic", "after": "and that continuous kinetic reaction", "start_char_pos": 596, "end_char_pos": 623 }, { "type": "R", "before": ",", "after": "are", "start_char_pos": 739, "end_char_pos": 740 }, { "type": "R", "before": "a novel method of forcing a", "after": "one to force a", "start_char_pos": 767, "end_char_pos": 794 }, { "type": "A", "before": null, "after": "perpetual reaction,", "start_char_pos": 902, "end_char_pos": 902 }, { "type": "R", "before": "Our definition of easily", "after": "Easily", "start_char_pos": 1091, "end_char_pos": 1115 }, { "type": "R", "before": "is", "after": "are", "start_char_pos": 1146, "end_char_pos": 1148 } ]
[ 0, 116, 255, 421, 558, 682, 824, 1090 ]
1109.4726
1
We introduce a model of financial bubbles with two assets (risky and risk-less ), in which rational investors and noise traders co-exist. Rational investors form continuously evolving expectations on the return and risk of a risky asset and maximize their expected utility with respect to their allocation on the risky asset versus the risk-free asset. Noise traders are subjected to social imitation and follow momentum trading. We find the existence of a set of bifurcations controlled by the relative influence of noise traders with respect to rational investors that separate a normal regime of the price dynamics to a phase punctuated by recurrent exponentially explosive bubbles. The transition to a bubble regime is favored by noise traders who are more social, and who use more momentum trading with shorter time horizons . The model accounts well for the behavior of traders and for the price dynamics that developed during the dotcom bubble in 1995-2000. Momentum strategies are shown to be transiently profitable, supporting these strategies as enhancing herding behavior.
We introduce a model of financial bubbles with two assets (risky and risk-free ), in which rational investors and noise traders co-exist. Rational investors form expectations on the return and risk of a risky asset and maximize their expected utility with respect to their allocation on the risky asset versus the risk-free asset. Noise traders are subjected to social imitation and follow momentum trading. By contrast to previous models in the field, we do not allow agents to switch between trading strategies. Allowing for random time-varying herding propensity, we are able to reproduce several stylized facts of financial markets such as a fat-tail distribution of returns, volatility clustering and transient faster-than-exponential bubble growth with approximate log-periodic behavior . The model accounts well for the behavior of traders and for the price dynamics that developed during the dotcom bubble in 1995-2000. Momentum strategies are shown to be transiently profitable, supporting these strategies as enhancing herding behavior.
[ { "type": "R", "before": "risk-less", "after": "risk-free", "start_char_pos": 69, "end_char_pos": 78 }, { "type": "D", "before": "continuously evolving", "after": null, "start_char_pos": 162, "end_char_pos": 183 }, { "type": "R", "before": "We find the existence of a set of bifurcations controlled by the relative influence of noise traders with respect to rational investors that separate a normal regime of the price dynamics to a phase punctuated by recurrent exponentially explosive bubbles. The transition to a bubble regime is favored by noise traders who are more social, and who use more momentum trading with shorter time horizons", "after": "By contrast to previous models in the field, we do not allow agents to switch between trading strategies. Allowing for random time-varying herding propensity, we are able to reproduce several stylized facts of financial markets such as a fat-tail distribution of returns, volatility clustering and transient faster-than-exponential bubble growth with approximate log-periodic behavior", "start_char_pos": 430, "end_char_pos": 829 } ]
[ 0, 137, 352, 429, 685, 831, 964 ]
1109.4726
2
We introduce a model of financial bubbles with two assets (risky and risk-free), in which rational investors and noise traders co-exist. Rational investors form expectations on the return and risk of a risky asset and maximize their expected utility with respect to their allocation on the risky asset versus the risk-free asset. Noise traders are subjected to social imitation and follow momentum trading. By contrast to previous models in the field, we do not allow agents to switch between trading strategies. Allowing for random time-varying herding propensity, we are able to reproduce several stylized facts of financial markets such as a fat-tail distribution of returns , volatility clusteringand transient faster-than-exponential bubble growth with approximate log-periodic behavior . The model accounts well for the behavior of traders and for the price dynamics that developed during the dotcom bubble in 1995-2000. Momentum strategies are shown to be transiently profitable, supporting these strategies as enhancing herding behavior.
We introduce a model of super-exponential financial bubbles with two assets (risky and risk-free), in which rational investors and noise traders co-exist. Rational investors form expectations on the return and risk of a risky asset and maximize their constant relative risk aversion expected utility with respect to their allocation on the risky asset versus the risk-free asset. Noise traders are subjected to social imitation and follow momentum trading. Allowing for random time-varying herding propensity, we are able to reproduce several well-known stylized facts of financial markets such as a fat-tail distribution of returns and volatility clustering. In particular, we observe transient faster-than-exponential bubble growth with approximate log-periodic behavior and give analytical arguments why this follows from our framework . The model accounts well for the behavior of traders and for the price dynamics that developed during the dotcom bubble in 1995-2000. Momentum strategies are shown to be transiently profitable, supporting these strategies as enhancing herding behavior.
[ { "type": "A", "before": null, "after": "super-exponential", "start_char_pos": 24, "end_char_pos": 24 }, { "type": "A", "before": null, "after": "constant relative risk aversion", "start_char_pos": 234, "end_char_pos": 234 }, { "type": "D", "before": "By contrast to previous models in the field, we do not allow agents to switch between trading strategies.", "after": null, "start_char_pos": 409, "end_char_pos": 514 }, { "type": "A", "before": null, "after": "well-known", "start_char_pos": 601, "end_char_pos": 601 }, { "type": "R", "before": ", volatility clusteringand", "after": "and volatility clustering. In particular, we observe", "start_char_pos": 681, "end_char_pos": 707 }, { "type": "A", "before": null, "after": "and give analytical arguments why this follows from our framework", "start_char_pos": 795, "end_char_pos": 795 } ]
[ 0, 137, 331, 408, 514, 797, 930 ]
1109.5118
1
The impact of external force to Peyrard-Bishop DNA denaturation is investigated through statistical mechanics approach. The partition function is obtained using transfer integral method, and further the stretching of hydrogen bond is calculated using time independent perturbation method. It is shown that the external force accelerates the denaturation processes at lower temperature .
The impact of various types of external forces to Peyrard-Bishop DNA denaturation is investigated through statistical mechanics approach. The partition function is obtained using transfer integral method, and further the stretching of hydrogen bond is calculated using time independent perturbation method. It is shown that all types of external forces accelerate the denaturation processes at lower temperature . In particular, it is argued that the Gaussian force with infinitesimal width should realize the constant force at one end of DNA sequence as already done in some previous works .
[ { "type": "R", "before": "external force", "after": "various types of external forces", "start_char_pos": 14, "end_char_pos": 28 }, { "type": "R", "before": "the external force accelerates", "after": "all types of external forces accelerate", "start_char_pos": 306, "end_char_pos": 336 }, { "type": "A", "before": null, "after": ". In particular, it is argued that the Gaussian force with infinitesimal width should realize the constant force at one end of DNA sequence as already done in some previous works", "start_char_pos": 385, "end_char_pos": 385 } ]
[ 0, 119, 288 ]
1109.5118
2
The impact of various types of external forces to Peyrard-Bishop DNA denaturation is investigated through statistical mechanics approach. The partition function is obtained using transfer integral method, and further the stretching of hydrogen bond is calculated using time independent perturbation method. It is shown that all types of external forces accelerate the denaturation processes at lower temperature. In particular, it is argued that the Gaussian force with infinitesimal width should realize the constant force at one end of DNA sequence as already done in some previous works.
The impact of various types of external potentials to the Peyrard-Bishop DNA denaturation is investigated through statistical mechanics approach. The partition function is obtained using transfer integral method, and further the stretching of hydrogen bond is calculated using time independent perturbation method. It is shown that all types of external potentials accelerate the denaturation processes at lower temperature. In particular, it is argued that the Gaussian potential with infinitesimal width reproduces a constant force at one end of DNA sequence as already done in some previous works.
[ { "type": "R", "before": "forces to", "after": "potentials to the", "start_char_pos": 40, "end_char_pos": 49 }, { "type": "R", "before": "forces", "after": "potentials", "start_char_pos": 346, "end_char_pos": 352 }, { "type": "R", "before": "force", "after": "potential", "start_char_pos": 459, "end_char_pos": 464 }, { "type": "R", "before": "should realize the", "after": "reproduces a", "start_char_pos": 490, "end_char_pos": 508 } ]
[ 0, 137, 306, 412 ]
1109.5149
1
In this work we extend the characterization of injectivity via the Jacobian criterion first developed by Craciun and Feinberg for chemical reaction networks with outflow reactions to arbitrary chemical reaction networks taken with mass action kinetics . Injective chemical reaction networks do not have the capacity to admit multiple positive steady states for any rate constants and within each stoichiometric class. It is shown that a network is injective if and only if the determinant of the Jacobian of the system of ordinary differential equations associated to the network never vanishes . The determinant is a polynomial on the species concentrations and the rate constants , and its coefficients are fully determined. Previous works apply to chemical reaction networks whose stoichiometric space has maximal dimension. Here we present a direct route, independent of the dimension of the stoichiometric space which precludes at the same time the existence of degenerate steady states .
We provide a Jacobian criterion that applies to chemical reaction networks taken with mass-action kinetics to preclude the existence of multiple positive steady states within any stoichiometric class for any choice of rate constants. Our work extends previous results by Craciun and Feinberg for chemical reaction networks with stoichiometric space of maximal dimension. Specifically, a network is called injective if the species formation rate function is injective in the interior of the positive orthant within each stoichiometric class. We show that a network is injective if and only if the determinant of the Jacobian of a certain function does not vanish. The function consists of components of the species formation rate function and a maximal set of independent conservation laws . The determinant of the function is a polynomial in the species concentrations and the rate constants (linear in the latter) and its coefficients are fully determined. The criterion also precludes the existence of degenerate steady states . Further, we relate injectivity of a chemical reaction network to that of the chemical reaction network obtained by adding outflow reactions for all species .
[ { "type": "R", "before": "In this work we extend the characterization of injectivity via the Jacobian criterion first developed by Craciun and Feinberg for chemical reaction networks with outflow reactions to arbitrary", "after": "We provide a Jacobian criterion that applies to", "start_char_pos": 0, "end_char_pos": 192 }, { "type": "R", "before": "mass action kinetics . Injective chemical reaction networks do not have the capacity to admit", "after": "mass-action kinetics to preclude the existence of", "start_char_pos": 231, "end_char_pos": 324 }, { "type": "R", "before": "for any rate constants and", "after": "within any stoichiometric class for any choice of rate constants. Our work extends previous results by Craciun and Feinberg for chemical reaction networks with stoichiometric space of maximal dimension. Specifically, a network is called injective if the species formation rate function is injective in the interior of the positive orthant", "start_char_pos": 357, "end_char_pos": 383 }, { "type": "R", "before": "It is shown", "after": "We show", "start_char_pos": 418, "end_char_pos": 429 }, { "type": "R", "before": "the system of ordinary differential equations associated to the network never vanishes", "after": "a certain function does not vanish. The function consists of components of the species formation rate function and a maximal set of independent conservation laws", "start_char_pos": 508, "end_char_pos": 594 }, { "type": "A", "before": null, "after": "of the function", "start_char_pos": 613, "end_char_pos": 613 }, { "type": "R", "before": "on", "after": "in", "start_char_pos": 630, "end_char_pos": 632 }, { "type": "R", "before": ",", "after": "(linear in the latter)", "start_char_pos": 683, "end_char_pos": 684 }, { "type": "R", "before": "Previous works apply to chemical reaction networks whose stoichiometric space has maximal dimension. Here we present a direct route, independent of the dimension of the stoichiometric space which precludes at the same time the", "after": "The criterion also precludes the", "start_char_pos": 728, "end_char_pos": 954 }, { "type": "A", "before": null, "after": ". Further, we relate injectivity of a chemical reaction network to that of the chemical reaction network obtained by adding outflow reactions for all species", "start_char_pos": 993, "end_char_pos": 993 } ]
[ 0, 253, 417, 596, 727, 828 ]
1109.5149
2
We provide a Jacobian criterion that applies to chemical reaction networks taken with mass-action kinetics to preclude the existence of multiple positive steady states within any stoichiometric class for any choice of rate constants. Our work extends previous results by Craciun and Feinberg for chemical reaction networks with stoichiometric space of maximal dimension. Specifically, a network is called injective if the species formation rate function is injective in the interior of the positive orthant within each stoichiometric class. We show that a network is injective if and only if the determinant of the Jacobian of a certain function does not vanish. The function consists of components of the species formation rate function and a maximal set of independent conservation laws. The determinant of the function is a polynomial in the species concentrations and the rate constants (linear in the latter) and its coefficients are fully determined. The criterion also precludes the existence of degenerate steady states. Further, we relate injectivity of a chemical reaction network to that of the chemical reaction network obtained by adding outflow reactions for all species.
We provide a Jacobian criterion that applies to arbitrary chemical reaction networks taken with mass-action kinetics to preclude the existence of multiple positive steady states within any stoichiometric class for any choice of rate constants. We are concerned with the characterization of injective networks, that is, networks for which the species formation rate function is injective in the interior of the positive orthant within each stoichiometric class. We show that a network is injective if and only if the determinant of the Jacobian of a certain function does not vanish. The function consists of components of the species formation rate function and a maximal set of independent conservation laws. The determinant of the function is a polynomial in the species concentrations and the rate constants (linear in the latter) and its coefficients are fully determined. The criterion also precludes the existence of degenerate steady states. Further, we relate injectivity of a chemical reaction network to that of the chemical reaction network obtained by adding outflow , or degradation, reactions for all species.
[ { "type": "A", "before": null, "after": "arbitrary", "start_char_pos": 48, "end_char_pos": 48 }, { "type": "R", "before": "Our work extends previous results by Craciun and Feinberg for chemical reaction networks with stoichiometric space of maximal dimension. Specifically, a network iscalled injective if", "after": "We are concerned with the characterization of injective networks, that is, networks for which", "start_char_pos": 235, "end_char_pos": 417 }, { "type": "A", "before": null, "after": ", or degradation,", "start_char_pos": 1159, "end_char_pos": 1159 } ]
[ 0, 234, 371, 541, 663, 790, 957, 1029 ]
1109.5316
1
We study the generalized composite pure and randomized hypothesis testing problems. In addition to characterizing the corresponding optimal tests, we examine the conditions under which these two hypothesis testing problems are equivalent, and provide counterexamples when they are not. This analysis is useful for portfolio optimization problems that maximize some success probability given a fixed initial capital. The corresponding dual is related to a pure hypothesis testing problem which may or may not coincide with the randomized hypothesis testing problem . Our framework is applicable to both complete and incomplete market settings .
We study the portfolio problem of maximizing the outperformance probability over a random benchmark through dynamic trading with a fixed initial capital. Under a general incomplete market framework, this stochastic control problem can be formulated as a composite pure hypothesis testing problem . We analyze the connection between this pure testing problem and its randomized counterpart, and from latter we derive a dual representation for the maximal outperformance probability. Moreover, in a complete market setting, we provide a closed-form solution to the problem of beating a leveraged exchange traded fund. For a general benchmark under an incomplete stochastic factor model, we provide the Hamilton-Jacobi-Bellman PDE characterization for the maximal outperformance probability .
[ { "type": "R", "before": "generalized composite pure and randomized hypothesis testing problems. In addition to characterizing the corresponding optimal tests, we examine the conditions under which these two hypothesis testing problems are equivalent, and provide counterexamples when they are not. This analysis is useful for portfolio optimization problems that maximize some success probability given a", "after": "portfolio problem of maximizing the outperformance probability over a random benchmark through dynamic trading with a", "start_char_pos": 13, "end_char_pos": 392 }, { "type": "R", "before": "The corresponding dual is related to a", "after": "Under a general incomplete market framework, this stochastic control problem can be formulated as a composite", "start_char_pos": 416, "end_char_pos": 454 }, { "type": "R", "before": "which may or may not coincide with the randomized hypothesis testing problem . Our framework is applicable to both complete and incomplete market settings", "after": ". We analyze the connection between this pure testing problem and its randomized counterpart, and from latter we derive a dual representation for the maximal outperformance probability. Moreover, in a complete market setting, we provide a closed-form solution to the problem of beating a leveraged exchange traded fund. For a general benchmark under an incomplete stochastic factor model, we provide the Hamilton-Jacobi-Bellman PDE characterization for the maximal outperformance probability", "start_char_pos": 487, "end_char_pos": 641 } ]
[ 0, 83, 285, 415, 565 ]
1109.5791
1
Presented is an evolutionary model of consumer non-durable markets, which is an extension of a previously published paper on consumer durables. The model suggests that the repurchase process is governed by preferential growth. Applying statistical methods it can be shown that in a competitive market the mean price declines according to an exponential law towards a natural price, while the corresponding price distribution is approximately given by a Laplace distribution for independent price decisions of the manufacturers. The sales of individual brands are determined by a replicator dynamics. As a consequence the size distribution of business units is a lognormal distribution, while the growth rates are also given by a Laplace distribution. Moreover products with a higher fitness replace those with a lower fitness according to a logistic law. Most remarkable is the prediction that the price distribution becomes unstable at market clearing, which is in striking difference to the Walrasian picture in standard microeconomics. The reason for this statement is that competition between products exists only if there is an excess supply, causing a decreasing mean price. When, for example by significant events, demand increases or is equal to supply, competition breaks down and the price exhibits a jump. When this supply shortage is accompanied with an arbitrage for traders, it may even evolve into a speculative bubble. Neglecting the impact of speculation here, the evolutionary model can be linked to a stochastic jump-diffusion model .
A new microeconomic model is presented that aims at a description of the long-term unit sales and price evolution of homogeneous non-durable goods in polypoly markets. It merges the product lifecycle approach with the price dispersion dynamics of homogeneous goods. The model predicts a minimum critical lifetime of non-durables in order to survive. Under the condition that the supply side of the market evolves much faster than the demand side the theory suggests that unsatisfied demands are present in the first stages of the lifecycle. With the growth of production capacities these demands disappear accompanied with a logistic decrease of the mean price of the good. The model is applied to electricity as a non-durable satisfying the model condition. The presented theory allows a deeper understanding of the sales and price dynamics of non-durables .
[ { "type": "R", "before": "Presented is an evolutionary model of consumer", "after": "A new microeconomic model is presented that aims at a description of the long-term unit sales and price evolution of homogeneous", "start_char_pos": 0, "end_char_pos": 46 }, { "type": "R", "before": "markets, which is an extension of a previously published paper on consumer durables. The model suggests that the repurchase process is governed by preferential growth. Applying statistical methods it can be shown that in a competitive market the mean price declines according to an exponential law towards a natural price, while the corresponding price distribution is approximately given by a Laplace distribution for independent price decisions of the manufacturers. The sales of individual brands are determined by a replicator dynamics. As a consequence the size distribution of business units is a lognormal distribution, while the growth rates are also given by a Laplace distribution. Moreover products with a higher fitness replace those with a lower fitness according to a logistic law. Most remarkable is the prediction that the price distribution becomes unstable at market clearing, which is in striking difference to the Walrasian picture in standard microeconomics. The reason for this statement is that competition between products exists only if there is an excess supply, causing a decreasing mean price. When, for example by significant events, demand increases or is equal to supply, competition breaks down and the price exhibits a jump. When this supply shortage is accompanied with an arbitrage for traders, it may even evolve into a speculative bubble. Neglecting the impact of speculation here, the evolutionary model can be linked to", "after": "goods in polypoly markets. It merges the product lifecycle approach with the price dispersion dynamics of homogeneous goods. The model predicts a minimum critical lifetime of non-durables in order to survive. Under the condition that the supply side of the market evolves much faster than the demand side the theory suggests that unsatisfied demands are present in the first stages of the lifecycle. With the growth of production capacities these demands disappear accompanied with", "start_char_pos": 59, "end_char_pos": 1517 }, { "type": "R", "before": "stochastic jump-diffusion model", "after": "logistic decrease of the mean price of the good. The model is applied to electricity as a non-durable satisfying the model condition. The presented theory allows a deeper understanding of the sales and price dynamics of non-durables", "start_char_pos": 1520, "end_char_pos": 1551 } ]
[ 0, 143, 226, 527, 599, 750, 854, 1038, 1180, 1316, 1434 ]
1109.6715
1
The wormlike chain (WLC) model of DNA bending accurately reproduces single-molecule force-extension profiles of long (kilobase) chains. These bending statistics over large scales do not, however, establish a unique microscopic model for elasticity at the 1-10 bp scale, which holds particular interest in biological contexts. Here we examine a specific microscopic description, introduced by Yan and Marko, which allows for disruption of base pairing (i.e., "melting" ) and consequently enhanced local flexibility. We first reformulate this model to ensure consistency with the well-established thermodynamics of melting in long chains. Using Monte Carlo simulations, we compute cyclization rates of such a meltable wormlike chain (MWLC) over a broad range of chain lengths, including very short molecules (30 bp) that have not yet been explored experimentally. For chains longer than about 120 bp, including most molecules studied to date in the laboratory, we find that melting excitations have little impact on cyclization kinetics. Strong signatures of melting, which might be resolved within typical experimental scatter, emerge only for shorter chains.
The wormlike chain (WLC) model of DNA bending accurately reproduces single-molecule force-extension profiles of long (kilobase) chains. These bending statistics over large scales do not, however, establish a unique microscopic model for elasticity at the 1-10 bp scale, which holds particular interest in biological contexts. Here we examine a class of microscopic models which allow for disruption of base pairing (i.e., a `melt' or `kink', generically an `excitation' ) and consequently enhanced local flexibility. We first analyze the effect on the excitation free energy of integrating out the spatial degrees of freedom in a wormlike chain. Based on this analysis, we present a formulation of these models that ensures consistency with the well-established thermodynamics of melting in long chains. Using a new method to calculate cyclization statistics of short chains from enhanced-sampling Monte Carlo simulations, we compute J-factors of a meltable wormlike chain (MWLC) over a broad range of chain lengths, including very short molecules (30 bp) that have not yet been explored experimentally. For chains longer than about 120 bp, including most molecules studied to date in the laboratory, we find that melting excitations have little impact on cyclization kinetics. Strong signatures of melting, which might be resolved within typical experimental scatter, emerge only for shorter chains.
[ { "type": "R", "before": "specific microscopic description, introduced by Yan and Marko, which allows", "after": "class of microscopic models which allow", "start_char_pos": 344, "end_char_pos": 419 }, { "type": "R", "before": "\"melting\"", "after": "a `melt' or `kink', generically an `excitation'", "start_char_pos": 458, "end_char_pos": 467 }, { "type": "R", "before": "reformulate this model to ensure", "after": "analyze the effect on the excitation free energy of integrating out the spatial degrees of freedom in a wormlike chain. Based on this analysis, we present a formulation of these models that ensures", "start_char_pos": 524, "end_char_pos": 556 }, { "type": "A", "before": null, "after": "a new method to calculate cyclization statistics of short chains from enhanced-sampling", "start_char_pos": 643, "end_char_pos": 643 }, { "type": "R", "before": "cyclization rates of such", "after": "J-factors of", "start_char_pos": 680, "end_char_pos": 705 } ]
[ 0, 135, 325, 514, 636, 862, 1036 ]
1110.0220
1
This paper studies the optimal timing to liquidate defaultable securities in a general intensity-based credit risk model under stochastic interest rate. We incorporate the potential price discrepancy between the market and investors, which is characterized by risk-neutral valuation under different default risk premia specifications. To quantify the value of optimally timing to sell , we introduce the delayed liquidation premium which is closely related to the stochastic bracket between the market price and a pricing kernel. We analyze the optimal liquidation policy for various credit derivatives. Our model serves as the building block for the sequential buying and selling problem . We also discuss the extensions to a jump-diffusion default intensity model as well as a defaultable equity model .
This paper studies the optimal timing to liquidate credit derivatives in a general intensity-based credit risk model under stochastic interest rate. We incorporate the potential price discrepancy between the market and investors, which is characterized by risk-neutral valuation under different default risk premia specifications. We quantify the value of optimally timing to sell through the concept of delayed liquidation premium , and analyze the associated probabilistic representation and variational inequality. We illustrate the optimal liquidation policy for both single-named and multi-named credit derivatives. Our model is extended to study the sequential buying and selling problem with and without short-sale constraint .
[ { "type": "R", "before": "defaultable securities", "after": "credit derivatives", "start_char_pos": 51, "end_char_pos": 73 }, { "type": "R", "before": "To", "after": "We", "start_char_pos": 335, "end_char_pos": 337 }, { "type": "R", "before": ", we introduce the", "after": "through the concept of", "start_char_pos": 385, "end_char_pos": 403 }, { "type": "R", "before": "which is closely related to the stochastic bracket between the market price and a pricing kernel. We analyze", "after": ", and analyze the associated probabilistic representation and variational inequality. We illustrate", "start_char_pos": 432, "end_char_pos": 540 }, { "type": "R", "before": "various", "after": "both single-named and multi-named", "start_char_pos": 576, "end_char_pos": 583 }, { "type": "R", "before": "serves as the building block for the", "after": "is extended to study the", "start_char_pos": 614, "end_char_pos": 650 }, { "type": "R", "before": ". We also discuss the extensions to a jump-diffusion default intensity model as well as a defaultable equity model", "after": "with and without short-sale constraint", "start_char_pos": 689, "end_char_pos": 803 } ]
[ 0, 152, 334, 529, 603, 690 ]
1110.0276
1
RNA crystallographic models , our richest sources of RNA structural information, contain pervasive errors due to ambiguities in manually fitting RNA backbones into experimental density maps. To resolve these ambiguities , we have developed a new Rosetta structure prediction tool (ERRASER: Enumerative Real-space Refinement ASsisted by Electron density under Rosetta ) and coupled it to MolProbity validation and PHENIX diffraction-based refinement. On 15 crystallographic datasetsfor ribozymes, riboswitches, and other RNA domains, ERRASER/PHENIX corrects the majority of identifiable sugar pucker errors, steric clashes , suspicious backbone rotamers, and incorrect bond lengths/angles, while, on average, improving Rfree correlation to set-aside diffraction data by 0.010. As further confirmation of improved accuracy, the refinement enhances agreement between crystals solved by independent groups and between domains related by non-crystallographic symmetry (NCS). Finally, we demonstrate successful application of ERRASER on coordinates for an entire 30S ribosomal subunit. By rapidly and systematically disambiguating RNA model fitting, ERRASER enables RNA crystallography with significantly fewer errors .
RNA crystallographic models contain pervasive ambiguities due to the difficulty of manually fitting RNA backbones into experimental density maps. Recent advances in ab initio RNA structure prediction suggest an automated way to resolve these ambiguities . We present a protocol for Enumerative Real-space Refinement ASsisted by Electron density under Rosetta (ERRASER), coupled to PHENIX diffraction-based refinement. On 24 RNA crystallographic datasets, including a 30S ribosomal subunit and an unreleased IRES domain of hepatitis C virus, the protocol corrects the majority of steric clashes and anomalous backbone and bond geometries, as assessed by MolProbity. Furthermore, the method improves the average Rfree by 0.014, resolves functionally important discrepancies in protein-binding kink turns and group I ribozyme active sites, and refines low-resolution structures to better match higher resolution structures. By enabling such 'super-resolution' interpretation of crystallographic data, ERRASER is a unique application of RNA structure prediction that promises routine use in experimental structural biology .
[ { "type": "R", "before": ", our richest sources of RNA structural information, contain pervasive errors due to ambiguities in", "after": "contain pervasive ambiguities due to the difficulty of", "start_char_pos": 28, "end_char_pos": 127 }, { "type": "R", "before": "To", "after": "Recent advances in ab initio RNA structure prediction suggest an automated way to", "start_char_pos": 191, "end_char_pos": 193 }, { "type": "R", "before": ", we have developed a new Rosetta structure prediction tool (ERRASER:", "after": ". We present a protocol for", "start_char_pos": 220, "end_char_pos": 289 }, { "type": "R", "before": ") and coupled it to MolProbity validation and", "after": "(ERRASER), coupled to", "start_char_pos": 367, "end_char_pos": 412 }, { "type": "R", "before": "15 crystallographic datasetsfor ribozymes, riboswitches, and other RNA domains, ERRASER/PHENIX", "after": "24 RNA crystallographic datasets, including a 30S ribosomal subunit and an unreleased IRES domain of hepatitis C virus, the protocol", "start_char_pos": 453, "end_char_pos": 547 }, { "type": "R", "before": "identifiable sugar pucker errors, steric clashes , suspicious backbone rotamers, and incorrect bond lengths/angles, while, on average, improving Rfree correlation to set-aside diffraction data by 0.010. As further confirmation of improved accuracy, the refinement enhances agreement between crystals solved by independent groups and between domains related by non-crystallographic symmetry (NCS). Finally, we demonstrate successful application of ERRASER on coordinates for an entire 30S ribosomal subunit. By rapidly and systematically disambiguating RNA model fitting, ERRASER enables RNA crystallography with significantly fewer errors", "after": "steric clashes and anomalous backbone and bond geometries, as assessed by MolProbity. Furthermore, the method improves the average Rfree by 0.014, resolves functionally important discrepancies in protein-binding kink turns and group I ribozyme active sites, and refines low-resolution structures to better match higher resolution structures. By enabling such 'super-resolution' interpretation of crystallographic data, ERRASER is a unique application of RNA structure prediction that promises routine use in experimental structural biology", "start_char_pos": 573, "end_char_pos": 1211 } ]
[ 0, 190, 449, 775, 969, 1079 ]
1110.0276
2
RNA crystallographic models contain pervasive ambiguities due to the difficulty of manually fitting RNA backbones into experimental density maps . Recent advances in ab initio RNA structure prediction suggest an automated way to resolve these ambiguities. We present a protocol for Enumerative Real-space Refinement ASsisted by Electron density under Rosetta (ERRASER), coupled to PHENIX diffraction-based refinement. On 24 RNA crystallographic datasets, including a 30S ribosomal subunit and an unreleased IRES domain of hepatitis C virus, the protocol corrects the majority of steric clashes and anomalous backbone and bond geometries, as assessed by MolProbity. Furthermore, the method improves the average Rfree by 0.014 , resolves functionally important discrepancies in protein-binding kink turns and group I ribozyme active sites, and refines low-resolution structures to better match higher resolution structures. By enabling such 'super-resolution' interpretation of crystallographic data, ERRASER is a unique application of RNA structure prediction that promises routine use in experimental structural biology .
Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average Rfree factor , resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models .
[ { "type": "R", "before": "RNA crystallographic models contain pervasive ambiguities due to the difficulty of manually fitting RNA backbones into experimental density maps . Recent advances in ab initio RNA structure prediction suggest an automated way to resolve these ambiguities. We present a protocol for Enumerative Real-space Refinement ASsisted by Electron", "after": "Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron", "start_char_pos": 0, "end_char_pos": 336 }, { "type": "R", "before": "PHENIX", "after": "Python-based hierarchical environment for integrated 'xtallography' (PHENIX)", "start_char_pos": 381, "end_char_pos": 387 }, { "type": "R", "before": "RNA crystallographic datasets, including a 30S ribosomal subunit and an unreleased IRES domain of hepatitis C virus, the protocol", "after": "data sets, ERRASER automatically", "start_char_pos": 424, "end_char_pos": 553 }, { "type": "R", "before": "steric clashes and anomalous backbone and bond geometries, as assessed by MolProbity. Furthermore, the method", "after": "MolProbity-assessed errors,", "start_char_pos": 579, "end_char_pos": 688 }, { "type": "R", "before": "by 0.014", "after": "factor", "start_char_pos": 716, "end_char_pos": 724 }, { "type": "R", "before": "protein-binding kink turns and group I ribozyme active sites, and", "after": "noncanonical structure and", "start_char_pos": 776, "end_char_pos": 841 }, { "type": "R", "before": "structures", "after": "models", "start_char_pos": 865, "end_char_pos": 875 }, { "type": "R", "before": "higher resolution structures. By enabling such 'super-resolution' interpretation of crystallographic data, ERRASER is a unique application of RNA structure prediction that promises routine use in experimental structural biology", "after": "higher-resolution models", "start_char_pos": 892, "end_char_pos": 1119 } ]
[ 0, 255, 417, 664, 921 ]
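The JSON lists above encode each revision as a sequence of edit actions, with a "type" of "R" (replace), "A" (add) or "D" (delete), optional "before"/"after" strings, and "start_char_pos"/"end_char_pos" offsets into the original abstract. The following minimal Python sketch shows how such a list can rebuild the revised text from the original; the field semantics are inferred from the records themselves and the helper name apply_edit_actions is ours, not part of the dataset. Applying actions from the highest offset down keeps the earlier offsets valid.

def apply_edit_actions(before: str, actions: list) -> str:
    """Splice a record's edit actions into its original abstract text."""
    out = before
    # Walk actions from the highest start offset to the lowest so that
    # earlier character positions are not shifted by later splices.
    for act in sorted(actions, key=lambda a: a["start_char_pos"], reverse=True):
        # "R" and "A" insert the "after" text; "D" inserts nothing.
        replacement = act["after"] if act["type"] in ("R", "A") else ""
        out = out[: act["start_char_pos"]] + (replacement or "") + out[act["end_char_pos"] :]
    return out

Because the offsets index the raw stored strings, the reconstruction matches the stored revised abstract up to the whitespace tokenization quirks visible in the text fields.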
1110.0403
1
We analyze pricing and portfolio optimization problems in defaultable regime-switching markets. We contribute to both of these problems by obtaining novel characterizations of option prices and optimal portfolio strategies under regime-switching. Using our option price representation, we develop a novel efficient method to price claims which may depend on the full path of the underlying Markov chain. This is done via a change of probability measure and a short-time asymptotic expansion of the claim's price in terms of the Laplace transforms of the symmetric Dirichlet distribution. The proposed approach is applied to price not only simple European claims such as defaultable bonds, but also a new type of path-dependent claims that we term self-decomposable, as well as the important class of vulnerable call and put options on a stock. In the portfolio optimization context, we obtain explicit constructions of value functions and investment strategies for investors with Constant Relative Risk Aversion (CRRA) utilities, built on the Hamilton-Jacobi-Bellman (HJB) framework developed in Capponi and Figueroa-Lopez (2011). We give a precise characterization of the investment strategies in terms of corporate bond returns, forward rates, and expected recovery at default, and illustrate the dependence of the optimal strategies on time, losses given default, and risk aversion level of the investor through a detailed economic analysis.
Using a suitable change of probability measure, we obtain a novel Poisson series representation for the arbitrage-free price process of vulnerable contingent claims in a regime-switching market driven by an underlying continuous-time Markov process. As a result of this representation, along with a short-time asymptotic expansion of the claim's price process, we develop an efficient method for pricing claims whose payoffs may depend on the full path of the underlying Markov chain. The proposed approach is applied to price not only simple European claims such as defaultable bonds, but also a new type of path-dependent claims that we term self-decomposable, as well as the important class of vulnerable call and put options on a stock. We provide a detailed error analysis and illustrate the accuracy and computational complexity of our method on several market-traded instruments, such as defaultable bond prices, barrier options, and vulnerable call options. Using again our Poisson series representation, we show differentiability in time of the pre-default price function of European vulnerable claims, which enables us to rigorously deduce Feynman-Kac representations for the pre-default pricing function and new semimartingale representations for the price process of the vulnerable claim under both risk-neutral and objective probability measures.
[ { "type": "R", "before": "We analyze pricing and portfolio optimization problems in defaultable regime switching markets. We contribute to both of these problems by obtaining novel characterizations of option prices and optimal portfolio strategies under", "after": "Using a suitable change of probability measure, we obtain a novel Poisson series representation for the arbitrage- free price process of vulnerable contingent claims in a", "start_char_pos": 0, "end_char_pos": 228 }, { "type": "R", "before": ". Using our option price representation, we develop a novel efficient method to price claims which may depend on the full path of the underlying Markov chain. This is done via a change of probability measure and", "after": "market driven by an underlying continuous- time Markov process. As a result of this representation, along with", "start_char_pos": 246, "end_char_pos": 457 }, { "type": "R", "before": "in terms of the Laplace transforms of the symmetric Dirichlet distribution", "after": "process, we develop an efficient method for pricing claims whose payoffs may depend on the full path of the underlying Markov chain", "start_char_pos": 513, "end_char_pos": 587 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 767, "end_char_pos": 767 }, { "type": "R", "before": "In the portfolio optimization context, we obtain explicit constructions of value functions and investment strategies for investors with Constant Relative Risk Aversion (CRRA) utilities, built on the Hamilton-Jacobi-Bellman (HJB) framework developed in Capponi and Figueroa-Lopez (2011). We give a precise characterization of the investment strategies in terms of corporate bond returns, forward rates, and expected recovery at default, and illustrate the dependence of the optimal strategies on time, losses given default, and risk aversion level of the investor through a detailed economical analysis", "after": "We provide a detailed error analysis and illustrate the accuracy and computational complexity of our method on several market traded instruments, such as defaultable bond prices, barrier options, and vulnerable call options. Using again our Poisson series representation, we show differentiability in time of the pre-default price function of European vulnerable claims, which enables us to rigorously deduce Feynman-Kac representations for the pre-default pricing function and new semimartingale representations for the price process of the vulnerable claim under both risk-neutral and objective probability measures", "start_char_pos": 846, "end_char_pos": 1447 } ]
[ 0, 95, 404, 589, 845, 1132 ]
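The record above (1110.0403) works in a market modulated by a continuous-time Markov chain. As a hedged illustration of that setting only, and not of the paper's Poisson-series method, the sketch below Monte-Carlo prices a zero-recovery defaultable zero-coupon bond, E[exp(-integral of (r + h) dt)], under a two-state chain. The generator Q, regime short rates r and default intensities h are invented placeholders.

import numpy as np

# All parameters below are invented for illustration.
Q = np.array([[-0.5, 0.5],
              [0.3, -0.3]])      # generator of the two-state regime chain
r = np.array([0.02, 0.05])       # short rate in each regime
h = np.array([0.01, 0.08])       # default intensity in each regime
T, n_paths = 1.0, 100_000
rng = np.random.default_rng(0)

prices = []
for _ in range(n_paths):
    t, state, integral = 0.0, 0, 0.0
    while t < T:
        hold = rng.exponential(1.0 / -Q[state, state])  # exponential sojourn time
        dt = min(hold, T - t)
        integral += (r[state] + h[state]) * dt          # accumulate (r + h) over dt
        t += dt
        if t < T:
            state = 1 - state                           # switch to the other regime
    prices.append(np.exp(-integral))                    # zero-recovery discounting

print("Monte Carlo defaultable bond price:", np.mean(prices))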
1110.1319
1
We present a novel methodology to determine the fundamental value of firms in the social-networking sector, motivated by recent realized IPOs and by reports that suggest sky-high valuations of firms such as facebook, Groupon, LinkedIn Corp., Pandora Media Inc., Twitter, Zynga. Our valuation of these firms is based on two ingredients: (i) revenues and profits of a social-networking firm are inherently linked to its user basis through a direct channel that has no equivalent in other sectors; (ii) the growth of the number of users can be calibrated with standard logistic growth models and allows for reliable extrapolations of the size of the business at long time horizons. Illustrating the methodology with facebook, one of the biggest of the social-media giants, we find a clear signature of a change of regime that occurred in 2010 on the growth of the number of users, from a pure exponential behavior (a paradigm for unlimited growth) to a logistic function describing the evolution towards an asymptotic plateau (a paradigm for growth in competition). We consider three different scenarios, a base case, a high growth and an extreme growth scenario. Using a discount factor of 5%, a profit margin of 29% and 3.5 USD of revenues per user per year yields a value of facebook of 15.3 billion USD in the base case scenario, 20.2 billion USD in the high growth scenario and 32.9 billion USD in the extreme growth scenario. According to our methodology, this would imply that facebook would need to increase its profit per user before the IPO by a factor of 3 to 6 in the base case scenario, 2.5 to 5 in the high growth scenario and 1.5 to 3 in the extreme growth scenario in order to meet the current, widespread, high expectations.
We present a novel methodology to determine the fundamental value of firms in the social-networking sector based on two ingredients: (i) revenues and profits are inherently linked to the firm's user basis through a direct channel that has no equivalent in other sectors; (ii) the growth of the number of users can be calibrated with standard logistic growth models and allows for reliable extrapolations of the size of the business at long time horizons. We illustrate the methodology with a detailed analysis of facebook, one of the biggest of the social-media giants. There is a clear signature of a change of regime that occurred in 2010 on the growth of the number of users, from a pure exponential behavior (a paradigm for unlimited growth) to a logistic function with an asymptotic plateau (a paradigm for growth in competition). We consider three different scenarios, a base case, a high growth and an extreme growth scenario. Using a discount factor of 5%, a profit margin of 29% and 3.5 USD of revenues per user per year yields a value of facebook of 15.3 billion USD in the base case scenario, 20.2 billion USD in the high growth scenario and 32.9 billion USD in the extreme growth scenario. According to our methodology, this would imply that facebook would need to increase its profit per user before the IPO by a factor of 3 to 6 in the base case scenario, 2.5 to 5 in the high growth scenario and 1.5 to 3 in the extreme growth scenario in order to meet the current, widespread, high expectations. To prove the wider applicability of our methodology, the analysis is repeated on Groupon, the well-known deal-of-the-day website, which is expected to go public in November 2011. The results are in line with the facebook analysis. Customer growth will plateau. By not taking this fundamental property of the growth process into consideration, estimates of its IPO value are wildly inflated.
[ { "type": "D", "before": ", motivated by recent realized IPOs and by reports that suggest sky-high valuations of firms such as facebook, Groupon, LinkedIn Corp., Pandora Media Inc, Twitter, Zynga. Our valuation of these firms is", "after": null, "start_char_pos": 107, "end_char_pos": 309 }, { "type": "D", "before": "of a social-networking firm", "after": null, "start_char_pos": 361, "end_char_pos": 388 }, { "type": "R", "before": "Illustrating", "after": "We illustrate", "start_char_pos": 679, "end_char_pos": 691 }, { "type": "A", "before": null, "after": "a detailed analysis of", "start_char_pos": 713, "end_char_pos": 713 }, { "type": "R", "before": ", we find", "after": ". There is", "start_char_pos": 770, "end_char_pos": 779 }, { "type": "R", "before": "describing the evolution towards an", "after": "with", "start_char_pos": 970, "end_char_pos": 1005 }, { "type": "A", "before": null, "after": ". To prove the wider applicability of our methodology, the analysis is repeated on Groupon, the well-known deal-of-the-day website which is expected to go public in November 2011. The results are in line with the facebook analysis. Customer growth will plateau. By not taking this fundamental property of the growth process into consideration, estimates of its IPO are wildly overpriced", "start_char_pos": 1742, "end_char_pos": 1742 } ]
[ 0, 277, 494, 678, 1064, 1162, 1432 ]
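Two steps of the valuation recipe in the record above (1110.1319) are easy to make concrete: fit a logistic curve to user counts to estimate the asymptotic plateau K, then convert plateau users into value with the quoted revenue per user, profit margin and discount rate. In the Python sketch below the user counts are invented placeholders, and the perpetuity formula is a simplification of ours, not the paper's exact cash-flow model.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # K: asymptotic plateau, r: growth rate, t0: inflection time.
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(7)  # years since launch (placeholder data)
users = np.array([0.006, 0.015, 0.040, 0.100, 0.226, 0.420, 0.614])  # billions
(K, r, t0), _ = curve_fit(logistic, years, users, p0=(1.0, 1.0, 4.0))

# Perpetuity shortcut: plateau users * revenue/user * margin / discount rate.
value_busd = K * 3.5 * 0.29 / 0.05
print(f"fitted plateau K = {K:.2f} billion users -> value ~ {value_busd:.1f} B USD")

Under this shortcut, each plateau user is worth about 3.5 * 0.29 / 0.05, roughly 20.3 USD, so the abstract's 15.3 to 32.9 billion USD range corresponds to plateaus of roughly 0.75 to 1.6 billion users.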
1110.1567
1
In a global economy, fair and universal management of worldwide markets with a guarantee of sustainability of life on this planet is a key element. In this work, a universal measure of emissions to be applied at the international level is proposed, based on a modification of the Greenhouse Gas Intensity (GHG-INT) measure. It is hoped that the generality and low administrative cost of this measure, which we call the Modified Greenhouse Gas Intensity measure (MGHG-INT), will eliminate any need to classify nations, and provide a uniform approach to penalizing emissions. The core of the MGHG-INT is what we call the Modified Gross Domestic Product (MGDP), based on the Inequality-adjusted Human Development Index (IHDI). The MGDP enables us to normalize the status of all nations on a common scale, making it possible for us to propose universal measures, such as MGHG-INT. We also propose a carbon border tax, either as a direct tax or a tax adjustment, applicable at national borders, based on MGHG-INT and MGDP. This carbon tax is supported by a proposed global Emissions Trading System (ETS) applied at the international level. It is assumed that each country will implement its own equivalent carbon tax system to preserve the competitiveness of the marketplace; however, consideration of this aspect of our concept is beyond the scope of our work here. The proposed carbon tax is analyzed in a short-term scenario with interesting outcomes, such as the simultaneous lowering of emissions levels and the maintenance of reasonable growth in MGDP. The proposed carbon tax and ETS can also be implemented at the national level in a country, in provincial or corporate slices. In addition to annual GHG emissions, cumulative GHG emissions over a decade are considered for the big players in the world economy with almost the same results.
It will be difficult to gain the agreement of all the actors on any proposal for climate change management, if universality and fairness are not considered. In this work, a universal measure of emissions to be applied at the international level is proposed, based on a modification of the Greenhouse Gas Intensity (GHG-INT) measure. It is hoped that the generality and low administrative cost of this measure, which we call the Modified Greenhouse Gas Intensity measure (MGHG-INT), will eliminate any need to classify nations. The core of the MGHG-INT is what we call the IHDI-adjusted Gross Domestic Product (IDHIGDP), based on the Inequality-adjusted Human Development Index (IHDI). The IDHIGDP makes it possible to propose universal measures, such as MGHG-INT. We also propose a carbon border tax applicable at national borders, based on MGHG-INT and IDHIGDP. This carbon tax is supported by a proposed global Emissions Trading System (ETS). The proposed carbon tax is analyzed in a short-term scenario, where it is shown that it can result in a significant reduction in global emissions while keeping the economy growing at a positive rate. In addition to annual GHG emissions, cumulative GHG emissions over two decades are considered with almost the same results.
[ { "type": "R", "before": "In a global economy, fair and universal management of worldwide markets with a guarantee of sustainability of life on this planet is a key element", "after": "It will be difficult to gain the agreement of all the actors on any proposal for climate change management, if universality and fairness are not considered", "start_char_pos": 0, "end_char_pos": 146 }, { "type": "D", "before": ", and provide a uniform approach to penalizing emissions", "after": null, "start_char_pos": 518, "end_char_pos": 574 }, { "type": "R", "before": "Modified", "after": "IHDI-adjusted", "start_char_pos": 622, "end_char_pos": 630 }, { "type": "R", "before": "MGDP", "after": "IDHIGDP", "start_char_pos": 656, "end_char_pos": 660 }, { "type": "R", "before": "MGDP enables us to normalize the status of all nations on a common scale, making it possible for us", "after": "IDHIGDP makes it possible", "start_char_pos": 733, "end_char_pos": 832 }, { "type": "D", "before": ", either as a direct tax or a tax adjustment,", "after": null, "start_char_pos": 918, "end_char_pos": 963 }, { "type": "R", "before": "MGDP", "after": "IDHIGDP", "start_char_pos": 1018, "end_char_pos": 1022 }, { "type": "R", "before": "applied at the international level. It is assumed that each country will implement its own equivalent carbon tax system to preserve the competitiveness of the marketplace; however, consideration of this aspect of our concept is beyond the scope of our work here.", "after": ".", "start_char_pos": 1106, "end_char_pos": 1368 }, { "type": "R", "before": "with interesting outcomes, such as the simultaneous lowering of emissions levels and the maintenance of reasonable growth in MGDP. The proposed carbon tax and ETS can also be implemented at the national level in a country, in provincial or corporate slices", "after": ", where it is shown that it can result in significant reduction in global emissions while keeping the economy growing at a positive rate", "start_char_pos": 1430, "end_char_pos": 1686 }, { "type": "R", "before": "a decade are considered for the big players in the world economy", "after": "two decades are considered", "start_char_pos": 1756, "end_char_pos": 1820 } ]
[ 0, 148, 324, 576, 728, 881, 1141, 1277, 1368, 1560, 1688 ]
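The last record (1110.1567) describes, in essence, an emissions-intensity measure computed against an inequality-adjusted output figure. A minimal sketch follows, assuming MGHG-INT is emissions divided by IDHIGDP and that IDHIGDP scales nominal GDP by the IHDI; these functional forms and all numbers are our assumptions for illustration, not the paper's definitions.

def idhigdp(gdp_usd: float, ihdi: float) -> float:
    # Assumed form: nominal GDP scaled by the Inequality-adjusted
    # Human Development Index (a value between 0 and 1).
    return gdp_usd * ihdi

def mghg_int(ghg_t_co2e: float, gdp_usd: float, ihdi: float) -> float:
    # Assumed form: emissions per unit of IHDI-adjusted output.
    return ghg_t_co2e / idhigdp(gdp_usd, ihdi)

# Invented example: 500 Mt CO2e, 1.5 trillion USD GDP, IHDI of 0.80.
print(mghg_int(500e6, 1.5e12, 0.80), "t CO2e per IHDI-adjusted USD")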