doc_id: stringlengths 2 to 10
revision_depth: stringclasses (5 values)
before_revision: stringlengths 3 to 309k
after_revision: stringlengths 5 to 309k
edit_actions: list
sents_char_pos: sequence
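Each record below pairs a before_revision and after_revision text with span-level edit_actions (type R = replace, A = add, D = delete, each carrying before/after strings and start_char_pos/end_char_pos offsets) and sents_char_pos, which appears to list sentence-boundary character offsets into before_revision. As a minimal sketch of how such records might be consumed, the Python snippet below applies a record's edit_actions to its before_revision; the offset semantics and the whitespace handling around edited spans are assumptions inferred from the rows shown here, not a documented reconstruction procedure, and the example row is a shortened, hypothetical one.

```python
def apply_edit_actions(before_revision: str, edit_actions: list) -> str:
    """Rebuild a revised text from before_revision plus span-level edits.

    Assumed semantics, inferred from the rows below rather than documented:
    start_char_pos/end_char_pos index into the ORIGINAL string, and `after`
    is None for deletions (type "D").  Applying edits from the rightmost
    span backwards keeps the earlier offsets valid.  Whitespace handling
    around inserted spans may differ from the dataset's own convention.
    """
    text = before_revision
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""  # None -> empty string for deletions
        text = text[:act["start_char_pos"]] + replacement + text[act["end_char_pos"]:]
    return text


if __name__ == "__main__":
    # Hypothetical, shortened row in the format shown below (not an actual record).
    row = {
        "before_revision": "We consider a simple model of a urban housing market.",
        "edit_actions": [
            {"type": "R", "before": "a urban housing", "after": "an urban rental housing",
             "start_char_pos": 30, "end_char_pos": 45},
        ],
    }
    print(apply_edit_actions(row["before_revision"], row["edit_actions"]))
    # -> "We consider a simple model of an urban rental housing market."
```

For real records the same loop should reproduce after_revision up to whitespace normalization around added spans; that equivalence is worth verifying against a few rows before relying on it.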
1203.4610
2
We study capital requirements for financial positions belonging to spaces of bounded measurable functions . We allow for general acceptance sets and general positive eligible (or "reference") assets, which include defaultable bonds, options, or limited liability assets . Since the payoff of these assets is not bounded away from zero the resulting capital requirements cannot be transformed into cash-invariant risk measures by a simple change of numeraire. However, extending the range of eligible assets is important because, as exemplified by the recent financial crisis, the existence of default-free securities may not be a realistic assumption to make. We study finiteness and continuity properties of capital requirements in this general context. We apply the results to capital requirements based on Value-at-Risk and Tail-Value-at-Risk acceptability, the two most important acceptability criteria in practice. Finally, we prove that it is not possible to choose the eligible asset so that the corresponding capital requirement dominates the capital requirement corresponding any other choice of the eligible asset. Our examples and results on finiteness and continuity show that a theory of capital requirements allowing for general eligible assets is richer than that of cash-invariant capital requirements .
We study capital requirements for bounded financial positions defined as the minimum amount of capital to invest in a chosen eligible asset targeting a pre-specified acceptability test . We allow for general acceptance sets and general eligible assets, including defaultable bonds . Since the payoff of these assets is not necessarily bounded away from zero the resulting risk measures cannot be transformed into cash-additive risk measures by a change of numeraire. However, extending the range of eligible assets is important because, as exemplified by the recent financial crisis, assuming the existence of default-free bonds may be unrealistic. We focus on finiteness and continuity properties of these general risk measures. As an application, we discuss capital requirements based on Value-at-Risk and Tail-Value-at-Risk acceptability, the two most important acceptability criteria in practice. Finally, we prove that there is no optimal choice of the eligible asset. Our results and our examples show that a theory of capital requirements allowing for general eligible assets is richer than the standard theory of cash-additive risk measures .
[ { "type": "R", "before": "financial positions belonging to spaces of bounded measurable functions", "after": "bounded financial positions defined as the minimum amount of capital to invest in a chosen eligible asset targeting a pre-specified acceptability test", "start_char_pos": 34, "end_char_pos": 105 }, { "type": "R", "before": "positive eligible (or \"reference\") assets, which include defaultable bonds, options, or limited liability assets", "after": "eligible assets, including defaultable bonds", "start_char_pos": 157, "end_char_pos": 269 }, { "type": "A", "before": null, "after": "necessarily", "start_char_pos": 312, "end_char_pos": 312 }, { "type": "R", "before": "capital requirements", "after": "risk measures", "start_char_pos": 350, "end_char_pos": 370 }, { "type": "R", "before": "cash-invariant", "after": "cash-additive", "start_char_pos": 398, "end_char_pos": 412 }, { "type": "D", "before": "simple", "after": null, "start_char_pos": 432, "end_char_pos": 438 }, { "type": "A", "before": null, "after": "assuming", "start_char_pos": 577, "end_char_pos": 577 }, { "type": "R", "before": "securities may not be a realistic assumption to make. We study", "after": "bonds may be unrealistic. We focus on", "start_char_pos": 608, "end_char_pos": 670 }, { "type": "R", "before": "capital requirements in this general context. We apply the results to", "after": "these general risk measures. As an application, we discuss", "start_char_pos": 711, "end_char_pos": 780 }, { "type": "R", "before": "it is not possible to choose the eligible asset so that the corresponding capital requirement dominates the capital requirement corresponding any other", "after": "there is no optimal", "start_char_pos": 945, "end_char_pos": 1096 }, { "type": "R", "before": "examples and results on finiteness and continuity", "after": "results and our examples", "start_char_pos": 1131, "end_char_pos": 1180 }, { "type": "R", "before": "that of cash-invariant capital requirements", "after": "the standard theory of cash-additive risk measures", "start_char_pos": 1276, "end_char_pos": 1319 } ]
[ 0, 107, 459, 661, 756, 921, 1126 ]
1203.5298
1
We consider a simple stochastic model of a urban housing market, in which the interaction of tenants and landlords induces rent (or price) fluctuations. We simulate the model numerically and measure the equilibrium price distribution, which is found to be well-described by a lognormal law. We also study the influence of the density of agents (or equivalently, the vacancy rate) on the price distribution. A simplified version of the model, amenable to analytical treatment, is proposed and allows us to recover a normal distribution for the logarithm of the price . The predicted equilibrium value agrees quantitatively with numerical simulations, while a qualitative agreement is obtained for the standard deviation .
We consider a simple stochastic model of a urban rental housing market, in which the interaction of tenants and landlords induces rent fluctuations. We simulate the model numerically and measure the equilibrium rent distribution, which is found to be close to a lognormal law. We also study the influence of the density of agents (or equivalently, the vacancy rate) on the rent distribution. A simplified version of the model, amenable to analytical treatment, is studied and leads to a lognormal distribution of rents . The predicted equilibrium value agrees quantitatively with numerical simulations, while a qualitative agreement is obtained for the standard deviation . The connection with non-equilibrium statistical physics models like ratchets is also emphasized .
[ { "type": "A", "before": null, "after": "rental", "start_char_pos": 49, "end_char_pos": 49 }, { "type": "D", "before": "(or price)", "after": null, "start_char_pos": 129, "end_char_pos": 139 }, { "type": "R", "before": "price", "after": "rent", "start_char_pos": 216, "end_char_pos": 221 }, { "type": "R", "before": "well-described by", "after": "close to", "start_char_pos": 257, "end_char_pos": 274 }, { "type": "R", "before": "price", "after": "rent", "start_char_pos": 388, "end_char_pos": 393 }, { "type": "R", "before": "proposed and allows us to recover a normal distribution for the logarithm of the price", "after": "studied and leads to a lognormal distribution of rents", "start_char_pos": 480, "end_char_pos": 566 }, { "type": "A", "before": null, "after": ". The connection with non-equilibrium statistical physics models like ratchets is also emphasized", "start_char_pos": 720, "end_char_pos": 720 } ]
[ 0, 153, 291, 407, 568 ]
1203.5513
1
We consider a short rate model, driven by a stochastic process on the cone of positive semidefinite matrices. We propose a new closed form solution for the pricing of zero-coupon bonds and interest-rate derivatives, based on the Cameron-Martin approach outlined in Gnoatto and Grasselli (2011). Moreover, we derive sufficient conditions ensuring that the model replicates normal, inverse or humped yield curves.
We consider a short rate model, driven by a stochastic process on the cone of positive semidefinite matrices. We derive sufficient conditions ensuring that the model replicates normal, inverse or humped yield curves.
[ { "type": "D", "before": "propose a new closed form solution for the pricing of zero-coupon bonds and interest-rate derivatives, based on the Cameron-Martin approach outlined in Gnoatto and Grasselli (2011). Moreover, we", "after": null, "start_char_pos": 113, "end_char_pos": 307 } ]
[ 0, 109, 294 ]
1203.5574
1
The branching process (BP) approach has been successful in understanding the avalanche dynamics in complex networks. However, its applications are mainly focused on unipartite networks in which nodes are all of the same type. Here, we develop the BP approach for a particular bipartite Boolean network , composed of logic OR and AND gates, which is motivated to understand the avalanche dynamics in metabolic networks . We reduce the bipartite to a unipartite network by renormalizing the OR gates, and obtain an effective branching ratio for the AND gates. Then the standard BP approach is applied to the reduced unipartite network, and the avalanche size distribution is obtained. We also test the analytic result with simulations on a couple of metabolic networksin real world. They are in reasonable agreement with each other .
The branching process (BP) approach has been successful in explaining the avalanche dynamics in complex networks. However, its applications are mainly focused on unipartite networks , in which all nodes are of the same type. Here, motivated by a need to understand avalanche dynamics in metabolic networks, we extend the BP approach to a particular bipartite network composed of Boolean AND and OR logic gates . We reduce the bipartite network into a unipartite network by integrating out OR gates, and obtain the effective branching ratio for the remaining AND gates. Then the standard BP approach is applied to the reduced network, and the avalanche size distribution is obtained. We test the BP results with simulations on the model networks and two microbial metabolic networks, demonstrating the usefulness of the BP approach .
[ { "type": "R", "before": "understanding", "after": "explaining", "start_char_pos": 59, "end_char_pos": 72 }, { "type": "R", "before": "in which nodes are all", "after": ", in which all nodes are", "start_char_pos": 185, "end_char_pos": 207 }, { "type": "R", "before": "we develop", "after": "motivated by a need to understand avalanche dynamics in metabolic networks, we extend", "start_char_pos": 232, "end_char_pos": 242 }, { "type": "R", "before": "for", "after": "to", "start_char_pos": 259, "end_char_pos": 262 }, { "type": "R", "before": "Boolean network , composed of logic OR and AND gates, which is motivated to understand the avalanche dynamics in metabolic networks", "after": "network composed of Boolean AND and OR logic gates", "start_char_pos": 286, "end_char_pos": 417 }, { "type": "R", "before": "to", "after": "network into", "start_char_pos": 444, "end_char_pos": 446 }, { "type": "R", "before": "renormalizing the", "after": "integrating out", "start_char_pos": 471, "end_char_pos": 488 }, { "type": "R", "before": "an", "after": "the", "start_char_pos": 510, "end_char_pos": 512 }, { "type": "A", "before": null, "after": "remaining", "start_char_pos": 547, "end_char_pos": 547 }, { "type": "D", "before": "unipartite", "after": null, "start_char_pos": 615, "end_char_pos": 625 }, { "type": "R", "before": "also test the analytic result", "after": "test the BP results", "start_char_pos": 687, "end_char_pos": 716 }, { "type": "R", "before": "a couple of metabolic networksin real world. They are in reasonable agreement with each other", "after": "the model networks and two microbial metabolic networks, demonstrating the usefulness of the BP approach", "start_char_pos": 737, "end_char_pos": 830 } ]
[ 0, 116, 225, 419, 558, 683, 781 ]
1203.5903
1
In this paper quasi-closed-form solutions are derived for the price of equity and VIX derivatives under the assumption that the underlying follows a 3/2 process with jumps in the index. The newly-found formulae allow for an empirical analysis to be performed. In the case of the pure-diffusion 3/2 model , the dynamics are rich enough to capture the observed upward-sloping implied-volatility skew in VIX options. This observation contradicts a common perception in the literature that jumps are required for the consistent modeling of equity and VIX derivatives. We find that the 3/2 plus jumps model is more parsimonious than competing models from its class; it is able to accurately capture the joint dynamics of equity and VIX derivatives, without sacrificing analytic tractability. The model produces a good short-term fit to the implied volatility of index options due to the richer dynamics, while retaining the analytic tractability of its pure-diffusion counterpart .
The paper demonstrates that a pure-diffusion 3/2 model is able to capture the observed upward-sloping implied volatility skew in VIX options. This observation contradicts a common perception in the literature that jumps are required for the consistent modelling of equity and VIX derivatives. The pure-diffusion model, however, struggles to reproduce the smile in the implied volatilities of short-term index options. One remedy to this problem is to augment the model by introducing jumps in the index. The resulting 3/2 plus jumps model turns out to be as tractable as its pure-diffusion counterpart when it comes to pricing equity, realized variance and VIX derivatives, but accurately captures the smile in implied volatilities of short-term index options .
[ { "type": "R", "before": "In this paper quasi-closed-form solutions are derived for the price of equity and VIX derivatives under the assumption that the underlying follows a 3/2 process with jumps in the index. The newly-found formulae allow for an empirical analysis to be performed. In the case of the", "after": "The paper demonstrates that a", "start_char_pos": 0, "end_char_pos": 278 }, { "type": "R", "before": ", the dynamics are rich enough", "after": "is able", "start_char_pos": 304, "end_char_pos": 334 }, { "type": "R", "before": "implied-volatility", "after": "implied volatility", "start_char_pos": 374, "end_char_pos": 392 }, { "type": "R", "before": "modeling", "after": "modelling", "start_char_pos": 524, "end_char_pos": 532 }, { "type": "R", "before": "We find that the", "after": "The pure-diffusion model, however, struggles to reproduce the smile in the implied volatilities of short-term index options. One remedy to this problem is to augment the model by introducing jumps in the index. The resulting", "start_char_pos": 564, "end_char_pos": 580 }, { "type": "R", "before": "is more parsimonious than competing models from its class; it is able to accurately capture the joint dynamics of equity", "after": "turns out to be as tractable as its pure-diffusion counterpart when it comes to pricing equity, realized variance", "start_char_pos": 602, "end_char_pos": 722 }, { "type": "R", "before": "without sacrificing analytic tractability. The model produces a good short-term fit to the implied volatility of index options due to the richer dynamics, while retaining the analytic tractability of its pure-diffusion counterpart", "after": "but accurately captures the smile in implied volatilities of short-term index options", "start_char_pos": 744, "end_char_pos": 974 } ]
[ 0, 185, 259, 413, 563, 660, 786 ]
1203.6560
1
In a recent paper it was shown that, for chemical reaction networks possessing a subtle structural property called concordance, dynamical behavior of a very circumscribed (and largely stable) kind is enforced, so long as the kinetics lies within the very broad and natural class of weakly monotonic kinetics . In particular, multiple equilibria are precluded, as are degenerate positive equilibria. Moreover, under certain circumstances, also related to concordance, all real eigenvalues associated with a positive equilibrium are negative. Although concordance of a reaction network can be decided by readily available computational means, we show here that, when a nondegenerate network's Species Reaction Graph satisfies certain mild conditions, concordance and its dynamical consequences are ensured. These conditions are weaker than earlier ones invoked to establish kinetic system injectivity, which, in turn, is just one ramication of network concordance. Because the Species Reaction Graph resembles pathway depictions often drawn by biochemists, results here expand the possibility of inferring significant dynamical information directly from standard biochemical reaction diagrams.
In a recent paper it was shown that, for chemical reaction networks possessing a subtle structural property called concordance, dynamical behavior of a very circumscribed (and largely stable) kind is enforced, so long as the kinetics lies within the very broad and natural weakly monotonic class . In particular, multiple equilibria are precluded, as are degenerate positive equilibria. Moreover, under certain circumstances, also related to concordance, all real eigenvalues associated with a positive equilibrium are negative. Although concordance of a reaction network can be decided by readily available computational means, we show here that, when a nondegenerate network's Species-Reaction Graph satisfies certain mild conditions, concordance and its dynamical consequences are ensured. These conditions are weaker than earlier ones invoked to establish kinetic system injectivity, which, in turn, is just one ramification of network concordance. Because the Species-Reaction Graph resembles pathway depictions often drawn by biochemists, results here expand the possibility of inferring signicant dynamical information directly from standard biochemical reaction diagrams.
[ { "type": "R", "before": "class of weakly monotonic kinetics", "after": "weakly monotonic class", "start_char_pos": 273, "end_char_pos": 307 }, { "type": "R", "before": "Species Reaction", "after": "Species-Reaction", "start_char_pos": 691, "end_char_pos": 707 }, { "type": "R", "before": "ramication", "after": "ramification", "start_char_pos": 928, "end_char_pos": 938 }, { "type": "R", "before": "Species Reaction", "after": "Species-Reaction", "start_char_pos": 975, "end_char_pos": 991 }, { "type": "R", "before": "significant", "after": "signicant", "start_char_pos": 1104, "end_char_pos": 1115 } ]
[ 0, 309, 398, 540, 804, 962 ]
1204.0178
1
In this article the Gordan theorem is applied to the thermodynamics of a chemical reaction network at steady state. From a theoretical viewpoint it states that the exclusion (presence) of closed reactions loops makes possible (impossible) the definition of a thermodynamic potential and vice versa. On the computational side, it reveals that calculating reactions free energy and correcting reaction fluxes from infeasible loops are dual problems whose solutions are alternatively inconsistent. The relevance of this result for applications is discussed with an example in the field of constraints-based modeling of cellular metabolism where it leads to efficient and scalable methods to afford the energy balance analysis.
In this article the Gordan theorem is applied to the thermodynamics of a chemical reaction network at steady state. From a theoretical viewpoint it is equivalent to the Clausius formulation of the second law for the out of equilibrium steady states of chemical networks, i.e. it states that the exclusion (presence) of closed reactions loops makes possible (impossible) the definition of a thermodynamic potential and vice versa. On the computational side, it reveals that calculating reactions free energy and searching infeasible loops in flux states are dual problems whose solutions are alternatively inconsistent. The relevance of this result for applications is discussed with an example in the field of constraints-based modeling of cellular metabolism where it leads to efficient and scalable methods to afford the energy balance analysis.
[ { "type": "R", "before": "states", "after": "is equivalent to the Clausius formulation of the second law for the out of equilibrium steady states of chemical networks, i.e. it states", "start_char_pos": 148, "end_char_pos": 154 }, { "type": "R", "before": "correcting reaction fluxes from infeasible loops", "after": "searching infeasible loops in flux states", "start_char_pos": 380, "end_char_pos": 428 } ]
[ 0, 115, 298, 494 ]
1204.0350
1
On December 16th, 2011, Zynga, the well-known social game developing company went public. This event followed other recent IPOs in the world of social networking companies, such as Groupon or Linkedin among others. With a valuation close to 7 billion USD at the time when it went public, Zynga became one of the biggest web IPOs since Google. This recent enthusiasm for social networking companies raises the question whether they are overvalued. Indeed, during the few months since its IPO, Zynga showed significant variability, its market capitalization going from 5.6 to 10.2 billion USD, hinting at a possible irrational behavior from the market. To bring substance to the debate, we propose a two-tiered approach to compute the intrinsic value of Zynga. First, we introduce a new model to forecast its user base, based on the individual dynamics of its major games. Next, we model the revenues per user using a logistic function, a standard model for growth in competition. This allows us to bracket the valuation of Zynga using three different scenarios: 3.4, 4.0 and 4.8 billion USD in the base case, high growth and extreme growth scenario respectively. This suggests that Zynga has been overpriced ever since its IPO. Given our diagnostic of a bubble , trading strategies should be tuned to capture the sentiments and herding spirits associated with social networks, while minimizing the impact of standard fundamental factors .
On December 16th, 2011, Zynga, the well-known social game developing company went public. This event followed other recent IPOs in the world of social networking companies, such as Groupon or Linkedin among others. With a valuation close to 7 billion USD at the time when it went public, Zynga became one of the biggest web IPOs since Google. This recent enthusiasm for social networking companies raises the question whether they are overvalued. Indeed, during the few months since its IPO, Zynga showed significant variability, its market capitalization going from 5.6 to 10.2 billion USD, hinting at a possible irrational behavior from the market. To bring substance to the debate, we propose a two-tiered approach to compute the intrinsic value of Zynga. First, we introduce a new model to forecast its user base, based on the individual dynamics of its major games. Next, we model the revenues per user using a logistic function, a standard model for growth in competition. This allows us to bracket the valuation of Zynga using three different scenarios: 3.4, 4.0 and 4.8 billion USD in the base case, high growth and extreme growth scenario respectively. This suggests that Zynga has been overpriced ever since its IPO. Finally, we propose an investment strategy, which is based on our diagnostic of a bubble for Zynga and how this herding / bubbly sentiment can be expected to play together with two important coming events (the quarterly financial result announcement around April 26th, 2012 followed by the end of a first lock-up period around April 30th, 2012). On the long term, our analysis indicates that Zynga's price should decrease significantly .
[ { "type": "R", "before": "Given", "after": "Finally, we propose an investment strategy, which is based on", "start_char_pos": 1227, "end_char_pos": 1232 }, { "type": "R", "before": ", trading strategies should be tuned to capture the sentiments and herding spirits associated with social networks, while minimizing the impact of standard fundamental factors", "after": "for Zynga and how this herding / bubbly sentiment can be expected to play together with two important coming events (the quarterly financial result announcement around April 26th, 2012 followed by the end of a first lock-up period around April 30th, 2012). On the long term, our analysis indicates that Zynga's price should decrease significantly", "start_char_pos": 1260, "end_char_pos": 1435 } ]
[ 0, 89, 214, 342, 446, 650, 758, 870, 978, 1161, 1226 ]
1204.0733
1
We utilize a coarse-grained directional dynamic bonding DNA model [ C. URL, Comp. Phys. Comm. (In Press DOI:10.1016/j.cpc.2012.03.005) ] to study DNA self-assembly and DNA computation . In our DNA model, a single nucleotide is represented by a single interaction site , and complementary sites can hybridize reversibly . Along with the dynamic hybridization bonds, angular and dihedral bonds are dynamically introduced and removed to model the collective properties of double helix structure on the DNA zippering dynamics. We use this DNA model to simulate the temperature dependent self-assembly of DNA tetrahedra at several temperatures, a DNA icosahedron, and also strand displacement operations used in DNA computation.
We study DNA self-assembly and DNA computation using a coarse-grained DNA model within the directional dynamic bonding framework [ C. URL, Comp. Phys. Comm. 183, 1793 (2012) ] . In our model, a single nucleotide or domain is represented by a single interaction site . Complementary sites can reversibly hybridize and dehybridize during a simulation. This bond dynamics induces a dynamics of the angular and dihedral bonds , that model the collective effects of chemical structure on the hybridization dynamics. We use the DNA model to perform simulations of the self-assembly kinetics of DNA tetrahedra , an icosahedron, as well as strand displacement operations used in DNA computation.
[ { "type": "R", "before": "utilize", "after": "study DNA self-assembly and DNA computation using", "start_char_pos": 3, "end_char_pos": 10 }, { "type": "A", "before": null, "after": "DNA model within the", "start_char_pos": 28, "end_char_pos": 28 }, { "type": "R", "before": "DNA model", "after": "framework", "start_char_pos": 57, "end_char_pos": 66 }, { "type": "R", "before": "(In Press DOI:10.1016/j.cpc.2012.03.005)", "after": "183, 1793 (2012)", "start_char_pos": 95, "end_char_pos": 135 }, { "type": "D", "before": "to study DNA self-assembly and DNA computation", "after": null, "start_char_pos": 138, "end_char_pos": 184 }, { "type": "D", "before": "DNA", "after": null, "start_char_pos": 194, "end_char_pos": 197 }, { "type": "A", "before": null, "after": "or domain", "start_char_pos": 225, "end_char_pos": 225 }, { "type": "R", "before": ", and complementary sites can hybridize reversibly . Along with the dynamic hybridization bonds,", "after": ". Complementary sites can reversibly hybridize and dehybridize during a simulation. This bond dynamics induces a dynamics of the", "start_char_pos": 270, "end_char_pos": 366 }, { "type": "R", "before": "are dynamically introduced and removed to", "after": ", that", "start_char_pos": 394, "end_char_pos": 435 }, { "type": "R", "before": "properties of double helix", "after": "effects of chemical", "start_char_pos": 457, "end_char_pos": 483 }, { "type": "R", "before": "DNA zippering", "after": "hybridization", "start_char_pos": 501, "end_char_pos": 514 }, { "type": "R", "before": "this", "after": "the", "start_char_pos": 532, "end_char_pos": 536 }, { "type": "R", "before": "simulate the temperature dependent", "after": "perform simulations of the", "start_char_pos": 550, "end_char_pos": 584 }, { "type": "A", "before": null, "after": "kinetics", "start_char_pos": 599, "end_char_pos": 599 }, { "type": "R", "before": "at several temperatures, a DNA icosahedron, and also", "after": ", an icosahedron, as well as", "start_char_pos": 618, "end_char_pos": 670 } ]
[ 0, 82, 94, 186, 322, 524 ]
1204.1416
1
We simulate the non-local Stokesian hydrodynamics of an elastic filament with a permanent distribution of stresslets along its contour. A bending instability of an initially straight filament induces curvatures in the distribution of stresslets, thus producing a net hydrodynamic flow in which the filament propels autonomously. Depending on the ratio of stresslet strength to elasticity, the linear instability can develop into unsteady states with large-amplitude non-linear deformations, where the filament conformation and the center of mass velocity fluctuate frequently. In planar flows, these unsteady states finally decay into steady states where the filament has constant translational or rotational motion . Our results can be tested in molecular-motor filament mixtures, synthetic chains of autocatalytic particles or other linearly connected systems where chemical energy is converted to mechanical energy in a fluid environment.
We simulate the nonlocal Stokesian hydrodynamics of an elastic filament with a permanent distribution of stresslets along its contour. A bending instability of an initially straight filament spontaneously breaks flow symmetry and leads to autonomous filament motion which, depending on conformational symmetry can be translational or rotational. At high ratios of stresslet strength to elasticity, the linear instability develops into non-linear fluctuating states with large amplitude deformations. The dynamics of these states can be qualitatively understood as a superposition of translational and rotational motion associated with filament conformational modes of opposite symmetry . Our results can be tested in molecular-motor filament mixtures, synthetic chains of autocatalytic particles or other linearly connected systems where chemical energy is converted to mechanical energy in a fluid environment.
[ { "type": "R", "before": "non-local", "after": "nonlocal", "start_char_pos": 16, "end_char_pos": 25 }, { "type": "R", "before": "induces curvatures in the distribution of stresslets, thus producing a net hydrodynamic flow in which the filament propels autonomously. Depending on the ratio", "after": "spontaneously breaks flow symmetry and leads to autonomous filament motion which, depending on conformational symmetry can be translational or rotational. At high ratios", "start_char_pos": 192, "end_char_pos": 351 }, { "type": "R", "before": "can develop into unsteady states with large-amplitude non-linear deformations, where the filament conformation and the center of mass velocity fluctuate frequently. In planar flows, these unsteady states finally decay into steady states where the filament has constant translational or rotational motion", "after": "develops into non-linear fluctuating states with large amplitude deformations. The dynamics of these states can be qualitatively understood as a superposition of translational and rotational motion associated with filament conformational modes of opposite symmetry", "start_char_pos": 412, "end_char_pos": 715 } ]
[ 0, 135, 328, 576, 717 ]
1204.1416
2
We simulate the nonlocal Stokesian hydrodynamics of an elastic filament with a permanent distribution of stresslets along its contour. A bending instability of an initially straight filament spontaneously breaks flow symmetry and leads to autonomous filament motion which, depending on conformational symmetry can be translational or rotational. At high ratios of stresslet strength to elasticity, the linear instability develops into non-linear fluctuating states with large amplitude deformations. The dynamics of these states can be qualitatively understood as a superposition of translational and rotational motion associated with filament conformational modes of opposite symmetry. Our results can be tested in molecular-motor filament mixtures, synthetic chains of autocatalytic particles or other linearly connected systems where chemical energy is converted to mechanical energy in a fluid environment.
We simulate the nonlocal Stokesian hydrodynamics of an elastic filament which is active due a permanent distribution of stresslets along its contour. A bending instability of an initially straight filament spontaneously breaks flow symmetry and leads to autonomous filament motion which, depending on conformational symmetry , can be translational or rotational. At high ratios of activity to elasticity, the linear instability develops into nonlinear fluctuating states with large amplitude deformations. The dynamics of these states can be qualitatively understood as a superposition of translational and rotational motion associated with filament conformational modes of opposite symmetry. Our results can be tested in molecular-motor filament mixtures, synthetic chains of autocatalytic particles , or other linearly connected systems where chemical energy is converted to mechanical energy in a fluid environment.
[ { "type": "R", "before": "with", "after": "which is active due", "start_char_pos": 72, "end_char_pos": 76 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 310, "end_char_pos": 310 }, { "type": "R", "before": "stresslet strength", "after": "activity", "start_char_pos": 365, "end_char_pos": 383 }, { "type": "R", "before": "non-linear", "after": "nonlinear", "start_char_pos": 436, "end_char_pos": 446 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 796, "end_char_pos": 796 } ]
[ 0, 134, 346, 500, 687 ]
1204.1452
1
In this paper, we propose a forecasting model for volatility based on its decomposition to several investment horizons and jumps. As a forecasting tool, we utilize Realized GARCH framework of Hansen et al. (2011), which models jointly returns and realized measures of volatility. For the decomposition, we use jump wavelet two scale realized volatility estimator (JWTSRV) of Barunik and Vacha (2012). While the main advantage of our time-frequency estimator is that it provides us with realized volatility measure robust to noise as well as with consistent estimate of jumps, it also allows to decompose volatility into the several investment horizons . On currency futures data covering the period of recent financial crisis , we compare forecasts from Realized GARCH (1,1) model using several measures. Namely, we use the realized volatility, bipower variation, two- scale realized volatility, realized kernel and our jump wavelet two scale realized volatility. We find that in-sample as well as out-of-sample performance of the model significantly differs based on the realized measure used. When JWTSRV estimator is used, model produces significantly best forecasts . We also utilize jumps and build Realized Jump-GARCH model. Utilizing the decomposition obtained by our estimator, we finally build Realized Wavelet-Jump GARCH model, which uses estimated jumps as well as volatility at several investment horizons . Our Realized Wavelet-Jump GARCH model proves to further improve the volatility forecasts. We conclude that realized volatility measurement in the time-frequency domain and inclusion of jumps improves the volatility forecasting considerably.
In this paper, we propose a forecasting model for volatility based on its decomposition to several investment horizons and jumps. As a forecasting tool, we use Realized GARCH framework which models jointly returns and realized measures of volatility. Using jump wavelet two scale realized volatility estimator (JWTSRV) , we first decompose the returns volatility into several investment horizons and jumps and then utilise this decomposition in a newly proposed Realized Jump-GARCH and Realized Wavelet-Jump GARCH models . On currency futures data covering the period of recent financial crisis we moreover compare the forecasts from Realized GARCH model using several additional realized volatility measures. Namely, we use the realized volatility, bipower variation, two-scale realized volatility, realized kernel and jump wavelet two scale realized volatility. We find that in-sample as well as out-of-sample performance of the model significantly differs based on the realized measure used. When JWTSRV estimator is used, model produces significantly best forecasts . Our Realized Wavelet-Jump GARCH model proves to further improve the volatility forecasts. We conclude that realized volatility measurement in the time-frequency domain and inclusion of jumps improves the volatility forecasting considerably.
[ { "type": "R", "before": "utilize", "after": "use", "start_char_pos": 156, "end_char_pos": 163 }, { "type": "D", "before": "of Hansen et al. (2011),", "after": null, "start_char_pos": 189, "end_char_pos": 213 }, { "type": "R", "before": "For the decomposition, we use", "after": "Using", "start_char_pos": 280, "end_char_pos": 309 }, { "type": "R", "before": "of Barunik and Vacha (2012). While the main advantage of our time-frequency estimator is that it provides us with realized volatility measure robust to noise as well as with consistent estimate of jumps, it also allows to decompose volatility into the", "after": ", we first decompose the returns volatility into", "start_char_pos": 372, "end_char_pos": 623 }, { "type": "A", "before": null, "after": "and jumps and then utilise this decomposition in a newly proposed Realized Jump-GARCH and Realized Wavelet-Jump GARCH models", "start_char_pos": 652, "end_char_pos": 652 }, { "type": "R", "before": ", we compare", "after": "we moreover compare the", "start_char_pos": 727, "end_char_pos": 739 }, { "type": "D", "before": "(1,1)", "after": null, "start_char_pos": 770, "end_char_pos": 775 }, { "type": "A", "before": null, "after": "additional realized volatility", "start_char_pos": 796, "end_char_pos": 796 }, { "type": "R", "before": "two- scale", "after": "two-scale", "start_char_pos": 866, "end_char_pos": 876 }, { "type": "D", "before": "our", "after": null, "start_char_pos": 918, "end_char_pos": 921 }, { "type": "D", "before": ". We also utilize jumps and build Realized Jump-GARCH model. Utilizing the decomposition obtained by our estimator, we finally build Realized Wavelet-Jump GARCH model, which uses estimated jumps as well as volatility at several investment horizons", "after": null, "start_char_pos": 1172, "end_char_pos": 1419 } ]
[ 0, 129, 279, 400, 654, 806, 965, 1096, 1173, 1232, 1421, 1511 ]
1204.1452
2
In this paper , we propose a forecasting model for volatility based on its decomposition to several investment horizons and jumps . As a forecasting tool, we use Realized GARCH framework which models jointly returns and realized measures of volatility. Using jump wavelet two scale realized volatility estimator (JWTSRV) , we first decompose the returns volatility into several investment horizons and jumps and then utilise this decomposition in a newly proposed Realized Jump-GARCH and Realized Wavelet-Jump GARCH models. On currency futures data covering the period of recent financial crisis we moreover compare the forecasts from Realized GARCH model using several additional realized volatility measures. Namely, we use the realized volatility, bipower variation, two-scale realized volatility, realized kernel and jump wavelet two scale realized volatility. We find that in-sample as well as out-of-sample performance of the model significantly differs based on the realized measure used. When JWTSRV estimator is used, model produces significantly best forecasts. Our Realized Wavelet-Jump GARCH model proves to further improve the volatility forecasts. We conclude that realized volatility measurement in the time-frequency domain and inclusion of jumps improves the volatility forecasting considerably .
This paper investigates how the forecasts of volatility vary with different high frequency measures. In addition, using a forecasting model based on Realized GARCH combined with time-frequency decomposed volatility, we attempt to study the influence of intra-day investment horizons on daily volatility forecasts. The decomposition of volatility into several investment horizons and jumps is possible due to a recently proposed jump wavelet two scale realized volatility estimator (JWTSRV) . On exchange rate futures data covering the recent financial crisis , we moreover compare forecasts using several additional realized volatility measures. Our results show that inclusion of jumps and realized measures robust to noise improves forecasting ability of the model considerably. Thus for a forecaster, it is crucial to use proper high frequency measure. An interesting insight into the volatility process is also provided by its decomposition. We find that most of the information for future volatility comes from high frequency part of the spectra representing very short investment horizons .
[ { "type": "R", "before": "In this paper , we propose", "after": "This paper investigates how the forecasts of volatility vary with different high frequency measures. In addition, using", "start_char_pos": 0, "end_char_pos": 26 }, { "type": "R", "before": "for volatility based on its decomposition to", "after": "based on Realized GARCH combined with time-frequency decomposed volatility, we attempt to study the influence of intra-day investment horizons on daily volatility forecasts. The decomposition of volatility into", "start_char_pos": 47, "end_char_pos": 91 }, { "type": "R", "before": ". As a forecasting tool, we use Realized GARCH framework which models jointly returns and realized measures of volatility. Using", "after": "is possible due to a recently proposed", "start_char_pos": 130, "end_char_pos": 258 }, { "type": "R", "before": ", we first decompose the returns volatility into several investment horizons and jumps and then utilise this decomposition in a newly proposed Realized Jump-GARCH and Realized Wavelet-Jump GARCH models. On currency", "after": ". On exchange rate", "start_char_pos": 321, "end_char_pos": 535 }, { "type": "D", "before": "period of", "after": null, "start_char_pos": 562, "end_char_pos": 571 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 596, "end_char_pos": 596 }, { "type": "R", "before": "the forecasts from Realized GARCH model", "after": "forecasts", "start_char_pos": 617, "end_char_pos": 656 }, { "type": "R", "before": "Namely, we use the realized volatility, bipower variation, two-scale realized volatility, realized kernel and jump wavelet two scale realized volatility. We find that in-sample as well as out-of-sample performance", "after": "Our results show that inclusion of jumps and realized measures robust to noise improves forecasting ability", "start_char_pos": 712, "end_char_pos": 925 }, { "type": "R", "before": "significantly differs based on the realized measure used. When JWTSRV estimator is used, model produces significantly best forecasts. Our Realized Wavelet-Jump GARCH model proves to further improve the volatility forecasts. We conclude that realized volatility measurement in the time-frequency domain and inclusion of jumps improves the volatility forecasting considerably", "after": "considerably. Thus for a forecaster, it is crucial to use proper high frequency measure. An interesting insight into the volatility process is also provided by its decomposition. We find that most of the information for future volatility comes from high frequency part of the spectra representing very short investment horizons", "start_char_pos": 939, "end_char_pos": 1312 } ]
[ 0, 131, 252, 523, 711, 865, 996, 1072, 1162 ]
1204.1452
3
This paper investigates how the forecasts of volatility vary with different high frequency measures. In addition, using a forecasting model based on Realized GARCH combined with time-frequency decomposed volatility, we attempt to study the influence of intra-day investment horizons on daily volatility forecasts. The decomposition of volatility into several investment horizons and jumps is possible due to a recently proposed jump wavelet two scale realized volatility estimator (JWTSRV). On exchange rate futures data covering the recent financial crisis , we moreover compare forecasts using several additional realized volatility measures . Our results show that inclusion of jumps and realized measures robust to noise improves forecasting ability of the model considerably. Thus for a forecaster, it is crucial to use proper high frequency measure . An interesting insight into the volatility process is also provided by its decomposition. We find that most of the information for future volatility comes from high frequency part of the spectra representing very short investment horizons .
This paper proposes an enhanced approach to modeling and forecasting volatility using high frequency data. Using a forecasting model based on Realized GARCH with multiple time-frequency decomposed realized volatility measures, we study the influence of different timescales on volatility forecasts. The decomposition of volatility into several timescales approximates the behaviour of traders at corresponding investment horizons. The proposed methodology is moreover able to account for impact of jumps due to a recently proposed jump wavelet two scale realized volatility estimator . We propose a realized Jump-GARCH models estimated in two versions using maximum likelihood as well as observation-driven estimation framework of generalized autoregressive score. We compare forecasts using several popular realized volatility measures on foreign exchange rate futures data covering the recent financial crisis . Our results indicate that disentangling jump variation from the integrated variation is important for forecasting performance . An interesting insight into the volatility process is also provided by its multiscale decomposition. We find that most of the information for future volatility comes from high frequency part of the spectra representing very short investment horizons . Our newly proposed models outperform statistically the popular as well conventional models in both one-day and multi-period-ahead forecasting .
[ { "type": "R", "before": "investigates how the forecasts of volatility vary with different high frequency measures. In addition, using", "after": "proposes an enhanced approach to modeling and forecasting volatility using high frequency data. Using", "start_char_pos": 11, "end_char_pos": 119 }, { "type": "R", "before": "combined with", "after": "with multiple", "start_char_pos": 164, "end_char_pos": 177 }, { "type": "R", "before": "volatility, we attempt to", "after": "realized volatility measures, we", "start_char_pos": 204, "end_char_pos": 229 }, { "type": "R", "before": "intra-day investment horizons on daily", "after": "different timescales on", "start_char_pos": 253, "end_char_pos": 291 }, { "type": "R", "before": "investment horizons and jumps is possible", "after": "timescales approximates the behaviour of traders at corresponding investment horizons. The proposed methodology is moreover able to account for impact of jumps", "start_char_pos": 359, "end_char_pos": 400 }, { "type": "R", "before": "(JWTSRV). On", "after": ". We propose a realized Jump-GARCH models estimated in two versions using maximum likelihood as well as observation-driven estimation framework of generalized autoregressive score. We compare forecasts using several popular realized volatility measures on foreign", "start_char_pos": 481, "end_char_pos": 493 }, { "type": "D", "before": ", we moreover compare forecasts using several additional realized volatility measures", "after": null, "start_char_pos": 558, "end_char_pos": 643 }, { "type": "R", "before": "show that inclusion of jumps and realized measures robust to noise improves forecasting ability of the model considerably. Thus for a forecaster, it is crucial to use proper high frequency measure", "after": "indicate that disentangling jump variation from the integrated variation is important for forecasting performance", "start_char_pos": 658, "end_char_pos": 854 }, { "type": "A", "before": null, "after": "multiscale", "start_char_pos": 932, "end_char_pos": 932 }, { "type": "A", "before": null, "after": ". Our newly proposed models outperform statistically the popular as well conventional models in both one-day and multi-period-ahead forecasting", "start_char_pos": 1097, "end_char_pos": 1097 } ]
[ 0, 100, 313, 490, 645, 780, 856, 947 ]
1204.1804
1
We present a computational and theoretical study of a many-body Brownian ratchet, in which a "gel " of multiple, stiff polymerizing filaments pushes a diffusing obstacle. Our results show that steady-state dynamics of this system are strongly influenced by a layer of depleted filament density at the obstacle-gel interface. Inter-filament correlations within this molecule-thick layer have dramatic consequences for the velocity and structure of the growing gel. These emergent behaviors can be captured by mean field theories that emphasize the non-additivity of polymerization forces and indicate a key role for the fluctuating gap between gel and obstacle .
Many forms of cell motility rely on Brownian ratchet mechanisms that involve multiple stochastic processes. We present a computational and theoretical study of the nonequilibrium statistical dynamics of such a many-body ratchet, in the specific form of a growing polymer gel that pushes a diffusing obstacle. We find that oft-neglected correlations among constituent filaments impact steady-state kinetics and significantly deplete the gel's density within molecular distances of its leading edge. These behaviors are captured quantitatively by a self-consistent theory for extreme fluctuations in filaments' spatial distribution .
[ { "type": "A", "before": null, "after": "Many forms of cell motility rely on Brownian ratchet mechanisms that involve multiple stochastic processes.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "A", "before": null, "after": "the nonequilibrium statistical dynamics of such", "start_char_pos": 53, "end_char_pos": 53 }, { "type": "D", "before": "Brownian", "after": null, "start_char_pos": 66, "end_char_pos": 74 }, { "type": "R", "before": "which a \"gel \" of multiple, stiff polymerizing filaments", "after": "the specific form of a growing polymer gel that", "start_char_pos": 87, "end_char_pos": 143 }, { "type": "R", "before": "Our results show that", "after": "We find that oft-neglected correlations among constituent filaments impact", "start_char_pos": 173, "end_char_pos": 194 }, { "type": "R", "before": "dynamics of this system are strongly influenced by a layer of depleted filament density at the obstacle-gel interface. Inter-filament correlations within this molecule-thick layer have dramatic consequences for the velocity and structure of the growing gel. These emergent behaviors can be captured by mean field theories that emphasize the non-additivity of polymerization forces and indicate a key role for the fluctuating gap between gel and obstacle", "after": "kinetics and significantly deplete the gel's density within molecular distances of its leading edge. These behaviors are captured quantitatively by a self-consistent theory for extreme fluctuations in filaments' spatial distribution", "start_char_pos": 208, "end_char_pos": 661 } ]
[ 0, 172, 326, 465 ]
1204.2090
1
This paper deals with dependence across marginally-exponentially distributed arrival times, such as default times in financial modeling or inter-failure times in reliability theory. We explore the relationship between dependence and the possibility to sample final multivariate survival in a long time-interval as a sequence of iterations of local multivariate survivals along a partition of the total time interval. We find that this is possible under a form of multivariate lack of memory that is linked to a property of the survival times copula. This property defines a "self-chaining-copula", and we show that this coincides with the extreme value copulas characterization. The self-chaining condition is satisfied by the Gumbel-Hougaard copula, a full characterization of self chaining copulas in the Archimedean family, and by the Marshall-Olkin copula. We present a homogeneity characterization of the self chaining condition. The result has important practical implications for consistent single-step and multi-step simulation of multivariate arrival times in a way that does not destroy dependence through iterations, as happens when inconsistently iterating a Gaussian copula.
This paper deals with dependence across marginally exponentially distributed arrival times, such as default times in financial modeling or inter-failure times in reliability theory. We explore the relationship between dependence and the possibility to sample final multivariate survival in a long time-interval as a sequence of iterations of local multivariate survivals along a partition of the total time interval. We find that this is possible under a form of multivariate lack of memory that is linked to a property of the survival times copula. This property defines a "self-chaining-copula", and we show that this coincides with the extreme value copulas characterization. The self-chaining condition is satisfied by the Gumbel-Hougaard copula, a full characterization of self chaining copulas in the Archimedean family, and by the Marshall-Olkin copula. The result has important practical implications for consistent single-step and multi-step simulation of multivariate arrival times in a way that does not destroy dependency through iterations, as happens when inconsistently iterating a Gaussian copula.
[ { "type": "R", "before": "marginally-exponentially", "after": "marginally exponentially", "start_char_pos": 40, "end_char_pos": 64 }, { "type": "D", "before": "We present a homogeneity characterization of the self chaining condition.", "after": null, "start_char_pos": 861, "end_char_pos": 934 }, { "type": "R", "before": "dependence", "after": "dependency", "start_char_pos": 1097, "end_char_pos": 1107 } ]
[ 0, 181, 416, 549, 678, 860, 934 ]
1204.2638
1
In this paper, we propose an efficient Monte Carlo implementation of non-linear FBSDEs as a system of interacting particles by developing a variant of marked branching diffusion method. It will be particularly useful to investigate large and complex systems, and hence it is a good complement of our previous work presenting an analytical perturbation procedure for generic non-linear FBSDEs. There appear multiple species of particles, where the first one follows the diffusion of the original underlying state, and the others the Malliavin derivatives with a grading structure. In contrast to the naive implementation of marked branching diffusion, the number of branches as well as interactions are efficiently suppressed by the properties of the perturbative expansion. This property will make the numerical implementation more feasible and stable. Furthermore, there is no need of direct approximation of the non-linear driver by a polynomial function . The proposed method can be applied to semi-linear problems, such as American and Bermudan options, Credit Value Adjustment (CVA), and even fully non-linear issues, such as the optimal portfolio problems in incomplete and/or constrained markets, feedbacks from large investors, and also the analysis of various risk measures.
In this paper, we propose an efficient Monte Carlo implementation of non-linear FBSDEs as a system of interacting particles inspired by the ideas of branching diffusion method. It will be particularly useful to investigate large and complex systems, and hence it is a good complement of our previous work presenting an analytical perturbation procedure for generic non-linear FBSDEs. There appear multiple species of particles, where the first one follows the diffusion of the original underlying state, and the others the Malliavin derivatives with a grading structure. The number of branching points are capped by the order of perturbation, which is expected to make the scheme less numerically intensive . The proposed method can be applied to semi-linear problems, such as American and Bermudan options, Credit Value Adjustment (CVA), and even fully non-linear issues, such as the optimal portfolio problems in incomplete and/or constrained markets, feedbacks from large investors, and also the analysis of various risk measures.
[ { "type": "R", "before": "by developing a variant of marked", "after": "inspired by the ideas of", "start_char_pos": 124, "end_char_pos": 157 }, { "type": "R", "before": "In contrast to the naive implementation of marked branching diffusion, the number of branches as well as interactions are efficiently suppressed by the properties of the perturbative expansion. This property will make the numerical implementation more feasible and stable. Furthermore, there is no need of direct approximation of the non-linear driver by a polynomial function", "after": "The number of branching points are capped by the order of perturbation, which is expected to make the scheme less numerically intensive", "start_char_pos": 580, "end_char_pos": 956 } ]
[ 0, 185, 392, 579, 773, 852, 958 ]
1204.3136
1
In this work we develop a new measure to study the behavior of stochastic time series, which permits to distinguish events which are different from the ordinary, like financial crises. We identify from the data well known market crashes such as Black Thursday (1929), Black Monday (1987) and Subprime crisis (2008) with clear and robust results. We also show that the analysis has forecasting capabilities. We apply the method to the market fluctuations of 2011. From these results it appears as if the apparent crisis of 2011 is of a different nature from the other three .
Following the thermodynamic formulation of multifractal measure that was shown to be capable of detecting large fluctuations at an early stage, here we propose a new index which permits us to distinguish events like financial crisis in real time . We calculate the partition function from where we obtain thermodynamic quantities analogous to free energy and specific heat. The index is defined as the normalized energy variation and it can be used to study the behavior of stochastic time series, such as financial market daily data. Famous financial market crashes - Black Thursday (1929), Black Monday (1987) and Subprime crisis (2008) - are identified with clear and robust results. The method is also applied to the market fluctuations of 2011. From these results it appears as if the apparent crisis of 2011 is of a different nature from the other three . We also show that the analysis has forecasting capabilities .
[ { "type": "R", "before": "In this work we develop a new measure", "after": "Following the thermodynamic formulation of multifractal measure that was shown to be capable of detecting large fluctuations at an early stage, here we propose a new index which permits us", "start_char_pos": 0, "end_char_pos": 37 }, { "type": "D", "before": "study the behavior of stochastic time series, which permits to", "after": null, "start_char_pos": 41, "end_char_pos": 103 }, { "type": "R", "before": "which are different from the ordinary, like financial crises. We identify from the data well known market crashes such as", "after": "like financial crisis in real time . We calculate the partition function from where we obtain thermodynamic quantities analogous to free energy and specific heat. The index is defined as the normalized energy variation and it can be used to study the behavior of stochastic time series, such as financial market daily data. Famous financial market crashes -", "start_char_pos": 123, "end_char_pos": 244 }, { "type": "A", "before": null, "after": "- are identified", "start_char_pos": 315, "end_char_pos": 315 }, { "type": "R", "before": "We also show that the analysis has forecasting capabilities. We apply the method", "after": "The method is also applied", "start_char_pos": 347, "end_char_pos": 427 }, { "type": "A", "before": null, "after": ". We also show that the analysis has forecasting capabilities", "start_char_pos": 574, "end_char_pos": 574 } ]
[ 0, 184, 346, 407, 463 ]
1204.3310
1
Biochemical reaction networks are subjected to large fluctuations due to small molecule numbers, yet underlie reliable biological functions. Most theoretical approaches describe them as purely deterministic or stochastic dynamical systems, depending on which point of view is favored. Here, we investigate the dynamics of a self-repressing gene using an intermediate approach based on a moment expansion of the master equation, taking into account the binary character of gene activity. We thereby obtain deterministic equations which describe how nonlinearity feeds back fluctuations into the mean-field equations, providing insight into the interplay of determinism and stochasticity. This allows us to identify a region of parameter space where fluctuations induce relatively regular oscillations.
Biochemical reaction networks are subjected to large fluctuations attributable to small molecule numbers, yet underlie reliable biological functions. Most theoretical approaches describe them as purely deterministic or stochastic dynamical systems, depending on which point of view is favored. Here, we investigate the dynamics of a self-repressing gene using an intermediate approach based on a moment closure approximation of the master equation, which allows us to take into account the binary character of gene activity. We thereby obtain deterministic equations that describe how nonlinearity feeds back fluctuations into the mean-field equations, providing insight into the interplay of determinism and stochasticity. This allows us to identify regions of parameter space where fluctuations induce relatively regular oscillations.
[ { "type": "R", "before": "due", "after": "attributable", "start_char_pos": 66, "end_char_pos": 69 }, { "type": "R", "before": "expansion", "after": "closure approximation", "start_char_pos": 394, "end_char_pos": 403 }, { "type": "R", "before": "taking", "after": "which allows us to take", "start_char_pos": 428, "end_char_pos": 434 }, { "type": "R", "before": "which", "after": "that", "start_char_pos": 529, "end_char_pos": 534 }, { "type": "R", "before": "a region", "after": "regions", "start_char_pos": 714, "end_char_pos": 722 } ]
[ 0, 140, 284, 486, 686 ]
1204.3422
1
This paper investigates arbitrage chains involving d currencies and d foreign exchange trader-arbitrageurs. The commonly recognized belief in economics and finance is that arbitrage has the effect of causing prices in different markets to converge. This conjecture was recently disproved in Kozyakin et al. (2010) ; Cross et al. (2012) , where was shown that for the case of four currencies arbitrage chains may be periodic or exponentially unstable. In contrast with the four-currency case, we find that arbitrage operations when d >= 5 currencies are present may appear very unstable, with the exchange rates growing in accordance with the double exponential law !
If financial markets displayed the informational efficiency postulated in the efficient markets hypothesis (EMH), arbitrage operations would be self-extinguishing. The present paper considers arbitrage sequences in foreign exchange (FX) markets, in which trading platforms and information are fragmented. In Kozyakin et al. (2010) and Cross et al. (2012) it was shown that sequences of triangular arbitrage operations in FX markets containing 4 currencies and trader-arbitrageurs tend to display periodicity or grow exponentially rather than being self-extinguishing. This paper extends the analysis to 5 or higher-order currency worlds. The key findings are that in a 5-currency world arbitrage sequences may also follow an exponential law as well as display periodicity, but that in higher-order currency worlds a double exponential law may additionally apply. There is an "inheritance of instability" in the higher-order currency worlds. Profitable arbitrage operations are thus endemic rather that displaying the self-extinguishing properties implied by the EMH.
[ { "type": "R", "before": "This paper investigates arbitrage chains involving d currencies and d foreign exchange trader-arbitrageurs. The commonly recognized belief in economics and finance is that arbitrage has the effect of causing prices in different marketsto converge. This conjecture was recently disproved in", "after": "If financial markets displayed the informational efficiency postulated in the efficient markets hypothesis (EMH), arbitrage operations would be self-extinguishing. The present paper considers arbitrage sequences in foreign exchange (FX) markets, in which trading platforms and information are fragmented. In", "start_char_pos": 0, "end_char_pos": 289 }, { "type": "R", "before": ";", "after": "and", "start_char_pos": 313, "end_char_pos": 314 }, { "type": "R", "before": ", where", "after": "it", "start_char_pos": 335, "end_char_pos": 342 }, { "type": "R", "before": "for the case of four currencies arbitrage chains may be periodic or exponentially unstable. In contrast with the four-currency case, we find that arbitrage operations when d >=", "after": "sequences of triangular arbitrage operations in FX markets containing 4 currencies and trader-arbitrageurs tend to display periodicity or grow exponentially rather than being self-extinguishing. This paper extends the analysis to", "start_char_pos": 358, "end_char_pos": 534 }, { "type": "R", "before": "currencies are present may appear very unstable, with the exchange rates growing in accordance with the", "after": "or higher-order currency worlds. The key findings are that in a 5-currency world arbitrage sequences may also follow an exponential law as well as display periodicity, but that in higher-order currency worlds a", "start_char_pos": 537, "end_char_pos": 640 }, { "type": "R", "before": "!", "after": "may additionally apply. There is an \"inheritance of instability\" in the higher-order currency worlds. Profitable arbitrage operations are thus endemic rather that displaying the self-extinguishing properties implied by the EMH.", "start_char_pos": 664, "end_char_pos": 665 } ]
[ 0, 107, 247, 314, 449 ]
1204.3600
1
We present scalable programmable quantum circuit schemes to simulate any given real unitary by setting the angle values in the circuit. This provides a fixed circuit design whose angles are determined from the elements of the given unitary in an efficient way by benefiting from the decomposition of a uniformly controlled network. In addition, the quantum complexity for the circuit is almost the same as non-general circuits.
Constructing general programmable circuits to be able to run any given unitary operator efficiently on a quantum processor is of fundamental importance. We present a new quantum circuit design technique resulting two general programmable circuit schemes. The circuit schemes can be used to simulate any given operator by setting the angle values in the circuit. This provides a fixed circuit design whose angles are determined from the elements of the given matrix, which can be non-unitary, in an efficient way . We also give both classical and quantum complexity analysis for these circuits and show that the circuits require a few classical computations, and the quantum complexities of them are almost the same as non-general circuits.
[ { "type": "R", "before": "We present scalable programmable quantum circuit schemes", "after": "Constructing general programmable circuits to be able to run any given unitary operator efficiently on a quantum processor is of fundamental importance. We present a new quantum circuit design technique resulting two general programmable circuit schemes. The circuit schemes can be used", "start_char_pos": 0, "end_char_pos": 56 }, { "type": "R", "before": "real unitary", "after": "operator", "start_char_pos": 79, "end_char_pos": 91 }, { "type": "R", "before": "unitary", "after": "matrix, which can be non-unitary,", "start_char_pos": 232, "end_char_pos": 239 }, { "type": "R", "before": "by benefiting from the decomposition of a uniformly controlled network. In addition, the quantum complexity for the circuit is", "after": ". We also give both classical and quantum complexity analysis for these circuits and show that the circuits require a few classical computations, and the quantum complexities of them are", "start_char_pos": 260, "end_char_pos": 386 } ]
[ 0, 135, 331 ]
1204.3600
2
Constructing general programmable circuits to be able to run any given unitary operator efficiently on a quantum processor is of fundamental importance. We present a new quantum circuit design technique resulting two general programmable circuit schemes. The circuit schemes can be used to simulate any given operator by setting the angle values in the circuit. This provides a fixed circuit design whose angles are determined from the elements of the given matrix, which can be non-unitary, in an efficient way. We also give both classical and quantum complexity analysis for these circuits and show that the circuits require a few classical computations , and the quantum complexities of them are almost the same as non-general circuits .
Unlike fixed designs, programmable circuit designs support an infinite number of operators. The functionality of a programmable circuit can be altered by simply changing the angle values of the rotation gates in the circuit. Here, we present a new quantum circuit design technique resulting in two general programmable circuit schemes. The circuit schemes can be used to simulate any given operator by setting the angle values in the circuit. This provides a fixed circuit design whose angles are determined from the elements of the given matrix-which can be non-unitary-in an efficient way. We also give both the classical and quantum complexity analysis for these circuits and show that the circuits require a few classical computations . They have almost the same quantum complexities as non-general circuits . Since the presented circuit designs are independent from the matrix decomposition techniques and the global optimization processes used to find quantum circuits for a given operator, high accuracy simulations can be done for the unitary propagators of molecular Hamiltonians on quantum computers. As an example, we show how to build the circuit design for the hydrogen molecule .
[ { "type": "R", "before": "Constructing general programmable circuits to be able to run any given unitary operator efficiently on a quantum processor is of fundamental importance. We", "after": "Unlike fixed designs, programmable circuit designs support an infinite number of operators. The functionality of a programmable circuit can be altered by simply changing the angle values of the rotation gates in the circuit. Here, we", "start_char_pos": 0, "end_char_pos": 155 }, { "type": "A", "before": null, "after": "in", "start_char_pos": 213, "end_char_pos": 213 }, { "type": "R", "before": "matrix, which can be non-unitary, in", "after": "matrix-which can be non-unitary-in", "start_char_pos": 459, "end_char_pos": 495 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 532, "end_char_pos": 532 }, { "type": "R", "before": ", and the quantum complexities of them are", "after": ". They have", "start_char_pos": 658, "end_char_pos": 700 }, { "type": "A", "before": null, "after": "quantum complexities", "start_char_pos": 717, "end_char_pos": 717 }, { "type": "A", "before": null, "after": ". Since the presented circuit designs are independent from the matrix decomposition techniques and the global optimization processes used to find quantum circuits for a given operator, high accuracy simulations can be done for the unitary propagators of molecular Hamiltonians on quantum computers. As an example, we show how to build the circuit design for the hydrogen molecule", "start_char_pos": 742, "end_char_pos": 742 } ]
[ 0, 152, 255, 362, 513 ]
1204.5203
1
Boolean network models of molecular regulatory networks have been used successfully in computational systems biology. The Boolean functions that appear in published models tend to have special properties, in particular the property of being nested canalizing, a property inspired by the concept of canalization in evolutionary biology. It has been shown that networks comprised of nested canalizing functions have dynamic properties that make them suitable for modeling molecular regulatory networks, namely a small number of (large) attractors, as well as relatively short limit cycles. This paper contains a detailed analysis of this class of functions, based on a novel normal form as polynomial functions over the Boolean field. The concept of layer is introduced that stratifies variables into different classes depending on their level of dominance. Using this layer concept a closed form formula is derived for the number of nested canalizing functions with a given number of variables. Additional metrics analyzed include Hamming weight, the activity number of any variable, and the average sensitivity of the function. It is also shown that the average sensitivity of any nested canalizing function is between 0 and 2. This provides a rationale for why nested canalizing functions are stable, since a random Boolean function has average sensitivity n/2. The paper also contains experimental evidence that the layer number is an important factor in network stability.
Boolean network models of molecular regulatory networks have been used successfully in computational systems biology. The Boolean functions that appear in published models tend to have special properties, in particular the property of being nested canalizing, a concept inspired by the concept of canalization in evolutionary biology. It has been shown that networks comprised of nested canalizing functions have dynamic properties that make them suitable for modeling molecular regulatory networks, namely a small number of (large) attractors, as well as relatively short limit cycles. This paper contains a detailed analysis of this class of functions, based on a novel normal form as polynomial functions over the Boolean field. The concept of layer is introduced that stratifies variables into different classes depending on their level of dominance. Using this layer concept a closed form formula is derived for the number of nested canalizing functions with a given number of variables. Additional metrics considered include Hamming weight, the activity number of any variable, and the average sensitivity of the function. It is also shown that the average sensitivity of any nested canalizing function is between 0 and 2. This provides a rationale for why nested canalizing functions are stable, since a random Boolean function in n variables has average sensitivity n/2. The paper also contains experimental evidence that the layer number is an important factor in network stability.
[ { "type": "R", "before": "property", "after": "concept", "start_char_pos": 262, "end_char_pos": 270 }, { "type": "R", "before": "analyzed", "after": "considered", "start_char_pos": 1013, "end_char_pos": 1021 }, { "type": "A", "before": null, "after": "in n variables", "start_char_pos": 1334, "end_char_pos": 1334 } ]
[ 0, 117, 335, 587, 732, 855, 993, 1127, 1227, 1363 ]
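A minimal Python sketch of the activity and average-sensitivity metrics referred to in the 1204.5203 abstracts above; the example rule f(x1, x2, x3) = x1 AND (x2 OR x3) is an illustrative nested canalizing function chosen here, not one taken from the paper.

from itertools import product

def activity(f, n, i):
    # Fraction of the 2^n inputs on which flipping variable i changes the output of f.
    count = 0
    for x in product((0, 1), repeat=n):
        y = list(x)
        y[i] ^= 1
        if f(*x) != f(*y):
            count += 1
    return count / 2 ** n

def average_sensitivity(f, n):
    # Sum of the activities of all n variables.
    return sum(activity(f, n, i) for i in range(n))

# Illustrative nested canalizing rule: x1 = 0 forces the output to 0, then x2 and x3 canalize in turn.
f = lambda x1, x2, x3: x1 and (x2 or x3)
print(average_sensitivity(f, 3))  # 1.25, consistent with the stated bound of 2 for nested canalizing functions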
1204.5941
1
The translation-transcription process with the description of the most basic "elementary" processes consists in: 1) production of mRNA molecules, 2) initiation of these molecules by circularization with help of initiation factors, 3) initiation of translation, recruiting the small ribosomal subunit, 4) assembly of full ribosomes, 5) elongation, i.e. movement of ribosomes along mRNA with production of protein, 6) termination of translation, 7) degradation of mRNA molecules. A certain complexity in the mathematical formulation of this process arises when one tries to take into account the phenomenon of polysome first, when several ribosomes are producing peptides on a single mRNA at the same time. This leads to multiplicity of possible states of mRNA with various numbers of ribosomes with potentially different dynamics, interaction between ribosomes and other difficulties. In this preprint we provide 1) detailed mechanistic description of the translation process with explicit representation of every state of translating mRNA, followed by 2) deriving the simplest and basic ODE model of coupled transcription, translation and degradation , and 3) developing a model suitable for describing all known mechanisms of miRNA action on translation. The basic model is constructed by correct lumping of the detailed model states and by separating the description of ribosomal turnover. It remains linear under assumption of that the translation is not limited by availability of ribosomal subunits or initiation factors. The only serious limitation of this type of translation modeling is in that it does not take into account possible interactions between ribosomes. The latter might lead to more complex phenomena which can be taken into account in simulatory models of the detailed representation of translation at the cost of more difficult analytical analysis of the model .
Synthesis of proteins is one of the most fundamental biological processes, which consumes a significant amount of cellular resources. Despite many efforts to produce detailed mechanistic mathematical models of translation, no basic and simple kinetic model of mRNA lifecycle ( transcription, translation and degradation ) exists. We build such a model by lumping multiple states of translated mRNA into few dynamical variables and introducing a pool of translating ribosomes. The basic and simple model can be extended, if necessary, to take into account various phenomena such as the interaction between translating ribosomes or regulation of translation by microRNA. The model can be used as a building block (translation module) for more complex models of cellular processes .
[ { "type": "R", "before": "The translation-transcription process with the description of the most basic \"elementary\" processesconsists in: 1) production of mRNA molecules, 2) initiation of these molecules by circularization with help of initiation factors, 3) initiation of translation, recruiting the small ribosomal subunit, 4) assembly of full ribosomes, 5) elongation, i.e. movement of ribosomes along mRNA with production of protein, 6) termination of translation, 7) degradation of mRNA molecules. A certain complexity in the mathematical formulation of this process arises when one tries to take into account the phenomenon of polysome first, when several ribosomes are producing peptides on a single mRNA at the same time. This leads to multiplicity of possible states of mRNA with various numbers of ribosomes with potentially different dynamics, interaction between ribosomes and other difficulties. In this preprint we provide 1) detailed mechanistic description of the translationprocess with explicit representation of every state of translating mRNA, followed by 2) deriving the simplest and basic ODE model of coupled", "after": "Synthesis of proteins is one of the most fundamental biological processes, which consumes a significant amount of cellular resources. Despite many efforts to produce detailed mechanistic mathematical models of translation, no basic and simple kinetic model of mRNA lifecycle (", "start_char_pos": 0, "end_char_pos": 1105 }, { "type": "R", "before": ", and 3) developing a model suitable for describing all known mechanisms of miRNA action on translation. The basic model is constructed by correct lumping of the detailed model states and by separating the description of ribosomal turnover. It remains linear under assumption of that the translation is not limited by availability of ribosomal subunits or initiation factors. The only serious limitation of this type of translation modeling is in that it does not", "after": ") exists. We build such a model by lumping multiple states of translated mRNA into few dynamical variables and introducing a pool of translating ribosomes. The basic and simple model can be extended, if necessary, to", "start_char_pos": 1149, "end_char_pos": 1612 }, { "type": "R", "before": "possible interactions between ribosomes. The latter might lead to more complex phenomena which can be taken into account in simulatory models of the detailed representation of translation at the cost of more difficult analytical analysis of the model", "after": "various phenomena such as the interaction between translating ribosomes or regulation of translation by microRNA. The model can be used as a building block (translation module) for more complex models of cellular processes", "start_char_pos": 1631, "end_char_pos": 1881 } ]
[ 0, 476, 703, 882, 1253, 1389, 1524, 1671 ]
1204.5941
2
Synthesis of proteins is one of the most fundamental biological processes, which consumes a significant amount of cellular resources. Despite many efforts to produce detailed mechanistic mathematical models of translation, no basic and simple kinetic model of mRNA lifecycle (transcription, translation and degradation) exists. We build such a model by lumping multiple states of translated mRNA into few dynamical variables and introducing a pool of translating ribosomes. The basic and simple model can be extended, if necessary, to take into account various phenomena such as the interaction between translating ribosomes or regulation of translation by microRNA. The model can be used as a building block (translation module) for more complex models of cellular processes .
Protein synthesis is one of the most fundamental biological processes, which consumes a significant amount of cellular resources. Despite existence of multiple mathematical models of translation, varying in the level of mechanistical details, surprisingly, there is no basic and simple chemical kinetic model of this process, derived directly from the detailed kinetic model. One of the reasons for this is that the translation process is characterized by indefinite number of states, thanks to existence of polysomes. We bypass this difficulty by applying a trick consisting in lumping multiple states of translated mRNA into few dynamical variables and by introducing a variable describing the pool of translating ribosomes. The simplest model can be solved analytically under some assumptions. The basic and simple model can be extended, if necessary, to take into account various phenomena such as the interaction between translating ribosomes , limited amount of ribosomal units or regulation of translation by microRNA. The model can be used as a building block (translation module) for more complex models of cellular processes . We demonstrate the utility of the model in two examples. First, we determine the critical parameters of the single protein synthesis for the case when the ribosomal units are abundant. Second, we demonstrate intrinsic bi-stability in the dynamics of the ribosomal protein turnover and predict that a minimal number of ribosomes should pre-exists in a living cell to sustain its protein synthesis machinery, even in the absence of proliferation .
[ { "type": "R", "before": "Synthesis of proteins", "after": "Protein synthesis", "start_char_pos": 0, "end_char_pos": 21 }, { "type": "R", "before": "many efforts to produce detailed mechanistic", "after": "existence of multiple", "start_char_pos": 142, "end_char_pos": 186 }, { "type": "A", "before": null, "after": "varying in the level of mechanistical details, surprisingly, there is", "start_char_pos": 223, "end_char_pos": 223 }, { "type": "A", "before": null, "after": "chemical", "start_char_pos": 244, "end_char_pos": 244 }, { "type": "R", "before": "mRNA lifecycle (transcription, translation and degradation) exists. We build such a model by", "after": "this process, derived directly from the detailed kinetic model. One of the reasons for this is that the translation process is characterized by indefinite number of states, thanks to existence of polysomes. We bypass this difficulty by applying a trick consisting in", "start_char_pos": 262, "end_char_pos": 354 }, { "type": "R", "before": "introducing a", "after": "by introducing a variable describing the", "start_char_pos": 431, "end_char_pos": 444 }, { "type": "A", "before": null, "after": "simplest model can be solved analytically under some assumptions. The", "start_char_pos": 480, "end_char_pos": 480 }, { "type": "A", "before": null, "after": ", limited amount of ribosomal units", "start_char_pos": 628, "end_char_pos": 628 }, { "type": "A", "before": null, "after": ". We demonstrate the utility of the model in two examples. First, we determine the critical parameters of the single protein synthesis for the case when the ribosomal units are abundant. Second, we demonstrate intrinsic bi-stability in the dynamics of the ribosomal protein turnover and predict that a minimal number of ribosomes should pre-exists in a living cell to sustain its protein synthesis machinery, even in the absence of proliferation", "start_char_pos": 780, "end_char_pos": 780 } ]
[ 0, 133, 329, 475, 670 ]
1204.6613
1
We prove weak and strong maximum principles, including a Hopf lemma, for classical solutions to equations defined by linear, second-order, partial differential operators with non-negative characteristic form (degenerate-elliptic operators), in the presence of a second-order boundary condition of Ventcel type along the degeneracy locus of the principle symbol of the operator on the domain boundary. We apply these maximum principles to obtain uniqueness and a priori maximum principle estimates for classical solutions to boundary value and obstacle problems defined by these degenerate-elliptic operators , again in the presence of a second-order boundary condition, for Dirichlet or Neumann boundary conditions along the complement of the degeneracy locus. We also prove weak maximum principles and uniqueness for solutions to the corresponding variational equations and inequalities defined with the aide of weighted Sobolev spaces. The domain is allowed to be unbounded when the operator coefficients and solutions obey certain growth conditions.
We prove weak and strong maximum principles, including a Hopf lemma, for smooth subsolutions to equations defined by linear, second-order, partial differential operators whose principal symbols vanish along a portion of the domain boundary. The boundary regularity property of the smooth subsolutions along this boundary vanishing locus ensures that these maximum principles hold irrespective of the sign of the Fichera function. Boundary conditions need only be prescribed on the complement in the domain boundary of the principal symbol vanishing locus. We obtain uniqueness and a priori maximum principle estimates for smooth solutions to boundary value and obstacle problems defined by these boundary-degenerate elliptic operators for partial Dirichlet or Neumann boundary conditions along the complement of the boundary vanishing locus. We also prove weak maximum principles and uniqueness for solutions to the corresponding variational equations and inequalities defined with the aide of weighted Sobolev spaces. The domain is allowed to be unbounded when the operator coefficients and solutions obey certain growth conditions.
[ { "type": "R", "before": "classical solutions", "after": "smooth subsolutions", "start_char_pos": 73, "end_char_pos": 92 }, { "type": "R", "before": "with non-negative characteristic form (degenerate-elliptic operators), in the presence of a second-order boundarycondition of Ventcel type along the degeneracy locus of", "after": "whose principal symbols vanish along a portion of the domain boundary. The boundary regularity property of", "start_char_pos": 170, "end_char_pos": 338 }, { "type": "R", "before": "principle symbol of the operator on the domain boundary. We apply these maximum principles to", "after": "smooth subsolutions along this boundary vanishing locus ensures that these maximum principles hold irrespective of the sign of the Fichera function. Boundary conditions need only be prescribed on the complement in the domain boundary of the principal symbol vanishing locus. We", "start_char_pos": 343, "end_char_pos": 436 }, { "type": "R", "before": "classical", "after": "smooth", "start_char_pos": 500, "end_char_pos": 509 }, { "type": "R", "before": "degenerate-elliptic operators , again in the presence of a second-order boundary condition, for", "after": "boundary-degenerate elliptic operators for partial", "start_char_pos": 577, "end_char_pos": 672 }, { "type": "R", "before": "degeneracy", "after": "boundary vanishing", "start_char_pos": 742, "end_char_pos": 752 } ]
[ 0, 399, 759, 936 ]
1205.0332
1
This study conducts a comprehensive analysis of time series segmentation on the Japanese stock prices listed on the first section of the Tokyo Stock Exchange during the period from January 4 , 2000 to January 30 , 2012. A recursive segmentation procedure is used under the assumption of a Gaussian mixture. The number of each quintile of variance for all the segments indicates is investigated empirically. It is found that from June 2004 to June 2007 a large majority of stocks are stable and that from 2008 several stocks showed instability. On March 2011 the number of instable securities steeply increased due to societal turmoil influenced by the East Japan Great Earthquake. It is concluded that the number of stocks included in each quintile of volatility provides useful information on the Japanese macroeconomic situation .
This study conducts a comprehensive analysis of time series segmentation on the Japanese stock prices listed on the first section of the Tokyo Stock Exchange during the period from 4 January 2000 to 30 January 2012. A recursive segmentation procedure is used under the assumption of a Gaussian mixture. The daily number of each quintile of volatilities for all the segments is investigated empirically. It is found that from June 2004 to June 2007 , a large majority of stocks are stable and that from 2008 several stocks showed instability. On March 2011 , the daily number of instable securities steeply increased due to societal turmoil influenced by the East Japan Great Earthquake. It is concluded that the number of stocks included in each quintile of volatilities provides useful information on macroeconomic situations .
[ { "type": "D", "before": "January", "after": null, "start_char_pos": 181, "end_char_pos": 188 }, { "type": "R", "before": ",", "after": "January", "start_char_pos": 191, "end_char_pos": 192 }, { "type": "D", "before": "January", "after": null, "start_char_pos": 201, "end_char_pos": 208 }, { "type": "R", "before": ",", "after": "January", "start_char_pos": 212, "end_char_pos": 213 }, { "type": "A", "before": null, "after": "daily", "start_char_pos": 311, "end_char_pos": 311 }, { "type": "R", "before": "variance", "after": "volatilities", "start_char_pos": 339, "end_char_pos": 347 }, { "type": "D", "before": "indicates", "after": null, "start_char_pos": 369, "end_char_pos": 378 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 453, "end_char_pos": 453 }, { "type": "R", "before": "the", "after": ", the daily", "start_char_pos": 560, "end_char_pos": 563 }, { "type": "R", "before": "volatility", "after": "volatilities", "start_char_pos": 754, "end_char_pos": 764 }, { "type": "R", "before": "the Japanese macroeconomic situation", "after": "macroeconomic situations", "start_char_pos": 796, "end_char_pos": 832 } ]
[ 0, 219, 306, 407, 545, 682 ]
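A rough Python sketch of the kind of recursive Gaussian segmentation procedure described in the 1205.0332 record above; the log-likelihood-gain criterion, the fixed threshold, and the minimum segment length are simplifying assumptions made for illustration, not the paper's actual significance test.

import numpy as np

def gauss_loglik(x):
    # Maximized Gaussian log-likelihood of a segment (MLE mean and variance).
    n = len(x)
    var = np.var(x)
    if n < 2 or var == 0:
        return 0.0
    return -0.5 * n * (np.log(2 * np.pi * var) + 1.0)

def segment(x, start=0, min_len=20, threshold=10.0, breaks=None):
    # Recursively split x where describing it by two Gaussians beats a single Gaussian
    # by more than 'threshold' log-likelihood units (a stand-in for a proper significance test).
    if breaks is None:
        breaks = []
    n = len(x)
    if n <= 2 * min_len:
        return breaks
    base = gauss_loglik(x)
    gains = [gauss_loglik(x[:k]) + gauss_loglik(x[k:]) - base for k in range(min_len, n - min_len)]
    best = int(np.argmax(gains))
    if gains[best] > threshold:
        k = best + min_len
        breaks.append(start + k)
        segment(x[:k], start, min_len, threshold, breaks)
        segment(x[k:], start + k, min_len, threshold, breaks)
    return breaks

# Toy usage: a volatility change halfway through should yield a break point near index 500.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(0.0, 3.0, 500)])
print(sorted(segment(x)))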
1205.0381
1
MicroRNA-mediated regulation of gene expression is characterised by some distinctive features which set it apart from unregulated and transcription factor-regulated gene expression. Recently, a mathematical model has been proposed to describe the dynamics of post-transcriptional regulation by microRNAs. The model explains quite well the observations made in single cell experiments. In this paper, we introduce additional features into the model and consider two specific cases. In the first case, a non-cooperative positive feedback loop is included in the transcriptional regulation of the target gene expression. In the second case, a stochastic version of the original model is considered in which there are random transitions between the inactive and active states of the gene. In the first case we show that bistability is possible in a parameter regime due to the presence of a non-linear protein decay term in the gene expression dynamics. In the second case, we derive the conditions for obtaining stochastic binary gene expression. We find that this type of gene expression is more favourable in the case of regulation by microRNAs as compared to the case of unregulated gene expression. The theoretical predictions on binary gene expression are experimentally testable.
MicroRNA-mediated regulation of gene expression is characterised by some distinctive features which set it apart from unregulated and transcription factor-regulated gene expression. Recently, a mathematical model has been proposed to describe the dynamics of post-transcriptional regulation by microRNAs. The model explains quite well the observations made in single cell experiments. In this paper, we introduce some additional features into the model and consider two specific cases. In the first case, a non-cooperative positive feedback loop is included in the transcriptional regulation of the target gene expression. In the second case, a stochastic version of the original model is considered in which there are random transitions between the inactive and active expression states of the gene. In the first case we show that bistability is possible in a parameter regime due to the presence of a non-linear protein decay term in the gene expression dynamics. In the second case, we derive the conditions for obtaining stochastic binary gene expression. We find that this type of gene expression is more favourable in the case of regulation by microRNAs as compared to the case of unregulated gene expression. The theoretical predictions on binary gene expression are experimentally testable.
[ { "type": "A", "before": null, "after": "some", "start_char_pos": 413, "end_char_pos": 413 }, { "type": "A", "before": null, "after": "expression", "start_char_pos": 766, "end_char_pos": 766 } ]
[ 0, 181, 304, 384, 481, 618, 786, 951, 1045, 1201 ]
1205.0381
2
MicroRNA-mediated regulation of gene expression is characterised by some distinctive features which set it apart from unregulated and transcription factor-regulated gene expression. Recently, a mathematical model has been proposed to describe the dynamics of post-transcriptional regulation by microRNAs. The model explains quite well the observations made in single cell experiments . In this paper, we introduce some additional features into the model and consider two specific cases. In the first case, a non-cooperative positive feedback loop is included in the transcriptional regulation of the target gene expression. In the second case, a stochastic version of the original model is considered in which there are random transitions between the inactive and active expression states of the gene. In the first case we show that bistability is possible in a parameter regime due to the presence of a non-linear protein decay term in the gene expression dynamics. In the second case, we derive the conditions for obtaining stochastic binary gene expression. We find that this type of gene expression is more favourable in the case of regulation by microRNAs as compared to the case of unregulated gene expression. The theoretical predictions on binary gene expression are experimentally testable.
MicroRNA-mediated regulation of gene expression is characterised by some distinctive features that set it apart from unregulated and transcription factor-regulated gene expression. Recently, a mathematical model has been proposed to describe the dynamics of post-transcriptional regulation by microRNAs. The model explains the observations made in single cell experiments quite well . In this paper, we introduce some additional features into the model and consider two specific cases. In the first case, a non-cooperative positive feedback loop is included in the transcriptional regulation of the target gene expression. In the second case, a stochastic version of the original model is considered in which there are random transitions between the inactive and active expression states of the gene. In the first case we show that bistability is possible in a parameter regime , due to the presence of a non-linear protein decay term in the gene expression dynamics. In the second case, we derive the conditions for obtaining stochastic binary gene expression. We find that this type of gene expression is more favourable in the case of regulation by microRNAs as compared to the case of unregulated gene expression. The theoretical predictions relating to binary gene expression are experimentally testable.
[ { "type": "R", "before": "which", "after": "that", "start_char_pos": 94, "end_char_pos": 99 }, { "type": "D", "before": "quite well", "after": null, "start_char_pos": 324, "end_char_pos": 334 }, { "type": "A", "before": null, "after": "quite well", "start_char_pos": 384, "end_char_pos": 384 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 880, "end_char_pos": 880 }, { "type": "R", "before": "on", "after": "relating to", "start_char_pos": 1247, "end_char_pos": 1249 } ]
[ 0, 181, 304, 386, 487, 624, 802, 968, 1062, 1218 ]
1205.2521
1
In this paper we study the continuum time dynamics of a stock in a market where agents behavior is modeled by a Minority Game with number of strategies for each agent S=2 and "fake" market histories . The dynamics derived is a generalized geometric Brownian motion; from the Black & Scholes formula the calibration of the Minority Game , by means of the game parameter \sigma^{2}, is performed. An "(\alpha,\sigma^{2})-matrix" containing, given options' moneyness and maturities, values of the parameters \alpha and \sigma^{2} that make the theoretical option price agree with the market price is constructed. We conclude that the asymmetric phase of the Minority Game with \alpha close to \alpha_c is coherent with options implied volatility market.
In this paper we study the continuum time dynamics of a stock in a market where agents behavior is modeled by a Minority Game and a Grand Canonical Minority Game . The dynamics derived is a generalized geometric Brownian motion; from the Black & Scholes formula the calibration of both the Minority Game and the Grand Canonical Minority Game , by means of their characteristic parameters, is performed. ) -matrix" containing, given options' moneyness and maturities, values of the parameters \alpha and \sigma^{2} that make the theoretical option price agree with the market price is constructed. } We conclude that for both games the asymmetric phase with characteristic parameters close to critical ones is coherent with options implied volatility market.
[ { "type": "R", "before": "with number of strategies for each agent S=2 and \"fake\" market histories", "after": "and a Grand Canonical Minority Game", "start_char_pos": 126, "end_char_pos": 198 }, { "type": "A", "before": null, "after": "both", "start_char_pos": 318, "end_char_pos": 318 }, { "type": "A", "before": null, "after": "and the Grand Canonical Minority Game", "start_char_pos": 337, "end_char_pos": 337 }, { "type": "R", "before": "the game parameter \\sigma^{2", "after": "their characteristic parameters,", "start_char_pos": 352, "end_char_pos": 380 }, { "type": "D", "before": "An \" (\\alpha,\\sigma^{2", "after": null, "start_char_pos": 395, "end_char_pos": 417 }, { "type": "A", "before": null, "after": "for both games", "start_char_pos": 630, "end_char_pos": 630 }, { "type": "R", "before": "of the Minority Game with \\alpha close to \\alpha_c", "after": "with characteristic parameters close to critical ones", "start_char_pos": 652, "end_char_pos": 702 } ]
[ 0, 200, 265, 394, 610 ]
1205.2571
1
We introduce and analyze a minimal model of epigenetic silencing in budding yeast, built only upon known biomolecular interactions in the system , a posteriori identifying the key ones necessary for bistability of epigenetic states. The model explicitly incorporates two key chromatin marks, namely H4K16 acetylation and H3K79 methylation, and explores whether the presence of multiple marks lead to a qualitatively different systems behavior. We find that , having both modifications is important for the robustness of epigenetic silencing. More remarkably, besides the silenced and transcriptionally active fate of chromatin, our model leads to a novel state with bivalent marks under certain perturbations (knock-out mutations, inhibition or enhancement of enzymatic activity). The bivalent state is shown to result in patchy silencing in regions of parameter space and turns out to be pertinent in several perturbations . We also show that the titration effect, owing to a limited supply of silencing proteins, can result in counter-intuitive responses. The design principles of the silencing system is systematically investigated helping clarify disparate experimental observations in the literature within an unified theoretical framework for the first time and leading to fresh experimental proposals . Specifically, we discuss the behavior of Sir protein recruitment, spreading and stability of silenced regions in commonly-studied mutants (e.g., \emph{sas2}\varDelta , \emph{dot1}\varDelta ) illuminating the controversial role of Dot1 in the systems biology of yeast silencing.
We introduce and analyze a minimal model of epigenetic silencing in budding yeast, built upon known biomolecular interactions in the system . Doing so, we identify the epigenetic marks essential for the bistability of epigenetic states. The model explicitly incorporates two key chromatin marks, namely H4K16 acetylation and H3K79 methylation, and explores whether the presence of multiple marks lead to a qualitatively different systems behavior. We find that having both modifications is important for the robustness of epigenetic silencing. Besides the silenced and transcriptionally active fate of chromatin, our model leads to a novel state with bivalent (i.e., both active and silencing) marks under certain perturbations (knock-out mutations, inhibition or enhancement of enzymatic activity). The bivalent state appears under several perturbations and is shown to result in patchy silencing . We also show that the titration effect, owing to a limited supply of silencing proteins, can result in counter-intuitive responses. The design principles of the silencing system is systematically investigated and disparate experimental observations are assessed within a single theoretical framework . Specifically, we discuss the behavior of Sir protein recruitment, spreading and stability of silenced regions in commonly-studied mutants (e.g., sas2\varDelta , dot1\varDelta ) illuminating the controversial role of Dot1 in the systems biology of yeast silencing.
[ { "type": "D", "before": "only upon", "after": null, "start_char_pos": 89, "end_char_pos": 98 }, { "type": "R", "before": "known", "after": "upon known", "start_char_pos": 98, "end_char_pos": 103 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 144, "end_char_pos": 145 }, { "type": "D", "before": "a posteriori", "after": null, "start_char_pos": 145, "end_char_pos": 157 }, { "type": "R", "before": "identifying the key ones necessary for", "after": ". Doing so, we identify the epigenetic marks essential for the", "start_char_pos": 158, "end_char_pos": 196 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 455, "end_char_pos": 456 }, { "type": "R", "before": "More remarkably, besides", "after": "Besides", "start_char_pos": 540, "end_char_pos": 564 }, { "type": "A", "before": null, "after": "(i.e., both active and silencing)", "start_char_pos": 673, "end_char_pos": 673 }, { "type": "A", "before": null, "after": "appears under several perturbations and", "start_char_pos": 799, "end_char_pos": 799 }, { "type": "D", "before": "in regions of parameter space and turns out to be pertinent in several perturbations", "after": null, "start_char_pos": 839, "end_char_pos": 923 }, { "type": "R", "before": "helping clarify", "after": "and", "start_char_pos": 1135, "end_char_pos": 1150 }, { "type": "R", "before": "in the literature within an unified theoretical frameworkfor the first time and leading to fresh experimental proposals", "after": "are assessed within a single theoretical framework", "start_char_pos": 1187, "end_char_pos": 1306 }, { "type": "D", "before": "sas2", "after": null, "start_char_pos": 1460, "end_char_pos": 1464 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1502, "end_char_pos": 1503 }, { "type": "D", "before": "dot1", "after": null, "start_char_pos": 1509, "end_char_pos": 1513 }, { "type": "A", "before": null, "after": "sas2, dot1", "start_char_pos": 1551, "end_char_pos": 1551 } ]
[ 0, 230, 441, 539, 779, 925, 1057, 1322 ]
1205.2999
1
Since the financial crash in 2008, economic science and the economic profession are under siege. Critics point fingers at ivory tower economists, devoted to the construction of unfalsifiable models based on unrealistic assumptions in purely theoretical basis. Economies are complex man-made systems URLanisms and markets interact according to motivations and principles not entirely understood yet. Neo-classical economics is agnostic about the neural mechanisms that underlie the valuation of choices and decision making. The increasing dissatisfaction with the postulates of traditional economics i.e. perfectly rational agents, interacting through efficient markets in the search of equilibrium, has created new incentives for different approcahes in economics. Behavioral economics [2], [9] builds on cognitive and emotional models of agents, Neuroeconomics addresses the neurobiological basis of valuation of choices [8], [7] or Evolutionary economics [3], [5], [4], [1], [6] which strives for a new understanding of the economy as a complex evolutionary system, composed of agents that adapt to endogenous patterns out of equilibrium regions. The science of complexity may provide the platform to cross disciplinary boundaries in seemgly disparate fields such as brain science and economics. In this paper we take an integrative stance, fostering new insights into the economic character of neural activity. Key concepts in brain science like Hebbian learning and neural plasticity are revisited and elaborated, inside a new theoretical framework, that is sensitive to the new ideas that econophysics is proposing for financial markets. The objective here is to precisely delineate common topics in both neural and economic science, within a systemic outlook grounded in empirical basis that jolts the unification across the science of complex systems .
Economies are complex man-made systems URLanisms and markets interact according to motivations and principles not entirely understood yet. The increasing dissatisfaction with the postulates of traditional economics i.e. perfectly rational agents, interacting through efficient markets in the search of equilibrium, has created new incentives for different approaches in economics. The science of complexity may provide the platform to cross disciplinary boundaries in seemingly disparate fields such as brain science and economics. In this paper we take an integrative stance, fostering new insights into the economic character of neural activity. The objective here is to precisely delineate common topics in both neural and economic science, within a systemic outlook grounded in empirical basis that jolts the unification across the science of complex systems . It is argued that this mainly relies on the study of the inverse problem in complex system with a truly Bayesian approach .
[ { "type": "D", "before": "Since the financial crash in 2008, economic science and the economic profession are under siege. Critics point fingers at ivory tower economists, devoted to the construction of unfalsifiable models based on unrealistic assumptions in purely theoretical basis.", "after": null, "start_char_pos": 0, "end_char_pos": 259 }, { "type": "D", "before": "Neo-classical economics is agnostic about the neural mechanisms that underlie the valuation of choices and decision making.", "after": null, "start_char_pos": 399, "end_char_pos": 522 }, { "type": "R", "before": "approcahes", "after": "approaches", "start_char_pos": 740, "end_char_pos": 750 }, { "type": "D", "before": "Behavioral economics", "after": null, "start_char_pos": 765, "end_char_pos": 785 }, { "type": "D", "before": "2", "after": null, "start_char_pos": 786, "end_char_pos": 787 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 805, "end_char_pos": 806 }, { "type": "D", "before": "9", "after": null, "start_char_pos": 807, "end_char_pos": 808 }, { "type": "D", "before": "builds on cognitive and emotional models of agents, Neuroeconomics addresses the neurobiological basis of valuation of choices", "after": null, "start_char_pos": 827, "end_char_pos": 953 }, { "type": "D", "before": "8", "after": null, "start_char_pos": 954, "end_char_pos": 955 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 973, "end_char_pos": 974 }, { "type": "D", "before": "7", "after": null, "start_char_pos": 975, "end_char_pos": 976 }, { "type": "D", "before": "or Evolutionary economics", "after": null, "start_char_pos": 995, "end_char_pos": 1020 }, { "type": "D", "before": "3", "after": null, "start_char_pos": 1021, "end_char_pos": 1022 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1040, "end_char_pos": 1041 }, { "type": "D", "before": "5", "after": null, "start_char_pos": 1042, "end_char_pos": 1043 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1061, "end_char_pos": 1062 }, { "type": "D", "before": "4", "after": null, "start_char_pos": 1063, "end_char_pos": 1064 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1082, "end_char_pos": 1083 }, { "type": "D", "before": "1", "after": null, "start_char_pos": 1084, "end_char_pos": 1085 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1103, "end_char_pos": 1104 }, { "type": "D", "before": "6", "after": null, "start_char_pos": 1105, "end_char_pos": 1106 }, { "type": "D", "before": "which strives for a new understanding of the economy as a complex evolutionary system, composed of agents that adapt to endogenous patterns out of equilibrium regions.", "after": null, "start_char_pos": 1125, "end_char_pos": 1292 }, { "type": "R", "before": "seemgly", "after": "seemingly", "start_char_pos": 1380, "end_char_pos": 1387 }, { "type": "D", "before": "Key concepts in brain science like Hebbian learning and neural plasticity are revisited and elaborated, inside a new theoretical framework, that is sensitive to the new ideas that econophysics is proposing for financial markets.", "after": null, "start_char_pos": 1558, "end_char_pos": 1786 }, { "type": "A", "before": null, "after": ". It is argued that this mainly relies on the study of the inverse problem in complex system with a truly Bayesian approach", "start_char_pos": 2002, "end_char_pos": 2002 } ]
[ 0, 96, 259, 398, 522, 764, 806, 1292, 1441, 1557, 1786 ]
1205.3111
1
We extend the formalism of Random Boolean Networks with canalizing rules to multilevel complex networks. The formalism allows to model genetic networks in which each gene might take part in more than one signaling pathway. We use a semi-annealed approach to study the stability of this class of models when coupled in a multiplex network and show that the analytical results are in good agreement with numerical simulations . Our main finding is that the multiplex structure provides a mechanism for the stabilization of the system and of chaotic regimes of individual layers . Our results help understanding why some genetic networks that are theoretically expected to operate in the chaotic regime can actually display dynamical stability .
The study of the interplay between the structure and dynamics of complex multilevel systems is a pressing challenge nowadays. In this paper, we use a semi-annealed approximation to study the stability properties of Random Boolean Networks in multiplex (multi-layered) graphs . Our main finding is that the multilevel structure provides a mechanism for the stabilization of the dynamics of the whole system even when individual layers work on the chaotic regime, therefore identifying new ways of feedback between the structure and the dynamics of these systems . Our results point out the need for a conceptual transition from the physics of single layered networks to the physics of multiplex networks. Finally, the fact that the coupling modifies the phase diagram and the critical conditions of the isolated layers suggests that interdependency can be used as a control mechanism .
[ { "type": "R", "before": "We extend the formalism of Random Boolean Networks with canalizing rules to multilevel complex networks. The formalism allows to model genetic networks in which each gene might take part in more than one signaling pathway. We", "after": "The study of the interplay between the structure and dynamics of complex multilevel systems is a pressing challenge nowadays. In this paper, we", "start_char_pos": 0, "end_char_pos": 225 }, { "type": "R", "before": "approach", "after": "approximation", "start_char_pos": 246, "end_char_pos": 254 }, { "type": "R", "before": "of this class of models when coupled in a multiplex network and show that the analytical results are in good agreement with numerical simulations", "after": "properties of Random Boolean Networks in multiplex (multi-layered) graphs", "start_char_pos": 278, "end_char_pos": 423 }, { "type": "R", "before": "multiplex", "after": "multilevel", "start_char_pos": 455, "end_char_pos": 464 }, { "type": "R", "before": "system and of chaotic regimes of individual layers", "after": "dynamics of the whole system even when individual layers work on the chaotic regime, therefore identifying new ways of feedback between the structure and the dynamics of these systems", "start_char_pos": 525, "end_char_pos": 575 }, { "type": "R", "before": "help understanding why some genetic networks that are theoretically expected to operate in the chaotic regime can actually display dynamical stability", "after": "point out the need for a conceptual transition from the physics of single layered networks to the physics of multiplex networks. Finally, the fact that the coupling modifies the phase diagram and the critical conditions of the isolated layers suggests that interdependency can be used as a control mechanism", "start_char_pos": 590, "end_char_pos": 740 } ]
[ 0, 104, 222, 425, 577 ]
1205.3630
1
Complex networks possess a rich, multi-scale structure reflecting the dynamical and URLanization of the systems they model. Often there is a need to analyze multiple networks simultaneously, to model a system by more than one type of interaction or to go beyond simple pairwise interactions, but currently there is a lack of theoretical and computational methods to address such problems. Here we introduce a framework for multi-network analysis based on hypergraph representations. Our main result is a generalization of the Perron-Frobenius theorem from which we derive spectral clustering algorithms for directed and undirected hypergraphs. We illustrate our approach with applications for tripartite community detection in folksonomies, for local and global alignment of protein-protein interaction networks between multiple species and for detecting clusters of overlapping regulatory pathways in directed networks.
Complex networks possess a rich, multi-scale structure reflecting the dynamical and URLanization of the systems they model. Often there is a need to analyze multiple networks simultaneously, to model a system by more than one type of interaction or to go beyond simple pairwise interactions, but currently there is a lack of theoretical and computational methods to address these problems. Here we introduce a framework for clustering and community detection in such systems using hypergraph representations. Our main result is a generalization of the Perron-Frobenius theorem from which we derive spectral clustering algorithms for directed and undirected hypergraphs. We illustrate our approach with applications for local and global alignment of protein-protein interaction networks between multiple species , for tripartite community detection in folksonomies, and for detecting clusters of overlapping regulatory pathways in directed networks.
[ { "type": "R", "before": "such", "after": "these", "start_char_pos": 374, "end_char_pos": 378 }, { "type": "R", "before": "multi-network analysis based on", "after": "clustering and community detection in such systems using", "start_char_pos": 423, "end_char_pos": 454 }, { "type": "D", "before": "tripartite community detection in folksonomies, for", "after": null, "start_char_pos": 693, "end_char_pos": 744 }, { "type": "A", "before": null, "after": ", for tripartite community detection in folksonomies,", "start_char_pos": 837, "end_char_pos": 837 } ]
[ 0, 123, 388, 482, 643 ]
1205.3763
1
The main aim of this work is to incorporate selected findings from behavioural finance into a Heterogeneous Agent Model using the Brock and Hommes (1998) framework. In particular, we analyse the dynamics of the model around the so-called `Break Point Date', when behavioural elements are injected into the system and compare it to our empirical benchmark sample. Behavioural patterns are thus embedded into an asset pricing framework , which allows to examine their direct impact. Price behaviour of 30 Dow Jones Industrial Average constituents covering five particularly turbulent U.S. stock market periods reveals interesting pattern . To replicate it, we apply numerical analysis using the Heterogeneous Agent Model extended with the selected findings from behavioural finance: herding, overconfidence, and market sentiment. We show that these behavioural breaks can be well modelled via the Heterogeneous Agent Model framework and they extend the original model considerably. Various modifications lead to significantly different results and model with behavioural breaks is also able to partially replicate price behaviour found in the data during turbulent stock market periods.
The main aim of this work is to incorporate selected findings from behavioural finance into a Heterogeneous Agent Model using the Brock and Hommes (1998) framework. Behavioural patterns are injected into an asset pricing framework through the so-called `Break Point Date' , which allows us to examine their direct impact. In particular, we analyse the dynamics of the model around the behavioural break. Price behaviour of 30 Dow Jones Industrial Average constituents covering five particularly turbulent U.S. stock market periods reveals interesting pattern in this aspect . To replicate it, we apply numerical analysis using the Heterogeneous Agent Model extended with the selected findings from behavioural finance: herding, overconfidence, and market sentiment. We show that these behavioural breaks can be well modelled via the Heterogeneous Agent Model framework and they extend the original model considerably. Various modifications lead to significantly different results and model with behavioural breaks is also able to partially replicate price behaviour found in the data during turbulent stock market periods.
[ { "type": "D", "before": "In particular, we analyse the dynamics of the model around the so-called `Break Point Date', when behavioural elements are injected into the system and compare it to our empirical benchmark sample.", "after": null, "start_char_pos": 165, "end_char_pos": 362 }, { "type": "R", "before": "thus embedded", "after": "injected", "start_char_pos": 388, "end_char_pos": 401 }, { "type": "A", "before": null, "after": "through the so-called `Break Point Date'", "start_char_pos": 434, "end_char_pos": 434 }, { "type": "A", "before": null, "after": "us", "start_char_pos": 450, "end_char_pos": 450 }, { "type": "A", "before": null, "after": "In particular, we analyse the dynamics of the model around the behavioural break.", "start_char_pos": 483, "end_char_pos": 483 }, { "type": "A", "before": null, "after": "in this aspect", "start_char_pos": 639, "end_char_pos": 639 } ]
[ 0, 164, 362, 482, 641, 831, 983 ]
1205.3767
1
A trading strategy based on a natural learning process, which asymptotically outperforms any trading strategy from RKHS (Reproduced Kernel Hilbert Space) , is presented . In this process, the trader rationally chooses his gambles using predictions made by a randomized well calibrated algorithm. Our strategy is based on Dawid's notion of calibration with more general changing checking rules and on some modification of Kakade and Foster's randomized algorithm for computing calibrated forecasts. We use also Vovk's method of defensive forecasting in RKHS .
We present a universal algorithm for online trading in Stock Market which performs asymptotically at least as good as any stationary trading strategy that computes the investment at each step using a fixed function of the side information that belongs to a given RKHS (Reproducing Kernel Hilbert Space) . Using a universal kernel, we extend this result for any continuous stationary strategy . In this learning process, a trader rationally chooses his gambles using predictions made by a randomized well-calibrated algorithm. Our strategy is based on Dawid's notion of calibration with more general checking rules and on some modification of Kakade and Foster's randomized rounding algorithm for computing the well-calibrated forecasts. We combine the method of randomized calibration with Vovk's method of defensive forecasting in RKHS . Unlike the statistical theory, no stochastic assumptions are made about the stock prices. Our empirical results on historical markets provide strong evidence that this type of technical trading can "beat the market" if transaction costs are ignored .
[ { "type": "R", "before": "A trading strategy based on a natural learning process, which asymptotically outperforms any trading strategy from RKHS (Reproduced", "after": "We present a universal algorithm for online trading in Stock Market which performs asymptotically at least as good as any stationary trading strategy that computes the investment at each step using a fixed function of the side information that belongs to a given RKHS (Reproducing", "start_char_pos": 0, "end_char_pos": 131 }, { "type": "R", "before": ", is presented", "after": ". Using a universal kernel, we extend this result for any continuous stationary strategy", "start_char_pos": 154, "end_char_pos": 168 }, { "type": "R", "before": "process, the", "after": "learning process, a", "start_char_pos": 179, "end_char_pos": 191 }, { "type": "R", "before": "well calibrated", "after": "well-calibrated", "start_char_pos": 269, "end_char_pos": 284 }, { "type": "D", "before": "changing", "after": null, "start_char_pos": 369, "end_char_pos": 377 }, { "type": "A", "before": null, "after": "rounding", "start_char_pos": 452, "end_char_pos": 452 }, { "type": "R", "before": "calibrated", "after": "the well-calibrated", "start_char_pos": 477, "end_char_pos": 487 }, { "type": "R", "before": "use also", "after": "combine the method of randomized calibration with", "start_char_pos": 502, "end_char_pos": 510 }, { "type": "A", "before": null, "after": ". Unlike the statistical theory, no stochastic assumptions are made about the stock prices. Our empirical results on historical markets provide strong evidence that this type of technical trading can \"beat the market\" if transaction costs are ignored", "start_char_pos": 558, "end_char_pos": 558 } ]
[ 0, 170, 295, 498 ]
1205.3944
1
Transcriptional dynamics of gene regulatory networks are regulated in highly precise manner, despite a fluctuating environment and mutations. We model these dynamics as those of a coupled logistic map on a network and design systems which are robust against phenotypic perturbations (perturbations in dynamics), as well as systems which are robust against mutation (perturbations in network structure). To achieve such a design, we apply a multicanonical Monte Carlo . Analysis based on the maximum Lyapunov exponent and parameter sensitivity shows that systems with marginal stability, which are regarded as systems at the edge of chaos, emerge when robustness against genotypic perturbations is required. This emergence of the edge of chaos is a URLanization phenomenon and does not need a fine tuning of parameters.
Dynamics in biological networks are in general robust against several perturbations. We investigate a coupled map network as a model motivated by gene regulatory networks and design systems which are robust against phenotypic perturbations (perturbations in dynamics), as well as systems which are robust against mutation (perturbations in network structure). To achieve such a design, we apply a multicanonical Monte Carlo method . Analysis based on the maximum Lyapunov exponent and parameter sensitivity shows that systems with marginal stability, which are regarded as systems at the edge of chaos, emerge when robustness against network perturbations is required. This emergence of the edge of chaos is a URLanization phenomenon and does not need a fine tuning of parameters.
[ { "type": "R", "before": "Transcriptional dynamics of gene regulatory networks are regulated in highly precise manner, despite a fluctuating environment and mutations. We model these dynamics as those of a coupled logistic map on a network", "after": "Dynamics in biological networks are in general robust against several perturbations. We investigate a coupled map network as a model motivated by gene regulatory networks", "start_char_pos": 0, "end_char_pos": 213 }, { "type": "A", "before": null, "after": "method", "start_char_pos": 467, "end_char_pos": 467 }, { "type": "R", "before": "genotypic", "after": "network", "start_char_pos": 671, "end_char_pos": 680 } ]
[ 0, 141, 402, 469, 707 ]
1205.4345
1
We discuss a new notion of risk measures that preserve the property of coherence called Copula Conditional Tail Expectation (CCTE) . This measure describes the expected amount of risk that can be experienced given that a potential bivariate risk exceeds a bivariate threshold value, and provides an important measure for right-tail risk. Our goal is to propose an alternative risk measure which takes into account the fluctuations of losses and possible correlations between random variables .
A new notion to risk measures preserving the coherence axioms, that we call Copula Conditional Tail Expectation (CCTE) , is given. This risk measure describes the expected amount of risk that can be experienced given that a potential bivariate risk exceeds a bivariate threshold value, and provides an important measure for right-tail risk. Our goal is to propose an alternative risk measure which takes into account the fluctuations of losses and possible correlations between random variables . Finally, our risk measure is applied to the real financial data .
[ { "type": "R", "before": "We discuss a new notion of risk measures that preserve the property of coherence called", "after": "A new notion to risk measures preserving the coherence axioms, that we call", "start_char_pos": 0, "end_char_pos": 87 }, { "type": "R", "before": ". This", "after": ", is given. This risk", "start_char_pos": 131, "end_char_pos": 137 }, { "type": "A", "before": null, "after": ". Finally, our risk measure is applied to the real financial data", "start_char_pos": 492, "end_char_pos": 492 } ]
[ 0, 132, 337 ]
1205.4345
2
A new notion to risk measurespreserving the coherence axioms , that we call Copula Conditional Tail Expectation (CCTE), is given. This risk measure describes the expected amount of risk that can be experienced given that a potential bivariate risk exceeds a bivariate threshold value, and provides an important measure for right-tail risk. Our goal is to propose an alternative risk measure which takes into account the fluctuations of losses and possible correlations between random variables. Finally, our risk measure is applied to the real financial data .
Our goal in this paper is to propose an alternative risk measure which takes into account the fluctuations of losses and possible correlations between random variables. This new notion of risk measures , that we call Copula Conditional Tail Expectation describes the expected amount of risk that can be experienced given that a potential bivariate risk exceeds a bivariate threshold value, and provides an important measure for right-tail risk. An application to real financial data is given .
[ { "type": "R", "before": "A new notion to risk measurespreserving the coherence axioms", "after": "Our goal in this paper is to propose an alternative risk measure which takes into account the fluctuations of losses and possible correlations between random variables. This new notion of risk measures", "start_char_pos": 0, "end_char_pos": 60 }, { "type": "D", "before": "(CCTE), is given. This risk measure", "after": null, "start_char_pos": 112, "end_char_pos": 147 }, { "type": "R", "before": "Our goal is to propose an alternative risk measure which takes into account the fluctuations of losses and possible correlations between random variables. Finally, our risk measure is applied to the", "after": "An application to", "start_char_pos": 340, "end_char_pos": 538 }, { "type": "A", "before": null, "after": "is given", "start_char_pos": 559, "end_char_pos": 559 } ]
[ 0, 129, 339, 494 ]
1205.4643
1
For portfolio choice problems with proportional transaction costs, we discuss whether or not there exists a shadow price, i.e., a least favorable frictionless market extension leading to the same optimal strategy and utility. By means of an explicit counter-example, we show that shadow prices may fail to exist even in seemingly perfectly benign situations, i.e., for a log-investor trading in an arbitrage-free market with bounded prices and constant transaction costsof arbitrary size . We also clarify the connection between shadow prices and duality theory. Whereas dual minimizers need not lead to shadow prices in the above "global" sense, we show that they always correspond to a "local" version.
For portfolio choice problems with proportional transaction costs, we discuss whether or not there exists a shadow price, i.e., a least favorable frictionless market extension leading to the same optimal strategy and utility. By means of an explicit counter-example, we show that shadow prices may fail to exist even in seemingly perfectly benign situations, i.e., for a log-investor trading in an arbitrage-free market with bounded prices and arbitrarily small transaction costs . We also clarify the connection between shadow prices and duality theory. Whereas dual minimizers need not lead to shadow prices in the above "global" sense, we show that they always correspond to a "local" version.
[ { "type": "R", "before": "constant transaction costsof arbitrary size", "after": "arbitrarily small transaction costs", "start_char_pos": 444, "end_char_pos": 487 } ]
[ 0, 225, 489, 562 ]
1205.6126
1
We present a phenomenological dynamical model able to describe the stretching features of a lengthvs applied forceDNA curve. As concerning the chain, the model grounds on the discrete worm-like chain model with the elastic modifications, which properly describes the elongation features at low and intermediate forces. At high forces the dynamics , developed under a double well potential with a cubic term, accounts for the narrow transition present in the DNA elongation (overstretching). An good agreement between simulation and experiment is obtained.
We present a phenomenological dynamical model able to describe the stretching features of the curve of DNA length vs applied force. As concerns the chain, the model is based on the discrete wormlike chain model with elastic modifications, which properly describes the elongation features at low and intermediate forces. The dynamics is developed under a double-well potential with a linear term, which, at high forces, accounts for the narrow transition present in the DNA elongation (overstretching). A quite good agreement between simulation and experiment is obtained.
[ { "type": "D", "before": "a length", "after": null, "start_char_pos": 90, "end_char_pos": 98 }, { "type": "D", "before": "vs", "after": null, "start_char_pos": 98, "end_char_pos": 100 }, { "type": "R", "before": "applied forceDNA curve. As concerning", "after": "the curve of DNA length vs applied force. As concerns", "start_char_pos": 101, "end_char_pos": 138 }, { "type": "R", "before": "grounds", "after": "is based", "start_char_pos": 160, "end_char_pos": 167 }, { "type": "R", "before": "worm-like", "after": "wormlike", "start_char_pos": 184, "end_char_pos": 193 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 211, "end_char_pos": 214 }, { "type": "R", "before": "At high forces the dynamics ,", "after": "The dynamics is", "start_char_pos": 319, "end_char_pos": 348 }, { "type": "R", "before": "double well", "after": "double-well", "start_char_pos": 367, "end_char_pos": 378 }, { "type": "R", "before": "cubic term,", "after": "linear term, which, at high forces,", "start_char_pos": 396, "end_char_pos": 407 }, { "type": "R", "before": "An", "after": "A quite", "start_char_pos": 491, "end_char_pos": 493 } ]
[ 0, 124, 318, 490 ]
1205.6160
1
This paper studies stability of the exponential utility maximization when there are small variations on agent's utility . Two settings are studied . First, in a general semimartingale model where random endowments are present, there is a sequence of utilities defined on R converging to the exponential utility. Under a uniform condition on their marginal utilities, convergence of value functions, optimal terminal wealth and optimal investment strategies are obtained, their rate of convergence are determined. Stability of utility-based pricing is also discussed . Second, there is a sequence of utilities defined on R_+ each of which is comparable to a power utility whose relative risk aversion converges to infinity . Their associated optimal strategies, after appropriate scaling, converge to the optimal strategy for the exponential hedging problem. This complements Theorem 3.2 in M. Nutz, Probab. Theory Relat. Fields, 152, 2012, by allowing general utilities in the converging sequence .
This paper studies stability of the exponential utility maximization when there are small variations on agent's utility function . Two settings are considered . First, in a general semimartingale model where random endowments are present, a sequence of utilities defined on R converges to the exponential utility. Under a uniform condition on their marginal utilities, convergence of value functions, optimal payoffs and optimal investment strategies are obtained, their rate of convergence are also determined. Stability of utility-based pricing is studied as an application . Second, a sequence of utilities defined on R_+ converges to the exponential utility after shifting and scaling . Their associated optimal strategies, after appropriate scaling, converge to the optimal strategy for the exponential hedging problem. This complements Theorem 3.2 in M. Nutz, Probab. Theory Relat. Fields, 152, 2012, which establishes the convergence for a sequence of power utilities .
[ { "type": "A", "before": null, "after": "function", "start_char_pos": 120, "end_char_pos": 120 }, { "type": "R", "before": "studied", "after": "considered", "start_char_pos": 140, "end_char_pos": 147 }, { "type": "D", "before": "there is", "after": null, "start_char_pos": 228, "end_char_pos": 236 }, { "type": "R", "before": "converging", "after": "converges", "start_char_pos": 274, "end_char_pos": 284 }, { "type": "R", "before": "terminal wealth", "after": "payoffs", "start_char_pos": 408, "end_char_pos": 423 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 502, "end_char_pos": 502 }, { "type": "R", "before": "also discussed", "after": "studied as an application", "start_char_pos": 553, "end_char_pos": 567 }, { "type": "D", "before": "there is", "after": null, "start_char_pos": 578, "end_char_pos": 586 }, { "type": "R", "before": "each of which is comparable to a power utility whose relative risk aversion converges to infinity", "after": "converges to the exponential utility after shifting and scaling", "start_char_pos": 626, "end_char_pos": 723 }, { "type": "R", "before": "by allowing general utilities in the converging sequence", "after": "which establishes the convergence for a sequence of power utilities", "start_char_pos": 942, "end_char_pos": 998 } ]
[ 0, 122, 149, 312, 514, 725, 859, 908, 922 ]
1206.0026
1
Agents' heterogeneity has been recognized as a driver mechanism for the persistence of financial volatility. We focus on the multiplicity of investment strategies' horizons; we embed this concept in a continuous time stochastic volatility framework and prove that a parsimonious, two-scales version effectively capture the long memory as measured from the real data. Since estimating parameters in a stochastic volatility model is a challenging task, we introduce a robust, knowledge-driven methodology based on the Generalized Methods of Moments. In addition to volatility clustering, the estimated model also captures other relevant stylized facts, emerging as a minimal but realistic and complete framework for modeling financial time series.
Agents' heterogeneity has been recognized as a driver mechanism for the persistence of financial volatility. We focus on the multiplicity of investment strategies' horizons; we embed this concept in a continuous time stochastic volatility framework and prove that a parsimonious, two-scales version effectively captures the long memory as measured from the real data. Since estimating parameters in a stochastic volatility model is a challenging task, we introduce a robust, knowledge-driven methodology based on the Generalized Methods of Moments. In addition to the volatility clustering, the estimated model also captures other relevant stylized facts, emerging as a minimal but realistic and complete framework for modeling financial time series.
[ { "type": "R", "before": "capture", "after": "captures", "start_char_pos": 311, "end_char_pos": 318 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 563, "end_char_pos": 563 } ]
[ 0, 108, 173, 366, 547 ]
1206.0026
2
Agents' heterogeneity has been recognized as a driver mechanism for the persistence of financial volatility. We focus on the multiplicity of investment strategies' horizons ; we embed this concept in a continuous time stochastic volatility framework and prove that a parsimonious, two-scales version effectively captures the long memory as measured from the real data. Since estimating parameters in a stochastic volatility model is a challengingtask , we introduce a robust , knowledge-driven methodology based on the Generalized Methods of Moments . In addition to the volatility clustering, the estimated model also captures other relevant stylized facts, emerging as a minimal but realistic and complete framework for modeling financial time series.
Agents' heterogeneity is recognized as a driver mechanism for the persistence of financial volatility. We focus on the multiplicity of investment strategies' horizons , we embed this concept in a continuous time stochastic volatility framework and prove that a parsimonious, two-scale version effectively captures the long memory as measured from the real data. Since estimating parameters in a stochastic volatility model is challenging , we introduce a robust methodology based on the Generalized Method of Moments supported by a heuristic selection of the orthogonal conditions . In addition to the volatility clustering, the estimated model also captures other relevant stylized facts, emerging as a minimal but realistic and complete framework for modelling financial time series.
[ { "type": "R", "before": "has been", "after": "is", "start_char_pos": 22, "end_char_pos": 30 }, { "type": "R", "before": ";", "after": ",", "start_char_pos": 173, "end_char_pos": 174 }, { "type": "R", "before": "two-scales", "after": "two-scale", "start_char_pos": 281, "end_char_pos": 291 }, { "type": "R", "before": "a challengingtask", "after": "challenging", "start_char_pos": 433, "end_char_pos": 450 }, { "type": "D", "before": ", knowledge-driven", "after": null, "start_char_pos": 475, "end_char_pos": 493 }, { "type": "R", "before": "Methods of Moments", "after": "Method of Moments supported by a heuristic selection of the orthogonal conditions", "start_char_pos": 531, "end_char_pos": 549 }, { "type": "R", "before": "modeling", "after": "modelling", "start_char_pos": 722, "end_char_pos": 730 } ]
[ 0, 108, 174, 368, 551 ]
1206.0094
1
During the last decade, network approaches became a powerful tool to describe protein structure and dynamics. Here we first describe the protein structure networks of molecular chaperones, then characterize chaperone containing subnetworks of interactomes called as chaperone-networks or chaperomes. We review the role of molecular chaperones in short-term adaptation of cellular networks in response to stress, and in long-term adaptation discussing their putative functions in the regulation of evolvability. We provide a general overview of possible network mechanisms of adaptation, learning and memory formation. We propose that changes of network rigidity play a key role in learning and memory formation processes. Flexible network topology provides 'learning-competent' state. Here, networks may have much less modular boundaries than locally rigid, highly modular networks, where the learnt information has already been consolidated in a memory formation process. Since modular boundaries are efficient filters of information, in the 'learning-competent' state information filtering may be much smaller, than after memory formation. This mechanism restricts high information transfer to the 'learning competent' state. After memory formation the stored information is protected by modular boundary-induced segregation and information filtering . The flexible networks of URLanisms are generally in a 'learning competent' state. On the contrary, locally rigid networks of URLanisms have lost their 'learning competent' state, but store and protect their learnt information efficiently. We anticipate that the above mechanism may operate at the level of both protein-protein interaction and neuronal networks.
During the last decade, network approaches became a powerful tool to describe protein structure and dynamics. Here , we describe first the protein structure networks of molecular chaperones, then characterize chaperone containing sub-networks of interactomes called as chaperone-networks or chaperomes. We review the role of molecular chaperones in short-term adaptation of cellular networks in response to stress, and in long-term adaptation discussing their putative functions in the regulation of evolvability. We provide a general overview of possible network mechanisms of adaptation, learning and memory formation. We propose that changes of network rigidity play a key role in learning and memory formation processes. Flexible network topology provides 'learning-competent' state. Here, networks may have much less modular boundaries than locally rigid, highly modular networks, where the learnt information has already been consolidated in a memory formation process. Since modular boundaries are efficient filters of information, in the 'learning-competent' state information filtering may be much smaller, than after memory formation. This mechanism restricts high information transfer to the 'learning competent' state. After memory formation , modular boundary-induced segregation and information filtering protect the stored information . The flexible networks of URLanisms are generally in a 'learning competent' state. On the contrary, locally rigid networks of URLanisms have lost their 'learning competent' state, but store and protect their learnt information efficiently. We anticipate that the above mechanism may operate at the level of both protein-protein interaction and neuronal networks.
[ { "type": "R", "before": "we first describe", "after": ", we describe first", "start_char_pos": 115, "end_char_pos": 132 }, { "type": "R", "before": "subnetworks", "after": "sub-networks", "start_char_pos": 228, "end_char_pos": 239 }, { "type": "R", "before": "the stored information is protected by", "after": ",", "start_char_pos": 1251, "end_char_pos": 1289 }, { "type": "A", "before": null, "after": "protect the stored information", "start_char_pos": 1353, "end_char_pos": 1353 } ]
[ 0, 109, 299, 510, 617, 721, 784, 972, 1141, 1227, 1355, 1437, 1594 ]
1206.0094
2
During the last decade, network approaches became a powerful tool to describe protein structure and dynamics. Here, we describe first the protein structure networks of molecular chaperones, then characterize chaperone containing sub-networks of interactomes called as chaperone-networks or chaperomes. We review the role of molecular chaperones in short-term adaptation of cellular networks in response to stress, and in long-term adaptation discussing their putative functions in the regulation of evolvability. We provide a general overview of possible network mechanisms of adaptation, learning and memory formation. We propose that changes of network rigidity play a key role in learning and memory formation processes. Flexible network topology provides 'learning-competent' state. Here, networks may have much less modular boundaries than locally rigid, highly modular networks, where the learnt information has already been consolidated in a memory formation process. Since modular boundaries are efficient filters of information, in the 'learning-competent' state information filtering may be much smaller, than after memory formation. This mechanism restricts high information transfer to the 'learning competent' state. After memory formation, modular boundary-induced segregation and information filtering protect the stored information. The flexible networks of URLanisms are generally in a 'learning competent' state. On the contrary, locally rigid networks of URLanisms have lost their 'learning competent' state, but store and protect their learnt information efficiently. We anticipate that the above mechanism may operate at the level of both protein-protein interaction and neuronal networks.
During the last decade, network approaches became a powerful tool to describe protein structure and dynamics. Here, we describe first the protein structure networks of molecular chaperones, then characterize chaperone containing sub-networks of interactomes called as chaperone-networks or chaperomes. We review the role of molecular chaperones in short-term adaptation of cellular networks in response to stress, and in long-term adaptation discussing their putative functions in the regulation of evolvability. We provide a general overview of possible network mechanisms of adaptation, learning and memory formation. We propose that changes of network rigidity play a key role in learning and memory formation processes. Flexible network topology provides "learning competent" state. Here, networks may have much less modular boundaries than locally rigid, highly modular networks, where the learnt information has already been consolidated in a memory formation process. Since modular boundaries are efficient filters of information, in the "learning competent" state information filtering may be much smaller, than after memory formation. This mechanism restricts high information transfer to the "learning competent" state. After memory formation, modular boundary-induced segregation and information filtering protect the stored information. The flexible networks of URLanisms are generally in a "learning competent" state. On the contrary, locally rigid networks of URLanisms have lost their "learning competent" state, but store and protect their learnt information efficiently. We anticipate that the above mechanism may operate at the level of both protein-protein interaction and neuronal networks.
[ { "type": "R", "before": "'learning-competent'", "after": "\"learning competent\"", "start_char_pos": 759, "end_char_pos": 779 }, { "type": "R", "before": "'learning-competent'", "after": "\"learning competent\"", "start_char_pos": 1045, "end_char_pos": 1065 }, { "type": "R", "before": "'learning competent'", "after": "\"learning competent\"", "start_char_pos": 1202, "end_char_pos": 1222 }, { "type": "R", "before": "'learning competent'", "after": "\"learning competent\"", "start_char_pos": 1403, "end_char_pos": 1423 }, { "type": "R", "before": "'learning competent'", "after": "\"learning competent\"", "start_char_pos": 1500, "end_char_pos": 1520 } ]
[ 0, 109, 301, 512, 619, 723, 786, 974, 1143, 1229, 1348, 1430, 1587 ]
1206.0384
1
We consider the market of n financial agents who aim to increase their expected utilities by sharing their random incomes . Given the optimal sharing rules, we address the situation where agents do not share their true random endowments, but instead they report as endowments the random quantities that maximize their expected utility when the sharing rules are applied. It is shown that this strategic behavior results in a Nash-equilibrium type of agreement among the agents, which implies an inefficient risk sharing. Under quadric utility functionals, we give closed form solutions for this Nash equilibrium and discuss the associated findings. The effect of a similar agents ' strategic behavioris studied in the oligopoly over-the- counter market of finite financial securities, whose equilibrium prices are determined by the equality of demand and supply. The resulting risk sharing inefficiency is even more intense if the agents' participation in the market becomes an endogenous problem. Regarding this issue, we give conditions under which the participation of an extra agent is bene?ficial for all the existed ones. This discussion naturally leads to the problem of sub-group formation in the market, which is addressed for the ?first time in a financial risk sharing literature. A related example under quadratic utility functionals is extensively analyzed .
We consider the market of n financial agents who aim to increase their utilities by efficiently sharing their random endowments . Given the endogenously derived optimal sharing rules, we address the situation where agents do not reveal their true endowments, but instead they report as endowments the random quantities that maximize their utilities when the sharing rules are applied. Under mean-variance preferences, it is shown that each agent should share only a fraction of his true endowment and report that he is exposed to some endowment he does not possess. Furthermore, if all agents follow similar strategic behavior, the market equilibrates at a Nash-type equilibrium which benefits the speculators and results in risk sharing inefficiency . This agents' strategic behavior, when applied to oligopoly markets of exogenously given financial securities, changes the effective market portfolio and implies a price pressure on the traded securities in the CAPM .
[ { "type": "R", "before": "expected utilities by", "after": "utilities by efficiently", "start_char_pos": 71, "end_char_pos": 92 }, { "type": "R", "before": "incomes", "after": "endowments", "start_char_pos": 114, "end_char_pos": 121 }, { "type": "A", "before": null, "after": "endogenously derived", "start_char_pos": 134, "end_char_pos": 134 }, { "type": "R", "before": "share their true random", "after": "reveal their true", "start_char_pos": 203, "end_char_pos": 226 }, { "type": "R", "before": "expected utility", "after": "utilities", "start_char_pos": 319, "end_char_pos": 335 }, { "type": "R", "before": "It", "after": "Under mean-variance preferences, it", "start_char_pos": 372, "end_char_pos": 374 }, { "type": "R", "before": "this strategic behavior results in a Nash-equilibrium type of agreement among the agents, which implies an inefficient risk sharing. Under quadric utility functionals, we give closed form solutions for this Nash equilibrium and discuss the associated findings. The effect of a similar agents ' strategic behavioris studied in the oligopoly over-the- counter market of finite financial securities, whose equilibrium prices are determined by the equality of demand and supply. The resulting", "after": "each agent should share only a fraction of his true endowment and report that he is exposed to some endowment he does not possess. Furthermore, if all agents follow similar strategic behavior, the market equilibrates at a Nash-type equilibrium which benefits the speculators and results in", "start_char_pos": 389, "end_char_pos": 877 }, { "type": "R", "before": "is even more intense if the agents' participation in the market becomes an endogenous problem. Regarding this issue, we give conditions under which the participation of an extra agent is bene?ficial for all the existed ones. This discussion naturally leads to the problem of sub-group formation in the market, which is addressed for the ?first time in a financial risk sharing literature. A related example under quadratic utility functionals is extensively analyzed", "after": ". This agents' strategic behavior, when applied to oligopoly markets of exogenously given financial securities, changes the effective market portfolio and implies a price pressure on the traded securities in the CAPM", "start_char_pos": 904, "end_char_pos": 1370 } ]
[ 0, 371, 521, 649, 863, 998, 1128, 1292 ]
1206.0384
2
We consider the market of n financial agents who aim to increase their utilities by efficiently sharing their random endowments. Given the endogenously derived optimal sharing rules, we address the situation where agentsdo not reveal their true endowments, but instead they report as endowments the random quantities that maximize their utilities when the sharing rulesare applied. Under mean-variance preferences, it is shown that each agent should share only a fraction of his true endowment and report that he is exposed to some endowment he does not possess. Furthermore, if all agents follow similar strategic behavior, the market equilibrates at a Nash-type equilibrium which benefits the speculators and results in risk sharing inefficiency. This agents ' strategic behavior, when applied to oligopoly markets of exogenously given financial securities, changes the effective market portfolio and implies a price pressure on the traded securities in the CAPM .
The paper studies an oligopolistic equilibrium model of financial agents who aim to share their random endowments. The risk-sharing securities and their prices are endogenously determined as the outcome of a strategic game played among all the participating agents. In the complete-market setting, each agent's set of strategic choices consists of the security payoffs and the pricing kernel that are consistent with the optimal-sharing rules; while in the incomplete setting, agents respond via demand functions on a vector of given tradeable securities. It is shown that at the (Nash) risk-sharing equilibrium, the sharing securities are suboptimal, since agents submit for sharing different risk exposures than their true endowments. On the other hand, the Nash equilibrium prices stay unaffected by the game only in the special case of agents with the same risk aversion. In addition, agents with sufficiently lower risk aversion act as predatory traders, since they absorb utility surplus from the high risk averse agents and reduce the efficiency of sharing. The main results of the paper also hold under the generalized models that allow the presence of noise traders and heterogeneity in agents' beliefs .
[ { "type": "R", "before": "We consider the market of n", "after": "The paper studies an oligopolistic equilibrium model of", "start_char_pos": 0, "end_char_pos": 27 }, { "type": "R", "before": "increase their utilities by efficiently sharing their", "after": "share their", "start_char_pos": 56, "end_char_pos": 109 }, { "type": "R", "before": "Given the endogenously derived optimal sharing rules, we address the situation where agentsdo not reveal their true endowments, but instead they report as endowments the random quantities that maximize their utilities when the sharing rulesare applied. Under mean-variance preferences, it", "after": "The risk-sharing securities and their prices are endogenously determined as the outcome of a strategic game played among all the participating agents. In the complete-market setting, each agent's set of strategic choices consists of the security payoffs and the pricing kernel that are consistent with the optimal-sharing rules; while in the incomplete setting, agents respond via demand functions on a vector of given tradeable securities. It", "start_char_pos": 129, "end_char_pos": 417 }, { "type": "R", "before": "each agent should share only a fraction of his true endowment and report that he is exposed to some endowment he does not possess. Furthermore, if all agents follow similar strategic behavior, the market equilibrates at a Nash-type equilibrium which benefits the speculators and results in risk sharing inefficiency. This agents ' strategic behavior, when applied to oligopoly markets of exogenously given financial securities, changes the effective market portfolio and implies a price pressure on the traded securities in the CAPM", "after": "at the (Nash) risk-sharing equilibrium, the sharing securities are suboptimal, since agents submit for sharing different risk exposures than their true endowments. On the other hand, the Nash equilibrium prices stay unaffected by the game only in the special case of agents with the same risk aversion. In addition, agents with sufficiently lower risk aversion act as predatory traders, since they absorb utility surplus from the high risk averse agents and reduce the efficiency of sharing. The main results of the paper also hold under the generalized models that allow the presence of noise traders and heterogeneity in agents' beliefs", "start_char_pos": 432, "end_char_pos": 964 } ]
[ 0, 128, 381, 562, 748 ]
1206.0478
1
We discuss finiteness (effectiveness), continuity (robustness) and optimality (efficiency) results for capital requirements , or risk measures, defined for financial positions belonging to an ordered topological vector space. Given a set of acceptable financial positions and a pre-specified traded asset ( the eligible asset ), the associated capital requirement for a financial position represents to the amount of capital that needs to be raised and invested in the eligible asset to make the position acceptable. Our abstract approach allows to provide results that are applicable to a whole range of spaces of financial positions commonly used in the literature. Moreover, it allows to unveil the key properties of acceptance sets and of eligible assetsdriving the effectiveness and the robustness of the associated capital requirements. In particular, if the underlying space is a Fr\'echet lattice, we provide new finiteness results for required capital, as well as a simplified proof of the Extended Namioka-Klee theorem for risk measures . As an application we present a comprehensive treatment of finiteness and continuity for capital requirements based on Value-at-Risk and Tail-Value-at-Risk acceptability .
We discuss general capital requirements representing the minimum amount of capital that a financial institution needs to raise and invest in a pre-specified eligible, or reference, asset to ensure it is adequately capitalized. Financial positions are modeled as elements belonging to an ordered topological vector space. The payoff of the eligible asset is assumed to be an arbitrary non-zero positive element, thus allowing for a wide range of choices. In the context of function spaces, these general capital requirements cannot be transformed into cash-additive capital requirements by a simple change of numeraire unless the payoff of the eligible asset is bounded away from zero. This excludes the possibility of choosing a defaultable security as the eligible asset which, given the potential unavailability of risk-free assets, constitutes an important gap in the existing theory of capital requirements. This paper fills this gap and provides a detailed analysis of the interplay between acceptance sets and eligible assets. We provide a variety of finiteness and continuity results when the eligible asset has a payoff with "interior-like" qualities, paying particular attention to the case where the underlying space of positions is a Fr\'{e . As an application we provide a complete characterization of finiteness and L^p-continuity for quantile-based capital requirements, the most important types of capital requirements encountered in practice .
[ { "type": "R", "before": "finiteness (effectiveness), continuity (robustness) and optimality (efficiency) results for capital requirements , or risk measures, defined for financial positions", "after": "general capital requirements representing the minimum amount of capital that a financial institution needs to raise and invest in a pre-specified eligible, or reference, asset to ensure it is adequately capitalized. Financial positions are modeled as elements", "start_char_pos": 11, "end_char_pos": 175 }, { "type": "R", "before": "Given a set of acceptable financial positions and a pre-specified traded asset (", "after": "The payoff of", "start_char_pos": 226, "end_char_pos": 306 }, { "type": "R", "before": "), the associated capital requirement for a financial position represents to the amount of capital that needs to be raised and invested in the eligible asset to make the position acceptable. Our abstract approach allows to provide results that are applicable to a whole range of spaces of financial positions commonly used in the literature. Moreover, it allows to unveil the key properties of", "after": "is assumed to be an arbitrary non-zero positive element, thus allowing for a wide range of choices. In the context of function spaces, these general capital requirements cannot be transformed into cash-additive capital requirements by a simple change of numeraire unless the payoff of the eligible asset is bounded away from zero. This excludes the possibility of choosing a defaultable security as the eligible asset which, given the potential unavailability of risk-free assets, constitutes an important gap in the existing theory of capital requirements. This paper fills this gap and provides a detailed analysis of the interplay between", "start_char_pos": 326, "end_char_pos": 719 }, { "type": "R", "before": "of eligible assetsdriving the effectiveness and the robustness of the associated capital requirements. In particular, if the underlying space is a Fr\\'echet lattice, we provide new finiteness results for required capital, as well as a simplified proof of the Extended Namioka-Klee theorem for risk measures", "after": "eligible assets. We provide a variety of finiteness and continuity results when the eligible asset has a payoff with \"interior-like\" qualities, paying particular attention to the case where the underlying space of positions is a Fr\\'{e", "start_char_pos": 740, "end_char_pos": 1046 }, { "type": "R", "before": "present a comprehensive treatment", "after": "provide a complete characterization", "start_char_pos": 1070, "end_char_pos": 1103 }, { "type": "R", "before": "continuity for capital requirements based on Value-at-Risk and Tail-Value-at-Risk acceptability", "after": "L^p-continuity for quantile-based capital requirements, the most important types of capital requirements encountered in practice", "start_char_pos": 1122, "end_char_pos": 1217 } ]
[ 0, 225, 516, 667, 842, 1048 ]
1206.0478
2
We discuss general capital requirements representing the minimum amount of capital that a financial institution needs to raise and invest in a pre-specified eligible, or reference, asset to ensure it is adequately capitalized. Financial positions are modeled as elements belonging to an ordered topological vector space. The payoff of the eligible asset is assumed to be an arbitrary non-zero positive element, thus allowing for a wide range of choices. In the context of function spaces, these general capital requirements cannot be transformed into cash-additive capital requirements by a simple change of numeraire unless the payoff of the eligible asset is bounded away from zero. This excludes the possibility of choosing a defaultable security as the eligible asset which, given the potential unavailability of risk-free assets, constitutes an important gap in the existing theory of capital requirements. This paper fills this gap and provides a detailed analysis of the interplay between acceptance sets and eligible assets. We provide a variety of finiteness and continuity results when the eligible asset has a payoff with "interior-like" qualities, paying particular attention to the case where the underlying space of positions is a Fr\'{e .
We discuss risk measures representing the minimum amount of capital a financial institution needs to raise and invest in a pre-specified eligible asset to ensure it is adequately capitalized. Most of the literature has focused on cash-additive risk measures, for which the eligible asset is a risk-free bond, on the grounds that the general case can be reduced to the cash-additive case by a change of num\'{e eligible asset is a general defaultable bond. In this paper we fill this gap allowing for general eligible assets. We provide a variety of finiteness and continuity results for general risk measures, as well as dual representations for the convex case. We apply our results to risk measures based on Value-at-Risk and Tail Value-at-Risk on L^p spaces, as well as to shortfall risk measures based on utility functions on Orlicz spaces. We pay special attention to the property of cash subadditivity, which has been recently proposed as an alternative to cash additivity to deal with defaultable bonds. In important cases, we provide characterizations of cash subadditivity for general risk measures and show that, when the eligible asset is a defaultable bond, cash subadditivity is the exception rather than the rule. Finally, we consider the situation where the eligible asset is not liquidly traded and the pricing rule is no longer linear. We establish when the resulting risk measures are quasi-convex and show that cash subadditivity is only compatible with continuous pricing rules .
[ { "type": "R", "before": "general capital requirements", "after": "risk measures", "start_char_pos": 11, "end_char_pos": 39 }, { "type": "D", "before": "that", "after": null, "start_char_pos": 83, "end_char_pos": 87 }, { "type": "R", "before": "eligible, or reference, asset", "after": "eligible asset", "start_char_pos": 157, "end_char_pos": 186 }, { "type": "R", "before": "Financial positions are modeled as elements belonging to an ordered topological vector space. The payoff of the", "after": "Most of the literature has focused on cash-additive risk measures, for which the", "start_char_pos": 227, "end_char_pos": 338 }, { "type": "R", "before": "assumed to be an arbitrary non-zero positive element, thus allowing for a wide range of choices. In the context of function spaces, these general capital requirements cannot be transformed into", "after": "a risk-free bond, on the grounds that the general case can be reduced to the", "start_char_pos": 357, "end_char_pos": 550 }, { "type": "R", "before": "capital requirements by a simple change of numeraire unless the payoff of the", "after": "case by a change of num\\'{e", "start_char_pos": 565, "end_char_pos": 642 }, { "type": "R", "before": "bounded away from zero. This excludes the possibility of choosing a defaultable security as the eligible asset which, given the potential unavailability of risk-free assets, constitutes an important gap in the existing theory of capital requirements. This paper fills this gap and provides a detailed analysis of the interplay between acceptance sets and", "after": "a general defaultable bond. In this paper we fill this gap allowing for general", "start_char_pos": 661, "end_char_pos": 1015 }, { "type": "R", "before": "when the eligible asset has a payoff with \"interior-like\" qualities, paying particular", "after": "for general risk measures, as well as dual representations for the convex case. We apply our results to risk measures based on Value-at-Risk and Tail Value-at-Risk on L^p spaces, as well as to shortfall risk measures based on utility functions on Orlicz spaces. We pay special", "start_char_pos": 1091, "end_char_pos": 1177 }, { "type": "R", "before": "case where the underlying space of positions is a Fr\\'{e", "after": "property of cash subadditivity, which has been recently proposed as an alternative to cash additivity to deal with defaultable bonds. In important cases, we provide characterizations of cash subadditivity for general risk measures and show that, when the eligible asset is a defaultable bond, cash subadditivity is the exception rather than the rule. Finally, we consider the situation where the eligible asset is not liquidly traded and the pricing rule is no longer linear. We establish when the resulting risk measures are quasi-convex and show that cash subadditivity is only compatible with continuous pricing rules", "start_char_pos": 1195, "end_char_pos": 1251 } ]
[ 0, 226, 320, 453, 684, 911, 1032 ]
1206.0478
3
We discuss risk measures representing the minimum amount of capital a financial institution needs to raise and invest in a pre-specified eligible asset to ensure it is adequately capitalized. Most of the literature has focused on cash-additive risk measures, for which the eligible asset is a risk-free bond, on the grounds that the general case can be reduced to the cash-additive case by a change of num\'{e . However, discounting does not work in all financially relevant situations, for instance, if the eligible asset is a general defaultable bond. In this paper we fill this gap allowing for general eligible assets. We provide a variety of finiteness and continuity results for general risk measures , as well as dual representations for the convex case. We apply our results to risk measures based on Value-at-Risk and Tail Value-at-Risk on L^p spaces, as well as to shortfall risk measures based on utility functions on Orlicz spaces. We pay special attention to the property of cash subadditivity, which has been recently proposed as an alternative to cash additivity to deal with defaultable bonds. In important cases , we provide characterizations of cash subadditivity for general risk measures and show that, when the eligible asset is a defaultable bond, cash subadditivity is the exception rather than the rule. Finally, we consider the situation where the eligible asset is not liquidly traded and the pricing rule is no longer linear. We establish when the resulting risk measures are quasi-convex and show that cash subadditivity is only compatible with continuous pricing rules.
We discuss risk measures representing the minimum amount of capital a financial institution needs to raise and invest in a pre-specified eligible asset to ensure it is adequately capitalized. Most of the literature has focused on cash-additive risk measures, for which the eligible asset is a risk-free bond, on the grounds that the general case can be reduced to the cash-additive case by a change of numeraire . However, discounting does not work in all financially relevant situations, typically when the eligible asset is a defaultable bond. In this paper we fill this gap allowing for general eligible assets. We provide a variety of finiteness and continuity results for the corresponding risk measures and apply them to risk measures based on Value-at-Risk and Tail Value-at-Risk on L^p spaces, as well as to shortfall risk measures on Orlicz spaces. We pay special attention to the property of cash subadditivity, which has been recently proposed as an alternative to cash additivity to deal with defaultable bonds. For important examples , we provide characterizations of cash subadditivity and show that, when the eligible asset is a defaultable bond, cash subadditivity is the exception rather than the rule. Finally, we consider the situation where the eligible asset is not liquidly traded and the pricing rule is no longer linear. We establish when the resulting risk measures are quasiconvex and show that cash subadditivity is only compatible with continuous pricing rules.
[ { "type": "R", "before": "eligible asset", "after": "eligible asset", "start_char_pos": 137, "end_char_pos": 151 }, { "type": "R", "before": "num\\'{e", "after": "numeraire", "start_char_pos": 402, "end_char_pos": 409 }, { "type": "R", "before": "for instance, if", "after": "typically when", "start_char_pos": 487, "end_char_pos": 503 }, { "type": "D", "before": "general", "after": null, "start_char_pos": 528, "end_char_pos": 535 }, { "type": "R", "before": "general risk measures , as well as dual representations for the convex case. We apply our results", "after": "the corresponding risk measures and apply them", "start_char_pos": 685, "end_char_pos": 782 }, { "type": "R", "before": "based on utility functions on", "after": "on", "start_char_pos": 899, "end_char_pos": 928 }, { "type": "R", "before": "In important cases", "after": "For important examples", "start_char_pos": 1110, "end_char_pos": 1128 }, { "type": "D", "before": "for general risk measures", "after": null, "start_char_pos": 1182, "end_char_pos": 1207 }, { "type": "R", "before": "quasi-convex", "after": "quasiconvex", "start_char_pos": 1503, "end_char_pos": 1515 } ]
[ 0, 191, 553, 622, 761, 943, 1109, 1327, 1452 ]
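The edit_actions field in each record lists operations of type "R" (replace), "D" (delete) or "A" (add), each carrying a before string, an after string and start_char_pos/end_char_pos offsets into before_revision. A minimal sketch of how such a record could be spliced back into the revised text is given below; the helper name apply_edit_actions and the small example record are illustrative only, and the right-to-left splice order and the whitespace handling around spliced spans are assumptions rather than documented behaviour of this dataset.

def apply_edit_actions(before_revision, edit_actions):
    # Assumes start_char_pos/end_char_pos index into before_revision and that
    # applying actions from the highest start position downwards keeps the
    # earlier offsets valid; whitespace around spliced spans may need extra care.
    text = before_revision
    for action in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = action["start_char_pos"], action["end_char_pos"]
        replacement = action["after"] or ""  # "D" (delete) actions carry no after text
        text = text[:start] + replacement + text[end:]
    return text

# Hypothetical record, not copied from the rows above:
before = "We study general capital requirements ."
actions = [{"type": "R", "before": "general capital requirements",
            "after": "risk measures", "start_char_pos": 9, "end_char_pos": 37}]
print(apply_edit_actions(before, actions))  # -> "We study risk measures ."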
doc_id: 1206.0924
revision_depth: 1
We revisit the question of the pitch angle of DNA , which has been suggested to be 45 deg . It is demonstrated that the pitch angle of DNA is well below 45 deg. For A-DNA it is approximately 28 deg., and for B-DNA approximately 38 deg., in accord with the principle of optimal packing of tubular double helices .
The question of the value of the pitch angle of DNA is visited from the perspective of a geometrical analysis of transcription . It is suggested that for transcription to be possible, the pitch angle of B-DNA must be smaller than the angle of zero-twist. At the zero-twist angle the double helix is maximally rotated and its strain-twist coupling vanishes. A numerical estimate of the pitch angle for B-DNA based on differential geometry is compared with numbers obtained from existing empirical data. The crystallographic studies shows that the pitch angle is approximately 38 deg., less than the corresponding zero-twist angle of 41.8 deg., which is consistent with the suggested principle for transcription .
[ { "type": "R", "before": "We revisit the", "after": "The", "start_char_pos": 0, "end_char_pos": 14 }, { "type": "A", "before": null, "after": "value of the", "start_char_pos": 31, "end_char_pos": 31 }, { "type": "R", "before": ", which has been suggested to be 45 deg", "after": "is visited from the perspective of a geometrical analysis of transcription", "start_char_pos": 51, "end_char_pos": 90 }, { "type": "R", "before": "demonstrated that", "after": "suggested that for transcription to be possible,", "start_char_pos": 99, "end_char_pos": 116 }, { "type": "R", "before": "DNA is well below 45 deg. For A-DNA it is approximately 28 deg., and", "after": "B-DNA must be smaller than the angle of zero-twist. At the zero-twist angle the double helix is maximally rotated and its strain-twist coupling vanishes. A numerical estimate of the pitch angle", "start_char_pos": 136, "end_char_pos": 204 }, { "type": "A", "before": null, "after": "based on differential geometry is compared with numbers obtained from existing empirical data. The crystallographic studies shows that the pitch angle is", "start_char_pos": 215, "end_char_pos": 215 }, { "type": "R", "before": "in accord with the principle of optimal packing of tubular double helices", "after": "less than the corresponding zero-twist angle of 41.8 deg., which is consistent with the suggested principle for transcription", "start_char_pos": 239, "end_char_pos": 312 } ]
[ 0, 92, 161 ]
doc_id: 1206.1272
revision_depth: 1
A spin model relating physical to financial variables is presented. Based on this model, an algorithm evaluating negative temperatures was applied to New York Stock Exchange quotations from May 2005 up to the present. Stylized patterns resembling known processes in phenomenological thermodynamics were found, namely, population inversion and the magnetocaloric effect.
A spin model relating physical to financial variables is presented. This work is the first to introduce the concept of negative absolute temperature into stock market dynamics by establishing a rigorous formal analogy between physical and financial variables. Based on this model, an algorithm evaluating negative temperatures was applied to an analysis of New York Stock Exchange quotations from November 2002 up to the present. We found that the magnitude of negative temperature peaks correlates with subsequent index movement. Moreover, a certain autocorrelation function decays as temperature increases. An effort was directed to the search for patterns similar to known physical processes, since the model hypotheses pointed to the possibility of such a similarity. A number of cases resembling known processes in phenomenological thermodynamics were found, namely, population inversion and the magneto-caloric effect.
[ { "type": "A", "before": null, "after": "This work is the first to introduce the concept of negative absolute temperature into stock market dynamics by establishing a rigorous formal analogy between physical and financial variables.", "start_char_pos": 68, "end_char_pos": 68 }, { "type": "A", "before": null, "after": "an analysis of", "start_char_pos": 151, "end_char_pos": 151 }, { "type": "R", "before": "May 2005", "after": "November 2002", "start_char_pos": 192, "end_char_pos": 200 }, { "type": "R", "before": "Stylized patterns", "after": "We found that the magnitude of negative temperature peaks correlates with subsequent index movement. Moreover, a certain autocorrelation function decays as temperature increases. An effort was directed to the search for patterns similar to known physical processes, since the model hypotheses pointed to the possibility of such a similarity. A number of cases", "start_char_pos": 220, "end_char_pos": 237 }, { "type": "R", "before": "magnetocaloric", "after": "magneto-caloric", "start_char_pos": 349, "end_char_pos": 363 } ]
[ 0, 67, 219 ]
doc_id: 1206.1755
revision_depth: 1
Experimental X-ray crystallography, NMR ( NuclearMagnetic Resonance) spectroscopy, dual polarization interferometry, etc are indeed very powerful tools to determine the 3-Dimensional structure of a protein (including the membrane protein); theoretical mathematical and physical computational approaches can also allow us to obtain a description of the protein 3D structure at a submicroscopic level for some unstable, noncrystalline and insoluble proteins. X-ray crystallography finds the X-ray final structure of a protein, produce a better structure. This means theoretical methods are also important in determinations of protein structures. This paper presents a theoretical computational method - an improved LBFGS Quasi-Newtonian mathematical optimization method - to produce 3D structures of prion AGAAAAGA amyloid fibrils (which are unstable, noncrystalline and insoluble), from the potential energy minimization point of view .
Experimental X-ray crystallography, NMR ( Nuclear Magnetic Resonance) spectroscopy, dual polarization interferometry, etc are indeed very powerful tools to determine the 3-Dimensional structure of a protein (including the membrane protein); theoretical mathematical and physical computational approaches can also allow us to obtain a description of the protein 3D structure at a submicroscopic level for some unstable, noncrystalline and insoluble proteins. X-ray crystallography finds the X-ray final structure of a protein, which usually need refinements using theoretical protocols in order to produce a better structure. This means theoretical methods are also important in determinations of protein structures. Optimization is always needed in the computer-aided drug design, structure-based drug design, molecular dynamics, and quantum and molecular mechanics. This paper introduces some optimization algorithms used in these research fields and presents a new theoretical computational method - an improved LBFGS Quasi-Newtonian mathematical optimization method - to produce 3D structures of Prion AGAAAAGA amyloid fibrils (which are unstable, noncrystalline and insoluble), from the potential energy minimization point of view . Because the NMR or X-ray structure of the hydrophobic region AGAAAAGA of prion proteins has not yet been determined, the model constructed by this paper can be used as a reference for experimental studies on this region, and may be useful in furthering the goals of medicinal chemistry in this field .
[ { "type": "R", "before": "NuclearMagnetic", "after": "Nuclear Magnetic", "start_char_pos": 42, "end_char_pos": 57 }, { "type": "A", "before": null, "after": "which usually need refinements using theoretical protocols in order to", "start_char_pos": 525, "end_char_pos": 525 }, { "type": "R", "before": "This paper presents a", "after": "Optimization is always needed in the computer-aided drug design, structure-based drug design, molecular dynamics, and quantum and molecular mechanics. This paper introduces some optimization algorithms used in these research fields and presents a new", "start_char_pos": 645, "end_char_pos": 666 }, { "type": "R", "before": "prion", "after": "Prion", "start_char_pos": 799, "end_char_pos": 804 }, { "type": "A", "before": null, "after": ". Because the NMR or X-ray structure of the hydrophobic region AGAAAAGA of prion proteins has not yet been determined, the model constructed by this paper can be used as a reference for experimental studies on this region, and may be useful in furthering the goals of medicinal chemistry in this field", "start_char_pos": 935, "end_char_pos": 935 } ]
[ 0, 239, 456, 553, 644 ]
doc_id: 1206.1967
revision_depth: 1
Deformation of single stranded DNA before reaching the pore in translocation process is investigated. By solving the Laplace equation in a suitable coordinate system and with appropriate boundary conditions, an approximate solution for electric field inside and outside of a narrow pore is obtained. With an analysis based on "electrohydrodynamic equivalence" we determine the possibility of extension of a charged polymer due to the presence of electric field gradient in the vicinity of the pore entrance. Such deformation is shown to have a great contribution on the capturing process, first stage of any translocation phenomena. With a multiscale hybrid simulation (LB-MD) it is shown that an effective deformation before reaching the pore occurs which facilitates the process of finding the entrance for the end monomers .
Deformation of single stranded DNA in translocation process before reaching the pore is investigated. By solving the Laplace equation in a suitable coordinate system and with appropriate boundary conditions, an approximate solution for the electric field inside and outside of a narrow pore is obtained. With an analysis based on "electrohydrodynamic equivalence" we determine the possibility of extension of a charged polymer due to the presence of an electric field gradient in the vicinity of the pore entrance. With a multi-scale hybrid simulation (LB-MD) , it is shown that an effective deformation before reaching the pore occurs which facilitates the process of finding the entrance for the end monomers . We also highlight the role of long range hydrodynamic interactions via comparison of the LB-MD results with those obtained using a Langevin thermostat instead of the LB solver .
[ { "type": "A", "before": null, "after": "in translocation process", "start_char_pos": 35, "end_char_pos": 35 }, { "type": "D", "before": "in translocation process", "after": null, "start_char_pos": 61, "end_char_pos": 85 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 237, "end_char_pos": 237 }, { "type": "A", "before": null, "after": "an", "start_char_pos": 448, "end_char_pos": 448 }, { "type": "R", "before": "Such deformation is shown to have a great contribution on the capturing process, first stage of any translocation phenomena. With a multiscale", "after": "With a multi-scale", "start_char_pos": 511, "end_char_pos": 653 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 680, "end_char_pos": 680 }, { "type": "A", "before": null, "after": ". We also highlight the role of long range hydrodynamic interactions via comparison of the LB-MD results with those obtained using a Langevin thermostat instead of the LB solver", "start_char_pos": 830, "end_char_pos": 830 } ]
[ 0, 102, 301, 510, 635 ]
doc_id: 1206.2133
revision_depth: 1
We study the dynamics and patterning of polar contractile filaments on the surface of a cylindrical cell using active hydrodynamic equations that incorporate couplings between curvature and filament orientation. Cables and rings spontaneously emerge as steady state configurations of active filaments on the cylinder, and can be stationary or moving, helical or segments moving along helical trajectories. Contractility induces coalescence of proximal rings. We observe phase transitions in the steady state patterns upon changing cell diameter . Our results are relevant in a variety of cellular contexts involving the dynamics and patterning of active biopolymers in cylindrical cells.
We study the dynamics and patterning of polar contractile filaments on the surface of a cylindrical cell using active hydrodynamic equations that incorporate couplings between curvature and filament orientation. Cables and rings spontaneously emerge as steady state configurations on the cylinder, and can be stationary or moving, helical or segments moving along helical trajectories. Contractility induces coalescence of proximal rings. We observe phase transitions in the steady state patterns upon changing cell diameter and make several testable predictions . Our results are relevant to the dynamics and patterning of a variety of active biopolymers in cylindrical cells.
[ { "type": "D", "before": "of active filaments", "after": null, "start_char_pos": 281, "end_char_pos": 300 }, { "type": "A", "before": null, "after": "and make several testable predictions", "start_char_pos": 545, "end_char_pos": 545 }, { "type": "R", "before": "in a variety of cellular contexts involving", "after": "to", "start_char_pos": 573, "end_char_pos": 616 }, { "type": "A", "before": null, "after": "of a variety", "start_char_pos": 645, "end_char_pos": 645 } ]
[ 0, 211, 405, 458, 547 ]
doc_id: 1206.2305
revision_depth: 1
We consider the portfolio choice problem for an investor interested in long-run growth optimality while facing drawdown constraints in a general continuous semimartingale model. The paper introduces the numeraire property through the notion of expected relative return and shows that drawdown-constrained strategies with the numeraire property exist and are unique, but may depend on the financial planning horizon. We explicitly characterize the growth-optimal strategy and show that it enjoys the numeraire property within the class of investments satisfying the drawdown constraint , when sampled at the times of its maximum and asymptotically as the time-horizon becomes distant . Finally , it is established that the asymptotically growth-optimal strategy is obtained as limit of numeraire strategies on finite horizons.
We consider the portfolio choice problem for a long-run investor in a general continuous semimartingale model. We suggest to use path-wise growth optimality as the decision criterion and encode preferences through restrictions on the class of admissible wealth processes. Specifically, the investor is only interested in strategies which satisfy a given linear drawdown constraint. The paper introduces the numeraire property through the notion of expected relative return and shows that drawdown-constrained strategies with the numeraire property exist and are unique, but may depend on the financial planning horizon. However , when sampled at the times of its maximum and asymptotically as the time-horizon becomes distant , the drawdown-constrained numeraire portfolio is given explicitly through a model-independent transformation of the unconstrained numeraire portfolio. Further , it is established that the asymptotically growth-optimal strategy is obtained as limit of numeraire strategies on finite horizons.
[ { "type": "D", "before": "an investor interested in long-run growth optimality while facing drawdown constraints in", "after": null, "start_char_pos": 45, "end_char_pos": 134 }, { "type": "A", "before": null, "after": "long-run investor in a", "start_char_pos": 137, "end_char_pos": 137 }, { "type": "A", "before": null, "after": "We suggest to use path-wise growth optimality as the decision criterion and encode preferences through restrictions on the class of admissible wealth processes. Specifically, the investor is only interested in strategies which satisfy a given linear drawdown constraint.", "start_char_pos": 179, "end_char_pos": 179 }, { "type": "R", "before": "We explicitly characterize the growth-optimal strategy and show that it enjoys the numeraire property within the class of investments satisfying the drawdown constraint", "after": "However", "start_char_pos": 418, "end_char_pos": 586 }, { "type": "R", "before": ". Finally", "after": ", the drawdown-constrained numeraire portfolio is given explicitly through a model-independent transformation of the unconstrained numeraire portfolio. Further", "start_char_pos": 685, "end_char_pos": 694 } ]
[ 0, 178, 417, 686 ]
doc_id: 1206.2524
revision_depth: 1
Recently, several works have analyzed the efficiency of photosynthetic complexes in a transient scenario and how such an efficiency is affected by environmental noise. Here, following a quantum master equation approach, we study the energy and excitation transport in fully connected networks both in general and in the particular case of the FMO complex. The analysis is carried out for the steady state of the system where excitation energy is constantly "flowing" through the system. Steady state transport scenarios are particularly relevant if the evolution of the quantum system is not conditioned on the arrival of individual excitations. By adding dephasing to the system, we analyze the possibility of noise-enhancement of the quantum transport.
Recently, several works have analysed the efficiency of photosynthetic complexes in a transient scenario and how that efficiency is affected by environmental noise. Here, following a quantum master equation approach, we study the energy and excitation transport in fully connected networks both in general and in the particular case of the Fenna-Matthew-Olson complex. The analysis is carried out for the steady state of the system where the excitation energy is constantly "flowing" through the system. Steady state transport scenarios are particularly relevant if the evolution of the quantum system is not conditioned on the arrival of individual excitations. By adding dephasing to the system, we analyse the possibility of noise-enhancement of the quantum transport.
[ { "type": "R", "before": "analyzed", "after": "analysed", "start_char_pos": 29, "end_char_pos": 37 }, { "type": "R", "before": "such an", "after": "that", "start_char_pos": 113, "end_char_pos": 120 }, { "type": "R", "before": "FMO", "after": "Fenna-Matthew-Olson", "start_char_pos": 343, "end_char_pos": 346 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 425, "end_char_pos": 425 }, { "type": "R", "before": "analyze", "after": "analyse", "start_char_pos": 685, "end_char_pos": 692 } ]
[ 0, 167, 355, 487, 646 ]
doc_id: 1206.2818
revision_depth: 1
Proteins participating in a protein-protein interaction network can be grouped into homology classes following their common ancestry. Furthermore, homology class number and size have a high degree of universality, depending on genome size only and not on genome-specific features. These "genomic laws" can be regarded as evolutionary constraints on the growth mechanism of genomes. We define a statistical model describing the joint growth of the network and the partitioning of nodes into homology classesunder the constraint of a stochastic class-expansion/innovation model based on the "Chinese restaurant process", and we study this model through a combined mean-field and simulation approach. Additionally, we define model variants that allow to study the age dependency of interactions under different prescriptions of duplication-divergence growth. Comparing with empirical data , we show that the model reproduces the observed universal behavior allowing a variant-independent fit of parameters. This analysis also indicates that an age-dependent duplication move appears to be necessary for reproducing the basic network topological observables together with the age correlation between interacting nodes visible in empirical data .
Proteins participating in a protein-protein interaction network can be grouped into homology classes following their common ancestry. Proteins added to the network correspond to genes added to the classes, so that the dynamics of the two objects are intrinsically linked. Here, we first introduce a statistical model describing the joint growth of the network and the partitioning of nodes into classes, which is studied through a combined mean-field and simulation approach. We then employ this unified framework to address the specific issue of the age dependence of protein interactions, through the definition of three different node wiring/divergence schemes. Comparison with empirical data indicates that an age-dependent divergence move is necessary in order to reproduce the basic topological observables together with the age correlation between interacting nodes visible in empirical data . We also discuss the possibility of nontrivial joint partition/topology observables .
[ { "type": "R", "before": "Furthermore, homology class number and size have a high degree of universality, depending on genome size only and not on genome-specific features. These \"genomic laws\" can be regarded as evolutionary constraints on the growth mechanism of genomes. We define", "after": "Proteins added to the network correspond to genes added to the classes, so that the dynamics of the two objects are intrinsically linked. Here, we first introduce", "start_char_pos": 134, "end_char_pos": 391 }, { "type": "R", "before": "homology classesunder the constraint of a stochastic class-expansion/innovation model based on the \"Chinese restaurant process\", and we study this model", "after": "classes, which is studied", "start_char_pos": 490, "end_char_pos": 642 }, { "type": "R", "before": "Additionally, we define model variants that allow to study the age dependency of interactions under different prescriptions of duplication-divergence growth. Comparing", "after": "We then employ this unified framework to address the specific issue of the age dependence of protein interactions, through the definition of three different node wiring/divergence schemes. Comparison", "start_char_pos": 698, "end_char_pos": 865 }, { "type": "D", "before": ", we show that the model reproduces the observed universal behavior allowing a variant-independent fit of parameters. This analysis also", "after": null, "start_char_pos": 886, "end_char_pos": 1022 }, { "type": "R", "before": "duplication move appears to be necessary for reproducing the basic network", "after": "divergence move is necessary in order to reproduce the basic", "start_char_pos": 1055, "end_char_pos": 1129 }, { "type": "A", "before": null, "after": ". We also discuss the possibility of nontrivial joint partition/topology observables", "start_char_pos": 1240, "end_char_pos": 1240 } ]
[ 0, 133, 280, 381, 697, 855, 1003 ]
doc_id: 1206.2934
revision_depth: 1
In the present paper, we introduce a numerical scheme for the price of a barrier option when the underlying follows a diffusion process. The numerical scheme is based on an extension of a static hedging formula of barrier options. For getting the static hedging formula, the underlying process needs to have a symmetry. We introduce a way to "symmetrize" a given diffusion process. Then the pricing of a barrier option is reduced to that of plain options under the symmetrized process. To show how our symmetrization scheme works, we will present some numerical results applying (path-independent) Euler-Maruyama approximation to our scheme, comparing them with the path-dependent Euler-Maruyama scheme when the underlying process follows Black-Scholes and some CEVmodels . The results show the effectiveness of our scheme.
In the present paper, we introduce a numerical scheme for the price of a barrier option when the price of the underlying follows a diffusion process. The numerical scheme is based on an extension of a static hedging formula of barrier options. For getting the static hedging formula, the underlying process needs to have a symmetry. We introduce a way to "symmetrize" a given diffusion process. Then the pricing of a barrier option is reduced to that of plain options under the symmetrized process. To show how our symmetrization scheme works, we will present some numerical results applying (path-independent) Euler-Maruyama approximation to our scheme, comparing them with the path-dependent Euler-Maruyama scheme when the model is of the Black-Scholes , CEV, Heston, and (\lambda) -SABR, respectively . The results show the effectiveness of our scheme.
[ { "type": "A", "before": null, "after": "price of the", "start_char_pos": 97, "end_char_pos": 97 }, { "type": "R", "before": "underlying process follows", "after": "model is of the", "start_char_pos": 713, "end_char_pos": 739 }, { "type": "R", "before": "and some CEVmodels", "after": ", CEV, Heston, and (\\lambda) -SABR, respectively", "start_char_pos": 754, "end_char_pos": 772 } ]
[ 0, 137, 231, 320, 382, 486, 774 ]
doc_id: 1206.3768
revision_depth: 2
In Density Functional Theory simulations based on the LAPW method, each self-consistent cycle comprises dozens of large dense generalized eigenproblems. In contrast to real-space methods, eigenpairs solving for problems at distinct cycles have either been believed to be independent or at most very loosely connected. In a recent study \mbox{%DIFAUXCMD DBB , it was proposed to revert this point of view and consider simulations as made of dozens of sequences of eigenvalue problems; each sequence groups together eigenproblems with equal%DIFDELCMD < {\bf %%% k -vectors and an increasing outer-iteration cycle index \ell. From this different standpoint it was possible to demonstrate that, contrary to belief, successive eigenproblems in a sequence are strongly correlated with one another. In particular, by tracking the evolution of subspace angles between eigenvectors of successive eigenproblems, it was shown that these angles decrease noticeably after the first few iterations and become close to collinear : the closer to convergence the stronger the correlation becomes . This last result suggests that we can manipulate the eigenvectors, solving for a specific eigenproblem in a sequence, as an approximate solution for the following eigenproblem. In this work we present results that are in line with this intuition. First, we provide numerical examples where opportunely selected block iterative solvers benefit from the reuse of eigenvectors by achieving a substantial speed-up. We then develop a C language version of one of these algorithms and run a series of tests specifically focused on performance and scalability. All the numerical tests are carried out employing sequences of eigenproblems extracted from simulations of solid-state physics crystals. The results presented here could eventually open the way to a widespread use of block iterative solvers in ab initio electronic structure codes based on the LAPW approach.
In Density Functional Theory simulations based on the LAPW method, each self-consistent field cycle comprises dozens of large dense generalized eigenproblems. In contrast to real-space methods, eigenpairs solving for problems at distinct cycles have either been believed to be independent or at most very loosely connected. In a recent study 7 , it was %DIFDELCMD < {\bf %%% demonstrated that, contrary to belief, successive eigenproblems in a sequence are strongly correlated with one another. In particular, by monitoring the subspace angles between eigenvectors of successive eigenproblems, it was shown that these angles decrease noticeably after the first few iterations and become close to collinear . This last result suggests that we can manipulate the eigenvectors, solving for a specific eigenproblem in a sequence, as an approximate solution for the following eigenproblem. In this work we present results that are in line with this intuition. We provide numerical examples where opportunely selected block iterative eigensolvers benefit from the reuse of eigenvectors by achieving a substantial speed-up. The results presented will eventually open the way to a widespread use of block iterative eigensolvers in ab initio electronic structure codes based on the LAPW approach.
[ { "type": "A", "before": null, "after": "field", "start_char_pos": 88, "end_char_pos": 88 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD DBB", "after": "7", "start_char_pos": 337, "end_char_pos": 357 }, { "type": "D", "before": "proposed to revert this point of view and consider simulations as made of dozens of sequences of eigenvalue problems; each sequence groups together eigenproblems with equal", "after": null, "start_char_pos": 367, "end_char_pos": 539 }, { "type": "D", "before": "k", "after": null, "start_char_pos": 561, "end_char_pos": 562 }, { "type": "R", "before": "-vectors and an increasing outer-iteration cycle index \\ell. From this different standpoint it was possible to demonstrate", "after": "demonstrated", "start_char_pos": 563, "end_char_pos": 685 }, { "type": "R", "before": "tracking the evolution of", "after": "monitoring the", "start_char_pos": 811, "end_char_pos": 836 }, { "type": "D", "before": ": the closer to convergence the stronger the correlation becomes", "after": null, "start_char_pos": 1015, "end_char_pos": 1079 }, { "type": "R", "before": "First, we", "after": "We", "start_char_pos": 1329, "end_char_pos": 1338 }, { "type": "R", "before": "solvers", "after": "eigensolvers", "start_char_pos": 1409, "end_char_pos": 1416 }, { "type": "D", "before": "We then develop a C language version of one of these algorithms and run a series of tests specifically focused on performance and scalability. All the numerical tests are carried out employing sequences of eigenproblems extracted from simulations of solid-state physics crystals.", "after": null, "start_char_pos": 1493, "end_char_pos": 1772 }, { "type": "R", "before": "here could", "after": "will", "start_char_pos": 1795, "end_char_pos": 1805 }, { "type": "R", "before": "solvers", "after": "eigensolvers", "start_char_pos": 1869, "end_char_pos": 1876 } ]
[ 0, 153, 318, 484, 623, 792, 1081, 1258, 1328, 1492, 1635, 1772 ]
doc_id: 1206.3894
revision_depth: 1
The nonlinear elastic properties of fibrin networks are crucial for normal blood clotting . Here, we show that the extraordinary strain-stiffening response of fibrin clots reflects the hierarchical architecture of the fibrin fibers , which are bundles of wormlike protofibrils . We measure the rheology of networks of unbundled protofibrils , and find excellent agreement with an affine model of extensible wormlike polymers. By direct comparison with these data, we show that physiological clots of thick fibers can be modeled as networks of tight protofibril bundles. At high stress, the protofibrils contribute independently to the network elasticity, which may reflect a decoupling of the tight bundle structure. The hierarchical architecture of fibrin fibers can thus account for the enormous elastic resilience characteristic of blood clots.
Bundles of polymer filaments are responsible for the rich and unique mechanical behaviors of many biomaterials, including cells and extracellular matrices. In fibrin biopolymers, whose nonlinear elastic properties are crucial for normal blood clotting , protofibrils self-assemble and bundle to form networks of semiflexible fibers. Here we show that the extraordinary strain-stiffening response of fibrin networks is a direct reflection of the hierarchical architecture of the fibrin fibers . We measure the rheology of networks of unbundled protofibrils and find excellent agreement with an affine model of extensible wormlike polymers. By direct comparison with these data, we show that physiological fibrin networks composed of thick fibers can be modeled as networks of tight protofibril bundles. We demonstrate that the tightness of coupling between protofibrils in the fibers can be tuned by the degree of enzymatic intermolecular crosslinking by the coagulation Factor XIII. Furthermore, at high stress, the protofibrils contribute independently to the network elasticity, which may reflect a decoupling of the tight bundle structure. The hierarchical architecture of fibrin fibers can thus account for the nonlinearity and enormous elastic resilience characteristic of blood clots.
[ { "type": "R", "before": "The", "after": "Bundles of polymer filaments are responsible for the rich and unique mechanical behaviors of many biomaterials, including cells and extracellular matrices. In fibrin biopolymers, whose", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "D", "before": "of fibrin networks", "after": null, "start_char_pos": 33, "end_char_pos": 51 }, { "type": "R", "before": ". Here,", "after": ", protofibrils self-assemble and bundle to form networks of semiflexible fibers. Here", "start_char_pos": 90, "end_char_pos": 97 }, { "type": "R", "before": "clots reflects", "after": "networks is a direct reflection of", "start_char_pos": 166, "end_char_pos": 180 }, { "type": "D", "before": ", which are bundles of wormlike protofibrils", "after": null, "start_char_pos": 232, "end_char_pos": 276 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 341, "end_char_pos": 342 }, { "type": "R", "before": "clots", "after": "fibrin networks composed", "start_char_pos": 491, "end_char_pos": 496 }, { "type": "R", "before": "At", "after": "We demonstrate that the tightness of coupling between protofibrils in the fibers can be tuned by the degree of enzymatic intermolecular crosslinking by the coagulation Factor XIII. Furthermore, at", "start_char_pos": 570, "end_char_pos": 572 }, { "type": "A", "before": null, "after": "nonlinearity and", "start_char_pos": 789, "end_char_pos": 789 } ]
[ 0, 91, 278, 425, 569, 716 ]
doc_id: 1206.4420
revision_depth: 1
Financial markets are a classical example of complex systems as they are compound by many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results but at the price of restrictive assumptions on the market dynamics . Here we show that the pairwise model is actually a statistically consistent model with the observed first and second moments of the stocks orientation without making such restrictive assumptions. Our data analysis of six major indices suggests that the actual interaction structure is mathematically equivalent to an Ising model on a complete graph with gaussian interaction strengths scaling as the inverse of the system size. This has potentially important implications since many properties of such a model are already known and some techniques of the spin glass theory can be straightforwardly applied. Typical behaviors, as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, order-disorder, could find an explanation in this picture.
Financial markets are a classical example of complex systems as they are compound by many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results but at the price of restrictive assumptions on the market dynamics or they are agent based models with rules designed in order to recover some empirical behaviors . Here we show that the pairwise model is actually a statistically consistent model with the observed first and second moments of the stocks orientation without making such restrictive assumptions. This is done with an approach only based on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought as an Ising model on a complex network with\'e interaction strengths scaling as the inverse of the system size. This has potentially important implications since many properties of such a model are already known and some techniques of the spin glass theory can be straightforwardly applied. Typical behaviors, as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, order-disorder, could find an explanation in this picture.
[ { "type": "A", "before": null, "after": "or they are agent based models with rules designed in order to recover some empirical behaviors", "start_char_pos": 378, "end_char_pos": 378 }, { "type": "A", "before": null, "after": "This is done with an approach only based on empirical data of price returns.", "start_char_pos": 577, "end_char_pos": 577 }, { "type": "R", "before": "is mathematically equivalent to", "after": "may be thought as", "start_char_pos": 664, "end_char_pos": 695 }, { "type": "R", "before": "complete graph with gaussian", "after": "complex network with\\'e", "start_char_pos": 716, "end_char_pos": 744 } ]
[ 0, 109, 243, 380, 576, 809, 988 ]
doc_id: 1206.4420
revision_depth: 2
Financial markets are a classical example of complex systems as they are compound by many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results but at the price of restrictive assumptions on the market dynamics or they are agent based models with rules designed in order to recover some empirical behaviors. Here we show that the pairwise model is actually a statistically consistent model with the observed first and second moments of the stocks orientation without making such restrictive assumptions. This is done with an approach only based on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought as an Ising model on a complex network with\'e interaction strengths scaling as the inverse of the system size. This has potentially important implications since many properties of such a model are already known and some techniques of the spin glass theory can be straightforwardly applied. Typical behaviors, as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, order-disorder, could find an explanation in this picture.
Financial markets are a classical example of complex systems as they are compound by many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results but at the price of restrictive assumptions on the market dynamics or they are agent-based models with rules designed in order to recover some empirical behaviors. Here we show that the pairwise model is actually a statistically consistent model with the observed first and second moments of the stocks orientation without making such restrictive assumptions. This is done with an approach only based on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications since many properties of such a model are already known and some techniques of the spin glass theory can be straightforwardly applied. Typical behaviors, as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, order-disorder, could find an explanation in this picture.
[ { "type": "R", "before": "agent based", "after": "agent-based", "start_char_pos": 390, "end_char_pos": 401 }, { "type": "R", "before": "with\\'e", "after": "with", "start_char_pos": 888, "end_char_pos": 895 } ]
[ 0, 109, 243, 474, 670, 747, 960, 1139 ]
doc_id: 1206.4420
revision_depth: 3
Financial markets are a classical example of complex systems as they are compound by many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results but at the price of restrictive assumptions on the market dynamics or they are agent-based models with rules designed in order to recover some empirical behaviors . Here we show that the pairwise model is actually a statistically consistent model with observed first and second moments of the stocks orientation without making such restrictive assumptions. This is done with an approach only based on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications since many properties of such a model are already known and some techniques of the spin glass theory can be straightforwardly applied. Typical behaviors , as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, order-disorder, could find an explanation in this picture.
Financial markets are a classical example of complex systems as they comprise many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results but at the price of restrictive assumptions on the market dynamics or others are agent-based models with rules designed in order to recover some empirical behaviours . Here we show that the pairwise model is actually a statistically consistent model with observed first and second moments of the stocks orientation without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications since many properties of such a model are already known and some techniques of the spin glass theory can be straightforwardly applied. Typical behaviours , as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, order-disorder, could find an explanation in this picture.
[ { "type": "R", "before": "are compound by", "after": "comprise", "start_char_pos": 69, "end_char_pos": 84 }, { "type": "R", "before": "they", "after": "others", "start_char_pos": 381, "end_char_pos": 385 }, { "type": "R", "before": "behaviors", "after": "behaviours", "start_char_pos": 464, "end_char_pos": 473 }, { "type": "R", "before": "only based", "after": "based only", "start_char_pos": 698, "end_char_pos": 708 }, { "type": "R", "before": "behaviors", "after": "behaviours", "start_char_pos": 1142, "end_char_pos": 1151 } ]
[ 0, 109, 243, 475, 667, 744, 954, 1133 ]
doc_id: 1206.4424
revision_depth: 1
We construct a model of a lattice polymer which describes secondary structures of proteins . In this model the energy of a conformation of a polymer is equal to a sum of energies of conformations of segments of the polymer chain of the length five. We show that for this model with cooperative interaction all conformations with minimal energy are combinations of lattice models of alpha-helix and beta-strand. We show that for lattice polymers of the length not longer that 38 monomers we can describe all conformations with minimal energy.
In the standard approach to lattice proteins the models based on nearest neighbor interaction are used. In this kind of models it is difficult to explain the existence of secondary structures --- special preferred conformations of protein chains. In the present paper a new lattice model of proteins is proposed which is based on non-local cooperative interactions . In this model the energy of a conformation of a polymer is equal to the sum of energies of conformations of fragments of the polymer chain of the length five. It is shown that this quinary lattice model is able to describe at qualitative level secondary structures of proteins: for this model all conformations with minimal energy are combinations of lattice models of alpha--helix and beta--strand. Moreover for lattice polymers of the length not longer that 38 monomers we can describe all conformations with minimal energy.
[ { "type": "R", "before": "We construct a model of a lattice polymer which describes secondary structures of proteins", "after": "In the standard approach to lattice proteins the models based on nearest neighbor interaction are used. In this kind of models it is difficult to explain the existence of secondary structures --- special preferred conformations of protein chains. In the present paper a new lattice model of proteins is proposed which is based on non-local cooperative interactions", "start_char_pos": 0, "end_char_pos": 90 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 161, "end_char_pos": 162 }, { "type": "R", "before": "segments", "after": "fragments", "start_char_pos": 199, "end_char_pos": 207 }, { "type": "R", "before": "We show that", "after": "It is shown that this quinary lattice model is able to describe at qualitative level secondary structures of proteins:", "start_char_pos": 249, "end_char_pos": 261 }, { "type": "D", "before": "with cooperative interaction", "after": null, "start_char_pos": 277, "end_char_pos": 305 }, { "type": "R", "before": "alpha-helix and beta-strand. We show that", "after": "alpha--helix and beta--strand. Moreover", "start_char_pos": 382, "end_char_pos": 423 } ]
[ 0, 92, 248, 410 ]
doc_id: 1206.4539
revision_depth: 1
We introduce a rigorous and powerful method to microscopically compute the observables which characterize the thermodynamics and kinetics of rare macromolecular transitions . In order to sample the ensemble of statistically significant reaction pathways, we define a biased molecular dynamics (MD) in which barrier-crossing transitions are accelerated without introducing any unphysical external force. In contrast to other biased MD methods, in the present approach the systematic errors which are generated in order to accelerate the transition can be analytically calculated and therefore can be corrected for. This allows for an accurate and computationally very efficient reconstruction of the free-energy profile as a function of an arbitrarily chosen reaction coordinate . The transition path time can then be readily evaluated within the Dominant Reaction Pathways (DRP) approach. We illustrate and test this method on a simple system and show that it yields very accurate results .
We introduce a rigorous method to microscopically compute the observables which characterize the thermodynamics and kinetics of rare macromolecular transitions for which it is possible to identify a priori a slow reaction coordinate . In order to sample the ensemble of statistically significant reaction pathways, we define a biased molecular dynamics (MD) in which barrier-crossing transitions are accelerated without introducing any unphysical external force. In contrast to other biased MD methods, in the present approach the systematic errors which are generated in order to accelerate the transition can be analytically calculated and therefore can be corrected for. This allows for a computationally efficient reconstruction of the free-energy profile as a function of the reaction coordinate and for the calculation of the corresponding diffusion coefficient . The transition path time can then be readily evaluated within the Dominant Reaction Pathways (DRP) approach. We illustrate and test this method by characterizing a thermally activated transition on a two-dimensional energy surface and the folding of a small protein fragment within a coarse-grained model .
[ { "type": "D", "before": "and powerful", "after": null, "start_char_pos": 24, "end_char_pos": 36 }, { "type": "A", "before": null, "after": "for which it is possible to identify a priori a slow reaction coordinate", "start_char_pos": 173, "end_char_pos": 173 }, { "type": "R", "before": "an accurate and computationally very", "after": "a computationally", "start_char_pos": 631, "end_char_pos": 667 }, { "type": "R", "before": "an arbitrarily chosen reaction coordinate", "after": "the reaction coordinate and for the calculation of the corresponding diffusion coefficient", "start_char_pos": 737, "end_char_pos": 778 }, { "type": "R", "before": "on a simple system and show that it yields very accurate results", "after": "by characterizing a thermally activated transition on a two-dimensional energy surface and the folding of a small protein fragment within a coarse-grained model", "start_char_pos": 925, "end_char_pos": 989 } ]
[ 0, 175, 403, 614, 780, 889 ]
doc_id: 1206.5224
revision_depth: 1
The importance of considering the volumes to analyze stock prices movements can be considered as a well-accepted practice in the financial area. However, when we look at the scientific production in this area, particularly in this field, we still cannot find a unified model that includes volume and price variations for stock assessment purposes. In this paper we present a computer model that could fulfill this gap, proposing a new index to evaluate stock prices based on their historical prices and volumes traded. Besides the model can be considered mathematically very simple, it was able to improve significantly the performance of agents operating with real financial data , as will be showed in this paper . Based on the results obtained, and also on the very intuitive logic of our model, we believe that the index proposed here can be very useful to help investors on the activity of determining ideal price ranges for buying and selling stocks in the financial market.
The importance of considering the volumes to analyze stock prices movements can be considered as a well-accepted practice in the financial area. However, when we look at the scientific production in this field, we still cannot find a unified model that includes volume and price variations for stock assessment purposes. In this paper we present a computer model that could fulfill this gap, proposing a new index to evaluate stock prices based on their historical prices and volumes traded. Besides the model can be considered mathematically very simple, it was able to improve significantly the performance of agents operating with real financial data . Based on the results obtained, and also on the very intuitive logic of our model, we believe that the index proposed here can be very useful to help investors on the activity of determining ideal price ranges for buying and selling stocks in the financial market.
[ { "type": "D", "before": "area, particularly in this", "after": null, "start_char_pos": 204, "end_char_pos": 230 }, { "type": "D", "before": ", as will be showed in this paper", "after": null, "start_char_pos": 681, "end_char_pos": 714 } ]
[ 0, 144, 347, 518 ]
doc_id: 1206.5724
revision_depth: 1
We study physical limits of chemical sensing by a single chemotactic cell with cooperative chemoreceptors on the cell surface . We derive general formula for the gradient sensing limit from the uncertainty in instantaneous receptor activity configurations and find that cooperativity by non-adaptive receptors could significantly lower the sensing limit at the biochemically-relevant regime of balanced free energy difference between active and inactive receptor states . Cooperativity by adaptive receptors are beneficial to gradient sensing under a broad range of background concentrations. Our results also show that isotropic receptor aggregate layout represents an optimal configuration to gradient sensing and that anisotropy does not change the effect of receptor cooperativity on the sensing limit .
Most sensory cells use cross-membrane chemoreceptors to detect chemical signals in the environment. The biochemical properties and URLanization of chemoreceptors play important roles in achieving and maintaining sensitivity and accuracy of chemical sensing. Here we investigate the effects of receptor cooperativity and adaptation on the limits of gradient sensing. We study a single cell with aggregated chemoreceptor arrays on the cell surface and derive general formula to the limits for gradient sensing from the uncertainty of instantaneous receptor activity . In comparison to independent receptors, we find that cooperativity by non-adaptative receptors could significantly lower the sensing limit in a chemical concentration range determined by the biochemical properties of ligand-receptor binding and ligand-induced receptor activity . Cooperativity by adaptative receptors are beneficial to gradient sensing within a broad range of background concentrations. Our results also show that isotropic receptor aggregate layout on the cell surface represents an optimal configuration to gradient sensing .
[ { "type": "R", "before": "We study physical limits of chemical sensing by a single chemotactic cell with cooperative chemoreceptors", "after": "Most sensory cells use cross-membrane chemoreceptors to detect chemical signals in the environment. The biochemical properties and URLanization of chemoreceptors play important roles in achieving and maintaining sensitivity and accuracy of chemical sensing. Here we investigate the effects of receptor cooperativity and adaptation on the limits of gradient sensing. We study a single cell with aggregated chemoreceptor arrays", "start_char_pos": 0, "end_char_pos": 105 }, { "type": "R", "before": ". We", "after": "and", "start_char_pos": 126, "end_char_pos": 130 }, { "type": "R", "before": "for the gradient sensing limit", "after": "to the limits for gradient sensing", "start_char_pos": 154, "end_char_pos": 184 }, { "type": "R", "before": "in", "after": "of", "start_char_pos": 206, "end_char_pos": 208 }, { "type": "R", "before": "configurations and", "after": ". In comparison to independent receptors, we", "start_char_pos": 241, "end_char_pos": 259 }, { "type": "R", "before": "non-adaptive", "after": "non-adaptative", "start_char_pos": 287, "end_char_pos": 299 }, { "type": "R", "before": "at the biochemically-relevant regime of balanced free energy difference between active and inactive receptor states", "after": "in a chemical concentration range determined by the biochemical properties of ligand-receptor binding and ligand-induced receptor activity", "start_char_pos": 354, "end_char_pos": 469 }, { "type": "R", "before": "adaptive", "after": "adaptative", "start_char_pos": 489, "end_char_pos": 497 }, { "type": "R", "before": "under", "after": "within", "start_char_pos": 543, "end_char_pos": 548 }, { "type": "A", "before": null, "after": "on the cell surface", "start_char_pos": 656, "end_char_pos": 656 }, { "type": "D", "before": "and that anisotropy does not change the effect of receptor cooperativity on the sensing limit", "after": null, "start_char_pos": 713, "end_char_pos": 806 } ]
[ 0, 127, 592 ]
1206.6238
1
Effects of the period mismatch on entrainment properties in two coupled genetic oscillators are studied. The entrainment is calculated with a phase reduction approach and a Floquet multiplier analysis, and their dependencies on coupling strength and the period ratio are investigated in two genetic oscillator models (smooth and relaxation oscillators). We find that the existence of the period mismatch induces an enhancement of entrainment in both smooth and relaxation oscillators. By calculating Floquet multipliers, we show that the enhancement mechanism is based on the coupled oscillators which are in the vicinity of bifurcation on limit cycle .
Biological oscillators coordinate individual cellular components to function coherently and collectively. They are typically composed of multiple feedback loops, and period mismatch between them is unavoidable in biological implementations. We investigated the advantageous effect of the period mismatch in terms of synchronization against external stimuli (or trainability). Specifically, we numerically analyzed two fundamental genetic models, smooth and relaxation oscillators, on their trainability for different coupling strength and period ratio by the phase reduction and the Floquet multiplier analysis. We found that the period mismatch induces better entrainment in both oscillators; the enhancement occurred in the vicinity of the bifurcation on limit cycle . The optimal period ratio in the smooth oscillator for the enhanced trainability coincided with the experimentally observed ratio, which suggested the biological exploitation of the period mismatch. Although the origin of multiple feedback loops is often accounted for by the notion of passive robustness against perturbation, we here studied active benefits of the period mismatch on the efficiency of genetic oscillators. Our findings show qualitatively different perspective on the essentiality and inherent advantage of multiple loops in genetic oscillators .
[ { "type": "R", "before": "Effects of the period mismatch on entrainment properties in two coupled genetic oscillators are studied. The entrainment is calculated with a phase reduction approach and a Floquet multiplier analysis, and their dependencies on", "after": "Biological oscillators coordinate individual cellular components to function coherently and collectively. They are typically composed of multiple feedback loops, and period mismatch between them is unavoidable in biological implementations. We investigated the advantageous effect of the period mismatch in terms of synchronization against external stimuli (or trainability). Specifically, we numerically analyzed two fundamental genetic models, smooth and relaxation oscillators, on their trainability for different", "start_char_pos": 0, "end_char_pos": 227 }, { "type": "D", "before": "the period ratio are investigated in two genetic oscillator models (smooth and relaxation oscillators). We find that the existence of the", "after": null, "start_char_pos": 250, "end_char_pos": 387 }, { "type": "A", "before": null, "after": "ratio by the phase reduction and the Floquet multiplier analysis. We found that the period", "start_char_pos": 395, "end_char_pos": 395 }, { "type": "R", "before": "an enhancement of", "after": "better", "start_char_pos": 413, "end_char_pos": 430 }, { "type": "R", "before": "smooth and relaxation oscillators. By calculating Floquet multipliers, we show that the enhancement mechanism is based on the coupled oscillators which are", "after": "oscillators; the enhancement occurred", "start_char_pos": 451, "end_char_pos": 606 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 626, "end_char_pos": 626 }, { "type": "A", "before": null, "after": ". The optimal period ratio in the smooth oscillator for the enhanced trainability coincided with the experimentally observed ratio, which suggested the biological exploitation of the period mismatch. Although the origin of multiple feedback loops is often accounted for by the notion of passive robustness against perturbation, we here studied active benefits of the period mismatch on the efficiency of genetic oscillators. Our findings show qualitatively different perspective on the essentiality and inherent advantage of multiple loops in genetic oscillators", "start_char_pos": 654, "end_char_pos": 654 } ]
[ 0, 104, 353, 485 ]
1206.6238
2
Biological oscillators coordinate individual cellular components to function coherently and collectively. They are typically composed of multiple feedback loops, and period mismatch between them is unavoidable in biological implementations. We investigated the advantageous effect of the period mismatch in terms of synchronization against external stimuli(or trainability) . Specifically, we numerically analyzed two fundamental genetic models , smooth and relaxation oscillators , on their trainability for different coupling strength and period ratio by the phase reduction and the Floquet multiplier analysis . We found that the period mismatch induces better entrainment in both oscillators ; the enhancement occurred in the vicinity of the bifurcation on limit cycle. The optimal period ratio in the smooth oscillator for the enhanced trainability coincided with the experimentally observed ratio, which suggested the biological exploitation of the period mismatch. Although the origin of multiple feedback loops is often accounted for by the notion of passive robustness against perturbation, we here studied active benefits of the period mismatch on the efficiency of genetic oscillators. Our findings show qualitatively different perspective on the essentiality and inherent advantage of multiple loops in genetic oscillators .
Biological oscillators coordinate individual cellular components so that they function coherently and collectively. They are typically composed of multiple feedback loops, and period mismatch is unavoidable in biological implementations. We investigated the advantageous effect of this period mismatch in terms of a synchronization response to external stimuli . Specifically, we considered two fundamental models of genetic circuits: smooth- and relaxation oscillators . Using phase reduction and Floquet multipliers, we numerically analyzed their entrainability under different coupling strengths and period ratios . We found that a period mismatch induces better entrainment in both types of oscillator ; the enhancement occurs in the vicinity of the bifurcation on their limit cycles. In the smooth oscillator , the optimal period ratio for the enhancement coincides with the experimentally observed ratio, which suggests biological exploitation of the period mismatch. Although the origin of multiple feedback loops is often explained as a passive mechanism to ensure robustness against perturbation, we study the active benefits of the period mismatch , which include increasing the efficiency of the genetic oscillators. Our findings show a qualitatively different perspective for both the inherent advantages of multiple loops and their essentiality .
[ { "type": "R", "before": "to", "after": "so that they", "start_char_pos": 65, "end_char_pos": 67 }, { "type": "D", "before": "between them", "after": null, "start_char_pos": 182, "end_char_pos": 194 }, { "type": "R", "before": "the", "after": "this", "start_char_pos": 284, "end_char_pos": 287 }, { "type": "R", "before": "synchronization against external stimuli(or trainability)", "after": "a synchronization response to external stimuli", "start_char_pos": 316, "end_char_pos": 373 }, { "type": "R", "before": "numerically analyzed two fundamental genetic models , smooth", "after": "considered two fundamental models of genetic circuits: smooth-", "start_char_pos": 393, "end_char_pos": 453 }, { "type": "R", "before": ", on their trainability for different coupling strength and period ratio by the phase reduction and the Floquet multiplier analysis", "after": ". Using phase reduction and Floquet multipliers, we numerically analyzed their entrainability under different coupling strengths and period ratios", "start_char_pos": 481, "end_char_pos": 612 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 629, "end_char_pos": 632 }, { "type": "R", "before": "oscillators", "after": "types of oscillator", "start_char_pos": 684, "end_char_pos": 695 }, { "type": "R", "before": "occurred", "after": "occurs", "start_char_pos": 714, "end_char_pos": 722 }, { "type": "R", "before": "limit cycle. The optimal period ratio in", "after": "their limit cycles. In", "start_char_pos": 761, "end_char_pos": 801 }, { "type": "R", "before": "for the enhanced trainability coincided", "after": ", the optimal period ratio for the enhancement coincides", "start_char_pos": 824, "end_char_pos": 863 }, { "type": "R", "before": "suggested the", "after": "suggests", "start_char_pos": 910, "end_char_pos": 923 }, { "type": "R", "before": "accounted for by the notion of passive", "after": "explained as a passive mechanism to ensure", "start_char_pos": 1028, "end_char_pos": 1066 }, { "type": "R", "before": "here studied", "after": "study the", "start_char_pos": 1103, "end_char_pos": 1115 }, { "type": "R", "before": "on", "after": ", which include increasing", "start_char_pos": 1155, "end_char_pos": 1157 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1176, "end_char_pos": 1176 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 1216, "end_char_pos": 1216 }, { "type": "R", "before": "on the essentiality and inherent advantage", "after": "for both the inherent advantages", "start_char_pos": 1253, "end_char_pos": 1295 }, { "type": "R", "before": "in genetic oscillators", "after": "and their essentiality", "start_char_pos": 1314, "end_char_pos": 1336 } ]
[ 0, 105, 240, 375, 614, 697, 773, 971, 1197 ]
1206.6332
1
Crystallography may be the gold standard of protein structure determination, but obtaining the necessary high-quality crystals is akin to prospecting for the precious mineral. The fields of structural biology and soft matter have independently sought out fundamental principles to rationalize the process, but the conceptual differences and the limited crosstalk between the two disciplines have prevented a comprehensive understanding of the phenomenon to emerge. Here we conduct a computational study of proteins from the rubredoxin family that bridges the two fields. Using atomistic simulations, we characterize the crystal contacts, and then parameterize patchy particle models. Comparing the phase diagrams of these models with experimental results enables us to critically examine the assumptions behind the two approaches and to reveal key features of protein-protein interactions that facilitate their crystallization .
The fields of structural biology and soft matter have independently sought out fundamental principles to rationalize protein crystallization. Yet the conceptual differences and the limited overlap between the two disciplines have thus far prevented a comprehensive understanding of the phenomenon to emerge. We conduct a computational study of proteins from the rubredoxin family that bridges the two fields. Using atomistic simulations, we characterize their crystal contacts, and accordingly parameterize patchy particle models. Comparing the phase diagrams of these schematic models with experimental results enables us to critically examine the assumptions behind the two approaches . The study also reveals features of protein-protein interactions that can be leveraged to crystallize proteins more generally .
[ { "type": "D", "before": "Crystallography may be the gold standard of protein structure determination, but obtaining the necessary high-quality crystals is akin to prospecting for the precious mineral.", "after": null, "start_char_pos": 0, "end_char_pos": 175 }, { "type": "R", "before": "the process, but the", "after": "protein crystallization. Yet the", "start_char_pos": 293, "end_char_pos": 313 }, { "type": "R", "before": "crosstalk", "after": "overlap", "start_char_pos": 353, "end_char_pos": 362 }, { "type": "A", "before": null, "after": "thus far", "start_char_pos": 396, "end_char_pos": 396 }, { "type": "R", "before": "Here we", "after": "We", "start_char_pos": 466, "end_char_pos": 473 }, { "type": "R", "before": "the", "after": "their", "start_char_pos": 617, "end_char_pos": 620 }, { "type": "R", "before": "then", "after": "accordingly", "start_char_pos": 643, "end_char_pos": 647 }, { "type": "A", "before": null, "after": "schematic", "start_char_pos": 723, "end_char_pos": 723 }, { "type": "R", "before": "and to reveal key", "after": ". The study also reveals", "start_char_pos": 832, "end_char_pos": 849 }, { "type": "R", "before": "facilitate their crystallization", "after": "can be leveraged to crystallize proteins more generally", "start_char_pos": 896, "end_char_pos": 928 } ]
[ 0, 175, 465, 571, 684 ]
1206.7014
1
Motivated by single molecule experiments on biopolymers we explore force vs.\ extension behavior of flexible chains with hydrophobic segments using numerical simulationsand theory . We find that in addition to the fraction of hydrophobic patches their spatial distribution along the backbone of the chain play a major role in altering its mechanical response. These results are discussed in light of the helix-coil model for biopolymers .
Motivated by single molecule experiments on biopolymers we explore equilibrium morphologies and force-extension behavior of copolymers with hydrophobic segments using Langevin dynamics simulations . We find that the interplay between different length scales, namely, the persistence length \ell_{p} in addition to the fraction of hydrophobic patches f play a major role in altering the equilibrium morphologies and mechanical response. In particular, we show a plethora of equilibrium morphologies for this system, e.g. core-shell, looped (with hybridised hydrophilic-hydrophobic sections), and extended coils as a function of these parameters. A competition of bending energy and hybridisation energies between two types of beads determines the equilibrium morphology. Further, mechanical properties of such polymer architectures are crucially dependent on their native conformations, and in turn on the disorder realisation along the chain backbone. Thus, for flexible chains, a globule to extended coil transition is effected via a tensile force for all disorder realisations. However, the exact nature of the force-extension curves are different for the different disorder realisations. In contrast, we find that force-extension behavior of semi-flexible chains with different equilibrium configurations, e.g. core-shell, looped, etc. reveal a cascade of force-induced conformational transitions .
[ { "type": "R", "before": "force vs.\\ extension behavior of flexible chains", "after": "equilibrium morphologies and force-extension behavior of copolymers", "start_char_pos": 67, "end_char_pos": 115 }, { "type": "R", "before": "numerical simulationsand theory", "after": "Langevin dynamics simulations", "start_char_pos": 148, "end_char_pos": 179 }, { "type": "A", "before": null, "after": "the interplay between different length scales, namely, the persistence length \\ell_{p", "start_char_pos": 195, "end_char_pos": 195 }, { "type": "R", "before": "their spatial distribution along the backbone of the chain", "after": "f", "start_char_pos": 247, "end_char_pos": 305 }, { "type": "R", "before": "its", "after": "the equilibrium morphologies and", "start_char_pos": 336, "end_char_pos": 339 }, { "type": "R", "before": "These results are discussed in light of the helix-coil model for biopolymers", "after": "In particular, we show a plethora of equilibrium morphologies for this system,", "start_char_pos": 361, "end_char_pos": 437 }, { "type": "A", "before": null, "after": "e.g.", "start_char_pos": 437, "end_char_pos": 437 }, { "type": "A", "before": null, "after": "core-shell, looped (with hybridised hydrophilic-hydrophobic sections), and extended coils as a function of these parameters. A competition of bending energy and hybridisation energies between two types of beads determines the equilibrium morphology. Further, mechanical properties of such polymer architectures are crucially dependent on their native conformations, and in turn on the disorder realisation along the chain backbone. Thus, for flexible chains, a globule to extended coil transition is effected via a tensile force for all disorder realisations. However, the exact nature of the force-extension curves are different for the different disorder realisations. In contrast, we find that force-extension behavior of semi-flexible chains with different equilibrium configurations", "start_char_pos": 438, "end_char_pos": 438 }, { "type": "A", "before": null, "after": "e.g.", "start_char_pos": 438, "end_char_pos": 438 }, { "type": "A", "before": null, "after": "core-shell, looped,", "start_char_pos": 439, "end_char_pos": 439 }, { "type": "A", "before": null, "after": "etc.", "start_char_pos": 439, "end_char_pos": 439 }, { "type": "A", "before": null, "after": "reveal a cascade of force-induced conformational transitions", "start_char_pos": 440, "end_char_pos": 440 } ]
[ 0, 181, 360 ]
1206.7053
1
Mitochondrial adenine nucleotide (AdN) content is regulated through the Ca2+-activated, electroneutral ATP-Mg/Pi carrier (APC). The APC is a protein in the mitochondrial carrier super family that localizes to the inner mitochondrial membrane (IMM). It is known to modulate a number of processes that depend on mitochondrial AdN content, such as gluconeogenesis, protein synthesis, and citrulline synthesis. Despite this critical role, a kinetic model of the underlying mechanism has not been developed and corroborated . Here, a biophysical model of the APC is developed that is thermodynamically balanced and accurately reproduces a number of reported data sets from isolated rat liver and rat kidney mitochondria. The model is based on an ordered bi-bi mechanism for hetero-exchange of ATP and Pi and also includes homo-exchanges of ATP and Pi to explain both the initial rate and time course data on ATP and Pi transport via the APC. The model invokes seven kinetic parameters regarding the APC mechanism and three parameters related to matrix pH regulation by external Pi. These parameters are estimated based on nineteen independent data curves; the estimated parameters are corroborated using six additional data curves. The model takes into account the effects of pH, Mg2+ and Ca2+ on ATP and Pi transport via the APC and supports the conclusion that the pH gradient across the IMM serves as the primary driving force for AdN uptake or efflux. Moreover, computer simulations demonstrate that extra-matrix Ca2+ modulates the turnover rate of the APC and not the binding affinity of ATP, as previously suggested.
Mitochondrial adenine nucleotide (AdN) content is regulated through the Ca2+-activated, electroneutral ATP-Mg/Pi carrier (APC). The APC is a protein in the mitochondrial carrier super family that localizes to the inner mitochondrial membrane (IMM). It is known to modulate a number of processes that depend on mitochondrial AdN content, such as gluconeogenesis, protein synthesis, and citrulline synthesis. Despite this critical role, a kinetic model of the underlying mechanism has not been developed and validated . Here, a biophysical model of the APC is developed that is thermodynamically balanced and accurately reproduces a number of reported data sets from isolated rat liver and rat kidney mitochondria. The model is based on an ordered bi-bi mechanism for hetero-exchange of ATP and Pi and also includes homo-exchanges of ATP and Pi to explain both the initial rate and time course data on ATP and Pi transport via the APC. The model invokes seven kinetic parameters regarding the APC mechanism and three parameters related to matrix pH regulation by external Pi. These parameters are estimated based on nineteen independent data curves; the estimated parameters are validated using six additional data curves. The model takes into account the effects of pH, Mg2+ and Ca2+ on ATP and Pi transport via the APC and supports the conclusion that the pH gradient across the IMM serves as the primary driving force for AdN uptake or efflux. Moreover, computer simulations demonstrate that extra-matrix Ca2+ modulates the turnover rate of the APC and not the binding affinity of ATP, as previously suggested.
[ { "type": "R", "before": "corroborated", "after": "validated", "start_char_pos": 506, "end_char_pos": 518 }, { "type": "R", "before": "corroborated", "after": "validated", "start_char_pos": 1180, "end_char_pos": 1192 } ]
[ 0, 127, 248, 406, 520, 715, 936, 1076, 1150, 1226, 1450 ]
1207.0233
1
For any exponential L\'evy model whose diffusion component is nonzero\E , we provide an exact series representation for the implied volatility of a European call option. Numerical examples are provided .
For any strictly positive martingale S =\E^X for which X has an analytically tractable characteristic function , we provide an expansion for the implied volatility . This expansion is explicit in the sense that it involves no integrals, but only polynomials in \log(K/S_0). We illustrate the versatility of our expansion by computing the approximate implied volatility smile in three well-known martingale models: one finite activity exponential L\'evy model (Merton), one infinite activity exponential L\'evy model (Variance Gamma), and one stochastic volatility model (Heston). We show how this technique can be extended to compute approximate forward implied volatilities and we implement this extension in the Heston setting. Finally, we illustrate how our expansion can be used to perform a model-free calibration of the empirically observed implied volatility surface .
[ { "type": "R", "before": "exponential L\\'evy model whose diffusion component is nonzero", "after": "strictly positive martingale S =", "start_char_pos": 8, "end_char_pos": 69 }, { "type": "A", "before": null, "after": "^X for which X has an analytically tractable characteristic function", "start_char_pos": 71, "end_char_pos": 71 }, { "type": "R", "before": "exact series representation", "after": "expansion", "start_char_pos": 88, "end_char_pos": 115 }, { "type": "R", "before": "of a European call option. Numerical examples are provided", "after": ". This expansion is explicit in the sense that it involves no integrals, but only polynomials in \\log(K/S_0). We illustrate the versatility of our expansion by computing the approximate implied volatility smile in three well-known martingale models: one finite activity exponential L\\'evy model (Merton), one infinite activity exponential L\\'evy model (Variance Gamma), and one stochastic volatility model (Heston). We show how this technique can be extended to compute approximate forward implied volatilities and we implement this extension in the Heston setting. Finally, we illustrate how our expansion can be used to perform a model-free calibration of the empirically observed implied volatility surface", "start_char_pos": 143, "end_char_pos": 201 } ]
[ 0, 169 ]
1207.0233
2
For any strictly positive martingale S = %DIFDELCMD < \E%%% ^X for which X has an analytically tractable characteristic function, we provide an expansion for the implied volatility. This expansion is explicit in the sense that it involves no integrals, but only polynomials in \log(K/S_0) . We illustrate the versatility of our expansion by computing the approximate implied volatility smile in three well-known martingale models: one finite activity exponential L\'evy model (Merton), one infinite activity exponential L\'evy model (Variance Gamma), and one stochastic volatility model (Heston) . We show how this technique can be extended to compute approximate forward implied volatilities and we implement this extension in the Heston setting . Finally, we illustrate how our expansion can be used to perform a model-free calibration of the empirically observed implied volatility surface.
For any strictly positive martingale S = %DIFDELCMD < \E%%% \exp(X) for which X has a characteristic function, we provide an expansion for the implied volatility. This expansion is explicit in the sense that it involves no integrals, but only polynomials in the log strike . We illustrate the versatility of our expansion by computing the approximate implied volatility smile in three well-known martingale models: one finite activity exponential L\'evy model (Merton), one infinite activity exponential L\'evy model (Variance Gamma), and one stochastic volatility model (Heston) . Finally, we illustrate how our expansion can be used to perform a model-free calibration of the empirically observed implied volatility surface.
[ { "type": "R", "before": "^X", "after": "\\exp(X)", "start_char_pos": 60, "end_char_pos": 62 }, { "type": "R", "before": "an analytically tractable", "after": "a", "start_char_pos": 79, "end_char_pos": 104 }, { "type": "R", "before": "\\log(K/S_0)", "after": "the log strike", "start_char_pos": 277, "end_char_pos": 288 }, { "type": "D", "before": ". We show how this technique can be extended to compute approximate forward implied volatilities and we implement this extension in the Heston setting", "after": null, "start_char_pos": 596, "end_char_pos": 746 } ]
[ 0, 181, 597, 748 ]
1207.0750
1
We propose a CEV-like local stochastic volatility model that that fixes one problem with the CEV model; namely, when the elasticity of variance is negative our local volatility function does not go to zero as the value of the underlying goes to infinity . Within our framework, we obtain an explicit expressions for both (i) the price of any European option and (ii) the implied volatility smile .
We introduce a class of local stochastic volatility models . Within our framework, we obtain an expression for both (i) the price of any European option and (ii) the induced implied volatility smile . To illustrate our method, we perform specific computations for a CEV-like model .
[ { "type": "R", "before": "propose a CEV-like", "after": "introduce a class of", "start_char_pos": 3, "end_char_pos": 21 }, { "type": "R", "before": "model that that fixes one problem with the CEV model; namely, when the elasticity of variance is negative our local volatility function does not go to zero as the value of the underlying goes to infinity", "after": "models", "start_char_pos": 50, "end_char_pos": 253 }, { "type": "R", "before": "explicit expressions", "after": "expression", "start_char_pos": 291, "end_char_pos": 311 }, { "type": "A", "before": null, "after": "induced", "start_char_pos": 371, "end_char_pos": 371 }, { "type": "A", "before": null, "after": ". To illustrate our method, we perform specific computations for a CEV-like model", "start_char_pos": 397, "end_char_pos": 397 } ]
[ 0, 103, 255 ]
1207.0750
2
We introduce a class of local stochastic volatility models. Within our framework, we obtain an expression for both (i) the price of any European option and (ii) the induced implied volatility smile. To illustrate our method , we perform specific computations for a CEV-like model .
We introduce a new class of local volatility models. Within this framework, we obtain expressions for both (i) the price of any European option and (ii) the induced implied volatility smile. As an illustration of our framework , we perform specific pricing and implied volatility computations for a CEV-like example. Numerical examples are provided .
[ { "type": "A", "before": null, "after": "new", "start_char_pos": 15, "end_char_pos": 15 }, { "type": "D", "before": "stochastic", "after": null, "start_char_pos": 31, "end_char_pos": 41 }, { "type": "R", "before": "our", "after": "this", "start_char_pos": 68, "end_char_pos": 71 }, { "type": "R", "before": "an expression", "after": "expressions", "start_char_pos": 93, "end_char_pos": 106 }, { "type": "R", "before": "To illustrate our method", "after": "As an illustration of our framework", "start_char_pos": 200, "end_char_pos": 224 }, { "type": "A", "before": null, "after": "pricing and implied volatility", "start_char_pos": 247, "end_char_pos": 247 }, { "type": "R", "before": "model", "after": "example. Numerical examples are provided", "start_char_pos": 276, "end_char_pos": 281 } ]
[ 0, 60, 199 ]
1207.0843
1
We analyse the behaviour of the implied volatility smile for options close to expiry in the exponential L\'evy class of asset price models with jumps. We introduce a new renormalisation of the strike variable with the property that the implied volatility converges to a non-constant limiting shape, which is a function of both the diffusion component of the process and the jump activity (Blumenthal-Getoor) index of the jump component. Our limiting implied volatility formula relates the jump activity of the underlying asset price process to the short end of the implied volatility surface and sheds new light on the difference between finite and infinite variation jumps from the point of view of option prices. For infinite variation processes , the wings of the limiting smile are determined by the jump activity indices of the positive and negative jumps, whereas in the finite variation case , the wings have a constant model-independent slope. This makes infinite variation L\'evy models better suited for calibration based on short-maturity option prices.
We analyse the behaviour of the implied volatility smile for options close to expiry in the exponential L\'evy class of asset price models with jumps. We introduce a new renormalisation of the strike variable with the property that the implied volatility converges to a non-constant limiting shape, which is a function of both the diffusion component of the process and the jump activity (Blumenthal-Getoor) index of the jump component. Our limiting implied volatility formula relates the jump activity of the underlying asset price process to the short end of the implied volatility surface and sheds new light on the difference between finite and infinite variation jumps from the viewpoint of option prices: in the latter , the wings of the limiting smile are determined by the jump activity indices of the positive and negative jumps, whereas in the former , the wings have a constant model-independent slope. This result gives a theoretical justification for the preference of the infinite variation L\'evy models over the finite variation ones in the calibration based on short-maturity option prices.
[ { "type": "R", "before": "point of view of option prices. For infinite variation processes", "after": "viewpoint of option prices: in the latter", "start_char_pos": 683, "end_char_pos": 747 }, { "type": "R", "before": "finite variation case", "after": "former", "start_char_pos": 877, "end_char_pos": 898 }, { "type": "R", "before": "makes", "after": "result gives a theoretical justification for the preference of the", "start_char_pos": 957, "end_char_pos": 962 }, { "type": "R", "before": "better suited for", "after": "over the finite variation ones in the", "start_char_pos": 996, "end_char_pos": 1013 } ]
[ 0, 150, 436, 714, 951 ]
1207.1630
1
We propose a hybrid CEV Exponential L\'evy model whose volatility, L\'evy measure, and killing rate are all state-dependence. In this setting we find a closed form solution for the price of any European-style option. Additionally, for a certain sub-class of models, we find a closed-form expression for the implied volatility smile .
We propose a class of equity models whose volatility, L\'evy measure, and killing rate all have local stochastic state-dependence. In this framework we find a closed form solution for the price of any European-style option. Additionally, for a certain sub-class of models, we find an exact expression for the induced implied volatility smile . To illustrate our framework, we perform specific computations for hybrid CEV/L\'evy model (which we christen as the "C\'EV" model) .
[ { "type": "R", "before": "hybrid CEV Exponential L\\'evy model", "after": "class of equity models", "start_char_pos": 13, "end_char_pos": 48 }, { "type": "R", "before": "are all", "after": "all have local stochastic", "start_char_pos": 100, "end_char_pos": 107 }, { "type": "R", "before": "setting", "after": "framework", "start_char_pos": 134, "end_char_pos": 141 }, { "type": "R", "before": "a closed-form", "after": "an exact", "start_char_pos": 274, "end_char_pos": 287 }, { "type": "A", "before": null, "after": "induced", "start_char_pos": 307, "end_char_pos": 307 }, { "type": "A", "before": null, "after": ". To illustrate our framework, we perform specific computations for hybrid CEV/L\\'evy model (which we christen as the \"C\\'EV\" model)", "start_char_pos": 333, "end_char_pos": 333 } ]
[ 0, 125, 216 ]
1207.1630
2
We propose a class of equity models whose volatility, L\'evy measure , and killing rate all have local stochastic state-dependence. In this framework we find a closed form solution for the price of any European-style option. Additionally, for a certain sub-class of models, we find an exact expression for the induced implied volatility smile. To illustrate our framework, we perform specific computations for hybrid CEV/L\'evy model (which we christen as the "C\'EV" model) .
We consider a class of assets whose risk-neutral pricing dynamics are described by an exponential L\'evy-type process subject to default. The class of processes we consider features locally-dependent drift, diffusion and default-intensity as well as a locally-dependent L\'evy measure . Using techniques from regular perturbation theory and Fourier analysis, we derive a series expansion for the price of a European-style option. We also provide precise conditions under which this series expansion converges to the exact price. Additionally, for a certain subclass of assets in our modeling framework, we derive an expansion for the implied volatility induced by our option pricing formula. The implied volatility expansion is exact within its radius of convergence. As an example of our framework, we propose a class of CEV-like L\'evy-type models. Within this class, approximate option prices can be computed by a single Fourier integral and approximate implied volatilities are explicit (i.e., no integration is required). Furthermore, the class of CEV-like L\'evy-type models is shown to provide a tight fit to the implied volatility surface of S P500 index options .
[ { "type": "R", "before": "propose", "after": "consider", "start_char_pos": 3, "end_char_pos": 10 }, { "type": "R", "before": "equity models whose volatility,", "after": "assets whose risk-neutral pricing dynamics are described by an exponential L\\'evy-type process subject to default. The class of processes we consider features locally-dependent drift, diffusion and default-intensity as well as a locally-dependent", "start_char_pos": 22, "end_char_pos": 53 }, { "type": "R", "before": ", and killing rate all have local stochastic state-dependence. In this framework we find a closed form solution", "after": ". Using techniques from regular perturbation theory and Fourier analysis, we derive a series expansion", "start_char_pos": 69, "end_char_pos": 180 }, { "type": "R", "before": "any", "after": "a", "start_char_pos": 198, "end_char_pos": 201 }, { "type": "A", "before": null, "after": "We also provide precise conditions under which this series expansion converges to the exact price.", "start_char_pos": 225, "end_char_pos": 225 }, { "type": "R", "before": "sub-class of models, we find an exact expression for the induced implied volatility smile. To illustrate", "after": "subclass of assets in our modeling framework, we derive an expansion for the implied volatility induced by our option pricing formula. The implied volatility expansion is exact within its radius of convergence. As an example of", "start_char_pos": 254, "end_char_pos": 358 }, { "type": "R", "before": "perform specific computations for hybrid CEV/L\\'evy model (which we christen as the \"C\\'EV\" model)", "after": "propose a class of CEV-like L\\'evy-type models. Within this class, approximate option prices can be computed by a single Fourier integral and approximate implied volatilities are explicit (i.e., no integration is required). Furthermore, the class of CEV-like L\\'evy-type models is shown to provide a tight fit to the implied volatility surface of S", "start_char_pos": 377, "end_char_pos": 475 }, { "type": "A", "before": null, "after": "P500 index options", "start_char_pos": 476, "end_char_pos": 476 } ]
[ 0, 131, 224, 344 ]
1207.1631
1
The linear noise approximation is commonly used to obtain intrinsic noise statistics such as Fano factors and coefficients of variation for biochemical networks. These estimates are accurate for networks with large numbers of molecules. However it is well known that many biochemical networks are characterized by at least one species with a small number of molecules. We here describe modifications to the software intrinsic Noise Analyzer (iNA) which enable it to accurately compute noise statistics over wide ranges of molecule numbers. This is achieved by calculating the next order corrections to the linear noise approximation's estimates of variance and covariance of concentration fluctuations. The efficiency of the methods is significantly improved by automated just-in-time compilation using the LLVM framework leading to a fluctuation analysis which typically outperforms that obtained by means of exact stochastic simulations. iNA is hence particularly well suited for the needs of the computational biology community.
The linear noise approximation is commonly used to obtain intrinsic noise statistics for biochemical networks. These estimates are accurate for networks with large numbers of molecules. However it is well known that many biochemical networks are characterized by at least one species with a small number of molecules. We here describe version 0.3 of the software intrinsic Noise Analyzer (iNA) which allows for accurate computation of noise statistics over wide ranges of molecule numbers. This is achieved by calculating the next order corrections to the linear noise approximation's estimates of variance and covariance of concentration fluctuations. The efficiency of the methods is significantly improved by automated just-in-time compilation using the LLVM framework leading to a fluctuation analysis which typically outperforms that obtained by means of exact stochastic simulations. iNA is hence particularly well suited for the needs of the computational biology community.
[ { "type": "D", "before": "such as Fano factors and coefficients of variation", "after": null, "start_char_pos": 85, "end_char_pos": 135 }, { "type": "R", "before": "modifications to", "after": "version 0.3 of", "start_char_pos": 386, "end_char_pos": 402 }, { "type": "R", "before": "enable it to accurately compute", "after": "allows for accurate computation of", "start_char_pos": 453, "end_char_pos": 484 } ]
[ 0, 161, 236, 368, 539, 702 ]
1207.1759
1
In the context of a general continuous financial market model, we study whether the additional information associated with an honest time gives rise to arbitrage . By relying on the theory of progressive enlargement of filtrations, we explicitly show that arbitrage profits can never be realized strictly before an honest time, while classical arbitrage opportunities can be realized exactly at a honest time and stronger arbitrages of the first kind always exist after an honest time . We carefully study the behavior of local martingale deflators and consider no-arbitrage-type conditions weaker than NFLVR.
In the context of a general continuous financial market model, we study whether the additional information associated with an honest time gives rise to arbitrage profits . By relying on the theory of progressive enlargement of filtrations, we explicitly show that no kind of arbitrage profit can ever be realised strictly before an honest time, while classical arbitrage opportunities can be realised exactly at an honest time as well as after an honest time. Moreover, stronger arbitrages of the first kind can only be obtained by trading as soon as an honest time occurs . We carefully study the behavior of local martingale deflators and consider no-arbitrage-type conditions weaker than NFLVR.
[ { "type": "A", "before": null, "after": "profits", "start_char_pos": 162, "end_char_pos": 162 }, { "type": "R", "before": "arbitrage profits can never be realized", "after": "no kind of arbitrage profit can ever be realised", "start_char_pos": 257, "end_char_pos": 296 }, { "type": "R", "before": "realized exactly at a honest time and", "after": "realised exactly at an honest time as well as after an honest time. Moreover,", "start_char_pos": 376, "end_char_pos": 413 }, { "type": "R", "before": "always exist after", "after": "can only be obtained by trading as soon as", "start_char_pos": 452, "end_char_pos": 470 }, { "type": "A", "before": null, "after": "occurs", "start_char_pos": 486, "end_char_pos": 486 } ]
[ 0, 164, 488 ]
1207.1804
1
Inspired by biological processes such as protein synthesis and motor protein transport, we intro- duce the concept of localized dynamical sites coupled to a driven lattice gas dynamics. By mimicking a local structural change in the lattice, these sites interact with particles and strongly influence their transport phenomenology. We analyze the case of a single dynamical site whose dynamics is cou- pled with the density of particles on the lattice; we investigate the phenomenology of the model and how the current-density relationship is affected by the dynamical defect. Crucially, we find a novel dynamical regime characterized by an intermittent current and subject to severe finite-size effects that can enhance the transport at high particle densities. We describe and rationalize the variety of regimes via refined mean-field approaches .
Many transport processes in nature take place on substrates, often considered as unidimensional lanes. These unidimensional substrates are typically non-static: affected by a fluctuating environment, they can undergo conformational changes. This is particularly true in biological cells, where the state of the substrate is often coupled to the active motion of macromolecular complexes, such as motor proteins on microtubules or ribosomes on mRNAs, causing new interesting phenomena. Inspired by biological processes such as protein synthesis by ribosomes and motor protein transport, we introduce the concept of localized dynamical sites coupled to a driven lattice gas dynamics. We investigate the phenomenology of transport in the presence of dynamical defects and find a novel regime characterized by an intermittent current and subject to severe finite-size effects . Our results demonstrate the impact of the regulatory role of the dynamical defects in transport, not only in biology but also in more general contexts .
[ { "type": "A", "before": null, "after": "Many transport processes in nature take place on substrates, often considered as unidimensional lanes. These unidimensional substrates are typically non-static: affected by a fluctuating environment, they can undergo conformational changes. This is particularly true in biological cells, where the state of the substrate is often coupled to the active motion of macromolecular complexes, such as motor proteins on microtubules or ribosomes on mRNAs, causing new interesting phenomena.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "A", "before": null, "after": "by ribosomes", "start_char_pos": 60, "end_char_pos": 60 }, { "type": "R", "before": "intro- duce", "after": "introduce", "start_char_pos": 93, "end_char_pos": 104 }, { "type": "R", "before": "By mimicking a local structural change in the lattice, these sites interact with particles and strongly influence their transport phenomenology. We analyze the case of a single dynamical site whose dynamics is cou- pled with the density of particles on the lattice; we", "after": "We", "start_char_pos": 188, "end_char_pos": 456 }, { "type": "R", "before": "the model and how the current-density relationship is affected by the dynamical defect. Crucially, we", "after": "transport in the presence of dynamical defects and", "start_char_pos": 490, "end_char_pos": 591 }, { "type": "D", "before": "dynamical", "after": null, "start_char_pos": 605, "end_char_pos": 614 }, { "type": "R", "before": "that can enhance the transport at high particle densities. We describe and rationalize the variety of regimes via refined mean-field approaches", "after": ". Our results demonstrate the impact of the regulatory role of the dynamical defects in transport, not only in biology but also in more general contexts", "start_char_pos": 705, "end_char_pos": 848 } ]
[ 0, 187, 332, 453, 577, 763 ]
1207.1842
1
This paper examines the adaptive market hypothesis of Lo (2004, 2005) using the Ito and Noda's (2012) time-varying autoregressive model in Japan. As shown in Ito and Noda (2012), their degree of market efficiency gives us a more precise measurement of market efficiency than conventional moving window methods. The empirical results shows that the AMH of Lo (2004, 2005) is supported in the Japanese grown-up stock market .
This paper examines the adaptive market hypothesis of Lo (2004, 2005) using the Ito and Noda's (2012) non-Bayesian time-varying AR model in Japan. As shown in Ito and Noda (2012), their degree of market efficiency gives us a more precise measurement of market efficiency than conventional moving window methods. The empirical results supports the AMH of Lo (2004, 2005) for data of the more quali?ed stock market in Japan .
[ { "type": "A", "before": null, "after": "non-Bayesian", "start_char_pos": 102, "end_char_pos": 102 }, { "type": "R", "before": "autoregressive", "after": "AR", "start_char_pos": 116, "end_char_pos": 130 }, { "type": "R", "before": "shows that", "after": "supports", "start_char_pos": 334, "end_char_pos": 344 }, { "type": "R", "before": "is supported in the Japanese grown-up stock market", "after": "for data of the more quali?ed stock market in Japan", "start_char_pos": 372, "end_char_pos": 422 } ]
[ 0, 146, 311 ]
1207.1842
2
This paper examines the adaptive market hypothesis of Lo (2004 , 2005) using the Ito and Noda's (2012) non-Bayesian time-varying AR model in Japan. As shown in Ito and Noda (2012), their degree of market efficiency gives us a more precise measurement of market efficiency than conventional moving window methods . The empirical results supports the AMH of Lo (2004 , 2005) for data of the more quali?ed stock market in Japan.
This study examines Lo's (2004 ) adaptive market hypothesis (AMH) in Japanese stock markets (TOPIX and TSE2). In particular, we measure the degree of market efficiency by using the non-Bayesian time-varying model approach of Ito et al. (2014, 2015), which provides a more accurate measurement of market efficiency than conventional statistical inferences (i.e., statistical tests using the moving window method) . The empirical results show that (1) market efficiency changes over time in the TOPIX and TSE2, (2) the market efficiency of the TSE2 is lower than that of the TOPIX in most periods, and (3) the market efficiency of the TOPIX has evolved since the bursting of the bubble economy in the early 1990s, but that of the TSE2 has not. Therefore, we conclude that the empirical results support Lo's (2004 ) AMH for data on the more qualified stock market in Japan.
[ { "type": "R", "before": "paper examines the adaptive market hypothesis of Lo", "after": "study examines Lo's", "start_char_pos": 5, "end_char_pos": 56 }, { "type": "R", "before": ", 2005) using the Ito and Noda's (2012) non-Bayesian time-varying AR model in Japan. As shown in Ito and Noda (2012), their", "after": ") adaptive market hypothesis (AMH) in Japanese stock markets (TOPIX and TSE2). In particular, we measure the", "start_char_pos": 63, "end_char_pos": 186 }, { "type": "R", "before": "gives us a more precise", "after": "by using the non-Bayesian time-varying model approach of Ito et al. (2014, 2015), which provides a more accurate", "start_char_pos": 215, "end_char_pos": 238 }, { "type": "R", "before": "moving window methods", "after": "statistical inferences (i.e., statistical tests using the moving window method)", "start_char_pos": 290, "end_char_pos": 311 }, { "type": "R", "before": "supports the AMH of Lo", "after": "show that (1) market efficiency changes over time in the TOPIX and TSE2, (2) the market efficiency of the TSE2 is lower than that of the TOPIX in most periods, and (3) the market efficiency of the TOPIX has evolved since the bursting of the bubble economy in the early 1990s, but that of the TSE2 has not. Therefore, we conclude that the empirical results support Lo's", "start_char_pos": 336, "end_char_pos": 358 }, { "type": "R", "before": ", 2005) for data of the more quali?ed", "after": ") AMH for data on the more qualified", "start_char_pos": 365, "end_char_pos": 402 } ]
[ 0, 147, 313 ]
1207.1842
3
This study examines Lo's (2004) adaptive market hypothesis (AMH) in Japanese stock markets (TOPIX and TSE2). In particular, we measure the degree of market efficiency by using the non-Bayesian time-varying model approach of Ito et al. (2014, 2015), which provides a more accurate measurement of market efficiency than conventional statistical inferences (i.e., statistical tests using the moving window method). The empirical results show that (1) market efficiency changes over time in the TOPIX and TSE2 , (2) the market efficiency of the TSE2 is lower than that of the TOPIX in most periods, and (3) the market efficiency of the TOPIX has evolved since the bursting of the bubble economy in the early 1990s , but that of the TSE2 has not. Therefore, we conclude that the empirical results support Lo's (2004) AMH for data on the more qualified stock market in Japan.
This study examines the adaptive market hypothesis (AMH) in Japanese stock markets (TOPIX and TSE2). In particular, we measure the degree of market efficiency by using a time-varying model approach . The empirical results show that (1) the degree of market efficiency changes over time in the two markets , (2) the level of market efficiency of the TSE2 is lower than that of the TOPIX in most periods, and (3) the market efficiency of the TOPIX has evolved , but that of the TSE2 has not. We conclude that the results support the AMH for the more qualified stock market in Japan.
[ { "type": "R", "before": "Lo's (2004)", "after": "the", "start_char_pos": 20, "end_char_pos": 31 }, { "type": "R", "before": "the non-Bayesian", "after": "a", "start_char_pos": 176, "end_char_pos": 192 }, { "type": "R", "before": "of Ito et al. (2014, 2015), which provides a more accurate measurement of market efficiency than conventional statistical inferences (i.e., statistical tests using the moving window method).", "after": ".", "start_char_pos": 221, "end_char_pos": 411 }, { "type": "A", "before": null, "after": "the degree of", "start_char_pos": 448, "end_char_pos": 448 }, { "type": "R", "before": "TOPIX and TSE2", "after": "two markets", "start_char_pos": 492, "end_char_pos": 506 }, { "type": "A", "before": null, "after": "level of", "start_char_pos": 517, "end_char_pos": 517 }, { "type": "D", "before": "since the bursting of the bubble economy in the early 1990s", "after": null, "start_char_pos": 652, "end_char_pos": 711 }, { "type": "R", "before": "Therefore, we", "after": "We", "start_char_pos": 744, "end_char_pos": 757 }, { "type": "R", "before": "empirical results support Lo's (2004) AMH for data on", "after": "results support the AMH for", "start_char_pos": 776, "end_char_pos": 829 } ]
[ 0, 108, 411, 743 ]
1207.2316
1
We illustrate a problem in the self-financing condition used in the papers "Funding beyond discounting: collateral agreements and derivatives pricing" (Risk Magazine, February 2010) and "Partial Differential Equation Representations of Derivatives with Counterparty Risk and Funding Costs" (The Journal of Credit Risk, 2011). These papers state an erroneous self-financing condition. In the first paper, this is equivalent to assuming that the equity position is self-financing on its own and without including the cash position. In the second paper, this is equivalent to assuming that a subportfolio is self-financing on its own, rather than the whole portfolio. The error in the first paper is avoided when clearly distinguishing between price processes, dividend processes and gain processes. We present an outline of the correct proof , clarifying the structure of the relevant funding accounts, and show that the final result in "Funding beyond discounting" is correct, even if the self-financing condition used in the proof is not.
We illustrate a problem in the self-financing condition used in the papers "Funding beyond discounting: collateral agreements and derivatives pricing" (Risk Magazine, February 2010) and "Partial Differential Equation Representations of Derivatives with Counterparty Risk and Funding Costs" (The Journal of Credit Risk, 2011). These papers state an erroneous self-financing condition. In the first paper, this is equivalent to assuming that the equity position is self-financing on its own and without including the cash position. In the second paper, this is equivalent to assuming that a subportfolio is self-financing on its own, rather than the whole portfolio. The error in the first paper is avoided when clearly distinguishing between price processes, dividend processes and gain processes. We present an outline of the derivation that yields the correct statement of the self-financing condition , clarifying the structure of the relevant funding accounts, and show that the final result in "Funding beyond discounting" is correct, even if the self-financing condition stated is not.
[ { "type": "R", "before": "correct proof", "after": "derivation that yields the correct statement of the self-financing condition", "start_char_pos": 826, "end_char_pos": 839 }, { "type": "R", "before": "used in the proof", "after": "stated", "start_char_pos": 1013, "end_char_pos": 1030 } ]
[ 0, 325, 383, 529, 664, 796 ]
1207.3137
1
Biological structure and function depend on complex regulatory interactions between many genes. A wealth of gene expression data is available from high-throughput genome-wide measurement technologies, but effective gene regulatory network inference methods are still needed. Model-based methods founded on quantitative descriptions of gene regulation are among the most promising, but many such methods still rely on ad hoc inference approaches and lack experimental interpretability. We propose an experimental design and develop an associated statistical method for learning a quantitative, interpretable, predictive, biophysics-based ordinary differential equation model for gene regulation. We fit the model parameters using gene expression measurements from perturbed steady-states of the system, like those following overexpression or knockdown experiments. Although the original model is nonlinear, our design allows us to transform it into a convex optimization problem by restricting attention to steady-states and using the lasso for parameter selection. Here, we describe the model and inference algorithm and apply them to a synthetic six-gene system, demonstrating that the model is detailed and flexible enough to account for activation and repression as well as synergistic and self-regulation, and that the algorithm can efficiently and accurately recover the parameters used to generate the data.
Biological structure and function depend on complex regulatory interactions between many genes. A wealth of gene expression data is available from high-throughput genome-wide measurement technologies, but effective gene regulatory network inference methods are still needed. Model-based methods founded on quantitative descriptions of gene regulation are among the most promising, but many such methods rely on simple, local models or on ad hoc inference approaches lacking experimental interpretability. We propose an experimental design and develop an associated statistical method for inferring a gene network by learning a standard quantitative, interpretable, predictive, biophysics-based ordinary differential equation model of gene regulation. We fit the model parameters using gene expression measurements from perturbed steady-states of the system, like those following overexpression or knockdown experiments. Although the original model is nonlinear, our design allows us to transform it into a convex optimization problem by restricting attention to steady-states and using the lasso for parameter selection. Here, we describe the model and inference algorithm and apply them to a synthetic six-gene system, demonstrating that the model is detailed and flexible enough to account for activation and repression as well as synergistic and self-regulation, and the algorithm can efficiently and accurately recover the parameters used to generate the data.
[ { "type": "R", "before": "still rely on", "after": "rely on simple, local models or on", "start_char_pos": 403, "end_char_pos": 416 }, { "type": "R", "before": "and lack", "after": "lacking", "start_char_pos": 445, "end_char_pos": 453 }, { "type": "R", "before": "learning a", "after": "inferring a gene network by learning a standard", "start_char_pos": 568, "end_char_pos": 578 }, { "type": "R", "before": "for", "after": "of", "start_char_pos": 674, "end_char_pos": 677 }, { "type": "D", "before": "that", "after": null, "start_char_pos": 1314, "end_char_pos": 1318 } ]
[ 0, 95, 274, 484, 694, 863, 1064 ]
1207.3464
1
This paper is dedicated to the consistency of systemic risk measures with respect to stochastic dependence. It compares two alternative notions of Conditional Value-at-Risk (CoVaR) available in the current literature. These notions are both based on the conditional distribution of a random variable Y given a stress event for a random variable X, but they use different types of stress events. We derive representations of these alternative CoVaR notions in terms of copulas, study their general dependence consistency and compare their performance in several stochastic models. Our central finding is that conditioning on X>=VaR_\alpha(X) gives a much better response to dependence between X and Y than conditioning on X=VaR_\alpha(X). The theoretical results relate the dependence consistency of CoVaR using conditioning on X>=VaR_\alpha(X) to well established results on concordance ordering of multivariate distributions or their copulas. These results also apply to some other systemic risk measures, such as the Marginal Expected Shortfall (MES) and the Systemic Impact Index (SII). The counterexamples for CoVaR based on the stress event X=VaR_\alpha(X) include inconsistency with respect to correlation in the bivariate Gaussian model .
This paper is dedicated to the consistency of systemic risk measures with respect to stochastic dependence. It compares two alternative notions of Conditional Value-at-Risk (CoVaR) available in the current literature. These notions are both based on the conditional distribution of a random variable Y given a stress event for a random variable X, but they use different types of stress events. We derive representations of these alternative CoVaR notions in terms of copulas, study their general dependence consistency and compare their performance in several stochastic models. Our central finding is that conditioning on X>=VaR_\alpha(X) gives a much better response to dependence between X and Y than conditioning on X=VaR_\alpha(X). We prove general results that relate the dependence consistency of CoVaR using conditioning on X>=VaR_\alpha(X) to well established results on concordance ordering of multivariate distributions or their copulas. These results also apply to some other systemic risk measures, such as the Marginal Expected Shortfall (MES) and the Systemic Impact Index (SII). We provide counterexamples showing that CoVaR based on the stress event X=VaR_\alpha(X) is not dependence consistent. In particular, if (X,Y) is bivariate normal, then CoVaR based on X=VaR_\alpha(X) is not an increasing function of the correlation parameter. Similar issues arise in the bivariate t model and in the model with t margins and a Gumbel copula. In all these cases, CoVaR based on X>=VaR_\alpha(X) is an increasing function of the dependence parameter .
[ { "type": "R", "before": "The theoretical results", "after": "We prove general results that", "start_char_pos": 738, "end_char_pos": 761 }, { "type": "R", "before": "The counterexamples for", "after": "We provide counterexamples showing that", "start_char_pos": 1090, "end_char_pos": 1113 }, { "type": "R", "before": "include inconsistency with respect to correlation", "after": "is not dependence consistent. In particular, if (X,Y) is bivariate normal, then CoVaR based on X=VaR_\\alpha(X) is not an increasing function of the correlation parameter. Similar issues arise", "start_char_pos": 1162, "end_char_pos": 1211 }, { "type": "R", "before": "Gaussian model", "after": "t model and in the model with t margins and a Gumbel copula. In all these cases, CoVaR based on X>=VaR_\\alpha(X) is an increasing function of the dependence parameter", "start_char_pos": 1229, "end_char_pos": 1243 } ]
[ 0, 107, 217, 394, 579, 737, 943, 1089 ]
1207.4309
1
Levy copulas are the most natural concept to capture jump dependence in multivariate Levy processes. They translate the intuition and many features of the copula concept into a time series setting. A challenge faced by both, distributional and Levy copulas, is to find flexible but still applicable models for higher dimensions. To overcome this problem, the concept of pair copula constructions has been successfully applied to distributional copulas. In this paper we develop the pair construction for Levy copulas (PLCC). Similar to pair constructions of distributional copulas, the pair construction of a d-dimensional Levy copula consists of d(d-1)/2 bivariate dependence functions. We show that only d-1 of these bivariate functions are Levy copulas, whereas the remaining functions are distributional copulas. Since there are no restrictions concerning the choice of the copulas, the proposed pair construction adds the desired flexibility to Levy copula models. We provide detailed estimation and simulation algorithms and apply the pair construction in a simulation study.
Levy copulas are the most general concept to capture jump dependence in multivariate Levy processes. They translate the intuition and many features of the copula concept into a time series setting. A challenge faced by both, distributional and Levy copulas, is to find flexible but still applicable models for higher dimensions. To overcome this problem, the concept of pair copula constructions has been successfully applied to distributional copulas. In this paper , we develop the pair construction for Levy copulas (PLCC). Similar to pair constructions of distributional copulas, the pair construction of a d-dimensional Levy copula consists of d(d-1)/2 bivariate dependence functions. We show that only d-1 of these bivariate functions are Levy copulas, whereas the remaining functions are distributional copulas. Since there are no restrictions concerning the choice of the copulas, the proposed pair construction adds the desired flexibility to Levy copula models. We discuss estimation and simulation in detail and apply the pair construction in a simulation study.
[ { "type": "R", "before": "natural", "after": "general", "start_char_pos": 26, "end_char_pos": 33 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 467, "end_char_pos": 467 }, { "type": "R", "before": "provide detailed", "after": "discuss", "start_char_pos": 974, "end_char_pos": 990 }, { "type": "R", "before": "algorithms", "after": "in detail", "start_char_pos": 1017, "end_char_pos": 1027 } ]
[ 0, 100, 197, 328, 452, 525, 688, 817, 970 ]
1207.4574
1
The directed polymerization of dendritic actin networks is an essential element of many biological processes, including cell migration. Different theoretical models for the interplay between the underlying processes of polymerization, capping and branching have resulted in conflicting predictions. One of the main reasons for this discrepancy is the assumption of a branching reaction which is first (autocatalytic) versus zeroth order in filament density . Here we introduce a unifying framework from which these two models emerge as limiting cases for low and high filament density, respectively . A smooth transition between these cases is found at intermediate conditions. We also derive a threshold for the capping rate, below which zeroth order characteristics are predicted to dominate the dynamics of the network for all accessible filament densities. For capping rates above this threshold, autocatalytic growth is predicted at sufficiently low filament density .
The directed polymerization of actin networks is an essential element of many biological processes, including cell migration. Different theoretical models considering the interplay between the underlying processes of polymerization, capping and branching have resulted in conflicting predictions. One of the main reasons for this discrepancy is the assumption of a branching reaction that is either first order (autocatalytic) or zeroth order in the number of existing filaments . Here we introduce a unifying framework from which the two established scenarios emerge as limiting cases for low and high filament number . A smooth transition between the two cases is found at intermediate conditions. We also derive a threshold for the capping rate, above which autocatalytic growth is predicted at sufficiently low filament number. Below the threshold, zeroth order characteristics are predicted to dominate the dynamics of the network for all accessible filament numbers. Together, this allows cells to grow stable actin networks over a large range of different conditions .
[ { "type": "D", "before": "dendritic", "after": null, "start_char_pos": 31, "end_char_pos": 40 }, { "type": "R", "before": "for", "after": "considering", "start_char_pos": 165, "end_char_pos": 168 }, { "type": "R", "before": "which is first", "after": "that is either first order", "start_char_pos": 386, "end_char_pos": 400 }, { "type": "R", "before": "versus", "after": "or", "start_char_pos": 417, "end_char_pos": 423 }, { "type": "R", "before": "filament density", "after": "the number of existing filaments", "start_char_pos": 440, "end_char_pos": 456 }, { "type": "R", "before": "these two models", "after": "the two established scenarios", "start_char_pos": 509, "end_char_pos": 525 }, { "type": "R", "before": "density, respectively", "after": "number", "start_char_pos": 577, "end_char_pos": 598 }, { "type": "R", "before": "these", "after": "the two", "start_char_pos": 629, "end_char_pos": 634 }, { "type": "R", "before": "below which", "after": "above which autocatalytic growth is predicted at sufficiently low filament number. Below the threshold,", "start_char_pos": 727, "end_char_pos": 738 }, { "type": "R", "before": "densities. For capping rates above this threshold, autocatalytic growth is predicted at sufficiently low filament density", "after": "numbers. Together, this allows cells to grow stable actin networks over a large range of different conditions", "start_char_pos": 850, "end_char_pos": 971 } ]
[ 0, 135, 298, 458, 600, 677, 860 ]
1207.4749
1
In this paper we ask whether arbitrage-free prices are obtained by utility maximization. This is found to be true for any given investor , provided that one considers the marginal utility-based prices relative to all initial endowments with finite utility .
In this paper we ask whether , given a stock market and an illiquid derivative, there exists arbitrage-free prices at which an utility-maximizing agent would always want to buy the derivative, irrespectively of his own initial endowment of derivatives and cash. We prove that this is false for any given investor if one considers all initial endowments with finite utility , and that it can instead be true if one restricts to the endowments in the interior. We show however how the endowments on the boundary can give rise to very odd phenomena; for example, an investor with such an endowment would choose not to trade in the derivative even at prices arbitrarily close to some arbitrage price .
[ { "type": "A", "before": null, "after": ", given a stock market and an illiquid derivative, there exists", "start_char_pos": 29, "end_char_pos": 29 }, { "type": "R", "before": "are obtained by utility maximization. This is found to be true", "after": "at which an utility-maximizing agent would always want to buy the derivative, irrespectively of his own initial endowment of derivatives and cash. We prove that this is false", "start_char_pos": 52, "end_char_pos": 114 }, { "type": "R", "before": ", provided that one considers the marginal utility-based prices relative to", "after": "if one considers", "start_char_pos": 138, "end_char_pos": 213 }, { "type": "A", "before": null, "after": ", and that it can instead be true if one restricts to the endowments in the interior. We show however how the endowments on the boundary can give rise to very odd phenomena; for example, an investor with such an endowment would choose not to trade in the derivative even at prices arbitrarily close to some arbitrage price", "start_char_pos": 257, "end_char_pos": 257 } ]
[ 0, 89 ]
1207.4860
1
This article proposes a method to quantify the structure of a bipartite graph with a network entropy from a statistical-physical point of view . The network entropy of a bipartite graph with random links is computed from numerical simulation . As an application of the proposed method to analyze collective behavior, the affairs in which participants quote and trade in the foreign exchange market are quantified. The network entropy per node is found to correspond to the macroeconomic situation. A finite mixture of Gumbel distributions is used to fit with the empirical distribution for the minimum values of network entropy per node in each week. The mixture of Gumbel distributions with parameter estimates by segmentation procedure is verified by Kolmogorov-Smirnov test. The finite mixture of Gumbel distributions can extrapolate the probability of extreme events that have never been observed .
This article proposes a method to quantify the structure of a bipartite graph using a network entropy per link . The network entropy of a bipartite graph with random links is calculated both numerically and theoretically . As an application of the proposed method to analyze collective behavior, the affairs in which participants quote and trade in the foreign exchange market are quantified. The network entropy per link is found to correspond to the macroeconomic situation. A finite mixture of Gumbel distributions is used to fit the empirical distribution for the minimum values of network entropy per link in each week. The mixture of Gumbel distributions with parameter estimates by segmentation procedure is verified by the Kolmogorov--Smirnov test. The finite mixture of Gumbel distributions that extrapolate the empirical probability of extreme events has explanatory power at a statistically significant level .
[ { "type": "R", "before": "with", "after": "using", "start_char_pos": 78, "end_char_pos": 82 }, { "type": "R", "before": "from a statistical-physical point of view", "after": "per link", "start_char_pos": 101, "end_char_pos": 142 }, { "type": "R", "before": "computed from numerical simulation", "after": "calculated both numerically and theoretically", "start_char_pos": 207, "end_char_pos": 241 }, { "type": "R", "before": "node", "after": "link", "start_char_pos": 438, "end_char_pos": 442 }, { "type": "D", "before": "with", "after": null, "start_char_pos": 554, "end_char_pos": 558 }, { "type": "R", "before": "node", "after": "link", "start_char_pos": 632, "end_char_pos": 636 }, { "type": "R", "before": "Kolmogorov-Smirnov", "after": "the Kolmogorov--Smirnov", "start_char_pos": 753, "end_char_pos": 771 }, { "type": "R", "before": "can extrapolate the", "after": "that extrapolate the empirical", "start_char_pos": 821, "end_char_pos": 840 }, { "type": "R", "before": "that have never been observed", "after": "has explanatory power at a statistically significant level", "start_char_pos": 871, "end_char_pos": 900 } ]
[ 0, 144, 243, 413, 497, 650, 777 ]
1207.5202
1
We report conducting probe atomic force microscopy (CP-AFM) measurements of electron transport (ETp), as a function of temperature and force, through monolayers of holo-azurin (holo-Az) and Cu-depleted Az (apo-Az)that retain only their tightly bound water, immobilized on gold surfaces. The changes in CP-AFM current-voltage ( I-V) curves for holo-Az and apo-Az, measured between 250 - 370K, are strikingly different. While ETp across holo-Az at low force (6 nN) is temperature-independent over the whole examined range, ETp across apo-Az is thermally activated , with calculated activation energy of 600%DIFDELCMD < \pm100 %%% meV. These results confirm our results of macroscopic contact area ETp measurements via holo- and apo-Az, as a function of temperature, where the crucial role of the Cu redox centre has been observed. While increasing the applied tip force from 6 to 12 nN did not significantly change the temperature dependence of ETp via apo-Az, ETp via holo-Az changed qualitatively, namely from temperature-independent at 6 nN to thermally activated at forces \geq 9 nN, suggesting changes in the protein structure upon increasing the applied force. The capability of exploring ETp by CP-AFM over a significant range of temperatures, with varying tip force to detect possible pressure-induced changes in the sample , significantly adds to the ability to study ETp through proteins and of using ETp to study proteins, with this approach .
The mechanisms of solid-state electron transport (ETp) via a monolayer of immobilized Azurin (Az) was examined by conducting probe atomic force microscopy (CP-AFM) , both as function of temperature ( 248 - 373K) and of applied tip force (6-12 nN). By varying both temperature and force in CP-AFM, we find that the ETp mechanism can alter with a change in the force applied via the tip to the proteins. As the applied force increases, ETp via Az changes from temperature-independent to thermally activated at high temperatures. This is in contrast to the Cu-depleted form of Az ( apo-Az %DIFDELCMD < \pm100 %%% ), where increasing the applied force causes only small quantitative effects, that fit with a decrease in electrode spacing. At low force ETp via holo-Az is temperature-independent and thermally activated via apo-Az. This observation agrees with macroscopic-scale measurements, thus confirming that the difference in ETp dependence on temperature between holo- and apo-Az is an inherent one that may reflect a difference in rigidity between the two forms. An important implication of these results, which depend on CP-AFM measurements over a significant temperature range , is that for ETp measurements on floppy systems, such as proteins, the stress applied to the sample should be kept constant or, at least controlled during measurement .
[ { "type": "R", "before": "We report", "after": "The mechanisms of solid-state electron transport (ETp) via a monolayer of immobilized Azurin (Az) was examined by", "start_char_pos": 0, "end_char_pos": 9 }, { "type": "R", "before": "measurements of electron transport (ETp), as a", "after": ", both as", "start_char_pos": 60, "end_char_pos": 106 }, { "type": "D", "before": "and force, through monolayers of holo-azurin (holo-Az) and Cu-depleted Az (apo-Az)that retain only their tightly bound water, immobilized on gold surfaces. The changes in CP-AFM current-voltage", "after": null, "start_char_pos": 131, "end_char_pos": 324 }, { "type": "R", "before": "I-V) curves for holo-Az and apo-Az, measured between 250", "after": "248", "start_char_pos": 327, "end_char_pos": 383 }, { "type": "R", "before": "370K, are strikingly different. While ETp across holo-Az at low force (6 nN) is", "after": "373K) and of applied tip force (6-12 nN). By varying both temperature and force in CP-AFM, we find that the ETp mechanism can alter with a change in the force applied via the tip to the proteins. As the applied force increases, ETp via Az changes from", "start_char_pos": 386, "end_char_pos": 465 }, { "type": "R", "before": "over the whole examined range, ETp across", "after": "to thermally activated at high temperatures. This is in contrast to the Cu-depleted form of Az (", "start_char_pos": 490, "end_char_pos": 531 }, { "type": "D", "before": "is thermally activated , with calculated activation energy of 600", "after": null, "start_char_pos": 539, "end_char_pos": 604 }, { "type": "R", "before": "meV. These results confirm our results of macroscopic contact area ETp measurements via holo- and apo-Az, as a function of temperature, where the crucial role of the Cu redox centre has been observed. While", "after": "), where", "start_char_pos": 628, "end_char_pos": 834 }, { "type": "R", "before": "tip force from 6 to 12 nN did not significantly change the temperature dependence of ETp via apo-Az,", "after": "force causes only small quantitative effects, that fit with a decrease in electrode spacing. At low force", "start_char_pos": 858, "end_char_pos": 958 }, { "type": "R", "before": "changed qualitatively, namely from", "after": "is", "start_char_pos": 975, "end_char_pos": 1009 }, { "type": "R", "before": "at 6 nN to thermally activated at forces \\geq 9 nN, suggesting changes in the protein structure upon increasing the applied force. The capability of exploring ETp by", "after": "and thermally activated via apo-Az. This observation agrees with macroscopic-scale measurements, thus confirming that the difference in ETp dependence on temperature between holo- and apo-Az is an inherent one that may reflect a difference in rigidity between the two forms. 
An important implication of these results, which depend on", "start_char_pos": 1034, "end_char_pos": 1199 }, { "type": "A", "before": null, "after": "measurements", "start_char_pos": 1207, "end_char_pos": 1207 }, { "type": "R", "before": "range of temperatures, with varying tip force to detect possible pressure-induced changes in the sample", "after": "temperature range", "start_char_pos": 1227, "end_char_pos": 1330 }, { "type": "R", "before": "significantly adds to the ability to study ETp through proteins and of using ETp to study proteins, with this approach", "after": "is that for ETp measurements on floppy systems, such as proteins, the stress applied to the sample should be kept constant or, at least controlled during measurement", "start_char_pos": 1333, "end_char_pos": 1451 } ]
[ 0, 286, 417, 632, 828, 1164 ]
1207.5506
1
We investigated the influences of molecular crowding on biochemical reaction processes on two-dimensional surfaces, using the model of signal-transduction processes on biomembranes. We performed simulations of the two-dimensional cell-based model, which describes the reactions and diffusions of the receptors, signaling proteins, target proteins , and crowders , on the cell membrane. The signaling proteins are activated by receptors and induce target proteins to unbind from the membrane. We found that the reaction rates of two-dimensional systems consistently exhibit a maximum at a high volume fraction of molecules , such that two molecules in the vicinity cannot easily exchange their positions . We further demonstrated that molecular crowding influences the hierarchical molecular distributions throughout the reaction process. The signaling proteins tend to surround the receptors, and the target proteins tend to become distributed around the signaling protein-receptor clusters. This distribution accelerates the receptor--signaling protein and signaling protein-target protein reactions. Thus, molecular crowding frequently enhances reactions on two-dimensional surfaces, but restricts reactions in three-dimensional bulk systems .
We investigated the influences of the excluded volume of molecules on biochemical reaction processes on two-dimensional surfaces, using the model of signal-transduction processes on biomembranes. We performed simulations of the two-dimensional cell-based model, which describes the reactions and diffusion of the receptors, signaling proteins, target proteins and crowders on the cell membrane. Here, the signaling proteins are activated by receptors and these activated signaling proteins activate target proteins, causing that to unbind from the membrane. We found that the signal flow of the system, defined as the activation rate of target protein, shows the following non-trivial variations against the change in the volume fraction of molecules or the affinity of the target protein to the membrane. For example, with an increase in the binding rate of target proteins to the membrane, the signal flow varies in the form i) monotonically increasing, ii) increasing then decreasing in a bell-shaped curve, or iii) increasing, decreasing then, increasing in an S-shaped curve . We further demonstrated that the excluded volume of molecules influences the hierarchical molecular distributions throughout the reaction processes. In particular, when the system exhibits large signal flow, the signaling proteins tend to surround the receptors, and the target proteins tend to become distributed around the receptor--signaling protein clusters. This accelerates both types of reactions. To explain these phenomena, we analyzed the stochastic model of the local motions of molecules around the receptor .
[ { "type": "R", "before": "molecular crowding", "after": "the excluded volume of molecules", "start_char_pos": 34, "end_char_pos": 52 }, { "type": "R", "before": "diffusions", "after": "diffusion", "start_char_pos": 282, "end_char_pos": 292 }, { "type": "R", "before": ", and crowders ,", "after": "and crowders", "start_char_pos": 347, "end_char_pos": 363 }, { "type": "R", "before": "The", "after": "Here, the", "start_char_pos": 386, "end_char_pos": 389 }, { "type": "R", "before": "induce target proteins", "after": "these activated signaling proteins activate target proteins, causing that", "start_char_pos": 440, "end_char_pos": 462 }, { "type": "R", "before": "reaction rates of two-dimensional systems consistently exhibit a maximum at a high", "after": "signal flow of the system, defined as the activation rate of target protein, shows the following non-trivial variations against the change in the", "start_char_pos": 510, "end_char_pos": 592 }, { "type": "R", "before": ", such that two molecules in the vicinity cannot easily exchange their positions", "after": "or the affinity of the target protein to the membrane. For example, with an increase in the binding rate of target proteins to the membrane, the signal flow varies in the form i) monotonically increasing, ii) increasing then decreasing in a bell-shaped curve, or iii) increasing, decreasing then, increasing in an S-shaped curve", "start_char_pos": 622, "end_char_pos": 702 }, { "type": "R", "before": "molecular crowding", "after": "the excluded volume of molecules", "start_char_pos": 734, "end_char_pos": 752 }, { "type": "R", "before": "process. The", "after": "processes. In particular, when the system exhibits large signal flow, the", "start_char_pos": 829, "end_char_pos": 841 }, { "type": "R", "before": "signaling protein-receptor", "after": "receptor--signaling protein", "start_char_pos": 955, "end_char_pos": 981 }, { "type": "R", "before": "distribution accelerates the receptor--signaling protein and signaling protein-target protein reactions. Thus, molecular crowding frequently enhances reactions on two-dimensional surfaces, but restricts reactions in three-dimensional bulk systems", "after": "accelerates both types of reactions. To explain these phenomena, we analyzed the stochastic model of the local motions of molecules around the receptor", "start_char_pos": 997, "end_char_pos": 1243 } ]
[ 0, 181, 385, 491, 704, 837, 991, 1101 ]
1207.5506
2
We investigated the influences of the excluded volume of molecules on biochemical reaction processes on two-dimensional surfaces , using the model of signal-transduction processes on biomembranes. We performed simulations of the two-dimensional cell-based model, which describes the reactions and diffusion of the receptors, signaling proteins, target proteins and crowders on the cell membrane. Here, the signaling proteins are activated by receptors and these activated signaling proteins activate target proteins , causing that to unbind from the membrane . We found that the signal flow of the system, defined as the activation rate of target protein, shows the following non-trivial variations against the change in the volume fraction of molecules or the affinity of the target protein to the membrane. For example, with an increase in the binding rate of target proteins to the membrane , the signal flow varies in the form i) monotonically increasing , ii) increasing then decreasing in a bell-shaped curve , or iii) increasing, decreasing then, increasing in an S-shaped curve. We further demonstrated that the excluded volume of molecules influences the hierarchical molecular distributions throughout the reaction processes. In particular, when the system exhibits large signal flow, the signaling proteins tend to surround the receptors , and the target proteins tend to become distributed around the receptor--signaling protein clusters. This accelerates both types of reactions. To explain these phenomena, we analyzed the stochastic model of the local motions of molecules around the receptor.
We investigate the influences of the excluded volume of molecules on biochemical reaction processes on 2-dimensional surfaces using a model of signal transduction processes on biomembranes. We perform simulations of the 2-dimensional cell-based model, which describes the reactions and diffusion of the receptors, signaling proteins, target proteins , and crowders on the cell membrane. The signaling proteins are activated by receptors , and these activated signaling proteins activate target proteins that bind autonomously from the cytoplasm to the membrane, and unbind from the membrane if activated. If the target proteins bind frequently, the volume fraction of molecules on the membrane becomes so large that the excluded volume of the molecules for the reaction and diffusion dynamics cannot be negligible. We find that such excluded volume effects of the molecules induce non-trivial variations of the signal flow, defined as the activation frequency of target proteins, as follows. With an increase in the binding rate of target proteins , the signal flow varies by i) monotonically increasing ; ii) increasing then decreasing in a bell-shaped curve ; or iii) increasing, decreasing , then increasing in an S-shaped curve. We further demonstrate that the excluded volume of molecules influences the hierarchical molecular distributions throughout the reaction processes. In particular, when the system exhibits a large signal flow, the signaling proteins tend to surround the receptors to form receptor-signaling protein clusters , and the target proteins tend to become distributed around such clusters. To explain these phenomena, we analyze the stochastic model of the local motions of molecules around the receptor.
[ { "type": "R", "before": "investigated", "after": "investigate", "start_char_pos": 3, "end_char_pos": 15 }, { "type": "R", "before": "two-dimensional surfaces , using the model of signal-transduction", "after": "2-dimensional surfaces using a model of signal transduction", "start_char_pos": 104, "end_char_pos": 169 }, { "type": "R", "before": "performed", "after": "perform", "start_char_pos": 200, "end_char_pos": 209 }, { "type": "R", "before": "two-dimensional", "after": "2-dimensional", "start_char_pos": 229, "end_char_pos": 244 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 361, "end_char_pos": 361 }, { "type": "R", "before": "Here, the", "after": "The", "start_char_pos": 397, "end_char_pos": 406 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 453, "end_char_pos": 453 }, { "type": "R", "before": ", causing that to", "after": "that bind autonomously from the cytoplasm to the membrane, and", "start_char_pos": 518, "end_char_pos": 535 }, { "type": "R", "before": ". We found that the signal flow of the system, defined as the activation rate of target protein, shows the following non-trivial variations against the change in the", "after": "if activated. If the target proteins bind frequently, the", "start_char_pos": 561, "end_char_pos": 726 }, { "type": "R", "before": "or the affinity of the target protein to the membrane. For example, with", "after": "on the membrane becomes so large that the excluded volume of the molecules for the reaction and diffusion dynamics cannot be negligible. We find that such excluded volume effects of the molecules induce non-trivial variations of the signal flow, defined as the activation frequency of target proteins, as follows. With", "start_char_pos": 756, "end_char_pos": 828 }, { "type": "D", "before": "to the membrane", "after": null, "start_char_pos": 880, "end_char_pos": 895 }, { "type": "R", "before": "in the form", "after": "by", "start_char_pos": 921, "end_char_pos": 932 }, { "type": "R", "before": ",", "after": ";", "start_char_pos": 961, "end_char_pos": 962 }, { "type": "R", "before": ",", "after": ";", "start_char_pos": 1017, "end_char_pos": 1018 }, { "type": "R", "before": "then,", "after": ", then", "start_char_pos": 1050, "end_char_pos": 1055 }, { "type": "R", "before": "demonstrated", "after": "demonstrate", "start_char_pos": 1100, "end_char_pos": 1112 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 1278, "end_char_pos": 1278 }, { "type": "A", "before": null, "after": "to form receptor-signaling protein clusters", "start_char_pos": 1352, "end_char_pos": 1352 }, { "type": "R", "before": "the receptor--signaling protein clusters. This accelerates both types of reactions.", "after": "such clusters.", "start_char_pos": 1413, "end_char_pos": 1496 }, { "type": "R", "before": "analyzed", "after": "analyze", "start_char_pos": 1528, "end_char_pos": 1536 } ]
[ 0, 196, 396, 562, 810, 1088, 1237, 1454, 1496 ]
1207.5524
1
DNA has a well-defined structural transition -- the denaturation of its double-stranded form into two single-strands -- that strongly affects its thermal transport properties. We show that, according to a widely implemented model for DNA denaturation, one can engineer DNA "heattronic" devices that have a rapidly increasing thermal conductance over a narrow temperature range across the denaturation transition (~ 350K ). The origin of the switching behavior is the release of the base pairs from their confining potential as DNA denatures, which both softens the lattice and suppresses nonlinear effects , increasing the conductance. Most importantly, we demonstrate that DNA nanojunctions have a broad range of thermal tunability due to varying the sequence and length, and exploit the underlying nonlinear behavior. We discuss the role of disorder in the base sequence, as well as the relation to genomic DNA. These results set the basis for developing thermal devices out of materials with nonlinear structural dynamics, as well as understanding the underlying mechanisms of DNA denaturation.
DNA has a well-defined structural transition - the denaturation of its double-stranded form into two single strands - that strongly affects its thermal transport properties. We show that, according to a widely implemented model for DNA denaturation, one can engineer DNA "heattronic" devices that have a rapidly increasing thermal conductance over a narrow temperature range across the denaturation transition (~ 350 K ). The origin of the switching behavior is the release of the base pairs from their confining potential as DNA denatures, which both softens the lattice and suppresses nonlinear effects and thus increases the thermal conductance. Most importantly, we demonstrate that DNA nanojunctions have a broad range of thermal tunability due to varying the sequence and length, and exploit the underlying nonlinear behavior. We discuss the role of disorder in the base sequence, as well as the relation to genomic DNA. These results set the basis for developing thermal devices out of materials with nonlinear structural dynamics, as well as understanding the underlying mechanisms of DNA denaturation.
[ { "type": "R", "before": "--", "after": "-", "start_char_pos": 45, "end_char_pos": 47 }, { "type": "R", "before": "single-strands --", "after": "single strands -", "start_char_pos": 102, "end_char_pos": 119 }, { "type": "R", "before": "350K", "after": "350 K", "start_char_pos": 415, "end_char_pos": 419 }, { "type": "R", "before": ", increasing the", "after": "and thus increases the thermal", "start_char_pos": 606, "end_char_pos": 622 } ]
[ 0, 175, 422, 635, 819, 913 ]
1207.5524
2
DNA has a well-defined structural transition - the denaturation of its double-stranded form into two single strands - that strongly affects its thermal transport properties. We show that, according to a widely implemented model for DNA denaturation, one can engineer DNA "heattronic" devices that have a rapidly increasing thermal conductance over a narrow temperature range across the denaturation transition (~350 K). The origin of the switchingbehavior is the release of the base pairs from their confining potential as DNA denatures, which both softens the lattice and suppresses nonlinear effects and thus increases the thermal conductance . Most importantly, we demonstrate that DNA nanojunctions have a broad range of thermal tunability due to varying the sequence and length, and exploit the underlying nonlinear behavior. We discuss the role of disorder in the base sequence, as well as the relation to genomic DNA. These results set the basis for developing thermal devices out of materials with nonlinear structural dynamics, as well as understanding the underlying mechanisms of DNA denaturation.
DNA has a well-defined structural transition -- the denaturation of its double-stranded form into two single strands -- that strongly affects its thermal transport properties. We show that, according to a widely implemented model for DNA denaturation, one can engineer DNA "heattronic" devices that have a rapidly increasing thermal conductance over a narrow temperature range across the denaturation transition (~350 K). The origin of this rapid increase of conductance, or "switching", is the softening of the lattice and suppression of nonlinear effects as the temperature crosses the transition temperature and DNA denatures . Most importantly, we demonstrate that DNA nanojunctions have a broad range of thermal tunability due to varying the sequence and length, and exploiting the underlying nonlinear behavior. We discuss the role of disorder in the base sequence, as well as the relation to genomic DNA. These results set the basis for developing thermal devices out of materials with nonlinear structural dynamics, as well as understanding the underlying mechanisms of DNA denaturation.
[ { "type": "R", "before": "-", "after": "--", "start_char_pos": 45, "end_char_pos": 46 }, { "type": "R", "before": "-", "after": "--", "start_char_pos": 116, "end_char_pos": 117 }, { "type": "R", "before": "the switchingbehavior is the release of the base pairs from their confining potential as DNA denatures, which both softens the lattice and suppresses nonlinear effects and thus increases the thermal conductance", "after": "this rapid increase of conductance, or \"switching\", is the softening of the lattice and suppression of nonlinear effects as the temperature crosses the transition temperature and DNA denatures", "start_char_pos": 434, "end_char_pos": 644 }, { "type": "R", "before": "exploit", "after": "exploiting", "start_char_pos": 788, "end_char_pos": 795 } ]
[ 0, 173, 419, 646, 830, 924 ]
1207.5895
1
We consider a group of Bayesian agents who each possess an independent private signal about an unknown state of the world . We study the question of efficient learning: in which games is private information efficiently disseminated among the agents? In particular, we explore the notion of asymptotic learning, which is said to occur when agents learn the state of the world with probability that approaches one as the number of agents tends to infinity. We show that under general conditions asymptotic learning follows from agreement on posterior actions or posterior beliefs, regardless of the information exchange dynamics .
We consider social learning settings in which a group of agents face uncertainty regarding a state of the world , observe private signals, share the same utility function, and act in a general dynamic setting. We introduce Social Learning Equilibria, a static equilibrium concept that abstracts away from the details of the given dynamics, but nevertheless captures the corresponding asymptotic equilibrium behavior. We establish strong equilibrium properties on agreement, herding, and information aggregation .
[ { "type": "A", "before": null, "after": "social learning settings in which", "start_char_pos": 12, "end_char_pos": 12 }, { "type": "R", "before": "Bayesian agents who each possess an independent private signal about an unknown", "after": "agents face uncertainty regarding a", "start_char_pos": 24, "end_char_pos": 103 }, { "type": "R", "before": ". We study the question of efficient learning: in which games is private information efficiently disseminated among the agents? In particular, we explore the notion of asymptotic learning, which is said to occur when agents learn the state of the world with probability that approaches one as the number of agents tends to infinity. We show that under general conditions asymptotic learning follows from agreement on posterior actions or posterior beliefs, regardless of the information exchange dynamics", "after": ", observe private signals, share the same utility function, and act in a general dynamic setting. We introduce Social Learning Equilibria, a static equilibrium concept that abstracts away from the details of the given dynamics, but nevertheless captures the corresponding asymptotic equilibrium behavior. We establish strong equilibrium properties on agreement, herding, and information aggregation", "start_char_pos": 123, "end_char_pos": 627 } ]
[ 0, 124, 250, 455 ]