doc_id
stringlengths
2
10
revision_depth
stringclasses
5 values
before_revision
stringlengths
3
309k
after_revision
stringlengths
5
309k
edit_actions
list
sents_char_pos
sequence
1212.0479
1
We study tick-by-tick financial returns belonging to the FTSE MIB index of the Italian Stock Exchange (Borsa Italiana). We find that non-stationarities detected in other markets in the past are still there. Moreover , scaling properties reported in the previous literature for other high-frequency financial data are approximately valid as well. Finally , we propose a simple method for describing non-stationary returns, based on a non-homogeneous normal compound Poisson process and we test this model against the empirical findings . It turns out that the model can reproduce several stylized facts of high-frequency financial time series .
We study tick-by-tick financial returns belonging to the FTSE MIB index of the Italian Stock Exchange (Borsa Italiana). We can confirm previously detected non-stationarities. However , scaling properties reported in the previous literature for other high-frequency financial data are only approximately valid. As a consequence of the empirical analyses , we propose a simple method for describing non-stationary returns, based on a non-homogeneous normal compound Poisson process . We test this model against the empirical findings and it turns out that the model can approximately reproduce several stylized facts of high-frequency financial time series . Moreover, using Monte Carlo simulations, we analyze order selection for this model class using three information criteria: Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the Hannan-Quinn information criterion (HQ). For comparison, we also perform a similar Monte Carlo experiment for the ACD (autoregressive conditional duration) model. Our results show that the information criteria work best for small parameter numbers for the compound Poisson type models, whereas for the ACD model the model selection procedure does not work well in certain cases .
[ { "type": "R", "before": "find that non-stationarities detected in other markets in the past are still there. Moreover", "after": "can confirm previously detected non-stationarities. However", "start_char_pos": 123, "end_char_pos": 215 }, { "type": "R", "before": "approximately valid as well. Finally", "after": "only approximately valid. As a consequence of the empirical analyses", "start_char_pos": 317, "end_char_pos": 353 }, { "type": "R", "before": "and we", "after": ". We", "start_char_pos": 481, "end_char_pos": 487 }, { "type": "R", "before": ". It", "after": "and it", "start_char_pos": 535, "end_char_pos": 539 }, { "type": "A", "before": null, "after": "approximately", "start_char_pos": 569, "end_char_pos": 569 }, { "type": "A", "before": null, "after": ". Moreover, using Monte Carlo simulations, we analyze order selection for this model class using three information criteria: Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the Hannan-Quinn information criterion (HQ). For comparison, we also perform a similar Monte Carlo experiment for the ACD (autoregressive conditional duration) model. Our results show that the information criteria work best for small parameter numbers for the compound Poisson type models, whereas for the ACD model the model selection procedure does not work well in certain cases", "start_char_pos": 643, "end_char_pos": 643 } ]
[ 0, 119, 206, 345, 536 ]
1212.0779
1
We prove here a general closed-form expansion formula for forward-start options and the forward implied volatility smile in a large class of models, including Heston and time-changed exponential Levy models. This expansion applies to both small and large maturities and is based solely on the knowledge of the forward characteristic function of the underlying process. The method is based on sharp large deviations techniques, and allows us to recover (in particular) many results for the spot implied volatility smile. In passing we show (i) that the small-maturity exploding behaviour of forward smiles depends on whether the quadratic variation of the underlying is bounded or not, and (ii) that the forward-start date also has to be rescaled in order to obtain non-trivial small-maturity asymptotics .
We prove here a general closed-form expansion formula for forward-start options and the forward implied volatility smile in a large class of models, including the Heston stochastic volatility and time-changed exponential L\'evy models. This expansion applies to both small and large maturities and is based solely on the properties of the forward characteristic function of the underlying process. The method is based on sharp large deviations techniques, and allows us to recover (in particular) many results for the spot implied volatility smile. In passing we (i) show that the forward-start date has to be rescaled in order to obtain non-trivial small-maturity asymptotics , (ii) prove that the forward-start date may influence the large-maturity behaviour of the forward smile, and (iii) provide some examples of models with finite quadratic variation where the small-maturity forward smile does not explode .
[ { "type": "R", "before": "Heston", "after": "the Heston stochastic volatility", "start_char_pos": 159, "end_char_pos": 165 }, { "type": "R", "before": "Levy", "after": "L\\'evy", "start_char_pos": 195, "end_char_pos": 199 }, { "type": "R", "before": "knowledge", "after": "properties", "start_char_pos": 293, "end_char_pos": 302 }, { "type": "D", "before": "show", "after": null, "start_char_pos": 534, "end_char_pos": 538 }, { "type": "R", "before": "that the small-maturity exploding behaviour of forward smiles depends on whether the quadratic variation of the underlying is bounded or not, and (ii) that the", "after": "show that the", "start_char_pos": 543, "end_char_pos": 702 }, { "type": "D", "before": "also", "after": null, "start_char_pos": 722, "end_char_pos": 726 }, { "type": "A", "before": null, "after": ", (ii) prove that the forward-start date may influence the large-maturity behaviour of the forward smile, and (iii) provide some examples of models with finite quadratic variation where the small-maturity forward smile does not explode", "start_char_pos": 804, "end_char_pos": 804 } ]
[ 0, 207, 368, 519 ]
1212.0781
1
In this paper we study the optimal stopping problem of pricing an American Put option on a Zero Coupon Bond (ZCB) in the Heath-Jarrow-Morton (HJM) framework for the forward interest rate. In particular we consider its Musiela's parametrization to guarantee a Markovian setting. Hence we are in an infinite dimensional setting, in which the forward rate curve is described by a SDE in a suitable Hilbert space. In order to find an infinite dimensional variational formulation of the pricing problem , we extend some results on infinite dimensional optimal stopping and variational inequalities recently obtained in 8%DIFDELCMD < ]%%% . The proof goes through three main steps. First we regularize the American bond option's payoff by adopting usual smoothing arguments. Next we approximate the infinite dimensional dynamics by finite dimensional ones to which we associate suitable optimal stopping problems in R^n. Then, by taking the limit as n goes to infinity and by removing the smoothing on the payoff, we obtain an infinite dimensional variational inequality for the price of the American bond option. Moreover , the first time at which the price of the American bond option equals the payoff turns out to be an optimalexercise time .
We study the optimal stopping problem of pricing an American Put option on a Zero Coupon Bond (ZCB) in the Musiela's parametrization of the Heath-Jarrow-Morton (HJM) model for forward interest rates. First we show regularity properties of the price function by probabilistic methods. Then we find an infinite dimensional variational formulation of the pricing problem %DIFDELCMD < ]%%% by approximating the original optimal stopping problem by finite dimensional ones , after a suitable smoothing of the payoff. As expected, the first time the price of the American bond option equals the payoff is shown to be optimal .
[ { "type": "R", "before": "In this paper we", "after": "We", "start_char_pos": 0, "end_char_pos": 16 }, { "type": "A", "before": null, "after": "Musiela's parametrization of the", "start_char_pos": 121, "end_char_pos": 121 }, { "type": "R", "before": "framework for the forward interest rate. In particular we consider its Musiela's parametrization to guarantee a Markovian setting. Hence we are in an infinite dimensional setting, in which the forward rate curve is described by a SDE in a suitable Hilbert space. In order to", "after": "model for forward interest rates. First we show regularity properties of the price function by probabilistic methods. Then we", "start_char_pos": 148, "end_char_pos": 422 }, { "type": "D", "before": ", we extend some results on infinite dimensional optimal stopping and variational inequalities recently obtained in", "after": null, "start_char_pos": 499, "end_char_pos": 614 }, { "type": "D", "before": "8", "after": null, "start_char_pos": 615, "end_char_pos": 616 }, { "type": "R", "before": ". The proof goes through three main steps. First we regularize the American bond option's payoff by adopting usual smoothing arguments. Next we approximate the infinite dimensional dynamics by", "after": "by approximating the original optimal stopping problem by", "start_char_pos": 634, "end_char_pos": 826 }, { "type": "D", "before": "to which we associate suitable optimal stopping problems in R^n. Then, by taking the limit as n goes to infinity and by removing the smoothing on the payoff, we obtain an infinite dimensional variational inequality for the price of the American bond option. Moreover", "after": null, "start_char_pos": 851, "end_char_pos": 1117 }, { "type": "A", "before": null, "after": "after a suitable smoothing of", "start_char_pos": 1120, "end_char_pos": 1120 }, { "type": "A", "before": null, "after": "payoff. As expected, the", "start_char_pos": 1125, "end_char_pos": 1125 }, { "type": "D", "before": "at which", "after": null, "start_char_pos": 1137, "end_char_pos": 1145 }, { "type": "R", "before": "turns out to be an optimalexercise time", "after": "is shown to be optimal", "start_char_pos": 1202, "end_char_pos": 1241 } ]
[ 0, 188, 278, 410, 676, 769, 915, 1108 ]
1212.1194
1
We revisit here the mathematical model for ATP production in mitochondria introduced recently by Bertram, Pedersen, Luciani, and Sherman (BPLS) as a simplification of the more complete but intricate Magnus and Keizer's model. We correct some inaccuracies in the BPLS original approximations and and the calcium uniporter rate J_{\rm uni}. We introduce new approximations for such flux rates and } then analyze some of the dynamical properties of the model. We infer from exhaustive numerical explorations that the enhanced BPLS equations have a unique attractor fixed point for physiologically acceptable ranges of mitochondrial variables and respiration inputs . We determine, in the stationary regime, the dependence of the mitochondrial variables on the respiration inputs, namely the cytosolic concentration of calcium {\rm Ca}_{\rm c} and the substrate fructose 1,6-bisphosphate FBP. The same effect of calcium saturation reported for the original BPLS model is observed here. We find out, however, an interesting non-stationary effect : the inertia of the model tends to increase considerably for high concentrations of calcium {\rm .
We revisit here the mathematical model for ATP production in mitochondria introduced recently by Bertram, Pedersen, Luciani, and Sherman (BPLS) as a simplification of the more complete but intricate Magnus and Keizer's model. We identify some inaccuracies in the BPLS original approximations for two flux rates, namely the adenine nucleotide translocator rate J_{\rm ANT and the calcium uniporter rate J_{\rm uni}. We introduce new approximations for such flux rates and } then analyze some of the dynamical properties of the model. We infer , from exhaustive numerical explorations , that the enhanced BPLS equations have a unique attractor fixed point for physiologically acceptable ranges of mitochondrial variables and respiration inputs , as one would indeed expect from homeostasis . We determine, in the stationary regime, the dependence of the mitochondrial variables on the respiration inputs, namely the cytosolic concentration of calcium {\rm Ca}_{\rm c} and the substrate fructose 1,6-bisphosphate FBP. The same dynamical effects of calcium and FBP saturations reported for the original BPLS model are observed here. We find out, however, a novel non-stationary effect which could be, in principle, physiologically interesting: some response times of the model tend to increase considerably for high concentrations of calcium and/or FBP. In particular, the larger the concentrations of{\rm Ca _{\rm c .
[ { "type": "R", "before": "correct", "after": "identify", "start_char_pos": 229, "end_char_pos": 236 }, { "type": "R", "before": "and", "after": "for two flux rates, namely the adenine nucleotide translocator rate J_{\\rm ANT", "start_char_pos": 291, "end_char_pos": 294 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 466, "end_char_pos": 466 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 506, "end_char_pos": 506 }, { "type": "A", "before": null, "after": ", as one would indeed expect from homeostasis", "start_char_pos": 664, "end_char_pos": 664 }, { "type": "R", "before": "effect of calcium saturation", "after": "dynamical effects of calcium and FBP saturations", "start_char_pos": 901, "end_char_pos": 929 }, { "type": "R", "before": "is", "after": "are", "start_char_pos": 967, "end_char_pos": 969 }, { "type": "R", "before": "an interesting", "after": "a novel", "start_char_pos": 1007, "end_char_pos": 1021 }, { "type": "R", "before": ": the inertia", "after": "which could be, in principle, physiologically interesting: some response times", "start_char_pos": 1044, "end_char_pos": 1057 }, { "type": "R", "before": "tends", "after": "tend", "start_char_pos": 1071, "end_char_pos": 1076 }, { "type": "A", "before": null, "after": "and/or FBP. In particular, the larger the concentrations of", "start_char_pos": 1137, "end_char_pos": 1137 }, { "type": "A", "before": null, "after": "Ca", "start_char_pos": 1142, "end_char_pos": 1142 }, { "type": "A", "before": null, "after": "_{\\rm c", "start_char_pos": 1143, "end_char_pos": 1143 } ]
[ 0, 225, 338, 456, 666, 891, 984 ]
1212.1638
2
In this paper, we focus on the scheduling problem in multi-channel wireless networks, e.g., the downlink of a single cell in fourth generation (4G) OFDM-based cellular networks. Our goal is to design practical scheduling policies that can achieve provably good performance in terms of both throughput and delay, at a low complexity. While a class of O(n^{2.5} \log n) complexity hybrid scheduling policies are recently developed to guarantee both rate-function delay optimality (in the many-channel many-user asymptotic regime) and throughput optimality (in general non-asymptotic setting), their practical complexity is typically high. To address this issue, we develop a simple greedy policy called Delay-based Server-Side-Greedy (D-SSG) with a lower complexity O(n^2) \lower , and rigorously prove that D-SSG not only achieves throughput optimality, but also guarantees near-optimal rate-function-based delay performance. Specifically, the rate-function attained by D-SSG for any fixed integer threshold b>0 , is no smaller than the maximum achievable rate-function by any scheduling policy for threshold b-1. Thus, we are able to achieve a reduction in complexity (from O(n^{2.5} \log n) of the hybrid policies to O(n^2 ) ) with a minimal drop in the delay performance. More importantly, in practice, D-SSG generally has a substantially lower complexity than the hybrid policies that typically have a large constant factor hidden in the O(\cdot) notation. Finally, we conduct numerical simulations to validate our theoretical results in various scenarios. The simulation results show that D-SSG not only guarantees a near-optimal rate-function, but also empirically is virtually indistinguishable from delay-optimal policies.
In this paper, we focus on the scheduling problem in multi-channel wireless networks, e.g., the downlink of a single cell in fourth generation (4G) OFDM-based cellular networks. Our goal is to design practical scheduling policies that can achieve provably good performance in terms of both throughput and delay, at a low complexity. While a class of O(n^{2.5} \log n) -complexity hybrid scheduling policies are recently developed to guarantee both rate-function delay optimality (in the many-channel many-user asymptotic regime) and throughput optimality (in the general non-asymptotic setting), their practical complexity is typically high. To address this issue, we develop a simple greedy policy called Delay-based Server-Side-Greedy (D-SSG) with a \lower complexity 2n^2+2n , and rigorously prove that D-SSG not only achieves throughput optimality, but also guarantees near-optimal asymptotic delay performance. Specifically, we show that the rate-function attained by D-SSG for any delay-violation threshold b , is no smaller than the maximum achievable rate-function by any scheduling policy for threshold b-1. Thus, we are able to achieve a reduction in complexity (from O(n^{2.5} \log n) of the hybrid policies to 2n^2 + 2n ) with a minimal drop in the delay performance. More importantly, in practice, D-SSG generally has a substantially lower complexity than the hybrid policies that typically have a large constant factor hidden in the O(\cdot) notation. Finally, we conduct numerical simulations to validate our theoretical results in various scenarios. The simulation results show that D-SSG not only guarantees a near-optimal rate-function, but also empirically is virtually indistinguishable from delay-optimal policies.
[ { "type": "R", "before": "complexity", "after": "-complexity", "start_char_pos": 368, "end_char_pos": 378 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 558, "end_char_pos": 558 }, { "type": "D", "before": "lower complexity O(n^2)", "after": null, "start_char_pos": 748, "end_char_pos": 771 }, { "type": "A", "before": null, "after": "complexity 2n^2+2n", "start_char_pos": 779, "end_char_pos": 779 }, { "type": "R", "before": "rate-function-based", "after": "asymptotic", "start_char_pos": 888, "end_char_pos": 907 }, { "type": "A", "before": null, "after": "we show that", "start_char_pos": 941, "end_char_pos": 941 }, { "type": "R", "before": "fixed integer threshold b>0", "after": "delay-violation threshold b", "start_char_pos": 986, "end_char_pos": 1013 }, { "type": "R", "before": "O(n^2 )", "after": "2n^2 + 2n", "start_char_pos": 1221, "end_char_pos": 1228 } ]
[ 0, 177, 332, 637, 926, 1115, 1276, 1462, 1562 ]
1212.1877
1
The theory of functionally generated portfolios (FGPs) is an aspect of the continuous-time, continuous- path Stochastic Portfolio Theory of Robert Fernholz. FGPs have been formulated to yield a master equation - a description of their return relative to a passive (buy-and-hold) benchmark portfolio serving as the num\'eraire. This description has proven to be analytically very useful, as it is both pathwise and free of stochastic integrals. Historically, FGPs have been specified only as portfolios on the tradeable assets of a market, and the num\'eraire has been confined to be the market portfolio. Here we generalize the class of FGPs in several ways: (1) they may be specified over any strictly positive wealth processes resulting from investment in the tradeable assets, (2) the num\'eraire may be any strictly positive wealth process, (3 ) generating functions may be stochastically dynamic, adjusting to changing market conditions through an auxiliary continuous-path stochastic argument of finite variation. These generalizations do not forfeit the important tractability properties of the associated master equation. We show how these generalizations can be usefully applied to statistical arbitrage, portfolio risk immunization, and the theory of mirror portfolios.
The theory of functionally generated portfolios (FGPs) is an aspect of the continuous-time, continuous-path Stochastic Portfolio Theory of Robert Fernholz. FGPs have been formulated to yield a master equation - a description of their return relative to a passive (buy-and-hold) benchmark portfolio serving as the num\'eraire. This description has proven to be analytically very useful, as it is both pathwise and free of stochastic integrals. Here we generalize the class of FGPs in several ways: (1) the num\'eraire may be any strictly positive wealth process, not necessarily the market portfolio or even a passive portfolio; (2 ) generating functions may be stochastically dynamic, adjusting to changing market conditions through an auxiliary continuous-path stochastic argument of finite variation. These generalizations do not forfeit the important tractability properties of the associated master equation. We show how these generalizations can be usefully applied to scenario analysis, statistical arbitrage, portfolio risk immunization, and the theory of mirror portfolios.
[ { "type": "R", "before": "continuous- path", "after": "continuous-path", "start_char_pos": 92, "end_char_pos": 108 }, { "type": "D", "before": "Historically, FGPs have been specified only as portfolios on the tradeable assets of a market, and the num\\'eraire has been confined to be the market portfolio.", "after": null, "start_char_pos": 444, "end_char_pos": 604 }, { "type": "R", "before": "they may be specified over any strictly positive wealth processes resulting from investment in the tradeable assets, (2) the", "after": "the", "start_char_pos": 663, "end_char_pos": 787 }, { "type": "R", "before": "(3", "after": "not necessarily the market portfolio or even a passive portfolio; (2", "start_char_pos": 845, "end_char_pos": 847 }, { "type": "A", "before": null, "after": "scenario analysis,", "start_char_pos": 1191, "end_char_pos": 1191 } ]
[ 0, 156, 326, 443, 604, 1019, 1129 ]
1212.2129
1
On-line portfolio selection is a fundamental problem in computational finance, which has been extensively studied across several research communities, including finance, statistics, artificial intelligence, machine learning, and data mining, etc. This article aims to provide a comprehensive survey and a structural understanding of existing on-line portfolio selection techniques in literature . From an on-line machine learning perspective, we first formulate on-line portfolio selection as an on-line sequential decision problem, and then survey a variety of state-of-the-art approaches in literature , which are grouped into several major categories, including benchmarks, "Follow-the-Winner" approaches, "Follow-the-Loser" approaches, "Pattern-Matching" based approaches, and meta-learning algorithms . In addition to the problem formulation and related algorithms, we also discuss the relationship of these algorithms with the Capital Growth theory in order to better understand the commons and differences of their underlying trading ideas. This article aims to provide a timely and comprehensive survey for both machine learning and data mining researchers in academia and quantitative portfolio managers in financial industry to help them understand the state of the art and facilitate their research or practical applications. We also discuss some open issues and evaluate some emerging new trends for future research directions.
Online portfolio selection is a fundamental problem in computational finance, which has been extensively studied across several research communities, including finance, statistics, artificial intelligence, machine learning, and data mining, etc. This article aims to provide a comprehensive survey and a structural understanding of published online portfolio selection techniques . From an online machine learning perspective, we first formulate online portfolio selection as a sequential decision problem, and then survey a variety of state-of-the-art approaches , which are grouped into several major categories, including benchmarks, "Follow-the-Winner" approaches, "Follow-the-Loser" approaches, "Pattern-Matching" based approaches, and "Meta-Learning Algorithms" . In addition to the problem formulation and related algorithms, we also discuss the relationship of these algorithms with the Capital Growth theory in order to better understand the similarities and differences of their underlying trading ideas. This article aims to provide a timely and comprehensive survey for both machine learning and data mining researchers in academia and quantitative portfolio managers in the financial industry to help them understand the state-of-the-art and facilitate their research and practical applications. We also discuss some open issues and evaluate some emerging new trends for future research directions.
[ { "type": "R", "before": "On-line", "after": "Online", "start_char_pos": 0, "end_char_pos": 7 }, { "type": "R", "before": "existing on-line", "after": "published online", "start_char_pos": 333, "end_char_pos": 349 }, { "type": "D", "before": "in literature", "after": null, "start_char_pos": 381, "end_char_pos": 394 }, { "type": "R", "before": "on-line", "after": "online", "start_char_pos": 405, "end_char_pos": 412 }, { "type": "R", "before": "on-line", "after": "online", "start_char_pos": 462, "end_char_pos": 469 }, { "type": "R", "before": "an on-line", "after": "a", "start_char_pos": 493, "end_char_pos": 503 }, { "type": "D", "before": "in literature", "after": null, "start_char_pos": 590, "end_char_pos": 603 }, { "type": "R", "before": "meta-learning algorithms", "after": "\"Meta-Learning Algorithms\"", "start_char_pos": 781, "end_char_pos": 805 }, { "type": "R", "before": "commons", "after": "similarities", "start_char_pos": 989, "end_char_pos": 996 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1216, "end_char_pos": 1216 }, { "type": "R", "before": "state of the art", "after": "state-of-the-art", "start_char_pos": 1264, "end_char_pos": 1280 }, { "type": "R", "before": "or", "after": "and", "start_char_pos": 1311, "end_char_pos": 1313 } ]
[ 0, 246, 396, 807, 1047, 1337 ]
1212.2140
1
We study the existence of optimal actions in a zero-sum game \inf _\tau \sup_PE ^P[X_\tau] between a stopper and a controller choosing a probability measure. In particular, we consider the optimal stopping problem \inf _\tau E(X _\tau ) for a class of sublinear expectations E(\cdot) including the G-expectation. We show that the game has a value. Moreover, exploiting the theory of sublinear expectations, we define a nonlinear Snell envelope Y and prove that the first hitting time \inf{t:\, Y_t=X_t is an optimal stopping time. The existence of a saddle point is shown under a compactness condition. Finally, the results are applied to the subhedging of American options under volatility uncertainty.
We study the existence of optimal actions in a zero-sum game \inf _{\tau \sup_PE ^P[X_{\tau between a stopper and a controller choosing a probability measure. This includes the optimal stopping problem \inf _{\tau E(X _{\tau ) for a class of sublinear expectations E(\cdot) such as the G-expectation. We show that the game has a value. Moreover, exploiting the theory of sublinear expectations, we define a nonlinear Snell envelope Y and prove that the first hitting time \inf\{t:Y_t=X_t\ is an optimal stopping time. The existence of a saddle point is shown under a compactness condition. Finally, the results are applied to the subhedging of American options under volatility uncertainty.
[ { "type": "R", "before": "_\\tau", "after": "_{\\tau", "start_char_pos": 66, "end_char_pos": 71 }, { "type": "R", "before": "^P[X_\\tau]", "after": "^P[X_{\\tau", "start_char_pos": 80, "end_char_pos": 90 }, { "type": "R", "before": "In particular, we consider", "after": "This includes", "start_char_pos": 158, "end_char_pos": 184 }, { "type": "R", "before": "_\\tau", "after": "_{\\tau", "start_char_pos": 219, "end_char_pos": 224 }, { "type": "R", "before": "_\\tau", "after": "_{\\tau", "start_char_pos": 229, "end_char_pos": 234 }, { "type": "R", "before": "including", "after": "such as", "start_char_pos": 284, "end_char_pos": 293 }, { "type": "R", "before": "\\inf{t:\\, Y_t=X_t", "after": "\\inf\\{t:Y_t=X_t\\", "start_char_pos": 484, "end_char_pos": 501 } ]
[ 0, 157, 312, 347, 530, 602 ]
1212.3147
1
In this paper we derive an easily computed approximation of Rogers and Shi's lower bound for a local volatility jump-diffusion model and then use it to approximate European basket option values . If the local volatility function is time independent then there is a closed-form expression for the approximation. Numerical tests show that the lower bound approximation is fast and accurate in comparison with the Monte Carlo method, the partial-exact approximation method and the asymptotic expansion method .
In this paper we derive an easily computed approximation to European basket call prices for a local volatility jump-diffusion model . We apply the asymptotic expansion method to find the approximate value of the lower bound of European basket call prices . If the local volatility function is time independent then there is a closed-form expression for the approximation. Numerical tests show that the suggested approximation is fast and accurate in comparison with the Monte Carlo and other approximation methods in the literature .
[ { "type": "R", "before": "of Rogers and Shi's lower bound", "after": "to European basket call prices", "start_char_pos": 57, "end_char_pos": 88 }, { "type": "R", "before": "and then use it to approximate European basket option values", "after": ". We apply the asymptotic expansion method to find the approximate value of the lower bound of European basket call prices", "start_char_pos": 133, "end_char_pos": 193 }, { "type": "R", "before": "lower bound", "after": "suggested", "start_char_pos": 341, "end_char_pos": 352 }, { "type": "R", "before": "method, the partial-exact approximation method and the asymptotic expansion method", "after": "and other approximation methods in the literature", "start_char_pos": 423, "end_char_pos": 505 } ]
[ 0, 195, 310 ]
1212.3195
1
We report evidence that empirical data show time varying multifractal properties. This is obtained by comparing empirical observations of the weighted generalised Hurst exponent (wGHE) with time series simulated via Multifractal Random Walk (MRW) by Bacry et al. [E.Bacry, J.Delour and J.Muzy, Phys.Rev.E \,{\bf 64 026103, 2001}]. While dynamical wGHE computed on synthetic MRW series is consistent with a scenario where multifractality is constant over time, fluctuations in the dynamical wGHE observed in empirical data fail to be in agreement with a MRW with constant intermittency parameter. This is a strong argument to claim that observed variations of multifractality in financial time series are to be ascribed to a structural breakdown in the temporal covariance structure of stock returns series. As a consequence, multi fractal models with a constant intermittency parameter may not always be satisfactory in reproducing financial market behaviour .
We perform an extensive empirical analysis of scaling properties of equity returns, suggesting that financial data show time varying multifractal properties. This is obtained by comparing empirical observations of the weighted generalised Hurst exponent (wGHE) with time series simulated via Multifractal Random Walk (MRW) by Bacry et al. [E.Bacry, J.Delour and J.Muzy, Phys.Rev.E \,{\bf 64 026103, 2001}]. While dynamical wGHE computed on synthetic MRW series is consistent with a scenario where multifractality is constant over time, fluctuations in the dynamical wGHE observed in empirical data are not in agreement with a MRW with constant intermittency parameter. We test these hypotheses of constant multifractality considering different specifications of MRW model with fatter tails: in all cases considered, although the thickness of the tails accounts for most of anomalous fluctuations of multifractality, still cannot fully explain the observed fluctuations .
[ { "type": "R", "before": "report evidence that empirical", "after": "perform an extensive empirical analysis of scaling properties of equity returns, suggesting that financial", "start_char_pos": 3, "end_char_pos": 33 }, { "type": "R", "before": "fail to be", "after": "are not", "start_char_pos": 522, "end_char_pos": 532 }, { "type": "R", "before": "This is a strong argument to claim that observed variations of multifractality in financial time series are to be ascribed to a structural breakdown in the temporal covariance structure of stock returns series. As a consequence, multi fractal models with a constant intermittency parameter may not always be satisfactory in reproducing financial market behaviour", "after": "We test these hypotheses of constant multifractality considering different specifications of MRW model with fatter tails: in all cases considered, although the thickness of the tails accounts for most of anomalous fluctuations of multifractality, still cannot fully explain the observed fluctuations", "start_char_pos": 596, "end_char_pos": 958 } ]
[ 0, 81, 299, 330, 595, 806 ]
1212.3205
1
Studies of coevolution of amino acids within and between proteins have revealed two types of coevolving units: coevolving contacts, which are pairs of amino acids distant along the sequence but in contact in the three-dimensional structure, and sectors, which are larger groups of structurally connected amino acids that underly the biochemical properties of proteins. By reconciling two approaches for analyzing correlations in multiple sequence alignments, we uncover a new class of coevolving units , called 'sectons'. Sectons provide a conceptual link between coevolving contacts and sectors. The methods and results that we present are general , and relevant beyond protein structures . This generality is illustrated with an analysis of the co-occurrence of orthologous genes in bacterial genomes .
Studies of coevolution of amino acids within and between proteins have revealed two types of coevolving units: coevolving contacts, which are pairs of amino acids distant along the sequence but in contact in the three-dimensional structure, and sectors, which are larger groups of structurally connected amino acids that underlie the biochemical properties of proteins. By reconciling two approaches for analyzing correlations in multiple sequence alignments, we link these two findings together and with coevolving units of intermediate size, called `sectons', which are shown to provide additional information. By extending the analysis to the co-occurrence of orthologous genes in bacterial genomes, we also show that the methods and results are general and relevant beyond protein structures .
[ { "type": "R", "before": "underly", "after": "underlie", "start_char_pos": 321, "end_char_pos": 328 }, { "type": "R", "before": "uncover a new class of coevolving units , called 'sectons'. Sectons provide a conceptual link between coevolving contacts and sectors. The", "after": "link these two findings together and with coevolving units of intermediate size, called `sectons', which are shown to provide additional information. By extending the analysis to the co-occurrence of orthologous genes in bacterial genomes, we also show that the", "start_char_pos": 462, "end_char_pos": 600 }, { "type": "R", "before": "that we present are general ,", "after": "are general", "start_char_pos": 621, "end_char_pos": 650 }, { "type": "D", "before": ". This generality is illustrated with an analysis of the co-occurrence of orthologous genes in bacterial genomes", "after": null, "start_char_pos": 690, "end_char_pos": 802 } ]
[ 0, 368, 521, 596, 691 ]
1212.3281
1
Various approaches have explored the covariation of residues in multiple-sequence alignments of homologous proteins to extract functional and structural information. Among those are principal component analysis (PCA), which identifies the most correlated groups of residues, and direct coupling analysis (DCA), a global inference method based on the maximum entropy principle, which aims at predicting residue-residue contacts. In this paper, inspired by the statistical physics of disordered systems, we introduce the Hopfield-Potts model to naturally interpolate between these two approaches. The Hopfield-Potts model allows us to identify relevant 'patterns' of residues from the knowledge of the eigenmodes and eigenvalues of the residue-residue Pearson correlation matrix. We show how the computation of such statistical patterns makes it possible to accurately predict residue-residue contacts with a much smaller number of parameters than DCA. In addition, we show that low-eigenvalue correlation modes, discarded by PCA, are important to recover structural information: the corresponding patterns are highly localized, that is, they are concentrated in few sites, which we find to be in close contact on the three-dimensional protein fold . We also explain why these low-eigenvalue modes, in contrast to the standard principal components, are able to efficiently encode compensatory mutations between pairs of residues .
Various approaches have explored the covariation of residues in multiple-sequence alignments of homologous proteins to extract functional and structural information. Among those are principal component analysis (PCA), which identifies the most correlated groups of residues, and direct coupling analysis (DCA), a global inference method based on the maximum entropy principle, which aims at predicting residue-residue contacts. In this paper, inspired by the statistical physics of disordered systems, we introduce the Hopfield-Potts model to naturally interpolate between these two approaches. The Hopfield-Potts model allows us to identify relevant 'patterns' of residues from the knowledge of the eigenmodes and eigenvalues of the residue-residue correlation matrix. We show how the computation of such statistical patterns makes it possible to accurately predict residue-residue contacts with a much smaller number of parameters than DCA. This dimensional reduction allows us to avoid overfitting and to extract contact information from multiple-sequence alignments of reduced size. In addition, we show that low-eigenvalue correlation modes, discarded by PCA, are important to recover structural information: the corresponding patterns are highly localized, that is, they are concentrated in few sites, which we find to be in close contact in the three-dimensional protein fold .
[ { "type": "D", "before": "Pearson", "after": null, "start_char_pos": 750, "end_char_pos": 757 }, { "type": "A", "before": null, "after": "This dimensional reduction allows us to avoid overfitting and to extract contact information from multiple-sequence alignments of reduced size.", "start_char_pos": 951, "end_char_pos": 951 }, { "type": "R", "before": "on", "after": "in", "start_char_pos": 1210, "end_char_pos": 1212 }, { "type": "D", "before": ". We also explain why these low-eigenvalue modes, in contrast to the standard principal components, are able to efficiently encode compensatory mutations between pairs of residues", "after": null, "start_char_pos": 1248, "end_char_pos": 1427 } ]
[ 0, 165, 427, 594, 777, 950, 1249 ]
1212.3647
1
Motivated by data-rich experiments in transcriptional regulation and sensory neuroscience, we consider the following general problem in statistical inference. A system of interest, when exposed to a stimulus S, adopts a deterministic response R of which a noisy measurement M is made. Given a large number of measurements and corresponding stimuli , we wish to identify the correct " response function " relating R to S . However the "noise function" relating M to R is unknown a priori. Here we show that maximizing likelihood over both response functions and noise functions is equivalent to simply identifying maximally informative response functions -- ones that maximize the mutual information I[ R;M ] between predicted responses and corresponding measurements . Moreover, if the correct response function is in the class of models being explored, maximizing mutual information becomes equivalent to simultaneously maximizing every dependence measure that satisfies the Data Processing Inequality. We note that experiments of the type considered are unable to distinguish between parametrized response functions lying along certain "diffeomorphic modes" in parameter space . We show how to derive these diffeomorphic modes and observe, fortunately, that such modes typically span a very low-dimensional subspace of parameter space. Therefore, given sufficient data, maximizing mutual information can pinpoint nearly all response function parameters without requiring any model of experimental noise .
Motivated by data-rich experiments in transcriptional regulation and sensory neuroscience, we consider the following general problem in statistical inference. When exposed to a high-dimensional signal S, a system of interest computes a representation R of that signal which is then observed through a noisy measurement M . From a large number of signals and measurements , we wish to infer the " filter " that maps S to R. However, the standard method for solving such problems, likelihood-based inference, requires perfect a priori knowledge of the "noise function" mapping R to M. In practice such noise functions are usually known only approximately, if at all, and using an incorrect noise function will typically bias the inferred filter. Here we show that , in the large data limit, this need for a pre-characterized noise function can be circumvented by searching for filters that instead maximize the mutual information I[ M;R ] between observed measurements and predicted representations . Moreover, if the correct filter lies within the space of filters being explored, maximizing mutual information becomes equivalent to simultaneously maximizing every dependence measure that satisfies the Data Processing Inequality. It is important to note that maximizing mutual information will typically leave a small number of directions in parameter space unconstrained. We term these directions "diffeomorphic modes" and present an equation that allows these modes to be derived systematically. The presence of diffeomorphic modes reflects a fundamental and nontrivial substructure within parameter space, one that is obscured by standard likelihood-based inference .
[ { "type": "R", "before": "A system of interest, when", "after": "When", "start_char_pos": 159, "end_char_pos": 185 }, { "type": "R", "before": "stimulus S, adopts a deterministic response R of which", "after": "high-dimensional signal S,", "start_char_pos": 199, "end_char_pos": 253 }, { "type": "A", "before": null, "after": "system of interest computes a representation R of that signal which is then observed through a", "start_char_pos": 256, "end_char_pos": 256 }, { "type": "R", "before": "is made. Given", "after": ". From", "start_char_pos": 277, "end_char_pos": 291 }, { "type": "R", "before": "measurements and corresponding stimuli", "after": "signals and measurements", "start_char_pos": 310, "end_char_pos": 348 }, { "type": "R", "before": "identify the correct", "after": "infer the", "start_char_pos": 362, "end_char_pos": 382 }, { "type": "R", "before": "response function", "after": "filter", "start_char_pos": 385, "end_char_pos": 402 }, { "type": "R", "before": "relating R to S . However the", "after": "that maps S to R. However, the standard method for solving such problems, likelihood-based inference, requires perfect a priori knowledge of the", "start_char_pos": 405, "end_char_pos": 434 }, { "type": "R", "before": "relating M to R is unknown a priori.", "after": "mapping R to M. In practice such noise functions are usually known only approximately, if at all, and using an incorrect noise function will typically bias the inferred filter.", "start_char_pos": 452, "end_char_pos": 488 }, { "type": "R", "before": "maximizing likelihood over both response functions and noise functions is equivalent to simply identifying maximally informative response functions -- ones that", "after": ", in the large data limit, this need for a pre-characterized noise function can be circumvented by searching for filters that instead", "start_char_pos": 507, "end_char_pos": 667 }, { "type": "R", "before": "R;M", "after": "M;R", "start_char_pos": 703, "end_char_pos": 706 }, { "type": "R", "before": "predicted responses and corresponding measurements", "after": "observed measurements and predicted representations", "start_char_pos": 717, "end_char_pos": 767 }, { "type": "R", "before": "response function is in the class of models", "after": "filter lies within the space of filters", "start_char_pos": 795, "end_char_pos": 838 }, { "type": "R", "before": "We note that experiments of the type considered are unable to distinguish between parametrized response functions lying along certain \"diffeomorphic modes\"", "after": "It is important to note that maximizing mutual information will typically leave a small number of directions", "start_char_pos": 1005, "end_char_pos": 1160 }, { "type": "R", "before": ". We show how to derive these diffeomorphic modes and observe, fortunately, that such modes typically span a very low-dimensional subspace of parameter space. Therefore, given sufficient data, maximizing mutual information can pinpoint nearly all response function parameters without requiring any model of experimental noise", "after": "unconstrained. We term these directions \"diffeomorphic modes\" and present an equation that allows these modes to be derived systematically. The presence of diffeomorphic modes reflects a fundamental and nontrivial substructure within parameter space, one that is obscured by standard likelihood-based inference", "start_char_pos": 1180, "end_char_pos": 1505 } ]
[ 0, 158, 285, 488, 769, 1004, 1181, 1338 ]
1212.3716
1
PD curve calibration refers to the task of transforming a set of conditional probabilities of default (PDs) to another average PD level that is determined by a change of the underlying unconditional PD. This paper presents a framework that allows to explore a variety of calibration techniques and the conditions under which they are fit for purpose. We test the techniques discussed by applying them to a publicly available dataset of agency rating and default statistics that can be considered typical for the scope of application of the techniques . We show that the popular technique of 'scaling the PD curve' is theoretically questionable and does not perform well on the test datasets. We identify two calibration techniques that are both theoretically sound and perform much better on the test datasets .
PD curve calibration refers to the transformation of a set of rating grade-level probabilities of default (PDs) to another average PD level that is determined by a change of the underlying portfolio-wide PD. This paper presents a framework that allows to explore a variety of calibration approaches and the conditions under which they are fit for purpose. We test the approaches discussed by applying them to a publicly available dataset of agency rating and default statistics that can be considered typical for the scope of application of the approaches . We show that the popular 'scaled PD curve' approach is theoretically questionable and does not perform well on the test datasets. We identify two calibration approaches that are both theoretically sound and perform much better on the test datasets . Keywords: Probability of default, calibration, likelihood ratio, Bayes' formula, rating profile, binary classification .
[ { "type": "R", "before": "task of transforming", "after": "transformation of", "start_char_pos": 35, "end_char_pos": 55 }, { "type": "R", "before": "conditional", "after": "rating grade-level", "start_char_pos": 65, "end_char_pos": 76 }, { "type": "R", "before": "unconditional", "after": "portfolio-wide", "start_char_pos": 185, "end_char_pos": 198 }, { "type": "R", "before": "techniques", "after": "approaches", "start_char_pos": 283, "end_char_pos": 293 }, { "type": "R", "before": "techniques", "after": "approaches", "start_char_pos": 363, "end_char_pos": 373 }, { "type": "R", "before": "techniques", "after": "approaches", "start_char_pos": 540, "end_char_pos": 550 }, { "type": "R", "before": "technique of 'scaling the", "after": "'scaled", "start_char_pos": 578, "end_char_pos": 603 }, { "type": "A", "before": null, "after": "approach", "start_char_pos": 614, "end_char_pos": 614 }, { "type": "R", "before": "techniques", "after": "approaches", "start_char_pos": 721, "end_char_pos": 731 }, { "type": "A", "before": null, "after": ". Keywords: Probability of default, calibration, likelihood ratio, Bayes' formula, rating profile, binary classification", "start_char_pos": 811, "end_char_pos": 811 } ]
[ 0, 202, 350, 552, 692 ]
1212.3716
2
PD curve calibration refers to the transformation of a set of rating grade-level probabilities of default (PDs) to another average PD level that is determined by a change of the underlying portfolio-wide PD. This paper presents a framework that allows to explore a variety of calibration approaches and the conditions under which they are fit for purpose. We test the approaches discussed by applying them to a publicly available dataset of agency rating and default statistics that can be considered typical for the scope of application of the approaches. We show that the popular 'scaled PD curve ' approach is theoretically questionable and does not perform well on the test datasets. We identify two calibration approaches that are both theoretically sound and perform much better on the test datasets. Keywords: Probability of default, calibration, likelihood ratio, Bayes' formula, rating profile, binary classification.
PD curve calibration refers to the transformation of a set of rating grade level probabilities of default (PDs) to another average PD level that is determined by a change of the underlying portfolio-wide PD. This paper presents a framework that allows to explore a variety of calibration approaches and the conditions under which they are fit for purpose. We test the approaches discussed by applying them to publicly available datasets of agency rating and default statistics that can be considered typical for the scope of application of the approaches. We show that the popular 'scaled PDs ' approach is theoretically questionable and identify an alternative calibration approach ('scaled likelihood ratio') that is both theoretically sound and performs better on the test datasets. Keywords: Probability of default, calibration, likelihood ratio, Bayes' formula, rating profile, binary classification.
[ { "type": "R", "before": "grade-level", "after": "grade level", "start_char_pos": 69, "end_char_pos": 80 }, { "type": "R", "before": "a publicly available dataset", "after": "publicly available datasets", "start_char_pos": 409, "end_char_pos": 437 }, { "type": "R", "before": "PD curve", "after": "PDs", "start_char_pos": 590, "end_char_pos": 598 }, { "type": "R", "before": "does not perform well on the test datasets. We identify two calibration approaches that are", "after": "identify an alternative calibration approach ('scaled likelihood ratio') that is", "start_char_pos": 644, "end_char_pos": 735 }, { "type": "R", "before": "perform much", "after": "performs", "start_char_pos": 765, "end_char_pos": 777 } ]
[ 0, 207, 355, 556, 687, 806 ]
1212.4312
1
We propose a coarse-grained modeling strategy for peptides where the effect of changes of the pH can be efficiently described. The idea is based on modeling the effects of the pH value on the main driving interactions using reference data from atomistic simulations and experimental databases and transferring its main physical features to the coarse-grained resolution according the principle of consistency across the scales . After refining the coarse-grained model appropriately this was achieved by finding a unique set of parameters for the coarse-grained model that, when applied to peptides with different sequences and experimental properties, reproduces the experimental and atomistic data of reference. We used the such parametrized model for performing several numerical tests to check the universality of the model . We have tried systems with rather different response to pH variations, showing a highly satisfactory performance of the model.
We propose the first, to our knowledge, coarse-grained modeling strategy for peptides where the effect of changes of the pH can be efficiently described. The idea is based on modeling the effects of the pH value on the main driving interactions . We use reference data from atomistic simulations and experimental databases and transfer its main physical features to the coarse-grained resolution according the principle of " consistency across the scales ". The coarse-grained model is refined by finding a set of parameters that, when applied to peptides with different sequences and experimental properties, reproduces the experimental and atomistic data of reference. We use the such parameterized model for performing several numerical tests to check its transferability to other systems and to prove the universality of the related modeling strategy . We have tried systems with rather different response to pH variations, showing a highly satisfactory performance of the model.
[ { "type": "R", "before": "a", "after": "the first, to our knowledge,", "start_char_pos": 11, "end_char_pos": 12 }, { "type": "R", "before": "using", "after": ". We use", "start_char_pos": 218, "end_char_pos": 223 }, { "type": "R", "before": "transferring", "after": "transfer", "start_char_pos": 297, "end_char_pos": 309 }, { "type": "A", "before": null, "after": "\"", "start_char_pos": 397, "end_char_pos": 397 }, { "type": "R", "before": ". After refining the", "after": "\". The", "start_char_pos": 428, "end_char_pos": 448 }, { "type": "R", "before": "appropriately this was achieved", "after": "is refined", "start_char_pos": 470, "end_char_pos": 501 }, { "type": "D", "before": "unique", "after": null, "start_char_pos": 515, "end_char_pos": 521 }, { "type": "D", "before": "for the coarse-grained model", "after": null, "start_char_pos": 540, "end_char_pos": 568 }, { "type": "R", "before": "used the such parametrized", "after": "use the such parameterized", "start_char_pos": 718, "end_char_pos": 744 }, { "type": "A", "before": null, "after": "its transferability to other systems and to prove", "start_char_pos": 799, "end_char_pos": 799 }, { "type": "R", "before": "model", "after": "related modeling strategy", "start_char_pos": 824, "end_char_pos": 829 } ]
[ 0, 126, 574, 714, 831 ]
1212.4470
1
We report the experimental verification of noise-enhanced logic behaviour in an electronic analog of a synthetic genetic network, composed of two repressors and two constructive promoters. We observe good agreement between circuit measurements and numerical prediction, with the circuit allowing for robust logic operations in an optimal window of noise. Namely, the input-output characteristics of a logic gate is reproduced faithfully under moderate noise, which is a manifestation of the phenomenon known as Logical Stochastic Resonance. Interestingly, the two dynamical variables in the system yield complementary logic behaviour simultaneously , indicating strong potential for parallel processing .
We report the experimental verification of noise-enhanced logic behaviour in an electronic analog of a synthetic genetic network, composed of two repressors and two constitutive promoters. We observe good agreement between circuit measurements and numerical prediction, with the circuit allowing for robust logic operations in an optimal window of noise. Namely, the input-output characteristics of a logic gate is reproduced faithfully under moderate noise, which is a manifestation of the phenomenon known as Logical Stochastic Resonance. The two dynamical variables in the system yield complementary logic behaviour simultaneously . The system is easily morphed from AND/NAND to OR/NOR logic .
[ { "type": "R", "before": "constructive", "after": "constitutive", "start_char_pos": 165, "end_char_pos": 177 }, { "type": "R", "before": "Interestingly, the", "after": "The", "start_char_pos": 541, "end_char_pos": 559 }, { "type": "R", "before": ", indicating strong potential for parallel processing", "after": ". The system is easily morphed from AND/NAND to OR/NOR logic", "start_char_pos": 649, "end_char_pos": 702 } ]
[ 0, 188, 354, 540 ]
1212.4491
1
Modelling of ion transport via plasma membrane needs identification and quantitative understanding of the involved processes. Brief characterisation of ion transport systems of a yeast cell (Pma1, Ena1, TOK1, Nha1, Trk1, Trk2, non-selective cation conductance) and estimates concerning the number of molecules of each transporter per a cell allow predicting the corresponding ion flows. Comparison of ion transport in small yeast cell and several animal cell types is provided and importance of cell volume to surface ratio is stressed. Role of cell wall and lipid rafts is discussed in aspect of required increase in spatial and temporary resolution of measurements. Conclusions are formulated to describe specific features of ion transport in a yeast cell. Potential directions of future research are outlined based on the assumptions.
Modeling of ion transport via plasma membrane needs identification and quantitative understanding of the involved processes. Brief characterization of main ion transport systems of a yeast cell (Pma1, Ena1, TOK1, Nha1, Trk1, Trk2, non-selective cation conductance) and determining the exact number of molecules of each transporter per a typical cell allow us to predict the corresponding ion flows. In this review a comparison of ion transport in small yeast cell and several animal cell types is provided . The importance of cell volume to surface ratio is emphasized. The role of cell wall and lipid rafts is discussed in respect to required increase in spatial and temporal resolution of measurements. Conclusions are formulated to describe specific features of ion transport in a yeast cell. Potential directions of future research are outlined based on the assumptions.
[ { "type": "R", "before": "Modelling", "after": "Modeling", "start_char_pos": 0, "end_char_pos": 9 }, { "type": "R", "before": "characterisation of", "after": "characterization of main", "start_char_pos": 132, "end_char_pos": 151 }, { "type": "R", "before": "estimates concerning the", "after": "determining the exact", "start_char_pos": 265, "end_char_pos": 289 }, { "type": "R", "before": "cell allow predicting", "after": "typical cell allow us to predict", "start_char_pos": 336, "end_char_pos": 357 }, { "type": "R", "before": "Comparison", "after": "In this review a comparison", "start_char_pos": 387, "end_char_pos": 397 }, { "type": "R", "before": "and", "after": ". The", "start_char_pos": 477, "end_char_pos": 480 }, { "type": "R", "before": "stressed. Role", "after": "emphasized. The role", "start_char_pos": 527, "end_char_pos": 541 }, { "type": "R", "before": "aspect of", "after": "respect to", "start_char_pos": 587, "end_char_pos": 596 }, { "type": "R", "before": "temporary", "after": "temporal", "start_char_pos": 630, "end_char_pos": 639 } ]
[ 0, 125, 386, 536, 667, 758 ]
1212.4733
1
In a recent paper by Noy and Golestanian (Phys. Rev. Lett. 109, 228101, 2012) the elastic properties of DNA were studied by molecular dynamics (MD) simulations . Two important conclusions were made: (i) the computed bending and twisting rigidities of DNA agree well with experimental data , and (ii) for lengths of a few helical turns, DNA dynamics exhibits long-range correlations in qualitative difference from the worm-like rod (WLR) model. Earlier similar studies showed that (i) the current MD forcefields systematically overestimate the DNA rigidity, and (ii) MD trajectories of DNA involve only short-range correlations; no deviations from the WLR model are detectable if MD data are analyzed properly . Here it is argued that the data analysis in the above mentioned paper was not correct and that the earlier conclusions are valid.
Recent experimental data indicate that the elastic wormlike rod model of DNA that works well on long length scales may break down on shorter scales relevant to biology. According to Noy and Golestanian (Phys. Rev. Lett. 109, 228101, 2012) molecular dynamics (MD) simulations predict DNA rigidity close to experimental data and confirm one scenario of such breakdown, namely, that for lengths of a few helical turns, DNA dynamics exhibit long-range bending and stretching correlations. Earlier studies using similar forcefields concluded that (i) MD systematically overestimate the DNA rigidity, and (ii) no deviations from the WLR model are detectable . Here it is argued that the data analysis in the above mentioned paper was incorrect and that the earlier conclusions are valid.
[ { "type": "R", "before": "In a recent paper by", "after": "Recent experimental data indicate that the elastic wormlike rod model of DNA that works well on long length scales may break down on shorter scales relevant to biology. According to", "start_char_pos": 0, "end_char_pos": 20 }, { "type": "D", "before": "the elastic properties of DNA were studied by", "after": null, "start_char_pos": 78, "end_char_pos": 123 }, { "type": "R", "before": ". Two important conclusions were made: (i) the computed bending and twisting rigidities of DNA agree well with experimental data , and (ii)", "after": "predict DNA rigidity close to experimental data and confirm one scenario of such breakdown, namely, that", "start_char_pos": 160, "end_char_pos": 299 }, { "type": "R", "before": "exhibits", "after": "exhibit", "start_char_pos": 349, "end_char_pos": 357 }, { "type": "R", "before": "correlations in qualitative difference from the worm-like rod (WLR) model. Earlier similar studies showed", "after": "bending and stretching correlations. Earlier studies using similar forcefields concluded", "start_char_pos": 369, "end_char_pos": 474 }, { "type": "R", "before": "the current MD forcefields", "after": "MD", "start_char_pos": 484, "end_char_pos": 510 }, { "type": "D", "before": "MD trajectories of DNA involve only short-range correlations;", "after": null, "start_char_pos": 566, "end_char_pos": 627 }, { "type": "D", "before": "if MD data are analyzed properly", "after": null, "start_char_pos": 676, "end_char_pos": 708 }, { "type": "R", "before": "not correct", "after": "incorrect", "start_char_pos": 785, "end_char_pos": 796 } ]
[ 0, 47, 58, 161, 443, 627, 710 ]
1212.4745
1
We present a minimal motif model for transmembrane cell signaling. The model assumes signaling events taking place in spatially distributed nanoclusters regulated by a birth/death dynamics. The combination of these spatio-temporal aspects can be modulated to provide a switch-like response behavior without invoking sophisticated modeling of the signaling process as a sequence of cascade reactions and fine-tuned parameters. Our results also show that the URLanization of the signaling process and the fact that the distributed events take place in nanoclusters with a finite lifetime regulated by local production is sufficient to obtain a robust and high-fidelity response.
We present a minimal motif model for transmembrane cell signaling. The model assumes signaling events taking place in spatially distributed nanoclusters regulated by a birth/death dynamics. The combination of these spatio-temporal aspects can be modulated to provide a robust and high-fidelity response behavior without invoking sophisticated modeling of the signaling process as a sequence of cascade reactions and fine-tuned parameters. Our results show that the fact that the distributed signaling events take place in nanoclusters with a finite lifetime regulated by local production is sufficient to obtain a robust and high-fidelity response.
[ { "type": "R", "before": "switch-like", "after": "robust and high-fidelity", "start_char_pos": 269, "end_char_pos": 280 }, { "type": "D", "before": "also", "after": null, "start_char_pos": 438, "end_char_pos": 442 }, { "type": "D", "before": "URLanization of the signaling process and the", "after": null, "start_char_pos": 457, "end_char_pos": 502 }, { "type": "A", "before": null, "after": "signaling", "start_char_pos": 529, "end_char_pos": 529 } ]
[ 0, 66, 189, 425 ]
1212.4890
1
The goal of this study is to make connections between Bollinger Bands and time series models in order to gain a better understanding of the statistical underpinnings of Bollinger Bands. In the first part of the study, we review a popular econometric model called the rolling regression time series model and illustrate an equivalence between the latter and the Bollinger Band methodology. In the second part of the study, we illustrate the use of Bollinger Bands in pairs trading \mbox{%DIFAUXCMD INV2007 return duration relationship in Bollinger Bands pairs trading.Then , by viewing Bollinger Bands as an approximation to the random walk plus noise (RWPN) time series model, we are able to modify the Bollinger Band algorithm used in pairs trading and develop a pairs trading variant that we call "Fixed Forecast Maximum Duration Bands" (FFMDPT). Finally , we conduct historical simulations using SAP-Nikkei data in order to compare the performance of the variant with Bollinger Bands in order to analyze its advantages and disadvantages .
The goal of this study is to explain and examine the statistical underpinnings of the Bollinger Band methodology. We start off by elucidating the rolling regression time series model and deriving its explicit relationship to Bollinger Bands. Next we illustrate the use of Bollinger Bands in pairs trading and prove the existence of a specific return duration relationship in Bollinger Band pairs trading.Then by viewing the Bollinger Band moving average as an approximation to the random walk plus noise (RWPN) time series model, we develop a pairs trading variant that we call "Fixed Forecast Maximum Duration ' Bands" (FFMDPT). Lastly , we conduct pairs trading simulations using SAP and Nikkei index data in order to compare the performance of the variant with Bollinger Bands .
[ { "type": "R", "before": "make connections between Bollinger Bands and time series models in order to gain a better understanding of", "after": "explain and examine", "start_char_pos": 29, "end_char_pos": 135 }, { "type": "R", "before": "Bollinger Bands. In the first part of the study, we review a popular econometric model called the", "after": "the Bollinger Band methodology. We start off by elucidating the", "start_char_pos": 169, "end_char_pos": 266 }, { "type": "R", "before": "illustrate an equivalence between the latter and the Bollinger Band methodology. In the second part of the study,", "after": "deriving its explicit relationship to Bollinger Bands. Next", "start_char_pos": 308, "end_char_pos": 421 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD INV2007", "after": "and prove the existence of a specific", "start_char_pos": 480, "end_char_pos": 504 }, { "type": "R", "before": "Bands", "after": "Band", "start_char_pos": 547, "end_char_pos": 552 }, { "type": "R", "before": ", by viewing Bollinger Bands", "after": "by viewing the Bollinger Band moving average", "start_char_pos": 572, "end_char_pos": 600 }, { "type": "D", "before": "are able to modify the Bollinger Band algorithm used in pairs trading and", "after": null, "start_char_pos": 680, "end_char_pos": 753 }, { "type": "A", "before": null, "after": "'", "start_char_pos": 832, "end_char_pos": 832 }, { "type": "R", "before": "Finally", "after": "Lastly", "start_char_pos": 850, "end_char_pos": 857 }, { "type": "R", "before": "historical simulations using SAP-Nikkei", "after": "pairs trading simulations using SAP and Nikkei index", "start_char_pos": 871, "end_char_pos": 910 }, { "type": "D", "before": "in order to analyze its advantages and disadvantages", "after": null, "start_char_pos": 988, "end_char_pos": 1040 } ]
[ 0, 185, 388, 567, 849 ]
1212.4894
1
We consider controller-stopper problems in which the controlled processes can have jumps. The global filtration is represented by the Brownian filtration, enlarged by the filtration generated by the jump process. We assume that the Brownian motion and jump process are independent, and there exists a probability density function for the jump times and marks . Under these assumptions, we decompose the global controller-stopper problem into controller-stopper problems with respect to the Brownian filtration, which are determined by a backward induction. We apply our decomposition method to indifference pricing of American options under multiple default risk. The backward induction leads to a system of reflected backward stochastic differential equations (RBSDEs). We show that there exists a solution to this RBSDE system and that the solution provides a characterization of the value function.
We consider controller-stopper problems in which the controlled processes can have jumps. The global filtration is represented by the Brownian filtration, enlarged by the filtration generated by the jump process. We assume that there exists a conditional probability density function for the jump times and marks given the filtration of the Brownian motion and decompose the global controller-stopper problem into controller-stopper problems with respect to the Brownian filtration, which are determined by a backward induction. We apply our decomposition method to indifference pricing of American options under multiple default risk. The backward induction leads to a system of reflected backward stochastic differential equations (RBSDEs). We show that there exists a solution to this RBSDE system and that the solution provides a characterization of the value function.
[ { "type": "D", "before": "the Brownian motion and jump process are independent, and", "after": null, "start_char_pos": 228, "end_char_pos": 285 }, { "type": "A", "before": null, "after": "conditional", "start_char_pos": 301, "end_char_pos": 301 }, { "type": "R", "before": ". Under these assumptions, we", "after": "given the filtration of the Brownian motion and", "start_char_pos": 360, "end_char_pos": 389 } ]
[ 0, 89, 212, 361, 557, 664, 771 ]
1212.5109
1
Construction of synthetic genetic networks requires the assembly of DNA fragments encoding functional biological parts in a defined order. Yet this may become a time-consuming procedure. To address this technical bottleneck, we have created a series of Gateway shuttle vectors , which facilitate the assembly of artificial genes and their expression in the budding yeast Saccharomyces cerevisiae. Our method enables the rapid construction of an artificial gene from a promoter and an open reading frame (ORF) cassette by one-step recombination reaction in vitro. Furthermore, the expression vector thus created can readily be introduced into yeast cells to test the assembled gene's functionality. As flexible regulatory components of a synthetic genetic network, we also created new versions of the tetracycline-regulated transactivators tTA and rtTA by fusing them to the auxin-inducible degron (AID). Using our gene assembly approach, we made yeast expression vectors of these engineered transactivators, AIDtTA and AIDrtTA , and tested their functions in yeast. We showed that these factors can be regulated by doxycycline and degraded rapidly after addition of auxin to the medium. Taken together, the method for combinatorial gene assembly described here is versatile and would be a valuable tool for yeast synthetic biology.
Construction of synthetic genetic networks requires the assembly of DNA fragments encoding functional biological parts in a defined order. Yet this may become a time-consuming procedure. To address this technical bottleneck, we have created a series of Gateway shuttle vectors and an integration vector , which facilitate the assembly of artificial genes and their expression in the budding yeast Saccharomyces cerevisiae. Our method enables the rapid construction of an artificial gene from a promoter and an open reading frame (ORF) cassette by one-step recombination reaction in vitro. Furthermore, the plasmid thus created can readily be introduced into yeast cells to test the assembled gene's functionality. As flexible regulatory components of a synthetic genetic network, we also created new versions of the tetracycline-regulated transactivators tTA and rtTA by fusing them to the auxin-inducible degron (AID). Using our gene assembly approach, we made yeast expression vectors of these engineered transactivators, AIDtTA and AIDrtTA and then tested their functions in yeast. We showed that these factors can be regulated by doxycycline and degraded rapidly after addition of auxin to the medium. Taken together, the method for combinatorial gene assembly described here is versatile and would be a valuable tool for yeast synthetic biology.
[ { "type": "A", "before": null, "after": "and an integration vector", "start_char_pos": 277, "end_char_pos": 277 }, { "type": "R", "before": "expression vector", "after": "plasmid", "start_char_pos": 581, "end_char_pos": 598 }, { "type": "R", "before": ", and", "after": "and then", "start_char_pos": 1028, "end_char_pos": 1033 } ]
[ 0, 138, 186, 397, 563, 698, 904, 1066, 1187 ]
1212.5563
1
Equivalent characterizations of multi-portfolio time consistency are deduced for closed convex and coherent set-valued risk measures on L^p_d . In the convex case, multi-portfolio time consistency is equivalent to a condition on the sum of minimal penalty functions. The proof of this results is entirely different from the proof in the scalar case as the scalar method cannot be applied here. In the coherent case, multi-portfolio time consistency is equivalent to a generalized version of stability of the dual variables. As examples, the set of superhedging portfolios in markets with transaction costs is shown to have the stability property and a multi-portfolio time consistent version of the set-valued average value at risk, the composed AV@R, is given and its dual representation deduced.
Equivalent characterizations of multi-portfolio time consistency are deduced for closed convex and coherent set-valued risk measures on L^p_d (F_T) with image space in the power set of L^p_d(F_t) . In the convex case, multi-portfolio time consistency is equivalent to a cocycle condition on the sum of minimal penalty functions. In the coherent case, multi-portfolio time consistency is equivalent to a generalized version of stability of the dual variables. As examples, the set-valued entropic risk measure with constant risk aversion coefficient is shown to satisfy the cocycle condition for its minimal penalty functions, the set of superhedging portfolios in markets with proportional transaction costs is shown to have the stability property and in markets with convex transaction costs is shown to satisfy the composed cocycle condition, and a multi-portfolio time consistent version of the set-valued average value at risk, the composed AV@R, is given and its dual representation deduced.
[ { "type": "A", "before": null, "after": "(F_T) with image space in the power set of L^p_d(F_t)", "start_char_pos": 142, "end_char_pos": 142 }, { "type": "A", "before": null, "after": "cocycle", "start_char_pos": 217, "end_char_pos": 217 }, { "type": "D", "before": "The proof of this results is entirely different from the proof in the scalar case as the scalar method cannot be applied here.", "after": null, "start_char_pos": 269, "end_char_pos": 395 }, { "type": "A", "before": null, "after": "set-valued entropic risk measure with constant risk aversion coefficient is shown to satisfy the cocycle condition for its minimal penalty functions, the", "start_char_pos": 543, "end_char_pos": 543 }, { "type": "A", "before": null, "after": "proportional", "start_char_pos": 591, "end_char_pos": 591 }, { "type": "A", "before": null, "after": "and in markets with convex transaction costs is shown to satisfy the composed cocycle condition,", "start_char_pos": 650, "end_char_pos": 650 } ]
[ 0, 268, 395, 525 ]
1212.5563
2
Equivalent characterizations of multi-portfolio time consistency are deduced for closed convex and coherent set-valued risk measures on L ^p_d(F_T ) with image space in the power set of L ^p_d(F_t ). In the convex case, multi-portfolio time consistency is equivalent to a cocycle condition on the sum of minimal penalty functions. In the coherent case, multi-portfolio time consistency is equivalent to a generalized version of stability of the dual variables. As examples, the set-valued entropic risk measure with constant risk aversion coefficient is shown to satisfy the cocycle condition for its minimal penalty functions, the set of superhedging portfolios in markets with proportional transaction costs is shown to have the stability property and in markets with convex transaction costs is shown to satisfy the composed cocycle condition, and a multi-portfolio time consistent version of the set-valued average value at risk, the composed AV@R, is given and its dual representation deduced.
Equivalent characterizations of multi-portfolio time consistency are deduced for closed convex and coherent set-valued risk measures on L _d^p(\Omega,\mathcal F_T,\mathbb P ) with image space in the power set of L _d^p(\Omega,\mathcal F_t,\mathbb P ). In the convex case, multi-portfolio time consistency is equivalent to a cocycle condition on the sum of minimal penalty functions. In the coherent case, multi-portfolio time consistency is equivalent to a generalized version of stability of the dual variables. As examples, the set-valued entropic risk measure with constant risk aversion coefficient is shown to satisfy the cocycle condition for its minimal penalty functions, the set of superhedging portfolios in markets with proportional transaction costs is shown to have the stability property and in markets with convex transaction costs is shown to satisfy the composed cocycle condition, and a multi-portfolio time consistent version of the set-valued average value at risk, the composed AV@R, is given and its dual representation deduced.
[ { "type": "R", "before": "^p_d(F_T", "after": "_d^p(\\Omega,\\mathcal F_T,\\mathbb P", "start_char_pos": 138, "end_char_pos": 146 }, { "type": "R", "before": "^p_d(F_t", "after": "_d^p(\\Omega,\\mathcal F_t,\\mathbb P", "start_char_pos": 188, "end_char_pos": 196 } ]
[ 0, 199, 330, 460, 473, 627 ]
1212.5563
3
Equivalent characterizations of multi-portfolio time consistency are deduced for closed convex and coherent set-valued risk measures on L _d ^p(\Omega,\mathcal F _T, \mathbb P ) with image space in the power set of L _d ^p(\Omega,\mathcal F_t, \mathbb P ). In the convex case, multi-portfolio time consistency is equivalent to a cocycle condition on the sum of minimal penalty functions. In the coherent case, multi-portfolio time consistency is equivalent to a generalized version of stability of the dual variables. As examples, the set-valued entropic risk measure with constant risk aversion coefficient is shown to satisfy the cocycle condition for its minimal penalty functions, the set of superhedging portfolios in markets with proportional transaction costs is shown to have the stability property and in markets with convex transaction costs is shown to satisfy the composed cocycle condition, and a multi-portfolio time consistent version of the set-valued average value at risk, the composed AV@R, is given and its dual representation deduced.
Equivalent characterizations of multiportfolio time consistency are deduced for closed convex and coherent set-valued risk measures on L ^p(\Omega,\mathcal F , P; R^d ) with image space in the power set of L ^p(\Omega,\mathcal F_t, P;R^d ). In the convex case, multiportfolio time consistency is equivalent to a cocycle condition on the sum of minimal penalty functions. In the coherent case, multiportfolio time consistency is equivalent to a generalized version of stability of the dual variables. As examples, the set-valued entropic risk measure with constant risk aversion coefficient is shown to satisfy the cocycle condition for its minimal penalty functions, the set of superhedging portfolios in markets with proportional transaction costs is shown to have the stability property and in markets with convex transaction costs is shown to satisfy the composed cocycle condition, and a multiportfolio time consistent version of the set-valued average value at risk, the composed AV@R, is given and its dual representation deduced.
[ { "type": "R", "before": "multi-portfolio", "after": "multiportfolio", "start_char_pos": 32, "end_char_pos": 47 }, { "type": "D", "before": "_d", "after": null, "start_char_pos": 138, "end_char_pos": 140 }, { "type": "R", "before": "_T, \\mathbb P", "after": ", P; R^d", "start_char_pos": 162, "end_char_pos": 175 }, { "type": "D", "before": "_d", "after": null, "start_char_pos": 217, "end_char_pos": 219 }, { "type": "R", "before": "\\mathbb P", "after": "P;R^d", "start_char_pos": 244, "end_char_pos": 253 }, { "type": "R", "before": "multi-portfolio", "after": "multiportfolio", "start_char_pos": 277, "end_char_pos": 292 }, { "type": "R", "before": "multi-portfolio", "after": "multiportfolio", "start_char_pos": 410, "end_char_pos": 425 }, { "type": "R", "before": "multi-portfolio", "after": "multiportfolio", "start_char_pos": 910, "end_char_pos": 925 } ]
[ 0, 256, 387, 517, 530, 684 ]
1212.5712
1
Various biological sensory systems exhibit a response to the relative change of the stimulus, often reffered to as fold-change detection. Here, we present a mechanism consisting of two interacting proteins, able to detect a fold-change effectively . This mechanism, in contrast to other proposed mechanisms, does not consume chemical energy and is not subject to transcriptional and translational noise. We show by analytical and numerical calculations that the mechanism can have a fast, precise and efficient response for parameters that are relevant to eukaryotic cells.
Various biological sensory systems exhibit a response to the relative change of the stimulus, often referred to as fold-change detection. Here, we present a mechanism consisting of two interacting proteins, that effectively detects a fold-change of the stimulus . This mechanism, in contrast to previously proposed mechanisms, does not consume chemical energy and is not subject to transcriptional and translational noise. We show by analytical and numerical calculations that the mechanism can have a fast, precise and effcient response for parameters that are relevant to eukaryotic cells.
[ { "type": "R", "before": "reffered", "after": "referred", "start_char_pos": 100, "end_char_pos": 108 }, { "type": "R", "before": "able to detect", "after": "that effectively detects", "start_char_pos": 207, "end_char_pos": 221 }, { "type": "R", "before": "effectively", "after": "of the stimulus", "start_char_pos": 236, "end_char_pos": 247 }, { "type": "R", "before": "other", "after": "previously", "start_char_pos": 281, "end_char_pos": 286 }, { "type": "R", "before": "efficient", "after": "effcient", "start_char_pos": 501, "end_char_pos": 510 } ]
[ 0, 137, 249, 403 ]
1212.5712
2
Various biological sensory systems exhibit a response to the relative change of the stimulus, often referred to as fold-change detection. Here, we present a mechanism consisting of two interacting proteins , that effectively detects a fold-change of the stimulus . This mechanism, in contrast to previously proposed mechanisms, does not consume chemical energy and is not subject to transcriptional and translational noise. We show by analytical and numerical calculations that the mechanism can have a fast, precise and effcient response for parameters that are relevant to eukaryotic cells.
Various biological sensory systems exhibit a response to a relative change of the stimulus, often referred to as fold-change detection. In the last few years fold-change detecting mechanisms, based on transcriptional networks, have been proposed. Here we present fold-change detecting mechanism, based on protein-protein interactions, consisting of two interacting proteins . This mechanism, in contrast to previously proposed mechanisms, does not consume chemical energy and is not subject to transcriptional and translational noise. We show by analytical and numerical calculations , that the mechanism can have a fast, precise and efficient response for parameters that are relevant to eukaryotic cells.
[ { "type": "R", "before": "the", "after": "a", "start_char_pos": 57, "end_char_pos": 60 }, { "type": "R", "before": "Here, we present a mechanism", "after": "In the last few years fold-change detecting mechanisms, based on transcriptional networks, have been proposed. Here we present fold-change detecting mechanism, based on protein-protein interactions,", "start_char_pos": 138, "end_char_pos": 166 }, { "type": "D", "before": ", that effectively detects a fold-change of the stimulus", "after": null, "start_char_pos": 206, "end_char_pos": 262 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 473, "end_char_pos": 473 }, { "type": "R", "before": "effcient", "after": "efficient", "start_char_pos": 522, "end_char_pos": 530 } ]
[ 0, 137, 264, 423 ]
1212.6732
1
We consider the class of risk measures associated with optimized certainty equivalents. This class includes several popular examples, such as CV@R and the entropic risk measure. We develop numerical schemes for the computation of such risk measures using Fourier transform methods. This leads to a very competitive method for the calculation of CV@R in particular, which is comparable in computational time to the calculation of V@R .
We consider the class of risk measures associated with optimized certainty equivalents. This class includes several popular examples, such as CV@R and monotone mean-variance. Numerical schemes are developed for the computation of these risk measures using Fourier transform methods. This leads , in particular, to a very competitive method for the calculation of CV@R which is comparable in computational time to the calculation of V@R . We also develop methods for the efficient computation of risk contributions .
[ { "type": "R", "before": "the entropic risk measure. We develop numerical schemes", "after": "monotone mean-variance. Numerical schemes are developed", "start_char_pos": 151, "end_char_pos": 206 }, { "type": "R", "before": "such", "after": "these", "start_char_pos": 230, "end_char_pos": 234 }, { "type": "A", "before": null, "after": ", in particular,", "start_char_pos": 293, "end_char_pos": 293 }, { "type": "D", "before": "in particular,", "after": null, "start_char_pos": 351, "end_char_pos": 365 }, { "type": "A", "before": null, "after": ". We also develop methods for the efficient computation of risk contributions", "start_char_pos": 434, "end_char_pos": 434 } ]
[ 0, 87, 177, 281 ]
1301.1335
1
The interplay between global constraints and local material properties of chain molecules is a subject of emerging interest. Molecules that are intrinsically chiral, such as double-stranded DNA, is one example. They exhibit a non-vanishing strain-twist coupling, which depends on the local geometry, i.e. on curvature and torsion, yet the paths of closed loops are restricted by White's theorem. We suggest that the reciprocation of these principles leads to a twist neutrality condition . I.e. to a zero sum rule for the incremental change in the rate of winding along the curve . This has direct implications for plasmids. For small circular microDNAs it follows that there must exist a minimum length for these to be double-stranded. A first estimate of this minimum length is 120 base pairs. This is not far from the 80 base pairs which is about the smallest length observed in experimental studies. Slightly longer microDNAs are better described as an ellipse and a relationship between length and eccentricity for these is presented. In summary, molecular stability of chain molecules requires the local geometrical properties and the global topological properties to work in congruency .
The interplay between global constraints and local material properties of chain molecules is a subject of emerging interest. Studies of molecules that are intrinsically chiral, such as double-stranded DNA, is one example. Their properties generally depend on the local geometry, i.e. on curvature and torsion, yet the paths of closed molecules are globally restricted by topology. Molecules that fulfill a twist neutrality condition , a zero sum rule for the incremental change in the rate of winding along the curve , will behave neutrally to strain . This has implications for plasmids. For small circular microDNAs it follows that there must exist a minimum length for these to be double-stranded. It also follows that all microDNAs longer than the minimum length must be concave. This counterintuitive result is consistent with the kink-like appearance which has been observed for circular DNA. A prediction for the total negative curvature of a circular microDNA is given as a function of its length .
[ { "type": "R", "before": "Molecules", "after": "Studies of molecules", "start_char_pos": 125, "end_char_pos": 134 }, { "type": "R", "before": "They exhibit a non-vanishing strain-twist coupling, which depends", "after": "Their properties generally depend", "start_char_pos": 211, "end_char_pos": 276 }, { "type": "R", "before": "loops are restricted by White's theorem. We suggest that the reciprocation of these principles leads to", "after": "molecules are globally restricted by topology. Molecules that fulfill", "start_char_pos": 355, "end_char_pos": 458 }, { "type": "R", "before": ". I.e. to", "after": ",", "start_char_pos": 488, "end_char_pos": 497 }, { "type": "A", "before": null, "after": ", will behave neutrally to strain", "start_char_pos": 580, "end_char_pos": 580 }, { "type": "D", "before": "direct", "after": null, "start_char_pos": 592, "end_char_pos": 598 }, { "type": "R", "before": "A first estimate of this minimum length is 120 base pairs. This is not far from the 80 base pairs which is about the smallest length observed in experimental studies. Slightly longer microDNAs are better described as an ellipse and a relationship between length and eccentricity for these is presented. In summary, molecular stability of chain molecules requires the local geometrical properties and the global topological properties to work in congruency", "after": "It also follows that all microDNAs longer than the minimum length must be concave. This counterintuitive result is consistent with the kink-like appearance which has been observed for circular DNA. A prediction for the total negative curvature of a circular microDNA is given as a function of its length", "start_char_pos": 738, "end_char_pos": 1193 } ]
[ 0, 124, 210, 395, 582, 625, 737, 796, 904, 1040 ]
1301.1496
1
Since risky positions in multivariate portfolios can be offset by various choices of capital requirements that depend on the exchange rules and related transaction costs, it is natural to assume that the risk measures of random vectors are set-valued. Furthermore, it is reasonable to include the exchange rules in the argument of the risk and so consider risk measures of set-valued portfolios. This situation includes the classical Kabanov's transaction costs model, where the set-valued portfolio is given by the sum of a random vector and an exchange cone . The definition of the selection risk measure is based on calling a set-valued portfolio acceptable if it possesses a selection with all individually acceptable marginals. The obtained risk measure is coherent (or convex), law invariant and has values being upper convex closed sets. We describe the dual representation of the selection risk measure and suggest efficient ways of approximating it from below and from above. In case of Kabanov's exchange cone model, it is shown how the selection risk measure relates to the set-valued risk measures considered by Kulikov (2008) and Hamel and Heyde (2010 ).
Since risky positions in multivariate portfolios can be offset by various choices of capital requirements that depend on the exchange rules and related transaction costs, it is natural to assume that the risk measures of random vectors are set-valued. Furthermore, it is reasonable to include the exchange rules in the argument of the risk and so consider risk measures of set-valued portfolios. This situation includes the classical Kabanov's transaction costs model, where the set-valued portfolio is given by the sum of a random vector and an exchange cone , but also a number of further cases of additional liquidity constraints . The definition of the selection risk measure is based on calling a set-valued portfolio acceptable if it possesses a selection with all individually acceptable marginals. The obtained risk measure is coherent (or convex), law invariant and has values being upper convex closed sets. We describe the dual representation of the selection risk measure and suggest efficient ways of approximating it from below and from above. In case of Kabanov's exchange cone model, it is shown how the selection risk measure relates to the set-valued risk measures considered by Kulikov (2008) , Hamel and Heyde (2010 ) and Hamel et al. (2013 ).
[ { "type": "A", "before": null, "after": ", but also a number of further cases of additional liquidity constraints", "start_char_pos": 560, "end_char_pos": 560 }, { "type": "R", "before": "and", "after": ",", "start_char_pos": 1140, "end_char_pos": 1143 }, { "type": "A", "before": null, "after": ") and Hamel et al. (2013", "start_char_pos": 1166, "end_char_pos": 1166 } ]
[ 0, 251, 395, 562, 733, 845, 985 ]
1301.1496
2
Since risky positions in multivariate portfolios can be offset by various choices of capital requirements that depend on the exchange rules and related transaction costs, it is natural to assume that the risk measures of random vectors are set-valued. Furthermore, it is reasonable to include the exchange rules in the argument of the risk and so consider risk measures of set-valued portfolios. This situation includes the classical Kabanov's transaction costs model, where the set-valued portfolio is given by the sum of a random vector and an exchange cone, but also a number of further cases of additional liquidity constraints. The definition of the selection risk measure is based on calling a set-valued portfolio acceptable if it possesses a selection with all individually acceptable marginals. The obtained risk measure is coherent (or convex), law invariant and has values being upper convex closed sets. We describe the dual representation of the selection risk measure and suggest efficient ways of approximating it from below and from above. In case of Kabanov's exchange cone model, it is shown how the selection risk measure relates to the set-valued risk measures considered by Kulikov (2008), Hamel and Heyde (2010) and Hamel et al. (2013).
Since risky positions in multivariate portfolios can be offset by various choices of capital requirements that depend on the exchange rules and related transaction costs, it is natural to assume that the risk measures of random vectors are set-valued. Furthermore, it is reasonable to include the exchange rules in the argument of the risk measure and so consider risk measures of set-valued portfolios. This situation includes the classical Kabanov's transaction costs model, where the set-valued portfolio is given by the sum of a random vector and an exchange cone, but also a number of further cases of additional liquidity constraints. We suggest a definition of the risk measure based on calling a set-valued portfolio acceptable if it possesses a selection with all individually acceptable marginals. The obtained selection risk measure is coherent (or convex), law invariant and has values being upper convex closed sets. We describe the dual representation of the selection risk measure and suggest efficient ways of approximating it from below and from above. In case of Kabanov's exchange cone model, it is shown how the selection risk measure relates to the set-valued risk measures considered by Kulikov (2008), Hamel and Heyde (2010) , and Hamel, Heyde and Rudloff (2013).
[ { "type": "A", "before": null, "after": "measure", "start_char_pos": 340, "end_char_pos": 340 }, { "type": "R", "before": "The", "after": "We suggest a", "start_char_pos": 634, "end_char_pos": 637 }, { "type": "R", "before": "selection risk measure is", "after": "risk measure", "start_char_pos": 656, "end_char_pos": 681 }, { "type": "A", "before": null, "after": "selection", "start_char_pos": 818, "end_char_pos": 818 }, { "type": "R", "before": "and Hamel et al.", "after": ", and Hamel, Heyde and Rudloff", "start_char_pos": 1236, "end_char_pos": 1252 } ]
[ 0, 251, 396, 633, 804, 917, 1057 ]
1301.1876
1
A DNA polymerase (DNAP) replicates a template DNA strand. It also exploits the template as the track for its own motor-like mechanical movement. In the polymerase mode it elongates the nascent DNA by one nucleotide in each step. But, whenever it commits an error by misincorporting an incorrect nucleotide, it can switch to an exonuclease mode. In the latter mode it excises the wrong nucleotide before switching back to its polymerase mode. We capture the effects of mechanical tension F applied on the template DNA within the framework of a stochastic kinetic model of DNA replication . The model mimics an%DIFDELCMD < {\it %%% in-vitro experiment where a single DNAP converts a ssDNA template into a dsDNA by its action . The F-dependence of the average rate of replication, which includes also the effects of correction of misincorporations , is in good qualitative agreement with the corresponding experimental results. Using the methods of first-passage times for same model , we also derive the exact analytical expressions for the probability distributions of nine distinct conditional dwell times of a DNAP . The predicted tension-dependence of these distributions are, in principle, accessible to single-molecule experiments.
A DNA polymerase (DNAP) replicates a template DNA strand. It also exploits the template as the track for its own motor-like mechanical movement. In the polymerase mode it elongates the nascent DNA by one nucleotide in each step. But, whenever it commits an error by misincorporting an incorrect nucleotide, it can switch to an exonuclease mode. In the latter mode it excises the wrong nucleotide before switching back to its polymerase mode. We develop a stochastic kinetic model of DNA replication %DIFDELCMD < {\it %%% that mimics an in-vitro experiment where a single-stranded DNA, subjected to a mechanical tension F, is converted to a double-stranded DNA by a single DNAP . The F-dependence of the average rate of replication, which depends on the rates of both polymerase and exonuclease activities of the DNAP , is in good qualitative agreement with the corresponding experimental results. We introduce 9 novel distinct conditional dwell times of a DNAP. Using the methods of first-passage times , we also derive the exact analytical expressions for the probability distributions of these conditional dwell times . The predicted F-dependence of these distributions are, in principle, accessible to single-molecule experiments.
[ { "type": "R", "before": "capture the effects of mechanical tension F applied on the template DNA within the framework of", "after": "develop", "start_char_pos": 445, "end_char_pos": 540 }, { "type": "D", "before": ". The model mimics an", "after": null, "start_char_pos": 587, "end_char_pos": 608 }, { "type": "A", "before": null, "after": "that mimics an", "start_char_pos": 630, "end_char_pos": 630 }, { "type": "R", "before": "single DNAP converts a ssDNA template into a dsDNA by its action", "after": "single-stranded DNA, subjected to a mechanical tension F, is converted to a double-stranded DNA by a single DNAP", "start_char_pos": 659, "end_char_pos": 723 }, { "type": "R", "before": "includes also the effects of correction of misincorporations", "after": "depends on the rates of both polymerase and exonuclease activities of the DNAP", "start_char_pos": 785, "end_char_pos": 845 }, { "type": "A", "before": null, "after": "We introduce 9 novel distinct conditional dwell times of a DNAP.", "start_char_pos": 926, "end_char_pos": 926 }, { "type": "D", "before": "for same model", "after": null, "start_char_pos": 968, "end_char_pos": 982 }, { "type": "R", "before": "nine distinct", "after": "these", "start_char_pos": 1070, "end_char_pos": 1083 }, { "type": "D", "before": "of a DNAP", "after": null, "start_char_pos": 1108, "end_char_pos": 1117 }, { "type": "R", "before": "tension-dependence", "after": "F-dependence", "start_char_pos": 1134, "end_char_pos": 1152 } ]
[ 0, 57, 144, 228, 344, 441, 588, 725, 925, 1119 ]
1301.2417
1
The conformational change of biological macromolecule is investigated from the point of quantum transition. A quantum theory on protein folding is proposed based on the molecular torsion as the slow variable of the system. Using the nonadiabaticity operator method we deduce the Hamiltonian describing conformational change. An analytical form of the formula on protein folding rate is obtained. The proposed quantum protein folding theory affords a unifying approach to the study of a large class problems of biological conformational change , for example, the quantum transition of tubulin dimer in microtubules, the ligand binding in G protein coupled receptors in membrane, the histone modification in nucleic acid through atomic group binding and the protein photo folding processes, etc. In all these applications the concise rate formula is suggested as a useful tool for investigators .
The conformational change of biological macromolecule is investigated from the point of quantum transition. A quantum theory on protein folding is proposed . Compared with other dynamical variables such as mobile electrons, chemical bonds and stretching-bending vibrations the molecular torsion has the lowest energy and can be looked as the slow variable of the system. Simultaneously, from the multi-minima property of torsion potential the local conformational states are well defined. Following the idea that the slow variables slave the fast ones and using the nonadiabaticity operator method we deduce the Hamiltonian describing conformational change. It is proved that the influence of fast variables on the macromolecule can fully be taken into account through a phase transformation of slow variable wave function. Starting from the conformation- transition Hamiltonian the nonradiative matrix element is calculated in two important cases: A, only electrons are fast variables and the electronic state does not change in the transition process; B, fast variables are not limited to electrons but the perturbation approximation can be used. Then, the general formulas for protein folding rate are deduced. The analytical form of the formula is utilized to study the temperature dependence of protein folding rate and the curious non-Arrhenius temperature relation is interpreted. The decoherence time of quantum torsion state is estimated and the quantum coherence degree of torsional angles in the protein folding is studied by using temperature dependence data. The proposed folding rate formula gives a unifying approach for the study of a large class problems of biological conformational change .
[ { "type": "R", "before": "based on", "after": ". Compared with other dynamical variables such as mobile electrons, chemical bonds and stretching-bending vibrations", "start_char_pos": 156, "end_char_pos": 164 }, { "type": "A", "before": null, "after": "has the lowest energy and can be looked", "start_char_pos": 187, "end_char_pos": 187 }, { "type": "R", "before": "Using the", "after": "Simultaneously, from the multi-minima property of torsion potential the local conformational states are well defined. Following the idea that the slow variables slave the fast ones and using the", "start_char_pos": 224, "end_char_pos": 233 }, { "type": "R", "before": "An", "after": "It is proved that the influence of fast variables on the macromolecule can fully be taken into account through a phase transformation of slow variable wave function. Starting from the conformation- transition Hamiltonian the nonradiative matrix element is calculated in two important cases: A, only electrons are fast variables and the electronic state does not change in the transition process; B, fast variables are not limited to electrons but the perturbation approximation can be used. Then, the general formulas for protein folding rate are deduced. The", "start_char_pos": 326, "end_char_pos": 328 }, { "type": "R", "before": "on", "after": "is utilized to study the temperature dependence of", "start_char_pos": 360, "end_char_pos": 362 }, { "type": "R", "before": "is obtained. The proposed quantum protein folding theory affords", "after": "and the curious non-Arrhenius temperature relation is interpreted. The decoherence time of quantum torsion state is estimated and the quantum coherence degree of torsional angles in the protein folding is studied by using temperature dependence data. The proposed folding rate formula gives", "start_char_pos": 384, "end_char_pos": 448 }, { "type": "R", "before": "to", "after": "for", "start_char_pos": 469, "end_char_pos": 471 }, { "type": "D", "before": ", for example, the quantum transition of tubulin dimer in microtubules, the ligand binding in G protein coupled receptors in membrane, the histone modification in nucleic acid through atomic group binding and the protein photo folding processes, etc. In all these applications the concise rate formula is suggested as a useful tool for investigators", "after": null, "start_char_pos": 544, "end_char_pos": 893 } ]
[ 0, 107, 223, 325, 396, 794 ]
1301.2504
1
We present a mechanistic model for the adherence of two spherical flocs in quiescent flow conditions . Adhesion forces arise via the binding ligands as well as the attractive / repulsive surface potential in an ionic medium via the DLVO theory. The reversible binding kinetics are assumed to follow the standard model for linear springs \mbox{%DIFAUXCMD Dembo1988 that large floc aggregates are possible with more tensile ligands due to efficient inter-floc collisions ( measured via the collision factor). Our results quantify how fluid drag and strong electrolytic composition of the surrounding fluid favor floc formation as well.
We consider the attachment of spherical particles in quiescent flow conditions with their surface coated with binding ligands, a model for fluid-immersed adhesion in many biological and artificial settings. Our theory highlights how the physics of the binding kinetics of these ligands (expressed through the collision factor function) as well as the attractive / repulsive surface potential in an ionic medium effects the eventual size of these particle aggregates (flocs). For experimentally measurable parameters in microbial population studies, our results suggests that large floc aggregates are possible with more stretchable ligands due to efficient inter-floc collisions ( or a large, non-zero collision factor). Strong electrolytic composition of the surrounding fluid favors large floc formation as well.
[ { "type": "R", "before": "present a mechanistic model for the adherence of two spherical flocs", "after": "consider the attachment of spherical particles", "start_char_pos": 3, "end_char_pos": 71 }, { "type": "R", "before": ". Adhesion forces arise via the binding ligands", "after": "with their surface coated with binding ligands, a model for fluid-immersed adhesion in many biological and artificial settings. Our theory highlights how the physics of the binding kinetics of these ligands (expressed through the collision factor function)", "start_char_pos": 101, "end_char_pos": 148 }, { "type": "R", "before": "via the DLVO theory. The reversible binding kinetics are assumed to follow the standard model for linear springs \\mbox{%DIFAUXCMD Dembo1988", "after": "effects the eventual size of these particle aggregates (flocs). For experimentally measurable parameters in microbial population studies, our results suggests", "start_char_pos": 224, "end_char_pos": 363 }, { "type": "R", "before": "tensile", "after": "stretchable", "start_char_pos": 414, "end_char_pos": 421 }, { "type": "R", "before": "measured via the", "after": "or a large, non-zero", "start_char_pos": 471, "end_char_pos": 487 }, { "type": "R", "before": "Our results quantify how fluid drag and strong", "after": "Strong", "start_char_pos": 507, "end_char_pos": 553 }, { "type": "R", "before": "favor", "after": "favors large", "start_char_pos": 604, "end_char_pos": 609 } ]
[ 0, 102, 244, 506 ]
1301.2504
2
We consider the attachment of spherical particles in quiescent flow conditions with their surface coated with binding ligands , a model for fluid-immersed adhesion in many biological and artificial settings. Our theory highlights how the physics of the binding kinetics of these ligands (expressed through the collision factor function) as well as the attractive / repulsive surface potential in an ionic medium effects the eventual size of these particle aggregates (flocs). For experimentally measurable parameters in microbial population studies, our results suggests that large floc aggregates are possible with more stretchable ligands due to efficient inter-floc collisions ( or a large, non-zero collision factor). Strong electrolytic composition of the surrounding fluid favors large floc formation as well.
We consider rigid spherical particles coated with binding ligands and study their attachment in quiescent flow. This class of fluid-immersed adhesion is widespread in many natural and engineering settings. Our theory highlights how the physics of the binding kinetics of these ligands (expressed through the collision factor function) as well as the attractive / repulsive surface potential in an ionic medium effects the eventual size of these particle aggregates (flocs). As an application of our theory, we consider a microbial population of spherical bacteria. Our results suggest that the elastic ligands allow large floc aggregates by inducing efficient inter-floc collisions ( i.e., a large, non-zero collision factor). Strong electrolytic composition of the surrounding fluid favors large floc formation as well.
[ { "type": "R", "before": "the attachment of spherical particles in quiescent flow conditions with their surface", "after": "rigid spherical particles", "start_char_pos": 12, "end_char_pos": 97 }, { "type": "R", "before": ", a model for", "after": "and study their attachment in quiescent flow. This class of", "start_char_pos": 126, "end_char_pos": 139 }, { "type": "R", "before": "in many biological and artificial", "after": "is widespread in many natural and engineering", "start_char_pos": 164, "end_char_pos": 197 }, { "type": "R", "before": "For experimentally measurable parameters in microbial population studies, our results suggests that", "after": "As an application of our theory, we consider a microbial population of spherical bacteria. Our results suggest that the elastic ligands allow", "start_char_pos": 476, "end_char_pos": 575 }, { "type": "R", "before": "are possible with more stretchable ligands due to", "after": "by inducing", "start_char_pos": 598, "end_char_pos": 647 }, { "type": "R", "before": "or", "after": "i.e.,", "start_char_pos": 682, "end_char_pos": 684 } ]
[ 0, 207, 475, 721 ]
1301.2504
3
We consider rigid spherical particles coated with binding ligands and study their attachment in quiescent flow . This class of fluid-immersed adhesion is widespread in many natural and engineering settings. Our theory highlights how the physics of the binding kinetics of these ligands (expressed through the collision factor function) as well as the attractive / repulsive surface potential in an ionic medium effects the eventual size of these particle aggregates (flocs). As an application of our theory, we consider a microbial population of spherical bacteria. Our results suggest that the elastic ligands allow large floc aggregates by inducing efficient inter-floc collisions (i.e., a large, non-zero collision factor). Strong electrolytic composition of the surrounding fluid favors large floc formation as well.
We present a multi-scale model to study the attachment of spherical particles with a rigid core, coated with binding ligands and in equilibrium with the surrounding, quiescent fluid medium . This class of fluid-immersed adhesion is widespread in many natural and engineering settings. Our theory highlights how the micro-scale binding kinetics of these ligands , as well as the attractive / repulsive surface potential in an ionic medium effects the eventual macro-scale size distribution of the particle aggregates (flocs). The results suggest that the presence of elastic ligands on the particle surface allow large floc aggregates by inducing efficient inter-floc collisions (i.e., a large, non-zero collision factor). Strong electrolytic composition of the surrounding fluid favors large floc formation as well.
[ { "type": "R", "before": "consider rigid spherical particles", "after": "present a multi-scale model to study the attachment of spherical particles with a rigid core,", "start_char_pos": 3, "end_char_pos": 37 }, { "type": "R", "before": "study their attachment in quiescent flow", "after": "in equilibrium with the surrounding, quiescent fluid medium", "start_char_pos": 70, "end_char_pos": 110 }, { "type": "R", "before": "physics of the", "after": "micro-scale", "start_char_pos": 237, "end_char_pos": 251 }, { "type": "R", "before": "(expressed through the collision factor function)", "after": ",", "start_char_pos": 286, "end_char_pos": 335 }, { "type": "R", "before": "size of these", "after": "macro-scale size distribution of the", "start_char_pos": 432, "end_char_pos": 445 }, { "type": "R", "before": "As an application of our theory, we consider a microbial population of spherical bacteria. Our", "after": "The", "start_char_pos": 475, "end_char_pos": 569 }, { "type": "R", "before": "elastic ligands", "after": "presence of elastic ligands on the particle surface", "start_char_pos": 595, "end_char_pos": 610 } ]
[ 0, 112, 206, 474, 565, 726 ]
1301.2530
1
By numerical and empirical means a dynamic transition of a complex network was found from a hierarchical scale-free Minimal Spanning Tree (MST ), representing the stock market before the recent worldwide financial crash, to a superstar-like MST decorated by a scale-free hierarchy of trees representing the market's state for the period containing the crash. Subsequently, a transition from this latter (perheps)an unstable state to hierarchical scale-free MST decorated by several star-like trees is observed after the worldwide financial crash. In the present work we applied the MST technique, as a particularly useful canonical tool of a graph theory, to show the transitions on the (German) Frankfurt Stock Exchange (FSE). Analogous results we obtained earlier for the Warsaw Stock Exchange . Our results can serve as an empirical foundation for the theory of dynamic structural and topological phase transitions on financial markets.
We find numerical and empirical evidence for dynamical, structural and topological phase transitions on the (German) Frankfurt Stock Exchange (FSE) in the temporal vicinity of the worldwide financial crash. Using the Minimal Spanning Tree (MST) technique, a particularly useful canonical tool of the graph theory, two transitions of the topology of a complex network representing FSE were found. First transition is from a hierarchical scale-free MST representing the stock market before the recent worldwide financial crash, to a superstar-like MST decorated by a scale-free hierarchy of trees representing the market's state for the period containing the crash. Subsequently, a transition is observed from this transient, (meta)stable state of the crash, to a hierarchical scale-free MST decorated by several star-like trees after the worldwide financial crash. The phase transitions observed are analogous to the ones we obtained earlier for the Warsaw Stock Exchange and more pronounced than those found by Onnela-Chakraborti-Kaski-Kert\'esz for S P 500 index in the vicinity of Black Monday (October 19, 1987) and also in the vicinity of January 1, 1998. Our results provide an empirical foundation for the future theory of dynamical, structural and topological phase transitions on financial markets.
[ { "type": "R", "before": "By", "after": "We find", "start_char_pos": 0, "end_char_pos": 2 }, { "type": "R", "before": "means a dynamic transition of", "after": "evidence for dynamical, structural and topological phase transitions on the (German) Frankfurt Stock Exchange (FSE) in the temporal vicinity of the worldwide financial crash. Using the Minimal Spanning Tree (MST) technique, a particularly useful canonical tool of the graph theory, two transitions of the topology of", "start_char_pos": 27, "end_char_pos": 56 }, { "type": "R", "before": "was found", "after": "representing FSE were found. First transition is", "start_char_pos": 75, "end_char_pos": 84 }, { "type": "R", "before": "Minimal Spanning Tree (MST ),", "after": "MST", "start_char_pos": 116, "end_char_pos": 145 }, { "type": "R", "before": "from this latter (perheps)an unstable state to", "after": "is observed from this transient, (meta)stable state of the crash, to a", "start_char_pos": 386, "end_char_pos": 432 }, { "type": "D", "before": "is observed", "after": null, "start_char_pos": 498, "end_char_pos": 509 }, { "type": "R", "before": "In the present work we applied the MST technique, as a particularly useful canonical tool of a graph theory, to show the transitions on the (German) Frankfurt Stock Exchange (FSE). Analogous results", "after": "The phase transitions observed are analogous to the ones", "start_char_pos": 547, "end_char_pos": 745 }, { "type": "R", "before": ". Our results can serve as", "after": "and more pronounced than those found by Onnela-Chakraborti-Kaski-Kert\\'esz for S", "start_char_pos": 796, "end_char_pos": 822 }, { "type": "A", "before": null, "after": "P 500 index in the vicinity of Black Monday (October 19, 1987) and also in the vicinity of January 1, 1998. Our results provide", "start_char_pos": 823, "end_char_pos": 823 }, { "type": "R", "before": "theory of dynamic", "after": "future theory of dynamical,", "start_char_pos": 856, "end_char_pos": 873 } ]
[ 0, 358, 546, 727, 797 ]
1301.2728
1
We introduce a solvable model of randomly growing systems consisted of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analyzed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in wide fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analyzed as an example of randomly growing systems in the real-world . Not only scaling relations are consistent with the theoretical solution, the whole functional form of the growth rate distribution is fitted with a theoretical distribution having a power law tail.
We introduce a solvable model of randomly growing systems consisting of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analysed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in a variety of fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analysed as a real-world example of randomly growing systems . Not only are the scaling relations consistent with the theoretical solution, but the entire functional form of the growth rate distribution is fitted with a theoretical distribution that has a power-law tail.
[ { "type": "R", "before": "consisted", "after": "consisting", "start_char_pos": 58, "end_char_pos": 67 }, { "type": "R", "before": "analyzed", "after": "analysed", "start_char_pos": 184, "end_char_pos": 192 }, { "type": "R", "before": "wide", "after": "a variety of", "start_char_pos": 310, "end_char_pos": 314 }, { "type": "R", "before": "analyzed as an", "after": "analysed as a real-world", "start_char_pos": 441, "end_char_pos": 455 }, { "type": "D", "before": "in the real-world", "after": null, "start_char_pos": 492, "end_char_pos": 509 }, { "type": "R", "before": "scaling relations are", "after": "are the scaling relations", "start_char_pos": 521, "end_char_pos": 542 }, { "type": "R", "before": "the whole", "after": "but the entire", "start_char_pos": 585, "end_char_pos": 594 }, { "type": "R", "before": "having a power law", "after": "that has a power-law", "start_char_pos": 685, "end_char_pos": 703 } ]
[ 0, 97, 207, 368, 511 ]
1301.2750
1
Recently, it has been shown that CSMA-type random access algorithms are throughput optimal since they achieve the maximum throughput while maintaining the network stability. However, the optimality is established with the following unrealistic assumptions; i-) the underlaying Markov chain reaches a stationary distribution immediately, which causes large delay in practice; ii-) the channel is static and does not change over time . In this paper, we design fully distributed scheduling algorithms which are provably throughput optimal for general fading channels. When arbitrary backoff time is allowed, the proposed distributed algorithm achieves the same performance in terms of rate region and delay as that of a centralized system without requiring any message passing. For the case where backoff time is discrete, we show that our algorithm still maintains throughput-optimality and achieves good delay performance at the expense of low overhead for collision resolution .
It has been known that load unaware channel selection in 802.11 networks results in high level interference, and can significantly reduce the network throughput. In current implementation, the only way to determine the traffic load on a channel is to measure that channel for a certain duration of time. Therefore, in order to find the best channel with the minimum load all channels have to be measured, which is costly and can cause unacceptable communication interruptions between the AP and the stations . In this paper, we propose a learning based approach which aims to find the channel with the minimum load by measuring only limited number of channels. Our method uses Gaussian Process Regressing to accurately track the traffic load on each channel based on the previous measured load. We confirm the performance of our algorithm by using experimental data, and show that the time consumed for the load measurement can be reduced up to 46\% compared to the case where all channels are monitored .
[ { "type": "R", "before": "Recently, it has been shown that CSMA-type random access algorithms are throughput optimal since they achieve the maximum throughput while maintaining the network stability. However, the optimality is established with the following unrealistic assumptions; i-) the underlaying Markov chain reaches a stationary distribution immediately, which causes large delay in practice; ii-) the channel is static and does not change over time", "after": "It has been known that load unaware channel selection in 802.11 networks results in high level interference, and can significantly reduce the network throughput. In current implementation, the only way to determine the traffic load on a channel is to measure that channel for a certain duration of time. Therefore, in order to find the best channel with the minimum load all channels have to be measured, which is costly and can cause unacceptable communication interruptions between the AP and the stations", "start_char_pos": 0, "end_char_pos": 431 }, { "type": "R", "before": "design fully distributed scheduling algorithms which are provably throughput optimal for general fading channels. When arbitrary backoff time is allowed, the proposed distributed algorithm achieves the same performance in terms of rate region and delay as that of a centralized system without requiring any message passing. For the case where backoff time is discrete, we show that our algorithm still maintains throughput-optimality and achieves good delay performance at the expense of low overhead for collision resolution", "after": "propose a learning based approach which aims to find the channel with the minimum load by measuring only limited number of channels. Our method uses Gaussian Process Regressing to accurately track the traffic load on each channel based on the previous measured load. We confirm the performance of our algorithm by using experimental data, and show that the time consumed for the load measurement can be reduced up to 46\\% compared to the case where all channels are monitored", "start_char_pos": 452, "end_char_pos": 977 } ]
[ 0, 173, 256, 374, 433, 565, 775 ]
1301.2964
1
When investors have heterogeneous attitudes towards risk, it is reasonable to assume that each investor has a pricing kernel, and that these individual pricing kernels are in some way aggregated to form a market pricing kernel. The various investors are then buyers or sellers depending on how their individual pricing kernels compare to that of the market. In the case of geometric Brownian motion based models, we can represent such heterogeneous attitudes by letting the market price of risk be a random variable, the distribution of which corresponds to the variability of attitude across the market. If the flow of market information is determined by the movements of the pricesof assets , then neither the Brownian driver nor the market price of risk are directly visible: the filtration is generated by an "information process" given by a combination of the two that takes the form of a Brownian motion with random drift . We show that the market pricing kernel is then given by the harmonic mean of the individual pricing kernels associated with the various market participants. Alternatively, one can view the market pricing kernel as the inverse of a "benchmark" or "natural numeraire" asset, and in that case the benchmark asset is the portfolio obtained by aggregating the benchmarks assigned by the individual investors based on their private risk preferences. Remarkably, with an appropriate definition of L\'evy information one draws the same conclusion in the case of a geometric L\'evy model in which asset prices can jump. As a consequence one is lead to a rather general scheme for the management of investments in heterogeneous markets subject to jump risk.
When investors have heterogeneous attitudes towards risk, it is reasonable to assume that each investor has a pricing kernel, and that these individual pricing kernels are aggregated to form a market pricing kernel. The various investors are then buyers or sellers depending on how their individual pricing kernels compare to that of the market. In Brownian-based models, we can represent such heterogeneous attitudes by letting the market price of risk be a random variable, the distribution of which corresponds to the variability of attitude across the market. If the flow of market information is determined by the movements of prices , then neither the Brownian driver nor the market price of risk are directly visible: the filtration is generated by an "information process" given by a combination of the two . We show that the market pricing kernel is then given by the harmonic mean of the individual pricing kernels associated with the various market participants. Remarkably, with an appropriate definition of L\'evy information one draws the same conclusion in the case when asset prices can jump. As a consequence we are led to a rather general scheme for the management of investments in heterogeneous markets subject to jump risk.
[ { "type": "D", "before": "in some way", "after": null, "start_char_pos": 172, "end_char_pos": 183 }, { "type": "R", "before": "the case of geometric Brownian motion based", "after": "Brownian-based", "start_char_pos": 361, "end_char_pos": 404 }, { "type": "R", "before": "the pricesof assets", "after": "prices", "start_char_pos": 673, "end_char_pos": 692 }, { "type": "D", "before": "that takes the form of a Brownian motion with random drift", "after": null, "start_char_pos": 869, "end_char_pos": 927 }, { "type": "D", "before": "Alternatively, one can view the market pricing kernel as the inverse of a \"benchmark\" or \"natural numeraire\" asset, and in that case the benchmark asset is the portfolio obtained by aggregating the benchmarks assigned by the individual investors based on their private risk preferences.", "after": null, "start_char_pos": 1087, "end_char_pos": 1373 }, { "type": "R", "before": "of a geometric L\\'evy model in which", "after": "when", "start_char_pos": 1481, "end_char_pos": 1517 }, { "type": "R", "before": "one is lead", "after": "we are led", "start_char_pos": 1558, "end_char_pos": 1569 } ]
[ 0, 227, 357, 604, 929, 1086, 1373, 1540 ]
1301.3100
1
We consider an optimal stopping problem recently posed by Shiryaev at the International Conference on Advanced Stochastic Optimization URLanized by the Steklov Institute of Mathematics in September 2012: \sup_{\tau\in\mathcal{T_{\eps,T}}EB_{\tau-\eps}, where }%DIFDELCMD < \eps %%% _{\eps,T}}E[i=1^n (\tau-\eps^i)^+^i], where T>0 } is a fixed strictly positive constant, B is a Brownian motion, is progressively measurable with respect to the Brownian filtration, }\eps and T_{\eps,T} is the set of Brownian stopping times that lie between \eps and a fixed horizon T>%DIFDELCMD < \eps%%% . We solve this problem by conditioning and then using the theory of reflected backward stochastic differential equations (RBSDEs). _{0,T}}EB_{(\tau-\eps)^+} recently posed by Shiryaev at the International Conference on Advanced Stochastic Optimization URLanized by the Steklov Institute of Mathematics in September 2012.}
We consider _{\eps,T}}EB_{\tau-\eps}, where }%DIFDELCMD < \eps %%% the optimal problem \sup_{\tau\in\mathcal{T_{\eps,T}}E[i=1^n (\tau-\eps^i)^+^i], where T>0 } is a fixed time horizon, (\phi_t^i)_{0\leq t\leq T is progressively measurable with respect to the Brownian filtration, }\eps^i\in[0,T] is a constant, i=1,...o,n, and T_{\eps,T} is the set of stopping times that lie between a constant \eps %DIFDELCMD < \eps%%% \in[0,T] and T . We solve this problem by conditioning and then using the theory of reflected backward stochastic differential equations (RBSDEs). As a corollary, we provide the solution to the optimal stopping problem \sup_{\tau\in\mathcal{T_{0,T}}EB_{(\tau-\eps)^+} recently posed by Shiryaev at the International Conference on Advanced Stochastic Optimization URLanized by the Steklov Institute of Mathematics in September 2012.}
[ { "type": "D", "before": "an optimal stopping problem recently posed by Shiryaev at the International Conference on Advanced Stochastic Optimization URLanized by the Steklov Institute of Mathematics in September 2012: \\sup_{\\tau\\in\\mathcal{T", "after": null, "start_char_pos": 12, "end_char_pos": 227 }, { "type": "A", "before": null, "after": "the optimal problem \\sup_{\\tau\\in\\mathcal{T", "start_char_pos": 282, "end_char_pos": 282 }, { "type": "R", "before": "strictly positive constant, B is a Brownian motion,", "after": "time horizon, (\\phi_t^i)_{0\\leq t\\leq T", "start_char_pos": 343, "end_char_pos": 394 }, { "type": "A", "before": null, "after": "^i\\in[0,T] is a constant, i=1,...o,n,", "start_char_pos": 469, "end_char_pos": 469 }, { "type": "D", "before": "Brownian", "after": null, "start_char_pos": 499, "end_char_pos": 507 }, { "type": "A", "before": null, "after": "a constant", "start_char_pos": 540, "end_char_pos": 540 }, { "type": "D", "before": "and a fixed horizon T>", "after": null, "start_char_pos": 546, "end_char_pos": 568 }, { "type": "A", "before": null, "after": "\\in[0,T] and T", "start_char_pos": 589, "end_char_pos": 589 }, { "type": "A", "before": null, "after": "As a corollary, we provide the solution to the optimal stopping problem \\sup_{\\tau\\in\\mathcal{T", "start_char_pos": 722, "end_char_pos": 722 } ]
[ 0, 203, 234, 721 ]
1301.3100
2
We consider the optimal problem \tau\in\mathcal{T_{\eps,T}}E[i=1^n (\tau-\eps^i)^+^i], where T>0 is a fixed time horizon, (\phi_t^i)_{0\leq t\leq T} is progressively measurable with respect to the Brownian filtration, \eps^i\in[0,T] is a constant, i=1,...o,n, and T_{\eps,T} is the set of stopping times that lie between a constant \eps\in[0,T] and T. We solve this problem by conditioning and then using the theory of reflected backward stochastic differential equations (RBSDEs). As a corollary, we provide the solution to the optimal stopping problem \tau\in\mathcal{T_{0,T}}EB_{(\tau-\eps)^+} recently posed by Shiryaev at the International Conference on Advanced Stochastic Optimization URLanized by the Steklov Institute of Mathematics in September 2012. \eps
We consider the optimal problem \tau\in\mathcal{T_{\eps,T}}E[i=1^n (\tau-\eps^i)^+^i], where T>0 is a fixed time horizon, (\phi_t^i)_{0\leq t\leq T} is progressively measurable with respect to the Brownian filtration, \eps^i\in[0,T] is a constant, i=1,...o,n, and T_{\eps,T} is the set of stopping times that lie between a constant \eps\in[0,T] and T. We solve this problem by conditioning and then using the theory of reflected backward stochastic differential equations (RBSDEs). As a corollary, we provide the solution to the optimal stopping problem \tau\in\mathcal{T_{0,T}}EB_{(\tau-\eps)^+} recently posed by Shiryaev at the International Conference on Advanced Stochastic Optimization URLanized by the Steklov Institute of Mathematics in September 2012. We also provide its asymptotic order as\eps\searrow 0.
[ { "type": "A", "before": null, "after": "We also provide its asymptotic order as", "start_char_pos": 761, "end_char_pos": 761 }, { "type": "A", "before": null, "after": "\\searrow 0.", "start_char_pos": 765, "end_char_pos": 765 } ]
[ 0, 351, 481 ]
1301.3422
1
In yeast, phenotypic adaptations can evolve by natural selection of conformational variant prions and their variant amyloid fibers. This system requires the Hsp104 disaggregase, which fragments amyloid fibers into smaller seed prions that are passed on to mitotic descendants and meiotic spores. Interestingly, Hsp104 is found in diverse eukaryotes except metazoans. To investigate whether a prion-based transmission "genetics" was incompatible with the evolution of Metazoa , we identify genes conserved in fungi and choanoflagellates but lost in animals. We show that both eukaryotic clpB amyloid disaggregases , HSP104 and its nuclear-encoded mitochondrial endo-ortholog HSP78, were lost in the stem-metazoan lineage along with only a small number of other relevant genes. We show that these gene losses are not unrelated historical accidents because these loci comprise a very small regulon devoted to prion transmission in yeast. We propose that evolution of developmental asymmetric cell-specifications necessitated the evolutionary deprecation of the ancient clpB system .
The evolution of animals involved a transition from a unicellular transcriptional regime to one supporting developmentally programmed somatic multicellularity. This transition was facilitated by an emergent developmental gene repertoire via gene family expansion and new gene innovation. Unknown, however, is whether this genomic-rewiring was constrained by genetic functions incompatible with key features of animals. Here , we identify conserved eukaryotic genes that were lost in stem-metazoans. Among the few genes lost are both eukaryotic clpB orthologs , HSP104 and HSP78, which encode amyloid disaggregases. Despite their ancient origins in the eukaryotic and mitochondrial primogenitors, we find that both clpB loci of Saccharomyces share a promoter signature that is specific to 7 genes involved in prion homeostasis. We suggest that the loss of these amyloid disaggregases was a necessary step in the evolutionary origin of the asymmetric cell specifications of somatic development .
[ { "type": "R", "before": "In yeast, phenotypic adaptations can evolve by natural selection of conformational variant prions and their variant amyloid fibers. This system requires the Hsp104 disaggregase, which fragments amyloid fibers into smaller seed prions that are passed on to mitotic descendants and meiotic spores. Interestingly, Hsp104 is found in diverse eukaryotes except metazoans. To investigate whether a prion-based transmission \"genetics\" was incompatible with the evolution of Metazoa", "after": "The evolution of animals involved a transition from a unicellular transcriptional regime to one supporting developmentally programmed somatic multicellularity. This transition was facilitated by an emergent developmental gene repertoire via gene family expansion and new gene innovation. Unknown, however, is whether this genomic-rewiring was constrained by genetic functions incompatible with key features of animals. Here", "start_char_pos": 0, "end_char_pos": 474 }, { "type": "R", "before": "genes conserved in fungi and choanoflagellates but lost in animals. We show that", "after": "conserved eukaryotic genes that were lost in stem-metazoans. Among the few genes lost are", "start_char_pos": 489, "end_char_pos": 569 }, { "type": "R", "before": "amyloid disaggregases", "after": "orthologs", "start_char_pos": 591, "end_char_pos": 612 }, { "type": "D", "before": "its nuclear-encoded mitochondrial endo-ortholog", "after": null, "start_char_pos": 626, "end_char_pos": 673 }, { "type": "R", "before": "were lost in the stem-metazoan lineage along with only a small number of other relevant genes. We show that these gene losses are not unrelated historical accidents because these loci comprise a very small regulon devoted to prion transmission in yeast. We propose that evolution of developmental asymmetric cell-specifications necessitated the evolutionary deprecation of the ancient clpB system", "after": "which encode amyloid disaggregases. Despite their ancient origins in the eukaryotic and mitochondrial primogenitors, we find that both clpB loci of Saccharomyces share a promoter signature that is specific to 7 genes involved in prion homeostasis. We suggest that the loss of these amyloid disaggregases was a necessary step in the evolutionary origin of the asymmetric cell specifications of somatic development", "start_char_pos": 681, "end_char_pos": 1077 } ]
[ 0, 131, 295, 366, 556, 775, 934 ]
1301.3422
2
The evolution of animals involved a transition from a unicellular transcriptional regime to one supporting developmentally programmed somatic multicellularity. This transition was facilitated by an emergent developmental gene repertoire via gene family expansion and new gene innovation. Unknown, however, is whether this genomic-rewiring was constrained by genetic functions incompatible with key features of animals . Here, we identify conserved eukaryotic genes that were lost in stem-metazoans. Among the few genes lost are both eukaryotic clpB orthologs, HSP104 and HSP78 , which encode amyloid disaggregases. Despite their ancient origins in the eukaryotic and mitochondrial primogenitors, we find that both clpB loci of Saccharomyces share a promoter signature that is specific to 7 genes involved in prion homeostasis. We suggest that the loss of these amyloid disaggregases was a necessary step in the evolutionary origin of the asymmetric cell specifications of somatic development .
The evolution of animals involved a transition from a unicellular transcriptional regime to a transcriptional program supporting developmentally programmed somatic multicellularity. This transition was facilitated by an emergent gene repertoire devoted to developmental regulation, which involved gene family expansion and diversification as well as gene innovation. Unknown, however, is whether the loss of key genes co-evolved with and/or facilitated this genomic rewiring . Here, we identify conserved eukaryotic genes that were lost in stem-metazoans. These genes suggest a coherent pattern of gene loss from which the biological and ecological context of animal origins can be inferred. First, we find that a large number of genes for multiple biosynthetic pathways were lost in the stem-metazoan lineage. Second, we find that of all the essential amino acid pathway genes absent in animals, only the biosynthetic pathways for the branched chain amino acids are also absent in choanoflagellates, the sister group to animals. Third, we find only a few conserved genes lost in the stem-metazoan lineage that do not encode metabolic enzymes, and these include the eukaryotic clpB genes, HSP78 and HSP104 , which encode protein disaggregase type chaperones. Finally, we find that in choanoflagellates and fungi the clpB genes and a small repertoire of refolding chaperones are co-regulated by the heat shock element HSE4 that can be bound by two Hsf1 trimers. Based on these findings, we propose that loss of branched chain biosynthetic genes in the holozoan radiation represents an initial pattern of loss that foreshadows the wholesale loss of metabolic genes in the stem-metazoan lineage. We further propose that key components of metabolic pathways may have required chaperone-mediated folding, and that loss of these pathways may explain the loss of dedicated protein folding machinery in the stem-metazoan lineage .
[ { "type": "R", "before": "one", "after": "a transcriptional program", "start_char_pos": 92, "end_char_pos": 95 }, { "type": "D", "before": "developmental gene repertoire via", "after": null, "start_char_pos": 207, "end_char_pos": 240 }, { "type": "A", "before": null, "after": "repertoire devoted to developmental regulation, which involved gene", "start_char_pos": 246, "end_char_pos": 246 }, { "type": "R", "before": "new", "after": "diversification as well as", "start_char_pos": 268, "end_char_pos": 271 }, { "type": "R", "before": "this genomic-rewiring was constrained by genetic functions incompatible with key features of animals", "after": "the loss of key genes co-evolved with and/or facilitated this genomic rewiring", "start_char_pos": 318, "end_char_pos": 418 }, { "type": "R", "before": "Among the few genes lost are both eukaryotic clpB orthologs, HSP104 and", "after": "These genes suggest a coherent pattern of gene loss from which the biological and ecological context of animal origins can be inferred. First, we find that a large number of genes for multiple biosynthetic pathways were lost in the stem-metazoan lineage. Second, we find that of all the essential amino acid pathway genes absent in animals, only the biosynthetic pathways for the branched chain amino acids are also absent in choanoflagellates, the sister group to animals. Third, we find only a few conserved genes lost in the stem-metazoan lineage that do not encode metabolic enzymes, and these include the eukaryotic clpB genes,", "start_char_pos": 500, "end_char_pos": 571 }, { "type": "A", "before": null, "after": "and HSP104", "start_char_pos": 578, "end_char_pos": 578 }, { "type": "R", "before": "amyloid disaggregases. Despite their ancient origins in", "after": "protein disaggregase type chaperones. Finally, we find that in choanoflagellates and fungi the clpB genes and a small repertoire of refolding chaperones are co-regulated by the heat shock element HSE4 that can be bound by two Hsf1 trimers. Based on these findings, we propose that loss of branched chain biosynthetic genes in the holozoan radiation represents an initial pattern of loss that foreshadows the wholesale loss of metabolic genes in", "start_char_pos": 594, "end_char_pos": 649 }, { "type": "R", "before": "eukaryotic and mitochondrial primogenitors, we find that both clpB loci of Saccharomyces share a promoter signature that is specific to 7 genes involved in prion homeostasis. We suggest that the", "after": "stem-metazoan lineage. We further propose that key components of metabolic pathways may have required chaperone-mediated folding, and that", "start_char_pos": 654, "end_char_pos": 848 }, { "type": "R", "before": "amyloid disaggregases was a necessary step in the evolutionary origin of the asymmetric cell specifications of somatic development", "after": "pathways may explain the loss of dedicated protein folding machinery in the stem-metazoan lineage", "start_char_pos": 863, "end_char_pos": 993 } ]
[ 0, 159, 288, 420, 499, 616, 828 ]
1301.3422
3
The evolution of animals involved a transition from a unicellular transcriptional regime to a transcriptional program supporting developmentally programmed somatic multicellularity. This transition was facilitated by an emergent gene repertoire devoted to developmental regulation, which involved gene family expansion and diversification as well as gene innovation. Unknown, however, is whether the loss of key genes co-evolved with and/or facilitated this genomic rewiring . Here, we identify conserved eukaryotic genes that were lost in stem-metazoans. These genes suggest a coherent pattern of gene loss from which the biological and ecological context of animal origins can be inferred. First, we find that a large number of genes for multiple biosynthetic pathways were lost in the stem-metazoan lineage. Second, we find that of all the essential amino acid pathway genes absent in animals, only the biosynthetic pathways for the branched chain amino acids are also absent in choanoflagellates, the sister group to animals. Third, we find only a few conserved genes lost in the stem-metazoan lineage that do not encode metabolic enzymes, and these include the eukaryotic clpB genes, HSP78 and HSP104, which encode protein disaggregase type chaperones. Finally, we find that in choanoflagellates and fungi the clpB genes and a small repertoire of refolding chaperones are co-regulated by the heat shock element HSE4 that can be bound by two Hsf1 trimers. Based on these findings, we propose that loss of branched chain biosynthetic genes in the holozoan radiation represents an initial pattern of loss that foreshadows the wholesale loss of metabolic genes in the stem-metazoan lineage. We further propose that key components of metabolic pathwaysmay have required chaperone-mediated folding, and that loss of these pathways may explain the loss of dedicated protein folding machinery in the stem-metazoan lineage .
The evolution of animals involved acquisition of an emergent gene repertoire for gastrulation. Whether loss of genes also co-evolved with this developmental reprogramming has not yet been addressed . Here, we identify twenty-four genetic functions that are retained in fungi and choanoflagellates but undetectable in animals. These lost genes encode: (i) sixteen distinct biosynthetic functions; (ii) the two ancestral eukaryotic ClpB disaggregases, Hsp78 and Hsp104, which function in the mitochondria and cytosol, respectively; and (iii) six other assorted functions. We present computational and experimental data that are consistent with a joint function for the differentially localized ClpB disaggregases, and with the possibility of a shared client/chaperone relationship between the mitochondrial Fe/S homoaconitase encoded by the lost LYS4 gene and the two ClpBs. Our analyses lead to the hypothesis that the evolution of gastrulation-based multicellularity in animals led to efficient extraction of nutrients from dietary sources, loss of natural selection for maintenance of energetically expensive biosynthetic pathways, and subsequent loss of their attendant ClpB chaperones .
[ { "type": "R", "before": "a transition from a unicellular transcriptional regime to a transcriptional program supporting developmentally programmed somatic multicellularity. This transition was facilitated by", "after": "acquisition of", "start_char_pos": 34, "end_char_pos": 216 }, { "type": "R", "before": "devoted to developmental regulation, which involved gene family expansion and diversification as well as gene innovation. Unknown, however, is whether the loss of key genes", "after": "for gastrulation. Whether loss of genes also", "start_char_pos": 245, "end_char_pos": 417 }, { "type": "R", "before": "and/or facilitated this genomic rewiring", "after": "this developmental reprogramming has not yet been addressed", "start_char_pos": 434, "end_char_pos": 474 }, { "type": "R", "before": "conserved eukaryotic genes that were lost in stem-metazoans. These genes suggest a coherent pattern of gene loss from which the biological and ecological context of animal origins can be inferred. First, we find that a large number of genes for multiple biosynthetic pathways were lost in the stem-metazoan lineage. Second, we find that of all the essential amino acid pathway genes absent in animals, only the biosynthetic pathways for the branched chain amino acids are also absent in choanoflagellates, the sister group to animals. Third, we find only a few conserved genes lost in the stem-metazoan lineage that do not encode metabolic enzymes, and these include the eukaryotic clpB genes, HSP78 and HSP104, which encode protein disaggregase type chaperones. Finally, we find that in choanoflagellates and fungi the clpB genes and a small repertoire of refolding chaperones are co-regulated by the heat shock element HSE4 that can be bound by two Hsf1 trimers. Based on these findings, we propose that loss of branched chain biosynthetic genes in the holozoan radiation represents an initial pattern of loss that foreshadows the wholesale loss of metabolic genes in the stem-metazoan lineage. We further propose that key components of metabolic pathwaysmay have required chaperone-mediated folding, and that loss of these pathways may explain the loss of dedicated protein folding machinery in the stem-metazoan lineage", "after": "twenty-four genetic functions that are retained in fungi and choanoflagellates but undetectable in animals. These lost genes encode: (i) sixteen distinct biosynthetic functions; (ii) the two ancestral eukaryotic ClpB disaggregases, Hsp78 and Hsp104, which function in the mitochondria and cytosol, respectively; and (iii) six other assorted functions. We present computational and experimental data that are consistent with a joint function for the differentially localized ClpB disaggregases, and with the possibility of a shared client/chaperone relationship between the mitochondrial Fe/S homoaconitase encoded by the lost LYS4 gene and the two ClpBs. Our analyses lead to the hypothesis that the evolution of gastrulation-based multicellularity in animals led to efficient extraction of nutrients from dietary sources, loss of natural selection for maintenance of energetically expensive biosynthetic pathways, and subsequent loss of their attendant ClpB chaperones", "start_char_pos": 495, "end_char_pos": 1918 } ]
[ 0, 181, 366, 476, 555, 691, 810, 1029, 1257, 1459, 1691 ]
1301.3531
1
A distorted expectation is a Choquet expectation with respect to the capacity induced by a concave probability distortion. Distorted expectations are encountered in various static settings, in risk theory, mathematical finance and mathematical economics. There are a number of different ways to extend a distorted expectation to a multi-period setting, which are not all time-consistent. One time-consistent extension is to define the non-linear expectation by backward recursion, applying the distorted expectation stepwise, over single periods. In a multinomial random walk model we show that this non-linear expectation is stable when the number of intermediate periods increases to infinity: Under a suitable scaling of the probability distortions and provided that the tick-size and time step-size converge to zero in such a way that the multinomial random walks converge to a Levy process , we show that values of random variables under the multi-period distorted expectations converge to the values under a continuous-time non-linear expectation operator, which may be identified with a certain type of Peng's g-expectation. A coupling argument is given to show that this operator reduces to a classical linear expectation when restricted to the set of pathwise increasing claims. Our results also show that a certain class of g-expectations driven by a Brownian motion and a Poisson random measure may be computed numerically by recursively defined distorted expectations .
In this paper we explore a novel way to combine the dynamic notion of time-consistency with the static notion of quantile-based coherent risk-measure or spectral risk measure, of which Expected Shortfall is a prime example. We introduce a class of dynamic risk measures in terms of a certain family of g-expectations driven by Wiener and Poisson point processes. In analogy with the static case , we show that these risk measures, which we label dynamic spectral risk measures, are locally law-invariant and additive on the set of pathwise increasing random variables. We substantiate the link between dynamic spectral risk measures and their static counterparts by establishing a limit theorem for general path-functionals which shows that such dynamic risk measures arise as limits under vanishing time-step of iterated spectral risk measures driven by approximating lattice random walks. This involves a certain non-standard scaling of the corresponding spectral weight-measures that we identify explicitly .
[ { "type": "R", "before": "A distorted expectation is a Choquet expectation with respect to the capacity induced by a concave probability distortion. Distorted expectations are encountered in various static settings, in risk theory, mathematical finance and mathematical economics. There are a number of different ways to extend a distorted expectation to a multi-period setting, which are not all time-consistent. One time-consistent extension is to define the non-linear expectation by backward recursion, applying the distorted expectation stepwise, over single periods. In a multinomial random walk model we show that this non-linear expectation is stable when the number of intermediate periods increases to infinity: Under a suitable scaling of the probability distortions and provided that the tick-size and time step-size converge to zero in such a way that the multinomial random walks converge to a Levy process", "after": "In this paper we explore a novel way to combine the dynamic notion of time-consistency with the static notion of quantile-based coherent risk-measure or spectral risk measure, of which Expected Shortfall is a prime example. We introduce a class of dynamic risk measures in terms of a certain family of g-expectations driven by Wiener and Poisson point processes. In analogy with the static case", "start_char_pos": 0, "end_char_pos": 894 }, { "type": "R", "before": "values of random variables under the multi-period distorted expectations converge to the values under a continuous-time non-linear expectation operator, which may be identified with a certain type of Peng's g-expectation. A coupling argument is given to show that this operator reduces to a classical linear expectation when restricted to", "after": "these risk measures, which we label dynamic spectral risk measures, are locally law-invariant and additive on", "start_char_pos": 910, "end_char_pos": 1248 }, { "type": "R", "before": "claims. Our results also show that a certain class of g-expectations driven by a Brownian motion and a Poisson random measure may be computed numerically by recursively defined distorted expectations", "after": "random variables. We substantiate the link between dynamic spectral risk measures and their static counterparts by establishing a limit theorem for general path-functionals which shows that such dynamic risk measures arise as limits under vanishing time-step of iterated spectral risk measures driven by approximating lattice random walks. This involves a certain non-standard scaling of the corresponding spectral weight-measures that we identify explicitly", "start_char_pos": 1280, "end_char_pos": 1479 } ]
[ 0, 122, 254, 387, 546, 695, 1131, 1287 ]
1301.3531
2
In this paper we explore a novel way to combine the dynamic notion of time-consistency with the static notion of quantile-based coherent risk-measure or spectral risk measure, of which Expected Shortfall is a prime example. We introduce a class of dynamic risk measures in terms of a certain family of g-expectations driven by Wiener and Poisson point processes . In analogy with the static case, we show that these risk measures, which we label dynamic spectral risk measures , are locally law-invariant and additive on the set of pathwise increasing random variables . We substantiate the link between dynamic spectral risk measures and their static counterparts by establishing a limit theorem for general path-functionals which shows that such dynamic risk measures arise as limits under vanishing time-step of iterated spectral risk measures driven by approximating lattice random walks. This involves a certain non-standard scaling of the corresponding spectral weight-measures that we identify explicitly .
In this paper we explore a novel way to combine the dynamic notion of time-consistency with the static notion of quantile-based coherent risk measure or spectral risk measure, of which Expected Shortfall is a prime example. We introduce a new class of dynamic risk measures given in terms of a certain family of g-expectations driven by Wiener and Poisson point processes , which we call dynamic spectral risk measures . We substantiate the link between dynamic spectral risk measures and their static counterparts by establishing a limit theorem for general path-functionals which shows that such dynamic risk measures arise as limits under vanishing time-step of iterated spectral risk measures driven by approximating lattice random walks. This involves a certain non-standard scaling of the corresponding spectral weight-measures that we identify explicitly . We also define a family of market-consistent dynamic spectral risk measures in the setting of a financial market. We identify the corresponding optimal hedging strategy and show that the negative of the value of such a risk measure is equal to the smallest price under a certain family of equivalent martingale measures. By varying the parameters of the risk measure we show that this price ranges between the subhedging price and the price under the minimal martingale measure .
[ { "type": "R", "before": "risk-measure", "after": "risk measure", "start_char_pos": 137, "end_char_pos": 149 }, { "type": "A", "before": null, "after": "new", "start_char_pos": 239, "end_char_pos": 239 }, { "type": "A", "before": null, "after": "given", "start_char_pos": 271, "end_char_pos": 271 }, { "type": "R", "before": ". In analogy with the static case, we show that these risk measures, which we label", "after": ", which we call", "start_char_pos": 364, "end_char_pos": 447 }, { "type": "D", "before": ", are locally law-invariant and additive on the set of pathwise increasing random variables", "after": null, "start_char_pos": 479, "end_char_pos": 570 }, { "type": "A", "before": null, "after": ". We also define a family of market-consistent dynamic spectral risk measures in the setting of a financial market. We identify the corresponding optimal hedging strategy and show that the negative of the value of such a risk measure is equal to the smallest price under a certain family of equivalent martingale measures. By varying the parameters of the risk measure we show that this price ranges between the subhedging price and the price under the minimal martingale measure", "start_char_pos": 1014, "end_char_pos": 1014 } ]
[ 0, 223, 365, 572, 894 ]
1301.3531
3
In this paper we explore a novel way to combine the dynamic notion of time-consistency with the static notion of quantile-based coherent risk measure or spectral risk measure, of which Expected Shortfall is a prime example. We introduce a new class of dynamic risk measures given in terms of a certain family of g-expectations driven by Wiener and Poisson point processes, which we call dynamic spectral risk measures. We substantiate the link between dynamic spectral risk measures and their static counterparts by establishing a limit theorem for general path-functionals which shows that such dynamic risk measures arise as limits under vanishing time-step of iterated spectral risk measures driven by approximating lattice random walks. This involves a certain non-standard scaling of the corresponding spectral weight-measures that we identify explicitly. We also define a family of market-consistent dynamic spectral risk measures in the setting of a financial market. We identify the corresponding optimal hedging strategy and show that the negative of the value of such a risk measure is equal to the smallest price under a certain family of equivalent martingale measures. By varying the parameters of the risk measure we show that this price ranges between the subhedging price and the price under the minimal martingale measure .
In this paper we propose the notion of continuous-time dynamic spectral risk-measure (DSR). Adopting a Poisson random measure setting, we define this class of dynamic coherent risk-measures in terms of certain backward stochastic differential equations. By establishing a functional limit theorem, we show that DSRs may be considered to be (strongly) time-consistent continuous-time extensions of iterated spectral risk-measures, which are obtained by iterating a given spectral risk-measure (such as Expected Shortfall) along a given time-grid. Specifically, we demonstrate that any DSR arises in the limit of a sequence of such iterated spectral risk-measures driven by lattice-random walks, under suitable scaling and vanishing time- and spatial-mesh sizes. To illustrate its use in financial optimisation problems, we analyse dynamic portfolio optimisation problems under DSR, providing verification theorems for the optimal (equilibrium) strategies and the corresponding value functions in terms of associated (extended) HJB equations. In the case of a long-only investor we explicitly identify optimal portfolio allocation strategies .
[ { "type": "R", "before": "explore a novel way to combine the dynamic notion of time-consistency with the static notion of quantile-based coherent risk measure or spectral risk measure, of which Expected Shortfall is a prime example. We introduce a new", "after": "propose the notion of continuous-time dynamic spectral risk-measure (DSR). Adopting a Poisson random measure setting, we define this", "start_char_pos": 17, "end_char_pos": 242 }, { "type": "R", "before": "risk measures given", "after": "coherent risk-measures", "start_char_pos": 260, "end_char_pos": 279 }, { "type": "R", "before": "a certain family of g-expectations driven by Wiener and Poisson point processes, which we call dynamic spectral risk measures. We substantiate the link between dynamic spectral risk measures and their static counterparts by establishing a limit theorem for general path-functionals which shows that such dynamic risk measures arise as limits under vanishing time-step", "after": "certain backward stochastic differential equations. By establishing a functional limit theorem, we show that DSRs may be considered to be (strongly) time-consistent continuous-time extensions", "start_char_pos": 292, "end_char_pos": 659 }, { "type": "R", "before": "risk measures driven by approximating lattice random walks. This involves a certain non-standard scaling of the corresponding spectral weight-measures that we identify explicitly. We also define a family of market-consistent dynamic spectral risk measures in the setting of a financial market. We identify the corresponding optimal hedging strategy and show that the negative of the value of such a risk measure is equal to the smallest price under a certain family of equivalent martingale measures. By varying the parameters of the risk measure we show that this price ranges between the subhedging price and the price under the minimal martingale measure", "after": "risk-measures, which are obtained by iterating a given spectral risk-measure (such as Expected Shortfall) along a given time-grid. Specifically, we demonstrate that any DSR arises in the limit of a sequence of such iterated spectral risk-measures driven by lattice-random walks, under suitable scaling and vanishing time- and spatial-mesh sizes. To illustrate its use in financial optimisation problems, we analyse dynamic portfolio optimisation problems under DSR, providing verification theorems for the optimal (equilibrium) strategies and the corresponding value functions in terms of associated (extended) HJB equations. In the case of a long-only investor we explicitly identify optimal portfolio allocation strategies", "start_char_pos": 681, "end_char_pos": 1338 } ]
[ 0, 223, 418, 740, 860, 974, 1181 ]
1301.3531
4
In this paper we propose the notion of continuous-time dynamic spectral risk-measure (DSR). Adopting a Poisson random measure setting, we define this class of dynamic coherent risk-measures in terms of certain backward stochastic differential equations. By establishing a functional limit theorem, we show that DSRs may be considered to be (strongly) time-consistent continuous-time extensions of iterated spectral risk-measures, which are obtained by iterating a given spectral risk-measure (such as Expected Shortfall) along a given time-grid. Specifically, we demonstrate that any DSR arises in the limit of a sequence of such iterated spectral risk-measures driven by lattice-random walks, under suitable scaling and vanishing time- and spatial-mesh sizes. To illustrate its use in financial optimisation problems, we analyse dynamic portfolio optimisation problems under DSR, providing verification theorems for the optimal (equilibrium) strategies and the corresponding value functions in terms of associated (extended) HJB equations. In the case of a long-only investor we explicitly identify optimal portfolio allocation strategies .
In this paper we propose the notion of continuous-time dynamic spectral risk-measure (DSR). Adopting a Poisson random measure setting, we define this class of dynamic coherent risk-measures in terms of certain backward stochastic differential equations. By establishing a functional limit theorem, we show that DSRs may be considered to be (strongly) time-consistent continuous-time extensions of iterated spectral risk-measures, which are obtained by iterating a given spectral risk-measure (such as Expected Shortfall) along a given time-grid. Specifically, we demonstrate that any DSR arises in the limit of a sequence of such iterated spectral risk-measures driven by lattice-random walks, under suitable scaling and vanishing time- and spatial-mesh sizes. To illustrate its use in financial optimisation problems, we analyse a dynamic portfolio optimisation problem under a DSR .
[ { "type": "A", "before": null, "after": "a", "start_char_pos": 830, "end_char_pos": 830 }, { "type": "R", "before": "problems under DSR, providing verification theorems for the optimal (equilibrium) strategies and the corresponding value functions in terms of associated (extended) HJB equations. In the case of a long-only investor we explicitly identify optimal portfolio allocation strategies", "after": "problem under a DSR", "start_char_pos": 862, "end_char_pos": 1140 } ]
[ 0, 91, 253, 545, 760, 1041 ]
1301.4207
1
Behavior of systems that are functions of anticipated behavior of other systems, whose own behavior is also anticipatory but homeostatic and determined by hierarchical ordering, which changes over time, of sets of possible environments that are not co-possible, is proven to be highly non-linear and sensitively dependent on precise parameters. Averages and other kinds of aggregates cannot be calculated for sets of measurements of behavior of systems, defined in this essay, that are "index complex" in this way. This includes many systems, for instance, social behavior, where anticipation of behavior of other individuals plays a central role. Analysis by way of generalized functions of complex variables is done for these kinds of systems, and equations of change of state are formally described. Behavior that comprises of responses to market interest rates is taken for example .
Behavior of systems that are functions of anticipated behavior of other systems, whose own behavior is also anticipatory but homeostatic and determined by hierarchical ordering, which changes over time, of sets of possible environments that are not co-possible, is proven to be highly non-linear and sensitively dependent on precise parameters. Averages and other kinds of aggregates cannot be calculated for sets of measurements of behavior of systems, defined in this essay, that are "index complex" in this way. This includes many systems, for instance, social behavior, where anticipation of behavior of other individuals plays a central role. Anticipation of preferences of economic actors are discussed in this way. Analysis by way of generalized functions of complex variables is done for these kinds of systems, and equations of change of state are formally described. Behavior that comprises of responses to market interest rates is taken for example . Continuity assumptions in economics analyzed in this context. Anticipatory responses to inflation in economics are discussed. Applications to theory of production are presented .
[ { "type": "A", "before": null, "after": "Anticipation of preferences of economic actors are discussed in this way.", "start_char_pos": 648, "end_char_pos": 648 }, { "type": "A", "before": null, "after": ". Continuity assumptions in economics analyzed in this context. Anticipatory responses to inflation in economics are discussed. Applications to theory of production are presented", "start_char_pos": 887, "end_char_pos": 887 } ]
[ 0, 344, 514, 647, 803 ]
1301.5129
1
A Bayesian non-parametric approach for efficient risk management is proposed. A dynamic model is considered where optimal portfolio weights and hedging ratios are adjusted at each period. The covariance matrix of the returns is described using an asymmetric MGARCH model . Restrictive parametric assumptions for the errors are avoided by relying on Bayesian non-parametric methods, which allow for a better evaluation of the uncertainty in financial decisions. Illustrative risk management problems using real data are solved. Significant differences in posterior distributions of the optimal weights and ratios are obtained arising from different assumptions for the errors in the time series model .
We propose a Bayesian non-parametric approach for modeling the distribution of multiple returns. In particular, we use an asymmetric dynamic conditional correlation (ADCC) model to estimate the time-varying correlations of financial returns where the individual volatilities are driven by GJR-GARCH models. The ADCC-GJR-GARCH model takes into consideration the asymmetries in individual assets' volatilities, as well as in the correlations. The errors are modeled using a Dirichlet location-scale mixture of multivariate Gaussian distributions allowing for a great flexibility in the return distribution in terms of skewness and kurtosis. Model estimation and prediction are developed using MCMC methods based on slice sampling techniques. We carry out a simulation study to illustrate the flexibility of the proposed approach. We find that the proposed DPM model is able to adapt to several frequently used distribution models and also accurately estimates the posterior distribution of the volatilities of the returns, without assuming any underlying distribution. Finally, we present a financial application using Apple and NASDAQ Industrial index data to solve a portfolio allocation problem. We find that imposing a restrictive parametric distribution can result into underestimation of the portfolio variance, whereas DPM model is able to overcome this problem .
[ { "type": "R", "before": "A", "after": "We propose a", "start_char_pos": 0, "end_char_pos": 1 }, { "type": "R", "before": "efficient risk management is proposed. A dynamic model is considered where optimal portfolio weights and hedging ratios are adjusted at each period. The covariance matrix of the returns is described using an asymmetric MGARCH model . Restrictive parametric assumptions for the errors are avoided by relying on Bayesian non-parametric methods, which allow for a better evaluation of the uncertainty in financial decisions. Illustrative risk management problems using real data are solved. Significant differences in posterior distributions of the optimal weights and ratios are obtained arising from different assumptions for the errors in the time series model", "after": "modeling the distribution of multiple returns. In particular, we use an asymmetric dynamic conditional correlation (ADCC) model to estimate the time-varying correlations of financial returns where the individual volatilities are driven by GJR-GARCH models. The ADCC-GJR-GARCH model takes into consideration the asymmetries in individual assets' volatilities, as well as in the correlations. The errors are modeled using a Dirichlet location-scale mixture of multivariate Gaussian distributions allowing for a great flexibility in the return distribution in terms of skewness and kurtosis. Model estimation and prediction are developed using MCMC methods based on slice sampling techniques. We carry out a simulation study to illustrate the flexibility of the proposed approach. We find that the proposed DPM model is able to adapt to several frequently used distribution models and also accurately estimates the posterior distribution of the volatilities of the returns, without assuming any underlying distribution. Finally, we present a financial application using Apple and NASDAQ Industrial index data to solve a portfolio allocation problem. We find that imposing a restrictive parametric distribution can result into underestimation of the portfolio variance, whereas DPM model is able to overcome this problem", "start_char_pos": 39, "end_char_pos": 699 } ]
[ 0, 77, 187, 460, 526 ]
1301.5824
1
We present a maximum entropy framework to separate the intrinsic and the extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible histogram of mRNA copy numbers by accounting for possible variations in global extrinsic factors. The distribution of extrinsic factors is estomated using the maximum entropy principle. Our results show that extrinsic factors quantitatively and qualitatively affect the histogram. Specifically, we{\it suggest that the cell to cell variation in extrinsic factors accounts for the wider that Poisson{\it distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in {\it E. coli while others need verification. Application of the current method to understanding noise in mRNA production in eukaryotes and protein production is also discussed.
We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in{\it E. coli . We suggest that the variation in extrinsic factors may account for the observed{\it wider than Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in {\it E. coli while others need verification. Application of the current framework to more complex situations is also discussed.
[ { "type": "R", "before": "the intrinsic and the", "after": "intrinsic and", "start_char_pos": 51, "end_char_pos": 72 }, { "type": "R", "before": "histogram of mRNA copy numbers", "after": "probability distribution of the copy number of the gene product (mRNA or protein)", "start_char_pos": 202, "end_char_pos": 232 }, { "type": "D", "before": "global", "after": null, "start_char_pos": 274, "end_char_pos": 280 }, { "type": "R", "before": "estomated", "after": "estimated", "start_char_pos": 341, "end_char_pos": 350 }, { "type": "R", "before": "quantitatively and qualitatively affect the histogram. Specifically, we", "after": "qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in", "start_char_pos": 428, "end_char_pos": 499 }, { "type": "A", "before": null, "after": "E. coli", "start_char_pos": 504, "end_char_pos": 504 }, { "type": "A", "before": null, "after": ". We", "start_char_pos": 505, "end_char_pos": 505 }, { "type": "D", "before": "cell to cell", "after": null, "start_char_pos": 523, "end_char_pos": 535 }, { "type": "R", "before": "accounts for the wider that Poisson", "after": "may account for the observed", "start_char_pos": 567, "end_char_pos": 602 }, { "type": "A", "before": null, "after": "wider than Poisson", "start_char_pos": 607, "end_char_pos": 607 }, { "type": "R", "before": "method to understanding noise in mRNA production in eukaryotes and protein production", "after": "framework to more complex situations", "start_char_pos": 955, "end_char_pos": 1040 } ]
[ 0, 160, 299, 387, 482, 642, 792, 927 ]
1301.6114
1
We use a simple agent based model of value investors in financial markets to test three credit regulation policies. The first is the unregulated case, which only imposes limits on maximum leverage. The second is Basle II , which also imposes interest rate spreads on loans and haircuts on collateral, and the third is a hypothetical alternative in which banks perfectly hedge all of their leverage-induced risk with options that are paid for by the funds . When compared to the unregulated case both Basle II and the perfect hedge policy reduce the risk of default when leverage is low but increase it when leverage is high. This is because both regulation policies increase the amount of synchronized buying and selling needed to achieve deleveraging, which can destabilize the market. None of these policies are optimal for everyone: Risk neutral investors prefer the unregulated case with a maximum leverageof roughly four , banks prefer the perfect hedge policy, and fund managers prefer the unregulated case with a high maximum leverage. No one prefers Basle II.
We use a simple agent based model of value investors in financial markets to test three credit regulation policies. The first is the unregulated case, which only imposes limits on maximum leverage. The second is Basle II and the third is a hypothetical alternative in which banks perfectly hedge all of their leverage-induced risk with options . When compared to the unregulated case both Basle II and the perfect hedge policy reduce the risk of default when leverage is low but increase it when leverage is high. This is because both regulation policies increase the amount of synchronized buying and selling needed to achieve deleveraging, which can destabilize the market. None of these policies are optimal for everyone: Risk neutral investors prefer the unregulated case with low maximum leverage , banks prefer the perfect hedge policy, and fund managers prefer the unregulated case with high maximum leverage. No one prefers Basle II.
[ { "type": "R", "before": ", which also imposes interest rate spreads on loans and haircuts on collateral, and", "after": "and", "start_char_pos": 221, "end_char_pos": 304 }, { "type": "D", "before": "that are paid for by the funds", "after": null, "start_char_pos": 424, "end_char_pos": 454 }, { "type": "R", "before": "a maximum leverageof roughly four", "after": "low maximum leverage", "start_char_pos": 892, "end_char_pos": 925 }, { "type": "D", "before": "a", "after": null, "start_char_pos": 1018, "end_char_pos": 1019 } ]
[ 0, 115, 197, 456, 624, 786, 1042 ]
1301.6141
1
Instabilities in the price dynamics of a large number of financial assets are a clear sign of systemic events. By investigating a set of 20 high cap stocks traded at the Italian Stock Exchange, we find that there is a large number of multiple cojumps, i. e. minutes in which a sizable number of stocks displays a discontinuity of the price process. We show that the dynamics of these jumps is not described neither by a multivariate Poisson nor by a multivariate Hawkes model , which are unable to capture simultaneously the time clustering of jumps and the high synchronization of jumps across assets . We introduce a one factor model approach where both the factor and the idiosyncratic jump components are described by a Hawkes process. We introduce a robust calibration scheme which is able to distinguish systemic and idiosyncratic jumps and we show that the model reproduces very well the empirical behaviour of the jumps of the Italian stocks .
Instabilities in the price dynamics of a large number of financial assets are a clear sign of systemic events. By investigating a set of 20 high cap stocks traded at the Italian Stock Exchange, we find that there is a large number of high frequency cojumps. We show that the dynamics of these jumps is described neither by a multivariate Poisson nor by a multivariate Hawkes model . We introduce a Hawkes one factor model which is able to capture simultaneously the time clustering of jumps and the high synchronization of jumps across assets .
[ { "type": "R", "before": "multiple cojumps, i. e. minutes in which a sizable number of stocks displays a discontinuity of the price process.", "after": "high frequency cojumps.", "start_char_pos": 234, "end_char_pos": 348 }, { "type": "D", "before": "not", "after": null, "start_char_pos": 393, "end_char_pos": 396 }, { "type": "R", "before": ", which are unable", "after": ". We introduce a Hawkes one factor model which is able", "start_char_pos": 476, "end_char_pos": 494 }, { "type": "D", "before": ". We introduce a one factor model approach where both the factor and the idiosyncratic jump components are described by a Hawkes process. We introduce a robust calibration scheme which is able to distinguish systemic and idiosyncratic jumps and we show that the model reproduces very well the empirical behaviour of the jumps of the Italian stocks", "after": null, "start_char_pos": 602, "end_char_pos": 949 } ]
[ 0, 110, 348, 603, 739 ]
1301.6252
1
We propose a few variations around a simple model in order to take into account the market impactof the option seller when hedging an option. This "retro-action" mechanism turns the linear Black and Scholes PDE into a non-linear one. This model allows also to retrieve some earlier results of \mbox{%DIFAUXCMD CheriSonTouz .
We consider a model of linear market impact, and address the problem of replicating a contingent claim in this framework. We derive a non-linear Black-Scholes Equation that provides an exact replication strategy. This equation is fully non-linear and singular, but we show that it is well posed, and we prove existence of smooth solutions for a large class of final payoffs, both for constant and local volatility. To obtain regularity of the solutions, we develop an original method based on Legendre transforms. The close connections with the problem of hedging with it gamma constraints studied by Cheridito, Soner and Touzi and with the problem of hedging under it liquidity costs are discussed. We also derive a modified Black-Scholes formula valid for asymptotically small impact parameter, and finally provide numerical simulations as an illustration .
[ { "type": "R", "before": "propose a few variations around a simple model in order to take into account the market impactof the option seller when hedging an option. This \"retro-action\" mechanism turns the linear Black and Scholes PDE into a", "after": "consider a model of linear market impact, and address the problem of replicating a contingent claim in this framework. We derive a", "start_char_pos": 3, "end_char_pos": 217 }, { "type": "R", "before": "one. This model allows also to retrieve some earlier results of \\mbox{%DIFAUXCMD CheriSonTouz", "after": "Black-Scholes Equation that provides an exact replication strategy. This equation is fully non-linear and singular, but we show that it is well posed, and we prove existence of smooth solutions for a large class of final payoffs, both for constant and local volatility. To obtain regularity of the solutions, we develop an original method based on Legendre transforms. The close connections with the problem of hedging with it gamma constraints studied by Cheridito, Soner and Touzi and with the problem of hedging under it liquidity costs are discussed. We also derive a modified Black-Scholes formula valid for asymptotically small impact parameter, and finally provide numerical simulations as an illustration", "start_char_pos": 229, "end_char_pos": 322 } ]
[ 0, 141, 233 ]
1302.0590
1
Duality for robust hedging with proportional transaction costs of path dependent European options is obtained in a discrete time financial market with one risky asset. Investor's portfolio consists of a dynamically traded stock and a static position in vanilla options which can be exercised at maturity. Only stock trading is subject to proportional transaction costs. The main theorem is duality between hedging and a Monge-Kantorovich type optimization problem. In this dual transport problem the optimization is over all the probability measures which satisfy an approximate martingale condition related to consistent price systems in addition to the usual marginal constraints.
Duality for robust hedging with proportional transaction costs of path dependent European options is obtained in a discrete time financial market with one risky asset. Investor's portfolio consists of a dynamically traded stock and a static position in vanilla options which can be exercised at maturity. Both the stock and the option trading is subject to proportional transaction costs. The main theorem is duality between hedging and a Monge-Kantorovich type optimization problem. In this dual transport problem the optimization is over all the probability measures which satisfy an approximate martingale condition related to consistent price systems in addition to the usual marginal constraints.
[ { "type": "R", "before": "Only stock", "after": "Both the stock and the option", "start_char_pos": 305, "end_char_pos": 315 } ]
[ 0, 167, 304, 369, 464 ]
1302.2063
2
The financial crisis clearly illustrated the importance of characterizing the level of ` systemic' risk associated with an entire credit network, rather than with single institutions. However, the interplay between financial distress and topological changes is still poorly understood. Here we analyze the quarterly interbank exposures among Dutch banks over the period 1998-2008, ending with the crisis. After controlling for the link density, many topological properties display an abrupt change in 2008, providing a clear - but unpredictable - signature of the crisis. By contrast, if the heterogeneity of banks' connectivity is controlled for, the same properties show a gradual transition to the crisis, starting in 2005 and preceded by an even earlier period during which anomalous debt loops presumably favored the underestimation of counter-party risk. These early-warning signals are undetectable if the network is reconstructed from partial bank-specific data, as routinely done. We discuss important implications for bank regulatory policies.
The financial crisis clearly illustrated the importance of characterizing the level of ' systemic' risk associated with an entire credit network, rather than with single institutions. However, the interplay between financial distress and topological changes is still poorly understood. Here we analyze the quarterly interbank exposures among Dutch banks over the period 1998-2008, ending with the crisis. After controlling for the link density, many topological properties display an abrupt change in 2008, providing a clear - but unpredictable - signature of the crisis. By contrast, if the heterogeneity of banks' connectivity is controlled for, the same properties show a gradual transition to the crisis, starting in 2005 and preceded by an even earlier period during which anomalous debt loops could have led to the underestimation of counter-party risk. These early-warning signals are undetectable if the network is reconstructed from partial bank-specific data, as routinely done. We discuss important implications for bank regulatory policies.
[ { "type": "R", "before": "`", "after": "'", "start_char_pos": 87, "end_char_pos": 88 }, { "type": "R", "before": "presumably favored", "after": "could have led to", "start_char_pos": 799, "end_char_pos": 817 } ]
[ 0, 183, 285, 404, 571, 860, 989 ]
1302.2312
1
In this article we study the convergence of a European lookback option with floating strike to its evaluation with the Black-Scholes model. We confirm that this convergence is of order 1/ \sqrt{n . For this, we use the binomial model of Cheuk-Vorst which allows us to write the price of the option using a double sum. Based on an improvement of a lemma of Lin-Palmer, we are able to give the precise value of the term in 1/ \sqrt{n in the expansion of the error; we also obtain the value of the term in 1/n if the risk free interest rate is non zero .
In this article we study the convergence of a European lookback option with floating strike evaluated with the binomial model of Cox-Ross-Rubinstein to its evaluation with the Black-Scholes model. We do the same for its delta. We confirm that these convergences are of order 1/ Sqrt(n) . For this, we use the binomial model of Cheuk-Vorst which allows us to write the price of the option using a double sum. Based on an improvement of a lemma of Lin-Palmer, we are able to give the precise value of the term in 1/ Sqrt(n) in the expansion of the error; we also obtain the value of the term in 1/n if the risk free interest rate is non zero . This modelisation will also allow us to determine the first term in the expansion of the delta .
[ { "type": "A", "before": null, "after": "evaluated with the binomial model of Cox-Ross-Rubinstein", "start_char_pos": 92, "end_char_pos": 92 }, { "type": "R", "before": "confirm that this convergence is", "after": "do the same for its delta. We confirm that these convergences are", "start_char_pos": 144, "end_char_pos": 176 }, { "type": "R", "before": "\\sqrt{n", "after": "Sqrt(n)", "start_char_pos": 189, "end_char_pos": 196 }, { "type": "R", "before": "\\sqrt{n", "after": "Sqrt(n)", "start_char_pos": 425, "end_char_pos": 432 }, { "type": "A", "before": null, "after": ". This modelisation will also allow us to determine the first term in the expansion of the delta", "start_char_pos": 551, "end_char_pos": 551 } ]
[ 0, 140, 198, 318, 463 ]
1302.3001
1
The Third Fundamental Theorem of Asset Pricing allows to apply the Second Fundamental Theorem under very mild regularity conditions regarding the process of asset prices. An economic equilibrium exists if and only if the price process is a martingale. Hence, market efficiency at least requires the absence of weak arbitrage opportunities. I show that no weak arbitrage is necessary but not sufficient to establish a situation where asset prices always "fully reflect" a specific set of information beyond the price history. By contrast, completeness is a sufficient and necessary condition for a market to be informationally efficient, i.e., only in that case past and future events turn out to be conditionally independent of the price history . At the end of the paper I give different characterizations of market efficiency.
The Third Fundamental Theorem of Asset Pricing allows to apply the Second Fundamental Theorem under very mild regularity conditions regarding the process of asset prices. It has been shown that an economic equilibrium exists if and only if the price process is a martingale. Hence, market efficiency at least requires the absence of weak arbitrage , but this is not sufficient to establish a situation where asset prices always "fully reflect" some information beyond the price history. By contrast, completeness is a sufficient and necessary condition for a market to be informationally efficient, i.e., only in that case past and future events turn out to be independent conditional on the price history with respect to the physical measure . At the end of the paper I give different characterizations of market efficiency.
[ { "type": "R", "before": "An", "after": "It has been shown that an", "start_char_pos": 171, "end_char_pos": 173 }, { "type": "R", "before": "opportunities. I show that no weak arbitrage is necessary but", "after": ", but this is", "start_char_pos": 325, "end_char_pos": 386 }, { "type": "R", "before": "a specific set of", "after": "some", "start_char_pos": 469, "end_char_pos": 486 }, { "type": "R", "before": "conditionally independent of", "after": "independent conditional on", "start_char_pos": 699, "end_char_pos": 727 }, { "type": "A", "before": null, "after": "with respect to the physical measure", "start_char_pos": 746, "end_char_pos": 746 } ]
[ 0, 170, 251, 339, 524, 748 ]
1302.3001
2
The Third Fundamental Theorem of Asset Pricing allows to apply the Second Fundamental Theorem under very mild regularity conditions regarding the process of asset prices. It has been shown that an economic equilibrium exists if and only if the price process is a martingale. Hence, market efficiency at least requires the absence of weak arbitrage , but this is not sufficient to establish a situation where asset prices always "fully reflect " some information beyond the price history . By contrast, completeness is a sufficient and necessary condition for a market to be informationally efficient , i.e., only in that case past and future events turn out to be independent conditional on the price history with respect to the physical measure. At the end of the paper I give different characterizations of market efficiency .
Market efficiency at least requires the absence of weak arbitrage opportunities , but this is not sufficient to establish a situation where the market is sensitive, i.e., where it "fully reflects " or "rapidly adjusts to" some information flow including the evolution of asset prices . By contrast, No Weak Arbitrage together with market sensitivity is sufficient and necessary for a market to be informationally efficient .
[ { "type": "R", "before": "The Third Fundamental Theorem of Asset Pricing allows to apply the Second Fundamental Theorem under very mild regularity conditions regarding the process of asset prices. It has been shown that an economic equilibrium exists if and only if the price process is a martingale. Hence, market", "after": "Market", "start_char_pos": 0, "end_char_pos": 288 }, { "type": "A", "before": null, "after": "opportunities", "start_char_pos": 348, "end_char_pos": 348 }, { "type": "R", "before": "asset prices always", "after": "the market is sensitive, i.e., where it", "start_char_pos": 409, "end_char_pos": 428 }, { "type": "R", "before": "reflect", "after": "reflects", "start_char_pos": 436, "end_char_pos": 443 }, { "type": "R", "before": "some information beyond the price history", "after": "or \"rapidly adjusts to\" some information flow including the evolution of asset prices", "start_char_pos": 446, "end_char_pos": 487 }, { "type": "R", "before": "completeness is a", "after": "No Weak Arbitrage together with market sensitivity is", "start_char_pos": 503, "end_char_pos": 520 }, { "type": "D", "before": "condition", "after": null, "start_char_pos": 546, "end_char_pos": 555 }, { "type": "D", "before": ", i.e., only in that case past and future events turn out to be independent conditional on the price history with respect to the physical measure. At the end of the paper I give different characterizations of market efficiency", "after": null, "start_char_pos": 601, "end_char_pos": 827 } ]
[ 0, 170, 274, 489, 747 ]
1302.3197
1
Employing a recent technique which allows the representation of nonstationary data by means of a juxtaposition of locally stationary paths of different length, we introduce a comprehensive analysis of the key observables in a financial market: the trading volume and the price fluctuations. From the segmentation procedure we are able to introduce a quantitative description of statistical features of these two quantities, which are often named stylized facts, namely the tails of the distribution of trading volume and price fluctuations and a dynamics compatible with the U-shaped profile of the volume in a trading section and the slow decay of the autocorrelation function. The segmentation of the trading volume series provides evidence of slow evolution of the fluctuating parameters of each patch, pointing to the mixing scenario. By assuming that long-term features are the outcome of a statistical mixture of simple local forms, we test and compare different probability density functions to provide the long-term distribution of the trading volume, concluding that the log-normal gives the best agreement with the empirical distribution. Moreover, the segmentation of the magnitude price fluctuations are quite different from the results for the trading volume, indicating that changes in the statistics of price fluctuations occur at a faster scale than in the case of trading volume.
Employing a recent technique which allows the representation of nonstationary data by means of a juxtaposition of locally stationary patches of different length, we introduce a comprehensive analysis of the key observables in a financial market: the trading volume and the price fluctuations. From the segmentation procedure we are able to introduce a quantitative description of a group of statistical features (stylizes facts) of the trading volume and price fluctuations , namely the tails of each distribution, the U-shaped profile of the volume in a trading session and the evolution of the trading volume autocorrelation function. The segmentation of the trading volume series provides evidence of slow evolution of the fluctuating parameters of each patch, pointing to the mixing scenario. Assuming that long-term features are the outcome of a statistical mixture of simple local forms, we test and compare different probability density functions to provide the long-term distribution of the trading volume, concluding that the log-normal gives the best agreement with the empirical distribution. Moreover, the segmentation of the magnitude price fluctuations are quite different from the results for the trading volume, indicating that changes in the statistics of price fluctuations occur at a faster scale than in the case of trading volume.
[ { "type": "R", "before": "paths", "after": "patches", "start_char_pos": 133, "end_char_pos": 138 }, { "type": "R", "before": "statistical features of these two quantities, which are often named stylized facts, namely the tails of the distribution of", "after": "a group of statistical features (stylizes facts) of the", "start_char_pos": 378, "end_char_pos": 501 }, { "type": "R", "before": "and a dynamics compatible with the", "after": ", namely the tails of each distribution, the", "start_char_pos": 540, "end_char_pos": 574 }, { "type": "R", "before": "section and the slow decay of the", "after": "session and the evolution of the trading volume", "start_char_pos": 619, "end_char_pos": 652 }, { "type": "R", "before": "By assuming", "after": "Assuming", "start_char_pos": 839, "end_char_pos": 850 } ]
[ 0, 290, 678, 838, 1148 ]
1302.3250
1
Recently several CSMA algorithms , e.g., Glauber dynamics , have been proposed for multihop wireless scheduling, as viable solutions to achieve the throughput optimality, yet are simple to implement. However, their delay performances still remain far from satisfactory , mainly due to the nature of the underlying Markov chains that imposes a fundamental constraint on how the link state can evolve over time. In this paper, we propose a new approach toward better queueing and delay performance, based on our observation that the algorithm needs not be Markovian, as long as it can be implemented in a distributed manner, achieve the same throughput optimality, while offering far better delay performance for general network topologies. Our approach hinges upon utilizing past state information observed by local link and then constructing a high-order Markov chain for the evolution of the feasible link schedules. We show in theory and simulation that our proposed algorithm, named delayed CSMA, adds virtually no additional overhead onto the existing CSMA-based algorithms, achieves the throughput optimality under the usual choice of link weight as a function of local queue length, and also guarantees much better delay performance by effectively ' de-correlating' the link state process (thus removing link starvation) under any arbitrary network topology. From our extensive simulations we observe that the delay under our algorithm can be often reduced by a factor of 20 over a wide range of scenarios, compared to the standard Glauber-dynamics-based CSMA algorithm.
Recently several CSMA algorithms based on the Glauber dynamics model have been proposed for multihop wireless scheduling, as viable solutions to achieve the throughput optimality, yet are simple to implement. However, their delay performances still remain unsatisfactory , mainly due to the nature of the underlying Markov chains that imposes a fundamental constraint on how the link state can evolve over time. In this paper, we propose a new approach toward better queueing and delay performance, based on our observation that the algorithm needs not be Markovian, as long as it can be implemented in a distributed manner, achieve the same throughput optimality, while offering far better delay performance for general network topologies. Our approach hinges upon utilizing past state information observed by local link and then constructing a high-order Markov chain for the evolution of the feasible link schedules. We show in theory and simulation that our proposed algorithm, named delayed CSMA, adds virtually no additional overhead onto the existing CSMA-based algorithms, achieves the throughput optimality under the usual choice of link weight as a function of local queue length, and also provides much better delay performance by effectively ` de-correlating' the link state process (thus removing link starvation) under any arbitrary network topology. From our extensive simulations we observe that the delay under our algorithm can be often reduced by a factor of 20 over a wide range of scenarios, compared to the standard Glauber-dynamics-based CSMA algorithm.
[ { "type": "R", "before": ", e.g., Glauber dynamics ,", "after": "based on the Glauber dynamics model", "start_char_pos": 33, "end_char_pos": 59 }, { "type": "R", "before": "far from satisfactory", "after": "unsatisfactory", "start_char_pos": 247, "end_char_pos": 268 }, { "type": "R", "before": "guarantees", "after": "provides", "start_char_pos": 1198, "end_char_pos": 1208 }, { "type": "R", "before": "'", "after": "`", "start_char_pos": 1254, "end_char_pos": 1255 } ]
[ 0, 199, 409, 738, 917, 1364 ]
1302.3319
1
In this paper we extend Buchen's method to develop a new technique for pricing of some exotic options with several expiry dates(more than 3 expiry dates) using a concept of higher order binary option. At first we introduce the concept of higher order binary option and then provide the pricing formulae of nth order binaries using PDE method. After that, we apply them to pricing of some multiple-expiry exotic options such as Bermudan option, multi time extendable option, multi shout option and etc. Here, when calculating the price of concrete multiple-expiry exotic options, we do not try to get the formal solution of corresponding initial-boundary problem of the Black-Scholes equation, but explain how to express the expiry payoffs of the exotic option as a combination of the payoffs of some class of higher order binary options. Once the expiry payoffs are expressed as a linear combination of the payoffs of some class of higher order binary options, in order to avoid arbitrage, the exotic option prices are obtained by static replication with respect to this family of higher order binaries.
In this paper we extend Buchen's method to develop a new technique for pricing of some exotic options with several expiry dates(more than 3 expiry dates) using a concept of higher order binary option. At first we introduce the concept of higher order binary option and then provide the pricing formulae of n-th order binaries using PDE method. After that, we apply them to pricing of some multiple-expiry exotic options such as Bermudan option, multi time extendable option, multi shout option and etc. Here, when calculating the price of concrete multiple-expiry exotic options, we do not try to get the formal solution to corresponding initial-boundary problem of the Black-Scholes equation, but explain how to express the expiry payoffs of the exotic options as a combination of the payoffs of some class of higher order binary options. Once the expiry payoffs are expressed as a linear combination of the payoffs of some class of higher order binary options, in order to avoid arbitrage, the exotic option prices are obtained by static replication with respect to this family of higher order binaries.
[ { "type": "R", "before": "nth", "after": "n-th", "start_char_pos": 306, "end_char_pos": 309 }, { "type": "R", "before": "of", "after": "to", "start_char_pos": 620, "end_char_pos": 622 }, { "type": "R", "before": "option", "after": "options", "start_char_pos": 753, "end_char_pos": 759 } ]
[ 0, 200, 342, 501, 837 ]
1302.3654
1
We study the pricing problem for corporate defaultable bond from the viewpoint of the investors outside the firm that could not exactly know about the information of firm. We consider the problem for pricing of corporate defaultable bond in the case that the firm value is only declared in some fixed discrete time and unexpected default intensity is determined by the declared firm value. Here we provide a partial differential equation model for such a defaultable bond and give its pricing formula. Our pricing model is derived to a solving problem of a partial differential equation with random constant default intensityand a terminal value of the binary type . Our main method is to use the solving method of a partial differential equation for bond pricing with constant default intensity in every subinterval and to take expectation to remove the random constants.
We study the pricing problem for corporate defaultable bond from the viewpoint of the investors outside the firm that could not exactly know about the information of the firm. We consider the problem for pricing of corporate defaultable bond in the case when the firm value is only declared in some fixed discrete time and unexpected default intensity is determined by the declared firm value. Here we provide a partial differential equation model for such a defaultable bond and give its pricing formula. Our pricing model is derived to solving problems of partial differential equations with random constants (de- fault intensity) and terminal values of binary types . Our main method is to use the solving method of a partial differential equation with a random constant in every subinterval and to take expectation to remove the random constants.
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 166, "end_char_pos": 166 }, { "type": "R", "before": "that", "after": "when", "start_char_pos": 251, "end_char_pos": 255 }, { "type": "R", "before": "a solving problem of a partial differential equation with random constant default intensityand a terminal value of the binary type", "after": "solving problems of partial differential equations with random constants (de- fault intensity) and terminal values of binary types", "start_char_pos": 535, "end_char_pos": 665 }, { "type": "R", "before": "for bond pricing with constant default intensity", "after": "with a random constant", "start_char_pos": 748, "end_char_pos": 796 } ]
[ 0, 172, 390, 502, 667 ]
1302.4006
1
Undirected labeled graphs and graph rewriting are natural models of chemical compounds and chemical reactions. This provides a basis for exploring spaces of molecules and computing reaction networks implicitly defined by graph grammars. Molecule graphs are connected, meaning that rewriting steps in general are many-to-many graph transformations. Chemical grammars are typically subject to combinatorial explosion, however, making it often infeasible to compute the underlying network by direct breadth-first expansion. To alleviate this problem, we introduce here partial applications of rules as a basis for the efficient implementation of strategies that are not only well suited for exploration of chemistries defined by graph grammars, but that are also applicable in a generalgraph rewriting context as well. As showcases, we explore a complex chemistry based on the Diels-Alder reaction to explore specific subspaces of the molecular space. As a non-chemical application we use the framework of exploration strategies to model an abstract graph rewriting problem to construct high-level transformations that cannot be directly represented by the Double-Pushout formalism starting from simple DPO transformation rules .
Computational approaches to exploring "chemical universes", i.e., very large sets, potentially infinite sets of compounds that can be constructed by a prescribed collection of reaction mechanisms, in practice suffer from a combinatorial explosion. It quickly becomes impossible to test, for all pairs of compounds in a rapidly growing network, whether they can react with each other. More sophisticated and efficient strategies are therefore required to construct very large chemical reaction networks. Undirected labeled graphs and graph rewriting are natural models of chemical compounds and chemical reactions. Borrowing the idea of partial evaluation from functional programming, we introduce partial applications of rewrite rules. Binding substrate to rules increases the number of rules but drastically prunes the substrate sets to which it might match, resulting in dramatically reduced resource requirements. At the same time, exploration strategies can be guided, e.g. based on restrictions on the product molecules to avoid the explicit enumeration of very unlikely compounds. To this end we introduce here a generic framework for the specification of exploration strategies in graph-rewriting systems. Using key examples of complex chemical networks from sugar chemistry and the realm of metabolic networks we demonstrate the feasibility of a high-level strategy framework. The ideas presented here can not only be used for a strategy-based chemical space exploration that has close correspondence of experimental results, but are much more general. In particular, the framework can be used to emulate higher-level transformation models such as illustrated in a small puzzle game .
[ { "type": "A", "before": null, "after": "Computational approaches to exploring \"chemical universes\", i.e., very large sets, potentially infinite sets of compounds that can be constructed by a prescribed collection of reaction mechanisms, in practice suffer from a combinatorial explosion. It quickly becomes impossible to test, for all pairs of compounds in a rapidly growing network, whether they can react with each other. More sophisticated and efficient strategies are therefore required to construct very large chemical reaction networks.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "This provides a basis for exploring spaces of molecules and computing reaction networks implicitly defined by graph grammars. Molecule graphs are connected, meaning that rewriting steps in general are many-to-many graph transformations. Chemical grammars are typically subject to combinatorial explosion, however, making it often infeasible to compute the underlying network by direct breadth-first expansion. To alleviate this problem,", "after": "Borrowing the idea of partial evaluation from functional programming, we introduce partial applications of rewrite rules. Binding substrate to rules increases the number of rules but drastically prunes the substrate sets to which it might match, resulting in dramatically reduced resource requirements. At the same time, exploration strategies can be guided, e.g. based on restrictions on the product molecules to avoid the explicit enumeration of very unlikely compounds. To this end", "start_char_pos": 112, "end_char_pos": 548 }, { "type": "R", "before": "partial applications of rules as a basis for the efficient implementation of strategies that are not only well suited for exploration of chemistries defined by graph grammars, but that are also applicable in a generalgraph rewriting context as well. As showcases, we explore a complex chemistry based on the Diels-Alder reaction to explore specific subspaces of the molecular space. As a non-chemical application we use the framework of exploration strategies to model an abstract graph rewriting problem to construct", "after": "a generic framework for the specification of exploration strategies in graph-rewriting systems. Using key examples of complex chemical networks from sugar chemistry and the realm of metabolic networks we demonstrate the feasibility of a", "start_char_pos": 567, "end_char_pos": 1084 }, { "type": "R", "before": "transformations that cannot be directly represented by the Double-Pushout formalism starting from simple DPO transformation rules", "after": "strategy framework. The ideas presented here can not only be used for a strategy-based chemical space exploration that has close correspondence of experimental results, but are much more general. In particular, the framework can be used to emulate higher-level transformation models such as illustrated in a small puzzle game", "start_char_pos": 1096, "end_char_pos": 1225 } ]
[ 0, 111, 237, 348, 521, 816, 949 ]
1302.4254
1
We consider a financial market model with a single risky asset whose price process evolves according to a general jump-diffusion with locally bounded coefficients and where market participants have only access to a partial information flow (%DIFDELCMD < \E%%% _t)_{t\geq0\subseteq(}%DIFDELCMD < \F%%% _t)_{t\geq0 . For any utility function, we prove that the partial information financial market is locally viable, in the sense that the problem of maximizing the expected utility of terminal wealth has a solution up to a stopping time, if and only if the marginal utility of the terminal wealth is the density of a partial information equivalent martingale measure (PIEMM). This equivalence result is proved in a constructive way by relying on maximum principles for stochastic control under partial information. We then show that the financial market is globally viable if and only if there exists a partial information local martingale deflator (PILMD), which can be explicitly constructed. In the case of bounded coefficients, the latter turns out to be the density process of a global PIEMM. We illustrate our results by means of an explicit example.
We consider a financial market model with a single risky asset whose price process evolves according to a general jump-diffusion with locally bounded coefficients and where market participants have only access to a partial information flow %DIFDELCMD < \E%%% \subseteq(}%DIFDELCMD < \F%%% . For any utility function, we prove that the partial information financial market is locally viable, in the sense that the optimal portfolio problem has a solution up to a stopping time, if and only if the (normalised) marginal utility of the terminal wealth generates a partial information equivalent martingale measure (PIEMM). This equivalence result is proved in a constructive way by relying on maximum principles for stochastic control problems under partial information. We then characterize a global notion of market viability in terms of partial information local martingale deflators (PILMDs). We illustrate our results by means of a simple example.
[ { "type": "D", "before": "(", "after": null, "start_char_pos": 240, "end_char_pos": 241 }, { "type": "D", "before": "_t)_{t\\geq0", "after": null, "start_char_pos": 260, "end_char_pos": 271 }, { "type": "D", "before": "_t)_{t\\geq0", "after": null, "start_char_pos": 301, "end_char_pos": 312 }, { "type": "R", "before": "problem of maximizing the expected utility of terminal wealth", "after": "optimal portfolio problem", "start_char_pos": 437, "end_char_pos": 498 }, { "type": "A", "before": null, "after": "(normalised)", "start_char_pos": 556, "end_char_pos": 556 }, { "type": "R", "before": "is the density of", "after": "generates", "start_char_pos": 597, "end_char_pos": 614 }, { "type": "A", "before": null, "after": "problems", "start_char_pos": 788, "end_char_pos": 788 }, { "type": "R", "before": "show that the financial market is globally viable if and only if there exists a", "after": "characterize a global notion of market viability in terms of", "start_char_pos": 824, "end_char_pos": 903 }, { "type": "R", "before": "deflator (PILMD), which can be explicitly constructed. In the case of bounded coefficients, the latter turns out to be the density process of a global PIEMM.", "after": "deflators (PILMDs).", "start_char_pos": 941, "end_char_pos": 1098 }, { "type": "R", "before": "an explicit", "after": "a simple", "start_char_pos": 1137, "end_char_pos": 1148 } ]
[ 0, 340, 675, 815, 995, 1098 ]
1302.4267
1
Biological systems show two structural features on many levels URLanization: sparseness, in which only a small fraction of possible interactions between components actually occur; and modularity : the near decomposability of the system into modules with distinct functionality. Recent work suggests that modularity can evolve in a variety of circumstances, including goals that vary in time such that they share the same subgoals (modularly varying goals) . Here, we studied the origin of modularity and sparseness focusing on the nature of the mutation process, rather than variations in the goal. We use simulations of evolution with different mutation rules. We find that commonly used sum-rule mutations, in which interactions are mutated by adding random numbers, do not lead to modularity or sparseness except for special situations. In contrast, product-rule mutations in which interactions are mutated by multiplying by random numbers , a better model for the effects of biological mutations , lead to sparseness naturally. When the goals of evolution are modular, in the sense that specific groups of inputs affect specific groups of outputs, product-rule mutations lead to modular structure; sum-rule mutations do not. Product-rule mutations generate sparseness and modularity because they keep small interaction terms small.
Biological systems exhibit two structural features on many levels URLanization: sparseness, in which only a small fraction of possible interactions between components actually occur; and modularity - the near decomposability of the system into modules with distinct functionality. Recent work suggests that modularity can evolve in a variety of circumstances, including goals that vary in time such that they share the same subgoals (modularly varying goals) , or when connections are costly . Here, we studied the origin of modularity and sparseness focusing on the nature of the mutation process, rather than on connection cost or variations in the goal. We use simulations of evolution with different mutation rules. We found that commonly used sum-rule mutations, in which interactions are mutated by adding random numbers, do not lead to modularity or sparseness except for in special situations. In contrast, product-rule mutations in which interactions are mutated by multiplying by random numbers - a better model for the effects of biological mutations - led to sparseness naturally. When the goals of evolution are modular, in the sense that specific groups of inputs affect specific groups of outputs, product-rule mutations also lead to modular structure; sum-rule mutations do not. Product-rule mutations generate sparseness and modularity because they tend to reduce interactions, and to keep small interaction terms small.
[ { "type": "R", "before": "show", "after": "exhibit", "start_char_pos": 19, "end_char_pos": 23 }, { "type": "R", "before": ":", "after": "-", "start_char_pos": 195, "end_char_pos": 196 }, { "type": "A", "before": null, "after": ", or when connections are costly", "start_char_pos": 456, "end_char_pos": 456 }, { "type": "A", "before": null, "after": "on connection cost or", "start_char_pos": 576, "end_char_pos": 576 }, { "type": "R", "before": "find", "after": "found", "start_char_pos": 667, "end_char_pos": 671 }, { "type": "A", "before": null, "after": "in", "start_char_pos": 822, "end_char_pos": 822 }, { "type": "R", "before": ",", "after": "-", "start_char_pos": 946, "end_char_pos": 947 }, { "type": "R", "before": ", lead", "after": "- led", "start_char_pos": 1003, "end_char_pos": 1009 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 1178, "end_char_pos": 1178 }, { "type": "A", "before": null, "after": "tend to reduce interactions, and to", "start_char_pos": 1304, "end_char_pos": 1304 } ]
[ 0, 179, 277, 458, 600, 663, 842, 1034, 1205, 1232 ]
1302.4679
1
Assuming that agents' preferences satisfy first-order stochastic dominance, we show that the Expected Utility Paradigm can explain all rational investment choices. In particular, the optimal investment strategy in any behavioral law-invariant setting corresponds to the optimum for some risk averse expected utility maximizer whose concave utility function we derive explicitly . This result enables us to infer agents' utility and risk aversion from their investment choice in a non-parametric way. We also show that decreasing absolute risk aversion (DARA) is equivalent to a demand for terminal wealth that has more spread than the opposite of the log pricing kernel at the investment horizon.
Assuming that agents' preferences satisfy first-order stochastic dominance, we show how the (Generalized) Expected Utility paradigm can rationalize all optimal investment choices: the optimal investment strategy in any behavioral law-invariant (state-independent) setting corresponds to the optimum for a risk averse (generalized) expected utility maximizer with an explicitly derived concave utility function . This result enables us to infer the utility and risk aversion of agents from their investment choice in a non-parametric way. We relate the property of decreasing absolute risk aversion (DARA) to distributional properties of the terminal wealth and the financial market. Specifically, we show that DARA is equivalent to a demand for a terminal wealth that has more spread than the opposite of the log pricing kernel at the investment horizon.
[ { "type": "R", "before": "that the Expected Utility Paradigm can explain all rational investment choices. In particular,", "after": "how the (Generalized) Expected Utility paradigm can rationalize all optimal investment choices:", "start_char_pos": 84, "end_char_pos": 178 }, { "type": "A", "before": null, "after": "(state-independent)", "start_char_pos": 243, "end_char_pos": 243 }, { "type": "R", "before": "some risk averse", "after": "a risk averse (generalized)", "start_char_pos": 283, "end_char_pos": 299 }, { "type": "R", "before": "whose", "after": "with an explicitly derived", "start_char_pos": 327, "end_char_pos": 332 }, { "type": "D", "before": "we derive explicitly", "after": null, "start_char_pos": 358, "end_char_pos": 378 }, { "type": "R", "before": "agents'", "after": "the", "start_char_pos": 413, "end_char_pos": 420 }, { "type": "A", "before": null, "after": "of agents", "start_char_pos": 447, "end_char_pos": 447 }, { "type": "R", "before": "also show that", "after": "relate the property of", "start_char_pos": 505, "end_char_pos": 519 }, { "type": "A", "before": null, "after": "to distributional properties of the terminal wealth and the financial market. Specifically, we show that DARA", "start_char_pos": 561, "end_char_pos": 561 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 592, "end_char_pos": 592 } ]
[ 0, 163, 380, 501 ]
1302.4679
2
Assuming that agents' preferences satisfy first-order stochastic dominance, we show how the (Generalized) Expected Utility paradigm can rationalize all optimal investment choices: the optimal investment strategy in any behavioral law-invariant (state-independent) setting corresponds to the optimum for a risk averse (generalized) expected utility maximizer with an explicitly derived concave utility function. This result enables us to infer the utility and risk aversion of agents from their investment choice in a non-parametric way. We relate the property of decreasing absolute risk aversion (DARA) to distributional properties of the terminal wealth and the financial market. Specifically, we show that DARA is equivalent to a demand for a terminal wealth that has more spread than the opposite of the log pricing kernel at the investment horizon.
Assuming that agents' preferences satisfy first-order stochastic dominance, we show how the Expected Utility paradigm can rationalize all optimal investment choices: the optimal investment strategy in any behavioral law-invariant (state-independent) setting corresponds to the optimum for an expected utility maximizer with an explicitly derived concave non-decreasing utility function. This result enables us to infer the utility and risk aversion of agents from their investment choice in a non-parametric way. We relate the property of decreasing absolute risk aversion (DARA) to distributional properties of the terminal wealth and of the financial market. Specifically, we show that DARA is equivalent to a demand for a terminal wealth that has more spread than the opposite of the log pricing kernel at the investment horizon.
[ { "type": "D", "before": "(Generalized)", "after": null, "start_char_pos": 92, "end_char_pos": 105 }, { "type": "R", "before": "a risk averse (generalized)", "after": "an", "start_char_pos": 303, "end_char_pos": 330 }, { "type": "A", "before": null, "after": "non-decreasing", "start_char_pos": 393, "end_char_pos": 393 }, { "type": "A", "before": null, "after": "of", "start_char_pos": 661, "end_char_pos": 661 } ]
[ 0, 411, 537, 683 ]
1302.7010
1
We introduce a multivariate diffusion model that is able to price derivative securities featuring multiple underlying assets. Each underlying shows a volatility smile and is modeled according to a density-mixture dynamical model while the same property holds for the multivariate process of all assets, whose density is a mixture of multivariate basic densities. This allows to reconcile single name and index/basket volatility smiles in a consistent framework. Rather than simply correlating one-dimensional local volatility models for each asset, our approach could be dubbed a multidimensional local volatility approach with state dependent diffusion matrix. The model is quite tractable, leading to a complete market and not requiring Fourier techniques , contrary to multivariate stochastic volatility models such as Wishart. We provide a semi-analytic formula for the price of European optionson a basket/index of securities . A comparison with the standard approach consisting in using Monte Carlo simulation that samples simply-correlated suitably discretized one-dimensional paths is made . Our results show that our approach is promising in terms of basket option pricing . We also introduce a multivariate uncertain volatility model of which our multivariate local volatilities model is a multivariate markovian projectionand analyze the dependence structure induced by our multivariate dynamics in detail . A few numerical examples on simple contracts conclude the paper.
We introduce a multivariate diffusion model that is able to price derivative securities featuring multiple underlying assets. Each asset volatility smile is modeled according to a density-mixture dynamical model while the same property holds for the multivariate process of all assets, whose density is a mixture of multivariate basic densities. This allows to reconcile single name and index/basket volatility smiles in a consistent framework. Our approach could be dubbed a multidimensional local volatility approach with vector-state dependent diffusion matrix. The model is quite tractable, leading to a complete market and not requiring Fourier techniques for calibration and dependence measures , contrary to multivariate stochastic volatility models such as Wishart. We prove existence and uniqueness of solutions for the model stochastic differential equations, provide formulas for a number of basket options, and analyze the dependence structure of the model in detail by deriving a number of results on covariances, its copula function and rank correlation measures and volatilities-assets correlations . A comparison with sampling simply-correlated suitably discretized one-dimensional mixture dynamical paths is made , both in terms of option pricing and of dependence, and first order expansion relationships between the two models' local covariances are derived . We also show existence of a multivariate uncertain volatility model of which our multivariate local volatilities model is a Markovian projection, highlighting that the projected model is smoother and avoids a number of drawbacks of the uncertain volatility version. We also show a consistency result where the Markovian projection of a geometric basket in the multivariate model is a univariate mixture dynamics model . A few numerical examples on basket and spread options pricing conclude the paper.
[ { "type": "R", "before": "underlying shows a volatility smile and", "after": "asset volatility smile", "start_char_pos": 131, "end_char_pos": 170 }, { "type": "R", "before": "Rather than simply correlating one-dimensional local volatility models for each asset, our", "after": "Our", "start_char_pos": 462, "end_char_pos": 552 }, { "type": "R", "before": "state", "after": "vector-state", "start_char_pos": 628, "end_char_pos": 633 }, { "type": "A", "before": null, "after": "for calibration and dependence measures", "start_char_pos": 758, "end_char_pos": 758 }, { "type": "R", "before": "provide a semi-analytic formula for the price of European optionson a basket/index of securities", "after": "prove existence and uniqueness of solutions for the model stochastic differential equations, provide formulas for a number of basket options, and analyze the dependence structure of the model in detail by deriving a number of results on covariances, its copula function and rank correlation measures and volatilities-assets correlations", "start_char_pos": 835, "end_char_pos": 931 }, { "type": "R", "before": "the standard approach consisting in using Monte Carlo simulation that samples", "after": "sampling", "start_char_pos": 952, "end_char_pos": 1029 }, { "type": "A", "before": null, "after": "mixture dynamical", "start_char_pos": 1085, "end_char_pos": 1085 }, { "type": "R", "before": ". Our results show that our approach is promising", "after": ", both", "start_char_pos": 1100, "end_char_pos": 1149 }, { "type": "R", "before": "basket option pricing", "after": "option pricing and of dependence, and first order expansion relationships between the two models' local covariances are derived", "start_char_pos": 1162, "end_char_pos": 1183 }, { "type": "R", "before": "introduce", "after": "show existence of", "start_char_pos": 1194, "end_char_pos": 1203 }, { "type": "R", "before": "multivariate markovian projectionand analyze the dependence structure induced by our multivariate dynamics in detail", "after": "Markovian projection, highlighting that the projected model is smoother and avoids a number of drawbacks of the uncertain volatility version. We also show a consistency result where the Markovian projection of a geometric basket in the multivariate model is a univariate mixture dynamics model", "start_char_pos": 1302, "end_char_pos": 1418 }, { "type": "R", "before": "simple contracts", "after": "basket and spread options pricing", "start_char_pos": 1449, "end_char_pos": 1465 } ]
[ 0, 125, 362, 461, 661, 831, 933, 1101, 1185, 1420 ]
1302.7036
1
The cusp catastrophe theory has been primarily developed as a deterministic theory for systems that may respond to continuous changes in a control variables by a discontinuous change. While most of the systems in behavioral sciences are subject to noise, and in behavioral finance moreover to time-varying volatility, it may be difficult to apply the theory in these fields. This paper addresses the issue and proposes a two-step estimation methodology, which will allow us to apply the catastrophe theory to model stock market crashes. Utilizing high frequency data, we estimate the daily realized volatility from the returns in the first step and use the stochastic cusp catastrophe on the data normalized by the estimated volatility in the second step to study possible discontinuities in markets. We support our methodology by simulations where we also discuss the importance of stochastic noise and volatility in the deterministic cusp model. The methodology is empirically tested on almost 27 years of U.S. stock market evolution covering several important recessions and crisis periods. Results suggest that the proposed methodology provides an important shift in application of catastrophe theory to stock markets. We show that stock markets subject to noise and time-varying volatility shows strong bifurcation marks. Due to the very long sample period we also develop a rolling estimation approach , where we study the dynamics of the parameters and we find that while in the first half of the period stock markets showed strong marks of bifurcations, in the second half catastrophe theory was not able to confirm this behavior. Results may have an important implications for understanding the recent deep financial crisis of 2008.
This paper develops a two-step estimation methodology, which allows us to apply catastrophe theory to stock market returns with time-varying volatility and model stock market crashes. Utilizing high frequency data, we estimate the daily realized volatility from the returns in the first step and use stochastic cusp catastrophe on data normalized by the estimated volatility in the second step to study possible discontinuities in markets. We support our methodology by simulations where we also discuss the importance of stochastic noise and volatility in deterministic cusp catastrophe model. The methodology is empirically tested on almost 27 years of U.S. stock market evolution covering several important recessions and crisis periods. Due to the very long sample period we also develop a rolling estimation approach and we find that while in the first half of the period stock markets showed marks of bifurcations, in the second half catastrophe theory was not able to confirm this behavior. Results suggest that the proposed methodology provides an important shift in application of catastrophe theory to stock markets.
[ { "type": "R", "before": "The cusp catastrophe theory has been primarily developed as a deterministic theory for systems that may respond to continuous changes in a control variables by a discontinuous change. While most of the systems in behavioral sciences are subject to noise, and in behavioral finance moreover to time-varying volatility, it may be difficult to apply the theory in these fields. This paper addresses the issue and proposes", "after": "This paper develops", "start_char_pos": 0, "end_char_pos": 418 }, { "type": "R", "before": "will allow", "after": "allows", "start_char_pos": 460, "end_char_pos": 470 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 483, "end_char_pos": 486 }, { "type": "A", "before": null, "after": "stock market returns with time-varying volatility and", "start_char_pos": 509, "end_char_pos": 509 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 654, "end_char_pos": 657 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 689, "end_char_pos": 692 }, { "type": "R", "before": "the deterministic cusp", "after": "deterministic cusp catastrophe", "start_char_pos": 919, "end_char_pos": 941 }, { "type": "D", "before": "Results suggest that the proposed methodology provides an important shift in application of catastrophe theory to stock markets. We show that stock markets subject to noise and time-varying volatility shows strong bifurcation marks.", "after": null, "start_char_pos": 1095, "end_char_pos": 1327 }, { "type": "D", "before": ", where we study the dynamics of the parameters", "after": null, "start_char_pos": 1409, "end_char_pos": 1456 }, { "type": "D", "before": "strong", "after": null, "start_char_pos": 1533, "end_char_pos": 1539 }, { "type": "R", "before": "may have an important implications for understanding the recent deep financial crisis of 2008.", "after": "suggest that the proposed methodology provides an important shift in application of catastrophe theory to stock markets.", "start_char_pos": 1648, "end_char_pos": 1742 } ]
[ 0, 183, 374, 537, 801, 948, 1094, 1223, 1327, 1639 ]
1302.7075
1
Machupo virus (MACV) is a New World arenavirus that is currently emerging from a rodent reservoir into the human population. For New World arenaviruses, it is known that transferrin receptor binding is a major determinate of host range variation. To better understand the relationship between the virus and its human host, we used a structure-based computational model to interrogate the effects of mutations in the interfacebetween the MACV spike glycoprotein ( MACVGP) and the Human transferrin receptor (hTfR1) . Existing computational methods to probe protein-protein interactions perform poorly with non-alanine mutations and on the flexible loops encountered in many host-virus interactions. We applied steered molecular dynamics (SMD) simulations to induce protein dissociation; using descriptive statistics, we were able to differentiate mutant complexes and biochemically rationalize available infectivity data. The previously published co-crystal structure implies two important hydrogen bonding networks in the MACVGP /hTfR1 interface ; our simulations confirm one of them is critical for viral binding . Also, since mutations in polar residues in another putative network do not change viral binding, we can conclude the second network is likely an artifact of crystallization. Finally, we found a viral site known to be critical for infection , may mark an important evolutionary suppressor site for infection-diminishing hTfR1 mutants. Taken together, our computational data rationalize many of the available experimental data, further refine our biochemical understanding of this system, and point to the most likely next evolutionary step for MACV .
Existing computational methods to predict protein--protein interaction affinity often perform poorly in important test cases. In particular, the effects of multiple mutations, non-alanine substitutions, and flexible loops are difficult to predict with available tools and protocols. We present here a new method to interrogate affinity differences resulting from mutations in a host-virus protein--protein interface. Our method is based on extensive non-equilibrium all-atom simulations: We computationally pull the machupo virus (MACV) spike glycoprotein ( GP1) away from the human transferrin receptor (hTfR1) and estimate affinity using the maximum applied force during a pulling simulation and the area under the force-versus-distance curve. We find that these quantities provide novel biophysical insight into the GP1/hTfR1 interaction. First, with no prior knowledge of the system we can differentiate among wild-type and mutant complexes. Second, although the static co-crystal structure shows two large hydrogen-bonding networks in the GP1 /hTfR1 interface , our simulations indicate that only one of them is critical for the binding interaction. Third, one viral site known to be critical for infection may mark an important evolutionary suppressor site for infection-resistant hTfR1 mutants. Finally, our method provides an elegant framework to compare the effects of multiple mutations, individually and jointly, on protein--protein interactions .
[ { "type": "R", "before": "Machupo virus (MACV) is a New World arenavirus that is currently emerging from a rodent reservoir into the human population. For New World arenaviruses, it is known that transferrin receptor binding is a major determinate of host range variation. To better understand the relationship between the virus and its human host, we used a structure-based computational model to interrogate the effects of mutations in the interfacebetween the MACV", "after": "Existing computational methods to predict protein--protein interaction affinity often perform poorly in important test cases. In particular, the effects of multiple mutations, non-alanine substitutions, and flexible loops are difficult to predict with available tools and protocols. We present here a new method to interrogate affinity differences resulting from mutations in a host-virus protein--protein interface. Our method is based on extensive non-equilibrium all-atom simulations: We computationally pull the machupo virus (MACV)", "start_char_pos": 0, "end_char_pos": 441 }, { "type": "R", "before": "MACVGP) and the Human", "after": "GP1) away from the human", "start_char_pos": 463, "end_char_pos": 484 }, { "type": "R", "before": ". Existing computational methods to probe protein-protein interactions perform poorly with non-alanine mutations and on the flexible loops encountered in many host-virus interactions. We applied steered molecular dynamics (SMD) simulations to induce protein dissociation; using descriptive statistics, we were able to differentiate mutant complexes and biochemically rationalize available infectivity data. The previously published", "after": "and estimate affinity using the maximum applied force during a pulling simulation and the area under the force-versus-distance curve. We find that these quantities provide novel biophysical insight into the GP1/hTfR1 interaction. First, with no prior knowledge of the system we can differentiate among wild-type and mutant complexes. Second, although the static", "start_char_pos": 514, "end_char_pos": 945 }, { "type": "R", "before": "implies two important hydrogen bonding", "after": "shows two large hydrogen-bonding", "start_char_pos": 967, "end_char_pos": 1005 }, { "type": "R", "before": "MACVGP", "after": "GP1", "start_char_pos": 1022, "end_char_pos": 1028 }, { "type": "R", "before": "; our simulations confirm", "after": ", our simulations indicate that only", "start_char_pos": 1046, "end_char_pos": 1071 }, { "type": "R", "before": "viral binding . Also, since mutations in polar residues in another putative network do not change viral binding, we can conclude the second network is likely an artifact of crystallization. Finally, we found a viral", "after": "the binding interaction. Third, one viral", "start_char_pos": 1100, "end_char_pos": 1315 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1356, "end_char_pos": 1357 }, { "type": "R", "before": "infection-diminishing", "after": "infection-resistant", "start_char_pos": 1413, "end_char_pos": 1434 }, { "type": "R", "before": "Taken together, our computational data rationalize many of the available experimental data, further refine our biochemical understanding of this system, and point to the most likely next evolutionary step for MACV", "after": "Finally, our method provides an elegant framework to compare the effects of multiple mutations, individually and jointly, on protein--protein interactions", "start_char_pos": 1450, "end_char_pos": 1663 } ]
[ 0, 124, 246, 697, 785, 920, 1047, 1115, 1289, 1449 ]
1302.7192
1
We propose a unified analysis of a whole spectrum of no-arbitrage conditions for financial market models based on continuous semimartingales. In particular, we focus on no-arbitrage conditions weaker than the classical notions of No Arbitrage and No Free Lunch with Vanishing Risk. We provide a complete characterisation of all no-arbitrage conditions, linking their validity to the existence and to the properties of (weak) martingale deflators and to the characteristics of the discounted asset price process .
We propose a unified analysis of a whole spectrum of no-arbitrage conditions for financial market models based on continuous semimartingales. In particular, we focus on no-arbitrage conditions weaker than the classical notions of No Arbitrage and No Free Lunch with Vanishing Risk. We provide a complete characterisation of the considered no-arbitrage conditions, linking their validity to the characteristics of the discounted asset price process and to the existence and the properties of (weak) martingale deflators , and review classical as well as recent results .
[ { "type": "R", "before": "all", "after": "the considered", "start_char_pos": 324, "end_char_pos": 327 }, { "type": "R", "before": "existence", "after": "characteristics of the discounted asset price process", "start_char_pos": 383, "end_char_pos": 392 }, { "type": "A", "before": null, "after": "existence and the", "start_char_pos": 404, "end_char_pos": 404 }, { "type": "R", "before": "and to the characteristics of the discounted asset price process", "after": ", and review classical as well as recent results", "start_char_pos": 447, "end_char_pos": 511 } ]
[ 0, 141, 281 ]
1302.7238
1
Given a loset I, every surjective map p: A --- > I endows the set A with a structure of preordered set by "replacing" the elements of I with their inverse images via p considered as " bubbles " (sets endowed with an equivalence relation), lifting the structure of loset on A, and "agglutinating" this structure with the bubbles. Every bubbling A of a structure of loset I is a structure of preordered set A (not necessarily complete) whose preorder has negatively transitive asymmetric part and every such structure on a given set A can be obtained by bubbling up of certain structure of a loset I, intrinsically encoded in A. In other words, the difference between linearity and negative transitivity is constituted of bubbles . As a consequence of this characterization, under certain natural topological conditions on the preordered set A furnished with its interval topology, the existence of a continuous generalized utility function on A is proved.
Given a linearly ordered set I, every surjective map p: A -- > I endows the set A with a structure of set of preferences by "replacing" the elements of I with their inverse images via p considered as " balloons " (sets endowed with an equivalence relation), lifting the linear order on A, and "agglutinating" this structure with the balloons. Every ballooning A of a structure of linearly ordered set I is a set of preferences whose preference relation (not necessarily complete) is negatively transitive and every such structure on a given set A can be obtained by ballooning of certain structure of a linearly ordered set I, intrinsically encoded in A. In other words, the difference between linearity and negative transitivity is constituted of balloons . As a consequence of this characterization, under certain natural topological conditions on the set of preferences A furnished with its interval topology, the existence of a continuous generalized utility function on A is proved.
[ { "type": "R", "before": "loset", "after": "linearly ordered set", "start_char_pos": 8, "end_char_pos": 13 }, { "type": "R", "before": "---", "after": "--", "start_char_pos": 43, "end_char_pos": 46 }, { "type": "R", "before": "preordered set", "after": "set of preferences", "start_char_pos": 88, "end_char_pos": 102 }, { "type": "R", "before": "bubbles", "after": "balloons", "start_char_pos": 184, "end_char_pos": 191 }, { "type": "R", "before": "structure of loset", "after": "linear order", "start_char_pos": 251, "end_char_pos": 269 }, { "type": "R", "before": "bubbles. Every bubbling", "after": "balloons. Every ballooning", "start_char_pos": 320, "end_char_pos": 343 }, { "type": "R", "before": "loset", "after": "linearly ordered set", "start_char_pos": 364, "end_char_pos": 369 }, { "type": "R", "before": "structure of preordered set A", "after": "set of preferences whose preference relation", "start_char_pos": 377, "end_char_pos": 406 }, { "type": "R", "before": "whose preorder has negatively transitive asymmetric part", "after": "is negatively transitive", "start_char_pos": 434, "end_char_pos": 490 }, { "type": "R", "before": "bubbling up", "after": "ballooning", "start_char_pos": 552, "end_char_pos": 563 }, { "type": "R", "before": "loset", "after": "linearly ordered set", "start_char_pos": 590, "end_char_pos": 595 }, { "type": "R", "before": "bubbles", "after": "balloons", "start_char_pos": 720, "end_char_pos": 727 }, { "type": "R", "before": "preordered set", "after": "set of preferences", "start_char_pos": 825, "end_char_pos": 839 } ]
[ 0, 328, 626, 729 ]
1302.7246
1
We introduce a tractable multi-currency model with stochastic volatility and correlated stochastic interest rates that takes into account the smile in the FX market and the evolution of yield curves. The pricing of vanilla options on FX rates can be performed effciently through the FFT methodology thanks to the affinity of the model . A joint calibration exercise of the implied volatility surfaces of a triangle of FX rates shows the flexibility of our framework in dealing with the typical symmetries that characterize the FX market. Our framework is also able to describe many non trivial links between FX rates and interest rates: a second calibration exercise highlights the ability of the model to fit simultaneously FX implied volatilities while being coherent with interest rate products.
We introduce a tractable multi-currency model with stochastic volatility and correlated stochastic interest rates that takes into account the smile in the FX market and the evolution of yield curves. The pricing of vanilla options on FX rates can be performed effciently through the FFT methodology thanks to the affinity of the model Our framework is also able to describe many non trivial links between FX rates and interest rates: a second calibration exercise highlights the ability of the model to fit simultaneously FX implied volatilities while being coherent with interest rate products.
[ { "type": "D", "before": ". A joint calibration exercise of the implied volatility surfaces of a triangle of FX rates shows the flexibility of our framework in dealing with the typical symmetries that characterize the FX market.", "after": null, "start_char_pos": 335, "end_char_pos": 537 } ]
[ 0, 199, 537 ]
1303.0237
1
We consider a semi-static market composed of derivative securities, which we assume can be traded only at time zero, and of stocks, which can be traded continuously in time . Using a general utility function defined on the positive real line, we study the dependence on the price of the derivatives of the outputs of the utility maximization problem , investigating not only stability but also differentiability, monotonicity, convexity and asymptotic properties.
This paper studies the problem of maximizing expected utility from terminal wealth in a semi-static market composed of derivative securities, which we assume can be traded only at time zero, and of stocks, which can be traded continuously in time and are modeled as locally-bounded semi-martingales . Using a general utility function defined on the positive real line, we first study existence and uniqueness of the solution, and then we consider the dependence of the outputs of the utility maximization problem on the price of the derivatives , investigating not only stability but also differentiability, monotonicity, convexity and limiting properties.
[ { "type": "R", "before": "We consider", "after": "This paper studies the problem of maximizing expected utility from terminal wealth in", "start_char_pos": 0, "end_char_pos": 11 }, { "type": "A", "before": null, "after": "and are modeled as locally-bounded semi-martingales", "start_char_pos": 173, "end_char_pos": 173 }, { "type": "R", "before": "study the dependence on the price of the derivatives of the", "after": "first study existence and uniqueness of the solution, and then we consider the dependence of the", "start_char_pos": 247, "end_char_pos": 306 }, { "type": "A", "before": null, "after": "on the price of the derivatives", "start_char_pos": 351, "end_char_pos": 351 }, { "type": "R", "before": "asymptotic", "after": "limiting", "start_char_pos": 443, "end_char_pos": 453 } ]
[ 0 ]
1303.1690
1
The risk of a financial position is usually summarized by a risk measure. As this risk measure has to be estimated from historical data, it is important to be able to verify and compare competing estimation procedures. In statistical decision theory, risk measures for which such verification and comparison is possible, are called elicitable. It is known that quantile based risk measures such as value at risk are elicitable. In this paper we show that law-invariant spectral risk measures such as expected shortfall are not elicitable unless they reduce to minus the expected value. Hence, it is unclear how to perform forecast verification or comparison. However, the class of elicitable law-invariant coherent risk measures does not reduce to minus the expected value. We show that it contains expectiles, and that they play a special role amongst all elicitable law-invariant coherent risk measures .
The risk of a financial position is usually summarized by a risk measure. As this risk measure has to be estimated from historical data, it is important to be able to verify and compare competing estimation procedures. In statistical decision theory, risk measures for which such verification and comparison is possible, are called elicitable. It is known that quantile based risk measures such as value at risk are elicitable. In this paper we show that law-invariant spectral risk measures such as expected shortfall are not elicitable unless they reduce to minus the expected value. Hence, it is unclear how to perform forecast verification or comparison. However, the class of elicitable law-invariant coherent risk measures does not reduce to minus the expected value. We show that it consists of certain expectiles .
[ { "type": "R", "before": "contains expectiles, and that they play a special role amongst all elicitable law-invariant coherent risk measures", "after": "consists of certain expectiles", "start_char_pos": 790, "end_char_pos": 904 } ]
[ 0, 73, 218, 343, 427, 585, 658, 773 ]
1303.2513
1
This paper investigates optimal portfolio strategies in a market where the drift is driven by an unobserved Markov chain. Information on the state of this chain is obtained from stock prices and expert opinions in the form of signals at random discrete time points. As in Frey et al. (2012), Int. J. Theor. Appl. Finance, 15, No. 1, we use stochastic filtering to transform the original problem into an optimization problem under full information where the state variable is the filter for the Markov chain. This problem is studied with dynamic programming techniques and with regularization arguments . Using results from the recent literature we obtain the existence of classical solutions to the dynamic programming equation in a regularized version of the model. From this the optimal strategy in the regularized model is straightforward to compute. We give convergence results which show that this strategy is epsilon-optimal in the original model .
This paper investigates optimal portfolio strategies in a market where the drift is driven by an unobserved Markov chain. Information on the state of this chain is obtained from stock prices and expert opinions in the form of signals at random discrete time points. As in Frey et al. (2012), Int. J. Theor. Appl. Finance, 15, No. 1, we use stochastic filtering to transform the original problem into an optimization problem under full information where the state variable is the filter for the Markov chain. The dynamic programming equation for this problem is studied with viscosity-solution techniques and with regularization arguments .
[ { "type": "R", "before": "This", "after": "The dynamic programming equation for this", "start_char_pos": 508, "end_char_pos": 512 }, { "type": "R", "before": "dynamic programming", "after": "viscosity-solution", "start_char_pos": 537, "end_char_pos": 556 }, { "type": "D", "before": ". Using results from the recent literature we obtain the existence of classical solutions to the dynamic programming equation in a regularized version of the model. From this the optimal strategy in the regularized model is straightforward to compute. We give convergence results which show that this strategy is epsilon-optimal in the original model", "after": null, "start_char_pos": 602, "end_char_pos": 952 } ]
[ 0, 121, 265, 306, 507, 766, 853 ]
1303.2950
1
We consider the problem of maximizing expected utility from terminal wealth for a power investor who can allocate his wealth in a stock, a defaultable bond, and a money market account. The dynamics of these security prices are governed by geometric Brownian motions modulated by a hidden continuous time finite state Markov chain . By means of a reference probability approach to filtering in the enlarged market filtration , we reduce the partially observed stochastic control problem to a risk sensitive control problem with full observation . We separate the latter into a pre-default and a post-default dynamic optimization subproblems, and obtain two coupled Hamilton-Jacobi-Bellman equations for the optimal value functions. We obtain a complete solution to the post-default optimization subproblem, and prove a verification theorem for the solution of the pre-default optimization subproblem .
We consider the problem of maximizing expected utility from terminal wealth for a power investor who can allocate his wealth in a stock, a defaultable security, and a money market account. The dynamics of these security prices are governed by geometric Brownian motions modulated by a hidden continuous time finite state Markov chain . We reduce the partially observed stochastic control problem to a complete observation risk sensitive control problem via the filtered regime switching probabilities . We separate the latter into a pre-default and a post-default dynamic optimization subproblems, and obtain two coupled Hamilton-Jacobi-Bellman (HJB) partial differential equations. We prove existence and uniqueness of a globally bounded classical solution to the pre-default HJB equation, and give a verification theorem characterizing each value function as the solution of the corresponding HJB equation. We provide a detailed numerical analysis showing that the investor increases his stock holdings as the filter probability of being in the high growth regime increases, and decreases his credit risk exposure when the filter probability of being in the high default risk regime gets larger. We find that the investor increases the fraction of overall wealth invested in the risky asset when the information gain coming from receiving price observations is higher .
[ { "type": "D", "before": "a", "after": null, "start_char_pos": 137, "end_char_pos": 138 }, { "type": "D", "before": "defaultable bond, and", "after": null, "start_char_pos": 139, "end_char_pos": 160 }, { "type": "A", "before": null, "after": "defaultable security, and", "start_char_pos": 163, "end_char_pos": 163 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 164, "end_char_pos": 164 }, { "type": "D", "before": ". By means of a reference probability approach to filtering in the enlarged market filtration", "after": null, "start_char_pos": 332, "end_char_pos": 425 }, { "type": "R", "before": ", we", "after": ". We", "start_char_pos": 426, "end_char_pos": 430 }, { "type": "A", "before": null, "after": "complete observation", "start_char_pos": 493, "end_char_pos": 493 }, { "type": "R", "before": "with full observation", "after": "via the filtered regime switching probabilities", "start_char_pos": 525, "end_char_pos": 546 }, { "type": "R", "before": "equations for the optimal value functions. We obtain a complete", "after": "(HJB) partial differential equations. We prove existence and uniqueness of a globally bounded classical", "start_char_pos": 691, "end_char_pos": 754 }, { "type": "R", "before": "post-default optimization subproblem, and prove", "after": "pre-default HJB equation, and give", "start_char_pos": 771, "end_char_pos": 818 }, { "type": "R", "before": "for", "after": "characterizing each value function as", "start_char_pos": 842, "end_char_pos": 845 }, { "type": "R", "before": "pre-default optimization subproblem", "after": "corresponding HJB equation. We provide a detailed numerical analysis showing that the investor increases his stock holdings as the filter probability of being in the high growth regime increases, and decreases his credit risk exposure when the filter probability of being in the high default risk regime gets larger. We find that the investor increases the fraction of overall wealth invested in the risky asset when the information gain coming from receiving price observations is higher", "start_char_pos": 866, "end_char_pos": 901 } ]
[ 0, 186, 333, 548, 733 ]
1303.2950
2
We consider the problem of maximizing expected utility from terminal wealth for a power investor who can allocate his wealth in a stock, a defaultable security, and a money market account. The dynamics of these security prices are governed by geometric Brownian motions modulated by a hidden continuous time finite state Markov chain. We reduce the partially observed stochastic control problem to a complete observation risk sensitive control problem via the filtered regime switching probabilities. We separate the latter into a pre-default and a post-default dynamic optimization subproblems, and obtain two coupled Hamilton-Jacobi-Bellman (HJB) partial differential equations. We prove existence and uniqueness of a globally bounded classical solution to the pre-default HJB equation, and give a verification theorem characterizing each value function as the solution of the corresponding HJB equation . We provide a detailed numerical analysis showing that the investor increases his stock holdings as the filter probability of being in the high growth regime increases, and decreases his credit risk exposure when the filter probability of being in the high default risk regime gets larger. We find that the investor increases the fraction of overall wealth invested in the risky asset when the information gain coming from receiving price observations is higher .
We consider the problem of maximizing expected utility for a power investor who can allocate his wealth in a stock, a defaultable security, and a money market account. The dynamics of these security prices are governed by geometric Brownian motions modulated by a hidden continuous time finite state Markov chain. We reduce the partially observed stochastic control problem to a complete observation risk sensitive control problem via the filtered regime switching probabilities. We separate the latter into pre-default and post-default dynamic optimization subproblems, and obtain two coupled Hamilton-Jacobi-Bellman (HJB) partial differential equations. We prove existence and uniqueness of a globally bounded classical solution to each HJB equation, and give the corresponding verification theorem . We provide a numerical analysis showing that the investor increases his holdings in stock as the filter probability of being in high growth regimes increases, and decreases his credit risk exposure when the filter probability of being in high default risk regimes gets larger .
[ { "type": "D", "before": "from terminal wealth", "after": null, "start_char_pos": 55, "end_char_pos": 75 }, { "type": "R", "before": "a", "after": "a", "start_char_pos": 165, "end_char_pos": 166 }, { "type": "D", "before": "a", "after": null, "start_char_pos": 529, "end_char_pos": 530 }, { "type": "D", "before": "a", "after": null, "start_char_pos": 547, "end_char_pos": 548 }, { "type": "R", "before": "the pre-default", "after": "each", "start_char_pos": 759, "end_char_pos": 774 }, { "type": "R", "before": "a verification theorem characterizing each value function as the solution of the corresponding HJB equation", "after": "the corresponding verification theorem", "start_char_pos": 798, "end_char_pos": 905 }, { "type": "D", "before": "detailed", "after": null, "start_char_pos": 921, "end_char_pos": 929 }, { "type": "R", "before": "stock holdings", "after": "holdings in stock", "start_char_pos": 989, "end_char_pos": 1003 }, { "type": "R", "before": "the high growth regime", "after": "high growth regimes", "start_char_pos": 1042, "end_char_pos": 1064 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 1155, "end_char_pos": 1158 }, { "type": "R", "before": "regime gets larger. We find that the investor increases the fraction of overall wealth invested in the risky asset when the information gain coming from receiving price observations is higher", "after": "regimes gets larger", "start_char_pos": 1177, "end_char_pos": 1368 } ]
[ 0, 188, 334, 500, 680, 907, 1196 ]
1303.3148
1
We investigate the general structure of optimal investment and consumption with small proportional transaction costs. For a safe asset and a risky asset with general continuous dynamics, traded with random and time-varying but small transaction costs, we derive simple formal asymptotics for the optimal policy and welfare. These reveal the roles of the investors' preferences as well as the market and cost dynamics, and also lead to a fully dynamic model for the implied trading volume. In frictionless models that can be solved in closed form, explicit formulas for the leading-order corrections due to small transaction costs obtain .
We investigate the general structure of optimal investment and consumption with small proportional transaction costs. For a safe asset and a risky asset with general continuous dynamics, traded with random and time-varying but small transaction costs, we derive simple formal asymptotics for the optimal policy and welfare. These reveal the roles of the investors' preferences as well as the market and cost dynamics, and also lead to a fully dynamic model for the implied trading volume. In frictionless models that can be solved in closed form, explicit formulas for the leading-order corrections due to small transaction costs are obtained .
[ { "type": "R", "before": "obtain", "after": "are obtained", "start_char_pos": 630, "end_char_pos": 636 } ]
[ 0, 117, 323, 488 ]
1303.3177
1
A new Multi-Carrier Differential Chaos Shift Keying (MC-DCSK) modulation is presented in this paper. The system endeavors to provide a good trade-off between robustness, energy efficiency and high data rate, while still being simple compared to conventional multi-carrier spread spectrum systems. This system can be seen as a parallel extension of the DCSK modulation where one chaotic reference sequence is transmitted over a predefined subcarrier frequency. Multiple modulated data streams are transmitted over the remaining subcarriers. This transmitter structure saves energy and increases the spectral efficiency of the conventional DCSK system . The receiver design makes this system easy to implement where no radio frequency (RF) delay circuit is needed to demodulate received data. Various system design parameters are discussed throughout the paper, including the number of subcarriers, the spreading factor, and the transmitted energy. Once the design is explained, the bit error rate performance of the MC-DCSK system is computed and compared to the conventional DCSK system under an additive white Gaussian noise (AWGN) channel . Simulation results confirm the advantages of this new hybrid design.
A new Multi-Carrier Differential Chaos Shift Keying (MC-DCSK) modulation is presented in this paper. The system endeavors to provide a good trade-off between robustness, energy efficiency and high data rate, while still being simple compared to conventional multi-carrier spread spectrum systems. This system can be seen as a parallel extension of the DCSK modulation where one chaotic reference sequence is transmitted over a predefined subcarrier frequency. Multiple modulated data streams are transmitted over the remaining subcarriers. This transmitter structure increases the spectral efficiency of the conventional DCSK system and uses less energy . The receiver design makes this system easy to implement where no radio frequency (RF) delay circuit is needed to demodulate received data. Various system design parameters are discussed throughout the paper, including the number of subcarriers, the spreading factor, and the transmitted energy. Once the design is explained, the bit error rate performance of the MC-DCSK system is computed and compared to the conventional DCSK system under an additive white Gaussian noise (AWGN) and Rayleigh channels . Simulation results confirm the advantages of this new hybrid design.
[ { "type": "D", "before": "saves energy and", "after": null, "start_char_pos": 567, "end_char_pos": 583 }, { "type": "A", "before": null, "after": "and uses less energy", "start_char_pos": 650, "end_char_pos": 650 }, { "type": "R", "before": "channel", "after": "and Rayleigh channels", "start_char_pos": 1134, "end_char_pos": 1141 } ]
[ 0, 100, 296, 459, 539, 652, 791, 947 ]
1303.3183
1
In this paper, we consider the problem of optimal exogenous control of gene regulatory networks. Our approach consists in adapting and further developing an established reinforcement learning algorithm called the fitted Q iteration. This algorithm infers the control law directly from the measurements of the system's response to external control inputs without the use of a mathematical model of the system. The measurement data set can either be collected from wet-lab experiments or artificially created by computer simulations of dynamical models of the system. The algorithm is applicable to a wide range of biological systems due to its ability to deal with nonlinear and stochastic system dynamics. To illustrate the application of the algorithm to a gene regulatory network, the regulation of the toggle switch system is considered. The control objective of this problem is to drive the concentrations of two specific proteins to a target region in the state space . In our companion paper, we take a closer look at the reference tracking problem from the reinforcement learning point of view and consider the generalised repressilator as an example .
In this paper, we consider the problem of optimal exogenous control of gene regulatory networks. Our approach consists in adapting an established reinforcement learning algorithm called the fitted Q iteration. This algorithm infers the control law directly from the measurements of the system's response to external control inputs without the use of a mathematical model of the system. The measurement data set can either be collected from wet-lab experiments or artificially created by computer simulations of dynamical models of the system. The algorithm is applicable to a wide range of biological systems due to its ability to deal with nonlinear and stochastic system dynamics. To illustrate the application of the algorithm to a gene regulatory network, the regulation of the toggle switch system is considered. The control objective of this problem is to drive the concentrations of two specific proteins to a target region in the state space .
[ { "type": "D", "before": "and further developing", "after": null, "start_char_pos": 131, "end_char_pos": 153 }, { "type": "D", "before": ". In our companion paper, we take a closer look at the reference tracking problem from the reinforcement learning point of view and consider the generalised repressilator as an example", "after": null, "start_char_pos": 973, "end_char_pos": 1157 } ]
[ 0, 96, 232, 408, 565, 705, 840, 974 ]
1303.4092
1
We present a novel device-free localization method, tailored to detecting persons in indoor environments . The method utilizes a fixed ultra-wide bandwidth (UWB ) infrastructure and does not require a training database of template waveforms. Instead, the method capitalizes on the fact that a human presence induces small low-frequency variations that stand out against the background signal, which is mainly affected by wideband noise. We analyze the detection probability, and validate our findings with numerical simulations and experiments with off-the-shelf UWB transceivers .
We present a novel device-free stationary person detection and ranging method, that is applicable to ultra-wide bandwidth (UWB) networks . The method utilizes a fixed UWB infrastructure and does not require a training database of template waveforms. Instead, the method capitalizes on the fact that a human presence induces small low-frequency variations that stand out against the background signal, which is mainly affected by wideband noise. We analyze the detection probability, and validate our findings with numerical simulations and experiments with off-the-shelf UWB transceivers in an indoor environment .
[ { "type": "R", "before": "localization method, tailored to detecting persons in indoor environments", "after": "stationary person detection and ranging method, that is applicable to ultra-wide bandwidth (UWB) networks", "start_char_pos": 31, "end_char_pos": 104 }, { "type": "R", "before": "ultra-wide bandwidth (UWB )", "after": "UWB", "start_char_pos": 135, "end_char_pos": 162 }, { "type": "A", "before": null, "after": "in an indoor environment", "start_char_pos": 580, "end_char_pos": 580 } ]
[ 0, 106, 241, 436 ]
1303.4114
1
The practicality of the stochastic network calculus (SNC) is often questioned on grounds of potential looseness of its performance bounds. In this paper it is uncovered that for bursty arrival processes (specifically Markov-Modulated On-Off (MMOO)), whose amenability to per-flow analysis is typically proclaimed as a highlight of SNC, the bounds can unfortunately indeed be very loose (e.g., by several orders of magnitude off). In response to this uncovered weakness of SNC, the (Standard) per-flow bounds are herein improved by deriving a general sample-path bound, using martingale based techniques, which accommodates FIFO, SP, EDF, and GPS scheduling. The obtained (Martingale) bounds gain an exponential decay factor of {O} %DIFDELCMD < \left(%%% e^{-\alpha n}%DIFDELCMD < \right) %%% in the number of flows n. Moreover, numerical comparisons against simulations show that the Martingale bounds are remarkably accurate for FIFO, SP, and EDF scheduling; for GPS scheduling, although the Martingale bounds substantially improve the Standard bounds, they are numerically loose, demanding for improvements in the core SNC analysis of GPS.
The practicality of the stochastic network calculus (SNC) is often questioned on grounds of potential looseness of its performance bounds. In this paper it is uncovered that for bursty arrival processes (specifically Markov-Modulated On-Off (MMOO)), whose amenability to per-flow analysis is typically proclaimed as a highlight of SNC, the bounds can unfortunately indeed be very loose (e.g., by several orders of magnitude off). In response to this uncovered weakness of SNC, the (Standard) per-flow bounds are herein improved by deriving a general sample-path bound, using martingale based techniques, which accommodates FIFO, SP, EDF, and GPS scheduling. The obtained (Martingale) bounds gain an exponential decay factor of {O} %DIFDELCMD < \left(%%% }%DIFDELCMD < \right) %%% (e^{-\alpha n in the number of flows n. Moreover, numerical comparisons against simulations show that the Martingale bounds are remarkably accurate for FIFO, SP, and EDF scheduling; for GPS scheduling, although the Martingale bounds substantially improve the Standard bounds, they are numerically loose, demanding for improvements in the core SNC analysis of GPS.
[ { "type": "D", "before": "e^{-\\alpha n", "after": null, "start_char_pos": 754, "end_char_pos": 766 }, { "type": "A", "before": null, "after": "(e^{-\\alpha n", "start_char_pos": 792, "end_char_pos": 792 } ]
[ 0, 138, 429, 657, 818, 960 ]
1303.4926
1
Gene transcription mediated by RNA polymerase II (pol-II) is a key step in gene expression. The dynamics of pol-II moving along the transcribed region influences the rate and timing of gene expression. In this work we present a probabilistic model of transcription dynamics which is fitted to pol-II occupancy time course data measured using ChIP-Seq. The model can be used to estimate transcription speed and to infer the temporal pol-II activity profile at the gene promoter. Model parameters are determined using either maximum likelihood estimation or via Bayesian inference using Markov chain Monte Carlo sampling. The Bayesian approach provides confidence intervals for parameter estimates and allows the use of priors that capture domain knowledge, e.g. the expected range of transcription speeds, based on previous experiments. The model describes the movement of pol-II down the gene body and can be used to identify the time of induction for transcriptionally engaged genes. By clustering the inferred promoter activity time profiles, we are able to determine which genes respond quickly to stimuli and group genes that share activity profiles and may therefore be co-regulated. We apply our methodology to biological data obtained using ChIP-seq to measure pol-II occupancy genome-wide when MCF-7 human breast cancer cells are treated with estradiol (E2). The transcription speeds we obtain agree with those obtained previously for smaller numbers of genes with the advantage that our approach can be applied genome-wide. We validate the biological significance of the pol-II promoter activity clusters by investigating cluster-specific transcription factor binding patterns and determining canonical pathway enrichment .
Gene transcription mediated by RNA polymerase II (pol-II) is a key step in gene expression. The dynamics of pol-II moving along the transcribed region influence the rate and timing of gene expression. In this work we present a probabilistic model of transcription dynamics which is fitted to pol-II occupancy time course data measured using ChIP-Seq. The model can be used to estimate transcription speed and to infer the temporal pol-II activity profile at the gene promoter. Model parameters are estimated using either maximum likelihood estimation or via Bayesian inference using Markov chain Monte Carlo sampling. The Bayesian approach provides confidence intervals for parameter estimates and allows the use of priors that capture domain knowledge, e.g. the expected range of transcription speeds, based on previous experiments. The model describes the movement of pol-II down the gene body and can be used to identify the time of induction for transcriptionally engaged genes. By clustering the inferred promoter activity time profiles, we are able to determine which genes respond quickly to stimuli and group genes that share activity profiles and may therefore be co-regulated. We apply our methodology to biological data obtained using ChIP-seq to measure pol-II occupancy genome-wide when MCF-7 human breast cancer cells are treated with estradiol (E2). The transcription speeds we obtain agree with those obtained previously for smaller numbers of genes with the advantage that our approach can be applied genome-wide. We validate the biological significance of the pol-II promoter activity clusters by investigating cluster-specific transcription factor binding patterns and determining canonical pathway enrichment . We find that rapidly induced genes are enriched for both estrogen receptor alpha (ER\alpha) and FOXA1 binding in their proximal promoter regions .
[ { "type": "R", "before": "influences", "after": "influence", "start_char_pos": 151, "end_char_pos": 161 }, { "type": "R", "before": "determined", "after": "estimated", "start_char_pos": 499, "end_char_pos": 509 }, { "type": "A", "before": null, "after": ". We find that rapidly induced genes are enriched for both estrogen receptor alpha (ER\\alpha) and FOXA1 binding in their proximal promoter regions", "start_char_pos": 1731, "end_char_pos": 1731 } ]
[ 0, 91, 201, 351, 477, 619, 835, 984, 1188, 1366, 1532 ]
1303.6340
1
We obtain a very simple way to price a class of barrier options under a huge class of L\'evy processes when a symmetry property , like put-call symmetry, holds .
In this paper we present a very simple way to price a class of barrier options when the underlying process is driven by a huge class of L\'evy processes . To achieve our goal we assume that our market satisfies a symmetry property . In case of not satisfying that property some approximations can be obtained .
[ { "type": "R", "before": "We obtain", "after": "In this paper we present", "start_char_pos": 0, "end_char_pos": 9 }, { "type": "R", "before": "under", "after": "when the underlying process is driven by", "start_char_pos": 64, "end_char_pos": 69 }, { "type": "R", "before": "when", "after": ". To achieve our goal we assume that our market satisfies", "start_char_pos": 103, "end_char_pos": 107 }, { "type": "R", "before": ", like put-call symmetry, holds", "after": ". In case of not satisfying that property some approximations can be obtained", "start_char_pos": 128, "end_char_pos": 159 } ]
[ 0 ]
1303.6485
1
This paper presents an innovative technique to explore the effect on energy consumption of an extensive number of the optimisations a compiler can perform. We evaluate a set of ten carefully selected benchmarks for five different embedded platforms. A fractional factorial design is used to systematically explore the large optimisation space (2^82 possible combinations), whilst still accurately determining the effects of optimisations and optimisation combinations. Hardware power measurements on each platform are taken to ensure all architectural effects on the energy consumption are captured. In the majority of cases, execution time and energy consumption are highly correlated. However, predicting the effect a particular optimisation may have is non-trivial due to its interactions with other optimisations. This validates long standing community beliefs, but for the first time provides concrete evidence of the effect and its magnitude . A further conclusion of this study is the structure of the benchmark has a larger effect than the hardware architecture on whether the optimisation will be effective, and that no single optimisation is universally beneficial for execution time or energy consumption.
This paper presents an analysis of the energy consumption of an extensive number of the optimisations a modern compiler can perform. Using GCC as a test case, we evaluate a set of ten carefully selected benchmarks for five different embedded platforms. A fractional factorial design is used to systematically explore the large optimisation space (2^82 possible combinations), whilst still accurately determining the effects of optimisations and optimisation combinations. Hardware power measurements on each platform are taken to ensure all architectural effects on the energy consumption are captured. We show that fractional factorial design can find more optimal combinations than relying on built in compiler settings. We explore the relationship between run-time and energy consumption , and identify scenarios where they are and are not correlated . A further conclusion of this study is the structure of the benchmark has a larger effect than the hardware architecture on whether the optimisation will be effective, and that no single optimisation is universally beneficial for execution time or energy consumption.
[ { "type": "R", "before": "innovative technique to explore the effect on", "after": "analysis of the", "start_char_pos": 23, "end_char_pos": 68 }, { "type": "A", "before": null, "after": "modern", "start_char_pos": 134, "end_char_pos": 134 }, { "type": "R", "before": "We", "after": "Using GCC as a test case, we", "start_char_pos": 157, "end_char_pos": 159 }, { "type": "R", "before": "In the majority of cases, execution time", "after": "We show that fractional factorial design can find more optimal combinations than relying on built in compiler settings. We explore the relationship between run-time", "start_char_pos": 601, "end_char_pos": 641 }, { "type": "R", "before": "are highly correlated. However, predicting the effect a particular optimisation may have is non-trivial due to its interactions with other optimisations. This validates long standing community beliefs, but for the first time provides concrete evidence of the effect and its magnitude", "after": ", and identify scenarios where they are and are not correlated", "start_char_pos": 665, "end_char_pos": 948 } ]
[ 0, 156, 250, 469, 600, 687, 818, 950 ]
1303.6737
1
Mathematical models are increasingly being used to understand complex biochemical systems, but we rarely know the consequences of choosing one modelto another . Using algebraic techniques we study systematically the effects of intermediate (transient) complexes and provide a simple, yet rigorous mathematical classification of all models obtained from a core model by including intermediates. Main examples include enzymatic and post-translational modifications systems, where intermediates often are considered insignificant and neglected in a model . In each class , a mathematically simple canonical model characterizes crucial dynamical properties, such as mono/ multistationarity and stability of steady states, of all models in the class. Importantly, our results provide guidelines to the modeler in choosing between models and in distinguishing their properties .
Mathematical models are increasingly being used to understand complex biochemical systems, to analyze experimental data and make predictions about unobserved quantities. However, we rarely know how robust our conclusions are with respect to the choice and uncertainties of the model . Using algebraic techniques we study systematically the effects of intermediate , or transient, species in biochemical systems and provide a simple, yet rigorous mathematical classification of all models obtained from a core model by including intermediates. Main examples include enzymatic and post-translational modification systems, where intermediates often are considered insignificant and neglected in a model , or they are not included because we are unaware of their existence. All possible models obtained from the core model are classified into a finite number of classes. Each class is defined by a mathematically simple canonical model that characterizes crucial dynamical properties, such as mono- and multistationarity and stability of steady states, of all models in the class. We show that if the core model does not have conservation laws, then the introduction of intermediates does not change the steady-state concentrations of the species in the core model, after suitable matching of parameters. Importantly, our results provide guidelines to the modeler in choosing between models and in distinguishing their properties . Further, our work provides a formal way of comparing models that share a common skeleton .
[ { "type": "R", "before": "but", "after": "to analyze experimental data and make predictions about unobserved quantities. However,", "start_char_pos": 91, "end_char_pos": 94 }, { "type": "R", "before": "the consequences of choosing one modelto another", "after": "how robust our conclusions are with respect to the choice and uncertainties of the model", "start_char_pos": 110, "end_char_pos": 158 }, { "type": "R", "before": "(transient) complexes", "after": ", or transient, species in biochemical systems", "start_char_pos": 240, "end_char_pos": 261 }, { "type": "R", "before": "modifications", "after": "modification", "start_char_pos": 449, "end_char_pos": 462 }, { "type": "R", "before": ". In each class ,", "after": ", or they are not included because we are unaware of their existence. All possible models obtained from the core model are classified into a finite number of classes. Each class is defined by", "start_char_pos": 552, "end_char_pos": 569 }, { "type": "A", "before": null, "after": "that", "start_char_pos": 610, "end_char_pos": 610 }, { "type": "R", "before": "mono/", "after": "mono- and", "start_char_pos": 663, "end_char_pos": 668 }, { "type": "A", "before": null, "after": "We show that if the core model does not have conservation laws, then the introduction of intermediates does not change the steady-state concentrations of the species in the core model, after suitable matching of parameters.", "start_char_pos": 747, "end_char_pos": 747 }, { "type": "A", "before": null, "after": ". Further, our work provides a formal way of comparing models that share a common skeleton", "start_char_pos": 873, "end_char_pos": 873 } ]
[ 0, 393, 553, 746 ]
1303.6793
1
We performed stochastic simulations of transcription factor (TF) molecules translocating by facilitated diffusion (a combination of 3D diffusion in the cytoplasm and 1D random walk on the DNA) , and consider various abundances of cognate and non-cognate TFs to assess the influence of competitor molecules thatalso move along the DNA . We show that molecular crowding on the DNA always leads to longer times required by TF molecules to locate their target sites as well as to lower occupancy, which may confer a general mechanism to control gene activity levels globally . Finally, we show that crowding on the DNA may increase transcriptional noise through increased variability of the occupancy time of the target sites.
Transcription factor (TF) molecules translocate by facilitated diffusion (a combination of 3D diffusion around and 1D random walk on the DNA) . Despite the attention this mechanism received in the last 40 years, only a few studies investigated the influence of the cellular environment on the facilitated diffusion mechanism and, in particular, the influence of `other' DNA binding proteins competing with the TF molecules for DNA space. Molecular crowding on the DNA is likely to influence the association rate of TFs to their target site and the steady state occupancy of those sites, but it is still not clear how it influences the search in a genome-wide context, when the model includes biologically relevant parameters (such as: TF abundance, TF affinity for DNA and TF dynamics on the DNA). We performed stochastic simulations of TFs performing the facilitated diffusion mechanism, and considered various abundances of cognate and non-cognate TFs . We show that, for both obstacles that move on the DNA and obstacles that are fixed on the DNA, changes in search time are not statistically significant in case of biologically relevant crowding levels on the DNA. In the case of non-cognate proteins that slide on the DNA, molecular crowding on the DNA always leads to statistically significant lower levels of occupancy, which may confer a general mechanism to control gene activity levels globally . When the `other' molecules are immobile on the DNA, we found a completely different behaviour, namely: the occupancy of the target site is always increased by higher molecular crowding on the DNA . Finally, we show that crowding on the DNA may increase transcriptional noise through increased variability of the occupancy time of the target sites.
[ { "type": "R", "before": "We performed stochastic simulations of transcription", "after": "Transcription", "start_char_pos": 0, "end_char_pos": 52 }, { "type": "R", "before": "translocating", "after": "translocate", "start_char_pos": 75, "end_char_pos": 88 }, { "type": "R", "before": "in the cytoplasm", "after": "around", "start_char_pos": 145, "end_char_pos": 161 }, { "type": "R", "before": ", and consider", "after": ". Despite the attention this mechanism received in the last 40 years, only a few studies investigated the influence of the cellular environment on the facilitated diffusion mechanism and, in particular, the influence of `other' DNA binding proteins competing with the TF molecules for DNA space. Molecular crowding on the DNA is likely to influence the association rate of TFs to their target site and the steady state occupancy of those sites, but it is still not clear how it influences the search in a genome-wide context, when the model includes biologically relevant parameters (such as: TF abundance, TF affinity for DNA and TF dynamics on the DNA). We performed stochastic simulations of TFs performing the facilitated diffusion mechanism, and considered", "start_char_pos": 193, "end_char_pos": 207 }, { "type": "R", "before": "to assess the influence of competitor molecules thatalso move along the DNA . We show that", "after": ". We show that, for both obstacles that move on the DNA and obstacles that are fixed on the DNA, changes in search time are not statistically significant in case of biologically relevant crowding levels on the DNA. In the case of non-cognate proteins that slide on the DNA,", "start_char_pos": 258, "end_char_pos": 348 }, { "type": "R", "before": "longer times required by TF molecules to locate their target sites as well as to lower", "after": "statistically significant lower levels of", "start_char_pos": 395, "end_char_pos": 481 }, { "type": "A", "before": null, "after": ". When the `other' molecules are immobile on the DNA, we found a completely different behaviour, namely: the occupancy of the target site is always increased by higher molecular crowding on the DNA", "start_char_pos": 571, "end_char_pos": 571 } ]
[ 0, 64, 192, 573 ]
1304.2942
1
We solve the optimal trade execution problem in the Almgren and Chirss framework with the Value-at-risk / Expected Shortfall based criterion of Gatheral and Schied when the underlying unaffected stock price follows a displaced diffusion model. The displaced diffusion model can conveniently model at the same time situations where either an arithmetic Brownian motion (ABM) or a geometric Browinan motion (GBM) type dynamics may prevail, thus serving as a bridge between the ABM and GBM frameworks. We introduce alternative risk criteria and we notice that the optimal trade executionstrategy little depends on the specific risk criterion we adopt. In most situations the solution is close to the simple Volume Weighted Average Price (VWAP) solution regardless of the specific diffusion dynamics or risk criterion that is chosen, especially on realistic trading horizons of a few days or hours. This suggests that more general dynamics need to be considered, and possibly more extreme risk criteria, in order to find a relevant impact on the optimal strategy
We solve a version of the optimal trade execution problem when the mid asset price follows a displaced diffusion . Optimal strategies under various risk criteria, namely value-at-risk, expected shortfall and a version of the cost variance measure are derived and compared. It is well known that displaced diffusions exhibit dynamics which are in-between arithmetic Brownian motions (ABM) or geometric Brownian motions (GBM) depending of the choice of the shift parameter. The model presented in the paper attempts to provide a bridge between the approach of Almgren and Chris (ABM) and Gatheral and Schied (GBM) to optimal execution. We also study the dependence of the optimal solution on the choice of the risk aversion criterion. Optimal solutions across criteria and asset dynamics are comparable although the difference are not negligible for high levels of risk aversion and low market impact assets.
[ { "type": "A", "before": null, "after": "a version of", "start_char_pos": 9, "end_char_pos": 9 }, { "type": "R", "before": "in the Almgren and Chirss framework with the Value-at-risk / Expected Shortfall based criterion of Gatheral and Schied when the underlying unaffected stock", "after": "when the mid asset", "start_char_pos": 46, "end_char_pos": 201 }, { "type": "R", "before": "model. The displaced diffusion model can conveniently model at the same time situations where either an arithmetic Brownian motion", "after": ". Optimal strategies under various risk criteria, namely value-at-risk, expected shortfall and a version of the cost variance measure are derived and compared. It is well known that displaced diffusions exhibit dynamics which are in-between arithmetic Brownian motions", "start_char_pos": 238, "end_char_pos": 368 }, { "type": "R", "before": "a geometric Browinan motion", "after": "geometric Brownian motions", "start_char_pos": 378, "end_char_pos": 405 }, { "type": "R", "before": "type dynamics may prevail, thus serving as", "after": "depending of the choice of the shift parameter. The model presented in the paper attempts to provide", "start_char_pos": 412, "end_char_pos": 454 }, { "type": "R", "before": "ABM and GBM frameworks. We introduce alternative risk criteria and we notice that the optimal trade executionstrategy little depends on the specific risk criterion we adopt. In most situations the solution is close to the simple Volume Weighted Average Price (VWAP) solution regardless of the specific diffusion dynamics or risk criterion that is chosen, especially on realistic trading horizons of a few days or hours. This suggests that more general dynamics need to be considered, and possibly more extreme risk criteria, in order to find a relevant impact on the optimal strategy", "after": "approach of Almgren and Chris (ABM) and Gatheral and Schied (GBM) to optimal execution. We also study the dependence of the optimal solution on the choice of the risk aversion criterion. Optimal solutions across criteria and asset dynamics are comparable although the difference are not negligible for high levels of risk aversion and low market impact assets.", "start_char_pos": 476, "end_char_pos": 1059 } ]
[ 0, 244, 499, 649, 895 ]
1304.2942
2
We solve a version of the optimal trade execution problem when the mid asset price follows a displaced diffusion. Optimal strategies under various risk criteria, namely value-at-risk, expected shortfall and a version of the cost variance measure are derived and compared. It is well known that displaced diffusions exhibit dynamics which are in-between arithmetic Brownian motions (ABM) or geometric Brownian motions (GBM) depending of the choice of the shift parameter. The model presented in the paper attempts to provide a bridge between the approach of Almgren and Chris (ABM) and Gatheral and Schied (GBM) to optimal execution. We also study the dependence of the optimal solution on the choice of the risk aversion criterion. Optimal solutions across criteria and asset dynamics are comparable although the difference are not negligible for high levels of risk aversion and low market impact assets .
We solve a version of the optimal trade execution problem when the mid asset price follows a displaced diffusion. Optimal strategies in the adapted class under various risk criteria, namely value-at-risk, expected shortfall and a new criterion called "squared asset expectation" (SAE), related to a version of the cost variance measure , are derived and compared. It is well known that displaced diffusions (DD) exhibit dynamics which are in-between arithmetic Brownian motions (ABM) and geometric Brownian motions (GBM) depending of the choice of the shift parameter. Furthermore, DD allows for changes in the support of the mid asset price distribution, allowing one to include a minimum permitted value for the mid price, either positive or negative. We study the dependence of the optimal solution on the choice of the risk aversion criterion. Optimal solutions across criteria and asset dynamics are comparable although differences are not negligible for high levels of risk aversion and low market impact assets . This is illustrated with numerical examples .
[ { "type": "A", "before": null, "after": "in the adapted class", "start_char_pos": 133, "end_char_pos": 133 }, { "type": "A", "before": null, "after": "new criterion called \"squared asset expectation\" (SAE), related to a", "start_char_pos": 210, "end_char_pos": 210 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 248, "end_char_pos": 248 }, { "type": "A", "before": null, "after": "(DD)", "start_char_pos": 318, "end_char_pos": 318 }, { "type": "R", "before": "or", "after": "and", "start_char_pos": 391, "end_char_pos": 393 }, { "type": "R", "before": "The model presented in the paper attempts to provide a bridge between the approach of Almgren and Chris (ABM) and Gatheral and Schied (GBM) to optimal execution. We also", "after": "Furthermore, DD allows for changes in the support of the mid asset price distribution, allowing one to include a minimum permitted value for the mid price, either positive or negative. We", "start_char_pos": 475, "end_char_pos": 644 }, { "type": "R", "before": "the difference", "after": "differences", "start_char_pos": 813, "end_char_pos": 827 }, { "type": "A", "before": null, "after": ". This is illustrated with numerical examples", "start_char_pos": 909, "end_char_pos": 909 } ]
[ 0, 113, 274, 474, 636, 735 ]
1304.3159
1
We propose a new, unified approach to solving jump-diffusion partial integro-differential equations (PIDEs) that often appear in mathematical finance. Our method consists of the following steps. First, a second-order operator splitting on financial processes (diffusion and jumps) is applied to these PIDEs. To solve the diffusion equation, we use standard finite-difference methods, which for multi-dimensional problems could also include splitting on various dimensions. For the jump part, we transform the jump integral into a pseudo-differential operator. Then for various jump models we show how to construct an appropriate first and second order approximation on a grid which supersets the grid that we used for the diffusion part. These approximations make the scheme to be unconditionally stable in time and preserve positivity of the solution which is computed via a matrix exponential . The paper demonstrates that the proposed method is computationally efficient, accurate and simple to implement } .
We propose a new, unified approach to solving jump-diffusion partial integro-differential equations (PIDEs) that often appear in mathematical finance. Our method consists of the following steps. First, a second-order operator splitting on financial processes (diffusion and jumps) is applied to these PIDEs. To solve the diffusion equation, we use standard finite-difference methods, which for multi-dimensional problems could also include splitting on various dimensions. For the jump part, we transform the jump integral into a pseudo-differential operator. Then for various jump models we show how to construct an appropriate first and second order approximation on a grid which supersets the grid that we used for the diffusion part. These approximations make the scheme to be unconditionally stable in time and preserve positivity of the solution which is computed either via a matrix exponential , or via P \'a}de approximation of the matrix exponent. Various numerical experiments are provided to justify these results .
[ { "type": "A", "before": null, "after": "either", "start_char_pos": 870, "end_char_pos": 870 }, { "type": "R", "before": ". The paper demonstrates that the proposed method is computationally efficient, accurate and simple to implement", "after": ", or via P", "start_char_pos": 896, "end_char_pos": 1008 }, { "type": "A", "before": null, "after": "\\'a", "start_char_pos": 1009, "end_char_pos": 1009 }, { "type": "A", "before": null, "after": "de approximation of the matrix exponent. Various numerical experiments are provided to justify these results", "start_char_pos": 1010, "end_char_pos": 1010 } ]
[ 0, 150, 194, 307, 472, 559, 737, 897 ]
1304.3602
1
At the heart of technology transitions lie complex processes of technology choices. Understanding and planning sustainability transitions requires modelling work, which necessitates a theory of technology substitution. A theoretical model of technological change and turnover is presented, intended as a methodological paradigm shift from widely used conventional modelling approaches such as cost optimisation. It follows the tradition of evolutionary economics and evolutionary game theory, using ecological population growth dynamics to represent the evolution of technology populations in the marketplace, with substitutions taking place at the level of the decision-maker. Extended to use principles of human demography or the age structured evolution of species in interacting ecosystems, this theory is built from first principles, and through an appropriate approximation, reduces to a form identical to empirical models of technology diffusion common in the technology transitions literature. Using an age structure, it provides the appropriate groundwork and theoretical framework to understand interacting technologies, their birth, ageing and mutual substitution. This analysis provides insight in explaining the nature and origin of observed timescales of technology transitions , in terms of technology life expectancies, the dynamic process of production capacity expansion or collapse and its timescales, in what is termed a `demographic phase'. While this model contributes to the general understanding of technological change, the information in this work is intended to be used practically for the parameterisation of technology diffusion in large scale models of technology systems when measured data is unknown or uncertain, as is the case for new technologies, notably for modelling future energy systems and greenhouse gas emissions .
At the heart of technology transitions lie complex processes of social and industrial dynamics. The quantitative study of sustainability transitions requires modelling work, which necessitates a theory of technology substitution. Many, if not most, contemporary modelling approaches for future technology pathways overlook most aspects of transitions theory, for instance dimensions of investor choices, dynamic rates of diffusion and the profile of transitions. A significant body of literature however exists that demonstrates how transitions follow S-shaped diffusion curves or Lotka-Volterra systems of equations. This framework is used ex-post since timescales can only be reliably obtained in cases where the transitions have already occurred, precluding its use for studying cases of interest where nascent innovations in protective niches await favourable conditions for their diffusion. Scaling parameters of transitions can in principle be derived from industrial dynamics, technology turnover rates and technology characteristics. In this context, this paper presents a theory framework for calculating the parameterisation of S-shaped diffusion curves for use in simulation models of technology transitions without the involvement of historical data fitting, making use of standard demography theory applied to technology at the unit level. The classic Lotka-Volterra competition system emerges from first principles from demography theory, its timescales explained in terms of technology lifetimes and industrial dynamics. The theory is discussed in the context of the multi-level perspective on technology transitions, where innovation and the diffusion of new socio-technical regimes take a prominent place, offering a bridge between qualitative and quantitative descriptions .
[ { "type": "R", "before": "technology choices. Understanding and planning", "after": "social and industrial dynamics. The quantitative study of", "start_char_pos": 64, "end_char_pos": 110 }, { "type": "R", "before": "A theoretical model of technological change and turnover is presented, intended as a methodological paradigm shift from widely used conventional modelling approaches such as cost optimisation. It follows the tradition of evolutionary economics and evolutionary game theory, using ecological population growth dynamics to represent the evolution of technology populations in the marketplace, with substitutions taking place at the level of", "after": "Many, if not most, contemporary modelling approaches for future technology pathways overlook most aspects of transitions theory, for instance dimensions of investor choices, dynamic rates of diffusion and the profile of transitions. A significant body of literature however exists that demonstrates how transitions follow S-shaped diffusion curves or Lotka-Volterra systems of equations. This framework is used ex-post since timescales can only be reliably obtained in cases where the transitions have already occurred, precluding its use for studying cases of interest where nascent innovations in protective niches await favourable conditions for their diffusion. Scaling parameters of transitions can in principle be derived from industrial dynamics, technology turnover rates and technology characteristics. In this context, this paper presents a theory framework for calculating", "start_char_pos": 219, "end_char_pos": 657 }, { "type": "D", "before": "decision-maker. Extended to use principles of human demography or the age structured evolution of species in interacting ecosystems, this theory is built from first principles, and through an appropriate approximation, reduces to a form identical to empirical models of technology diffusion common in the technology transitions literature. Using an age structure, it provides the appropriate groundwork and theoretical framework to understand interacting technologies, their birth, ageing and mutual substitution. This analysis provides insight in explaining the nature and origin of observed timescales of technology transitions , in terms of technology life expectancies, the dynamic process of production capacity expansion or collapse and its timescales, in what is termed a `demographic phase'. While this model contributes to the general understanding of technological change, the information in this work is intended to be used practically for the", "after": null, "start_char_pos": 662, "end_char_pos": 1616 }, { "type": "R", "before": "technology diffusion in large scale", "after": "S-shaped diffusion curves for use in simulation", "start_char_pos": 1637, "end_char_pos": 1672 }, { "type": "R", "before": "systems when measured data is unknown or uncertain, as is the case for new technologies, notably for modelling future energy systems and greenhouse gas emissions", "after": "transitions without the involvement of historical data fitting, making use of standard demography theory applied to technology at the unit level. The classic Lotka-Volterra competition system emerges from first principles from demography theory, its timescales explained in terms of technology lifetimes and industrial dynamics. The theory is discussed in the context of the multi-level perspective on technology transitions, where innovation and the diffusion of new socio-technical regimes take a prominent place, offering a bridge between qualitative and quantitative descriptions", "start_char_pos": 1694, "end_char_pos": 1855 } ]
[ 0, 83, 218, 411, 677, 1001, 1175, 1461 ]
1304.3602
2
At the heart of technology transitions lie complex processes of social and industrial dynamics. The quantitative study of sustainability transitions requires modelling work, which necessitates a theory of technology substitution. Many, if not most, contemporary modelling approaches for future technology pathways overlook most aspects of transitions theory, for instance dimensions of heterogenous investor choices, dynamic rates of diffusion and the profile of transitions. A significant body of literature however exists that demonstrates how transitions follow S-shaped diffusion curves or Lotka-Volterra systems of equations. This framework is used ex-post since timescales can only be reliably obtained in cases where the transitions have already occurred, precluding its use for studying cases of interest where nascent innovations in protective niches await favourable conditions for their diffusion. Scaling parameters of transitions can in principle be derived from industrial dynamics, technology turnover rates and technology characteristics. In this context, this paper presents a theory framework for calculating the parameterisation of S-shaped diffusion curves for use in simulation models of technology transitions without the involvement of historical data fitting, making use of standard demography theory applied to technology at the unit level. The classic Lotka-Volterra competition system emerges from first principles from demography theory, its timescales explained in terms of technology lifetimes and industrial dynamics. The theory is discussed in the context of the multi-level perspective on technology transitions, where innovation and the diffusion of new socio-technical regimes take a prominent place, offering a bridge between qualitative and quantitative descriptions .
At the heart of technology transitions lie complex processes of social and industrial dynamics. The quantitative study of sustainability transitions requires modelling work, which necessitates a theory of technology substitution. Many, if not most, contemporary modelling approaches for future technology pathways overlook most aspects of transitions theory, for instance dimensions of heterogenous investor choices, dynamic rates of diffusion and the profile of transitions. A significant body of literature however exists that demonstrates how transitions follow S-shaped diffusion curves or Lotka-Volterra systems of equations. This framework is used ex-post since timescales can only be reliably obtained in cases where the transitions have already occurred, precluding its use for studying cases of interest where nascent innovations in protective niches await favourable conditions for their diffusion. In principle, scaling parameters of transitions can , however, be derived from knowledge of industrial dynamics, technology turnover rates and technology characteristics. In this context, this paper presents a theory framework for evaluating the parameterisation of S-shaped diffusion curves for use in simulation models of technology transitions without the involvement of historical data fitting, making use of standard demography theory applied to technology at the unit level. The classic Lotka-Volterra competition system emerges from first principles from demography theory, its timescales explained in terms of technology lifetimes and industrial dynamics. The theory is placed in the context of the multi-level perspective on technology transitions, where innovation and the diffusion of new socio-technical regimes take a prominent place, as well as discrete choice theory, the primary theoretical framework for introducing agent diversity .
[ { "type": "R", "before": "Scaling", "after": "In principle, scaling", "start_char_pos": 909, "end_char_pos": 916 }, { "type": "R", "before": "in principle", "after": ", however,", "start_char_pos": 947, "end_char_pos": 959 }, { "type": "A", "before": null, "after": "knowledge of", "start_char_pos": 976, "end_char_pos": 976 }, { "type": "R", "before": "calculating", "after": "evaluating", "start_char_pos": 1116, "end_char_pos": 1127 }, { "type": "R", "before": "discussed", "after": "placed", "start_char_pos": 1564, "end_char_pos": 1573 }, { "type": "R", "before": "offering a bridge between qualitative and quantitative descriptions", "after": "as well as discrete choice theory, the primary theoretical framework for introducing agent diversity", "start_char_pos": 1737, "end_char_pos": 1804 } ]
[ 0, 95, 229, 475, 630, 908, 1055, 1366, 1549 ]
1304.3796
1
Cooperation played a significant role in the URLanization and evolution of URLanisms. Both network topology and the initial position of cooperators heavily affect the cooperation of social dilemma games. We developed a novel simulation program package, called 'NetworGame', which is able to simulate any type of social dilemma games on any model, or real world networks with any assignment of initial cooperation or defection strategies to network nodes. The ability of initially defecting single nodes to break overall cooperation was called as 'game centrality'. The efficiency of this measure was verified on well-known social networks, and was extended to 'protein games', i.e. the simulation of cooperation between proteins, or their amino acids. Hubs and in particular, party hubs of yeast protein-protein interaction networks had a large influence to convert the cooperation of other nodes to defection. Simulations on methionyl-tRNA synthetase protein structure network indicated an increased influence of nodes belonging to intra-protein signaling pathways on breaking cooperation. The efficiency of single, initially defecting nodes to convert the cooperation of other nodes to defection in social dilemma games may be an important measure to predict the importance of amino acids and proteins in the integration and regulation of complex biological systems. The NetworGame algorithm is downloadable from here: www. linkgroup.hu /NetworGame.php
Cooperation played a significant role in the URLanization and evolution of URLanisms. Both network topology and the initial position of cooperators heavily affect the cooperation of social dilemma games. We developed a novel simulation program package, called 'NetworGame', which is able to simulate any type of social dilemma games on any model, or real world networks with any assignment of initial cooperation or defection strategies to network nodes. The ability of initially defecting single nodes to break overall cooperation was called as 'game centrality'. The efficiency of this measure was verified on well-known social networks, and was extended to 'protein games', i.e. the simulation of cooperation between proteins, or their amino acids. Hubs and in particular, party hubs of yeast protein-protein interaction networks had a large influence to convert the cooperation of other nodes to defection. Simulations on methionyl-tRNA synthetase protein structure network indicated an increased influence of nodes belonging to intra-protein signaling pathways on breaking cooperation. The efficiency of single, initially defecting nodes to convert the cooperation of other nodes to defection in social dilemma games may be an important measure to predict the importance of nodes in the integration and regulation of complex systems. Game centrality may help to design more efficient interventions to cellular networks (in forms of drugs), to ecosystems and social networks. The NetworGame algorithm is downloadable from here: www. NetworGame. linkgroup.hu
[ { "type": "R", "before": "amino acids and proteins", "after": "nodes", "start_char_pos": 1279, "end_char_pos": 1303 }, { "type": "R", "before": "biological systems.", "after": "systems. Game centrality may help to design more efficient interventions to cellular networks (in forms of drugs), to ecosystems and social networks.", "start_char_pos": 1349, "end_char_pos": 1368 }, { "type": "A", "before": null, "after": "NetworGame.", "start_char_pos": 1426, "end_char_pos": 1426 }, { "type": "D", "before": "/NetworGame.php", "after": null, "start_char_pos": 1440, "end_char_pos": 1455 } ]
[ 0, 85, 203, 454, 564, 751, 910, 1090, 1368 ]
1304.4151
1
The normal operation of power system relies on accurate state estimation that faithfully reflects the physical aspects of the electrical power grids. However, recent research shows that carefully synthesized false-data injection attacks can bypass the security system and thus introduce arbitrary errors to state estimates. In this paper, we investigate defending mechanisms against false-data injection attacks . By protecting carefully selected meter measurements and (or) power network topological information, we show that no false-data injection attack can be formulated to compromise any set of state estimates. We characterize the optimal protection problem as a variant Steiner tree problem in a graph , and propose both exact and reduced-complexity approximation algorithms to derive the optimal protection strategy that achieves the protection objective with minimum cost. For practical implementation, we also develop a unified defending strategy that efficiently utilizes both secure meter measurements and covert topological information . The advantageous performance of the proposed defending mechanisms is verified in IEEE standard power system testcases . In both theory and practice, our results provide solid countermeasures against false-data injection attack in large-scale electrical power system, which will be useful in the security upgrade projects towards smart power grids .
The normal operation of power system relies on accurate state estimation that faithfully reflects the physical aspects of the electrical power grids. However, recent research shows that carefully synthesized false-data injection attacks can bypass the security system and introduce arbitrary errors to state estimates. In this paper, we use graphical methods to study defending mechanisms against false-data injection attacks on power system state estimation. By securing carefully selected meter measurements , no false data injection attack can be launched to compromise any set of state estimates. We characterize the optimal protection problem , which protects the state estimates with minimum number of measurements, as a variant Steiner tree problem in a graph . Based on the graphical characterization, we propose both exact and reduced-complexity approximation algorithms . In particular, we show that the proposed tree-pruning based approximation algorithm significantly reduces computational complexity, while yielding negligible performance degradation compared with the optimal algorithms. The advantageous performance of the proposed defending mechanisms is verified in IEEE standard power system testcases .
[ { "type": "D", "before": "thus", "after": null, "start_char_pos": 272, "end_char_pos": 276 }, { "type": "R", "before": "investigate", "after": "use graphical methods to study", "start_char_pos": 342, "end_char_pos": 353 }, { "type": "R", "before": ". By protecting", "after": "on power system state estimation. By securing", "start_char_pos": 412, "end_char_pos": 427 }, { "type": "R", "before": "and (or) power network topological information, we show that no false-data", "after": ", no false data", "start_char_pos": 466, "end_char_pos": 540 }, { "type": "R", "before": "formulated", "after": "launched", "start_char_pos": 565, "end_char_pos": 575 }, { "type": "A", "before": null, "after": ", which protects the state estimates with minimum number of measurements,", "start_char_pos": 665, "end_char_pos": 665 }, { "type": "R", "before": ", and", "after": ". Based on the graphical characterization, we", "start_char_pos": 711, "end_char_pos": 716 }, { "type": "D", "before": "to derive the optimal protection strategy that achieves the protection objective with minimum cost. For practical implementation, we also develop a unified defending strategy that efficiently utilizes both secure meter measurements and covert topological information", "after": null, "start_char_pos": 784, "end_char_pos": 1050 }, { "type": "A", "before": null, "after": "In particular, we show that the proposed tree-pruning based approximation algorithm significantly reduces computational complexity, while yielding negligible performance degradation compared with the optimal algorithms.", "start_char_pos": 1053, "end_char_pos": 1053 }, { "type": "D", "before": ". In both theory and practice, our results provide solid countermeasures against false-data injection attack in large-scale electrical power system, which will be useful in the security upgrade projects towards smart power grids", "after": null, "start_char_pos": 1172, "end_char_pos": 1400 } ]
[ 0, 149, 323, 413, 617, 883, 1052, 1173 ]
1304.4151
2
The normal operation of power system relies on accurate state estimation that faithfully reflects the physical aspects of the electrical power grids. However, recent research shows that carefully synthesized false-data injection attacks can bypass the security system and introduce arbitrary errors to state estimates. In this paper, we use graphical methods to study defending mechanisms against false-data injection attacks on power system state estimation. By securing carefully selected meter measurements, no false data injection attack can be launched to compromise any set of state estimates . We characterize the optimal protection problem, which protects the state estimates with minimum number of measurements, as a variant Steiner tree problem in a graph. Based on the graphical characterization, we propose both exact and reduced-complexity approximation algorithms. In particular, we show that the proposed tree-pruning based approximation algorithm significantly reduces computational complexity, while yielding negligible performance degradation compared with the optimal algorithms. The advantageous performance of the proposed defending mechanisms is verified in IEEE standard power system testcases.
The normal operation of power system relies on accurate state estimation that faithfully reflects the physical aspects of the electrical power grids. However, recent research shows that carefully synthesized false-data injection attacks can bypass the security system and introduce arbitrary errors to state estimates. In this paper, we use graphical methods to study defending mechanisms against false-data injection attacks on power system state estimation. By securing carefully selected meter measurements, no false data injection attack can be launched to compromise any set of state variables . We characterize the optimal protection problem, which protects the state variables with minimum number of measurements, as a variant Steiner tree problem in a graph. Based on the graphical characterization, we propose both exact and reduced-complexity approximation algorithms. In particular, we show that the proposed tree-pruning based approximation algorithm significantly reduces computational complexity, while yielding negligible performance degradation compared with the optimal algorithms. The advantageous performance of the proposed defending mechanisms is verified in IEEE standard power system testcases.
[ { "type": "R", "before": "estimates", "after": "variables", "start_char_pos": 589, "end_char_pos": 598 }, { "type": "R", "before": "estimates", "after": "variables", "start_char_pos": 674, "end_char_pos": 683 } ]
[ 0, 149, 318, 459, 600, 766, 878, 1098 ]
1304.4460
1
RNA production in the cell follows a succession of enzyme-mediated processing steps from transcription until maturation. The participating enzymes, for example the spliceosome for mRNAs and Drosha and Dicer for microRNAs, are also produced in the cell and their copy-numbers fluctuate over time. Enzyme copy-number changes affect the processing rate of the substrate molecules . High enzyme numbers increase the processing probability, low enzyme numbers decrease it. We study different RNA processing cascades where enzyme copy-numbers are either fixed or fluctuate. We find that for fixed enzyme-copy numbers the substrates at steady-state are Poisson-distributed, and the whole RNA cascade dynamics can be understood as a single birth-death process of the mature RNA product. Further, we show analytically and verify numerically that when enzyme copy-numbers fluctuate the strength of substrate fluctuations increases linearly with the RNA transcription rate. This linear effect becomes stronger as the speed of enzyme dynamics decreases relative to the speed of RNA dynamics. Interestingly, we find that under certain conditions, the RNA cascade can reduce the strength of fluctuations in the expression level of the mature RNA product. Finally, by investigating the effects of processing polymorphisms we show that it is possible for the effects of transcriptional polymorphisms to be enhanced, reduced or even reversed. Our results provide a comprehensive framework to understand the dynamics of RNA processing.
RNA molecules follow a succession of enzyme-mediated processing steps from transcription until maturation. The participating enzymes, for example the spliceosome for mRNAs and Drosha and Dicer for microRNAs, are also produced in the cell and their copy-numbers fluctuate over time. Enzyme copy-number changes affect the processing rate of the substrate molecules ; high enzyme numbers increase the processing probability, low enzyme numbers decrease it. We study different RNA processing cascades where enzyme copy-numbers are either fixed or fluctuate. We find that for fixed enzyme-copy numbers the substrates at steady-state are Poisson-distributed, and the whole RNA cascade dynamics can be understood as a single birth-death process of the mature RNA product. In this case, solely fluctuations in the timing of RNA processing lead to variation in the number of RNA molecules. However, we show analytically and numerically that when enzyme copy-numbers fluctuate , the strength of RNA fluctuations increases linearly with the RNA transcription rate. This linear effect becomes stronger as the speed of enzyme dynamics decreases relative to the speed of RNA dynamics. Interestingly, we find that under certain conditions, the RNA cascade can reduce the strength of fluctuations in the expression level of the mature RNA product. Finally, by investigating the effects of processing polymorphisms we show that it is possible for the effects of transcriptional polymorphisms to be enhanced, reduced , or even reversed. Our results provide a framework to understand the dynamics of RNA processing.
[ { "type": "R", "before": "production in the cell follows", "after": "molecules follow", "start_char_pos": 4, "end_char_pos": 34 }, { "type": "R", "before": ". High", "after": "; high", "start_char_pos": 377, "end_char_pos": 383 }, { "type": "R", "before": "Further,", "after": "In this case, solely fluctuations in the timing of RNA processing lead to variation in the number of RNA molecules. However,", "start_char_pos": 779, "end_char_pos": 787 }, { "type": "D", "before": "verify", "after": null, "start_char_pos": 813, "end_char_pos": 819 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 872, "end_char_pos": 872 }, { "type": "R", "before": "substrate", "after": "RNA", "start_char_pos": 889, "end_char_pos": 898 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1409, "end_char_pos": 1409 }, { "type": "D", "before": "comprehensive", "after": null, "start_char_pos": 1450, "end_char_pos": 1463 } ]
[ 0, 120, 295, 378, 467, 567, 778, 963, 1080, 1241, 1427 ]