Columns: doc_id (string, 2 to 10 chars), revision_depth (string, 5 classes), before_revision (string, 3 to 309k chars), after_revision (string, 5 to 309k chars), edit_actions (list), sents_char_pos (sequence)
1110.2573
1
In this paper we study several classical problems of optimal investment with intermediate consumption and random endowment in incomplete markets . We establish the key assertions of the utility maximization theory assuming that both primal and dual value functions are finite in the interiors of their domains as well as that random endowment at maturity can be dominated by the terminal value of a self-financing wealth process. In order to facilitate verification of these conditions, we present alternative, but equivalent conditions, under which the conclusions of the theory hold.
We consider a problem of optimal investment with intermediate consumption and random endowment in an incomplete semimartingale model of a financial market . We establish the key assertions of the utility maximization theory assuming that both primal and dual value functions are finite in the interiors of their domains as well as that random endowment at maturity can be dominated by the terminal value of a self-financing wealth process. In order to facilitate verification of these conditions, we present alternative, but equivalent conditions, under which the conclusions of the theory hold.
[ { "type": "R", "before": "In this paper we study several classical problems", "after": "We consider a problem", "start_char_pos": 0, "end_char_pos": 49 }, { "type": "R", "before": "incomplete markets", "after": "an incomplete semimartingale model of a financial market", "start_char_pos": 126, "end_char_pos": 144 } ]
[ 0, 146, 429 ]
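Each record follows the schema line for line: doc_id, revision_depth, before_revision, after_revision, edit_actions, and sents_char_pos (apparently the character offsets of sentence starts in before_revision). The edit_actions use type R (replace), A (add, with start_char_pos equal to end_char_pos), and D (delete), with character offsets into before_revision, so after_revision can be rebuilt by applying the actions from the highest offset downward, which leaves the offsets of not-yet-applied actions valid. A minimal Python sketch of that reconstruction follows; apply_edit_actions is our own hypothetical helper, not part of the dataset, and its output can differ from the stored after_revision by whitespace around deleted spans.

def apply_edit_actions(before: str, edit_actions: list) -> str:
    """Rebuild the revised text from before_revision plus edit_actions.

    Assumes the R/A/D action schema seen in the records above; the
    result may differ from after_revision in whitespace where a span
    was deleted.
    """
    text = before
    # Apply highest start_char_pos first so earlier offsets stay valid.
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = act["start_char_pos"], act["end_char_pos"]
        if act["type"] == "R":      # replace the span with the "after" text
            text = text[:start] + act["after"] + text[end:]
        elif act["type"] == "D":    # drop the span
            text = text[:start] + text[end:]
        elif act["type"] == "A":    # insert at the position (start == end)
            text = text[:start] + act["after"] + text[start:]
    return text

For the first record (doc 1110.2573), the two R actions map "In this paper we study several classical problems" (chars 0 to 49) to "We consider a problem" and "incomplete markets" (chars 126 to 144) to "an incomplete semimartingale model of a financial market", which reproduces the stored after_revision.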
1110.3121
1
The transient fluctuation of the population of species is investigated with a stochastic modelfor a catalytic network. The swinging changes in the fluctuation in the transient state from the initial growth to the final steady state is the consequence of a topology-dependent competition between the catalysis and spontaneous decay. Species in a sparse random network may be more likely to become extinct than expected from the value of the limit of the fluctuation in the steady state, and there is a risk of failing to reach by far the less fluctuating steady state.
The transient fluctuation of the prosperity of firms in a network economy is investigated with an abstract stochastic model. The model describes the profit which firms make when they sell materials to a firm which produces a product and the fixed cost expense to the firms to produce those materials and product. The formulae for this model are parallel to those for population dynamics. The swinging changes in the fluctuation in the transient state from the initial growth to the final steady state are the consequence of a topology-dependent time trial competition between the profitable interactions and expense. The firm in a sparse random network economy is more likely to go bankrupt than expected from the value of the limit of the fluctuation in the steady state, and there is a risk of failing to reach by far the less fluctuating steady state.
[ { "type": "R", "before": "population of species", "after": "prosperity of firms in a network economy", "start_char_pos": 33, "end_char_pos": 54 }, { "type": "R", "before": "a stochastic modelfor a catalytic network. The", "after": "an abstract stochastic model. The model describes the profit which firms make when they sell materials to a firm which produces a product and the fixed cost expense to the firms to produce those materials and product. The formulae for this model are parallel to those for population dynamics. The", "start_char_pos": 76, "end_char_pos": 122 }, { "type": "R", "before": "is", "after": "are", "start_char_pos": 232, "end_char_pos": 234 }, { "type": "A", "before": null, "after": "time trial", "start_char_pos": 275, "end_char_pos": 275 }, { "type": "R", "before": "catalysis and spontaneous decay. Species", "after": "profitable interactions and expense. The firm", "start_char_pos": 300, "end_char_pos": 340 }, { "type": "R", "before": "may be", "after": "economy is", "start_char_pos": 368, "end_char_pos": 374 }, { "type": "R", "before": "become extinct", "after": "go bankrupt", "start_char_pos": 390, "end_char_pos": 404 } ]
[ 0, 118, 332 ]
1110.3546
1
Threats on the stability of a financial system may severely affect the functioning of the entire economy, and thus considerable emphasis is placed on the analyzing the cause and effect of such threats. The financial crisis in the current and past decade has shown that one important cause of instability in global markets is the so-called financial contagion, namely the spreading of instabilities or failures of individual components of the network to other, perhaps healthier, components. This leads to a natural question of whether the regulatory authorities could have predicted and perhaps mitigated the current economic crisis by effective computations of some stability measure of the banking networks. Motivated by such observations, we consider the problem of defining and evaluating stabilities of both homogeneous and heterogeneous banking networks against propagation of synchronous idiosyncratic shocks given to a subset of banks. We formalize the homogeneous banking network model of Nier et al. and its corresponding heterogeneous version, formalize the synchronous shock propagation procedures outlined in that paper , define two appropriate stability measures and investigate the computational complexities of evaluating these measures for various network topologies and parameters of interest. Our results and proofs also shed some light on the properties of topologies and parameters of the network that may lead to higher or lower stabilities.
Threats on the stability of a financial system may severely affect the functioning of the entire economy, and thus considerable emphasis is placed on the analyzing the cause and effect of such threats. The financial crisis in the current and past decade has shown that one important cause of instability in global markets is the so-called financial contagion, namely the spreadings of instabilities or failures of individual components of the network to other, perhaps healthier, components. This leads to a natural question of whether the regulatory authorities could have predicted and perhaps mitigated the current economic crisis by effective computations of some stability measure of the banking networks. Motivated by such observations, we consider the problem of defining and evaluating stabilities of both homogeneous and heterogeneous banking networks against propagation of synchronous idiosyncratic shocks given to a subset of banks. We formalize the homogeneous banking network model of Nier et al. and its corresponding heterogeneous version, formalize the synchronous shock propagation procedures , define two appropriate stability measures and investigate the computational complexities of evaluating these measures for various network topologies and parameters of interest. Our results and proofs also shed some light on the properties of topologies and parameters of the network that may lead to higher or lower stabilities.
[ { "type": "R", "before": "spreading", "after": "spreadings", "start_char_pos": 371, "end_char_pos": 380 }, { "type": "D", "before": "outlined in that paper", "after": null, "start_char_pos": 1110, "end_char_pos": 1132 } ]
[ 0, 201, 490, 709, 943, 1311 ]
1110.3897
1
In this paper we consider stochastic optimization problems for a risk-avers investor when the decision maker is uncertain about the parameters of the underlying process. In a first part we consider problems of optimal stopping under drift ambiguity for one-dimensional diffusion processes. Analogously to the case of ordinary optimal stopping problems for one-dimensional Brow- nian motions we reduce the problem to the geometric problem of finding the smallest majorant of the reward function in an two-parameter function space. In a second part we solve optimal stopping problems when the underlying process can crash down. These problems are reduced to one optimal stopping problem and one Dynkin game. An explicit example is discussed.
In this paper we consider stochastic optimization problems for an ambiguity averse decision maker who is uncertain about the parameters of the underlying process. In a first part we consider problems of optimal stopping under drift ambiguity for one-dimensional diffusion processes. Analogously to the case of ordinary optimal stopping problems for one-dimensional Brownian motions we reduce the problem to the geometric problem of finding the smallest majorant of the reward function in a two-parameter function space. In a second part we solve optimal stopping problems when the underlying process may crash down. These problems are reduced to one optimal stopping problem and one Dynkin game. Examples are discussed.
[ { "type": "R", "before": "a risk-avers investor when the decision maker", "after": "an ambiguity averse decision maker who", "start_char_pos": 63, "end_char_pos": 108 }, { "type": "R", "before": "Brow- nian", "after": "Brownian", "start_char_pos": 372, "end_char_pos": 382 }, { "type": "R", "before": "an", "after": "a", "start_char_pos": 497, "end_char_pos": 499 }, { "type": "R", "before": "can", "after": "may", "start_char_pos": 610, "end_char_pos": 613 }, { "type": "R", "before": "An explicit example is", "after": "Examples are", "start_char_pos": 706, "end_char_pos": 728 } ]
[ 0, 169, 289, 529, 625, 705 ]
1110.4965
1
In this paper we consider an optimal dividend problem for an insurance company which risk process evolves as a spectrally negative L\'{e process (in the absence of dividend payments). We assume that the management of the company controls timing and size of dividend payments. The objective is to maximize the sum of the expected cumulative discounted dividends received until the moment of ruin and a penalty payment at the moment of ruin which is an increasing function of the size of the shortfall at ruin; in addition, there may be a fixed cost for taking out dividends. We explicitly solve the corresponding optimal control problem. The solution rests on the characterization of the value-function as the smallest stochastic super-solution that we establish . We find also an explicit necessary and sufficient condition for optimality of a single dividend-band strategy, in terms of a particular Gerber-Shiu function .
In this paper we consider an optimal dividend problem for an insurance company which risk process evolves as a spectrally negative Levy process (in the absence of dividend payments). We assume that the management of the company controls timing and size of dividend payments. The objective is to maximize the sum of the expected cumulative discounted dividends received until the moment of ruin and a penalty payment at the moment of ruin which is an increasing function of the size of the shortfall at ruin; in addition, there may be a fixed cost for taking out dividends. We explicitly solve the corresponding optimal control problem. The solution rests on the characterization of the value-function as (i) the unique stochastic solution of the associated HJB equation and as (ii) the pointwise smallest stochastic supersolution. We show that the optimal value process admits a dividend-penalty decomposition as sum of a martingale (associated to the penalty payment at ruin) and a potential (associated to the dividend payments) . We find also an explicit necessary and sufficient condition for optimality of a single dividend-band strategy, in terms of a particular Gerber-Shiu function . We analyze a number of concrete examples .
[ { "type": "R", "before": "L\\'{e", "after": "Levy", "start_char_pos": 131, "end_char_pos": 136 }, { "type": "R", "before": "the smallest stochastic super-solution that we establish", "after": "(i) the unique stochastic solution of the associated HJB equation and as (ii) the pointwise smallest stochastic supersolution. We show that the optimal value process admits a dividend-penalty decomposition as sum of a martingale (associated to the penalty payment at ruin) and a potential (associated to the dividend payments)", "start_char_pos": 705, "end_char_pos": 761 }, { "type": "A", "before": null, "after": ". We analyze a number of concrete examples", "start_char_pos": 921, "end_char_pos": 921 } ]
[ 0, 183, 275, 508, 573, 636, 763 ]
1110.4965
3
This paper concerns an optimal dividend distribution problem for an insurance company which risk process evolves as a spectrally negative L\'{e}vy process (in the absence of dividend payments). The management of the company is assumed to control timing and size of dividend payments. The objective is to maximize the sum of the expected cumulative discounted dividend payments received until the moment of ruin and a penalty payment at the moment of ruin which is an increasing function of the size of the shortfall at ruin; in addition, there may be a fixed cost for taking out dividends. A complete solution is presented to the corresponding stochastic control problem. It is established that the value-function is the unique stochastic solution and the pointwise smallest stochastic supersolution of the associated HJB equation. Furthermore, a necessary and sufficient condition is identified for optimality of a single dividend-band strategy, in terms of a particular Gerber-Shiu function. A number of concrete examples are analyzed.
This paper concerns an optimal dividend distribution problem for an insurance company whose risk process evolves as a spectrally negative L\'{e}vy process (in the absence of dividend payments). The management of the company is assumed to control timing and size of dividend payments. The objective is to maximize the sum of the expected cumulative discounted dividend payments received until the moment of ruin and a penalty payment at the moment of ruin , which is an increasing function of the size of the shortfall at ruin; in addition, there may be a fixed cost for taking out dividends. A complete solution is presented to the corresponding stochastic control problem. It is established that the value-function is the unique stochastic solution and the pointwise smallest stochastic supersolution of the associated HJB equation. Furthermore, a necessary and sufficient condition is identified for optimality of a single dividend-band strategy, in terms of a particular Gerber-Shiu function. A number of concrete examples are analyzed.
[ { "type": "R", "before": "which", "after": "whose", "start_char_pos": 86, "end_char_pos": 91 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 455, "end_char_pos": 455 } ]
[ 0, 193, 283, 525, 590, 672, 832, 994 ]
1110.5433
1
Recent studies have shown that adaptive networks driven by simple local rules URLanize into "critical" global steady states, thereby providing another framework for URLanized criticality (SOC). Here we study SOC in an adaptive network considered first by Bornholdt and Rohlf PRL, 84(26), p.6114-6117, 2000%DIFDELCMD < ]%%% . We focus on the important convergence to criticality and discover time-scale and noise optimal behaviour as well as a noise-induced phase transition . Due to the complexity of adaptive networks dynamics we suggest to investigate each effect separately by developing simple models. These models reveal three generically possible low-dimensional dynamical behaviors: time-scale resonance (TR), a simplified version of stochastic resonance - which call steady state stochastic resonance (SSR) - as well as noise-induced phase transitions. Thereby, our study not only opens up new directions for optimality in SOC but also applies to a much wider class of dynamical systems.
Recent studies have shown that adaptive networks driven by simple local rules URLanize into "critical" global steady states, thereby providing another framework for URLanized criticality (SOC). %DIFDELCMD < ]%%% We focus on the important convergence to criticality and demonstrate that noise and time-scale optimality are reached at finite values. This is in sharp contrast to the previously believed optimal zero noise and infinite time scale separation case. Furthermore, we discover a noise induced phase transition for the breakdown of SOC. As a novel paradigm for adaptive networks SOC, we suggest to investigate each of the three new effects separately by developing models. These models reveal three generically possible low-dimensional dynamical behaviors: time-scale resonance (TR), a new simplified version of stochastic resonance - which call steady state stochastic resonance (SSR) - as well as noise-induced phase transitions. Thereby, our study not only opens up new directions for optimality in SOC but also applies to a very wide class of dynamical systems.
[ { "type": "D", "before": "Here we study SOC in an adaptive network considered first by Bornholdt and Rohlf", "after": null, "start_char_pos": 194, "end_char_pos": 274 }, { "type": "D", "before": "PRL, 84(26), p.6114-6117, 2000", "after": null, "start_char_pos": 275, "end_char_pos": 305 }, { "type": "D", "before": ".", "after": null, "start_char_pos": 323, "end_char_pos": 324 }, { "type": "R", "before": "discover", "after": "demonstrate that noise and", "start_char_pos": 382, "end_char_pos": 390 }, { "type": "R", "before": "and noise optimal behaviour as well as a noise-induced phase transition . Due to the complexity of adaptive networks dynamics", "after": "optimality are reached at finite values. This is in sharp contrast to the previously believed optimal zero noise and infinite time scale separation case. Furthermore, we discover a noise induced phase transition for the breakdown of SOC. As a novel paradigm for adaptive networks SOC,", "start_char_pos": 402, "end_char_pos": 527 }, { "type": "R", "before": "effect", "after": "of the three new effects", "start_char_pos": 559, "end_char_pos": 565 }, { "type": "D", "before": "simple", "after": null, "start_char_pos": 591, "end_char_pos": 597 }, { "type": "A", "before": null, "after": "new", "start_char_pos": 719, "end_char_pos": 719 }, { "type": "R", "before": "much wider", "after": "very wide", "start_char_pos": 958, "end_char_pos": 968 } ]
[ 0, 193, 475, 605, 861 ]
1110.5433
2
Recent studies have shown that adaptive networks driven by simple local rules URLanize into "critical" global steady states, thereby providing another framework for URLanized criticality (SOC). We focus on the important convergence to criticality and demonstrate that noise and time-scale optimality are reached at finite values. This is in sharp contrast to the previously believed optimal zero noise and infinite time scale separation case. Furthermore, we discover a noise induced phase transition for the breakdown of SOC. As a novel paradigm for adaptive networks SOC, we suggest to investigate each of the three new effects separately by developing models. These models reveal three generically possible low-dimensional dynamical behaviors: time-scale resonance (TR), a new simplified version of stochastic resonance - which call steady state stochastic resonance (SSR) - as well as noise-induced phase transitions . Thereby, our study not only opens up new directions for optimality in SOC but also applies to a very wide class of dynamical systems .
Recent studies have shown that adaptive networks driven by simple local rules URLanize into "critical" global steady states, providing another framework for URLanized criticality (SOC). We focus on the important convergence to criticality and show that noise and time-scale optimality are reached at finite values. This is in sharp contrast to the previously believed optimal zero noise and infinite time scale separation case. Furthermore, we discover a noise induced phase transition for the breakdown of SOC. We also investigate each of the three new effects separately by developing models. These models reveal three generically low-dimensional dynamical behaviors: time-scale resonance (TR), a new simplified version of stochastic resonance - which we call steady state stochastic resonance (SSR) - as well as noise-induced phase transitions .
[ { "type": "D", "before": "thereby", "after": null, "start_char_pos": 125, "end_char_pos": 132 }, { "type": "R", "before": "demonstrate", "after": "show", "start_char_pos": 251, "end_char_pos": 262 }, { "type": "R", "before": "As a novel paradigm for adaptive networks SOC, we suggest to", "after": "We also", "start_char_pos": 527, "end_char_pos": 587 }, { "type": "D", "before": "possible", "after": null, "start_char_pos": 701, "end_char_pos": 709 }, { "type": "A", "before": null, "after": "we", "start_char_pos": 831, "end_char_pos": 831 }, { "type": "D", "before": ". Thereby, our study not only opens up new directions for optimality in SOC but also applies to a very wide class of dynamical systems", "after": null, "start_char_pos": 922, "end_char_pos": 1056 } ]
[ 0, 193, 329, 442, 526, 662, 923 ]
1110.5446
1
We consider the problem of maximizing the expected utility of discounted dividend payments of an insurance company whose reserves are modeled as a Cram\'er risk processwith Erlang claims. We focus on the exponential claims and power and logarithmic utility functions. Finally we also analyze asymptotic behaviour of the value function and identify the asymptotic optimal strategy. We also give the numerical procedure of finding considered value function.
We consider the problem of maximizing the discounted utility of dividend payments of an insurance company whose reserves are modeled as a classical Cram\'er-Lundberg risk process. We investigate this optimization problem under the constraint that dividend rate is bounded. We prove that the value function , defined in this model, fulfills the Hamilton-Jacobi-Bellman equation and identify the optimal dividend strategy. Eventually we extend our results for the reserve process modeled as a classical Cram\'er-Lundberg risk process with capital injections. For the extended model we also prove some results regarding asymptotic analysis of the value function.
[ { "type": "R", "before": "expected utility of discounted", "after": "discounted utility of", "start_char_pos": 42, "end_char_pos": 72 }, { "type": "R", "before": "Cram\\'er risk processwith Erlang claims. We focus on the exponential claims and power and logarithmic utility functions. Finally we also analyze asymptotic behaviour of", "after": "classical Cram\\'er-Lundberg risk process. We investigate this optimization problem under the constraint that dividend rate is bounded. We prove that", "start_char_pos": 147, "end_char_pos": 315 }, { "type": "A", "before": null, "after": ", defined in this model, fulfills the Hamilton-Jacobi-Bellman equation", "start_char_pos": 335, "end_char_pos": 335 }, { "type": "R", "before": "asymptotic optimal strategy. We also give the numerical procedure of finding considered", "after": "optimal dividend strategy. Eventually we extend our results for the reserve process modeled as a classical Cram\\'er-Lundberg risk process with capital injections. For the extended model we also prove some results regarding asymptotic analysis of the", "start_char_pos": 353, "end_char_pos": 440 } ]
[ 0, 187, 267, 381 ]
1110.5446
2
We consider the problem of maximizing the discounted utility of dividend payments of an insurance company whose reserves are modeled as a classical Cram\'er-Lundberg risk process. We investigate this optimization problem under the constraint that dividend rate is bounded. We prove that the value function , defined in this model, fulfills the Hamilton-Jacobi-Bellman equation and identify the optimal dividend strategy . Eventually we extend our results for the reserve process modeled as a classical Cram\'er-Lundberg risk process with capital injections. For the extended model we also prove some results regarding asymptotic analysis of the value function .
We consider the problem of maximizing the discounted utility of dividend payments of an insurance company whose reserves are modeled as a classical Cram\'er-Lundberg risk process. We investigate this optimization problem under the constraint that dividend rate is bounded. We prove that the value function fulfills the Hamilton-Jacobi-Bellman equation and we identify the optimal dividend strategy .
[ { "type": "D", "before": ", defined in this model,", "after": null, "start_char_pos": 306, "end_char_pos": 330 }, { "type": "A", "before": null, "after": "we", "start_char_pos": 381, "end_char_pos": 381 }, { "type": "D", "before": ". Eventually we extend our results for the reserve process modeled as a classical Cram\\'er-Lundberg risk process with capital injections. For the extended model we also prove some results regarding asymptotic analysis of the value function", "after": null, "start_char_pos": 421, "end_char_pos": 660 } ]
[ 0, 179, 272, 422, 558 ]
1110.5594
1
The Heston stochastic volatility process, which is widely used as an asset price model in mathematical finance, is a paradigm for a degenerate diffusion process where the degeneracy in the diffusion coefficient is proportional to the square root of the distance to the boundary of the half-plane. The generator of this process with killing, called the elliptic Heston operator, is a second-order degenerate elliptic partial differential operator whose coefficients have linear growth in the spatial variables and where the degeneracy in the operator symbol is proportional to the distance to the boundary of the half-plane. With the aid of weighted Sobolev spaces, we prove supremum bounds, a Harnack inequality, and Holder continuity near the boundary for solutions to elliptic variational equations defined by the Heston partial differential operator, as well as Holder continuity up to the boundary for solutions to elliptic variational inequalities defined by the Heston operator. In mathematical finance, solutions to obstacle problems for the elliptic Heston operator correspond to value functions for perpetual American-style options on the underlying asset.
The Heston stochastic volatility process, which is widely used as an asset price model in mathematical finance, is a paradigm for a degenerate diffusion process where the degeneracy in the diffusion coefficient is proportional to the square root of the distance to the boundary of the half-plane. The generator of this process with killing, called the elliptic Heston operator, is a second-order , degenerate-elliptic partial differential operator whose coefficients have linear growth in the spatial variables and where the degeneracy in the operator symbol is proportional to the distance to the boundary of the half-plane. With the aid of weighted Sobolev spaces, we prove supremum bounds, a Harnack inequality, and H\"older continuity near the boundary for solutions to variational equations defined by the elliptic Heston operator, as well as H\"older continuity up to the boundary for solutions to variational inequalities defined by the elliptic Heston operator. In mathematical finance, solutions to obstacle problems for the elliptic Heston operator correspond to value functions for perpetual American-style options on the underlying asset.
[ { "type": "R", "before": "degenerate elliptic", "after": ", degenerate-elliptic", "start_char_pos": 396, "end_char_pos": 415 }, { "type": "R", "before": "Holder", "after": "H\\\"older", "start_char_pos": 717, "end_char_pos": 723 }, { "type": "D", "before": "elliptic", "after": null, "start_char_pos": 770, "end_char_pos": 778 }, { "type": "R", "before": "Heston partial differential", "after": "elliptic Heston", "start_char_pos": 816, "end_char_pos": 843 }, { "type": "R", "before": "Holder", "after": "H\\\"older", "start_char_pos": 865, "end_char_pos": 871 }, { "type": "D", "before": "elliptic", "after": null, "start_char_pos": 919, "end_char_pos": 927 }, { "type": "A", "before": null, "after": "elliptic", "start_char_pos": 968, "end_char_pos": 968 } ]
[ 0, 296, 623, 985 ]
1110.5622
1
Tailoring self-assembly of proteins on solid surfaces remains a great challenge in diverse fields, e.g., enzyme catalysis, biosensing, and biomineralization, where surface functions are retained by controlling molecular structures. As a new approach in designing surface chemistry, we use here biocombinatorially selected graphite binding dodecapeptides which form self-assembled, long-range ordered peptide nanostructures on graphite. The peptide is engineered via simple sequence mutations to control fundamental processes of self-assembly, e.g., binding to solid surfaces, growth kinetics, and intermolecular interactions. Using atomic force microscopy and contact angle studies, we identify three domains of amino acids along the primary sequence that steer peptide aggregation and ordering, leading to uniformly displayed residues and sequence-programmable surface chemistries. Short peptides, with easily designed sequences, offer versatile control over molecular functionalization at liquid-solid interfaces, resulting in well-defined surface properties and nanoscale topology essential in building engineered, chemically rich, bio-solid interfaces.
This paper has been withdrawn by the author due to a missing figure
[ { "type": "R", "before": "Tailoring self-assembly of proteins on solid surfaces remains a great challenge in diverse fields, e.g., enzyme catalysis, biosensing, and biomineralization, where surface functions are retained by controlling molecular structures. As a new approach in designing surface chemistry, we use here biocombinatorially selected graphite binding dodecapeptides which form self-assembled, long-range ordered peptide nanostructures on graphite. The peptide is engineered via simple sequence mutations to control fundamental processes of self-assembly, e.g., binding to solid surfaces, growth kinetics, and intermolecular interactions. Using atomic force microscopy and contact angle studies, we identify three domains of amino acids along the primary sequence that steer peptide aggregation and ordering, leading to uniformly displayed residues and sequence-programmable surface chemistries. Short peptides, with easily designed sequences, offer versatile control over molecular functionalization at liquid-solid interfaces, resulting in well-defined surface properties and nanoscale topology essential in building engineered, chemically rich, bio-solid interfaces.", "after": "This paper has been withdrawn by the author due to a missing figure", "start_char_pos": 0, "end_char_pos": 1156 } ]
[ 0, 231, 435, 625, 882 ]
1110.5789
1
This paper proposes a model of financial contagion that accounts for explosive, mutually exciting shocks to market volatility. We fit the model using country-level data during the European sovereign debt crisis, which has its roots in the period 2008--2010, and was continuing to affect global markets as of October, 2011. Our analysis shows that existing volatility models are unable to explain two key stylized features of global markets during presumptive contagion periods: shocks to aggregate market volatility can be sudden and explosive, and they are associated with specific directional biases in the cross-section of country-level returns. Our model repairs this deficit by assuming that the random shocks to volatility are heavy-tailed and correlated cross-sectionally, both with each other and with returns. We find evidence for significant contagion effects during the major EU crisis periods of May 2010 and August 2011, where contagion is defined as excess correlation in the residuals from a factor model incorporating global and regional market risk factors. Some of this excess correlation can be explained by quantifying the impact of shocks to aggregate volatility in the cross-section of expected returns---but only, it turns out, if one is extremely careful in accounting for the explosive nature of these shocks. We show that global markets have time-varying cross-sectional sensitivities to these shocks, and that high sensitivities strongly predict periods of financial crisis. Moreover, the pattern of temporal changes in correlation structure between volatility and returns is readily interpretable in terms of the major events of the periods in question.
This paper proposes an empirical test of financial contagion in European equity markets during the tumultuous period of 2008-2011. Our analysis shows that traditional GARCH and Gaussian stochastic-volatility models are unable to explain two key stylized features of global markets during presumptive contagion periods: shocks to aggregate market volatility can be sudden and explosive, and they are associated with specific directional biases in the cross-section of country-level returns. Our model repairs this deficit by assuming that the random shocks to volatility are heavy-tailed and correlated cross-sectionally, both with each other and with returns. The fundamental conclusion of our analysis is that great care is needed in modeling volatility if one wishes to characterize the relationship between volatility and contagion that is predicted by economic theory. In analyzing daily data, we find evidence for significant contagion effects during the major EU crisis periods of May 2010 and August 2011, where contagion is defined as excess correlation in the residuals from a factor model incorporating global and regional market risk factors. Some of this excess correlation can be explained by quantifying the impact of shocks to aggregate volatility in the cross-section of expected returns - but only, it turns out, if one is extremely careful in accounting for the explosive nature of these shocks. We show that global markets have time-varying cross-sectional sensitivities to these shocks, and that high sensitivities strongly predict periods of financial crisis. Moreover, the pattern of temporal changes in correlation structure between volatility and returns is readily interpretable in terms of the major events of the periods in question.
[ { "type": "R", "before": "a model", "after": "an empirical test", "start_char_pos": 20, "end_char_pos": 27 }, { "type": "R", "before": "that accounts for explosive, mutually exciting shocks to market volatility. We fit the model using country-level data during the European sovereign debt crisis, which has its roots in the period 2008--2010, and was continuing to affect global markets as of October, 2011.", "after": "in European equity markets during the tumultuous period of 2008-2011.", "start_char_pos": 51, "end_char_pos": 322 }, { "type": "R", "before": "existing volatility", "after": "traditional GARCH and Gaussian stochastic-volatility", "start_char_pos": 347, "end_char_pos": 366 }, { "type": "R", "before": "We", "after": "The fundamental conclusion of our analysis is that great care is needed in modeling volatility if one wishes to characterize the relationship between volatility and contagion that is predicted by economic theory. In analyzing daily data, we", "start_char_pos": 819, "end_char_pos": 821 }, { "type": "R", "before": "returns---but", "after": "returns - but", "start_char_pos": 1217, "end_char_pos": 1230 } ]
[ 0, 126, 322, 648, 818, 1074, 1334, 1501 ]
1110.5997
1
The Hill coefficient is often used as a direct measure of the cooperativity of binding processes. It is an essential tool for probing properties of reactions in many biological systems. Here we analyze existing experimental data and demonstrate that the Hill coefficient characterizing the binding of many transcription factors to their cognate sites can in fact be larger than one -- the standard indication of cooperativity -- even in the absence of any standard cooperative binding mechanism. By studying the problem analytically, we demonstrate that this effect occurs due to the disordered binding energy of the transcription factor to the DNA molecule and the steric interactions between the different copies of the transcription factor. We quantify the dependence of the strength of this effect on the different parameters in the problem. In addition, we show that the enhanced Hill coefficient implies a significant reduction in the number of copies of the transcription factors which is needed to occupy a cognate site and, in many cases, can explain existing estimates for numbers of the transcription factors in cells .
The Hill coefficient is often used as a direct measure of the cooperativity of binding processes. It is an essential tool for probing properties of reactions in many biochemical systems. Here we analyze existing experimental data and demonstrate that the Hill coefficient characterizing the binding of transcription factors to their cognate sites can in fact be larger than one -- the standard indication of cooperativity -- even in the absence of any standard cooperative binding mechanism. By studying the problem analytically, we demonstrate that this effect occurs due to the disordered binding energy of the transcription factor to the DNA molecule and the steric interactions between the different copies of the transcription factor. We show that the enhanced Hill coefficient implies a significant reduction in the number of copies of the transcription factors which is needed to occupy a cognate site and, in many cases, can explain existing estimates for numbers of the transcription factors in cells . The mechanism is general and should be applicable to other biological recognition processes .
[ { "type": "R", "before": "biological", "after": "biochemical", "start_char_pos": 166, "end_char_pos": 176 }, { "type": "D", "before": "many", "after": null, "start_char_pos": 301, "end_char_pos": 305 }, { "type": "D", "before": "quantify the dependence of the strength of this effect on the different parameters in the problem. In addition, we", "after": null, "start_char_pos": 747, "end_char_pos": 861 }, { "type": "A", "before": null, "after": ". The mechanism is general and should be applicable to other biological recognition processes", "start_char_pos": 1129, "end_char_pos": 1129 } ]
[ 0, 97, 185, 495, 743, 845 ]
1110.6289
1
A drawdown constraint forces the current wealth to remain above a given function of its maximum to date. We consider the portfolio optimisation problem of maximising the long-term growth rate of the expected utility of wealth subject to a drawdown constraint, as in the original setup of Grossman and Zhou (1993). We work in an abstract semimartingale financial market model with a general class of utility functions and drawdown constraints. We solve the problem by showing that it is in fact equivalent to an unconstrained problem but for a modified utility function. Both the value function and the optimal investment policy for the drawdown problem are given explicitly in terms of their counterparts in the unconstrained problem . Our approach is very general but has an important limitation in that we assume all admissible wealth processes have a continuous running maximum. This allows us to use Azema-Yor processes. The proofs also rely on convergence properties, in the utility function, of the unconstrained problem which are of independent interest .
A drawdown constraint forces the current wealth to remain above a given function of its maximum to date. We consider the portfolio optimisation problem of maximising the long-term growth rate of the expected utility of wealth subject to a drawdown constraint, as in the original setup of Grossman and Zhou (1993). We work in an abstract semimartingale financial market model with a general class of utility functions and drawdown constraints. We solve the problem by showing that it is in fact equivalent to an unconstrained problem with a suitably modified utility function. Both the value function and the optimal investment policy for the drawdown problem are given explicitly in terms of their counterparts in the unconstrained problem .
[ { "type": "R", "before": "but for a", "after": "with a suitably", "start_char_pos": 533, "end_char_pos": 542 }, { "type": "D", "before": ". Our approach is very general but has an important limitation in that we assume all admissible wealth processes have a continuous running maximum. This allows us to use Azema-Yor processes. The proofs also rely on convergence properties, in the utility function, of the unconstrained problem which are of independent interest", "after": null, "start_char_pos": 734, "end_char_pos": 1060 } ]
[ 0, 104, 313, 442, 569, 735, 881, 924 ]
1111.0233
1
In the given article the methods of parametric diagnostics of gas turbine based on fuzzy logic is proposed. The diagnostic map of interconnection between some parts of turbine and changes of corresponding parameters has been developed. Also we have created model to define the efficiency of the compressor using fuzzy logic algorithms .
The creation of the systems models is very actual at present time, because it allow to simulate the work of some complex equipment without any additional spends. The given model of gas turbine is allowed to test and optimize the software for gas turbine automation systems, study station personal, like operators and engineers and will be useful for diagnostics and prediction tasks to analyze the efficiency of the gas turbine .
[ { "type": "R", "before": "In the given article the methods of parametric diagnostics", "after": "The creation of the systems models is very actual at present time, because it allow to simulate the work of some complex equipment without any additional spends. The given model", "start_char_pos": 0, "end_char_pos": 58 }, { "type": "R", "before": "based on fuzzy logic is proposed. The diagnostic map of interconnection between some parts of turbine and changes of corresponding parameters has been developed. Also we have created model to define", "after": "is allowed to test and optimize the software for gas turbine automation systems, study station personal, like operators and engineers and will be useful for diagnostics and prediction tasks to analyze", "start_char_pos": 74, "end_char_pos": 272 }, { "type": "R", "before": "compressor using fuzzy logic algorithms", "after": "gas turbine", "start_char_pos": 295, "end_char_pos": 334 } ]
[ 0, 107, 235 ]
1111.0808
1
The realization of Daily Artificial Dispatcher as a quantum/relativistic computation consists of perturbative renormalization of the Electrical Power System (EPS), generating the flowcharts of computation , verification, validation, description and help. Perturbative renormalization of EPS energy and time has been carried out in this paper for a day ahead via virtual thermalization of the EPS for a day ahead .
An algorithm for Electric Power System (EPS) quantum/relativistic security and efficiency computation for a day-ahead via perturbative renormalization of the EPS , finding the computation flowcharts, verification and validation is built in this paper .
[ { "type": "R", "before": "The realization of Daily Artificial Dispatcher as a", "after": "An algorithm for Electric Power System (EPS)", "start_char_pos": 0, "end_char_pos": 51 }, { "type": "R", "before": "computation consists of", "after": "security and efficiency computation for a day-ahead via", "start_char_pos": 73, "end_char_pos": 96 }, { "type": "R", "before": "Electrical Power System (EPS), generating the flowcharts of computation", "after": "EPS", "start_char_pos": 133, "end_char_pos": 204 }, { "type": "R", "before": "verification, validation, description and help. Perturbative renormalization of EPS energy and time has been carried out", "after": "finding the computation flowcharts, verification and validation is built", "start_char_pos": 207, "end_char_pos": 327 }, { "type": "D", "before": "for a day ahead via virtual thermalization of the EPS for a day ahead", "after": null, "start_char_pos": 342, "end_char_pos": 411 } ]
[ 0, 254 ]
1111.1133
1
This paper introduces a general framework of covariance structures that can be verified in many popular statistical models, such as factor and random effect models. The new structure is a summation of low rank and sparse matrices. We propose a LOw Rank and sparsE Covariance estimator (LOREC) to exploit this general structure in the high-dimensional setting. Analysis of this estimator shows that it recovers exactly the rank and support of the two componentsrespectively . Convergence rates under various norms are also presented. The estimatoris computed efficiently using convex optimization. We propose an iterative algorithm , based on Nesterov's method, to solve the optimization criterion. The algorithm is shown to produce a solution within O(1/t^2) of the optimal, after any finite t iterations. Numerical performance is illustrated using simulated data and stock portfolio selection on S&P 100.
Many popular statistical models, such as factor and random effects models, give arise a certain type of covariance structures that is a summation of low rank and sparse matrices. This paper introduces a penalized approximation framework to recover such model structures from large covariance matrix estimation. We propose an estimator based on minimizing a non-likelihood loss with separable non-smooth penalty functions. This estimator is shown to recover exactly the rank and sparsity patterns of these two components, and thus partially recovers the model structures . Convergence rates under various matrix norms are also presented. To compute this estimator, we further develop a first-order iterative algorithm to solve a convex optimization problem that contains separa- ble non-smooth functions, and the algorithm is shown to produce a solution within O(1/t^2) of the optimal, after any finite t iterations. Numerical performance is illustrated using simulated data and stock portfolio selection on S&P 100.
[ { "type": "R", "before": "This paper introduces a general framework of covariance structures that can be verified in many", "after": "Many", "start_char_pos": 0, "end_char_pos": 95 }, { "type": "R", "before": "effect models. The new structure", "after": "effects models, give arise a certain type of covariance structures that", "start_char_pos": 150, "end_char_pos": 182 }, { "type": "R", "before": "We propose a LOw Rank and sparsE Covariance estimator (LOREC) to exploit this general structure in the high-dimensional setting. Analysis of this estimator shows that it recovers", "after": "This paper introduces a penalized approximation framework to recover such model structures from large covariance matrix estimation. We propose an estimator based on minimizing a non-likelihood loss with separable non-smooth penalty functions. This estimator is shown to recover", "start_char_pos": 231, "end_char_pos": 409 }, { "type": "R", "before": "support of the two componentsrespectively", "after": "sparsity patterns of these two components, and thus partially recovers the model structures", "start_char_pos": 431, "end_char_pos": 472 }, { "type": "A", "before": null, "after": "matrix", "start_char_pos": 507, "end_char_pos": 507 }, { "type": "R", "before": "The estimatoris computed efficiently using convex optimization. We propose an iterative algorithm , based on Nesterov's method, to solve the optimization criterion. The", "after": "To compute this estimator, we further develop a first-order iterative algorithm to solve a convex optimization problem that contains separa- ble non-smooth functions, and the", "start_char_pos": 534, "end_char_pos": 702 } ]
[ 0, 164, 230, 359, 533, 597, 698, 806 ]
1111.1349
1
In this paper, we introduce a multivariate extension of the classical univariate Value- at-Risk (VaR). This extension may be useful to understand how solvency capital re- quirement computed for a given financial institution may be affected by the presence of additional risks . We also generalize the bivariate Conditional-Tail-Expectation (CTE), previously introduced by Di Bernardino et al. (2011), in a multivariate set- ting and we study its behavior. Several properties have been derived. In particular, we show that these two risk measures both satisfy the positive homogeneity and the translation invariance property. Comparison between univariate risk measures and components of multivariate VaR and CTE are provided. We also analyze how they are impacted by a change in marginal distributions, by a change in dependence structure and by a change in risk level. Interestingly, these results turn to be con- sistent with existing properties on univariate risk measures. Illustrations are given in the class of Archimedean copulas.
In this paper, we introduce a multivariate extension of the classical univariate Value-at-Risk (VaR). This extension may be useful to understand how solvency capital requirement is affected by the presence of risks that cannot be diversify away. This is typically the case for a network of highly interconnected financial institutions in a macro-prudential regulatory system. We also generalize the bivariate Conditional-Tail-Expectation (CTE), previously introduced by Di Bern- ardino et al. (2011), in a multivariate setting and we study its behavior. Several properties have been derived. In particular, we show that these two risk measures both satisfy the positive homogeneity and the translation invariance property. Comparison between univariate risk meas- ures and components of multivariate VaR and CTE are provided. We also analyze how they are impacted by a change in marginal distributions, by a change in dependence structure and by a change in risk level. Interestingly, these results turn to be consistent with existing properties on univariate risk measures. Illustrations are given in the class of Archimedean copulas.
[ { "type": "R", "before": "Value- at-Risk", "after": "Value-at-Risk", "start_char_pos": 81, "end_char_pos": 95 }, { "type": "R", "before": "re- quirement computed for a given financial institution may be", "after": "requirement is", "start_char_pos": 167, "end_char_pos": 230 }, { "type": "R", "before": "additional risks .", "after": "risks that cannot be diversify away. This is typically the case for a network of highly interconnected financial institutions in a macro-prudential regulatory system.", "start_char_pos": 259, "end_char_pos": 277 }, { "type": "R", "before": "Bernardino", "after": "Bern- ardino", "start_char_pos": 375, "end_char_pos": 385 }, { "type": "R", "before": "set- ting", "after": "setting", "start_char_pos": 419, "end_char_pos": 428 }, { "type": "R", "before": "measures", "after": "meas- ures", "start_char_pos": 660, "end_char_pos": 668 }, { "type": "R", "before": "con- sistent", "after": "consistent", "start_char_pos": 910, "end_char_pos": 922 } ]
[ 0, 102, 277, 455, 493, 624, 725, 869, 976 ]
1111.1349
2
In this paper, we introduce a multivariate extension of the classical univariate Value-at-Risk (VaR) . This extension may be useful to understand how solvency capital requirement is affected by the presence of risks that cannot be diversify away. This is typically the case for a network of highly interconnected financial institutions in a macro-prudential regulatory system. We also generalize the bivariate Conditional-Tail-Expectation (CTE), previously introduced by Di Bern- ardino et al. (2011), in a multivariate setting and we study its behavior. Several properties have been derived. In particular, we show that these two risk measures both satisfy the positive homogeneity and the translation invariance property. Comparison between univariate risk meas- ures and components of multivariate VaR and CTE are provided. We also analyze how they are impacted by a change in marginal distributions, by a change in dependence structure and by a change in risk level . Interestingly, these results turn to be consistent with existing properties on univariate risk measures . Illustrations are given in the class of Archimedean copulas.
In this paper, we introduce two alternative extensions of the classical univariate Value-at-Risk (VaR) in a multivariate setting. The two proposed multivariate VaR are vector-valued measures with the same dimension as the underlying risk portfolio. The lower-orthant VaR is constructed from level sets of multivariate distribution functions whereas the upper-orthant VaR is constructed from level sets of multivariate survival functions . Several properties have been derived. In particular, we show that these risk measures both satisfy the positive homogeneity and the translation invariance property. Comparison between univariate risk measures and components of multivariate VaR are provided. We also analyze how these measures are impacted by a change in marginal distributions, by a change in dependence structure and by a change in risk level . Illustrations are given in the class of Archimedean copulas.
[ { "type": "R", "before": "a multivariate extension", "after": "two alternative extensions", "start_char_pos": 28, "end_char_pos": 52 }, { "type": "A", "before": null, "after": "in a multivariate setting. The two proposed multivariate VaR are vector-valued measures with the same dimension as the underlying risk portfolio. The lower-orthant VaR is constructed from level sets of multivariate distribution functions whereas the upper-orthant VaR is constructed from level sets of multivariate survival functions", "start_char_pos": 101, "end_char_pos": 101 }, { "type": "D", "before": "This extension may be useful to understand how solvency capital requirement is affected by the presence of risks that cannot be diversify away. This is typically the case for a network of highly interconnected financial institutions in a macro-prudential regulatory system. We also generalize the bivariate Conditional-Tail-Expectation (CTE), previously introduced by Di Bern- ardino et al. (2011), in a multivariate setting and we study its behavior.", "after": null, "start_char_pos": 104, "end_char_pos": 555 }, { "type": "D", "before": "two", "after": null, "start_char_pos": 628, "end_char_pos": 631 }, { "type": "R", "before": "meas- ures", "after": "measures", "start_char_pos": 760, "end_char_pos": 770 }, { "type": "D", "before": "and CTE", "after": null, "start_char_pos": 806, "end_char_pos": 813 }, { "type": "R", "before": "they", "after": "these measures", "start_char_pos": 848, "end_char_pos": 852 }, { "type": "D", "before": ". Interestingly, these results turn to be consistent with existing properties on univariate risk measures", "after": null, "start_char_pos": 971, "end_char_pos": 1076 } ]
[ 0, 247, 377, 555, 593, 724, 827, 972, 1078 ]
1111.1646
1
Many biological electron transfer (ET) reactions are mediated by metal centres in proteins. NADH:ubiquinone oxidoreductase (complex I) contains an intramolecular chain of seven iron-sulphur (FeS) clusters, one of the longest chains of metal centres in biology and a test case for physical models of intramolecular ET. In biology, intramolecular ET is commonly described as a diffusive hopping process, according to the semi-classical theories of Marcus and Hopfield. However, recent studies have raised the possibility that non-trivial quantum mechanical effects play a functioning role in certain biomolecular processes. Here, we extend the semi-classical model for biological ET to incorporate both semi-classical and coherent quantum phenomena using a quantum master equation based on the Holstein Hamiltonian. We test our model on the structurally-defined chain of FeS clusters in complex I. By exploring a wide range of realistic parameters we and that, when the energy profile for ET along the chain is relatively at , just a small coherent contribution can provide a robust and significant increase in ET rate (above the semi-classical diffusive-hopping rate), even at physiologically-relevant temperatures. Conversely, when the on-site energies vary significantly along the chain the coherent contribution is negligible. For complex I, a crucial respiratory enzyme that is linked to many neuromuscular and degenerative diseases, our results suggest a new contribution towards ensuring that intramolecular ET does not limit the rate of catalysis. For the emerging field of quantum biology, our model is intended as a basis for elucidating the general role of coherent ET in biological ET reactions.
Many biological electron transfer (ET) reactions are mediated by metal centres in proteins. NADH:ubiquinone oxidoreductase (complex I) contains an intramolecular chain of seven iron-sulphur (FeS) clusters, one of the longest chains of metal centres in biology and a test case for physical models of intramolecular ET. In biology, intramolecular ET is commonly described as a diffusive hopping process, according to the semi-classical theories of Marcus and Hopfield. However, recent studies have raised the possibility that non-trivial quantum mechanical effects play a functioning role in certain biomolecular processes. Here, we extend the semi-classical model for biological ET to incorporate both semi-classical and coherent quantum phenomena using a quantum master equation based on the Holstein Hamiltonian. We test our model on the structurally-defined chain of FeS clusters in complex I. By exploring a wide range of realistic parameters we find that, when the energy profile for ET along the chain is relatively flat , just a small coherent contribution can provide a robust and significant increase in ET rate (above the semi-classical diffusive-hopping rate), even at physiologically-relevant temperatures. Conversely, when the on-site energies vary significantly along the chain the coherent contribution is negligible. For complex I, a crucial respiratory enzyme that is linked to many neuromuscular and degenerative diseases, our results suggest a new contribution towards ensuring that intramolecular ET does not limit the rate of catalysis. For the emerging field of quantum biology, our model is intended as a basis for elucidating the general role of coherent ET in biological ET reactions.
[ { "type": "R", "before": "and", "after": "find", "start_char_pos": 949, "end_char_pos": 952 }, { "type": "R", "before": "at", "after": "flat", "start_char_pos": 1020, "end_char_pos": 1022 } ]
[ 0, 91, 317, 466, 621, 813, 895, 1214, 1328, 1553 ]
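The semi-classical baseline in this record (diffusive hopping between neighbouring FeS clusters) can be sketched as a continuous-time Markov chain. The snippet below computes the mean first-passage time of an electron along a seven-site chain with detailed-balance hopping rates; the prefactor k0, the thermal energy and the two energy profiles are assumptions, and the coherent quantum-master-equation correction studied in the paper is deliberately not included. It only illustrates how sensitive the hopping rate is to a flat versus a rough on-site energy profile.

```python
import numpy as np

def mfpt_chain(energies, k0=1.0, kT=1.0):
    """Mean first-passage time from site 0 to the last site of a 1D hopping
    chain with nearest-neighbour, detailed-balance rates (semi-classical only)."""
    n = len(energies)
    Q = np.zeros((n, n))
    for i in range(n - 1):
        dE = energies[i + 1] - energies[i]
        Q[i, i + 1] = k0 * np.exp(-dE / (2.0 * kT))   # forward hop
        Q[i + 1, i] = k0 * np.exp(+dE / (2.0 * kT))   # backward hop
    np.fill_diagonal(Q, -Q.sum(axis=1))               # generator matrix
    # Make the last site absorbing: tau solves Q_transient @ tau = -1.
    tau = np.linalg.solve(Q[:-1, :-1], -np.ones(n - 1))
    return tau[0]

flat  = np.zeros(7)                                       # flat energy profile
rough = np.array([0.0, 0.8, -0.5, 1.2, -0.3, 0.9, 0.0])   # assumed rough profile
print("MFPT, flat profile :", mfpt_chain(flat))
print("MFPT, rough profile:", mfpt_chain(rough))
```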
1111.2091
1
We investigate two methods for reducing estimation error in portfolio optimization with Conditional Value-at-Risk (CVaR ). The first method is nonparametric : penalize portfolios with large variances in mean and CVaR estimations. The penalized problem is solvable by a quadratically-constrained quadratic program, and can be interpreted as a chance-constrained program. We show the original and penalized solutions follow the Central Limit Theorem with computable covariance by extending M-estimation results from statistics. The second method is parametric : solve the empirical Markowitz problem instead if the log-return distribution is in the elliptical family (which includes Gaussian and t distributions), as then the population frontiers of the Markowitz and mean-CVaR problems are equivalent . Numerical simulations show both methods improve upon the empirical mean-CVaR solution under an elliptical model, with the Markowitz solution dominating. The penalized solution dominates under a non-elliptical model with heavy one-sided loss .
We introduce performance-based regularization (PBR), a new approach to addressing estimation risk in data-driven optimization, to mean-CVaR portfolio optimization. We assume the available log-return data is iid, and detail the approach for two cases: nonparametric and parametric (the log-return distribution belongs in the elliptical family ). The nonparametric PBR method penalizes portfolios with large variability in mean and CVaR estimations. The parametric PBR method solves the empirical Markowitz problem instead of the empirical mean-CVaR problem, as the solutions of the Markowitz and mean-CVaR problems are equivalent when the log-return distribution is elliptical. We derive the asymptotic behavior of the nonparametric PBR solution, which leads to insight into the effect of penalization, and justification of the parametric PBR method. We also show via simulations that the PBR methods produce efficient frontiers that are, on average, closer to the population efficient frontier than the empirical approach to the mean-CVaR problem, with less variability .
[ { "type": "R", "before": "investigate two methods for reducing estimation error in portfolio optimization with Conditional Value-at-Risk (CVaR", "after": "introduce performance-based regularization (PBR), a new approach to addressing estimation risk in data-driven optimization, to mean-CVaR portfolio optimization. We assume the available log-return data is iid, and detail the approach for two cases: nonparametric and parametric (the log-return distribution belongs in the elliptical family", "start_char_pos": 3, "end_char_pos": 119 }, { "type": "R", "before": "first method is nonparametric : penalize", "after": "nonparametric PBR method penalizes", "start_char_pos": 127, "end_char_pos": 167 }, { "type": "R", "before": "variances", "after": "variability", "start_char_pos": 190, "end_char_pos": 199 }, { "type": "R", "before": "penalized problem is solvable by a quadratically-constrained quadratic program, and can be interpreted as a chance-constrained program. We show the original and penalized solutions follow the Central Limit Theorem with computable covariance by extending M-estimation results from statistics. The second method is parametric : solve", "after": "parametric PBR method solves", "start_char_pos": 234, "end_char_pos": 565 }, { "type": "R", "before": "if the log-return distribution is in the elliptical family (which includes Gaussian and t distributions), as then the population frontiers", "after": "of the empirical mean-CVaR problem, as the solutions", "start_char_pos": 606, "end_char_pos": 744 }, { "type": "R", "before": ". Numerical simulations show both methods improve upon the empirical", "after": "when the log-return distribution is elliptical. We derive the asymptotic behavior of the nonparametric PBR solution, which leads to insight into the effect of penalization, and justification of the parametric PBR method. We also show via simulations that the PBR methods produce efficient frontiers that are, on average, closer to the population efficient frontier than the empirical approach to the", "start_char_pos": 800, "end_char_pos": 868 }, { "type": "R", "before": "solution under an elliptical model, with the Markowitz solution dominating. The penalized solution dominates under a non-elliptical model with heavy one-sided loss", "after": "problem, with less variability", "start_char_pos": 879, "end_char_pos": 1042 } ]
[ 0, 229, 369, 525, 954 ]
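The empirical mean-CVaR problem that the record's PBR methods regularize can be written as the standard Rockafellar-Uryasev linear program. Below is a minimal Python sketch of that unregularized baseline on simulated returns; the return model, the CVaR level and the long-only constraint are assumptions, and the paper's penalization terms are not implemented here.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
S, n, beta = 500, 4, 0.95                       # scenarios, assets, CVaR level
R = rng.normal([0.0008, 0.0006, 0.0004, 0.0002], 0.01, size=(S, n))  # toy returns

mu_hat = R.mean(axis=0)
target = mu_hat @ np.full(n, 1.0 / n)           # demand at least the 1/n mean return

# Decision vector z = (w, alpha, u); minimise alpha + sum(u) / ((1 - beta) S),
# the Rockafellar-Uryasev LP form of the empirical CVaR of the loss -R w.
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - beta) * S))])
A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])      # u_s >= -R_s w - alpha
b_ub = np.zeros(S)
A_ub = np.vstack([A_ub, np.concatenate([-mu_hat, [0.0], np.zeros(S)])])
b_ub = np.append(b_ub, -target)                           # mean-return constraint
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)])[None, :]
b_eq = [1.0]                                              # budget constraint
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S  # long-only, alpha free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("weights:", res.x[:n].round(3), " empirical CVaR:", round(res.fun, 5))
```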
1111.2462
1
Density expansions for hypoelliptic diffusions (X^1,...,X^d) are revisited. In particular, we are interested in density expansions of the projection (X_T^1,...,X_T^l), at time T>0, with l \leq d. Global conditions are found which replace the well-known "not-in-cutlocus" condition known from heat-kernel asymptotics ; cf. G. Ben Arous (1988). Our small noise expansion allows for a "second order" exponential factor. Applications include tail and implied volatility asymptotics in some correlated stochastic volatility models ; in particular, we solve a problem left open by A. Gulisashvili and E.M. Stein (2009) .
Density expansions for hypoelliptic diffusions (X^1,...,X^d) are revisited. In particular, we are interested in density expansions of the projection (X_T^1,...,X_T^l), at time T>0, with l \leq d. Global conditions are found which replace the well-known "not-in-cutlocus" condition known from heat-kernel asymptotics . Our small noise expansion allows for a "second order" exponential factor. As application, new light is shed on the Takanobu--Watanabe expansion of Brownian motion and Levy's stochastic area. Further applications include tail and implied volatility asymptotics in some stochastic volatility models , discussed in a companion paper .
[ { "type": "R", "before": "; cf. G. Ben Arous (1988).", "after": ".", "start_char_pos": 316, "end_char_pos": 342 }, { "type": "R", "before": "Applications", "after": "As application, new light is shed on the Takanobu--Watanabe expansion of Brownian motion and Levy's stochastic area. Further applications", "start_char_pos": 417, "end_char_pos": 429 }, { "type": "D", "before": "correlated", "after": null, "start_char_pos": 486, "end_char_pos": 496 }, { "type": "R", "before": "; in particular, we solve a problem left open by A. Gulisashvili and E.M. Stein (2009)", "after": ", discussed in a compagnion paper", "start_char_pos": 526, "end_char_pos": 612 } ]
[ 0, 75, 317, 342, 416, 527 ]
1111.2976
1
The inverse first passage time problem asks whether, for a Brownian motion B and a nonnegative random variable \zeta, there exists a time-varying barrier b such that P\{B_s>b(s), \, 0 \le s \le t\}=P\{\zeta>t\}. We study a "smoothed" version of this problem and ask whether there is a "barrier" b such that \mathbb{E}[\exp(-\lambda \int_0^t \psi(B_s - b(s)) \, ds)] =P\{\zeta >t\}, where \lambda is a killing rate parameter and \psi:R \to [0,1] is a non-increasing function. We prove that if \psi is suitably smooth, the function t\mapsto P\{\zeta>t\} is twice continuously differentiable, and the condition 0<-\frac{d \log \mathbb{P}\{\zeta > t\}}{dt}<\lambda holds for the hazard rate of \zeta, then there exists a unique continuously differentiable function b solving the smoothed problem. We show how this result leads to flexible models of default for which it is possible to compute expected values of contingent claims.
The inverse first passage time problem asks whether, for a Brownian motion B and a nonnegative random variable \zeta, there exists a time-varying barrier b such that P\{B_s>b(s), 0\leq s\leq t\}=P\{\zeta>t\}. We study a "smoothed" version of this problem and ask whether there is a "barrier" b such that \mathbb{E}[\exp(-\lambda\int_0^t\psi(B_s-b(s))\,ds)]=P\{\zeta >t\}, where \lambda is a killing rate parameter, and \psi:R\to[0,1] is a nonincreasing function. We prove that if \psi is suitably smooth, the function t\mapsto P\{\zeta>t\} is twice continuously differentiable, and the condition 0<-\frac{d\log\mathbb{P}\{\zeta>t\}}{dt}<\lambda holds for the hazard rate of \zeta, then there exists a unique continuously differentiable function b solving the smoothed problem. We show how this result leads to flexible models of default for which it is possible to compute expected values of contingent claims.
[ { "type": "D", "before": "\\,", "after": null, "start_char_pos": 179, "end_char_pos": 181 }, { "type": "D", "before": "s", "after": null, "start_char_pos": 205, "end_char_pos": 206 }, { "type": "A", "before": null, "after": "\\leq s\\leq", "start_char_pos": 227, "end_char_pos": 227 }, { "type": "D", "before": "\\mathbb{E", "after": null, "start_char_pos": 341, "end_char_pos": 350 }, { "type": "A", "before": null, "after": "\\mathbb{E", "start_char_pos": 401, "end_char_pos": 401 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 503, "end_char_pos": 503 }, { "type": "D", "before": "0,1", "after": null, "start_char_pos": 537, "end_char_pos": 540 }, { "type": "D", "before": "is a non-increasing", "after": null, "start_char_pos": 541, "end_char_pos": 560 }, { "type": "A", "before": null, "after": "is a nonincreasing", "start_char_pos": 570, "end_char_pos": 570 }, { "type": "D", "before": "\\frac{d \\log \\mathbb{P", "after": null, "start_char_pos": 718, "end_char_pos": 740 }, { "type": "A", "before": null, "after": "\\frac{d\\log\\mathbb{P", "start_char_pos": 761, "end_char_pos": 761 } ]
[ 0, 245, 580, 919 ]
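The smoothed survival probability in this record has a direct Feynman-Kac/Monte-Carlo reading: simulate Brownian paths and average exp(-lambda * integral of psi(B_s - b(s))). The sketch below evaluates this forward map for an assumed barrier b, a logistic choice of psi and an assumed killing rate; it does not solve the inverse problem of recovering b from a target law of zeta.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, eps, T, n_steps, n_paths = 5.0, 0.1, 1.0, 400, 20_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
bar = -1.0 - 0.5 * t                      # an assumed barrier b(t), for illustration

def psi(x):                               # smooth, nonincreasing map R -> (0, 1)
    return 1.0 / (1.0 + np.exp(x / eps))  # ~1 below the barrier, ~0 above it

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# P(zeta > t) = E[exp(-lam * int_0^t psi(B_s - b(s)) ds)], integral by trapezoid.
g = psi(B - bar[None, :])
cum = np.cumsum(0.5 * (g[:, 1:] + g[:, :-1]) * dt, axis=1)
survival = np.exp(-lam * cum).mean(axis=0)
print(survival[99::100].round(3))         # estimates at t = 0.25, 0.5, 0.75, 1.0
```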
1111.3033
1
Summary: The extensively overlapping structure of network modules is an increasingly recognized feature of biological networks. Here we introduce a user-friendly implementation of our previous network module determination method, ModuLand , as a plug-in of the widely used Cytoscape program. We show the utility of this approach a.) to identify an extensively overlapping modular structure ; b.) to define a modular core and hierarchy allowing an easier functional annotation ; c.) to identify key nodes of high community centrality, modular overlap or bridgeness in protein structure, protein-protein interaction and metabolic networks. Availability and implementation: The ModuLand Cytoscape plug-in was written in C++, has a JAVA-based graphical interface , can be installed as a single plug-in and can run on Windows, Linux, or Mac OS. The plug-in and its user guide can be downloaded from: URL
Summary: The ModuLand plug-in provides Cytoscape users an algorithm determining a.) extensively overlapping network modules ; b.) several hierarchical layers of modules, where meta-nodes of the higher hierarchical layer represent modules of the lower layer ; c.) module cores predicting the function of the whole module; and d.) key nodes bridging two or multiple modules in complex networks. The plug-in was written in C++, has a detailed JAVA-based graphical interface with various colouring options , can be installed as a single file and can run on Windows, Linux, or Mac OS. We demonstrate its use on protein structure and metabolic networks. Availability: The plug-in and its user guide can be downloaded freely from: URL
[ { "type": "R", "before": "extensively overlapping structure of network modules is an increasingly recognized feature of biological networks. Here we introduce a user-friendly implementation of our previous network module determination method, ModuLand , as a", "after": "ModuLand", "start_char_pos": 13, "end_char_pos": 245 }, { "type": "R", "before": "of the widely used Cytoscape program. We show the utility of this approach", "after": "provides Cytoscape users an algorithm determining", "start_char_pos": 254, "end_char_pos": 328 }, { "type": "R", "before": "to identify an extensively overlapping modular structure", "after": "extensively overlapping network modules", "start_char_pos": 333, "end_char_pos": 389 }, { "type": "R", "before": "to define a modular core and hierarchy allowing an easier functional annotation", "after": "several hierarchical layers of modules, where meta-nodes of the higher hierarchical layer represent modules of the lower layer", "start_char_pos": 396, "end_char_pos": 475 }, { "type": "R", "before": "to identify key nodes of high community centrality, modular overlap or bridgeness in protein structure, protein-protein interaction and metabolic networks. Availability and implementation: The ModuLand Cytoscape", "after": "module cores predicting the function of the whole module; and d.) key nodes bridging two or multiple modules in complex networks. The", "start_char_pos": 482, "end_char_pos": 693 }, { "type": "A", "before": null, "after": "detailed", "start_char_pos": 728, "end_char_pos": 728 }, { "type": "A", "before": null, "after": "with various colouring options", "start_char_pos": 760, "end_char_pos": 760 }, { "type": "R", "before": "plug-in", "after": "file", "start_char_pos": 792, "end_char_pos": 799 }, { "type": "A", "before": null, "after": "We demonstrate its use on protein structure and metabolic networks. Availability:", "start_char_pos": 842, "end_char_pos": 842 }, { "type": "A", "before": null, "after": "freely", "start_char_pos": 892, "end_char_pos": 892 } ]
[ 0, 127, 291, 391, 477, 637, 841 ]
1111.3033
2
Summary: The ModuLand plug-in provides Cytoscape users an algorithm determining a.) extensively overlapping network modules ; b. ) several hierarchical layers of modules, where meta-nodes of the higher hierarchical layer represent modules of the lower layer ; c. ) module corespredicting the function of the whole module ; and d.) key nodes bridging two or multiple modules in complex networks . The plug-in was written in C++, has a detailed JAVA-based graphical interface with various colouring options , can be installed as a single file and can run on Windows, Linux, or Mac OS. We demonstrate its use on protein structure and metabolic networks. Availability: The plug-in and its user guide can be downloaded freely from: URL
Summary: The ModuLand plug-in provides Cytoscape users an algorithm for determining extensively overlapping network modules . Moreover, it identifies several hierarchical layers of modules, where meta-nodes of the higher hierarchical layer represent modules of the lower layer . The tool assigns module cores, which predict the function of the whole module , and determines key nodes bridging two or multiple modules . The plug-in has a detailed JAVA-based graphical interface with various colouring options . The ModuLand tool can run on Windows, Linux, or Mac OS. We demonstrate its use on protein structure and metabolic networks. Availability: The plug-in and its user guide can be downloaded freely from: URL Contact: [email protected] Supplementary information: Supplementary information is available at Bioinformatics online.
[ { "type": "R", "before": "determining a.)", "after": "for determining", "start_char_pos": 68, "end_char_pos": 83 }, { "type": "R", "before": "; b. )", "after": ". Moreover, it identifies", "start_char_pos": 124, "end_char_pos": 130 }, { "type": "R", "before": "; c. ) module corespredicting", "after": ". The tool assigns module cores, which predict", "start_char_pos": 258, "end_char_pos": 287 }, { "type": "R", "before": "; and d.)", "after": ", and determines", "start_char_pos": 321, "end_char_pos": 330 }, { "type": "D", "before": "in complex networks", "after": null, "start_char_pos": 374, "end_char_pos": 393 }, { "type": "D", "before": "was written in C++,", "after": null, "start_char_pos": 408, "end_char_pos": 427 }, { "type": "R", "before": ", can be installed as a single file and can", "after": ". The ModuLand tool can", "start_char_pos": 505, "end_char_pos": 548 }, { "type": "A", "before": null, "after": "Contact: [email protected] Supplementary information: Supplementary information is available at Bioinformatics online.", "start_char_pos": 731, "end_char_pos": 731 } ]
[ 0, 125, 259, 322, 395, 582, 650 ]
1111.3856
1
This paper considers exponential utility indifference pricing for a multidimensional non-traded assets model and provides two approximations for the utility indifference price : a linear approximation by Picard iteration and a semigroup approximation by splitting techniques . The key tool is the probabilistic representation for the utility indifference price by the solution of fully coupled linear forward-backward stochastic differential equations . We apply our methodology to study the counterparty risk of derivatives in incomplete markets.
This paper considers exponential utility indifference pricing for a multidimensional non-traded assets model subject to inter-temporal default risk, and provides a semigroup approximation for the utility indifference price . The key tool is the splitting method . We apply our methodology to study the counterparty risk of derivatives in incomplete markets.
[ { "type": "R", "before": "and provides two approximations", "after": "subject to inter-temporal default risk, and provides a semigroup approximation", "start_char_pos": 109, "end_char_pos": 140 }, { "type": "D", "before": ": a linear approximation by Picard iteration and a semigroup approximation by splitting techniques", "after": null, "start_char_pos": 176, "end_char_pos": 274 }, { "type": "R", "before": "probabilistic representation for the utility indifference price by the solution of fully coupled linear forward-backward stochastic differential equations", "after": "splitting method", "start_char_pos": 297, "end_char_pos": 451 } ]
[ 0, 276, 453 ]
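The record's semigroup approximation rests on operator splitting: approximate e^{t(A+B)} by alternating the flows of A and B. A generic, minimal illustration of this first-order (Lie) splitting on matrix generators is below; the matrices are arbitrary stand-ins, not the paper's pricing operators. The printed errors decay like O(dt), which is the convergence the splitting technique delivers.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 6
A = rng.normal(size=(n, n)) / n   # assumed generators, stand-ins for the
B = rng.normal(size=(n, n)) / n   # operators split in the semigroup scheme
u0 = rng.normal(size=n)
T = 1.0

exact = expm((A + B) * T) @ u0
for N in (10, 100, 1000):
    dt = T / N
    step = expm(B * dt) @ expm(A * dt)    # Lie (first-order) splitting step
    u = u0.copy()
    for _ in range(N):
        u = step @ u
    print(N, np.linalg.norm(u - exact))   # error shrinks roughly like 1/N
```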
1111.3885
1
We prove that for locally bounded processes, the absence of arbitrage of the first kind is equivalent to the existence of a dominating local martingale measure. This is related to results from the theory of filtration enlargements.
We prove that , for locally bounded processes, absence of arbitrage opportunities of the first kind is equivalent to the existence of a dominating local martingale measure. This is related to and motivated by results from the theory of filtration enlargements.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 14, "end_char_pos": 14 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 46, "end_char_pos": 49 }, { "type": "A", "before": null, "after": "opportunities", "start_char_pos": 71, "end_char_pos": 71 }, { "type": "A", "before": null, "after": "and motivated by", "start_char_pos": 182, "end_char_pos": 182 } ]
[ 0, 162 ]
1111.4298
1
We study dynamic pricing mechanisms of European contingent claims under uncertainty by using G framework introduced by Peng ( 2005 ). We consider a financial market consists of a riskless asset and a risky stock with price process modelled by a geometric generalized G-Brownian motion, which features the drift uncertainty and volatility uncertainty of the stock price process. A time consistent G-expectation is defined by the viscosity solution of the G-heat equation. Using the time consistent G-expectation we define the G dynamic pricing mechanism for the claim. We prove that G dynamic pricing mechanism is the bid-ask Markovian dynamic pricing mechanism. The full nonlinear PDE is derived to describe the bid (resp. ask) price dynamic of the claims. Monotone characteristic finite difference schemes for the nonlinear PDE are given, and the simulations of the bid (resp. ask) prices of contingent claims with uncertainty are implemented.
We study time consistent dynamic pricing mechanisms of European contingent claims under uncertainty by using G framework introduced by Peng ( 24 ). We consider a financial market consisting of a riskless asset and a risky stock with price process modelled by a geometric generalized G-Brownian motion, which features the drift uncertainty and volatility uncertainty of the stock price process. Using the techniques on G-framework we show that the risk premium of the asset is uncertain and distributed with maximum distribution. A time consistent G-expectation is defined by the viscosity solution of the G-heat equation. Using the time consistent G-expectation we define the G dynamic pricing mechanism for the claim. We prove that G dynamic pricing mechanism is the bid-ask Markovian dynamic pricing mechanism. The full nonlinear PDE is derived to describe the bid (resp. ask) price process of the claim. Monotone implicit characteristic finite difference schemes for the nonlinear PDE are given, nonlinear iterative schemes are constructed, and the simulations of the bid (resp. ask) prices of contingent claims under uncertainty are implemented.
[ { "type": "A", "before": null, "after": "time consistent", "start_char_pos": 9, "end_char_pos": 9 }, { "type": "R", "before": "2005", "after": "24", "start_char_pos": 127, "end_char_pos": 131 }, { "type": "R", "before": "consists", "after": "consisting", "start_char_pos": 166, "end_char_pos": 174 }, { "type": "A", "before": null, "after": "Using the techniques on G-framework we show that the risk premium of the asset is uncertain and distributed with maximum distribution.", "start_char_pos": 379, "end_char_pos": 379 }, { "type": "R", "before": "dynamic of the claims. Monotone", "after": "process of the claim. Monotone implicit", "start_char_pos": 736, "end_char_pos": 767 }, { "type": "A", "before": null, "after": "nonlinear iterative schemes are constructed,", "start_char_pos": 842, "end_char_pos": 842 }, { "type": "R", "before": "with", "after": "under", "start_char_pos": 914, "end_char_pos": 918 } ]
[ 0, 134, 378, 472, 569, 663, 758 ]
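A minimal explicit monotone finite-difference scheme for the G-heat equation u_t = G(u_xx) = sup over sigma in [sig_lo, sig_hi] of (1/2) sigma^2 u_xx is sketched below; it computes the upper (ask) G-expectation of a non-convex payoff. The volatility band, grid and payoff are assumptions, and the paper uses implicit monotone schemes rather than this explicit variant.

```python
import numpy as np

sig_lo, sig_hi = 0.2, 0.4                 # assumed volatility-uncertainty band
x = np.linspace(-4.0, 4.0, 401)
dx = x[1] - x[0]
T = 1.0
dt = 0.4 * dx**2 / sig_hi**2              # CFL bound keeps the explicit scheme monotone
n_steps = int(np.ceil(T / dt)); dt = T / n_steps

u = np.maximum(1.0 - np.abs(x), 0.0)      # butterfly payoff: non-convex, so the
                                          # worst-case sigma genuinely switches
for _ in range(n_steps):
    uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    sig2 = np.where(uxx >= 0.0, sig_hi**2, sig_lo**2)   # realises G(u_xx)
    u[1:-1] += dt * 0.5 * sig2 * uxx
    u[0], u[-1] = u[1], u[-2]             # flat extrapolation at the far boundaries

print("upper (ask) price at x=0:", round(float(u[x.size // 2]), 4))
```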
1111.4624
1
Powerful spectrum decision schemes enable cognitive radios (CRs) to find transmission opportunities in spectral resources allocated exclusively to the primary users. One of the key factors affecting the CR network throughput is the spectrum sensing sequence used by each secondary user. In this paper, secondary users' throughput maximization through finding an appropriate sensing matrix (SM) is investigated. To this end, first , the average throughput of the CR network is evaluated for a given SM. Then, an optimization problem based on the maximization of the network throughput is formulated in order to find the optimal SM. As the optimum solution is very complicated, to avoid its major challenges, three novel sub optimum solutions for finding an appropriate SM are proposed for various cases including perfect and non-perfect sensing. Despite having less computational complexity as well as lower consumed energy for finding a transmission opportunity , the proposed solutions perform quite well compared to the optimum solution (the optimum SM). The first algorithm, for instance, enables the SUs to reach 99.19\% of the throughput obtained by the optimal SM while providing an acceptable level of fairness among the SUs. The structure and performance of the proposed SM setting schemes are discussed in detail and a set of illustrative simulation results is presented to validate their performance .
Powerful spectrum decision schemes enable cognitive radios (CRs) to find transmission opportunities in spectral resources allocated exclusively to the primary users. One of the key factors affecting the CR network throughput is the spectrum sensing sequence used by each secondary user. In this paper, secondary users' throughput maximization through finding an appropriate sensing matrix (SM) is investigated. To this end, first the average throughput of the CR network is evaluated for a given SM. Then, an optimization problem based on the maximization of the network throughput is formulated in order to find the optimal SM. As the optimum solution is very complicated, to avoid its major challenges, three novel sub optimum solutions for finding an appropriate SM are proposed for various cases including perfect and non-perfect sensing. Despite having less computational complexities as well as lower consumed energies , the proposed solutions perform quite well compared to the optimum solution (the optimum SM). The structure and performance of the proposed SM setting schemes are discussed in detail and a set of illustrative simulation results is presented to validate their efficiencies .
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 432, "end_char_pos": 433 }, { "type": "R", "before": "complexity", "after": "complexities", "start_char_pos": 884, "end_char_pos": 894 }, { "type": "R", "before": "energy for finding a transmission opportunity", "after": "energies", "start_char_pos": 921, "end_char_pos": 966 }, { "type": "D", "before": "first algorithm, for instance, enables the SUs to reach 99.19\\% of the throughput obtained by the optimal SM while providing an acceptable level of fairness among the SUs. The", "after": null, "start_char_pos": 1066, "end_char_pos": 1241 }, { "type": "R", "before": "performance", "after": "efficiencies", "start_char_pos": 1403, "end_char_pos": 1414 } ]
[ 0, 165, 288, 412, 503, 632, 846, 1061, 1237 ]
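As a toy version of why the sensing order matters for throughput, the sketch below brute-forces the best sensing sequence for a single secondary user that transmits on the first idle channel it finds, with each sensing step costing tau of the frame. The idle probabilities, frame timing and error-free sensing are assumptions; the record's problem is the much harder multi-user sensing-matrix design with imperfect sensing.

```python
import numpy as np
from itertools import permutations

p_idle = np.array([0.7, 0.5, 0.3, 0.2])   # assumed per-channel idle probabilities
T_frame, tau, rate = 1.0, 0.05, 1.0       # frame length, sensing time, rate (assumed)

def expected_throughput(order):
    thr, p_reach = 0.0, 1.0               # p_reach: prob. all earlier channels busy
    for k, ch in enumerate(order, start=1):
        thr += p_reach * p_idle[ch] * rate * (T_frame - k * tau)
        p_reach *= 1.0 - p_idle[ch]
    return thr

best = max(permutations(range(len(p_idle))), key=expected_throughput)
print("best sensing order:", best,
      " throughput:", round(expected_throughput(best), 4))
```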
1111.6038
1
In this paper we introduce and study the concept of optimal and surely optimal dual martingales in the context of dual valuation of Bermudan options, and outline the development of new algorithms in this context. We provide a characterization theorem, a theorem which gives conditions for a martingale to be surely optimal, and a stability theorem concerning martingales which are near to be surely optimal in a sense. Guided by these results we develop a framework of backward algorithms for constructing such a martingale. In turn this martingale may then be utilized for computing an upper bound of the Bermudan product. The methodology is pure dual in the sense that it doesn't require certain (input ) approximations to the Snell envelope. In an It\^o-L\'evy environment we outline a particular regression based backward algorithm which allows for computing dual upper bounds without nested Monte Carlo simulation. Moreover, as a by-product this algorithm also provides approximations to the continuation values of the product, which in turn determine a stopping policy. Hence, we may obtain lower bounds at the same time. In a first numerical study we demonstrate a backward dual regression algorithm in a Wiener environment that is easy to implement and is regarding accuracy comparable with the method of Belomestny et al. (2009) .
In this paper we introduce and study the concept of optimal and surely optimal dual martingales in the context of dual valuation of Bermudan options, and outline the development of new algorithms in this context. We provide a characterization theorem, a theorem which gives conditions for a martingale to be surely optimal, and a stability theorem concerning martingales which are near to be surely optimal in a sense. Guided by these results we develop a framework of backward algorithms for constructing such a martingale. In turn this martingale may then be utilized for computing an upper bound of the Bermudan product. The methodology is pure dual in the sense that it doesn't require certain input approximations to the Snell envelope. In an It\^o-L\'evy environment we outline a particular regression based backward algorithm which allows for computing dual upper bounds without nested Monte Carlo simulation. Moreover, as a by-product this algorithm also provides approximations to the continuation values of the product, which in turn determine a stopping policy. Hence, we may obtain lower bounds at the same time. In a first numerical study we demonstrate the backward dual regression algorithm in a Wiener environment at well known benchmark examples. It turns out that the method is at least comparable to the one in Belomestny et al. (2009) regarding accuracy, but regarding computational robustness there are even several advantages .
[ { "type": "R", "before": "(input )", "after": "input", "start_char_pos": 698, "end_char_pos": 706 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 1170, "end_char_pos": 1171 }, { "type": "R", "before": "that is easy to implement and is regarding accuracy comparable with the method of", "after": "at well known benchmark examples. It turns out that the method is at least comparable to the one in", "start_char_pos": 1231, "end_char_pos": 1312 }, { "type": "A", "before": null, "after": "regarding accuracy, but regarding computational robustness there are even several advantages", "start_char_pos": 1339, "end_char_pos": 1339 } ]
[ 0, 212, 418, 524, 623, 744, 919, 1075, 1127 ]
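The record notes that its backward regression algorithm also yields continuation-value approximations, and hence a stopping policy and lower bounds. The standard primal counterpart of that idea is Longstaff-Schwartz regression, sketched below for a Bermudan put under assumed Black-Scholes dynamics; the paper's dual (upper-bound) construction is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # assumed market parameters
n_ex, n_paths = 10, 50_000                          # exercise dates, paths
dt = T / n_ex; disc = np.exp(-r * dt)

# GBM paths sampled at the exercise dates t = dt, 2*dt, ..., T.
z = rng.normal(size=(n_paths, n_ex))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)           # Bermudan put
V = payoff(S[:, -1])
for ti in range(n_ex - 2, -1, -1):
    V *= disc                                       # discount continuation values
    itm = payoff(S[:, ti]) > 0                      # regress on in-the-money paths
    X = S[itm, ti]
    coeffs = np.polyfit(X, V[itm], deg=3)
    cont = np.polyval(coeffs, X)
    ex = payoff(X) > cont                           # exercise where immediate > cont.
    V[itm] = np.where(ex, payoff(X), V[itm])

print("Longstaff-Schwartz (in-sample) lower-bound estimate:", disc * V.mean())
```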
1111.7130
1
DNA based nanostructures built on a long single stranded DNA scaffold, known as DNA origamis, are nowadays the basis of many applications. These applications range from the control of single-molecule chemical reaction networks to URLanization at the nanometer scale of various molecules including proteins and carbon nanotubes. However, many basic questions concerning the mechanisms of formation of the origamis have not been addressed so far. For instance, the robustness of different designs against factors, such as the internal topology, or the influence of the staple pattern, are handled empirically. We have built a model for the folding and melting processes of DNA origamis that is able to reproduce accurately several thermodynamic quantities measurable from UV absorption experiments. The model can also be used to design a new distribution of crossovers that increases the robustness of the DNA template. The model provides predictions among which a few of them have been already successfully verified. Therefore, in spite of its complexity we propose an algorithm that gives the capability to design and fabricate templates with dedicated properties, a necessary step for technological development .
DNA based nanostructures built on a long single stranded DNA scaffold, known as DNA origamis, offer the possibility URLanize various molecules at the nanometer scale in one pot experiments. The folding of the scaffold is guaranteed by the presence of short, single stranded DNA sequences (staples), that hold together separate regions of the scaffold. In this paper, we modelize the annealing-melting properties of these DNA constructions. The model captures important features such as the hysteresis between melting and annealing, as well as the dependence upon the topology of the scaffold. We show that cooperativity between staples is critical to quantitatively explain the folding process of DNA origamis .
[ { "type": "R", "before": "are nowadays the basis of many applications. These applications range from the control of single-molecule chemical reaction networks to URLanization", "after": "offer the possibility URLanize various molecules", "start_char_pos": 94, "end_char_pos": 242 }, { "type": "R", "before": "of various molecules including proteins and carbon nanotubes. However, many basic questions concerning the mechanisms of formation of the origamis have not been addressed so far. For instance, the robustness of different designs against factors,", "after": "in one pot experiments. The folding of the scaffold is guaranteed by the presence of short, single stranded DNA sequences (staples), that hold together separate regions of the scaffold. In this paper, we modelize the annealing-melting properties of these DNA constructions. The model captures important features", "start_char_pos": 266, "end_char_pos": 511 }, { "type": "R", "before": "internal topology, or the influence of the staple pattern, are handled empirically. We have built a model for the folding and melting processes of DNA origamis that is able to reproduce accurately several thermodynamic quantities measurable from UV absorption experiments. The model can also be used to design a new distribution of crossovers that increases the robustness of the DNA template. The model provides predictions among which a few of them have been already successfully verified. Therefore, in spite of its complexity we propose an algorithm that gives the capability to design and fabricate templates with dedicated properties, a necessary step for technological development", "after": "hysteresis between melting and annealing, as well as the dependence upon the topology of the scaffold. We show that cooperativity between staples is critical to quantitatively explain the folding process of DNA origamis", "start_char_pos": 524, "end_char_pos": 1211 } ]
[ 0, 138, 327, 444, 607, 796, 917, 1015 ]
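The cooperativity-between-staples effect can be caricatured by an equilibrium lattice-gas (Ising-like) model of staple occupancy along a circular scaffold, sketched below: a negative neighbour coupling J sharpens and shifts the melting curve relative to independent staples. All thermodynamic numbers are assumptions, and the hysteresis discussed in the record requires the kinetic treatment that this equilibrium Metropolis toy omits.

```python
import numpy as np

rng = np.random.default_rng(6)
M = 200                        # staples along a circular scaffold
dH, dS = -80.0, -0.22          # assumed binding enthalpy/entropy (kcal/mol, kcal/mol/K)
Rgas = 1.987e-3                # gas constant, kcal/(mol K)
Js = (0.0, -3.0)               # neighbour coupling: independent vs cooperative

def occupancy(T, J, sweeps=400):
    n = rng.integers(0, 2, M)          # 1 = staple bound, 0 = unbound
    beta = 1.0 / (Rgas * T)
    dG = dH - T * dS                   # free energy of binding one staple
    for _ in range(sweeps):
        for i in rng.permutation(M):
            nb = n[(i - 1) % M] + n[(i + 1) % M]
            dE = (dG + J * nb) * (1 - 2 * n[i])    # energy change if n[i] flips
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                n[i] ^= 1
    return n.mean()

for T in range(320, 371, 10):          # temperature sweep across the melting region
    print(T, [round(float(occupancy(T, J)), 2) for J in Js])
```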
1112.0045
1
Earlier, we developed a network information flow framework and implemented it as a web application, called ITM Probe. Given a context consisting of one or more user-selected nodes, ITM Probe retrieves other network nodes most related to that context. Although ITM Probe has several desirable features such as requiring neither restriction to subnetwork of interest nor additional and possibly noisy information, it still has a few limitations. For example, users can only query pre-compiled protein interaction networks. Also, manipulating the layout of significant subnetworks is non-trivial. Most importantly, it is difficult to integrate the results of ITM Probe within workflowsinvolving other analysis methods. To resolve these difficulties. we developed CytoITMprobe, a Cytoscape plugin. CytoITMprobe provides access to ITM Probe either through a web server or locally. The input, consisting of desired origins and/or destinations of information and a dissipation coefficient, is specified through a query form. The results are shown as a subnetwork of significant nodes and several summary tables. Users can control the composition and appearance of the subnetwork . Saving results as node and network attributes, CytoITMprobe allows Cytoscape users to manipulate and visualize context-specific information in a manner more flexible than its web predecessor . It also enables seamless integration of ITM Probe results with other Cytoscape plugins having complementary functionality for data analysis.
To provide the Cytoscape users the possibility of integrating ITM Probe into their workflows, we developed CytoITMprobe, a new Cytoscape plugin. CytoITMprobe maintains all the desirable features of ITM Probe and adds additional flexibility not achievable through its web service version. It provides access to ITM Probe either through a web server or locally. The input, consisting of a Cytoscape network, together with the desired origins and/or destinations of information and a dissipation coefficient, is specified through a query form. The results are shown as a subnetwork of significant nodes and several summary tables. Users can control the composition and appearance of the subnetwork and interchange their ITM Probe results with other software tools through tab-delimited files. The main strength of CytoITMprobe is its flexibility. It allows the user to specify as input any Cytoscape network, rather than being restricted to the pre-compiled protein-protein interaction networks available through the ITM Probe web service. Users may supply their own edge weights and directionalities. Consequently, as opposed to ITM Probe web service, CytoITMprobe can be applied to many other domains of network-based research beyond protein-networks . It also enables seamless integration of ITM Probe results with other Cytoscape plugins having complementary functionality for data analysis.
[ { "type": "R", "before": "Earlier, we developed a network information flow framework and implemented it as a web application, called ITM Probe. Given a context consisting of one or more user-selected nodes, ITM Probe retrieves other network nodes most related to that context. Although ITM Probe has several desirable features such as requiring neither restriction to subnetwork of interest nor additional and possibly noisy information, it still has a few limitations. For example, users can only query pre-compiled protein interaction networks. Also, manipulating the layout of significant subnetworks is non-trivial. Most importantly, it is difficult to integrate the results of ITM Probe within workflowsinvolving other analysis methods. To resolve these difficulties.", "after": "To provide the Cytoscape users the possibility of integrating ITM Probe into their workflows,", "start_char_pos": 0, "end_char_pos": 746 }, { "type": "A", "before": null, "after": "new", "start_char_pos": 776, "end_char_pos": 776 }, { "type": "A", "before": null, "after": "maintains all the desirable features of ITM Probe and adds additional flexibility not achievable through its web service version. It", "start_char_pos": 808, "end_char_pos": 808 }, { "type": "A", "before": null, "after": "a Cytoscape network, together with the", "start_char_pos": 903, "end_char_pos": 903 }, { "type": "R", "before": ". Saving results as node and network attributes, CytoITMprobe allows Cytoscape users to manipulate and visualize context-specific information in a manner more flexible than its web predecessor", "after": "and interchange their ITM Probe results with other software tools through tab-delimited files. The main strength of CytoITMprobe is its flexibility. It allows the user to specify as input any Cytoscape network, rather than being restricted to the pre-compiled protein-protein interaction networks available through the ITM Probe web service. Users may supply their own edge weights and directionalities. Consequently, as opposed to ITM Probe web service, CytoITMprobe can be applied to many other domains of network-based research beyond protein-networks", "start_char_pos": 1175, "end_char_pos": 1367 } ]
[ 0, 117, 250, 443, 520, 593, 715, 794, 877, 1020, 1107, 1369 ]
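The information-flow quantity behind ITM Probe-style queries can be illustrated with a damped random walk: expected visit counts from an origin node, where the walker dissipates with probability 1-d per step, solve a single linear system. The toy adjacency matrix and dissipation value below are assumptions; the actual ITM Probe models (with destinations and absorbing states) are richer than this sketch.

```python
import numpy as np

# Toy undirected network (adjacency matrix); node 0 is the information origin.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], float)

d = 0.85                                  # retention per step (dissipation = 1 - d)
P = A / A.sum(axis=1, keepdims=True)      # random-walk transition matrix
n = len(A)

# Expected visits to each node for a walk started at node 0 that evaporates
# with probability 1 - d at every step: v^T = e0^T (I - d P)^{-1}.
e0 = np.zeros(n); e0[0] = 1.0
v = np.linalg.solve((np.eye(n) - d * P).T, e0)
print(np.round(v, 3))
```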
1112.0226
1
In this work we define a multivariate semi-Markov process. We derive an explicit expression for the transition probability of this multivariate semi-Markov process in the discrete time case. We apply this multivariate model to the study of the counterparty credit risk , with regard to correlation in a CDS contract. The financial crisis has stressed the importance of the study of the correlation in the financial market . In this regard, the study of the risk of default of the counterparty in any financial contract has become crucial in the credit risk . Many works has been done to trying to describe the counterparty risk in a CDS contract, but all this work are based on the Markovian approach to risk. In the our opinion this kind of model are too restrictive, because they require that the distribuction function of the waiting times has to be exponential or geometric, for discrete time. In the our model, we describe the evolution of credit rating of the financial subjects like a multivariate semi-Markov model, so we allow for arbitrarily distributed sojourn time. The age state dependency, typical of the semi-Markov environment, allow us to insert the correlation in a dynamical way. In particular, suppose that A is a default-free bondholder and C is the relative firm. The bondholder buy protection against C's default by another defaultable subject, say B the protection seller. Our model describe the evolution of the credit rating of the couple B and C. We admit for simultaneus default of C and B, the single default of C or single default of B .
We consider the problem of constructing an appropriate multivariate model for the study of the counterparty credit risk in credit rating migration problem. For this financial problem different multivariate Markov chain models were proposed. However the markovian assumption may be inappropriate for the study of the dynamic of credit ratings which typically show non markovian like behaviour . In this paper we develop a semi-Markov approach to the study of the counterparty credit risk by defining a new multivariate semi-Markov chain model. Methods are given for computing the transition probabilities, reliability functions and the price of a risky Credit Default Swap .
[ { "type": "R", "before": "In this work we define a multivariate semi-Markov process. We derive an explicit expression for the transition probability of this multivariate semi-Markov process in the discrete time case. We apply this multivariate model to", "after": "We consider the problem of constructing an appropriate multivariate model for", "start_char_pos": 0, "end_char_pos": 226 }, { "type": "R", "before": ", with regard to correlation in a CDS contract. The financial crisis has stressed the importance of", "after": "in credit rating migration problem. For this financial problem different multivariate Markov chain models were proposed. However the markovian assumption may be inappropriate for", "start_char_pos": 269, "end_char_pos": 368 }, { "type": "R", "before": "correlation in the financial market", "after": "dynamic of credit ratings which typically show non markovian like behaviour", "start_char_pos": 386, "end_char_pos": 421 }, { "type": "R", "before": "regard,", "after": "paper we develop a semi-Markov approach to", "start_char_pos": 432, "end_char_pos": 439 }, { "type": "R", "before": "risk of default of the counterparty in any financial contract has become crucial in the credit risk . Many works has been done to trying to describe the counterparty risk in a CDS contract, but all this work are based on the Markovian approach to risk. In the our opinion this kind of model are too restrictive, because they require that the distribuction function of the waiting times has to be exponential or geometric, for discrete time. In the our model, we describe the evolution of credit rating of the financial subjects like a", "after": "counterparty credit risk by defining a new", "start_char_pos": 457, "end_char_pos": 991 }, { "type": "R", "before": "model, so we allow for arbitrarily distributed sojourn time. The age state dependency, typical of the semi-Markov environment, allow us to insert the correlation in a dynamical way. In particular, suppose that A is a default-free bondholder and C is the relative firm. The bondholder buy protection against C's default by another defaultable subject, say B the protection seller. Our model describe the evolution of the credit rating of the couple B and C. We admit for simultaneus default of C and B, the single default of C or single default of B", "after": "chain model. Methods are given for computing the transition probabilities, reliability functions and the price of a risky Credit Default Swap", "start_char_pos": 1017, "end_char_pos": 1565 } ]
[ 0, 58, 190, 316, 423, 558, 709, 897, 1077, 1198, 1285, 1396, 1473 ]
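The record's point that semi-Markov chains allow arbitrarily distributed sojourn times, not just geometric ones, is easy to see by simulation. Below is a univariate toy with two ratings and an absorbing default state, using Weibull-shaped holding times; the transition matrix and sojourn law are assumptions, and the paper's model is multivariate with coupled components.

```python
import numpy as np

rng = np.random.default_rng(7)

states = ["A", "B", "D"]                 # two ratings plus absorbing default
P = np.array([[0.0, 0.9, 0.1],           # embedded jump-chain probabilities
              [0.7, 0.0, 0.3],
              [0.0, 0.0, 1.0]])

def sojourn(i):
    # Non-geometric holding time (could depend on the state i in general).
    return max(1, int(round(rng.weibull(1.5) * 4)))

def simulate(horizon=50):
    s, t, path = 0, 0, []
    while t < horizon and s != 2:
        h = sojourn(s)
        path += [s] * min(h, horizon - t)
        t += h
        s = rng.choice(3, p=P[s])
    path += [2] * (horizon - len(path))  # pad after absorption in default
    return path

paths = np.array([simulate() for _ in range(20_000)])
print("P(default by t):", [round((paths[:, t] == 2).mean(), 3) for t in (10, 25, 49)])
```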
1112.0270
1
Activation cascades are prevalent in cell signalling mechanisms . We study the classic model of linear activation cascades and find that in special but important cases the output of an entire cascade can be represented analytically as a function of the input and a lower incomplete gamma function. We also show that if the inactivation rate of a single component is altered, the change induced at the output is independent of the position in the cascade of the modified component. We use our analytical results to show how one can reduce the number of equations and parameters in ODE models of cell signalling cascades, and how delay differential equation models can sometimes be approximated through the use of simple expressions involving the incomplete gamma function.
Activation cascades are a prevalent feature in cellular mechanisms for signal transduction. Here we study the classic model of linear activation cascades and obtain analytical solutions in terms of lower incomplete gamma functions. We show that in the special but important case of optimal gain cascades (i.e., when all the deactivation rates are identical) the downstream output of an entire cascade can be represented exactly as a single nonlinear module containing an incomplete gamma function with parameters dependent on the input signal as well as the rates and length of the cascade. Our results can be used to represent optimal cascades efficiently by reducing the number of equations and parameters in computational ODE models under a variety of inputs. If the requirement for strict optimality is relaxed (under random deactivation rates), we show that the reduced representation can also reproduce the observed variability of downstream responses. In addition, we show that cascades can be rearranged so that homogeneous blocks can be lumped and represented by incomplete gamma functions. We also illustrate how the reduced representation can be used to fit data; in particular, the length of the cascade appears as a real-valued parameter and can thus be fitted in the same manner as Hill coefficients. Finally, we use our results to show how the output of delay differential equation models can be approximated with the use of simple expressions involving the incomplete gamma function.
[ { "type": "R", "before": "prevalent in cell signalling mechanisms . We", "after": "a prevalent feature in cellular mechanisms for signal transduction. Here we", "start_char_pos": 24, "end_char_pos": 68 }, { "type": "R", "before": "find that in", "after": "obtain analytical solutions in terms of lower incomplete gamma functions. We show that in the", "start_char_pos": 127, "end_char_pos": 139 }, { "type": "R", "before": "cases the", "after": "case of optimal gain cascades (i.e., when all the deactivation rates are identical) the downstream", "start_char_pos": 162, "end_char_pos": 171 }, { "type": "R", "before": "analytically as a function of the input and a lower incomplete gamma function. We also show that if the inactivation rate of a single component is altered, the change induced at the output is independent of the position in the cascade of the modified component. We use our analytical", "after": "exactly as a single nonlinear module containing an incomplete gamma function with parameters dependent on the input signal as well as the rates and length of the cascade. Our results can be used to represent optimal cascades efficiently by reducing the number of equations and parameters in computational ODE models under a variety of inputs. If the requirement for strict optimality is relaxed (under random deactivation rates), we show that the reduced representation can also reproduce the observed variability of downstream responses. In addition, we show that cascades can be rearranged so that homogeneous blocks can be lumped and represented by incomplete gamma functions. We also illustrate how the reduced representation can be used to fit data; in particular, the length of the cascade appears as a real-valued parameter and can thus be fitted in the same manner as Hill coefficients. Finally, we use our", "start_char_pos": 219, "end_char_pos": 502 }, { "type": "R", "before": "one can reduce the number of equations and parameters in ODE models of cell signalling cascades, and how", "after": "the output of", "start_char_pos": 523, "end_char_pos": 627 }, { "type": "R", "before": "sometimes be approximated through", "after": "be approximated with", "start_char_pos": 667, "end_char_pos": 700 } ]
[ 0, 65, 297, 480 ]
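The record's central identity, that for identical rates the cascade output equals the input scaled by a regularized lower incomplete gamma function, can be checked numerically in a few lines. The rate, cascade length and constant input below are assumed values; scipy's gammainc is the regularized lower incomplete gamma.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import gammainc

n, lam, c = 5, 2.0, 1.0   # cascade length, common rate, constant input (assumed)

def rhs(t, x):
    up = np.concatenate(([c], x[:-1]))   # upstream species (x0 = the input)
    return lam * (up - x)                # dx_i/dt = lam * (x_{i-1} - x_i)

t_eval = np.linspace(0.0, 5.0, 50)
sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(n), t_eval=t_eval, rtol=1e-8, atol=1e-10)

analytic = c * gammainc(n, lam * t_eval)         # x_n(t) = c * P(n, lam t)
print(np.max(np.abs(sol.y[-1] - analytic)))      # ~1e-7: the two agree
```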
1112.1393
1
We investigate the possibility that prebiotic homochirality can be achieved through stochastic fluctuations of chiral-selective reaction rate parameters . Specifically, we examine an open network of polymerization reactions, where the reaction rates can undergo stochastic fluctuations about their mean values. Varying both the mean value and the rms dispersion of the relevant reaction rates, we show that moderate to high levels of chiral excess can be achieved . Considering the various unknowns related to prebiotic chemical networks in early Earth and the dependence of reaction rates to environmental properties such as temperature and pressure variations, we argue that homochirality could have been achieved via simple stochastic processes .
We investigate the possibility that prebiotic homochirality can be achieved exclusively through chiral-selective reaction rate parameters without any other explicit mechanism for chiral bias . Specifically, we examine an open network of polymerization reactions, where the reaction rates can have chiral-selective values. The reactions are neither autocatalytic nor do they contain explicit enantiomeric cross-inhibition terms. We are thus investigating how rare a set of chiral-selective reaction rates needs to be in order to generate a reasonable amount of chiral bias. We quantify our results adopting a statistical approach: varying both the mean value and the rms dispersion of the relevant reaction rates, we show that moderate to high levels of chiral excess can be achieved with fairly small chiral bias, below 10\% . Considering the various unknowns related to prebiotic chemical networks in early Earth and the dependence of reaction rates to environmental properties such as temperature and pressure variations, we argue that homochirality could have been achieved from moderate amounts of chiral selectivity in the reaction rates .
[ { "type": "R", "before": "through stochastic fluctuations of", "after": "exclusively through", "start_char_pos": 76, "end_char_pos": 110 }, { "type": "A", "before": null, "after": "without any other explicit mechanism for chiral bias", "start_char_pos": 153, "end_char_pos": 153 }, { "type": "R", "before": "undergo stochastic fluctuations about their mean values. Varying", "after": "have chiral-selective values. The reactions are neither autocatalytic nor do they contain explicit enantiomeric cross-inhibition terms. We are thus investigating how rare a set of chiral-selective reaction rates needs to be in order to generate a reasonable amount of chiral bias. We quantify our results adopting a statistical approach: varying", "start_char_pos": 255, "end_char_pos": 319 }, { "type": "A", "before": null, "after": "with fairly small chiral bias, below 10\\%", "start_char_pos": 465, "end_char_pos": 465 }, { "type": "R", "before": "via simple stochastic processes", "after": "from moderate amounts of chiral selectivity in the reaction rates", "start_char_pos": 718, "end_char_pos": 749 } ]
[ 0, 311 ]
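A drastically simplified stand-in for the record's statistical approach: draw chiral-selective rate constants for the L- and D-channels of a sequential pathway from a common mean with a given rms, and score the resulting enantiomeric excess. The assumption that the short-time yield of an n-mer scales with the product of its step rates keeps the toy analytic; the paper's open polymerization network is far richer.

```python
import numpy as np

rng = np.random.default_rng(8)

def excess(mean_k=1.0, rms=0.05, n_steps=10):
    # Independent chiral-selective rates per step, truncated to stay positive.
    kL = np.clip(rng.normal(mean_k, rms, n_steps), 1e-9, None)
    kD = np.clip(rng.normal(mean_k, rms, n_steps), 1e-9, None)
    # Short-time n-mer yield of a sequential pathway ~ product of step rates,
    # so the enantiomeric excess of the two channels is:
    L, D = np.prod(kL), np.prod(kD)
    return (L - D) / (L + D)

for rms in (0.01, 0.05, 0.20):
    ee = np.abs([excess(rms=rms) for _ in range(5000)])
    print(f"rms={rms:.2f}  mean |ee| = {ee.mean():.3f}")
```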
1112.1510
1
Secondary bone cancer (SBC) is a complex disease characterized by intricate molecular mechanisms. Metastasis of primary breast cancer and prostate cancer cells is the main cause of SBC. Towards our goal of identifying SBC-specific targets , we use a strategy based on protein interactome analysis and gene ontologies. First, we compiled a well-curated dataset of 83 SBC genes(SBCGs). Further, we constructed protein interactome comprising of proteins known to be involved in generic cancer mechanisms and also for those implicated in SBC mechanisms. We hypothesize that these protein interaction networks embody mechanisms and processes that are relevant in generic cancers and those specific to SBC, respectively. We , then, identified targets specific to SBC by combining key cancer-related genes , SBCGs and ontologically significant genes. These targets, apart from being ontologically relevant to SBC , are critical in the topology and dynamics of generic cancer mechanisms. Further, we refined the targets for SBC-specificity by filtering out the genes involved in primary bone cancer , any other type of cancer or any other known disease. Our composite rational strategy involving literature-mined experimental data, graph theoretical analysis and gene ontological studies predicts targets that are more specific and relevant for SBC .
Metastasis is one of the most enigmatic aspects of cancer pathogenesis and is a major cause of cancer-associated mortality. Secondary bone cancer (SBC) is a complex disease caused by metastasis of tumor cells from their primary site and is characterized by intricate interplay of molecular interactions. Identification of targets for multifactorial diseases such as SBC, the most frequent complication of breast and prostate cancers, is a challenge. Towards achieving our aim of identification of targets specific to SBC, we constructed a 'Cancer Genes Network', a representative protein interactome of cancer genes. Using graph theoretical methods, we obtained a set of key genes that are relevant for generic mechanisms of cancers and have a role in biological essentiality. We also compiled a curated dataset of 391 SBC genes from published literature which serves as a basis of ontological correlates of secondary bone cancer. Building on these results, we implement a strategy based on generic cancer genes , SBC genes and gene ontology enrichment method, to obtain a set of targets that are specific to bone metastasis. Through this study, we present an approach for probing one of the major complications in cancers, namely, metastasis. The results on genes that play generic roles in cancer phenotype, obtained by network analysis of 'Cancer Genes Network', have broader implications in understanding the role of molecular regulators in mechanisms of cancers. Specifically, our study provides a set of potential targets that are of ontological and regulatory relevance to secondary bone cancer .
[ { "type": "A", "before": null, "after": "Metastasis is one of the most enigmatic aspects of cancer pathogenesis and is a major cause of cancer-associated mortality.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "A", "before": null, "after": "caused by metastasis of tumor cells from their primary site and is", "start_char_pos": 50, "end_char_pos": 50 }, { "type": "R", "before": "molecular mechanisms. Metastasis of primary breast cancer and prostate cancer cells is the main cause of SBC. Towards our goal of identifying SBC-specific targets , we use a strategy based on protein interactome analysis and gene ontologies. First, we compiled a well-curated dataset of 83 SBC genes(SBCGs). Further, we constructed protein interactome comprising of proteins known to be involved in generic cancer mechanisms and also for those implicated in SBC mechanisms. We hypothesize that these protein interaction networks embody mechanisms and processes that are relevant in generic cancers", "after": "interplay of molecular interactions. Identification of targets for multifactorial diseases such as SBC, the most frequent complication of breast", "start_char_pos": 78, "end_char_pos": 675 }, { "type": "R", "before": "those specific to SBC, respectively. We , then, identified targets specific to SBC by combining key cancer-related genes , SBCGs and ontologically significant genes. These targets, apart from being ontologically relevant to SBC , are critical in the topology and dynamics of generic cancer mechanisms. Further, we refined the targets for SBC-specificity by filtering out the", "after": "prostate cancers, is a challenge. Towards achieving our aim of identification of targets specific to SBC, we constructed a 'Cancer Genes Network', a representative protein interactome of cancer genes. Using graph theoretical methods, we obtained a set of key genes that are relevant for generic mechanisms of cancers and have a role in biological essentiality. We also compiled a curated dataset of 391 SBC", "start_char_pos": 680, "end_char_pos": 1054 }, { "type": "R", "before": "involved in primary bone cancer", "after": "from published literature which serves as a basis of ontological correlates of secondary bone cancer. Building on these results, we implement a strategy based on generic cancer genes", "start_char_pos": 1061, "end_char_pos": 1092 }, { "type": "R", "before": "any other type of cancer or any other known disease. Our composite rational strategy involving literature-mined experimental data, graph theoretical analysis and gene ontological studies predicts", "after": "SBC genes and gene ontology enrichment method, to obtain a set of targets that are specific to bone metastasis. Through this study, we present an approach for probing one of the major complications in cancers, namely, metastasis. The results on genes that play generic roles in cancer phenotype, obtained by network analysis of 'Cancer Genes Network', have broader implications in understanding the role of molecular regulators in mechanisms of cancers. Specifically, our study provides a set of potential", "start_char_pos": 1095, "end_char_pos": 1290 }, { "type": "R", "before": "more specific and relevant for SBC", "after": "of ontological and regulatory relevance to secondary bone cancer", "start_char_pos": 1308, "end_char_pos": 1342 } ]
[ 0, 99, 187, 319, 385, 551, 716, 845, 981, 1147 ]
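One ingredient of the record's pipeline, ranking key genes by graph-theoretic criteria on an interaction network, looks like the following sketch. The edge list and gene names here are illustrative placeholders, not the paper's curated cancer-gene network, and the gene-ontology enrichment step is not shown.

```python
import networkx as nx

# Illustrative interaction network (placeholder edges, not the paper's data).
edges = [("TP53", "MDM2"), ("TP53", "BRCA1"), ("BRCA1", "RAD51"),
         ("MDM2", "CDKN1A"), ("RAD51", "CHEK2"), ("CHEK2", "TP53"),
         ("CDKN1A", "CCND1"), ("CCND1", "CDK4"), ("CDK4", "RB1"), ("RB1", "TP53")]
G = nx.Graph(edges)

# Rank genes by betweenness centrality as one crude proxy for "key" network roles.
bc = nx.betweenness_centrality(G)
for gene, score in sorted(bc.items(), key=lambda kv: -kv[1])[:5]:
    print(gene, round(score, 3))
```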
1112.1607
1
We introduce an innovative theoretical framework for the valuation and replication of derivative transactions between defaultable entities based on the principle of arbitrage freedom. Our framework extends the traditional formulations based on Credit and Debit Valuation Adjustments (CVA and DVA). Depending on how the default contingency is accounted for, we list a total of ten different structuring styles. These include bi-partite structures between a bank and a counterparty, tri-partite structures with one margin lender in addition, quadri-partite structures with two margin lenders and, most importantly, configurations where all derivative transactions are cleared through a Central Counterparty Clearing House (CCP). We compare the various structuring styles under a number of criteria including consistency from an accounting standpoint, counterparty risk hedgeability, numerical complexity, transaction portability upon default, induced behaviour and macro-economic impact of the implied wealth allocation.
We introduce an innovative theoretical framework to model derivative transactions between defaultable entities based on the principle of arbitrage freedom. Our framework extends the traditional formulations based on Credit and Debit Valuation Adjustments (CVA and DVA). Depending on how the default contingency is accounted for, we list a total of ten different structuring styles. These include bipartite structures between a bank and a counterparty, tri-partite structures with one margin lender in addition, quadri-partite structures with two margin lenders and, most importantly, configurations where all derivative transactions are cleared through a Central Counterparty (CCP). We compare the various structuring styles under a number of criteria including consistency from an accounting standpoint, counterparty risk hedgeability, numerical complexity, transaction portability upon default, induced behaviour and macro-economic impact of the implied wealth allocation.
[ { "type": "R", "before": "for the valuation and replication of", "after": "to model", "start_char_pos": 49, "end_char_pos": 85 }, { "type": "R", "before": "bi-partite", "after": "bipartite", "start_char_pos": 424, "end_char_pos": 434 }, { "type": "D", "before": "Clearing House", "after": null, "start_char_pos": 705, "end_char_pos": 719 } ]
[ 0, 183, 297, 409, 726 ]
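For readers unfamiliar with the CVA/DVA terminology in the record above, a common textbook decomposition of a bilateral derivative value reads as follows; this is a generic formulation, not necessarily the paper's own, which covers richer structuring styles:

    V_0 = V_0^{\text{risk-free}} - \text{CVA} + \text{DVA}, \qquad
    \text{CVA} = (1 - R_C)\, \mathbb{E}\!\left[ \int_0^T D(0,t)\, (V_t)^+ \,\mathrm{d}\,\mathrm{PD}_C(t) \right], \qquad
    \text{DVA} = (1 - R_B)\, \mathbb{E}\!\left[ \int_0^T D(0,t)\, (V_t)^- \,\mathrm{d}\,\mathrm{PD}_B(t) \right],

where R_C and R_B are the recovery rates of counterparty and bank, D(0,t) is a discount factor, and PD_C, PD_B are cumulative default probabilities.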
1112.2066
1
We introduce a class of correlated weighted graphs whose properties are meant to mimic the topological features of idiotypic networks, namely the interaction networks involving the B-core of the immune system. Each node is endowed with a bit-string representing the epitopal specificity of the corresponding B cell , a proper distance between any couple of bit-strings provides the coupling strength between the two nodes. By assuming a biased distribution for the entries in bit-strings , we show that such a correlation can yield fringes in the (weighted) connectivity distribution, small- worlds featuresas well as scaling laws in agreement with experimental findings. We also investigate the role of ageing, thought of as a progressive increase in the degree of correlation (specificity) , and we show that it can possibly induce mild percolation phenomena, which are investigated too.
We introduce a class of weighted graphs whose properties are meant to mimic the topological features of idiotypic networks, namely the interaction networks involving the B-core of the immune system. Each node is endowed with a bit-string representing the idiotypic specificity of the corresponding B cell and a proper distance between any couple of bit-strings provides the coupling strength between the two nodes. We show that a biased distribution of the entries in bit-strings can yield fringes in the (weighted) degree distribution, small-worlds features, and scaling laws, in agreement with experimental findings. We also investigate the role of ageing, thought of as a progressive increase in the degree of bias in bit-strings , and we show that it can possibly induce mild percolation phenomena, which are investigated too.
[ { "type": "D", "before": "correlated", "after": null, "start_char_pos": 24, "end_char_pos": 34 }, { "type": "R", "before": "epitopal", "after": "idiotypic", "start_char_pos": 266, "end_char_pos": 274 }, { "type": "R", "before": ",", "after": "and", "start_char_pos": 315, "end_char_pos": 316 }, { "type": "R", "before": "By assuming", "after": "We show that", "start_char_pos": 423, "end_char_pos": 434 }, { "type": "R", "before": "for", "after": "of", "start_char_pos": 457, "end_char_pos": 460 }, { "type": "D", "before": ", we show that such a correlation", "after": null, "start_char_pos": 488, "end_char_pos": 521 }, { "type": "R", "before": "connectivity distribution, small- worlds featuresas well as scaling laws", "after": "degree distribution, small-worlds features, and scaling laws,", "start_char_pos": 558, "end_char_pos": 630 }, { "type": "R", "before": "correlation (specificity)", "after": "bias in bit-strings", "start_char_pos": 766, "end_char_pos": 791 } ]
[ 0, 209, 422, 671 ]
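A minimal sketch of how such a weighted graph can be generated from biased bit-strings follows. The complementarity-based coupling rule, the bias value and the sizes are illustrative assumptions, not the paper's exact specification.

    import itertools, random

    L, N, p = 8, 50, 0.7   # bit-string length, number of nodes, bias toward 1-entries (assumed)
    random.seed(0)
    nodes = [tuple(int(random.random() < p) for _ in range(L)) for _ in range(N)]

    def coupling(u, v):
        # Illustrative rule: idiotypic interactions favour complementary strings,
        # so the weight grows with the number of mismatched (complementary) bits.
        mismatches = sum(a != b for a, b in zip(u, v))
        return max(mismatches - L // 2, 0)  # only sufficiently complementary pairs link

    edges = [(i, j, w)
             for (i, u), (j, v) in itertools.combinations(enumerate(nodes), 2)
             if (w := coupling(u, v)) > 0]
    weighted_degree = [sum(w for i, j, w in edges if k in (i, j)) for k in range(N)]
    print(sorted(weighted_degree, reverse=True)[:10])  # tail of the weighted degree distribution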
1112.2317
1
I consider the microscopic mechanisms by which a particular left-right (L/R) asymmetry is generated at organism level from the microscopic handedness of cytoskeletal molecules. In light of a fundamental symmetry principle, the typical pattern-formation mechanisms of diffusion plus regulation cannot implement the "right-hand rule"; in fact, the cytoskeleton (made of helical fibers) seems always to be involved, usually in collective states induced by molecular motors. I detail a possible scenario involving actin/myosin layers in snails (and in C. elegans ), and outline other mechanisms which might govern handedness in eukaryote cell motility and in plants .
I consider the microscopic mechanisms by which a particular left-right (L/R) asymmetry is generated at organism level from the microscopic handedness of cytoskeletal molecules. In light of a fundamental symmetry principle, the typical pattern-formation mechanisms of diffusion plus regulation cannot implement the "right-hand rule"; at the microscopic level, the cell's cytoskeleton of chiral filaments seems always to be involved, usually in collective states driven by polymerization forces or molecular motors. It seems particularly easy for handedness to emerge in a shear or rotation in the background of an effectively two-dimensional system, such as the cell membrane or a layer of cells, as this requires no pre-existing axis apart from the layer normal. I detail a scenario involving actin/myosin layers in snails and in C. elegans , and also one about the microtubule layer in plant cells. I also survey the other examples that I am aware of, such as the emergence of handedness in neurons, in eukaryote cell motility , and in non-flagellated bacteria .
[ { "type": "R", "before": "in fact, the cytoskeleton (made of helical fibers)", "after": "at the microscopic level, the cell's cytoskeleton of chiral filaments", "start_char_pos": 333, "end_char_pos": 383 }, { "type": "R", "before": "induced by", "after": "driven by polymerization forces or", "start_char_pos": 442, "end_char_pos": 452 }, { "type": "A", "before": null, "after": "It seems particularly easy for handedness to emerge in a shear or rotation in the background of an effectively two-dimensional system, such as the cell membrane or a layer of cells, as this requires no pre-existing axis apart from the layer normal.", "start_char_pos": 471, "end_char_pos": 471 }, { "type": "D", "before": "possible", "after": null, "start_char_pos": 483, "end_char_pos": 491 }, { "type": "D", "before": "(and in", "after": null, "start_char_pos": 541, "end_char_pos": 548 }, { "type": "A", "before": null, "after": "and in", "start_char_pos": 570, "end_char_pos": 570 }, { "type": "R", "before": "), and outline other mechanisms which might govern handedness in", "after": ", and also one about the microtubule layer in plant cells. I also survey the other examples that I am aware of, such as the emergence of handedness such as the emergence of handedness in neurons, in", "start_char_pos": 582, "end_char_pos": 646 }, { "type": "R", "before": "and in plants", "after": ", and in non-flagellated bacteria", "start_char_pos": 671, "end_char_pos": 684 } ]
[ 0, 176, 332, 470 ]
1112.2939
1
We consider a model of optimal investment and consumption with both habit-formation and partial observations in incomplete Ito processes markets. The individual investor develops addictive consumption habits gradually while he can only observe the market stock prices but not the instantaneous rates of return, which follow Ornstein-Uhlenbeck processes . Applying the Kalman-Bucy filtering theorem and Dynamic Programming arguments, we solve the associated HJB equation fully explicitly for this path dependent stochastic control problem in the case of power utility preferences. We will provide the optimal investment and consumption policies in explicit feedback forms using rigorous verification arguments.
We consider a model of optimal investment and consumption with both habit formation and partial observations in incomplete It\^{o} processes markets. The individual investor develops addictive consumption habits gradually while only observing the market stock prices but not the instantaneous rate of return, which follows an Ornstein-Uhlenbeck process . Applying the Kalman-Bucy filtering theorem and Dynamic Programming arguments, we solve the associated HJB equation explicitly for this path dependent stochastic control problem in the case of power utility preferences. We provide the optimal investment and consumption policies in explicit feedback forms using rigorous verification arguments.
[ { "type": "R", "before": "habit-formation", "after": "habit formation", "start_char_pos": 68, "end_char_pos": 83 }, { "type": "R", "before": "Ito", "after": "It\\^{o", "start_char_pos": 123, "end_char_pos": 126 }, { "type": "R", "before": "he can only observe", "after": "only observing", "start_char_pos": 224, "end_char_pos": 243 }, { "type": "R", "before": "rates", "after": "rate", "start_char_pos": 294, "end_char_pos": 299 }, { "type": "R", "before": "follow", "after": "follows an", "start_char_pos": 317, "end_char_pos": 323 }, { "type": "R", "before": "processes", "after": "process", "start_char_pos": 343, "end_char_pos": 352 }, { "type": "D", "before": "fully", "after": null, "start_char_pos": 470, "end_char_pos": 475 }, { "type": "D", "before": "will", "after": null, "start_char_pos": 583, "end_char_pos": 587 } ]
[ 0, 145, 579 ]
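The filtering step mentioned in this record is standard; below is a minimal Euler discretization of the Kalman-Bucy filter for an Ornstein-Uhlenbeck drift observed only through noisy log-price increments. All parameter values are assumed for illustration and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    T, n = 5.0, 5000
    dt = T / n
    lam, mu_bar, sig_mu, sig = 2.0, 0.05, 0.2, 0.3  # assumed OU and volatility parameters

    mu = np.empty(n + 1); mu[0] = mu_bar  # hidden instantaneous rate of return
    dY = np.empty(n)                      # observed log-price increments
    for k in range(n):
        dY[k] = mu[k] * dt + sig * np.sqrt(dt) * rng.standard_normal()
        mu[k + 1] = mu[k] + lam * (mu_bar - mu[k]) * dt + sig_mu * np.sqrt(dt) * rng.standard_normal()

    # Kalman-Bucy filter: m is the conditional mean of mu, P its conditional variance.
    m, P = mu_bar, 0.0
    for k in range(n):
        m += lam * (mu_bar - m) * dt + (P / sig**2) * (dY[k] - m * dt)
        P += (-2.0 * lam * P + sig_mu**2 - P**2 / sig**2) * dt
    print(m, mu[-1])  # filtered estimate vs. the (unobservable) true drift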
1112.2939
2
We consider a model of optimal investment and consumption with both habit formation and partial observations in incomplete It\^{o} processes markets. The individual investor develops addictive consumption habits gradually while only observing the market stock prices but not the instantaneous rate of return , which follows an Ornstein-Uhlenbeck process . Applying the Kalman-Bucy filtering theorem and Dynamic Programming arguments, we solve the associated HJB equation explicitly for this path dependent stochastic control problem in the case of power utility preferences . We provide the optimal investment and consumption policies in explicit feedback forms using rigorous verification arguments.
We consider a model of optimal investment and consumption with both habit formation and partial observations in incomplete It\^{o} processes market. The investor chooses his consumption under the addictive habits constraint while only observing the market stock prices but not the instantaneous rate of return . Applying the Kalman-Bucy filtering theorem and the Dynamic Programming arguments, we solve the associated Hamilton-Jacobi-Bellman (HJB) equation explicitly for the path dependent stochastic control problem in the case of power utilities . We provide the optimal investment and consumption policies in explicit feedback forms using rigorous verification arguments.
[ { "type": "R", "before": "markets. The individual investor develops addictive consumption habits gradually", "after": "market. The investor chooses his consumption under the addictive habits constraint", "start_char_pos": 141, "end_char_pos": 221 }, { "type": "D", "before": ", which follows an Ornstein-Uhlenbeck process", "after": null, "start_char_pos": 308, "end_char_pos": 353 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 403, "end_char_pos": 403 }, { "type": "R", "before": "HJB", "after": "Hamilton-Jacobi-Bellman (HJB)", "start_char_pos": 459, "end_char_pos": 462 }, { "type": "R", "before": "this", "after": "the", "start_char_pos": 487, "end_char_pos": 491 }, { "type": "R", "before": "utility preferences", "after": "utilities", "start_char_pos": 555, "end_char_pos": 574 } ]
[ 0, 149, 576 ]
1112.2940
1
This paper studies the problem of continuous time utility maximization of consumption together with addictive habit formation in general incomplete semimartingale financial markets. By introducing the auxiliary state processes and the modified dual space, we embed our original problem into an auxiliary time separable utility maximization problem with the shadow random endowment. We establish existence and uniqueness of the optimal solution using convex duality approach on the product space by defining the primal value function both on the initial wealth and initial habit. We also provide market independent sufficient conditions both on stochastic discounting processes for the habit formation process and on the utility function for the validity of several key assertions of our main results to hold true .
This paper studies continuous time utility maximization of consumption together with addictive habit formation in general incomplete semimartingale financial markets. By introducing the auxiliary state processes and the modified dual space, we embed our original problem into an auxiliary time separable utility maximization problem with the shadow random endowment. We establish existence and uniqueness of the optimal solution using convex duality approach on the product space by defining the primal value function both on the initial wealth and initial habit. We also provide market independent sufficient conditions both on stochastic discounting processes of the habit formation process and on the utility function to modify the convex duality approach when the auxiliary dual process is not necessarily integrable .
[ { "type": "D", "before": "the problem of", "after": null, "start_char_pos": 19, "end_char_pos": 33 }, { "type": "R", "before": "for", "after": "of", "start_char_pos": 677, "end_char_pos": 680 }, { "type": "R", "before": "for the validity of several key assertions of our main results to hold true", "after": "to modify the convex duality approach when the auxiliary dual process is not necessarily integrable", "start_char_pos": 737, "end_char_pos": 812 } ]
[ 0, 181, 381, 578 ]
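For context on "addictive habit formation" as used in this record and in the following revisions of the same abstract, one standard specification (in the style of Detemple and Zapatero; not necessarily the paper's exact model) takes the habit level Z as an exponentially weighted average of past consumption and constrains consumption to stay above it:

    Z_t = z\, e^{-\int_0^t \delta_v \,\mathrm{d}v} + \int_0^t \delta_s\, e^{-\int_s^t \delta_v \,\mathrm{d}v}\, c_s \,\mathrm{d}s, \qquad
    c_t \ge Z_t, \qquad
    \sup_{c}\ \mathbb{E}\!\left[ \int_0^T U\big(t,\, c_t - Z_t\big)\,\mathrm{d}t \right],

so that utility is drawn from consumption in excess of the habit level, and the constraint c \ge Z is what makes the habit "addictive".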
1112.2940
2
This paper studies continuous time utility maximization of consumption together with addictive habit formation in general incomplete semimartingale financial markets. By introducing the auxiliary state processes and the modified dual space, we embed our original problem into an auxiliary time separable utility maximization problem with the shadow random endowment. We establish existence and uniqueness of the optimal solution using convex duality approach on the product space \mathbb{L}_{+}^{0}(\Omega\times[0,T],\mathcal{O},\mathbb{P}) by defining the primal value function both on the initial wealth and initial habit . We also provide market independent sufficient conditions both on stochastic discounting processes of the habit formation process and on the utility function to modify the convex duality approach when the auxiliary dual process is not necessarily integrable.
This paper studies the problem of continuous time expected utility maximization of consumption together with addictive habit formation in general incomplete semimartingale financial markets. Introducing an auxiliary state processes and a modified dual space, we embed our original problem into an auxiliary time-separable utility maximization problem with the shadow random endowment. We establish existence and uniqueness of the optimal solution using convex duality on the product space \mathbb{L}_{+}^{0}(\Omega\times[0,T],\mathcal{O},\mathbb{P}) by defining the primal value function as depending on both the initial wealth and initial standard of living. We also provide market independent sufficient conditions on both stochastic discounting processes of the habit formation process and on the utility function for our original problem to be well posed and to modify the convex duality approach when the auxiliary dual process is not necessarily integrable.
[ { "type": "R", "before": "continuous time", "after": "the problem of continuous time expected", "start_char_pos": 19, "end_char_pos": 34 }, { "type": "R", "before": "By introducing the", "after": "Introducing an", "start_char_pos": 167, "end_char_pos": 185 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 216, "end_char_pos": 219 }, { "type": "R", "before": "time separable", "after": "time-separable", "start_char_pos": 289, "end_char_pos": 303 }, { "type": "D", "before": "approach", "after": null, "start_char_pos": 450, "end_char_pos": 458 }, { "type": "A", "before": null, "after": "\\mathbb{L", "start_char_pos": 480, "end_char_pos": 480 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 520, "end_char_pos": 520 }, { "type": "R", "before": "both on", "after": "as depending on both", "start_char_pos": 559, "end_char_pos": 566 }, { "type": "R", "before": "habit", "after": "standard of living", "start_char_pos": 598, "end_char_pos": 603 }, { "type": "R", "before": "both on", "after": "on both", "start_char_pos": 663, "end_char_pos": 670 }, { "type": "A", "before": null, "after": "for our original problem to be well posed and", "start_char_pos": 763, "end_char_pos": 763 } ]
[ 0, 166, 366, 605 ]
1112.2940
3
This paper studies the problem of continuous time expected utility maximization of consumption together with addictive habit formation in general incomplete semimartingale financial markets. Introducing an auxiliary state processes and a modified dual space, we embed our original problem into an auxiliary time-separable utility maximization problem with the shadow random endowment . We establish existence and uniqueness of the optimal solution using convex duality on the product space \mathbb{L}_{+}^{0}(\Omega\times[0,T],\mathcal{O},\mathbb{P}) by defining the primal value function as depending on both the initial wealth and initial standard of living. We also provide market independent sufficient conditions on both stochastic discounting processes of the habit formation process and on the utility function for our original problem to be well posed and to modify the convex duality approach when the auxiliary dual process is not necessarily integrable.
This paper studies the problem of continuous time expected utility maximization of consumption together with addictive habit formation in general incomplete semimartingale markets. Introducing the set of auxiliary state processes and the modified dual space, we embed our original problem into an abstract time-separable utility maximization problem with a shadow random endowment on the product space \mathbb{L}_{+}^{0}(\Omega\times[0,T],\mathcal{O},\mathbb{P}). We establish existence and uniqueness of the optimal solution using convex duality by defining the primal value function as depending on two variables, i.e., the initial wealth and the initial standard of living. We also provide market independent sufficient conditions both on the stochastic discounting processes and on the utility function for the well-posedness of our original optimization problem. Under the same assumptions, we can carefully modify the classical proofs in the approach of convex duality analysis when the auxiliary dual process is not necessarily integrable.
[ { "type": "D", "before": "financial", "after": null, "start_char_pos": 172, "end_char_pos": 181 }, { "type": "R", "before": "an", "after": "the set of", "start_char_pos": 203, "end_char_pos": 205 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 236, "end_char_pos": 237 }, { "type": "R", "before": "auxiliary", "after": "abstract", "start_char_pos": 297, "end_char_pos": 306 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 356, "end_char_pos": 359 }, { "type": "A", "before": null, "after": "on the product space", "start_char_pos": 384, "end_char_pos": 384 }, { "type": "D", "before": "on the product space \\mathbb{L", "after": null, "start_char_pos": 470, "end_char_pos": 500 }, { "type": "D", "before": ")", "after": null, "start_char_pos": 557, "end_char_pos": 558 }, { "type": "R", "before": "both", "after": "two variables, i.e.,", "start_char_pos": 613, "end_char_pos": 617 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 641, "end_char_pos": 641 }, { "type": "R", "before": "on both", "after": "both on the", "start_char_pos": 727, "end_char_pos": 734 }, { "type": "D", "before": "of the habit formation process", "after": null, "start_char_pos": 768, "end_char_pos": 798 }, { "type": "R", "before": "our original problem to be well posed and to modify the convex duality approach", "after": "the well-posedness of our original optimization problem. Under the same assumptions, we can carefully modify the classical proofs in the approach of convex duality analysis", "start_char_pos": 831, "end_char_pos": 910 } ]
[ 0, 190, 386, 669 ]
1112.2940
4
This paper studies the problem of continuous time expected utility maximization of consumption together with addictive habit formation in general incomplete semimartingale markets. Introducing the set of auxiliary state processes and the modified dual space, we embed our original problem into an abstract time-separable utility maximization problem with a shadow random endowment on the product space \mathbb{L}_{+}^{0}(\Omega\times[0,T],\mathcal{O},\mathbb{P}). We establish existence and uniqueness of the optimal solution using convex duality by defining the primal value function as depending on two variables, i.e., the initial wealth and the initial standard of living. We also provide market independent sufficient conditions both on the stochastic discounting processes and on the utility function for the well-posedness of our original optimization problem. Under the same assumptions, we can carefully modify the classical proofs in the approach of convex duality analysis when the auxiliary dual process is not necessarily integrable.
This paper studies the continuous time utility maximization problem on consumption with addictive habit formation in incomplete semimartingale markets. Introducing the set of auxiliary state processes and the modified dual space, we embed our original problem into a time-separable utility maximization problem with a shadow random endowment on the product space \mathbb{L}_{+}^{0}(\Omega\times[0,T],\mathcal{O},\mathbb{P}). Existence and uniqueness of the optimal solution are established using convex duality approach, where the primal value function is defined on two variables, i.e., the initial wealth and the initial standard of living. We also provide sufficient conditions on the stochastic discounting processes and on the utility function for the well-posedness of the original optimization problem. Under the same assumptions, classical proofs in the approach of convex duality analysis can be modified when the auxiliary dual process is not necessarily integrable.
[ { "type": "R", "before": "problem of continuous time expected utility maximization of consumption together", "after": "continuous time utility maximization problem on consumption", "start_char_pos": 23, "end_char_pos": 103 }, { "type": "D", "before": "general", "after": null, "start_char_pos": 138, "end_char_pos": 145 }, { "type": "R", "before": "an abstract", "after": "a", "start_char_pos": 294, "end_char_pos": 305 }, { "type": "R", "before": ". We establish existence", "after": "\\mathbb{L", "start_char_pos": 402, "end_char_pos": 426 }, { "type": "A", "before": null, "after": "). Existence", "start_char_pos": 466, "end_char_pos": 466 }, { "type": "A", "before": null, "after": "are established", "start_char_pos": 506, "end_char_pos": 506 }, { "type": "R", "before": "by defining", "after": "approach, where", "start_char_pos": 528, "end_char_pos": 539 }, { "type": "R", "before": "as depending", "after": "is defined", "start_char_pos": 566, "end_char_pos": 578 }, { "type": "R", "before": "market independent sufficient conditions both", "after": "sufficient conditions", "start_char_pos": 674, "end_char_pos": 719 }, { "type": "R", "before": "our", "after": "the", "start_char_pos": 814, "end_char_pos": 817 }, { "type": "D", "before": "we can carefully modify the", "after": null, "start_char_pos": 877, "end_char_pos": 904 }, { "type": "A", "before": null, "after": "can be modified", "start_char_pos": 965, "end_char_pos": 965 } ]
[ 0, 180, 403, 657, 848 ]
1112.2940
5
This paper studies the continuous time utility maximization problem on consumption with addictive habit formation in incomplete semimartingale markets. Introducing the set of auxiliary state processes and the modified dual space, we embed our original problem into a time-separable utility maximization problem with a shadow random endowment on the product space \mathbb{L}_{+}^{0}(\Omega\times[0,T],\mathcal{O},\mathbb{P}). Existence and uniqueness of the optimal solution are established using convex duality approach, where the primal value function is defined on two variables, i.e. , the initial wealth and the initial standard of living. We also provide sufficient conditions on the stochastic discounting processes and on the utility function for the well-posedness of the original optimization problem. Under the same assumptions, classical proofs in the approach of convex duality analysis can be modified when the auxiliary dual process is not necessarily integrable.
This paper studies the continuous time utility maximization problem on consumption with addictive habit formation in incomplete semimartingale markets. Introducing the set of auxiliary state processes and the modified dual space, we embed our original problem into a time-separable utility maximization problem with a shadow random endowment on the product space \mathbb{L}_{+}^{0}(\Omega\times[0,T],\mathcal{O},\mathbb{P}). Existence and uniqueness of the optimal solution are established using convex duality approach, where the primal value function is defined on two variables, that is , the initial wealth and the initial standard of living. We also provide sufficient conditions on the stochastic discounting processes and on the utility function for the well-posedness of the original optimization problem. Under the same assumptions, classical proofs in the approach of convex duality analysis can be modified when the auxiliary dual process is not necessarily integrable.
[ { "type": "D", "before": "_{+", "after": null, "start_char_pos": 365, "end_char_pos": 368 }, { "type": "A", "before": null, "after": "_+^0(\\Omega\\times", "start_char_pos": 421, "end_char_pos": 421 }, { "type": "A", "before": null, "after": "0,T", "start_char_pos": 422, "end_char_pos": 422 }, { "type": "A", "before": null, "after": ",\\mathcal{O", "start_char_pos": 423, "end_char_pos": 423 }, { "type": "R", "before": "i.e.", "after": "that is", "start_char_pos": 596, "end_char_pos": 600 } ]
[ 0, 151, 438, 657, 824 ]
1112.3802
1
A detailed stochastic model of single-gene auto-regulation is established and its solutions are explored when mRNA dynamics is fast compared with protein dynamics and in the opposite regime. The model includes all the sources of randomness that are intrinsic to the auto-regulation process . The timescale separation allows the derivation of analytic expressions for the equilibrium distributions of protein and mRNA. These distributions are shown to be well described in the continuous approximation, which is then used to discuss the qualitative features of the protein equilibrium distributions as a function of the biological parameters in the fast mRNA regime .
A detailed stochastic model of single-gene auto-regulation is established and its solutions are explored when mRNA dynamics is fast compared with protein dynamics and in the opposite regime. The model includes all the sources of randomness that are intrinsic to the auto-regulation process and it considers both transcriptional and post transcriptional regulation . The timescale separation allows the derivation of analytic expressions for the equilibrium distributions of protein and mRNA. These distributions are generally well described in the continuous approximation, which is used to discuss the qualitative features of the protein equilibrium distributions as a function of the biological parameters in the fast mRNA regime . The performance of the timescale approximation is assessed by comparison with simulations of the full stochastic system, and a good quantitative agreement is found for a wide range of parameter values. We show that either unimodal or bimodal equilibrium protein distributions can arise, and we discuss the auto-regulation mechanisms associated with bimodality .
[ { "type": "A", "before": null, "after": "and it considers both transcriptional and post transcriptional regulation", "start_char_pos": 290, "end_char_pos": 290 }, { "type": "R", "before": "shown to be", "after": "generally", "start_char_pos": 443, "end_char_pos": 454 }, { "type": "D", "before": "then", "after": null, "start_char_pos": 512, "end_char_pos": 516 }, { "type": "A", "before": null, "after": ". The performance of the timescale approximation is assessed by comparison with simulations of the full stochastic system, and a good quantitative agreement is found for a wide range of parameter values. We show that either unimodal or bimodal equilibrium protein distributions can arise, and we discuss the auto-regulation mechanisms associated with bimodality", "start_char_pos": 666, "end_char_pos": 666 } ]
[ 0, 190, 292, 418 ]
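The kind of "detailed stochastic model" summarized above can be illustrated by an exact stochastic simulation (Gillespie algorithm) of a minimal auto-repressed gene: mRNA synthesis repressed by protein through a Hill function, plus translation and first-order decays. The rates and the Hill form are assumed for illustration, not the paper's calibrated values.

    import numpy as np

    rng = np.random.default_rng(0)
    # Assumed rates: Hill-repressed transcription, translation, first-order decays.
    k_m, k_p, g_m, g_p, K, h = 20.0, 5.0, 10.0, 1.0, 50.0, 2.0

    def propensities(m, p):
        return np.array([k_m / (1.0 + (p / K) ** h),  # mRNA synthesis, auto-repressed
                         g_m * m,                     # mRNA decay (fast: timescale separation)
                         k_p * m,                     # translation
                         g_p * p])                    # protein decay

    changes = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    m, p, t, samples = 0, 0, 0.0, []
    while t < 200.0:
        a = propensities(m, p)
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)
        r = rng.choice(4, p=a / a0)
        m += changes[r][0]
        p += changes[r][1]
        samples.append(p)
    # A histogram of `samples` (after discarding a transient) approximates the
    # equilibrium protein distribution discussed in the abstract.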
1112.3803
1
To what extent do the characteristic features of a chemical reaction network reflect its purpose and function? In general, one argues that correlations between specific features and specific functions are the key to understanding a complex structure. Yet specific features may sometimes be neutral and uncorrelated with any system-specific purpose, function , or causal chain. Such neutral features are caused by chance and randomness. Here we compare two classes of chemical networks: one that has been subject to biological evolution (the chemical reaction network of the metabolism in living cells) and one that has not (the atmospheric planetary chemical reaction networks). Their degree distributions are shown to share the very same neutral system-independent features. The shape of the broad distributions is to large extent controlled by a single parameter, the network size. From this perspective, there is little difference between atmospheric and metabolic networks; they are just different sizes of the same random assembling network. In other words, the shape of the degree distribution is a neutral characteristic feature and has no functional or evolutionary implications in itself; it is not a matter of life and death.
To what extent do the characteristic features of a chemical reaction network reflect its purpose and function? In general, one argues that correlations between specific features and specific functions are key to understanding a complex structure. However, specific features may sometimes be neutral and uncorrelated with any system-specific purpose, function or causal chain. Such neutral features are caused by chance and randomness. Here we compare two classes of chemical networks: one that has been subjected to biological evolution (the chemical reaction network of metabolism in living cells) and one that has not (the atmospheric planetary chemical reaction networks). Their degree distributions are shown to share the very same neutral system-independent features. The shape of the broad distributions is to a large extent controlled by a single parameter, the network size. From this perspective, there is little difference between atmospheric and metabolic networks; they are just different sizes of the same random assembling network. In other words, the shape of the degree distribution is a neutral characteristic feature and has no functional or evolutionary implications in itself; it is not a matter of life and death.
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 205, "end_char_pos": 208 }, { "type": "R", "before": "Yet", "after": "However,", "start_char_pos": 251, "end_char_pos": 254 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 358, "end_char_pos": 359 }, { "type": "R", "before": "subject", "after": "subjected", "start_char_pos": 504, "end_char_pos": 511 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 570, "end_char_pos": 573 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 819, "end_char_pos": 819 } ]
[ 0, 110, 250, 376, 435, 678, 775, 884, 978, 1047, 1198 ]
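A quick way to see the size-dependence claim above in practice is to compare the degree distributions of two random networks with the same mean degree but different sizes; in this hedged toy check, random graphs merely stand in for the chemical reaction networks of the study.

    from collections import Counter
    import networkx as nx

    # Same mean degree, different sizes: under the "neutral feature" reading above,
    # the distribution shape should be controlled essentially by network size.
    for n in (200, 2000):
        G = nx.gnm_random_graph(n, 3 * n, seed=42)  # random stand-in for a reaction network
        dist = Counter(d for _, d in G.degree())
        print(n, sorted(dist.items())[:8])  # low-degree part of the degree distribution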
1112.3805
1
The expectation monad is introduced abstractly via two composable adjunctions, but concretely cap- tures measures. It turns out to sit in between known monads: on the one hand the distribution and ultrafilter monad, and on the other hand the continuation monad. This expectation monad is used in two probabilistic analogues of fundamental results of Manes and Gelfand for the ultrafilter monad: algebras of the expectation monad are convex compact Hausdorff spaces, and are dually equivalent to so-called Banach effect algebras. These structures capture states and effects in quantum founda- tions , and also the duality between them. Moreover, the approach leads to a new re-formulation of Gleason's theorem, expressing that effects on a Hilbert space are free effect modules on projections, obtained via tensoring with the unit interval.
The expectation monad is introduced abstractly via two composable adjunctions, but concretely captures measures. It turns out to sit in between known monads: on the one hand the distribution and ultrafilter monad, and on the other hand the continuation monad. This expectation monad is used in two probabilistic analogues of fundamental results of Manes and Gelfand for the ultrafilter monad: algebras of the expectation monad are convex compact Hausdorff spaces, and are dually equivalent to so-called Banach effect algebras. These structures capture states and effects in quantum foundations , and also the duality between them. Moreover, the approach leads to a new re-formulation of Gleason's theorem, expressing that effects on a Hilbert space are free effect modules on projections, obtained via tensoring with the unit interval.
[ { "type": "R", "before": "cap- tures", "after": "captures", "start_char_pos": 94, "end_char_pos": 104 }, { "type": "R", "before": "founda- tions", "after": "foundations", "start_char_pos": 584, "end_char_pos": 597 } ]
[ 0, 114, 261, 528, 634 ]
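In the discrete case, the "monads capturing measures" idea can be made concrete with a finite-distribution monad: unit builds a point mass, bind pushes a distribution through a kernel, and expectation integrates a function against it. This Python toy is only a finite stand-in, not the categorical construction of the paper.

    from collections import defaultdict

    def unit(x):                      # point mass at x
        return {x: 1.0}

    def bind(dist, f):                # push a distribution through a kernel f
        out = defaultdict(float)
        for x, px in dist.items():
            for y, py in f(x).items():
                out[y] += px * py
        return dict(out)

    def expectation(dist, h=lambda x: x):
        return sum(px * h(x) for x, px in dist.items())

    die = {k: 1.0 / 6.0 for k in range(1, 7)}
    two_rolls = bind(die, lambda a: bind(die, lambda b: unit(a + b)))
    print(expectation(two_rolls))     # 7.0, the expected sum of two fair dice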
1112.4385
1
We consider the problem of maximizing expected power util- ity from consumption over an infinite horizon in the Black-Scholes model with proportional transaction costs, as studied in the paper Shreve and Soner (1994) . Similarly to Kallsen and Muhle-Karbe (2010) , we derive a shadow price, that is, a frictionless price process with values in the bid-ask spread which leads to the same optimal policy . In doing so we explore and exploit the strong relationship between the shadow price and the Hamilton-Jacobi-Bellman-equation .
We consider the problem of maximizing expected power utility from consumption over an infinite horizon in the Black-Scholes model with proportional transaction costs, as studied in Shreve and Soner [Ann. Appl. Probab. 4 (1994) 609-692] . Similar to Kallsen and Muhle-Karbe [Ann. Appl. Probab. 20 (2010) 1341-1358] , we derive a shadow price, that is, a frictionless price process with values in the bid-ask spread which leads to the same optimal policy .
[ { "type": "R", "before": "util- ity", "after": "utility", "start_char_pos": 53, "end_char_pos": 62 }, { "type": "D", "before": "the paper", "after": null, "start_char_pos": 183, "end_char_pos": 192 }, { "type": "A", "before": null, "after": "Ann. Appl. Probab. 4", "start_char_pos": 210, "end_char_pos": 210 }, { "type": "R", "before": ". Similarly", "after": "609-692", "start_char_pos": 218, "end_char_pos": 229 }, { "type": "A", "before": null, "after": ". Similar", "start_char_pos": 230, "end_char_pos": 230 }, { "type": "A", "before": null, "after": "Ann. Appl. Probab. 20", "start_char_pos": 258, "end_char_pos": 258 }, { "type": "A", "before": null, "after": "1341-1358", "start_char_pos": 266, "end_char_pos": 266 }, { "type": "D", "before": ". In doing so we explore and exploit the strong relationship between the shadow price and the Hamilton-Jacobi-Bellman-equation", "after": null, "start_char_pos": 406, "end_char_pos": 532 } ]
[ 0, 407 ]
1112.4610
1
It is a classical result of Stein and Waterman that the asymptotic number of RNA secondary structures is 1.104366 \cdot n^{-3/2} \cdot 2.618034^n. To provide a better understanding of the kinetics of RNA secondary structure formation, we are interested in determining the asymptotic number of secondary structures that are locally optimal , with respect to a particular energy model. In the Nussinov energy model, where each base pair contributes -1 towards the energy of the structure, locally optimal structures are exactly the saturated structures, for which we have previously shown that asymptotically, there are 1.07427\cdot n^{-3/2} \cdot 2.35467^n many saturated structures for a sequence of length n. In this paper, we consider the base stacking energy model , a mild variant of the Nussinov model, where each stacked base pair contributes -1 toward the energy of the structure. Locally optimal structures with respect to the base stacking energy model are exactly those secondary structures, whose stems cannot be extended. Such structures were first considered by Evers and Giegerich, who described a dynamic programming algorithm to enumerate all locally optimal structures. In this paper, we apply methods from enumerative combinatorics to compute the asymptotic number of such structures. Additionally, we consider analogous combinatorial problems for secondary structures with annotated single-stranded, stacking nucleotides (dangles).
It is a classical result of Stein and Waterman that the asymptotic number of RNA secondary structures is 1.104366 \cdot n^{-3/2} \cdot 2.618034^n. Motivated by the kinetics of RNA secondary structure formation, we are interested in determining the asymptotic number of secondary structures that are locally optimal , with respect to a particular energy model. In the Nussinov energy model, where each base pair contributes -1 towards the energy of the structure, locally optimal structures are exactly the saturated structures, for which we have previously shown that asymptotically, there are 1.07427\cdot n^{-3/2} \cdot 2.35467^n many saturated structures for a sequence of length n. In this paper, we consider the base stacking energy model , a mild variant of the Nussinov model, where each stacked base pair contributes -1 toward the energy of the structure. Locally optimal structures with respect to the base stacking energy model are exactly those secondary structures, whose stems cannot be extended. Such structures were first considered by Evers and Giegerich, who described a dynamic programming algorithm to enumerate all locally optimal structures. In this paper, we apply methods from enumerative combinatorics to compute the asymptotic number of such structures. Additionally, we consider analogous combinatorial problems for secondary structures with annotated single-stranded, stacking nucleotides (dangles).
[ { "type": "R", "before": "To provide a better understanding of", "after": "Motivated by", "start_char_pos": 147, "end_char_pos": 183 }, { "type": "R", "before": "locally optimal", "after": "locally optimal", "start_char_pos": 345, "end_char_pos": 360 }, { "type": "R", "before": "saturated", "after": "saturated", "start_char_pos": 574, "end_char_pos": 583 } ]
[ 0, 146, 405, 753, 953, 1099, 1252, 1368 ]
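The Stein-Waterman count quoted in this record satisfies a simple homopolymer recurrence: position n is either unpaired or paired to some position k, with at least one unpaired base inside each hairpin loop. The sketch below reproduces the sequence 1, 1, 1, 2, 4, 8, 17, 37, ... and a growth ratio approaching 2.618034.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def S(n):
        # Homopolymer recurrence: position n is unpaired, or pairs with some k
        # leaving at least one unpaired base inside the hairpin loop (theta = 1).
        if n <= 2:
            return 1
        return S(n - 1) + sum(S(k - 1) * S(n - k - 1) for k in range(1, n - 1))

    print([S(n) for n in range(10)])  # 1, 1, 1, 2, 4, 8, 17, 37, 82, 185
    print(S(60) / S(59))              # approaches 2.618034... as n grows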
1112.4740
1
We study the situation of an investor-producer who can trade on a financial market in continuous time and can transform some assets into others by means of a discrete time production system, in order to price and hedge derivatives on produced goods. This general framework covers the interesting case of an electricity producer who wants to hedge a financial position and can trade commodities which are also inputs for his system. This extends the framework of Bouchard and Nguyen (2011) to continuous time for concave and bounded production functions . We introduce the flexible concept of conditional sure profit along the idea of the no sure profit condition of Rasonyi (2009) and show that it allows one to provide a closedness property for the set of super-hedgeable claims in a very general setting. Using standard separation arguments, we then deduce a dual characterization of the latter .
We study the situation of an agent who can trade on a financial market and can also transform some assets into others by means of a production system, in order to price and hedge derivatives on produced goods. This framework is motivated by the case of an electricity producer who wants to hedge a position on the electricity spot price and can trade commodities which are inputs for his system. This extends the essential results of Bouchard Nguyen Huu (2011) to continuous time markets . We introduce the generic concept of conditional sure profit along the idea of the no sure profit condition of R\`asonyi (2009) . The condition allows one to provide a closedness property for the set of super-hedgeable claims in a very general financial setting. Using standard separation arguments, we then deduce a dual characterization of the latter and provide an application to power futures pricing .
[ { "type": "R", "before": "investor-producer", "after": "agent", "start_char_pos": 29, "end_char_pos": 46 }, { "type": "R", "before": "in continuous time and can", "after": "and can also", "start_char_pos": 83, "end_char_pos": 109 }, { "type": "D", "before": "discrete time", "after": null, "start_char_pos": 158, "end_char_pos": 171 }, { "type": "R", "before": "general framework covers the interesting", "after": "framework is motivated by the", "start_char_pos": 255, "end_char_pos": 295 }, { "type": "R", "before": "financial position", "after": "position on the electricity spot price", "start_char_pos": 349, "end_char_pos": 367 }, { "type": "D", "before": "also", "after": null, "start_char_pos": 404, "end_char_pos": 408 }, { "type": "R", "before": "framework of Bouchard and Nguyen", "after": "essential results of Bouchard", "start_char_pos": 449, "end_char_pos": 481 }, { "type": "A", "before": null, "after": "Nguyen Huu", "start_char_pos": 482, "end_char_pos": 482 }, { "type": "R", "before": "for concave and bounded production functions", "after": "markets", "start_char_pos": 509, "end_char_pos": 553 }, { "type": "R", "before": "flexible", "after": "generic", "start_char_pos": 573, "end_char_pos": 581 }, { "type": "R", "before": "Rasonyi", "after": "R\\`asonyi", "start_char_pos": 667, "end_char_pos": 674 }, { "type": "R", "before": "and show that it", "after": ". The condition", "start_char_pos": 682, "end_char_pos": 698 }, { "type": "A", "before": null, "after": "financial", "start_char_pos": 799, "end_char_pos": 799 }, { "type": "A", "before": null, "after": "and provide an application to power futures pricing", "start_char_pos": 899, "end_char_pos": 899 } ]
[ 0, 249, 431, 555, 808 ]
1112.4824
1
We solve four intertwined problems, motivated by mathematical finance, concerning degenerate-parabolic partial differential operators and degenerate diffusion processes. First, we consider a parabolic partial differential equation on a half-space whose coefficients are suitably Holder continuous and allowed to grow linearly in the spatial variable and which becomes degenerate along the boundary of the half-space. We establish existence and uniqueness of solutions in weighted Holder spaces which incorporate both the degeneracy at the boundary and the unboundedness of the coefficients. Second, we show that the martingale problem associated with a degenerate elliptic differential operator with unbounded, locally Holder continuous coefficients on a half-space is well-posed in the sense of Stroock and Varadhan . Third, we prove existence, uniqueness, and the strong Markov property for weak solutions to a stochastic differential equation with degenerate diffusion and unbounded coefficients with suitable H\"older continuity properties. Fourth, for an Ito process with degenerate diffusion and unbounded but appropriately regular coefficients, we prove existence of a strong Markov process, unique in the sense of probability law, whose one-dimensional marginal probability distributions match those of the given Ito process .
Motivated by applications to probability and mathematical finance, we consider a parabolic partial differential equation on a half-space whose coefficients are suitably Holder continuous and allowed to grow linearly in the spatial variable and which become degenerate along the boundary of the half-space. We establish existence and uniqueness of solutions in weighted Holder spaces which incorporate both the degeneracy at the boundary and the unboundedness of the coefficients. In our companion article [arXiv:1211.4636], we apply the main result of this article to show that the martingale problem associated with a degenerate-elliptic partial differential operator is well-posed in the sense of Stroock and Varadhan .
[ { "type": "R", "before": "We solve four intertwined problems, motivated by", "after": "Motivated by applications to probability and", "start_char_pos": 0, "end_char_pos": 48 }, { "type": "D", "before": "concerning degenerate-parabolic partial differential operators and degenerate diffusion processes. First,", "after": null, "start_char_pos": 71, "end_char_pos": 176 }, { "type": "R", "before": "becomes", "after": "become", "start_char_pos": 360, "end_char_pos": 367 }, { "type": "R", "before": "Second, we", "after": "In our companion article", "start_char_pos": 591, "end_char_pos": 601 }, { "type": "A", "before": null, "after": "arXiv:1211.4636", "start_char_pos": 602, "end_char_pos": 602 }, { "type": "A", "before": null, "after": ", we apply the main result of this article to", "start_char_pos": 603, "end_char_pos": 603 }, { "type": "R", "before": "degenerate elliptic differential operator with unbounded, locally Holder continuous coefficients on a half-space", "after": "degenerate-elliptic partial differential operator", "start_char_pos": 655, "end_char_pos": 767 }, { "type": "D", "before": ". Third, we prove existence, uniqueness, and the strong Markov property for weak solutions to a stochastic differential equation with degenerate diffusion and unbounded coefficients with suitable H\\\"older continuity properties. Fourth, for an Ito process with degenerate diffusion and unbounded but appropriately regular coefficients, we prove existence of a strong Markov process, unique in the sense of probability law, whose one-dimensional marginal probability distributions match those of the given Ito process", "after": null, "start_char_pos": 819, "end_char_pos": 1334 } ]
[ 0, 169, 416, 590, 820, 1046 ]
1112.5840
1
A decision is an act or event of decision taking. Decision making always includes decision taking, the latter not involving significant exchanges with non-deciding agents. A decision outcome is a piece of storable information constituting the result of a decision. Decision outcomes are typed, for instance: plan, command, assertion, or boolean reply to a question. A decision effect is any consequence of putting a decision outcome into effect . Decision outcomes must be expected by the decider to lead to certain decision effects, by way of their being put into effect. The availability of a model or of a theory of the causal chain leading from a decision outcome to one or more decision effects is assumed for the decision taker , otherwise the decision outcome is merely an utterance. Decision effectiveness measures the decision effects against objectives meant to be served with the decision . Decision taking is positioned amidst many similar notions including: decision making, decision process, decision making process, decision process making, decision engineering, decision progression, and decision progression production . Decision making is operationally defined as an informatics related activity consisting of the production of progressions from threads, thus casting decision making competence as an informatics competence. Short-circuit logic underlies the production of decision making progressions from instruction sequences that codify prepared decision making processes. Decision taking can constitute the primary task of dedicated agents. Human agents in such roles are professional decision takers. Multi-threading is essential for the professional decision taker .
A decision is an act or event of decision taking. Decision making always includes decision taking, the latter not involving significant exchanges with non-deciding agents. A decision outcome is a piece of storable information constituting the result of a decision. Decision outcomes are typed, for instance: plan, command, assertion, or boolean reply to a question. Decision outcomes are seen by an audience and autonomous actions from the audience is supposed to realize the putting into effect of a decision outcome , thus leading to so-called decision effects . Decision outcomes are supposedly expected by the decider . Using a model or a theory concerning the causal chain leading from a decision outcome to one or more decision effects may support a decision taker in predicting plausible decision effects for candidate decision outcomes . Decision taking is positioned amidst many related notions including: decision making, decision process, decision making process, decision process making, decision engineering, decision progression, and decision progression production .
[ { "type": "R", "before": "A decision effect is any consequence of putting", "after": "Decision outcomes are seen by an audience and autonomous actions from the audience is supposed to realize the putting into effect of", "start_char_pos": 366, "end_char_pos": 413 }, { "type": "R", "before": "into effect", "after": ", thus leading to so-called decision effects", "start_char_pos": 433, "end_char_pos": 444 }, { "type": "R", "before": "must be", "after": "are supposedly", "start_char_pos": 465, "end_char_pos": 472 }, { "type": "R", "before": "to lead to certain decision effects, by way of their being put into effect. The availability of", "after": ". Using", "start_char_pos": 497, "end_char_pos": 592 }, { "type": "R", "before": "of a theory of", "after": "a theory concerning", "start_char_pos": 604, "end_char_pos": 618 }, { "type": "R", "before": "is assumed for the decision taker , otherwise the decision outcome is merely an utterance. Decision effectiveness measures the decision effects against objectives meant to be served with the decision", "after": "may support a decision taker decision taker in predicting plausible decision effects for candidate decision outcomes", "start_char_pos": 700, "end_char_pos": 899 }, { "type": "R", "before": "similar", "after": "related", "start_char_pos": 944, "end_char_pos": 951 }, { "type": "D", "before": ". Decision making is operationally defined as an informatics related activity consisting of the production of progressions from threads, thus casting decision making competence as an informatics competence. Short-circuit logic underlies the production of decision making progressions from instruction sequences that codify prepared decision making processes. Decision taking can constitute the primary task of dedicated agents. Human agents in such roles are professional decision takers. Multi-threading is essential for the professional decision taker", "after": null, "start_char_pos": 1136, "end_char_pos": 1689 } ]
[ 0, 49, 171, 264, 365, 446, 572, 790, 901, 1137, 1342, 1494, 1563, 1624 ]
1201.0625
1
Using Random Matrix Theory, we build a covariance matrix between stocks of the BM&F-Bovespa (Bolsa de Valores, Mercadorias e Futuros de S\~ao Paulo) which is cleaned of some of the noise due to the complex interactions between the many stocks and the finiteness of available data , and use a regression model in order to remove the market effect due to the common movement of all stocks. These two procedures are then used in order to build portfolios of stocks based on Markovitz 's theory, trying to build better predictions of future risk based on past data. This is done for years of both low and high volatility of the Brazilian market, from 2004 to 2010.
By using Random Matrix Theory, we build covariance matrices between stocks of the BM&F-Bovespa (Bolsa de Valores, Mercadorias e Futuros de S\~ao Paulo) which are cleaned of some of the noise due to the complex interactions between the many stocks and the finiteness of available data . We also use a regression model in order to remove the market effect due to the common movement of all stocks. These two procedures are then used to build stock portfolios based on Markowitz 's theory, trying to obtain better predictions of future risk based on past data. This is done for years of both low and high volatility of the Brazilian stock market, from 2004 to 2010. The results show that the use of regression to subtract the market effect on returns greatly increases the accuracy of the prediction of risk, and that, although the cleaning of the correlation matrix often leads to portfolios that better predict risks, in periods of high volatility of the market this procedure may fail to do so.
[ { "type": "R", "before": "Using", "after": "By using", "start_char_pos": 0, "end_char_pos": 5 }, { "type": "R", "before": "a covariance matrix", "after": "covariance matrices", "start_char_pos": 37, "end_char_pos": 56 }, { "type": "R", "before": "is", "after": "are", "start_char_pos": 155, "end_char_pos": 157 }, { "type": "R", "before": ", and", "after": ". We also", "start_char_pos": 280, "end_char_pos": 285 }, { "type": "R", "before": "in order to build portfolios of stocks based on Markovitz", "after": "to build stock portfolios based on Markowitz", "start_char_pos": 423, "end_char_pos": 480 }, { "type": "R", "before": "build", "after": "obtain", "start_char_pos": 502, "end_char_pos": 507 }, { "type": "A", "before": null, "after": "stock", "start_char_pos": 634, "end_char_pos": 634 }, { "type": "A", "before": null, "after": "The results show that the use of regression to subtract the market effect on returns greatly increases the accuracy of the prediction of risk, and that, although the cleaning of the correlation matrix often leads to portfolios that better predict risks, in periods of high volatility of the market this procedure may fail to do so.", "start_char_pos": 662, "end_char_pos": 662 } ]
[ 0, 387, 561 ]
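The "cleaning" step described above is commonly implemented as eigenvalue clipping against the Marchenko-Pastur noise edge; here is a minimal numpy sketch with synthetic returns and assumed dimensions. The paper's exact cleaning recipe may differ.

    import numpy as np

    rng = np.random.default_rng(7)
    N, T = 50, 500                          # assets and observations (assumed sizes)
    returns = rng.standard_normal((T, N))   # synthetic stand-in for stock returns
    C = np.corrcoef(returns, rowvar=False)

    q = N / T
    lam_max = (1.0 + np.sqrt(q)) ** 2       # Marchenko-Pastur upper edge for pure noise
    w, V = np.linalg.eigh(C)

    w_clean = w.copy()
    noise = w <= lam_max                    # eigenvalues indistinguishable from noise
    w_clean[noise] = w[noise].mean()        # "clip" them to their average
    C_clean = V @ np.diag(w_clean) @ V.T
    np.fill_diagonal(C_clean, 1.0)          # restore the unit diagonal

    # Markowitz-style minimum-variance weights from the cleaned matrix.
    ones = np.ones(N)
    weights = np.linalg.solve(C_clean, ones)
    weights /= weights.sum()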
1201.0689
1
The impact of damping effect to the DNA bubble is investigated within the Peyrard-Bishop model. In the continuum limit, the dynamics of the bubble of DNA is described by the damped nonlinear Schrodinger equation and studied by means of variational method. Traveling wave solution showed that when the viscosity is not introduced in the system, solitary wave patterns propagate without vanishing .
The damping effect to the DNA bubble is investigated within the Peyrard-Bishop model. In the continuum limit, the dynamics of the bubble of DNA is described by the damped nonlinear Schrodinger equation and studied by means of variational method. It is shown that the propagation of solitary wave pattern is not vanishing in a non-viscous system. Inversely, the solitary wave vanishes soon as the viscous force is introduced .
[ { "type": "D", "before": "impact of", "after": null, "start_char_pos": 4, "end_char_pos": 13 }, { "type": "R", "before": "Traveling wave solution showed that when the viscosity is not introduced in the system, solitary wave patterns propagate without vanishing", "after": "It is shown that the propagation of solitary wave pattern is not vanishing in a non-viscous system. Inversely, the solitary wave vanishes soon as the viscous force is introduced", "start_char_pos": 256, "end_char_pos": 394 } ]
[ 0, 95, 255 ]
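The damped nonlinear Schrodinger dynamics invoked in this record can be explored numerically with a split-step Fourier scheme. The sketch below integrates a generic damped NLS, i u_t + (1/2) u_xx + |u|^2 u = -i gamma u, with an assumed bright-soliton initial profile and illustrative parameters; the Peyrard-Bishop coefficients are not reproduced here. The solitary pattern persists for gamma = 0 and decays once damping is switched on.

    import numpy as np

    # Split-step Fourier scheme for  i u_t + (1/2) u_xx + |u|^2 u = -i gamma u.
    L_dom, n, dt, gamma = 40.0, 512, 1e-3, 0.05  # assumed domain, grid, step, damping
    x = np.linspace(-L_dom / 2, L_dom / 2, n, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L_dom / n)
    u = 1.0 / np.cosh(x)                         # bright-soliton initial profile

    for _ in range(5000):
        u = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(u))  # dispersion step
        u = u * np.exp(1j * np.abs(u)**2 * dt)                      # focusing nonlinearity
        u = u * np.exp(-gamma * dt)                                 # viscous damping
    print(np.max(np.abs(u)))  # decays for gamma > 0; stays near 1 for gamma = 0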
1201.0769
1
In this article, we study the problem of robust utility maximization in an incomplete market with volatility uncertainty. The set of all possible models (probability measures) considered here is non-dominated. We propose to study this problem in the framework of second order backward stochastic differential equations introduced in Soner, Touzi and Zhang (2010) for Lipschitz continuous generator, then generalized by Possamai and Zhou (2011) in the quadratic growth case. We solve the problem for exponential, power and logarithmic utility functions and prove existence of an optimal strategy and of an optimal probability measure . Finally we provide several examples which shed more light and intuitions on the problem and its links with the classical utility maximisation one.
In this article, we consider the problem of robust utility maximization in an incomplete market with volatility uncertainty. The set of all possible models (probability measures) considered here is non-dominated. We propose to study this problem in the framework of second order backward stochastic differential equations introduced in Soner, Touzi and Zhang (2010) for Lipschitz continuous generator, then generalized by Possamai and Zhou (2011) in the quadratic growth case. We solve the problem for exponential, power and logarithmic utility functions and prove existence of an optimal strategy . Finally we provide several examples which shed more light on the problem and its links with the classical utility maximization one.
[ { "type": "R", "before": "study", "after": "consider", "start_char_pos": 20, "end_char_pos": 25 }, { "type": "D", "before": "and of an optimal probability measure", "after": null, "start_char_pos": 595, "end_char_pos": 632 }, { "type": "D", "before": "and intuitions", "after": null, "start_char_pos": 693, "end_char_pos": 707 }, { "type": "R", "before": "maximisation", "after": "maximization", "start_char_pos": 764, "end_char_pos": 776 } ]
[ 0, 121, 209, 473, 634 ]
1201.0769
2
In this article, we consider the problem of robust utility maximization in an incomplete market with volatility uncertainty . The set of all possible models (probability measures) considered here is non-dominated. We propose to study this problem in the framework of second order backward stochastic differential equations introduced in Soner, Touzi and Zhang (2010) for Lipschitz continuous generator, then generalized by Possamai and Zhou (2011) in the quadratic growth case. We solve the problem for exponential, power and logarithmic utility functions and prove existence of an optimal strategy. Finally we provide several examples which shed more light on the problem and its links with the classical utility maximization one .
The problem of robust utility maximization in an incomplete market with volatility uncertainty is considered, in the sense that the volatility of the market is only assumed to lie between two given bounds . The set of all possible models (probability measures) considered here is non-dominated. We propose studying this problem in the framework of second-order backward stochastic differential equations (2BSDEs for short) with quadratic growth generators. We show for exponential, power and logarithmic utilities that the value function of the problem can be written as the initial value of a particular 2BSDE and prove existence of an optimal strategy. Finally several examples which shed more light on the problem and its links with the classical utility maximization one are provided. In particular, we show that in some cases, the upper bound of the volatility interval plays a central role, exactly as in the option pricing problem with uncertain volatility models of 2 .
[ { "type": "R", "before": "In this article, we consider the", "after": "The", "start_char_pos": 0, "end_char_pos": 32 }, { "type": "A", "before": null, "after": "is considered, in the sense that the volatility of the market is only assumed to lie between two given bounds", "start_char_pos": 124, "end_char_pos": 124 }, { "type": "R", "before": "to study", "after": "studying", "start_char_pos": 226, "end_char_pos": 234 }, { "type": "R", "before": "second order", "after": "second-order", "start_char_pos": 268, "end_char_pos": 280 }, { "type": "R", "before": "introduced in Soner, Touzi and Zhang (2010) for Lipschitz continuous generator, then generalized by Possamai and Zhou (2011) in the quadratic growth case. We solve the problem", "after": "(2BSDEs for short) with quadratic growth generators. We show", "start_char_pos": 324, "end_char_pos": 499 }, { "type": "R", "before": "utility functions", "after": "utilities that the value function of the problem can be written as the initial value of a particular 2BSDE", "start_char_pos": 539, "end_char_pos": 556 }, { "type": "D", "before": "we provide", "after": null, "start_char_pos": 609, "end_char_pos": 619 }, { "type": "A", "before": null, "after": "are provided. In particular, we show that in some cases, the upper bound of the volatility interval plays a central role, exactly as in the option pricing problem with uncertain volatility models of", "start_char_pos": 732, "end_char_pos": 732 }, { "type": "A", "before": null, "after": "2", "start_char_pos": 733, "end_char_pos": 733 } ]
[ 0, 126, 214, 478, 600 ]
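
The closing remark about the upper volatility bound can be made concrete with the Black-Scholes-Barenblatt equation from uncertain volatility models: for a convex payoff the worst-case price selects the upper bound wherever the gamma is positive, so only sig_hi matters. Below is a rough explicit finite-difference sketch; the grid sizes, parameters and boundary conditions are assumptions of the example, not the paper's 2BSDE machinery.

import numpy as np

def bsb_call(s0, strike, T, r, sig_lo, sig_hi, M=100, N=2000):
    # worst-case (seller's) call price under sigma_t in [sig_lo, sig_hi]
    s_max = 4.0 * strike
    ds, dt = s_max / M, T / N
    s = np.linspace(0.0, s_max, M + 1)
    v = np.maximum(s - strike, 0.0)              # terminal payoff
    for n in range(N):                           # explicit scheme, backward in time
        gamma = (v[2:] - 2 * v[1:-1] + v[:-2]) / ds**2
        delta = (v[2:] - v[:-2]) / (2 * ds)
        sig = np.where(gamma > 0, sig_hi, sig_lo)    # BSB volatility selection
        v[1:-1] += dt * (0.5 * sig**2 * s[1:-1]**2 * gamma
                         + r * s[1:-1] * delta - r * v[1:-1])
        v[0] = 0.0
        v[-1] = s_max - strike * np.exp(-r * (n + 1) * dt)
    return np.interp(s0, s, v)

# a call is convex, so gamma > 0 everywhere and the upper bound is selected:
print(bsb_call(100, 100, 1.0, 0.0, sig_lo=0.1, sig_hi=0.3))  # ~ Black-Scholes at 0.3
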
1201.0967
1
The adverse effects of financial crises in terms of output losses or output growth below its potential can be treated like losses from catastrophic events which have a low likelihood but a large impact in the event that they occur. We therefore analyze GDP losses in terms of frequency (number of loss events per period) and severity (loss per occurrence). Crises' frequency, severity, and the associated global output losses over periods of five years are identified on the basis of Laeven et al. (2008). Applying the Loss Distribution Approach used in insurance and operational risk theory and practice, we estimate a multi-country aggregate GDP loss distribution and thus approximate the conditional losses in the event of financial crises. The analysis of losses produced in the paper suggests that the LDA approach is a useful tool in discussions about the existence and capital requirements of a potential insurance against the risk of financial crises at the aggregate level .
We study cross-country GDP losses due to financial crises in terms of frequency (number of loss events per period) and severity (loss per occurrence). We perform the Loss Distribution Approach (LDA) to estimate a multi-country aggregate GDP loss probability density function and the percentiles associated to extreme events due to financial crises. We find that output losses arising from financial crises are strongly heterogeneous and that currency crises lead to smaller output losses than debt and banking crises. Extreme global financial crises episodes, occurring with a one percent probability every five years, lead to losses between 2.95\% and 4.54\% of world GDP .
[ { "type": "R", "before": "The adverse effects of financial crises in terms of output losses or output growth below its potential can be treated like losses from catastrophic events which have a low likelihood but a large impact in the event that they occur. We therefore analyze GDP losses", "after": "We study cross-country GDP losses due to financial crises", "start_char_pos": 0, "end_char_pos": 263 }, { "type": "R", "before": "Crises' frequency, severity, and the associated global output losses over periods of five years are identified on the basis of Laeven et al. (2008). Applying the", "after": "We perform the", "start_char_pos": 357, "end_char_pos": 518 }, { "type": "R", "before": "used in insurance and operational risk theory and practice, we", "after": "(LDA) to", "start_char_pos": 546, "end_char_pos": 608 }, { "type": "R", "before": "distribution and thus approximate the conditional losses in the event of", "after": "probability density function and the percentiles associated to extreme events due to", "start_char_pos": 653, "end_char_pos": 725 }, { "type": "R", "before": "The analysis of losses produced in the paper suggests that the LDA approach is a useful tool in discussions about the existence and capital requirements of a potential insurance against the risk of financial crises at the aggregate level", "after": "We find that output losses arising from financial crises are strongly heterogeneous and that currency crises lead to smaller output losses than debt and banking crises. Extreme global financial crises episodes, occurring with a one percent probability every five years, lead to losses between 2.95\\% and 4.54\\% of world GDP", "start_char_pos": 744, "end_char_pos": 981 } ]
[ 0, 231, 356, 505, 743 ]
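
A minimal Monte Carlo version of the Loss Distribution Approach named above: a Poisson frequency of crisis events per five-year window compounded with a lognormal severity (loss as a share of world GDP). The parameter values are placeholders, not the paper's estimates.

import numpy as np

rng = np.random.default_rng(1)
n_sim = 20_000
lam = 3.0                # mean number of crisis events per five-year window (placeholder)
mu, sigma = -4.0, 1.0    # lognormal severity: loss as a share of world GDP (placeholder)

counts = rng.poisson(lam, size=n_sim)
losses = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])

print("mean aggregate loss  :", losses.mean())
print("99th-percentile loss :", np.quantile(losses, 0.99))  # the 1%-in-5-years event
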
1201.1437
1
The SABR model is a stochastic volatility model not admitting a closed form solution. Hagan, Kumar, Leniewski and Woodward have given an approximate solution by means of perturbative techniques. A more precise approximation was obtained by Henry-Labord\`ere using the heat kernel expansion method. The latter relies on deep and hard theorems from Riemannian geometry which are almost totally unknown to people working in finance, who however are those primarily interested in these results. The goal of this report is to fill this gap and makes these topics understandable to people with a basic knowledge of calculus and linear algebra.
The SABR model is a stochastic volatility model not admitting a closed form solution. Hagan, Kumar, Leniewski and Woodward have obtained an approximate solution by means of perturbative techniques. A more precise approximation was found by Henry-Labord\`ere with the heat kernel expansion method. The latter relies on deep and hard theorems from Riemannian geometry which are almost totally unknown to the professionals of finance, who however are those primarily interested in these results. The goal of this report is to fill this gap and to make these topics understandable with a basic knowledge of calculus and linear algebra.
[ { "type": "R", "before": "given", "after": "obtained", "start_char_pos": 128, "end_char_pos": 133 }, { "type": "R", "before": "obtained", "after": "found", "start_char_pos": 228, "end_char_pos": 236 }, { "type": "R", "before": "using", "after": "with", "start_char_pos": 258, "end_char_pos": 263 }, { "type": "R", "before": "people working in", "after": "the professionals of", "start_char_pos": 403, "end_char_pos": 420 }, { "type": "R", "before": "makes", "after": "to make", "start_char_pos": 539, "end_char_pos": 544 }, { "type": "D", "before": "to people", "after": null, "start_char_pos": 573, "end_char_pos": 582 } ]
[ 0, 85, 194, 297, 490 ]
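
For reference, the lognormal implied-volatility approximation of Hagan et al. that this record refers to, in executable form (the leading-order formula, with the at-the-money limit handled separately); the numerical inputs are illustrative.

import numpy as np

def sabr_implied_vol(f, K, T, alpha, beta, rho, nu):
    # Hagan et al. (2002) lognormal implied-vol approximation for SABR
    if np.isclose(f, K):
        fk = f ** (1 - beta)
        return (alpha / fk) * (1 + T * ((1 - beta)**2 * alpha**2 / (24 * fk**2)
                                        + rho * beta * nu * alpha / (4 * fk)
                                        + (2 - 3 * rho**2) * nu**2 / 24))
    logfk = np.log(f / K)
    fkb = (f * K) ** ((1 - beta) / 2)
    z = (nu / alpha) * fkb * logfk
    xz = np.log((np.sqrt(1 - 2 * rho * z + z**2) + z - rho) / (1 - rho))
    denom = fkb * (1 + (1 - beta)**2 * logfk**2 / 24 + (1 - beta)**4 * logfk**4 / 1920)
    corr = 1 + T * ((1 - beta)**2 * alpha**2 / (24 * fkb**2)
                    + rho * beta * nu * alpha / (4 * fkb)
                    + (2 - 3 * rho**2) * nu**2 / 24)
    return (alpha / denom) * (z / xz) * corr

print(sabr_implied_vol(f=0.03, K=0.035, T=2.0, alpha=0.02, beta=0.5, rho=-0.3, nu=0.4))
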
1201.1782
1
Multi-currency FX derivatives o?er a challenging playground to the mathematical modelling of correlations. Quotes of liquidly traded vanilla options on cross FX rates, e.g. EUR/JPY, can be used to extract a great deal of information about the complex implied correlation structure between the corresponding main FX rates, e.g. USD/JPY and EUR/USD. Including all this information in a ?nancial model means being able to fit simultaneously all volatility smiles, a very demanding task. In this paper we propose a first solution to this problem in the class of stochastic volatility models. We introduce a novel multi-factor stochastic volatility Heston-based model, which is able to reproduce consistently typical multi-dimensional FX vanilla markets, while retaining the (semi)-analytical tractability typical of a?ne models and relying on a reasonable number of parameters. A successful joint calibration to real market data is presented together with various in- and out-of-sample calibration exercises to highlight the robustness of the parameters estimation. The proposed model is symmetric with respect to the choice of the risk-free currency, opening up to new approaches for coherent pricing and risk management of derivatives depending on multiple currencies.
Multi-currency FX derivatives offer a challenging playground to the mathematical modelling of correlations. Quotes of liquidly traded vanilla options on cross FX rates, e.g. EUR/JPY, can be used to extract a great deal of information about the complex implied correlation structure between the corresponding main FX rates, e.g. USD/JPY and EUR/USD. Including all this information in a financial model means being able to fit simultaneously all volatility smiles, a very demanding task. In this paper we propose a first solution to this problem in the class of stochastic volatility models. We introduce a novel multi-factor stochastic volatility Heston-based model, which is able to reproduce consistently typical multi-dimensional FX vanilla markets, while retaining the (semi)-analytical tractability typical of affine models and relying on a reasonable number of parameters. A successful joint calibration to real market data is presented together with various in- and out-of-sample calibration exercises to highlight the robustness of the parameters estimation. The proposed model is symmetric with respect to the choice of the risk-free currency, opening up to new approaches for coherent pricing and risk management of derivatives depending on multiple currencies.
[ { "type": "R", "before": "o?er", "after": "offer", "start_char_pos": 30, "end_char_pos": 34 }, { "type": "R", "before": "?nancial", "after": "financial", "start_char_pos": 384, "end_char_pos": 392 }, { "type": "R", "before": "a?ne", "after": "affine", "start_char_pos": 812, "end_char_pos": 816 } ]
[ 0, 106, 483, 587, 873, 1061 ]
1201.1782
2
Multi-currency FX derivatives offer a challenging playground to the mathematical modelling of correlations. Quotes of liquidly traded vanilla options on cross FX rates, e.g. EUR/JPY, can be used to extract a great deal of information about the complex implied correlation structure between the corresponding main FX rates, e.g. USD/JPY and EUR/USD. Including all this information in a financial model means being able to fit simultaneously all volatility smiles, a very demanding task. In this paper we propose a first solution to this problem in the class of stochastic volatility models. We introduce a novel multi-factor stochastic volatility Heston-based model, which is able to reproduce consistently typical multi-dimensional FX vanilla markets, while retaining the (semi)-analytical tractability typical of affine models and relying on a reasonable number of parameters. A successful joint calibration to real market data is presented together with various in- and out-of-sample calibration exercises to highlight the robustness of the parameters estimation. The proposed model is symmetric with respect to the choice of the risk-free currency , opening up to new approaches for coherent pricing and risk management of derivatives depending on multiple currencies .
We introduce a novel multi-factor Heston-based stochastic volatility model, which is able to reproduce consistently typical multi-dimensional FX vanilla markets, while retaining the (semi)-analytical tractability typical of affine models and relying on a reasonable number of parameters. A successful joint calibration to real market data is presented together with various in- and out-of-sample calibration exercises to highlight the robustness of the parameters estimation. The proposed model preserves the natural inversion and triangulation symmetries of FX spot rates and its functional form, irrespective of choice of the risk-free currency . That is, all currencies are treated in the same way .
[ { "type": "D", "before": "Multi-currency FX derivatives offer a challenging playground to the mathematical modelling of correlations. Quotes of liquidly traded vanilla options on cross FX rates, e.g. EUR/JPY, can be used to extract a great deal of information about the complex implied correlation structure between the corresponding main FX rates, e.g. USD/JPY and EUR/USD. Including all this information in a financial model means being able to fit simultaneously all volatility smiles, a very demanding task. In this paper we propose a first solution to this problem in the class of stochastic volatility models.", "after": null, "start_char_pos": 0, "end_char_pos": 589 }, { "type": "R", "before": "stochastic volatility Heston-based", "after": "Heston-based stochastic volatility", "start_char_pos": 624, "end_char_pos": 658 }, { "type": "R", "before": "is symmetric with respect to the", "after": "preserves the natural inversion and triangulation symmetries of FX spot rates and its functional form, irrespective of", "start_char_pos": 1085, "end_char_pos": 1117 }, { "type": "R", "before": ", opening up to new approaches for coherent pricing and risk management of derivatives depending on multiple currencies", "after": ". That is, all currencies are treated in the same way", "start_char_pos": 1151, "end_char_pos": 1270 } ]
[ 0, 107, 485, 589, 877, 1065 ]
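
Not the paper's multi-factor model or its calibration, but a minimal full-truncation Euler simulation of a single Heston block of the kind such FX models are assembled from; all parameter values are illustrative.

import numpy as np

def heston_paths(s0, v0, kappa, theta, xi, rho, r, T, n_steps, n_paths, seed=0):
    # full-truncation Euler scheme for dS = r S dt + sqrt(v) S dW1,
    # dv = kappa (theta - v) dt + xi sqrt(v) dW2, corr(dW1, dW2) = rho
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                  # truncate the variance at zero
        s *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return s

s_T = heston_paths(1.30, 0.01, 2.0, 0.01, 0.2, -0.5, 0.0, 1.0, 252, 50_000)
print("ATM call (undiscounted):", np.maximum(s_T - 1.30, 0).mean())
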
1201.1783
1
We present a goal programming model for risk minimization of a financial portfolio managed by an agent subject to different possible criteria. We extend the classical risk minimization model with scalar risk measures to general case of set-valued risk measure . The problem we obtain is a set-valued optimization program and we propose a goal programming-based approach to obtain a solution which represents the best compromise between goals and the achievement levels. Numerical examples are provided to illustrate how the method works in practical situations.
We extend the classical risk minimization model with scalar risk measures to the general case of set-valued risk measures . The problem we obtain is a set-valued optimization model and we propose a goal programming-based approach with satisfaction function to obtain a solution which represents the best compromise between goals and the achievement levels. Numerical examples are provided to illustrate how the method works in practical situations.
[ { "type": "D", "before": "present a goal programming model for risk minimization of a financial portfolio managed by an agent subject to different possible criteria. We", "after": null, "start_char_pos": 3, "end_char_pos": 145 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 220, "end_char_pos": 220 }, { "type": "R", "before": "measure", "after": "measures", "start_char_pos": 253, "end_char_pos": 260 }, { "type": "R", "before": "program", "after": "model", "start_char_pos": 314, "end_char_pos": 321 }, { "type": "A", "before": null, "after": "with satisfaction function", "start_char_pos": 371, "end_char_pos": 371 } ]
[ 0, 142, 262, 471 ]
1201.1788
1
We provide a dual representation of quasiconvex conditional risk measures \% \rho defined on L^{0} modules of the L^{p} type . This is a consequence of more general result which extend the usual Penot-Volle representation for quasiconvex real valued maps . We establish, in the conditional setting, a complete duality between quasiconvex risk measures and the appropriate class of dual functions .
In the conditional setting we provide a complete duality between quasiconvex risk measures defined on L^{0} modules of the L^{p} type and the appropriate class of dual functions . This is based on a general result which extends the usual Penot-Volle representation for quasiconvex real valued maps .
[ { "type": "R", "before": "We provide a dual representation of quasiconvex conditional risk measures \\% \\rho", "after": "In the conditional setting we provide a complete duality between quasiconvex risk measures", "start_char_pos": 0, "end_char_pos": 81 }, { "type": "A", "before": null, "after": "and the appropriate class of dual functions", "start_char_pos": 125, "end_char_pos": 125 }, { "type": "R", "before": "a consequence of more", "after": "based on a", "start_char_pos": 136, "end_char_pos": 157 }, { "type": "R", "before": "extend", "after": "extends", "start_char_pos": 179, "end_char_pos": 185 }, { "type": "D", "before": ". We establish, in the conditional setting, a complete duality between quasiconvex risk measures and the appropriate class of dual functions", "after": null, "start_char_pos": 256, "end_char_pos": 396 } ]
[ 0, 127, 257 ]
1201.2264
1
The self-assembly of polyhedral shells, each constructed from 60 trapezoidal particles, is simulated using molecular dynamics. The organization of the component particles in this particular shell is similar to the capsomer proteins forming the capsid of a T=1 virus. Growth takes place in the presence of an atomistic solvent and, under suitable conditions, achieves a high yield of complete shells. The simulations provide details of the structures and lifetimes of the particle clusters that appear as intermediate states along the growth pathways , and the nature of the transitions between them. Reversible bond formation plays a major role throughout the assembly process by helping avoid incorrect assembly, and while there is a preference for compact structures during the early phase of cluster growth, as shells near completion structures with a variety of forms are encountered .
The self-assembly of polyhedral shells, each constructed from 60 trapezoidal particles, is simulated using molecular dynamics. The organization of the component particles in this shell is similar to the capsomer proteins forming the capsid of a T=1 virus. Growth occurs in the presence of an atomistic solvent and, under suitable conditions, achieves a high yield of complete shells. The simulations provide details of the structure and lifetime of the particle clusters that appear as intermediate states along the growth pathway , and the nature of the transitions between them. In certain respects the growth of size-60 shells from trapezoidal particles resembles the growth of icosahedral shells from triangular particles studied previously, with reversible bonding playing a major role in avoiding incorrect assembly, although the details differ due to particle shape and organization. The strong preference for maximal bonding exhibited by the triangular particle clusters is also apparent for trapezoidal particles, but this is now confined to early growth, and is less pronounced as shells approach completion along a variety of pathways .
[ { "type": "D", "before": "particular", "after": null, "start_char_pos": 179, "end_char_pos": 189 }, { "type": "R", "before": "takes place", "after": "occurs", "start_char_pos": 274, "end_char_pos": 285 }, { "type": "R", "before": "structures and lifetimes", "after": "structure and lifetime", "start_char_pos": 439, "end_char_pos": 463 }, { "type": "R", "before": "pathways", "after": "pathway", "start_char_pos": 541, "end_char_pos": 549 }, { "type": "R", "before": "Reversible bond formation plays", "after": "In certain respects the growth of size-60 shells from trapezoidal particles resembles the growth of icosahedral shells from triangular particles studied previously, with reversible bonding playing", "start_char_pos": 600, "end_char_pos": 631 }, { "type": "R", "before": "throughout the assembly process by helping avoid", "after": "in avoiding", "start_char_pos": 645, "end_char_pos": 693 }, { "type": "R", "before": "and while there is a preference for compact structures during the early phase of cluster growth, as shells near completion structures with", "after": "although the details differ due to particle shape and organization. The strong preference for maximal bonding exhibited by the triangular particle clusters is also apparent for trapezoidal particles, but this is now confined to early growth, and is less pronounced as shells approach completion along", "start_char_pos": 714, "end_char_pos": 852 }, { "type": "R", "before": "forms are encountered", "after": "pathways", "start_char_pos": 866, "end_char_pos": 887 } ]
[ 0, 126, 266, 399, 599 ]
1201.2817
1
For fat tailed distributions ( i.e. those that decay slower than an exponential), large deviations not only become relatively likely, but the way in which they are realized changes dramatically: A finite fraction of the whole sample deviation is concentrated on a single variable : large deviations are not the accumulation of many small deviations, but rather they are dominated to a single large fluctuation . The regime of large deviations is separated from the regime of typical fluctuations by a phase transition where the symmetry between the points in the sample is spontaneously broken . This phenomenon has been discussed in the context of mass transport models in physics, where it takes the form of a condensation phase transition. Yet, the phenomenon is way more general. For example, in risk management of large portfolios, it suggests that one should expect losses to concentrate on a single asset: when extremely bad things happen, it is likely that there is a single factor on which bad luck concentrates. Along similar lines, one should expect that bubbles in financial markets do not gradually deflate, but rather burst abruptly and that in the most rainy day of a year, precipitation concentrate on a given spot. Analogously, when applied to biological evolution, we 're lead to infer that , if fitness changes for individual mutations have a broad distribution, those large deviations that lead to better fit species are not likely to result from the accumulation of small positive mutations. Rather they are likely to arise from large rare jumps .
Large deviations for fat tailed distributions , i.e. those that decay slower than exponential, are not only relatively likely, but they also occur in a rather peculiar way where a finite fraction of the whole sample deviation is concentrated on a single variable . The regime of large deviations is separated from the regime of typical fluctuations by a phase transition where the symmetry between the points in the sample is spontaneously broken. For stochastic processes with a fat tailed microscopic noise, this implies that while typical realizations are well described by a diffusion process with continuous sample paths, large deviation paths are typically discontinuous. For eigenvalues of random matrices with fat tailed distributed elements, a large deviation where the trace of the matrix is anomalously large concentrates on just a single eigenvalue, whereas in the thin tailed world the large deviation affects the whole distribution. These results find a natural application to finance. Since the price dynamics of financial stocks is characterized by fat tailed increments, large fluctuations of stock prices are expected to be realized by discrete jumps. Interestingly, we find that large excursions of prices are more likely realized by continuous drifts rather than by discontinuous jumps. Indeed, auto-correlations suppress the concentration of large deviations. Financial covariance matrices also exhibit an anomalously large eigenvalue, the market mode, as compared to the prediction of random matrix theory. We show that this is explained by a large deviation with excess covariance rather than by one with excess volatility .
[ { "type": "R", "before": "For", "after": "Large deviations for", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "R", "before": "(", "after": ",", "start_char_pos": 29, "end_char_pos": 30 }, { "type": "R", "before": "an exponential), large deviations not only become", "after": "exponential, are not only", "start_char_pos": 65, "end_char_pos": 114 }, { "type": "R", "before": "the way in which they are realized changes dramatically: A", "after": "they also occur in a rather peculiar way where a", "start_char_pos": 138, "end_char_pos": 196 }, { "type": "D", "before": ": large deviations are not the accumulation of many small deviations, but rather they are dominated to a single large fluctuation", "after": null, "start_char_pos": 280, "end_char_pos": 409 }, { "type": "D", "before": "spontaneously broken", "after": null, "start_char_pos": 595, "end_char_pos": 615 }, { "type": "R", "before": ". This phenomenon has been discussed in the context of mass transport models in physics, where it takes the form of a condensation phase transition. Yet, the phenomenon is way more general. For example, in risk management of", "after": "spontaneously broken. For stochastic processes with a fat tailed microscopic noise, this implies that while typical realizations are well described by a diffusion process with continuous sample paths, large deviation paths are typically discontinuous. For eigenvalues of random matrices with fat tailed distributed elements, a large deviation where the trace of the matrix is anomalously large concentrates on just a single eigenvalue, whereas in the thin tailed world the large deviation affects the whole distribution. These results find a natural application to finance. Since the price dynamics of financial stocks is characterized by fat tailed increments, large fluctuations of stock prices are expected to be realized by discrete jumps. Interestingly, we find that", "start_char_pos": 616, "end_char_pos": 840 }, { "type": "R", "before": "portfolios, it suggests that one should expect losses to concentrate on a single asset: when extremely bad things happen, it is likely that there is a single factor on which bad luck concentrates. Along similar lines, one should expect that bubbles in financial markets do not gradually deflate, but rather burst abruptly and that in the most rainy day of a year, precipitation concentrate on a given spot. Analogously, when applied to biological evolution, we 're lead to infer that , if fitness changes for individual mutations have a broad distribution, those large deviations that lead to better fit species are not likely to result from the accumulation of small positive mutations. Rather they are likely to arise from large rare jumps", "after": "excursions of prices are more likely realized by continuous drifts rather than by discontinuous jumps. Indeed, auto-correlations suppress the concentration of large deviations. Financial covariance matrices also exhibit an anomalously large eigenvalue, the market mode, as compared to the prediction of random matrix theory. We show that this is explained by a large deviation with excess covariance rather than by one with excess volatility", "start_char_pos": 847, "end_char_pos": 1588 } ]
[ 0, 411, 617, 764, 805, 1043, 1253, 1534 ]
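
A quick numerical illustration of the condensation mechanism described above: conditioning the sum of fat-tailed (Pareto) variables on a large deviation, the largest term carries a finite fraction of the total, while for thin-tailed (exponential) variables the deviation stays spread out. The sample sizes, tail exponent and conditioning quantile are illustrative choices.

import numpy as np

rng = np.random.default_rng(2)
n, trials = 50, 100_000

def max_share_given_large_sum(sample):
    s = sample.sum(axis=1)
    big = s > np.quantile(s, 0.999)              # condition on a large deviation of the sum
    return (sample[big].max(axis=1) / s[big]).mean()

fat = rng.pareto(1.5, size=(trials, n)) + 1.0    # fat-tailed: Pareto, exponent 1.5
thin = rng.exponential(1.0, size=(trials, n))    # thin-tailed: exponential

print("fat tail, share of largest term :", max_share_given_large_sum(fat))   # close to 1
print("thin tail, share of largest term:", max_share_given_large_sum(thin))  # much smaller
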
1201.3432
1
Benford's law states that in many data sets the overall distribution of the significant digits tends to be logarithmic so that the occurrence of numbers beginning with smaller first significant digits is more often than those with larger ones. We investigate here recent data on illicit financial flows from developing countries and reveal that the data does submit to Benford's law. Further, the general improvement in the statistical accuracy which we observed supports the applicability of the normalization process used to limit the inclusion of the countries in the database for which the illicit financial flows are not substantial.
Benford's law states that in many data sets the overall distribution of the significant digits tends to be logarithmic so that the numbers beginning with smaller significant digits occur more often than those with larger ones. We investigate here recent data on illicit financial flows from developing countries and reveal that the data does submit to Benford's law. Further, the general improvement in the statistical accuracy we find here supports the applicability of the two stage normalization in filtering out countries from database for which illicit financial flows are not substantial.
[ { "type": "D", "before": "occurrence of", "after": null, "start_char_pos": 131, "end_char_pos": 144 }, { "type": "R", "before": "first significant digits is", "after": "significant digits occur", "start_char_pos": 176, "end_char_pos": 203 }, { "type": "R", "before": "which we observed", "after": "we find here", "start_char_pos": 445, "end_char_pos": 462 }, { "type": "R", "before": "normalization process used to limit the inclusion of the countries in the", "after": "two stage normalization in filtering out countries from", "start_char_pos": 497, "end_char_pos": 570 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 590, "end_char_pos": 593 } ]
[ 0, 243, 383 ]
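
A compact leading-digit test of the kind described, run on synthetic lognormal amounts in place of the actual illicit-flow estimates; the Benford frequency of leading digit d is log10(1 + 1/d).

import numpy as np

rng = np.random.default_rng(3)
flows = rng.lognormal(mean=10.0, sigma=2.0, size=100_000)  # synthetic flow amounts

first_digit = np.array([int(f"{x:e}"[0]) for x in flows])  # leading significant digit
observed = np.bincount(first_digit, minlength=10)[1:] / len(flows)
benford = np.log10(1 + 1 / np.arange(1, 10))

for d, (o, b) in enumerate(zip(observed, benford), start=1):
    print(f"digit {d}: observed {o:.3f}  Benford {b:.3f}")
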
1201.3432
2
Benford's law states that in many data sets the overall distribution of the significant digits tends to be logarithmic so that the numbers beginning with smaller significant digits occur more often than those with larger ones. We investigate here recent data on illicit financial flows from developing countries and reveal that the data does submit to Benford's law . Further, the general improvement in the statistical accuracy we find here supports the applicability of the two stage normalization in filtering out countries from database for which illicit financial flows are not substantial .
Benford's law states that in data sets from different phenomena leading digits tend to be distributed logarithmically such that the numbers beginning with smaller digits occur more often than those with larger ones. Particularly, the law is known to hold for different types of financial data. The Illicit Financial Flows (IFFs) exiting the developing countries are frequently discussed as hidden resources which could have been otherwise properly utilized for their development. We investigate here the distribution of the leading digits in the recent data on estimates of IFFs to look for the existence of a pattern as predicted by Benford's law and establish that the frequency of occurrence of the leading digits in these estimates does closely follow the law .
[ { "type": "R", "before": "many data sets the overall distribution of the significant digits tends to be logarithmic so", "after": "data sets from different phenomena leading digits tend to be distributed logarithmically such", "start_char_pos": 29, "end_char_pos": 121 }, { "type": "D", "before": "significant", "after": null, "start_char_pos": 162, "end_char_pos": 173 }, { "type": "A", "before": null, "after": "Particularly, the law is known to hold for different types of financial data. The Illicit Financial Flows (IFFs) exiting the developing countries are frequently discussed as hidden resources which could have been otherwise properly utilized for their development.", "start_char_pos": 227, "end_char_pos": 227 }, { "type": "A", "before": null, "after": "the distribution of the leading digits in the", "start_char_pos": 248, "end_char_pos": 248 }, { "type": "R", "before": "illicit financial flows from developing countries and reveal that the data does submit to", "after": "estimates of IFFs to look for the existence of a pattern as predicted by", "start_char_pos": 264, "end_char_pos": 353 }, { "type": "R", "before": ". Further, the general improvement in the statistical accuracy we find here supports the applicability of the two stage normalization in filtering out countries from database for which illicit financial flows are not substantial", "after": "and establish that the frequency of occurrence of the leading digits in these estimates does closely follow the law", "start_char_pos": 368, "end_char_pos": 596 } ]
[ 0, 226 ]
1201.3709
1
We study the hysteresis in unzipping and rezipping of a double stranded DNA by pulling its strands in opposite directions in the fixed force ensemble. The force is increased, at a constant rate from an initial value g_0 to some maximum value g_m that lies above the phase boundary and then decreased back again to g_{0}. We observed hysteresis during a complete cycle of unzipping and rezipping. We obtained probability distributions of work performed over a cycle of unzipping and rezipping for various pulling rates. The mean of the distribution is found to be very close to the area of the hysteresis loop. We extract the equilibrium force versus separation isotherm by using the work theorem on repeated non-equilibrium force measurements .
We study by using Monte Carlo simulations the hysteresis in unzipping and rezipping of a double stranded DNA (dsDNA) by pulling its strands in opposite directions in the fixed force ensemble. The force is increased, at a constant rate from an initial value g_0 to some maximum value g_m that lies above the phase boundary and then decreased back again to g_{0}. We observed hysteresis during a complete cycle of unzipping and rezipping. We obtained probability distributions of work performed over a cycle of unzipping and rezipping for various pulling rates. The mean of the distribution is found to be close (the difference being within 10\%, except for very fast pulling) to the area of the hysteresis loop. We extract the equilibrium force versus separation isotherm by using the work theorem on repeated non-equilibrium force measurements . Our method is capable of reproducing the equilibrium and the non-equilibrium force-separation isotherms for the spontaneous rezipping of dsDNA .
[ { "type": "A", "before": null, "after": "by using Monte Carlo simulations", "start_char_pos": 9, "end_char_pos": 9 }, { "type": "A", "before": null, "after": "(dsDNA)", "start_char_pos": 77, "end_char_pos": 77 }, { "type": "R", "before": "very close", "after": "close (the difference being within 10\\%, except for very fast pulling)", "start_char_pos": 565, "end_char_pos": 575 }, { "type": "A", "before": null, "after": ". Our method is capable of reproducing the equilibrium and the non-equilibrium force-separation isotherms for the spontaneous rezipping of dsDNA", "start_char_pos": 745, "end_char_pos": 745 } ]
[ 0, 152, 322, 397, 520, 611 ]
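
The work theorem used above is the Jarzynski equality, Delta F = -kT ln <exp(-W/kT)> over repeated non-equilibrium pullings. A toy demonstration with Gaussian work samples, for which the exact answer is <W> - var(W)/(2kT):

import numpy as np

rng = np.random.default_rng(4)
kT = 1.0
mean_w, var_w = 5.0, 2.0                      # toy dissipative work distribution
W = rng.normal(mean_w, np.sqrt(var_w), size=1_000_000)

dF_jarzynski = -kT * np.log(np.mean(np.exp(-W / kT)))
dF_exact = mean_w - var_w / (2 * kT)          # exact for Gaussian work

print("Jarzynski estimate:", dF_jarzynski)    # ~ 4.0
print("exact             :", dF_exact)        # 4.0
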
1201.3798
1
We investigate the dynamics of a trust game on a mixed population where individuals are forced to play against a predetermined number of partners, who they choose dynamically. Agents are also allowed to adapt their level of trustworthiness , based on payoff. The dynamics undergoes a transition at a specific value of the strategy update rate, above which an emergent organization is observed, where individuals have similar values of below optimal trustworthiness . This organization is not due to an explicit collusion among agents; instead it arises spontaneously from the maximization of the individual payoffs. This dynamics is marked by large fluctuations and a high degree of unpredictability for most of the parameter space, and serves as a plausible qualitative explanation for observed elevated levels and fluctuations of certain commodity prices.
We investigate the dynamics of a trust game on a mixed population where individuals with the role of buyers are forced to play against a predetermined number of sellers, whom they choose dynamically. Agents with the role of sellers are also allowed to adapt the level of value for money of their products , based on payoff. The dynamics undergoes a transition at a specific value of the strategy update rate, above which an emergent organization is observed, where sellers have similar values of below optimal value for money . This organization is not due to an explicit collusion among agents; instead it arises spontaneously from the maximization of the individual payoffs. This dynamics is marked by large fluctuations and a high degree of unpredictability for most of the parameter space, and serves as a plausible qualitative explanation for observed elevated levels and fluctuations of certain commodity prices.
[ { "type": "A", "before": null, "after": "with the role of buyers", "start_char_pos": 84, "end_char_pos": 84 }, { "type": "R", "before": "partners, who", "after": "sellers, whom", "start_char_pos": 138, "end_char_pos": 151 }, { "type": "A", "before": null, "after": "with the role of sellers", "start_char_pos": 184, "end_char_pos": 184 }, { "type": "R", "before": "their level of trustworthiness", "after": "the level of value for money of their products", "start_char_pos": 211, "end_char_pos": 241 }, { "type": "R", "before": "individuals", "after": "sellers", "start_char_pos": 402, "end_char_pos": 413 }, { "type": "R", "before": "trustworthiness", "after": "value for money", "start_char_pos": 451, "end_char_pos": 466 } ]
[ 0, 176, 260, 468, 536, 617 ]
1201.3867
1
Markov state models (MSMs) are a powerful means of understanding the structure and function of biomolecules by describing their free energy landscapes as a set of local minima (a.k.a. states) and the probabilities of transitioning between them. Unfortunately, it can be difficult to gain an intuition for an MSM because they typically must have tens of thousands of sates to quantitatively describe the rugged landscapes of most biomolecules . Here, I derive a Bayesian agglomerative clustering engine (BACE) for coarse-graining MSMs, making them suitable for extracting human understanding. BACE considerably outperforms existing methods by iteratively lumping together the most kinetically similar states while taking into account model uncertainty. I also present an extremely efficient expression for Bayesian model comparison that can be used to identify the most meaningful levels of the hierarchy of models from BACE. Code for both methods is available on the web URL
Markov state models (MSMs) ---or discrete-time master equation models---are a powerful way of understanding the structure and function of proteins and other molecular systems. However, they are typically too complicated to understand . Here, I present a Bayesian agglomerative clustering engine (BACE) for coarse-graining Markov chains---as well as a more general class of probabilistic models---making them more comprehensible while remaining as faithful as possible to the original kinetics by accounting for model uncertainty. The closed-form expression I derive here for determining which states to merge is equivalent to the generalized Jensen-Shannon divergence, an important measure from information theory that is related to the relative entropy. Therefore, the method has an appealing information theoretic interpretation. I also present an extremely efficient expression for Bayesian model comparison that can be used to identify the most meaningful levels of the hierarchy of models from BACE.
[ { "type": "R", "before": "are a powerful means", "after": "---or discrete-time master equation models---are a powerful way", "start_char_pos": 27, "end_char_pos": 47 }, { "type": "R", "before": "biomolecules by describing their free energy landscapes as a set of local minima (a.k.a. states) and the probabilities of transitioning between them. Unfortunately, it can be difficult to gain an intuition for an MSM because they typically must have tens of thousands of sates to quantitatively describe the rugged landscapes of most biomolecules", "after": "proteins and other molecular systems. However, they are typically too complicated to understand", "start_char_pos": 95, "end_char_pos": 441 }, { "type": "R", "before": "derive", "after": "present", "start_char_pos": 452, "end_char_pos": 458 }, { "type": "R", "before": "MSMs, making them suitable for extracting human understanding. BACE considerably outperforms existing methods by iteratively lumping together the most kinetically similar states while taking into account", "after": "Markov chains---as well as a more general class of probabilistic models---making them more comprehensible while remaining as faithful as possible to the original kinetics by accounting for", "start_char_pos": 529, "end_char_pos": 732 }, { "type": "R", "before": "I", "after": "The closed-form expression I derive here for determining which states to merge is equivalent to the generalized Jensen-Shannon divergence, an important measure from information theory that is related to the relative entropy. Therefore, the method has an appealing information theoretic interpretation. I", "start_char_pos": 752, "end_char_pos": 753 }, { "type": "D", "before": "Code for both methods is available on the web URL", "after": null, "start_char_pos": 925, "end_char_pos": 974 } ]
[ 0, 244, 443, 591, 751, 924 ]
1201.3867
2
Markov state models (MSMs)---or discrete-time master equation models---are a powerful way of understanding the structure and function of proteins and other molecular systems . However, they are typically too complicated to understand. Here, I present a Bayesian agglomerative clustering engine (BACE) for coarse-graining Markov chains---as well as a more general class of probabilistic models---making them more comprehensible while remaining as faithful as possible to the original kinetics by accounting for model uncertainty . The closed-form expression I derive here for determining which states to merge is equivalent to the generalized Jensen-Shannon divergence, an important measure from information theory that is related to the relative entropy. Therefore, the method has an appealing information theoretic interpretation . I also present an extremely efficient expression for Bayesian model comparison that can be used to identify the most meaningful levels of the hierarchy of models from BACE.
Markov state models (MSMs)---or discrete-time master equation models---are a powerful way of modeling the structure and function of molecular systems like proteins. Unfortunately, MSMs with sufficiently many states to make a quantitative connection with experiments (often tens of thousands of states even for small systems) are generally too complicated to understand. Here, I present a Bayesian agglomerative clustering engine (BACE) for coarse-graining such Markov models, thereby reducing their complexity and making them more comprehensible . An important feature of this algorithm is its ability to explicitly account for statistical uncertainty in model parameters that arises from finite sampling. This advance builds on a number of recent works highlighting the importance of accounting for uncertainty in the analysis of MSMs and provides significant advantages over existing methods for coarse-graining Markov state models . The closed-form expression I derive here for determining which states to merge is equivalent to the generalized Jensen-Shannon divergence, an important measure from information theory that is related to the relative entropy. Therefore, the method has an appealing information theoretic interpretation in terms of minimizing information loss. The bottom-up nature of the algorithm likely makes it particularly well suited for constructing mesoscale models . I also present an extremely efficient expression for Bayesian model comparison that can be used to identify the most meaningful levels of the hierarchy of models from BACE.
[ { "type": "R", "before": "understanding", "after": "modeling", "start_char_pos": 93, "end_char_pos": 106 }, { "type": "R", "before": "proteins and other molecular systems . However, they are typically", "after": "molecular systems like proteins. Unfortunately, MSMs with sufficiently many states to make a quantitative connection with experiments (often tens of thousands of states even for small systems) are generally", "start_char_pos": 137, "end_char_pos": 203 }, { "type": "R", "before": "Markov chains---as well as a more general class of probabilistic models---making", "after": "such Markov models, thereby reducing their complexity and making", "start_char_pos": 321, "end_char_pos": 401 }, { "type": "R", "before": "while remaining as faithful as possible to the original kinetics by accounting for model uncertainty", "after": ". An important feature of this algorithm is its ability to explicitly account for statistical uncertainty in model parameters that arises from finite sampling. This advance builds on a number of recent works highlighting the importance of accounting for uncertainty in the analysis of MSMs and provides significant advantages over existing methods for coarse-graining Markov state models", "start_char_pos": 427, "end_char_pos": 527 }, { "type": "A", "before": null, "after": "in terms of minimizing information loss. The bottom-up nature of the algorithm likely makes it particularly well suited for constructing mesoscale models", "start_char_pos": 831, "end_char_pos": 831 } ]
[ 0, 175, 234, 529, 754, 833 ]
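
The merge criterion the abstract identifies can be written down directly: for two states with observed transition-count rows, the count-weighted generalized Jensen-Shannon divergence of their outgoing distributions scores the information lost by lumping them. This sketches the criterion only, not the full BACE algorithm; the count vectors are made up.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def merge_cost(c_i, c_j):
    # count-weighted generalized Jensen-Shannon divergence between the outgoing
    # transition distributions of states i and j (lower cost = merge first)
    n_i, n_j = c_i.sum(), c_j.sum()
    p, q = c_i / n_i, c_j / n_j
    w = n_i / (n_i + n_j)
    m = w * p + (1 - w) * q                   # outgoing distribution of the merged state
    return (n_i + n_j) * (entropy(m) - w * entropy(p) - (1 - w) * entropy(q))

c1 = np.array([90.0, 5.0, 5.0])
c2 = np.array([85.0, 10.0, 5.0])              # kinetically similar to c1
c3 = np.array([5.0, 90.0, 5.0])               # kinetically different
print(merge_cost(c1, c2), merge_cost(c1, c3)) # the first cost is much smaller
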
1201.4551
1
Fossil fuels are major sources of energy, and have several advantages over other primary energy sources. Without extensive dependence on fossil fuels, it is questionable whether our economic prosperity can continue or not. This paper analyzes cointegration and causality between fossil fuel consumption and economic growth in the world over the period 1971--2008. The estimation results indicate that fossil fuel consumption and GDP are cointegrated and there exists long-run unidirectional causality from fossil fuel consumption to GDP. This paper also investigates the nexus between nonfossil energy consumption and GDP, and shows that there is no causality between the variables. The conclusions are that reducing fossil fuel consumption may hamper economic growth, and that it is unlikely that nonfossil energy will substantially replace fossil fuels . This paper also examines causal linkages between the variables using a trivariate model, and obtains the same results as those from the bivariate model .
This paper analyzes cointegration and causality between fossil fuel consumption and economic growth in the world over the period 1971--2008. The estimation results indicate that fossil fuel consumption and GDP are cointegrated and there exists long-run unidirectional causality from fossil fuel consumption to GDP. This paper also investigates the nexus between nonfossil energy consumption and GDP, and shows that there is no causality between the variables. The conclusions are that reducing fossil fuel consumption may hamper economic growth, and that it is unlikely that nonfossil energy will substantially replace fossil fuels .
[ { "type": "D", "before": "Fossil fuels are major sources of energy, and have several advantages over other primary energy sources. Without extensive dependence on fossil fuels, it is questionable whether our economic prosperity can continue or not.", "after": null, "start_char_pos": 0, "end_char_pos": 222 }, { "type": "D", "before": ". This paper also examines causal linkages between the variables using a trivariate model, and obtains the same results as those from the bivariate model", "after": null, "start_char_pos": 855, "end_char_pos": 1008 } ]
[ 0, 104, 222, 363, 537, 682, 856 ]
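
A sketch of the two tests named above (Engle-Granger cointegration and Granger causality) on synthetic series, using statsmodels; the library choice and the data-generating process are assumptions of this example, not the paper's setup.

import numpy as np
from statsmodels.tsa.stattools import coint, grangercausalitytests

rng = np.random.default_rng(5)
n = 300
trend = np.cumsum(rng.normal(size=n))           # common stochastic trend
fuel = trend + rng.normal(scale=0.5, size=n)    # synthetic fossil fuel consumption
gdp = np.empty(n)
gdp[0] = 0.0
for t in range(1, n):                           # GDP responds to lagged fuel use
    gdp[t] = trend[t] + 0.5 * fuel[t - 1] + rng.normal(scale=0.5)

t_stat, p_value, _ = coint(gdp, fuel)           # Engle-Granger cointegration test
print("cointegration p-value:", p_value)

data = np.column_stack([np.diff(gdp), np.diff(fuel)])   # does fuel Granger-cause GDP?
res = grangercausalitytests(data, maxlag=2)
print("Granger p-value (lag 1):", res[1][0]["ssr_ftest"][1])
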
1201.4551
2
This paper analyzes cointegration and causality between fossil fuel consumption and economic growth in the world over the period 1971--2008. The estimation results indicate that fossil fuel consumption and GDP are cointegrated and there exists long-run unidirectional causality from fossil fuel consumption to GDP. This paper also investigates the nexus between nonfossil energy consumption and GDP, and shows that there is no causality between the variables. The conclusions are that reducing fossil fuel consumption may hamper economic growth, and that it is unlikely that nonfossil energy will substantially replace fossil fuels .
This paper has been withdrawn by the author due to some inaccurate descriptions in the section of INTRODUCTION and CONCLUSIONS .
[ { "type": "R", "before": "analyzes cointegration and causality between fossil fuel consumption and economic growth in the world over the period 1971--2008. The estimation results indicate that fossil fuel consumption and GDP are cointegrated and there exists long-run unidirectional causality from fossil fuel consumption to GDP. This paper also investigates the nexus between nonfossil energy consumption and GDP, and shows that there is no causality between the variables. The conclusions are that reducing fossil fuel consumption may hamper economic growth, and that it is unlikely that nonfossil energy will substantially replace fossil fuels", "after": "has been withdrawn by the author due to some inaccurate descriptions in the section of INTRODUCTION and CONCLUSIONS", "start_char_pos": 11, "end_char_pos": 631 } ]
[ 0, 140, 314, 459 ]
1201.4586
1
Financial markets worldwide do not have the same working hours. As a consequence, the study of correlation or causality between financial market indices becomes dependent on wether we should consider in computations of correlation matrices all indices in the same day or lagged indices. The answer is that we should consider both .
Financial markets worldwide do not have the same working hours. As a consequence, the study of correlation or causality between financial market indices becomes dependent on wether we should consider in computations of correlation matrices all indices in the same day or lagged indices. The answer this article proposes is that we should consider both . In this work, we use 79 indices of a diversity of stock markets across the world in order to study their correlation structure, and discover that representing in the same network original and lagged indices, we obtain a better understanding of how indices that operate at different hours relate to each other .
[ { "type": "A", "before": null, "after": "this article proposes", "start_char_pos": 298, "end_char_pos": 298 }, { "type": "A", "before": null, "after": ". In this work, we use 79 indices of a diversity of stock markets across the world in order to study their correlation structure, and discover that representing in the same network original and lagged indices, we obtain a better understanding of how indices that operate at different hours relate to each other", "start_char_pos": 331, "end_char_pos": 331 } ]
[ 0, 63, 286 ]
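
A minimal illustration of the same-day versus lagged comparison this record argues for, with synthetic returns of two markets whose trading hours do not overlap; the market names and the lead-lag structure are assumptions of the example.

import numpy as np

rng = np.random.default_rng(6)
n = 1000
shock = rng.normal(size=n)                      # common world factor
tokyo = shock + 0.5 * rng.normal(size=n)        # closes before the factor reaches New York
newyork = np.roll(shock, 1) + 0.5 * rng.normal(size=n)  # picks the factor up the next day

same_day = np.corrcoef(tokyo, newyork)[0, 1]
lagged = np.corrcoef(tokyo[:-1], newyork[1:])[0, 1]     # Tokyo today vs New York tomorrow
print(f"same-day corr: {same_day:.2f}   lagged corr: {lagged:.2f}")
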
1201.5578
1
Modeling stochasticity in gene regulatory networks is an important and complex problem in molecular systems biology. Important work toward this goal have been made in elucidating the role of intrinsic noise. For this purpose several modeling strategies such as the Gillespie algorithm and chemical master equations have been efficiently used . This manuscript contributes with an alternative approach from these classical settings. Within the discrete paradigm, where genes, proteins, and other molecular components of gene regulatory networks are modeled as discrete variables and interaction among these components are given by logical rules representing the biochemical mechanisms governing their regulation; stochasticity is modeled at the biological function level under the assumption that even if the expression levels of the input nodes of an update function guarantee activation or degradation there is a probability that the process will not occur due to stochasticity in the process, for instance, some of the chemical reactions encoded by the function fail to occur which lead to a function failure. This approach allows a finer analysis of discrete models and provide a natural set up for cell population simulations. We applied our methods to two of the best studied regulatory networks, the outcome of lambda phage infection of bacteria and the p53-mdm2 complex.
Modeling stochasticity in gene regulatory networks is an important and complex problem in molecular systems biology. To elucidate intrinsic noise, several modeling strategies such as the Gillespie algorithm have been used successfully . This manuscript contributes an alternative approach from these classical settings. Within the discrete paradigm, where genes, proteins, and other molecular components of gene regulatory networks are modeled as discrete variables and interaction among these components are given by logical rules representing the biochemical mechanisms governing their regulation; stochasticity is modeled at the biological function level under the assumption that even if the expression levels of the input nodes of an update function guarantee activation or degradation there is a probability that the process will not occur due to stochasticity in the process, for instance, some of the chemical reactions encoded by the function fail to occur which lead to a function failure. This approach allows a finer analysis of discrete models and provide a natural set up for cell population simulations. We applied our methods to two of the best studied regulatory networks, the outcome of lambda phage infection of bacteria and the p53-mdm2 complex.
[ { "type": "R", "before": "Important work toward this goal have been made in elucidating the role of intrinsic noise. For this purpose", "after": "To elucidate intrinsic noise,", "start_char_pos": 117, "end_char_pos": 224 }, { "type": "R", "before": "and chemical master equations have been efficiently used", "after": "have been used successfully", "start_char_pos": 285, "end_char_pos": 341 }, { "type": "D", "before": "with", "after": null, "start_char_pos": 372, "end_char_pos": 376 } ]
[ 0, 116, 207, 343, 431, 711, 1111, 1230 ]
1201.5578
2
Modeling stochasticity in gene regulatory networks is an important and complex problem in molecular systems biology. To elucidate intrinsic noise, several modeling strategies such as the Gillespie algorithm have been used successfully. This manuscript contributes an alternative approach from these classical settings. Within the discrete paradigm, where genes, proteins, and other molecular components of gene regulatory networks are modeled as discrete variables and interaction among these components are given by logical rules representing the biochemical mechanisms governing their regulation ; stochasticity is modeled at the biological function level under the assumption that even if the expression levels of the input nodes of an update function guarantee activation or degradation there is a probability that the process will not occur due to stochasticity in the process, for instance, some of the chemical reactions encoded by the function fail to occur which lead to a function failure . This approach allows a finer analysis of discrete models and provide a natural set up for cell population simulations . We applied our methods to two of the best studied regulatory networks, the outcome of lambda phage infection of bacteria and the p53-mdm2 complex.
Modeling stochasticity in gene regulatory networks is an important and complex problem in molecular systems biology. To elucidate intrinsic noise, several modeling strategies such as the Gillespie algorithm have been used successfully. This paper contributes an approach as an alternative to these classical settings. Within the discrete paradigm, where genes, proteins, and other molecular components of gene regulatory networks are modeled as discrete variables and are assigned as logical rules describing their regulation through interactions with other components. Stochasticity is modeled at the biological function level under the assumption that even if the expression levels of the input nodes of an update rule guarantee activation or degradation there is a probability that the process will not occur due to stochastic effects . This approach allows a finer analysis of discrete models and provides a natural setup for cell population simulations to study cell-to-cell variability . We applied our methods to two of the most studied regulatory networks, the outcome of lambda phage infection of bacteria and the p53-mdm2 complex.
[ { "type": "R", "before": "manuscript contributes an alternative approach from", "after": "paper contributes an approach as an alternative to", "start_char_pos": 241, "end_char_pos": 292 }, { "type": "R", "before": "interaction among these components are given by logical rules representing the biochemical mechanisms governing their regulation ; stochasticity", "after": "are assigned as logical rules describing their regulation through interactions with other components. Stochasticity", "start_char_pos": 469, "end_char_pos": 613 }, { "type": "R", "before": "function", "after": "rule", "start_char_pos": 746, "end_char_pos": 754 }, { "type": "R", "before": "stochasticity in the process, for instance, some of the chemical reactions encoded by the function fail to occur which lead to a function failure", "after": "stochastic effects", "start_char_pos": 853, "end_char_pos": 998 }, { "type": "R", "before": "provide a natural set up", "after": "provides a natural setup", "start_char_pos": 1062, "end_char_pos": 1086 }, { "type": "A", "before": null, "after": "to study cell-to-cell variability", "start_char_pos": 1119, "end_char_pos": 1119 }, { "type": "R", "before": "best", "after": "most", "start_char_pos": 1159, "end_char_pos": 1163 } ]
[ 0, 116, 235, 318, 599, 1000, 1121 ]
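A minimal sketch in the spirit of the 1201.5578 record above: a Boolean network in which each logical update rule independently fails (leaves its node unchanged) with some probability, simulated over a small cell population. The two-gene wiring, the failure probability and all names are hypothetical, not taken from the paper.

```python
import random

# Toy Boolean network: each node has a logical update rule, and each
# application of a rule fails (node keeps its value) with probability p_fail.
rules = {
    "A": lambda s: not s["B"],   # B represses A (hypothetical wiring)
    "B": lambda s: s["A"],       # A activates B
}

def step(state, p_fail=0.2, rng=random):
    """Synchronous update where every rule may fail independently."""
    nxt = {}
    for node, rule in rules.items():
        target = rule(state)
        # With probability p_fail the biochemical process encoded by the
        # rule does not occur and the node keeps its current value.
        nxt[node] = state[node] if rng.random() < p_fail else target
    return nxt

# A small "cell population": many independent trajectories.
random.seed(1)
population = [{"A": True, "B": False} for _ in range(1000)]
for _ in range(50):
    population = [step(s) for s in population]
print(sum(s["A"] for s in population) / len(population))  # fraction of cells with A on
```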
1201.6643
1
Wolfe-Simon et al. reported isolation of a strain of Halomonas bacteria, GFAJ-1, which could use arsenic as a nutrient when phosphate is limiting, and which could specifically incorporate arsenic into its DNA in place of phosphorus. We have found that arsenate is not needed for growth of GFAJ-1 when phosphate is limiting . Additionally, we used mass spectrometry to show that DNA purified from cells grown with limiting phosphate and abundant arsenate does not contain detectable arsenate.
A strain of Halomonas bacteria, GFAJ-1, has been reported to be able to use arsenate as a nutrient when phosphate is limiting, and to specifically incorporate arsenic into its DNA in place of phosphorus. However, we have found that arsenate does not contribute to growth of GFAJ-1 when phosphate is limiting and that DNA purified from cells grown with limiting phosphate and abundant arsenate does not exhibit the spontaneous hydrolysis expected of arsenate ester bonds. Furthermore, mass spectrometry showed that this DNA contains only trace amounts of free arsenate and no detectable covalently bound arsenate.
[ { "type": "R", "before": "Wolfe-Simon et al. reported isolation of a", "after": "A", "start_char_pos": 0, "end_char_pos": 42 }, { "type": "R", "before": "which could use arsenic", "after": "has been reported to be able to use arsenate", "start_char_pos": 81, "end_char_pos": 104 }, { "type": "R", "before": "which could", "after": "to", "start_char_pos": 151, "end_char_pos": 162 }, { "type": "R", "before": "We", "after": "However, we", "start_char_pos": 233, "end_char_pos": 235 }, { "type": "R", "before": "is not needed for", "after": "does not contribute to", "start_char_pos": 261, "end_char_pos": 278 }, { "type": "R", "before": ". Additionally, we used mass spectrometry to show", "after": "and", "start_char_pos": 323, "end_char_pos": 372 }, { "type": "R", "before": "contain detectable", "after": "exhibit the spontaneous hydrolysis expected of arsenate ester bonds. Furthermore, mass spectrometry showed that this DNA contains only trace amounts of free arsenate and no detectable covalently bound", "start_char_pos": 463, "end_char_pos": 481 } ]
[ 0, 232, 324 ]
1202.0447
1
We present a unified approach to Doob's L^p maximal inequalities for 1\leq p<\infty. The novelty of our method is that these martingale inequalities are obtained as consequences of elementary deterministic counterparts. The latter have a natural interpretation in terms of robust hedging. Moreover our deterministic inequalities lead to new versions of Doob's maximal inequalities. These are best possible in the sense that equality is attained by properly chosen martingales.
We present a unified approach to Doob's L^p maximal inequalities for 1\leq p<\infty. The novelty of our method is that these martingale inequalities are obtained as consequences of elementary deterministic counterparts. The latter have a natural interpretation in terms of robust hedging. Moreover , our deterministic inequalities lead to new versions of Doob's maximal inequalities. These are best possible in the sense that equality is attained by properly chosen martingales.
[ { "type": "R", "before": "deterministic", "after": "deterministic", "start_char_pos": 192, "end_char_pos": 205 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 298, "end_char_pos": 298 } ]
[ 0, 84, 219, 288, 382 ]
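For reference alongside the 1202.0447 record, the classical form of the inequality in question (stated here only for p > 1; the p = 1 case needs an L log L bound):

```latex
% Doob's L^p maximal inequality for a nonnegative submartingale (M_t), p > 1:
\mathbb{E}\!\left[\Big(\sup_{0 \le t \le T} M_t\Big)^{p}\right]
  \;\le\; \Big(\frac{p}{p-1}\Big)^{p}\,\mathbb{E}\big[M_T^{p}\big].
```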
1202.1243
1
MicroRNA can affect the protein translation using nine mechanistically different mechanisms, including repression of initiation and degradation of the transcript. There is a hot debate in the current literature about which mechanism and in which situations has a dominant role in living cells. The worst, same experimental systems dealing with the same pairs of mRNA and miRNA can provide controversial evidences about which is the actual mechanism of translation repression observed in the experiment. We start with reviewing the current state of the art about the knowledge of various mechanisms of miRNA action and suggest that mathematical modeling can help resolving some of the controversial interpretations. We describe three simple mathematical models of miRNA translation that can be used as tools in interpreting the experimental data on the dynamics of protein synthesis. The most complex model developed by us includes all known mechanisms of miRNA action. It allowed us to study possible dynamical patterns corresponding to different miRNA-mediated mechanisms of translation repression and to suggest concrete recipes on determining the dominant mechanism of miRNA action in the form of kinetic signatures. Using computational experiments and systematizing existing evidences from the literature, we justify a hypothesis about co-existence of distinct miRNA-mediated mechanisms of translation repression. The actually observed mechanism will be that acting on the limiting step of translation which might vary from one experimental setting to another. This model explains the majority of existing controversies reported.
MicroRNAs can affect the protein translation using nine mechanistically different mechanisms, including repression of initiation and degradation of the transcript. There is a hot debate in the current literature about which mechanism and in which situations has a dominant role in living cells. The worst, same experimental systems dealing with the same pairs of mRNA and miRNA can provide ambiguous evidences about which is the actual mechanism of translation repression observed in the experiment. We start with reviewing the current knowledge of various mechanisms of miRNA action and suggest that mathematical modeling can help resolving some of the controversial interpretations. We describe three simple mathematical models of miRNA translation that can be used as tools in interpreting the experimental data on the dynamics of protein synthesis. The most complex model developed by us includes all known mechanisms of miRNA action. It allowed us to study possible dynamical patterns corresponding to different miRNA-mediated mechanisms of translation repression and to suggest concrete recipes on determining the dominant mechanism of miRNA action in the form of kinetic signatures. Using computational experiments and systematizing existing evidences from the literature, we justify a hypothesis about co-existence of distinct miRNA-mediated mechanisms of translation repression. The actually observed mechanism will be that acting on or changing the limiting "place" of the translation process. The limiting place can vary from one experimental setting to another. This model explains the majority of existing controversies reported.
[ { "type": "R", "before": "MicroRNA", "after": "MicroRNAs", "start_char_pos": 0, "end_char_pos": 8 }, { "type": "R", "before": "controversial", "after": "ambiguous", "start_char_pos": 389, "end_char_pos": 402 }, { "type": "D", "before": "state of the art about the", "after": null, "start_char_pos": 539, "end_char_pos": 565 }, { "type": "R", "before": "the limiting step of translation which might", "after": "or changing the limiting \"place\" of the translation process. The limiting place can", "start_char_pos": 1473, "end_char_pos": 1517 } ]
[ 0, 162, 293, 502, 714, 882, 968, 1219, 1417, 1564 ]
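A generic toy kinetic sketch to accompany the 1202.1243 record — not one of the paper's three models: miRNA is assumed here to repress translation initiation, and steady-state protein levels are compared with and without miRNA. All rate constants are made up.

```python
# Toy kinetics: mRNA (m) is transcribed and degraded; protein (p) is
# produced at an initiation rate that miRNA (mi) represses.
k_tx, d_m = 1.0, 0.1     # mRNA transcription / degradation
k_init, d_p = 2.0, 0.05  # translation initiation / protein degradation
K = 0.5                  # miRNA repression constant (illustrative)

def rhs(m, p, mi):
    dm = k_tx - d_m * m
    dp = k_init * m / (1.0 + mi / K) - d_p * p   # repressed initiation
    return dm, dp

# Forward-Euler time course for two miRNA levels
for mi in (0.0, 2.0):
    m, p = 0.0, 0.0
    for _ in range(20000):
        dm, dp = rhs(m, p, mi)
        m += 1e-2 * dm
        p += 1e-2 * dp
    print(f"miRNA={mi}: steady protein ~ {p:.1f}")
```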
1202.2076
1
In this paper, we take up the analysis of a principal/agent model with moral hazard introduced in \mbox{%DIFAUXCMD pages , with optimal contracting between competitive investors and an impatient bank monitoring a pool of long-term loans subject to Markovian contagion. We provide here a comprehensive mathematical formulation of the model and show using martingale arguments in the spirit of Sannikov \mbox{%DIFAUXCMD san how the maximization problem with implicit constraints faced by investors can be reduced to a classic stochastic control problem. The approach has the advantage of avoiding the more general techniques based on forward-backward stochastic differential equations described in \mbox{%DIFAUXCMD cviz and leads to a simple recursive system of Hamilton-Jacobi-Bellman equations. We provide a solution to our problem by a verification argument and give an explicit description of both the value function and the optimal contract. Finally, we study the limit case where the bank is no longer impatient.
In this paper, we take up the analysis of a principal/agent model with moral hazard introduced in 17 , with optimal contracting between competitive investors and an impatient bank monitoring a pool of long-term loans subject to Markovian contagion. We provide here a comprehensive mathematical formulation of the model and show using martingale arguments in the spirit of Sannikov 18 how the maximization problem with implicit constraints faced by investors can be reduced to a classical stochastic control problem. The approach has the advantage of avoiding the more general techniques based on forward-backward stochastic differential equations described in 6 and leads to a simple recursive system of Hamilton-Jacobi-Bellman equations. We provide a solution to our problem by a verification argument and give an explicit description of both the value function and the optimal contract. Finally, we study the limit case where the bank is no longer impatient.
[ { "type": "R", "before": "\\mbox{%DIFAUXCMD pages", "after": "17", "start_char_pos": 98, "end_char_pos": 120 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD san", "after": "18", "start_char_pos": 401, "end_char_pos": 421 }, { "type": "R", "before": "classic", "after": "classical", "start_char_pos": 516, "end_char_pos": 523 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD cviz", "after": "6", "start_char_pos": 696, "end_char_pos": 717 } ]
[ 0, 268, 551, 794, 944 ]
1202.3015
1
The cellular phenotype is described by a complex network of molecular interactions. Elucidating network properties that distinguish disease from the healthy state is therefore of great importance for gaining systems-level insights into disease mechanisms and ultimately for developing improved therapies. Recently, statistical mechanical network properties have been studied in the context of cancer networks, yet it is unclear which properties best characterise the cancer phenotype. In this work we take a step in this direction by comparing two different types of molecular entropy in their ability to discriminate cancer from the normal phenotype. One entropy measure (flux entropy) is dynamical in the sense that it is derived from a stochastic process. The second measure (covariance entropy) does not depend on the interaction network and is thus "static". Using multiple gene expression data sets of normal and cancer tissue, we demonstrate that flux entropy is a better discriminator of the cancer phenotype than covariance entropy. Specifically, we show that local flux entropy is always increased in cancer relative to normal tissue while the local covariance entropy is not. We show that gene expression differences between normal and cancer tissue are anticorrelated with local flux entropy changes, thus providing a systemic link between gene expression changes and their local information flux dynamics. We also show that genes located in the intracellular domain demonstrate preferential increases in flux entropy, while the entropy of genes encoding membrane receptors and secreted factors is preferentially reduced. Thus, these results elucidate intrinsic network properties of cancer and support the view that the observed increased robustness of cancer cells to perturbation and therapy may be due to an increase in the dynamical network entropy allowing cells to adapt to extracellular stresses .
The cellular phenotype is described by a complex network of molecular interactions. Elucidating network properties that distinguish disease from the healthy cellular state is therefore of critical importance for gaining systems-level insights into disease mechanisms and ultimately for developing improved therapies. By integrating gene expression data with a protein interaction network to induce a stochastic dynamics on the network, we here demonstrate that cancer cells are characterised by an increase in the dynamic network entropy, compared to cells of normal physiology. Using a fundamental relation between the macroscopic resilience of a dynamical system and the uncertainty (entropy) in the underlying microscopic processes, we argue that cancer cells will be more robust to random gene perturbations. In addition, we formally demonstrate that gene expression differences between normal and cancer tissue are anticorrelated with local dynamic entropy changes, thus providing a systemic link between gene expression changes at the nodes and their local network dynamics. In particular, we also find that genes which drive cell-proliferation in cancer cells and which often encode oncogenes are associated with reductions in the dynamic network entropy. In summary, our results support the view that the observed increased robustness of cancer cells to perturbation and therapy may be due to an increase in the dynamic network entropy that allows cells to adapt to the new cellular stresses. Conversely, genes that exhibit local flux entropy decreases in cancer may render cancer cells more susceptible to targeted intervention and may therefore represent promising drug targets .
[ { "type": "A", "before": null, "after": "cellular", "start_char_pos": 157, "end_char_pos": 157 }, { "type": "R", "before": "great", "after": "critical", "start_char_pos": 180, "end_char_pos": 185 }, { "type": "R", "before": "Recently, statistical mechanical network properties have been studied in the context of cancer networks, yet it is unclear which properties best characterise the cancer phenotype. In this work we take a step in this direction by comparing two different types of molecular entropy in their ability to discriminate cancer from the normal phenotype. One entropy measure (flux entropy) is dynamical in the sense that it is derived from a stochastic process. The second measure (covariance entropy) does not depend on the interaction network and is thus \"static\". Using multiple gene expression data sets of normal and cancer tissue, we demonstrate that flux entropy is a better discriminator of the cancer phenotype than covariance entropy. Specifically, we show that local flux entropy is always increased in cancer relative to normal tissue while the local covariance entropy is not. We show that", "after": "By integrating gene expression data with a protein interaction network to induce a stochastic dynamics on the network, we here demonstrate that cancer cells are characterised by an increase in the dynamic network entropy, compared to cells of normal physiology. Using a fundamental relation between the macroscopic resilience of a dynamical system and the uncertainty (entropy) in the underlying microscopic processes, we argue that cancer cells will be more robust to random gene perturbations. In addition, we formally demonstrate that", "start_char_pos": 306, "end_char_pos": 1200 }, { "type": "R", "before": "flux", "after": "dynamic", "start_char_pos": 1292, "end_char_pos": 1296 }, { "type": "A", "before": null, "after": "at the nodes", "start_char_pos": 1377, "end_char_pos": 1377 }, { "type": "R", "before": "information flux dynamics. We also show that genes located in the intracellular domain demonstrate preferential increases in flux entropy, while the entropy of genes encoding membrane receptors and secreted factors is preferentially reduced. Thus, these results elucidate intrinsic network properties of cancer and", "after": "network dynamics. In particular, we also find that genes which drive cell-proliferation in cancer cells and which often encode oncogenes are associated with reductions in the dynamic network entropy. In summary, our results", "start_char_pos": 1394, "end_char_pos": 1708 }, { "type": "R", "before": "dynamical network entropy allowing", "after": "dynamic network entropy that allows", "start_char_pos": 1842, "end_char_pos": 1876 }, { "type": "R", "before": "extracellular stresses", "after": "the new cellular stresses. Conversely, genes that exhibit local flux entropy decreases in cancer may render cancer cells more susceptible to targeted intervention and may therefore represent promising drug targets", "start_char_pos": 1895, "end_char_pos": 1917 } ]
[ 0, 83, 305, 485, 652, 759, 864, 1042, 1187, 1420, 1635 ]
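A sketch of one plausible reading of the construction in the 1202.3015 record: expression data bias a random walk on the interaction network, and the local entropy of each node's outgoing transition distribution is measured. The weighting p_ij proportional to A_ij * x_j is an assumption, not necessarily the authors' exact choice.

```python
import numpy as np

# Adjacency of a toy undirected interaction network and expression levels x.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
x = np.array([1.0, 4.0, 0.5, 2.0])  # illustrative expression values

# Random-walk transition probabilities biased by neighbour expression.
W = A * x[None, :]
P = W / W.sum(axis=1, keepdims=True)

# Local (per-node) Shannon entropy of the outgoing transition distribution.
with np.errstate(divide="ignore", invalid="ignore"):
    logP = np.where(P > 0, np.log(P), 0.0)
local_entropy = -(P * logP).sum(axis=1)
print(local_entropy)  # higher values = more "random" local signalling
```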
1202.3621
1
We present a unified framework for the preclusion of non-degenerate multiple steady states in a network of interacting species. Interaction networks are modeled via systems of ordinary differential equations in which the form of the species rate function is restricted by the reactions of the network and how the species influence each reaction. We characterize the set of interaction networks for which any choice of associated rate function is injective within each stoichiometric class and thus cannot exhibit multistationarity. Our criteria rely on the determinant of the Jacobian of the species rate functions that belong to the class of so-called general mass-action kinetics. The criteria are computationally tractable and easily implemented. Our approach embraces and extends much previous work on multistationarity, such as work in relation to chemical reaction networks with dynamics defined by mass-action or non-catalytic kinetics, and also work based on the graphical analysis of the interaction graph associated to the system.
We present determinant criteria for the preclusion of non-degenerate multiple steady states in networks of interacting species. A network is modeled as a system of ordinary differential equations in which the form of the species formation rate function is restricted by the reactions of the network and how the species influence each reaction. We characterize families of so-called power-law kinetics for which the associated species formation rate function is injective within each stoichiometric class and thus the network cannot exhibit multistationarity. The criterion for power-law kinetics is derived from the determinant of the Jacobian of the species formation rate function. Using this characterization we further derive similar determinant criteria applicable to general sets of kinetics. The criteria are conceptually simple, computationally tractable and easily implemented. Our approach embraces and extends previous work on multistationarity, such as work in relation to chemical reaction networks with dynamics defined by mass-action or non-catalytic kinetics, and also work based on graphical analysis of the interaction graph associated to the system. Further, we interpret the criteria in terms of circuits in the so-called DSR-graph
[ { "type": "R", "before": "a unified framework", "after": "determinant criteria", "start_char_pos": 11, "end_char_pos": 30 }, { "type": "R", "before": "a network", "after": "networks", "start_char_pos": 94, "end_char_pos": 103 }, { "type": "R", "before": "Interaction networks are modeled via systems", "after": "A network is modeled as a system", "start_char_pos": 128, "end_char_pos": 172 }, { "type": "A", "before": null, "after": "formation", "start_char_pos": 241, "end_char_pos": 241 }, { "type": "R", "before": "the set of interaction networks for which any choice of associated", "after": "families of so-called power-law kinetics for which the associated species formation", "start_char_pos": 363, "end_char_pos": 429 }, { "type": "A", "before": null, "after": "the network", "start_char_pos": 499, "end_char_pos": 499 }, { "type": "R", "before": "Our criteria rely on", "after": "The criterion for power-law kinetics is derived from", "start_char_pos": 534, "end_char_pos": 554 }, { "type": "R", "before": "rate functions that belong to the class of so-called general mass-action", "after": "formation rate function. Using this characterization we further derive similar determinant criteria applicable to general sets of", "start_char_pos": 602, "end_char_pos": 674 }, { "type": "A", "before": null, "after": "conceptually simple,", "start_char_pos": 702, "end_char_pos": 702 }, { "type": "D", "before": "much", "after": null, "start_char_pos": 787, "end_char_pos": 791 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 970, "end_char_pos": 973 }, { "type": "A", "before": null, "after": "Further, we interpret the criteria in terms of circuits in the so-called DSR-graph", "start_char_pos": 1044, "end_char_pos": 1044 } ]
[ 0, 127, 346, 533, 684, 752 ]
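One standard way determinant criteria of the kind in the 1202.3621 record are checked in practice, sketched with sympy on a toy reversible network A + B <-> C; the replacement of redundant Jacobian rows by conservation laws is a common device, and the network itself is illustrative.

```python
import sympy as sp

a, b, c, k1, k2 = sp.symbols("a b c k1 k2", positive=True)

# Toy reversible network  A + B <-> C  with mass-action kinetics.
v = sp.Matrix([k1 * a * b, k2 * c])        # reaction rates
S = sp.Matrix([[-1, 1], [-1, 1], [1, -1]])  # stoichiometric matrix (A, B, C)
f = S * v                                   # species formation rate
J = f.jacobian([a, b, c])

# Conservation laws (left kernel of S): a - b and a + c are constant.
# Replace two rows of J by these relations and test whether the resulting
# determinant keeps one sign for all positive concentrations and rates.
M = sp.Matrix([[1, -1, 0], [1, 0, 1], list(J.row(2))])
print(sp.expand(M.det()))
# -> -a*k1 - b*k1 - k2: strictly negative, so this toy network cannot
#    exhibit multiple steady states within a stoichiometric class.
```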
1202.3796
1
It is now well accepted that cellular responses to materials in a biological medium reflect greatly the adsorbed biomolecular layer, rather than the material itself. Here, we study by molecular dynamic simulations the competitive protein adsorption on a surface ( Vroman-like effect), i.e. the non-monotonic behavior of the amount of protein adsorbed on a surface in contact with plasma as a function of contact time and plasma concentration . We show how the effect can be understood, controlled and inverted.
It is now well accepted that cellular responses to materials in a biological medium reflect greatly the adsorbed biomolecular layer, rather than the material itself. Here, we study by molecular dynamic simulations the competitive protein adsorption on a surface ( Vroman effect), i.e. the non-monotonic behavior of the amount of protein adsorbed on a surface in contact with plasma as a function of contact time and plasma concentration . We find a complex behavior, with regimes during which small and large proteins are not necessarily competing between them, but are both competing with others in solution . We show how the effect can be understood, controlled and inverted.
[ { "type": "R", "before": "Vroman-like", "after": "Vroman", "start_char_pos": 264, "end_char_pos": 275 }, { "type": "A", "before": null, "after": ". We find a complex behavior, with regimes during which small and large proteins are not necessarily competing between them, but are both competing with others in solution", "start_char_pos": 442, "end_char_pos": 442 } ]
[ 0, 165, 444 ]
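A rate-equation caricature of the competitive adsorption in the 1202.3796 record (the paper itself uses molecular dynamics): a small protein adsorbs quickly but weakly and is later displaced by a slowly arriving, strongly binding large protein, giving the non-monotonic coverage. All rates and concentrations are illustrative.

```python
# Toy Langmuir-type competition: S = small protein, L = large protein.
ka_s, kd_s = 5.0, 0.5     # fast on, fast off
ka_l, kd_l = 0.5, 0.01    # slow on, very slow off
cs, cl = 1.0, 1.0         # bulk concentrations
ts, tl = 0.0, 0.0         # surface coverages
dt = 1e-3
for i in range(200000):
    free = max(0.0, 1.0 - ts - tl)
    ts += dt * (ka_s * cs * free - kd_s * ts)
    tl += dt * (ka_l * cl * free - kd_l * tl)
    if i % 40000 == 0:
        print(f"t={i*dt:6.1f}  small={ts:.3f}  large={tl:.3f}")
# Small-protein coverage rises to ~0.9 and then decays as the large
# protein takes over -- the non-monotonic signature discussed above.
```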
1202.3892
1
Motivated by the lack of a suitable framework for analyzing popular stochastic models of Systems Biology, we devise conditions for existence and uniqueness of solutions to certain jump stochastic differential equations ( jump SDEs). Working from simple examples we find reasonable and explicit assumptions on the driving coefficients for the SDE representation to make sense. By `reasonable' we mean that stronger assumptions generally do not hold for systems of practical interest. In particular, we argue against the traditional use of global Lipschitz conditions and certain common growth restrictions. By `explicit', finally, we like to highlight the fact that the various constants occurring among our assumptions all can be determined once the model is fixed. We show how basic perturbation results can be derived in this setting such that these can readily be compared with the corresponding estimates from deterministic dynamics. The main complication is that the natural path-wise representation is generated by a counting measure with an intensity that depends nonlinearly on the state.
Motivated by the lack of a suitable framework for analyzing popular stochastic models of Systems Biology, we devise conditions for existence and uniqueness of solutions to certain jump stochastic differential equations ( SDEs). Working from simple examples we find reasonable and explicit assumptions on the driving coefficients for the SDE representation to make sense. By `reasonable' we mean that stronger assumptions generally do not hold for systems of practical interest. In particular, we argue against the traditional use of global Lipschitz conditions and certain common growth restrictions. By `explicit', finally, we like to highlight the fact that the various constants occurring among our assumptions all can be determined once the model is fixed. We show how basic long time estimates and some limit results for perturbations can be derived in this setting such that these can be contrasted with the corresponding estimates from deterministic dynamics. The main complication is that the natural path-wise representation is generated by a counting measure with an intensity that depends nonlinearly on the state.
[ { "type": "D", "before": "jump", "after": null, "start_char_pos": 221, "end_char_pos": 225 }, { "type": "R", "before": "perturbation results", "after": "long time estimates and some limit results for perturbations", "start_char_pos": 784, "end_char_pos": 804 }, { "type": "R", "before": "readily be compared", "after": "be contrasted", "start_char_pos": 856, "end_char_pos": 875 } ]
[ 0, 232, 375, 482, 605, 765, 937 ]
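A path-wise simulation in the setting of the 1202.3892 record: a Gillespie-style jump process whose jump intensity depends non-linearly on the state. The model (stochastic logistic birth-death, birth rate k1*n, death rate k2*n^2) and all rates are illustrative.

```python
import random

def ssa(n0, k1, k2, t_end, rng=random):
    """Exact simulation of a birth-death chain with state-dependent rates."""
    t, n = 0.0, n0
    while True:
        a_birth, a_death = k1 * n, k2 * n * n
        a_total = a_birth + a_death
        if a_total == 0.0:                 # absorbed at n = 0
            return n
        t += rng.expovariate(a_total)      # exponential waiting time
        if t > t_end:
            return n
        n += 1 if rng.random() < a_birth / a_total else -1

random.seed(0)
samples = [ssa(5, k1=1.0, k2=0.02, t_end=20.0) for _ in range(200)]
print(sum(samples) / len(samples))  # close to the carrying capacity k1/k2 = 50
```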
1202.4007
1
This paper provides approximations to utility indifference prices for a contingent claim in the large position size limit. Results are valid for general utility functions and semi-martingale models. It is shown that as the position size approaches infinity, all utility functions with the same rate of decay for large negative wealths yield the same price. Practically, this means an investor should price like an exponential investor. In a sizeable class of diffusion models, the large position limit is seen to arise naturally in conjunction with the limit of a complete model and hence approximations are most appropriate in this setting .
Approximations to utility indifference prices are provided for a contingent claim in the large position size limit. Results are valid for general utility functions on the real line and semi-martingale models. It is shown that as the position size approaches infinity, the utility function's decay rate for large negative wealths is the primary driver of prices. For utilities with exponential decay, one may price like an exponential investor. For utilities with a power decay, one may price like a power investor after a suitable adjustment to the rate at which the position size becomes large. In a sizable class of diffusion models, limiting indifference prices are explicitly computed for an exponential investor. Furthermore, the large claim limit is seen to endogenously arise as the hedging error for the claim vanishes .
[ { "type": "R", "before": "This paper provides approximations", "after": "Approximations", "start_char_pos": 0, "end_char_pos": 34 }, { "type": "A", "before": null, "after": "are provided", "start_char_pos": 66, "end_char_pos": 66 }, { "type": "A", "before": null, "after": "on the real line", "start_char_pos": 172, "end_char_pos": 172 }, { "type": "R", "before": "all utility functions with the same rate of decay", "after": "the utility function's decay rate", "start_char_pos": 260, "end_char_pos": 309 }, { "type": "R", "before": "yield the same price. Practically, this means an investor should", "after": "is the primary driver of prices. For utilities with exponential decay, one may", "start_char_pos": 337, "end_char_pos": 401 }, { "type": "R", "before": "In a sizeable", "after": "For utilities with a power decay, one may price like a power investor after a suitable adjustment to the rate at which the position size becomes large. In a sizable", "start_char_pos": 438, "end_char_pos": 451 }, { "type": "R", "before": "the large position", "after": "limiting indifference prices are explicitly computed for an exponential investor. Furthermore, the large claim", "start_char_pos": 479, "end_char_pos": 497 }, { "type": "R", "before": "arise naturally in conjunction with the limit of a complete model and hence approximations are most appropriate in this setting", "after": "endogenously arise as the hedging error for the claim vanishes", "start_char_pos": 515, "end_char_pos": 642 } ]
[ 0, 123, 200, 358, 437 ]
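For reference alongside the 1202.4007 record, the standard definition of the utility indifference price that the large-position limit is taken over (X_T ranges over terminal gains of admissible strategies):

```latex
% Utility indifference price p(q) of q units of a claim B: the cash amount
% making the agent indifferent between trading with and without the claim.
\sup_{X}\,\mathbb{E}\!\left[U\!\left(x - p(q) + X_T + qB\right)\right]
  \;=\; \sup_{X}\,\mathbb{E}\!\left[U\!\left(x + X_T\right)\right].
% The limit discussed above concerns p(q)/q as q -> infinity; the
% exponential case is U(x) = -e^{-\gamma x}.
```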
1202.4918
1
Decision making of agents who are members of a society is analyzed from the point of view of quantum decision theory. This generalizes the approach, developed earlier by the authors for separate individuals , to decision making under the influence of social interactions . The generalized approach not only avoids paradoxes , typical of classical decision making based on utility theory, but also explains the error-attenuation effects observed for the paradoxes occurring when decision makers, who are members of a society, consult with each other increasing in this way the available mutual information .
The influence of additional information on the decision making of agents , who are interacting members of a society , is analyzed within the mathematical framework based on the use of quantum probabilities. The introduction of social interactions, which influence the decisions of individual agents, leads to a generalization of the quantum decision theory developed earlier by the authors for separate individuals . The generalized approach is free of the standard paradoxes of classical decision theory. This approach also explains the error-attenuation effects observed for the paradoxes occurring when decision makers, who are members of a society, consult with each other , increasing in this way the available mutual information . A precise correspondence between quantum decision theory and classical utility theory is formulated via the introduction of an intermediate probabilistic version of utility theory of a novel form, which obeys the requirement that zero-utility prospects should have zero probability weights .
[ { "type": "R", "before": "Decision", "after": "The influence of additional information on the decision", "start_char_pos": 0, "end_char_pos": 8 }, { "type": "R", "before": "who are", "after": ", who are interacting", "start_char_pos": 26, "end_char_pos": 33 }, { "type": "R", "before": "is analyzed from the point of view of quantum decision theory. This generalizes the approach,", "after": ", is analyzed within the mathematical framework based on the use of quantum probabilities. The introduction of social interactions, which influence the decisions of individual agents, leads to a generalization of the quantum decision theory", "start_char_pos": 55, "end_char_pos": 148 }, { "type": "D", "before": ", to decision making under the influence of social interactions", "after": null, "start_char_pos": 207, "end_char_pos": 270 }, { "type": "R", "before": "not only avoids paradoxes , typical", "after": "is free of the standard paradoxes", "start_char_pos": 298, "end_char_pos": 333 }, { "type": "R", "before": "making based on utility theory, but", "after": "theory. This approach", "start_char_pos": 356, "end_char_pos": 391 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 549, "end_char_pos": 549 }, { "type": "A", "before": null, "after": ". A precise correspondence between quantum decision theory and classical utility theory is formulated via the introduction of an intermediate probabilistic version of utility theory of a novel form, which obeys the requirement that zero-utility prospects should have zero probability weights", "start_char_pos": 606, "end_char_pos": 606 } ]
[ 0, 117, 272 ]
1202.5092
1
We discuss chemical reaction networks and metabolic pathways based on stoichiometric network analysis, and introduce deformed toric ideal constraints by the algebraic geometrical approach. With the deformed toric ideal constraints, the shape of flux is constrained without introducing ad hoc constraints. To illustrate the effectiveness of such constraints, we discuss two examples of chemical reaction network and metabolic pathway; in the former the shape of flux is constrained completely by deformed toric ideal constraints, and in the latter, it is shown the deformed toric ideal constrains the parameters of flux at least partially .
We discuss chemical reaction networks and metabolic pathways based on stoichiometric network analysis, and introduce deformed toric ideal constraints by the algebraic geometrical approach. This paper concerns steady state flux of chemical reaction networks and metabolic pathways. With the deformed toric ideal constraints, the linear combination parameters of extreme pathways are automatically constrained without introducing ad hoc constraints. To illustrate the effectiveness of such constraints, we discuss two examples of chemical reaction network and metabolic pathway; in the former the flux and the concentrations are constrained completely by deformed toric ideal constraints, and in the latter, it is shown the deformed toric ideal constrains the linear combination parameters of flux at least partially . Even in the latter case, the flux and the concentrations are constrained completely with the additional constraint that the total amount of enzyme is constant .
[ { "type": "A", "before": null, "after": "This paper concerns steady state flux of chemical reaction networks and metabolic pathways.", "start_char_pos": 189, "end_char_pos": 189 }, { "type": "R", "before": "shape of flux is", "after": "linear combination parameters of extreme pathways are automatically", "start_char_pos": 237, "end_char_pos": 253 }, { "type": "R", "before": "shape of flux is", "after": "flux and the concentrations are", "start_char_pos": 453, "end_char_pos": 469 }, { "type": "A", "before": null, "after": "linear combination", "start_char_pos": 601, "end_char_pos": 601 }, { "type": "A", "before": null, "after": ". Even in the latter case, the flux and the concentrations are constrained completely with the additional constraint that the total amount of enzyme is constant", "start_char_pos": 640, "end_char_pos": 640 } ]
[ 0, 188, 305, 434 ]
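A toy illustration of the algebraic route in the 1202.5092 record: a Groebner basis of the steady-state ideal of a hypothetical two-species mass-action network, computed with sympy. Network and rate symbols are illustrative.

```python
import sympy as sp

a, b, k1, k2 = sp.symbols("a b k1 k2")

# Toy mass-action network  A + B -> 2B,  B -> A  (rates k1, k2).
f_a = -k1 * a * b + k2 * b
f_b = k1 * a * b - k2 * b

# Groebner basis of the steady-state ideal (lexicographic order); the
# binomial generator reflects the toric structure exploited above.
print(sp.groebner([f_a, f_b], a, b, k1, k2, order="lex"))
# -> a single binomial generator, a*b*k1 - b*k2
```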
1202.5362
1
A recurring motif in gene regulatory networks is transcription factors (TFs) that regulate each other, and then bind to overlapping sites on DNA, where they interact and synergistically control transcription of a target gene. Here, we suggest that this motif maximizes information flow in a noisy network. Gene expression is an inherently noisy process due to thermal fluctuations and the small number of molecules involved. A consequence of multiple TFs interacting at overlapping binding sites is that their binding noise becomes correlated. Using concepts from information theory we show that a signaling pathway transmits more information if 1) the noise of one input is correlated with that of the other, and 2) the input signals are not chosen independently. In the case of TFs, the latter criterion hints at up-stream cross-regulation. We explicitly demonstrate these ideas for the toy model of two TFs competing for the same binding site. We suggest that this mechanism potentially explains the motif of a coherent feed-forward loop terminating in overlapping binding sites commonly found in developmental networks , and discuss three specific examples. The systematic method proposed herein can be used to shed light on TF cross-regulation networks either from direct measurements of binding noise, or bioinformatic analysis of overlapping binding sites .
A recurring motif in gene regulatory networks is transcription factors (TFs) that regulate each other, and then bind to overlapping sites on DNA, where they interact and synergistically control transcription of a target gene. Here, we suggest that this motif maximizes information flow in a noisy network. Gene expression is an inherently noisy process due to thermal fluctuations and the small number of molecules involved. A consequence of multiple TFs interacting at overlapping binding-sites is that their binding noise becomes correlated. Using concepts from information theory , we show that in general a signaling pathway transmits more information if 1) noise of one input is correlated with that of the other, 2) input signals are not chosen independently. In the case of TFs, the latter criterion hints at up-stream cross-regulation. We demonstrate these ideas for competing TFs and feed-forward gene regulatory modules , and discuss generalizations to other signaling pathways. Our results challenge the conventional approach of treating biological noise as uncorrelated fluctuations, and present a systematic method for understanding TF cross-regulation networks either from direct measurements of binding noise, or bioinformatic analysis of overlapping binding-sites .
[ { "type": "R", "before": "binding sites", "after": "binding-sites", "start_char_pos": 482, "end_char_pos": 495 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 583, "end_char_pos": 583 }, { "type": "A", "before": null, "after": "in general", "start_char_pos": 597, "end_char_pos": 597 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 651, "end_char_pos": 654 }, { "type": "D", "before": "and", "after": null, "start_char_pos": 712, "end_char_pos": 715 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 719, "end_char_pos": 722 }, { "type": "D", "before": "explicitly", "after": null, "start_char_pos": 848, "end_char_pos": 858 }, { "type": "R", "before": "the toy model of two TFs competing for the same binding site. We suggest that this mechanism potentially explains the motif of a coherent", "after": "competing TFs and", "start_char_pos": 887, "end_char_pos": 1024 }, { "type": "R", "before": "loop terminating in overlapping binding sites commonly found in developmental networks", "after": "gene regulatory modules", "start_char_pos": 1038, "end_char_pos": 1124 }, { "type": "R", "before": "three specific examples. The systematic method proposed herein can be used to shed light on", "after": "generalizations to other signaling pathways. Our results challenge the conventional approach of treating biological noise as uncorrelated fluctuations, and present a systematic method for understanding", "start_char_pos": 1139, "end_char_pos": 1230 }, { "type": "R", "before": "binding sites", "after": "binding-sites", "start_char_pos": 1351, "end_char_pos": 1364 } ]
[ 0, 225, 305, 424, 543, 766, 844, 948, 1163 ]
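A toy Gaussian calculation consistent with criterion (1) of the 1202.5362 record — correlated input noise can raise transmitted information. The two-input additive-noise channel here is illustrative, not the paper's binding-noise model.

```python
import numpy as np

def gaussian_mi(sigma_x, sigma_n):
    """I(X;Y) in nats for Y = X + N, with X and N independent Gaussians."""
    sigma_y = sigma_x + sigma_n
    return 0.5 * np.log(np.linalg.det(sigma_y) / np.linalg.det(sigma_n))

sigma_x = np.eye(2)                     # two independent input signals
for rho in (0.0, 0.9):                  # noise correlation between channels
    sigma_n = np.array([[1.0, rho], [rho, 1.0]])
    print(rho, gaussian_mi(sigma_x, sigma_n))
# Correlated noise (rho = 0.9) transmits ~1.41 nats versus ~0.69 nats for
# uncorrelated noise -- more information through the same marginal noise.
```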
1202.5702
1
In this paper, the market extension of set-valued risk measures for models with proportional transaction costs is linked with set-valued risk minimization problems. As a particular example, the set-valued average value at risk (AV@R) is defined and its market extension and corresponding risk minimization problems are studied. We show that for a finite probability space the calculation of the values of AV@R reduces to linear vector optimization problems which can be solved using known algorithms. The formulation of AV@R as a linear vector optimization problem is an extension of the corresponding scalar result by Rockafellar and Uryasev .
New versions of the set-valued average value at risk for multivariate risks are introduced by generalizing the well-known certainty equivalent representation to the set-valued case. The first "regulator" version is independent from any market model whereas the second version, called the market extension, takes trading opportunities into account. Essential properties of both versions are proven and an algorithmic approach is provided which admits to compute the values of both version over finite probability spaces. Several examples illustrate various features of the theoretical constructions .
[ { "type": "R", "before": "In this paper, the market extension of set-valued risk measures for models with proportional transaction costs is linked with set-valued risk minimization problems. As a particular example,", "after": "New versions of", "start_char_pos": 0, "end_char_pos": 189 }, { "type": "R", "before": "(AV@R) is defined and its market extension and corresponding risk minimization problems are studied. We show that for a finite probability space the calculation of", "after": "for multivariate risks are introduced by generalizing the well-known certainty equivalent representation to the set-valued case. The first \"regulator\" version is independent from any market model whereas the second version, called the market extension, takes trading opportunities into account. Essential properties of both versions are proven and an algorithmic approach is provided which admits to compute", "start_char_pos": 227, "end_char_pos": 390 }, { "type": "R", "before": "AV@R reduces to linear vector optimization problems which can be solved using known algorithms. The formulation of AV@R as a linear vector optimization problem is an extension of the corresponding scalar result by Rockafellar and Uryasev", "after": "both version over finite probability spaces. Several examples illustrate various features of the theoretical constructions", "start_char_pos": 405, "end_char_pos": 642 } ]
[ 0, 164, 327, 500 ]
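For reference alongside the 1202.5702 record, one common scalar convention for the average value at risk whose set-valued analogue is constructed above:

```latex
% Value at risk and average value at risk at level \lambda \in (0,1]:
V@R_{\lambda}(X) \;=\; \inf\{\, m \in \mathbb{R} : \mathbb{P}(X + m < 0) \le \lambda \,\},
\qquad
AV@R_{\lambda}(X) \;=\; \frac{1}{\lambda}\int_{0}^{\lambda} V@R_{u}(X)\,\mathrm{d}u .
```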
1202.5983
1
Observing prices of European put and call options, we calibrate exponential L\'evy models nonparametrically. We discuss the implementation of the spectral estimation procedures for L\'evy models of finite jump activity as well as for self-decomposable L\'evy models and improve these methods. Confidence intervals are constructed for the estimators in the finite activity case. They allow inference on the behavior of the parameters when the option prices are observed in a sequence of trading days . We compare the performance of the procedures for finite and infinite jump activity based on real option data .
Observing prices of European put and call options, we calibrate exponential L\'evy models nonparametrically. We discuss the efficient implementation of the spectral estimation procedures for L\'evy models of finite jump activity as well as for self-decomposable L\'evy models . Based on finite sample variances, confidence intervals are constructed for the volatility, for the drift and, pointwise, for the jump density. As demonstrated by simulations, these intervals perform well in terms of size and coverage probabilities . We compare the performance of the procedures for finite and infinite jump activity based on options on the German DAX index and find that both methods achieve good calibration results. The stability of the finite activity model is studied when the option prices are observed in a sequence of trading days .
[ { "type": "A", "before": null, "after": "efficient", "start_char_pos": 124, "end_char_pos": 124 }, { "type": "R", "before": "and improve these methods. Confidence", "after": ". Based on finite sample variances, confidence", "start_char_pos": 267, "end_char_pos": 304 }, { "type": "R", "before": "estimators in the finite activity case. They allow inference on the behavior of the parameters when the option prices are observed in a sequence of trading days", "after": "volatility, for the drift and, pointwise, for the jump density. As demonstrated by simulations, these intervals perform well in terms of size and coverage probabilities", "start_char_pos": 339, "end_char_pos": 499 }, { "type": "R", "before": "real option data", "after": "options on the German DAX index and find that both methods achieve good calibration results. The stability of the finite activity model is studied when the option prices are observed in a sequence of trading days", "start_char_pos": 594, "end_char_pos": 610 } ]
[ 0, 108, 293, 378, 501 ]
1202.6188
1
We study a novel pricing operator for complete, local martingale models. The new pricing operator guarantees put-call parity to hold and the value of a forward contract to match the buy-and-hold strategy, even if the underlying follows strict local martingale dynamics. More precisely, we discuss a change of num\'eraire (change of currency) technique when the underlying is only a local martingale modelling for example an exchange rate. The new pricing operator assigns prices to contingent claims according to the minimal cost for replication strategies that succeed with probability one for both currencies as num\'eraire. Within this context, we interpret the non-martingality of an exchange-rate as a reflection of the possibility that the num\'eraire currency may devalue completely against the asset currency (hyperinflation).
We study a novel pricing operator for complete, local martingale models. The new pricing operator guarantees put-call parity to hold for model prices and the value of a forward contract to match the buy-and-hold strategy, even if the underlying follows strict local martingale dynamics. More precisely, we discuss a change of num\'eraire (change of currency) technique when the underlying is only a local martingale modelling for example an exchange rate. The new pricing operator assigns prices to contingent claims according to the minimal cost for superreplication strategies that succeed with probability one for both currencies as num\'eraire. Within this context, we interpret the lack of the martingale property of an exchange-rate as a reflection of the possibility that the num\'eraire currency may devalue completely against the asset currency (hyperinflation).
[ { "type": "A", "before": null, "after": "for model prices", "start_char_pos": 133, "end_char_pos": 133 }, { "type": "R", "before": "replication", "after": "superreplication", "start_char_pos": 535, "end_char_pos": 546 }, { "type": "R", "before": "non-martingality", "after": "lack of the martingale property", "start_char_pos": 666, "end_char_pos": 682 } ]
[ 0, 72, 270, 439, 627 ]
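For reference alongside the 1202.6188 record, the parity relation the modified pricing operator restores (standard statement, with K the strike and B(t,T) the zero-coupon bond price):

```latex
% Put-call parity, which can fail for classical risk-neutral prices under
% strict local martingale dynamics but holds for the new model prices:
C_t - P_t \;=\; S_t - K\,B(t,T).
```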
1202.6522
1
In this paper, we report on very efficient algorithms for the spherical harmonic transform (SHT) that can be used in numerical simulations of partial differential equations . Explicitly vectorized variations of the Gauss-Legendre algorithm are discussed and implemented in the open-source library SHTns which includes scalar and vector transforms. This library is especially suitable for direct numerical simulations of non-linear partial differential equations in spherical geometry, like the Navier-Stokes equation. The performance of our algorithms is compared to third party SHT implementations, including fast algorithms . Even though the complexity of the algorithms implemented in SHTns are of order O(N^3) (where N is the maximum harmonic degree of the transform), they perform much better than the available implementations of asymptotically fast algorithms, even for a truncation as high as N=1023. In our performance tests, the best performance for SHT on the x86 platform is delivered by SHTns , which is available at URL/nschaeff/shtns as open source software.
In this paper, we report on very efficient algorithms for the spherical harmonic transform (SHT) . Explicitly vectorized variations of the algorithm based on the Gauss-Legendre quadrature are discussed and implemented in the SHTns library which includes scalar and vector transforms. The main breakthrough is to achieve very efficient on-the-fly computations of the Legendre associated functions, even for very high resolutions, by taking advantage of the specific properties of the SHT and the advanced capabilities of current and future computers. This allows us to simultaneously and significantly reduce memory usage and computation time of the SHT. We measure the performance and accuracy of our algorithms . Even though the complexity of the algorithms implemented in SHTns are in O(N^3) (where N is the maximum harmonic degree of the transform), they perform much better than any third party implementation, including lower complexity algorithms, even for truncations as high as N=1023. SHTns is available at URL/nschaeff/shtns as open source software.
[ { "type": "D", "before": "that can be used in numerical simulations of partial differential equations", "after": null, "start_char_pos": 97, "end_char_pos": 172 }, { "type": "R", "before": "Gauss-Legendre algorithm", "after": "algorithm based on the Gauss-Legendre quadrature", "start_char_pos": 215, "end_char_pos": 239 }, { "type": "R", "before": "open-source library SHTns", "after": "SHTns library", "start_char_pos": 277, "end_char_pos": 302 }, { "type": "R", "before": "This library is especially suitable for direct numerical simulations of non-linear partial differential equations in spherical geometry, like the Navier-Stokes equation. The performance", "after": "The main breakthrough is to achieve very efficient on-the-fly computations of the Legendre associated functions, even for very high resolutions, by taking advantage of the specific properties of the SHT and the advanced capabilities of current and future computers. This allows us to simultaneously and significantly reduce memory usage and computation time of the SHT. We measure the performance and accuracy", "start_char_pos": 348, "end_char_pos": 533 }, { "type": "D", "before": "is compared to third party SHT implementations, including fast algorithms", "after": null, "start_char_pos": 552, "end_char_pos": 625 }, { "type": "R", "before": "of order", "after": "in", "start_char_pos": 698, "end_char_pos": 706 }, { "type": "R", "before": "the available implementations of asymptotically fast", "after": "any third party implementation, including lower complexity", "start_char_pos": 803, "end_char_pos": 855 }, { "type": "R", "before": "a truncation", "after": "truncations", "start_char_pos": 877, "end_char_pos": 889 }, { "type": "R", "before": "In our performance tests, the best performance for SHT on the x86 platform is delivered by SHTns , which", "after": "SHTns", "start_char_pos": 909, "end_char_pos": 1013 } ]
[ 0, 347, 517, 627, 908 ]
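A deliberately naive Gauss-Legendre spherical harmonic analysis to accompany the 1202.6522 record — the O(N^3)-type algorithm family SHTns optimises; for real performance one would call SHTns itself. The resolution and test field are arbitrary, and scipy's azimuth-first sph_harm convention is assumed.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import sph_harm

lmax = 15
nlat, nphi = lmax + 1, 2 * lmax + 2
x, w = leggauss(nlat)                 # nodes cos(theta) and quadrature weights
theta = np.arccos(x)
phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
T, P = np.meshgrid(theta, phi, indexing="ij")

# Test field: the real part of a single harmonic Y_3^2.
f = sph_harm(2, 3, P, T).real         # scipy order: (m, l, azimuth, colatitude)

def analys(f, l, m):
    """Coefficient <f, Y_l^m> by Gauss-Legendre x trapezoidal quadrature."""
    Ylm = sph_harm(m, l, P, T)
    return np.sum(w[:, None] * np.conj(Ylm) * f) * (2.0 * np.pi / nphi)

print(abs(analys(f, 3, 2)))   # ~0.5 (real part of Y_3^2 mixes +m and -m)
print(abs(analys(f, 2, 2)))   # ~0   (orthogonality)
```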
1202.6611
1
Confidence intervals and joint confidence sets are constructed for the nonparametric calibration of exponential L\'evy models based on prices of European options. This is done by showing joint asymptotic normality for the estimation of the volatility, the drift, the intensity and the L\'evy density at finitely many points in the spectral calibration method. Furthermore, the asymptotic normality result leads to a test on the value of the volatility in exponential L\'evy models .
Confidence intervals and joint confidence sets are constructed for the nonparametric calibration of exponential L\'evy models based on prices of European options. To this end, we show joint asymptotic normality in the spectral calibration method for the estimators of the volatility, the drift, the jump intensity and the L\'evy density at finitely many points .
[ { "type": "R", "before": "This is done by showing", "after": "To this end, we show", "start_char_pos": 163, "end_char_pos": 186 }, { "type": "R", "before": "for the estimation", "after": "in the spectral calibration method for the estimators", "start_char_pos": 214, "end_char_pos": 232 }, { "type": "A", "before": null, "after": "jump", "start_char_pos": 267, "end_char_pos": 267 }, { "type": "D", "before": "in the spectral calibration method. Furthermore, the asymptotic normality result leads to a test on the value of the volatility in exponential L\\'evy models", "after": null, "start_char_pos": 325, "end_char_pos": 481 } ]
[ 0, 162, 360 ]
1203.3248
1
We have proposed a new toy model of a heteropolymer chain capable of forming a cactus-like hierarchical secondary structure typical for RNA molecules. The specific feature of this model consists in the fact, that the sequential intervals between neighboring along a chain monomers are considered as quenched random variables. Using the optimization procedure for a special class of concave-type potentials, borrowed from optimal transport analysis, we have derived the stochastic differential equation for the ground state free energy of the chain . We have considered various distribution functions of intervals between neighboring monomers (truncated Gaussian and scale-free) and have demonstrated the existence of a topological transition from sequential to essentially embedded (nested) configurations of paired links.
We propose a new toy model of a heteropolymer chain capable of forming planar secondary structures typical for RNA molecules. In this model the sequential intervals between neighboring monomers along a chain are considered as quenched random variables. Using the optimization procedure for a special class of concave--type potentials, borrowed from optimal transport analysis, we derive the local difference equation for the ground state free energy of the chain with the planar (RNA--like) architecture of paired links. We consider various distribution functions of intervals between neighboring monomers (truncated Gaussian and scale--free) and demonstrate the existence of a topological crossover from sequential to essentially embedded (nested) configurations of paired links.
[ { "type": "R", "before": "have proposed", "after": "propose", "start_char_pos": 3, "end_char_pos": 16 }, { "type": "R", "before": "a cactus-like hierarchical secondary structure", "after": "planar secondary structures", "start_char_pos": 77, "end_char_pos": 123 }, { "type": "R", "before": "The specific feature of this model consists in the fact, that the", "after": "In this model the", "start_char_pos": 151, "end_char_pos": 216 }, { "type": "A", "before": null, "after": "monomers", "start_char_pos": 258, "end_char_pos": 258 }, { "type": "D", "before": "monomers", "after": null, "start_char_pos": 273, "end_char_pos": 281 }, { "type": "R", "before": "concave-type", "after": "concave--type", "start_char_pos": 383, "end_char_pos": 395 }, { "type": "R", "before": "have derived the stochastic differential", "after": "derive the local difference", "start_char_pos": 453, "end_char_pos": 493 }, { "type": "R", "before": ". We have considered", "after": "with the planar (RNA--like) architecture of paired links. We consider", "start_char_pos": 549, "end_char_pos": 569 }, { "type": "R", "before": "scale-free) and have demonstrated", "after": "scale--free) and demonstrate", "start_char_pos": 667, "end_char_pos": 700 }, { "type": "R", "before": "transition", "after": "crossover", "start_char_pos": 732, "end_char_pos": 742 } ]
[ 0, 150, 326, 550 ]
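The nested (planar, non-crossing) pairing architecture discussed in the 1203.3248 record is the combinatorial structure handled by the classic Nussinov recursion; a sketch of that recursion follows (an illustration of nested pairings, not the paper's stochastic free-energy model).

```python
# Maximum number of nested (non-crossing) pairings in a sequence.
def max_nested_pairs(seq,
                     pairs={("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                 # position i left unpaired
            for k in range(i + 1, j + 1):       # or i paired with some k
                if (seq[i], seq[k]) in pairs:
                    left = dp[i + 1][k - 1] if k > i + 1 else 0
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]

print(max_nested_pairs("GGGAAAUCC"))   # -> 3
```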
1203.3757
1
In this paper we study a continuous time, optimal stochastic investment problem under limited resources in a market with N firms. The investment processes are subject to a time-dependent stochastic constraint. Rather than using a dynamic programming approach, we exploit the concavity of the profit functional to derive some necessary and sufficient first order conditions for the corresponding Social Planner optimal policy. Our conditions are a stochastic infinite-dimensional generalization of the Kuhn-Tucker Theorem. ] As a subproduct we obtain an enlightening interpretation of the first order conditions for a single firm in Bank 5 . In the infinite-horizon case, with operating profit functions of Cobb-Douglas type, our method allows the explicit calculation of the optimal policy in terms of the base capacity process, i.e. the unique solution of the Bank and El Karoui representation problem 4 .
In this paper we study a continuous time, optimal stochastic investment problem under limited resources in a market with N firms. The investment processes are subject to a time-dependent stochastic constraint. Rather than using a dynamic programming approach, we exploit the concavity of the profit functional to derive some necessary and sufficient first order conditions for the corresponding Social Planner optimal policy. Our conditions are a stochastic infinite-dimensional generalization of the Kuhn-Tucker Theorem. The Lagrange multiplier takes the form of a nonnegative optional random measure on 0,T] which is flat off the set of times for which the constraint is binding, i.e. when all the fuel is spent. As a subproduct we obtain an enlightening interpretation of the first order conditions for a single firm in Bank (2005) . In the infinite-horizon case, with operating profit functions of Cobb-Douglas type, our method allows the explicit calculation of the optimal policy in terms of the `base capacity' process, i.e. the unique solution of the Bank and El Karoui representation problem (2004) .
[ { "type": "A", "before": null, "after": "The Lagrange multiplier takes the form of a nonnegative optional random measure on", "start_char_pos": 522, "end_char_pos": 522 }, { "type": "A", "before": null, "after": "0,T", "start_char_pos": 523, "end_char_pos": 523 }, { "type": "A", "before": null, "after": "which is flat off the set of times for which the constraint is binding, i.e. when all the fuel is spent.", "start_char_pos": 525, "end_char_pos": 525 }, { "type": "R", "before": "5", "after": "(2005)", "start_char_pos": 639, "end_char_pos": 640 }, { "type": "R", "before": "base capacity", "after": "`base capacity'", "start_char_pos": 808, "end_char_pos": 821 }, { "type": "R", "before": "4", "after": "(2004)", "start_char_pos": 905, "end_char_pos": 906 } ]
[ 0, 129, 209, 425, 521 ]
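The record above derives stochastic Kuhn-Tucker conditions for a Social Planner allocating a limited resource across N firms with Cobb-Douglas profits. As a purely deterministic caricature of those first-order conditions (a static one-shot allocation with invented productivities; the paper's multiplier is a random measure, not a scalar), the sketch below recovers the optimal split by the classical KKT recipe.

```python
import numpy as np

a = np.array([2.0, 1.0, 0.5, 3.0])    # hypothetical firm productivities
theta = 0.6                            # Cobb-Douglas exponent, 0 < theta < 1
C = 10.0                               # total resource ("fuel") available

# KKT stationarity: theta * a_i * x_i**(theta - 1) = lam for every active firm,
# so x_i(lam) = (theta * a_i / lam)**(1 / (1 - theta)).
def demand(lam):
    return (theta * a / lam) ** (1.0 / (1.0 - theta))

# Profits are increasing, so the constraint binds and lam > 0 is pinned down
# by sum x_i = C; demand is decreasing in lam, so bisection applies.
lo, hi = 1e-9, 1e9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if demand(mid).sum() > C:
        lo = mid        # multiplier too small: firms demand more than C
    else:
        hi = mid
lam = 0.5 * (lo + hi)
x = demand(lam)
print("multiplier:", lam)
print("allocation:", x, "sum =", x.sum())
```

The complementary-slackness logic here is the static shadow of the paper's statement that the multiplier measure is flat off the set of times where the constraint binds.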
1203.4054
1
In this paper, we present an approach to predict the total CPU utilization in terms of CPU clock tick of applications when running on MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile total CPU clock tick of the application on a given platform. In the modeling phase, multi linear regression is used to map the sets of MapReduce configuration parameters (number of Mappers, number of Reducers, size of File System (HDFS) and the size of input file) to total CPU clock ticks of the application . This derived model can be used for predicting total CPU requirements of the same application when using MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard applications (WordCount, Exim Mainlog parsingand Terasort)are used to evaluate our modeling technique on pseudo-distributed MapReduce platforms . Results show that our automated model generation procedure can effectively characterize total CPU clock tick of these applications with average prediction error of 3.5\% , 4.05\% and 2.75\%, respectively .
Recently, businesses have started using MapReduce as a popular computation framework for processing large amount of data, such as spam detection, and different data mining tasks, in both public and private clouds. Two of the challenging questions in such environments are (1) choosing suitable values for MapReduce configuration parameters -e.g., number of mappers, number of reducers, and DFS block size-, and (2) predicting the amount of resources that a user should lease from the service provider. Currently, the tasks of both choosing configuration parameters and estimating required resources are solely the users' responsibilities. In this paper, we present an approach to provision the total CPU usage in clock cycles of jobs in MapReduce environment. For a MapReduce job, a profile of total CPU usage in clock cycles is built from the job past executions with different values of two configuration parameters e.g., number of mappers, and number of reducers. Then, a polynomial regression is used to model the relation between these configuration parameters and total CPU usage in clock cycles of the job. We also briefly study the influence of input data scaling on measured total CPU usage in clock cycles . This derived model along with the scaling result can then be used to provision the total CPU usage in clock cycles of the same jobs with different input data size. We validate the accuracy of our models using three realistic applications (WordCount, Exim MainLog parsing, and TeraSort) . Results show that the predicted total CPU usage in clock cycles of generated resource provisioning options are less than 8\% of the measured total CPU usage in clock cycles in our 20-node virtual Hadoop cluster .
[ { "type": "A", "before": null, "after": "Recently, businesses have started using MapReduce as a popular computation framework for processing large amount of data, such as spam detection, and different data mining tasks, in both public and private clouds. Two of the challenging questions in such environments are (1) choosing suitable values for MapReduce configuration parameters -e.g., number of mappers, number of reducers, and DFS block size-, and (2) predicting the amount of resources that a user should lease from the service provider. Currently, the tasks of both choosing configuration parameters and estimating required resources are solely the users' responsibilities.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "predict", "after": "provision", "start_char_pos": 42, "end_char_pos": 49 }, { "type": "R", "before": "utilization in terms of CPU clock tick of applications when running on MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile total CPU clock tick of the application on a given platform. In the modeling phase, multi linear", "after": "usage in clock cycles of jobs in MapReduce environment. For a MapReduce job, a profile of total CPU usage in clock cycles is built from the job past executions with different values of two configuration parameters e.g., number of mappers, and number of reducers. Then, a polynomial", "start_char_pos": 64, "end_char_pos": 438 }, { "type": "R", "before": "map the sets of MapReduce configuration parameters (number of Mappers, number of Reducers, size of File System (HDFS) and the size of input file) to total CPU clock ticks of the application", "after": "model the relation between these configuration parameters and total CPU usage in clock cycles of the job. We also briefly study the influence of input data scaling on measured total CPU usage in clock cycles", "start_char_pos": 461, "end_char_pos": 650 }, { "type": "R", "before": "can be used for predicting total CPU requirements", "after": "along with the scaling result can then be used to provision the total CPU usage in clock cycles", "start_char_pos": 672, "end_char_pos": 721 }, { "type": "R", "before": "application when using MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard", "after": "jobs with different input data size. We validate the accuracy of our models using three realistic", "start_char_pos": 734, "end_char_pos": 914 }, { "type": "R", "before": "Mainlog parsingand Terasort)are used to evaluate our modeling technique on pseudo-distributed MapReduce platforms", "after": "MainLog parsing, and TeraSort)", "start_char_pos": 945, "end_char_pos": 1058 }, { "type": "R", "before": "our automated model generation procedure can effectively characterize total CPU clock tick of these applications with average prediction error of 3.5\\% , 4.05\\% and 2.75\\%, respectively", "after": "the predicted total CPU usage in clock cycles of generated resource provisioning options are less than 8\\% of the measured total CPU usage in clock cycles in our 20-node virtual Hadoop cluster", "start_char_pos": 1079, "end_char_pos": 1264 } ]
[ 0, 155, 212, 402, 652, 798, 899 ]
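The record above builds a job profile from past executions at several (mappers, reducers) settings and fits a polynomial regression to total CPU clock cycles. A minimal NumPy version of that fit is sketched below; the profiling numbers are synthetic stand-ins, and the degree-2 feature set is an assumption, since the abstract does not pin down the polynomial order.

```python
import numpy as np

# Hypothetical profiling runs: (num_mappers, num_reducers) -> total CPU cycles.
runs = np.array([
    # mappers, reducers, cycles (synthetic stand-ins for profiled measurements)
    [ 4, 1, 9.1e11],
    [ 8, 1, 8.3e11],
    [ 8, 2, 7.9e11],
    [16, 2, 7.2e11],
    [16, 4, 7.0e11],
    [32, 4, 6.8e11],
    [32, 8, 6.9e11],
    [64, 8, 7.4e11],
])
m, r, y = runs[:, 0], runs[:, 1], runs[:, 2]

# Degree-2 polynomial features in (m, r): 1, m, r, m^2, m*r, r^2.
X = np.column_stack([np.ones_like(m), m, r, m**2, m * r, r**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(mappers, reducers):
    f = np.array([1.0, mappers, reducers,
                  mappers**2, mappers * reducers, reducers**2])
    return f @ coef

print("predicted CPU cycles at (24 mappers, 3 reducers): %.3e" % predict(24, 3))
```

Per the abstract, input-data scaling is studied separately, so a fuller version might multiply this prediction by a measured scaling factor rather than fold input size into the fit itself.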
1203.4153
1
We investigate portfolio selection problem from a signal processing perspective and study how an investor should distribute wealth over two assets in order to maximize the cumulative wealth. We construct portfolios that provide the optimal growth in i.i.d. discrete time two-asset markets under proportional transaction costs. As the market model, we consider arbitrary discrete distributions on the price relative vectors , which can also be used to approximate a wide class of continuous distributions. To achieve optimal growth, we use threshold portfolios, where we introduce a recursive update to calculate the expected wealth. We then demonstrate that under the threshold rebalancing framework, the achievable set of portfolios elegantly form an irreducible Markov chain under mild technical conditions. We evaluate the corresponding stationary distribution of this Markov chain, which provides a natural and efficient method to calculate the cumulative expected wealth. Subsequently, the corresponding parameters are optimized using a brute force approach yielding the growth optimal portfolio under proportional transaction costs in i.i.d. discrete-time two-asset markets. As a widely known financial problem, we also solve optimal portfolio selection in discrete-time markets constructed by sampling continuous-time Brownian markets. For the case that the underlying discrete distributions of the price relative vectors are unknown, we provide a maximum likelihood estimator that is also incorporated in the optimization framework .
We investigate how and when to diversify capital over assets, i.e., the portfolio selection problem , from a signal processing perspective . To this end, we first construct portfolios that achieve the optimal expected growth in i.i.d. discrete-time two-asset markets under proportional transaction costs. We then extend our analysis to cover markets having more than two stocks. The market is modeled by a sequence of price relative vectors with arbitrary discrete distributions , which can also be used to approximate a wide class of continuous distributions. To achieve the optimal growth, we use threshold portfolios, where we introduce a recursive update to calculate the expected wealth. We then demonstrate that under the threshold rebalancing framework, the achievable set of portfolios elegantly form an irreducible Markov chain under mild technical conditions. We evaluate the corresponding stationary distribution of this Markov chain, which provides a natural and efficient method to calculate the cumulative expected wealth. Subsequently, the corresponding parameters are optimized yielding the growth optimal portfolio under proportional transaction costs in i.i.d. discrete-time two-asset markets. As a widely known financial problem, we next solve optimal portfolio selection in discrete-time markets constructed by sampling continuous-time Brownian markets. For the case that the underlying discrete distributions of the price relative vectors are unknown, we provide a maximum likelihood estimator that is also incorporated in the optimization framework in our simulations .
[ { "type": "A", "before": null, "after": "how and when to diversify capital over assets, i.e., the", "start_char_pos": 15, "end_char_pos": 15 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 44, "end_char_pos": 44 }, { "type": "R", "before": "and study how an investor should distribute wealth over two assets in order to maximize the cumulative wealth. We", "after": ". To this end, we first", "start_char_pos": 82, "end_char_pos": 195 }, { "type": "R", "before": "provide the optimal", "after": "achieve the optimal expected", "start_char_pos": 222, "end_char_pos": 241 }, { "type": "R", "before": "discrete time", "after": "discrete-time", "start_char_pos": 259, "end_char_pos": 272 }, { "type": "R", "before": "As the market model, we consider arbitrary discrete distributions on the", "after": "We then extend our analysis to cover markets having more than two stocks. The market is modeled by a sequence of", "start_char_pos": 329, "end_char_pos": 401 }, { "type": "A", "before": null, "after": "with arbitrary discrete distributions", "start_char_pos": 425, "end_char_pos": 425 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 519, "end_char_pos": 519 }, { "type": "D", "before": "using a brute force approach", "after": null, "start_char_pos": 1038, "end_char_pos": 1066 }, { "type": "R", "before": "also", "after": "next", "start_char_pos": 1225, "end_char_pos": 1229 }, { "type": "A", "before": null, "after": "in our simulations", "start_char_pos": 1544, "end_char_pos": 1544 } ]
[ 0, 192, 328, 507, 636, 813, 980, 1184, 1346 ]
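The record above analyzes threshold rebalancing between two assets under proportional transaction costs, computing expected growth exactly through the stationary distribution of a Markov chain. A Monte-Carlo stand-in for that calculation is sketched below: the two-point price-relative law, the cost rate, and the thresholds are all invented, and the cost accounting is a first-order approximation rather than the paper's exact treatment.

```python
import numpy as np

rng = np.random.default_rng(1)

c = 0.01           # proportional transaction cost (assumed)
b_target = 0.5     # target fraction of wealth in asset 1
eps = 0.1          # rebalance only when |b - b_target| > eps (threshold rule)
T = 10_000         # trading periods

# i.i.d. discrete price-relative vectors (x1, x2): a toy two-point law.
support = np.array([[1.10, 0.98], [0.92, 1.03]])
probs = np.array([0.5, 0.5])

log_wealth = 0.0
b = b_target       # current fraction of wealth in asset 1
for _ in range(T):
    x1, x2 = support[rng.choice(2, p=probs)]
    growth = b * x1 + (1.0 - b) * x2
    log_wealth += np.log(growth)
    b = b * x1 / growth                 # allocation drifts with the market
    if abs(b - b_target) > eps:
        traded = abs(b - b_target)      # fraction of wealth moved (approx.)
        log_wealth += np.log(1.0 - c * traded)
        b = b_target

print("average growth rate per period:", log_wealth / T)
```

Grid-searching `eps` and `b_target` over such runs would approximate the optimization step; the paper's stationary-distribution formula replaces the simulation with an exact expectation.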
1203.4610
1
We consider financial positions belonging to the Banach lattice of bounded measurable functions on a given measurable space. We discuss risk measures generated by general acceptance sets allowing for capital injections to be invested in a pre-specified eligible asset with an everywhere positive payoff. Risk measures play a key role when defining required capital for a financial institution. We address the three critical questions: when is required capital a well-defined number for any financial position? When is required capital a continuous function of the financial position? Can the eligible asset be chosen in such a way that for every financial position the corresponding required capital is lower than if any other asset had been chosen? In contrast to most of the literature our discussion is not limited to convex or coherent acceptance sets and allows for eligible assets that are not necessarily bounded away from zero. This generality uncovers some unexpected phenomena and opens up the field for applications to acceptance sets based both on Value-at-Risk and on Tail Value-at-Risk .
We study capital requirements for financial positions belonging to spaces of bounded measurable functions . We allow for general acceptance sets and general positive eligible (or "reference") assets, which include defaultable bonds, options, or limited liability assets. Since the payoff of these assets is not bounded away from zero the resulting capital requirements cannot be transformed into cash-invariant risk measures by a simple change of numeraire. However, extending the range of eligible assets is important because, as exemplified by the recent financial crisis, the existence of default-free securities may not be a realistic assumption to make. We study finiteness and continuity properties of capital requirements in this general context. We apply the results to capital requirements based on Value-at-Risk and Tail-Value-at-Risk acceptability, the two most important acceptability criteria in practice. Finally, we prove that it is not possible to choose the eligible asset so that the corresponding capital requirement dominates the capital requirement corresponding any other choice of the eligible asset. Our examples and results on finiteness and continuity show that a theory of capital requirements allowing for general eligible assets is richer than that of cash-invariant capital requirements .
[ { "type": "R", "before": "consider", "after": "study capital requirements for", "start_char_pos": 3, "end_char_pos": 11 }, { "type": "R", "before": "the Banach lattice", "after": "spaces", "start_char_pos": 45, "end_char_pos": 63 }, { "type": "R", "before": "on a given measurable space. We discuss risk measures generated by", "after": ". We allow for", "start_char_pos": 96, "end_char_pos": 162 }, { "type": "R", "before": "allowing for capital injections to be invested in a pre-specified eligible asset with an everywhere positive payoff. Risk measures play a key role when defining required capital for a financial institution. We address the three critical questions: when is required capital a well-defined number for any financial position? When is required capital a continuous function of", "after": "and general positive eligible (or \"reference\") assets, which include defaultable bonds, options, or limited liability assets. Since the payoff of these assets is not bounded away from zero the resulting capital requirements cannot be transformed into cash-invariant risk measures by a simple change of numeraire. However, extending the range of eligible assets is important because, as exemplified by the recent financial crisis, the existence of default-free securities may not be a realistic assumption to make. We study finiteness and continuity properties of capital requirements in this general context. We apply the results to capital requirements based on Value-at-Risk and Tail-Value-at-Risk acceptability, the two most important acceptability criteria in practice. Finally, we prove that it is not possible to choose the eligible asset so that", "start_char_pos": 187, "end_char_pos": 559 }, { "type": "R", "before": "financial position? Can the", "after": "corresponding capital requirement dominates the capital requirement corresponding any other choice of the", "start_char_pos": 564, "end_char_pos": 591 }, { "type": "R", "before": "asset be chosen in such a way that for every financial position the corresponding required capital is lower than if any other asset had been chosen? In contrast to most of the literature our discussion is not limited to convex or coherent acceptance sets and allows for eligible assets that are not necessarily bounded away from zero. This generality uncovers some unexpected phenomena and opens up the field for applications to acceptance sets based both on Value-at-Risk and on Tail Value-at-Risk", "after": "asset. Our examples and results on finiteness and continuity show that a theory of capital requirements allowing for general eligible assets is richer than that of cash-invariant capital requirements", "start_char_pos": 601, "end_char_pos": 1099 } ]
[ 0, 124, 303, 393, 509, 583, 749, 935 ]
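The record above studies required capital of the form rho(X) = inf{ m : X + m R is acceptable } when the eligible payoff R is positive but not bounded away from zero. A finite-scenario sketch under Value-at-Risk acceptability is given below; the scenario laws are invented, m is measured in units of the eligible asset, and negative injections (withdrawals) are ignored for simplicity.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 10_000
X = rng.normal(-1.0, 5.0, n)      # hypothetical financial position per scenario
R = rng.lognormal(0.0, 1.0, n)    # eligible payoff: positive, yet not bounded
                                  # away from zero (e.g. a defaultable bond)
alpha = 0.05                      # VaR level

def acceptable(Y):
    """One discrete reading of VaR acceptability: Y < 0 on at most alpha mass."""
    return np.mean(Y < 0.0) <= alpha

def required_capital(X, R, lo=0.0, hi=1e6, iters=100):
    """Smallest m with X + m*R acceptable, found by bisection."""
    if acceptable(X):
        return 0.0                # no injection needed (toy: m < 0 not modeled)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if acceptable(X + mid * R):
            hi = mid
        else:
            lo = mid
    return hi

print("required units of the eligible asset:", required_capital(X, R))
```

Bisection is valid here because acceptable(X + m R) is monotone in m, and that monotonicity holds precisely because R > 0 in every scenario, echoing the positivity assumption on eligible assets in the record above.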