Schema (six fields per record):

    doc_id           string   (lengths 2 to 10)
    revision_depth   string   (5 distinct values)
    before_revision  string   (lengths 3 to 309k)
    after_revision   string   (lengths 5 to 309k)
    edit_actions     list
    sents_char_pos   sequence
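Read in row order, the dump below interleaves these six fields once per record. As an illustration, here is the first record (doc 1207.5895) laid out as a Python dict — this interleaved-layout reading is my interpretation of the dump, not an official serialization, and the long strings are abbreviated with an ellipsis:

```python
record = {
    "doc_id": "1207.5895",
    "revision_depth": "2",
    "before_revision": "We consider social learning settings in which a group of agents ...",
    "after_revision": "We consider a large class of social learning models in which a group of agents ...",
    "edit_actions": [
        {"type": "R", "before": "social learning settings",
         "after": "a large class of social learning models",
         "start_char_pos": 12, "end_char_pos": 36},
        # ... five further actions, as listed in the record below
    ],
    "sents_char_pos": [0, 208, 416],
}
```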
1207.5895
2
We consider social learning settings in which a group of agents face uncertainty regarding a state of the world, observe private signals, share the same utility function, and act in a general dynamic setting. We introduce Social Learning Equilibria, a static equilibrium concept that abstracts away from the details of the given dynamics , but nevertheless captures the corresponding asymptotic equilibrium behavior. We establish strong equilibrium properties on agreement, herding, and information aggregation.
We consider a large class of social learning models in which a group of agents face uncertainty regarding a state of the world, share the same utility function, observe private signals, and interact in a general dynamic setting. We introduce Social Learning Equilibria, a static equilibrium concept that abstracts away from the details of the given extensive form , but nevertheless captures the corresponding asymptotic equilibrium behavior. We establish general conditions for agreement, herding, and information aggregation in equilibrium, highlighting a connection between agreement and information aggregation.
[ { "type": "R", "before": "social learning settings", "after": "a large class of social learning models", "start_char_pos": 12, "end_char_pos": 36 }, { "type": "D", "before": "observe private signals,", "after": null, "start_char_pos": 113, "end_char_pos": 137 }, { "type": "R", "before": "and act", "after": "observe private signals, and interact", "start_char_pos": 171, "end_char_pos": 178 }, { "type": "R", "before": "dynamics", "after": "extensive form", "start_char_pos": 329, "end_char_pos": 337 }, { "type": "R", "before": "strong equilibrium properties on", "after": "general conditions for", "start_char_pos": 430, "end_char_pos": 462 }, { "type": "A", "before": null, "after": "and information aggregation in equilibrium, highlighting a connection between agreement", "start_char_pos": 483, "end_char_pos": 483 } ]
[ 0, 208, 416 ]
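The edit_actions objects read as character-offset patches against before_revision: type "R" replaces the span [start_char_pos, end_char_pos) with after, "D" deletes it (after is null), and "A" inserts at a zero-width span (before is null); sents_char_pos looks like sentence-start offsets into before_revision. A minimal sketch of how a record could be replayed — the function names and the right-to-left application order are my assumptions, not part of the dataset:

```python
def apply_edit_actions(before: str, edit_actions: list) -> str:
    """Splice each action into the original text.

    Actions are applied from the highest start_char_pos down so that
    earlier offsets stay valid while the string changes length.  A
    single splice covers all three types: "R" replaces the span, "D"
    deletes it (after is null), and "A" inserts at a zero-width span
    (start_char_pos == end_char_pos, before is null).
    """
    text = before
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""  # None (JSON null) for deletions
        text = text[: act["start_char_pos"]] + replacement + text[act["end_char_pos"] :]
    return text


def split_sentences(before: str, sents_char_pos: list) -> list:
    """Cut before_revision at the recorded sentence-start offsets."""
    bounds = list(sents_char_pos) + [len(before)]
    return [before[a:b].strip() for a, b in zip(bounds, bounds[1:])]


# Toy check mirroring the first action above: characters 12-36 of the
# original sentence are "social learning settings".
before = "We consider social learning settings in which agents act."
actions = [{"type": "R", "before": "social learning settings",
            "after": "a large class of social learning models",
            "start_char_pos": 12, "end_char_pos": 36}]
print(apply_edit_actions(before, actions))
# -> We consider a large class of social learning models in which agents act.
```

Applying a record's full action list to its before_revision should then reproduce its after_revision, up to whitespace normalization.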
1207.6018
1
This article characterizes certain small multistationary chemical reaction networks. Specifically we identify the `smallest CFSTR atoms of multistationarity', namely those containing one non-flow reaction, which may be irreversible or reversible. We will refer to such atoms as one-reaction atoms of multistationarity. CFSTR atoms of multistationarity were introduced in the recent work of Joshi and Shiu. We recall that a fully open network (alternatively a fully open CFSTR) is a chemical reaction network where all chemical species participate in the inflow and the outflow; and a CFSTR atom of multistationarity N is a multistationary fully open network such that the operation of `removing reactions' or `removing species' destroys the multistationarity of N. The class of CFSTR atoms of multistationarity determines the entire class of multistationary fully open networks since possessing a CFSTR atom of multistationarity as an `embedded network' (N is embedded in G if N can be obtained from G by removing a finite subset of reactions and species) was proven recently to be a sufficient condition for a fully open network to admit multistationarity. We find that there are infinitely many one-reaction atoms, however the set of such atoms can be characterized using two types, each type containing two parameters. The first type contains one chemical species while the second type contains two chemical species; while both types contain one irreversible reaction. We identify both types with the chemical process of autocatalysis. Furthermore, we give a complete classification by multistationarity of all fully open networks which contain one non-flow (possibly reversible) reaction. Moreover, we obtain new sufficient conditions for establishing multistationarity of certain fully open networks beyond the one-reaction setting.
This article characterizes certain small multistationary chemical reaction networks. We consider the set of fully open networks, those for which all chemical species participate in inflow and outflow, containing one non-flow (reversible or irreversible) reaction. We show that such a network admits multiple positive mass-action steady states if and only if the stoichiometric coefficients in the non-flow reaction satisfy a certain simple arithmetic relation. The multistationary fully open one-reaction networks are identified with the chemical process of autocatalysis. Using the notion of `embedded network' defined recently by Joshi and Shiu, we provide new sufficient conditions for establishing multistationarity of fully open networks , applicable well beyond the one-reaction setting.
[ { "type": "R", "before": "Specifically we identify the `smallest CFSTR atoms of multistationarity', namely those containing one non-flow reaction, which may be irreversible or reversible. We will refer to such atoms as one-reaction atoms of multistationarity. CFSTR atoms of multistationarity were introduced in the recent work of Joshi and Shiu. We recall that a fully open network (alternatively a fully open CFSTR) is a chemical reaction network where", "after": "We consider the set of fully open networks, those for which", "start_char_pos": 85, "end_char_pos": 513 }, { "type": "R", "before": "the inflow and the outflow; and a CFSTR atom of multistationarity N is a multistationary fully open network such that the operation of `removing reactions' or `removing species' destroys the multistationarity of N. The class of CFSTR atoms of multistationarity determines the entire class of", "after": "inflow and outflow, containing one non-flow (reversible or irreversible) reaction. We show that such a network admits multiple positive mass-action steady states if and only if the stoichiometric coefficients in the non-flow reaction satisfy a certain simple arithmetic relation. The", "start_char_pos": 550, "end_char_pos": 841 }, { "type": "R", "before": "networks since possessing a CFSTR atom of multistationarity as an `embedded network' (N is embedded in G if N can be obtained from G by removing a finite subset of reactions and species) was proven recently to be a sufficient condition for a fully open network to admit multistationarity. We find that there are infinitely many one-reaction atoms, however the set of such atoms can be characterized using two types, each type containing two parameters. The first type contains one chemical species while the second type contains two chemical species; while both types contain one irreversible reaction. We identify both types", "after": "one-reaction networks are identified", "start_char_pos": 869, "end_char_pos": 1494 }, { "type": "R", "before": "Furthermore, we give a complete classification by multistationarity of all fully open networks which contain one non-flow (possibly reversible) reaction. Moreover, we obtain", "after": "Using the notion of `embedded network' defined recently by Joshi and Shiu, we provide", "start_char_pos": 1539, "end_char_pos": 1712 }, { "type": "D", "before": "certain", "after": null, "start_char_pos": 1777, "end_char_pos": 1784 }, { "type": "A", "before": null, "after": ", applicable well", "start_char_pos": 1805, "end_char_pos": 1805 } ]
[ 0, 84, 246, 318, 405, 577, 764, 1157, 1321, 1419, 1471, 1538, 1692 ]
1207.6431
1
In experiments and in simulations, the free energy of a state of a system can be determined from the probability that the state is occupied. However, it is often necessary to impose a biasing potential on the system so that high energy states are sampled with a reasonable frequency. The unbiased energy is typically obtained from the data using the weighted histogram analysis method (WHAM). Here we present differential energy surface analysis (DESA), in which the gradient of the energy surface, dE/dx, is extracted from data taken with a series of harmonic biasing potentials. It is shown that DESA produces a maximum likelihood estimate of the folding landscape gradient. DESA is used to analyze extension vs. time data taken as an optical trap is used to unfold a DNA hairpin under a harmonic constraint . It is shown that the energy surface obtained from DESA is indistinguishable from the energy surface obtained when WHAM is applied to the same data. Two criteria are defined which indicate whether the DESA results are self-consistent. It is found that these criteria are satisfied for the experimental data analyzed, confirming that data taken with different constraint origins are sampling the same effective energy surface . The combination of DESA and the optical trap assay in which a structure is disrupted under harmonic constraint facilitates an extremely accurate measurement of the folding energy surface.
In experiments and in simulations, the free energy of a state of a system can be determined from the probability that the state is occupied. However, it is often necessary to impose a biasing potential on the system so that high energy states are sampled with sufficient frequency. The unbiased energy is typically obtained from the data using the weighted histogram analysis method (WHAM). Here we present differential energy surface analysis (DESA), in which the gradient of the energy surface, dE/dx, is extracted from data taken with a series of harmonic biasing potentials. It is shown that DESA produces a maximum likelihood estimate of the folding landscape gradient. DESA is demonstrated by analyzing data from a simulated system as well as data from a single-molecule unfolding experiment in which the end-to-end distance of a DNA hairpin is measured . It is shown that the energy surface obtained from DESA is indistinguishable from the energy surface obtained when WHAM is applied to the same data. Two criteria are defined which indicate whether the DESA results are self-consistent. It is found that these criteria can detect a situation where the energy is not a single-valued function of the measured reaction coordinate. The criteria were found to be satisfied for the experimental data analyzed, confirming that end-to-end distance is a good reaction coordinate for the experimental system . The combination of DESA and the optical trap assay in which a structure is disrupted under harmonic constraint facilitates an extremely accurate measurement of the folding energy surface.
[ { "type": "R", "before": "a reasonable", "after": "sufficient", "start_char_pos": 260, "end_char_pos": 272 }, { "type": "R", "before": "used to analyze extension vs. time data taken as an optical trap is used to unfold a DNA hairpin under a harmonic constraint", "after": "demonstrated by analyzing data from a simulated system as well as data from a single-molecule unfolding experiment in which the end-to-end distance of a DNA hairpin is measured", "start_char_pos": 685, "end_char_pos": 809 }, { "type": "R", "before": "are", "after": "can detect a situation where the energy is not a single-valued function of the measured reaction coordinate. The criteria were found to be", "start_char_pos": 1078, "end_char_pos": 1081 }, { "type": "R", "before": "data taken with different constraint origins are sampling the same effective energy surface", "after": "end-to-end distance is a good reaction coordinate for the experimental system", "start_char_pos": 1144, "end_char_pos": 1235 } ]
[ 0, 140, 283, 392, 580, 676, 811, 959, 1045, 1237 ]
1207.7308
1
Accurate Goodness of Fit tests for the extreme tails of empirical distributions is a very important issue, relevant in many contexts, including geophysics, insurance and finance. We have derived exact asymptotic results for a generalization of the Kolmogorov-Smirnov test, well suited to test these extreme tails. In passing, we have rederived and made more precise the result of [ P. L. Krapivsky and S. Redner, Am. J. Phys. 64 ( 5):546, 1996 ] concerning the survival probability of a diffusive particle in an expanding cage .
Accurate goodness-of-fit tests for the extreme tails of empirical distributions is a very important issue, relevant in many contexts, including geophysics, insurance , and finance. We have derived exact asymptotic results for a generalization of the large-sample Kolmogorov-Smirnov test, well suited to testing these extreme tails. In passing, we have rederived and made more precise the approximate limit solutions found originally in unrelated fields, first in [ L. Turban, J. Phys. A 25, 127 (1992) and later in P. L. Krapivsky and S. Redner, Am. J. Phys. 64 , 546 ( 1996 ) ] .
[ { "type": "R", "before": "Goodness of Fit", "after": "goodness-of-fit", "start_char_pos": 9, "end_char_pos": 24 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 166, "end_char_pos": 166 }, { "type": "A", "before": null, "after": "large-sample", "start_char_pos": 249, "end_char_pos": 249 }, { "type": "R", "before": "test", "after": "testing", "start_char_pos": 290, "end_char_pos": 294 }, { "type": "R", "before": "result of", "after": "approximate limit solutions found originally in unrelated fields, first in", "start_char_pos": 372, "end_char_pos": 381 }, { "type": "A", "before": null, "after": "L. Turban, J. Phys. A 25, 127 (1992)", "start_char_pos": 384, "end_char_pos": 384 }, { "type": "A", "before": null, "after": "and later in", "start_char_pos": 385, "end_char_pos": 385 }, { "type": "A", "before": null, "after": ", 546", "start_char_pos": 433, "end_char_pos": 433 }, { "type": "D", "before": "5):546,", "after": null, "start_char_pos": 436, "end_char_pos": 443 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 449, "end_char_pos": 449 }, { "type": "D", "before": "concerning the survival probability of a diffusive particle in an expanding cage", "after": null, "start_char_pos": 452, "end_char_pos": 532 } ]
[ 0, 179, 315, 420 ]
1208.0763
1
In this paper, we follow the study of second order BSDEs with jumps started in our accompanying paper [ 17 ]. We prove existence of these equations by a direct method, thus providing complete wellposedness for second order BSDEs . These equations are the natural candidates for the probabilistic interpretation of fully non-linear partial integro-differential equations, which is the point of our paper 18%DIFDELCMD < ]%%% . Finally, we give an application of second order BSDEs to the study of a robust exponential utility maximization problem under model uncertainty. The uncertainty affects both the volatility process and the jump measure compensator. We prove existence of an optimal strategy, and that the value function of the problem is the unique solution of a particular second order BSDE with jumps .
In this paper, we pursue the study of second order BSDEs with jumps (2BSDEJs for short) started in our accompanying paper [ 15 ]. We prove existence of these equations by a direct method, thus providing complete wellposedness for 2BSDEJs . These equations are a natural candidate for the probabilistic interpretation of some fully non-linear partial integro-differential equations, which is the point of %DIFDELCMD < ]%%% the second part of this work. We prove a non-linear Feynman-Kac formula and show that solutions to 2BSDEJs provide viscosity solutions of the associated PIDEs .
[ { "type": "R", "before": "follow", "after": "pursue", "start_char_pos": 18, "end_char_pos": 24 }, { "type": "A", "before": null, "after": "(2BSDEJs for short)", "start_char_pos": 68, "end_char_pos": 68 }, { "type": "R", "before": "17", "after": "15", "start_char_pos": 105, "end_char_pos": 107 }, { "type": "R", "before": "second order BSDEs", "after": "2BSDEJs", "start_char_pos": 211, "end_char_pos": 229 }, { "type": "R", "before": "the natural candidates", "after": "a natural candidate", "start_char_pos": 252, "end_char_pos": 274 }, { "type": "A", "before": null, "after": "some", "start_char_pos": 315, "end_char_pos": 315 }, { "type": "D", "before": "our paper", "after": null, "start_char_pos": 395, "end_char_pos": 404 }, { "type": "D", "before": "18", "after": null, "start_char_pos": 405, "end_char_pos": 407 }, { "type": "R", "before": ". Finally, we give an application of second order BSDEs to the study of a robust exponential utility maximization problem under model uncertainty. The uncertainty affects both the volatility process and the jump measure compensator. We prove existence of an optimal strategy, and that the value function of the problem is the unique solution of a particular second order BSDE with jumps", "after": "the second part of this work. We prove a non-linear Feynman-Kac formula and show that solutions to 2BSDEJs provide viscosity solutions of the associated PIDEs", "start_char_pos": 425, "end_char_pos": 811 } ]
[ 0, 110, 231, 571, 657 ]
1208.0874
1
Motivated by questions in mass-action kinetics, we introduce the notion of "vertexical family " of differential inclusions. Defined on open hypercubes, these families are characterized by particular good behavior under projection maps. The motivating examples are certain families of reaction networks --- including reversible, weakly reversible, endotactic, and "strongly endotactic " reaction networks --- that give rise to vertexical families of mass-action differential inclusions. We prove that vertexical families are amenable to structural induction. Consequently, a trajectory of a vertexical family approaches the boundary if and only if either the trajectory approaches a vertex of the hypercube, or a trajectory in a lower-dimensional member of the family approaches the boundary. With this technology, we make progress on the global attractor conjecture, a central open problem concerning mass-action kinetics systems. Additionally, we phrase mass-action kinetics as a functor on reaction networks with variable rates.
Motivated by questions in mass-action kinetics, we introduce the notion of vertexical family of differential inclusions. Defined on open hypercubes, these families are characterized by particular good behavior under projection maps. The motivating examples are certain families of reaction networks -- including reversible, weakly reversible, endotactic, and strongly endotactic reaction networks -- that give rise to vertexical families of mass-action differential inclusions. We prove that vertexical families are amenable to structural induction. Consequently, a trajectory of a vertexical family approaches the boundary if and only if either the trajectory approaches a vertex of the hypercube, or a trajectory in a lower-dimensional member of the family approaches the boundary. With this technology, we make progress on the global attractor conjecture, a central open problem concerning mass-action kinetics systems. Additionally, we phrase mass-action kinetics as a functor on reaction networks with variable rates.
[ { "type": "R", "before": "\"vertexical family \"", "after": "vertexical family", "start_char_pos": 75, "end_char_pos": 95 }, { "type": "R", "before": "---", "after": "--", "start_char_pos": 302, "end_char_pos": 305 }, { "type": "R", "before": "\"strongly endotactic \" reaction networks ---", "after": "strongly endotactic reaction networks --", "start_char_pos": 363, "end_char_pos": 407 } ]
[ 0, 123, 235, 485, 557, 791, 930 ]
1208.1054
1
Progress in signal transduction research has been restricted by the use of differential equation modeling which produces high complexity for the analysis and comparison even of small-size systems. Other approaches are usually not directly compatible with mass-action kinetic reaction models. We suggest to analyze concentration changes as input-dependent response functions and separately calculate delays until steady states are reached. This allows to generate pre-computed matrices as transfer functions for any range of inputs ('protein signaling functions' (psfs)). They can be used to perform discrete dynamical simulations, but most importantly, these matrices offer an unprecedented range of possibilities to analyze seemingly complex systems in a simple manner. We show for instance how we can predict active vs. inactive signal transmission links, so that we know which units in the network will respond to input. The results correspond closely with biological knowledge .
We present a novel formulation for biochemical reaction networks in the context of signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of 'source' species, which receive input signals. Signals are transmitted to all other species in the system (the 'target' species) with a specific delay and transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and recalled to build discrete dynamical models. By separating reaction time and concentration we can greatly simplify the model, circumventing typical problems of complex dynamical systems. The transfer function transformation can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant insight, while remaining an executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modules that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. We also found that overall interconnectedness depends on the magnitude of input, with high connectivity at low input and less connectivity at moderate to high input. This general result, which directly follows from the properties of individual transfer functions, contradicts notions of ubiquitous complexity by showing input-dependent signal transmission inactivation .
[ { "type": "R", "before": "Progress in signal transduction research has been restricted by the use of differential equation modeling which produces high complexity for the analysis and comparison even of small-size systems. Other approaches are usually not directly compatible with", "after": "We present a novel formulation for biochemical reaction networks in the context of signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of 'source' species, which receive input signals. Signals are transmitted to all other species in the system (the 'target' species) with a specific delay and transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and recalled to build discrete dynamical models. By separating reaction time and concentration we can greatly simplify the model, circumventing typical problems of complex dynamical systems. The transfer function transformation can be applied to", "start_char_pos": 0, "end_char_pos": 254 }, { "type": "R", "before": "kinetic reaction models. We suggest to analyze concentration changes as", "after": "kinetic models of signal transduction. The paper shows that this approach yields significant insight, while remaining an executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modules that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. We also found that overall interconnectedness depends on the magnitude of input, with high connectivity at low input and less connectivity at moderate to high input. This general result, which directly follows from the properties of individual transfer functions, contradicts notions of ubiquitous complexity by showing", "start_char_pos": 267, "end_char_pos": 338 }, { "type": "R", "before": "response functions and separately calculate delays until steady states are reached. This allows to generate pre-computed matrices as transfer functions for any range of inputs ('protein signaling functions' (psfs)). They can be used to perform discrete dynamical simulations, but most importantly, these matrices offer an unprecedented range of possibilities to analyze seemingly complex systems in a simple manner. We show for instance how we can predict active vs. inactive signal transmission links, so that we know which units in the network will respond to input. The results correspond closely with biological knowledge", "after": "signal transmission inactivation", "start_char_pos": 355, "end_char_pos": 980 } ]
[ 0, 196, 291, 438, 570, 770, 923 ]
1208.1513
1
We propose a definition of continuous time dynamical systems made up of interacting open subsystems. The interconnections of subsystems are coded by directed graphs. Our main result is that the appropriate maps of graphs called graph fibrations give rise to maps of the appropriate dynamical systems and thereby to conserved quantities .
We propose a definition of continuous time dynamical systems made up of interacting open subsystems. The interconnections of subsystems are coded by directed graphs. Our main result is that the appropriate maps of graphs called graph fibrations give rise to maps of the appropriate dynamical systems . Consequently surjective graph fibrations give rise to invariant subsystems and injective graph fibrations give rise to projections of dynamical systems .
[ { "type": "R", "before": "and thereby to conserved quantities", "after": ". Consequently surjective graph fibrations give rise to invariant subsystems and injective graph fibrations give rise to projections of dynamical systems", "start_char_pos": 300, "end_char_pos": 335 } ]
[ 0, 100, 165 ]
1208.1513
2
We propose a definition of continuous time dynamical systems made up of interacting open subsystems. The interconnections of subsystems are coded by directed graphs. Our main result is that the appropriate maps of graphs called graph fibrations give rise to maps of the appropriate dynamical systems. Consequently surjective graph fibrations give rise to invariant subsystems and injective graph fibrations give rise to projections of dynamical systems.
We propose a precise definition of a continuous time dynamical system made up of interacting open subsystems. The interconnections of subsystems are coded by directed graphs. We prove that the appropriate maps of graphs called graph fibrations give rise to maps of dynamical systems. Consequently surjective graph fibrations give rise to invariant subsystems and injective graph fibrations give rise to projections of dynamical systems.
[ { "type": "R", "before": "definition of", "after": "precise definition of a", "start_char_pos": 13, "end_char_pos": 26 }, { "type": "R", "before": "systems", "after": "system", "start_char_pos": 53, "end_char_pos": 60 }, { "type": "R", "before": "Our main result is", "after": "We prove", "start_char_pos": 166, "end_char_pos": 184 }, { "type": "D", "before": "the appropriate", "after": null, "start_char_pos": 266, "end_char_pos": 281 } ]
[ 0, 100, 165, 300 ]
1208.2680
1
Consistently predicting protein structure at atomic resolution from sequence alone remains an unsolved challenge in computational biophysics. Even small puzzles involving protein loops excised out of crystallographic structures can become intractable as loop lengths exceed 10 residues and if surrounding side-chain conformations are erased. This article presents a "stepwise ansatz" for recursively enumerating a physically realistic subspace of protein conformations, implemented as a stepwise assembly (SWA) protocol in the Rosetta framework. For 32 of 40 loops that challenged prior approaches, at least one of five lowest energy SWA models agrees with the crystallographic conformation with C \alpha%DIFDELCMD < } %%% RMSD accuracy better than 1.0 \AA%DIFDELCMD < }%%% . SWA successes include hairpin-like loops , cis-Pro touch turns, loops with lengths of up to 24 residues (well outside the range of previous methods), and five blind tests: all four loops of an unreleased 275-residue protein structure and an RNA-binding loop of YbxF, a target in the fourth RNA-puzzle competition. These results establish a systematic and fundamentally distinct approach to all-atom protein structure modeling that consistently outperforms Monte Carlo and refinement-based methods .
Consistently predicting biopolymer structure at atomic resolution from sequence alone remains a difficult problem, even for small sub-segments of large proteins. Such loop prediction challenges, which arise frequently in comparative modeling and protein design, can become intractable as loop lengths exceed 10 residues and if surrounding side-chain conformations are erased. This article introduces a modeling strategy based on a 'stepwise ansatz', recently developed for RNA modeling, which posits that any realistic all-atom molecular conformation can be built up by residue-by-residue stepwise enumeration. When harnessed to a dynamic-programming-like recursion in the Rosetta framework, the resulting stepwise assembly (SWA) protocol %DIFDELCMD < } %%% %DIFDELCMD < }%%% enables enumerative sampling of a 12 residue loop at a significant but achievable cost of thousands of CPU-hours. In a previously established benchmark, SWA recovers crystallographic conformations with sub-Angstrom accuracy for 19 of 20 loops, compared to 14 of 20 by KIC modeling with a comparable expenditure of computational power. Furthermore, SWA gives high accuracy results on an additional set of 15 loops highlighted in the biological literature for their irregularity or unusual length. Successes include cis-Pro touch turns, loops that pass through tunnels of other side-chains, and loops of lengths up to 24 residues . Remaining problem cases are traced to inaccuracies in the Rosetta all-atom energy function. In five additional blind tests, SWA achieves sub-Angstrom accuracy models, including the first such success in a protein/RNA binding interface, the YbxF/kink-turn interaction in the fourth RNA-puzzle competition. These results establish all-atom enumeration as a systematic approach to protein structure that can leverage high performance computing and physically realistic energy functions to more consistently achieve atomic resolution .
[ { "type": "R", "before": "protein", "after": "biopolymer", "start_char_pos": 24, "end_char_pos": 31 }, { "type": "R", "before": "an unsolved challenge in computational biophysics. Even small puzzles involving protein loops excised out of crystallographic structures", "after": "a difficult problem, even for small sub-segments of large proteins. Such loop prediction challenges, which arise frequently in comparative modeling and protein design,", "start_char_pos": 91, "end_char_pos": 227 }, { "type": "R", "before": "presents a \"stepwise ansatz\" for recursively enumerating a physically realistic subspace of protein conformations, implemented as a", "after": "introduces a modeling strategy based on a 'stepwise ansatz', recently developed for RNA modeling, which posits that any realistic all-atom molecular conformation can be built up by residue-by-residue stepwise enumeration. When harnessed to a dynamic-programming-like recursion in the Rosetta framework, the resulting", "start_char_pos": 355, "end_char_pos": 486 }, { "type": "D", "before": "in the Rosetta framework. For 32 of 40 loops that challenged prior approaches, at least one of five lowest energy SWA models agrees with the crystallographic conformation with C", "after": null, "start_char_pos": 520, "end_char_pos": 697 }, { "type": "D", "before": "\\alpha", "after": null, "start_char_pos": 698, "end_char_pos": 704 }, { "type": "D", "before": "RMSD accuracy better than 1.0", "after": null, "start_char_pos": 723, "end_char_pos": 752 }, { "type": "D", "before": "\\AA", "after": null, "start_char_pos": 753, "end_char_pos": 756 }, { "type": "R", "before": ". SWA successes include hairpin-like loops ,", "after": "enables enumerative sampling of a 12 residue loop at a significant but achievable cost of thousands of CPU-hours. In a previously established benchmark, SWA recovers crystallographic conformations with sub-Angstrom accuracy for 19 of 20 loops, compared to 14 of 20 by KIC modeling with a comparable expenditure of computational power. Furthermore, SWA gives high accuracy results on an additional set of 15 loops highlighted in the biological literature for their irregularity or unusual length. Successes include", "start_char_pos": 774, "end_char_pos": 818 }, { "type": "R", "before": "with lengths of", "after": "that pass through tunnels of other side-chains, and loops of lengths", "start_char_pos": 846, "end_char_pos": 861 }, { "type": "R", "before": "(well outside the range of previous methods), and five blind tests: all four loops of an unreleased 275-residue protein structure and an RNA-binding loop of YbxF, a target", "after": ". Remaining problem cases are traced to inaccuracies in the Rosetta all-atom energy function. In five additional blind tests, SWA achieves sub-Angstrom accuracy models, including the first such success in a protein/RNA binding interface, the YbxF/kink-turn interaction", "start_char_pos": 880, "end_char_pos": 1051 }, { "type": "R", "before": "a systematic and fundamentally distinct approach to all-atom protein structure modeling that consistently outperforms Monte Carlo and refinement-based methods", "after": "all-atom enumeration as a systematic approach to protein structure that can leverage high performance computing and physically realistic energy functions to more consistently achieve atomic resolution", "start_char_pos": 1114, "end_char_pos": 1272 } ]
[ 0, 141, 341, 545, 947, 1089 ]
1208.2908
1
Modern multicore chips show complex behavior with respect to performance and power. Starting with the Intel Sandy Bridge processor, it has become possible to directly measure the power dissipation of a CPU chip and correlate this data with the performance properties of the running code. We establish machine models that describe the interaction of parallel code with the hardware, going beyond a simplebottleneck analysis . Together with a phenomenological power model we are able to explain many peculiarities in the performance and power behavior of multicore processors, and derive guidelines for power-efficient execution of parallel programs .
Modern multicore chips show complex behavior with respect to performance and power. Starting with the Intel Sandy Bridge processor, it has become possible to directly measure the power dissipation of a CPU chip and correlate this data with the performance properties of the running code. Going beyond a simple bottleneck analysis, we employ the recently published Execution-Cache-Memory (ECM) model to describe the single- and multi-core performance of streaming kernels. The model refines the well-known roofline model, since it can predict the scaling and the saturation behavior of bandwidth-limited loop kernels on a multicore chip. The saturation point is especially relevant for considerations of energy consumption. From power dissipation measurements of benchmark programs with vastly different requirements to the hardware, we derive a simple, phenomenological power model for the Sandy Bridge processor . Together with the ECM model, we are able to explain many peculiarities in the performance and power behavior of multicore processors, and derive guidelines for energy-efficient execution of parallel programs . Finally, we show that the ECM and power models can be successfully used to describe the scaling and power behavior of a lattice-Boltzmann flow solver code .
[ { "type": "R", "before": "We establish machine models that describe the interaction of parallel code with", "after": "Going beyond a simple bottleneck analysis, we employ the recently published Execution-Cache-Memory (ECM) model to describe the single- and multi-core performance of streaming kernels. The model refines the well-known roofline model, since it can predict the scaling and the saturation behavior of bandwidth-limited loop kernels on a multicore chip. The saturation point is especially relevant for considerations of energy consumption. From power dissipation measurements of benchmark programs with vastly different requirements to", "start_char_pos": 288, "end_char_pos": 367 }, { "type": "R", "before": "going beyond a simplebottleneck analysis", "after": "we derive a simple, phenomenological power model for the Sandy Bridge processor", "start_char_pos": 382, "end_char_pos": 422 }, { "type": "R", "before": "a phenomenological power model", "after": "the ECM model,", "start_char_pos": 439, "end_char_pos": 469 }, { "type": "R", "before": "power-efficient", "after": "energy-efficient", "start_char_pos": 601, "end_char_pos": 616 }, { "type": "A", "before": null, "after": ". Finally, we show that the ECM and power models can be successfully used to describe the scaling and power behavior of a lattice-Boltzmann flow solver code", "start_char_pos": 648, "end_char_pos": 648 } ]
[ 0, 83, 287, 424 ]
1208.3087
1
This paper introduces the Markov-Switching Multifractal Duration (MSMD) model by adapting the MSM stochastic volatility model of Calvet and Fisher (2004) to the duration setting. Although the MSMD process is exponential \beta -mixing as we show in the paper, it is capable of generating highly persistent autocorrelation. We study analytically and by simulation how this feature of durations generated by the MSMD process propagates to counts and realized volatility. We employ a quasi-maximum likelihood estimator of the MSMD parameters based on the Whittle approximation and show that it is a computationally simple and fast alternative to the maximum likelihoodestimator, and works for general MSMD specifications . Finally, we compare the performance of the MSMD model with competing short- and long-memory duration models in an out-of-sample forecasting exercise based on price durations of three major foreign exchange futures contracts. The results of the comparison show that the long-memory models perform similarly and are superior to the short-memory ACD models.
This paper introduces the Markov-Switching Multifractal Duration (MSMD) model by adapting the MSM stochastic volatility model of Calvet and Fisher (2004) to the duration setting. Although the MSMD process is exponential \beta -mixing as we show in the paper, it is capable of generating highly persistent autocorrelation. We study analytically and by simulation how this feature of durations generated by the MSMD process propagates to counts and realized volatility. We employ a quasi-maximum likelihood estimator of the MSMD parameters based on the Whittle approximation and establish its strong consistency and asymptotic normality for general MSMD specifications. We show that the Whittle estimation is a computationally simple and fast alternative to maximum likelihood . Finally, we compare the performance of the MSMD model with competing short- and long-memory duration models in an out-of-sample forecasting exercise based on price durations of three major foreign exchange futures contracts. The results of the comparison show that the MSMD and LMSD perform similarly and are superior to the short-memory ACD models.
[ { "type": "R", "before": "\\beta", "after": "\\beta", "start_char_pos": 220, "end_char_pos": 225 }, { "type": "R", "before": "show that it", "after": "establish its strong consistency and asymptotic normality for general MSMD specifications. We show that the Whittle estimation", "start_char_pos": 577, "end_char_pos": 589 }, { "type": "R", "before": "the maximum likelihoodestimator, and works for general MSMD specifications", "after": "maximum likelihood", "start_char_pos": 642, "end_char_pos": 716 }, { "type": "R", "before": "long-memory models", "after": "MSMD and LMSD", "start_char_pos": 988, "end_char_pos": 1006 } ]
[ 0, 178, 321, 467, 718, 943 ]
1208.3785
1
We consider a financial market with liquidity cost as in Cetin, Jarrow and Protter [2004], where the supply function S \epsilon (s, \nu ) depends on a parameter \epsilon%DIFDELCMD < }\geq0 %%% with S0 (s, \nu )=s corresponding to the perfect liquid situation. Using the PDE characterization of Cetin, Soner and Touzi [2010] , of the super-hedging cost of an option written on such a stock, we provide a Taylor expansion of the super-hedging cost in powers of \epsilon . In particular, we explicitly compute the first term in the expansion for a European Call option and give bounds for the order of the expansion for a European Digital Option.
We consider a financial market with liquidity cost as in Cetin, Jarrow and Protter [2004], where the supply function S ^{\epsilon (s, \nu ) depends on a parameter %DIFDELCMD < }\geq0 %%% \epsilon\geq 0 with S^0 (s, \nu )=s corresponding to the perfect liquid situation. Using the PDE characterization of Cetin, Soner and Touzi [2010] of the super-hedging cost of an option written on such a stock, we provide a Taylor expansion of the super-hedging cost in powers of \epsilon . In particular, we explicitly compute the first term in the expansion for a European Call option and give bounds for the order of the expansion for a European Digital Option.
[ { "type": "R", "before": "\\epsilon", "after": "^{\\epsilon", "start_char_pos": 119, "end_char_pos": 127 }, { "type": "R", "before": "\\nu", "after": "\\nu", "start_char_pos": 132, "end_char_pos": 135 }, { "type": "D", "before": "\\epsilon", "after": null, "start_char_pos": 161, "end_char_pos": 169 }, { "type": "R", "before": "with S0", "after": "\\epsilon\\geq 0 with S^0", "start_char_pos": 193, "end_char_pos": 200 }, { "type": "R", "before": "\\nu", "after": "\\nu", "start_char_pos": 205, "end_char_pos": 208 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 324, "end_char_pos": 325 }, { "type": "R", "before": "\\epsilon", "after": "\\epsilon", "start_char_pos": 459, "end_char_pos": 467 } ]
[ 0, 259, 469 ]
1208.3789
1
Involvements of major financial institutions in the recent financial crisis have generated renewed interests in fragility of global financial networks among economists and regulatory authorities. In particular, one potential vulnerability of the financial networks is the "financial contagion" process in which insolvencies of individual entities propagate through the "web of dependencies" to affect the entire system. In this paper, we formalize a banking network model originally proposed by researchers from Bank of England and elsewhere that may be applicable to scenarios such as the OTC derivatives market, define a global stability measure for this model, and comprehensively evaluate the stability measure over more than 700,000 combinations of networks types and parameter combinations. Based on such comprehensive evaluations, we discuss some interesting implications of our evaluations of this stability measure, and derive topological properties and parameters combinations that may be used to flag the network as a possible fragile network.
The recent financial crisis have generated renewed interests in fragilities of global financial networks among economists and regulatory authorities. In particular, a potential vulnerability of the financial networks is the "financial contagion" process in which insolvencies of individual entities propagate through the "web of dependencies" to affect the entire system. In this paper, we formalize an extension of a financial network model originally proposed by Nier et al. for scenarios such as the OTC derivatives market, define a suitable global stability measure for this model, and perform a comprehensive empirical evaluation of this stability measure over more than 700,000 combinations of networks types and parameter combinations. Based on our evaluations, we discover many interesting implications of our evaluations of this stability measure, and derive topological properties and parameters combinations that may be used to flag the network as a possible fragile network.
[ { "type": "R", "before": "Involvements of major financial institutions in the", "after": "The", "start_char_pos": 0, "end_char_pos": 51 }, { "type": "R", "before": "fragility", "after": "fragilities", "start_char_pos": 112, "end_char_pos": 121 }, { "type": "R", "before": "one", "after": "a", "start_char_pos": 211, "end_char_pos": 214 }, { "type": "R", "before": "a banking", "after": "an extension of a financial", "start_char_pos": 448, "end_char_pos": 457 }, { "type": "R", "before": "researchers from Bank of England and elsewhere that may be applicable to", "after": "Nier et al. for", "start_char_pos": 495, "end_char_pos": 567 }, { "type": "A", "before": null, "after": "suitable", "start_char_pos": 623, "end_char_pos": 623 }, { "type": "R", "before": "comprehensively evaluate the", "after": "perform a comprehensive empirical evaluation of this", "start_char_pos": 669, "end_char_pos": 697 }, { "type": "R", "before": "such comprehensive", "after": "our", "start_char_pos": 807, "end_char_pos": 825 }, { "type": "R", "before": "discuss some", "after": "discover many", "start_char_pos": 842, "end_char_pos": 854 } ]
[ 0, 195, 419, 797 ]
1208.3789
2
The recent financial crisis have generated renewed interests in fragilities of global financial networks among economists and regulatory authorities. In particular, a potential vulnerability of the financial networks is the "financial contagion" process in which insolvencies of individual entities propagate through the "web of dependencies" to affect the entire system. In this paper, we formalize an extension of a financial network model originally proposed by Nier et al. for scenarios such as the OTC derivatives market, define a suitable global stability measure for this model, and perform a comprehensive empirical evaluation of this stability measure over more than 700,000 combinations of networks types and parameter combinations. Based on our evaluations, we discover many interesting implications of our evaluations of this stability measure, and derive topological properties and parameters combinations that may be used to flag the network as a possible fragile network.
The recent financial crisis have generated renewed interests in fragilities of global financial networks among economists and regulatory authorities. In particular, a potential vulnerability of the financial networks is the "financial contagion" process in which insolvencies of individual entities propagate through the "web of dependencies" to affect the entire system. In this paper, we formalize an extension of a financial network model originally proposed by Nier et al. for scenarios such as the OTC derivatives market, define a suitable global stability measure for this model, and perform a comprehensive empirical evaluation of this stability measure over more than 700,000 combinations of networks types and parameter combinations. Based on our evaluations, we discover many interesting implications of our evaluations of this stability measure, and derive topological properties and parameters combinations that may be used to flag the network as a possible fragile network. An interactive software FIN-STAB for computing the stability is available from the website www2.cs.uic.edu/~dasgupta/financial-simulator-files
[ { "type": "A", "before": null, "after": "An interactive software FIN-STAB for computing the stability is available from the website www2.cs.uic.edu/~dasgupta/financial-simulator-files", "start_char_pos": 987, "end_char_pos": 987 } ]
[ 0, 149, 371, 742 ]
1208.4799
1
A fund manager is paid performance fees with a high-water mark provision, and invests both fund's assets and private wealth in separate but potentially correlated risky assets, aiming to maximize expected utility from private wealth in the long run. Relative risk aversion and investment opportunities are constant . We find that the fund's portfolio depends only on the fund's investment opportunities, and the private portfolio only on private opportunities. The manager invests earned fees in the safe asset, allocating remaining private wealth in a constant-proportion portfolio, while the fund is managed as another constant-proportion portfolio , with risk aversion shifted towards one . The optimal welfare is the maximum between the optimal welfare of each investment opportunity, with no diversification gain. In particular, the manager does not hedge fund's exposure with private investments .
A fund manager invests both the fund's assets and own private wealth in separate but potentially correlated risky assets, aiming to maximize expected utility from private wealth in the long run. If relative risk aversion and investment opportunities are constant , we find that the fund's portfolio depends only on the fund's investment opportunities, and the private portfolio only on private opportunities. This conclusion is valid both for a hedge fund manager, who is paid performance fees with a high-water mark provision, and for a mutual fund manager, who is paid management fees proportional to the fund's assets. The manager invests earned fees in the safe asset, allocating remaining private wealth in a constant-proportion portfolio, while the fund is managed as another constant-proportion portfolio . The optimal welfare is the maximum between the optimal welfare of each investment opportunity, with no diversification gain. In particular, the manager does not use private investments to hedge future income from fees .
[ { "type": "R", "before": "is paid performance fees with a high-water mark provision, and invests both", "after": "invests both the", "start_char_pos": 15, "end_char_pos": 90 }, { "type": "A", "before": null, "after": "own", "start_char_pos": 109, "end_char_pos": 109 }, { "type": "R", "before": "Relative", "after": "If relative", "start_char_pos": 251, "end_char_pos": 259 }, { "type": "R", "before": ". We", "after": ", we", "start_char_pos": 316, "end_char_pos": 320 }, { "type": "A", "before": null, "after": "This conclusion is valid both for a hedge fund manager, who is paid performance fees with a high-water mark provision, and for a mutual fund manager, who is paid management fees proportional to the fund's assets.", "start_char_pos": 462, "end_char_pos": 462 }, { "type": "D", "before": ", with risk aversion shifted towards one", "after": null, "start_char_pos": 653, "end_char_pos": 693 }, { "type": "R", "before": "hedge fund's exposure with private investments", "after": "use private investments to hedge future income from fees", "start_char_pos": 857, "end_char_pos": 903 } ]
[ 0, 250, 317, 461, 695, 820 ]
1208.5126
1
We propose a dynamic transition, which has potential application in understanding non-equilibrium processes such as transcription and replication of DNA, unfolding of proteins and RNA etc. Here, we demonstrate that without changing the physiological condition, a DNA may be brought to a new dynamic state from the zipped or unzipped state by varying the frequency of the applied force. We report the value of various critical exponents associated with this transition, which are amenable to verification in the force spectroscopic experiments.
A simple model of DNA based on two interacting polymers has been used to study the unzipping of a double stranded DNA subjected to a periodic force. We propose a dynamical transition, where without changing the physiological condition, it is possible to bring DNA from the zipped /unzipped state to a new dynamic (hysteretic) state by varying the frequency of the applied force. Our studies reveal that the area of the hystersis loop grows with the same exponents as of the isotropic spin systems. These exponents are amenable to verification in the force spectroscopic experiments.
[ { "type": "A", "before": null, "after": "A simple model of DNA based on two interacting polymers has been used to study the unzipping of a double stranded DNA subjected to a periodic force.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "dynamic transition, which has potential application in understanding non-equilibrium processes such as transcription and replication of DNA, unfolding of proteins and RNA etc. Here, we demonstrate that", "after": "dynamical transition, where", "start_char_pos": 14, "end_char_pos": 215 }, { "type": "R", "before": "a DNA may be brought to a new dynamic state", "after": "it is possible to bring DNA", "start_char_pos": 262, "end_char_pos": 305 }, { "type": "R", "before": "or unzipped state", "after": "/unzipped state to a new dynamic (hysteretic) state", "start_char_pos": 322, "end_char_pos": 339 }, { "type": "R", "before": "We report the value of various critical exponents associated with this transition, which", "after": "Our studies reveal that the area of the hystersis loop grows with the same exponents as of the isotropic spin systems. These exponents", "start_char_pos": 387, "end_char_pos": 475 } ]
[ 0, 189, 386 ]
1208.5520
1
In the present work, a novel second-order approximation for ATM option prices under an exponential tempered stable model, a rich class of L\'evy processes with desirable features for financial modeling, is derived and, then, extended to a model with an additional independent Brownian component . Our method of proof is based on an integral representation of the option price involving the tail probability of the log-return process under the share measure and a suitable change of probability measure under which the process becomes stable . Our approach is sufficiently general to cover a wide class of L\'evy processes which satisfy the latter property and whose L\'evy densities can be closely approximated by a stable density near the origin. The results hereafter shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of ATM option prices near expiration. In the presence of an additional Brownian component, the second-order term, in time-t, is of the form d_{2}t^{(3-Y)/2}, with the coefficient d_{2} depending only on the overall jump intensity of the process and the tail-heaviness parameter Y. This extends the known result that the leading term is \sigma t^{1/2}/2\pi, where \sigma is the volatility of the continuous component. In contrast, under a pure-jump tempered stable model, the dependence on the overall jump intensity and Y is already reflected in the leading term, which is of the form d_{1} t^{1/Y}. The information on the relative frequency of negative and positive jumps appears only in the second-order term, which is shown to be of the form d_{2} t and whose order of decay turns out to be independent of Y. Our numerical results show that first-order term typically exhibits rather poor performance and that the second-order term significantly improves the approximation's accuracy.
In the present work, a novel second-order approximation for ATM option prices is derived for a large class of exponential L\'evy models . Our method of proof is based on an integral representation of the option price involving the tail probability of the log-return process under the share measure and a suitable change of probability measure under which the pure-jump component of the log-return process becomes a Y-stable process . Our approach is sufficiently general to cover a wide class of L\'evy processes which satisfy the latter property and whose L\'evy densities can be closely approximated by a stable density near the origin. The results hereafter shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of ATM option prices near expiration. In the presence of an additional Brownian component, the second-order term, in time-t, is of the form d_{2}t^{(3-Y)/2}, with the coefficient d_{2} depending only on the overall jump intensity of the process and the tail-heaviness parameter Y. This extends the already known result that the leading term is \sigma t^{1/2}/2\pi, where \sigma is the volatility of the continuous component. In contrast, under such a pure-jump model, the dependence on the overall jump intensity and Y is already reflected in the leading term, which is of the form d_{1} t^{1/Y}. The information on the relative frequency of negative and positive jumps appears only in the second-order term, which is shown to be of the form d_{2} t and whose order of decay turns out to be independent of Y. The asymptotic behavior of the corresponding Black-Scholes implied volatilities is also addressed. Our numerical results show that first-order term typically exhibits rather poor performance and that the second-order term significantly improves the approximation's accuracy.
[ { "type": "R", "before": "under an exponential tempered stable model, a rich class of", "after": "is derived for a large class of exponential", "start_char_pos": 78, "end_char_pos": 137 }, { "type": "R", "before": "processes with desirable features for financial modeling, is derived and, then, extended to a model with an additional independent Brownian component", "after": "models", "start_char_pos": 145, "end_char_pos": 294 }, { "type": "A", "before": null, "after": "pure-jump component of the log-return", "start_char_pos": 518, "end_char_pos": 518 }, { "type": "A", "before": null, "after": "process becomes a Y-stable", "start_char_pos": 519, "end_char_pos": 519 }, { "type": "D", "before": "becomes stable", "after": null, "start_char_pos": 528, "end_char_pos": 542 }, { "type": "A", "before": null, "after": "already", "start_char_pos": 1200, "end_char_pos": 1200 }, { "type": "A", "before": null, "after": "such", "start_char_pos": 1339, "end_char_pos": 1339 }, { "type": "D", "before": "tempered stable", "after": null, "start_char_pos": 1352, "end_char_pos": 1367 }, { "type": "A", "before": null, "after": "The asymptotic behavior of the corresponding Black-Scholes implied volatilities is also addressed.", "start_char_pos": 1716, "end_char_pos": 1716 }, { "type": "R", "before": "rather", "after": "rather", "start_char_pos": 1785, "end_char_pos": 1791 } ]
[ 0, 296, 544, 749, 939, 1182, 1319, 1503, 1715 ]
1208.5520
3
In the present work, a novel second-order approximation for ATM option prices is derived for a large class of exponential L\'evy models with or without Brownian component. The results hereafter shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of ATM option prices near expiration. In the presence of a Brownian component, the second-order term, in time-t, is of the form d_{2} t^{(3-Y)/2}, with the coefficient d_{2} depending only on Y, the degree of jump activity, on \sigma, the volatility of the continuous component, and on an additional parameter controlling the intensity of the "small" jumps (regardless of their signs). This extends the known result that the leading term is \sigma t^{1/2}/\sqrt{2\pi}. In contrast, under a pure-jump model, the dependence on Y and on the separate intensities of negative and positive small jumps are already reflected in the leading term, which is of the form d_{1}t^{1/Y}. The second-order term is shown to be of the form d_{2} t and, therefore, its order of decay turns out to be independent of Y. The asymptotic behavior of the corresponding Black-Scholes implied volatilities is also addressed. Our method of proof is based on an integral representation of the option price involving the tail probability of the log-return process under the share measure and a suitable change of probability measure under which the pure-jump component of the log-return process becomes a Y-stable process. Our approach is sufficiently general to cover a wide class of L\'evy processes which satisfy the latter property and whose L\'evy density can be closely approximated by a stable density near the origin. Our numerical results show that the first-order term typically exhibits rather poor performance and that the second-order term significantly improves the approximation's accuracy.
In the present work, a novel second-order approximation for ATM option prices is derived for a large class of exponential L\'{e}vy models with or without Brownian component. The results hereafter shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of ATM option prices near expiration. In the presence of a Brownian component, the second-order term, in time-t, is of the form d_{2} \, t^{(3-Y)/2}, with d_{2} depending on Y, the degree of jump activity, on \sigma, the volatility of the continuous component, and on an additional parameter controlling the intensity of the "small" jumps (regardless of their signs). This extends the well known result that the leading first-order term is \sigma t^{1/2}/\sqrt{2\pi}. In contrast, under a pure-jump model, the dependence on Y and on the separate intensities of negative and positive small jumps are already reflected in the leading term, which is of the form d_{1}t^{1/Y}. The second-order term is shown to be of the form d_{2} t and, therefore, its order of decay turns out to be independent of Y. The asymptotic behavior of the corresponding Black-Scholes implied volatilities is also addressed. Our approach is sufficiently general to cover a wide class of L\'{e}vy processes which satisfy the latter property and whose L\'{e}vy density can be closely approximated by a stable density near the origin. Our numerical results show that the first-order term typically exhibits rather poor performance and that the second-order term can significantly improve the approximation's accuracy, particularly in the absence of a Brownian component.
[ { "type": "R", "before": "L\\'evy", "after": "L\\'{e", "start_char_pos": 122, "end_char_pos": 128 }, { "type": "A", "before": null, "after": "\\,", "start_char_pos": 458, "end_char_pos": 458 }, { "type": "R", "before": "the coefficient d_{2", "after": "d_{2", "start_char_pos": 477, "end_char_pos": 497 }, { "type": "A", "before": null, "after": "This extends the well known result that the leading first-order term is \\sigma t^{1/2", "start_char_pos": 695, "end_char_pos": 695 }, { "type": "D", "before": "d", "after": null, "start_char_pos": 958, "end_char_pos": 959 }, { "type": "D", "before": "method of proof is based on an integral representation of the option price involving the tail probability of the log-return process under the share measure and a suitable change of probability measure under which the pure-jump component of the log-return process becomes a Y-stable process. Our", "after": null, "start_char_pos": 1139, "end_char_pos": 1433 }, { "type": "R", "before": "L\\'evy", "after": "L\\'{e", "start_char_pos": 1492, "end_char_pos": 1498 }, { "type": "R", "before": "L\\'evy", "after": "L\\'{e", "start_char_pos": 1553, "end_char_pos": 1559 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1666, "end_char_pos": 1666 }, { "type": "R", "before": "significantly improves", "after": "can significantly improve", "start_char_pos": 1758, "end_char_pos": 1780 }, { "type": "A", "before": null, "after": ", particularly in the absence of a Brownian component", "start_char_pos": 1810, "end_char_pos": 1810 } ]
[ 0, 171, 361, 694, 908, 1035, 1134, 1429, 1633 ]
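The record above (and the related one before it) quotes explicit short-maturity ATM expansions: d_{1} t^{1/Y} + d_{2} t in the pure-jump case, and \sigma t^{1/2}/\sqrt{2\pi} plus a d_{2} t^{(3-Y)/2} correction when a Brownian component is present. A minimal Python sketch of these time powers is given below for comparison; the coefficients d1, d2 and the parameters sigma, Y are hypothetical placeholders, since the abstracts report no numerical values.

import numpy as np

# Hypothetical parameters; the abstracts only fix the powers of t.
sigma, Y = 0.2, 1.5
d1, d2 = 0.1, 0.05

t = np.linspace(1e-4, 0.1, 50)          # short times to maturity

# Pure-jump case: leading term d1 * t^(1/Y), second-order term d2 * t.
atm_pure_jump = d1 * t**(1.0 / Y) + d2 * t

# With a Brownian component: leading term sigma * t^(1/2) / sqrt(2*pi),
# second-order term d2 * t^((3-Y)/2).
atm_with_bm = sigma * np.sqrt(t / (2.0 * np.pi)) + d2 * t**((3.0 - Y) / 2.0)

print(atm_pure_jump[0], atm_with_bm[0])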
1208.5581
1
We prove the existence of bounded solutions of quadratic backward SDEs with jumps, using a direct fixed point approach as in Tevzadze [36]. Under an additional standard assumption, we prove a uniqueness result, thanks to a comparison theorem. Then we study the properties of the corresponding g-expectations, obtaining in particular a nonlinear Doob-Meyer decomposition for g-submartingales and their regularity in time. As a consequence of these results, we obtain a converse comparison theorem for our class of BSDEs. We give applications for dynamic risk measures and their dual representation, and compute their inf-convolution, with some explicit examples.
In this article, we prove the existence of bounded solutions of quadratic backward SDEs with jumps, that is to say for which the generator has quadratic growth in the variables (z,u). From a technical point of view, we use a direct fixed point approach as in Tevzadze [38], which allows us to obtain existence and uniqueness of a solution when the terminal condition is small enough. Then, thanks to a well-chosen splitting, we recover an existence result for a general bounded solution. Under additional assumptions, we can obtain stability results and a comparison theorem, which as usual implies uniqueness.
[ { "type": "R", "before": "We", "after": "In this article, we", "start_char_pos": 0, "end_char_pos": 2 }, { "type": "R", "before": "using a", "after": "that is to say for which the generator has quadratic growth in the variables (z,u). From a technical point of view, we use a", "start_char_pos": 83, "end_char_pos": 90 }, { "type": "R", "before": "36", "after": "38", "start_char_pos": 136, "end_char_pos": 138 }, { "type": "D", "before": ". Under an additional standard assumption, we prove a uniqueness result", "after": null, "start_char_pos": 141, "end_char_pos": 212 }, { "type": "A", "before": null, "after": "which allows us to obtain existence and uniqueness of a solution when the terminal condition is small enough. Then,", "start_char_pos": 215, "end_char_pos": 215 }, { "type": "R", "before": "comparison theorem. Then we study the properties of the corresponding g-expectations, we obtain in particular a non linear Doob-Meyer decomposition for g-submartingales and their regularity in time. As a consequence of this results, we obtain a converse comparison theoremfor our class of BSDEs. We give applications for dynamic risk measures and their dual representation, and compute their inf-convolution, with some explicit examples", "after": "well-chosen splitting, we recover an existence result for general bounded solution. Under additional assumptions, we can obtain stability results and a comparison theorem, which as usual implies uniqueness", "start_char_pos": 228, "end_char_pos": 664 } ]
[ 0, 247, 426, 523 ]
1208.6444
1
Multiscale and multiphysics applications are becoming increasingly common, and many researchers focus on combining existing models to construct combined multiscale models. Here we present a concise review of multiscale applications and their source communities. We assess and compare the methods they use to construct their multiscale models and we characterize areas where inter-disciplinary multiscale collaboration could be particularly beneficial. We conclude that multiscale computing has become increasingly popular in recent years, that different communities adopt very different organizational approaches, and that simulations on a length scale of a few metres and a time scale of a few hours can be found in many of the multiscale research domains. Sharing multiscale methods specifically geared towards these scales between communities may therefore be particularly beneficial.
Multiscale and multiphysics applications are now commonplace, and many researchers focus on combining existing models to construct combined multiscale models. Here we present a concise review of multiscale applications and their source communities. We investigate the prevalence of multiscale projects in the EU and the US, review a range of coupling toolkits they use to construct multiscale models and identify areas where collaboration between disciplines could be particularly beneficial. We conclude that multiscale computing has become increasingly popular in recent years, that different communities adopt very different approaches to constructing multiscale simulations, and that simulations on a length scale of a few metres and a time scale of a few hours can be found in many of the multiscale research domains. Communities may receive additional benefit from sharing methods that are geared towards these scales.
[ { "type": "R", "before": "becoming increasingly common", "after": "now commonplace", "start_char_pos": 45, "end_char_pos": 73 }, { "type": "R", "before": "assess and compare the methods", "after": "investigate the prevalence of multiscale projects in the EU and the US, review a range of coupling toolkits", "start_char_pos": 266, "end_char_pos": 296 }, { "type": "D", "before": "their", "after": null, "start_char_pos": 319, "end_char_pos": 324 }, { "type": "R", "before": "we characterize areas where inter-disciplinary multiscale collaboration", "after": "identify areas where collaboration between disciplines", "start_char_pos": 347, "end_char_pos": 418 }, { "type": "R", "before": "URLanizational approaches", "after": "different approaches to constructing multiscale simulations", "start_char_pos": 578, "end_char_pos": 603 }, { "type": "R", "before": "Sharing multiscale methods specifically", "after": "Communities may receive additional benefit from sharing methods that are", "start_char_pos": 750, "end_char_pos": 789 }, { "type": "D", "before": "between communities may therefore be particularly beneficial", "after": null, "start_char_pos": 818, "end_char_pos": 878 } ]
[ 0, 172, 262, 452, 749 ]
1209.0305
1
In this paper we investigate a new class of growth rate maximization problems based on impulse control strategies such that the average number of trades per unit does not exceed a fixed level. Moreover, we include proportional transaction costs to make the portfolio problem more realistic. We provide a Verification Theorem to compute the optimal growth rate as well as an optimal trading strategy. Furthermore, we prove the existence of a constant boundary strategy which is optimal. At the end, we compare our approach to other discrete-time growth rate maximization problems in numerical examples.
In this paper we investigate a new class of growth rate maximization problems based on impulse control strategies such that the average number of trades per time unit does not exceed a fixed level. Moreover, we include proportional transaction costs to make the portfolio problem more realistic. We provide a Verification Theorem to compute the optimal growth rate as well as an optimal trading strategy. Furthermore, we prove the existence of a constant boundary strategy which is optimal. At the end, we compare our approach to other discrete-time growth rate maximization problems in numerical examples. It turns out that constant boundary strategies with a small average number of trades per unit perform nearly as well as the classical optimal solutions with infinite activity.
[ { "type": "A", "before": null, "after": "time", "start_char_pos": 157, "end_char_pos": 157 }, { "type": "A", "before": null, "after": ". It turns out that constant boundary strategies with a small average number of trades per unit perform nearly as good as the classical optimal solutions with infinite activity", "start_char_pos": 602, "end_char_pos": 602 } ]
[ 0, 193, 291, 400, 486 ]
1209.0453
1
Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the Random Field Ising model (RFIM) indeed provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilising self-referential feedback loops, induced either by herding, i.e. reference to the peers, or trending, i.e. reference to the past, and account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can badly fail at solving simple coordination problems. We also insist on the issue of time-scales, which can be extremely long in some cases, and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of detailed-balance-violating decision rules is needed to decide whether conclusions based on models obeying detailed-balance are indeed robust and generic.
Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the Random Field Ising model (RFIM) indeed provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilising self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can badly fail at solving simple coordination problems. We also insist on the issue of time-scales, which can be extremely long in some cases, and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of so-called "detailed-balance"-violating decision rules is needed to decide whether conclusions based on current models (that all assume detailed-balance) are indeed robust and generic.
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 667, "end_char_pos": 670 }, { "type": "A", "before": null, "after": "so-called \"", "start_char_pos": 1336, "end_char_pos": 1336 }, { "type": "A", "before": null, "after": "\"", "start_char_pos": 1354, "end_char_pos": 1354 }, { "type": "R", "before": "models obeying", "after": "current models (that all assume", "start_char_pos": 1429, "end_char_pos": 1443 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 1461, "end_char_pos": 1461 } ]
[ 0, 118, 226, 333, 517, 792, 987, 1153, 1294 ]
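The RFIM framework invoked in the record above has a standard zero-temperature update rule: each agent aligns with the sign of its idiosyncratic field plus a common field plus peer pressure. A minimal illustrative sketch follows; the coupling, field, and population values are hypothetical and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_agents, J, F = 100, 1.0, 0.1          # hypothetical peer coupling and common "trend" field
f = rng.normal(0.0, 1.0, n_agents)      # idiosyncratic random fields
s = np.where(f + F > 0, 1, -1)          # initial binary decisions

for _ in range(50):                      # repeated best-response sweeps on a ring
    peers = np.roll(s, 1) + np.roll(s, -1)
    s = np.where(f + F + J * peers >= 0, 1, -1)

print(s.mean())   # aggregate opinion; sweeping F produces avalanche-like ruptures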
1209.0697
1
We compute the value of a variance swap when the underlying is modeled as a Markov process time changed by a L\'{e}vy subordinator. In this framework, the underlying may exhibit jumps with a state-dependent L\'{e}vy measure, local stochastic volatility and have a local stochastic default intensity. Moreover, the L\'{e}vy subordinator that drives the underlying can be obtained directly by observing European call/put prices. To illustrate our general framework, we provide an explicit formula for the value of a variance swap when the diffusion is modeled as (i) a L\'evy subordinated geometric Brownian motion with default and (ii) a L\'evy subordinated Jump-to-default CEV process (see carr-linetsky-1). Our results extend the results of mendoza-carr-linetsky-1, by allowing for joint valuation of credit and equity derivatives as well as variance swaps.
We compute the value of a variance swap when the underlying is modeled as a Markov process time changed by a L\'{e}vy subordinator. In this framework, the underlying may exhibit jumps with a state-dependent L\'{e}vy measure, local stochastic volatility and have a local stochastic default intensity. Moreover, the L\'{e}vy subordinator that drives the underlying can be obtained directly by observing European call/put prices. To illustrate our general framework, we provide an explicit formula for the value of a variance swap when the underlying is modeled as (i) a L\'evy subordinated geometric Brownian motion with default and (ii) a L\'evy subordinated Jump-to-default CEV process (see carr-linetsky-1). In the latter example, we extend the results of mendoza-carr-linetsky-1, by allowing for joint valuation of credit and equity derivatives as well as variance swaps.
[ { "type": "R", "before": "diffusion", "after": "underlying", "start_char_pos": 537, "end_char_pos": 546 }, { "type": "R", "before": "Our results extend", "after": "In the latter example, we extend", "start_char_pos": 708, "end_char_pos": 726 } ]
[ 0, 131, 299, 426, 707 ]
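The record above values a variance swap, whose floating leg is the realized variance of log-returns. As a rough reference point for the payoff being priced, here is a hedged Monte Carlo sketch under plain Black-Scholes dynamics, a deliberately simpler stand-in for the time-changed Markov models of the paper; sigma and all sample sizes are hypothetical.

import numpy as np

rng = np.random.default_rng(2)
sigma, T, n_steps, n_paths = 0.3, 1.0, 252, 20000
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
log_ret = -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z   # risk-neutral log-returns

realized_var = np.sum(log_ret**2, axis=1) / T   # annualized realized variance
print(realized_var.mean())                      # fair variance strike, close to sigma**2 = 0.09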
1209.0708
1
A model is presented in this work for simulating endogenously the evolution of the marginal costs of production of energy carriers from non-renewable resources, their consumption, depletion pathways and timescales. Such marginal costs can be used to simulate the price formation of energy commodities. Drawing on previous work where a global database of energy resource economic potential was constructed, this work uses cost distributions of non-renewable resources in order to evaluate global flows of energy commodities. A mathematical framework is given to calculate endogenous flows of energy resources given an exogenous commodity price path. This framework can be used in reverse in order to calculate an exogenous marginal cost of production of energy carriers given an exogenous carrier demand. These two approaches generate limiting scenarios that depict extreme use of natural resources. The theory is however designed for use within economic models and models of technological change such as the Future Technology Transformations (FTT) family of models. In this work, it is implemented in the global power sector model FTT:Power, with which scenarios of global resource use and marginal costs of production of energy commodities are detailed. Policy implications are given.
A model is presented in this work for simulating endogenously the evolution of the marginal costs of production of energy carriers from non-renewable resources, their consumption, depletion pathways and timescales. Such marginal costs can be used to simulate the long term average price formation of energy commodities. Drawing on previous work where a global database of energy resource economic potentials was constructed, this work uses cost distributions of non-renewable resources in order to evaluate global flows of energy commodities. A mathematical framework is given to calculate endogenous flows of energy resources given an exogenous commodity price path. This framework can be used in reverse in order to calculate an exogenous marginal cost of production of energy carriers given an exogenous carrier demand. Using rigid price inelastic assumptions independent of the economy, these two approaches generate limiting scenarios that depict extreme use of natural resources. This is useful to characterise the current state and possible uses of remaining non-renewable resources such as fossil fuels and natural uranium. The theory is however designed for use within economic or technology models that allow technology substitutions. In this work, it is implemented in the global power sector model FTT:Power. Policy implications are given.
[ { "type": "A", "before": null, "after": "long term average", "start_char_pos": 263, "end_char_pos": 263 }, { "type": "R", "before": "potential", "after": "potentials", "start_char_pos": 380, "end_char_pos": 389 }, { "type": "R", "before": "These", "after": "Using rigid price inelastic assumptions independent of the economy, these", "start_char_pos": 805, "end_char_pos": 810 }, { "type": "A", "before": null, "after": "This is useful to characterise the current state and possible uses of remaining non-renewable resources such as fossil fuels and natural uranium.", "start_char_pos": 900, "end_char_pos": 900 }, { "type": "R", "before": "models and models of technological change such as the Future Technology Transformations (FTT) family of models", "after": "or technology models that allow technology substitutions", "start_char_pos": 956, "end_char_pos": 1066 }, { "type": "D", "before": ", with which scenarios of global resource use and marginal costs of production of energy commodities are detailed", "after": null, "start_char_pos": 1144, "end_char_pos": 1257 } ]
[ 0, 214, 302, 524, 649, 804, 899, 1068 ]
1209.1893
1
This paper develops an asymptotic expansion technique in momentum space. It is shown that Fourier transformation combined with a polynomial-function approximation of the nonlinear terms gives a closed recursive system of ordinary differential equations (ODEs) as an asymptotic expansion of the conditional distribution appearing in stochastic filtering problems. Thanks to the simplicity of the ODE system, higher order calculation can be performed easily. Furthermore, solving ODEs sequentially with small sub-periods with updated initial conditions makes it possible to implement a "substepping method" for asymptotic expansion in a numerically efficient way. This is found to improve the performance significantly where otherwise the approximation fails badly. The method may be useful for other applications, such as option pricing in finance as well as measure-valued stochastic dynamics in general.
This paper develops an asymptotic expansion technique in momentum space for stochastic filtering. It is shown that Fourier transformation combined with a polynomial-function approximation of the nonlinear terms gives a closed recursive system of ordinary differential equations (ODEs) for the relevant conditional distribution. Thanks to the simplicity of the ODE system, higher order calculation can be performed easily. Furthermore, solving ODEs sequentially with small sub-periods with updated initial conditions makes it possible to implement a {\it substepping method} for asymptotic expansion in a numerically efficient way. This is found to improve the performance significantly where otherwise the approximation fails badly. The method is expected to provide an useful tool for more realistic financial modeling with unobserved parameters, and also for problems involving nonlinear measure-valued processes.
[ { "type": "A", "before": null, "after": "for stochastic filtering", "start_char_pos": 72, "end_char_pos": 72 }, { "type": "R", "before": "as an asymptotic expansion of the conditional distribution appearing in stochastic filtering problems", "after": "for the relevant conditional distribution", "start_char_pos": 262, "end_char_pos": 363 }, { "type": "D", "before": "\"substepping method\"", "after": null, "start_char_pos": 587, "end_char_pos": 607 }, { "type": "A", "before": null, "after": "substepping method", "start_char_pos": 613, "end_char_pos": 613 }, { "type": "R", "before": "may be useful for other applications, such as, option pricing in finance as well as", "after": "is expected to provide an useful tool for more realistic financial modeling with unobserved parameters, and also for problems involving nonlinear", "start_char_pos": 784, "end_char_pos": 867 }, { "type": "R", "before": "stochastic dynamics in general", "after": "processes", "start_char_pos": 883, "end_char_pos": 913 } ]
[ 0, 74, 365, 459, 670, 772 ]
1209.1893
2
This paper develops an asymptotic expansion technique in momentum space for stochastic filtering. It is shown that Fourier transformation combined with a polynomial-function approximation of the nonlinear terms gives a closed recursive system of ordinary differential equations (ODEs) for the relevant conditional distribution. Thanks to the simplicity of the ODE system, higher order calculation can be performed easily. Furthermore, solving ODEs sequentially with small sub-periods with updated initial conditions makes it possible to implement a {\it substepping method} for asymptotic expansion in a numerically efficient way. This is found to improve the performance significantly where otherwise the approximation fails badly. The method is expected to provide an useful tool for more realistic financial modeling with unobserved parameters, and also for problems involving nonlinear measure-valued processes.
This paper develops an asymptotic expansion technique in momentum space for stochastic filtering. It is shown that Fourier transformation combined with a polynomial-function approximation of the nonlinear terms gives a closed recursive system of ordinary differential equations (ODEs) for the relevant conditional distribution. Thanks to the simplicity of the ODE system, higher order calculation can be performed easily. Furthermore, solving ODEs sequentially with small sub-periods with updated initial conditions makes it possible to implement a substepping method for asymptotic expansion in a numerically efficient way. This is found to improve the performance significantly where otherwise the approximation fails badly. The method is expected to provide a useful tool for more realistic financial modeling with unobserved parameters, and also for problems involving nonlinear measure-valued processes.
[ { "type": "R", "before": "substepping method", "after": "substepping method", "start_char_pos": 571, "end_char_pos": 589 }, { "type": "R", "before": "an", "after": "a", "start_char_pos": 783, "end_char_pos": 785 } ]
[ 0, 97, 327, 421, 646, 748 ]
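Both revisions above hinge on the "substepping" implementation: the asymptotic-expansion ODEs are integrated over short sub-periods, restarting each sub-period from the previous terminal value so the expansion stays accurate. Below is the idea in isolation, with a placeholder ODE, since the actual filtering ODE system is not given in the abstracts.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    return x * (1.0 - x)        # placeholder right-hand side

x = np.array([0.1])                  # initial condition
edges = np.linspace(0.0, 5.0, 11)    # ten short sub-periods

for a, b in zip(edges[:-1], edges[1:]):
    sol = solve_ivp(rhs, (a, b), x)  # integrate only over the short sub-period
    x = sol.y[:, -1]                 # updated initial condition for the next one

print(x)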
1209.2616
1
Influenza virus contains two highly variable envelope glycoproteins, hemagglutinin (HA) and neuraminidase (NA). The structure and properties of HA, which is responsible for binding the virus to the cell that is being infected, change significantly when the virus is transmitted from avian or swine species to humans. Here we focus on much smaller human individual evolutionary amino acid mutational changes in NA, which cleaves sialic acid groups and is required for influenza virus replication. We show that these smaller changes can be monitored very accurately across many Uniprot and NCBI strains using hydropathicity scales to quantify the roughness of water film packages. Quantification is most effective with the differential scale based on protein self-organized criticality (SOC). NA exhibits punctuated evolution at the molecular scale, millions of times smaller than the more familiar species scale, and thousands of times smaller than the genomic scale. Our analysis shows that large-scale vaccination projects have been responsible for a very large reduction in influenza severity in the last century, a reduction which is hidden from short-term studies. Hydropathic analysis is capable of interpreting and even predicting trends of functional changes in mutation-prolific viruses.
Influenza virus contains two highly variable envelope glycoproteins, hemagglutinin (HA) and neuraminidase (NA). The structure and properties of HA, which is responsible for binding the virus to the cell that is being infected, change significantly when the virus is transmitted from avian or swine species to humans. Here we focus on much smaller human individual evolutionary amino acid mutational changes in NA, which cleaves sialic acid groups and is required for influenza virus replication. We show that very small amino acid changes can be monitored very accurately across many Uniprot and NCBI strains using hydropathicity scales to quantify the roughness of water film packages. Quantitative sequential analysis is most effective with the differential hydropathicity scale based on protein self-organized criticality (SOC). NA exhibits punctuated evolution at the molecular scale, millions of times smaller than the more familiar species scale, and thousands of times smaller than the genomic scale. Our analysis shows that large-scale vaccination programs have been responsible for a very large convergent reduction in influenza severity in the last century, a reduction which is hidden from short-term studies of vaccine effectiveness. Hydropathic analysis is capable of interpreting and even predicting trends of functional changes in mutation-prolific viruses.
[ { "type": "R", "before": "these smaller", "after": "very small amino acid", "start_char_pos": 509, "end_char_pos": 522 }, { "type": "R", "before": "Quantification", "after": "Quantitative sequential analysis", "start_char_pos": 679, "end_char_pos": 693 }, { "type": "A", "before": null, "after": "hydropathicity", "start_char_pos": 734, "end_char_pos": 734 }, { "type": "R", "before": "projects", "after": "programs", "start_char_pos": 1011, "end_char_pos": 1019 }, { "type": "A", "before": null, "after": "convergent", "start_char_pos": 1059, "end_char_pos": 1059 }, { "type": "A", "before": null, "after": "of vaccine effectiveness", "start_char_pos": 1165, "end_char_pos": 1165 } ]
[ 0, 111, 316, 495, 678, 786, 962, 1167 ]
1209.3083
1
While modern structural biology has provided us with a rich and diverse picture of membrane proteins, the biological function of membrane proteins is often influenced by the mechanical properties of the surrounding lipid bilayer. Here we develop an analytic methodology connecting the hydrophobic shape of membrane proteins to the cooperative function of membrane proteins induced by bilayer-mediated elastic interactions. Application of this methodology to mechanosensitive channels shows that, in addition to protein separation and bilayer material properties, the sign and strength of elastic interactions, and associated cooperative gating characteristics, can depend on the protein shape and orientation. Our approach predicts how elastic interactions affect the molecular organization and biological function of proteins in crowded membranes.
While modern structural biology has provided us with a rich and diverse picture of membrane proteins, the biological function of membrane proteins is often influenced by the mechanical properties of the surrounding lipid bilayer. Here we explore the relation between the shape of membrane proteins and the cooperative function of membrane proteins induced by membrane-mediated elastic interactions. For the experimental model system of mechanosensitive ion channels we find that the sign and strength of elastic interactions depend on the protein shape, yielding distinct cooperative gating curves for distinct protein orientations. Our approach predicts how directional elastic interactions affect the molecular organization and biological function of proteins in crowded membranes.
[ { "type": "R", "before": "develop an analytic methodology connecting the hydrophobic", "after": "explore the relation between the", "start_char_pos": 238, "end_char_pos": 296 }, { "type": "R", "before": "to", "after": "and", "start_char_pos": 324, "end_char_pos": 326 }, { "type": "R", "before": "bilayer-mediated", "after": "membrane-mediated", "start_char_pos": 384, "end_char_pos": 400 }, { "type": "R", "before": "Application of this methodology to mechanosensitive channels shows that , in addition to protein separation and bilayer material properties,", "after": "For the experimental model system of mechanosensitive ion channels we find that", "start_char_pos": 423, "end_char_pos": 563 }, { "type": "D", "before": ", and associated cooperative gating characteristics, can", "after": null, "start_char_pos": 610, "end_char_pos": 666 }, { "type": "R", "before": "and orientation", "after": ", yielding distinct cooperative gating curves for distinct protein orientations", "start_char_pos": 695, "end_char_pos": 710 }, { "type": "A", "before": null, "after": "directional", "start_char_pos": 739, "end_char_pos": 739 } ]
[ 0, 229, 422, 712 ]
1209.3513
1
We propose a unified structural credit risk model incorporating insolvency, recovery and rollover risks. The firm finances itself mainly by issuing short- and long-term debt. Short-term debt can have either a discrete or a more realistic staggered tenor structure. We show that a unique threshold strategy (i.e., a bank run barrier) exists for short-term creditors to decide when to withdraw their funding, and this strategy is closely related to the solution of a non-standard optimal stopping time problem with control constraints. We decompose the total credit risk into an insolvency component and an illiquidity component based on such an endogenous bank run barrier together with an exogenous insolvency barrier.
We propose a unified structural credit risk model incorporating both insolvency and illiquidity risks, in order to investigate how a firm's default probability depends on the liquidity risk associated with its financing structure. We assume the firm finances its risky assets by issuing short- and long-term debt. Short-term debt can have either a discrete or a more realistic staggered tenor structure. At rollover dates of short-term debt, creditors face a dynamic coordination problem. We show that a unique threshold strategy (i.e., a debt run barrier) exists for short-term creditors to decide when to withdraw their funding, and this strategy is closely related to the solution of a non-standard optimal stopping time problem with control constraints. We decompose the total credit risk into an insolvency component and an illiquidity component based on such an endogenous debt run barrier together with an exogenous insolvency barrier.
[ { "type": "R", "before": "insolvency, recovery and rollover risks. The firm finances itself mainly", "after": "both insolvency and illiquidity risks, in order to investigate how a firm's default probability depends on the liquidity risk associated with its financing structure. We assume the firm finances its risky assets", "start_char_pos": 64, "end_char_pos": 136 }, { "type": "A", "before": null, "after": "At rollover dates of short-term debt, creditors face a dynamic coordination problem.", "start_char_pos": 265, "end_char_pos": 265 }, { "type": "R", "before": "bank", "after": "debt", "start_char_pos": 316, "end_char_pos": 320 }, { "type": "R", "before": "bank", "after": "debt", "start_char_pos": 656, "end_char_pos": 660 } ]
[ 0, 104, 174, 264, 534 ]
1209.3924
2
Multimodal oncological strategies which combine chemotherapy or radiotherapy with hyperthermia have a potential of improving the efficacy of the non-surgical methods of cancer treatment. Hyperthermia engages the heat-shock response mechanism (HSR), whose main component (the heat-shock proteins) is known to directly prevent the intended apoptosis of cancer cells. Moreover, cancer cells can have an already partially activated HSR, thereby hyperthermia may be more toxic to them relative to normal cells. On the other hand, HSR triggers thermotolerance, i.e. the hyperthermia treated cells show an impairment in their susceptibility to a subsequent heat-induced stress. This poses a question about the efficacy and about an optimal strategy of the therapy combined with hyperthermia treatment. We adapt our previous HSR model and propose its stochastic extension, which we then analyse using the approximate probabilistic model checking (APMC) technique. We formalise the notion of the thermotolerance and estimate the intensity and the duration of the HSR-induced thermotolerance. Finally, we quantify the effect of a multimodal therapy based on hyperthermia and a cytotoxic effect of bortezomib, a proteasome inhibitor, and we propose an optimal strategy for combining these two modalities. By mechanistic modelling of HSR we are able to support the common belief that the combination of cancer treatment strategies increases therapy efficacy. Moreover, our results demonstrate feasibility and practical potential of APMC in analysis of stochastic models of signalling pathways.
Multimodal oncological strategies which combine chemotherapy or radiotherapy with hyperthermia have a potential of improving the efficacy of the non-surgical methods of cancer treatment. Hyperthermia engages the heat-shock response mechanism (HSR), whose main components are heat-shock proteins (HSP). Cancer cells have already partially activated HSR, thereby, hyperthermia may be more toxic to them relative to normal cells. On the other hand, HSR triggers thermotolerance, i.e. hyperthermia treated cells show an impairment in their susceptibility to a subsequent heat-induced stress. This poses questions about efficacy and optimal strategy of the anti-cancer therapy combined with hyperthermia treatment. To address these questions, we adapt our previous HSR model and propose its stochastic extension. We formalise the notion of a HSP-induced thermotolerance. Next, we estimate the intensity and the duration of the thermotolerance. Finally, we quantify the effect of a multimodal therapy based on hyperthermia and a cytotoxic effect of bortezomib, a clinically approved proteasome inhibitor. Consequently, we propose an optimal strategy for combining hyperthermia and proteasome inhibition modalities. In summary, by a proof-of-concept mathematical analysis of HSR we are able to support the common belief that the combination of cancer treatment strategies increases therapy efficacy.
[ { "type": "R", "before": "(the", "after": "are", "start_char_pos": 270, "end_char_pos": 274 }, { "type": "R", "before": ") is known to directly prevent the intended apoptosis of cancer cells. Moreover, cancer cells can have an", "after": "(HSP). Cancer cells have", "start_char_pos": 295, "end_char_pos": 400 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 442, "end_char_pos": 442 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 562, "end_char_pos": 565 }, { "type": "R", "before": "a question about the efficacy and about an", "after": "questions about efficacy and", "start_char_pos": 684, "end_char_pos": 726 }, { "type": "A", "before": null, "after": "anti-cancer", "start_char_pos": 751, "end_char_pos": 751 }, { "type": "R", "before": "We", "after": "To address these questions, we", "start_char_pos": 798, "end_char_pos": 800 }, { "type": "D", "before": ", which we then analyse using the approximate probabilistic model checking (APMC) technique", "after": null, "start_char_pos": 867, "end_char_pos": 958 }, { "type": "R", "before": "the thermotoleranceand", "after": "a HSP-induced thermotolerance. Next, we", "start_char_pos": 988, "end_char_pos": 1010 }, { "type": "D", "before": "HSR-induced", "after": null, "start_char_pos": 1058, "end_char_pos": 1069 }, { "type": "R", "before": "proteasome inhibitor, and", "after": "clinically approved proteasome inhibitor. Consequently,", "start_char_pos": 1205, "end_char_pos": 1230 }, { "type": "R", "before": "these two modalities. By mechanistic modelling of", "after": "hyperthermia and proteasome inhibition modalities. In summary, by a proof of concept mathematical analysis of", "start_char_pos": 1276, "end_char_pos": 1325 }, { "type": "D", "before": ". Moreover, our results demonstrate feasibility and practical potential of APMC in analysis of stochastic models of signalling pathways", "after": null, "start_char_pos": 1450, "end_char_pos": 1585 } ]
[ 0, 186, 365, 507, 672, 797, 960, 1086, 1297, 1451 ]
1209.3924
3
Multimodal oncological strategies which combine chemotherapy or radiotherapy with hyperthermia have a potential of improving the efficacy of the non-surgical methods of cancer treatment. Hyperthermia engages the heat-shock response mechanism (HSR), whose main components are heat-shock proteins (HSP). Cancer cells have already partially activated HSR, thereby, hyperthermia may be more toxic to them relative to normal cells. On the other hand, HSR triggers thermotolerance, i.e. hyperthermia treated cells show an impairment in their susceptibility to a subsequent heat-induced stress. This poses questions about efficacy and optimal strategy of the anti-cancer therapy combined with hyperthermia treatment. To address these questions, we adapt our previous HSR model and propose its stochastic extension. We formalise the notion of a HSP-induced thermotolerance. Next, we estimate the intensity and the duration of the thermotolerance. Finally, we quantify the effect of a multimodal therapy based on hyperthermia and a cytotoxic effect of bortezomib, a clinically approved proteasome inhibitor. Consequently, we propose an optimal strategy for combining hyperthermia and proteasome inhibition modalities. In summary, by a proof-of-concept mathematical analysis of HSR we are able to support the common belief that the combination of cancer treatment strategies increases therapy efficacy.
Multimodal oncological strategies which combine chemotherapy or radiotherapy with hyperthermia have a potential of improving the efficacy of the non-surgical methods of cancer treatment. Hyperthermia engages the heat-shock response mechanism (HSR), the main components of which are heat-shock proteins (HSP). Cancer cells have already partially activated HSR, thereby, hyperthermia may be more toxic to them relative to normal cells. On the other hand, HSR triggers thermotolerance, i.e. hyperthermia treated cells show an impairment in their susceptibility to a subsequent heat-induced stress. This poses questions about efficacy and optimal strategy of the anti-cancer therapy combined with hyperthermia treatment. To address these questions, we adapt our previous HSR model and propose its stochastic extension. We formalise the notion of a HSP-induced thermotolerance. Next, we estimate the intensity and the duration of the thermotolerance. Finally, we quantify the effect of a multimodal therapy based on hyperthermia and a cytotoxic effect of bortezomib, a clinically approved proteasome inhibitor. Consequently, we propose an optimal strategy for combining hyperthermia and proteasome inhibition modalities. In summary, by a proof-of-concept mathematical analysis of HSR we are able to support the common belief that the combination of cancer treatment strategies increases therapy efficacy.
[ { "type": "R", "before": "which main component", "after": "main component of which", "start_char_pos": 249, "end_char_pos": 269 }, { "type": "A", "before": null, "after": ". thermotolerance", "start_char_pos": 1391, "end_char_pos": 1391 } ]
[ 0, 186, 300, 425, 586, 708, 806, 864, 937, 1097, 1207 ]
1209.4223
1
Alzheimer's disease is a human brain disease that affects a significant fraction of the population by causing problems with short-term memory, thinking, spatial orientation and behavior, memory loss and other intellectual abilities. To date there is no singular test that can definitively diagnose Alzheimer's disease, although imaging technology designed to detect Alzheimer's plaques and tangles is rapidly becoming more powerful and precise. In this paper we introduce a novel diagnostic protocol, based on the combination of mitochondrial hypothesis-dynamics with the role of electromagnetic influences of the metal ions into the inner mitochondrial membrane, and the quantitated analysis of mitochondrial population. While there are few disappointing clinical-trial results for drug treatments in patients with Alzheimer's disease, the scientific community needs alternative diagnostic tools rather than investing mainly in amyloid-targeting drugs.
Alzheimer's disease is a human brain disease that affects a significant fraction of the population by causing problems with short-term memory, thinking, spatial orientation and behavior, memory loss and other intellectual abilities. To date there is no singular test that can definitively diagnose Alzheimer's disease, although imaging technology designed to detect Alzheimer's plaques and tangles is rapidly becoming more powerful and precise. In this paper we introduce a decision-making model, based on the combination of mitochondrial hypothesis-dynamics with the role of electromagnetic influences of the metal ions into the inner mitochondrial membrane and the quantitative analysis of mitochondrial population. While there are few disappointing clinical-trial results for drug treatments in patients with Alzheimer's disease, the scientific community needs alternative diagnostic tools rather than investing mainly in amyloid-targeting drugs.
[ { "type": "R", "before": "novel diagnostic protocol", "after": "decision-making model", "start_char_pos": 477, "end_char_pos": 502 }, { "type": "R", "before": ", and the quantitated", "after": "and the quantitative", "start_char_pos": 667, "end_char_pos": 688 } ]
[ 0, 232, 447, 726 ]
1209.4517
1
Geometric Brownian motion is non-stationary. It is non-ergodic in the sense that the time-average growth rate observed in a single realization differs from the growth rate of the ensemble average. We prove that the time-average growth rate of averages over a finite number, N, of realizations is independent of N. A stability analysis shows that the time at which such averages begin to deviate from ensemble-average behavior increases logarithmically with N.
Geometric Brownian motion (GBM) is a model for systems as varied as financial instruments and populations. The statistical properties of GBM are complicated by non-ergodicity, which can lead to ensemble averages exhibiting exponential growth while time averages suffer collapse. A common tactic for dealing with this difference is diversification. In this letter we show that diversification will not solve the problem but only delay the collapse.
[ { "type": "R", "before": "is non-stationary. It is non-ergodic in the sense that the time-average growth rate observed in a single realization differs from the growth rate of the ensemble average. We prove that the time-average growth rate of averages over a finite number, N, of realizations is independent of N. A stability analysis shows that the time at which such averages begin to deviate from ensemble-average behavior increases logarithmically with N", "after": "(GBM) is a model for systems as varied as financial instruments and populations. The statistical properties of GBM are complicated by non-ergodicity, which can lead to ensemble averages exhibiting exponential growth while time averages suffer collapse. A common tactic for dealing with this difference is diversification. In this letter we show that diversification will not solve the problem but only delay the collapse", "start_char_pos": 26, "end_char_pos": 458 } ]
[ 0, 44, 196 ]
1209.4517
2
Geometric Brownian motion (GBM) is a model for systems as varied as financial instruments and populations. The statistical properties of GBM are complicated by non-ergodicity, which can lead to ensemble averages exhibiting exponential growth while time averages suffer collapse. A common tactic for dealing with this difference is diversification. In this letter we show that diversification will not solve the problem but only delay the collapse.
Geometric Brownian motion (GBM) is a model for systems as varied as financial instruments and populations. The statistical properties of GBM are complicated by non-ergodicity, which can lead to ensemble averages exhibiting exponential growth while any individual trajectory collapses according to its time-average. A common tactic for bringing time averages closer to ensemble averages is diversification. In this letter we study the effects of diversification using the concept of ergodicity breaking.
[ { "type": "R", "before": "time averages suffer collapse", "after": "any individual trajectory collapses according to its time-average", "start_char_pos": 248, "end_char_pos": 277 }, { "type": "R", "before": "dealing with this difference", "after": "bringing time averages closer to ensemble averages", "start_char_pos": 300, "end_char_pos": 328 }, { "type": "R", "before": "show that diversification will not solve the problem but only delay the collapse", "after": "study the effects of diversification using the concept of ergodicity breaking", "start_char_pos": 367, "end_char_pos": 447 } ]
[ 0, 106, 279, 348 ]
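The two 1209.4517 records above rest on a textbook property of GBM: the ensemble-average growth rate is mu, while the time-average (single-trajectory) growth rate is mu - sigma^2/2, so with mu < sigma^2/2 the ensemble mean grows while almost every path collapses. A short simulation confirming this, with hypothetical parameter values:

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, dt, n_steps, n_paths = 0.05, 0.4, 0.01, 1000, 10000

z = rng.standard_normal((n_paths, n_steps))
log_s = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)

T = n_steps * dt
print(np.log(np.mean(np.exp(log_s[:, -1]))) / T)  # ensemble growth, ~ mu = 0.05 (noisy at large T)
print(np.mean(log_s[:, -1]) / T)                  # time-average growth, ~ mu - sigma^2/2 = -0.03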
1209.4566
1
Copying information is a fundamental task of biological systems that has to be performed at a finite temperature. There is agreement that this fact alone implies a lower limit on the error rate. However, contrasting results have been obtained regarding how this limit is approached. For instance, it is not clear when it can be achieved in a slow, quasi-adiabatic regime or in a fast and highly dissipative one. In this paper, by means of analytical calculations and numerical simulations, we unravel a common feature of stochastic copying systems: the existence of two radically different copying modes. The first is based on different kinetic barriers, and is characterized by a high speed and high dissipation close to the lowest possible error. The second is based on energy differences between right and wrong copies, and is characterized by the fact that minimum copying error can be achieved at low speed and low dissipation. In models characterized by a single copying step, we demonstrate that these modes are alternative, i.e. they cannot be mixed to further reduce the minimum error. However, the two modes can be combined in multi-step reactions, such as in models implementing error correction through a proofreading pathway. By analyzing experimentally measured kinetic rates of two seemingly similar DNA polymerases, T7 and Pol\gamma, we argue that one of them operates in the kinetic and the other in the energetic regime.
We study stochastic copying schemes in which discrimination between a right and a wrong match is achieved via different kinetic barriers or different binding energy of the two matches. We demonstrate that, in single-step reactions, the two discrimination mechanisms are strictly alternative and can not be mixed to further reduce the error fraction. Close to the lowest error limit, kinetic discrimination results in a diverging copying velocity and dissipation per copied bit. On the opposite, energetic discrimination reaches its lowest error limit in an adiabatic regime where dissipation and velocity vanish. By analyzing experimentally measured kinetic rates of two DNA polymerases, T7 and Pol\gamma, we argue that one of them operates in the kinetic and the other in the energetic regime. Finally, we show how the two mechanisms can be combined in copying schemes implementing error correction through a proofreading pathway.
[ { "type": "R", "before": "Copying information is a fundamental task of biological systems that has to be performed at a finite temperature. There is agreement that this fact alone implies a lower limit on the error rate. However, contrasting results have been obtained regarding how this limit is approached. For instance, it is not clear when it can be achieved in", "after": "We study stochastic copying schemes in which discrimination between a right and", "start_char_pos": 0, "end_char_pos": 339 }, { "type": "R", "before": "slow, quasi-adiabiatic regime or in a fast and highly dissipative one. In this paper, by means of analytical calculations and numerical simulations, we unravel a common feature of stochastic copying systems: the existence of two radically different copying modes. The first is based on", "after": "wrong match is achieved via", "start_char_pos": 342, "end_char_pos": 627 }, { "type": "R", "before": ", and is characterized by a high speed and high dissipation close to the lowest possible error. The second is based on energy differences between right and wrong copies, and is characterized by the fact that minimum copying error can be achieved at low speed and low dissipation. In models characterized by a single copying step, we demonstrate thatthese modes are alternative, i.e. they cannot", "after": "or different binding energy of the two matches. We demonstrate that, in single-step reactions, the two discrimination mechanisms are strictly alternative and can not", "start_char_pos": 655, "end_char_pos": 1049 }, { "type": "R", "before": "minimum error . However, the two modes can be combined in multi-step reactions, such as in models implementing error correction through a proofreading pathway.", "after": "error fraction. Close to the lowest error limit, kinetic discrimination results in a diverging copying velocity and dissipation per copied bit. On the opposite, energetic discrimination reaches its lowest error limit in an adiabatic regime where dissipation and velocity vanish.", "start_char_pos": 1081, "end_char_pos": 1240 }, { "type": "D", "before": "seemingly similar", "after": null, "start_char_pos": 1299, "end_char_pos": 1316 }, { "type": "A", "before": null, "after": ". Finally, we show how the two mechanisms can be combined in copying schemes implementing error correction through a proofreading pathway", "start_char_pos": 1440, "end_char_pos": 1440 } ]
[ 0, 113, 194, 282, 412, 605, 750, 934, 1096, 1240 ]
1209.4566
2
We study stochastic copying schemes in which discrimination between a right and a wrong match is achieved via different kinetic barriers or different binding energy of the two matches. We demonstrate that, in single-step reactions, the two discrimination mechanisms are strictly alternative and can not be mixed to further reduce the error fraction. Close to the lowest error limit, kinetic discrimination results in a diverging copying velocity and dissipation per copied bit. On the opposite, energetic discrimination reaches its lowest error limit in an adiabatic regime where dissipation and velocity vanish. By analyzing experimentally measured kinetic rates of two DNA polymerases, T7 and Pol\gamma, we argue that one of them operates in the kinetic and the other in the energetic regime. Finally, we show how the two mechanisms can be combined in copying schemes implementing error correction through a proofreading pathway.
We study stochastic copying schemes in which discrimination between a right and a wrong match is achieved via different kinetic barriers or different binding energies of the two matches. We demonstrate that, in single-step reactions, the two discrimination mechanisms are strictly alternative and can not be mixed to further reduce the error fraction. Close to the lowest error limit, kinetic discrimination results in a diverging copying velocity and dissipation per copied bit. On the opposite, energetic discrimination reaches its lowest error limit in an adiabatic regime where dissipation and velocity vanish. By analyzing experimentally measured kinetic rates of two DNA polymerases, T7 and Pol\gamma, we argue that one of them operates in the kinetic and the other in the energetic regime. Finally, we show how the two mechanisms can be combined in copying schemes implementing error correction through a proofreading pathway.
[ { "type": "R", "before": "energy", "after": "energies", "start_char_pos": 158, "end_char_pos": 164 }, { "type": "R", "before": "\\gamma", "after": "\\gamma", "start_char_pos": 699, "end_char_pos": 705 }, { "type": "D", "before": ".", "after": null, "start_char_pos": 933, "end_char_pos": 934 } ]
[ 0, 184, 349, 477, 612, 796 ]
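The revision pair above (1209.4566) contrasts kinetic and energetic discrimination in copying. As a minimal illustration of the two limits (not the paper's model), the sketch below computes the error fraction of a toy single-step copier; the rate k and the discrimination strengths delta and Delta are assumed values in units of k_B T.

import numpy as np

# Kinetic discrimination: equal binding, the wrong match faces an extra
# barrier delta, so k_wrong = k * exp(-delta); the error is set by the
# rate ratio in the fast, far-from-equilibrium regime.
# Energetic discrimination: equal barriers, the wrong match binds weaker
# by Delta; the error is the equilibrium (adiabatic) Boltzmann weight.
k, delta, Delta = 1.0, 4.0, 4.0          # assumed values

k_right, k_wrong = k, k * np.exp(-delta)
error_kinetic = k_wrong / (k_right + k_wrong)
error_energetic = np.exp(-Delta) / (1.0 + np.exp(-Delta))

print(f"kinetic discrimination error   ~ {error_kinetic:.4f}")
print(f"energetic discrimination error ~ {error_energetic:.4f}")
# Equal limits here, but reached in opposite regimes: kinetic at high
# speed and dissipation, energetic only as both vanish.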
1209.4695
1
The possibility of statistical evaluation of the market completeness and incompleteness is investigated for continuous time diffusion stock market models. It is shown that, for any incomplete market model from a wide class of models, there exists a complete market model with an arbitrarily close paths of the stock prices. This leads to the counterintuitive conclusion that the incomplete markets are indistinguishable from the complete markets in the terms of the price statistics.
The possibility of statistical evaluation of the market completeness and incompleteness is investigated for continuous time diffusion stock market models. It is well known that market completeness is not a robust property: a complete market model can be made incomplete via small random deviations of the coefficients. The paper shows that market incompleteness is also non-robust: small deviations can convert an incomplete model into a complete one. More precisely, it is shown that, for any incomplete market model from a wide class of models, there exists a complete market model with an arbitrarily close paths of the stock prices. This leads to the counterintuitive conclusion that the incomplete markets are indistinguishable from the complete markets in the terms of the price statistics.
[ { "type": "A", "before": null, "after": "well known that market completeness is not a robust property: a complete market model can be made incomplete via small random deviations of the coefficients. The paper shows that market incompleteness is also non-robust: small deviations can convert an incomplete model into a complete one. More precisely, it is", "start_char_pos": 161, "end_char_pos": 161 } ]
[ 0, 154, 324 ]
1209.4695
2
The possibility of statistical evaluation of the market completeness and incompleteness is investigated for continuous time diffusion stock market models. It is well known that market completeness is not a robust property: a complete market model can be made incomplete via small random deviations of the coefficients . The paper shows that market incompleteness is also non-robust: small deviations can convert an incomplete model into a complete one. More precisely, it is shown that, for any incomplete market model from a wide class of models, there exists a complete market model with an arbitrarily close paths of the stock prices . This leads to the counterintuitive conclusion that the incomplete markets are indistinguishable from the complete markets in the terms of the price statistics.
The possibility of statistical evaluation of the market completeness and incompleteness is investigated for continuous time diffusion stock market models. It is known that the market completeness is not a robust property: small random deviations of the coefficients convert a complete market model into a incomplete one . The paper shows that market incompleteness is also non-robust: small deviations can convert an incomplete model into a complete one. More precisely, it is shown that, for any incomplete market from a wide class of models, there exists a complete market model with arbitrarily close paths of the stock prices and the market parameters . This leads to a counterintuitive conclusion that the incomplete markets are indistinguishable from the complete markets in the terms of the market statistics.
[ { "type": "R", "before": "well known that", "after": "known that the", "start_char_pos": 161, "end_char_pos": 176 }, { "type": "D", "before": "a complete market model can be made incomplete via", "after": null, "start_char_pos": 223, "end_char_pos": 273 }, { "type": "A", "before": null, "after": "convert a complete market model into a incomplete one", "start_char_pos": 318, "end_char_pos": 318 }, { "type": "D", "before": "model", "after": null, "start_char_pos": 514, "end_char_pos": 519 }, { "type": "D", "before": "an", "after": null, "start_char_pos": 591, "end_char_pos": 593 }, { "type": "A", "before": null, "after": "and the market parameters", "start_char_pos": 638, "end_char_pos": 638 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 655, "end_char_pos": 658 }, { "type": "R", "before": "price", "after": "market", "start_char_pos": 783, "end_char_pos": 788 } ]
[ 0, 154, 320, 453, 640 ]
1209.5190
1
We present a new volatility model, simple to implement, that combines various attractive features such as an exponential moving average of the price and a leverage effect . This model is able to capture the so-called " panic effect", which occurs whenever systematic risk becomes the dominant factor. consequently , in contrast to other models , this new model is as reactive as the implied volatility indices . We also test the reactivity of our model using extreme events taken from the 470 most liquid European stocks over the last decade. We show that the reactive volatility model is more robust to extreme events, and it allows for the identification of precursors and replicas of extreme events.
We present a new volatility model, simple to implement, that includes a leverage effect whose return-volatility correlation function fits to empirical observations . This model is able to capture both the " retarded effect" induced by the specific risk, and the " panic effect", which occurs whenever systematic risk becomes the dominant factor. Consequently , in contrast to a GARCH model and a standard volatility estimate from the squared returns , this new model is as reactive as the implied volatility : the model adjusts itself in an instantaneous way to each variation of the single stock price or the stock index price and the adjustment is highly correlated to implied volatility changes . We also test the reactivity of our model using extreme events taken from the 470 most liquid European stocks over the last decade. We show that the reactive volatility model is more robust to extreme events, and it allows for the identification of precursors and replicas of extreme events.
[ { "type": "R", "before": "combines various attractive features such as an exponential moving average of the price and", "after": "includes", "start_char_pos": 61, "end_char_pos": 152 }, { "type": "A", "before": null, "after": "whose return-volatility correlation function fits to empirical observations", "start_char_pos": 171, "end_char_pos": 171 }, { "type": "R", "before": "the so-called", "after": "both the", "start_char_pos": 204, "end_char_pos": 217 }, { "type": "A", "before": null, "after": "retarded effect\" induced by the specific risk, and the \"", "start_char_pos": 220, "end_char_pos": 220 }, { "type": "R", "before": "consequently", "after": "Consequently", "start_char_pos": 303, "end_char_pos": 315 }, { "type": "R", "before": "other models", "after": "a GARCH model and a standard volatility estimate from the squared returns", "start_char_pos": 333, "end_char_pos": 345 }, { "type": "R", "before": "indices", "after": ": the model adjusts itself in an instantaneous way to each variation of the single stock price or the stock index price and the adjustment is highly correlated to implied volatility changes", "start_char_pos": 404, "end_char_pos": 411 } ]
[ 0, 173, 413, 544 ]
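The record above (1209.5190) describes a volatility estimate whose leverage term makes it react to price drops. The following sketch is a schematic EWMA-with-leverage update; the functional form and the parameters lam and ell are assumptions for illustration, not the authors' specification.

import numpy as np

def reactive_vol(returns, lam=0.94, ell=0.5):
    # Toy EWMA variance with a leverage correction: negative returns
    # inflate the update, mimicking the return-volatility correlation.
    # lam (decay) and ell (leverage strength) are assumed parameters.
    var = float(np.var(returns[:20]))     # crude initialisation
    out = []
    for r in returns:
        lev = 1.0 + ell * max(-r, 0.0) / np.sqrt(var)
        var = lam * var + (1.0 - lam) * (r * lev) ** 2
        out.append(np.sqrt(var))
    return np.array(out)

rng = np.random.default_rng(0)
print(reactive_vol(0.01 * rng.standard_normal(500))[-5:])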
1209.5527
1
We consider a Bayesian game of pure informational externalities, in which a group of agents learn a binary state of the world from conditionally independent private signals, by repeatedly observing the actions of their neighbors in a social network. We show that the question of whether or not the agents learn the state of the world depends on the topology of the social network. In particular, we identify a geometric "egalitarianism" condition on the social network graph that guarantees learning in infinite networks, or learning with high probability in large finite networks, in any equilibrium of the game. We give examples of non-egalitarian networks with equilibria in which learning fails.
We consider a group of strategic agents who must each repeatedly take one of two possible actions. They learn which of the two actions is preferable from initial private signals, and by observing the actions of their neighbors in a social network. We show that the question of whether or not the agents learn efficiently depends on the topology of the social network. In particular, we identify a geometric "egalitarianism" condition on the social network that guarantees learning in infinite networks, or learning with high probability in large finite networks, in any equilibrium . We also give examples of non-egalitarian networks with equilibria in which learning fails.
[ { "type": "R", "before": "Bayesian game of pure informational externalities, in which a group of agents learn a binary state of the world from conditionally independent", "after": "group of strategic agents who must each repeatedly take one of two possible actions. They learn which of the two actions is preferable from initial", "start_char_pos": 14, "end_char_pos": 156 }, { "type": "R", "before": "by repeatedly", "after": "and by", "start_char_pos": 174, "end_char_pos": 187 }, { "type": "R", "before": "the state of the world", "after": "efficiently", "start_char_pos": 311, "end_char_pos": 333 }, { "type": "D", "before": "graph", "after": null, "start_char_pos": 469, "end_char_pos": 474 }, { "type": "R", "before": "of the game. We", "after": ". We also", "start_char_pos": 601, "end_char_pos": 616 } ]
[ 0, 249, 380, 613 ]
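The model summarized above (1209.5527) has agents repeatedly choosing one of two actions while watching their neighbors. The simulation below uses a myopic "own signal plus neighbor majority" rule on an assumed ring network; this heuristic is a stand-in for the paper's equilibrium strategies.

import random

random.seed(1)
state, n = 1, 30                          # true state in {-1, +1}
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring (assumed)
signal = [state if random.random() < 0.7 else -state for _ in range(n)]
action = list(signal)                     # first action follows the signal

for _ in range(50):
    new = []
    for i in range(n):
        tally = signal[i] + sum(action[j] for j in nbrs[i])
        new.append(1 if tally > 0 else (-1 if tally < 0 else action[i]))
    action = new

print("fraction choosing the better action:", sum(a == state for a in action) / n)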
1209.5892
1
We use a recently developed coarse-grained model to simulate the overstretching of duplex DNA. Overstretching at 23C occurs at 77pN in the model, about 10pN higher than the experimental value at equivalent salt conditions. Furthermore, the model reproduces the temperature dependence of the overstretching force well. The mechanism of overstretching is always force-induced melting by unpeeling from the free ends. That we never see S-DNA (overstretched duplex DNA), even though there is clear experimental evidence for this mode of overstretching under certain conditions, suggests that S-DNA is not simply an unstacked but hydrogen-bonded duplex, but instead probably has a more exotic structure.
We use a recently developed coarse-grained model to simulate the overstretching of duplex DNA. Overstretching at 23C occurs at 74 pN in the model, about 6-7 pN higher than the experimental value at equivalent salt conditions. Furthermore, the model reproduces the temperature dependence of the overstretching force well. The mechanism of overstretching is always force-induced melting by unpeeling from the free ends. That we never see S-DNA (overstretched duplex DNA), even though there is clear experimental evidence for this mode of overstretching under certain conditions, suggests that S-DNA is not simply an unstacked but hydrogen-bonded duplex, but instead probably has a more exotic structure.
[ { "type": "R", "before": "77pN", "after": "74 pN", "start_char_pos": 127, "end_char_pos": 131 }, { "type": "R", "before": "10pN", "after": "6-7 pN", "start_char_pos": 152, "end_char_pos": 156 } ]
[ 0, 94, 222, 317, 414 ]
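For intuition about the force-induced melting mechanism above (1209.5892), a two-state estimate: a base unpeels when the mechanical work F*dx gained per base outweighs its pairing free energy dG. The numbers below are rough assumptions, not values from the coarse-grained model.

import numpy as np

kT = 4.1                     # pN*nm at room temperature
dG = 2.5 * kT                # assumed pairing free energy per base
dx = 0.17                    # assumed extension gained per unpeeled base, nm

F = np.linspace(40.0, 100.0, 601)                     # force in pN
p_unpeeled = 1.0 / (1.0 + np.exp((dG - F * dx) / kT))
F_half = F[np.argmin(np.abs(p_unpeeled - 0.5))]
print(f"toy overstretching force ~ {F_half:.0f} pN")  # roughly dG/dx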
1209.5976
1
In this paper we propose different schemes for option hedging when asset returns are modeled by dynamics from a general class of GARCH models. Since the minimal martingale measure fails to produce a probability measure in this setting, we construct local risk minimization hedging strategies with respect to a risk neutral measure. Local risk minimization is investigated in the context of Gaussian driven models, and two other minimum variance hedges are proposed in order to extend Duan's delta hedge. These two methods are constructed using local risk minimizing hedges for bivariate diffusion limits of GARCH models. Numerical experiments are carried out in order to compare the different hedging schemes; in particular, the sensitivity of the hedging strategies with respect to several pricing measures is tested for a special class of non-Gaussian GARCH models for European style options with different moneyness and maturities .
We propose different schemes for option hedging when asset returns are modeled using a general class of GARCH models. More specifically, we implement local risk minimization and a minimum variance hedge approximation based on an extended Girsanov principle that generalizes Duan's (1995) delta hedge. Since the minimal martingale measure fails to produce a probability measure in this setting, we construct local risk minimization hedging strategies with respect to a pricing kernel. These approaches are investigated in the context of non-Gaussian driven models. Furthermore, we analyze these methods for non-Gaussian GARCH diffusion limit processes and link them to the corresponding discrete time counterparts. A detailed numerical analysis based on S&P 500 European Call options is provided to assess the empirical performance of the proposed schemes. We also test the sensitivity of the hedging strategies with respect to the risk neutral measure used by recomputing some of our results with an exponential affine pricing kernel .
[ { "type": "R", "before": "In this paper we", "after": "We", "start_char_pos": 0, "end_char_pos": 16 }, { "type": "R", "before": "by dynamics from", "after": "using", "start_char_pos": 93, "end_char_pos": 109 }, { "type": "A", "before": null, "after": "More specifically, we implement local risk minimization and a minimum variance hedge approximation based on an extended Girsanov principle that generalizes Duan's (1995) delta hedge.", "start_char_pos": 143, "end_char_pos": 143 }, { "type": "R", "before": "risk neutral measure. Local risk minimization is", "after": "pricing kernel. These approaches are", "start_char_pos": 311, "end_char_pos": 359 }, { "type": "R", "before": "Gaussian driven models, and two other minimum variance hedges are proposed in order to extend Duan's delta hedge. These two methods are constructed using local risk minimizing hedges for bivariate diffusion limits of GARCH models. Numerical experiments are carried out in order to compare the different hedging schemes; in particular,", "after": "non-Gaussian driven models. Furthermore, we analyze these methods for non-Gaussian GARCH diffusion limit processes and link them to the corresponding discrete time counterparts. A detailed numerical analysis based on S", "start_char_pos": 391, "end_char_pos": 725 }, { "type": "A", "before": null, "after": "P 500 European Call options is provided to assess the empirical performance of the proposed schemes. We also test", "start_char_pos": 726, "end_char_pos": 726 }, { "type": "R", "before": "several pricing measures is tested for a special class of non-Gaussian GARCH models for European style options with different moneyness and maturities", "after": "the risk neutral measure used by recomputing some of our results with an exponential affine pricing kernel", "start_char_pos": 785, "end_char_pos": 935 } ]
[ 0, 142, 332, 504, 621, 710 ]
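To make the GARCH hedging setting above (1209.5976) concrete, the sketch below simulates GARCH(1,1) stock paths and estimates a one-period minimum-variance hedge ratio by regressing the simulated call payoff on the first stock move. It is a didactic stand-in for the paper's local risk minimization under a pricing kernel; all parameters are assumed and no measure change is performed.

import numpy as np

rng = np.random.default_rng(0)
S0, K, T = 100.0, 100.0, 30               # spot, strike, steps (assumed)
omega, alpha, beta = 1e-6, 0.05, 0.90     # assumed GARCH(1,1) parameters
npaths = 20000

h = np.full(npaths, omega / (1 - alpha - beta))   # stationary variance
S, S1 = np.full(npaths, S0), None
for t in range(T):
    r = np.sqrt(h) * rng.standard_normal(npaths)
    S *= np.exp(r)
    h = omega + alpha * r ** 2 + beta * h
    if t == 0:
        S1 = S.copy()                     # stock after the first step

payoff = np.maximum(S - K, 0.0)
C = np.cov(payoff, S1 - S0)
hedge = C[0, 1] / C[1, 1]                 # minimum-variance hedge ratio
print(f"one-period min-variance hedge ratio ~ {hedge:.2f}")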
1209.6277
1
Nested canalizing Boolean functions play an important role in biological motivated regulative networks but also in signal processing, such as in describing stack filters. It has been conjectured that this class of functions has a stabilizing effect on the network dynamics. It is well known that the average sensitivity plays a central role for the stability of ( Random ) Boolean networks. Here , we prove a tight upper bound on the average sensitivity for nested canalizing functions in dependence of the number of relevant input variables. We further show, that it is smaller than 4/3 as conjectured in literature. This shows a large number of functions appearing in biological networks belong to a class that has a very low average sensitivity, which is even close to a tight lower bound.
Nested canalizing Boolean (NCF) functions play an important role in biological motivated regulative networks and in signal processing, in particular describing stack filters. It has been conjectured that NCFs have a stabilizing effect on the network dynamics. It is well known that the average sensitivity plays a central role for the stability of ( random ) Boolean networks. Here we provide a tight upper bound on the average sensitivity for NCFs as a function of the number of relevant input variables. As conjectured in literature this bound is smaller than 4/3 This shows that a large number of functions appearing in biological networks belong to a class that has very low average sensitivity, which is even close to a tight lower bound.
[ { "type": "A", "before": null, "after": "(NCF)", "start_char_pos": 26, "end_char_pos": 26 }, { "type": "R", "before": "but also", "after": "and", "start_char_pos": 104, "end_char_pos": 112 }, { "type": "R", "before": "such as in", "after": "in particular", "start_char_pos": 135, "end_char_pos": 145 }, { "type": "R", "before": "this class of functions has", "after": "NCFs have", "start_char_pos": 201, "end_char_pos": 228 }, { "type": "R", "before": "Random", "after": "random", "start_char_pos": 365, "end_char_pos": 371 }, { "type": "R", "before": ", we prove", "after": "we provide", "start_char_pos": 397, "end_char_pos": 407 }, { "type": "R", "before": "nested canalizing functions in dependence", "after": "NCFs as a function", "start_char_pos": 459, "end_char_pos": 500 }, { "type": "R", "before": "We further show, that it", "after": "As conjectured in literature this bound", "start_char_pos": 544, "end_char_pos": 568 }, { "type": "R", "before": "as conjectured in literature. This shows", "after": "This shows that", "start_char_pos": 589, "end_char_pos": 629 }, { "type": "D", "before": "a", "after": null, "start_char_pos": 718, "end_char_pos": 719 } ]
[ 0, 171, 274, 391, 543, 618 ]
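The average-sensitivity bound discussed above (1209.6277) is easy to check by brute force for a small nested canalizing function; the choice f(x) = x1 AND (x2 OR x3) below is an assumed example, canalizing in every variable.

from itertools import product

def avg_sensitivity(f, n):
    # Expected number of the n single-bit flips that change f,
    # averaged uniformly over all 2^n inputs.
    total = 0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            total += fx != f(tuple(y))
    return total / 2 ** n

f = lambda x: x[0] & (x[1] | x[2])        # nested canalizing function
print(avg_sensitivity(f, 3))              # 1.25, below the 4/3 bound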
1209.6497
1
We develop a technique based on Malliavin-Bismut calculus ideas, for asymptotic expansion of dual control problems arising in connection with exponential indifference valuation of claims, and with minimisation of relative entropy, in incomplete markets. The problems involve optimisation of a functional in which the control features quadratically , while in the state dynamics it appears as a drift perturbation to Brownian paths on Wiener space. This drift is interpreted as a measure change using the Girsanov theorem, leading to a form of the integration by parts formula in which a directional derivative on Wiener space is computed. This allows for asymptotic analysis of the control problem. Applications to incomplete Ito process markets are given, in which indifference prices are approximated in the low risk aversion limit. We also give an application to identifying the minimal entropy martingale measure as a perturbation to the minimal martingale measure in stochastic volatility models.
We develop a technique based on Malliavin-Bismut calculus ideas, for asymptotic expansion of dual control problems arising in connection with exponential indifference valuation of claims, and with minimisation of relative entropy, in incomplete markets. The problems involve optimisation of a functional of Brownian paths on Wiener space, with the paths perturbed by a drift involving the control. In addition there is a penalty term in which the control features quadratically . The drift perturbation is interpreted as a measure change using the Girsanov theorem, leading to a form of the integration by parts formula in which a directional derivative on Wiener space is computed. This allows for asymptotic analysis of the control problem. Applications to incomplete It\^o process markets are given, in which indifference prices are approximated in the low risk aversion limit. We also give an application to identifying the minimal entropy martingale measure as a perturbation to the minimal martingale measure in stochastic volatility models.
[ { "type": "A", "before": null, "after": "of Brownian paths on Wiener space, with the paths perturbed by a drift involving the control. In addition there is a penalty term", "start_char_pos": 304, "end_char_pos": 304 }, { "type": "R", "before": ", while in the state dynamics it appears as a drift perturbation to Brownian paths on Wiener space. This drift", "after": ". The drift perturbation", "start_char_pos": 349, "end_char_pos": 459 }, { "type": "R", "before": "Ito", "after": "It\\^o", "start_char_pos": 727, "end_char_pos": 730 } ]
[ 0, 253, 448, 639, 699, 835 ]
1210.0024
1
The Gene Ontology (GO) provides biologists with a controlled terminology that describes how genes are associated with functions and how functional terms are related to each other. These term-term relationships encode how scientists conceive URLanization of biological functions, and they take the form of a directed acyclic graph (DAG). Here, we propose that the network structure of gene-term annotations made using GO can be employed to establish an alternate natural way to group the functional terms which is different from the hierarchical structure established in the GO DAG. Instead of relying on an externally URLanization for biological functions, our method connects biological functions together if they are performed by the same genes, as indicated in a compendium of gene annotation data from numerous different biological experiments. Grouping terms by this alternate scheme provides a new framework with which to describe and predict the functions of experimentally identified sets of genes.
The Gene Ontology (GO) provides biologists with a controlled terminology that describes how genes are associated with functions and how functional terms are related to each other. These term-term relationships encode how scientists conceive URLanization of biological functions, and they take the form of a directed acyclic graph (DAG). Here, we propose that the network structure of gene-term annotations made using GO can be employed to establish an alternate natural way to group the functional terms which is different from the hierarchical structure established in the GO DAG. Instead of relying on an externally URLanization for biological functions, our method connects biological functions together if they are performed by the same genes, as indicated in a compendium of gene annotation data from numerous different experiments. We show that grouping terms by this alternate scheme is distinct from term relationships defined in the ontological structure and provides a new framework with which to describe and predict the functions of experimentally identified sets of genes.
[ { "type": "R", "before": "biological experiments. Grouping", "after": "experiments. We show that grouping", "start_char_pos": 825, "end_char_pos": 857 }, { "type": "A", "before": null, "after": "is distinct from term relationships defined in the ontological structure and", "start_char_pos": 889, "end_char_pos": 889 } ]
[ 0, 179, 336, 581, 848 ]
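The alternate grouping proposed above (1210.0024) connects functional terms through the genes that perform them. A set-based prototype, with a made-up annotation table and an assumed overlap threshold:

# Hypothetical annotations: term -> set of genes (illustrative only).
ann = {
    "glycolysis":      {"g1", "g2", "g3"},
    "gluconeogenesis": {"g2", "g3", "g4"},
    "DNA repair":      {"g7", "g8"},
    "replication":     {"g7", "g9"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

threshold = 0.2                            # assumed cutoff
terms = list(ann)
edges = [(s, t) for i, s in enumerate(terms) for t in terms[i + 1:]
         if jaccard(ann[s], ann[t]) >= threshold]
print(edges)   # terms linked because overlapping gene sets perform them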
1210.0330
1
Despite considerable progress in genome- and proteome-based high-throughput screening methods and in rational drug design, the increase in approved drugs in the past decade did not match the increase of drug development costs. The network approach not only gives a systems-level understanding of drug action and disease complexity, but can also help to improve the efficiency of drug design. Here we give a comprehensive assessment of the analytical tools of network topology and dynamics. We summarize the current knowledge and the state-of-the-art use of chemical similarity, protein structure, protein-protein interaction, signaling, genetic interaction and metabolic networks in the discovery of drug targets . We show how network techniques can help in the identification of single-target, edgetic, multi-target and allo-network drug target candidates. We review the recent boom in network methods helping hit identification, lead selection optimizing drug efficacy, as well as minimizing side-effects and drug toxicity. Successful network-based drug development strategies are shown through the examples of infections, cancer, metabolic diseases, neurodegenerative diseases and aging. Finally, summarizing more than 1100 cited references we suggest an optimized protocol of network-aided drug development, and provide a list of systems-level hallmarks of drug quality. Finally, we highlight network-related drug development trends both at protein structure and cellular levels helping to achieve these hallmarks by a cohesive, global approach.
Despite considerable progress in genome- and proteome-based high-throughput screening methods and in rational drug design, the increase in approved drugs in the past decade did not match the increase of drug development costs. Network description and analysis not only gives a systems-level understanding of drug action and disease complexity, but can also help to improve the efficiency of drug design. We give a comprehensive assessment of the analytical tools of network topology and dynamics. The state-of-the-art use of chemical similarity, protein structure, protein-protein interaction, signaling, genetic interaction and metabolic networks in the discovery of drug targets is summarized. We propose that network targeting follows two basic strategies. The central hit strategy selectively targets central node/edges of the flexible networks of infectious agents or cancer cells to kill them. The network influence strategy works against other diseases, where an efficient reconfiguration of rigid networks needs to be achieved. It is shown how network techniques can help in the identification of single-target, edgetic, multi-target and allo-network drug target candidates. We review the recent boom in network methods helping hit identification, lead selection optimizing drug efficacy, as well as minimizing side-effects and drug toxicity. Successful network-based drug development strategies are shown through the examples of infections, cancer, metabolic diseases, neurodegenerative diseases and aging. Summarizing >1200 references we suggest an optimized protocol of network-aided drug development, and provide a list of systems-level hallmarks of drug quality. Finally, we highlight network-related drug development trends helping to achieve these hallmarks by a cohesive, global approach.
[ { "type": "R", "before": "The network approach", "after": "Network description and analysis", "start_char_pos": 227, "end_char_pos": 247 }, { "type": "R", "before": "Here we", "after": "We", "start_char_pos": 392, "end_char_pos": 399 }, { "type": "R", "before": "We summarize the current knowledge and the", "after": "The", "start_char_pos": 490, "end_char_pos": 532 }, { "type": "R", "before": ". We show", "after": "is summarized. We propose that network targeting follows two basic strategies. The central hit strategy selectively targets central node/edges of the flexible networks of infectious agents or cancer cells to kill them. The network influence strategy works against other diseases, where an efficient reconfiguration of rigid networks needs to be achieved. It is shown", "start_char_pos": 713, "end_char_pos": 722 }, { "type": "R", "before": "Finally, summarizing more than 1100 cited", "after": "Summarizing >1200", "start_char_pos": 1191, "end_char_pos": 1232 }, { "type": "D", "before": "both at protein structure and cellular levels", "after": null, "start_char_pos": 1437, "end_char_pos": 1482 } ]
[ 0, 226, 391, 489, 714, 857, 1025, 1190, 1374 ]
1210.0330
2
Despite considerable progress in genome- and proteome-based high-throughput screening methods and in rational drug design, the increase in approved drugs in the past decade did not match the increase of drug development costs. Network description and analysis not only gives a systems-level understanding of drug action and disease complexity, but can also help to improve the efficiency of drug design. We give a comprehensive assessment of the analytical tools of network topology and dynamics. The state-of-the-art use of chemical similarity, protein structure, protein-protein interaction, signaling, genetic interaction and metabolic networks in the discovery of drug targets is summarized. We propose that network targeting follows two basic strategies. The central hit strategy selectively targets central node /edges of the flexible networks of infectious agents or cancer cells to kill them. The network influence strategy works against other diseases, where an efficient reconfiguration of rigid networks needs to be achieved . It is shown how network techniques can help in the identification of single-target, edgetic, multi-target and allo-network drug target candidates. We review the recent boom in network methods helping hit identification, lead selection optimizing drug efficacy, as well as minimizing side-effects and drug toxicity. Successful network-based drug development strategies are shown through the examples of infections, cancer, metabolic diseases, neurodegenerative diseases and aging. Summarizing > 1200 references we suggest an optimized protocol of network-aided drug development, and provide a list of systems-level hallmarks of drug quality. Finally, we highlight network-related drug development trends helping to achieve these hallmarks by a cohesive, global approach.
Despite considerable progress in genome- and proteome-based high-throughput screening methods and in rational drug design, the increase in approved drugs in the past decade did not match the increase of drug development costs. Network description and analysis not only give a systems-level understanding of drug action and disease complexity, but can also help to improve the efficiency of drug design. We give a comprehensive assessment of the analytical tools of network topology and dynamics. The state-of-the-art use of chemical similarity, protein structure, protein-protein interaction, signaling, genetic interaction and metabolic networks in the discovery of drug targets is summarized. We propose that network targeting follows two basic strategies. The central hit strategy selectively targets central nodes /edges of the flexible networks of infectious agents or cancer cells to kill them. The network influence strategy works against other diseases, where an efficient reconfiguration of rigid networks needs to be achieved by targeting the neighbors of central nodes or edges . It is shown how network techniques can help in the identification of single-target, edgetic, multi-target and allo-network drug target candidates. We review the recent boom in network methods helping hit identification, lead selection optimizing drug efficacy, as well as minimizing side-effects and drug toxicity. Successful network-based drug development strategies are shown through the examples of infections, cancer, metabolic diseases, neurodegenerative diseases and aging. Summarizing more than 1200 references we suggest an optimized protocol of network-aided drug development, and provide a list of systems-level hallmarks of drug quality. Finally, we highlight network-related drug development trends helping to achieve these hallmarks by a cohesive, global approach.
[ { "type": "R", "before": "gives", "after": "give", "start_char_pos": 269, "end_char_pos": 274 }, { "type": "R", "before": "node", "after": "nodes", "start_char_pos": 813, "end_char_pos": 817 }, { "type": "A", "before": null, "after": "by targeting the neighbors of central nodes or edges", "start_char_pos": 1036, "end_char_pos": 1036 }, { "type": "R", "before": ">", "after": "more than", "start_char_pos": 1531, "end_char_pos": 1532 } ]
[ 0, 226, 403, 496, 695, 759, 900, 1038, 1185, 1353, 1518, 1679 ]
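The "central hit" strategy in the review above (1210.0330) amounts to ranking nodes of the pathogen or cancer network by centrality and attacking the top ones. A minimal degree-centrality version on a made-up interaction network; real applications would use curated interactomes and richer centrality measures.

# Hypothetical protein-interaction edges (illustrative only).
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
         ("D", "E"), ("E", "F"), ("E", "G")]

degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

targets = sorted(degree, key=degree.get, reverse=True)[:2]
print("central-hit target candidates:", targets)   # the hubs A and E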
1210.0332
1
From the shape and size analysis of approximately 130 small icosahedral viruses we conclude that there is a typical structural capsid protein, having a mean diameter of 5 nm and a mean thickness of 3 nm, with more than two thirds of the analyzed capsid proteins having thicknesses between 2 nm and 4 nm. To investigate whether, in addition to the conserved geometry, capsid proteins show similarities in the way they interact with one another, we examined the shapes of the capsids in detail. We classified them numerically according to their similarity to sphere and icosahedron and a set of shapes in between, all obtained from the theory of elasticity of shells. In order to make a unique and straightforward connection between an idealized, numerically calculated shape of an elastic shell and a capsid, we devised a special shape fitting procedure, the outcome of which is the idealized elastic shape fitting the capsid best. Using such a procedure we performed statistical analysis of a series of virus shapes and we found strong similarities between the capsid elastic properties of even very different viruses. Our findings point to either evolutionary relatedness of most viruses or to evolutionary convergence of the examined properties of viruses. As we explain in the paper, there are both structural and functional reasons for the convergence of protein sizes and capsid elastic properties. Our work presents a specific quantitative scheme to estimate relatedness between different proteins based on the details of the (quaternary) shape they form (capsid). As such, it may provide an information complementary to the one obtained from the studies of other types of protein similarity, such as the overall composition of structural elements, topology of the folded protein backbone, and sequence similarity.
From the analysis of sizes of approximately 130 small icosahedral viruses we find that there is a typical structural capsid protein, having a mean diameter of 5 nm and a mean thickness of 3 nm, with more than two thirds of the analyzed capsid proteins having thicknesses between 2 nm and 4 nm. To investigate whether, in addition to the fairly conserved geometry, capsid proteins show similarities in the way they interact with one another, we examined the shapes of the capsids in detail. We classified them numerically according to their similarity to sphere and icosahedron and an interpolating set of shapes in between, all of them obtained from the theory of elasticity of shells. In order to make a unique and straightforward connection between an idealized, numerically calculated shape of an elastic shell and a capsid, we devised a special shape fitting procedure, the outcome of which is the idealized elastic shape fitting the capsid best. Using such a procedure we performed statistical analysis of a series of virus shapes and we found similarities between the capsid elastic properties of even very different viruses. As we explain in the paper, there are both structural and functional reasons for the convergence of protein sizes and capsid elastic properties. Our work presents a specific quantitative scheme to estimate relatedness between different proteins based on the details of the (quaternary) shape they form (capsid). As such, it may provide an information complementary to the one obtained from the studies of other types of protein similarity, such as the overall composition of structural elements, topology of the folded protein backbone, and sequence similarity.
[ { "type": "R", "before": "shape and size analysis of", "after": "analysis of sizes of", "start_char_pos": 9, "end_char_pos": 35 }, { "type": "R", "before": "conclude", "after": "find", "start_char_pos": 83, "end_char_pos": 91 }, { "type": "A", "before": null, "after": "fairly", "start_char_pos": 347, "end_char_pos": 347 }, { "type": "R", "before": "a", "after": "an interpolating", "start_char_pos": 585, "end_char_pos": 586 }, { "type": "A", "before": null, "after": "of them", "start_char_pos": 617, "end_char_pos": 617 }, { "type": "D", "before": "strong", "after": null, "start_char_pos": 1031, "end_char_pos": 1037 }, { "type": "D", "before": "viruses. Our findings point to either evolutionary relatedness of most viruses or to evolutionary convergence of the examined properties of", "after": null, "start_char_pos": 1112, "end_char_pos": 1251 } ]
[ 0, 303, 493, 667, 932, 1120, 1260, 1405, 1572 ]
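A crude scalar for locating a closed shape between the icosahedron and the sphere, echoing the classification above (1210.0332): the inradius-to-circumradius ratio, which is 1 for a sphere and about 0.795 for an icosahedron. This single number is an illustrative proxy, not the paper's elastic shape-fitting procedure.

import math

phi = (1.0 + math.sqrt(5.0)) / 2.0
a = 1.0                                                # edge length
R = a / 4.0 * math.sqrt(10.0 + 2.0 * math.sqrt(5.0))   # circumradius
r = a * phi ** 2 / (2.0 * math.sqrt(3.0))              # inradius
print(f"icosahedron r/R = {r / R:.3f}   (sphere: 1.000)")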
1210.0570
1
We consider that the price of a firm follows a non linear stochastic delay differential equation. We also assume that any claim value whose value depends on firm value and time follows a non linear stochastic delay differential equation. Using self-financed strategy and duplication we are able to derive a Random Partial Differential Equation (RPDE) that any claim whose value depends on firm value and time should satisfy. We solve the RPDE for debt and loan guarantees under the assumption that there are no coupon payment nor dividends prior to the maturity of the debt.
We consider that the price of a firm follows a non linear stochastic delay differential equation. We also assume that any claim value whose value depends on firm value and time follows a non linear stochastic delay differential equation. Using self-financed strategy and replication we are able to derive a Random Partial Differential Equation (RPDE) satisfied by any corporate claim whose value is a function of firm value and time . Under specific final and boundary conditions, we solve the RPDE for the debt value and loan guarantees within a single period and homogeneous class of debt.
[ { "type": "R", "before": "duplication", "after": "replication", "start_char_pos": 271, "end_char_pos": 282 }, { "type": "R", "before": "that any", "after": "satisfied by any corporate", "start_char_pos": 351, "end_char_pos": 359 }, { "type": "R", "before": "depends on", "after": "is a function of", "start_char_pos": 378, "end_char_pos": 388 }, { "type": "R", "before": "should satisfy. We", "after": ". Under specific final and boundary conditions, we", "start_char_pos": 409, "end_char_pos": 427 }, { "type": "R", "before": "debt", "after": "the debt value", "start_char_pos": 447, "end_char_pos": 451 }, { "type": "R", "before": "under the assumption that there are no coupon payment nor dividends prior to the maturity of the", "after": "within a single period and homogeneous class of", "start_char_pos": 472, "end_char_pos": 568 } ]
[ 0, 97, 237, 424 ]
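Firm-value dynamics with delay, as in the record above (1210.0570), can be simulated with an Euler-Maruyama scheme that reads back a history buffer. The drift, volatility, delay and constant pre-history below are toy assumptions, not the paper's equation.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, tau = 0.05, 0.2, 1.0       # assumed drift, vol, delay (years)
T, dt = 5.0, 0.01
lag, n = int(tau / dt), int(T / dt)

S = np.empty(n + 1)
S[0] = 100.0
for k in range(n):
    Sd = S[max(k - lag, 0)]           # delayed value S(t - tau); S(t<=0)=S(0)
    S[k + 1] = (S[k] + mu * Sd * dt
                + sigma * Sd * np.sqrt(dt) * rng.standard_normal())

print(f"terminal firm value ~ {S[-1]:.2f}")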
1210.0898
1
This paper presents a model of spontaneous economic order within the framework of general equilibrium theory . Our study shows that if a competitive economy is enough free and fair , then a spontaneous economic order shall emerge in long-period competitive equilibrium so that social members together occupy an optimally economic allocation . Despite this, the spontaneous order may degenerate in the form of economic crisis whenever an equilibrium economy approaches the extreme competition. Remarkably, such a theoretical framework of spontaneous order presents a bridge connecting Austrian economics and Neoclassical economics, where we indeed comprehend a truth: "Freedom drives economic development ".
This paper provides an attempt to formalize Hayek's notion of spontaneous order within the framework of the Arrow-Debreu economy . Our study shows that if a competitive economy is enough fair and free , then a spontaneous economic order shall emerge in long-run competitive equilibria so that social members together occupy an optimal distribution of income . Despite this, the spontaneous order might degenerate in the form of economic crises whenever an equilibrium economy approaches the extreme competition. Remarkably, such a theoretical framework of spontaneous order provides a bridge linking Austrian economics and Neoclassical economics, where we shall comprehend a truth: "Freedom promotes technological progress ".
[ { "type": "R", "before": "presents a model of spontaneous economic", "after": "provides an attempt to formalize Hayek's notion of spontaneous", "start_char_pos": 11, "end_char_pos": 51 }, { "type": "R", "before": "general equilibrium theory", "after": "the Arrow-Debreu economy", "start_char_pos": 82, "end_char_pos": 108 }, { "type": "R", "before": "free and fair", "after": "fair and free", "start_char_pos": 167, "end_char_pos": 180 }, { "type": "R", "before": "long-period competitive equilibrium", "after": "long-run competitive equilibria", "start_char_pos": 233, "end_char_pos": 268 }, { "type": "R", "before": "optimally economic allocation", "after": "optimal distribution of income", "start_char_pos": 311, "end_char_pos": 340 }, { "type": "R", "before": "may", "after": "might", "start_char_pos": 379, "end_char_pos": 382 }, { "type": "R", "before": "crisis", "after": "crises", "start_char_pos": 418, "end_char_pos": 424 }, { "type": "R", "before": "presents a bridge connecting", "after": "provides a bridge linking", "start_char_pos": 555, "end_char_pos": 583 }, { "type": "R", "before": "indeed", "after": "shall", "start_char_pos": 640, "end_char_pos": 646 }, { "type": "R", "before": "drives economic development", "after": "promotes technological progress", "start_char_pos": 676, "end_char_pos": 703 } ]
[ 0, 110, 342, 492 ]
1210.1240
1
Gene networks exhibiting oscillatory dynamics are widespread in biology. The minimal regulatory designs giving rise to oscillations have been implemented synthetically and studied by mathematical modeling. However, most of the available analyses generally neglect the coupling of regulatory circuits with the cellular "chassis" in which the circuits are embedded. For example, the cell macromolecular composition of fast-growing bacteria changes with growth rate. As a consequence, important parameters of gene expression, such as ribosome concentration or cell volume, are growth-rate dependent, ultimately coupling the dynamics of genetic circuits with cell physiology. This work addresses the effects of growth rate on the dynamics of a paradigmatic example of genetic oscillator, the repressilator. Making use of empirical growth-rate dependences of parameters in bacteria, we show that the repressilator dynamics can switch between oscillations and convergence to a fixed point depending on the cellular state of growth, and thus on the amount and quality of the food it is fed. The substrate of the circuit (type of plasmid or gene positions on the chromosome) also plays an important role in determining the oscillation stability and the growth-rate dependence of period and amplitude. This analysis has potential application in the field of synthetic biology, and suggests that the coupling between endogenous genetic oscillators and cell physiology can have substantial consequences on their functionality.
Gene networks exhibiting oscillatory dynamics are widespread in biology. The minimal regulatory designs giving rise to oscillations have been implemented synthetically and studied by mathematical modeling. However, most of the available analyses generally neglect the coupling of regulatory circuits with the cellular "chassis" in which the circuits are embedded. For example, the intracellular macromolecular composition of fast-growing bacteria changes with growth rate. As a consequence, important parameters of gene expression, such as ribosome concentration or cell volume, are growth-rate dependent, ultimately coupling the dynamics of genetic circuits with cell physiology. This work addresses the effects of growth rate on the dynamics of a paradigmatic example of genetic oscillator, the repressilator. Making use of empirical growth-rate dependences of parameters in bacteria, we show that the repressilator dynamics can switch between oscillations and convergence to a fixed point depending on the cellular state of growth, and thus on the nutrients it is fed. The physical support of the circuit (type of plasmid or gene positions on the chromosome) also plays an important role in determining the oscillation stability and the growth-rate dependence of period and amplitude. This analysis has potential application in the field of synthetic biology, and suggests that the coupling between endogenous genetic oscillators and cell physiology can have substantial consequences for their functionality.
[ { "type": "R", "before": "cell", "after": "intracellular", "start_char_pos": 381, "end_char_pos": 385 }, { "type": "R", "before": "amount and quality of the food", "after": "nutrients", "start_char_pos": 1042, "end_char_pos": 1072 }, { "type": "R", "before": "substrate", "after": "physical support", "start_char_pos": 1088, "end_char_pos": 1097 }, { "type": "R", "before": "on", "after": "for", "start_char_pos": 1492, "end_char_pos": 1494 } ]
[ 0, 72, 205, 363, 463, 671, 802, 1083, 1292 ]
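One way to probe the growth-rate coupling described above (1210.1240) is to add a dilution term mu*x to each equation of the standard repressilator; the parameter values and the bare linear dilution below are assumptions for illustration, not the paper's empirical growth-rate dependences.

import numpy as np
from scipy.integrate import odeint

def repressilator(y, t, mu, alpha=200.0, hill=2.0, beta=5.0):
    m, p = y[:3], y[3:]
    dm = [alpha / (1.0 + p[(i - 1) % 3] ** hill) - m[i] - mu * m[i]
          for i in range(3)]
    dp = [beta * (m[i] - p[i]) - mu * p[i] for i in range(3)]
    return dm + dp

t = np.linspace(0.0, 100.0, 2000)
y0 = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
for name, mu in (("slow growth", 0.1), ("fast growth", 1.5)):  # assumed mu
    p1 = odeint(repressilator, y0, t, args=(mu,))[len(t) // 2:, 3]
    print(name, "protein-1 amplitude ~", round(float(p1.max() - p1.min()), 2))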
1210.1848
1
To provide a solid analytic foundation for the module approach to conditional risk measures, this paper establishes a complete random convex analysis over random locally convex modules by simultaneously considering the two kinds of topologies (namely the (\varepsilon,\lambda)--topology and the locally L^0--convex topology). It should be also mentioned that D. Filipovi\'{c} et al. studied locally L^0--convex modules in [D. Filipovi\'{c}, M. Kupper, N. Vogelpoth, Separation and duality in locally L^0--convex modules, J. Funct. Anal. 256 (2009) 3996-4029] (briefly, the FKV paper), where they made some important contributions to the subject and presented some good ideas of financial applications. Unfortunately, there are serious shortcomings in the FKV paper. First, most of the principal results in the FKV paper were based on the premise that the locally L^{0}--convex topology for every locally L^{0}--convex module may be induced by a family of L^{0}-seminorms; the FKV paper gave a proof of this premise, but there was a hole in this proof; in fact, it remains open up to now whether the premise is valid or not. In this paper we overcome the difficulty by working with random locally convex modules endowed with the locally L^0--convex topology rather than locally L^0--convex modules. Besides, some basic and key results in the FKV paper are also false so that some more interesting and essential things are covered, so we first correct these mistakes and further give a thorough treatment of random convex analysis.
To provide a solid analytic foundation for the module approach to conditional risk measures, this paper establishes a complete random convex analysis over random locally convex modules by simultaneously considering the two kinds of topologies (namely the (\varepsilon,\lambda)--topology and the locally L^0--convex topology). Then, we make use of the advantage of the (\varepsilon,\lambda)--topology and grasp the local property of L^0--convex conditional risk measures to prove that every L^{0}--convex L^{p}--conditional risk measure (1\leq p\leq+\infty) can be uniquely extended to an L^{0}--convex conditional risk measure on the countable concatenation hull of L^{p}. In the process we find that combining the countable concatenation hull of a set and the local property of conditional risk measures is a very useful analytic skill that may considerably simplify and improve the study of L^{0}--convex conditional risk measures.
[ { "type": "R", "before": "It should be also mentioned that D. Filiporvi\\'{c", "after": "Then, we make use of the advantage of the (\\varepsilon,\\lambda)--topology and grasp the local property of L^0", "start_char_pos": 327, "end_char_pos": 376 }, { "type": "D", "before": "modules in", "after": null, "start_char_pos": 386, "end_char_pos": 396 }, { "type": "D", "before": "D. Filipovi\\'{c", "after": null, "start_char_pos": 397, "end_char_pos": 412 }, { "type": "R", "before": "(briefly,the FKV paper), where they made some important contributions to the subject and presented some good ideas of financial applications. Unfortunately, there are serious shortcomings in the FKV paper. First, most of the principal results in the FKV paper were based on the premise that the locally L", "after": "conditional risk measures to prove that every L", "start_char_pos": 550, "end_char_pos": 854 }, { "type": "D", "before": "topology for every locally", "after": null, "start_char_pos": 868, "end_char_pos": 894 }, { "type": "R", "before": "^{0", "after": "^{p", "start_char_pos": 897, "end_char_pos": 900 }, { "type": "R", "before": "module may be induced by a family of L^{0", "after": "L^{p", "start_char_pos": 995, "end_char_pos": 1036 }, { "type": "R", "before": "FKV paper ever gave a proof of this premise, but there was a hole in this proof, in fact, it remains open up to now whether the premise is valid or not. In this paper we overcome the difficulty by working with random locally convex modules endowed with the locally L^0", "after": "process we find that combining the countable concatenation hull of a set and the local property of conditional risk measures is a very useful analytic skill that may considerably simplify and improve the study of L^{0", "start_char_pos": 1041, "end_char_pos": 1309 }, { "type": "R", "before": "topology rather than locally L^0--convex modules. Besides, some basic and key results in the FKV paper are also false so that some more interesting and essential things are covered, so we first correct these mistakes and further give a thorough treatment of random convex analysis", "after": "conditional risk measures", "start_char_pos": 1319, "end_char_pos": 1599 } ]
[ 0, 326, 362, 503, 691, 755, 1193, 1368 ]
1210.3149
1
When modeling co-expression networks from high-throughput time course data, the Pearson correlation function is reasonably effective . However, this approach is limited since it cannot capture non-linear interactions and time shifts between the signals . Here we propose to overcome these two issues by employing a novel similarity function, DTWMIC , combining a measure taking care of functional interactions of signals (MIC) and a measure identifying horizontal displacements (DTW). By using a network comparison metric to quantify differences, we show the effectiveness of the DTWMIC approach on both synthetic and transcriptomic datasets.
When modeling coexpression networks from high-throughput time course data, Pearson Correlation Coefficient (PCC) is one of the most effective and popular similarity functions . However, its reliability is limited since it cannot capture non-linear interactions and time shifts . Here we propose to overcome these two issues by employing a novel similarity function, Dynamic Time Warping Maximal Information Coefficient (DTW-MIC) , combining a measure taking care of functional interactions of signals (MIC) and a measure identifying horizontal displacements (DTW). By using the Hamming-Ipsen-Mikhailov (HIM) metric to quantify network differences, the effectiveness of the DTW-MIC approach is demonstrated on both synthetic and transcriptomic datasets.
[ { "type": "R", "before": "co-expression", "after": "coexpression", "start_char_pos": 14, "end_char_pos": 27 }, { "type": "R", "before": "the Pearson correlation function is reasonably effective", "after": "Pearson Correlation Coefficient (PCC) is one of the most effective and popular similarity functions", "start_char_pos": 76, "end_char_pos": 132 }, { "type": "R", "before": "this approach", "after": "its reliability", "start_char_pos": 144, "end_char_pos": 157 }, { "type": "D", "before": "between the signals", "after": null, "start_char_pos": 233, "end_char_pos": 252 }, { "type": "R", "before": "DTWMIC", "after": "Dynamic Time Warping Maximal Information Coefficient (DTW-MIC)", "start_char_pos": 342, "end_char_pos": 348 }, { "type": "R", "before": "a network comparison", "after": "the Hamming-Ipsen-Mikhailov (HIM)", "start_char_pos": 494, "end_char_pos": 514 }, { "type": "R", "before": "differences, we show", "after": "network differences,", "start_char_pos": 534, "end_char_pos": 554 }, { "type": "R", "before": "DTWMIC approach", "after": "DTW-MIC approach is demonstrated", "start_char_pos": 580, "end_char_pos": 595 } ]
[ 0, 134, 254, 484 ]
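The DTW half of the similarity function above (1210.3149) is a classic dynamic program; a minimal implementation follows. The MIC half would come from an external estimator (e.g. the minepy package) and is not shown; combining the two into one score, as the abstract describes, is then a design choice.

import numpy as np

def dtw(a, b):
    # Dynamic-time-warping distance between 1-D series a and b.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.sin(np.linspace(0.0, 6.0, 50))
y = np.sin(np.linspace(0.0, 6.0, 50) - 1.0)   # time-shifted copy
print("dtw(x, y) =", round(float(dtw(x, y)), 3))
print("dtw(x, x) =", dtw(x, x))               # 0.0: DTW absorbs the shift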
1210.3456
1
MicroRNAs (miRNAs) are small RNA molecules composed of 19-22 nt playing important regulatory roles in post-transcriptional gene regulation by inhibiting the translation of the mRNA into proteins or otherwise cleaving the target mRNA. Inferring miRNA targets provides useful information for understanding the roles of miRNA involving in biological processes which may result in diagnosing complex diseases. Statistical methodologies of point estimates such as the LASSO algorithm have been proposed to identify the interactions of miRNA and mRNA based on sequence and expression data. In this paper, we propose Bayesian LASSO and non-negative Bayesian LASSO to analyze the interactions between miRNA and mRNA using the expression data. The proposed Bayesian methods explore the posterior distributions for those parameters required in the model depicting the miRNA-mRNA interactions. For comparison purposes, we applied the Least Square Regression (LSR), Ridge Regression (RR), LASSO, non-negative LASSO (nLASSO), and the Bayesian approaches to four public data sets which have the known interaction pairs of miRNA and mRNA. Comparing to the point estimate algorithms, the Bayesian methods are able to infer more known interactions and are more meaningful to provide credible intervals to take into account the uncertainty of the interactions of miRNA and mRNA. The Bayesian approaches are useful for graphing the inferred effects of the miRNAs on the targets by plotting the posterior distributions of those parameters, and while the point estimate algorithm only provides a single estimate for those parameters .
MicroRNAs (miRNAs) are small RNA molecules composed of 19-22 nt , which play important regulatory roles in post-transcriptional gene regulation by inhibiting the translation of the mRNA into proteins or otherwise cleaving the target mRNA. Inferring miRNA targets provides useful information for understanding the roles of miRNA in biological processes that are potentially involved in complex diseases. Statistical methodologies for point estimation, such as the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm, have been proposed to identify the interactions of miRNA and mRNA based on sequence and expression data. In this paper, we propose using the Bayesian LASSO (BLASSO) and the non-negative Bayesian LASSO (nBLASSO) to analyse the interactions between miRNA and mRNA using expression data. The proposed Bayesian methods explore the posterior distributions for those parameters required to model the miRNA-mRNA interactions. These approaches can be used to observe the inferred effects of the miRNAs on the targets by plotting the posterior distributions of those parameters. For comparison purposes, the Least Squares Regression (LSR), Ridge Regression (RR), LASSO, non-negative LASSO (nLASSO), and the proposed Bayesian approaches were applied to four public datasets. We concluded that nLASSO and nBLASSO perform best in terms of sensitivity and specificity. Compared to the point estimate algorithms, which only provide single estimates for those parameters, the Bayesian methods are more meaningful and provide credible intervals , which take into account the uncertainty of the inferred interactions of the miRNA and mRNA. Furthermore, Bayesian methods naturally provide statistical significance to select convincing inferred interactions, while point estimate algorithms require a manually chosen threshold, which is less meaningful, to choose the possible interactions .
[ { "type": "R", "before": "playing", "after": ", which play", "start_char_pos": 64, "end_char_pos": 71 }, { "type": "D", "before": "involving", "after": null, "start_char_pos": 323, "end_char_pos": 332 }, { "type": "R", "before": "which may result in diagnosing", "after": "that are potentially involved in", "start_char_pos": 357, "end_char_pos": 387 }, { "type": "R", "before": "of point estimates", "after": "for point estimation,", "start_char_pos": 432, "end_char_pos": 450 }, { "type": "R", "before": "LASSO algorithm", "after": "Least Absolute Shrinkage and Selection Operator (LASSO) algorithm,", "start_char_pos": 463, "end_char_pos": 478 }, { "type": "R", "before": "Bayesian LASSO and", "after": "using the Bayesian LASSO (BLASSO) and the", "start_char_pos": 610, "end_char_pos": 628 }, { "type": "R", "before": "to analyze", "after": "(nBLASSO) to analyse", "start_char_pos": 657, "end_char_pos": 667 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 714, "end_char_pos": 717 }, { "type": "R", "before": "in the model depicting", "after": "to model", "start_char_pos": 831, "end_char_pos": 853 }, { "type": "A", "before": null, "after": "These approaches can be used to observe the inferred effects of the miRNAs on the targets by plotting the posterior distributions of those parameters.", "start_char_pos": 883, "end_char_pos": 883 }, { "type": "R", "before": "we applied the Least Square", "after": "the Least Squares", "start_char_pos": 909, "end_char_pos": 936 }, { "type": "R", "before": "Bayesian approaches", "after": "proposed Bayesian approaches were applied", "start_char_pos": 1022, "end_char_pos": 1041 }, { "type": "R", "before": "data sets which have the known interaction pairs of miRNA and mRNA. Comparing", "after": "datasets. We concluded that nLASSO and nBLASSO perform best in terms of sensitivity and specificity. Compared", "start_char_pos": 1057, "end_char_pos": 1134 }, { "type": "A", "before": null, "after": "which only provide single estimates for those parameters,", "start_char_pos": 1169, "end_char_pos": 1169 }, { "type": "R", "before": "able to infer more known interactions and are more meaningful to", "after": "more meaningful and", "start_char_pos": 1195, "end_char_pos": 1259 }, { "type": "R", "before": "to", "after": ", which", "start_char_pos": 1287, "end_char_pos": 1289 }, { "type": "R", "before": "interactions of", "after": "inferred interactions of the", "start_char_pos": 1331, "end_char_pos": 1346 }, { "type": "R", "before": "The Bayesian approaches are useful for graphing the inferred effects of the miRNAs on the targets by plotting the posterior distributions of those parameters, and while the point estimate algorithm only provides a single estimate for those parameters", "after": "Furthermore, Bayesian methods naturally provide statistical significance to select convincing inferred interactions, while point estimate algorithms require a manually chosen threshold, which is less meaningful, to choose the possible interactions", "start_char_pos": 1363, "end_char_pos": 1613 } ]
[ 0, 405, 583, 734, 882, 1362 ]
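The record above contrasts point-estimate and Bayesian LASSO fits of miRNA--mRNA interactions. A minimal sketch of the point-estimate baselines, assuming hypothetical expression matrices and a hand-picked penalty (the Bayesian variants would replace these point estimates with posterior sampling under a Laplace prior, omitted here):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_mirnas = 60, 20
X = rng.normal(size=(n_samples, n_mirnas))            # hypothetical miRNA profiles
true_w = np.zeros(n_mirnas)
true_w[[2, 7]] = -1.0                                 # two truly repressive miRNAs
y = X @ true_w + 0.1 * rng.normal(size=n_samples)     # hypothetical target mRNA profile

lasso = Lasso(alpha=0.05).fit(X, y)                   # plain LASSO: sparse signed effects
nlasso = Lasso(alpha=0.05, positive=True).fit(X, -y)  # nLASSO-style: repression strengths >= 0

print("LASSO nonzero miRNAs: ", np.flatnonzero(lasso.coef_))
print("nLASSO nonzero miRNAs:", np.flatnonzero(nlasso.coef_))
```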
1210.3543
1
We consider the sectoral composition of a country's GDP, i.e the division into agrarian, industrial, and service sectors. Exploring a simple system of differential equations we characterize the transfer of GDP shares between the sectors in the course of economic development. The model fits for the majority of countries providing 4 country-specific parameters. Relating the agrarian with the industrial sector, a data collapse over all countries and all years supports the applicability of our approach. Depending on the parameter ranges, country development exhibits different transfer properties. Most countries follow 3 of 8 characteristic paths. The types are not random but show distinct geographic and development patterns.
We consider the sectoral composition of a country's GDP, i.e .\ the partitioning into agrarian, industrial, and service sectors. Exploring a simple system of differential equations we characterize the transfer of GDP shares between the sectors in the course of economic development. The model fits for the majority of countries providing 4 country-specific parameters. Relating the agrarian with the industrial sector, a data collapse over all countries and all years supports the applicability of our approach. Depending on the parameter ranges, country development exhibits different transfer properties. Most countries follow 3 of 8 characteristic paths. The types are not random but show distinct geographic and development patterns.
[ { "type": "R", "before": "the division", "after": ".\\ the partitioning", "start_char_pos": 61, "end_char_pos": 73 } ]
[ 0, 121, 275, 361, 504, 599, 650 ]
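The abstract above fits a small ODE system for GDP-share transfer but does not spell out its equations; the sketch below integrates a generic one-directional transfer chain (two rates rather than the paper's four country-specific parameters) just to illustrate the mechanics. The shares conserve their sum by construction.

```python
import numpy as np
from scipy.integrate import solve_ivp

def shares(t, y, k1, k2):
    a, i, s = y                               # agrarian, industrial, service shares
    return [-k1 * a, k1 * a - k2 * i, k2 * i]

# Hypothetical rates and a mostly-agrarian initial composition.
sol = solve_ivp(shares, (0.0, 50.0), [0.8, 0.15, 0.05], args=(0.12, 0.08))
a, i, s = sol.y[:, -1]
print(f"final shares: agrarian={a:.2f} industrial={i:.2f} service={s:.2f}")
```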
1210.3811
1
The main result of this paper is a bilateral collateralized counterparty valuation adjusted pricing equation, which allows to price a deal while taking into account credit and debt valuation adjustments (CVA, DVA) along with margining and funding costs, all in a consistent way. We find that the equation has a recursive form, making the introduction of a purely additive funding valuation adjustment (FVA) difficult. Yet, we can cast the pricing equation into a set of iterative relationships which can be solved by means of standard least-square Monte Carlo techniques. As a consequence, we find that identifying funding costs FVA and debit valuation adjustments DVA is not tenable in general, contrary to what has been suggested in the literature in simple cases. We define a comprehensive framework that allows us to derive earlier results on funding or counterparty risk as a special case, although our framework is more than the sum of such special cases. We derive the general pricing equation by resorting to a risk-neutral approach . We consider realistic settings and include in our models the common market practices suggested by ISDA documentation, without assuming restrictive constraints on margining procedures and close-out netting rules. In particular, we allow for asymmetric collateral and funding rates, and exogenous liquidity policies and hedging strategies. Re-hypothecation liquidity risk and close-out amount evaluation issues are also covered. Finally, relevant examples of non-trivial settings illustrate how to derive known facts about discounting curves from a robust general framework and without resorting to ad hoc hypotheses.
The main result of this paper is a collateralized counterparty valuation adjusted pricing equation, which allows to price a deal while taking into account credit and debit valuation adjustments (CVA, DVA) along with margining and funding costs, all in a consistent way. Funding risk breaks the bilateral nature of the valuation formula. We find that the equation has a recursive form, making the introduction of a purely additive funding valuation adjustment (FVA) difficult. Yet, we can cast the pricing equation into a set of iterative relationships which can be solved by means of standard least-square Monte Carlo techniques. As a consequence, we find that identifying funding costs and debit valuation adjustments is not tenable in general, contrary to what has been suggested in the literature in simple cases. The assumptions under which funding costs vanish are a very special case of the more general theory. We define a comprehensive framework that allows us to derive earlier results on funding or counterparty risk as a special case, although our framework is more than the sum of such special cases. We derive the general pricing equation by resorting to a risk-neutral approach where the new types of risks are included by modifying the payout cash flows . We consider realistic settings and include in our models the common market practices suggested by ISDA documentation, without assuming restrictive constraints on margining procedures and close-out netting rules. In particular, we allow for asymmetric collateral and funding rates, and exogenous liquidity policies and hedging strategies. Re-hypothecation liquidity risk and close-out amount evaluation issues are also covered. Finally, relevant examples of non-trivial settings illustrate how to derive known facts about discounting curves from a robust general framework and without resorting to ad hoc hypotheses.
[ { "type": "D", "before": "bilateral", "after": null, "start_char_pos": 35, "end_char_pos": 44 }, { "type": "R", "before": "debt", "after": "debit", "start_char_pos": 176, "end_char_pos": 180 }, { "type": "A", "before": null, "after": "Funding risk breaks the bilateral nature of the valuation formula.", "start_char_pos": 279, "end_char_pos": 279 }, { "type": "D", "before": "FVA", "after": null, "start_char_pos": 630, "end_char_pos": 633 }, { "type": "D", "before": "DVA", "after": null, "start_char_pos": 666, "end_char_pos": 669 }, { "type": "A", "before": null, "after": "The assumptions under which funding costs vanish are a very special case of the more general theory.", "start_char_pos": 768, "end_char_pos": 768 }, { "type": "A", "before": null, "after": "where the new types of risks are included by modifying the payout cash flows", "start_char_pos": 1043, "end_char_pos": 1043 } ]
[ 0, 278, 418, 572, 767, 963, 1045, 1257, 1383, 1472 ]
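The record above solves its recursive valuation by least-squares Monte Carlo. Below is the generic regression step such schemes iterate, shown on a hypothetical call payoff rather than the paper's collateral- and funding-adjusted cash flows:

```python
# One backward-induction step of least-square Monte Carlo (a sketch):
# estimate the conditional expectation E[e^{-r dt} V_{t+dt} | S_t] by
# polynomial regression across simulated paths.
import numpy as np

rng = np.random.default_rng(1)
n_paths, r, dt, sigma = 100_000, 0.02, 0.25, 0.3
S_t = 100.0 * np.exp(rng.normal(-0.02, 0.3, n_paths))        # simulated states at t
S_next = S_t * np.exp((r - 0.5 * sigma**2) * dt
                      + sigma * np.sqrt(dt) * rng.normal(size=n_paths))
V_next = np.maximum(S_next - 100.0, 0.0)                     # payoff one step ahead

basis = np.vander(S_t / 100.0, 4)                            # columns x^3, x^2, x, 1
coef, *_ = np.linalg.lstsq(basis, np.exp(-r * dt) * V_next, rcond=None)
print("fitted continuation value at S_t = 100:", np.polyval(coef, 1.0))
```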
1210.4014
1
We introduce a new economics system suited for Intangible Goods ({\sc ig}). We argue that such system can now be implemented in the real world . It is more advantageous and more fair for the large majority of individuals without requiring any upper authority. This system is strongly demanding on network computer power for making financial transactions. This is a new democratic tool. Next step will be to test, validate the security of various implementations, and to ask for legal rules adaptations. We emphasis on the fact that all proposed documentation, algorithm, program in any language related to this proposal shall be open-source without any possibility to add any patent on any sort on the system or subsystem. This should be considered as a pure intellectual construction, like parts of Mathematics and then belongs to nobody or everybody, like 1+1=2. The very first draft is written in French language and posted to URL before October 18th, 2012.
We introduce a new economic system suited for Intangible Goods ({\sc ig}). We argue that such system can now be implemented in the real world using advance technics in distributed network computing and cryptography. The specification of the so called {\sc net} is presented. To Limit the number of financial transactions, the system is forced to define its own currency, with many benefits. The new cup currency, world wide seted is dedicated to {\sc ig}, available for person-to-person trading, protected from speculation and adapted for tax recovery with no additional computation. Those nices features makes the {\sc net} a new democraic tool, fixing specific issues in {\sc ig} trading and reviving a whole domain activity. We emphasis on the fact that all proposed documentation, algorithm, program in any language related to this proposal shall be open-source without any possibility to post any patent of any sort on the system or subsystem. This new trading contract should be considered as a pure intellectual construction, like parts of Mathematics and then belongs to nobody or everybody, like 1+1=2. Next step will be to test, validate the security of various implementations details, and to ask for legal rules adaptations. The first draft paper is written in French language and posted to URL .
[ { "type": "R", "before": "economics", "after": "economic", "start_char_pos": 19, "end_char_pos": 28 }, { "type": "R", "before": ". It is more advantageous and more fair for the large majority of individuals without requiring any upper authority. This system is strongly demanding on network computer power for making financial transactions. This is a new democratic tool. Next step will be to test, validate the security of various implementations, and to ask for legal rules adaptations.", "after": "using advance technics in distributed network computing and cryptography. The specification of the so called", "start_char_pos": 143, "end_char_pos": 502 }, { "type": "A", "before": null, "after": "is presented. To Limit the number of financial transactions, the system is forced to define its own currency, with many benefits. The new cup currency, world wide seted is dedicated to", "start_char_pos": 510, "end_char_pos": 510 }, { "type": "A", "before": null, "after": "ig", "start_char_pos": 515, "end_char_pos": 515 }, { "type": "A", "before": null, "after": ", available for person-to-person trading, protected from speculation and adapted for tax recovery with no additional computation. Those nices features makes the", "start_char_pos": 516, "end_char_pos": 516 }, { "type": "A", "before": null, "after": "a new democraic tool, fixing specific issues in", "start_char_pos": 524, "end_char_pos": 524 }, { "type": "A", "before": null, "after": "ig", "start_char_pos": 529, "end_char_pos": 529 }, { "type": "A", "before": null, "after": "trading and reviving a whole domain activity.", "start_char_pos": 530, "end_char_pos": 530 }, { "type": "R", "before": "add any patent on", "after": "post any patent of", "start_char_pos": 696, "end_char_pos": 713 }, { "type": "A", "before": null, "after": "new trading contract", "start_char_pos": 756, "end_char_pos": 756 }, { "type": "R", "before": "The very first draft", "after": "Next step will be to test, validate the security of various implementations details, and to ask for legal rules adaptations. The first draft paper", "start_char_pos": 894, "end_char_pos": 914 }, { "type": "R", "before": "before October 18th, 2012.", "after": ".", "start_char_pos": 963, "end_char_pos": 989 } ]
[ 0, 75, 144, 259, 354, 385, 502, 750, 893 ]
1210.4014
2
We introduce a new economic system suited for Intangible Goods ({\sc ig}). We argue that such system can now be implemented in the real world using advance technics in distributed network computing and cryptography. The specification of the so called {\sc net} is presented. To Limit the number of financial transactions, the system is forced to define its own currency, with many benefits. The new cup currency, world wide seted is dedicated to {\sc ig}, available for person-to-person trading, protected from speculation and adapted for tax recovery with no additional computation. Those nices features makes the {\sc net} a new democraic tool, fixing specific issues in {\sc ig} trading and reviving a whole domain activity. We emphasis on the fact that all proposed documentation, algorithm, program in any language related to this proposal shall be open-source without any possibility to post any patent of any sort on the system or subsystem. This new trading contract should be considered as a pure intellectual construction, like parts of Mathematics and then belongs to nobody or everybody, like 1+1=2. Next step will be to test, validate the security of various implementations details, and to ask for legal rules adaptations. The first draft paper is written in French language and posted to URL .
We introduce a new economic system suited for Intangible Goods ({\sc ig}). We argue that such system can now be implemented in the real world using advanced techniques in distributed network computing and cryptography. The specification of the so called {\sc net} is presented. To limit the number of financial transactions, the system is forced to define its own currency, with many benefits. The new "cup" currency, extended worldwide, is dedicated to {\sc ig}, available only for person-to-person trading, protected from speculation and adapted for tax recovery with no additional computation. Those nice features make the {\sc net} a new democratic tool, fixing specific issues in {\sc ig} trading and reviving a whole domain activity. We emphasize that all proposed documentation, algorithms, and programs in any language related to this proposal shall be open-source without any possibility to post any patent of any sort on the system or subsystem. This new trading model should be considered as a pure intellectual construction, like parts of Mathematics and then belongs to nobody or everybody, like 1+1=2. The next step will be to test, validate the security of various implementation details, and to ask for legal rules adaptations. The first draft paper is written in French language and posted to URL and hal.archive-ouverte.fr . We expect to provide an English translation before Christmas .
[ { "type": "R", "before": "cup currency, world wide seted", "after": "\"cup\" currency, extended worldwide,", "start_char_pos": 414, "end_char_pos": 444 }, { "type": "A", "before": null, "after": "only", "start_char_pos": 481, "end_char_pos": 481 }, { "type": "R", "before": "a new democraic", "after": "a new democratic", "start_char_pos": 656, "end_char_pos": 671 }, { "type": "R", "before": "contract", "after": "model", "start_char_pos": 997, "end_char_pos": 1005 }, { "type": "A", "before": null, "after": "and hal.archive-ouverte.fr . We expect to provide an English translation before Christmas", "start_char_pos": 1338, "end_char_pos": 1338 } ]
[ 0, 74, 215, 289, 405, 599, 758, 979, 1142, 1267 ]
1210.4017
1
The F_1-ATP synthase is a factory for synthesizing ATP in virtually all cells. Its core machinery is the subcomplex F_1-motor (F_1-ATPase) and performs the reversible mechanochemical coupling. Isolated F_1-motor hydrolyzes ATP, which is accompanied by unidirectional rotation of its central \gamma-shaft. When a strong opposing torque is imposed, the \gamma-shaft rotates in the opposite direction and drives the F_1-motor to synthesize ATP. This mechanical-to-chemical free-energy transduction at 100\% efficiency is not prohibited by thermodynamic laws. However, it is usually reached only at the quasi-static limit . Here, we evaluated the work exerted by the nanosized biological free-energy transducer F1-ATPase by single-molecule experiments on the basis of nonequilibrium theory . The results imply that the F1-ATPase achieves a nearly 100\% free-energy conversion efficiency even far from quasistatic process for both the mechanical-to-chemical and chemical-to-mechanical transductions. Such a high efficiency at a finite-time operation is not expected for macroscopic engines and highlights a remarkable property of the nanosized engines working in the energy scale of kBT .
F_\mathrm{o}F_1-ATP synthase is a factory for synthesizing ATP in virtually all cells. Its core machinery is the subcomplex F_1-motor (F_1-ATPase) and performs the reversible mechanochemical coupling. Isolated F_1-motor hydrolyzes ATP, which is accompanied by unidirectional rotation of its central \gamma-shaft. When a strong opposing torque is imposed, the \gamma-shaft rotates in the opposite direction and drives the F_1-motor to synthesize ATP. This mechanical-to-chemical free-energy transduction is the final and central step of the multistep cellular ATP-synthetic pathway . Here, we determined the amount of mechanical work exploited by the F_1-motor to synthesize an ATP molecule during forced rotations using methodology combining a nonequilibrium theory and single molecule measurements of responses to external torque. We found that the work exploited by the motor amounts only to what is thermodynamically required for the ATP synthesis. Specifically, F_1-motor converts mechanical work to chemical free energy at quite a high efficiency with negligible dissipation inside the motor even during rotations far from a quasistatic process .
[ { "type": "R", "before": "The", "after": "F_\\mathrm{o", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "R", "before": "transduction at 100\\% efficiency is not prohibited by thermodynamic laws. However, it is usually reached only at the quasi-static limit", "after": "transduction is the final and central step of the multistep cellular ATP-synthetic pathway", "start_char_pos": 483, "end_char_pos": 618 }, { "type": "R", "before": "evaluated the work exerted by the nanosized biological free-energy transducer F1-ATPase by single-molecule experiments on the basis of nonequilibrium theory . The results imply that the F1-ATPase achieves a nearly 100\\% free-energy conversion efficiency even far from quasistatic process for both the mechanical-to-chemical and chemical-to-mechanical transductions. Such", "after": "determined the amount of mechanical work exploited by the F_1-motor to synthesize an ATP molecule during forced rotations using methodology combining a nonequilibrium theory and single molecule measurements of responses to external torque. We found that the work exploited by the motor amounts only to that is thermodynamically required for the ATP synthesis. Specifically, F_1-motor converts mechanical work to chemical free energy at quite", "start_char_pos": 630, "end_char_pos": 1000 }, { "type": "R", "before": "at a finite-time operation is not expected for macroscopic engines and highlights a remarkable property of the nanosized engines working in the energy scale of kBT", "after": "with negligible dissipation inside the motor even during rotations far from a quasistatic process", "start_char_pos": 1019, "end_char_pos": 1182 } ]
[ 0, 77, 191, 303, 440, 556, 620, 788, 995 ]
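A back-of-envelope check of the near-unity efficiency claim, using typical literature values rather than numbers from the abstract (stall torque around 40 pN nm, one ATP per 120-degree step, roughly 20 kBT per ATP in vivo; all three are assumptions here):

```python
# Illustrative numbers only: mechanical work per 120-degree gamma-shaft
# step versus the free energy of one ATP under cellular conditions.
import math

kBT = 4.1                                        # pN*nm at ~300 K
torque = 40.0                                    # pN*nm, commonly cited F1 stall torque
work_per_step = torque * (2.0 * math.pi / 3.0)   # one ATP per 120 degrees
dG_ATP = 20.0 * kBT                              # ~80 pN*nm, typical in vivo value

print(f"work/step = {work_per_step:.0f} pN*nm = {work_per_step / kBT:.1f} kBT")
print(f"dG(ATP)   = {dG_ATP:.0f} pN*nm = {dG_ATP / kBT:.1f} kBT")
print(f"ratio     = {work_per_step / dG_ATP:.2f}  (near 1 => high efficiency)")
```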
1210.4973
1
In order to design complex networks that are robust and sustainable, we must understand systemic risk. As economic systems become increasingly interconnected, for example, a shock in a single financial network can provoke cascading failures throughout the system. The widespread effects of the current EU debt crisis and the 2008 world financial crisis occur because financial systems are characterized by complex relations that allow a local crisis to spread dramatically. We study US commercial bank balance sheet data and design a bi-partite banking network composed of (i) banks and (ii) bank assets . We propose a cascading failure model to simulate the crisis spreading process in a bi-partite banking network. We test our model using 2007 data to analyze failed banks . We find that , within realistic parameters, our model identifies a significant portion of the actual failed banks from the FDIC failed bank database from 2008 to 2011.
As economic entities become increasingly interconnected, a shock in a financial network can provoke significant cascading failures throughout the system. To study the systemic risk of financial systems, we create a bi-partite banking network model composed of banks and bank assets and propose a cascading failure model to describe the risk propagation process during crises. We empirically test the model with 2007 US commercial banks balance sheet data and compare the model prediction of the failed banks with the real failed banks after 2007. We find that our model efficiently identifies a significant portion of the actual failed banks reported by Federal Deposit Insurance Corporation. The results suggest that this model could be useful for systemic risk stress testing for financial systems. The model also identifies that commercial rather than residential real estate assets are major culprits for the failure of over 350 US commercial banks during 2008-2011.
[ { "type": "R", "before": "In order to design complex networks that are robust and sustainable, we must understand systemic risk. As economic systems", "after": "As economic entities", "start_char_pos": 0, "end_char_pos": 122 }, { "type": "D", "before": "for example,", "after": null, "start_char_pos": 159, "end_char_pos": 171 }, { "type": "D", "before": "single", "after": null, "start_char_pos": 185, "end_char_pos": 191 }, { "type": "A", "before": null, "after": "significant", "start_char_pos": 222, "end_char_pos": 222 }, { "type": "R", "before": "The widespread effects of the current EU debt crisis and the 2008 world financial crisis occur because financial systemsare characterized by complex relations that allow a local crisis to spread dramatically. We study US commercial bank balance sheet data and design a", "after": "To study the systemic risk of financial systems, we create a", "start_char_pos": 265, "end_char_pos": 533 }, { "type": "R", "before": "composed of (i) banks and (ii) bank assets . We", "after": "model composed of banks and bank assets and", "start_char_pos": 561, "end_char_pos": 608 }, { "type": "R", "before": "simulate the crisis spreading process in a bi-partite banking network. We test our model using", "after": "describe the risk propagation process during crises. We empirically test the model with", "start_char_pos": 646, "end_char_pos": 740 }, { "type": "R", "before": "data to analyze failed banks .", "after": "US commercial banks balance sheet data and compare the model prediction of the failed banks with the real failed banks after 2007.", "start_char_pos": 746, "end_char_pos": 776 }, { "type": "R", "before": ", within realistic parameters, our model", "after": "our model efficiently", "start_char_pos": 790, "end_char_pos": 830 }, { "type": "R", "before": "from the FDIC failed bank database from 2008 to 2011.", "after": "reported by Federal Deposit Insurance Corporation. The results suggest that this model could be useful for systemic risk stress testing for financial systems. The model also identifies that commercial rather than residential real estate assets are major culprits for the failure of over 350 US commercial banks during 2008-2011.", "start_char_pos": 891, "end_char_pos": 944 } ]
[ 0, 102, 264, 473, 605, 716, 776 ]
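A toy version of the bipartite cascade described above; the balance sheets, shock size and fire-sale rule are all hypothetical stand-ins for the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_banks, n_assets = 50, 20
W = rng.dirichlet(np.full(n_assets, 0.3), size=n_banks)  # sparse-ish portfolio weights
equity = np.full(n_banks, 0.05)                          # 5% loss-absorbing buffer
price = np.ones(n_assets)
price[3] = 0.7                                           # initial shock to one asset

alive = np.ones(n_banks, dtype=bool)
for _ in range(n_banks):                                 # iterate until no new failures
    loss = W @ (1.0 - price)                             # mark-to-market portfolio loss
    failed_now = alive & (loss > equity)
    if not failed_now.any():
        break
    alive &= ~failed_now
    # fire sale: failures depress the prices of the assets the failed banks held
    price = np.maximum(price - 0.05 * W[failed_now].sum(axis=0), 0.0)

print(f"failed banks: {(~alive).sum()} of {n_banks}")
```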
1210.5466
1
This paper studies the problem of maximizing expected utility from terminal wealth , combining a static position in derivative securities with a traditional dynamic trading strategy in stocks. We work in the framework of a general semi-martingale model and consider a utility function defined on the positive real line.
This paper studies the problem of maximizing expected utility from terminal wealth combining a static position in derivative securities , which we assume can be traded only at time zero, with a traditional dynamic trading strategy in stocks. We work in the framework of a general semi-martingale model and consider a utility function defined on the positive real line.
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 83, "end_char_pos": 84 }, { "type": "A", "before": null, "after": ", which we assume can be traded only at time zero,", "start_char_pos": 138, "end_char_pos": 138 } ]
[ 0, 193 ]
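One standard way to write the optimization the abstract describes, in our notation: x is initial wealth, H the dynamic strategy in the semimartingale S, and q the static position in a derivative with payoff f(S_T) bought at its time-0 price p:

```latex
% Formal statement (notation ours, not the paper's):
u(x) \;=\; \sup_{H,\,q}\; \mathbb{E}\Big[\, U\big(x + (H \cdot S)_T + q\,(f(S_T) - p)\big) \Big],
\qquad U : (0,\infty) \to \mathbb{R}.
```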
1210.5479
1
A decoupled time-changed (DTC) L\'evy process is a generalized time-changed L\'evy process whose continuous and discontinuous parts are allowed to follow separate random time scalings . Disentangling the stochastic time change in two components not only provides a powerful unitary framework for the models already present in the literature, but in principle opens up new possibilities for the choice of an asset's log-price dynamics. Following this new idea, we devise a general martingale structure for the price process, and obtain an inverse-Fourier pricing equation for claims paying at maturity a function of the asset value and its realized volatility. Thus with a single formula we are able to capture prices for derivatives depending on either an asset or its volatility , but also joint payoffs are allowed , like the target volatility option (TVO) , which shall be discussed. Numerical computations validating our techniques are provided. Notably, the DTC theory allows to incorporate valuation formulae from various asset models in a single software implementation .
In this paper we propose a general derivative pricing framework which employs decoupled time-changed (DTC) L\'evy processes to model the underlying asset of contingent claims. A DTC L\'evy process is a generalized time-changed L\'evy process whose continuous and pure jump parts are allowed to follow separate random time scalings ; we devise the martingale structure for a DTC L\'evy-driven asset and revisit many popular models which fall under this framework. Postulating different time changes for the underlying L\'evy decomposition allows to introduce asset price models consistent with the assumption of a correlated pair of continuous and jump market activities; we study one illustrative DTC model having this property by assuming that the instantaneous activity rates follow the so-called Wishart process. The theory developed is applied to the problem of pricing claims depending not only on the price or the volatility of an underlying asset , but also to more sophisticated derivatives that pay-off on the joint performance of these two financial variables , like the target volatility option (TVO) . We solve the pricing problem through a Fourier-inversion method; numerical computations validating our technique are provided .
[ { "type": "R", "before": "A", "after": "In this paper we propose a general derivative pricing framework which employs", "start_char_pos": 0, "end_char_pos": 1 }, { "type": "A", "before": null, "after": "processes to model the underlying asset of contingent claims. A DTC L\\'evy", "start_char_pos": 38, "end_char_pos": 38 }, { "type": "R", "before": "discontinuous", "after": "pure jump", "start_char_pos": 113, "end_char_pos": 126 }, { "type": "R", "before": ". Disentangling the stochastic time change in two components not only provides a powerful unitary framework for the models already present in the literature, but in principle opens up new possibilities for the choice of an asset's log-price dynamics. Following this new idea, we devise a general", "after": "; we devise the", "start_char_pos": 185, "end_char_pos": 480 }, { "type": "D", "before": "the price process, and obtain an inverse-Fourier pricing equation for claims paying at maturity a function of the asset value and its realized volatility. Thus with", "after": null, "start_char_pos": 506, "end_char_pos": 670 }, { "type": "R", "before": "single formula we are able to capture prices for derivatives depending on either an asset or its volatility", "after": "DTC L\\'evy-driven asset and revisit many popular models which fall under this framework. Postulating different time changes for the underlying L\\'evy decomposition allows to introduce asset price models consistent with the assumption of a correlated pair of continuous and jump market activities; we study one illustrative DTC model having this property by assuming that the instantaneous activity rates follow the the so-called Wishart process. The theory developed is applied to the problem of pricing claims depending not only on the price or the volatility of an underlying asset", "start_char_pos": 673, "end_char_pos": 780 }, { "type": "R", "before": "joint payoffs are allowed", "after": "to more sophisticated derivatives that pay-off on the joint performance of these two financial variables", "start_char_pos": 792, "end_char_pos": 817 }, { "type": "R", "before": ", which shall be discussed. Numerical", "after": ". We solve the pricing problem through a Fourier-inversion method; numerical", "start_char_pos": 860, "end_char_pos": 897 }, { "type": "R", "before": "techniques are provided. Notably, the DTC theory allows to incorporate valuation formulae from various asset models in a single software implementation", "after": "technique are provided", "start_char_pos": 926, "end_char_pos": 1077 } ]
[ 0, 435, 660, 887, 950 ]
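A minimal illustration of pricing under a stochastic time change (a single gamma clock, i.e. a variance-gamma-type model), far simpler than the decoupled two-clock, Wishart-rate construction above but showing the basic mechanics:

```python
# Brownian motion run on an independent gamma "business time" clock,
# priced by plain Monte Carlo (r = 0; an empirical martingale correction
# keeps E[S_T] = S0). Not the paper's DTC model.
import numpy as np

rng = np.random.default_rng(3)
n_paths, T, nu, sigma, S0, K = 200_000, 1.0, 0.2, 0.25, 100.0, 100.0

tau = rng.gamma(shape=T / nu, scale=nu, size=n_paths)  # random business time, mean T
X = sigma * np.sqrt(tau) * rng.normal(size=n_paths)    # B evaluated at time tau
X -= np.log(np.mean(np.exp(X)))                        # empirical martingale correction
ST = S0 * np.exp(X)

call = np.mean(np.maximum(ST - K, 0.0))
print(f"MC call price under the gamma time change: {call:.2f}")
```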
1210.6089
1
We propose a model of parameter learning for signal transduction, where the objective function is defined by signal transmission efficiency. This is a novel approach compared to the usual technique of adjusting parameters only on the basis of experimental data. We apply this to learn kinetic rates as a form of evolutionary learning, and find parameters which satisfy the objective. These may be intersected with parameter sets generated from experimental time-series data to further constrain a signal transduction model . The resulting model is self-regulating , i.e. perturbations in protein concentrations or changes in extracellular signaling will automatically lead to adaptation. We systematically perturb protein concentrations and observe the response of the system. We find fits with common observations of compensatory or co-regulation of protein expression levels in cellular systems (e. g. PDE3, AC5). In a novel experiment, we alter the distribution of extracellular signaling, and observe adaptation based on optimizing signal transmission. Self-regulating systems may be predictive of unwanted drug interference effects, since they aim to mimic complex cellular adaptation in a unified way.
We propose a model of parameter learning for signal transduction, where the objective function is defined by signal transmission efficiency. We apply this to learn kinetic rates as a form of evolutionary learning, and look for parameters which satisfy the objective. This is a novel approach compared to the usual technique of adjusting parameters only on the basis of experimental data . The resulting model is self-organizing , i.e. perturbations in protein concentrations or changes in extracellular signaling will automatically lead to adaptation. We systematically perturb protein concentrations and observe the response of the system. We find compensatory or co-regulation of protein expression levels . In a novel experiment, we alter the distribution of extracellular signaling, and observe adaptation based on optimizing signal transmission. We also discuss the relationship between signaling with and without transients. Signaling by transients may involve maximization of signal transmission efficiency for the peak response, but a minimization in steady-state responses. With an appropriate objective function, this can also be achieved by concentration adjustment. Self-organizing systems may be predictive of unwanted drug interference effects, since they aim to mimic complex cellular adaptation in a unified way.
[ { "type": "D", "before": "This is a novel approach compared to the usual technique of adjusting parameters only on the basis of experimental data.", "after": null, "start_char_pos": 141, "end_char_pos": 261 }, { "type": "R", "before": "find", "after": "look for", "start_char_pos": 339, "end_char_pos": 343 }, { "type": "R", "before": "These may be intersected with parameter sets generated from experimental time-series datato further constrain a signal transduction model", "after": "This is a novel approach compared to the usual technique of adjusting parameters only on the basis of experimental data", "start_char_pos": 384, "end_char_pos": 521 }, { "type": "R", "before": "self-regulating", "after": "URLanizing", "start_char_pos": 547, "end_char_pos": 562 }, { "type": "D", "before": "fits with common observations of", "after": null, "start_char_pos": 784, "end_char_pos": 816 }, { "type": "R", "before": "in cellular systems (e. g. PDE3, AC5).", "after": ".", "start_char_pos": 876, "end_char_pos": 914 }, { "type": "R", "before": "Self-regulating", "after": "We also discuss the relationship between signaling with and without transients. Signaling by transients may involve maximization of signal transmission efficiency for the peak response, but a minimization in steady-state responses. With an appropriate objective function, this can also be achieved by concentration adjustment. URLanizing", "start_char_pos": 1056, "end_char_pos": 1071 } ]
[ 0, 140, 261, 383, 523, 686, 775, 914, 1055 ]
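A toy rendition of the evolutionary learning loop described above, assuming a two-step activation cascade and a crude dynamic-range objective in place of the paper's transmission-efficiency functional:

```python
# Mutate-and-select search over kinetic rates (a sketch; the paper's
# pathway model and objective are richer). Steady states come from
# dx/dt = k_on * input * (1 - x) - k_off * x for each cascade step.
import numpy as np

rng = np.random.default_rng(4)

def steady_output(rates, u):
    k1, k2, d1, d2 = rates                 # activation / deactivation rates
    x = k1 * u / (k1 * u + d1)             # active kinase fraction
    return k2 * x / (k2 * x + d2)          # active effector fraction

def score(rates):
    inputs = np.linspace(0.1, 10.0, 25)
    out = np.array([steady_output(rates, u) for u in inputs])
    return out.max() - out.min()           # dynamic range as transmission proxy

rates = np.ones(4)
for _ in range(2000):                      # evolutionary hill-climbing
    mutant = rates * np.exp(0.1 * rng.normal(size=4))
    if score(mutant) > score(rates):
        rates = mutant

print("learned rates:", np.round(rates, 3), " score:", round(score(rates), 3))
```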
1210.6156
1
We perform a spatially resolved simulation study of an AND gate based on DNA strand displacement . DNA strands are modelled using a coarse-grained dynamic bonding model [ C. URL, Comp. Phys. Comm. 183, 1793 (2012) ] . We simulate the operation of the AND gate using several lengths of the toehold and the adjacent domains. Our simulations exhibit non-ideal behaviour as expected in an experimental implementation of strand displacement operations. We characterize this non-ideal behaviour in detail and characterize how the kinetic operation of the gate depends on the toehold and adjacent domain lengths. In particular, we observe that, while the final output state is reached with high fidelity when both input strands are present, our simulations exhibit numerous long-lived transition paths from the initial input state to the final output state .
We perform a spatially resolved simulation study of an AND gate based on DNA strand displacement using several lengths of the toehold and the adjacent domains . DNA strands are modelled using a coarse-grained dynamic bonding model [ C. URL, Comp. Phys. Comm. 183, 1793 (2012) ] . We observe a complex transition path from the initial state to the final state of the AND gate . This path is strongly influenced by non-ideal effects due to transient bubbles revealing undesired toeholds and thermal melting of whole strands. We have also characterized the bound and unbound kinetics of single strands, and in particular the kinetics of the total AND operation and the three distinct DNA transitions that it is based on. We observe an exponential kinetic dependence on the toehold length of the competitive displacement operation, but that the gate operation time is only weakly dependent on both the toehold and adjacent domain length. Our gate displays excellent logical fidelity in three input states, and quite poor fidelity in the fourth input state. This illustrates how non-ideality can have very selective effects on fidelity. Simulations and detailed analysis such as those presented here provide molecular insights into strand displacement computation, that can also be expected in chemical implementations .
[ { "type": "A", "before": null, "after": "using several lengths of the toehold and the adjacent domains", "start_char_pos": 97, "end_char_pos": 97 }, { "type": "R", "before": ". We simulate the operation", "after": ". We observe a complex transition path from the initial state to the final state", "start_char_pos": 217, "end_char_pos": 244 }, { "type": "R", "before": "using several lengths of the toehold and the adjacent domains. Our simulations exhibit", "after": ". This path is strongly influenced by", "start_char_pos": 261, "end_char_pos": 347 }, { "type": "R", "before": "behaviour as expected in an experimental implementation of strand displacement operations. We characterize this non-ideal behaviour in detail and characterize how the kinetic operation of the gate depends on", "after": "effects due to transient bubbles revealing undesired toeholds and thermal melting of whole strands. We have also characterized the bound and unbound kinetics of single strands, and in particular the kinetics of the total AND operation and", "start_char_pos": 358, "end_char_pos": 565 }, { "type": "A", "before": null, "after": "three distinct distinct DNA transitions that it is based on. We observe a exponential kinetic dependence on the toehold length of the competitive displacement operation, but that the gate operation time is only weakly dependent on both the", "start_char_pos": 570, "end_char_pos": 570 }, { "type": "R", "before": "lengths. In particular, we observe that, while the final output state is reached with high fidelity when both input strands are present, our simulations exhibit numerous long-lived transition paths from the initial input stateto the final output state", "after": "length. Our gate displays excellent logical fidelity in three input states, and quite poor fidelity in the fourth input state. This illustrates how non-ideality can have very selective effects on fidelity. Simulations and detailed analysis such as those presented here provide molecular insights into strand displacement computation, that can be also be expected in chemical implementations", "start_char_pos": 599, "end_char_pos": 850 } ]
[ 0, 185, 197, 323, 448, 607 ]
1210.6166
1
Biological networks have two modes. The first mode is static: a network is a passage on which something flows. The second mode is dynamic: a network is a pattern constructed by gluing functions of entities constituting the network. In this paper, first we discuss that these two modes can be associated with a category theoretic duality (adjunction) and derive a natural network structure (a path notion) for each mode by appealing to a category theoretic universality. The path notion corresponding to the static mode is just the usual directed path. The path notion for the dynamic mode is called lateral path which is the alternating path considered on the set of arcs. Their general functionalities in a network are transport and coherence, respectively. Second, we introduce a betweenness centrality of arcs for each mode and see how the two modes are embedded in various real biological network data. We find that there is a trade-off relationship between the two centralities: if the value of one is large then the value of the other is small. This can be seen as a kind of division of labor in a network into transport on the network and coherence of the network. Finally, we propose an optimization model of networks based on a quality function involving intensities of the two modes in order to see how networks with the above trade-off relationship can emerge through evolution. We show that the trade-off relationship can be observed in the evolved networks only when the dynamic mode is dominant in the quality function by numerical simulations. We also show that the evolved networks have qualitatively similar features with real biological networks by standard complex network analysis.
Biological networks have two modes. The first mode is static: a network is a passage on which something flows. The second mode is dynamic: a network is a pattern constructed by gluing functions of entities constituting the network. In this paper, first we discuss that these two modes can be associated with the category theoretic duality (adjunction) and derive a natural network structure (a path notion) for each mode by appealing to the category theoretic universality. The path notion corresponding to the static mode is just the usual directed path. The path notion for the dynamic mode is called lateral path which is the alternating path considered on the set of arcs. Their general functionalities in a network are transport and coherence, respectively. Second, we introduce a betweenness centrality of arcs for each mode and see how the two modes are embedded in various real biological network data. We find that there is a trade-off relationship between the two centralities: if the value of one is large then the value of the other is small. This can be seen as a kind of division of labor in a network into transport on the network and coherence of the network. Finally, we propose an optimization model of networks based on a quality function involving intensities of the two modes in order to see how networks with the above trade-off relationship can emerge through evolution. We show that the trade-off relationship can be observed in the evolved networks only when the dynamic mode is dominant in the quality function by numerical simulations. We also show that the evolved networks have features qualitatively similar to real biological networks by standard complex network analysis.
[ { "type": "R", "before": "a", "after": "the", "start_char_pos": 308, "end_char_pos": 309 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 435, "end_char_pos": 436 }, { "type": "R", "before": "qualitatively similar features with", "after": "features qualitatively similar to", "start_char_pos": 1603, "end_char_pos": 1638 } ]
[ 0, 35, 110, 231, 469, 551, 672, 758, 906, 1050, 1171, 1389, 1558 ]
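A rough computational reading of the two arc-level path notions (our interpretation for illustration, not the paper's category-theoretic definitions): arcs become nodes, directed paths compose head-to-tail, and a lateral step joins arcs sharing a source or a sink; betweenness can then be compared on the two derived graphs.

```python
import networkx as nx

G = nx.DiGraph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
arcs = list(G.edges())

transport = nx.DiGraph()   # "static" mode: arc e feeds arc f when head(e) = tail(f)
lateral = nx.Graph()       # "dynamic" mode: arcs glued at a shared source or sink
transport.add_nodes_from(arcs)
lateral.add_nodes_from(arcs)
for e in arcs:
    for f in arcs:
        if e != f and e[1] == f[0]:
            transport.add_edge(e, f)
        if e != f and (e[0] == f[0] or e[1] == f[1]):
            lateral.add_edge(e, f)

print("transport betweenness:", nx.betweenness_centrality(transport))
print("lateral betweenness:  ", nx.betweenness_centrality(lateral))
```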
1210.6321
1
Understanding the mutual relationships between information flows and social activity in society today is one of the cornerstones of the social sciences. In financial economics, the key issue in this regard is understanding and quantifying how news of all possible types (geopolitical, environmental, social, financial, economic, etc.) affect trading and the pricing of firms in organized stock markets. In this paper we seek to address this issue by performing an analysis of more than 24 million news records provided by Thompson Reuters and of their relationship with trading activity for 205 major stocks in the S&P US stock index. We show that the whole landscape of news that affect stock price movements can be automatically summarized via simple regularized regressions between trading activity and news information pieces decomposed, with the help of simple topic modeling techniques, into their "thematic" features. Using these methods, we are able to estimate and quantify the impacts of news on trading. We introduce network-based visualization techniques to represent the whole landscape of news information associated with a basket of stocks. The examination of the words that are representative of the topic distributions confirms that our method is able to extract the significant pieces of information influencing the stock market. Our results show that one of the most puzzling stylized fact in financial economies, namely that at certain times trading volumes appear to be "abnormally large," can be explained by the flow of news. In this sense, our results prove that there is no "excess trading," if the news are genuinely novel and provide relevant financial information.
Understanding the mutual relationships between information flows and social activity in society today is one of the cornerstones of the social sciences. In financial economics, the key issue in this regard is understanding and quantifying how news of all possible types (geopolitical, environmental, social, financial, economic, etc.) affect trading and the pricing of firms in organized stock markets. In this article, we seek to address this issue by performing an analysis of more than 24 million news records provided by Thomson Reuters and of their relationship with trading activity for 206 major stocks in the S&P US stock index. We show that the whole landscape of news that affect stock price movements can be automatically summarized via simple regularized regressions between trading activity and news information pieces decomposed, with the help of simple topic modeling techniques, into their "thematic" features. Using these methods, we are able to estimate and quantify the impacts of news on trading. We introduce network-based visualization techniques to represent the whole landscape of news information associated with a basket of stocks. The examination of the words that are representative of the topic distributions confirms that our method is able to extract the significant pieces of information influencing the stock market. Our results show that one of the most puzzling stylized facts in financial economics, namely that at certain times trading volumes appear to be "abnormally large," can be partially explained by the flow of news. In this sense, our results prove that there is no "excess trading," when restricting to times when news are genuinely novel and provide relevant financial information.
[ { "type": "R", "before": "paper", "after": "article,", "start_char_pos": 408, "end_char_pos": 413 }, { "type": "R", "before": "205", "after": "206", "start_char_pos": 588, "end_char_pos": 591 }, { "type": "A", "before": null, "after": "partially", "start_char_pos": 1515, "end_char_pos": 1515 }, { "type": "R", "before": "if the", "after": "when restricting to times when", "start_char_pos": 1615, "end_char_pos": 1621 } ]
[ 0, 152, 399, 631, 921, 1011, 1152, 1344, 1546 ]
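A sketch of the pipeline family described above, on toy headlines rather than the Reuters corpus: topic-decompose the news, then regularized-regress trading activity on topic exposures.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import Ridge

news = ["fed raises rates amid inflation fears",
        "drought hits corn and wheat harvest",
        "tech earnings beat expectations again",
        "central bank signals further tightening",
        "chip maker reports record quarterly profit",
        "crop failures push food prices higher"]

counts = CountVectorizer().fit_transform(news)
topics = LatentDirichletAllocation(n_components=3, random_state=0)
exposure = topics.fit_transform(counts)             # documents x topics

rng = np.random.default_rng(5)
# Hypothetical trading-activity series driven by the topics plus noise.
volume = exposure @ np.array([2.0, 0.5, 1.0]) + 0.1 * rng.normal(size=len(news))
model = Ridge(alpha=1.0).fit(exposure, volume)      # news themes -> trading activity
print("topic loadings on volume:", np.round(model.coef_, 2))
```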
1210.6643
1
Pulsed-field-gradient nuclear magnetic resonance (PFG-NMR) is used to obtain the true hydrodynamic size of complexes of peptides with sodium dodecyl sulfate (SDS) micelles. The peptide used in this study is a 19-residue antimicrobial peptide, GAD-2. Two smaller dipeptides, alanine-glycine (Ala-Gly) and tyrosine-leucine (Tyr-Leu), are used for comparison. We use PFG-NMR to simultaneously measure diffusion coefficients of both peptide and surfactant. These two inputs, as a function of SDS concentration, are then fit to a simple two species model that neglects hydrodynamic interactions between complexes. From this we obtain the fraction of free SDS, and the hydrodynamic size of complexes in a GAD-2--SDS system as a function of SDS concentration. These results are compared to those for smaller dipeptides and for peptide-free solutions. At low SDS concentrations ([SDS] \leq 25 mM), the results self-consistently point to a GAD-2--SDS complex of fixed hydrodynamic size R =(5.5 \pm 0.3) nm. At intermediate SDS concentrations (25 mM < [SDS] < 60 mM), the apparent size of a GAD-2--SDS complex shows almost a factor of two increase without a significant change in surfactant-to-peptide ratio within a complex, most likely implying an increase in the number of peptides in a complex. For peptide-free solutions, the self-diffusion coefficients of SDS with and without buffer are significantly different at low SDS concentrations but merge above [SDS]=60 mM. This concentration is identified as an onset of crowding beyond which it is impossible, even in principle, to extract information about hydrodynamic size of the peptide-surfactant complex .
Pulsed-field-gradient nuclear magnetic resonance (PFG-NMR) is used to obtain the true hydrodynamic size of complexes of peptides with sodium dodecyl sulfate (SDS) micelles. The peptide used in this study is a 19-residue antimicrobial peptide, GAD-2. Two smaller dipeptides, alanine-glycine (Ala-Gly) and tyrosine-leucine (Tyr-Leu), are used for comparison. We use PFG-NMR to simultaneously measure diffusion coefficients of both peptide and surfactant. These two inputs, as a function of SDS concentration, are then fit to a simple two species model that neglects hydrodynamic interactions between complexes. From this we obtain the fraction of free SDS, and the hydrodynamic size of complexes in a GAD-2--SDS system as a function of SDS concentration. These results are compared to those for smaller dipeptides and for peptide-free solutions. At low SDS concentrations ([SDS] \leq 25 mM), the results self-consistently point to a GAD-2--SDS complex of fixed hydrodynamic size R =(5.5 \pm 0.3) nm. At intermediate SDS concentrations (25 mM < [SDS] < 60 mM), the apparent size of a GAD-2--SDS complex shows almost a factor of two increase without a significant change in surfactant-to-peptide ratio within a complex, most likely implying an increase in the number of peptides in a complex. For peptide-free solutions, the self-diffusion coefficients of SDS with and without buffer are significantly different at low SDS concentrations but merge above [SDS]=60 mM. We find that in order to obtain unambiguous information about the hydrodynamic size of a peptide-surfactant complex from diffusion measurements, experiments must be carried out at or below [SDS] = 25 mM .
[ { "type": "R", "before": "This concentration is identified as an onset of crowding beyond which it is impossible, even in principle, to extract information about", "after": "We find that in order to obtain unambiguous information about the", "start_char_pos": 1461, "end_char_pos": 1596 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 1618, "end_char_pos": 1621 }, { "type": "A", "before": null, "after": "from diffusion measurements, experiments must be carried out at or below", "start_char_pos": 1649, "end_char_pos": 1649 }, { "type": "A", "before": null, "after": "SDS", "start_char_pos": 1650, "end_char_pos": 1650 }, { "type": "A", "before": null, "after": "= 25 mM", "start_char_pos": 1652, "end_char_pos": 1652 } ]
[ 0, 170, 247, 354, 450, 606, 750, 841, 995, 1286, 1460 ]
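The algebra behind the two-species fit, with illustrative numbers rather than the paper's data: in fast exchange the observed surfactant diffusion coefficient is a population average, and Stokes-Einstein converts the complex's coefficient into a hydrodynamic radius.

```python
# D_obs = f * D_free + (1 - f) * D_complex, inverted for f, then
# R_h = kB * T / (6 * pi * eta * D_complex). All inputs hypothetical.
import math

kB, T, eta = 1.380649e-23, 298.0, 0.93e-3     # J/K, K, Pa*s (water near 25 C)
D_free, D_complex = 4.0e-10, 0.6e-10          # m^2/s, free SDS / complex (assumed)
D_obs = 1.6e-10                               # m^2/s, measured SDS value (assumed)

f_free = (D_obs - D_complex) / (D_free - D_complex)
R_h = kB * T / (6.0 * math.pi * eta * D_complex)

print(f"free SDS fraction ~ {f_free:.2f}")
print(f"hydrodynamic radius ~ {R_h * 1e9:.1f} nm")
```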
1210.6727
1
We establish Schauder a priori estimates and regularity for solutions to a class of degenerate-elliptic linear second-order partial differential equations. Furthermore, given a smooth source function, we prove regularity of solutions up to the portion of the boundary where the operator is degenerate. Degenerate-elliptic operators of the kind described in our article appear in a diverse range of applications, including as generators of affine diffusion processes employed in stochastic volatility models in mathematical finance, generators of diffusion processes arising in mathematical biology, and the study of porous media.
We establish Schauder a priori estimates and regularity for solutions to a class of boundary-degenerate elliptic linear second-order partial differential equations. Furthermore, given a smooth source function, we prove regularity of solutions up to the portion of the boundary where the operator is degenerate. Degenerate-elliptic operators of the kind described in our article appear in a diverse range of applications, including as generators of affine diffusion processes employed in stochastic volatility models in mathematical finance, generators of diffusion processes arising in mathematical biology, and the study of porous media.
[ { "type": "R", "before": "degenerate-elliptic", "after": "boundary-degenerate elliptic", "start_char_pos": 84, "end_char_pos": 103 } ]
[ 0, 155, 301 ]
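A representative member of the operator class in question (the generator of the Heston stochastic-volatility diffusion, a standard affine model; the article's class is broader), which loses ellipticity on the boundary y = 0:

```latex
% Heston generator in log-price x and variance y; degenerate as y -> 0:
A u \;=\; \frac{y}{2}\Big(u_{xx} + 2\rho\sigma\,u_{xy} + \sigma^{2}u_{yy}\Big)
      + \Big(r - \frac{y}{2}\Big)u_{x} + \kappa(\theta - y)\,u_{y}.
```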
1210.7608
1
In this article , we develop a liquidation model in which the trader is constrained to liquidate a portfolio at constant participation rate . Considering the functional forms usually used by practitioners , we obtain a closed-form expression for the optimal participation rate and for the liquidity premium a trader should quote to buy a large block of shares. We also show that the difference in terms of liquidity premium between the constant participation rate case and the usual Almgren-Chriss-like case never exceeds 15\% .
When executing their orders, investors are offered different strategies by brokers and investment banks. Most orders are executed using VWAP algorithms. Other basic execution strategies include POV (also called PVol) -- for percentage of volume --, IS -- implementation shortfall -- or Target Close. In this article dedicated to POV strategies , we develop a liquidation model in which a trader is constrained to liquidate a portfolio with a constant participation rate to the market . Considering the functional forms usually used by practitioners for market impact functions , we obtain a closed-form expression for the optimal participation rate . Also, we develop a microfounded risk-liquidity premium that permits to better assess the costs and risks of execution processes and to give a price to a large block of shares. We also provide a thorough comparison between IS strategies and POV strategies in terms of risk-liquidity premium .
[ { "type": "A", "before": null, "after": "When executing their orders, investors are proposed different strategies by brokers and investment banks. Most orders are executed using VWAP algorithms. Other basic execution strategies include POV (also called PVol) -- for percentage of volume --, IS -- implementation shortfall -- or Target Close.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "A", "before": null, "after": "dedicated to POV strategies", "start_char_pos": 17, "end_char_pos": 17 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 60, "end_char_pos": 63 }, { "type": "R", "before": "at", "after": "with a", "start_char_pos": 111, "end_char_pos": 113 }, { "type": "A", "before": null, "after": "to the market", "start_char_pos": 142, "end_char_pos": 142 }, { "type": "A", "before": null, "after": "for market impact functions", "start_char_pos": 208, "end_char_pos": 208 }, { "type": "R", "before": "and for the liquidity premium", "after": ". Also, we develop a microfounded risk-liquidity premium that permits to better assess the costs and risks of execution processes and to give a price to", "start_char_pos": 281, "end_char_pos": 310 }, { "type": "D", "before": "trader should quote to buy a", "after": null, "start_char_pos": 313, "end_char_pos": 341 }, { "type": "R", "before": "show that the difference", "after": "provide a thorough comparison between IS strategies and POV strategies", "start_char_pos": 373, "end_char_pos": 397 }, { "type": "R", "before": "liquidity premiumbetween the constant participation rate case and the usual Almgren-Chriss-like case never exceeds 15\\%", "after": "risk-liquidity premium", "start_char_pos": 410, "end_char_pos": 529 } ]
[ 0, 364 ]
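A numerical sketch of the participation-rate trade-off behind such closed-form results, with an assumed power-law impact and an ad hoc risk penalty (not the paper's functional forms or parameters):

```python
# Trading Q shares at participation rate rho of market volume V takes
# Q / (rho * V) days; impact cost rises with rho, risk with duration.
import numpy as np
from scipy.optimize import minimize_scalar

Q, V, sigma = 1e6, 1e7, 0.02        # shares to sell, daily volume, daily volatility
eta, phi, lam = 0.1, 0.6, 2.0       # power-law impact scale/exponent, risk aversion

def cost(rho):
    duration = Q / (rho * V)                     # days in the market
    impact = eta * rho**phi * Q                  # temporary impact (power law)
    risk = lam * sigma * Q * np.sqrt(duration)   # exposure while executing
    return impact + risk

res = minimize_scalar(cost, bounds=(0.01, 0.5), method="bounded")
print(f"optimal participation rate ~ {res.x:.1%}, total cost ~ {res.fun:,.0f}")
```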
1210.8380
1
Financial markets are a typical example of complex systems where interactions between constituants lead to many remarkable features. Here we give empirical evidences , by making as few assumptions as possible, that the market microstructure capturing almost all of the available information in the data of stock markets does not involve higher order than pairwise interactions. We give an economic interpretation of this pairwise model. We show that it accurately recovers the empirical correlation coefficients thus the collective behaviors are quantitatively described by models that capture the observed pairwise correlations but no higher-order interactions. Furthermore, we show that an order-disorder transition occurs as predicted by the pairwise model. Last, we make the link with the graph theoretic description of stock markets recovering the non-random and scale free topology, shrinking length during crashes and meaningful clustering features as expected.
Financial markets are a typical example of complex systems where interactions between constituents lead to many remarkable features. Here we give empirical evidence , by making as few assumptions as possible, that the market microstructure capturing almost all of the available information in the data of stock markets does not involve higher order than pairwise interactions. We give an economic interpretation of this pairwise model. We show that it accurately recovers the empirical correlation coefficients thus the collective behaviors are quantitatively described by models that capture the observed pairwise correlations but no higher-order interactions. Furthermore, we show that an order-disorder transition occurs as predicted by the pairwise model. Last, we make the link with the graph-theoretic description of stock markets recovering the non-random and scale-free topology, shrinking length during crashes and meaningful clustering features as expected.
[ { "type": "R", "before": "constituants", "after": "constituents", "start_char_pos": 86, "end_char_pos": 98 }, { "type": "R", "before": "evidences", "after": "evidence", "start_char_pos": 156, "end_char_pos": 165 }, { "type": "R", "before": "graph theoretic", "after": "graph-theoretic", "start_char_pos": 793, "end_char_pos": 808 }, { "type": "R", "before": "scale free", "after": "scale-free", "start_char_pos": 868, "end_char_pos": 878 } ]
[ 0, 132, 377, 436, 662, 760 ]
1210.8380
2
Financial markets are a typical example of complex systems where interactions between constituents lead to many remarkable features. Here we give empirical evidence, by making as few assumptions as possible, that the market microstructure capturing almost all of the available information in the data of stock markets does not involve higher order than pairwise interactions. We give an economic interpretation of this pairwise model. We show that it accurately recovers the empirical correlation coefficients thus the collective behaviors are quantitatively described by models that capture the observed pairwise correlations but no higher-order interactions. Furthermore, we show that an order-disorder transition occurs as predicted by the pairwise model. Last, we make the link with the graph-theoretic description of stock markets recovering the non-random and scale-free topology, shrinking length during crashes and meaningful clustering features as expected .
Financial markets are a typical example of complex systems where interactions between constituents lead to many remarkable features. Here , we show that a pairwise maximum entropy model (or auto-logistic model) is able to describe switches between ordered (strongly correlated) and disordered market states. In this framework, the influence matrix may be thought as a dissimilarity measure and we explain how it can be used to study market structure. We make the link with the graph-theoretic description of stock markets reproducing the non-random and scale-free topology, shrinking length during crashes and meaningful clustering features as expected . The pairwise model provides an alternative method to study financial networks which may be useful for characterization of abnormal market states (crises and bubbles), in capital allocation or for the design of regulation rules .
[ { "type": "R", "before": "we give empirical evidence, by making as few assumptions as possible, that the market microstructure capturing almost all of the available information in the data of stock markets does not involve higher order than pairwise interactions. We give an economic interpretation of this pairwise model. We show that it accurately recovers the empirical correlation coefficients thus the collective behaviors are quantitatively described by models that capture the observed pairwise correlations but no higher-order interactions. Furthermore,", "after": ",", "start_char_pos": 138, "end_char_pos": 673 }, { "type": "R", "before": "an order-disorder transition occurs as predicted by the pairwise model. Last, we", "after": "a pairwise maximum entropy model (or auto-logistic model) is able to describe switches between ordered (strongly correlated) and disordered market states. In this framework, the influence matrix may be thought as a dissimilarity measure and we explain how it can be used to study market structure. We", "start_char_pos": 687, "end_char_pos": 767 }, { "type": "R", "before": "recovering", "after": "reproducing", "start_char_pos": 836, "end_char_pos": 846 }, { "type": "A", "before": null, "after": ". The pairwise model provides an alternative method to study financial networks which may be useful for characterization of abnormal market states (crises and bubbles), in capital allocation or for the design of regulation rules", "start_char_pos": 966, "end_char_pos": 966 } ]
[ 0, 132, 375, 434, 660, 758 ]
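The sents_char_pos field appears to list the starting character offset of each sentence in the before_revision text (the first entry is always 0, and each later entry falls just after the preceding sentence's final period). Below is a minimal sketch under that interpretation; split_sentences is a hypothetical helper, not part of the dataset.

```python
def split_sentences(text: str, sent_starts: list) -> list:
    # Pair each recorded sentence start with the next one (or the end of
    # the text) and slice; strip() drops the inter-sentence whitespace.
    bounds = list(sent_starts) + [len(text)]
    return [text[s:e].strip() for s, e in zip(bounds, bounds[1:])]

abstract = "Financial markets are complex systems. Here we give empirical evidence."
print(split_sentences(abstract, [0, 39]))
# -> ['Financial markets are complex systems.', 'Here we give empirical evidence.']
```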
1211.0104
1
We present a generic analytical scheme for the quantification of fluctuations due to bi-functionality induced signal transduction within the members of bacterial two-component system. The proposed model takes into account post-translational modifications in terms of elementary phosphotransfer kinetics. Sources of fluctuations due to autophosphorylation, kinase and phosphatase activity of the sensor kinase have been considered in the model via Langevin equations, which are then solved exactly within the framework of linear noise approximation. The resultant analytical expression of phosphorylated response regulators are then used to quantify the noise profile of biologically motivated single and branched pathways. Enhancement and reduction of noise in terms of extra phosphate outflux and influx, respectively, have been analyzed for the branched system .
We present a generic analytical scheme for the quantification of fluctuations due to bifunctionality-induced signal transduction within the members of bacterial two-component system. The proposed model takes into account post-translational modifications in terms of elementary phosphotransfer kinetics. Sources of fluctuations due to autophosphorylation, kinase and phosphatase activity of the sensor kinase have been considered in the model via Langevin equations, which are then solved within the framework of linear noise approximation. The resultant analytical expression of phosphorylated response regulators are then used to quantify the noise profile of biologically motivated single and branched pathways. Enhancement and reduction of noise in terms of extra phosphate outflux and influx, respectively, have been analyzed for the branched system . Furthermore, role of fluctuations of the network output in the regulation of a promoter with random activation/deactivation dynamics has been analyzed .
[ { "type": "R", "before": "bi-functionality induced", "after": "bifunctionality-induced", "start_char_pos": 85, "end_char_pos": 109 }, { "type": "D", "before": "exactly", "after": null, "start_char_pos": 489, "end_char_pos": 496 }, { "type": "A", "before": null, "after": ". Furthermore, role of fluctuations of the network output in the regulation of a promoter with random activation/deactivation dynamics has been analyzed", "start_char_pos": 863, "end_char_pos": 863 } ]
[ 0, 183, 303, 548, 722 ]
1211.0349
2
MicroRNAs (miRNAs) critically modulate stem cell pluripotency and their differentiation , but the precise mechanistic mechanism remains largely unknown. This study systematically reveals the functional and mechanistic roles of miRNAs in mouse pluripotent stem cells by analyzing the genome-wide physical interactions between all activated miRNAs and their targets. Generally, miRNAs vary their physical targets and functions with the switch of pluripotency to differentiation state. During pluripotency miRNAs primarily and mechanistically target and repress developmental processes, but surprisingly miRNAs do not directly target the pluripotent core factors but only mediate extrinsic signal pathways associated with pluripotency. During differentiation miRNAs mechanistically inhibit metabolism and directly repress the pluripotency . Interestingly, DNA methylation in enhancer regions mediates these miRNA activations. Together, under mediation by DNA methylation, miRNAs directly repress development and mechanistically modulate the pluripotent signal pathways to help stem cells maintain pluripotency; yet miRNAs directly repress both pluripotency and metabolism to facilitate cell differentiation .
MicroRNAs (miRNAs) critically modulate stem cell pluripotency , but the precise mechanistic mechanism remains largely unknown. This study systematically reveals the general mechanistic roles of miRNAs in mouse stem cells by analyzing global physical interactions between miRNAs and their targets. Generally, miRNAs primarily repress developmental processes, but surprisingly the top up-regulated miRNAs do not directly target the pluripotent core factors as thought; they only target extrinsic signal pathways associated with pluripotency. This suggested that the most important miRNAs in stem cells may only indirectly regulate the pluripotency via the genetic system . Interestingly, miRNAs predominately target and repress DNA methyltransferase, core enzymes for DNA methylation, and decreasing methylation in turn activates the top miRNAs. This system-wide circuit between up-regulated miRNAs and the repressed DNA methylation system would eventually activate pluripotent core factors for pluripotency. Therefore, miRNAs systematically repress development and inhibit the epigenetic system to modulate stem cell pluripotency .
[ { "type": "D", "before": "and their differentiation", "after": null, "start_char_pos": 62, "end_char_pos": 87 }, { "type": "R", "before": "functional and", "after": "general", "start_char_pos": 191, "end_char_pos": 205 }, { "type": "D", "before": "pluripotent", "after": null, "start_char_pos": 243, "end_char_pos": 254 }, { "type": "R", "before": "the genome-wide", "after": "global", "start_char_pos": 279, "end_char_pos": 294 }, { "type": "D", "before": "all activated", "after": null, "start_char_pos": 325, "end_char_pos": 338 }, { "type": "R", "before": "vary their physical targets and functions with the switch of pluripotency to differentiation state. During pluripotency miRNAs primarily and mechanistically target and", "after": "primarily", "start_char_pos": 383, "end_char_pos": 550 }, { "type": "A", "before": null, "after": "the top up-regulated", "start_char_pos": 601, "end_char_pos": 601 }, { "type": "R", "before": "but only mediate", "after": "as thought; they only target", "start_char_pos": 661, "end_char_pos": 677 }, { "type": "R", "before": "During differentiation miRNAs mechanistically inhibit metabolism and directly repress the pluripotency", "after": "This suggested that the most important miRNAs in stem cells may only indirectly regulate the pluripotency via the genetic system", "start_char_pos": 734, "end_char_pos": 836 }, { "type": "A", "before": null, "after": "miRNAs predominately target and repress DNA methyltransferase, core enzymes for", "start_char_pos": 854, "end_char_pos": 854 }, { "type": "D", "before": "methylation in enhancer regions mediates these miRNA activations. Together, under mediation by DNA", "after": null, "start_char_pos": 859, "end_char_pos": 957 }, { "type": "R", "before": "miRNAs directly repress development and mechanistically modulate the pluripotent signal pathways to help stem cells maintain pluripotency; yet miRNAs directly repress both pluripotency and metabolism to facilitate cell differentiation", "after": "and decreasing methylation in turn activates the top miRNAs. This system-wide circuit between up-regulated miRNAs and the repressed DNA methylation system would eventually activate pluripotent core factors for pluripotency. Therefore, miRNAs systematically repress development and inhibit the epigenetic system to modulate stem cell pluripotency", "start_char_pos": 971, "end_char_pos": 1205 } ]
[ 0, 152, 364, 482, 733, 838, 924, 1109 ]
1211.0349
3
MicroRNAs (miRNAs) critically modulate stem cell pluripotency, but the precise mechanistic mechanism remains largely unknown. This study systematically reveals the general mechanistic roles of miRNAs in mouse stem cells by analyzing global physical interactions between miRNAs and their targets. Generally, miRNAs primarily repress developmental processes, but surprisingly the top up-regulated miRNAs do not directly target the pluripotent core factors as thought; they only target extrinsic signal pathways associated with pluripotency. This suggested that the most important miRNAs in stem cells may only indirectly regulate the pluripotency via the genetic system . Interestingly, miRNAs predominately target and repress DNA methyltransferase, core enzymes for DNA methylation, and decreasing methylation in turn activates the top miRNAs. This system-wide circuit between up-regulated miRNAs and the repressed DNA methylation system would eventually activate pluripotent core factors for pluripotency. Therefore, miRNAs systematically repress development and inhibit the epigenetic system to modulate stem cell pluripotency.
MicroRNAs (miRNAs) critically modulate stem cell pluripotency, but the fundamental mechanism remains largely unknown. This study systematically reveals the global mechanism of miRNA functions in mouse stem cells by analyzing genome-wide physical interactions between miRNAs and their targets. Generally, miRNAs primarily repress developmental processes, but surprisingly the top up-regulated miRNAs do not directly target the pluripotent core factors as thought; they only target the signal pathways associated with pluripotency. This suggests that the most important miRNAs in stem cells may only indirectly regulate the pluripotency . Interestingly, miRNAs predominately target and repress DNA methyltransferases, the core enzymes for DNA methylation, and decreasing methylation in turn activates the top miRNAs. This system-wide circuit between up-regulated miRNAs and the repressed DNA methylation system eventually activates the pluripotent core factors for pluripotency. Therefore, miRNAs systematically repress development and inhibit the epigenetic system to dynamically modulate stem cell pluripotency.
[ { "type": "R", "before": "precise mechanistic", "after": "fundamental", "start_char_pos": 71, "end_char_pos": 90 }, { "type": "R", "before": "general mechanistic roles of miRNAs", "after": "global mechanism of miRNA functions", "start_char_pos": 164, "end_char_pos": 199 }, { "type": "R", "before": "global", "after": "genome-wide", "start_char_pos": 233, "end_char_pos": 239 }, { "type": "R", "before": "extrinsic", "after": "the", "start_char_pos": 483, "end_char_pos": 492 }, { "type": "R", "before": "suggested", "after": "suggests", "start_char_pos": 544, "end_char_pos": 553 }, { "type": "D", "before": "via the genetic system", "after": null, "start_char_pos": 645, "end_char_pos": 667 }, { "type": "R", "before": "methyltransferase,", "after": "methyltransferases, the", "start_char_pos": 729, "end_char_pos": 747 }, { "type": "R", "before": "would eventually activate", "after": "eventually activates the", "start_char_pos": 937, "end_char_pos": 962 }, { "type": "A", "before": null, "after": "dynamically", "start_char_pos": 1096, "end_char_pos": 1096 } ]
[ 0, 125, 295, 465, 538, 669, 842, 1005 ]
1211.0349
4
MicroRNAs (miRNAs) critically modulate stem cell pluripotency, but the fundamental mechanism remains largely unknown. This study systematically reveals the global mechanism of miRNA functions in mouse stem cells by analyzing genome-wide physical interactions between miRNAs and their targets. Generally, miRNAs primarily repress developmental processes, but surprisingly the top up-regulated miRNAs do not directly target the pluripotent core factors as thought ; they only target the signal pathways associated with pluripotency. This suggests that the most important miRNAs in stem cells may only indirectly regulate the pluripotency. Interestingly, miRNAs predominately target and repress DNA methyltransferases, the core enzymes for DNA methylation , and decreasing methylation in turn activates the top miRNAs . This system-wide circuit between up-regulated miRNAs and the repressed DNA methylation system eventually activates the pluripotent core factors for pluripotency. Therefore, miRNAs systematically repress development and inhibit the epigenetic system to dynamically modulate stem cell pluripotency .
MicroRNAs (miRNAs) critically modulate stem cell properties like pluripotency, but the fundamental mechanism remains largely unknown. This study systematically analyzes multiple-omics data and builds a systems physical network including genome-wide interactions between miRNAs and their targets to reveal the systems mechanism of miRNA functions in mouse pluripotent stem cells. Globally, miRNAs directly repress the pluripotent core factors during differentiation state. Surprisingly, during pluripotent state, the top important miRNAs do not directly regulate the pluripotent core factors as thought , but they only directly target the pluripotent signal pathways and directly repress developmental processes. Furthermore, at pluripotent state miRNAs predominately repress DNA methyltransferases, the core enzymes for DNA methylation . The decreasing methylation repressed by miRNAs in turn activates the top miRNAs and pluripotent core factors , creating an active circuit system to modulate pluripotency. MiRNAs vary their functions with different stem cell states. While miRNAs directly repress pluripotent core factors to facilitate the differentiation during cell differentiation, they also help stem cells to maintain pluripotency by activating pluripotent cores through directly repressing DNA methylation systems and primarily inhibiting development .
[ { "type": "A", "before": null, "after": "properties like", "start_char_pos": 49, "end_char_pos": 49 }, { "type": "R", "before": "reveals the global", "after": "analyzes multiple-omics data and builds a systems physical network including genome-wide interactions between miRNAs and their targets to reveal the systems", "start_char_pos": 145, "end_char_pos": 163 }, { "type": "R", "before": "stem cells by analyzing genome-wide physical interactions between miRNAs and their targets. Generally, miRNAs primarily repress developmental processes, but surprisingly the top up-regulated", "after": "pluripotent stem cells. Globally, miRNAs directly repress the pluripotent core factors during differentiation state. Surprisingly, during pluripotent state, the top important", "start_char_pos": 202, "end_char_pos": 392 }, { "type": "R", "before": "target", "after": "regulate", "start_char_pos": 416, "end_char_pos": 422 }, { "type": "R", "before": "; they only target the signal pathways associated with pluripotency. This suggests that the most important miRNAs in stem cells may only indirectly regulate the pluripotency. Interestingly, miRNAs predominately target and", "after": ", but they only directly target the pluripotent signal pathways and directly repress developmental processes. Furthermore, at pluripotent state miRNAs predominately", "start_char_pos": 463, "end_char_pos": 684 }, { "type": "R", "before": ", and decreasing methylation", "after": ". The decreasing methylation repressed by miRNAs", "start_char_pos": 754, "end_char_pos": 782 }, { "type": "R", "before": ". This system-wide circuit between up-regulated miRNAs and the repressed DNA methylation system eventually activates the", "after": "and", "start_char_pos": 816, "end_char_pos": 936 }, { "type": "R", "before": "for pluripotency. Therefore, miRNAs systematically repress development and inhibit the epigenetic system to dynamically modulate stem cell pluripotency", "after": ", creating an active circuit system to modulate pluripotency. MiRNAs vary their functions with different stem cell states. While miRNAs directly repress pluripotent core factors to facilitate the differentiation during cell differentiation, they also help stem cells to maintain pluripotency by activating pluripotent cores through directly repressing DNA methylation systems and primarily inhibiting development", "start_char_pos": 962, "end_char_pos": 1113 } ]
[ 0, 118, 293, 464, 531, 637, 979 ]
1211.0412
1
In this paper we derive a new handy integral equation for the free boundary of infinite time horizon, continuous time, stochastic, irreversible investment problems with uncertainty modeled as a one-dimensional, regular diffusion X. The new integral equation allows to explicitly find the free boundary, b , in some so far unsolved cases, as when X is a three-dimensional Bessel process or a CEV process. Our result follows from purely probabilistic arguments. Indeed, we first show that b(X(t))=l(t), with l (t) unique optional solution of a representation problem in the spirit of Bank-El Karoui (2004); then, thanks to such identification and the fact that l (t) uniquely solves a backward stochastic equation, we find the integral problem for the free boundary .
In this paper we derive a new handy integral equation for the free-boundary of infinite time horizon, continuous time, stochastic, irreversible investment problems with uncertainty mo-deled as a one-dimensional, regular diffusion X. The new integral equation allows to explicitly find the free-boundary b in some so far unsolved cases, as when the operating profit function is not multiplicatively separable and X is a three- dimensional Bessel process or a CEV process. Our result follows from purely probabilistic arguments. Indeed, we first show that b(X(t))=l(t), with l the unique optional solution of a representation problem in the spirit of Bank-El Karoui (2004); then, thanks to such identification and the fact that l uniquely solves a backward stochastic equation, we find the integral problem for the free-boundary .
[ { "type": "R", "before": "free boundary", "after": "free-boundary", "start_char_pos": 62, "end_char_pos": 75 }, { "type": "R", "before": "modeled", "after": "mo-deled", "start_char_pos": 181, "end_char_pos": 188 }, { "type": "R", "before": "free boundary, b ,", "after": "free-boundary b", "start_char_pos": 288, "end_char_pos": 306 }, { "type": "A", "before": null, "after": "the operating profit function is not multiplicatively separable and", "start_char_pos": 346, "end_char_pos": 346 }, { "type": "R", "before": "three-dimensional", "after": "three- dimensional", "start_char_pos": 354, "end_char_pos": 371 }, { "type": "R", "before": "(t)", "after": "the", "start_char_pos": 509, "end_char_pos": 512 }, { "type": "D", "before": "(t)", "after": null, "start_char_pos": 662, "end_char_pos": 665 }, { "type": "R", "before": "free boundary", "after": "free-boundary", "start_char_pos": 751, "end_char_pos": 764 } ]
[ 0, 231, 404, 460, 605 ]
1211.0412
2
In this paper we derive a new handy integral equation for the free-boundary of infinite time horizon, continuous time, stochastic, irreversible investment problems with uncertainty mo-deled as a one-dimensional, regular diffusion X. The new integral equation allows to explicitly find the free-boundary b in some so far unsolved cases, as when the operating profit function is not multiplicatively separable and X is a three- dimensional Bessel process or a CEV process. Our result follows from purely probabilistic arguments. Indeed, we first show that b(X(t))=l (t), with l the unique optional solution of a representation problem in the spirit of Bank-El Karoui (2004) ; then, thanks to such identification and the fact that l uniquely solves a backward stochastic equation, we find the integral problem for the free-boundary.
In this paper , we derive a new handy integral equation for the free-boundary of infinite time horizon, continuous time, stochastic, irreversible investment problems with uncertainty modeled as a one-dimensional, regular diffusion X. The new integral equation allows to explicitly find the free-boundary b (\cdot) in some so far unsolved cases, as when the operating profit function is not multiplicatively separable and X is a three-dimensional Bessel process or a CEV process. Our result follows from purely probabilistic arguments. Indeed, we first show that b(X(t))=l ^* (t), with l ^* the unique optional solution of a representation problem in the spirit of Bank-El Karoui Ann. Probab. 32 (2004) 1030-1067 ; then, thanks to such an identification and the fact that l ^* uniquely solves a backward stochastic equation, we find the integral problem for the free-boundary.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 14, "end_char_pos": 14 }, { "type": "R", "before": "mo-deled", "after": "modeled", "start_char_pos": 182, "end_char_pos": 190 }, { "type": "A", "before": null, "after": "(\\cdot)", "start_char_pos": 306, "end_char_pos": 306 }, { "type": "R", "before": "three- dimensional", "after": "three-dimensional", "start_char_pos": 421, "end_char_pos": 439 }, { "type": "A", "before": null, "after": "^*", "start_char_pos": 566, "end_char_pos": 566 }, { "type": "A", "before": null, "after": "^*", "start_char_pos": 579, "end_char_pos": 579 }, { "type": "A", "before": null, "after": "Ann. Probab. 32", "start_char_pos": 669, "end_char_pos": 669 }, { "type": "A", "before": null, "after": "1030-1067", "start_char_pos": 677, "end_char_pos": 677 }, { "type": "A", "before": null, "after": "an", "start_char_pos": 701, "end_char_pos": 701 }, { "type": "A", "before": null, "after": "^*", "start_char_pos": 737, "end_char_pos": 737 } ]
[ 0, 233, 472, 528, 679 ]
1211.0618
1
We study an admissions control problem, where a queue with service rate 1-p receives incoming jobs at rate \lambda\in(1-p,1), and the decision maker is allowed to redirect away jobs up to a rate of p, with the objective of minimizing the time-average queue length. We show that the amount of information about the future has a significant impact on system performance, in the heavy-traffic regime. When the future is unknown, the optimal average queue length diverges at rate \log_{1/(1-p)} \frac{1}{1-\lambda}, as \lambda\to 1. In sharp contrast, when all future arrival and service times are revealed beforehand, the optimal average queue length converges to a finite constant, (1-p)/p, as \lambda\to 1. We further show that the finite limit of (1-p)/p can be achieved using only a finite lookahead window starting from the current time frame, whose length scales as \log 1/(1-\lambda), as \lambda\to 1. This leads to the conjecture of an interesting duality between queuing delay and the amount of information about the future.
We study an admissions control problem, where a queue with service rate 1-p receives incoming jobs at rate \lambda\in(1-p,1), and the decision maker is allowed to redirect away jobs up to a rate of p, with the objective of minimizing the time-average queue length. We show that the amount of information about the future has a significant impact on system performance, in the heavy-traffic regime. When the future is unknown, the optimal average queue length diverges at rate \sim \log_{1/(1-p)} \frac{1}{1-\lambda}, as \lambda\to 1. In sharp contrast, when all future arrival and service times are revealed beforehand, the optimal average queue length converges to a finite constant, (1-p)/p, as \lambda\to 1. We further show that the finite limit of (1-p)/p can be achieved using only a finite lookahead window starting from the current time frame, whose length scales as \mathcal{O}(\log \frac{1}{1-\lambda}), as \lambda\to 1. This leads to the conjecture of an interesting duality between queuing delay and the amount of information about the future.
[ { "type": "D", "before": "\\log_{1/(1-p)", "after": null, "start_char_pos": 476, "end_char_pos": 489 }, { "type": "A", "before": null, "after": "\\sim\\log_{1/(1-p)", "start_char_pos": 505, "end_char_pos": 505 }, { "type": "D", "before": "1.", "after": null, "start_char_pos": 730, "end_char_pos": 732 }, { "type": "A", "before": null, "after": ".", "start_char_pos": 737, "end_char_pos": 737 }, { "type": "R", "before": "\\log 1/(1-\\lambda", "after": "\\mathcal{O", "start_char_pos": 901, "end_char_pos": 918 }, { "type": "D", "before": "1.", "after": null, "start_char_pos": 972, "end_char_pos": 974 }, { "type": "A", "before": null, "after": ".", "start_char_pos": 979, "end_char_pos": 979 } ]
[ 0, 264, 397, 537, 556, 623 ]
1211.0856
1
A heat kernel approach is proposed for the development of a general, flexible, and mathematically tractable asset pricing framework in finite time. The pricing kernel, giving rise to the price system in an incomplete market, is modelled by weighted heat kernels that are driven by multivariate Markov processes and that provide enough degrees of freedom in order to calibrate to relevant data, e.g. to the term structure of bond prices. It is shown how, for a class of models, the prices of bonds, caplets, and swaptions can be computed in closed form. The dynamical equations for the price processes are derived, and explicit formulae are obtained for the short rate of interest, the risk premium, and for the stochastic volatility of prices. Several of the closed-form asset price models presented in this paper are driven by combinations of Markovian jump processes with different probability laws. Such models provide a rich basis for consistent applications in several sectors of a financial market including equity, fixed-income, commodities, and insurance. The flexible, multidimensional and multivariate structure, on which the asset price models are constructed, lends itself well to the transparent modelling of dependence across asset classes. As an illustration, the impact on prices by spiralling debt, a typical feature of a financial crisis, is modelled explicitly, and contagion effects are readily observed in the dynamics of asset returns.
A heat kernel approach is proposed for the development of a general, flexible, and mathematically tractable asset pricing framework in finite time. The pricing kernel, giving rise to the price system in an incomplete market, is modelled by weighted heat kernels which are driven by multivariate Markov processes and which provide enough degrees of freedom in order to calibrate to relevant data, e.g. to the term structure of bond prices. It is shown how, for a class of models, the prices of bonds, caplets, and swaptions can be computed in closed form. The dynamical equations for the price processes are derived, and explicit formulae are obtained for the short rate of interest, the risk premium, and for the stochastic volatility of prices. Several of the closed-form asset price models presented in this paper are driven by combinations of Markovian jump processes with different probability laws. Such models provide a rich basis for consistent applications in several sectors of a financial market including equity, fixed-income, commodities, and insurance. The flexible, multidimensional and multivariate structure, on which the asset price models are constructed, lends itself well to the transparent modelling of dependence across asset classes. As an illustration, the impact on prices by spiralling debt, a typical feature of a financial crisis, is modelled explicitly, and contagion effects are readily observed in the dynamics of asset returns.
[ { "type": "R", "before": "that", "after": "which", "start_char_pos": 262, "end_char_pos": 266 }, { "type": "R", "before": "that", "after": "which", "start_char_pos": 315, "end_char_pos": 319 } ]
[ 0, 147, 436, 552, 743, 901, 1063, 1254 ]
1211.0906
1
Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on previously unseen input data , using machine learning techniques to build a model of the algorithm's runtime as a function of domain-specific problem features. Such models have important applications to algorithm analysis, portfolio-based algorithm selection, and the automatic configuration of parameterized algorithms. Over the past decade, a wide variety of techniques have been studied for building such models. Here, we describe extensions and improvements of previous models, new families of models, and --- perhaps most importantly --- a much more thorough treatment of algorithm parameters as model inputs. We also describe novel features for predicting algorithm runtime for the propositional satisfiability (SAT), mixed integer programming (MIP) , and travelling salesperson (TSP) problems. We evaluate these innovations through the largest empirical analysis of its kind, comparing to all previously proposed modeling techniques of which we are aware . Our experiments consider 11 algorithms and 35 instance distributions; they also span a very wide range of SAT, MIP, and TSP instances, with the least structured having been generated uniformly at random and the most structured having emerged from real industrial applications. Overall, we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances, to new algorithms from a parameterized space, and to both simultaneously.
Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on a previously unseen input , using machine learning techniques to build a model of the algorithm's runtime as a function of problem-specific instance features. Such models have important applications to algorithm analysis, portfolio-based algorithm selection, and the automatic configuration of parameterized algorithms. Over the past decade, a wide variety of techniques have been studied for building such models. Here, we describe extensions and improvements of existing models, new families of models, and -- perhaps most importantly -- a much more thorough treatment of algorithm parameters as model inputs. We also comprehensively describe new and existing features for predicting algorithm runtime for propositional satisfiability (SAT), travelling salesperson (TSP) and mixed integer programming (MIP) problems. We evaluate these innovations through the largest empirical analysis of its kind, comparing to a wide range of runtime modelling techniques from the literature . Our experiments consider 11 algorithms and 35 instance distributions; they also span a very wide range of SAT, MIP, and TSP instances, with the least structured having been generated uniformly at random and the most structured having emerged from real industrial applications. Overall, we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances, to new algorithms from a parameterized space, and to both simultaneously.
[ { "type": "A", "before": null, "after": "a", "start_char_pos": 90, "end_char_pos": 90 }, { "type": "D", "before": "data", "after": null, "start_char_pos": 115, "end_char_pos": 119 }, { "type": "R", "before": "domain-specific problem", "after": "problem-specific instance", "start_char_pos": 217, "end_char_pos": 240 }, { "type": "R", "before": "previous", "after": "existing", "start_char_pos": 556, "end_char_pos": 564 }, { "type": "R", "before": "---", "after": "--", "start_char_pos": 601, "end_char_pos": 604 }, { "type": "R", "before": "---", "after": "--", "start_char_pos": 630, "end_char_pos": 633 }, { "type": "R", "before": "describe novel", "after": "comprehensively describe new and existing", "start_char_pos": 714, "end_char_pos": 728 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 775, "end_char_pos": 778 }, { "type": "A", "before": null, "after": "travelling salesperson (TSP) and", "start_char_pos": 815, "end_char_pos": 815 }, { "type": "D", "before": ", and travelling salesperson (TSP)", "after": null, "start_char_pos": 848, "end_char_pos": 882 }, { "type": "R", "before": "all previously proposed modeling techniques of which we are aware", "after": "a wide range of runtime modelling techniques from the literature", "start_char_pos": 988, "end_char_pos": 1053 } ]
[ 0, 250, 411, 506, 705, 892, 1055, 1125, 1332 ]
1211.2709
1
In this paper, we create a model of unexpected fluctuation of the long-term real interest rate. This model is based on our new version of IS-LM model. The new IS-LM model eliminates two main deficiencies of the original model: assumptions of constant price level and of strictly exogenous money supply. The unexpected fluctuations of the long-term real interest rate can be explained by existence of special type of cycle called relaxation oscillation on money (or financial assets) market. Relaxation oscillations include some short parts looking like "jumps". These "jumps" can be interpreted like unexpected. In other words, we try to explain these "unexpected" fluctuations of long-term real interest rate and show that these fluctuations can be only seemingly unexpected. Last but not least, we show some impacts of the government intervention by fiscal or monetary policy on economics using this models. Then we suggest possible interaction between fiscal and monetary policy .
In this paper, we present own point of view how the unexpected fluctuations of the long-term real interest rate can be explained . We describe a macroeconomic environment by the modification of the fundamental macroeconomic equilibrium model called the IS-LM model. Last but not least, we suggest a possible cooperation between the fiscal and monetary policy to reduce these fluctuations. Our modelling is demonstrated on an illustrative example .
[ { "type": "R", "before": "create a model of unexpected fluctuation of the long-term real interest rate. This model is based on our new version of IS-LM model. The new IS-LM model eliminates two main deficiencies of the original model: assumptions of constant price level and of strictly exogenous money supply. The", "after": "present own point of view how the", "start_char_pos": 18, "end_char_pos": 306 }, { "type": "R", "before": "by existence of special type of cycle called relaxation oscillation on money (or financial assets) market. Relaxation oscillations include some short parts looking like \"jumps\". These \"jumps\" can be interpreted like unexpected. In other words, we try to explain these \"unexpected\" fluctuations of long-term real interest rate and show that these fluctuations can be only seemingly unexpected.", "after": ". We describe a macroeconomic environment by the modification of the fundamental macroeconomic equilibrium model called the IS-LM model.", "start_char_pos": 384, "end_char_pos": 776 }, { "type": "R", "before": "show some impacts of the government intervention by fiscal or monetary policy on economics using this models. Then we suggest possible interaction between", "after": "suggest a possible cooperation between the", "start_char_pos": 800, "end_char_pos": 954 }, { "type": "A", "before": null, "after": "to reduce these fluctuations. Our modelling is demonstrated on an illustrative example", "start_char_pos": 982, "end_char_pos": 982 } ]
[ 0, 95, 150, 302, 490, 561, 611, 776, 909 ]
1211.2820
1
MicroRNAs are small noncoding RNAs that regulate genes post- transciptionally by binding and degrading target eukaryotic mR- NAs . We use a quantitative model to study gene regulation by microRNAs and compare it to gene regulation by prokaryotic small non-coding RNAs (sRNAs). Our model uses a combina- tion of analytic techniques as well as computational simulations to calculate the mean-expression and noise profiles of genes regu- lated by both sRNAs and microRNAs . We find that despite very different molecular machinery and modes of action (catalytic vs stoichiometric), the mean expression levels and noise profiles of microRNA-regulated genes are almost identical to genes regu- lated by prokaryotic sRNAs. MicroRNAs suppress noise when proteins are expressed at low levels but substantially increase noise at intermediate and high expression levels. This suggests that microRNAs and sRNAs may represent an example of con- vergent evolution . We extend our model to study crosstalk be- tween multiple mRNAs that are regulated by a single microRNA and show that noise is a sensitive measure of microRNA-mediated interaction between mRNAs. This suggests a new experimental strategy for uncovering the microRNA-mRNA interactions and testing the competing endogenous RNA (ceRNA) hypothesis.
MicroRNAs are small noncoding RNAs that regulate genes post-transciptionally by binding and degrading target eukaryotic mRNAs . We use a quantitative model to study gene regulation by inhibitory microRNAs and compare it to gene regulation by prokaryotic small non-coding RNAs (sRNAs). Our model uses a combination of analytic techniques as well as computational simulations to calculate the mean-expression and noise profiles of genes regulated by both microRNAs and sRNAs . We find that despite very different molecular machinery and modes of action (catalytic vs stoichiometric), the mean expression levels and noise profiles of microRNA-regulated genes are almost identical to genes regulated by prokaryotic sRNAs. This behavior is extremely robust and persists across a wide range of biologically relevant parameters . We extend our model to study crosstalk between multiple mRNAs that are regulated by a single microRNA and show that noise is a sensitive measure of microRNA-mediated interaction between mRNAs. We conclude by discussing possible experimental strategies for uncovering the microRNA-mRNA interactions and testing the competing endogenous RNA (ceRNA) hypothesis.
[ { "type": "R", "before": "post- transciptionally", "after": "post-transciptionally", "start_char_pos": 55, "end_char_pos": 77 }, { "type": "R", "before": "mR- NAs", "after": "mRNAs", "start_char_pos": 121, "end_char_pos": 128 }, { "type": "A", "before": null, "after": "inhibitory", "start_char_pos": 187, "end_char_pos": 187 }, { "type": "R", "before": "combina- tion", "after": "combination", "start_char_pos": 295, "end_char_pos": 308 }, { "type": "R", "before": "regu- lated by both sRNAs and microRNAs", "after": "regulated by both microRNAs and sRNAs", "start_char_pos": 430, "end_char_pos": 469 }, { "type": "R", "before": "regu- lated", "after": "regulated", "start_char_pos": 683, "end_char_pos": 694 }, { "type": "R", "before": "MicroRNAs suppress noise when proteins are expressed at low levels but substantially increase noise at intermediate and high expression levels. This suggests that microRNAs and sRNAs may represent an example of con- vergent evolution", "after": "This behavior is extremely robust and persists across a wide range of biologically relevant parameters", "start_char_pos": 717, "end_char_pos": 950 }, { "type": "R", "before": "be- tween", "after": "between", "start_char_pos": 992, "end_char_pos": 1001 }, { "type": "R", "before": "This suggests a new experimental strategy", "after": "We conclude by discussing possible experimental strategies", "start_char_pos": 1148, "end_char_pos": 1189 } ]
[ 0, 130, 277, 716, 860, 952, 1147 ]
1211.3133
1
A common metaphor for describing development is a rugged epigenetic landscape where cell fates are represented as attracting valleys resulting from a complex regulatory network. Here, we introduce a framework for explicitly constructing epigenetic landscapes that combines genomic data with techniques from physics. Each cell fate is a dynamic attractor, yet cells can change fate in response to external signals. Our model suggests that partially reprogrammed cells are a natural consequence of high-dimensional landscapes and predicts that partially reprogrammed cells should be hybrids that co-express genes from multiple cell fates. We verify this prediction by reanalyzing existing data sets . Our model reproduces known reprogramming protocols and identifies candidate transcription factors for reprogramming to novel cell fates, suggesting epigenetic landscapes are a powerful paradigm for understanding cellular identity.
A common metaphor for describing development is a rugged "epigenetic landscape" where cell fates are represented as attracting valleys resulting from a complex regulatory network. Here, we introduce a framework for explicitly constructing epigenetic landscapes that combines genomic data with techniques from spin-glass physics. Each cell fate is a dynamic attractor, yet cells can change fate in response to external signals. Our model suggests that partially reprogrammed cells are a natural consequence of high-dimensional landscapes , and predicts that partially reprogrammed cells should be hybrids that co-express genes from multiple cell fates. We verify this prediction by reanalyzing existing datasets . Our model reproduces known reprogramming protocols and identifies candidate transcription factors for reprogramming to novel cell fates, suggesting epigenetic landscapes are a powerful paradigm for understanding cellular identity.
[ { "type": "R", "before": "epigenetic landscape", "after": "\"epigenetic landscape\"", "start_char_pos": 57, "end_char_pos": 77 }, { "type": "A", "before": null, "after": "spin-glass", "start_char_pos": 307, "end_char_pos": 307 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 525, "end_char_pos": 525 }, { "type": "R", "before": "data sets", "after": "datasets", "start_char_pos": 689, "end_char_pos": 698 } ]
[ 0, 177, 316, 414, 638, 700 ]
1211.3690
1
The Influenza A virus belongs to the Orthomyxoviridae family. Influenza virus infection occurs yearly in all countries of the world. It usually kills between 250,000 and 500,000 people and causes severe illness in millions more. Over the last century alone we have seen 3 global influenza pandemics. The great human and financial cost of this disease has made it the second most studied virus today, behind HIV. Recently, several genome-wide RNA interference studies have focused on identifying host molecules that participate in Influenza infection. We used five of these studies for this meta-analysis. As a result, 2 host gene complexes important for the Influenza virus life cycle were identified. The first complex is related to Golgi transport and vesicle coating. Several viral proteins go through this complex before viral budding, the knock-down of Golgi related proteins might have prevented viral production. The second complex is related to ATP dependent proton transport. The uncoating of the Influenza virus is dependent on pH changes within the endosome, knock-down of these genes may changes in the endosomal pH inhibiting the uncoating of the virus .
The Influenza A virus belongs to the Orthomyxoviridae family. Influenza virus infection occurs yearly in all countries of the world. It usually kills between 250,000 and 500,000 people and causes severe illness in millions more. Over the last century alone we have seen 3 global influenza pandemics. The great human and financial cost of this disease has made it the second most studied virus today, behind HIV. Recently, several genome-wide RNA interference studies have focused on identifying host molecules that participate in Influenza infection. We used nine of these studies for this meta-analysis. Even though the overlap among genes identified in multiple screens was small, network analysis indicates that similar protein complexes and biological functions of the host were present. As a result, several host gene complexes important for the Influenza virus life cycle were identified. The biological function and the relevance of each identified protein complex in the Influenza virus life cycle is further detailed in this paper .
[ { "type": "R", "before": "five", "after": "nine", "start_char_pos": 559, "end_char_pos": 563 }, { "type": "A", "before": null, "after": "Even though the overlap among genes identified in multiple screens was small, network analysis indicates that similar protein complexes and biological functions of the host were present.", "start_char_pos": 605, "end_char_pos": 605 }, { "type": "R", "before": "2", "after": "several", "start_char_pos": 619, "end_char_pos": 620 }, { "type": "R", "before": "first complex is related to Golgi transport and vesicle coating. Several viral proteins go through this complex before viral budding, the knock-down of Golgi related proteins might have prevented viral production. The second complex is related to ATP dependent proton transport. The uncoating of", "after": "biological function and", "start_char_pos": 707, "end_char_pos": 1002 }, { "type": "A", "before": null, "after": "relevance of each identified protein complex in the", "start_char_pos": 1007, "end_char_pos": 1007 }, { "type": "R", "before": "is dependent on pH changes within the endosome, knock-down of these genes may changes in the endosomal pH inhibiting the uncoating of the virus", "after": "life cycle is further detailed in this paper", "start_char_pos": 1024, "end_char_pos": 1167 } ]
[ 0, 61, 132, 228, 299, 411, 550, 604, 702, 771, 920, 985 ]
1211.4251
1
The relative solvent accessibility (RSA) of a residue in a protein measures the extent of burial or exposure of that residue in the 3D structure. RSA is frequently used to describe a protein's biophysical or evolutionary properties. To calculate RSA, a residue's solvent accessibility ( SA ) needs to be normalized by a suitable reference value for the given amino acid; several normalization scales have previously been proposed. However, these scales do not provide tight upper bounds on SA values frequently observed in empirical crystal structures. Instead, they underestimate the largest allowed SA values, by up to 20\%. As a result, many empirical crystal structures contain residues that seem to have RSA values in excess of one. Here, we derive a new normalization scale that does provide a tight upper bound on observed SA values. We pursue two complementary strategies, one based on extensive analysis of empirical structures and one based on systematic enumeration of biophysically allowed tripeptides. Both approaches yield highly congruent results that consistently exceed published values. We conclude that previously published SA normalization values were too small primarily because the conformations that maximize SA had not been correctly identified. Finally , we show that empirically derived hydrophobicity scales are sensitive to accurate RSA calculation, and we derive a hydrophobicity scale that shows excellent agreement with experimentally measured scales.
The relative solvent accessibility (RSA) of a residue in a protein measures the extent of burial or exposure of that residue in the 3D structure. RSA is frequently used to describe a protein's biophysical or evolutionary properties. To calculate RSA, a residue's solvent accessibility ( ASA ) needs to be normalized by a suitable reference value for the given amino acid; several normalization scales have previously been proposed. However, these scales do not provide tight upper bounds on ASA values frequently observed in empirical crystal structures. Instead, they underestimate the largest allowed ASA values, by up to 20\%. As a result, many empirical crystal structures contain residues that seem to have RSA values in excess of one. Here, we derive a new normalization scale that does provide a tight upper bound on observed ASA values. We pursue two complementary strategies, one based on extensive analysis of empirical structures and one based on systematic enumeration of biophysically allowed tripeptides. Both approaches yield congruent results that consistently exceed published values. We conclude that previously published ASA normalization values were too small , primarily because the conformations that maximize ASA had not been correctly identified. As an application of our results , we show that empirically derived hydrophobicity scales are sensitive to accurate RSA calculation, and we derive new hydrophobicity scales that show increased correlation with experimentally measured scales.
[ { "type": "R", "before": "SA", "after": "ASA", "start_char_pos": 287, "end_char_pos": 289 }, { "type": "R", "before": "SA", "after": "ASA", "start_char_pos": 490, "end_char_pos": 492 }, { "type": "R", "before": "SA", "after": "ASA", "start_char_pos": 601, "end_char_pos": 603 }, { "type": "R", "before": "SA", "after": "ASA", "start_char_pos": 830, "end_char_pos": 832 }, { "type": "D", "before": "highly", "after": null, "start_char_pos": 1037, "end_char_pos": 1043 }, { "type": "R", "before": "SA", "after": "ASA", "start_char_pos": 1143, "end_char_pos": 1145 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1182, "end_char_pos": 1182 }, { "type": "R", "before": "SA", "after": "ASA", "start_char_pos": 1233, "end_char_pos": 1235 }, { "type": "R", "before": "Finally", "after": "As an application of our results", "start_char_pos": 1271, "end_char_pos": 1278 }, { "type": "R", "before": "a hydrophobicity scale that shows excellent agreement", "after": "new hydrophobicity scales that show increased correlation", "start_char_pos": 1393, "end_char_pos": 1446 } ]
[ 0, 145, 232, 370, 430, 552, 626, 737, 840, 1014, 1104, 1270 ]
1211.4598
1
The fundamental theorem of utility maximization (called FTUM hereafter) says that the utility maximization admits solution if and only if there exists an equivalent martingale measure. This theorem is true for discrete market models (where the number of scenarios is finite), and remains valid for general discrete-time market models when the utility is smooth enough. However, this theorem fails in continuous-time framework even with nice utility function, where there might exist arbitrage opportunities and optimal portfolio. This paper addresses the question how far we can weaken the non-arbitrage condition as well as the utility maximization problem to preserve their complete and strong relationship described by the FTUM. As application of our version of the FTUM, we establish equivalence between the No-Unbounded-Profit-with-Bounded-Risk condition, the existence of num\'eraire portfolio, and the existence of solution to the utility maximization under equivalent probability measure. The latter fact can be interpreted as a sort of weak form of market's viability, while this equivalence is established with a much less technical approach. Furthermore, the obtained equivalent probability can be chosen as close to the real-world probability measure as we want .
The fundamental theorem of utility maximization (called FTUM hereafter) says that the utility maximization admits solution if and only if there exists an equivalent martingale measure. This theorem is true for discrete market models (where the number of scenarios is finite), and remains valid for general discrete-time market models when the utility is smooth enough. However, this theorem --in this current formulation-- fails in continuous-time framework even with nice utility function, where there might exist arbitrage opportunities and optimal portfolio. This paper addresses the question how far we can weaken the non-arbitrage condition as well as the utility maximization problem to preserve their complete and strong relationship described by the FTUM. As application of our version of the FTUM, we establish equivalence between the No-Unbounded-Profit-with-Bounded-Risk condition, the existence of num\'eraire portfolio, and the existence of solution to the utility maximization under equivalent probability measure. The latter fact can be interpreted as a sort of weak form of market's viability, while this equivalence is established with a much less technical approach. Furthermore, the obtained equivalent probability can be chosen as close to the real-world probability measure as we want (but might not be equal) .
[ { "type": "A", "before": null, "after": "--in this current formulation--", "start_char_pos": 391, "end_char_pos": 391 }, { "type": "A", "before": null, "after": "(but might not be equal)", "start_char_pos": 1275, "end_char_pos": 1275 } ]
[ 0, 184, 368, 530, 732, 997, 1153 ]
1211.4598
2
The fundamental theorem of utility maximization (called FTUM hereafter) says that the utility maximization admits solution if and only if there exists an equivalent martingale measure. This theorem is true for discrete market models (where the number of scenarios is finite), and remains valid for general discrete-time market models when the utility is smooth enough. However, this theorem --in this current formulation-- fails in continuous-time framework even with nice utility function, where there might exist arbitrage opportunities and optimal portfolio. This paper addresses the question how far we can weaken the non-arbitrage condition as well as the utility maximization problem to preserve their complete and strong relationship described by the FTUM. As application of our version of the FTUM, we establish equivalence between the No-Unbounded-Profit-with-Bounded-Risk condition , the existence of num\'eraire portfolio, and the existence of solution to the utility maximization under equivalent probability measure . The latter fact can be interpreted as a sort of weak form of market's viability, while this equivalence is established with a much less technical approach. Furthermore, the obtained equivalent probability can be chosen as close to the real-world probability measure as we want (but might not be equal) .
This paper proposes two approaches that quantify the exact relationship among the viability, the absence of arbitrage, and/or the existence of the num\'eraire portfolio under minimal assumptions and for general continuous-time market models. Precisely, our first and principal contribution proves the equivalence among the No-Unbounded-Profit-with-Bounded-Risk condition (NUPBR hereafter) , the existence of the num\'eraire portfolio, and the existence of the optimal portfolio under an equivalent probability measure for any "nice" utility and positive initial capital. Herein, a "nice" utility is any smooth von Neumann-Morgenstern utility satisfying Inada's conditions and the elasticity assumptions of Kramkov and Schachermayer . Furthermore, the equivalent probability measure ---under which the utility maximization problems have solutions--- can be chosen as close to the real-world probability measure as we want (but might not be equal) . Without changing the underlying probability measure and under mild assumptions, our second contribution proves that the NUPBR is equivalent to the "local" existence of the optimal portfolio. This constitutes an alternative to the first contribution, if one insists on working under the real-world probability. These two contributions lead naturally to new types of viability that we call weak and local viabilities .
[ { "type": "R", "before": "The fundamental theorem of utility maximization (called FTUM hereafter) says that", "after": "This paper proposes two approaches that quantify the exact relationship among the viability, the absence of arbitrage, and/or the existence of the num\\'eraire portfolio under minimal assumptions and for general continuous-time market models. Precisely, our first and principal contribution proves the equivalence among", "start_char_pos": 0, "end_char_pos": 81 }, { "type": "D", "before": "utility maximization admits solution if and only if there exists an equivalent martingale measure. This theorem is true for discrete market models (where the number of scenarios is finite), andremains valid for general discrete-time market modelswhen the utility is smooth enough. However, this theorem --in this current formulation-- fails in continuous-time framework even with nice utility function, where there might exist arbitrage opportunities and optimal portfolio. This paper addresses the question how far we can weaken the non-arbitrage condition as well as the utility maximization problem to preserve their complete and strong relationship described by the FTUM. As application of our version of the FTUM, we establish equivalence between the", "after": null, "start_char_pos": 86, "end_char_pos": 841 }, { "type": "A", "before": null, "after": "(NUPBR hereafter)", "start_char_pos": 890, "end_char_pos": 890 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 910, "end_char_pos": 910 }, { "type": "R", "before": "solution to the utility maximization under", "after": "the optimal portfolio under an", "start_char_pos": 955, "end_char_pos": 997 }, { "type": "A", "before": null, "after": "for any \"nice\" utility and positive initial capital. Herein, a 'nice\" utility is any smooth von URLenstern utility satisfying Inada's conditions and the elasticity assumptions of Kramkov and Schachermayer", "start_char_pos": 1029, "end_char_pos": 1029 }, { "type": "D", "before": "The latter fact can be interpreted as a sort of weak form of market's viability, while this equivalence is established with a much less technical approach.", "after": null, "start_char_pos": 1032, "end_char_pos": 1187 }, { "type": "R", "before": "obtained equivalent probability", "after": "equivalent probability measure ---under which the utility maximization problems have solutions---", "start_char_pos": 1205, "end_char_pos": 1236 }, { "type": "A", "before": null, "after": ". Without changing the underlying probability measure and under mild assumptions, our second contribution proves that the NUPBR is equivalent to the \"", "start_char_pos": 1334, "end_char_pos": 1334 }, { "type": "A", "before": null, "after": "local", "start_char_pos": 1339, "end_char_pos": 1339 }, { "type": "A", "before": null, "after": "\" existence of the optimal portfolio. This constitutes an alternative to the first contribution, if one insists on working under the real-world probability. These two contributions lead naturally to new types of viability that we call weak and local viabilities", "start_char_pos": 1340, "end_char_pos": 1340 } ]
[ 0, 184, 366, 559, 761, 1031, 1187 ]
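A brief hedged aside recalling the standard definitions this record leans on (from the general no-arbitrage literature, not quoted from the abstract above): NUPBR is the requirement that the terminal wealths attainable from unit initial capital with nonnegative wealth form a set that is bounded in probability,

    \{ X_T : X \text{ admissible},\ X_0 = 1,\ X \ge 0 \} \text{ is bounded in } L^0,

and the num\'eraire portfolio is the admissible wealth process \widehat{X} such that X / \widehat{X} is a supermartingale for every admissible wealth process X.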
1211.4636
1
Using results from our companion article URL :1112.4824v2] on a Schauder approach to existence of solutions to a degenerate-parabolic partial differential equation, we solve three intertwined problems, motivated by probability theory and mathematical finance, concerning degenerate diffusion processes. We show that the martingale problem associated with a degenerate-elliptic differential operator with unbounded, locally Holder continuous coefficients on a half-space is well-posed in the sense of Stroock and Varadhan. Second, we prove existence, uniqueness, and the strong Markov property for weak solutions to a stochastic differential equation with degenerate diffusion and unbounded coefficients with suitable H\"older continuity properties. Third, for an Ito process with degenerate diffusion and unbounded but appropriately regular coefficients, we prove existence of a strong Markov process, unique in the sense of probability law, whose one-dimensional marginal probability distributions match those of the given Ito process.
Using results from our companion article [arXiv:1112.4824v2] on a Schauder approach to existence of solutions to a degenerate-parabolic partial differential equation, we solve three intertwined problems, motivated by probability theory and mathematical finance, concerning degenerate diffusion processes. First, we show that the martingale problem associated with a degenerate-elliptic differential operator with unbounded, locally H\"older continuous coefficients on a half-space is well-posed in the sense of Stroock and Varadhan. Second, we prove existence, uniqueness, and the strong Markov property for weak solutions to a stochastic differential equation with degenerate diffusion and unbounded coefficients with suitable H\"older continuity properties. Third, for an Ito process with degenerate diffusion and unbounded but appropriately regular coefficients, we prove existence of a strong Markov process, unique in the sense of probability law, whose one-dimensional marginal probability distributions match those of the given Ito process.
[ { "type": "R", "before": "URL", "after": "arXiv", "start_char_pos": 41, "end_char_pos": 44 } ]
[ 0, 302, 521, 748 ]
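The half-space degenerate diffusions treated in this record are typified by square-root models whose diffusion coefficient vanishes at the boundary. As a minimal, hedged illustration (a standard CIR-type example with the full-truncation Euler scheme; the parameter values and the scheme are illustrative assumptions, not taken from the paper):

    # Illustrative only: full-truncation Euler-Maruyama for the CIR-type SDE
    # dY = kappa*(theta - Y) dt + sigma*sqrt(Y) dW, a standard example of a
    # degenerate diffusion on the half-line (diffusion vanishes at Y = 0).
    import math, random

    random.seed(3)
    kappa, theta, sigma = 2.0, 0.04, 0.3
    y, dt, steps = 0.04, 1e-3, 10000

    for _ in range(steps):
        yp = max(y, 0.0)  # truncate at the boundary before evaluating coefficients
        dw = random.gauss(0.0, math.sqrt(dt))
        y += kappa * (theta - yp) * dt + sigma * math.sqrt(yp) * dw

    print("Y_T ~", max(y, 0.0))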
1211.4946
1
The dependency structure of credit risk parameters is a key driver for capital consumption and receives regulatory and scientific attention. The impact of parameter imperfections on the quality of expected loss in the sense of a fair, unbiased estimate of risk expenses, however, is barely covered. So far there are no established backtesting procedures for EL that quantify its impact with regard to pricing or risk-adjusted profitability measures, such as RARORAC. In this paper, a practically oriented, top-down approach to assess the quality of EL by backtesting with actually observed risk impact on capital is introduced. In a first step, the concept of risk expenses (Cost of Risk) has to be extended beyond the classical provisioning (P&L) view, towards a more adequate capital consumption approach (Impact of Risk, IoR). On this basis, the difference between parameter-based EL and actually reported Impact of Risk is decomposed into its key components (PL Backtest and NPL Backtest). The proposed method deepens the understanding of the practical properties of EL, aligns the EL with the actually observed risk impact on capital, and has the potential to improve the quality of EL-based business decisions. Besides assumptions on the stability of parameter estimates and default identification, there are no further requirements on the underlying credit risk parameters. The method is robust irrespective of whether parameters are simple, expert-based values or highly predictive and perfectly calibrated IRBA-compliant methods.
The dependency structure of credit risk parameters is a key driver for capital consumption and receives regulatory and scientific attention. The impact of parameter imperfections on the quality of expected loss in the sense of a fair, unbiased estimate of risk expenses, however, is barely covered. So far there are no established backtesting procedures for EL that quantify its impact with regard to pricing or risk-adjusted profitability measures, such as RAROC or EVA. In this paper, a practically oriented, top-down approach to assess the quality of EL by backtesting with actually observed risk impact on capital is introduced. In a first step, the concept of risk expenses (Cost of Risk) has to be extended beyond the classical provisioning (P&L) view, towards a more adequate capital consumption approach (Impact of Risk, IoR). On this basis, the difference between parameter-based EL and actually reported Impact of Risk is decomposed into its key components (PL Backtest and NPL Backtest). The proposed method deepens the understanding of the practical properties of EL, aligns the EL with the actually observed risk impact on capital, and has the potential to improve the quality of EL-based business decisions. Besides assumptions on the stability of parameter estimates and default identification, there are no further requirements on the underlying credit risk parameters. The method is robust irrespective of whether parameters are simple, expert-based values or highly predictive and perfectly calibrated IRBA-compliant methods.
[ { "type": "R", "before": "RARORAC", "after": "RAROC or EVA", "start_char_pos": 456, "end_char_pos": 463 } ]
[ 0, 140, 296, 626, 828, 992, 1210, 1374 ]
1211.4946
2
The dependency structure of credit risk parameters is a key driver for capital consumption and receives regulatory and scientific attention. The impact of parameter imperfections on the quality of expected loss in the sense of a fair, unbiased estimate of risk expenses, however, is barely covered. So far there are no established backtesting procedures for EL that quantify its impact with regard to pricing or risk-adjusted profitability measures, such as RAROC or EVA. In this paper, a practically oriented, top-down approach to assess the quality of EL by backtesting with actually observed risk impact on capital is introduced. In a first step, the concept of risk expenses (Cost of Risk) has to be extended beyond the classical provisioning (P&L) view, towards a more adequate capital consumption approach (Impact of Risk, IoR). On this basis, the difference between parameter-based EL and actually reported Impact of Risk is decomposed into its key components (PL Backtest and NPL Backtest). The proposed method deepens the understanding of the practical properties of EL, aligns the EL with the actually observed risk impact on capital, and has the potential to improve the quality of EL-based business decisions. Besides assumptions on the stability of parameter estimates and default identification, there are no further requirements on the underlying credit risk parameters. The method is robust irrespective of whether parameters are simple, expert-based values or highly predictive and perfectly calibrated IRBA-compliant methods.
The dependency structure of credit risk parameters is a key driver for capital consumption and receives regulatory and scientific attention. The impact of parameter imperfections on the quality of expected loss (EL) in the sense of a fair, unbiased estimate of risk expenses, however, is barely covered. So far there are no established backtesting procedures for EL that quantify its impact with regard to pricing or risk-adjusted profitability measures. In this paper, a practically oriented, top-down approach to assess the quality of EL by backtesting with a properly defined risk measure is introduced. In a first step, the concept of risk expenses (Cost of Risk) has to be extended beyond the classical provisioning view, towards a more adequate capital consumption approach (Impact of Risk, IoR). On this basis, the difference between parameter-based EL and actually reported Impact of Risk is decomposed into its key components. The proposed method deepens the understanding of the practical properties of EL, reconciles the EL with a clearly defined and observable risk measure, and provides a link between upcoming IFRS 9 accounting standards for loan loss provisioning and IRBA regulatory capital requirements. The method is robust irrespective of whether parameters are simple, expert-based values or highly predictive and perfectly calibrated IRBA-compliant methods, as long as parameters and default identification procedures are stable.
[ { "type": "A", "before": null, "after": "(EL)", "start_char_pos": 211, "end_char_pos": 211 }, { "type": "D", "before": ", such as RAROC or EVA", "after": null, "start_char_pos": 448, "end_char_pos": 470 }, { "type": "R", "before": "actually observed risk impact on capital", "after": "a properly defined risk measure", "start_char_pos": 578, "end_char_pos": 618 }, { "type": "D", "before": "(P", "after": null, "start_char_pos": 748, "end_char_pos": 750 }, { "type": "D", "before": "L)", "after": null, "start_char_pos": 751, "end_char_pos": 753 }, { "type": "D", "before": "(PL Backtest and NPL Backtest)", "after": null, "start_char_pos": 968, "end_char_pos": 998 }, { "type": "R", "before": "aligns", "after": "reconciles", "start_char_pos": 1082, "end_char_pos": 1088 }, { "type": "R", "before": "actually observed risk impact on capital and has the potential to improve the quality of EL-based business decisions. Besides assumptions on the stability of parameter estimates and default identification, there are no further requirementson the underlying credit risk parameters", "after": "a clearly defined and observable risk measure and provides a link between upcoming IFRS 9 accounting standards for loan loss provisioning with IRBA regulatory capital requirements", "start_char_pos": 1101, "end_char_pos": 1380 }, { "type": "A", "before": null, "after": ", as long as parameters and default identification procedures are stable", "start_char_pos": 1537, "end_char_pos": 1537 } ]
[ 0, 140, 297, 472, 633, 835, 1000, 1218, 1382 ]
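To make the decomposition above concrete, here is a hypothetical toy in Python (the portfolio, the 10% NPL share, and all parameter values are invented for illustration; this is not the paper's backtesting procedure). It compares parameter-based EL = PD x LGD x EAD on the performing book, the LGD x EAD loss content of the non-performing book, and a simulated realized impact of risk:

    # Hypothetical toy (not the paper's procedure): parameter-based EL
    # versus a simulated realized loss, split into performing (PL) and
    # non-performing (NPL) sub-portfolios.
    import random

    random.seed(0)

    # Invented portfolio: (PD, LGD, EAD, is_npl) per exposure.
    portfolio = [(random.uniform(0.005, 0.05),   # one-year PD
                  random.uniform(0.2, 0.6),      # loss given default
                  random.uniform(1e5, 1e6),      # exposure at default
                  random.random() < 0.10)        # assumed 10% NPL share
                 for _ in range(1000)]

    el_pl = sum(pd * lgd * ead for pd, lgd, ead, npl in portfolio if not npl)
    el_npl = sum(lgd * ead for pd, lgd, ead, npl in portfolio if npl)  # PD = 1 once defaulted

    # Crude stand-in for the observed impact of risk: draw defaults from the PDs.
    observed = sum(lgd * ead for pd, lgd, ead, npl in portfolio
                   if npl or random.random() < pd)

    print(f"EL, performing book:      {el_pl:14,.0f}")
    print(f"EL, non-performing book:  {el_npl:14,.0f}")
    print(f"simulated impact of risk: {observed:14,.0f}")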
1211.5235
1
This study presents an ANSeR model (asset network systemic risk model) to quantify the risk of financial contagion, which manifests itself in a financial crisis. The transmission of financial distress is governed by a heterogeneous bank credit network and an investment portfolio of banks. The bankruptcy reproductive ratio of a financial system is computed as a function of the diversity and risk exposure of an investment portfolio of banks, and the denseness and concentration of a heterogeneous bank credit network. An analytic solution of the bankruptcy reproductive ratio for a small financial system is derived and a numerical solution for a large financial system is obtained. For a large financial system, large diversity among banks in the investment portfolio makes financial contagion more damaging on average. But large diversity is essentially effective in eliminating the risk of financial contagion in the worst case of financial crisis scenarios. A bank-unique specialization portfolio is more suitable than a uniform diversification portfolio and a system-wide specialization portfolio in strengthening the robustness of a financial system.
This study presents an ANWSER model (asset network systemic risk model) to quantify the risk of financial contagion, which manifests itself in a financial crisis. The transmission of financial distress is governed by a heterogeneous bank credit network and an investment portfolio of banks. The bankruptcy reproductive ratio of a financial system is computed as a function of the diversity and risk exposure of an investment portfolio of banks, and the denseness and concentration of a heterogeneous bank credit network. An analytic solution of the bankruptcy reproductive ratio for a small financial system is derived and a numerical solution for a large financial system is obtained. For a large financial system, large diversity among banks in the investment portfolio makes financial contagion more damaging on average. But large diversity is essentially effective in eliminating the risk of financial contagion in the worst case of financial crisis scenarios. A bank-unique specialization portfolio is more suitable than a uniform diversification portfolio and a system-wide specialization portfolio in strengthening the robustness of a financial system.
[ { "type": "R", "before": "ANSeR", "after": "ANWSER", "start_char_pos": 23, "end_char_pos": 28 } ]
[ 0, 160, 288, 514, 679, 821, 962 ]
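The abstract gives the model's inputs but not its update rule, so the following Python sketch is a deliberately generic stand-in (a threshold-contagion cascade on a random interbank network; the network density, exposure, and capital values are assumptions, and the cascade size per seed failure is only a crude proxy for a bankruptcy reproductive ratio):

    # Generic stand-in, not the ANWSER model: mean cascade size per seed
    # failure on a random interbank credit network with loss-threshold
    # contagion, as a crude proxy for a bankruptcy reproductive ratio.
    import random

    random.seed(1)
    N, p = 50, 0.10                # banks, link probability
    exposure, capital = 1.0, 2.5   # loss per failed debtor, loss-absorbing buffer

    def cascade_size():
        # creditors[i]: banks that lent to bank i and lose `exposure` if i fails
        creditors = [[j for j in range(N) if j != i and random.random() < p]
                     for i in range(N)]
        losses = [0.0] * N
        failed, frontier = {0}, [0]        # seed failure: bank 0
        while frontier:
            bank = frontier.pop()
            for c in creditors[bank]:
                if c in failed:
                    continue
                losses[c] += exposure
                if losses[c] >= capital:   # capital wiped out -> failure
                    failed.add(c)
                    frontier.append(c)
        return len(failed) - 1             # secondary failures only

    trials = [cascade_size() for _ in range(200)]
    print("mean secondary failures per seed:", sum(trials) / len(trials))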
1211.5368
1
Microscopic organisms locomote in viscous fluids through spontaneous beating of filamentous structures anchored at one end, such as flagella and cilia. Prokaryotic flagella rotate rigidly like a corkscrew, while eukaryotic flagella are flexible and oscillate in a plane. We observe similar biomimetic beating behavior in silico by clamping an active filament at one end and solving for hydrodynamic interactions. Highly active filaments become unstable to transverse perturbations and exhibit autonomous oscillations. This transition into a limit cycle occurs via a supercritical Hopf bifurcation. The time period and amplitude of beating increase with increasing filament length, but collapse to a master curve on appropriate scaling. An analytical calculation of the spectrum of the filament model in the free-draining approximation fails to capture oscillatory behavior, emphasizing the crucial role played by hydrodynamic interactions.
Non-equilibrium processes which convert chemical energy into mechanical motion enable the motility of organisms. Bundles of inextensible filaments driven by energy transduction of molecular motors form essential components of micron-scale motility engines like cilia and flagella. The mimicry of cilia-like motion in recent experiments on synthetic active filaments supports the idea that generic physical mechanisms may be sufficient to generate such motion. Here we show, theoretically, that the competition between the destabilising effect of hydrodynamic interactions induced by force-free and torque-free chemomechanically active flows, and the stabilising effect of nonlinear elasticity, provides a generic route to spontaneous oscillations in active filaments. These oscillations, reminiscent of prokaryotic and eukaryotic flagellar motion, are obtained without having to invoke structural complexity or biochemical regulation. This minimality implies that biomimetic oscillations, previously observed only in complex bundles of active filaments, can be replicated in simple chains of generic chemomechanically active beads.
[ { "type": "D", "before": "Microscopic URLanismslocomote in viscous fluids through spontaneous beating of filamentous structures anchored at one end, such as flagella and cilia. Prokaryotic flagella rotate rigidly like a corkscrew, while eukaryotic flagella are flexible and oscillate in a plane. We observe similar biomimetic beating behavior", "after": null, "start_char_pos": 0, "end_char_pos": 316 }, { "type": "D", "before": "in silico", "after": null, "start_char_pos": 316, "end_char_pos": 325 }, { "type": "R", "before": "by clamping an active filament at one end and solving for hydrodynamics interactions. Highly active filaments become unstable to transverse perturbations and exhibit autonomous oscillations", "after": "Non-equilibrium processes which convert chemical energy into mechanical motion enable the motility URLanisms", "start_char_pos": 326, "end_char_pos": 515 }, { "type": "R", "before": "This transition into a limit cycle occurs via a supercritical Hopf bifurcation. The time period and amplitude of beating increase with increasing filament length, but collapse to a master curve on appropriate scaling. An analytical calculation of the spectrum of the filament model in the free-draining approximation fails to capture oscillatory behavior, emphasizing the crucial role played by hydrodynamic interactions", "after": "Bundles of inextensible filaments driven by energy transduction of molecular motors form essential components of micron-scale motility engines like cilia and flagella. The mimicry of cilia-like motion in recent experiments on synthetic active filaments supports the idea that generic physical mechanisms may be sufficient to generate such motion. Here we show, theoretically, that the competition between the destabilising effect of hydrodynamic interactions induced by force-free and torque-free chemomechanically active flows, and the stabilising effect of nonlinear elasticity, provides a generic route to spontaneous oscillations in active filaments. These oscillations, reminiscent of prokaryotic and eukaryotic flagellar motion, are obtained without having to invoke structural complexity or biochemical regulation. This minimality implies that biomimetic oscillations, previously observed only in complex bundles of active filaments, can be replicated in simple chains of generic chemomechanically active beads", "start_char_pos": 518, "end_char_pos": 938 } ]
[ 0, 150, 269, 411, 517, 597, 735 ]
1211.5816
1
We devise a simple and general method for solving non-linear stochastic Hamilton-Jacobi-Bellman partial differential equations. We apply our method to the portfolio model.
We develop a simple and general method for solving non-linear Hamilton-Jacobi-Bellman partial differential equations (HJB PDEs). We apply our method to the portfolio model.
[ { "type": "R", "before": "devise", "after": "develop", "start_char_pos": 3, "end_char_pos": 9 }, { "type": "D", "before": "stochastic", "after": null, "start_char_pos": 61, "end_char_pos": 71 }, { "type": "A", "before": null, "after": "HJB PDEs", "start_char_pos": 127, "end_char_pos": 127 } ]
[ 0, 129 ]
1211.5816
2
We develop a simple and general method for solving non-linear Hamilton-Jacobi-Bellman partial differential equations (HJB PDEs). We apply our method to the portfolio model.
We develop a simple and general method that transforms (non-linear) PDEs to ODEs. We apply our method to the stochastic portfolio model.
[ { "type": "R", "before": "for solving", "after": "that transforms (", "start_char_pos": 39, "end_char_pos": 50 }, { "type": "R", "before": "Hamilton-Jacobi-Bellman partial differential equations HJB PDEs", "after": ") PDEs to ODEs", "start_char_pos": 62, "end_char_pos": 125 }, { "type": "A", "before": null, "after": "stochastic", "start_char_pos": 155, "end_char_pos": 155 } ]
[ 0, 127 ]
1211.5816
3
We develop a simple and general method that transforms (non-linear) PDEs to ODEs. We apply our method to the stochastic portfolio model.
We overcome a major obstacle in mathematical optimization. In so doing, we provide a smooth solution to the HJB PDE without assuming the differentiability of the value function. We apply our method to financial models.
[ { "type": "R", "before": "develop a simple and general method that transforms (non-linear) PDEs to ODEs", "after": "overcome a major obstacle in mathematical optimization. In so doing, we provide a smooth solution to the HJB PDE without assuming the differentiability of the value function", "start_char_pos": 3, "end_char_pos": 80 }, { "type": "R", "before": "the stochastic portfolio model", "after": "financial models", "start_char_pos": 106, "end_char_pos": 136 } ]
[ 0, 82 ]
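The portfolio application running through these three revisions is classically illustrated by the Merton problem, where a separable ansatz does reduce the HJB PDE to an ODE. A hedged worked example (the textbook reduction for CRRA utility, not necessarily the paper's construction): the HJB equation

    V_t + \sup_{\pi} \Big\{ \big( r + \pi (\mu - r) \big) x V_x + \tfrac{1}{2} \pi^2 \sigma^2 x^2 V_{xx} \Big\} = 0,
    \qquad V(T, x) = \frac{x^{1-\gamma}}{1-\gamma},

under the ansatz V(t, x) = f(t)\, x^{1-\gamma} / (1-\gamma) collapses to the linear ODE

    f'(t) + (1-\gamma) \Big( r + \frac{(\mu - r)^2}{2 \gamma \sigma^2} \Big) f(t) = 0, \qquad f(T) = 1,

with optimal risky fraction \pi^* = (\mu - r) / (\gamma \sigma^2), independent of time and wealth.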
1211.6772
1
The reaction-diffusion master equation (RDME) is a lattice stochastic reaction-diffusion model that has been used to study spatially distributed cellular processes. The RDME has been shown to have the drawback of losing bimolecular reactions in the continuum limit that the lattice spacing approaches zero (in two or more dimensions). In this work we derive a new convergent RDME (CRDME) that eliminates this problem. The CRDME is obtained by finite volume discretization of a spatially-continuous stochastic reaction-diffusion model. We demonstrate the empirical numerical convergence of reaction time statistics associated with the CRDME. Although the reaction time statistics of the RDME diverge as the lattice spacing approaches zero, we show they approach those of the CRDME for sufficiently large lattice spacings or slow bimolecular reaction rates. As such, the RDME may be interpreted as an approximation to the CRDME in several asymptotic limits.
The reaction-diffusion master equation (RDME) is a lattice stochastic reaction-diffusion model that has been used to study spatially distributed cellular processes. The RDME has been shown to have the drawback of losing bimolecular reactions in the limit that the lattice spacing approaches zero (in two or more dimensions). In this work we derive a new convergent RDME (CRDME) that eliminates this problem. The CRDME is obtained by finite volume discretization of a spatially-continuous stochastic reaction-diffusion model. We demonstrate the numerical convergence of reaction time statistics associated with the CRDME. For sufficiently large lattice spacings or slow bimolecular reaction rates, we also show the reaction time statistics of the CRDME may be approximated by those from the RDME. The original RDME may therefore be interpreted as an approximation to the CRDME in several asymptotic limits.
[ { "type": "D", "before": "continuum", "after": null, "start_char_pos": 249, "end_char_pos": 258 }, { "type": "D", "before": "empirical", "after": null, "start_char_pos": 554, "end_char_pos": 563 }, { "type": "R", "before": "Although the reaction time statistics of the RDME diverge as the lattice spacing approaches zero, we show they approach those of the CRDME for", "after": "For", "start_char_pos": 641, "end_char_pos": 783 }, { "type": "R", "before": ". As such, the RDME may", "after": ", we also show the reaction time statistics of the CRDME may be approximated by those from the RDME. The original RDME may therefore", "start_char_pos": 855, "end_char_pos": 878 } ]
[ 0, 164, 334, 417, 534, 856 ]
1211.6772
2
The reaction-diffusion master equation (RDME) is a lattice stochastic reaction-diffusion model that has been used to study spatially distributed cellular processes. The RDME has been shown to have the drawback of losing bimolecular reactions in the limit that the lattice spacing approaches zero (in two or more dimensions). In this work we derive a new convergent RDME (CRDME) that eliminates this problem. The CRDME is obtained by finite volume discretization of a spatially-continuous stochastic reaction-diffusion model. We demonstrate the numerical convergence of reaction time statistics associated with the CRDME. For sufficiently large lattice spacings or slow bimolecular reaction rates, we also show the reaction time statistics of the CRDME may be approximated by those from the RDME. The original RDME may therefore be interpreted as an approximation to the CRDME in several asymptotic limits.
The reaction-diffusion master equation (RDME) is a lattice stochastic reaction-diffusion model that has been used to study spatially distributed cellular processes. The RDME is often interpreted as an approximation to spatially-continuous models in which molecules move by Brownian motion and react by one of several mechanisms when sufficiently close. In the limit that the lattice spacing approaches zero, in two or more dimensions, the RDME has been shown to lose bimolecular reactions. The RDME is therefore not a convergent approximation to any spatially-continuous model that incorporates bimolecular reactions. In this work we derive a new convergent RDME (CRDME) by finite volume discretization of a spatially-continuous stochastic reaction-diffusion model popularized by Doi. We demonstrate the numerical convergence of reaction time statistics associated with the CRDME. For sufficiently large lattice spacings or slow bimolecular reaction rates, we also show the reaction time statistics of the CRDME may be approximated by those from the RDME. The original RDME may therefore be interpreted as an approximation to the CRDME in several asymptotic limits.
[ { "type": "R", "before": "has been shown to have the drawback of losing bimolecular reactions in", "after": "is often interpreted as an approximation to spatially-continuous models in which molecules move by Brownian motion and react by one of several mechanisms when sufficiently close. In", "start_char_pos": 174, "end_char_pos": 244 }, { "type": "R", "before": "(", "after": ",", "start_char_pos": 296, "end_char_pos": 297 }, { "type": "R", "before": ").", "after": ", the RDME has been shown to lose bimolecular reactions. The RDME is therefore not a convergent approximation to any spatially-continuous model that incorporates bimolecular reactions.", "start_char_pos": 324, "end_char_pos": 326 }, { "type": "D", "before": "that eliminates this problem. The CRDME is obtained", "after": null, "start_char_pos": 380, "end_char_pos": 431 }, { "type": "A", "before": null, "after": "popularized by Doi", "start_char_pos": 526, "end_char_pos": 526 } ]
[ 0, 164, 326, 409, 528 ]
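For orientation, a minimal Gillespie (SSA) sketch of the classical RDME itself on a one-dimensional periodic lattice, with nearest-neighbour diffusion hops at rate D/h^2 per molecule and an in-voxel bimolecular annihilation A + B -> 0 at propensity (k/h) a b. Everything here (species, rates, lattice size) is an illustrative assumption; the CRDME's finite-volume construction is not reproduced:

    # Illustrative sketch of the classical RDME (not the CRDME): Gillespie
    # simulation on a 1D periodic lattice with diffusion hops and an
    # in-voxel bimolecular annihilation reaction A + B -> 0.
    import math, random

    random.seed(2)
    K, h = 32, 1.0 / 32        # voxels and voxel width
    D, k = 1.0, 0.1            # diffusion coefficient, macroscopic rate
    hop = D / h ** 2           # per-molecule hop rate to each neighbour
    kv = k / h                 # per-pair in-voxel propensity (1D scaling)

    A = [random.randint(0, 3) for _ in range(K)]
    B = [random.randint(0, 3) for _ in range(K)]
    t, T = 0.0, 5.0

    while t < T:
        # propensities: A hops, B hops, then in-voxel reactions
        props = ([2 * hop * a for a in A] +
                 [2 * hop * b for b in B] +
                 [kv * a * b for a, b in zip(A, B)])
        total = sum(props)
        if total == 0.0:
            break
        t += -math.log(1.0 - random.random()) / total
        r, idx = random.random() * total, 0
        while r >= props[idx]:
            r -= props[idx]
            idx += 1
        if idx < K:                                  # an A molecule hops
            j = (idx + random.choice((-1, 1))) % K
            A[idx] -= 1; A[j] += 1
        elif idx < 2 * K:                            # a B molecule hops
            i = idx - K
            j = (i + random.choice((-1, 1))) % K
            B[i] -= 1; B[j] += 1
        else:                                        # A + B -> 0
            i = idx - 2 * K
            A[i] -= 1; B[i] -= 1

    print("stopped at t =", round(t, 3), "with", sum(A), "A and", sum(B), "B left")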
1211.7045
1
A major challenge in single particle reconstruction from cryo electron-microscopy is to establish a reliable ab-initio 3D model using 2D images with unknown orientations. Common-lines based methods can determine the orientations of images without additional geometric information. However, such methods fail when the detection rate of common-lines is too low due to the high level of noise in the images. An approximation to the least squares global self consistency error was obtained using convex relaxation by semidefinite programming. The purpose of this paper is three-fold. First, we introduce another, yet more robust, global self consistency error in an optimization problem that can be solved via semidefinite relaxation. Second, we introduce a spectral norm constraint to robustify the relaxed problem. Third, we use the alternating direction method or the iteratively reweighted least squares (IRLS) procedure to solve the problem. Numerical experiments demonstrate that the proposed method significantly decreases the orientation estimation error when the detection rate of common-lines is low.
A major challenge in single particle reconstruction from cryo-electron microscopy is to establish a reliable ab-initio three-dimensional model using two-dimensional projection images with unknown orientations. Common-lines based methods estimate the orientations without additional geometric information. However, such methods fail when the detection rate of common-lines is too low due to the high level of noise in the images. An approximation to the least squares global self consistency error was obtained using convex relaxation by semidefinite programming. In this paper we introduce a more robust global self consistency error and show that the corresponding optimization problem can be solved via semidefinite relaxation. In order to prevent artificial clustering of the estimated viewing directions, we further introduce a spectral norm term that is added as a constraint or as a regularization term to the relaxed minimization problem. The resulting problems are solved by using either the alternating direction method of multipliers or an iteratively reweighted least squares procedure. Numerical experiments with both simulated and real images demonstrate that the proposed methods significantly reduce the orientation estimation error when the detection rate of common-lines is low.
[ { "type": "R", "before": "cryo electron-microscopy", "after": "cryo-electron microscopy", "start_char_pos": 57, "end_char_pos": 81 }, { "type": "R", "before": "3D model using 2D", "after": "three-dimensional model using two-dimensional projection", "start_char_pos": 119, "end_char_pos": 136 }, { "type": "R", "before": "can determine the orientations of images", "after": "estimate the orientations", "start_char_pos": 198, "end_char_pos": 238 }, { "type": "R", "before": "The purpose of this paper is three-fold. First, we introduce another, yet more robust ,", "after": "In this paper we introduce a more robust", "start_char_pos": 539, "end_char_pos": 626 }, { "type": "R", "before": "in an optimization problem that", "after": "and show that the corresponding optimization problem", "start_char_pos": 657, "end_char_pos": 688 }, { "type": "R", "before": "Second, we", "after": "In order to prevent artificial clustering of the estimated viewing directions, we further", "start_char_pos": 732, "end_char_pos": 742 }, { "type": "R", "before": "constraint to robustify the relaxed problem. Third, we use", "after": "term that is added as a constraint or as a regularization term to the relaxed minimization problem. The resulted problems are solved by using either the", "start_char_pos": 769, "end_char_pos": 827 }, { "type": "R", "before": "or the", "after": "of multipliers or an", "start_char_pos": 857, "end_char_pos": 863 }, { "type": "R", "before": "(IRLS) procedureto solve the problem", "after": "procedure", "start_char_pos": 901, "end_char_pos": 937 }, { "type": "A", "before": null, "after": "with both simulated and real images", "start_char_pos": 962, "end_char_pos": 962 }, { "type": "R", "before": "method significantly decreases", "after": "methods significantly reduce", "start_char_pos": 993, "end_char_pos": 1023 } ]
[ 0, 170, 280, 404, 538, 579, 731, 813, 939 ]
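Since this record invokes iteratively reweighted least squares, here is a hedged generic IRLS sketch for a robust l1-type least-squares fit in Python/NumPy (the function irls and all data are invented for illustration; the paper's actual formulation adds the spectral-norm and semidefinite structure, which are not reproduced here):

    # Generic IRLS for min ||Ax - b||_1 via reweighted least squares.
    import numpy as np

    def irls(A, b, iters=50, eps=1e-6):
        x = np.linalg.lstsq(A, b, rcond=None)[0]          # L2 initialisation
        for _ in range(iters):
            # weights sqrt(1/|r|) so the weighted L2 objective equals sum |r_i|
            sw = np.sqrt(1.0 / np.maximum(np.abs(A @ x - b), eps))
            x = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)[0]
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 5))
    x_true = rng.normal(size=5)
    b = A @ x_true
    b[:10] += 5 * rng.normal(size=10)                     # gross outliers
    print("recovery error:", np.linalg.norm(irls(A, b) - x_true))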
1212.0476
1
The aim of this paper is twofold. First, we extend the results of [32] concerning the existence and uniqueness of second-order reflected 2BSDEs to the case of upper obstacles. Then, under some regularity assumptions on one of the barriers, similar to the ones in [9], and when the two barriers are completely separated, we provide a complete wellposedness theory for doubly reflected second-order BSDEs. We also show that these objects are related to non-standard optimal stopping games, thus generalizing the connection between DRBSDEs and Dynkin games first proved by Cvitani\'c and Karatzas [10]. More precisely, we show that the second-order DRBSDEs provide solutions of what we call uncertain Dynkin games and that they also allow us to obtain super- and subhedging prices for American game options (also called Israeli options) in financial markets with volatility uncertainty.
The aim of this paper is twofold. First, we extend the results of [33] concerning the existence and uniqueness of second-order reflected 2BSDEs to the case of two obstacles. Under some regularity assumptions on one of the barriers, similar to the ones in [10], and when the two barriers are completely separated, we provide a complete wellposedness theory for doubly reflected second-order BSDEs. We also show that these objects are related to non-standard optimal stopping games, thus generalizing the connection between DRBSDEs and Dynkin games first proved by Cvitanic and Karatzas [11]. More precisely, we show under a technical assumption that the second-order DRBSDEs provide solutions of what we call uncertain Dynkin games and that they also allow us to obtain super- and subhedging prices for American game options (also called Israeli options) in financial markets with volatility uncertainty.
[ { "type": "R", "before": "32", "after": "33", "start_char_pos": 68, "end_char_pos": 70 }, { "type": "R", "before": "upper obstacles. Then, under", "after": "two obstacles. Under", "start_char_pos": 161, "end_char_pos": 189 }, { "type": "R", "before": "9", "after": "10", "start_char_pos": 267, "end_char_pos": 268 }, { "type": "R", "before": "Cvitani\\'c", "after": "Cvitanic", "start_char_pos": 574, "end_char_pos": 584 }, { "type": "R", "before": "10", "after": "11", "start_char_pos": 600, "end_char_pos": 602 }, { "type": "A", "before": null, "after": "under a technical assumption", "start_char_pos": 630, "end_char_pos": 630 } ]
[ 0, 33, 177, 407, 605 ]
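For context, the standard doubly reflected BSDE that these records generalize to second order is usually written as follows (recalled from the classical literature, e.g. the Cvitanic-Karatzas setting; the notation is generic, not quoted from the paper):

    Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\, ds - \int_t^T Z_s\, dB_s
          + (K_T^+ - K_t^+) - (K_T^- - K_t^-),
    L_t \le Y_t \le U_t, \qquad
    \int_0^T (Y_t - L_t)\, dK_t^+ = \int_0^T (U_t - Y_t)\, dK_t^- = 0,

where L and U are the lower and upper obstacles and the Skorokhod minimality conditions force the reflecting processes K^+ and K^- to act only when Y touches an obstacle.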