https://phys.libretexts.org/Bookshelves/Classical_Mechanics/Variational_Principles_in_Classical_Mechanics_(Cline)/04%3A_Nonlinear_Systems_and_Chaos/4.01%3A_Introduction_to_Nonlinear_Systems_and_Chaos
# 4.1: Introduction to Nonlinear Systems and Chaos

In nature only a subset of systems have equations of motion that are linear. Contrary to the impression given by the analytic solutions presented in undergraduate physics courses, most dynamical systems in nature exhibit non-linear behavior that leads to complicated motion. Non-linear equations of motion usually have no analytic solutions, superposition does not apply, and they predict phenomena such as attractors, discontinuous period bifurcation, extreme sensitivity to initial conditions, rolling motion, and chaos. During the past four decades, exciting discoveries have been made in classical mechanics associated with the recognition that nonlinear systems can exhibit chaos.
Chaotic phenomena have been observed in most fields of science and engineering, such as weather patterns, fluid flow, the motion of planets in the solar system, epidemics, changing populations of animals, birds, and insects, and the motion of electrons in atoms. The complicated dynamical behavior predicted by non-linear differential equations is not limited to classical mechanics; rather, it is a manifestation of the mathematical properties of the solutions of the differential equations involved, and thus applies generally to solutions of first- or second-order non-linear differential equations. It is important to understand that the systems discussed in this chapter follow a fully deterministic evolution predicted by the laws of classical mechanics, in which the evolution is determined entirely by the prior history. This behavior is completely different from a random walk, where each step is based on a random process. The complicated motion of deterministic non-linear systems stems in part from sensitivity to the initial conditions; the transitions between laminar and turbulent flow provide many examples. The French mathematician Poincaré is credited with being the first to recognize the existence of chaos during his investigation of the gravitational three-body problem in celestial mechanics. At the end of the nineteenth century Poincaré noticed that such systems exhibit the high sensitivity to initial conditions characteristic of chaotic motion, together with the nonlinearity that is required to produce chaos. Poincaré's work received little notice, in part because it was overshadowed by the parallel development of the Theory of Relativity and quantum mechanics at the start of the $$20^{th}$$ century. In addition, solving nonlinear equations of motion is difficult, which discouraged work on nonlinear mechanics and chaotic motion.
The field blossomed during the 1960s, when computers became sufficiently powerful to solve the nonlinear equations required to calculate the long-time histories necessary to document the evolution of chaotic behavior. Laplace, and many other scientists, believed in the deterministic view of nature, which assumes that if the positions and velocities of all particles are known, then one can unambiguously predict the future motion using Newtonian mechanics. Researchers in many fields of science now realize that this "clockwork universe" is invalid: knowing the laws of nature can be insufficient to predict the evolution of nonlinear systems, in that the time evolution can be extremely sensitive to the initial conditions even though the systems follow a completely deterministic development. There are two major classifications of nonlinear systems that lead to chaos in nature. The first encompasses nondissipative Hamiltonian systems, such as Poincaré's three-body celestial mechanics system. The other involves driven, damped, non-linear oscillatory systems. Nonlinearity and chaos form a broad and active field, and thus this chapter focuses on only a few examples that illustrate the general features of non-linear systems. Weak non-linearity is used to illustrate bifurcation and asymptotic attractor solutions for which the system evolves independently of the initial conditions. The common sinusoidally-driven, linearly-damped plane pendulum illustrates several features characteristic of the evolution of a non-linear system from order to chaos. The impact of non-linearity on wavepacket propagation velocities and the existence of soliton solutions is discussed. The three-body problem is discussed in chapter $$11$$, and the transition from laminar to turbulent flow is illustrated by the fluid mechanics discussed in chapter $$16.8$$.
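The sensitivity to initial conditions described above can be made concrete with a short numerical sketch of the sinusoidally-driven, linearly-damped plane pendulum. The parameter values below are illustrative choices in a commonly studied chaotic regime, not values taken from this chapter:

```python
import math

def pendulum_step(state, t, dt, gamma=0.5, w0=1.0, drive=1.5, wd=2.0/3.0):
    """One RK4 step for the driven, damped pendulum:
    theta'' + gamma*theta' + w0**2 * sin(theta) = drive * cos(wd * t)."""
    def deriv(s, tt):
        theta, omega = s
        return (omega,
                -gamma * omega - w0**2 * math.sin(theta) + drive * math.cos(wd * tt))
    k1 = deriv(state, t)
    k2 = deriv((state[0] + 0.5*dt*k1[0], state[1] + 0.5*dt*k1[1]), t + 0.5*dt)
    k3 = deriv((state[0] + 0.5*dt*k2[0], state[1] + 0.5*dt*k2[1]), t + 0.5*dt)
    k4 = deriv((state[0] + dt*k3[0], state[1] + dt*k3[1]), t + dt)
    return (state[0] + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
            state[1] + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0)

# Two trajectories whose initial angles differ by one part in a million.
a, b = (0.2, 0.0), (0.2 + 1e-6, 0.0)
dt, t = 0.01, 0.0
for _ in range(20000):          # integrate to t = 200
    a = pendulum_step(a, t, dt)
    b = pendulum_step(b, t, dt)
    t += dt
separation = abs(a[0] - b[0])   # grows by many orders of magnitude
```

Although both trajectories are fully deterministic, the tiny initial difference is amplified until the two motions become completely uncorrelated, which is precisely why long-term prediction fails for chaotic systems.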
Analytic solutions of nonlinear systems usually are not available, and thus one must resort to computer simulations. As a consequence, the present discussion focuses on the main features of the solutions for these systems and ignores how the equations of motion are solved.

This page titled 4.1: Introduction to Nonlinear Systems and Chaos is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Douglas Cline via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
http://www.gradesaver.com/a-long-way-gone/q-and-a/who-is-gasemu-in-a-long-way-gone-203638
Who is Gasemu in A Long Way Gone? Explain who he is, what kind of character he is, and how he knows Ishmael's family.
https://brilliant.org/problems/an-algebra-problem-by-matias-bruna/
# An algebra problem by Matías Bruna

Algebra Level pending

For each $$n \in \mathbb{N}$$, define $$f(n)=2n+1$$ and $$g(n)=n^{2}+n$$, and let $$S=\dfrac{1}{\sqrt{f(1)+2\sqrt{g(1)}}} + \dfrac{1}{\sqrt{f(2)+2\sqrt{g(2)}}} + \cdots + \dfrac{1}{\sqrt{f(2015)+2\sqrt{g(2015)}}}$$ Compute $$\left \lfloor{10S}\right \rfloor$$.
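One standard approach, sketched here with a numerical check: since $$f(n)+2\sqrt{g(n)} = n+(n+1)+2\sqrt{n(n+1)} = \left(\sqrt{n}+\sqrt{n+1}\right)^{2}$$, each term rationalizes to $$\sqrt{n+1}-\sqrt{n}$$ and the sum telescopes to $$S=\sqrt{2016}-1$$.

```python
import math

# Each term: 1/sqrt(f(n) + 2*sqrt(g(n))) with f(n) = 2n+1, g(n) = n^2 + n.
# Since f(n) + 2*sqrt(g(n)) = (sqrt(n) + sqrt(n+1))**2, each term equals
# sqrt(n+1) - sqrt(n) after rationalizing, so the sum telescopes.
S = sum(1.0 / math.sqrt((2*n + 1) + 2*math.sqrt(n*n + n)) for n in range(1, 2016))
closed_form = math.sqrt(2016) - 1.0
answer = math.floor(10 * S)
```

The brute-force sum and the closed form agree to floating-point precision, confirming the telescoping argument.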
https://www.physicsforums.com/threads/a-little-history-of-physics-how-is-physics-involved-in-this.326790/
# A little history of physics. How is physics involved in this?

1. Jul 24, 2009

### graphicer89

Well, I was reading over something interesting and something is puzzling me. It turns out that in 1783 the Montgolfier brothers of France launched what is possibly the first balloon flight carrying passengers, which were a duck, a rooster, and a sheep. Their balloon, which was about 35 feet in diameter and constructed of cloth lined with paper, was launched by filling it with smoke. The flight landed safely about 8 minutes later. What I am trying to figure out is the physics involved with this flight, for example its ascent, descent, and landing. How would you explain how physics was involved in this experiment?

2. Jul 24, 2009

### RoyalCat

The two main things I can think of as relevant are kinetic gas theory and Archimedes' Principle. According to Archimedes, if an object of density $$\rho _1$$ is put into a fluid of density $$\rho_2 > \rho_1$$, then the object will experience an upward lift force. According to kinetic gas theory, a hotter gas is a less dense gas (a gas is one kind of fluid). Do note, I only have a rudimentary understanding of both, so you should probably wait for someone to confirm what I've written.

Last edited: Jul 24, 2009

3. Jul 24, 2009

### minger

You are correct, RoyalCat. Buoyancy in layman's terms says that an object will see an upward force equal to the weight of the displaced fluid. Let's take water as an example. If we were to fill an infinitely thin hollow sphere with water and drop it in more water, what would happen? The principle says that the ball will displace water equal to its own volume, with an associated weight. This weight will exert a force on the ball.
In this case, since the densities are equal, the buoyant force equals the weight and the ball can remain in equilibrium. Now, in your case, let's assume that the hot gas, or smoke, or whatever was in the balloon had a density 1/2 that of air. If the volume of the balloon was 100 cubic feet, then the air would exert an upward force of ~8.07 lbf. The smoke itself has mass and weight, which results in a net upward force of ~4.04 lbf. As said previously, there is an ideal gas law which correlates various properties of a fluid. It says that: $$\frac{P}{\rho} = RT$$ If we keep the pressure and gas constant fixed, then we can rewrite this as: $$\rho T = \mbox{constant}$$ So, if the temperature goes up, then the density must go down. As we have just seen, decreasing the density decreases the mass of the fluid, and thus increases the net upward force.

4. Jul 24, 2009

### RoyalCat

Just to clarify, depending on the ratio between the densities of the fluid and the object, the upward force can be smaller than, greater than, or equal to the weight of the object ($$mg$$), meaning the object can have a net acceleration upwards, downwards, or no net acceleration at all. This is, of course, ignoring the effect of drag (which is very significant, mind you). But either way, you get movement.
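Minger's arithmetic can be reproduced in a few lines. The air density below is back-solved from the thread's ~8.07 lbf figure (about 0.0807 lbm/ft³, cooler-than-standard air) and is an assumption, not something stated in the thread:

```python
# Buoyancy check for the balloon example: a 100 ft^3 balloon filled with
# smoke at half the density of the surrounding air.
# rho_air is an assumed value chosen to match the thread's ~8.07 lbf figure;
# at 1 g, one pound-mass of displaced air weighs one pound-force.
rho_air = 0.0807           # lbm/ft^3 (assumed ambient air density)
volume = 100.0             # ft^3 (from the example)
rho_smoke = 0.5 * rho_air  # hot smoke at half the ambient density

buoyant_force = rho_air * volume    # lbf, weight of displaced air (~8.07)
smoke_weight = rho_smoke * volume   # lbf, weight of the gas inside (~4.04)
net_lift = buoyant_force - smoke_weight
```

Halving the gas density halves the weight of the gas inside while leaving the buoyant force unchanged, so the net lift is exactly half the buoyant force in this example.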
https://www.groundai.com/project/gravitational-wave-sources-from-pop-iii-stars-are-preferentially-located-within-the-cores-of-their-host-galaxies/
Gravitational Wave Sources from Pop III Stars are Preferentially Located within the Cores of their Host Galaxies

Fabio Pacucci, Abraham Loeb, Stefania Salvadori

Department of Physics, Yale University, New Haven, CT 06511, USA. Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, USA. Dipartimento di Fisica e Astronomia, Università di Firenze, Via G. Sansone 1, Sesto Fiorentino, Italy. INAF/Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, Firenze, Italy. GEPI, Observatoire de Paris, PSL Research University, CNRS, Place Jules Janssen, 92190 Meudon, France. [email protected]

Abstract

The detection of gravitational waves (GWs) generated by merging black holes has recently opened up a new observational window into the Universe. The masses of the black holes in the first and third LIGO detections, ( and ), suggest low-metallicity stars as their most likely progenitors. Based on high-resolution N-body simulations, coupled with state-of-the-art metal enrichment models, we find that the remnants of Pop III stars are preferentially located within the cores of galaxies. The probability of a GW signal being generated by Pop III stars reaches at from the galaxy center, compared to a benchmark value of outside the core. The predicted merger rate inside bulges is ( is the Pop III binarity fraction). To match the credible range of LIGO merger rates, we obtain: . Future advances in GW observatories and the discovery of possible electromagnetic counterparts could allow the localization of such sources within their host galaxies. The preferential concentration of GW events within the bulge of galaxies would then provide an indirect proof of the existence of Pop III stars.
keywords: gravitational waves - stars: Population III - black hole physics - cosmology: dark ages, reionization, first stars - cosmology: early Universe - galaxies: bulges (pubyear: 2017)

1 Introduction

The first detection of gravitational waves (GWs) by the Laser Interferometer Gravitational-Wave Observatory (LIGO) has marked the birth of GW astronomy. The event GW150914 (Abbott et al., 2016a) originated from the merger of a binary black hole (BBH) system with masses and at a redshift , corresponding to a luminosity distance of . This discovery was followed by a second (Abbott et al., 2016b) and a third detection (The LIGO Scientific Collaboration et al., 2017), the latter with inferred source-frame masses of and at . Current predictions (Abbott et al., 2016d) indicate that GW events will be detected regularly with the additional GW detectors (e.g. VIRGO and KAGRA) at a rate of several per month up to . Overall, the inferred merger rate is . The opening of a new observational window would enable a revolution in our understanding of the Universe. From an astrophysical point of view (Abbott et al., 2016c), the detection provides direct evidence for the existence of BBHs with comparable-mass components. This type of BBH has been predicted in two main formation channels, involving isolated stellar binaries in galactic fields or dynamical interactions in dense stellar environments (Belczynski et al., 2016; Rodriguez et al., 2016). Moreover, a high-mass () stellar progenitor favors a low-metallicity formation environment (, see e.g. Eldridge & Stanway 2016).
Extremely low metallicity () leads to the formation of high-mass stars because: (i) the process of gas fragmentation is less efficient and results in proto-stellar clouds that are times more massive than in the presence of dust and metals, and (ii) the accretion of gas onto the proto-stellar cores is more efficient (Omukai & Palla, 2002; Bromm et al., 2002; Abel et al., 2002). The low redshift of the event GW150914 and the low metallicity of the stellar progenitors of the BBH suggest two main formation scenarios for this source. The BBH could have formed in the local Universe, possibly in a low-mass galaxy with a low metal content (Belczynski et al., 2016). Another possible formation channel is in globular clusters (Rodriguez et al., 2016; Zevin et al., 2017). This formation channel implies that the BBH underwent a prompt merger, on a time scale much shorter than the Hubble time. Alternatively, the progenitors of the BBH could have formed in the early Universe, possibly from Pop III stars (see e.g. Belczynski et al. 2004). The first population of stars has not been observed so far (see Sobral et al. 2015; Pacucci et al. 2017; Natarajan et al. 2017) and large uncertainties remain about their physical properties. Current theories (Barkana & Loeb, 2001; Bromm, 2013; Loeb & Furlanetto, 2013) suggest that Pop III stars are characterized by: (i) very low metallicities (), and (ii) large masses (). The formation of BBH progenitors at high redshifts from Pop III stars would imply a time delay between formation and merger of . Pop III stars are therefore natural candidates for the progenitors of massive BBHs. For instance, Kinugawa et al. (2014) pointed out that the detection of GW signals from BBHs with masses would strongly indicate that these sources preferentially originated from Pop III stars. Hartwig et al.
(2016) estimated the contribution of Pop III stars to the intrinsic merger rate density: owing to their higher masses, the remnants of Pop III stars produce strong GW signals, even if their contribution in number is small and most of them would occur at high . In this Letter we propose an independent way to assess whether the components of BBHs, detected through GWs, originated from Pop III stars. Employing a data-constrained chemical evolution model coupled with high-resolution N-body simulations, we study the location of Pop III star remnants in a galaxy like the Milky Way (MW) and its high- progenitors. Due to the hierarchical build-up of structures, we expect these old and massive black hole relics to be preferentially found inside the bulge of galaxies. Previously, Gao et al. (2010) employed the Aquarius simulation to show that half of Pop III remnants should be localized within of galactic centers. Here we aim to improve on this result, tracking the location of Pop III remnants interior to the galactic bulge (). The localization of GW sources by future observations would therefore allow a test of their Pop III origin. Such spatial localization of GW events is currently out of reach. For instance, Nissanke et al. (2013) stated that with an array of new detectors it will be possible to reach an accuracy in the spatial localization of up to square degrees. This would allow the localization of a GW event within the dimension of a typical galactic bulge () only for a few galaxies in the local Universe. This situation would change if GW events had electromagnetic (EM) counterparts. The Fermi satellite reported the detection of a transient signal at photon energies that lasted and appeared after GW150914 (Connaughton et al., 2016), encompassing of the probability sky map associated with the LIGO event. Similarly, the satellite AGILE might have detected a high-energy () EM counterpart associated with GW170104 (Verrecchia et al., 2017).
While the merging of the components of an isolated BBH generates no EM counterpart, other physical situations exist in which the GW is associated with an EM signal. For instance, the merging BBHs may be orbiting inside the disk of a super-massive black hole (Kocsis et al., 2008), inducing a tidal disruption event (see also Perna et al. 2016; Murase et al. 2016). Moreover, the collapse of the massive star forming an inner binary in a dumbbell configuration could appear as a supernova explosion or a GRB (Loeb, 2016; Fedrow et al., 2017). The identification of EM counterparts of GW signals would allow a precise localization of the source.

2 Methods

We start by describing the properties of the simulations employed along with the general assumptions for Pop III stars.

2.1 N-body simulation and chemical evolution

We employed a data-constrained chemical evolution model designed to study the first stellar generations, combined with N-body simulations of MW analogues, to localize Pop III remnants within the scale of the MW bulge (Ness et al., 2013). While the most important features of the N-body simulation are reported here, more details can be found in Scannapieco et al. (2006) and Salvadori et al. (2010a). These papers simulated a MW analog with the GCD+ code (Kawata & Gibson, 2003), using a multi-resolution technique to achieve high resolution. The initial conditions are set up at using GRAFIC2 (Bertschinger, 2001). The highest-resolution region in the full simulation is a sphere with a radius four times larger than the virial radius of the MW analog at . The dark matter particle mass and spatial resolution are, respectively, and . The total mass of stars contained in the MW disk and used to calibrate the simulation is . The virial mass and radius of the MW analog, containing about particles, are consistent with observational estimates (Battaglia et al., 2005).
While the resolution of this N-body simulation is not the highest currently available, the chemical evolution model is unrivaled in studying the metal enrichment of Pop III and Pop I/II stellar populations down to the present day. The star-formation and metal-enrichment history of the MW is studied using a model that self-consistently follows the formation of Pop III and Pop I/II stars and that is calibrated to reproduce the observed properties of the MW (Salvadori et al., 2010a). Combined with the N-body simulation, the model naturally reproduces the age-metallicity relation, along with the properties of metal-poor stars, including their metallicity distribution functions and spatial distributions. Furthermore, it matches the properties of higher- galaxies (Salvadori et al., 2010b). So far, no other simulations can simultaneously account for these observations. Hence, even if we are unable to resolve the main formation sites of Pop III stars (i.e. the mini-halos), we are able to localize with great accuracy the descendants of Pop III stars, i.e. the extremely low metallicity Pop II stars. For this reason, the method we employ here is perfectly suitable for pinpointing the location of Pop III stars in the MW and its progenitors.

2.2 General assumptions for Pop III remnants

Regarding the properties of Pop III stars, we make educated guesses, or parametrize the unknown properties. For instance, the initial mass function (IMF) for Pop III stars is predicted to be different from the IMF of local stars. In the simulations we employ different Pop III IMFs (see Sec. 4 for details) and check that the radial distribution of Pop III stars is nearly independent of these choices. Moreover, not all binaries can produce LIGO sources (Christian & Loeb, 2017), since they need to have suitable initial masses and separations (see Sec. 4). Also, the binarity fraction of Pop III stars could be different from that of local systems.
A qualitative constraint on its value can be derived by matching our predictions with the LIGO merger rate (see Sec. 5).

3 Location of Pop III Remnants within their Host Galaxy

Next we analyze the spatial distribution of Pop III stellar remnants for the MW analog and for some of its progenitors at different halo masses and redshifts (see Table 1).

3.1 Pop III remnants in a Milky Way like galaxy

Figure 1 compares the spatial distributions of Pop III and of Pop I/II stars in the simulation of the MW analog. The kernel density estimation (left panel) suggests that the distribution of Pop III remnants is highly concentrated near the center. Each contour corresponds to of the components of each population. We find that of Pop III stars are located within from the galactic center. This is confirmed by the probability distribution function (PDF, right panel) for the distance from the center on the galactic plane. The radial distribution of the two stellar components is significantly different, as the full widths at half maximum (FWHM) suggest: for Pop III stars and for Pop I/II stars. Figure 2 shows the stellar densities for both populations: the radial distribution of Pop III remnants is steeper, , than the radial distribution of normal stars, . Thus, at increasing radial distances Pop III remnants are rarer:

$$\frac{\rho_{\star}(\mathrm{Pop\,III})}{\rho_{\star}(\mathrm{Pop\,I/II})} \sim r^{-1}.$$ (1)

Similar radial profiles are found for smaller galactic systems, such as P1 and P2 in Table 1. This is illustrated in Fig. 3.

4 Modeling the probability of observing a GW signal from Pop III stars

Employing our N-body simulations, we compute the probability that GW signals generated by BBHs originated from Pop III stars. We name this probability and we calculate it as a function of the galactocentric distance . Formally, the probability is the product of three terms. The first term, , is the probability of having a Pop III remnant at out of the entire sample of stars: , see Fig. 3 for examples of .
The second term, , is the probability of having binary systems. This probability takes into account both the scenario in which a BBH is formed from binary stars and the scenario in which the black holes form separately and then become gravitationally bound. The binarity probability depends on several factors and has been extensively studied for Pop I/II stars. By contrast, there are large uncertainties in extrapolating to the binarity probability of Pop III stars. Here, we parametrize it with a constant value, independent of mass: . Assuming that GW events are generated by Pop III stars in galactic bulges, we derive in Sec. 5 an estimate of the parameter by matching our predicted GW rate with the LIGO statistics, requiring that . The third term, , is the probability that the progenitor stars end up with remnants of the correct masses to produce a given GW event, such as GW150914 or GW170104. This probability depends on a large number of parameters and is generally impossible to calculate from first principles. The mass-ratio distribution, the orbital separation, the mass exchange between companions and the natal kicks after supernova events can only be incorporated by complicated population synthesis models (Eldridge & Stanway, 2016). The use of these models goes beyond the scope of this paper. An improvement in our knowledge of mass ratios could occur in the near future from large surveys, such as GAIA. Mashian & Loeb (2017) predict that up to astrometric binaries hosting black holes and stellar companions brighter than GAIA's detection threshold should be discovered with sensitivity. Whereas calculating the mass distribution of normal Pop I/II stars is feasible, the IMF for Pop III stars is poorly constrained. Hence, we make the simplifying assumption that depends only on the IMFs of the two populations of stars. In other words, we assume that it depends only on their respective probability of forming stars above a given mass threshold, e.g. .
In fact, the mass range for LIGO-detected BBHs so far is , and the initial stellar mass needs to be significantly higher than these values. The characteristic mass of Pop III stars is predicted to be larger than the mass of local stars, and so they are more prone to having the minimum mass necessary to form BBH sources of LIGO signals. With this assumption we are neglecting all the complications that can only be accounted for by population synthesis models. We use the following IMF for the two stellar populations, with a Salpeter exponent and a low-mass cutoff that depends on the stellar population:

$$\Phi(m)\propto m^{-2.35}\exp\left(-\frac{M_c}{m}\right),$$ (2)

where $$m$$ is in solar units. For Pop III stars we assume , while for Pop I/II stars . With we find that Pop III stars are times more common in the mass range than Pop I/II stars. Instead, with , the factor is . The final expression for is:

$$P_{\mathrm{III,GW}}(r)=P_{\mathrm{III}}(r)\times P_{\mathrm{bin,III}}\times P(M_1,M_2).$$ (3)

Given our assumptions, the only term that directly depends on position is the first one.

5 Constraints on binarity and radial probabilities

We are now in a position to constrain the binarity probability of Pop III stars given the inferred LIGO merger rate, and then compute the overall probability that a GW signal event localized in the bulge of a MW analog originated from Pop III remnants. The LIGO detection statistics (Abbott et al., 2016d) imply a credible range of merger rates between . In general, is computed as follows:

$$R=\rho_G\times N_{\mathrm{BBH}}\times P,$$ (4)

where is the number density of galaxies in the local Universe, is the average number of BBHs in a MW-mass galaxy and is the average merger rate. We assume (Conselice et al., 2016) and (from the distribution of merging times in Rodriguez et al. 2016 for BBHs in globular clusters). A comment on the latter value is warranted. The PDF of semi-major axes in binary orbits for Pop III stars is largely unconstrained.
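The effect of the low-mass cutoff in the IMF of Eq. (2) can be sketched numerically: a larger cutoff strongly boosts the fraction of stars above a LIGO-relevant mass threshold. The cutoff and threshold values below are illustrative assumptions (the paper's specific numbers did not survive extraction), not the paper's adopted values:

```python
import math

def imf(m, m_c):
    """Salpeter-like IMF with an exponential low-mass cutoff, as in Eq. (2)."""
    return m**-2.35 * math.exp(-m_c / m)

def frac_above(m_min, m_c, m_lo=0.1, m_hi=500.0, n=20000):
    """Fraction of stars formed above m_min (masses in solar units),
    via simple trapezoidal integration over [m_lo, m_hi]."""
    def integral(a, b):
        h = (b - a) / n
        s = 0.5 * (imf(a, m_c) + imf(b, m_c))
        for i in range(1, n):
            s += imf(a + i * h, m_c)
        return s * h
    return integral(m_min, m_hi) / integral(m_lo, m_hi)

# Illustrative (assumed) cutoffs: a top-heavy Pop III IMF with
# M_c = 10 Msun vs. a Pop I/II IMF with M_c = 0.35 Msun, and an
# assumed 30 Msun minimum progenitor mass for LIGO-like remnants.
ratio = frac_above(30.0, 10.0) / frac_above(30.0, 0.35)
```

Under these assumed parameters the top-heavy IMF produces tens of times more stars above the threshold, which is the qualitative origin of the enhancement factors quoted in the text.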
Here we make the educated guess that the PDF should not vary significantly between Pop III and Pop I/II stars (Sana et al., 2012). Thus, it is also reasonable to employ the distribution of merging times for Pop I/II stars. Moreover, the number of binaries within the bulge of our MW analog is (with the binarity probability for Pop III stars). We therefore obtain: . From the LIGO predicted rate we obtain upper and lower limits for the Pop III binarity probability of and . Stacy & Bromm (2013) find an overall binarity fraction of , which is in the middle of our range. The estimated binarity fraction of Pop III stars is large, and somewhat compatible with observations of massive stars in the Milky Way, which indicate that about of O-type stars have spectroscopic binary companions. This is also consistent with the fact that primordial stars are predicted to form preferentially in multiple systems (see e.g. Greif et al. 2012; Stacy et al. 2016). The probability that a GW signal is generated by remnants of binary Pop III stars is shown in Fig. 3 for a MW analog. In the calculation we assumed and different values of the characteristic mass in the Pop III IMF: and . The probability that a GW signal is generated by Pop III stars is enhanced in all cases within the core of the MW analog. In particular, for the probability reaches at from the center, compared to a benchmark value of outside the galactic core. Similarly, in the case, the peak probability inside the core is , with a benchmark value of . Also note that for the probability reaches inside the galactic core.

6 Discussion and Conclusions

The masses of the two merging black holes in the first LIGO detection suggest that their most likely progenitors are massive low-metallicity stars. The BBH could have formed in the local Universe, possibly in a galaxy with a low content of metals, and then merged.
Alternatively, the progenitors of the BBH formed in the early Universe, possibly as Pop III stars, and then merged on a Hubble time scale. By using high-resolution N-body simulations of MW-like and smaller galaxies, coupled with a state-of-the-art metal enrichment model, we suggest that the remnants of Pop III stars should be preferentially found within the bulges of galaxies, i.e. from the center of a MW-like galaxy. We predict a merger rate for GW events generated by Pop III stars inside bulges in the local Universe () of , where is the binarity probability of Pop III stars. By matching with the LIGO merger rate, we derive lower and upper limits of . By using and a Salpeter-like IMF with a variable low-mass cutoff for Pop III stars, we predict that the probability for observing GW signals generated by Pop III stars is strongly enhanced in the core of their host galaxies. In particular, in the case , the probability reaches at from the galactic center, compared to a benchmark value of outside it. A GW signal from within the bulge of galaxies could also originate from BBHs formed in globular clusters and slowly drifting towards the core (Rodriguez et al., 2016). Globular clusters can produce a significant population of massive BBHs that merge in the local Universe, with a merger rate of , with of sources having total masses in the range . They drift towards the center of their galaxy, but the dynamical friction time to reach the innermost is very long. Since only a fraction of BBHs generated in globular clusters would be found inside the innermost core of the galaxy, the merger rate inside the bulge would be , unable to match the LIGO merger rate. We conclude that the Pop III channel is still the preferred one for generating GW events in galactic bulges. While most Pop III stars are within the core of the galaxy, their total number is small with respect to the Pop I/II stars, whose population had a much larger fraction of the Hubble time in which to form.
Nonetheless, the IMF of Pop III stars is skewed towards more massive stars. It is then up to times more likely to have a Pop III binary star with the right masses to produce a GW signal like the first or the third LIGO detections than in the case of Pop I/II stars. Despite the large uncertainties regarding the physical properties (or even the existence) of Pop III stars, there is one robust conclusion that can be drawn. If the first population of massive () and metal-free () stars exists, then we predict that GW signals generated by their BBHs are preferentially () located within the inner core of galaxies. If the GW signals detected so far by LIGO are indeed generated by Pop III remnants, we predict in addition that their binarity probability is within the range . The localization of a GW in the core of a galaxy would not by itself pinpoint its origin as Pop III remnants. Nonetheless, the future build-up of solid statistics of GW events and the possible localization of a large fraction of them within the core of galaxies, coupled with considerations about their masses, could clearly provide an indirect probe of Pop III stars. FP acknowledges the Chandra grant nr. AR6-17017B and NASA-ADAP grant nr. MA160009 and is grateful to Charles Bailyn, Frank van den Bosch and Daisuke Kawata. SS was supported by the European Commission through a Marie Curie Fellowship (project PRIMORDIAL, nr. 700907). This work was supported in part by the Black Hole Initiative at Harvard University, which is funded by a grant from the John Templeton Foundation.

References

• Abbott et al. (2016a) Abbott B. P., et al., 2016a, Physical Review Letters, 116, 061102 • Abbott et al. (2016b) Abbott B. P., et al., 2016b, Physical Review Letters, 116, 241103 • Abbott et al. (2016c) Abbott B. P., et al., 2016c, ApJ, 818, L22 • Abbott et al. (2016d) Abbott B. P., et al., 2016d, ApJ, 833, L1 • Abel et al. (2002) Abel T., Bryan G. L., Norman M.
L., 2002, Science, 295, 93 • Barkana & Loeb (2001) Barkana R., Loeb A., 2001, Phys. Rep., 349, 125 • Battaglia et al. (2005) Battaglia G., et al., 2005, MNRAS, 364, 433 • Belczynski et al. (2004) Belczynski K., Bulik T., Rudak B., 2004, ApJ, 608, L45 • Belczynski et al. (2016) Belczynski K., Holz D. E., Bulik T., O’Shaughnessy R., 2016, Nature, 534, 512 • Bertschinger (2001) Bertschinger E., 2001, ApJS, 137, 1 • Bromm (2013) Bromm V., 2013, Reports on Progress in Physics, 76, 112901 • Bromm et al. (2002) Bromm V., Coppi P. S., Larson R. B., 2002, ApJ, 564, 23 • Christian & Loeb (2017) Christian P., Loeb A., 2017, preprint, (arXiv:1701.01736) • Connaughton et al. (2016) Connaughton V., et al., 2016, ApJ, 826, L6 • Conselice et al. (2016) Conselice C. J., Wilkinson A., Duncan K., Mortlock A., 2016, ApJ, 830, 83 • Eldridge & Stanway (2016) Eldridge J. J., Stanway E. R., 2016, MNRAS, 462, 3302 • Fedrow et al. (2017) Fedrow J. M., Ott C. D., Sperhake U., Blackman J., Haas R., Reisswig C., De Felice A., 2017, preprint, (arXiv:1704.07383) • Gao et al. (2010) Gao L., Theuns T., Frenk C. S., Jenkins A., Helly J. C., Navarro J., Springel V., White S. D. M., 2010, MNRAS, 403, 1283 • Greif et al. (2012) Greif T. H., Bromm V., Clark P. C., Glover S. C. O., Smith R. J., Klessen R. S., Yoshida N., Springel V., 2012, MNRAS, 424, 399 • Hartwig et al. (2016) Hartwig T., Volonteri M., Bromm V., Klessen R. S., Barausse E., Magg M., Stacy A., 2016, MNRAS, 460, L74 • Kawata & Gibson (2003) Kawata D., Gibson B. K., 2003, MNRAS, 340, 908 • Kinugawa et al. (2014) Kinugawa T., Inayoshi K., Hotokezaka K., Nakauchi D., Nakamura T., 2014, MNRAS, 442, 2963 • Kocsis et al. (2008) Kocsis B., Haiman Z., Menou K., 2008, ApJ, 684, 870 • Loeb (2016) Loeb A., 2016, ApJ, 819, L21 • Loeb & Furlanetto (2013) Loeb A., Furlanetto S. R., 2013, The First Galaxies in the Universe. Princeton University Press • Mashian & Loeb (2017) Mashian N., Loeb A., 2017, preprint, (arXiv:1704.03455) • Murase et al. 
(2016) Murase K., Kashiyama K., Mészáros P., Shoemaker I., Senno N., 2016, ApJ, 822, L9 • Natarajan et al. (2017) Natarajan P., Pacucci F., Ferrara A., Agarwal B., Ricarte A., Zackrisson E., Cappelluti N., 2017, ApJ, 838, 117 • Ness et al. (2013) Ness M., et al., 2013, MNRAS, 432, 2092 • Nissanke et al. (2013) Nissanke S., Kasliwal M., Georgieva A., 2013, ApJ, 767, 124 • Omukai & Palla (2002) Omukai K., Palla F., 2002, Ap&SS, 281, 71 • Pacucci et al. (2017) Pacucci F., Pallottini A., Ferrara A., Gallerani S., 2017, MNRAS, 468, L77 • Perna et al. (2016) Perna R., Lazzati D., Giacomazzo B., 2016, ApJ, 821, L18 • Rodriguez et al. (2016) Rodriguez C. L., Chatterjee S., Rasio F. A., 2016, Phys. Rev. D, 93, 084029 • Salvadori et al. (2010a) Salvadori S., Ferrara A., Schneider R., Scannapieco E., Kawata D., 2010a, MNRAS, 401, L5 • Salvadori et al. (2010b) Salvadori S., Dayal P., Ferrara A., 2010b, MNRAS, 407, L1 • Sana et al. (2012) Sana H., et al., 2012, Science, 337, 444 • Scannapieco et al. (2006) Scannapieco E., Kawata D., Brook C. B., Schneider R., Ferrara A., Gibson B. K., 2006, ApJ, 653, 285 • Sobral et al. (2015) Sobral D., Matthee J., Darvish B., Schaerer D., Mobasher B., Röttgering H. J. A., Santos S., Hemmati S., 2015, ApJ, 808, 139 • Stacy & Bromm (2013) Stacy A., Bromm V., 2013, MNRAS, 433, 1094 • Stacy et al. (2016) Stacy A., Bromm V., Lee A. T., 2016, MNRAS, 462, 1307 • The LIGO Scientific Collaboration et al. (2017) The LIGO Scientific Collaboration et al., 2017, preprint, (arXiv:1706.01812) • Verrecchia et al. (2017) Verrecchia F., et al., 2017, preprint, (arXiv:1706.00029) • Zevin et al. (2017) Zevin M., Pankow C., Rodriguez C. L., Sampson L., Chase E., Kalogera V., Rasio F. A., 2017, preprint, (arXiv:1704.07379)
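The abundance argument around equation (2) above — a Salpeter slope with a population-dependent low-mass cutoff — can be sketched numerically. The cutoff values and the progenitor mass window below are placeholders of my own choosing (the paper's actual numbers did not survive extraction), so only the qualitative conclusion matters: a top-heavy Pop III IMF places a far larger fraction of its stars in the BBH-progenitor mass range.

```python
from math import exp

def imf(m, mc):
    # Eq. (2): Salpeter slope with an exponential low-mass cutoff M_c
    return m ** -2.35 * exp(-mc / m)

def integral(mc, a, b, n=20000):
    # midpoint rule is plenty for this smooth integrand
    dm = (b - a) / n
    return sum(imf(a + (i + 0.5) * dm, mc) for i in range(n)) * dm

def fraction_in_range(mc, m_lo, m_hi, m_min=0.1, m_max=100.0):
    # fraction (by number) of all stars that fall in [m_lo, m_hi]
    return integral(mc, m_lo, m_hi) / integral(mc, m_min, m_max)

# PLACEHOLDER cutoffs and mass window, in solar masses (illustrative only):
mc_pop3, mc_pop12 = 10.0, 0.35
m_lo, m_hi = 25.0, 40.0

ratio = fraction_in_range(mc_pop3, m_lo, m_hi) / fraction_in_range(mc_pop12, m_lo, m_hi)
print(f"Pop III stars are ~{ratio:.0f}x more common in the window than Pop I/II")
```

With any sensible choice of cutoffs the ratio is well above unity, which is the whole content of the "times more common" claim in Section 4.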
http://export.arxiv.org/abs/1803.00575
astro-ph.EP

# Title: Planetesimal formation during protoplanetary disk buildup

Abstract: Models of dust coagulation and subsequent planetesimal formation are usually computed on the backdrop of an already fully formed protoplanetary disk model. At the same time, observational studies suggest that planetesimal formation should start early, possibly even before the protoplanetary disk is fully formed. In this paper, we investigate under which conditions planetesimals already form during the disk buildup stage, in which gas and dust fall onto the disk from its parent molecular cloud. We couple our earlier planetesimal formation model at the water snow line to a simple model of disk formation and evolution. We find that under most conditions planetesimals only form after the buildup stage when the disk becomes less massive and less hot. However, there are parameters for which planetesimals already form during the disk buildup. This occurs when the viscosity driving the disk evolution is intermediate ($\alpha_v \sim 10^{-3}-10^{-2}$) while the turbulent mixing of the dust is reduced compared to that ($\alpha_t \lesssim 10^{-4}$), and with the assumption that water vapor is vertically well-mixed with the gas. Such $\alpha_t \ll \alpha_v$ scenario could be expected for layered accretion, where the gas flow is mostly driven by the active surface layers, while the midplane layers, where most of the dust resides, are quiescent. Comments: 6 pages, 5 figures, accepted for publication in A&A, minor changes due to language edition Subjects: Earth and Planetary Astrophysics (astro-ph.EP) DOI: 10.1051/0004-6361/201732221 Cite as: arXiv:1803.00575 [astro-ph.EP] (or arXiv:1803.00575v2 [astro-ph.EP] for this version)

## Submission history

From: Joanna Drazkowska [v1] Thu, 1 Mar 2018 19:00:03 GMT (276kb,D) [v2] Thu, 8 Mar 2018 11:16:39 GMT (276kb,D)
http://tex.stackexchange.com/questions/21452/how-to-typeset-this-symbol
# How to typeset this symbol?

It's not \mathscr{I}, which is more italic (slanted) - This looks like a problem for detexify! –  Seamus Jun 23 '11 at 12:09 Have a look at “How to look up a math symbol?” for ideas how you can easily find a particular symbol. –  Martin Scharrer Jun 23 '11 at 12:10 @Seamus: It would be really great if detexify would accept an image URL as input. It's sometimes difficult to draw them correctly. –  Martin Scharrer Jun 23 '11 at 12:12 @Martin yeah I've been trying to draw that for like 3 minutes now! My rollerball mouse is not designed for drawing... –  Seamus Jun 23 '11 at 12:14 I've found something fairly similar in the STIX fonts, but it is more slanted than that and as you specifically say that \mathscr{I} is more slanted then I guess that's not what you want. –  Loop Space Jun 23 '11 at 12:38

\documentclass{standalone}
\pdfmapfile{+rsfso.map}
\DeclareSymbolFont{rsfso}{U}{rsfso}{m}{n}
\DeclareSymbolFontAlphabet{\mathscr}{rsfso}
\begin{document}
$\mathscr{I}$
\end{document}

The \pdfmapfile is necessary as of today, since it seems that the map file doesn't correctly register into TeX Live. Works with TeX Live 2010 and 2011/testing. The package mathrsfs defines \mathscr to use the font rsfs10 (or another size), while my definition requests the font rsfso10 (or at different size). This font has been developed by Michael Sharpe (texdoc rsfso), but his package redefines \mathcal instead of using a different command. So I copied the definition from mathrsfs changing rsfs in the font names into rsfso. The font is just like RSFS, but less slanted. I don't know why the TeX Live manager doesn't add the map file to pdftex.map; but since the trick with \pdfmapline works, why bother? Well, we should bother if the engine used is Xe(La)TeX, so a bug report will be filed. - That appears to be it.. Could you explain the difference with the results that I (and Herbert) got when using \mathscr{I}?
–  willem Jun 23 '11 at 16:00 Works for me now without the pdfmapfile, so apparently the bug got fixed. –  Ben Crowell Dec 26 '13 at 16:37 @BenCrowell I guess so. –  egreg Dec 26 '13 at 16:39 The sensible answer is "find a suitable font" (for example, the STIX script I is pretty close, but is perhaps too slanted). Here's a silly answer.

\documentclass{standalone}
\usepackage{tikz}
\usepackage{calligraphy}
\begin{document}
\begin{tikzpicture}[scale=10]
\calligraphy[copperplate,red,heavy,heavy line width=.2cm,light line width=.1cm]
  (0.8,0.51) .. controls +(-.1,-.08) and +(0,-.15) .. (0.5,0.6)
  .. controls +(0,.15) and +(-.13,-.07) .. (0.85,0.72) +(0,0)
  .. controls +(-.26,-0.07) and +(.12,.15) .. (0.61,0.3)
  .. controls +(-.12,-.15) and +(0,-.13) .. (.22,.28);
\end{tikzpicture}
\end{document}

which produces: Apart from the blob at the end, it's pretty close. I know that it's close because I did it by blowing up the image in the post and drawing on top of it. The line widths probably need tweaking a little, but it's only meant to be a silly answer ... - NICE! However, I need to use this in a journal article, and I'm not sure if editors like this stuff.. –  willem Jun 23 '11 at 13:23 You should probably mention that the calligraphy package isn't on CTAN yet and explain where to get it and how awesome it is –  Seamus Jun 23 '11 at 13:32 @willem: I did say it was silly! egreg's answer looks more promising. I have to say that I'm curious as to why \mathscr{I} wouldn't be acceptable to you. –  Loop Space Jun 23 '11 at 13:33 @Seamus: I was counting on the fact that no-one would be silly enough to actually try to do this. –  Loop Space Jun 23 '11 at 13:34 Choose another font if needed.

\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[charter]{mathdesign}
\begin{document}
\Huge $\mathfrak{I} \mathscr{I}$
\end{document}

-
https://www.physicsforums.com/threads/circular-motion-what-is-the-radius-of-the-loop-de-loop-in-meters.603456/
# Circular motion-what is the radius of the loop de loop in meters

1. May 5, 2012

### dani123

1. The problem statement, all variables and given/known data

Snoopy is flying his vintage warplane in a "loop de loop" path being chased by the Red Baron. His instruments tell him the plane is level (at the bottom of the loop) and travelling at 180 km/hr. He is sitting on a set of bathroom scales that tell him he weighs four times what he normally does. What is the radius of the loop in meters?

2. Relevant equations

Fc = mv^2/R
ac = v^2/R

3. The attempt at a solution

I am completely lost with this problem but this is what I attempted... I started by converting 180km/hr into m/s and came up with 648 000 000 m/s. Then I used this value and plugged it into the centripetal acceleration equation and used ac = 9.8 m/s^2 and then solved for R = 4.28*10^16 m. I know this must be the wrong answer but I am very lost as to what I should do or even how I should be looking at this problem... If anyone could help me out that would be greatly appreciated! Thank you so much in advance!

2. May 5, 2012

### tiny-tim

hi dani123! :rofl: :rofl: :rofl: hint: what is the equation for the reaction force between snoopy and the scales?

3. May 7, 2012

### dani123

im confused as to how thats gonna help me find the radius..

4. May 7, 2012

### tiny-tim

but the only information you have is the magnitude of that reaction force

5. May 7, 2012

### dani123

so do i just convert 180km into meters?

6. May 7, 2012

### dani123

nvm! scratch that last post ...

7. May 7, 2012

### tiny-tim

yes, and hours into seconds

8. May 7, 2012

### dani123

i tend to over think the problems that have the simplest answers! haha

9. May 9, 2012

### dani123

after i converted it into m/s... thats the answer? really?

10. May 9, 2012

### Staff: Mentor

Consider what his effective acceleration must be if he weighs 4x normal weight (when the usual acceleration due to gravity is just g).
What additional acceleration is operating when he moves in circular motion?

11. May 28, 2012

### dani123

im lost

12. May 28, 2012

### tiny-tim

what is the equation for the reaction force between snoopy and the scales?

13. May 28, 2012

### dani123

i dont know anymore,, im really confused

14. May 28, 2012

### truesearch

edit...delete

15. May 28, 2012

### Staff: Mentor

If the radius were infinite, there would be a 1g upward force exerted by the scale on him. How many additional g's of force are required to keep him moving in a circle? That should be v^2/R. Chet

16. May 28, 2012

### tiny-tim

well, how many forces are there, and what is the acceleration?

17. May 28, 2012

### Staff: Mentor

The net force is 3g's.

18. May 28, 2012

### dani123

how did you come up with these g's ?

19. May 28, 2012

### azizlwl

What is the radius of the loop in meters? You draw a free body diagram. From this you can calculate the radius needed. There are 3 forces exerted on the man. 1. Gravitational force. 2. Centripetal force. 3. Normal force. This force shown by the scale.

20. May 29, 2012

### vela Staff Emeritus

There are only two forces on Snoopy: the gravitational force and normal force. The net force on Snoopy results in his centripetal acceleration.
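Putting the mentors' hints together: the scale reads N = 4mg, and at the bottom of the loop Newton's second law gives N − mg = mv^2/R, so v^2/R = 3g and R = v^2/(3g). A quick numeric check (the variable names are mine, not from the thread):

```python
g = 9.8                  # m/s^2
v = 180 * 1000 / 3600    # 180 km/h -> 50 m/s (not 648 000 000!)

# Scale reads 4x normal weight: N = 4mg.  The net upward force supplies
# the centripetal acceleration: N - mg = m v^2 / R  =>  3g = v^2 / R
R = v**2 / (3 * g)
print(f"v = {v:.1f} m/s, R = {R:.1f} m")   # v = 50.0 m/s, R = 85.0 m
```

The mass cancels, which is why the problem never needs Snoopy's weight.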
https://math.stackexchange.com/questions/2923156/understanding-a-proof-that-if-n-geq-3-then-a-n-is-generated-by-all-the-3
# Understanding a proof that if $n \geq 3$, then $A_n$ is generated by all the $3$-cycles

The following proof (up until the point I got stuck) was given in my class notes.

Proof: Let $S$ denote the set of elements in $S_n$ which are the product of two transpositions. Then $\langle S \rangle \subseteq A_n$. Now if $\alpha \in A_n$ then $\alpha = \tau_1 \tau_2 \dots \tau_k$ where the $\tau_i$'s are transpositions and $k$ is even. Now $\alpha = (\tau_1\tau_2)(\tau_3\tau_4) \dots (\tau_{k-1}\tau_k)$ implies that $\alpha \in \langle S \rangle$. Hence $A_n = \langle S \rangle$.

The last line is the part of this proof that I don't understand. For $\alpha$ to be an element in $\langle S \rangle$ we need to show that $\alpha$ is the product of two transpositions, however in the above I don't see how $\alpha$ is the product of two transpositions at all. We'd need to show that $\alpha = f g$ where $f, g \in S_n$ are both $2$-cycles and I don't see how the last line implies that.

Careful here: you need to show that $\alpha\in\left<S\right>$, not that $\alpha\in S$. $\left<S\right>$ is the subgroup generated by $S$, so it suffices to show that $\alpha$ is a product of elements of $S$. Thus, $$\alpha = \tau_1\tau_2\cdots\tau_k = (\tau_1\tau_2)(\tau_3\tau_4)\cdots(\tau_{k-1}\tau_k)$$ is in $\left<S\right>$ because each of $\tau_1\tau_2$ and $\tau_3\tau_4$, etc. are elements of $S$.

$\langle S \rangle$ is not $S$ itself, it is the group that is generated by the elements of $S$. Every element in $\langle S \rangle$ is a product of elements of $S$. (By definition it should be a product of elements of $S$ and their inverses, but here the inverse of an element of $S$ is also in $S$ so it doesn't matter.) Now, $\tau_1\tau_2, \tau_3\tau_4,\dots,\tau_{k-1}\tau_k$ are all elements in $S$ and $\alpha$ is their product. Hence $\alpha\in\langle S \rangle$.

The notation $\langle S \rangle$ denotes the group generated by the set $S$ which is a subset of some group $G$.
This means the proof has achieved its objective: it's shown that $\alpha$ is a product of elements in $S$, i.e. elements that are products of two transpositions.

By the way, if you use a proof of the simplicity of $A_n$ that does not depend on $A_n$ being generated by $3$-cycles (and the easiest proof, for example, does not depend on it...get simplicity of $A_5$ from the sizes of its conjugacy classes and for larger $n$ use induction and easy standard results on multiply-transitive permutation groups), then you have a simple way to show that $A_n$ is generated by $3$-cycles, for $n \ge 5$. Since the $3$-cycles form a conjugacy class (you don't even need this....just that they are a union of conjugacy classes), they generate a normal subgroup. Since $A_n$ is simple, that subgroup must be $A_n$. For $n=3$ or $4$, the proof is also easy: The number of $3$-cycles exceeds $|A_n|/2$ in those cases.
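For small $n$ the generation claim can also be checked by brute force: build all $3$-cycles, close them under composition, and compare the size of the resulting group with $|A_n| = n!/2$. A self-contained sketch (the helper names are my own):

```python
from itertools import permutations
from math import factorial

def compose(p, q):
    # permutations as tuples: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def closure(gens, n):
    # breadth-first closure of the generators under composition
    group = {tuple(range(n))}
    frontier = list(group)
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(s, g)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

n = 5
three_cycles = []
for a, b, c in permutations(range(n), 3):
    p = list(range(n))
    p[a], p[b], p[c] = b, c, a       # the 3-cycle (a b c)
    three_cycles.append(tuple(p))

G = closure(three_cycles, n)
print(len(G), factorial(n) // 2)     # 60 60 -> the 3-cycles generate A_5
```

Note the closure really is a group: the inverse of a $3$-cycle is again a $3$-cycle, so no separate inverse step is needed.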
https://www.lessonplanet.com/teachers/compare-fractions-with-decimals
# Compare Fractions With Decimals In this comparing fractions learning exercise, students solve ten problems in which fractions and decimal numbers are compared with the <, >, or = signs.
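Comparisons like those in the exercise can be checked exactly in code: Python's Fraction type compares against a decimal without rounding the fraction (the particular values below are my own illustrations, not taken from the worksheet):

```python
from fractions import Fraction

# exact fraction-vs-decimal comparisons
print(Fraction(3, 4) > 0.7)     # True: 3/4 = 0.75
print(Fraction(1, 3) < 0.34)    # True: 1/3 = 0.333...
print(Fraction(1, 2) == 0.5)    # True: exactly equal
```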
http://www.physicspages.com/tag/equation-of-continuity/
Klein-Gordon equation: probability density and current Reference: Robert D. Klauber, Student Friendly Quantum Field Theory, (Sandtrove Press, 2013) – Chapter 3, Problem 3.3. In non-relativistic quantum mechanics governed by the Schrödinger equation, the probability density is given by $\displaystyle \rho=\Psi^{\dagger}\Psi \ \ \ \ \ (1)$ and the probability current is given by (generalizing our earlier result to 3-d and using natural units): $\displaystyle \mathbf{J}=\frac{i}{2m}\left(\Psi\nabla\Psi^{\dagger}-\Psi^{\dagger}\nabla\Psi\right) \ \ \ \ \ (2)$ The continuity equation for probability is then $\displaystyle \frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{J}=0 \ \ \ \ \ (3)$ We’ll now look at how these results appear in relativistic quantum mechanics, using the Klein-Gordon equation: $\displaystyle \frac{\partial^{2}\phi}{\partial t^{2}}-\left(\nabla^{2}-\mu^{2}\right)\phi=0 \ \ \ \ \ (4)$ We can multiply this equation by ${\phi^{\dagger}}$ and then subtract the hermitian conjugate of the result from the original equation to get $\displaystyle \phi^{\dagger}\frac{\partial^{2}\phi}{\partial t^{2}}-\phi\frac{\partial^{2}\phi^{\dagger}}{\partial t^{2}}$ $\displaystyle =$ $\displaystyle \phi^{\dagger}\left(\nabla^{2}-\mu^{2}\right)\phi-\phi\left(\nabla^{2}-\mu^{2}\right)\phi^{\dagger}\ \ \ \ \ (5)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \phi^{\dagger}\nabla^{2}\phi-\phi\nabla^{2}\phi^{\dagger} \ \ \ \ \ (6)$ The LHS can be written as $\displaystyle \phi^{\dagger}\frac{\partial^{2}\phi}{\partial t^{2}}-\phi\frac{\partial^{2}\phi^{\dagger}}{\partial t^{2}}=\frac{\partial}{\partial t}\left(\phi^{\dagger}\frac{\partial\phi}{\partial t}-\phi\frac{\partial\phi^{\dagger}}{\partial t}\right) \ \ \ \ \ (7)$ (use the product rule on the RHS and cancel terms). 
The RHS of 6 can be written as (use the product rule again): $\displaystyle \phi^{\dagger}\nabla^{2}\phi-\phi\nabla^{2}\phi^{\dagger}=\nabla\cdot\left(\phi^{\dagger}\nabla\phi-\phi\nabla\phi^{\dagger}\right) \ \ \ \ \ (8)$ We can write this as a continuity equation for the Klein-Gordon equation, with the following definitions: $\displaystyle \rho$ $\displaystyle \equiv$ $\displaystyle i\left(\phi^{\dagger}\frac{\partial\phi}{\partial t}-\phi\frac{\partial\phi^{\dagger}}{\partial t}\right)\ \ \ \ \ (9)$ $\displaystyle \mathbf{j}$ $\displaystyle =$ $\displaystyle -i\left(\phi^{\dagger}\nabla\phi-\phi\nabla\phi^{\dagger}\right) \ \ \ \ \ (10)$ [The extra ${i}$ is introduced to make ${\rho}$ and ${\mathbf{j}}$ real. Note that the factor within the parentheses in both expressions is a complex quantity minus its complex conjugate, which always gives a pure imaginary term. Thus multiplying by ${i}$ ensures the result is real.] Then $\displaystyle \frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{j}=0 \ \ \ \ \ (11)$ We can put this in 4-vector form if we use (for some 3-vector ${\mathbf{A}}$): $\displaystyle \nabla\cdot\mathbf{A}=-\partial^{i}a_{i} \ \ \ \ \ (12)$ where the implied sum over ${i}$ is from ${i=1}$ to ${i=3}$ (spatial coordinates), and the minus sign appears because we’ve raised the index on ${\partial^{i}}$. If we define $\displaystyle j_{i}=i\left(\phi^{\dagger}\partial_{i}\phi-\phi\partial_{i}\phi^{\dagger}\right) \ \ \ \ \ (13)$ (that is, the negative of 10), then ${\nabla\cdot\mathbf{j}=\partial^{i}j_{i}}$. To make ${j_{\mu}}$ into a 4-vector, we add ${j_{0}=\rho}$ and we get $\displaystyle \frac{\partial j_{0}}{\partial t}+\partial^{i}j_{i}=\partial^{\mu}j_{\mu}=0 \ \ \ \ \ (14)$ [Note that my definition of ${j_{i}}$ is the negative of the middle term in Klauber’s equation 3-21, although raising the ${i}$ index agrees with the last term in 3-21. 
I can’t see how his middle and last equations for ${j_{i}}$ and ${j^{i}}$ can both be right, since raising the ${i}$ in the middle equation for ${j_{i}}$ merely raises the ${\phi_{,i}}$ to ${\phi^{,i}}$ without changing the sign.] The curious thing about the Klein-Gordon equation is that its probability density ${\rho}$ in 9 need not be positive, depending on the values of ${\phi}$ and its time derivative. To see how this can affect the physical meaning of the equation, consider the general plane wave solution to the Klein-Gordon equation $\displaystyle \phi=\sum_{\mathbf{k}}\frac{1}{\sqrt{2V\omega_{\mathbf{k}}}}\left(A_{\mathbf{k}}e^{-ikx}+B_{\mathbf{k}}^{\dagger}e^{ikx}\right) \ \ \ \ \ (15)$ Klauber explores this starting with his equation 3-24, where he takes a test solution in which all ${B_{\mathbf{k}}^{\dagger}=0}$ and shows that ${\int\rho\;d^{3}x=\sum_{\mathbf{k}}\left|A_{\mathbf{k}}\right|^{2}=1}$ so that in this case, the total probability of finding the system in some state is +1 as it should be. Let’s see what happens if we take all ${A_{\mathbf{k}}=0}$. 
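Before doing the full mode sum, the sign of 9 for a single negative-frequency mode can be checked numerically (a sketch in one spatial dimension with made-up parameter values; the mode and prefactor follow 15, and the time derivative is taken by central finite difference):

```python
import cmath

# made-up single-mode parameters (natural units)
omega, k, V, B = 2.0, 1.5, 1.0, 0.7

def phi(t, x):
    # one negative-frequency mode of (15): exp(ikx) with kx = omega*t - k*x
    return B / (2 * V * omega) ** 0.5 * cmath.exp(1j * (omega * t - k * x))

def rho(t, x, h=1e-6):
    # rho = i (phi* d_t phi - phi d_t phi*), eq. (9)
    dphi = (phi(t + h, x) - phi(t - h, x)) / (2 * h)
    p = phi(t, x)
    return (1j * (p.conjugate() * dphi - p * dphi.conjugate())).real

print(round(rho(0.3, 1.1), 6), round(-B**2 / V, 6))   # -0.49 -0.49
```

The density comes out to $-2\omega\left|\phi\right|^{2}=-B^{2}/V$: negative, matching the sign of the mode-sum result in (22).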
In that case, 9 becomes $\displaystyle \rho=i\left[\sum_{\mathbf{k}}\frac{B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right]\left[\sum_{\mathbf{k}^{\prime}}\frac{i\omega_{\mathbf{k}^{\prime}}B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]-i\left[\sum_{\mathbf{k}^{\prime}}\frac{B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]\left[\sum_{\mathbf{k}}\frac{-i\omega_{\mathbf{k}}B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right] \ \ \ \ \ (16)$ $\displaystyle =-\left[\sum_{\mathbf{k}}\frac{B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right]\left[\sum_{\mathbf{k}^{\prime}}\frac{\omega_{\mathbf{k}^{\prime}}B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right] \ \ \ \ \ (17)$ $\displaystyle \phantom{=}-\left[\sum_{\mathbf{k}^{\prime}}\frac{B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]\left[\sum_{\mathbf{k}}\frac{\omega_{\mathbf{k}}B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right] \ \ \ \ \ (18)$ We now wish to calculate ${\int\rho\;d^{3}x}$. We can use the orthonormality of solutions to do the integral.
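Before doing the integral, the integrand can be checked mode by mode (a sympy sketch of my own; the symbol ${B}$ here plays the role of one amplitude ${B_{\mathbf{k}}^{\dagger}}$ from 15): the single-mode density is constant and negative, so integrating over the volume ${V}$ gives ${-\left|B\right|^{2}}$.

```python
import sympy as sp

t, x, k, m, V = sp.symbols('t x k m V', real=True, positive=True)
B = sp.symbols('B')                   # complex mode amplitude (hypothetical single mode)
w = sp.sqrt(k**2 + m**2)

# One term of the B-dagger sum in (15), with kx = w t - k.x
phi = B*sp.exp(sp.I*(w*t - k*x))/sp.sqrt(2*w*V)
phic = sp.conjugate(phi)

# Density (9) for this mode
rho = sp.simplify(sp.I*(phic*sp.diff(phi, t) - phi*sp.diff(phic, t)))

print(rho)        # constant and negative: -B*conjugate(B)/V
print(sp.simplify(rho*V))   # integrating the constant density over V gives -|B|^2
```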
We have $\displaystyle \frac{1}{V}\int e^{i\left(k^{\prime}-k\right)x}d^{3}x=\delta_{\mathbf{k},\mathbf{k}^{\prime}} \ \ \ \ \ (19)$ We get $\displaystyle -\int\left[\sum_{\mathbf{k}}\frac{B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right]\left[\sum_{\mathbf{k}^{\prime}}\frac{\omega_{\mathbf{k}^{\prime}}B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]d^{3}x=-\sum_{\mathbf{k}}\frac{\left|B_{\mathbf{k}}\right|^{2}}{2} \ \ \ \ \ (20)$ $\displaystyle -\int\left[\sum_{\mathbf{k}^{\prime}}\frac{B_{\mathbf{k}^{\prime}}^{\dagger}}{\sqrt{2\omega_{\mathbf{k}^{\prime}}V}}e^{ik^{\prime}x}\right]\left[\sum_{\mathbf{k}}\frac{\omega_{\mathbf{k}}B_{\mathbf{k}}}{\sqrt{2\omega_{\mathbf{k}}V}}e^{-ikx}\right]d^{3}x=-\sum_{\mathbf{k}}\frac{\left|B_{\mathbf{k}}\right|^{2}}{2} \ \ \ \ \ (21)$ $\displaystyle \int\rho\;d^{3}x=-\sum_{\mathbf{k}}\left|B_{\mathbf{k}}\right|^{2} \ \ \ \ \ (22)$ Thus the total probability of finding the system in one of the states ${\mathbf{k}}$ is negative.

### Stress-energy tensor: conservation equations

Reference: Moore, Thomas A., A General Relativity Workbook, University Science Books (2013) – Chapter 20; Box 20.5. We can express conservation of energy and momentum in terms of the stress-energy tensor. Recall that the physical meaning of the component ${T^{tt}}$ is the energy density. To get the conservation laws, consider a small box with dimensions ${dx}$, ${dy}$ and ${dz}$, and restrict our attention to the case of ‘dust’, that is, a fluid containing particles that are all at rest relative to each other. In that case, the tensor has the form $\displaystyle T^{ij}=\rho_{0}u^{i}u^{j} \ \ \ \ \ (1)$ where ${\rho_{0}}$ is the energy density of the dust in its own rest frame and ${u^{i}}$ is the four-velocity of the fluid as measured in some observer’s local inertial frame.
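To make 1 concrete, here is a short symbolic computation of the dust tensor's components (my own sketch, using sympy, with motion along ${x}$ and ${c=1}$):

```python
import sympy as sp

rho0, v = sp.symbols('rho_0 v', real=True, positive=True)
gamma = 1/sp.sqrt(1 - v**2)            # Lorentz factor, c = 1
u = sp.Matrix([gamma, gamma*v, 0, 0])  # four-velocity for motion along x

# Dust stress-energy tensor (1): T^{ij} = rho0 u^i u^j
T = rho0 * u * u.T

print(sp.simplify(T[0, 0]))  # gamma^2 * rho0 : energy density seen by the observer
print(sp.simplify(T[0, 1]))  # gamma^2 * rho0 * v : energy flux in x (= momentum density)
```

The tensor is automatically symmetric, and every component is a product of the rest-frame density with two four-velocity components, as 1 requires.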
Then the flow of energy into the box through, say, the left-hand face perpendicular to the ${x}$ axis at position ${x}$ in time ${dt}$ is the energy density multiplied by the velocity component in the ${x}$ direction ${v^{x}}$: $\displaystyle dE_{x}=\left(T_{x}^{tt}v^{x}dt\right)dydz \ \ \ \ \ (2)$ We’ve multiplied by ${dydz}$ since this is the area of the face of the box through which the energy is flowing, and thus the total flow of energy is the density multiplied by the volume that crosses the box’s face, which is ${v^{x}dtdydz}$. The subscript ${x}$ on ${T_{x}^{tt}}$ means that the tensor is evaluated at position ${x}$. We have $\displaystyle T^{tt}=\rho_{0}u^{t}u^{t} \ \ \ \ \ (3)$ and ${u^{t}v^{x}=\gamma v^{x}=u^{x}}$, so ${T^{tt}v^{x}=T^{tx}}$ using 1, and thus $\displaystyle dE_{x}=\left(T_{x}^{tx}dt\right)dydz \ \ \ \ \ (4)$ Similarly, the energy flowing across the face at position ${x+dx}$ is then $\displaystyle dE_{x+dx}=\left(T_{x+dx}^{tx}dt\right)dydz \ \ \ \ \ (5)$ Taking the difference of these two equations we get $\displaystyle \left(T_{x+dx}^{tx}-T_{x}^{tx}\right)dtdydz=\partial_{x}T^{tx}dxdtdydz \ \ \ \ \ (6)$ We can write similar equations for the ${y}$ and ${z}$ directions: $\displaystyle \left(T_{y+dy}^{ty}-T_{y}^{ty}\right)dtdxdz=\partial_{y}T^{ty}dxdtdydz \ \ \ \ \ (7)$ $\displaystyle \left(T_{z+dz}^{tz}-T_{z}^{tz}\right)dtdxdy=\partial_{z}T^{tz}dxdtdydz \ \ \ \ \ (8)$ Adding these up gives the net total change in energy within the box: $\displaystyle dE=-\left(\partial_{x}T^{tx}+\partial_{y}T^{ty}+\partial_{z}T^{tz}\right)dxdydzdt \ \ \ \ \ (9)$ The minus sign occurs because if, say, ${\partial_{x}T^{tx}<0}$, this indicates that ${T_{x}^{tx}>T_{x+dx}^{tx}}$, so more energy flows in at position ${x}$ than flows out at position ${x+dx}$, resulting in ${dE>0}$.
The net change in energy due to its flow across the boundaries of the box must be reflected in the change of the energy within the box. The energy density is given by ${T^{tt}}$ so we must have $\displaystyle dE$ $\displaystyle =$ $\displaystyle \left(T_{t+dt}^{tt}-T_{t}^{tt}\right)dxdydz\ \ \ \ \ (10)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \partial_{t}T^{tt}dxdydzdt \ \ \ \ \ (11)$ The energy conservation law is then given by $\displaystyle \partial_{t}T^{tt}dxdydzdt$ $\displaystyle =$ $\displaystyle -\left(\partial_{x}T^{tx}+\partial_{y}T^{ty}+\partial_{z}T^{tz}\right)dxdydzdt \ \ \ \ \ (12)$ Since this must be true for any choice of differentials, the energy conservation law is expressed in the compact form $\displaystyle \partial_{j}T^{tj}=0 \ \ \ \ \ (13)$ We can do a similar argument for momentum. The component ${T^{tj}}$ (where ${j}$ is a spatial coordinate) is the density of the ${j}$ component of momentum and the components ${T^{ij}}$ are the rates of flow of the ${j}$ component of momentum in the ${i}$ direction, so the net change in momentum component ${j}$ due to differences in the flow rate at the boundaries of the box is $\displaystyle dp^{j}=-\left(\partial_{x}T^{xj}+\partial_{y}T^{yj}+\partial_{z}T^{zj}\right)dxdydzdt \ \ \ \ \ (14)$ This must be equal to the net change of ${p^{j}}$ within the box over time ${dt}$, so $\displaystyle dp^{j}=\partial_{t}T^{tj}dxdydzdt \ \ \ \ \ (15)$ and $\displaystyle \partial_{i}T^{ij}=0 \ \ \ \ \ (16)$ This is therefore true for all four values of ${j}$ and represents conservation of energy and momentum, or just four-momentum. We derived this formula for the special case of a local inertial frame (LIF). 
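As a quick consistency check on 13 and 16 (not part of Moore's derivation; a sympy sketch with an assumed rest-density profile ${f}$ carried along at constant speed ${v}$), dust moving rigidly in the ${x}$ direction conserves both energy and momentum:

```python
import sympy as sp

t, x, v = sp.symbols('t x v', real=True)
f = sp.Function('f')                 # hypothetical rest-density profile carried with the flow

gamma = 1/sp.sqrt(1 - v**2)
rho0 = f(x - v*t)                    # dust moving rigidly at constant speed v along x
u = sp.Matrix([gamma, gamma*v])      # only the (t, x) components matter here

T = rho0 * u * u.T                   # T^{ij} = rho0 u^i u^j, restricted to (t, x)

# Energy conservation (13): d_t T^tt + d_x T^tx = 0
energy = sp.diff(T[0, 0], t) + sp.diff(T[0, 1], x)
# Momentum conservation (16), j = x: d_t T^tx + d_x T^xx = 0
momentum = sp.diff(T[1, 0], t) + sp.diff(T[1, 1], x)

print(sp.simplify(energy), sp.simplify(momentum))   # both 0
```

The cancellation works because ${T^{tt}}$ changes in time only through the profile ${f\left(x-vt\right)}$ drifting past, which is exactly the flux difference ${-\partial_{x}T^{tx}}$.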
We’ve seen that the appropriate generalization of the gradient is the absolute gradient or covariant derivative, so the appropriate tensor equation for conservation of four-momentum is $\displaystyle \boxed{\nabla_{i}T^{ij}=0} \ \ \ \ \ (17)$ In terms of Christoffel symbols, this is $\displaystyle \nabla_{i}T^{ij}=\partial_{i}T^{ij}+\Gamma_{ik}^{i}T^{kj}+\Gamma_{ik}^{j}T^{ik}=0 \ \ \ \ \ (18)$ We can apply this equation to the more general case of a perfect fluid in general coordinates, where the tensor is $\displaystyle T^{ij}=\left(\rho_{0}+P_{0}\right)u^{i}u^{j}+P_{0}g^{ij} \ \ \ \ \ (19)$ We can work out the covariant derivative in a LIF. In a LIF, all Christoffel symbols are zero so we get $\displaystyle \nabla_{i}T^{ij}=\partial_{i}T^{ij} \ \ \ \ \ (20)$ $\displaystyle 0=u^{i}u^{j}\partial_{i}\left(\rho_{0}+P_{0}\right)+\left(\rho_{0}+P_{0}\right)\left[u^{i}\partial_{i}u^{j}+u^{j}\partial_{i}u^{i}\right]+\eta^{ij}\partial_{i}P_{0} \ \ \ \ \ (21)$ The four-velocity always satisfies the relation ${\mathbf{u}\cdot\mathbf{u}=u_{j}u^{j}=-1}$, so we have $\displaystyle \partial_{i}\left(\mathbf{u}\cdot\mathbf{u}\right)=\partial_{i}\left(u^{j}u_{j}\right) \ \ \ \ \ (22)$ $\displaystyle =\partial_{i}\left(\eta_{jk}u^{k}u^{j}\right) \ \ \ \ \ (23)$ $\displaystyle =\eta_{jk}\left[u^{k}\partial_{i}u^{j}+u^{j}\partial_{i}u^{k}\right] \ \ \ \ \ (24)$ $\displaystyle =u_{j}\partial_{i}u^{j}+u_{k}\partial_{i}u^{k} \ \ \ \ \ (25)$ $\displaystyle =2u_{j}\partial_{i}u^{j} \ \ \ \ \ (26)$ $\displaystyle =0 \ \ \ \ \ (27)$ We can now multiply 21 by ${u_{j}}$ and use the above result to get $\displaystyle u^{i}u_{j}u^{j}\partial_{i}\left(\rho_{0}+P_{0}\right)+\left(\rho_{0}+P_{0}\right)\left[u^{i}u_{j}\partial_{i}u^{j}+u_{j}u^{j}\partial_{i}u^{i}\right]+\eta^{ij}u_{j}\partial_{i}P_{0}=-u^{i}\partial_{i}\left(\rho_{0}+P_{0}\right)-\left(\rho_{0}+P_{0}\right)\partial_{i}u^{i}+u^{i}\partial_{i}P_{0}=0 \ \ \ \ \ (28)$ $\displaystyle \left(\rho_{0}+P_{0}\right)\partial_{i}u^{i}+u^{i}\partial_{i}\rho_{0}=0 \ \ \ \ \ (29)$ $\displaystyle \partial_{i}\left(u^{i}\rho_{0}\right)+P_{0}\partial_{i}u^{i}=0 \ \ \ \ \ (30)$ This last equation is known as the equation of continuity. Note that it is valid only in a LIF, since the derivative isn’t covariant. Now we can multiply 29 by ${u^{j}}$ and subtract it from 21: $\displaystyle u^{i}u^{j}\partial_{i}P_{0}+\left(\rho_{0}+P_{0}\right)u^{i}\partial_{i}u^{j}+\eta^{ij}\partial_{i}P_{0}=0 \ \ \ \ \ (31)$ $\displaystyle \left(\rho_{0}+P_{0}\right)u^{i}\partial_{i}u^{j}=-\left(u^{i}u^{j}+\eta^{ij}\right)\partial_{i}P_{0} \ \ \ \ \ (32)$ This is the equation of motion, also valid in a LIF. In the non-relativistic limit, the density in any LIF will be the same, as will the pressure. Also, ${P_{0}\ll\rho_{0}}$, so we can approximate 30 by $\displaystyle \partial_{i}\left(u^{i}\rho_{0}\right)\approx0 \ \ \ \ \ (33)$ Using ${\mathbf{u}\approx\left[1,v^{x},v^{y},v^{z}\right]}$ this becomes $\displaystyle \partial_{t}\rho_{0}=-\vec{\nabla}\cdot\left(\rho_{0}\mathbf{v}\right) \ \ \ \ \ (34)$ where the arrow above ${\vec{\nabla}}$ indicates this is the 3-d gradient, not the covariant derivative. This is the Newtonian equation of continuity for a perfect fluid, which expresses conservation of mass. We can also approximate 32 by neglecting any products of velocity components, since ${v^{i}v^{j}\ll1}$ if both ${i}$ and ${j}$ are spatial coordinates.
The LHS becomes $\displaystyle \left(\rho_{0}+P_{0}\right)u^{i}\partial_{i}u^{j}\approx\rho_{0}u^{i}\partial_{i}u^{j} \ \ \ \ \ (35)$ $\displaystyle =\rho_{0}\left[\partial_{t}v^{j}+\left(\vec{\mathbf{v}}\cdot\vec{\nabla}\right)v^{j}\right] \ \ \ \ \ (36)$ The term with ${j=t}$ drops out, since ${u^{t}=1}$ and its derivatives are all zero. We can combine the three spatial coordinates into a single vector expression: $\displaystyle \rho_{0}\left[\partial_{t}\vec{\mathbf{v}}+\left(\vec{\mathbf{v}}\cdot\vec{\nabla}\right)\vec{\mathbf{v}}\right] \ \ \ \ \ (37)$ The RHS is $\displaystyle -\left(u^{i}u^{j}+\eta^{ij}\right)\partial_{i}P_{0} \ \ \ \ \ (38)$ If ${j=t}$, the ${i=t}$ term in the sum is zero because ${u^{t}u^{t}+\eta^{tt}=+1-1=0}$. If we ignore all other terms that are second order or higher in ${v}$ and/or ${P_{0}}$, we are left with only ${-\eta^{ij}\partial_{i}P_{0}}$. Looking at the 3 terms with ${j}$ being a spatial coordinate, this is ${-\vec{\nabla}P_{0}}$, so we get the approximation $\displaystyle \rho_{0}\left[\partial_{t}\vec{\mathbf{v}}+\left(\vec{\mathbf{v}}\cdot\vec{\nabla}\right)\vec{\mathbf{v}}\right]=-\vec{\nabla}P_{0} \ \ \ \ \ (39)$ which is Euler’s equation of motion for a perfect fluid.
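The order counting on the RHS can be checked symbolically. In this sketch (my own, using sympy, restricted to 1+1 dimensions with signature ${\left(-,+\right)}$ so that ${u_{j}u^{j}=-1}$), each entry of ${u^{i}u^{j}+\eta^{ij}}$ is expanded in powers of ${v}$, confirming that only the spatial-spatial entry survives at lowest order, which is what reduces the RHS of 38 to ${-\vec{\nabla}P_{0}}$:

```python
import sympy as sp

v = sp.symbols('v', real=True)
gamma = 1/sp.sqrt(1 - v**2)
u = sp.Matrix([gamma, gamma*v])   # exact four-velocity, (t, x) components
eta = sp.diag(-1, 1)              # metric signature (-,+), matching u.u = -1

M = u*u.T + eta   # the coefficient matrix multiplying d_i P0 in (38)

# Expand each entry to second order in v
series = M.applyfunc(lambda e: sp.series(e, v, 0, 3).removeO())
print(series)
# only the (x,x) entry has an O(1) piece; the (t,t) entry is O(v**2)
# and the off-diagonal entries are O(v)
```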
http://www.iro.umontreal.ca/~qip2012/abstract.php
### QIP2012 Abstracts

Itai Arad, Zeph Landau and Umesh Vazirani. An improved area law for 1D frustration-free systems Abstract: We present a new proof for the 1D area law for frustration-free systems with a constant gap, which exponentially improves the entropy bound in Hastings' 1D proof. For particles of dimension d, a spectral gap \eps>0 and interaction strength of at most J, our entropy bound is S <= O(1)X^3 log^8 X where X=(J log d)/ \eps, versus the e^{O(X)} in Hastings' proof. Our proof is completely combinatorial. It follows the simple proof of the commuting case, combining the detectability lemma with basic tools from approximation theory. In higher dimensions, we manage to slightly improve the bound, showing that the entanglement entropy between a region L and its surrounding is bounded by S <= O(1) |\partial L|^2 log^8|\partial L|, which in 2D is very close to the (trivial) volume-law. Salman Beigi and Amin Gohari. Information Causality is a Special Point in the Dual of the Gray-Wyner Region Abstract: Information Causality puts restrictions on the amount of information learned by a party (Bob) in a one-way communication problem. Bob receives an index b, and after a one-way communication from the other party (Alice), tries to recover a part of Alice's input. Because of the possibility of cloning, this game in its completely classical form is equivalent to one in which there are several Bobs indexed by b, who are interested in recovering different parts of Alice's input string, after receiving a public message from her. Adding a private message from Alice to each Bob, and assuming that the game is played many times, we obtain the Gray-Wyner problem for which a complete characterization of the achievable region is known. In this paper, we first argue that in the classical case Information Causality is only a single point in the dual of the Gray-Wyner region.
Next, we show that despite the fact that cloning is impossible in a general physical theory, the result from classical world carries over to any physical theory provided that it satisfies a new property. This new property of the physical theory is called the Accessibility of Mutual Information and holds in the quantum theory. It implies that the Gray-Wyner region completely characterizes all the inequalities corresponding to the game of Information Causality. In other words, we provide infinitely many inequalities that Information Causality is only one of them. In the second part of the paper we prove that Information Causality leads to a non-trivial lower bound on the communication cost of simulating a given non-local box when the parties are allowed to share entanglement. We also consider the same problem when the parties are provided with preshared randomness. Our non-technical contribution is to comment that information theorists who have been interested in the area of control have independently studied the same problem in a different context from an information theoretic perspective, rather than a communication complexity one. Connecting these two lines of research, we report a formula that, rather surprisingly, gives an exact computable expression for the optimal amount of communication needed for sampling from a non-local distribution given infinite preshared randomness. Salman Beigi and Robert Koenig. Simplified instantaneous non-local quantum computation with applications to position-based cryptography Abstract: Motivated by concerns that non-local measurements may violate causality, Vaidman has shown that any non-local operation can be implemented using local operations and a single round of simultaneously passed classical communication only. His protocols are based on a highly non-trivial recursive use of teleportation. 
Here we give a simple proof of this fact, reducing the amount of entanglement required from a doubly exponential to an exponential amount. We also prove a linear lower bound on the amount of entanglement consumed for the implementation of a certain non-local measurement. These results have implications for position-based cryptography: any scheme becomes insecure if the adversaries share an amount of entanglement scaling exponentially in the number of communicated qubits. Furthermore, certain schemes are secure under the assumption that the adversaries have at most a linear amount of entanglement and are required to communicate classically. Aleksandrs Belovs. Span Programs for Functions with Constant-Sized 1-certificates Abstract: Besides the Hidden Subgroup Problem, the second large class of quantum speed-ups is for functions with constant-sized 1-certificates. This includes the OR function, solvable by the Grover algorithm, the distinctness, the triangle and other problems. The usual way to solve them is by quantum walk on the Johnson graph. We propose a solution for the same problems using span programs. The span program is a computational model equivalent to the quantum query algorithm in its strength, and yet very different in its outfit. We prove the power of our approach by designing a quantum algorithm for the triangle problem with query complexity O(n^{35/27}) that is better than O(n^{13/10}) of the best previously known algorithm by Magniez et al. Dominic Berry, Richard Cleve and Sevag Gharibian. Discrete simulations of continuous-time query algorithms that are efficient with respect to queries, gates and space Abstract: We show that any continuous-time quantum query algorithm whose total query time is T and whose driving Hamiltonian is implementable with G gates can be simulated by a discrete-query quantum algorithm using the following resources: * O(T log T / loglog T) queries * O(T G polylog T) 1- and 2-qubit gates * O(polylog T) qubits of space. 
This extends previous results where the query cost is the same or better, but where the orders of the second and third resource costs are at least T^2 polylog T and T polylog T respectively. These new bounds are useful in circumstances where abstract black-box query algorithms are translated into concrete algorithms with subroutines substituted for the black-box queries. In these circumstances, what matters most is the total gate complexity, which can be large if the cost of the operations performed between the queries is large (even if the number of queries is small). Our bound implies that if the implementation cost of the driving Hamiltonian is small, the total gate complexity is not much more than the query complexity. Robin Blume-Kohout. Paranoid tomography: Confidence regions for quantum hardware Abstract: Many "paranoid" quantum information processing protocols, such as fault tolerance and cryptography, require rigorously validated quantum hardware. We use tomography to characterize and calibrate such devices. But point estimates of the state or gate implemented by a device -- which are the end result of all current tomographic protocols -- can never provide the rigorously reliable validation required for fault tolerance or QKD. We need region estimators. Here, I introduce likelihood-ratio confidence region estimators, and show that (unlike ad hoc techniques such as the bootstrap) they are absolutely reliable and near-optimally powerful, as well as convenient and simple to implement. Fernando Brandao, Aram Harrow and Michal Horodecki. Local random quantum circuits are approximate polynomial-designs Abstract: We prove that local random quantum circuits acting on n qubits composed of polynomially many nearest neighbor two-qubit gates form an approximate unitary poly(n)-design. Previously it was unknown whether random quantum circuits were a t-design for any t > 3.
The proof is based on an interplay of techniques from quantum many-body theory, representation theory, and the theory of Markov chains. In particular we employ a result of Nachtergaele for lower bounding the spectral gap of frustration-free quantum local Hamiltonians; a quasi-orthogonality property of permutation matrices; and a result of Oliveira which extends to the unitary group the path-coupling method for bounding the mixing time of random walks. We also consider pseudo-randomness properties of local random quantum circuits of small depth and prove they constitute a quantum poly(n)-copy tensor product expander. The proof also rests on techniques from quantum many-body theory, in particular on the detectability lemma of Aharonov, Arad, Landau, and Vazirani. We give two applications of the results. Firstly we show the following pseudo-randomness property of efficiently generated quantum states: almost every state of n qubits generated by circuits of size n^k cannot be distinguished from the maximally mixed state by circuits of size n^((k+4)/6); this provides a data-hiding scheme against computationally bounded adversaries. Secondly, we reconsider a recent argument of Masanes, Roncaglia, and Acin concerning local equilibration of time-evolving quantum systems, and strengthen the connection between fast equilibration of small subsystems and the circuit complexity of the unitary which diagonalizes the Hamiltonian. Gilles Brassard, Peter Høyer, Kassem Kalach, Marc Kaplan, Sophie Laplante and Louis Salvail. Merkle Puzzles in a Quantum World Abstract: In 1974, Ralph Merkle proposed the first unclassified scheme for secure communications over insecure channels. When legitimate communicating parties are willing to spend an amount of computational effort proportional to some parameter N, an eavesdropper cannot break into their communication without spending a time proportional to N^2, which is quadratically more than the legitimate effort.
We showed in an earlier paper that Merkle's schemes are completely insecure against a quantum adversary, but that their security can be partially restored if the legitimate parties are also allowed to use quantum computation: the eavesdropper needed to spend a time proportional to N^{3/2} to break our earlier quantum scheme. Furthermore, all previous classical schemes could be broken completely by the onslaught of a quantum eavesdropper and we conjectured that this is unavoidable. We give two novel key establishment schemes in the spirit of Merkle's. The first one can be broken by a quantum adversary that makes an effort proportional to N^{5/3} to implement a quantum random walk in a Johnson graph reminiscent of Andris Ambainis' quantum algorithm for the element distinctness problem. This attack is optimal up to logarithmic factors. Our second scheme is purely classical, yet it cannot be broken by a quantum eavesdropper who is only willing to expend effort proportional to that of the legitimate parties. Sergey Bravyi. Topological qubits: stability against thermal noise Abstract: A big open question in the quantum information theory concerns feasibility of a self-correcting quantum memory. A user of such memory would need quantum computing capability only to write and read information. The storage itself requires no action from the user, as long as the memory is put in contact with a cold enough thermal bath. In this talk I will review toy models of a quantum memory based on stabilizer code Hamiltonians with the topological order. These models describe quantum spin lattices with short-range interactions and a quantum code degenerate ground state. I will show how to derive a lower bound on the memory time for stabilizer Hamiltonians that have no string-like logical operators, such as the recently discovered 3D Cubic Code. This bound applies when the interaction between the spin lattice and the thermal bath can be described by a Markovian master equation. 
Our results demonstrate that the 3D Cubic Code is a marginally self-correcting memory: for a fixed temperature T the maximum memory time that can be achieved by increasing the lattice size grows exponentially with 1/T^2, whereas the optimal lattice size grows exponentially with 1/T. We also compute the memory time of the 3D Cubic Code numerically using a novel error correction algorithm to simulate the readout step. The numerics suggests that our lower bound on the memory time is tight. A unique feature of the studied stabilizer Hamiltonians responsible for the self-correction is the energy landscape that penalizes any sequence of local errors resulting in a logical error. The energy barrier that must be overcome to implement a bit-flip or phase-flip on the encoded qubit grows logarithmically with the lattice size. Such energy landscape also renders diffusion of topological defects over large distances energetically unfavorable. This is a joint work with Jeongwan Haah (Caltech) Sergey Bravyi and Robert Koenig. Disorder-assisted error correction in Majorana chains Abstract: It was recently realized that quenched disorder may enhance the reliability of topological qubits by reducing the mobility of anyons at zero temperature. Here we compute storage times with and without disorder for quantum chains with unpaired Majorana fermions - the simplest toy model of a quantum memory. Disorder takes the form of a random site-dependent chemical potential. The corresponding one-particle problem is a one-dimensional Anderson model with disorder in the hopping amplitudes. We focus on the zero-temperature storage of a qubit encoded in the ground state of the Majorana chain. Storage and retrieval are modeled by a unitary evolution under the memory Hamiltonian with an unknown weak perturbation followed by an error-correction step. Assuming dynamical localization of the one-particle problem, we show that the storage time grows exponentially with the system size. 
We give supporting evidence for the required localization property by estimating Lyapunov exponents of the one-particle eigenfunctions. We also simulate the storage process for chains with a few hundred sites. Our numerical results indicate that in the absence of disorder, the storage time grows only as a logarithm of the system size. We provide numerical evidence for the beneficial effect of disorder on storage times and show that suitably chosen pseudorandom potentials can outperform random ones. Jop Briet and Thomas Vidick. Explicit lower and upper bounds on the entangled value of multiplayer XOR games Abstract: XOR games are the simplest model in which the nonlocal properties of entanglement manifest themselves. When there are two players, it is well known that the bias --- the maximum advantage over random play --- of entangled players is at most a constant times greater than that of classical players. Using tools from operator space theory, Perez-Garcia et al. [Comm. Math. Phys. 279 (2), 2008] showed that no such bound holds when there are three or more players: in that case the ratio of the entangled and classical biases can become unbounded and scale with the size of the game. We give a new, simple and explicit (though still probabilistic) construction of a family of three-player XOR games for which entangled players have a large advantage over classical players. Our game has N^2 questions per player and entangled players have a factor of order sqrt(N) (up to log factors) advantage over classical players. Moreover, the entangled players only need to share a state of local dimension N and measure observables defined by tensor products of the Pauli matrices. Additionally, we give the first upper bounds on the maximal violation in terms of the number of questions per player, showing that our construction is only quadratically off in that respect. Our results rely on probabilistic estimates on the norm of random matrices and higher-order tensors. 
Harry Buhrman, Serge Fehr, Christian Schaffner and Florian Speelman. The Garden-Hose Game and Application to Position-Based Quantum Cryptography Abstract: We study position-based cryptography in the quantum setting. We examine a class of protocols that only require the communication of a single qubit and 2n bits of classical information. To this end, we define a new model of communication complexity, the garden-hose model, which enables us to prove upper bounds on the number of EPR pairs needed to attack such schemes. This model furthermore opens up a way to link the security of quantum position-based cryptography to traditional complexity theory. Josh Cadney, Noah Linden and Andreas Winter. Infinitely many constrained inequalities for the von Neumann entropy Abstract: We exhibit infinitely many new, constrained inequalities for the von Neumann entropy, and show that they are independent of each other and the known inequalities obeyed by the von Neumann entropy (basically strong subadditivity). The new inequalities were proved originally by Makarychev et al. [Commun. Inf. Syst., 2(2):147-166, 2002] for the Shannon entropy, using properties of probability distributions. Our approach extends the proof of the inequalities to the quantum domain, and includes their independence for the quantum and also the classical cases. André Chailloux and Iordanis Kerenidis. Optimal Bounds for Quantum Bit Commitment Abstract: Bit commitment is a fundamental cryptographic primitive with numerous applications. Quantum information allows for bit commitment schemes in the information theoretic setting where no dishonest party can perfectly cheat. The previously best-known quantum protocol by Ambainis achieved a cheating probability of at most 3/4[Amb01].
On the other hand, Kitaev showed that no quantum protocol can have cheating probability less than 1/sqrt{2}[Kit03] (his lower bound on coin flipping can be easily extended to bit commitment). Closing this gap has since been an important and open question. In this paper, we provide the optimal bound for quantum bit commitment. We first show a lower bound of approximately 0.739, improving Kitaev's lower bound. We then present an optimal quantum bit commitment protocol which has cheating probability arbitrarily close to 0.739. More precisely, we show how to use any weak coin flipping protocol with cheating probability 1/2 + \eps in order to achieve a quantum bit commitment protocol with cheating probability 0.739 + O(\eps). We then use the optimal quantum weak coin flipping protocol described by Mochon[Moc07]. To stress the fact that our protocol uses quantum effects beyond the weak coin flip, we show that any classical bit commitment protocol with access to perfect weak (or strong) coin flipping has cheating probability at least 3/4. André Chailloux and Or Sattath. The Complexity of the Separable Hamiltonian Problem Abstract: In this paper, we study variants of the canonical local hamiltonian problem where we have the additional promise that the witness is separable. We define two variants of the local problem. In the separable sparse hamiltonian problem, the hamiltonians are not necessarily local, but rather sparse. We show that this problem is QMA(2) complete. On the other hand, we consider another problem, the separable local hamiltonian problem and show that it is QMA complete. This should be compared to the local hamiltonian problem, and to the sparse hamiltonian problem which are both QMA complete. This is the first study of separable Hamiltonian problems which leads to new complete problems for both QMA and QMA(2) and might give some new ways of comparing these two classes. Eric Chitambar, Wei Cui and Hoi-Kwong Lo. 
Increasing Entanglement by Separable Operations and New Monotones for W-type Entanglement Abstract: In this talk, we seek to better understand the structure of local operations and classical communication (LOCC) and its relationship to separable operations (SEP). To this end, we compare the abilities of LOCC and SEP for distilling EPR entanglement from one copy of an N-qubit W-class state (i.e. that of the form sqrt{x_0}|00...0> + sqrt{x_1}|10...0> +...+ sqrt{x_n}|00...1>). In terms of transformation success probability, we are able to quantify a gap as large as 37% between the two classes. Our work involves constructing new analytic entanglement monotones for W-class states which can increase on average by separable operations. Additionally, we are able to show that the set of LOCC operations, considered as a subset of the most general quantum measurements, is not closed. Extended Version: arXiv:1106.1208 Matthias Christandl and Renato Renner. Reliable Quantum State Tomography Abstract: Quantum state tomography is the task of inferring the state of a quantum system by appropriate measurements. Since the frequency distributions of the outcomes obtained from any finite number of measurements will generally deviate from their asymptotic limits, the estimation of the state can never be perfectly accurate, thus requiring the specification of error bounds. Furthermore, the individual reconstruction of matrix elements of the density operator representation of a state may lead to inconsistent results (e.g., operators with negative eigenvalues). Here we introduce a framework for quantum state tomography that enables the computation of accurate and consistent estimates and reliable error bars from a finite set of data and show that these have a well-defined and universal operational significance. The method does not require any prior assumptions about the distribution of the possible states or a specific parametrization of the state space. 
The resulting error bars are tight, corresponding to the fundamental limits that quantum theory imposes on the precision of measurements. At the same time, the technique is practical and particularly well suited for tomography on systems consisting of a small number of qubits, which are currently the focus of interest in experimental quantum information science. Toby Cubitt, Martin Schwarz, Frank Verstraete, Or Sattath and Itai Arad. Three Proofs of a Constructive Commuting Quantum Lovasz Local Lemma Abstract: The recently proven Quantum Lovasz Local Lemma generalises the well-known Lovasz Local Lemma. It states that, if a collection of subspace constraints is "weakly dependent", there necessarily exists a state satisfying all constraints. It implies e.g. that certain instances of the quantum kQSAT satisfiability problem are necessarily satisfiable, or that many-body systems with "not too many" interactions are never frustrated. However, the QLLL only asserts existence; it says nothing about how to find the state. Inspired by Moser's breakthrough classical results, we present a constructive version of the QLLL in the setting of commuting constraints, proving that a simple quantum algorithm converges efficiently to the required state. In fact, we provide three different proofs, all of which are independent of the original QLLL proof. These results therefore also provide independent, constructive proofs of the commuting QLLL itself, and strengthen it significantly by giving an efficient algorithm for finding the state whose existence is asserted by the QLLL. Marcus P. Da Silva, Steven T. Flammia, Olivier Landon-Cardinal, Yi-Kai Liu and David Poulin. Practical characterization of quantum devices without tomography Abstract: Quantum tomography is the main method used to assess the quality of quantum information processing devices, but its complexity presents a major obstacle for the characterization of even moderately large systems.
However, tomography generates much more information than is often sought. Taking a more targeted approach, we develop schemes that enable (i) estimating the fidelity of an experiment to a theoretical ideal description, and (ii) learning which description within a reduced subset best matches the experimental data. Both of these approaches yield a significant reduction in resources compared to tomography. In particular, we show how to estimate the fidelity between a predicted pure state and an arbitrary experimental state using only a constant number of Pauli expectation values selected at random according to an importance-weighting rule. In addition, we propose methods for certifying quantum circuits and learning continuous-time quantum dynamics that are described by local Hamiltonians or Lindbladians. This extended abstract is a synthesis of arXiv:1104.3835 and arXiv:1104.4695, which the reader can consult for complete details on the results, methods and proofs. Nilanjana Datta, Min-Hsiu Hsieh and Mark Wilde. Quantum rate distortion, reverse Shannon theorems, and source-channel separation Abstract: We derive quantum counterparts of two key theorems of classical information theory, namely, the rate-distortion theorem and the source-channel separation theorem. The rate-distortion theorem gives the ultimate limits on lossy data compression, and the source-channel separation theorem implies that a two-stage protocol consisting of compression and channel coding is optimal for transmitting a memoryless source over a memoryless channel. In spite of their importance in the classical domain, there has been surprisingly little work in these areas for quantum information theory. In the present work, we prove that the quantum rate distortion function is given in terms of the regularized entanglement of purification.
Although this formula is regularized, at the very least it demonstrates that Barnum's conjecture on the achievability of the coherent information for quantum rate distortion is generally false. We also determine single-letter expressions for entanglement-assisted quantum rate distortion. Moreover, we prove several quantum source-channel separation theorems. The strongest of these are in the entanglement-assisted setting, in which we establish a necessary and sufficient condition for transmitting a memoryless source over a memoryless quantum channel up to a given distortion. Thomas Decker, Gábor Ivanyos, Miklos Santha and Pawel Wocjan. Hidden Symmetry Subgroup Problems Abstract: We advocate a new approach to addressing hidden structure problems and finding efficient quantum algorithms. We introduce and investigate the Hidden Symmetry Subgroup Problem (HSSP), which is a generalization of the well-studied Hidden Subgroup Problem (HSP). Given a group acting on a set and an oracle whose level sets define a partition of the set, the task is to recover the subgroup of symmetries of this partition inside the group. The HSSP provides a unifying framework that, besides the HSP, encompasses a wide range of algebraic oracle problems, including quadratic hidden polynomial problems. While the HSSP can have provably exponential quantum query complexity, we obtain efficient quantum algorithms for various interesting cases. To achieve this, we present a general method for reducing the HSSP to the HSP, which works efficiently in several cases related to symmetries of polynomials. The HSSP therefore connects in a rather surprising way certain hidden polynomial problems with the HSP. Using this connection, we obtain the first efficient quantum algorithm for the hidden polynomial problem for multivariate quadratic polynomials over fields of constant characteristic.
We also apply the new methods to polynomial function graph problems and present an efficient quantum procedure for constant-degree multivariate polynomials over any field. This result improves in several ways on the currently known algorithms. Guillaume Duclos-Cianci, Héctor Bombin and David Poulin. Equivalence of Topological Codes and Fast Decoding Algorithms Abstract: Two topological phases are equivalent if they are connected by a local unitary transformation. In this sense, classifying topological phases amounts to classifying long-range entanglement patterns. We show that all 2D topological stabilizer codes are equivalent to several copies of one universal phase: Kitaev's topological code. Error correction benefits from the corresponding local mappings. Omar Fawzi, Patrick Hayden, Ivan Savov, Pranab Sen and Mark Wilde. Advances in classical communication for network quantum information theory Abstract: Our group has developed new techniques that have yielded significant advances in network quantum information theory. We have established the existence of a quantum simultaneous decoder for two-sender quantum multiple access channels by using novel methods to deal with the non-commutativity of the many operators involved, and we have also applied this result in various scenarios, including unassisted and assisted classical communication over quantum multiple access channels, quantum broadcast channels, and quantum interference channels. Prior researchers have already considered classical communication over quantum multiple-access and broadcast channels, but our work extends and in some cases improves upon this prior work. Also, we are the first to make progress on the capacity of the quantum interference channel, which is a channel with two senders and two receivers, where one sender is interested in communicating with one receiver and the other sender with the other receiver.
The aim of the proposed talk at QIP 2012 is to summarize this recent work and its applications as well as to discuss new avenues for network quantum information theory that may make use of these results. Rodrigo Gallego, Lars Würflinger, Antonio Acín and Miguel Navascués. An operational framework for nonlocality Abstract: Since the advent of the first quantum information protocols, entanglement was recognized as a key ingredient for quantum information purposes, necessary for quantum computation or cryptography. A framework was developed to characterize and quantify entanglement as a resource based on the following operational principle: entanglement among N parties cannot be created by local operations and classical communication, even when N-1 parties collaborate. More recently, nonlocality has been identified as another resource, alternative to entanglement and necessary for device-independent quantum information protocols. We introduce a novel framework for nonlocality based on a similar principle: nonlocality among N parties cannot be created by local operations and shared randomness even when N-1 parties collaborate. We then show that the standard definition of multipartite nonlocality, due to Svetlichny, is inconsistent with this operational approach: according to it, genuine tripartite nonlocality could be created by two collaborating parties. We then discuss alternative definitions for which consistency is recovered. Rodrigo Gallego, Lars Erik Würflinger, Antonio Acín and Miguel Navascués. Quantum correlations require multipartite information principles Abstract: Identifying which correlations among distant observers are possible within our current description of Nature, based on quantum mechanics, is a fundamental problem in Physics. Recently, information concepts have been proposed as the key ingredient to characterize the set of quantum correlations. 
Novel information principles, such as information causality or non-trivial communication complexity, have been introduced in this context and successfully applied to some concrete scenarios. We show in this work a fundamental limitation of this approach: no principle based on bipartite information concepts is able to single out the set of quantum correlations for an arbitrary number of parties. Our results reflect the intricate structure of quantum correlations and imply that new and intrinsically multipartite information concepts are needed for their full understanding. Sevag Gharibian and Julia Kempe. Hardness of approximation for quantum problems Abstract: The polynomial hierarchy plays a central role in classical complexity theory. Here, we define a quantum generalization of the polynomial hierarchy, and initiate its study. We show that not only are there natural complete problems for the second level of this quantum hierarchy, but that these problems are in fact strongly hard to approximate. Our results thus yield the first known hardness of approximation results for a quantum complexity class. Our approach is based on the use of dispersers, and is inspired by the classical results of Umans regarding hardness of approximation for the second level of the classical polynomial hierarchy [Umans, FOCS 1999]. Gus Gutoski and Xiaodi Wu. Parallel approximation of min-max problems with applications to classical and quantum zero-sum games Abstract: This paper presents an efficient parallel algorithm for a new class of min-max problems based on the matrix multiplicative weight (MMW) update method. Our algorithm can be used to find near-optimal strategies for competitive two-player classical or quantum games in which a referee exchanges any number of messages with one player followed by any number of additional messages with the other.
This algorithm considerably extends the class of games which admit parallel solutions and demonstrates for the first time the existence of a parallel algorithm for any game (classical or quantum) in which one player reacts adaptively to the other. As a direct consequence, we prove that several competing-provers complexity classes, such as QRG(2), SQG and two new classes called DIP and DQIP, collapse to PSPACE. A special case of our result is a parallel approximation scheme for a new class of semidefinite programs whose feasible region consists of n-tuples of semidefinite matrices that satisfy a "transcript-like" consistency condition. Applied to this special case, our algorithm yields a direct polynomial-space simulation of multi-message quantum interactive proofs resulting in a first-principles proof of QIP = PSPACE. It is noteworthy that our algorithm establishes a new way, called the min-max approach, to solve SDPs, in contrast to the primal-dual approach to SDPs used in the original proof of QIP = PSPACE. Jeongwan Haah. Local stabilizer codes in three dimensions without string logical operators Abstract: We suggest concrete models for self-correcting quantum memory by reporting examples of local stabilizer codes in 3D that have no string logical operators. Previously known local stabilizer codes in 3D all have string-like logical operators, which make the codes non-self-correcting. We introduce a notion of "logical string segments" to avoid difficulties in defining one-dimensional objects in discrete lattices. We prove that every string-like logical operator of our code can be deformed to a disjoint union of short segments, and each segment is in the stabilizer group. Esther Haenggi and Marco Tomamichel. The Link between Uncertainty Relations and Non-Locality Abstract: Two of the most intriguing features of quantum physics are the uncertainty principle and the occurrence of non-local correlations.
The uncertainty principle states that there exist pairs of non-compatible measurements on quantum systems such that their outcomes cannot be simultaneously predicted by any observer. Non-local correlations of measurement outcomes at different locations cannot be explained by classical physics, but appear in quantum mechanics in the presence of entanglement. Here, we show that these two essential properties of quantum mechanics are quantitatively related. Namely, we provide an entropic uncertainty relation that gives a lower bound on the uncertainty of the binary outcomes of two measurements in terms of the maximum Clauser-Horne-Shimony-Holt value that can be achieved using the same measurements. We discuss an application of this uncertainty relation to certify a quantum source using untrusted devices. Rahul Jain and Ashwin Nayak. A quantum information cost trade-off for the Augmented Index Abstract: In this work we establish a trade-off between the amount of (classical and) quantum information the two parties necessarily reveal about their inputs in the process of computing Augmented Index, a natural variant of the Index function. A surprising feature of this trade-off is that it holds even under a distribution on inputs on which the function value is "known in advance". In fact, this is the price paid by any protocol that works correctly on a "hard" distribution. We show that in any quantum protocol that computes the Augmented Index function correctly with constant error on the uniform distribution, either Alice reveals Omega(n/t) information about her n-bit input, or Bob reveals Omega(1/t) information about his (log n)-bit input, where t is the number of messages in the protocol, even when the inputs are drawn from an "easy" distribution, the uniform distribution over inputs which evaluate to 0.
The classical version of this result has implications for the space required by streaming algorithms---algorithms that scan the input sequentially only a few times, while processing each input symbol quickly using a small amount of space. It implies that certain context-free properties need space Omega(sqrt(n)/T) on inputs of length n, when allowed T unidirectional passes over the input. The quantum version would have similar consequences, provided a certain information inequality holds. Andrew Landahl, Jonas Anderson and Patrick Rice. Fault-tolerant quantum computing with color codes Abstract: We present and analyze protocols for fault-tolerant quantum computing using color codes. To process these codes, no qubit movement is necessary; nearest-neighbor gates in two spatial dimensions suffice. Our focus is on the color codes defined by the 4.8.8 semiregular lattice, as they provide the best error protection per physical qubit among color codes. We present circuit-level schemes for extracting the error syndrome of these codes fault-tolerantly. We further present an integer-program-based decoding algorithm for identifying the most likely error given the (possibly faulty) syndrome. We simulated our syndrome extraction and decoding algorithms against three physically-motivated noise models using Monte Carlo methods, and used the simulations to estimate the corresponding accuracy thresholds for fault-tolerant quantum error correction. We also used a self-avoiding walk analysis to lower-bound the accuracy threshold for two of these noise models. We present two methods for fault-tolerantly computing with these codes. In the first, many of the operations are transversal and therefore spatially local if two-dimensional arrays of qubits are stacked atop each other. In the second, code deformation techniques are used so that all quantum processing is spatially local in just two dimensions.
In both cases, the accuracy threshold for computation is comparable to that for error correction. Our analysis demonstrates that color codes perform slightly better than Kitaev's surface codes when circuit details are ignored. When these details are considered, we estimate that color codes achieve a threshold of 0.082(3)%, which is higher than the threshold of 1.3 times 10^{-5} achieved by concatenated coding schemes restricted to nearest-neighbor gates in two dimensions [Spedalieri and Roychowdhury, Quant. Inf. Comp. 9, 666 (2009)] but lower than the threshold of 0.75% to 1.1% reported for the Kitaev codes subject to the same restrictions [Raussendorf and Harrington, Phys. Rev. Lett. 98, 190504 (2007); Wang et al., Phys. Rev. A 83, 020302(R) (2011)]. Finally, because the behavior of our decoder's performance for two of the noise models we consider maps onto an order-disorder phase transition in the three-body random-bond Ising model in 2D and the corresponding random-plaquette gauge model in 3D, our results also answer the Nishimori conjecture for these models in the negative: the statistical-mechanical classical spin systems associated to the 4.8.8 color codes are counterintuitively more ordered at positive temperature than at zero temperature. Troy Lee, Rajat Mittal, Ben Reichardt, Robert Spalek and Mario Szegedy. Quantum query complexity for state conversion Abstract: State conversion generalizes query complexity to the problem of converting between two input-dependent quantum states by making queries to the input. We characterize the complexity of this problem by introducing a natural information-theoretic norm that extends the Schur product operator norm. The complexity of converting between two systems of states is given by the distance between them, as measured by this norm. 
In the special case of function evaluation, the norm is closely related to the general adversary bound, a semi-definite program that lower-bounds the number of input queries needed by a quantum algorithm to evaluate a function. We thus obtain that the general adversary bound characterizes the quantum query complexity of any function whatsoever. This generalizes and simplifies the proof of the same result in the case of boolean input and output. Also in the case of function evaluation, we show that our norm satisfies a remarkable composition property, implying that the quantum query complexity of the composition of two functions is at most the product of the query complexities of the functions, up to a constant. Finally, our result implies that discrete and continuous-time query models are equivalent in the bounded-error setting, even for the general state-conversion problem. Troy Lee and Jérémie Roland. A strong direct product theorem for quantum query complexity Abstract: We show that quantum query complexity satisfies a strong direct product theorem. This means that computing k copies of a function with less than k times the quantum queries needed to compute one copy of the function implies that the overall success probability will be exponentially small in k. For a boolean function f we also show an XOR lemma---computing the parity of k copies of f with less than k times the queries needed for one copy implies that the advantage over random guessing will be exponentially small. We do this by showing that the multiplicative adversary method, which inherently satisfies a strong direct product theorem, is always at least as large as the additive adversary method, which is known to characterize quantum query complexity. Francois Le Gall. Improved Output-Sensitive Quantum Algorithms for Boolean Matrix Multiplication Abstract: We present new quantum algorithms for Boolean Matrix Multiplication in both the time complexity and the query complexity settings. 
As far as time complexity is concerned, our results show that the product of two n x n Boolean matrices can be computed on a quantum computer in time O(n^{3/2}+nk^{3/4}), where k is the number of non-zero entries in the product, improving over the output-sensitive quantum algorithm by Buhrman and Spalek (SODA'06) that runs in O(n^{3/2}k^{1/2}) time. This is done by constructing a quantum version of a recent classical algorithm by Lingas (ESA'09), using quantum techniques such as quantum counting to exploit the sparsity of the output matrix. As far as query complexity is concerned, our results improve over the quantum algorithm by Vassilevska Williams and Williams (FOCS'10) based on a reduction to the triangle finding problem. One of the main contributions leading to this improvement is the construction of a quantum algorithm for triangle finding tailored especially for the tripartite graphs appearing in the reduction. Spyridon Michalakis and Justyna Pytel. Stability of Frustration-Free Hamiltonians Abstract: We prove stability of the spectral gap for gapped, frustration-free Hamiltonians under general, quasi-local perturbations. We present a necessary and sufficient condition for stability, which we call Local Topological Quantum Order. This result extends previous work by Bravyi et al. on the stability of topological quantum order for Hamiltonians composed of commuting projections with a common zero-energy subspace. Abel Molina and John Watrous. Hedging bets with correlated quantum strategies Abstract: This paper studies correlations among independently administered hypothetical tests of a simple interactive type, and demonstrates that correlations arising in quantum information theoretic variants of these tests can exhibit a striking non-classical behavior. When viewed in a game-theoretic setting, these correlations are suggestive of a perfect form of hedging, where the risk of a loss in one game of chance is perfectly offset by one's actions in a second game. 
This type of perfect hedging is quantum in nature; it is not possible in classical variants of the tests we consider. Sandu Popescu. The smallest possible thermal machines and the foundations of thermodynamics Abstract: In my talk I raise the question of the fundamental limits to the size of thermal machines - refrigerators, heat pumps and work-producing engines - and I will present the smallest possible ones. I will then discuss the issue of a possible complementarity between size and efficiency, show that even the smallest machines can be maximally efficient, and present a new point of view on what work is and what thermal machines actually do. Finally I will present a completely new approach to the foundations of thermodynamics that follows from these results, which are in turn inspired by quantum information concepts. Joseph M. Renes, Frederic Dupuis and Renato Renner. Quantum Polar Coding Abstract: Polar coding, introduced in 2008 by Arikan, is the first efficiently encodable and decodable coding scheme that provably achieves the Shannon bound for the rate of information transmission over classical discrete memoryless channels (in the asymptotic limit of large block sizes). Here we study the use of polar codes for the efficient coding and decoding of quantum information. Focusing on the case of qubit channels, we construct a coding scheme which, using some pre-shared entanglement, asymptotically achieves a net transmission rate equal to the coherent information. Furthermore, for channels with sufficiently low noise level, no pre-shared entanglement is required. Jérémie Roland. Quantum rejection sampling Abstract: Rejection sampling is a well-known method to sample from a target distribution, given the ability to sample from another distribution. The method was first formalized by von Neumann (1951) and has many applications in classical computing.
We define a quantum analogue of rejection sampling: given a black box producing a coherent superposition of (possibly unknown) quantum states with some amplitudes, the problem is to prepare a coherent superposition of the same states, albeit with different target amplitudes. The main result of this paper is a tight characterization of the query complexity of this quantum state generation problem. We exhibit an algorithm, which we call quantum rejection sampling, and analyze its cost using semidefinite programming. Our proof of a matching lower bound is based on the automorphism principle, which allows one to symmetrize any algorithm over the automorphism group of the problem. Furthermore, we illustrate how quantum rejection sampling may be used as a primitive in designing quantum algorithms, by providing three different applications. We first show that it was implicitly used in the quantum algorithm for linear systems of equations by Harrow, Hassidim and Lloyd. Secondly, we show that it can be used to speed up the main step in the quantum Metropolis sampling algorithm by Temme et al. Finally, we derive a new quantum algorithm for the hidden shift problem of an arbitrary Boolean function. Joint work with Maris Ozols and Martin Roetteler, to appear in ITCS'12. Norbert Schuch. Complexity of commuting Hamiltonians on a square lattice of qubits Abstract: We consider the computational complexity of Hamiltonians which are sums of commuting terms acting on plaquettes in a square lattice of qubits, and we show that deciding whether the ground state minimizes the energy of each local term individually is in the complexity class NP. That is, if the ground state has this property, this can be proven using a classical certificate which can be efficiently verified on a classical computer. Unlike previous results on commuting Hamiltonians, our certificate proves the existence of such a state without giving instructions on how to prepare it.
Martin Schwarz, Kristan Temme, Frank Verstraete, Toby Cubitt and David Perez-Garcia. Preparing projected entangled pair states on a quantum computer Abstract: We present a quantum algorithm to prepare injective PEPS on a quantum computer, a problem raised by Verstraete, Wolf, Perez-Garcia, and Cirac [PRL 96, 220601 (2006)]. To be efficient, our algorithm requires well-conditioned PEPS projectors and, essentially, an inverse-polynomial spectral gap of the PEPS' parent Hamiltonian. Injective PEPS are the unique ground states of their parent Hamiltonians and capture ground states of many physically relevant many-body Hamiltonians, such as the 2D AKLT state. Even more general is the class of G-injective PEPS, which have parent Hamiltonians with a ground state space of degeneracy |G|, the order of the discrete symmetry group G. As our second result we show how to prepare G-injective PEPS under similar assumptions as well. The class of G-injective PEPS contains topologically ordered states, such as Kitaev's toric code, which our algorithm is thus able to prepare. Yaoyun Shi and Xiaodi Wu. Epsilon-net method for optimizations over separable states Abstract: In this paper we study algorithms for linear optimization over separable quantum states, which is an NP-hard problem. Precisely, the objective function is <Q, \rho> for some Hermitian Q, where \rho is a separable quantum state. Our strategy is to enumerate (via epsilon-nets) more cleverly, with the help of the structure of certain interesting Qs. As a result, we obtain algorithms that are efficient in either time or space for the following cases. Firstly, we provide a polynomial-time (or polynomial-space) algorithm when Q can be decomposed into the form Q = sum_{i=1}^M Q^1_i tensor Q^2_i with small M. As a direct consequence, we prove that a variant of the complexity class QMA(2), in which the verifier performs only a logarithmic number of unitary gates acting on both proofs simultaneously, is contained in PSPACE.
We also initiate the study of the natural extension of the local Hamiltonian problem to the k-partite case. By the same algorithm, we conclude that those problems also remain inside PSPACE. Secondly, for a positive semidefinite Q we obtain an algorithm with running time exponential in the Frobenius norm of Q, which reproves one of the main results of Brandao, Christandl and Yard [STOC pp. 343 (2011)]. Note that this result was originally proved by making use of many non-trivial results in quantum information, whereas our algorithm only utilizes fundamental operations of matrices and the Schmidt decomposition of bipartite pure states. Graeme Smith, John A. Smolin and Jon Yard. Quantum communication with Gaussian channels of zero quantum capacity Abstract: Superactivation of channel capacity occurs when two channels have zero capacity separately, but can have nonzero capacity when used together. We present a family of simple and natural examples of superactivation of quantum capacity using Gaussian channels that can potentially be realized with current technologies. This demonstrates the richness of the set of Gaussian channels and the complexity of their capacity-achieving protocols. Superactivation is therefore not merely an oddity confined to unrealistic models but is in fact necessary for a proper characterization of realistic communication settings. Rolando Somma and Sergio Boixo. Spectral Gap Amplification Abstract: Many problems in quantum information reduce to preparing a specific eigenstate of some Hamiltonian H. The generic cost of quantum algorithms for these problems is determined by the inverse spectral gap of H for that eigenstate and the cost of evolving with H for some fixed time. The goal of spectral gap amplification is therefore to construct a Hamiltonian H' with the same eigenstate as that of H but a bigger spectral gap, requiring that constant-time evolutions with H' and H can be implemented with nearly the same cost.
We show that a quadratic spectral gap amplification is possible when H satisfies a frustration-free property and give H' for this case. This results in quantum speedups for some adiabatic evolutions. Defining a suitable oracle model, we establish that the quadratic amplification is optimal for frustration-free Hamiltonians and that no spectral gap amplification is possible if the frustration-free property is removed. A corollary is that finding a similarity transformation between a stochastic Hamiltonian and the corresponding stochastic matrix is hard in the oracle model, setting strong limits on the power of some classical methods that simulate quantum adiabatic evolutions. Implications of spectral gap amplification for quantum speedups of optimization problems and the preparation of projected entangled pair states (PEPS) are discussed. Nengkun Yu, Runyao Duan and Quanhua Xu. Bounds on the distance between a unital quantum channel and the convex hull of unitary channels, with applications to the asymptotic quantum Birkhoff conjecture Abstract: Motivated by the recent resolution of the Asymptotic Quantum Birkhoff Conjecture (AQBC), we attempt to estimate the distance between a given unital quantum channel and the convex hull of unitary channels. We provide two lower bounds on this distance by employing techniques from quantum information and operator algebra, respectively. We then show how to apply these results to construct some explicit counterexamples to the AQBC.  QIP2012
https://forum.knime.com/t/meaning-of-distance-matrix-output/1418
# meaning of distance matrix output

hello! I'm using the Distance Matrix node in my workflow. I calculate fingerprints and then use the Tanimoto function of the Distance Matrix node to calculate a distance matrix of the fingerprints. Now the distances are listed in the output column, but I don't understand between which row values each distance is meant. OK, in the first row there is no distance because it compares the first entry with itself. In the second row there is the distance between the first and the second entry. But what is the first entry of the third row: is it the distance between molecule 3 and 1, or between 3 and 2? And what is then the second entry of the third row (the distance between which row entries)? And so on...

It is a lower triangular distance matrix, e.g. in DataRow i at position j you find the distance between the molecule in Row i and the molecule in Row j (only for j < i).

Hi, if you don't want a matrix of all the molecules compared with one another, and simply want a Tanimoto similarity between one molecule (or a set of molecules) and a list of molecules, then you can use either the Indigo Fingerprint Similarity or the Erlwood Fingerprint Similarity node. If you are using more than one molecule to compare against, and want an overall average similarity, then simply tick "Multifusion query" in the Erlwood node, or the average aggregation type in the Indigo node. Simon.
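The indexing convention described above can be sketched in a few lines of Python. The function names here are illustrative, not part of the KNIME API; fingerprints are modeled as Python sets of "on" bit positions.

```python
# Sketch of how a lower-triangular distance matrix is laid out.

def tanimoto_distance(fp_a, fp_b):
    """Tanimoto distance = 1 - |A & B| / |A | B| for bit-set fingerprints."""
    union = len(fp_a | fp_b)
    if union == 0:
        return 0.0
    return 1.0 - len(fp_a & fp_b) / union

def lower_triangle(fingerprints):
    """Row i holds the distances to rows 0..i-1, mirroring the node's output."""
    matrix = []
    for i, fp_i in enumerate(fingerprints):
        matrix.append([tanimoto_distance(fp_i, fingerprints[j]) for j in range(i)])
    return matrix

fps = [{1, 2, 3}, {2, 3, 4}, {1, 4, 5}]
m = lower_triangle(fps)
# m[0] is empty (nothing to compare the first row against);
# m[2][0] is the distance between molecule 3 and molecule 1,
# m[2][1] the distance between molecule 3 and molecule 2.
```

So in the third DataRow the first entry compares molecules 3 and 1 and the second entry compares molecules 3 and 2, matching the question above.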
https://www.encyclopediaofmath.org/index.php/Complexification_of_a_Lie_group
# Complexification of a Lie group 2010 Mathematics Subject Classification: Primary: 22E [MSN][ZBL] The complexification of a Lie group $G$ over $\R$ is a complex Lie group $G_\C$ containing $G$ as a real Lie subgroup such that the Lie algebra $\def\fg{ {\mathfrak g}}\fg$ of $G$ is a real form of the Lie algebra $\fg_\C$ of $G_\C$ (see Complexification of a Lie algebra). One then says that the group $G$ is a real form of the Lie group $G_\C$. For example, the group $\def\U{ {\rm U}}\U(n)$ of all unitary matrices of order $n$ is a real form of the group $\def\GL{ {\rm GL}}\GL(n,\C)$ of all non-singular matrices of order $n$ with complex entries. There is a one-to-one correspondence between the complex-analytic linear representations of a connected simply-connected complex Lie group $G_\C$ and the real-analytic representations of its connected real form $G$, under which irreducible representations correspond to each other. This correspondence is set up in the following way: If $\rho$ is an (irreducible) finite-dimensional complex-analytic representation of $G_\C$, then the restriction of $\rho$ to $G$ is an (irreducible) real-analytic representation of $G$. Not every real Lie group has a complexification. In particular, a connected semi-simple Lie group $G$ has a complexification if and only if $G$ is linear, that is, isomorphic to a subgroup of some group $\GL(n,\C)$. For example, the universal covering of the group of real second-order matrices with determinant 1 does not have a complexification. On the other hand, every compact Lie group has a complexification. The non-existence of complexifications for certain real Lie groups inspired the introduction of the more general notion of a universal complexification $(\tilde G,\tau)$ of a real Lie group $G$. 
Here $\tilde G$ is a complex Lie group and $\tau : G\to \tilde G$ is a real-analytic homomorphism such that for every complex Lie group $H$ and every real-analytic homomorphism $\alpha : G\to H$ there exists a unique complex-analytic homomorphism $\beta : \tilde G\to H$ such that $\alpha=\beta\circ \tau$. The universal complexification of a Lie group always exists and is uniquely defined [Bo]. Uniqueness means that if $(\tilde G',\tau')$ is another universal complexification of $G$, then there is a natural isomorphism $\lambda : \tilde G\to \tilde G'$ such that $\lambda\circ\tau = \tau'$. In general, $\dim_\C\tilde G \le \dim_\R G$, but if $G$ is simply connected, then $\dim_\C \tilde G = \dim_\R G$ and the kernel of $\tau$ is discrete.
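The example above, U(n) as a real form of GL(n, C), can be checked numerically at the Lie-algebra level: every complex matrix splits into an anti-Hermitian part plus $i$ times an anti-Hermitian part, i.e. $\mathfrak{gl}(n,\C) = \mathfrak{u}(n)\oplus i\,\mathfrak{u}(n)$. A small sketch with NumPy:

```python
import numpy as np

# gl(n, C) = u(n) + i*u(n), where u(n) is the real Lie algebra of
# anti-Hermitian matrices: decompose a random 3x3 complex matrix.

rng = np.random.default_rng(0)
n = 3
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

A = (X - X.conj().T) / 2          # anti-Hermitian part: A^dagger = -A
B = -1j * (X + X.conj().T) / 2    # also anti-Hermitian: B^dagger = -B

assert np.allclose(A.conj().T, -A)
assert np.allclose(B.conj().T, -B)
assert np.allclose(X, A + 1j * B)  # X lies in u(n) + i*u(n)
```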
https://cn.vosi.biz/bbs/getmsg.aspx/bbsID110/msg_id1146953
Forum Index \ DriveHQ Customer Support Forum \

• vanoordt • (12 posts) Hi, I have a serious problem with restoring encrypted files. The restore tab of the backup utility doesn't show files, even though it does show the used space. The browser also shows the files; however, since they are encrypted, it makes no sense to download them. I have this problem for 3 of my 13 tasks now. I have already recreated one of these tasks before, because of this problem. The new tasks performed well for some time and then did the same. What I see is this: and when I double click on it: (Hope these pictures show in the post.) 5/19/2007 3:32:06 AM

• DriveHQ Webmaster • (1098 posts) Subject: Re: Restore doesn't show files but shows space used, explorer shows the files. If you can see the files from Internet Explorer, then the files should be there. If you have backed up a folder, in the restore tab you need to double click on the folder to enter the folder; it will then list the subfolders and files. If you are not sure about it, you can also use DriveHQ FileManager 3.8 to download the files. The first time when you download encrypted files, it will prompt you to input the encryption key. 5/19/2007 4:37:38 AM

• vanoordt • (12 posts) I understand that "in the restore tab, you need to double click on the folder to enter the folder; it will then list the subfolders and files"; however, if I double click it shows no files, even though they are there as stated in my first post. I can send two screenshots to illustrate this. The contents of other tasks, encrypted with the same key, do show, and thus can be downloaded. Thanks for your attention to this problem. 5/19/2007 1:27:31 PM

• DriveHQ Webmaster • (1098 posts) Please double check the tasks were backed up properly. You can manually restart the task as needed. You can delete an existing task without deleting the backup sets on the server, and then recreate the same backup task to fix any possible problems. 5/19/2007 3:49:08 PM

• vanoordt • (12 posts) The files are properly backed up. I can use FileManager to download them; however, since they are encrypted, I can't use them. 5/21/2007 1:08:19 AM
https://artofproblemsolving.com/wiki/index.php?title=2021_AMC_12A_Problems/Problem_22&diff=next&oldid=146099
# Difference between revisions of "2021 AMC 12A Problems/Problem 22"

## Problem

Suppose that the roots of the polynomial $x^3 + ax^2 + bx + c$ are $\cos \frac{2\pi}{7}$, $\cos \frac{4\pi}{7}$, and $\cos \frac{6\pi}{7}$, where angles are in radians. What is $abc$?

## Solution

Part 1: solving for c

Notice that $-c$ is the product of the roots by Vieta's formulas: $-c = \cos \frac{2\pi}{7} \cos \frac{4\pi}{7} \cos \frac{6\pi}{7}$. Multiply by $8 \sin \frac{2\pi}{7}$: $-8c \sin \frac{2\pi}{7} = 8 \sin \frac{2\pi}{7} \cos \frac{2\pi}{7} \cos \frac{4\pi}{7} \cos \frac{6\pi}{7}$. Then use the sine addition formula backwards: $8 \sin \frac{2\pi}{7} \cos \frac{2\pi}{7} \cos \frac{4\pi}{7} \cos \frac{6\pi}{7} = 4 \sin \frac{4\pi}{7} \cos \frac{4\pi}{7} \cos \frac{6\pi}{7} = 2 \sin \frac{8\pi}{7} \cos \frac{6\pi}{7} = \sin \frac{14\pi}{7} + \sin \frac{2\pi}{7} = \sin \frac{2\pi}{7}$, so $c = -\frac{1}{8}$.

Part 2: starting to solve for b

$b$ is the sum of roots two at a time by Vieta's: $b = \cos \frac{2\pi}{7} \cos \frac{4\pi}{7} + \cos \frac{2\pi}{7} \cos \frac{6\pi}{7} + \cos \frac{4\pi}{7} \cos \frac{6\pi}{7}$. We know that $\cos \alpha \cos \beta = \frac{\cos(\alpha + \beta) + \cos(\alpha - \beta)}{2}$. By plugging all the parts in we get: $b = \frac{1}{2}\left( \cos \frac{6\pi}{7} + \cos \frac{2\pi}{7} + \cos \frac{8\pi}{7} + \cos \frac{4\pi}{7} + \cos \frac{10\pi}{7} + \cos \frac{2\pi}{7} \right)$. Which ends up being: $b = \cos \frac{2\pi}{7} + \cos \frac{4\pi}{7} + \cos \frac{6\pi}{7}$. Which is shown in the next part to equal $-\frac{1}{2}$, so $b = -\frac{1}{2}$.

Part 3: solving for a and b as the sum of roots

$-a$ is the sum of the roots. The real values of the 7th roots of unity are: $1, \cos \frac{2\pi}{7}, \cos \frac{4\pi}{7}, \cos \frac{6\pi}{7}, \cos \frac{8\pi}{7}, \cos \frac{10\pi}{7}, \cos \frac{12\pi}{7}$, and they sum to $0$. If we subtract 1, and condense identical terms ($\cos \frac{8\pi}{7} = \cos \frac{6\pi}{7}$ and so on), we get: $2\left( \cos \frac{2\pi}{7} + \cos \frac{4\pi}{7} + \cos \frac{6\pi}{7} \right) = -1$. Therefore, we have $\cos \frac{2\pi}{7} + \cos \frac{4\pi}{7} + \cos \frac{6\pi}{7} = -\frac{1}{2}$, so $a = \frac{1}{2}$.

Finally multiply: $abc = \frac{1}{2} \cdot \left(-\frac{1}{2}\right) \cdot \left(-\frac{1}{8}\right) = \frac{1}{32}$.

~Tucker ~ pi_is_3.14

## See also

2021 AMC 12A (Problems • Answer Key • Resources) Preceded by Problem 21 Followed by Problem 23 All AMC 12 Problems and Solutions

The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
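The equation images in this revision did not survive extraction, but the underlying problem is 2021 AMC 12A Problem 22, whose roots are $\cos\frac{2\pi}{7}$, $\cos\frac{4\pi}{7}$, $\cos\frac{6\pi}{7}$; the three Vieta values are easy to confirm numerically:

```python
import math

# Numeric check of Vieta's formulas for the roots cos(2pi/7), cos(4pi/7),
# cos(6pi/7) of x^3 + a x^2 + b x + c.

r = [math.cos(2 * math.pi * k / 7) for k in (1, 2, 3)]

a = -sum(r)                            # negation of the sum of roots
b = r[0]*r[1] + r[0]*r[2] + r[1]*r[2]  # sum of roots two at a time
c = -r[0] * r[1] * r[2]                # negation of the product of roots

print(round(a, 6), round(b, 6), round(c, 6))  # 0.5 -0.5 -0.125
print(round(a * b * c, 6))                    # 0.03125, i.e. abc = 1/32
```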
https://www.physicsforums.com/threads/electron-vector-problem.42849/
# Homework Help: Electron - Vector Problem

1. Sep 12, 2004

### 0aNoMaLi7

One part of this problem has me confused... I'd appreciate any guidance. I have parts (a) and (b), but (c) and (d) are TOTALLY losing me. I don't even know where to begin. THANKS

"An electron's position is given by r = 3.00t i - 5.00t^2 j + 3.00 k, with t in seconds and r in meters"

(a) In unit-vector notation, what is the electron's velocity v(t)? My answer: 3.00 i - 10.0t j + 0.00 k

(b) What is v in unit-vector notation at t = 6.00 s? My answer: 3.00 i - 60.0 j + 0.00 k

(c) What is the magnitude of v at t = 6.00 s?

(d) What angle does v make with the positive direction of the x axis at t = 6.00 s?

Thank you.

2. Sep 12, 2004

### Tide

To find the magnitude of a vector just square each component, add them up and find the square root. You can find the angle between two vectors using the "dot product:" $$\vec A \cdot \vec B = A B \cos \phi$$ where A and B are the magnitudes of the vectors and $\phi$ is the angle between them.

3. Sep 12, 2004

### 0aNoMaLi7

thanks.... solved it :-)
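For parts (c) and (d), Tide's recipe boils down to two library calls: the magnitude is the root-sum-square of the components, and the angle with the +x axis follows from atan2 (equivalent to the dot product with the unit x-vector). A quick check:

```python
import math

# v(6.00 s) = 3.00 i - 60.0 j (m/s), from part (b).
vx, vy = 3.00, -60.0

speed = math.hypot(vx, vy)                # sqrt(vx^2 + vy^2)
angle = math.degrees(math.atan2(vy, vx))  # angle measured from the +x axis

print(f"{speed:.1f} m/s")   # 60.1 m/s
print(f"{angle:.1f} deg")   # -87.1 deg
```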
https://physics.stackexchange.com/questions/346923/are-indices-conventionally-raised-inside-or-outside-of-partial-derivatives-in-ge
# Are indices conventionally raised inside or outside of partial derivatives in general relativity? If $A_\mu$ is a one-form, then is there a widely accepted convention among physicists about whether the notation $$\partial_\mu A^\mu \tag{1}$$ means "the partial-derivative four-divergence of the four-vector $A^\mu$ corresponding to $A_\mu$", i.e. $$\partial_\mu (g^{\mu \nu} A_\nu),\tag{2}$$ or just $$g^{\mu \nu} \partial_\mu A_\nu~?\tag{3}$$ The former definition corresponds more naturally to our usual definition of the partial derivative, but has the unfortunate property that $\partial_\mu A^\mu \neq \partial^\mu A_\mu$. For higher partial derivatives, do we adopt the convention that all partial derivatives are taken before raising or lowering any indices, so that the the contractions are invariant under the interchange of which index is raised and which is lowered? Or are partial (as opposed to covariant) derivatives used rarely enough in GR that there's no need to adopt a general convention for how they work (after all, the quantity $\partial_\mu A^\mu$ is non-tensorial under either convention)? (Please don't close this question as being about math rather than physics. This question is asking whether there is a notational convention accepted among physicists, and has nothing to do with math.) • I don't know of any conventions and I don't think there are any, precisely for the reason you state. But if I had to guess, I'd say that most people would agree that the metric goes inside the derivative. Jul 21, 2017 at 21:51 • Perhaps the question could be better-posed/more interesting from the math pov if you replaced those partials by covariant derivatives, but with a connection that need not be metric compatible. Thoughts? Jul 21, 2017 at 22:10 • @AccidentalFourierTransform Partial derivatives are covariant derivatives with a connection that need not be metric compatible. 
Technically, any system of coordinates defines a connection in which parallel-transport is defined by simply keeping the partials with respect to each coordinate direction constant. That's just not usually a very physically useful connection (except in the case of Cartesian coordinates on flat spacetime). Jul 21, 2017 at 23:05 • @tparker good point. Jul 22, 2017 at 9:37 Most of the doubts in your questions can be solved if you avoid calling a vector (or a form) by its coordinates. $A_{\mu}$ is not a one-form: $A = A_{\mu}dx^{\mu}$ is. $g^{\mu\nu}$ is not a tensor: $g= g^{\mu\nu}e_{\mu}\otimes e_{\nu}$ is. As such, one thing is just taking partial derivatives of some functions with respect to their variables, namely $$\sum_{\mu}\frac{\partial}{\partial x^{\mu}} A_{\mu}(x)$$ quite another is the contraction of a tensor, namely making the tensor act on some dual basis, that is $$\sum_{\mu\nu\sigma}(g^{\mu\nu}e_{\mu}\otimes e_{\nu})(A_{\sigma}dx^{\sigma}) = \sum_{\mu\nu\sigma}(g^{\mu\nu}A_{\sigma})\, e_{\mu}\, e_{\nu}(dx^{\sigma})$$ The convention is that you just have to carry the bases things act upon and that is it. • So what is your answer to my specific question? Jul 22, 2017 at 14:12 • The answer to your question is that there is no "raising or lowering of the indices", nor is there a need to define which derivatives to take first, because you are doing two different things that you are mistaking for one another: the former is taking a divergence, the latter is contracting a tensor. Jul 22, 2017 at 14:19 • I'm not doing anything, I'm simply asking about the interpretation of the notation $\partial_\mu A^\mu$. I never actually did any mathematical operations. Jul 22, 2017 at 14:34 • $A_\mu$ is a one-form if you use abstract index notation, which is the correct thing to do. The indices are just type annotations. Jul 28, 2017 at 22:37 • ...indices all around $(A,B,...), (a,b,...)$ and so forth. In my opinion this is much clunkier than the standard one, to be honest. 
Jul 30, 2017 at 22:30 There isn't one, because partial derivatives are not meaningful in GR. Partial derivatives can appear in two places: • Exterior derivatives • Lie derivatives. Obviously they can also appear if you expand a covariant derivative, but you really shouldn't raise or lower individual indices then. For covariant derivatives, it doesn't matter, because $\nabla g=0$, so you can freely move $g$ in or out of the derivative, and then we have $\nabla_\mu A^\mu=\nabla^\mu A_\mu$. For Lie derivatives, you can express them with covariant derivatives. However it does matter there, because Lie derivatives do not commute with $g$ unless your vector field is a Killing field, so we have $\mathcal L_X A^\mu\neq(\mathcal L_X A_\nu)g^{\mu\nu}$; in this case, you need to specify whether you raise/lower before or after the Lie derivative. However I need to say that the index notation meshes really badly with the Lie-derivative notation anyways. For exterior derivatives, you can express them with covariant derivatives, and also, the exterior derivative is meaningful if and only if you calculate it on a differential form, which is, by definition, lower-indexed. As AccidentalFourierTransform said in the comments, the issue is more interesting if you have multiple connections and/or multiple metrics and/or a non-compatible connection. Every time I have seen such situations in the physics literature, the raisings/lowerings were written out explicitly, or a convention was declared beforehand, but because these occurrences are rather specific, one cannot really make a definitive convention in general. • @AccidentalFourierTransform I have noted non-metric connections in the last paragraph. Jul 22, 2017 at 14:01 • Note that, as I mentioned in a comment to the OP, partial derivatives technically are covariant derivatives with respect to a connection that is not necessarily metric compatible. 
Jul 22, 2017 at 15:34 • @tparker And that is not necessarily globally defined, and whose existence entirely depends on the whim of choosing a chart. While technically $\partial_\mu$ is indeed a local connection, in terms of function it has no internal meaning. It is only used as a "reference device" because we know how to calculate it. Jul 22, 2017 at 15:39 1. If $A^{\mu}$ is supposed to be (components of) a vector field, i.e. a (1,0) contravariant tensor field, then the expression (1) is not a divergence. A divergence of a vector field in a pseudo-Riemannian manifold is a scalar field, i.e. a (0,0) tensor field, and has the local form $${\rm div} A~=~ \frac{1}{\sqrt{|g|}}\partial_{\mu} (\sqrt{|g|} A^{\mu}) \tag{A}$$ 2. Similarly, if $A_{\nu}$ is supposed to be (components of) a co-vector field, i.e. a (0,1) covariant tensor field, then the expression (3) is not a (0,0) tensor field. 3. Apart from the important objection about not working with non-covariant quantities, if OP is merely asking about conventions for a notational short-hand for working with a partial derivative $$\partial^{\mu}\tag{B}$$ with raised index, say, in a general relativistic context, it seems most convenient to let the metric be outside, i.e. $$\partial^{\mu}~:=~g^{\mu\nu}\partial_{\nu}.\tag{C}$$ E.g. the Laplace-Beltrami operator would then become $$\Delta~=~\frac{1}{\sqrt{|g|}}\partial_{\mu}\sqrt{|g|}\partial^{\mu}.\tag{D}$$ But we cannot really recommend the notation (B) outside a special relativistic context in order not to create unnecessary confusion.
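That readings (2) and (3) of the notation genuinely differ whenever the metric is position-dependent can be illustrated with a one-dimensional caricature in SymPy; the choices of $g$ and $A$ below are arbitrary stand-ins, not anything from the question:

```python
import sympy as sp

# With a position-dependent (inverse) metric g(x), moving g inside or
# outside the partial derivative gives different answers.

x = sp.symbols('x')
g = x**2            # stand-in for a component of g^{mu nu}(x)
A = sp.sin(x)       # stand-in for a component of A_nu(x)

inside = sp.diff(g * A, x)    # analogue of d_mu (g^{mu nu} A_nu)
outside = g * sp.diff(A, x)   # analogue of g^{mu nu} d_mu A_nu

# The two differ by (dg/dx) * A, which vanishes only for constant g.
print(sp.simplify(inside - outside))  # 2*x*sin(x)
```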
https://scipost.org/SciPostPhys.10.1.016
## Gravity loop integrands from the ultraviolet Alex Edison, Enrico Herrmann, Julio Parra-Martinez, Jaroslav Trnka SciPost Phys. 10, 016 (2021) · published 25 January 2021 ### Abstract We demonstrate that loop integrands of (super-)gravity scattering amplitudes possess surprising properties in the ultraviolet (UV) region. In particular, we study the scaling of multi-particle unitarity cuts for asymptotically large momenta and expose an improved UV behavior of four-dimensional cuts through seven loops as compared to standard expectations. For N=8 supergravity, we show that the improved large momentum scaling combined with the behavior of the integrand under BCFW deformations of external kinematics uniquely fixes the loop integrands in a number of non-trivial cases. In the integrand construction, all scaling conditions are homogeneous. Therefore, the only required information about the amplitude is its vanishing at particular points in momentum space. This homogeneous construction gives indirect evidence for a new geometric picture for graviton amplitudes similar to the one found for planar N=4 super Yang-Mills theory. We also show how the behavior at infinity is related to the scaling of tree-level amplitudes under certain multi-line chiral shifts which can be used to construct new recursion relations. ### Authors / Affiliations: mappings to Contributors and Organizations See all Organizations. Funders for the research work leading to this publication
http://mathhelpforum.com/algebra/11301-equation.html
1. ## equation

x - 366 = -415

89 + y = 112

-27 = w - 14

2. Originally Posted by dianna

> x - 366 = -415
> 89 + y = 112
> -27 = w - 14

We've done a bunch of these for you. I think it would be better to see YOU do them. So I'll work out the first one, and you post your solutions to the second two and we'll look them over.

$x - 366 = -415$

$x - 366 + 366 = -415 + 366$

$x = -49$

The last two are done using almost exactly the same method.

-Dan
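The last two equations can be checked mechanically with the same subtract-from-both-sides step; a small Python sketch (the helper name is mine):

```python
def solve_shift(a, b):
    """Solve x + a = b for x by subtracting a from both sides."""
    return b - a

y = solve_shift(89, 112)     # 89 + y = 112
w = solve_shift(-14, -27)    # -27 = w - 14, i.e. w + (-14) = -27
print(y, w)  # 23 -13
```

Substituting back (89 + 23 = 112 and -13 - 14 = -27) confirms both answers.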
https://rd.springer.com/chapter/10.1007/978-3-662-03877-2_17
# Squeezed States of Light

• Pierre Meystre
• Murray Sargent III

## Abstract

The Heisenberg uncertainty principle $$\Delta A\Delta B \geqslant \frac{1} {2}\left| {\left\langle {\left[ {A,B} \right]} \right\rangle } \right|$$ between the standard deviations of two arbitrary observables, $\Delta A = \langle (A - \langle A\rangle)^2\rangle^{1/2}$ and similarly for $\Delta B$, has a built-in degree of freedom: one can squeeze the standard deviation of one observable provided one "stretches" that for the conjugate observable. For example, the position and momentum standard deviations obey the uncertainty relation $$\Delta x\Delta p \geqslant \hbar /2$$ (17.1) and we can squeeze $\Delta x$ to an arbitrarily small value at the expense of accordingly increasing the standard deviation $\Delta p$. All quantum mechanics requires is that the product be bounded from below. As discussed in Sect. 13.1, the electric and magnetic fields form a pair of observables analogous to the position and momentum of a simple harmonic oscillator. Accordingly, they obey a similar uncertainty relation $$\Delta E\Delta B \geqslant \left( {{\text{constant}}} \right)\hbar /2$$ (17.2).
http://math.stackexchange.com/questions/844042/homeomorphism-proof
# Homeomorphism proof $\mathbb{R}^2-(0 \times \mathbb{R}_+) \approx \mathbb{R}^2$ Now consider the map that sends the line $(-1 \times \mathbb{R}_+)$ to $(0 \times \mathbb{R}_+)$. And then continue this inductively. Every function is a map, because it is a translation. The composition of any finite number of maps is a map. The question I have is whether the countably infinite composition of these maps is a map. Thank you. - There's no meaningful definition of an infinite composition of maps. If $s(n)=n+1$ then what does the map $s^{\infty}$ map $n$ to? – Dan Rust Jun 22 '14 at 22:46 In the plane you can find simple homeomorphisms: first contract $R^2$ onto $R^*_+\times R$ via $(x,y)\mapsto(e^x,y)$, and then take the square by identifying the real plane with the complex numbers: $z\mapsto z^2$. This will give you a homeomorphism $R^2\to R^2\setminus R_-\times 0$ which you can then rotate to where you want it. – Olivier Bégassat Jun 22 '14 at 22:50 I guess, @Mike meant the map $(n,x)\mapsto (n+1,x)$ if $n\in\Bbb Z,\ n<0$ and $x>0$. – Berci Jun 22 '14 at 22:50 @Berci: yeah, that's basically what I meant. Additionally, all other points are mapped to themselves. – Mike Jun 22 '14 at 23:00 But this is not continuous on points of $\{n\}\times\Bbb R^{\ge 0}$ for $n<0,\,n\in\Bbb Z$. – Berci Jun 22 '14 at 23:02 Hint: Use polar coordinates, the angle measured starting from the given ray $\{0\}\times\Bbb R^+$ (i.e. the $y$-axis), then deform the angles from $(0,360^\circ)$ to $(90^\circ,270^\circ)$, thus you get the open half plane. What is the image of the point $(0,-1)$ under this 'map'? If you can't answer this question then it is not a map. – Dan Rust Jun 22 '14 at 22:56 @Mike a small ball around $(0,0)$ has non-open preimage because the preimage has two connected components, one of which is the single point $\{(-1,0)\}$. – Dan Rust Jun 22 '14 at 23:07
https://andrescaicedo.wordpress.com/2011/01/20/507-problem-list-iii/
## 507- Problem list (III)

For Part II, see here. (Many thanks to Robert Balmer, Nick Davidson, and Amy Griffin for help with this list.)

• The Erdős–Turán conjecture on additive bases of order 2.
• If $R(n)$ is the $n$-th Ramsey number, does $\lim_{n\to\infty}R(n)^{1/n}$ exist?
• Hindman’s problem: Is it the case that for every finite coloring of the positive integers, there are $x$ and $y$ such that $x$, $y$, $x + y$, and $xy$ are all of the same color?
• Does the polynomial Hirsch conjecture hold?
• Does $P=NP$? (See also this post (in Spanish) by Javier Moreno.)
• Mahler’s conjecture on convex bodies.
• Nathanson’s conjecture: Is it true that ${}|A+A|\le|A-A|$ for “almost all” finite sets of integers $A$?
• The (bounded) Burnside problem: For which $m,n$ is the free Burnside group $B(m,n)$ finite?
• Is the frequency of 1s in the Kolakoski sequence asymptotically equal to $1/2$? (And related problems.)
• A question on Narayana numbers: Find a combinatorial interpretation of identity 6.C7(d) in Stanley’s “Catalan addendum” to Enumerative combinatorics.
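As an aside on the Kolakoski item: the sequence is easy to generate, so the conjectured frequency can at least be probed empirically. A sketch in Python (the function name is mine); the sequence is its own run-length encoding, so each stored term tells us the length of a later run:

```python
def kolakoski(n):
    """Return the first n terms of the Kolakoski sequence over {1, 2}."""
    if n <= 0:
        return []
    seq = [1, 2, 2]
    k = 2        # seq[k] is the length of the next run to append
    sym = 1      # runs alternate symbol 1, 2, 1, 2, ...
    while len(seq) < n:
        seq.extend([sym] * seq[k])
        sym = 3 - sym
        k += 1
    return seq[:n]

s = kolakoski(10**5)
freq = s.count(1) / len(s)  # empirically very close to 1/2
```

This only gives numerical evidence, of course; the asymptotic density remains open.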
https://assignment-daixie.com/tag/phys3202%E4%BB%A3%E8%80%83/
# Fluids and Plasmas | PHYS3202 Fluids and Plasma

"Fluids and Plasmas" is an applied physics course designed to give the students experience in working with, predicting and measuring the behaviour of fluid flows and plasmas. The course begins with an outline of the fluid equations of motion, which lead to solutions for waves in fluids, convection and buoyancy-driven flows.

Here we consider some special cases of (100) obtained by specializing $a, b, c$, and $d$ in $H$ of (77). Our choices for these four functions will determine the structure of the first integrals $x_{0}$ and $y_{0}$ through (99). For the cases we consider, their structure will be easy to discern and will give some insight into the behavior of $\xi$. How $\mathbf{B}_{p}$ will propagate in each case is pointed out to make the discussion more physically concrete. To conclude, a physical interpretation for the terms of $H$ and the role they play in determining how solutions propagate are discussed as well.

The first case we consider is a rather drastic simplification of the general result (99): we take $a, b, c$, and $d$ all to be zero, getting rid of $H$ entirely. Then we are simply left with
$$x_{0}=x \quad \text { and } \quad y_{0}=y .$$
Thus, in this case, the general solution for $\xi$ is of the form
$$\xi=\xi(x, y, z-\gamma \tau),$$
which corresponds to a structure propagating toroidally with speed $\gamma$.
$$\psi=(1 / \gamma)\left(\xi-\alpha \nabla_{1}^{2} \xi\right)(x, y, z-\gamma \tau) .$$
The arguments in parentheses stress that $\psi$ moves in exactly the same way as $\xi$: surfaces of constant poloidal flux simply propagate in the $z$ direction with constant velocity $\gamma$. Applying $\mathbf{B}_{p}=-\epsilon B_{T} \hat{\mathbf{z}} \times \nabla_{1} \psi$ to (105) shows that the disturbance $\mathbf{B}_{p}$ also propagates in the same way: if we follow a point moving along a characteristic curve, $\mathbf{B}_{p}$ at the point will be a constant vector.
However, from (105) and the arguments given at the end of Sec. III F, the solution is not necessarily an Alfvén-like wave because, in general, $\mathbf{B}_{p}$ will not be proportional to $\mathbf{v}_{t}$ for this case.

## PHYS3202 COURSE NOTES:

Having introduced the fluid equations, we next discuss a method for arriving at exact solutions of them. We denote the partial derivative of a quantity by a subscript, e.g., $\partial U / \partial \tau \equiv U_{\tau}$. Then, after rearranging the terms of (9) and (10) and subtracting (14) from (9), we can write
\begin{aligned} &U_{\tau}+[\phi, U]+J_{z}+[J, \psi]=0, \\ &\psi_{\tau}+(\phi-\alpha \chi)_{z}+[\phi-\alpha \chi, \psi]=0. \end{aligned}
This is the nonlinear system we will study. Note that we are taking $\hat{\eta}=0$ in (16); the resistivity of the plasma is neglected for all that follows. To satisfy (17) we take
$$\chi=g(z)+U,$$
where $g$ is an arbitrary function of $z$. This is by no means the general solution to (17); it is simply a special case that satisfies (17) with little effort. Defining
$$\xi \equiv \phi-\alpha g(z),$$
and recasting (15) and (16) in terms of $\xi$ gives
$$U_{\tau}+[\xi, U]+J_{z}+[J, \psi]=0$$
and
$$\psi_{\tau}+(\xi-\alpha U)_{z}+[\xi-\alpha U, \psi]=0,$$
where (18) has been used. We note in passing that from (19) and (6), the definition of $U$, we have
$$U=\nabla_{1}^{2} \xi,$$
a relation that will be used often in what follows. Now we have to find solutions to (20) and (21). Let us first consider the simpler case of axisymmetric equilibrium.
http://blog.brucemerry.org.za/2017/06/extra-dcj-2017-r2-analysis.html?showComment=1502261014711
## Monday, June 12, 2017

### Flagpoles

My solution during the contest was essentially the same as the official analysis. Afterwards I realised a potential slight simplification: if one starts by computing the second-order differences (i.e., the differences of the differences), then one is looking for the longest run of zeros, rather than the longest run of the same value. That removes the need to communicate the value used in the runs at the start and end of each section.

### Number Bases

I missed the trick of being able to uniquely determine the base from the first point at which X[i] + Y[i] ≠ Z[i]. Instead, at every point where X[i] + Y[i] ≠ Z[i], I determine two candidate bases (depending on whether there is a carry or not). Then I collect the candidates and test each of them. If more than three candidates are found, then the test case is impossible, since there must be two disjoint candidate pairs.

### Broken Memory

My approach was slightly different. Each node binary searches for its broken value, using two other nodes to help (and simultaneously helping two other nodes). Let's say we know the broken value is in a particular interval. Split that interval in half, and compute hashes for each half on the node (h1 and h2) and on two other nodes (p1 and p2, q1 and q2). If h1 equals p1 or q1, then the broken value must be in interval 2, or vice versa. If neither applies, then nodes p and q both have broken values, in the opposite interval to that of the current node. We can tell which by checking whether p1 = q1 or p2 = q2.

This does rely on not having collisions in the hash function. In the contest I relied on the contest organisers not breaking my exact choice of hash function, but it is actually possible to write a solution that works on all test data. Let P be a prime greater than $$10^{18}$$. To hash an interval, compute the sums $$\sum m_i$$ and $$\sum i m_i$$, both mod P, giving a 128-bit hash.
Suppose two sequences p and q collide, but differ in at most two positions. The sums are the same, so they must differ in exactly two positions j and k, with $$p_j - q_j = q_k - p_k$$ (all mod P). But then the second sums will differ by $$jp_j + kp_k - jq_j - kq_k = (j - k)(p_j - q_j)$$, and since P is prime and each factor is less than P, this will be non-zero.
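The interval hash described above takes only a few lines. A sketch in Python, where P is one concrete choice of prime just above $$10^{18}$$ (the blog does not fix a particular prime; this one is a common competitive-programming choice):

```python
P = 10**18 + 9  # a prime greater than 10^18

def interval_hash(m, lo, hi):
    """Hash m[lo:hi] as the pair (sum of m_i, sum of i*m_i), both mod P."""
    s0 = sum(m[i] for i in range(lo, hi)) % P
    s1 = sum(i * m[i] for i in range(lo, hi)) % P
    return (s0, s1)

# Two sequences that differ in exactly two positions but have equal plain
# sums still get different weighted sums, as argued above.
a = [3, 1, 4, 1, 5]
b = [3, 1, 11, 1, -2]  # b[2] += 7, b[4] -= 7
print(interval_hash(a, 0, 5))  # (14, 32)
print(interval_hash(b, 0, 5))  # (14, 18)
```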
http://mathhelpforum.com/advanced-algebra/118794-find-square-root-3x3-matrix.html
# Math Help - Find the square root of a 3x3 matrix

1. ## Find the square root of a 3x3 matrix

I apologize if my notation isn't clear, newbie to this forum. I'm trying to find out how to find the square root of a 3x3 matrix. For

A = [1, 1, 1
0, 1, 1
0, 0, 1]

I know that, in general, A^x = (P^-1)(D^x)(P) for some invertible P. In the case of linearly independent eigenvectors, P should form a basis of A's eigenspace. But the eigenvalues of A here are all 1, and A only has one eigenvector, [1, 0, 0], and its scalar multiples. So that method isn't going to work.

There is a method using spectral decomposition that I don't fully understand. It starts with the equation: for A n x n, with eigenvalues v1, ..., vs of multiplicities m1, ..., ms, there exist n uniquely defined constituent matrices E_{i,k}, i = 1...s, k = 0...(m_i - 1), such that for any analytic function f(x) we have

f(A) = sum over i = 1...s, sum over k = 0...(m_i - 1), of f^(k)(v_i) E_{i,k}

Anyways, if you can decode that, it seems to me you can arrive at the constituent matrices of A by the following equations:

(A-I)(A-I) = 0 + 0 + 2 E_{1,2}, which works out to

E_{1,2} = [0, 0, 1/2
0, 0, 0
0, 0, 0]

A - I = E_{1,1}, which is of course

[0, 1, 1
0, 0, 1
0, 0, 0]

and finally I = E_{1,0}.

So we have 3 constituent matrices for A, say X E_{1,0} + Y E_{1,1} + Z E_{1,2}. It turns out for values X = 1, Y = 1/2, and Z = -1/4 you get

[1, 1/2, 3/8
0, 1, 1/2
0, 0, 1]

whose square is A. So somehow (I don't know how) we have to use the constituent matrices in a linear equation to generate the square root of A. How to get the values of X, Y, Z I do not know.

2. Originally Posted by Gchan
…

finding a square root of a $3 \times 3$ upper triangular matrix $A=[a_{ij}]$ is not hard (i'll assume that the entries on the diagonal of $A$ are real and positive). define the $3 \times 3$ matrix $B=[b_{ij}]$ by: for $i=1,2,3$ let $b_{ii}=\sqrt{a_{ii}}.$ also define $b_{21}=b_{31}=b_{32}=0.$ finally let $b_{12}=\frac{a_{12}}{b_{11} + b_{22}}, \ b_{23}=\frac{a_{23}}{b_{22}+b_{33}}$ and $b_{13}=\frac{a_{13} - b_{12}b_{23}}{b_{11}+b_{33}}.$ it's easy to see that $B^2=A.$ of course this formula will also work for matrices with complex entries if you choose a square root for each $a_{ii}$ and if the denominators in $b_{12}, \ b_{23}$ and $b_{13}$ are non-zero. finally, since every square matrix with complex entries is similar to an upper triangular matrix, you can use the above to find a square root of an arbitrary $3 \times 3$ matrix.

3.
Thank you very much. Is there a way to then determine the Jordan form of A? If eigenvalues are 1, then J= [ 1, 0, 0 0, 1, 0 0, 0, 1] or [1,1,0 0,1,0 0,0,1] or [1,1,0 0,1,1 0,0,1] or [1,0,0 0,1,1 0,0,1] Can we determine which is the Jordan form without finding P st A = (P^-1) (J) (P) ?
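The closed form from reply 2 is easy to verify numerically. A minimal Python sketch (helper names are mine), using the matrix A from the original post:

```python
import math

def sqrt_upper_3x3(A):
    """Square root of a 3x3 upper triangular matrix with positive diagonal,
    following the closed-form entries b_ij given in the reply above."""
    b11, b22, b33 = (math.sqrt(A[i][i]) for i in range(3))
    b12 = A[0][1] / (b11 + b22)
    b23 = A[1][2] / (b22 + b33)
    b13 = (A[0][2] - b12 * b23) / (b11 + b33)
    return [[b11, b12, b13],
            [0.0, b22, b23],
            [0.0, 0.0, b33]]

def matmul(X, Y):
    """Product of two 3x3 matrices stored as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1.0, 1.0, 1.0],
     [0.0, 1.0, 1.0],
     [0.0, 0.0, 1.0]]
B = sqrt_upper_3x3(A)
print(B)             # [[1.0, 0.5, 0.375], [0.0, 1.0, 0.5], [0.0, 0.0, 1.0]]
print(matmul(B, B))  # recovers A
```

The computed B matches the matrix [1, 1/2, 3/8; 0, 1, 1/2; 0, 0, 1] found by trial in the original post.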
http://mathoverflow.net/questions/108213/the-multiplicative-system-in-a-symmetric-monoidal-category
# The multiplicative system in a symmetric monoidal category

Let $\mathcal{C}$ be a symmetric monoidal category. In the 1973 paper "Note on monoidal localisation" by Brian Day, the multiplicative system of morphisms in $\mathcal{C}$ has been discussed. See also this mathoverflow question by Martin Brandenburg. My question is: can we consider a multiplicative system consisting of both objects and morphisms in $\mathcal{C}$? This means that we have a collection of objects $x_i$ and a collection of morphisms $f_i$ such that $x_i \otimes x_j$ is still in the collection of $x_i$, and the $x_i$ and $f_j$ satisfy some "compatible condition". And can we define a localization along this more general multiplicative system? Notice that in this viewpoint the case in the first paragraph can be considered as the multiplicative system with only one object $1$ (and a system of morphisms).

- This is a bit above my categorical pay grade, so I will leave a hopefully helpful comment rather than an answer. A general strategy to working with objects in any category is to encode them via their identity morphisms. Is it enough in your case to use the theory of monoidal localization but with some identity morphisms in the mix? – Theo Johnson-Freyd Sep 27 '12 at 14:02

@Theo: Yes I need some morphisms in the mix. But still I'm interested in the case you mentioned: we consider a multiplicative system of objects and the identity morphisms of each object. Then what should be the requirement on the collection of objects to make them a multiplicative system? – Zhaoting Wei Sep 27 '12 at 15:31
http://hal.in2p3.fr/in2p3-00271254
Untangling supernova-neutrino oscillations with beta-beam data

Abstract: Recently, we suggested that low-energy beta-beam neutrinos can be very useful for the study of supernova neutrino interactions. In this paper, we examine the use of such an experiment for the analysis of a supernova neutrino signal. Since supernova neutrinos are oscillating, it is very likely that the terrestrial spectrum of supernova neutrinos of a given flavor will not be the same as the energy distribution with which these neutrinos were first emitted. We demonstrate the efficacy of the proposed method for untangling multiple neutrino spectra. This is an essential feature of any model aiming at gaining information about the supernova mechanism, probing proto-neutron star physics, and understanding supernova nucleosynthesis, such as the neutrino process and the r-process. We also consider the efficacy of different experimental approaches including measurements at multiple beam energies and detector configurations.

Document type: Journal articles

Contributor: Suzanne Robert
Submitted on: Tuesday, April 8, 2008 - 4:24:21 PM
Last modification on: Thursday, June 5, 2008 - 10:58:18 AM

Citation: N. Jachowicz, G. C. Mclaughlin, C. Volpe. Untangling supernova-neutrino oscillations with beta-beam data. Physical Review C, American Physical Society, 2008, 77, pp.055501. <10.1103/PhysRevC.77.055501>. <in2p3-00271254>
https://infoscience.epfl.ch/record/212049
Journal article

# Phase derivative estimation from a single interferogram using a Kalman smoothing algorithm

We report a technique for direct phase derivative estimation from a single recording of a complex interferogram. In this technique, the interference field is represented as an autoregressive model with spatially varying coefficients. Estimates of these coefficients are obtained using the Kalman filter implementation. The Rauch-Tung-Striebel smoothing algorithm further improves the accuracy of the coefficient estimation. These estimated coefficients are utilized to compute the spatially varying phase derivative. Stochastic evolution of the coefficients is considered, which allows estimating the phase derivative with any type of spatial variation. The simulation and experimental results are provided to substantiate the noise robustness and applicability of the proposed method in phase derivative estimation. (C) 2015 Optical Society of America
https://math.stackexchange.com/questions/1938100/how-do-i-find-999-mod-1000
# How do I find $999!$ (mod $1000$)? I came across the following question in a list of number theory exercises Find $999!$ (mod $1000$) I have to admit that I have no idea where to start. My first instinct was to use Wilson's Theorem, but the issue is that $1000$ is not prime. • Hint: $15!$ is divisible by $1000.$ – bof Sep 23 '16 at 6:53 • If that takes too much work, it is at least clear that $50!$ is divisible by $1000$. – Brian Tung Sep 23 '16 at 6:56 • Goodness I just thought of something simple that may work. $999!$ 'contains' inside it a '100' and a '10' inside it for sure. So whatever number we are left with is surely a multiple of $1000$. Hence the residue is $0$. Although a technique very much specific to this question, can somebody verify if this is valid? – Trogdor Sep 23 '16 at 7:08 • $999!$ is indeed divisible by $10!$ and by $100!$, but you need to show that it is also divisible by $10!\cdot100!$, which is a little less trivial (though not that difficult). In short, you can take every decomposition of $1000$ (except for $1\cdot1000$), and use it in order to prove that $1000$ divides $999!$. For example, $1000=500\cdot2$, and $999!$ is divisible by $502!$, which is equal to $500!\cdot501\cdot502$ and is therefore divisible by $500\cdot2$. In order to find the smallest factorial for which this holds, you need to use the prime factorization of $1000$. – barak manos Sep 23 '16 at 9:30 $1000=2\cdot2\cdot2\cdot5\cdot5\cdot5$ $2\cdot2\cdot2$ divides $2\cdot4\cdot6$ without remainder $5\cdot5\cdot5$ divides $5\cdot10\cdot15$ without remainder $2\cdot4\cdot6\cdot5\cdot10\cdot15$ divides $15!$ without remainder $15!$ divides $999!$ without remainder Therefore $1000$ divides $999!$ without remainder What are you guys doing? Due to the fact that $500\times2 = 1000$ it follows trivially that $999!$ is congruent to $0 \space \text{(mod 1000)}$. 
Actually, we know for certain that for every $x \ge 500$, $x!$ will be congruent to $0 \pmod{1000}$. Why? Because you can always rewrite the factorial as $(500\times2)\times(\text{the remaining factors})$; thus $x!$ is a multiple of $1000$, and hence $x! \equiv 0 \pmod{1000}$.

Short answer: $999!$ is a multiple of $10\cdot20\cdot30$.

For every occurrence of a multiple of $5$ in the product building up to $999!$, an additional zero becomes part of the ending sequence of digits of that number, never to leave again. Your question is tantamount to asking what the final three digits of $999!$ are. The first factorial to end in three zeroes, then and forevermore, is $15!$. So all following factorials will also end in three zeroes. So, 000.
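To back up the answers above, here is a quick computational check (Python, not part of the original thread): it confirms that $999! \equiv 0 \pmod{1000}$ and that $15!$ is indeed the first factorial divisible by $1000$.

```python
from math import factorial

# 999! mod 1000 is 0, as every answer above concludes
print(factorial(999) % 1000)  # 0

# the first factorial ending in three zeroes, as claimed, is 15!
first = next(k for k in range(1, 100) if factorial(k) % 1000 == 0)
print(first)  # 15
```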
https://slideplayer.com/slide/5853394/
# Radian and Degree Measure

Objectives: Describe angles; use radian and degree measures.

An angle is determined by rotating a ray about its endpoint. The starting position of the ray is the initial side of the angle and the position after rotation is the terminal side. The endpoint of the ray is the vertex of the angle. This perception of an angle fits a coordinate system in which the origin is the vertex and the initial side coincides with the positive x-axis. Such an angle is in standard position. Counterclockwise rotation generates positive angles and clockwise rotation generates negative angles. Angles that have the same initial and terminal sides are called coterminal angles.

Trigonometry: the measurement of triangles; the relationships among the sides and angles of triangles.

Radian: the measure of a central angle that intercepts an arc equal in length to the radius of the circle. Algebraically, this means that θ = s/r, where θ is measured in radians, s is the arc length, and r is the radius. (Note that a full revolution corresponds to an angle of 2π radians.)

Degree: a measure of one degree is equivalent to a rotation of 1/360 of a complete revolution about the vertex. Measure of an angle: the amount of rotation from the initial side to the terminal side.

Conversions between Degrees and Radians

To convert degrees to radians, multiply degrees by (π rad)/(180°). EX: Convert from degrees to radians a) b)

To convert radians to degrees, multiply radians by (180°)/(π rad). EX: Convert from radians to degrees a) b)

Fractional parts of degrees are expressed in minutes and seconds, using the prime and double prime notations, respectively. Many calculators have special keys for converting an angle in degrees, minutes, and seconds to decimal degree form, and vice versa.
Decimal degrees are used to denote fractional parts of degrees.

EXAMPLE 1: A) Determine the quadrant in which the angle lies. B) Convert to degrees. C) Determine two coterminal angles (one positive and one negative) for each angle. D) Determine the complement and the supplement of each angle (IF POSSIBLE).

EXAMPLE 2: A) Determine the quadrant in which the angle lies. B) Convert to radians. C) Determine two coterminal angles (one positive and one negative) for each angle. D) Determine the complement and the supplement of each angle (IF POSSIBLE).

EXAMPLE 3: A) Determine the quadrant in which the angle lies. B) Convert to degrees. C) Determine two coterminal angles (one positive and one negative) for each angle.

EXAMPLE 4: A) Determine the quadrant in which the angle lies. B) Convert to radians. C) Determine two coterminal angles (one positive and one negative) for each angle.

EX 5: Convert to decimal degrees. Round to the nearest thousandth of a degree. a) b) c)

EX 6: Convert to degrees, minutes and seconds. a) b) c)
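The conversion rules in the slides are easy to mirror in code. The following sketch (Python; the function names are mine, not from the presentation) implements degree-radian conversion and conversion from degrees, minutes, and seconds to decimal degrees.

```python
import math

def deg_to_rad(deg):
    # to convert degrees to radians, multiply degrees by pi/180
    return deg * math.pi / 180

def rad_to_deg(rad):
    # to convert radians to degrees, multiply radians by 180/pi
    return rad * 180 / math.pi

def dms_to_decimal(d, m, s):
    # one minute = 1/60 degree, one second = 1/3600 degree
    return d + m / 60 + s / 3600

print(round(rad_to_deg(math.pi / 2), 10))  # 90.0
print(dms_to_decimal(64, 9, 0))            # 64.15
```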
http://math.stackexchange.com/questions/181898/is-there-a-simple-proof-of-the-isoperimetric-theorem-for-squares?answertab=votes
# Is there a “simple” proof of the isoperimetric theorem for squares? "Simple" means that it doesn't use any integral or multivariable calculus concepts. A friend of mine who's taking a differential calculus course came up with the problem Prove that among all the quadrilaterals with a given perimeter, the one with the biggest area is the square. I solved the problem with Lagrange multipliers, using $2h + 2b$ as the function and $hb = A$ as the constraint. But I'm the one who took the multivariable calculus course, not him. So I'd like to know if there's a way of proving this theorem using differential calculus concepts, or even geometry and trigonometry. - Isn't the problem you solved different from the one you pose? You solved a maximization of the perimeter given a fixed area. –  Jose27 Aug 13 '12 at 1:46 Well, $A$ is an arbitrary constant and $h$ and $b$ are variables. I got $h = b$ as the only solution for this maximization, shouldn't that be enough for proving it? –  user1002327 Aug 13 '12 at 1:51 See this. The isoperimetric theorem is discussed a bit later, after the inequality shown on that page. –  Timmy Turner Aug 13 '12 at 1:52 @user1002327: Yes, but you're solving a different problem: You're maximizing perimeter with a given area as constraint. On the other hand you're asking for maximization of the area given a perimeter. –  Jose27 Aug 13 '12 at 1:57 Oh, you're right. –  user1002327 Aug 13 '12 at 1:59 In the case of rectangles, here's a solution of the dual problem: find the rectangle of smallest perimeter for a given area. The shapes should be the same. Just complete the square (if you can pardon the expression). The width is $w$; the height is $A/w$ (where $A$ is the area). So the semiperimeter is $$w + \frac A w = \left(w - 2\sqrt{A}+ \frac A w\right) + 2\sqrt{A} = \left(\sqrt{w} - \sqrt{\frac{A}{w}}\right)^2 + 2\sqrt{A}.$$ This is as small as possible when the expression that gets squared is $0$.
So that $=0$ when $w=\text{what?}$ Later edit: Now let's try it more directly. The perimeter is $4\ell$. You have a rectangle with two opposite sides of length $k$ and two of length $2\ell-k$. The area is \begin{align} A & = k(2\ell-k) = -k^2 + 2k\ell = -\Big(k^2 - 2k\ell\Big) = -\Big(k^2 -2k\ell + \ell^2\Big) +\ell^2 \\[8pt] & = -\Big(k-\ell\Big)^2 + \ell^2. \end{align} This is as big as possible when $k=\ell$, so you have a square. We still have the case of non-rectangles to deal with. - The second proof is very simple, looks like we can use that. And I don't think they're going to deal with the other quadrilaterals in a differential calculus course for engineers, so I'm marking your answer as the correct one. Thanks. –  user1002327 Aug 13 '12 at 2:03 This is essentially the arithmetic-geometric mean (AM-GM) inequality for lists of 2 numbers. http://en.wikipedia.org/wiki/Inequality_of_arithmetic_and_geometric_means#Geometric_interpretation In higher dimensions, the inequality says that the hypercube has the most volume among all boxes where the sum of length + width + depth + ... is fixed. The wiki page above has a bunch of proofs, many of which don't use calculus. - Since you mentioned being interested in a geometric proof, here's one that should be easy to understand (albeit with one sneaky detail swept under the rug). Start with a $w\times h$ rectangle, with $w\gt h$; we'll prove that there's another rectangle of the same perimeter but greater area. Since the area of the rectangle is twice the area of the triangle of base $w$ and height $h$, we can just consider the triangle's area. But now consider adding some small amount $x$ to the height $h$ and subtracting the same amount from the width $w$.
This doesn't change the perimeter, since we're just redistributing a small segment, but we can see what it does to the area: The area of the original triangle with base $w$ and height $h$ is the sum of the pink and green triangles, while the area of the new triangle with base $w-x$ and height $h+x$ is the sum of the pink and blue triangles. But as long as $w-x\gt h+x$, the blue triangle will have a greater area than the green one: they both have the same base ($x$) and the blue one has a greater height. This implies that as long as $w\gt h$, we can increase the area of the triangle (and thus the rectangle) by trading off some amount of width for the same amount of height, and that in turn implies that the maximum area must be achieved by the square. - I really like this one, I'm very tempted to choose yours as the correct answer. –  user1002327 Aug 13 '12 at 22:27 Thank you! As I said, it does have one subtle catch - I don't have a quick, easy geometric proof as to why the height of the blue triangle is larger other than the intuition that the intersection point lies on the 'right' side of the $45^\circ$ bisector through the right angle. An algebraic proof is easy, but loses a lot of the charm of the core proof; still, if you present this one, most students probably won't catch you on it! –  Steven Stadnicki Aug 14 '12 at 4:15
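A small numerical check of the completing-the-square argument (illustrative Python, not from the original answers): among rectangles with a fixed perimeter $4\ell = 20$, the area $k(2\ell - k)$ peaks at the square $k = \ell = 5$.

```python
# Fixed perimeter 4*l = 20; a rectangle with sides k and 2*l - k has area
# A(k) = k*(2*l - k) = -(k - l)**2 + l**2, so the maximum is at k = l.
l = 5.0
ks = [i / 10 for i in range(1, 100)]      # candidate side lengths 0.1 .. 9.9
best_area, best_k = max((k * (2 * l - k), k) for k in ks)
print(best_k, best_area)                  # 5.0 25.0
```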
http://www.chennaispeakers.com/2016/09/is-ramkumar-died-of-electric-wire.html
# Did Ramkumar die from an electric wire?

Ramkumar, who was arrested in connection with the Swathi murder case and imprisoned at Puzhal, was found dead on the 18th of this month. According to the prison security staff, he committed suicide by biting an electric wire running inside the block where he was held. His family members have raised suspicions about his death. After the postmortem, people hoped the matter would be cleared up by the investigation, but obstacles still remain. The electric wire is embedded in the wall, which raises the question of how he could have died by biting it. There was a switch box near the place where he died, but it would not be easy to bite a wire that is already embedded in a wall. Had he done so, he would have received a shock and been thrown back some distance by it. There were no scars on his body, which suggests he may not have died of electric shock. The real reason for his death will be known only after the CBI investigation and the postmortem report.
http://mathhelpforum.com/number-theory/172931-prove-sum-all-positive-integers-k-print.html
# Prove the sum of all positive integers k... • February 28th 2011, 09:09 AM uberbandgeek6 Prove the sum of all positive integers k... Prove that the sum of all positive integers k where 1 <= k <= n and gcd(k,n) = 1 is (1/2)n*phi(n). I don't even know where to begin with this. We've been going over primitive roots in class lately so it probably has something to do with them, but I have no idea where they come into the problem. • February 28th 2011, 02:16 PM LoblawsLawBlog For any n, $\zeta=e^{2\pi i/n}$ is a primitive nth root of unity, and the other primitive roots are exactly the numbers $\zeta^k$, where gcd(k,n)=1. If you picture these on the unit circle and realize that the conjugate of a primitive root is also a primitive root, you can prove it that way. • February 28th 2011, 06:21 PM uberbandgeek6 Quote: Originally Posted by LoblawsLawBlog For any n, $\zeta=e^{2\pi i/n}$ is a primitive nth root of unity, and the other primitive roots are exactly the numbers $\zeta^k$, where gcd(k,n)=1. If you picture these on the unit circle and realize that the conjugate of a primitive root is also a primitive root, you can prove it that way. Uh, we didn't learn anything like that in class. • February 28th 2011, 06:55 PM LoblawsLawBlog Ok, I tried to explain it using primitive roots because you said you're learning about them. The same proof will carry over to the group $\mathbb{Z}/n\mathbb{Z}$, since this group is isomorphic to the nth roots of unity. But you might not be doing much with groups if this is a number theory course, even though there's a lot of overlap. I guess the key facts are that $\varphi(n)$ is the number of positive integers less than n that are relatively prime to n and also that gcd(n,k)=gcd(n,n-k). Does that help? You could probably also prove this using some facts about $\varphi$ listed on its wikipedia page. edit: Ah, that explains it- there are two uses of the term primitive root. 
I thought you meant primitive roots of unity and I had never heard of the other meaning. Still, my second paragraph could still be used.
• February 28th 2011, 07:21 PM chisigma Quote: Originally Posted by uberbandgeek6 Prove that the sum of all positive integers k where 1 <= k <= n and gcd(k,n) = 1 is (1/2)n*phi(n). I don't even know where to begin with this. We've been going over primitive roots in class lately so it probably has something to do with them, but I have no idea where they come into the problem. In any case, $\displaystyle \sum_{k=1}^{n-1} k= \frac{n\ (n-1)}{2}$. If n is prime then $\varphi(n)= n-1$ so that $\displaystyle \sum_{k=1}^{n-1} k= \frac{n}{2}\ \varphi(n)$... Kind regards $\chi$ $\sigma$
• March 1st 2011, 05:44 AM uberbandgeek6 @chisigma: I understand what you mean, but I'm not given that n is prime, so I don't see how that would work. @Loblawslawblog: I don't understand where you are going with the gcd(n, n-k). I get that it is true, but I don't see how that helps.
• March 1st 2011, 06:46 AM chisigma Quote: Originally Posted by uberbandgeek6 @chisigma: I understand what you mean, but I'm not given that n is prime, so I don't see how that would work. One of your hypotheses is that $\forall k$ for which $0\le k\le n$ is $\text {gcd} (k,n)=1$ and that means that n is prime... Kind regards $\chi$ $\sigma$
• March 1st 2011, 06:57 AM uberbandgeek6 Quote: Originally Posted by chisigma One of your hypotheses is that $\forall k$ for which $0\le k\le n$ is $\text {gcd} (k,n)=1$ and that means that n is prime... Kind regards $\chi$ $\sigma$ Maybe I'm interpreting it wrong, but I think the problem means that it wants the sum of all k's that are relatively prime with n. So if n was not prime (say n = 10), it would be 1+3+7+9 = 20. Then (1/2)(10)phi(10) = 20 as well.
• March 1st 2011, 07:35 AM LoblawsLawBlog Quote: Originally Posted by uberbandgeek6 @Loblawslawblog: I don't understand where you are going with the gcd(n, n-k).
I get that it is true, but I don't see how that helps. Look at your example with n=10. 1 is relatively prime to 10, and so is 10-1=9. Likewise, 3 and 10-3=7 are both relatively prime to 10. Then the sum we're looking for is (1+9)+(3+7)=(4/2)(10).
• March 1st 2011, 10:28 AM tonio Quote: Originally Posted by LoblawsLawBlog Look at your example with n=10. 1 is relatively prime to 10, and so is 10-1=9. Likewise, 3 and 10-3=7 are both relatively prime to 10. Then the sum we're looking for is (1+9)+(3+7)=(4/2)(10). The hint, or info, given by LLLB is critical: let $\Phi(n):=\{k\in\mathbb{N}\;;\;1\leq k\leq n\,,\,\,\gcd(n,k)=1\}$ 1) Prove that $k\in\Phi(n)\Longleftrightarrow n-k\in\Phi(n)$ 2) Thus, we can pair up all the numbers in $\Phi(n)$ in pairs $(k,n-k)$ , with $k\mbox{ and also }n-k\mbox{ in } \Phi(n)$ 3) Since $|\Phi(n)|=\phi(n)$ , there are $\frac{1}{2}\phi(n)$ pairs as above, and since the sum of each such pair is $n$ we're then done...(Clapping) Tonio
• March 1st 2011, 07:11 PM Bruno J. Quote: Originally Posted by LoblawsLawBlog For any n, $\zeta=e^{2\pi i/n}$ is a primitive nth root of unity, and the other primitive roots are exactly the numbers $\zeta^k$, where gcd(k,n)=1. If you picture these on the unit circle and realize that the conjugate of a primitive root is also a primitive root, you can prove it that way. How can you prove it that way?
• March 2nd 2011, 08:10 AM LoblawsLawBlog It's the same proof. A primitive root z^k would correspond to k in Z_n and the pairing (k,n-k) is just the pair z^k and its conjugate. Then the factor of 1/2 comes in because you only need to consider the top half of the unit circle. Add the exponents of the primitive roots and you're done. So basically there's no reason to go to complex numbers, but I tried to awkwardly jam them in because I thought the OP was learning something similar.
• March 2nd 2011, 08:23 AM Bruno J. That's what I thought. There's no reason to talk about primitive roots here. :)
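Tonio's pairing argument is easy to verify numerically. A short Python check (mine, not part of the thread) compares the sum of integers coprime to $n$ against $\frac{1}{2}n\varphi(n)$ for several values of $n$:

```python
from math import gcd

def coprime_sum(n):
    # sum of all k in [1, n] with gcd(k, n) = 1
    return sum(k for k in range(1, n + 1) if gcd(k, n) == 1)

def phi(n):
    # Euler's totient, by direct count
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in [2, 10, 12, 100]:
    print(n, coprime_sum(n), n * phi(n) // 2)
```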
http://mathhelpforum.com/advanced-algebra/103586-bilinear-maps.html
1. ## Bilinear Maps I was told that any bilinear map $B: V\times V\longrightarrow \mathbb{F}$ (where $V$ is an n-dimensional vector space and $\mathbb{F}$ is a field of scalars) can be expressed as $B(u,v)=u^TAv$, where $A$ is an $n\times n$ matrix. Does this statement require proof, and if so, how can I prove it? 2. Originally Posted by redsoxfan325 I was told that any bilinear map $B: V\longrightarrow \mathbb{F}$ (where $V$ is an n-dimensional vector space and $\mathbb{F}$ is a field of scalars) can be expressed as $B(u,v)=u^TAv$, where $A$ is an $n\times n$ matrix. Does this statement require proof, and if so, how can I prove it? A bilinear form on V is a bilinear mapping $B: V \times V \longrightarrow \mathbb{F}$ such that B(u+u', v) = B(u,v) + B(u',v), B(u, v+v') = B(u,v) + B(u,v'), B(tu, v) = B(u, tv) = tB(u,v), where t is a scalar in the field F. Let $u= \mathcal{B}X$, $v=\mathcal{B}Y$, where $\mathcal{B} = (v_1, v_2, \ldots , v_n)$ is a basis of V and X, Y are coordinate vectors. Then, $B(u, v) = B(\sum_iv_ix_i, \sum_jv_jy_j)$. Using bilinearity, $B(\sum_iv_ix_i, \sum_jv_jy_j) = \sum_{i,j}x_iy_jB(v_i, v_j)=X^TAY$, where $A$ is the matrix whose $(i,j)$ entry is $B(v_i, v_j)$. 3. Originally Posted by redsoxfan325 I was told that any bilinear map $B: V\longrightarrow \mathbb{F}$ (where $V$ is an n-dimensional vector space and $\mathbb{F}$ is a field of scalars) can be expressed as $B(u,v)=u^TAv$, where $A$ is an $n\times n$ matrix. Does this statement require proof, and if so, how can I prove it? Note first that $B$ is actually from $V \times V$ to $\mathbb{F}.$ also $u^TAv$ has no meaning unless you fix a basis $\mathcal{B}=\{e_1, \cdots , e_n \}$ for $V$ and then by $u,v$ we'll mean $[u], \ [v],$ the coordinate vectors of $u,v$ with respect to the basis $\mathcal{B}.$ anyway, the proof is quite easy: define the matrix $A=[a_{ij}]$ by $a_{ij}=B(e_i,e_j).$ now $[e_k], \ 1 \leq k \leq n,$ is an $n \times 1$ vector with $1$ in its $k$-th row and $0$ in the other rows.
it's easy to see that $[e_i]^T A [e_j]=a_{ij}=B(e_i,e_j).$ finally if $u=\sum_i b_ie_i, \ v=\sum_i c_ie_i,$ then using bilinearity of $B$ we get: $B(u,v)=\sum_{i,j} b_ic_j B(e_i,e_j)=\sum_{i,j}b_ic_j[e_i]^T A [e_j]=(\sum_i b_i[e_i]^T)A (\sum_j c_j[e_j] )=[u]^TA[v].$
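A small numerical illustration of this proof (Python, mine, not from the thread): fix a matrix $A$ of values $B(e_i,e_j)$, define $B$ by the double sum above, and check that $A$ is recovered on basis vectors and that $B$ is additive in its first argument.

```python
n = 2
A = [[1.0, 2.0], [3.0, 4.0]]   # entries a_ij = B(e_i, e_j)

def B(u, v):
    # the bilinear form u^T A v, written as the double sum from the proof
    return sum(u[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

e = [[1.0, 0.0], [0.0, 1.0]]   # standard basis vectors
A_rec = [[B(e[i], e[j]) for j in range(n)] for i in range(n)]
print(A_rec == A)               # True: A is recovered from B on basis vectors

u, v, w = [1.0, 2.0], [3.0, -1.0], [0.5, 4.0]
lhs = B([u[k] + w[k] for k in range(n)], v)
print(abs(lhs - (B(u, v) + B(w, v))) < 1e-12)  # True: additive in the first slot
```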
https://www.cut-the-knot.org/Probability/TwoVarsityDivisions.shtml
# Two Varsity Divisions Twenty varsity teams are split at random into two divisions of ten teams each. Question 1: what is the probability that the two strongest teams end up in different divisions? Question 2: what is the probability that they end up in the same division? ### Solution 1 Twenty teams may be split into two divisions of $10$ teams each in $\displaystyle \frac{1}{2}{20\choose 10}$ ways. With the two strongest teams set aside, the remaining $18$ teams are to be split between the two divisions. For the first question, we need $9$ teams to complete the division with one of the strongest teams (the other division will be filled automatically). Thus the probability in Question 1 is $\displaystyle P=\frac{\displaystyle {18\choose 9}}{\displaystyle \frac{1}{2}{20\choose 10}}=\frac{10}{19}.$ For the second question, we need $8$ teams to complete the division of the two strongest teams. Thus the probability in Question 2 is $\displaystyle P=\frac{\displaystyle {18\choose 8}}{\displaystyle \frac{1}{2}{20\choose 10}}=\frac{9}{19}.$ ### Solution 2 Denote, for convenience, the two strongest teams $A$ and $B.$ When $A$ is chosen into one of the divisions, there are $19$ slots that $B$ can fit in. For the first question, $10$ of these slots are in the "other" division, giving the probability as $\displaystyle \frac{10}{19}.$ For the second question, there are $9$ slots in the same division as $A,$ making the probability of $B$ falling there equal to $\displaystyle \frac{9}{19}.$ Naturally, if $A$ and $B$ are not in the same division, then they are in different ones, so that $\displaystyle \frac{10}{19}+\frac{9}{19}=1.$ ### Acknowledgment That's a modification of a problem from A. M. Yaglom, I. M. Yaglom, Challenging Mathematical Problems with Elementary Solutions, Vol I, Dover, 1987. But I have only a Russian edition.
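Both solutions can be confirmed with exact arithmetic. A short Python check (not part of the original text) evaluates the binomial-coefficient expressions from Solution 1:

```python
from fractions import Fraction
from math import comb

splits = comb(20, 10) // 2              # unordered splits into two divisions of 10
p_diff = Fraction(comb(18, 9), splits)  # strongest two teams in different divisions
p_same = Fraction(comb(18, 8), splits)  # strongest two teams in the same division
print(p_diff, p_same)                   # 10/19 9/19
```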
http://math.stackexchange.com/questions/272470/counting-pairs-n-n1-where-n-and-n1-are-both-quadratic-residues-etc
# Counting pairs $(n,n+1)$ where $n$ and $n+1$ are both quadratic residues, etc.? This is an interesting problem I read that has me stumped. Let $(RR)$ denote the number of pairs $(n,n+1)$ in the set $\{1,2,\dots,p-1\}$ such that $n$ and $n+1$ are both residues modulo $p$. Let $(NR)$ denote the pairs where $n$ is a nonresidue, and $n+1$ is a residue modulo $p$. Do the same for $(NN)$ and $(RN)$. The question is, what are $(RR)+(RN),(NR)+(NN),(RR)+(NR),(RN)+(NN)$? I know that if $g$ is a primitive root, then the residues are the even powers of $g$, and the nonresidues are the odd powers of $g$. So the pairs in $(RR)$ have form $(g^{2k},g^{2j})$, and I would like to count the pairs that can be expressed as $n=g^{2k},n+1=g^{2j}$. This implies $g^{2j}-g^{2k}=1=g^{p-1}$. I could set up similar equations for the other three types of pairs, but I don't see anything nice to grab onto and work with. Maybe computing $(RR)+(RN)$ is easier than computing $(RR)$ and $(RN)$ separately for some reason? How could one approach computing these? Thank you. Source: Ireland/Rosen #5.29 - Hint: can you spot some mutually exclusive combinations which might combine neatly ... –  Mark Bennet Jan 7 '13 at 23:22 Let $p$ be odd. We look at $(RR)+(RN)$. This is almost the number of QR. The only way a QR can fail to be followed by a QR or an NR is if the QR is at $p-1$. This is the case iff $p\equiv 1\pmod{4}$. So when $p\equiv 1\pmod{4}$, we have $(RR)+(RN)=\frac{p-1}{2}-1$. When $p\equiv -1\pmod{4}$, we have $(RR)+(RN)=\frac{p-1}{2}$. I have not done the other questions. They look much the same. For the last two we have to travel backwards, and note that $1$ is always a QR. - Thanks again, I'll try my best to work out the rest. –  Noomi Holloway Jan 8 '13 at 8:00 To compute separately see Apostol Chapter 9 Ex 5, p201.
With $\alpha$,$\beta$ being $\pm1$ let $N(\alpha,\beta)$ denote the number of integers $x$ among $1,2,\dots,p-2$ such that $(x|p)=\alpha$ and $(x+1|p)=\beta$ where $p$ is an odd prime. So $N(1,-1)=(RN)$ above. Then $4N(\alpha,\beta)=\displaystyle\sum_{x=1}^{p-2}(1+\alpha(x|p))(1+\beta(x+1|p))$ since $1+\alpha(x|p)=2$ if $(x|p)=\alpha$; and is $0$ otherwise. Similarly for $\beta$ so that $(1+\alpha(x|p))(1+\beta(x+1|p))=4$ if $(x|p)=\alpha$ and $(x+1|p)=\beta$; and is $0$ otherwise. Then expanding the sum we get $4N(\alpha,\beta)=p-2-\beta-\alpha\beta-\alpha(-1|p)$ using $\displaystyle\sum_{x=1}^{p-2}(x|p)=-(-1|p)$; $\displaystyle\sum_{x=1}^{p-2}(x+1|p)=-1$ and $\displaystyle\sum_{x=1}^{p-2}(x(x+1)|p)=-1$.
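The closed-form expression for $4N(\alpha,\beta)$ can be checked by brute force. The following Python sketch (mine, using Euler's criterion for the Legendre symbol) counts the pairs directly for a sample prime and compares with the formula:

```python
def legendre(a, p):
    # Legendre symbol (a|p) via Euler's criterion, p an odd prime
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def N(alpha, beta, p):
    # count x in 1..p-2 with (x|p) = alpha and (x+1|p) = beta
    return sum(1 for x in range(1, p - 1)
               if legendre(x, p) == alpha and legendre(x + 1, p) == beta)

p = 23
for a in (1, -1):
    for b in (1, -1):
        closed = (p - 2 - b - a * b - a * legendre(-1, p)) // 4
        print(a, b, N(a, b, p), closed)
```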
https://socratic.org/questions/form-the-quaderatic-equation-whose-roots-alpha-and-beta-satisfy-relation-alpha-b
Algebra Topics # Form the quadratic equation whose roots alpha and beta satisfy the relations alpha beta=768 and alpha^2+beta^2=1600? Jan 6, 2018 $x^{2} - 56 x + 768 = 0$ #### Explanation: If $\alpha \text{ and } \beta$ are the roots of a quadratic equation, the equation can be written as $x^{2} - \left(\alpha + \beta\right) x + \alpha \beta = 0$. We are given $\textcolor{red}{\alpha \beta = 768}$ and $\textcolor{blue}{\alpha^{2} + \beta^{2} = 1600}$. Now ${\left(\alpha + \beta\right)}^{2} = \textcolor{blue}{\alpha^{2} + \beta^{2}} + \textcolor{red}{2 \alpha \beta}$, so ${\left(\alpha + \beta\right)}^{2} = \textcolor{blue}{1600} + 2 \times \textcolor{red}{768} = 3136$. $\therefore \alpha + \beta = \pm\sqrt{3136} = \pm 56$; taking the positive square root, $x^{2} - 56 x + 768 = 0$
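As a check (Python, not part of the original answer), the quadratic $x^2 - 56x + 768 = 0$ has roots $32$ and $24$, which indeed satisfy both given relations:

```python
import math

s, p = 56, 768                 # alpha + beta and alpha * beta
disc = s * s - 4 * p           # 3136 - 3072 = 64
r = math.isqrt(disc)           # 8
alpha, beta = (s + r) / 2, (s - r) / 2
print(alpha, beta)                       # 32.0 24.0
print(alpha * beta, alpha**2 + beta**2)  # 768.0 1600.0
```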
https://infoscience.epfl.ch/record/183643
Journal article

# Two-phase heat transfer and high-speed visualization of refrigerant flows in 100 × 100 μm² silicon multi-microchannels

Two-phase flow boiling of R245fa, R236fa, and R1234ze(E) in 100 × 100 μm² parallel microchannels for cooling of future 3D-ICs has been investigated. Significant flow instabilities, back flow, and non-uniform flow distribution among the channels were observed in the micro-evaporator without any inlet restrictions (micro-orifices). Therefore, to prevent such problems, rectangular restrictions were placed at the inlet of each channel, and the two-phase flow flashed by the micro-orifices was identified as the optimal operating condition. In the present study, a novel in-situ pixel-by-pixel technique was developed to calibrate the raw infrared images, converting them into two-dimensional temperature fields of 10,000 pixels over the test section surface, operating at 60 Hz. Tests showed that a base heat flux of 48.6 W cm⁻² could be dissipated while keeping the micro-evaporator's temperature below 85 °C.
https://www.physicsforums.com/threads/power-series-solution-to-a-diff-eq.223944/
Power Series Solution to a Diff EQ

1. Mar 24, 2008

[SOLVED]

1. The problem statement, all variables and given/known data

Find the first 5 terms of a power series solution of $$y'+2xy=0$$ (1)

Missed this class, so please bear with my attempt here.

3. The attempt at a solution

Assuming that y takes the form $$y=\sum_{n=0}^{\infty}c_nx^n$$

Then (1) can be written: $$\sum_{n=1}^{\infty}nc_nx^{n-1}+2x\sum_{n=0}^{\infty}c_nx^n=0$$

Re-written 'in phase' and with the same indices (in terms of k): $$c_1+\sum_{k=1}^{\infty}(k+1)c_{k+1}x^k+\sum_{k=1}^{\infty}2c_{k-1}x^k=0$$

$$\Rightarrow c_1+\sum_{k=1}^{\infty}[(k+1)c_{k+1}+2c_{k-1}]x^k=0$$

Now invoking the identity property, I can say that all coefficients of powers of x are equal to zero (including $c_1x^0$). So I can write $c_1=0$ and $$c_{k+1}=-\frac{2c_{k-1}}{k+1}$$

Now I am stuck (I know I am almost there though!) Should I just start plugging in numbers for k=1,2,3,4,5? Will this generate enough 'recursiveness' to solve for the first five terms? Is that the correct approach? Thanks!!

2. Mar 24, 2008

Don't worry guys, I got it. And for those who might make future use of this thread, my approach was correct. Solving $$c_{k+1}=-\frac{2c_{k-1}}{k+1}$$ for k=1,2,...,9 generates enough coefficients to write out the first five nonzero terms of the solution by plugging them back into $$y=\sum_{n=0}^{\infty}c_nx^n$$

Last edited: Mar 24, 2008

3. Mar 24, 2008

Kreizhn

I think you need one more initial condition on your series (for example $y(0)=y_0$). You need to define $c_0$ (or you can just leave $c_0$ as the 'integration' constant of the ODE). However, the recursion formula becomes pretty obvious once you have that. If you can't see it right away, try plugging in a few values. Since $c_1=0$, what can we say about all the odd-labelled coefficients? You should get something that looks like the series of an exponential function.
In fact, the series will have a closed form solution if you can see how your answer relates to the exponential series.
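To make the closed form concrete, here is a short sketch (my own, not from the thread) that runs the recursion $c_{k+1}=-2c_{k-1}/(k+1)$ with $c_0=1$, $c_1=0$ and checks the surviving even coefficients against the series of $e^{-x^2}=\sum_m (-1)^m x^{2m}/m!$:

```python
from math import factorial

# c_0 is the free constant (take c_0 = 1); c_1 = 0 forces every odd term to vanish
c = [1.0, 0.0]
for k in range(1, 10):
    c.append(-2 * c[k - 1] / (k + 1))

# all odd coefficients are zero
assert all(c[n] == 0.0 for n in range(1, len(c), 2))

# even coefficients match the series of e^{-x^2}
for m in range(5):
    assert abs(c[2 * m] - (-1) ** m / factorial(m)) < 1e-12

# so the first five nonzero terms are: 1 - x^2 + x^4/2 - x^6/6 + x^8/24
```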
http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume10/jaakkola99a-html/footnode.html
...model.1 The acronym "QMR-DT" that we use in this paper refers to the "decision-theoretic" reformulation of the QMR by Shwe, et al. (1991). Shwe, et al. replaced the heuristic representation employed in the original QMR model (Miller, Masarie, & Myers, 1986) by a probabilistic representation.

...QMR-DT.2 D'Ambrosio (1994) reports "mixed" results using incremental SPI on the QMR-DT, for a somewhat more difficult set of cases than Heckerman (1989) and Henrion (1991), but still with a restricted number of positive findings.

...decoupled.3 Jensen's inequality, which states that $f(a + \sum_j q_j x_j) \geq \sum_j q_j f(a + x_j)$, for concave $f$, where $\sum q_j = 1$, and $0 \leq q_j \leq 1$, is a simple consequence of Eq. (8), where $x$ is taken to be $a + \sum_j q_j x_j$.

...exactly.4 Given that a significant fraction of the positive findings are being treated exactly in these simulations, one may wonder what if any additional accuracy is due to the variational transformations. We address this concern later in this section and demonstrate that the variational transformations are in fact responsible for a significant portion of the accuracy in these cases.

...algorithm.5 The initialization method proved to have little effect on the inference results.

...inference.6 We also investigated Gibbs sampling (Pearl, 1988). The results from Gibbs sampling were not as good as the results from likelihood-weighted sampling, and we report only the latter results in the remainder of the paper.
...method.7 It should be noted that this is a conservative comparison, because the partially-exact method in fact benefits from the variational transformation: the set of exactly treated positive findings is selected on the basis of the accuracy of the variational transformations, and these accuracies correlate with the diagnostic relevance of the findings.

Michael Jordan Sun May 9 16:22:01 PDT 1999
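Footnote 3's form of Jensen's inequality is easy to sanity-check numerically. A small sketch (variable names are mine) using the concave function $f = \log$:

```python
import math
import random

# check f(a + sum_j q_j x_j) >= sum_j q_j f(a + x_j)
# for concave f, weights q_j >= 0 with sum q_j = 1
f = math.log
random.seed(0)

for _ in range(1000):
    a = random.uniform(0.5, 5.0)
    xs = [random.uniform(0.1, 4.0) for _ in range(4)]
    qs = [random.random() for _ in range(4)]
    total = sum(qs)
    qs = [q / total for q in qs]  # normalize so the weights sum to 1

    lhs = f(a + sum(q * x for q, x in zip(qs, xs)))
    rhs = sum(q * f(a + x) for q, x in zip(qs, xs))
    assert lhs >= rhs - 1e-12
```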
https://www.physicsforums.com/threads/bug-physics.176750/
# Bug physics.

1. Jul 12, 2007

### Demonsthenes

1. The problem statement, all variables and given/known data

A bug of mass m is stuck to a point on the rim of a rolling wheel (of radius r), which traces out a path called a cycloid. The position vector of the point (bug) is given by:

r(theta) = r(theta - sin(theta))i + r(1 - cos(theta))j

for a wheel rolling with constant angular speed.

2. Relevant equations

A) Determine the velocity vector v = vx i + vy j.
B) Determine the acceleration vector a = ax i + ay j.
C) Verify that the magnitude of the acceleration vector is given by a = rω².

3. The attempt at a solution

The derivatives I took for each vector looked very incorrect.
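One way to check the requested result: with θ = ωt, so d/dt = ω d/dθ, differentiating the position gives v = rω(1 − cos θ) i + rω sin θ j and a = rω² sin θ i + rω² cos θ j, whose magnitude is rω² because sin² θ + cos² θ = 1. A quick numerical sketch (the sample values of r and ω are my own, not from the problem):

```python
import math

r, w = 2.0, 3.0  # assumed radius (m) and angular speed (rad/s)

def velocity(theta):
    # v = omega * d/dtheta of ( r(theta - sin theta), r(1 - cos theta) )
    return (r * w * (1 - math.cos(theta)), r * w * math.sin(theta))

def acceleration(theta):
    # a = omega * d/dtheta of the velocity components
    return (r * w ** 2 * math.sin(theta), r * w ** 2 * math.cos(theta))

# part C: |a| = r * omega^2 at every point on the cycloid
for theta in (0.0, 0.3, 1.0, 2.5, math.pi):
    ax, ay = acceleration(theta)
    assert abs(math.hypot(ax, ay) - r * w ** 2) < 1e-12
```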
http://math.stackexchange.com/questions/283308/point-of-discontinuity
# Point of discontinuity

I have a function: $$f(x) = x$$ defined over the domain $\mathbb{R} \setminus \{0\}$. Is it correct to say that the function is continuous, but it has a point of discontinuity at $x=0$?

No it doesn't. Why would $f(x)=x$ have a discontinuity? – nbubis Jan 21 '13 at 7:56

I can't imagine a better example, therefore I said there's f(x)=x with the domain R without 0. – TomDavies92 Jan 21 '13 at 7:57

I think it is a valid and good question to clarify subtleties regarding fundamental definitions. The choice of function is very good since it concentrates on the issue at hand: the question whether 'not defined at a point' means 'not continuous there'. – Ittay Weiss Jan 21 '13 at 8:04

@TomDavies92 - Edited the English, I think it makes more sense now. – nbubis Jan 21 '13 at 8:12

The function is continuous for all points where it is defined, which according to you is the set $\mathbb R - \{0\}$. It has no points of discontinuity. A point $x$ is a point of discontinuity for a function $f:D\to \mathbb R$ if the function is defined at that point but its value there is not the same as the limit. When $f$ is not defined at $x$ at all then $x$ can't be considered a point of discontinuity. Think of it this way: the function $f(x)=x^2$, defined on all of the real numbers, is not defined for $x$="the moon". Does it mean that $f$ is discontinuous at the moon?

Ittay Weiss said "When $f$ is not defined at $x$ at all then $x$ can't be considered a point of discontinuity." What about $f(x) = 1/x$? $f$ is not defined at $0$; nevertheless, as far as I know, $0$ is considered a point of discontinuity for the function $f(x) = 1/x$. It is enough for the function $f$ to be defined on some neighborhood of $x_0$ even though it is not defined at $x_0$. I think this point is a point of discontinuity since the one-sided limits $\lim_{x\to 0+}$ and $\lim_{x\to 0-}$ can be considered, but $f(0)$ is undefined by definition.
– fade2black Aug 20 '13 at 22:34

It is true that $f(x)=1/x$ is not continuous at $x=0,$ since it isn't defined there. However, this does not mean that $f$ has a discontinuity at $x=0.$ See here, especially: "The term removable discontinuity is sometimes used by abuse of terminology...." – Cameron Buie Aug 21 '13 at 4:51

Then what about the example $1/x$, where $0$ is not in the domain of $f$? My textbook on calculus says that the point $0$ is a non-removable point of discontinuity (Calculus, 9th Ed., Ron Larson, pg. 71, Example 1 (a), function $f(x)=1/x$). – fade2black Aug 21 '13 at 8:15

@fade2black: Again, that is not a discontinuity. It is what is known as a pole. Many textbooks would call that a discontinuity, but again, this is an abuse of terminology. Continuity and discontinuity are defined only for points of the function's domain. I suspect that your textbook did not give a precise definition of what continuity at a point means, because otherwise, they couldn't claim that $0$ was a point of discontinuity of $f(x)=1/x$. – Cameron Buie Aug 21 '13 at 13:56

No, the continuity or discontinuity of a function at a point is only defined if the point is in the domain. The function is continuous at every point of its domain, which was stipulated to be $\mathbb R\setminus \{0\}$. It is not defined at $0$.

What about limit points? I heard that a point of discontinuity is also a limit point that does not belong to the domain. Is it true? – TomDavies92 Jan 21 '13 at 7:58

I do not understand the question. – Jonas Meyer Jan 21 '13 at 8:01

@TomDavies92: It depends on what kind of discontinuity you face at that point. – Babak S. Jan 21 '13 at 8:02

@TomDavies92: But perhaps you are thinking about something like the following. If $f:\mathbb R\setminus\{0\}\to\mathbb R$ is a function, then when is it possible to extend $f$ by defining a value $f(0)$ in such a way to make the extended function continuous at $0$?
This could be answered by checking whether or not $\lim\limits_{x\to 0}f(x)$ exists. – Jonas Meyer Jan 21 '13 at 8:04

If $x=0$ is not part of the domain of the function, there is no sense in talking about the properties of the function at that point - continuity or anything else. It happens that the domain on which this function is defined is a part of a larger domain - it is a fundamental issue in mathematics to identify the possibility of extending functions to such larger domains in "good" ways - either preserving useful properties like continuity, or acquiring new ones - like connectedness, compactness or roots to specific equations. Your $f$ could be extended to $\mathbb R$ by defining $f(0)=\pi$. If you want to preserve continuity, however, you need to define $f(0)=0$. You might consider the function $g(x)$ defined on the same domain with $g(x)=-x$ when $x$ is negative, and $g(x)=x$ when $x$ is positive. Defining $g(0) = 0$ keeps $g$ continuous, but it is no longer differentiable over the whole domain of definition. Sometimes there is a trade-off between the properties you want and the domain you choose.

First, I agree with most other answers: what you say about your $f$ is not totally right, but $f$ is continuous where it is defined. I would like to add that probably some confusion is caused by the fact that there is an obvious extension to a superset of its domain of definition. (Here, set $f(0)=0$, obviously.) Indeed, the problem of whether a continuous function defined on some set has a continuous extension to a larger set is very important in some places of mathematics (e.g. whether a continuous linear operator on a Banach space can be extended to a larger Banach space in which the former one is embedded...). Another example is the theorem that a continuous function defined on a dense set of a metric space has a continuous extension to the whole space.
https://www.gradesaver.com/civil-disobedience/q-and-a/how-do-the-ideas-expressed-in-lines-88-96-help-to-qualify-or-clarify-some-of-thoreaus-ideas-323102
# How do the ideas expressed in lines 88-96 help to qualify or clarify some of Thoreau's ideas?

Lines 88-96: "Why does it always crucify Christ, and excommunicate Copernicus and Luther, and pronounce Washington and Franklin rebels?... If the injustice is part of the necessary friction of the machine of government, let it go, let it go: perchance it will wear smooth, certainly the machine will wear out. If the injustice has a spring, or a pulley, or a rope, or a crank, exclusively for itself, then perhaps you may consider whether the remedy will not be worse than the evil; but if it is of such a nature that it requires you to be the agent of injustice to another, then, I say, break the law. Let your life be a counter-friction to stop the machine."
https://christopherdanielson.wordpress.com/2014/04/28/wiggins-questions-3/
# Wiggins questions #3

## Question 3

### You are told to “invert and multiply” to solve division problems with fractions. But why does it work? Prove it.

Oh dear. If anyone on the Internet has had more to say about dividing fractions than I have, I am unaware of who that is. (And, for the record, I would like to buy that person an adult beverage!)

Unlike the division by zero stuff from question 1, this question is better tackled with informal notions than with formalities. The formalities leave one feeling cold and empty, for they don’t answer the conceptual why. The formalities will invoke the associative property of multiplication, the definition of reciprocal, inverse and the multiplicative identity, et cetera.

The conceptual why—for many of us—lies in thinking about fractions as operators, and in thinking about a particular meaning of division.

### 1. A meaning of division

There are two meanings for division: partitive (or sharing) and quotative (or measuring). The partitive meaning is the most common one we think of when we do whole number division. *I have 12 cookies to share equally among 3 people. How many cookies does each person get?* We know the number of groups (3 in this example) and we need to find the size of each group.

*I can mow 4 lawns with $\frac{2}{3}$ of a tank of gas in my lawnmower* is a partitive division problem because I know what $\frac{2}{3}$ of a tank can do, and I want to find what a whole tank can do. So performing the division $4\div \frac{2}{3}$ will answer the question.

### 2. Fractions as operators

When I multiply by a fraction, I am making things larger (if the fraction is greater than 1), or smaller (if the fraction is less than 1, but still positive). Scaling from (say) 5 to 4 requires multiplying 5 by $\frac{4}{5}$. Scaling from 4 to 5 requires multiplying by $\frac{5}{4}$. This relationship always holds—reverse the order of scaling and you need to multiply by the reciprocal.

### putting it all together

Back to the lawnmower.
There is some number of lawns I can mow with a full tank of gas in my lawnmower. Whatever that number is, it was scaled by $\frac{2}{3}$ to get 4 lawns. Now we need to scale back to that number (whatever it is) in order to know the number of lawns I can mow with a full tank. So I need to scale 4 up by $\frac{3}{2}$.

Now we have two solutions to the same problem. The first solution involved division. The second solution involved multiplication. They are both correct so they must have the same value. Therefore,

$4\div \frac{2}{3} = 4 \cdot \frac{3}{2}$

There was nothing special about the numbers chosen here, so the same argument applies to all positive values.

$A\div\frac{b}{c}=A\cdot\frac{c}{b}$

We have to be careful about zero. Negative numbers behave the same way as positive numbers in this case, since the associative and commutative properties of multiplication will let us isolate any values of $-1$ and treat everything else as a positive number.

Please note that you do not need to invert and multiply to solve fraction division problems. You can use common denominators, then divide just the resulting numerators. You can use common numerators, then use the reciprocal of the resulting denominators. Or you can just divide across as you do when you multiply fractions. The origins of the strong preference for invert-and-multiply are unclear.

### 7 responses to “Wiggins questions #3”

Christopher, my issue with this defense of the invert and multiply algorithm is that it is not consistent with the way we think of c x d, where c is the multiplier and d is the multiplicand. So, when we multiply a number c (number to be replicated) by a number d (number of groups or replications), we represent that symbolically by c x d (c groups of d) and not the other way around. I agree that we “scale 4 up by 3/2,” but that is interpreted symbolically as 3/2 x 4. In this case, 4 x 3/2 doesn’t have a contextual interpretation that makes sense.
Maybe, at the level that students are doing multiplication and division of fractions, they can abandon the (more) concrete interpretation of multiplication for the abstract, but I tend toward being more of a stickler in this case, because there is a bridge that we can make between the concrete interpretation and abstract symbols (common denominator algorithm or this on with the multiplier in the appropriate position). I also think that, despite the CC’s focus on standard algorithms, a defense of the invert and multiply algorithm can be taken as a free ticket to Same/change/flip-ville. Although, sadly, I don’t think the citizens there are really reading your blog. 2. Christopher Wait! You see this as a defense of the standard algorithm for fraction division as a curricular topic, Adam? That was not its purpose at all. I am just answering the question here, which is a reasonable one: Is there a conceptual way to think about this algorithm? I say yes. As you rightly point out, this answer may not be appropriate for all audiences. But the algorithm is not at fault. There are no bad correct algorithms. There are, however, algorithms that are more likely to support the thinking of a typical student. Common denominator fraction division is more likely to support the thinking of the average 5th-7th grader. And it is more likely to be in the capacity of the average elementary teacher to teach conceptually. We are agreed on that. I think you may be ignoring what I think is the crux of my argument – the order of 4 and 3/2 in the operation cannot be overlooked, if conceptual is the goal, despite the level of the student. 4 / (2/3) is not equal to 3/2 x 4, at the concrete level. I didn’t say this is a bad algorithm, but I do think that there are certainly better ones, given the historical treatment of it (“Ours is not to reason why…,” Keep/Change/Flip, etc.). I also think that, much like the standard long division algorithm, this algorithm is unproductive. 
Kids memorize it without knowing why it works…and then forget it, like the division algorithm. Maybe I would have rather seen you answer the question with something along the lines of “yes, but here’s a different algorithm that grows out of a concrete representation of the operation” or something like that. 4. Maybe it’s a bit late, but I have just found your other blog (this one!). lawnmower, and other similar problems. Simple, common sense method: 4 lawns with 2/3 of a tank, so 2 lawns with one third of a tank so 6 lawns with a full tank. if the kids are pushed into converting the problem into an immediate calculation, and not encouraged to “get the result somehow” then the path is smoothed for the rest of the math difficulties lying ahead. If the kids cannot see step 1 above then they are definitely not ready for fractions. My position is that from the outset fractions are numbers, and invented/used to enhance measurement. I will stop here!!!!! 5. Late If it’s known that 4 lawns are cut with 2/3 of gas tank, then… Assuming that the lawnmower engine works, at full power, across all fuel states of the gas tank, except when empty. One third of gas, would theoretically, yield only 2 lawns cut. It makes sense, with half the gas (comparing 1/3 to 2/3), you would only cut half the grass. 2/3 gas must cut double the amount of grass, compared to 1/3 gas. Each third of gas, is individually, the same amount of litres, of course. Each third is the same size, of course. Otherwise it wouldn’t be called a third of some original amount. (obviously). Each third of gas therefore cuts same amount of grass, individually. When you know that one third is something, then you simply add three thirds together to get one whole. (1/3) + (1/3) + (1/3) = 3/3, otherwise known as one whole. 2+2+2 = 6 lawns cut, with a fully operational 3/3 gas tank. To me this sounds like some blue collar math logic, but it works well enough in this case. 6. 
sassyaggie Like a Number Talk, I’m seeing from the comments different ways of looking at the same problem. Could it be possible of doing that even with using the same algorithm? What would you say is the big idea?
https://socratic.org/questions/59e341137c0149198d7e042c
Chemistry Topics

# Question e042c

Oct 20, 2017

Here's what I got.

#### Explanation:

Start by picking a sample of this $\text{20% v/v}$ ethanol solution. To make the calculations easier, let's say that this sample has a volume of $\text{100 mL}$. The solution is said to be $\text{20% v/v}$ ethanol, which implies that it contains $\text{20 mL}$ of ethanol, the solute, for every $\text{100 mL}$ of the solution.

Use the density of the solution to find its mass:

$100 \text{ mL solution} \times \frac{0.9 \text{ g solution}}{1 \text{ mL solution}} = 90 \text{ g solution}$

Next, use the density of ethanol to find the mass of the solute:

$20 \text{ mL ethanol} \times \frac{0.75 \text{ g ethanol}}{1 \text{ mL ethanol}} = 15 \text{ g ethanol}$

Now, in order to find the solution's percent concentration by mass, $\text{m/m %}$, you need to figure out the mass of ethanol present in $\text{100 g}$ of this solution. Since you already know how much ethanol you have in $\text{90 g}$ of the solution, you can say that $\text{100 g}$ will contain

$100 \text{ g solution} \times \frac{15 \text{ g ethanol}}{90 \text{ g solution}} = 16.667 \text{ g ethanol}$

This means that the percent concentration by mass is equal to

$\text{% m/m = 17% ethanol}$

I'll leave the answer rounded to two sig figs, but keep in mind that you should round it to one significant figure, the number of sig figs you have for the percent concentration by volume.

##### Impact of this question

713 views around the world
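The same arithmetic as a small Python sketch (the densities are taken from the problem as given):

```python
# from the problem: solution density 0.9 g/mL, ethanol density 0.75 g/mL
v_solution = 100.0             # mL sample of the 20% v/v solution
v_ethanol = 0.20 * v_solution  # 20 mL of ethanol in the sample

m_solution = v_solution * 0.9  # 90 g of solution
m_ethanol = v_ethanol * 0.75   # 15 g of ethanol

# mass of ethanol per 100 g of solution = percent concentration by mass
percent_mm = m_ethanol / m_solution * 100
assert abs(percent_mm - 16.667) < 1e-2  # rounds to ~17% m/m
```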
http://link.springer.com/chapter/10.1007%2F3-540-36178-2_17
Chapter Advances in Cryptology — ASIACRYPT 2002 Volume 2501 of the series Lecture Notes in Computer Science pp 267-287 # Cryptanalysis of Block Ciphers with Overdefined Systems of Equations • Nicolas T. Courtois, CP8 Crypto Lab, SchlumbergerSema • Josef Pieprzyk, Center for Advanced Computing - Algorithms and Cryptography, Department of Computing, Macquarie University ## Abstract Several recently proposed ciphers, for example Rijndael and Serpent, are built with layers of small S-boxes interconnected by linear key-dependent layers. Their security relies on the fact that the classical methods of cryptanalysis (e.g. linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds $N_r$. In this paper we study the security of such ciphers under an additional hypothesis: the S-box can be described by an overdefined system of algebraic equations (true with probability 1). We show that this is true for both Serpent (due to the small size of its S-boxes) and Rijndael (due to unexpected algebraic properties). We study general methods known for solving overdefined systems of equations, such as XL from Eurocrypt '00, and show their inefficiency. Then we introduce a new method called XSL that uses the sparsity of the equations and their specific structure. The XSL attack uses only relations true with probability 1, and thus the security does not have to grow exponentially in the number of rounds. XSL has a parameter P, and from our estimations it seems that P should be a constant or grow very slowly with the number of rounds. The XSL attack would then be polynomial (or subexponential) in $N_r$, with a huge constant that is double-exponential in the size of the S-box. The exact complexity of such attacks is not known due to the redundant equations. 
Though the presented version of the XSL attack always costs more than exhaustive search for Rijndael, it seems to (marginally) break 256-bit Serpent. We suggest a new criterion for the design of S-boxes in block ciphers: they should not be describable by a system of polynomial equations that is too small or too overdefined.

### Key Words

Block ciphers, AES, Rijndael, Square, Serpent, Camellia, multivariate quadratic equations, MQ problem, overdefined systems of multivariate equations, XL algorithm, Gröbner bases, sparse multivariate polynomials, Multivariate Cryptanalysis
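The central hypothesis, that an S-box satisfies many quadratic equations true with probability 1, is easy to demonstrate at toy scale. The sketch below uses a hypothetical 3-bit S-box (not one from the paper): over GF(2) there are 22 monomials of degree at most 2 in the three input and three output bits, but only 8 input/output pairs, so at least 14 linearly independent quadratic relations must hold.

```python
from itertools import combinations

SBOX = [3, 5, 6, 1, 4, 7, 2, 0]   # a hypothetical 3-bit S-box, for illustration only

def bits(v, n=3):
    return [(v >> i) & 1 for i in range(n)]

def monomials(xy):
    """Values of all monomials of degree <= 2 in the six bits x0..x2, y0..y2."""
    return [1] + xy + [a & b for a, b in combinations(xy, 2)]

# One GF(2) row per S-box entry: the monomial values on that (input, output) pair.
rows = []
for x in range(8):
    xy = bits(x) + bits(SBOX[x])
    rows.append(int("".join(map(str, monomials(xy))), 2))  # pack row as a bitmask

def gf2_rank(vectors):
    """Rank over GF(2) of bitmask-encoded row vectors (Gaussian elimination)."""
    pivots = {}
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = v
                break
            v ^= pivots[lead]
    return len(pivots)

n_monomials = 1 + 6 + 15              # deg-0, deg-1, deg-2 monomials = 22
rank = gf2_rank(rows)                 # at most 8: one row per S-box input
n_equations = n_monomials - rank      # dimension of the nullspace; every
print(n_monomials, rank, n_equations) # nullspace vector is a quadratic equation
                                      # satisfied with probability 1
```

Any fixed-size S-box is "overdefined" in this sense; the paper's point is that for Rijndael and Serpent the resulting systems are unusually small and sparse.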
http://math.stackexchange.com/questions/252434/pigeonhole-principle-on-graphs
# Pigeonhole Principle on Graphs I just have a last minute question for my combinatorics final (which is in one hour!!). My prof particularly told me to study the following question and I'm pretty sure it involves the pigeonhole principle but I can't remember how it applies. Can anyone help me out? Prove that for each $n \in \mathbb{Z}^+$ there exists a loop-free connected undirected graph $G=(V,E)$, where $|V|=2n$ and which has two vertices of degree $i$ for every $1 \leq i \leq n$. Any help is greatly, greatly appreciated! - Doesn't look like a pigeonhole principle problem to me. –  Thomas Andrews Dec 6 '12 at 17:36 Hm. Maybe I'm confusing it with another one. I just thought I remembered it being answered as a pigeonhole principle question. I'll try to tackle it differently –  connorbode Dec 6 '12 at 17:38 has at least 2 vertices or exactly two vertices? –  Amr Dec 6 '12 at 17:42 @somekindarukus This is a sort of converse to the well-known fact that in every simple graph there are two vertices with the same degree. This latter problem is a very standard application of pigeonhole. –  Erick Wong Dec 6 '12 at 17:43 @Amr "Exactly" or "At least" are the same here. There are $2n$ nodes. If, for each $i$, there are at least two nodes with degree $i$, then, for each $i$, there are exactly two nodes with degree $i$. –  Thomas Andrews Dec 6 '12 at 17:51 Suppose that you have a bipartite graph $G_n$ with vertex classes $V_0$ and $V_1$, each containing $n$ vertices, one of each degree from $1$ through $n$. Add an isolated vertex to each part to get a bipartite graph $H_n$, each of whose parts contains one vertex of degree $k$ for $k=0,\dots,n$. Now add one more vertex to each part, connecting it by an edge to each vertex in the other part. The resulting graph $G_{n+2}$ will be bipartite, and each part will contain one vertex of degree $k$ for $k=1,\dots,n+2$. @Thomas: That’s easy: just do it. 
Take vertices $1,2,3,1',2',3'$ with edges $\{3,1'\},\{3,2'\},\{3,3'\}$, $\{2,3'\},\{2,2'\}$, and $\{1,3'\}$. –  Brian M. Scott Dec 6 '12 at 18:10 I just want to note that the graphs need not necessarily be bipartite. You can do a one step induction from $n$ to $n+1$. If $G_n$ is divided into two vertex classes, $V_0$ and $V_1$, of size $n$, each containing one vertex of degree $k$ for $1\leq k \leq n$, then we simply add two isolated vertices, $v_0$ and $v_1$, and add edges from $v_i$ to the $\lfloor \frac{n+1}{2} \rfloor$ vertices of largest degree in $V_i$, $i=0,1$. Then, if $n$ is even, add the edge $\{v_0,v_1\}$. Note that for $n \geq 4$, $G_n$ is NOT bipartite. –  Kevin Halasz Jan 18 '13 at 22:53
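The two-step induction in the accepted answer is concrete enough to code up. A minimal sketch (vertex labels and helper names are my own) that builds the graph and checks the degree multiset:

```python
from collections import Counter

def degree_graph(n):
    """A simple connected graph on 2n vertices with exactly two vertices of
    each degree 1..n, built by the bipartite induction G_m -> G_{m+2}.
    Vertices are labelled (part, index) with part in {0, 1}."""
    adj = {}

    def add_edge(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    if n % 2 == 1:                      # base case G_1: a single edge
        size = 1
        add_edge((0, 0), (1, 0))
    else:                               # base case G_2: a path with degrees 1,2 | 1,2
        size = 2
        add_edge((0, 1), (1, 0))
        add_edge((0, 1), (1, 1))
        add_edge((0, 0), (1, 1))

    while size < n:                     # induction step G_m -> G_{m+2}
        adj.setdefault((0, size), set())        # one isolated vertex per part
        adj.setdefault((1, size), set())
        part0 = [v for v in adj if v[0] == 0]   # snapshot the two parts
        part1 = [v for v in adj if v[0] == 1]
        full0, full1 = (0, size + 1), (1, size + 1)
        for v in part1:                 # join each new vertex to everything
            add_edge(full0, v)          # in the opposite part...
        for v in part0:
            add_edge(full1, v)
        add_edge(full0, full1)          # ...and to each other
        size += 2
    return adj

adj = degree_graph(7)
degrees = Counter(len(nbrs) for nbrs in adj.values())
print(sorted(degrees.items()))   # [(1, 2), (2, 2), ..., (7, 2)]
```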
https://kyushu-u.pure.elsevier.com/en/publications/a-synthetic-solution-for-identification-and-extraction-of-the-eff
# A Synthetic Solution for Identification and Extraction of the Effective Microseismic Wave Component Using Decomposition on Time, Frequency, and Wavelet Coefficient Domains Mingwei Zhang, Qingbin Meng, Shengdong Liu, Hideki Shimada Research output: Contribution to journal › Article › peer-review ## Abstract To reduce noise components from original microseismic waves, a comprehensive fine signal processing approach using the integrated decomposition analysis of the wave duration, frequency spectrum, and wavelet coefficient domain was developed and implemented. Distribution regularities of the wave component and redundant noise on the frequency spectrum and the wavelet coefficient domain were first expounded. The frequency threshold and wavelet coefficient threshold were determined for the identification and extraction of the effective wave component. The frequency components of the reconstructed microseismic wave and the original measured signal were compared. The noise elimination effect of the scale-changed domain decomposition was evaluated. Interaction between the frequency threshold and the wavelet coefficient threshold in the time domain was discussed. The findings reveal that tri-domain decomposition analysis achieves precise identification and extraction of the effective microseismic wave component and improves the reliability of the waves by eliminating redundant noise. The frequency threshold and the wavelet coefficient threshold on a specific time window are two critical parameters that determine the precision of the extracted wave component. This research develops the proposed integrated domain decomposition method and offers a new perspective on fine processing of microseismic signals. 
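The frequency-threshold step of such a pipeline can be sketched as a plain FFT band filter. This is a generic illustration with made-up band edges, not the authors' implementation (which also applies wavelet-coefficient thresholds):

```python
import numpy as np

def frequency_threshold(signal, fs, f_lo, f_hi):
    """Keep only spectral content inside [f_lo, f_hi] Hz and reconstruct.
    A minimal stand-in for a frequency-domain decomposition step; the
    band edges here are illustrative, not values from the paper."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0   # apply the threshold
    return np.fft.irfft(spectrum, n=len(signal))

# Synthetic test: a 50 Hz "event" buried under a 300 Hz noise component.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 50 * t)
noisy = clean + 0.5 * np.sin(2 * np.pi * 300 * t)
recovered = frequency_threshold(noisy, fs, 20, 100)
print(np.max(np.abs(recovered - clean)) < 0.05)   # True: the band filter removes the noise
```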
Original language: English. Article number: 3875170. Journal: Shock and Vibration, Volume 2017. https://doi.org/10.1155/2017/3875170 Published - 2017 ## All Science Journal Classification (ASJC) codes • Civil and Structural Engineering • Condensed Matter Physics • Geotechnical Engineering and Engineering Geology • Mechanics of Materials • Mechanical Engineering
https://socratic.org/questions/how-do-you-find-the-sum-of-the-finite-geometric-sequence-of-sigma-16-1-2-j-1-fro
Calculus Topics # How do you find the sum of the finite geometric sequence of sum_(j=1)^12 16(1/2)^(j-1)? Jul 13, 2017 Given: ${\sum}_{j = 1}^{12} 16 {\left(\frac{1}{2}\right)}^{j - 1}$ This is a finite geometric series whose first term (at $j = 1$) is ${a}_{1} = 16$, whose common ratio is $r = \frac{1}{2}$, and which has $n = 12$ terms. From this reference we obtain the formula: ${S}_{n} = {\sum}_{j = 1}^{n} {a}_{1} {r}^{j - 1} = {a}_{1} \frac{1 - {r}^{n}}{1 - r}$ Substituting ${a}_{1} = 16$, $r = \frac{1}{2}$, and $n = 12$: ${S}_{12} = 16 \cdot \frac{1 - {\left(\frac{1}{2}\right)}^{12}}{1 - \frac{1}{2}} = 32 \left(1 - \frac{1}{4096}\right)$ ${S}_{12} = 31.9921875$
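A quick numerical check of the closed form against a direct summation of the twelve terms:

```python
# Sum the series term by term and compare with the geometric-series formula.
terms = [16 * (1 / 2) ** (j - 1) for j in range(1, 13)]
direct = sum(terms)
closed = 16 * (1 - (1 / 2) ** 12) / (1 - 1 / 2)
print(direct, closed)   # both 31.9921875
```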
https://cstheory.stackexchange.com/questions/12504/implications-of-proof-of-abc-conjecture-for-cs-theory/12556
# Implications of proof of abc conjecture for cs theory What implications would a proof of the abc conjecture have for TCS? http://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-abc/ Bhatnagar, Gopalan, and Lipton show that, assuming the abc conjecture, there are polynomials of degree $O((kn)^{1/2+\varepsilon})$ representing the Threshold-of-$k$ function over ${\mathbb Z}_6$. For fixed constant $k$, and $m$ which has $t$ prime factors, the abc conjecture implies a polynomial for Threshold-of-$k$ over $\mathbb Z_m$ with degree $O(n^{1/t+\varepsilon})$. This presumably has relevance to the ${\sf TC^0}$ versus ${\sf ACC^0}[6]$ problem.
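For background: the abc conjecture concerns coprime triples with $a + b = c$ and the radical $\operatorname{rad}(abc)$, the product of the distinct primes dividing $abc$. A rough numerical illustration of the "quality" of a triple (standard definitions, not tied to the degree bounds above):

```python
from math import gcd, log

def rad(n):
    """Product of the distinct prime factors of n (trial division)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

def quality(a, b):
    """q(a,b,c) = log c / log rad(abc) for coprime a + b = c; the abc
    conjecture says q >= 1 + eps holds for only finitely many triples."""
    c = a + b
    assert gcd(a, b) == 1
    return log(c) / log(rad(a * b * c))

print(round(quality(1, 8), 4))   # triple (1, 8, 9): rad(72) = 6, q = log 9 / log 6 ≈ 1.2263
```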
http://www.aanda.org/articles/aa/abs/2004/25/aa0386-03/aa0386-03.html
Free access Issue A&A Volume 421, Number 1, July I 2004, 187 - 193 Interstellar and circumstellar matter A&A 421, 187-193 (2004) DOI: 10.1051/0004-6361:20034386 ## Observations of the Brackett decrement in the Class I source HH100 IR B. Nisini1, S. Antoniucci2, 1 and T. Giannini1 1  INAF-Osservatorio Astronomico di Roma, 00040 Monteporzio Catone, Italy 2  Università degli Studi "Tor Vergata", via della Ricerca Scientifica 1, 00133 Roma, Italy (Received 24 September 2003 / Accepted 3 March 2004) Abstract The Brackett decrement in the Class I source HH100 IR has been observed and analyzed to set constraints on the origin of the IR HI emission in this young object. We have used both low resolution (R ≈ 800) observations of the Brackett lines from Br to Br24, and medium resolution (R ≈ 9000) spectra of the Br , Br12 and Br13 lines. The dereddened fluxes indicate that the lines remain moderately thick up to high quantum numbers. Moreover, the profiles of the three lines observed at medium resolution are all broad and nearly symmetric, with a trend for the lines at high n-number to be narrower than the Br  line. On the assumption that the three lines have different optical depths and consequently trace zones at different physical depths, we interpret the observed profiles as evidence that the ionized gas velocity in the HI emitting region increases outwards, as expected in an accelerating wind rather than in infalling gas. We have modelled the observed line ratios and velocities with a simplified model for the HI excitation from a circumstellar gas with a velocity law . Such a comparison indicates that the observations are consistent with the emission coming from a very compact region of 4-6  , where the gas has already been accelerated to velocities of the order of 200 km s -1, with an associated mass flow rate of the ionized component of the order of 10  yr -1. 
This implies that the observed lines should originate either from a stellar wind or from the inner part of a disk wind, provided that the disk inner truncation radius is close to the stellar surface. It is also expected that the gas ionization fraction is relatively high, as testified by the high rate of ionized mass loss derived. Our analysis, however, does not resolve the problem of how to reproduce the observed symmetrical line profiles, which at present are apparently difficult to model with both wind and accretion models. This probably points to the fact that the real situation is more complicated than described in the simple model presented here. Key words: line: formation -- stars: circumstellar matter -- stars: individual: HH100-IR -- infrared: stars -- stars: formation -- stars: winds, outflows Offprint request: B. Nisini, [email protected]
https://physics.stackexchange.com/questions/150746/red-shifted-photons-lost-energy-in-which-form?noredirect=1
# Red shifted photons lost energy in which form? Photons which have experienced a change in frequency (redshift) due to gravity (or other redshifting effects) have necessarily lost energy, yet total energy is conserved. Redshifts happen for various causes. There also exist blueshifts: Conversely, a decrease in wavelength is called blueshift and is generally seen when a light-emitting object moves toward an observer or when electromagnetic radiation moves into a gravitational field. Now on redshifts: Some redshifts are an example of the Doppler effect, familiar in the change in the apparent pitches of sirens and frequency of the sound waves emitted by speeding vehicles. A redshift occurs whenever a light source moves away from an observer. Energy is conserved by the motion of the source. Motion means kinetic energy: the redshift adds to the kinetic energy of the source as seen in the rest frame of the observer, and the blueshift adds to the energy of the photon, again as seen in the rest frame of the observer. Another kind of redshift is cosmological redshift, which is due to the expansion of the universe, and sufficiently distant light sources (generally more than a few million light years away) show redshift corresponding to the rate of increase in their distance from Earth. Again, the motion takes up the energy balance. Finally, gravitational redshift is a relativistic effect observed in electromagnetic radiation moving out of gravitational fields. The gravitational field picks up the balance of energy, again in the rest frame of the observer. • Anna, I was going to ask a similar question, and this one popped up. So considering $E=mc^2$ and $E=h\nu$ fully governs the energy conservation, then for a redshifted gamma photon from the big bang, $\nu$ decreases and so does $m$ since $c$ is constant. Right? Or are there more physics involved? 
– docscience Apr 14 '17 at 16:04 • @docscience E=mc^2 is misleading and we no longer use it in particle physics. This m is just a mathematical description of the extra inertia and confuses things. We only work with the rest mass, ( the "length" of the four vector) and the rest mass of the photon is always zero. So when the energy decreases nu becomes smaller, equivalent to the acoustic doppler shift – anna v Apr 14 '17 at 16:21 • Thanks! So then where did the energy go if it didn't change the mass? Considering say one photon by itself as a 'system'. Was it lost to 'space-time'? I'm still researching, but can the redshift be one source of the dark energy? – docscience Apr 14 '17 at 16:28 • @docscience For conservation of energy one has to consider the relative velocity of the observer to the photon source,. – anna v Apr 14 '17 at 17:00 • But considering the source is a dead star that was formed shortly after the big bang, how could the source matter any more today? Is it because, for the photon, time is 'stopped'? – docscience Apr 14 '17 at 17:45 It seems contradictory that red shifted light has lost energy yet total energy is conserved (where did the energy go?). The trick to understanding this is knowing that the energy measured depends on the frame of reference you are measuring from. Consider a ball flying towards you quite fast and hitting you in the head. From your perspective it has a lot of kinetic energy (and it hurt when it hit). Now consider yourself flying along at the same speed as the ball. It's stationary compared to you and it has no kinetic energy (compared to you). It can't hit you in the head and it can't hurt you because its not moving towards you any more. In both cases the ball is doing the same thing (and the energy in the system hasn't changed), its just your frame of reference that has changed and the same is happening with red shifted light. 
If you move away from light (at a fast enough speed) it will appear red shifted (less energy) and conversely if you move towards it (at a fast enough speed), it will appear blue shifted (appear to have gained energy). Going back to the ball example, if you drive towards a ball that is flying towards you, it will hurt more (have more energy); if you drive away from it, it will hurt less (have less energy). Nothing has changed in the ball, or its energy; it's you that has changed (e.g. you've used energy to accelerate towards or away from it). • Technically, you can't "move away from light." You can move away from a light source, in which case, the spectrum of light that you receive from the source will be shifted toward the red from the spectrum that you expected to see based on your knowledge of the process that produced the light. – Solomon Slow Jul 27 '16 at 17:09 Energy is most definitely conserved in the case of gravitationally red shifted (GRS) photons. The sun is 4.6 billion years old and has energy output equal to 3.8×10^26 watts. If 1% of the energy output is lost to GRS, an enormous quantity of energy is missing. If the lost energy were simply hanging out in the surrounding gravitational field, it should be somehow observable by now since the energy has been building up and stored for billions of years. However, that is not the case. Electromagnetic energy cannot be "trapped" and somehow stored, but it can be converted to increasingly lower frequency. All photon energy escapes EVERY gravitational field, but is red shifted. Said differently, a blue photon released from the sun is converted to many red photons as it escapes the gravitational field. Conservation of energy dictates this result. Imagine a transverse wave traveling down a stretched rope. The rope suddenly divides into two ropes. The initial wave is transformed into two smaller waves each of lower energy.
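For scale, the gravitational redshift of light leaving the Sun's surface can be estimated with the standard weak-field formula $z \approx GM/(Rc^2)$. A sketch using rounded CODATA/IAU constants:

```python
# Weak-field gravitational redshift of light escaping the Sun's surface:
# z ≈ GM / (R c^2). Constants rounded to four significant figures.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m
c = 2.998e8        # speed of light, m/s

z = G * M_sun / (R_sun * c**2)
print(f"z ≈ {z:.2e}")   # ≈ 2.12e-06: a tiny fractional frequency shift
```

So the solar gravitational redshift shifts frequencies by only about two parts per million, far less than the 1% figure discussed above.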
http://math.stackexchange.com/questions/95213/branch-point-what-makes-a-closed-loop-around-it-special/95352
# Branch point-what makes a closed loop around it special? I am having difficulty understanding the concept of a branch point of a multifunction. It is typically explained as follows: a branch point is a point such that the function is discontinuous when going around an arbitrarily small circuit around this point. What I am unable to understand is what is special about the set of points encircling this particular point? If I have a closed loop somewhere in the plane, the function is not multivalued on this loop. But if I translate this to enclose this point, then the function becomes multivalued. Why? I initially thought it might be because if this branch point is my reference point then as I run over the loop I am running over the values $0$ to $2\pi$. But if I move my reference point to somewhere inside the loop this still remains true over any closed loop. Please can someone help? A similar question was asked but unfortunately did not receive any replies. OP was advised to read the wiki page, which I have. - Perhaps the difficulty is in the definition of multifunction, rather than branch point. Have you seen a rigorous definition of this? –  Zhen Lin Dec 30 '11 at 16:16 (Disclaimer: What follows is non-standard and was dreamt up after thinking about anafunctors the whole day.) First of all, let me define some notions. I will assume familiarity with elementary point set topology. Definition. Let $X$ and $Y$ be topological spaces. A continuous relation between $X$ and $Y$ is a triple $(R, \sigma, \tau)$ consisting of a topological space $R$ and continuous maps $\sigma : R \to X$ and $\tau : R \to Y$. This is a generalisation of the usual notion of a relation between two sets: indeed, recall that a relation on $X$ and $Y$ is just a subset $R \subseteq X \times Y$, and any such subset is automatically equipped with continuous projection maps to $X$ and $Y$. 
Also, just as a function is a special kind of relation in set theory, a continuous function is a special kind of continuous relation: every continuous function $f : X \to Y$ induces a continuous relation $(X, \textrm{id}, f)$, and every continuous relation $(R, \sigma, \tau)$ with $\sigma : R \to X$ a homeomorphism induces a continuous function $\tau \circ \sigma^{-1} : X \to Y$. Example. Let $X = Y = \mathbb{C}$, and let $R = \{ (x, y) \in \mathbb{C}^2 : y = x^2 \}$. Then $R$ together with the canonical projections makes a continuous relation between $\mathbb{C}$ and $\mathbb{C}$. Note that it is not a function, since the projection $(x, y) \mapsto x$ is not a homeomorphism here. (It's not even a bijection!) We will later see that it is a branched continuous multifunction, however. Definition. An unbranched continuous multifunction $F : X \nrightarrow Y$ is a continuous relation $(R, \sigma, \tau)$ such that $\sigma$ is surjective and a local homeomorphism. This makes $R$ into what is called an espace étalé over $X$. Here are some general facts about such spaces: Proposition. Let $\sigma : R \to X$ be a surjective local homeomorphism. 1. For each point $x$ in $X$, the fibre $R_x = \sigma^{-1} \{ x \}$ is non-empty and has the discrete topology. 2. The map $\sigma : R \to X$ has the path lifting property: i.e. if $\gamma : [0, 1] \to X$ is a continuous path and $x = \gamma (0)$, then for any $\tilde{x}$ in $R_x$, there is a unique continuous path $\tilde{\gamma} : [0, 1] \to R$ such that $\sigma \circ \tilde{\gamma} = \gamma$ and $\tilde{\gamma}(0) = \tilde{x}$. Example. Let $X = \mathbb{C} \setminus \{ 0 \}$, $Y = \mathbb{C}$, and define $$R = \{ (x, y) \in \mathbb{C}^2 : x \ne 0, y = x^2 \}$$ Let $\sigma : R \to X$, $\tau : R \to Y$ be the first and second projections, respectively. 
Obviously, $\sigma$ is surjective, and with a little work it can be shown that $\sigma$ is a local homeomorphism: after all, that's exactly what it means to be able to take a square root locally. Thus, $(R, \sigma, \tau)$ is an unbranched multifunction. Now, let $\gamma : [0, 1] \to X$ be the unit circle, with $\gamma(0) = 1$. Explicitly, $$\gamma (t) = \exp (2 \pi t i)$$ One easily verifies that $(1, 1) \in R$, so by the path lifting property there is a unique path $\tilde{\gamma} : [0, 1] \to R$ lying over $\gamma$. Here we are lucky and there is an explicit formula: $$\tilde{\gamma} (t) = (\exp (2 \pi t i), \exp (\pi t i))$$ What is ‘the value’ of the multifunction along this path? It is just $\tau \circ \tilde{\gamma}$, of course. But one immediately sees that \begin{align} \tau(\tilde{\gamma}(0)) & = +1 \newline \tau(\tilde{\gamma}(1)) & = -1 \end{align} So, even though $\tilde{\gamma}$ is a lift of the closed loop $\gamma$, $\tilde{\gamma}$ itself is not a closed loop! It is precisely this which leads to the phenomenon you allude to in your question: in more advanced terms, this is simply the observation that the induced map on fundamental groups $\sigma_* : \pi_1(R, \tilde{x}) \to \pi_1(X, x)$ is not an isomorphism. When this happens, we say $\sigma$ has non-trivial monodromy. Example. More generally, there is a notion of a covering space of $X$. If $R$ is a covering space of $X$ with covering map $\sigma : R \to X$, then $(R, \sigma, \textrm{id})$ is a unbranched continuous multifunction $X \nrightarrow R$. Finally, let us define the notions in question themselves. Definition. A (branched) continuous multifunction $F : X \nrightarrow Y$ is a continuous relation $(R, \sigma, \tau)$ with the following properties: 1. The map $\sigma$ is surjective. 2. The set $$U = \{ x \in X : \sigma \text{ is a local homeomorphism at each point in the fibre } R_x \}$$ is a dense open subset of $X$. 3. 
The restriction $(\hat{R}, \sigma |_{\hat{R}}, \tau |_{\hat{R}})$ is an unbranched continuous multifunction, where $\hat{R} = \sigma^{-1} U$. A ramification point is a point $\tilde{x}$ of $R$ such that $\sigma$ fails to be a local homeomorphism at $\tilde{x}$. A branch point is a point $x$ of $X$ such that there is a ramification point $\tilde{x}$ with $\sigma(\tilde{x}) = x$. Observe that, in the above notation, $X \setminus U$ is precisely the set of branch points of $F$: so we are stipulating that the set of branch points is nowhere dense. Proposition. If $F : X \nrightarrow Y$ is a branched continuous multifunction, and $x$ is not a branch point, then there is an open neighbourhood $U$ and a map $\phi : U \to R$ such that $\sigma \circ \phi$ is the identity on $U$, and $\tau \circ \phi : U \to Y$ is a genuine continuous function. We say $\phi$ is a local section of $F$. This immediately follows from the fact that $\sigma$ is a local homeomorphism above $x$. Example. Returning to the first example, where $X = Y = \mathbb{C}$ and $$R = \{ (x, y) \in \mathbb{C}^2 : y = x^2 \}$$ we see that $(R, \sigma, \tau)$ defines a branched multifunction: the only ramification point is $(0, 0)$, so the only branch point is $0$, and $U = \mathbb{C} \setminus \{ 0 \}$ is indeed an open dense subset of $\mathbb{C}$. Now, what does this have to do with closed loops? Well, here we have to specialise a bit. One property of $\mathbb{C}$ is that it is locally simply connected: indeed, for every open neighbourhood $U$ containing a point $x$, there is a disc centred at $x$ contained in $U$. The monodromy theorem then implies that, for any surjective local homeomorphism $\sigma : R \to U$, any lift of any sufficiently small closed loop in $U$ through $\sigma$ must again be a closed loop. Let us suppose $\sigma$ has the property that every fibre $R_x$ is finite. 
Thus, if $x$ is not a branch point, for every point $\tilde{x}$ in the fibre $R_x$ there is an open neighbourhood $V_{\tilde{x}}$ of $\tilde{x}$ such that $\sigma |_{V_{\tilde{x}}} : V_{\tilde{x}} \to U_{\tilde{x}}$ is a homeomorphism, where $U_{\tilde{x}}$ is an open neighbourhood of $x$. Set $$U = \bigcap_{\tilde{x} \in R_x} U_{\tilde{x}}$$ Since $R_x$ is finite, $U$ is open, and by construction $\sigma^{-1} U$ will be homeomorphic to $R_x \times U$. But $U$ must contain an open disc $D$ centred on $x$, and it is clear that $\sigma^{-1} D$ will be homeomorphic to a disjoint union of finitely many open discs. It follows that every closed loop contained in $D$ must lift to a closed loop in $\sigma^{-1} D$. Therefore, by contraposition, if $x$ is a point such that there are arbitrarily small closed loops encircling $x$ which do not lift to closed loops, $x$ must be a branch point.

- I don't know if this will help you, but here is an example. Think about $\sqrt{z}$ going around the origin counter-clockwise, $z=re^{i\theta}$, defining $\sqrt{r}$ to be the positive square root of the non-negative real number $r$. Then it's easy to define $\sqrt{z}$ for $0\leq\theta<2\pi$, $r\geq0$: we have $\sqrt{re^{i\theta}}=\sqrt{r}e^{i\theta/2}$. But coming back around to the positive real axis, there is a problem. We have already defined $\sqrt{z}$ there as the positive square root of $r$, but the function, if we try to continue it, wants to take the value $\sqrt{r}e^{\pi i}=-\sqrt{r}$. So you have to go to the concept of a Riemann surface or a multivalued function or whatever. One can unambiguously define $\sqrt{z}$ on any simply connected open set not containing the origin (essentially because one can choose values for the argument of $\sqrt{z}$, a way of halving the angle). But near zero, you can't do this continuously in a single-valued way, as we saw above.
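A quick numerical illustration of this monodromy: continue the square root stepwise along the loop $\gamma(t) = \exp(2\pi t i)$, at each step choosing whichever of the two roots is closer to the previous value (the helper name `continue_sqrt` is our own). The lift starts at $+1$ and ends at $-1$:

```python
import cmath

def continue_sqrt(path, w0):
    """Continue a square-root branch along a discretized path.

    At each step pick whichever of the two roots +/- sqrt(z) is
    closer to the previously chosen value, keeping the branch
    continuous along the path.
    """
    w = w0
    values = [w]
    for z in path[1:]:
        r = cmath.sqrt(z)  # principal root
        w = r if abs(r - w) <= abs(-r - w) else -r
        values.append(w)
    return values

N = 1000
path = [cmath.exp(2j * cmath.pi * t / N) for t in range(N + 1)]
values = continue_sqrt(path, 1.0)  # start at sqrt(1) = +1
print(values[0], values[-1])       # the loop closes, the lift does not
```

Refining the discretization does not change the endpoint: the failure of the lift to close is topological, not numerical.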
https://thecuriousastronomer.wordpress.com/2011/11/14/test-of-latex-equations/
This is just a post to try and figure out what is going wrong with the horizontal alignment of LaTeX text. For example $360^{\circ}$ doesn’t seem to align correctly with the text on either side of it. What happens if I just put $F=ma$ into the line? That comes out above the text too. Clearly something is wrong!! What about $P^{2} \propto a^{3}$, how does this come out? Is $a$ the semi-major axis? Yes it is, and $P$ is the period of the orbit.
http://mathhelpforum.com/algebra/166230-algebra-question-print.html
# Algebra question

• Dec 14th 2010, 10:39 AM Jools

Hey all. The root of my problem is actually from a chemistry question I'm doing, but for some reason I can't wrap my head around the algebraic solution to a formula needed to solve the question. It's fairly simple, but for some reason I'm stuck. It gives the answer as x = 0.18, which works when you plug it in, but I can't get to it.

$\frac{1.0 - x}{(0.50 + x)(1.0 + x)} = 1.0$

Here's what I've done:

$\frac{1.0 - x}{.5 + 1.5x + x^2} = 1.0$

(.5 + 1.5x + x^2) = (1.0 - x)

.5x + x^2 = .5

Therefore: .5 = x^2 + .5x

I tried the quadratic formula but it didn't give me the right answer. Can anybody help? Thanks.

• Dec 14th 2010, 10:45 AM mr fantastic

Quote: Originally Posted by Jools

[...] Here's what I've done:

$\frac{1.0 - x}{.5 + 1.5x + x^2} = 1.0$

(.5 + 1.5x + x^2) = (1.0 - x)

.5x + x^2 = .5

Mr F says: This does NOT follow from the previous line. You are meant to ADD x to both sides .... By the way, thank you for posting your work. It makes it easy to diagnose your trouble.

Therefore: .5 = x^2 + .5x

I tried the quadratic formula but it didn't give me the right answer. Can anybody help? Thanks.

And once you have fixed your mistake, I'm sure you know that you must re-arrange the equation into the form quadratic = 0 before using the quadratic formula.
• Dec 14th 2010, 10:46 AM e^(i*pi)

Quote: Originally Posted by Jools

(.5 + 1.5x + x^2) = (1.0 - x)

.5x + x^2 = .5

Your error is between these two lines. For whatever reason you've subtracted when moving the -x from the right to the left, when you should have added it.

EDIT: Mr F is right, far easier to figure out when work has been posted. Just out of interest, what topic is this in chem?

• Dec 15th 2010, 06:07 AM Jools

Thanks for your reply! In looking at it again I did make an error there, but on my paper I did it the right way, and still didn't get the answer from the book. Starting from where I made the error above:

x^2 + 2.5x - .5 = 0

a = 1 b = 2 c = -.5

Using the quadratic formula I come up with: -2.5 +/- 1.44, which is either -1.06 or -3.94

Sorry about the confusion. And thanks again for the help.

P.S. The chemistry topic I am working on is chemical equilibrium. This formula is for finding the concentrations of reactants when volume changes.

• Dec 15th 2010, 08:04 AM HallsofIvy

Then you are using the quadratic formula incorrectly. $\frac{-b\pm\sqrt{b^2- 4ac}}{2a}$ The first part, $\frac{-b}{2a}$, is $\frac{-2}{2}= -1$, not "-2.5". The square root of the discriminant is $\sqrt{b^2- 4ac}= \sqrt{2^2- 4(1)(-.5)}= \sqrt{4+ 2}= \sqrt{6}$, which is about 2.45, not "1.44".

• Dec 15th 2010, 10:47 AM Jools

Ok got it. I wasn't including -b as part of the numerator, I was adding/subtracting it to the resolved fraction. Thanks for all the help!
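For reference, the equation as originally posted expands to x^2 + 2.5x - 0.5 = 0 (so b = 2.5, not the b = 2 used later in the thread), and the quadratic formula then reproduces the book's x = 0.18; a quick check:

```python
import math

# Expanding (1.0 - x) = (0.50 + x)(1.0 + x) gives x^2 + 2.5x - 0.5 = 0
a, b, c = 1.0, 2.5, -0.5

disc = b * b - 4 * a * c              # discriminant b^2 - 4ac = 8.25
x = (-b + math.sqrt(disc)) / (2 * a)  # the positive (physical) root
print(x)                              # approximately 0.186, i.e. the book's 0.18

# plug back into the original equation as a check
residual = (1.0 - x) / ((0.50 + x) * (1.0 + x)) - 1.0
print(residual)
```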
https://tohoku.pure.elsevier.com/ja/publications/solving-non-parametric-inverse-problem-in-continuous-markov-rando
# Solving non-parametric inverse problem in continuous Markov random field using loopy belief propagation

Muneki Yasuda, Shun Kataoka

## Abstract

In this paper, we address the inverse problem, or the statistical machine learning problem, in Markov random fields with a non-parametric pair-wise energy function with continuous variables. The inverse problem is formulated by maximum likelihood estimation. The exact treatment of maximum likelihood estimation is intractable because of two problems: (1) it includes the evaluation of the partition function and (2) it is formulated in the form of functional optimization. We avoid Problem (1) by using Bethe approximation. Bethe approximation is an approximation technique equivalent to the loopy belief propagation. Problem (2) can be solved by using orthonormal function expansion. Orthonormal function expansion can reduce a functional optimization problem to a function optimization problem. Our method can provide an analytic form of the solution of the inverse problem within the framework of Bethe approximation as a result of variational optimization.

Original language: English. Article number 084806, Journal of the Physical Society of Japan, 86(8). https://doi.org/10.7566/JPSJ.86.084806. Published 2017-08-15.
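As a rough illustration of the sum-product updates that underlie the Bethe approximation, here is a discrete toy with binary variables on a three-node chain, where belief propagation is exact (the paper's continuous, non-parametric setting instead expands messages in orthonormal functions; all potentials and names below are our own invention):

```python
import itertools
import numpy as np

# Pairwise MRF p(x) proportional to prod_i psi_i(x_i) prod_(i,j) psi_ij(x_i, x_j)
unary = {0: np.array([1.0, 2.0]),
         1: np.array([1.0, 1.0]),
         2: np.array([3.0, 1.0])}
pair = {(0, 1): np.array([[2.0, 1.0], [1.0, 2.0]]),
        (1, 2): np.array([[1.0, 3.0], [3.0, 1.0]])}

neighbors = {i: [] for i in unary}
for (i, j) in pair:
    neighbors[i].append(j)
    neighbors[j].append(i)

def psi(i, j, xi, xj):
    # pairwise potential regardless of the stored key order
    return pair[(i, j)][xi, xj] if (i, j) in pair else pair[(j, i)][xj, xi]

# messages m[(i, j)](x_j), initialised uniform
m = {(i, j): np.ones(2) for i in unary for j in neighbors[i]}

for _ in range(50):                      # parallel sum-product updates
    new = {}
    for (i, j) in m:
        msg = np.zeros(2)
        for xj in range(2):
            for xi in range(2):
                prod = unary[i][xi] * psi(i, j, xi, xj)
                for k in neighbors[i]:
                    if k != j:
                        prod *= m[(k, i)][xi]
                msg[xj] += prod
        new[(i, j)] = msg / msg.sum()
    m = new

def belief(i):                           # normalised single-site belief
    b = unary[i].copy()
    for k in neighbors[i]:
        b *= m[(k, i)]
    return b / b.sum()

# brute-force marginals for comparison
joint = np.zeros((2, 2, 2))
for x in itertools.product(range(2), repeat=3):
    w = unary[0][x[0]] * unary[1][x[1]] * unary[2][x[2]]
    joint[x] = w * psi(0, 1, x[0], x[1]) * psi(1, 2, x[1], x[2])
joint /= joint.sum()
print(belief(0), joint.sum(axis=(1, 2)))  # identical on this tree
```

On graphs with loops the same fixed-point updates define the Bethe approximation to the marginals rather than the exact ones.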
https://www.arxiv-vanity.com/papers/1611.06561/
Latest results from lattice N=4 super Yang–Mills

Institute for Theoretical Physics, University of Bern, 3012 Bern, Switzerland

Simon Catterall, Department of Physics, Syracuse University, Syracuse, New York 13244, United States

Poul H. Damgaard, Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark

Joel Giedt, Department of Physics, Applied Physics and Astronomy, Rensselaer Polytechnic Institute, Troy, New York 12065, United States

Abstract: We present some of the latest results from our numerical investigations of $\mathcal{N} = 4$ supersymmetric Yang–Mills theory formulated on a space-time lattice. Based on a construction that exactly preserves a single supersymmetry at non-zero lattice spacing, we recently developed an improved lattice action that is now being employed in large-scale calculations. Here we update our studies of the static potential using this new action, also applying tree-level lattice perturbation theory to improve the analysis of the potential itself. Considering relatively weak couplings, we obtain results for the Coulomb coefficient that are consistent with continuum perturbation theory.

Non-perturbative investigations of $\mathcal{N} = 4$ supersymmetric Yang–Mills (SYM) formulated on a space-time lattice have advanced rapidly in recent years. In addition to playing important roles in holographic approaches to quantum gravity, investigations of the structure of scattering amplitudes, and the conformal bootstrap program, $\mathcal{N} = 4$ SYM is also the only known four-dimensional theory for which a lattice regularization can exactly preserve a closed subalgebra of the supersymmetries at non-zero lattice spacing [1, 2, 3, 4, 5]. Based on this lattice construction we have been pursuing large-scale numerical investigations of $\mathcal{N} = 4$ SYM that can in principle access non-perturbative couplings for arbitrary numbers of colors. Here we discuss a selection of our latest results from this work in progress.
Last year we introduced a procedure to regulate flat directions in numerical computations by modifying the moduli equations in a way that preserves the single exact supersymmetry at non-zero lattice spacing [6, 7, 8]. This procedure produces a lattice action that exhibits effective $\mathcal{O}(a)$ improvement, with significantly reduced discretization artifacts that vanish much more rapidly upon approaching the continuum limit. We have implemented this improved action in our parallel software for lattice $\mathcal{N} = 4$ SYM [9], and are now employing it in the large-scale numerical computations discussed in this proceedings. We make our software publicly available to encourage independent investigations and the development of a lattice SYM community.

In this proceedings, after briefly reviewing the improved action we revisit our lattice investigations of the static potential [10, 11]. In addition to the new lattice action, we also improve the static potential analysis itself by applying tree-level lattice perturbation theory. We observe a coulombic potential and our preliminary results for the Coulomb coefficient are consistent with continuum perturbative predictions. A separate contribution to these proceedings [12] discusses our efforts to investigate S duality on the Coulomb branch of $\mathcal{N} = 4$ SYM, where some of the adjoint scalar fields acquire non-zero vacuum expectation values leading to spontaneous symmetry breaking. These efforts involve measuring the masses of the elementary W boson and the corresponding dual topological ’t Hooft–Polyakov monopole. Ref. [12] also provides an update on our ongoing investigations of the Konishi operator scaling dimension.

Improved lattice action for N=4 SYM

Our lattice formulation of $\mathcal{N} = 4$ SYM is based on the Marcus (or Geometric-Langlands) topological twist of the continuum theory [13, 14]. This produces a gauge theory with a five-component complexified gauge field in four space-time dimensions.
We discretize the theory on the lattice, exactly preserving the closed subalgebra involving the single twisted-scalar supercharge $\mathcal{Q}$. The improved lattice action that we use is [6]

\begin{align} S = \frac{N}{2\lambda_{\rm lat}} \sum_n \Big\{ & {\rm Tr}\Big[\mathcal{Q}\Big(\chi_{ab}(n)\,\mathcal{D}^{(+)}_a\mathcal{U}_b(n) + \eta(n)\big\{\bar{\mathcal{D}}^{(-)}_a\mathcal{U}_a(n) + G\,\mathcal{O}(n)\,\mathbb{I}_N\big\} - \tfrac{1}{2}\eta(n)\,d(n)\Big)\Big] \\ & - \tfrac{1}{4}{\rm Tr}\Big[\epsilon_{abcde}\,\chi_{de}(n + \hat{\mu}_a + \hat{\mu}_b + \hat{\mu}_c)\,\bar{\mathcal{D}}^{(+)}_c\chi_{ab}(n)\Big] + \mu^2\sum_a\Big(\frac{1}{N}{\rm Tr}\big[\mathcal{U}_a(n)\,\bar{\mathcal{U}}_a(n)\big] - 1\Big)^2 \Big\}, \end{align}

where the operator $\mathcal{O}(n)$ in the first line is built from the determinant of the oriented plaquette $\mathcal{P}_{ab}(n)$ constructed from the complexified gauge links in the $a$–$b$ plane. Repeated indices are summed and the forward/backward finite-difference operators both reduce to the usual covariant derivatives in the continuum limit [3, 4]. All indices run from 1 through 5, corresponding to the five symmetric basis vectors $\hat{\mu}_a$ of the four-dimensional lattice [2, 11]. When $\mu = G = 0$ this action has the same form as the twisted continuum theory [13, 14]. These two tunable couplings are introduced to stabilize numerical calculations by regulating flat directions and exact zero modes. The scalar potential with coupling $\mu$ lifts flat directions in the SU($N$) sector, while the plaquette determinant with coupling $G$ does so in the U(1) sector. Although non-zero $\mu$ softly breaks the $\mathcal{Q}$ supersymmetry, the plaquette determinant deformation is $\mathcal{Q}$-exact. This $\mathcal{Q}$-exact deformation results from the general procedure introduced in Ref. [6], which imposes the Ward identity by modifying the equations of motion for the auxiliary field,

$$d(n) = \bar{\mathcal{D}}^{(-)}_a\mathcal{U}_a(n) \quad\longrightarrow\quad d(n) = \bar{\mathcal{D}}^{(-)}_a\mathcal{U}_a(n) + G\,\mathcal{O}(n)\,\mathbb{I}_N. \qquad (1)$$

With this modification the Ward identity is satisfied after averaging over the lattice volume, while the SU($N$) sector remains constrained by the scalar potential. Thanks to the reduced soft supersymmetry breaking enabled by this procedure, Ward identity violations vanish in the continuum limit [8]. This is consistent with the $\mathcal{O}(a)$ improvement expected, since $\mathcal{Q}$ and the other lattice symmetries forbid all dimension-5 operators [6]. The moduli space of the lattice theory survives to all orders of lattice perturbation theory [15].
If nonperturbative effects such as instantons also preserve the moduli space, then the most general long-distance effective action contains only the terms in the improved action above [11, 16]. In addition, all but one of the coefficients on those terms can be absorbed by rescaling the fermions and the auxiliary field, leaving only a single coupling that may need to be tuned to recover the full symmetries of $\mathcal{N} = 4$ SYM in the continuum limit.

Tree-level improvement for the lattice N=4 SYM static potential

We extract the static potential from the exponential temporal decay of rectangular Wilson loops. To easily analyze all possible spatial separations we gauge fix to Coulomb gauge and compute correlators of pairs of Wilson lines, each the product of complexified temporal links at a fixed spatial location, extending from one timeslice to a later one. The static potential analysis can be improved by refining the scalar distance $r$ associated with the spatial three-vector separation $\vec{n}$. This is a long-established idea in lattice gauge theory, dating back at least to Ref. [17]. Previously we identified the scalar distance as the euclidean norm of the four-vector $x = \sum_a n_a \hat{\mu}_a$, where each $\hat{\mu}_a$ is a basis vector of the lattice. Because these basis vectors are not orthogonal, $x$ is a four-vector in physical space-time even though $\vec{n}$ is a three-vector displacement on a fixed timeslice of the lattice. To obtain tree-level improvement we instead extract an improved scalar distance $r_I$ from the Fourier transform of the bosonic propagator computed at tree level in lattice perturbation theory; the potential is then exactly coulombic in $r_I$ to this order in lattice perturbation theory. Using the tree-level lattice propagator computed in Ref. [15], we have schematically

$$\frac{1}{4\pi r_I} = \int_{-\pi}^{\pi} \frac{d^4 k}{(2\pi)^4}\, \frac{\cos(k \cdot x)}{4 \sum_a \sin^2(k_a / 2)}. \qquad (2)$$

In this expression $x$ is the same four-vector discussed above, while the momenta are expanded in the dual basis vectors $\hat{\nu}_b$ defined by $\hat{\mu}_a \cdot \hat{\nu}_b = \delta_{ab}$. The last identity allows us to trade the $\hat{\mu}_a$ for the $\hat{\nu}_b$, more directly relating $r_I$ to the three-vector displacement $\vec{n}$. On a finite lattice, the continuous integral in Eq. 2 would reduce to a discrete sum over integer momenta.
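Four-dimensional integrals of this type can be estimated stochastically; a toy plain Monte Carlo sketch (a smooth stand-in integrand with a known answer, not the actual lattice propagator of Eq. 2):

```python
import math
import random

# Plain Monte Carlo estimate of a 4-D integral with a known answer:
#   I = integral over [0,1]^4 of exp(-(x1^2 + x2^2 + x3^2 + x4^2))
def integrand(x):
    return math.exp(-sum(xi * xi for xi in x))

random.seed(7)
n = 200000
total = 0.0
for _ in range(n):
    total += integrand([random.random() for _ in range(4)])
estimate = total / n                  # the volume of [0,1]^4 is 1

# exact value factorizes into four 1-D Gaussian integrals
exact = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** 4
print(estimate, exact)
```

Stratified or adaptive schemes such as Divonne and vegas reduce the variance of exactly this kind of estimate, which is why they converge faster at fixed sample size.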
Since we have not yet computed the zero-mode ($k = 0$) contribution to the discrete sum, here we determine $r_I$ by numerically evaluating the continuous integral that corresponds to the infinite-volume limit. Ref. [17] argues that the infinite-volume $r_I$ can safely be used in finite-volume lattice calculations, without affecting either the Coulomb coefficient or the string tension. In agreement with this argument, we checked that both approaches give us similar results even though we currently omit the zero-mode contribution from the finite-volume computation.

We experimented with three integrators to numerically evaluate the four-dimensional integral in Eq. 2, obtaining consistent results but significantly different performance. For our problem the most efficient integrator we were able to find was the Divonne algorithm implemented in the Cuba library [18]. This is a stratified sampling algorithm based on CERNLIB routine D151 [19]. Especially for large separations, Divonne’s evaluation of Eq. 2 converged several orders of magnitude more rapidly than the two versions of the vegas algorithm [20] that we tested. These two versions of vegas both provide some improvements over the original algorithm, and are implemented in Cuba and at http://github.com/gplepage/vegas.

Latest results for the static potential

In Fig. 1 we demonstrate the effects of tree-level improvement for lattice $\mathcal{N} = 4$ SYM computations of the static potential. All four plots in this figure consider lattices generated using the improved action at fixed ’t Hooft coupling. The top row of plots analyze the potential with the scalar distance defined by the naive euclidean norm of the four-vector $x$. In the top-left plot we show the potential itself for gauge groups U($N$) with $N = 2$, 3 and 4, including fits to the Coulomb form $V(r) = A - C / r$. It is possible to see that the points at the shortest distances are consistently below the fit curves, while the next points are well above them. This scatter of the points around the fit is isolated in the top-right plot, where we show the difference between the data and the fit.
It is precisely this scatter at short distances that tree-level improvement ameliorates, as shown in the bottom row of plots. These results come from the same gauge configurations and measurements as those in the top row, with the only change in the analysis being the use of $r_I$ obtained from Eq. 2 via the Divonne integrator in Cuba. There is not a one-to-one correspondence between the points in the two rows of plots. Several displacements $\vec{n}$ that produce the same euclidean norm (and are therefore combined in our original analyses) lead to distinct $r_I$. At the same time, the finite-volume effects also change. We drop any displacements that extend at least halfway across the spatial volume of the lattice; the improved $r_I$ changes which displacements survive this cut.

In Fig. 2 we collect preliminary results from tree-level improved static potential analyses employing our new ensembles of gauge configurations generated using the improved action. On our standard lattice volume we consider three U($N$) gauge groups with $N = 2$, 3 and 4, while to explore finite-volume effects we also carry out calculations on larger volumes for $N = 2$. (Because the larger volumes also help to control discretization artifacts at stronger couplings, so far we have only generated the largest lattices at the strongest ’t Hooft coupling included in this analysis.) A notable finite-volume effect that we observe is a small negative value for the string tension $\sigma$ when we fit the static potential to the confining form $V(r) = A - C / r + \sigma r$. We can see in Fig. 1 that such a negative string tension would improve the fit for distances near the finite-volume cutoff. As the volume increases we gain data at larger distances, which more effectively constrain $\sigma$. In the right plot of Fig. 2 we see that the string tension moves toward zero as the volume increases, confirming that the static potential is coulombic at all couplings we consider. We therefore fit the static potential to the Coulomb form to obtain the results for the Coulomb coefficient $C$ in the left plot of Fig. 2.
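The Coulomb and confining fits described here can be sketched with synthetic data (illustrative numbers only, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic static-potential data: V(r) = A - C/r plus small noise
rng = np.random.default_rng(0)
r = np.arange(1.0, 8.0)
A_true, C_true = 0.5, 0.25
V = A_true - C_true / r + rng.normal(0.0, 1e-3, r.size)

def coulomb(r, A, C):            # Coulomb form V(r) = A - C/r
    return A - C / r

def confining(r, A, C, sigma):   # confining form V(r) = A - C/r + sigma*r
    return A - C / r + sigma * r

(A_fit, C_fit), _ = curve_fit(coulomb, r, V)
(_, _, sigma_fit), _ = curve_fit(confining, r, V)
print(C_fit, sigma_fit)  # C near 0.25, string tension consistent with zero
```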
For the same gauge groups and lattice volumes discussed above, our results are consistent with the next-to-next-to-leading-order (NNLO) perturbative prediction from Refs. [21, 22, 23]. The agreement with perturbation theory tends to improve as $N$ and the lattice volume increase, especially at the strongest ’t Hooft coupling, where the larger volume helps control discretization artifacts.

Next steps for lattice N=4 SYM

We are near to finalizing and publishing our tree-level improved analyses of the lattice $\mathcal{N} = 4$ SYM static potential based on the improved lattice action introduced last year and summarized above. In addition we are making progress analyzing the anomalous dimension of the Konishi operator, developing a variational method to disentangle the Konishi and supergravity operators as described in Ref. [12]. We continue to investigate the possible sign problem of the lattice theory, as well as the restoration of the other supersymmetries in the continuum limit. Finally, Ref. [12] also presents a new project to study S duality on the Coulomb branch of the theory, by measuring the masses of the W boson and the corresponding dual topological ’t Hooft–Polyakov monopole. Ideally this Coulomb branch investigation will allow non-perturbative lattice tests of S duality even at ’t Hooft couplings relatively far from the self-dual point.

Acknowledgments: We thank Tom DeGrand, Julius Kuti and Rainer Sommer for helpful discussions of perturbative improvement for the static potential, and Rudi Rahn for advice on numerical integration. This work was supported by the U.S. Department of Energy (DOE), Office of Science, Office of High Energy Physics, under Award Numbers DE-SC0009998 (DS, SC) and DE-SC0013496 (JG). Numerical calculations were carried out on the HEP-TH cluster at the University of Colorado, the DOE-funded USQCD facilities at Fermilab, and the Comet cluster at the San Diego Supercomputer Center through the Extreme Science and Engineering Discovery Environment (XSEDE) supported by U.S.
National Science Foundation grant number ACI-1053575.
https://embed.planetcalc.com/577/
Sine wave calculator

Construction of a sine wave with the user's parameters. This calculator builds a parametric sinusoid in the range from 0 to $2\pi$. Why parametric? Because the graph is represented by the following formula $y(x)=\sin(kx+a)$, and the coefficients k and a can be set by the user.

Some words about the form in which the user can set the coefficients – there are three possible forms:

1. Radians: the number put in the box is interpreted as radians, for example, 2 radians
2. Degrees: the number put in the box is interpreted as degrees, for example, 60 degrees
3. Multiples of $\pi$: the number put in the box is interpreted as a factor in front of the number $\pi$, for example, 2$\pi$ radians

By default, k = 1, a = 0, which gives us the classic graph $y(x)=\sin(x)$.

P.S. It is clear that when k is very large, the graph looks jagged, but what can you do – it's a linear approximation after all!
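The sampling the calculator performs can be sketched in a few lines (the function name `sine_wave` is our own):

```python
import math

def sine_wave(k=1.0, a=0.0, n=17):
    """Sample y(x) = sin(k*x + a) at n evenly spaced points on [0, 2*pi]."""
    xs = [2 * math.pi * i / (n - 1) for i in range(n)]
    return [(x, math.sin(k * x + a)) for x in xs]

# classic y = sin(x): starts at 0, peaks near pi/2
for x, y in sine_wave()[:5]:
    print(f"{x:6.3f}  {y:+.3f}")
```

Plotting line segments between consecutive samples is the "linear approximation" the page mentions; with large k the curve oscillates faster than the sampling, hence the jagged look.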
https://math.hecker.org/2014/03/08/linear-algebra-and-its-applications-exercise-3-1-11/
Linear Algebra and Its Applications, Exercise 3.1.11

Exercise 3.1.11. Fredholm’s alternative to the fundamental theorem of linear algebra states that for any matrix $A$ and vector $b$ either 1) $Ax = b$ has a solution or 2) $A^Ty = 0, y^Tb \ne 0$ has a solution, but not both. Show that assuming both (1) and (2) have solutions leads to a contradiction.

Answer: Suppose that both (1) and (2) have solutions for a given matrix $A$ and vector $b$. In other words, $Ax = b$ for some $x$ and $A^Ty = 0$ for some $y$ with $y^Tb \ne 0$. Since $A^Ty = 0$ we also have $(A^Ty)^T = 0$. But $(A^Ty)^T = y^T(A^T)^T = y^TA$, so that we also have $y^TA = 0$. Multiplying both sides on the right by $x$ we have $y^TAx = 0 \cdot x = 0$. But $Ax = b$, so the equation $y^TAx = 0$ reduces to $y^Tb = 0$, contrary to our original assumption that $y^Tb \ne 0$. We have thus shown that either (1) has a solution or (2) has a solution, but not both at the same time.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang. If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
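The alternative is easy to check numerically; in this sketch (the matrix is our own example, not from the exercise) $y$ spans the left null space of $A$, case (1) takes $b$ in the column space, and case (2) takes $b$ outside it:

```python
import numpy as np

# A 3x2 matrix whose column space is a plane in R^3
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 1.0, -1.0])   # left null vector: A^T y = 0
assert np.allclose(A.T @ y, 0)

# Case (1): b in the column space -> Ax = b solvable, and y^T b = 0
b1 = A @ np.array([2.0, 3.0])
print(y @ b1)                     # 0.0: alternative (2) fails

# Case (2): b outside the column space -> y^T b != 0, Ax = b unsolvable
b2 = np.array([1.0, 0.0, 0.0])
x, res, *_ = np.linalg.lstsq(A, b2, rcond=None)
print(y @ b2, res)                # nonzero y^T b, nonzero least-squares residual
```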
http://www.digplanet.com/wiki/Acoustic_theory
Acoustic theory is the field relating to the mathematical description of sound waves. It is derived from fluid dynamics. See acoustics for the engineering approach. The propagation of sound waves in a fluid (such as water) can be modeled by an equation of motion (conservation of momentum) and an equation of continuity (conservation of mass). With some simplifications, in particular constant density, they can be given as follows: \begin{align} \rho_0 \frac{\partial \mathbf{v}}{\partial t} + \nabla p & = 0 \qquad \text{(Momentum balance)} \\ \frac{\partial p}{\partial t} + \kappa~\nabla \cdot \mathbf{v} & = 0 \qquad \text{(Mass balance)} \end{align} where $p(\mathbf{x}, t)$ is the acoustic pressure and $\mathbf{v}(\mathbf{x}, t)$ is the acoustic fluid velocity vector, $\mathbf{x}$ is the vector of spatial coordinates $x, y, z$, $t$ is the time, $\rho_0$ is the static mass density of the medium and $\kappa$ is the bulk modulus of the medium. The bulk modulus can be expressed in terms of the density and the speed of sound in the medium ($c_0$) as $\kappa = \rho_0 c_0^2 ~.$ If the acoustic fluid velocity field is irrotational, $\nabla \times \mathbf{v}=\mathbf{0}$, then the acoustic wave equation is a combination of these two sets of balance equations and can be expressed as [1] $\cfrac{\partial^2 \mathbf{v}}{\partial t^2} - c_0^2~\nabla^2\mathbf{v} = 0 \qquad \text{or} \qquad \cfrac{\partial^2 p}{\partial t^2} - c_0^2~\nabla^2 p = 0,$ where we have used the vector Laplacian, $\nabla^2 \mathbf{v} = \nabla(\nabla \cdot \mathbf{v}) - \nabla \times (\nabla \times \mathbf{v})$.
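As a small illustration (not part of the original article), the one-dimensional scalar version of this wave equation, $\partial^2 p/\partial t^2 = c_0^2\,\partial^2 p/\partial x^2$, can be integrated with a leapfrog finite-difference scheme; all parameter values below are arbitrary:

```python
import numpy as np

# Leapfrog discretization of the 1-D wave equation p_tt = c0^2 p_xx
# with fixed p = 0 at both ends of the domain.
nx, c0, dx = 401, 340.0, 0.01
dt = 0.5 * dx / c0                     # CFL number 0.5 keeps the scheme stable
x = np.arange(nx) * dx

p = np.exp(-((x - 2.0) ** 2) / 0.01)   # initial Gaussian pressure pulse
p_prev = p.copy()                      # zero initial velocity
coef = (c0 * dt / dx) ** 2

for _ in range(600):
    p_next = np.zeros_like(p)          # endpoints stay at zero (rigid ends)
    p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                    + coef * (p[2:] - 2 * p[1:-1] + p[:-2]))
    p_prev, p = p, p_next

print(np.max(np.abs(p)))               # stays bounded: scheme is stable
```

With zero initial velocity the pulse splits into two half-amplitude pulses travelling in opposite directions at speed $c_0$, the d'Alembert behaviour expected of the continuum equation.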
The acoustic wave equation (and the mass and momentum balance equations) are often expressed in terms of a scalar potential $\varphi$ where $\mathbf{v} = \nabla\varphi$. In that case the acoustic wave equation is written as $\cfrac{\partial^2 \varphi}{\partial t^2} - c_0^2~\nabla^2 \varphi = 0$ and the momentum balance and mass balance are expressed as $p + \rho_0~\cfrac{\partial\varphi}{\partial t} = 0 ~;~~ \rho + \cfrac{\rho_0}{c_0^2}~\cfrac{\partial\varphi}{\partial t} = 0 ~.$ ## Derivation of the governing equations The derivations of the above equations for waves in an acoustic medium are given below. ### Conservation of momentum The equations for the conservation of linear momentum for a fluid medium are $\rho \left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v}\right) = -\nabla p + \nabla \cdot\boldsymbol{s} + \rho\mathbf{b}$ where $\mathbf{b}$ is the body force per unit mass, $p$ is the pressure, and $\boldsymbol{s}$ is the deviatoric stress. If $\boldsymbol{\sigma}$ is the Cauchy stress, then $p := -\tfrac{1}{3}~\text{tr}(\boldsymbol{\sigma}) ~;~~ \boldsymbol{s} := \boldsymbol{\sigma} + p~\boldsymbol{\mathit{1}}$ where $\boldsymbol{\mathit{1}}$ is the rank-2 identity tensor. We make several assumptions to derive the momentum balance equation for an acoustic medium. These assumptions and the resulting forms of the momentum equations are outlined below. #### Assumption 1: Newtonian fluid In acoustics, the fluid medium is assumed to be Newtonian. For a Newtonian fluid, the deviatoric stress tensor is related to the velocity by $\boldsymbol{s} = \mu~\left[\nabla\mathbf{v} + (\nabla\mathbf{v})^T\right] + \lambda~(\nabla \cdot \mathbf{v})~\boldsymbol{\mathit{1}}$ where $\mu$ is the shear viscosity and $\lambda$ is the bulk viscosity. 
Therefore, the divergence of $\boldsymbol{s}$ is given by \begin{align} \nabla\cdot\boldsymbol{s} \equiv \cfrac{\partial s_{ij}}{\partial x_i} & = \mu \left[\cfrac{\partial}{\partial x_i}\left(\cfrac{\partial v_i}{\partial x_j}+\cfrac{\partial v_j}{\partial x_i}\right)\right] + \lambda~\left[\cfrac{\partial}{\partial x_i}\left(\cfrac{\partial v_k}{\partial x_k}\right)\right]\delta_{ij} \\ & = \mu~\cfrac{\partial^2 v_i}{\partial x_i \partial x_j} + \mu~\cfrac{\partial^2 v_j}{\partial x_i\partial x_i} + \lambda~\cfrac{\partial^2 v_k}{\partial x_k\partial x_j} \\ & = (\mu + \lambda)~\cfrac{\partial^2 v_i}{\partial x_i \partial x_j} + \mu~\cfrac{\partial^2 v_j}{\partial x_i^2} \\ & \equiv (\mu + \lambda)~\nabla(\nabla\cdot\mathbf{v}) + \mu~\nabla^2\mathbf{v} ~. \end{align} Using the identity $\nabla^2\mathbf{v} = \nabla(\nabla\cdot\mathbf{v}) - \nabla\times\nabla\times\mathbf{v}$, we have $\nabla\cdot\boldsymbol{s} = (2\mu + \lambda)~\nabla(\nabla\cdot\mathbf{v}) - \mu~\nabla\times\nabla\times\mathbf{v}~.$ The equations for the conservation of momentum may then be written as $\rho \left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v}\right) = -\nabla p + (2\mu + \lambda)~\nabla(\nabla\cdot\mathbf{v}) - \mu~\nabla\times\nabla\times\mathbf{v} + \rho\mathbf{b}$ #### Assumption 2: Irrotational flow For most acoustics problems we assume that the flow is irrotational, that is, the vorticity is zero. In that case $\nabla\times\mathbf{v} = 0$ and the momentum equation reduces to $\rho \left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v}\right) = -\nabla p + (2\mu + \lambda)~\nabla(\nabla\cdot\mathbf{v}) + \rho\mathbf{b}$ #### Assumption 3: No body forces Another frequently made assumption is that the effect of body forces on the fluid medium is negligible.
The momentum equation then further simplifies to $\rho \left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v}\right) = -\nabla p + (2\mu + \lambda)~\nabla(\nabla\cdot\mathbf{v})$ #### Assumption 4: No viscous forces Additionally, if we assume that there are no viscous forces in the medium (the bulk and shear viscosities are zero), the momentum equation takes the form $\rho \left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v}\right) = -\nabla p$ #### Assumption 5: Small disturbances An important simplifying assumption for acoustic waves is that the amplitude of the disturbance of the field quantities is small. This assumption leads to the linear or small signal acoustic wave equation. Then we can express the variables as the sum of the (time averaged) mean field ($\langle\cdot\rangle$) that varies in space and a small fluctuating field ($\tilde{\cdot}$) that varies in space and time. That is $p = \langle p\rangle + \tilde{p} ~;~~ \rho = \langle\rho\rangle + \tilde{\rho} ~;~~ \mathbf{v} = \langle\mathbf{v}\rangle + \tilde{\mathbf{v}}$ and $\cfrac{\partial\langle p \rangle}{\partial t} = 0 ~;~~ \cfrac{\partial\langle \rho \rangle}{\partial t} = 0 ~;~~ \cfrac{\partial\langle \mathbf{v} \rangle}{\partial t} = \mathbf{0} ~.$ Then the momentum equation can be expressed as $\left[\langle\rho\rangle+\tilde{\rho}\right] \left[\frac{\partial\tilde{\mathbf{v}}}{\partial t} + \left[\langle\mathbf{v}\rangle+\tilde{\mathbf{v}}\right] \cdot \nabla \left[\langle\mathbf{v}\rangle+\tilde{\mathbf{v}}\right]\right] = -\nabla \left[\langle p\rangle+\tilde{p}\right]$ Since the fluctuations are assumed to be small, products of the fluctuation terms can be neglected (to first order) and we have \begin{align} \langle\rho\rangle~\frac{\partial\tilde{\mathbf{v}}}{\partial t} & + \left[\langle\rho\rangle+\tilde{\rho}\right]\left[\langle\mathbf{v}\rangle\cdot\nabla \langle\mathbf{v}\rangle\right]+ 
\langle\rho\rangle\left[\langle\mathbf{v}\rangle\cdot\nabla\tilde{\mathbf{v}} + \tilde{\mathbf{v}}\cdot\nabla\langle\mathbf{v}\rangle\right] \\ & = -\nabla \left[\langle p\rangle+\tilde{p}\right] \end{align} #### Assumption 6: Homogeneous medium Next we assume that the medium is homogeneous; in the sense that the time averaged variables $\langle p \rangle$ and $\langle \rho \rangle$ have zero gradients, i.e., $\nabla\langle p \rangle = 0 ~;~~ \nabla\langle \rho \rangle = 0 ~.$ The momentum equation then becomes $\langle\rho\rangle~\frac{\partial\tilde{\mathbf{v}}}{\partial t} + \left[\langle\rho\rangle+\tilde{\rho}\right]\left[\langle\mathbf{v}\rangle\cdot\nabla \langle\mathbf{v}\rangle\right]+ \langle\rho\rangle\left[\langle\mathbf{v}\rangle\cdot\nabla\tilde{\mathbf{v}} + \tilde{\mathbf{v}}\cdot\nabla\langle\mathbf{v}\rangle\right] = -\nabla\tilde{p}$ #### Assumption 7: Medium at rest At this stage we assume that the medium is at rest which implies that the mean velocity is zero, i.e. $\langle\mathbf{v}\rangle = 0$. Then the balance of momentum reduces to $\langle\rho\rangle~\frac{\partial\tilde{\mathbf{v}}}{\partial t} = -\nabla\tilde{p}$ Dropping the tildes and using $\rho_0 := \langle\rho\rangle$, we get the commonly used form of the acoustic momentum equation $\rho_0~\frac{\partial\mathbf{v}}{\partial t} + \nabla p = 0 ~.$ ### Conservation of mass The equation for the conservation of mass in a fluid volume (without any mass sources or sinks) is given by $\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0$ where $\rho(\mathbf{x},t)$ is the mass density of the fluid and $\mathbf{v}(\mathbf{x},t)$ is the fluid velocity. The equation for the conservation of mass for an acoustic medium can also be derived in a manner similar to that used for the conservation of momentum. 
#### Assumption 1: Small disturbances From the assumption of small disturbances we have $p = \langle p\rangle + \tilde{p} ~;~~ \rho = \langle\rho\rangle + \tilde{\rho} ~;~~ \mathbf{v} = \langle\mathbf{v}\rangle + \tilde{\mathbf{v}}$ and $\cfrac{\partial\langle p \rangle}{\partial t} = 0 ~;~~ \cfrac{\partial\langle \rho \rangle}{\partial t} = 0 ~;~~ \cfrac{\partial\langle \mathbf{v} \rangle}{\partial t} = \mathbf{0} ~.$ Then the mass balance equation can be written as $\frac{\partial\tilde{\rho}}{\partial t} + \left[\langle\rho\rangle+\tilde{\rho}\right]\nabla \cdot\left[\langle\mathbf{v}\rangle+\tilde{\mathbf{v}}\right] + \nabla\left[\langle\rho\rangle+\tilde{\rho}\right]\cdot \left[\langle\mathbf{v}\rangle+\tilde{\mathbf{v}}\right]= 0$ If we neglect higher than first order terms in the fluctuations, the mass balance equation becomes $\frac{\partial\tilde{\rho}}{\partial t} + \left[\langle\rho\rangle+\tilde{\rho}\right]\nabla \cdot\langle\mathbf{v}\rangle+ \langle\rho\rangle\nabla\cdot\tilde{\mathbf{v}} + \nabla\left[\langle\rho\rangle+\tilde{\rho}\right]\cdot\langle\mathbf{v}\rangle+ \nabla\langle\rho\rangle\cdot\tilde{\mathbf{v}}= 0$ #### Assumption 2: Homogeneous medium Next we assume that the medium is homogeneous, i.e., $\nabla\langle \rho \rangle = 0 ~.$ Then the mass balance equation takes the form $\frac{\partial\tilde{\rho}}{\partial t} + \left[\langle\rho\rangle+\tilde{\rho}\right]\nabla \cdot\langle\mathbf{v}\rangle+ \langle\rho\rangle\nabla\cdot\tilde{\mathbf{v}} + \nabla\tilde{\rho}\cdot\langle\mathbf{v}\rangle = 0$ #### Assumption 3: Medium at rest At this stage we assume that the medium is at rest, i.e., $\langle\mathbf{v}\rangle = 0$. Then the mass balance equation can be expressed as $\frac{\partial\tilde{\rho}}{\partial t} + \langle\rho\rangle\nabla\cdot\tilde{\mathbf{v}} = 0$ #### Assumption 4: Ideal gas, adiabatic, reversible In order to close the system of equations we need an equation of state for the pressure. 
To do that we assume that the medium is an ideal gas and all acoustic waves compress the medium in an adiabatic and reversible manner. The equation of state can then be expressed in the form of the differential equation: $\cfrac{dp}{d\rho} = \cfrac{\gamma~p}{\rho} ~;~~ \gamma := \cfrac{c_p}{c_v} ~;~~ c^2 = \cfrac{\gamma~p}{\rho} ~.$ where $c_p$ is the specific heat at constant pressure, $c_v$ is the specific heat at constant volume, and $c$ is the wave speed. The value of $\gamma$ is 1.4 if the acoustic medium is air. For small disturbances $\cfrac{dp}{d\rho} \approx \cfrac{\tilde{p}}{\tilde{\rho}} ~;~~ \cfrac{p}{\rho} \approx \cfrac{\langle p \rangle}{\langle \rho \rangle} ~;~~ c^2 \approx c_0^2 = \cfrac{\gamma~\langle p\rangle}{\langle \rho \rangle} ~.$ where $c_0$ is the speed of sound in the medium. Therefore, $\cfrac{\tilde{p}}{\tilde{\rho}} = \gamma~\cfrac{\langle p \rangle}{\langle \rho \rangle} = c_0^2 \qquad \implies \qquad \cfrac{\partial\tilde{p}}{\partial t} = c_0^2 \cfrac{\partial\tilde{\rho}}{\partial t}$ The balance of mass can then be written as $\cfrac{1}{c_0^2}\frac{\partial\tilde{p}}{\partial t} + \langle\rho\rangle\nabla\cdot\tilde{\mathbf{v}} = 0$ Dropping the tildes and defining $\rho_0 := \langle\rho\rangle$ gives us the commonly used expression for the balance of mass in an acoustic medium: $\frac{\partial p}{\partial t} + \rho_0~c_0^2~\nabla\cdot\mathbf{v} = 0 ~.$ ## Governing equations in cylindrical coordinates If we use a cylindrical coordinate system $(r,\theta,z)$ with basis vectors $\mathbf{e}_r, \mathbf{e}_\theta, \mathbf{e}_z$, then the gradient of $p$ and the divergence of $\mathbf{v}$ are given by \begin{align} \nabla p & = \cfrac{\partial p}{\partial r}~\mathbf{e}_r + \cfrac{1}{r}~\cfrac{\partial p}{\partial \theta}~\mathbf{e}_\theta + \cfrac{\partial p}{\partial z}~\mathbf{e}_z \\ \nabla\cdot\mathbf{v} & = \cfrac{\partial v_r}{\partial r} + \cfrac{1}{r}\left(\cfrac{\partial v_\theta}{\partial \theta} + v_r\right) + 
\cfrac{\partial v_z}{\partial z} \end{align} where the velocity has been expressed as $\mathbf{v} = v_r~\mathbf{e}_r+v_\theta~\mathbf{e}_\theta+v_z~\mathbf{e}_z$. The equations for the conservation of momentum may then be written as $\rho_0~\left[\cfrac{\partial v_r}{\partial t}~\mathbf{e}_r+\cfrac{\partial v_\theta}{\partial t}~\mathbf{e}_\theta+\cfrac{\partial v_z}{\partial t}~\mathbf{e}_z\right] + \cfrac{\partial p}{\partial r}~\mathbf{e}_r + \cfrac{1}{r}~\cfrac{\partial p}{\partial \theta}~\mathbf{e}_\theta + \cfrac{\partial p}{\partial z}~\mathbf{e}_z = 0$ In terms of components, these three equations for the conservation of momentum in cylindrical coordinates are $\rho_0~\cfrac{\partial v_r}{\partial t} + \cfrac{\partial p}{\partial r} = 0 ~;~~ \rho_0~\cfrac{\partial v_\theta}{\partial t} + \cfrac{1}{r}~\cfrac{\partial p}{\partial \theta} = 0 ~;~~ \rho_0~\cfrac{\partial v_z}{\partial t} + \cfrac{\partial p}{\partial z} = 0 ~.$ The equation for the conservation of mass can similarly be written in cylindrical coordinates as $\cfrac{\partial p}{\partial t} + \kappa\left[\cfrac{\partial v_r}{\partial r} + \cfrac{1}{r}\left(\cfrac{\partial v_\theta}{\partial \theta} + v_r\right) + \cfrac{\partial v_z}{\partial z}\right] = 0 ~.$ ### Time harmonic acoustic equations in cylindrical coordinates The acoustic equations for the conservation of momentum and the conservation of mass are often expressed in time harmonic form (at fixed frequency). In that case, the pressures and the velocity are assumed to be time harmonic functions of the form $p(\mathbf{x}, t) = \hat{p}(\mathbf{x})~e^{-i\omega t} ~;~~ \mathbf{v}(\mathbf{x}, t) = \hat{\mathbf{v}}(\mathbf{x})~e^{-i\omega t} ~;~~ i := \sqrt{-1}$ where $\omega$ is the frequency. 
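Before substituting, it is worth confirming that the time-harmonic ansatz really does reduce the balance laws to purely spatial equations. A symbolic sketch (SymPy assumed) for the radial momentum component: after factoring out $e^{-i\omega t}$, what remains is $\partial\hat{p}/\partial r - i\omega\rho_0\hat{v}_r$.

```python
import sympy as sp

r, t = sp.symbols('r t', real=True)
omega, rho0 = sp.symbols('omega rho0', positive=True)
phat = sp.Function('phat')(r)   # spatial part of the pressure
vhat = sp.Function('vhat')(r)   # spatial part of the radial velocity

p = phat * sp.exp(-sp.I * omega * t)
v = vhat * sp.exp(-sp.I * omega * t)

# radial momentum balance: rho0 dv_r/dt + dp/dr = 0
residual = rho0 * sp.diff(v, t) + sp.diff(p, r)

# the common time factor divides out
reduced = sp.simplify(residual * sp.exp(sp.I * omega * t))
assert sp.simplify(reduced - (sp.diff(phat, r) - sp.I * omega * rho0 * vhat)) == 0
```

Setting the remaining spatial expression to zero gives the fixed-frequency momentum equation; the other components and the mass balance reduce the same way.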
Substitution of these expressions into the governing equations in cylindrical coordinates gives us the fixed frequency form of the conservation of momentum $\cfrac{\partial\hat{p}}{\partial r} = i\omega~\rho_0~\hat{v}_r ~;~~ \cfrac{1}{r}~\cfrac{\partial\hat{p}}{\partial \theta} = i\omega~\rho_0~\hat{v}_\theta ~;~~ \cfrac{\partial\hat{p}}{\partial z} = i\omega~\rho_0~\hat{v}_z$ and the fixed frequency form of the conservation of mass $\cfrac{i\omega \hat{p}}{\kappa} = \cfrac{\partial \hat{v}_r}{\partial r} + \cfrac{1}{r}\left(\cfrac{\partial \hat{v}_\theta}{\partial \theta} + \hat{v}_r\right) + \cfrac{\partial \hat{v}_z}{\partial z} ~.$ #### Special case: No z-dependence In the special case where the field quantities are independent of the z-coordinate we can eliminate $v_r, v_\theta$ to get $\frac{\partial^2 p}{\partial r^2} + \frac{1}{r}\frac{\partial p}{\partial r} + \frac{1}{r^2}~\frac{\partial^2 p}{\partial\theta^2} + \frac{\omega^2\rho_0}{\kappa}~p = 0$ Assuming that the solution of this equation can be written as $p(r,\theta) = R(r)~Q(\theta)$ we can write the partial differential equation as $\cfrac{r^2}{R}~\cfrac{d^2R}{dr^2} + \cfrac{r}{R}~\cfrac{dR}{dr} + \cfrac{r^2\omega^2\rho_0}{\kappa} = -\cfrac{1}{Q}~\cfrac{d^2Q}{d\theta^2}$ The left hand side is not a function of $\theta$ while the right hand side is not a function of $r$. Hence, $r^2~\cfrac{d^2R}{dr^2} + r~\cfrac{dR}{dr} + \cfrac{r^2\omega^2\rho_0}{\kappa}~R = \alpha^2~R ~;~~ \cfrac{d^2Q}{d\theta^2} = -\alpha^2~Q$ where $\alpha^2$ is a constant. 
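The separated radial equation is Bessel's equation in the scaled variable $kr$, with $k^2 = \omega^2\rho_0/\kappa$. A quick numerical sanity check (SciPy assumed) that $J_\alpha(kr)$ satisfies it:

```python
import numpy as np
from scipy.special import jv, jvp

alpha, k = 1, 2.0                  # illustrative order and wavenumber
r = np.linspace(0.1, 5.0, 200)

R = jv(alpha, k * r)               # candidate radial solution J_alpha(k r)
dR = k * jvp(alpha, k * r, 1)      # chain rule: d/dr of J_alpha(k r)
d2R = k**2 * jvp(alpha, k * r, 2)  # second derivative

# radial ODE: r^2 R'' + r R' + (k^2 r^2 - alpha^2) R = 0
residual = r**2 * d2R + r * dR + (k**2 * r**2 - alpha**2) * R
assert np.max(np.abs(residual)) < 1e-8
```

The residual sits at round-off level across the whole grid, consistent with $J_\alpha(kr)$ being an exact solution.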
Using the substitution $\tilde{r} \leftarrow \left(\omega\sqrt{\cfrac{\rho_0}{\kappa}}\right) r = k~r$ we have $\tilde{r}^2~\cfrac{d^2R}{d\tilde{r}^2} + \tilde{r}~\cfrac{dR}{d\tilde{r}} + (\tilde{r}^2-\alpha^2)~R = 0 ~;~~ \cfrac{d^2Q}{d\theta^2} = -\alpha^2~Q$ The equation on the left is the Bessel equation, which has the general solution $R(r) = A_\alpha~J_\alpha(k~r) + B_\alpha~J_{-\alpha}(k~r)$ where $J_\alpha$ is the cylindrical Bessel function of the first kind and $A_\alpha, B_\alpha$ are undetermined constants. The equation on the right has the general solution $Q(\theta) = C_\alpha~e^{i\alpha\theta} + D_\alpha~e^{-i\alpha\theta}$ where $C_\alpha,D_\alpha$ are undetermined constants. Then the solution of the acoustic wave equation is $p(r,\theta) = \left[A_\alpha~J_\alpha(k~r) + B_\alpha~J_{-\alpha}(k~r)\right]\left(C_\alpha~e^{i\alpha\theta} + D_\alpha~e^{-i\alpha\theta}\right)$ Boundary conditions are needed at this stage to determine $\alpha$ and the other undetermined constants.

## References

1. Douglas D. Reynolds (1981). *Engineering Principles in Acoustics*. Allyn and Bacon, Boston.

Original courtesy of Wikipedia: http://en.wikipedia.org/wiki/Acoustic_theory
https://www.physicsforums.com/threads/induced-voltage-with-coil-between-2-magnets.515499/
# Induced voltage with coil between 2 magnets

1. Jul 20, 2011

### davenn

hi gang,

Just building a seismometer. I have 2 x 2.5cm disc rare earth magnets spaced far enough apart for a coil of wire to move between them. Would you expect there to be any difference in the voltage induced in a coil between 2 magnets when the magnets are oriented to attract or repel? I'm thinking no difference, but really have no idea. See pic for an idea of what I'm doing; ignore the fact that the builder of the unit shown is using 4 x disc magnets, 2 top and 2 bottom. If there is a difference between the two options .... why?

cheers
Dave

Attached: coil_inserted2a.jpg

Last edited by a moderator: Jul 20, 2011

2. Jul 20, 2011

### gerbi

Well.. can't see any picture. Got link to it?

3. Jul 20, 2011

### davenn

I posted it and saw the pic, thought nothing more of it. Then after your comment in the email, I visited the page and saw the pic was missing. Should be ok now :)

cheers
Dave

4. Jul 20, 2011

### gerbi

I'm not very familiar with that kind of stuff.. but.. from general electromagnetics: I take it that the magnets are on opposite sides of the coil, and when there is an earthquake you have some coil-magnet movement, correct? If yes, then there are significant differences between the different magnet setups. You observe a signal because there is some movement between the magnets and the coil. The magnetic flux changes, and this induces a voltage in the coil. The essence here is the magnetic flux generated by the magnets. If they are set up facing the same way, N-S-N-S (or S-N-S-N), the flux flows through the coil, but if you set them up like N-S-S-N (or S-N-N-S) the magnetic flux will be near zero (it's like two sources set in opposition). No flux = no signal.

5. Jul 20, 2011

### davenn

yes, that's correct. Some guys use horseshoe magnets, but strong and physically relatively small ones are not readily available, so the other main choice these days is to use several rare earth disc magnets. Just for your info....
at the other end of the "red" assembly there are more magnets mounted. They have a piece of aluminium between them; this is used for damping of the pendulum arm, so that it doesn't go into free natural oscillation. Otherwise you are just recording the motion of the pendulum and not the earth. Technically it's the frame of the seismometer that is moving, not the pendulum, due to the large mass on the end of the pendulum arm, but because the pendulum isn't totally isolated from the frame it will start to oscillate as well.

Dave
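gerbi's explanation comes down to Faraday's law: the induced voltage is the rate of change of flux linkage, $V = -N\,d\Phi/dt$, so a setup whose flux through the coil barely changes as the coil moves gives almost no signal. A toy calculation (all numbers assumed for illustration, and a deliberately crude linear-field model rather than the actual disc-magnet geometry) shows the scale:

```python
import numpy as np

# Faraday's law for an N-turn coil: V = -N * dPhi/dt.
# Toy model: the field through the coil varies linearly along the direction
# of motion, B(z) ~ G*z, so Phi(t) = N*A*G*z(t).
N = 500        # number of turns (assumed)
A = 1e-4       # coil area in m^2 (assumed, about 1 cm^2)

def peak_emf(G, z_amp, freq):
    """Peak EMF for sinusoidal coil motion z(t) = z_amp*sin(2*pi*freq*t)."""
    return N * A * G * z_amp * 2 * np.pi * freq

# A setup where coil motion sweeps through a strongly varying flux, vs. one
# where the two magnets' contributions nearly cancel ("no flux = no signal"):
strong = peak_emf(G=50.0, z_amp=1e-3, freq=1.0)   # volts
weak = peak_emf(G=0.5, z_amp=1e-3, freq=1.0)
```

With these illustrative numbers the first configuration produces about 16 mV peak and the second a hundred times less, which is why the magnet orientation matters for a sensor that must resolve tiny ground motions.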
https://math.stackexchange.com/questions/2851589/scaling-a-part-of-a-function-without-altering-the-rest-of-it
# Scaling a part of a function without altering the rest of it

The question arises from the need to plot the function $$I\left(x, y\right) = I_0\operatorname{sinc}^2\left(x\right)\operatorname{sinc}^2\left(y\right),$$ where $$\operatorname{sinc}(x) = \begin{cases}\frac{\sin(\pi x)}{\pi x} &\text{ if } x \neq 0, \\ 1 &\text{ if } x = 0. \end{cases}$$ If you have seen this function before, you know it is highly spiked at the origin and decays very fast for values greater than about 2, as you can see in the plot below.

Now I would like to give a nicer representation of the function by scaling the central spike (or, equivalently, the rest of the function) in order to make the height of the subsequent interference fringes visible. Initially I thought of scaling the plot inside a circle around the origin containing the central spike, but this does not really work well, since it changes the relative scale in a discontinuous way, which results in bad behaviour in the plot.

So I thought that there could be a continuous function which can do the job I want, something like $${\rm scalingF}\left(x, y\right) \sim A\left(1-e^{-B\left(x^2 + y^2\right)}\right),$$ namely one minus a Gaussian, so that the outer part of the plot is stretched out more than the inner part. Unfortunately this method turns out to cut off the highest part, even if the rest of the plot is now precisely how I want it: $${\rm scalingF}\left(x, y\right) I\left(x, y\right)$$ gives the plot shown below.

Does anybody know a way to scale just a part of this function, keeping it continuous and much closer in shape to the original (meaning that the profiles of the wave should remain the same)?

• Do you compute the scaling function yourself? If yes, how? $1-e^{??}$ suffers from catastrophic cancellation for small arguments. If available, use a function like expm1. – gammatester Jul 14 '18 at 13:53
• Have you tried plotting it on a logarithmic scale: $10\log_{10} I(x,y)$ ?
– Andy Walls Jul 14 '18 at 14:14
• @gammatester I can't understand your question, but the function $$1-e^{-x^2}$$ is just a Gaussian curve reflected along the x axis and shifted up by one, so no "catastrophic cancellation" occurs for me ;), but maybe I'm misunderstanding what you want to say! – opisthofulax Jul 14 '18 at 14:18
• If $x$ is small you have $1-e^{-x^2}=x^2-x^4/2+\cdots$, but if you compute e.g. with IEEE double precision you have $e^{-x^2}\approx 1$, and the difference suffers from cancellation. But as Théophile has written, this is not your main problem. – gammatester Jul 14 '18 at 14:33

Your Gaussian idea is interesting; the reason it doesn't work is that the scaling function vanishes at $(0,0)$. To fix this, how about simply adding $1$: $${\rm scalingF}\left(x, y\right) = A\left(1-e^{-B\left(x^2 + y^2\right)}\right) + 1$$ so that ${\rm scalingF}(0, 0) = 1$, and far enough from the origin, ${\rm scalingF}(x,y) \approx A + 1$. Now, maybe you want to preserve the height everywhere except near the origin. In that case, we just scale down again (and I've used $A - 1$ as a multiplier instead of $A$ so that it's a bit cleaner): $${\rm scalingF}\left(x, y\right) = \frac{(A-1)\left(1-e^{-B\left(x^2 + y^2\right)}\right) + 1}A$$ This will scale the origin to $1/A$ its original height while preserving the periphery.

• Thanks for your answer, but the problem with the Gaussian curve is that increasing $A$ will in any case make a Gaussian pattern appear on top of the spike if I try to zoom the outer region too much (and I cannot change $B$ very much, otherwise the magnifying effect gets lost). I'm thinking that I do not need a spiked curve but maybe something that has a plateau instead of the spike of the Gaussian curve and decreases similarly fast (in this case increases). – opisthofulax Jul 14 '18 at 14:54
• I managed a nice solution with the arctan function, I'll post it right now ;) – opisthofulax Jul 14 '18 at 15:17
• @opisthofulax Interesting...
glad you figured it out! – Théophile Jul 15 '18 at 3:10

So, I started considering that the Gaussian curve is not precisely what I needed. In fact, scaling with this function tends to create an "inverse spike" in the function, rather than scaling all the points in a circular/square region by the same amount. This behaviour can easily be visualised by plotting the function suggested by @Théophile with the $A$ and $B$ parameters set so as to scale the whole graph enough, except for the central spike: $$I\cdot{\rm scalingF}\left(x, y\right) = I\cdot\frac{(A-1)\left(1-e^{-B\left(x^2 + y^2\right)}\right) + 1}A$$ will produce (e.g. for $A\gg1$, because I need a big scaling factor, and $B\approx.1$, as I just need a small region centered around the origin)

That's because it scales the center to $1/A$, but the region around it is scaled even more, not the same or less. So a function is needed which stays the same over an interval and then grows really fast. Also, the region away from the axes (the darkest one) must be scaled more than the region along the axes. Now, knowing that a $\operatorname{rect}(x)$ function would create a bad discontinuity (the first solution I tried), I thought that the nicest solution was a "continuous" $\operatorname{rect}(x)$ function, which can easily be obtained as the sum of two $\arctan(x)$ functions "going in opposite directions", with the arguments of $\arctan(x)$ chosen to cut the intensity in the region $\approx[-3.5, 3.5]$, which is more or less the one containing the big central spike of the original function. At this point there are two possibilities.
The first is to multiply the two "$\operatorname{rectarctan}(x)$" functions to get an intensity scaling $${\rm scalingF1}\left(x, y\right) = (\arctan[5 x - 20] - \arctan[5 x + 20] + 1.01 \pi)\cdot (\arctan[5 y - 20] - \arctan[5 y + 20] + 1.01 \pi)$$ where the added multiples of $\pi$ control the proportion between the central spike and the rest of the scaled graph: if they become too big, the scaling of the central spike becomes greater than that of the rest. Anyway, as is clear to see from figure 3, the region along the axes is not stretched out, so it can be annoying to have the spikes far from the axes bigger than the ones along the axes, as you can see below.

But a simple modification can remedy this little bug: substituting a sum for the multiplication in the scaling function, $${\rm scalingF2}\left(x, y\right) = (\arctan[5 x - 20] - \arctan[5 x + 20] + 1.01 \pi) + (\arctan[5 y - 20] - \arctan[5 y + 20] + 1.01 \pi),$$ whose graph is shown below.

This function modulates both the content along the axes and the content away from them, cutting the high intensities in the center, as can be seen in the final output.
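For anyone wanting to reproduce this, the sketch below (NumPy assumed) evaluates the 1-D arctan window used above and confirms the behaviour described: small near the origin, levelling off at about $1.01\pi$ outside the spike region.

```python
import numpy as np

def window(x):
    # "continuous rect": small for |x| < ~3.5, about 1.01*pi outside
    return np.arctan(5*x - 20) - np.arctan(5*x + 20) + 1.01 * np.pi

center = window(0.0)    # roughly 0.13, so the central spike is scaled down
far = window(10.0)      # roughly 1.01*pi ~ 3.17, periphery nearly untouched

# relative suppression of the central spike vs. the periphery
ratio = center / far    # about 4%
```

Multiplying (or, for the axis-friendly variant, adding) two such windows in $x$ and $y$ reproduces scalingF1 and scalingF2; the `20` and `5` in the arguments set the half-width ($\approx 4$) and the steepness of the transition.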
http://stats.seandolinar.com/tag/statistics/
# Calculating Z-Scores [with R code]

I've included the full R code, and the data set can be found on UCLA's Stats Wiki.

Normal distributions are convenient because they can be scaled to any mean or standard deviation, meaning you can use the exact same distribution for weight, height, blood pressure, white-noise errors, etc. Obviously, the means and standard deviations of these measurements should all be completely different. In order to get the distributions standardized, the measurements can be changed into z-scores.

Z-scores are a stand-in for the actual measurement, and they represent the distance of a value from the mean measured in standard deviations. So a z-score of 2.0 means the measurement is 2 standard deviations away from the mean.

To demonstrate how this is calculated and used, I found a height and weight data set on UCLA's site. They have height measurements from children in Hong Kong. Unfortunately, the site doesn't give much detail about the data, but it is an excellent example of a normal distribution, as you can see in the graph below. The red line represents the theoretical normal distribution, while the blue area chart reflects a kernel density estimation of the data set obtained from UCLA. The data set doesn't deviate much from the theoretical distribution.

The z-scores are also listed on this normal distribution to show how the actual measurements of height correspond to the z-scores, since the z-scores are simple arithmetic transformations of the actual measurements. The first step to find the z-score is to find the population mean and standard deviation. It should be noted that the sd function in R uses the sample standard deviation and not the population standard deviation, though with 25,000 samples the difference is rather small. Using just the population mean [μ = 67.99] and standard deviation [σ = 1.90], you can calculate the z-score for any given value of x. In this example I'll use 72 for x.
$z = \frac{x - \mu}{\sigma}$

This gives you a z-score of 2.107. To put this tool to use, let's use the z-score to find the probability of finding someone who is 72 inches [6 feet] tall. [Remember this data set doesn't apply to adults in the US, so these results might conflict with everyday experience.] The z-score will be used to determine the area [probability] underneath the distribution curve past the z-score value that we are interested in. [One note is that you have to specify a range (72 to infinity) and not a single value (72). If you wanted to find people who are exactly 6 feet, not taller than 6 feet, you would have to specify the range of 71.5 to 72.5 inches. This is another problem, but it has everything to do with the intervals of definite integrals if you are familiar with Calc I.] The above graph shows the area we intend to calculate. The blue area is our target, since it represents the probability of finding someone taller than 6 feet. The yellow area represents the rest of the population, or everyone who is under 6 feet tall. The z-score and actual height measurements are both given, underscoring the relationship between the two. Typically in an introductory stats class, you'd use the z-score and look it up in a table to find the probability that way. R has a function 'pnorm' which will give you a more precise answer than a table in a book. ['pnorm' stands for "probability normal distribution".] Both R and typical z-score tables will return the area under the curve from -infinity to the value on the graph; this is represented by the yellow area. In this particular problem, we want to find the blue area. The solution to this is an easy arithmetic function. The area under the curve is 1, so subtracting the yellow area from 1 will give you the area [probability] for the blue area.

Yellow Area:

Blue Area [TARGET]:

Both of these techniques in R will yield the same answer of 1.76%.
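The post's calculation is done in R, but the same arithmetic can be sketched in Python with only the standard library; the mean, standard deviation, and the 72-inch height all come from the example above:

```python
import math

MU = 67.99      # population mean height from the example above
SIGMA = 1.90    # population standard deviation from the example above

def z_score(x, mu=MU, sigma=SIGMA):
    """Distance of x from the mean, measured in standard deviations."""
    return (x - mu) / sigma

def upper_tail_prob(z):
    """P(Z > z) for a standard normal, i.e. 1 - pnorm(z) in R.
    Uses the identity pnorm(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    return 0.5 * math.erfc(z / math.sqrt(2))

z = z_score(72)                  # about 2.11
p_taller = upper_tail_prob(z)    # about 0.017, i.e. roughly 1.76%
```

The small gap between 2.11 here and the 2.107 in the text comes from the rounded mean and standard deviation used in this sketch.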
I used both methods to show that R has some versatility that traditional statistics tables don't have. I personally find statistics tables antiquated, since we have better ways to determine these values, and a table doesn't provide any insight over software solutions. Z-scores are useful when relating different measurement distributions to each other, acting as a 'common denominator'. The z-scores are used extensively for determining the area underneath the curve when using textbook tables, and can also be easily used in programs such as R. Some statistical hypothesis tests are based on z-scores and the basic principle of finding the area beyond some value.

# OLS Derivation

Ordinary Least Squares (OLS) is a great low-computing-power way to obtain estimates for coefficients in a linear regression model. I wanted to detail the derivation of the solution, since it can be confusing for anyone not familiar with matrix calculus. First, the initial matrix equation is set up below, with X being a matrix of the data's p covariates plus the regression constant. [The constant will be represented as a column of ones if you were to look at the data in the X matrix.] Y is the column matrix of the target variable and β is the column matrix of unknown coefficients. e is a column matrix of the residuals.

$\mathbf{Y = X} \boldsymbol{\beta} + \boldsymbol{e}$

Before manipulating the equation, it is important to note you are not solving for X or Y, but instead for β, and you will do this by minimizing the sum of squares of the residuals (SSE). The equation can be rewritten by moving the error term to the left side of the equation.

$\boldsymbol{e} = \mathbf{Y - X} \boldsymbol{\beta}$

The SSE can be written as the product of the transposed residual column vector and the original column vector. [This is actually how you would obtain the sum of squares for any vector.]

$\mathrm{SSE} = \boldsymbol{e}'\boldsymbol{e}$

Since you transpose and multiply one side of the equation, you have to follow suit on the other side.
Yielding

$\boldsymbol{e'e} = (\mathbf{Y - X} \boldsymbol{\beta})'(\mathbf{Y - X} \boldsymbol{\beta})$

The transpose operator can be distributed throughout the quantity on the right side, so the right side can be multiplied out.

$\boldsymbol{e'e} = (\mathbf{Y' - \boldsymbol{\beta}'X'})(\mathbf{Y - X} \boldsymbol{\beta})$

Using the fact that each term is a scalar, so that $\mathbf{Y'X}\boldsymbol{\beta} = (\mathbf{Y'X}\boldsymbol{\beta})' = \boldsymbol{\beta}'\mathbf{X'Y}$, you can multiply out the right side and simplify it.

$\boldsymbol{e'e} = \mathbf{Y'Y - Y'X\boldsymbol{\beta} - \boldsymbol{\beta}'X'Y} + \boldsymbol{\beta'\mathbf{X'X}\beta}$

$\boldsymbol{e'e} = \mathbf{Y'Y - \boldsymbol{\beta}'X'Y - \boldsymbol{\beta}'X'Y} + \boldsymbol{\beta'\mathbf{X'X}\beta}$

$\boldsymbol{e'e} = \mathbf{Y'Y - 2\boldsymbol{\beta}'X'Y} + \boldsymbol{\beta'\mathbf{X'X}\beta}$

To minimize the SSE, you take the partial derivative with respect to β. Any terms without a β in them go to zero. Using the transpose rule from before, you can see how the middle term yields -2X'Y using differentiation rules from Calc 1. The last term is a bit tricky, but it differentiates to +2X'Xβ.

$\frac{\delta\boldsymbol{e'e}}{\delta\boldsymbol{\beta}} = \frac{\delta\mathbf{Y'Y}}{\delta\boldsymbol{\beta}} - \frac{\delta\, 2\boldsymbol{\beta}'\mathbf{X'Y}}{\delta\boldsymbol{\beta}} + \frac{\delta\boldsymbol{\beta'\mathbf{X'X}\beta}}{\delta\boldsymbol{\beta}}$

$\frac{\delta\boldsymbol{e'e}}{\delta\boldsymbol{\beta}} = - 2\mathbf{X'Y} + 2\mathbf{X'X}\boldsymbol{\beta}$

To find the minimum (it will never be a maximum if you have all the requirements for OLS fulfilled), set the derivative of the SSE to zero.

$0 = - 2\mathbf{X'Y} + 2\mathbf{X'X}\boldsymbol{\beta}$

$0 = \mathbf{- X'Y} + \mathbf{X'X}\boldsymbol{\beta}$

Using some basic linear algebra and multiplying both sides by the inverse of (X'X)…

$(\mathbf{X'X})^{-1}\mathbf{X'X}\boldsymbol{\beta} = (\mathbf{X'X})^{-1}\mathbf{X'Y}$

…yields the solution for β:

$\boldsymbol{\beta} = (\mathbf{X'X})^{-1}\mathbf{X'Y}$

References: Chatterjee, S. & Hadi, A. (2012).
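The closed form β = (X'X)⁻¹X'Y can be sanity-checked with a minimal sketch in plain Python (no matrix library), specializing it to one covariate plus an intercept; the data points are invented for illustration:

```python
def ols_fit(x, y):
    """Solve the 2x2 normal equations (X'X) beta = X'y by hand for a
    design matrix whose rows are [1, x_i] (intercept plus one covariate)."""
    n = len(x)
    sx = sum(x)
    sxx = sum(v * v for v in x)
    sy = sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    # X'X = [[n, sx], [sx, sxx]]; invert the 2x2 matrix analytically.
    det = n * sxx - sx * sx
    b0 = (sxx * sy - sx * sxy) / det   # intercept
    b1 = (n * sxy - sx * sy) / det     # slope
    return b0, b1

b0, b1 = ols_fit([1, 2, 3, 4], [2, 4, 6, 8])  # exact fit: intercept 0, slope 2
```

The hand-inverted 2×2 matrix is exactly the (X'X)⁻¹ from the derivation; for more covariates you would use a linear algebra library rather than inverting by hand.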
Regression analysis by example. Hoboken, NJ: John Wiley & Sons, Inc.

# Count Data Distribution Primer — Binomial / Negative Binomial / Poisson

Count data is exclusively whole-number data, where each increment represents one of something. It could be a car accident, a run in baseball, or an insurance claim. The critical thing here is that these are discrete, distinct items. Count data behaves differently than continuous data, and the distribution [the frequency of different values] is different between the two. Random continuous data typically follows the normal distribution, which is the bell curve everyone remembers from high school grade systems. [Which is a really bad way to grade, but I digress.] Count data generally follows the binomial, negative binomial, or Poisson distribution depending on the context in which you are viewing the data; all three distributions are mathematically related.

Binomial Distribution: The binomial distribution (BD) is the collection of probabilities of getting a certain number of successes in a given number of trials, specifically of Bernoulli trials [a yes/no event similar to a coin flip, but not necessarily 50/50]. My favorite example for understanding the binomial distribution is using it to determine the probability that you'd get exactly 5 HEADS if you flipped a coin 10 times [it's NOT 50%!]. It's actually 24.61%. The probability of getting heads in any given coin flip is 50%, but over 10 flips, you'll only get exactly 5 HEADS and 5 TAILS about 25% of the time. The equation below gives the two popular notations for the binomial probability mass function. $n$ is the total number of trials [the graph above used n = 10]. $r$ is the number of successes you want to know the probability for. You calculate this function for each number of HEADS [0–10] for $r$ to get the distribution above. $p$ is the simple probability for each event. [$p$ = .5 for the coin flip.]
$P(X=r) = {{n}\choose{r}} p^{r} (1-p)^{n-r} = \frac{n!}{r!(n-r)!} p^{r} (1-p)^{n-r}$

The equation has three parts. The first part is the combination ${{n}\choose{r}}$, which is the number of combinations when you have $n$ total items taken $r$ at a time. Combinations disregard order, so the set {1, 4, 9} is the same as {4, 9, 1}. This part of the equation tells you how many possible ways there are to get to a certain outcome, since there are many ways to get 5 HEADS in 10 tosses. Since ${{10}\choose{5}}$ is larger than any other combination, 5 HEADS will have the largest probability. There are two more terms in the equation. $p^r$ is the joint probability of getting $r$ successes in a particular order, and $(1-p)^{n-r}$ is the corresponding probability of getting the failures in a particular order. I find it helpful to conceptualize the equation as having three parts accounting for different things: the total combinations of successes and failures, the probability of the successes, and the probability of the failures.

Negative Binomial Distribution: While there is a good reason for it, the name of the negative binomial distribution (NBD) is confusing. Nothing I will present will involve making anything negative, so let's just get that out of the way and ignore it. The binomial distribution uses the probability of successes in a total number of ATTEMPTS. To contrast this, the negative binomial distribution uses the probability that a certain number of FAILURES occurs before the $r$th SUCCESS. This has many applications, specifically when a sequence terminates after the $r$th success, such as modeling the probability that you will sell out of the 25 cups of lemonade you have stocked for a given number of cars that pass by. The idea is that you would pack up your lemonade stand after you sell out, so cars that pass by after the final success won't matter. Another good example is modeling the win probability of a 7-game sports playoff series.
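The 24.61% figure quoted above for exactly 5 HEADS in 10 flips follows directly from this PMF; a minimal Python check using only the standard library:

```python
from math import comb

def binom_pmf(r, n, p):
    """P(exactly r successes in n Bernoulli trials with success prob p)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

p_five_heads = binom_pmf(5, 10, 0.5)   # 252/1024, i.e. the 24.61% above
```

Summing the PMF over r = 0..10 gives 1, which is a quick way to convince yourself the three parts of the formula fit together.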
The team that wins the series must win 4 games, and specifically the last game played in the series, since the playoff series terminates after one team reaches 4 wins. One of the more important restrictions on the NBD is that the last event must be a success. Going back to the sports playoff series example, the team that wins the series will NEVER lose the last game. With the 10-coin-flip example, the BD was looking for the probability of getting a certain number of HEADS within a set number of coin flips. Using the NBD, we will look for the probability of 5 HEADS before getting a certain number of TAILS. The total number of flips will not ALWAYS equal 10, and actually exceeds 10 as seen below. The probability mass function that describes the NBD graph above is given below:

$P(X=k) = {{r+k-1}\choose{k}} p^{r} (1-p)^{k}$

The equation for the NBD has the same parts as the BD: the combinations, the successes, and the failures. In the NBD case there are fewer combinations than in the BD [for the same total number of coin flips]. This is because the last outcome is held fixed as a success. The probability-of-success and probability-of-failure parts of the equation are conceptually the same as in the BD. The failure portion is written differently because the number of failures is a parameter $k$ instead of a derived quantity like [$n-r$].

Poisson Distribution: The Poisson distribution (PD) is directly related to both the BD and the NBD, because it is the limiting case of both of them. As the number of trials goes to infinity [with the expected number of successes held fixed], the Poisson distribution emerges. The graph for the PD will look similar to the NBD or the BD, but there is no coin-flip comparison, since the PD describes processes without a fixed number of trials, like traffic flow or earthquakes. The major difference is not what is represented, but how it is viewed and calculated.
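As a check on the NBD PMF, here is a short Python sketch of the playoff-series example; p = 0.5 (evenly matched teams) is an assumption for illustration:

```python
from math import comb

def nbinom_pmf(k, r, p):
    """P(exactly k failures occur before the r-th success)."""
    return comb(r + k - 1, k) * p**r * (1 - p)**k

# Probability that one particular team wins a best-of-7 series: its 4th win
# comes after k = 0, 1, 2, or 3 losses. For p = 0.5 this must be exactly 0.5,
# since one of the two evenly matched teams has to win.
p_win_series = sum(nbinom_pmf(k, 4, 0.5) for k in range(4))

# Probability the series goes the full 7 games for that team: 3 losses
# before the 4th win, i.e. C(6,3) * 0.5^7 = 20/128.
p_seven_games = nbinom_pmf(3, 4, 0.5)
```

Note the last event is always a success here, exactly the restriction described above: the series winner never loses the final game.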
The Poisson distribution is described by the equation:

$P(X = x) = \frac{e^{-\lambda}\lambda^x}{x!}$

$\lambda$ is the expected value [or the mean] for an event and $x$ is the count value. If you knew that an average of 0.2 car crashes happen at an intersection on a given day, then you could solve the equation for $x$ = {0, 1, 2, 3, 4, 5, … } and get the PD for the problem. One of the restrictions on, and major issues with, the use of the PD is that the model assumes the mean and the variance are equal. In most real data the variance is greater than the mean, so the PD tends to put more probability around the expected value than real data reflects. If you are interested in the derivations and the math behind these, I recommend this site: http://statisticalmodeling.wordpress.com/. I feel like they explain the derivation of the negative binomial better than most places I've found, and it addresses why it's called the NEGATIVE binomial distribution as well. The site also contains derivations of the PD being the limiting case of the BD and NBD.
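The limiting relationship is easy to see numerically. This Python sketch evaluates the Poisson PMF for the car-crash example (λ = 0.2 is the made-up average from the paragraph above) and compares it against a binomial with a huge number of trials and n·p held at λ:

```python
from math import comb, exp, factorial

def poisson_pmf(x, lam):
    """P(exactly x events) when events occur at an average rate lam."""
    return exp(-lam) * lam**x / factorial(x)

def binom_pmf(r, n, p):
    """Binomial PMF, for comparing against the Poisson limit."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

lam = 0.2                        # average crashes per day at the intersection
p_no_crash = poisson_pmf(0, lam) # e^{-0.2}: most days see no crash at all

# As n grows with n*p fixed at lam, the binomial converges to the Poisson.
binom_approx = binom_pmf(0, 10_000, lam / 10_000)
```

The two zero-count probabilities agree to several decimal places, which is the BD-to-PD limit the linked site derives formally.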
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 36, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8359950184822083, "perplexity": 335.4697846464676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189680.78/warc/CC-MAIN-20170322212949-00563-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/forgotten-linear-algebra.224823/
# 'Forgotten' linear algebra

1. Mar 28, 2008

### jdstokes

Hi all, I learned this stuff years ago and wasn't brilliant at it even then, so I think a refresher is in order. Suppose I have n distinct homogeneous equations in n unknowns. I want to find the solution, so I write down the matrix of coefficients multiplying my vector of variables as follows: $A \mathbf{x} =\mathbf{0}$. Now, we don't want $\det A \neq 0$ to happen, otherwise the columns of A are linearly independent, so the only solution to $A \mathbf{x} = \mathbf{C}_1 x_1 + \cdots + \mathbf{C}_n x_n = \mathbf{0}$ is $\mathbf{0}$. Now how do we actually solve this for $\mathbf{x}$: do we just do Gaussian elimination followed by back-substitution? Is the solution unique in this case? Now suppose the system is inhomogeneous, $A\mathbf{x} = \mathbf{b}$ where $\mathbf{b}\neq 0$. In this case we actually want $\det A \neq 0$, because then we can instantly write down the unique solution $\mathbf{x} = A^{-1}\mathbf{b}$. Have I gotten the solution to square systems about right? If yes, I'll try to figure out the non-square case.

2. Mar 28, 2008

### slider142

If the null space of A is nontrivial, then it is a subspace, so the solution is an entire subspace of the space you're working with, not just a single vector. The subspace containing only the zero vector is the only degenerate subspace that does consist of a single vector, and it is always contained in the null space. Yep, that's right.

Last edited: Mar 28, 2008

3. Mar 28, 2008

### transgalactic

I suggest that after you write your matrix you just do a row reduction; you are not supposed to write a column of zeros at the end. The last column depends on the numbers after the "=" sign.

4. Mar 28, 2008

### Peeter

You can also solve systems of equations of this form with the wedge product (wedging the column vectors).
I'd put an example of this in the wiki Geometric Algebra page a while back when I started learning the subject: http://en.wikipedia.org/wiki/Geomet...rally_expressed_in_terms_of_the_wedge_product. Looking at the example now, I don't think it's the greatest. It should also probably be in a wedge product page instead of GA ... but that was the context that I learned about it first (I chose to use the mostly empty wiki page to dump down my initial notes on the subject as I started learning it;)
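The Gaussian-elimination-plus-back-substitution recipe asked about in the thread, for the nonsingular square case (det A ≠ 0), can be sketched in Python; the 2×2 system here is invented for illustration:

```python
def solve_square(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    assuming A is square and nonsingular (det A != 0)."""
    n = len(A)
    # Work on an augmented copy [A | b] so the inputs are not modified.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry in col.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate everything below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitution on the upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

x = solve_square([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])  # expect [0.8, 1.4]
```

For the homogeneous case with det A = 0, the same elimination produces free variables, and the solution set is the subspace slider142 describes rather than a single vector.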
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8674826622009277, "perplexity": 436.0464233873669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00385-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/please-help-arrow-shot-in-sky-question.60109/
1. Jan 17, 2005

### mrd59

Ok, I'm trying to help my son with this test question he got wrong. He is in a basic high school physics class and I have virtually no experience with physics. The book and classroom handouts have nothing that completely explains this question. Any help and explanation would be most wonderful! An arrow is shot straight up into the air and then falls back down to the ground. On the way up, the arrow could be described as......

A) Positive inertia, positive velocity, positive acceleration and positive momentum. (he marked this and got it wrong)
B) Negative inertia, positive velocity, positive acceleration and negative momentum.
C) Positive inertia, negative velocity, negative acceleration and positive momentum.
D) Positive inertia, positive velocity, negative acceleration and positive momentum. (we think this might be the correct answer but are not sure why)

2. Jan 17, 2005

### ek

Negative acceleration. Gravity is slowing the arrow down; if it were positive acceleration the arrow would go flying off into space.

3. Jan 17, 2005

### chroot

Staff Emeritus

Is this the entire question? This question actually has no answer as stated, because it fails to make clear its assumptions. Velocity cannot be said to be positive or negative without first defining which direction should be considered positive! Acceleration suffers the same problem; you must first define the directions. People often consider gravity to be a negative acceleration, but this is just a convention. You could just as well consider it a positive acceleration. Besides, the terms "positive inertia" and "negative inertia" mean nothing at all. Perhaps the teacher meant kinetic energy? If this is the entire problem as given, the teacher should really be ripped to shreds. If I were you, I'd be making a phone call.

- Warren

4. Jan 17, 2005

### ek

It's just grade 10 or 11 physics. I don't think there's a need to overcomplicate things for students.
Assumptions are made for the ease of the students, I'm guessing. That inertia thing though, I was thinking wtf too.

5. Jan 17, 2005

### chroot

Staff Emeritus

A problem that is not completely specified does not have a completely specified answer, no matter what "easing" is done. If the teacher told the class that "upwards velocities are always positive in my class," then so be it, but we here on physicsforums.com do not have access to that information. We cannot answer it.

- Warren

6. Jan 17, 2005

### mrd59

Yes, this is the entire question. He gets some extra points for correcting his test questions and explaining why the wrong answers were wrong and the correct answer is correct. Of course the problem with that is the teacher will not help by telling you which one is the correct answer. All we have are some very sketchy handouts from class and a Conceptual Physics book. Most of the test questions have taken us a long time to help him figure out (post-test) even with these resources. It seems to be a vicious circle. Material is not explained, test questions do not come from the book or handouts, the student fails, the teacher says to find the right answer, but that is hard if you don't know it (which is why you failed), and resources are minimal. A parents' (and students') nightmare!

Last edited: Jan 17, 2005

7. Jan 17, 2005

### Curious3141

As chroot said, the statement of this question is flawed, although one can infer the convention from the fact that three of the quantities should be signed one way and one the other way, meaning there's only one choice that fits. Nevertheless, feedback should be given to the teacher. I've had bitter experience with teachers who misinform, insist they're right when they're wrong, and then sneakily correct themselves later without fanfare or even acknowledgement. For the sake of your son and the rest in his class, make sure the teacher knows why this question is incomplete. You can always ask the teacher to come here to get chastised. :tongue2:

8.
Jan 17, 2005

### chroot

Staff Emeritus

I fondly remember a ninth-grade chemistry teacher spending half an hour of class time trying to convince me that a milliliter of any substance weighs one gram.

- Warren

9. Jan 17, 2005

### dextercioby

We wouldn't be that lucky... :tongue2: To the OP: my advice is to start l'exposé with the words "I chose the Oy axis pointing upwards". Then the answer will be correct and the idiot would have nothing to say...

Daniel.

10. Jan 17, 2005

### mrd59

Thanks to all of you! My daughter is a physics major in college but she's back at school now so not around to help. I might be posting quite often if this class continues to go in this direction! -m

11. Jan 17, 2005

### Integral

Staff Emeritus

I would bet that in class the teacher had a standard set of definitions for the coordinate system. Clearly he expected his default definitions to be known and understood by the students. We are not privy to such definitions. Your son needs to examine notes, etc., to gain an understanding of these unstated but implied definitions.
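With the up-is-positive convention suggested in the thread, the sign pattern of answer D can be checked numerically; this small Python sketch uses a made-up launch speed and arrow mass:

```python
G = 9.81      # gravitational acceleration magnitude, m/s^2 (acts downward)
V0 = 30.0     # assumed launch speed, m/s; upward is taken as positive
MASS = 0.03   # assumed arrow mass, kg

def state(t):
    """Velocity, acceleration, and momentum at time t after launch."""
    v = V0 - G * t    # velocity shrinks but stays positive until the apex
    a = -G            # acceleration is constant and negative (downward)
    p = MASS * v      # momentum carries the same sign as velocity
    return v, a, p

v, a, p = state(1.0)  # one second into the flight, still on the way up
# v > 0, a < 0, p > 0: positive velocity, negative acceleration,
# positive momentum, matching answer D (inertia, i.e. mass, is positive).
```

The snippet also makes chroot's point concrete: flip the sign convention and all three signs flip with it, which is why the question is ill-posed without a stated axis.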
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8075973987579346, "perplexity": 1244.9373442286064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00155-ip-10-171-10-70.ec2.internal.warc.gz"}
https://brilliant.org/problems/complex-number-1/
# Complex number 1!

Algebra Level 5

If $$x = 2+5i$$ and $$\large 2(\frac{1}{1!9!} +\frac{1}{3!7!}) + \frac{1}{5!5!} = \frac{2^a}{b!}$$, then the value of $$(x^3 -5x^2 +33x -19)$$ is equal to
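Both pieces of the problem can be sanity-checked numerically; this Python sketch is just a verification aid, not part of the original problem:

```python
from math import factorial

# Factorial identity: multiplying the left side by 10! turns each term into
# a binomial coefficient, 2*(C(10,1) + C(10,3)) + C(10,5) = 512 = 2^9,
# so the sum equals 2^9 / 10!  (i.e. a = 9, b = 10).
lhs = (2 * (1 / (factorial(1) * factorial(9)) + 1 / (factorial(3) * factorial(7)))
       + 1 / (factorial(5) * factorial(5)))
rhs = 2**9 / factorial(10)

# The cubic at x = 2 + 5i collapses to a real integer.
x = 2 + 5j
value = x**3 - 5 * x**2 + 33 * x - 19
```

The imaginary parts cancel because x satisfies x² − 4x + 29 = 0, so the cubic reduces to a real constant when evaluated at this root.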
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9669448137283325, "perplexity": 3543.4484921930098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645775.16/warc/CC-MAIN-20180318130245-20180318150245-00796.warc.gz"}
https://aas.org/archives/BAAS/v26n2/aas184/abs/S2907.html
A New Mechanism for Excitation of the [N II] Temperature Diagnostic Emission Lines at 6548 \AA\ and 6584 \AA Session 29 -- General Interstellar Medium Display presentation, Tuesday, 31, 1994, 9:20-6:30 ## [29.07] A New Mechanism for Excitation of the [N II] Temperature Diagnostic Emission Lines at 6548 \AA\ and 6584 \AA Todd M. Tripp, John S. Gallagher, and Alan Watson (Washburn Observatory, U. Wisconsin) This paper discusses the possibility that the intensities of the [N II] emission lines at 6548 and 6584 \AA\ are boosted, in a variety of astrophysical nebulae, by the $(2s^{2}2p^{2}) \ ^{1}D_{2}$ --- $(2s^{2}2p3s) \ ^{3}P^{o}_{1}$ transition at 748 \AA. According to theoretical calculations by Fawcett and experimental measurement of the branching ratio, this intersystem ($\Delta S \neq$ 0) transition is as fast as a typical electric dipole allowed transition. The $(2s^{2}2p3s) \ ^{3}P^{o}_{1}$ level is directly populated by a resonance transition from the $(2s^{2}2p^{2}) \ ^{3}P$ ground state, and one out of three electrons that enter the $(2s^{2}2p3s) \ ^{3}P^{o}_{1}$ level will spontaneously decay through the 748 \AA \ transition. Since the 748 \AA\ transition deposits electrons in the $(2s^{2}2p^{2}) \ ^{1}D_{2}$ level from which the [N II] $\lambda \lambda$6548,6584 emissions originate, this remarkably strong intersystem transition will increase the intensities of the famous [N II] temperature diagnostic emission lines if significant excitation of the $(2s^{2}2p3s) \ ^{3}P^{o}_{1}$ level occurs. We discuss the circumstances in which the $(3s) \ ^{3}P^{o}_{1}$ upper level could be significantly populated by photoexcitation, collisional excitation, or recombination. We point out that detection of recombination lines that feed the $(3s) \ ^{3}P^{o}_{1}$ level provides direct evidence that this process is important in nova shells. 
We also discuss the possible importance of this process in the production of anomalously large [N II]/H$\alpha$ ratios in cooling flow emission line filaments and starburst galaxy superwinds.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9711276292800903, "perplexity": 3203.297847363753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660467.49/warc/CC-MAIN-20160924173740-00070-ip-10-143-35-109.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Unicity_distance
# Unicity distance

In cryptography, unicity distance is the length of original ciphertext needed to break the cipher by reducing the number of possible spurious keys to zero in a brute force attack. That is, after trying every possible key, there should be just one decipherment that makes sense; i.e., it is the expected amount of ciphertext needed to determine the key completely, assuming the underlying message has redundancy.[1] Claude Shannon defined the unicity distance in his 1949 paper "Communication Theory of Secrecy Systems." Consider an attack on the ciphertext string "WNAIW" encrypted using a Vigenère cipher with a five-letter key. Conceivably, this string could be deciphered into any other string — RIVER and WATER are both possibilities for certain keys. This is a general rule of cryptanalysis: with no additional information it is impossible to decode this message. Of course, even in this case, only a certain number of five-letter keys will result in English words. Trying all possible keys, we will not only get RIVER and WATER, but SXOOS and KHDOP as well. The number of "working" keys will likely be very much smaller than the set of all possible keys. The problem is knowing which of these "working" keys is the right one; the rest are spurious.

## Relation with key size and possible plaintexts

In general, given particular assumptions about the size of the key and the number of possible messages, there is an average ciphertext length where there is only one key (on average) that will generate a readable message. In the example above we see only upper-case English characters, so if we assume that the plaintext has this form, then there are 26 possible letters for each position in the string. Likewise, if we assume five-character upper-case keys, there are K = 26^5 possible keys, of which the majority will not "work". A tremendous number of possible messages, N, can be generated using even this limited set of characters: N = 26^L, where L is the length of the message.
However, only a smaller set of them is readable plaintext due to the rules of the language, perhaps M of them, where M is likely to be very much smaller than N. Moreover, M has a one-to-one relationship with the number of keys that work, so given K possible keys, only K × (M/N) of them will "work". One of these is the correct key; the rest are spurious. Since M/N gets arbitrarily small as the length L of the message increases, there is eventually some L that is large enough to make the number of spurious keys equal to zero. Roughly speaking, this is the L that makes KM/N = 1. This L is the unicity distance.

## Relation with key entropy and plaintext redundancy

The unicity distance can equivalently be defined as the minimum amount of ciphertext required to permit a computationally unlimited adversary to recover the unique encryption key.[1] The expected unicity distance can then be shown to be:[1]

$U = H(k)/D$

where U is the unicity distance, H(k) is the entropy of the key space (e.g. 128 for 2^128 equiprobable keys, rather less if the key is a memorized pass-phrase), and D is the plaintext redundancy in bits per character. Now an alphabet of 32 characters can carry 5 bits of information per character (as 32 = 2^5). In general, the number of bits of information per character is log2(N), where N is the number of characters in the alphabet and log2 is the binary logarithm. So for English each character can convey log2(26) = 4.7 bits of information. However, the average amount of actual information carried per character in meaningful English text is only about 1.5 bits per character. So the plaintext redundancy is D = 4.7 − 1.5 = 3.2.[1] Basically, the bigger the unicity distance the better. For a one-time pad of unlimited size, given the unbounded entropy of the key space, we have $U = \infty$, which is consistent with the one-time pad being unbreakable.
### Unicity distance of substitution cipher

For a simple substitution cipher, the number of possible keys is 26! = 4.0329 × 10^26 = 2^88.4, the number of ways in which the alphabet can be permuted. Assuming all keys are equally likely, H(k) = log2(26!) = 88.4 bits. For English text D = 3.2, thus U = 88.4/3.2 = 28. So given 28 characters of ciphertext it should be theoretically possible to work out an English plaintext and hence the key.

## Practical application

Unicity distance is a useful theoretical measure, but it doesn't say much about the security of a block cipher when attacked by an adversary with real-world (limited) resources. Consider a block cipher with a unicity distance of three ciphertext blocks. Although there is clearly enough information for a computationally unbounded adversary to find the right key (simple exhaustive search), this may be computationally infeasible in practice. The unicity distance can be increased by reducing the plaintext redundancy. One way to do this is to deploy data compression techniques prior to encryption, for example by removing redundant vowels while retaining readability. This is a good idea anyway, as it reduces the amount of data to be encrypted. Another way to increase the unicity distance is to increase the number of possible valid sequences in the files as they are read: if, for at least the first several blocks, any bit pattern can effectively be part of a valid message, then the unicity distance has not been reached. This is possible on long files when certain bijective string sorting permutations are used, such as the many variants of bijective Burrows–Wheeler transforms. Ciphertexts greater than the unicity distance can be assumed to have only one meaningful decryption. Ciphertexts shorter than the unicity distance may have multiple plausible decryptions. Unicity distance is not a measure of how much ciphertext is required for cryptanalysis,
but how much ciphertext is required for there to be only one reasonable solution for cryptanalysis.
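The substitution-cipher numbers above (26! keys, D = 3.2) can be reproduced directly — a small sketch, using only the standard library:

```python
from math import log2, factorial

# Key space of a simple substitution cipher: all 26! permutations of the alphabet.
key_entropy = log2(factorial(26))  # ~ 88.4 bits
D = 3.2                            # redundancy of English, bits per character

U = key_entropy / D
print(round(key_entropy, 1), round(U))  # ~ 88.4 bits, U ~ 28 characters
```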
https://www.maplesoft.com/support/help/Maple/view.aspx?path=dsolve%2Fnumeric%2FBVP
BVP - Maple Help

dsolve/numeric/BVP - find numerical solution of ODE boundary value problems

Calling Sequence
dsolve(odesys, numeric, vars, options)

Parameters
odesys - set or list; ordinary differential equation(s) and boundary conditions
numeric - name; instruct dsolve to find a numerical solution
vars - (optional) any indeterminate function of one variable, or a set or list of them, representing the unknowns of the ODE problem
options - (optional) equations of the form keyword = value

Description
• The dsolve command with the numeric or type=numeric option on a real-valued two-point boundary value problem (BVP) finds a numerical solution for the ODE or ODE system BVP.
• The type of problem (BVP or IVP) is automatically detected by dsolve, and an applicable algorithm is used. The optional equation method=bvp[submethod] indicates that a specific BVP solver is to be used. The available submethods are traprich, trapdefer, midrich, and middefer. The first two, traprich and trapdefer, are trapezoid methods that use Richardson extrapolation enhancement or deferred correction enhancement, respectively. The remaining two, midrich and middefer, are midpoint methods with the same enhancement schemes. There are two major considerations when choosing a submethod for a BVP. The trapezoid submethods are generally more efficient for typical problems, but the midpoint submethods are capable of handling harmless end-point singularities that the trapezoid submethods cannot. For the enhancement schemes, Richardson extrapolation is generally faster, but deferred corrections uses less memory on difficult problems.
• The available methods are fairly general and should work on a variety of BVPs. They are not well suited to solving BVPs that are stiff or have solutions with singularities in their higher-order derivatives.
• The solution method is capable of handling both linear and nonlinear BVPs with fixed, periodic, and even nonlinear boundary conditions.
For nonlinear boundary conditions, however, it may be necessary to provide an initial solution profile that approximately satisfies the boundary conditions (see 'approxsoln' below). The method is also capable of handling BVP systems with undetermined parameters, given a sufficient number of boundary conditions to determine their values.
• Computation can be performed in both hardware precision and arbitrary precision, based on the setting of Digits. If Digits is smaller than the hardware precision for the machine, then computations are performed in hardware precision (see evalhf). If Digits is larger, then computations are performed in Maple floating point. In both cases, many of the more computationally intensive steps are performed in compiled external libraries.
• The return value of dsolve and the manipulation of the input system are controlled by the following three optional equations, which are discussed in dsolve[numeric].

'output' = keyword or array
'known' = name or list of names
'optimize' = boolean

In addition to the output options available for all numerical ODE problems, the option output=mesh is also available. It specifies that dsolve return an array-form output where independent variable values are chosen as the discrete mesh used internally by the method. This also has the effect of changing the interpolant default to false, because only the discrete solution is required. See interpolant below. The 'known' option specifies user-defined known functions, and is discussed in dsolve[numeric].

Options
• The options listed below are specific to BVPs and include options to control the solution process, the accuracy of the result, and the initial mesh or solution profile to be used.
'abserr' = numeric
'range' = numeric..numeric
'adaptive' = boolean
'maxmesh' = integer
'initmesh' = integer
'approxsoln' = array, list or procedure
'continuation' = name
'mincont' = numeric
'interpolant' = boolean

'abserr'= numeric
Numeric value that gives an absolute error tolerance for the solution. In all but exceptional cases the true solution should be within the error tolerance value of the continuous approximate solution obtained by the method. The default value of abserr is Float(1,-6). Note: This is an absolute error tolerance, so if the scale of the problem is such that the solution values are much smaller or larger than unitary, this parameter requires adjustment. As for IVPs (see dsolve[numeric,IVP]), the system is converted to a first-order system internally before the numerical solution is computed. The absolute error tolerance is applied to all components of the converted system.

'range'= numeric..numeric
Gives the left and right boundary points of the solution interval in the form leftpt..rightpt. This enables the method to be used to compute the solution of an initial value problem over a fixed interval using the absolute global error bound specified by abserr.

'adaptive'= boolean
Boolean value that determines whether mesh adaptation is used to obtain the solution. By default this is true. The method applied is based on arc-length, with additional rules to prevent adjacent steps from changing their size too rapidly, restrictions on the largest and smallest allowable step sizes as compared with the corresponding fixed step size, and restrictions preventing the mesh points near the boundaries from becoming too widely spaced. Note: Some problems work better with fixed step-size meshes.

'maxmesh'= integer
Integer value that determines the maximum number of points dsolve uses to compute the numerical solution. The numeric BVP solver internally uses a discrete mesh of points to calculate the approximate solution, which is adjusted as greater accuracy is required.
If the desired accuracy cannot be obtained with the current limitation imposed by maxmesh, an error is returned. The default value for maxmesh is $128$. Its value must be between $32$ and $134217728$.

'initmesh'= integer
Integer value that determines the number of points dsolve uses to compute the initial solution profile. In some cases, the default initial $8$ point mesh does not have sufficient resolution to obtain the initial solution profile, so increasing this value can give a solution when the default value does not. Its value must be between $8$ and $134217728$.

'approxsoln'= array, list or procedure
Argument that enables specification of an initial approximate discrete solution to be used as a starting point for the problem. There are many forms that this argument can take. The simplest forms are the output of another dsolve/numeric computation when output is specified as an array (Matrix form output), or as a procedure (procedurelist, listprocedure, or operator output). Another form is as a two-dimensional array where the names of the dependent variables and independent variables are given in the first column, followed by their values at different mesh points in the following columns. All dependent variables in the (converted to first order) system must be present. This form can also be specified using nested lists. The final form is as a list of equations that describe the initial solution profile as functions of the independent variable. For example, for a second-order BVP in $u(x)$, this could be specified as $[u(x)=f(x)]$ or as $[u(x)=f(x), \frac{d}{dx}u(x)=g(x)]$, where $f(x)$ and $g(x)$ are fully determined functions of $x$. In the former case, the required derivative values of $u(x)$ are computed by evaluation of the derivative of $f(x)$.
In the cases with a discrete solution given in array form, the values of the independent variable must always be provided at either boundary, in increasing order over the solution region, and at a minimum of $8$ distinct points. In addition, the specification of a discrete approximate solution cannot be used in combination with initmesh. For systems with free parameters (determined through interaction with the boundary conditions), it is suggested that approximate values for the parameters also be provided.

'continuation'= name
Argument that allows solution of a BVP via a continuous transformation from an easier problem to the desired problem. This method is used in obtaining an initial solution profile, and is most helpful for problems for which the Newton iteration for the initial solution approximation does not converge. The continuation option provides the name for the continuation parameter for the problem. The continuation problem must be constructed so that the parameter c, when varied from $0$ to $1$, defines a different BVP for each value of c, where $c=0$ represents the simpler problem, and $c=1$ represents the problem to be solved. The continuation parameter can be present in the differential equation, the values for the boundary conditions, or both. Two examples of the use of continuation for the solution of BVP problems can be found in dsolve[numeric_bvp,advanced]. For best results, the continuation problem should be constructed so that the BVP solutions vary continuously with the parameter, and so that the rate of change of the solution, with respect to the parameter, is very roughly constant over the range $c=0..1$.

'mincont'= numeric
Argument that provides a minimum value for the allowed change in the continuation parameter when computing the initial solution profile. The value must be positive, and less than or equal to $\frac{1}{10}$. The default value is $\frac{1}{100}$. This option is valid only when continuation has also been specified.
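The continuation idea described above — solve an easy problem at c = 0, then ramp c toward 1, reusing each converged solution as the next initial guess — can be sketched outside Maple. The sketch below is a Python analogue using SciPy's solve_bvp as a stand-in solver (an assumption; it is not Maple's trapezoid/midpoint code), applied to a Troesch-type test problem that we chose for illustration:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Troesch-style problem: y'' = c*sinh(c*y), y(0) = 0, y(1) = 1.
# At c = 0 it reduces to y'' = 0; we ramp c toward the target value,
# feeding each converged solution back in as the next initial guess.
def solve_with_continuation(c_target=2.0, steps=5):
    x = np.linspace(0.0, 1.0, 50)
    y = np.vstack([x, np.ones_like(x)])  # initial guess: y = x, y' = 1
    sol = None
    for c in np.linspace(0.0, c_target, steps)[1:]:
        def rhs(x, y, c=c):
            return np.vstack([y[1], c * np.sinh(c * y[0])])

        def bc(ya, yb):
            return np.array([ya[0] - 0.0, yb[0] - 1.0])

        sol = solve_bvp(rhs, bc, x, y, tol=1e-6, max_nodes=10000)
        assert sol.success, f"continuation step c={c:.2f} failed"
        x, y = sol.x, sol.y  # warm-start the next, harder problem
    return sol

sol = solve_with_continuation()
print(sol.sol(0.5)[0])  # y(0.5) for the target c
```

The `tol` and `max_nodes` arguments play roles loosely analogous to Maple's abserr and maxmesh; the ramp itself mirrors what the continuation option automates.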
For some discussion of difficult BVP problems, and possible solutions, see dsolve[numeric_bvp,advanced].

'interpolant'= boolean
Boolean that is true by default, and specifies that the solution should be refined until an interpolation of the solution gives the requested accuracy. When this is false, the solution is improved until only the discrete solution has the requested accuracy. Also see the output=mesh argument.
• Results can be plotted by using the function odeplot in the plots package.

Examples

Linear boundary value problem:

> dsol1 := dsolve({diff(y(x),x,x) - 2*y(x) = 0, y(0) = 1.2, y(1) = 0.9}, numeric);
      dsol1 := proc(x_bvp) ... end proc                                        (1)
> dsol1(0);
      [x = 0., y(x) = 1.20000000000000, diff(y(x),x) = -1.25251895272792]      (2)
> dsol1(0.2);
      [x = 0.2, y(x) = 0.994463627112648, diff(y(x),x) = -0.816528958014159]   (3)
> dsol1(0.5);
      [x = 0.5, y(x) = 0.832942089176703, diff(y(x),x) = -0.276385195570455]   (4)
> dsol1(1);
      [x = 1., y(x) = 0.900000000000000, diff(y(x),x) = 0.555701111490340]     (5)

Nonlinear boundary value problem:

> deq2 := diff(y(x),x,x) + (2 + y(x)^2)*y(x)/(1 + y(x)^2) = 1;                 (6)
> bc2 := y(0) = 0, y(2) = 3:
> dsol2 := dsolve({bc2, deq2}, numeric, output = listprocedure);
      dsol2 := [x = proc(x) ... end proc, y(x) = proc(x) ... end proc,
                diff(y(x),x) = proc(x) ... end proc]                           (7)
> fy := subs(dsol2, y(x));
      fy := proc(x) ... end proc                                               (8)
> [seq(fy(2*i/4), i = 0..4)];
      [0., 1.20426489602605, 2.24834461794798, 2.89441120924878,
       3.00000000000000]                                                       (9)

Using the BVP solver for an IVP:

> dsys3 := diff(y(x),x) - y(x) = 0, y(1) = 1;                                  (10)
> dsol3 := dsolve({dsys3}, numeric, method = bvp, range = 0..1, abserr = 1.0e-10);
      dsol3 := proc(x_bvp) ... end proc                                        (11)
> dsol3(0);
      [x = 0., y(x) = 0.367879441157275]                                       (12)
> evalf[11](exp(-1));
      0.36787944117                                                            (13)

Boundary value problem with an unknown parameter:

> dsys4 := diff(y(x),x,x) - a*y(x) = 0, y(0) = 1, y(1) = 1, D(y)(0) = 2;       (14)
> dsol4 := dsolve({dsys4}, numeric, output = operator);
      dsol4 := [x = proc(x) ... end proc, y = proc(x) ... end proc,
                D(y) = proc(x) ... end proc, a = proc(x) ... end proc]         (15)
> dsol4(0);
      [x = 0, y(0) = 1.00000000000000, D(y)(0) = 2.00000000000000,
       a(0) = -2.96069553617247]                                               (16)
> dsol4(0.5);
      [x = 0.5, y(0.5) = 1.53330815132594, D(y)(0.5) = 5.80015080018856e-11,
       a(0.5) = -2.96069553617247]                                             (17)

References

Ascher, U.; Mattheij, R.; and Russell, R. "Numerical Solution of Boundary Value Problems for Ordinary Differential Equations." SIAM Classics in Applied Mathematics, Vol. 13, 1995.

Ascher, U., and Petzold, L. "Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations." SIAM, Philadelphia, 1998.
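The first worked example above (the linear BVP y'' − 2y = 0, y(0) = 1.2, y(1) = 0.9) can be cross-checked outside Maple. The sketch below uses SciPy's solve_bvp as a stand-in solver (an assumption, not Maple's method), applying the same reduction to a first-order system that dsolve performs internally:

```python
import numpy as np
from scipy.integrate import solve_bvp

# y'' - 2y = 0, y(0) = 1.2, y(1) = 0.9, reduced to the first-order system
# y0' = y1, y1' = 2*y0.
def rhs(x, y):
    return np.vstack([y[1], 2.0 * y[0]])

def bc(ya, yb):
    return np.array([ya[0] - 1.2, yb[0] - 0.9])

x = np.linspace(0.0, 1.0, 11)
y_guess = np.zeros((2, x.size))
sol = solve_bvp(rhs, bc, x, y_guess, tol=1e-8)

print(sol.sol(0.0)[1])  # ~ -1.2525190  (Maple result (2): -1.25251895272792)
print(sol.sol(0.5)[0])  # ~  0.8329421  (Maple result (4):  0.832942089176703)
```

The agreement follows because the problem is smooth and linear; both solvers converge to the analytic solution y = 1.2 cosh(√2 x) + B sinh(√2 x).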
https://advances.sciencemag.org/content/2/3/e1500901
Research Article | MOLECULAR PHYSICS

# Universal diffraction of atoms and molecules from a quantum reflection grating

Vol. 2, no. 3, e1500901

## Abstract

Since de Broglie’s work on the wave nature of particles, various optical phenomena have been observed with matter waves of atoms and molecules. However, the analogy between classical and atom/molecule optics is not exact because of different dispersion relations. In addition, according to de Broglie’s formula, different combinations of particle mass and velocity can give the same de Broglie wavelength. As a result, even for identical wavelengths, different molecular properties such as electric polarizabilities, Casimir-Polder forces, and dissociation energies modify (and potentially suppress) the resulting matter-wave optical phenomena such as diffraction intensities or interference effects. We report on the universal behavior observed in matter-wave diffraction of He atoms and He2 and D2 molecules from a ruled grating. Clear evidence for emerging beam resonances is observed in the diffraction patterns, which are quantitatively the same for all three particles and only depend on the de Broglie wavelength. A model, combining secondary scattering and quantum reflection, permits us to trace the observed universal behavior back to the peculiar principles of quantum reflection.

Keywords

• Quantum reflection
• emerging beam resonance
• helium dimer
• matter-wave optics
• grazing incidence atom optics
• Rayleigh-Wood anomaly

## INTRODUCTION

On the basis of the quantum-mechanical wave nature of particles, optical effects such as refraction, diffraction, and interferometry have been observed with “matter waves” of atoms, molecules, and more recently, clusters and macromolecules (1, 2).
In these experiments, unlike in classical optics with light, the interaction of the particle either with an external field or with the material of an optical element introduces a particle-dependent disturbance that, in general, tends to be detrimental for observing the optical effect of interest. For instance, diffraction patterns of atoms and molecules diffracted by a nanoscale transmission grating strongly depend on the van der Waals interaction between the particle and the grating material; an increase in interaction strength causes a narrowing of the effective width of the grating slits (3). Also, the diffraction peak intensities of He and D2 scattered from a crystal surface were found to strongly differ because of the different surface corrugations that result from different particle-solid interaction strengths (4). The effect of the molecule-surface interaction becomes severe for macromolecules and clusters, where it can result in a strong reduction of the fringe visibility as observed in matter-wave interferometry (5). One possible way of overcoming this problem was recently demonstrated by Brand et al., who succeeded in using atomically thin nanoscale gratings made from a single-layer material such as graphene, thereby minimizing the particle-grating interaction (6). An alternative approach could be to use conventional diffraction gratings in such a way that the atoms or molecules do not come close to the solid surfaces, thereby strongly reducing the effects of the particle-grating interaction. Here, we demonstrate universal (viz., interaction-independent) diffraction by the quantum reflection of atoms and molecules from a conventional reflection grating. We have observed emerging beam resonances of identical shape for He atoms, D2 molecules, and even helium dimers, He2, under grazing incidence conditions. 
Coherent scattering of the particles results from quantum reflection from the long-range Casimir-Polder particle-surface potential tens of nanometers above the actual grating surface (7). By applying a secondary scattering model, we show how universal diffraction results from the peculiar principles governing quantum reflection. Here, universal diffraction means that the diffraction phenomena, including both angles and relative intensities of the diffraction peaks, depend solely on the de Broglie wavelength λ and are independent of the different strengths of the specific particle-grating interaction.

When an atom or molecule approaches a solid surface, it is exposed to the long-range attractive Casimir-Polder particle-surface interaction potential. In a classical description, this results in an acceleration of the particle toward the surface where, at the classical turning point, it will scatter back from the steep repulsive inner branch of the particle-surface potential. However, if the particle’s incident velocity is sufficiently small, the classical picture needs to be replaced by a quantum mechanical description. According to quantum mechanics, the particle’s de Broglie wavelength will vary along the slope of the attractive Casimir-Polder potential. If the length of the slope appears to be short on a length scale set by the de Broglie wavelength, the attractive potential effectively acts as an impedance discontinuity to the particle’s wave function. As a result, there is a detectable probability for the wave function to be quantum-reflected at the Casimir-Polder potential, way in front of the actual surface (8–11). In the limit of vanishing incident particle velocity (corresponding to infinite incident wavelength), the Casimir-Polder potential effectively resembles a step in the potential, and thus, the probability for quantum reflection approaches unity.
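The mass-velocity trade-off behind these statements follows from the de Broglie relation λ = h/(mv): at a common beam velocity, a helium dimer has exactly half the wavelength of a helium atom. A minimal numeric sketch (the 302 m/s beam velocity is an illustrative assumption chosen to give λ(He) ≈ 0.33 nm; it is not a value quoted in the text):

```python
# de Broglie wavelength: lambda = h / (m * v)
H = 6.62607015e-34    # Planck constant, J*s
AMU = 1.66053907e-27  # atomic mass unit, kg

def de_broglie(mass_amu: float, velocity: float) -> float:
    """Wavelength in metres for a particle of given mass (amu) and speed (m/s)."""
    return H / (mass_amu * AMU * velocity)

v = 302.0  # m/s, illustrative beam velocity (assumed)
lam_he = de_broglie(4.0026, v)        # helium atom
lam_he2 = de_broglie(2 * 4.0026, v)   # helium dimer: same v, twice the mass
print(lam_he * 1e9, lam_he2 * 1e9)    # ~ 0.33 nm and ~ 0.165 nm
```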
Quantum reflection from a solid has been observed with ultracold, metastable atoms (12, 13), atom beams (14–16), and even Bose-Einstein condensates (17, 18). Recently, coherent and nondestructive quantum reflection from a diffraction grating was reported for the van der Waals clusters He2 (7) and He3 (19). The exceptionally small binding energies of 10^−7 eV (He2) and 10^−5 eV (He3) are orders of magnitude smaller than the cluster-surface potential well depth of 10^−2 eV. However, because dimers and trimers are quantum-reflected tens of nanometers above the surface, they do not come close to where surface-induced forces would inevitably break up the fragile bonds.

Emerging beam resonance, also known as the Rayleigh-Wood anomaly, is a phenomenon that occurs in grating diffraction when conditions (wavelength, grating period, and incidence angle) are such that a diffracted beam of order m emerges from the grating plane. For instance, when, for given wavelength and grating period, the incidence angle is continuously varied from grazing toward normal incidence, the mth-order diffraction beam will, at some point, change from an evanescent wave state (pre-emergence) to emerging and, eventually, to a freely propagating wave above the grating plane (post-emergence). The incidence angle at which the emergence occurs is referred to as the mth-order Rayleigh angle. The emergence of a new beam causes abrupt intensity variations of the other diffraction beams, marking the emerging beam resonance. The effect was first observed with visible light by Wood (20) and explained by Rayleigh in 1907 (21). Only recently was it observed in atom diffraction (22). Here, we report evidence for the emerging beam resonance effect for the helium dimer.

## RESULTS

A schematic of the experimental setup is shown in Fig. 1. The 20-μm-period echelette grating is described in Materials and Methods, whereas further details of the apparatus are provided in the Supplementary Materials.
Diffraction data observed with a helium beam, containing both atoms and dimers at de Broglie wavelengths of 0.33 and 0.16 nm, respectively, are shown in Fig. 2A for a range of incidence and diffraction angles. The –1st-order diffraction peak of helium atoms, which emerges at an incidence angle of θin = 1.047 mrad, shows the strongest overall signal, larger than the specular (0th-order) beam. The –2nd-order diffraction beam, emerging at θin = 1.480 mrad, and the first-order beam of the atoms are clearly visible. Higher-order (n = 2 and n = 3) atomic diffraction beams show up with less intensity decaying with increasing incidence angle. In addition, –1st- and –3rd-order peaks of He2 are clearly visible for incidence angles larger than 0.75 and 1.3 mrad, respectively (7). Furthermore, for incidence angles from 0.7 to 0.9 mrad, a weak signal of the dimer’s first-order peak appears. For both atoms and dimers, the larger intensities of negative-order peaks result from the grating blaze (23). At Rayleigh angles, indicated in Fig. 2A, diffraction peaks appear at grazing emergence, with their intensities steeply increasing with incidence angle. Figure 2B shows angular spectra (corresponding to cross sections of the contour plot along the y axis) for incidence angles around the –1st-order Rayleigh angle θR,−1(He) = 1.04 mrad, where the monomer’s –1st-order peak and the dimer’s –2nd-order peak emerge. For θin ≤ 0.99 mrad, the –1st-order diffraction beam has not yet appeared (pre-emergence); there is no peak at angles θ ≤ 0.5 mrad (greenish traces in Fig. 2B). In this regime, the specular and first-order peaks of the atoms as well as the –1st-order peak of the dimers (inset in Fig. 2B) show little or no change with incidence angle. For incidence angles in the range θin = 1.0 to 1.05 mrad, the new diffraction peak is emerging progressively at θ ≤ 0.5 mrad (red traces in Fig. 2B). 
Because of a finite incident beam divergence of about 50 μrad, the emergence of the new peak does not occur at a well-defined incidence angle but rather is spread out over an interval of angles (23). This is reflected by the fact that for θin = 1.040 and 1.050 mrad, the partly emerged peaks share the left slope (24). Concurrent to the emergence of a new peak, the specular peak and the first-order peak of the atoms exhibit a steep increase from 340 to 500 counts/s and from 70 to 105 counts/s, respectively. An even stronger increase of about 100% is found for the –1st-order diffraction peak of the helium dimers, as can be seen in the inset of Fig. 2B. We interpret these rather abrupt intensity variations as a manifestation of the emerging beam resonance effect for He and He2 upon the emergence of the –1st- and –2nd-order peak, respectively. For incidence angles θin ≥ 1.066 mrad, the new diffraction beam appears fully emerged from the grating (post-emergence; bluish traces in Fig. 2B). Figure 2C shows diffraction efficiencies analyzed from the data shown in Fig. 2A. It is evident in the graph that at θR,−1(He) = θR,−2(He2) = 1.04 mrad, the diffraction efficiencies not only for He (n = 0 and 1) but also for He2 (n = −1) exhibit cusps characteristic for the emerging beam resonance effect (22). In addition, when the dimer –3rd-order beam emerges at θR,−3(He2) = 1.28 mrad, the dimer –1st-order diffraction efficiency exhibits a rapid decrease. ## DISCUSSION To analyze the emerging beam resonance behavior, we apply the multiple scattering model introduced by Rayleigh (21) and Fano (25, 26). As depicted in Fig. 3, the nth-order diffraction beam amplitude An is approximated as the constructive interference of direct and secondary scattering waves; An = An(1) + An(2). For θin = θR,m, the geometrical path length difference between direct and secondary scattering, deff (1 − cosθin), is equal to |m|λ, giving rise to fully constructive interference. 
An additional phase shift Φ is induced by the particle-surface interaction potential for an atom or molecule propagating along the path of length deff between the first and second scattering occurrences (Fig. 3). For quantum reflection under Rayleigh conditions, one finds Φ = −m π [1 + cosθR,m] ≈ −m 2π (see Materials and Methods), which also corresponds to fully constructive interference. Although this simple model cannot account for the detailed shape of the emerging beam resonance effect displayed in Fig. 2C, it allows us to derive two main aspects. First, under Rayleigh conditions, we expect constructive interference between direct and secondary scattering. Thus, the emergence of an mth-order beam is expected to increase the other diffraction peaks, including the specular peak, the more so the more intense the emerging beam is. Consequently, the overall reflectivity shows a steep variation under Rayleigh conditions, which is in full agreement with experimental results (22). Second, the calculated phase shift depends on the de Broglie wavelength as the sole parameter. Hence, at a given de Broglie wavelength, the model predicts the same (universal) behavior for any atom or molecule. This comes as a surprise because quantum reflection is inherently linked with the Casimir-Polder potential, which is particle-specific. The derivation of the phase shift Φ is based on two assumptions (see Materials and Methods). The particle-surface potential probed by the particle along its path between the first and second scattering occurrences is (i) constant and (ii) equal to the particle’s incident perpendicular kinetic energy, that is, the energy associated with the incident velocity component perpendicular to the grating plane. The second assumption is an approximation that follows from the principles of quantum reflection (9–11). It holds independent of the specific particle properties.
As a result, different atoms or molecules at the same de Broglie wavelength will be quantum-reflected at different heights above the surface, but their wave functions will acquire the same phase shift Φ. This peculiarity of quantum reflection is the origin of universal (interaction-independent) diffraction. To check this prediction of universal behavior, we repeated the experiment with He and D2 under conditions such that their de Broglie wavelengths are identical to the wavelength of He2 in the data shown in Fig. 2. All other experimental parameters were kept unchanged. Figure 4 shows a direct comparison of the –1st-order diffraction efficiency curves. We find excellent agreement of the data for the three species (except for incident angles larger than about 1.25 mrad), thereby confirming the prediction of universal diffraction. At larger incident angles, He and D2 diffraction efficiencies still overlap, but the He2 efficiency is found to taper off. A possible explanation for this deviation could be that some dimers start to break up as they approach closer to the surface with increasing incidence angle. Furthermore, we note that universal behavior was also found for the –3rd-order diffraction efficiency curves of He, He2, and D2. In conclusion, we have observed emerging beam resonances for He, He2, and D2 quantum-reflected from an echelette diffraction grating at grazing incidence. Our observation indicates that He2, despite its fragile bond, can undergo double coherent, nondestructive scattering; under Rayleigh conditions, dimers scattered at a grating unit propagate parallel to the surface, scatter a second time at another grating unit without breakup, and interfere with directly scattered dimers. Furthermore, a simple approximate calculation of the relative phase between the direct and the secondary scattering paths indicates constructive interference under Rayleigh conditions independent of the particle-specific Casimir-Polder interaction with the grating. 
Diffraction data of He, He2, and D2 under conditions of identical de Broglie wavelength confirm this universal behavior. Because the effect is independent of the particle-specific properties, universal diffraction from a quantum-reflection grating can, in principle, be applied to larger molecules as well. The only prerequisite is the preparation of a sufficiently large de Broglie wavelength corresponding to the velocity component perpendicular to the grating plane, thereby providing a sufficient quantum-reflection probability. In future experiments, this could possibly be achieved by applying a state-of-the-art molecular-beam deceleration technique (27) or by choosing an even smaller incidence angle than the ones shown here.

## MATERIALS AND METHODS

### Diffraction grating

The commercial plane ruled echelette grating (Newport 20RG050-600-1; period d = 20 μm; blaze angle, 14 mrad) is aligned in a conical mount (28); the grooves are almost parallel to the incidence plane. We define the azimuth angle φ as the angle between the grooves and the incidence plane. Here, a negative azimuth angle was chosen to enhance the intensities of emerging diffraction beams (22). The exact value of φ is determined by fitting the diffraction-angle curves shown in Fig. 2A. Agreement between the lines and the positions of the observed peaks is found for φ = −33.5 mrad, corresponding to d_eff = 597 μm.

### Rayleigh angles

The nth-order diffraction angle θ_n can be calculated from the approximate grating equation for conical diffraction, cos θ_in − cos θ_n = nλ/d_eff, with effective period d_eff = d/|sin φ| (23). The Rayleigh angle θ_R,m is derived by inserting cos θ_R,m − 1 = mλ/d_eff into the grating equation. Because the de Broglie wavelength of a particle is inversely proportional to its mass, Rayleigh angles for monomers and dimers at the same particle velocity follow a simple relationship: θ_R,m(He) = θ_R,2m(He2). For instance, in the measurements shown in Fig. 2, the de Broglie wavelength λ is 0.327 nm for He and 0.164 nm for He2, resulting in Rayleigh angles θ_R,−1(He) = θ_R,−2(He2) = 1.047 mrad.

### Diffraction efficiencies

We define the diffraction efficiency of an nth-order peak as the ratio of its area to the incident-beam area. Diffraction-peak areas are determined by fitting each peak of an individual diffraction pattern, like the ones shown in Fig. 2B, with a Gaussian. The incident-beam area is determined from an angular spectrum measured with the grating removed from the beam path. The incident-beam signal is dominated by the atomic-beam component with just a small contribution of a few percent due to helium dimers. Therefore, the normalization to the incident-beam peak area results in a slight underestimation of the actual diffraction efficiencies (given as the ratio of the nth-order beam intensity to the incident beam intensity for either atoms or dimers) for the atoms and in a severe underestimation for the dimers. Thus, the diffraction efficiencies for dimers plotted in Fig. 2C should be considered as having arbitrary units and cannot be compared quantitatively to their atomic counterparts.

### Diffractive evanescent waves

Evanescent waves (29) result from diffraction into higher-order beams whose wave-vector normal component is imaginary (that is, nonpropagating) (26, 29); they propagate parallel to the grating surface plane. A diffraction beam of order m is freely propagating as long as the incidence angle is larger than the Rayleigh angle, θ_in > θ_R,m; it is emerging under the Rayleigh condition, θ_in = θ_R,m; and it is evanescent for θ_in < θ_R,m. In the secondary-scattering model, evanescent waves contribute to A_n^(2). Close to the Rayleigh angle, θ_in ≤ θ_R,m, a significant contribution of the mth-order evanescent wave to A_n^(2) can be expected (26).
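The grating equation, Rayleigh-angle relation, and phase-shift result quoted in this section can be cross-checked numerically. The following Python sketch (the function names are of my choosing; the parameter values are taken from the text) reproduces θ_R,−1(He) = θ_R,−2(He2) ≈ 1.047 mrad and the phase shift Φ ≈ −2mπ:

```python
import math

# Grating parameters from the text: d = 20 um, phi = -33.5 mrad,
# effective period d_eff = d / |sin(phi)| ~ 597 um.
d = 20e-6
phi = -33.5e-3
d_eff = d / abs(math.sin(phi))

def rayleigh_angle(m, wavelength):
    """Incidence angle at which the m-th order emerges: cos(theta) - 1 = m*lambda/d_eff."""
    return math.acos(1.0 + m * wavelength / d_eff)

def phase_shift(m, wavelength):
    """Secondary-scattering phase Phi = (pi d_eff / lambda) sin^2(theta) at the Rayleigh angle."""
    theta = rayleigh_angle(m, wavelength)
    return math.pi * d_eff / wavelength * math.sin(theta) ** 2

lam_He, lam_He2 = 0.327e-9, 0.164e-9  # de Broglie wavelengths, m

# Both species reach the Rayleigh condition near 1.047 mrad
# (small differences come from the rounded wavelengths):
print(rayleigh_angle(-1, lam_He) * 1e3, rayleigh_angle(-2, lam_He2) * 1e3)

# The acquired phase in units of 2*pi is close to -m, i.e. whole cycles
# and hence constructive interference:
print(phase_shift(-1, lam_He) / (2 * math.pi), phase_shift(-2, lam_He2) / (2 * math.pi))
```

Within rounding of the input values, the computed angles match the 1.047 mrad quoted above, and Φ/2π comes out as ≈1 and ≈2 for the −1st and −2nd orders, respectively.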
This is why the emerging-beam resonance can, potentially, affect the other diffraction-peak intensities already in the pre-emergence regime at θ_in ≤ θ_R,m (23).

### Quantum reflection

Quantum reflection of a particle from the attractive particle-surface potential takes place at a range of heights above the surface where the reflection probability is nonzero. As a rule of thumb, this range of heights is around the location where the kinetic energy associated with the normal velocity component is equal to the absolute magnitude of the attractive particle-surface interaction potential (9-11). (See the Supplementary Materials for how the rule of thumb is derived.) The attractive part of the potential can be approximated by a Casimir-Polder surface potential, V(z) = −C₃l/[(l + z)z³]. Here, z denotes the distance from the surface, and the product of the van der Waals coefficient C₃ and a characteristic length l (l = 9.3 nm for He) marks the transition from the van der Waals regime (z << l) to the retarded Casimir-Polder regime (z >> l) (11). For He2, one expects C₃ to be two times larger and l to be the same as for the He atom, because the extremely weak van der Waals bond of the dimer is too feeble to cause a significant disturbance to the electron shells of the two He atoms, which can thus be treated as separate atoms (30). Because He and He2, coexisting in a helium beam, have the same velocity, the dimer's kinetic energy is twice that of the atoms. As a result, at a given incidence angle, He and He2 in a beam are quantum-reflected at about the same distance from the surface, because the increased incidence energy of He2 is compensated for by its larger C₃ coefficient. Using C₃ = 0.202 meV nm³ between helium and aluminum (31), at θ_in = 0.740, 1.047, 1.282, and 1.480 mrad, which are the Rayleigh angles for the emergence of the −1st- to −4th-order He2 peaks in Fig. 2, the surface distance where quantum reflection takes place is estimated from the rule of thumb to be 35.3, 29.2, 26.2, and 24.2 nm, respectively. However, for identical de Broglie wavelengths, as in Fig. 4, quantum reflection of the three species is expected to occur at different heights above the surface. This can easily be seen by considering the rule of thumb and the strength of the particle-surface interaction, which is stronger for D2 than for He. As for He and He2, at identical de Broglie wavelengths the dimer's incident kinetic energy is just one-half of the monomer's kinetic energy. Therefore, quantum reflection of He2 is expected to occur at larger distances above the grating surface than for He.

### Secondary scattering phase shift

It is straightforward to calculate the additional phase shift Φ induced by the particle-surface potential for an atom or molecule of mass M and incident kinetic energy E propagating along the path d_eff between the first and second scattering (see Fig. 3). We apply the rule of thumb stating that quantum reflection takes place at about that distance from the surface where the particle's incident kinetic energy (corresponding to the motion along the surface-normal coordinate) equals the absolute magnitude of the Casimir-Polder potential energy (9-11). Therefore, for secondary scattering, we can approximate the potential energy probed by the particle along the additional path of length d_eff to be equal in magnitude to E_perp = (1/2)Mv_perp², where v_perp denotes the normal component of the incident particle velocity. The particle-surface potential-induced phase shift can be calculated as Φ = (k − k_0)d_eff ≈ (E_perp/2E) k_0 d_eff = (π d_eff/λ) sin²θ_in, where the square root in k = √(2M(E + E_perp))/ħ has been approximated by its Taylor expansion, justified by E_perp << E. Here, k and k_0 = 2π/λ denote the particle's wave vector in the presence and absence of the particle-surface interaction potential, respectively. In the former case, the kinetic energy is increased by the absolute magnitude of the potential energy of the atom-surface interaction.
We assume k to be constant along the path d_eff between the first and second scattering, corresponding to a constant height of the particle above the surface of the grating facet. The phase shift Φ = (π d_eff/λ) sin²θ_in can be further simplified. Under the Rayleigh condition of mth-order emergence, cos θ_R,m − 1 = mλ/d_eff, we get sin²θ_R,m = (1 + cos θ_R,m)(1 − cos θ_R,m) = −(1 + cos θ_R,m) mλ/d_eff. As a result, one finds that, for quantum reflection under Rayleigh conditions, the phase shift induced by the potential is Φ = −mπ(1 + cos θ_R,m) ≈ −2mπ.

## SUPPLEMENTARY MATERIALS

Source and helium beam.
Slits, apparatus geometry, and definition of angles.
Mass spectrometer detector and apparatus resolution.
Derivation of the "rule of thumb" of quantum reflection.
Fig. S1. Schematic of the quantum-reflection diffraction setup.
References (32, 33)

This is an open-access article distributed under the terms of the Creative Commons Attribution license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## REFERENCES AND NOTES

Acknowledgments: We thank P. R. Bunker for proofreading the manuscript and L. Y. Kim for help with preparing the figures. Funding: B.S.Z. acknowledges support from the T.J. Park Science Fellowship, and W.Z. acknowledges support from the Alexander von Humboldt Foundation. This work was further supported by a grant from the Creative and Innovation Project (1.120025.01) at UNIST and by the National Research Foundation of the Ministry of Education, Science and Technology, Korea (NRF-2012R1A1A1041789). Author contributions: B.S.Z. and W.S. conceived the experiment, W.Z. and B.S.Z. made the measurements and analyzed the data, and B.S.Z. and W.S. derived the model description and wrote the manuscript. Competing interests: The authors declare that they have no competing interests.
Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
# When did people know that all real polynomials of degree greater than 2 are reducible? Let $f(x) \in \mathbb{R}[x]$, and write $d = \deg f$. It is well known that if $\deg f > 2$, then $f$ is reducible over $\mathbb{R}$. This fact can easily be proved with the fundamental theorem of algebra. Indeed, by the fundamental theorem of algebra, $f(x)$ splits over $\mathbb{C}$, and since $f(x) = \overline{f(x)}$ when $x$ is real, it follows that the linear factors of $f$ must be real or come in conjugate pairs. Therefore, irreducible polynomials over $\mathbb{R}$ are only of degree 1 or 2, and it is easy to find examples of degree two polynomials with real coefficients which are irreducible. This fact has a profound impact on the theory of partial fractions, a staple in first year calculus. Indeed, calculus II (at least at my university) tends to spend an inordinate amount of time on 'integration' via the method of anti-derivatives, which I believe most mathematicians know is ineffective in solving the majority of problems, considering most functions do not have elementary anti-derivatives. However, in the context of rational functions, anti-differentiation and partial fractions completely solves the problem, as any rational function can be written as the sum of simpler rational functions, each with an elementary anti-derivative. However, this fact is (I believe) far from obvious if you do not know that the only irreducible polynomials over $\mathbb{R}$ are linear or quadratic. That said, it seems to me that the method of partial fractions is much older than the fundamental theorem of algebra. So when did people know (perhaps before the first proof of the fundamental theorem of algebra) that the only irreducible polynomials over $\mathbb{R}$ are linear or quadratic? When was the 'complete' solution of anti-derivatives of rational functions obtained? Is this history accounted for anywhere? I apologize in advance if this question is trivial. 
## migrated from mathoverflow.net Aug 2 '15 at 12:36

This question came from our site for professional mathematicians.

• If someone knew long ago that all polynomials over the reals can be factored into polynomials of degree $\leq2$, then all that would be needed to get the fundamental theorem of algebra would be that real quadratics have complex roots. That would be known pretty much immediately after complex numbers are invented. – Andreas Blass Aug 2 '15 at 2:58
• As I understand it, one of the reasons proving the FTA was important was to ensure that partial fractions would always work, at least in principle; I do not believe that it was a known fact before then. – Arturo Magidin Aug 2 '15 at 4:39
• This question would be more suitable on the History of Science and Mathematics stackexchange site. You are essentially asking who first proved the fundamental theorem of algebra. That is generally attributed to Gauss, in his doctoral thesis, although I believe his argument had some topological gaps. – KCd Aug 2 '15 at 6:34
• A very complete answer is contained in the Wikipedia article "Fundamental theorem of algebra". – Alexandre Eremenko Aug 2 '15 at 12:59
• I wrote an essay about the mathematics of integrating rational functions in this 15 April 2006 sci.math post, and in my brief historical comments I made a reference to the MacTutor History of Mathematics archive for The fundamental theorem of algebra, which seems to be more focused on what you're looking for than the Wikipedia article. See also the comments at the end of my sci.math post about Leibniz. – Dave L Renfro Aug 3 '15 at 19:16
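A concrete numeric illustration of the fact under discussion (my own addition, standard-library Python only): $x^4 + 1$ has no real roots, yet it is reducible over $\mathbb{R}$ into two quadratics obtained by pairing the conjugate complex roots.

```python
import math

# x^4 + 1 = (x^2 + sqrt(2) x + 1)(x^2 - sqrt(2) x + 1):
# each quadratic pairs one conjugate pair of the roots exp(+-i pi/4), exp(+-3i pi/4).
s = math.sqrt(2)

def quartic(x):
    return x**4 + 1

def factored(x):
    return (x * x + s * x + 1) * (x * x - s * x + 1)

# The two expressions agree (checked here on a few sample points).
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert math.isclose(quartic(x), factored(x), rel_tol=1e-12)
```

The identity follows from $(x^2+1)^2 - 2x^2 = x^4 + 1$, a difference of squares.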
## Elementary Algebra $5$ Combining like terms, the given expression simplifies to: $5x-7y-8x+3y$ =$5x-8x+3y-7y$ =$x(5-8)+y(3-7)$ =$-3x-4y$ We then substitute $x=9$ and $y=-8$ in the expression and simplify: $-3x-4y$ =$-3(9)-4(-8)$ =$-27+32$ =$5$
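The simplification and substitution above can be double-checked mechanically; a small sketch in plain Python (the function names are mine):

```python
def expression(x, y):
    # the original expression, term by term
    return 5 * x - 7 * y - 8 * x + 3 * y

def simplified(x, y):
    # after combining like terms: 5x - 8x = -3x and -7y + 3y = -4y
    return -3 * x - 4 * y

# The two forms agree on sample points, and substituting x = 9, y = -8 gives 5.
for x, y in [(0, 0), (1, 2), (9, -8), (-3, 5)]:
    assert expression(x, y) == simplified(x, y)
print(expression(9, -8))  # 5
```

Both forms evaluate to 5 at (9, −8), matching the worked answer.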
International System of Units

Links between the seven SI base unit definitions. Clockwise from top: kelvin (temperature), second (time), metre (length), kilogram (mass), candela (luminous intensity), mole (amount of substance) and ampere (electric current).

The International System of Units is the standard modern form of the metric system. The name of this system can be shortened or abbreviated to SI, from the French name Système International d'unités. The International System of Units is a system of measurement based on 7 base units: the metre (length), kilogram (mass), second (time), ampere (electric current), kelvin (temperature), mole (amount of substance), and candela (luminous intensity). These base units can be used in combination with each other. This creates SI derived units, which can be used to describe other quantities, such as volume, energy, pressure, and velocity. The system is used almost globally. Only Myanmar, Liberia, and the United States do not use SI as their official system of measurement.[1] In these countries, though, SI is commonly used in science and medicine.

History and use

The metric system was created in France after the French Revolution in 1789. The original system only had two standard units, the kilogram and the metre. The metric system became popular amongst scientists. In the 1860s, James Clerk Maxwell and William Thomson (later known as Lord Kelvin) suggested a system with three base units - length, mass, and time. Other units would be derived from those three base units. Later, this suggestion would be used to create the centimetre-gram-second system of units (CGS), which used the centimetre as the base unit for length, the gram as the base unit for mass, and the second as the base unit for time. It also added the dyne as the base unit for force and the erg as the base unit for energy.
As scientists studied electricity and magnetism, they realized other base units were needed to describe these subjects. By the middle of the 20th century, many different versions of the metric system were being used. This was very confusing. In 1954, the 10th General Conference on Weights and Measures (CGPM) created the first version of the International System of Units. The six base units that they used were the metre, kilogram, second, ampere, kelvin, and candela.[2] The seventh base unit, the mole, was added in 1971.[3] SI is now used almost everywhere in the world, except in the United States, Liberia and Myanmar, where the older imperial units are still widely used. Other countries, most of them historically related to the British Empire, are slowly replacing the old imperial system with the metric system or using both systems at the same time.

Units of measurement

Base units

The SI base units are measurements used by scientists and other people around the world. All the other units can be written by combining these seven base units in different ways. These other units are called "derived units".

SI base units:

- metre (m), length.
- kilogram (kg), mass.[note 1]
- second (s), time. Original (Medieval): 1/86,400 of a day. Current (1967): the time needed for 9,192,631,770 periods or cycles of the radiation created by electrons moving between two energy levels of the caesium-133 atom.
- ampere (A), electric current. Original (1881): a tenth of the abampere, the unit of current used in the electromagnetic CGS system.[4] Current (1946): the current passing through two very long and thin wires placed 1 m apart that produces an attractive force equal to 2×10⁻⁷ newton per metre of length.
- kelvin (K), temperature. Original (1743): the centigrade scale is obtained by assigning 0° to the freezing point of water and 100° to the boiling point of water. Current (1967): the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.
- mole (mol), amount of substance. Original (1900): the molecular weight of a substance in grams. Current (1967): the same amount as the number of atoms in 0.012 kilogram of carbon-12.[note 2]
- candela (cd), luminous intensity. Original (1946): 1/60 of the brightness per square centimetre of a black body at the temperature where platinum freezes. Current (1979): the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540×10¹² hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.

Notes

1. The kilogram is the SI base unit of mass and is used in the definitions of derived units. However, units of mass are named using prefixes as if the gram were the base unit.
2. When the mole is used, the substance being measured must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles.

Derived units

Derived units are created by combining the base units. The base units can be divided, multiplied, or raised to powers. Some derived units have special names. Usually these were created to make calculations simpler.
Named units derived from SI base units:

- hertz (Hz), frequency: s⁻¹
- newton (N), force, weight: m·kg·s⁻²
- pascal (Pa), pressure, stress: N/m² = m⁻¹·kg·s⁻²
- joule (J), energy, work, heat: N·m = m²·kg·s⁻²
- watt (W), power, radiant flux: J/s = m²·kg·s⁻³
- coulomb (C), electric charge: s·A
- volt (V), voltage, electrical potential difference, electromotive force: W/A = J/C = m²·kg·s⁻³·A⁻¹
- farad (F), electrical capacitance: C/V = m⁻²·kg⁻¹·s⁴·A²
- ohm (Ω), electrical resistance, impedance, reactance: V/A = m²·kg·s⁻³·A⁻²
- siemens (S), electrical conductance: 1/Ω = m⁻²·kg⁻¹·s³·A²
- weber (Wb), magnetic flux: J/A = m²·kg·s⁻²·A⁻¹
- tesla (T), magnetic field strength: Wb/m² = V·s/m² = N/(A·m) = kg·s⁻²·A⁻¹
- henry (H), inductance: Wb/A = V·s/A = m²·kg·s⁻²·A⁻²
- degree Celsius (°C), temperature relative to 273.15 K: T(°C) = T(K) − 273.15
- lumen (lm), luminous flux: cd·sr = cd
- lux (lx), illuminance: lm/m² = m⁻²·cd
- becquerel (Bq), radioactivity (decays per unit time): s⁻¹
- gray (Gy), absorbed dose (of ionizing radiation): J/kg = m²·s⁻²
- sievert (Sv), equivalent dose (of ionizing radiation): J/kg = m²·s⁻²
- katal (kat), catalytic activity: s⁻¹·mol

Prefixes

Very large or very small measurements can be written using prefixes. Prefixes are added to the beginning of the unit to make a new unit. For example, the prefix kilo- means "1000" times the original unit and the prefix milli- means "0.001" times the original unit. So one kilometre is 1000 metres and one milligram is a 1000th of a gram.

Standard prefixes for the SI units of measure:

- Multiples: deca- (da, 10¹), hecto- (h, 10²), kilo- (k, 10³), mega- (M, 10⁶), giga- (G, 10⁹), tera- (T, 10¹²), peta- (P, 10¹⁵), exa- (E, 10¹⁸), zetta- (Z, 10²¹), yotta- (Y, 10²⁴)
- Fractions: deci- (d, 10⁻¹), centi- (c, 10⁻²), milli- (m, 10⁻³), micro- (μ, 10⁻⁶), nano- (n, 10⁻⁹), pico- (p, 10⁻¹²), femto- (f, 10⁻¹⁵), atto- (a, 10⁻¹⁸), zepto- (z, 10⁻²¹), yocto- (y, 10⁻²⁴)

References

1. "Appendix G: Weights and Measures". The World Factbook. Central Intelligence Agency. 2013. Retrieved 5 April 2013.
2. 10th session, Resolution 6.
3.
International Bureau of Weights and Measures (1971), Unité SI de quantité de matière (SI unit of amount of substance). 14th session, Resolution 3.
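The prefix scheme described in the article maps each prefix to a power of ten; a minimal lookup-table sketch (the dictionary and function names are mine, and "u" stands in for the micro sign μ):

```python
# Powers of ten for the standard SI prefixes (multiples and fractions).
SI_PREFIXES = {
    "da": 1e1, "h": 1e2, "k": 1e3, "M": 1e6, "G": 1e9, "T": 1e12,
    "P": 1e15, "E": 1e18, "Z": 1e21, "Y": 1e24,
    "d": 1e-1, "c": 1e-2, "m": 1e-3, "u": 1e-6, "n": 1e-9, "p": 1e-12,
    "f": 1e-15, "a": 1e-18, "z": 1e-21, "y": 1e-24,
}

def to_base_units(value, prefix=""):
    """Convert a prefixed quantity to the bare unit; no prefix means factor 1."""
    return value * SI_PREFIXES.get(prefix, 1.0)

print(to_base_units(1, "k"))  # 1 kilometre = 1000 metres
print(to_base_units(1, "m"))  # 1 milligram = 0.001 grams
```

This reproduces the article's examples: one kilometre is 1000 metres and one milligram is a thousandth of a gram.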
# On the Class of Subsets of Residuated lattice which induces a Congruence Relation

Document Type: Research Paper

Author

Department of Mathematics, Faculty of Mathematical Sciences and Computer, Shahid Chamran University of Ahvaz, Ahvaz, Iran

Abstract

In this manuscript, we study the class of special subsets connected with a subset in a residuated lattice and investigate some related properties. We describe the union of elements of this class. Using the intersection of all special subsets connected with a subset, we give a necessary and sufficient condition for a subset to be a filter. Finally, by defining some operations, we endow this class with a residuated lattice structure and prove that it is isomorphic to the set of all congruence classes with respect to a filter.

Keywords
# Homework Help: Calculus Differentiation Question

1. Aug 4, 2006

### ohlhauc1

I am just learning calculus, and I have to differentiate a problem. I have worked on it and I have asked people, but they do not know. Here is what I have done thus far:

f(x) = 3x^(-6) – 8x^5 + 9x^(2/5) + √7
f(x) = 3(dx^(-6)/dx) – 8(dx^5/dx) + 9(dx^(2/5)/dx) + √7(d/dx)
f(x) = 3(-6x^5) – 8(5x^4) + 9((2/5)x^(-3/5)) + 0
f(x) = -18x^5 – 40x^4 + (18/5)x^(-3/5)

I am wondering if this is it, or whether I need to do more? If I need to do more, could you say which rule I should use or some other type of advice? Thanks

P.S. If it is wrong, could you please tell me as well.

2. Aug 4, 2006

### Staff: Mentor

Your notation is a bit confusing. What is the starting function f(x)? And are you asked to find d[f(x)]/dx? Are all the "x" in your equations the unknown "x", or are some of the multiplication symbols?

3. Aug 4, 2006

### Data

if you mean that your function is $$f(x) = 3x^{-6} - 8x^5 + 9x^{\frac{2}{5}}+\sqrt{7}$$ and you found that the derivative is $$f^\prime (x) = -18x^5 - 40 x^4 + \frac{18}{5}x^{-\frac{3}{5}},$$ (I don't know why you've written f(x)= at every line when you are trying to differentiate) then you have made a small error, because $$\frac{d(x^{-6})}{dx} = -6x^{-7},$$ not $-6x^5$. Other than that it's ok (if I have translated your rather cryptic notation correctly - try to learn LaTeX: https://www.physicsforums.com/showthread.php?t=8997)

Last edited: Aug 4, 2006
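Data's correction can be checked numerically. Below is a sketch (the (coefficient, exponent) representation and names are mine, not from the thread) that applies the power rule term by term, using the corrected −6x^(−7), and compares the result against a central difference quotient:

```python
import math

# f(x) = 3x^(-6) - 8x^5 + 9x^(2/5) + sqrt(7) as (coefficient, exponent) pairs
terms = [(3.0, -6.0), (-8.0, 5.0), (9.0, 2.0 / 5.0), (math.sqrt(7), 0.0)]

def f(x):
    return sum(c * x ** e for c, e in terms)

def fprime(x):
    # power rule: d/dx (c x^e) = c*e*x^(e-1); the constant sqrt(7) term drops out
    return sum(c * e * x ** (e - 1) for c, e in terms if e != 0)

# compare against a symmetric difference quotient at x = 2
x, h = 2.0, 1e-6
numerical = (f(x + h) - f(x - h)) / (2 * h)
assert math.isclose(fprime(x), numerical, rel_tol=1e-6)
```

With the corrected first term, f′(x) = −18x⁻⁷ − 40x⁴ + (18/5)x^(−3/5), and the analytic and numerical derivatives agree.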
# ATLAS Preprints Latest additions: 2019-09-20 22:59 Search for direct production of electroweakinos in final states with one lepton, missing transverse momentum and a Higgs boson decaying into two $b$-jets in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector The results of a search for electroweakino pair production $pp \rightarrow \tilde\chi^\pm_1 \tilde\chi^0_2$ in which the chargino ($\tilde\chi^\pm_1$) decays into a $W$ boson and the lightest neutralino ($\tilde\chi^0_1$), while the heavier neutralino ($\tilde\chi^0_2$) decays into the Standard Model 125 GeV Higgs boson and a second $\tilde\chi^0_1$ are presented. [...] CERN-EP-2019-188. - 2019. Fulltext - Previous draft version 2019-09-19 07:44 Search for squarks and gluinos in final states with same-sign leptons and jets using 139 fb$^{-1}$ of data collected with the ATLAS detector / ATLAS Collaboration A search for supersymmetric partners of gluons and quarks is presented, involving signatures with jets and either two isolated leptons (electrons or muons) with the same electric charge, or at least three isolated leptons. [...] arXiv:1909.08457 ; CERN-EP-2019-161. - 2019. - 42 p. Fulltext - Previous draft version - Fulltext 2019-09-06 16:03 Combined measurements of Higgs boson production and decay using up to $80$ fb$^{-1}$ of proton-proton collision data at $\sqrt{s}=$ 13 TeV collected with the ATLAS experiment / ATLAS Collaboration Combined measurements of Higgs boson production cross sections and branching fractions are presented. [...] arXiv:1909.02845 ; CERN-EP-2019-097. - 2019. - 80 p.
Fulltext - Previous draft version - Fulltext 2019-09-04 14:59 Measurement of azimuthal anisotropy of muons from charm and bottom hadrons in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector / ATLAS Collaboration The elliptic flow of muons from the decay of charm and bottom hadrons is measured in $pp$ collisions at $\sqrt{s}=13$ TeV using a data sample with an integrated luminosity of 150 pb$^{-1}$ recorded by the ATLAS detector at the LHC. [...] arXiv:1909.01650 ; CERN-EP-2019-166. - 2019. - 27 p. Fulltext - Previous draft version - Fulltext 2019-09-04 14:03 Rare top quark production at the LHC:\\ t$\bar{\text{t}}$Z, t$\bar{\text{t}}$W, t$\bar{\text{t}}\gamma$, tZq, t$\gamma$q, and t$\bar{\text{t}}$t$\bar{\text{t}}$ / Knolle, Joscha (DESY) /ATLAS and CMS Collaborations A comprehensive set of measurements of top quark pair and single top quark production in association with electroweak bosons (W, Z, or $\gamma$) is presented. The results are compared to standard model (SM) predictions and used to set limits on new physics effects that would induce deviations from the SM. [...] CMS-CR-2019-078.- Geneva : CERN, 2019 - 7 p. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019 2019-09-04 13:59 Electroweak measurements at the High-Luminosity LHC / Savin, Alexander (Wisconsin U., Madison) /ATLAS and CMS Collaborations A set of selected standard model measurements proposed for the ATLAS and CMS experiments after the high-luminosity upgrade of the LHC is discussed. The measurements are separated into two categories: precise measurements that benefit from both improved systematic uncertainties and increased luminosity, like W or top mass, or weak mixing angle measurements; measurements of low cross section production that benefit mainly from luminosity increase and detector improvements, like VV VBS polarized cross section measurements or study of VVV production.. 
CMS-CR-2019-120. - Geneva : CERN, 2019 - 7 p. Fulltext: PDF; In : 7th Edition of the Large Hadron Collider Physics Conference, Puebla, Mexico, 20 May 2019

2019-09-04 13:59
Differential measurements of Higgs production at ATLAS and CMS / Sculac, Toni (Split Tech. U.) / ATLAS and CMS Collaborations
Differential Higgs boson production cross sections are sensitive probes for physics beyond the Standard Model. New physics may contribute in the gluon-gluon fusion loop, the dominant Higgs boson production mechanism at the LHC, and manifest itself through deviations from the distributions predicted by the standard model. [...] CMS-CR-2019-113. - Geneva : CERN, 2019 - 8 p. Fulltext: PDF; In : 7th Edition of the Large Hadron Collider Physics Conference, Puebla, Mexico, 20 May 2019

2019-09-03 21:58
Search for light long-lived neutral particles produced in $pp$ collisions at $\sqrt{s} =$ 13 TeV and decaying into collimated leptons or light hadrons with the ATLAS detector / ATLAS Collaboration
Several models of physics beyond the Standard Model predict the existence of dark photons, light neutral particles decaying into collimated leptons or light hadrons. [...] arXiv:1909.01246 ; CERN-EP-2019-140. - 2019. - 42 p. Fulltext - Previous draft version - Fulltext

2019-09-02 17:41
Performance of electron and photon triggers in ATLAS during LHC Run 2 / ATLAS Collaboration
Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for the ATLAS experiment to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena in both proton-proton and heavy-ion collisions. [...] arXiv:1909.00761 ; CERN-EP-2019-169. - 2019. - 55 p.
Fulltext - Fulltext

2019-08-22 22:53
Search for flavour-changing neutral currents in processes with one top quark and a photon using 81 fb$^{-1}$ of $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS experiment / ATLAS Collaboration
A search for flavour-changing neutral current (FCNC) events via the coupling of a top quark, a photon, and an up or charm quark is presented using 81 fb$^{-1}$ of proton-proton collision data taken at a centre-of-mass energy of 13 TeV with the ATLAS detector at the LHC. [...] arXiv:1908.08461 ; CERN-EP-2019-155. - 2019. - 34 p. Fulltext - Previous draft version - Fulltext
https://www.lmfdb.org/L/rational/8/280%5E4/1.1
## Results (22 matches)

| Label | $\alpha$ | $A$ | $d$ | $N$ | $\chi$ | $\nu$ | $w$ | prim | $r$ | First zero | Origin |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 8-280e4-1.1-c0e4-0-0 | $0.373$ | $0.000381$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $0.0, 0.0, 0.0, 0.0$ | $0$ | $1$ | $0$ | $1.19131$ | Modular form 280.1.c.a |
| 8-280e4-1.1-c1e4-0-0 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $0$ | $0.369327$ | Modular form 280.2.n.a |
| 8-280e4-1.1-c1e4-0-1 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $0$ | $0.520589$ | Modular form 280.2.bl.a |
| 8-280e4-1.1-c1e4-0-10 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $4$ | $1.80849$ | Modular form 280.2.bv.a |
| 8-280e4-1.1-c1e4-0-2 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $0$ | $0.525921$ | Modular form 280.2.bv.b |
| 8-280e4-1.1-c1e4-0-3 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $0$ | $0.678926$ | Modular form 280.2.bj.d |
| 8-280e4-1.1-c1e4-0-4 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $0$ | $0.706594$ | Modular form 280.2.bj.c |
| 8-280e4-1.1-c1e4-0-5 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $0$ | $1.06518$ | Modular form 280.2.bv.c |
| 8-280e4-1.1-c1e4-0-6 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $0$ | $1.07820$ | Modular form 280.2.bv.d |
| 8-280e4-1.1-c1e4-0-7 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $0$ | $1.08428$ | Modular form 280.2.q.d |
| 8-280e4-1.1-c1e4-0-8 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $4$ | $1.48247$ | Modular form 280.2.bj.b |
| 8-280e4-1.1-c1e4-0-9 | $1.49$ | $24.9$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $1.0, 1.0, 1.0, 1.0$ | $1$ | $1$ | $4$ | $1.68517$ | Modular form 280.2.bj.a |
| 8-280e4-1.1-c2e4-0-0 | $2.76$ | $3.38\times 10^{3}$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $2.0, 2.0, 2.0, 2.0$ | $2$ | $1$ | $0$ | $0.0273502$ | Modular form 280.3.bi.a |
| 8-280e4-1.1-c2e4-0-1 | $2.76$ | $3.38\times 10^{3}$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $2.0, 2.0, 2.0, 2.0$ | $2$ | $1$ | $0$ | $0.241217$ | Modular form 280.3.c.e |
| 8-280e4-1.1-c2e4-0-2 | $2.76$ | $3.38\times 10^{3}$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $2.0, 2.0, 2.0, 2.0$ | $2$ | $1$ | $0$ | $0.263552$ | Modular form 280.3.bi.b |
| 8-280e4-1.1-c2e4-0-3 | $2.76$ | $3.38\times 10^{3}$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $2.0, 2.0, 2.0, 2.0$ | $2$ | $1$ | $0$ | $0.426550$ | Modular form 280.3.be.a |
| 8-280e4-1.1-c2e4-0-4 | $2.76$ | $3.38\times 10^{3}$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $2.0, 2.0, 2.0, 2.0$ | $2$ | $1$ | $0$ | $0.557054$ | Modular form 280.3.c.f |
| 8-280e4-1.1-c3e4-0-0 | $4.06$ | $7.44\times 10^{4}$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $3.0, 3.0, 3.0, 3.0$ | $3$ | $1$ | $0$ | $0.394803$ | Modular form 280.4.bg.a |
| 8-280e4-1.1-c5e4-0-0 | $6.70$ | $4.06\times 10^{6}$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $5.0, 5.0, 5.0, 5.0$ | $5$ | $1$ | $4$ | $1.09797$ | Modular form 280.6.a.h |
| 8-280e4-1.1-c5e4-0-1 | $6.70$ | $4.06\times 10^{6}$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $5.0, 5.0, 5.0, 5.0$ | $5$ | $1$ | $4$ | $1.37505$ | Modular form 280.6.a.g |
| 8-280e4-1.1-c7e4-0-0 | $9.35$ | $5.85\times 10^{7}$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $7.0, 7.0, 7.0, 7.0$ | $7$ | $1$ | $4$ | $0.954482$ | Modular form 280.8.a.b |
| 8-280e4-1.1-c7e4-0-1 | $9.35$ | $5.85\times 10^{7}$ | $8$ | $2^{12} \cdot 5^{4} \cdot 7^{4}$ | 1.1 | $7.0, 7.0, 7.0, 7.0$ | $7$ | $1$ | $4$ | $1.16106$ | Modular form 280.8.a.a |
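Every row in these results shares the conductor $N = 2^{12} \cdot 5^{4} \cdot 7^{4}$, which is exactly $280^{4}$, matching the `8-280e4` label prefix (degree 8, conductor $280^4$). A one-line arithmetic sanity check of that factorization (my own check, independent of LMFDB):

```python
# 280 = 2^3 * 5 * 7, so 280^4 = 2^12 * 5^4 * 7^4
N = 2**12 * 5**4 * 7**4
assert N == 280**4
print(N)  # 6146560000
```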
http://tex.stackexchange.com/questions/13914/toc-numbering-problem/13915
# ToC numbering problem

My LaTeX document is acting strangely. Here is a simplified version of it:

```latex
\documentclass{article}
\begin{document}
\tableofcontents
\newpage
\addcontentsline{toc}{part}{A Part of My Document}
\include{includedfile}
\end{document}
```

And in includedfile.tex:

```latex
\section{My Section Title}
Quack.
```

Clearly, in the table of contents, the heading for the part should precede the one for the section, but it doesn't! What's wrong?

## migrated from stackoverflow.com Mar 20 '11 at 5:51

The delaying issue several people have mentioned is that TeX delays all \write commands until \shipout time. If for some reason you need an immediate \write, you can use \immediate\write. To that end, here's a simple new macro that acts like \addcontentsline but writes to the aux file immediately (the macro name \myaddcontentsline is just a placeholder):

```latex
\documentclass{article}
\newcommand{\myaddcontentsline}[3]{%
  \begingroup
  \let\origwrite\write
  \def\write{\immediate\origwrite}%
  \addcontentsline{#1}{#2}{#3}%
  \endgroup
}
\begin{document}
\tableofcontents
\newpage
\myaddcontentsline{toc}{part}{A Part of My Document}
\include{includedfile}
\end{document}
```

- This also solved a problem where \addcontentsline was not adding bookmark entries in the resulting PDF, unless there was additional text after the command. Bizarre. – jevon Oct 13 '11 at 2:07

This is a tricky little issue. It turns out that \include differs from \input in an important way; it doesn't just add a couple of \clearpages. I think the right solution is to make a custom \include command which functions almost like the usual one:

```latex
\newcommand{\myinclude}[1]{\clearpage\input{#1}\clearpage}
```

When you use \addcontentsline, directly or indirectly, it writes a line on the aux file saying "write this and that to the toc file". Then it reads the aux file and follows that instruction. When you run latex again, the toc file has the right stuff in it and you get a nice table of contents. But the TeX \write command has some sort of delay to it (that I don't understand). When you use \addcontentsline several times in a row, it doesn't matter because they all go on the write stack in the right order.
But here's the tricky part: when you use \include, it makes a separate aux file for the file you're including and immediately writes a command in the main aux file saying "go look at that other aux file for instructions" (with no weird delay). So if you use \include immediately after an \addcontentsline, the "go look at the other aux file" command gets written before the "write some stuff in the toc file" command. So all the contents entries from the included file get written first! - This is what I ended up using. Thanks! – Ben Alpert Aug 3 '09 at 0:37 It works for me when I replace \include by \input. I think \include is for chapters (it forces a \clearpage or something like that), so I never use it in practice. - I do actually want to have the included file on a separate page, though, and I'd also like to figure out why it doesn't work as written. – Ben Alpert Aug 1 '09 at 18:01 You can add an explicit \clearpage or \cleardoublepage. The only thing you lose is that \include is for partial compilation (i.e. with \includeonly). As for why it does not work as written, I have no idea… experience with LaTeX shows that sometimes you probably don't want to understand :) – Damien Pollet Aug 1 '09 at 18:08 What if you replace \addcontentsline{toc}{part}{A Part of My Document} with \part{A Part of My Document} - Agreed. The usual sectioning commands call \addcontentsline, so it is generally not necessary to call it explicitly yourself. @Ben Alpert: If you have a special reason for making the explicit call, it might help to describe it. – dmckee Aug 2 '09 at 4:59 @dmckee: The usual sectioning commands also produce other output. He probably doesn't want to have a separate page saying "Part I" that you flip through when you're reading. If you use \addcontentsline, the table of contents is the only place you'll get any output. 
– Anton Geraschenko Aug 2 '09 at 23:58

Try moving the \addcontentsline above the \tableofcontents. Updated: incorrect ordering occurs if \addcontentsline is on the same level as \include. A workaround is to have the \addcontentsline in the included file:

```latex
\documentclass{article}
\begin{document}
\tableofcontents
\newpage
\include{includedfile}
\include{some-other-file}
\end{document}
```

contents of includedfile.tex:

```latex
\addcontentsline{toc}{part}{First Part of My Document}
\section{My Section Title}
Quack.
```

- Except that I really want many different parts after the table of contents, with multiple included files for each. So that won't work. – Ben Alpert Aug 1 '09 at 18:05

Reading through some LaTeX documentation, there seems to be a problem using \addcontentsline at the same level as an \include statement. The solution is to move the \addcontentsline into the file loaded by \include. Not the cleanest solution, but it will allow you to have multiple parts correctly ordered in the table of contents. – indy Aug 1 '09 at 19:22

If you try the file

```latex
\documentclass{article}
\begin{document}
\tableofcontents
\newpage
\part{A Part of My Document}
\include{includedfile}
\end{document}
```

you may get a clue as to what is happening. The \addcontentsline instruction is normally invoked automatically by the document sectioning commands... If you do not want a heading number (starred form) but you do want an entry in the .toc file, you can use \addcontentsline with or without \numberline ... (Mittelbach and Goossens (2004), see below)

Hence, for example,

\documentclass{article}
\begin{document}
\tableofcontents
\newpage
\part*{A Part of My Document}
https://www.physicsforums.com/threads/pressure-with-gases.15869/
# Pressure with Gases

1. Mar 7, 2004

### Antepolleo

Here's the problem:

4. A diving bell in the shape of a cylinder with a height of 2.10 m is closed at the upper end and open at the lower end. The bell is lowered from air into sea water (ρ = 1.025 g/cm³). The air in the bell is initially at 16.0°C. The bell is lowered to a depth (measured to the bottom of the bell) of 47.0 fathoms or 86.0 m. At this depth the water temperature is 4.0°C, and the bell is in thermal equilibrium with the water. (a) How high does sea water rise in the bell? (b) To what minimum pressure must the air in the bell be raised to expel the water that entered?

My question is, how are you supposed to figure this out without knowing the diameter of the bell?

2. Mar 8, 2004

### Antepolleo

Hmm.. I tried an approach where I let the number of moles of the gas be constant, but it got me nowhere.

3. Mar 8, 2004

### Janitor

My approach would be this...

P(z) = ρgz + P_0

where P(z) is the pressure as a function of depth z below sea level, g is the acceleration of gravity, ρ (rho) is the density of water, and P_0 is the pressure at sea level (i.e. one atmosphere). You have been given ρ and z, and you can look up g and P_0, so you can solve for P(z).

Then if we make the assumption that none of the air in the cylinder dissolves in the water as the cylinder is lowered (number of moles is constant, as you say), we should have

P(z)V(z)/T(z) = P_0 V_0/T_0

where V(z) is the volume of air in the cylinder at depth z, T(z) is the absolute temperature at depth z, V_0 is the volume of the entire cylinder, and T_0 is the absolute temperature at sea level. You can solve this for V(z), since you know all the other quantities. The height to which the water rises is

h = L [1 - V(z)/V_0]

where L is the length of the cylinder.

Last edited: Mar 8, 2004
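Janitor's recipe can be checked numerically. The sketch below follows the post's formulas, assuming g = 9.81 m/s² and P_0 = 101325 Pa (one atmosphere), and, like the post, evaluates the pressure at the quoted 86.0 m depth:

```python
# Diving bell: how high does the sea water rise inside the bell?
g   = 9.81            # m/s^2, acceleration of gravity (assumed value)
P0  = 101325.0        # Pa, pressure at sea level (assumed 1 atm)
rho = 1025.0          # kg/m^3, sea water (1.025 g/cm^3)
z   = 86.0            # m, depth
L   = 2.10            # m, height of the cylindrical bell
T0  = 16.0 + 273.15   # K, air temperature at the surface
Tz  =  4.0 + 273.15   # K, water temperature at depth

# Hydrostatic pressure at depth z: P(z) = rho*g*z + P0
Pz = rho * g * z + P0

# Fixed amount of gas: P V / T is constant, so only the volume
# *ratio* V(z)/V0 is needed -- this is why the diameter never enters.
ratio = (P0 / Pz) * (Tz / T0)

# (a) height of water inside the bell: h = L * (1 - V(z)/V0)
h = L * (1.0 - ratio)
print(round(h, 2))        # 1.89 (metres)

# (b) to expel the water, the air must at least match the water
# pressure at the bottom of the bell, i.e. P(z) itself
print(round(Pz / 1e3, 1)) # 966.1 (kPa)
```

This is exactly why the diameter is irrelevant: the gas law only fixes the ratio of the trapped air volume to the full cylinder volume, and the cross-sectional area cancels out of that ratio.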
https://arxiv-export-lb.library.cornell.edu/abs/2205.06689?context=stat
stat

Title: Heavy-Tail Phenomenon in Decentralized SGD

Abstract: Recent theoretical studies have shown that heavy tails can emerge in stochastic optimization due to 'multiplicative noise', even under surprisingly simple settings, such as linear regression with Gaussian data. While these studies have uncovered several interesting phenomena, they consider conventional stochastic optimization problems, which exclude decentralized settings that naturally arise in modern machine learning applications. In this paper, we study the emergence of heavy tails in decentralized stochastic gradient descent (DE-SGD), and investigate the effect of decentralization on the tail behavior. We first show that, when the loss function at each computational node is twice continuously differentiable and strongly convex outside a compact region, the law of the DE-SGD iterates converges to a distribution with polynomially decaying (heavy) tails. To have more explicit control on the tail exponent, we then consider the case where the loss at each node is a quadratic, and show that the tail-index can be estimated as a function of the step-size, batch-size, and the topological properties of the network of the computational nodes. Then, we provide theoretical and empirical results showing that DE-SGD has heavier tails than centralized SGD. We also compare DE-SGD to disconnected SGD, where nodes distribute the data but do not communicate. Our theory uncovers an interesting interplay between the tails and the network structure: we identify two regimes of parameters (step-size and network size) where DE-SGD can have lighter or heavier tails than disconnected SGD, depending on the regime. Finally, to support our theoretical results, we provide numerical experiments conducted on both synthetic data and neural networks.
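The 'multiplicative noise' mechanism from the abstract can be seen in the simplest setting it mentions: SGD on one-dimensional linear regression with Gaussian data, where the iterates obey the random recurrence x_{k+1} = (1 - η a_k²) x_k + η a_k b_k. The toy simulation below is my own sketch, not the authors' code, and the step size and sample counts are arbitrary illustrative choices; it shows the stationary iterates developing far heavier tails than a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_iterates(eta, n_steps, burn_in=1000):
    """SGD on the 1-D least-squares loss (a*x - b)^2 / 2 with fresh
    Gaussian data each step: x <- x - eta * a * (a*x - b).  This is a
    random linear recurrence with multiplicative factor (1 - eta*a^2)."""
    x = 0.0
    out = np.empty(n_steps)
    for k in range(burn_in + n_steps):
        a = rng.standard_normal()
        b = rng.standard_normal()
        x -= eta * a * (a * x - b)
        if k >= burn_in:
            out[k - burn_in] = x
    return out

# For eta = 0.5: E(1 - eta*a^2)^2 = 0.75 < 1, so the chain is stable
# with finite variance, but E(1 - eta*a^2)^4 = 2.5625 > 1, so the
# stationary law has an infinite fourth moment (tail index below 4).
xs = sgd_iterates(eta=0.5, n_steps=100_000)

# Empirical kurtosis; a Gaussian has kurtosis 3.
kurt = np.mean((xs - xs.mean()) ** 4) / xs.var() ** 2
print(kurt > 3.0)  # True: much heavier tails than Gaussian
```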
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Optimization and Control (math.OC)

Cite as: arXiv:2205.06689 [stat.ML] (or arXiv:2205.06689v2 [stat.ML] for this version)

Submission history
From: Yuanhan Hu
[v1] Fri, 13 May 2022 14:47:04 GMT (2446kb,D)
[v2] Mon, 16 May 2022 14:31:35 GMT (2446kb,D)
https://www.physicsforums.com/threads/a-question-in-group-theory.557224/
# A question in group theory

1. Dec 5, 2011

### Ledger

This is not homework. Self-study. And I'm really enjoying it. But, as I'm going through this book ("A Book of Abstract Algebra" by Charles C. Pinter), every so often I run into a problem or concept I don't understand.

Let G be a finite abelian group, say G = (e, a1, a2, a3, ..., an). Prove that (a1*a2*...*an)^2 = e.

So, it has a finite number of elements and it's a group. So it's associative, has an identity element and an inverse as elements of G, and as it's abelian it's also commutative. But I don't see how squaring the product of its elements leads to the identity element e.

Wait. Writing this has me thinking that each element might be being 'multiplied' by its inverse, yielding e for every pair, which when all multiplied together still yields e, even when ultimately squared. Could that be the answer, even though I may not have stated it elegantly? There's no one I can ask so I brought it to this forum.

2. Dec 5, 2011

### spamiam

You're on the right track. But the product is squared for a reason. What if your group has elements of order 2? This won't cause problems, but it's necessary to consider it.

3. Dec 5, 2011

### Ledger

'If the group has elements of order 2' I don't really understand that. The terminology in this book I understand (so far) is if the group is of order 2 that means it is a finite group with two elements. Things are sometimes squared to get rid of a negative sign. But if the elements are numbers I would think that multiplying a negative number by its inverse (which would also be negative, so the outcome is 1) would take care of that. But perhaps not, so I'll go with squaring would knock a negative out of the final e. Is that it?

4. Dec 5, 2011

### micromass

What spamiam means is that there might be an element $a_i$ such that $a_i=a_i^{-1}$. In that case, your proof would not hold anymore. Indeed, its inverse does not occur in the list since it equals $a_i$.

5.
Dec 5, 2011

### Ledger

So there could be an element of G that equals its own inverse. So squaring the product ensures that this is reduced to e as well? Since they equal each other they should square to identity, I think. Is this it?

6. Dec 5, 2011

Yes!
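The pairing argument in the thread (inverse pairs cancel; self-inverse elements are exactly why the square is needed) can be spot-checked on concrete finite abelian groups, for example the multiplicative groups (Z/m)×. This brute-force check is my own sketch, not from the thread:

```python
from math import gcd

def units_mod(m):
    """The finite abelian group (Z/m)^x under multiplication mod m."""
    return [a for a in range(1, m) if gcd(a, m) == 1]

# (a1 * a2 * ... * an)^2 = e in every case: inverse pairs cancel in
# the product, and any leftover self-inverse elements (a = a^-1,
# i.e. a^2 = e) are handled by the final squaring.
for m in range(3, 50):
    G = units_mod(m)
    prod = 1
    for a in G:
        prod = (prod * a) % m
    assert (prod * prod) % m == 1  # 1 is the identity element
print("verified for (Z/m)^x, m = 3..49")
```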
https://ai.stackexchange.com/questions/118/how-can-fuzzy-logic-be-used-in-creating-ai/122
# How can fuzzy logic be used in creating AI?

Fuzzy logic is the logic where every statement can have any real truth value between 0 and 1. How can fuzzy logic be used in creating AI? Is it useful for certain decision problems involving multiple inputs? Can you give an example of an AI that uses it?

A classical example of fuzzy logic in an AI is the expert system Mycin. Fuzzy logic can be used to deal with probabilities and uncertainties. If one looks at, for example, predicate logic, then every statement is either true or false. In reality, we don't have this mathematical certainty. For example, let's say a physician (or expert system) sees a symptom that can be attributed to a few different diseases (say A, B and C). The physician will now attribute a higher likelihood to the possibility of the patient having any of these three diseases. There is no definite true or false statement, but there is a change of weights. This can be reflected in fuzzy logic, but not so easily in symbolic logic.

My impression is that fuzzy logic has mostly declined in relevance and probabilistic logic has taken over its niche. (See the comparison on Wikipedia.) The two are somewhat deeply related, and so it's mostly a change in perspective and language. That is, fuzzy logic mostly applies to labels which have uncertain ranges. An object that's cool but not too cool could be described as either cold or warm, and fuzzy logic handles this by assigning some fractional truth value to the 'cold' and 'warm' labels and no truth to the 'hot' label. Probabilistic logic focuses more on the probability of some fact given some observations, and is deeply focused on the uncertainty of observations. When we look at an email, we track our belief that the email is "spam" and shouldn't be shown to the user with some number, and adjust that number as we see evidence for and against it being spam.
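The cold/warm/hot example in the answer can be made concrete with membership functions. The triangular shapes and breakpoint temperatures below are invented for illustration (real fuzzy controllers tune these):

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical temperature labels (degrees Celsius)
def cold(t): return triangular(t, -10.0,  0.0, 15.0)
def warm(t): return triangular(t,   5.0, 17.5, 30.0)
def hot(t):  return triangular(t,  25.0, 35.0, 45.0)

t = 10.0  # an object that's cool but not too cool
# Partly cold AND partly warm, not at all hot:
print(cold(t), warm(t), hot(t))  # 0.333... 0.4 0.0
```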
• Probabilistic logic is a progressive term that is difficult to distinguish from the classical meaning of fuzzy logic. Both Drools and Prolog are in use in business and industrial fuzzy logic control. – han_nah_han_ Dec 11 '18 at 12:11
https://brilliant.org/discussions/thread/interesting-construction-problems/
# Interesting construction problems

Here are some interesting construction problems that I stumbled upon. In the following problems, constructions can only be done using an unmarked straightedge and compass. I came across the first problem accidentally while watching my friend fold origami. However, it might have been posed long ago by some mathematician. The third problem is easier than the others.

1. Given a line segment of length $l$ and any positive integer $n$. Show that, using straightedge and compass, it is always possible to divide the segment into $n$ equal segments, no matter what $n$ is.
2. Given a line segment of length $\sqrt{2}+\sqrt{3}+\sqrt{5}$, is it possible to construct a line segment of length $1$?
3. Given triangle ABC, construct the circumcircle and the incircle.
4. Given the perpendicular from A and the two medians from A, B onto BC, AC respectively, reconstruct triangle ABC.

Note by Joel Tan 6 years, 6 months ago
Sort by:

2: Follows directly from the definition of a constructible number. Constructively, show that $\sqrt{n}$ is always constructible when $n \in \mathbb{Z}^{+}$ and use the compass equivalence theorem to add them up.

3: Let $O$ be the circumcenter of $\Delta ABC$. It's pretty obvious that $\Delta AOB$ is isosceles, and that the angle bisector of $\angle AOB$ is the perpendicular bisector of $AB$; hence, by a similar argument for the other sides, $O$ is the intersection of the perpendicular bisectors of the sides of $\Delta ABC$. Circumcircle construction is just a corollary.

Let $O^{\prime}$ be the incenter of $\Delta ABC$. Let the shortest distance from the incenter to $AB$ intersect the latter at $X$; do similarly for $AC$, with the intersection $Y$. $O^{\prime}X \equiv O^{\prime}Y$ (radii), and $AX \equiv AY$ (convergent tangents). It's easy to see the angle bisector of $\angle ABC$ passes through the incenter; thus, by a similar argument for the other angles, $O^{\prime}$ is the intersection of the angle bisectors of the interior angles of $\Delta ABC$. Again, the incircle is just a corollary (albeit a slightly more complicated one).

- 6 years, 2 months ago

I'm new to compass and straightedge, so I'm sorry if I used any theorems incorrectly.
As for the incircle corollary, my method is to pick any side of $\Delta ABC$ and construct any circle with center $O^{\prime}$ that intersects that side. The midpoint of the two intersections on the side will be a point on the circumference of the incircle.

- 6 years, 2 months ago
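Problem 1 is the classic intercept-theorem construction: draw an auxiliary ray from one endpoint of the segment, step off $n$ equal compass lengths along it, join the last mark to the other endpoint, and draw parallels to that join through the remaining marks. The coordinate check below is my own sketch (exact rational arithmetic, with coordinates chosen for convenience):

```python
from fractions import Fraction as F

def divide_segment(n):
    """Check the intercept-theorem division of the segment (0,0)-(1,0)
    into n equal parts, using the auxiliary ray through (0,0) with
    direction (1,1) and equally spaced marks at k*(1,1)."""
    B = (F(1), F(0))
    marks = [(F(k), F(k)) for k in range(1, n + 1)]
    last = marks[-1]                       # the last mark is joined to B
    dx, dy = B[0] - last[0], B[1] - last[1]
    pts = []
    for (mx, my) in marks[:-1]:
        # line through this mark parallel to (last -> B):
        # (mx, my) + t*(dx, dy); it meets the segment (y = 0) at my + t*dy = 0
        t = -my / dy
        pts.append(mx + t * dx)
    return pts

for n in range(2, 8):
    assert divide_segment(n) == [F(k, n) for k in range(1, n)]
print("the parallels cut the segment at exactly k/n for k = 1..n-1")
```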
https://quizplus.com/quiz/150410-quiz-14-chi-square-tests
## Quiz 16: Chi-Square Tests

1. To compute a pooled sample proportion, each of the sample proportions is weighted by the size of the population from which the sample was selected. (True or False; answer: False)
2. The degrees of freedom in a test of independence involving a contingency table with 10 rows and 11 columns are 90. (True or False; answer: True)
3. One hundred people were sampled from each of three populations and asked a question. The responses are shown in the table. If the three populations represented here contain the same proportion of yes responses, we would have expected to see 23.33 yes responses in each sample. (True or False; answer: True)
4. The degrees of freedom for a test of independence involving a contingency table with 10 rows and 8 columns are 63. (True or False)
5. In a chi-square calculation involving 6 independent terms (that is, with df = 6), there is a 5% probability that the result will be less than 1.635. (True or False)
6. Two hundred items were sampled from each of two recent shipments. The results are shown in the table. If the two shipments contain the same proportion of defective items, we would have expected to see 14 defective items in each sample. (True or False)
7. The table-based approach to testing for differences in population proportions yields more accurate results than the squared standardized normal random variable approach. (True or False)
8. Samples of equal size have been selected from each of three populations, producing sample proportions of 0.4, 0.3, and 0.5. To compute a pooled sample proportion, we can compute the simple average of the three sample proportions. (True or False)
9. Samples of equal size have been selected from each of three populations, producing sample proportions of 0.4, 0.3, and 0.5. To compute a pooled sample proportion, each of the sample proportions is weighted by the size of the population from which the sample was selected. (True or False)
10. The degrees of freedom for a test of independence involving a contingency table with 12 rows and 12 columns are 144. (True or False)
11. In a chi-square distribution, the chi-square statistic is the sum of squared standardized normal random variables, with degrees of freedom equal to the number of independent terms included in the sum. (True or False)
12. In a chi-square test of proportion differences, we will reject the "all proportions are equal" null hypothesis if the p-value for the chi-square statistic is less than the significance level for the test. (True or False)
13. Samples of equal size have been selected from each of three populations, producing sample proportions of 0.5, 0.7, and 0.6. The pooled sample proportion would be 0.65. (True or False)
14. The degrees of freedom in a goodness of fit test for a multinomial distribution with 5 categories are 4. (True or False)
15. In a chi-square calculation involving 5 independent terms (that is, with df = 5), there is a 5% probability that the result will be greater than 16.750. (True or False)
16. The degrees of freedom in a goodness of fit test for a multinomial distribution with 5 categories are (5 − 2) = 3. (True or False)
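Several of the statements above turn on two small formulas: the degrees of freedom for a test of independence, df = (rows − 1)(columns − 1), and the pooled sample proportion, which weights each sample proportion by its sample size (so with equal sample sizes it reduces to a simple average). A short Python sketch:

```python
def df_independence(rows, cols):
    """Degrees of freedom for a chi-square test of independence."""
    return (rows - 1) * (cols - 1)

def pooled_proportion(proportions, sample_sizes):
    """Pooled sample proportion: each proportion weighted by its
    sample size (not by the population size)."""
    total = sum(sample_sizes)
    return sum(p * n for p, n in zip(proportions, sample_sizes)) / total

# 10 rows x 11 columns -> (10-1)*(11-1) = 90 degrees of freedom.
# With equal sample sizes, pooling 0.5, 0.7, 0.6 gives the simple
# average 0.6 (not 0.65), which settles two of the questions above.
```
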
https://reference.wolframcloud.com/language/ref/SymmetricMatrixQ.html
# SymmetricMatrixQ

SymmetricMatrixQ[m] gives True if m is explicitly symmetric, and False otherwise.

# Details and Options

• A matrix m is symmetric if m==Transpose[m].
• SymmetricMatrixQ works for symbolic as well as numerical matrices.
• The following options can be given:
• SameTest (default Automatic): function to test equality of expressions
• Tolerance (default Automatic): tolerance for approximate numbers
• For exact and symbolic matrices, the option SameTest->f indicates that two entries mij and mkl are taken to be equal if f[mij,mkl] gives True.
• For approximate matrices, the option Tolerance->t can be used to indicate that all entries with Abs[mij]<=t are taken to be zero.
• For matrix entries with Abs[mij]>t, equality comparison is done except for the last few bits, at a scale set by $MachineEpsilon for MachinePrecision matrices and by the corresponding epsilon for matrices of lower Precision.

# Examples

## Basic Examples (2)

Test if a 2×2 numeric matrix is explicitly symmetric: Test if a 3×3 symbolic matrix is explicitly symmetric:

## Scope (10)

### Basic Uses (6)

Test if a real machine-precision matrix is symmetric: A real symmetric matrix is also Hermitian: Test if a complex matrix is symmetric: A complex symmetric matrix has symmetric real and imaginary parts: Test if an exact matrix is symmetric: Make the matrix symmetric: Use SymmetricMatrixQ with an arbitrary-precision matrix: A random matrix is typically not symmetric: Use SymmetricMatrixQ with a symbolic matrix: The matrix becomes symmetric when : SymmetricMatrixQ works efficiently with large numerical matrices:

### Special Matrices (4)

Use SymmetricMatrixQ with sparse matrices: Use SymmetricMatrixQ with structured matrices: Use with a QuantityArray structured matrix: The identity matrix is symmetric: HilbertMatrix is symmetric:

## Options (2)

### SameTest (1)

This matrix is symmetric for a positive real , but SymmetricMatrixQ gives False: Use the option SameTest to get the correct answer:

### Tolerance (1)

Generate a real-valued symmetric matrix with some random perturbation of
order 10^-14: Adjust the option Tolerance to accept this matrix as symmetric: The norm of the difference between the matrix and its transpose:

## Applications (13)

### Generating Symmetric Matrices (4)

Any matrix generated from a symmetric function is symmetric: The function is symmetric: Using Table generates a symmetric matrix: SymmetrizedArray can generate matrices (and general arrays) with symmetries: Convert back to an ordinary matrix using Normal: Check that matrices drawn from GaussianOrthogonalMatrixDistribution are symmetric: Matrices drawn from CircularOrthogonalMatrixDistribution are symmetric and unitary: Every Jordan matrix is similar to a symmetric matrix. Since any square matrix is similar to its Jordan form, this means that any square matrix is similar to a symmetric matrix. Define a function for generating a Jordan block for eigenvalue : For example, here is the Jordan matrix of dimension 4 for the eigenvalue : Define a function for generating a corresponding complex similarity transformation: The matrix is a sum of times the identity matrix and times the backward identity matrix: Then is symmetric, which shows that the Jordan matrix is similar to a symmetric matrix: Confirm the matrix is symmetric:

### Examples of Symmetric Matrices (5)

The Hessian matrix of a function is symmetric: Many special matrices are symmetric, including FourierMatrix: And HilbertMatrix: Visualize the matrix types: Many filter kernel matrices are symmetric, including DiskMatrix: Visualize the matrices: AdjacencyMatrix of an undirected graph is symmetric: As is KirchhoffMatrix: Visualize adjacency and Kirchhoff matrices for different graphs: Several statistical measures are symmetric matrices, including Covariance:

### Uses of Symmetric Matrices (4)

A positive-definite, real symmetric matrix or metric defines an inner product by : Verify that is in fact symmetric and positive definite: Orthogonalize the standard basis of to find an orthonormal basis: Confirm that this basis is
orthonormal with respect to the inner product : The moment of inertia tensor I is the equivalent of mass for rotational motion. For example, kinetic energy is (1/2) ω.I.ω, with the tensor I taking the place of the mass m and the angular velocity ω taking the place of the linear velocity v in the formula (1/2) m v^2. I can be represented by a positive-definite symmetric matrix. Compute the moment of inertia for a tetrahedron with endpoints at the origin and positive coordinate axes: Verify that the matrix is symmetric: Compute the kinetic energy if its angular velocity is : The kinetic energy is positive as long as ω is nonzero, showing the matrix was positive definite: Determine if a sparse matrix is structurally symmetric: The matrix is not symmetric: But it is structurally symmetric: Use a different method for symmetric matrices, with failover to a general method: Construct real-valued matrices for testing: For a non-symmetric matrix m, the function myLS just uses Gaussian elimination: For a symmetric indefinite matrix ms, try Cholesky and continue with Gaussian elimination: For a symmetric positive-definite matrix mpd, try Cholesky, which succeeds:

## Properties & Relations (13)

SymmetricMatrixQ[x] trivially returns False for any x that is not a matrix: A matrix is symmetric if m==Transpose[m]: A real-valued symmetric matrix is Hermitian: But a complex-valued symmetric matrix may not be: Use Symmetrize to compute the symmetric part of a matrix: This equals the average of m and Transpose[m]: Any matrix can be represented as the sum of its symmetric and antisymmetric parts: Use AntisymmetricMatrixQ to test whether a matrix is antisymmetric: If m is a symmetric matrix with real entries, then I m is antihermitian: MatrixExp[I m] for real symmetric m is unitary: A real-valued symmetric matrix is always a normal matrix: A complex-valued symmetric matrix need not be normal: Real-valued symmetric matrices have all real eigenvalues: Use Eigenvalues to find eigenvalues: Note that a complex-valued symmetric matrix may have both real and complex
eigenvalues: The characteristic polynomial of a real symmetric m can be factored into linear terms: Real-valued symmetric matrices have a complete set of eigenvectors: As a consequence, they must be diagonalizable: Use Eigenvectors to find eigenvectors: Note that a complex-valued symmetric matrix need not have these properties: The inverse of a symmetric matrix is symmetric: Matrix functions of symmetric matrices are symmetric, including MatrixPower: And any univariate function representable using MatrixFunction:

## Possible Issues (1)

SymmetricMatrixQ uses the definition m==Transpose[m] for both real- and complex-valued matrices: These complex matrices need not be normal or possess many properties of self-adjoint (real symmetric) matrices: HermitianMatrixQ tests the condition for self-adjoint matrices: Alternatively, test if the entries are real to restrict to real symmetric matrices:

## Neat Examples (1)

Images of symmetric matrices including FourierMatrix:

#### Text

Wolfram Research (2008), SymmetricMatrixQ, Wolfram Language function, https://reference.wolfram.com/language/ref/SymmetricMatrixQ.html (updated 2014).

#### CMS

Wolfram Language. 2008. "SymmetricMatrixQ." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2014. https://reference.wolfram.com/language/ref/SymmetricMatrixQ.html.

#### APA

Wolfram Language. (2008). SymmetricMatrixQ. Wolfram Language & System Documentation Center.
Retrieved from https://reference.wolfram.com/language/ref/SymmetricMatrixQ.html #### BibTeX @misc{reference.wolfram_2022_symmetricmatrixq, author="Wolfram Research", title="{SymmetricMatrixQ}", year="2014", howpublished="\url{https://reference.wolfram.com/language/ref/SymmetricMatrixQ.html}", note=[Accessed: 30-January-2023 ]} #### BibLaTeX @online{reference.wolfram_2022_symmetricmatrixq, organization={Wolfram Research}, title={SymmetricMatrixQ}, year={2014}, url={https://reference.wolfram.com/language/ref/SymmetricMatrixQ.html}, note=[Accessed: 30-January-2023 ]}
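For readers working outside the Wolfram Language, the documented behavior (compare m with Transpose[m], optionally zeroing entries below a tolerance first) can be imitated in a few lines. This is a rough NumPy sketch of the idea, not the Wolfram implementation, and `symmetric_matrix_q` is a made-up name:

```python
import numpy as np

def symmetric_matrix_q(m, tolerance=0.0):
    """Crude analogue of SymmetricMatrixQ for numeric arrays: entries
    with absolute value <= tolerance are treated as zero, then the
    matrix is compared entrywise with its transpose."""
    m = np.asarray(m)
    if m.ndim != 2 or m.shape[0] != m.shape[1]:
        return False
    z = np.where(np.abs(m) <= tolerance, 0.0, m)
    return bool(np.array_equal(z, z.T))
```

As in the Tolerance example above, a matrix that is symmetric up to a tiny perturbation is rejected by the strict comparison but accepted once the tolerance covers the perturbation.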
https://probabilityexam.wordpress.com/tag/soa-exam-p/
Exam P Practice Problem 104 – two random insurance losses

Problem 104-A

Two random losses $X$ and $Y$ are jointly modeled by the following density function:

$\displaystyle f(x,y)=\frac{1}{32} \ (4-x) \ (4-y) \ \ \ \ \ \ 0<y<x<4$

Suppose that both of these losses had occurred. Given that $X$ is exactly 2, what is the probability that $Y$ is less than 1?

$\displaystyle (A) \ \ \ \ \frac{7}{24} \ \ \ \ \ (B) \ \ \ \ \frac{11}{24} \ \ \ \ \ (C) \ \ \ \ \frac{12}{24} \ \ \ \ \ (D) \ \ \ \ \frac{13}{24} \ \ \ \ \ (E) \ \ \ \ \frac{14}{24}$

Problem 104-B

Two random losses $X$ and $Y$ are jointly modeled by the following density function:

$\displaystyle f(x,y)=\frac{1}{96} \ (x+2y) \ \ \ \ \ \ 0<x<4, \ 0<y<4$

Suppose that both of these losses had occurred. Determine the probability that $Y$ exceeds 2 given that the loss $X$ is known to be 2.

$\displaystyle (A) \ \ \ \ \frac{13}{36} \ \ \ \ \ (B) \ \ \ \ \frac{24}{36} \ \ \ \ \ (C) \ \ \ \ \frac{26}{36} \ \ \ \ \ (D) \ \ \ \ \frac{28}{36} \ \ \ \ \ (E) \ \ \ \ \frac{29}{36}$

probability exam P actuarial exam math Daniel Ma mathematics dan ma actuarial science Daniel Ma actuarial

$\copyright$ 2018 – Dan Ma

Exam P Practice Problem 103 – randomly selected auto collision claims

Problem 103-A

The size of an auto collision claim follows a distribution that has density function $f(x)=2(1-x)$ where $0<x<1$. Two randomly selected claims are examined. Compute the probability that one claim is at least twice as large as the other.
$\displaystyle (A) \ \ \ \ \frac{10}{36} \ \ \ \ \ (B) \ \ \ \ \frac{15}{36} \ \ \ \ \ (C) \ \ \ \ \frac{20}{36} \ \ \ \ \ (D) \ \ \ \ \frac{21}{36} \ \ \ \ \ (E) \ \ \ \ \frac{23}{36}$

Problem 103-B

Auto collision claims follow an exponential distribution with mean 2. For two randomly selected auto collision claims, compute the probability that the larger claim is more than four times the size of the smaller claim.

$\displaystyle (A) \ \ \ \ 0.2 \ \ \ \ \ (B) \ \ \ \ 0.3 \ \ \ \ \ (C) \ \ \ \ 0.4 \ \ \ \ \ (D) \ \ \ \ 0.5 \ \ \ \ \ (E) \ \ \ \ 0.6$

$\copyright$ 2018 – Dan Ma

Exam P Practice Problem 102 – estimating claim costs

Problem 102-A

Insurance claims are modeled by a distribution with the following cumulative distribution function.

$\displaystyle F(x) = \left\{ \begin{array}{ll} \displaystyle 0 &\ \ \ \ \ \ x \le 0 \\ \text{ } & \text{ } \\ \displaystyle \frac{1}{1536} \ x^4 &\ \ \ \ \ \ 0 < x \le 4 \\ \text{ } & \text{ } \\ \displaystyle 1-\frac{2}{3} x+\frac{1}{8} x^2- \frac{1}{1536} \ x^4 &\ \ \ \ \ \ 4 < x \le 8 \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ x > 8 \\ \end{array} \right.$

The insurance company is performing a study on all claims that exceed 3. Determine the mean of all claims being studied.
$\displaystyle (A) \ \ \ \ 4.8 \ \ \ \ \ (B) \ \ \ \ 4.9 \ \ \ \ \ (C) \ \ \ \ 5.0 \ \ \ \ \ (D) \ \ \ \ 5.1 \ \ \ \ \ (E) \ \ \ \ 5.2$

Problem 102-B

Insurance claims are modeled by a distribution with the following cumulative distribution function.

$\displaystyle F(x) = \left\{ \begin{array}{ll} \displaystyle 0 &\ \ \ \ \ \ x \le 0 \\ \text{ } & \text{ } \\ \displaystyle \frac{1}{50} \ x^2 &\ \ \ \ \ \ 0 < x \le 5 \\ \text{ } & \text{ } \\ \displaystyle -\frac{1}{50} x^2+\frac{2}{5} x- 1 &\ \ \ \ \ \ 5 < x \le 10 \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ x > 10 \\ \end{array} \right.$

The insurance company is performing a study on all claims that exceed 4. Determine the mean of all claims being studied.

$\displaystyle (A) \ \ \ \ 5.9 \ \ \ \ \ (B) \ \ \ \ 6.0 \ \ \ \ \ (C) \ \ \ \ 6.1 \ \ \ \ \ (D) \ \ \ \ 6.2 \ \ \ \ \ (E) \ \ \ \ 6.3$

$\copyright$ 2018 – Dan Ma

Exam P Practice Problem 101 – auto collision claims

Problem 101-A

The amount paid on an auto collision claim by an insurance company follows a distribution with the following density function.

$\displaystyle f(x) = \left\{ \begin{array}{ll} \displaystyle \frac{1}{96} \ x^3 \ e^{-x/2} &\ \ \ \ \ \ x > 0 \\ \text{ } & \text{ } \\ \displaystyle 0 &\ \ \ \ \ \ \text{otherwise} \\ \end{array} \right.$

The insurance company paid 64 claims in a certain month. Determine the approximate probability that the average amount paid is between 7.36 and 8.84.
$\displaystyle (A) \ \ \ \ 0.8320 \ \ \ \ \ (B) \ \ \ \ 0.8376 \ \ \ \ \ (C) \ \ \ \ 0.8435 \ \ \ \ \ (D) \ \ \ \ 0.8532 \ \ \ \ \ (E) \ \ \ \ 0.8692$

Problem 101-B

The amount paid on an auto collision claim by an insurance company follows a distribution with the following density function.

$\displaystyle f(x) = \left\{ \begin{array}{ll} \displaystyle \frac{1}{1536} \ x^3 \ e^{-x/4} &\ \ \ \ \ \ x > 0 \\ \text{ } & \text{ } \\ \displaystyle 0 &\ \ \ \ \ \ \text{otherwise} \\ \end{array} \right.$

The insurance company paid 36 claims in a certain month. Determine the approximate 25th percentile for the average claims paid in that month.

$\displaystyle (A) \ \ \ \ 15.11 \ \ \ \ \ (B) \ \ \ \ 15.43 \ \ \ \ \ (C) \ \ \ \ 15.75 \ \ \ \ \ (D) \ \ \ \ 16.25 \ \ \ \ \ (E) \ \ \ \ 16.78$

$\copyright$ 2017 – Dan Ma

Exam P Practice Problem 100 – find the variance of loss in profit

Problem 100-A

The monthly amount of time $X$ (in hours) during which a manufacturing plant is inoperative due to equipment failures or power outage follows approximately a distribution with the following moment generating function.

$\displaystyle M(t)=\biggl( \frac{1}{1-7.5 \ t} \biggr)^2$

The amount of loss in profit due to the plant being inoperative is given by $Y=12 X + 1.25 X^2$. Determine the variance of the loss in profit.
$\displaystyle (A) \ \ \ \ \text{279,927.20} \ \ \ \ \ (B) \ \ \ \ \text{279,608.20} \ \ \ \ \ (C) \ \ \ \ \text{475,693.76} \ \ \ \ \ (D) \ \ \ \ \text{583,358.20} \ \ \ \ \ (E) \ \ \ \ \text{601,769.56}$

Problem 100-B

The weekly amount of time $X$ (in hours) that a manufacturing plant is down (due to maintenance or repairs) has an exponential distribution with mean 8.5 hours. The cost of the downtime, due to lost production and maintenance and repair costs, is modeled by $Y=15+5 X+1.2 X^2$. Determine the variance of the cost of the downtime.

$\displaystyle (A) \ \ \ \ \text{130,928.05} \ \ \ \ \ (B) \ \ \ \ \text{149,368.45} \ \ \ \ \ (C) \ \ \ \ \text{181,622.05} \ \ \ \ \ (D) \ \ \ \ \text{188,637.67} \ \ \ \ \ (E) \ \ \ \ \text{195,369.15}$

$\copyright$ 2017 – Dan Ma

Exam P Practice Problem 99 – When Random Loss is Doubled

Problem 99-A

A business owner faces a risk whose economic loss amount $X$ follows a uniform distribution over the interval $0<x<1$. In the next year, the loss amount is expected to be doubled and is expected to be modeled by the random variable $Y=2X$. Suppose that the business owner purchases an insurance policy effective at the beginning of next year with the provision that any loss amount less than or equal to 0.5 is the responsibility of the business owner and any loss amount that is greater than 0.5 is paid by the insurer in full. When a loss occurs next year, determine the expected payment made by the insurer to the business owner.
$\displaystyle (A) \ \ \ \ \frac{8}{16} \ \ \ \ \ (B) \ \ \ \ \frac{9}{16} \ \ \ \ \ (C) \ \ \ \ \frac{13}{16} \ \ \ \ \ (D) \ \ \ \ \frac{15}{16} \ \ \ \ \ (E) \ \ \ \ \frac{17}{16}$

Problem 99-B

A business owner faces a risk whose economic loss amount $X$ has the following density function:

$\displaystyle f(x)=\frac{x}{2} \ \ \ \ \ \ 0<x<2$

In the next year, the loss amount is expected to be doubled and is expected to be modeled by the random variable $Y=2X$. Suppose that the business owner purchases an insurance policy effective at the beginning of next year with the provision that any loss amount less than or equal to 1 is the responsibility of the business owner and any loss amount that is greater than 1 is paid by the insurer in full. When a loss occurs next year, what is the expected payment made by the insurer to the business owner?

$\displaystyle (A) \ \ \ \ 0.6667 \ \ \ \ \ (B) \ \ \ \ 1.5833 \ \ \ \ \ (C) \ \ \ \ 1.6875 \ \ \ \ \ (D) \ \ \ \ 1.7500 \ \ \ \ \ (E) \ \ \ \ 2.6250$

$\copyright$ 2017 – Dan Ma

Exam P Practice Problem 98 – flipping coins

Problem 98-A

Coin 1 is an unbiased coin, i.e. when flipping the coin, the probability of getting a head is 0.5. Coin 2 is a biased coin such that when flipping the coin, the probability of getting a head is 0.6. One of the coins is chosen at random. Then the chosen coin is tossed repeatedly until a head is obtained. Suppose that the first head is observed in the fifth toss. Determine the probability that the chosen coin is Coin 2.
$\displaystyle (A) \ \ \ \ 0.2856 \ \ \ \ \ (B) \ \ \ \ 0.3060 \ \ \ \ \ (C) \ \ \ \ 0.3295 \ \ \ \ \ (D) \ \ \ \ 0.3564 \ \ \ \ \ (E) \ \ \ \ 0.3690$

Problem 98-B

Box 1 contains 3 red balls and 1 white ball while Box 2 contains 2 red balls and 2 white balls. The two boxes are identical in appearance. One of the boxes is chosen at random. A ball is sampled from the chosen box with replacement until a white ball is obtained. Determine the probability that the chosen box is Box 1 if the first white ball is observed on the 6th draw.

$\displaystyle (A) \ \ \ \ 0.7530 \ \ \ \ \ (B) \ \ \ \ 0.7632 \ \ \ \ \ (C) \ \ \ \ 0.7825 \ \ \ \ \ (D) \ \ \ \ 0.7863 \ \ \ \ \ (E) \ \ \ \ 0.7915$

$\copyright$ 2017 – Dan Ma

Exam P Practice Problem 97 – Variance of Claim Sizes

Problem 97-A

For a type of insurance policies, the following is the probability that the size of a claim is greater than $x$.

$\displaystyle P(X>x) = \left\{ \begin{array}{ll} \displaystyle 1 &\ \ \ \ \ \ x \le 0 \\ \text{ } & \text{ } \\ \displaystyle \biggl(1-\frac{x}{10} \biggr)^6 &\ \ \ \ \ \ 0 < x \le 10 \\ \text{ } & \text{ } \\ \displaystyle 0 &\ \ \ \ \ \ x > 10 \\ \end{array} \right.$

Calculate the variance of the claim size for this type of insurance policies.
$\displaystyle (A) \ \ \ \ \frac{10}{7} \ \ \ \ \ (B) \ \ \ \ \frac{75}{49} \ \ \ \ \ (C) \ \ \ \ \frac{95}{49} \ \ \ \ \ (D) \ \ \ \ \frac{15}{7} \ \ \ \ \ (E) \ \ \ \ \frac{25}{7}$

Problem 97-B

For a type of insurance policies, the following is the probability that the size of a claim is greater than $x$.

$\displaystyle P(X>x) = \left\{ \begin{array}{ll} \displaystyle 1 &\ \ \ \ \ \ x \le 0 \\ \text{ } & \text{ } \\ \displaystyle \biggl(\frac{250}{x+250} \biggr)^{2.25} &\ \ \ \ \ \ x>0 \\ \end{array} \right.$

Calculate the expected claim size for this type of insurance policies.

$\displaystyle (A) \ \ \ \ 200.00 \ \ \ \ \ (B) \ \ \ \ 203.75 \ \ \ \ \ (C) \ \ \ \ 207.67 \ \ \ \ \ (D) \ \ \ \ 217.32 \ \ \ \ \ (E) \ \ \ \ 232.74$

$\copyright$ 2017 – Dan Ma

Exam P Practice Problem 96 – Expected Insurance Payment

Problem 96-A

An insurance policy is purchased to cover a random loss subject to a deductible of 1. The cumulative distribution function of the loss amount $X$ is:

$\displaystyle F(x) = \left\{ \begin{array}{ll} \displaystyle 0 &\ \ \ \ \ \ x<0 \\ \text{ } & \text{ } \\ \displaystyle \frac{3}{25} \ x^2 - \frac{2}{125} \ x^3 &\ \ \ \ \ \ 0 \le x<5 \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ 5 \le x \\ \end{array} \right.$

Given a random loss $X$, determine the expected payment made under this insurance policy.
$\displaystyle (A) \ \ \ \ 0.50 \ \ \ \ \ (B) \ \ \ \ 1.54 \ \ \ \ \ (C) \ \ \ \ 1.72 \ \ \ \ \ (D) \ \ \ \ 4.63 \ \ \ \ \ (E) \ \ \ \ 6.26$

Problem 96-B

An insurance policy is purchased to cover a random loss subject to a deductible of 2. The density function of the loss amount $X$ is:

$\displaystyle f(x) = \left\{ \begin{array}{ll} \displaystyle \frac{3}{8} \biggl(1- \frac{1}{4} \ x + \frac{1}{64} \ x^2 \biggr) &\ \ \ \ \ \ 0 < x < 8 \\ \text{ } & \text{ } \\ \displaystyle 0 &\ \ \ \ \ \ \text{otherwise} \\ \end{array} \right.$

Given a random loss $X$, what is the expected benefit paid by this insurance policy?

$\displaystyle (A) \ \ \ \ 0.51 \ \ \ \ \ (B) \ \ \ \ 0.57 \ \ \ \ \ (C) \ \ \ \ 0.63 \ \ \ \ \ (D) \ \ \ \ 1.60 \ \ \ \ \ (E) \ \ \ \ 2.00$

$\copyright \ 2016 - \text{Dan Ma}$

Exam P Practice Problem 95 – Measuring Dispersion

Problem 95-A

The lifetime (in years) of a machine for a manufacturing plant is modeled by the random variable $X$. The following is the density function of $X$.

$\displaystyle f(x) = \left\{ \begin{array}{ll} \displaystyle \frac{3}{2500} \ (100x-20x^2+ x^3) &\ \ \ \ \ \ 0 < x < 10 \\ \text{ } & \text{ } \\ \displaystyle 0 &\ \ \ \ \ \ \text{otherwise} \\ \end{array} \right.$

Calculate the standard deviation of the lifetime of such a machine.
$\displaystyle (A) \ \ \ \ 2.0 \ \ \ \ \ (B) \ \ \ \ 2.7 \ \ \ \ \ (C) \ \ \ \ 3.0 \ \ \ \ \ (D) \ \ \ \ 4.0 \ \ \ \ \ (E) \ \ \ \ 4.9$

Problem 95-B

The travel time to work (in minutes) for an office worker has the following density function.

$\displaystyle f(x) = \left\{ \begin{array}{ll} \displaystyle \frac{3}{1000} \ (50-5x+\frac{1}{8} \ x^2) &\ \ \ \ \ \ 0 < x < 20 \\ \text{ } & \text{ } \\ \displaystyle 0 &\ \ \ \ \ \ \text{otherwise} \\ \end{array} \right.$

Calculate the variance of the travel time to work for this office worker.

$\displaystyle (A) \ \ \ \ 3.87 \ \ \ \ \ (B) \ \ \ \ 5.00 \ \ \ \ \ (C) \ \ \ \ 6.50 \ \ \ \ \ (D) \ \ \ \ 8.75 \ \ \ \ \ (E) \ \ \ \ 15.00$

$\copyright \ 2016 \ \ \text{Dan Ma}$
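Many of these problems can be spot-checked by simulation. As one illustration, the following Python sketch estimates the answer to Problem 103-B: for two independent exponential claims, the probability that the larger exceeds four times the smaller is 2 P(X > 4Y) = 2/5 = 0.4 (it does not depend on the mean), matching choice (C).

```python
import random

def estimate_103b(trials=200_000, mean=2.0, seed=2018):
    """Monte Carlo estimate of P(max(X,Y) > 4*min(X,Y)) for i.i.d.
    exponential claims X, Y with the given mean."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.expovariate(1.0 / mean)
        y = rng.expovariate(1.0 / mean)
        if max(x, y) > 4.0 * min(x, y):
            hits += 1
    return hits / trials
```

With 200,000 trials the standard error is about 0.001, so the estimate should land close to the exact value 0.4.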
https://maths.anu.edu.au/study/student-projects/computational-problems
# Computational problems

The problem of numerically computing eigenvalues and eigenfunctions of the Laplacian, with Dirichlet (zero) boundary conditions, on a plane domain is computationally intensive, and there is a lot of theory behind finding efficient algorithms. Proving convergence rates is likewise an interesting theoretical problem. Recently, Barnett and Barnett-Hassell have shown that the method of particular solutions (MPS), a standard method, is more accurate by an order of E^{1/2}, where E is the eigenvalue, than previously shown. Analyzing the scaling method, which is a more efficient method for finding large blocks of eigenvalues simultaneously, is planned for 2009. There are good projects possible here for those who like to combine theory and computation.
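As a first taste of the computational side, one can approximate Dirichlet eigenvalues on a simple domain with finite differences and compare against a known answer. This sketch is far cruder than MPS or the scaling method mentioned above (its accuracy is only O(h^2)), but for the unit square the exact eigenvalues are pi^2 (m^2 + n^2), so it is easy to check:

```python
import numpy as np

def dirichlet_eigs_unit_square(n=40, k=4):
    """Smallest k Dirichlet eigenvalues of -Laplacian on the unit
    square, via the standard 5-point finite-difference scheme with
    n interior grid points per side (h = 1/(n+1))."""
    h = 1.0 / (n + 1)
    # 1-D second-difference matrix with Dirichlet boundary conditions
    T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    I = np.eye(n)
    # 2-D Laplacian as a Kronecker sum of the 1-D operators
    L = np.kron(T, I) + np.kron(I, T)
    return np.linalg.eigvalsh(L)[:k]
```

With n = 40 the smallest computed eigenvalue agrees with 2 pi^2 (about 19.74) to two decimal places, and the next two approximate the degenerate pair 5 pi^2.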
https://chemistrycorners.com/first-term-of-chemistry-stpm/
# First Term ## 1 Atoms, Molecules and Stoichiometry ### 1.1 Fundamental particles of an atom Candidates should be able to: (a) describe the properties of protons, neutrons and electrons in terms of their relative charges and relative masses; (b) predict the behaviour of beams of protons, neutrons and electrons in both electric and magnetic fields; (c) describe the distribution of mass and charges within an atom; (d) determine the number of protons, neutrons and electrons present in both neutral and charged species of a given proton number and nucleon number; (e) describe the contribution of protons and neutrons to atomic nuclei in terms of proton number and nucleon number; (f) distinguish isotopes based on the number of neutrons present, and state examples of both stable and unstable isotopes. ### 1.2 Relative atomic, isotopic, molecular and formula masses Candidates should be able to: (a) define the terms relative atomic mass, Ar, relative isotopic mass, relative molecular mass, Mr, and relative formula mass based on 12C; (b) interpret mass spectra in terms of relative abundance of isotopes and molecular fragments; (c) calculate relative atomic mass of an element from the relative abundance of its isotopes or its mass spectrum. ### 1.3 The mole and the Avogadro constant Candidates should be able to: (a) define mole in terms of the Avogadro constant; (b) calculate the number of moles of reactants, volumes of gases, volumes of solutions and concentrations of solutions; (c) deduce stoichiometric relationships from the calculations above. ## 2 Electronic Structures of Atoms ### 2.1 Electronic energy levels of atomic hydrogen Candidates should be able to: (a) explain the formation of the emission line spectrum of atomic hydrogen in the Lyman and Balmer series using Bohr’s Atomic Model. 
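For syllabus item 2.1, the Balmer-series line positions follow from the Rydberg formula 1/λ = R_H (1/2² − 1/n²). A quick numerical sketch, using R_H ≈ 1.0968×10⁷ m⁻¹ for hydrogen (a standard tabulated value, included here as an assumption):

```python
R_H = 1.0968e7   # Rydberg constant for hydrogen, in m^-1

def balmer_wavelength_nm(n):
    # emission line for the transition n -> 2 (Balmer series), n >= 3
    inv_lam = R_H * (1.0/2**2 - 1.0/n**2)
    return 1e9 / inv_lam   # convert metres to nanometres

# n = 3 gives the red H-alpha line near 656 nm; n = 4 the blue-green line near 486 nm
```

The same formula with 1/1² in place of 1/2² reproduces the Lyman series in the ultraviolet.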
### 2.2 Atomic orbitals: s, p and d Candidates should be able to: (a) deduce the number and relative energies of the s, p and d orbitals for the principal quantum numbers 1, 2 and 3, including the 4s orbitals; (b) describe the shape of the s and p orbitals. ### 2.3 Electronic configuration Candidates should be able to: (a) predict the electronic configuration of atoms and ions given the proton number (and charge); (b) define and apply Aufbau principle, Hund’s rule and Pauli Exclusion Principle. ### 2.4 Classification of elements into s, p, d and f blocks in the Periodic Table Candidates should be able to: (a) identify the position of the elements in the Periodic Table as (i) block s, with valence shell configurations s1 and s2, (ii) block p, with valence shell configurations from s2p1 to s2p6, (iii) block d, with valence shell configurations from d1s2 to d10s2; (b) identify the position of elements in block f of the Periodic Table. ## 3 Chemical Bonding ### 3.1 Ionic bonding Candidates should be able to: (a) describe ionic (electrovalent) bonding as exemplified by NaCl and MgCl2. ### 3.2 Covalent bonding Candidates should be able to: (a) draw the Lewis structure of covalent molecules (octet rule as exemplified by NH3, CCl4, H2O, CO2, N2O4 and exception to the octet rule as exemplified by BF3, NO, NO2, PCl5, SF6); (b) draw the Lewis structure of ions as exemplified by SO4²⁻, CO3²⁻, NO3⁻ and CN⁻; (c) explain the concept of overlapping and hybridisation of the s and p orbitals as exemplified by BeCl2, BF3, CH4, N2, HCN, NH3 and H2O molecules; (d) predict and explain the shapes of and bond angles in molecules and ions using the principle of valence shell electron pair repulsion, e.g. 
linear, trigonal planar, tetrahedral, trigonal bipyramid, octahedral, V-shaped, T-shaped, seesaw and pyramidal; (e) explain the existence of polar and non-polar bonds (including C–Cl, C–N, C–O, C–Mg) resulting in polar and/or non-polar molecules; (f) relate bond lengths and bond strengths with respect to single, double and triple bonds; (g) explain the inertness of the nitrogen molecule in terms of its strong triple bond and non-polarity; (h) describe typical properties associated with ionic and covalent bonding in terms of bond strength, melting point and electrical conductivity; (i) explain the existence of covalent character in ionic compounds such as Al2O3, AlI3 and LiI; (j) explain the existence of coordinate (dative covalent) bonding as exemplified by H3O⁺, NH4⁺, Al2Cl6 and [Fe(CN)6]³⁻. ### 3.3 Metallic bonding Candidates should be able to: (a) explain metallic bonding in terms of the electron sea model. ### 3.4 Intermolecular forces: van der Waals forces and hydrogen bonding Candidates should be able to: (a) describe hydrogen bonding and van der Waals forces (permanent, temporary and induced dipole); (b) deduce the effect of van der Waals forces between molecules on the physical properties of substances; (c) deduce the effect of hydrogen bonding (intermolecular and intramolecular) on the physical properties of substances. ## 4 States of Matter ### 4.1 Gases Candidates should be able to: (a) explain the pressure and behaviour of ideal gas using the kinetic theory; (b) explain qualitatively, in terms of molecular size and intermolecular forces, the conditions necessary for a gas approaching the ideal behaviour; (c) define Boyle’s law, Charles’ law and Avogadro’s law; (d) apply the pV = nRT equation in calculations, including the determination of the relative molecular mass, Mr; (e) define Dalton’s law, and use it to calculate the partial pressure of a gas and its composition; (f) explain the limitation of ideality at very high pressures and very low temperatures. 
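Item 4.1(d) — determining Mr from pV = nRT — can be sketched numerically. The measurement values below (sample mass, pressure, volume, temperature) are purely illustrative, not taken from any exam:

```python
R = 8.314   # gas constant, J mol^-1 K^-1

def molar_mass(mass_g, p_pa, v_m3, t_k):
    # n = pV/RT, then Mr = m/n (grams per mole)
    n = p_pa * v_m3 / (R * t_k)
    return mass_g / n

# e.g. 0.28 g of gas occupying 244 mL at 101325 Pa and 298 K
mr = molar_mass(0.28, 101325.0, 2.44e-4, 298.0)   # ~28 g/mol, consistent with N2
```

The same helper works for Dalton's-law questions once the partial pressure of the gas of interest has been extracted from the total.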
### 4.2 Liquids Candidates should be able to: (a) describe the kinetic concept of the liquid state; (b) describe the melting of solid to liquid, vaporisation and vapour pressure using simple kinetic theory; (c) define the boiling point and freezing point of liquids. ### 4.3 Solids Candidates should be able to: (a) describe qualitatively the lattice structure of a crystalline solid which is: (i) ionic, as in sodium chloride, (ii) simple molecular, as in iodine, (iii) giant molecular, as in graphite, diamond and silicon(IV) oxide, (iv) metallic, as in copper; (b) describe the allotropes of carbon (graphite, diamond and fullerenes), and their uses. ### 4.4 Phase diagrams Candidates should be able to: (a) sketch the phase diagram for water and carbon dioxide, and explain the anomalous behaviour of water; (b) explain phase diagrams as graphical plots of experimentally determined results; (c) interpret phase diagrams as curves describing the conditions of equilibrium between phases and as regions representing single phases; (d) predict how a phase may change with changes in temperature and pressure; (e) discuss vaporisation, boiling, sublimation, freezing, melting, triple and critical points of H2O and CO2; (f) explain qualitatively the effect of a non-volatile solute on the vapour pressure of a solvent, and hence, on its melting point and boiling point (colligative properties); (g) state the uses of dry ice. ## 5 Reaction Kinetics ### 5.1 Rate of reaction Candidates should be able to: (a) define rate of reaction, rate equation, order of reaction, rate constant, half-life of a first-order reaction, rate determining step, activation energy and catalyst; (b) explain qualitatively, in terms of collision theory, the effects of concentration and temperature on the rate of a reaction. 
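Item 4.4(f) on colligative properties: for a dilute solution of a non-volatile, non-electrolyte solute, the boiling-point elevation is ΔTb = Kb · m, where m is the molality. A sketch using Kb ≈ 0.512 K·kg·mol⁻¹ for water (an assumed textbook value):

```python
K_B_WATER = 0.512   # ebullioscopic constant of water, K kg mol^-1

def boiling_point_elevation(moles_solute, kg_solvent, kb=K_B_WATER):
    # valid for dilute solutions of a non-volatile, non-electrolyte solute
    molality = moles_solute / kg_solvent
    return kb * molality

# 1 mol of glucose in 1 kg of water raises the boiling point by ~0.51 K
dt = boiling_point_elevation(1.0, 1.0)
```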
### 5.2 Rate law Candidates should be able to: (a) calculate the rate constant from initial rates; (b) predict an initial rate from rate equations and experimental data; (c) use titrimetric method to study the rate of a given reaction. ### 5.3 The effect of temperature on reaction kinetics Candidates should be able to: (a) explain the relationship between the rate constant, the activation energy and temperature using the Arrhenius equation, k = Ae^(−Ea/RT); (b) use the Boltzmann distribution curve to explain the distribution of molecular energy. ### 5.4 The role of catalysts in reactions Candidates should be able to: (a) explain the effect of catalysts on the rate of a reaction; (b) explain how a reaction, in the presence of a catalyst, follows an alternative path with a lower activation energy; (c) explain the role of atmospheric oxides of nitrogen as catalysts in the oxidation of atmospheric sulphur dioxide; (d) explain the role of vanadium (V) oxide as a catalyst in the Contact process; (e) describe enzymes as biological catalysts. ### 5.5 Order of reactions and rate constants Candidates should be able to: (a) deduce the order of a reaction (zero-, first- and second-) and the rate constant by the initial rates method and graphical methods; (b) verify that a suggested reaction mechanism is consistent with the observed kinetics; (c) use the half-life (t½) of a first-order reaction in calculations. 
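For 5.3(a), the two-temperature form of the Arrhenius equation, ln(k2/k1) = −(Ea/R)(1/T2 − 1/T1), recovers the activation energy from two rate constants. A sketch using the common illustrative case of a rate constant that doubles between 298 K and 308 K (assumed data, not from the syllabus):

```python
import math

R = 8.314   # gas constant, J mol^-1 K^-1

def activation_energy(k1, t1, k2, t2):
    # from ln(k2/k1) = -(Ea/R) * (1/t2 - 1/t1)
    return R * math.log(k2 / k1) / (1.0/t1 - 1.0/t2)

# a rate constant doubling from 298 K to 308 K implies Ea ~ 53 kJ/mol
ea = activation_energy(1.0, 298.0, 2.0, 308.0)
```

Only the ratio k2/k1 matters, so the absolute rate constants can be left as 1 and 2.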
## 6 Equilibria ### 6.1 Chemical equilibria Candidates should be able to: (a) describe a reversible reaction and dynamic equilibrium in terms of forward and backward reactions; (b) state mass action law from stoichiometric equation; (c) deduce expressions for equilibrium constants in terms of concentrations, Kc, and partial pressures, Kp, for homogeneous and heterogeneous systems; (d) calculate the values of the equilibrium constants in terms of concentrations or partial pressures from given data; (e) calculate the quantities present at equilibrium from given data; (f) apply the concept of dynamic chemical equilibrium to explain how the concentration of stratospheric ozone is affected by the photodissociation of NO2, O2 and O3 to form reactive oxygen radicals; (g) state Le Chatelier's principle and use it to discuss the effect of catalysts, changes in concentration, pressure or temperature on a system at equilibrium in the following examples: (i) the synthesis of hydrogen iodide, (ii) the dissociation of dinitrogen tetroxide, (iii) the hydrolysis of simple esters, (iv) the Contact process, (v) the Haber process, (vi) the Ostwald process; (h) explain the effect of temperature on equilibrium constant from the equation ln K = −ΔH/(RT) + C ### 6.2 Ionic equilibria Candidates should be able to: (a) use Arrhenius, Brønsted-Lowry and Lewis theories to explain acids and bases; (b) identify conjugate acids and bases; (c) explain qualitatively the different properties of strong and weak electrolytes; (d) explain and calculate the terms pH, pOH, Ka, pKa, Kb, pKb, Kw and pKw from given data; (e) explain changes in pH during acid-base titrations; (f) explain the choice of suitable indicators for acid-base titrations; (g) define buffer solutions; (h) calculate the pH of buffer solutions from given data; (i) explain the use of buffer solutions and their importance in biological systems such as the role of H2CO3/HCO3⁻ in controlling pH in blood. 
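For 6.2(h)-(i), buffer pH follows the Henderson–Hasselbalch equation, pH = pKa + log10([base]/[acid]). Using pKa ≈ 6.1 for the H2CO3/HCO3⁻ system in blood and the physiological base:acid ratio of about 20:1 (both assumed textbook values), this reproduces blood pH ≈ 7.4:

```python
import math

def buffer_ph(pka, base_conc, acid_conc):
    # Henderson-Hasselbalch equation; concentrations in any consistent units
    return pka + math.log10(base_conc / acid_conc)

# blood bicarbonate buffer: pKa ~ 6.1, [HCO3-]/[H2CO3] ~ 20
ph_blood = buffer_ph(6.1, 20.0, 1.0)   # ~7.40
```

Only the ratio of base to acid enters, which is why buffers resist pH change on small additions of acid or base.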
### 6.3 Solubility equilibria Candidates should be able to: (a) define solubility product, Ksp; (b) calculate Ksp from given concentrations and vice versa; (c) describe the common ion effect, including buffer solutions; (d) predict the possibility of precipitation from solutions of known concentrations; (e) apply the concept of solubility equilibria to describe industrial procedure for water softening. ### 6.4 Phase equilibria Candidates should be able to: (a) state and apply Raoult's law for two miscible liquids; (b) interpret the boiling point-composition curves for mixtures of two miscible liquids in terms of 'ideal' behaviour or positive or negative deviations from Raoult's law; (c) explain the principles involved in fractional distillation of ideal and non-ideal liquid mixtures; (d) explain the term azeotropic mixture; (e) explain the limitations on the separation of two components forming an azeotropic mixture; (f) explain qualitatively the advantages and disadvantages of fractional distillation under reduced pressure.
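For 6.4(a), Raoult's law for an ideal binary mixture gives the total vapour pressure p = x_A·p_A° + x_B·p_B°, with p° the pure-component vapour pressures. A sketch with illustrative pure-component values (the 40 kPa and 10 kPa figures are assumptions, not syllabus data):

```python
def raoult_total_pressure(x_a, p_a_pure, p_b_pure):
    # ideal mixture: each partial pressure is mole-fraction weighted
    x_b = 1.0 - x_a
    return x_a * p_a_pure + x_b * p_b_pure

# equimolar mixture of liquids with pure vapour pressures 40 kPa and 10 kPa
p_total = raoult_total_pressure(0.5, 40.0, 10.0)   # 25.0 kPa
```

Real mixtures showing positive or negative deviations lie above or below this straight-line prediction, which is what the boiling point-composition curves in 6.4(b) encode.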
https://hal.science/hal-01351604
Hodge-Dirac, Hodge-Laplacian and Hodge-Stokes operators in L^p spaces on Lipschitz domains - Archive ouverte HAL Journal Articles Revista Matemática Iberoamericana Year : 2018 Hodge-Dirac, Hodge-Laplacian and Hodge-Stokes operators in L^p spaces on Lipschitz domains Alan Mcintosh, Sylvie Monniaux Abstract This paper concerns Hodge-Dirac operators D = d + δ acting in L^p(Ω, Λ), where Ω is a bounded open subset of R^n satisfying some kind of Lipschitz condition, Λ is the exterior algebra of R^n, d is the exterior derivative acting on the de Rham complex of differential forms on Ω, and δ is the interior derivative with tangential boundary conditions. In L^2(Ω, Λ), δ = d* and D is self-adjoint, thus having bounded resolvents {(I + itD)^{−1}}_{t∈R} as well as a bounded functional calculus in L^2(Ω, Λ). We investigate the range of values p_H < p < p^H about p = 2 for which D has bounded resolvents and a bounded holomorphic functional calculus in L^p(Ω, Λ). On domains which we call very weakly Lipschitz, we show that this is the same range of values as for which L^p(Ω, Λ) has a Hodge (or Helmholtz) decomposition, being an open interval that includes 2. The Hodge-Laplacian Δ is the square of the Hodge-Dirac operator, i.e. −Δ = D^2, so it also has a bounded functional calculus in L^p(Ω, Λ) when p_H < p < p^H. But the Stokes operator with Hodge boundary conditions, which is the restriction of −Δ to the subspace of divergence-free vector fields in L^p(Ω, Λ^1) with tangential boundary conditions, has a bounded holomorphic functional calculus for further values of p, namely for max{1, p_{HS}} < p < p^H, where p_{HS} is the Sobolev exponent below p_H, given by 1/p_{HS} = 1/p_H + 1/n, so that p_{HS} < 2n/(n + 2). In 3 dimensions, p_{HS} < 6/5.
We show also that for bounded strongly Lipschitz domains Ω, p_H < 2n/(n + 1) < 2n/(n − 1) < p^H, in agreement with the known results that p_H < 4/3 < 4 < p^H in dimension 2, and p_H < 3/2 < 3 < p^H in dimension 3. In both dimensions 2 and 3, p_{HS} < 1, implying that the Stokes operator has a bounded functional calculus in L^p(Ω, Λ^1) when Ω is strongly Lipschitz and 1 < p < p^H. Dates and versions hal-01351604, version 1 (04-08-2016) Identifiers • HAL Id : hal-01351604, version 1 Cite Alan Mcintosh, Sylvie Monniaux. Hodge-Dirac, Hodge-Laplacian and Hodge-Stokes operators in L^p spaces on Lipschitz domains. Revista Matemática Iberoamericana, 2018, 34 (4), pp.1711-1753. ⟨hal-01351604⟩
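The Sobolev-exponent arithmetic in the abstract, 1/p_HS = 1/p_H + 1/n, can be sanity-checked with exact rational arithmetic: since p_H < 2 always, p_HS is bounded by its value at p_H = 2, namely 2n/(n + 2), which is 6/5 in dimension 3 and 1 in dimension 2, exactly as stated.

```python
from fractions import Fraction

def sobolev_below(p_h, n):
    # the Sobolev exponent below p_h: 1/p_HS = 1/p_H + 1/n
    return 1 / (1 / Fraction(p_h) + Fraction(1, n))

# bound obtained by letting p_H -> 2 (p_H < 2 always holds)
bound_3d = sobolev_below(2, 3)   # 6/5
bound_2d = sobolev_below(2, 2)   # 1
```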
http://mahrabu.blogspot.com/2008/05/blog-post_12.html
## Monday, May 12, 2008 ### ברוך שכיונתי ("Blessed is He, for I reached the same conclusion") A while ago I said: You might say that an inverse-square force is infinite when r=0, but I would respond that classical electromagnetism doesn't really allow point charges, but only finite-density charge distributions. This probably means we should be a little bit more careful about using point charges in all our examples, but they're useful approximations if we don't think too hard about it. But point charges can't exist because they would result in infinite electric fields, which results in infinite energy density, and if you integrate the energy over any finite volume containing a point charge, you get infinite energy. So I certainly wasn't the first one with that idea. The Feynman Lectures, volume II chapter 8, does the same integral, gets an infinite result, and concludes: We must conclude that the idea of locating the energy in the field is inconsistent with the assumption of the existence of point charges. One way out of the difficulty would be to say that elementary charges, such as an electron, are not points but are really small distributions of charge. Alternatively, we could say that there is something wrong in our theory of electricity at very small distances, or with the idea of the local conservation of energy. There are difficulties with either point of view. These difficulties have never been overcome; they exist to this day. I'm going to go with choice B ("there is something wrong in our theory of electricity at very small distances"), where "our theory of electricity" means classical electromagnetism. In quantum mechanics, even "point charges" aren't really localized at a point. But then I wonder what Feynman meant when he said "these difficulties ... exist to this day", since he himself was one of the main people responsible for quantum electrodynamics. So I'll have to learn QED one day and get back to you. #### 1 comment:
Much as I enjoy reading all of your posts, it's nice to see a bit of physics again :).
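The divergence the post describes can be made concrete. Integrating the field energy density (ε₀/2)E² of a point charge q over a < r < R gives U = (q²/8πε₀)(1/a − 1/R), which blows up as the inner cutoff a → 0. A sketch, treating a as a hypothetical cutoff radius:

```python
import math

EPS0 = 8.854187e-12    # vacuum permittivity, F/m
Q = 1.602176634e-19    # elementary charge, C

def field_energy(a, r_outer=1.0):
    # energy of the field of a point charge Q in the shell a < r < r_outer:
    # U = Q^2/(8*pi*eps0) * (1/a - 1/r_outer)
    return Q**2 / (8 * math.pi * EPS0) * (1.0/a - 1.0/r_outer)

# shrinking the cutoff tenfold multiplies the energy roughly tenfold:
# there is no finite limit, which is Feynman's point
u1 = field_energy(1e-15)
u2 = field_energy(1e-16)
```

Setting U equal to the electron rest energy recovers a length scale of order the classical electron radius, which is one way to motivate Feynman's option A ("small distributions of charge").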
http://mathhelpforum.com/differential-equations/176916-spring-mass-undamped-motion-print.html
# spring mass undamped motion • April 5th 2011, 12:02 PM duaneg37 spring mass undamped motion A mass weighing 10 pounds stretches a spring 1/4 foot. This mass is removed and replaced with a mass of 1.6 slugs, which is initially released from a point 1/3 foot above the equilibrium position with a downward velocity of 5/4 ft/s. At what time does the mass attain a displacement below the equilibrium position numerically equal to 1/2 the amplitude? my position function is $x(t)=-\frac{1}{3}cos(160t)+\frac{1}{128}sin(160t) $ I have tried to put it into the form $x(t)=Asin(\omega t+\phi) $ and I am getting $x(t)=\frac{1}{3}sin(160t+\frac{\pi}{2}) $ The time I am getting is $\frac{\pi}{480}s$ initially, but I don't know if this is right. I'm also having trouble writing the answer as a series. Can anyone help me? Thanks • April 5th 2011, 01:25 PM TheEmptySet Quote: Originally Posted by duaneg37 A mass weighing 10 pounds stretches a spring 1/4 foot. This mass is removed and replaced with a mass of 1.6 slugs, which is initially released from a point 1/3 foot above the equilibrium position with a downward velocity of 5/4 ft/s. At what time does the mass attain a displacement below the equilibrium position numerically equal to 1/2 the amplitude? my position function is $x(t)=-\frac{1}{3}cos(160t)+\frac{1}{128}sin(160t) $ I have tried to put it into the form $x(t)=Asin(\omega t+\phi) $ and I am getting $x(t)=\frac{1}{3}sin(160t+\frac{\pi}{2}) $ The time I am getting is $\frac{\pi}{480}s$ initially, but I don't know if this is right. I'm also having trouble writing the answer as a series. Can anyone help me? Thanks First I don't think your ODE is correct. 
First, by Hooke's law we have $F=kx \iff 10=k(.25) \iff k=40$ Now the ODE is $\displaystyle mx''=-kx \iff x''+\frac{1}{5}x=0$ So the solution should have the form $\displaystyle x(t)=c_1\cos\left( \frac{t}{5}\right)+c_2\sin\left( \frac{t}{5}\right)$ Now use the initial conditions • April 5th 2011, 03:18 PM duaneg37 I thought I had to convert the mass into slugs by $W=mg$, which gives the force as $320$ slugs $=k(\frac{1}{4})$ for Hooke's law. This gives $k=1280$. Then I found m for the weight of $1.6$ slugs >>> $1.6=m(32)$where $m=.05$ This gives $\omega ^{2}=25600$. Is this incorrect? Shouldn't your equation be $x(t)=c_{1}cos2t+c_{2}sin2t$? I did it again without rounding off as much and got: $x(t)=\frac{\sqrt{16393}}{384}sin(160t-1.547)$ the time I got: $t=.026+\frac{n\pi}{160}$ sec., let $n=0,1,2,3,...$ Am I on the wrong track with this? Thanks a lot! • April 5th 2011, 03:40 PM TheEmptySet Weight is a force. Slugs are mass. So in Hooke's law you need a force, and since lbs are a force you do not need to do any conversions. • April 5th 2011, 04:29 PM topsquark Quote: Originally Posted by TheEmptySet $\displaystyle mx''=-kx \iff x''+\frac{1}{5}x=0$ Actually $\displaystyle \frac{k}{m} = \frac{40}{1.6} = 25$, not 1/25. Thus $x(t) = c_1~cos(5t) + c_2~sin(5t)$ -Dan • April 5th 2011, 04:35 PM duaneg37 I think I've done them all wrong! Thanks a lot for your help! • April 5th 2011, 07:26 PM duaneg37 My position function is $x(t)=\frac{5}{12}sin(5t-.927)$ To find the time I set $\frac{1}{2}=sin(5t-.927)$ I know the sine of $\frac{\pi}{6}$ and $\frac{5\pi}{6}$ will give $\frac{1}{2}$, so I get $t=.29$and $t=.709$. Do I use both of these times to get my answer? I found the period to be $T=\frac{2\pi}{5}$. I said $t=.29+\frac{2n\pi}{5}$ s where $n=0,1,2,3,...$ and $t=.709+\frac{2n\pi}{5}$ s where $n=0,1,2,3,...$ as my answer. Am I doing this right? 
• April 5th 2011, 07:40 PM topsquark Quote: Originally Posted by duaneg37 My position function is $x(t)=\frac{5}{12}sin(5t-.927)$ To find the time I set $\frac{1}{2}=sin(5t-.927)$ I know the sine of $\frac{\pi}{6}$ and $\frac{5\pi}{6}$ will give $\frac{1}{2}$, so I get $t=.29$and $t=.709$. Do I use both of these times to get my answer? I found the period to be $T=\frac{2\pi}{5}$. I said $t=.29+\frac{2n\pi}{5}$ s where $n=0,1,2,3,...$ and $t=.709+\frac{2n\pi}{5}$ s where $n=0,1,2,3,...$ as my answer. Am I doing this right? Looks good to me. (Nod) -Dan
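The final numbers in the thread can be verified directly. With k = 40 lb/ft and m = 1.6 slugs, ω = √(k/m) = 5 rad/s; the initial conditions x(0) = −1/3 ft (above equilibrium, taking down as positive) and x′(0) = 5/4 ft/s give amplitude 5/12 ft, and the first time the mass sits half an amplitude below equilibrium comes out near t ≈ 0.29 s, as in the thread:

```python
import math

k, m = 40.0, 1.6                       # lb/ft, slugs
w = math.sqrt(k / m)                   # angular frequency: 5 rad/s
c1 = -1.0/3.0                          # x(0); negative = above equilibrium
c2 = (5.0/4.0) / w                     # x'(0)/w
A = math.hypot(c1, c2)                 # amplitude = 5/12 ft
phi = math.atan2(c1, c2)               # x(t) = A sin(w t + phi), phi ~ -0.927
t_first = (math.asin(0.5) - phi) / w   # first solution of x(t) = A/2
period = 2 * math.pi / w               # later solutions repeat with this period
```

The second family of times (the thread's 0.709 s) comes from using 5π/6 in place of asin(0.5) = π/6.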
https://www.lmno.cnrs.fr/node/339?mini=2019-09
# Elliptic integrals of the third kind and 1-motives Friday, 5 April 2019 - 14:00 - 15:00 Speaker: Cristiana Bertolin Abstract: In our PhD thesis we showed that the generalized Grothendieck's Conjecture of Periods applied to 1-motives, whose underlying semi-abelian variety is a product of elliptic curves and of tori, is equivalent to a transcendental conjecture involving elliptic integrals of the first and second kind, and logarithms of complex numbers. In this talk we investigate the generalized Grothendieck's Conjecture of Periods in the case of 1-motives whose underlying semi-abelian variety is a nontrivial extension of a product of elliptic curves by a torus. This requires the introduction of elliptic integrals of the third kind for the computation of the period matrix of the 1-motive, and therefore the generalized Grothendieck's Conjecture of Periods applied to such 1-motives will be equivalent to a transcendental conjecture involving elliptic integrals of the first, second and third kind.
https://indico.cern.ch/event/331032/contributions/1720171/
# SUSY 2015, 23rd International Conference on Supersymmetry and Unification of Fundamental Interactions 23-29 August 2015 Lake Tahoe US/Pacific timezone ## Searches for vector-like partners of top and bottom quarks at CMS 28 Aug 2015, 14:30 30m Court View Alternative Theories ### Speaker Huaqiao Zhang (Chinese Academy of Sciences (CN)) ### Description We present new results on searches for massive top and bottom quark partners using proton-proton collision data collected with the CMS detector at the CERN LHC at a center-of-mass energy of 8 TeV. These fourth-generation vector-like quarks are postulated to solve the hierarchy problem and stabilize the Higgs mass, while escaping the constraints from Higgs cross section measurements. The vector-like quark decays result in a variety of final states, containing boosted top and bottom quarks, gauge and Higgs bosons. We search using several categories of reconstructed objects, from multi-leptonic to fully hadronic final states. We set exclusion limits on both the vector-like quark mass and pair-production cross sections, for combinations of the vector-like quark branching ratios.
https://electronics.stackexchange.com/questions/313337/is-gain-block-can-be-used-a-power-amplifier
# Can a gain block be used as a power amplifier? I have chosen 2 datasheets of MMIC amplifiers: 1. HMC311ST89 MMIC amplifier – DC-6 GHz – 16 dB gain 2. HMC637ALP5E – DC-6 GHz – 13 dB gain My intention is to use the 1st one as a power amplifier. I am wondering whether this is the right choice, because the 1st one is described as a gain block in its datasheet, whereas the 2nd one is described as a power amplifier. • You need to look at the 1 dB compression point (or, alternatively, the third-order intercept point) spec in the datasheets in order to make a decision. – Enric Blanco Jun 28 '17 at 9:58
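As the comment says, the deciding spec is the output 1 dB compression point (P1dB), not the label on the datasheet. A crude sketch of that reasoning, using a hard-clip model and made-up numbers — the 16 dB gain and +17 dBm P1dB below are illustrative assumptions only, so check the actual datasheets:

```python
def output_power_dbm(p_in_dbm, gain_db, p1db_out_dbm):
    # crude hard-clip model: linear small-signal gain, hard limit near P1dB
    # (a real amplifier compresses gradually; this is only a feasibility check)
    return min(p_in_dbm + gain_db, p1db_out_dbm)

# hypothetical gain block: 16 dB small-signal gain, +17 dBm output P1dB
backed_off = output_power_dbm(-10.0, 16.0, 17.0)   # 6 dBm: well in the linear region
driven_hard = output_power_dbm(5.0, 16.0, 17.0)    # would need 21 dBm: limited
```

If the required output power sits several dB below the device's output P1dB, a gain block can serve as a small power amplifier; otherwise a part specified for power (higher P1dB/OIP3) is needed.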
https://www.physicsforums.com/threads/best-physics-problem-ever.352668/
# Best physics problem ever 1. Nov 7, 2009 ### DanP Dear Sir, I am a high school student and have a problem. My teacher and I were talking about Satan. Of course you know that when he fell from heaven, he fell for nine days, and nine nights, at 32 feet a second and was increasing his speed every second. I was told there was a foluma [formula] to it. I know you don't have time for such little things, but if possible please send me the foluma. Thank you, Jerry Quite possibly a letter sent to Einstein by a child, I was told. 2. Nov 7, 2009 ### Danger The formula is simple; there's no such thing as Satan. 3. Nov 7, 2009 ### DanP Some theorists believe LHC will spawn him. Watch and laugh: Last edited by a moderator: Sep 25, 2014 4. Nov 7, 2009 ### FredGarvin That assumes that the acceleration due to gravity between Heaven and Hell is constant and equal to that of the Earth. 5. Nov 7, 2009 ### Redd Then we could calculate the distance between heaven and hell. It seems surprisingly small. Maybe this is actually a telling metaphor? 6. Nov 7, 2009 ### DaveC426913 Just for fun though... at what altitude is Heaven, by this logic? A 9-day fall to Earth, accounting for the gravitational gradient, would give us what altitude? 7. Nov 7, 2009 ### slider142 There is a small problem. We are told that he falls at 32 feet per second, a constant velocity, and then told that he increases his speed every second (by an unknown amount, unless he is falling to Earth or another known mass). This is contradictory, unless the problem statement is that he is accelerating at 32 feet per second every second, which is approximately the acceleration due to gravity near sea level. Unfortunately, a 9-day fall cannot be anywhere near sea level, unless he is falling through a hole cut in the Earth. Last edited: Nov 7, 2009 8. Nov 7, 2009 ### slider142 Is there atmosphere present at the time of Satan's fall?
If so, we must account for air friction; as he enters the atmosphere near the end of his fall he will brake to terminal velocity in air, which depends on his cross-section and mass. Assuming no friction and that he is actually falling to Earth's sea level, where Earth has mass M, we can use energy methods: $$9 \text{ days} = \int_\text{sea level}^x \frac{\pm dx}{\sqrt\frac{2GM}{x}}$$ Solving for the displacement x gives us: $$x = \left(\pm 3\sqrt\frac{GM}{2}(9 \text{ days}) + (\text{sea level})^\frac{3}{2}\right)^\frac{2}{3}$$ where the numerical value of G must be adjusted for days instead of seconds. The Earth page on Wikipedia gives average values for sea level and mass and assuming exact 24-hour days, we have approximately 6434 km, which is pretty close to the radius of the Earth. Last edited: Nov 7, 2009 9. Nov 7, 2009 ### Staff: Mentor Yeah, got to agree with Danger. The problem is non-existent. 10. Nov 7, 2009 ### DanP Henceforth, this shall be known as "Satan's law". 11. Nov 7, 2009 ### Loren Booda I once saw it calculated that heaven is hotter than hell - but that might violate the guidelines (no kidding). 12. Nov 7, 2009 ### Staff: Mentor Was that this? 13. Nov 7, 2009 ### Pengwuino I believe I saw this same mathematics prove that Women = Evil. 14. Nov 7, 2009 ### NeoDevin Well, women take time and money, so $\mbox{Women} = \mbox{Time}\cdot\mbox{Money}$. I have often heard it claimed that time is money, $\mbox{Time} = \mbox{Money}$, so we have $\mbox{Women} = \mbox{Money}^2$. But since money is the root of evil, $\mbox{Money} = \sqrt{\mbox{Evil}}$, we know that $\mbox{Women} = \left(\sqrt{\mbox{Evil}}\right)^2 = \mbox{Evil}$. 15. Nov 7, 2009 ### whs Fall speed from heaven = 2 * X; where X is a constant for adjusting the fall speed to your liking. 16. Nov 7, 2009 ### DaveC426913 17. Nov 7, 2009 ### Loren Booda I think I saw the calculation in an old Ripley's Believe it or Not. 
There were references from a religious text, saying how high heaven is and where hell is located. Then using some earth physics - voila! I wonder if the student you quote ended up in engineering, divinity or their unification: theoretical physics. 18. Nov 8, 2009 ### NeoDevin 19. Nov 8, 2009 ### DanP Would be funny if you still have the proof somewhere and you can post it as a joke. 20. Nov 8, 2009 ### arildno This then yields the equation: $$women=\frac{evil}{{love}^{2}}$$ or: $${love}^{2}*women=evil$$ or: $$love*(love*women)=evil$$ Thus, the problem is that it is evil to love the fact that you love women. Straight men should regret the fact that they are not gay, while lesbians ought to regret they are not straight, I suppose Last edited: Nov 8, 2009
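slider142's closed-form answer in post #8 is easy to check numerically. A minimal sketch using standard SI values for G and Earth's mass and radius (the thread's "6434 km" depends on how G was adjusted for days, so treat any particular number with suspicion):

```python
import math

# Standard SI values (the thread leaves the constants implicit):
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # mean radius of the Earth ("sea level"), m
T = 9 * 86400   # nine 24-hour days, in seconds

# slider142's closed form: x = (3*sqrt(G*M/2)*T + R**(3/2))**(2/3)
x = (3 * math.sqrt(G * M / 2) * T + R ** 1.5) ** (2 / 3)
print(f"fall starts about {x / 1000:.3e} km from Earth's centre")
```

With everything kept in seconds and metres, the starting point lands far beyond Earth's radius, on the order of a million kilometres out.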
https://scholar.lib.ntnu.edu.tw/en/publications/an-alternative-approach-for-a-distance-inequality-associated-with-2
# An alternative approach for a distance inequality associated with the second-order cone and the circular cone Xin He Miao, Yen chi Roger Lin, Jein Shan Chen* *Corresponding author for this work Research output: Contribution to journal › Article › peer-review ## Abstract It is well known that the second-order cone and the circular cone have many analogous properties. In particular, there exists an important distance inequality associated with the second-order cone and the circular cone. The inequality indicates that the distances of arbitrary points to the second-order cone and the circular cone are equivalent, which is crucial in analyzing the tangent cone and normal cone for the circular cone. In this paper, we provide an alternative approach to achieve the aforementioned inequality. Although the proof is a bit longer than the existing one, the new approach offers a way to clarify when the equality holds. Such a clarification is helpful for further study of the relationship between the second-order cone programming problems and the circular cone programming problems. Original language: English; Article number: 291; Journal: Journal of Inequalities and Applications; Volume: 2016; Issue: 1; DOI: https://doi.org/10.1186/s13660-016-1243-5; Publication status: Published - 2016 Dec 1 ## Keywords • circular cone • distance • projection • second-order cone ## ASJC Scopus subject areas • Analysis • Discrete Mathematics and Combinatorics • Applied Mathematics
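As a minimal illustration of the objects in the abstract (not the paper's proof technique), the distance of a point to the second-order cone can be computed from the well-known closed-form projection onto it:

```python
import numpy as np

def project_soc(z):
    """Closed-form projection of z = (z0, z_bar) onto the second-order
    cone K = {(x0, x_bar) : x0 >= ||x_bar||}."""
    z0, zbar = z[0], z[1:]
    r = np.linalg.norm(zbar)
    if z0 >= r:            # z is already in K
        return z.copy()
    if z0 <= -r:           # z lies in -K* and projects to the origin
        return np.zeros_like(z)
    coef = (z0 + r) / 2.0  # intermediate case: project onto the boundary
    return np.concatenate(([coef], coef * zbar / r))

def dist_soc(z):
    """Euclidean distance from z to the second-order cone."""
    return np.linalg.norm(z - project_soc(z))

print(dist_soc(np.array([0.0, 1.0])))  # 1/sqrt(2) ~ 0.7071
```

The circular-cone distance studied in the paper can then be compared against this quantity, which is what the inequality in question quantifies.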
https://www.physicsforums.com/threads/quantum-physics-rayleigh-jeans-wiens-law.396534/
# Homework Help: Quantum physics - Rayleigh-Jeans / Wien's law 1. Apr 19, 2010 Show that the Rayleigh-Jeans radiation law is not consistent with the Wien displacement law, λ_max T = constant (equivalently, ν_max ∝ T). 2. Apr 19, 2010 ### Gordianus The displacement law states that at any temperature T the black-body spectrum reaches its peak at the wavelength given by the displacement law. If you plot the Rayleigh-Jeans formula, you'll find there is no maximum: the shorter the wavelength, the higher the spectral power. This is known as the "ultraviolet catastrophe" and, in search of a "cure", Planck came up with his famous proposal. 3. Apr 21, 2010
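The contrast Gordianus describes can be checked numerically: the Rayleigh-Jeans density is strictly decreasing in wavelength (so it has no peak and cannot satisfy λ_max T = constant), while Planck's law has the interior maximum Wien's law demands. A small sketch using the standard formulas (the temperature 5000 K is an arbitrary choice):

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K
h = 6.626e-34       # Planck constant, J s
c = 2.998e8         # speed of light, m/s
T = 5000.0          # an arbitrary temperature, K

lam = np.logspace(-8, -4, 400)  # wavelengths from 10 nm to 100 um

# Rayleigh-Jeans: u(lambda) = 8*pi*k*T / lambda^4
u_rj = 8 * np.pi * k_B * T / lam ** 4

# Planck: u(lambda) = (8*pi*h*c / lambda^5) / (exp(h*c/(lambda*k*T)) - 1)
u_pl = (8 * np.pi * h * c / lam ** 5) / np.expm1(h * c / (lam * k_B * T))

# Rayleigh-Jeans has no interior maximum: strictly decreasing in lambda
assert np.all(np.diff(u_rj) < 0)

# Planck peaks where Wien's law says it should (lambda_max ~ b/T)
peak = lam[np.argmax(u_pl)]
print(f"Planck peak at {peak * 1e9:.0f} nm for T = {T:.0f} K")  # ~580 nm
```

With b ≈ 2.898e-3 m·K, Wien's law predicts a peak near 580 nm at 5000 K, which the Planck curve reproduces and the Rayleigh-Jeans curve cannot.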
https://wias-berlin.de/publications/wias-publ/run.jsp?template=abstract&type=Preprint&year=2017&number=2371
WIAS Preprint No. 2371, (2017) # Homogenization theory for the random conductance model with degenerate ergodic weights and unbounded-range jumps Authors • Flegel, Franziska • Heida, Martin • Slowik, Martin 2010 Mathematics Subject Classification • 60H25 60K37 35B27 35R60 47B80 47A75 Keywords • Random conductance model, homogenization, Dirichlet eigenvalues, local times, percolation DOI 10.20347/WIAS.PREPRINT.2371 Abstract We study homogenization properties of the discrete Laplace operator with random conductances on a large domain in Z^d. More precisely, we prove almost-sure homogenization of the discrete Poisson equation and of the top of the Dirichlet spectrum. We assume that the conductances are stationary and ergodic and that nearest-neighbor conductances are positive. In contrast to earlier results, we do not require uniform ellipticity, but only certain integrability conditions on the lower and upper tails of the conductances. We further allow jumps of arbitrary length. Without the long-range connections, the integrability condition on the lower tail is optimal for spectral homogenization. It coincides with a necessary condition for the validity of a local central limit theorem for the random walk among random conductances. As an application of spectral homogenization, we prove a quenched large deviation principle for the normalized and rescaled local times of the random walk in a growing box. Our proofs are based on a compactness result for the Laplacian's Dirichlet energy, Poincaré inequalities, Moser iteration and two-scale convergence. Appeared in • Ann. Inst. H. Poincare Probab. Statist., 55 (2019), pp. 1226--1257, DOI 10.1214/18-AIHP917.
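The role of the lower-tail integrability condition has a classical one-dimensional illustration (a toy case, not the paper's d-dimensional, long-range setting): for a chain of conductances in series, the homogenized coefficient is the harmonic mean, which is finite exactly when E[1/a] is finite.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
# i.i.d. conductances bounded away from 0, so E[1/a] < infinity trivially:
a = rng.uniform(0.1, 1.0, size=N)

# For a 1D chain (resistors in series) the effective conductance per bond
# is the harmonic mean of the individual conductances:
a_eff = N / np.sum(1.0 / a)

# Compare with the exact value E[1/a]^(-1) = 0.9 / ln(10) for Uniform(0.1, 1):
print(a_eff, 0.9 / np.log(10))
```

A heavy lower tail with E[1/a] = infinity would drive the harmonic mean to zero, which is the degenerate behavior the paper's integrability conditions rule out.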
http://swmath.org/software/8675
# FLRBFN-AMF Computed force control system using functional link radial basis function network with asymmetric membership function for piezo-flexural nanopositioning stage. A computed force control system using a functional link radial basis function network with asymmetric membership function (FLRBFN-AMF) for three-dimensional motion control of a piezo-flexural nanopositioning stage (PFNS) is proposed in this study. First, the dynamics of the PFNS mechanism are derived with the introduction of a lumped uncertainty that includes the equivalent hysteresis friction force. Then, a computed force control system with an auxiliary control is proposed for the tracking of reference contours with improved steady-state response. Since the dynamic characteristics of the PFNS are non-linear and time-varying, a computed force control system using the FLRBFN-AMF is designed to improve the control performance for the tracking of various reference trajectories, where the FLRBFN-AMF is employed to estimate a non-linear function including the lumped uncertainty of the PFNS. Moreover, by using the asymmetric membership function, the learning capability of the network can be upgraded and the number of fuzzy rules can be optimised for the functional link radial basis function network. Furthermore, adaptive learning algorithms for training the parameters of the FLRBFN-AMF online are derived using the Lyapunov stability theorem. Finally, some experimental results for the tracking of various reference contours of the PFNS are given to demonstrate the validity of the proposed control system. ## References in zbMATH (referenced in 1 article)
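The paper's exact network is not reproduced here, but the "asymmetric membership function" idea can be sketched generically: a Gaussian-type bump with different spreads to the left and right of its center, so one rule can cover an asymmetric region of the input space (hypothetical form, for illustration only):

```python
import numpy as np

def asymmetric_gaussian(x, center, width_left, width_right):
    """Gaussian-type membership function whose spread differs on the two
    sides of the center (a generic 'asymmetric membership function')."""
    width = np.where(x < center, width_left, width_right)
    return np.exp(-0.5 * ((x - center) / width) ** 2)

x = np.linspace(-3.0, 3.0, 601)
mu = asymmetric_gaussian(x, 0.0, 0.5, 2.0)  # steep left flank, gentle right flank
```

Because one asymmetric bump can do the work of two symmetric ones with different widths, fewer rules are needed to cover the same input region, which is the rule-count reduction the abstract refers to.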