Factorising certain polynomials During a lesson we were given a starter activity, which was to try to factorise some polynomials and see what happened.
The polynomials were
$x^3-8$,
$x^3-3x^2+ x -3$
and
$x^4 - 16$.
I could not work out what happened to them, and it's bugging me. If anyone can explain then it'd be helpful! Thank you.
| $a^3 - b^3 = (a - b)(a^2 + ab + b^2)$, so $x^3 - 8 = (x - 2)(x^2 + 2x + 4)$.
For $x^3 - 3x^2 + x - 3$, factor by grouping: take the GCF out of the first two terms, then out of the last two. This should give you
$x^2(x - 3) + 1(x - 3)$, which has a greatest common factor of $(x - 3)$ that you can now factor out:
$(x - 3)(x^2 + 1)$. And you are done.
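For anyone who wants to double-check these by machine (including the third polynomial from the starter, $x^4 - 16$, which is a difference of squares applied twice), here is a minimal sympy sketch; sympy is an assumption, not part of the original exercise:

```python
from sympy import symbols, factor

x = symbols('x')
print(factor(x**3 - 8))               # (x - 2)*(x**2 + 2*x + 4)
print(factor(x**3 - 3*x**2 + x - 3))  # (x - 3)*(x**2 + 1)
print(factor(x**4 - 16))              # (x - 2)*(x + 2)*(x**2 + 4)
```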
|
Is restriction of scalars a pullback? I am reading some handwritten notes, and scribbled next to a restriction-of-scalars functor are the words "a pullback".
I don't understand why this might be the case.
In particular, consider a field $k$ with a map $\varphi:k\rightarrow A$ for $A$ an associative unital $k$-algebra. Then we consider the induced restriction-of-scalars functor $\varphi_*:A\text{-mod}\rightarrow k\text{-mod}$.
Is $\varphi_*$ the pullback of some diagram in the category of functors from $A$-mod to $k$-mod?
I know this is a fairly strange question, but I just can't see what the scribbler had in mind.
| The sense in which "pullback" is being used here is the same sense it's being used in this Wikipedia article; that is, it's being used to refer to the process of "pulling back" a morphism $A \to \text{End}(R)$ (an $A$-module) along the morphism $k \to A$ to get a morphism $k \to \text{End}(R)$ (a $k$-module) by precomposition. This is different from the categorical pullback. I don't know why they're named the same.
Edit: Okay, so now I do know why they're named the same. To be brief, I think the historical motivation is that the pullback of vector bundles can be defined in both ways; see, for example, this blog post by Akhil Mathew.
|
The characteristic and minimal polynomial of a companion matrix The companion matrix of a monic polynomial $f \in \mathbb F\left[x\right]$ in $1$ variable $x$ over a field $\mathbb F$ plays an important role in understanding the structure of finite dimensional $\mathbb F[x]$-modules.
It is an important fact that the characteristic polynomial and the minimal polynomial of $C(f)$ are both equal to $f$. This can be seen quite easily by induction on the degree of $f$.
Does anyone know a different proof of this fact? I would love to see a graph-theoretic proof or a non-inductive algebraic proof, but I would be happy with anything that makes it seem like more than a coincidence!
| Suppose your matrix is over a field $\mathbb{F}$. Look at $G = \mathbb F[x]/f$, where $f$ is your polynomial of degree $n$. Then $G$ is a vector space over $\mathbb{F}$, and $C(f)$ is the matrix (with respect to the basis $1,x,x^2,\ldots,x^{n-1}$) corresponding to the linear operator $g \mapsto x \cdot g$.
Since $f = 0$ in $G$, also $fx^i = 0$ in $G$, and so $f$ is a polynomial of degree $n$ such that $f(C(f)) = 0$. Moreover, any polynomial $g$ of smaller degree does not reduce to $0$ in $G$, so in particular $g(C(f))$ applied to the vector $1$ does not equal the zero vector. So $f$ is the minimal polynomial of $C(f)$. Since it has degree $n$, it must be the characteristic polynomial.
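A small sympy sketch of this argument, using the hypothetical example $f(x) = x^3 - 2x + 5$: the companion matrix acts as multiplication by $x$ on the basis $1, x, x^2$, $f(C(f)) = 0$, and applying any lower-degree $g(C(f))$ to the vector representing $1$ just recovers the coefficients of $g$, hence is nonzero:

```python
from sympy import Matrix, eye, zeros, symbols

x = symbols('x')
# companion matrix of f(x) = x^3 - 2x + 5 w.r.t. the basis 1, x, x^2
C = Matrix([[0, 0, -5],
            [1, 0,  2],
            [0, 1,  0]])

print(C.charpoly(x).as_expr())               # x**3 - 2*x + 5
print(C**3 - 2*C + 5*eye(3) == zeros(3, 3))  # True: f(C) = 0

# g(C) applied to the vector for 1 lists g's coefficients, so it is nonzero
e1 = Matrix([1, 0, 0])
g_of_C = C**2 + 3*C + 7*eye(3)               # an arbitrary lower-degree g
print((g_of_C * e1).T)                       # Matrix([[7, 3, 1]])
```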
|
Does the series $\sum\limits_{n=1}^{\infty}\frac{\sin(n-\sqrt{n^2+n})}{n}$ converge? I'm just reviewing for my exam tomorrow by looking at old exams; unfortunately I don't have solutions. Here is a question I found: determine whether the series converges or diverges. If it converges, find its limit.
$$\displaystyle \sum\limits_{n=1}^{\infty}\dfrac{\sin(n-\sqrt{n^2+n})}{n}$$
I've ruled down possible tests to the limit comparison test, but I feel like I've made a mistake somewhere.
divergence test - limit is 0 by the squeeze theorem
integral test - who knows how to solve this
comparison test - series is not positive
ratio root tests - on the absolute value of the series, this wouldn't work out
alternating series test - would not work, the series is not decreasing or alternating
Any ideas what to compare this series here with or where my mistake is on my reasoning above?
| The key here is that $n - \sqrt{n^2 + n}$ converges to $-{1 \over 2}$ as $n$ goes to infinity:
$$n - \sqrt{n^2 + n}= (n - \sqrt{n^2 + n}) \times {n + \sqrt{n^2 + n} \over n + \sqrt{n^2 + n}}$$
$$= {n^2 - (n^2 + n) \over n + \sqrt{n^2 + n}} = -{n \over n + \sqrt{n^2 + n}}$$
$$= -{1 \over 1 + \sqrt{1 + {1 \over n}}}$$
Take limits as $n$ goes to infinity to get $-{1 \over 2}$.
Hence $\sin(n - \sqrt{n^2 + n})$ converges to $\sin(-{1 \over 2})$, and the series diverges similarly to ${1 \over n}$, using the limit comparison test for example.
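A quick numeric sanity check (a sketch, not a proof): since the terms behave like $\sin(-\frac12)/n$, the partial sums should drift like $\sin(-\frac12)\ln N$ plus a constant:

```python
import math

def term(n):
    return math.sin(n - math.sqrt(n*n + n)) / n

for N in (10**2, 10**4, 10**6):
    s = sum(term(n) for n in range(1, N + 1))
    print(N, s, math.sin(-0.5) * math.log(N))
```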
|
0.246810121416...: Is it an algebraic number? Is the number $0.2468101214\ldots$ algebraic? (After the decimal point, the even natural numbers are written one after another.)
| No, this number is transcendental. The proof by Mahler mentioned in a comment shows this.
A good reference to learn about basic transcendental number theory is the book "Making transcendence transparent: an intuitive approach to classical transcendental number theory", by Edward Burger and Robert Tubbs, Springer-Verlag (2004).
In chapter 1 of the book the proof of the transcendence of Mahler's constant $0.1234\dots$ is discussed. The idea is to show that the "obvious" rational approximations actually are very very close, to the point that they would contradict easy estimates (due to Liouville) for how quickly rational numbers can approximate irrational algebraic numbers. The Wikipedia entry on Liouville numbers discusses Liouville's approximation theorem and related results:
If $\alpha$ is algebraic of degree $d\ge 2$ then there is a constant $C$ such that for any rational $p/q$ with $q>0$, we have $$ \left|\alpha-\frac pq\right|>\frac{C}{q^d}. $$
Actually, there is a bit of work needed here. The estimates the book discusses together with a strengthening of Liouville's theorem give the proof for Mahler's constant, and the same argument works for the number you are asking.
The strengthening we need is due to Klaus Roth in 1955, and he was awarded the Fields medal in 1958 for this result.
|
What will be the remainder? I'm stuck with this problem, which I've been trying to solve for about an hour. Here's the question.
What is the remainder when $3^{202}+137$ is divided by $101$?
There are 4 options -> 36, 45, 56, 11
I want to know the answer to the question, along with a proper and preferably easy method for solving the problem.
Thanks in advance, waiting for reply. :)
| HINT $\ 101\ $ is prime so a little Fermat $\rm\ \Rightarrow\ \ 3^{101}\ \equiv\ 3\ \ \Rightarrow\ \ 3^{202}\ \equiv\ \ldots\ (mod\ 101)$
Since your comment reveals you are not familiar with modular arithmetic, here is an alternative.
By Fermat's little theorem $101$ divides $\: 3^{101}-3\: $ so it also divides $\rm\ (3^{101}-3)\ (3^{101}+3)\ =\ 3^{202}-9$
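Both routes can be confirmed with Python's built-in modular exponentiation (so the answer among the four options is $45$):

```python
print(pow(3, 202, 101))                 # 9, i.e. 3^202 ≡ 9 (mod 101)
print((pow(3, 202, 101) + 137) % 101)   # 45
```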
|
Solving $2x - \sin 2x = \pi/2$ for $0 < x < \pi/2$ What is $x$ in closed form if $2x-\sin2x=\pi/2$, $x$ in the first quadrant?
| The solution is given by $$\displaystyle x = \pi/4 + D/2$$
where $\displaystyle D$ is the root of $\cos y = y$
The root of $\displaystyle \cos y = y$ is nowadays known as the Dottie Number and apparently has no known "closed form" solution. If you consider this number to be part of your constants, then the above can be considered a closed form solution.
For a proof:
If $\displaystyle y = \sin(2x)$
then we have that
$\displaystyle 2x = \pi/2 + y$
$\displaystyle y = \sin 2x = \sin (\pi/2 + y) = \cos y$.
The root of $$\displaystyle y = \cos y$$ is $\displaystyle y = 0.739085\dots$
Notice that $\displaystyle \pi/2 > x \gt \pi/4$ (as $\displaystyle f(x) = 2x - \sin 2x$ is increasing in $\displaystyle [0,\pi/2]$), so if $\displaystyle x = \pi/4 + z$ then
$\displaystyle \sin(2x) = \sin(\pi/2 + 2z) = \cos 2z = 0.739085\dots$
And thus $\displaystyle z = \dfrac{0.739085\dots}{2}$.
Thus $$\displaystyle x \sim \pi/4 + \dfrac{0.739085}{2} \sim 1.154940730005\dots$$
See Also: A003957.
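A numeric cross-check (a sketch): compute the Dottie number by iterating $\cos$, and compare $\pi/4 + D/2$ against a direct bisection solve of $2x - \sin 2x = \pi/2$:

```python
import math

d = 1.0
for _ in range(200):          # fixed-point iteration converges to the Dottie number
    d = math.cos(d)

lo, hi = 0.0, math.pi / 2     # g(x) = 2x - sin(2x) is increasing here
for _ in range(100):
    mid = (lo + hi) / 2
    if 2*mid - math.sin(2*mid) < math.pi / 2:
        lo = mid
    else:
        hi = mid

print(math.pi/4 + d/2, (lo + hi) / 2)   # both ~1.154940730...
```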
|
Why Steenrod squares commute with transgression I'm reading Hatcher's notes on spectral sequences, and he mentions that Steenrod squares commute with the coboundary operator for pairs $(X,A)$, which would then explain why these operations commute with the transgression. He says it's because
that coboundary operator can be defined in terms of suspension, and we know Steenrod operations commute with suspension. Does anyone know the details of this reasoning?
So...
Assuming the standard axioms of Steenrod operations, how do we prove that they commute with the coboundary operator for pairs?
Thank you,
| I realized that your question wasn't exactly about the Steenrod axioms themselves, but about the definition of the coboundary operator involving suspension. In reduced homology, the boundary operator $\partial$ for the pair $(X,A)$ (where the inclusion $i:A\rightarrow X$ is a cofibration) can be defined to come from the "topological boundary map" $\partial^!$ followed by the inverse of the suspension isomorphism. The former is itself a composition
$$ \partial^! = \pi \circ \psi^{-1}: X/A \rightarrow Ci \rightarrow \Sigma A, $$
where $Ci$ is the mapping cone of $i$, $\psi^{-1}$ is a homotopy inverse of the quotient $\psi: Ci \rightarrow Ci/CA=X/A$, and $\pi: Ci \rightarrow Ci/X=\Sigma A$. So
$$ \partial = (\Sigma_*)^{-1} \circ \partial^!_* : \tilde{H}_q(X/A) \rightarrow \tilde{H}_q(\Sigma A) \rightarrow \tilde{H}_{q-1}(A) .$$
In fact, this is true for any reduced homology theory. See May's "Concise Course" for details, pp. 106-7. I'm pretty sure that the situation for cohomology is very similar.
Bottom line: In this formulation, the coboundary operator is the composition of a map induced from an actual map on spaces and the (inverse of the (?)) suspension isomorphism. Steenrod squares commute with both of these, so they commute with the coboundary operator.
|
When does a subbase of a base generate the same topology? Suppose that $\mathcal{B}$ is a base for a topology on a space $X$. Is there a nice way of thinking about how we can modify $\mathcal{B}$ (for instance, to simplify computations) without changing the topology it generates? It seems non-trivial to compute the topology generated by a base, but maybe some "small enough" changes to the base should be safe. The situation is delicate:
If I have a fixed open set $U$, consider the family of sets $\mathcal{B'} = \{ B \in \mathcal{B} | B \subset U \text{ or } B \subset (X \setminus U) \}$
This does not generate the same topology as $\mathcal{B}$, since $\mathcal{B'}$ won't, in general, be a base. For instance, if $X = \mathbb{R}$, and $U = (0,1)$, then $1$ is not contained in any element of $\mathcal{B'}$.
Cover $X$ by open sets $U_i$, and set $\mathcal{B'} = \{ B \in \mathcal{B} | B \subset U_i \text{ for some } i \}$
This, on the other hand, does sometimes work. For instance, consider $X = \text{Spec} A$ with the Zariski topology, and suppose that $X$ is covered by $U_i = \text{Spec} A_i$. A base for the Zariski topology on $X$ is given by sets of the form $D(f) = \{ \mathfrak{p} \subseteq A | f \not \in \mathfrak{p} \}$. We can restrict this base to only include those $D(f)$ that lie in some $U_i$, and we obtain the same topology.
I suppose my question is the following:
If $\mathcal{B}$ is a base for a topology $\mathcal{T}$, are there some nice types of subfamilies of $\mathcal{B}$ (along the lines of the second example above) that will always generate $\mathcal{T}$?
I think this sort of thing comes up when checking that various properties of scheme morphisms are affine local, so I've also tagged this with [algebraic-geometry].
| Since a topology generated by a base consists of open sets that are union of basic open sets, you may drop, from a given base, any open set that is a union of open sets in the same base and get a smaller base.
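A tiny finite illustration of this (a sketch; the brute-force union enumeration only makes sense for small examples): dropping a basis element that is a union of other basis elements leaves the generated topology unchanged.

```python
from itertools import combinations

def topology_from_base(base, X):
    opens = {frozenset()}                     # unions of subfamilies of the base
    for r in range(1, len(base) + 1):
        for fam in combinations(base, r):
            opens.add(frozenset().union(*fam))
    return opens

X = {0, 1, 2}
B  = [frozenset({0}), frozenset({1}), frozenset({0, 1}), frozenset({2})]
B2 = [frozenset({0}), frozenset({1}), frozenset({2})]   # dropped {0,1} = {0} ∪ {1}
print(topology_from_base(B, X) == topology_from_base(B2, X))   # True
```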
|
How to compute a 2x2 Homography out of 3 corresponding points? In 1D projective geometry,
I want to compute the 2x2 Homography matrix $H$ (in homogeneous coordinates), given 3 pairs of corresponding points.
i.e. I want to find H such that:
$$\left(\begin{array}{cc}
h_{11} & h_{12}\\
h_{21} & h_{22}\end{array}\right)\left(\begin{array}{ccc}
0 & a & a+b\\
1 & 1 & 1
\end{array}\right) =
\left(\begin{array}{ccc}
0 &a' &a'+b'\\
1 & 1 & 1
\end{array}\right).$$
However, I've got 6 equations here and only 3 unknowns.
(dof(H) = 4 elements less one for scaling = 3).
I thought about 3 scaling factors that would add up to 6 unknowns, s.t. we would have a unique solution. But how exactly do I insert the scaling factors into the matrices and how can I compute H then?
Do you have a clue?
| Your answer is mathematically correct; however, I figured out another way to solve this equation, which leads to a simpler result.
I applied the technique for 2D projective geometry, which is described here, to the 1D case, and it works out fine.
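For reference, here is a sketch of what that 1D analogue of the 2D technique might look like (my reconstruction under stated assumptions, not necessarily the exact method from the linked description): each correspondence of homogeneous 2-vectors $x \leftrightarrow x'$ gives one linear equation $x'_0(h_{10}x_0 + h_{11}x_1) - x'_1(h_{00}x_0 + h_{01}x_1) = 0$, and three correspondences determine $H$ up to scale via a null-space computation:

```python
import numpy as np

def homography_1d(src, dst):
    """src, dst: lists of three homogeneous 1D points (pairs)."""
    rows = []
    for (x0, x1), (y0, y1) in zip(src, dst):
        rows.append([-y1 * x0, -y1 * x1, y0 * x0, y0 * x1])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(2, 2)       # null vector, defined up to scale

a, b, ap, bp = 1.0, 2.0, 1.5, 0.5     # hypothetical values
src = [(0, 1), (a, 1), (a + b, 1)]
dst = [(0, 1), (ap, 1), (ap + bp, 1)]
H = homography_1d(src, dst)
for s, d in zip(src, dst):
    w = H @ np.array(s, dtype=float)
    print(w[0] / w[1], d[0] / d[1])   # mapped point vs. target
```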
|
What type of triangle satisfies $8R^2 = a^2 + b^2 + c^2$? In a $\triangle ABC$, $R$ is the circumradius and $\displaystyle 8R^2 = a^2 + b^2 + c^2 $; of which type is $\triangle ABC$?
| By the law of sines, $a=2R\sin A$, $b=2R\sin B$, $c=2R\sin C$, so the condition $8R^2=a^2+b^2+c^2$ is equivalent to $\sin^2A+\sin^2B+\sin^2C=2$. Now,
$$\sin^2A+\sin^2B+\sin^2C$$
$$=1-(\cos^2A-\sin^2B)+1-\cos^2C$$
$$=2-\cos(A+B)\cos(A-B)-\cos C\cdot\cos C$$
$$=2-\cos(\pi-C)\cos(A-B)-\cos\{\pi-(A+B)\}\cdot\cos C$$
$$=2+\cos C\cos(A-B)+\cos(A+B)\cdot\cos C\text{ as }\cos(\pi-x)=-\cos x$$
$$=2+\cos C\{\cos(A-B)+\cos(A+B)\}$$
$$=2+2\cos A\cos B\cos C$$
$(1)$ If $2+2\cos A\cos B\cos C=2, \cos A\cos B\cos C=0$
$\implies $ at least one of $\cos A,\cos B,\cos C$ is $0$ which needs the respective angles $=\frac\pi2$
But we can have at most one angle $\ge \frac\pi2$
So, here we shall have exactly one angle $=\frac\pi2$
$(2)$ If $2+2\cos A\cos B\cos C>2, \cos A\cos B\cos C>0$
Either all of $\cos A,\cos B,\cos C$ must be $>0\implies$ all the angles are acute
or exactly two cosine ratios $<0$ which needs the respective angles $> \frac\pi2,$ which is impossible for a triangle
$(3)$ If $2+2\cos A\cos B\cos C<2, \cos A\cos B\cos C<0$
Either all the ratios $<0$, which needs the respective angles $> \frac\pi2,$ which is impossible for a triangle,
or exactly one of the cosine ratios is $<0\implies $ the respective angle $> \frac\pi2.$
Here the given condition puts us in case $(1)$, so $\triangle ABC$ has exactly one right angle: it is a right triangle.
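A quick numeric confirmation (a sketch) for a concrete right triangle, using $K$ from Heron's formula and $R = \frac{abc}{4K}$:

```python
import math

a, b, c = 3.0, 4.0, 5.0
s = (a + b + c) / 2
K = math.sqrt(s * (s - a) * (s - b) * (s - c))   # area via Heron
R = a * b * c / (4 * K)                          # circumradius

print(8 * R**2, a**2 + b**2 + c**2)              # both 50.0
```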
|
Moments and non-negative random variables? I want to prove that for non-negative random variables with distribution $F$:
$$E(X^{n}) = \int_0^\infty n x^{n-1} P(\{X\geq x\})\, dx$$
Is the following proof correct?
$$R.H.S = \int_0^\infty n x^{n-1} P(\{X\geq x\})\, dx = \int_0^\infty n x^{n-1} (1-F(x))\, dx$$
using integration by parts:
$$R.H.S = [x^{n}(1-F(x))]_0^\infty + \int_0^\infty x^{n} f(x) dx = 0 + \int_0^\infty x^{n} f(x) dx = E(X^{n})$$
If not correct, then how to prove it?
| Here's another way. (As the others point out, the statement is true if $E[X^n]$ actually exists.)
Let $Y = X^n$. $Y$ is non-negative if $X$ is.
We know
$$E[Y] = \int_0^{\infty} P(Y \geq t) dt,$$
so
$$E[X^n] = \int_0^{\infty} P(X^n \geq t) dt.$$
Then, perform the change of variables $t = x^n$. This immediately yields
$$E[X^n] = \int_0^{\infty} n x^{n-1} P(X^n \geq x^n) dx = \int_0^{\infty} n x^{n-1} P(X \geq x) dx.$$
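A Monte Carlo sketch of the identity for a concrete choice (an Exp(1) variable and $n = 3$, where $E[X^3] = 3! = 6$); numpy is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.exponential(size=10**6)
lhs = np.mean(samples**3)                # E[X^3] by simulation

xs = np.linspace(0, 30, 3001)
tail = np.exp(-xs)                       # P(X >= x) for Exp(1)
rhs = np.trapz(3 * xs**2 * tail, xs)     # ∫ 3 x^2 P(X >= x) dx

print(lhs, rhs)                          # both close to 6
```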
|
Partitioning a graph (clustering of point sets in 2 dimensions) I am given $n$ points in 2D (each of approximately equal weight). I want to partition them into $m$ clusters ($m$ can be anything; it is input by the user) in such a way that the center of mass of each cluster is "far" from the centers of mass of all other clusters. What is a good heuristic approach for this (it should also be quick and easy to implement)? My current approach is to set up a binary tree at each step: the line I choose to split a cluster at each step is the one that maximizes the moment of inertia of the set of points in the cluster I am splitting. Any suggestion welcome!
| The keyword is "clustering", as mentioned in Moron's answer. Problems of this type are typically NP-hard. In practice, K-means is not bad in its runtime or (depending very much on the application) its results. Like the simplex algorithm for linear programming, it can take exponential time in the worst case, but its practical complexity is much lower. The worst-case bound was proven only very recently.
Also, partitioning a graph is a different problem. Here you are partitioning a set of points and distances are used but not any graph structure.
(added:)
Here is the smoothed analysis. K-means has polynomial runtime when averaged over (Gaussian) random perturbations of the input, which is not surprising considering the practical efficiency:
http://arxiv.org/abs/0904.1113
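For the record, here is a minimal sketch of Lloyd's algorithm (plain K-means with random seeding, no K-means++; all parameter choices are hypothetical):

```python
import numpy as np

def kmeans(points, m, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), m, replace=False)].copy()
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its cluster
        for j in range(m):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

pts = np.random.default_rng(1).normal(size=(200, 2))
labels, centers = kmeans(pts, 4)
print(centers)
```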
|
Let $a$ be a quadratic residue modulo $p$. Prove $a^{(p-1)/2} \equiv 1 \bmod p$.
Question:
Let $a$ be a quadratic residue to a prime modulus $p$. Prove $a^{(p-1)/2} \equiv 1 \pmod{p}$.
My attempt at a solution:
\begin{align*}
&a\text{ is a quadratic residue}\\\
&\Longrightarrow a\text{ is a residue class of $p$ which has even index $c$ relative to a primitive root $g$}\\\
&\Longrightarrow a \equiv g^c \pmod{p}\\\
&\Longrightarrow a \equiv g^{2k} \pmod{p}\text{ where $2k=c$}\\\
&\Longrightarrow g^{2kv} \equiv g^{c} \pmod{p}\text{ for some natural number $v$}\\\
&\Longrightarrow 2kv \equiv c \pmod{p-1}\text{ (by a proof in class)}\\\
&\Longrightarrow 2kv \equiv 2k \pmod{p-1}\\\
&\Longrightarrow kv \equiv k \pmod{(p-1)/2}\\\
&\Longrightarrow v \equiv k(k^{-1}) \pmod{(p-1)/2}\text{ since $\gcd(2k, p-1)$ is not equal to 1}\\\
&\Longrightarrow k^{-1} \text{ (k inverse exists)}\\\
&\Longrightarrow v \equiv 1 \pmod{(p-1)/2}.
\end{align*}
I believe this implies that $g^{(p-1)/2} \equiv 1 \pmod{p}$, is this correct?
Although what I was required to show was $a^{(p-1)/2} \equiv 1 \pmod{p}$. Am I on the right track, and how do I show this? I've spent quite some time on this and looked over all the proofs in my notes, and I can't seem to figure it out.
| Bill has succinctly told you how to prove the result. But you were also asking for comments on your proposed argument. I will address that.
In line 5, where did the $v$ come from, and what is its role? Notice that you can take $v=1$ and what you write is true. So how is this giving you any information?
In line 9, you are already using an inverse of $k$, even though you only assert its existence in the next line. You cannot do that: in order to use it, you must first show it exists, and you haven't done it.
But assuming it does exist, and that your entire chain of argument holds, you'll notice that all you concluded was that $v\equiv 1\pmod{(p-1)/2}$. This is of course natural: you have $a=g^c=g^{2k}=g^{2kv}$; you can always take $v=1$ and that will work regardless of $k$, $g$, $a$... And you probably know now that it does not lead to a proof.
So you had not actually proven anything. You've only written $a$ as an even power of a primitive root, and that's it. Lines 1 through 4 are correct; but from line 5 through the end, you are just spinning your wheels and not getting any closer to the result you want.
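The statement itself is easy to spot-check numerically (Euler's criterion in action for the hypothetical prime $p = 23$):

```python
p = 23
residues = sorted({x * x % p for x in range(1, p)})   # quadratic residues mod p
print(residues)
print(all(pow(a, (p - 1) // 2, p) == 1 for a in residues))   # True
```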
|
Integral with Tanh: $\int_{0}^{b} \tanh(x)/x\, \mathrm{d} x$. What would be the solution when $b$ does not tend to infinity, though it is large? Here are two integrals that got my attention because I really don't know how to solve them. They are a solution to the CDW equation below the critical temperature of a 1D strongly correlated electron-phonon system. The second one is used in the theory of superconductivity, while the first is a more complex variation in lower dimensions. I know the result for the second one, but without the whole calculation it is meaningless.
$$ \int_0^b \frac{\tanh(c(x^2-b^2))}{x-b}\mathrm{d}x $$
$$ \int_0^b \frac{\tanh(x)}{x}\mathrm{d}x \approx \ln\frac{4e^\gamma b}{\pi} \quad\text{as } b \to \infty$$
where $\gamma = 0.57721...$ is Euler's constant
| For $x$ large, $\tanh x$ is very close to $1$. Therefore for large $b$, $$\int_0^b \frac{\tanh x}{x} \, \mathrm{d}x \approx C + \int^b \frac{\mathrm{d}x}{x} = C' + \log b.$$ You can prove it rigorously and obtain a nice error bound if you wish. Your post indicates a specific value of $C'$, but for large $b$, any two "close" constants $C_1,C_2$ will satisfy $$\log b + C_1 \approx \log b + C_2,$$ so probably $\gamma + \log (4/\pi)$ has no significance other than being a number close to $C'$ and having a nice form.
If we do the estimation rigorously, we will probably find out that $C'$ is well defined (i.e. the error in the first $\approx$ is $o(1)$), and then one can ask for its value. It probably has no nice closed form.
EDIT: In fact $\gamma + \log (4/\pi)$ is the correct constant, as shown in Derek's answer.
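A numeric comparison (a sketch assuming scipy) showing how quickly $\gamma + \log(4/\pi)$ matches the integral:

```python
import math
from scipy.integrate import quad

gamma = 0.5772156649015329            # Euler's constant

integrand = lambda x: math.tanh(x) / x if x != 0 else 1.0
for b in (5.0, 50.0, 500.0):
    val, _ = quad(integrand, 0, b)
    print(b, val, math.log(4 * math.exp(gamma) * b / math.pi))
```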
|
Convergence of integrals in $L^p$ Stuck with this problem from Zygmund's book.
Suppose that $f_{n} \rightarrow f$ almost everywhere and that $f_{n}, f \in L^{p}$ where $1<p<\infty$. Assume that $\|f_{n}\|_{p} \leq M < \infty$. Prove that:
$\int f_{n}g \rightarrow \int fg$ as $n \rightarrow \infty$ for all $g \in L^{q}$ such that $\dfrac{1}{p} + \dfrac{1}{q} = 1$.
Right, so I estimate the difference of the integrals and using Hölder end up with:
$$\left|\int f_{n} g - \int fg\right| \leq \|g\|_{q} \|f_{n} - f\|_{p}$$
From here I'm stuck because we are not assuming convergence in the seminorm but just pointwise convergence almost everywhere. How to proceed?
| Here is a proof which is not based on Egoroff's theorem. As Jonas T points out in another answer, Fatou's lemma implies
$$\int|f|^p=\int\lim |f_n|^p =\int\liminf |f_n|^p \le \liminf\int |f_n|^p \le M^p$$
and hence $f\in L^p$.
At this point we may assume $f=0$ (consider the problem with $f_n$ replaced by $f_n-f$), and we must show $\int f_ng\to 0$.
Fix a number $A\gt0$, and let $E_n=\{x: |f_n(x) g(x)|\le A|g(x)|^q\}$ where $q=p/(p-1)$ (the conjugate to $p$). By dominated convergence we have
$$\int_{E_n}f_ng\,dx\to0$$
On the complement, $E_n^c =X-E_n$, we have $|g|^{q}\le A^{-1}|f_n g|$. Hence, by Hölder's inequality
$$\int_{E_n^c}|f_ng|\,dx\le\left(\int_{E_n^c} |f_n|^p\right)^{1/p}\left(\int_{E_n^c} |g|^q\right)^{1/q}\le M A^{-1/q}\left(\int_{E_n^c}|f_ng|\,dx\right)^{1/q}$$
In other words
$$\left(\int_{E_n^c}|f_ng|\,dx\right)^{1-1/q}\le MA^{-1/q}$$
that is
$$\limsup_n\int_{E_n^c}|f_ng|\,dx\le M^pA^{-p/q}.$$
In total we have shown
$$\limsup_n\int|f_ng|\,dx\le M^pA^{-p/q}$$
for all $A\gt0$, from which we conclude $$\lim_n\int|f_ng|=0.$$
|
Calculus, find the limit, Exp vs Power? $\lim_{x\to\infty} \frac{e^x}{x^n}$
$n$ is any natural number.
Using L'hopital doesn't make much sense to me. I did find this in the book:
"In a struggle between a power and an exp, the exp wins."
Can I give that line as an answer? If the fraction were flipped, then the limit would be zero. But in this case the limit is actually $\infty$.
| HINT: One way of looking at this would be: $$\frac{1}{x^{n}} \biggl[ \biggl(1 + \frac{x}{1!} + \frac{x^{2}}{2!} + \cdots + \frac{x^{n}}{n!}\biggr) + \frac{x^{n+1}}{(n+1)!} + \cdots \biggr]$$
I hope you understand why I put the brackets around those terms.
|
Yet another inequality: $|a+b|^p<2^p(|a|^p+|b|^p)$ Let $a$ and $b$ be real numbers and $p>0$. What is the best way to prove that $|a+b|^p<2^p(|a|^p+|b|^p)$?
| Well,
\begin{align*}
|a + b|^p \leq (|a| + |b|)^p &\leq \bigl(2 \max\{|a|, |b|\}\bigr)^p\\
&= 2^p \max\{|a|^p, |b|^p\}\\
&\leq 2^p (|a|^p + |b|^p)
\end{align*}
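A random spot-check of the inequality (a sketch, not a proof):

```python
import random

random.seed(0)
for _ in range(10**5):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    p = random.uniform(0.01, 5)
    assert abs(a + b)**p <= 2**p * (abs(a)**p + abs(b)**p) + 1e-12
print("no counterexample found")
```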
|
Zero to the zero power – is $0^0=1$? Could someone provide me with a good explanation of why $0^0=1$?
My train of thought:
$x>0$
$0^x=0^{x-0}=0^x/0^0$, so
$0^0=0^x/0^x=\,?$
Possible answers:
*
*$0^0\cdot0^x=1\cdot0^0$, so $0^0=1$
*$0^0=0^x/0^x=0/0$, which is undefined
PS. I've read the explanation on mathforum.org, but it isn't clear to me.
| A clear and intuitive answer can be provided by ZFC set theory. As described in Enderton's 'Elements of Set Theory' (available free for viewing here; see pdf-page 151): http://sistemas.fciencias.unam.mx/~lokylog/images/stories/Alexandria/Teoria%20de%20Conjuntos%20Basicos/Enderton%20H.B_Elements%20of%20Set%20Theory.pdf, the set of all functions from the empty set to the empty set consists of exactly one function, namely the empty function. Hence $0^0 = 1$.
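The counting argument can be made concrete: functions $A \to B$ correspond to one choice of output per input, i.e. there are $|B|^{|A|}$ of them, and for $A = B = \emptyset$ there is exactly one (the empty function):

```python
from itertools import product

def num_functions(A, B):
    # one tuple of outputs per function A -> B
    return sum(1 for _ in product(B, repeat=len(A)))

print(num_functions([], []))       # 1: "0^0 = 1"
print(num_functions([], [1, 2]))   # 1: 2^0 = 1
print(num_functions([1, 2], []))   # 0: 0^2 = 0
```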
|
Solving a recurrence relation that contains a summation of the previous terms $$T(n)=1+2\sum_{i=1}^{n-1}T(i) , \quad n > 1$$
$$T(1)=1$$
Any hint on how to solve this?
| Using a spreadsheet, I note that $T(n)=3^{n-1}$. This is easily verified by induction.
$T(1)=1=3^0$.
Then, if it is true up to $n$, $$T(n+1)=1+2\sum_{i=0}^{n-1}3^i=1+2\cdot\frac{3^n-1}{3-1}=3^n.$$
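The spreadsheet observation is easy to reproduce in a few lines:

```python
def T(n):
    return 1 if n == 1 else 1 + 2 * sum(T(i) for i in range(1, n))

print([T(n) for n in range(1, 8)])        # [1, 3, 9, 27, 81, 243, 729]
print([3**(n - 1) for n in range(1, 8)])  # same
```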
|
density of 3D Gaussian distribution For a 2D Gaussian distribution with
$$ \mu = \begin{pmatrix} \mu_x \\ \mu_y \end{pmatrix}, \quad \Sigma = \begin{pmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho \sigma_x \sigma_y & \sigma_y^2 \end{pmatrix},
$$
its probability density function is
$$
f(x,y) = \frac{1}{2 \pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp\left( -\frac{1}{2(1-\rho^2)}\left[ \frac{(x-\mu_x)^2}{\sigma_x^2} + \frac{(y-\mu_y)^2}{\sigma_y^2} - \frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x \sigma_y} \right] \right),
$$
I was wondering if there is also a similarly clean formula for 3D Gaussian distribution density? What is it?
Thanks and regards!
EDIT:
What I am asking is whether, after taking the inverse of the covariance matrix, the density has a clean form just as in the 2D case.
| There is a standard, general formula for the density of the joint normal (or multivariate normal) distribution of dimension $n$, provided that the ($n \times n$) covariance matrix $\Sigma$ is non-singular (see, e.g., this or this). In particular, you can apply it for $n=3$. When the covariance matrix is singular, the distribution is expressed in terms of the characteristic function.
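For concreteness, here is a sketch of the general density, $f(x) = (2\pi)^{-n/2}\det(\Sigma)^{-1/2}\exp\left(-\frac12 (x-\mu)^T\Sigma^{-1}(x-\mu)\right)$, evaluated for a hypothetical $3$-dimensional example:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    n = len(mu)
    diff = x - mu
    quad_form = diff @ np.linalg.solve(Sigma, diff)   # (x-mu)^T Sigma^{-1} (x-mu)
    return np.exp(-0.5 * quad_form) / np.sqrt((2 * np.pi)**n * np.linalg.det(Sigma))

mu = np.zeros(3)
Sigma = np.array([[1.0, 0.2, 0.1],
                  [0.2, 1.0, 0.3],
                  [0.1, 0.3, 1.0]])
print(mvn_pdf(np.array([0.5, -0.2, 0.1]), mu, Sigma))
```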
|
Curve of a fixed point of a conic compelled to pass through 2 points Suppose that in the plane a given conic curve is compelled to pass through two fixed points of that plane.
What are the curves traced out by a fixed point of the conic, by its center (for an ellipse), by its focus, etc.?
(I apologize for the bad English ...)
| First, a few animations:
These were generated by a parabola with focal length $a=1$ and distance between two points $c=5$. The first one has the focus of the parabola as the tracing point, while the second one has the vertex as tracing point.
Now, for the mathematics: using whuber's and Blue's comments as a possible interpretation, the question is asking what would be the (point-)glissette of a conic sliding between two points. I'll consider the parabolic case here, since it's the easiest of the three.
Here's the general idea: start with some parabola $(2at\quad at^2)^T$ (where $a$ is the "focal length", or the distance from the vertex to the focus), imagine a chord of length $c$ moving along the parabola, and then translate and rotate the parabola in such a way as to have the chord's endpoints match the two fixed points at $(-c/2,0)$ and $(c/2,0)$.
Here's the complication: letting the two points on the parabola at a distance $c$ from each other have the parameters $u$ and $u+h$, we obtain the quartic equation
$$h^4+4uh^3+4(1+u^2)h^2-\left(\frac{c}{a}\right)^2=0$$
and as you might know, solving a quartic equation is complicated. The algebra is hellish, and I'll thus skip that for the time being. Assuming that we now have the (complicated!) function $h(a,c,u)$ for computing the lone positive root of that quartic equation, here's what you do: translate the tracing point $(x_t,y_t)$ so that the point $(2au\quad au^2)^T$ is the origin, rotate by an appropriate rotation matrix, and then translate again by the point $(c/2,0)$. The "appropriate matrix" is obtained by considering the slope of the chord of length $c$ of the parabola:
$$m=\frac{a(u+h)^2-au^2}{2a(u+h)-2au}=u+\frac{h(a,c,u)}{2}$$
and from that construct the rotation matrix
$$\frac1{\sqrt{1+m^2}}\begin{pmatrix}1&m\\-m&1\end{pmatrix}$$
Assembling that all together gives
$$\frac1{\sqrt{1+(u+h(a,c,u)/2)^2}}\begin{pmatrix}1&u+h(a,c,u)/2\\-u-h(a,c,u)/2&1\end{pmatrix}\cdot\left(\begin{pmatrix}x_t\\y_t\end{pmatrix}-\begin{pmatrix}2au\\au^2\end{pmatrix}\right)-\begin{pmatrix}c/2\\0\end{pmatrix}$$
You can obtain the complicated parametric equations for the parabola glissette by replacing the $h(a,c,u)$ with the appropriate expression for the positive root of the quartic equation given earlier.
The elliptic and hyperbolic cases are even more complicated than this; I'll leave the investigation of that to someone with more endurance and mathematical ability than me. :)
The Mathematica notebook for generating these animations can be obtained from me upon request.
In the Mathematica notebook I provided, I used the function Root[] for representing the function $h(a,c,u)$. To show that I wasn't pulling your leg, gentle reader, I'll display the explicit form of $h(a,c,u)$, the way Ferrari would have.
Consider again the quartic equation
$$h^4+4uh^3+4(1+u^2)h^2-\left(\frac{c}{a}\right)^2=0$$
The resolvent cubic for this quartic is
$$y^3-4(u^2+1)y^2+\frac{4c^2}{a^2}y-\frac{16c^2}{a^2}=0$$
and the (only) positive root of this cubic is given by the expression
$$y_+=\frac13\left(4(1+u^2)+\frac{2(4a^2 (1+u^2)^2-3c^2)}{a\sqrt[3]{v}}+\frac{2}{a}\sqrt[3]{v}\right)$$
where
$$v=8a^3 (1+u^2)^3-9ac^2 (u^2-2)-3c\sqrt{3}\sqrt{c^4+16a^4 (1+u^2)^3+a^2 c^2 (8-20u^2-u^4)}$$
and the real cube root is always taken.
From $y_+$, we can compute $h(a,c,u)$ as the positive root of the quadratic
$$h^2+\frac{h}{2}(4u-2\sqrt{y_+-4})+\frac12\left(y_+-\sqrt{\frac{4c^2}{a^2}+y_+^2}\right)$$
that is,
$$h(a,c,u)=-u+\frac{\sqrt{y_+-4}}{2}+\sqrt{u^2-1-u\sqrt{y_+-4}-\frac{y_+}{4}+\sqrt{\frac{y_+^2}{4}+\frac{c^2}{a^2}}}$$
(I told you it was complicated... ;))
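If you only need numbers rather than Ferrari's closed form, $h(a,c,u)$ can be obtained directly as the positive real root of the quartic (a sketch with hypothetical parameter values):

```python
import numpy as np

def h(a, c, u):
    # h^4 + 4u h^3 + 4(1 + u^2) h^2 - (c/a)^2 = 0
    roots = np.roots([1, 4*u, 4*(1 + u*u), 0, -(c/a)**2])
    real = roots[np.isclose(roots.imag, 0)].real
    return real[real > 0][0]

print(h(1.0, 5.0, 0.3))
```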
|
Why would I want to multiply two polynomials? I'm hoping that this isn't such a basic question that it gets completely laughed off the site, but why would I want to multiply two polynomials together?
I flipped through some algebra books and have googled around a bit, and whenever they introduce polynomial multiplication they just say 'Suppose you have two polynomials you wish to multiply', or sometimes it's just as simple as 'find the product'. I even looked for some example story problems, hoping that might let me in on the secret, but no dice.
I understand that a polynomial is basically a set of numbers (or, if you'd rather, a mapping of one set of numbers to another), or, in another way of thinking about it, two polynomials are functions, and the product of the two functions is a new function that lets you apply the function once, provided you were planning on applying the original functions to the number and then multiplying the results together.
Elementary multiplication can be described as 'add $X$ to itself $Y$ times', where $Y$ is a nice integer number of times. When $Y$ is not a whole number, it doesn't seem to make as much sense.
Any ideas?
| When you take calculus, you will need to factor a polynomial p as a product of two polynomials a and b. If you know how polynomial multiplication works, then finding factorizations is easier. Learn how to multiply now so that you can factor easily later. :)
|
When functions commute under composition Today I was thinking about composition of functions. It has nice properties: it's always associative, there is an identity, and if we restrict to bijective functions then we have inverses.
But then I thought about commutativity. My first intuition was that bijective self-maps of a space should commute, but then I saw some counterexamples. The symmetric group is only abelian if $n \le 2$, so clearly there need to be more restrictions on functions than bijectivity for them to commute.
The only examples I could think of were boring things like multiplying by a constant or maximal tori of groups like $O(n)$ (maybe less boring).
My question: In a Euclidean space, what are (edit) some nice characterizations of sets of functions that commute? What about in a more general space?
Bonus: Is this notion of commutativity important anywhere in analysis?
| This question may also be related to how certain functions behave under functions of their variables. In this context, the property of commuting with binary operators, such as addition and multiplication, can be used to define classes of functions:
*
*additive commutation: if $g(x, y) = x + y$, then $f\big(g(x, y)\big) = g\big(f(x),\ f(y)\big)$ if and only if $f(x + y) = f(x) + f(y)$ thus $f$ is a homogeneous linear function of the form $f(x; a) \equiv ax$
*multiplicative commutation: if $g(x, y) = xy$, then $f\big( g(x, y) \big) = g\big(f(x),\ f(y)\big)$ if and only if $f(xy) = f(x)f(y)$ thus $f$ is "scale invariant" i.e. a power law of the form $f(x; a) \equiv x^a$
*log-additive commutation: if $g(x, y) = x + y$, then $\log f\big( g(x, y) \big) = g\big( \log f(x),\ \log f(y) \big)$ if and only if $f(x + y) = f(x)f(y)$ thus $f$ is an exponential function of the form $f(x; a) \equiv \exp(ax)$
The last item (3) involves a third function (the logarithm) which when denoted as $h$ gives
$h\big(f[g(x, y)]\big) = g\big(h[f(x)],\ h[f(y)]\big)$
or
$h \circ f \circ g(x, y) = g\big(h \circ f(x),\ h \circ f(y)\big).$
Since $h \circ f$ occurs on both sides, we can denote this as $\tilde f$ to get
$\tilde f \big( g(x, y) \big) = g \big( \tilde f(x), \tilde f(y) \big)$
which has the same form as item (1) above. From this perspective, items (1) and (3) above can be seen as being isomorphic under the $\exp$ and $\log$ pair of invertible mappings.
|
If there are $200$ students in the library, how many ways are there for them to be split among the floors of the library if there are $6$ floors? Need help studying for an exam.
Practice Question:
If there are $200$ students in the library, how many ways are there for them to be split among the floors of the library if there are $6$ floors?
Hint: The students can not be told apart (they are indistinguishable).
The answer must be in terms of $P(n,r), C(n,r)$, powers, or combinations of these. The answers do not have to be calculated.
| Note that if they are distinguishable then the number of ways is given by $6^{200}$, since each of the 200 students has $6$ choices of floors.
However, we are given that the students are indistinguishable.
Hence, we are essentially interested in solving $a_1 + a_2 + a_3 + a_4 + a_5 + a_6 = 200$, where $a_i$ denotes the number of students in the $i^{th}$ floor.
The constraints are $0 \leq a_i \leq 200$, $\forall i \in \{1,2,3,4,5,6\}$.
We will in fact look at a general version of this problem.
We want to find the total number of natural number solutions for the following equation:
$\displaystyle \sum_{i=1}^{n} a_i = N$, where $a_i \in \mathbb{N}$
The method is as follows:
Consider $N$ sticks.
$| | | | | | | | ... | | |$
We want to partition these $N$ sticks into $n$ parts.
This can be done if we draw $n-1$ long vertical lines in between these $N$ sticks.
The number of gaps between these $N$ sticks is $N-1$.
So the total number of ways of drawing these $n-1$ long vertical lines in between these $N$ sticks is $C(N-1,n-1)$.
So the number of natural number solutions for $\displaystyle \sum_{i=1}^{n} a_i = N$ is $C(N-1,n-1)$.
If we are interested in the number of non-negative integer solutions, all we need to do is replace $a_i = b_i - 1$ and count the number of natural number solutions for the resulting equation in $b_i$'s.
i.e. $\displaystyle \sum_{i=1}^{n} (b_i - 1) = N$ i.e. $\displaystyle \sum_{i=1}^{n} b_i = N + n$.
So the number of non-negative integer solutions to $\displaystyle \sum_{i=1}^{n} a_i = N$ is given by $C(N+n-1,n-1)$.
So, for the current problem assuming that some of the floors can be empty, the answer is $C(200+5,5) = C(205,5) = 2872408791$.
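A quick check of the arithmetic, together with a brute-force confirmation of the stars-and-bars count for a small hypothetical case ($N$ students, $n$ floors):

```python
from math import comb
from itertools import product

print(comb(205, 5))   # 2872408791

N, n = 5, 3
brute = sum(1 for t in product(range(N + 1), repeat=n) if sum(t) == N)
print(brute, comb(N + n - 1, n - 1))   # both 21
```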
|
Find formula from values Is there any "algorithm" or steps to follow to get a formula from a table of values?
Example:
Using this values:
X Result
1 3
2 5
3 7
4 9
I'd like to obtain:
Result = 2X+1
Edit
Maybe using Excel?
Edit 2
Additional info:
It is not going to be always a polynomial and it may have several parameters (I think 2).
| (This may be more machinery than needed here, but one can always find a polynomial that fits all the points.)
One possible algorithm is the Lagrange interpolating polynomial.
For a polynomial $P$ of degree $n-1$ passing through $n$ points:
$$(x_1,y_1=f(x_1)),\ldots,(x_n,y_n=f(x_n))$$
We have
$$P(x)=\sum_{j=1}^n\left[y_j\prod^n_{k=1,k\neq j}\frac{x-x_k}{x_j-x_k}\right]$$
Explicitly,
$$P(x)=\frac{y_1(x-x_2)\cdots(x-x_n)}{(x_1-x_2)\cdots(x_1-x_n)}+
\frac{y_2(x-x_1)(x-x_3)\cdots(x-x_n)}{(x_2-x_1)(x_2-x_3)\cdots(x_2-x_n)}+\ldots
+\\\frac{y_n(x-x_1)\cdots(x-x_{n-1})}{(x_n-x_1)\cdots(x_n-x_{n-1})}$$
In this context,
\begin{align}
P(n)&=\frac{3(n-2)(n-3)(n-4)}{(1-2)(1-3)(1-4)}+\frac{5(n-1)(n-3)(n-4)}{(2-1)(2-3)(2-4)}\\
&+\frac{7(n-1)(n-2)(n-4)}{(3-1)(3-2)(3-4)}+\frac{9(n-1)(n-2)(n-3)}{(4-1)(4-2)(4-3)}
\end{align}
Simplify and we get
$$P(n)=2n+1
$$
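The same interpolation takes a few lines with scipy (a sketch); for these collinear points the higher coefficients vanish (up to floating-point noise) and the fit collapses to the line $2x + 1$:

```python
import numpy as np
from scipy.interpolate import lagrange

xs = np.array([1, 2, 3, 4])
ys = np.array([3, 5, 7, 9])
print(lagrange(xs, ys))   # poly1d: ~ 2 x + 1
```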
|
Conversion of a power series $-3x+4x^2-5x^3+\ldots $ into $ -2+\frac 1 x - 0 - \frac 1 {x^3} + \ldots $ This is initially a funny question, because I found this in old notes but I cannot find/recover my own derivation... But then the question is more general.
Q1:
I considered the function
$ f(x) = - \frac {2x^2+3x}{(x+1)^2} $
I expressed this by a power series $ f_1(x) = -3x + 4x^2 - 5x^3 + 6x^4 - \ldots $
and stated without the derivation that this is also
$ f_2(x) = -2 + \frac {1}{x} + 0 - \frac {1} {x^3} + \frac {2}{x^4} - \ldots + \ldots $
and - well - I don't see now how I did it.
What was interesting to me was that, after looking at the fixed points $ x_0=0, x_{1,2} =-2 $, the range of convergence of the expression $f_1$ is obviously $ |x|<1 $, limited to the unit interval, but that of $f_2$ is $ |x|>1 $, extending to infinity.
Q2:
I would like to be able to translate other power series into an $f_2$-type expression as well. (I remember having read a remark about "expanding a power series at infinity" but have never seen an explanation of this - so this might be irrelevant for this case?) So: what is the technique to do this, given a function in terms of a usual power series, for instance the geometric series $ g(x)=1+x+x^2+ \ldots $ or some series $ h(x) = K + ax + bx^2 + cx^3 + \ldots $ ?
[edit: minus-sign in f(x) was missing, one numerator in f2 was wrong]
| Divide the numerator and denominator of $f(x)$ by $x^2$, set $y=1/x$, then expand in $y$ and you have your expansion at infinity.
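sympy can carry out exactly this expansion at infinity (a sketch):

```python
from sympy import symbols, oo, series

x = symbols('x')
f = -(2*x**2 + 3*x) / (x + 1)**2

print(series(f, x, oo, 5))   # -2 + 1/x - 1/x**3 + 2/x**4 + ...
print(series(f, x, 0, 5))    # -3*x + 4*x**2 - 5*x**3 + 6*x**4 + ...
```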
|
Algebraic Identity $a^{n}-b^{n} = (a-b) \sum\limits_{k=0}^{n-1} a^{k}b^{n-1-k}$ Prove the following: $\displaystyle a^{n}-b^{n} = (a-b) \sum\limits_{k=0}^{n-1} a^{k}b^{n-1-k}$.
So one could use induction on $n$? Could one also use trichotomy or some type of combinatorial argument?
| You can apply Ruffini's rule. Here is a copy from my algebra textbook (Compêndio de Álgebra, VI, by Sebastião e Silva and Silva Paulo) where the following formula is obtained:
$x^n-a^n\equiv (x-a)(x^{n-1}+ax^{n-2}+a^2x^{n-3}+\cdots +a^{n-2}x+a^{n-1}).$
Translation: Ruffini's rule can be used to find the quotient of $x^n-a^n$ by $x-a$:
(Figure)
Thus, if $n$ is a natural number, we have
$x^n-a^n\equiv (x-a)(x^{n-1}+ax^{n-2}+a^2x^{n-3}+\cdots +a^{n-2}x+a^{n-1})$
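Ruffini's rule is also pleasant to run in code; this sketch divides $x^n - a^n$ by $x - a$ synthetically and recovers the quotient coefficients $1, a, a^2, \ldots, a^{n-1}$:

```python
def ruffini(coeffs, r):
    """Synthetic division of a polynomial (coefficients, highest degree
    first) by (x - r); returns (quotient coefficients, remainder)."""
    q = [coeffs[0]]
    for c in coeffs[1:]:
        q.append(c + r * q[-1])
    return q[:-1], q[-1]

n, a = 5, 2
coeffs = [1] + [0] * (n - 1) + [-a**n]      # x^5 - 2^5
quotient, remainder = ruffini(coeffs, a)
print(quotient, remainder)                  # [1, 2, 4, 8, 16] 0
```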
|
Rational Numbers and Uniqueness Let $x$ be a positive rational number of the form $\displaystyle x = \sum\limits_{k=1}^{n} \frac{a_k}{k!}$ where each $a_k$ is a nonnegative integer with $a_k \leq k-1$ for $k \geq 2$ and $a_n >0$. Prove that $a_1 = [x]$, $a_k = [k!x]-k[(k-1)!x]$ for $k = 2, \dots, n$, and that $n$ is the smallest integer such that $n!x$ is an integer. Conversely, show that every positive rational number can be expressed in this form in one and only one way. Note that $[x]$ is the greatest integer function.
So I think there are two parts to this: (i) an inductive proof and (ii) a proof by contradiction. Would this be the correct "high level" approach to this problem?
| Since $a_k \le k-1$ for $k \ge 2$ we have
$$ \lfloor x \rfloor = \left\lfloor \sum_{k=1}^n \frac{a_k}{k!} \right\rfloor \le a_1
+ \left\lfloor \sum_{k=2}^n \frac{k-1}{k!} \right\rfloor = a_1 $$
as the latter term is $0$ since $ \sum_{k=2}^n \frac{k-1}{k!} = \sum_{k=2}^n \left\lbrace \frac{1}{(k-1)!} - \frac{1}{k!} \right\rbrace = 1 - \frac{1}{n!} < 1.$
To show $ a_m = \lfloor m! x \rfloor - m \lfloor (m-1)! x \rfloor $ for $m=2,3,\ldots,n$ we note that
$$ \lfloor m! x \rfloor = \left\lfloor m! \sum_{k=1}^n \frac{a_k}{k!} \right\rfloor =
A_m + \left\lfloor m! \sum_{k=m+1}^n \frac{a_k}{k!} \right\rfloor, $$
where $A_m$ is an integer given by
$$A_m = m! \frac{a_1}{1!} + m! \frac{a_2}{2!} + \cdots + m! \frac{a_m}{m!}.$$
However
$$ m! \sum_{k=m+1}^n \frac{a_k}{k!} \le m! \sum_{k=m+1}^n \frac{k-1}{k!}
= m! \sum_{k=m+1}^n \left\lbrace \frac{1}{(k-1)!} - \frac{1}{k!} \right\rbrace $$
$$ = m! \left( \frac{1}{m!} - \frac{1}{n!} \right) = 1 - \frac{m!}{n!} < 1 $$
and hence
$$ \left\lfloor m! \sum_{k=m+1}^n \frac{a_k}{k!} \right\rfloor = 0.$$
Also for $ m > 1 $
$$ m \lfloor (m-1)! x \rfloor = m \left\lfloor (m-1)! \sum_{k=1}^n \frac{a_k}{k!} \right\rfloor $$
$$=m \left\lbrace (m-1)! \frac{a_1}{1!} + (m-1)! \frac{a_2}{2!} + \cdots + (m-1)!
\frac{a_{m -1}}{(m-1)!} \right\rbrace + m \left\lfloor \sum_{k=m}^n \frac{a_k}{k!} \right\rfloor $$
$$= \left( A_m - a_m \right) + m \left\lfloor \sum_{k=m}^n \frac{a_k}{k!} \right\rfloor
= A_m - a_m . $$
since
$$ \sum_{k=m}^n \frac{a_k}{k!} \le \sum_{k=m}^n \frac{k-1}{k!}
= \frac{1}{(m-1)!} - \frac{1}{n!} < 1 .$$
Which proves that for $ m > 1 $
$$ \lfloor m! x \rfloor - m \lfloor (m-1)! x \rfloor = A_m - (A_m - a_m) = a_m . \qquad (1)$$
To prove that $n$ is the smallest integer such that $n! x $ is an integer, suppose that
$ (n-1)! x $ is an integer. Then by $(1)$ we have for $n > 1$
$$ a_n = \lfloor n! x \rfloor - n \lfloor (n-1)! x \rfloor = n! x - n (n-1)! x = 0 $$
contradicting the fact that $ a_n > 0 .$ And so $ (n-1)! x $ cannot be an integer, and if $ m! x $ were an integer for any $ m < n $, then $(n-1)(n-2) \cdots (m+1)\, m! x = (n-1)! x $ would be an integer, another contradiction. Hence $ m! x $ cannot be an integer.
To show that every positive rational number $x$ can be expressed in this form, let $ x = p/q, $ where $ \gcd(p,q)=1 \textrm{ and } p,q \in \mathbb{N} $, and let $ n $ be the smallest integer such that $ n! p/q $ is an integer. Define
$$ \begin{align*}
a_1 &= \left\lfloor \frac{p}{q} \right\rfloor
\\ a_m &= \left\lfloor m! \frac{p}{q} \right\rfloor - m \left\lfloor (m-1)! \frac{p}{q} \right\rfloor
\quad \textrm{ for } m > 1. \quad (2)
\end{align*} $$
We note that
$$ \left\lfloor (n-1)! \frac{p}{q} \right\rfloor < (n-1)! \frac{p}{q} , $$
since $ (n-1)! p/q $ is not an integer, and so $a_n > 0 .$
Also, since $ (m-1)! p/q $ is not an integer, for $ m \le n $ we can write
$$ (m-1)! \frac{p}{q} = N_m + r_m \quad \textrm{ where } 0 < r_m < 1 $$
and $N_m$ is an integer. Hence for $m=2,3,\ldots,n$ from $(2)$ we have
$$ a_m = \lfloor mN_m + mr_m \rfloor - mN_m = \lfloor m r_m \rfloor \le m-1.$$
Note also that the $a_m$ are non-negative. Now assume that
$$ \frac{p}{q} = \sum_{k=1}^n \frac{a_k}{k!} \quad (3)$$
then as the $a_k$ satisfy the conditions that each is a non-negative integer with $a_k \le k-1 $ for $k=2,3\ldots,n$ they are uniquely determined by $(1)$ and $a_1 = \lfloor p/q \rfloor .$
It only remains to prove $(3).$ To show this we note that
$$ \begin{align*}
\sum_{k=1}^n \frac{a_k}{k!} &= \left\lfloor \frac{p}{q} \right\rfloor + \sum_{k=2}^n \frac{a_k}{k!}
\\ &= \left\lfloor \frac{p}{q} \right\rfloor +
\sum_{k=2}^n \left\lbrace
\frac{ \left\lfloor k! \frac{p}{q} \right\rfloor - k \left\lfloor (k-1)! \frac{p}{q} \right\rfloor }{k!}
\right\rbrace
\\ &= \left\lfloor \frac{p}{q} \right\rfloor +
\sum_{k=2}^n \frac{ \left\lfloor k! \frac{p}{q} \right\rfloor }{k!} -
\sum_{k=2}^n \frac{ \left\lfloor (k-1)! \frac{p}{q} \right\rfloor }{(k-1)!}
\\ &= \frac {n! p/q}{n!} = \frac{p}{q},
\end{align*} $$
since $n! p/q$ is an integer and all the terms cancel, except the last. This completes the proof.
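The construction in the converse direction is entirely algorithmic; here is a sketch with exact rational arithmetic, using $a_1 = \lfloor x \rfloor$ and $a_k = \lfloor k!x \rfloor - k\lfloor (k-1)!x \rfloor$ and stopping at the first $n$ with $n!x$ integral (the example value is hypothetical):

```python
from fractions import Fraction
from math import factorial, floor

def factorial_base(x: Fraction):
    digits = [floor(x)]
    k = 2
    while (factorial(k - 1) * x).denominator != 1:
        digits.append(floor(factorial(k) * x) - k * floor(factorial(k - 1) * x))
        k += 1
    return digits

x = Fraction(19, 8)
digits = factorial_base(x)
print(digits)   # [2, 0, 2, 1]: 19/8 = 2 + 0/2! + 2/3! + 1/4!
print(sum(Fraction(a, factorial(i + 1)) for i, a in enumerate(digits)) == x)
```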
|
Do all manifolds have a densely defined chart? Let $M$ be a smooth connected manifold. Is it always possible to find a connected dense open subset $U$ of $M$ which is diffeomorphic to an open subset of R$^n$?
If we don't require $U$ to be connected, the answer is yes: it is enough to construct a countable collection of disjoint open "affines" whose union is dense, and this is not terribly difficult.
| Depending on what you consider a manifold, the long line may be a counterexample.
And for a non-connected manifold, surely the answer is no? Take your favorite smooth manifold, and take a disjoint union of more than $\mathfrak{c}$ copies of it. Again, unless your definition of "manifold" rules this out (by assuming separability, etc).
|
Proof by induction $\frac1{1 \cdot 2} + \frac1{2 \cdot 3} + \frac1{3 \cdot 4} + \cdots + \frac1{n \cdot (n+1)} = \frac{n}{n+1}$ Need some help on following induction problem:
$$\dfrac1{1 \cdot 2} + \dfrac1{2 \cdot 3} + \dfrac1{3 \cdot 4} + \cdots + \dfrac1{n \cdot (n+1)} = \dfrac{n}{n+1}$$
| Every question of the form: prove by induction that
$$\sum_{k=1}^n f(k)=g(n)$$
can be done by verifying two facts about the functions
$f$ and $g$:
*
*$f(1)=g(1)$
and
*$g(n+1)-g(n)=f(n+1)$.
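For this particular problem, with $f(k) = \frac{1}{k(k+1)}$ and $g(n) = \frac{n}{n+1}$, both facts can be spot-checked exactly:

```python
from fractions import Fraction

f = lambda k: Fraction(1, k * (k + 1))
g = lambda n: Fraction(n, n + 1)

print(f(1) == g(1))                                             # True
print(all(g(n + 1) - g(n) == f(n + 1) for n in range(1, 100)))  # True
```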
|
$\epsilon$-$\delta$ limit proof, $\lim_{x \to 2} \frac{x^{2}-2x+9}{x+1}$ Prove that $\lim\limits_{x \to 2} \frac{x^{2}-2x+9}{x+1}$ using an epsilon delta proof.
So I have most of the work done. I choose $\delta = \min\{\frac{1}{2}, y\}$,
$f(x)$ factors out to $\frac{|x-3||x-2|}{|x+1|}$
But $|x-3| \lt \frac{3}{2}$ for $\delta = \frac{1}{2}$ and also $|x+1| > 5/2$ (I'll spare you the details).
I'm not sure how to choose my $y$ here. If I take $\lim\limits_{x \to 2} \frac{x^{2}-2x+9}{x+1} < \frac{3}{5}\delta$, how do I choose my epsilon here (replace $y$ with this) to satisfy this properly?
Thanks
| I'm going to go out on a limb and guess that you're trying to show the limit is 3 and that $f(x) = {x^2 - 2x + 9 \over x + 1} - 3$. I suggest trying to translate what you've done into the fact that $|{x^2 - 2x + 9 \over x + 1} - 3| < {3 \over 5}|x - 2|$ whenever $|x - 2| < {1 \over 2}$.
This means that if you choose any $\epsilon < {1 \over 2}$, then you have that $|{x^2 - 2x + 9 \over x + 1} - 3| < {3 \over 5}\epsilon$ whenever $|x - 2| < \epsilon$. So, given $\epsilon$, the natural choice for $\delta$ is ${3 \over 5}\epsilon$. (you got the $\delta$ and $\epsilon$ reversed.)
Now verify that the "for every $\epsilon$ there is a $\delta$" definition is satisfied in this way.
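A numeric spot-check of the inequality being used (a sketch):

```python
import random

random.seed(0)
f = lambda x: (x*x - 2*x + 9) / (x + 1)
for _ in range(10**5):
    x = 2 + random.uniform(-0.5, 0.5)
    assert abs(f(x) - 3) <= (3/5) * abs(x - 2) + 1e-15
print("bound holds at all sampled points")
```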
|
Elementary Row Operations - Interchange a Matrix's rows Let's consider a $2\times 2$ linear system:
$$
A\bf{u} = b
$$
The solution will still be the same even after we interchange the rows of $A$ and $b$. I know this to be true because algebraically we will get the same set of equations before and after the row interchange.
However, the column vectors of $A$ and the entries of $b$ are different after the interchange. So how can the system still have the same solution as before the row interchange?
Thank you.
| Let us consider a $2 \times 2$ example. We will then extend this higher dimensions.
Let $$A = \begin{bmatrix}A_{11} & A_{12}\\A_{21} & A_{22} \end{bmatrix}$$
$$b = \begin{bmatrix}b_1 \\b_2 \end{bmatrix}$$
So you now want to solve $Ax_1 = b$.
$x_1$ is given by $A^{-1}b$.
Now you swap the two rows of $A$ and $b$. Call them $\tilde{A}$ and $\tilde{b}$ respectively.
$$\tilde{A} = \begin{bmatrix}A_{21} & A_{22}\\A_{11} & A_{12} \end{bmatrix}$$
$$\tilde{b} = \begin{bmatrix}b_2 \\b_1 \end{bmatrix}$$
Now how do we relate $\tilde{A}$ to $A$, and similarly $\tilde{b}$ to $b$?
The relation is given by a Permutation matrix $P$.
$\tilde{A} = P A$ and $\tilde{b} = P b$.
The matrix $P$ is given by:
$$P = \begin{bmatrix}0 & 1\\1 & 0 \end{bmatrix}$$
Check that $\tilde{A} = P A$ and $\tilde{b} = P b$.
Now we look at solving the system $\tilde{A}x_2 = \tilde{b}$.
Substitute for $\tilde{A}$ and $\tilde{b}$ in terms of $A$ and $b$ respectively to get
$PAx_2 = Pb$.
Now the important thing to note is that $P^2 = I$.
This can be verified algebraically or by a simple argument by seeing that $P^2$ swaps and swaps again which reverts back to the original giving $I$ or the other way of looking is $P^2$ is $P$ applied to $P$ which swaps the two rows of $P$ giving back the identity matrix.
So from $P^2 = I$, we get $P^{-1} = P$.
So we have $PAx_2 = Pb$ and premultiplying by $P^{-1}$ gives $Ax_2 = b$.
So we have $Ax_1 = b$ and $Ax_2 = b$.
And if we assume $A$ is invertible this gives us a unique solution and hence we get $x_1 = x_2$.
or the other way to look at it is to write $x_2 = \tilde{A}^{-1} \tilde b = (PA)^{-1}Pb = A^{-1} P^{-1} P b = A^{-1} I b = A^{-1} b$.
All you need to observe in the above step is that the matrix $P$ is invertible and hence the matrix $PA$ is also invertible (since $A$ is assumed to be invertible), that $(PA)^{-1} = A^{-1}P^{-1}$, and that matrix multiplication is associative.
The same argument with a permutation matrix holds true for an $n \times n$ system as well.
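A quick numeric illustration with numpy (hypothetical $2\times2$ data):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])             # the permutation matrix above

print(np.linalg.solve(A, b))          # [1. 3.]
print(np.linalg.solve(P @ A, P @ b))  # same solution
print(P @ P)                          # identity: P is its own inverse
```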
|
Guidance on a Complex Analysis question My homework question: Show that all zeros of $$p(z)=z^4 + 6z + 3$$ lie in the circle of radius $2$ centered at the origin.
I know $p(z)$ has a zero-count of $4$ by using the Fundamental Theorem of Algebra. Then using the Local Representation Theorem the $$\int \frac{n}{z+a} = 4(2 \pi i).$$ I am assuming $a=0$ since we are centered at the origin. I apologize for my lack of math-type. What does $$= 8 \pi i$$ mean? Am I going around the unit circle $4$ times? Or is it even relevant to my final answer, which I am assuming is finding the coordinates of the $4$ singularities? I have always looked for my singularities in the values that make the denominator zero, but in this question my denominator is $z$. $z=0$ doesn't seem right. So the question is, am I supposed to factor the polynomial $z^4 + 6z + 3$ to find the zeros?
Thanks
| Hint: This kind of question is usually handled using Rouché's theorem. I suggest you look it up in the Wikipedia article, where you can see an example of its usage. Also here's an example.
The key is choosing wisely another function $f(z)$ with which to compare in the inequality in Rouche's theorem and such that you can easily decide how many zeroes does $f(z)$ have inside the region you are considering, which in your case is the circle $|z| < 2$.
About your other question, you don't need to factor the polynomial in order to answer this.
|
What are the conditions for existence of the Fourier series expansion of a function $f\colon\mathbb{R}\to\mathbb{R}$ What are the conditions for existence of the Fourier series expansion of a function $f\colon\mathbb{R}\to\mathbb{R}$?
| In addition to Carleson's theorem (stated by AD above), which gives a sufficient condition for pointwise convergence almost everywhere, one might also consider the following theorem about uniform convergence:
Suppose $f$ is periodic. Then, if $f$ is $\mathcal{C}^0$ and piecewise $\mathcal{C}^1$, $S_N(f)$ converges uniformly to $f$ on $\mathbb{R}$.
|
Given a function $f(x)$ where $x$ is uniformly distributed between $a$ and $b$, how do I find the probability density function of $f$? For example, if $f(x) = \sin x$ and $x$ is uniformly distributed on $[0, \pi]$, how does one find the probability density function of $f(x)$? I imagine the density will be greater where the derivative of $f(x)$ is closer to zero, but this is just a guess.
I apologize if this question is vague or not advanced enough, but I can't find the answer anywhere.
| Note that $\sin(x)$ increases from $x = 0$ to $x = {\pi \over 2}$, then decreases from ${\pi \over 2}$ to $\pi$, in a way symmetric about ${\pi \over 2}$. So for a given $0 \leq \alpha \leq 1$, the $x \in [0,\pi]$ for which $\sin(x) \leq \alpha$ consists of two segments, $[0,\beta]$ and $[\pi - \beta, \pi]$, where $\beta$ is the number for which $\sin(\beta) = \alpha$. In other words $\beta = \arcsin(\alpha)$.
Since $x$ is uniformly distributed on $[0,\pi]$, the probability $x$ is in $[0,\beta]$ is ${\beta \over \pi}$, and the probability $x$ is in $[\pi - \beta, \pi]$ is also ${\beta \over \pi}$. So the chance that $x$ is in one of these two segments is $2{\beta \over \pi}$. This means the probability $\sin(x) \leq \alpha$ is $2{\beta \over \pi}$, or ${2 \over \pi} \arcsin(\alpha)$. Thus this gives the distribution function of $\sin(x)$. The density function is obtained by differentiating with respect to $\alpha$; the result is ${2 \over \pi \sqrt{1 - \alpha^2}}$.
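A Monte Carlo sketch comparing a histogram of $\sin(X)$, $X$ uniform on $[0,\pi]$, against the derived density $\frac{2}{\pi\sqrt{1-\alpha^2}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.sin(rng.uniform(0, np.pi, size=10**6))

hist, edges = np.histogram(y, bins=20, range=(0.0, 1.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
for c, h in zip(centers[::5], hist[::5]):
    print(f"a={c:.3f}  empirical={h:.3f}  formula={2/(np.pi*np.sqrt(1-c*c)):.3f}")
```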
|
Is $\lim\limits_{n \to \infty}\frac{1}{n}\left( \cos{\frac{\pi}{n}} + \cos{\frac{2\pi}{n}} + \ldots + \cos{\frac{n\pi}{n}} \right)$ a Riemann sum? This is probably simple, but I'm solving a practice problem:
$\lim_{n \to \infty}\frac{1}{n}\left( \cos{\frac{\pi}{n}} + \cos{\frac{2\pi}{n}} + \ldots +\cos{\frac{n\pi}{n}} \right)$
I recognize this as the Riemann sum from 0 to $\pi$ on $\cos{x}$, i.e. I think it's the integral
$\int_0^\pi{ \cos{x}dx }$
which is 0, but the book I'm using says it should be
$ \frac{1}{\pi}\int_0^\pi{ \cos{x}dx }$
Still 0 anyway, but where did the $\frac{1}{\pi}$ in front come from?
| As for the $\frac{1}{\pi}$: the subinterval width in the Riemann sum for $\int_0^\pi \cos x\, dx$ is $\Delta x = \frac{\pi}{n}$, not $\frac1n$, so $$\frac1n\sum_{k=1}^n \cos\frac{k\pi}{n} = \frac1\pi\sum_{k=1}^n \frac{\pi}{n}\cos\frac{k\pi}{n} \to \frac{1}{\pi}\int_0^\pi \cos x\, dx.$$ The key to the last assertion (that the value is $0$) is the simple fact that $$\cos(\pi - x) = -\cos(x).$$ Said symmetry can be observed directly from the definition of the cosine function via the unit circle.
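Numerically the sum indeed vanishes like $-\frac1n$:

```python
import math

for n in (10, 1000, 100000):
    print(n, sum(math.cos(k * math.pi / n) for k in range(1, n + 1)) / n)
```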
|
Roots of Legendre Polynomial I was wondering if the following properties of the Legendre polynomials are true in general. They hold for the first ten or fifteen polynomials.
*
*Are the roots always simple (i.e., multiplicity $1$)?
*Except for low-degree cases, the roots can't be calculated exactly, only approximated (unlike Chebyshev polynomials).
*Are roots of the entire family of Legendre Polynomials dense in the interval $[0,1]$ (i.e., it's not possible to find a subinterval, no matter how small, that doesn't contain at least one root of one polynomial)?
If anyone knows of an article/text that proves any of the above, please let me know. The definition of these polynomials can be found on Wikipedia.
| The Abramowitz–Stegun Handbook of Mathematical Functions claims on page 787 that all the roots are simple: http://convertit.com/Go/ConvertIt/Reference/AMS55.ASP?Res=150&Page=787
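Items 1 and 3 are also easy to probe numerically (a sketch, not a proof); numpy's Gauss-Legendre nodes are exactly the roots of $P_n$:

```python
import numpy as np

for n in (5, 20, 50):
    roots, _ = np.polynomial.legendre.leggauss(n)   # the n roots of P_n
    print(n, roots.min(), roots.max(), len(np.unique(roots)) == n)
```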
|
Can we reduce the number of states of a Turing Machine? My friend claims that one could reduce the number of states of a given Turing machine by somehow blowing up the tape alphabet. He does not have any algorithm, though; he only has the intuition.
But I say it's not possible. Else one could arbitrarily keep decreasing the states via the same algorithm and arrive at some constant-sized machine.
Who is right?
| I think this is in the same vein as creating a compression algorithm that will compress any given file, i.e. one that would let us compress the output again and again, until we reach a single bit that represents all possible files. Yet compression algorithms do exist, and they do compress some files.
So, even if the number of states of a given Turing machine is reducible, it does not mean that all Turing machines are reducible, since that would mean that all Turing machines are just different interpretations of one and the same one-state machine.
|
Euler's formula for connected planar graphs Euler's formula for connected planar graphs (i.e. a single connected component) states that $v-e+f=2$. State the generalization of Euler's formula for planar graphs with $k$ connected components (where $k\geq1$).
The correct answer is $v-e+f=1+k$, but I'm not understanding the reasoning behind it. Anyone care to share some insight?
| Consider $2$ components, and suppose for simplicity that they are identical, each with $V$ vertices, $E$ edges and $F$ faces.
For each component on its own, excluding the shared unbounded face (call it $f_4$), Euler's formula gives
$V - E + (F - 1) = 1.$
Adding this over both components we get $2V - 2E + 2F - 2 = 2$.
Now consider the face $f_4$, which is the unbounded face of the whole graph and is counted only once; adding $1$ to both sides,
$2V - 2E + 2F - 1 = 3.$
Here the total number of vertices is $2V$, the total number of edges is $2E$, the total number of faces is $2F - 1$, and the number of components is $k = 2$,
so $V_{total} - E_{total} + F_{total} = k + 1$, as required.
The same reasoning works for general $k$: summing $v_i - e_i + f_i = 2$ over the $k$ components counts the unbounded face $k$ times instead of once, so $V - E + (F + k - 1) = 2k$, i.e. $V - E + F = k + 1$.
Hope this helps.
|
Help solving a differential equation My Calculus II class is nearing the end of the quarter and we've just started differential equations to get ready for Calculus III. In my homework, I came upon these problems.
One of the problems was:
Find the general solution to the differential equation
$$\frac{dy}{dt} = t^3 + 2t^2 - 8t.$$
The teacher just said to integrate. So I did. Then in question 8a it gives the differential equation:
$$\frac{dy}{dt} = y^3 + 2y^2 - 8y.$$
and asks "Why can't we find a solution like we did to the previous problem? My guess was: "In 7 we were integrating with respect to t. Since this equation is the highest order derivative, we can't solve it like # 7". Although, I have no confidence in that answer and I'm not sure it makes total sense even to me.
Also, part 8B. asks: Show that the constant function $y(t) = 0$ is a solution.
I've done a problem like this before, except that it wasn't a constant function. This problem seems like a question that asks: "show that every member of the family of functions $y = (\ln x + C)/x$ is a solution to the differential equation (some diff. equation)" except it seems a little bit different.
Any hints on how I can solve this?
Thank you.
| In the first question, you are given the derivative in terms of the variable. But in the second question, you are given an expression for the derivative that involves the function. For instance, it would be one thing if you were told $\frac{dy}{dx} = x$ (which would mean that $y = \frac{1}{2}x^2 + C$), and a completely different thing if you are told $\frac{dy}{dx}=y$ (this tells you that the function $y$ is equal to its derivative; which means that $y=Ae^x$ for some constant $A$).
Actually, we can solve the second differential equation; but we don't solve it by simple integration, just like we don't solve the equation $y' = y$ by integrating.
Showing that $y(t)=0$ is a solution to the second equation is just a question of plugging in $0$ for $y$ and verifying that you get a solution by verifying that the resulting equality is true.
You never said what question 8a was, though; were you asked to solve the differential equation, or just to say what the difference is between it and the previous one?
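Added: to make the contrast concrete, a short SymPy sketch (mine, not part of the homework): the second equation is attacked by separating variables, i.e. integrating $dy/(y^3+2y^2-8y) = dt$, not by integrating the right-hand side with respect to $t$; and $y(t)=0$ checks out by direct substitution:

    import sympy as sp

    t, u = sp.symbols('t u')
    y = sp.Function('y')
    rhs = y(t)**3 + 2*y(t)**2 - 8*y(t)

    # y(t) = 0 is a solution: both sides become 0
    print(rhs.subs(y(t), 0))   # 0, and d(0)/dt = 0

    # separation of variables: integrate 1/(y^3 + 2y^2 - 8y) dy
    print(sp.integrate(1/(u**3 + 2*u**2 - 8*u), u))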
|
Embedding torus in Euclidean space For $n > 2$, is it possible to embed $\underbrace{S^1 \times \cdots \times S^1}_{n\text{ times}}$ into $\mathbb R^{n+1}$?
| As Aaron Mazel-Gee's comment indicates, this follows from induction. Although you only asked about $n > 2$, it actually holds for $n \geq 1$.
The base case is $n = 1$, i.e. $S^1$ embeds in $\mathbb{R}^2$, which is clear.
For the inductive step, suppose that $T^{k-1}$ embeds in $\mathbb{R}^k$. Then $T^k = T^{k-1}\times S^1$ embeds in $\mathbb{R}^k\times S^1$ which is diffeomorphic to $\mathbb{R}^{k-1}\times(\mathbb{R}\times S^1) \cong \mathbb{R}^{k-1}\times(\mathbb{R}^2\setminus\{(0, 0)\})$. Now note that $\mathbb{R}^{k-1}\times(\mathbb{R}^2\setminus\{(0, 0)\})$ embeds into $\mathbb{R}^{k-1}\times\mathbb{R}^2 \cong \mathbb{R}^{k+1}$. As the composition of embeddings is again an embedding, we see that $T^k$ embeds in $\mathbb{R}^{k+1}$.
By induction, $T^n$ embeds in $\mathbb{R}^{n+1}$ for every $n \geq 1$.
|
Is a regular sequence ordered? A regular sequence is an $n$-fold collection $\{r_1, \cdots, r_n\} \subset R$ of elements of a ring $R$ such that for any $2 \leq i \leq n$, $r_i$ is not a zero divisor of the quotient ring
$$ \frac R {\langle r_1, r_2, \cdots, r_{i-1} \rangle}.$$
Does the order of the $r_i$'s matter? That is, is any permutation of a regular sequence regular?
| Here is a general result for when any permutations of elements of a regular sequence forms a regular sequence:
Let $A$ be a Noetherian ring and $M$ a finitely generated $A$-module. If $x_1,...,x_n$ be an $M$-sequence s.t. $x_i \in J(A)$ for $1 \leq i \leq n$, where $J(A)$ is the Jacobson radical of $A$, then any permutation of $x_1,...,x_n$ becomes an $M$-sequence.
|
Typical applications of Fubini's theorem and Radon-Nikodym Can someone please share references (websites or books) where I can find problems related with Fubini's theorem and applications of Radon-Nikodym theorem? I have googled yes and don't find many problems. What are the "typical" problems (if there are any) related with these topics? [Yes, exam is coming soon so I don't know what to expect and don't have access to midterms from previous years].
Thank you
| One basic example of what you can do with Fubini's theorem is to represent an integral of a function of a function in terms of its distribution function. For instance, there is the formula (for reasonable $\phi$, $f$ basically arbitrary but nonnegative)
$$ \int \phi \circ f d\mu = \int_t \mu(\{ f> t\}) \phi'(t) dt$$ which reduces questions about integrals of, for instance, $p$th powers to questions about the distribution function. This is (for example) how the boundedness of the Hardy-Littlewood maximal operator on $L^p$ ($p>1$) is proved: you get a bound on the distribution function of the maximal function by general methods and then do an interpolation process.
To prove the formula above, as in Rudin, one can consider the collection $E$ of pairs $(x,t)$ such that $0 \leq t \leq f(x)$. This is a measurable subset of $X \times \mathbb{R}$ if $X$ is the initial measure space on which $f$ is defined and $\mathbb{R}$ has Lebesgue measure.
Then, one can write the second integral in the displayed equation
as $\int_t \phi'(t) dt \int_{x \in X} \chi_E(x,t) d \mu$ where $\chi_E$ denotes the characteristic function.
Now rearranging this integral via Fubini's theorem allows one to integrate with respect to $t$ first, for each $x$; then $t$ goes from $0$ to $f(x)$, and one can see that this integral becomes $\int_x \int_{t=0}^{f(x)} \phi'(t)\, dt\, d\mu = \int_x \phi(f(x))\, d\mu$ (assuming $\phi(0)=0$), which is the left-hand side of the displayed equation.
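Added: a quick numerical sanity check of the layer-cake formula (my own, not part of the argument), taking $X=[0,1]$ with Lebesgue measure, $f(x)=x$ and $\phi(t)=t^2$, so that $\mu(\{f>t\})=1-t$:

    from scipy.integrate import quad

    # left side: integral of phi(f(x)) dx = integral of x^2 dx
    lhs, _ = quad(lambda x: x**2, 0, 1)
    # right side: integral of mu({f > t}) * phi'(t) dt = (1 - t) * 2t
    rhs, _ = quad(lambda t: (1 - t) * 2 * t, 0, 1)
    print(lhs, rhs)  # both equal 1/3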
|
For a Planar Graph, Find the Algorithm that Constructs A Cycle Basis, with each Edge Shared by At Most 2 Cycles In a planar graph $G$, one can easily find all the cycle basis by first finding the spanning tree ( any spanning tree would do), and then use the remaining edge to complete cycles. Given Vertex $V$, edge $E$, there are $C=E-V+1$ number of cycles, and there are $C$ number of edges that are inside the graph, but not inside the spanning tree.
Now, there always exists a set of cycle basis such that each and every edge inside the $G$ is shared by at most 2 cycles. My question is, is there any algorithm that allows me to find such a set of cycle basis? The above procedure I outlined only guarantees to find a set of cycle basis, but doesn't guarantee that all the edges in the cycle basis is shared by at most two cycles.
Note: Coordinates for each vertex are not known, even though we do know that the graph must be planar.
| I agree with all the commenters who say that you should just find a planar embedding. However, I happened to stumble across a description that might make you happy:
Let $G$ be a three-connected planar graph and let $C$ be a cycle. Let $G/C$ be the graph formed by contracting $C$ down to a point. Then $C$ is a face of the planar graph if and only if $G/C$ is two-connected.
In particular, this lemma is useful in proving that a three-connected graph can only be planar in one way, a result of Whitney.
But testing this for every cycle is much less efficient than just finding the planar embedding.
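Added: a small sketch (mine, using NetworkX) illustrating the lemma on the cube graph $Q_3$, which is 3-connected and planar; contracting a genuine face leaves a 2-connected graph, while contracting a non-face cycle (a Petrie hexagon) does not:

    import itertools
    import networkx as nx

    # cube graph: vertices are bit-triples, edges join triples differing in one bit
    nodes = list(itertools.product((0, 1), repeat=3))
    G = nx.Graph((u, v) for u, v in itertools.combinations(nodes, 2)
                 if sum(a != b for a, b in zip(u, v)) == 1)

    def contract_cycle(G, cycle):
        H = G.copy()
        for v in cycle[1:]:
            H = nx.contracted_nodes(H, cycle[0], v, self_loops=False)
        return H

    face = [(0,0,0), (0,0,1), (0,1,1), (0,1,0)]                      # a 4-face
    petrie = [(0,0,0), (1,0,0), (1,1,0), (1,1,1), (0,1,1), (0,0,1)]  # a 6-cycle, not a face

    print(nx.is_biconnected(contract_cycle(G, face)))    # True  -> C is a face
    print(nx.is_biconnected(contract_cycle(G, petrie)))  # False -> C is not a face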
|
Can $n!$ be a perfect square when $n$ is an integer greater than $1$?
Can $n!$ be a perfect square when $n$ is an integer greater than $1$?
Clearly, when $n$ is prime, $n!$ is not a perfect square because the exponent of $n$ in $n!$ is $1$. The same goes when $n-1$ is prime, by considering the exponent of $n-1$.
What is the answer for a general value of $n$? (And is it possible, to prove without Bertrand's postulate. Because Bertrands postulate is quite a strong result.)
| Note that $\sqrt{n} \le n/2$ for $n \ge 4$. Thus if $p$ is a prime such that $n/2 < p \le n$, we have
$\sqrt{n} < p$, so $p^2 > n$ and $p$ is the only multiple of $p$ in $\{1,\dots,n\}$. Hence the exponent of $p$ in $n!$ is exactly $1$, and $n!$ cannot be a perfect square. Such a prime exists for every $n \ge 4$ by Bertrand's postulate, and the cases $n = 2, 3$ are checked directly ($2$ and $6$ are not squares).
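Added: a quick brute-force confirmation for small $n$ (mine, not part of the proof):

    from math import factorial, isqrt

    for n in range(2, 26):
        f = factorial(n)
        assert isqrt(f) ** 2 != f  # n! is never a perfect square for n >= 2
    print("verified: n! is not a perfect square for n = 2..25")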
|
How many ways are there for 8 men and 5 women to stand in a line so that no two women stand next to each other? I have a homework problem in my textbook that has stumped me so far. There is a similar one to it that has not been assigned and has an answer in the back of the textbook. It reads:
How many ways are there for $8$ men and $5$ women to stand in a line so that no two women stand next to each other?
The answer is $609638400$, but no matter what I try I cannot reach that number. I have tried doing $2(8!5!/3!)*(8!/5!)$ since each woman must be paired with a man in order to prevent two women getting near each other. But of course, it's the wrong answer.
What am I doing wrong here?
| Arrange the $8$ men first; this can be done in $8!$ ways, and it creates $9$ gaps (marked $*$), namely the $7$ spaces between the men plus the two ends:
* M * M * M * M * M * M * M * M *
Putting each woman into a different gap guarantees that no two women stand next to each other. So choose $5$ of the $9$ gaps and arrange the $5$ women in them: $\binom{9}{5} \cdot 5! = 9 \cdot 8 \cdot 7 \cdot 6 \cdot 5 = 15120$ ways.
In total: $8! \cdot 15120 = 40320 \cdot 15120 = 609638400$.
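Added: a hedged brute-force confirmation in Python (mine): the formula matches the textbook answer, and on a smaller instance it matches exhaustive enumeration:

    from itertools import permutations
    from math import comb, factorial

    print(factorial(8) * comb(9, 5) * factorial(5))  # 609638400

    # sanity check on a small instance: 3 men, 2 women
    people = ['M1', 'M2', 'M3', 'W1', 'W2']
    ok = sum(1 for p in permutations(people)
             if not any(a[0] == 'W' == b[0] for a, b in zip(p, p[1:])))
    print(ok, factorial(3) * comb(4, 2) * factorial(2))  # 72 72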
|
Partitioning an infinite set Can you partition an infinite set, into an infinite number of infinite sets?
| Note: I wrote this up because it is interesting, and 'rounds out' the theory, that we can partition $X$ into an infinite number of blocks with each one being countably infinite.
I am marking this as community wiki. If several people downvote this and/or post comments that they disagree with my sentiments, I'll delete this.
Proposition 1: Every infinite set $X$ can be partitioned into blocks with each one being countably infinite.
Proof
Choose a well order $\le$ on $X$ that has no maximal elements (see this) and let $\sigma$ denote the successor function on $X$. Let $L$ denote the elements of $X$ that do not have an immediate predecessor. For each $\alpha \in L$ define
$\tag 1 L_\alpha = \{ \sigma^n(\alpha) \, | \, \text{integer } n \ge 0\}$
This family partitions $X$ (see this) and each set must also be countably infinite. $\quad \blacksquare$.
If $L$ is finite take any $\alpha \in L$ and partition the countably infinite set $L_\alpha$ into an infinite number of blocks (see the many fine answers in this thread). So we have
Proposition 2: Every infinite set $X$ can be partitioned into an infinite number of blocks
with each one being countably infinite.
|
Correlation between out of phase signals Say I have a numeric sequence A and a set of sequences B that vary with time.
I suspect that there is a relationship between one or more of the B sequences and sequence A, that changes in Bn are largely or wholly caused by changes in sequence A. However there is an unknown time delay between changes in A and their effect on each of the B sequences (they are each out of phase by varying amounts)
I am looking for a means of finding the most closely correlating B to A regardless of the time delay. What options are available to me?
** EDIT **
The crux of the problem here is that I have millions of B sequences to test, and there are approx 2 million data points within the lag window that I would like to test over. Working out a correlation for each B for each possible lag scenario is just going to be too computationally expensive (especially as in reality there will be a more dynamic relationship than just lag between A and B, so I will be looking to test variations of relationships as well).
So what I am looking for is a means of taking the lag out of calculation.
| Take a look at dynamic time warping. I think it's just the solution you need. I've used the R package 'dtw' which is described here. http://cran.r-project.org/web/packages/dtw/dtw.pdf
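Added: complementary to DTW (this part is my own suggestion, not from the dtw package): if a pure fixed lag is a reasonable first model, normalized cross-correlation computed with FFTs scores every integer lag at once in $O(N\log N)$, instead of recomputing a correlation per candidate lag:

    import numpy as np
    from scipy.signal import correlate

    def best_lag(a, b):
        """Lag of b relative to a maximizing cross-correlation (FFT-based)."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        c = correlate(b, a, mode='full', method='fft')
        return np.argmax(c) - (len(a) - 1)

    a = np.random.randn(10000)
    b = np.roll(a, 250) + 0.1 * np.random.randn(10000)  # delayed, noisy copy of a
    print(best_lag(a, b))  # ~250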
|
My Daughter's 4th grade math question got me thinking Given a number of 3in squares and 2in squares, how many of each are needed to get a total area of 35 in^2?
Through quick trial and error (the method they wanted I believe) you find that you need 3 3in squares and 2 2in squares, but I got to thinking on how to solve this exactly.
You have 2 unknowns and the following info:
4x + 9y = 35
x >= 0, y >= 0, x and y are both integers.
It also follows then that x <= 8 and y <= 3
I'm not sure how to use the inequalities or the integer only info to form a direct 2nd equation in order to solve the system of equations. How would you do this without trial and error?
| There is an algorithmic way to solve this which works when you have two types of squares.
if $\displaystyle \text{gcd}(a,b) = 1$, then for any integer $c$ the linear diophantine equation $\displaystyle ax + by = c$ has an infinite number of solution, with integer $\displaystyle x,y$.
In fact if $\displaystyle x_0, y_0$ are such that $\displaystyle a x_0 - b y_0 = 1$, then all the solutions of $\displaystyle ax + by = c$ are given by
$\displaystyle x = -tb + cx_0$, $\displaystyle y = ta - cy_0$, where $\displaystyle t$ is an arbitrary integer.
$\displaystyle x_0 , y_0$ can be found using the Extended Euclidean Algorithm.
Since you also need $\displaystyle x \ge 0$ and $\displaystyle y \ge 0$ you must pick a $\displaystyle t$ such that
$\displaystyle c x_0 \ge tb$ and $ta \ge cy_0$.
If there is no such $\displaystyle t$, then you do not have a solution.
In your case, $\displaystyle a= 9, b= 4$, we need a solution of $\displaystyle ax + by = 35$.
We can easily see that $\displaystyle x_0 = 1, y_0 = 2$ gives us $\displaystyle a x_0 - by_0 = 1$.
Thus we need to find a $\displaystyle t$ such that $ 35 \ge t\times 4$ and $ t\times 9 \ge 35\times 2$.
i.e.
$\displaystyle 35/4 \ge t \ge 35\times 2/9$
i.e.
$\displaystyle 8.75 \ge t \ge 7.77\dots$
Thus $t = 8$.
This gives us $\displaystyle x = cx_0 - tb = 3$, $\displaystyle y = ta- cy_0 = 2$.
(Note: I have swapped your x and y).
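Added: the whole recipe in a small Python sketch (mine), using the extended Euclidean algorithm and then scanning the feasible range of $t$:

    from math import ceil, floor

    def solve(a, b, c):
        """Nonnegative integer solutions (x, y) of a*x + b*y = c, assuming gcd(a, b) = 1."""
        # extended Euclid: find x0 with a*x0 = 1 (mod b), then y0 with a*x0 - b*y0 = 1
        old_r, r, old_s, s = a, b, 1, 0
        while r:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
        x0 = old_s
        y0 = (a * x0 - 1) // b          # a*x0 - b*y0 = 1
        # x = c*x0 - t*b >= 0  and  y = t*a - c*y0 >= 0
        for t in range(ceil(c * y0 / a), floor(c * x0 / b) + 1):
            yield c * x0 - t * b, t * a - c * y0

    print(list(solve(9, 4, 35)))  # [(3, 2)]: three 3-in squares, two 2-in squares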
|
Example of a non-commutative rings with identity that do not contain non-trival ideals and are not division rings I'm looking for an example of a non-commutative ring, $R$, with identity s.t $R$ does not contain a non-trival 2 sided ideal and $R$ is not a division ring
| If you mean two-sided ideals, you are looking for simple rings (that are not division rings):
http://en.wikipedia.org/wiki/Simple_ring
E.g., as in the Wikipedia article, the ring of $n \times n$ matrices (for $n \ge 2$) over a field. Clearly this is not a division ring, since not every nonzero matrix is invertible (matrices with zero determinant are not invertible).
|
The staircase paradox, or why $\pi\ne4$ What is wrong with this proof?
Is $\pi=4?$
| (non rigorous) If you repeat the process a million times it "seems" (visually) that the perimeter approaches the circumference in length, but if you magnify the picture of a single "tooth" to full screen, you will notice a big difference between the orthogonal segments and the arc of the circumference. No matter how many times you repeat the process, that difference will never fade.
ADDED: A visual example of what I meant is folding of a rope. If you imagine the rope not having thickness, you can fold it so many times that you can tend to a point (zero length?). If you unfold it, it will return to its original shape. In the example the perimeter will always be of total length = 4, but it only appears to blend with the circumference.
|
subgroups of finitely generated groups with a finite index Let $G$ be a finitely generated group and $H$ a subgroup of $G$. If the index of $H$ in $G$ is finite, show that $H$ is also finitely generated.
| Hint: Suppose $G$ has generators $g_1, \ldots, g_n$. We can assume that the inverse of each generator is a generator. Now let $Ht_1, \ldots, Ht_m$ be all right cosets, with $t_1 = 1$. For all $i,j$, there is $h_{ij} \in H$ with $t_i g_j = h_{ij} t_{{k}_{ij}}$, for some $t_{{k}_{ij}}$. It's not hard to prove that $H$ is generated by all the $h_{ij}$.
|
What is the maximum number of primes consecutively generated by a polynomial of degree $a$? Let $p(n)$ be a polynomial of degree $a$. Start off by plugging in arguments from zero and go up one integer at a time. Go on until you have come to an integer argument $n$ for which $p(n)$'s value is not prime, and count the number of distinct primes your polynomial has generated.
Question: what is the maximum number of distinct primes a polynomial of degree $a$ can generate by the process described above? Furthermore, what is the general form of such a polynomial $p(n)$?
This question was inspired by this article.
Thanks,
Max
[Please note that your polynomial does not need to generate consecutive primes, only primes at consecutive positive integer arguments.]
| Here is result by Rabinowitsch for quadratic polynomials.
$n^2+n+A$ is prime for $n=0,1,2,...,A-2$ if and only if $d=1-4A$ is squarefree and the class number of $\mathbb{Q}[\sqrt{d}]$ is $1$.
See this article for details.
http://matwbn.icm.edu.pl/ksiazki/aa/aa89/aa8911.pdf
Also here is a list of imaginary quadratic fields with class number $1$
http://en.wikipedia.org/wiki/List_of_number_fields_with_class_number_one#Imaginary_quadratic_fields
There are many other articles about prime generating (quadratic) polynomials that you can google.
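Added: for instance, the Rabinowitsch criterion with $A=41$ (so $d = 1 - 4\cdot 41 = -163$, and $\mathbb{Q}[\sqrt{-163}]$ has class number $1$) gives Euler's famous polynomial; a quick check (mine):

    from sympy import isprime

    A = 41
    print(all(isprime(n**2 + n + A) for n in range(A - 1)))  # True: prime for n = 0..39
    print(isprime((A - 1)**2 + (A - 1) + A))                 # False: 40^2 + 40 + 41 = 41^2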
|
Charpit's Method Find the complete integral of partial differential equation
$$\displaystyle z^2 = pqxy $$
I have solved this equation till auxiliary equation:
$$\displaystyle \frac{dp}{-pqy+2pz}=\frac{dq}{-pqx+2qz}=\frac{dz}{2pqxy}=\frac{dx}{qxy}=\frac{dy}{pxy} $$
But I have unable to find value of p and q.
EDIT:
$p = \dfrac{\partial z}{\partial x},\quad q = \dfrac{\partial z}{\partial y},\quad r = \dfrac{\partial^2 z}{\partial x^2} = \dfrac{\partial p}{\partial x},\quad s = \dfrac{\partial^2 z}{\partial x\,\partial y} = \dfrac{\partial p}{\partial y} = \dfrac{\partial q}{\partial x},\quad t = \dfrac{\partial^2 z}{\partial y^2} = \dfrac{\partial q}{\partial y}$
| A much easier solution can be obtained by introducing new dependent/independent variables $U=\log z$, $X=\log x$, $Y=\log y$. Then, with $P,Q$ denoting the first partial derivatives of $U$ with respect to $X,Y$, respectively, the PDE becomes
$$PQ=1,$$
which can be solved very easily by Charpit's method.
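Added: carrying Charpit through for $F(P,Q)=PQ-1$ (my completion, not in the original answer): $F$ has no explicit $X$, $Y$, $U$ dependence, so Charpit's equations give $dP=dQ=0$, hence $P=a$, $Q=1/a$, and
$$U = aX + \frac{Y}{a} + b, \qquad\text{i.e.}\qquad z = C\,x^{a}\,y^{1/a}.$$
Checking: then $p = az/x$ and $q = z/(ay)$, so $pq\,xy = z^2$ as required.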
|
How to get a reflection vector? I'm doing a raytracing exercise. I have a vector representing the normal of a surface at an intersection point, and a vector of the ray to the surface. How can I determine what the reflection will be?
In the below image, I have d and n. How can I get r?
Thanks.
| $$r = d - 2 (d \cdot n) n$$
where $d \cdot n$ is the dot product, and
$n$ must be normalized.
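Added: a direct implementation (mine):

    import numpy as np

    def reflect(d, n):
        """Reflect direction d about surface normal n (normalizes n first)."""
        n = n / np.linalg.norm(n)
        return d - 2 * np.dot(d, n) * n

    # incoming ray at 45 degrees onto a floor with upward normal
    print(reflect(np.array([1.0, -1.0]), np.array([0.0, 1.0])))  # [1. 1.]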
|
A stereographic projection related question This might be an easy question, but I haven't been able to up come up with a solution.
The image of the map $$f : \mathbb{R} \to \mathbb{R}^2, a \mapsto (\frac{2a}{a^2+1}, \frac{a^2-1}{a^2+1})$$
is the unit circle take away the north pole. $f$ extends to a function $$g: \mathbb{C} \backslash \{i, -i \} \to \mathbb{C}^2. $$ Can anything be said about the image of $g$?
| Note that although $a$ is complex, the following is valid:
$$\left(\frac{2a}{a^2+1}\right)^2+\left(\frac{a^2-1}{a^2+1}\right)^2= \frac{4a^2}{(a^2+1)^2}+\frac{(a^2-1)^2}{(a^2+1)^2}=$$
$$\frac{4a^2+a^4-2a^2+1}{(a^2+1)^2}=\frac{a^4+2a^2+1}{(a^2+1)^2}=\frac{(a^2+1)^2}{(a^2+1)^2}=1$$
Thus the image still lies on the "circle" $x^2+y^2=1$ (now a complex quadric in $\mathbb{C}^2$).
EDIT
That is, every point of the set $\{g(a)\in \mathbb{C}^2 : a\in \mathbb{C}\setminus \{\imath,-\imath\}\}$ satisfies the identity above.
|
Intersection of neighborhoods of 0. Subgroup? Repeating for my exam in commutative algebra.
Let G be a topological abelian group, i.e. such that the mappings $+:G\times G \to G$ and $-:G\to G$ are continuous. Then we have the following Lemma:
Let H be the intersection of all neighborhoods of $0$ in $G$. Then $H$ is a subgroup.
The proof in the books is the following one-liner: "follows from continuity of the group operations". (this is from "Introduction to Commutative Algebra" by Atiyah-MacDonald)
I must admit that I don't really see how that "follows". If there is an easy explanation aimed at someone who has not encountered topological groups in any extent, I'd be happy to read it.
| If $U$ is a neighbourhood of $0$ then so is $-U=\{-x:x\in U\}$.
This shows that if $x\in H$ then $-x\in H$.
To show that $H$ is closed under addition, use the fact that
if $U$ is a neighbourhood of $0$ then there is another
neighbourhood $V$ of $0$ with $V+V\subseteq U$. The existence
of $V$ follows from the continuity of addition at $(0,0)$.
|
Mapping Irregular Quadrilateral to a Rectangle I have a camera looking at a computer monitor from varying angles. Since the camera is a grid of pixels, I can define the bounds of the monitor in the camera image as:
I hope that makes sense. What I want to do is come up with an algorithm to translate points within this shape to this:
I have points within the same domain as ABCD, as determined from the camera, but I need to draw these points in the domain of the monitor's resolution.
Does that make sense? Any ideas?
| HINT
$A,B,C,D$ are not in the same plane.
A very approximate rectangular projection ratio (by area projections with extended boundary length) may be obtained by considering the boundary vector lengths:
$$\frac{\frac12(|u \times v|+|w \times a|)}{(|u|+|v|+|w|+|a|)^2}$$
The rectangle can be now re-sized.
|
how do you solve $y''+2y'-3y=0$? I want to solve this equation:
$y''+2y'-3y=0$
I did this:
$y' = z$
$y'' = z\dfrac{dz}{dy}$
$z\dfrac{dz}{dy}+2z-3y=0$
$zdz+2zdy-3ydy=0$
$zdz=(3y-2z)dy$
$z=3y-2z$
$z=y$
$y=y'=y''$
???
now, I'm pretty sure I did something wrong. could you please correct.
| You can also write it in matrix form:
$u=(y',y)$, $u'=\big(\matrix{-2 & 3 \\ \hphantom- 1 & 0}\big) u$. Find the eigenvalues and eigenvectors, turn it into a diagonal system whose solution is simple. Go back to the original coordinates.
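Added: a quick SymPy check (mine). The characteristic equation $r^2+2r-3=(r+3)(r-1)=0$ has roots $1$ and $-3$, which are exactly the eigenvalues of the matrix above:

    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')
    print(sp.dsolve(y(t).diff(t, 2) + 2*y(t).diff(t) - 3*y(t)))
    # Eq(y(t), C1*exp(-3*t) + C2*exp(t))

    M = sp.Matrix([[-2, 3], [1, 0]])
    print(M.eigenvals())  # {1: 1, -3: 1}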
|
Does $R[x] \cong S[x]$ imply $R \cong S$? This is a very simple question but I believe it's nontrivial.
I would like to know if the following is true:
If $R$ and $S$ are rings and $R[x]$ and $S[x]$ are isomorphic as rings, then $R$ and $S$ are isomorphic.
Thanks!
If there isn't a proof (or disproof) of the general result, I would be interested to know if there are particular cases when this claim is true.
| Here is a counterexample.
Let $R=\dfrac{\mathbb{C}[x,y,z]}{\big(xy - (1 - z^2)\big)}$, $S=\dfrac{\mathbb{C}[x,y,z]}{\big(x^2y - (1 - z^2)\big)}$. Then, $R$ is not isomorphic to $S$ but, $R[T]\cong S[T]$.
In many variables, this is called the Zariski problem or cancellation of indeterminates and is largely open. Here is a discussion by Hochster (problem 3).
|
Show that a continuous function has a fixed point Question: Let $a, b \in \mathbb{R}$ with $a < b$ and let $f: [a,b] \rightarrow [a,b]$ continuous. Show: $f$ has a fixed point, that is, there is an $x \in [a,b]$ with $f(x)=x$.
I suppose this has to do with the basic definition of continuity. The definition I am using is that $f$ is continuous at $a$ if $\displaystyle \lim_{x \to a} f(x)$ exists and if $\displaystyle \lim_{x \to a} f(x) = f(a)$. I must not be understanding it, since I am not sure how to begin showing this... Should I be trying to show that $x$ is both greater than or equal to and less than or equal to $\displaystyle \lim_{x \to a} f(x)$ ?
| Consider $x-f(x)$ and use Intermediate Value Theorem.
|
Applications for Homology The Question: Are there any ways that "applied" mathematicians can use Homology theory? Have you seen any good applications of it to the "real world" either directly or indirectly?
Why do I care? Topology has appealed to me since beginning it in undergrad where my university was more into pure math. I'm currently in a program where the mathematics program is geared towards more applied mathematics and I am constantly asked, "Yeah, that's cool, but what can you use it for in the real world?" I'd like to have some kind of a stock answer for this.
Full Disclosure. I am a first year graduate student and have worked through most of Hatcher, though I am not by any means an expert at any topic in the book. This is also my first post on here, so if I've done something wrong just tell me and I'll try to fix it.
| There are definite real world applications. I would look at the website/work of Gunnar Carlsson (http://comptop.stanford.edu/) and Robert Ghrist (http://www.math.upenn.edu/~ghrist/). Both are excellent mathematicians.
The following could be completely wrong: Carlsson is one of the main proponents of Persistent Homology which is about looking at what homology can tell you about large data sets, clouds, as well as applications of category theory to computer science. Ghrist works on stuff like sensor networks. I don't understand any of the math behind these things.
Also there are some preprints by Philippe Gaucher you might want to check out. Peter Bubenik at Cleveland State might also have some fun stuff on his website.
|
Probability Problems Problem:
The probability that a man who is 85 years old will die before attaining the age of 90 is $\frac13$. A, B, C, D are four persons who are 85 years old. What is the probability that A will die before attaining the age of 90 and will be the first to die?
| The probability of dying by 90 is the same for all four. The probability that A dies first is simply 1/4, since we are given no more information. Since the two events are independent, the probability of their conjunction, i.e., that A dies before 90 and is the first to die, is simply the product of the two probabilities, or 1/12.
|
How to solve the following system? I need to find the function c(k), knowing that
$$\sum_{k=0}^{\infty} \frac{c(k)}{k!}=1$$
$$\sum_{k=0}^{\infty} \frac{c(2k)}{(2k)!}=0$$
$$\sum_{k=0}^{\infty} \frac{c(2k+1)}{(2k+1)!}=1$$
$$\sum_{k=0}^{\infty} \frac{(-1)^k c(2k+1)}{(2k+1)!}=-1$$
$$\sum_{k=0}^{\infty} \frac{(-1)^k c(2k)}{(2k)!}=0$$
Is it possible?
| You are looking for a function $\displaystyle f(z) = \sum_{k \ge 0} \frac{c(k)}{k!} z^k$ satisfying
$$f(1) = 1$$
$$f(-1) = -1$$
$$f(i) = -i.$$
Infinitely many functions have this property. There is a unique quadratic polynomial $p(z)$ with this property (for example by Lagrange interpolation), and for any entire function $q(z)$ the function $p(z) + (z - 1)(z + 1)(z - i) q(z)$ has this property. In fact these are all entire functions with this property.
More generally I think the theory of interpolation by entire functions is fairly well-understood, but I don't know of a good reference. If the set of $z$ at which you fix the value of $f$ has a limit point, then $f$ is unique by the identity theorem. If the set of $z$ at which you fix the value of $f$ is countable and does not have a limit point, then $f$ is non-unique by Weierstrass factorization.
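Added: for concreteness (my computation, not part of the answer), the unique quadratic through the three prescribed values via Lagrange's formula:

    import sympy as sp

    z = sp.symbols('z')
    pts = [(1, 1), (-1, -1), (sp.I, -sp.I)]

    p = sum(w * sp.Mul(*[(z - zj) / (zi - zj) for zj, _ in pts if zj != zi])
            for zi, w in pts)
    print(sp.expand(p))  # I*z**2 + z - I

One can verify by hand that $p(z)=iz^2+z-i$ indeed satisfies $p(1)=1$, $p(-1)=-1$, $p(i)=-i$.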
|
What is the value of $1^x$? I am trying to understand why $1^{x}=1$ for any $x\in\mathbb{R}$
Is it OK to write $1^{x}$? After all, the base should not equal $1$ for $b^{x}$ to be an exponential function.
Is $1^{x}=1$ just because it is defined to be so?
If possible please refer me to a book or article that discusses this topic.
| I think you will at least agree that $1^x = 1$ if $x$ is any natural number (since this is just $1$ times itself $x$ times). We can extend this to all integer values of $x$ by using the facts $a^0 = 1$ for all non-zero $a$ and $a^{-c} = \frac{1}{a^c}$. Then we have $1^x = 1$ for all rational numbers $x$ by the fact that $1^{b/c}$ is the $c$th root of $1^b$, which is $1$ by previous results. Finally we extend this to all real values of $x$ by using the following definition of the exponential function from calculus.
"If $t$ is irrational, then $a^t$ is defined to be the limit as $n$ approaches infinity of $a^{t_n}$, where $\{t_n\}$ is any sequence of rational numbers converging to $t$ (assuming this limit exists)."
In the case of $1^t$ we already have that $1^{t_n}=1$ for all rational $t_n$, so any such sequence as described above yields the constant sequence of values $1,1,1,\ldots$, which converges to $1$.
|
Infinitely differentiable How can one find if a function $f$ is infinitely differentiable?
| By differentiating it an infinite number of times?
But seriously, that's what you do. Just that usually you can infer what the higher order derivatives will be, so you don't have to compute it one by one.
Example To see that $\sin(x)$ is infinitely differentiable, you realize the following: $\frac{d}{dx}\sin(x) = \cos(x)$, and $\frac{d}{dx} \cos(x) = -\sin(x)$. So you see that $(\frac{d}{dx})^4\sin(x) = \sin(x)$, and so the derivatives are periodic. Therefore by continuity of $\sin(x)$ and its first three derivatives, $\sin(x)$ must be infinitely differentiable.
Example To see that $(1 + x^2)^{-1}$ is infinitely differentiable, you realize that $\frac{d}{dx}(1+x^2)^{-n} = -2n x (1+x^2)^{-n-1}$. So therefore by induction you have the following statement: all derivatives of $(1+x^2)^{-1}$ can be written as a polynomial in $x$ multiplied by $(1 +x^2)^{-1}$ to some power. Then you can use the fact that (a) polynomial functions are continuous and (b) quotients of polynomial functions are continuous away from where the denominator vanishes to conclude that all derivatives are continuous.
The general philosophy at work is that in order to show all derivatives are bounded and continuous, you can take advantage of some sort of recursive relationship between the various derivatives to inductively give a general form of the derivatives. Then you reduce the problem to showing that all functions of that general form are continuous.
|
Are these transformations of the $\beta^\prime$ distribution from $\beta$ and to $F$ correct? Motivation
I have a prior on a random variable $X\sim \beta(\alpha,\beta)$ but I need to transform the variable to $Y=\frac{X}{1-X}$, for use in an analysis and I would like to know the distribution of $Y$.
Wikipedia states:
if $X\sim\beta(\alpha,\beta)$ then $\frac{X}{1-X} \sim\beta^\prime(\alpha,\beta)$
Thus, the distribution is $Y\sim\beta^\prime(\alpha,\beta)$. The software that I am using, JAGS, does not support the $\beta^\prime$ distribution. So I would like to find an equivalent of a distribution that is supported by JAGS, such as the $F$ or $\beta$.
In addition to the above relationship between the $\beta$ and $\beta^\prime$,
Wikipedia states:
if $X\sim\beta^\prime(\alpha,\beta)$ then $\frac{X\beta}{\alpha}\sim F(2\alpha, 2\beta)$
Unfortunately, neither of these statements are referenced.
Questions
*
*1) Can I find $c$, $d$ for $Y\sim\beta^\prime(\alpha,\beta)$ where $Y\sim\beta(c,d)$
*2) Are these transformations correct? If so, are there limitations to using them, or a reason to use one versus the other (I presume $\beta$ is a more direct transformation, but why)?
*3) Where can I find such a proof or how would one demonstrate the validity of these relatively simple transformations?
| 2 and 3) Both of these transformations are correct; you can prove them with the cdf (cumulative distribution function) technique. I don't see any limitations on using them.
Here is the derivation for the first transformation using the cdf technique. The derivation for the other will be similar. Let $X \sim \beta(\alpha, \beta)$. Let $Y = \frac{X}{1-X}.$ Then
$$P\left(Y \leq y\right) = P\left(\frac{X}{1-X} \leq y\right) = P\left(X \leq y(1-X)\right) = P\left(X(1+y) \leq y\right) = P\left(X \leq \frac{y}{1+y}\right)$$
$$= \int_0^{\frac{y}{1+y}} \frac{x^{\alpha - 1} (1-x)^{\beta-1}}{B(\alpha,\beta)} dx.$$
Differentiating both sides of this equation then yields the pdf (probability density function) of $Y$. We have
$$f_Y(y)=\frac{\left(\frac{y}{1+y}\right)^{\alpha - 1} \left(1-\frac{y}{1+y}\right)^{\beta-1}}{B(\alpha,\beta)} \frac{d}{dy} \left(\frac{y}{1+y}\right)$$
$$= \frac{\left(\frac{y}{1+y}\right)^{\alpha - 1} \left(\frac{1}{1+y}\right)^{\beta-1}}{B(\alpha,\beta)} \left(\frac{1}{1+y}\right)^2$$
$$= \frac{y^{\alpha - 1} \left(1+y\right)^{-\alpha-\beta}}{B(\alpha,\beta)},$$
which is the pdf of a $\beta'(\alpha,\beta)$ random variable.
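Added: a numerical cross-check of both relations with SciPy (mine; note scipy.stats has betaprime built in, even though JAGS does not):

    import numpy as np
    from scipy import stats

    a, b = 2.5, 4.0
    y = np.linspace(0.05, 5, 50)

    # X ~ beta(a, b)  =>  Y = X/(1-X) ~ betaprime(a, b):
    # pdf_Y(y) = pdf_X(y/(1+y)) / (1+y)^2
    print(np.allclose(stats.betaprime.pdf(y, a, b),
                      stats.beta.pdf(y / (1 + y), a, b) / (1 + y) ** 2))  # True

    # Y ~ betaprime(a, b)  =>  Z = Y*b/a ~ F(2a, 2b):
    # pdf_Z(z) = (a/b) * pdf_Y(a*z/b)
    print(np.allclose(stats.f.pdf(y, 2 * a, 2 * b),
                      (a / b) * stats.betaprime.pdf(a * y / b, a, b)))    # True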
|
In every power of 3 the tens digit is an even number How to prove that in every power of $3$, with natural exponent, the tens digit
is an even number?
For example, $3 ^ 5 = 243$ and $4$ is even.
| It's actually interesting.
If you make a multiplication table of the residues 1, 3, 7, 9 modulo 20, you will find a closed set, i.e. you can never obtain 11, 13, 17 or 19 as a product of these numbers. Note that a number is congruent to 1, 3, 7 or 9 mod 20 exactly when its units digit is 1, 3, 7 or 9 and its tens digit is even.
What this means is that any number composed entirely of prime factors with an even tens digit will itself have an even tens digit; in particular, every power of 3 does. Such primes are 3, 7, 23, 29, 41, 43, 47, 61, 67, 83, 89, 101, 103, 107, 109, listing a little beyond one hundred.
If the tens digit is odd, then the number must be divisible by an odd number of primes (counted with multiplicity) of the form 11, 13, 17, 19 mod 20; any even number of these would produce an even tens digit.
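Added: a one-line numerical check of the statement itself (mine):

    print(all((3**n // 10) % 2 == 0 for n in range(1, 1001)))  # True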
|
How to sum up this series? How to sum up this series :
$$2C_o + \frac{2^2}{2}C_1 + \frac{2^3}{3}C_2 + \cdots + \frac{2^{n+1}}{n+1}C_n$$
Any hint that will lead me to the correct solution will be highly appreciated.
EDIT: Here $C_i = ^nC_i $
| Let's assume $C_i=\binom ni$. I'll give a solution that is not precalculus level. Consider first the equality
$$ (1+x)^n=C_0+xC_1+x^2C_2+\dots+x^nC_n. $$
This is the binomial theorem.
Integrate from 0 to t. On the left hand side we get $\frac{(1+t)^{n+1}-1}{n+1}$ and on the right hand side $\sum \frac1{i+1}t^{i+1}C_i$.
Now set $t=2$, and a bit of algebra gives you the answer you want.
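Added: explicitly, carrying out that algebra:
$$2C_0+\frac{2^2}{2}C_1+\frac{2^3}{3}C_2+\cdots+\frac{2^{n+1}}{n+1}C_n \;=\; \frac{3^{n+1}-1}{n+1}.$$
(Quick check for $n=1$: $2+2=4=(3^2-1)/2$.)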
Pretty sure there is an elementary approach as well.
|
Is possible to simplify $P = N^{ CN + 1}$ in terms of $N$? Having: $P = N^{CN + 1}$;
How can I simplify this equation to $N = \cdots$?
I tried using logarithms but I'm stuck...
Any ideas?
| The equation $P=N^N$ doesn't have an elementary solution, but taking logarithms you can find approximate solutions (or even solutions in the form of transseries). In your case, take logarithms to find $(cn+1)\log n = \log p$, and write $L = \log p$. Assuming $p$ and so $n$ are large, $n\log n \approx L/c$. Therefore $n \approx L/c$ to first order, and so $$n \approx \frac{L/c}{\log (L/c)}.$$
Edit: the following is wrong, as J.M. pointed out.
Again ignoring the $1$ in the exponent, we have $P^{1/c} = N^N$ and so $N = W(P^{1/c})$, where $W$ is the Lambert function (see Wikipedia).
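Added: for the first (valid) approximation, one can also solve $n\log n = K$ exactly with the Lambert function, since $n\log n = K$ means $\log n\, e^{\log n} = K$, i.e. $n = e^{W(K)}$; a small check (mine):

    import numpy as np
    from scipy.special import lambertw

    K = 1e6                                  # stands for (log p)/c
    n_exact = np.exp(lambertw(K).real)       # solves n*log(n) = K
    n_approx = K / np.log(K)                 # the asymptotic formula above
    print(n_exact, n_approx, n_exact * np.log(n_exact))  # last value ~ 1e6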
|
Computer Programs for Pure Mathematicians Question: Which computer programs are useful for a pure mathematician to familiarize themselves with?
Less Briefly: I was once told that, now-a-days, any new mathematician worth his beans knows how to TeX up work; when I began my graduate work one of the fourth year students told me he couldn't TeX, I was horrified! Similarly, a number of my peers were horrified when I told them I'd never used Matlab or Mathematica before. Currently, I can "get by" in all three of these programs, but it made me think: what programs are popular with pure mathematicians? I don't mean to limit this to computing things: programs which help to draw pictures and things can also be included.
Lest I Start A Flame War: This is not meant to be a "what is your favorite computer language" question or a poll on who thinks Mathematica is worse than Sage. This is meant to be a survey on what programs and resources are currently being used by pure mathematicians and may be good to look at and read up on.
I'm also unsure of how to tag this.
| mpmath, which is a part of sage has great special functions support.
nickle is good for quick things, and has C like syntax.
|
Generalizing Cauchy-Riemann Equations to Arbitrary Algebraic Fields Can it be done?
For an arbitrary quadratic field $Q[\sqrt{d}]$, it's easy to show the equations are simply $ f_x = -\sqrt{d} f_y $, where $ f : Q[\sqrt{d}] \to Q[\sqrt{d}]$. I'm working on the case of $Q[\theta]$, when $\theta$ is a root of $\theta^3 - a\theta - b$, but I'm not sure if it's even possible. Has there been any mathematical research done on this topic? What do you think about it?
| I don't know if this will help, but I thought about something like this when I was an undergrad. I was thinking about the Jugendtraum: the fact that abelian extensions of imaginary quadratic fields can be described by values of analytic functions on $\mathbb{C}$. My thought was the following: Let $K=\mathbb{Q}(\sqrt{-D})$ be a quadratic imaginary field. Then $\mathbb{C} \cong K \otimes \mathbb{R}$ and we can write the Cauchy-Riemann equations as $(D \partial_x^2 + \partial_y^2) f=0$, which seems to be built from the norm form $K \to \mathbb{Q}$.
Therefore (I thought), if we want to generalize the Jugendtraum to a real quadratic field $\mathbb{Q}(\sqrt{D})$, we should consider functions on $K \otimes \mathbb{R}$ which obey $(D \partial_x^2 - \partial_y^2) f=0$.
Well, this didn't get anywhere. But I did show that a function $f: \mathbb{R}^2 \to \mathbb{R}$ obeying $(D \partial_x^2 - \partial_y^2) f=0$ is of the form $g(x+\sqrt{D}y) + h(x-\sqrt{D}y)$. So, if that's the road you're going down, I can tell you where it ends.
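Added: a quick symbolic confirmation of that last claim (mine): any $f=g(x+\sqrt{D}y)+h(x-\sqrt{D}y)$ satisfies $(D\,\partial_x^2-\partial_y^2)f=0$:

    import sympy as sp

    x, y = sp.symbols('x y')
    D = sp.symbols('D', positive=True)
    g, h = sp.Function('g'), sp.Function('h')

    f = g(x + sp.sqrt(D) * y) + h(x - sp.sqrt(D) * y)
    print(sp.simplify(D * f.diff(x, 2) - f.diff(y, 2)))  # 0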
|
Development of a specific hardware architecture for a particular algorithm How does a technical and theoretical study of a project of implementing an algorithm on a specific hardware architecture?
Function example:
%Expression 1:
y1 = exp(-(const1 + x)^2 / (const2^2))
y2 = y1 * const3
Where x is the input variable,
y2 is the output, and const1, const2 and const3 are constants.
I need to determine the error you get in terms of architecture that decides to develop, for example it suppose that is not the same architecture with 11 bits for the exponent and 52 bits for mantissa.
This is the concept of error I handle:
Relative Error = (Real Data - Architecture Data) / (Real Data) * 100
I consider as 'Real Data' the output of my algorithm that I get from Matlab (Matlab uses double-precision floating point, IEEE 754: 52 bits for the mantissa, 11 bits for the exponent, one bit for the sign) with expression 1, and I consider as 'Architecture Data' the output of my algorithm running on a particular architecture (for instance an architecture that uses 12 bits for the mantissa, 5 bits for the exponent and 1 bit for the sign).
EDIT:
NOTE: The kind of algorithms I am referring to are all those which use mathematical functions that can be decomposed in terms of additions, multiplications, subtractions and divisions.
Thank you!
| You have several issues to consider. First, your output format allows very many fewer output values than the double standard. If your exponent is 2-based, your output will be of the form $\pm m*2^e$ where m has 2048 values available and e has 32 values (maybe they range from -15 to +16). The restricted range of e means that you will underflow/overflow quite often. If you are within range, you will still have an error as large as one part in 4096 assuming your calculation is perfect and the only error is representing the output value.
A second issue is precision of the input and error propagation. If your input values are represented in the same format, they will be in error by up to 1 part in 4096 for the same reason. Then when you calculate $f(x)=c_3 \exp {\frac{-(c_1+x)^2}{c_2}}$ (where I absorbed the squaring of $c_2$ into its definition) the error will propagate. You can assess this by evaluating $\frac{\text{d}\ln f(x)}{\text{d}x}=\frac{\text{d}\ln (c_3 \exp {\frac{-(c_1+x)^2}{c_2}})}{\text{d}x}$ to get the fractional error in $f(x)$ depending on the error in $x$
All this assumes your calculation is perfect and the only problem is the accuracy of representing the data. If not, you also need to assess the error that is inherent in the computation.
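Added: to make this concrete (my sketch, not part of the answer), you can prototype the effect of a reduced format before committing to hardware. NumPy's float16 (1 sign, 5 exponent, 10 mantissa bits, close to the 12-bit-mantissa example) serves as a stand-in for the "architecture data":

    import numpy as np

    c1, c2, c3 = 0.5, 2.0, 3.0
    x64 = np.linspace(-5, 5, 1001)                 # "real data" grid: float64
    y64 = c3 * np.exp(-(c1 + x64) ** 2 / c2 ** 2)

    x16 = x64.astype(np.float16)                   # "architecture data": float16
    y16 = np.float16(c3) * np.exp(-(np.float16(c1) + x16) ** 2
                                  / np.float16(c2) ** 2)

    rel_err = np.abs((y64 - y16.astype(np.float64)) / y64) * 100
    print(rel_err.max())  # worst-case relative error, in percent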
|
Best Strategy for a die game You are allowed to roll a die up to six times. Anytime you stop, you get the dollar amount of the face value of your last roll.
Question: What is the best strategy?
According to my calculation, for the strategy 6,5,5,4,4, the expected value is $142/27\approx 5.26$, which I consider quite high. So this might be the best strategy.
Here, 6,5,5,4,4 means in the first roll you stop only when you get a 6; if you did not get a 6 in the first roll, then in the second roll you stop only when you roll a number 5 or higher (i.e. 5 or 6), etc.
| Just work backwards. At each stage, you accept a roll that is >= the expected gain from the later stages:
Expected gain from 6th roll: 7/2
Therefore strategy for 5th roll is: accept if >= 4
Expected gain from 5th roll: (6 + 5 + 4)/6 + (7/2)(3/6) = 17/4
Therefore strategy for 4th roll is: accept if >= 5
Expected gain from 4th roll: (6 + 5)/6 + (17/4)(4/6) = 14/3
Therefore strategy for 3rd roll is: accept if >= 5
Expected gain from 3rd roll: (6 + 5)/6 + (14/3)(4/6) = 89/18
Therefore strategy for 2nd roll is: accept if >= 5
Expected gain from 2nd roll: (6 + 5)/6 + (89/18)(4/6) = 277/54
Therefore strategy for 1st roll is: accept only if 6
Expected gain from 1st roll: 6/6 + (277/54)(5/6) = 1709/324
So your strategy is 6,5,5,5,4 for an expectation of $5.27469...
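Added: the same backward induction in a few lines of Python (mine), for any number of rolls:

    from fractions import Fraction

    def die_game(rolls=6, sides=6):
        ev = Fraction(sides + 1, 2)     # expected value of the final roll
        thresholds = []
        for _ in range(rolls - 1):
            keep = [v for v in range(1, sides + 1) if v >= ev]  # faces worth stopping on
            thresholds.append(min(keep))
            ev = Fraction(sum(keep), sides) + ev * Fraction(sides - len(keep), sides)
        return thresholds[::-1], ev

    print(die_game())  # ([6, 5, 5, 5, 4], Fraction(1709, 324))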
|
Projection of a lattice onto a subspace Let $G$ be a $n \times n$ matrix with real entries and let $\Lambda = \{x^n \colon \exists i^n \in \mathbb{Z}^n \text{ such that } x^n = G \cdot i^n\}$ define a lattice. I am interested in projecting the lattice points onto a $k$-dimensional subspace $U$ with $k < n$. Let $A$ be a $n \times k$ matrix of vectors that span $U$. Then, the projection of $\Lambda$ onto $U$ has the generator matrix $P_A \cdot G$ where $P_A = A (A^TA)^{-1}A^T$ is the projection operator. Lets also assume that the entries of $G$ have no structure between them - i.e., they are generally chosen from the reals.
I am interested in finding out when the projection of $\Lambda$ onto $U$ (call it $\Lambda_U$) is also a $k$-dimensional lattice. It seems to me that in most cases, $\Lambda_U$ will fill out the $k$-dimensional space $U$ completely, i.e., the shortest distance between neighboring points in $\Lambda_U$ will be arbitrarily small (obs. 1). For certain subspaces $U$ that seem to be "aligned" correctly with $\Lambda$, we get a well-defined $\Lambda_U$. By well-defined, I mean that the spacing between the nearest neighbors of $\Lambda_U$ is bounded away from $0$. As far as I can tell, this happens when $A$ is chosen such that the columns of $P_A \cdot G$ are rational multiples of one another (obs. 2).
Questions:
*
*Are the observations (obs. 1) and (obs. 2) correct?
*If so, is this a well-studied concept? I didn't have much success googling with the obvious keywords such as lattice "projection" or "matrix column rational multiples" etc.
*Assuming the observations are correct, given $G$, is there a way to choose $A$ such that the columns of $P_A \cdot G$ are rational multiples of one another?
| The technique you suggest is sometimes used to construct quasi-crystalline structures or, in mathematical parlance, aperiodic tilings. If you take a high dimensional lattice and you project on an appropriate subspace U, the result can be the lattice structure of a quasi-crystal.
|
Homeomorphism of the unit disk onto itself which does not extend to the boundary It is well known that any conformal mapping of the unit disk onto itself extends to the unit cirle.
However, is there an homeomorphism of the unit disk onto itself which does not extend to a continuous function on the closed unit disk?
If yes, can you give an explicit one?
Thank you,
Malik
| This following gives an example, probably:
Let $H=\{(x,y)\in\mathbb R^2:y>0\}$ be the upper half plane in $\mathbb R^2$ and let $\bar H$ be its closure. Let me give you an homeo $f:H\to H$ which does not extend to an homeo $\bar f:\bar H\to\bar H$, and which fixes the point of infinity on the $x$-axis. Then you can conjugate with an homeo $D\to H$ from the open unit disk which extends to the boundary (apart from one point...)
Let $\phi:\mathbb R\times(0,\infty)\to\mathbb R$ be given by $\phi(x,t)=\frac1{\sqrt{4\pi t}}\exp\left(-\frac{x^2}{4t}\right)$, and let $h:\mathbb R\to\mathbb R$ be a positive smooth function with support on $[-2,2]$ and such that it is constantly $1$ on $[-1,1]$. Define now $g:H\to\mathbb R$ so that when $x\in\mathbb R$ and $y>0$ we have $$
g(x,y)=\int_{-\infty}^\infty\phi(x-\xi,y)(1-h(\xi))\,\mathrm d\xi.
$$ This is a solution of the one-dimensional heat equation. Properties of that equation (which one can easily show in this case!) imply that $g$ extends continuously to a map $\bar g:\bar H\to\mathbb R$.
Now consider the map $f:(x,t)\in H\mapsto (xg(x,t),t)\in H$. This is an homeomorphism, and it extends to the map $\bar f:(x,t)\in\bar H\mapsto (x\bar g(x,t),t)\in\bar H$, which is not an homeo.
|
Poincare Duality Reference In Hatcher's "Algebraic Topology" in the Poincaré Duality section he introduces the subject by doing orientable surfaces. He shows that there is a dual cell structure to each cell structure and it's easy to see that the first structure gives the cellular chain complex, while the other gives the cellular cochain complex. He goes on to say that this generalizes for manifolds of higher dimension, but that "requires a certain amount of manifold theory". Is there a good book or paper where I can read about this formulation of Poincaré Duality?
| See also my 2011 Bochum lectures The Poincare duality theorem and its converse I., II.
|
System of linear equations Common form of system of linear equations is A*X = B, X is unknown. But how to find A, if X and B are known?
A is MxN matrix, X is column vector(N), B is column vector(M)
| Do it row by row. Row $k$ in $A$ multiplied by the column vector $X$ equals the $k$th entry in the vector $B$. This is a single equation for the $N$ entries in that row of $A$ (so unless $X$ is zero, you get an $(N-1)$-parameter set of solutions for each row).
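Added: since each of the $M$ rows gives one equation in $N$ unknowns, $A$ is far from unique for $N>1$; one convenient closed-form choice (my addition) is the rank-one matrix $A = \frac{B X^{T}}{X^{T} X}$, which satisfies $AX=B$ exactly:

    import numpy as np

    X = np.array([1.0, 2.0, 3.0])    # known N-vector
    B = np.array([4.0, 5.0])         # known M-vector

    A = np.outer(B, X) / X.dot(X)    # one particular solution (rank one)
    print(np.allclose(A @ X, B))     # True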
|
Please explain how Conditionally Convergent can be valid? I understand the basic idea of Conditionally Convergent (some infinitely long series can be made to converge to any value by reordering the series). I just do not understand how this could possibly be true. I think it defies common sense and seems like a clear violation of the Commutative Law.
| It deserves to be better known that there are simple cases where one can give closed forms for some rearrangements of alternating series. Here are a couple of interesting examples based on results of Schlömilch in 1873. Many further results can be found in classical textbooks on infinite series, e.g. those by Bromwich and Knopp.
Let $\rm\ H^m_n\ $ be the rearrangement of the alternating harmonic series $\rm\ 1 - 1/2 + 1/3 - 1/4 +\: \cdots\ $ obtained by taking consecutive groups of $\rm\:m\:$ positive terms and $\rm\:n\:$ negative terms. Then
$$\rm H^m_n\ =\ log\ 2 + \frac{1}2\ \lim_{k\to\infty}\ \int^{\:mk}_{nk}\frac{1}x\ dx\ =\ \log 2 + \frac{1}2 \log\frac{m}n $$
Similarly rearranging Leibniz's sum $\rm\ L\ =\ \pi/4\ =\ 1 - 1/3 + 1/5 - 1/7 +\: \cdots\ $ yields
$$\rm L^m_n\ =\ \frac{\pi}4 + \frac{1}2\ \lim_{k\to\infty}\ \int^{\:mk}_{nk}\frac{1}{2x-1}\ dx\ =\ \frac{\pi}4 + \frac{1}4 \log\frac{m}n $$
Thus as $\rm\:m\:$ varies we obtain infinitely many rearrangements with distinct sums.
The proof of the general theorem underlying these results is quite simple - using nothing deeper than the integral test. See Beigel: Rearranging Terms in Alternating Series, Math. Mag. 1981.
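Added (not part of the classical references): a numerical illustration of the first formula with $m=2,\ n=1$, where the rearranged alternating harmonic series converges to $\log 2+\frac12\log 2=\frac32\log 2\approx 1.0397$ instead of $\log 2$:

    import math

    total, pos, neg = 0.0, 1, 2      # next odd and next even denominators
    for _ in range(200000):          # blocks: 2 positive terms, then 1 negative term
        total += 1/pos + 1/(pos + 2) - 1/neg
        pos += 4
        neg += 2
    print(total, 1.5 * math.log(2))  # both ~ 1.0397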
|
Projective closure Is the projective closure of an infinite affine variety (over an algebraically closed field, I only care about the classical case right now) always strictly larger than the affine variety? I know it is an open dense subset of its projective closure, but I don't think it can actually be its own projective closure unless it is finite.
I guess my intuition has me worried about cases like the plane curve $X^2 + Y^2 - 1$, since the real part is compact, but such a curve must still "escape to infinity" over an algebraically closed field, right?
| For your own example: the projectivization of your curve is given by $X^2 + Y^2 - Z^2= 0$. There is the point corresponding to the projective equivalence class of $(1,i,0)$, which does not belong to the affine curve.
For varieties given by a single equation, you can always similarly find a projective point at infinity that lies on the projective closure but not on the affine variety. For more general varieties, I should imagine that it is still the case; but for a proper argument a better person in algebraic geometry is needed.
|
Can a polynomial size CFG over large alphabet describe any of these languages: Can a polynomial size CFG over large alphabet describe any of these languages:
*
*Each terminal appears $0$ or $2$ times
*Word repetition $\{www^* \mid w \in \Sigma^*\}$ (word repetition of an arbitrary word $w$)
"Polynomial size" Context Free Grammar is defined as Chomsky Normal Form polynomial in $|Σ|$.
References to poly-size CFGs over large alphabets will be appreciated.
EDIT: This is not a homwork. Positive answer can give extension to Restricted read twice BDDs and context free grammars
| The language described in (2) is not context free. Take the word $a^nba^nba^nb$ for $n$ large enough, and use the pumping lemma to get a word not in the language.
As for (1), consider a grammar in CNF (Chomsky Normal Form) generating that language. Let $n = |\Sigma|$. Consider all $S=(2n)!/2^n$ saturated words containing each character exactly twice. Looking at the tree generating each of these words $w$, there must be a symbol $A_w$ which generates a subword $s_w$ of length between $1/3$ and $2/3$ of the entire word (of length $2n$). Suppose $A_u = A_v$. Then $|s_u| = |s_v|$ and $s_u = s_v$ up to permutation, since otherwise we can replace one occurrence by the other to get a word not in the language.
How many times can a given $A_u$ appear? Equivalently, in how many saturated words does $s_u$ appear, up to permutation, somewhere in the word? Any such word can be obtained from $u$ by permuting $s_u$, permuting the rest of the word, and shifting the position of $s_u$, to a total of at most $2n|s_u|!(2n-|s_u|)! \leq 2n(2n/3)!(4n/3)!$. Therefore the number of different symbols in the grammar is at least
$$ \frac{(2n)!/2^n}{2n(2n/3)!(4n/3)!} = O\left(\frac{c^n}{n^{1.5}}\right),$$
where $c = 9/4\sqrt[3]{2} \approx 1.78583$ (asymptotics courtesy of Wolfram Alpha).
According to Wikipedia, one can transform a grammar $G$ into a CNF grammar of size $O(|G|^2)$. Therefore this language requires exponentially many symbols for any grammar.
|
The logic behind the rule of three on this calculation First,
to understand my question, checkout this one:
Calculating percentages for taxes
Second,
consider that I'm a layman in math.
So, after trying to understand the logic used to get the final result. I was wondering:
Why multiply $20,000 by 100 and then divide by 83? I know this is the rule of three, but I can't understand the "internals" of this approach. It isn't as intuitive as thinking of it this way:
Saying 100% of a value is like dividing this value into 100 parts. In other words: I have 100 separate parts of this whole.
It's intuitive to think about the taxes like this:
$$X - 17\% = \$20{,}000$$
So:
$$\$20{,}000 = 83\%$$
For me, the easiest and most comprehensible way to solve this is:
$$\$20{,}000 / 83 = 240.96$$
It's the same as thinking: if 100% means 100 parts of the whole, then a value that represents 83% can be divided by 83 to find one part (1%).
And finally, to get the result:
$$\$20{,}000 + 17 \times 240.96$$
My final question is:
How can I think intuitively like this using the Rule of Three?
In other words, why is multiplying 20,000 by 100 and then dividing by 83 a shortcut to get the result?
| The "rule of three" is an ancient ad-hoc mindless rote rule of inference that is best ignored. Instead, you should strive to learn the general principles behind it - namely, the laws of fraction arithmetic. Let's consider the example at hand. You seek the number of dollars $\rm\:X\:$ such that when decremented by $\:17\%\:$ yields $\rm\:N\:$ dollars. First, recall that $\rm\ 17\%\:$ of $\rm\:X\:$ means $\rm\displaystyle \frac{17}{100}\ X\:.\ $ Thus your equation is:
$$\rm N\ =\ X - \frac{17}{100}\ X\ =\ \bigg(1 - \frac{17}{100}\bigg)\ X\ =\ \bigg(\frac{100}{100}-\frac{17}{100}\bigg)\ X\ =\ \frac{83}{100}\ X $$
Thus $\rm\displaystyle\ \frac{83}{100}\ X\ =\ N\ \ \Rightarrow\ \ X\ =\ \frac{100}{83}\ N\ $ follows by multiplying both sides by $\rm\displaystyle\ \frac{100}{83}$
Note that we applied no ad-hoc rules above - just the basic laws of the arithmetic of fractions. These are the laws that are worthy of mastering.
It's interesting to look at the decline of the use of the "rule of three" over the last two centuries as the knowledge of general (abstract) algebra evolved. This is very easy using the recently-released Google Books Ngram viewer - which searches for phrases over 5 million books back to 1500. Browsing one of the earliest textbooks in the Google corpus containing the rule of three I noticed that it is immediately followed by a section titled "method of making taxes". So it seems this was a big application in the old days. Also notice how "fraction arithmetic" really ramped up circa 1960 (perhaps due to "new math" programs?).
|
Fiction "Division by Zero" By Ted Chiang Fiction "Division by Zero" By Ted Chiang
I read the fiction story "Division by Zero" By Ted Chiang
My interpretation is the character finds a proof that arithmetic is inconsistent.
Is there a formal proof the fiction can't come true? (I don't suggest the fiction can come true).
EDIT: I see someone tried
| Is there a formal proof the fiction can't come true?
No, by Gödel's second incompleteness theorem, formal systems can prove their own consistency if and only if they are inconsistent. So given that arithmetic is consistent, we'll never be able to prove that it is. (EDIT: Actually not quite true; see Alon's clarification below.)
As an aside, if you liked "Division by Zero," you might also like Greg Egan's pair of stories in which arithmetic isn't consistent: "Luminous" and "Dark Integers".
|
an example of a continuous function whose Fourier series diverges at a dense set of points Please give me a link to a reference for an example of a continuous function whose Fourier series diverges at a dense set of points. (given by Du Bois-Reymond). I couldn't find this in Wikipedia.
| Actually, such an almost-everywhere divergent Fourier series was constructed by Kolmogorov.
For an explicit example, you can consider a Riesz product of the form:
$$ \prod_{k=1}^\infty \left( 1+ i \frac{\cos 10^k x}{k}\right)$$
which is divergent. For more examples, see here and here.
Edit: (response to comment). Yes, you are right, du Bois Reymond did indeed construct the examples of Fourier series diverging at a dense set of points. However the result of Kolmogorov is stronger in that it gives almost everywhere divergence.
The papers of du Bois Reymond are:
Ueber die Fourierschen Reihen
available for free download here also another one here.
|
Probability that a random permutation has no fixed point among the first $k$ elements Is it true that $\frac1{n!} \int_0^\infty x^{n-k} (x-1)^k e^{-x}\,dx \approx e^{-k/n}$ when $k$ and $n$ are large integers with $k \le n$?
This quantity is the probability that a random permutation of $n$ elements does not fix any of the first $k$ elements.
| Update: This argument only holds for some cases. See italicized additions below.
Let $S_{n,k}$ denote the number of permutations in which the first $k$ elements are not fixed. I published an expository paper on these numbers earlier this year. See "Deranged Exams," (College Mathematics Journal, 41 (3): 197-202, 2010). Aravind's formula is in the paper, as are several others involving $S_{n,k}$ and related numbers.
Theorem 7 (which I also mention in this recent math.SE question) is relevant to this question. It's
$$S_{n+k,k} = \sum_{j=0}^n \binom{n}{j} D_{k+j},$$
where $D_n$ is the number of derangements on $n$ elements. See the paper for a simple combinatorial proof of this.
Since $D_n$ grows as $n!$ via $D_n = \frac{n!}{e} + O(1)$ (see Wikipedia's page on the derangement numbers), and if $k$ is much larger than $n$,
the dominant terms in the probability
$\frac{S_{n+k,k}}{(n+k)!}$ are the $j = n$ and $j = n-1$ terms from the Theorem 7 expression. Thus we have
$$\frac{S_{n+k,k}}{(n+k)!} \approx \frac{D_{n+k} + n D_{n+k-1}}{(n+k)!} \approx \frac{1}{e}\left(1 + \frac{n}{n+k}\right) \approx e^{-1} e^{\frac{n}{n+k}} = e^\frac{-k}{n+k},$$
where the second-to-last step uses the first two terms in the Maclaurin series expansion for $e^x$.
Again, this argument holds only for (in my notation) $k$ much larger than $n$.
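Added: a quick numerical look at the question's claim (mine, independent of the regime above), checking the integral against $e^{-k/n}$ for, say, $n=30$, $k=10$:

    import numpy as np
    from math import factorial, exp
    from scipy.integrate import quad

    n, k = 30, 10
    f = lambda x: x**(n - k) * (x - 1)**k * np.exp(-x) / factorial(n)
    p, _ = quad(f, 0, 200)            # integrand is negligible beyond x ~ 200
    print(p, exp(-k / n))             # close to each other, ~0.72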
|
Bi invariant metrics on $SL_n(\mathbb{R})$ Does there exist a bi-Invariant metric on $SL_n(\mathbb{R})$. I tried to google a bit but I didn't find anything helpful.
| If $d$ were a bi-invariant metric on $\operatorname{SL}_n(\mathbb{R})$, we would be able to restrict it to a bi-invariant metric on $\operatorname{SL}_2(\mathbb{R})$ as in Jason's answer.
And then we would have $$ d\left(\left[{\begin{array}{cc}
t & 0 \\
0 & 1/t \\
\end{array}}\right]\left[{\begin{array}{cc}
1 & 1 \\
0 & 1 \\
\end{array}}\right]\left[{\begin{array}{cc}
1/t & 0 \\
0 & t \\
\end{array}}\right], \left[{\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}}\right]\right)= d\left(\left[{\begin{array}{cc}
1 & t^2 \\
0 & 1 \\
\end{array}}\right], \left[{\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}}\right]\right)\to 0$$
as $t\to 0$. On the other hand, we also have
$$
d\left(\left[{\begin{array}{cc}
t & 0 \\
0 & 1/t \\
\end{array}}\right]\left[{\begin{array}{cc}
1 & 1 \\
0 & 1 \\
\end{array}}\right]\left[{\begin{array}{cc}
1/t & 0 \\
0 & t \\
\end{array}}\right], \left[{\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}}\right]\right)= d\left( \left[{\begin{array}{cc}
1 & 1 \\
0 & 1 \\
\end{array}}\right], \left[{\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}}\right]\right) \neq 0.
$$
|
Convolution of multiple probability density functions I have a series of tasks where when one task finishes the next task runs, until all of the tasks are done. I need to find the probability that everything will be finished at different points in time. How should I approach this? Is there a way to find this in polynomial time?
The pdfs for how long individual tasks will run have been found experimentally and are not guaranteed to follow any particular type of distribution.
| If the durations of the different tasks are independent, then the PDF of the overall duration is indeed given by the convolution of the PDFs of the individual task durations.
For efficient numerical computation of the convolutions, you probably want to apply something like a Fourier transform to them first. If the PDFs are discretized and of bounded support, as one would expect of empirical data, you can use a Fast Fourier Transform. Then just multiply the transformed PDFs together and take the inverse transform.
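Added: a minimal sketch of that pipeline (mine; SciPy's fftconvolve performs the transform-multiply-invert steps internally), assuming the empirical PDFs are all sampled on a common uniform time grid:

    import numpy as np
    from scipy.signal import fftconvolve

    dt = 0.1                                   # common grid spacing of the PDFs
    pdfs = [np.random.dirichlet(np.ones(50)) / dt for _ in range(4)]  # 4 toy task PDFs

    total = pdfs[0]
    for p in pdfs[1:]:
        total = fftconvolve(total, p) * dt     # PDF of the sum of independent durations

    cdf = np.cumsum(total) * dt                # P(everything finished by time t)
    print(cdf[-1])                             # ~1.0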
|
An Eisenstein-like irreducibility criterion I could use some help with proving the following irreducibility criterion. (It came up in class and got me interested.)
Let $p$ be a prime. For an integer $n = p^k n_0$, where $p$ doesn't divide $n_0$, set $e_p(n) = k$. Let $f(x) = a_n x^n + \cdots + a_1 x + a_0$ be a polynomial with integer coefficients. If:
1. $e_p(a_n) = 0$,
2. $e_p(a_i) \geq n - i$, where $i = 1, 2, \ldots, n-1$,
3. $e_p(a_0) = n - 1$,
then $f$ is irreducible over the rationals.
Reducing mod $p$ and mimicking the proof of Eisenstein's criterion doesn't cut it (I think). I also tried playing with reduction mod $p^k$, but got stuck since $\mathbb{Z}_{p^k}[X]$ is not a UFD.
Also, does this criterion have a name?
| Apply Eisenstein's criterion to ${1 \over p^{n-1}}x^nf({p \over x})$.
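To spell out why this works (my expansion of the hint, using the conditions as numbered in the question): writing $g(x) = \frac{1}{p^{n-1}}x^n f\!\left(\frac{p}{x}\right)$, one gets
$$ g(x) \;=\; \frac{1}{p^{n-1}}\sum_{k=0}^{n} a_k\,p^k\,x^{\,n-k} \;=\; \frac{a_0}{p^{n-1}}\,x^n \;+\; \sum_{k=1}^{n-1} a_k\,p^{\,k-n+1}\,x^{\,n-k} \;+\; a_n\,p. $$
Condition 3 makes the leading coefficient an integer not divisible by $p$; condition 2 gives $e_p\!\left(a_k\,p^{\,k-n+1}\right) \ge (n-k) + k - n + 1 = 1$ for the middle coefficients; and condition 1 makes the constant term $a_n p$ divisible by $p$ but not by $p^2$. So $g$ is Eisenstein at $p$, hence irreducible, and a nontrivial factorization of $f$ would produce one of $g$ under $x \mapsto p/x$ after clearing powers of $x$ and $p$, so $f$ is irreducible as well.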
|
How to find $10 + 15 + 20 + 25 + \dots + 1500$ During a test I was given the following question:
What is $$10+15+20+25+...+1490+1495+1500=?$$
After process of elimination and guessing, I came up with an answer. Can anyone tell me a simple way to calculate this problem without having to actually add up the numbers?
| Hint (Gauss): $10 + 1500 = 15 + 1495 = 20 + 1490 = \cdots = 1500 + 10$.
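Carrying the hint through (my arithmetic): there are $\frac{1500-10}{5}+1 = 299$ terms, and pairing the sum with its reversal gives $299$ pairs, each summing to $1510$, so
$$10+15+\cdots+1500 \;=\; \frac{299\,(10+1500)}{2} \;=\; \frac{299\cdot 1510}{2} \;=\; 225{,}745.$$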
|
Find the Angle ( as Measured in Counter Clock Wise Direction) Between Two Edges This is a similar question to this one, but slightly different.
The question is: given two edges $e_1$ and $e_2$, with known vertex coordinates, how do I find the angle from $e_1$ to $e_2$, measured in the anticlockwise direction?
A diagram is shown below:
One way I can think of is to compute the cross and dot products of the two edges' vectors:
$$\sin\theta=\frac{e_1\times e_2}{\|e_1\|\,\|e_2\|}$$
$$\cos\theta=\frac{e_1\cdot e_2}{\|e_1\|\,\|e_2\|}$$
(where $e_1\times e_2$ denotes the scalar 2-D cross product) and then find $\theta$, taking into account whether $\sin\theta$ and $\cos\theta$ are $>0$ or $<0$. But this is very, very tedious and error-prone. Not to mention I'm not too sure whether the angle I get is always measured in the counterclockwise direction.
Is there a single, clean formula that allows me to do what I want to do?
| The way to get the smaller angle spanned by $\mathbf e_1=(x_1,y_1)$ and $\mathbf e_2=(x_2,y_2)$ is through the expression
$$\min\bigl(|\operatorname{atan2}(y_1,x_1)-\operatorname{atan2}(y_2,x_2)|,\; 2\pi-|\operatorname{atan2}(y_1,x_1)-\operatorname{atan2}(y_2,x_2)|\bigr),$$
where $\operatorname{atan2}(y,x)$ is the two-argument arctangent, i.e. the polar angle of the point $(x,y)$.
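Since the question asks for the counterclockwise angle specifically (not just the smaller one), it is worth noting a genuinely single-formula variant: feed the 2-D cross and dot products straight into the two-argument arctangent and wrap the result into $[0,2\pi)$. A minimal Python sketch (mine, not part of the answer above):

```python
import math

def ccw_angle(e1, e2):
    """Counterclockwise angle in [0, 2*pi) from vector e1 to vector e2.

    Assumes neither vector is the zero vector.
    """
    x1, y1 = e1
    x2, y2 = e2
    # atan2(cross, dot) is the signed angle in (-pi, pi];
    # the modulo wraps negative (clockwise) angles into [0, 2*pi).
    return math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2) % (2 * math.pi)

print(ccw_angle((1, 0), (0, 1)))   # pi/2
print(ccw_angle((0, 1), (1, 0)))   # 3*pi/2
```

This sidesteps the case analysis on the signs of $\sin\theta$ and $\cos\theta$ entirely.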
|
Good Book On Combinatorics What is your recommendation for an in-depth introductory combinatoric book? A book that doesn't just tell you about the multiplication principle, but rather shows the whole logic behind the questions with full proofs. The book should be for a first-year-student in college. Do you know a good book on the subject?
Thanks.
| Alan Tucker's Applied Combinatorics is rather unreadable. I'd avoid it. Nick Loehr's Bijective Combinatorics text is much more thorough, and it reads like someone is explaining mathematics to you. It mixes rigor and approachability quite well.
|
Finding the change point in data from a piecewise linear function Greetings,
I'm performing research that will help determine the size of observed space and the time elapsed since the big bang. Hopefully you can help!
I have data conforming to a piecewise linear function on which I want to perform two linear regressions. There is a point at which the slope and intercept change, and I need to (write a program to) find this point.
Thoughts?
| (This was supposed to be a comment, but it got too long.)
The problem of piecewise linear regression has been looked into many times before; I do not currently have access to these papers (and thus cannot say more about them), but you might want to look into these:
This paper (published in a physiology journal, of all places) discusses how to fit a piecewise linear function to certain data sets encountered in neurological research. A FORTRAN routine is included.
This paper also includes a FORTRAN routine for fitting piecewise linear functions to data.
This paper relies on maximum likelihood to find the best fit piecewise linear function.
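If a quick-and-dirty program is all that's needed, the one-dimensional structure makes a brute-force scan over candidate breakpoints perfectly workable for moderate data sizes. A sketch (my own, not taken from the papers above):

```python
import numpy as np

def find_breakpoint(x, y):
    """Index k minimizing the total SSE of separate least-squares lines
    fitted to (x[:k], y[:k]) and (x[k:], y[k:]). Assumes x is sorted.

    Returns (best_k, best_sse); the change point lies between
    x[best_k - 1] and x[best_k].
    """
    n = len(x)
    best_k, best_sse = None, np.inf
    for k in range(2, n - 1):            # each side keeps at least 2 points
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coeffs = np.polyfit(xs, ys, 1)             # slope, intercept
            resid = ys - np.polyval(coeffs, xs)
            sse += float(resid @ resid)
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k, best_sse
```

This is $O(n^2)$; maintaining running sums of $x$, $y$, $xy$, and $x^2$ brings it down to $O(n)$ if the data sets are large.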
|
famous space curves in geometry history? For a university assignment I have to visualize some curves in 3-dimensional space.
Until now I've implemented Bézier, helix and conical spiral.
Could you suggest some famous curves from the history of geometry?
| Though it is not 3D, the Clothoid or Cornu spiral is an amazing curve. It can easily be made 3D by adding a simple extra coordinate $z(t)=t$. It has infinite length but converges to two points in the plane. It has several applications in optics and road engineering, for example. And it looks quite nice:
I found a 3D plot too:
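If you want to draw it, the curve is parametrized by the Fresnel integrals, which SciPy provides directly; here is a minimal sketch (my own code, assuming SciPy and Matplotlib are available) of the 3-D lift mentioned above:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import fresnel   # returns the pair (S(t), C(t))

t = np.linspace(-6, 6, 2000)
S, C = fresnel(t)                   # planar Cornu spiral: (x, y) = (C(t), S(t))
ax = plt.figure().add_subplot(projection="3d")
ax.plot(C, S, t)                    # lift to 3D with z(t) = t
plt.show()
```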
|
integral test to show that infinite product $ \prod \limits_{n=1}^\infty\left(1+\frac{2}{n}\right)$ diverges This is part of an assignment that I need to get a good mark for - I'd appreciate it if you guys could look over it and give some pointers where I've gone wrong.
$$\prod_{n=1}^\infty\left(1+\frac{2}{n}\right)\; \text{ converges when }
\sum_1^\infty \ln\left(1+\frac{2}{n}\right)\; \text{ converges }.$$
$$\sum_1^\infty \ln\left(1+\frac{2}{n}\right)\;=\;\sum_1^\infty \bigl(\ln(n+2)-\ln(n)\bigr)$$
$$ \text{let } f(x)=\ln(x+2)-\ln(x) \rightarrow f'(x)=\frac{1}{x+2} - \frac{1}{x}$$
$$ = \frac{x-x-2}{x(x+2)} = \frac{-2}{x(x+2)}<0$$
$$f(x)\ \text{is a decreasing function}.$$
$$f(x) \; \text{is a positive function for} \;x\geq1$$
$$f(x)\;\text{is a continuous function for} \;x\geq 1$$
Using the integral test:
$$\int_1^\infty \bigl(\ln(x+2) - \ln(x)\bigr)\,dx = \lim_{t \to \infty}\int_1^t \ln(x+2)\,dx -
\lim_{t \to \infty}\int_1^t \ln x \,dx$$
$$\int \ln(x)\,dx = x \ln x - x + c \Rightarrow \int \ln(x+2)\,dx =
(x+2)\ln(x+2) - (x+2) + c$$
Therefore
$$\int \ln(x+2) - \ln(x)dx = (x+2)\ln(x+2)-x - 2 - x \ln(x) + x + c$$
$$ = x \ln(\frac{x+2}{x})+ 2\ln(x+2)-2 + c $$
Therefore,
$$\int_1^\infty \bigl(\ln(x+2) - \ln(x)\bigr)\,dx = \lim_{t \to
\infty}\left[x \ln\left(\frac{x+2}{x}\right) + 2 \ln(x+2) - 2\right]_1^t$$
$$ = \lim_{t \to
\infty}\left[t \ln\left(\frac{t+2}{t}\right) + 2\ln(t+2) - 2\right] - \left[\ln(3) + 2\ln(3) - 2\right] $$
$$ =\lim_{t \to
\infty}\left[t \ln(\frac{t+2}{t}) + 2\ln(t+2) - 3\ln(3)\right]$$
As $t\rightarrow\infty$, $t \ln\left(\frac{t+2}{t}\right)\to 2$ while $2\ln(t+2)\to\infty$, so the limit is $\infty$.
Therefore the series $$\sum_1^\infty \ln\left(1+\frac{2}{n}\right) $$ is divergent.
Similarly the infinite product $$\prod_{n=1}^\infty\left(1+\frac{2}{n}\right)$$ is also divergent.
| For $a_n \ge 0 $ the infinite product $\prod_{n=1}^\infty (1+ a_n)$ converges precisely when the infinite sum $\sum_{n=1}^\infty a_n $ converges, since
$$1+ \sum_{n=1}^N a_n \le \prod_{n=1}^N (1+ a_n) \le \exp \left( \sum_{n=1}^N a_n \right) . $$
So you only need consider $ \sum_{n=1}^\infty 2/n $ and you can use your integral test for that, or just quote the standard result.
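Incidentally, for this particular product the divergence can also be seen directly, since the partial products telescope:
$$\prod_{n=1}^{N}\left(1+\frac{2}{n}\right)\;=\;\prod_{n=1}^{N}\frac{n+2}{n}\;=\;\frac{(N+1)(N+2)}{2}\;\to\;\infty \quad\text{as } N\to\infty.$$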
|
Using the second principle of finite induction to prove $a^n -1 = (a-1)(a^{n-1} + a^{n-2} + ... + a + 1)$ for all $n \geq 1$ The hint for this problem is $a^{n+1} - 1 = (a + 1)(a^n - 1) - a(a^{n-1} - 1)$
I can see that the identity is true because, if you distribute the $a$ and the $-1$, the terms cancel out to give the left side. However, since the problem says to use strong induction, I am guessing there is more I am supposed to be doing. From the hint I can see that it is a true statement, but I am not sure how to use it to prove the equation, or how the right side of the hint relates to the right side of the problem. Also, I do realize that in the case of the hint $n = 1$ would be the special case.
| HINT $\ $ Put $\rm\ f(n) = a^n+a^{n-1}+\:\cdots\:+1\:.\ \ $ Then
$\rm\ a^{n+1}-1\ = \ (a+1)\ (a^n-1) - a\ (a^{n-1}-1)$
$\rm\phantom{\ a^{n+1}-1\ } =\ (a+1)\ (a-1)\ f(n-1) - a\ (a-1)\ f(n-2)\quad $ by strong induction
$\rm\phantom{\ a^{n+1}-1\ } =\ (a-1)\ ((a+1)\ f(n-1)- a\ f(n-2))\quad$
$\rm\phantom{\ a^{n+1}-1\ } =\ (a-1)\ \bigl(\:f(n-1) + a\ (f(n-1)-f(n-2))\bigr) $
$\rm\phantom{\ a^{n+1}-1\ } =\ \ldots $
$\rm\phantom{\ a^{n+1}-1\ } =\ (a-1)\ f(n)$
|
How to make a sphere-ish shape with triangle faces? I want to make an origami of a sphere, so I planned to print some net of a pentakis icosahedron, but I have an image of another sphere with more polygons:
I would like to find the net of such a model (I know it will be very fun to cut).
Do you know if it has a name?
| Here is a net of a buckyball, from GoldenNumber.net:
It should be possible to turn this into the kind of net you're looking for by replacing the pentagons and hexagons with 5 and 6 isosceles triangles (the heights of the triangles determine the "elevation" of the center vertex from the original pentagonal/hexagonal faces and thus affect the sphericality of the result).
|
Making Change for a Dollar (and other number partitioning problems) I was trying to solve a problem similar to the "how many ways are there to make change for a dollar" problem. I ran across a site that said I could use a generating function similar to the one quoted below:
The answer to our problem ($293$) is the
coefficient of $x^{100}$ in the reciprocal
of the following:
$(1-x)(1-x^5)(1-x^{10})(1-x^{25})(1-x^{50})(1-x^{100})$
But I must be missing something, as I can't figure out how they get from that to $293$. Any help on this would be appreciated.
| You should be able to compute it using a Partial Fraction representation (involving complex numbers). For instance see this previous answer: Minimum multi-subset sum to a target
Note, this partial fraction expansion needs to be calculated only one time. Once you have that, you can compute the way to make change for an arbitrary amount pretty quickly.
In this case, I doubt they really did that for finding the coefficient of $x^{100}$. It is probably quicker to just multiply out, ignoring the terms which would not contribute to the coefficient of $x^{100}$. Or you could try computing the partial fraction representation of only some of the terms and then multiply out.
Note, if you are multiplying out to find the coefficient of $x^{100}$, it would be easier not to go to the reciprocal, which arises from considering an infinite number of terms.
You just need to multiply out
$$ \left(\sum_{j=0}^{100} x^j\right)\left(\sum_{j=0}^{20} x^{5j}\right)\left(\sum_{j=0}^{10} x^{10j}\right)\left(\sum_{j=0}^{4} x^{25j}\right)\left(\sum_{j=0}^{2} x^{50j}\right)\left(1 + x^{100}\right)$$
(note the half-dollar factor $\sum_{j=0}^{2} x^{50j}$, corresponding to the $(1-x^{50})$ term in the reciprocal), which would amount to enumerating the different ways to make the change (and in fact is the way we come up with the generating function in the first place).
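If one wants the coefficient by machine rather than by hand, the standard dynamic program multiplies in one factor $\frac{1}{1-x^c}$ of the reciprocal at a time; a short Python sketch (mine, not from the quoted site):

```python
def change_count(amount, coins=(1, 5, 10, 25, 50, 100)):
    """Coefficient of x^amount in prod_c 1/(1 - x^c)."""
    ways = [1] + [0] * amount
    for c in coins:                        # multiply in 1/(1 - x^c)
        for v in range(c, amount + 1):
            ways[v] += ways[v - c]         # allow one more coin of value c
    return ways[amount]

print(change_count(100))   # 293
```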
You could potentially do other things, like computing the $100^{th}$ derivative at $0$, or computing a contour integral of the generating function divided by $x^{100}$, but I doubt they went that route either.
Hope that helps.
|
Equation for a circle I'm reading a book about calculus on my own and am stuck at a problem. The problem is:
There are two circles of radius $2$ that have centers on the line $x = 1$ and pass through the origin. Find their equations.
The equation for circle is $(x-h)^2 + (y-k)^2 = r^2$
Any hints will be really appreciated.
EDIT: Here is what I did. I drew a triangle from the origin and applied the Pythagorean theorem to find the perpendicular, with the hypotenuse $2$ (the radius) and the base $1$ (because the center lies on $x = 1$); the value of the y-coordinate is $\sqrt{3}$. Can anyone confirm if this is correct?
| HINT: Find out what points $(x, y)$ with $x = 1$ also have distance $2$ to the origin. What would these points represent?
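Carrying the hint through, and confirming the edit in the question: a center $(1, k)$ at distance $2$ from the origin must satisfy $1^2 + k^2 = 2^2$, so $k = \pm\sqrt{3}$, and the two circles are
$$(x-1)^2+\left(y-\sqrt{3}\right)^2=4 \qquad\text{and}\qquad (x-1)^2+\left(y+\sqrt{3}\right)^2=4.$$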
|
Why is the Möbius strip not orientable? I am trying to understand the notion of an orientable manifold.
Let $M$ be a smooth $n$-manifold. We say that $M$ is orientable if and only if there exists an atlas $A = \{(U_{\alpha}, \phi_{\alpha})\}$ such that $\det(J(\phi_{\alpha} \circ \phi_{\beta}^{-1}))> 0$ (where defined). My question is:
Using this definition of orientation, how can one prove that the Möbius strip is not orientable?
Thank you!
| If you had an orientation, you'd be able to define at each point $p$ a unit vector $n_p$ normal to the strip at $p$, in such a way that the map $p\mapsto n_p$ is continuous. Moreover, this map is completely determined once you fix the value of $n_p$ for some specific $p$. (You have two possibilities; defining $n_p$ uses a tangent plane at $p$, which is definable using a chart $(U_\alpha,\phi_\alpha)$ that covers $p$.)
The point is that the positivity condition you wrote gives you that the normal at any $p'$ is independent of the specific $(U_{\alpha'},\phi_{\alpha'})$ you may choose to use, and path connectedness gives you the uniqueness of the map. Now you simply check that if you follow a loop around the strip, the value of $n_p$ changes sign when you return to $p$, which of course is a contradiction.
(This is just a formalization of the intuitive argument.)
|