$\pi$ in terms of polygamma The computer found this, but couldn't prove it. Let $\psi(n,x)$ denote the polygamma function. With precision 500 decimal digits we have: $$ \pi^2 = \frac{1}{4}(15 \psi(1, \frac13) - 3 \psi(1, \frac16)) $$ Is it true? In machine readable form: pi^2 == 1/4*(15*psi(1, 1/3) - 3*psi(1, 1/6))
Note that $$ \psi(m,x) =(-1)^{m+1} m! \sum_{k=0}^{\infty} \frac{1}{(x+k)^{m+1}}. $$ Therefore $$ \psi(m,1/6) = (-1)^{m+1} m! \sum_{k=0}^{\infty} \frac{1}{(k+1/6)^{m+1}} =(-1)^{m+1} m! 6^{m+1} \sum_{n\equiv 1 \mod 6} \frac{1}{n^{m+1}}. $$ Writing the condition $n\equiv 1 \mod 6$ as $n\equiv 1 \mod 3$ but not $4 \mod 6$, the above is \begin{align*} &(-1)^{m+1} m! 6^{m+1} \Big( \sum_{n\equiv 1 \mod 3} \frac{1}{n^{m+1}} - \frac{1}{2^{m+1}} \sum_{n\equiv 2 \mod 3} \frac{1}{n^{m+1}}\Big)\\ &= 2^{m+1} \psi(m,1/3)-\psi(m,2/3). \end{align*} We also have $$ \psi(m,1/3) +\psi(m,2/3) = (-1)^{m+1} m! 3^{m+1} \sum_{n \not\equiv 0\mod 3} \frac{1}{n^{m+1}} = (-1)^{m+1} m! (3^{m+1} -1) \zeta(m+1). $$ From these two relations, clearly we have a linear relation connecting $\psi(m,1/6)$, $\psi(m,1/3)$ and $\zeta(m+1)$: namely, $$ \psi(m,1/6) = (2^{m+1}+1) \psi(m,1/3)+(-1)^m m! (3^{m+1}-1) \zeta(m+1). $$
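For $m=1$ the last relation gives $\psi(1,1/6)=5\psi(1,1/3)-8\zeta(2)$, hence $15\psi(1,1/3)-3\psi(1,1/6)=24\zeta(2)=4\pi^2$, which is exactly the conjectured identity. It is also easy to confirm numerically; the sketch below is my own (the helper `trigamma` and its truncation parameters are assumptions, not part of the answer), evaluating $\psi(1,x)$ from its series with an Euler-Maclaurin tail correction:

```python
import math

def trigamma(x, terms=1000):
    """psi(1, x) = sum_{k>=0} 1/(x+k)^2, summed directly with an
    Euler-Maclaurin tail correction so double precision is reached quickly."""
    s = sum(1.0 / (x + k) ** 2 for k in range(terms))
    t = x + terms
    # tail: integral term + boundary corrections from Euler-Maclaurin
    return s + 1.0 / t + 1.0 / (2 * t * t) + 1.0 / (6 * t ** 3) - 1.0 / (30 * t ** 5)

rhs = (15 * trigamma(1.0 / 3.0) - 3 * trigamma(1.0 / 6.0)) / 4
assert abs(math.pi ** 2 - rhs) < 1e-10
print(rhs, math.pi ** 2)
```

Double precision is of course far short of the 500 digits mentioned in the question, but it suffices to illustrate that the linear relation closes the gap.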
{ "language": "en", "url": "https://mathoverflow.net/questions/312479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Permutations $\pi\in S_n$ with $\sum_{k=1}^n\frac1{k+\pi(k)}=1$ Let $S_n$ be the symmetric group of all the permutations of $\{1,\ldots,n\}$. Motivated by Question 315568 (http://mathoverflow.net/questions/315568), here I pose the following question. QUESTION: Is it true that for each integer $n>5$ we have $$\sum_{k=1}^n\frac1{k+\pi(k)}=1$$ for some odd (or even) permutation $\pi\in S_n$? Let $a_n$ be the number of all permutations $\pi\in S_n$ with $\sum_{k=1}^n(k+\pi(k))^{-1}=1$. Via Mathematica, I find that \begin{gather}a_1=a_2=a_3=a_5=0,\ a_4=1,\ a_6=7, \\ a_7=6,\ a_8=30,\ a_9=110, \ a_{10}=278,\ a_{11}=1332.\end{gather} For example, $(1,4,3,2)$ is the unique (odd) permutation in $S_4$ meeting our requirement for $n=4$; in fact, $$\frac1{1+1}+\frac1{2+4}+\frac1{3+3}+\frac1{4+2}=1.$$ For $n=11$, we may take the odd permutation $(4,8,9,11,10,6,5,7,3,2,1)$ since \begin{align}&\frac1{1+4}+\frac1{2+8}+\frac1{3+9}+\frac1{4+11}+\frac1{5+10}\\&+\frac1{6+6}+\frac1{7+5}+\frac1{8+7}+\frac1{9+3}+\frac1{10+2}+\frac1{11+1}\end{align} has the value $1$; we may also take the even permutation $(5, 6, 7, 11, 10, 4, 9, 8, 3, 2, 1)$ to meet the requirement. I conjecture that the question has a positive answer. Your comments are welcome! PS: After my initial posting of this question, Brian Hopkins pointed out that A073112($n$) on OEIS gives the number of permutations $p\in S_n$ with $\sum_{k=1}^n\frac1{k+p(k)}\in\mathbb Z$, but A073112 contains no comment or conjecture.
Claim: $a_n>0$ for all $n\geq 6\quad (*)$. Proof: We use induction to prove $(*)$. We have $a_6,a_7,a_8,a_9,a_{10},a_{11}>0$. Assume $(*)$ holds for all integers in $[6,n-1]$; we show that $a_n>0$ for each $n\geq12$. If $n$ is odd, write $n=2m+1$; then $m\geq6$, so by the induction hypothesis there exists $\pi\in S_m$ such that $\sum\limits_{k=1}^{m}\frac{1}{k+\pi(k)}=1$. Let \begin{align*} \sigma(2k+1)&=2m+1-2k\quad\text{for}\quad k=0,1,\ldots, m,\\ \sigma(2k)&=2\pi(k)\quad\text{for}\quad k=1,2,\ldots, m. \end{align*} Then $\sigma\in S_{n}$, and $$\sum\limits_{k=1}^{n}\frac{1}{k+\sigma(k)}=\sum\limits_{k=1}^{m}\frac{1}{2k+\sigma(2k)}+\sum\limits_{k=0}^{m}\frac{1}{2k+1+\sigma(2k+1)}\\ =\frac{1}{2}\sum\limits_{k=1}^{m}\frac{1}{k+\pi(k)}+\sum\limits_{k=0}^{m}\frac{1}{2k+1+(2m-2k+1)}=\frac{1}{2}+(m+1)\frac{1}{2m+2}=1.$$ If $n$ is even, write $n=2m$; again $m\geq6$, and there exists $\pi\in S_m$ such that $\sum\limits_{k=1}^{m}\frac{1}{k+\pi(k)}=1$. Let \begin{align*} \sigma(2k-1)&=2m+1-2k\quad\text{for}\quad k=1,2,\ldots,m, \\ \sigma(2k)&=2\pi(k)\quad\text{for}\quad k=1,2,\ldots,m. \end{align*} Then $\sigma\in S_{n}$ and $$\sum\limits_{k=1}^{n}\frac{1}{k+\sigma(k)}=\sum\limits_{k=1}^{m}\frac{1}{2k+\sigma(2k)}+\sum\limits_{k=1}^{m}\frac{1}{2k-1+\sigma(2k-1)}=\frac{1}{2}\sum\limits_{k=1}^{m}\frac{1}{k+\pi(k)}+ \sum\limits_{k=1}^{m}\frac{1}{2k-1+(2m-2k+1)}=1.$$ Hence $a_{n}>0$. By induction, $(*)$ holds for all $n\geq 6$.
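The base-case values quoted from the question are easy to reproduce by exhaustive search in exact arithmetic (a sketch of mine; the function name `a` is not from the question):

```python
from fractions import Fraction
from itertools import permutations

def a(n):
    """Number of permutations pi in S_n with sum_k 1/(k + pi(k)) = 1, exactly."""
    return sum(
        1
        for p in permutations(range(1, n + 1))
        if sum(Fraction(1, k + v) for k, v in enumerate(p, start=1)) == 1
    )

print([a(n) for n in range(1, 8)])  # the question reports 0, 0, 0, 1, 0, 7, 6
```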
{ "language": "en", "url": "https://mathoverflow.net/questions/315648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 1, "answer_id": 0 }
Maximum eigenvalue of a covariance matrix of Brownian motion $$ A := \begin{pmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \cdots & \frac{1}{n}\\ \frac{1}{2} & \frac{1}{2} & \frac{1}{3} & \cdots & \frac{1}{n}\\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & \cdots & \frac{1}{n}\\ \vdots & \vdots & \vdots & \ddots & \frac{1}{n}\\ \frac{1}{n} & \frac{1}{n} & \frac{1}{n} & \frac{1}{n} & \frac{1}{n} \end{pmatrix}$$ How to prove that all the eigenvalues of $A$ are less than $3 + 2 \sqrt{2}$? This question is similar to this one. I have tried the Cholesky decomposition $A = L^{T} L$, where $$L^{T} = \left(\begin{array}{ccccc} 1 & 0 & 0 & \cdots & 0\\ \frac{1}{2} & \frac{1}{2} & 0 & \cdots & 0\\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ \frac{1}{n} & \frac{1}{n} & \frac{1}{n} & \frac{1}{n} & \frac{1}{n} \end{array}\right)$$ then $$(L^{T})^{-1}=\left(\begin{array}{ccccc} 1 & & & \cdots\\ -1 & 2 & & \cdots\\ & -2 & 3 & \cdots\\ \vdots & \vdots & \vdots & \ddots\\ & & & -(n-1) & n \end{array}\right)$$ $$A^{-1}=L^{-1}(L^{T})^{-1}$$ How to prove the eigenvalues of $A^{-1}$ $$\lambda_{i}\geq\frac{1}{3+2\sqrt{2}}$$ Further, I find that $A$ is the covariance matrix of Brownian motion at time $1, 1/2, 1/3, \ldots, 1/n$
In this answer I show that the largest eigenvalue is bounded by $5< 3 + 2\sqrt{2}$. I will first use the interpretation of this matrix as the covariance matrix of the Brownian motion at times $(\frac{1}{n},\dots, 1)$ (I reversed the order so that the sequence of times is increasing, which is more natural for me). We have $A_{ij} = \mathbb{E} (B_{t_{i}} B_{t_j})$. The largest eigenvalue will be the supremum over the unit ball of the expression $\langle x, A x\rangle$, which is equal to $\sum_{i,j} A_{ij} x_{i} x_{j}$. This is equal to $\mathbb{E} (\sum_{i=1}^{n} x_{i} B_{t_{i}})^2$. In order to exploit the independence of increments of the Brownian motion, we rewrite the sum $\sum_{i=1}^{n} x_i B_{t_{i}}$ as $\sum_{i=1}^{n} y_{i} (B_{t_{i}} - B_{t_{i-1}})$, where $y_{i}:= \sum_{k=i}^{n} x_{k}$ and $t_0:=0$. Thus we have $ \mathbb{E} (\sum_{i=1}^{n} x_{i} B_{t_{i}})^2 = \sum_{i=1}^{n} y_{i}^2 (t_{i}-t_{i-1}). $ The case $i=1$ is somewhat special and its contribution is $\frac{y_1^2}{n} \leqslant \sum_{k=1}^{n} x_{k}^2 = 1$. For the other ones we have $t_{i} - t_{i-1} = \frac{1}{(n-i+1)(n-i+2)}\leqslant \frac{1}{(n-i+1)^2}$. At this point, to get a nicer expression, I will reverse the order again by defining $z_{i}:= y_{n-i+1}$. So we want to estimate the expression $ \sum_{i=1}^{n} \left(\frac{z_i}{i}\right)^2. $ We can now use Hardy's inequality to bound it by $4 \sum_{i=1}^{n} x_{i}^2 =4$. So in total we get 5 as an upper bound, if I haven't made any mistakes.
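As a numerical sanity check on the bound, one can estimate the top eigenvalue directly (my own sketch; plain power iteration on the matrix $A_{ij}=1/\max(i,j)$, with `top_eigenvalue` an assumed helper name):

```python
def top_eigenvalue(n, iters=1000):
    """Largest eigenvalue of A with A[i][j] = 1/max(i, j) (1-based indices),
    estimated by power iteration; A is symmetric positive definite, so the
    iteration converges to the top eigenvalue."""
    A = [[1.0 / max(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(a_ij * v_j for a_ij, v_j in zip(row, v)) for row in A]
        lam = max(abs(x) for x in w)      # sup-norm Rayleigh-type estimate
        v = [x / lam for x in w]          # renormalize
    return lam

for n in (5, 20, 50):
    lam = top_eigenvalue(n)
    print(f"n={n}: lambda_max ~ {lam:.6f}")
    assert lam < 5  # consistent with the bound proved above
```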
{ "language": "en", "url": "https://mathoverflow.net/questions/366339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is this Laurent phenomenon explained by invariance/periodicity? In Chapter 4 (page 23, subsection "Somos sequence update") of his Tracking the Automatic Ant, David Gale discusses three families of recursively defined sequences of numbers, all due to Dana Scott and inspired by the Somos sequences: Sequence 1. Fix a positive integer $k\geq 2$. Define a sequence $\left( a_{0},a_{1},a_{2},\ldots \right) $ of positive rational numbers recursively by setting \begin{align*} a_{n}=1\qquad \text{for each }n<k \end{align*} and \begin{align*} a_{n}=\dfrac{a_{n-1}^{2}+a_{n-2}^{2}+\cdots +a_{n-k+1}^{2}}{a_{n-k}}\qquad \text{for each }n\geq k. \end{align*} Sequence 2. Fix an odd positive integer $k\geq 2$. Define a sequence $\left( a_{0},a_{1},a_{2},\ldots \right) $ of positive rational numbers recursively by setting \begin{align*} a_{n}=1\qquad \text{for each }n<k \end{align*} and \begin{align*} a_{n}=\dfrac{a_{n-1}a_{n-2}+a_{n-3}a_{n-4}+\cdots +a_{n-k+2}a_{n-k+1}}{ a_{n-k}}\qquad \text{for each }n\geq k. \end{align*} Sequence 3. Fix a positive integer $k\geq 2$. Define a sequence $\left( a_{0},a_{1},a_{2},\ldots \right) $ of positive rational numbers recursively by setting \begin{align*} a_{n}=1\qquad \text{for each }n<k \end{align*} and \begin{align*} a_{n}=\dfrac{a_{n-1}a_{n-2}+a_{n-2}a_{n-3}+\cdots +a_{n-k+2}a_{n-k+1}}{ a_{n-k}}\qquad \text{for each }n\geq k. \end{align*} Note the difference between Sequences 2 and 3: The numerator in Sequence 2 is $\sum\limits_{i=1}^{\left(k-1\right)/2} a_{n-2i+1} a_{n-2i}$, whereas the numerator in Sequence 3 is $\sum\limits_{i=1}^{k-2} a_{n-i} a_{n-i-1}$. Thus the requirement for $k$ to be odd in Sequence 2. Now, Gale claims that all three sequences have the integrality property: i.e., all their entries $a_{0},a_{1},a_{2},\ldots $ are integers (for all possible values of $k$). More interesting is the way he claims to prove this: by constructing an auxiliary sequence that turns out to be constant or periodic with a small period. 
Unfortunately, he only shows this for Sequence 1. Here, the auxiliary sequence is $\left( b_{k},b_{k+1},b_{k+2},\ldots \right) $, defined by setting \begin{align*} b_{n}=\dfrac{a_{n}+a_{n-k}}{a_{n-1}a_{n-2}\cdots a_{n-k+1}}\qquad \text{for each }n\geq k. \end{align*} By applying the recursion of Sequence 1 once to $n$ and once to $n-1$ and subtracting, it is not hard to see that $b_{n}=b_{n-1}$ for each $n\geq k+1$ . Thus, the sequence $\left( b_{k},b_{k+1},b_{k+2},\ldots \right) $ is constant, and therefore all its entries $b_{n}$ are integers (since $b_{k}=k$ is an integer). However, we can solve the equation $b_{n}=\dfrac{ a_{n}+a_{n-k}}{a_{n-1}a_{n-2}\cdots a_{n-k+1}}$ for $a_{n}$, obtaining $ a_{n}=b_{n}a_{n-1}a_{n-2}\cdots a_{n-k+1}-a_{n-k}$, and this gives a new recursive equation for the sequence $\left( a_{0},a_{1},a_{2},\ldots \right) $. This new recursive equation no longer involves division, and thus a straightforward strong induction suffices to show that all $a_{n}$ are integers (since all $b_{n}$ as well as the first $k$ entries $ a_{0},a_{1},\ldots ,a_{k-1}$ of Sequence 1 are integers). The details of this proof can be found in Gale's book or in the Notes on mathematical problem solving I am currently writing for Math 235 at Drexel (Exercise 8.1.8). Gale claims that similar arguments work for Sequences 2 and 3. And indeed, this proof can be adapted to Sequence 2 rather easily, by redefining the auxiliary sequence to be a sequence $\left( b_{k+1},b_{k+2},b_{k+3},\ldots \right) $ (starting at $b_{k+1}$ this time) defined by \begin{align*} b_{n}=\dfrac{a_{n}+a_{n-k-1}}{a_{n-2}a_{n-3}\cdots a_{n-k+1}}\qquad \text{ for each }n\geq k+1. \end{align*} I am, however, struggling with adapting this line of reasoning to Sequence 3. 
If $k$ is odd, then we can set \begin{align*} b_{n}=\dfrac{a_{n}+a_{n-k+1}}{a_{n-1}a_{n-3}\cdots a_{n-k+2}}\qquad \text{ for each }n\geq k-1 \end{align*} (where the denominator is $\prod\limits_{i=1}^{\left( k-1\right) /2}a_{n-2i+1}$). The resulting sequence $\left( b_{k-1},b_{k},b_{k+1},\ldots \right) $ is not constant, but it is periodic with period $2$ (that is, $b_{n}=b_{n-2}$ for each $n\geq k+1$); this is still sufficient for our argument. However, this only applies to the case when $k$ is odd. (I have found this definition of $ b_{n}$ in Section 7.5 of Joshua Alman, Cesar Cuenca, Jiaoyang Huang, Laurent phenomenon sequences, J. Algebr. Comb. (2016) 43:589--633, which studies a more general recursion.) When $k$ is even, I see no such proof. I assume that the integrality of $ a_{0},a_{1},a_{2},\ldots $ follows from the standard Laurent phenomenon results known nowadays (by Fomin, Zelevinsky, Lam, Pylyavskyy and others). I haven't properly checked it, as there are a few technical conditions too many, but it is certainly consistent with SageMath experiments. Alman/Cuenca/Huang do not seem to consider the $k$-even case in their paper. Question. Can we prove using the above tools that the entries $a_{0},a_{1},a_{2},\ldots $ of Sequence 3 are integers?
Yes, we can. The argument for odd $k$ made in the Alman/Cuenca/Huang paper was a red herring. We can argue for arbitrary $k \geq 2$ as follows: Let $n \geq k+2$. Then, the recursive definition of Sequence 3 yields \begin{align*} a_{n}=\dfrac{a_{n-1}a_{n-2}+a_{n-2}a_{n-3}+\cdots +a_{n-k+2}a_{n-k+1}}{a_{n-k}} \end{align*} and thus \begin{align*} a_{n} a_{n-k} = a_{n-1}a_{n-2}+a_{n-2}a_{n-3}+\cdots +a_{n-k+2}a_{n-k+1} . \end{align*} The same reasoning (applied to $n-2$ instead of $n$) yields \begin{align*} a_{n-2} a_{n-k-2} = a_{n-3}a_{n-4}+a_{n-4}a_{n-5}+\cdots +a_{n-k}a_{n-k-1} . \end{align*} Subtracting this equality from the preceding equality, we obtain \begin{align*} a_{n} a_{n-k} - a_{n-2} a_{n-k-2} &= \left(a_{n-1}a_{n-2}+a_{n-2}a_{n-3}+\cdots +a_{n-k+2}a_{n-k+1} \right) \\ & \qquad - \left(a_{n-3}a_{n-4}+a_{n-4}a_{n-5}+\cdots +a_{n-k}a_{n-k-1} \right) \\ &= a_{n-1}a_{n-2}+a_{n-2}a_{n-3} - a_{n-k+1}a_{n-k} - a_{n-k}a_{n-k-1} . \end{align*} Let us add all the terms $a_{n-2} a_{n-k-2}, a_{n-k+1}a_{n-k}, a_{n-k}a_{n-k-1}$ to both sides of this equality, and throw in an $a_{n-2} a_{n-k}$ for good measure. Thus we obtain \begin{align*} &a_{n} a_{n-k} + a_{n-k+1}a_{n-k} + a_{n-k}a_{n-k-1} + a_{n-2} a_{n-k} \\ &= a_{n-1}a_{n-2}+a_{n-2}a_{n-3} + a_{n-2} a_{n-k-2} + a_{n-2} a_{n-k} . \end{align*} Both sides of this equality can be easily factored, so the equality rewrites as \begin{align*} &a_{n-k} \left(a_{n} + a_{n-2} + a_{n-k+1} + a_{n-k-1} \right) \\ &= a_{n-2} \left(a_{n-1} + a_{n-3} + a_{n-k} + a_{n-k-2} \right). \end{align*} Dividing this equality by $a_{n-2} a_{n-3} \cdots a_{n-k}$, we obtain \begin{align*} \dfrac{a_{n} + a_{n-2} + a_{n-k+1} + a_{n-k-1}}{a_{n-2}a_{n-3}\cdots a_{n-k+1}} &= \dfrac{a_{n-1} + a_{n-3} + a_{n-k} + a_{n-k-2}}{a_{n-3}a_{n-4}\cdots a_{n-k}} . 
\end{align*} In other words, $b_n = b_{n-1}$, where we define a sequence $\left(b_{k+1}, b_{k+2}, b_{k+3}, \ldots\right)$ of rational numbers by setting $b_m = \dfrac{a_m + a_{m-2} + a_{m-k+1} + a_{m-k-1}}{a_{m-2}a_{m-3}\cdots a_{m-k+1}}$ for each $m \geq k+1$. Thus, this sequence $\left(b_{k+1}, b_{k+2}, b_{k+3}, \ldots\right)$ is constant (since we have shown that $b_n = b_{n-1}$ for each $n \geq k+2$). Hence, all entries $b_m$ of this sequence are integers (since it is easily seen that $b_{k+1}$ is an integer). Now, we can solve the equality $b_m = \dfrac{a_m + a_{m-2} + a_{m-k+1} + a_{m-k-1}}{a_{m-2}a_{m-3}\cdots a_{m-k+1}}$ for $a_m$, obtaining \begin{align*} a_m = b_m a_{m-2}a_{m-3}\cdots a_{m-k+1} - \left(a_{m-2} + a_{m-k+1} + a_{m-k-1}\right) \end{align*} for each $m \geq k+1$. This easily yields (by strong induction on $m$) that all of $a_0, a_1, a_2, \ldots$ are integers (after you check manually that $a_0, a_1, \ldots, a_k$ are integers).
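Both the integrality and the constancy of $b_m$ are easy to test in exact rational arithmetic (a sketch of mine; the name `seq3` and the ranges are arbitrary choices):

```python
from fractions import Fraction
from math import prod

def seq3(k, length):
    """Sequence 3: a_0 = ... = a_{k-1} = 1 and
    a_n = (a_{n-1}a_{n-2} + ... + a_{n-k+2}a_{n-k+1}) / a_{n-k}."""
    a = [Fraction(1)] * k
    for n in range(k, length):
        num = sum(a[n - i] * a[n - i - 1] for i in range(1, k - 1))
        a.append(num / a[n - k])
    return a

for k in range(3, 8):
    a = seq3(k, 25)
    assert all(x.denominator == 1 for x in a)   # integrality
    # the invariant b_m from the argument above
    b = [(a[m] + a[m - 2] + a[m - k + 1] + a[m - k - 1])
         / prod(a[j] for j in range(m - k + 1, m - 1))
         for m in range(k + 1, 25)]
    assert len(set(b)) == 1                     # b_m is constant in m
```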
{ "language": "en", "url": "https://mathoverflow.net/questions/378121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Are all integers not congruent to 6 modulo 9 of the form $x^3+y^3+3z^2$? Are all integers not congruent to 6 modulo 9 of the form $x^3+y^3+3z^2$ for possibly negative integers $x,y,z$? We have the identity $ (-t)^3+(t-1)^3+3 t^2=3t-1$. The only congruence obstruction we found is 6 modulo 9.
We can also say that each $n \in \mathbb{Z}$ with $n \equiv 3 \pmod 9$ is representable as $x^3 + y^3 + 3z^2$. This is because $$(-t)^3 + (t-9)^3 + 3(3t -13)^2 = 9t - 222$$ and $-222 \equiv 3 \pmod 9$. So, along with the identity in the question we can represent each integer congruent to $2$, $3$, $5$, or $8 \pmod 9$. This leaves only $0$, $1$, $4$, and $7 \pmod 9$. More generally we have $$(-t)^3 + (t - a^2)^3 + 3(at - b)^2 = 3a(a^3 - 2b)t + 3b^2 - a^6$$ which is congruent to $-a^6 \pmod 3$. This is either $0$ or $-1 \pmod 3$ depending on if $3$ divides $a$ or not. Taking $a = 3$ and $b = 13$ we obtain the first equation which takes care of $n \equiv 3 \pmod 9$. If we take $a = 3$ and $b = 12$ we get $27t - 297$. This gives us $n \equiv 0 \equiv -297 \pmod{27}$ which is some further progress. Hence, $0 \pmod 9$ can be resolved by finding $9 \pmod {27}$ and $18 \pmod{27}$.
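All three identities can be checked mechanically over a range of integers (a sketch; `f` is just my shorthand for the cubic form):

```python
def f(x, y, z):
    return x**3 + y**3 + 3 * z**2

# identity from the question: 3t - 1
assert all(f(-t, t - 1, t) == 3 * t - 1 for t in range(-50, 51))
# identity from this answer: 9t - 222
assert all(f(-t, t - 9, 3 * t - 13) == 9 * t - 222 for t in range(-50, 51))
# the general family: 3a(a^3 - 2b)t + 3b^2 - a^6
assert all(f(-t, t - a * a, a * t - b) == 3 * a * (a**3 - 2 * b) * t + 3 * b * b - a**6
           for t in range(-10, 11) for a in range(1, 6) for b in range(-5, 6))
```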
{ "language": "en", "url": "https://mathoverflow.net/questions/378968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Explicit eigenvalues of matrix? Consider the matrix-valued operator $$A = \begin{pmatrix} x & -\partial_x \\ \partial_x & -x \end{pmatrix}.$$ I am wondering if one can explicitly compute the eigenfunctions of that object on the space $L^2(\mathbb R)$?
First some heuristics, before constructing the complete answer - this looks a bit more transparent if one considers $$ A^2 = \begin{pmatrix} -\partial_{x}^{2} +x^2 & 1 \\ 1 & -\partial_{x}^{2} +x^2 \end{pmatrix} $$ Then, denoting the standard harmonic oscillator eigenfunctions (i.e., the eigenfunctions of $-\partial_{x}^{2} +x^2 $ with eigenvalues $\lambda_{n} =2n+1$) as $\psi_{n} (x)$, $A^2 $ has the eigenvectors $$ \begin{pmatrix} \psi_{n} (x) \\ \psi_{n} (x) \end{pmatrix} \ \ \ \mbox{and} \ \ \ \begin{pmatrix} \psi_{n} (x) \\ -\psi_{n} (x) \end{pmatrix} $$ with eigenvalues $\lambda_{n} +1$ and $\lambda_{n} -1$, respectively. The eigenvalues of $A$ are the square roots of the aforementioned, but this doesn't yet directly yield the eigenvectors of $A$ - some more algebra is needed. However, with these preliminaries, it now becomes apparent how the eigenvectors of $A$ are structured: Introduce the standard raising and lowering operators $a^{\dagger } $, $a$, in terms of which $x=(a^{\dagger } +a)/\sqrt{2} $ and $-\partial_{x} =(a^{\dagger } -a)/\sqrt{2} $. Acting on the $\psi_{n} $, these act as $a\psi_{n} = \sqrt{n} \psi_{n-1} $, $a^{\dagger } \psi_{n} = \sqrt{n+1} \psi_{n+1} $. $A$ takes the form $$ A=\frac{1}{\sqrt{2} } \begin{pmatrix} a^{\dagger } +a & a^{\dagger } -a \\ -a^{\dagger } +a & -a^{\dagger } -a \end{pmatrix} $$ One immediately has the state with zero eigenvalue, $(\psi_{0} , -\psi_{0} )$. 
In addition, the action of $A$ on the other eigenstates of $A^2 $ constructed previously above is $$ A\begin{pmatrix} \psi_{n} \\ \psi_{n} \end{pmatrix} =\sqrt{2n+2} \begin{pmatrix} \psi_{n+1} \\ -\psi_{n+1} \end{pmatrix} \ \ \ \mbox{and} \ \ \ A\begin{pmatrix} \psi_{n+1} \\ -\psi_{n+1} \end{pmatrix} =\sqrt{2n+2} \begin{pmatrix} \psi_{n} \\ \psi_{n} \end{pmatrix} $$ and therefore all that remains is to form the right linear combinations of these doublets: $$ A\begin{pmatrix} \psi_{n} + \psi_{n+1} \\ \psi_{n} - \psi_{n+1} \end{pmatrix} =\sqrt{2n+2} \begin{pmatrix} \psi_{n} + \psi_{n+1} \\ \psi_{n} - \psi_{n+1} \end{pmatrix} $$ and $$ A\begin{pmatrix} \psi_{n} - \psi_{n+1} \\ \psi_{n} + \psi_{n+1} \end{pmatrix} =-\sqrt{2n+2} \begin{pmatrix} \psi_{n} - \psi_{n+1} \\ \psi_{n} + \psi_{n+1} \end{pmatrix} $$ So the doubly degenerate eigenvalues $2n+2$ of $A^2 $ split up into separate eigenvalues $\pm \sqrt{2n+2} $ of $A$.
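These ladder relations can be verified at the level of expansion coefficients in the basis $\{\psi_n\}$ (a sketch of mine, not part of the answer; a two-component state is represented by dicts mapping $n$ to the coefficient of $\psi_n$):

```python
import math

def apply_A(top, bot):
    """Apply A = [[x, -d/dx], [d/dx, -x]] to a 2-component state given as
    coefficient dicts over oscillator eigenfunctions psi_n, using
    x = (adag + a)/sqrt(2) and d/dx = (a - adag)/sqrt(2)."""
    def a_op(c):      # a psi_n = sqrt(n) psi_{n-1}
        return {n - 1: math.sqrt(n) * v for n, v in c.items() if n > 0}
    def adag_op(c):   # adag psi_n = sqrt(n+1) psi_{n+1}
        return {n + 1: math.sqrt(n + 1) * v for n, v in c.items()}
    def add(*cs):
        out = {}
        for c in cs:
            for n, v in c.items():
                out[n] = out.get(n, 0.0) + v
        return out
    def scale(c, s):
        return {n: s * v for n, v in c.items()}
    s = 1 / math.sqrt(2)
    x = lambda c: scale(add(adag_op(c), a_op(c)), s)
    d = lambda c: scale(add(a_op(c), scale(adag_op(c), -1)), s)
    return add(x(top), scale(d(bot), -1)), add(d(top), scale(x(bot), -1))

# eigenvector (psi_n + psi_{n+1}, psi_n - psi_{n+1}) with eigenvalue sqrt(2n+2)
n = 3
top = {n: 1.0, n + 1: 1.0}
bot = {n: 1.0, n + 1: -1.0}
At, Ab = apply_A(top, bot)
lam = math.sqrt(2 * n + 2)
assert all(abs(At.get(m, 0) - lam * top.get(m, 0)) < 1e-12 for m in range(n + 3))
assert all(abs(Ab.get(m, 0) - lam * bot.get(m, 0)) < 1e-12 for m in range(n + 3))
```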
{ "language": "en", "url": "https://mathoverflow.net/questions/403523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Sequence of $k^2$ and $2k^2$ ordered in ascending order Let $\eta(n)$ be A006337, an "eta-sequence" defined as follows: $$\eta(n)=\left\lfloor(n+1)\sqrt{2}\right\rfloor-\left\lfloor n\sqrt{2}\right\rfloor$$ Sequence begins $$1, 2, 1, 2, 1, 1, 2, 1, 2, 1, 1, 2, 1, 2, 1, 2, 1, 1, 2, 1, 2, 1$$ Let $a(n)$ be A091524, $a(m)$ is the multiplier of $\sqrt{2}$ in the constant $\alpha(m) = a(m)\sqrt{2} - b(m)$, where $\alpha(m)$ is the value of the constant determined by the binary bits in the recurrence associated with the Graham-Pollak sequence. Sequence begins $$1, 1, 2, 2, 3, 4, 3, 5, 4, 6, 7, 5, 8, 6, 9, 7, 10, 11, 8, 12, 9, 13$$ Then we have an integer sequence given by $$b(n)=(a(n))^2\eta(n)$$ Sequence begins $$1, 2, 4, 8, 9, 16, 18, 25, 32, 36, 49, 50, 64, 72, 81, 98, 100, 121, 128, 144, 162, 169$$ I conjecture that $b(n)$ is a sequence of $k^2$ and $2k^2$ ordered in ascending order. Is there a way to prove it?
Denote by $f(n)$ the sequence of squares and double squares in ascending order. We have to prove that $f(n)=b(n)=(a(n))^2\eta(n)$. Consider two cases.

1. $f(n)=k^2$. Then the number of squares and double squares not exceeding $k^2$ equals $n$, that is, $n=k+\lfloor k/\sqrt{2}\rfloor$. Therefore $n<k(1+1/\sqrt{2})$, which is equivalent (multiplying by $2-\sqrt{2}$) to $n\sqrt{2}>2n-k$, so $\lfloor n\sqrt{2}\rfloor\geqslant 2n-k$. On the other hand, $n+1>k(1+1/\sqrt{2})$, which analogously yields $(n+1)\sqrt{2}<2(n+1)-k$ and $\lfloor (n+1)\sqrt{2} \rfloor\leqslant 2n-k+1$. Since also $\lfloor (n+1)\sqrt{2} \rfloor\geqslant \lfloor n\sqrt{2} \rfloor+1\geqslant 2n-k+1$, this implies that $\eta(n)=1$. According to OEIS we have $a(n)=a(\lfloor k(1+1/\sqrt{2})\rfloor)=k$, thus $(a(n))^2\eta(n)=k^2$ as needed.
2. $f(n)=2k^2$. Then the number of squares and double squares not exceeding $2k^2$ equals $n$, that is, $n=k+\lfloor k\sqrt{2}\rfloor$. So $n<k(\sqrt{2}+1)<n+1$, and (multiplying by $\sqrt{2}-1$) we get $n\sqrt{2}<n+k$ and $(n+1)\sqrt{2}>(n+1)+k$. This yields $\eta(n)=2$. Again by OEIS we get $a(n)=a(\lfloor k(1+\sqrt{2})\rfloor)=k$ and $(a(n))^2\eta(n)=2k^2$.
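The index bookkeeping used in the two cases can be tested numerically. The sketch below (mine, not part of the answer) does not recompute $a(n)$ from the Graham-Pollak recurrence; instead it checks that $f(n)/\eta(n)$ is a perfect square $k^2$ and that $n=k+\lfloor k/\sqrt{2}\rfloor$ or $n=k+\lfloor k\sqrt{2}\rfloor$ according to whether $\eta(n)$ is $1$ or $2$:

```python
import math

N = 2000
# f: squares and double squares in ascending order (1-indexed via vals[n-1])
vals = sorted([k * k for k in range(1, N)] + [2 * k * k for k in range(1, N)])[:N]

def eta(n):  # A006337
    return math.floor((n + 1) * math.sqrt(2)) - math.floor(n * math.sqrt(2))

for n in range(1, N):
    f, e = vals[n - 1], eta(n)
    k = math.isqrt(f // e)
    assert k * k * e == f  # f = k^2 when eta = 1, f = 2k^2 when eta = 2
    if e == 1:
        assert n == k + math.floor(k / math.sqrt(2))
    else:
        assert n == k + math.floor(k * math.sqrt(2))
```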
{ "language": "en", "url": "https://mathoverflow.net/questions/410799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Integer solutions of an algebraic equation I'm trying to find integer solutions $(a,b,c)$ of the following algebraic equation with the additional conditions $b>a>0$, $c>0$: $$(-a^2+b^2+c^2)(a^2-b^2+c^2)(a^2+b^2-c^2) + 2 a b (-a^2+b^2+c^2)(a^2-b^2+c^2) - 2 b c (a^2-b^2+c^2)(a^2+b^2-c^2) - 2 a c (-a^2+b^2+c^2) (a^2+b^2-c^2) = 0.$$ Is it possible to do this using some math packages?
The equation is homogeneous in 3 variables, thus it is associated with a plane curve. First, I would check if the curve has genus less than 2. If the genus is 0 or 1, the curve is parametrizable or elliptic, respectively. In particular, for parametrizable curves, you can generate integer solutions, provided that at least one integer solution exists and you know it.
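Before any genus computation, a naive brute-force search is cheap (a sketch; the search bound $60$ is arbitrary, and since the equation is homogeneous of degree $6$, any solution found can be scaled):

```python
def F(a, b, c):
    """The degree-6 homogeneous form from the question."""
    p = -a * a + b * b + c * c
    q = a * a - b * b + c * c
    r = a * a + b * b - c * c
    return p * q * r + 2 * a * b * p * q - 2 * b * c * q * r - 2 * a * c * p * r

sols = [(a, b, c) for b in range(1, 60) for a in range(1, b)
        for c in range(1, 60) if F(a, b, c) == 0]
print(sols)
```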
{ "language": "en", "url": "https://mathoverflow.net/questions/438582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Which Diophantine equations can be solved using continued fractions? Pell equations can be solved using continued fractions. I have heard that some elliptic curves can be "solved" using continued fractions. Is this true? Which Diophantine equations other than Pell equations can be solved for rational or integer points using continued fractions? If there are others, what are some good references? Edit: Professor Elkies has given an excellent response as to the role of continued fractions in solving general Diophantine equations including elliptic curves. What are some other methods to solve the Diophantine equations $$X^2 - \Delta Y^2 = 4 Z^3$$ and $$18 x y + x^2 y^2 - 4 x^3 - 4 y^3 - 27 = D z^2 ?$$
I should have stuck with your preferred notation, as in your $B^2 + B C - 57 C^2 = A^3$ in a comment. So the form of interest will be $x^2 + x y - 57 y^2.$ The other classes with this discriminant of indefinite integral binary quadratic forms would then be given by $ 3 x^2 \pm xy - 19 y^2.$ Therefore, take $$ \phi(x,y) = x^2 + x y - 57 y^2.$$ The identity you need to deal with your $A= \pm 3$ is $$ \phi( 15 x^3 - 99 x^2 y + 252 x y^2 - 181 y^3 , \; 2 x^3 - 15 x^2 y + 33 x y^2 - 28 y^3 ) \; = \; ( 3 x^2 + xy - 19 y^2 )^3 $$ This leads most directly to $\phi(15,2) = 27.$ Using $ 3 x^2 + x y - 19 y^2 = -3$ when $x=7, y=3,$ this leads directly to $ \phi(1581, -196) = -27.$ However, we have an automorph of $\phi,$ $$ W \; = \; \left( \begin{array}{rr} 106 & 855 \\ 15 & 121 \end{array} \right) , $$ and $ W \cdot (1581,-196)^T = (6, -1)^T,$ so $\phi(6,-1) = -27.$ Finally, any principal form of odd discriminant, call it $x^2 + x y + k y^2,$ (you have $k=-57$) has the improper automorph $$ Z \; = \; \left( \begin{array}{rr} 1 & 1 \\ 0 & -1 \end{array} \right) , $$ while $ Z \cdot (6,-1)^T = (5, 1)^T,$ so $\phi(5,1) = -27.$ EDIT: a single formula cannot be visually obvious for all desired outcomes. There are an infinite number of integral solutions to $3 x^2 + x y - 19y^2 = -3.$ It is an excellent bet that one of these leads, through the identity I give, to at least one of the desired $\phi(5,1) = -27$ or $\phi(6,-1) = -27,$ but not necessarily both, largely because $3 x^2 + x y - 19y^2$ and $3 x^2 - x y - 19y^2$ are not properly equivalent. Worth investigating, I should think.
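The cubic identity and the specific evaluations above can all be confirmed by direct computation (a sketch; `phi` and `g` are my shorthand for the two forms):

```python
def phi(x, y):
    return x * x + x * y - 57 * y * y

def g(x, y):  # the neighboring class 3x^2 + xy - 19y^2
    return 3 * x * x + x * y - 19 * y * y

# the cubic identity phi(X, Y) = g(x, y)^3
for x in range(-8, 9):
    for y in range(-8, 9):
        X = 15 * x**3 - 99 * x**2 * y + 252 * x * y**2 - 181 * y**3
        Y = 2 * x**3 - 15 * x**2 * y + 33 * x * y**2 - 28 * y**3
        assert phi(X, Y) == g(x, y) ** 3

# the chain of representations worked out above
assert phi(15, 2) == 27 and phi(1581, -196) == -27
assert phi(6, -1) == -27 and phi(5, 1) == -27
```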
{ "language": "en", "url": "https://mathoverflow.net/questions/77986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 3 }
Fermat's proof for $x^3-y^2=2$ Fermat proved that $x^3-y^2=2$ has only one solution $(x,y)=(3,5)$. After some search, I only found proofs using factorization over the ring $\mathbb{Z}[\sqrt{-2}]$. My question is: Is this Fermat's original proof? If not, where can I find it? Thank you for viewing. Note: I am not expecting to find Fermat's handwriting because it may not exist. I was hoping to find a proof that would look more ''Fermatian''.
Lemma. Let $a$ and $b$ be coprime integers, and let $m$ and $n$ be positive integers such that $a^2+2b^2=mn$. Then there are coprime integers $r$ and $s$ such that $m=r^2+2s^2$ and $m$ divides $br-as$. Furthermore, for any such choice of $r$ and $s$, there are coprime integers $t$ and $u$ such that $a=rt-2su$, $b=ru+st$, $n=t^2+2u^2$, and $n$ divides $bt-au$. Proof. Assume the lemma is false, and let $m$ be a minimal counterexample. Evidently $m > 1$ since the lemma is trivially true for $m=1$. Note that $b$ is coprime to $m$. Let $A$ be an integer such that $Ab \equiv a\!\pmod{m}$, chosen so that $\tfrac{-m}{2} < A \le \tfrac{m}{2}$. Then $A^2+2 = lm$ for some positive integer $l < m$. Clearly $l$ cannot be a smaller counterexample than $m$, and so there exist coprime integers $r$ and $s$ such that $m=r^2+2s^2$ and $m$ divides $br-as$. Let $t = \tfrac{ar+2bs}{m}$ and $u=\tfrac{br-as}{m}$. Direct calculation confirms the equations for $a$, $b$, and $n$. From $n=t^2+2u^2$, we deduce that $t$ is an integer because $u$ is an integer, and $t$ and $u$ are coprime because $\gcd(t,u)$ divides both $a$ and $b$. Finally, note that $n$ divides $bt-au=sn$. Hence $m$ is not a counterexample, contradicting the original assumption. $\blacksquare$ Corollary. Let $a$ and $b$ be coprime integers with $m$ an integer such that $m^3=a^2+2b^2$. Then there are coprime integers $r$ and $s$ such that $a=r(r^2-6s^2)$ and $b=s(3r^2-2s^2)$. Proof. Evidently $m$ is odd since $a^2+2b^2$ is at most singly even. And $a$ and $m$ must be coprime. Using the lemma, we have $m=r^2+2s^2$ and $m^2=t^2+2u^2$. Then $m$ divides $a(ur-ts)=t(br-as)-r(bt-au)$, and therefore $m \mid (ur-ts)$. The lemma can then be reapplied with $a$ and $b$ replaced by $t$ and $u$. Repeating the process, we eventually obtain integers $p$ and $q$ such that $p^2+2q^2=1$. The only solution is $q=0$ and $p=\pm1$. 
Ascending the path back to $a$ and $b$ (reversing signs along the way, if necessary) yields $a=r(r^2-6s^2)$ and $b=s(3r^2-2s^2)$, as claimed. $\blacksquare$ Theorem. The Diophantine equation $X^3 = Y^2+2$ has only one integer solution, namely $(x,y) = (3, \pm 5)$. Proof. Evidently $y$ and $2$ are coprime. By the corollary, we must have $b=1=s(3r^2-2s^2)$ for integers $r$ and $s$. The only solutions are $(r,s)=(\pm 1,1)$. Hence $a=y=r(r^2-6s^2)=\pm 5$, so $(x,y)=(3,\pm 5)$. $\blacksquare$
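For what it's worth, the statement is easy to confirm numerically over a finite range (a sketch; the bound $10^4$ is arbitrary, and of course only the descent argument above covers all integers):

```python
import math

sols = []
for x in range(1, 10000):
    y2 = x**3 - 2
    if y2 >= 0:
        y = math.isqrt(y2)
        if y * y == y2:
            sols.append((x, y))  # y >= 0; (x, -y) is the mirror solution
assert sols == [(3, 5)]
```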
{ "language": "en", "url": "https://mathoverflow.net/questions/142220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 5, "answer_id": 0 }
Is this a new formula? $\Delta^d x^n/d! = \sum_k \left[ x \atop k\right]{ k+n \brace x + d}(-1)^{x+k}$ $$\frac{\Delta^d x^n}{d!} = \sum_k \left[ x \atop k\right]{ k+n \brace x + d}(-1)^{x+k}$$ Where $x$, $n$ and $d$ are non-negative integers, $\Delta^d$ is the $d$-th difference with respect to $x$, $\left[ x \atop k \right]$ are the Stirling numbers of the first kind, and ${ k \brace x} $ are the Stirling numbers of the second kind. The formula is a generalization of the inversion formula since $n=0$ and $d=0$ gives the inversion formula $$ \Delta^d x^0 = \sum_k \left[ x \atop k\right]{ k \brace x + d}(-1)^{x+k} = [x=x+d]=[d=0]$$ I have not found this formula documented anywhere and would be interested in either a reference or a reasonable argument that it is in fact previously unknown. A few proofs are shown in this MSE post. What follows is the path I took to discovery of this formula, it was fun and interesting to take so I hope it is for you as well. Use the formula for $x^n$ written using the falling factorial $x^{\underline p}=(x)(x-1)\dots(x-p+1)$ $$x^n = \sum_p { n \brace p}x^{\underline p} \tag{0}$$ Now factor out an $x$ on both sides of the formula $$x^{n-1} = \sum_p { n \brace p}(x-1)^{\underline {p-1}} $$ This leads to a new "first order" formula that works for any $x \gt 1$ $$x^{n} = \sum_p { n+1 \brace p}(x-1)^{\underline {p-1}} \tag{1}$$ If $x>2$ we can factor $(x-1)$ and use the "first order" to find the "second order" equation $$\begin{align} x^{n} &= \frac{x}{x-1}x^n - \frac{1}{x-1}x^n \\ &= \frac{1}{x-1}x^{n+1} - \frac{1}{x-1}x^n \\ &= \frac{1}{x-1}\sum_p { n+2 \brace p}(x-1)^{\underline {p-1}} - \frac{1}{x-1}\sum_p { n+1 \brace p}(x-1)^{\underline {p-1}} \\ &= \frac{1}{x-1}\sum_p \left({ n+2 \brace p} - { n+1 \brace p}\right)(x-1)^{\underline {p-1}}\\ &= \sum_p \left({ n+2 \brace p} - { n+1 \brace p}\right)(x-2)^{\underline {p-2}} \tag{2}\\ \end{align}$$ If the pattern is not obvious yet (it is the alternating signed Stirling numbers of the 
first kind) it can be done once more--factor $(x-2)$ and use the "second order" equation $$\begin{align} x^{n} =& \phantom{-}\frac{x}{x-2}x^n - \frac{2}{x-2}x^n \\ =& \phantom{-}\frac{1}{x-2}x^{n+1} - \frac{2}{x-2}x^n \\ =& \phantom{-}\frac{1}{x-2}\sum_p \left({ n+3 \brace p} - { n+2 \brace p}\right)(x-2)^{\underline {p-2}} \\ & - \frac{2}{x-2}\sum_p \left({ n+2 \brace p} - { n+1 \brace p}\right)(x-2)^{\underline {p-2}} \\ =& \phantom{-}\frac{1}{x-2}\sum_p \left({ n+3 \brace p} -3{ n+2 \brace p} + 2{ n+1 \brace p}\right)(x-2)^{\underline {p-2}}\\ =& \phantom{-}\sum_p \left({ n+3 \brace p} -3{ n+2 \brace p} + 2{ n+1 \brace p}\right)(x-3)^{\underline {p-3}} \tag{3}\\ \end{align}$$ Now that a pattern of the Stirling numbers is more apparent, we can extrapolate until all factors $x(x-1)\dots 2$ are removed leaving just the factor $1$. Thus the "$(x-1)$ order" equation is $$x^n = \sum_p \left(\sum_{k=1}^{x-1}(-1)^{x+k}\left[x-1 \atop k\right]{ n+k \brace p}\right)(x - x + 1)^{\underline{p-x+1}}$$ Interestingly enough, this extrapolation does not yet reach the final formula since $1^{\underline{p-x+1}}$ has two terms, one when $p=x-1$ and one when $p=x$. It does however provide enough motivation to check the simpler version of the formula, and SUCCESS!
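The conjectured formula is also easy to stress-test over small ranges (a sketch of mine; the recursive Stirling-number implementations and the range bounds are arbitrary choices). To stay in integer arithmetic it checks the equivalent statement $d!\cdot\text{RHS}=\Delta^d x^n$:

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(None)
def s1(n, k):  # unsigned Stirling numbers of the first kind [n, k]
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return s1(n - 1, k - 1) + (n - 1) * s1(n - 1, k)

@lru_cache(None)
def s2(n, k):  # Stirling numbers of the second kind {n, k}
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return s2(n - 1, k - 1) + k * s2(n - 1, k)

def check(x, n, d):
    # Delta^d x^n evaluated at x, via the standard alternating-sum formula
    delta = sum((-1) ** (d - j) * comb(d, j) * (x + j) ** n for j in range(d + 1))
    rhs = sum(s1(x, k) * s2(k + n, x + d) * (-1) ** (x + k) for k in range(x + 1))
    return delta == factorial(d) * rhs

assert all(check(x, n, d) for x in range(7) for n in range(6) for d in range(6))
```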
It is well known that the differencing operator $\Delta$ behaves nicely in the polynomial basis given by the falling factorials: $\Delta(x^{\underline{k}}) = kx^{\underline{k-1}}$. It's also well known that the Stirling numbers are the coefficients that arise when changing from the basis $\{x^k\}$ to the basis $\{x^{\underline{k}}\}$. So I suspect that this formula follows from well-known techniques, even if it hasn't been noted before. I suggest checking out the relevant chapter in Graham/Knuth/Patashnik's Concrete Mathematics.
{ "language": "en", "url": "https://mathoverflow.net/questions/161830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Primes dividing $2^a+2^b-1$ From Fermat's little theorem we know that every odd prime $p$ divides $2^a-1$ with $a=p-1$. Is it possible to prove that there are infinitely many primes not dividing $2^a+2^b-1$? (With $2^a,2^b$ being incongruent modulo $p$.)

1. Obviously, if $2$ is not a quadratic residue modulo $p$, then we have the solution $a=1, b=\frac{p-1}{2}$.
2. If $2$ is a quadratic residue and the order of $2 \bmod p$ is $r=\frac{p-1}{2}$, then the set $\{2^1,2^2,...,2^{\frac{p-1}{2}}\}$ is a complete quadratic residue system mod $p$. So, in this case, $p\mid2^a+2^b-1$ is equivalent to $p\mid x^2+y^2-1$ with $x^2,y^2$ incongruent mod $p$, which is always true for every $p\geq11$.
3. It is not true that if $p \mid2^a+2^b-1$ and $q\mid2^{a'}+2^{b'}-1$ then $p\cdot q\mid 2^c+2^d-1$. There is the counterexample: $5\mid 2^1+2^2-1$ and $17\mid 2^1+2^4-1$, but $5\cdot 17=85\nmid 2^a+2^b-1$.

We can see a few examples of numbers which have the property in question: $3,7,31,73,89,...$ (In fact, every Mersenne prime does not divide $2^a+2^b-1$.) Thanks in advance!
This is a heuristic which suggests that the problem is probably quite hard. We have that $p | 2^{a} + 2^{b} - 1$ if and only if there is some integer $k$, $1 \leq k \leq p-1$ with $k \ne \frac{p+1}{2}$ for which $2^{a} \equiv k \pmod{p}$ and $2^{b} \equiv 1-k \pmod{p}$ are both solvable. If $r$ is the order of $2$ modulo $p$, then for each element $k$ in $\langle 2 \rangle$, the "probability" that $1-k$ is also in $\langle 2 \rangle$ is $\frac{r}{p-1}$ (assuming that $1-k$ is a "random" element of $\mathbb{F}_{p}^{\times}$). So the probability that there are no solutions is about $\left(\frac{p-1-r}{p-1}\right)^{r} \approx e^{-\frac{r^{2}}{p-1}}$. (Note that if $r$ is even, then we have the trivial solution $2^{a} \equiv -1 \pmod{p}$, and $b = 1$.) Thus, in order for there to be no solution, we must have that $r$ is small as a function of $p$, no bigger than about $\sqrt{p}$. This implies that $p$ must be a prime divisor of $2^{r} - 1$ of size $\gg r^{2}$. (For example, the prime $p < 5 \cdot 10^{5}$ that does not divide a number of the form $2^{a} + 2^{b} - 1$ for which $r$ is the largest is $p = 379399$ and for this number, $r = 1709$.) However, it seems difficult to prove unconditionally that there are numbers of the form $2^{n} - 1$ with large prime divisors. Let $P(2^{n} - 1)$ denote the largest prime divisor of $2^{n} - 1$. Then, the strongest unconditional results (see the paper of Cam Stewart in Acta Math. from 2013) take the form $P(2^{n} - 1) \geq f(n)$, where $f(n) = O(n^{1 + \epsilon})$ for all $n$. (Of course, we only need a result for infinitely many $n$, but allowing exceptions seems not to help.) Murty and Wong (2002) prove assuming ABC that $P(2^{n} - 1) \gg n^{2 - \epsilon}$ for all $n$, and Pomerance and Murata (2004) show that $P(2^{n} - 1) \gg \frac{n^{4/3}}{\log \log(n)}$ for all but a density zero subset of $n$, assuming GRH.
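The criterion above ($p$ divides no $2^a+2^b-1$ of the required form exactly when no $k \in \langle 2\rangle$ has $1-k \in \langle 2\rangle$, apart from $k = \frac{p+1}{2}$) is easy to test by brute force. A rough sketch (function names are mine; trial division is fine at this scale):

```python
def has_representation(p):
    # H = subgroup of F_p^* generated by 2
    H, g = set(), 1
    while True:
        g = 2 * g % p
        if g in H:
            break
        H.add(g)
    # p | 2^a + 2^b - 1 with 2^a, 2^b incongruent  <=>  some k in H has 1 - k in H, k != 1 - k
    return any((1 - k) % p in H and (1 - k) % p != k for k in H)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

exceptional = [p for p in range(3, 100, 2) if is_prime(p) and not has_representation(p)]
print(exceptional)  # [3, 7, 31, 73, 89] -- the list observed in the question
```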
{ "language": "en", "url": "https://mathoverflow.net/questions/172706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
A divisor sum congruence for 8n+6 Letting $d(m)$ be the number of divisors of $m$, is it the case that for $m=8n+6$, $$ d(m) \equiv \sum_{k=1}^{m-1} d(k) d(m-k) \pmod{8}\ ?$$ It's easy to show that both sides are 0 mod 4: the left side since two primes appear to odd order in the factorization of $m$, and the right side since $m$ is neither a square nor the sum of two squares and so even products all appear twice. But they also seem to match mod 8 for small values. Does this keep going, or does it fail somewhere?
The congruence you state is true for all $m \equiv 6 \pmod{8}$. The proof I give below relies on the theory of modular forms. First, observe that $$ \sum_{k=1}^{m-1} d(k) d(m-k) = 2 \sum_{k=1}^{\frac{m-2}{2}} d(k) d(m-k) + d\left(\frac{m}{2}\right)^{2}. $$ Noting that $d(m) \equiv d\left(\frac{m}{2}\right)^{2} \pmod{8}$ if $m \equiv 6 \pmod{8}$, it suffices to prove for every $m \equiv 6 \pmod{8}$ that $$\sum_{k=1}^{\frac{m-2}{2}} d(k) d(m-k)$$ is a multiple of $4$. The only terms in the sum that are not multiples of $4$ are those where $k$ is a perfect square, and $m-k = py^{2}$, where $p$ is prime, and $p$ divides $y$ to an even power (or $k = py^{2}$ where $p$ divides $y$ to an even power, and $m-k$ is a square). It suffices therefore to show that if $m \equiv 6 \pmod{8}$, then $m$ has an even number of representations in the form $m = x^{2} + py^{2}$ with $x, y \in \mathbb{Z}$, $x, y > 0$, and $p$ a prime number (and the $p$-adic valuation of $y$ is even). If $m = x^{2} + py^{2}$, then either $x$ is even, which forces $p = 2$ and $y$ odd, or $x$ is odd, in which case $y$ is odd and $p \equiv 5 \pmod{8}$. The function $F(z) = \sum_{n=0}^{\infty} \sigma(2n+1) q^{2n+1}$, $q = e^{2 \pi i z}$ is a modular form of weight $2$ for $\Gamma_{0}(4)$. Here $\sigma(k)$ denotes the sum of the divisors function. A simple calculation shows that if $n \equiv 5 \pmod{8}$, then $\sigma(n) \equiv 2 \pmod{4}$ if and only if $n = py^{2}$ for some prime $p \equiv 5 \pmod{8}$ and some square $y^{2}$ (and moreover, the power of $p$ dividing $y$ is even, a condition which will remain in effect).
It follows from this that $$ \frac{1}{2} \sum_{n \equiv 5 \pmod{8}} \sigma(n) q^{n} \equiv \sum_{\substack{p \equiv 5 \pmod{8} \\ y \geq 1}} q^{py^{2}} \pmod{2}, $$ where by this statement I mean that the power series on the left and right hand sides have integer coefficients and the coefficient of $q^{k}$ on the left side is congruent (modulo $2$) to the coefficient of $q^{k}$ on the right hand side. By twisting $F(z)$ by Dirichlet characters mod $8$, one can see that $G(z) = \frac{1}{2} \sum_{n \equiv 5 \pmod{8}} \sigma(n) q^{n}$ is a modular form of weight $2$ on $\Gamma_{0}(64)$. Now, observe that $F(z) \equiv \sum_{\substack{n \geq 1 \\ n \text{ odd }}} q^{n^{2}} \pmod{2}$. Then we find that $$ F(z) G(z) + F(4z) F(2z) \equiv \sum_{x, y \geq 1, p \equiv 5 \pmod{8} \text{ prime }} q^{x^{2} + py^{2}} + \sum_{x,y} q^{x^{2} + 2y^{2}} \pmod{2}, $$ where in the second term on the right hand side we have $x \equiv 2 \pmod{4}$ and $y \equiv 1 \pmod{2}$. The left hand side is now a modular form of weight $4$ on $\Gamma_{0}(64)$, and a computation shows that the first $1000$ Fourier coefficients are even. Indeed, Sturm's theorem proves that if the first $32$ coefficients are even, then all of them are. It follows that the number of representations of $m$ in the form $x^{2} + py^{2}$ when $m \equiv 6 \pmod{8}$ is always even, and hence the desired congruence is true.
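For readers who want to spot-check the congruence numerically before following the modular-forms argument, here is a minimal brute-force verification (the divisor-counting helper `d` is mine):

```python
def d(n):
    # number of divisors of n, counting divisor pairs up to sqrt(n)
    return sum(2 - (i * i == n) for i in range(1, int(n ** 0.5) + 1) if n % i == 0)

for m in range(6, 400, 8):
    assert d(m) % 8 == sum(d(k) * d(m - k) for k in range(1, m)) % 8, m
print("congruence verified for all m = 6 mod 8 below 400")
```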
{ "language": "en", "url": "https://mathoverflow.net/questions/177477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Does the Lehmer quintic parameterize certain minimal polynomials of the $p$th root of unity for infinitely many $p$? The solvable Emma Lehmer quintic is given by $$F(y) = y^5 + n^2y^4 - (2n^3 + 6n^2 + 10n + 10)y^3 + (n^4 + 5n^3 + 11n^2 + 15n + 5)y^2 + (n^3 + 4n^2 + 10n + 10)y + 1 = 0$$ with discriminant $D = (7 + 10 n + 5 n^2 + n^3)^2(25 + 25 n + 15 n^2 + 5 n^3 + n^4)^4$. For prime $p=25 + 25 n + 15 n^2 + 5 n^3 + n^4$, we solve $F(y)=0$ in radicals as a sum of powers of the root of unity $\zeta_p = e^{2\pi i/p}$, $$y = a+b\sum_{k=1}^{(p-1)/5}\,{\zeta_p}^{c^k}\tag1$$ for integer $a,b,c$. The complete table for small $n$ is $$\begin{array}{|c|c|c|c|c|} n &p &a &b &c \\ -1& 11& 0& +1& 10\\ +1& 71& 0& +1& 23\\ -2& 11& -1& -1& 10\\ +2& 191& -1& -1& 11\\ -3& 31& -2& -1& 6\\ -4& 101& -3& +1& 32\\ +4& 941& -3& +1& 12\\ -6& 631& -7& +1& 24\\ +7& 5051& -10& -1& 7\\ -9& 3931& -16& +1& 11\\ \end{array}$$ Questions:

1. Is it true that for every prime $P(n)=25 + 25 n + 15 n^2 + 5 n^3 + n^4$, a root of $F(y)=0$ in radicals can always be given in the form $(1)$ with integer $a,b,c$?
2. Also, does $P(n)$ assume prime values infinitely often?

P.S. This was inspired by cubic analogues I asked about in this MSE post, as well as this one, and this one.
For question 1, the answer is yes, as shown by Emma Lehmer herself. (See the paper here, in particular, equation (5.8) on page 539.) In particular, Lehmer states that one can take $$ a = \frac{\left(\frac{n}{5}\right) - n^{2}}{5}, \quad b = \left(\frac{n}{5}\right). $$ (Here $\left(\frac{n}{5}\right)$ denotes the Legendre symbol.) This polynomial is defining the unique degree $5$ subfield of $\mathbb{Q}(\zeta_{p})$ and so we take $c$ to be any element in $\mathbb{F}_{p}^{\times}$ of order $\frac{p-1}{5}$. Question 2 is definitely open. (In fact, it is not known if there is a polynomial $P(n)$ of degree $> 1$ that takes on prime values infinitely often.) Bunyakovsky's conjecture would imply that $P(n)$ does take on prime values infinitely often.
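Lehmer's recipe can be checked numerically against the table in the question. The sketch below (names mine) builds $y = a + b\sum_k \zeta_p^{c^k}$ with floating-point roots of unity, reducing the exponents $c^k$ mod $p$, and confirms $F(y)\approx 0$ for several rows of the table:

```python
import cmath

def lehmer_quintic(n, y):
    return (y ** 5 + n ** 2 * y ** 4
            - (2 * n ** 3 + 6 * n ** 2 + 10 * n + 10) * y ** 3
            + (n ** 4 + 5 * n ** 3 + 11 * n ** 2 + 15 * n + 5) * y ** 2
            + (n ** 3 + 4 * n ** 2 + 10 * n + 10) * y + 1)

def root_from_table(n, p, a, b, c):
    # y = a + b * sum of zeta_p^(c^k) for k = 1 .. (p-1)/5; zeta^m depends only on m mod p
    zeta = cmath.exp(2j * cmath.pi / p)
    return a + b * sum(zeta ** pow(c, k, p) for k in range(1, (p - 1) // 5 + 1))

for n, p, a, b, c in [(-1, 11, 0, 1, 10), (1, 71, 0, 1, 23),
                      (-2, 11, -1, -1, 10), (-3, 31, -2, -1, 6)]:
    y = root_from_table(n, p, a, b, c)
    assert abs(lehmer_quintic(n, y)) < 1e-6, (n, p)
print("all four table rows give a root of F")
```

As a consistency check, Lehmer's formulas reproduce the table's $a$ and $b$ columns: for instance $n=-3 \equiv 2 \pmod 5$ gives $\left(\frac{n}{5}\right)=-1$, so $a = \frac{-1-9}{5} = -2$ and $b = -1$.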
{ "language": "en", "url": "https://mathoverflow.net/questions/190893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do we show this matrix has full rank? I met with the following difficulty reading the paper Li, Rong Xiu "The properties of a matrix order column" (1988): Define the matrix $A=(a_{jk})_{n\times n}$, where $$a_{jk}=\begin{cases} j+k\cdot i&j<k\\ k+j\cdot i&j>k\\ 2(j+k\cdot i)& j=k \end{cases}$$ and $i^2=-1$. The author says it is easy to show that $rank(A)=n$. I have proved for $n\le 5$, but I couldn't prove for general $n$. Following is an attempt to solve this problem: let $$A=P+iQ$$ where $$P=\begin{bmatrix} 2&1&1&\cdots&1\\ 1&4&2&\cdots& 2\\ 1&2&6&\cdots& 3\\ \cdots&\cdots&\cdots&\cdots&\cdots\\ 1&2&3&\cdots& 2n \end{bmatrix},Q=\begin{bmatrix} 2&2&3&\cdots& n\\ 2&4&3&\cdots &n\\ 3&3&6&\cdots& n\\ \cdots&\cdots&\cdots&\cdots&\cdots\\ n&n&n&\cdots& 2n\end{bmatrix}$$ and define $$J=\begin{bmatrix} 1&0&\cdots &0\\ -1&1&\cdots& 0\\ \cdots&\cdots&\cdots&\cdots\\ 0&\cdots&-1&1 \end{bmatrix}$$ then we have $$JPJ^T=J^TQJ=\begin{bmatrix} 2&-2&0&0&\cdots&0\\ -2&4&-3&\ddots&0&0\\ 0&-3&6&-4\ddots&0\\ \cdots&\ddots&\ddots&\ddots&\ddots&\cdots\\ 0&0&\cdots&-(n-2)&2(n-1)&-(n-1)\\ 0&0&0&\cdots&-(n-1)&2n \end{bmatrix}$$ and $$A^HA=(P-iQ)(P+iQ)=P^2+Q^2+i(PQ-QP)=\binom{P}{Q}^T\cdot\begin{bmatrix} I& iI\\ -iI & I \end{bmatrix} \binom{P}{Q}$$
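Before attacking the structure of $P$ and $Q$, the claim $\operatorname{rank}(A)=n$ can at least be checked numerically for small $n$; since the entries are Gaussian integers, a nonzero determinant has absolute value at least $1$, so floating-point Gaussian elimination is reliable here. A rough sketch (names mine; this is numerical evidence, not a proof):

```python
def det(M):
    # determinant by Gaussian elimination with partial pivoting (complex entries)
    M = [row[:] for row in M]
    n, d = len(M), 1
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-12:
            return 0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def A(n):
    # a_{jk} = min(j,k) + i*max(j,k), doubled on the diagonal
    return [[(2 if j == k else 1) * (min(j, k) + 1j * max(j, k))
             for k in range(1, n + 1)] for j in range(1, n + 1)]

for n in range(1, 13):
    assert abs(det(A(n))) > 1e-6, n
print("det(A) is nonzero for n = 1..12, so rank(A) = n there")
```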
I use Christian Remling's idea. In fact, one can compute the spectrum of the matrix $$B_{ij}=\min{\{i,j\}}:$$ its eigenvalues are $$\dfrac{1}{4\sin^2{\dfrac{(2j-1)\pi}{2(2n+1)}}},\qquad j=1,2,\cdots,n.$$ Proof: we have $$B=\begin{bmatrix} 1&1&1&\ddots&1&1\\ 1&2&2&\ddots&\ddots&2\\ 1&2&3&3&\ddots&3\\ \vdots&\ddots&\ddots&\ddots&\ddots&\cdots\\ 1&\vdots&\ddots&\ddots&n-1&n-1\\ 1&2&\cdots&\cdots&n-1&n \end{bmatrix} $$ It is easy to check that $$C=B^{-1}=\begin{bmatrix} 2&-1\\ -1&2&-1\\ 0&\ddots&\ddots&\ddots\\ \vdots&\cdots&-1&2&-1\\ 0&\cdots&\cdots&-1&1 \end{bmatrix}$$ Note that the last diagonal entry of $C$ is $1$, not $2$, and this boundary entry changes the spectrum. Let $D_{k}$ denote the $k\times k$ tridiagonal determinant with diagonal entries $2-\lambda$ and off-diagonal entries $-1$, so that $$D_{k}=(2-\lambda)D_{k-1}-D_{k-2},\qquad D_{0}=1,\ D_{1}=2-\lambda.$$ Setting $2-\lambda=2\cos{x}$, induction gives $$D_{k}=\dfrac{\sin{(k+1)x}}{\sin{x}}.$$ Expanding $\det(C-\lambda I)$ along the last row, the modified boundary entry gives $$\det(C-\lambda I)=(1-\lambda)D_{n-1}-D_{n-2}=\dfrac{(2\cos{x}-1)\sin{nx}-\sin{(n-1)x}}{\sin{x}}=\dfrac{\sin{(n+1)x}-\sin{nx}}{\sin{x}}=\dfrac{2\cos{\frac{(2n+1)x}{2}}\sin{\frac{x}{2}}}{\sin{x}}.$$ This vanishes exactly when $$x=\dfrac{(2j-1)\pi}{2n+1},\qquad j=1,2,\cdots,n,$$ so the eigenvalues of $C=B^{-1}$ are $$\lambda=2-2\cos{x}=4\sin^2{\dfrac{x}{2}}=4\sin^2{\dfrac{(2j-1)\pi}{2(2n+1)}},$$ and the eigenvalues of $B$ are their reciprocals. (Sanity check: for $n=1$ this gives $4\sin^2(\pi/6)=1$, matching $B=(1)$.)
{ "language": "en", "url": "https://mathoverflow.net/questions/191796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 0 }
proving that a smooth curve in Euclidean n-space contains n+1 affinely independent points If I let $f(\theta)=((\mathrm{cos} \theta)X+(\mathrm{sin} \theta)Y)^{n-1}$ and view the range of this curve as a subset of the space of homogeneous polynomials of degree $n-1$ in two variables viewed as an $n$-dimensional Euclidean space, then I can show that there are $n$ linearly independent points on the curve, because if I pick $n$ distinct values of $\theta$ in the range $(0,\frac{\pi}{2})$ then we can show that the determinant of the matrix which has the corresponding values of $f(\theta)$ as columns is a nonzero multiple of a Vandermonde determinant. But I would like to show that this curve has $n+1$ affinely independent points. What would be the simplest way of doing that?
This isn't true if $n$ is odd. For example, if $n=3$, then your formula is $(a,b,c) = (\cos^2 \theta, 2 \sin \theta \cos \theta, \sin^2 \theta)$ and it always lies in the hyperplane $a+c=1$. More generally, whenever $n$ is odd, the equality $1 = (\sin^2 \theta+ \cos^2 \theta)^{(n-1)/2} = \sum \binom{(n-1)/2}{k} \sin^{2k} \theta \cos^{n-1-2k} \theta$ gives a hyperplane containing your points. When $n$ is even, this is true. We want to prove that the following list of $n+1$ functions is linearly independent: $\cos^{n-1-k} \theta \sin^k \theta$, for $0 \leq k \leq n-1$, and the constant function $1$. If $k$ is odd, then $\cos^{n-1-k} \theta \sin^k \theta$ is an odd function; if $k$ is even, then $\cos^{n-1-k} \theta \sin^k \theta$ is an even function, as is the constant function $1$. An odd function and an even function can never be proportional unless they are both zero, so it suffices to show that the odd and the even functions are linearly independent. For the odd functions, your Vandermonde determinant argument works (as do many others). For the even functions, note that $\cos^{n-1-2j} \theta \sin^{2j} \theta = \cos^{n-1-2j} \theta (1-\cos^2 \theta)^j$. This is a polynomial in odd powers of $\cos \theta$ with terms in degrees between $n-1$ and $n-1 - 2j$. Therefore, the matrix which transforms the even functions in your list into the functions $\cos^{2m+1} \theta$ is upper triangular with nonzero entries on the diagonal, and is thus invertible. For example, $$\begin{pmatrix} \cos^3 \theta \\ \cos \theta \sin^2 \theta \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} \begin{pmatrix} \cos^3 \theta \\ \cos \theta \\ 1 \end{pmatrix} $$
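Both halves of this answer can be verified exactly by working with rational points $(\cos\theta,\sin\theta) = \left(\frac{1-t^2}{1+t^2},\frac{2t}{1+t^2}\right)$ on the unit circle, which avoids floating point entirely. The sketch below (names mine) drops the nonzero binomial scalings of the coordinates, which does not affect affine independence:

```python
from fractions import Fraction
from math import comb

def det(M):
    # exact determinant over the rationals by Gaussian elimination
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][k] - f * M[c][k] for k in range(n)]
    return d

def curve_point(t, n):
    # rational point on the curve via (cos, sin) = ((1-t^2)/(1+t^2), 2t/(1+t^2)), 0 < t < 1
    c = (1 - t * t) / (1 + t * t)
    s = 2 * t / (1 + t * t)
    return [c ** (n - 1 - k) * s ** k for k in range(n)]

# n even: n+1 points on the curve are affinely independent (nonzero determinant)
for n in (2, 4, 6, 8):
    pts = [curve_point(Fraction(i, n + 2), n) for i in range(1, n + 2)]
    assert det([[Fraction(1)] + p for p in pts]) != 0

# n odd: every point lies on the hyperplane sum_j C((n-1)/2, j) * a_{2j} = 1
for n in (3, 5, 7):
    for t in (Fraction(1, 3), Fraction(2, 5)):
        v = curve_point(t, n)
        assert sum(comb((n - 1) // 2, j) * v[2 * j] for j in range((n + 1) // 2)) == 1
print("even case independent, odd case degenerate")
```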
{ "language": "en", "url": "https://mathoverflow.net/questions/199453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all solutions $a,b,c$ with $(1-a^2)(1-b^2)(1-c^2)=8abc$ Two years ago, I made a conjecture on stackexchange. Today, I tried to find all solutions in positive rationals $a,b,c$ to $$(1-a^2)(1-b^2)(1-c^2)=8abc,\quad a,b,c\in \mathbb{Q}^{+}.$$ I have found some solutions, such as $$(a,b,c)=(5/17,1/11,8/9),(1/7,5/16,9/11),(3/4,11/21,1/10),\cdots$$ $$(a,b,c)=\left(\dfrac{4p}{p^2+1},\dfrac{p^2-3}{3p^2-1},\dfrac{(p+1)(p^2-4p+1)}{(p-1)(p^2+4p+1)}\right),\quad\text{for $p>2+\sqrt{3}$ and $p\in\mathbb {Q}^{+}$}.$$ Here is another simple solution: $$(a,b,c)=\left(\dfrac{p^2-4p+1}{p^2+4p+1},\dfrac{p^2+1}{2p^2-2},\dfrac{3p^2-1}{p^3-3p}\right).$$ My question is: are there solutions of another form (or have we found all solutions)?
I'm late for this party, but using math110's method employing Euler bricks, I couldn't resist giving some simple rational solutions to $$(1-a^2)(1-b^2)(1-c^2) = 8abc$$ Solution 1: $$a,\,b,\,c = \frac{-(x-z)(2x+z)}{(2x-z)y},\;\frac{z}{2x},\;\frac{-2y+z}{2y+z}\tag1$$ where $x^2+y^2=z^2.$ Solution 2: $$a,\,b,\,c = \frac{2z^2}{xy},\;\frac{x-z}{x+z},\;-\frac{y+z}{y-z}\tag2$$ where $x^2+y^2=5z^2$, and which may also be solved as a Pell equation.
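Both parametrizations can be verified with exact rational arithmetic. The sketch below (names mine) plugs several Pythagorean triples into Solution 1 and several solutions of $x^2+y^2=5z^2$ into Solution 2:

```python
from fractions import Fraction as F

def lhs(a, b, c):
    return (1 - a * a) * (1 - b * b) * (1 - c * c)

def sol1(x, y, z):  # assumes x^2 + y^2 = z^2 and 2x != z
    return (F(-(x - z) * (2 * x + z), (2 * x - z) * y), F(z, 2 * x), F(-2 * y + z, 2 * y + z))

def sol2(x, y, z):  # assumes x^2 + y^2 = 5 z^2 and y != z
    return (F(2 * z * z, x * y), F(x - z, x + z), F(-(y + z), y - z))

for x, y, z in [(3, 4, 5), (5, 12, 13), (8, 15, 17), (20, 21, 29)]:
    a, b, c = sol1(x, y, z)
    assert lhs(a, b, c) == 8 * a * b * c

for x, y, z in [(11, 2, 5), (2, 11, 5), (19, 22, 13)]:
    a, b, c = sol2(x, y, z)
    assert lhs(a, b, c) == 8 * a * b * c
print("both parametrizations verified")
```

Note that the parametrized values need not all be positive; the identity itself holds, and signs can then be adjusted case by case.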
{ "language": "en", "url": "https://mathoverflow.net/questions/208485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 1 }
Is it possible that $(a,b,c)$, $(x,y,a)$, $(p,q,b)$ are Pythagorean triples simultaneously? Do there exist positive integers $a,b,c,x,y,p,q$ such that $(a,b,c)$, $(x,y,a)$, $(p,q,b)$ are all Pythagorean triples? That is, does the system $$\begin{cases} a^2+b^2=c^2\\ x^2+y^2=a^2\\ p^2+q^2=b^2 \end{cases}$$ have a positive integer solution?
The system of equations $$\left\{\begin{aligned}&a^2+b^2=c^2\\&x^2+y^2=a^2\\&f^2+q^2=b^2\end{aligned}\right.$$ is equivalent to solving the following system of equations: $$\left\{\begin{aligned}&a=2ps=z^2+t^2\\&b=p^2-s^2=j^2+v^2\\&c=p^2+s^2\\&x=2zt\\&y=z^2-t^2\\&f=2jv\\&q=j^2-v^2\end{aligned}\right.$$ To ease the calculations we make the substitutions $$B=k^2+2n^2-r^2$$ $$A=k^2+2n^2+r^2-4nr$$ $$W=2k(r-2n)$$ $$Q=2n^2+r^2-k^2-2rn$$ Then the numbers yielding the Pythagorean triples can be found by the formulas $$p=B^2+A^2$$ $$s=2Q^2$$ $$z=2BQ$$ $$t=2AQ$$ $$j=B^2+A^2-2Q^2$$ $$v=2WQ$$ At any stage of the computation one may divide out a common divisor.
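Independently of the parametrization, a brute-force search over small hypotenuses confirms that the system is solvable; the smallest solution has $c=25$ (names mine):

```python
from math import isqrt

def legs(c):
    # all (u, v) with u^2 + v^2 = c^2 and u, v > 0
    out = []
    for u in range(1, c):
        w = c * c - u * u
        r = isqrt(w)
        if r > 0 and r * r == w:
            out.append((u, r))
    return out

found = []
for c in range(5, 40):
    for a, b in legs(c):
        if legs(a) and legs(b):
            found.append((a, b, c))

print(found[0])  # (15, 20, 25): 15^2 + 20^2 = 25^2, 9^2 + 12^2 = 15^2, 12^2 + 16^2 = 20^2
```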
{ "language": "en", "url": "https://mathoverflow.net/questions/208891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
An inequality improvement on AMM 11145 I have asked the same question on math.stackexchange; I am reposting it here looking for answers. How can one show that for real numbers $a_1,a_2,\cdots,a_n >0$ and for $n \ge 3$: $$\sum_{k=1}^{n}\dfrac{k}{a_{1}+a_{2}+\cdots+a_{k}}\le\left(2-\dfrac{7\ln{2}}{8\ln{n}}\right)\sum_{k=1}^{n}\dfrac{1}{a_{k}}$$
I came up with something years ago which is similar to Fedor Petrov's (or other users'). Problem: Let $a_i > 0; \ i = 1, 2, \cdots, n$ ($n\ge 2$). Let $C(n) = 2 - \frac{7\ln 2}{8\ln n}$. Prove that $$\sum_{k=1}^n \frac{k}{a_1 + a_2 + \cdots + a_k} \le C(n)\sum_{k=1}^n \frac{1}{a_k}.$$ Introducing the coefficients (to be determined) $C_1, C_2, \cdots, C_n > 0$, by the Cauchy-Bunyakovsky-Schwarz inequality, we have $$\frac{k}{a_1 + a_2 + \cdots + a_k} \le \frac{k}{(C_1+C_2 + \cdots + C_n)^2}\sum_{i=1}^k \frac{C_i^2}{a_i}$$ and \begin{align} \sum_{k=1}^n \frac{k}{a_1 + a_2 + \cdots + a_k} &\le \sum_{k=1}^n \frac{k}{(C_1+C_2 + \cdots + C_n)^2}\sum_{i=1}^k \frac{C_i^2}{a_i}\\ &= \sum_{k=1}^n \Big[C_k^2\sum_{m=k}^n \frac{m}{(C_1+C_2+\cdots+C_m)^2}\Big]\frac{1}{a_k}. \end{align} Equality occurs if and only if $C_k = a_k, \ k=1, 2, \cdots, n$. The problem becomes: Can we choose $C_k > 0,\ k=1, 2, \cdots, n$ such that $$\sup_{k=1, 2, \cdots, n} C_k^2\sum_{m=k}^n \frac{m}{(C_1+C_2+\cdots+C_m)^2} \le C(n), \ \forall n\ge 2 ?$$ It is easy if $C(n)=2$. For a simple proof, I chose $$C_k = \sqrt{k(k+1)(k+2)(k+3)} - \sqrt{(k-1)k(k+1)(k+2)}, \ k = 1, 2, \cdots, n.$$ We have $C_1 + C_2 + \cdots + C_m = \sqrt{m(m+1)(m+2)(m+3)}$ and \begin{align} &C_k^2\sum_{m=k}^n \frac{m}{(C_1+C_2+\cdots+C_m)^2}\\ = \ &k(k+1)(k+2)\big(2k+2 - 2\sqrt{(k+3)(k-1)}\big)\\ &\quad \cdot\Big(\frac{1}{2(k+1)} - \frac{1}{2(k+2)} - \frac{1}{2(n+2)(n+3)}\Big)\\ \le \ & k(k+1)(k+2)\big(2k+2 - 2\sqrt{(k+3)(k-1)}\big) \Big(\frac{1}{2(k+1)} - \frac{1}{2(k+2)}\Big)\\ \le \ & 2. \end{align} Fedor Petrov chose $C_k = k,\ \forall k$. We have $$C_k^2\sum_{m=k}^n \frac{m}{(C_1+C_2+\cdots+C_m)^2} = k^2\Big(-\frac{4}{n+1}+4\Psi(1, n+2)+\frac{4}{k}-4\Psi(1, 1+k)\Big).$$ Denote the RHS as $f(k, n)$. We have the asymptotic expansion $f(\sqrt{n}, n) \sim 2 - \frac{2}{3\sqrt{n}} - \frac{2}{n} + \cdots (n\to \infty)$. Thus, it at most gives $C(n) \ge 2 - \frac{A}{\sqrt{n}}$ where $A$ is a constant. 
Fedor Petrov also suggested $C_k = k + \lambda$ (optimize $\lambda$). If $\lambda$ is a constant, similarly, we have $f(\sqrt{n},n, \lambda) \sim 2 - \frac{4\lambda + 2}{3\sqrt{n}} - \cdots (n \to \infty)$. It is not enough.
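For what it's worth, the inequality survives randomized testing. The sketch below (names mine) tries random positive tuples; this is evidence rather than proof, since random inputs are unlikely to be near-extremal:

```python
import random
from math import log

def lhs(a):
    partial, total = 0.0, 0.0
    for k, x in enumerate(a, 1):
        partial += x
        total += k / partial
    return total

def C(n):
    return 2 - 7 * log(2) / (8 * log(n))

random.seed(1)
for _ in range(300):
    n = random.randint(3, 60)
    a = [random.uniform(0.01, 10.0) for _ in range(n)]
    assert lhs(a) <= C(n) * sum(1.0 / x for x in a) + 1e-9
print("no counterexample found in 300 random trials")
```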
{ "language": "en", "url": "https://mathoverflow.net/questions/224616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 2 }
Investigation of $\sum \limits_{k=-\infty}^\infty \frac{x^{k+n}}{ \Gamma(k+n+1)}$ where $n \in \mathbb{C}$? $$e^x=\sum \limits_{k=0}^\infty \frac{x^k}{k!}$$ We can rewrite the equation as $$e^x=\sum \limits_{k=0}^\infty \frac{x^k}{ \Gamma(k+1)} \tag{1}$$ because $x!=\Gamma(x+1)$ when $x$ is a non-negative integer. The Gamma function $\Gamma(x)$ is undefined at the integers $\{0, -1, -2,\ldots\}$; it is a meromorphic function with simple poles at the non-positive integers. Hence, for $m$ a non-positive integer, $$\frac{1}{\Gamma(m)}=0 \tag{2}$$ Combining formula $(1)$ with property $(2)$, formula $(1)$ can be extended for $x \neq 0$ to $$e^x=\sum \limits_{k=-\infty}^\infty \frac{x^k}{ \Gamma(k+1)} \tag{3}$$ If we change the summation variable to $k'=k+n$, where $n$ is an integer, we get $$e^x=\sum \limits_{k'=-\infty}^\infty \frac{x^{k'+n}}{ \Gamma(k'+n+1)} \tag{4}$$ for $n \in \mathbb{Z}=\{\ldots,-2,-1,0,1,2,\ldots\}$. I wanted to find out whether equation (4) still holds if we select $n \in \mathbb{C}$, for $x \neq 0$. If we select $n=\frac{1}{2}$, then, using $\Gamma(1/2)=\sqrt{\pi}$, $$f(x)=\sqrt{x}\sum \limits_{k=-\infty}^\infty \frac{x^{k}}{ \Gamma(k+3/2)} \tag{5}$$ I have noticed that $f(x)$ satisfies the same differential relation as $e^x$: $$f'(x)=f(x) \tag{6}$$ Proof: $$f(x)=\sqrt{x}\sum \limits_{k=-\infty}^\infty \frac{x^{k}}{ \Gamma(k+3/2)} \tag{7}$$ $$f(x)=\frac{2\sqrt{x}}{\sqrt{\pi}}\left(1+\frac{2x}{3}+\frac{2^2x^2}{3\cdot5}+\frac{2^3x^3}{3\cdot5\cdot7}+\cdots\right)+\frac{1}{\sqrt{\pi x}}\left(1-\frac{1}{2x}+\frac{3}{2^2x^2}-\frac{3\cdot5}{2^3x^3}+\cdots \right) \tag{8}$$ $$f'(x)=\frac{1}{\sqrt{\pi x}}+\frac{2\sqrt{x}}{\sqrt{\pi}}\left(1+\frac{2x}{3}+\frac{2^2x^2}{3\cdot5}+\frac{2^3x^3}{3\cdot5\cdot7}+\cdots\right)-\frac{1}{\sqrt{\pi x}}\left(\frac{1}{2x}-\frac{3}{2^2x^2}+\frac{3\cdot5}{2^3x^3}-\cdots \right) \tag{9}$$ $$f'(x)=\frac{2\sqrt{x}}{\sqrt{\pi}}\left(1+\frac{2x}{3}+\frac{2^2x^2}{3\cdot5}+\frac{2^3x^3}{3\cdot5\cdot7}+\cdots\right)+\frac{1}{\sqrt{\pi x}}-\frac{1}{\sqrt{\pi x}}\left(\frac{1}{2x}-\frac{3}{2^2x^2}+\frac{3\cdot5}{2^3x^3}-\cdots \right) \tag{10}$$ $$f'(x)=\frac{2\sqrt{x}}{\sqrt{\pi}}\left(1+\frac{2x}{3}+\frac{2^2x^2}{3\cdot5}+\frac{2^3x^3}{3\cdot5\cdot7}+\cdots\right)+\frac{1}{\sqrt{\pi x}}\left(1-\frac{1}{2x}+\frac{3}{2^2x^2}-\frac{3\cdot5}{2^3x^3}+\cdots \right) \tag{11}$$ Thus $$f'(x)=f(x) \tag{12}$$ More generally, define $$f_n(x)=\sum \limits_{k=-\infty}^\infty \frac{x^{k+n}}{ \Gamma(k+n+1)} \tag{13}$$ where $n \in \mathbb{C}$, for $x \neq 0$. The function $f_n(x)$ has the same differential property $f'_n(x)=f_n(x)$ as $e^x$. Please advise how to prove or disprove that $f_n(x)=e^x$ where $n \in \mathbb{C}$, for $x \neq 0$. Thanks a lot
You might like to look at asymptotic expansions--a rich, fascinating field. See this excerpt for a quick intro and the book by Dingle, available courtesy of Michael Berry, who has himself written many fine articles on the subject, e.g., this one. (Wikipedia has a short bibliography.) Coincidentally, just last night I was looking at the asymptotic expansion of the upper incomplete gamma function (being related to some special Sheffer polynomial sequences): $$\frac{x^s}{(s-1)!} \Gamma(x,s)=x^s \int_1^{\infty} e^{-xt} \frac{t^{s-1}}{(s-1)!}dt \sim e^{-x}\sum_{n=1}^{\infty} \frac{x^{s-n}}{(s-n)!}.$$ Taking the first four terms in the summation for $x=1.9$ and $s=2.8$ gives $.653$ in good agreement with the exact value $.657$, so we have a summation for the lower asymptotic series in terms of the incomplete gamma function--the Cinderella function of Tricomi. (Sum initialization corrected 1/8/15.) The full Borel-Laplace transform (Whittaker and Watson) by interchange of the Taylor series summation for $e^{xt}$ and the integrations then is $$e^x=e^xx^s [\int_0^{1}+\int_1^{\infty}] e^{-xt} \frac{t^{s-1}}{(s-1)!}dt \sim \sum_{n=0}^{\infty} \frac{x^{s+n}}{(s+n)!}+\sum_{n=1}^{\infty} \frac{x^{s-n}}{(s-n)!}.$$
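The numerical claim in the middle paragraph ($.653$ from four asymptotic terms versus the exact $.657$ at $x=1.9$, $s=2.8$) can be reproduced with a crude midpoint quadrature for the integral (names mine; `(s-n)!` is read as $\Gamma(s-n+1)$):

```python
from math import exp, gamma

x, s = 1.9, 2.8

# "exact" left side: x^s / Gamma(s) * integral_1^infinity e^{-x t} t^{s-1} dt (midpoint rule)
N, T = 100000, 40.0
h = (T - 1.0) / N
integral = sum(exp(-x * t) * t ** (s - 1)
               for t in ((1.0 + (i + 0.5) * h) for i in range(N))) * h
exact = x ** s / gamma(s) * integral

# first four terms of the asymptotic series e^{-x} sum_{n>=1} x^{s-n} / Gamma(s-n+1)
asym = exp(-x) * sum(x ** (s - n) / gamma(s - n + 1) for n in range(1, 5))

print(exact, asym)  # close to the quoted .657 and .653
assert abs(exact - 0.657) < 2e-3 and abs(asym - 0.653) < 2e-3
```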
{ "language": "en", "url": "https://mathoverflow.net/questions/227642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Tricky two-dimensional recurrence relation I would like to obtain a closed form for the recurrence relation $$a_{0,0} = 1,~~~~a_{0,m+1} = 0\\a_{n+1,0} = 2 + \frac 1 2 \cdot(a_{n,0} + a_{n,1})\\a_{n+1,m+1} = \frac 1 2 \cdot (a_{n,m} + a_{n,m+2}).$$ Even obtaining a generating function for that seems challenging. Is there a closed form for the recurrence relation or at least for the generating function? Alternatively, is there a closed form for $a_{n,0}$?
Here is the table for $a_{n,m}2^n$ (these entries are integers): $$\begin{pmatrix} 1 & 5 & 14 & 35 & 82 & 186 & 412 & 899 & 1938 \\ 0 & 1 & 5 & 15 & 40 & 98 & 231 & 527 & 1180 \\ 0 & 0 & 1 & 5 & 16 & 45 & 115 & 281 & 660 \\ 0 & 0 & 0 & 1 & 5 & 17 & 50 & 133 & 336 \\ 0 & 0 & 0 & 0 & 1 & 5 & 18 & 55 & 152 \\ 0 & 0 & 0 & 0 & 0 & 1 & 5 & 19 & 60 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 5 & 20 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$ The top row does not give a hit in OEIS, but I conjecture that it is given by the following: $$ a_{n,0} = \frac{(4 n+3) \Gamma \left(\frac{n+1}{2}\right)}{\sqrt{\pi } \Gamma \left(\frac{n}{2}+1\right)}-2 $$ if $n$ is even, and $$a_{n,0} = \frac{(4n+5) \Gamma \left(\frac{n}{2}+1\right)}{\sqrt{\pi } \Gamma \left(\frac{n+3}{2}\right)}-2 $$ if $n$ is odd.
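The conjectured closed form can be checked exactly: for integer $n$ the Gamma-function ratios reduce to central binomial coefficients, so everything can be done in rational arithmetic. A sketch (names mine); note the printed row matches the table when the entries $a_{n,0}$ are scaled by $2^n$:

```python
from fractions import Fraction
from math import comb

def table(N, M):
    # run the recurrence exactly; a_{n,m} = 0 for m > n, so truncation at M >> N is harmless
    a = [[Fraction(0)] * (M + 2) for _ in range(N + 1)]
    a[0][0] = Fraction(1)
    for n in range(N):
        a[n + 1][0] = 2 + (a[n][0] + a[n][1]) / 2
        for m in range(M):
            a[n + 1][m + 1] = (a[n][m] + a[n][m + 2]) / 2
    return a

def closed_form(n):
    # conjectured a_{n,0}; the Gamma ratio equals C(2t, t) / 4^t for the t below
    t = n // 2 if n % 2 == 0 else (n + 1) // 2
    coeff = 4 * n + 3 if n % 2 == 0 else 4 * n + 5
    return coeff * Fraction(comb(2 * t, t), 4 ** t) - 2

a = table(8, 20)
row = [int(a[n][0] * 2 ** n) for n in range(9)]
print(row)  # [1, 5, 14, 35, 82, 186, 412, 899, 1938]
assert all(a[n][0] == closed_form(n) for n in range(9))
```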
{ "language": "en", "url": "https://mathoverflow.net/questions/235041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Rogers-Ramanujan continued fraction $R(e^{-2 \pi \sqrt 5})$ Let $$R(q) = \cfrac{q^{1/5}}{1 + \cfrac{q}{1 + \cfrac{q^{2}}{1 + \cfrac{q^{3}}{1 + \cdots}}}}$$ It is easy to evaluate $R(e^{-2 \pi/ \sqrt 5})$ using the Dedekind eta function identity $\eta(-\frac{1}{z})=\sqrt{-iz}\eta(z)$ and one of the most fundamental properties of $R(q)$ $$\frac{1}{R(q)}-1-R(q)=\frac{f(-q^\frac{1}{5})}{q^\frac{1}{5}f(-q^5)}=\frac{\eta(\tau/5)}{\eta(5\tau)}$$ where $q=e^{2 \pi i \tau}$ and $f(q)$ is the theta function (Ramanujan's notation). Then $$R(e^{-2 \pi/ \sqrt 5})=\sqrt[5]{\sqrt{1+\beta^{10}}-\beta^5}$$ where $$\beta=\frac{1+\sqrt{5}}{2 } $$ is the golden ratio. If $\alpha_1 , \alpha_2>0$ and $\alpha_1 \alpha_2=\pi^2$ then $$\bigg(\frac{1+\sqrt{5}}{2}+R(e^{-2 \alpha_1})\bigg)\bigg(\frac{1+\sqrt{5}}{2}+R(e^{-2 \alpha_2})\bigg)=\frac{5+\sqrt{5}}{2}$$ Using this identity, I can evaluate $R(e^{-2 \pi \sqrt 5})$, since I already know the value of $R(e^{- 2 \pi/ \sqrt5})$: $$\color{blue} {R(e^{-2 \pi \sqrt 5})= \frac{\beta+2}{\beta+\sqrt[5]{\sqrt{1+\beta^{10}}-\beta^5}}-\beta}.$$ There is another way to evaluate $R(e^{-2 \pi \sqrt 5})$. $$R(e^{-2 \pi \sqrt 5})=\sqrt{(\frac{A+1}{2})+1}-\frac{A+1}{2}$$ where $A$ satisfies the quadratic equation $$\frac{A}{\sqrt{5}V}-\frac{\sqrt{5}V}{A}=\bigg(V-V^{-1}\bigg)^2 \bigg(\frac{V-V^{-1}}{\sqrt{5}}-\frac{\sqrt{5}}{V-V^{-1}}\bigg)$$ and $$V=\frac{G_{125}}{G_5}$$ where $G_n=2^{-1/4}e^{\pi \sqrt{n}/24} \chi(e^{- \pi \sqrt{n}})$ is Ramanujan's class invariant (an algebraic number), $\chi(q)$ is Ramanujan's function defined by $\chi(-q)=(q;q)_\infty$, and $(a;q)_n$ is a q-Pochhammer symbol. Here are my questions:

1. How can one calculate the class invariant $G_{125}$ in order to evaluate $R(e^{-2 \pi \sqrt 5})$ as indicated above?
2. Does there exist another way to evaluate $R(e^{-2 \pi \sqrt 5})$ without using $R(e^{-2 \pi / \sqrt 5})$ and class invariants?
Let $R(q)$ be the Rogers-Ramanujan continued fraction $$ R(q):=\frac{q^{1/5}}{1+}\frac{q^1}{1+}\frac{q^2}{1+}\frac{q^3}{1+}\ldots,|q|<1 $$ Let also for $r>0$ $$ Y=Y(r):=R(e^{-2\pi\sqrt{r}})^{-5}-11-R(e^{-2\pi\sqrt{r}})^5 $$ It is easy to show that $$ Y\left(\frac{r}{5}\right)Y\left(\frac{1}{5r}\right)=125 \tag{1} $$ for all $r>0$. Hence for $r=1$ we get $Y(1/5)^2=125$, so $$ Y\left(\frac{1}{5}\right)=\sqrt{125}=5\sqrt{5} $$ hence $$ R\left(e^{-2\pi/\sqrt{5}}\right)=\sqrt[5]{\frac{2}{11+5\sqrt{5}+\sqrt{250+110\sqrt{5}}}} $$ Using the modular relation $$ R\left(e^{-2\pi\sqrt{1/r}}\right)=\frac{-(1+\sqrt{5})R\left(e^{-2\pi\sqrt{r}}\right)+2}{2R\left(e^{-2\pi\sqrt{r}}\right)+1+\sqrt{5}} \tag{2} $$ we easily get the result. Proof of (1): The fifth degree modular equation of $R(q)$ is $$ R\left(q^{1/5}\right)^5=R(q)\frac{1-2R(q)+4R(q)^2-3R(q)^3+R(q)^4}{1+3R(q)+4R(q)^2+2R(q)^3+R(q)^4} $$ Also, $(2)$ holds, with $v_r=R\left(e^{-2\pi\sqrt{r}}\right)$, $r>0$ (see: W. Duke, 'Continued Fractions and Modular Functions'): $$ R\left(e^{-2\pi\sqrt{1/r}}\right)=v_{1/r}=\frac{-(1+\sqrt{5})v_r+2}{2v_r+1+\sqrt{5}} $$ A routine algebraic evaluation shows that $$ \left(v_{r/25}^{-5}-11-v_{r/25}^5\right)\left(v_{1/r}^{-5}-11-v_{1/r}^{5}\right)=\ldots=125 $$ Hence the proof is complete.
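The evaluation of $R(e^{-2\pi/\sqrt 5})$ can be confirmed numerically by evaluating the continued fraction from the bottom up (names mine); since $q \approx 0.06$ here, sixty levels are far more than enough for double precision:

```python
from math import exp, pi, sqrt

def rr(q, depth=60):
    # evaluate q^{1/5} / (1 + q/(1 + q^2/(1 + ...))) by backward recurrence
    t = 1.0
    for n in range(depth, 0, -1):
        t = 1.0 + q ** n / t
    return q ** 0.2 / t

q = exp(-2 * pi / sqrt(5))
closed = (2 / (11 + 5 * sqrt(5) + sqrt(250 + 110 * sqrt(5)))) ** 0.2
assert abs(rr(q) - closed) < 1e-12
print(rr(q))  # 0.5378...
```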
{ "language": "en", "url": "https://mathoverflow.net/questions/241809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Is $x^{2k+1} - 7x^2 + 1$ irreducible? Question. Is the polynomial $x^{2k+1} - 7x^2 + 1$ irreducible over $\mathbb{Q}$ for every positive integer $k$? It is irreducible for all positive integers $k \leq 800$.
Here is a proof, based on a trick that can be used to prove that $x^n + x + 1$ is irreducible when $n \not\equiv 2 \bmod 3$. We work with Laurent polynomials in $R = \mathbb Z[x,x^{-1}]$; note that $R$ has unit group $R^\times = \pm x^{\mathbb Z}$. We observe that for $f \in R$, the sum of the squares of the coefficients is given by $$\|f\| = \int_0^1 |f(e^{2 \pi i t})|^2\,dt = \int_0^1 f(e^{2 \pi i t}) f(e^{-2 \pi i t})\,dt = \int_0^1 f(x) f(x^{-1})\big|_{x = e^{2 \pi i t}}\,dt .$$ Now assume that $f(x) = g(x) h(x)$. Then, since $f(x) f(x^{-1}) = \bigl(g(x)h(x^{-1})\bigr)\bigl(g(x^{-1})h(x)\bigr)$, $G(x) := g(x) h(x^{-1})$ satisfies $\|G\| = \|f\|$ and $G(x) G(x^{-1}) = f(x) f(x^{-1})$. Now we consider $f(x) = x^n - 7 x^2 + 1$; then $\|f\| = 51$. If $f = g h$ as above, then write $G(x) = \pm x^m(1 + a_1 x + a_2 x^2 + \ldots)$ and $G(x^{-1}) = \pm x^l(1 + b_1 x + b_2 x^2 + \ldots)$. The relation $G(x) G(x^{-1}) = f(x) f(x^{-1})$ translates into (equality of signs and) $$(1 + a_1 x + \ldots)(1 + b_1 x + \ldots) = 1 - 7 x^2 + O(x^{n-2}).$$ Assuming that $n > 40$ and considering terms up to $x^{20}$, one can check (see below) that the only solution such that $a_1^2 + a_2^2 + \ldots + a_{20}^2 + b_1^2 + b_2^2 + \ldots + b_{20}^2\le 49$ is, up to the substitution $x \leftarrow x^{-1}$, given by $1 + a_1 x + \ldots = 1 - 7x^2 + O(x^{21})$, $1 + b_1 x + \ldots = 1 + O(x^{21})$. Since the $-7$ (together with the leading and trailing 1) exhausts our allowance for the sum of squares of the coefficients, all other coefficients must be zero, and we obtain that $G(x) = \pm x^a f(x)$ or $G(x) = \pm x^a f(x^{-1})$. Modulo interchanging $g$ and $h$, we can assume that $g(x) h(x^{-1}) = \pm x^a f(x)$, so $h(x^{-1}) = \pm x^a f(x)/g(x) = \pm x^a h(x)$, and $x^{\deg h} h(x^{-1})$ divides $f(x)$. 
This implies that $h(x)$ divides $x^n f(x^{-1})$, so $h(x)$ must divide $$f(x) - x^n f(x^{-1}) = 7 x^2 (x^{n-4} - 1).$$ So $h$ also divides $$f(x) - x^4 (x^{n-4} - 1) = x^4 - 7 x^2 + 1 = (x^2-3x+1)(x^2+3x+1).$$ Since $h$ also divides $f$, it must divide the difference $x^n - x^4$, which for $n \neq 4$ it clearly doesn't, since the quartic has no roots of absolute value 0 or 1; contradiction. The argument shows that $x^n - 7 x^2 + 1$ is irreducible for $n > 40$; for smaller $n$, we can ask the Computer Algebra System we trust. This gives: Theorem. $x^n - 7 x^2 + 1$ is irreducible over $\mathbb Q$ for all positive integers $n$ except $n=4$. ADDED 2017-01-08: After re-checking the computations, I realized that there was a small mistake that ruled out some partial solutions prematurely. It looks like one needs to consider terms up to $x^{20}$. Here is a file with MAGMA code that verifies that there are no other solutions.
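The small algebraic identities the proof leans on are easy to machine-check with plain coefficient lists (names mine):

```python
def polymul(p, q):  # multiply coefficient lists (constant term first)
    r = [0] * (len(p) + len(q) - 1)
    for i, u in enumerate(p):
        for j, v in enumerate(q):
            r[i + j] += u * v
    return r

# the quartic factors: x^4 - 7x^2 + 1 = (x^2 - 3x + 1)(x^2 + 3x + 1)
assert polymul([1, -3, 1], [1, 3, 1]) == [1, 0, -7, 0, 1]

# f(x) - x^n f(1/x) = 7 x^2 (x^{n-4} - 1), checked for several n
for n in range(5, 15):
    f = [1, 0, -7] + [0] * (n - 3) + [1]   # x^n - 7x^2 + 1, low degree first
    rev = f[::-1]                          # coefficients of x^n f(1/x)
    target = [0, 0, -7] + [0] * (n - 5) + [7, 0, 0]
    assert [u - v for u, v in zip(f, rev)] == target

# the quantity ||f|| (sum of squared coefficients) used in the argument is 51
assert sum(c * c for c in [1, 0, -7] + [0] * 6 + [1]) == 51
print("identities verified")
```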
{ "language": "en", "url": "https://mathoverflow.net/questions/258914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 3, "answer_id": 0 }
Evaluation of a double definite integral with a singularity How does one compute $$\int_{0}^{1} \int_{0}^{1} \frac{(\log(1+x^2)-\log(1+y^2))^2 }{|x-y|^{2}}dx dy?$$ Is it possible to compute the integral analytically, up to some terms? I believe it should involve hypergeometric series. Any ideas are welcome.
With some effort, Mathematica evaluates this as $$\int_{0}^{1} \int_{0}^{1} \frac{[\ln(1+x^2)-\ln(1+y^2)]^2 }{(x-y)^{2}}dx dy=2 \sqrt{2} \; _4F_3\left(\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2};\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2};\tfrac{1}{2}\right)+(4 \pi-\ln 2) C-2 i \,\text{Li}_3\left(\tfrac{1}{2}+\tfrac{1}{2}i\right)+2 i \left[\,\text{Li}_2\left(1-e^{i\pi/4}\right) - \text{Li}_2\left(-e^{i\pi/4}\right)\right] \ln 2-\left(\tfrac{69}{8}-\tfrac{35 }{32}i\right) \zeta (3)-\tfrac{23}{192} \pi ^3+\left(\tfrac{7}{2}-\tfrac{7 }{8}i\right) \pi +\tfrac{1}{24} i \ln ^3 2-\tfrac{7}{16} \pi \ln ^2 2-\left(\tfrac{1}{12}+\tfrac{9 }{32}i\right) \pi ^2 \ln 2+\tfrac{1}{2} \pi \ln \left(1+e^{i\pi/4}\right) \ln 2+\left(\tfrac{7}{8}+\tfrac{11 }{8}i\right) \ln 5-\left(\tfrac{3}{2}+\tfrac{7 }{4}i\right) \arctan\tfrac{1}{2}-\left(\tfrac{11}{2}-\tfrac{7 }{2}i\right) \arctan(1+i)-\tfrac{17}{4} \arctan 2=0.572532$$ with $C$ Catalan's constant and $\text{Li}_n$ the polylog.
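Since the singularity on the diagonal is removable (the numerator vanishes to second order there), a plain midpoint rule already reproduces the quoted value to about three digits (names mine):

```python
from math import log

def integrand(x, y):
    if abs(x - y) < 1e-12:
        t = 2 * x / (1 + x * x)       # limit on the diagonal: (d/dx log(1+x^2))^2
        return t * t
    return ((log(1 + x * x) - log(1 + y * y)) / (x - y)) ** 2

N = 400
h = 1.0 / N
approx = sum(integrand((i + 0.5) * h, (j + 0.5) * h)
             for i in range(N) for j in range(N)) * h * h
print(approx)  # about 0.5725, consistent with the closed-form value 0.572532
assert abs(approx - 0.572532) < 1e-3
```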
{ "language": "en", "url": "https://mathoverflow.net/questions/327767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does this degree 12 genus 1 curve have only one point over infinitely many finite fields? Let $F(x,y,z)$ be the degree 12 homogeneous polynomial: $$x^{12} - x^9 y^3 + x^6 y^6 - x^3 y^9 + y^{12} - 4 x^9 z^3 + 3 x^6 y^3 z^3 - 2 x^3 y^6 z^3 + y^9 z^3 + 6 x^6 z^6 - 3 x^3 y^3 z^6 + y^6 z^6 - 4 x^3 z^9 + y^3 z^9 + z^{12}$$ Over the rationals it is irreducible and $F=0$ is a genus 1 curve. Numerical evidence in Sagemath and Magma suggests that for infinitely many primes $p$, the curve $F=0$ is irreducible over $\mathbb{F}_p$ and $F=0$ has only one point over $\mathbb{F}_p$, the singular point $(1 : 0 : 1)$. Q1 Is this true? Set $p=50033$. Then we have only one point over the finite field and the curve is irreducible of genus 1. This appears to violate the bound on the number of rational points over finite fields given in the paper "The number of points on an algebraic curve over a finite field", J.W.P. Hirschfeld, G. Korchmáros and F. Torres, p. 23. Q2 What hypothesis am I missing for this violation? Sagemath code:

def tesgfppoints2():
    L1=5*10^4
    L2=2*L1
    for p in primes(L1,L2):
        K.<x,y,z>=GF(p)[]
        F=x^12 - x^9*y^3 + x^6*y^6 - x^3*y^9 + y^12 - 4*x^9*z^3 + 3*x^6*y^3*z^3 - 2*x^3*y^6*z^3 + y^9*z^3 + 6*x^6*z^6 - 3*x^3*y^3*z^6 + y^6*z^6 - 4*x^3*z^9 + y^3*z^9 + z^12
        C=Curve(F)
        ire=C.is_irreducible()
        if not ire:
            continue
        rp=len(C.rational_points())
        print 'p=',p,';rp=',rp,'ir=',ire,'g=',C.genus()
The polynomial you wrote is the product of the four polynomials $x^3 - r y^3 - z^3$, where $r$ is a root of the polynomial $t^4 - t^3 + t^2 - t + 1$. I did not read your reference, but likely they assume that the curves are geometrically irreducible.
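Indeed, $t^4 - t^3 + t^2 - t + 1$ is the 10th cyclotomic polynomial, so its roots are the primitive 10th roots of unity, and the claimed factorization can be confirmed numerically; a sketch (sample point chosen arbitrarily):

```python
import cmath

# primitive 10th roots of unity: the roots of t^4 - t^3 + t^2 - t + 1
roots = [cmath.exp(1j * cmath.pi * k / 5) for k in (1, 3, 7, 9)]

def F(x, y, z):
    return (x**12 - x**9*y**3 + x**6*y**6 - x**3*y**9 + y**12
            - 4*x**9*z**3 + 3*x**6*y**3*z**3 - 2*x**3*y**6*z**3 + y**9*z**3
            + 6*x**6*z**6 - 3*x**3*y**3*z**6 + y**6*z**6
            - 4*x**3*z**9 + y**3*z**9 + z**12)

def product_form(x, y, z):
    p = 1
    for r in roots:
        p *= x**3 - r * y**3 - z**3
    return p

print(all(abs(r**4 - r**3 + r**2 - r + 1) < 1e-12 for r in roots))   # True
print(abs(product_form(1.3, 0.7, -0.4) - F(1.3, 0.7, -0.4)) < 1e-9)  # True
```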
{ "language": "en", "url": "https://mathoverflow.net/questions/332515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Polynomial inequality of sixth degree There is the following problem. Let $a$, $b$ and $c$ be real numbers such that $\prod\limits_{cyc}(a+b)\neq0$ and $k\geq2$ such that $\sum\limits_{cyc}(a^2+kab)\geq0.$ Prove that: $$\sum_{cyc}\frac{2a^2+bc}{(b+c)^2}\geq\frac{9}{4}.$$ I have a proof of this inequality for any $k\geq2.6$. I think it's wrong for $k<2.6$, but my software does not give me a counterexample and I don't know how to prove it for some $k<2.6$. It's interesting that without the condition $\sum\limits_{cyc}(a^2+kab)\geq0$ the equality occurs also for $(a,b,c)=(1,1,-1)$. My question is: What is the minimal value of $k$ for which this inequality is true? Thank you!
We want to show that your inequality does not hold for $k\in[2,13/5)$. In view of the identity in your answer, it is enough to show that for each $k\in[2,13/5)$ there is a triple $(a,b,c)\in\mathbb R^3$ with the following properties: $a=-1>b$, \begin{align}s_4&:=\sum_{cyc}(2a^3-a^2b-a^2c) \\ &=a^2 (2 a-b-c)+b^2 (-a+2 b-c)+c^2 (-a-b+2 c)=0, \end{align} $$s_3:=a b + b c + c a<0,$$ and $$s_2+k s_3=0,$$ where $$s_2:=a^2 + b^2 + c^2.$$ Indeed, then the right-hand side of your identity will be $$\frac{20}{3}\sum_{cyc}(a^4-a^2b^2)(13/5-k)s_3<0,$$ so that your identity will yield $$\sum_{cyc}\frac{2a^2+bc}{(b+c)^2}<9/4.$$ For each $k\in(2,13/5)$, the triple $(a,b,c)$ will have all the mentioned properties if $a=-1$, $b$ is the smallest (say) of the 6 real roots $x$ of the polynomial $$P_k(x):=-18 - 15 k + 4 k^2 + 4 k^3 + (36 k + 6 k^2 - 12 k^3) x + (-27 - 9 k - 21 k^2 + 6 k^3) x^2 + (18 + 60 k - 10 k^2 + 8 k^3) x^3 + (-27 - 9 k - 21 k^2 + 6 k^3) x^4 + (36 k + 6 k^2 - 12 k^3) x^5 + (-18 - 15 k + 4 k^2 + 4 k^3) x^6, $$ and $$c=\tfrac12\, (k - b k) - \tfrac12\, \sqrt{-4 - 4 b^2 + 4 b k + k^2 - 2 b k^2 + b^2 k^2}.$$ For $k=2$, $(a,b,c)=(-1,0,1)$ will be such a triple. So, we are done. This result was obtained with Mathematica, as follows (which took Mathematica about 0.05 sec):
{ "language": "en", "url": "https://mathoverflow.net/questions/358054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Maximum eigenvalue of a covariance matrix of Brownian motion $$ A := \begin{pmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \cdots & \frac{1}{n}\\ \frac{1}{2} & \frac{1}{2} & \frac{1}{3} & \cdots & \frac{1}{n}\\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & \cdots & \frac{1}{n}\\ \vdots & \vdots & \vdots & \ddots & \frac{1}{n}\\ \frac{1}{n} & \frac{1}{n} & \frac{1}{n} & \frac{1}{n} & \frac{1}{n} \end{pmatrix}$$ How to prove that all the eigenvalues of $A$ are less than $3 + 2 \sqrt{2}$? This question is similar to this one. I have tried the Cholesky decomposition $A = L^{T} L$, where $$L^{T} = \left(\begin{array}{ccccc} 1 & 0 & 0 & \cdots & 0\\ \frac{1}{2} & \frac{1}{2} & 0 & \cdots & 0\\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ \frac{1}{n} & \frac{1}{n} & \frac{1}{n} & \frac{1}{n} & \frac{1}{n} \end{array}\right)$$ then $$(L^{T})^{-1}=\left(\begin{array}{ccccc} 1 & & & \cdots\\ -1 & 2 & & \cdots\\ & -2 & 3 & \cdots\\ \vdots & \vdots & \vdots & \ddots\\ & & & -(n-1) & n \end{array}\right)$$ $$A^{-1}=L^{-1}(L^{T})^{-1}$$ How to prove the eigenvalues of $A^{-1}$ $$\lambda_{i}\geq\frac{1}{3+2\sqrt{2}}$$ Further, I find that $A$ is the covariance matrix of Brownian motion at time $1, 1/2, 1/3, \ldots, 1/n$
Inspired by @Mateusz Wasilewski I found another method. \begin{eqnarray*} \langle x,Ax\rangle & = & \langle Lx,Lx\rangle\\ & = & \sum_{i=1}^{n}u_{i}^{2} \end{eqnarray*} where $u_{i}=\sum_{j=i}^{n}\frac{1}{j}x_{j}$. \begin{eqnarray*} \sum_{i=1}^{n}u_{i}^{2} & = & \sum_{i=1}^{n}(\sum_{k=i}^{n}b_{k})^{2}\quad(\text{where} \ b_{k}=\frac{1}{k}x_{k})\\ & = & \sum_{i=1}^{n}(\sum_{k=i}^{n}b_{k}^{2}+2\sum_{k>j\geq i}b_{k}b_{j})\\ & = & \sum_{k=1}^{n}\sum_{i=1}^{k}b_{k}^{2}+2\sum_{j=1}^{n-1}\sum_{k=j+1}^{n}\sum_{i=1}^{j}b_{k}b_{j}\\ & = & \sum_{k=1}^{n}\frac{x_{k}^{2}}{k}+2\sum_{j=1}^{n-1}\sum_{k=j+1}^{n}b_{k}x_{j}\\ & = & \sum_{k=1}^{n}\frac{x_{k}^{2}}{k}+2\sum_{k=2}^{n}\sum_{j=1}^{k-1}b_{k}x_{j}\\ & = & \sum_{k=1}^{n}\frac{x_{k}^{2}}{k}+2\sum_{k=2}^{n}b_{k}z_{k-1}\quad(\text{where} \ z_{k}=\sum_{j=1}^{k}x_{j})\\ & = & x_1^2 +\sum_{k=2}^{n}\frac{(z_{k}-z_{k-1})^{2}}{k}+2\sum_{k=2}^{n}\frac{(z_{k}-z_{k-1})}{k}z_{k-1}\\ & = & x_{1}^{2}+\sum_{k=2}^{n}\frac{z_{k}^{2}-z_{k-1}^{2}}{k}\\ & = & \sum_{k=1}^{n}\frac{z_{k}^{2}}{k}-\sum_{k=1}^{n-1}\frac{z_{k}^{2}}{k+1}\\ & = & \sum_{k=1}^{n-1}z_{k}^{2}(\frac{1}{k}-\frac{1}{k+1})+\frac{z_{n}^{2}}{n} \end{eqnarray*}
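The chain of equalities can be spot-checked numerically: writing $z_k=x_1+\dots+x_k$, the claim is that $\langle x,Ax\rangle=\sum_{k=1}^{n-1}z_k^2\big(\frac1k-\frac1{k+1}\big)+\frac{z_n^2}{n}$, where $A_{ij}=1/\max(i,j)$. A small sketch with a random vector:

```python
import random

def quad_form(x):
    """<x, A x> for the matrix A[i][j] = 1/max(i, j) (1-based indices)."""
    n = len(x)
    return sum(x[i] * x[j] / max(i + 1, j + 1)
               for i in range(n) for j in range(n))

def telescoped(x):
    """sum_{k<n} z_k^2 (1/k - 1/(k+1)) + z_n^2 / n with z_k = x_1 + ... + x_k."""
    n = len(x)
    z = [0.0]
    for xi in x:
        z.append(z[-1] + xi)
    return sum(z[k] ** 2 * (1.0 / k - 1.0 / (k + 1)) for k in range(1, n)) \
        + z[n] ** 2 / n

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(7)]
print(abs(quad_form(x) - telescoped(x)) < 1e-12)  # True
```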
{ "language": "en", "url": "https://mathoverflow.net/questions/366339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Are all integers not congruent to 6 modulo 9 of the form $x^3+y^3+3z^2$? Are all integers not congruent to 6 modulo 9 of the form $x^3+y^3+3z^2$ for possibly negative integers $x,y,z$? We have the identity $ (-t)^3+(t-1)^3+3 t^2=3t-1$. The only congruence obstruction we found is 6 modulo 9.
We found another approach based on integral points on genus 0 curves. Let $G(x,z,a_1,a_2,a_3)=(a_1 x+a_2)^3+(-a_1 x+a_3)^3+3 z^2$. For fixed $n$ and $a_i$, $G=n$ is a degree-two genus 0 curve and it might have infinitely many integral points, which gives infinitely many solutions to the OP. WolframAlpha can solve (10x-5)^3+(-10x+3)^3+3*y^2=25 over the integers. A related paper is Representation of an Integer as a Sum of Four Integer Cubes and Related Problems. Added 22 Dec 2020: We can solve $n \equiv 1 \pmod{3}$. The solutions of $(-x-1)^3+(x+2)^3 - 3y^2=(3N+1)$ are $x=-1/2\sqrt{4y^2 + 4N + 1} - 3/2$. When we express $4N+1$ as a difference of two squares $4N+1=z^2-y'^2$ with $y'$ even, we have a solution. Currently the only unsolved case appears to be $n \equiv 0 \pmod{9}$.
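For the quoted example, expanding $(10x-5)^3+(-10x+3)^3+3y^2=25$ reduces it (my own algebra, for illustration) to the Pell-like condition $y^2=200x^2-160x+41$, whose integer points can be found by a simple scan:

```python
import math

def is_square(v):
    return v >= 0 and math.isqrt(v) ** 2 == v

# (10x-5)^3 + (-10x+3)^3 + 3y^2 = 25  simplifies to  y^2 = 200x^2 - 160x + 41
solutions = []
for x in range(-10000, 10001):
    v = 200 * x * x - 160 * x + 41
    if is_square(v):
        y = math.isqrt(v)
        assert (10 * x - 5) ** 3 + (-10 * x + 3) ** 3 + 3 * y * y == 25
        solutions.append((x, y))

print(solutions)  # includes (1, 9) and (4, 51)
```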
{ "language": "en", "url": "https://mathoverflow.net/questions/378968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
On some inequality (upper bound) on a function of two variables There is a problem (of physical origin) which needs an analytical solution or a hint. Let us consider the following real-valued function of two variables $y (t,a) = 4 \left(1 + \frac{t}{x(t,a)}\right)^{ - a - 1/2} \left(1 - \frac{2}{x(t,a)}\right)^{ 1/2} (x(t,a))^{-3/2} (z(t,a))^{1/4} \left(1 + \frac{t}{2}\right)^{ a},$ where $t > 0$, $0 < a < 1$ and $x(t,a) = \frac{a-1}{2} t + \frac{3}{2}+\frac{1}{2}\sqrt{z(t,a)}$, $z(t,a) = (1-a)^2 t^2+ 2(3-a)t +9.$ It is necessary to prove that $$y(t,a) < 1$$ for all $t > 0$ and $0 < a < 1$. The numerical analysis supports this bound. P.S. I apologize for the too ``technical'' question. It looks like the inequality is also valid for $a = 1$ but it fails for $a > 1$.
a human verifiable proof: Let us prove that the sharp bound of $y$ is $\frac{\sqrt{3+2\sqrt 3}}{3}$. Letting $x := x(t, a), z := z(t, a), y := y(t, a)$, we have $$y = 4 \left(\frac{1 + t/2}{1 + t/x}\right)^a\cdot \left(\frac{z}{(1 + t/x)^2}\right)^{1/4} (1 - 2/x)^{1/2} x^{-3/2}. \tag{1}$$ Using Bernoulli inequality $(1 + u)^r \le 1 + ru$ for all $r \in (0, 1]$ and all $u > -1$, we have $$y \le 4 \left(1 + \left(\frac{1 + t/2}{1 + t/x} - 1\right)a\right)\cdot \left(\frac{z}{(1 + t/x)^2}\right)^{1/4} (1 - 2/x)^{1/2} x^{-3/2} \tag{2}$$ and $$y^2 \le 16 \left(1 + \left(\frac{1 + t/2}{1 + t/x} - 1\right)a\right)^2\cdot \left(\frac{z}{(1 + t/x)^2}\right)^{1/2} (1 - 2/x) x^{-3}. \tag{3}$$ We first simplify the expression. We have $$a \in (0, 1) \quad \iff \quad a = \frac{1}{1 + s}, \quad s > 0. $$ Then, we have (the so-called Euler substitution) $$t > 0 \quad \iff \quad t = \frac{2(1+s)(3s+2 - 3u))}{u^2 - s^2}, \quad s < u < s + \frac23. $$ With these substitutions, we have $$z = \left(\frac{-3u^2 + (6s + 4)u - 3s^2}{u^2 - s^2}\right)^2$$ and $$x = \frac{6s + 2}{u + s}.$$ Then, (3) is written as $$y^2 \le {\frac { \left(-3u^2 + 6su + 4u - 3s^2 \right) \left( 5\,s+2 - u \right) ^{2}}{4 \left( 3\,s+1 \right) ^{3}}}. $$ The constraints are: $s > 0$ and $s < u < s + \frac23$. We have $$\sup_{s> 0, ~ s < u < s + \frac23} {\frac { \left(-3u^2 + 6su + 4u - 3s^2 \right) \left( 5\,s+2 - u \right) ^{2}}{4 \left( 3\,s+1 \right) ^{3}}} = \frac{3 + 2\sqrt 3}{9}. \tag{4}$$ (The proof is given at the end.) Thus, we have $$y \le \frac{\sqrt{3 + 2\sqrt 3}}{3}.$$ On the other hand, in (1), letting $t = 3 + 3\sqrt 3, ~ a = 1$, we have $y = \frac{\sqrt{3 + 2\sqrt 3}}{3}$. Thus, $\frac{\sqrt{3 + 2\sqrt 3}}{3}$ is a sharp bound. We are done. Proof of (4): Fixed $s > 0$, let $$f(u) := {\frac { \left(-3u^2 + 6su + 4u - 3s^2 \right) \left( 5\,s+2 - u \right) ^{2}}{4 \left( 3\,s+1 \right) ^{3}}}.$$ The maximum of $f(u)$ is attained at $u_0 = 2\,s+1- \frac13\,\sqrt {9\,{s}^{2}+12\,s+3}$. 
Let $g(s) := f(u_0)$. It is easy to prove that $g'(s) < 0$ for all $s \ge 0$. Thus, $g(s) \le g(0) = \frac{3 + 2\sqrt 3}{9}$. We are done.
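Both the bound $y(t,a)<1$ on the original domain and the sharp value $\frac{\sqrt{3+2\sqrt3}}{3}$ at the boundary point $a=1$, $t=3+3\sqrt3$ are easy to probe numerically with the definitions from the question; a sketch:

```python
import math

def z(t, a):
    return (1 - a) ** 2 * t ** 2 + 2 * (3 - a) * t + 9

def x(t, a):
    return (a - 1) / 2 * t + 1.5 + 0.5 * math.sqrt(z(t, a))

def y(t, a):
    X, Z = x(t, a), z(t, a)
    return (4 * (1 + t / X) ** (-a - 0.5) * (1 - 2 / X) ** 0.5
            * X ** -1.5 * Z ** 0.25 * (1 + t / 2) ** a)

# sample the original domain t > 0, 0 < a < 1
grid_max = max(y(t, a)
               for t in (0.01 * 1.3 ** k for k in range(40))
               for a in (i / 20 for i in range(1, 20)))
print(grid_max < 1)  # True

t_star = 3 + 3 * math.sqrt(3)
bound = math.sqrt(3 + 2 * math.sqrt(3)) / 3
print(abs(y(t_star, 1.0) - bound) < 1e-12)  # True
```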
{ "language": "en", "url": "https://mathoverflow.net/questions/384678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
A tricky integral to evaluate I came across this integral in some work. So, I would like to ask: QUESTION. Can you evaluate this integral with proofs? $$\int_0^1\frac{\log x\cdot\log(x+2)}{x+1}\,dx.$$
$$\int_0^1\frac{\ln x\cdot\ln(x+2)}{x+1}\,dx=$$ $$=\text{Li}_3\left(-\tfrac{1}{3}\right)-2 \,\text{Li}_3\left(\tfrac{1}{3}\right)+\tfrac{1}{2} \ln 3\left[ \text{Li}_2\left(\tfrac{1}{9}\right)-6\, \text{Li}_2\left(\tfrac{1}{3}\right) -\tfrac{2}{3} \ln ^2 3\right]+\tfrac{13}{8} \zeta (3).$$ I checked that this combination of polylog's evaluates to $-0.651114$, equal to a numerical evaluation of the integral. Update: As Timothy Budd pointed out, that this combination of polylog's simplifies to $-\frac{13}{24}\zeta(3)$ is proven by Przemo at MSE. The identities that enable this simplification are $$\text{Li}_3\left(-\tfrac{1}{3}\right)-2 \,\text{Li}_3\left(\tfrac{1}{3}\right) = -\tfrac{1}{6} \ln^3 3 + \tfrac{1}{6}\pi^2 \ln 3 - \tfrac{13}{6} \zeta(3),$$ $$\text{Li}_2(\tfrac{1}{9})=2\,\text{Li}_2(-\tfrac{1}{3})+2\,\text{Li}_2(\tfrac{1}{3}),$$ $$2\text{Li}_2\left(-\tfrac{1}{3}\right)-4 \,\text{Li}_2\left(\tfrac{1}{3}\right) = \ln^2 3 -\tfrac{1}{3}\pi^2 .$$
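The simplification to $-\frac{13}{24}\zeta(3)\approx-0.651114$ is easy to confirm numerically: the $\log x$ singularity at $0$ is integrable, so even a midpoint rule converges. A sketch:

```python
import math

ZETA3 = 1.2020569031595943  # zeta(3)

def integral(n=200_000):
    h = 1.0 / n
    return h * sum(math.log((i + 0.5) * h) * math.log((i + 0.5) * h + 2)
                   / ((i + 0.5) * h + 1) for i in range(n))

val = integral()
print(abs(val + 13 * ZETA3 / 24) < 1e-4)  # True
```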
{ "language": "en", "url": "https://mathoverflow.net/questions/385258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Improper integral $\int_0^\infty\frac{x^{2n+1}}{(1+x^2)^2(2+x^2)^N}dx,\ \ \ n\le N$ How can I evaluate this integral? $$\int_0^\infty\frac{x^{2n+1}}{(1+x^2)^2(2+x^2)^N}dx,\ \ \ n\le N$$ Maybe there is a recurrence relation for the integral?
Let $I_{n,N}$ denote the integral in question, where $n$ and $N$ are nonnegative integers such that $n\le N$. With the change of variables $y=2+x^2$ and then using the binomial expansion of $(y-2)^n$, we get \begin{equation} 2I_{n,N}=\int_2^\infty\frac{(y-2)^n\,dy}{(y-1)^2y^N} =\sum_{j=0}^n\binom nj (-2)^{n-j}J_{N-j}, \end{equation} where \begin{equation} J_k:=\int_2^\infty\frac{y^{-k}\,dy}{(y-1)^2}. \end{equation} Integrating by parts, we get \begin{equation} J_k=2^{-k}-kM_{k+1}, \end{equation} where \begin{equation} M_k:=\int_2^\infty\frac{y^{-k}\,dy}{y-1}=\int_0^{1/2}(1-t)^{-1}t^{k-1}\,dt, \end{equation} using $t=1/y$. Further, \begin{align*} M_k&=\int_0^{1/2}[(1-t)+t](1-t)^{-1}t^{k-1}\,dt \\ &=\int_0^{1/2}t^{k-1}\,dt+\int_0^{1/2}(1-t)^{-1}t^k\,dt \\ &=\frac1{k2^k}+M_{k+1}. \end{align*} So, \begin{align*} 2I_{n,N}&=\sum_{j=0}^n\binom nj (-2)^{n-j}2^{j-N} \\ & -\sum_{j=0}^n\binom nj (-2)^{n-j}(N-j)M_{N-j+1} \\ & =2^{-N}1(n=0)-\sum_{j=0}^n\binom nj (-2)^{n-j}(N-j)M_{N-j+1}. \end{align*} Thus, \begin{align*} I_{n,N}&=2^{-N-1}1(n=0)+\sum_{j=0}^n\binom nj (-2)^{n-j-1}(N-j)M_{N-j+1}, \tag{1} \end{align*} and the $M_k$'s are given by the simple recurrence \begin{equation} M_{k+1}=M_k-\frac1{k2^k} \tag{2} \end{equation} for natural $k$, with $M_1=\ln2$. From this, one can also get a (double) recurrence for $I_{n,N}$. Indeed, by (1), \begin{equation} I_{n,N}=2^{-N-1}1(n=0)+\sum_{j=0}^\infty a_{j,n}L_{N-j}, \end{equation} where \begin{equation} a_{j,n}:=\binom nj (-2)^{n-j-1},\quad L_k:=kM_{k+1}. \end{equation} From $\binom nj=\binom{n-1}{j-1}+\binom{n-1}j$, we get \begin{equation} a_{j,n}=a_{j-1,n-1}-2a_{j,n-1}. \end{equation} So, \begin{align*} I_{n,N}&=\sum_{j=0}^\infty a_{j-1,n-1}L_{N-j}-2\sum_{j=0}^\infty a_{j,n-1}L_{N-j} \\ &=\sum_{i=-1}^\infty a_{i,n-1}L_{N-1-i}-2\sum_{j=0}^\infty a_{j,n-1}L_{N-j} \\ &=I_{n-1,N-1}-2I_{n-1,N} \tag{3} \end{align*} if $1\le n\le N$, and (by (1)) $I_{0,N}=2^{-N-1}-NM_{N+1}/2$, with the $M_k$'s given by recurrence (2). 
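Formula (1) with the recurrence (2), and the double recurrence (3), can be cross-checked against direct quadrature (substituting $u=x^2$ and then $u=s/(1-s)$ to map the half-line to $(0,1)$); a sketch for small $(n,N)$:

```python
import math

def M(k):
    """M_1 = ln 2, with M_{k+1} = M_k - 1/(k 2^k)  (recurrence (2))."""
    m = math.log(2)
    for j in range(1, k):
        m -= 1.0 / (j * 2 ** j)
    return m

def I_closed(n, N):
    """Formula (1)."""
    val = 2.0 ** (-N - 1) if n == 0 else 0.0
    for j in range(n + 1):
        val += math.comb(n, j) * (-2.0) ** (n - j - 1) * (N - j) * M(N - j + 1)
    return val

def I_quad(n, N, steps=20_000):
    """I_{n,N} = (1/2) int_0^oo u^n ((1+u)^2 (2+u)^N)^(-1) du via u = s/(1-s)."""
    h = 1.0 / steps
    tot = 0.0
    for i in range(steps):
        s = (i + 0.5) * h
        u = s / (1 - s)
        tot += u ** n / ((1 + u) ** 2 * (2 + u) ** N * (1 - s) ** 2)
    return 0.5 * h * tot

print(abs(I_closed(1, 2) - I_quad(1, 2)) < 1e-6)                            # True
# double recurrence (3): I_{n,N} = I_{n-1,N-1} - 2 I_{n-1,N}
print(abs(I_closed(1, 2) - (I_closed(0, 1) - 2 * I_closed(0, 2))) < 1e-12)  # True
```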
This conclusion has been verified for a few pairs $(n,N)$.
{ "language": "en", "url": "https://mathoverflow.net/questions/393753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
The constant $\pi$ expressed by an infinite series I am looking for the proof of the following claim: First, define the function $\operatorname{sgn_1}(n)$ as follows: $$\operatorname{sgn_1}(n)=\begin{cases} -1 \quad \text{if } n \neq 3 \text{ and } n \equiv 3 \pmod{4}\\1 \quad \text{if } n \in \{2,3\} \text{ or } n \equiv 1 \pmod{4}\end{cases}$$ Let $n=p_1^{\alpha_1} \cdot p_2^{\alpha_2} \cdot \ldots \cdot p_k^{\alpha_k}$ , where the $p_i$s are the $k$ prime factors of order $\alpha_i$ . Next, define the function $\operatorname{sgn_2}(n)$ as follows: $$\operatorname{sgn_2}(n)=\displaystyle\prod_{i=1}^k(\operatorname{sgn_1}(p_i))^{\alpha_i}$$ Then, $$\pi=\displaystyle\sum_{n=1}^{\infty} \frac{\operatorname{sgn_2}(n)}{n}$$ The first few terms of this series: $$\pi=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}-\frac{1}{7}+\frac{1}{8}+\frac{1}{9}+\frac{1}{10}-\frac{1}{11}+\frac{1}{12}+\frac{1}{13}-\frac{1}{14}+ \ldots$$ The sum of the first $3000000$ terms gives the following result rounded to the $37$ decimal places: $$\displaystyle\sum_{n=1}^{3000000} \frac{\operatorname{sgn_2}(n)}{n}=3.1415836871743792245050824485818285768$$ The SageMath cell that demonstrates this infinite series can be found here.
This can be proved similarly to Alexander Kalmynin's method. Let the sum be $S$. Since $\text{sgn}_1$ of $2$ and $3$ is defined to be $1$, the multiplicativity of $\text{sgn}_2$ gives $\text{sgn}_2(ak)=\text{sgn}_2(k)$ for $a=2,3,6$. Also, from the definition of $\text{sgn}_2$, for $n$ coprime to $6$ we have $\text{sgn}_2(n)=1$ or $-1$ according as $n \equiv 1$ or $-1 \pmod 4$. As $\text{sgn}_2(ak)=\text{sgn}_2(k)$ for $a=2,3$, we can separate the even terms and the multiples of $3$ out of $S$ in the form $\frac{S}{2}+\frac{S}{3}$; to prevent double counting of the multiples of $6$, we subtract $\frac{S}{6}$. What is left are the odd numbers not divisible by $3$, i.e. the numbers $12k+\sigma$, $\sigma=1,-1,5,-5$, which are also of the form $4k\pm1$. This yields the identity: $$S=\frac{S}{2}+\frac{S}{3}-\frac{S}{6}+\left(\sum_{n\geq 0}\frac{1}{12n+1}-\sum_{n\geq 1}\frac{1}{12n-1}\right)+\left(\sum_{n\geq 0}\frac{1}{12n+5}-\sum_{n\geq 1}\frac{1}{12n-5}\right)$$ This gives $\frac{S}{3}=(1+\frac{1}{5})+\frac{1}{12^2}\left(\sum_{n=1}^{\infty} \frac{2}{(\frac{1}{12})^2-n^2}+\sum_{n=1}^{\infty} \frac{ 10}{(\frac{5}{12})^2-n^2}\right)$, i.e. $\frac{S}{3}=\frac{\pi}{12}\left(\cot(\frac{\pi}{12})+\cot(\frac{5\pi}{12})\right)$ by the partial-fraction expansion of the cotangent. Since $\cot(\frac{\pi}{12})+\cot(\frac{5\pi}{12})=(2+\sqrt3)+(2-\sqrt3)=4$, this gives $S=\pi$.
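A numerical sanity check of the truncated bracketed sums (which should total $S/3=\pi/3$) and of the closing cotangent step:

```python
import math

def bracket_sums(N=200_000):
    """Truncation of (sum 1/(12n+1) - sum 1/(12n-1)) + (sum 1/(12n+5) - sum 1/(12n-5))."""
    t = 1.0 + 1.0 / 5.0  # the n = 0 terms of the two positive sums
    for n in range(1, N + 1):
        t += 1.0 / (12 * n + 1) - 1.0 / (12 * n - 1)
        t += 1.0 / (12 * n + 5) - 1.0 / (12 * n - 5)
    return t

cots = 1 / math.tan(math.pi / 12) + 1 / math.tan(5 * math.pi / 12)
print(abs(cots - 4) < 1e-12)                       # True
print(abs(bracket_sums() - math.pi / 3) < 1e-5)    # True
```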
{ "language": "en", "url": "https://mathoverflow.net/questions/395438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Integrality of a sequence formed by sums Consider the following sequence defined as a sum $$a_n=\sum_{k=0}^{n-1}\frac{3^{3n-3k-1}\,(7k+8)\,(3k+1)!}{2^{2n-2k}\,k!\,(2k+3)!}.$$ QUESTION. For $n\geq1$, is the sequence of rational numbers $a_n$ always integral?
Let $A(x) = \sum_{n=1}^\infty a_n x^n$ and let $$S(x) = \sum_{k=0}^\infty (7k+8)\frac{(3k+1)!}{k!\,(2k+3)!} x^k.$$ Then the formula for $a_n$ gives $A(x) = R(x)S(x)$, where $$R(x) = \frac{1}{3}\biggl(\frac{1}{1-\frac{27}{4} x} -1\biggr).$$ A standard argument, for example by Lagrange inversion, gives $$S\left(\frac{y}{(1+y)^3}\right)=\frac{4+y}{3(1+y)^2}.$$ A straightforward computation gives $$R\left(\frac{y}{(1+y)^3}\right) = \frac{9y}{(4+y)(1-2y)^2}.$$ Thus $$A\left(\frac{y}{(1+y)^3}\right)=\frac{3y}{(1-2y)^2(1+y)^2}.$$ Since the power series expansion of $y/(1+y)^3$ starts with $y$ and has integer coefficients, its compositional inverse has integer coefficients, so $A(x)$ does also.
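Independently of the generating-function proof, the integrality is cheap to confirm for small $n$ by applying exact rational arithmetic to the defining sum:

```python
from fractions import Fraction
from math import factorial

def a(n):
    s = Fraction(0)
    for k in range(n):
        s += Fraction(3 ** (3 * n - 3 * k - 1) * (7 * k + 8) * factorial(3 * k + 1),
                      2 ** (2 * n - 2 * k) * factorial(k) * factorial(2 * k + 3))
    return s

values = [a(n) for n in range(1, 13)]
print(values[:2])  # [Fraction(3, 1), Fraction(27, 1)]
print(all(v.denominator == 1 for v in values))  # True
```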
{ "language": "en", "url": "https://mathoverflow.net/questions/398037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 2 }
Subsequences of odd powers Let $p$ and $q$ be integers. Let $f(n)$ be A007814, the exponent of the highest power of $2$ dividing $n$, a.k.a. the binary carry sequence, the ruler sequence, or the $2$-adic valuation of $n$. Then we have an integer sequence given by \begin{align} a(0)=a(1)&=1\\ a(2n)& = pa(n)+qa(2n-2^{f(n)})\\ a(2n+1) &= a(n-2^{f(n)}) \end{align} which can also be expressed with a product $$a(n) = [t(n) = 0] + [t(n) > 0]\prod\limits_{k=0}^{t(n)-1} (q^{k+1} + p\sum\limits_{j=0}^{k} q^{j})^{g(n, k)}$$ where $$t(n)=\begin{cases} [n=2],&\text{if $n<4$;}\\ t(2^{m-1} + k),&\text{if $0 \leqslant k < 2^{m-1}, m > 1$ where $n = 2^m + k$;}\\ t(k) + A010060(k - 2^{m-1}),&\text{if $2^{m-1} \leqslant k < 2^m, m > 1$ where $n = 2^m + k$.} \end{cases}$$ $$g(n,0)=\begin{cases} [n=2]+2\cdot [n=4]+[n=6]+[n=7],&\text{if $n<8$;}\\ g(2^{m-1} + k,0) - A010060(k) + 1,&\text{if $0 \leqslant k < 2^{m-1}, m > 2$ where $n = 2^m + k$;}\\ g(2^{m-2} + k,0) + 1,&\text{if $2^{m-1} \leqslant k < 3\cdot 2^{m-2}, m > 2$ where $n = 2^m + k$;}\\ g(2^{m-3} + k,0) + A010060(k - 3\cdot 2^{m-2}) ,&\text{if $3\cdot 2^{m-2} \leqslant k < 7\cdot 2^{m-3}, m > 2$ where $n = 2^m + k$;}\\ 1,&\text{if $7\cdot 2^{m-3} \leqslant k < 2^m, m > 2$.} \end{cases}$$ $$g(n, k) = g(h(n, k), 0), n \geqslant 0, k > 0,$$ $$h(n, k) = h(h(n, 1), k - 1), n \geqslant 0, k > 1,$$ $$h(n , 1) = s(s(n)), n \geqslant 0,$$ $$s(n) = A053645(n), n > 0, s(0) = 0.$$ Here are the links to the sequences: A010060, A053645. I conjecture that $a(\frac{2^{kn}-1}{2^k-1})=(a(2^n-1))^{2k-1}$ for $n \geqslant 0$, $k>0$. This question generalizes the following: Subsequence of the cubes. Is there a way to prove it using expression with a product?
Quite similarly to my answer to the previous question, we have that for $n=2^tk$ with odd $k$, $$ a(n)=\sum_{i=0}^t \binom{t}{i}p^{t-i}q^i a(2^i(k-1)+1). $$ It further follows that for $n=2^{t_1}(1+2^{t_2+1}(1+\dots(1+2^{t_s+1}))\dots)$ with $t_j\geq 0$, we have \begin{split} a(n) &= \sum_{i_1=0}^{t_1} \binom{t_1}{i_1} p^{t_1-i_1}q^{i_1} \sum_{i_2=0}^{t_2+t_3+1+i_1} \binom{t_2+t_3+1+i_1}{i_2}p^{t_2+t_3+1+i_1-i_2}q^{i_2} \sum_{i_3=0}^{t_4+t_5+1+i_2} \\ &\qquad\dots \sum_{i_\ell=0}^{t_{2\ell-2}+t_{2\ell-1}+1+i_{\ell-1}} \binom{t_{2\ell-2}+t_{2\ell-1}+1+i_{\ell-1}}{i_\ell}p^{t_{2\ell-2}+t_{2\ell-1}+1-i_\ell}q^{i_\ell} \\ &=\prod_{j=0}^{\ell-1} \bigg(q^{\ell-j}+p\frac{q^{\ell-j}-1}{q-1}\bigg)^{t_{2j}+t_{2j+1}+1}, \end{split} where we conveniently define $\ell:=\left\lfloor\frac{s+1}2\right\rfloor$ and $t_0:=-1$. Now, for $n=\frac{2^{kn}-1}{2^k-1}$ we have $s=n$, $\ell=\left\lfloor\frac{n+1}2\right\rfloor$, $t_1=0$ and $t_j=k-1$ for all $j\in\{2,3,\dots,s\}$, implying that $$a(\tfrac{2^{kn}-1}{2^k-1}) = \prod_{j=1}^{\lfloor (n-1)/2\rfloor} \bigg(q^{\lfloor (n+1)/2\rfloor-j}+p\frac{q^{\lfloor (n+1)/2\rfloor-j}-1}{q-1}\bigg)^{2k-1}.$$ In particular, setting $k=1$, we get $$a(2^{n}-1) = \prod_{j=1}^{\lfloor (n-1)/2\rfloor} \bigg(q^{\lfloor (n+1)/2\rfloor-j}+p\frac{q^{\lfloor (n+1)/2\rfloor-j}-1}{q-1}\bigg),$$ and thus $$a(\tfrac{2^{kn}-1}{2^k-1}) = a(2^{n}-1)^{2k-1}.$$
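The recurrence and the conjectured identity can also be tested directly for concrete parameter values; a memoized sketch with the arbitrary sample $p=2$, $q=3$ (a fully general check would need symbolic $p,q$):

```python
from functools import lru_cache

p, q = 2, 3  # arbitrary sample parameters

def f(n):
    """2-adic valuation of n (A007814)."""
    return (n & -n).bit_length() - 1

@lru_cache(maxsize=None)
def a(n):
    if n <= 1:
        return 1
    if n % 2 == 0:               # a(2m) = p a(m) + q a(2m - 2^f(m))
        m = n // 2
        return p * a(m) + q * a(n - 2 ** f(m))
    m = (n - 1) // 2             # a(2m+1) = a(m - 2^f(m))
    return a(m - 2 ** f(m))

for n in range(1, 6):
    for k in range(1, 4):
        lhs = a((2 ** (k * n) - 1) // (2 ** k - 1))
        assert lhs == a(2 ** n - 1) ** (2 * k - 1)
print(a(7), a(21))  # 5 125
```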
{ "language": "en", "url": "https://mathoverflow.net/questions/405969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How big can a triangle be, whose sides are the perpendiculars to the sides of a triangle from the vertices of its Morley triangle? Given any triangle $\varDelta$, the perpendiculars from the vertices of its (primary) Morley triangle to their respective (nearest) side of $\varDelta$ intersect in a triangle $\varDelta'$, which is similar to $\varDelta$ but on a smaller scale—say with scale factor $s$. (If $\varDelta$ is isosceles, then $\varDelta'$ degenerates to a point, and $s=0$.) What is the maximum value of $s$, and for what shape of $\varDelta$ is this value attained? (This question was posted previously on Mathematics Stack Exchange, without response, but is probably more suitable for this site.)
We will perform the computation using normalized barycentric coordinates in the given triangle $\Delta ABC$ with side lengths $a,b,c$ and angles $\hat A=3\alpha$, $\hat B=3\beta$, $\hat C=3\gamma$. We denote by $R$ the circumradius of the circle $\odot(ABC)$, and the area $[ABC]$ by $S$. Trilinear coordinates of the points $A'$, $B'$, $C'$ (vertices of the first Morley triangle of $\Delta ABC$) can be extracted from the rows of the matrix: $$ \begin{bmatrix} 1 & 2\cos \gamma & 2\cos\beta\\ 2\cos \gamma & 1 & 2\cos\alpha\\ 2\cos \beta & 2\cos\alpha & 1\\ \end{bmatrix} \ . $$ (See for instance https://mathworld.wolfram.com/FirstMorleyTriangle.html .) The (not-normalized) barycentric coordinates can then be extracted from the rows of the matrix: $$ \tag{$*$} \begin{bmatrix} a & 2b\cos \gamma & 2c\cos\beta\\ 2a\cos \gamma & b & 2c\cos\alpha\\ 2a\cos \beta & 2b\cos\alpha & c\\ \end{bmatrix} \ . $$ Let now $P_A$, $P_B$, $P_C$ be three points with normalized barycentric coordinates $$ \begin{aligned} P_A &= (x_A, y_A, z_A)\ ,\\ P_B &= (x_B, y_B, z_B)\ ,\\ P_C &= (x_C, y_C, z_C)\ . \end{aligned} $$ Define $Q_A$ to be such that $Q_AP_B\perp AC$ and $Q_AP_C\perp AB$. Construct in a similar manner $Q_B,Q_C$. Computer algebra shows that $$ \begin{aligned} s^2 :=\ &\frac{[Q_AQ_BQ_C]}{[ABC]} =\frac{Q_BQ_C^2}{a^2} =\frac{Q_CQ_A^2}{b^2} =\frac{Q_AQ_B^2}{c^2} \qquad\text{ is given by the relation} \\ 16S^2\cdot s^2 =\ &\ \begin{pmatrix} + x_A(b^2-c^2) + a^2(y_A-z_A) \\ + y_B(c^2-a^2) + b^2(z_B-x_B) \\ + z_C(a^2-b^2) + c^2(x_C-y_C) \end{pmatrix} ^2 \ . \end{aligned} $$ (Keeping $A$ and exchanging $B\leftrightarrow C$ moves $s$ to $-s$.)
So we have to maximize the expression $$ \begin{aligned} s &= \frac 1{4S} \begin{pmatrix} + x_A(b^2-c^2) + a^2(y_A-z_A) \\ + y_B(c^2-a^2) + b^2(z_B-x_B) \\ + z_C(a^2-b^2) + c^2(x_C-y_C) \end{pmatrix} \\ &=\frac 1{4S}\sum x_A(b^2-c^2) + a^2(y_A-z_A) \\ &=\frac 1{4S}\sum (1-y_A-z_A)(b^2-c^2) + a^2(y_A-z_A) \\ &=\frac 1{4S}\sum y_A(a^2-b^2+c^2) -z_A(a^2+b^2-c^2) \\ &=\frac 1{4S}\sum y_A\cdot 2ac\sin B -z_A\cdot 2ab\sin C \\ &=\frac 1{4RS}\sum y_A\cdot abc -z_A\cdot abc \\ &=\sum(y_A - z_A)\ . \\[3mm] &\qquad\text{Now we plug in the values for $y_A$, $z_A$ from $(*)$:} \\[3mm] y_A &= \frac {2\cdot 2R\sin B\cos\gamma} {2R(\sin A +2\cdot \sin B\cos \gamma +2\cdot \sin C\cos \beta)}\ , \\ z_A &= \frac {2\cdot 2R\sin C\cos\beta} {2R(\sin A +2\cdot \sin B\cos \gamma +2\cdot \sin C\cos \beta)}\ , \\[3mm] &\qquad\text{getting:} \\[3mm] s &=\sum \frac {2\cdot \sin B\cos \gamma - 2\sin C\cos\beta} {\sin A +2\cdot \sin B\cos \gamma +2\cdot \sin C\cos \beta} % \\ % &=-3 + \sum % \frac % {\sin A + 2\cdot \sin B\cos \gamma} % {\sin A + 2\cdot \sin B\cos \gamma + 2\cdot \sin C\cos \beta} \ . \end{aligned} $$ This is not an easy task now. (Either for the last expression of $s$, or for the formula in between with a cyclic sum $s=\sum(y_A-z_A)$, if there is some better geometric interpretation.) So we are starting to solve a new problem in the problem. Since time is an issue for me, instead of starting to compute using Lagrange multipliers, in order to proceed in some few lines here is a picture of the function to be maximized: The plot uses $x,y\in[0,1]$ to parametrize the angles $\alpha,\beta,\gamma$ as follows: $\displaystyle \alpha = \frac\pi3\cdot x$, $\displaystyle \beta = \frac\pi3\cdot y(1-x)$, and $\displaystyle \gamma = \frac\pi3\cdot (1-y)(1-x)$. It turns out that the maximum is in fact a supremum, taken for the case when $\displaystyle \alpha\nearrow\frac \pi 3$, and then $\displaystyle \beta,\gamma\searrow 0$, so that $\displaystyle \alpha+\beta+\gamma=\frac \pi 3$. 
In terms of the angles of $\Delta ABC$ we have $\hat A\nearrow\pi$, $\hat B,\hat C\searrow 0$. (But $\hat B$, $\hat C$ may still need to be correlated.) To compute this supremum, i need a notation that i can better type. So i will switch from $\alpha,\beta,\gamma$ to $x,y,z$. Then $\hat B=3y$, $\hat C=3z$, and corresponding sine values will be approximated by hand waving with $3y+O(y^3)$ and $3z+O(z^3)$, so that for $\hat A=\pi-(3y+3z)$ we have also a sine value in the shape $3y+3z+O(\dots)$. Computations below will omit terms of total monomial degree at least two. So $$ \begin{aligned} s &= \frac {2\sin B\cos\frac C3 - 2\sin C\cos \frac B3} {\sin A + 2\sin B\cos\frac C3 + 2\sin C\cos \frac B3} \\ &\qquad +\frac {2\sin C\cos\frac A3 - 2\sin A\cos \frac C3} {\sin B + 2\sin C\cos\frac A3 + 2\sin A\cos \frac C3} \\ &\qquad\qquad +\frac {2\sin A\cos\frac B3 - 2\sin B\cos \frac A3} {\sin C + 2\sin A\cos\frac B3 + 2\sin B\cos \frac A3} \\ &\sim \frac{2(3y-3z)}{3y+3z + 2(3y+3z)}\\ &\qquad +\frac {2\left(3z\cdot\frac 12 -(3y+3z)\right)}{3y + 2\left(3z\cdot\frac 12 +(3y+3z)\right)}\\ &\qquad\qquad +\frac {2\left((3y+3z) - 3y\cdot\frac 12\right)}{3z + 2\left((3y+3z) + 3y\cdot\frac 12\right)} \\ &= \frac {2y-2z}{3y+3z} - \frac {2y+z}{3y+3z} + \frac {y+2z}{3y+3z} \\ &=\frac{y-z}{3y+3z}\ . \end{aligned} $$ So we expect $\displaystyle\color{blue}{\frac 13}$ as a supremum value.
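The final angle formula for $s$ can be explored numerically, and the behaviour matches the analysis: $s$ vanishes for isosceles (in particular equilateral) triangles, and flat triangles with $\hat B\gg\hat C$ push $s$ toward $\frac13$. A sketch of the cyclic sum as displayed above:

```python
import math

def s(A, B, C):
    """Cyclic sum for s in terms of the angles (A + B + C = pi)."""
    def term(A, B, C):
        num = 2 * math.sin(B) * math.cos(C / 3) - 2 * math.sin(C) * math.cos(B / 3)
        den = (math.sin(A) + 2 * math.sin(B) * math.cos(C / 3)
               + 2 * math.sin(C) * math.cos(B / 3))
        return num / den
    return term(A, B, C) + term(B, C, A) + term(C, A, B)

eq = math.pi / 3
print(abs(s(eq, eq, eq)) < 1e-14)          # True: equilateral gives 0

# a flat triangle with B much larger than C approaches the supremum 1/3
flat = s(math.pi - 1e-3, 0.99e-3, 0.01e-3)
print(0.30 < flat < 1 / 3)                 # True
```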
{ "language": "en", "url": "https://mathoverflow.net/questions/417175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Integral of a product between two normal distributions and a monomial The integral of the product of two normal distribution densities can be exactly solved, as shown here for example. I'm interested in compute the following integral (for a generic $n \in \mathbb{N}$): $$ I_n = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi} \sigma} e^{-\frac{x^2}{2 \sigma^2}} \frac{1}{\sqrt{2 \pi} \rho} e^{-\frac{(x-\mu)^2}{2 \rho^2}} x^n\mathrm{d}x $$ Is it possible to extend this result and solve $I_n$ for a generic $n \in \mathbb{N}$?
$\def\m{\mu} \def\p{\pi} \def\s{\sigma} \def\f{\varphi} \def\r{\rho} \def\mm{M} \def\ss{S}$Let \begin{align*} \f(x;\m,\s) &= \frac{1}{\sqrt{2\p}\s} e^{-(x-\m)^2/(2\s^2)} \end{align*} and \begin{align*} \f_m(x) &= \prod_{i=1}^m \f(x;\m_i,\s_i) \\ &= \frac{1}{(2\p)^{m/2}\prod_{i=1}^m \s_i} \exp\left(-\sum_{i=1}^m \frac{(x-\m_i)^2}{2\s_i^2}\right). \end{align*} By completing the square one finds \begin{align*} \f_m(x) &= A(m)\f(x;\mm,\ss), \end{align*} where \begin{align*} \frac{1}{\ss^2} &= \sum_{i=1}^m \frac{1}{\s_i^2} \\ \frac{\mm}{\ss^2} &= \sum_{i=1}^m \frac{\m_i}{\s_i^2} \\ A(m) &= \frac{\ss}{(2\p)^{(m-1)/2} \prod_{i=1}^m \s_i} \exp \left[ \frac12\left( \frac{\mm^2}{\ss^2} - \sum_{i=1}^m \frac{\m_i^2}{\s_i^2} \right) \right]. \end{align*} That is, such a product is itself proportional to a normal distribution. Thus, \begin{align*} \int_{-\infty}^\infty x^n \f_m(x) \,dx &= A(m) \int_{-\infty}^\infty x^n \f(x;\mm,\ss)\,dx \\ &= A(m) \int_{-\infty}^\infty ((x-\mm)+\mm)^n \f(x;\mm,\ss)\,dx \\ &= A(m) \int_{-\infty}^\infty \sum_{k=0}^n \binom{n}{k} (x-\mm)^k \mm^{n-k} \f(x;\mm,\ss)\,dx \\ &= A(m) \sum_{k=0}^n \binom{n}{k} \mm^{n-k} \int_{-\infty}^\infty (x-\mm)^k \f(x;\mm,\ss)\,dx \\ &= A(m)\sum_{k=0\atop k{\textrm{ even}}}^n \binom{n}{k} \mm^{n-k} \ss^k (k-1)!!, \end{align*} and so \begin{align*} \int_{-\infty}^\infty x^n \f_m(x) \,dx &= \frac{\ss \mm^n}{\prod_{i=1}^m \s_i} \frac{1}{(2\p)^{(m-1)/2}} \exp \left[ \frac 12\left( \frac{\mm^2}{\ss^2} - \sum_{i=1}^m \frac{\m_i^2}{\s_i^2} \right) \right] \\ & \quad \times \sum_{k=0}^{\lfloor n/2\rfloor} \binom{n}{2k} (2k-1)!! \left(\frac{\ss}{\mm}\right)^{2k} \end{align*} For the original problem, \begin{align*} m &= 2 \\ (\m_1,\s_1) &= (0,\s) \\ (\m_2,\s_2) &= (\m,\r) \end{align*} so \begin{align*} \mm &= \frac{\m\s^2}{\s^2+\r^2} \\ \ss^2 &= \frac{\s^2\r^2}{\s^2+\r^2}. \end{align*} With a little work, we find a formula agreeing with that of @CarloBeenakker.
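For the original two-factor case ($m=2$), the final formula is easy to validate against brute-force quadrature; a sketch with arbitrary parameter values (it assumes $\mu\neq0$, so $M\neq0$):

```python
import math

def closed_form(n, mu, sigma, rho):
    S2 = sigma**2 * rho**2 / (sigma**2 + rho**2)   # S^2
    M = mu * sigma**2 / (sigma**2 + rho**2)
    def dfact(m):                                  # m!!, with (-1)!! = 1
        r = 1
        while m > 1:
            r, m = r * m, m - 2
        return r
    pref = (math.sqrt(S2) * M**n / (sigma * rho * math.sqrt(2 * math.pi))
            * math.exp(0.5 * (M**2 / S2 - mu**2 / rho**2)))
    return pref * sum(math.comb(n, 2 * k) * dfact(2 * k - 1) * (S2 / M**2) ** k
                      for k in range(n // 2 + 1))

def quadrature(n, mu, sigma, rho, lo=-20.0, hi=20.0, steps=100_000):
    h = (hi - lo) / steps
    c = 1.0 / (2 * math.pi * sigma * rho)
    tot = sum((lo + (i + 0.5) * h) ** n
              * math.exp(-(lo + (i + 0.5) * h) ** 2 / (2 * sigma**2))
              * math.exp(-(lo + (i + 0.5) * h - mu) ** 2 / (2 * rho**2))
              for i in range(steps))
    return c * h * tot

cf, q = closed_form(3, 1.0, 1.5, 0.7), quadrature(3, 1.0, 1.5, 0.7)
print(abs(cf - q) < 1e-5)  # True
```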
{ "language": "en", "url": "https://mathoverflow.net/questions/437171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Integration of the product of pdf & cdf of normal distribution Denote the pdf of normal distribution as $\phi(x)$ and cdf as $\Phi(x)$. Does anyone know how to calculate $\int \phi(x) \Phi(\frac{x -b}{a}) dx$? Notice that when $a = 1$ and $b = 0$ the answer is $1/2\Phi(x)^2$. Thank you!
We have $\phi(x)=\frac 1{\sqrt{2\pi}}\exp\left(-\frac{x^2}2\right)$ and $\Phi(x)=\int_{-\infty}^x\phi(t)dt$. We try to compute $$ I(a,b):=\int\phi(x)\Phi\left(\frac{x-b}a\right)dx.$$ Using the dominated convergence theorem, we are allowed to take the derivative with respect to $b$ inside the integral. We have $$\partial_bI(a,b)=\int\phi(x)\left(-\frac 1a\right)\phi\left(\frac{x-b}a\right)dx$$ and \begin{align} 2\pi\phi(x)\phi\left(\frac{x-b}a\right)&=\exp\left(-\frac 12\left(x^2+\frac{x^2}{a^2}-2\frac{bx}{a^2}+\frac{b^2}{a^2}\right)\right)\\\ &=\exp\left(-\frac 12\frac{a^2+1}{a^2}\left(x^2-2\frac b{a^2+1}x+\frac{b^2}{a^2+1}\right)\right)\\\ &=\exp\left(-\frac 12\frac{a^2+1}{a^2}\left(x-\frac b{a^2+1}\right)^2-\frac 12\frac{a^2+1}{a^2}\left(\frac{b^2}{a^2+1}-\frac{b^2}{(a^2+1)^2}\right)\right)\\\ &=\exp\left(-\frac 12\frac{a^2+1}{a^2}\left(x-\frac b{a^2+1}\right)^2\right)\exp\left(-\frac{b^2}{2a^2}\frac{a^2+1-1}{a^2+1}\right)\\\ &=\exp\left(-\frac 12\frac{a^2+1}{a^2}\left(x-\frac b{a^2+1}\right)^2\right)\exp\left(-\frac{b^2}{2(a^2+1)}\right). \end{align} Integrating with respect to $x$, we get that $$\partial_b I(a,b)=-\frac 1{\sqrt{a^2+1}}\phi\left(\frac b{\sqrt{a^2+1}}\right).$$ Since $\lim_{b\to +\infty}I(a,b)=0$, we have \begin{align}I(a,b)&=\int_b^{+\infty}\frac 1{\sqrt{a^2+1}}\phi\left(\frac s{\sqrt{a^2+1}}\right)ds\\\ &=\int_{b/\sqrt{a^2+1}}^{+\infty}\phi(t)dt = 1 - \Phi(b/\sqrt{a^2+1}). \end{align} This can be expressed with the traditional erf function.
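The result $I(a,b)=1-\Phi\big(b/\sqrt{a^2+1}\big)$ is straightforward to confirm numerically, using $\Phi(t)=\frac12\big(1+\operatorname{erf}(t/\sqrt2)\big)$; a sketch:

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def I_num(a, b, lo=-12.0, hi=12.0, steps=40_000):
    h = (hi - lo) / steps
    return h * sum(phi(lo + (i + 0.5) * h) * Phi((lo + (i + 0.5) * h - b) / a)
                   for i in range(steps))

a, b = 2.0, 0.7
closed = 1 - Phi(b / math.sqrt(a * a + 1))
print(abs(I_num(a, b) - closed) < 1e-6)   # True
print(abs(I_num(1.0, 0.0) - 0.5) < 1e-6)  # True: the a = 1, b = 0 case gives 1/2
```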
{ "language": "en", "url": "https://mathoverflow.net/questions/101469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Logarithm of the hypergeometric function For $F(x)={}_2F_1 (a,b;c;x)$, with $c=a+b$, $a>0$, $b>0$, it has been proved in [1] that $\log F(x)$ is convex on $(0,1)$. I numerically checked that with a variety of $a,\ b$ values, $\log F(x)$ is not only convex, but also has a Taylor series in x consisting of strictly positive coefficients. Can this be proved? [1] Generalized convexity and inequalities, Anderson, Vamanamurthy, Vuorinen, Journal of Mathematical Analysis and Applications, Volume 335, Issue 2, http://www.sciencedirect.com/science/article/pii/S0022247X07001825#
Here's a sketch of a proof of a stronger statement: the coefficients of the Taylor series for $\log{}_2F_1(a,b;a+b+c;x)$ are rational functions of $a$, $b$, and $c$ with positive coefficients. To see this we first note that $$\begin{aligned} \frac{d\ }{dx} \log {}_2F_1(a,b;a+b+c;x) &= \frac{\displaystyle \frac{d\ }{dx}\,{}_2F_1(a,b;a+b+c;x)}{{}_2F_1(a,b;a+b+c;x)}\\[3pt] &=\frac{ab}{a+b+c}\frac{{}_2F_1(a+1,b+1;a+b+c+1;x)}{{}_2F_1(a,b;a+b+c;x)}. \end{aligned} $$ Then $$ \begin{gathered} \frac{{}_2F_1(a+1,b+1;a+b+c+1;x)}{{}_2F_1(a,b;a+b+c;x)} = \frac{{}_2F_1(a+1,b+1;a+b+c+1;x)}{{}_2F_1(a,b+1;a+b+c;x)} \\ \hfill\times \frac{{}_2F_1(a,b+1;a+b+c;x)}{{}_2F_1(a,b;a+b+c;x)}.\quad \end{gathered} $$ We have continued fractions for the two quotients on the right. Let $S(x; a_1, a_2, a_3, \dots)$ denote the continued fraction $$\cfrac{1}{1-\cfrac{a_1x} {1-\cfrac{a_2x} {1-\cfrac{a_3x} {1-\ddots} }}} $$ Then $$\begin{gathered}\frac{{}_2F_1(a+1,b+1;a+b+c+1;x)}{{}_2F_1(a,b+1;a+b+c;x)} = S \left( x;{\frac { \left( b+1 \right) \left( b+c \right) }{ \left( a +b+c+1 \right) \left( a+b+c \right) }}, \right.\hfill\\ \left. {\frac { \left( a+1 \right) \left( a+c \right) }{ \left( a+b+c+2 \right) \left( a+b+c+1 \right) }}, {\frac { \left( b+2 \right) \left( b+c+1 \right) }{ \left( a+b+c+3 \right) \left( a+b+c+2 \right) }}, \right.\\ \hfill \left. {\frac { \left( a+2 \right) \left( a+c+1 \right) }{ \left( a+b+c+4 \right) \left( a+b+c+3 \right) }},\dots \right) \end{gathered} $$ and $$\begin{gathered} \frac{{}_2F_1(a,b+1;a+b+c;x)}{{}_2F_1(a,b;a+b+c;x)} =S \left( x,{\frac {a}{a+b+c}}, {\frac { \left( b+1 \right) \left( b+c \right) }{ \left( a+b+c+1 \right) \left( a+b+c \right) }}, \right.\hfill\\ \left. {\frac { \left( a+1 \right) \left( a+c \right) }{ \left( a+b+c+2 \right) \left( a+b+c+1 \right) }}, {\frac { \left( b+2 \right) \left( b+c+1 \right) }{ \left( a+b+c+3 \right) \left( a+b+c+2 \right) }}, \right.\\ \hfill \left. 
{\frac { \left( a+2 \right) \left( a+c+1 \right) }{ \left( a+b+c+4 \right) \left( a+b+c+3 \right) }}, \dots\right) \end{gathered} $$ The first of these continued fractions is Gauss's well-known continued fraction, and the second can easily be derived from the first. It follows from these formulas that the coefficients of the Taylor series for $\log{}_2F_1(a,b;a+b+c;x)$ are rational functions of $a$, $b$, and $c$ with positive coefficients.
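Independently of the continued-fraction argument, the positivity claim can be probed with exact rational arithmetic (an added check, not from the original answer; the parameter values are arbitrary, with $c\ge 0$ in the notation ${}_2F_1(a,b;a+b+c;x)$). The logarithm's coefficients come from the standard recurrence $n\,l_n = n\,f_n - \sum_{k=1}^{n-1} k\,l_k f_{n-k}$:

```python
from fractions import Fraction

def hyp2f1_coeffs(a, b, c, nmax):
    # exact Taylor coefficients of 2F1(a, b; c; x) up to x^nmax
    t, out = Fraction(1), []
    for k in range(nmax + 1):
        out.append(t)
        t = t * (a + k) * (b + k) / ((c + k) * (k + 1))
    return out

def log_series(f):
    # coefficients of log(sum f_k x^k), assuming f[0] == 1
    l = [Fraction(0)] * len(f)
    for n in range(1, len(f)):
        l[n] = (n * f[n] - sum(k * l[k] * f[n - k] for k in range(1, n))) / n
    return l

a, b = Fraction(1, 2), Fraction(1, 3)
for extra in (Fraction(0), Fraction(2, 5)):          # c = a + b + extra
    l = log_series(hyp2f1_coeffs(a, b, a + b + extra, 12))
    assert all(coef > 0 for coef in l[1:])
print("Taylor coefficients of log 2F1 positive through x^12")
```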
{ "language": "en", "url": "https://mathoverflow.net/questions/143350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
How many solutions to $2^a + 3^b = 2^c + 3^d$? A few weeks ago, I asked on math.stackexchange.com how many quadruples of non-negative integers $(a,b,c,d)$ satisfy the following equation: $$2^a + 3^b = 2^c + 3^d \quad (a \neq c)$$ I found 5 quadruples: $5 = 2^2 + 3^0 = 2^1 + 3^1$, $11 = 2^3 + 3^1 = 2^1 + 3^2$, $17 = 2^4 + 3^0 = 2^3 + 3^2$, $35 = 2^5 + 3^1 = 2^3 + 3^3$, $259 = 2^8 + 3^1 = 2^4 + 3^5$ I didn't get an answer, but only a link to an OEIS sequence (no more quadruples below $10^{4000}$), so I'm asking the same question here. Is there a way to prove that they are [not] infinite? And, more generally, are there known tuples for which the following equation: $$p_{i_1}^{a_1} + p_{i_2}^{a_2} + ... + p_{i_n}^{a_n}=p_{i_1}^{b_1} + p_{i_2}^{b_2} + ... + p_{i_n}^{b_n}$$ holds for infinitely many (or holds only for finitely many) $a_i,b_i$?
The answer is yes, there are finitely many for positive $a, b, c, d$, and you've found all of them. See Theorem 7 of R. Scott, R. Styer, On the generalized Pillai equation $\pm a^x \pm b^y = c$, Journal of Number Theory, 118 (2006), 236–265. I quote: Theorem 7. Let $a$ be prime, $a>b$, $b = 2$ or $b = 3$, $a$ not a large base-$b$ Wieferich prime, $1 \le x_1 \le x_2$, $1 \le y_1 \le y_2$, and $(x_1, y_1) \neq (x_2, y_2)$. If there is a solution $(a, x_1, y_1, x_2, y_2)$ to the equation $$\left|a^{x_1} - b^{y_1}\right| = \left|a^{x_2} - b^{y_2}\right|$$ then it is one of $$\begin{align*} 3-2&=3^2-2^3,\\ 2^3-3&=2^5-3^3,\\ 2^4-3&=2^8-3^5,\\ \cdots&=\cdots \end{align*}$$ where the omitted cases are irrelevant. The three listed equations are your decompositions of $\fbox{11}$, $\fbox{35}$, and $\fbox{259}$. Now we show that the answer is yes for non-negative $a, b, c, d$, and you've also found all of them. The only solutions to your equation not covered by the above result correspond to solutions of $$1 - 2^x + 2^y = 3^z, $$ where we may assume $0 < x < y$. If $z$ is odd, then the RHS is 3 modulo 4 and so we must have $x = 1$. Hence $3^z - 2^y = 3 - 2$ and so by the above we have the only solution as $(x, y, z) = (1,2,1)$. This is your decomposition of $\fbox{5}$. If $z$ is even, then the RHS is a perfect square. Working modulo 3 we see that we must have $x$ odd and $y$ even. Write $z' = z/2, y' = y/2, x' = x-1$, and note that $x' > 0$ is even. If $x' < y'$ then we have $(2^{y'}-1)^2 = 1 - 2^{y'} + 2^y < 1 - 2^x + 2^y < 2^y = (2^{y'})^2$, and so the LHS cannot be a perfect square. Hence $x' \ge y'$. If we have $x' = y'$, then we have $3^{z'} = 2^{y'} - 1$; this gives the solution $(x,y,z) = (3,4,2)$, corresponding to your decomposition of $\fbox{17}$. We claim that there are no other solutions. Any other solution must have $x' > y'$. 
We rearrange and write $$\left(2^{x'} - 1 + 3^{z'}\right)\left(2^{x'} - 1 - 3^{z'}\right) = 2^{2x'} - 2^{2y'}.$$ Note that we must have $2^{x'}-1 > 3^{z'}$ so that the signs on each side match. The RHS is divisible by $2^{2y'}$. As $x < y$ we have $2y' \ge x' + 2$. We have $\gcd(2^{x'} - 1 + 3^{z'}, 2^{x'} - 1 - 3^{z'}) = \gcd(2^{x'} - 1 - 3^{z'},2\cdot3^{z'})$, which is divisible by 2 but not by 4. Hence one of $2^{x'} - 1 + 3^{z'}$ and $2^{x'} - 1 - 3^{z'}$ is divisible by $2^{2y' - 1} \ge 2^{x' + 1}$. But $$2^{x' + 1} = 2^{x'} + 2^{x'} > 2^{x'} - 1 + 3^{z'} > 2^{x'} - 1 - 3^{z'},$$ and so neither can be divisible by $2^{2y' - 1}$. Hence there are no more solutions.
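A brute-force search recovers exactly the five quadruples from the question (an illustration, not a proof; the exponent bound 100 is arbitrary, and the OEIS entry cited in the question already rules out further collisions below $10^{4000}$):

```python
def quadruples(max_exp):
    # collisions 2^a + 3^b = 2^c + 3^d with a < c and exponents below max_exp
    vals = {}
    for a in range(max_exp):
        for b in range(max_exp):
            vals.setdefault(2 ** a + 3 ** b, []).append((a, b))
    out = []
    for value, reps in vals.items():
        for (a, b) in reps:
            for (c, d) in reps:
                if a < c:
                    out.append((value, a, b, c, d))
    return sorted(out)

sols = quadruples(100)
assert [s[0] for s in sols] == [5, 11, 17, 35, 259]
print(sols)
```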
{ "language": "en", "url": "https://mathoverflow.net/questions/164624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 4 }
How do we show this matrix has full rank? I met with the following difficulty reading the paper Li, Rong Xiu "The properties of a matrix order column" (1988): Define the matrix $A=(a_{jk})_{n\times n}$, where $$a_{jk}=\begin{cases} j+k\cdot i&j<k\\ k+j\cdot i&j>k\\ 2(j+k\cdot i)& j=k \end{cases}$$ and $i^2=-1$. The author says it is easy to show that $rank(A)=n$. I have proved for $n\le 5$, but I couldn't prove for general $n$. Following is an attempt to solve this problem: let $$A=P+iQ$$ where $$P=\begin{bmatrix} 2&1&1&\cdots&1\\ 1&4&2&\cdots& 2\\ 1&2&6&\cdots& 3\\ \cdots&\cdots&\cdots&\cdots&\cdots\\ 1&2&3&\cdots& 2n \end{bmatrix},Q=\begin{bmatrix} 2&2&3&\cdots& n\\ 2&4&3&\cdots &n\\ 3&3&6&\cdots& n\\ \cdots&\cdots&\cdots&\cdots&\cdots\\ n&n&n&\cdots& 2n\end{bmatrix}$$ and define $$J=\begin{bmatrix} 1&0&\cdots &0\\ -1&1&\cdots& 0\\ \cdots&\cdots&\cdots&\cdots\\ 0&\cdots&-1&1 \end{bmatrix}$$ then we have $$JPJ^T=J^TQJ=\begin{bmatrix} 2&-2&0&0&\cdots&0\\ -2&4&-3&\ddots&0&0\\ 0&-3&6&-4\ddots&0\\ \cdots&\ddots&\ddots&\ddots&\ddots&\cdots\\ 0&0&\cdots&-(n-2)&2(n-1)&-(n-1)\\ 0&0&0&\cdots&-(n-1)&2n \end{bmatrix}$$ and $$A^HA=(P-iQ)(P+iQ)=P^2+Q^2+i(PQ-QP)=\binom{P}{Q}^T\cdot\begin{bmatrix} I& iI\\ -iI & I \end{bmatrix} \binom{P}{Q}$$
OK, let me try again, maybe I'll get it right this time. I'll show that $P$ is positive definite. This will imply the claim because if $(P+iQ)(x+iy)=0$ with $x,y\in\mathbb R^n$, then $Px=Qy$, $Py=-Qx$, and by taking scalar products with $x$ and $y$, respectively, we see that $\langle x, Px \rangle = -\langle y, Py\rangle$, which implies that $x=y=0$. Here I use that $Q$ is symmetric. Let me now show that $P>0$. Following math110's suggestion, we can simplify my original calculation as follows: Let $ B=B_n = P -\textrm{diag}(1,2,\ldots , n)$. For example, for $n=5$, this is the matrix $$ B_ 5= \begin{pmatrix} 1 & 1 & 1 & 1 & 1\\ 1 & 2 & 2 & 2 & 2\\ 1 & 2 & 3 & 3 & 3\\ 1 & 2 & 3 & 4 & 4\\ 1 & 2 & 3 & 4 & 5 \end{pmatrix} . $$ I can now (in general) subtract the $(n-1)$st row from the last row, then the $(n-2)$nd row from the $(n-1)$st row etc. This confirms that $\det B_n=1$. Moreover, the upper left $k\times k$ submatrices of $B_n$ are of the same type; they equal $B_k$. This shows that $B>0$, by Sylvester's criterion, and thus $P>0$ as well.
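Both ingredients of the proof, that every leading minor of $B_n$ equals 1 and that $P$ is positive definite, are easy to confirm in exact rational arithmetic for small $n$ (an added check, not part of the answer):

```python
from fractions import Fraction

def det(mat):
    # fraction-based Gaussian elimination; the matrices below have
    # nonzero leading minors, so no pivoting is needed
    m = [[Fraction(x) for x in row] for row in mat]
    d = Fraction(1)
    for i in range(len(m)):
        d *= m[i][i]
        for r in range(i + 1, len(m)):
            f = m[r][i] / m[i][i]
            for c in range(i, len(m)):
                m[r][c] -= f * m[i][c]
    return d

def pos_def(mat):
    # Sylvester's criterion: all leading principal minors positive
    return all(det([row[:k] for row in mat[:k]]) > 0 for k in range(1, len(mat) + 1))

for n in range(1, 9):
    B = [[min(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
    assert det(B) == 1
    P = [[min(i, j) + (i if i == j else 0) for j in range(1, n + 1)] for i in range(1, n + 1)]
    assert pos_def(P)
print("det(B_n) = 1 and P > 0 for n <= 8")
```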
{ "language": "en", "url": "https://mathoverflow.net/questions/191796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
Angle subtended by the shortest segment that bisects the area of a convex polygon Let $C$ be a convex polygon in the plane and let $s$ be the shortest line segment (I believe this is called a "chord") that divides the area of $C$ in half. What is the smallest angle that $s$ could make where it touches the boundary of $C$? This picture shows an example, I I called the angle $\theta$ (and the area of the region is $A$): Numerical simulations suggest that $\theta\geq\pi/3$ but I haven't been able to prove this.
I'll prove that for any triangle, $\theta \geqq \pi/3$. Let $X'Y'=z$ be the line segment that bisects the area $A$ of triangle $XYZ$, and let $\angle Z=\alpha$, $Y'Z=x$ and $X'Z=y$. By Heron's formula, $$\frac{A}{2} =\frac{1}{4}\sqrt{( x+y+z)( x+y-z)( x-y+z)( -x+y+z)} =\frac{1}{4}\sqrt{( 2xy)^{2} -\left( x^{2} +y^{2} -z^{2}\right)^{2}},$$ so we get $$z=\sqrt{x^{2} +y^{2} -2\sqrt{( xy)^{2} -A^{2}}} \geqq \sqrt{2xy-2\sqrt{( xy)^{2} -A^{2}}}.$$ On the other hand, $\frac{A}{2} =\frac{1}{2} xy\sin \alpha $, so $xy=\frac{A}{\sin \alpha }$, and we get $$z\geqq \sqrt{2A\left(\frac{1}{\sin \alpha } -\frac{1}{\tan \alpha }\right)} =\sqrt{2A\tan\frac{\alpha }{2}}.$$ When $x=y$ and $\alpha$ is the smallest angle of triangle $XYZ$, $z$ attains this minimum value, and then $\theta =\frac{\pi -\alpha }{2}$, so $\theta \geqq \pi /3$.
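A small numeric scan is consistent with this (an added check; $A=1$ and $\alpha=0.9$ are arbitrary sample values): with the constraint $\frac12 xy\sin\alpha=\frac A2$ fixed, the chord length from the law of cosines, $z^2=x^2+y^2-2xy\cos\alpha$, is minimized at $x=y$, where it equals $\sqrt{2A\tan(\alpha/2)}$:

```python
import math

A, alpha = 1.0, 0.9                 # bisected area and apex angle (arbitrary)
xy = A / math.sin(alpha)            # forced by (1/2) * x * y * sin(alpha) = A / 2

zs = [math.sqrt(x * x + (xy / x) ** 2 - 2 * xy * math.cos(alpha))
      for x in (math.sqrt(xy) * (0.5 + 0.001 * k) for k in range(1001))]
best = min(zs)                      # the scan includes x = sqrt(xy), i.e. x = y
assert abs(best - math.sqrt(2 * A * math.tan(alpha / 2))) < 1e-9
print(f"min z = {best:.6f}, attained at x = y")
```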
{ "language": "en", "url": "https://mathoverflow.net/questions/201181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Curious inequality Set $$ g(x)=\sum_{k=0}^{\infty}\frac{1}{x^{2k+1}+1} \quad \text{for} \quad x>1. $$ Is it true that $$ \frac{x^{2}+1}{x(x^{2}-1)}+\frac{g'(x)}{g(x)}>0 \quad \text{for}\quad x>1? $$ The answer seems to be positive. I spent several hours in proving this statement but I did not come up with anything reasonable. Maybe somebody else has (or will have) any bright idea? Motivation? Nothing important, I was just playing around this question: A problem of potential theory arising in biology
The inequality is equivalent to $$S := (x^2+1)g(x) + x(x^2-1)g'(x) > 0.$$ The left hand side here can be expanded to $$S = \sum_{k\geq 0} \frac{(x^2+1)(x^{2k+1}+1) - (2k+1)x^{2k+1}(x^2-1)}{(x^{2k+1}+1)^2} $$ $$= \sum_{k\geq 0} \frac{(x^2+1) - (2k+1)(x^2-1)}{x^{2k+1}+1} + \sum_{k\geq 0}\frac{(2k+1)(x^2-1)}{(x^{2k+1}+1)^2}.$$ Now, the first sum here simplifies to $$\sum_{k\geq 0} \frac{(x^2+1) - (2k+1)(x^2-1)}{x^{2k+1}+1} = \sum_{k\geq 0} \frac{(2k+2)-2k x^2}{x^{2k+1}+1}$$ $$=\sum_{k\geq 1} \left( \frac{2k}{x^{2k-1}+1} - \frac{2k x^2}{x^{2k+1}+1}\right)=(1-x^2)\sum_{k\geq 1} \frac{2k}{(x^{2k-1}+1)(x^{2k+1}+1)}.$$ Hence $$\frac{S}{x^2-1} = \sum_{k\geq 0}\frac{2k+1}{(x^{2k+1}+1)^2} - \sum_{k\geq 1} \frac{2k}{(x^{2k-1}+1)(x^{2k+1}+1)}$$ $$\geq \sum_{k\geq 0}\frac{2k+1}{(x^{2k+1}+1)^2} - \sum_{k\geq 1} \frac{2k}{(x^{2k}+1)^2} = \sum_{k\geq 1} \frac{(-1)^{k-1}k}{(x^{k}+1)^2}.$$ Here we used AM-GM inequality $x^{2k-1}+x^{2k+1}\geq 2x^{2k}$ and thus $$(x^{2k-1}+1)(x^{2k+1}+1)=x^{4k}+x^{2k+1}+x^{2k-1}+1\geq (x^{2k}+1)^2.$$ So it remains to prove that for $x>1$, $$\sum_{k\geq 1} \frac{(-1)^{k-1}k}{(x^{k}+1)^2} > 0.\qquad(\star)$$ UPDATE #1. Substituting $x=e^{2t}$, we have $$\sum_{k\geq 1} \frac{(-1)^{k-1}k}{(x^{k}+1)^2} = \sum_{k\geq 1} \frac{(-1)^{k-1}ke^{-2tk}}{4 \cosh(tk)^2} = \frac{1}{4}\sum_{k\geq 1} (-1)^{k-1}ke^{-2tk}(1-\tanh(tk)^2)$$ $$ = \frac{e^{2t}}{4(e^{2t}+1)^2} - \frac{1}{4}\sum_{k\geq 1} (-1)^{k-1}ke^{-2tk} \tanh(tk)^2.$$ UPDATE #2. The proof of $(\star)$ is given by Iosif Pinelis.
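Both the original inequality and the reduced sum $(\star)$ can be probed numerically (an added sanity check; the sample points and truncation threshold are arbitrary, and the series are summed until the terms underflow):

```python
def S(x, cap=1e140):
    # (x^2 + 1) g(x) + x (x^2 - 1) g'(x), summed termwise
    g = gp = 0.0
    k = 0
    while True:
        t = x ** (2 * k + 1)
        if t > cap:
            break
        g += 1.0 / (t + 1)
        gp -= (2 * k + 1) * x ** (2 * k) / (t + 1) ** 2
        k += 1
    return (x * x + 1) * g + x * (x * x - 1) * gp

def star(x, cap=1e140):
    # sum_{k>=1} (-1)^(k-1) k / (x^k + 1)^2
    s, k, sign = 0.0, 1, 1.0
    while True:
        t = x ** k
        if t > cap:
            break
        s += sign * k / (t + 1.0) ** 2
        sign, k = -sign, k + 1
    return s

for x in (1.1, 1.5, 3.0, 10.0):
    assert S(x) > 0 and star(x) > 0
print("S(x) > 0 and the reduced sum (star) > 0 at all sampled points")
```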
{ "language": "en", "url": "https://mathoverflow.net/questions/217711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 3, "answer_id": 2 }
Are the following identities well known? $$ x \cdot y = \frac{1}{2 \cdot 2 !} \left( (x + y)^2 - (x - y)^2 \right) $$ $$ \begin{eqnarray} x \cdot y \cdot z &=& \frac{1}{2^2 \cdot 3 !} ((x + y + z)^3 - (x + y - z)^3 \nonumber \\ &-& (x - y + z)^3 + (x - y - z)^3 ), \end{eqnarray} $$ $$ \begin{eqnarray} x \cdot y \cdot z \cdot w &=& \frac{1}{2^3 \cdot 4 !} ( (x + y + z + w)^4 \nonumber \\ &-& (x + y + z - w)^4 - (x + y - z + w)^4 \nonumber \\ &+& (x + y - z - w)^4 - (x - y + z + w)^4 \nonumber \\ &+& (x - y + z - w)^4 + (x - y - z + w)^4 \nonumber \\ &-& (x - y - z - w)^4 ). \end{eqnarray} $$ The identity that rewrites a product of $n$ variables $( n \ge 2$, $n \in \boldsymbol{\mathbb{Z}_+})$ as additions of $n$ th power functions is as given below: $$ \begin{eqnarray} & &x_0 \cdot x_1 \cdots x_{n-1} = \frac{1}{2^{n - 1} \cdot n !} \cdot \sum_{j = 0}^{2^{ n - 1} -1} ( - 1 )^{\sum_{m = 1}^{n - 1} \sigma_m(j)} \times ( x_0 + (-1)^{\sigma_1(j)} x_1 + \cdots + (- 1)^{\sigma_{n - 1}(j)} x_{n - 1})^n, \\ & &\sigma_m(j) = r(\left\lfloor \frac{j}{2^{m - 1}}\right\rfloor, 2), m (\ge 1), j (\ge 0) \in \mathbb{Z}_+, \; \left\lfloor x \right\rfloor = \max \{ n \in \mathbb{Z}_+ ; n \le x, x \in \mathbb{R} \} \end{eqnarray} $$ where $r(\alpha, \beta)$, $\alpha, \beta (\ge 1) \in \mathbb{Z}_+$ means the remainder of the division of $\alpha$ by $\beta$ such that $r(\alpha, \beta)$ $=$ $\alpha - \beta \left\lfloor \frac{\alpha}{\beta}\right\rfloor$
Although not exactly the same due to $2^{n-1}$ instead of $2^n$ terms, the OP's formula seems to be essentially the well-known polarization formula for homogeneous polynomials, which is stated as follows: Any polynomial $f$, homogeneous of degree $n$, can be written as $f(x)=H(x,\ldots,x)$ for a specific multilinear form $H$. One has the following polarization formula for $H$ (see also this MO post): \begin{equation*} H(x_1,\ldots,x_n) = \frac{1}{2^n n!}\sum_{s \in \{\pm 1\}^n}s_1\ldots s_n f\Bigl(\sum\nolimits_{j=1}^n s_jx_j\Bigr) \end{equation*} In your case, $f=x^n$, so $H(x_1,\ldots,x_n) = x_1\cdots x_n$ (please note off by 1 indexing).
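For small $n$ the polarization formula is easy to verify exactly (an added check; the test tuples are arbitrary rationals):

```python
import math
from fractions import Fraction
from itertools import product

def polarize(xs):
    # H(x_1,...,x_n) = (1/(2^n n!)) * sum over signs s of s_1...s_n (sum s_j x_j)^n
    n = len(xs)
    total = Fraction(0)
    for signs in product((1, -1), repeat=n):
        parity = 1
        for s in signs:
            parity *= s
        total += parity * sum(s * x for s, x in zip(signs, xs)) ** n
    return total / (2 ** n * math.factorial(n))

for xs in [(2, 3), (1, 2, 3), (Fraction(1, 2), 3, -4, 5)]:
    expected = Fraction(1)
    for x in xs:
        expected *= x
    assert polarize([Fraction(x) for x in xs]) == expected
print("polarization identity verified for n = 2, 3, 4")
```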
{ "language": "en", "url": "https://mathoverflow.net/questions/220447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 1, "answer_id": 0 }
Why is this not a perfect square? Let $x, y$ and $z$ be positive integers with $x<y$. It appears that the integer $$(y^2z^3-x)(y^2z^3+3x)$$ is never a perfect square. Why? A proof? I'm not sure if it is easy.
Rewrite your expression as $(y^2z^3+x)^2-4x^2$. This is clearly less than $(y^2z^3+x)^2$. On the other hand, it is larger than $(y^2z^3+x-2)^2=(y^2z^3+x)^2-4(y^2z^3+x)+4$, since $4(y^2z^3+x)-4\geq 4y^2+4x-4\geq 4y^2>4x^2$. So if this expression is a square, it must be $(y^2z^3+x-1)^2=(y^2z^3+x)^2-2(y^2z^3+x)+1$, so $4x^2=2(y^2z^3+x)-1$. LHS is even, while RHS is odd, so this can't be.
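A brute-force scan over small values is consistent with the proof (an added illustration; the bounds are arbitrary):

```python
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

for x in range(1, 40):
    for y in range(x + 1, 40):        # the hypothesis x < y
        for z in range(1, 15):
            w = y * y * z ** 3
            assert not is_square((w - x) * (w + 3 * x))
print("no perfect squares in the search range")
```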
{ "language": "en", "url": "https://mathoverflow.net/questions/250163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Algorithm to decide whether two constructible numbers are equal? The set of constructible numbers https://en.wikipedia.org/wiki/Constructible_number is the smallest field extension of $\mathbb{Q}$ that is closed under square root and complex conjugation. I am looking for an algorithm that decides if two constructible numbers are equal (or, what is the same, if a constructible number is zero). As equality of rational numbers is trivial, this algorithm probably needs to reduce the complexity of the involved number in several steps. Does such an algorithm exist? If yes, what does it look like?
Although this can be done using the complicated algorithms for general algebraic numbers, there’s a much simpler recursive algorithm for constructible numbers that I implemented in the Haskell constructible library. A constructible field extension is either $\mathbb Q$ or $F{\left[\sqrt r\right]}$ for some simpler constructible field extension $F$ and $r ∈ F$ with $\sqrt r ∉ F$. We represent an element of $F{\left[\sqrt r\right]}$ as $a + b\sqrt r$ with $a, b ∈ F$. We have these obvious rules for $a, b, c, d ∈ F$: $$\begin{gather*} (a + b\sqrt r) + (c + d\sqrt r) = (a + c) + (b + d)\sqrt r, \\ -(a + b\sqrt r) = -a + (-b)\sqrt r, \\ (a + b\sqrt r)(c + d\sqrt r) = (ac + bdr) + (ad + bc)\sqrt r, \\ \frac{a + b\sqrt r}{c + d\sqrt r} = \frac{ac - bdr}{c^2 - d^2 r} + \frac{bc - ad}{c^2 - d^2 r}\sqrt r, \\ a + b\sqrt r = c + d\sqrt r \iff a = c ∧ b = d. \end{gather*}$$ To compute the square root of $a + b\sqrt r ∈ F{\left[\sqrt r\right]}$: * *If $\sqrt{a^2 - b^2 r} ∈ F$ and $\sqrt{\frac{a + \sqrt{a^2 - b^2 r}}{2}} ∈ F$, then $$\sqrt{a + b\sqrt r} = \sqrt{\frac{a + \sqrt{a^2 - b^2 r}}{2}} + \frac{b}{2\sqrt{\frac{a + \sqrt{a^2 - b^2 r}}{2}}}\sqrt r ∈ F{\left[\sqrt r\right]}.$$ *If $\sqrt{a^2 - b^2 r} ∈ F$ and $\sqrt{\frac{a + \sqrt{a^2 - b^2 r}}{2r}} ∈ F$, then $$\sqrt{a + b\sqrt r} = \frac{b}{2\sqrt{\frac{a + \sqrt{a^2 - b^2 r}}{2r}}} + \sqrt{\frac{a + \sqrt{a^2 - b^2 r}}{2r}}\sqrt r ∈ F{\left[\sqrt r\right]}.$$ *Otherwise, $\sqrt{a + b\sqrt r} ∉ F{\left[\sqrt r\right]}$, so we represent it as $$0 + 1\sqrt{a + b\sqrt r} ∈ F{\left[\sqrt r\right]}\left[\sqrt{a + b\sqrt r}\right].$$ In order to compute with numbers represented in different field extensions, we need to rewrite them in a common field extension first. To rewrite $a + b\sqrt r ∈ F{\left[\sqrt r\right]}$ and $c ∈ G$ in a common field extension, first rewrite $a, b, r ∈ F$ and $c ∈ G$ in a common field extension $H$. 
If $\sqrt r ∈ H$, then we have $a + b\sqrt r, c ∈ H$; otherwise we have $a + b\sqrt r, c + 0\sqrt r ∈ H{\left[\sqrt r\right]}$. I implemented the constructible real numbers and built the constructible complex numbers generically on top of those, to enable ordering relations and to avoid having to think too hard about branch cuts.
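For illustration (an addition, not the library's actual code: the class and method names are invented here, and only the bottom level of the recursive tower, base field $\mathbb Q$, is shown), the arithmetic rules above translate directly into exact code:

```python
from fractions import Fraction

class QSqrt:
    """a + b*sqrt(r) with a, b rational and r a fixed non-square rational.
    One level of the tower; the full algorithm nests this construction."""
    def __init__(self, a, b, r):
        self.a, self.b, self.r = Fraction(a), Fraction(b), Fraction(r)

    def __add__(self, o):
        return QSqrt(self.a + o.a, self.b + o.b, self.r)

    def __mul__(self, o):
        return QSqrt(self.a * o.a + self.b * o.b * self.r,
                     self.a * o.b + self.b * o.a, self.r)

    def __truediv__(self, o):
        # both operands are assumed to lie in the same Q[sqrt(r)]
        den = o.a * o.a - o.b * o.b * self.r
        return QSqrt((self.a * o.a - self.b * o.b * self.r) / den,
                     (self.b * o.a - self.a * o.b) / den, self.r)

    def __eq__(self, o):
        return self.a == o.a and self.b == o.b and self.r == o.r

# (1 + sqrt(2)) * (3 - sqrt(2)) = 1 + 2*sqrt(2)
x = QSqrt(1, 1, 2) * QSqrt(3, -1, 2)
assert x == QSqrt(1, 2, 2)
# division exactly undoes multiplication, and equality is coefficientwise
assert x / QSqrt(3, -1, 2) == QSqrt(1, 1, 2)
print("exact arithmetic in Q[sqrt(2)]")
```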
{ "language": "en", "url": "https://mathoverflow.net/questions/264827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Existence of generalized inverse-like operator Does there exist an operator, $\star$, such that for all full rank matrices $B$ and all $A$ of appropriate dimensions: $$ B(B^\intercal AB)^\star B^\intercal = A^\star, $$ and such that $A^\star=0$ if and only if $A=0$? Edit: Also, $\star : \operatorname{M}(m,n,\mathbb R) \to \operatorname{M}(n,m,\mathbb R)$. Edit: If possible, we would also like $\operatorname{rank}(A^\star)=\operatorname{rank}(A)$.
No. We suppose it is defined for $2\times 2$ matrices and we get a contradiction. Unless you drop the requirement on $\operatorname{rank} A^\star$ in which case $A^\star=0$ trivially works. Let $A=\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix}$ and let $B=\begin{pmatrix} a & b\\ c & d\end{pmatrix}$ with arbitrary $a,b,c,d\in\mathbb R$ satisfying $ad-bc\neq 0$. The key observation is that $B^T A B = \begin{pmatrix} a^2 & ab \\ ab & b^2\end{pmatrix}$ does not depend on $c,d$. Therefore $(B^T A B)^\star=B^{-1}A^\star (B^T)^{-1}$ should be independent of $c,d$ as well. Let $A^\star =\begin{pmatrix} x & y\\ z & w\end{pmatrix}$ for some $x,y,z,w\in\mathbb R$. Then we compute $$B^{-1}A^\star (B^T)^{-1}=\frac 1 {(\operatorname{det} B)^2}\begin{pmatrix} x d^2 - (y+z) cd + w c^2 & -x bd +(yad + zbc) -wac\\ -x bd +(ybc + zad) -wac & x b^2 - (y+z) ab + w a^2 \end{pmatrix}$$. In case $B=\begin{pmatrix} 1 & 0\\ t & 1\end{pmatrix}$ we get that $B^{-1}A^\star (B^T)^{-1}=\begin{pmatrix} x - (y+z)t + w t^2& \dots\\ \dots & \dots\end{pmatrix}$ is independent of $t$. So $y+z=0$ and $w=0$. Now, in case $B=\begin{pmatrix} 0 & -1\\ 1 & t\end{pmatrix}$ we get $B^{-1}A^\star (B^T)^{-1}=\begin{pmatrix} x t^2 + 0 & \dots\\ \dots & \dots\end{pmatrix}$ from which $x=0$. This is already a contradiction, because $A^\star=\begin{pmatrix} 0 & y \\ -y & 0\end{pmatrix}$ has either rank 0 or 2. But in case $B=\begin{pmatrix} 1 & 0\\ 0 & t\end{pmatrix}$ we get that $B^{-1}A^\star (B^T)^{-1}=\frac 1 {t^2}\begin{pmatrix} 0 & yt \\ -yt & 0\end{pmatrix}$ is independent of $t$, so $y=0$.
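The key observation, that $B^TAB$ depends only on the first row of $B$ when $A=\operatorname{diag}(1,0)$, takes a few lines to confirm (an added check with arbitrary sample entries):

```python
def bt_a_b(a, b, c, d):
    # B^T * diag(1, 0) * B for B = [[a, b], [c, d]]
    B = [[a, b], [c, d]]
    Adiag = [[1, 0], [0, 0]]
    AB = [[sum(Adiag[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    return [[sum(B[k][i] * AB[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# the second row (c, d) is irrelevant: only the first row of B survives
assert bt_a_b(2, 3, 5, 7) == bt_a_b(2, 3, -1, 4) == [[4, 6], [6, 9]]
print("B^T A B = [[a^2, ab], [ab, b^2]] regardless of c, d")
```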
{ "language": "en", "url": "https://mathoverflow.net/questions/270049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Matrix rescaling increases lowest eigenvalue? Consider the set $\mathbf{N}:=\left\{1,2,....,N \right\}$ and let $$\mathbf M:=\left\{ M_i; M_i \subset \mathbf N \text{ such that } \left\lvert M_i \right\rvert=2 \text{ or }\left\lvert M_i \right\rvert=1 \right\}$$ be the set of all subsets of $\mathbf{N}$ that are of cardinality $1$ or $2.$ The cardinality of the set $\mathbf M$ itself is $\binom{n}{1}+\binom{n}{2}=:K$ We can then study for $y \in (0,1)$ the $K \times K$ matrix $$A_N = \left( \frac{\left\lvert M_i \cap M_j \right\rvert}{\left\lvert M_i \right\rvert\left\lvert M_j \right\rvert}y^{-\left\lvert M_i \cap M_j \right\rvert} \right)_{i,j}$$ and $$B_N = \left( \left\lvert M_i \cap M_j \right\rvert y^{-\left\lvert M_i \cap M_j \right\rvert} \right)_{i,j}.$$ Question I conjecture that $\lambda_{\text{min}}(A_N)\le \lambda_{\text{min}}(B_N)$ for any $N$ and would like to know if one can actually show this? As a first step, I would like to know if one can show that $$\lambda_{\text{min}}(A_N)\le C\lambda_{\text{min}}(B_N)$$ for some $C$ independent of $N$? In fact, I am not claiming that $A_N \le B_N$ in the sense of matrices. But it seems as if the eigenvalues of $B_N$ are shifted up when compared with $A_N.$ Numerical evidence: For $N=2$ we can explicitly write down the matrices $$A_2 =\left( \begin{array}{ccc} \frac{1}{y} & 0 & \frac{1}{2 y} \\ 0 & \frac{1}{y} & \frac{1}{2 y} \\ \frac{1}{2 y} & \frac{1}{2 y} & \frac{1}{2 y^2} \\ \end{array} \right) \text{ and }B_2 = \left( \begin{array}{ccc} \frac{1}{y} & 0 & \frac{1}{y} \\ 0 & \frac{1}{y} & \frac{1}{y} \\ \frac{1}{y} & \frac{1}{y} & \frac{2}{y^2} \\ \end{array} \right)$$ We obtain for the lowest eigenvalue of $A_2$ (orange) and $B_2$(blue) as a function of $y$ For $N=3$ we get qualitatively the same picture, i.e. 
the lowest eigenvalue of $A_3$ remains below the lowest one of $B_3$: In this case: $$A_3=\left( \begin{array}{cccccc} \frac{1}{y} & 0 & 0 & \frac{1}{2 y} & 0 & \frac{1}{2 y} \\ 0 & \frac{1}{y} & 0 & \frac{1}{2 y} & \frac{1}{2 y} & 0 \\ 0 & 0 & \frac{1}{y} & 0 & \frac{1}{2 y} & \frac{1}{2 y} \\ \frac{1}{2 y} & \frac{1}{2 y} & 0 & \frac{1}{2 y^2} & \frac{1}{4 y} & \frac{1}{4 y} \\ 0 & \frac{1}{2 y} & \frac{1}{2 y} & \frac{1}{4 y} & \frac{1}{2 y^2} & \frac{1}{4 y} \\ \frac{1}{2 y} & 0 & \frac{1}{2 y} & \frac{1}{4 y} & \frac{1}{4 y} & \frac{1}{2 y^2} \\ \end{array} \right)\text{ and } B_3=\left( \begin{array}{cccccc} \frac{1}{y} & 0 & 0 & \frac{1}{y} & 0 & \frac{1}{y} \\ 0 & \frac{1}{y} & 0 & \frac{1}{y} & \frac{1}{y} & 0 \\ 0 & 0 & \frac{1}{y} & 0 & \frac{1}{y} & \frac{1}{y} \\ \frac{1}{y} & \frac{1}{y} & 0 & \frac{2}{y^2} & \frac{1}{y} & \frac{1}{y} \\ 0 & \frac{1}{y} & \frac{1}{y} & \frac{1}{y} & \frac{2}{y^2} & \frac{1}{y} \\ \frac{1}{y} & 0 & \frac{1}{y} & \frac{1}{y} & \frac{1}{y} & \frac{2}{y^2} \\ \end{array} \right)$$
The matrices are of the form $$ A=\begin{pmatrix} 1 & C \\ C^* & D \end{pmatrix}, \quad\quad B= \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} A \begin{pmatrix} 1 & 0\\ 0&2\end{pmatrix} , $$ with the blocks corresponding to the sizes of the sets $M_j$ involved. Let $v=(x,y)^t$ be a normalized eigenvector for the minimum eigenvalue $\lambda $ of $B$. Then $$ (x,2y)A\begin{pmatrix} x \\ 2y \end{pmatrix} = \lambda $$ also, but this modified vector has larger norm. So the desired inequality $\lambda_j(A)\le\lambda_j(B)$ (for all eigenvalues, not just the first one) will follow if we can show that $B\ge 0$. This is true for $y=1$ because in this case we can interpret $$ |M_j\cap M_k|=\sum_n \chi_j(n)\chi_k(n) $$ as the scalar product in $\ell^2$ of the characteristic functions, and this makes $v^*Bv$ equal to $\|f\|_2^2\ge 0$, with $f=\sum v_j\chi_j$. For general $y>0$, we have $B(y)=(1/y)B(1) + D$, for a diagonal matrix $D$ with non-negative entries.
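The scaling relation used here, $B=\operatorname{diag}(1,2)\,A\,\operatorname{diag}(1,2)$ in block form, can be verified entrywise for the explicit $N=2$ matrices from the question (an added check; $y=1/3$ is an arbitrary sample value):

```python
from fractions import Fraction

y = Fraction(1, 3)   # arbitrary sample in (0, 1)
A2 = [[1 / y, 0, 1 / (2 * y)],
      [0, 1 / y, 1 / (2 * y)],
      [1 / (2 * y), 1 / (2 * y), 1 / (2 * y ** 2)]]
B2 = [[1 / y, 0, 1 / y],
      [0, 1 / y, 1 / y],
      [1 / y, 1 / y, 2 / y ** 2]]
E = [1, 1, 2]        # |M_i| for the subsets {1}, {2}, {1, 2}
assert all(B2[i][j] == E[i] * A2[i][j] * E[j] for i in range(3) for j in range(3))
print("B_2 = diag(1, 1, 2) * A_2 * diag(1, 1, 2)")
```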
{ "language": "en", "url": "https://mathoverflow.net/questions/313470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
How can I simplify this sum any further? Recently I was playing around with some numbers and I stumbled across the following formal power series: $$\sum_{k=0}^\infty\frac{x^{ak}}{(ak)!}\biggl(\sum_{l=0}^k\binom{ak}{al}\biggr)$$ I was able to "simplify" the above expression for $a=1$: $$\sum_{k=0}^\infty\frac{x^k}{k!}\cdot2^k=e^{2x}$$ I also managed to simplify the expression for $a=2$ with the identity $\sum_{i=0}^\infty\frac{x^{2k}}{(2k)!}=\cosh(x)$: $$\sum_{k=0}^\infty\frac{x^{2k}}{(2k)!}\biggl(\sum_{l=0}^k\binom{2k}{2l}\biggr)=\mathbf[\cdots\mathbf]=\frac{1}{4}\cdot(e^{2x}+e^{-2x})+\frac{1}{2}=\frac{1}{2}\cdot(\cosh(2x)+1)$$ However, I couldn't come up with a general method for all $a\in\Bbb{N}$. I would be very thankful if someone could either guide me towards simplifying this expression or post his solution here.
You might be able to use the fact that $$\sum_{k=0}^\infty b_{ak}=\sum_{k=0}^\infty \left(\frac{1}{a}\sum_{j=0}^{a-1} \exp\left(2\pi ijk/a\right)\right)b_k.$$ For example, when $a=1$, taking $b_k = \frac{x^k}{k!}\sum_{\ell \ge 0} \binom{k}{\ell}$ yields $$\sum_{k=0}^\infty b_{k}=\sum_{k=0}^\infty \frac{x^k}{k!}\sum_{\ell \ge 0} \binom{k}{\ell}=\sum_{k=0}^\infty \frac{x^k}{k!}2^k=\exp(2x),$$ as you already obtained. For $a=2$, first note that \begin{align} \sum_{\ell \ge 0}\binom{k}{2 \ell} &= \sum_{\ell\ge 0} \left(\frac{1}{2}\sum_{j=0}^1 \exp\left(2\pi ij\ell/2\right)\right)\binom{k}{\ell}\\ &= \sum_{\ell\ge 0} \frac{1+(-1)^\ell}{2}\binom{k}{\ell}\\ &= \frac{1}{2}\sum_{\ell\ge 0} \binom{k}{\ell}+ \frac{1}{2}\sum_{\ell\ge 0} (-1)^\ell\binom{k}{\ell}\\ &= \frac{2^k+0^k}{2}. \end{align} Now taking $b_k = \frac{x^k}{k!}\sum_{\ell \ge 0}\binom{k}{2 \ell}$ yields \begin{align}\sum_{k=0}^\infty b_{2k}&=\sum_{k=0}^\infty \left( \frac{1}{2}\sum_{j=0}^1 \exp\left(\pi ijk\right)\right)\frac{x^k}{k!}\sum_{\ell \ge 0}\binom{k}{2 \ell}\\ &=\frac{1}{2}\sum_{k=0}^\infty \left(1+(-1)^k\right)\frac{x^k}{k!}\left(2^{k-1}+\frac{1}{2}[k=0]\right)\\ &=\frac{1}{4}\sum_{k=0}^\infty \frac{x^k}{k!}2^k+\frac{1}{4}\sum_{k=0}^\infty (-1)^k\frac{x^k}{k!}2^k+\frac{1}{2}\\ &=\frac{\exp(2x)+\exp(-2x)}{4} +\frac{1}{2}\\ &=\cosh^2(x), \end{align} again matching your result. For $a=3$, first note that \begin{align} \sum_{\ell \ge 0}\binom{k}{3 \ell} &= \sum_{\ell\ge 0} \left(\frac{1}{3}\sum_{j=0}^2 \exp\left(2\pi ij\ell/3\right)\right)\binom{k}{\ell}\\ &= \sum_{\ell\ge 0} \frac{1+\exp(2\pi i\ell/3)+\exp(4\pi i\ell/3)}{3}\binom{k}{\ell}\\ &= \frac{1}{3}\sum_{\ell\ge 0} \binom{k}{\ell}+ \frac{1}{3}\sum_{\ell\ge 0} \exp(2\pi i/3)^\ell\binom{k}{\ell}+ \frac{1}{3}\sum_{\ell\ge 0} \exp(4\pi i/3)^\ell\binom{k}{\ell}\\ &= \frac{2^k+(1+\exp(2\pi i/3))^k+(1+\exp(4\pi i/3))^k}{3}\\ &= \frac{2^k+\exp(\pi i/3)^k+\exp(-\pi i/3)^k}{3}. 
\end{align} Now taking $b_k = \frac{x^k}{k!}\sum_{\ell \ge 0}\binom{k}{3 \ell}$ yields \begin{align}\sum_{k=0}^\infty b_{3k}&=\sum_{k=0}^\infty \left( \frac{1}{3}\sum_{j=0}^2 \exp\left(2\pi ijk/3\right)\right)\frac{x^k}{k!}\sum_{\ell \ge 0}\binom{k}{3 \ell}\\ &=\frac{1}{3}\sum_{k=0}^\infty \left(1+\exp(2\pi ik/3)+\exp(4\pi ik/3)\right)\frac{x^k}{k!}\frac{2^k+\exp(\pi i/3)^k+\exp(-\pi i/3)^k}{3}\\ &=\frac{1}{9}\sum_{k=0}^\infty (1+\exp(2\pi i/3)^k+\exp(4\pi i/3)^k)(2^k+\exp(\pi i/3)^k+\exp(-\pi i/3)^k)\frac{x^k}{k!}. \end{align} Now expand the product of trinomials to obtain 9 sums that reduce to $\exp(cx)$ for various constants $c$. Alternatively, note that: $$\sum_{k=0}^\infty \frac{x^{ak}}{(ak)!}\sum_{\ell \ge 0}\binom{ak}{a\ell} = \left(\sum_{k=0}^\infty \frac{x^{ak}}{(ak)!}\right)^2,$$ so you might as well just compute \begin{align} \sum_{k=0}^\infty \frac{x^{ak}}{(ak)!} &= \sum_{k=0}^\infty \left( \frac{1}{a}\sum_{j=0}^{a-1} \exp\left(2\pi ijk/a\right)\right)\frac{x^k}{k!} \\ &= \frac{1}{a}\sum_{j=0}^{a-1} \sum_{k=0}^\infty \frac{(\exp(2\pi ij/a)x)^k}{k!} \\ &= \frac{1}{a}\sum_{j=0}^{a-1} \exp(\exp(2\pi ij/a)x), \end{align} and then square the result.
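The roots-of-unity filter and the resulting closed form can be spot-checked in floating point (an added verification; the truncation depth and sample points are arbitrary):

```python
import cmath
import math

def series(a, x, kmax=60):
    # truncated sum_k x^(a k) / (a k)!
    return sum(x ** (a * k) / math.factorial(a * k) for k in range(kmax // a + 1))

def closed_form(a, x):
    # (1/a) * sum_j exp(omega^j x) with omega = exp(2 pi i / a)
    return sum(cmath.exp(cmath.exp(2j * cmath.pi * r / a) * x) for r in range(a)) / a

for a in (1, 2, 3, 4):
    for x in (0.5, 1.0, 2.0):
        assert abs(series(a, x) - closed_form(a, x)) < 1e-9
print("sum_k x^(ak)/(ak)! equals the roots-of-unity closed form")
```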
{ "language": "en", "url": "https://mathoverflow.net/questions/346198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
Can an even perfect number be a sum of two cubes? A similar question was asked before in https://math.stackexchange.com/questions/2727090/even-perfect-number-that-is-also-a-sum-of-two-cubes, but no conclusions were drawn. On the Wikipedia article of perfect numbers there are two related results concerning whether an even perfect number can be a sum of two cubes. Gallardo's result in 2010 (which can be found here) claims that 28 is the only perfect number that can be a sum of two cubes. This part is copied from the question on MSE, which is summarized from the paper: Let $N$ be an even perfect number. Assume that $N=x^3+a^3=(x+a)(x^2-xa+a^2)$. Note that $x$ and $a$ have the same parity. Consider the case $x+a<x^2-xa+a^2$. By the Euclid–Euler theorem, it follows that $N=2^{p-1}(2^p-1)$, where $2^p-1$ is a Mersenne prime. Thus, $x+a=2^{p-1}$ and $x^2-xa+a^2=2^p-1$. However, nowhere in the proof was it proven that both $x,a$ are odd, or that $x+a$ and $x^2-xa+a^2$ are coprime. If $x,a$ are even, the second equation cannot hold. So, is this result true? If the subsequent analysis is correct, this still shows that a perfect number cannot be expressed as two odd cubes. Or are there similar results concerning whether a perfect number can be expressed as a sum of two perfect powers? Remark: the title of this paper is On a remark of Makowski about perfect numbers. The remark of Makowski, also referenced in the Wikipedia article, concerns the case $a=1$, so $x$ is also odd, and there is no issue of non-comprimality. For those interested, Makowski deduced that $x+1=2^{p-1}$ and $x^2-x+1 = 2^p-1$ from the fact that the latter factor must be odd. From these equations, $x=3$ follows immediately, hence $28$ is the only perfect number that is one more than a cube. @Mindlack's answer here generalizes the result to $N = x^m + 1$. Both proofs are elementary.
Here is a proof that 28 is the only even perfect number that is the sum of two positive cubes. The proof in Gallardo's article must be adapted in the case $x,a$ are even. Write $N=2^{p-1}(2^p-1) = x^3+y^3 = (x+y)(x^2-xy+y^2)$. The gcd $d$ of $x$ and $y$ must be a power of 2, because $d^3$ divides $N$. Writing $x=2^h u$, $y=2^h v$ gives $2^{p-1-3h}(2^p-1) = u^3+v^3$. We are going to show that the equation $2^k(2^p-1) = u^3+v^3$ with $k<p$ and $(u,v)=1$ has no solution for $p \geq 5$. First we must have $k \geq 1$ because $2^p-1$ is prime, and $u,v$ must be odd. Moreover $u+v=2^k$ and $u^2-uv+v^2=2^p-1$. Then \begin{equation*} 2^p-1=u^2-uv+v^2=u^2-u(2^k-u)+(2^k-u)^2 = 2^{2k}-3uv. \end{equation*} We deduce the bounds $p \leq 2k$ and \begin{equation*} 2^p-1 \geq 2^{2k}-3 \cdot (2^{k-1})^2 \geq 2^{2k-2}, \end{equation*} which implies $p \geq 2k-2$. Putting these bounds together, we get $p=2k-1$. So $u,v$ are solutions to the system of equations \begin{equation*} \begin{cases} & u+v = 2^k \\ & uv = (2^{2k-1}+1)/3 \end{cases} \end{equation*} The discriminant $\Delta$ of the polynomial $(X-u)(X-v)$ must be a perfect square. We have $\Delta=4 \cdot (2^{2k-2}-1)/3$, hence $2^{2k-2}-1$ is 3 times an odd square. In particular $2^{2k-2} - 1 \equiv 3 \bmod{8}$, which is possible only for $k=2$ and $p=3$.
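The result can be illustrated by a direct search (an addition, not part of the proof; the Mersenne exponent list and the bound $10^{10}$ are hard-coded for the demo):

```python
def perfect_numbers(limit):
    # even perfect numbers 2^(p-1)(2^p - 1) from a few Mersenne prime exponents
    out = []
    for p in (2, 3, 5, 7, 13, 17):
        m = 2 ** p - 1
        if all(m % q for q in range(2, int(m ** 0.5) + 1)):  # m is prime
            n = 2 ** (p - 1) * m
            if n <= limit:
                out.append(n)
    return out

def sum_of_two_positive_cubes(n):
    a = 1
    while 2 * a ** 3 <= n:
        b = round((n - a ** 3) ** (1 / 3))
        if any(c > 0 and a ** 3 + c ** 3 == n for c in (b - 1, b, b + 1)):
            return True
        a += 1
    return False

perfs = perfect_numbers(10 ** 10)
assert [n for n in perfs if sum_of_two_positive_cubes(n)] == [28]
print(perfs)
```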
{ "language": "en", "url": "https://mathoverflow.net/questions/375879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A special congruence For any $a, b\in\mathbb{N}$ with $a+2b\not\equiv 0\pmod 3$, we define $\delta(a, b)$ as follows: \begin{align*} \delta(a, b)={\left\{\begin{array}{rl} 1,\ \ \ \ &{\rm if} \ a+2b\equiv 1\pmod 3,\\ 0,\ \ \ \ &{\rm if} \ a+2b\equiv 2\pmod 3. \end{array}\right.} \end{align*} Furthermore, for any $m, n\in\mathbb{N}$, we let $$s(m, n)=\sum\limits_{a=0}^m\sum\limits_{b=0\atop a+2b\not\equiv 0\pmod 3}^n (-1)^{m+n-a-b}2^{\delta(a, b)}\binom{m}{a}\binom{n}{b}.$$ I want to know how to prove that $s(m, n)\equiv 0\mod 3^k$, where $k=[(m+n)/2]$.
Up to the overall sign $(-1)^{m+n}$, which does not affect divisibility, we can rewrite the sum as $$\sum_{a=0}^{m} (-1)^a \binom{m}{a}\sum_{\substack{b=0 \\ 3 \nmid a+2b}}^{n} (-1)^b 2^{\delta(a,b)} \binom{n}{b}$$ * Now, when $a=3k+1$, the admissible $b$ are $b=3k$ or $b=3k-1$; for $b=3k$, $\delta(a,b)=1$, and for $b=3k-1$, $\delta(a,b)=0$. * Similarly, when $a=3k-1$: for $b=3k$, $\delta(a,b)=0$, and for $b=3k+1$, $\delta(a,b)=1$. * When $a=3k$: for $b=3k+1$, $\delta(a,b)=0$, and for $b=3k-1$, $\delta(a,b)=1$. [$\omega$ is a primitive cube root of unity] Then, for $n\geq 1$, $$\sum_{\substack{b=0 \\b=3k}}^{n} (-1)^b \binom{n}{b}=\frac{1}{3}[(1-\omega)^n+(1-\omega^2)^n]=A_n$$ Similarly, $$\sum_{\substack{b=0 \\b=3k-1}}^{n} (-1)^b \binom{n}{b}=\frac{1}{3}[\omega(1-\omega)^n+\omega^2(1-\omega^2)^n]=C_n$$ And, $$\sum_{\substack{b=0 \\b=3k+1}}^{n} (-1)^b \binom{n}{b}=\frac{1}{3}[\omega^2(1-\omega)^n+\omega(1-\omega^2)^n]=B_n$$ Then, $$\sum_{a=0}^{m} (-1)^a \binom{m}{a}\sum_{\substack{b=0 \\ 3 \nmid a+2b}}^{n} (-1)^b 2^{\delta(a,b)} \binom{n}{b} =2(A_nB_m+B_nC_m+C_nA_m)+(A_mB_n+B_mC_n+C_mA_n)$$ Expanding the products of the $A$'s, $B$'s, $C$'s and using $1+\omega+\omega^2=0$, this collapses to $$-\frac{1}{3}\left[(1-\omega)^{n+1}(1-\omega^2)^{m}+(1-\omega^2)^{n+1}(1-\omega)^{m}\right] =-\frac{1}{3}(1-\omega)^{m+n+1}\left[(1+\omega)^{m}+(1+\omega)^{n+1}\right],$$ using $1-\omega^2=(1-\omega)(1+\omega)$. Since $1-\omega=\sqrt{3}\,e^{-i\pi/6}$ and $1+\omega=e^{i\pi/3}$, this equals $$-2\cdot 3^{\frac{m+n-1}{2}}\cos\left(\frac{\pi (n+1-m)}{6}\right).$$ This is divisible by $3^{\lfloor \frac{m+n}{2} \rfloor}$: if $m+n$ is odd, then $\frac{m+n-1}{2}=\lfloor \frac{m+n}{2} \rfloor$ and $2\cos\frac{\pi(n+1-m)}{6}$ is an integer, the angle being an integer multiple of $\pi/3$; if $m+n$ is even, then $n+1-m$ is odd, so $2\cos\frac{\pi(n+1-m)}{6}\in\{0,\pm\sqrt{3}\}$ and the product is $0$ or $\pm 3^{\frac{m+n}{2}}$.
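Added, not part of the original answer: a small Python check of both the divisibility claim and the closed form $s(m,n)=(-1)^{m+n}\cdot\big(-2\cdot 3^{(m+n-1)/2}\cos\frac{\pi(n+1-m)}{6}\big)$ for $m,n\geq 1$ (the ranges and tolerance are arbitrary):

```python
import math
from math import comb

def s(m, n):
    total = 0
    for a in range(m + 1):
        for b in range(n + 1):
            if (a + 2 * b) % 3 == 0:
                continue
            delta = 1 if (a + 2 * b) % 3 == 1 else 0
            total += (-1) ** (m + n - a - b) * 2 ** delta * comb(m, a) * comb(n, b)
    return total

def closed_form(m, n):
    # (-1)^(m+n) * ( -2 * 3^((m+n-1)/2) * cos(pi (n+1-m)/6) )
    return (-1) ** (m + n) * (-2) * 3 ** ((m + n - 1) / 2) * math.cos(math.pi * (n + 1 - m) / 6)

divisible = all(s(m, n) % 3 ** ((m + n) // 2) == 0 for m in range(10) for n in range(10))
matches = all(abs(s(m, n) - closed_form(m, n)) < 1e-6 for m in range(1, 10) for n in range(1, 10))
```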
{ "language": "en", "url": "https://mathoverflow.net/questions/380193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a nonzero solution to this infinite system of congruences? Is there a triple of nonzero even integers $(a,b,c)$ that satisfies the following infinite system of congruences? $$ a+b+c\equiv 0 \pmod{4} \\ a+3b+3c\equiv 0 \pmod{8} \\ 3a+5b+9c\equiv 0 \pmod{16} \\ 9a+15b+19c\equiv 0 \pmod{32} \\ \vdots \\ s_na + t_nb + s_{n+1}c \equiv 0 \pmod{2^{n+1}} \\ \vdots $$ where $(s_n)$ and $(t_n)$ are weighted tribonacci sequences defined by $$ s_1=s_2=1, \\ s_3=3, \\ s_n = s_{n-1} +2s_{n-2} + 4s_{n-3} \text{ for } n>3, $$ and $$ t_1=1, \\ t_2=3, \\ t_3=5, \\ t_n = t_{n-1} +2t_{n-2} + 4t_{n-3} \text{ for } n>3. $$ I think there are no nonzero solutions, but I haven't been able to prove this. Computationally, I found there are no nonzero solutions for integers $a$, $b$, and $c$ up to $1000$. Note the $s_n$ and $t_n$ are always odd, and that the ratios $\frac{s_n}{s_{n-1}}$ and $\frac{t_n}{t_{n-1}}$ approach $2.4675...$.
Let $u_n = a s_n + b t_n + c s_{n+1}$. The stronger claim is true: for large enough values of $n$, the number $u_n$ will be exactly divisible by a fixed power of $2$ that doesn't depend on $n$. Let $u_n = a s_n + b t_n + c s_{n+1}$ then (by induction) $$u_{n} = u_{n-1} + 2 u_{n-2} + 4 u_{n-3}.$$ The polynomial $x^3 - x^2 - 2 x - 4$ is irreducible and has three roots $\alpha_1$, $\alpha_2$, and $\alpha_3$ in $\overline{\mathbf{Q}}$. By the general theory of recurrence relations, $$u_n = A_1 \alpha^n_1 + A_2 \alpha^n_2 + A_3 \alpha^n_3$$ for constants $A_1$, $A_2$, $A_3$. Since $u_n \in \mathbf{Q}$, we may additionally deduce that $A_i$ lie in $\mathbf{Q}(\alpha_1,\alpha_2,\alpha_3)$. That is because we can solve for $A_i$ using the equation $$\left( \begin{matrix} \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha^2_1 & \alpha^2_2 & \alpha^2_3 \\ \alpha^3_1 & \alpha^3_2 & \alpha^3_3 \end{matrix} \right) \left( \begin{matrix} A_1 \\ A_2 \\ A_3 \end{matrix} \right) = \left( \begin{matrix} u_1 \\ u_2 \\ u_3 \end{matrix} \right)$$ and the matrix on the left is invertible (Vandermonde). In fact we deduce the stronger claim that any Galois automorphism sending $\alpha_i$ to $\alpha_j$ sends $A_i$ to $A_j$. (Simply consider the action of the Galois group on both sides of this equation, noting that the $A_i$ are determined uniquely from this equation.) In particular, if one of the $A_i = 0$, then all of the $A_i = 0$. But now fix an embedding of $\overline{\mathbf{Q}}$ into $\overline{\mathbf{Q}}_2$. From the Newton Polygon, we see that there is one root (call it $\alpha_1$) of valuation $0$, and the other two roots have valuation $1$. Hence $$\|A_1 \alpha^n_1 \|_2 = \|A_1\|_2, \quad \|A_2 \alpha^n_2 \|_2 = \|A_2\|_2 \cdot 2^{-n}, \quad \|A_3 \alpha^n_3 \|_2 = \|A_3\|_2 \cdot 2^{-n}.$$ In particular, if $A_1 \ne 0$, then (by the ultrametric inequality) $\| u_n \|_2 = \|A_1\|$ for $n$ large enough. 
Hence we deduce that either the $2$-adic valuation of $u_n$ is eventually constant (as claimed) or that $A_1 = 0$ and so $A_i = 0$ for all $i$, which implies that $u_n = 0$ for all $n$. But if $u_1 = u_2 = u_3 = 0$, then $$\left( \begin{matrix} 1 &1 & 1 \\ 1 &3 & 3 \\ 3 & 5 & 9 \end{matrix} \right) \left( \begin{matrix} a \\ b \\ c \end{matrix} \right) = \left( \begin{matrix} 0 \\ 0 \\ 0 \end{matrix} \right)$$ The matrix on the left is invertible which implies that $a=b=c=0$.
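Added, not in the original answer: a direct computational check of both the stabilizing 2-adic valuation and the nonexistence of small nonzero even solutions (the bounds 60 and $\pm 6$ are arbitrary):

```python
def v2(n):
    """2-adic valuation of a nonzero integer."""
    n = abs(n)
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

N = 60
s = [0, 1, 1, 3]   # s[1], s[2], s[3]; index 0 unused
t = [0, 1, 3, 5]
for n in range(4, N + 2):
    s.append(s[n - 1] + 2 * s[n - 2] + 4 * s[n - 3])
    t.append(t[n - 1] + 2 * t[n - 2] + 4 * t[n - 3])

def u(a, b, c, n):
    return a * s[n] + b * t[n] + c * s[n + 1]

# sanity: coefficients of the first four congruences from the question
coeffs = [(s[n], t[n], s[n + 1]) for n in range(1, 5)]

# for a sample nonzero triple the valuation of u_n stabilizes
vals = [v2(u(2, 2, 2, n)) for n in range(1, N + 1)]
stable = len(set(vals[-20:])) == 1

# every nonzero triple of even integers with entries in [-6, 6] fails some congruence
evens = range(-6, 7, 2)
all_fail = all(
    any(u(a, b, c, n) % 2 ** (n + 1) != 0 for n in range(1, N + 1))
    for a in evens for b in evens for c in evens if (a, b, c) != (0, 0, 0)
)
```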
{ "language": "en", "url": "https://mathoverflow.net/questions/381057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Why $\lim_{n\rightarrow \infty}\frac{F(n,n)}{F(n-1,n-1)} =\frac{9}{8}$? $$F(m,n)= \begin{cases} 1, & \text{if $mn=0$}; \\ \frac{1}{2} F(m,n-1) + \frac{1}{3} F(m-1,n) + \frac{1}{4} F(m-1,n-1), & \text{if $mn>0$.} \end{cases}$$ Please provide a proof of: $$\lim_{n\rightarrow \infty}\frac{F(n,n)}{F(n-1,n-1)} =\lim_{n\rightarrow \infty}\frac{F(n,n-1)}{F(n-1,n-1)}=\frac{9}{8}$$ $$\lim_{n\rightarrow \infty}\frac{F(n-1,n)}{F(n-1,n-1)}=1$$
We will compute the generating function, and use the method described in section 2 of this paper. Let $F_{m,n}=F(m,n)$. Consider the generating function $$G(x,y)=\sum_{m=0}^\infty\sum_{n=0}^\infty F_{m,n}x^my^n.$$ Then the recurrence gives \begin{align*} &G(x,y)=\sum_{m=0}^\infty F_{m,0}x^m+\sum_{n=1}^\infty F_{0,n}y^n+\sum_{m=1}^\infty\sum_{n=1}^\infty F_{m,n}x^my^n\\ &=\frac{1}{1-x}+\frac{y}{1-y}+\sum_{m=1}^\infty\sum_{n=1}^\infty\left(\frac{1}{2}F_{m,n-1}+\frac{1}{3}F_{m-1,n}+\frac{1}{4}F_{m-1,n-1}\right)x^my^n\\ &=\frac{1-xy}{(1-x)(1-y)}+\frac{y}{2}\sum_{m=1}^\infty\sum_{n=0}^\infty F_{m,n}x^my^n+\frac{x}{3}\sum_{m=0}^\infty\sum_{n=1}^\infty F_{m,n}x^my^n+\frac{xy}{4}G(x,y)\\ &=\frac{1-xy}{(1-x)(1-y)}+\frac{y}{2}\left(G(x,y)-\frac{1}{1-y}\right)+\frac{x}{3}\left(G(x,y)-\frac{1}{1-x}\right)+\frac{xy}{4}G(x,y)\\ &=\frac{1-xy-\frac{y}{2}(1-x)-\frac{x}{3}(1-y)}{(1-x)(1-y)}+\left(\frac{x}{3}+\frac{y}{2}+\frac{xy}{4}\right)G(x,y)\\ &=\frac{1-\frac{x}{3}-\frac{y}{2}-\frac{xy}{6}}{(1-x)(1-y)}+\left(\frac{x}{3}+\frac{y}{2}+\frac{xy}{4}\right)G(x,y). \end{align*} Solving for $G(x,y)$ gives $$G(x,y)=\frac{1-\frac{x}{3}-\frac{y}{2}-\frac{xy}{6}}{(1-x)(1-y)\left(1-\frac{x}{3}-\frac{y}{2}-\frac{xy}{4}\right)}.$$ Let $R(x,y)$ denote this rational function. We have shown that $G(x,y)$ converges to $R(x,y)$ in some neighborhood of the origin. Then for fixed small $x$, the Laurent series $G(x/y,y)$ will converge to $R(x/y,y)$ in some annulus around $y=0$. Furthermore, $H(x)=\sum_{m=0}^\infty F_{m,m}x^m$ is the constant term of $G(x/y,y)$, and can be found via residue calculus as \begin{align*} H(x)&=\frac{1}{2\pi i}\int_\gamma\frac{1}{y}G(x/y,y)\,dy\\ &=\frac{1}{2\pi i}\int_\gamma\frac{1}{y}R(x/y,y)\,dy\\ &=\sum_k\mathrm{Res}\left[\frac{1}{y}R(x/y,y),y=z_k\right] \end{align*} where $\gamma$ is a counterclockwise contour in the annulus, and where $z_k$ are the singularities of $R(x/y,y)$ lying inside of $\gamma$. 
We can compute \begin{align*} \frac{1}{y}R(x/y,y)&=\frac{y-\frac{x}{3}-\frac{y^2}{2}-\frac{xy}{6}}{(y-x)(1-y)\left(y-\frac{x}{3}-\frac{y^2}{2}-\frac{xy}{4}\right)}. \end{align*} This rational function has poles at the following points: * *$y=1$. This pole does not lie inside of $\gamma$. *$y=x$. This pole lies inside of $\gamma$, and has residue $\frac{8}{8-9x}$. *$y=(1-\frac{x}{4})+\sqrt{(\frac{x}{4}-1)^2-\frac{2}{3}x}$. This pole does not lie inside of $\gamma$. *$y=(1-\frac{x}{4})-\sqrt{(\frac{x}{4}-1)^2-\frac{2}{3}x}$. This pole lies inside of $\gamma$, and has residue $$\frac{\frac{x}{12}\left((1-\frac{x}{4})-\sqrt{(\frac{x}{4}-1)^2-\frac{2}{3}x}\right)}{\left((1-\frac{5x}{4})-\sqrt{(\frac{x}{4}-1)^2-\frac{2}{3}x}\right)\left(\frac{x}{4}+\sqrt{(\frac{x}{4}-1)^2-\frac{2}{3}x}\right)\left(\sqrt{(\frac{x}{4}-1)^2-\frac{2}{3}x}\right)}$$ which Wolfram Alpha can simplify to $$\frac{x\left(13\sqrt3\,x-12\sqrt3+\sqrt{3x^2-56x+48}\right)}{(7x-6)(9x-8)\sqrt{3x^2-56x+48}}.$$ Putting this all together gives $$H(x)=\frac{8}{8-9x}+\frac{x\left(13\sqrt3\,x-12\sqrt3+\sqrt{3x^2-56x+48}\right)}{(7x-6)(9x-8)\sqrt{3x^2-56x+48}}.$$ The second summand is actually holomorphic at $x=6/7$ and $x=8/9$. Then the singularity of $H(x)$ closest to the origin is $x=8/9$, and we obtain the asymptotic $$F_{m,m}\sim\left(\frac{9}{8}\right)^m$$ which proves the first limit. The remaining limits can be solved in a similar way, by first determining the asymptotics of $F(m,m-1)$ and $F(m-1,m)$.
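Added, not part of the original answer: the first limit is easy to confirm numerically from the recurrence itself (floating-point evaluation; $N=600$ is an arbitrary cutoff):

```python
def diag_values(N):
    """Return [F(0,0), F(1,1), ..., F(N,N)] computed column by column."""
    prev = [1.0] * (N + 1)            # F(m, 0) = 1 for all m
    diag = [1.0]                      # F(0, 0)
    for n in range(1, N + 1):
        cur = [1.0] * (N + 1)         # F(0, n) = 1
        for m in range(1, N + 1):
            # F(m,n) = F(m,n-1)/2 + F(m-1,n)/3 + F(m-1,n-1)/4
            cur[m] = 0.5 * prev[m] + cur[m - 1] / 3 + 0.25 * prev[m - 1]
        diag.append(cur[n])
        prev = cur
    return diag

N = 600
diag = diag_values(N)
ratio = diag[N] / diag[N - 1]          # should be close to 9/8 = 1.125
```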
{ "language": "en", "url": "https://mathoverflow.net/questions/390924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Improper integral $\int_0^\infty\frac{x^{2n+1}}{(1+x^2)^2(2+x^2)^N}dx,\ \ \ n\le N$ How can I evaluate this integral? $$\int_0^\infty\frac{x^{2n+1}}{(1+x^2)^2(2+x^2)^N}dx,\ \ \ n\le N$$ Maybe there is a recurrence relation for the integral?
One approach is to consider the sum $$ J = \sum_{n,m=0}^\infty s^nt^mI_{n,n+m} = \int_{x=0}^\infty F\,dx, $$ where $$ F = \sum_{n,m=0}^\infty s^nt^m \frac{x^{2n+1}}{(1+x^2)^2(2+x^2)^{n+m}} = \frac{x(2+x^2)^2}{(1+x^2)^2(2+x^2-sx^2)(2+x^2-t)} $$ This works out as $$ J = \frac{2s^2\ln(1-s)}{(1+s)^2(st-2s-t)} + \frac{t^2\ln(1-t/2)}{2(1-t)^2(st-2s-t)} + \frac{(2s-t-3st)\ln(2)}{2(1+s)^2(1-t)^2} + \frac{1}{2(1+s)(1-t)} $$ In principle you could expand this as a power series and extract $I_{n,n+m}$ as the coefficient of $s^nt^m$.
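Added, not part of the original answer: one way to sanity-check the closed form of $J$ is to compare a low-order coefficient with direct numerical integration. Expanding $J(0,t)$ gives the coefficient of $t$ as $\tfrac12(1-\ln 2)$, which should equal $I_{0,1}$; a Python sketch (the step count is arbitrary):

```python
import math

def integrand(x, n, N):
    return x ** (2 * n + 1) / ((1 + x * x) ** 2 * (2 + x * x) ** N)

def integral(n, N, steps=20000):
    # substitute x = tan(theta), mapping (0, pi/2) to (0, infinity); Simpson's rule in theta
    h = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps + 1):
        if i == steps:
            fx = 0.0                 # integrand * (1 + x^2) -> 0 as x -> infinity
        else:
            x = math.tan(i * h)
            fx = integrand(x, n, N) * (1 + x * x)
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * fx
    return total * h / 3

I01 = integral(0, 1)
closed = (1 - math.log(2)) / 2       # t-coefficient of J(0, t)
# I_{0,0} = J(0,0) = 1/2 is another cheap cross-check
I00 = integral(0, 0)
```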
{ "language": "en", "url": "https://mathoverflow.net/questions/393753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
General formulas for derivative of $f_n(x)=\dfrac{ax^n+bx^{n-1}+cx^{n-2}+\cdots}{a'x^n+b'x^{n-1}+c'x^{n-2}+\cdots},\quad a'\neq0$ For the function $f_1(x)=\dfrac{ax+b}{a'x+b'},\quad a'\neq0$ , we have $$f_1'(x)=\dfrac{\begin{vmatrix}{a} && {b} \\ {a'} && {b'}\end{vmatrix}}{(a'x+b')^2}$$ For $f_2(x)=\dfrac{ax^2+bx+c}{a'x^2+b'x+c'},\quad a'\neq0$, we have $$f_2'(x)=\dfrac{{ \begin{vmatrix}{a} && {b} \\ {a'} && {b'}\end{vmatrix} }x^2+2{ \begin{vmatrix}{a} && {c} \\ {a'} && {c'}\end{vmatrix} }x+{ \begin{vmatrix}{b} && {c} \\ {b'} && {c'}\end{vmatrix} }}{(a'x^2+b'x+c')^2}$$ Can we generalize the formula containing determinants to find $f_n'(x)$?
You can easily extend this, but for $n\geq 3$ you will end up with more than one term per monomial: For two functions $f$, $g$ rewrite the quotient rule using a determinant $$\frac{d}{dx} \frac{f}{g} = \frac{\frac{df}{dx}g-f \frac{dg}{dx}}{g^2} = \frac{\begin{vmatrix} \frac{df}{dx} & f \\ \frac{dg}{dx} & g \end{vmatrix}}{g^2}$$ Now assume that $f(x) := a_nx^n+\dots + a_0$, $g(x) :=b_n x^n+ \dots + b_0$ are polynomials. Then $\frac{df}{dx}$ and $\frac{dg}{dx}$ can be calculated explicitly and you can use the multilinearity of the determinant to split it by monomials: $$\frac{d}{dx} \frac{f}{g} = \frac{\begin{vmatrix} \sum_{k=0}^n a_k k x^{k-1} & \sum_{j=0}^n a_j x^j \\ \sum_{k=0}^n b_k k x^{k-1} & \sum_{j=0}^n b_j x^j \end{vmatrix}}{g^2} = \frac{\sum_{k=0}^n \sum_{j=0}^n k\begin{vmatrix} a_k & a_j \\ b_k & b_j \end{vmatrix} x^{k+j-1} }{g^2}$$ Now in the last sum for $k=j$ the determinant vanishes and if $k\neq j$ the same determinant occurs again with flipped sign if their roles are reversed. So you can only count the cases $j<k$ and get $$\frac{d}{dx} \frac{f}{g} = \frac{\sum_{k=0}^n \sum_{j=0}^{k-1} (k-j)\begin{vmatrix} a_k & a_j \\ b_k & b_j \end{vmatrix} x^{k+j-1} }{g^2} $$
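Added, not part of the original answer: the final identity can be verified exactly on sample coefficients by comparing the numerator $f'g-fg'$ with the determinant sum (integer arithmetic; the sample polynomials are arbitrary):

```python
def numerator_direct(a, b):
    """Coefficient list of f'(x) g(x) - f(x) g'(x); a, b are coefficient lists (index = degree)."""
    n = len(a)
    fp = [k * a[k] for k in range(1, n)]          # coefficients of f'
    gp = [k * b[k] for k in range(1, n)]          # coefficients of g'

    def mul(p, q):
        r = [0] * (len(p) + len(q) - 1)
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                r[i + j] += pi * qj
        return r

    def sub(p, q):
        m = max(len(p), len(q))
        return [(p[i] if i < len(p) else 0) - (q[i] if i < len(q) else 0) for i in range(m)]

    return sub(mul(fp, b), mul(a, gp))

def numerator_determinant(a, b):
    """Sum over j < k of (k-j) * det([[a_k, a_j], [b_k, b_j]]) * x^(k+j-1)."""
    n = len(a)
    r = [0] * (2 * n - 2)
    for k in range(n):
        for j in range(k):
            r[k + j - 1] += (k - j) * (a[k] * b[j] - a[j] * b[k])
    return r

a = [7, -2, 5, 3]       # f(x) = 3x^3 + 5x^2 - 2x + 7
b = [1, 4, 0, -6]       # g(x) = -6x^3 + 4x + 1
lhs = numerator_direct(a, b)
rhs = numerator_determinant(a, b)
```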
{ "language": "en", "url": "https://mathoverflow.net/questions/396250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Integrality of a sequence formed by sums Consider the following sequence defined as a sum $$a_n=\sum_{k=0}^{n-1}\frac{3^{3n-3k-1}\,(7k+8)\,(3k+1)!}{2^{2n-2k}\,k!\,(2k+3)!}.$$ QUESTION. For $n\geq1$, is the sequence of rational numbers $a_n$ always integral?
Here is another proof, inspired by Tewodros Amdeberhan's. We represent the sum as a constant term in a power series. To represent $(7k+8) \frac{(3k+1)!}{k!\,(2k+3)!}$ as a constant term, we need to express it as a linear combination of binomial coefficients. To do this we express $7k+8$ as a linear combination of $(2k+2)(2k+3)$, $k(2k+3)$, and $k(k-1)$ since each of these polynomials yields a binomial coefficient when multiplied by $\frac{(3k+1)!}{k!\,(2k+3)!}$. We find that $$(7k+8) \frac{(3k+1)!}{k!\,(2k+3)!}=\frac13 \left[4\binom{3k+1}{k} -7\binom{3k+1}{k-1} -2\binom{3k+1}{k-2}\right].$$ Using $\binom{n}{j} =\text{CT}\, (1+x)^n/x^j$, where CT denotes the constant term in $x$, we have $$ \begin{aligned} (7k+8) \frac{(3k+1)!}{k!\,(2k+3)!} &= \text{CT}\, \frac13\left( 4\frac{(1+x)^{3k+1}}{x^k} -7\frac{(1+x)^{3k+1}}{x^{k-1}}-2\frac{(1+x)^{3k+1}}{x^{k-2}}\right)\\ &=\text{CT}\,\frac{(1-2x)(x+4)(1+x)^{3k+1}}{3x^k}. \end{aligned} $$ Multiplying by $3^{3n-3k-1}/2^{2n-2k}$, summing on $k$ from 0 to $n-1$, and simplifying gives $$ a_n = 3\,\text{CT}\, \frac{(1+x)^{3n+1}}{x^{n-1}(1-2x)} -3\left(\frac{27}{4}\right)^n\!\text{CT}\,\frac{x(1+x)}{1-2x}. $$ The second constant term is 0, so $a_n = 3\,[x^{n-1}]\, (1+x)^{3n+1}/(1-2x)$, which is clearly an integer. In fact this gives the formula $$a_n = 3 \sum_{k=0}^{n-1}2^{n-k-1}\binom{3n+1}{k}.$$ Tewodros's formula can also be derived in the same way; if we represent his sum as a constant term and simplify we also get $3\,[x^{n-1}]\, (1+x)^{3n+1}/(1-2x)$.
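Added, not part of the original answer: the original definition of $a_n$ (in exact rational arithmetic) and the derived binomial formula can be compared directly (the range is arbitrary):

```python
from fractions import Fraction
from math import comb, factorial

def a_original(n):
    """a_n as defined in the question, computed exactly."""
    total = Fraction(0)
    for k in range(n):
        total += Fraction(
            3 ** (3 * n - 3 * k - 1) * (7 * k + 8) * factorial(3 * k + 1),
            2 ** (2 * n - 2 * k) * factorial(k) * factorial(2 * k + 3),
        )
    return total

def a_closed(n):
    """a_n = 3 * sum_{k=0}^{n-1} 2^(n-k-1) C(3n+1, k), manifestly an integer."""
    return 3 * sum(2 ** (n - k - 1) * comb(3 * n + 1, k) for k in range(n))

ok = all(a_original(n) == a_closed(n) for n in range(1, 16))
```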
{ "language": "en", "url": "https://mathoverflow.net/questions/398037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 3 }
How to find the asymptotics of a linear two-dimensional recurrence relation Let $d$ be a positive number. There is a two-dimensional recurrence relation as follows: $$R(n,m) = R(n-1,m-1) + R(n,m-d)$$ where $R(0,m) = 1$ and $R(n,0) = R(n,1) = \cdots = R(n, d-1) = 1$ for all $n,m>0$. How to analyze the asymptotics of $R(n, kn)$ for fixed $k$? It is easy to see that $$R(n, kn) = O\left( c_{k,d}^{n} \cdot (n+k+d)^{O(1)} \right)$$ Is there a way (or an algorithm) to find $c_{k,d}$ given $k$ and $d$? PS: I have calculated the bivariate generating function of $R(\cdot, \cdot)$: \begin{align} f(x,y) &= \frac{1 - xy - y^{d} + xy^{d}}{(1 - x)(1 - y)(1 - xy - y^{d})} \\ &= \frac{1}{(1 - x)(1 - y)} + \frac{xy^{d}}{(1 - x)(1 - y)(1 - xy - y^{d})} \\ \end{align}
This is to complement Blanco's answer by showing that \begin{equation*} R(n,kn)=\exp\{(C_{k,d}+o(1))\,n\} \tag{0}\label{0} \end{equation*} (as $n\to\infty$), where $k\ge1$ and $d\ge1$ (are fixed), \begin{equation*} C_{k,d}:=\frac k{1+y_{k,d}\,d}\,\big(\ln(1+y_{k,d})+y_{k,d}\,\ln(1+1/y_{k,d})\big), \end{equation*} \begin{equation*} y_{k,d}:=\max\Big(x_d,\frac{k-1}d\Big), \end{equation*} and $x_d$ is the unique positive root of the equation \begin{equation*} x_d(1+x_d)^{d-1}=1. \tag{0.5}\label{0.5} \end{equation*} In view of Blanco's answer, it is enough to show that \begin{equation*} B(n):=B(n,kn)=\exp\{(C_{k,d}+o(1))\,n\}, \tag{1}\label{1} \end{equation*} where \begin{equation*} B(n):=\max\{c_{a,b}\colon (a,b)\in E_{n,k,d}\}, \end{equation*} \begin{equation*} E_{n,k,d}:=\{(a,b)\colon 0\le a\le n-1,b\ge0,bd+a\le kn-d,\ a,b \text{ are integers}\}, \end{equation*} \begin{equation*} c_{a,b}:=\binom{a+b}b=\binom{a+b}a. \end{equation*} Note that $c_{a,b}$ is increasing in $a$ and in $b$. Note also that $c_{a,b}=(a+b)^{O(1)}=n^{O(1)}$ for $(a,b)\in E_{n,k,d}$ if $a=O(1)$ or $b=O(1)$. So, it remains to consider the case when $a\to\infty$ and $b\to\infty$. Then, by Stirling's formula, \begin{equation*} \begin{aligned} \ln c_{a,b}\sim a\ln(1+b/a)+b\ln(1+a/b). \end{aligned} \tag{2}\label{2} \end{equation*} Also, \begin{equation*} \begin{aligned} \frac{c_{a+d,b-1}}{c_{a,b}}&=b\frac{(a+d+1)\cdots(a+d+b-1)}{(a+1)\cdots(a+b)} \\ &= b\frac{(a+b+1)\cdots(a+b+d-1)}{(a+1)\cdots(a+d)} \\ &\sim b\frac{(a+b)^{d-1}}{a^d} =\frac ba\Big(1+\frac ba\Big)^{d-1}. \end{aligned} \end{equation*} So, for a fixed value of $bd+a$, the maximum of $c_{a,b}$ occurs when $b/a\to x_d$ (recall \eqref{0.5}). Let now $(a,b)\in E_{n,k,d}$ be a maximizer of $c_{a,b}$ (such that $a\to\infty$ and $b\to\infty$). Then, since $c_{a,b}$ is increasing in $a$ and in $b$, we have \begin{equation*} bd+a\sim kn. 
\end{equation*} The conditions $b/a\to x_d$ and $bd+a\sim kn$ imply \begin{equation*} a\sim k\frac1{1+x_d\,d}\,n, \tag{3}\label{3} \end{equation*} and the latter condition is compatible with condition $a\le n-1$ only if \begin{equation*} k\frac1{1+x_d\,d}\le1. \tag{4}\label{4} \end{equation*} If this is the case, then \begin{equation*} b\sim k\frac{x_d}{1+x_d\,d}\,n, \end{equation*} so that, by \eqref{2}, \begin{equation*} \begin{aligned} \frac{\ln c_{a,b}}n\to \frac k{1+x_d\,d}\,(\ln(1+x_d)+x_d\,\ln(1+1/x_d)). \end{aligned} \tag{5}\label{5} \end{equation*} Also, \eqref{4} implies $x_d\ge(k-1)/d$, so that $y_{k,d}=x_d$. So, we have proved \eqref{1}, and thus \eqref{0} -- in the case when condition (3) is compatible with condition $a\le n-1$. Otherwise, we have $a=n-1\sim n$ and still $bd+a\sim kn$, whence $b\sim(k-1)n/d$. So, by \eqref{2}, here \begin{equation*} \begin{aligned} \frac{\ln c_{a,b}}n\to \ln\Big(1+\frac{k-1}d\Big)+\frac{k-1}d\,\ln\Big(1+\frac d{k-1}\Big). \end{aligned} \tag{6}\label{6} \end{equation*} Also, in this "incompatibility" case, we have $k\frac1{1+x_d\,d}\ge1$ -- cf. \eqref{4}. So, here $x_d\le(k-1)/d$ and hence $y_{k,d}=\frac{k-1}d$. So, we have proved \eqref{1}, and thus \eqref{0} -- in the case when condition (3) is incompatible with condition $a\le n-1$. Thus, in either case, \eqref{0} is proved. $\quad\Box$
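Added, not part of the original proof: the constant $C_{k,d}$ can be checked against a direct maximization of $\ln\binom{a+b}{b}$ over $E_{n,k,d}$, exercising both regimes of $y_{k,d}$ (the sample $n$, $k$, $d$ and tolerances are arbitrary):

```python
import math

def x_root(d):
    """Unique positive root of x (1+x)^(d-1) = 1, by bisection on (0, 1]."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid * (1 + mid) ** (d - 1) < 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def C(k, d):
    y = max(x_root(d), (k - 1) / d)
    return k / (1 + y * d) * (math.log(1 + y) + y * math.log(1 + 1 / y))

def log_max_binom(n, k, d):
    """max of ln C(a+b, b) over 0 <= a <= n-1, b >= 0, b*d + a <= k*n - d."""
    def lbinom(a, b):
        return math.lgamma(a + b + 1) - math.lgamma(a + 1) - math.lgamma(b + 1)
    best = 0.0
    for a in range(n):
        b = (k * n - d - a) // d        # binomial is increasing in b, so take it maximal
        if b >= 0:
            best = max(best, lbinom(a, b))
    return best

n = 6000
ok1 = abs(log_max_binom(n, 1, 2) / n - C(1, 2)) < 0.01    # regime y = x_d
ok2 = abs(log_max_binom(n, 3, 2) / n - C(3, 2)) < 0.01    # regime y = (k-1)/d
```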
{ "language": "en", "url": "https://mathoverflow.net/questions/416269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
Is equation $xy(x+y)=7z^2+1$ solvable in integers? Do there exist integers $x,y,z$ such that $$ xy(x+y)=7z^2 + 1 ? $$ The motivation is simple. Together with Aubrey de Grey, we developed a computer program that incorporates all standard methods we know (Hasse principle, quadratic reciprocity, Vieta jumping, search for large solutions, etc.) to try to decide the solvability of Diophantine equations, and this equation is one of the nicest (if not the nicest) cubic equation that our program cannot solve.
There is no solution. It is clear that at least one of $x$ and $y$ is positive and that neither is divisible by 7. We can assume that $a := x > 0$. The equation implies that there are integers $X$, $Y$ such that $$ X^2 - 7 a Y^2 = a (4 + a^3) $$ (with $X = a (a + 2y)$ and $Y = 2z$). First consider the case that $a$ is odd. Then $4 + a^3$ is also odd (and positive), so we can consider the Jacobi symbol $$ \left(\frac{7a}{4+a^3}\right) \,. $$ One of the two numbers involved is ${} \equiv 1 \bmod 4$, so by quadratic reciprocity, $$ \left(\frac{7a}{4+a^3}\right) = \left(\frac{4+a^3}{7}\right) \left(\frac{4+a^3}{a}\right) = \left(\frac{4+a^3}{7}\right) $$ ($4 + a^3$ is a square mod $a$). Since $7 \nmid a$, we have $4 + a^3 \equiv 3$ or $5 \bmod 7$, both of which are nonsquares $\bmod 7$, so the symbol is $-1$. This implies that there is an odd prime $p$ having odd exponent in $4 + a^3$ and such that $7a$ is a quadratic nonresidue $\bmod p$. This gives a contradiction (note that $p \nmid a$). Now consider the case $a = 2b$ even; write $b = 2^{v_2(b)} b'$. Then we have that $4 + a^3 = 4 (1 + 2 b^3)$ and $$ \left(\frac{7a}{1 + 2b^3}\right) = \left(\frac{14b}{1 + 2b^3}\right) = \left(\frac{2}{1 + 2b^3}\right)^{1+v_2(b)} \left(\frac{7b'}{1 + 2b^3}\right) \,. $$ If $b$ is odd, then this is $$ \left(\frac{2}{1 + 2b^3}\right) (-\left(\frac{-1}{b}\right)) \left(\frac{1 + 2b^3}{7}\right) \left(\frac{1 + 2b^3}{b}\right) \,, $$ which is always $-1$ (the product of the first two factors is $1$; then conclude similarly as above). We obtain again a contradiction. Finally, if $b$ is even, then $$ \left(\frac{2}{1 + 2b^3}\right)^{1+v_2(b)} \left(\frac{7b'}{1 + 2b^3}\right) = \left(\frac{1 + 2b^3}{7}\right) \left(\frac{1 + 2b^3}{b'}\right) = -1$$ again (the first symbol is $1$, and quadratic reciprocity holds with the positive sign), and the result is the same. Here is an alternative proof using the product formula for the quadratic Hilbert symbol. 
If $(a,y,z)$ is a solution (with $a > 0$), then for all places $v$ of $\mathbb Q$, we must have $(7a, a(4+a^3))_v = 1$. We can rewrite the symbol as follows. $$ (7a, a(4+a^3))_v = (-7, a (4 + a^3))_v (-a, a)_v (-a, 4+a^3)_v = (-7, a(4 + a^3))_v $$ (the last two symbols in the middle expression are $+1$). So it follows that $$ (-7, a)_v = (-7, 4 + a^3)_v \,.$$ When $v = \infty$, the symbols are $+1$, since $a > 0$. When $v = 2$, the symbols are $+1$, since $-7$ is a $2$-adic square. When $v = p \ne 7$ is an odd prime, one of the symbols is $+1$ (and therefore both are), since $a$ and $4 + a^3$ have no common odd prime factors. Finally, when $v = 7$, the symbol on the right is $$ (-7, 4 + a^3)_7 = \left(\frac{4 + a^3}{7}\right) = -1 $$ as in the first proof. Putting these together, we obtain a contradiction to the product formula for the Hilbert symbol.
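Added, not part of the original answer: a brute-force confirmation that no small solutions exist (the search bound is arbitrary):

```python
from math import isqrt

def solutions(bound):
    """All (x, y, z) with |x|, |y| <= bound and x y (x+y) = 7 z^2 + 1, z >= 0."""
    sols = []
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            m = x * y * (x + y) - 1
            if m >= 0 and m % 7 == 0:
                z = isqrt(m // 7)
                if 7 * z * z + 1 == x * y * (x + y):
                    sols.append((x, y, z))
    return sols

found = solutions(100)   # expected: empty, matching the theorem
```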
{ "language": "en", "url": "https://mathoverflow.net/questions/420896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 2, "answer_id": 0 }
Division problem Are there infinitely many pairs of positive integers $(a,b)$ such that $2(6a+1)$ divides $6b^2+6ab+b-6a^2-2a-3$? That is, are there infinitely many different values of $a$ for which at least one suitable $b$ can be found? Some $a$ values are $0,2,3,7,11,17,\ldots$ I think that the answer is yes but I have no idea how to show this. If the answer is yes, are there any polynomial parametric solutions?
If $6a+1$ is a prime and a quadratic residue modulo $17$ (which is true for infinitely many values of $a$), then there are infinitely many positive integers $b$ with the required property. First observe that $b$ is good if and only if $$f(a,b):=6b^2+6ab+b-6a^2-2a-3$$ is even and divisible by $6a+1$. Hence $b$ must be odd: $b=2c+1$. Now we need to find infinitely many positive integers $c$ such that $f(a,2c+1)$ is divisible by $6a+1$. The identity $$6f(a,2c+1) + (6a-5-12c)(6a + 1)=(12c+6)^2-17$$ shows that $c$ is good if and only if $(12c+6)^2-17$ is divisible by $6a+1$. So we need to guarantee that the congruence $$(12c+6)^2\equiv 17\pmod{6a+1}$$ has a solution. This is equivalent to $17$ being a quadratic residue modulo $6a+1$. By quadratic reciprocity, this is equivalent to $6a+1$ being a quadratic residue modulo $17$, and we are done.
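Added, not part of the original answer: the key polynomial identity, and one concrete pair $(a,b)=(2,17)$ (here $6a+1=13\equiv 8^2 \pmod{17}$, a quadratic residue), check out numerically:

```python
def f(a, b):
    return 6 * b * b + 6 * a * b + b - 6 * a * a - 2 * a - 3

# the polynomial identity used in the argument, tested on a grid
identity_ok = all(
    6 * f(a, 2 * c + 1) + (6 * a - 5 - 12 * c) * (6 * a + 1) == (12 * c + 6) ** 2 - 17
    for a in range(-20, 21) for c in range(-20, 21)
)

# concrete example: 2(6a+1) = 26 divides f(2, 17)
example_ok = f(2, 17) % (2 * (6 * 2 + 1)) == 0
```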
{ "language": "en", "url": "https://mathoverflow.net/questions/436310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Parametric Solvable Septics? Known parametric solvable septics are, $$x^7+7ax^5+14a^2x^3+7a^3x+b=0\tag{1}$$ $$x^7 + 21x^5 + 35x^3 + 7x + a(7x^6 + 35x^4 + 21x^2 + 1)=0\tag{2}$$ $$x^7 - 2x^6 + x^5 - x^4 - 5x^2 - 6x - 4 + n(x - 1)x^2(x + 1)^2=0\tag{3}$$ $$x^7 + 7x^6 - 7\beta x^2 + 28\beta x + 2\beta(n - 13)=0\tag{4}$$ $$x^7 + 14x^4 + 7(n - 2)x^3 + 14(n - 5)x^2 - 28x - (n^2 + n + 3)=0\tag{5}$$ where $\beta = 4(n^2 + 27)$. The first generalizes Demoivre's quintic to 7th powers, the third can be derived from Kluener's database, while the fifth is a variation of the one in this post. In contrast, many parametric solvable quintics are known, such as the multi-variable, $$x^5+10cx^3+10dx^2+5ex+f=0$$ where the coefficients obey the simple quadratic in $f$, $$(c^3 + d^2 - c e) \big((5 c^2 - e)^2 + 16 c d^2\big) = (c^2 d + d e - c f)^2$$ Question: Surely there are other parametric solvable septics, also simple in form, known by now? Can someone give a sixth (without using transformations on the known ones)?
There is a parametric family of cyclic septics that obey $$x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 - (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0\tag1$$ such as the Hashimoto–Hoshi septic, $$\small x^7 - (a^3 + a^2 + 5a + 6)x^6 + 3(3a^3 + 3a^2 + 8a + 4)x^5 + (a^7 + a^6 + 9a^5 - 5a^4 - 15a^3 - 22a^2 - 36a - 8)x^4 - a(a^7 + 5a^6 + 12a^5 + 24a^4 - 6a^3 + 2a^2 - 20a - 16)x^3 + a^2(2a^6 + 7a^5 + 19a^4 + 14a^3 + 2a^2 + 8a - 8)x^2 - a^4(a^4 + 4a^3 + 8a^2 + 4)x + a^7=0$$ For example, let $a=1$ so, $$1 - 17 x + 44 x^2 - 2 x^3 - 75 x^4 + 54 x^5 - 13 x^6 + x^7=0$$ which is the equation involved in $\cos\frac{\pi k}{43}$. If we order its roots as, $$x_1,\,x_2,\,x_3,\,x_4,\,x_5,\,x_6,\,x_7 =\\ r_1,\,r_2,\,r_5,\,r_6,\,r_3,\,r_7,\,r_4 = \\ -0.752399,\; 0.0721331,\; 2.63744,\; 3.62599,\; 0.480671,\; 6.29991,\; 0.636246$$ where the $r_i$ follow Mathematica's root numbering, then it satisfies $(1)$.
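Added, not part of the original answer: the stated root ordering can be verified by polishing the listed approximations with Newton's method and testing relation $(1)$ (tolerances arbitrary):

```python
def p(x):
    return 1 - 17*x + 44*x**2 - 2*x**3 - 75*x**4 + 54*x**5 - 13*x**6 + x**7

def dp(x):
    return -17 + 88*x - 6*x**2 - 300*x**3 + 270*x**4 - 78*x**5 + 7*x**6

def newton(x):
    for _ in range(60):
        x = x - p(x) / dp(x)
    return x

# the ordering x_1, ..., x_7 given in the answer, refined to full precision
seeds = [-0.752399, 0.0721331, 2.63744, 3.62599, 0.480671, 6.29991, 0.636246]
x = [newton(s) for s in seeds]

s1 = sum(x[i] * x[(i + 1) % 7] for i in range(7))   # x1 x2 + x2 x3 + ... + x7 x1
s2 = sum(x[i] * x[(i + 2) % 7] for i in range(7))   # x1 x3 + x3 x5 + ... + x6 x1
# relation (1) says s1 - s2 = 0
```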
{ "language": "en", "url": "https://mathoverflow.net/questions/145278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Go I Know Not Whither and Fetch I Know Not What Next day: apparently my original question is harder, by far, than the other bits. So: it is a finite check, I was able to confirm by computer that, if the polynomial below satisfies $$ f(a,b,c,d) \equiv 0 \pmod {27}, \;\; \mbox{THEN} \; \; a,b,c,d \equiv 0 \pmod 3, $$ and if $$ f(a,b,c,d) \equiv 0 \pmod {125}, \;\; \mbox{THEN} \; \; a,b,c,d \equiv 0 \pmod 5, $$ ORIGINAL: $f$ is a polynomial in four variables. Take matrices $$ 1 = \left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right), $$ $$ i = \left( \begin{array}{rrrr} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right), $$ $$ j = \left( \begin{array}{rrrr} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right), $$ $$ k = \left( \begin{array}{rrrr} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array} \right), $$ Then take $$ f(a,b,c,d) = \det (a \cdot 1 + b \sqrt 3 i + c \sqrt 5 j + d \sqrt{15} k), $$ $$ =a^4-6 a^2 b^2+9 b^4-10 a^2 c^2-30 b^2 c^2+25 c^4+120 a b c d-30 a^2 d^2-90 b^2 d^2-150 c^2 d^2+225 d^4$$. Note that everything is commutative; $$ i^2 = 1, j^2 = 1, k^2 = 1; \; ij=ji=k, ki=ik=j,jk=kj=i. $$ It is also possible to re-write this with the square roots absorbed into the definitions of $i,j,k.$ So, questions include: does it make sense to anyone that, as I checked by brute force, if $$ f(a,b,c,d) \equiv 0 \pmod {81} $$ then $a,b,c,d \equiv 0 \pmod 3?$ Same for $625$ and $5.$ Need to think about how to check $5$ completely. Finally, is it true that this thing represents the same numbers as $x^2 - 15 y^2,$ and what is such a thing called anyway? It might be a field norm, I dunno. Oh, from a closed question at https://math.stackexchange.com/questions/931769/integer-solution-to-diophantine-equations which I found interesting.
http://en.wikipedia.org/wiki/Go_I_Know_Not_Whither_and_Fetch_I_Know_Not_What EDIT: It turns out we may use $27$ in place of $81.$ Evidently explaining this is the hard part. Confirmed, anyway. See what I can do with $125$ instead of $625.$ EDIT 2: Figured out how to program it; if the polynomial is divisible by $125,$ each variable is indeed divisible by $5.$
Yes, this is a field norm; it is the norm of $a + b \sqrt{3} + c \sqrt{5} + d \sqrt{15}$, from $K = \mathbb{Q}(\sqrt{3}, \sqrt{5})$ down to $\mathbb{Q}$. Note that $a+b \sqrt{3} + c \sqrt{5} + d \sqrt{15}$ acts on the basis $(1, \sqrt{3}, \sqrt{5}, \sqrt{15})$ by $$a \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} + b \begin{pmatrix} 0 & 3 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 1 & 0 \end{pmatrix} + c \begin{pmatrix} 0 & 0 & 5 & 0 \\ 0 & 0 & 0 & 5 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} + d \begin{pmatrix} 0 & 0 & 0 & 15 \\ 0 & 0 & 5 & 0 \\ 0 & 3 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}.$$ Now conjugate by the matrix whose diagonal entries are $(1, \sqrt{3}, \sqrt{5}, \sqrt{15})$ to get your matrix. The entries are no longer rational, so I can't think of the result as describing the action on $K$, but the determinant is the same. $\mathbb{Q}(\sqrt{15})$ has class number $2$ and $K$ is the class field. So, for a prime $p$ other than $2$, $3$, $5$, we have that $\pm p$ is a value of $x^2-15 y^2$ if and only if $p$ splits principally in $\mathbb{Q}(\sqrt{15})$ if and only if $p$ splits in $K$ if and only if $\pm p$ is a value of $f$. Also, neither $x^2-15 y^2$ nor $f$ can be $3 \bmod 4$, so the sign is the same in the two cases. However, they don't take the same set of composite values. Look at $-119 = 7 \times 17$. We have $61^2 - 15 \cdot 16^2 = -119$, but, if $7 | f(a,b,c,d)$ then $7^2 | f(a,b,c,d)$. I found this by hunting for two primes which are non-principally split in $\mathbb{Q}(\sqrt{15})$. In terms of quadratic forms, which I know you love, I needed primes of the form $3 x^2 - 5 y^2$, and I found $7=3 \cdot 2^2 - 5$ and $-17 = 3 \cdot 6^2 - 5 \cdot 5^2$. Then their product was of the form $x^2-15 y^2$. Since these primes split non-principally in $\mathbb{Q}(\sqrt{15})$, they don't split further in the class field.
(We can also directly compute $\left( \frac{3}{7} \right) = \left( \frac{3}{17} \right) = -1$.) So things divisible by one power of $7$ or $17$ are not norms from $K$.
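Added, not part of either post: the norm interpretation — $f$ as the product of the four conjugates — and the mod-27 claim from the question can both be checked by machine. The mod-27 loop is exhaustive over residues, which suffices because $f \bmod 27$ depends only on $a,b,c,d \bmod 27$ (sample ranges otherwise arbitrary):

```python
import itertools, math

def f(a, b, c, d):
    return (a**4 - 6*a**2*b**2 + 9*b**4 - 10*a**2*c**2 - 30*b**2*c**2 + 25*c**4
            + 120*a*b*c*d - 30*a**2*d**2 - 90*b**2*d**2 - 150*c**2*d**2 + 225*d**4)

def norm(a, b, c, d):
    """Product of the four Galois conjugates of a + b sqrt3 + c sqrt5 + d sqrt15."""
    r3, r5, r15 = math.sqrt(3), math.sqrt(5), math.sqrt(15)
    prod = 1.0
    for s in (1, -1):
        for t in (1, -1):
            prod *= a + s * b * r3 + t * c * r5 + s * t * d * r15
    return prod

norm_ok = all(abs(f(a, b, c, d) - norm(a, b, c, d)) <= 1e-6 * (1 + abs(f(a, b, c, d)))
              for a, b, c, d in itertools.product(range(-3, 4), repeat=4))

# exhaustive: f == 0 (mod 27) forces a, b, c, d == 0 (mod 3)
SQ = [x * x % 27 for x in range(27)]
P4 = [x**4 % 27 for x in range(27)]
mod27_ok = True
for a, b, c in itertools.product(range(27), repeat=3):
    base = (P4[a] - 6*SQ[a]*SQ[b] + 9*P4[b] - 10*SQ[a]*SQ[c]
            - 30*SQ[b]*SQ[c] + 25*P4[c]) % 27
    k1 = 120 * a * b * c % 27
    k2 = (-30*SQ[a] - 90*SQ[b] - 150*SQ[c]) % 27
    for d in range(27):
        if (base + k1 * d + k2 * SQ[d] + 225 * P4[d]) % 27 == 0:
            if a % 3 or b % 3 or c % 3 or d % 3:
                mod27_ok = False
```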
{ "language": "en", "url": "https://mathoverflow.net/questions/180987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Find all solutions $a,b,c$ with $(1-a^2)(1-b^2)(1-c^2)=8abc$ Two years ago, I made a conjecture on stackexchange: Today, I tried to find all solutions in positive rationals $a,b,c$ to $$(1-a^2)(1-b^2)(1-c^2)=8abc,\quad a,b,c\in \mathbb{Q}^{+}.$$ I have found some solutions, such as $$(a,b,c)=(5/17,1/11,8/9),(1/7,5/16,9/11),(3/4,11/21,1/10),\cdots$$ $$(a,b,c)=\left(\dfrac{4p}{p^2+1},\dfrac{p^2-3}{3p^2-1},\dfrac{(p+1)(p^2-4p+1)}{(p-1)(p^2+4p+1)}\right),\quad\text{for $p>2+\sqrt{3}$ and $p\in\mathbb {Q}^{+}$}.$$ Here is another simple solution: $$(a,b,c)=\left(\dfrac{p^2-4p+1}{p^2+4p+1},\dfrac{p^2+1}{2p^2-2},\dfrac{3p^2-1}{p^3-3p}\right).$$ My question is: are there solutions of another form (or have we found all solutions)?
The original proposer asks for "simple methods". Simplicity, like beauty, is in the eye of the beholder. I am sure that Noam Elkies and Joe Silverman feel their answers are extremely simple. The following discussion is, in my humble opinion, simpler. We can express the underlying equation as a quadratic in $a$, \begin{equation*} a^2+\frac{8bc}{(b^2-1)(c^2-1)}a-1=0 \end{equation*} with the obvious condition that $|b| \ne 1$ and $|c| \ne 1$. For $a$ to be rational, the discriminant must be a rational square, so there exists $D \in \mathbb{Q}$ such that \begin{equation*} D^2=(c^2-1)^2b^4-2(c^4-10c^2+1)b^2+(c^2-1)^2 \end{equation*} This quartic has an obvious rational point when $b=0$, and so is birationally equivalent to an elliptic curve. We find the curve \begin{equation*} v^2=u(u+(c^2-1)^2)(u+4c^2) \end{equation*} with the reverse transformation \begin{equation*} b=\frac{v}{(c^2-1)(u+4c^2)} \end{equation*} The elliptic curve has $3$ points of order $2$, which give $b=0$ or $b$ undefined. There are also $4$ points of order $4$ at \begin{equation*} u=2c(c^2-1) \hspace{1cm} v= \pm 2c(c+1)(c-1)(c^2+2c-1) \end{equation*} and \begin{equation*} u=-2c(c^2-1) \hspace{1cm} v= \pm 2c(c+1)(c-1)(c^2-2c-1) \end{equation*} all of which give $|b|=1$. Thus, to get a non-trivial solution we need the elliptic curve to have rank at least $1$. Numerical investigations suggest that the rank is often $0$, so solutions do not exist for all $c$. We can derive parametric solutions by finding points of the curve, subject to certain conditions. For example, $u=c^2-1$ would give a point if $5c^2-1=\Box$. We can parametrize this quadric using the solution when $c=1$, to give \begin{equation*} a=\frac{(p-2)(p-5)(3p-5)}{p(p-1)(p-3)(2p-5)} \hspace{1cm} b=\frac{p^2-4p+5}{2(p^2-5p+5)} \hspace{1cm} c=\frac{p^2-4p+5}{p^2-5} \end{equation*} which gives strictly positive solutions when $p > 5$.
Another simple point to consider could be $u=2c^2(c-1)(c+3)$ which gives a rational point when $(c+3)(3c+1)=\Box$.
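Both the parametric family derived above and the first family from the question can be verified exactly with rational arithmetic; here is a quick sketch (the ranges of $p$ are arbitrary spot checks):

```python
from fractions import Fraction as F

def satisfies(a, b, c):
    # check (1-a^2)(1-b^2)(1-c^2) = 8abc exactly
    return (1 - a*a) * (1 - b*b) * (1 - c*c) == 8*a*b*c

def answer_family(p):
    # the parametrization found above via u = c^2 - 1, requiring 5c^2 - 1 a square
    a = (p - 2)*(p - 5)*(3*p - 5) / (p*(p - 1)*(p - 3)*(2*p - 5))
    b = (p*p - 4*p + 5) / (2*(p*p - 5*p + 5))
    c = (p*p - 4*p + 5) / (p*p - 5)
    return a, b, c

def question_family(p):
    # the first family listed in the question
    a = 4*p / (p*p + 1)
    b = (p*p - 3) / (3*p*p - 1)
    c = (p + 1)*(p*p - 4*p + 1) / ((p - 1)*(p*p + 4*p + 1))
    return a, b, c

all_ok = all(satisfies(*answer_family(F(p))) and satisfies(*question_family(F(p)))
             for p in range(6, 20))
assert all_ok
```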
{ "language": "en", "url": "https://mathoverflow.net/questions/208485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 3 }
sum, integral of certain functions While working on some research, I have encountered an infinite series and its improper integral analogue: \begin{align}\sum_{m=1}^{\infty}\frac1{\sqrt{m(m+1)(m+2)+\sqrt{m^3(m+2)^3}}}&=\frac12+\frac1{\sqrt{2}}, \\ \int_0^{\infty}\frac{dx}{\sqrt{x(x+1)(x+2)+\sqrt{x^3(x+2)^3}}}&=2.\end{align} The evaluations were guessed using numerical evidence. Can you provide proofs, or any reference (if available)?
For the integral, notice that the expression under the square root is $$ x(x+1)(x+2)+x(x+2)\sqrt{x(x+2)} = \frac12\,x(x+2)(\sqrt x+\sqrt{x+2})^2. $$ Consequently, \begin{align*} \frac1{\sqrt{x(x+1)(x+2)+x(x+2)\sqrt{x(x+2)}}} &= \frac{\sqrt 2}{(\sqrt x+\sqrt{x+2}) \sqrt{x(x+2)}} \\ &= \frac1{\sqrt 2}\,\frac{\sqrt{x+2}-\sqrt{x}}{\sqrt{x(x+2)}} \\ &= \frac1{\sqrt 2} \left( \frac1{\sqrt x}-\frac1{\sqrt{x+2}}\right); \end{align*} thus, the indefinite integral is $$ \sqrt{2}\, (\sqrt x-\sqrt{x+2})+C $$ and the result follows easily. As Antony Quas noticed, this also works for the sum showing that the partial sum over $m\in[1,M]$ is $$ \frac1{\sqrt 2} \sum_{m=1}^M \frac1{\sqrt m} - \frac1{\sqrt 2} \sum_{m=3}^{M+2} \frac1{\sqrt m} = \frac1{\sqrt 2} \left( 1+\frac1{\sqrt 2}\right) + o(1). $$
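The telescoping identity derived above is easy to sanity-check numerically; a minimal sketch:

```python
import math

def term(m):
    # the summand/integrand before simplification
    return 1.0 / math.sqrt(m*(m + 1)*(m + 2) + math.sqrt(m**3 * (m + 2)**3))

def telescoped(m):
    # (1/sqrt2)(1/sqrt(m) - 1/sqrt(m+2)), the simplified form derived above
    return (1/math.sqrt(2)) * (1/math.sqrt(m) - 1/math.sqrt(m + 2))

identity_ok = all(abs(term(m) - telescoped(m)) < 1e-12 for m in range(1, 1000))

M = 1_000_000
partial_sum = sum(term(m) for m in range(1, M + 1))
target = 0.5 + 1/math.sqrt(2)
# the tail beyond M is (1/sqrt2)(1/sqrt(M+1) + 1/sqrt(M+2)), about 1.4e-3 here
close_ok = abs(partial_sum - target) < 2e-3
assert identity_ok and close_ok
```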
{ "language": "en", "url": "https://mathoverflow.net/questions/257982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Question on a generalisation of a theorem by Euler We call an integer $k\geq 1$ good if for all $q\in\mathbb{Q}$ there are $a_1,\ldots, a_k\in \mathbb{Q}$ such that $$q = \prod_{i=1}^k a_i \cdot\big(\sum_{i=1}^k a_i\big).$$ Euler showed that $k=3$ is good. Is the set of good positive integers infinite?
I suspect that $k = 4$ is good, but am not sure how to prove it. However, every positive integer $k \geq 5$ is good. This follows from the fact (see the proof of Theorem 1 from this preprint) that for any rational number $x$, there are rational numbers $a$, $b$, $c$, $d$ so that $a+b+c+d = 0$ and $abcd = x$. In particular, one can take $$ a(x) = \frac{2(1-4x)^{2}}{3(1+8x)}, b(x) = \frac{-(1+8x)}{6}, c(x) = \frac{-(1+8x)}{2(1-4x)}, d(x) = \frac{18x}{(1-4x)(1+8x)}, $$ as long as $x \not\in \{1/4, -1/8\}$. (For $x = 1/4$ one can take $(a,b,c,d) = (-1/2,1/2,-1,1)$ and for $x = -1/8$ one can take $(a,b,c,d) = (-2/3,25/12,-1/15,-27/20)$.) Now, fix $k \geq 5$, let $q \in \mathbb{Q}$ and take $a_{1} = a(q/(k-4))$, $a_{2} = b(q/(k-4))$, $a_{3} = c(q/(k-4))$, $a_{4} = d(q/(k-4))$ and $a_{5} = a_{6} = \cdots = a_{k} = 1$. We have that $$ a_{1} + a_{2} + a_{3} + a_{4} + \cdots + a_{k} = 0 + a_{5} + \cdots + a_{k} = k-4 $$ and $a_{1} a_{2} a_{3} a_{4} \cdots = \frac{q}{k-4} \cdot 1 \cdot 1 \cdots 1 = \frac{q}{k-4}$. Thus $$ \left(\prod_{i=1}^{k} a_{i}\right) \left(\sum_{i=1}^{k} a_{i}\right) = \frac{q}{k-4} \cdot (k-4) = q. $$
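The construction can be verified in exact rational arithmetic; here is a sketch (the sample values of $q$ are arbitrary):

```python
from fractions import Fraction as F

def quadruple(x):
    # rationals with a+b+c+d = 0 and abcd = x, per the formulas above
    if x == F(1, 4):
        return [F(-1, 2), F(1, 2), F(-1), F(1)]
    if x == F(-1, 8):
        return [F(-2, 3), F(25, 12), F(-1, 15), F(-27, 20)]
    return [2*(1 - 4*x)**2 / (3*(1 + 8*x)),
            -(1 + 8*x) / 6,
            -(1 + 8*x) / (2*(1 - 4*x)),
            18*x / ((1 - 4*x)*(1 + 8*x))]

def witness(q, k):
    # a_1..a_k with (prod a_i)(sum a_i) = q, for k >= 5
    return quadruple(q / (k - 4)) + [F(1)] * (k - 4)

def check(q, k):
    a = witness(q, k)
    prod = F(1)
    for v in a:
        prod *= v
    return prod * sum(a) == q

all_good = all(check(q, k)
               for q in (F(3), F(-7, 2), F(22, 7), F(1, 4), F(100), F(-1, 8))
               for k in range(5, 10))
assert all_good
```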
{ "language": "en", "url": "https://mathoverflow.net/questions/302933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 2, "answer_id": 0 }
How to find the analytical representation of eigenvalues of the matrix $G$? I have the following matrix arising when I tried to discretize the Green function, now to show the convergence of my algorithm I need to find the eigenvalues of the matrix $G$ and show it has absolute value less than 1 for certain choices of $N$. Note that the explicit formula for entry $(i,j)$ is $-i(N+1-j)$ when $i\le j$ and it is symmetric, so we can get the formulas for $i>j$ by interchanging $i$ and $j$ in the $i\le j$ case. Any one has any ideas about how to find the analytical representation of eigenvalues of the matrix $G$, i,e, the eigenvalues represented by $N$? Thank you so much for any help! $\begin{pmatrix} - N & - N + 1 & -N+2 & -N+3 &\ldots & 1(-2) & 1(-1) \\ - N + 1 & 2( - N + 1) & 2(-N+2) & 2(-N+3) &\ddots & 2(-2) & 2(-1) \\ - N + 2 & 2( - N + 2) & 3(-N+2) & 3(-N+3) &\ddots & 3(-2) & 3(-1) \\ - N + 3 & 2( - N + 3) & 3(-N+3) & 4(-N+3) &\ddots & 4(-2) & 4(-1) \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ - 2 & 2(-2) & 3(-2) & 4(-2) &\ddots & ( - 1 + N)( - 2) & ( - 1 + N)( - 1) \\ - 1 & 2(-1) & 3(-1) & 4(-1) &\ldots & ( - 1 + N)( - 1) & N( - 1) \\ \end{pmatrix}$
It's straightforward to show that this is the inverse of $1/(N+1)$ times the tridiagonal matrix $T_N$ with $-2$ on its main diagonal and $1$ on its super- and sub-diagonals. Let $t_N$ be the characteristic polynomial of $T_N$. We have $t_0(x)=1$, $t_1(x)=x+2$, and by cofactor expansion $t_N(x)=(x+2)t_{N-1}(x)-t_{N-2}(x)$. That is, $t_N$ is related to the Chebyshev polynomials of the second kind by $t_N(x)=U_N(x/2+1)$. The roots of the Chebyshev polynomials are $\cos(\frac{k\pi}{N+1})$ for $k=1,\dots,N$, so the eigenvalues of the inverse of your $G$ are $\frac{2}{N+1}(\cos(\frac{k\pi}{N+1})-1)$. These have absolute value less than $1$ for $N\ge 3$.
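Both steps can be checked directly: $G\,T_N=(N+1)I$ holds in exact integer arithmetic, and the eigenvectors $v_k(j)=\sin\bigl(\frac{jk\pi}{N+1}\bigr)$ of $T_N$ confirm the eigenvalue formula. A sketch:

```python
import math

def G(N):
    # entry (i,j) = -min(i,j) * (N + 1 - max(i,j)), with 1-based indices
    return [[-min(i, j) * (N + 1 - max(i, j)) for j in range(1, N + 1)]
            for i in range(1, N + 1)]

def T(N):
    # tridiagonal: -2 on the diagonal, 1 on the sub/super-diagonals
    return [[-2 if i == j else (1 if abs(i - j) == 1 else 0)
             for j in range(N)] for i in range(N)]

N = 7
Gm, Tm = G(N), T(N)

# G is the inverse of T/(N+1), i.e. G*T = (N+1)*I  (exact)
prod = [[sum(Gm[i][k] * Tm[k][j] for k in range(N)) for j in range(N)]
        for i in range(N)]
inverse_ok = all(prod[i][j] == ((N + 1) if i == j else 0)
                 for i in range(N) for j in range(N))

# eigenvalues of G^{-1} = T/(N+1) are 2/(N+1)*(cos(k*pi/(N+1)) - 1),
# with eigenvectors v_k(j) = sin(j*k*pi/(N+1)); check T v = (N+1)*lam*v
eig_ok = True
for k in range(1, N + 1):
    lam = 2.0 / (N + 1) * (math.cos(k * math.pi / (N + 1)) - 1)
    v = [math.sin((j + 1) * k * math.pi / (N + 1)) for j in range(N)]
    Tv = [sum(Tm[i][j] * v[j] for j in range(N)) for i in range(N)]
    eig_ok = eig_ok and all(abs(Tv[i] - (N + 1) * lam * v[i]) < 1e-9
                            for i in range(N))

assert inverse_ok and eig_ok
```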
{ "language": "en", "url": "https://mathoverflow.net/questions/308835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Matrix rescaling increases lowest eigenvalue? Consider the set $\mathbf{N}:=\left\{1,2,....,N \right\}$ and let $$\mathbf M:=\left\{ M_i; M_i \subset \mathbf N \text{ such that } \left\lvert M_i \right\rvert=2 \text{ or }\left\lvert M_i \right\rvert=1 \right\}$$ be the set of all subsets of $\mathbf{N}$ that are of cardinality $1$ or $2.$ The cardinality of the set $\mathbf M$ itself is $\binom{n}{1}+\binom{n}{2}=:K$ We can then study for $y \in (0,1)$ the $K \times K$ matrix $$A_N = \left( \frac{\left\lvert M_i \cap M_j \right\rvert}{\left\lvert M_i \right\rvert\left\lvert M_j \right\rvert}y^{-\left\lvert M_i \cap M_j \right\rvert} \right)_{i,j}$$ and $$B_N = \left( \left\lvert M_i \cap M_j \right\rvert y^{-\left\lvert M_i \cap M_j \right\rvert} \right)_{i,j}.$$ Question I conjecture that $\lambda_{\text{min}}(A_N)\le \lambda_{\text{min}}(B_N)$ for any $N$ and would like to know if one can actually show this? As a first step, I would like to know if one can show that $$\lambda_{\text{min}}(A_N)\le C\lambda_{\text{min}}(B_N)$$ for some $C$ independent of $N$? In fact, I am not claiming that $A_N \le B_N$ in the sense of matrices. But it seems as if the eigenvalues of $B_N$ are shifted up when compared with $A_N.$ Numerical evidence: For $N=2$ we can explicitly write down the matrices $$A_2 =\left( \begin{array}{ccc} \frac{1}{y} & 0 & \frac{1}{2 y} \\ 0 & \frac{1}{y} & \frac{1}{2 y} \\ \frac{1}{2 y} & \frac{1}{2 y} & \frac{1}{2 y^2} \\ \end{array} \right) \text{ and }B_2 = \left( \begin{array}{ccc} \frac{1}{y} & 0 & \frac{1}{y} \\ 0 & \frac{1}{y} & \frac{1}{y} \\ \frac{1}{y} & \frac{1}{y} & \frac{2}{y^2} \\ \end{array} \right)$$ We obtain for the lowest eigenvalue of $A_2$ (orange) and $B_2$(blue) as a function of $y$ For $N=3$ we get qualitatively the same picture, i.e. 
the lowest eigenvalue of $A_3$ remains below the lowest one of $B_3$: In this case: $$A_3=\left( \begin{array}{cccccc} \frac{1}{y} & 0 & 0 & \frac{1}{2 y} & 0 & \frac{1}{2 y} \\ 0 & \frac{1}{y} & 0 & \frac{1}{2 y} & \frac{1}{2 y} & 0 \\ 0 & 0 & \frac{1}{y} & 0 & \frac{1}{2 y} & \frac{1}{2 y} \\ \frac{1}{2 y} & \frac{1}{2 y} & 0 & \frac{1}{2 y^2} & \frac{1}{4 y} & \frac{1}{4 y} \\ 0 & \frac{1}{2 y} & \frac{1}{2 y} & \frac{1}{4 y} & \frac{1}{2 y^2} & \frac{1}{4 y} \\ \frac{1}{2 y} & 0 & \frac{1}{2 y} & \frac{1}{4 y} & \frac{1}{4 y} & \frac{1}{2 y^2} \\ \end{array} \right)\text{ and } B_3=\left( \begin{array}{cccccc} \frac{1}{y} & 0 & 0 & \frac{1}{y} & 0 & \frac{1}{y} \\ 0 & \frac{1}{y} & 0 & \frac{1}{y} & \frac{1}{y} & 0 \\ 0 & 0 & \frac{1}{y} & 0 & \frac{1}{y} & \frac{1}{y} \\ \frac{1}{y} & \frac{1}{y} & 0 & \frac{2}{y^2} & \frac{1}{y} & \frac{1}{y} \\ 0 & \frac{1}{y} & \frac{1}{y} & \frac{1}{y} & \frac{2}{y^2} & \frac{1}{y} \\ \frac{1}{y} & 0 & \frac{1}{y} & \frac{1}{y} & \frac{1}{y} & \frac{2}{y^2} \\ \end{array} \right)$$
Claim. $\lambda_\min(A_N) \le 4\lambda_\min(B_N)$. Proof. Let $D_N:=\bigl[|M_i||M_j|\bigr]=dd^T$ with $d_i=|M_i|$, a rank-one positive semidefinite matrix. Then, $B_N = A_N \circ D_N$, where $\circ$ denotes the Hadamard product. Observe that $A_N$ is positive semidefinite by construction, so by the Schur product theorem $B_N$ is also psd. Let's drop the subscript $N$ for brevity. Define $c = \text{diag}(D)$ sorted in decreasing order, so in particular, $c_\min = \min_{i} |M_i|^2=1$. Now, from Theorem 3(ii) of Bapat and Sunder, it follows that: \begin{equation*} \lambda_\min(B)=\lambda_\min(A \circ D) \ge \lambda_\min(A)c_\min = \lambda_\min(A_N), \end{equation*} which proves the claim (indeed the stronger bound $\lambda_\min(A_N)\le\lambda_\min(B_N)$, matching the numerical observation in the question). Note: The result of Bapat and Sunder is more general. For psd matrices $A$ and $C$ it states that \begin{equation*} \prod_{j=k}^n \lambda_j(A\circ C) \ge \prod_{j=k}^n\lambda_j(A)c_j, \end{equation*} where $1\le k \le n$, and $\lambda_1(\cdot)\ge \lambda_2(\cdot) \ge \cdots \ge \lambda_n(\cdot)$, while $c$ is the sorted diagonal of $C$.
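The entrywise relation between $A_N$ and $B_N$ — namely $B_{ij}=A_{ij}\,|M_i||M_j|$, i.e. $B$ is the Hadamard product of $A$ with the rank-one matrix $[\,|M_i||M_j|\,]$ — can be checked in exact arithmetic for small $N$; a sketch:

```python
from fractions import Fraction as F
from itertools import combinations

N = 4
y = F(1, 3)
M = [set(s) for r in (1, 2) for s in combinations(range(1, N + 1), r)]
size = [len(m) for m in M]
K = len(M)   # = N + N(N-1)/2

def A(i, j):
    c = len(M[i] & M[j])
    return F(c, size[i] * size[j]) * y**(-c)

def B(i, j):
    c = len(M[i] & M[j])
    return c * y**(-c)

hadamard_ok = all(B(i, j) == A(i, j) * size[i] * size[j]
                  for i in range(K) for j in range(K))
assert hadamard_ok
```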
{ "language": "en", "url": "https://mathoverflow.net/questions/313470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Asymptotic Expansion of Bessel Function Integral I have an integral: $$I(y) = \int_0^\infty \frac{xJ_1(yx)^2}{\sinh(x)^2}\ dx $$ and would like to asymptotically expand it as a series in $1/y$. Does anyone know how to do this? By numerically computing the integral it appears that $$I(y) = \frac 12 - \frac 1 {\pi y}+ \frac {3\zeta(3)}{4y^3\pi^3} + O(y^{-5}) $$ but this is just (high precision) guesswork and I would like to understand the series analytically.
Inserting the Mellin-Barnes representation for the square of the Bessel function (DLMF), \begin{equation} J_{1}^2\left(xy\right)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i \infty}\frac{\Gamma\left(-t\right)\Gamma\left(2t+3\right)}{\Gamma^2\left(t+2\right)\Gamma% \left(t+3\right)}\left(\frac{xy}{2}\right)^{2t+2}\,dt \end{equation} where $-3/2<\Re (c)<0$, and changing the order of integration, one obtains \begin{equation} I(y)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i \infty}\frac{\Gamma\left(-t\right)\Gamma\left(2t+3\right)}{\Gamma^2\left(t+2\right)\Gamma% \left(t+3\right)}\left(\frac{y}{2}% \right)^{2t+2}\,dt\int_0^\infty \frac{x^{2t+3}}{\sinh^2x}\,dx \end{equation} From G. & R. (3.527.1) \begin{equation} \int_0^\infty \frac{x^{2t+3}}{\sinh^2x}\,dx=\frac{1}{2^{2t+2}}\Gamma\left( 2t+4 \right)\zeta\left( 2t+3 \right) \end{equation} valid for $t>-1$. Thus we choose $-1<\Re(c)<0$ and thus \begin{equation} I(y)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i \infty}\frac{\Gamma\left(-t\right)\Gamma\left(2t+3\right)\Gamma\left( 2t+4 \right)\zeta\left( 2t+3 \right)}{\Gamma^2\left(t+2\right)\Gamma\left(t+3\right)}\left(\frac{y}{4}\right)^{2t+2}\,dt \end{equation} To evaluate asymptotically this integral, one can close the contour by the large left half-circle. Poles are situated at $t=-1$ and $t=-\frac{2n+1}{2}$, with $n=1,2,3\ldots$. With the help of a CAS, the first corresponding residues are: \begin{equation} R_{-1}=\frac{1}{2}\quad ;\quad R_{-3/2}=-\frac{1}{4\pi}\quad ;\quad R_{-5/2}=-\frac{3}{64\pi}\zeta'(-2)\quad ;\quad R_{-7/2}=\frac{15}{8192\pi}\zeta'(-4) \end{equation} (General expressions can probably be found, if necessary). The derivative of the Riemann Zeta function at even integer values are involved and can be simply expressed. We obtain finally \begin{equation} I(y)=\frac{1}{2}-\frac{1}{\pi}y^{-1}+\frac{3\zeta(3)}{4\pi^3}y^{-3}+\frac{45\zeta(5)}{32\pi^5}y^{-5}+O\left( y^{-7} \right) \end{equation}
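The G. & R. (3.527.1) step can be spot-checked numerically: writing $m=2t+3$, it reads $\int_0^\infty x^m/\sinh^2 x\,dx = 2^{1-m}\,\Gamma(m+1)\,\zeta(m)$. A sketch using Simpson's rule (the truncation at $x=50$ and the series length for $\zeta$ are ad-hoc accuracy choices):

```python
import math

def simpson(g, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def zeta(m, terms=200000):
    return sum(k ** -m for k in range(1, terms))

errors = []
for m in (3, 5):
    lhs = simpson(lambda x: x**m / math.sinh(x)**2, 1e-9, 50.0, 100000)
    rhs = 2 ** (1 - m) * math.gamma(m + 1) * zeta(m)
    errors.append(abs(lhs - rhs))
assert max(errors) < 1e-6
```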
{ "language": "en", "url": "https://mathoverflow.net/questions/315264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
eigenvalues of a symmetric matrix I have a special $N\times N$ matrix with the following form. It is symmetric and zero row (and column) sums. $$K=\begin{bmatrix} k_{11} & -1 & \frac{-1}{2} & \frac{-1}{3} & \frac{-1}{4} & \ldots & \frac{-1}{N-2} & \frac{-1}{N-1} & \\ -1 & k_{22} & \frac{-1}{2} & \frac{-1}{3} & \frac{-1}{4} & \ldots & \frac{-1}{N-2} & \frac{-1}{N-1} & \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \\ \frac{-1}{N-2} & \frac{-1}{N-2} & \frac{-1}{N-2} & \frac{-1}{N-2} & \frac{-1}{N-2} & \ldots & k_{N-1,N-1} & \frac{-1}{N-1} & \\ \frac{-1}{N-1} & \frac{-1}{N-1} & \frac{-1}{N-1} & \frac{-1}{N-1} & \frac{-1}{N-1} & \ldots & \frac{-1}{N-1} & 1 & \\ \end{bmatrix} $$ where $K_{ii}=\sum_{j=1, j\ne i}^{N}{(-k_{ij})}$ for $i=1, 2,3,\ldots , N $ For example if N=4, we have: $$K = \begin{bmatrix} 11/6 & -1 & -1/2 & -1/3 & \\ -1 & 11/6 & -1/2 & -1/3 & \\ -1/2 & -1/2 & 4/3 & -1/3 & \\ -1/3 & -1/3 & -1/3 & 1 & \\ \end{bmatrix} $$ How can I find an explicit equation for its eigenvalues?
Phillip Lampe seems to be correct. Here are the eigenvalues and eigenvectors computed by hand: Let $k_1 = 2 + \tfrac12 + \cdots + \tfrac{1}{N-1}$, then: $\lambda_0 = 0$ with eigenvector all ones (by construction). $\lambda_1 = k_{1}$ with eigenvector $\begin{bmatrix}-1& 1& 0&\cdots& 0\end{bmatrix}^T$ $\lambda_2 = k_1-1$ with eigenvector $\begin{bmatrix}-\tfrac12& -\tfrac12& 1& 0 &\cdots& 0\end{bmatrix}^T$ $\lambda_3 = k_1 -1- \tfrac12$ with eigenvector $\begin{bmatrix}-\tfrac13& -\tfrac13& -\tfrac13& 1& 0&\cdots& 0\end{bmatrix}^T$ $\lambda_4 = k_1 - 1-\tfrac12 - \tfrac13$ with eigenvector $\begin{bmatrix}-\tfrac14& \cdots& -\tfrac14& 1& 0&\cdots &0\end{bmatrix}^T$ and so on until $\lambda_{N-1} = k_1 -1-\tfrac12-\cdots-\tfrac{1}{N-2} = 1 + \tfrac{1}{N-1} = \tfrac{N}{N-1}$ with eigenvector $\begin{bmatrix}-\tfrac1{N-1}& \cdots& -\tfrac{1}{N-1}& 1\end{bmatrix}^T$. So in short: The eigenvalues are $0$ and the values $\lambda_j = 1+\sum_{i=j}^{N-1}\tfrac1i$ for $j=1,\dots,N-1$.
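These eigenpairs can be confirmed in exact arithmetic. The sketch below builds $K$ from its definition (off-diagonal entry $-1/(\max(i,j)-1)$ in 1-based indexing, diagonal chosen for zero row sums) and checks every eigenvalue/eigenvector pair:

```python
from fractions import Fraction as F

def build_K(N):
    M = [[F(0)] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i != j:
                M[i][j] = -F(1, max(i, j))  # 0-based max(i,j) = 1-based max - 1
    for i in range(N):
        M[i][i] = -sum(M[i][j] for j in range(N) if j != i)  # zero row sums
    return M

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

N = 8
Km = build_K(N)

# lambda_0 = 0 with the all-ones eigenvector
zero_ok = matvec(Km, [F(1)] * N) == [F(0)] * N

# lambda_j = 1 + sum_{i=j}^{N-1} 1/i with eigenvector (-1/j,...,-1/j, 1, 0,...,0)
eig_ok = True
for j in range(1, N):
    lam = 1 + sum(F(1, i) for i in range(j, N))
    v = [-F(1, j)] * j + [F(1)] + [F(0)] * (N - 1 - j)
    eig_ok = eig_ok and matvec(Km, v) == [lam * x for x in v]

assert zero_ok and eig_ok
```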
{ "language": "en", "url": "https://mathoverflow.net/questions/324165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Representation of $4\times4$ matrices in the form of $\sum B_i\otimes C_i$ Every matrix $A\in M_4(\mathbb{R})$ can be represented in the form of $$A=\sum_{i=1}^{n(A)} B_i\otimes C_i$$ for $B_i,C_i\in M_2(\mathbb{R})$. What is the least uniform upper bound $M$ for such $n(A)$? In other words, what is the least integer $M$ such that every $A$ admit such a representation with $n(A)\leq M$? Is this least upper bound equal to the corresponding least upper bound for all matrices $A$ which are a matrix representation of quaternions $a+bi+cj+dk$? As another question about tensor product representation: What is a sufficient condition for a $4\times 4$ matrix $A$ to be represented in the form of $A=B\otimes C -C\otimes B$?
Because $M_4(\mathbb R) = M_2(\mathbb R) \otimes M_2(\mathbb R)$ as vector spaces (and as algebras, but we won't use this), we can replace $M_2(\mathbb R)$ by an arbitrary $4$-dimensional vector space $V$ and $M_4(\mathbb R)$ by $V \otimes V$. We can represent elements of $V\otimes V$ conveniently as $4\times 4$ matrices, where simple tensors are rank one matrices. The question is then equivalent to asking the minimum number of rank $1$ matrices it takes to write a $4 \times 4$ matrix as a sum of rank $1$ matrices. The answer is obviously $4$. For quaternions acting by left multiplication, in the standard basis: $$1= \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\0 & 1 \end{pmatrix} \otimes\begin{pmatrix} 1 & 0 \\0 & 1 \end{pmatrix}$$ $$i= \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\1 & 0 \end{pmatrix} \otimes\begin{pmatrix} 1 & 0 \\0 & 1 \end{pmatrix}$$ $$j= \begin{pmatrix} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \otimes\begin{pmatrix} 0 & -1 \\1 & 0 \end{pmatrix}$$ $$k= \begin{pmatrix} 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \otimes\begin{pmatrix} 0 & -1 \\1 & 0 \end{pmatrix}$$ Because only two terms appear on the right side, these matrices clearly have rank at most $2$ as tensors, and some attain rank exactly $2$, so the answer is two.
More precisely the general such matrix can be written as $$ \begin{pmatrix} 0 & a & -a & 0 \\ b & c & d & e \\ -b & -d & -c & -e \\ 0 & f & -f & 0 \\ \end{pmatrix}$$ such that $af-be+cd=0$
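The decomposition above is easy to verify directly. Note that with the standard Kronecker convention (`kron(A, B)` has blocks $A_{ij}B$) the products listed correspond to `kron` with the factors swapped — both conventions for $\otimes$ are in common use. A sketch:

```python
def kron(A, B):
    # standard Kronecker product: block (i,j) is A[i][j] * B
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I2 = [[1, 0], [0, 1]]
J = [[0, -1], [1, 0]]
D = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]

one = kron(I2, I2)
qi = kron(I2, J)   # the answer's J (x) I, in the swapped convention
qj = kron(J, D)    # the answer's D (x) J
qk = kron(J, X)    # the answer's X (x) J

# match the explicit 4x4 matrices listed above
assert qi == [[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]]
assert qj == [[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]]
assert qk == [[0, 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]]

# quaternion relations: i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j
neg = [[-x for x in row] for row in one]
relations_ok = (matmul(qi, qi) == neg and matmul(qj, qj) == neg
                and matmul(qk, qk) == neg and matmul(qi, qj) == qk
                and matmul(qj, qk) == qi and matmul(qk, qi) == qj)
assert relations_ok
```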
{ "language": "en", "url": "https://mathoverflow.net/questions/331525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Primality test for specific class of $N=8kp^n-1$ My following question is related to my question here Can you provide a proof or a counterexample for the following claim : Let $P_m(x)=2^{-m}\cdot \left(\left(x-\sqrt{x^2-4}\right)^{m}+\left(x+\sqrt{x^2-4}\right)^{m}\right)$ . Let $N=8kp^n-1$ such that $k>0$ , $3 \not\mid k$ , $p$ is a prime number, $p \neq 3$ , $n > 2$ and $8k<p^n$ . Let $S_i=P_p(S_{i-1})$ with $S_0=P_{2kp^2}(4)$ , then: $$N \text{ is a prime iff } S_{n-2} \equiv 0\pmod{N}$$ You can run this test here. EDIT I have verified this claim for $k \in [1,500]$ with $p \leq 139$ and $n \in [3,50]$ .
This is a partial answer. This answer proves that if $N$ is a prime, then $S_{n-2}\equiv 0\pmod N$. Proof : It can be proven by induction that $$S_i=(2-\sqrt 3)^{2kp^{i+2}}+(2+\sqrt 3)^{2kp^{i+2}}\tag1$$ Using $(1)$ and $2\pm\sqrt 3=\bigg(\frac{\sqrt{6}\pm\sqrt 2}{2}\bigg)^2$, we get $$\begin{align}&2^{N+1}S_{n-2}^2-2^{N+2} \\\\&=(\sqrt 6-\sqrt 2)(\sqrt 6-\sqrt 2)^{N}+(\sqrt 6+\sqrt 2)(\sqrt 6+\sqrt 2)^{N} \\\\&=\sqrt 6\bigg((\sqrt 6+\sqrt 2)^{N}+(\sqrt 6-\sqrt 2)^{N}\bigg) \\&\qquad\qquad +\sqrt 2\bigg((\sqrt 6+\sqrt 2)^{N}-(\sqrt 6-\sqrt 2)^{N}\bigg) \\\\&=\sqrt 6\sum_{i=0}^{N}\binom Ni(\sqrt 6)^{N-i}\bigg((\sqrt 2)^i+(-\sqrt 2)^i\bigg) \\&\qquad\qquad +\sqrt 2\sum_{i=0}^{N}\binom Ni(\sqrt 6)^{N-i}\bigg((\sqrt 2)^i-(-\sqrt 2)^i\bigg) \\\\&=\sum_{j=0}^{(N-1)/2}\binom N{2j}6^{(N+1-2j)/2}\cdot 2^{j+1}+\sum_{j=1}^{(N+1)/2}\binom N{2j-1}6^{(N-2j+1)/2}\cdot 2^{1+j} \\\\&\equiv 6^{(N+1)/2}\cdot 2+2^{(N+3)/2}\pmod N \\\\&\equiv 12\cdot 2^{(N-1)/2}\cdot 3^{(N-1)/2}+4\cdot 2^{(N-1)/2}\pmod N \\\\&\equiv 12\cdot (-1)^{(N^2-1)/8}\cdot \frac{(-1)^{(N-1)/2}}{\bigg(\frac N3\bigg)}+4\cdot (-1)^{(N^2-1)/8}\pmod N \\\\&\equiv 12\cdot 1\cdot \frac{-1}{1}+4\cdot 1\pmod N \\\\&\equiv -8\pmod N\end{align}$$ So, we get $$2^{N+1}S_{n-2}^2-2^{N+2}\equiv -8\pmod N$$ It follows from $2^{N-1}\equiv 1\pmod N$ that $$S_{n-2}\equiv 0\pmod N$$
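The proven direction can be exercised numerically: $P_m(x)$ is the Lucas sequence $V_m(x,1)$, so each $S_i$ can be computed mod $N$ with the standard doubling rules $V_{2j}=V_j^2-2$ and $V_{2j+1}=V_jV_{j+1}-x$. A sketch (the parameter choices are arbitrary ones satisfying the hypotheses and making $N$ prime):

```python
def lucas_V(m, x, N):
    # V_0 = 2, V_1 = x, V_{j+1} = x*V_j - V_{j-1}; binary "doubling" mod N
    a, b = 2 % N, x % N              # (V_j, V_{j+1}) with j = 0
    for bit in bin(m)[2:]:
        if bit == '1':
            a, b = (a * b - x) % N, (b * b - 2) % N
        else:
            a, b = (a * a - 2) % N, (a * b - x) % N
    return a

def S_last(k, p, n, N):
    # S_0 = P_{2kp^2}(4), S_i = P_p(S_{i-1}); returns S_{n-2} mod N
    s = lucas_V(2 * k * p * p, 4, N)
    for _ in range(n - 2):
        s = lucas_V(p, s, N)
    return s

# sanity: P_2(4) = 14, P_3(4) = 52
assert lucas_V(2, 4, 10**9) == 14 and lucas_V(3, 4, 10**9) == 52

# N = 8*k*p^n - 1 prime  =>  S_{n-2} == 0 (mod N); e.g. N = 1999 and N = 4999
results = []
for k, p, n in [(2, 5, 3), (1, 5, 4)]:
    N = 8 * k * p**n - 1
    results.append(S_last(k, p, n, N))
assert results == [0, 0]
```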
{ "language": "en", "url": "https://mathoverflow.net/questions/361489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How small the radical of $xy(x+y)uv(u+v)$ can be infinitely often? Let $x,y,u,v$ be positive integers with $x,y$ coprime and $u,v$ coprime ( $xy,uv$ not necessarily coprime). Assume $x+y \ne u+v$. How small the radical of $xy(x+y)uv(u+v)$ can be infinitely often? Can we get $O(|(x+y)(u+v)|^{1-C})$ for $C>0$? These are just two pairs of good $abc$ triples so we can get $C=0$ with pairwise coprimality.
Here is a solution where the radical is $O(k^9)$ and $(x+y)(u+v)=O(k^{12})$. The idea is that $x,y,x+y=a^2,b^2,c^2$ for a Pythagorean triple and $u,v,u+v=A^2,B^2,C^2$ for another with $C=c^2.$ I used the most familiar type of triple (hypotenuse and long leg differ by $1$); there might be others that do better, or special values of $k.$ * *$x=(2k+1)^2$ *$y=\left(2k(k+1)\right)^2$ *$u=(y-x)^2=((2k^2-1)(2k^2+4k+1))^2$ *$v=4xy=(4k(k+1)(2k+1))^2$ then * *$x+y=(2k^2+2k+1)^2$ *$u+v=(x+y)^2=(2k^2+2k+1)^4$ Thus $(x+y)(u+v)=O(k^{12})$. But the radical of $xy(x+y)uv(u+v)$ is at most $$k(k+1)(2k+1)(2k^2-1)(2k^2+2k+1)(2k^2+4k+1)=O(k^9)$$
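Writing the second triple's legs as $A=b^2-a^2$ and $B=2ab$, so that $u=A^2=(y-x)^2$ and $v=B^2=4xy$, everything can be verified in exact integer arithmetic — the two Pythagorean identities, coprimality, and that every prime dividing $xy(x+y)uv(u+v)$ divides the stated radical bound:

```python
import math

def check(k):
    a, b = 2*k + 1, 2*k*(k + 1)
    x, y = a*a, b*b
    u, v = (y - x)**2, 4*x*y
    assert x + y == (2*k*k + 2*k + 1)**2          # first triple: a^2 + b^2 = c^2
    assert u + v == (x + y)**2                    # second triple, C = c^2
    assert y - x == (2*k*k - 1)*(2*k*k + 4*k + 1)
    assert math.gcd(x, y) == 1 and math.gcd(u, v) == 1
    # every prime of P divides R = k(k+1)(2k+1)(2k^2-1)(2k^2+2k+1)(2k^2+4k+1):
    # repeatedly divide P by gcd(P, R); it must reach 1
    P = x*y*(x + y)*u*v*(u + v)
    R = k*(k + 1)*(2*k + 1)*(2*k*k - 1)*(2*k*k + 2*k + 1)*(2*k*k + 4*k + 1)
    g = math.gcd(P, R)
    while g > 1:
        P //= g
        g = math.gcd(P, R)
    assert P == 1

for k in range(1, 60):
    check(k)
radical_ok = True
```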
{ "language": "en", "url": "https://mathoverflow.net/questions/377124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there a nonzero solution to this infinite system of congruences? Is there a triple of nonzero even integers $(a,b,c)$ that satisfies the following infinite system of congruences? $$ a+b+c\equiv 0 \pmod{4} \\ a+3b+3c\equiv 0 \pmod{8} \\ 3a+5b+9c\equiv 0 \pmod{16} \\ 9a+15b+19c\equiv 0 \pmod{32} \\ \vdots \\ s_na + t_nb + s_{n+1}c \equiv 0 \pmod{2^{n+1}} \\ \vdots $$ where $(s_n)$ and $(t_n)$ are weighted tribonacci sequences defined by $$ s_1=s_2=1, \\ s_3=3, \\ s_n = s_{n-1} +2s_{n-2} + 4s_{n-3} \text{ for } n>3, $$ and $$ t_1=1, \\ t_2=3, \\ t_3=5, \\ t_n = t_{n-1} +2t_{n-2} + 4t_{n-3} \text{ for } n>3. $$ I think there are no nonzero solutions, but I haven't been able to prove this. Computationally, I found there are no nonzero solutions for integers $a$, $b$, and $c$ up to $1000$. Note the $s_n$ and $t_n$ are always odd, and that the ratios $\frac{s_n}{s_{n-1}}$ and $\frac{t_n}{t_{n-1}}$ approach $2.4675...$.
$u_n=s_na + t_nb + s_{n+1}c$ satisfies the same recurrence relation as $s_n$ and $t_n$: $u_n = u_{n-1} +2u_{n-2} + 4u_{n-3}$. The question is whether $2^{n+1}\mid u_n$. Since $v_n=u_n/2^{n+1}$ satisfies $v_n = \displaystyle\frac{v_{n-1} +v_{n-2} + v_{n-3}}{2}$ the answer is affirmative only if there are $v_0, v_1, v_2$ (not all 0) such that $v_n$ is always integral. EDIT. As sharply noticed by the OP, the attempt below was wrong, since a matrix I claimed to be invertible (mod $2$) is in fact singular. A similar, more computational argument does work (mod $5$). It's clear that (mod $2$) such a sequence $v_n$ must follow either one of the 3-periodic patterns $000$ and $110$ (up to shifts). In the $000$ case keep dividing entire sequence $(v_n)$ by $2$ until one term is odd, and then shift the sequence to start with that term, so it's back to the $110$ case. Therefore it must be that $$\require{cancel}\cancel{\det\left (\begin{smallmatrix}v_1 & v_2 & v_3\\ v_2 & v_3 & v_4\\ v_3 & v_4 & v_5 \end{smallmatrix}\right )\equiv \det\left (\begin{smallmatrix}1 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 1 \end{smallmatrix}\right )\!\!\!\!\pmod{2} \ne0}$$ CORRECTED ARGUMENT. Notice that $$D=\det\left (\begin{matrix}v_1 & v_2 & v_3\\ v_2 & v_3 & v_4\\ v_3 & v_4 & v_5 \end{matrix}\right )=\det\left (\begin{matrix}v_1 & v_2 & v_3\\ v_2 & v_3 & \frac{v_1+v_2+v_3}{2}\\ v_3 & \frac{v_1+v_2+v_3}{2} & \frac{v_1+3v_2+3v_3}{4} \end{matrix}\right )\\= \frac{-4v_3^3+4v_2 v_3^2+2v_1 v_3^2+v_2^2 v_3+5v_1 v_2 v_3-v_1^2 v_3-3v_2^3-2v_1 v_2^2-2v_1^2 v_2-v_1^3}{4}$$ is $\equiv 0 \pmod{5}$ if and only if $v_1\equiv v_2\equiv v_3\equiv 0 \pmod{5}$. 
This is proved by the following snippet of code: awk -vp=5 'BEGIN { for(a=0; a<p; a++) for(b=0; b<p; b++) for(c=0; c<p; c++) { d=4*c^3-4*b*c^2-2*a*c^2-b^2*c-5*a*b*c+a^2*c+3*b^3+2*a*b^2+2*a^2*b+a^3; if(d%p==0) print a, b, c; } }' Now divide the entire sequence $(v_n)$ by a power of $5$ so that at least one term is not $\equiv 0$, and shift it to start with that term, thus $v_1\not\equiv 0$ and therefore $D\ne0$. Next, this implies that there is an integral linear combination $(z_n)$ of $(v_n)$ and its shifts $(v_{n+1})$, $(v_{n+2})$ such that $z_1=z_2=0$, $z_3\ne 0$, and still $z_n=(z_{n-1} +z_{n-2} + z_{n-3})/2$ holds. Finally write $z_3=2^m d$, with $d$ odd, and start the recursion from $0, 0, 2^m d$ to easily notice that it runs into a half-integer in $m+1$ steps.
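The awk loop translates directly to Python; it confirms that, mod $5$, the numerator $4D$ (up to sign) vanishes only at $v_1\equiv v_2\equiv v_3\equiv 0$:

```python
p = 5

def numerator(a, b, c):
    # 4*D up to sign, exactly as in the awk snippet above
    return (4*c**3 - 4*b*c**2 - 2*a*c**2 - b*b*c - 5*a*b*c + a*a*c
            + 3*b**3 + 2*a*b*b + 2*a*a*b + a**3)

zeros = [(a, b, c)
         for a in range(p) for b in range(p) for c in range(p)
         if numerator(a, b, c) % p == 0]
assert zeros == [(0, 0, 0)]
```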
{ "language": "en", "url": "https://mathoverflow.net/questions/381057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
An explicit equation for $X_1(13)$ and a computation using MAGMA By a general theory $X_1(13)$ is smooth over $\mathbb{Z}[1/13]$, and so is its Jacobian $J$. And the hyperelliptic curve given by an affine model $y^2 = x^6 - 2x^5 + x^4 -2x^3 + 6x^2 -4x + 1$ is $X_1(13)$. However, according to MAGMA, $J$ is bad at $2$. What is wrong with my argument? Here is my code: P<x> := PolynomialRing(RationalField()); C := HyperellipticCurve(x^6 - 2 * x^5 + x^4 - 2 * x^3 + 6 * x^2 - 4 * x +1); J := Jacobian(C); BadPrimes(J);
To get a model with good reduction at $2$, take $y = 2Y + x^3 + x^2 + 1$, subtract $(x^3+x^2+1)^2$ from both sides, and divide by $4$ to get $$ Y^2 + (x^3+x^2+1) \, Y = -x^5-x^3+x^2-x. $$ (A similar tactic of un-completing the square is well-known for elliptic curves.)
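The substitution is easy to verify with integer polynomial arithmetic; the sketch below checks that $f-(x^3+x^2+1)^2 = 4(-x^5-x^3+x^2-x)$:

```python
def pmul(p, q):
    # product of polynomials given by ascending coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def psub(p, q):
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) - (q[i] if i < len(q) else 0)
            for i in range(m)]

# coefficients in ascending order of degree
f = [1, -4, 6, -2, 1, -2, 1]      # x^6 - 2x^5 + x^4 - 2x^3 + 6x^2 - 4x + 1
q = [1, 0, 1, 1]                  # x^3 + x^2 + 1
rhs = [0, -1, 1, -1, 0, -1]       # -x^5 - x^3 + x^2 - x

# y = 2Y + q turns y^2 = f into 4Y^2 + 4qY = f - q^2, i.e. Y^2 + qY = (f - q^2)/4
diff = psub(f, pmul(q, q))
assert diff == [4 * c for c in rhs] + [0]
```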
{ "language": "en", "url": "https://mathoverflow.net/questions/385038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Improper integral $\int_0^\infty\frac{x^{2n+1}}{(1+x^2)^2(2+x^2)^N}dx,\ \ \ n\le N$ How can I evaluate this integral? $$\int_0^\infty\frac{x^{2n+1}}{(1+x^2)^2(2+x^2)^N}dx,\ \ \ n\le N$$ Maybe there is a recurrence relation for the integral?
Let us renumber $N=n+L$ and let $K_{n,L} = I_{n,n+L} = \frac{1}{2} \int_0^\infty \frac{y^n}{(1+y)^2 (2+y)^{n+L}} \, dy$, which is the desired integral after the variable change $y=x^2$. Let $K(s,t) = \sum_{L=0}^\infty \sum_{n=0}^\infty K_{n,L} s^L t^n$. For small enough $s$ and $t$, the integrands converge uniformly over the interval of integration, so we can exchange the summation with the integration to get \begin{align*} K(s,t) &= \frac{1}{2} \int_0^\infty \frac{(2+y)^2}{(1+y)^2 (2-s+y) (2+(1-t)y)} \, dy \\ &= \frac{1}{2\pi i} \frac{1}{2} \oint_\gamma \frac{\log(-y) (2+y)^2}{(1+y)^2 (2-s+y) (2+(1-t)y)} \, dy, \end{align*} where the complex contour $\gamma$ tightly encircles the positive real line clockwise. Deforming the contour to counter-clockwise encircle the poles at $y=-1,-2+s,-2/(1-t)$ (which picks up minus the residues), we get \begin{align*} K(s,t) &= -\sum_{z=-1,-2+s,-\frac{2}{(1-t)}} \operatorname{Res}_{y=z} \frac{1}{2} \frac{\log(-y) (2+y)^2}{(1+y)^2 (2-s+y) (2+(1-t)y)} \\ &= \frac{1}{2} \frac{1}{(1-s)(1+t)} - \frac{1}{2} \frac{s^2 \log(2-s)}{(1-s)^2 (s+2t-st)} + \frac{1}{2} \frac{4t^2 \log(\frac{2}{1-t})}{(1+t)^2 (s+2t-st)} \\ &= \frac{1}{2} \frac{1}{(1-s)(1+t)} \\ & \quad {} - \frac{1}{2(1-t)(s+\frac{2t}{1-t})} \left( \frac{s^2 \log(2-s)}{(1-s)^2} - \frac{(\frac{2t}{1-t})^2 \log(2+\frac{2t}{1-t})}{(1+\frac{2t}{1-t})^2} \right) . \end{align*} Note that the last term is of the form $(f(x)-f(y))/(x-y)$ (up to a constant factor) for $x=s$ and $y=-\frac{2t}{1-t}$, so that this ratio is expressible as some $g(x,y)$ that is analytic at $x=y$, which tells you how an expansion in $s$ and $t$ is possible despite the pesky factor of $1/(s+\frac{2t}{1-t})$. This answer is the same as Neil Strickland's. But he didn't specify how to do the expansion in the presence of the pesky denominator mentioned above.
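The residue evaluation can be sanity-checked numerically. After the substitution $y=u/(1-u)$ the integrand becomes the smooth rational function $(2-u)^2/\bigl((2-s-(1-s)u)(2-(1+t)u)\bigr)$ on $[0,1]$, so Simpson's rule applies directly. At the sample point $s=\tfrac12$, $t=\tfrac13$ (where $s+2t-st=1$) it agrees with the closed form in which the $\log(2-s)$ term enters with a minus sign and the $\log\frac{2}{1-t}$ term with a plus sign:

```python
import math

s, t = 0.5, 1.0 / 3.0
denom = s + 2*t - s*t            # = 1 at this sample point

def g(u):
    # integrand (2+y)^2/((1+y)^2 (2-s+y)(2+(1-t)y)) dy after y = u/(1-u)
    return (2 - u)**2 / ((2 - s - (1 - s)*u) * (2 - (1 + t)*u))

def simpson(h, a, b, n):
    step = (b - a) / n
    acc = h(a) + h(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * h(a + i * step)
    return acc * step / 3

numeric = 0.5 * simpson(g, 0.0, 1.0, 20000)

closed = (0.5 / ((1 - s)*(1 + t))
          - 0.5 * s*s * math.log(2 - s) / ((1 - s)**2 * denom)
          + 0.5 * 4*t*t * math.log(2 / (1 - t)) / ((1 + t)**2 * denom))

assert abs(numeric - closed) < 1e-8
```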
{ "language": "en", "url": "https://mathoverflow.net/questions/393753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
General formulas for derivative of $f_n(x)=\dfrac{ax^n+bx^{n-1}+cx^{n-2}+\cdots}{a'x^n+b'x^{n-1}+c'x^{n-2}+\cdots},\quad a'\neq0$ For the function $f_1(x)=\dfrac{ax+b}{a'x+b'},\quad a'\neq0$ , we have $$f_1'(x)=\dfrac{\begin{vmatrix}{a} && {b} \\ {a'} && {b'}\end{vmatrix}}{(a'x+b')^2}$$ For $f_2(x)=\dfrac{ax^2+bx+c}{a'x^2+b'x+c'},\quad a'\neq0$, we have $$f_2'(x)=\dfrac{{ \begin{vmatrix}{a} && {b} \\ {a'} && {b'}\end{vmatrix} }x^2+2{ \begin{vmatrix}{a} && {c} \\ {a'} && {c'}\end{vmatrix} }x+{ \begin{vmatrix}{b} && {c} \\ {b'} && {c'}\end{vmatrix} }}{(a'x^2+b'x+c')^2}$$ Can we generalize the formula containing determinants to find $f_n'(x)$?
If $f_n=\frac{P_n(x)}{Q_n(x)}$ and $P_n=\sum a_kx^k$, $Q_n=\sum b_kx^k$, then $$f'_n=\frac{\begin{vmatrix}{P'} && {Q'} \\ {P} && {Q}\end{vmatrix} }{Q^2}$$ Breaking the determinant in the numerator gives $$\sum_{j\geq 0} \left(\sum_{k+r=j+1} k\begin{vmatrix}{a_{k}} && {b_k} \\ {a_{j+1-k}} && {b_{j+1-k}}\end{vmatrix} \right)x^j=\sum_{j=0}^{2n-1}c_jx^j$$ Now, $k+r=j+1$ with $0\le k,r \leq n$ implies $n\geq k,r \geq (j+1)-n$ (and the factor $k$ kills the $k=0$ terms). Also, we can further simplify $c_j$ by pairing $k$ with $r=j+1-k$: $$c_j=\sum_{k=j+1-n}^{\lfloor{\frac{j+1}{2}}\rfloor}\left[ k\begin{vmatrix}{a_k} && {b_k} \\ {a_{j+1-k}} && {b_{j+1-k}}\end{vmatrix}+ (j+1-k)\begin{vmatrix}{a_{j+1-k}} && {b_{j+1-k}} \\ {a_k} && {b_k}\end{vmatrix}\right]$$ Hence, $$c_j=\sum_{k=j+1-n}^{\lfloor{\frac{j+1}{2}}\rfloor} (j+1-2k)\begin{vmatrix}{a_{j+1-k}} && {b_{j+1-k}} \\ {a_{k}} && {b_{k}}\end{vmatrix}$$ This gives $c_{2n-1}=0$.
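The closed form for the $c_j$ can be checked against a direct expansion of $P'Q-PQ'$ with random integer coefficients; a sketch:

```python
import random

random.seed(7)
n = 6
a = [random.randint(-9, 9) for _ in range(n + 1)]   # P = sum a_k x^k
b = [random.randint(-9, 9) for _ in range(n + 1)]   # Q = sum b_k x^k

def deriv(p):
    return [k * p[k] for k in range(1, len(p))]

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] += x * y
    return r

def psub(p, q):
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) - (q[i] if i < len(q) else 0)
            for i in range(m)]

numerator = psub(pmul(deriv(a), b), pmul(a, deriv(b)))   # P'Q - PQ'

def c(j):
    # c_j = sum over max(0, j+1-n) <= k <= floor((j+1)/2) of
    #       (j+1-2k) * det[[a_r, b_r], [a_k, b_k]] with r = j+1-k
    total = 0
    for k in range(max(0, j + 1 - n), (j + 1) // 2 + 1):
        r = j + 1 - k
        total += (j + 1 - 2*k) * (a[r]*b[k] - b[r]*a[k])
    return total

formula = [c(j) for j in range(2 * n)]
match = numerator == formula
assert match
```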
{ "language": "en", "url": "https://mathoverflow.net/questions/396250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Any hints on how to prove that the function $\lvert\alpha\;\sin(A)+\sin(A+B)\rvert - \lvert\sin(B)\rvert$ is negative over the half of the total area? I have this inequality with $0<A,B<\pi$ and a real $\lvert\alpha\rvert<1$: $$ f(A,B):=\bigl|\alpha\;\sin(A)+\sin(A+B)\bigr| - \bigl| \sin(B)\bigr| < 0$$ Numerically, I see that regardless of the value of $\alpha$, the area in which $f(A,B)<0$ is always half of the total area $\pi^2$. I appreciate any hints and comments on how I can prove this.
Let us assume $\alpha\in[0,1)$ (the case of $\alpha\in(-1,0]$ is similar). As $\sin B>0$ for $B\in (0,\pi)$, the inequality $f(A,B)<0$ amounts to $$ \alpha\sin A<\sin B-\sin(A+B),\quad -[\sin B+\sin(A+B)]<\alpha\sin A.\quad (\star) $$ Notice that $\sin A=2\sin\left(\frac{A}{2}\right)\cos\left(\frac{A}{2}\right)$, $\sin B-\sin(A+B)=-2\sin\left(\frac{A}{2}\right)\cos\left(\frac{A}{2}+B\right)$ and $\sin B+\sin(A+B)=2\cos\left(\frac{A}{2}\right)\sin\left(\frac{A}{2}+B\right)$. Substituting in $(\star)$ and cancelling the positive terms $2\sin\left(\frac{A}{2}\right)$ and $2\cos\left(\frac{A}{2}\right)$, we obtain the equivalent inequalities $$ \alpha\cos\left(\frac{A}{2}\right)<-\cos\left(\frac{A}{2}+B\right), \quad -\sin\left(\frac{A}{2}+B\right)<\alpha\sin\left(\frac{A}{2}\right).\quad (\star\star) $$ In $(\star\star)$, the LHS of the first inequality and the RHS of the second are non-negative. Hence $\frac{A}{2}+B$ - which belongs to $\left(0,\frac{3\pi}{2}\right)$ - must be in the second or the third quadrant; otherwise, the first inequality in $(\star\star)$ does not hold. Let us analyze these cases separately: * *If $\frac{\pi}{2}\leq\frac{A}{2}+B\leq\pi$, then the second inequality in $(\star\star)$ holds automatically (its RHS is always non-negative); and the first one can be written as $$\alpha\cos\left(\frac{A}{2}\right)<\cos\left(\pi-\frac{A}{2}-B\right).$$ Applying the strictly decreasing function $\cos^{-1}:[0,1]\rightarrow\left[0,\frac{\pi}{2}\right]$ yields: $$\pi-\frac{A}{2}-B<\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right).$$ This of course implies $\frac{A}{2}+B\geq\frac{\pi}{2}$. But we also need $\frac{A}{2}+B\leq\pi$. Combining these, the bounds for $B$ in terms of $A\in(0,\pi)$ are given by $$ \pi-\frac{A}{2}-\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right)\leq B\leq\pi-\frac{A}{2}. $$ The difference of the two bounds is $\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right)$. 
Consequently, the contribution to the area of $\{(A,B)\mid f(A,B)<0\}$ is $$ \int_{\{(A,B)\mid f(A,B)<0,\, \frac{\pi}{2}\leq\frac{A}{2}+B\leq\pi\}}\mathbf{1}= \int_{0}^\pi\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right){\rm{d}}A.\quad (1) $$ *If $\pi\leq\frac{A}{2}+B\leq\frac{3\pi}{2}$, all terms appearing in $(\star\star)$ are non-negative. We first rewrite these inequalities as $$ \alpha\cos\left(\frac{A}{2}\right)<\cos\left(\frac{A}{2}+B-\pi\right), \quad \sin\left(\frac{A}{2}+B-\pi\right)<\alpha\sin\left(\frac{A}{2}\right). $$ Next applying strictly monotonic functions $\cos^{-1}:[0,1]\rightarrow\left[0,\frac{\pi}{2}\right]$ and $\sin^{-1}:[0,1]\rightarrow\left[0,\frac{\pi}{2}\right]$ to them results in: $$ \frac{A}{2}+B-\pi<\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right),\quad \frac{A}{2}+B-\pi<\sin^{-1}\left(\alpha\sin\left(\frac{A}{2}\right)\right). $$ Hence the upper bound $$ B<\pi+\min\left\{\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right),\sin^{-1}\left(\alpha\sin\left(\frac{A}{2}\right)\right)\right\}-\frac{A}{2} $$ which of course implies $\frac{A}{2}+B\leq\frac{3\pi}{2}$. But $\pi\leq\frac{A}{2}+B$ is also required. We therefore arrive at the bounds for $B$ in terms of $A\in(0,\pi)$: $$ \pi-\frac{A}{2}\leq B\leq \pi+\min\left\{\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right),\sin^{-1}\left(\alpha\sin\left(\frac{A}{2}\right)\right)\right\}-\frac{A}{2}. $$ The difference of the bounds is $\min\left\{\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right),\sin^{-1}\left(\alpha\sin\left(\frac{A}{2}\right)\right)\right\}$. 
Therefore, the contribution to the area of $\{(A,B)\mid f(A,B)<0\}$ is $$ \int_{\{(A,B)\mid f(A,B)<0,\, \pi\leq\frac{A}{2}+B\leq\frac{3\pi}{2}\}}\mathbf{1}\\ =\int_{0}^\pi\min\left\{\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right),\sin^{-1}\left(\alpha\sin\left(\frac{A}{2}\right)\right)\right\}{\rm{d}}A.\quad (2) $$ Adding $(1)$ and $(2)$, the area of $\{(A,B)\mid f(A,B)<0\}$ turns out to be $$ \int_{0}^\pi\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right){\rm{d}}A\\+ \int_{0}^\pi\min\left\{\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right),\sin^{-1}\left(\alpha\sin\left(\frac{A}{2}\right)\right)\right\}{\rm{d}}A.\quad (\star\star\star)$$ So the question is if the quantity above coincides with $\frac{\pi^2}{2}$ for all $\alpha\in [0,1)$. First, we claim that the minimum above is $\sin^{-1}\left(\alpha\sin\left(\frac{A}{2}\right)\right)$. Notice that: $$ \sin^{-1}\left(\alpha\sin\left(\frac{A}{2}\right)\right) \leq\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right)\\ \Leftrightarrow \cos^{-1}\left(\alpha\sin\left(\frac{A}{2}\right)\right)+\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right)\geq\frac{\pi}{2}; $$ and the cosine of the last angle appearing above is $$ \left[\alpha\sin\left(\frac{A}{2}\right)\right]\left[\alpha\cos\left(\frac{A}{2}\right)\right] -\sqrt{1-\alpha^2\sin^2\left(\frac{A}{2}\right)} \sqrt{1-\alpha^2\cos^2\left(\frac{A}{2}\right)};$$ which is negative as $\alpha\sin\left(\frac{A}{2}\right)<\sqrt{1-\alpha^2\cos^2\left(\frac{A}{2}\right)}$ and $\alpha\cos\left(\frac{A}{2}\right)<\sqrt{1-\alpha^2\sin^2\left(\frac{A}{2}\right)}$ due to $|\alpha|<1$. We conclude that $(\star\star\star)$ is equal to $$ \int_{0}^\pi\cos^{-1}\left(\alpha\cos\left(\frac{A}{2}\right)\right){\rm{d}}A+ \int_{0}^\pi\sin^{-1}\left(\alpha\sin\left(\frac{A}{2}\right)\right){\rm{d}}A. $$ Call the expression above $h(\alpha)$. The goal is to establish $h(\alpha)=\frac{\pi^2}{2}$ for any $\alpha\in[0,1]$. 
This is clear when $\alpha=0$, and so it suffices to show $\frac{{\rm{d}}h}{{\rm{d}}\alpha}\equiv 0$. One has $$ \frac{{\rm{d}}h}{{\rm{d}}\alpha}= -\int_0^{\pi}\frac{\cos\left(\frac{A}{2}\right)}{\sqrt{1-\alpha^2\cos^2\left(\frac{A}{2}\right)}}{\rm{d}}A +\int_0^{\pi}\frac{\sin\left(\frac{A}{2}\right)}{\sqrt{1-\alpha^2\sin^2\left(\frac{A}{2}\right)}}{\rm{d}}A; $$ which is clearly zero because the change of variable $A\mapsto\pi-A$ indicates $$\int_0^{\pi}\frac{\cos\left(\frac{A}{2}\right)}{\sqrt{1-\alpha^2\cos^2\left(\frac{A}{2}\right)}}{\rm{d}}A =\int_0^{\pi}\frac{\sin\left(\frac{A}{2}\right)}{\sqrt{1-\alpha^2\sin^2\left(\frac{A}{2}\right)}}{\rm{d}}A.$$ This concludes the proof.
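(Not part of the proof, but) the key identity $h(\alpha)=\frac{\pi^2}{2}$ is easy to check numerically with a composite Simpson rule; the tolerance and the sample values of $\alpha$ below are arbitrary choices:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def h_alpha(alpha):
    # h(alpha) = ∫_0^π arccos(α cos(A/2)) dA + ∫_0^π arcsin(α sin(A/2)) dA
    f = lambda A: math.acos(alpha * math.cos(A / 2))
    g = lambda A: math.asin(alpha * math.sin(A / 2))
    return simpson(f, 0, math.pi) + simpson(g, 0, math.pi)

for alpha in (0.0, 0.3, 0.6, 0.9):
    assert abs(h_alpha(alpha) - math.pi**2 / 2) < 1e-9
```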
{ "language": "en", "url": "https://mathoverflow.net/questions/401878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
Can we cover the unit square by these rectangles? The following question was a research exercise (i.e. an open problem) in R. Graham, D.E. Knuth, and O. Patashnik, "Concrete Mathematics", 1988, chapter 1. It is easy to show that $$\sum_{1 \leq k } \left(\frac{1}{k} \times \frac{1}{k+1}\right) = \sum_{1 \leq k } \left(\frac{1}{k} - \frac{1}{k+1}\right) = 1.$$ The product $\frac{1}{k} \times \frac{1}{k+1}$ is equal to the area of a $\frac{1}{k}$by$\frac{1}{k+1}$ rectangle. The sum of the areas of these rectangles is equal to 1, which is the area of a unit square. Can we use these rectangles to cover a unit square? Is this problem still open? What are the best results we know about this problem (or its relaxations)?
It is hard to prove this problem directly, but it is not hard to prove (as someone mentioned in the comments): if $(n-1)$ rectangles have been put into the $1\times 1$ square, then all the rectangles can be put into the square of side length $(1+1/n)$. Proof: we have put $(n-1)$ rectangles into the unit square. Denote $P_n = \frac{1}{n} \times \frac{1}{n+1} \sim \frac{1}{n} \times \frac{1}{n}$, i.e. each rectangle fits inside a square of side $\frac{1}{n}$. Pack the rectangles from $n$ to infinity, row by row, into a large rectangle as follows: first row, $P_n \cdots P_{2n-1}$; second row, $P_{2n} \cdots P_{4n-1}$; third row, $P_{4n} \cdots P_{8n-1}$; .... as the following picture shows. The left vertical side is the familiar geometric series, of length $\frac{1}{n} + \frac{1}{2n} + \frac{1}{4n} + \cdots = \frac{2}{n}$, and the first horizontal row is a partial sum of the harmonic series, $\frac{1}{n} + \frac{1}{n+1} + \frac{1}{n+2} + \cdots + \frac{1}{2n-1}$. It is easy to estimate the upper bound: $$ \frac{1}{n} + \frac{1}{n+1} + \frac{1}{n+2} + \cdots + \frac{1}{2n-1} + \frac{1}{2n} - \frac{1}{2n} \\ = \frac{1}{n} +( \frac{1}{n+1} + \frac{1}{n+2} + \cdots + \frac{1}{2n-1} + \frac{1}{2n}) - \frac{1}{2n} \\ < \frac{1}{n} + \ln 2 - \frac{1}{2n} = \ln 2 + \frac{1}{2n} $$ (the sum in brackets is easy to check: take $\frac{1}{x}$; the area under the curve is larger than the sum of the rectangles). So the length of the first row is less than $\ln 2 + \frac{1}{2n}$, the length of the second row is less than $\ln 2 + \frac{1}{4n}$, ... The first row is the longest, and when $n$ is large, say $n = 1000$, its length $\ln 2 + \frac{1}{2n}$ is clearly less than $1$, since $\ln 2 < 0.7$.
Then the rectangles from $n$ to infinity have been packed into a rectangle of size $\frac{2}{n} \times (\ln2 + \frac{1}{2n})$; divide this rectangle into two equal smaller rectangles $\frac{1}{n} \times (\ln2 + \frac{1}{2n})$ and place them along the unit square as follows. This proves that all the rectangles can be put into a square of side length $(1+1/n)$, although packing them in two strips like this is not efficient, and it can obviously be improved in several ways. Paulhus gave a picture of 1000 rectangles in the unit square in his paper (An Algorithm for Packing Squares), and I recovered a packing of 1000 rectangles from an algorithm of Antal Joós in his paper (On packing of rectangles in a rectangle), similar to Paulhus' method (it took 2 seconds with a Mathematica program), much like Paulhus' result. So at least the bound $(1+1/1000)$, obtained in this tricky way, is better than Balint's result, and it seems not hard to go further, say to 10000; Paulhus even claimed that he had packed $10^9$ rectangles into the unit square by computer! If someone has proved the final problem or has checked results by computer for very large numbers, please tell me.
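A minimal numeric check of the two estimates used above (each row is narrower than $\ln 2 + \frac{1}{2\cdot 2^j n} < 1$, and the rows stack to total height under $\frac{2}{n}$); the choice $n=1000$ and the number of rows checked are arbitrary:

```python
import math

n = 1000
total_height = 0.0
for j in range(10):                      # row j holds P_{2^j n} ... P_{2^{j+1} n - 1}
    start, stop = 2**j * n, 2**(j + 1) * n
    width = sum(1.0 / k for k in range(start, stop))
    # each row is no wider than ln 2 + 1/(2*start), hence < 1
    assert width < math.log(2) + 1.0 / (2 * start)
    assert width < 1.0
    total_height += 1.0 / start          # tallest rectangle in the row is 1/(start+1) < 1/start
assert total_height < 2.0 / n            # geometric series: sum_j 1/(2^j n) -> 2/n
```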
{ "language": "en", "url": "https://mathoverflow.net/questions/34145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95", "answer_count": 6, "answer_id": 5 }
General integer solution for $x^2+y^2-z^2=\pm 1$ How to find general solution (in terms of parameters) for diophantine equations $x^2+y^2-z^2=1$ and $x^2+y^2-z^2=-1$? It's easy to find such solutions for $x^2+y^2-z^2=0$ or $x^2+y^2-z^2-w^2=0$ or $x^2+y^2+z^2-w^2=0$, but for these ones I cannot find anything relevant.
I believe the general solution to $x^2+y^2-z^2=1$ is $x=(rs+tu)/2$, $y=(rt-su)/2$, $z=(rs-tu)/2$, where $rt+su=2$. EDIT: Solutions to $x^2+y^2+1=z^2$ can be obtained by choosing $a$, $b$, $c$, $d$ such that $ad-bc=1$ and then letting $x=(a^2+b^2-c^2-d^2)/2$, $y=ac+bd$, $z=(a^2+b^2+c^2+d^2)/2$, though I'm not sure you get all the integer solutions this way. The rational solutions are a bit easier. $(0,0,1)$ is a (rational) point on the surface. The line $(0,0,1)+t(a,b,c)$ through that point intersects the surface again at $x=2ac/(a^2+b^2-c^2)$, $y=2bc/(a^2+b^2-c^2)$, $z=(a^2+b^2+c^2)/(a^2+b^2-c^2)$, giving all the rational points on the surface.
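The claimed identities behind both parametrizations can be brute-force checked over a small range in exact rational arithmetic (the ranges are arbitrary; this checks the identities, not that every integer solution arises this way):

```python
from fractions import Fraction
from itertools import product

# x^2 + y^2 - z^2 = 1 from (r,s,t,u) with rt + su = 2
checked = 0
for r, s, t, u in product(range(-5, 6), repeat=4):
    if r * t + s * u == 2:
        x, y, z = (Fraction(r*s + t*u, 2), Fraction(r*t - s*u, 2),
                   Fraction(r*s - t*u, 2))
        assert x * x + y * y - z * z == 1
        checked += 1

# x^2 + y^2 + 1 = z^2 from (a,b,c,d) with ad - bc = 1
unimodular = 0
for a, b, c, d in product(range(-4, 5), repeat=4):
    if a * d - b * c == 1:
        x, y, z = (Fraction(a*a + b*b - c*c - d*d, 2), a*c + b*d,
                   Fraction(a*a + b*b + c*c + d*d, 2))
        assert x * x + y * y + 1 == z * z
        unimodular += 1
assert checked > 0 and unimodular > 0
```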
{ "language": "en", "url": "https://mathoverflow.net/questions/65957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Fermat's proof for $x^3-y^2=2$ Fermat proved that $x^3-y^2=2$ has only one solution $(x,y)=(3,5)$. After some search, I only found proofs using factorization over the ring $Z[\sqrt{-2}]$. My question is: Is this Fermat's original proof? If not, where can I find it? Thank you for viewing. Note: I am not expecting to find Fermat's handwritings because they may not exist. I was hoping to find a proof that would look more ''Fermatian''.
Here is how Fermat probably did it (it is how I did it - not all of the steps were needed but I have to believe this was close to Fermat's thought process). Any prime of the form $8n+1$ or $8n+3$ can be written in the form $a^2 +2b^2$. This is proved with descent techniques once one realizes that $-2$ and $1$ are squares mod $8n+1$ or $8n+3$, and hence setting $a^2\equiv-2$ and $b^2 \equiv 1$ gives $a^2+2b^2\equiv 0$ (mod $8n+1$ or $8n+3$), which means our prime divides the result. Any prime of the form $8n+5$ or $8n+7$ cannot be. Point two is that combinations of squares with common shapes, when multiplied by each other, retain their shape. Let $x = a^2 + Sb^2$, and $y = c^2 + Sd^2$. $xy = (ac+Sbd)^2 + S(ad-bc)^2 = (ac-Sbd)^2 + S(ad+bc)^2$ Point three is that if $y$ is even then $y^2 + 2$ is even, as is $x^3$. Dividing both sides by $2$ would make the left-hand side odd and the right-hand side even, so both $y$ and $x$ are odd. Point four is that if a non-prime is of the form $a^2 + 2b^2$ then all its prime factors must be of the form $8n+1$ or $8n+3$, or the factor must be a square. Point five is to observe that $y^2 + 2$ is of the form $a^2 + 2b^2$ with $a=y$ and $b=1$. Combining this with four and one means there are no squares of the form $8n+5$ or $8n+7$ since $b$ would be equal to that square, not $1$. So now we expand upon point two to make the proof. $x$ is of the form $a^2 + 2b^2$. $x^3$ can be written as $(a^3-3Sab^2)^2 + S(3a^2b-Sb^3)^2$. Letting $S=2$ we see that the expression $(3a^2b-2b^3)^2$ must be equal to $1$. Hence $b^2 \cdot (3a^2-2b^2)^2 =1$. Using positive integers we see that $b=a=1$ is the only solution. Hence $x = 1^2 + 2\cdot 1^2 = 3$ is the only possibility and $5^2 + 2 = 3^3$ is the only solution.
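Two of the ingredients are easy to check by brute force in a sample range (the search bounds are arbitrary): the representation claim for primes $\equiv 1, 3 \pmod 8$, and that $(3,5)$ is indeed the only solution in that range:

```python
from math import isqrt

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, isqrt(m) + 1))

# every prime p ≡ 1, 3 (mod 8) below 2000 is a^2 + 2b^2; none ≡ 5, 7 (mod 8) is
for p in range(3, 2000):
    if not is_prime(p):
        continue
    rep = any(p - 2 * b * b >= 0 and isqrt(p - 2 * b * b) ** 2 == p - 2 * b * b
              for b in range(1, isqrt(p // 2) + 1))
    assert rep == (p % 8 in (1, 3))

# (x, y) = (3, 5) is the only solution of x^3 - y^2 = 2 with 2 <= x <= 10^4
sols = [(x, isqrt(x**3 - 2)) for x in range(2, 10_001)
        if isqrt(x**3 - 2) ** 2 == x**3 - 2]
assert sols == [(3, 5)]
```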
{ "language": "en", "url": "https://mathoverflow.net/questions/142220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 5, "answer_id": 3 }
The relationship between the dilogarithm and the golden ratio Among the values for which the dilogarithm and its argument can both be given in closed form are the following four equations: $Li_2( \frac{3 - \sqrt{5}}{2}) = \frac{\pi^2}{15} - \log^2( \frac{1 +\sqrt{5}}{2} )$ (1) $Li_2( \frac{-1 + \sqrt{5}}{2}) = \frac{\pi^2}{10} - \log^2( \frac{1 +\sqrt{5}}{2} )$ (2) $Li_2( \frac{1 - \sqrt{5}}{2}) = -\frac{\pi^2}{15} + \frac{1}{2}\log^2( \frac{1 +\sqrt{5}}{2} )$ (3) $Li_2( \frac{-1 - \sqrt{5}}{2}) = -\frac{\pi^2}{10} - \log^2( \frac{1 +\sqrt{5}}{2} )$ (4) (from Zagier's The Remarkable Dilogarithm) where the argument of the logarithm on the right hand side is the golden ratio $\phi$. The above equations all have this (loosely speaking) kind of duality, and almost-symmetry that gets broken by the fact that $Li_2(\phi)$ fails to make an appearance. Can anyone explain what is the significance of the fact that $\phi$ appears on the right, but not on the left? Immediately one can see that the arguments on the lefthand side of (2)-(4) are related to $\phi$ as roots of a polynomial, but what other meaning does this structure have?
According to Maple $$ {\rm polylog}\left(2, \dfrac{1+\sqrt{5}}{2} \right) = \dfrac{7 \pi^2}{30} - \dfrac{1}{2} \log^2 \left(\dfrac{1+\sqrt{5}}{2}\right) - \log \left(\dfrac{1+\sqrt{5}}{2}\right) \log \left(\dfrac{1-\sqrt{5}}{2}\right)$$ Of course $\log((1-\sqrt{5})/2) = \log(-1/\phi) = - \log (\phi) + i \pi$
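For the three arguments that lie inside the unit disk, the evaluations can be checked directly from the defining series (200 terms is an arbitrary truncation); the classical value for $\frac{1-\sqrt5}{2}=-1/\phi$ is $-\frac{\pi^2}{15}+\frac12\log^2\phi$:

```python
import math

def li2(z, terms=200):
    # dilogarithm by its defining series; fine for |z| < 1
    return sum(z**k / k**2 for k in range(1, terms + 1))

L = math.log((1 + math.sqrt(5)) / 2)              # log of the golden ratio
assert abs(li2((3 - math.sqrt(5)) / 2) - (math.pi**2 / 15 - L * L)) < 1e-12
assert abs(li2((math.sqrt(5) - 1) / 2) - (math.pi**2 / 10 - L * L)) < 1e-12
assert abs(li2((1 - math.sqrt(5)) / 2) - (-math.pi**2 / 15 + L * L / 2)) < 1e-12
```

The fourth argument $\frac{-1-\sqrt5}{2}=-\phi$ lies outside the disk, so it would need the analytic continuation (e.g. the inversion formula) rather than the raw series.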
{ "language": "en", "url": "https://mathoverflow.net/questions/144322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Number of Permutations? Edit: This is a modest rephrasing of the question as originally stated below the fold: for $n \geq 3$, let $\sigma \in S_n$ be a fixed-point-free permutation. How many fixed-point-free permutations $\tau$ are there such that $\sigma \tau^{-1}$ is also fixed-point free? As the original post shows, this number is a function of $\sigma$; can one give a formula based on the character table of $S_n$? Given two permutations of $1, \ldots, N$, where $3\le N\le 1000$. Example For $N=4$ First is $\begin{pmatrix}3& 1& 2& 4\end{pmatrix}$. Second is $\begin{pmatrix}2& 4& 1& 3\end{pmatrix}$. Find the number of possible permutations $X_1, \ldots, X_N$ of $1, \ldots, N$ such that if we write all three in a $3\times N$ matrix, each column must have unique elements. $\begin{pmatrix}3 & 1 & 2 & 4\\ 2 & 4 & 1 & 3\\ X_1 & X_2 & X_3 & X_4\end{pmatrix},$ here $X_1$ can't be 3 or 2, $X_2$ can't be 1 or 4, $X_3$ can't be 2 or 1, $X_4$ can't be 4 or 3. The answer to the above sample is 2, and the possible permutations for the third row are $\begin{pmatrix}1 & 3 & 4 & 2\end{pmatrix}$ and $\begin{pmatrix}4 & 2 & 3& 1\end{pmatrix}$. Example 2 First is $\begin{pmatrix}2 & 4 & 1 & 3\end{pmatrix}$. Second is $\begin{pmatrix}1 & 3 & 2 & 4\end{pmatrix}$. The answer is 4. Possible permutations for the third row are $\begin{pmatrix}3&1&4&2\end{pmatrix}$, $\begin{pmatrix}3&2&4&1\end{pmatrix}$, $\begin{pmatrix}4&1&3&2\end{pmatrix}$ and $\begin{pmatrix}4&2&3&1\end{pmatrix}$.
A formula for the answer to this question is given in formula (3) of J. Riordan, Three-line Latin rectangles, Amer. Math. Monthly 51, (1944), 450–452. Riordan doesn't really include a proof, though it's not too hard to see how the formula follows from the theory of rook polynomials. (He refers to a paper of Kaplansky, but Kaplansky's paper doesn't have this formula.) I believe that Riordan also discusses this problem in his book Introduction to Combinatorial Analysis, but I didn't check.
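The counts in the question's two examples can be checked by brute force for small $n$ — this is just the direct definition, not Riordan's formula:

```python
from itertools import permutations

def count_third_rows(row1, row2):
    # number of permutations x of 1..n avoiding both given rows columnwise
    n = len(row1)
    return sum(all(x[i] != row1[i] and x[i] != row2[i] for i in range(n))
               for x in permutations(range(1, n + 1)))

assert count_third_rows((3, 1, 2, 4), (2, 4, 1, 3)) == 2
assert count_third_rows((2, 4, 1, 3), (1, 3, 2, 4)) == 4
```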
{ "language": "en", "url": "https://mathoverflow.net/questions/144899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
Binomial Identity I recently noted that $$\sum_{k=0}^{n/2} \left(-\frac{1}{3}\right)^k\binom{n+k}{k}\binom{2n+1-k}{n+1+k}=3^n$$ Is this a known binomial identity? Any proof or reference?
I rewrote your identity in an equivalent form $$ \sum_{k=0}^{n/2} (-1)^k 3^{n-k} \binom{n+k}{n, k} \binom{2n-k+1}{n-2k, n+k+1} = 3^{2n} , $$ and attempted to construct a proof by induction on $n$. I did not succeed, but discovered an interesting hurdle: to prove the equally curious identity $$ \sum_{k=0}^{n/2} (-1)^k 3^{n-k} \binom{n+k}{n, k} \binom{2n-k-2}{n-2k, n+k-2} = 0 . $$ Does this latter identity have a human proof? Here's a sketch of how I obtained the latter identity by rewriting the original equation. By Pascal's rule we have $ \binom{n+k}{n, k} = \binom{n+k-1}{n-1, k} + \binom{n+k-1}{n, k-1}; $ similarly $ \binom{2n-k+1}{n-2k, n+k+1} = 3 \binom{2n-k-1}{n-2k-1, n+k} + \binom{2n-k-2}{n-2k-3, n+k+1} + \binom{2n-k -2}{n-2k, n+k-2} . $ Multiply these and regroup: the original summation now equals $$ \sum_{k=0}^{n/2} (-1)^k 3^{n-k} \binom{n+k-1}{n-1, k} \ 3 \binom{2n-k-1}{n-2k-1, n+k} $$ $$ + \sum_{k=0}^{n/2} (-1)^k 3^{n-k} \binom{n+k-1}{n, k-1} \ 3 \binom{2n-k-1}{n-2k-1, n+k} + \sum_{k=0}^{n/2} (-1)^k 3^{n-k} \binom{n+k}{n, k} \binom{2n-k-2}{n-2k-3, n+k+1} $$ $$ + \sum_{k=0}^{n/2} (-1)^k 3^{n-k} \binom{n+k}{n, k} \binom{2n-k-2}{n-2k, n+k-2} . $$ Evaluate this first summation by the induction hypothesis and obtain $9 \cdot 3^{2(n-1)}$, which equals $3^{2n}$ as desired. The second and third summations are identical to each other except for a change of sign; together their sum is 0. So our proof by induction (of the original statement) would conclude if we establish that this fourth summation equals 0.
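Both the original identity and the "hurdle" identity are easy to test exactly for small $n$ with integer arithmetic (`math.comb(a, b)` returns 0 when $b > a$, which conveniently kills out-of-range terms; the range of $n$ below is arbitrary):

```python
from fractions import Fraction
from math import comb

for n in range(1, 30):
    # original identity: sum_k (-1/3)^k C(n+k,k) C(2n+1-k, n+1+k) = 3^n
    s = sum(Fraction(-1, 3)**k * comb(n + k, k) * comb(2*n + 1 - k, n + 1 + k)
            for k in range(n // 2 + 1))
    assert s == 3**n

for n in range(2, 30):
    # the "hurdle" identity: sum_k (-1)^k 3^(n-k) C(n+k,k) C(2n-k-2, n-2k) = 0
    h = sum((-1)**k * 3**(n - k) * comb(n + k, k) * comb(2*n - k - 2, n - 2*k)
            for k in range(n // 2 + 1))
    assert h == 0
```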
{ "language": "en", "url": "https://mathoverflow.net/questions/155402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
What is the value of the infinite product: $(1+ \frac{1}{1^1}) (1+ \frac{1}{2^2}) (1+ \frac{1}{3^3}) \cdots $? What is the value of the following infinite product? $$\left(1+ \frac{1}{1^1}\right) \left(1+ \frac{1}{2^2}\right) \left(1+ \frac{1}{3^3}\right) \cdots $$ Is the value known?
I'm not sure what the criterion for a full answer is here, so here is a technique for $(1+c_k)$-type products, turning the infinite product into an infinite sum: Via telescoping, for friendly $a_n$ and any $m$, we have $$\lim_{n\to\infty}a_n=a_m+\sum_{n=m}^\infty\left(\dfrac{a_{n+1}}{a_n}-1\right)\,{a_n}.$$ So define $$a_n:=\prod_{k=1}^{n-1}\left(1+c_k\right)\hspace{.5cm}\implies\hspace{.5cm}\dfrac{a_{n+1}}{a_n}-1=c_n,$$ and then $$\prod_{n=1}^\infty\left(1+c_n\right) = \lim_{n\to\infty}a_n = \prod_{k=1}^{m-1}\left(1+c_k\right)+\sum_{n=m}^\infty c_n\prod_{k=1}^{n-1}\left(1+c_k\right).$$ For $c_n=\dfrac{1}{n^n}$, that's $$\frac{1^1+1}{1^1}\,\frac{2^2+1}{2^2}\frac{3^3+1}{3^3}+\sum_{n=4}^\infty\frac{1}{n^n}\prod_{k=1}^{n-1}\left(1+\frac{1}{k^k}\right)=2.603\dots$$ The first term is the lower bound $\frac{70}{27}=2.592\dots$ that's been pointed out in the comments, and the remaining sum $\frac{1}{4^4}\dots+\frac{1}{5^5}\dots$ collects some $\mathcal{O}(10^{-2})$. Truncation of the product after $k=1$ reveals the infinite product is almost two times Sophomore's dream: $$\prod_{n=1}^\infty\left(1+n^{-n}\right)\approx 2\sum_{n=1}^\infty \frac{1}{n^n}=2.582\dots$$
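Both the direct partial products and the telescoped-sum form are easy to compare numerically (the truncation points are arbitrary; the terms decay so fast that 25 factors are plenty):

```python
def partial_product(m):
    # a_m = prod_{k=1}^{m-1} (1 + 1/k^k)
    p = 1.0
    for k in range(1, m):
        p *= 1 + 1.0 / k**k
    return p

# telescoped form with m = 4: a_4 + sum_{n>=4} c_n * a_n, where c_n = 1/n^n
value = partial_product(4) + sum(partial_product(n) / n**n for n in range(4, 25))
direct = partial_product(25)
assert abs(value - direct) < 1e-12          # the two forms agree
assert abs(direct - 2.603612) < 1e-5        # the 2.603... limit quoted above
assert direct > 70 / 27                     # lower bound (1+1)(1+1/4)(1+1/27)
```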
{ "language": "en", "url": "https://mathoverflow.net/questions/200815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Create matrix containing values in [0,1] where sum of all diagonals and anti-diagonals is fixed The problem I am facing sounds at first glance pretty simple. However, as very often, it seems more complicated than I first assumed: I want to calculate a matrix $P = (p_{j,k}) \in \mathbb{R}^{n \times n}$, $n\in\mathbb{N}$, satisfying the following constraints: * *$p_{j,k} \in [0,1]$, *$\sum_{j,k} \ p_{j,k} = 1$, *the sums of all $2n-1$ diagonals are fixed by $b_{1-n},\ldots,b_{n-1}\in (0,1)$, e.g. $\sum_{j} p_{j,j} = b_0$ or $\sum_{j} p_{j,j+1} = b_{-1}$, and finally *the sums of all $2n-1$ antidiagonals are fixed by $a_1,\ldots,a_{2n-1}\in (0,1)$. Obviously, the problem is easy to solve for $n=2$, since the "corner elements" of the matrix are directly given from 3. and 4. However, for $n>2$, it becomes more difficult. A simple linear algebra approach for $n=3$ leads to the problem of "solving" $Mp=c$ with $$ M = \left( \begin{array}{ccccccccc} 1 & & & 0 & 0 & 0 & 0 & 0 & 0 \\ & 1 & & 1 & & & 0 & 0 & 0 \\ & & 1 & & 1 & & 1 & & \\ 0 & 0 & 0 & & & 1 & & 1 & \\ 0 & 0 & 0 & 0 & 0 & 0 & & & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & &\\ 0 & 0 & 0 & 1 & & & & 1 & \\ 1 & & & & 1 & & & & 1\\ & 1 & & & & 1 & 0 & 0 & 0 \\ & & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1& 1 & 1& 1 & 1 & 1 & 1& 1&1 \\ \end{array} \right), \quad p = \left( \begin{array}{c} p_{1,1} \\ p_{1,2} \\ p_{1,3} \\ p_{2,1} \\ p_{2,2} \\ p_{2,3} \\ p_{3,1} \\ p_{3,2} \\ p_{3,3} \end{array} \right), \quad c = \left( \begin{array}{c} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ b_{-2} \\ b_{-1} \\ b_{0} \\ b_{1} \\ b_{2} \\ 1 \end{array} \right) $$ where $p \in \mathbb{R}^{n^2}$, $b \in \mathbb{R}^{4n-1}$ and $M$ is of size $(4n-1)\times n^2$ with entries in $\{0,1\}$. Note that $M$ is rank-deficient and the rows that specify the values for the "corner elements" are clearly visible. 
Dropping these rows leaves $$ \tilde{M} = \left( \begin{array}{ccccccccc} & 1 & & 1 & & & 0 & 0 & 0 \\ & & 1 & & 1 & & 1 & & \\ 0 & 0 & 0 & & & 1 & & 1 & \\ 0 & 0 & 0 & 1 & & & & 1 & \\ 1 & & & & 1 & & & & 1\\ & 1 & & & & 1 & 0 & 0 & 0 \\ 1& 1 & 1&1 &1 & 1 & 1 & 1 & 1 \\ \end{array} \right), \quad \tilde{c} = \left( \begin{array}{c} a_2 \\ a_3 \\ a_4 \\ b_{-1} \\ b_{0} \\ b_{1} \\ 1-a_1-a_5-b_{-2}-b_2 \end{array} \right) , $$ applied to the full vector $p$ as before; the corner entries are already determined by the dropped rows ($p_{1,1}=a_1$, $p_{3,3}=a_5$, $p_{1,3}=b_{-2}$, $p_{3,1}=b_2$), so the genuinely unknown entries are $p_{1,2}$, $p_{2,1}$, $p_{2,2}$, $p_{2,3}$ and $p_{3,2}$. The question is: does there exist a solution (for $n \in \mathbb{N}$) satisfying all properties, and how can it be computed (least-squares, SVD, ...)? Or is there an approach different from the one I chose which is suitable to compute a solution iteratively? I have the intuition that, due to the somewhat special constraints, there is a close connection to probability ("values are positive, sum equals one") or graph (adjacency matrices) theory.
The given constraints are a system of linear inequalities, so you can find a feasible solution (or prove that none exists) by feeding these constraints to a linear program (LP) solver with some arbitrarily chosen objective function.
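For the $n=3$ case this is only a few lines, assuming SciPy is available; the target sums below are taken from the uniform matrix $p_{jk}=1/9$, so a feasible point is known to exist, and the objective is identically zero since we only want feasibility:

```python
import numpy as np
from scipy.optimize import linprog

n = 3
A_eq, b_eq = [], []                       # p flattened row-major: p[n*j + k] = p_{j+1,k+1}
for d in range(-(n - 1), n):              # diagonal sums, offset d = k - j
    row = np.zeros(n * n)
    for j in range(n):
        if 0 <= j + d < n:
            row[n * j + j + d] = 1
    A_eq.append(row)
    b_eq.append((n - abs(d)) / 9)         # targets from the uniform matrix p_{jk} = 1/9
for s in range(2 * n - 1):                # antidiagonal sums, s = j + k
    row = np.zeros(n * n)
    for j in range(n):
        if 0 <= s - j < n:
            row[n * j + s - j] = 1
    A_eq.append(row)
    b_eq.append((n - abs(s - (n - 1))) / 9)
A_eq.append(np.ones(n * n))               # total sum equals 1
b_eq.append(1.0)

res = linprog(c=np.zeros(n * n), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * (n * n), method="highs")
assert res.status == 0                    # a feasible matrix was found
assert np.allclose(np.array(A_eq) @ res.x, np.array(b_eq))
```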
{ "language": "en", "url": "https://mathoverflow.net/questions/203588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Are there some known identities of elliptic polylogarithms similar to the Abel identity of polylogarithm? Let \begin{align} Li_2(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^2}. \end{align} This polylogarithm satisfies the following Abel identity: \begin{align} & Li_2(-x) + \log x \log y \\ & + Li_2(-y) + \log ( \frac{1+y}{x} ) \log y \\ & + Li_2(-\frac{1+y}{x}) + \log ( \frac{1+y}{x} ) \log (\frac{1+x+y}{xy}) \\ & + Li_2(-\frac{1+x+y}{xy}) + \log ( \frac{1+x}{y} ) \log (\frac{1+x+y}{xy}) \\ & + Li_2(-\frac{1+x}{y}) + \log ( \frac{1+x}{y} ) \log x \\ & = - \frac{\pi^2}{2}. \end{align} The following function \begin{align} ELi_{n,m}(x,y,q) = \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} \frac{x^j}{j^n} \frac{y^k}{k^m} q^{jk} \end{align} is defined in the paper in (2.1). Are there some known identities similar to the Abel identity for $ELi_{n,m}(x,y,q)$? Thank you very much.
The 5-term relation is a special case of the Rogers identity: theorem 8.14 here. This is a degenerate version of the Bloch relation for elliptic dilogarithm (see page 30 here).
{ "language": "en", "url": "https://mathoverflow.net/questions/261765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What is known about the plethysm $\text{Sym}^d(\bigwedge^3 \mathbb{C}^6)$ What is known about the plethysm $\text{Sym}^d(\bigwedge^3 \mathbb{C}^6)$ as a representation of $\text{GL}(6)$? It is my understanding that this should be multiplicity-free. I tried computing it using the Schur Rings package in Macaulay2 and I cannot see a pattern among the weights that appear. If a formula is known, a reference would be nice also. Thanks. EDIT: To save others the work, here is the data for $0 \leq d \leq 5$: {{0, 0, 0, 0, 0, 0}}, {{0, 0, 0, 1, 1, 1}}, {{0, 1, 1, 1, 1, 2}, {0, 0, 0, 2, 2, 2}}, {{1, 1, 1, 2, 2, 2}, {0, 1, 1, 2, 2, 3}, {0, 0, 0, 3, 3, 3}}, {{2, 2, 2, 2, 2, 2}, {1, 1, 2, 2, 3, 3}, {1, 1, 1, 3, 3, 3}, {0, 2, 2, 2, 2, 4}, {0, 1, 1, 3, 3, 4}, {0, 0, 0, 4, 4, 4}}, {{2, 2, 2, 3, 3, 3}, {1, 2, 2, 3, 3, 4}, {1, 1, 2, 3, 4, 4}, {1, 1, 1, 4, 4, 4}, {0, 2, 2, 3, 3, 5}, {0, 1, 1, 4, 4, 5}, {0, 0, 0, 5, 5, 5}}}
No, it is not multiplicity-free. Already for $d=6$, this representation contains the Schur functor $S^{4,4,4,2,2,2}$ twice. This can be easily checked in Magma (even the online calculator) issuing the commands Q := Rationals(); s := SFASchur(Q); s.[6]~s.[1,1,1]; Ignoring the Schur functors that vanish on $\mathbb{C}^6$, we obtain s.[4,3,3,3,3,2] + 2*s.[4,4,4,2,2,2] + s.[5,4,3,3,2,1] + s.[5,4,4,2,2,1] + s.[5,5,4,2,1,1] + s.[5,5,5,1,1,1] +s.[6,3,3,3,3] + s.[6,4,4,2,2] + s.[6,5,5,1,1] +s.[6,6,6]
{ "language": "en", "url": "https://mathoverflow.net/questions/294184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Is there a sense in which one could expand $\frac{\sin (x/\epsilon )}{x} $ in powers of $\epsilon $? A standard representation of the $\delta $-distribution is $$ \pi \delta (x) = \lim_{\epsilon \searrow 0} \frac{\sin (x/\epsilon )}{x} $$ Is there a sense in which this could be seen as the leading term in an expansion of $\sin (x/\epsilon ) /x$ in positive powers of $\epsilon $, presumably with distributions as coefficients? If so, is it possible to give the expansion explicitly?
The expansion around point $a$ in positive powers exists, but what is the general form of the term and whether it is convergent requires further research. $$\frac{\sin (x/\epsilon)}x=\frac{\sin \left(\frac{x}{a}\right)}{x}-\frac{(\epsilon -a) \cos \left(\frac{x}{a}\right)}{a^2}+\frac{(\epsilon -a)^2 \left(2 a \cos \left(\frac{x}{a}\right)-x \sin \left(\frac{x}{a}\right)\right)}{2 a^4}$$ $$+\frac{(\epsilon -a)^3 \left(-6 a^2 \cos \left(\frac{x}{a}\right)+x^2 \cos \left(\frac{x}{a}\right)+6 a x \sin \left(\frac{x}{a}\right)\right)}{6 a^6}$$ $$+\frac{(\epsilon -a)^4 \left(24 a^3 \cos \left(\frac{x}{a}\right)-36 a^2 x \sin \left(\frac{x}{a}\right)+x^3 \sin \left(\frac{x}{a}\right)-12 a x^2 \cos \left(\frac{x}{a}\right)\right)}{24 a^8}$$ $$+\frac{(\epsilon -a)^5 \left(-120 a^4 \cos \left(\frac{x}{a}\right)+240 a^3 x \sin \left(\frac{x}{a}\right)+120 a^2 x^2 \cos \left(\frac{x}{a}\right)-x^4 \cos \left(\frac{x}{a}\right)-20 a x^3 \sin \left(\frac{x}{a}\right)\right)}{120 a^{10}}$$ $$+O\left((\epsilon -a)^6\right)$$
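Since each coefficient above is just $g^{(k)}(a)/k!$ for $g(\epsilon)=\sin(x/\epsilon)/x$, the expansion is easy to test numerically: truncating after the $(\epsilon-a)^5$ term should leave a residual of order $(\epsilon-a)^6$ (the sample values of $x$, $a$ and the constant in the bound are arbitrary):

```python
import math

def series(x, a, eps):
    # the expansion above, truncated after the (eps - a)^5 term
    c, s, d = math.cos(x / a), math.sin(x / a), eps - a
    return (s / x
            - d * c / a**2
            + d**2 * (2*a*c - x*s) / (2 * a**4)
            + d**3 * (-6*a**2*c + x**2*c + 6*a*x*s) / (6 * a**6)
            + d**4 * (24*a**3*c - 36*a**2*x*s + x**3*s - 12*a*x**2*c) / (24 * a**8)
            + d**5 * (-120*a**4*c + 240*a**3*x*s + 120*a**2*x**2*c
                      - x**4*c - 20*a*x**3*s) / (120 * a**10))

x, a = 0.7, 0.5
assert series(x, a, a) == math.sin(x / a) / x     # exact at eps = a
for h in (0.02, 0.01, 0.005):
    residual = abs(series(x, a, a + h) - math.sin(x / (a + h)) / x)
    assert residual < 2000 * h**6                 # sixth-order remainder
```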
{ "language": "en", "url": "https://mathoverflow.net/questions/350697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
How to prove the determinant of a Hilbert-like matrix with parameter is non-zero Consider some positive non-integer $\beta$ and a non-negative integer $p$. Does anyone have any idea how to show that the determinant of the following matrix is non-zero? $$ \begin{pmatrix} \frac{1}{\beta + 1} & \frac{1}{2} & \frac{1}{3} & \dots & \frac{1}{p+1}\\ \frac{1}{\beta + 2} & \frac{1}{3} & \frac{1}{4} & \dots & \frac{1}{p+2}\\ \frac{1}{\beta + 3} & \frac{1}{4} & \frac{1}{5} & \dots & \frac{1}{p+3}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{1}{\beta + p + 1} & \frac{1}{p+2} & \frac{1}{p+3} & \dots & \frac{1}{2p+1} \end{pmatrix}. $$
I think the reference "Advanced Determinant Calculus" has a pointer to the answer. But I'll still elaborate for it is ingenious. Suppose $x_i$'s and $y_j$'s, $1\leq i,j \leq N$, are numbers such that $x_i+y_j\neq 0$ for any $i,j$ combination, then the following identity (called the Cauchy Alternant Identity) holds good: $$ \det ~\left(\frac{1}{x_i+y_j}\right)_{i,j} = \frac{\prod_{1\leq i<j\leq n}(x_i-x_j)(y_i-y_j)}{\prod_{1\leq i,j\leq n}(x_i+y_j)}. $$ Thus the determinant of $$ \begin{pmatrix} \frac{1}{\beta + 1} & \frac{1}{2} & \frac{1}{3} & \dots & \frac{1}{p+1}\\ \frac{1}{\beta + 2} & \frac{1}{3} & \frac{1}{4} & \dots & \frac{1}{p+2}\\ \frac{1}{\beta + 3} & \frac{1}{4} & \frac{1}{5} & \dots & \frac{1}{p+3}\\ \vdots & \vdots & \vdots & \dots & \vdots \\ \frac{1}{\beta + p + 1} & \frac{1}{p+2} & \frac{1}{p+3} & \dots & \frac{1}{2p+1} \end{pmatrix} $$ can be obtained by choosing $[x_1,\cdots, x_{p+1}] = [1, \cdots, (p+1)]$ and $[y_1,\cdots, y_{p+1}] = [\beta, 1, \cdots, p]$. This is certainly not zero as $\beta$ is not an integer. The proof of the identity is ingenious. Perform the basic column operations $C_j = C_j-C_n$, and remove common factors from the rows and columns. Then perform the row operations $R_j = R_j-R_n$. This renders the matrix block diagonal with 2 blocks, of sizes $n-1$ and $1$. The first block is the principal submatrix of the original matrix, and the second block is the element 1. This then induces a recursion for the determinant, which yields the desired result. Thanks for the good question and the reference.
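One can confirm both the identity and the non-vanishing exactly in rational arithmetic (taking e.g. $\beta=1/2$, $p=3$, which are arbitrary sample values; note the denominator product runs over all pairs $(i,j)$):

```python
from fractions import Fraction

def cauchy_det(x, y):
    # Cauchy's formula: det(1/(x_i+y_j)) = prod_{i<j}(x_i-x_j)(y_i-y_j) / prod_{i,j}(x_i+y_j)
    n, num, den = len(x), Fraction(1), Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            num *= (x[i] - x[j]) * (y[i] - y[j])
        for j in range(n):
            den *= x[i] + y[j]
    return num / den

def det(M):                              # exact determinant by cofactor expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

beta, p = Fraction(1, 2), 3              # any positive non-integer beta
x = [Fraction(i) for i in range(1, p + 2)]
y = [beta] + [Fraction(i) for i in range(1, p + 1)]
M = [[1 / (xi + yj) for yj in y] for xi in x]
assert det(M) == cauchy_det(x, y) != 0
```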
{ "language": "en", "url": "https://mathoverflow.net/questions/358175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 0 }
Improper integral $\int_0^\infty\frac{x^{2n+1}}{(1+x^2)^2(2+x^2)^N}dx,\ \ \ n\le N$ How can I evaluate this integral? $$\int_0^\infty\frac{x^{2n+1}}{(1+x^2)^2(2+x^2)^N}dx,\ \ \ n\le N$$ Maybe there is a recurrence relation for the integral?
Recurrence (3) in my other answer on this page also follows immediately from the same recurrence for the respective integrands! :-) Concerning the case $n=0$: the recurrence for the $M_k$'s follows immediately from the definition of $M_k$ and the trivial identity $y^{-k}-y^{-k-1}=y^{-k-1}(y-1)$. To make this answer independent of the previous one, note that for the integral in question we have \begin{equation*} I_{n,N}=\int_0^\infty K_{n,N}(x)\,dx, \end{equation*} where \begin{equation*} K_{n,N}(x):=\frac{x^{2n+1}}{(1+x^2)^2(2+x^2)^N}. \end{equation*} Note next that $K_{n,N}=K_{n-1,N-1}-2K_{n-1,N}$ and hence \begin{equation*} I_{n,N}=I_{n-1,N-1}-2I_{n-1,N} \quad \text{if $1\le n\le N$}.\tag{1} \end{equation*} Also, \begin{equation*} 2I_{0,N}=J_N:=\int_0^\infty\frac{2x\,dx}{(1+x^2)^2(2+x^2)^N} =\int_2^\infty\frac{y^{-N}\,dy}{(y-1)^2}. \end{equation*} Next, \begin{align*} J_{N-1}-2J_N+J_{N+1}&=\int_2^\infty\frac{y^{-N+1}-2y^{-N}+y^{-N-1}}{(y-1)^2}\,dy \\ &=\int_2^\infty y^{-N-1}\,dy=\frac1{N2^N}. \end{align*} So, \begin{equation*} I_{0,N+1}=2I_{0,N}-I_{0,N-1}+\frac1{N2^{N+1}}\quad\text{for $N\ge1$}, \tag{2} \end{equation*} with \begin{equation*} \text{$I_{0,0}=1/2$ and $I_{0,1}=(1-\ln2)/2$.} \tag{3} \end{equation*} Formulas (1), (2), (3) provide the desired recurrence relations for $I_{n,N}$.
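Here is a minimal sketch of the recurrences (1)–(3) in code, checked against two closed forms one can get by hand via partial fractions ($I_{1,1}=\ln 2-\tfrac12$ and $I_{1,2}=\tfrac32\ln 2-1$):

```python
import math

def I(n, N):
    # I_{n,N} = ∫_0^∞ x^(2n+1) / ((1+x^2)^2 (2+x^2)^N) dx,  0 <= n <= N
    if n == 0:
        if N == 0:
            return 0.5
        if N == 1:
            return (1 - math.log(2)) / 2
        # recurrence (2), shifted: I_{0,N} = 2 I_{0,N-1} - I_{0,N-2} + 1/((N-1) 2^N)
        return 2 * I(0, N - 1) - I(0, N - 2) + 1 / ((N - 1) * 2**N)
    return I(n - 1, N - 1) - 2 * I(n - 1, N)      # recurrence (1)

assert abs(I(1, 1) - (math.log(2) - 0.5)) < 1e-12
assert abs(I(1, 2) - (1.5 * math.log(2) - 1)) < 1e-12
assert I(3, 5) > 0                                 # the integrand is positive
```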
{ "language": "en", "url": "https://mathoverflow.net/questions/393753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Any hints on how to prove that the function $\lvert\alpha\;\sin(A)+\sin(A+B)\rvert - \lvert\sin(B)\rvert$ is negative over the half of the total area? I have this inequality with $0<A,B<\pi$ and a real $\lvert\alpha\rvert<1$: $$ f(A,B):=\bigl|\alpha\;\sin(A)+\sin(A+B)\bigr| - \bigl| \sin(B)\bigr| < 0$$ Numerically, I see that regardless of the value of $\alpha$, the area in which $f(A,B)<0$ is always half of the total area $\pi^2$. I appreciate any hints and comments on how I can prove this.
This is equivalent to \begin{align} |\alpha \sin A + \sin(A+B)|&<|\sin B|\\ ((\alpha+\cos B) \sin A + \cos A \sin B)^2&<(\sin B)^2\\ ((\alpha + \cos B)^2-\sin^2 B)\sin^2 A &<-2(\alpha+\cos B)\sin A \cos A \sin B\\ \frac{(\alpha + \cos B)^2-\sin^2 B}{\sin B} &<\frac{-2(\alpha+\cos B)\cos A}{\sin A}\\ \end{align} So the area in question is the sum of the areas with $$\frac{\sin^2 B-(\alpha + \cos B)^2}{|\alpha + \cos B|\sin B} >2\cot A,\ \ \alpha+\cos B > 0$$ $$\frac{\sin^2 B-(\alpha + \cos B)^2}{|\alpha + \cos B|\sin B} >-2\cot A,\ \ \alpha+\cos B < 0$$ Since $-\cot A=\cot(\pi-A)$, the area in question equals the area with $$\frac{\sin^2 B-(\alpha + \cos B)^2}{|\alpha + \cos B|\sin B} >2\cot A$$ This is equivalent to $$1/\frac{|\alpha + \cos B|}{\sin B}-\frac{|\alpha + \cos B|}{\sin B} >1/\tan\left(\frac{A}{2}\right)-\tan\left(\frac{A}{2}\right)$$ and therefore to $$\frac{|\alpha + \cos B|}{\sin B} <\tan\left(\frac{A}{2}\right)$$ So the area in question can also be written as $${\cal A}(\alpha)=\int_{B=0}^{\pi} 2\arctan\frac{|\alpha + \cos B|}{\sin B} dB$$ Now it is easy to verify ${\cal A}(0)=\pi^2/2$, and \begin{align} \frac{d{\cal A}(\alpha)}{d\alpha}&=\int_{0}^{\arccos(-\alpha)}\frac{2 \sin B\, dB}{1+2\alpha \cos B+\alpha^2}- \int_{\arccos(-\alpha)}^\pi\frac{2 \sin B\, dB}{1+2\alpha \cos B+\alpha^2}\\ &=\frac{1}{\alpha}\log\frac{1+\alpha}{1-\alpha}-\frac{1}{\alpha}\log\frac{1+\alpha}{1-\alpha}\\ &=0 \end{align} which leads to ${\cal A}(\alpha)=\pi^2/2$ for all $\alpha$.
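For what it's worth, the final integral ${\cal A}(\alpha)=\int_0^{\pi} 2\arctan\frac{|\alpha+\cos B|}{\sin B}\,dB=\pi^2/2$ can be checked numerically with a simple midpoint rule (grid size and tolerance are arbitrary; `atan2` conveniently handles the endpoints where $\sin B = 0$):

```python
import math

def area(alpha, n=200_000):
    # midpoint rule for ∫_0^π 2 arctan(|alpha + cos B| / sin B) dB
    h = math.pi / n
    return sum(2 * math.atan2(abs(alpha + math.cos(B)), math.sin(B)) * h
               for B in (h * (i + 0.5) for i in range(n)))

for alpha in (-0.7, 0.0, 0.4, 0.95):
    assert abs(area(alpha) - math.pi**2 / 2) < 1e-6
```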
{ "language": "en", "url": "https://mathoverflow.net/questions/401878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Inverse Mellin transform of 3 gamma functions product I want to calculate the inverse Mellin transform of a product of 3 gamma functions. $$F\left ( x \right )=\frac{1}{2i\pi}\int \Gamma(s)\Gamma (2s+a)\Gamma( 2s+b)x^{-s}ds$$ The above contour integral has 3 families of poles ($n = 0, 1, 2, \ldots$): $$s_{1}=-n$$ $$s_{2}=-\left ( \frac{n+a}{2} \right )$$ $$s_{3}=-\left ( \frac{n+b}{2} \right )$$ Which poles are picked up in the contour integral? $$F\left ( x \right )=\sum_{n=0}^{\infty }\frac{\left ( -1 \right )^{n}}{n!}\Gamma \left ( a-2n \right )\Gamma \left ( -2n+b \right )x^{n}+\sum_{n=0}^{\infty}\frac{\left ( -1 \right )^{n}}{n!}\Gamma \left ( -\frac{n}{2}-\frac{a}{2} \right)\Gamma \left ( -n-a+b\right )x^{\frac{n}{2}+\frac{a}{2}}+\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\Gamma \left (-\frac{n}{2}-\frac{b}{2} \right )\Gamma (-n-b+a)x^{\frac{n}{2}+\frac{b}{2}}$$ Is this correct??
To avoid all poles in the Mellin inversion formula you want to integrate along the line $\int_{\gamma-i\infty}^{\gamma+i\infty}ds$ where $\gamma>\max(0,-a/2,-b/2)$; then Mathematica gives the result in terms of the Meijer G-function: \begin{align} F( x )=&\frac{1}{2i\pi}\int_{\gamma-i\infty}^{\gamma+i\infty} \Gamma(s)\Gamma (2s+a)\Gamma( 2s+b)x^{-s}\,ds\\ =&\frac{1}{\pi}2^{a+b-2} G_{0,5}^{5,0}\left(\frac{x}{16}| \begin{array}{c} 0,\frac{a}{2},\frac{a+1}{2},\frac{b}{2},\frac{b+1}{2} \\ \end{array} \right). \end{align} The integral can alternatively be evaluated by contour integration, closing the contour in the left-half complex plane and picking up the poles at $-n$, $-(n+a)/2$, $-(n+b)/2$, $n=0,1,2,\ldots$. For this we assume that $a\neq b$ and $a,b$ are both non-integer --- so that all poles are simple. I then find that \begin{align} F(x)=&\sum_{n=0}^{\infty }\frac{\left ( -1 \right )^{n}}{n!}\Gamma \left ( a-2n \right )\Gamma \left ( -2n+b \right )x^{n}\\ &+\frac{1}{2}\sum_{n=0}^{\infty}\frac{\left ( -1 \right )^{n}}{n!}\Gamma \left ( -\frac{n}{2}-\frac{a}{2} \right)\Gamma \left ( -n-a+b\right )x^{\frac{n}{2}+\frac{a}{2}}\\ &+\frac{1}{2}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\Gamma \left (-\frac{n}{2}-\frac{b}{2} \right )\Gamma (-n-b+a)x^{\frac{n}{2}+\frac{b}{2}}. \end{align} I checked numerically that this agrees with the Meijer G-function result -- it differs from the result in the OP by factors 1/2 in the second and third sum.
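Not part of the original answer, but the claimed agreement is easy to re-verify in plain Python: the sketch below compares a trapezoid-rule evaluation of the contour integral (using a standard Lanczos approximation for the complex $\Gamma$) against the residue series with the $1/2$ factors, at the sample values $a=0.3$, $b=0.7$, $x=0.5$ (my choices).

```python
import cmath, math

# Standard Lanczos approximation for Gamma(z) (g = 7, 9 coefficients)
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:                      # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _C[0]
    for i in range(1, 9):
        x += _C[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def F_contour(x, a, b, g=1.0, T=20.0, n=4000):
    """Trapezoid rule for (1/2*pi*i) * integral of Gamma(s)Gamma(2s+a)Gamma(2s+b) x^(-s) ds on Re s = g."""
    h = 2 * T / n
    tot = 0j
    for k in range(n + 1):
        s = complex(g, -T + k * h)
        w = 0.5 if k in (0, n) else 1.0
        tot += w * cgamma(s) * cgamma(2 * s + a) * cgamma(2 * s + b) * x ** (-s)
    return (tot * h / (2 * math.pi)).real  # ds = i dt cancels the i in 1/(2*pi*i)

def F_residues(x, a, b, N=40):
    """Sum of residues at s = -n, -(n+a)/2, -(n+b)/2 (a, b, a-b non-integer)."""
    tot = 0.0
    for n in range(N):
        r = (-1) ** n / math.factorial(n)
        tot += r * math.gamma(a - 2 * n) * math.gamma(b - 2 * n) * x ** n
        tot += 0.5 * r * math.gamma(-(n + a) / 2) * math.gamma(b - a - n) * x ** ((n + a) / 2)
        tot += 0.5 * r * math.gamma(-(n + b) / 2) * math.gamma(a - b - n) * x ** ((n + b) / 2)
    return tot

print(F_contour(0.5, 0.3, 0.7), F_residues(0.5, 0.3, 0.7))
```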
{ "language": "en", "url": "https://mathoverflow.net/questions/420794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Ramanujan and algebraic number theory One out of the almost endless supply of identities discovered by Ramanujan is the following: $$ \sqrt[3]{\rule{0pt}{2ex}\sqrt[3]{2}-1} = \sqrt[3]{\frac19} - \sqrt[3]{\frac29} + \sqrt[3]{\frac49}, $$ which has the following interpretation in algebraic number theory: the fundamental unit $\sqrt[3]{2}-1$ of the pure cubic number field $K = {\mathbb Q}(\sqrt[3]{2})$ becomes a cube in the extension $L = K(\sqrt[3]{3})$. Are there more examples of this kind in Ramanujan's work?
$$(7 \sqrt[3]{20} - 19)^{1/6} = \ \sqrt[3]{\frac{5}{3}} - \sqrt[3]{\frac{2}{3}},$$ $$\left( \frac{3 + 2 \sqrt[4]{5}}{3 - 2 \sqrt[4]{5}} \right)^{1/4}= \ \ \frac{\sqrt[4]{5} + 1}{\sqrt[4]{5} - 1},$$ $$\left(\sqrt[5]{\frac{1}{5}} + \sqrt[5]{\frac{4}{5}}\right)^{1/2} = \ \ (1 + \sqrt[5]{2} + \sqrt[5]{8})^{1/5} = \ \ \sqrt[5]{\frac{16}{125}} + \sqrt[5]{\frac{8}{125}} + \sqrt[5]{\frac{2}{125}} - \sqrt[5]{\frac{1}{125}},$$ and so on. Many of these were submitted by Ramanujan as problems to the Journal of the Indian Mathematical Society. See the following link: jims.ps for more precise references. Quote: "although Ramanujan never used the term unit, and probably did not formally know what a unit was, he evidently realized their fundamental properties. He then recognized that taking certain powers of units often led to elegant identities."
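A quick floating-point check of the three displayed identities (the code is mine, not from the cited journal problems):

```python
# (7 * 20^(1/3) - 19)^(1/6) = (5/3)^(1/3) - (2/3)^(1/3)
lhs1 = (7 * 20 ** (1 / 3) - 19) ** (1 / 6)
rhs1 = (5 / 3) ** (1 / 3) - (2 / 3) ** (1 / 3)

# ((3 + 2*5^(1/4)) / (3 - 2*5^(1/4)))^(1/4) = (5^(1/4) + 1) / (5^(1/4) - 1)
lhs2 = ((3 + 2 * 5 ** 0.25) / (3 - 2 * 5 ** 0.25)) ** 0.25
rhs2 = (5 ** 0.25 + 1) / (5 ** 0.25 - 1)

# the chain of three equal quantities in the last identity
u = ((1 / 5) ** (1 / 5) + (4 / 5) ** (1 / 5)) ** 0.5
v = (1 + 2 ** (1 / 5) + 8 ** (1 / 5)) ** (1 / 5)
w = ((16 / 125) ** (1 / 5) + (8 / 125) ** (1 / 5)
     + (2 / 125) ** (1 / 5) - (1 / 125) ** (1 / 5))

print(lhs1 - rhs1, lhs2 - rhs2, u - v, v - w)  # all should be ~0 up to rounding
```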
{ "language": "en", "url": "https://mathoverflow.net/questions/43388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 1, "answer_id": 0 }
A lower bound of a particular convex function Hello, I suspect this reduces to a homework problem, but I've been a bit hung up on it for the last few hours. I'm trying to minimize the (convex) function $f(x) = 1/x + ax + bx^2$ , where $x,a,b>0$. Specifically, I'm interested in the minimal objective function value as a function of $a$ and $b$. Since finding the minimizer $x^*$ is tricky (requires solving a cubic), I figured I'd try and find a lower bound using the following argument: if $b=0$, the minimizer is $x=1/\sqrt{a}$ and the minimal value is $2\sqrt{a}$. If $a=0$, the minimizer is $x=(2b)^{-1/3}$ and the minimal value is $\frac{3\cdot2^{1/3}}{2}b^{1/3}$. Therefore, one possible approximate solution is the convex combination $(\frac{a}{a+b})\cdot2\sqrt{a} + (\frac{b}{a+b})\cdot\frac{3\cdot2^{1/3}}{2}b^{1/3}$. Numerical simulations suggest that the above expression is a lower bound for the minimal value. Does this follow from some nice result about parameterized convex functions? It seems like it shouldn't be hard to prove. I guess in a nutshell I just want to prove that for all $x,a,b>0$ we have $(\frac{a}{a+b})\cdot2\sqrt{a} + (\frac{b}{a+b})\cdot\frac{3\cdot2^{1/3}}{2}b^{1/3} \leq 1/x + ax + bx^2$. Thanks! EDIT: It also appears that if I take the convex combination $(\frac{a^{3/5}}{a^{3/5}+b^{2/5}})\cdot2\sqrt{a} + (\frac{b^{2/5}}{a^{3/5}+b^{2/5}})\cdot\frac{3\cdot2^{1/3}}{2}b^{1/3}$ then I get a tighter lower bound, and in fact the lower bound is within a factor of something like $3/2$ of the true minimal solution.
The first inequality is true. Write $$f=\frac{a}{a+b}f_0+\frac{b}{a+b}f_1,$$ where $f_0(x)=\frac1x+(a+b)x$ and $f_1(x)=\frac1x+(a+b)x^2$; one checks directly that this convex combination reproduces $f$. By the computations in the question (with $a+b$ in place of $a$, resp. $b$), $f_0\ge2\sqrt{a+b}\ge2\sqrt a$ and $f_1\ge\frac{3\cdot2^{1/3}}{2}(a+b)^{1/3}\ge\frac{3\cdot2^{1/3}}{2}b^{1/3}$. This implies $$f\ge\frac{a}{a+b}2\sqrt a+\frac{b}{a+b}\frac{3\cdot2^{1/3}}{2}b^{1/3}.$$
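A brute-force numerical sanity check of the lower bound in the question (my code; the sampling ranges are arbitrary):

```python
import math
import random

def f(x, a, b):
    return 1 / x + a * x + b * x * x

def bound(a, b):
    # the convex combination of the two one-parameter minima from the question
    return ((a / (a + b)) * 2 * math.sqrt(a)
            + (b / (a + b)) * (3 * 2 ** (1 / 3) / 2) * b ** (1 / 3))

random.seed(1)
violations = 0
for _ in range(10000):
    a = random.uniform(1e-3, 100.0)
    b = random.uniform(1e-3, 100.0)
    x = random.uniform(1e-3, 100.0)
    if bound(a, b) > f(x, a, b) + 1e-9:
        violations += 1
print(violations)  # expect 0
```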
{ "language": "en", "url": "https://mathoverflow.net/questions/61946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
General integer solution for $x^2+y^2-z^2=\pm 1$ How to find general solution (in terms of parameters) for diophantine equations $x^2+y^2-z^2=1$ and $x^2+y^2-z^2=-1$? It's easy to find such solutions for $x^2+y^2-z^2=0$ or $x^2+y^2-z^2-w^2=0$ or $x^2+y^2+z^2-w^2=0$, but for these ones I cannot find anything relevant.
I think that the solutions to $x^2+y^2-z^2=-1$ are $x=RT-SU,y=RU+ST$ where $R^2+S^2-T^2-U^2=2$; then $z=R^2+S^2-1=T^2+U^2+1$. On the surface this looks similar to the solutions to the $+1$ case. However these are quite a bit rarer and depend on the locations of the primes. As we know, an integer can be uniquely written as $n=ab^2$ where $a$ (the squarefree part of $n$) is a product of distinct primes. $n$ can be written as a sum of two squares $n=j^2+k^2$ precisely when $a$ has no prime divisors of the form $4m+3$ (and we know in how many ways this can be done as well.) So the solutions depend on when we have 2 consecutive even numbers of this form. For example $292=73\cdot4^2$ and $290=2\cdot5\cdot29$, thus we know that there are expressions as a sum of two squares: $$292=6^2+16^2$$ $$290=1^2+17^2=11^2+13^2.$$ Running through the various possibilities gives these solutions for $R,S,T,U,x,y$ with $x^2+y^2-291^2=-1:$ * *6, 16, 17, 1, 86, 278 *16, 6, 11, 13, 98, 274 *16, 6, 17, 1, 266, 118 *16, 6, 13, 11, 142, 254 Certain families of solutions can be given. One is $x,y,z=2p,2p^2,2p^2+1.$
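The parametrisation, the four listed quadruples, and the final family can all be checked mechanically; a sketch in Python (the search ranges are my choice):

```python
# exhaustive check of the parametrisation over a small box
count = 0
for R in range(20):
    for S in range(20):
        for T in range(20):
            for U in range(20):
                if R * R + S * S - T * T - U * U == 2:
                    x, y = R * T - S * U, R * U + S * T
                    z = R * R + S * S - 1
                    assert x * x + y * y - z * z == -1
                    count += 1

# the four (R, S, T, U) quadruples listed above, all giving z = 291
for R, S, T, U in [(6, 16, 17, 1), (16, 6, 11, 13), (16, 6, 17, 1), (16, 6, 13, 11)]:
    x, y = R * T - S * U, R * U + S * T
    assert x * x + y * y - 291 * 291 == -1

# the family x, y, z = 2p, 2p^2, 2p^2 + 1
for p in range(1, 100):
    assert (2 * p) ** 2 + (2 * p * p) ** 2 - (2 * p * p + 1) ** 2 == -1

print(count)
```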
{ "language": "en", "url": "https://mathoverflow.net/questions/65957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Computing the centers of Apollonian circle packings The radii of an Apollonian circle packing are computed from the initial curvatures e.g. (-10, 18, 23, 27) solving Descartes equation $2(a^2+b^2+c^2+d^2)=(a+b+c+d)^2$ and using the four matrices to generate more solutions $$ \left[\begin{array}{cccc} -1 & 2 & 2 & 2 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 1 \end{array}\right] \hspace{0.25 in} \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\\\ 2 & -1 & 2 & 2 \\\\ 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 1 \end{array}\right] \hspace{0.25in} \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0\\\\ 2 & 2 & -1 & 2 \\\\ 0 & 0 & 0 & 1 \end{array}\right] \hspace{0.25in} \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0\\\\ 0 & 0 & 1 & 0 \\\\ 2 & 2 & 2&-1 \end{array}\right] $$ How to compute the centers of circles in the Apollonian circle packing? The formulas probably simplify if you use complex numbers. Also in what sense it is the circle packing the limit set of a Kleinian group?
$$2(a^2+b^2+c^2+d^2)=(a+b+c+d)^2$$ $$a=4k(k+s)$$ $$b=4s(k+s)$$ $$c=p^2+k^2+s^2+2pk+2ps-2ks$$ $$d=p^2+k^2+s^2-2pk-2ps-2ks$$
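A quick exhaustive check that this parametrisation really satisfies Descartes' equation (my code; the parameter range is arbitrary):

```python
count = 0
for k in range(-6, 7):
    for s in range(-6, 7):
        for p in range(-6, 7):
            a = 4 * k * (k + s)
            b = 4 * s * (k + s)
            c = p * p + k * k + s * s + 2 * p * k + 2 * p * s - 2 * k * s
            d = p * p + k * k + s * s - 2 * p * k - 2 * p * s - 2 * k * s
            # Descartes' equation 2(a^2+b^2+c^2+d^2) = (a+b+c+d)^2
            assert 2 * (a * a + b * b + c * c + d * d) == (a + b + c + d) ** 2
            count += 1
print(count)  # 13**3 = 2197 parameter triples, every one satisfying the equation
```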
{ "language": "en", "url": "https://mathoverflow.net/questions/88353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 4 }
The relationship between the dilogarithm and the golden ratio Among the values for which the dilogarithm and its argument can both be given in closed form are the following four equations: $Li_2( \frac{3 - \sqrt{5}}{2}) = \frac{\pi^2}{15} - log^2( \frac{1 +\sqrt{5}}{2} )$ (1) $Li_2( \frac{-1 + \sqrt{5}}{2}) = \frac{\pi^2}{10} - log^2( \frac{1 +\sqrt{5}}{2} )$ (2) $Li_2( \frac{1 - \sqrt{5}}{2}) = -\frac{\pi^2}{15} + \frac{1}{2} log^2( \frac{1 +\sqrt{5}}{2} )$ (3) $Li_2( \frac{-1 - \sqrt{5}}{2}) = -\frac{\pi^2}{10} - log^2( \frac{1 +\sqrt{5}}{2} )$ (4) (from Zagier's The Remarkable Dilogarithm) where the argument of the logarithm on the right hand side is the golden ratio $\phi$. The above equations all have this (loosely speaking) kind of duality, and almost-symmetry that gets broken by the fact that $Li_2(\phi)$ fails to make an appearance. Can anyone explain what is the significance of the fact that $\phi$ appears on the right, but not on the left? Immediately one can see that the arguments on the lefthand side of (2)-(4) are related to $\phi$ as roots of a polynomial, but what other meaning does this structure have?
Wikipedia says $$Li_2\left({1+\sqrt5\over2}\right)={\pi^2\over10}-\log^2{\sqrt5-1\over2}$$
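For reference: the standard table values at the four golden-ratio arguments (in the tables, the value at $(1-\sqrt5)/2$ carries the coefficient $+\tfrac12$ on $\log^2\frac{1+\sqrt5}{2}$) can be checked against a simple quadrature for $\mathrm{Li}_2$. The midpoint-rule implementation below is mine:

```python
import math

def li2(x, n=200000):
    """Li2(x) = -integral_0^x log(1-t)/t dt, midpoint rule; fine for real x < 1."""
    h = x / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s -= math.log(1 - t) / t
    return s * h

phi = (1 + math.sqrt(5)) / 2
L = math.log(phi) ** 2
checks = [
    (li2((3 - math.sqrt(5)) / 2), math.pi ** 2 / 15 - L),
    (li2((math.sqrt(5) - 1) / 2), math.pi ** 2 / 10 - L),
    (li2((1 - math.sqrt(5)) / 2), -math.pi ** 2 / 15 + L / 2),
    (li2(-(1 + math.sqrt(5)) / 2), -math.pi ** 2 / 10 - L),
]
for got, want in checks:
    print(got, want)
```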
{ "language": "en", "url": "https://mathoverflow.net/questions/144322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
$\zeta(0)$ and the cotangent function In preparing some practice problems for my complex analysis students, I stumbled across the following. It is not hard to show, using Liouville's theorem, that $$\pi\cot(\pi z)=\frac{1}{z}+\sum_{n=1}^\infty\left(\frac{1}{z+n}+\frac{1}{z-n}\right),$$ which implies that $$-\frac{\pi z}{2}\cot(\pi z)=-\frac{1}{2}+\sum_{k=1}^\infty\zeta(2k)z^{2k},\qquad 0<|z|<1.$$ This formula predicts correctly that $\zeta(0)=-\frac{1}{2}$, and allows to calculate $\zeta(2k)$ as a rational multiple of $\pi^{2k}$ as well (in terms of Bernoulli numbers). Is there some simple explanation why the above prediction $\zeta(0)=-\frac{1}{2}$ is valid? Perhaps there is a not so simple but still transparent explanation via Eisenstein series. Added. Just to clarify what I mean by "simple explanation". The second identity above follows directly from the first identity, i.e. from basic principles of complex analysis: $$-\frac{\pi z}{2}\cot(\pi z)=-\frac{1}{2}+\sum_{n=1}^\infty\frac{z^2}{n^2-z^2}=-\frac{1}{2}+\sum_{n=1}^\infty\sum_{k=1}^\infty\left(\frac{z^2}{n^2}\right)^k =-\frac{1}{2}+\sum_{k=1}^\infty\zeta(2k)z^{2k}.$$ I would like to see a similar argument, perhaps somewhat more elaborate, that explains why the constant term here happens to be $\zeta(0)$, which seems natural in the light of the other terms.
Here is an explanation based on the Euler-Maclaurin summation formula. (Or rather, since we'll only ever need two terms of the Euler-Maclaurin summation, it's really more or less just "the trapezoid rule".) I think it's a good explanation because it sticks to the general structure of the argument outlined in the question, and its "18th-century-friendly" spirit. First, let's review how to apply Euler-Maclaurin to $\zeta$. We have $$ \begin{align} \zeta(s) &= \sum_{n=1}^N \frac{1}{n^s} + \sum_{n=N+1}^{\infty} \frac{1}{n^s} \\ &= \sum_{n=1}^N \frac{1}{n^s} + \int_N^{\infty} \frac{1}{x^s} \, dx - \frac12 \frac{1}{N^s} + \mathrm{Error} \\ &= \sum_{n=1}^N \frac{1}{n^s} + \frac{1}{s-1} \frac{1}{N^{s-1}} - \frac12 \frac{1}{N^s} + \mathrm{Error} \tag{1}. \\ \end{align} $$ We can say more about the error term later - for now, all we need to know is that $\mathrm{Error} = O ( \frac{1}{N^{\operatorname{Re}(s)+1}} )$ as $N \to \infty$, for each $s$. The key thing to note is that, even though the computation initially required $\operatorname{Re}(s)>1$ in order to be valid, in fact the "equation" $$ \zeta(s) = \sum_{n=1}^N \frac{1}{n^s} + \frac{1}{s-1} \frac{1}{N^{s-1}} - \frac12 \frac{1}{N^s} + O \left( \frac{1}{N^{\operatorname{Re}(s)+1}} \right) \tag{2} $$ has a unique constant solution $\zeta(s)$, for each $s$ with $\operatorname{Re}(s) > -1$ and $s \ne 1$. This is one way to define $\zeta(s)$ for all such $s$. In particular, we can plug in $0$ and immediately find that $\zeta(0) = -\frac12$. What about the function $f(z) = -\frac{\pi z}{2} \cot(\pi z)$? We can also apply Euler-Maclaurin to $f$ in the same way: $$ \begin{align} f(z) &= -\frac12 + \sum_{n=1}^N \frac{z^2}{n^2-z^2} + \sum_{n=N+1}^{\infty} \frac{z^2}{n^2-z^2} \\ &= -\frac12 + \sum_{n=1}^N \frac{z^2}{n^2-z^2} + \int_N^{\infty} \frac{z^2}{x^2-z^2} \, dx - \frac12 \frac{z^2}{N^2-z^2} + O \left( \frac{1}{N^3} \right) \tag{3} \\ \end{align} $$ as $N \to \infty$, for each $z$. 
The right-hand side of (3), without the error term, is what we'll call $f_N(z)$; we can simplify it to $$ f_N(z) = \sum_{n=1}^N \frac{z^2}{n^2-z^2} + z \cdot \frac12 \left( \log\left(1+\frac{z}{N}\right) - \log\left(1-\frac{z}{N}\right) \right) - \frac12 \frac{N^2}{N^2-z^2} \tag{4}. $$ I should emphasize that (4) is just a somewhat more elaborate variant of $-\frac12 + \sum_{n=1}^{\infty} \frac{z^2}{n^2-z^2}$, much like how (2) is just a somewhat more elaborate variant of $\sum_{n=1}^{\infty} \frac{1}{n^s}$. Now when we expand (4) into power series, we recognize the expression from (2) as the coefficients, and we recognize that we can make the sum run from $k=0$ to $\infty$, not just $k=1$ to $\infty$: $$ \begin{align} f_N(z) &= \sum_{n=1}^N \sum_{k=1}^{\infty} \frac{z^{2k}}{n^{2k}} + \sum_{k=1}^{\infty} \frac{1}{2k-1} \frac{1}{N^{2k-1}} z^{2k} - \frac12 \sum_{k=0}^{\infty} \frac{z^{2k}}{N^{2k}} \\ &= \sum_{k=0}^{\infty} \left( \sum_{n=1}^N \frac{1}{n^{2k}} + \frac{1}{2k-1} \frac{1}{N^{2k-1}} - \frac12 \frac{1}{N^{2k}} \right) z^{2k}. \tag{5} \\ \end{align} $$ This is exactly what we want: take the limit as $N \to \infty$ to conclude that $f(z) = \sum_{k=0}^{\infty} \zeta(2k) z^{2k}$. (To be rigorous in the final step, we'd need to be more precise about the error term in (1). The actual bound we get from Euler-Maclaurin is $ \lvert \mathrm{Error} \rvert \le \mathrm{constant} \cdot \int_N^{\infty} \left\lvert \frac{s(s+1)}{x^{s+2}} \right\rvert \, dx $ for all $N$ and all $s$ such that $\operatorname{Re}(s) > -1$ and $s \ne 1$. This lets us control the size of the difference $\sum_{k=0}^{\infty} \zeta(2k) z^{2k} - f_N(z)$.) This proves that all the even-power coefficients of the power series of $-\frac{\pi z}{2} \cot(\pi z)$ are the corresponding values of $\zeta$, including $\zeta(0)$ as the constant term.
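Formula (2) doubles as a practical algorithm. The sketch below (my code, not from the answer) evaluates $\zeta(s)$ by the three-term Euler-Maclaurin approximation for $\operatorname{Re}(s)>-1$, recovers $\zeta(0)=-\tfrac12$ exactly (the error bound $s(s+1)/x^{s+2}$ vanishes identically at $s=0$), and checks the resulting power series against $-\frac{\pi z}{2}\cot(\pi z)$:

```python
import math

def zeta_em(s, N=1000):
    """Three-term Euler-Maclaurin approximation (2); good for Re(s) > -1, s != 1."""
    return sum(n ** -s for n in range(1, N + 1)) + N ** (1 - s) / (s - 1) - 0.5 * N ** -s

def f(z):
    return -math.pi * z / (2 * math.tan(math.pi * z))

print(zeta_em(0))  # -0.5

z = 0.3
series = -0.5 + sum(zeta_em(2 * k) * z ** (2 * k) for k in range(1, 30))
print(f(z), series)  # the two values should agree closely
```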
{ "language": "en", "url": "https://mathoverflow.net/questions/188371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 2, "answer_id": 0 }
Determinant of a matrix filled with elements of the Thue–Morse sequence Let $n$ be a positive integer. Suppose we fill a square matrix $n\times n$ row-by-row with the first $n^2$ elements of the Thue–Morse sequence (with indexes from $0$ to $n^2-1$). Let $\mathcal D_n$ be the determinant of this matrix. For example, $$\small\mathcal D_7=\left| \begin{array}{} t_0 & t_1 & t_2 & t_3 & t_4 & t_5 & t_6 \\ t_7 & t_8 & t_9 & t_{10} & t_{11} & t_{12} & t_{13} \\ t_{14} & t_{15} & t_{16} & t_{17} & t_{18} & t_{19} & t_{20} \\ t_{21} & t_{22} & t_{23} & t_{24} & t_{25} & t_{26} & t_{27} \\ t_{28} & t_{29} & t_{30} & t_{31} & t_{32} & t_{33} & t_{34} \\ t_{35} & t_{36} & t_{37} & t_{38} & t_{39} & t_{40} & t_{41} \\ t_{42} & t_{43} & t_{44} & t_{45} & t_{46} & t_{47} & t_{48} \\ \end{array} \right|=\left| \begin{array}{} 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 \\ \end{array} \right|=0.$$ Question: For which $n$ does $\mathcal D_n\ne0$ hold? Using a brute-force computer search I found only $5$ cases: $\mathcal D_2 = -1,\,$ $\mathcal D_{11} = 9,\,$ $\mathcal D_{13} = -9,\,$ $\mathcal D_{19} = 270,\,$ $\mathcal D_{23} = -900,$ and no other cases for $n\le1940$. Are there any other cases except these five?
Not an answer, just the result of a computation. The following plot shows for each $n$ the minimal number $k$ such that the first $k$ rows are linearly dependent. The question is to find all $n$ such that $k=n+1$.
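For anyone who wants to reproduce the computation, the determinants themselves are cheap to recompute exactly with rational Gaussian elimination; a sketch in plain Python (function names are mine):

```python
from fractions import Fraction

def t(n):
    """Thue-Morse sequence: parity of the binary digit sum of n."""
    return bin(n).count("1") % 2

def det_tm(n):
    """Exact determinant of the n x n matrix filled row-by-row with t(0), t(1), ..."""
    M = [[Fraction(t(i * n + j)) for j in range(n)] for i in range(n)]
    d = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d                      # row swap flips the sign
        d *= M[c][c]
        for r in range(c + 1, n):
            fct = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= fct * M[c][j]
    return d

print([(n, det_tm(n)) for n in (2, 4, 7, 11, 13)])
```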
{ "language": "en", "url": "https://mathoverflow.net/questions/314742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 1, "answer_id": 0 }
How to prove the determinant of a Hilbert-like matrix with parameter is non-zero Consider some positive non-integer $\beta$ and a non-negative integer $p$. Does anyone have any idea how to show that the determinant of the following matrix is non-zero? $$ \begin{pmatrix} \frac{1}{\beta + 1} & \frac{1}{2} & \frac{1}{3} & \dots & \frac{1}{p+1}\\ \frac{1}{\beta + 2} & \frac{1}{3} & \frac{1}{4} & \dots & \frac{1}{p+2}\\ \frac{1}{\beta + 3} & \frac{1}{4} & \frac{1}{5} & \dots & \frac{1}{p+3}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{1}{\beta + p + 1} & \frac{1}{p+2} & \frac{1}{p+3} & \dots & \frac{1}{2p+1} \end{pmatrix}. $$
If the rows were linearly dependent, then for some $c_1$, $\ldots$, $c_{p+1}$, not all zero, the rational function $\sum_{k=1}^{p+1} \frac{c_k}{x+k}$ (not identically zero, since the poles $-1,\ldots,-(p+1)$ are distinct) would have the $p+1$ roots $\beta$, $1$, $2$, $\ldots$, $p$. That is not possible, since its numerator has degree at most $p$.
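The conclusion is also easy to confirm exactly for small cases, e.g. $\beta=1/2$, using rational arithmetic (my code):

```python
from fractions import Fraction

def mat(beta, p):
    # row i = (1/(beta+i), 1/(i+1), ..., 1/(i+p)),  i = 1, ..., p+1
    return [[Fraction(1) / (beta + i)] + [Fraction(1, i + k) for k in range(1, p + 1)]
            for i in range(1, p + 2)]

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            d = -d
        d *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for j in range(col, n):
                M[r][j] -= f * M[col][j]
    return d

print([det(mat(Fraction(1, 2), p)) for p in range(5)])  # all non-zero
```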
{ "language": "en", "url": "https://mathoverflow.net/questions/358175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 1 }
Norms in quadratic fields This should be well-known, but I can't find a reference (or a proof, or a counter-example...). Let $d$ be a positive square-free integer. Suppose that there is no element in the ring of integers of $\mathbb{Q}(\sqrt{d})$ with norm $-1$. Then I believe that no element of $\mathbb{Q}(\sqrt{d})$ has norm $-1\ $ (in fancy terms, the homomorphism $H^2(G,\mathscr{O}^*)\rightarrow H^2(G,\mathbb{Q}(\sqrt{d})^*)$, with $G:=\operatorname{Gal}(\mathbb{Q}(\sqrt{d})/\mathbb{Q})=$ $\mathbb{Z}/2 $, is injective). Is that correct? If yes, I'd appreciate a proof or a reference.
Dirichlet's version of Gauss composition is in the book by Cox (page 49 in the first edition), with a small typo corrected in the second edition. For our purpose, duplication, it looks better to equate $a=a'$ from the start, with $\gcd(a,b) = 1$ sufficing: $$ \left( ax^2 +bxy+ acy^2 \right) \left( aw^2 +bwz+ acz^2 \right) = c X^2 + b XY + a^2 Y^2 $$ where $$ X = axz + ayw+byz \; \; , \; \; \; Y = xw - c yz $$ so that the square of $\langle a,b,ac \rangle$ is $\langle c,b,a^2 \rangle.$ Today's question concerns $c=-1$: $$ \left( ax^2 +bxy -ay^2 \right) \left( aw^2 +bwz -az^2 \right) = - X^2 + b XY + a^2 Y^2 $$ where $$ X = axz + ayw+byz \; \; , \; \; \; Y = xw + yz $$ so that $$\langle a,b,-a \rangle^2 = \langle -1,b,a^2 \rangle.$$ We also see Stanley's fact that the discriminant is the sum of two squares, $b^2 + 4 a^2$ the way I wrote things. By the Gauss theorem on duplication, $ \langle -1,b,a^2 \rangle$ is in the principal genus. Furthermore, we now know that the principal form is $SL_2 \mathbb Z$ equivalent to $$ \langle 1,b,-a^2 \rangle. $$ The principal form may not integrally represent $-1$ but does so rationally. As to being in the same genus, we can use Siegel's definition of rational equivalence without essential denominator. $$ \left( \begin{array}{rr} 0 & 1 \\ -a^2 & -b \\ \end{array} \right) \left( \begin{array}{rr} 1 & \frac{b}{2} \\ \frac{b}{2} & -a^2 \\ \end{array} \right) \left( \begin{array}{rr} 0 & -a^2 \\ 1 & -b \\ \end{array} \right) = \; a^2 \; \left( \begin{array}{rr} -1 & \frac{b}{2} \\ \frac{b}{2} & a^2 \\ \end{array} \right) $$ $$ \left( \begin{array}{rr} b & 1 \\ -a^2 & 0 \\ \end{array} \right) \left( \begin{array}{rr} -1 & \frac{b}{2} \\ \frac{b}{2} & a^2 \\ \end{array} \right) \left( \begin{array}{rr} b & -a^2 \\ 1 & 0 \\ \end{array} \right) = \; a^2 \; \left( \begin{array}{rr} 1 & \frac{b}{2} \\ \frac{b}{2} & -a^2 \\ \end{array} \right) $$
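Both displayed composition identities are polynomial identities in $a,b,c,x,y,w,z$ and can be checked mechanically; a quick randomized check in Python (my code):

```python
import random

random.seed(0)
checked = 0
for _ in range(5000):
    a, b, c, x, y, w, z = (random.randint(-9, 9) for _ in range(7))
    # Dirichlet composition of two copies of <a, b, ac>
    X = a * x * z + a * y * w + b * y * z
    Y = x * w - c * y * z
    assert ((a * x * x + b * x * y + a * c * y * y)
            * (a * w * w + b * w * z + a * c * z * z)
            == c * X * X + b * X * Y + a * a * Y * Y)
    # the c = -1 specialisation used for the square of <a, b, -a>
    Y1 = x * w + y * z
    assert ((a * x * x + b * x * y - a * y * y)
            * (a * w * w + b * w * z - a * z * z)
            == -X * X + b * X * Y1 + a * a * Y1 * Y1)
    checked += 1
print(checked)
```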
{ "language": "en", "url": "https://mathoverflow.net/questions/369846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
How to solve the equation $\frac{(a+b\ln(x))^2}{x}=c$ I need to solve the equation $$\frac{(a+b\ln(x))^2}{x}=c$$ where $a$, $b$, and $c$ are given. It is known that $a$ and $b$ are fixed and satisfy some condition such that the left hand side is decreasing. So $x$ is uniquely determined by $c$ when $c$ is chosen in a certain range. A related problem is $$\frac{(a+b\ln(x))}{x}=c$$ for which the solution is $$x=-\frac{bW(-\frac{ce^{-a/b}}{b})}{c}$$ where $W(z)$ is the product log function. Any hint about how to solve the first equation?
Step-by-step solution with Lambert W. The goal is to get something of the form $\color{red}{ue^u = v}$ then re-write it as $\color{blue}{u=W(v)}$. $$ \frac{(a+b\ln(x))^2}{x}=c \\ \frac{(a+b\ln(x))}{\sqrt{x}}=\pm\sqrt{c} \\ (a+b\ln(x))e^{-\ln(x)/2}=\pm\sqrt{c} \\ (a+b\ln(x))\exp\left(-\frac{a}{2b}-\frac{\ln(x)}{2}\right) =\pm\sqrt{c}\exp\left(\frac{-a}{2b}\right) \\ (a+b\ln(x))\exp\left(-\frac{a+b\ln(x)}{2b}\right) =\pm\sqrt{c}\exp\left(\frac{-a}{2b}\right) \\ \color{red}{-\frac{a+b\ln(x)}{2b}\exp\left(-\frac{a+b\ln(x)}{2b}\right) =\mp\frac{\sqrt{c}}{2b}\exp\left(\frac{-a}{2b}\right)} \\ \color{blue}{-\frac{a+b\ln(x)}{2b} = W\left(\mp\frac{\sqrt{c}}{2b}\exp\left(\frac{-a}{2b}\right)\right)} \\ \ln(x) = \frac{-a-2bW\left(\mp\frac{\sqrt{c}}{2b}\exp\left(\frac{-a}{2b}\right)\right)}{b} \\ x = \exp\left(-\frac{a}{b}-2 W\left(\mp\frac{\sqrt{c}}{2b}\exp\left(\frac{-a}{2b}\right)\right)\right) $$ The other example mentioned... $$ \frac{a+b\ln(t)}{t}=c \\ (a+b\ln(t))e^{-\ln(t)} = c \\ (a+b\ln(t))\exp\left(-\frac{a}{b}-\ln(t)\right) = c \exp\left(-\frac{a}{b}\right) \\ (a+b\ln(t))\exp\left(-\frac{a+b\ln(t)}{b}\right) = c \exp\left(-\frac{a}{b}\right) \\ \color{red}{-\frac{a+b\ln(t)}{b}\exp\left(-\frac{a+b\ln(t)}{b}\right) = -\frac{c}{b} \exp\left(-\frac{a}{b}\right)} \\ \color{blue}{-\frac{a+b\ln(t)}{b} = W\left(-\frac{c}{b} \exp\left(-\frac{a}{b}\right)\right)} \\ \ln(t) = -\frac{a+bW\left(-\frac{c}{b} \exp\left(-\frac{a}{b}\right)\right)}{b} \\ t = \exp\left(-\frac{a+bW\left(-\frac{c}{b} \exp\left(-\frac{a}{b}\right)\right)}{b}\right) $$ We may question the other solution given in the OP. In fact, this solution is equal to that solution: Claim $$ \exp\left(-\frac{a+bW\left(-\frac{c}{b} \exp\left(-\frac{a}{b}\right)\right)}{b}\right) = -\frac{b}{c}W\left(-\frac{c}{b} \exp\left(-\frac{a}{b}\right)\right) \tag1$$ Why? $$ \text{Let}\quad Q = W\left(-\frac{c}{b} \exp\left(-\frac{a}{b}\right)\right). 
\\ \text{Then}\quad Qe^Q = -\frac{c}{b}\exp\left(-\frac{a}{b}\right) \\ -\frac{b}{c}Q = \exp\left(-\frac{a}{b}\right)e^{-Q} \\ -\frac{b}{c}Q = \exp\left(-\frac{a+bQ}{b}\right) \\ \text{which is $(1)$.} $$ Challenge: Simplity the first solution in the same way: $$ x = \left[\frac{2b}{\sqrt{c}}W\left(\mp\frac{\sqrt{c}}{2b}\exp\left(-\frac{a}{2b}\right)\right)\right]^2 $$
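Both closed forms are easy to sanity-check numerically with a hand-rolled Newton iteration for the principal branch $W_0$ (the helper `lambert_w0` and the sample parameters $a=b=1$, $c=1/2$ are my choices):

```python
import math

def lambert_w0(v, iters=60):
    """Principal branch W0(v) via Newton on w*e^w = v (requires v >= -1/e)."""
    w = math.log1p(v)                  # start to the right of the root; w*e^w is convex there
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - v) / (ew * (w + 1))
    return w

a, b, c = 1.0, 1.0, 0.5

# the two roots of (a + b*log(x))^2 / x = c, one per sign choice
roots = []
for sign in (1.0, -1.0):
    v = sign * math.sqrt(c) / (2 * b) * math.exp(-a / (2 * b))
    roots.append(math.exp(-a / b - 2 * lambert_w0(v)))
print([(a + b * math.log(x)) ** 2 / x for x in roots])  # both should give back c = 0.5

# the root of (a + b*log(t)) / t = c
t = math.exp(-(a + b * lambert_w0(-c / b * math.exp(-a / b))) / b)
print((a + b * math.log(t)) / t)  # should give back c = 0.5
```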
{ "language": "en", "url": "https://mathoverflow.net/questions/410729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }