---
abstract: 'We develop a method for estimating well-conditioned and sparse covariance and inverse covariance matrices from a sample of vectors drawn from a sub-gaussian distribution in a high-dimensional setting. The proposed estimators are obtained by minimizing the quadratic loss function and a joint penalty of the $\ell_1$ norm and the variance of the eigenvalues. In contrast to some of the existing methods of covariance and inverse covariance matrix estimation, where often the interest is to estimate a sparse matrix, the proposed method is flexible in estimating both a sparse and well-conditioned covariance matrix simultaneously. The proposed estimators are optimal in the sense that they achieve the minimax rate of estimation in operator norm for the underlying class of covariance and inverse covariance matrices. We give a very fast algorithm for computing these covariance and inverse covariance matrix estimates which is easily scalable to large-scale data analysis problems. A simulation study for varying sample sizes and numbers of variables shows that the proposed estimators perform better than several other estimators for various choices of structured covariance and inverse covariance matrices. We also use the proposed estimator for tumor tissue classification using gene expression data and compare its performance with some other classification methods.'
author:
- |
Ashwini Maurya [email protected]\
Department of Statistics and Probability\
Michigan State University\
East Lansing, MI 48824, USA
bibliography:
- 'sample.bib'
title: 'A Well-Conditioned and Sparse Estimation of Covariance and Inverse Covariance Matrices Using a Joint Penalty'
---
Keywords: Sparsity, Eigenvalue Penalty, Penalized Estimation
Introduction
============
With the recent surge in data technology and storage capacity, today’s statisticians often encounter data sets where the sample size $n$ is small and the number of variables $p$ is very large: often hundreds, thousands, or even a million or more. Examples include gene expression data and web search problems \[@Clark1:7, @pass:21\]. For many of these high dimensional data problems, classical statistical methods become inappropriate for making valid inference. Recent developments in asymptotic theory allow $p$ to increase, with both $p$ and $n$ tending to infinity at some rate depending upon the parameters of interest.
The estimation of covariance and inverse covariance matrices is a problem of primary interest in multivariate statistical analysis. Some of the applications include: **(i)** Principal component analysis (PCA) \[@Johnstone:14, @Zou:37\], where the goal is to project the data onto the “best" $k$-dimensional subspace, and where best means the projected data explains as much of the variation in the original data as possible without increasing $k$. **(ii)** Discriminant analysis \[@Mardia:19\], where the goal is to classify observations into different classes. Here estimates of the covariance and inverse covariance matrices play an important role, as the classifier is often a function of these entities. **(iii)** Regression analysis: if interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under mis-specification of the covariance structure. **(iv)** Gaussian graphical modeling \[@Mein:20, @Wainwright:27, @Yuan:34, @Yuan1:35\], where the relationship structure among nodes can be inferred from the inverse covariance matrix. A zero entry in the inverse covariance matrix implies conditional independence between the corresponding nodes.\
The estimation of a large dimensional covariance matrix based on few sample observations is a difficult problem, especially when $n \asymp p$ (here $a_n \asymp b_n$ means that there exist positive constants $c$ and $C$ such that $c \le a_n/b_n \le C $). In these situations, the sample covariance matrix becomes unstable, which inflates the estimation error. It is well known that the eigenvalues of the sample covariance matrix are over-dispersed, which means that the eigen-spectrum of the sample
covariance matrix is not a good estimator of its population counterpart \[@Marcenko:18, @Karoui1:16\]. To illustrate this point, consider $\Sigma_p=I_p$, so all the eigenvalues are $1$. A result from \[@Geman:12\] shows that if entries of $X_i$’s are i.i.d (let $X_i$’s have mean zero and variance 1) with a finite fourth moment and if $p/n \rightarrow \theta <1 $, then the largest sample eigenvalue $l_1$ satisfies: $$\begin{aligned}
l_1~ \rightarrow ~(1+\sqrt{\theta})^2, ~~~~~~ a.s.\end{aligned}$$ This suggests that $l_1$ is not a consistent estimator of the largest eigenvalue $\sigma_1$ of the population covariance matrix. In particular, if $n=p$ then $l_1$ tends to $4$ whereas $\sigma_1$ is $1$. This is also evident in the eigenvalue plot in Figure 2.1. The distribution of $l_1$ also depends on the underlying structure of the true covariance matrix. From Figure 2.1, it is evident that the smaller sample eigenvalues tend to underestimate the true eigenvalues for large $p$ and small $n$. For more discussion on this topic, see @Karoui1:16.
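This over-dispersion is easy to reproduce numerically. The following base R sketch (with illustrative values, not from the paper) simulates the setting above with $\Sigma_p=I_p$ and $n=p$, and returns a largest sample eigenvalue close to $4$ rather than $1$:

```r
# Quick simulation of the Geman result quoted above: with Sigma_p = I and
# n = p, the largest sample eigenvalue concentrates near (1 + sqrt(1))^2 = 4
# even though every population eigenvalue equals 1. Illustrative sketch only.
set.seed(1)
n <- p <- 500
X <- matrix(rnorm(n * p), n, p)              # i.i.d. standard normal entries
S <- crossprod(X) / n                        # sample covariance (mean known zero)
max(eigen(S, symmetric = TRUE, only.values = TRUE)$values)  # approx 4, not 1
```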
To correct for this bias, a natural choice is to shrink the sample eigenvalues towards some suitable constant to reduce the over-dispersion. For instance, @Stein:28 proposed an estimator of the form $\tilde{\Sigma}=\tilde{U} \Lambda (\tilde{\lambda}) \tilde{U}^T$, where $\Lambda (\tilde{\lambda})$ is a diagonal matrix whose diagonal entries are a transformed function of the sample eigenvalues and $\tilde{U}$ is the matrix of the eigenvectors. In another interesting paper, @Ledoit:17 proposed an estimator that shrinks the sample covariance matrix towards the identity matrix. @Karoui:15 proposed a non-parametric estimator of the spectrum of eigenvalues and showed that it is consistent in the sense of weak convergence of distributions.
Covariance matrix estimates based on eigen-spectrum shrinkage are well-conditioned in the sense that their eigenvalues are well bounded away from zero. These estimates shrink the eigenvalues alone and are therefore invariant under the orthogonal group: the shrinkage estimators shrink the eigenvalues but leave the eigenvectors unchanged. In other words, the basis (eigenvectors) in which the data are given is not taken advantage of, so these methods rely on the premise that one will be able to find a good estimate in any basis. In practice, however, it is often reasonable to believe that the basis generating the data is structured. This typically translates into the assumption that the covariance matrix has a particular structure that one should be able to take advantage of. In these situations, it becomes natural to perform some form of regularization directly on the entries of the sample covariance matrix.
Much of the recent literature focuses on two broad classes of regularized covariance matrix estimators: i) one class relies on a natural ordering among variables, where one often assumes that variables far apart are weakly correlated, and ii) the other class makes no assumption about any natural ordering among variables. The first class includes the estimators based on banding and tapering \[@Bickel:3, @Cai1:6\]. These estimators are appropriate for a number of applications with ordered data (time series, spectroscopy, climate data). However, for many applications including gene expression data, prior knowledge of any canonical ordering is not available, and searching over all possible orderings is not feasible. In these situations, an $\ell_1$ penalized estimator becomes more appropriate, as it yields a permutation-invariant estimate.
To obtain a suitable estimate which is both well-conditioned and sparse, we introduce two regularization terms: **i)** an $\ell_1$ penalty on each of the off-diagonal elements of the matrix and **ii)** a penalty proportional to the variance of the eigenvalues. The $\ell_1$ minimization problems are well studied in the covariance and inverse covariance matrix estimation literature \[@Freidman:11, @Banerjee:38, @Ravi1:24, @Bein:13, @Maurya:19, etc.\]. @Roth1:26 proposes an $\ell_1$ penalized log-likelihood estimator and shows that the estimator is consistent in Frobenius norm at the rate of $O_P\Big(\sqrt{\{(p+s)~log~p\}/{n}}\Big)$, as both $p$ and $n$ approach infinity. Here $s$ is the number of non-zero off-diagonal elements in the true covariance matrix. In another interesting paper, @Bein:13 propose an estimator of the covariance matrix as a penalized maximum likelihood estimator with a weighted lasso type penalty. In these optimization problems, the $\ell_1$ penalty results in a sparse and permutation-invariant estimator, as compared to other $\ell_q, q \neq 1$ penalties. Another advantage is that the $\ell_1$ norm is a convex function, which makes it suitable for large scale optimization problems. A number of fast algorithms exist in the literature for covariance and inverse covariance matrix estimation \[@Freidman:11, @Roth:25\]. The eigenvalues variance penalty overcomes the over-dispersion in the sample covariance matrix so that the estimator remains well-conditioned.
@Ledoit:17 proposed an estimator of the covariance matrix as a linear combination of the sample covariance and identity matrices. Their estimator is well-conditioned but not sparse. @Roth:25 proposed an estimator of the covariance matrix based on a quadratic loss function and $\ell_1$ penalty with a log-barrier on the determinant of the covariance matrix. The log-determinant barrier is a valid technique to achieve positive definiteness, but it remains unclear whether the iterative procedure proposed in @Roth:25 actually finds the right solution to the corresponding optimization problem. In another interesting paper, @Xue:31 proposed an estimator of the covariance matrix as the minimizer of a penalized quadratic loss function over the set of positive definite matrices. The authors solve a positive definite constrained optimization problem and establish the consistency of their estimator. The resulting estimator is sparse and positive definite, but it is hard to justify whether it overcomes the over-dispersion of the eigen-spectrum of the sample covariance matrix. @Maurya:19 proposed a joint convex penalty as a function of the $\ell_1$ norm and the trace norm (defined as the sum of the singular values of a matrix) for inverse covariance matrix estimation based on a penalized likelihood approach.
In this paper, we propose the JPEN (Joint PENalty) estimators of covariance and inverse covariance matrices and derive an explicit rate of convergence in both the operator and Frobenius norms. The JPEN estimators achieve the minimax rate of convergence under the operator norm for the underlying class of sparse covariance and inverse covariance matrices and are hence optimal. For more details see section $\S3$. One of the major advantages of the proposed estimators is that the associated algorithm is very fast, efficient, and easily scalable to large scale data analysis problems.
The rest of the paper is organized as follows. The next section gives background and the problem set-up for covariance and inverse covariance matrix estimation. In section 3, we describe the proposed estimators and establish their theoretical consistency. In section 4, we give an algorithm and compare its computational time with some other existing algorithms. Section 5 highlights the performance of the proposed estimators on simulated data, while an application of the proposed estimator to real life data is given in section 6.
**Notation:** For a matrix $M$, let $\|M\|_1 $ denote its $\ell_1$ norm, defined as the sum of the absolute values of the entries of $M$; $\|M\|_F$ its Frobenius norm, defined as the square root of the sum of the squared elements of $M$; and $\|M\|$ its operator norm (also called spectral norm), defined as the largest absolute eigenvalue of $M$. $M^{-}$ denotes the matrix $M$ with all diagonal elements set to zero, and $M^{+}$ the matrix $M$ with all off-diagonal elements set to zero. $\sigma_i(M)$ denotes the $i^{th}$ largest eigenvalue of $M$, $tr(M)$ its trace, $det(M)$ its determinant, $\sigma_{min}(M) $ and $\sigma_{max}(M)$ its minimum and maximum eigenvalues, $|M|$ its cardinality, and $\text{sign}(M)$ the matrix of signs of the elements of $M$. For any real $x$, let $\text{sign}(x) $ denote the sign of $x$ and $|x|$ its absolute value.
Background and Problem Set-up
=============================
Let $X=(X_1, X_2, \cdots, X_p) $ be a zero-mean $p$-dimensional random vector. The focus of this paper is the estimation of the covariance matrix $\Sigma:=\mathbb{E}(XX^T)$ and its inverse $\Sigma^{-1}$ from a sample of independently and identically distributed data $\{ X^{(k)} \}^{n}_{k=1}$. In this section we describe the background and problem set-up more precisely.
The choice of loss function is crucial in any optimization problem: an estimator that is optimal for one loss function may not be optimal for another. The recent literature on covariance and inverse covariance matrix estimation mostly focuses on estimation based on the likelihood function or a quadratic loss function \[@Freidman:11, @Banerjee:38, @Bickel:3, @Ravi1:24, @Roth:25, @Maurya:19\]. Maximum likelihood estimation requires a tractable probability distribution of the observations, whereas the quadratic loss function has no such requirement and is therefore fully non-parametric. The quadratic loss function is also convex, and due to this analytical tractability it is a widely applicable choice for many data analysis problems.
Proposed Estimators
-------------------
Let $S$ be the sample covariance matrix. Consider the following optimization problem: $$\hat{\Sigma}_{\lambda,\gamma}=\operatorname*{arg\,min}_{\Sigma=\Sigma^T,tr(\Sigma)=tr(S)}~~\Big[ \|\Sigma-S\|^2_F + \lambda \|{\Sigma^-}\|_1 + \gamma\sum_{i=1}^{p} \big \{\sigma_i(\Sigma)-\bar{\sigma}_{\Sigma} \big \}^2\Big],$$ where $\bar{\sigma}_\Sigma$ is the mean of the eigenvalues of $\Sigma$ and $\lambda$ and $\gamma$ are positive constants. Note that through the penalty function $\|{\Sigma^-}\|_1$, we only penalize the off-diagonal elements of $\Sigma$. The eigenvalues variance penalty term for eigen-spectrum shrinkage is chosen for two reasons: i) it is easy to interpret, and ii) this choice of penalty function yields a very fast optimization algorithm. By the constraint $tr(\Sigma)=tr(S)$, the total variation in $\hat{\Sigma}_{\lambda,\gamma}$ is the same as that in the sample covariance matrix $S$; however, the eigenvalues of $\hat{\Sigma}_{\lambda,\gamma} $ are better conditioned than those of $S$. From here onwards we suppress the dependence of $\hat{\Sigma }$ on $\lambda, \gamma $ and denote $\hat{\Sigma}_{\lambda,\gamma} $ by $\hat{\Sigma}$.\
\
For $\gamma=0$, the solution to (2.1) is the standard soft-thresholding estimator for quadratic loss function and its solution is given by (see $\S4$ for derivation of this estimator): $$\begin{aligned}
\begin{split}
\hat{\Sigma}_{ii}& =s_{ii} \\
\hat{\Sigma}_{ij}& =\text{sign}(s_{ij})\max\Big (|s_{ij}|-\frac{\lambda}{2},0\Big), ~~~~~~~~~~~~~i \neq j.
\end{split}\end{aligned}$$ It is clear from this expression that a sufficiently large value of $\lambda$ will result in a sparse covariance
matrix estimate. However, the estimator $\hat{\Sigma}$ of (2.2) is not necessarily positive definite \[for more details see @Xue:31\]. Moreover, it is hard to say whether it overcomes the over-dispersion in the sample eigenvalues. The eigenvalue plot in Figure 2.1 illustrates this phenomenon for a neighborhood type covariance matrix (see $\S5$ for a description of the neighborhood type of covariance matrix). Here we simulated random vectors from a multivariate normal distribution with sample size $n=50$ and number of covariates $p=20$.
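For concreteness, a minimal base R sketch of the estimator in (2.2) is given below; the function name `soft_threshold_cov` is illustrative and not from any released code:

```r
# A minimal sketch of the soft-thresholding estimator (2.2), assuming S is
# the sample covariance matrix and lambda > 0 a tuning parameter.
soft_threshold_cov <- function(S, lambda) {
  Sig <- sign(S) * pmax(abs(S) - lambda / 2, 0)  # soft-threshold each entry
  diag(Sig) <- diag(S)                           # diagonal entries are not penalized
  Sig
}
```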
![*Comparison of Eigenvalues of Covariance Matrices* ](sam_ei_plot.pdf){width=".8\textwidth"}
As is evident from Figure 2.1, the eigenvalues of the sample covariance matrix are over-dispersed: most of them are either too large or close to zero. The eigenvalues of the proposed Joint Penalty (JPEN) estimator and of PDSCE (Positive Definite Sparse Covariance matrix Estimator; @Roth1:26) are well aligned with those of the true covariance matrix. See $\S 5$ for a detailed discussion. Another drawback of the estimator (2.2) is that the estimate can fail to be positive definite.\
As argued earlier, to overcome the over-dispersion in the eigen-spectrum of the sample covariance matrix, we include the eigenvalues variance penalty. To illustrate its advantage, consider $\lambda=0$ and let $\hat{\Sigma}$ be the minimizer of (2.1); after some algebra, it is given by: $$\hat{\Sigma}=(S+\gamma~t~I)/(1+\gamma),$$ where $I$ is the identity matrix and $t=\sum_{i=1}^{p}S_{ii}/p$. It follows that for any $\gamma>0$: $$\begin{aligned}
\sigma_{min} (\hat{\Sigma}) & = & \sigma_{min} (S+ \gamma~t~I)/(1+\gamma) \\
& \geq & \frac{\gamma~t}{1+\gamma}>0.\end{aligned}$$ This means that the eigenvalues variance penalty improves $S$ to a positive definite estimator $\hat{\Sigma}$. The estimator (2.3) is well-conditioned but need not be sparse; sparsity can be achieved by imposing an $\ell_1$ penalty on the entries of the covariance matrix. Simulations have shown that, in general, the minimizer of (2.1) is not positive definite for all values of $\lambda >0$ and $\gamma >0$. From here onwards we focus on correlation matrix estimation, and later generalize the method to covariance matrix estimation.\
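A quick numerical check of (2.3) in base R is given below, with illustrative parameter values; even when $n<p$ makes $S$ singular, the resulting estimate has smallest eigenvalue at least $\gamma t/(1+\gamma)>0$:

```r
# Numerical illustration of (2.3): the lambda = 0 minimizer
# (S + gamma*t*I)/(1 + gamma) is positive definite even when S is singular.
set.seed(2)
n <- 10; p <- 20                         # n < p, so S is rank-deficient
X <- matrix(rnorm(n * p), n, p)
S <- cov(X)
gamma <- 0.5
t_bar <- mean(diag(S))                   # t = sum(S_ii) / p
Sig_hat <- (S + gamma * t_bar * diag(p)) / (1 + gamma)
min(eigen(Sig_hat, symmetric = TRUE, only.values = TRUE)$values)
# the value above is at least gamma * t_bar / (1 + gamma) > 0
```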
To obtain an estimator that is simultaneously well-conditioned, sparse, and positive definite, we optimize the following objective function in $R$ over a specific region of values of $(\lambda, \gamma)$ that depends upon the sample correlation matrix $K$. Here the condition $tr(\Sigma)=tr(S)$ reduces to $tr(R)=p$, and $t=1$. Consider the following optimization problem: $$\hat{R}_K=\operatorname*{arg\,min}_{R=R^T,tr(R)=p|(\lambda, \gamma) \in \hat{\mathscr{S}}^{K}_1}~~\Big[ ||R-K||^2_F + \lambda \|R^-\|_1 + \gamma\sum_{i=1}^{p} \big \{\sigma_i(R)-\bar{\sigma}_{R} \big \}^2\Big],$$ where
$$\begin{aligned}
\hat{\mathscr{S}}^{K}_1& = \Big \{(\lambda,\gamma): \lambda, \gamma >0, \lambda \asymp \gamma \asymp \sqrt{\frac{log~ p}{n}}, \forall \epsilon >0,\sigma_{min}\{(K+\gamma I)-\frac{\lambda}{2}*sign(K+\gamma I)\}>\epsilon \Big \},\end{aligned}$$
and $\bar{\sigma}_{R}$ is the mean of the eigenvalues of $R$. For instance, when $K$ is a diagonal matrix, the set $\hat{\mathscr{S}}^{K}_1$ is given by:
$\hat{\mathscr{S}}^{K}_1 = \Big \{(\lambda,\gamma): \lambda, \gamma >0, \lambda \asymp \gamma \asymp \sqrt{\frac{log~ p}{n}}, \forall \epsilon >0,\lambda <2(\gamma-\epsilon) \Big \}$.
The minimization in (2.4) over $R$ is for fixed $ (\lambda, \gamma) \in \hat{\mathscr{S}}^{K}_1$. The proposed estimator of the covariance matrix (based on the regularized correlation matrix estimator $\hat{R}_K$) is given by $\hat{\Sigma}_K=({S^+})^{1/2}\hat{R}_K({S^+})^{1/2}$, where $S^+$ is the diagonal matrix of the diagonal elements of $S$. Furthermore, Lemmas 3.1 and 3.2 respectively show that the objective function in (2.4) is convex and that the estimator in (2.4) is positive definite.
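The final scaling step admits a short sketch in base R, assuming `Rhat` is any regularized correlation estimate obtained from (2.4); the function name is illustrative:

```r
# Sketch of mapping a regularized correlation estimate back to covariance
# scale: Sigma_hat_K = (S+)^{1/2} Rhat (S+)^{1/2}.
jpen_cov_from_cor <- function(S, Rhat) {
  d <- sqrt(diag(S))       # diagonal of (S^+)^{1/2}, i.e. sample std deviations
  outer(d, d) * Rhat       # equals diag(d) %*% Rhat %*% diag(d)
}
```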
Our Contribution
----------------
The main contributions are the following:\
**i)** The proposed estimators are both sparse and well-conditioned simultaneously. This approach allows one to take advantage of any prior structure known on the eigenvalues of the true covariance and inverse covariance matrices.\
**ii)** We establish theoretical consistency of the proposed estimators in both operator and Frobenius norms. The proposed JPEN estimators achieve the minimax rate of convergence in operator norm for the underlying class of sparse and well-conditioned covariance and inverse covariance matrices and are therefore optimal.\
**iii)** The proposed algorithm is very fast, efficient, and easily scalable to large scale optimization problems.
Analysis of JPEN Method
=======================
**Def:** A random vector $X$ is said to have a sub-gaussian distribution if for each $t \ge 0$ and $y \in \mathbb{R}^p $ with $\|y\|_2=1$, there exists $0< \tau < \infty $ such that $$\mathbb{P}\{|y^T(X-\mathbb{E}(X))|>t\} \le e^{-t^2/2\tau}$$ Although the JPEN estimators exist for any finite $2 \le n<p<\infty$, for theoretical consistency in operator norm we require $s~log~p=o(n)$, and for Frobenius norm we require $(p+s) ~log~p=o(n)$, where $s$ is the upper bound on the number of non-zero off-diagonal entries in the true covariance matrix. For more details, see the remark after Theorem 3.1.\
Covariance Matrix Estimation
----------------------------
We make the following assumptions about the true covariance matrix $\Sigma_0$.\
**A0.** Let $X:=(X_1,X_2,\cdots,X_p)$ be a mean zero vector with covariance matrix $\Sigma_0$ such that each $X_i/ \sqrt{\Sigma_{0ii}}$ has a sub-gaussian distribution with parameter $\tau$ as defined in (3.1).\
**A1.** With $ E=\{(i,j): \Sigma_{0ij} \neq 0, i \neq j \} $, we have $|E| \le s $ for some positive integer $s$.\
**A2.** There exists a finite positive real number $\bar{k} >0$ such that $ 1/\bar{k} \le \sigma_{min}(\Sigma_0) \le \sigma_{max}(\Sigma_0) \le \bar{k}$.\
Assumption A2 guarantees that the true covariance matrix $\Sigma_0$ is well-conditioned (i.e., all the eigenvalues are finite and positive). Well-conditioning means \[@Ledoit:17\] that inverting the matrix does not inflate the estimation error. Assumption A1 is more of a definition, which says that the number of non-zero off-diagonal elements is bounded by some positive integer. Theorem 3.1 gives the rate of convergence of the proposed correlation based covariance matrix estimator (2.4). The following lemmas show that the optimization problem in (2.4) is convex and that the proposed JPEN estimator (2.4) is positive definite.
**Lemma 3.1.** The optimization problem in (2.4) is convex.
**Lemma 3.2.** The estimator given by (2.4) is positive definite for any $2 \le n < \infty $ and $p <\infty$.
**Theorem 3.1.** Let $(\lambda, \gamma) \in \hat{\mathscr{S}}^{K}_1$ and let $\hat{\Sigma}_K$ be as defined in (2.4). Under Assumptions A0, A1, and A2, $$\|\hat{R}_K-R_0\|_F=O_P \Big ( \sqrt{\frac{s~ log~p}{n}} \Big ) ~~~ \text{and} ~~~\|\hat{\Sigma}_K-\Sigma_0\|=O_P \Big ( \sqrt{\frac{(s+1) log~p}{n}} \Big ),$$ where $R_0$ is the true correlation matrix.
**Remark: 1.** The JPEN estimator $\hat{\Sigma}_K$ is minimax optimal under the operator norm. In (@Cai2:40), the authors obtain the minimax rate of convergence in the operator norm of their covariance matrix estimator for the particular construction of parameter space $\mathscr{H}_0(c_{n,p}):=\Big \{ \Sigma : max_{1 \le i \le p}\sum_{j=1}^{p}I\{\sigma_{ij}\neq 0\} \leq c_{n,p} \Big \}$. They show that this rate in operator norm is $c_{n,p} \sqrt{log~p/n}$, which is the same as that of $\hat{\Sigma}_K$ for $1 \leq c_{n,p}=\sqrt{s}$.\
[**2.**]{} @Bickel1:4 proved that under the assumption $\sum_{j=1}^{p}|\sigma_{ij}|^q \leq c_0(p)$ for some $ 0 \leq q \leq 1$, the hard thresholding estimator of the sample covariance matrix with tuning parameter $\lambda \asymp \sqrt{(log~p)/n}$ is consistent in operator norm at a rate no worse than $ O_P\Big ( c_0(p) \sqrt{p}(\frac{log ~p}{n})^{(1-q)/2} \Big ) $, where $c_0(p)$ is the upper bound on the number of non-zero elements in each row. Here the truly sparse case corresponds to $q=0$. The rate of convergence of $\hat{\Sigma}_K$ is the same as that of @Bickel1:4 except in the following cases:\
[**Case (i)**]{} The covariance matrix has all off-diagonal elements zero except the last row, which has $ \sqrt{p}$ non-zero elements. Then $c_0(p)=\sqrt{p}$ and $ \sqrt{s}=\sqrt{2~\sqrt{p}-1}$. The operator norm rate of convergence for the JPEN estimator is $O_P \Big ( \sqrt{\sqrt{p}~(log~p)/n} \Big )$, whereas the rate of Bickel and Levina’s estimator is $O_P \Big (\sqrt{p~(log~p)/n} \Big )$.\
[**Case (ii)**]{} When the true covariance matrix is tridiagonal, we have $c_0(p)=2$ and $s=2p-2$; the JPEN estimator has a rate of $\sqrt{p~log~p/n}$ whereas Bickel and Levina’s estimator has a rate of $\sqrt{log~p/n}$.\
For the case $\sqrt{s} \asymp c_0(p)$, JPEN has the same rate of convergence as Bickel and Levina’s estimator.\
**3.** The operator norm rate of convergence is much faster than the Frobenius norm rate. This is due to the fact that Frobenius norm convergence is in terms of all the eigenvalues of the covariance matrix, whereas the operator norm gives convergence in terms of the largest eigenvalue.\
**4.** Our proposed estimator is applicable for estimating any non-negative definite covariance matrix.\
Note that the estimator $\hat{\Sigma}_K$ is obtained by regularization of the sample correlation matrix in (2.4). In some applications it is desirable to directly regularize the sample covariance matrix. The JPEN estimator of the covariance matrix based on regularization of the sample covariance matrix is obtained by solving the following optimization problem: $$\hat{\Sigma}_S=\operatorname*{arg\,min}_{\Sigma=\Sigma^T,tr(\Sigma)=tr(S)|(\lambda, \gamma) \in \hat{\mathscr{S}}^{S}_1}~~\Big[ ||\Sigma-S||^2_F + \lambda \|\Sigma^-\|_1 + \gamma\sum_{i=1}^{p} \{\sigma_i(\Sigma)-\bar{\sigma}_{\Sigma}\}^2\Big],$$ where $$\begin{aligned}
\hat{\mathscr{S}}^S_1& = \Big \{(\lambda,\gamma): \lambda,\gamma >0, \lambda \asymp \gamma \asymp \sqrt{\frac{log~ p}{n}}, \forall \epsilon >0, \sigma_{min}\{(S+\gamma t I)-\frac{\lambda}{2}*sign(S+\gamma t I)\}>\epsilon \Big \},\end{aligned}$$ and $S$ is the sample covariance matrix. The minimization in (3.3) over $\Sigma$ is for fixed $ (\lambda, \gamma) \in \hat{\mathscr{S}}^{S}_1$. The estimator $\hat{\Sigma}_S$ is positive definite and well-conditioned. Theorem 3.2 gives the rate of convergence of the estimator $\hat{\Sigma}_S$ in Frobenius norm.
**Theorem 3.2.** Let $(\lambda, \gamma) \in \hat{\mathscr{S}}^S_1$, and let $\hat{\Sigma}_S$ be as defined in (3.3). Under Assumptions A0, A1, and A2, $$\|\hat{\Sigma}_S-\Sigma_0\|_F=O_P \Big ( \sqrt{\frac{(s+p) log~p}{n}} \Big )$$
As noted in @Roth1:26, the worst part of the convergence rate here comes from estimating the diagonal entries.
### Weighted JPEN Estimator for Covariance Matrix Estimation
A modification of the estimator $\hat{R}_{K}$ is obtained by adding positive weights to the terms $(\sigma_i(R)-\bar{\sigma}_R)^2$. This leads to a weighted eigenvalues variance penalty, with larger weights amounting to greater shrinkage towards the center and vice versa. Note that the choice of the weights allows one to use any prior structure of the eigenvalues (if known) in estimating the covariance matrix. The weighted JPEN correlation matrix estimator $\hat{R}_A$ is given by: $$\hat{R}_A=\operatorname*{arg\,min}_{R=R^T,tr(R)=p|(\lambda, \gamma) \in \hat{\mathscr{S}}^{K,A}_1}~~\Big[ ||R-K||^2_F + \lambda \|R^-\|_1 + \gamma\sum_{i=1}^{p} a_i\{\sigma_i(R)-\bar{\sigma}_R\}^2\Big],$$ where $$\begin{aligned}
\hat{\mathscr{S}}^{K,A}_1& = \Big \{(\lambda,\gamma): \lambda \asymp \gamma \asymp \sqrt{\frac{log~p}{n}}, \lambda \le \frac{(2~\sigma_{min}(K))(1+\gamma ~max(A_{ii})^{-1})}{(1+\gamma~ min(A_{ii}))^{-1}p}+ \frac{\gamma ~min(A_{ii})}{p} \Big \},\end{aligned}$$ and $A=\text{diag}(A_{11},A_{22},\cdots A_{pp})$ with $A_{ii}=a_i$. The proposed covariance matrix estimator is $\hat{\Sigma}_{K,A}=(S^{+})^{1/2}\hat{R}_A (S^{+})^{1/2}$. The optimization problem in (3.5) is convex and yields a positive definite estimator for each $(\lambda,\gamma) \in \hat{\mathscr{S}}^{K,A}_1$. A simple exercise shows that the estimator $\hat{\Sigma}_{K,A}$ has the same rate of convergence as that of $\hat{\Sigma}_{S}$.
Estimation of Inverse Covariance Matrix
---------------------------------------
We extend the JPEN approach to estimate a well-conditioned and sparse inverse covariance matrix. As with covariance matrix estimation, we first propose an estimator of the inverse covariance matrix based on a regularized inverse correlation matrix and discuss its rate of convergence in Frobenius and operator norms.\
**Notation:** We shall use $Z$ and $\Omega$ for the inverse correlation and inverse covariance matrices, respectively.\
**Assumptions:** We make the following assumptions about the true inverse covariance matrix $\Omega_0$. Let $\Sigma_0=\Omega_0^{-1}$.\
**B0.** Same as the assumption $A0$.\
**B1.** With $ H=\{(i,j): \Omega_{0ij} \neq 0, i \neq j \}$, we have $|H| \le s $ for some positive integer $s$.\
**B2.** There exists $ 0< \bar{k} < \infty $ large enough such that $ (1/{\bar{k}}) \le \sigma_{min}(\Omega_0) \le \sigma_{max}(\Omega_0) \le \bar{k}$.\
Let $\hat{R}_K$ be a JPEN estimator of the true correlation matrix. By Lemma 3.2, $\hat{R}_K$ is positive definite. Define the JPEN estimator of the inverse correlation matrix as the solution to the following optimization problem: $$\hat{Z}_{K}=\operatorname*{arg\,min}_{Z=Z^T,tr(Z)=tr(\hat{R}_K^{-1})|(\lambda,\gamma) \in \hat{\mathscr{S}}^{{K}}_2}\Big [ \|Z-\hat{R}_K^{-1} \|^2_F~+~\lambda\|Z^-\|_1~+~\gamma\sum_{i=1}^{p} \{\sigma_i(Z)- \bar{\sigma}(Z)\}^2 \Big ]$$ where
$$\begin{aligned}
\hat{\mathscr{S}}^{{K}}_2& = \Big \{(\lambda,\gamma): \lambda,\gamma >0, \lambda \asymp \gamma \asymp \sqrt{\frac{log~ p}{n}}, \forall \epsilon >0, \\
& ~~~~~~~~~~~~ \sigma_{min}\{(\hat{R}_K^{-1}+\gamma t_1 I)-\frac{\lambda}{2}*sign(\hat{R}_K^{-1}+\gamma t_1 I)\}>\epsilon \Big \},\end{aligned}$$
and $t_1$ is the average of the diagonal elements of $\hat{R}_K^{-1}$. The minimization in (3.6) over $Z$ is for fixed $ (\lambda, \gamma) \in \hat{\mathscr{S}}^{{K}}_2$. The proposed JPEN estimator of the inverse covariance matrix (based on the regularized inverse correlation matrix estimator $\hat{Z}_{K}$) is given by $\hat{\Omega}_{{K}}=(S^+)^{-1/2}{\hat{Z}}_{{K}}(S^+)^{-1/2}$, where $S^+$ is the diagonal matrix of the diagonal elements of $S$. Moreover, (3.6) is a convex optimization problem and $\hat{Z}_K$ is positive definite.\
Next we state the consistency of the estimators $\hat{Z}_{{K}}$ and $\hat{\Omega}_{{K}}$.
**Theorem 3.3.** Under Assumptions B0, B1, B2 and for $(\lambda,\gamma) \in \hat{\mathscr{S}}^{{K}}_{2}$, $$\|\hat{Z}_{{K}}-R_0^{-1}\|_F=O_P \Big ( \sqrt{\frac{s~log~p}{n}} \Big ) ~~~\text{and} ~~~\|\hat{\Omega}_{{K}}-\Omega_0\|=O_P \Big ( \sqrt{\frac{(s+1)~log~p}{n}} \Big )$$ where $R_0^{-1}$ is the inverse of the true correlation matrix.
**Remark: 1.** Note that the JPEN estimator $\hat{\Omega}_{{K}}$ achieves the minimax rate of convergence for the class of inverse covariance matrices satisfying assumptions $B0$, $B1$, and $B2$, and is therefore optimal. A similar rate is obtained in @Cai2:40 for their class of sparse inverse covariance matrices.\
Next we give another estimator of the inverse covariance matrix, based on $\hat{\Sigma}_{S}$. Consider the following optimization problem: $$\hat{\Omega}_S=\operatorname*{arg\,min}_{\Omega=\Omega^T,tr(\Omega)=tr(\hat{\Sigma}_{S}^{-1})|(\lambda, \gamma) \in \hat{\mathscr{S}}^{{S}}_2}~~\Big[ ||\Omega- \hat{\Sigma}_{S}^{-1}||^2_F + \lambda \|\Omega^-\|_1 + \gamma\sum_{i=1}^{p} \{\sigma_i(\Omega)-\bar{\sigma}_{\Omega}\}^2\Big],$$ where
$$\begin{aligned}
\hat{\mathscr{S}}^{{S}}_2&= \Big \{(\lambda,\gamma): \lambda,\gamma >0, \lambda \asymp \gamma \asymp \sqrt{\frac{log~ p}{n}}, ~\forall \epsilon >0, \\
& ~~~~~~~~~~~~\sigma_{min}\{(\hat{\Sigma}_S^{-1}+\gamma ~t_2~I)-\frac{\lambda}{2}*sign(\hat{\Sigma}_S^{-1}+\gamma t_2 I)\}>\epsilon \Big \},\end{aligned}$$
and $t_2$ is the average of the diagonal elements of $\hat{\Sigma}_S $. The minimization in (3.8) over $\Omega$ is for fixed $ (\lambda, \gamma) \in \hat{\mathscr{S}}^{S}_2$. The estimator in (3.8) is positive definite and well-conditioned. The consistency of the estimator $\hat{\Omega}_S$ is given in the following theorem.
**Theorem 3.4.** Let $(\lambda, \gamma) \in \hat{\mathscr{S}}^{S}_2$ and let $\hat{\Omega}_S$ be as defined in (3.8). Under Assumptions B0, B1, and B2, $$\|\hat{\Omega}_S-\Omega_0\|_F=O_P \Big ( \sqrt{\frac{(s+p) log~p}{n}} \Big ).$$
### Weighted JPEN Estimator for the Inverse Covariance Matrix
Similar to the weighted JPEN covariance matrix estimator $\hat{\Sigma}_{K,A}$, a weighted JPEN estimator of the inverse covariance matrix is obtained by adding positive weights $a_i $ to the terms $(\sigma_i(Z)-1)^2$ in (3.8). The weighted JPEN estimator is $\hat{\Omega}_{{K},A}:=({S^{+}})^{-1/2}\hat{Z}_A ({S^{+}})^{-1/2}$, where $$\hat{Z}_A=\operatorname*{arg\,min}_{Z=Z^T,tr(Z)=tr(\hat{R}_K^{-1})|(\lambda, \gamma) \in \hat{\mathscr{S}}^{K,A}_2}~~\Big[ ||Z-\hat{R}_K^{-1}||^2_F + \lambda \|Z^-\|_1 + \gamma\sum_{i=1}^{p} a_i\{\sigma_i(Z)-1\}^2\Big],$$ with $$\begin{aligned}
\hat{\mathscr{S}}^{{K},A}_2& = \Big \{(\lambda,\gamma): \lambda \asymp \gamma \asymp \sqrt{\frac{log~p}{n}}, \lambda \le \frac{(2~\sigma_{min}(\hat{R}_K^{-1}))(1+\gamma~ t_1~ max(A_{ii})^{-1})}{(1+\gamma~ min(A_{ii}))^{-1}p}+ \frac{\gamma~ min(A_{ii})}{p} \Big \},\end{aligned}$$ and $A=\text{diag}(A_{11},A_{22},\cdots A_{pp})$ with $A_{ii}=a_i$. The optimization problem in (3.10) is convex and yields a positive definite estimator for $(\lambda,\gamma) \in \hat{\mathscr{S}}^{K,A}_2$. A simple exercise shows that the estimator $\hat{Z}_{A}$ has a similar rate of convergence to that of $\hat{Z}_K$.
An Algorithm
============
Covariance Matrix Estimation:
-----------------------------
The optimization problem (2.4) can be written as:\
$$\begin{aligned}
\hat{R}_K=\operatorname*{arg\,min}_{R=R^T|(\lambda, \gamma) \in \hat{\mathscr{S}}^{K}_1 }~f(R),\end{aligned}$$ where $$\begin{aligned}
f(R)=||R-K||^2_F + \lambda \|{R}^-\|_1 + \gamma\sum_{i=1}^{p} \{\sigma_i(R)-\bar{\sigma}(R)\}^2.\end{aligned}$$ Note that $\sum_{i=1}^{p} \{\sigma_i(R)-\bar{\sigma}(R)\}^2=tr(R^2)-2~tr(R)+p$, where we have used the constraint $tr(R)=p$. Therefore, $$\begin{aligned}
f(R) & = & \|R-K\|_F^2+\lambda \|{R}^-\|_1 +\gamma~ tr(R^2)-2~ \gamma~tr(R) +p\\
& = & tr(R^2 (1+\gamma))-2tr\{R(K+\gamma I)\}+ tr(K^TK) +\lambda ~\|{R}^-\|_1 +p\\
& = & (1+\gamma)\{tr(R^2)-2/(1+\gamma) tr\{R(K+\gamma I)\}+ (1/(1+\gamma))tr(K^TK)\} \\
& &~~~~+\lambda ~\|{R}^-\|_1 +p \\
& = & (1+\gamma)\{\|R- (K+\gamma I)/(1+\gamma)\|^2_F+ (1/(1+\gamma))tr(K^TK)\} \\
& & ~~~~+\lambda ~\|{R}^-\|_1 +p.\end{aligned}$$ The solution of (4.1) is a soft-thresholding estimator, given by: $$\begin{aligned}
\hat{R}_K =\frac{1}{1+\gamma}~\text{sign}(K)*\text{pmax}\{\text{abs}(K+\gamma ~I)-\frac{\lambda}{2},0\}\end{aligned}$$ with $(\hat{R}_{K})_{ii}=(K_{ii}+\gamma)/(1+\gamma)$, where $\text{pmax}(A,b)_{ij}:=max(A_{ij},b)$ is the elementwise max function applied to each entry of the matrix $A$. Note that for each $(\lambda,\gamma) \in \hat{\mathscr{S}}^{K}_1$, $\hat{R}_{K}$ is positive definite.\
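A minimal base R sketch of the closed form (4.2) is given below; `jpen_cor` is an illustrative name, and $(\lambda,\gamma)$ are assumed to lie in $\hat{\mathscr{S}}^{K}_1$:

```r
# Sketch of the closed form (4.2), assuming K is the sample correlation matrix.
jpen_cor <- function(K, lambda, gamma) {
  p <- nrow(K)
  M <- K + gamma * diag(p)                     # K + gamma * I
  R <- sign(M) * pmax(abs(M) - lambda / 2, 0)  # entrywise soft-thresholding
  diag(R) <- diag(K) + gamma                   # diagonal is not thresholded
  R / (1 + gamma)                              # so (R_K)_ii = (K_ii + gamma)/(1 + gamma)
}
```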
\
[**Choice of $\lambda$ and $\gamma$:**]{} For a given value of $\gamma$, we can find the value of $\lambda $ satisfying: $$\begin{aligned}
\sigma_{min}\{(K+\gamma I)-\frac{\lambda}{2}*sign(K+\gamma I)\}>0\end{aligned}$$ which can be simplified to $$\begin{aligned}
\lambda < \frac{\sigma_{min}(K+\gamma I)}{C_{12}~\sigma_{max}(\text{sign}(K))} \end{aligned}$$ for some $C_{12} \ge 0.5$. For such a choice of $(\lambda,\gamma)\in \hat{\mathscr{S}}_1^{{K}}$, the estimator $\hat{R}_{K}$ is positive definite. Smaller values of $C_{12}$ yield a solution which is more sparse but may not be positive definite.\
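A base R sketch of this bound follows, assuming $C_{12}=0.5$ by default; `lambda_max` is an illustrative name:

```r
# Sketch of the upper bound (4.4) on lambda for a given gamma; any lambda
# below this value keeps the thresholded estimator positive definite.
lambda_max <- function(K, gamma, C12 = 0.5) {
  p <- nrow(K)
  e_min <- min(eigen(K + gamma * diag(p), symmetric = TRUE,
                     only.values = TRUE)$values)       # sigma_min(K + gamma*I)
  e_sgn <- max(abs(eigen(sign(K), symmetric = TRUE,
                         only.values = TRUE)$values))  # sigma_max(sign(K))
  e_min / (C12 * e_sgn)
}
```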
\
[**Choice of weight matrix A:**]{} For the optimization problem in (3.5), the weights are chosen in the following way:\
Let $\mathscr{E}$ be the set of sorted diagonal elements of the sample covariance matrix $S$.\
**i)** Let $k$ be the largest index of $\mathscr{E}$ such that the $k^{th}$ element of $\mathscr{E}$ is less than $1$. For $i \le k, ~a_i=\mathscr{E}_{i}$. For $k<i \le p,~ a_i=1/\mathscr{E}_{i}.$\
**ii)** $A=\text{diag}(a_1,a_2,\cdots,a_p),~\text{where}~ a_j=a_j/\sum_{i=1}^p a_i.$ This choice of weights allows more shrinkage of the extreme sample eigenvalues than of those in the center of the eigen-spectrum (see the sketch below).
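A base R sketch of this construction is as follows; `jpen_weights` is an illustrative name:

```r
# Sketch of steps i)-ii) above, assuming S is the sample covariance matrix.
jpen_weights <- function(S) {
  e <- sort(diag(S))             # sorted diagonal elements of S
  a <- ifelse(e < 1, e, 1 / e)   # a_i = E_i below 1, a_i = 1/E_i above 1
  diag(a / sum(a))               # normalized weight matrix A
}
```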
Inverse Covariance Matrix Estimation:
-------------------------------------
To obtain an expression for the inverse covariance matrix estimate, we replace $K$ by $\hat{R}_K^{-1}$ in (4.2), where $\hat{R}_K$ is a JPEN estimator of the correlation matrix. We choose $(\lambda,\gamma) \in \hat{\mathscr{S}}^{{K}}_2$. For a given $\gamma$, we choose $\lambda>0$ satisfying:\
$$\begin{aligned}
\sigma_{min}\{(\hat{R}_K^{-1}+\gamma t_1 I)-\frac{\lambda}{2}*sign(\hat{R}_K^{-1}+\gamma t_1 I)\}>0\end{aligned}$$ which can be simplified to $$\begin{aligned}
\lambda < \frac{\sigma_{min}(\hat{R}_K^{-1}+\gamma t_1 I)}{C_{12}~\sigma_{max}(\text{sign}(\hat{R}_K^{-1}))}. \end{aligned}$$
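Combining these pieces, a base R sketch of the inverse-correlation step follows; the exact placement of $t_1$ in the closed form is our reading of the feasibility set $\hat{\mathscr{S}}^{{K}}_2$ and should be treated as an assumption, and `jpen_inv_cor` is an illustrative name:

```r
# Sketch of the inverse step: re-use the closed form (4.2) with K replaced
# by the inverse of the JPEN correlation estimate Rhat.
jpen_inv_cor <- function(Rhat, lambda, gamma) {
  Rinv <- solve(Rhat)                           # Rhat is positive definite (Lemma 3.2)
  t1 <- mean(diag(Rinv))                        # average diagonal of Rhat^{-1}
  M <- Rinv + gamma * t1 * diag(nrow(Rinv))
  Z <- sign(M) * pmax(abs(M) - lambda / 2, 0)   # entrywise soft-thresholding
  diag(Z) <- diag(Rinv) + gamma * t1            # diagonal is not thresholded (assumed)
  Z / (1 + gamma)
}
```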
Computational Complexity
------------------------
The JPEN estimator $\hat{\Sigma}_{K}$ has computational complexity of $O(p^2)$, as there are at most $3p^2$ multiplications for computing the estimator $\hat{\Sigma}_{K}$. Other existing algorithms, Glasso ([@Freidman:11]) and PDSCE ([@Roth1:26]), have computational complexity of $O(p^3)$. We compare the computational timing of our algorithm with these algorithms.
The exact timing of these algorithms also depends upon the implementation, platform, etc. (we did our computations in $R$ on an AMD 2.8GHz processor). Following the approach of [@Bickel1:4], the optimal tuning parameter $(\lambda,\gamma)$ was obtained by minimizing the $5-$fold cross validation error $$(1/5) \sum_{i=1}^5\|\hat{\Sigma}_i^v-\Sigma_i^{-v}\|_1,$$ where $\hat{\Sigma}_i^v$ is the JPEN estimate of the covariance matrix
![image](timing.png){width="40.00000%"}
based on $v=4n/5$ observations, and $\Sigma_i^{-v}$ is the sample covariance matrix based on the remaining $(n/5)$ observations. Figure 4.1 shows the total computational time taken to estimate the covariance matrix by the Glasso, PDSCE, and JPEN algorithms for different values of $p$, for a Toeplitz type covariance matrix, on a log-log scale (see section $\S 5$ for the Toeplitz type of covariance matrix). Although the proposed method requires optimization over a grid of values of $(\lambda,\gamma) \in \hat{\mathscr{S}}^{K}_1$, our algorithm is very fast and easily scalable to large scale data analysis problems.
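A minimal base R sketch of this cross-validation criterion is shown below; `jpen_cov` is assumed to map a training data matrix and $(\lambda,\gamma)$ to a JPEN covariance estimate (an illustrative name, not from the paper's released code):

```r
# Sketch of the 5-fold cross-validation error above: fit on v = 4n/5 rows,
# compare to the held-out sample covariance in elementwise l1 norm.
cv_error <- function(X, lambda, gamma, folds = 5) {
  n <- nrow(X)
  fold_id <- split(sample(n), rep(seq_len(folds), length.out = n))
  err <- 0
  for (i in seq_len(folds)) {
    test <- fold_id[[i]]
    Sig_hat <- jpen_cov(X[-test, , drop = FALSE], lambda, gamma)  # assumed fitter
    S_test  <- cov(X[test, , drop = FALSE])                       # held-out covariance
    err <- err + sum(abs(Sig_hat - S_test))                       # elementwise l1 norm
  }
  err / folds
}
```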
Simulation Results
==================
We compare the performance of the proposed method to other existing methods on simulated data for five types of covariance and inverse covariance matrices.\
**(i) Hub Graph:** Here the rows/columns of $\Sigma_0$ are partitioned into $J$ equally-sized disjoint groups $\{ V_1 \cup V_2 \cup \cdots \cup V_J\} = \{1,2,...,p\}$; each group is associated with a
---
author:
- 'Phoolendra K. Mishra and Kristopher L. Kuhlman'
bibliography:
- 'review\_paper.bib'
title: 'Unconfined Aquifer Flow Theory - from Dupuit to present'
---
Abstract
========
Analytic and semi-analytic solutions are often used by researchers and practitioners to estimate aquifer parameters from unconfined aquifer pumping tests. The non-linearities associated with unconfined (i.e., water table) aquifer tests make their analysis more complex than confined tests. Although analytical solutions for unconfined flow began in the mid-1800s with Dupuit, Thiem was possibly the first to use them to estimate aquifer parameters from pumping tests in the early 1900s. In the 1950s, Boulton developed the first transient well test solution specialized to unconfined flow. By the 1970s Neuman had developed solutions considering both primary transient storage mechanisms (confined storage and delayed yield) without non-physical fitting parameters. In the last decade, research into developing unconfined aquifer test solutions has mostly focused on explicitly coupling the aquifer with the linearized vadose zone. Despite the many advanced solution methods available, there still exists a need for more realistic solutions that accurately simulate real-world aquifer tests.
Introduction
============
Pumping tests are widely used to obtain estimates of hydraulic parameters characterizing flow and transport processes in the subsurface (e.g., @kruseman90 [@batu1998aquifer]). Hydraulic parameter estimates are often used in planning or engineering applications to predict flow and to design aquifer extraction or recharge systems. During a typical pumping test in a horizontally extensive aquifer, a well is pumped at a constant volumetric flow rate and head observations are made through time at one or more locations. Pumping test data are presented as time-drawdown or distance-drawdown curves, which are fitted to idealized models to estimate aquifer hydraulic properties. For unconfined aquifers, properties of interest include hydraulic conductivity, specific storage, specific yield, and possibly unsaturated flow parameters. When estimating aquifer properties using pumping test drawdown data, one can use a variety of analytical solutions involving different conceptualizations and simplifying assumptions. Analytical solutions are impacted by their simplifying assumptions, which limit their applicability to characterize certain types of unconfined aquifers. This review presents the historical evolution of the scientific and engineering thought concerning groundwater flow towards a pumping well in unconfined aquifers (also referred to variously as gravity, phreatic, or water table aquifers), from the steady-state solutions of Dupuit to the coupled transient saturated-unsaturated solutions. Although it is sometimes necessary to use gridded numerical models in highly irregular or heterogeneous systems, here we limit our consideration to
analytically derived solutions.
Early Well Test Solutions
=========================
Dupuit’s Steady-State Finite-Domain Solutions
---------------------------------------------
[@dupuit1857] considered steady-state radial flow to a well pumping at constant volumetric flowrate $Q$ \[L$^3$/T\] in a horizontal homogeneous confined aquifer of thickness $b$ \[L\]. He used Darcy’s law [@darcy1856] to express the velocity of groundwater flow $u$ \[L/T\] in terms of radial hydraulic head gradient $\left(\partial
h/\partial r\right)$ as $$\label{darcys-law}
u=K\frac{\partial h}{\partial r},$$ where $K=kg/\nu$ is hydraulic conductivity \[L/T\], $k$ is formation permeability \[L$^2$\], $g$ is the gravitational constant \[L/T$^2$\], $\nu$ is fluid kinematic viscosity \[L$^2$/T\], $h=\psi+z$ is hydraulic head \[L\], $\psi$ is gage pressure head \[L\], and $z$ is elevation above an arbitrary datum \[L\]. Darcy derived a form equivalent to \eqref{darcys-law} for one-dimensional flow through sand-packed pipes. Dupuit was the first to apply \eqref{darcys-law} to converging flow by combining it with mass conservation $Q=\left(2\pi rb \right)u$ across a cylindrical shell concentric with the well, leading to $$\label{dupuit_1}
Q=K\left( 2\pi rb\right)\frac{\partial h}{\partial r}.$$ Integrating between two radial distances $r_1$ and $r_2$ from the pumping well, Dupuit evaluated the confined steady-state head difference between the two points as $$\label{dupuit_confined}
h(r_{2})-h(r_{1})=\frac{Q}{2\pi Kb}\log\left( \frac{r_2}{r_1}\right).$$ This is the solution for flow to a well at the center of a circular island, where a constant head condition is applied at the edge of the island ($r_2$).
@dupuit1857 also derived a radial flow solution for unconfined aquifers by neglecting the vertical flow component. Following a similar approach to confined aquifers, @dupuit1857 estimated the steady-state head difference between two distances from the pumping well for unconfined aquifers as $$\label{dupuit_unconfined}
h^{2}(r_{2}) - h^{2} (r_{1}) = \frac{Q}{\pi K} \log\left(
\frac{r_2}{r_1}\right).$$ These two solutions are only strictly valid for finite domains; when applied to domains without a physical boundary at $r_2$, the outer radius essentially becomes a fitting parameter. The solutions are also used in radially infinite systems under pseudo-static conditions, when the shape of the water table does not change with time.
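As a quick numerical illustration of \eqref{dupuit_confined}, the following base R snippet evaluates the confined head difference for assumed (illustrative) parameter values, not taken from any real test:

```r
# Worked example of the confined Thiem/Dupuit formula (SI units assumed).
Q  <- 1e-3      # pumping rate, m^3/s
K  <- 1e-4      # hydraulic conductivity, m/s
b  <- 20        # aquifer thickness, m
r1 <- 5; r2 <- 50                            # observation radii, m
dh <- Q / (2 * pi * K * b) * log(r2 / r1)    # head difference h(r2) - h(r1), m
dh                                           # about 0.18 m
```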
Equations \eqref{dupuit_confined} and \eqref{dupuit_unconfined} are equivalent when $b$ in \eqref{dupuit_confined} is the average head $\left( h(r_1)+h(r_2)\right)/2$. In developing \eqref{dupuit_unconfined}, [@dupuit1857] used the following assumptions (now commonly called the Dupuit assumptions) in the context of unconfined aquifers:
- the aquifer bottom is a horizontal plane;
- groundwater flow toward the pumping wells is horizontal with no vertical hydraulic gradient component;
- the horizontal component of the hydraulic gradient is constant with depth and equal to the water table slope; and
- there is no seepage face at the borehole.
These
assumptions are one of the main approaches to simplifying the unconfined flow problem and making it analytically tractable. In the unconfined flow problem both the head and the location of the water table are unknowns; the Dupuit assumptions eliminate one of the unknowns.
Historical Developments after Dupuit
------------------------------------
@narasimhan98 and @vries2007 give detailed historical accounts of groundwater hydrology and soil mechanics; only history relevant to well test analysis is given here. @forchheimer1886 first recognized that the Laplace equation $\nabla^2 h = 0$ governs two-dimensional steady confined groundwater flow (to which \eqref{dupuit_confined} is a solution), allowing analogies to be drawn between groundwater flow and steady-state heat conduction, including the first application of conformal mapping to solve a groundwater flow problem. @slichter1898 also arrived at the Laplace equation for groundwater flow, and was the first to account for a vertical flow component. Utilizing Dupuit’s assumptions, @forchheimer1898 developed the steady-state unconfined differential equation (to which \eqref{dupuit_unconfined} is a solution), $\nabla^2 h^2=0$. @boussinesq1904 first gave the transient version of the confined groundwater flow equation $\alpha_s \nabla^2 h = \partial h/\partial t$ (where $\alpha_s=K/S_s$ is hydraulic diffusivity \[L$^2$/T\] and $S_s$ is specific storage \[1/L\]), based upon analogy with transient heat conduction.
In Prague, @thiem1906 was possibly the first to use \eqref{dupuit_confined} for estimating $K$ from pumping tests with multiple observation wells [@simmons2008]. Equation \eqref{dupuit_confined} (commonly called the Thiem equation) was tested in the 1930s both in the field (@wenzel1932recent performed a 48-hour pumping test with 80 observation wells in Grand Island, Nebraska) and in the laboratory (@wyckoff1932dupuitflow developed a 15-degree unconfined wedge sand tank to simulate converging flow). Both found the steady-state solution lacking in its ability to consistently estimate aquifer parameters. @wenzel1942 developed several complex averaging approaches (e.g., the “limiting” and “gradient” formulas) to attempt to consistently estimate $K$ using steady-state confined equations for a finite system from transient unconfined data. @muskat1932partialpen considered partial-penetration effects in steady-state flow to wells, discussing the nature of errors associated with the assumption of uniform flux across the well screen in a partially penetrating well. Muskat’s textbook on porous media flow [@muskat1937book] summarized much of what was known in hydrology and petroleum reservoir engineering around the time of the next major advance in well test solutions by Theis.
Confined Transient Flow
-----------------------
[@theis1935] utilized the analogy between transient groundwater flow and heat conduction to develop an analytical solution for confined transient flow to a pumping well (see Figure \[fig:diagram\]). He initially applied his solution to
unconfined flow, assuming instantaneous drainage due to water table movement. The analytical solution was based on a Green’s function heat conduction solution in an infinite axis-symmetric slab due to an instantaneous line heat source or sink [@carslaw1921]. With the aid of mathematician Clarence Lubin, Theis extended the heat conduction solution to a continuous source, motivated to better explain the results of pumping tests like the 1931 test in Grand Island. [@theis1935] gave an expression for drawdown due to pumping a well at rate $Q$ in a homogeneous, isotropic confined aquifer of infinite radial extent as an exponential integral $$\label{theis}
s(r,t)=\frac{Q}{4\pi T}\int_{r^2 /(4 \alpha_s t)}^{\infty}\frac{e^{-u}}{u}
\;\mathrm{d}u,$$ where $s=h_0(r)-h(t,r)$ is drawdown, $h_0$ is pre-test hydraulic head, $T=Kb$ is transmissivity, and $S=S_s b$ is storativity. Equation \eqref{theis} is a solution to the diffusion equation, with zero-drawdown initial and far-field conditions, $$s(r,t=0) = s(r\rightarrow \infty,t) = 0.$$ The pumping well was approximated by a line sink (zero radius), and the source term assigned there was based upon \eqref{dupuit_1}, $${\label{boulton_sink_well}}
\lim_{r \rightarrow 0} r\frac{\partial s}{\partial r}=-\frac{Q}{2 \pi T}.$$
![Unconfined well test diagram[]{data-label="fig:diagram"}](well-diagram.eps){width="60.00000%"}
Although the transient governing equation was known through analogy with heat conduction, the transient storage mechanism (analogous to specific heat capacity) was not completely understood. Unconfined aquifer tests were known to experience slower drawdown than confined tests, due to water supplied by dewatering of the zone near the water table, which is related to the formation specific yield (porosity less residual water). @muskat1934transient and [@hurst1934unsteady] derived solutions to confined transient radial flow problems for finite domains, but attributed transient storage solely to fluid compressibility. @jacob1940 derived the diffusion equation for groundwater flow in compressible elastic confined aquifers, using mass conservation and Darcy’s law, without recourse to analogy with heat conduction. @terzaghi1923 developed a one-dimensional consolidation theory which only considered the compressibility of the soil (in his case a clay), unknown at the time to most hydrologists [@batu1998aquifer]. @meinzer1928 studied regional drawdown in North Dakota, proposing the modern storage mechanism related to both aquifer compaction and the compressibility of water. @jacob1940 formally showed $S_s=\rho_w g(\beta_p + n\beta_w)$, where $\rho_w$ and $\beta_w$ are fluid density \[M/L$^3$\] and compressibility \[LT$^2$/M\], $n$ is dimensionless porosity, and $\beta_p$ is formation bulk compressibility. The axis-symmetric diffusion equation in radial coordinates is $$\label{diffusion}
\frac{\partial ^2 s}{\partial r^2}+\frac{1}{r}\frac{\partial
s}{\partial r}=
\frac{1}{\alpha_s}\frac{\partial s}{\partial t}.$$
When deriving analytical expressions, the governing equation is commonly made dimensionless to simplify presentation of results. For flow to a pumping well, it is convenient to use $L_C = b$ as a characteristic length, $T_C = Sb^2/T$ as a characteristic time, and $H_C = Q/(4 \pi T)$ as a characteristic head. The dimensionless diffusion equation is $$\label{diffusion-dimensionless}
\frac{\partial ^2 s_D}{\partial r_D^2}+\frac{1}{r_D}\frac{\partial
s_D}{\partial r_D}=\frac{\partial s_D}{\partial t_D},$$ where $r_D=r/L_C$, $s_D=s/H_C$, and $t_D=t/T_C$ are scaled by characteristic quantities.
The [@theis1935] solution was developed for field application to estimate aquifer hydraulic properties, but it saw limited use because it was difficult to compute the exponential integral for arbitrary inputs. [@wenzel1942] proposed a type-curve method that enabled graphical application of the [@theis1935] solution to field data. [@cooperjacob1946] suggested that for large values of $t_D$ ($t_D
\geq 25$), the infinite integral in the [@theis1935] solution can be approximated as $$\label{JacobCooper}
s_D(t_D,r_D) = \int_{r^2/(4\alpha_st)}^{\infty} \frac{e^{-u}}{u} \; \mathrm{d}u \approx
\log_e \left(\frac{4 Tt}{r^2S}\right) - \gamma$$ where $\gamma \approx 0.57722$ is the Euler-Mascheroni constant. This leads to Jacob and Cooper’s straight-line simplification $$\Delta s \approx 2.3 \frac{Q}{4 \pi T}$$ where $\Delta s$ is the drawdown over one log-cycle (base 10) of time. The intercept of the straight-line approximation is related to $S$ through \eqref{JacobCooper}. This approximation made estimating hydraulic parameters much simpler at large $t_D$. @hantush1961 later extended Theis’ confined solution for partially penetrating wells.
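The agreement between \eqref{theis} and \eqref{JacobCooper} at small $u$ (large $t_D$) is easy to verify numerically; the base R sketch below uses `integrate()` for the exponential integral and illustrative parameter values, not data from a real test:

```r
# Compare the Theis exponential integral with the Cooper-Jacob approximation.
theis_w <- function(u) integrate(function(x) exp(-x) / x,
                                 lower = u, upper = Inf)$value
Q <- 1e-3; Tr <- 1e-3; S <- 1e-4; r <- 10   # SI units; Tr is transmissivity
time_s <- 86400                             # one day of pumping, s
u <- r^2 * S / (4 * Tr * time_s)            # small u corresponds to large t_D
s_theis <- Q / (4 * pi * Tr) * theis_w(u)
s_cj    <- Q / (4 * pi * Tr) * (log(4 * Tr * time_s / (r^2 * S)) - 0.57722)
c(s_theis, s_cj)                            # nearly identical when u << 1
```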
Observed Time-drawdown Curve
----------------------------
Before the time-dependent solution of @theis1935, distance-drawdown was the diagnostic plot for aquifer test data. Detailed distance-drawdown plots require many observation locations (e.g., the 80 observation wells of @wenzel1936TheimTest). Re-analyzing results of the unconfined pumping test in Grand Island, [@wenzel1942] noticed that the [@theis1935] solution gave inconsistent estimates of $S_s$ and $K$, attributed to the delay in the yield of water from storage as the water table fell. The @theis1935 solution corresponds to the Dupuit assumptions for unconfined flow, and can only re-create a portion of observed unconfined time-drawdown profiles (either late or early). The effect of the water table must be taken into account, through a boundary condition or source term in the governing equation, to reproduce observed behavior in unconfined pumping tests.
![Drawdown data from Cape Cod [@moenchetal2001], observation well F377-037. Upper dashed curve is confined model of @hantush1961 with $S=S_sb$, lower dotted curve is same with $S=S_sb + S_y$. Solid curve is unconfined model of @neuman1974 using $S_y=0.23$.[]{data-label="fig:capecod"}](cape_cod_F377-037.eps){width="80.00000%"}
[@walton1960] recognized three distinct segments characterizing different release mechanisms on the time-drawdown curve under water table conditions (see Figure \[fig:capecod\]). A log-log time-drawdown plot in an unconfined aquifer has a characteristic shape consisting of a steep early-time segment, a flatter intermediate segment, and a steeper late-time segment. The early segment behaves like the @theis1935 solution with $S=S_s b$ (water release due to bulk medium relaxation), the late segment behaves like the @theis1935 solution with $S=S_s b + S_y$ [@gambolati1976transient] (water release due to water table drop), and the intermediate segment represents a transition between the two. Distance-drawdown plots from unconfined aquifer tests do not show a similar inflection or change in slope, and do not produce good estimates of storage parameters.
Early Unconfined Well Test Solutions
====================================
Moving Water Table Solutions Without Confined Storage
-----------------------------------------------------
The [@theis1935] solution for confined aquifers can only reproduce either the early or late segments of the unconfined time-drawdown curve (see Figure \[fig:capecod\]). [@boulton1954a] suggested it is theoretically unsound to use the @theis1935 solution for unconfined flow because it does not account for vertical flow to the pumping well. He proposed a new mechanism for flow towards a fully penetrating pumping well under unconfined conditions. His formulation assumed flow is governed by $\nabla^2 s = 0$, with transient effects incorporated through the water table boundary condition. He treated the water table (where $\psi=0$, located at $z=\xi$ above the base of the aquifer) as a moving material boundary subject to the condition $h\left( r,z=\xi,t\right)=z$. He considered the water table without recharge to consist of a constant set of particles, leading to the kinematic boundary condition $$\label{dynamic}
\frac{D}{Dt}\left(h - z \right) = 0$$ which is a statement of conservation of mass for an incompressible fluid. @boulton1954a considered the Darcy velocity of the water table as $u_z=-\frac{K_z}{S_y}\frac{\partial h}{\partial z}$ and $u_r=-\frac{K_r}{S_y}\frac{\partial h}{\partial r}$, and expressed the total derivative as $$\label{material_derivative}
\frac{D}{Dt}=\frac{\partial}{\partial t}-
\frac{K_r}{S_y}\frac{\partial h}{\partial r}\frac{\partial}{\partial r}-
\frac{K_z}{S_y}\frac{\partial h}{\partial z}\frac{\partial}{\partial z},$$ where $K_r$ and $K_z$ are radial and vertical hydraulic conductivity components. Using \eqref{material_derivative}, the kinematic boundary condition in terms of drawdown is $${\label{free_surface}}
\frac{\partial s}{\partial t}-\frac{K_r}{S_y}
\left( \frac{\partial s}{\partial r} \right)^2-
\frac{K_z}{S_y} \left( \frac{\partial s}{\partial z} \right)^2=-
\frac{K_z}{S_y}\frac{\partial s}{\partial z}.$$
@boulton1954a utilized the wellbore and far-field boundary conditions of @theis1935. He also assumed the aquifer rests on an impermeable flat horizontal boundary $\left. \partial h/\partial z \right|_{z=0}= 0$; this was also inferred by @theis1935 because of his two-dimensional radial flow assumption. [@dagan1967] extended Boulton’s water table solution to the partially penetrating case by replacing the wellbore boundary condition with $$\lim_{r \rightarrow 0} r\frac{\partial s}{\partial r}=
\begin{cases}\frac{Q}{2\pi K (\ell - d)} & b-\ell < z < b-d \\ 0 &
\text{otherwise}\end{cases},$$ where $\ell$ and $d$ are the upper and lower boundaries of the pumping well screen, as measured from the initial top of the aquifer.
The two sources of non-linearity in the unconfined problem are: 1) the boundary condition is applied at the water table, the location of which is unknown *a priori*; 2) the boundary condition applied on the water table includes $s^2$ terms.
In order to solve this non-linear problem both Boulton and Dagan linearized it by disregarding second order components in the free-surface boundary condition and forcing the free surface to stay at its initial position, yielding $$\label{boulton_linear_water_table}
\frac{\partial s}{\partial t}=-
\frac{K_z}{S_y}\frac{\partial s}{\partial z} \qquad\qquad z=h_0,$$ which now has no horizontal flux component after neglecting second-order terms. Equation can be written in non-dimensional form as $$\label{dimensionless_MWT_bc}
\frac{\partial s_D}{\partial t_D}=-
K_D \sigma^{\ast} \frac{\partial s_D}{\partial z_D} \qquad\qquad z_D=1,$$ where $K_D=K_z/K_r$ is the dimensionless anisotropy ratio and $\sigma^{\ast}=S/S_y$ is the dimensionless storage ratio.
Both the [@boulton1954a] and [@dagan1967] solutions reproduce the intermediate and late segments of the typical unconfined time-drawdown curve, but neither reproduces the early segment. Both are solutions to the Laplace equation, and
therefore disregard confined aquifer storage, causing pressure pulses to propagate instantaneously through the saturated zone. Both solutions exhibit an instantaneous step-like increase in drawdown when pumping starts.
Delayed Yield Unconfined Response
---------------------------------
[@boulton1954b] extended Theis' transient confined theory to include the effect of delayed yield due to movement of the water table in unconfined aquifers. Boulton's proposed solutions ([-@boulton1954b; -@boulton1963]) reproduce all three segments of the unconfined time-drawdown curve. In his formulation of delayed yield, he assumed that, as the water table falls, water is released from storage (through drainage) gradually, rather than instantaneously as in the free-surface solutions of [@boulton1954a] and [@dagan1967]. This approach yielded an integro-differential flow equation in terms of vertically averaged drawdown $s^\ast$ as $$\label{boulton_solution}
\frac{\partial ^2 s^\ast}{\partial r^2}+
\frac{1}{r}\frac{\partial s^\ast}{\partial r}=
\left[\frac{S}{T}\frac{\partial s^\ast}{\partial t} \right]+
\left\lbrace\alpha S_y \int _0^t \frac{\partial s^\ast}{\partial \tau}
e^{-\alpha (t-\tau )}\;\mathrm{d}\tau \right \rbrace$$ which Boulton linearized by treating $T$ as constant. The term in square brackets is instantaneous confined storage, the term in braces is a convolution integral representing storage released gradually since pumping began, due to water table decline. [@boulton1963] showed the time when delayed yield effects become negligible is approximately equal to $\frac{1}{\alpha}$, leading to the term “delay index”. [@prickett1965] | 1 | member_13 |
used this concept and, through analysis of a large amount of field drawdown data with the [@boulton1963] solution, established an empirical relationship between the delay index and physical aquifer properties. Prickett proposed a methodology for estimating $S$, $S_y$, $K$, and $\alpha$ of unconfined aquifers by analyzing pumping tests with the [@boulton1963] solution.
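As an illustration of the delayed-yield source term (the term in braces above), the following sketch evaluates the convolution integral by trapezoid-rule quadrature for a synthetic drawdown history; $\alpha$, $S_y$, and $s^\ast(t)$ are assumptions for illustration only:

```python
# Sketch: numerical evaluation of Boulton's delayed-yield term
#   alpha * Sy * int_0^t (ds*/dtau) exp(-alpha (t - tau)) dtau
# for a synthetic drawdown history; alpha, Sy, and s*(t) are assumptions.
import numpy as np

alpha, Sy = 1.0e-4, 0.23
t = np.linspace(0.0, 1.0e5, 2001)
s = 0.5 * np.log1p(t / 1.0e3)          # synthetic vertically averaged drawdown s*(t)
dsdt = np.gradient(s, t)               # ds*/dtau on the grid

def trapz(y, x):                       # small local trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def delayed_yield(i):
    """Gradual-storage convolution term at time t[i]."""
    kern = np.exp(-alpha * (t[i] - t[: i + 1]))
    return alpha * Sy * trapz(dsdt[: i + 1] * kern, t[: i + 1])

term = delayed_yield(len(t) - 1)       # contribution at the final time
```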
Although Boulton’s model was able to reproduce all three segment of the unconfined time-drawdown curve, it failed to explain the physical mechanism of the delayed yield process because of the non-physical nature of the “delay index” in the [@boulton1963] solution.
[@streltsova1972a] developed an approximate solution for the decline of the water table and $s^\ast$ in fully penetrating pumping and observation wells. Like @boulton1954b, she treated the water table as a sharp material boundary, writing the two-dimensional depth-averaged flow equation as $$\label{streltsova_equation}
\frac{\partial ^2 s^\ast}{\partial r^2}+
\frac{1}{r}\frac{\partial s^\ast}{\partial r}=
\frac{S}{T}\left(\frac{\partial s^\ast}{\partial t}-
\frac{\partial \xi}{\partial t} \right).$$ The rate of water table decline was assumed to be linearly proportional to the difference between the water table elevation $\xi$ and the vertically averaged head $b-s^\ast$, $$\label{streltsova_watertable}
\frac{\partial \xi}{\partial t}=\frac{K_z}{S_yb_z}
\left( s^\ast-b+\xi \right)$$ where $b_z=b/3$ is an effective aquifer thickness over which water table recharge is distributed into the deep aquifer. Equation | 1 | member_13 |
can be viewed as an approximation to the zero-order linearized free-surface boundary condition of [@boulton1954a] and [@dagan1967]. Streltsova considered the initial condition $\xi
(r,t=0)=b$ and used the same boundary condition at the pumping well and the outer boundary $(r\rightarrow \infty )$ used by [@theis1935] and [@boulton1963]. Equation has the solution [@streltsova1972b] $$\label{strelstsova_sol}
\frac{\partial \xi}{\partial t} =
-\alpha_T \int _0^t e^{-\alpha_T (t-\tau)}
\frac{\partial s^\ast}{\partial \tau}\;\mathrm{d}\tau$$ where $\alpha_T = K_z/(S_yb_z)$. Substituting into produces the solution of [@boulton1954b; @boulton1963]; the two solutions are equivalent. Boulton's delayed yield theory (like that of Streltsova) does not account for flow in the unsaturated zone but instead treats the water table as a material boundary moving vertically downward under the influence of gravity. [@streltsova1973] used field data collected by [@meyer1962] to demonstrate that unsaturated flow had virtually no impact on the observed delayed yield process. Although Streltsova's solution related Boulton's delay index to physical aquifer properties, it was later found to be a function of $r$ [@neuman1975; @herrera1978]. Because of their simplifying assumptions, the delayed yield solutions of Boulton and Streltsova do not account for vertical flow within the unconfined aquifer; they cannot be extended to account for partially penetrating pumping and observation wells.
Prickett’s pumping test in the vicinity of Lawrenceville, Illinois [@prickett1965] showed that | 1 | member_13 |
specific storage in unconfined aquifers can be much greater than typically observed values in confined aquifers, possibly due to entrapped air bubbles or poorly consolidated shallow sediments. It is clear that the elastic properties of unconfined aquifers are too important to be disregarded.
Delayed Water Table Unconfined Response
---------------------------------------
Boulton’s ([-@boulton1954b; -@boulton1963]) models encountered conceptual difficulty explaining the physical mechanism of water release from storage in unconfined aquifers. [@neuman1972] presented a physically based mathematical model that treated the unconfined aquifer as compressible (like @boulton1954b [@boulton1963] and @streltsova1972a [@streltsova1972b]) and the water table as a moving material boundary (like @boulton1954a and @dagan1967). In Neuman’s approach aquifer delayed response was caused by physical water table movement, he therefore proposed to replace the phrase “delayed yield” by “delayed water table response”.
[@neuman1972] replaced the Laplace equation of [@boulton1954a] and [@dagan1967] by the diffusion equation; in dimensionless form it is $$\label{neuman_diffusion}
\frac{\partial ^2 s_D}{\partial r_D^2}+
\frac{1}{r_D}\frac{\partial s_D}{\partial r_D}+
K_D\frac{\partial ^2s_D}{\partial z_D^2} =
\frac{\partial s_D}{\partial t_D}.$$ Like @boulton1954a and @dagan1967, Neuman treated the water table as a moving material boundary, linearized it (using ), and treated the anisotropic aquifer as three-dimensional axis-symmetric. Like @dagan1967, @neuman1974 accounted for partial penetration. By including confined storage in the | 1 | member_13 |
governing equation , Neuman was able to reproduce all three parts of the observed unconfined time-drawdown curve and produce parameter estimates (including the ability to estimate $K_z$) very similar to the delayed yield models.
Compared to the delay index models, Neuman's solution produced similar fits to data (though often underestimating $S_y$), but [@neuman1975; @neuman1979] questioned the physical nature of Boulton's delay index. He performed a regression fit between the @boulton1954b and @neuman1972 solutions, resulting in the relationship $$\label{alpha-regression}
\alpha = \frac{K_z}{S_yb}\left[3.063-0.567
\log\left( \frac{K_Dr^2}{b^2}\right) \right]$$ demonstrating that $\alpha$ decreases linearly with $\log r$ and is therefore not a characteristic aquifer constant. When the logarithmic term is ignored, the relationship $\alpha=3K_z/(S_yb)$ proposed by [@streltsova1972a] is approximately recovered.
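A minimal sketch of this empirical relationship (the base-10 logarithm and all parameter values below are assumptions for illustration), making the $r$-dependence explicit:

```python
# Sketch: Neuman's empirical fit for the delay index, eq. (alpha-regression).
# Base-10 logarithm and all parameter values are assumptions for illustration.
import numpy as np

Kz, Kr, Sy, b = 1.0e-5, 1.0e-4, 0.23, 20.0
KD = Kz / Kr
for r in (5.0, 10.0, 50.0):
    alpha = Kz / (Sy * b) * (3.063 - 0.567 * np.log10(KD * r**2 / b**2))
    print(f"r = {r:5.1f} m  ->  alpha = {alpha:.3e} 1/s")
# alpha shrinks as r grows: the delay index is not an aquifer constant.
```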
After comparative analysis of various methods for determination of specific yield, [@neuman1987] concluded the water table response to pumping is a much faster phenomenon than drainage in the unsaturated zone above it.
@malama2011 recently proposed an alternative linearization of , approximately including the effects of the neglected second-order terms, leading to the alternative water table boundary condition of $$\label{malama}
S_y \frac{\partial s}{\partial t} =- K_z
\left( \frac{\partial s}{\partial z} +
\beta \frac{\partial^2 s}{\partial z^2} \right) \qquad z=h_0$$ where $\beta$ is a linearization coefficient \[L\]. The parameter | 1 | member_13 |
$\beta$ provides additional adjustment of the shape of the intermediate portion of the time-drawdown curve (beyond adjustments possible with $K_D$ and $\sigma^\ast$ alone), leading to improved estimates of $S_y$. When $\beta=0$ simplifies to .
Hybrid Water Table Boundary Condition
-------------------------------------
The solution of [@neuman1972; @neuman1974] was accepted by many hydrologists “as the preferred model ostensibly because it appears to make the fewest simplifying assumptions” [@moenchetal2001]. Despite this acceptance, [@nwankwor1984] and [@moench1995] pointed out that significant differences might exist between measured and model-predicted drawdowns, especially at locations near the water table, leading to significantly underestimated $S_y$ when using Neuman's models (e.g., see Figure \[fig:capecod\]). @moench1995 attributed the inability of Neuman's models to give reasonable estimates of $S_y$ and to capture this observed behavior near the water table to the latter's disregard of “gradual drainage”. In an attempt to resolve this problem, [@moench1995] replaced the instantaneous moving water table boundary condition used by Neuman with one containing a @boulton1954b delayed yield convolution integral, $$\label{moench_hybrid}
\int _0^t\frac{\partial s}{\partial \tau}
\sum _{m=1}^M \alpha _m e^{-\alpha _m (t-\tau )}\;\mathrm{d}\tau =-
\frac{K_z}{S_y}\frac{\partial s}{\partial z}$$ This hybrid boundary condition ($M=1$ in @moench1995) included the convolution source term that @boulton1954b [@boulton1963] and @streltsova1972a [@streltsova1972b] used in their depth-averaged governing flow equations.
In addition to this new boundary condition, [@moench1995] included a finite radius pumping well with wellbore storage, conceptually similar to how @papadopulosandcooper1967 modified the solution of @theis1935. In all other respects, his definition of the problem was similar to [@neuman1974].
Moench’s solution resulted in improved fits to experimental data and produced more realistic estimates of specific yield [@moenchetal2001], including the use of multiple delay parameters $\alpha_m$ [@moench2003]. @moenchetal2001 used with $M = 3$ to estimate hydraulic parameters in the unconfined aquifer at Cape Cod. They showed that $M=3$ enabled a better fit to the observed drawdown data than obtained by $M<3$ or the model of [@neuman1974]. Similar to the parameter $\alpha $ in Boulton’s model, the physical meaning of $\alpha _m$ is not clear.
Unconfined Solutions Considering Unsaturated Flow
=================================================
As an alternative to linearizing the water table condition of @boulton1954a, the unsaturated zone can be explicitly included; the non-linearity of unsaturated flow is then substituted for the non-linearity of . When the vadose zone is considered, the water table is internal to the domain rather than a boundary condition. The model-data misfit in Figure \[fig:capecod\] at “late intermediate” time is one of the motivations for considering the mechanisms of delayed yield
and the effects of the unsaturated zone.
Unsaturated Flow Without Confined Aquifer Storage
-------------------------------------------------
[@kroszynskidagan1975] were the first to account analytically for the effect of the unsaturated zone on aquifer drawdown. They extended the solution of [@dagan1967] by accounting for unsaturated flow above the water table. They used Richards’ equation for axis-symmetric unsaturated flow in a vadose zone of thickness $L$ $$\label{richards}
K_r\frac{1}{r}\frac{\partial}{\partial r}\left( k(\psi
)r\frac{\partial \sigma}{\partial r}\right)+
K_z\frac{\partial}{\partial z}\left( k(\psi )\frac{\partial
\sigma}{\partial z}\right) = C(\psi)\frac{\partial \sigma
}{\partial t}
\qquad
\xi < z <b+L$$ where $\sigma = b + \psi_a - h$ is unsaturated zone drawdown \[L\], $\psi _a$ is air-entry pressure head \[L\], $0 \leq k(\psi )\leq 1$ is dimensionless relative hydraulic conductivity, $C(\psi)=d\theta/d
\psi$ is the moisture retention curve \[1/L\], and $\theta$ is dimensionless volumetric water content (see inset in Figure \[fig:diagram\]). They assumed flow in the underlying saturated zone was governed by the Laplace equation (like @dagan1967). The saturated and unsaturated flow equations were coupled through interface conditions at the water table expressing continuity of hydraulic heads and normal groundwater fluxes, $$\begin{aligned}
s=\sigma\qquad
\nabla s \cdot \textbf{n}=\nabla \sigma \cdot \textbf{n} \qquad
z= \xi\end{aligned}$$ where $\textbf{n}$ is the unit vector perpendicular to the water table.
To solve the | 1 | member_13 |
unsaturated flow equation , @kroszynskidagan1975 linearized by adopting the [@gardner1958] exponential model for the relative hydraulic conductivity, $k(\psi )=e^{\kappa_a (\psi -\psi_a)}$, where $\kappa_a$ is the sorptive number \[1/L\] (related to pore size). They adopted the same exponential form for the moisture capacity model, $\theta (\psi
)=e^{\kappa_k (\psi -\psi_k)}$, where $\psi_k$ is the pressure at which $k(\psi)=1$, $\kappa_a=\kappa_k$, and $\psi_a=\psi_k$, leading to the simplified form $C(\psi)=S_y\kappa_a e^{\kappa_a (\psi
-\psi_a)}$. In the limit as $\kappa_k=\kappa_a \rightarrow \infty$ their solution reduces to that of [@dagan1967]; in this limit the relationship between pressure head and water content becomes a step function. [@kroszynskidagan1975] took unsaturated flow above the water table into account but ignored the effects of confined aquifer storage, leading to early-time step-change behavior similar to @boulton1954a and @dagan1967.
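A short sketch of the @gardner1958 exponential model and the resulting moisture capacity ($\psi_a$, $S_y$, and the $\kappa$ values below are illustrative assumptions), showing how increasing $\kappa$ sharpens the behavior toward the step-function limit:

```python
# Sketch: Gardner (1958) exponential relative conductivity and the resulting
# moisture capacity C(psi) = Sy * kappa_a * exp(kappa_a (psi - psi_a)).
# psi_a, Sy, and the kappa values are illustrative assumptions.
import numpy as np

psi_a, Sy = -0.2, 0.23                       # air-entry head [m], specific yield
psi = np.linspace(-5.0, psi_a, 200)          # unsaturated range psi <= psi_a

def k_rel(psi, kappa):                       # k(psi) = exp(kappa_a (psi - psi_a))
    return np.exp(kappa * (psi - psi_a))

def C(psi, kappa):                           # C(psi) = Sy kappa_a exp(...)
    return Sy * kappa * np.exp(kappa * (psi - psi_a))

for kappa in (0.5, 2.0, 50.0):               # kappa -> infinity: step-function limit
    k = k_rel(psi, kappa)                    # profile sharpens toward a step
```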
Increasingly Realistic Saturated-Unsaturated Well Test Models
-------------------------------------------------------------
[@mathiasbutler2006] combined the confined aquifer flow equation with a one-dimensional linearized version of for a vadose zone of finite thickness. Their water table was treated as a fixed boundary with known flow conditions, decoupling the unsaturated and saturated solutions at the water table. Although they only considered a one-dimensional unsaturated zone, they included the additional flexibility provided by different exponents $(\kappa_a \neq \kappa_k)$. [@mathiasbutler2006] did not consider a partially penetrating well, | 1 | member_13 |
but they did note the possibility of accounting for it in principle by incorporating their uncoupled drainage function in the solution of @moench1997, which considers a partially penetrating well of finite radius.
[@tartakovskyneuman2007] similarly combined the confined aquifer flow equation , but with the original axis-symmetric form of considered by @kroszynskidagan1975. Also like @kroszynskidagan1975, their unsaturated zone was characterized by a single exponent $\kappa_a=\kappa_k$ and reference pressure head $\psi_a=\psi_k$. Unlike @kroszynskidagan1975 and @mathiasbutler2006, [@tartakovskyneuman2007] assumed an infinitely thick unsaturated zone.
[@tartakovskyneuman2007] demonstrated that flow in the unsaturated zone is not strictly vertical. Numerical simulations by @moench2008 showed groundwater movement in the capillary fringe is more horizontal than vertical. [@mathiasbutler2006] and @moench2008 showed that using the same exponents and reference pressure heads for effective saturation and relative permeability decreases model flexibility and underestimates $S_y$. @moench2008 predicted that an extended form of @tartakovskyneuman2007 with two separate exponents, a finite unsaturated zone, and wellbore storage would likely produce more physically realistic estimates of $S_y$.
[@mishraneuman2010] developed a new generalization of the solution of [@tartakovskyneuman2007] that characterized relative hydraulic conductivity and water content using $\kappa_a \neq \kappa_k$, $\psi_a
\neq \psi_k$ and a finitely thick unsaturated zone. @mishraneuman2010 validated their solution against numerical simulations of drawdown in | 1 | member_13 |
a synthetic aquifer with unsaturated properties given by the model of [@vangenuchten1980]. They also estimated aquifer parameters from Cape Cod drawdown data [@moenchetal2001], comparing estimated @vangenuchten1980 parameters with laboratory values [@maceetal1998].
[@mishraneuman2011] further extended their [-@mishraneuman2010] solution to include a finite-diameter pumping well with storage. @mishraneuman2010 [@mishraneuman2011] were the first to estimate non-exponential model unsaturated aquifer properties from pumping test data, by curve-fitting the exponential model to the [@vangenuchten1980] model. Analyzing pumping test data of @moenchetal2001 (Cape Cod, Massachusetts) and @nwankwor1984 [@nwankwor1992] (Borden, Canada), they estimated unsaturated flow parameters similar to laboratory-estimated values for the same soils.
Future Challenges
=================
The conceptualization of groundwater flow during unconfined pumping tests has been a challenging task that has spurred substantial theoretical research in the field of hydrogeology for decades. Unconfined flow to a well is non-linear in multiple ways, and the application of analytical solutions has required advanced mathematical tools. There are still many additional challenges to be addressed related to unconfined aquifer pumping tests, including:
- Hysteretic effects of unsaturated flow. Different exponents and reference pressures are needed during drainage and recharge events, complicating simple superposition needed to handle multiple pumping wells, variable pumping rates, or analysis of recovery data.
| 1 | member_13 |
- Collecting different data types. Validation of existing models and motivating development of more realistic ones depends on more than just saturated zone head data. Other data types include vadose zone water content [@meyer1962], and hydrogeophysical data like microgravity [@damiata2006] or streaming potentials [@malama2009].
- Moving water table position. All solutions since @boulton1954a assume the water table is a fixed horizontal surface, $\xi(r,t)=h_0$, during the entire test, even close to the pumping well where large drawdown is often observed. Iterative numerical solutions can accommodate a moving water table, but this has not been included in an analytical solution.
- Physically realistic partial penetration. Well test solutions suffer from the complication related to the unknown distribution of flux across the well screen. Commonly, the flux distribution is simply assumed constant, but it is known that flux will be higher near the ends of the screened interval that are not coincident with the aquifer boundaries.
- Dynamic water table boundary condition. A large increase in complexity comes from explicitly including unsaturated flow in unconfined solutions. The kinematic boundary condition expresses mass conservation due to water table decline. Including an analogous dynamic boundary condition based on a force balance (capillarity vs. gravity) may include sufficient effects of unsaturated flow.
---
abstract: 'We describe how to solve simultaneous Padé approximations over a power series ring ${{\mathsf{K}}}[[x]]$ for a field ${{\mathsf{K}}}$ using ${O^{\scriptscriptstyle \sim}\!}(n^{\omega - 1} d)$ operations in ${{\mathsf{K}}}$, where $d$ is the sought precision and $n$ is the number of power series to approximate. We develop two algorithms using different approaches. Both algorithms return a reduced sub-basis that generates the complete set of solutions to the input approximation problem that satisfy the given degree constraints. Our results are made possible by recent breakthroughs in fast computations of minimal approximant bases and Hermite Padé approximations.'
author:
- |
Johan Rosenkilde, né Nielsen\
Technical University of Denmark\
Denmark\
[email protected]
- |
Arne Storjohann\
University of Waterloo\
Canada\
[email protected]
bibliography:
- 'bibtex.bib'
title: 'Algorithms for Simultaneous Padé Approximations [^1] '
---
Introduction {#sec:intro}
============
The Simultaneous Padé approximation problem concerns approximating several power series $S_1,\ldots,S_n \in {{\mathsf{K}}}[[x]]$ with rational functions $\frac {\sigma_1}\lambda,\ldots,\frac {\sigma_n} \lambda$, all sharing the same denominator $\lambda$. In other words, for some $d \in {\mathbb Z\xspace}_{\geq 0}$, we seek $\lambda \in {{\mathsf{K}}}[x]$ of low degree such that each of $${{\textnormal{rem}}}(\lambda S_1,\ x^d) , {{\textnormal{rem}}}(\lambda S_2,\ x^d),\ \ldots,\ {{\textnormal{rem}}}(\lambda S_n,\ x^d)$$ has
low degree. The study of Simultaneous Padé approximations traces back to Hermite's proof of the transcendence of $e$ [@hermite_sur_1878]. Solving Simultaneous Padé approximations has numerous applications, such as in coding theory, e.g. [@feng_generalization_1991; @schmidt_collaborative_2009]; or in distributed, reliable computation [@clement_pernet_high_2014]. Many algorithms have been developed for this problem, see e.g. [@beckermann_uniform_1992; @olesh_vector_2006; @sidorenko_linear_2011; @nielsen_generalised_2013] as well as the references therein. Usually one cares about the regime where $d \gg n$. Obtaining $O(n d^2)$ is classical through successive cancellation, see [@beckermann_uniform_1994] or [@feng_generalization_1991] for a Berlekamp–Massey-type variant. Using fast arithmetic, the previous best was ${O^{\scriptscriptstyle \sim}\!}(n^\omega d)$, where $\omega$ is the exponent for matrix multiplication, see \[ssec:cost\]. That can be done by computing a minimal approximant basis with e.g. [@giorgi_complexity_2003; @GuptaSarkarStorjohannValeriote11]; this approach traces back to [@barel_general_1992; @beckermann_uniform_1992]. Another possibility which achieves the same complexity is fast algorithms for solving structured linear systems, e.g. [@bostan_solving_2008]; see [@chowdhury_faster_2015] for a discussion of this approach.
A common description is to require $\deg \lambda < N_0$ for some degree bound $N_0$, and similarly $\deg {{\textnormal{rem}}}(\lambda S_i,\, x^d) < N_i$ for $i = 1,\ldots,n$. The degree bounds could arise naturally from the application, or could be set such that a solution must exist. A natural generalisation is
also to replace the $x^d$ moduli with arbitrary $g_1,\ldots,g_n \in {{\mathsf{K}}}[x]$. Formally, for any field ${{\mathsf{K}}}$:
\[prob:sim\_pade\] Given a tuple $({\bm{S}}, {\bm{g}}, {\bm{N}})$ where
- ${\bm{S}} = (S_1,\ldots,S_n) \in {{\mathsf{K}}}[x]^n$ is a sequence of polynomials,
- ${\bm{g}} = (g_1,\ldots,g_n) \in {{\mathsf{K}}}[x]^n$ is a sequence of moduli polynomials with $\deg S_i < \deg g_i$ for $i=1,\ldots,n$,
- and ${\bm{N}} = (N_0,\ldots,N_n)
\in {\mathbb Z\xspace}_{\geq 0}^{n+1}$ are degree bounds satisfying $1\leq N_0 \leq \max_i \deg g_i$ and $N_i \leq \deg g_i$ for $i=1,\ldots,n$,
find, if it exists, a non-zero vector $(\lambda, \phi_1, \ldots, \phi_n)$ such that
1. $\lambda S_i \equiv \phi_i \mod g_i$ for $i = 1,\ldots, n$, and \[p1item1\]
2. $\deg \lambda < N_0$ and $\deg \phi_i < N_i$ for $i=1,\ldots,n$.
We will call any vector $(\lambda, \phi_1, \ldots, \phi_n)$ as above *a solution* to a given Simultaneous Padé approximation problem. Note that if the $N_i$ are set too low, then it might be the case that no solution exists.
\[ex:simpade\] Consider over ${\mathbb F_{2}\xspace}[x]$ that $g_1 = g_2 = g_3 = x^5$, and ${\bm{S}} = (S_1,S_2,S_3) =
\left(x^{4} + x^{2} + 1,\,x^{4} + 1,\,x^{4} + x^{3} + 1\right)$, with degree bounds ${\bm{N}} = (5, 3, 4, 5)$. Then $\lambda_1 = x^4 | 1 | member_14 |
+ 1$ is a solution, since $\deg \lambda_1 < 5$ and $$\lambda_1 {\bm{S}} \equiv
\left(x^{2} + 1,\ 1,\ x^{3} + 1\right)
\mod x^5 \ .$$ $\lambda_2 = x^{3} + x$ is another solution, since $$\lambda_2 {\bm{S}} \equiv
\left(x,\ x^{3} + x,\ x^4+x^3 + x\right)
\mod x^5 \ .$$ These two solutions are linearly independent over ${\mathbb F_{2}\xspace}[x]$ and span all solutions.
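The example can be checked mechanically. The following SymPy sketch (not part of the original Sage implementation linked below) verifies both solutions over ${\mathbb F_{2}\xspace}$, including the degree bounds:

```python
# Sketch: verifying the two solutions of the example over GF(2) with SymPy.
from sympy import Poly, symbols

x = symbols('x')
S = [Poly(x**4 + x**2 + 1, x, modulus=2),
     Poly(x**4 + 1, x, modulus=2),
     Poly(x**4 + x**3 + 1, x, modulus=2)]
g = Poly(x**5, x, modulus=2)
N = (5, 3, 4, 5)

for lam in (Poly(x**4 + 1, x, modulus=2), Poly(x**3 + x, x, modulus=2)):
    phis = [(lam * Si).rem(g) for Si in S]          # rem(lambda * S_i, x^5)
    ok = lam.degree() < N[0] and all(p.degree() < Ni for p, Ni in zip(phis, N[1:]))
    print([p.as_expr() for p in phis], ok)          # both checks print True
```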
Several previous algorithms for solving \[prob:sim\_pade\] are more ambitious and produce an entire *basis* of solutions that satisfy the first output condition $\lambda S_i \equiv \phi_i \mod g_i$ for $i=1,\ldots,n$, including solutions that do not satisfy the degree bounds stipulated by the second output condition. Our algorithms are slightly more restricted in that we only return the sub-basis that generates the set of solutions that satisfy both output requirements of \[prob:sim\_pade\]. Formally:
\[prob:sim\_pade\_basis\] Given an instance of \[prob:sim\_pade\], find a matrix $A \in {{\mathsf{K}}}[x]^{* \times (n+1)}$ such that:
- Each row of $A$ is a solution to the instance.
- All solutions are in the ${{\mathsf{K}}}[x]$-row space of $A$.
- $A$ is $(-{\bm{N}})$-row reduced[^2].
The last condition ensures that $A$ is minimal, in a sense, according to the degree bounds ${\bm{N}}$, and that we can easily parametrise which linear | 1 | member_14 |
combinations of the rows of $A$ are solutions. We recall the relevant definitions and lemmas in \[sec:preliminaries\].
We will call such a matrix $A$ a *solution basis*. In the complexities we report here, we cannot afford to compute $A$ explicitly. For example, if all $g_i = x^d$, the number of field elements required to explicitly write down all of the entries of $A$ could be $\Omega(n^2d)$. Instead, we remark that $A$ is completely given by the problem instance as well as the first column of $A$, containing the $\lambda$ polynomials.[^3] Our algorithms will therefore represent $A$ row-wise using the following compact representation.
For a given instance of \[prob:sim\_pade\_basis\], a *solution specification* is a tuple $({\bm{\lambda}},{\bm{\delta}}) \in
{{\mathsf{K}}}[x]^{k \times 1} \times {\mathbb Z\xspace}_{<0}^k$ such that the *completion* of ${\bm{\lambda}}$ is a solution basis, and where ${\bm{\delta}}$ are the $(-{\bm{N}})$-degrees of the rows of $A$.
The *completion* of ${\bm{\lambda}} = (\lambda_1,\ldots,\lambda_k){^\top}$ is the matrix $$\begin{bmatrix}
\lambda_1 & {{\textnormal{rem}}}(\lambda_1 S_1,\ g_1) & \ldots & {{\textnormal{rem}}}(\lambda_1 S_n,\ g_n) \\
\vdots & & \ddots & \vdots \\
\lambda_k & {{\textnormal{rem}}}(\lambda_k S_1,\ g_1) & \ldots & {{\textnormal{rem}}}(\lambda_k S_n,\ g_n) \\
\end{bmatrix}
\ .$$
Note that ${\bm{\delta}}$ will consist of only negative numbers, since any solution ${\bm{v}}$ | 1 | member_14 |
by definition has $\deg_{-{\bm{N}}} {\bm{v}} < 0$.
A solution specification for the problem in \[ex:simpade\] is $$({\bm{\lambda}}, {\bm{\delta}}) = \big( [x^4 + 1,\ x^3 + x]{^\top},\ (-1, -1) \big) \ .$$ The completion of this is $$A = \begin{bmatrix}
x^4 + 1 & x^{2} + 1 & 1 & x^{3} + 1 \\
x^3 + x & x & x^{3} + x & x^4+x^3 + x
\end{bmatrix}$$ One can verify that $A$ is $(-{\bm{N}})$-row reduced.
We present two algorithms for solving \[prob:sim\_pade\_basis\], both with complexity $O\big(n^{\omega-1}\, {{\mathsf{M}}}(d)\,(\log d)\,(\log
d/n)^2\big)$, where $d = \max_i \deg g_i$ and ${{\mathsf{M}}}(d)$ is the cost of multiplying two polynomials of degree $d$, see \[ssec:cost\]. They both depend crucially on recent developments that allow computing minimal approximant bases of non-square matrices faster than for the square case [@zhou_efficient_2012; @jeannerod_computation_2016]. We remark that from the solution basis, one can also compute the expanded form of one or a few of the solutions in the same complexity, for instance if a single, expanded solution to the simultaneous Padé problem is needed.
Our first algorithm in \[sec:dual\] assumes $g_i = x^d$ for all $i$ and some $d \in {\mathbb Z\xspace}_{\geq 0}$. It utilises a well-known duality between Simultaneous Padé approximations and
Hermite Padé approximations, see e.g. [@beckermann_uniform_1992]. The Hermite Padé problem is immediately solvable by fast minimal approximant basis computation. A remaining step is to efficiently compute a single row of the adjoint of a matrix in Popov form, and this is done by combining partial linearisation [@GuptaSarkarStorjohannValeriote11] and high-order lifting [@storjohann_high-order_2003].
Our second algorithm in \[sec:intersect\] supports arbitrary $g_i$. The algorithm first solves $n$ single-sequence Padé approximations, one for each of $S_1,\ldots,S_n$. The solution bases for two problem instances can be combined by computing the intersection of their row spaces; this is handled by a minimal approximant basis computation. A solution basis of the full Simultaneous Padé problem is then obtained by structuring intersections along a binary tree.
Before we describe our algorithms, we give some preliminary notation and definitions in \[sec:preliminaries\], and in \[sec:subroutines\] we describe some of the computational tools that we employ.
Both our algorithms have been implemented in Sage v. 7.0 [@stein_sagemath_????] (though asymptotically slower alternatives to the computational tools are used). The source code can be downloaded from <http://jsrn.dk/code-for-articles>.
Cost model {#ssec:cost}
----------
We count basic arithmetic operations in ${{\mathsf{K}}}$ on an algebraic RAM. We will state complexity results in terms of an exponent $\omega$ for matrix multiplication, and a function | 1 | member_14 |
${{\mathsf{M}}}(\cdot)$ that is a multiplication time for ${{\mathsf{K}}}[x]$ [@von_zur_gathen_modern_2012 Definition 8.26]. Then two $n\times n$ matrices over ${{\mathsf{K}}}$ can be multiplied in $O(n^{\omega})$ operations in ${{\mathsf{K}}}$, and two polynomials in ${{\mathsf{K}}}[x]$ of degree strictly less than $d$ can be multiplied in ${{\mathsf{M}}}(d)$ operations in ${{\mathsf{K}}}$. The best known algorithms allow $\omega < 2.38$ [@coppersmith_matrix_1990; @LeGall14], and we can always take ${{\mathsf{M}}}(d) \in O(d (\log d) (\operatorname{loglog}d))$ [@CantorKaltofen].
In this paper we assume that $\omega > 2$, and that ${{\mathsf{M}}}(d)$ is super-linear while ${{\mathsf{M}}}(d) \in O(d^{\omega-1})$. The assumption ${{\mathsf{M}}}(d) \in O(d^{\omega-1})$ simply stipulates that if fast matrix multiplication techniques are used then fast polynomial multiplication should be used also: for example, $n \, {{\mathsf{M}}}(nd) \in O(n^{\omega} \, {{\mathsf{M}}}(d))$.
Preliminaries {#sec:preliminaries}
=============
Here we gather together some definitions and results regarding row reduced bases, minimal approximant basis, and their shifted variants. For a matrix $A$ we denote by $A_{i,j}$ the entry in row $i$ and column $j$. For a matrix $A$ over ${{\mathsf{K}}}[x]$ we denote by ${{\textnormal{Row}}}(A)$ the ${{\mathsf{K}}}[x]$-linear row space of $A$.
Degrees and shifted degrees
---------------------------
The degree of a nonzero vector ${\bm{v}} \in {{\mathsf{K}}}[x]^{1 \times m}$ or matrix $A \in {{\mathsf{K}}}[x]^{n\times m}$ is denoted by $\deg {\bm{v}}$ or $\deg | 1 | member_14 |
A$, and is the maximal degree of entries of ${\bm{v}}$ or $A$. If $A$ has no zero rows the [*row degrees*]{} of $A$, denoted by ${{\textnormal{rowdeg}}}\, A$, is the tuple $(d_1,\ldots,d_n)$ with $d_i = \deg
{{\textnormal{row}}}(A,i)$.
The (row-wise) [*leading matrix*]{} of $A$, denoted by ${\rm LM}(A) \in {{\mathsf{K}}}^{n \times m}$, has ${\rm LM}(A)_{i,j}$ equal to the coefficient of $x^{d_i}$ of $A_{i,j}$.
Next we recall [@barel_general_1992; @zhou_efficient_2012; @jeannerod_computation_2016] the shifted variants of the notion of degree, row degrees, and leading matrix. For a [*shift*]{} ${\bm{s}} =
(s_1,\ldots,s_n) \in {\mathbb Z\xspace}^n$, define the $n \times n$ diagonal matrix $x^{{\bm{s}}}$ by $$x^{{\bm{s}}} := \left [ \begin{array}{ccc} x^{s_1}
& & \\
& \ddots & \\ & & x^{s_n} \end{array} \right ].$$ Then the [*${{\bm{s}}}$-degree*]{} of $v$, the [*${{\bm{s}}}$-row degrees*]{} of $A$, and the [*${\bm{s}}$-leading matrix*]{} of $A$, are defined by $\deg_{{\bm{s}}} v := \deg v x^{{\bm{s}}}$, ${{\textnormal{rowdeg}}}_{{\bm{s}}} A := {{\textnormal{rowdeg}}}\, Ax^{{\bm{s}}}$, and ${\rm LM}_{{\bm{s}}}(A) := {\rm
LM}(Ax^{{\bm{s}}})$. Note that we pass over the ring of Laurent polynomials only for convenience; our algorithms will only compute with polynomials. As pointed out in [@jeannerod_computation_2016], up to negation the definition of ${{\bm{s}}}$-degree is equivalent to that used in [@BeckermannLabahnVillard06] and to the notion of [*defect*]{} in [@beckermann_uniform_1994].
| 1 | member_14 |
For an instance $({\bm{S}}, {\bm{g}}, {\bm{N}})$ of \[prob:sim\_pade\], in the context of defining matrices, we will be using ${\bm{S}}$ and ${\bm{g}}$ as vectors, and by ${ \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}$ denote the diagonal matrix with the entries of ${\bm{g}}$ on its diagonal.
Row reduced
-----------
Although row reducedness can be defined for matrices of arbitrary shape and rank, it suffices here to consider the case of matrices of full row rank. A matrix $R \in {{\mathsf{K}}}[x]^{n \times m}$ is [*row reduced*]{} if ${\rm LM}(R)$ has full row rank, and [*${\bm{s}}$-row reduced*]{} if ${\rm LM}_{{\bm{s}}}(R)$ has full row rank. Every $A \in {{\mathsf{K}}}[x]^{n \times m}$ of full row rank is left equivalent to a matrix $R \in {{\mathsf{K}}}[x]^{n \times m}$ that is ${{\bm{s}}}$-row reduced. The rows of $R$ give a basis for ${{\textnormal{Row}}}(A)$ that is minimal in the following sense: the list of ${{\bm{s}}}$-degrees of the rows of $R$, when sorted in non-decreasing order, will be lexicographically minimal. An important feature of row reduced matrices is the so-called “predictable degree”-property [@kailath_linear_1980 Theorem 6.3-13]: for any ${\bm{v}} \in {{\mathsf{K}}}[x]^{1 \times n}$, we have $$\deg_{{\bm{s}}}({\bm{v}} R) = \max_{i=1,\ldots,n}( \deg_{{\bm{s}}} {\rm row}(R,i)
+ \deg v_i ) \ .$$
A canonical ${\bm{s}}$-reduced basis is | 1 | member_14 |
provided by the ${{\bm{s}}}$-Popov form. Although an ${{\bm{s}}}$-Popov form can be defined for a matrix of arbitrary shape and rank, it suffices here to consider the case of a non-singular matrix. The following definition is equivalent to [@jeannerod_computation_2016 Definition 1.2].
\[def:popov\] A non-singular matrix $R \in {{\mathsf{K}}}[x]^{n\times n}$ is in ${{\bm{s}}}$-Popov form if ${\rm LM}_{{\bm{s}}}(R)$ is unit lower triangular and the degrees of off-diagonal entries of $R$ are strictly less than the degree of the diagonal entry in the same column.
Adjoints of row reduced matrices
--------------------------------
For a non-singular matrix $A$ recall that the adjoint of $A$, denoted by ${\rm adj}(A)$, is equal to $(\det A)A^{-1}$, and that entry $({{\textnormal{adj}}}(A){^\top})_{i,j}$ is equal to $(-1)^{i+j}$ times the determinant of the $(n-1) \times (n-1)$ sub-matrix that is obtained from $A$ by deleting row $i$ and column $j$.
\[lem:adjointRowReduced\] Let $A \in {{\mathsf{K}}}[x]^{n \times n}$ be ${\bm{s}}$-row reduced. Then ${{\textnormal{adj}}}(A){^\top}$ is $(-{\bm{s}})$-row reduced with $${{\textnormal{rowdeg}}}_{(-{\bm{s}})} {{\textnormal{adj}}}(A){^\top}=(\eta - s - \eta_1,\ldots , \eta - s -\eta_n) \ ,$$ where ${\bm{\eta}} = {{\textnormal{rowdeg}}}_{{\bm{s}}} A$, $\eta = \sum_i \eta_i$ and $s = \sum_i s_i$.
Since $A$ is ${\bm{s}}$-row reduced then $A x^{{\bm{s}}}$ is row reduced. Note that ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}(A x^{{\bm{s}}}){^\top}= (\det A x^{{\bm{s}}}) I_m$ with | 1 | member_14 |
$\deg
\det A x^{{\bm{s}}} = \eta$. It follows that row $i$ of ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}$ must have degree at least $\eta - \eta_i$ since $\eta_i$ is the degree of column $i$ of $(A x^{{\bm{s}}}){^\top}$. However, entries in row $i$ of ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}$ are minors of the matrix obtained from $A x^{{\bm{s}}}$ by removing row $i$, hence have degree at most $\eta
- \eta_i$. It follows that the (row-wise) leading coefficient matrix of ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}$ is non-singular, hence ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}$ is row reduced. Since ${{\textnormal{adj}}}(A x^{{\bm{s}}}){^\top}=
(\det x^{{\bm{s}}}) {{\textnormal{adj}}}(A){^\top}x^{-{\bm{s}}}$ we conclude that ${{\textnormal{adj}}}(A){^\top}$ is $(-{\bm{s}})$-row reduced with ${{\textnormal{rowdeg}}}_{(-{\bm{s}})} {{\textnormal{adj}}}(A){^\top}=
(\eta - \eta_1 - s, \ldots, \eta - \eta_n - s)$.
Minimal approximant bases
-------------------------
We recall the standard notion of minimal approximant basis, sometimes known as order basis or $\sigma$-basis [@beckermann_uniform_1994]. For a matrix $A \in {{\mathsf{K}}}[x]^{n \times m}$ and order $d \in
{\mathbb Z\xspace}_{\geq 0}$, an *order $d$ approximant* is a vector ${\bm{p}} \in
{{\mathsf{K}}}[x]^{1 \times n}$ such that ${\bm{p}}A \equiv {\bm{0}} \mod x^d.$ An *approximant basis of order $d$* is then a matrix $F \in
{{\mathsf{K}}}[x]^{n \times n}$ which is a basis of all order $d$ approximants. Such a basis always exists and has full rank $n$. For a | 1 | member_14 |
shift ${\bm{s}} \in {\mathbb Z\xspace}^n$, $F$ is then an *${\bm{s}}$-minimal approximant basis* if it is ${\bm{s}}$-row reduced.
Let ${{\ensuremath{\mathsf{MinBasis}}\xspace}}(d,A,{\bm{s}})$ be a function that returns $(F,{\bm{\delta}})$, where $F$ is an ${\bm{s}}$-minimal approximant basis of $A$ of order $d$, and ${\bm{\delta}} = {{\textnormal{rowdeg}}}_{{\bm{s}}} F$. The next lemma recalls a well-known method of constructing minimal approximant bases recursively. Although the output of ${{\ensuremath{\mathsf{MinBasis}}\xspace}}$ may not be unique, the lemma holds for *any* ${\bm{s}}$-minimal approximant basis that ${{\ensuremath{\mathsf{MinBasis}}\xspace}}$ might return.
\[lem:paderec\] Let $A = \left [ \begin{array}{c|c}
A_1 & A_2 \end{array} \right ]$ over ${{\mathsf{K}}}[x]$. If $(F_1, {\bm{\delta}}_1) = {{\ensuremath{\mathsf{MinBasis}}\xspace}}(d,A_1,{\bm{s}})$ and $(F_2,{\bm{\delta}}_2) =
{{\ensuremath{\mathsf{MinBasis}}\xspace}}(d,F_1A_2,{\bm{\delta}}_1)$, then $F_2F_1$ is an ${\bm{s}}$-minimal approximant basis of $A$ of order $d$ with ${\bm{\delta}}_2 = {{\textnormal{rowdeg}}}_{{\bm{s}}} F_2 F_1$.
Sometimes only the [*negative part*]{} of an ${\bm{s}}$-minimal approximant basis is required: the submatrix of the approximant basis consisting of rows with negative ${\bm{s}}$-degree. Let function ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d,A,{\bm{s}})$ have the same output as ${{\ensuremath{\mathsf{MinBasis}}\xspace}}$, but with $F$ restricted to the negative part.
\[lem:paderecprune\] \[lem:paderec\] still holds if ${{\ensuremath{\mathsf{MinBasis}}\xspace}}$ is replaced by ${{\ensuremath{\mathsf{NegMinBasis}}\xspace}}$, and “an ${\bm{s}}$-minimal” is replaced with “the negative part of an ${\bm{s}}$-minimal.”
Using for example the algorithm `M-Basis` of [@giorgi_complexity_2003], it is easy to show that any order $d$ approximant | 1 | member_14 |
basis $G$ for an $A$ of column dimension $m$ has $\det G = x^D$ for some $D \in {\mathbb Z\xspace}_{\geq 0}$ with $D \leq md$.
Many problems of ${{\mathsf{K}}}[x]$ matrices or approximations reduce to the computation of (shifted) minimal approximant bases, see e.g. [@beckermann_uniform_1994; @giorgi_complexity_2003], often resulting in the best known asymptotic complexities for these problems.
Direct solving of Simultaneous [Padé]{}approximations {#sec:direct_solve}
-----------------------------------------------------
Let $({\bm{S}}, {\bm{g}}, {\bm{N}})$ be an instance of \[prob:sim\_pade\_basis\] of size $n$. We recall some known approaches for computing a solution specification using row reduction and minimal approximant basis computation.
### Via reduced basis {#sec:direct_reduced_basis}
Using the predictable degree property it is easy to show that if $R \in {{\mathsf{K}}}[x]^{(n+1)
\times (n+1)}$ is an $(-{\bm{N}})$-reduced basis of $$A =
\left [ \begin{array}{c|c}
1 & {\bm{S}} \\\hline
& { \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}\end{array} \right]
\in {{\mathsf{K}}}[x]^{(n+1) \times (n+1)},$$ then the sub-matrix of $R$ comprised of the rows with negative $(-{\bm{N}})$-degree forms a solution basis. A solution specification $({\bm{\lambda}}, {\bm{\delta}})$ is then a subvector ${\bm{\lambda}}$ of the first column of $R$, with ${\bm{\delta}}$ the corresponding subtuple of ${{\textnormal{rowdeg}}}_{(- {\bm{N}})} R$.
Mulders and Storjohann [@mulders_lattice_2003] gave an iterative algorithm for performing row reduction by successive cancellation; it is | 1 | member_14 |
similar to but faster than earlier algorithms [@kailath_linear_1980; @lenstra_factoring_1985]. Generically on input $F \in {{\mathsf{K}}}[x]^{n \times n}$ it has complexity $O(n^3 (\deg F)^2)$. Alekhnovich [@alekhnovich_linear_2005] gave what is essentially a Divide & Conquer variant of Mulders and Storjohann's algorithm, with complexity ${O^{\scriptscriptstyle \sim}\!}(n^{\omega+1}\deg F)$. Nielsen remarked [@nielsen_generalised_2013] that these algorithms perform fewer iterations when applied to the matrix $A$ above, due to its low *orthogonality defect*: ${\rm OD}(F) = \sum{{\textnormal{rowdeg}}}F - \deg \det F$, resulting in $O(n^2(\deg A)^2)$ respectively ${O^{\scriptscriptstyle \sim}\!}(n^\omega \deg A)$. Nielsen also used the special shape of $A$ to give a variant of the Mulders–Storjohann algorithm that computes coefficients in the working matrix in a lazy manner with a resulting complexity $O(n \,\mathsf{P}(\deg A))$, where $\mathsf P(\deg A) = (\deg A)^2$ when the $g_i$ are all powers of $x$, and $\mathsf P(\deg A) = {{\mathsf{M}}}(\deg A)\deg A$ otherwise.
Giorgi, et al. [@giorgi_complexity_2003] gave a reduction for performing row reduction by computing a minimal approximant basis. For the special matrix $A$, this essentially boils down to the approach described in the following section.
When $n = 1$, the extended Euclidean algorithm on input $S_1$ and $g_1$ can solve the approximation problem by essentially computing the reduced basis of | 1 | member_14 |
the $2 \times 2$ matrix $A$: each iteration corresponds to a reduced basis for a range of possible shifts [@sugiyama_further_1976; @justesen_complexity_1976; @gustavson_fast_1979]. The complexity of this is $O({{\mathsf{M}}}(\deg g_1) \log \deg g_1)$.
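A minimal sketch of this Euclidean approach over ${\mathbb F_{2}\xspace}$ follows; the invariant $r_i = s_i g_1 + t_i S_1$ gives $r_i \equiv t_i S_1 \bmod g_1$ at every step, and the stopping rule assumes compatible bounds, $N_0 + N_1 \geq \deg g_1 + 1$:

```python
# Sketch: the n = 1 case via the extended Euclidean algorithm over GF(2).
# The invariant r_i = s_i*g1 + t_i*S1 gives r_i = t_i*S1 mod g1 at each step;
# the stopping rule assumes compatible bounds, N0 + N1 >= deg(g1) + 1.
from sympy import Poly, symbols

x = symbols('x')

def pade_eea(S1, g1, N1):
    r0, r1 = g1, S1
    t0, t1 = Poly(0, x, modulus=2), Poly(1, x, modulus=2)
    while r1.degree() >= N1:            # stop once the remainder bound holds
        q = r0.div(r1)[0]
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return t1, r1                       # (lambda, phi) with lambda*S1 = phi mod g1

S1 = Poly(x**4 + x**2 + 1, x, modulus=2)
g1 = Poly(x**5, x, modulus=2)
lam, phi = pade_eea(S1, g1, N1=2)       # yields lam = x^2 + 1, phi = 1
```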
### Via minimal approximant basis
First consider the special case when all $g_i = x^d$ for the same $d$. An approximant ${\bm{v}} = (\lambda, \phi_1, \ldots, \phi_n)$ of order $d$ of $$\begin{aligned}
A &= \left [ \begin{array}{c}
- {\bm{S}} \\
I
\end{array} \right ]
\in {{\mathsf{K}}}[x]^{(n+1) \times n}\end{aligned}$$ clearly satisfies $\lambda S_i \equiv \phi_i \mod x^d$ for $i =
1,\ldots,n$; conversely, any such vector ${\bm{v}}$ satisfying these congruences must be an approximant of $A$ of order $d$. So the negative part of a $(-{\bm{N}})$-minimal approximant basis of $A$ of order $d$ is a solution basis.
In the general case we can reduce to a minimal approximant basis computation as shown by \[alg:simpadedirect\]. Correctness of the algorithm follows from the following result.
\[thm:simpadedirect\] Corresponding to an instance $({\bm{S}}, {\bm{g}}, {\bm{N}})$ of \[prob:sim\_pade\_basis\] of size $n$, define a shift ${\bm{h}}$ and order $d$:
- ${\bm{h}} := -({\bm{N}} \mid N_0 -1, \ldots, N_0-1) \in {\mathbb Z\xspace}^{2n+1}$
- $d := N_0 + \max_i \deg g_i -1$
If $G$ is the negative part | 1 | member_14 |
of an ${\bm{h}}$-minimal approximant basis of $$H = \left [ \begin{array}{c} -{\bm{S}} \\
I \\
{ \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}\end{array} \right ] \in {{\mathsf{K}}}[x]^{(2n+1) \times n}$$ of order $d$, then the submatrix of $G$ comprised of the first $n+1$ columns is a solution basis to the problem instance.
An approximant ${\bm{v}} = (\lambda, \phi_1,\ldots,\phi_n,q_1,\ldots, q_n)$ of order $d$ of $H$ clearly satisfies $$\begin{aligned}
\label{eq:lambda}
\lambda S_i = \phi_i + q_ig_i \bmod
x^d\end{aligned}$$ for $i=1,\ldots,n$; conversely, any such vector ${\bm{v}}$ satisfying these congruences must be an approximant of $H$ of order $d$.
Now suppose ${\bm{v}}$ is an order $d$ approximant of $H$ with negative ${\bm{h}}$-degree, so $\deg \lambda \leq N_0-1$, $\deg \phi_i \leq
N_i-1$, and $\deg q_i \leq N_0 - 2$. Since \[prob:sim\_pade\] specifies that $\deg S_i < \deg g_i$ and $N_i \leq \deg g_i$, both $\lambda S_i$ and $q_i g_i$ will have degree bounded by $N_0 + \deg g_i - 2$. Since \[prob:sim\_pade\] specifies that $N_0 \geq 1$, it follows that both the left and right hand sides of (\[eq:lambda\]) have degree bounded by $N_0+ \deg g_i -2$, which is strictly less than $d$. We conclude that $$\begin{aligned}
\label{eq:lambda2}
\lambda S_i = \phi_i + q_i g_i\end{aligned}$$ for $i=1,\ldots,n$. It | 1 | member_14 |
follows that ${\bm{v}} H = 0$ so ${\bm{v}}$ is in the left kernel of $H$. Moreover, restricting ${\bm{v}}$ to its first $n+1$ entries gives $\bar{{\bm{v}}} := (\lambda, \phi_1,\ldots,\phi_n)$, a solution to the simultaneous Padé problem with $\deg_{- {\bm{N}}} \bar{{\bm{v}}} = \deg_{{\bm{h}}} {\bm{v}}$. Conversely, if $\bar{{\bm{v}}} = (\lambda, \phi_1,\ldots,\phi_n)$ is a solution to the simultaneous Padé problem, then the extension ${\bm{v}} = (\lambda, \phi_1,\ldots,\phi_n,q_1,\ldots,q_n)$ with $q_i
= (\lambda S_i - \phi_i)/g_i \in {{\mathsf{K}}}[x]$ for $i=1,\ldots,n$ is an approximant of $H$ of order $d$ with $\deg_{{\bm{h}}} {\bm{v}} =
\deg_{-{\bm{N}}} \bar{{\bm{v}}}$.
Finally, consider that a left kernel basis for $H$ is given by $$K = \left[ \begin{array}{c|c} K_1 & K_2 \end{array} \right ] =
\left [ \begin{array}{cc|c} 1 & {\bm{S}} & \\
& { \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}& -I \end{array} \right ].$$ We must have $G = M K$ for some polynomial matrix $M$ of full row rank. But then $M K_1$ also has full row rank with ${{\textnormal{rowdeg}}}_{-{\bm{N}}} MK_1
= {{\textnormal{rowdeg}}}_{{\bm{h}}} G$.
[$\mathsf{DirectSimPade}$]{}:

1. ${\bm{h}} {\leftarrow}-( {\bm{N}} \mid N_0-1,\ldots,N_0-1) \in {\mathbb Z\xspace}^{2n+1}$
2. $d {\leftarrow}N_0 + \max_i \deg g_i - 1$
3. $H {\leftarrow}\left[ \begin{array}{c} -{\bm{S}} \\ I \\ { \ifx\null\null \Gamma_{{\bm{g}}} \else \Gamma_{{\bm{g}}_{\null}} \fi}\end{array} \right]$
4. $(\left [ \begin{array}{c|c} {\bm{\lambda}} & \ast \end{array} \right ], {\bm{\delta}}) {\leftarrow}{{\ensuremath{\mathsf{NegMinBasis}}\xspace}}(d, H, {\bm{h}})$
5. ${\textbf{return }}({\bm{\lambda}}, {\bm{\delta}})$
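The control flow above is short enough to sketch directly. In the following Python outline, `neg_min_basis` is a hypothetical placeholder for any [$\mathsf{NegMinBasis}$]{} implementation; this is an outline of the steps above, not a library routine:

```python
# Outline of DirectSimPade using SymPy matrices. `neg_min_basis` is a
# hypothetical placeholder for any NegMinBasis implementation; everything
# else is standard SymPy. This sketches the control flow, nothing more.
from sympy import Matrix, Poly, diag, eye, symbols

x = symbols('x')

def direct_sim_pade(S, g, N, neg_min_basis):
    n = len(S)
    h = [-Ni for Ni in N] + [-(N[0] - 1)] * n             # h = -(N | N0-1, ..., N0-1)
    d = N[0] + max(Poly(gi, x).degree() for gi in g) - 1  # d = N0 + max deg g_i - 1
    H = Matrix([[-Si for Si in S]]).col_join(eye(n)).col_join(diag(*g))
    G, delta = neg_min_basis(d, H, h)                     # negative part, h-minimal
    lam = list(G[:, 0])                                   # first column holds the lambdas
    return lam, delta
```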
[$\mathsf{DirectSimPade}$]{} can be performed in time ${O^{\scriptscriptstyle \sim}\!}(n^{\omega} \deg H) = {O^{\scriptscriptstyle \sim}\!}(n^\omega \max_i \deg g_i)$ using the minimal approximant basis algorithm by Jeannerod, et al. [@jeannerod_computation_2016], see \[sec:subroutines\].
A closely related alternative to [$\mathsf{DirectSimPade}$]{} is the recent algorithm by Neiger [@neiger_fast_2016] for computing solutions to modular equations with general moduli $g_i$. This would give the complexity ${O^{\scriptscriptstyle \sim}\!}(n^{\omega-1} \sum_i \deg g_i) \subset {O^{\scriptscriptstyle \sim}\!}(n^\omega \max_i \deg g_i)$.
All of the above solutions ignore the sparse, simple structure of the input matrices, which is why they do not obtain the improved complexity that we do here.
Computational tools {#sec:subroutines}
===================
The main computational tool we will use is the following very recent result from Jeannerod, Neiger, Schost and Villard [@jeannerod_computation_2016] on minimal approximant basis computation.
\[thm:orderbasis\] There exists an algorithm ${{\ensuremath{\mathsf{PopovBasis}}\xspace}}(d,A, {\bm{s}})$ where the input is an order $d \in {\mathbb Z\xspace}_+$, a polynomial matrix $A \in {{\mathsf{K}}}[x]^{n
\times m}$ of degree at most $d$, and shift ${\bm{s}} \in {\mathbb Z\xspace}^n$, and which returns $(F, {\bm{\delta}})$, where $F$ is an ${\bm{s}}$-minimal approximant basis of $A$ of order $d$, $F$ is in ${\bm{s}}$-Popov form, and ${\bm{\delta}} = {{\textnormal{rowdeg}}}_{{\bm{s}}} F$. [[$\mathsf{PopovBasis}$]{}]{}has complexity $O(n^{\omega-1}\, {{\mathsf{M}}}(\sigma)\, (\log \sigma) | 1 | member_14 |
\, (\log \sigma /n)^2)$ operations in ${{\mathsf{K}}}$, where $\sigma = md$.
Our next result says that we can quickly compute the first row of ${{\textnormal{adj}}}(F)$ if $F$ is a minimal approximant basis in Popov form. In particular, since $F$ is an approximant basis, $\det F = x^D$ for some $D \leq \sigma$, where $\sigma = md$ from \[thm:orderbasis\].
\[thm:fastsolver\] Let $F \in {{\mathsf{K}}}[x]^{n \times n}$ be in Popov form and with $\det F = x^D$ for some $D \in {\mathbb Z\xspace}_{\geq 0}$. Then the first row of ${{\textnormal{adj}}}(F)$ can be computed in $O(n^{\omega-1}\, {{\mathsf{M}}}(D)\, (\log D) \, (\log D/n))$ operations in ${{\mathsf{K}}}$.
Because $F$ is in Popov form, $D$ is the sum of the column degrees of $F$. We consider two cases: $D \geq n$ and $D < n$.
First suppose $D
\geq n$. Partial linearisation [@GuptaSarkarStorjohannValeriote11 Corollary 2] can produce from $F$, with no operations in ${{\mathsf{K}}}$, a new matrix $G \in {{\mathsf{K}}}[x]^{\bar n \times \bar n}$ with dimension $\bar{n} < 2n$, $\deg G \leq \lceil D/n\rceil$, $\det G = \det F$, and such that $F^{{-1}}$ is equal to the principal $n \times n$ sub-matrix of $G^{{-1}}$. Let ${\bm{v}} \in {{\mathsf{K}}}[x]^{1 \times \bar{n}}$ be the first row of $x^DI_{\bar{n}}$. | 1 | member_14 |
Then the first row of ${{\textnormal{adj}}}(F)$ will be the first $n$ entries of the first row of ${\bm{v}}G^{-1}$. High-order $X$-adic lifting [@storjohann_high-order_2003 Algorithm 5] using the modulus $X=(x-1)^{\lceil
D/n \rceil}$ will compute ${\bm{v}}G^{-1}$ in $O\big(n^{\omega}\,
{{\mathsf{M}}}(\lceil D/n \rceil) \,(\log \lceil D/n \rceil)\big)$ operations in ${{\mathsf{K}}}$ [@storjohann_high-order_2003 Corollary 16]. Since $D \geq n$ this cost estimate remains valid if we replace $\lceil D/n \rceil$ with $D/n$. Finally, from the super-linearity assumption on ${{\mathsf{M}}}(\cdot)$ we have $M(D/n) \leq (1/n) {{\mathsf{M}}}(D)$, thus matching our target cost.
Now suppose $D < n$. In this case we cannot directly appeal to the partial linearisation technique since the resulting $O(n^{\omega}\, {{\mathsf{M}}}(\lceil D/n\rceil))$ may be asymptotically larger than our target cost. But $D < n$ means that $F$ has (possibly many) columns of degree 0; since $F$ is in Popov form, such columns have a 1 on the matrix's diagonal and are 0 on the remaining entries. The following describes how to essentially ignore those columns. $D$ is then greater than or equal to the number of remaining columns, thus effectuating the gain from the partial linearisation.
If $n-k$ is the number of such columns in $F$ that means we can find a permutation | 1 | member_14 |
matrix $P$ such that $$\hat{F} := PFP{^\top}= \left [ \begin{array}{c|c}
F_1 & \\\hline
F_2 & I_{n-k}
\end{array} \right ] \ ,$$ with each column of $F_1$ having degree strictly greater than zero. Let $i$ be the row index of the single 1 in the first column of $P{^\top}$. Since $F^{-1} = P{^\top}\hat{F}^{-1}P$, we have $$\label{first}
{\rm row}({{\textnormal{adj}}}(F),1)P^{-1} = x^D\, {\rm row}(\hat{F}^{-1},i).$$ Considering that $$\hat{F}^{-1} = \left [ \begin{array}{c|c} F_1^{-1} & \\\hline -F_2F_1^{-1} & I_{n-k} \end{array} \right ],$$ it will suffice to compute the first $k$ entries of the vector on the right hand side of (\[first\]). If $i \leq k$ then let ${\bm{v}} \in {{\mathsf{K}}}[x]^{1 \times k}$ be row $i$ of $x^{D}I_k$. Otherwise, if $i>k$ then let ${\bm{v}}$ be row $i-k$ of $-x^{D}F_2$. Then in both cases, ${\bm{v}}F_1^{-1}$ will be equal to the first $k$ entries of the vector on the right hand side of (\[first\]). Like before, high-order lifting combined with partial linearisation will compute this vector in $O\big(k^{\omega}\, {{\mathsf{M}}}(\lceil D/k \rceil)\,(\log \lceil D/k \rceil)
\big)$ operations in ${{\mathsf{K}}}$. Since $D\geq k$ the cost estimate remains valid if $\lceil D/k \rceil$ is replaced with $D/k$.
Reduction to Hermite Padé {#sec:dual}
=============================
In this section we present an algorithm for | 1 | member_14 |
solving \[prob:sim\_pade\_basis\] when $g_1 = \ldots = g_n = x^d$ for some $d \in {\mathbb Z\xspace}_{\geq 0}$. The algorithm is based on the well-known duality between the Simultaneous Padé problem and the Hermite Padé problem, see for example [@beckermann_uniform_1992]. This duality, first observed in a special case [@Mahler68], and then later in the general case [@beckermann_recursiveness_1997], was exploited in [@beckermann_fraction-free_2009] to develop algorithms for the fraction-free computation of Simultaneous Padé approximations. We begin with a technical lemma that is at the heart of this duality.
\[lem:duality\] Let $\hat A, \hat B \in {{\mathsf{K}}}[x]^{(n+1)\times(n+1)}$ be as follows. $$\begin{aligned}
\hat A &=
\left [ \begin{array}{c|c}
x^d & -{\bm{S}} \\\hline
& I
\end{array} \right ]
&&
\hspace*{-1em}\hat B &=
\left [ \begin{array}{c|c}
1 & \\ \hline
{\bm{S}}{^\top}& x^d I
\end{array} \right ]
\end{aligned}$$ Then $\hat B$ is the adjoint of $\hat A{^\top}$. Furthermore, $\hat A{^\top}$ is an approximant basis for $\hat
B$ of order $d$, and $\hat B{^\top}$ is an approximant basis of $\hat A$ of order $d$.
Direct computation shows that $\hat A{^\top}\hat B = x^d I_m =
\det \hat A{^\top}I_m$, so $\hat B$ is the adjoint of $\hat
A{^\top}$.
Let now $G$ be an approximant basis of $\hat B$. By the | 1 | member_14 |
above computation the row space of $\hat A{^\top}$ must be a subset of the row space of $G$. But since $G \hat B = (x^dI_m) R$ for some $R \in {{\mathsf{K}}}[x]^{(n+1)\times(n+1)}$, then $\det G = x^d \det R$. Thus $x^d \mid \det G$. But $\det \hat A{^\top}= x^d$, so the row space of $\hat A{^\top}$ cannot be smaller than the row space of $G$. That is, $\hat A{^\top}$ is an approximant basis for $\hat B$ of order $d$. Taking the transpose through the argument shows that $\hat
B{^\top}$ is an approximant basis of $\hat A$ of order $d$.
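The identity $\hat A{^\top}\hat B = x^d I$ at the heart of the lemma is easy to verify symbolically. A sketch for $n=2$ and $d=3$, with arbitrary $S_i$ of degree less than $d$ chosen only for illustration:

```python
# Sketch: checking hat(A)^T * hat(B) = x^d * I symbolically for n = 2, d = 3.
# The S_i are arbitrary polynomials of degree < d, chosen for illustration.
import sympy as sp

x = sp.symbols('x')
d, n = 3, 2
S1, S2 = x**2 + 1, x**2 + x

A_hat = sp.Matrix([[x**d, -S1, -S2],
                   [0,     1,   0],
                   [0,     0,   1]])
B_hat = sp.Matrix([[1,    0,    0],
                   [S1, x**d,   0],
                   [S2,   0, x**d]])
P = (A_hat.T * B_hat - x**d * sp.eye(n + 1)).expand()
assert P == sp.zeros(n + 1)            # hat(B) = adj(hat(A)^T), as in the lemma
```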
\[thm:dualityMinbasis\] Let $A$ and $B$ be as follows. $$\begin{aligned}
A &= \left [ \begin{array}{c}
-{\bm{S}} \\ \hline
I
\end{array} \right ] \in {{\mathsf{K}}}[x]^{(n+1) \times n}
&&&
\hspace*{-1em}B &= \left [ \begin{array}{c}
1 \\ {\bm{S}}{^\top}
\end{array} \right] \in {{\mathsf{K}}}[x]^{(n+1) \times 1}\end{aligned}$$ If $G$ is an ${\bm{N}}$-minimal approximant basis of $B$ of order $d$ with shift ${\bm{N}} \in {\mathbb Z\xspace}_{\geq 0}^{n+1}$, then ${{\textnormal{adj}}}(G{^\top})$ is a $(-{\bm{N}})$-minimal approximant basis of $A$ of order $d$. Moreover, if ${\bm{\eta}} = {{\textnormal{rowdeg}}}_{{\bm{N}}} G$, then ${{\textnormal{rowdeg}}}_{-{\bm{N}}} {{\textnormal{adj}}}(G{^\top}) =(\eta - N - \eta_1,\ldots , \eta - N
-\eta_{n+1})$, where $\eta = \sum_i \eta_i$ and $N = \sum_i N_i$.
| 1 | member_14 |
Introduce $\hat A$ and $\hat B$ as in \[lem:duality\]. Clearly $G$ is also an ${\bm{N}}$-minimal approximant basis of $\hat B$ of order $d$. Likewise, $\hat A$ and $A$ have the same minimal approximant bases for given order and shift.
Assume, without loss of generality, that we have scaled $G$ so that $\det G$ is monic. Since $\hat A{^\top}$ is also an approximant basis for $\hat B$ of order $d$, we get $\det G = \det \hat A{^\top}= x^d$. By definition $G\hat B = x^d R$ for some matrix $R \in
{{\mathsf{K}}}[x]^{(n+1)\times(n+1)}$. That means $$\begin{aligned}
x^{2d}((G\hat B){^\top})^{{-1}}&= x^{2d}((x^d R){^\top})^{{-1}}\ , & \textrm{so} \\
(x^d(G{^\top})^{{-1}})(x^d(\hat B{^\top})^{{-1}}) &= x^{d}(R{^\top})^{{-1}}\ , & \textrm{that is} \\
{{\textnormal{adj}}}(G{^\top}) \hat A &= x^d (R{^\top})^{{-1}}\ .
\end{aligned}$$ Now $\det R = 1$ since $(x^d)^{n+1} \det R = \det(G\hat B) =
x^{d+nd}$, so $(R{^\top})^{{-1}}= {{\textnormal{adj}}}(R{^\top}) \in {{\mathsf{K}}}[x]^{(n+1)
\times (n+1)}$. Therefore ${{\textnormal{adj}}}(G{^\top})$ is an approximant basis of $\hat A$ of order $d$. The theorem now follows from \[lem:adjointRowReduced\] by noting that $G$ is ${\bm{N}}$-row reduced.
We apply \[thm:dualityMinbasis\] to the problem of \[ex:simpade\] with shifts ${\bm{N}} = (5, 3, 4, 5)$. We have $$\begin{aligned}
A &=
\left[\begin{array}{rrr}
x^{4} + x^{2} + 1 & x^{4} + 1 & x^{4} + x^{3} +
---
abstract: |
We consider the Cauchy problem for the evolutive discrete $p$-Laplacian on infinite graphs, with initial data decaying at infinity. We prove optimal sup and gradient bounds for nonnegative solutions when the initial data has finite mass, as well as a sharp estimate for the confinement of mass, i.e., for the effective speed of propagation. We provide estimates for some moments of the solution, defined by means of the distance from a given vertex.
Our technique relies on suitable inequalities of Faber-Krahn type and is modeled on the local theory of continuous nonlinear partial differential equations. As is known, however, not all of this approach can have a direct counterpart on graphs. A basic tool here is a result connecting the supremum of the solution at a given positive time with the measure of its level sets at previous times.
We also consider the case of slowly decaying initial data, where the total mass is infinite.
address:
- |
Department of Basic and Applied Sciences for Engineering\
Sapienza University of Rome, Italy
- |
South Mathematical Institute of VSC RAS\
Vladikavkaz, Russian Federation
author:
- Daniele Andreucci
- 'Anatoli F. Tedeev'
bibliography:
- 'paraboli.bib'
- 'pubblicazioni\_andreucci.bib'
title: |
Asymptotic estimates for the $p$-Laplacian | 1 | member_15 |
on infinite graphs\
with decaying initial data
---
[^1]
[^2]
Introduction {#s:intro}
============
We consider nonnegative solutions to the Cauchy problem for discrete degenerate parabolic equations $$\begin{aligned}
{2}
\label{eq:pde}
{\frac{\partial {u}}{\partial t}}(x,t)
-
\operatorname{\Delta}_{p}
{u}(x,t)
&=
0
\,,
&\qquad&
x\in V\,, t>0
\,,
\\
\label{eq:init}
{u}(x,0)
&=
{u}_{0}(x)
\ge 0
\,,
&\qquad&
x\in V
\,.\end{aligned}$$ Here $V$ is the set of vertices of the graph $G(V,E)$ with edge set $E\subset V\times V$ and weight ${\omega}$, and $$\operatorname{\Delta}_{p} u(x,t)
=
\frac{1}{{d_{{\omega}}}(x)}
\sum_{y\in V}
{\lvert{u}(y)-{u}(x)\rvert}^{p-2}
({u}(y)-{u}(x))
{\omega}(x,y)
\,.$$ We assume that the graph $G$ is simple, undirected, infinite, and connected, with locally finite weighted degree $${d_{{\omega}}}(x)
=
\sum_{y\sim x}
{\omega}(x,y)
\,,$$ where we write $y\sim x$ if and only if $\{x,y\}\in E$. Here the weight ${\omega}:V\times V\to [0,+\infty)$ is symmetric, i.e., ${\omega}(x,y)={\omega}(y,x)$, and is strictly positive if and only if $y\sim x$; then ${\omega}(x,x)=0$ for $x\in V$.
We assume also that $p>2$ and that ${u}_{0}$ is nonnegative; further assumptions on ${u}_{0}$ will be stated below.
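For readers who prefer code to formulas, the following pure-Python sketch (our illustration, with hypothetical data structures; not part of the paper) evaluates $\operatorname{\Delta}_{p}u$ at a vertex exactly as in the definition above; `adj[x]` lists the neighbours of `x` and `omega` maps unordered edges to their positive weights.

```python
# Weighted degree and discrete p-Laplacian on a locally finite graph.
def d_omega(adj, omega, x):
    """d_omega(x) = sum of the weights omega(x, y) over the neighbours y of x."""
    return sum(omega[frozenset((x, y))] for y in adj[x])

def p_laplacian(u, adj, omega, p, x):
    """(Delta_p u)(x) for p > 2, per the formula in the text."""
    total = 0.0
    for y in adj[x]:
        diff = u[y] - u[x]
        total += abs(diff) ** (p - 2) * diff * omega[frozenset((x, y))]
    return total / d_omega(adj, omega, x)

# Tiny example: the path 0 -- 1 -- 2 with unit weights.
adj = {0: [1], 1: [0, 2], 2: [1]}
omega = {frozenset((0, 1)): 1.0, frozenset((1, 2)): 1.0}
u = {0: 0.0, 1: 1.0, 2: 4.0}
print(p_laplacian(u, adj, omega, p=3.0, x=1))  # (|-1|*(-1) + |3|*3) / 2 = 4.0
```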
We prove sharp sup bounds, for large times, for solutions corresponding to finite-mass initial data; in order to prove the bound from below, we find an optimal estimate for the effective speed of propagation of mass. We
also determine the stabilization rate for data exhibiting slow decay ‘at infinity’, in a suitable sense. To the best of our knowledge such results are new in the framework of discrete nonlinear diffusion equations on graphs.\
We apply an alternative approach, more local than the one in [@Mugnolo:2013], [@Hua:Mugnolo:2015], where the global arguments of semigroup theory are extended to graphs (in fact in a more general setting, which is beyond the scope of this paper). We comment below in the Introduction on the inherent difficulty, and even partial infeasibility, of a local approach on graphs. It is therefore an interesting and nontrivial problem to understand how much of this body of techniques can be used in this environment. This paper can be seen as a cross section of this effort; specifically, we look at the interplay between spread of mass and sup estimates, following ideas coming from the theory of continuous partial differential equations, with the differences required by the discrete character of graphs.
We recall the following notation: for any $R\in {{\boldsymbol}{N}}$, we let $$B_{R}(x_{0})
=
\{
x\in V
\mid
d(x,x_{0})
\le R
\}
\,.$$ Here $d$ is the standard combinatorial distance in $G$, so that $d$ only
takes integral values. For any $f:V\to {{\boldsymbol}{R}}$ we set for all $q\ge 1$, $U\subset V$ $$\begin{gathered}
{{\lVertf\rVert}_{\ell^{q}(U)}}^{q}
=
\sum_{x\in U}
{\lvertf(x)\rvert}^{q}
{d_{{\omega}}}(x)
\,,
\quad
{{\lVertf\rVert}_{\ell^{\infty}(U)}}
=
\sup_{x\in U} {\lvertf(x)\rvert}
\,,
\\
{\mu_{{\omega}}}(U)
=
\sum_{x\in U}
{d_{{\omega}}}(x)
\,.\end{gathered}$$ All the infinite sums in this paper are absolutely convergent. In the following we always assume, unless explicitly noted, that all balls are centered at a given fixed $x_{0}\in V$ and we write $B_{R}(x_{0})=B_{R}$. We denote generic constants depending on the parameters of the problem by $\gamma$ (large constants), $\gamma_{0}$ (small constants). We also set for all $A\subset V$ $$\chi_{A}(x)
=
1
\,,
\quad
x\in A
\,;
\qquad
\chi_{A}(x)
=
0
\,,
\quad
x\not\in A
\,.$$
\[d:fk\] We say that $G$ satisfies a global Faber-Krahn inequality for a given $p>1$ and function ${\Lambda_p}:(0,+\infty)\to(0,+\infty)$ if for any $v>0$ and any finite subset $U\subset V$ with ${\mu_{{\omega}}}(U)=v$ we have $$\label{eq:fk}
{\Lambda_p}(v)
\sum_{x\in U}
{\lvertf(x)\rvert}^{p}
{d_{{\omega}}}(x)
\le
\sum_{x,y\in (U)_{1}}
{\lvertf(y)-f(x)\rvert}^{p}
{\omega}(x,y)
\,,$$ for all $f:V\to {{\boldsymbol}{R}}$ such that $f(x)=0$ if $x\not \in U$; here $$(U)_{1}
=
\{
x\in V
\mid
d(x,U)
\le 1
\}
\,.$$
We assume throughout that ${\Lambda_p}\in C(0,+\infty)$ is decreasing and that two suitable positive constants ${N}$, $\omega$ exist such that $$\begin{aligned}
\label{eq:fkf_nd}
v&\mapsto
{\Lambda_p}(v)^{-1}
v^{-\frac{p}{{N}}}
\,,
\quad
v>0
\,,
\quad
\text{is nondecreasing;}
\\
\label{eq:fkf_ni}
v&\mapsto
{\Lambda_p}(v)^{-1}
v^{-\omega}
\,,
\quad
v>0
\,,
\quad
\text{is nonincreasing.}\end{aligned}$$ An important class of functions in our approach is given by $$\label{eq:dcf_def}
{\psi}_{r}(s)
=
s^{\frac{p-2}{r}}
{\Lambda_p}(s^{-1})
\,,
\qquad
s>0
for each fixed $r\ge 1$. They, or more exactly their inverses, give the correct time-space scaling for equation (\[eq:pde\]); see for example Theorems \[t:l1\] and \[p:bbl\] below.
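Since the inverses ${\psi}_{r}^{(-1)}$ enter all the main estimates, it may help to see how they can be evaluated in practice. The sketch below (ours; `Lambda_p` is any user-supplied decreasing Faber-Krahn function) inverts the increasing function ${\psi}_{r}$ by bisection on a logarithmic scale, and checks the lattice case ${\Lambda_p}(v)=v^{-p/{N}}$ against the exact inverse.

```python
# psi_r(s) = s^((p-2)/r) * Lambda_p(1/s) is increasing for p > 2 because
# Lambda_p is decreasing, so it can be inverted by bisection.
def psi(s, r, p, Lambda_p):
    return s ** ((p - 2) / r) * Lambda_p(1.0 / s)

def psi_inv(t, r, p, Lambda_p, lo=1e-30, hi=1e30, iters=300):
    for _ in range(iters):
        mid = (lo * hi) ** 0.5          # bisection on a logarithmic scale
        if psi(mid, r, p, Lambda_p) < t:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

# Lattice check: Lambda_p(v) = v^(-p/N) gives psi_1(s) = s^((N(p-2)+p)/N),
# whose inverse is t^(N/(N(p-2)+p)).
N, p, t = 2.0, 3.0, 0.37
approx = psi_inv(t, 1, p, lambda v: v ** (-p / N))
exact = t ** (N / (N * (p - 2) + p))
assert abs(approx - exact) < 1e-9 * exact
```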
If we make the additional assumption that for some constant $c>0$ $$\label{eq:fkf_bbl}
{\Lambda_p}(v)
\ge
c
{\mathcal{R}}(v)^{-p}
\,,
\qquad
v>0
where ${\mathcal{R}}:(0,+\infty)\to (0,+\infty)$ is such that ${\mu_{{\omega}}}(B_{{\mathcal{R}}(v)})=v$, we may connect ${\psi}_{1}$ to the measure of a ball in $G$. This in turn allows us to prove sharpness of our $\ell^{1}$–$\ell^{\infty}$ estimate. Property (\[eq:fkf\_bbl\]) is rather natural. For instance, it is known to hold for the explicit examples in Subsection \[s:examples\], to which we refer for implementations of our results in some concrete relevant cases.
\[r:spdim\] The constant ${N}$ in (\[eq:fkf\_nd\]) has no intrinsic meaning in this paper, and it is employed here only in order to ease the comparison with the case of standard regular graphs ${{\boldsymbol}{Z}}^{{N}}$, where ${\Lambda_p}(v)=\gamma_{0}v^{-p/{N}}$, see Subsection \[s:examples\].
\[r:fk\_below\] Let $x\in V$ and choose $U=\{x\}$, $f=\chi_{U}$ | 1 | member_15 |
in (\[eq:fk\]), which then yields $$\label{eq:rem_fk}
{\Lambda_p}({d_{{\omega}}}(x))
{d_{{\omega}}}(x)
\le
2{d_{{\omega}}}(x)
\,.$$ Since ${\Lambda_p}$ is decreasing by assumption we infer $$\label{eq:rem_fk_bound}
{d_{{\omega}}}(x)\ge {\Lambda_p}^{(-1)}(2)
\,.$$ A remark in this connection is perhaps in order: clearly, according to its definition, the Faber-Krahn function ${\Lambda_p}(v)$ need only be defined for $v$ uniformly positive, owing to (\[eq:rem\_fk\_bound\]), so that (\[eq:fkf\_nd\]), (\[eq:fkf\_ni\]) need only be assumed for such $v$. Aiming at a technically streamlined framework, we extend ${\Lambda_p}$ to all $v>0$, while easily preserving the latter assumptions. However, one can check that for large times ${\Lambda_p}$ is evaluated at large arguments in our results, which are thus independent of this extension.
\[r:lp\_scale\] A consequence of (\[eq:rem\_fk\_bound\]) is that any bound in $\ell^{q}(V)$ yields immediately a uniform pointwise bound: if ${v}\in\ell^{q}(V)$, $$\label{eq:lp_linf}
{\lvert{v}(z)\rvert}^{q}
\le
{\lvert{v}(z)\rvert}^{q}\frac{{d_{{\omega}}}(z)}{{\Lambda_p}^{(-1)}(2)}
\le
\frac{1}{{\Lambda_p}^{(-1)}(2)}
{{\lVert{v}\rVert}_{\ell^{q}(V)}}^{q}
\,,
\quad
z\in V
\,.$$ In turn this implies that $\ell^{p}(V)\subset\ell^{q}(V)$ if $p<q$, since $$\sum_{x\in V}
{\lvertf(x)\rvert}^{q}
{d_{{\omega}}}(x)
\le
M^{q-p}
\sum_{x\in V}
{\lvertf(x)\rvert}^{p}
{d_{{\omega}}}(x)
for a suitable $M$ as in (\[eq:lp\_linf\]).
\[d:sol\] We say that ${u}\in L^{\infty}(0,T; \ell^{r}(V))$ is a solution to (\[eq:pde\]) if ${u}(x,\cdot)\in C^{1}([0,T])$ for every $x\in V$ and ${u}$ satisfies (\[eq:pde\]) in the classical pointwise sense.\
A solution to (\[eq:pde\])–(\[eq:init\]) is also required to take the initial data prescribed by (\[eq:init\]), for each $x\in V$.
We refer the reader to [@Hua:Mugnolo:2015] for existence and uniqueness of solutions. To make this paper more self-contained, however, we sketch in Section \[s:prelim\] a proof of existence in Proposition \[p:ex\] (in $\ell^{q}$, $q>1$; see Theorem \[t:l1\] for $q=1$), and of uniqueness via comparison in Proposition \[p:compare\].
Our first two results are typical of the local approach we pursue. All solutions we consider below are nonnegative.
\[p:linf\_meas\] Let ${u}$ be a solution to (\[eq:pde\]) in $V\times(0,T)$, with ${u}\in L^{\infty}(0,T;\ell^{r}(V))$ for some $r\ge 1$. Then for all $x\in V$, $0<t<T$ $$\label{eq:linf_meas_n}
{u}(x,t)
\le
k
\,,$$ provided $k>0$ satisfies for a suitable $\gamma_{0}(p,{N})$ $$\label{eq:linf_meas_nn}
k^{-1}
t^{-\frac{1}{p-2}}
{\Lambda_p}\Big(\sup_{\frac{t}{4}<\tau<t}{\mu_{{\omega}}}(\{x\in V\mid {u}(x,\tau)> {k}/{2}\})\Big)^{-\frac{1}{p-2}}
\le
\gamma_{0}
\,.$$
\[co:linf\_int\] Under the assumptions in Proposition \[p:linf\_meas\], we have $$\label{eq:linf_int_m}
{u}(x,t)
\le
\gamma
\sup_{0<\tau<t}{{\lVert{u}(\tau)\rVert}_{\ell^{r}(V)}}
\big[{\psi}_{r}^{(-1)}
\big(
t^{-1}
\sup_{0<\tau<t}{{\lVert{u}(\tau)\rVert}_{\ell^{r}(V)}}^{-(p-2)}
\big)
\big]^{\frac{1}{r}}
\,,$$ for all $x\in V$, $0<t<T$. Here ${\psi}_{r}^{(-1)}$ is the inverse function of ${\psi}_{r}$ as defined in (\[eq:dcf\_def\]).
\[r:dcf\] One can check easily using the fact that ${\Lambda_p}$ is nonincreasing that $$a\mapsto a{\psi}_{r}^{(-1)}(sa^{-(p-2)})^{\frac{1}{r}}$$ is nondecreasing in $a>0$ for each fixed $s>0$.
The next Theorem follows directly from the estimates stated above. Note that the conservation of mass in (\[eq:l1\_n\]) was proved also in [@Hua:Mugnolo:2015], while the other estimates are new, as far as we know.
\[t:l1\] Let ${u}_{0}\in\ell^{1}(V)$, ${u}_{0}\ge 0$. Then problem (\[eq:pde\])–(\[eq:init\]) has a unique solution satisfying for all $t>0$ $$\begin{aligned}
\label{eq:l1_n}
{{\lVert{u}(t)\rVert}_{\ell^{1}(V)}}
&=
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}
\,,
\\
\label{eq:l1_nn}
{{\lVert{u}(t)\rVert}_{\ell^{\infty}(V)}}
&\le
\gamma
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}
{\psi}_{1}^{(-1)}
\big(
t^{-1}
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{-(p-2)}
\big)
\,.
\end{aligned}$$ In addition ${u}$ satisfies $$\begin{gathered}
\label{eq:l1_nnn}
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1}
{\omega}(x,y)
{\,\textup{\textmd{d}}}\tau
\\
\le
\gamma
t^{\frac{1}{p}}
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{2(p-1)}{p}}
{\psi}_{1}^{(-1)}(t^{-1}{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{-(p-2)})^{\frac{p-2}{p}}
\,.
\end{gathered}$$
\[r:lp\_est\] We notice that one could exploit (\[eq:l1\_n\]), (\[eq:l1\_nn\]) to derive trivially a bound for the integral in (\[eq:l1\_nnn\]). This is due of course to the fact that the $p$-Laplacian in our setting is discrete, and it would not be possible in the framework of continuous partial differential equations.
Such a bound, however, is not sharp, and for example could not be used in the proof of Theorem \[p:bbl\].
In other instances where optimality is not needed we exploit a device similar to the one just described, relying on Remark \[r:lp\_scale\]; see for example the proof of Lemma \[l:cacc2\].
So far our extension to graphs of methods and results of continuous differential equations has been successful. However, in the latter setting a standard device to prove optimality of the bound in relies on the property of finite speed of propagation | 1 | member_15 |
(i.e., solutions with initially bounded support keep this feature for all $t>0$). In the setting of graphs this property strikingly fails, as shown in [@Hua:Mugnolo:2015]. As a technical but perhaps worthwhile side remark, we note that all the main ingredients in the proof of finite speed of propagation (see [@Andreucci:Tedeev:1999], [@Andreucci:Tedeev:2000]) seem to be available in graphs too: embeddings as in [@Ostrovskii:2005], Caccioppoli inequalities as in Lemma \[l:cacc\] below, and of course iterative techniques such as the one displayed in the proof of Proposition \[p:linf\_meas\]. The key exception in this regard is the fact that full localization via an infinite sequence of nested shrinking balls is clearly prohibited by the discrete metric at hand. This is a point of marked difference with the continuous setting.
Still we can prove sharpness of our $\ell^{1}$–$\ell^{\infty}$ bound by means of the following result of confinement of mass. By the same argument we can estimate also a suitable moment of the solution, which is also a new result for nonlinear diffusion in graphs, see Section \[s:bbl\].
\[p:bbl\] Let ${u}_{0}\ge 0$ be finitely supported. Then for every $1>{\varepsilon}>0$ there exists a $\varGamma>0$ such that $$\label{eq:bbl_n}
{{\lVert{u}(t)\rVert}_{\ell^{1}(B_{R})}}
\ge
(1-{\varepsilon})
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}
\,,
\qquad
t>0
\,,$$ provided $B_{{\lfloor R/2\rfloor}}$ contains
the support of ${u}_{0}$, and $R$ is chosen so that $$\label{eq:bbl_nn}
R
\ge
\varGamma
t^{\frac{1}{p}}
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{p-2}{p}}
{\psi}_{1}^{(-1)}(t^{-1}{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{-(p-2)})^{\frac{p-2}{p}}
\ge 8
\,.$$ In addition, provided $R$ is chosen as in (\[eq:bbl\_nn\]), for ${\varepsilon}=1/2$, and $\alpha\in (0,1)$, $$\label{eq:bbl_p}
\sum_{x\in V}
d(x,x_{0})^{\alpha}
{u}(x,t)
{d_{{\omega}}}(x)
\le
\gamma
R^{\alpha}
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}
\,,
\qquad
t>0
\,.$$
Next we exploit the estimates (\[eq:bbl\_n\])–(\[eq:bbl\_nn\]) in order to show that, up to a change in the constant, we can reverse the inequality in (\[eq:l1\_nn\]), proving at once the optimality of both results.
\[p:bbl2\] Under the assumptions in Theorem \[p:bbl\], let in addition ${\Lambda_p}$ satisfy (\[eq:fkf\_bbl\]). Then $$\label{eq:bbl_nnn}
{{\lVert{u}(t)\rVert}_{\ell^{\infty}(V)}}
\ge
\frac{{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}}{2{\mu_{{\omega}}}(B_{R})}
\ge
\gamma_{0}
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}
{\psi}_{1}^{(-1)}
\big(
t^{-1}
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{-(p-2)}
\big)
\,,$$ where $R$ is as in (\[eq:bbl\_nn\]), for ${\varepsilon}=1/2$.
Clearly, owing to the comparison principle of Proposition \[p:compare\], results like those in (\[eq:bbl\_n\]) and (\[eq:bbl\_nnn\]) may be proved even dropping the assumption that ${u}_{0}$ is finitely supported; for the sake of brevity we omit the details.
In order to state our last result we need to introduce the following function, which essentially gives the correct scaling between time and space in the case of slowly decaying initial data: for ${u}_{0}\in\ell^{q}(V)\setminus\ell^{1}(V)$ for some $q>1$ set $$\label{eq:decay_fn}
{T}_{{u}_{0}}(R,x_{0})
=
\Big[
\frac{{{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(x_{0}))}}}{{{\lVert{u}_{0}\rVert}_{\ell^{q}(V\setminus B_{R}(x_{0}))}}^{q}}
\Big]^{\frac{p-2}{q-1}}
\,
{\Lambda_p}\bigg(
\Big(
\frac{
| 1 | member_15 |
{{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(x_{0}))}}
}{
{{\lVert{u}_{0}\rVert}_{\ell^{q}(V\setminus B_{R}(x_{0}))}}
}
\Big)^{\frac{q}{q-1}}
\bigg)^{-1}
\,,$$ for $R\in {{\boldsymbol}{N}}$, $x_{0}\in V$. Clearly, for each fixed $x_{0}$ the function ${T}_{{u}_{0}}$ is nondecreasing in $R$ and ${T}_{{u}_{0}}(R,x_{0})\to +\infty$ as $R\to\infty$. On the other hand, ${T}_{{u}_{0}}(0,x_{0})$ may be positive. However, it is easily seen that for any given ${\varepsilon}>0$ there exists $x_{0}$ such that ${T}_{{u}_{0}}(0,x_{0})<{\varepsilon}$.
\[t:decay\] Let ${u}_{0}\in\ell^{q}(V)\setminus\ell^{1}(V)$ for some $q>1$. Then for all $t>0$, $x_{0}\in V$ $$\label{eq:decay_n}
{{\lVert{u}(t)\rVert}_{\ell^{\infty}(V)}}
\le
\gamma
{{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(x_{0}))}}
{\psi}_{1}^{(-1)}\big(t^{-1} {{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(x_{0}))}}^{-(p-2)}\big)
\,,$$ provided $R$ is chosen so that $$\label{eq:decay_nn}
t\le
{T}_{{u}_{0}}(R,x_{0})
\,,$$ the optimal choice being of course the minimum $R=R(t)$ such that (\[eq:decay\_nn\]) holds true.
Let us comment briefly on the existing literature on the nonlinear $p$-Laplacian on graphs. The papers [@Mugnolo:2013], [@Hua:Mugnolo:2015] deal with the Cauchy problem applying techniques inspired by the theory of semigroups of continuous differential operators. They consider a more general variety of weighted graphs and operators than we do here, dealing, e.g., with existence, uniqueness, time regularity, and possible extinction in finite time. However, our results do not seem to be easily reached by this approach. We also quote [@Keller:Mugnolo:2016], where a connection between Cheeger constants and the eigenvalues of the $p$-Laplacian is drawn in a very flexible setting.\
Boundary problems on finite subgraphs are | 1 | member_15 |
also considered in several papers dealing with features like blow up or extinction; we quote only [@Chung:Choi:2014] and [@Chung:Park:2017].
The case of the discrete linear Laplacian where $p=2$ is more classical, also for its connections with probability theory (see e.g., [@Andres:etal:2013] and references therein), and is often attacked by means of suitable parallels with the theory of heat kernels in manifolds. We quote [@Coulhon:Grigoryan:1998], [@Barlow:Coulhon:Grigoryan:2001] where a connection is drawn between properties of heat kernels, of graphs and Faber-Krahn functions.\
In [@Lin:Wu:2017] heat kernels are used to study the blow up of solutions to the Cauchy problem for a semilinear equation on a possibly infinite graph.
The subject of diffusion in graphs is popular also owing to its applicative interest. We refer the reader to [@Mugnolo:2013], [@Elmoataz:Toutain:Tenbrinck:2015] and to the references therein for more on this point.
Finally we recall the papers [@Bakry:etal:1995a], [@Ostrovskii:2005] and books [@Chung:SGT], [@Grigoryan:AG] for basic information on functional analysis on graphs and manifolds.
We mention that the argument in [@Bonforte:Grillo:2007], showing that optimal decay rates imply suitable embeddings, is still valid in our setting.
Here we essentially follow the approach of [@DiBenedetto:Herrero:1989] and [@Andreucci:Tedeev:2015].
The paper is organized as follows: Section \[s:prelim\] is | 1 | member_15 |
devoted to preliminary material. Proposition \[p:linf\_meas\] and its Corollary \[co:linf\_int\] are proved in Section \[s:linf\], while Section \[s:l1\] contains the proof of Theorem \[t:l1\] and Section \[s:bbl\] deals with Theorem \[p:bbl\] and Corollary \[p:bbl2\]. Finally Theorem \[t:decay\] is proved in Section \[s:decay\].
Examples {#s:examples}
--------
1\) As a first example we consider the case of the standard lattice $G={{\boldsymbol}{Z}}^{{N}}$, where one can take ${\Lambda_p}(v)=\gamma_{0}v^{-p/{N}}$, according to the results of [@Wang:Wang:1977], [@Ostrovskii:2005]. This is the case where comparison with the Cauchy problem for the continuous $p$-Laplacian is more straightforward. In this case $$\label{eq:dcf_euc}
{\psi}_{r}(s)
=
\gamma_{0}
s^{\frac{{N}(p-2)+pr}{{N}r}}
\,,
\qquad
s>0
\,,$$ and for example estimate becomes $$\label{eq:l1_nn_grid}
{{\lVert{u}(t)\rVert}_{\ell^{\infty}(V)}}
\le
\gamma
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{p}{{N}(p-2)+p}}
t^{-\frac{{N}}{{N}(p-2)+p}}
\,,$$ while the critical radius for expansion of mass in amounts to $$\label{eq:bbl_nn_grid}
R\ge
\gamma
{{\lVert{u}_{0}\rVert}_{\ell^{1}(V)}}^{\frac{p-2}{{N}(p-2)+p}}
t^{\frac{1}{{N}(p-2)+p}}
\,.$$ We remark that both results formally coincide with the corresponding ones for the continuous $p$-Laplacian in ${{{\boldsymbol}{R}}^{{N}}}$, see [@DiBenedetto:Herrero:1989].\
Next we apply Theorem \[t:decay\] to the following initial data: for $x=(x_{1},\dots,x_{{N}})\in{{\boldsymbol}{Z}}^{{N}}$, $x\ne 0$, set ${u}_{0}(x)=({\lvertx_{1}\rvert}+\dots+{\lvertx_{{N}}\rvert})^{-\alpha}$ for a given $0<\alpha<{N}$, and set, say, ${u}_{0}(0)=1$. Let us write here $a(s)\simeq b(s)$ if $\gamma_{0}a(s)\le b(s)\le \gamma a(s)$ for two constants independent of $s$. One can see that $${{\lVert{u}_{0}\rVert}_{\ell^{1}(B_{R}(0))}}
\simeq
R^{{N}-\alpha}
\,;
\qquad
{{\lVert{u}_{0}\rVert}_{\ell^{q}(V\setminus B_{R}(0))}}^{q}
\simeq
R^{{N}-\alpha q}
\,,$$ for all | 1 | member_15 |
$q>{N}/\alpha$. Therefore in this case $${T}_{{u}_{0}}(R,0)
\simeq
R^{\alpha(p-2)+p}
\,,$$ and the estimate in (\[eq:decay\_n\]) essentially amounts to the decay rate $t^{-\alpha/(\alpha(p-2)+p)}$, which is the expected one in view of the results of [@Tedeev:1991].
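As a rough sanity check of (\[eq:l1\_nn\_grid\]), one can evolve a discrete delta on $G={{\boldsymbol}{Z}}$ with a crude explicit Euler scheme and measure the decay of the sup norm between two times; for ${N}=1$, $p=3$ the exponent ${N}/({N}(p-2)+p)$ equals $1/4$. The sketch below is our illustration only (the paper constructs solutions differently), the step size is ad hoc, and the observed exponent is only expected to be roughly $1/4$ at these moderate times; the run takes a few seconds.

```python
# Crude explicit-Euler evolution of u_t = Delta_p u on a truncated line;
# unit weights, so d_omega(x) = 2 at interior vertices.
import math

p, M, dt = 3.0, 100, 5e-3
u = [0.0] * (2 * M + 1)
u[M] = 1.0                      # discrete delta at the origin
t, sups, marks = 0.0, {}, (10.0, 160.0)
while t < marks[1]:
    new = u[:]
    for i in range(1, 2 * M):   # endpoints stay 0; the support never reaches them
        acc = 0.0
        for j in (i - 1, i + 1):
            diff = u[j] - u[i]
            acc += abs(diff) ** (p - 2) * diff
        new[i] = u[i] + dt * acc / 2.0
    u, t = new, t + dt
    for mark in marks:
        if mark not in sups and t >= mark:
            sups[mark] = max(u)

slope = math.log(sups[marks[0]] / sups[marks[1]]) / math.log(marks[1] / marks[0])
print("observed decay exponent ~", round(slope, 3), "(predicted: 0.25)")
```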
2\) One can treat also other examples of product graphs; for instance if $H$ is a finite connected graph we let $G=H\times {{\boldsymbol}{Z}}^{{N}}$ and recover results similar to the ones of the previous example.
3\) All examples where the Faber-Krahn function is estimated for $p=2$ also yield examples in our case $p>2$, as follows from an application of Hölder’s inequality; see e.g., [@Coulhon:Grigoryan:1998], [@Barlow:Coulhon:Grigoryan:2001].
Preliminary material {#s:prelim}
====================
We use for $f:V\to {{\boldsymbol}{R}}$ the notation $${D}_{y}f(x)
=
f(y)-f(x)
=
-
{D}_{x}f(y)
\,,
\qquad
x\,,y\in V
\,.$$
Caccioppoli type inequalities {#s:prelim_cacc}
-----------------------------
\[l:monot\] Let $q>0$, $p>2$, $h\ge 0$, ${u}$, ${v}:V\to{{\boldsymbol}{R}}$. Then for all $x$, $y\in V$ $$\begin{gathered}
\label{eq:monot_n}
\big(
{\lvert{D}_{y}{u}(x)\rvert}^{p-2}
{D}_{y}{u}(x)
-
{\lvert{D}_{y}{v}(x)\rvert}^{p-2}
{D}_{y}{v}(x)
\big)
{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x)-h)_{+}}{\left({u}(x)-{v}(x)-h\right)_{+}}}}^{q}
\\
\ge
\gamma_{0}
{\left|
{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x)-h)_{+}}{\left({u}(x)-{v}(x)-h\right)_{+}}}}^{\frac{q-1+p}{p}}
\right|}^{p}
\,.
\end{gathered}$$
First we remark that we may assume $h=0$, by renaming $\tilde{v}={v}+h$. The corresponding version of (\[eq:monot\_n\]) clearly holds true if ${D}_{y}{u}(x)={D}_{y}{v}(x)$.
If ${D}_{y}{u}(x)\not={D}_{y}{v}(x)$, the left hand side of (\[eq:monot\_n\]) with $h=0$ can be written as below, where the inequality follows from a classical elementary result on monotone operators, see [@DiB:dpe]: $$\begin{gathered}
\label{eq:monot_i}
\big(
{\lvert{D}_{y}{u}(x)\rvert}^{p-2}
{D}_{y}{u}(x)
-
{\lvert{D}_{y}{v}(x)\rvert}^{p-2}
{D}_{y}{v}(x)
\big)
{D}_{y}({u}(x)-{v}(x))
\,
\mathcal{A}
\\
\ge
\gamma_{0}(p)
{\lvert{D}_{y}({u}(x)-{v}(x))\rvert}^{p}
\,
\mathcal{A}
\,,
\end{gathered}$$ where we define $$\mathcal{A}
=
\frac{
{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x))_{+}}{\left({u}(x)-{v}(x)\right)_{+}}}}^{q}
}{
{D}_{y}({u}(x)-{v}(x))
}
\ge 0
\,.$$
On the other hand, we write the right hand side of (\[eq:monot\_n\]) with $h=0$ as $$\label{eq:monot_ii}
{\lvert{D}_{y} ({u}(x)-{v}(x))\rvert}^{p}
\,
\mathcal{B}
\,,
\qquad
\mathcal{B}
:=
{\left|
\frac{
{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x))_{+}}{\left({u}(x)-{v}(x)\right)_{+}}}}^{\frac{q-1+p}{p}}
}{
{D}_{y}({u}(x)-{v}(x))
}
\right|}^{p}
\,.$$ Therefore we have only to prove that $\mathcal{A}\ge \gamma_{0}\mathcal{B}$. Clearly in doing so we may assume without loss of generality that $${u}(y)
-
{v}(y)
>
{u}(x)
-
{v}(x)
\,.$$ Hence it remains to prove that $$\begin{gathered}
\label{eq:monot_iii}
\big[
{u}(y)-{v}(y)
-
({u}(x)-{v}(x))
\big]^{p-1}
\big[
{{\ifthenelse{\equal{*}{*}}{({u}(y)-{v}(y))_{+}}{\left({u}(y)-{v}(y)\right)_{+}}}}^{q}
-
{{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x))_{+}}{\left({u}(x)-{v}(x)\right)_{+}}}}^{q}
\big]
\\
\ge
\gamma_{0}
\Big[
{{\ifthenelse{\equal{*}{*}}{({u}(y)-{v}(y))_{+}}{\left({u}(y)-{v}(y)\right)_{+}}}}^{\frac{q-1+p}{p}}
-
{{\ifthenelse{\equal{*}{*}}{({u}(x)-{v}(x))_{+}}{\left({u}(x)-{v}(x)\right)_{+}}}}^{\frac{q-1+p}{p}}
\Big]^{p}
\,.
\end{gathered}$$ Denote $$a
=
{u}(y)
-
{v}(y)
\,,
\qquad
b
=
{u}(x)
-
{v}(x)
\,.$$ If $b\le 0$, (\[eq:monot\_iii\]) is obviously satisfied with $\gamma_{0}=1$. If $b>0$, by Hölder’s inequality we have $$\begin{gathered}
\label{eq:monot_iv}
\big[
a^{\frac{q-1+p}{p}}
-
b^{\frac{q-1+p}{p}}
\big]^{p}
=
\Big[
\frac{q-1+p}{p}
\int_{b}^{a}
s^{\frac{q-1}{p}}
{\,\textup{\textmd{d}}}s
\Big]^{p}
\\
\le
\gamma(q,p)
\Big[
\int_{b}^{a}
s^{q-1}
{\,\textup{\textmd{d}}}s
\Big]
\Big[
\int_{b}^{a}
{\,\textup{\textmd{d}}}s
\Big]^{p-1}
\le
\gamma(q,p)
(a^{q}-b^{q})
(a-b)^{p-1}
\,,
\end{gathered}$$ proving (\[eq:monot\_iii\]) and concluding the proof.
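The inequality just proved is scalar, so it is easy to probe numerically. The following quick random test (ours) estimates the worst-case ratio between the two sides of (\[eq:monot\_n\]) with ${v}=0$, $h=0$, and exhibits a positive lower bound, consistent with the existence of $\gamma_{0}(q,p)$.

```python
# Random spot check of the lemma with v = 0, h = 0: writing a = u(x),
# b = u(y), d = b - a, s = (q-1+p)/p, we compare
# LHS = |d|^(p-2) d (b_+^q - a_+^q) with RHS = |b_+^s - a_+^s|^p.
import random

random.seed(1)
p, q = 3.0, 2.0
s = (q - 1 + p) / p
pos = lambda z: max(z, 0.0)
worst = float('inf')
for _ in range(100000):
    a, b = random.uniform(-2, 2), random.uniform(-2, 2)
    d = b - a
    lhs = abs(d) ** (p - 2) * d * (pos(b) ** q - pos(a) ** q)
    rhs = abs(pos(b) ** s - pos(a) ** s) ** p
    if rhs > 1e-12:
        worst = min(worst, lhs / rhs)
print("empirical gamma_0 >=", worst)  # stays bounded away from zero
```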
In the following all radii of balls in $G$ will be | 1 | member_15 |
assumed to be natural numbers. Let $R_{2}\ge R_{1}+1$, $R_{1}$, $R_{2}>0$; we define the cutoff function $\zeta$ in $B_{R_{2}}(x_{0})$ by means of $$\begin{aligned}
{2}
\zeta(x)
&=
1
\,,
&\qquad&
x\in B_{R_{1}}(x_{0})
\,,
\\
\zeta(x)
&=
\frac{
R_{2}
-
d(x,x_{0})
}{
R_{2}
-
R_{1}
}
\,,
&\qquad&
x\in B_{R_{2}}\setminus B_{R_{1}}(x_{0})
\,,
\\
\zeta(x)
&=
0
\,,
&\qquad&
x\not \in B_{R_{2}}(x_{0})
\,.\end{aligned}$$ The function $\zeta$ is chosen so that $${\lvert{D}_{y}\zeta(x)\rvert}
=
{\lvert
\zeta(y)
-
\zeta(x)
\rvert}
\le
\frac{1}{R_{2}-R_{1}}
\,,
\qquad
x\sim y
\,.$$ For $\tau_{1}>\tau_{2}>0$ we also define the standard nonnegative cutoff function $\eta\in C^{1}({{\boldsymbol}{R}})$ such that $$\eta(t)
=1
\,,
\,\,\,
t\ge \tau_{1}
\,;
\quad
\eta(t)
=
0
\,,
\,\,\,
t\le \tau_{2}
\,;
\quad
0\le\eta'(t)\le \frac{2}{\tau_{1}-\tau_{2}}
\,,
\,\,\,
t\in{{\boldsymbol}{R}}\,.$$
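In code these cutoffs are one-liners; the following sketch (ours) may help when experimenting with the Caccioppoli inequalities below. Note that the temporal cutoff here is a piecewise-linear stand-in for the $C^{1}$ function above, with slope $1/(\tau_{1}-\tau_{2})\le 2/(\tau_{1}-\tau_{2})$.

```python
# Spatial cutoff zeta (piecewise linear in the combinatorial distance)
# and a piecewise-linear temporal cutoff eta with the stated constraints.
def zeta(dist, R1, R2):
    """dist = d(x, x0); equals 1 on B_R1, vanishes outside B_R2."""
    if dist <= R1:
        return 1.0
    if dist <= R2:
        return (R2 - dist) / (R2 - R1)
    return 0.0

def eta(t, tau1, tau2):
    """0 for t <= tau2, 1 for t >= tau1, linear in between (tau1 > tau2)."""
    if t <= tau2:
        return 0.0
    if t >= tau1:
        return 1.0
    return (t - tau2) / (tau1 - tau2)
```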
Our next Lemma is not used in the sequel; we present it here to substantiate our claim, made in the Introduction, that suitable local Caccioppoli type inequalities are available in the nonlinear setting, and also for its possible independent interest. The proof is somewhat more complex than in the continuous case.
\[l:cacc\] Let ${u}$ be a solution of (\[eq:pde\]) in $V\times (0,T)$, $x_{0}\in V$. Then for $T>\tau_{1}>\tau_{2}>0$, $R_{2}>R_{1}+1$, $R_{1}>0$, $h>k>0$, $1>\theta>0$ we have $$\label{eq:cacc_n}
\begin{split}
&\sup_{\tau_{1}<\tau<t}
\sum_{x\in B_{R_{1}}(x_{0})}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta+1}
\zeta(x)^{p}
{d_{{\omega}}}(x)
\\
&\quad+
\int_{\tau_{1}}^{t}
\sum_{x\in | 1 | member_15 |
B_{R_{1}}(x_{0}),y\in V}
{\left|
{D}_{y}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\frac{p+\theta-1}{p}}
\right|}^{p}
{\omega}(x,y)
{\,\textup{\textmd{d}}}\tau
\\
&\qquad\le
\frac{\gamma}{\tau_{1}-\tau_{2}}
\int_{\tau_{2}}^{t}
\sum_{x\in B_{R_{2}}(x_{0})}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta+1}
{d_{{\omega}}}(x)
{\,\textup{\textmd{d}}}\tau
+
\gamma
A^{\frac{1}{p}}
B^{\frac{p-1}{p}}
+
\gamma
A
\,,
\end{split}$$ where $$\begin{aligned}
A&=
\frac{1}{(R_{2}-R_{1})^{p}}
\int_{\tau_{2}}^{t}
\sum_{x\in B_{R_{2}}(x_{0})}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-k)_{+}}{\left({u}(x,\tau)-k\right)_{+}}}}^{p+\theta-1}
{d_{{\omega}}}(x)
{\,\textup{\textmd{d}}}\tau
\,,
\\
B&=
h^{p}
(h-k)^{\theta-1}
\int_{\tau_{2}}^{t}
{\mu_{{\omega}}}(B_{R_{2}}(x_{0})\cap\{2h\ge {u}(x,\tau)>h\})
{\,\textup{\textmd{d}}}\tau
\,.
\end{aligned}$$
\[r:caccio\] The term $A^{1/p}B^{(p-1)/p}$ in (\[eq:cacc\_n\]) can be reduced to one containing only $A$ by means of Young’s and Chebyshev’s inequalities.
We multiply (\[eq:pde\]) against $\zeta(x)^{p}\eta(t)^{p}{{\ifthenelse{\equal{*}{*}}{({u}(x,t)-h)_{+}}{\left({u}(x,t)-h\right)_{+}}}}^{\theta}$ and apply the well-known formula of integration by parts $$\begin{gathered}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x)\rvert}^{p-2}
{D}_{y}{u}(x)
f(x)
{\omega}(x,y)
\\=
-
\frac{1}{2}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x)\rvert}^{p-2}
{D}_{y}{u}(x)
{D}_{y}f(x)
{\omega}(x,y)
\,,
\end{gathered}$$ where $f:V\to {{\boldsymbol}{R}}$ has finite support. Below we denote $B_{R}(x_{0})=B_{R}$ for simplicity of notation.
We obtain $$\begin{gathered}
\label{eq:cacc_i}
J_{1}+J_{2}
:=
\frac{1}{\theta+1}
\sum_{x\in B_{R_{2}}}
{{\ifthenelse{\equal{*}{*}}{({u}(x,t)-h)_{+}}{\left({u}(x,t)-h\right)_{+}}}}^{\theta+1}
\zeta(x)^{p}
\eta(t)^{p}
{d_{{\omega}}}(x)
\\
+
\frac{1}{2}
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2}
{D}_{y}{u}(x,\tau)
{D}_{y}[
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}
\zeta(x)^{p}
]
{\omega}(x,y)
\eta(\tau)^{p}
{\,\textup{\textmd{d}}}\tau
\\
=
\frac{p}{\theta+1}
\int_{0}^{t}
\sum_{x\in B_{R_{2}}}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta+1}
\zeta(x)^{p}
\eta(\tau)^{p-1}
\eta'(\tau)
{d_{{\omega}}}(x)
{\,\textup{\textmd{d}}}\tau
=:J_{3}
\,.
\end{gathered}$$ We split $J_{2}$ according to the equality $${D}_{y}[
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}
\zeta(x)^{p}
]
=
\zeta(y)^{p}
{D}_{y}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}
+
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}
{D}_{y} \zeta(x)^{p}
\,.$$ Next we appeal to Lemma \[l:monot\] with ${v}=0$ to get $$\label{eq:cacc_iii}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2}
{D}_{y}{u}(x,\tau)
{D}_{y}[
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}
]
\ge
\gamma_{0}
{\left|
{D}_{y}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\frac{p+\theta-1}{p}}
\right|}^{p}
\,.$$ Thus | 1 | member_15 |
from we infer the bound $$\label{eq:cacc_iv}
J_{1}
+
J_{21}
+
J_{22}
\le
J_{3}
+
J_{23}
\,,$$ where $$\begin{aligned}
J_{21}
&=
\gamma_{0}
\int_{0}^{t}
\sum_{x,y\in V}
{\left|
{D}_{y}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\frac{p+\theta-1}{p}}
\right|}^{p}
\zeta(y)^{p}
{\omega}(x,y)
\eta(\tau)^{p}
{\,\textup{\textmd{d}}}\tau
\,,
\\
J_{22}
&=
\frac{1}{4}
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2}
{D}_{y}{u}(x,\tau)
{D}_{y}[
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}
]
\zeta(y)^{p}
{\omega}(x,y)
\eta(\tau)^{p}
{\,\textup{\textmd{d}}}\tau
\,,
\\
J_{23}
&=
\frac{1}{2}
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1}
{\lvert{D}_{y}\zeta(x)^{p}\rvert}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}
\eta(\tau)^{p}
{\omega}(x,y)
{\,\textup{\textmd{d}}}\tau
\,.
\end{aligned}$$ The reason to preserve the fraction $J_{22}$ of $J_{2}$ (rather than treating it as in $J_{21}$) will become apparent presently. Let us introduce the functions $$\begin{gathered}
H(x,y;r)
=
\max[
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-r)_{+}}{\left({u}(x,\tau)-r\right)_{+}}}}
,
{{\ifthenelse{\equal{*}{*}}{({u}(y,\tau)-r)_{+}}{\left({u}(y,\tau)-r\right)_{+}}}}
]
\,,
\\
\chi_{x,y}
=
1
\,,
\quad
\text{if $H(x,y;h)>0$;}
\qquad
\chi_{x,y}
=
0
\,,
\quad
\text{if $H(x,y;h)=0$.}
\end{gathered}$$ Note that $r>0$ is arbitrary in the definition of $H$ but we fix $r=h$ in the definition of $\chi_{x,y}$. Next we select $0<k<h$; by elementary calculations and Young’s inequality we get $$\begin{split}
J_{23}
&\le
\frac{p}{2}
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y} {u}(x,\tau)\rvert}^{p-1}
{\lvert{D}_{y} \zeta(x)\rvert}
(\zeta(x)+\zeta(y))^{p-1}
H(x,y;k)^{\theta}
\chi_{x,y}
{\omega}(x,y)
\eta(\tau)^{p}
{\,\textup{\textmd{d}}}\tau
\\
&\le
{\varepsilon}\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p}
(\zeta(x)^{p}+\zeta(y)^{p})
H(x,y;k)^{\theta-1}
\chi_{x,y}
{\omega}(x,y)
\eta(\tau)^{p}
{\,\textup{\textmd{d}}}\tau
\\
&\quad+
\gamma {\varepsilon}^{1-p}
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y}\zeta(x)\rvert}^{p}
H(x,y;k)^{p+\theta-1}
{\omega}(x,y)
\eta(\tau)^{p}
{\,\textup{\textmd{d}}}\tau
=:
J_{231}
+
J_{232}
\,.
\end{split}$$ We want to absorb partially the | 1 | member_15 |
term $J_{231}$ into $J_{22}$, for a suitable choice of ${\varepsilon}$. To this end we observe that by a change of variables we have $$\begin{split}
J_{22}
&=
\frac{1}{4}
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{x}{u}(y,\tau)\rvert}^{p-1}
{\lvert{D}_{x}{{\ifthenelse{\equal{*}{*}}{({u}(y,\tau)-h)_{+}}{\left({u}(y,\tau)-h\right)_{+}}}}^{\theta}\rvert}
\zeta(x)^{p}
{\omega}(y,x)
\eta(\tau)^{p}
{\,\textup{\textmd{d}}}\tau
\\
&=
\frac{1}{4}
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1}
{\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}\rvert}
\zeta(x)^{p}
{\omega}(x,y)
\eta(\tau)^{p}
{\,\textup{\textmd{d}}}\tau
\\
&=
\frac{1}{8}
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1}
{\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}\rvert}
(\zeta(x)^{p}+\zeta(y)^{p})
{\omega}(x,y)
\eta(\tau)^{p}
{\,\textup{\textmd{d}}}\tau
\,.
\end{split}$$ Then by elementary calculus $$\begin{gathered}
\label{eq:cacc_j}
\chi_{x,y}
{\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\theta}\rvert}
\ge
\chi_{x,y}
\theta
{\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}\rvert}
H(x,y;h)^{\theta-1}
\\
\ge
\chi_{x,y}
\theta
{\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}\rvert}
H(x,y;k)^{\theta-1}
\,.
\end{gathered}$$ Next we discriminate three cases in (\[eq:cacc\_j\]), aggregating equivalent symmetric cases: i) ${u}(x,\tau)>h$, ${u}(y,\tau)>h$. In this case clearly $${\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}\rvert}
=
{\lvert{D}_{y}{u}(x,\tau)\rvert}
\,.$$ ii) ${u}(x,\tau)>2h$, $h\ge {u}(y,\tau)$. Then $${\lvert{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}\rvert}
\ge
\frac{{u}(x,\tau)}{2}
\ge
\frac{1}{2}
{\lvert{D}_{y}{u}(x,\tau)\rvert}
\,.$$ iii) $2h \ge {u}(x,\tau)>h\ge {u}(y,\tau)$. In this case $J_{22}$ does not offer any help. We rather bound directly this part of $J_{231}$ as shown below.
Collecting the estimates above we see that, provided ${\varepsilon}\le 1/16$, $$\begin{gathered}
J_{231}
\le
J_{22}
+
{\varepsilon}2^{p+2}
h^{p}
(h-k)^{\theta-1}
\int_{\tau_{2}}^{t}
{\mu_{{\omega}}}(B_{R_{2}}\cap\{2h\ge {u}(x,\tau)>h\})
{\,\textup{\textmd{d}}}\tau
\,.
\end{gathered}$$ Hence we have transformed (\[eq:cacc\_iv\]) into $$\label{eq:cacc_jj}
J_{1}+J_{21}
\le
J_{3}
+
\gamma {\varepsilon}B
+
\gamma {\varepsilon}^{1-p}
A
\,,$$ where $A$ and $B$ are as in the statement.
Finally we check whether the root ${\varepsilon}$ of | 1 | member_15 |
${\varepsilon}B={\varepsilon}^{1-p}A$ is less than $1/16$; on distinguishing the cases ${\varepsilon}\le 1/16$, ${\varepsilon}>1/16$ we get the inequality in (\[eq:cacc\_n\]).
\[l:cacc2\] Let ${u}\in L^{\infty}(0,T;\ell^{q}(V))$, for a given $q> 1$, be a solution of (\[eq:pde\]) in $V\times(0,T)$. Then for all $T>\tau_{1}>\tau_{2}>0$, $h\ge0$, we have for all $0<t<T$ $$\begin{gathered}
\label{eq:cacc2_n}
\sup_{\tau_{1}<\tau<t}
\sum_{x\in V}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q}
{d_{{\omega}}}(x)
+
\int_{\tau_{1}}^{t}
\sum_{x,y\in V}
{\left|{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\frac{p+q-2}{p}}\right|}^{p}
{\omega}(x,y)
{\,\textup{\textmd{d}}}\tau
\\
\le
\frac{\gamma}{\tau_{1}-\tau_{2}}
\int_{\tau_{2}}^{t}
\sum_{x\in V}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q}
{d_{{\omega}}}(x)
{\,\textup{\textmd{d}}}\tau
\,.
\end{gathered}$$ We have also, if condition (\[eq:init\]) is satisfied, $$\begin{gathered}
\label{eq:cacc2_nn}
\sup_{0<\tau<t}
\sum_{x\in V}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q}
{d_{{\omega}}}(x)
+
\int_{0}^{t}
\sum_{x,y\in V}
{\left|{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{\frac{p+q-2}{p}}\right|}^{p}
{\omega}(x,y)
{\,\textup{\textmd{d}}}\tau
\\
\le
\gamma
\int_{0}^{t}
\sum_{x\in V}
{{\ifthenelse{\equal{*}{*}}{({u}_{0}(x)-h)_{+}}{\left({u}_{0}(x)-h\right)_{+}}}}^{q}
{d_{{\omega}}}(x)
{\,\textup{\textmd{d}}}\tau
\,.
\end{gathered}$$
Let us prove (\[eq:cacc2\_n\]); the inequality (\[eq:cacc2\_nn\]) is proved similarly.
We multiply (\[eq:pde\]) against $\zeta(x)\eta(t){{\ifthenelse{\equal{*}{*}}{({u}(x,t)-h)_{+}}{\left({u}(x,t)-h\right)_{+}}}}^{q-1}$; on integrating by parts as in the proof of Lemma \[l:cacc\] we obtain $$\label{eq:cacc2_i}
\begin{split}
&\frac{1}{q}
\sum_{x\in V}
\zeta(x)
{{\ifthenelse{\equal{*}{*}}{({u}(x,t)-h)_{+}}{\left({u}(x,t)-h\right)_{+}}}}^{q}
{d_{{\omega}}}(x)
\eta(t)
\\
&\quad+
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2}
{D}_{y}{u}(x,\tau)
\zeta(y)
{D}_{y}{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q-1}
{\omega}(x,y)
\eta(\tau)
{\,\textup{\textmd{d}}}\tau
\\
&\quad+
\int_{0}^{t}
\sum_{x,y\in V}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-2}
{D}_{y}{u}(x,\tau)
{D}_{y}\zeta(x)
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q-1}
{\omega}(x,y)
\eta(\tau)
{\,\textup{\textmd{d}}}\tau
\\
&\qquad=
\frac{1}{q}
\int_{0}^{t}
\sum_{x\in V}
\zeta(x)
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q}
{d_{{\omega}}}(x)
\eta'(\tau)
{\,\textup{\textmd{d}}}\tau
\,.
\end{split}$$ Next we estimate the second integral in (\[eq:cacc2\_i\]). The absolute value of the integrand is bounded from above by $$\begin{gathered}
\frac{1}{R_{2}-R_{1}}
\sum_{x,y\in B_{R_{2}+1}}
{\lvert{D}_{y}{u}(x,\tau)\rvert}^{p-1}
{{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-h)_{+}}{\left({u}(x,\tau)-h\right)_{+}}}}^{q-1}
{\omega}(x,y)
\\
\le
| 1 | member_15 |
\frac{1}{R_{2}-R_{1}}
\sum_{x,y\in B_{R_{2}+1}}
\big(
{u}(x,\tau)^{p+q-2}
+
{u}(y,\tau)^{p-1}
{u}(x,\tau)^{q-1}
\big)
{\omega}(x,y)
\le
\frac{C_{u}}{R_{2}-R_{1}}
\,,
\end{gathered}$$ where $C_{u}$ is independent of $R_{i}$. Owing to $p+q-2>q$ and to Remark \[r:lp\_scale\], to this end it only remains to observe that $$\begin{gathered}
\sum_{x,y\in B_{R_{2}+1}}
{u}(y,\tau)^{p-1}
{u}(x,\tau)^{q-1}
{\omega}(x,y)
\\
\le
\Big(
\sum_{y\in V}
{u}(y,\tau)^{(p-1)q}
{d_{{\omega}}}(y)
\Big)^{\frac{1}{q}}
\Big(
\sum_{x\in V}
{u}(x,\tau)^{q}
{d_{{\omega}}}(x)
\Big)^{\frac{q-1}{q}}
\,,
\end{gathered}$$ and to use once more Remark \[r:lp\_scale\], since $(p-1)q>q$.
The sought-after estimate follows immediately upon applying Lemma \[l:monot\] with ${v}=0$, and then letting first $R_{2}\to\infty$ and then $R_{1}\to\infty$.
\[r:prelim\_diff\] Lemma \[l:cacc2\] is still in force if ${u}$ is the difference of two solutions to (\[eq:pde\]). The proof is the same, starting from the difference of the two equations and recalling Lemma \[l:monot\].
Existence and comparison {#s:prelim_ex}
------------------------
\[p:ex\] Let ${u}_{0}\in\ell^{q}(V)$, $q>1$. Then (\[eq:pde\])–(\[eq:init\]) has a solution in $L^{\infty}(0,+\infty;\ell^{q}(V))$. If ${u}_{0}\ge 0$ then ${u}\ge 0$.
Let ${u}_{0}\in\ell^{q}(V)$, $q>1$. Define for $n\ge 1$ ${u}_{n}$ as the solution to $$\begin{aligned}
{2}
\label{eq:exist_pde_n}
{\frac{\partial {u}_{n}}{\partial t}}(x,t)
&=
\operatorname{\Delta}_{p}{u}_{n}(x,t)
\,,
&\qquad&
x\in B_{n}
\,,
t>0
\,,
\\
\label{eq:exist_init_n}
{u}_{n}(x,0)
&=
{u}_{0}(x)
\,,
&\qquad&
x\in B_{n}
\,,
\\
\label{eq:exist_dir_n}
{u}_{n}(x,t)
&=
0
\,,
&\qquad&
x\not\in B_{n}
\,,
t\ge 0
\,.
\end{aligned}$$ In practice this | 1 | member_15 |
is a finite system of ordinary differential equations, uniquely solvable in the class $C^{1}(0,T)$ at least as long as the solution stays bounded over $(0,T)$.
In this connection, we rewrite (\[eq:exist\_pde\_n\]) as $${u}_{n}(x,t)^{q-1}
{\frac{\partial {u}_{n}}{\partial t}}(x,t)
=
{u}_{n}(x,t)^{q-1}
\operatorname{\Delta}_{p}{u}_{n}(x,t)
\,,
\qquad
x\in V
\,,
t>0
\,,$$ where we stress that the equality holds for all $x\in V$. In this Subsection we denote $s^{q-1}={\lverts\rvert}^{q-1}\textup{sign}(s)$ for all $s\in {{\boldsymbol}{R}}$. Thus, summing over $x\in V$ and integrating by parts both in $t$ and in $x$ (in the suitable sense) we see that the elliptic part of the equation yields a nonnegative contribution, so that $$\label{eq:exist_energy_n}
\sum_{x\in V}
{\lvert{u}_{n}(x,t)\rvert}^{q}
{d_{{\omega}}}(x)
\le
\sum_{x\in B_{n}}
{\lvert{u}_{0}(x)\rvert}^{q}
{d_{{\omega}}}(x)
\le
{{\lVert{u}_{0}\rVert}_{\ell^{q}(V)}}^{q}
\,.$$ In turn, as explained in Remark \[r:lp\_scale\], this implies stable sup bounds for ${u}_{n}$ which, together with the discrete character of the $p$-Laplacian and with the equation (\[eq:exist\_pde\_n\]), also give stable sup bounds for the time derivative $\partial {u}_{n}/\partial t$, for each fixed $x$. However $V$ is countable, so that this is enough to enable us to extract a subsequence, still denoted by ${u}_{n}$, such that $$\label{eq:exist_unif_conv}
{u}_{n}(x,t)
\to
{u}(x,t)
\,,
\quad
{\frac{\partial {u}_{n}}{\partial t}}(x,t)
\to
{\frac{\partial {u}}{\partial t}}(x,t)$$ for each $x\in V$, uniformly for $t\in | 1 | member_15 |
[0,T]$, where we have made use of the equation again to obtain convergence for the time derivative. Finally, owing to (\[eq:exist\_energy\_n\]) we have $$\label{eq:exist_energy_nj}
\sum_{x\in V}
{\lvert{u}(x,t)\rvert}^{q}
{d_{{\omega}}}(x)
\le
{{\lVert{u}_{0}\rVert}_{\ell^{q}(V)}}^{q}
\,,
\qquad
t>0
\,.$$ It is easily seen that ${u}\in L^{\infty}(0,+\infty;\ell^{q}(V))$ is a solution to (\[eq:pde\])–(\[eq:init\]). If ${u}_{0}\ge 0$, we appeal to our next result to prove that ${u}\ge 0$.
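The monotonicity of the $\ell^{q}$ energy in (\[eq:exist\_energy\_n\]) is easy to observe numerically on the truncated problems. The sketch below (ours; a crude explicit Euler discretization, not the argument used above) integrates the Dirichlet problem on a ball inside a path graph and checks that $\sum_{x}|{u}_{n}(x,t)|^{q}{d_{{\omega}}}(x)$ never increases.

```python
# Explicit-Euler integration of the truncated Dirichlet problem on a path,
# checking that the l^q energy is nonincreasing along the flow.
p, q, dt = 3.0, 2.0, 1e-3
n = 10                                    # vertices 0..2n; u = 0 at 0 and 2n
u = [0.0] * (2 * n + 1)
u[n], u[n + 1] = 1.0, 0.5                 # a nonnegative initial datum
# d_omega = 2 at interior vertices; the (zero) endpoints contribute nothing.
energy = lambda v: sum(2.0 * abs(z) ** q for z in v)
last = energy(u)
for _ in range(5000):
    new = u[:]
    for i in range(1, 2 * n):
        acc = sum(abs(u[j] - u[i]) ** (p - 2) * (u[j] - u[i])
                  for j in (i - 1, i + 1))
        new[i] = u[i] + dt * acc / 2.0
    u = new
    e = energy(u)
    assert e <= last + 1e-9               # the energy is nonincreasing
    last = e
print("final l^q energy:", last)
```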
\[p:compare\] If ${u}_{1}$, ${u}_{2}\in L^{\infty}(0,T; \ell^{q}(V))$ solve (\[eq:pde\])–(\[eq:init\]) with ${u}_{01}$, ${u}_{02}\in\ell^{q}(V)$, ${u}_{01}\ge {u}_{02}$, then ${u}_{1}\ge{u}_{2}$.
According to Remark \[r:lp\_scale\] and to Definition \[d:sol\], we may assume $q>1$. Define ${w}={u}_{2}-{u}_{1}$. Then ${w}$ does not solve (\[eq:pde\]), but we may still apply (\[eq:cacc2\_nn\]) (with $h=0$) to it; see Remark \[r:prelim\_diff\]. This proves ${{\ifthenelse{\equal{*}{*}}{({w})_{+}}{\left({w}\right)_{+}}}}=0$ and thus the statement.
Elementary inequalities {#s:prelim_elem}
-----------------------
We record for future use two immediate consequences of (\[eq:fkf\_nd\]), (\[eq:fkf\_ni\]): $$\begin{aligned}
{2}
\label{eq:fkf_above}
{\Lambda_p}(sa)^{-1}
&\le
s^{\omega}
{\Lambda_p}(a)^{-1}
\,,
&\qquad&
s\ge 1
\,,
a>0
\,;
\\
\label{eq:fkf_below}
{\Lambda_p}(\sigma a)^{-1}
&\le
\sigma^{\frac{p}{{N}}}
{\Lambda_p}(a)^{-1}
\,,
&\qquad&
0<\sigma\le 1
\,,
a>0
\,.\end{aligned}$$ Also the following Lemma relies on (\[eq:fkf\_nd\]) and will be used in a context where it is important that $\nu<1/(p-1)$.
\[l:dcf\] Let $\nu={N}(p-2)/[({N}(p-2)+p)(p-1)]$ and $b>0$. Then the function $$\tau\mapsto \tau^{\nu}{\psi}_{1}^{(-1)}(\tau^{-1}b)^{\frac{p-2}{p-1}}
\,,
\qquad
\tau>0
\,,$$ is nondecreasing.
Equivalently we show that $$r\mapsto r^{-\alpha}{\psi}_{1}^{(-1)}(r)^{p-2}$$ is nonincreasing for | 1 | member_15 |
$\alpha=\nu(p-1)$. Set $s={\psi}_{1}^{(-1)}(r)$, so that by definition of ${\psi}_{1}$ $$r^{-\alpha}{\psi}_{1}^{(-1)}(r)^{p-2}
=
s^{(1-\alpha)(p-2)}{\Lambda_p}(s^{-1})^{-\alpha}
=
[s^{-\frac{p}{{N}}}{\Lambda_p}(s^{-1})]^{-\alpha}
By assumption (\[eq:fkf\_nd\]), the latter quantity is indeed nonincreasing in $s$, which in turn is a nondecreasing function of $r$.
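For the model case ${\Lambda_p}(v)=v^{-p/{N}}$ of Subsection \[s:examples\], the quantity in the Lemma is in fact constant in $\tau$, so the exponent $\nu$ is borderline; a short check (ours):

```python
# For Lambda_p(v) = v^(-p/N): psi_1^(-1)(r) = r^(N/(N(p-2)+p)) exactly, and
# tau^nu * psi_1^(-1)(b/tau)^((p-2)/(p-1)) reduces to the constant b^nu.
N, p, b = 2.0, 3.0, 1.0
nu = N * (p - 2) / ((N * (p - 2) + p) * (p - 1))
psi1_inv = lambda r: r ** (N / (N * (p - 2) + p))
f = lambda tau: tau ** nu * psi1_inv(b / tau) ** ((p - 2) / (p - 1))
vals = [f(10.0 ** k) for k in range(-3, 4)]
assert max(vals) - min(vals) < 1e-12 * max(vals)   # constant, hence nondecreasing
```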
\[l:bbl\] Under assumption (\[eq:fkf\_bbl\]) we have that if $R$, $s>0$, $c\ge 1$ and $$\label{eq:bbl_m}
R^{p}
=
cs
{\psi}_{1}^{(-1)}(s^{-1})^{p-2}
\,,$$ then $$\label{eq:bbl_mm}
{\mu_{{\omega}}}(B_{{\lfloor R\rfloor}})
\le
\gamma(c)
\psi_{1}^{(-1)}
(s^{-1})^{-1}
\,.$$
Let $\tau>0$ be such that $s^{-1}={\psi}_{1}(\tau)=\tau^{p-2}{\Lambda_p}(\tau^{-1})$. Then $$c^{-1}R^{p}
=
{\Lambda_p}(\tau^{-1})^{-1}
\,.$$ On the other hand, on setting $v={\mu_{{\omega}}}(B_{{\lfloor R\rfloor}})$ and invoking (\[eq:fkf\_bbl\]) we get $$c^{-1}R^{p}
\ge
c^{-1}{\mathcal{R}}(v)^{p}
\ge
c^{-1}
\gamma_{0}
{\Lambda_p}(v)^{-1}
\ge
{\Lambda_p}((\gamma_{0}c^{-1})^{\frac{{N}}{p}}v)^{-1}
\,,$$ where we also used (\[eq:fkf\_below\]). Since ${\Lambda_p}$ is nonincreasing, the result follows.
Proofs of Proposition \[p:linf\_meas\] and Corollary \[co:linf\_int\] {#s:linf}
=====================================================================
By assumption, and by Remark \[r:lp\_scale\], ${u}\in L^{\infty}(0,T;\ell^{q}(V))$ for some $q> 1$; then for all $k>0$ the cut function ${{\ifthenelse{\equal{*}{*}}{({u}(t)-k)_{+}}{\left({u}(t)-k\right)_{+}}}}$ is finitely supported. For given $0<\sigma_{1}<\sigma_{2}<1/2$, $k>0$, $0<t<T$ define the decreasing sequences $$\begin{gathered}
k_{i}
=
k[
1-\sigma_{2}
+
2^{-i}
(\sigma_{2}-\sigma_{1})
]
\,,
\qquad
i=0\,,1\,,2\,,\dots
\\
t_{i}
=
\frac{t}{2}
[
1-\sigma_{2}
+
2^{-i}
(\sigma_{2}-\sigma_{1})
]
\,,
\qquad
i=0\,,1\,,2\,,\dots
\end{gathered}$$ and let $f_{i}(x,\tau)={{\ifthenelse{\equal{*}{*}}{({u}(x,\tau)-k_{i})_{+}}{\left({u}(x,\tau)-k_{i}\right)_{+}}}}^{\nu}$, where $\nu=(p+q-2)/p$. Let also $$\begin{aligned}
{m}_{i}(\tau)
&=
{\mu_{{\omega}}}(
\{
x\in V
\mid
{u}(x,\tau)>k_{i}
\}
)
\,,
\quad
{M}_{i}
=
\sup_{t_{i}<\tau<t}
{m}_{i}(\tau)
\,,
\\
{{D}}_{i}(\tau)
&=
\sum_{x,y\in V}
| 1 | member_15 |
{\lvert{D}_{y}f_{i}(x,\tau)\rvert}^{p}
{\omega}(x,y)
\,.
\end{aligned}$$ Since $b:=q/\nu<p$, it follows from the Faber-Krahn inequality (\[eq:fk\]) and from Hölder’s and Young’s inequalities that $$\begin{gathered}
\label{eq:fkb}
\sum_{x\in V}
f_{i+1}(x,\tau)^{b}
{d_{{\omega}}}(x)
\le
{m}_{i+1}(\tau)^{1-\frac{b}{p}}
{\Lambda_p}({m}_{i+1}(\tau))^{-\frac{b}{p}}
{{D}}_{i+1}(\tau)^{\frac{b}{p}}
\\
\le
{\varepsilon}^{\frac{p}{b}}
{{D}}_{i+1}(\tau)
+
{\varepsilon}^{-\frac{p}{p-b}}
{\Lambda_p}({m}_{i+1}(\tau))^{-\frac{b}{p-b}}
{m}_{i+1}(\tau)
\,.
\end{gathered}$$ Here ${\varepsilon}>0$ is arbitrary and will be selected below.
We integrate over $(t_{i+1},t)$ to find $$\label{eq:linf_i}
\begin{split}
&\int_{t_{i+1}}^{t}
\sum_{x\in V}
f_{i+1}(x,\tau)^{b}
{d_{{\omega}}}(x)
{\,\textup{\textmd{d}}}\tau
\le
{\varepsilon}^{\frac{p}{b}}
\int_{t_{i+1}}^{t}
{{D}}_{i+1}(\tau)
{\,\textup{\textmd{d}}}\tau
\\
&\qquad+
{\varepsilon}^{-\frac{p}{p-b}}
\int_{t_{i+1}}^{t}
{\Lambda_p}({m}_{i+1}(\tau))^{-\frac{b}{p-b}}
{m}_{i+1}(\tau)
{\,\textup{\textmd{d}}}\tau
\\
&\quad\le
{\varepsilon}^{\frac{p}{b}}
\int_{t_{i+1}}^{t}
{{D}}_{i+1}(\tau)
{\,\textup{\textmd{d}}}\tau
+
{\varepsilon}^{-\frac{p}{p-b}}
t
{\Lambda_p}({M}_{i+1})^{-\frac{b}{p-b}}
{M}_{i+1}
\,.
\end{split}$$ Next we invoke Lemma \[l:cacc2\] with $\tau_{1}=t_{i}$, $\tau_{2}=t_{i+1}$, $h=k_{i}$, to infer $$\label{eq:linf_ii}
\begin{split}
&
L_{i}:=
\sup_{t_{i}<\tau<t}
\sum_{x\in V}
f_{i}(x,\tau)^{b}
{d_{{\omega}}}(x)
+
\int_{t_{i}}^{t}
{{D}}_{i}(\tau)
{\,\textup{\textmd{d}}}\tau
\\
&\quad
\le
\frac{\gamma 2^{i}}{t(\sigma_{2}-\sigma_{1})}
\int_{t_{i+1}}^{t}
\sum_{x\in V}
f_{i+1}(x,\tau)^{b}
{d_{{\omega}}}(x)
{\,\textup{\textmd{d}}}\tau
\\
&\quad\le
\frac{\gamma 2^{i}}{t(\sigma_{2}-\sigma_{1})}
{\varepsilon}^{\frac{p}{b}}
\int_{t_{i+1}}^{t}
{{D}}_{i+1}(\tau)
{\,\textup{\textmd{d}}}\tau
\\
&\qquad
+
\frac{\gamma 2^{i}}{\sigma_{2}-\sigma_{1}}
{\varepsilon}^{-\frac{p}{p-b}}
{\Lambda_p}({M}_{i+1})^{-\frac{b}{p-b}}
{M}_{i+1}
\,,
\end{split}$$ where the second inequality follows of course from (\[eq:linf\_i\]). For a $\delta>0$ to be chosen, select ($\gamma$ denotes here the constant in (\[eq:linf\_ii\])) $$\frac{\gamma 2^{i}}{t(\sigma_{2}-\sigma_{1})}
{\varepsilon}^{\frac{p}{b}}
=
\delta
\qquad
\text{i.e.,}
\qquad
{\varepsilon}=
\gamma_{0}
\delta^{\frac{b}{p}}
t^{\frac{b}{p}}
(\sigma_{2}-\sigma_{1})^{\frac{b}{p}}
2^{-\frac{b}{p}i}
\,.$$ On substituting this choice of ${\varepsilon}$ in (\[eq:linf\_ii\]) we arrive at an estimate which can be successfully iterated, that is $$\label{eq:linf_iii}
L_{i}
\le
\delta L_{i+1}
| 1 | member_15 |