---
abstract: 'Mean-reverting assets are one of the holy grails of financial markets: if such assets existed, they would provide trivially profitable investment strategies for any investor able to trade them, thanks to the knowledge that such assets oscillate predictably around their long term mean. The modus operandi of cointegration-based trading strategies [@tsay2005analysis §8] is to first create a portfolio of assets whose aggregate value mean-reverts, and then to exploit that knowledge by selling short or buying that portfolio when its value deviates from its long-term mean. Such portfolios are typically selected using tools from cointegration theory [@granger; @johansen], whose aim is to detect combinations of assets that are stationary, and therefore mean-reverting. We argue in this work that focusing on stationarity only may not suffice to ensure profitability of cointegration-based strategies. While it might be possible to create synthetically, using a large array of financial assets, a portfolio whose aggregate value is stationary and therefore mean-reverting, trading such a large portfolio incurs in practice significant trading or borrowing costs. Looking for stationary portfolios formed by many assets may also result in portfolios that have a very small volatility and which require significant leverage to be profitable. We study in this work algorithmic approaches that can mitigate these effects by searching for maximally mean-reverting portfolios which are sufficiently sparse and/or volatile.'
author:
- |
Marco Cuturi\
Graduate School of Informatics\
Kyoto University\
`[email protected]`\
\
Alexandre d’Aspremont\
D.I., UMR CNRS 8548\
Ecole Normale Supérieure, `[email protected]`
title: |
Mean-Reverting Portfolios:\
Tradeoffs Between Sparsity and Volatility
---
Introduction
============
Mean-reverting assets, namely assets whose price oscillates predictably around a long term mean, provide investors with an ideal investment opportunity. Because of their tendency to pull back to a given price level, a naive contrarian strategy of buying the asset when its price lies below that mean, or selling short the asset when it lies above that mean can be profitable. Unsurprisingly, assets that exhibit significant mean-reversion are very hard to find in efficient markets. Whenever mean-reversion is observed in a single asset, it is almost always impossible to profit from it: the asset may typically have very low volatility, be illiquid, hard to short-sell, or its mean-reversion may occur at a time-scale (months, years) for which the borrow-cost of holding or shorting the asset may well exceed any profit expected from such a contrarian strategy.
### Synthetic Mean-Reverting Baskets
Since mean-reverting assets rarely appear in liquid markets, investors have focused instead on creating synthetic assets that can mimic the properties of a single mean-reverting asset, and trading such synthetic assets as if they were a single asset. Such a synthetic asset is typically designed by combining long and short positions in various liquid assets to form a *mean-reverting portfolio*, whose aggregate value exhibits significant mean-reversion.
Constructing such synthetic portfolios is, however, challenging. Whereas simple descriptive statistics and unit-root test procedures can be used to test whether a single asset is mean-reverting, building mean-reverting portfolios requires finding a proper vector of algebraic weights (long and short positions) that describes a portfolio which has a mean-reverting aggregate value. In that sense, mean-reverting portfolios are made by the investor, and cannot be simply chosen among tradable assets. A mean-reverting portfolio is characterized both by the pool of assets the investor has selected (starting with the dimension of the vector), and by the fixed nominal quantities (or weights) of each of these assets in the portfolio, which the investor also needs to set. When only two assets are considered, such baskets are usually known as long-short trading pairs. We consider in this paper baskets that are constituted by more than two assets.
### Mean-Reverting Baskets with Sufficient Volatility and Sparsity
A mean-reverting portfolio must exhibit sufficient mean-reversion to ensure that a contrarian strategy is profitable. To meet this requirement, investors have relied on cointegration theory [@granger; @maddala1998urc; @johansen2005cointegration] to estimate linear combinations of assets which exhibit stationarity (and therefore mean-reversion) using historical data. We argue in this work, as we did in earlier references [@alex; @cuturi2013mean], that mean-reverting strategies cannot, however, only rely on this approach to be profitable. Arbitrage opportunities can only exist if they are large enough to be traded without using too much leverage or incurring too many transaction costs. For mean-reverting baskets, this condition translates naturally into a first requirement that the gap between the basket valuation and its long term mean is large enough on average, namely that the basket price has sufficient variance or volatility. A second desirable property is that mean-reverting portfolios require trading as few assets as possible to minimize costs, namely that the weights vector of that portfolio is sparse. We propose in this work methods that maximize a proxy for mean reversion, and which can take into account at the same time constraints on variance and sparsity.\
\
We propose first in Section \[s:crit\] three proxies for mean reversion. Section \[s:opt\] defines the basket optimization problems corresponding to these quantities. We show in Section \[s:sdp\] that each of these problems translates naturally into a semidefinite relaxation which produces either exact or approximate solutions using sparse PCA techniques. Finally, we present numerical evidence in Section \[s:numres\] that taking into account sparsity and volatility can significantly boost the performance of mean-reverting trading strategies in trading environments where trading costs are not negligible.
Proxies for Mean-Reversion {#s:crit}
==========================
Isolating stable linear combinations of variables of multivariate time series is a fundamental problem in econometrics. A classical formulation of the problem reads as follows: given a vector valued process $x=(x_t)_t$ taking values in $\RR^n$ and indexed by time $t\in\NN$, and making no assumptions on the stationarity of each individual component of $x$, can we estimate one or many directions $y\in\RR^n$ such that the univariate process $(y^Tx_t)$ is stationary? When such a vector $y$ exists, the process $x$ is said to be cointegrated. The goal of cointegration techniques is to detect and estimate such directions $y$. Provided that such techniques can efficiently isolate sparse mean-reverting baskets, their financial application can be either straightforward, using simple event triggers to buy, sell or simply hold the basket [@tsay2005analysis §8.6], or rely on more elaborate optimal trading strategies if one assumes that the mean-reverting basket value is an Ornstein-Uhlenbeck process, as discussed in [@jurek; @liu2010optimal; @elie:hal-00573429].
Related Work and Problem Setting
--------------------------------
@granger provided in their seminal work a first approach to compare two non-stationary univariate time series $(x_t,y_t)$, and to test for the existence of a term $\alpha$ such that $y_t-\alpha x_t$ becomes stationary. Following this work, several techniques have been proposed to generalize that idea to multivariate time series. As detailed in the survey by @maddala1998urc [§5], cointegration techniques differ in the modeling assumptions they require on the time series themselves. Some are designed to identify only one cointegrated relationship, whereas others are designed to detect many or all of them. Among these references, @johansen proposed a popular approach that builds upon a VAR model, as surveyed in [@johansen2005cointegration; @johansen2009cointegration]. These approaches all discuss issues that are relevant to econometrics, such as de-trending and seasonal adjustments. Some of them focus more specifically on testing procedures designed to check whether such cointegrated relationships exist or not, rather than on the robustness of the estimation of that relationship itself. We follow in this work a simpler approach proposed by @alex, which is to trade off interpretability, testing and modeling assumptions for an optimization framework which can be tailored to include other aspects than stationarity alone. @alex did so by adding regularizers to the predictability criterion proposed by @box1977cam. We follow in this paper the approach we proposed in [@cuturi2013mean] to design mean-reversion proxies that do not rely on any modeling assumption.
Throughout this paper, we write $\symm_n$ for the $n\times n$ cone of positive definite matrices. We consider in the following a multivariate stochastic process $x=(x_t)_{t\in\NN}$ taking values in $\RR^n$. We write $\Acal_k= \Expect[x_t x_{t+k}^T], k\geq 0$ for the lag-$k$ autocovariance matrix of $x_t$ if it is finite. Using a sample path $\bx$ of $(x_t)$, where $\bx=(\bx_1,\ldots,\bx_T)$ and each $\bx_t\in\RR^n$, we write $A_k$ for the *empirical* counterpart of $\Acal_k$ computed from $\bx$, $$\label{eq:autos}
A_k\defeq \frac{1}{T-k-1}\sum_{t=1}^{T-k} \tilde{\bx}_t \tilde{\bx}_{t+k}^T,\; \tilde{\bx}_t\defeq \bx_t-\frac{1}{T}\sum_{t=1}^T \bx_t.$$ Given $y\in\RR^n$, we now define three measures which can all be interpreted as proxies for the mean reversion of $y^Tx_t$. **Predictability** – defined for stationary processes by @box1977cam and generalized for non-stationary processes by @Bewl94 – measures how close to noise the series is. The **portmanteau** statistic [@Ljun78] is used to test whether a process is white noise. Finally, the **crossing statistic** [@ylvisaker1965expected] measures the probability that a process crosses its mean per unit of time. In all three cases, low values for these criteria imply a fast mean-reversion.
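For illustration, the estimator in Equation (\[eq:autos\]) can be computed in a few lines of NumPy; this is only a sketch, and the function name and the one-row-per-date array layout are our own conventions:

```python
import numpy as np

def empirical_autocov(X, k):
    """Empirical lag-k autocovariance A_k of a sample path X of shape (T, n),
    following Equation [eq:autos]: rows are dates, columns are assets."""
    T, _ = X.shape
    Xc = X - X.mean(axis=0)               # center with the sample mean
    return Xc[:T - k].T @ Xc[k:] / (T - k - 1)

# A_0 is the sample covariance, A_1 the lag-one autocovariance:
# A0, A1 = empirical_autocov(X, 0), empirical_autocov(X, 1)
```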
Predictability {#subsec:pred}
--------------
We briefly recall the canonical decomposition derived in [@box1977cam]. Suppose that $x_t$ follows the recursion $$\label{eq:ar1}
x_t = \hat{x}_{t-1} + \varepsilon_t,$$ where $\hat{x}_{t-1}$ is a predictor of $x_t$ built upon past values of the process recorded up to $t-1$, and $\varepsilon_t$ is a vector of i.i.d. Gaussian noise with zero mean and covariance $\Sigma \in \symm_n$, independent of all variables $(x_{r})_{r<t}$. The canonical analysis in [@box1977cam] starts as follows.
### Univariate case
Suppose $n=1$, and thus $\Sigma\in\RR_+$. Equation (\[eq:ar1\]) then leads to $$\Expect[x_t^2]=\Expect[\hat{x}_{t-1}^2]+\Expect[\varepsilon_t^2], \text{ and thus } 1=\frac{\hat{\sigma}^2}{\sigma^2}+\frac{\Sigma}{\sigma^2},$$ by introducing the variances $\sigma^2$ and $\hat{\sigma}^2$ of $x_t$ and $\hat{x}_{t-1}$ respectively. @box1977cam measure the *predictability* of $x_t$ by the ratio $$\lambda\defeq\frac{\hat{\sigma}^2}{\sigma^2}.$$ The intuition behind this variance ratio is simple: when it is small, the variance of the noise dominates that of $\hat{x}_{t-1}$ and $x_t$ is essentially noise; when it is large, $\hat{x}_{t-1}$ dominates the noise and $x_t$ can be accurately predicted on average.
### Multivariate case
Suppose $n>1$ and consider now the univariate process $(y^Tx_t)_{t}$ with weights $y\in\RR^{n}$. Using (\[eq:ar1\]) we know that $y^Tx_t =y^T\hat{x}_{t-1}+y^T\varepsilon_t$, and we can measure its predictability as $$\label{eq:pred}
\lambda(y)\defeq\frac{y^T\hat{\Acal}_0 y}{y^T\Acal_0 y},$$ where $\hat{\Acal}_0$ and $\Acal_0$ are the covariance matrices of $\hat{x}_{t-1}$ and $x_t$ respectively. Minimizing predictability $\lambda(y)$ is then equivalent to finding the minimum generalized eigenvalue $\lambda$ solving $$\label{eq:pred2}
\det(\hat{\Acal}_0 - \lambda \Acal_0) =0.$$ Assuming that $\Acal_0$ is positive definite, the basket with minimum predictability will be given by $y=\Acal_0^{-1/2}y_0$, where $y_0$ is the eigenvector corresponding to the smallest eigenvalue of the matrix $\Acal_0^{-1/2} \hat{\Acal}_0 \Acal_0^{-1/2}$.
### Estimation of $\lambda(y)$
All of the quantities used to define $\lambda$ above need to be estimated from sample paths. $\Acal_0$ can be estimated by $A_0$ following Equation (\[eq:autos\]). All other quantities depend on the predictor $\hat{x}_{t-1}$. @box1977cam assume that $x_t$ follows a vector autoregressive model of order $p$ – VAR(p) in short – and therefore $\hat{x}_{t-1}$ takes the form, $$\hat{x}_{t-1}=\sum_{k=1}^p \Hca_k x_{t-k},$$ where the $p$ matrices $(\Hca_k)$ each contain $n\times n$ autoregressive coefficients. Estimating $\Hca_k$ from the sample path $\bx$, @box1977cam solve for the optimal basket by inserting these estimates in the generalized eigenvalue problem displayed in Equation (\[eq:pred2\]). If one assumes that $p=1$ (the case $p>1$ can be trivially reformulated as a VAR(1) model with adequate reparameterization), then $$\hat{\Acal}_0=\Hca_1 \Acal_0 \Hca_1^T \text{ and }\Acal_1=\Acal_0 \Hca_1,$$ and thus the Yule-Walker estimator [@lutkepohl2005nim §3.3] of $\Hca_1$ would be $H_1=A_0^{-1} A_1$. Minimizing predictability boils down to solving in that case $$\min_{y} \hat{\lambda}(y), \; \hat{\lambda}(y)\defeq \frac{y^T \left( H_1 A_0 H_1^T\right) y}{y^T A_0 y}=\frac{y^T \left( A_1 A_0^{-1} A_1^T\right) y}{y^T A_0 y},$$ which is equivalent to computing the smallest eigenvector of the matrix $A_0^{-1/2}A_1 A_0^{-1} A_1^T A_0^{-1/2}$ if the covariance matrix $A_0$ is invertible.
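The resulting plug-in estimator can be sketched in a few lines of NumPy/SciPy; this is a minimal illustration assuming $A_0$ is well-conditioned, with function names of our own choosing:

```python
import numpy as np
from scipy.linalg import eigh

def min_predictability_basket(A0, A1):
    """Box-Tiao basket under a VAR(1) fit: minimize
    y^T (A1 A0^{-1} A1^T) y / (y^T A0 y), solved as a generalized
    symmetric eigenvalue problem in the pair (M, A0)."""
    A0 = (A0 + A0.T) / 2
    M = A1 @ np.linalg.solve(A0, A1.T)      # M = A1 A0^{-1} A1^T
    M = (M + M.T) / 2
    evals, evecs = eigh(M, A0)              # eigenvalues in ascending order
    y = evecs[:, 0]                         # smallest generalized eigenvalue
    return y / np.linalg.norm(y), evals[0]
```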
The machinery of @box1977cam to quantify mean-reversion requires defining a model to form $\hat{x}_{t-1}$, the conditional expectation of $x_t$ given previous observations. We consider in the following two criteria that do without such modeling assumptions.
Portmanteau Criterion {#ss:portm}
---------------------
Recall that the [*portmanteau*]{} statistic of order $p$ [@Ljun78] of a centered univariate stationary process $x$ (with $n=1$) is given by $$\por_p(x)=\frac{1}{p}\sum_{i=1}^p \left(\frac{\Expect[x_t x_{t+i}]}{\Expect[x_t^2]}\right)^2$$ where ${\Expect[x_t x_{t+i}]}/{\Expect[x_t^2]}$ is the $i$th order autocorrelation of $x_t$. The portmanteau statistic of a white noise process is by definition $0$ for any $p$. Given a multivariate $(n>1)$ process $x$ we write $$\phi_p(y)=\por_p(y^T x)=\frac{1}{p}\sum_{i=1}^p\left(\frac{y^T \Acal_i y}{y^T \Acal_0 y}\right)^2,$$ for a coefficient vector $y\in\RR^n$. By construction, $\phi_p(y)=\phi_p(ty)$ for any $t\ne 0$ and in what follows, we will impose $\|y\|_2=1$. The quantities $\phi_p(y)$ are computed using the following estimates [@Hami94 p.110]: $$\label{eq:portm}
\hat{\phi}_p(y)=\frac{1}{p}\sum_{i=1}^p\left(\frac{y^T A_i y}{y^T A_0 y}\right)^2.$$
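The estimate in Equation (\[eq:portm\]) is straightforward to evaluate; the short sketch below assumes the empirical autocovariances have already been computed and symmetrizes them on the fly:

```python
import numpy as np

def portmanteau_stat(y, A0, A_list):
    """Empirical portmanteau statistic of order p = len(A_list) for the
    basket y^T x (Equation [eq:portm]); A_list = [A_1, ..., A_p]."""
    den = float(y @ A0 @ y)
    ratios = [float(y @ ((Ai + Ai.T) / 2) @ y) / den for Ai in A_list]
    return np.mean([r ** 2 for r in ratios])
```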
Crossing Statistics {#ss:cross}
-------------------
@Kede94 [§4.1] defines the [*zero crossing rate*]{} of a univariate $(n=1)$ process $x$ (its expected number of crosses around $0$ per unit of time) as $$\label{eq:cross-rate}
\gamma(x)=\Expect\left[\frac{1}{T-1}\sum_{t=2}^{T}\mathbf{1}_{\{x_t x_{t-1}\leq 0\}}\right].$$ A result known as the cosine formula states that if $x_t$ is an autoregressive process of order one, AR(1), namely if $|a|<1$, $\varepsilon_t$ is i.i.d. standard Gaussian noise and $x_t=a x_{t-1} + \varepsilon_t$, then [@Kede94 §4.2.2]: $$\gamma(x)=\frac{\arccos(a)}{\pi}.$$ Hence, for AR(1) processes, minimizing the first order autocorrelation $a$ also directly maximizes the crossing rate of the process $x$. For $n>1$, since the first order autocorrelation of $y^Tx_t$ is $y^T\Acal_1y/y^T\Acal_0y$, we propose to minimize $y^T\Acal_1y$ and to ensure that all other absolute autocorrelations $\abs{y^T\Acal_ky}$, $k>1$, are small.
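The cosine formula is easy to verify empirically; the following short Monte Carlo sketch (sample size and seed arbitrary) compares the empirical fraction of sign changes of a simulated AR(1) path with $\arccos(a)/\pi$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, T = 0.6, 200_000
x = np.zeros(T)
for t in range(1, T):                       # AR(1): x_t = a x_{t-1} + eps_t
    x[t] = a * x[t - 1] + rng.standard_normal()

crossings = np.mean(x[1:] * x[:-1] <= 0)    # empirical zero-crossing rate
print(crossings, np.arccos(a) / np.pi)      # both close to 0.295
```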
Optimal Baskets {#s:opt}
===============
Given a centered multivariate process $\bx$, we form its covariance matrix $A_0$ and its $p$ autocovariances $(A_1,\ldots,A_p)$. Because $y^TAy=y^T(A+A^T)y/2$, we symmetrize all autocovariance matrices $A_i$. We investigate in this section the problem of estimating baskets that have maximal mean reversion (as measured by the proxies proposed in Section \[s:crit\]), while being at the same time sufficiently volatile and supported by as few assets as possible. The latter will be achieved by selecting portfolios $y$ with a small “0-norm”, namely portfolios for which the number of non-zero components in $y$, $$\|y\|_0\defeq \#\{1\leq i\leq n \,|\, y_i\ne 0\},$$ is small. The former will be achieved by selecting portfolios whose aggregated value exhibits a variance over time that exceeds a given threshold $\nu>0$. Note that for the variance of $(y^Tx_t)$ to exceed a level $\nu$, the largest eigenvalue of $A_0$ must necessarily be larger than $\nu$, which we always assume in what follows. Combining these two constraints, we propose three different mathematical programs that reflect these trade-offs.
Minimizing Predictability {#ss:opt-pred}
-------------------------
Minimizing Box-Tiao’s predictability $\hat{\lambda}$ defined in §\[subsec:pred\], while ensuring both that the variance of the resulting process exceeds $\nu$ and that the vector of loadings is sparse with a 0-norm equal to $k$, means solving the following program: $$\label{eq:P1}
\BA{ll}
\mbox{minimize} & y^T M y\\
\mbox{subject to} & y^T A_0 y\geq \nu,\\
& \|y\|_2=1,\\
& \|y\|_0=k,
\EA$$ in the variable $y\in\RR^n$ with $M\defeq A_1 A_0^{-1} A_1^T$, where $M,A_0\in\symm_n$. Without the normalization constraint $\|y\|_2=1$ and the sparsity constraint $\|y\|_0=k$, problem \[eq:P1\] is equivalent to a generalized eigenvalue problem in the pair $(M,A_0)$. That problem quickly becomes unstable when $A_0$ is ill-conditioned or $M$ is singular. Adding the normalization constraint $\|y\|_2=1$ solves these numerical problems.
Minimizing the Portmanteau Statistic {#ss:opt-portm}
------------------------------------
Using a similar formulation, we can also minimize the order $p$ portmanteau statistic defined in §\[ss:portm\] while ensuring a minimal variance level $\nu$ by solving: $$\label{eq:P2}
\BA{ll}
\mbox{minimize} & \sum_{i=1}^{p}\left(y^T A_i y\right)^2\\
\mbox{subject to} & y^T A_0 y\geq \nu,\\
& \|y\|_2=1,\\
& \|y\|_0=k,
\EA$$ in the variable $y\in\RR^n$, for some parameter $\nu>0$. Problem \[eq:P2\] has a natural interpretation: the objective function directly minimizes the portmanteau statistic, while the constraints normalize the norm of the basket weights to one, impose a variance larger than $\nu$, and impose a sparsity constraint on $y$.
Minimizing the Crossing Statistic {#ss:opt-portm2}
---------------------------------
Following the results in §\[ss:cross\], maximizing the crossing rate while keeping the rest of the autocorrelogram low means solving: $$\label{eq:P3}
\BA{ll}
\mbox{minimize} & y^TA_1y + \mu\sum_{i=2}^{p}\left(y^T A_i y\right)^2\\
\mbox{subject to} & y^T A_0 y\geq \nu,\\
& \|y\|_2=1,\\
& \|y\|_0=k,
\EA$$ in the variable $y\in\RR^n$, for some parameters $\mu,\nu>0$. This program will produce processes that are close to being AR(1), while having a high crossing rate.
Semidefinite Relaxations and Sparse Components {#s:sdp}
==============================================
Problems \[eq:P1\], \[eq:P2\] and \[eq:P3\] are not convex and can be extremely difficult to solve in practice, since they involve a sparse selection of variables. We detail in this section convex relaxations of these problems which can be used to derive relevant sub-optimal solutions.
A Semidefinite Programming Approach to Basket Estimation {#subsec:asemidefinite}
--------------------------------------------------------
We propose to relax problems \[eq:P1\], \[eq:P2\] and \[eq:P3\] into Semidefinite Programs (SDP) [@vandenberghe1996semidefinite]. We show that these semidefinite programs can naturally handle sparsity and volatility constraints while still aiming at mean-reversion. In some restricted cases, one can show that these relaxations are tight, in the sense that they solve exactly the programs described above. In such cases, the true solution $y^\star$ of some of the programs above can be recovered from their corresponding SDP solution $Y^\star$.
However, in most of the cases we will be interested in, such a correspondence is not guaranteed, and these SDP relaxations can only serve as a guide to propose solutions to these hard non-convex problems in the original variable $y$. To do so, the optimal solution $Y^\star$ needs to be *deflated* from a potentially full-rank $n\times n$ matrix to a rank-one matrix $yy^T$, where $y$ can be considered a good candidate for basket weights. A typical approach to deflate a positive semidefinite matrix into a vector is to consider its leading eigenvector. Having sparsity constraints in mind, we propose instead to apply a heuristic grounded in sparse PCA [@zou2006sparse; @d2007direct]: instead of considering the leading eigenvector, we recover the leading *sparse* eigenvector of $Y^\star$ (with a $0$-norm constrained to be equal to $k$). Several efficient algorithmic approaches have been proposed to solve that problem approximately; we use the SPASM toolbox [@sjostrand2012spasm] in our experiments.
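For illustration only, the deflation step can be sketched with a simple truncated power iteration; this is a generic stand-in, not the SPASM routine actually used in our experiments, and the iteration count and initialization are arbitrary choices:

```python
import numpy as np

def sparse_leading_eigvec(Y, k, n_iter=200, seed=0):
    """Approximate leading eigenvector of a PSD matrix Y with at most k
    non-zero entries, via truncated power iteration."""
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    y = rng.standard_normal(n)
    y /= np.linalg.norm(y)
    for _ in range(n_iter):
        z = Y @ y                              # power step
        support = np.argsort(np.abs(z))[-k:]   # keep the k largest entries
        y = np.zeros(n)
        y[support] = z[support]                # hard-threshold the rest
        y /= np.linalg.norm(y)
    return y
```

The vector returned by this heuristic can then be used directly as candidate basket weights.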
Predictability {#predictability}
--------------
We can form a convex relaxation of the predictability optimization problem over the variable $y\in\RR^n$, $$\BA{ll}
\mbox{minimize} & y^T M y\\
\mbox{subject to} & y^T A_0y\geq \nu\\
& \|y\|_2=1,\\
& \|y\|_0=k,
\EA$$ by using the lifting argument of @Lova91: writing $Y=yy^T$, we solve the problem over a semidefinite variable $Y$, and we introduce a sparsity-inducing regularizer on $Y$, namely its $L_1$ norm, $$\norm{Y}_1\defeq \sum_{ij}\abs{Y_{ij}},$$ so that Problem \[eq:P1\] becomes (here $\rho>0$), $$\BA{ll}
\mbox{minimize} & \Tr(MY) + \rho \norm{Y}_1\\
\mbox{subject to} & \Tr(A_0Y)\geq\nu\\
& \Tr(Y)=1,~\Rank(Y)=1,~Y\succeq 0.
\EA$$ We relax this last problem further by dropping the rank constraint, to get $$\label{eq:SDP1}
\BA{ll}
\mbox{minimize} & \Tr(MY) + \rho \norm{Y}_1\\
\mbox{subject to} & \Tr(A_0Y)\geq\nu\\
& \Tr(Y)=1,~Y\succeq 0,
\EA$$ which is a convex semidefinite program in $Y\in\symm_n$.
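For prototyping purposes, this relaxation can be written almost verbatim with an off-the-shelf conic modeling tool. The sketch below assumes CVXPY with an SDP-capable solver (e.g. SCS) is installed; the function name and the choice of $\rho$ and $\nu$ are ours:

```python
import cvxpy as cp
import numpy as np

def predictability_sdp(M, A0, nu, rho):
    """Relaxation [eq:SDP1]: minimize Tr(MY) + rho*||Y||_1 subject to
    Tr(A0 Y) >= nu, Tr(Y) = 1 and Y positive semidefinite."""
    n = A0.shape[0]
    Y = cp.Variable((n, n), PSD=True)
    objective = cp.Minimize(cp.trace(M @ Y) + rho * cp.sum(cp.abs(Y)))
    constraints = [cp.trace(A0 @ Y) >= nu, cp.trace(Y) == 1]
    cp.Problem(objective, constraints).solve()
    return Y.value
```

The matrix returned by `predictability_sdp` can then be deflated with the sparse eigenvector heuristic sketched earlier to obtain basket weights.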
Portmanteau
-----------
Using the same lifting argument and writing $Y=yy^T$, we can relax problem \[eq:P2\] by solving $$\label{eq:SDP2}
\BA{ll}
\mbox{minimize} & \sum_{i=1}^p \Tr(A_iY)^2 + \rho\norm{Y}_1\\
\mbox{subject to} & \Tr(A_0Y)\geq\nu\\
& \Tr(Y)=1,~Y\succeq 0,
\EA$$ a semidefinite program in $Y\in\symm_n$.
Crossing Stats
--------------
As above, we can write a semidefinite relaxation for problem \[eq:P3\]: $$\label{eq:SDP3}
\BA{ll}
\mbox{minimize} & \Tr(A_1Y)+ \mu\sum_{i=2}^p \Tr(A_iY)^2 + \rho\norm{Y}_1\\
\mbox{subject to} & \Tr(A_0Y)\geq\nu\\
& \Tr(Y)=1,~Y\succeq 0.
\EA$$
### Tightness of the SDP Relaxation in the Absence of Sparsity Constraints
Note that for the crossing statistic criterion (with $p=1$, so that no quadratic term in $Y$ appears), the original problem \[eq:P3\] and its relaxation \[eq:SDP3\] are equivalent, provided that no sparsity constraint is considered in the original problem and that $\rho$ and $\mu$ are set to $0$ in the relaxation. The relaxation then boils down to an SDP with only a linear objective, a linear constraint and a constraint on the trace of $Y$. In that case, @Bric61 showed that the range of two quadratic forms over the unit sphere is a convex set when the ambient dimension $n\geq 3$, which means in particular that for any two square matrices $A,B$ of dimension $n$, $$\left\{\left(y^TAy,\,y^TBy\right): y\in\RR^n,\ \|y\|_2=1\right\}=\left\{\left(\Tr(AY),\,\Tr(BY)\right): Y\in\symm_n,\ \Tr(Y)=1,\ Y\succeq 0\right\}.$$ We refer the reader to [@Barv02 §II.13] for a more complete discussion of this result. As remarked in [@cuturi2013mean], the same equivalence holds for \[eq:P1\] and \[eq:SDP1\]. This means that, in the case where $\rho,\mu=0$ and the 0-norm of $y$ is *not* constrained, for any solution $Y^\star$ of the relaxation there exists a vector $y^\star$ which satisfies $\norm{y^\star}_2^2=\Tr(Y^\star)=1$, $y^{\star T} A_0 y^\star=\Tr(A_0Y^\star)$ and $y^{\star T}My^\star=\Tr(MY^\star)$, which means that $y^\star$ is an optimal solution of the original problem \[eq:P1\]. @Boyd:1072 [App. B] show how to explicitly extract such a solution $y^\star$ from a matrix $Y^\star$ solving \[eq:SDP1\]. This result is however mostly anecdotal in the context of this paper, in which we look for sparse and volatile baskets: using these two regularizers breaks the tightness result between the original problems in $\RR^n$ and their SDP counterparts.
Numerical Experiments {#s:numres}
=====================
![**Option implied volatility** for Apple between January 4 2004 and December 30 2010.[]{data-label="fig:vol"}](aapl.pdf){width=".7\textwidth"}
In this section, we evaluate the ability of our techniques to extract mean-reverting baskets with sufficient variance and small 0-norm from a universe of tradable assets. We measure performance by applying to these baskets a trading strategy designed specifically for mean-reverting processes. We show that, under realistic trading costs assumptions, selecting sparse and volatile mean-reverting baskets translates into lower incurred costs and thus improves the performance of trading strategies.
Historical Data
---------------
We consider daily time series of option implied volatilities for 210 stocks from January 4, 2004 to December 30, 2010. A key advantage of using option implied volatility data is that these numbers vary in a somewhat limited range. Volatility also tends to exhibit regime switching, and hence can be considered piecewise stationary, which helps in extracting structural relationships. We plot in Figure \[fig:vol\] a sample time series from this dataset, corresponding to the implied volatility of Apple’s stock. In what follows, we mean by asset the implied volatility of any of these stocks, whose value can be efficiently replicated using option portfolios.
Mean-reverting Basket Estimators
--------------------------------
We compare the three basket selection techniques detailed here, **predictability**, **portmanteau** and **crossing statistic**, implemented with varying targets for both sparsity and volatility, with two cointegration estimators that build upon principal component analysis [@maddala1998urc §5.5.4]. By the label ‘PCA’ we mean in what follows the eigenvector with smallest eigenvalue of the covariance matrix $A_0$ of the process [@stock1988tct]. By ‘sPCA’ we mean the sparse eigenvector of $A_0$ with 0-norm $k$ that has the smallest eigenvalue, which can be simply estimated by computing the leading sparse eigenvector of $\lambda I-A_0$ where $\lambda$ is bigger than the leading eigenvalue of $A_0$. This sparse principal component of the covariance matrix $A_0$ should not be confused with our utilization of sparse PCA in Section \[subsec:asemidefinite\] as a way to recover a vector solution from the solution of a positive semidefinite problem. Note also that techniques based on principal components do not take explicitly variance levels into account when estimating the weights of a co-integrated relationship.
@jurek Trading Strategy
-----------------------
While option implied volatility is not directly tradable, it can be synthesized using baskets of call options, and we assimilate it to a tradable asset with (significant) transaction costs in what follows. For the baskets of volatilities isolated by the techniques listed above, we apply the [@jurek] strategy for log utilities to the basket process and record out-of-sample performance. @jurek propose to trade a stationary autoregressive process $(x_t)_{t}$ of order $1$ and mean $\mu$, governed by the equation $x_{t+1}-\mu = \rho\, (x_t-\mu) +\sigma \varepsilon_t$ with $\abs{\rho}<1$, by taking a position $N_t$ in the asset $x_t$ proportional to the investor’s wealth $W_t$, $$\label{eq:jurek}
N_t = \frac{\rho (\mu-x_t)}{\sigma^2}W_t.$$ In effect, the strategy advocates taking a long (resp. short) position in the asset whenever it is below (resp. above) its long-term mean, and adjusting the position size to account for the volatility of $x_t$ and its mean-reversion speed $\rho$. Given basket weights $y$, we apply standard AR estimation procedures on the in-sample portion of $y^T\bx$ to recover estimates for $\hat{\rho}$ and $\hat{\sigma}$ and plug them directly into Equation (\[eq:jurek\]). This approach is illustrated in Figure \[fig:syn\].
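A minimal sketch of this plug-in step is given below. The AR(1) fit uses ordinary least squares on the demeaned in-sample basket series, and the wealth $W_t$ is kept constant at its initial value; both are simplifying assumptions made for illustration, not features of the strategy itself:

```python
import numpy as np

def jurek_positions(basket_in, basket_out, wealth0=1.0):
    """Estimate (mu, rho, sigma) on the in-sample basket series and size
    out-of-sample positions following Equation [eq:jurek]."""
    mu = basket_in.mean()
    z = basket_in - mu
    rho = float(z[1:] @ z[:-1] / (z[:-1] @ z[:-1]))   # OLS slope of AR(1)
    sigma2 = float((z[1:] - rho * z[:-1]).var())      # residual variance
    return rho * (mu - basket_out) / sigma2 * wealth0  # N_t for each test date
```

Given basket weights `y` and in-sample/out-of-sample price matrices, the series `basket_in` and `basket_out` would simply be their products with `y`.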
![**Three sample trading experiments, using the PCA, sparse PCA and Crossing Statistics estimators**. \[a\] Pool of 9 volatility time-series selected using our fast PCA selection procedure. \[b\] Basket weights estimated with in-sample data using either the eigenvector of the covariance matrix with smallest eigenvalue, the smallest eigenvector with a sparsity constraint of $k=\lfloor 0.5 \times 9\rfloor=4$, or the Crossing Statistics estimator with a volatility threshold of $\nu=0.2$, that is a constraint on the basket’s variance to be larger than $0.2 \times$ the median variance of the assets in the pool. \[c\],\[d\] Using these 3 procedures, the time series of the resulting basket price are displayed for the in-sample \[c\] and out-of-sample \[d\] parts. \[e\] Using the [@jurek] trading strategy results in varying positions (expressed as units of baskets) during the out-of-sample testing phase. \[f\] Transaction costs that result from trading the assets to achieve such positions accumulate over time. \[g\] Taking both trading gains and transaction costs into account, the net wealth of the investor for each strategy can be computed (the Sharpe ratio over the test period is displayed in the legend). Note how both sparsity and volatility constraints translate into portfolios composed of fewer assets, but with a higher variance.[]{data-label="fig:syn"}](example_3basks2.pdf "fig:"){width="140.00000%"}
Transaction Costs
-----------------
We assume that fixed transaction costs are negligible, but that transaction costs per contract unit are incurred at each trading date. We vary the size of these costs across experiments to show the robustness of the approaches tested here to fluctuations in trading costs. We let the transaction cost per contract unit vary between 0.03 and 0.17 cents by increments of 0.02 cents. Since the average value of a contract over our dataset is about 40 cents, this is akin to considering trading costs ranging from about 7 to about 40 basis points (bp), that is 0.07% to 0.4%.
Experimental Setup
------------------
We consider 20 sliding windows of one year (255 trading days) taken in the history, and treat each of these windows independently. Each window is split between 85% of days used to estimate our models and 15% of days used to test-trade them, resulting in 38 test-trading days. We do not recompute the weights of the baskets during the test phase. The 210 stock volatilities (assets) we consider are grouped into 13 subgroups, depending on the economic sector of their stock. This results in 13 sector pools whose sizes vary between 3 and 43 assets. We look for mean-reverting baskets in each of these 13 sector pools.
Because all combinations of stocks in each of the 13 sector pools may not necessarily be mean-reverting, we select smaller candidate pools of $n$ assets, where $8\leq n\leq 12$, through a greedy backward-forward minimization scheme. To do so, we start with an exhaustive search of all pools of size 3 within the sector pool, and proceed by adding or removing an asset using the PCA estimator (the smallest eigenvalue of the covariance matrix of a set of assets). We use the PCA estimator in that backward-forward search because it is the fastest to compute. We score each pool using that PCA statistic, the smaller the better. We generate up to 200 candidate pools for each of the 13 sector pools. Out of all these candidate pools, we keep the best 50 in each window, and then use our cointegration estimation approaches separately on these candidates. One such pool was, for instance, composed of the stocks `{BBY,COST,DIS,GCI,MCD,VOD,VZ,WAG,T}` observed during the year 2006. Figure \[fig:syn\] provides a closeup on that universe of stocks, and shows the results of three trading experiments using either PCA, sparse PCA or the Crossing Statistics estimator to build trading strategies.
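For illustration, a simplified (forward-only) version of this greedy screening could be sketched as follows; the actual procedure also allows asset removals, and all names below are our own:

```python
import numpy as np
from itertools import combinations

def pca_score(X, pool):
    """Screening score: smallest eigenvalue of the covariance matrix of the
    assets in `pool` (columns of X); smaller means more promising."""
    C = np.cov(X[:, list(pool)], rowvar=False)
    return np.linalg.eigvalsh(C)[0]

def greedy_pool(X, sector, n_max=12):
    """Exhaustive search over all triplets of a sector, then greedy forward
    additions up to n_max assets (backward removals omitted here)."""
    pool = list(min(combinations(sector, 3), key=lambda p: pca_score(X, p)))
    while len(pool) < n_max:
        candidates = [a for a in sector if a not in pool]
        pool.append(min(candidates, key=lambda a: pca_score(X, pool + [a])))
    return pool
```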
Results
-------
### Robustness of Sharpe Ratios to Costs
In Figure \[fig:sharpe\], we plot the average Sharpe ratio over the $922$ baskets estimated in our experimental set versus transaction costs. We consider different PCA settings as well as our three estimators using, in all 3 cases, a variance bound $\nu$ equal to $0.3$ times the median of all variances of assets available in a given asset pool, and a 0-norm equal to 0.3 times the size of the universe (itself between 8 and 12). We observe that Sharpe ratios decrease the fastest for the naive PCA based method, this decrease being somewhat mitigated when adding a constraint on the 0-norm of the basket weights obtained with sparse PCA. Our methods require, in addition to sparsity, enough volatility to secure sufficient gains. These empirical observations agree with the intuition of this paper: simple cointegration techniques can produce synthetic baskets with high mean-reversion, but with large support and low variance. Trading a portfolio with low variance which is supported by multiple assets translates in practice into high trading costs which can damage the overall performance of the strategy. Both sparse PCA and our techniques manage instead to achieve a trade-off between desirable mean-reversion properties and, at the same time, control for sufficient variance and small basket size to allow for lower overall transaction costs.
### Tradeoffs Between Mean Reversion, Sparsity, and Volatility
In the plots of Figures \[fig:sharpeCrossing\] and \[fig:sharpeCrossing2\], this analysis is further detailed by considering various settings for $\nu$ (volatility threshold) and $k$. To improve the legibility of these results we summarize, following the observation in Figure \[fig:sharpe\] that the relationship between Sharpe ratios and transaction costs seems almost linear, each of these curves by two numbers: an intercept level (Sharpe ratio when costs are low) and a slope (degradation of Sharpe as costs increase). Using these two numbers, we locate all considered strategies in the intercept/slope plane. We first show the spectral techniques, PCA and sPCA, with different levels of sparsity, meaning that $k$ is set to $\lfloor u \times d\rfloor$ where $u\in\{0.3,0.5,0.7\}$ and $d$ is the size of the original basket. Each of the three estimators we propose is studied in a separate plot. For each we present various results characterized by two numbers: a volatility threshold $\nu\in\{0,0.1,0.2,0.3,0.4,0.5\}$ and a sparsity level $u\in\{0.3,0.5,0.7\}$. To avoid cumbersome labels, we attach an arrow to each point: the arrow’s length in the vertical direction is equal to $u$ and characterizes the size of the basket, while its horizontal length is equal to $\nu$ and characterizes the volatility level. As can be seen in these 3 plots, an interesting interplay between these two factors allows for a continuum of strategies that trade mean-reversion (and thus Sharpe levels) for robustness to cost level.
![Average Sharpe ratio of the @jurek trading strategy captured over about 922 trading episodes, using different basket estimation approaches. These 922 trading episodes were obtained by considering 7 disjoint time-windows in our market sample, each of a length of about one year. Each time-window was divided into 85% in-sample data to estimate baskets, and 15% out-of-sample data to test strategies. On each time-window, the set of 210 tradable assets during that period was clustered using sectorial information, and each cluster was screened (in the in-sample part of the time-window) to look for the most promising baskets of size between 8 and 12 in terms of mean reversion, by greedily choosing subsets of stocks that exhibited the smallest minimal eigenvalues in their covariance matrices. For each trading episode, the same universe of stocks was fed to different mean-reversion algorithms. Because volatility time-series are bounded and quite stationary, we consider the PCA approach, which uses the eigenvector with the smallest eigenvalue of the covariance matrix of the time-series to define a cointegrated relationship. Besides standard PCA, we have also considered sparse PCA eigenvectors with minimal eigenvalue, with the size $k$ of the support of the eigenvector (the size of the resulting basket) constrained to be 30%, 50% or 70% of the total number of considered assets. We also consider the portmanteau, predictability and crossing statistic estimation techniques with a variance threshold of $\nu=0.2$ and a support whose size $k$ (the number of assets effectively traded) is targeted to be about $30\%$ of the size of the considered universe (itself between 8 and 12). As can be seen in the figure, the Sharpe ratios of all trading approaches decrease with an increase in transaction costs. One expects sparse baskets to perform better under the assumption that costs are high, and this is indeed observed here. Because the relationship between Sharpe ratios and transaction costs can be efficiently summarized as being a linear one, we propose in the plots displayed in Figures \[fig:sharpeCrossing\] and \[fig:sharpeCrossing2\] a way to summarize the lines above with two numbers each: their intercept (Sharpe level in the quasi-absence of costs) and slope (degradation of Sharpe as costs increase). This visualization allows us to observe how sparsity (basket size) and volatility thresholds influence the robustness to costs of the strategies we propose.\[fig:sharpe\]](ex2-eps-converted-to.pdf){width=".8\textwidth"}
![Relationship between the Sharpe ratio in a low cost setting (intercept), on the $x$-axis, and the robustness of the Sharpe ratio to costs (slope of the Sharpe/costs curve), for different estimators implemented with varying volatility levels $\nu$ and sparsity levels $k$ parameterized as a multiple of the universe size. Each colored square in the figures above corresponds to the performance of a given estimator (Portmanteau in subfigure $(a)$, Predictability in subfigure $(b)$) using different parameters $\nu\in\{0,0.1,0.2,0.3,0.4,0.5\}$ and $u\in\{0.3,0.5,0.7\}$. The parameters used for each experiment are displayed using an arrow whose vertical length is proportional to $\nu$ and horizontal length is proportional to $u$.\[fig:sharpeCrossing\]](Portmanteau___-eps-converted-to.pdf "fig:"){width="\textwidth"}\
(a) ![Relationship between the Sharpe ratio in a low cost setting (intercept), on the $x$-axis, and the robustness of the Sharpe ratio to costs (slope of the Sharpe/costs curve), for different estimators implemented with varying volatility levels $\nu$ and sparsity levels $k$ parameterized as a multiple of the universe size. Each colored square in the figures above corresponds to the performance of a given estimator (Portmanteau in subfigure $(a)$, Predictability in subfigure $(b)$) using different parameters $\nu\in\{0,0.1,0.2,0.3,0.4,0.5\}$ and $u\in\{0.3,0.5,0.7\}$. The parameters used for each experiment are displayed using an arrow whose vertical length is proportional to $\nu$ and horizontal length is proportional to $u$.\[fig:sharpeCrossing\]](Predictability___-eps-converted-to.pdf "fig:"){width="\textwidth"}\
(b)
![Same setting as Figure \[fig:sharpeCrossing\], using the crossing statistics (c).\[fig:sharpeCrossing2\]](Crossing___-eps-converted-to.pdf "fig:"){width="\textwidth"}\
(c)
Conclusion
==========
We have described three different criteria to quantify the amount of mean reversion in a time series. For each of these criteria, we have detailed a tractable algorithm to isolate a vector of weights that has optimal mean reversion, while constraining the variance (or signal strength) of the resulting univariate series to lie above a given threshold and its 0-norm to be equal to a prescribed value. We have shown that these bounds on variance and support size, together with our new criteria for mean reversion, can significantly improve the performance of mean-reversion statistical arbitrage strategies, and that they provide useful controls to adjust mean-reverting strategies to varying trading conditions, notably liquidity risk and cost environment.
---
abstract: 'We investigate the occurrence of anomalous diffusive transport associated with acoustic wave fields propagating through highly-scattering periodic media. Previous studies had correlated the occurrence of anomalous diffusion to either the random properties of the scattering medium or to the presence of localized disorder. In this study, we show that anomalous diffusive transport can occur also in perfectly periodic media and in the absence of disorder. The analysis of the fundamental physical mechanism leading to this unexpected behavior is performed via a combination of deterministic, stochastic, and fractional-order models in order to capture the different elements contributing to this phenomenon. Results indicate that this anomalous transport can indeed occur in perfectly periodic media when the dispersion behavior is characterized by anisotropic (partial) bandgaps. In selected frequency ranges, the propagation of acoustic waves not only becomes diffusive but its intensity distribution acquires a distinctive L[é]{}vy $\alpha$-stable profile having pronounced heavy-tails. In these ranges, the acoustic transport in the medium occurs according to a hybrid transport mechanism which is simultaneously propagating and anomalously diffusive. We show that such behavior is well captured by a fractional diffusive transport model whose order can be obtained by the analysis of the heavy tails.'
author:
- Salvatore Buonocore
- Mihir Sen
- Fabio Semperlotti
bibliography:
- 'ref.bib'
title: 'Occurrence of anomalous diffusion and non-local response in highly-scattering acoustic periodic media'
---
Introduction {#Introduction}
============
In recent years, several theoretical and experimental studies have shown that field transport processes in non-homogeneous and complex media can occur according to either hybrid or anomalous mechanisms. Some examples of these physical mechanisms include anomalous diffusive transport (such as non-Fourier [@povstenko2013fractional; @borino2011non; @ezzat2010thermoelectric], or non-Fickian diffusion [@benson2000application; @benson2001fractional; @cushman2000fractional; @fomin2005effect] with heavy-tailed distribution) or hybrid wave transport (characterized by simultaneous propagation and diffusion [@mainardi1996fractional; @mainardi1996fundamental; @mainardi1994special; @mainardi2010fractional; @chen2003modified; @chen2004fractional]). Simultaneous hybrid and anomalous transport has also been observed, particularly in wave propagation problems involving random scattering media. Electromagnetic waves traveling through a scattering material[@yamilov2014position] such as fog [@belin2008display] or murky water [@zevallos2005time] are relevant examples of practical problems where such transport process can arise.
A distinctive feature of anomalous transport is the occurrence of heavy-tailed distributions of the representative field quantities [@benson2001fractional]. In this case, the diffusion process does not follow a classical Gaussian distribution but instead is characterized by a high-probability of occurrence of the events associated with large variance (i.e. those described by the “heavy” tails).
This behavior is typically not accounted for in traditional field transport theories based on integer order differential or integral models. Purely numerical methods, such as Monte Carlo or finite element simulations[@huang1991optical; @ishimaru2012imaging; @mosk2012controlling; @sebbah2012waves; @gibson2005recent], can capture this response but are very computationally intensive and do not provide any additional insight in the physical mechanisms generating the macroscopic dynamic behavior. The ability to accurately predict the anomalous response and to retrieve information hidden in diffused fields remains a challenging and extremely important topic in many applications. Acoustical and optical imaging, non-intrusive monitoring of engineering and biomedical materials are just a few examples of practical problems in which the ability to carefully predict the field distribution is of paramount importance to achieve accurate and physically meaningful solutions. Nevertheless, in most classical approaches, information contained in the heavy tails is typically discarded because it cannot be properly captured and interpreted by integer-order transport models.
Hybrid and anomalous diffusive transport mechanisms are pervasive also in acoustics. This type of transport can arise when acoustic fields propagate in a highly scattering medium such as an urban environment [@albert2010effect; @remillieux2012experimental], a forest [@aylor1972noise; @tarrero2008sound], a stratified fluid (e.g. the ocean) [@baggeroer1993overview; @dowling2015acoustic; @casasanta2012fractional], or a porous medium [@benson2001fractional; @schumer2001eulerian; @fellah2003measuring; @fellah2000transient].
From a general perspective, classical diffusion of wave fields occurs within a range where the wavelength is comparable to the size of the scatterers, the so-called Mie scattering regime. Any deviation from classical diffusion, being either sub-diffusion [@metzler2000random; @goychuk2012fractional] (typically linked to Anderson localization) or super-diffusion (typically linked to L[é]{}vy-flights) [@barthelemy2008levy; @bertolotti2010multiple], still arises within the same regime. The two dominant factors are either the relation between the transport mean free path and the wavelength, or the statistical distributions of the scattering paths in presence of disorder. When a wave field interacts with scattering elements, it undergoes a variety of physical phenomena including reflection, refraction, diffraction, and absorption that significantly alter its initial characteristics. Depending on the quantity, distribution, and properties of the scatterers the momentum vector of an initially coherent wave can become quickly randomized. For most processes, the Central Limit Theorem (CLT) guarantees that the distribution of macroscopic observable quantities (e.g. the field intensity) converges to a Gaussian profile in full agreement with the predictions from classical Fourier diffusion. At the same time, the transition to a macroscopic diffusion behavior leads to an inevitable coexistence of diffusive and wave-like processes at the meso- and macro-scales.
There are numerous physical processes in nature whose *basin of attraction* is given by the normal (Gaussian) distribution. On the other hand, when the distribution of characteristic step-length has infinite variance, the diffusion process no longer follows the standard diffusion theory, but rather acquires an anomalous behavior with a basin of attraction given by the so-called $\alpha$-stable L[é]{}vy distribution. In the latter case, the unbounded value of the variance of the step-length distribution is due to the non-negligible probability of existence of steps whose lengths greatly differ from the mean value; these are usually denoted as L[é]{}vy flights. The distinctive feature of the $\alpha$-stable L[é]{}vy distributions is the occurrence of heavy tails having a power-law decay of the form $p(l) \sim l^{-(\alpha+1)}$. This characteristic suggests that transport phenomena evolving according to L[é]{}vy statistics are dominated by infrequent but very long steps, and therefore their dynamics are profoundly different from those predicted by the random (Brownian) motion. Many of the complex hybrid transport mechanisms mentioned above fall in this category, and therefore cannot be successfully described in the framework of classical diffusion theory.
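The qualitative difference between the two basins of attraction can be visualized with a short simulation; the sketch below uses SciPy's `levy_stable` sampler with arbitrary parameters and simply contrasts a Brownian walk with a Lévy flight dominated by rare, very long jumps:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
n_steps = 10_000

# Brownian walk: Gaussian steps, finite variance -> Gaussian attractor.
brownian = np.cumsum(rng.standard_normal(n_steps))

# Levy flight: alpha-stable steps with alpha < 2, infinite variance and
# heavy tails p(l) ~ l**(-(alpha + 1)) -> alpha-stable attractor.
alpha = 1.5
levy_steps = levy_stable.rvs(alpha, 0.0, size=n_steps, random_state=2)
levy_flight = np.cumsum(levy_steps)

# The largest single step of the Levy flight dwarfs that of the Brownian walk.
print(np.abs(np.diff(brownian)).max(), np.abs(levy_steps).max())
```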
In addition, these complex transport mechanisms are typically not amenable to closed-form analytical solutions therefore requiring either fully numerical or statistical approaches to predict the field quantities under various input conditions. Typical modeling approaches rely on random walk statistical models [@metzler2000random; @bouchaud1990anomalous] or on semi-empirical corrections to the fundamental diffusive transport equation via renormalization theory [@asatryan2003diffusion; @cobus2016anderson]. These approaches imply a considerable computational cost and do not provide physical insight in the operating mechanisms of the anomalous response. A few studies have also indicated that, for this type of processes, the macroscopic governing equation describing the evolution of the wave field intensity could be described by a generalization of the classical diffusion equation using fractional derivatives [@bertolotti2010multiple; @metzler2000random; @bertolotti2007light].
To date, the occurrence of anomalous diffusion of wave fields has been connected and observed only in random and disordered media [@barthelemy2008levy; @burresi2012weak; @bouchaud1990anomalous; @asatryan2003diffusion; @cobus2016anderson]. In this study, we show theoretical and numerical evidence that anomalous behavior can occur even in perfectly periodic media and in the absence of disorder. We present this analysis in the context of diffusive transport of acoustic waves, although the results could be generalized to other wave fields. In particular, we investigate the specific case of propagation of acoustic waves in a medium with identical and periodically distributed hard scatterers. We develop a theoretical framework for multiple scattering in super-diffusive periodic media. We first show, by full-field numerical simulations, that under certain conditions acoustic waves propagating through a periodic medium are subject to anomalous diffusion. Then, we propose an approach based on a combination of deterministic and stochastic methodologies to explore the physical origin of this unexpected behavior. Ultimately, we show that fractional-order models can predict, more accurately and effectively, the resulting anomalous field quantities. More importantly, we will show that the analysis of the heavy tails provides a reliable means to extract the equivalent fractional order of the medium.
Anomalous diffusion in acoustic periodic media: overview of the method
======================================================================
We consider the generic problem of an acoustic bulk medium made of periodically-distributed cylindrical hard scatterers in air (Fig. \[Fig\_1\]). We assume a monopole-like acoustic source, located in the center of the lattice, which emits at a selected harmonic frequency chosen within the scattering regime.
The main objective is to characterize the propagation of acoustic waves in such a medium based on different regimes of dispersion. As previously anticipated, in selected regimes the propagation of acoustic waves will exhibit anomalous diffusive transport properties. The remainder of this study is dedicated to investigating the causes leading to the occurrence of this phenomenon. In order to identify the fundamental mechanisms at its origin, we have designed a multi-faceted approach capable of characterizing the different processes contributing to the macroscopic anomalous response.
The approach consists of the following components. First, we investigate via full-field numerical simulations the propagation of acoustic waves in either a 1D or a 2D periodic scattering medium. The numerical results will allow making important observations on the different propagation mechanisms occurring in the two systems and on the corresponding diffusive processes. Then, the radiative transfer theory will be applied to interpret the evolution of the wave intensity distribution and analyze the nature of the diffusive phenomena in the context of a renormalization approach.
In order to identify the physical mechanism at the origin of the anomalous diffusion, a multiple scattering analysis based on the multipole expansion method will be applied in order to characterize the interaction between different scatterers. In particular, this approach was intended to identify and quantify possible long-range interactions between pairs of scatterers. Based on the results of the multiple scattering analysis, a Monte Carlo model is used to confirm that the anomalous transport is in fact originated by the long-range interactions between different directions of propagation in the lattice.
Finally, we show that the behavior of the lattice can be effectively described, in a homogenized sense, by a fractional continuum diffusion model whose fractional order can be identified by fitting an $\alpha$-stable distribution to the heavy tails of the wave intensity. This approach can be seen as an equivalent *fractional homogenization* of the medium. Of particular interest is the fact that the fractional (homogenized) model admits a closed-form analytical solution that agrees very well with the numerical predictions.
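As a sketch of this identification step, and assuming the intensity profile indeed exhibits the $\alpha$-stable power-law decay $\sim |x|^{-(\alpha+1)}$ discussed in the Introduction, the order can be estimated from a simple log-log fit of the tail; the cutoff `x_min` and the fitting range are user choices:

```python
import numpy as np

def alpha_from_tail(x, intensity, x_min):
    """Estimate the stable index alpha from the heavy tail of an intensity
    profile, assuming I(x) ~ x**(-(alpha + 1)) for x > x_min."""
    mask = x > x_min
    slope, _ = np.polyfit(np.log(x[mask]), np.log(intensity[mask]), 1)
    return -slope - 1.0
```

More robust alternatives (e.g. maximum-likelihood fits of the full $\alpha$-stable density) could be substituted for the log-log regression without changing the overall procedure.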
Scattering and diffusive transport {#Overview}
==================================
From a general perspective, it is possible to identify four different wave propagation regimes in scattering media which are classified based on the relative ratio of quantities such as the transport mean free path $l_t$, the wavelength of the propagating field $\lambda$, and the characteristic size of the scattering domain $L$. The four regimes are:
1. The *homogenized regime*: it occurs when the wavelength of the incident wave field is much larger than the typical characteristic size $d$ of the scatterer, that is $\lambda \gg d$.
2. The *diffusive regime*: it occurs when the wavelength $\lambda$ satisfies the relation $\lambda/2\pi \ll l_t \ll L $. In this regime the wave intensity can be approximated by the diffusion equation.
3. The *anomalously diffusive regime*: it occurs when the interference of waves causes the reduction of the transport mean free path $l_{t}$ and consequently a renormalization of the macroscopic diffusion constant $D$. In this regime, the transport mean free path varies according to the size of the cluster and to the degree of disorder.
4. The *localization regime*: it occurs in the range $\lambda/2\pi \geq l_{t}$ and corresponds to a diffusion constant $D$ tending to 0.
As mentioned in the classification above, there are regimes in which the intensity of the wave field can be properly described by the *diffusion approximation*, that is it varies in space as prescribed by the field evolution in a diffusion equation. In particular, when the incident wave has a wavelength smaller than the length-scale characterizing the material and/or of the geometric variations of the physical medium, the wave field undergoes multiple scattering with a consequent randomization of its phase and direction of propagation. In order to characterize this phenomenon a statistical description based on random walk models is typically employed. These models rely on phenomenological quantities such as the scattering $l_s$ and the transport $l_t$ mean free paths. From a physical perspective, $l_s$ represents the average distance between two successive scattering events, while $l_t$ is the mean distance after which the wave field loses memory of its initial direction and becomes randomized [@van1999multiple]. When the filling factor $f$ (which describes the density of scatterers) is low, $l_{s}$ and $l_{t}$ are defined as [@ishimaru1978wave]:
$$\begin{aligned}
\label{ls_lt}
l_{s} &= \dfrac{1}{\rho\sigma_{t}},\\
l_{t} &= \dfrac{l_{s}}{1-\langle\cos{\theta}\rangle}, \nonumber\end{aligned}$$
where $\sigma_{t}$ is the total scattering cross section, $\langle\cos{\theta}\rangle$ is the *anisotropy factor* and $\rho$ is the scatterers concentration. Note that the relations in Eq. (\[ls\_lt\]) are valid only for low filling factors, approximately in the range $f \leq 0.1$. For increased values of the filling factor, the scattering cross section $\sigma_t$ needs to be rescaled. The rescaling factor in the range $0.1 \leq f \leq 0.6$ is given by $\sigma_t \rightarrow \sigma_t (1-f)$, while higher filling factors require a more elaborated rescaling procedure [@ishimaru1978wave].
The scattering cross section plays a crucial role in the characterization of multiple scattering phenomena and in two dimensions takes the form:
$$\begin{aligned}
\label{sigma_t}
\begin{split}
\sigma_t = \int_{2\pi}\sigma_d(\theta) d\theta .
\end{split}\end{aligned}$$
The integrand $\sigma_d$ is the differential scattering cross section defined as:
$$\begin{aligned}
\label{diff_scatt_cross_sect}
\begin{split}
\sigma_d(\theta)= \lim_{R \rightarrow \infty} R\left [ (S_s(\theta))/S_i\right ].
\end{split} \end{aligned}$$
In Eq. (\[diff\_scatt\_cross\_sect\]), the term $S_s$ is the scattered power flux density at a distance $R$ from the scatterer in the direction $\mathbf{\hat{o}}$ caused by an incident power flux density $S_i$. The azimuthal angle $\theta$ is the angle between the incident ($\mathbf{\hat{i}}$) and the scattered wave fields ($\mathbf{\hat{o}}$).
The *scattering phase function* is obtained by normalizing the differential scattering cross section with respect to $\sigma_t$:
$$\begin{aligned}
\label{}
p\mathbf{( \hat{o},\hat{i})}=p(\cos\theta) = \frac{\sigma_d(\theta)}{\sigma_{t}}
\end{aligned}$$
and represents the probability that a wave field impinging on the scatterer from a given direction will be scattered by an angle $\theta$. The mean value of the previous probability distribution defines the *anisotropy factor*:
$$\begin{aligned}
\label{}
\langle \cos \theta \rangle = \int_{2\pi} p(\cos \theta) \cos(\theta) d \theta.
\end{aligned}$$
This factor varies between 0 and 1, and it accounts for the existence of preferential scattering directions. For $\langle\cos{\theta}\rangle=0$ all the scattering directions have the same probability and the scattering is isotropic. As $\langle\cos{\theta}\rangle$ approaches 1, the forward scattering becomes the most probable event. These quantities will be used in the following analyses in order to identify the different scattering regimes.
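As a numerical illustration of the quantities just introduced, the sketch below integrates a hypothetical differential scattering cross section over the full $2\pi$ range to obtain $\sigma_t$, the phase function and the anisotropy factor; the forward-peaked cross section used here is an arbitrary example and not that of the hard cylinders studied below.

```python
import numpy as np

def anisotropy_factor(sigma_d, n=4096):
    """Total cross section and <cos(theta)> in 2D from a differential cross section."""
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    sig = sigma_d(theta)
    sigma_t = np.trapz(sig, theta)              # Eq. (sigma_t)
    p = sig / sigma_t                           # scattering phase function
    return sigma_t, np.trapz(p * np.cos(theta), theta)

# hypothetical, mildly forward-peaked differential cross section
sigma_d = lambda th: 1.0 + 0.8 * np.cos(th)
sigma_t, g = anisotropy_factor(sigma_d)
print(sigma_t, g)                               # g close to 0.4: forward scattering is favored
```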
One-dimensional medium
======================
Consider a one-dimensional bulk scattering medium composed of $N$ hard cylindrical scatterers equally distributed in an air background (Fig. \[Fig\_3\]).
This system can be interpreted as a classical 1D acoustic metamaterial. The radius of the individual scatterer is $a = 0.2 d$, where $d$ indicates the distance between two neighboring cylinders. The filling fraction for this particular cluster is $f = \pi a^2/d^2 \approx 0.1257$.
The waveguide is excited by a monochromatic acoustic monopole $S$ that replaces the center cylinder. The response of the system is obtained numerically by means of commercial finite element software (Comsol Multiphysics), using symmetric boundary conditions on the top and bottom edges and perfectly matched layers (PML) on the left and right edges. The frequency of excitation is selected in the first bandgap (see Fig. \[Fig\_10\] for the general dispersion properties of this waveguide) and has a non-dimensional value $\Omega = 0.0831$. In this excitation regime diffusive behavior is expected. Recall that, in the absence of disorder or trapping mechanisms and in the range of excitation frequencies where the diffusion approximation holds, the variance of the step-length distribution characterizing the multiple scattering process of the acoustic field is expected to be finite; if the steps are also independent, then, by virtue of the Central Limit Theorem, the limit distribution should be the Normal distribution, as predicted by the standard diffusion model.
The resulting normalized magnitude of the acoustic pressure field generated in the waveguide is shown in Fig. \[Fig\_4\](a) in terms of a contour plot and in Fig. \[Fig\_4\](b),(c) in terms of the intensity profile along the mid-line of the waveguide as defined later in §\[Modell\]. In Fig. \[Fig\_4\](b),(c) a characteristic exponential decay of the type $e^{-x/l_s}$, consistent with the Beer-Lambert law, is clearly identifiable. This trend represents the decay of the coherent part of the intensity and corresponds to the squared absolute value of the Green’s function solution. In more general terms, this result is fully consistent with classical diffusive behavior. This is an expected result and it is reported here only for comparison with the results presented below.
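The exponential trend mentioned above can be extracted directly from the simulated intensity profile. The sketch below fits the Beer-Lambert law to synthetic data standing in for the FE output of Fig. \[Fig\_4\]; the arrays and parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# placeholders for the intensity profile along the waveguide mid-line
x = np.linspace(0.0, 40.0, 200)
I = np.exp(-x / 3.0) * (1.0 + 0.05 * np.random.default_rng(0).standard_normal(x.size))

beer_lambert = lambda x, I0, ls: I0 * np.exp(-x / ls)
(I0_fit, ls_fit), _ = curve_fit(beer_lambert, x, I, p0=(1.0, 1.0))
print(f"fitted scattering mean free path l_s = {ls_fit:.2f}")
```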
Two-dimensional medium
======================
Radial lattice
--------------
The immediate extension of the previous scenario to a two-dimensional system corresponds to a radial distribution of equally distributed hard scatterers. As in the 1D case, the system is excited by a monochromatic monopole source located in the center of the 2D lattice at point $S$. The source is monochromatic and it is actuated at the non-dimensional frequency $\Omega = 0.0831$, that belongs to the first bandgap. The normalized magnitude of the acoustic pressure field for this system is numerically calculated and shown in Fig. \[Fig\_6\](a). Fig. \[Fig\_6\](b) provides a closeup view of the field around the source (the area within the black dashed line).
The acoustic intensity profile along the $x$-axis direction shows an exponential decay as illustrated by Fig. \[Fig\_7\]. All radial directions (not shown) exhibit an identical response, as expected due to the azimuthal symmetry of the system.
As in the 1D case, this linear decay of the intensity distribution was expected and confirms that, in systems with a high degree of symmetry, a classical diffusion behavior should be recovered. From a practical perspective, this radial lattice could be seen as a radial arrangement of 1D waveguides.
Rectangular lattice {#rect_lattice}
-------------------
The dynamic behavior of the lattice changes quite drastically when the axial symmetry is removed. Consider the square lattice of scatterers schematically illustrated in Fig. \[Fig\_1\]. Assume that each scatterer has an individual radius $a = 0.2 d$, where $d$ is the distance between two neighboring cylinders. The filling fraction for this periodic cluster is $f = \pi a^2/d^2 \approx 0.1257$.
### Dispersion analysis {#DispersionRelation}
In order to understand the dynamic behavior of this lattice and interpret the results that will follow, we start analyzing the fundamental dispersion structure of the square lattice. The dispersion was calculated using finite element analysis and the band structure is plotted along the irreducible part of the first Brillouin zone, as shown in Fig. \[Fig\_10\].
The results highlight the existence of anisotropy in terms of directions of propagation. These directions are connected to the existence of a partial bandgap in the $\Gamma-X$ direction between the non-dimensional frequencies $\Omega = 0.0824$ and $\Omega = 0.1103$. When the system is excited at a frequency within the bandgap, the propagation acquires an anisotropic distribution (see § \[Forced\] and Fig. \[Fig\_8\]), because propagation can only occur in the $\Gamma-M$ direction. This is not an unexpected result and, in fact, it is fully consistent with the propagation behavior expected in a square periodic lattice. However, we will show that these dispersion characteristics play a key role in the occurrence of anomalous behavior.
### Forced response {#Forced}
The forced response of the lattice was also numerically evaluated. In this case, the lattice is excited by a monochromatic acoustic monopole $S$ that replaces the center cylinder. As for the previous two lattices, the total acoustic pressure field is calculated numerically using the finite element method and reported in Fig. \[Fig\_8\]. More specifically, Fig. \[Fig\_8\](a) presents the response to an excitation outside the first bandgap, while Fig. \[Fig\_8\](b) reports the case just inside the first bandgap. Note that due to symmetry considerations, only a quarter of the domain was solved.
As the acoustic wave fronts propagate through the medium in the radial directions and interact with the scattering particles, the rays are scattered in multiple directions. In both cases it is evident that the propagation is strongly anisotropic and occurs mostly along the diagonal directions of the lattice.
The response of the medium is shown in Fig. \[Fig\_8\] in terms of the normalized magnitude of the acoustic pressure distribution. Contrary to what was observed for the radial lattice, in this case the intensity distribution does not decay linearly. This behavior becomes evident when performing a numerical fit of the simulation data, as shown in Fig. \[Fig\_9\]. These results suggest the occurrence of an unexpected mechanism of diffusion despite the lattice periodicity.
This is a remarkable departure from available results in the literature which, to date, have highlighted the occurrence of anomalous diffusion only in connection with random distributions of geometric or material properties.
Radiative transport approach {#Modell}
============================
The results presented above illustrated that in case of anisotropic propagation a departure from the classical diffusive behavior is observed. In this section, we use a traditional radiative transport approach with renormalization to show that this observed behavior can be mapped to anomalous diffusion.
We investigate the presence of anomalous diffusion for wavelength ranges in the passband and in the bandgap. As already pointed out, within the regime $\lambda/2 \pi<l_{t}<L$ the diffusion approximation applies and the spatial evolution of the wave amplitude can be predicted by a diffusion equation for the wave intensity.
Starting from a cluster of particles, as schematically illustrated in Fig. \[Fig\_11\], and applying the diffusion approximation the 2D diffusion equation for harmonic excitation and lossless scatterers is given by:
$$\begin{aligned}
\label{Diffusion equation}
\begin{split}
\nabla^2 I =-\frac{P_0}{\pi l_t}\delta(\vec{r}-\vec{r}_s)
\end{split}\end{aligned}$$
where $I$ is the intensity of the acoustic wavefield, $P_0$ is the total emitted acoustic power, and $\vec{r}_s$ and $\vec{r}$ are the position vectors of the source $S$ and of a generic point $P$, respectively. The average acoustic intensity of a monochromatic monopole source can be obtained as $\langle I \rangle = \left\|\tfrac{1}{2}\,\mathrm{Re}(p \cdot v')\right\|$, where $p$ is the pressure field and $v'$ is the complex conjugate of the velocity field. In order to be solved, the diffusion equation Eq. (\[Diffusion equation\]) requires the following boundary condition at the edge of the domain:
$$\begin{aligned}
\begin{split}
I\mathbf{(r_s)}-\frac{\pi l_t}{4}\frac{\partial }{\partial n} I\mathbf{(r_s)} = 0
\end{split}\end{aligned}$$
where $\mathbf{\hat{n}}$ is the unit inward normal. This boundary condition follows from the requirement of zero inward flux at the boundaries [@ishimaru1978wave]. The numerical value of this boundary condition on the intensity was obtained from the previous finite element model.
By enforcing this boundary condition, Eq. (\[Diffusion equation\]) can be solved analytically:
$$\begin{aligned}
\begin{split}
I = -\frac{P_0}{2\pi^2 l_t}\ln\frac{|\vec{r}-\vec{r}_s|}{L}+ I_{0}
\end{split}
\label{Solution}\end{aligned}$$
where $I_{0}$ is the value of the intensity at the boundary of the cluster of scatterers and $L$ is the size of the computational domain.
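For reference, the analytical solution Eq. (\[Solution\]) can be evaluated as in the short sketch below; the parameter values are illustrative only.

```python
import numpy as np

def diffuse_intensity(r, P0, l_t, L, I0):
    """2D diffusion solution, Eq. (Solution), with r = |r - r_s| the distance from the source."""
    return -P0 / (2.0 * np.pi**2 * l_t) * np.log(r / L) + I0

r = np.linspace(0.5, 20.0, 100)                             # distances in units of d
I = diffuse_intensity(r, P0=1.0, l_t=1.42, L=40.0, I0=0.0)  # illustrative parameters
```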
In order to evaluate Eq. (\[Solution\]), we need to estimate the parameters $l_t$ and $l_s$ and characterize the specific regime of propagation. To this end, we first plot $\langle l_{s}\rangle$ and $\langle l_{t}\rangle$ versus the wavelength $\lambda$, as shown in Fig. \[Fig\_12\]. These curves were numerically determined using the model presented in §\[rect\_lattice\] and Eqs. (\[ls\_lt\]).
The transport mean free path $\langle l_{t}\rangle$ is always expected to be greater than $\langle l_{s}\rangle$ and to converge asymptotically to $\langle l_{s}\rangle$ for large wavelengths. In fact, for long wavelengths the wavefield is marginally affected by the presence of the scatterers. In the short wavelength limit, $l_s/d$ tends to 1 because the wave fronts are highly directional (this is the range of validity of the ray acoustics approximation) and $l_s$ is approximately given by the average distance between two neighboring scatterers. Fig. \[Fig\_13\] shows a detailed view of the previous curves in the frequency range corresponding to the first bandgap and within the diffusive regime. The labels $A$ and $B$ indicate the non-dimensional wavelengths corresponding to the excitation conditions analyzed in the following sections. Note that these curves provide the foundation to investigate the different regimes of propagation and to implement the renormalization approach.
### Renormalization and anomalous diffusion {#Numerical_1}
Fig. \[Fig\_14\] shows the acoustic intensity distribution $I$ along the $x$ axis for the two excitation conditions identified by the labels A and B.
The red circles show the numerical solution obtained by the FE model and provide a one-dimensional section of the data in Fig. \[Fig\_8\] along the $x$-axis. The continuous blue line is the analytical solution of the diffusion equation Eq. (\[Solution\]) after having rescaled the transport mean free path. In particular, for excitation wavelengths in the first passband the value $\langle l_{t}\rangle/d\approx 1.211$ was rescaled to $\langle l_{t}\rangle/d\approx 0.32 \pm 0.02$ (label $A^*$ in Fig. \[Fig\_13\]), while for the first bandgap the value $\langle l_{t}\rangle/d\approx 1.42$ was rescaled to $\langle l_{t}\rangle/d\approx 0.48 \pm 0.02$ (label $B^*$ in Fig. \[Fig\_13\]).
These results show that, in order to be able to predict the numerical data by using the diffusion approximation, a renormalization of the transport mean free path (and consequently of the diffusion coefficient) must take place. The renormalization requires smaller values of the transport parameters which is a clear indication of superdiffusive anomalous behavior.
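In practice, the renormalized value of $l_t$ can be obtained by a least-squares fit of Eq. (\[Solution\]) to the FE intensity profile, as in the sketch below; the synthetic data and the domain size are placeholders for the actual simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

L_dom = 40.0                                   # size of the computational domain (placeholder)
r = np.linspace(1.0, 30.0, 120)                # radial positions along the x-axis
rng = np.random.default_rng(1)
I_fe = -1.0 / (2.0 * np.pi**2 * 0.48) * np.log(r / L_dom) + 0.01 * rng.standard_normal(r.size)

model = lambda r, l_t, I0: -1.0 / (2.0 * np.pi**2 * l_t) * np.log(r / L_dom) + I0
(l_t_fit, I0_fit), _ = curve_fit(model, r, I_fe, p0=(1.4, 0.0))
print(f"renormalized l_t/d = {l_t_fit:.2f}")   # much smaller than the value predicted by Eq. (ls_lt)
```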
Causes of anomalous diffusion {#NumericalViewPer}
=============================
In the previous sections we showed the occurrence of anomalous diffusion of acoustic waves in perfectly periodic square lattices and suggested that the possible origin of this mechanism is linked to the anisotropy of the dispersion properties (i.e. to the anisotropy of the bandgaps).
In this section we will present theoretical and numerical models with the intent of uncovering the physical mechanism leading to this unexpected propagation modality. It is anticipated that the occurrence of anomalous diffusion will be connected to the existence of long-range interactions between different directions of propagation governed by either passband or stopband behavior. We will use a combination of deterministic and stochastic methods in order to quantify the long-range interactions and to demonstrate that they are at the origin of the macroscopic anomalous diffusion mechanism.
More specifically, we will use a scattering matrix approach to quantify the interaction between different scatterers in different regimes. Then, we will use a discrete random walk diffusion model (which uses probability density functions obtained from the scattering model) to show that, under these assumptions, the anomalous diffusion process matches well with the numerically predicted behavior.
The scattering matrix
---------------------
In order to evaluate and quantify the strength of the interaction between different scatterers in the lattice, we use a multiple scattering approach based on the multipole expansion method. According to this method, after applying the Jacobi expansion and Graf’s addition theorem, the general solution of the wave field can be expressed as:
$$\begin{aligned}
\begin{split}
\label{Eq.206}
p(\vec{r}_m)=\sum_{j=-\infty}^{\infty} (e^{i\vec{k}\cdot \vec{P}_m} e^{ij(\pi /2-\psi_0)}J_j(\vec{r}_m)+A_j^m H_j(\vec{r}_m))+\\
\sum_{n=1,n\neq m}^{N}\sum_{q=-\infty}^{\infty}A_n^q\sum_{j=-\infty}^{\infty}H_{q-j}(kR_{nm})e^{i(q-j)\Phi_{nm}}J_j(\vec{r}_m)
\end{split}\end{aligned}$$
where $\vec{k}$ represents the wave vector, $\psi_0$ is the angle of the impinging wave field with respect to the $x$-axis, $\vec{P}_m$ is the position vector of the scatterer’s center $O_m$ with respect to the origin $O$ of the system of reference, $\vec{r}_m$ is the position vector of a generic point $P$ with respect to the scatterer’s center $O_m$, $R_{nm} = \left |\vec{P}_m - \vec{P}_n \right |$, $J_q(\cdot)$ are the Bessel functions of the first kind, $H_q(\cdot)$ are the Hankel functions of the first kind.
To determine the unknown amplitude coefficients $A_m^q$ the boundary conditions at the surface of the $m$th cylinder must be enforced. The result is a linear set of equations as follows[@linton2005multiple; @martin2006multiple; @kafesaki1999multiple]:
$$\begin{aligned}
\begin{split}
\label{Eq.207}
A_m^p+Z_p\sum_{n=1,n\neq m}^{N}\sum_{q=-\infty}^{\infty}A_n^qH_{q-p}(kR_{nm})e^{i(q-p)\Phi_{nm}}= \\ -Z_p e^{i\vec{k}\cdot\vec{P}_m} e^{ip(\pi/2-\psi_0)},\\ \quad m=1,...,N ,\quad p=0,+1,-1,...
\end{split}\end{aligned}$$
where $Z_p =J'_p(\cdot)/ H'_p(\cdot)$ specifies the Neumann boundary conditions on the surface of the cylinders. The unknown amplitude coefficients $A_m^q$ can be determined by solving the infinite system of algebraic equations with inner sum truncated at some positive integer $|q| =Q$. The information about the relative energy exchange between the scatterers can be obtained by rearranging Eq. (\[Eq.207\]) in matrix form as:
$$\begin{aligned}
\begin{split}
\label{Eq.209}
(I-TS)f=Ta
\end{split}\end{aligned}$$
where $I$ is the unit matrix, $T$ is the block diagonal impedance matrix, the vector $f$ represents the unknown expansions of scattered waves, and the vector $a$ stands for the expansion vector of incident waves on all the scattering cylinders. Finally the matrix $S$ is the so called combined translation matrix that can be expressed as follows:
$$\begin{aligned}
\begin{split}
\label{Eq.210}
S=\begin{bmatrix}
0 & L_{12} &...& L_{1N}\\
L_{21} & 0 &...&L_{2N}\\
\vdots & \vdots & \ddots & \vdots \\
L_{N1}& L_{N2} & ... & 0
\end{bmatrix}
\end{split}\end{aligned}$$
where the matrix $L_{nm}$ is defined as follows:
$$\begin{aligned}
\begin{split}
\label{Eq.211}
L_{nm}(q,p)=H_{q-p}(kR_{nm})e^{i(q-p)\Phi_{nm}}.
\end{split}\end{aligned}$$
The matrix $L_{nm}$ is the translation matrix between the $n$th and the $m$th cylinder; it therefore expresses the incident wave on the $n$th cylinder caused by the wave scattered off the $m$th cylinder. The elements of the translation matrix can be obtained from the addition theorem of cylindrical harmonics, also known as Graf’s theorem.
The generic term $S_{mn}$ quantifies the portion of the acoustic intensity scattered by the cylinder $m$ capable of reaching the cylinder $n$. Equivalently, it represents the fraction of the acoustic intensity reaching the cylinder $n$ due to the wave scattered by the cylinder $m$.
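A possible way to assemble the combined translation matrix of Eqs. (\[Eq.210\])-(\[Eq.211\]) is sketched below with SciPy’s Hankel functions; the sign convention adopted for $\Phi_{nm}$ and the way $T$ and $a$ are assembled are assumptions, not a verbatim reproduction of the authors’ implementation.

```python
import numpy as np
from scipy.special import hankel1

def combined_translation_matrix(centers, k, Q):
    """Combined translation matrix S; centers is an (N, 2) array of cylinder centers."""
    N = len(centers)
    orders = np.arange(-Q, Q + 1)
    Qb = 2 * Q + 1
    S = np.zeros((N * Qb, N * Qb), dtype=complex)
    for n in range(N):
        for m in range(N):
            if n == m:
                continue                              # the S_mm blocks are zero
            d = centers[m] - centers[n]
            Rnm = np.hypot(d[0], d[1])
            Phinm = np.arctan2(d[1], d[0])            # assumed angle convention
            q, p = np.meshgrid(orders, orders, indexing="ij")
            L = hankel1(q - p, k * Rnm) * np.exp(1j * (q - p) * Phinm)
            S[n * Qb:(n + 1) * Qb, m * Qb:(m + 1) * Qb] = L
    return S

# with the block-diagonal impedance matrix T and the incident-wave vector a built analogously,
# the scattered amplitudes follow from f = np.linalg.solve(np.eye(S.shape[0]) - T @ S, T @ a)
```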
This approach was applied to model both the 1D and the 2D waveguides. The normalized scattering coefficients for the 1D waveguide are shown in matrix form in Fig. \[Fig\_16\]. Each block of this matrix has size $\bar{Q} = 2Q+1$, where $Q$ is the truncation order of the cylindrical harmonics used in the multipole series expansion. The total size of the matrix is $N\bar{Q}$, where $N$ is the total number of cylinders in the cluster. The diagonal blocks represent the coefficients $S_{mm}$, that is, the scattering of a given cylinder $m$ towards itself; these terms are therefore all zero.
The analysis of Fig. \[Fig\_16\] shows, as expected, that in a 1D waveguide the scatterers only interact with their closest neighbors. In other terms, there is no evidence of long-range interaction in 1D periodic waveguides. This is not surprising because we had already found from the full-field simulations in Fig. \[Fig\_4\](a) that the diffusive transport was following a purely Gaussian distribution (hence dominated by nearest neighbor interactions).
In a similar way, the analysis can be repeated for the 2D waveguide with square lattice structure. The resulting scattering coefficients are shown in Fig. \[Fig\_17\]. In contrast to the 1D example, this scattering matrix has the appearance of a tridiagonal matrix that highlights the substantial interactions between distant neighbors. In other words, the rectangular lattice shows strong evidence of long-range interactions. These results provide a first important observation concerning the cause of anomalous diffusion in periodic rectangular lattices: the anisotropy of the dispersion bands gives rise to long-range interactions that ultimately alter the diffusion process.
Discrete random walk models: approximate acoustic intensity
-----------------------------------------------------------
The previous analysis is not yet sufficient to provide conclusive evidence that the long-range interactions due to the bandgap anisotropy are the main cause of the anomalous wave diffusion. In order to identify this further logical link, we developed a discrete random walk (DRW) model capable of simulating the diffusion process resulting from the multiple scattering of the acoustic waves.
The interaction between the different elements of a DRW model is typically represented by probability density functions (*pdf*). In the following, the *pdf*s are synthesized based on the coefficients of the scattering matrix. The model can then be numerically solved in order to predict the approximate acoustic intensity resulting from the scattered field.
### 1D discrete random walk model
The DRW model for a 1D waveguide is composed of a series of boxes (see Fig. \[Fig\_18\]), each one representing a scatterer. This model can be seen as the direct discrete equivalent of the 1D waveguide in Fig. \[Fig\_3\]. The dots in each box represent the different acoustic rays impinging on a given scatterer and being refracted towards different (scattering) elements. This model follows a ray acoustics approximation, which is a reasonable assumption in the range of wavelengths considered here. To simulate the monopole acoustic source located at the center of the waveguide, the center box (labeled $i$) contains a source term that serves as an omni-directional source of rays. In the 1D model, the rays emitted from the center box can be scattered both to the left and to the right according to the associated *pdf* synthesized based on the elements of the scattering matrix.
At every time increment, the rays “jump” into another box following a Markovian process and a *pdf* proportional to the coefficients extracted from the scattering matrix. The equilibrium condition needed to solve the DRW model and simulate the evolution of the acoustic intensity upon scattering is given by imposing the conservation of rays:
$$\begin{aligned}
\label{eqn:Conservation number of particles}
\begin{split}
n_{i,j+1}=n_{i,j}+\sum_{k=1}^{N_L} n_{k,j}P(i-k)+\sum_{k=1}^{N_R}n_{k,j}P(k-i)- \\
\sum_{k=1}^{N_L}n_{i,j}P(i-k)-\sum_{k=1}^{N_R}n_{i,j}P(k-i) + B_i
\end{split}\end{aligned}$$
where $i$ is the box index, $j$ is the time index, $n_{i,j}$ is the number of rays at time $j$ in box $i$ (i.e. impinging on the scatterer $i$), $B_i$ is the source term, and $N_L$ and $N_R$ represent the number of boxes on the left and right side, respectively.
The previous equation can be rearranged as follows: $$\begin{aligned}
\label{eqn:Conservation number of particles 2}
\begin{split}
n_{i,j+1}=n_{i,j}+\sum_{k=1}^{N_L}(n_{k,j}-n_{i,j})P(i-k)+ \\
\sum_{k=1}^{N_R}(n_{k,j}-n_{i,j})P(k-i)+ B_i.
\end{split}\end{aligned}$$
The comparison between the intensity distributions obtained with the FE model and by the equivalent 1D DRW model is given in Fig. \[Fig\_19\].
The direct comparison of the results shows a very good agreement between the two models. Note that the DRW is a diffusive model; therefore, the comparison between the intensity distributions is meaningful only in the tail region. As expected, the tails evolve according to a Gaussian distribution. The comparison with the 1D waveguide was provided to illustrate the validity of the proposed approach and to confirm that, under the given assumptions, the results from the DRW converge to the full-field simulations.
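A minimal version of the 1D DRW update, Eq. (\[eqn:Conservation number of particles\]), is sketched below. The jump probabilities are illustrative stand-ins for the normalized entries of the scattering matrix in Fig. \[Fig\_16\], and rays leaving the outermost boxes are simply discarded (absorbing ends).

```python
import numpy as np

def drw_1d(jump_pdf, n_boxes=201, steps=2000, source_rate=1.0):
    """1D discrete random walk; jump_pdf[d-1] is the probability of a jump of d boxes per side."""
    n = np.zeros(n_boxes)
    src = n_boxes // 2                          # monopole source replaces the center box
    for _ in range(steps):
        new = n.copy()
        new[src] += source_rate                 # source term B_i
        for dist, p in enumerate(jump_pdf, start=1):
            out = n * p
            new -= 2.0 * out                    # rays leaving box i towards both sides
            new[dist:] += out[:-dist]           # arrivals from the left neighbours
            new[:-dist] += out[dist:]           # arrivals from the right neighbours
        n = new
    return n / n.max()

# nearest-neighbour dominated pdf (illustrative values only)
intensity = drw_1d(jump_pdf=np.array([0.30, 0.05, 0.01]))
```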
### 2D discrete random walk model
The same approach illustrated above for the 1D waveguide can be applied to the analysis of the 2D square lattice. In this case, the DRW model is composed of a 2D distribution of boxes simulating the scatterers. The interactions between different boxes are again expressed in terms of *pdf*s that are synthesized based on the scattering coefficients obtained from the 2D multipole expansion model (Fig. \[Fig\_17\]). The equilibrium condition for the 2D DRW model is given by:
$$\begin{aligned}
\label{eqn:Conservation number of particles 2D}
\begin{split}
n_{i,h,j+1}=n_{i,h,j}+ \\ \sum_{k_h=1}^{N_D}\sum_{k_i=1}^{N_L}(n_{k_i,k_h,j}-n_{i,h,j})P(i-k_i,h-k_h)+\\ \sum_{k_h=1}^{N_U}\sum_{k_i=1}^{N_R}(n_{k_i,k_h,j}-n_{i,h,j})P(k_i-i,k_h-h)+ B_{ih}
\end{split}\end{aligned}$$
where $i$ and $h$ are the box indices, $j$ is the time index, $n(i,h,j)$ is the number of particles at time $j$ in the box $(i,h)$, $B_{ih}$ is the source term and $N_L $, $N_R$, $N_U$ and $N_D$ represent the number of boxes on the left, right, up and down sides, respectively.
The comparison between the intensity distributions obtained by numerical FE simulations and by equivalent 2D DRW model is reported in Fig. \[Fig\_20\].
Also in this case, the DRW model is in very good agreement with the FE simulations and, most importantly, is perfectly capable of capturing the anomalous (power-law) decay of the tails of the distribution. This result provides conclusive proof that the anomalous behavior observed in the square lattice is in fact the result of long-range (Lévy flight) interactions due to scattering events occurring along different directions of propagation that are characterized by anisotropic dispersion.
$\alpha$-stable distributions and fractional diffusion equation {#fractional diffusion}
===============================================================
The renormalization criterion used in section §\[Numerical\_1\] to determine the existence of the anomalous diffusion regime is theoretically well-grounded but it does not allow a convenient approach to classify the anomalous regime. This classification typically requires the analysis of the time scales involved in the evolution of the moments of the distribution [@metzler2000random]. Here we suggest a different approach that, not only provides a more direct classification based on the available data, but opens new routes for an analytical treatment of the resulting diffusion problem.
The intensity distributions reported in Fig. \[Fig\_14\] suggest a power-law behavior of the tails. Recent studies [@mainardi1996fractional; @mainardi1995fractional; @benson2000application; @benson2001fractional] have shown that, for physical phenomena exhibiting this characteristic distribution of the field variables, the governing equations are generalizations of the classical diffusion equation to fractional order. Power-law distributions, associated with infinite-variance random variables (the so-called Lévy flights), are in the domain of attraction of $\alpha$-stable random variables, also called Lévy stable densities (their properties are summarized in Appendix \[Appendix\]). Finite-variance random variables, on the other hand, are in the Normal domain of attraction, which is a subset of the Lévy stable densities. This suggests that the trend of the tails carries information about the $\alpha$-stable order of the underlying distribution.
In order to show that this situation occurs also in the present case, we performed numerical fits of the acoustic intensity profiles (Fig. \[Fig\_14\]) using $\alpha$-stable distributions.
The four parameters defining the $\alpha$-stable distributions were obtained by numerically solving a nonlinear optimization problem. The most important parameter is the characteristic exponent $\alpha$ (also called the index of stability) that is also connected to the slope of the tails. For the square lattice distribution, the values of $\alpha$ determined with the optimization procedure are $\alpha = 0.89$ and $\alpha = 0.57$ for the passband and bandgap excitation wavelengths, respectively.
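The fit mentioned above can be set up, for instance, as a nonlinear least-squares problem on the $\alpha$-stable density available in SciPy. In the sketch below the intensity profile is a symmetric placeholder ($\beta=0$, zero location), so only the stability index, scale and amplitude are optimized; the actual fits in the paper involve all four parameters.

```python
import numpy as np
from scipy.stats import levy_stable
from scipy.optimize import curve_fit

# placeholder for the normalized acoustic intensity profile of Fig. 14
x = np.linspace(-30.0, 30.0, 241)
I = levy_stable.pdf(x, 0.6, 0.0, loc=0.0, scale=2.0)

def stable_profile(x, alpha, scale, amp):
    return amp * levy_stable.pdf(x, alpha, 0.0, loc=0.0, scale=scale)

(alpha, scale, amp), _ = curve_fit(stable_profile, x, I, p0=(1.0, 1.0, 1.0),
                                   bounds=([0.1, 0.1, 0.0], [2.0, 20.0, 10.0]))
print(f"index of stability alpha = {alpha:.2f}")
```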
In order to show that the order of the $\alpha$-stable distribution effectively describes the anomalous diffusive dynamics of the system, we use a generalized fractional diffusion equation [@mainardi2007fundamental]:
$$\begin{aligned}
\label{Fractional diffusion}
\begin{split}
_{x}\textrm{D}_{\theta}^{\alpha}u(x,t)=_{t}\textrm{D}_{*}^{\beta}u(x,t) \quad x \in \mathbb{R} , \quad t \in \mathbb{R}^+
\end{split}\end{aligned}$$
where $\alpha$, $\theta$, $\beta$ are real parameters always restricted as follows:
$$\begin{aligned}
\label{Restrictions}
\begin{split}
0< \alpha \leq2, \quad |\theta|\leq min\left \{ \alpha,2-\alpha \right \}, 0 <\beta \leq 2.
\end{split}\end{aligned}$$
In Eq. (\[Fractional diffusion\]), $u = u(x,t)$ is the field variable, $_{x}\textrm{D}_{\theta}^{\alpha}$ is the *Riesz-Feller* space fractional derivative of order $\alpha$, and $_{t}\textrm{D}_{*}^{\beta}$ is the Caputo time-fractional derivative of order $\beta$. The fractional operator in this equation exhibits a non-local behavior which makes it ideally suited to model dynamical systems dominated by long-range interactions.
Mainardi [@mainardi2007fundamental] reported the Green’s function for a Cauchy problem based on the space-time fractional diffusion equation. The self-similar nature of the solution allows the application of a similarity method that separates the solution into a space dependent (the reduced Green’s function $K$) and a time dependent term. In our system, we use a harmonic (constant amplitude) source and we analyze the steady state response, that is we consider a self-similar problem. In other terms, the reduced Green’s function $K$ proposed by Mainardi coincides with the normalized solution of the forced fractional diffusion equation governing our problem:
$$\begin{aligned}
\label{Reduced Green function}
\begin{split}
\textrm{K}_{\alpha,\beta}^{\theta}(x) = \frac{1}{\pi x}\sum_{n=1}^{\infty}\frac{\Gamma (1+\alpha n)}{\Gamma(1+\beta n)}\sin\left [ \frac{n\pi}{2}(\theta-\alpha) \right ](-x^{-\alpha})^n.
\end{split}\end{aligned}$$
Note that this solution is valid in the case $\alpha<\beta$. In our case, $\beta=1$ to model a space fractional diffusion equation.
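The truncated series of Eq. (\[Reduced Green function\]) can be evaluated numerically as sketched below; the implementation assumes the symmetric case $\theta=0$, $\beta=1$ and positions $x$ large enough for the series to converge, and the number of retained terms is an arbitrary choice.

```python
import numpy as np
from scipy.special import gammaln

def reduced_green_K(x, alpha, beta=1.0, theta=0.0, n_terms=60):
    """Truncated series of the reduced Green's function (valid for alpha < beta, large x > 0)."""
    n = np.arange(1, n_terms + 1)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    ratio = np.exp(gammaln(1.0 + alpha * n) - gammaln(1.0 + beta * n))
    phase = np.sin(0.5 * n * np.pi * (theta - alpha))
    terms = ratio * phase * (-(x[:, None] ** (-alpha))) ** n
    return terms.sum(axis=1) / (np.pi * x)

x = np.linspace(2.0, 30.0, 100)
K_bandgap = reduced_green_K(x, alpha=0.57)      # order obtained from the alpha-stable fit
```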
Fig. \[Fig\_22\] shows the comparison between the normalized acoustic intensity from the FE numerical data and the result from the reduced Green’s function $K$ (Eq. (\[Reduced Green function\])) calculated for the order $\alpha$ obtained by the previous $\alpha$-stable fits.
The above results clearly show that the fractional diffusion equation is able to capture the heavy-tailed behavior of the intensity distribution with good accuracy. They also confirm that the use of $\alpha$-stable fits provides a reliable approach to classify the anomalous behavior and to extract the corresponding fractional order of the operator. The above numerical results provide also further confirmation that the observed dynamic behavior from the full field numerical simulations is in fact dominated by anomalous diffusion. These results are particularly relevant if seen in a perspective of developing predictive capabilities for transport processes in highly inhomogeneous systems. As an example, fractional models would provide an excellent framework for the solution of inverse problems in imaging and remote sensing through highly scattering media. The ability to properly capture a mixed transport behavior, such as partially propagating and diffusive, would allow extracting more information from the measured response therefore improving the sensitivity and resolution of these approaches. From a broader perspective, this methodology has general applicability and could be extended to a variety of applications involving wave-like field transport such as those mentioned in the introduction.
Conclusions {#Conclusion}
===========
In this paper, we investigated the scattering behavior of sound waves in a perfectly periodic acoustic medium composed of a square lattice of hard cylinders in air. From a general perspective, the most remarkable result is the observation of anomalous hybrid transport in perfectly periodic lattice structures without disorder or random properties. This result is particularly relevant because the anomalous response of a scattering system was previously observed only in systems with either stochastic material or geometric properties. By using a combination of theoretical and numerical models, both deterministic and stochastic, it was determined that the existence of long-range interactions associated with the anisotropy of the dispersion bands was the driving factor leading to the occurrence of the anomalous transport behavior. The resulting diffuse intensity fields were characterized by heavy tails with marked asymptotic power-law decay, which were well described by $\alpha$-stable distributions. It was also shown that the $\alpha$-stable nature of the dynamic response provided a reliable approach for the classification and characterization of the non-local effect via the intrinsic parameters of $\alpha$-stable distributions.
Observing that $\alpha$-stable distributions represent the fundamental kernel for the solutions of fractional continuum models, we showed that a space fractional diffusion equation having the order predicted by the $\alpha$-stable fit of the acoustic intensity was capable of capturing very accurately the characteristic features of the anomalous transport process. From a general perspective, this approach can be interpreted as a fractional order homogenization of the periodic medium which is capable of mapping the complex inhomogeneous system to a (fractional) governing equation that still accepts an analytical solution.
This latter characteristic is particularly remarkable if seen from a practical application perspective because it could open the way to accurate and non-iterative inverse problems that play a critical role in remote sensing, imaging, and material design, just to name a few. Another key observation concerns the strong deviation of the tails of the acoustic intensity from the Gaussian distribution which highlights that much information is still contained in the tails. This aspect is particularly relevant for imaging and sensing in scattering media because traditional analytical methodologies typically assume a Gaussian distribution of the measured intensity field hence leading to two main drawbacks: 1) the loss of important information about the internal structure of the medium which is contained in the tails, and 2) the lack of a proper model capable of extracting and interpreting this information from measured data.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors gratefully acknowledge the financial support of the National Science Foundation under grant DCSD CAREER $\#1621909$ and of the Air Force Office of Scientific Research under Grant No. YIP FA9550-15-1-0133.
Appendix A {#Appendix}
==========
This appendix summarizes some basic properties of the $\alpha$-stable distributions that have been used to analyze and interpret the simulation data in the paper. The family of $\alpha$-stable distributions is defined through the characteristic function $\psi(w)$, which can be written explicitly as [@herranz2004alpha; @benson2002fractional]: $$\begin{aligned}
\label{alpha-stab0}
\psi(w)=\exp\left \{ i\mu w-\gamma\left | w \right |^\alpha B_{w,\alpha} \right \} \\
B_{w,\alpha}=\left\{\begin{matrix}
\left [1+i\beta \operatorname{sgn}(w) \tan\frac{\alpha \pi}{2} \right ] \quad \alpha\neq1 \nonumber \\
\left [1+i\beta \operatorname{sgn}(w) \frac{2}{\pi} \log\left | w \right | \right ] \quad \alpha=1
\end{matrix}\right.\end{aligned}$$
where $0<\alpha\leq 2$, $-1\leq\beta\leq 1$, $\gamma>0$, and $-\infty<\mu<\infty$. The parameters $\alpha$, $\beta$, $\gamma$ and $\mu$ uniquely and completely identify the stable distribution.
1. The parameter $\alpha$ is the *characteristic exponent*, or the *stability parameter*, and it defines the degree of impulsiveness of the distribution. As $\alpha$ decreases, the level of impulsiveness of the distribution increases. For $\alpha=2$ we recover the Gaussian distribution. A particular case is obtained for $\alpha=1$ and $\beta=0$, which corresponds to the Cauchy distribution. For $\alpha \notin (0,2]$ the function $\psi(w)$ is not positive-definite and hence its inverse Fourier transform is not a proper probability density function.
2. The parameter $\beta$ is the *symmetry*, or *skewness parameter*, and determines the skewness of the distribution. Symmetric distributions have $\beta=0$, whereas $\beta=1$ and $\beta=-1$ correspond to completely skewed distributions.
3. The parameter $\gamma$ is the *scale parameter*. It is a measure of the spread of the samples from a distribution around the mean.
4. The parameter $\mu$ is the *location parameter* and corresponds to a shift in the $x$-axis of the pdf. For a symmetric $(\beta=0)$ distribution, $\mu$ is the mean when $1<\alpha\leq 2$ and the median when $0<\alpha\leq 1$.
The characteristic functions described in Eq. (\[alpha-stab0\]) determine the corresponding probability density functions, which do not have analytical expressions except for a few special cases. The main feature of these distributions is the presence of heavy tails when compared to a Gaussian distribution. Probability density functions with tails heavier than Gaussian are also denoted as *impulsive*. An impulsive process is characterized by the presence of large values that significantly deviate from the mean of the distribution with non-negligible probability. In this sense the $\alpha$-stable distribution represents a generalization of the Gaussian distribution that makes it possible to model impulsive processes by using only four parameters instead of an infinite number of moments. The possibility of describing the distribution of particles in anomalous diffusion phenomena by using $\alpha$-stable distributions has numerous advantages: 1) many methods exist to perform statistical inference in $\alpha$-stable environments [@nikias1995signal; @janicki1993simulation], 2) these distributions are simple because they are completely characterized by only four parameters, 3) the use of $\alpha$-stable distributions finds a theoretical justification in the generalized central limit theorem, which states that the limit distribution of sums of infinitely many i.i.d. random variables is a stable distribution, 4) they include the Gaussian distribution as a particular case for a specific set of parameters. These distributions are called stable since the output of a linear system in response to $\alpha$-stable inputs is again $\alpha$-stable.
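The heavy-tail property discussed above is easy to visualize by direct sampling; the sketch below compares the tail probability of a symmetric $\alpha$-stable law with that of a Gaussian, using an arbitrary stability index.

```python
import numpy as np
from scipy.stats import levy_stable, norm

rng = np.random.default_rng(0)
samples = levy_stable.rvs(alpha=1.5, beta=0.0, size=100_000, random_state=rng)

# tail probability P(|X| > 10): non-negligible for the stable law, essentially zero for the Gaussian
print("alpha-stable:", np.mean(np.abs(samples) > 10.0))
print("Gaussian    :", 2.0 * norm.sf(10.0))
```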
| ArXiv |
---
abstract: 'We propose a new type of hidden layer for a multilayer perceptron, and demonstrate that it obtains the best reported performance for an MLP on the MNIST dataset.'
bibliography:
- 'strings.bib'
- 'strings-shorter.bib'
- 'ml.bib'
- 'aigaion-shorter.bib'
---
The piecewise linear activation function
========================================
We propose to use a specific kind of piecewise linear function as the activation function for a multilayer perceptron.
Specifically, suppose that the layer receives as input a vector $x \in \mathbb{R}^D$. The layer then computes presynaptic output $z = x^T W + b$ where $W \in \mathbb{R}^{D \times N}$ and $b \in \mathbb{R}^N$ are learnable parameters of the layer.
We propose to have each layer produce output via the activation function $h(z)_i = \text{max}_{j \in S_i} z_j$ where $S_i$ is a different non-empty set of indices into $z$ for each $i$.
This function provides several benefits:
- It is similar to the rectified linear units [@Glorot+al-AI-2011] which have already proven useful for many classification tasks.
- Unlike rectifier units, every unit is guaranteed to have some of its parameters receive some training signal at each update step. This is because the inputs $z_j$ are only compared to each other, and not to 0, so one is always guaranteed to be the maximal element through which the gradient flows. In the case of rectified linear units, there is only a single element $z_j$ and it is compared against 0. When $0 > z_j$, $z_j$ receives no update signal.
- Max pooling over groups of units allows the features of the network to easily become invariant to some aspects of their input. For example, if a unit $h_i$ pools (takes the max) over $z_1$, $z_2$, and $z_3$, and $z_1$, $z_2$ and $z_3$ respond to the same object in three different positions, then $h_i$ is invariant to these changes in the objects position. A layer consisting only of rectifier units can’t take the max over features like this; it can only take their average.
- Max pooling can reduce the total number of parameters in the network. If we pool with non-overlapping receptive fields of size $k$, then $h$ has size $N / k$, and the next layer has its number of weight parameters reduced by a factor of $k$ relative to if we did not use max pooling. This makes the network cheaper to train and evaluate but also more statistically efficient.
- This kind of piecewise linear function can be seen as letting each unit $h_i$ learn its own activation function. Given large enough sets $S_i$, $h_i$ can implement increasingly complex convex functions of its input. This includes functions that are already used in other MLPs, such as the rectified linear function and absolute value rectification.
Experiments
===========
We used $S_i = \{ 5 i, 5 i + 1, ... 5 i + 4 \}$ in our experiments. In other words, the activation function consists of max pooling over non-overlapping groups of five consecutive pre-synaptic inputs.
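A minimal NumPy sketch of this activation is given below; it only implements the forward pass with the grouping $S_i$ described above, while the training details (dropout, learning rates) used in the experiments are omitted.

```python
import numpy as np

def maxout(z, k=5):
    """Max over non-overlapping groups of k consecutive presynaptic inputs.

    z : array of shape (batch, N) with N divisible by k; returns shape (batch, N // k).
    """
    batch, N = z.shape
    return z.reshape(batch, N // k, k).max(axis=2)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 784))            # a batch of MNIST-sized inputs
W = 0.01 * rng.standard_normal((784, 1200))   # learnable parameters of the first layer
b = np.zeros(1200)
h = maxout(x @ W + b)                         # shape (32, 240), as in the experiments
```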
We apply this activation function to the multilayer perceptron trained on MNIST by @Hinton-et-al-arxiv2012. This MLP uses two hidden layers of 1200 units each. In our setup, the presynaptic activation $z$ has size 1200 so the pooled output of each layer has size 240. The rest of our training setup remains unchanged apart from adjustment to hyperparameters.
@Hinton-et-al-arxiv2012 report 110 errors on the test set. To our knowledge, this is the best published result on the MNIST dataset for a method that uses neither pretraining nor knowledge of the input geometry.
It is not clear how @Hinton-et-al-arxiv2012 obtained a single test set number. We train on the first 50,000 training examples, using the last 10,000 as a validation set. We use the misclassification rate on the validation set to determine at what point to stop training. We then record the log likelihood on the first 50,000 examples, and continue training but using the full 60,000 example training set. When the log likelihood of the validation set first exceeds the recorded value of the training set log likelihood, we stop training the model, and evaluate its test set error. Using this approach, our trained model made 94 mistakes on the test set. We believe this is the best-ever result that does not use pretraining or knowledge of the input geometry.
| ArXiv |
---
abstract: 'In this paper the stability of a closed-loop cascade control system in the trajectory tracking task is addressed. The considered plant consists of underlying second-order fully actuated perturbed dynamics and the first order system which describes dynamics of the input. The main theoretical result presented in the paper concerns stability conditions formulated based on the Lyapunov analysis for the cascade control structure taking advantage of the active rejection disturbance approach. In particular, limitations imposed on a feasible set of an observer bandwidth are discussed. In order to illustrate characteristics of the closed-loop control system simulation results are presented. Furthermore, the controller is verified experimentally using a two-axis telescope mount. The obtained results confirm that the considered control strategy can be efficiently applied for mechanical systems when a high tracking precision is required.'
author:
- |
Rados[ł]{}aw Patelski, Dariusz Pazderski\
Poznań University of Technology\
Institute of Automation and Robotics\
ul. Piotrowo 3a 60-965 Poznań, Poland
bibliography:
- 'bibDP.bib'
title: 'Tracking control for a cascade perturbed control system using active disturbance rejection paradigm[^1]'
---
Introduction
============
Set-point regulation and trajectory tracking constitute elementary tasks in control theory. It is well known that a fundamental method of stabilisation by means of a smooth static state feedback has significant limitations which come, among others, from the inability to measure the state as well as the occurrence of parametric and structural model uncertainties. Thus, for these reasons, various adaptive and robust control techniques are required to improve the performance of the closed-loop system. In particular, algorithms used for the state and disturbance estimation are of great importance here.
The use of high gain observers (HGOs) is well motivated in the theory of linear dynamic systems, where it is commonly assumed that state estimation dynamics are negligible with respect to the dominant dynamics of the closed-loop system. A similar approach can be employed successfully for a certain class of nonlinear systems where establishing a fast convergence of estimation errors may be sufficient to ensure the stability, [@KhP:2014]. In a natural way, the HGO observer is a basic tool to support a control feedback when a plant model is roughly known. Here one can mention the free-model control paradigm introduced by Fliess and others, [@Fliess:2009; @FlJ:2013] as well as the active disturbance rejection control (ADRC) proposed by Han and Gao, [@Han:1998; @Gao:2002; @Gao:2006; @Han:2009].
It turns out that the above-mentioned control methodology can be highly competitive with respect to the classic PID technique in many industrial applications, [@SiGao:2005; @WCW:2007; @MiGao:2005; @CZG:2007; @MiH:2015; @NSKCFL:2018]. Furthermore, it can be regarded as an alternative control approach in comparison to the sliding control technique proposed by Utkin and others, [@Utk:77; @Bartol:2008], where bounded matched disturbances are rejected due to fast switching discontinuous controls. Thus, it is possible to stabilise the closed-loop control system, in the sense of Filippov, on a prescribed, possibly time-varying, sliding surface, [@Bart:96; @NVMPB:2012]. Currently, also second and higher-order sliding techniques for control and state estimations are being explored, [@Levant:1993; @Levant:1998; @Bartol:1998; @Cast:2016]. It is noteworthy to recall a recent control algorithm based on higher-order sliding modes to solve the tracking problem in a finite time for a class of uncertain mechanical systems in robotics, [@Gal:2015; @Gal:2016]. From a theoretical point of view, some questions arise regarding conditions of application of control techniques based on a disturbance observer, with particular emphasis on maintaining the stability of the closed-loop system. Recently, new results concerning this issue have been reported for ADRC controllers, [@SiGao:2017; @ACSA:2017]. In this paper we further study the ADRC methodology taking into account a particular structure of perturbed plant. Basically, we deal with a cascade control system which is composed of two parts. The first component is represented by second-order dynamics which constitute an essential part of the plant. It is assumed that the system is fully actuated and subject to matched-type disturbances with bounded partial derivatives. The second component is defined by an elementary first-order linear system which describes input dynamics of the entire plant. Simultaneously, it is supposed that the state and control input of the second order dynamics are not fully available.
It can be seen that the considered plant corresponds well to a class of mechanical systems equipped with a local feedback applied at the level of actuators. As a result of the additional dynamics, real control forces are not directly accessible, which may deteriorate the stability of the closed-loop system.
In order to analyse the closed-loop system we take advantage of Lyapunov tools. Basically, we investigate how an extended state observer (ESO) affects the stability when additional input dynamics are considered. Further we formulate stability conditions and estimate bounds of errors. In particular, we show that the observer gains cannot be made arbitrarily large as it is commonly recommended in the ADRC paradigm. Such an obstruction is a result of the occurrence of input dynamics which is not explicitly taken into account in the feedback design procedure.
According to the best authors’ knowledge, the Lyapunov stability analysis for the considered control structure taking advantage of the ADRC approach has not been addressed in the literature so far.
Theoretical results are illustrated by numerical simulations and experiments. The experimental validation is conducted on a real two-axis telescope mount driven by synchronous gearless motors, [@KPKJPKBJN:2019]. Here we show that the considered methods provide the high tracking accuracy which is required in such an application. Additionally, we compare the efficiency of compensation terms computed based on the reference trajectory and on-line estimates in order to improve the tracking performance.
The paper is organised as follows. In Section 2 the model of a cascade control process is introduced. Then a preliminary feedback is designed and a corresponding extended state observer is proposed. The stability of the closed-loop system is studied using Lyapunov tools and stability conditions with respect to the considered control structure are formulated. Simulation results are presented in Section 3 in order to illustrate the performance of the controller. In Section 4 extensive experimental results are discussed. Section 5 concludes the paper.
Controller and observer design
==============================
Dynamics of a perturbed cascaded system
---------------------------------------
Consider a second order fully actuated control system defined as follows $$\left\{ \begin{array}{cl}
\dot{x}_{1} & =x_{2},\\
\dot{x}_{2} & =Bu+h(x_{1},x_{2})+q(x_{1},x_{2},u,t),
\end{array}\right.\label{eq:general:nominal system}$$ where $x_{1},\,x_{2}\in\mathbb{R}^{n}$ are state variables, $B\in\mathbb{R}^{n\times n}$ is a non-singular input matrix while $u\in\mathbb{R}^{n}$ stands for an input. Functions $h:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{n}$ and $q:\mathbb{R}^{2n}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}$ denote known and unknown components of the dynamics, respectively. Next, it is assumed that input $u$ in is not directly accessible for a control purpose, however, it is governed by the following first order dynamics $$\dot{u}=T^{-1}\left(-u+v\right),\label{eq:general:input dynamics}$$ where $v\in\mathbb{R}^{n}$ is regarded as a real input and $T\in\mathbb{R}^{n\times n}$ is a diagonal matrix of positive time constants. In fact, both dynamics constitute a cascaded third order plant, for which the underlying component is represented by , while corresponds to stable input dynamics.
Control system design
---------------------
The control task investigated in this paper deals with tracking of a reference trajectory specified for an output of system (\[eq:general:nominal system\])-(\[eq:general:input dynamics\]), which is determined by $y:=x_1$. Simultaneously, it is assumed that variables $x_2$ and $u$ are unavailable for measurement and the only information is provided by the output.
To be more precise, we define an at least $C^3$ continuous reference trajectory $x_{d}(t):\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}$ and consider the output tracking error $\tilde{y}:=x_d-x_1$. Additionally, to quantify the difference between $u$ and $v$, we introduce the error $\tilde{u}:=v-u$. Since $v$ is viewed as an alternative input of the plant, one can rewrite (\[eq:general:nominal system\]) as $$\left\{ \begin{array}{cl}
\dot{x}_{1} & =x_{2},\\
\dot{x}_{2} & =Bv-B\tilde{u}+h+q.
\end{array}\right.\label{eq:general:nominal_system_input_v}$$ For control design purposes, the tracking error will be considered with respect to the state of system (\[eq:general:nominal\_system\_input\_v\]). Consequently, one defines $$e = \begin{bmatrix}e_1\\ e_2\end{bmatrix}:=\begin{bmatrix}\tilde{y}\\ \dot{\tilde{y}}\end{bmatrix}=\begin{bmatrix}x_d-x_1\\ \dot{x}_d-x_2\end{bmatrix}\in\mathbb{R}^{2n}.$$ Accordingly, taking the time derivative of $e$, one can obtain the following open-loop error dynamics $$\left\{ \begin{array}{cl}
\dot{e}_{1} & =e_{2},\\
\dot{e}_{2} & =\ddot{x}_{d}-Bv+B\tilde{u}-h-q.
\end{array}\right.\label{eq:general:tracking error dynamics}$$ In order to stabilise system (\[eq:general:tracking error dynamics\]) in a vicinity of zero, the following preliminary control law is proposed $$v:=B^{-1}\left(K_{p}\left(x_{d}-\hat{x}_{1}\right)+K_{d}\left(\dot{x}_{d}-\hat{x}_{2}\right)-h_{u}+\ddot{x}_{d}-w_c\right),\label{eq:general:control law}$$ where $K_{p},K_{d}\in\mathbb{R}^{n\times n}$ are diagonal matrices of constant positive gains, $\hat{x}_1\in\mathbb{R}^n$, $\hat{x}_2\in\mathbb{R}^n$ and $w_c\in\mathbb{R}^n$ denote estimates of states and a disturbance, respectively. These estimates are computed by an observer that is not yet defined. Term $h_{u}:\mathbb{R}^{4n}\rightarrow\mathbb{R}^{n}$ is a compensation function, designed in an attempt to attenuate the influence of $h$ on the closed-loop system dynamics, and is defined using available signals as follows $$h_{u}:=h_{1}(\hat{x}_{1},\hat{x}_{2})+h_{2}(x_{d},\dot{x}_{d}),\label{eq:general:known dynamics compensation}$$ while $h_1$ and $h_2$ satisfy $$h_{1}(x_{1},x_{2})+h_{2}(x_{1},x_{2}):=h.\label{eq:general:known dynamics}$$ Next, in order to simplify the design of an observer we rewrite dynamics (\[eq:general:nominal system\]). Firstly, we consider a new form which does not introduce any change to the system dynamics and is as follows $$\left\{ \begin{array}{cl}
\dot{x}_{1} & =x_{2},\\
\dot{x}_{2} & =Bu+h_u+h-h_{u}+q.
\end{array}\right.\label{eq:general:nominal system rewritten}$$ Secondly, according to active disturbance rejection methodology, it is assumed that $$z_{3}:=q+h-h_{u}$$ describes an augmented state which can be regarded as a total disturbance. Correspondingly, one can introduce extended state $z=\begin{bmatrix}z_{1}^{T} & z_{2}^{T} & z_{3}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{3n}$, where $z_1:=x_1$ and $z_2:=x_2$. As a result, the following extended form of dynamics can be established $$\left\{ \begin{array}{cl}
\dot{z}_{1} & =z_{2},\\
\dot{z}_{2} & =Bu+h_{u}+z_{3},\\
\dot{z}_{3} & =\dot{q}+\dot{h}-\dot{h}_{u}.
\end{array}\right.\label{eq:general:extended system-1}$$ Now, in order to estimate state $z$ we define the following Luenberger-like observer $$\left\{ \begin{array}{cl}
\dot{\hat{z}}_{1} & =K_{1}\left(z_{1}-\hat{z}_{1}\right)+\hat{z}_{2},\\
\dot{\hat{z}}_{2} & =K_{2}\left(z_{1}-\hat{z}_{1}\right)+\hat{z}_{3}+h_{u}+Bv,\\
\dot{\hat{z}}_{3} & =K_{3}\left(z_{1}-\hat{z}_{1}\right),
\end{array}\right.\label{eq:general:observer}$$ where $\hat{z}=\begin{bmatrix}\hat{z}_{1}^{T} & \hat{z}_{2}^{T} & \hat{z}_{3}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{3n}$ denotes the estimate of $z$ and $K_{1},K_{2},K_{3}\in\mathbb{R}^{n\times n}$ are diagonal matrices of positive gains of the observer which are chosen based on linear stability criteria. Since the estimates $\hat{z}$ are expected to converge to the real values of $z$, let the observation errors be expressed as $\tilde{z}:=z-\hat{z}$. Taking the time derivative of $\tilde{z}$, using (\[eq:general:extended system-1\]) and (\[eq:general:observer\]), and recalling that $\tilde{u}=v-u$, one obtains the following dynamics $$\dot{\tilde{z}}=H_{o}\tilde{z}+C_{0}B\tilde{u}+C_{1}\dot{z}_{3}\label{eq:general:observator error dynamics}$$ where $$\label{eq:general:Ho_def}
H_{o}=\begin{bmatrix}-K_{1} & I & 0\\
-K_{2} & 0 & I\\
-K_{3} & 0 & 0
\end{bmatrix}\in\mathbb{R}^{3n\times 3n},$$ $$C_{0}=\begin{bmatrix}0& -I& 0\end{bmatrix}^T,\ C_{1}=\begin{bmatrix}0& 0& I
\end{bmatrix}^T\in\mathbb{R}^{3n\times n},$$ while $I$ stands for the identity matrix of size $n\times n$. Next, we recall tracking dynamics (\[eq:general:tracking error dynamics\]) and feedback (\[eq:general:control law\]). It is proposed that the compensating term $w_c$ in (\[eq:general:control law\]), which partially rejects unknown disturbances, is defined by the estimate provided by observer (\[eq:general:observer\]), namely $w_c:=\hat{z}_3$. Consequently, by substituting (\[eq:general:control law\]) into (\[eq:general:tracking error dynamics\]) the following is obtained $$\dot{e}=H_{c}e+W_{1}\tilde{z}+C_{2}B\tilde{u},\label{eq:general:regulator error dynamics}$$ where $$\label{eq:general:Hc_def}
H_{c}=\begin{bmatrix}0 & I\\
-K_{p} & -K_{d}
\end{bmatrix},W_{1}=\begin{bmatrix}0 & 0 & 0\\
-K_{p} & -K_{d} & -I
\end{bmatrix},\ C_{2}=\begin{bmatrix}0\\
I
\end{bmatrix}\in\mathbb{R}^{2n\times n}$$ and $H_{c}$ is Hurwitz for $K_{p}\succ 0$ and $K_{d}\succ 0$.
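For illustration, a simple Euler-discretized implementation of the observer (\[eq:general:observer\]) and the feedback (\[eq:general:control law\]) with $w_c=\hat{z}_3$ could look as follows; the discretization scheme and the way $h_u$ is supplied are our own choices and are not prescribed by the paper.

```python
import numpy as np

def eso_step(z_hat, y, v, h_u, B, K1, K2, K3, dt):
    """One Euler step of the extended state observer; z_hat = (z1_hat, z2_hat, z3_hat)."""
    z1h, z2h, z3h = z_hat
    e1 = y - z1h                                  # output estimation error z1 - z1_hat
    z1h_next = z1h + dt * (K1 @ e1 + z2h)
    z2h_next = z2h + dt * (K2 @ e1 + z3h + h_u + B @ v)
    z3h_next = z3h + dt * (K3 @ e1)
    return z1h_next, z2h_next, z3h_next

def control_v(xd, xd_dot, xd_ddot, z_hat, h_u, B, Kp, Kd):
    """Preliminary control law with the disturbance estimate used as compensating term."""
    z1h, z2h, z3h = z_hat
    return np.linalg.solve(B, Kp @ (xd - z1h) + Kd @ (xd_dot - z2h) - h_u + xd_ddot - z3h)
```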
Further, in order to facilitate the design and analysis of the closed-loop system, we take advantage of a scaling operator defined by $$\Delta_m\left(\alpha\right):=\mathrm{diag}\left\{\alpha^{m-1}I,\, \alpha^{m-2}I,\, \ldots,\, I\right\}\in\mathbb{R}^{mn\times mn},$$ where $\alpha>0$ is a positive scalar. Then we define the following scaled tracking and observation errors $$\begin{aligned}
\bar{e}:=&\left(\kappa\omega\right)^{-1}\Delta_2\left(\kappa\omega\right)e,\label{eq:general:regulator auxiliary errors}\\
\bar{z}:=&\omega^{-2}\Delta_3\left(\omega\right)\tilde{z},\label{eq:general:observer auxiliary errors} \end{aligned}$$ where $\omega\in\mathbb{R}_{+}$ is a scaling parameter which modifies the bandwidth of the observer, while $\kappa\in\mathbb{R}_{+}$ denotes a relative bandwidth of the feedback determined with respect to $\omega$. Embracing this notation, one can introduce the following scaled gains $$\label{eq:design:scaled_gains}
\bar{K}_c:=\left(\kappa\omega\right)^{-1}K_c \Delta_2^{-1}\left(\kappa\omega\right),\, \bar{K}_o:=\omega^{-3}\Delta_3\left(\omega\right) \left[K_1^T\ K_2^T\ K_3^T\right]^T,$$ while $K_c:=\left[K_p\ K_d\right]\in\mathbb{R}^{n\times 2n}$. Additionally, exploring the relationships outlined in the Appendix, one can rewrite dynamics (\[eq:general:regulator error dynamics\]) and (\[eq:general:observator error dynamics\]) as follows $$\begin{aligned}
\dot{\bar{e}}=&\kappa\omega\bar{H}_{c}\bar{e}+\kappa^{-1}\omega\bar{W}_{1}\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}C_{2}B\tilde{u},\label{eq:general:regulator auxiliary error dynamics}\\
\dot{\bar{z}}=&\omega\bar{H}_{o}\bar{z}+\omega^{-1} C_{0}B\tilde{u}+\omega^{-2} C_{1}\dot{z}_{3},\label{eq:general:observator auxiliary error dynamics}\end{aligned}$$ with $\bar{H}_c$ and $\bar{H}_o$ being Hurwitz matrices of the forms (\[eq:general:Hc_def\]) and (\[eq:general:Ho_def\]), defined in terms of the scaled gains $\bar{K}_c$ and $\bar{K}_o$, respectively. Similarly, $\bar{W}_1$ corresponds to $W_1$ parameterised by the scaled gains. Since $\bar{H}_c$ and $\bar{H}_o$ are Hurwitz, the following Lyapunov equations are satisfied $$\bar{P}_c\bar{H}_c^{T}+\bar{H}_c\bar{P}_c=-\bar{Q}_c,\ \bar{P}_o\bar{H}_o^{T}+\bar{H}_o\bar{P}_o=-\bar{Q}_o \label{eq:general:Lyapunov equation}$$ for some symmetric, positive definite matrices $\bar{Q}_c,\, \bar{P}_c\in\mathbb{R}^{2n\times 2n}$ and $\bar{Q}_o,\, \bar{P}_o\in\mathbb{R}^{3n\times 3n}$.
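The scaling operator and the Lyapunov equations lend themselves to a quick numerical check. The sketch below (a hedged example, not part of the original design procedure) builds $\Delta_m(\alpha)$ with a Kronecker product, forms $\bar{H}_c$ and $\bar{H}_o$ from example scaled gains — here the auxiliary gains later used in the simulation study — and solves both Lyapunov equations with SciPy for the convenient assumed choice $\bar{Q}_c=I$, $\bar{Q}_o=I$; positive definite solutions confirm that the scaled matrices are Hurwitz.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def Delta(m, alpha, n):
    """Scaling operator diag(alpha^{m-1} I, ..., alpha I, I) of size mn x mn."""
    return np.kron(np.diag([alpha ** (m - 1 - i) for i in range(m)]), np.eye(n))

n = 1
I, O = np.eye(n), np.zeros((n, n))
Kp_b, Kd_b = 1.0, 2.0                 # scaled feedback gains (example values)
K1_b, K2_b, K3_b = 3.0, 3.0, 1.0      # scaled observer gains (example values)

H_c_bar = np.block([[O, I], [-Kp_b * I, -Kd_b * I]])
H_o_bar = np.block([[-K1_b * I, I, O],
                    [-K2_b * I, O, I],
                    [-K3_b * I, O, O]])

# Solve P H^T + H P = -Q with Q = I; a positive definite P certifies that H is Hurwitz.
for H in (H_c_bar, H_o_bar):
    P = solve_continuous_lyapunov(H, -np.eye(H.shape[0]))   # solves H P + P H^T = -Q
    assert np.all(np.linalg.eigvalsh(0.5 * (P + P.T)) > 0)
```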
Stability analysis of the closed-loop cascaded control system
-------------------------------------------------------------
Lyapunov stability of the closed-loop system is now considered. For this purpose, a state which consists of the tracking, observation and input errors is defined as $$\bar\zeta=\begin{bmatrix}\bar{e}^T&\bar{z}^T&\tilde{u}^T\end{bmatrix}^T\in\mathbb{R}^{6n}.\label{eq:general:stability:errors}$$ A positive definite function is proposed as follows $$V(\bar{\zeta})=\frac{1}{2}\bar{e}^{T}\bar{P}_c\bar{e}+\frac{1}{2}\bar{z}^{T}\bar{P}_o\bar{z}+\frac{1}{2}\tilde{u}^{T}\tilde{u}.\label{eq:general:stability:lyapunov proposition}$$ Its time derivative takes the form $$\begin{aligned}
\dot{V}(\bar{\zeta})=&-\frac{1}{2}\kappa\omega\bar{e}^{T}\bar{Q}_c\bar{e}-\frac{1}{2}\omega\bar{z}^{T}\bar{Q}_o\bar{z}+\kappa^{-1}\omega \bar{e}^T \bar{P}_c\bar{W}_1\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}\bar{e}^{T}\bar{P}_c C_2B\tilde{u}+\omega^{-1}\bar{z}^T\bar{P}_o C_o B\tilde{u}\\&+\omega^{-2}\bar{z}^{T}\bar{P}_o C_1\dot{z}_{3}-\tilde{u}^{T}T^{-1}\tilde{u}+\tilde{u}^{T}\dot{v}.\label{eq:general:stability:lyapunov derivative}
\end{aligned}$$ The derivative of the control law $v$ can be expressed in terms of $\bar{\zeta}$ (the details are outlined in the Appendix) as $$\dot{v}=B^{-1}\left(\omega^{3}\left(\kappa^3\bar{K}_c\bar{H}_c\bar{e}+\left(\kappa\bar{K}_c\bar{W}_1 \Delta_3\left(\kappa\right)+\bar{W}_2\Delta_3\left(\kappa\right)\bar{H}_o\right)\bar{z}\right)-\dot{h}_u+\dddot{x}_{d}\right),\label{eq:general:stability:control law derivative}$$ where $\bar{K}_c:=\left[\bar{K}_p\ \bar{K}_d\right]\in\mathbb{R}^{n\times 2n}$ and $\bar{W}_2:=\left[\bar{K}_c\ I \right]\in\mathbb{R}^{n\times{3n}}$. Substituting (\[eq:general:stability:control law derivative\]) and $\dot{z}_{3}$ into (\[eq:general:stability:lyapunov derivative\]) leads to $$\begin{aligned}
\dot{V}(\bar{\zeta})=&-\frac{1}{2}\kappa\omega\bar{e}^{T}\bar{Q}_c\bar{e}-\frac{1}{2}\omega\bar{z}^{T}\bar{Q}_o\bar{z}+\kappa^{-1}\omega\bar{e}^T\bar{P}_c\bar{W}_{1}\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}\bar{e}^{T}\bar{P}_c C_2B\tilde{u}+\omega^{-1}\bar{z}^T\bar{P}_o C_o B\tilde{u}\\&+\left(\kappa\omega\right)^3 \tilde{u}^{T}B^{-1}\bar{K}_c\bar{H}_c\bar{e}+\omega^3\tilde{u}^{T}B^{-1}\left(\kappa\bar{K}_c\bar{W}_1 \Delta_3\left(\kappa\right)+\bar{W}_2\Delta_3\left(\kappa\right)\bar{H}_o\right)\bar{z}-\tilde{u}^{T}T^{-1}\tilde{u}\\ &+\tilde{u}^{T}B^{-1}\dddot{x}_d+\tilde{u}^{T}B^{-1}\dot{h}_u+\omega^{-2}\bar{z}^{T}\bar{P}_o C_1\left(\dot h-\dot{h}_u\right)+\omega^{-2}\bar{z}^{T}\bar{P}_o C_1\dot{q}(z_{1},z_{2},u,t).\label{eq:general:stability:lyapunov derivative split}
\end{aligned}$$ In order to simplify the stability analysis, the derivative $\dot{V}$ will be decomposed into four terms defined as follows $$\begin{aligned}
Y_1:=&-\frac{1}{2}\kappa\omega\bar{e}^{T}\bar{Q}_c\bar{e}-\frac{1}{2}\omega\bar{z}^{T}\bar{Q}_o\bar{z}+\kappa^{-1}\omega\bar{e}^T\bar{P}_c\bar{W}_{1}\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}\bar{e}^{T}\bar{P}_c C_2B\tilde{u}+\omega^{-1}\bar{z}^T\bar{P}_o C_o B\tilde{u}\\&+\left(\kappa\omega\right)^3 \tilde{u}^{T}B^{-1}\bar{K}_c\bar{H}_c\bar{e}+\omega^3\tilde{u}^{T}B^{-1}\left(\kappa\bar{K}_c\bar{W}_1 \Delta_3\left(\kappa\right)+\bar{W}_2\Delta_3\left(\kappa\right)\bar{H}_o\right)\bar{z}-\tilde{u}^{T}T^{-1}\tilde{u},\\
Y_2:=& \tilde{u}^{T}B^{-1}\dddot{x}_d,\,Y_3:=\tilde{u}^{T}B^{-1}\dot{h}_u+\omega^{-2}\bar{z}^{T}\bar{P}_o C_1\left(\dot h-\dot{h}_u\right),\,Y_4:= \omega^{-2}\bar{z}^{T}\bar{P}_o C_1\dot{q}(z_{1},z_{2},u,t).
\end{aligned}$$ Each term of $\dot{V}$ will now be considered separately. First, we consider $Y_{1}$, which mainly represents the influence of the input dynamics on the nominal system. Negative definiteness of this term is the starting point for the further analysis of closed-loop stability. It can be rewritten in matrix notation as $$Y_{1} =-\frac{1}{2}\omega\bar{\zeta}^{T}Q_{Y1}\bar{\zeta},\label{eq:general:stability:Y1}$$ where $$\begin{split}
Q_{Y1}=\left[\begin{matrix}\kappa\bar{Q}_c &-\kappa^{-1}\bar{P}_c\bar{W}_1\Delta_3\left(\kappa\right)&Q_{Y1_{13}}\\-\kappa^{-1}\left(\bar{P}_c\bar{W}_1\Delta_3\left(\kappa\right)\right)^T&\bar{Q}_o&Q_{Y1_{23}}\\
Q_{Y1_{13}}^T&Q_{Y1_{23}}^T&2\omega^{-1} T^{-1}
\end{matrix}\right]\in\mathbb{R}^{6n\times 6n}
\end{split}$$ while $$\begin{aligned}
Q_{Y1_{13}} =& -\kappa^{-1}\omega^{-2}\bar{P}_c C_2B-\kappa^3\omega^2\left(B^{-1}\bar{K}_c\bar{H}_c\right)^T,\\
Q_{Y1_{23}}=&-\omega^{-2}\bar{P}_o C_o B-\omega^2\left(B^{-1}\left(\kappa\bar{K}_c\bar{W}_1 \Delta_3\left(\kappa\right)+\bar{W}_2\Delta_3\left(\kappa\right)\bar{H}_o\right)\right)^T.
\end{aligned}$$ It can be shown that there may exist sets $\Omega_v, \mathrm{K}_v \subset \mathbb{R}_{+}$ such that for every $\omega\in\Omega_v$ and $\kappa\in\mathrm{K}_v$ the matrix $Q_{Y1}$ remains positive definite. The domains of both $\Omega_v$ and $\mathrm{K}_v$ strongly depend on the time-constant matrix $T$ of the input dynamics and on the input matrix $B$ of the nominal system. In the absence of other disturbances, the system would remain asymptotically stable for such a choice of the parameters $\omega$ and $\kappa$. The influence of the remaining terms of $\dot{V}(\bar{\zeta})$ will be considered in terms of upper bounds imposed on them.
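A direct way to look for admissible pairs $(\omega,\kappa)$ is to evaluate the smallest eigenvalue of $Q_{Y1}$ on a grid. The sketch below assembles $Q_{Y1}$ from the blocks defined above for the scalar case $n=1$; the matrices $\bar{P}_c$, $\bar{P}_o$, $\bar{Q}_c$, $\bar{Q}_o$, $\bar{K}_c$, $\bar{H}_c$, $\bar{H}_o$, $\bar{W}_1$, $\bar{W}_2$ and the scalars $B$, $T$ are assumed to be supplied by the caller as NumPy arrays with the dimensions used in the text (for instance from the previous sketch), and the scanned ranges are arbitrary illustrative choices.

```python
import numpy as np

def lambda_min_QY1(omega, kappa, B, T_const,
                   Pc, Po, Qc, Qo, Kc_b, Hc_b, Ho_b, W1_b, W2_b, n=1):
    """Smallest eigenvalue of Q_Y1 for a candidate pair (omega, kappa); n = 1 sketch."""
    I = np.eye(n)
    D3k = np.kron(np.diag([kappa**2, kappa, 1.0]), I)         # Delta_3(kappa)
    C2 = np.vstack([np.zeros((n, n)), I])                      # [0; I]
    C0 = np.vstack([np.zeros((n, n)), -I, np.zeros((n, n))])   # [0; -I; 0]
    Q12 = -kappa**-1 * Pc @ W1_b @ D3k
    Q13 = (-kappa**-1 * omega**-2 * Pc @ C2 * B
           - kappa**3 * omega**2 * (Kc_b @ Hc_b / B).T)
    Q23 = (-omega**-2 * Po @ C0 * B
           - omega**2 * ((kappa * Kc_b @ W1_b @ D3k + W2_b @ D3k @ Ho_b) / B).T)
    QY1 = np.block([[kappa * Qc, Q12,   Q13],
                    [Q12.T,      Qo,    Q23],
                    [Q13.T,      Q23.T, 2.0 / (omega * T_const) * I]])
    return float(np.min(np.linalg.eigvalsh(0.5 * (QY1 + QY1.T))))

# Example scan: keep the pairs for which Q_Y1 is positive definite.
# admissible = [(w, k) for w in np.logspace(0, 3, 30) for k in (0.01, 0.1, 1.0)
#               if lambda_min_QY1(w, k, B=1.0, T_const=0.1, ...) > 0]
```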
\[assu:desired trajectory\]Let the desired trajectory $x_{d}$ be chosen such that the norms of $x_{d},\dot{x}_{d},\ddot{x}_{d},\dddot{x}_{d}$ are bounded by, respectively, the constant positive scalar values $x_{b0},x_{b1},x_{b2},x_{b3}\in\mathbb{R}_{+}$.
Establishing an upper bound for the norm of $Y_{2}$ is straightforward using the Cauchy-Schwarz inequality: $$\begin{aligned}
Y_{2} & =\tilde{u}^{T}B^{-1}\dddot{x}_{d},\nonumber \\
\left\Vert Y_{2}\right\Vert & \leq\left\Vert \tilde{u}\right\Vert \cdot\left\Vert B^{-1}\dddot{x}_{d}\right\Vert \nonumber \\
& \leq\left\Vert \bar{\zeta}\right\Vert \left\Vert B^{-1}\right\Vert x_{b3}.\label{eq:general:stability:Y2}\end{aligned}$$ Now, $Y_{3}$ is considered. This term comes from the imperfect compensation of the known dynamics in the nominal system and can be further split as follows $$Y_{31}:=\omega^{-2}\bar{z}^{T}\bar{P}_{o}C_{1}\left(\dot{h}-\dot{h}_{u}\right), Y_{32}:=\tilde{u}^{T}B^{-1}\dot{h}_{u}.\label{eq:general:stability:Y3}$$
\[assu:bounded dynamics\]Let the functions $h_{1}(a,b)$ and $h_{2}(a,b)$ be defined such that the norms of the partial derivatives\
$\frac{\partial}{\partial a}h_{1}(a,b)$, $\frac{\partial}{\partial b}h_{1}(a,b)$, $\frac{\partial}{\partial a}h_{2}(a,b)$, $\frac{\partial}{\partial b}h_{2}(a,b)$ are bounded for every $a,b\in\mathbb{R}^{n}$ by $h_{1a},h_{1b},h_{2a},h_{2b}\in\mathbb{R}_{+}$, respectively.
By applying the chain rule to compute the derivative of each function and substituting the difference between the desired trajectory and the tracking error for the state variables, the term $Y_{31}$ can be expressed as $$Y_{31}=\omega^{-2}\bar{z}^{T}\bar{P}_{o}C_{1}\left(W_{h1}\begin{bmatrix}\dot{x}_{d}\\\ddot{x}_{d}\end{bmatrix}
- W_{h2}\left(\kappa\omega\bar{H}_{c}\bar{e}+\kappa^{-1}\omega\bar{W}_{1}\Delta_3(\kappa)\bar{z}+\left(\kappa\omega\right)^{-1}C_{2}B\tilde{u}\right)+W_{h3}\left(\omega\bar{H}_{o}\bar{z}+\omega^{-1}C_{0}B\tilde{u}\right)\right),\label{eq:general:stability:Y31 equation}$$ where $$\begin{aligned}
W_{h1} & =\begin{bmatrix}\left(\frac{\partial h_{1}}{\partial z_{1}}+\frac{\partial h_{2}}{\partial z_{1}}-\frac{\partial h_{2}}{\partial x_{d}}-\frac{\partial h_{1}}{\partial\hat{z}_{1}}\right) & \left(\frac{\partial h_{1}}{\partial z_{2}}+\frac{\partial h_{2}}{\partial z_{2}}-\frac{\partial h_{2}}{\partial\dot{x}_{d}}-\frac{\partial h_{1}}{\partial\hat{z}_{2}}\right)\end{bmatrix}, \nonumber \\
W_{h2} & =\begin{bmatrix}\left(\frac{\partial h_{1}}{\partial z_{1}}+\frac{\partial h_{2}}{\partial z_{1}}-\frac{\partial h_{1}}{\partial\hat{z}_{1}}\right) & \kappa\omega\left(\frac{\partial h_{1}}{\partial z_{2}}+\frac{\partial h_{2}}{\partial z_{2}}-\frac{\partial h_{1}}{\partial\hat{z}_{2}}\right)\end{bmatrix}, \nonumber \\
W_{h3} & =\begin{bmatrix}\frac{\partial h_{1}}{\partial\hat{z}_{1}} & \omega\frac{\partial h_{1}}{\partial\hat{z}_{2}} & 0\end{bmatrix}. \nonumber\end{aligned}$$ This term can be bounded by $$\begin{aligned}
\left\Vert Y_{31}\right\Vert \leq & \omega^{-2}\left\Vert \bar{\zeta}\right\Vert \left\Vert \bar{P}_{o}C_{1}\right\Vert \left(\left(2h_{1a}+2h_{2a}\right)x_{b1}+\left(2h_{1b}+2h_{2b}\right)x_{b2}\right) \nonumber \\
& +\left\Vert \bar{\zeta}\right\Vert ^{2}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left(\omega^{-1}\kappa\left\Vert W_{h2b}\right\Vert \left(\left\Vert \bar{H}_{c}\right\Vert +\left\Vert \bar{W}_{1}\right\Vert \right)+\omega^{-3}\kappa^{-1}\left\Vert W_{h2b}\right\Vert \left\Vert C_{2}B\right\Vert\right) \nonumber \\
& +\left\Vert \bar{\zeta}\right\Vert ^{2}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left(\omega^{-1}\left\Vert W_{h3b}\right\Vert \left\Vert \bar{H}_{o}\right\Vert +\omega^{-2}\left\Vert B\right\Vert h_{1b} \right). \label{eq:general:stability:Y31 bound}\end{aligned}$$ where $W_{h2b}=\begin{bmatrix}2h_{1a} + h_{2a} & \kappa\omega\left(2h_{1b} + h_{2b}\right)\end{bmatrix}$ and $W_{h3b} = \begin{bmatrix}h_{1a} & \omega h_{1b} & 0 \end{bmatrix}$. Having established an upper bound for $Y_{31}$, we can perform a similar analysis for $Y_{32}$. Let $Y_{32}$ be rewritten as $$Y_{32} = \tilde{u}^T B^{-1} \left(W_{h4}\begin{bmatrix}\dot{x}_{d}\\\ddot{x}_{d}\end{bmatrix}
- W_{h5}\left(\kappa\omega\bar{H}_{c}\bar{e}+\kappa^{-1}\omega\bar{W}_{1}\Delta_3(\kappa)\bar{z}+\left(\kappa\omega\right)^{-1}C_{2}B\tilde{u}\right)-W_{h6}\left(\omega\bar{H}_{o}\bar{z}+\omega^{-1}C_{0}B\tilde{u}\right)\right), \label{eq:general:stability:Y32 equation}$$ where $$\begin{aligned}
W_{h4} & =\begin{bmatrix}\left(\frac{\partial h_{2}}{\partial x_{d}}+\frac{\partial h_{1}}{\partial \hat{z}_{1}}\right) & \left(\frac{\partial h_{2}}{\partial \dot{x}_{d}}+\frac{\partial h_{1}}{\partial \hat{z}_{2}}\right)\end{bmatrix}, \nonumber \\
W_{h5} & =\begin{bmatrix}\frac{\partial h_{1}}{\partial \hat{z}_{1}} & \kappa\omega\frac{\partial h_{1}}{\partial \hat{z}_{2}}\end{bmatrix}, \nonumber \\
W_{h6} & = W_{h3}. \nonumber \end{aligned}$$ An upper bound on the norm of $Y_{32}$ can be expressed by the following inequality $$\begin{aligned}
\left\Vert Y_{32} \right\Vert \leq & \omega^{-2}\left\Vert \bar{\zeta}\right\Vert \left\Vert \bar{P}_{o}C_{1}\right\Vert \left(q_{z1}x_{b1}+q_{z2}x_{b2}+\left\Vert B\right\Vert q_{z2}+\left\Vert T^{-1}\right\Vert q_{u}+\left\Vert \bar{P}_{o}C_{1}\right\Vert q_{t}\right) \nonumber \\
& +\kappa\omega^{-1}\left\Vert \bar{\zeta}\right\Vert ^{2}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left\Vert W_{q2}\right\Vert \left(\left\Vert \bar{H}_{c}\right\Vert +\left\Vert \bar{W}_{1}\right\Vert \right)\label{eq:general:stability:Y_32 bound}\end{aligned}$$ where $W_{h5b} = \begin{bmatrix}h_{1a} & \kappa\omega h_{1b}\end{bmatrix}$ and naturally $W_{h6b}=W_{h3b}$. A remark can now be made about the structure of $W_{h2}$, $W_{h3}$, $W_{h4}$ and $W_{h5}$. It may be recognized that the elements of these matrices can be divided into derivatives calculated with respect to the first argument and derivatives calculated with respect to the second argument. The former are not scaled by either the observer or the regulator bandwidth, while the latter are scaled by a $\kappa\omega$ or $\omega$ factor. As will be shown later in the analysis, this difference has a significant influence on the system stability and on the ability of the controller to reduce tracking errors.
Lastly, an upper bound needs to be established for $Y_{4}$ to complete the stability analysis. This final term comes from the nominal disturbance $q(z_1, z_2, u, t)$ alone. By the chain rule it can be shown that $$Y_{4} = \omega^{-2}\bar{z}^{T}\bar{P}_{o}C_{1}\left(W_{q1}\begin{bmatrix}\dot{x}_{d}\\\ddot{x}_{d}\end{bmatrix}+\kappa\omega W_{q2}\bar{H}_{c}\bar{e}+\kappa^{-1}\omega W_{q2}\bar{W}_{1}\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}W_{q2}C_{2}B\tilde{u}-\frac{\partial q}{\partial u}T^{-1}\tilde{u}+\frac{\partial q}{\partial t}\right),\label{eq:general:stability:Y4}$$ where $W_{q1}=\begin{bmatrix}\frac{\partial q}{\partial z_1} & \frac{\partial q}{\partial z_2}\end{bmatrix}$ and $W_{q2} = \begin{bmatrix}\frac{\partial q}{\partial z_1} & \kappa\omega\frac{\partial q}{\partial z_2}\end{bmatrix}$.
\[assu:disturbance derivatives\]Let the partial derivatives $\frac{\partial}{\partial z_{1}}q(z_{1},z_{2},u,t),\frac{\partial}{\partial z_{2}}q(z_{1},z_{2},u,t),\frac{\partial}{\partial u}q(z_{1},z_{2},u,t),\frac{\partial}{\partial t}q(z_{1},z_{2},u,t)$ be defined on the whole domain and let their norms be bounded by the constants $q_{z1},q_{z2},q_{u}$ and $q_{t}\in\mathbb{R}_{+}$, respectively.
Under Assumption \[assu:disturbance derivatives\] the norm of $Y_{4}$ is bounded by $$\begin{aligned}
\left\Vert Y_{4}\right\Vert & \leq\omega^{-2}\left\Vert \bar{\zeta}\right\Vert \left\Vert \bar{P}_{o}C_{1}\right\Vert \left(q_{z1}x_{b1}+q_{z2}x_{b2}\right)+\omega^{-1}\left\Vert \bar{\zeta}\right\Vert ^{2}\left\Vert \bar{P}_{o}C_{1}W_{5b}\right\Vert \left(\left\Vert \bar{H}\right\Vert +\omega^{-2}\left\Vert \bar{C}B\right\Vert \right)\label{eq:general:stability:Y4 bound}\\
& +\omega^{-2}\left\Vert \bar{\zeta}\right\Vert ^{2}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left(\left\Vert T^{-1}\right\Vert q_{u}+q_{t}\right).\nonumber \end{aligned}$$ With general bounds established for each term of $\dot{V}(\bar{\zeta})$, conclusions concerning system stability can finally be drawn. For convenience, let an auxiliary measure of the negative definiteness of the Lyapunov function derivative, $\Lambda_V$, and of its perturbation, $\Gamma_V$, be defined as $$\begin{aligned}
\Lambda_V := & \frac{1}{2}\omega\lambda_{\min}(Q_{Y1}) -\kappa\omega\left\Vert B^{-1}\right\Vert \left\Vert W_{h5b}\right\Vert \left(\left\Vert \bar{H}_{c}\right\Vert +\left\Vert \bar{W}_{1}\right\Vert \right)-\omega\left\Vert B^{-1}\right\Vert \left\Vert W_{h6b}\right\Vert \left\Vert \bar{H}_{o}\right\Vert \nonumber \\
& -2h_{1b}-\omega^{-1}\left\Vert W_{h3b}\right\Vert \left\Vert \bar{H}_{o}\right\Vert -\kappa\omega^{-1}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left(\left\Vert W_{h2b}\right\Vert +\left\Vert W_{q2}\right\Vert \right)\left(\left\Vert \bar{H}_{c}\right\Vert +\left\Vert \bar{W}_{1}\right\Vert \right) \nonumber \\
& -\omega^{-2}\left\Vert B\right\Vert h_{1b}-\omega^{-3}\kappa^{-1}\left\Vert W_{h2b}\right\Vert \left\Vert C_{2}B\right\Vert, \label{eq:general:stability:Lyapunov negative definiteness} \\
\Gamma_V := & \left\Vert B^{-1}\right\Vert \left\Vert \left(h_{2a}+h_{1a}\right)x_{b1}+\left(h_{2b}+h_{1b}\right)x_{b2}\right\Vert \nonumber \\
& +\omega^{-2}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left(q_{z1}x_{b1}+q_{z2}x_{b2}+\left\Vert B\right\Vert q_{z2}+\left\Vert T^{-1}\right\Vert q_{u}+\left\Vert \bar{P}_{o}C_{1}\right\Vert q_{t}\right), \label{eq:general:stability:Lyapunov perturbation}\end{aligned}$$ where $\lambda_{\min}(Q)$ stands for the smallest eigenvalue of the matrix $Q$. An upper bound on $\dot{V}(\bar\zeta)$ can then be expressed as $$\dot{V}(\bar\zeta) \leq -\Lambda_V\left\Vert \bar\zeta \right\Vert^2 + \Gamma_V\left\Vert \bar\zeta \right\Vert. \label{eq:general:stability:Lyapunov derivative bound}$$ Now, the following conditions can be stated:
1. \[enum:general:stability:condition 1\] $\omega\in\Omega_v, \kappa\in\mathrm{K}_v$,
2. \[enum:general:stability:condition 2\] $\Lambda_V > 0$,
and the following theorem concludes the presented analysis.
The perturbed cascade system (\[eq:general:nominal system\])-(\[eq:general:input dynamics\]) satisfying Assumptions \[assu:desired trajectory\]-\[assu:disturbance derivatives\], controlled by the feedback (\[eq:general:control law\]) supported by the extended state observer (\[eq:general:observer\]), remains practically stable if there exist symmetric, positive definite matrices $\bar{Q}_o$ and $\bar{Q}_c$ such that conditions \[enum:general:stability:condition 1\] and \[enum:general:stability:condition 2\] are simultaneously satisfied. The scaled errors $\bar\zeta$ are then ultimately bounded as follows $$\label{eq:control:conclusion}
\lim_{t\rightarrow\infty}\left\Vert \bar{\zeta}(t)\right\Vert\leq \frac{\Gamma_{V}}{\Lambda_{V}}.$$
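As a small illustration of how the theorem is used, the sketch below evaluates the guaranteed asymptotic bound for given values of $\Lambda_V$ and $\Gamma_V$; the numbers are purely illustrative assumptions, since in a real design both quantities would be computed from the expressions above.

```python
def ultimate_bound(Lambda_V, Gamma_V):
    """Asymptotic bound on ||zeta_bar|| guaranteed by the practical-stability theorem."""
    if Lambda_V <= 0.0:
        raise ValueError("Lambda_V must be positive: conditions 1-2 are not satisfied.")
    return Gamma_V / Lambda_V

print(ultimate_bound(Lambda_V=12.5, Gamma_V=0.4))   # -> 0.032 (illustrative numbers)
```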
The foregoing proposition remains valid only if Assumptions \[assu:desired trajectory\]-\[assu:disturbance derivatives\] are satisfied. While Assumption \[assu:desired trajectory\] concerns the desired trajectory only and can easily be fulfilled for any system with state $x_1$ defined on $\mathbb{R}^n$, a closer look at the remaining assumptions ought to be taken now. Similar in nature, both concern imperfectly known parts of the system dynamics, the difference being whether an attempt to implicitly compensate these dynamics is made or not. As a known dynamic term satisfying Assumption \[assu:bounded dynamics\] can also be treated as an unknown disturbance, without loss of generality only Assumption \[assu:disturbance derivatives\] has to be commented on here. It can be noted that for many commonly considered systems this assumption cannot be satisfied. A mechanical system equipped with revolute kinematic pairs is an example of such a system: due to Coriolis and centrifugal forces, its dynamics have neither a bounded time derivative nor a bounded partial derivative with respect to the second state variable. Engineering practice shows, nonetheless, that for systems in which cross-coupling is insignificant due to a proper mass distribution, this assumption can be approximately satisfied, at least in a bounded set of the state space, and the stability analysis holds. The requirement that the partial derivatives of any disturbance in the system be bounded is a restrictive one, yet less conservative than the boundedness of the time derivative commonly assumed in ADRC analyses. In this sense, the presented analysis is less restrictive than those considered in the literature, and it can be expected that the enforced assumptions can be better justified.
Numerical simulations
=====================
In an attempt to further investigate the behaviour of the system in the presence of unmodelled dynamics governing the input signal, numerical simulations have been conducted. The model of the system has been implemented in the Matlab-Simulink environment. The second-order, single-degree-of-freedom system and the first-order input dynamics have been modelled according to the following equations $$\left\{ \begin{array}{cl}
\dot{x}_{1} & =x_{2},\\
\dot{x}_{2} & =u,
\end{array}\right.\label{eq:simulation:system}$$ where $$\dot{u}=\frac{1}{T}\left(-u+v\right)\label{eq:simulation:input}$$ and $v$ is the controllable input of the system. The parameters $T$ and $\omega$ were varied in the simulations to investigate how they affect the closed-loop stability and the tracking accuracy. The chosen gains of the controller and observer are presented in Table \[tab:simulation:gains\]. The desired trajectory $x_d$ was selected as a sine wave with unit amplitude and a frequency of $\unit[\frac{10}{2\pi}]{Hz}$.
$\bar{K}_{1}$ $\bar{K}_{2}$ $\bar{K}_{3}$ $\bar{K}_{p}$ $\bar{K}_{d}$ $\kappa$
--------------- --------------- --------------- --------------- --------------- ----------
$3$ $3$ $1$ $1$ $2$ $0.01$
: Auxiliary gains of the observer and controller\[tab:simulation:gains\]
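The closed loop described above can also be reproduced outside Simulink with a few lines of explicit-Euler integration. The following sketch is a minimal re-implementation under stated assumptions: it uses the auxiliary gains of Table \[tab:simulation:gains\] together with the bandwidth scaling of the design section, $h_u$ is zero because the nominal model contains no known dynamics, and the particular values of $\omega$, $T$ and the integration step are placeholders, since the values used for the individual runs are reported with the figures.

```python
import numpy as np

# bandwidths and actuator time constant (placeholder values for one run)
omega, kappa, T = 100.0, 0.01, 0.01
K1, K2, K3 = 3 * omega, 3 * omega**2, 1 * omega**3        # observer gains
Kp, Kd = 1 * (kappa * omega)**2, 2 * (kappa * omega)       # feedback gains
B = 1.0                                                    # double-integrator input gain

dt, t_end = 1e-4, 10.0
x = np.zeros(2)        # plant state [x1, x2]
u = 0.0                # actuator state (unmodelled first-order lag)
z_hat = np.zeros(3)    # extended state observer
e1_log, e2_log, v_log = [], [], []

for k in range(int(t_end / dt)):
    t = k * dt
    xd, xd_d, xd_dd = np.sin(10 * t), 10 * np.cos(10 * t), -100 * np.sin(10 * t)

    # feedback with disturbance compensation w_c = z3_hat (h_u = 0 here)
    v = (Kp * (xd - z_hat[0]) + Kd * (xd_d - z_hat[1]) + xd_dd - z_hat[2]) / B

    # extended state observer update
    e_obs = x[0] - z_hat[0]
    z_hat = z_hat + dt * np.array([K1 * e_obs + z_hat[1],
                                   K2 * e_obs + z_hat[2] + B * v,
                                   K3 * e_obs])

    # plant preceded by the unmodelled input dynamics
    u += dt * (-u + v) / T
    x = x + dt * np.array([x[1], u])

    e1_log.append(xd - x[0]); e2_log.append(xd_d - x[1]); v_log.append(v)
```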
Selected results of the simulations are presented in Figs. \[fig:simulation:T01 adrc\]-\[fig:simulation:T1 pd\]. Tracking errors of the two state variables are shown in the plots: the error of $x_{1}$ is drawn with a solid line, while $e_{2}$ is plotted with a dashed line in each figure. The integral of the squared error $e_1$ (ISE criterion) and the integral of the squared control signal $v$ (ISC criterion) have been calculated for each simulation and are reported above the plots to quantify the obtained tracking results. Tests were performed for different values of $T$ and $\omega$, as well as with the compensation term $w_c=\hat{z}_{3}$ enabled or disabled. The simulation results clearly confirm the existence of an upper bound of the admissible set $\Omega_v$, as predicted by the stability analysis. As expected, the value of this bound decreases as the time constant $T$ increases. In the conducted simulations it was not possible to observe or confirm the existence of any lower bound on $\Omega_v$, and stability of the system was maintained for arbitrarily small $\omega$. Secondly, the influence of the disturbance rejection term $\hat{z}_{3}$ is clearly visible and twofold. For $\omega$ chosen to satisfy stability condition \[enum:general:stability:condition 1\], it can be observed that the presence of the disturbance estimate significantly decreases the tracking error $e_2$ caused by the input dynamics that were not modelled during the controller synthesis. The residual value of the error $e_2$ becomes smaller for higher values of the bandwidth $\omega$. The error trajectory $e_1$ is also slightly modified; however, this effect is negligible according to the ISE criterion. Nonetheless, the use of the disturbance estimate leads to a significant shrinkage of the set $\Omega_v$. It is plainly visible that removing the $\hat{z}_{3}$ estimate may recover stability of the system in comparison with the simulation scenarios obtained using the corresponding ADRC controller.
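For completeness, the two scalar criteria quoted above the plots can be computed directly from the logged signals; the helper below assumes uniformly sampled traces and a simple rectangle rule, which is sufficient for comparing runs.

```python
import numpy as np

def ise(e1, dt):
    """Integral of the squared tracking error e1 (ISE criterion)."""
    return float(np.sum(np.asarray(e1) ** 2) * dt)

def isc(v, dt):
    """Integral of the squared control signal v (ISC criterion)."""
    return float(np.sum(np.asarray(v) ** 2) * dt)

# e.g. ise(e1_log, dt), isc(v_log, dt) with the traces from the previous sketch
```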
Experimental results
====================
Practical experiments have been undertaken in order to further investigate the considered control problem. All experiments were carried out using the robotic telescope mount developed at the Institute of Automatic Control and Robotics of Poznan University of Technology, [@KPKJPKBJN:2019]. The plant consists of a robotic mount and an astronomical telescope with a mirror of diameter 0.5 m. The robotic mount alone includes two axes driven independently by $\unit[24]{V}$ permanent magnet synchronous motors (PMSM) with high-precision ring encoders producing absolute position measurements with 32-bit resolution. The control algorithms have been implemented in C++ on a Texas Instruments AM4379 Sitara processor with an ARM Cortex-A9 core clocked at $\unit[600]{MHz}$. Besides the control structure, the prepared firmware contains several additional blocks necessary for conducting astronomical research. The controller itself is implemented in a cascade form which consists of independent current and position loops. Both loops run simultaneously at a frequency of $\unit[10]{kHz}$. The current loop, designed to precisely track the desired torque of the motor, employs the Park-Clarke transformation of the measured phase currents to express the motor dynamics in *q-d* coordinates. Both the *q* and *d* axes are then controlled by independent PI regulators with a feedforward term and anti-windup correction, which satisfy the following equation $$\begin{aligned}\dot{v} & =k_{i}\left(\tilde{i}-k_{s}\left(k_{p}\tilde{i}+v+u_{r}-\mathrm{sat}\left(k_{p}\tilde{i}+v+u_{r},U_{m}\right)\right)\right),\\
u & =\mathrm{sat}\left(k_{p}\tilde{i}+v+u_{r},U_{m}\right),
\end{aligned}
\label{eq:experiments:current loop}$$ where $\tilde{i}$ stands for the current tracking error, $v$ is the integrator state, $u$ is the regulator output, $u_{r}$ is the feedforward term, $k_{p}$, $k_{i}$ and $k_{s}$ are positive regulator gains, and finally $\mathrm{sat}(u^{*},U_{m})$ is the saturation function limiting the signal $u^{*}$ to the value $U_{m}$. The output voltage $u$ is generated using a PWM output. The current in the $d$ axis is stabilised at zero, while the current in the *q* axis tracks the desired current of the axis. The relation between the desired torque and the desired current is modelled as a constant gain equal to $\unit[2.45]{\frac{Nm}{A}}$. The desired torque is computed in the position loop by the active disturbance rejection based controller designed for the second-order mechanical system modelled as follows $$\left\{ \begin{array}{cl}
\dot{x}_{1} & =x_{2}\\
\dot{x}_{2} & =B\tau+\smash{\underbrace{f_{c}\cdot\mathrm{tanh}(f_{t}\cdot x_{2})}_{h(x_{2})}},
\end{array}\right.\label{eq:experiment:model}$$ where $x_{1}\in\mathbb{R}^2$ and $x_{2}\in\mathbb{R}^2$ are the positions and velocities of the axes, $B$ is the input matrix with diagonal coefficients $B_{1,1}=\frac{1}{5}$, $B_{2,2}=\frac{1}{30}$, $f_c$ is the constant positive Coulomb friction coefficient, while $f_{t}=10^{3}$ is a scaling term which defines the steepness of the friction model. The velocity of each axis is approximated in the experiments using either the observer estimate $\hat{z}_{2}$ or the desired trajectory derivative $\dot{x}_{d}$. The assumed model of the friction force is strongly local, in the sense that different values of $f_{c}$ are required for different accelerations at the time instant when the sign of the velocity changes. This locality was overcome during the experiments by manual changes of the $f_{c}$ coefficient. While the torque $\tau$ generated by the motor is treated as the input signal of the mechanical system, there exist residual dynamics defined by the current loop which are not modelled in the position loop. Here, we assume that these dynamics can be approximated by the first-order input model and thus we can infer stability according to the mathematical analysis considered in Section 2. Other disturbances come chiefly from the flexibility of the mount, ignored cross-coupling reactions between the joints, and torque ripples generated by the synchronous motors. Though some of these disturbances do not globally satisfy the assumptions accepted for the theoretical analysis of the system stability, in the considered scenario the influence of these dynamics is insignificant. Due to the small desired velocities chosen in the experiment, these assumptions can be approximately satisfied here. All gains of the controllers chosen for the experiments are collected in Table \[tab:experiment:gains\].
Horizontal axis Vertical axis
--------- ------------------ -------------------
$K_{1}$ $1.2\cdot10^{3}$ $2.4\cdot10^{2}$
$K_{2}$ $5.7\cdot10^{5}$ $2.28\cdot10^{4}$
$K_{3}$ $10^{8}$ $0.8\cdot10^{6}$
$K_{p}$ $225$ $225$
$K_{d}$ $24$ $24$
: \[tab:experiment:gains\]Gains of the controllers and observers
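To make the cascade concrete, the sketch below gives a discrete-time version of the current-loop law with output saturation and back-calculation anti-windup, together with the constant torque-to-current gain used by the position loop. The 10 kHz sampling period matches the reported loop rate; the controller gains and the saturation level (taken here as the 24 V supply) are assumptions, since their experimental values are not listed.

```python
import numpy as np

DT = 1e-4  # 10 kHz current loop

def sat(u_star, U_m):
    return float(np.clip(u_star, -U_m, U_m))

class CurrentLoopPI:
    """PI current regulator with feedforward and back-calculation anti-windup."""
    def __init__(self, kp, ki, ks, U_m):
        self.kp, self.ki, self.ks, self.U_m = kp, ki, ks, U_m
        self.v = 0.0                              # integrator state

    def step(self, i_ref, i_meas, u_r=0.0):
        i_err = i_ref - i_meas                    # current tracking error
        u_unsat = self.kp * i_err + self.v + u_r
        u = sat(u_unsat, self.U_m)                # voltage command sent to the PWM stage
        # integrator update with anti-windup correction k_s
        self.v += DT * self.ki * (i_err - self.ks * (u_unsat - u))
        return u

# q-axis loop tracking a desired torque via the constant 2.45 Nm/A gain
pi_q = CurrentLoopPI(kp=5.0, ki=200.0, ks=1.0, U_m=24.0)   # placeholder gains
tau_des = 0.5                                              # Nm, example request
u_q = pi_q.step(i_ref=tau_des / 2.45, i_meas=0.0)
```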
Here we present selected results of the experiments. In the investigated experimental scenarios both axes were in motion simultaneously and the desired trajectory was designed as a sine wave with a period of $\unit[30]{s}$ and a maximum velocity of $50v_{s}$ in the first experiment and $500v_{s}$ in the second, where $v_{s}=\unit[7.268\cdot10^{-5}]{\frac{rad}{s}}$ stands for the nominal angular velocity of stars in the night sky.
During system operation, significant changes of the friction forces are clearly visible and the influence of the compensation term can be easily noticed. Since the friction terms vary significantly around zero velocity, the tracking accuracy is decreased there; in such a case the disturbance estimation is not performed fast enough. Furthermore, in the considered application one cannot select larger observer gains due to the additional dynamics imposed by the actuator and delays in the control loop. Here, one can recall the bound (\[eq:control:conclusion\]), which clearly states that the tracking precision depends on the magnitude of $\Gamma_V$. Thus, one can expect the tracking accuracy to increase in operating conditions where the disturbances become slowly time-varying. This is well illustrated in the experiments, where the friction terms change over a wide range.
Each experiment presents results obtained with different approaches to the design of the $h_{u}$ term. Once again, the integral of the squared error was calculated for each of the presented plots to ease the evaluation of the obtained results.
A series of conclusions can be drawn from the presented results. Due to the inherently more disturbed dynamics of the horizontal axis, hardly any improvement from friction compensation is achieved for slow trajectories. Meanwhile, the compensation term based on the desired trajectory effectively decreases the tracking error bound in all other experiments. As may be expected, the compensation function based on estimates of the state variables is unable to provide acceptable tracking quality due to the inherent noise in the signal and the existence of the input dynamics. It can be noted that in the first experiment the friction compensation term allows one to decrease the bound of the tracking error, while the overall quality expressed by the ISE criterion is worse in comparison to that obtained in the experiment without the corresponding term in the feedback. This behaviour is not seen in the second experiment, in which a significant improvement was obtained for both axes in terms of the error bound as well as the ISE criterion.
Conclusions
===========
This paper is focused on the application of an ADRC controller to a class of second-order systems subject to differentiable disturbances. In particular, the system is analysed taking into account the presence of first-order input dynamics and unmodelled terms which may include cross-coupling effects between the state variables. By means of a Lyapunov analysis, general conditions for practical stability are discussed. It is proved that, even in the presence of additional input dynamics, the boundedness of the partial derivatives of the total disturbance can be a sufficient requirement to guarantee stability of the closed-loop system.
Using numerical simulations, the considered controller is compared against a simple PD-based regulator. The obtained results confirm that in the presence of input dynamics the admissible bandwidth of the extended observer is limited, which restricts the effectiveness of the ADRC approach. Lastly, practical results of employing the ADRC regulator in the task of trajectory tracking for a robotised astronomical telescope mount are presented. In this application, it is assumed that friction effects are modelled inaccurately and the local drive control loop is treated as unknown input dynamics. The obtained results illustrate that the considered control algorithm can provide high tracking accuracy.
Further research on this topic may include attempts to explore in more detail the conditions for a feasible selection of the observer parameters in order to guarantee the stability of the closed-loop system. Other forms of input dynamics and observer models can also be considered in future work.
Appendix
========
Selected properties of scaled dynamics
--------------------------------------
Assuming that the errors and gains are scaled according to the definitions introduced in the controller design section, the following relationships are satisfied: $$\begin{aligned}
\Delta_2\left(\kappa\omega\right)H_c \Delta_2^{-1}\left(\kappa\omega\right) =\kappa\omega\bar{H}_c,\ \Delta_3\left(\omega\right)H_o \Delta_3^{-1}\left(\omega\right) =\omega\bar{H}_o,\\ \Delta_2\left(\kappa\omega\right)W_1=W_1,\,
W_1\Delta_3^{-1}\left(\omega\right)=\bar{W}_1\Delta_3\left(\kappa\right)\\
W_2=\bar{W}_2\Delta_3\left(\kappa\omega\right).\label{eq:app:scalled_terms}
\end{aligned}$$
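These identities are easy to verify numerically; a quick check of the first one for $n=1$, with arbitrary positive test values, is sketched below.

```python
import numpy as np

n, omega, kappa = 1, 50.0, 0.2            # arbitrary test values
I = np.eye(n)
Kp, Kd = 4.0, 3.0                         # unscaled feedback gains (placeholders)
a = kappa * omega
H_c = np.block([[0 * I, I], [-Kp * I, -Kd * I]])
H_c_bar = np.block([[0 * I, I], [-(Kp / a**2) * I, -(Kd / a) * I]])
D2 = np.kron(np.diag([a, 1.0]), I)        # Delta_2(kappa * omega)

assert np.allclose(D2 @ H_c @ np.linalg.inv(D2), a * H_c_bar)
```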
Computation of $\dot{v}$
------------------------
Taking advantage of the estimate $\hat{z}$ and assuming that $w_c:=\hat{z}_3$, one can rewrite the control law as follows $$v=B^{-1}\left(K_c e + K_c \begin{bmatrix}\tilde{z}_1^T&\tilde{z}_2^T \end{bmatrix}^T-h_u+\ddot{x}_d-\hat{z}_3\right),$$ where $K_c := \left[K_p\ K_d\right]$. Equivalently, one has $$v=B^{-1}\left(K_c e + W_2\tilde{z}-h_u+\ddot{x}_d-z_3\right).$$ Consequently, the time derivative of $v$ satisfies $$\begin{aligned}
\dot{v}&=B^{-1}\left(K_c \dot{e}+W_2\dot{\tilde{z}}-\dot{h}_u+\dddot{x}_d-\dot{z}_3\right){\stackrel{(\ref{eq:general:regulator error dynamics}),(\ref{eq:general:observator error dynamics})}{=}}B^{-1}\left(K_c H_ce+K_cW_1\tilde{z}+K_c C_2B\tilde{u}+W_2H_o\tilde{z}\right.\\ &\quad\left.+W_2C_oB\tilde{u}+W_2C_1\dot{z}_3-\dot{z}_3-\dot{h}_u+\dddot{x}_d\right)=B^{-1}\left(K_c H_ce+K_cW_1\tilde{z}+W_2H_o\tilde{z}-\dot{h}_u+\dddot{x}_d\right).
\end{aligned}$$
Computations of $Y_3$ and $Y_4$
-------------------------------
By the chain rule it can be shown that $$\dot{h}_1(z_1, z_2) = \begin{bmatrix}\frac{\partial h_{1}}{\partial z_{1}} & \frac{\partial h_{1}}{\partial z_{2}}\end{bmatrix}\begin{bmatrix}\dot{x}_{d}\\
\ddot{x}_{d}
\end{bmatrix}-\begin{bmatrix}\frac{\partial h_{1}}{\partial z_{1}} & \kappa\omega\frac{\partial h_{1}}{\partial z_{2}}\end{bmatrix}\dot{\bar{e}},\
\dot{h}_2(z_1,z_2) = \begin{bmatrix}\frac{\partial h_{2}}{\partial z_{1}} & \frac{\partial h_{2}}{\partial z_{2}}\end{bmatrix}\begin{bmatrix}\dot{x}_{d}\\
\ddot{x}_{d}
\end{bmatrix}-\begin{bmatrix}\frac{\partial h_{2}}{\partial z_{1}} & \kappa\omega\frac{\partial h_{2}}{\partial z_{2}}\end{bmatrix}\dot{\bar{e}},$$ $$\dot{h}_1(\hat{z}_1,\hat{z}_2)=\begin{bmatrix}\frac{\partial h_{1}}{\partial\hat{z}_{1}} & \frac{\partial h_{1}}{\partial\hat{z}_{2}}\end{bmatrix}\begin{bmatrix}\dot{x}_{d}\\
\ddot{x}_{d}
\end{bmatrix}-\begin{bmatrix}\frac{\partial h_{1}}{\partial\hat{z}_{1}} & \kappa\omega\frac{\partial h_{1}}{\partial\hat{z}_{2}}\end{bmatrix}\dot{\bar{e}}-\begin{bmatrix}\frac{\partial h_{1}}{\partial\hat{z}_{1}} & \omega\frac{\partial h_{1}}{\partial\hat{z}_{2}} & 0\end{bmatrix}\dot{\bar{z}},\
\dot{h}_2(x_d,\dot{x}_d)=\begin{bmatrix}\frac{\partial h_{2}}{\partial x_{d}} & \frac{\partial h_{2}}{\partial\dot{x}_{d}}\end{bmatrix}\begin{bmatrix}\dot{x}_{d}\\
\ddot{x}_{d}
\end{bmatrix}.$$ From here, following are true $$\begin{aligned}
\dot{h} - \dot{h}_u &= W_{h1}\begin{bmatrix}\dot{x}_{d}\\
\ddot{x}_{d}
\end{bmatrix}-W_{h2}\dot{\bar{e}}+W_{h3}\dot{\bar{z}},\ \dot{h}_u = W_{h4}\begin{bmatrix}\dot{x}_{d}\\
\ddot{x}_{d}
\end{bmatrix}-W_{h5}\dot{\bar{e}}-W_{h6}\dot{\bar{z}},
\end{aligned}$$ which leads to the expression for $Y_3$ by means of basic substitution. Now, the computation of the term $Y_4$ is considered. The time derivative of the disturbance term $q(z_{1},z_{2},u,t)$ can be expressed in the form $$\begin{aligned}
\dot{q}(z_{1},z_{2},u,t)&=\frac{\partial q}{\partial z_{1}}\left(\dot{x}_{d}-\dot{e}_{1}\right)+\frac{\partial q}{\partial z_{2}}\left(\ddot{x}_{d}-\dot{e}_{2}\right)+\frac{\partial q}{\partial u}T^{-1}\left(-u+v\right)+\frac{\partial q}{\partial t}\\
&=W_{q1}\begin{bmatrix}\dot{x}_{d}\\
\ddot{x}_{d}
\end{bmatrix}+W_{q2}\dot{\bar{e}}-\frac{\partial q}{\partial u}T^{-1}\tilde{u}+\frac{\partial q}{\partial t}\\
&=W_{q1}\begin{bmatrix}\dot{x}_{d}\\
\ddot{x}_{d}
\end{bmatrix}+W_{q2}\left(\kappa\omega\bar{H}_{c}\bar{e}+\kappa^{-1}\omega\bar{W}_{1}\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}C_{2}B\tilde{u}\right)-\frac{\partial q}{\partial u}T^{-1}\tilde{u}+\frac{\partial q}{\partial t}.
\end{aligned}$$
[^1]: This work was supported by the National Science Centre (NCN) under the grant No. 2014/15/B/ST7/00429, contract No. UMO-2014/15/B/ST7/00429.