Sascha Spors,
Professorship Signal Theory and Digital Signal Processing,
Institute of Communications Engineering (INT),
Faculty of Computer Science and Electrical Engineering (IEF),
University of Rostock,
Germany
# Tutorial Digital Signal Processing
**Uniform Quantization, Dithering, Noise Shaping**,
Winter Semester 2021/22 (Master Course #24505)
- lecture: https://github.com/spatialaudio/digital-signal-processing-lecture
- tutorial: https://github.com/spatialaudio/digital-signal-processing-exercises
Feel free to contact lecturer [email protected]
# Fundamentals of Quantization
## Packages / Functions
We import the required packages first and put some functions here that we will frequently use.
```python
# most commonly used packages for DSP; have a look into other scipy submodules
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy import signal
# audio write and play stuff
import soundfile as sf # requires 'pip install soundfile'
# last tested with soundfile-0.10.3
```
```python
def my_xcorr2(x, y, scaleopt='none'):
r""" Cross Correlation function phixy[kappa] -> x[k+kappa] y
input:
x input signal shifted by +kappa
y input signal
scaleopt scaling of CCF estimator
output:
kappa sample index
ccf correlation result
"""
N = len(x)
M = len(y)
kappa = np.arange(0, N+M-1) - (M-1)
ccf = signal.correlate(x, y, mode='full', method='auto')
if N == M:
if scaleopt == 'none' or scaleopt == 'raw':
ccf /= 1
elif scaleopt == 'biased' or scaleopt == 'bias':
ccf /= N
elif scaleopt == 'unbiased' or scaleopt == 'unbias':
ccf /= (N - np.abs(kappa))
elif scaleopt == 'coeff' or scaleopt == 'normalized':
ccf /= np.sqrt(np.sum(x**2) * np.sum(y**2))
else:
print('scaleopt unknown: we leave output unnormalized')
return kappa, ccf
```
```python
def uniform_midtread_quantizer(x, deltaQ):
r"""uniform_midtread_quantizer from the lecture:
https://github.com/spatialaudio/digital-signal-processing-lecture/blob/master/quantization/linear_uniform_quantization_error.ipynb
commit: b00e23e
note: we renamed the second input to deltaQ, since this is what the variable
actually represents, i.e. the quantization step size
input:
x input signal to be quantized
deltaQ quantization step size
output:
xq quantized signal
"""
# [-1...1) amplitude limiter
x = np.copy(x)
idx = np.where(x <= -1)
x[idx] = -1
idx = np.where(x > 1 - deltaQ)
x[idx] = 1 - deltaQ
# linear uniform quantization
xq = deltaQ * np.floor(x/deltaQ + 1/2)
return xq
```
```python
def my_quant(x, Q):
r"""Saturated uniform midtread quantizer
input:
x input signal
Q number of quantization steps
output:
xq quantized signal
Note: for even Q, in order to retain the midtread characteristic,
we must omit one quantization step, either that for the lowest or the highest
amplitudes. Typically the highest signal amplitudes are saturated to
the 'last' quantization step. Then, in the special case of log2(Q)
being an integer, the quantization can be represented with bits.
"""
tmp = Q//2 # integer div
quant_steps = (np.arange(Q) - tmp) / tmp # we don't use this
# forward quantization, round() and inverse quantization
xq = np.round(x*tmp) / tmp
# always saturate to -1
xq[xq < -1.] = -1.
# saturate to ((Q-1) - (Q\2)) / (Q\2), note that \ is integer div
tmp2 = ((Q-1) - tmp) / tmp # for odd Q this always yields 1
xq[xq > tmp2] = tmp2
return xq
```
## Quantization Process and Error
Quantization generates signals that have discrete values $x_q[k]$, $x_q(t)$ from signals with continuous values $x[k]$, $x(t)$.
For quantization, the signals can be either discrete or continuous in time.
However, a signal that is discrete in time **and** discrete in value is termed a **digital** signal.
Only digital signals can be processed by computers.
Here, the quantization of discrete-time signals is treated due to its practical importance.
To describe quantization analytically, an additive error model is used.
The input and output signal differ by the so-called quantization error (quantization noise) $e[k]$, which is defined as
\begin{equation}
e[k] = x_q[k] - x[k],
\end{equation}
so that the error constitutes an additive superposition
\begin{equation}
x[k] + e[k] = x_q[k]
\end{equation}
To use this error model, some assumptions have to be made.
The quantization noise shall be uniformly distributed, which then can be modeled with the probability density function (PDF) $p_e(\theta) = \frac{1}{\Delta Q} \mathrm{rect}(\frac{\theta_e}{\Delta Q})$, where $\Delta Q$ denotes the quantization step size and $\theta_e$ the amplitudes of the quantization error signal.
This PDF is shown below.
```python
plt.figure(figsize=(4, 2))
plt.plot((-1, -1/2, -1/2, +1/2, +1/2, +1), (0, 0, 1, 1, 0, 0), lw=3)
plt.xlim(-1, 1)
plt.ylim(-0.1, 1.1)
plt.xticks((-0.5, +0.5), [r'-$\frac{\Delta Q}{2}$', r'+$\frac{\Delta Q}{2}$'])
plt.yticks((0, 1), [r'0', r'+$\frac{1}{\Delta Q}$'])
plt.xlabel(r'$\theta_e$')
plt.ylabel(r'$p_e(\theta)$')
plt.title(
r'$p_e(\theta) = \frac{1}{\Delta Q} \mathrm{rect}(\frac{\theta_e}{\Delta Q})$')
plt.grid(True)
```
Furthermore, it is assumed that $e[k]$ is not correlated with $x[k]$.
That this is not necessarily the case can be demonstrated with the help of some straightforward examples.
```python
Q = 9 # odd, number of quantization steps
N = 100
k = np.arange(2*N)
x = np.sin(2*np.pi/N*k)
xq = my_quant(x, Q)
e = xq-x
# actually stem plots would be correct, for convenience we plot as line style
plt.plot(k, x, 'C2', lw=3, label=r'$x$')
plt.plot(k, xq, 'C0o-', label=r'$x_q$')
plt.plot(k, e, 'C3', label=r'$e=x_q-x$')
plt.plot(k, k*0+1/(Q-1), 'k:', label=r'$\frac{\Delta Q}{2}$')
plt.xlabel(r'$k$')
plt.legend()
plt.grid(True)
```
A sine signal is quantized with $Q=9$ quantization steps.
A periodicity of the quantization noise can be easily identified.
For odd $Q$, the maximum amplitude of the quantization error can be estimated to
$$\frac{\Delta Q}{2}=\frac{\frac{2}{Q-1}}{2}=\frac{1}{Q-1}=\frac{1}{8}=0.125.$$
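We can verify this bound numerically on the error signal `e` from the cell above (a minimal check, assuming that cell has been executed):
```python
# the quantization error of the midtread quantizer should never
# exceed half a quantization step, i.e. deltaQ/2 = 1/(Q-1) for odd Q
print('max|e| = %6.5f <= %6.5f' % (np.max(np.abs(e)), 1/(Q-1)))
```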
The auto-correlation function of the error signal $e[k]$ is presented next.
```python
kappa, acf = my_xcorr2(e, e, 'unbiased')
plt.plot(kappa, acf)
plt.xlim(-175, +175)
plt.xlabel(r'$\kappa$')
plt.ylabel(r'$\phi_{ee}[\kappa]$')
plt.title('ACF of quantization error')
plt.grid(True)
```
If $e[k]$ exactly followed the probability density function $p_e(\theta) = \frac{1}{\Delta Q} \mathrm{rect}(\frac{\theta_e}{\Delta Q})$ and were uncorrelated over time, the auto-correlation function $\phi_{ee}[\kappa]\propto\delta[\kappa]$ would result.
However, this is not what we observe in this example!
Instead, from the above plot, we can deduce that $e[k]$ is correlated with itself, i.e. it exhibits a periodicity of 100 samples in phase and of 50 samples out of phase.
The sine period is precisely 100 samples, thus the input signal and the quantization error are linked and not independent.
Thus, the error model assumption is violated. That is unfortunate, since the sine signal otherwise allows for comparably simple analytical calculus.
The links between the signals can be further confirmed with the help of the cross-correlation functions.
Their oscillating characteristics reveal that the quantization error is highly correlated with both signals.
```python
plt.figure(figsize=(9, 3))
plt.subplot(1, 2, 1)
kappa, acf = my_xcorr2(e, x, 'unbiased')
plt.plot(kappa, acf)
plt.xlim(-170, +170)
plt.xlabel(r'$\kappa$')
plt.ylabel(r'$\phi_{e,x}[\kappa]$')
plt.title('CCF quantization error and input signal')
plt.grid(True)
plt.subplot(1, 2, 2)
kappa, acf = my_xcorr2(e, xq, 'unbiased')
plt.plot(kappa, acf)
plt.xlim(-170, +170)
plt.xlabel(r'$\kappa$')
plt.ylabel(r'$\phi_{e,xq}[\kappa]$')
plt.title('CCF quantization error and quantized signal')
plt.grid(True)
```
Therefore, the special case of sine signals is in fact not suited for the quantization model above.
Because of the simplicity of the involved calculation it is common practice to conduct this analysis for sine signals nevertheless, and signal-to-noise ratios in the data sheets of A/D converters are mostly stated for excitation with sine signals.
For random signals, the quantization model is only valid for high levels in the quantizer. For more information see
[Udo Zölzer, Digital Audio Signal Processing, Wiley](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470680018)
(might be available as free access in your uni network)
- Task:
Increase the (odd) number of quantization steps $Q$ and check what happens with the shape and amplitudes of the correlation functions. Hint: take a closer look at the amplitudes of the correlation signals.
## Quantization Modeling / Mapping
The mapping of the infinitely large continuous set of values to a discrete number of amplitude steps is realized with a transfer characteristic.
The height of the amplitude steps is $\Delta Q$.
From the lecture, we know that the following mapping is used in order to quantize the continuous amplitude signal $x[k]$
towards
\begin{equation}
x_Q[k] = g( \; \lfloor \, f(x[k]) \, \rfloor \; ),
\end{equation}
where $g(\cdot)$ and $f(\cdot)$ denote real-valued mapping functions, and $\lfloor \cdot \rfloor$ a rounding operation (**not necessarily the plain floor operation**).
### Uniform Saturated Midtread Quantization Characteristic Curve
With the introduced mapping, the uniform saturated midtread quantizer can be discussed.
This is probably the most important curve for uniform quantization due to its practical relevance for coding quantized amplitude values as bits. In general, the uniform midtread quantizer can be given as the mapping
\begin{equation}
x_Q[k] = \frac{1}{Q \backslash 2} \cdot \lfloor (Q \backslash 2) \cdot x[k]\rfloor,
\end{equation}
where for $\lfloor \cdot \rfloor$ a rounding operation might be used and $\backslash$ denotes integer division.
So the mapping functions $g$ and $f$ are simple multiplications.
At the beginning of this notebook, the function `my_quant` is implemented that realizes quantization based on this mapping.
The approach uses `numpy`'s `round` operation.
When asking for rounding, care has to be taken as to which [approach](https://en.wikipedia.org/wiki/Rounding) shall be used.
NumPy rounds half values to the nearest **even** integer (round half to even), in contrast to e.g. Matlab, which rounds half values away from zero.
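A short demonstration of this difference (the printed values are what `numpy` returns):
```python
# np.round() applies 'round half to even', also known as banker's rounding
print(np.round([0.5, 1.5, 2.5, 3.5]))  # -> [0. 2. 2. 4.]
# Matlab's round() would instead yield 1, 2, 3, 4 (round half away from zero)
```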
Detailed analysis for `my_quant` (checked numerically right after this list):
- the quantization should be properly performed only for $-1 \leq x < 1$
- thus, it always saturates $x<-1$ towards $x_q = -1$
- in the case of an **odd** number of quantization steps $Q$, it saturates $x>+1$ towards $x_q = +1$. The quantization step size is $\Delta Q = \frac{2}{Q-1}$.
- In the case of an **even** number of quantization steps $Q$, it saturates $x>\frac{Q - 1 - \frac{Q}{2}}{\frac{Q}{2}} = 1-\frac{2}{Q}$ towards $x_q = \frac{Q - 1 - \frac{Q}{2}}{\frac{Q}{2}}=1-\frac{2}{Q}$. The quantization step size is $\Delta Q = \frac{2}{Q}$.
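The following minimal check (a sketch, only verifying the statements above) confirms the midtread and saturation behavior for an odd and an even $Q$:
```python
for Q in (9, 8):  # odd and even number of quantization steps
    deltaQ = 2/(Q-1) if Q % 2 else 2/Q  # step size as stated above
    x_test = np.array([-1.5, 0., 1.5])  # below range, zero, above range
    print('Q=%d, deltaQ=%4.3f:' % (Q, deltaQ), my_quant(x_test, Q))
# expected: zero maps to zero (midtread), x<-1 saturates to -1,
# x>1 saturates to +1 for odd Q and to 1-2/Q for even Q
```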
### AD / DA Converter Convention
The case of **even** $Q$ is practically used for virtually all analog/digital (AD) and digital/analog (DA) converters.
When, in addition to the above statements,
\begin{equation}
\log_2(Q)\in\mathbb{N}
\end{equation}
holds, it is meaningful to code the $Q$ possible quantization steps (an even number and a power of two) with bits.
With $B\in\mathbb{N}$ denoting the number of bits, the number range convention for AD and DA converters is
\begin{equation}
\begin{split}
&-1\leq x \leq 1-2^{-(B-1)}\\
&-1\leq x \leq 1-\frac{2}{Q}
\end{split}
\end{equation}
using
\begin{equation}
Q=2^B
\end{equation}
quantization steps.
Values of $x$ outside this range will be saturated to the minimum $-1$ and maximum $1-\frac{2}{Q}$ quantization values in the quantization process.
For example, $B = 16$ bits are used to code [PCM data for CD](https://en.wikipedia.org/wiki/Compact_disc) audio quality.
Then we get the following quantities.
```python
B = 16 # number of bits
Q = 2**B # number of quantization steps
# for even Q only:
deltaQ = 2/Q
# maximum quantize value:
xqmax = 1-2**(-(B-1))
# or more general for even Q:
xqmax = 1-deltaQ
print(' B = %d bits\n quantization steps Q = %d\n quantization step size %e' %
(B, Q, deltaQ))
print(' smallest quantization value xqmin = -1')
print(' largest quantization value xqmax = %16.15f' % xqmax)
# B = 16 bits
# quantization steps Q = 65536
# quantization step size 3.051758e-05
# smallest quantization value xqmin = -1
# largest quantization value xqmax = 0.999969482421875
```
So-called high-definition audio uses 24 bits. Video and photo typically use 8-12 bit quantization per color channel.
### Plotting the Midtread Curve
We can now visualize the characteristic curve for a simple, made-up input signal, i.e. a monotonically increasing signal between $x_{min}$ and $x_{max} = -x_{min}$ using an equidistant increment $\Delta x$ over sample index $k$.
Here, we use $x_{max} = 1.25$ and $\Delta x=0.001$ and assume that we start with $x_{min} = -1.25$ at $k=0$.
If $\Delta x$ is sufficiently small, the signal's amplitude can be interpreted as continuous straight line.
This straight line is degraded in a quantization process.
Plotting the quantization result over the input yields the characteristic curve, in our example the curve of the uniform saturated midtread quantizer.
Let us plot this.
**Please note:**
The quantizer `uniform_midtread_quantizer` known from the lecture and `my_quant` yield the same results apart from one detail: `uniform_midtread_quantizer` always exhibits an **even** number of quantization steps $Q$.
So, only for even $Q$ are the results exactly identical.
We might verify this in the next plots as well.
```python
x = np.arange(-1.25, +1.25, 1e-3)
plt.figure(figsize=(4, 2))
plt.plot(x) # actually a stem plot is correct
plt.ylim(-1.25, +1.25)
plt.xlabel(r'$k$')
plt.ylabel(r'$x[k]$')
plt.grid(True)
```
```python
Q = 9 # number of quantization steps, odd or even
deltaQ = 1/(Q//2) # quantization step size, even/odd Q
xq = my_quant(x, Q) # used in exercise
xumq = uniform_midtread_quantizer(x, deltaQ) # as used in lecture
plt.figure(figsize=(6, 6))
plt.plot(x, xumq, 'C0', lw=2, label='uniform_midtread_quantizer()')
plt.plot(x, xq, 'C3', label='my_quant()')
plt.xticks(np.arange(-1, 1.25, 0.25))
plt.yticks(np.arange(-1, 1.25, 0.25))
plt.xlabel(r'input amplitude of $x$')
plt.ylabel(r'output amplitude of $x_q$')
plt.title(
r'uniform saturated midtread quantization, Q={0:d}, $\Delta Q$={1:3.2f}'.format(Q, deltaQ))
plt.axis('equal')
plt.legend()
plt.grid(True)
```
The following exercises used to be a homework assignment serving as an exam prerequisite.
# Exercise 1: Uniform Saturated Midtread Characteristic Curve of Quantization
## Task
Check this quantizer curve for $Q=7$ and $Q=8$.
Make sure that you get the idea of the midtread concept (zero is always quantized to zero) and of saturation (for even $Q$ the largest quantization step is saturated).
```python
def check_my_quant(Q):
N = 5e2
x = 2*np.arange(N)/N - 1
xq = my_quant(x, Q)
e = xq - x
plt.plot(x, x, color='C2', lw=3, label=r'$x[k]$')
plt.plot(x, xq, color='C3', label=r'$x_q[k]$')
plt.plot(x, e, color='C0', label=r'$e[k] = x_q[k] - x[k]$')
plt.xticks(np.arange(-1, 1.25, 0.25))
plt.yticks(np.arange(-1, 1.25, 0.25))
plt.xlabel('input amplitude')
plt.ylabel('output amplitude')
if np.mod(Q, 2) == 0:
s = ' saturated '
else:
s = ' '
plt.title(
'uniform'+s+'midtread quantization with Q=%d steps, $\Delta Q$=%4.3e' % (Q, 1/(Q//2)))
plt.axis('equal')
plt.legend(loc='upper left')
plt.grid(True)
```
```python
Q = 7 # number of quantization steps
deltaQ = 1 / (Q//2) # general rule
deltaQ = 2 / (Q-1) # for odd Q only
plt.figure(figsize=(5, 5))
check_my_quant(Q)
```
```python
Q = 8 # number of quantization steps
deltaQ = 1 / (Q//2) # general rule
deltaQ = 2 / Q # for even Q only
plt.figure(figsize=(5, 5))
check_my_quant(Q)
```
# Exercise 2: Quantization and Signal-to-Noise Ratio
From theory the **6dB / Bit rule of thumb** is well known for uniform quantization. It states that the signal-to-noise ratio increases by 6 dB for every additional bit that is spent to quantize the input data.
Hence,
\begin{equation}
\text{SNR in dB} = 6 \cdot B + \gamma,
\end{equation}
where $\gamma$ is an offset value in dB that depends on the PDF of the signal to be quantized.
Note that this rule of thumb assumes that the quantization error exhibits uniform PDF and is not correlated with the quantized signal.
We can see that this rule of thumb is inaccurate when quantizing a sine signal with a small number of bits or with an amplitude in the range of the quantization step size. Then the mentioned assumptions are not fulfilled. We will observe this in Exercise 3.
We plot the SNR as a function of the number of bits below for uniform, normal and Laplace PDF noises and a sine signal.
We should observe the constant slope of 6 dB per bit.
We should note the different absolute values of the SNR depending on the varying $\gamma$.
The `dBoffset` values are discussed in the lecture and in the textbook [Udo Zölzer, Digital Audio Signal Processing, Wiley](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470680018).
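For the sine signal, the offset can be derived directly (a short sketch, assuming a full-scale amplitude $A \approx 1$ and the uniform error model with noise power $\frac{\Delta Q^2}{12}$, $\Delta Q = 2^{-(B-1)}$):
$$
\text{SNR} = 10\log_{10}\frac{A^2/2}{\Delta Q^2/12} = 10\log_{10}\left(\frac{3}{2}\,2^{2B}\right) = B\cdot 20\log_{10}(2) - 10\log_{10}\frac{2}{3} \approx 6.02\,B + 1.76\,\text{dB},
$$
which explains the choice `dBoffset = -10*np.log10(2/3)` used in the sine cell below.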
```python
def check_quant_SNR(x, dBoffset, title):
print('std: {0:f}, var: {1:f}, mean: {2:f} of x'.format(np.std(x), np.var(x), np.mean(x)))
Bmax = 24
SNR = np.zeros(Bmax+1)
SNR_ideal = np.zeros(Bmax+1)
for B in range(1, Bmax+1): # start at 1, since zero Q is not meaningful
xq = my_quant(x, 2**B)
SNR[B] = 10*np.log10(np.var(x) / np.var(xq-x))
SNR_ideal[B] = B*20*np.log10(2) + dBoffset # 6dB/bit + offset rule
plt.figure(figsize=(5, 5))
plt.plot(SNR_ideal, 'o-', label='theoretical', lw=3)
plt.plot(SNR, 'x-', label='simulation')
plt.xticks(np.arange(0, 26, 2))
plt.yticks(np.arange(0, 156, 12))
plt.xlim(2, 24)
plt.ylim(6, 148)
plt.xlabel('number of bits')
plt.ylabel('SNR / dB')
plt.title(title)
plt.legend()
plt.grid(True)
print('maximum achievable SNR = {0:4.1f} dB at 24 Bit (i.e. HD audio)'.format(SNR[-1]))
```
```python
N = 10000
k = np.arange(N)
```
```python
np.random.seed(4)
x = np.random.rand(N)
x -= np.mean(x)
x *= np.sqrt(1/3) / np.std(x)
dBoffset = 0
check_quant_SNR(x, dBoffset, 'Uniform PDF')
```
```python
Omega = 2*np.pi * 997/44100 # use a rather odd ratio: e.g. in audio 997 Hz / 44100 Hz
sigma2 = 1/2
dBoffset = -10*np.log10(2 / 3)
x = np.sqrt(2*sigma2) * np.sin(Omega*k)
check_quant_SNR(x, dBoffset, 'Sine')
```
```python
np.random.seed(4)
x = np.random.randn(N)
x -= np.mean(x)
x *= np.sqrt(0.0471) / np.std(x)
dBoffset = -8.5 # from clipping probability 1e-5
check_quant_SNR(x, dBoffset, 'Normal PDF')
```
```python
np.random.seed(4)
x = np.random.laplace(size=N)
pClip = 1e-5 # clipping probability
sigma = -np.sqrt(2) / np.log(pClip)
x -= np.mean(x)
x *= sigma / np.std(x)
dBoffset = -13.5 # empirically found for pClip = 1e-5
check_quant_SNR(x, dBoffset, 'Laplace PDF')
```
# Exercise 3: Dithering
The discrete-time sine signal
- $x[k]=A \cdot\sin(\frac{2\pi f_\text{sin}}{f_s}k)$ for
- $0\leq k<96000$ with
- sampling frequency $f_s=48\,\text{kHz}$ and
- $f_\text{sin}=997\,\text{Hz}$
shall be quantized with the saturated uniform midtread quantizer for $-1\leq x_q \leq 1-\Delta Q$ using $B$ bits, i.e. $Q=2^B$ number of quantization steps and quantization step size of $\Delta Q = \frac{1}{Q\backslash 2}$.
We should discuss different parametrizations for signal amplitude $A$ and number of bits $B$.
Before quantizing $x[k]$, a dither noise signal $d[k]$ shall be added to $x[k]$.
This dither signal with small amplitudes aims at de-correlating the quantization error $e[k]$ from the quantized signal $x_q[k]$, which is especially important for small amplitudes of $x[k]$.
This technique is called **dithering**.
For $d[k]=0$ no dithering is applied.
Since the quantization error may be in the range $-\frac{\Delta Q}{2}\leq e[k]\leq \frac{\Delta Q}{2}$ (assuming uniform distribution), it appears reasonable to use a dither noise with a probability density function (PDF) of
\begin{equation}
p_\text{RECT}(d)=\frac{1}{\Delta Q}\,\text{rect}\left(\frac{d}{\Delta Q}\right),
\end{equation}
i.e. a **zero-mean, uniformly distributed noise** with maximum amplitude $|d[k]|=\frac{\Delta Q}{2}$.
It can be shown that this dither noise improves the quality of the quantized signal.
However, there is still a noise modulation (i.e. a too high correlation between $x_q[k]$ and $e[k]$) that depends on the amplitude of the input signal.
The noise modulation can be almost completely eliminated with a **zero-mean noise** signal exhibiting a **symmetric triangular PDF**
\begin{equation}
p_\text{TRI}(d)=\frac{1}{\Delta Q}\,\text{tri}\left(\frac{d}{\Delta Q}\right)
\end{equation}
with maximum amplitude $|d[k]|=\Delta Q$.
By doing so, an almost ideal decorrelation between $x_q[k]$ and $e[k]$ is realized.
In audio, this technique is called TPDF-Dithering (Triangular Probability Density Function Dithering) and can be applied in the mastering process of audio material that is to be distributed e.g. on a CD or via streaming.
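The triangular shape arises because the PDF of a sum of two independent random variables is the convolution of their individual PDFs; convolving two rectangular PDFs of width $\Delta Q$ yields a triangle of width $2\Delta Q$:
$$
p_\text{TRI}(d) = (p_\text{RECT} \ast p_\text{RECT})(d) = \int_{-\infty}^{\infty} p_\text{RECT}(\tau)\, p_\text{RECT}(d-\tau)\, \mathrm{d}\tau.
$$
This is exactly how the TPDF dither is generated in the code further below: two independent uniform noises are added.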
## Task
To get an impression of how dithering may be implemented and what quantized signals sound like, the following exercises shall be performed.
- Generate the sine signal $x[k]$ defined above.
- Generate the dither noise $d_\text{RECT}[k]$ according to the PDF $p_\text{RECT}(d)$. Check the resulting amplitude and distribution carefully. The length of $d_\text{RECT}[k]$ and $x[k]$ must be equal.
- Generate the dither noise $d_\text{TRI}[k]$ according to the PDF $p_\text{TRI}(d)$. Check the resulting amplitude and distribution carefully. The length of $d_\text{TRI}[k]$ and $x[k]$ must be equal.
- Add each dither noise $d_\text{RECT}[k]$ and $d_\text{TRI}[k]$ individually to $x[k]$. Together with the case of no dithering we now have three signals to be quantized.
- Quantize these signals individually using `my_quant(x,Q)` with $Q$ quantization steps.
- Plot the midtread characteristic curve.
- Plot the histogram of the dither noises as estimate of its PDF.
- Plot the histogram of the error noises as estimate of its PDF.
- Plot the sine signal, the dithered signal, the quantized signal and the quantization error signal in one diagram for all three cases.
- Calculate and plot the CCF of the signals $x_q[k]$ and $e[k]$ for all three cases.
- Interpret the obtained graphics.
- For each case, render WAV files from $x[k]$, $x[k]+d[k]$, $x_q[k]$ and $e[k]$ and listen to them. **Be careful! Do not harm your ears!** Pay special attention to the sound of the quantization error, how it is correlated with the quantized signal and how loud it appears.
- Consider the 5 cases
1. $B=16$ Bit, $A=1-\Delta Q$
2. $B=16$ Bit, $A=\Delta Q$
3. $B=3$ Bit, $A=1-\Delta Q$
4. $B=3$ Bit, $A=\Delta Q$
5. $B=3$ Bit, $A=\frac{\Delta Q}{2}$
In the last case the signal amplitude is even below the quantization step size $\Delta Q$. You might verify by listening that the sine is still perceivable if dithering is applied, but not if no dithering is applied.
**Again: Be careful! Do not harm your ears!**
The signal amplitude $A$ and chosen $B$ is directly related to the playback level!
**Warning again: start with very very low playback level, find the loudest signal first and then increase volume to your convenience**
## Solution
The task asks for repeated steps.
This is perfectly handled by a little function that solves the repeating subtasks.
```python
fs = 48000
N = 2*fs
k = np.arange(0, N)
fsin = 997
```
```python
def check_dithering(x, dither, Q, case):
deltaQ = 1 / (Q//2) # general rule
# dither noise
pdf_dither, edges_dither = np.histogram(dither, bins='auto', density=True)
xd = x + dither
# quantization
xq = my_quant(xd, Q)
e = xq-x
pdf_error, edges_error = np.histogram(e, bins='auto', density=True)
# write wavs
sf.write(file='x_'+case+'.wav', data=x,
samplerate=48000, subtype='PCM_24')
sf.write(file='xd_'+case+'.wav', data=xd,
samplerate=48000, subtype='PCM_24')
sf.write(file='xq_'+case+'.wav', data=xq,
samplerate=48000, subtype='PCM_24')
sf.write(file='e_'+case+'.wav', data=e,
samplerate=48000, subtype='PCM_24')
# CCF
kappa, ccf = my_xcorr2(xq, e, scaleopt='biased')
plt.figure(figsize=(12, 3))
if case == 'nodither':
plt.subplot(1, 2, 1)
# nothing to plot for the zero signal
# the PDF would be a weighted Dirac at amplitude zero
else:
# plot dither noise PDF estimate as histogram
plt.subplot(1, 2, 1)
plt.plot(edges_dither[:-1], pdf_dither, 'o-', ms=5)
plt.ylim(-0.1, np.max(pdf_dither)*1.1)
plt.grid(True)
plt.xlabel(r'$\theta$')
plt.ylabel(r'$\hat{p}(\theta)$')
plt.title('PDF estimate of dither noise')
# plot error noise PDF estimate as histogram
plt.subplot(1, 2, 2)
plt.plot(edges_error[:-1], pdf_error, 'o-', ms=5)
plt.ylim(-0.1, np.max(pdf_error)*1.1)
plt.grid(True)
plt.xlabel(r'$\theta$')
plt.ylabel(r'$\hat{p}(\theta)$')
plt.title('PDF estimate of error noise')
# plot signals
plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)
plt.plot(k, x, color='C2', label=r'$x[k]$')
plt.plot(k, xd, color='C1', label=r'$x_d[k] = x[k] + dither[k]$')
plt.plot(k, xq, color='C3', label=r'$x_q[k]$')
plt.plot(k, e, color='C0', label=r'$e[k] = x_q[k] - x[k]$')
plt.plot(k, k*0+deltaQ, ':k', label=r'$\Delta Q$')
plt.xlabel('k')
plt.title('signals')
plt.xticks(np.arange(0, 175, 25))
plt.xlim(0, 150)
plt.legend(loc='lower left')
plt.grid(True)
# plot CCF
plt.subplot(1, 2, 2)
plt.plot(kappa, ccf)
plt.xlabel(r'$\kappa$')
plt.ylabel(r'$\varphi_{xq,e}[\kappa]$')
plt.title('CCF between xq and e=xq-x')
plt.xticks(np.arange(-100, 125, 25))
plt.xlim(-100, 100)
plt.grid(True)
```
Choose one of the 5 cases and evaluate the no dither, uniform PDF dither and triangular PDF dither noises below.
```python
# case 1
B = 16 # Bit
Q = 2**B # number of quantization steps
deltaQ = 1 / (Q//2) # quantization step size
x = (1-deltaQ) * np.sin(2*np.pi*fsin/fs*k) # largest positive amplitude
```
```python
# case 2
B = 16
Q = 2**B
deltaQ = 1 / (Q//2)
x = deltaQ * np.sin(2*np.pi*fsin/fs*k) # smallest amplitude
```
```python
# case 3
B = 3
Q = 2**B
deltaQ = 1 / (Q//2)
x = (1-deltaQ) * np.sin(2*np.pi*fsin/fs*k)
```
```python
# case 4 this is the default case when running the whole notebook
B = 3
Q = 2**B
deltaQ = 1 / (Q//2)
x = deltaQ * np.sin(2*np.pi*fsin/fs*k)
```
```python
# case 5
if False:
B = 3
Q = 2**B
deltaQ = 1 / (Q//2)
# amplitude below quantization step!
x = deltaQ/2 * np.sin(2*np.pi*fsin/fs*k)
```
```python
plt.figure(figsize=(4, 4))
check_my_quant(Q)
```
### No Dither Noise
```python
# no dither
check_dithering(x=x, dither=x*0, Q=Q, case='nodither')
```
**Be very careful! Do not harm your ears!**
| Signal | Audio Player |
| ----------------- | :------------ |
| $x[k]$ | <audio type="audio/wave" src="x_nodither.wav" controls></audio> |
| $x_q[k]$ | <audio type="audio/wave" src="xq_nodither.wav" controls></audio> |
| $e[k]$ | <audio type="audio/wave" src="e_nodither.wav" controls></audio> |
### Uniform PDF Dither Noise
```python
# uniform dither with max amplitude of deltaQ/2
np.random.seed(1)
dither_uni = (np.random.rand(N) - 0.5) * 2 * deltaQ/2
check_dithering(x=x, dither=dither_uni, Q=Q, case='unidither')
```
**Be very careful! Do not harm your ears!**
| Signal | Audio Player |
| ----------------- | :------------ |
| $x[k]$ | <audio type="audio/wave" src="x_unidither.wav" controls></audio> |
| $x_d[k]$ | <audio type="audio/wave" src="xd_unidither.wav" controls></audio> |
| $x_q[k]$ | <audio type="audio/wave" src="xq_unidither.wav" controls></audio> |
| $e[k]$ | <audio type="audio/wave" src="e_unidither.wav" controls></audio> |
### Triangular PDF Dither Noise
```python
np.random.seed(1)
# uniform PDF for amplitudes -1...+1:
dither_uni1 = (np.random.rand(N) - 0.5) * 2
dither_uni2 = (np.random.rand(N) - 0.5) * 2
# triangular PDF with max amplitude of deltaQ
dither_tri = (dither_uni1 + dither_uni2) * deltaQ/2
check_dithering(x=x, dither=dither_tri, Q=Q, case='tridither')
```
**Be very careful! Do not harm your ears!**
| Signal | Audio Player |
| ----------------- | :------------ |
| $x[k]$ | <audio type="audio/wave" src="x_tridither.wav" controls></audio> |
| $x_d[k]$ | <audio type="audio/wave" src="xd_tridither.wav" controls></audio> |
| $x_q[k]$ | <audio type="audio/wave" src="xq_tridither.wav" controls></audio> |
| $e[k]$ | <audio type="audio/wave" src="e_tridither.wav" controls></audio> |
# **Copyright**
The notebooks are provided as [Open Educational Resources](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebooks for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Frank Schultz, Digital Signal Processing - A Tutorial Featuring Computational Examples* with the URL https://github.com/spatialaudio/digital-signal-processing-exercises
# Lecture 2 - Introduction to Probability Theory
> Probability theory is nothing but common sense reduced to calculation. P. Laplace (1812)
## Objectives
+ To use probability theory to represent states of knowledge.
+ To use probability theory to extend Aristotelian logic to reason under uncertainty.
+ To learn about the **product rule** of probability theory.
+ To learn about the **sum rule** of probability theory.
+ What is a **random variable**?
+ What is a **discrete random variable**?
+ When are two random variables **independent**?
+ What is a **continuous random variable**?
+ What is the **cumulative distribution function**?
+ What is the **probability density function**?
## Readings
Before coming to class, please read the following:
+ [Chapter 1 of Probabilistic Programming and Bayesian Methods for Hackers](http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Chapter1.ipynb)
+ [Chapter 1](http://home.fnal.gov/~paterno/images/jaynesbook/cc01p.pdf) of (Jaynes, 2003).
+ [Chapter 2](http://home.fnal.gov/~paterno/images/jaynesbook/cc02p.pdf) of (Jaynes, 2003) (skim through).
## The basic desiderata of probability theory
It is actually possible to derive the rules of probability based on a system of common sense requirements.
Paraphrasing
[Chapter 1](http://home.fnal.gov/~paterno/images/jaynesbook/cc01p.pdf) of (Jaynes, 2003),
we would like our system to satisfy the following desiderata:
1) *Degrees of plausibility are represented by real numbers.*
2) *The system should have a qualitative correspondence with common sense.*
3) *The system should be consistent in the sense that:*
+ *If a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result.*
+ *All the evidence relevant to a question should be taken into account.*
+ *Equivalent states of knowledge must be represented by equivalent plausibility assignments.*
## How to speak about probabilities?
Let
+ A be a logical sentence,
+ B be another logical sentence, and
+ and I be all other information we know.
There is no restriction on what A and B may be as soon as none of them is a contradiction.
We write as a shortcut:
$$
\mbox{not A} \equiv \neg A,
$$
$$
A\;\mbox{and}\;B \equiv A,B \equiv AB,
$$
$$
A\;\mbox{or}\;B \equiv A + B.
$$
We **write**:
$$
p(A|BI),
$$
and we **read**:
> the probability of A being true given that we know that B and I is true
or (assuming knowledge I is implied)
> the probability of A being true given that we know that B is true
or (making it even shorter)
> the probability of A given B.
$$
p(\mbox{something} | \mbox{everything known}) = \mbox{probability something is true conditioned on what is known}.
$$
$p(A|B,I)$ is just a number between 0 and 1 that corresponds to the degree of plausibility of A conditioned on B and I.
0 and 1 are special.
+ If
$$
p(A|BI) = 0,
$$
we say that we are certain that A is false if B is true.
+ If
$$
p(A|BI) = 1,
$$
we say that we are certain that A is true if B is true.
+ If
$$
p(A|BI) \in (0, 1),
$$
we say that we are uncertain about A given that B is true.
Depending on whether $p(A|B,I)$ is closer to 0 or 1, we believe more in one possibility or the other.
Complete ignorance corresponds to a probability of 0.5.
## The rules of probability theory
According to
[Chapter 2](http://home.fnal.gov/~paterno/images/jaynesbook/cc02p.pdf) of (Jaynes, 2003), the desiderata are enough
to derive the rules of probability.
These rules are:
+ The **obvious rule** (for lack of a better name):
$$
p(A | I) + p(\neg A | I) = 1.
$$
+ The **product rule** (also known as the Bayes rule or Bayes theorem):
$$
p(AB|I) = p(A|BI)p(B|I).
$$
or
$$
p(AB|I) = p(B|AI)p(A|I).
$$
These two rules are enough to compute any probability we want. Let us demonstrate this by a very simple example.
### Example: Drawing balls from a box without replacement
Consider the following example of prior information I:
> We are given a box with 10 balls 6 of which are red and 4 of which are blue.
The box is sufficiently mixed so that when we get a ball from it, we don't know which one we pick.
When we take a ball out of the box, we do not put it back.
Let A be the sentence:
> The first ball we draw is blue.
Intuitively, we would set the probability of A equal to:
$$
p(A|I) = \frac{4}{10}.
$$
This choice can actually be justified, but we will come to this later in this course.
From the "obvious rule", we get that the probability of not drawing a blue ball, i.e.,
the probability of drawing a red ball in the first draw is:
$$
p(\neg A|I) = 1 - p(A|I) = 1 - \frac{4}{10} = \frac{6}{10}.
$$
Now, let B be the sentence:
> The second ball we draw is red.
What is the probability that we draw a red ball in the second draw given that we drew a blue ball in the first draw?
Just before our second draw, there remain 9 balls in the box, 3 of which are blue and 6 of which are red.
Therefore:
$$
p(B|AI) = \frac{6}{9}.
$$
We have not used the product rule just yet. What if we wanted to find the probability that we draw a blue during the first draw and a red during the second draw? Then,
$$
p(AB|I) = p(A|I)p(B|AI) = \frac{4}{10}\frac{6}{9} = \frac{24}{90}.
$$
What about the probability of a red ball followed by another red one? Then,
$$
p(\neg A B|I) = p(\neg A|I)p(B|\neg A I) = \left[1 - p(A|I) \right]p(B|\neg A I) = \frac{6}{10}\frac{5}{9} = \frac{30}{90}.
$$
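We can sanity-check these two results with a short Monte Carlo simulation (a minimal sketch using `numpy`; the seed and trial count are arbitrary choices):
```python
import numpy as np

rng = np.random.default_rng(0)
box = np.array([0]*4 + [1]*6)  # 0 = blue ball, 1 = red ball
n_trials = 50_000
count_AB = 0    # blue first, then red
count_nAB = 0   # red first, then red
for _ in range(n_trials):
    draw = rng.choice(box, size=2, replace=False)  # draw without replacement
    count_AB += (draw[0] == 0) and (draw[1] == 1)
    count_nAB += (draw[0] == 1) and (draw[1] == 1)
print(count_AB / n_trials, 24/90)    # both approximately 0.267
print(count_nAB / n_trials, 30/90)   # both approximately 0.333
```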
### Other rules of probability theory
All the other rules of probability theory can be derived from these two rules.
To demonstrate this, let's prove that:
$$
p(A + B|I) = p(A|I) + p(B|I) - p(AB|I).
$$
Here we go:
\begin{eqnarray*}
p(A+B|I) &=& 1 - p(\neg A \neg B|I)\;\mbox{(obvious rule)}\\
&=& 1 - p(\neg A|\neg BI)p(\neg B|I)\;\mbox{(product rule)}\\
&=& 1 - [1 - p(A |\neg BI)]p(\neg B|I)\;\mbox{(obvious rule)}\\
&=& 1 - p(\neg B|I) + p(A|\neg B I)p(\neg B|I)\\
&=& 1 - [1 - p(B|I)] + p(A|\neg B I)p(\neg B|I)\\
&=& p(B|I) + p(A|\neg B I)p(\neg B|I)\\
&=& p(B|I) + p(A\neg B|I)\\
&=& p(B|I) + p(\neg B|AI)p(A|I)\\
&=& p(B|I) + [1 - p(B|AI)] p(A|I)\\
&=& p(B|I) + p(A|I) - p(B|AI)p(A|I)\\
&=& p(A|I) + p(B|I) - p(AB|I).
\end{eqnarray*}
### The sum rule
Now consider a finite set of logical sentences, $B_1,\dots,B_n$ such that:
1. One of them is definitely true:
$$
p(B_1+\dots+B_n|I) = 1.
$$
2. They are mutually exclusive:
$$
p(B_iB_j|I) = 0,\;\mbox{if}\;i\not=j.
$$
The **sum rule** states that:
$$
P(A|I) = \sum_i p(AB_i|I) = \sum_i p(A|B_i I)p(B_i|I).
$$
We can prove this by induction, but let's just prove it for $n=2$:
$$
\begin{array}{ccc}
p(A|I) &=& p[A(B_1+B_2)|I]\\
&=& p(AB_1 + AB_2|I)\\
&=& p(AB_1|I) + p(AB_2|I) - p(AB_1B_2|I)\\
&=& p(AB_1|I) + p(AB_2|I),
\end{array}
$$
since
$$
p(AB_1B_2|I) = p(A|B_1B_2,I)p(B_1B_2|I) = 0.
$$
Let's go back to our example. We can use the sum rule to compute the probability of getting a red ball on the second draw regardless of what we drew first. This is how it goes:
$$
\begin{array}{ccc}
p(B|I) &=& p(AB|I) + p(\neg AB|I)\\
&=& p(B|AI)p(A|I) + p(B|\neg AI) p(\neg A|I)\\
&=& \frac{6}{9}\frac{4}{10} + \frac{5}{9}\frac{6}{10}\\
&=& \dots
\end{array}
$$
### Example: Medical Diagnosis
This example is a modified version of the one found in [Lecture 1](http://www.zabaras.com/Courses/BayesianComputing/IntroToProbabilityAndStatistics.pdf) of the Bayesian Scientific Computing course offered during Spring 2013 by Prof. N. Zabaras at Cornell University.
We are going to examine the usefulness of a new tuberculosis test. Let the prior information, I, be:
> The percentage of the population infected by tuberculosis is 0.4%. We have run several experiments and determined that:
+ If a tested patient has the disease, then 80% of the time the test comes out positive.
+ If a tested patient does not have the disease, then 90% of the time, the test comes out negative.
Suppose now that you administer this test to a patient and that the result is positive. How confident are you that the patient does indeed have the disease?
Let's use probability theory to answer this question. Let A be the event:
> The patient's test is positive.
Let B be the event:
> The patient has tuberculosis.
According to the prior information, we have:
$$
p(B|I) = p(\mbox{has tuberculosis}|I) = 0.004,
$$
and
$$
p(A|B,I) = p(\mbox{test is positive}|\mbox{has tuberculosis},I) = 0.8.
$$
Similarly,
$$
p(A|\neg B, I) = p(\mbox{test is positive}|\mbox{does not have tuberculosis}, I) = 0.1.
$$
We are looking for:
$$
\begin{array}{ccc}
p(\mbox{has tuberculosis}|\mbox{test is positive},I) &=& P(B|A,I)\\
&=& \frac{p(AB|I)}{p(A|I)} \\
&=& \frac{p(A|B,I)p(B|I)}{p(A|B,I)p(B|I) + p(A|\neg B, I)p(\neg B|I)}\\
&=& \frac{0.8\times 0.004}{0.8\times 0.004 + 0.1 \times 0.996}\\
&\approx& 0.031.
\end{array}
$$
How much would you pay for such a test?
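The same computation takes only a few lines of Python (a minimal sketch with the numbers from the prior information I):
```python
p_B = 0.004            # p(has tuberculosis | I)
p_A_given_B = 0.8      # p(test positive | tuberculosis, I)
p_A_given_nB = 0.1     # p(test positive | no tuberculosis, I)

# sum rule for the evidence, then the product rule for the posterior
p_A = p_A_given_B * p_B + p_A_given_nB * (1 - p_B)
p_B_given_A = p_A_given_B * p_B / p_A
print(p_B_given_A)     # approximately 0.031
```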
## Conditional Independence
We say that $A$ and $B$ are **independent** (conditional on I), and write,
$$
A\perp B|I,
$$
if knowledge of one does not yield any information about the other. Mathematically, by $A\perp B|I$, we mean that:
$$
p(A|B,I) = p(A|I).
$$
Using the product rule, we can easily show that:
$$
A\perp B|I \iff p(AB|I) = p(A|I)p(B|I).
$$
### Question
+ Give an example of $I, A$ and $B$ so that $A\perp B|I$.
Now, let $C$ be another event. We say that $A$ and $B$ are **independent** conditional on $C$ (and I), and write:
$$
A\perp B|C,I,
$$
if knowledge of $C$ makes information about $A$ irrelevant to $B$ (and vice versa). Mathematically, we mean that:
$$
p(A|B,C,I) = p(A|C,I).
$$
### Question
+ Give an example of $I,A,B,C$ so that $A\perp B|C,I$.
## Random Variables
The formal mathematical definition of a random variable involves measure theory and is well beyond the scope of this course.
Fortunately, we do not have to go through that route to get a theory that is useful in applications.
For us, a **random variable** $X$ will just be a variable of our problem whose value is unknown to us.
Note that you should not take the word "random" too literally.
If we could, we would change the name to **uncertain** or **unknown** variable.
A random variable could correspond to something fixed but unknown, e.g., the number of balls in a box,
or it could correspond to something truly random, e.g., the number of particles that hit a [Geiger counter](https://en.wikipedia.org/wiki/Geiger_counter) in a specific time interval.
### Discrete Random Variables
We say that a random variable $X$ is discrete, if the possible values it can take are discrete (possibly countably infinite).
We write:
$$
p(X = x|I)
$$
and we read "the probability of $X$ being $x$".
If it does not cause any ambiguity, sometimes we will simplify the notation to:
$$
p(x) \equiv p(X=x|I).
$$
Note that $p(X=x)$ is actually a discrete function of $x$ which depends on our beliefs about $X$.
The function $p(x) = p(X=x|I)$ is known as the probability mass function of $X$.
Now let $Y$ be another random variable.
The **sum rule** becomes:
$$
p(X=x|I) = \sum_{y}p(X=x,Y=y|I) = \sum_y p(X=x|Y=y,I)p(Y=y|I),
$$
or in simpler notation:
$$
p(x) = \sum_y p(x,y) = \sum_y p(x|y)p(y).
$$
The function $p(X=x, Y=y|I) \equiv p(x, y)$ is known as the joint *probability mass function* of $X$ and $Y$.
The **product rule** becomes:
$$
p(X=x,Y=y|I) = p(X=x|Y=y,I)p(Y=y|I),
$$
or in simpler notation:
$$
p(x,y) = p(x|y)p(y).
$$
We say that $X$ and $Y$ are **independent** and write:
$$
X\perp Y|I,
$$
if knowledge of one does not yield any information about the other.
Mathematically, $Y$ gives no information about $X$ if:
$$
p(x|y) = p(x).
$$
From the product rule, however, we get that:
$$
p(x) = p(x|y) = \frac{p(x,y)}{p(y)},
$$
from which we see that the joint distribution of $X$ and $Y$ must factorize as:
$$
p(x, y) = p(x) p(y).
$$
It is trivial to show that if this factorization holds, then
$$
p(y|x) = p(y),
$$
and thus $X$ yields no information about $Y$ either.
### Continuous Random Variables
A random variable $X$ is continuous if the possible values it can take are continuous. The probability of a continuous variable getting a specific value is always zero. Therefore, we cannot work directly with probability mass functions as we did for discrete random variables. We would have to introduce the concepts of the **cumulative distribution function** and the **probability density function**. Fortunately, with the right choice of mathematical symbols, the theory will look exactly the same.
Let us start with a real continuous random variable $X$, i.e., a random variable taking values in the real line $\mathbb{R}$. Let $x \in\mathbb{R}$ and consider the probability of $X$ being less than or equal to $x$:
$$
F(x) := p(X\le x|I).
$$
$F(x)$ is known as the **cumulative distribution function** (CDF). Here are some properties of the CDF whose proof is
left as an exercise:
+ The CDF starts at zero and goes up to one:
$$
F(-\infty) = 0\;\mbox{and}\;F(+\infty) = 1.
$$
+ $F(x)$ is an increasing function of $x$, i.e.,
$$
x_1 \le x_2 \implies F(x_1)\le F(x_2).
$$
+ The probability of $X$ being in the interval $[x_1,x_2]$ is:
$$
p(x_1 \le X \le x_2|I) = F(x_2) - F(x_1).
$$
Now, assume that the derivative of $F(x)$ with respect to $x$ exists.
Let us call it $f(x)$:
$$
f(x) = \frac{dF(x)}{dx}.
$$
Using the fundamental theorem of calculus, it is trivial to show that the interval probability stated above implies:
\begin{equation}
p(x_1 \le X \le x_2|I) = \int_{x_1}^{x_2}f(x)dx.
\end{equation}
$f(x)$ is known as the **probability density function** (PDF) and it is measured in probability per unit of $X$.
To see this note that:
$$
p(x \le X \le x + \delta x|I) = \int_{x}^{x+\delta x}f(x')dx' \approx f(x)\delta x,
$$
so that:
$$
f(x) \approx \frac{p(x \le X \le x + \delta x|I)}{\delta x}.
$$
The PDF should satisfy the following properties:
+ It should be positive
$$
f(x) \ge 0,
$$
+ It should integrate to one:
$$
\int_{-\infty}^{\infty} f(x) dx = 1.
$$
#### Notation about the PDF of continuous random variables
In order to make all the formulas of probability theory the same, we define for a continuous random variable $X$:
$$
p(x) := f(x) = \frac{dF(x)}{dx} = \frac{d}{dx}p(X \le x|I).
$$
But keep in mind, that if $X$ is continuous $p(x)$ is not a probability but a probability density.
That is, it needs a $dx$ to become a probability.
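As a small numerical illustration (a sketch using `scipy.stats` with a standard normal $X$; any other continuous distribution would do), the PDF is indeed the derivative of the CDF:
```python
import numpy as np
from scipy import stats

x = np.linspace(-3, 3, 601)
F = stats.norm.cdf(x)                # cumulative distribution function
f_numeric = np.gradient(F, x)        # numerical derivative of the CDF
f_exact = stats.norm.pdf(x)          # probability density function
print(np.max(np.abs(f_numeric - f_exact)))  # small discretization error
```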
Let $p(x)$ be the PDF of $X$ and $p(y)$ the PDF of another continuous random variable $Y$.
We can find the PDF of the random variable $X$ conditioned on $Y$, i.e., the PDF of $X$ if $Y$ is directly observed.
This is the **product rule** for continuous random variables:
\begin{equation}
\label{eq:continuous_bayes}
p(x|y) = \frac{p(x, y)}{p(y)},
\end{equation}
where $p(x,y)$ is the **joint PDF** of $X$ and $Y$.
The **sum rule** for continous random variables is:
\begin{equation}
\label{eq:continuous_sum}
p(x) = \int p(x, y) dy = \int p(x | y) p(y) dy.
\end{equation}
The similarity between these rules and the discrete ones is obvious.
We have prepared a table to help you remember it.
| Concept | Discrete Random Variables | Continuous Random Variables |
|---|---------------|-----------------|
|$p(x)$| in units of probability | in units of probability per unit of $X$|
|sum rule| $\sum_y p(x,y) = \sum_y p(x|y)p(y)$ | $\int p(x,y) dy = \int p(x|y) p(y) dy$|
|product rule| $p(x,y) = p(x|y)p(y)$ | $p(x,y) = p(x|y)p(y)$|
## Expectations
Let $X$ be a random variable. The expectation of $X$ is defined to be:
$$
\mathbb{E}[X] := \mathbb{E}[X | I] = \int x p(x) dx.
$$
Now let $g(x)$ be any function. The expectation of $g(X)$, i.e., the random variable defined after passing $X$ through $g(\cdot)$, is:
$$
\mathbb{E}[g(X)] := \mathbb{E}[g(X)|I] = \int g(x)p(x)dx.
$$
As usual, the name "expectation" for $\mathbb{E}[\cdot]$ is not a very good one.
You may think of $\mathbb{E}[g(X)]$ as the expected value of $g(X)$, but do not take it too far.
Can you think of an example in which the expected value is never actually observed?
### Conditional Expectation
Let $X$ and $Y$ be two random variables. The conditional expectation of $X$ given $Y=y$ is defined to be:
$$
\mathbb{E}[X|Y=y] := \mathbb{E}[X|Y=y,I] = \int xp(x|y)dx.
$$
### Properties of Expectations
The following properties of expectations of random variables are extremely helpful. In what follows, $X$ and $Y$ are random variables and $c$ is a constant:
+ Sum of random variable with a constant:
$$
\mathbb{E}[X+c] = \mathbb{E}[X] + c.
$$
+ Sum of two random variables:
$$
\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y].
$$
+ Product of random variable with constant:
$$
\mathbb{E}[cX] = c\mathbb{E}[X].
$$
+ If $X\perp Y$, then:
$$
\mathbb{E}[XY] = \mathbb{E}[X]\mathbb{E}[Y].
$$
**NOTE**: This property does not hold if $X$ and $Y$ are not independent!
+ If $f(\cdot)$ is a convex function, then:
$$
f(\mathbb{E}[X]) \le \mathbb{E}[f(X)].
$$
**NOTE**: The equality holds if $f(\cdot)$ is linear!
### Variance of a Random Variable
The variance of $X$ is defined to be:
$$
\mathbb{V}[X] = \mathbb{E}\left[\left(X - \mathbb{E}[X]\right)^2\right].
$$
It is easy to prove (and it is a very useful formula to remember) that:
$$
\mathbb{V}[X] = \mathbb{E}[X^2] - \left(\mathbb{E}[X]\right)^2.
$$
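A quick Monte Carlo check of this identity (a sketch; the exponential distribution with scale $2$ has variance $4$):
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)
lhs = np.mean((x - np.mean(x))**2)       # V[X] by definition
rhs = np.mean(x**2) - np.mean(x)**2      # E[X^2] - (E[X])^2
print(lhs, rhs)                          # both close to 4
```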
### Covariance of Two Random Variables
Let $X$ and $Y$ be two random variables.
The covariance between $X$ and $Y$ is defined to be:
$$
\mathbb{C}[X, Y] = \mathbb{E}\left[\left(X - \mathbb{E}[X]\right)
\left(Y-\mathbb{E}[Y]\right)\right]
$$
### Properties of the Variance
Let $X$ and $Y$ be random variables and $c$ be a constant.
Then:
+ Sum of random variable with a constant:
$$
\mathbb{V}[X + c] = \mathbb{V}[X].
$$
+ Product of random variable with a constant:
$$
\mathbb{V}[cX] = c^2\mathbb{V}[X].
$$
+ Sum of two random variables:
$$
\mathbb{V}[X+Y] = \mathbb{V}[X] + \mathbb{V}[Y] + 2\mathbb{C}(X,Y).
$$
+ Sum of two independent random variables:
$$
\mathbb{V}[X+Y] = \mathbb{V}[X] + \mathbb{V}[Y].
$$
# References
(<a id="cit-jaynes2003" href="#call-jaynes2003">Jaynes, 2003</a>) E T Jaynes, ``_Probability Theory: The Logic of Science_'', 2003. [online](http://bayes.wustl.edu/etj/prob/book.pdf)
```python
from scipy import stats
from statistics import mean, stdev
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["font.family"] = "Times New Roman"
import sys
import os
if "../" not in sys.path:
sys.path.append("../")
os.chdir("..")
from envs.data_handler import DataHandler
import envs.data_utils as du
```
# Data Shift
* Step 1: Calculate the Standard Deviation
* Step 2: Create the ordering by the mean value for each <component,failure> group and sort them ascending by mean value, component name, failure name
* Step 3: Shift data
```python
dh = DataHandler(data_generation='Linear', take_component_id=True, transformation='raw')
data_new = du.shift_data(dh.data)
data_new.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Optimal_Affected_Component_Uid</th>
<th>Optimal_Failure</th>
<th>raw</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>_SEwwu-cdEeet0YmmfbMwkw</td>
<td>CF1</td>
<td>138.2518</td>
</tr>
<tr>
<th>1</th>
<td>_SExXgOcdEeet0YmmfbMwkw</td>
<td>CF2</td>
<td>25.9559</td>
</tr>
<tr>
<th>2</th>
<td>_SEx_HucdEeet0YmmfbMwkw</td>
<td>CF3</td>
<td>61.6029</td>
</tr>
<tr>
<th>3</th>
<td>_SEymDucdEeet0YmmfbMwkw</td>
<td>CF3</td>
<td>50.0732</td>
</tr>
<tr>
<th>4</th>
<td>_SExYKucdEeet0YmmfbMwkw</td>
<td>CF3</td>
<td>25.9599</td>
</tr>
</tbody>
</table>
</div>
Perform T-Test:
```python
ttest = du.execute_ttest(data_new)
num_distinguishable_pairs = len(ttest[ttest['pvalue']<0.05])
total = len(ttest.index) - 2
print('{0} of the {1} <component, failure> combination pairs are statistical significant'.format(num_distinguishable_pairs, total))
```
636 of the 912 <component, failure> combination pairs are statistical significant
# Parameterization
Evaluate different spread multiplication factors for an optimal shifting of each of the transformed datasets (e.g. cube/square root transformation). We add to the data points of a <component,failure> group a shift value $s$, which is calculated as:
\begin{align}
s = f * (\sigma(x_{-1}) + \sigma(x)) + tieCount_{t}
\end{align}
Here, we evaluate different values of $f \in \mathbb{N}^{+}$ as the spread multiplication factor, which is multiplied with the sum of the standard deviation of the previous <component,failure> combination, $\sigma(x_{-1})$, and that of the current one, $\sigma(x)$. Since we shift the groups in the order of their ascending mean values, we add $tieCount_{t}$ for the comparison at time point $t$, which is the sum of the standard deviations of all <component,failure> pairs that have the same mean value.
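For reference, a hypothetical, simplified sketch of this shifting logic is given below; the actual implementation lives in `envs/data_utils.py` (`du.shift_data`), and the helper name here is only illustrative:
```python
# Hypothetical sketch of the shifting described above, NOT the real shift_data.
def shift_groups(data, value_col, spread_multiplication_factor=1):
    groups = data.groupby(['Optimal_Affected_Component_Uid', 'Optimal_Failure'])
    stats_df = groups[value_col].agg(['mean', 'std']).sort_values('mean')
    shifted = data.copy()
    offset, prev_std, prev_mean, tie_count = 0.0, 0.0, None, 0.0
    for (comp, fail), row in stats_df.iterrows():
        if prev_mean is not None and row['mean'] == prev_mean:
            tie_count += row['std']  # equal means: accumulate their std devs
        else:
            tie_count = 0.0
        # s = f * (sigma(x_-1) + sigma(x)) + tieCount_t, applied cumulatively
        offset += spread_multiplication_factor * (prev_std + row['std']) + tie_count
        mask = ((shifted['Optimal_Affected_Component_Uid'] == comp)
                & (shifted['Optimal_Failure'] == fail))
        shifted.loc[mask, value_col] += offset
        prev_std, prev_mean = row['std'], row['mean']
    return shifted
```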
```python
def parameterize(start, end, step):
result = {}
trans = ['raw', 'cube', 'sqt', 'log10', 'ln', 'log2']
factor = range(start, end, step)
for t in trans:
X = []
Y = []
# initialize the result dictionary for each transformation
result[t] = (0,0) # (spread_multiplication_factor, number of statistical significant pairs)
# evaluate the different spread mulitplication factors
for f in factor:
# load the data
dh = DataHandler(data_generation='Linear', take_component_id=True, transformation=t)
shifted_data = du.shift_data(dh.data, spread_multiplication_factor=f)
ttest = du.execute_ttest(shifted_data)
num_significant_pairs = len(ttest[ttest['pvalue']<0.05])
# save the results for plotting
X.append(f) # factor
Y.append(num_significant_pairs) # number of statistical significant pairs
if num_significant_pairs > result[t][1]:
# we were able to generate at least one more statistical significant pair
result[t] = (f, num_significant_pairs)
print("{0}: With the spread multiplication factor of {1} we have {2} statistical significant pairs. ".format(t, f, num_significant_pairs), end="\r")
plt.plot(X, Y, label=t)
print(result)
plt.xlabel("Spread Multiplication Factor")
plt.ylabel("No. of statistical significant pairs")
plt.legend()
plt.savefig('data_analysis/03_plots/shifting_parameter_evaluation_' + str(step) + '.pdf')
plt.show()
```
```python
parameterize(1, 300, 10)
```
```python
parameterize(1, 50, 1)
```
Then we save the optimally shifted data to CSV:
```python
result = {'raw': (151, 667), 'cube': (241, 672), 'sqt': (231, 667), 'log10': (231, 672), 'ln': (231, 671), 'log2': (211, 666)}
dh = DataHandler(data_generation='Linear', take_component_id=True, transformation='raw')
optimal_shifted_data = dh.data[[dh.data.columns[0], dh.data.columns[1]]]
for t, value in result.items():
dh = DataHandler(data_generation='Linear', take_component_id=True, transformation=t)
data = du.shift_data(dh.data, spread_multiplication_factor=value[0])
ttest = du.execute_ttest(data)
print("Shifting {0} data with a factor of {1} results in {2} distinguishable pairs.".format(t, value[0], len(ttest[ttest['pvalue']<0.05])))
optimal_shifted_data = optimal_shifted_data.merge(data[[data.columns[2]]], how='outer', left_index=True, right_index=True)
optimal_shifted_data.to_csv('data/prepared_data/LinearShifted_Id.csv')
print("Data saved.")
```
Shifting raw data with a factor of 151 results in 667 distinguishable pairs.
Shifting cube data with a factor of 241 results in 672 distinguishable pairs.
Shifting sqt data with a factor of 231 results in 667 distinguishable pairs.
Shifting log10 data with a factor of 231 results in 672 distinguishable pairs.
Shifting ln data with a factor of 231 results in 671 distinguishable pairs.
Shifting log2 data with a factor of 211 results in 666 distinguishable pairs.
Data saved.
# Create datasets with only distinguishable <component,failure> pairs
```python
trans = ['raw', 'cube', 'sqt', 'log10', 'ln', 'log2']
for t in trans:
dh = DataHandler(data_generation='LinearShifted', take_component_id=True, transformation=t)
ttest = du.execute_ttest(dh.data)
component_failure_list = du.get_distinguishable_groups(ttest)
filtered_data = du.filter_dataset(dh.data, component_failure_list)
ttest_2 = du.execute_ttest(filtered_data)
num_significant_pairs = len(ttest_2[ttest_2['pvalue']<0.05])
print("{0}: {1}/{2} statistical significant pairs ".format(t, num_significant_pairs, len(ttest_2)))
filtered_data.to_csv('data/prepared_data/LinearShifted_Id_' + t + '_dist.csv')
print("Data saved.")
```
raw: 728/730 statistical significant pairs
cube: 727/737 statistical significant pairs
sqt: 726/734 statistical significant pairs
log10: 727/736 statistical significant pairs
ln: 727/735 statistical significant pairs
log2: 727/736 statistical significant pairs
Data saved.
# Solving Max-Cut Problem with QAOA
<em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>
## Overview
In the [tutorial on Quantum Approximate Optimization Algorithm](./QAOA_EN.ipynb), we talked about how to encode a classical combinatorial optimization problem into a quantum optimization problem and solve it with the Quantum Approximate Optimization Algorithm [1] (QAOA). In this tutorial, we will take the Max-Cut Problem as an example to further elaborate on QAOA.
### Max-Cut Problem
The Max-Cut Problem is a common combinatorial optimization problem in graph theory, with important applications in statistical physics and circuit design. It is NP-hard, so no efficient algorithm is known that solves it exactly.
In graph theory, a graph is represented by a pair of sets $G=(V, E)$, where the elements in the set $V$ are the vertices of the graph, and each element in the set $E$ is a pair of vertices, representing an edge connecting these two vertices. For example, the graph in the figure below is represented by $V=\{0,1,2,3\}$ and $E=\{(0,1),(1,2),(2,3),(3, 0)\}$.
<div style="text-align:center">Figure 1: A graph with four vertices and four edges </div>
A cut on a graph refers to a partition of the graph's vertex set $V$ into two disjoint sets. Each cut corresponds to a set of edges, in which the two vertices of each edge are divided into different sets. So we can define the size of this cut as the size of this set of edges, that is, the number of edges being cut. The Max-Cut Problem is to find a cut that maximizes the number of edges being cut. Figure 2 shows a maximum cut of the graph in Figure 1. The size of the maximum cut is $4$, which means that all edges in the graph are cut.
<div style="text-align:center">Figure 2: A maximum cut of the graph in Figure 1 </div>
Assuming that the input graph $G=(V, E)$ has $n=|V|$ vertices and $m=|E|$ edges, we can describe the Max-Cut Problem as a combinatorial optimization problem with $n$ bits and $m$ clauses. Each bit corresponds to a vertex $v$ in the graph $G$, and its value $z_v$ is $0$ or $1$, corresponding to the vertex belonging to the set $S_{0}$ or $S_{1}$, respectively. Thus, each value $z$ of these $n$ bits corresponds to a distinct cut. Each clause corresponds to an edge $(u,v)$ in the graph $G$. A clause requires that the two vertices connected by its corresponding edge take different values, namely $z_u\neq z_v$, which means the edge is cut. In other words, when the two vertices connected by the edge are divided into different sets, we say that the clause is satisfied. Therefore, for each edge $(u,v)$ in the graph $G$, we have
$$
C_{(u,v)}(z) = z_u+z_v-2z_uz_v,
\tag{1}
$$
where $C_{(u,v)}(z) = 1$ if and only if the edge is cut. Otherwise, the function is equal to $0$. The objective function of the entire combinatorial optimization problem is
$$
C(z) = \sum_{(u,v)\in E}C_{(u,v)}(z) = \sum_{(u,v)\in E}z_u+z_v-2z_uz_v.
\tag{2}
$$
Therefore, to solve the maximum cut problem is to find a value $z$ that maximizes the objective function in equation (2).
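Before encoding anything into qubits, equation (2) can be checked directly. The following is a minimal brute-force sketch (plain Python, assuming the square graph of Figure 1) that enumerates all $2^4$ bit values and confirms that the maximum cut size is indeed $4$:
```python
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def cut_size(z):
    # objective C(z) from equation (2)
    return sum(z[u] + z[v] - 2 * z[u] * z[v] for (u, v) in edges)

best = max(product([0, 1], repeat=4), key=cut_size)
print(best, cut_size(best))  # (0, 1, 0, 1) with cut size 4
```
Of course, brute force only works for tiny graphs: the number of assignments grows as $2^n$, which is why heuristics such as QAOA are interesting in the first place.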
### Encoding Max-Cut Problem
Here we take the Max-Cut Problem as an example to further elaborate on QAOA. In order to transform the Max-Cut Problem into a quantum problem, we need to use $n$ qubits, where each qubit corresponds to a vertex in the graph $G$. A qubit being in a quantum state $|0\rangle$ or $|1\rangle$ indicates that its corresponding vertex belongs to the set $S_{0}$ or $S_{1}$, respectively. It is worth noting that $|0\rangle$ and $|1\rangle$ are the two eigenstates of Pauli $Z$ gate, and their eigenvalues are respectively $1$ and $-1$, namely
$$
\begin{align}
Z|0\rangle&=|0\rangle,\tag{3}\\
Z|1\rangle&=-|1\rangle.\tag{4}
\end{align}
$$
Therefore, we can use Pauli $Z$ gate to construct the Hamiltonian $H_C$ of the Max-Cut Problem. Because mapping $f(x):x\to(x+1)/2$ maps $-1$ to $0$ and $1$ to $1$, we can replace $z$ in equation (2) with $(Z+I)/2$ ($I$ is the identity matrix) to get the Hamiltonian corresponding to the objective function of the original problem:
$$
\begin{align}
H_C &= \sum_{(u,v)\in E} \frac{Z_u+I}{2} + \frac{Z_v+I}{2}-2\cdot\frac{Z_u+I}{2} \frac{Z_v+I}{2}\tag{5}\\
&= \sum_{(u,v)\in E} \frac{Z_u+Z_v+2I-(Z_uZ_v+Z_u+Z_v+I)}{2}\tag{6}\\
&= \sum_{(u,v)\in E} \frac{I-Z_uZ_v}{2}.\tag{7}
\end{align}
$$
The expected value of this Hamiltonian for a quantum state $|\psi\rangle$ is
$$
\begin{align}
\langle\psi|H_C|\psi\rangle &= \langle\psi|\sum_{(u,v)\in E} \frac{I-Z_uZ_v}{2}|\psi\rangle\tag{8} \\
&= \langle\psi|\sum_{(u,v)\in E} \frac{I}{2}|\psi\rangle-\langle\psi|\sum_{(u,v)\in E} \frac{Z_uZ_v}{2}|\psi\rangle\tag{9}\\
&= \frac{|E|}{2}-\frac{1}{2}\langle\psi|\sum_{(u,v)\in E} Z_uZ_v|\psi\rangle.\tag{10}
\end{align}
$$
If we define
$$
H_D = -\sum_{(u,v)\in E} Z_uZ_v,
\tag{11}
$$
then finding the quantum state $|\psi\rangle$ that maximizes $\langle\psi|H_C|\psi\rangle$ is equivalent to finding the quantum state $|\psi\rangle$ such that $\langle\psi|H_D|\psi \rangle$ is the largest.
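As a quick sanity check, $H_D$ can also be assembled by hand from Kronecker products, independently of any quantum library (a sketch for the square graph; the result matches the `pauli_str_to_matrix` output computed further below):
```python
import numpy as np

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def zz(u, v, n=4):
    # tensor product with Z at qubit positions u and v, identity elsewhere
    ops = [Z if k in (u, v) else I2 for k in range(n)]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

H_D = -sum(zz(u, v) for (u, v) in [(0, 1), (1, 2), (2, 3), (3, 0)])
print(np.diag(H_D).max())  # 4.0, attained by the basis states |0101> and |1010>
```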
## Paddle Quantum Implementation
Now, let's implement QAOA with Paddle Quantum to solve the Max-Cut Problem. There are many ways to find the parameters $\vec{\gamma},\vec{\beta}$. Here we use the gradient descent method in classical machine learning.
To implement QAOA with Paddle Quantum, the first thing to do is to import the required packages. Among them, the `networkx` package can help us handle graphs conveniently.
```python
from IPython.core.display import HTML
display(HTML("<style>pre { white-space: pre !important; }</style>"))
```
<style>pre { white-space: pre !important; }</style>
```python
# Import related modules from Paddle Quantum and PaddlePaddle
import paddle
from paddle_quantum.circuit import UAnsatz
from paddle_quantum.utils import pauli_str_to_matrix
# Import additional packages needed
import numpy as np
from numpy import pi as PI
import matplotlib.pyplot as plt
import networkx as nx
```
Next, we generate the graph $G$ in the Max-Cut Problem. For the convenience of computation, the vertices here are labeled starting from $0$.
```python
# n is the number of vertices in the graph G, which is also the number of qubits
n = 4
G = nx.Graph()
V = range(n)
G.add_nodes_from(V)
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
G.add_edges_from(E)
# Print out the generated graph G
pos = nx.circular_layout(G)
options = {
"with_labels": True,
"font_size": 20,
"font_weight": "bold",
"font_color": "white",
"node_size": 2000,
"width": 2
}
nx.draw_networkx(G, pos, **options)
ax = plt.gca()
ax.margins(0.20)
plt.axis("off")
plt.show()
```
### Encoding Hamiltonian
In Paddle Quantum, a Hamiltonian can be input in the form of `list`. Here we construct the Hamiltonian $H_D$ in equation (11).
```python
# Construct the Hamiltonian H_D in the form of list
H_D_list = []
for (u, v) in E:
    H_D_list.append([-1.0, 'z' + str(u) + ',z' + str(v)])
print(H_D_list)
```
[[-1.0, 'z0,z1'], [-1.0, 'z1,z2'], [-1.0, 'z2,z3'], [-1.0, 'z3,z0']]
As you can see, in this example, the Hamiltonian $H_D$ is
$$
H_D = -Z_0Z_1-Z_1Z_2-Z_2Z_3-Z_3Z_0.
\tag{12}
$$
We can view the matrix form of the Hamiltonian $H_D$ and get information of its eigenvalues:
```python
# Convert Hamiltonian H_D from list form to matrix form
H_D_matrix = pauli_str_to_matrix(H_D_list, n)
# Take out the elements on the diagonal of H_D
H_D_diag = np.diag(H_D_matrix).real
# Get the maximum eigenvalue of H_D
H_max = np.max(H_D_diag)
print(H_D_diag)
print('H_max:', H_max)
```
[-4. 0. 0. 0. 0. 4. 0. 0. 0. 0. 4. 0. 0. 0. 0. -4.]
H_max: 4.0
### Building the QAOA circuit
Earlier we introduced that QAOA needs to apply two unitary transformations $U_C(\gamma)$ and $U_B(\beta)$ alternately on the initial state $|s\rangle = |+\rangle^{\otimes n}$. Here, we use the quantum gates and quantum circuit templates provided in Paddle Quantum to build a quantum circuit to achieve this step. It should be noted that in the Max-Cut Problem, we simplify the problem of maximizing the expected value of the Hamiltonian $H_C$ to the problem of maximizing the expected value of the Hamiltonian $H_D$, so the unitary transformations to be used are $U_D(\gamma)$ and $U_B(\beta)$. By alternately placing two circuit modules with adjustable parameters, we are able to build a QAOA circuit
$$
U_B(\beta_p)U_D(\gamma_p)\cdots U_B(\beta_1)U_D(\gamma_1),
\tag{13}
$$
where $U_D(\gamma) = e^{-i\gamma H_D}$ can be constructed with the circuit in the figure below. The other unitary transformation, $U_B(\beta)$, is equivalent to applying an $R_x$ gate to each qubit.
<div style="text-align:center">Figure 3: Quantum circuit of unitary transformation $e^{i\gamma Z\otimes Z}$</div>
Therefore, the quantum circuit that realizes a layer of unitary transformation $U_B(\beta)U_D(\gamma)$ is shown in Figure 4.
<div style="text-align:center">Figure 4: Quantum circuit of unitary transformation $U_B(\beta)U_D(\gamma)$ </div>
In Paddle Quantum, the default initial state of each qubit is $|0\rangle$ (the initial state can be customized by input parameters). We can add a layer of Hadamard gates to change the state of each qubit from $|0\rangle$ to $|+\rangle$ so that we get the initial state $|s\rangle = |+\rangle^{\otimes n}$ required by QAOA. In Paddle Quantum, we can add a layer of Hadamard gates to the quantum circuit by calling `superposition_layer()`.
```python
def circuit_QAOA(p, gamma, beta):
# Initialize the quantum circuit of n qubits
cir = UAnsatz(n)
# Prepare quantum state |s>
cir.superposition_layer()
# Build a circuit with p layers
for layer in range(p):
# Build the circuit of U_D
for (u, v) in E:
cir.cnot([u, v])
cir.rz(gamma[layer], v)
cir.cnot([u, v])
# Build the circuit of U_B, that is, add a layer of R_x gates
for v in V:
cir.rx(beta[layer], v)
return cir
```
After running the constructed QAOA quantum circuit, we obtain the output state
$$
|\vec{\gamma},\vec{\beta}\rangle = U_B(\beta_p)U_D(\gamma_p)\cdots U_B(\beta_1)U_D(\gamma_1)|s\rangle.
\tag{14}
$$
### Calculating the loss function
From the output state of the circuit built in the previous step, we can calculate the objective function of the maximum cut problem
$$
F_p(\vec{\gamma},\vec{\beta}) = \langle\vec{\gamma},\vec{\beta}|H_D|\vec{\gamma},\vec{\beta}\rangle.
\tag{15}
$$
To maximize the objective function is equivalent to minimizing $-F_p$. Therefore, we define $L(\vec{\gamma},\vec{\beta}) = -F_p(\vec{\gamma},\vec{\beta})$ as the loss function, that is, the function to be minimized. Then, we use a classical optimization algorithm to find the optimal parameters $\vec{\gamma},\vec{\beta}$. The following code shows a complete QAOA network built with Paddle Quantum and PaddlePaddle:
```python
class Net(paddle.nn.Layer):
def __init__(self, p, dtype="float64",):
super(Net, self).__init__()
self.p = p
self.gamma = self.create_parameter(shape=[self.p],
default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2 * PI),
dtype=dtype, is_bias=False)
self.beta = self.create_parameter(shape=[self.p],
default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2 * PI),
dtype=dtype, is_bias=False)
def forward(self):
# Define QAOA's quantum circuit
cir = circuit_QAOA(self.p, self.gamma, self.beta)
# Run the quantum circuit
cir.run_state_vector()
# Calculate the loss function
loss = -cir.expecval(H_D_list)
return loss, cir
```
### Training quantum neural network
After defining the quantum neural network for QAOA, we use the gradient descent method to update the parameters in the network to maximize the expected value in equation (15).
```python
p = 4 # Number of layers in the quantum circuit
ITR = 120 # Number of training iterations
LR = 0.1 # Learning rate of the optimization method based on gradient descent
SEED = 1024 # Set global RNG seed
```
Here, we optimize the network defined above in PaddlePaddle.
```python
paddle.seed(SEED)
net = Net(p)
# Use Adam optimizer
opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())
# Gradient descent iteration
for itr in range(1, ITR + 1):
# Run the network defined above
loss, cir = net()
# Calculate the gradient and optimize
loss.backward()
opt.minimize(loss)
opt.clear_grad()
if itr% 10 == 0:
print("iter:", itr, "loss:", "%.4f"% loss.numpy())
if itr == ITR:
print("\nThe trained circuit:")
print(cir)
gamma_opt = net.gamma.numpy()
print("Optimized parameters gamma:\n", gamma_opt)
beta_opt = net.beta.numpy()
print("Optimized parameters beta:\n", beta_opt)
```
iter: 10 loss: -3.8886
iter: 20 loss: -3.9134
iter: 30 loss: -3.9659
iter: 40 loss: -3.9906
iter: 50 loss: -3.9979
iter: 60 loss: -3.9993
iter: 70 loss: -3.9998
iter: 80 loss: -3.9999
iter: 90 loss: -4.0000
iter: 100 loss: -4.0000
iter: 110 loss: -4.0000
iter: 120 loss: -4.0000
The trained circuit:
--H----*-----------------*--------------------------------------------------X----Rz(3.140)----X----Rx(0.824)----*-----------------*--------------------------------------------------X----Rz(0.737)----X----Rx(2.506)----*-----------------*--------------------------------------------------X----Rz(4.999)----X----Rx(4.854)----*-----------------*--------------------------------------------------X----Rz(0.465)----X----Rx(1.900)--
| | | | | | | | | | | | | | | |
--H----X----Rz(3.140)----X----*-----------------*---------------------------|-----------------|----Rx(0.824)----X----Rz(0.737)----X----*-----------------*---------------------------|-----------------|----Rx(2.506)----X----Rz(4.999)----X----*-----------------*---------------------------|-----------------|----Rx(4.854)----X----Rz(0.465)----X----*-----------------*---------------------------|-----------------|----Rx(1.900)--
| | | | | | | | | | | | | | | |
--H---------------------------X----Rz(3.140)----X----*-----------------*----|-----------------|----Rx(0.824)---------------------------X----Rz(0.737)----X----*-----------------*----|-----------------|----Rx(2.506)---------------------------X----Rz(4.999)----X----*-----------------*----|-----------------|----Rx(4.854)---------------------------X----Rz(0.465)----X----*-----------------*----|-----------------|----Rx(1.900)--
| | | | | | | | | | | | | | | |
--H--------------------------------------------------X----Rz(3.140)----X----*-----------------*----Rx(0.824)--------------------------------------------------X----Rz(0.737)----X----*-----------------*----Rx(2.506)--------------------------------------------------X----Rz(4.999)----X----*-----------------*----Rx(4.854)--------------------------------------------------X----Rz(0.465)----X----*-----------------*----Rx(1.900)--
Optimized parameters gamma:
[3.14046713 0.73681226 4.99897226 0.46481489]
Optimized parameters beta:
[0.82379898 2.50618308 4.85422542 1.90024859]
### Decoding the quantum solution
After obtaining the minimum value of the loss function and the corresponding set of parameters $\vec{\gamma}^*,\vec{\beta}^*$, our task has not been completed. In order to obtain an approximate solution to the Max-Cut Problem, it is necessary to decode the solution to the classical optimization problem from the quantum state $|\vec{\gamma}^*,\vec{\beta}^*\rangle$ output by QAOA. Physically, to decode a quantum state, we need to measure it and then calculate the probability distribution of the measurement results:
$$
p(z)=|\langle z|\vec{\gamma}^*,\vec{\beta}^*\rangle|^2.
\tag{16}
$$
Usually, the greater the probability of a certain bit string, the greater the probability that it corresponds to an optimal solution of the Max-Cut problem.
Paddle Quantum provides a function to view the probability distribution of the measurement results of the state output by the QAOA quantum circuit:
```python
# Repeat the simulated measurement of the circuit output state 1024 times
prob_measure = cir.measure(plot=True)
```
After measurement, we can find the bit string with the highest probability of occurrence. Let the vertices whose bit values are $0$ in the bit string belong to the set $S_0$ and the vertices whose bit values are $1$ belong to the set $S_1$. The set of edges between these two vertex sets is a possible maximum cut of the graph.
The following code selects the bit string with the greatest chance of appearing in the measurement results, maps it back to a classical solution, and draws the corresponding maximum cut:
- The red vertex belongs to the set $S_0$,
- The blue vertex belongs to the set $S_1$,
- The dashed line indicates the edge being cut.
```python
# Find the most frequent bit string in the measurement results
cut_bitstring = max(prob_measure, key=prob_measure.get)
print("The bit string form of the cut found:", cut_bitstring)
# Draw the cut corresponding to the bit string obtained above on the graph
node_cut = ["blue" if cut_bitstring[v] == "1" else "red" for v in V]
edge_cut = [
"solid" if cut_bitstring[u] == cut_bitstring[v] else "dashed"
for (u, v) in E
]
nx.draw(
G,
pos,
node_color=node_cut,
style=edge_cut,
**options
)
ax = plt.gca()
ax.margins(0.20)
plt.axis("off")
plt.show()
```
As you can see, in this example, QAOA has found a maximum cut on the graph.
_______
## References
[1] Farhi, E., Goldstone, J. & Gutmann, S. A Quantum Approximate Optimization Algorithm. [arXiv:1411.4028 (2014).](https://arxiv.org/abs/1411.4028)
| ddebe3addda1166af869d519ae2c4f871aa10660 | 67,248 | ipynb | Jupyter Notebook | tutorial/combinatorial_optimization/MAXCUT_EN.ipynb | gsq7474741/Quantum | 16e7d3bf2dba7e94e6faf5c853faf0e913e1f268 | [
"Apache-2.0"
] | 1 | 2020-07-14T14:10:23.000Z | 2020-07-14T14:10:23.000Z | tutorial/combinatorial_optimization/MAXCUT_EN.ipynb | gsq7474741/Quantum | 16e7d3bf2dba7e94e6faf5c853faf0e913e1f268 | [
"Apache-2.0"
] | null | null | null | tutorial/combinatorial_optimization/MAXCUT_EN.ipynb | gsq7474741/Quantum | 16e7d3bf2dba7e94e6faf5c853faf0e913e1f268 | [
"Apache-2.0"
] | null | null | null | 93.790795 | 15,648 | 0.769019 | true | 5,156 | Qwen/Qwen-72B | 1. YES
2. YES | 0.861538 | 0.795658 | 0.68549 | __label__eng_Latn | 0.966615 | 0.430954 |
```python
import os, sys
import h5py
import numpy as np
from scipy.io import loadmat
import cv2
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from numpy import matrix as mat
from sympy import *
from numpy import linalg as la
```
```python
def getFx(para, frame):  # para holds one set of (13*frame+3) parameters; frame is passed in to fix the number of loop iterations
    # First write out the symbolic expressions: the homogeneous coordinates of the six points A, B, C, D, E, F
K = Matrix([[1149.67569986785, 0.0, 508.848621645943],
[0.0, 1147.59161666764, 508.064917088557],
[0.0, 0.0, 1.0]])
r11, r12, r13, r14, r21, r22, r23, r24, r31, r32, r33 = symbols('r11 r12 r13 r14 r21 r22 r23 r24 r31 r32 r33')
Rt = Matrix([[r11, r12, r13, r14], [r21, r22, r23, r24], [r31, r32, r33, 1]])
a, b, c, th, al = symbols('a b c th al')
ua, va, wa, ub, vb, wb, uc, vc, wc, ud, vd, wd, ue, ve, we, uf, vf, wf = symbols('ua va wa ub vb wb uc vc wc ud vd wd ue ve we uf vf wf')
f = Symbol('f')
XA = Matrix([[-a * c * cos(th) * cos(al)], [c-a * c * sin(th)], [-a * c * cos(th) * sin(al)], [1]])
XB = Matrix([[0], [c], [0], [1]])
XC = Matrix([[a * c * cos(th) * cos(al)], [c+a * c * sin(th)], [a * c * cos(th) * sin(al)], [1]])
XD = Matrix([[-b * c], [0], [0], [1]])
XE = Matrix([[0], [0], [0], [1]])
XF = Matrix([[b * c], [0], [0], [1]])
ua, va, wa = K[0,:] * (Rt * XA), K[1,:] * (Rt * XA), K[2,:] * (Rt * XA)
ub, vb, wb = K[0,:] * (Rt * XB), K[1,:] * (Rt * XB), K[2,:] * (Rt * XB)
uc, vc, wc = K[0,:] * (Rt * XC), K[1,:] * (Rt * XC), K[2,:] * (Rt * XC)
ud, vd, wd = K[0,:] * (Rt * XD), K[1,:] * (Rt * XD), K[2,:] * (Rt * XD)
ue, ve, we = K[0,:] * (Rt * XE), K[1,:] * (Rt * XE), K[2,:] * (Rt * XE)
uf, vf, wf = K[0,:] * (Rt * XF), K[1,:] * (Rt * XF), K[2,:] * (Rt * XF)
    # For each frame in the loop, extract that frame's Rt parameters (K is shared), substitute to get the 3D coordinates, and compute u/w, v/w
    # Assemble everything into f, blocked by the six points, with M frames inside each block
getfx = mat(np.zeros((6*frame*2,1)))
for i in range(6):
for j in range(frame):
if i == 0 :
f = Matrix([ua/wa, va/wa])
elif i == 1 :
f = Matrix([ub/wb, vb/wb])
elif i == 2 :
f = Matrix([uc/wc, vc/wc])
elif i == 3 :
f = Matrix([ud/wd, vd/wd])
elif i == 4 :
f = Matrix([ue/we, ve/we])
else:
f = Matrix([uf/wf, vf/wf])
f_value = f.subs({r11:para[13*j], r12:para[13*j+1], r13:para[13*j+2], r14:para[13*j+3],
r21:para[13*j+4], r22:para[13*j+5], r23:para[13*j+6], r24:para[13*j+7],
r31:para[13*j+8], r32:para[13*j+9], r33:para[13*j+10], th:para[13*j+11],
al:para[13*j+12], a:para[-3], b:para[-2], c:para[-1]})
getfx[i*frame*2+j*2] = f_value[0]
getfx[i*frame*2+j*2+1] = f_value[1]
    # return getfx, a (2*frame*6) by 1 vector
return getfx
```
```python
def getJacobian(point, frame, para):
    # Express the K and Rt matrices with symbolic parameters
focalx, focaly, px, py = symbols('focalx focaly px py')
r11, r12, r13, r14, r21, r22, r23, r24, r31, r32, r33 = symbols('r11 r12 r13 r14 r21 r22 r23 r24 r31 r32 r33')
Rt = Matrix([[r11, r12, r13, r14], [r21, r22, r23, r24], [r31, r32, r33, 1]])
K = Matrix([[focalx, 0, px], [0, focaly, py], [0, 0, 1]])
# KRt = K * Rt
    # Express the coordinates of the six points A-F with symbolic parameters
a, b, c, th, al = symbols('a b c th al')
ua, va, wa, ub, vb, wb, uc, vc, wc, ud, vd, wd, ue, ve, we, uf, vf, wf = symbols('ua va wa ub vb wb uc vc wc ud vd wd ue ve we uf vf wf')
f = Symbol('f')
if point == 0 :
XA = Matrix([[-a * c * cos(th) * cos(al)], [c-a * c * sin(th)], [-a * c * cos(th) * sin(al)], [1]])
ua, va, wa = K[0,:] * (Rt * XA), K[1,:] * (Rt * XA), K[2,:] * (Rt * XA)
f = Matrix([ua/wa, va/wa])
elif point == 1 :
XB = Matrix([[0], [c], [0], [1]])
ub, vb, wb = K[0,:] * (Rt * XB), K[1,:] * (Rt * XB), K[2,:] * (Rt * XB)
f = Matrix([ub/wb, vb/wb])
elif point == 2 :
XC = Matrix([[a * c * cos(th) * cos(al)], [c+a * c * sin(th)], [a * c * cos(th) * sin(al)], [1]])
uc, vc, wc = K[0,:] * (Rt * XC), K[1,:] * (Rt * XC), K[2,:] * (Rt * XC)
f = Matrix([uc/wc, vc/wc])
elif point == 3 :
XD = Matrix([[-b * c], [0], [0], [1]])
ud, vd, wd = K[0,:] * (Rt * XD), K[1,:] * (Rt * XD), K[2,:] * (Rt * XD)
f = Matrix([ud/wd, vd/wd])
elif point == 4 :
XE = Matrix([[0], [0], [0], [1]])
ue, ve, we = K[0,:] * (Rt * XE), K[1,:] * (Rt * XE), K[2,:] * (Rt * XE)
f = Matrix([ue/we, ve/we])
elif point == 5:
XF = Matrix([[b * c], [0], [0], [1]])
uf, vf, wf = K[0,:] * (Rt * XF), K[1,:] * (Rt * XF), K[2,:] * (Rt * XF)
f = Matrix([uf/wf, vf/wf])
args = Matrix([r11, r12, r13, r14, r21, r22, r23, r24, r31, r32, r33, th, al, a, b, c])
f_X1 = f[0,:].jacobian(args)
f_X2 = f[1,:].jacobian(args)
JA = Matrix([f_X1, f_X2]) # 2 by 16 matrix
JA_value = JA.subs({focalx:1149.676, focaly:1147.592, px:508.849, py:508.065, r11:para[13*frame], r12:para[13*frame+1],
r13:para[13*frame+2], r14:para[13*frame+3], r21:para[13*frame+4], r22:para[13*frame+5],
r23:para[13*frame+6], r24:para[13*frame+7], r31:para[13*frame+8], r32:para[13*frame+9],
r33:para[13*frame+10], th:para[13*frame+11], al:para[13*frame+12], a:para[-3], b:para[-2], c:para[-1]})
#JA_value = JA_value.subs({f:1149.68})
return JA_value
```
```python
def getJ(para, frame):
getj = mat(np.zeros((6*frame*2, 13*frame+3)))
for m in range(6):
for n in range(frame):
JA_value = getJacobian(m, n, para)
#print(JA_value)
getj[2*(m*frame+n):2*(m*frame+n+1), 13*n:13*n+13] = JA_value[:, 0:13]
getj[2*(m*frame+n):2*(m*frame+n+1), -3:] = JA_value[:, -3:]
return getj
```
```python
def getE(getfx, frame):
    # residual vector: predicted projections minus the observed 2D key points,
    # with the rows of x2d laid out as i*frame + j
    E = mat(np.zeros((6*frame*2,1)))
    for i in range(6):
        for j in range(frame):
            E[(i*frame+j)*2] = getfx[i*frame*2+j*2] - x2d[i*frame+j, 0]
            E[(i*frame+j)*2+1] = getfx[i*frame*2+j*2+1] - x2d[i*frame+j, 1]
    return E
```
```python
def LM_opti(frame, x_para, u=1, v=2, step_max=500):
J = mat(np.zeros((6*frame*2, 13*frame+3)))
E = mat(np.zeros((6*frame*2,1))) # E = f(X) - b ;
E_temp = mat(np.zeros((6*frame*2,1))) # E_temp compare with E in L-M
x_k = mat(x_para.copy()) #parameter initialization
step = 0 # iteration steps
mse_last = 0 # mse value after iteration each time
step_max = 500 # maximum number of iteration
u = 1
v = 2 # u, v initial value
    # L-M algorithm: iterate to obtain the optimal parameters
while(step < step_max):
step += 1
mse, mse_temp = 0, 0
# generate Jacobian Matrix and calculate E
getfx = mat(np.zeros((6*frame*2,1)))
getfx = getFx(x_k, frame)
E = getE(getfx, frame)
for i in range(6*frame*2):
mse += E[i]**2
mse /= 6*frame*2
# get new J
J = mat(np.zeros((6*frame*2, 13*frame+3)))
J = getJ(x_k, frame)
# delta X = ...
#print(J.T * J)
dx = mat(np.zeros((13*frame+3,1)))
LM = u * mat(np.eye(13*frame+3))
dx = -(J.T * J + LM).I * J.T * E
x_k_temp = x_k.copy()
x_k_temp += dx
        # R must not be updated by a plain additive step:
        # project it back so that R.T * R = I
        # via SVD: U * D * V.T = R  -->  R+ = U * V.T
R_old = mat([[x_k_temp[0,0], x_k_temp[1,0], x_k_temp[2,0]],
[x_k_temp[4,0], x_k_temp[5,0], x_k_temp[6,0]],
[x_k_temp[8,0], x_k_temp[9,0], x_k_temp[10,0]]])
U, sigma, VT = la.svd(R_old)
R_new = U * VT
x_k_temp[0,0], x_k_temp[1,0], x_k_temp[2,0] = R_new[0,0], R_new[0,1], R_new[0,2]
x_k_temp[4,0], x_k_temp[5,0], x_k_temp[6,0] = R_new[1,0], R_new[1,1], R_new[1,2]
x_k_temp[8,0], x_k_temp[9,0], x_k_temp[10,0] = R_new[2,0], R_new[2,1], R_new[2,2]
###########
# calculate E_temp with x_k_temp
# copy from E with x_k
getfx_temp = mat(np.zeros((6*frame*2,1)))
getfx_temp = getFx(x_k_temp, frame)
E_temp = getE(getfx_temp, frame)
for i in range(6*frame*2):
mse_temp += E_temp[i]**2
mse_temp /= 6*frame*2
        # the gain ratio (segma) decides whether to accept the step and how to update u
        segma = (mse - mse_temp)/((dx.T * (u * dx - J.T * E))[0,0])
        # calculate new u
        if segma > 0:
            # step accepted: keep x_k_temp and shrink the damping factor u
            s = 1.0/3.0
            v = 2
            x_k = x_k_temp
            mse = mse_temp
            u = u * max(s, float(1 - pow(2*segma - 1, 3)))
        else:
            # step rejected: keep the previous x_k and increase the damping factor
            u = u * v
            v = v * 2
print("step = %d, abs(mse-mse_last) = %.8f" %(step, abs(mse-mse_last)))
if abs(mse-mse_last)<0.000001:
break
mse_last = mse
print("step = ", step)
print("mse = ", mse_last)
#print("parameter = ", x_k)
return x_k
```
```python
# Load the data
frame = 1
m = loadmat("valid.mat")
# camera intrinsic matrix
K = m["annot"][0][0][4]
K_cam = K[0][0].tolist()
# key point 3D groundtruth
gt = m["annot"][0][0][3]
img1_gt = gt[135] # array 3 by 17
kp = np.zeros((17,2))
for i in range(17):
u = K_cam[0] * mat([img1_gt[0][i], img1_gt[1][i], img1_gt[2][i]]).T
v = K_cam[1] * mat([img1_gt[0][i], img1_gt[1][i], img1_gt[2][i]]).T
w = K_cam[2] * mat([img1_gt[0][i], img1_gt[1][i], img1_gt[2][i]]).T
kp[i][0] = u/w
kp[i][1] = v/w
# load and show image
img = cv2.imread("S9_Posing_1.55011271_000676.jpg")
plt.figure("Image") # name of the figure window
plt.imshow(img[:,:,[2,1,0]])
plt.axis('on') # set to 'off' to hide the axes
plt.title('image1') # figure title
plt.show()
# visualize key points
txt = ['1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17']
img_kp = plt.scatter(kp[:,0], kp[:,1], s = 80, c = 'g', marker = 'X')
for i in range(17):
    plt.annotate(txt[i], xy = (kp[i,0], kp[i,1]), xytext = (kp[i,0]+0.1, kp[i,1]+0.1)) # xy is the coordinate to annotate, xytext the position of the label
plt.axis('on') # set to 'off' to hide the axes
plt.title('image_kp') # figure title
# visualize ABCDEF
plt.figure()
img_kp = plt.scatter(kp[0,0], kp[0,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(kp[1,0], kp[1,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(kp[4,0], kp[4,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(kp[8,0], kp[8,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(kp[11,0], kp[11,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(kp[14,0], kp[14,1], s = 80, c = 'g', marker = 'X')
plt.axis('on') # set to 'off' to hide the axes
plt.title('image_kp_ABCDEF') # figure title
plt.show()
# save 2D coordinate to list
x2d = np.zeros((6 * frame,2))
x2d[0,0] = kp[0,0]
for i in range(6):
for j in range(frame):
if i==0 :
x2d[i*frame+j, 0] = kp[14, 0]
x2d[i*frame+j, 1] = kp[14, 1]
elif i==1 :
x2d[i*frame+j, 0] = kp[8, 0]
x2d[i*frame+j, 1] = kp[8, 1]
elif i==2 :
x2d[i*frame+j, 0] = kp[11, 0]
x2d[i*frame+j, 1] = kp[11, 1]
elif i==3 :
x2d[i*frame+j, 0] = kp[4, 0]
x2d[i*frame+j, 1] = kp[4, 1]
elif i==4 :
x2d[i*frame+j, 0] = kp[0, 0]
x2d[i*frame+j, 1] = kp[0, 1]
elif i==5 :
x2d[i*frame+j, 0] = kp[1, 0]
x2d[i*frame+j, 1] = kp[1, 1]
print(x2d)
```
```python
# parameter initialization for all frame (K_cam, x2d(6*frame by 2))
# x_para(13*frame+3)
x_para = np.zeros((13*frame+3,1))
for i in range(frame):
x_para[13*i] = -1 # r11
x_para[13*i+5] = 1 # r22
x_para[13*i+10] = -1 # r33
x_para[13*i+3] = 0.0047
x_para[13*i+7] = -0.0997
x_para[13*i+11] = 0 # th
x_para[13*i+12] = 0 #al
x_para[-3] = 0.35 # a
x_para[-2] = 0.25 # b
distance = -0.095#0.096 # c
x_para[-1] = distance
print(mat(x_para.copy()))
getfx_ini = getFx(x_para, frame)
print(getfx_ini)
```
[[-1. ]
[ 0. ]
[ 0. ]
[ 0.0047]
[ 0. ]
[ 1. ]
[ 0. ]
[-0.0997]
[ 0. ]
[ 0. ]
[-1. ]
[ 0. ]
[ 0. ]
[ 0.35 ]
[ 0.25 ]
[-0.095 ]]
[[476.02538041]
[284.62882932]
[514.25209744]
[284.62882932]
[552.47881446]
[284.62882932]
[486.94729956]
[393.65003291]
[514.25209744]
[393.65003291]
[541.55689531]
[393.65003291]]
```python
plt.figure()
img_kp = plt.scatter(kp[14,0], kp[14,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_ini[0,0], getfx_ini[1,0], s = 80, c = 'r', marker = 'X')
img_kp = plt.scatter(kp[8,0], kp[8,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_ini[2,0], getfx_ini[3,0], s = 80, c = 'r', marker = 'X')
img_kp = plt.scatter(kp[11,0], kp[11,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_ini[4,0], getfx_ini[5,0], s = 80, c = 'r', marker = 'X')
img_kp = plt.scatter(kp[4,0], kp[4,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_ini[6,0], getfx_ini[7,0], s = 80, c = 'r', marker = 'X')
img_kp = plt.scatter(kp[0,0], kp[0,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_ini[8,0], getfx_ini[9,0], s = 80, c = 'r', marker = 'X')
img_kp = plt.scatter(kp[1,0], kp[1,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_ini[10,0], getfx_ini[11,0], s = 80, c = 'r', marker = 'X')
plt.axis('on') # set to 'off' to hide the axes
plt.title('image_kp_ABCDEF_original_opt') # figure title
plt.show()
```
```python
x_k = LM_opti(frame, x_para)
R_old = mat([[x_k[0,0], x_k[1,0], x_k[2,0]],
[x_k[4,0], x_k[5,0], x_k[6,0]],
[x_k[8,0], x_k[9,0], x_k[10,0]]])
U, sigma, VT = la.svd(R_old)
R_new = U * VT
x_k[0,0], x_k[1,0], x_k[2,0] = R_new[0,0], R_new[0,1], R_new[0,2]
x_k[4,0], x_k[5,0], x_k[6,0] = R_new[1,0], R_new[1,1], R_new[1,2]
x_k[8,0], x_k[9,0], x_k[10,0] = R_new[2,0], R_new[2,1], R_new[2,2]
```
step = 1, abs(mse-mse_last) = 9.83445441
step = 2, abs(mse-mse_last) = 0.07658973
step = 3, abs(mse-mse_last) = 0.03711521
```python
# visualize keypoint after BA
# compare with image_kp_ABCDEF
print(x_k)
getfx_final = getFx(x_k, frame)
print(getfx_final)
plt.figure()
img_kp = plt.scatter(kp[14,0], kp[14,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_final[0,0], getfx_final[1,0], s = 80, c = 'r', marker = 'X')
img_kp = plt.scatter(kp[8,0], kp[8,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_final[2,0], getfx_final[3,0], s = 80, c = 'r', marker = 'X')
img_kp = plt.scatter(kp[11,0], kp[11,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_final[4,0], getfx_final[5,0], s = 80, c = 'r', marker = 'X')
img_kp = plt.scatter(kp[4,0], kp[4,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_final[6,0], getfx_final[7,0], s = 80, c = 'r', marker = 'X')
img_kp = plt.scatter(kp[0,0], kp[0,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_final[8,0], getfx_final[9,0], s = 80, c = 'r', marker = 'X')
img_kp = plt.scatter(kp[1,0], kp[1,1], s = 80, c = 'g', marker = 'X')
img_kp = plt.scatter(getfx_final[10,0], getfx_final[11,0], s = 80, c = 'r', marker = 'X')
plt.axis('on') # set to 'off' to hide the axes
plt.title('image_kp_ABCDEF_original_opt') # figure title
plt.show()
```
```python
def getValue(X3D, para):
j = 0
X3D_value = X3D.subs({r11:para[13*j], r12:para[13*j+1], r13:para[13*j+2], r14:para[13*j+3],
r21:para[13*j+4], r22:para[13*j+5], r23:para[13*j+6], r24:para[13*j+7],
r31:para[13*j+8], r32:para[13*j+9], r33:para[13*j+10], th:para[13*j+11],
al:para[13*j+12], a:para[-3], b:para[-2], c:para[-1]})
return X3D_value
```
```python
# visualize 3D points and groundtruth
# XA,XB,XC,XD,XE,XF with GT[14,8,11,4,0,1]
para = x_k.copy()
K = Matrix([[1149.67569986785, 0.0, 508.848621645943],
[0.0, 1147.59161666764, 508.064917088557],
[0.0, 0.0, 1.0]])
r11, r12, r13, r14, r21, r22, r23, r24, r31, r32, r33 = symbols('r11 r12 r13 r14 r21 r22 r23 r24 r31 r32 r33')
Rt = Matrix([[r11, r12, r13, r14], [r21, r22, r23, r24], [r31, r32, r33, 1]])
a, b, c, th, al = symbols('a b c th al')
ua, va, wa, ub, vb, wb, uc, vc, wc, ud, vd, wd, ue, ve, we, uf, vf, wf = symbols('ua va wa ub vb wb uc vc wc ud vd wd ue ve we uf vf wf')
XA = Matrix([[-a * c * cos(th) * cos(al)], [c-a * c * sin(th)], [-a * c * cos(th) * sin(al)], [1]])
XB = Matrix([[0], [c], [0], [1]])
XC = Matrix([[a * c * cos(th) * cos(al)], [c+a * c * sin(th)], [a * c * cos(th) * sin(al)], [1]])
XD = Matrix([[-b * c], [0], [0], [1]])
XE = Matrix([[0], [0], [0], [1]])
XF = Matrix([[b * c], [0], [0], [1]])
A3D = Rt * XA
B3D = Rt * XB
C3D = Rt * XC
D3D = Rt * XD
E3D = Rt * XE
F3D = Rt * XF
j = 0
#A3D_value = A3D.subs({r11:para[13*j], r12:para[13*j+1], r13:para[13*j+2], r14:para[13*j+3],
# r21:para[13*j+4], r22:para[13*j+5], r23:para[13*j+6], r24:para[13*j+7],
# r31:para[13*j+8], r32:para[13*j+9], r33:para[13*j+10], th:para[13*j+11],
# al:para[13*j+12], a:para[-3], b:para[-2], c:para[-1]})
s = 5340.55881868
E3D_value = (getValue(E3D, para) - getValue(E3D, para))*s
A3D_value = (getValue(A3D, para) - getValue(E3D, para))*s
B3D_value = (getValue(B3D, para) - getValue(E3D, para))*s
C3D_value = (getValue(C3D, para) - getValue(E3D, para))*s
D3D_value = (getValue(D3D, para) - getValue(E3D, para))*s
F3D_value = (getValue(F3D, para) - getValue(E3D, para))*s
print(A3D_value)
print(B3D_value)
print(C3D_value)
print(D3D_value)
print(E3D_value)
print(F3D_value)
```
Matrix([[-143.732977969521], [-459.901052798691], [-36.8592481249503]])
Matrix([[29.9916868960575], [-465.649978271430], [2.31230523863426]])
Matrix([[203.716351761636], [-471.398903744170], [41.4838586022188]])
Matrix([[-121.569097994654], [-7.86570758781875], [-7.18081842657119]])
Matrix([[0], [0], [0]])
Matrix([[121.569097994654], [7.86570758781875], [7.18081842657119]])
```python
Y = mat([img1_gt[0][0], img1_gt[1][0], img1_gt[2][0]]).T
#s = 5340.55881868
X1 = (mat([img1_gt[0][14], img1_gt[1][14], img1_gt[2][14]]).T-Y)
print(X1)
X2 = (mat([img1_gt[0][8], img1_gt[1][8], img1_gt[2][8]]).T-Y)
print(X2)
X3 = (mat([img1_gt[0][11], img1_gt[1][11], img1_gt[2][11]]).T-Y)
print(X3)
X4 = (mat([img1_gt[0][4], img1_gt[1][4], img1_gt[2][4]]).T-Y)
print(X4)
X5 = (mat([img1_gt[0][0], img1_gt[1][0], img1_gt[2][0]]).T-Y)
print(X5)
X6 = (mat([img1_gt[0][1], img1_gt[1][1], img1_gt[2][1]]).T-Y)
print(X6)
```
[[-151.3162711 ]
[-440.06270576]
[ -93.45364426]]
[[ 22.92779888]
[-491.06066667]
[-118.64034573]]
[[ 194.39764354]
[-432.74999716]
[ -90.46298579]]
[[-123.65075656]
[ -10.21145788]
[ -0.85753061]]
[[0.]
[0.]
[0.]]
[[123.63491077]
[ 10.39999574]
[ 0.92235729]]
```python
# draw the scatter plot of the 3D points
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(X1[0], X1[1], X1[2], s = 80, c = 'g', marker = 'X')
ax.scatter(X2[0], X2[1], X2[2], s = 80, c = 'g', marker = 'X')
ax.scatter(X3[0], X3[1], X3[2], s = 80, c = 'g', marker = 'X')
ax.scatter(X4[0], X4[1], X4[2], s = 80, c = 'g', marker = 'X')
ax.scatter(X5[0], X5[1], X5[2], s = 80, c = 'g', marker = 'X')
ax.scatter(X6[0], X6[1], X6[2], s = 80, c = 'g', marker = 'X')
ax.scatter(A3D_value[0], A3D_value[1], A3D_value[2], s = 80, c = 'r', marker = 'X')
ax.scatter(B3D_value[0], B3D_value[1], B3D_value[2], s = 80, c = 'r', marker = 'X')
ax.scatter(C3D_value[0], C3D_value[1], C3D_value[2], s = 80, c = 'r', marker = 'X')
ax.scatter(D3D_value[0], D3D_value[1], D3D_value[2], s = 80, c = 'r', marker = 'X')
ax.scatter(E3D_value[0], E3D_value[1], E3D_value[2], s = 80, c = 'r', marker = 'X')
ax.scatter(F3D_value[0], F3D_value[1], F3D_value[2], s = 80, c = 'r', marker = 'X')
ax.set_zlabel('Z', fontdict={'size': 15, 'color': 'red'})
ax.set_ylabel('Y', fontdict={'size': 15, 'color': 'red'})
ax.set_xlabel('X', fontdict={'size': 15, 'color': 'red'})
plt.show()
```
```python
```
| 419398d133b527499ed6b7355d94028cd05aa0f1 | 265,756 | ipynb | Jupyter Notebook | annot/BA and LM --- 1F6P #3.ipynb | XiaotengLu/Human-Torso-Pose-Estimation | 997990e74e95832cd377922ea7cc43ec50f82ae0 | [
"MIT"
] | null | null | null | annot/BA and LM --- 1F6P #3.ipynb | XiaotengLu/Human-Torso-Pose-Estimation | 997990e74e95832cd377922ea7cc43ec50f82ae0 | [
"MIT"
] | null | null | null | annot/BA and LM --- 1F6P #3.ipynb | XiaotengLu/Human-Torso-Pose-Estimation | 997990e74e95832cd377922ea7cc43ec50f82ae0 | [
"MIT"
] | null | null | null | 291.079956 | 97,228 | 0.903227 | true | 8,914 | Qwen/Qwen-72B | 1. YES
2. YES | 0.899121 | 0.637031 | 0.572768 | __label__kor_Hang | 0.054551 | 0.169062 |
```c++
// Copyright (c) 2020 Patrick Diehl
//
// SPDX-License-Identifier: BSL-1.0
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
```
# Exercise 1: Classical linear elasticity model
Let $\Omega = (0,1) \subset \mathbb R$ and $\overline{\Omega}$ be the closure of $\Omega$, i.e. $\overline{\Omega}=[0,1]$. The continuum local problem consists in finding the displacement $u$ on $\overline{\Omega}$ such that:
\begin{align}
\label{eq:1dlinearelasticity}
- E u''(x) = f_b(x), &\quad \forall x \in \Omega, \\
\label{eq:Dirichlet}
u(x) = 0, &\quad \text{at}\ x=0,\\
\label{eq:Neumann}
Eu'(x) = g, &\quad \text{at}\ x=1,
\end{align}
where $E$ is the constant modulus of elasticity of the bar, $f_b=f_b(x)$ is a scalar function describing the external body force density (per unit length), and $g \in \mathbb R$ is the traction force applied at end point $x=1$.
```c++
#include <blaze/Math.h>
#include <BlazeIterative.hpp>
#include <cmath>
#include <iostream>
#include <vector>
#include<run_hpx.cpp>
#include<hpx/include/lcos.hpp>
#include<hpx/include/parallel_for_loop.hpp>
```
## Helper functions for Blaze
[Blaze](https://bitbucket.org/blaze-lib/blaze/src/master/) is an open-source, high-performance C++ math library for dense and sparse arithmetic. This library has an HPX backend and utilizes the parallel algorithms to accelerate its linear algebra functions. Blaze is not covered in this course; for more details we refer to
* K. Iglberger, G. Hager, J. Treibig, and U. Rüde: High Performance Smart Expression Template Math Libraries . Proceedings of the 2nd International Workshop on New Algorithms and Programming Models for the Manycore Era (APMM 2012) at HPCS 2012
* K. Iglberger, G. Hager, J. Treibig, and U. Rüde: Expression Templates Revisited: A Performance Analysis of Current Methodologies (Download). SIAM Journal on Scientific Computing, 34(2): C42--C69, 2012
```c++
// Generates a Blaze dynamic matrix of size N times N and fills the matrix with zeros
blaze::DynamicMatrix<double> zeroMatrix(unsigned long N)
{
return blaze::DynamicMatrix<double>( N, N, 0 );
};
```
```c++
// Generates a Blaze dynamic vector of size N and fills the vector with zeros
blaze::DynamicVector<double> zeroVector(unsigned long N)
{
return blaze::DynamicVector<double>(N,0);
};
```
```c++
// Solves the matrix system A \times x = b and returns x
blaze::DynamicVector<double> solve(blaze::DynamicMatrix<double> A, blaze::DynamicVector<double> b )
{
blaze::iterative::BiCGSTABTag tag;
tag.do_log() = true;
return solve(A,b,tag);
}
```
## Force function
As the external load, a force function $f_b : \mathbb{R} \rightarrow \mathbb{R}$ is applied:
$$ f_b(x) = \begin{cases} 1, & \text{if } x = 1, \\
0, & \text{otherwise,} \end{cases} \quad x \in [0,1].$$
```c++
double force(double x){
if ( x == 1 )
return 1;
return 0;
}
```
## Discretization
As the domain $\overline{\Omega}$ we consider the interval $[0,1]$ and discretize it into $n$ elements using the spacing $h=\frac{1}{n}$, such that $x=\{0,1\cdot h,2\cdot h,\ldots,n\cdot h\}$.
```c++
size_t n = std::pow(2,2);
double h = 1./n;
n += 1;
```
(unsigned long) 5
```c++
auto x = zeroVector(n);
for(size_t i = 0 ; i < n ; i++)
x[i] = i * h;
```
```c++
// Print the discrete nodes
std::cout << x ;
```
( 0 )
( 0.25 )
( 0.5 )
( 0.75 )
( 1 )
(std::ostream &) @0x7fce872e3500
<span style="color:blue">Task 1: Replace the for loop in Cell 11 with `hpx::for_loop` to fill the right-hand side $f$ in parallel in Cell 12 (a sketch follows below)</span>.
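A minimal sketch of one possible solution is given below; the exact namespaces vary across HPX releases (older versions spell it `hpx::parallel::for_loop` with `hpx::parallel::execution::par`), so treat this as a starting point rather than the reference solution:
```c++
// Fill the right-hand side in parallel with an HPX parallel for loop.
// Assumes f, x, n, and force() from the surrounding cells.
run_hpx([&](){
    hpx::for_loop(hpx::execution::par, std::size_t(0), n, [&](std::size_t i){
        f[i] = force(x[i]);
    });
});
```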
## Prepare the external load
```c++
// Get the force vector for the right-hand side
auto f = zeroVector(n);
```
```c++
/*
for(size_t i = 0 ; i < n ; i++)
{
f[i] = force(x[i]);
}
*/
```
```c++
run_hpx([](){
});
```
(void) @0x7fce7831ce48
```c++
// Print the force vector
std::cout << f ;
```
( 0 )
( 0 )
( 0 )
( 0 )
( 1 )
(std::ostream &) @0x7fce872e3500
### Assemble the stiffness matrix using finite differences
1. Dirichlet boundary condition at $x=0$:
\begin{equation}
u_1 = 0.
\end{equation}
2. Finite difference scheme in $\overline{\Omega}$,
$\forall i=2,\ldots,n-1$:
\begin{equation}
- E \frac{u_{i-1}-2u_i+u_{i+1}}{h^2} = f_b(x_i).
\end{equation}
3. Neumann boundary condition at $x=1$:
\begin{equation}
E \frac{u_{n-2}-4u_{n-1}+3u_n}{2h} = g.
\end{equation}
For simplicity we assume $E=1$.
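As a quick consistency check, the one-sided boundary stencil in step 3 follows from Taylor expansions around $x=1$:
\begin{align}
u_{n-1} &= u_n - h\,u'(1) + \tfrac{h^2}{2}u''(1) + \mathcal{O}(h^3),\\
u_{n-2} &= u_n - 2h\,u'(1) + 2h^2 u''(1) + \mathcal{O}(h^3),
\end{align}
so that $u_{n-2} - 4u_{n-1} + 3u_n = 2h\,u'(1) + \mathcal{O}(h^3)$, i.e. the boundary condition is discretized with second-order accuracy, consistent with the central scheme in the interior.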
<span style="color:blue">Task 2: Use asynchronous programming to assemble the stiffness matrix using hpx::async and hpx::future</span>.
1. Finish the function assemble to fill the matrix rows from start to end in Cell 16
2. Generate a vector of hpx::future<void> to collect all futures for synchronization in Cell 17
3. Use hpx::async to execute the function assemble asynchronously in Cell 17
4. Use hpx::wait_all to synchronize the results in Cell 17 (a sketch of these steps follows below)
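The following is a minimal sketch of these steps (assuming `assemble` fills the interior rows `[start, end)`; the two boundary rows and the final `1./(2*h*h)` scaling still have to be applied separately, and `<algorithm>` is assumed for `std::min`):
```c++
run_hpx([&](){
    std::vector<hpx::future<void>> futures;
    std::size_t chunk = (n - 2) / 2 + 1;  // split the interior rows into two tasks
    for (std::size_t start = 1; start < n - 1; start += chunk)
    {
        std::size_t end = std::min(start + chunk, n - 1);
        futures.push_back(hpx::async(assemble, &matrix, start, end));
    }
    hpx::wait_all(futures);
});
```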
```c++
// Get the stiffness matrix filled with zeros
auto matrix = zeroMatrix(n);
```
```c++
/*
matrix(0,0) = 1;
for(size_t i = 1 ; i < n-1 ; i++){
matrix(i,i-1) = -2;
matrix(i,i) = 4;
matrix(i,i+1) = -2;
}
matrix(n-1,n-1) = 3*h;
matrix(n-1,n-2) = -4*h;
matrix(n-1,n-3) = h;
matrix *= 1./(2*h*h);
*/
```
```c++
// Assemble the part of the stiffness matrix where the row index goes from start to end
void assemble(blaze::DynamicMatrix<double>* matrix, size_t start, size_t end)
{
    // interior rows of the central-difference stencil (same values as in the
    // commented-out serial assembly above)
    for (size_t i = start; i < end; i++)
    {
        (*matrix)(i, i - 1) = -2;
        (*matrix)(i, i) = 4;
        (*matrix)(i, i + 1) = -2;
    }
}
```
```c++
```
(void) @0x7fce7831ce48
```c++
std::cout << matrix ;
```
( 8 0 0 0 0 )
( -16 32 -16 0 0 )
( 0 -16 32 -16 0 )
( 0 0 -16 32 -16 )
( 0 0 2 -8 6 )
(std::ostream &) @0x7fce872e3500
```c++
// Solve the matrix system matrix times displacement - f
auto displacement = solve(matrix,f);
```
```c++
std::cout << displacement;
```
( 0 )
( 0.25 )
( 0.5 )
( 0.75 )
( 1 )
(std::ostream &) @0x7fce872e3500
# Doing python plots from C++
Doing plots in Python is quite convenient using [matplotlib](https://matplotlib.org/); however, plotting in C++ is a little more tricky, since we need to write the data to a CSV file and use Python and Matplotlib to plot it. The notebooks have some magic implemented to plot C++ variables directly with Python's matplotlib.
Below we use %plot x y to plot a new line, and we can repeat this command to add new lines to the same plot. Using %plotc x y will plot all previously added lines and clear the figure.
```c++
%data x displacement
```
x shape is (5,)displacement shape is (5,)
```c++
%%plot
plt.xlabel("Position")
plt.ylabel("Displacement")
plt.plot(x,displacement,label="Simulation")
plt.plot(x,x,label="Exact solution")
plt.grid()
plt.legend()
```
| 8cd5d2aa1c474aa162c044feca2dc85799442565 | 50,268 | ipynb | Jupyter Notebook | exercise/Exercise1.ipynb | STEllAR-GROUP/HPXjupyterTutorial | 2deeb2086473a5bf12c6f64e794e0a61fb1cf595 | [
"BSL-1.0"
] | 1 | 2021-09-30T13:39:19.000Z | 2021-09-30T13:39:19.000Z | exercise/Exercise1.ipynb | STEllAR-GROUP/HPXjupyterTutorial | 2deeb2086473a5bf12c6f64e794e0a61fb1cf595 | [
"BSL-1.0"
] | null | null | null | exercise/Exercise1.ipynb | STEllAR-GROUP/HPXjupyterTutorial | 2deeb2086473a5bf12c6f64e794e0a61fb1cf595 | [
"BSL-1.0"
] | 1 | 2021-09-30T13:45:40.000Z | 2021-09-30T13:45:40.000Z | 72.120516 | 33,619 | 0.807134 | true | 2,242 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.740174 | 0.800692 | 0.592652 | __label__eng_Latn | 0.889212 | 0.215259 |
# Numpy (Numeric Python)
> - As the name suggests, a Python package for **numerical work in Python**
> - Provides functions for **implementing linear algebra** and for **scientific computing**
> - (key) Uses the `ndarray` multi-dimensional array, which enables **arithmetic on whole vectors**
> - **Broadcasting** makes operations between data of different shapes possible
>> - Not provided by most existing languages
>> - An extremely powerful feature that is very efficient for big-data computations
## Installing and importing Numpy
> - In the previous lessons we learned that classes can be imported from packages and reused.
- However, if a package was not written by you and is not installed on the current machine, it can be installed with a simple command.
>> - The `pip` and `conda` commands: Python library managers that make it easy to install open-source libraries
> - 1. When running from a console:
**`pip` `install` `[package name]`** or
**`conda` `install` `[package name]`**
> - 2. When running from a Jupyter notebook:
**`!pip` `install` `[package name]`**
> - If the Python environment was set up with Anaconda, Numpy is installed by default
```python
# Install Numpy from inside a Jupyter notebook
!pip install numpy
```
[33mDEPRECATION: Configuring installation scheme with distutils config files is deprecated and will no longer work in the near future. If you are using a Homebrew or Linuxbrew Python, please see discussion at https://github.com/Homebrew/homebrew-core/issues/76621[0m
Requirement already satisfied: numpy in /opt/homebrew/lib/python3.9/site-packages (1.21.2)
```python
# Import the package to use Numpy
import numpy as np
# By convention the short alias np is used.
# Virtually every Python user uses this nickname, so it is strongly recommended to stick to it.
```
## A little linear algebra for data analysis
numpy is fundamentally a package for numerical computing. With just a little linear algebra you can work with data in much more depth.
Source: https://art28.github.io/blog/linear-algebra-1/
### Representation of data by kind, with examples
#### Scalar
1, 3.14, a real number or an integer
#### Vector
[1, 2, 3, 4], "a string"
#### 3 x 4 matrix
[[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 0, 11, 12]]
#### 2 x 3 x 4 tensor
[[[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 0, 11, 12]],
[[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 0, 11, 12]]]
### Linear algebra expressed with data
Source: https://m.blog.naver.com/nabilera1/221978354680
### Arithmetic by data shape
> Scalar +, -, *, / -> the result is also a scalar
Vector +, -, dot product -> + and - give a vector, the dot product gives a scalar
Matrix +, -, *, /
Tensor +, -, *, /
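A quick numpy illustration of these rules: addition and subtraction keep the vector shape, while the dot product collapses to a scalar (the values here are arbitrary):
```python
import numpy as np

v = np.array([1, 2, 3, 4])
w = np.array([4, 3, 2, 1])
print(v + w)  # vector: [5 5 5 5]
print(v - w)  # vector: [-3 -1  1  3]
print(v @ w)  # scalar: 1*4 + 2*3 + 3*2 + 4*1 = 20
```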
### Special operations frequently used in data analysis
The dot product of two vectors
$$\begin{bmatrix}1 & 2 & 3 & 4 \end{bmatrix} \times \begin{bmatrix}1 \\ 2 \\ 3 \\ 4 \end{bmatrix} = 1 * 1 + \
2 * 2 + 3 * 3 + 4 * 4 = 30$$
# $$ A^TA $$
#### For the dot product of two vectors to be defined
1. The facing shapes (lengths) of the two vectors must be equal.
2. The vector placed in front of the operation must be transposed.
Source: https://ko.wikipedia.org/wiki/%EC%A0%84%EC%B9%98%ED%96%89%EB%A0%AC
#### Expressing an equation as a dot product
$$y = \begin{bmatrix}1 & 2 & 1 \end{bmatrix} \times \begin{bmatrix}x_1 \\ x_2 \\ x_3 \\ \end{bmatrix} = 1 * x_1 + \
2 * x_2 + 1 * x_3 = x_1 + 2x_2 + x_3$$
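For a concrete sample of $x$ (the values below are chosen arbitrarily), the equation above can be evaluated as a dot product:
```python
import numpy as np

w = np.array([1, 2, 1])
x = np.array([2.0, 3.0, 4.0])  # arbitrary sample values for x1, x2, x3
print(w @ x)                   # x1 + 2*x2 + x3 = 12.0
```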
## Broadcasting
> Python numpy operations support broadcasting.
In vector operations, the vector with the smaller shape is extended to the length of the larger one before the operation.
Source: http://www.astroml.org/book_figures/appendix/fig_broadcast_visual.html
```python
np.arange(3).reshape((3,1))+np.arange(3)
```
array([[0, 1, 2],
[1, 2, 3],
[2, 3, 4]])
## Numpy functions (universal functions)
> `numpy` provides a variety of functions for computational work.
>> Basic structure of a function call
ex) **`np.sum`**(target, axis=direction of the operation)
**`dtype()`**
### Mathematical operations
- **`prod()`**
- **`dot()`**
- **`sum()`**
- **`cumprod()`**
- **`cumsum()`**
- **`abs()`**
- **`square()`**
- **`sqrt()`**
- **`exp()`**
- **`log()`**
### Statistical operations
- **`mean()`**
- **`std()`**
- **`var()`**
- **`max()`**
- **`min()`**
- **`argmax()`**
- **`argmin()`**
### Logic operations
- **`arange()`**
- **`isnan()`**
- **`isinf()`**
- **`unique()`**
### Geometry
- **`shape()`**
- **`reshape()`**
- **`ndim()`**
- **`transpose()`**
Reference for the various math functions: https://numpy.org/doc/stable/reference/routines.math.html
### numpy function practice
```python
# datasets for the function examples
test_list = [1, 2, 3, 4]
test_list2 = [[1, 3], [5, 7]]
test_flist = [1, 3.14, -4.5]
test_list_2nd = [[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]
test_list_3rd = [[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[1, 2, 3, 4],
[5, 6, 7, 8]]]
test_exp = [0, 1, 10]
test_nan = [0, np.nan, np.inf]
```
```python
# product of the elements
np.prod(test_list)
```
24
```python
# sum of the elements
np.sum(test_list)
```
10
```python
# cumulative product, returned as a vector
np.cumprod(test_list)
```
array([ 1, 2, 6, 24])
```python
# cumulative sum
# often used for daily/monthly revenue calculations
np.cumsum(test_list)
```
array([ 1, 3, 6, 10])
```python
# absolute value
np.abs(test_flist)
```
array([1. , 3.14, 4.5 ])
```python
# square root
np.sqrt(test_list)
```
array([1. , 1.41421356, 1.73205081, 2. ])
```python
# square
```
```python
# exp
```
```python
# log
```
### Statistics
```python
# mean
np.mean(test_list)
```
2.5
```python
# standard deviation
np.std(test_list)
```
1.118033988749895
```python
# variance
np.var(test_list)
```
1.25
```python
# maximum
```
```python
# minimum
```
```python
test_list_2nd
```
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```python
# returns the index at which the maximum value occurs
# the output is a (flattened) index
np.argmax(test_list_2nd)
```
8
```python
# index of the minimum value
np.argmin(test_list_2nd)
```
0
```python
# range setting
# works the same way as the built-in range() function
# for i in range(0, 100, 10):
#     print(i)
# (start point, last point + 1, step)
np.arange(10, 100, 10)
```
array([10, 20, 30, 40, 50, 60, 70, 80, 90])
```python
# range setting 2
# (start point, end point, number of points)
np.linspace(0, 10, 50) # 50 evenly spaced points from 0 to 10
```
array([ 0. , 0.20408163, 0.40816327, 0.6122449 , 0.81632653,
1.02040816, 1.2244898 , 1.42857143, 1.63265306, 1.83673469,
2.04081633, 2.24489796, 2.44897959, 2.65306122, 2.85714286,
3.06122449, 3.26530612, 3.46938776, 3.67346939, 3.87755102,
4.08163265, 4.28571429, 4.48979592, 4.69387755, 4.89795918,
5.10204082, 5.30612245, 5.51020408, 5.71428571, 5.91836735,
6.12244898, 6.32653061, 6.53061224, 6.73469388, 6.93877551,
7.14285714, 7.34693878, 7.55102041, 7.75510204, 7.95918367,
8.16326531, 8.36734694, 8.57142857, 8.7755102 , 8.97959184,
9.18367347, 9.3877551 , 9.59183673, 9.79591837, 10. ])
```python
test_nan
```
[0, nan, inf]
```python
# check for missing values (nan)
np.isnan(test_nan)
```
array([False, True, False])
```python
# check for divergence (infinity)
np.isinf(test_nan)
```
array([False, False, True])
```python
test_list_3rd
```
[[[1, 2, 3, 4], [5, 6, 7, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8]]]
```python
# check the unique values
np.unique(test_list_3rd)
len(np.unique(test_list_3rd))
```
8
```python
# check the data structure (shape) and dimensions
# read the shape starting from the innermost (column) axis
np.shape(test_list_3rd)
```
(3, 2, 4)
```python
# change the data shape
# when is a reshape possible? the total number of elements must stay the same
np.reshape(test_list_3rd, (4,6))
np.reshape(test_list_3rd, (2,2,6))
```
array([[[1, 2, 3, 4, 5, 6],
[7, 8, 1, 2, 3, 4]],
[[5, 6, 7, 8, 1, 2],
[3, 4, 5, 6, 7, 8]]])
```python
test_list_3rd
```
[[[1, 2, 3, 4], [5, 6, 7, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8]],
[[1, 2, 3, 4], [5, 6, 7, 8]]]
```python
# check the number of dimensions
np.ndim(test_list_3rd)
# geometrically this is the number of axes; in data analysis, dimensionality is usually described in terms of columns
```
3
```python
test_list_2nd
```
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```python
# transpose
np.transpose(test_list_2nd)
```
array([[1, 4, 7],
[2, 5, 8],
[3, 6, 9]])
```python
test_list
```
[1, 2, 3, 4]
```python
```
## Numpy array (arrays and matrices)
> - The basic data structure underlying numpy operations.
- Easier to create than a list, with the advantage of **fast computation**.
- **Supports broadcasting operations**.
- However, only data of the **same type** can be stored.
- Arrays can also be created with basic numpy functions.
>> Basic structure of the array calls
ex) **`np.array(data to convert)`**
ex) **`np.arange(start, end, step_forward)`**
### numpy array practice
```python
# convert the existing data structures to arrays
test_array = np.array(test_list)
test_array2 = np.array(test_list2)
test_farray = np.array(test_flist)
test_array_2nd = np.array(test_list_2nd)
test_array_3rd = np.array(test_list_3rd)
```
```python
# check the created array
test_array
```
array([1, 2, 3, 4])
```python
array_list = [1,2,4.5]
```
```python
# check that only data of a single type is stored
array_test = np.array(array_list)
```
```python
array_test # the overall dtype is promoted in the order: integer, float, string
```
array([1. , 2. , 4.5])
```python
# check the 2-D array
test_list_2nd
# the array prints the 2-D data in a more readable layout
test_array_2nd
```
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
```python
# check the 3-D array
test_list_3rd
test_array_3rd
```
array([[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[1, 2, 3, 4],
[5, 6, 7, 8]]])
```python
# create with the np.arange function
np.arange(25).reshape(5,5)
```
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
```python
np.arange(1,25).reshape(2,12)
```
array([[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
[13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]])
### Creating arrays of special form with functions
Basic structure of the calls
> ex) **`np.ones([shape])`**
>> The shape may only be given as an integer, a **[ ]** list, or a **( )** tuple.
- ones()
- zeros()
- empty()
- eye()
```python
# create an array initialized with ones
np.ones([3,3])
```
array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
```python
# initialized with zeros
np.zeros((5,5))
```
array([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]])
```python
# initialized with empty (uninitialized) values
np.empty((2,2))
```
array([[0., 0.],
[0., 0.]])
```python
# identity matrix
# A x identity matrix = A
# used to make operations possible when the shapes would not otherwise match
np.eye(4,4)
# example
# date sales
# date sales
# date sales
# date sales
# date sales
# date date date date date
# sales none none none none
# none sales none none none
# none none sales none none
```
array([[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]])
```python
np.arange(16).reshape(4,4) @ np.eye(4,4)
```
array([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]])
```python
test_array @ np.transpose(test_array)
```
array([1, 2, 3, 4])
### array attributes and built-in methods
`np.array` comes equipped with useful numeric and statistical methods. We will also look at methods for multi-dimensional geometric operations.
> Basic structure for accessing array attributes
ex) **`test_array.ndim`**
Frequently used attributes: `shape`, `dtype`, `ndim`
> Basic structure for calling array methods
ex) **`test_array.prod()`**
Unlike np.sum() studied above, the method is called on the array variable itself.
#### array attributes
```python
test_array_3rd
```
array([[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[1, 2, 3, 4],
[5, 6, 7, 8]]])
```python
# check the data type (dtype)
test_array_3rd.dtype
```
dtype('int64')
```python
# check the shape
test_array_3rd.shape
```
(3, 2, 4)
```python
# check the number of dimensions
test_array_3rd.ndim
```
3
```python
# transpose
np.transpose(test_array_3rd)
test_array_3rd.T
```
array([[[1, 1, 1],
[5, 5, 5]],
[[2, 2, 2],
[6, 6, 6]],
[[3, 3, 3],
[7, 7, 7]],
[[4, 4, 4],
[8, 8, 8]]])
#### array built-in methods
```python
# calling the built-in methods
test_array.mean()
np.sqrt(test_array)
```
array([1. , 1.41421356, 1.73205081, 2. ])
```python
# the method names match the numpy functions
test_array.prod()
test_array.sum(axis=0)
...
...
```
### array operations
As a package for computing, numpy supports convenient array arithmetic. Let's look at how array operations differ from those of other data structures.
```python
test_list = [1,2,3,4,5]
test_list2 = [x*2 for x in test_list]
test_list2
```
[2, 4, 6, 8, 10]
```python
# array addition, subtraction, multiplication, division, and elementwise comparison
test_array = np.array(test_list)
test_array > 1
```
array([False, True, True, True, True])
```python
# create large data to check the actual difference in computation speed
big_list = [x for x in range(400000)]
big_array = np.array(big_list)
len(big_list), len(big_array)
```
(400000, 400000)
```python
# add 1 to each list element with a comprehension
big_list2 = [x+1 for x in big_list] # faster than an ordinary for loop (Python checks values line by line)
# for index, item in enumerate(big_list):
#     big_list[index] = item + 1
```
UsageError: Line magic function `%%time` not found.
```python
# using the array's vectorized nature instead
big_array + 1
```
array([ 1, 2, 3, ..., 399998, 399999, 400000])
```python
# matrix (dot) product
first_array = np.arange(15).reshape(5, 3)
second_array = np.arange(15).reshape(3, 5)
```
```python
first_array.shape, second_array.shape
```
((5, 3), (3, 5))
```python
# matrix product of the two arrays
first_array @ second_array
```
array([[ 25, 28, 31, 34, 37],
[ 70, 82, 94, 106, 118],
[115, 136, 157, 178, 199],
[160, 190, 220, 250, 280],
[205, 244, 283, 322, 361]])
### Weighted sums of vectors
The dot product can be used to compute a weighted sum. A **weighted sum** does not simply add multiple values together; it multiplies each value by a weight and then sums the products.
If the data vector is $x=[x_1, \cdots, x_N]^T$ and the weight vector is $w=[w_1, \cdots, w_N]^T$, the weighted sum of the data vector is:
$$
\begin{align}
w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i
\end{align}
$$
Expressed as a product of the vectors $x$ and $w$, this is simply $w^Tx$ or $x^Tw$.
When shopping, take the prices as the data vector and the quantities as the weights; their dot product gives the total amount.
```python
# Weighted-sum exercise
# We construct a portfolio of Samsung Electronics, Celltrion, and Kakao.
# Their prices are 80,000 won, 270,000 won, and 160,000 won respectively.
# Compute the purchase amount for 100 shares of Samsung Electronics, 30 shares of Celltrion, and 50 shares of Kakao.
price = np.array([80000,270000,160000])
item = np.array([100,30,50])
price @ item.T
# the result comes out even though the shapes do not look compatible => with @, the 1-D operand behaves like item.T
price @ item
```
24100000
### array indexing and slicing (very important)
> Fundamentally, a data structure is a bundle of data and a basket that manages that bundle.
We use data structures for data analysis, but sometimes we also need to access the contents inside them.
>> What is **indexing**?
A command that accesses a single item inside the data basket; the index is the item's position.
>> What is **slicing**?
A command that accesses several items inside the data basket at once.
Indexing and slicing work basically the same way as for lists.
#### indexing and slicing practice
```python
# create an array with the range 10 to 19
test = np.arange(10,20)
```
```python
# from index 0 up to index 3 (exclusive)
test[:3]
```
array([10, 11, 12])
```python
# from index 4 to the last index
test[4:]
```
array([14, 15, 16, 17, 18, 19])
```python
# the last three elements (from the third-from-last index to the end)
test[-3:]
```
array([17, 18, 19])
```python
# indices increasing by 3 starting from 0
test[::3]
```
array([10, 13, 16, 19])
#### Let's try out various indexing and slicing patterns
```python
index_test2 = np.array(range(25)).reshape([5, 5])
index_test2
```
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
```python
index_test2[2:,1:4]
```
array([[11, 12, 13],
[16, 17, 18],
[21, 22, 23]])
```python
index_test2[:2,2:]
```
array([[2, 3, 4],
[7, 8, 9]])
```python
index_test3 = np.arange(40).reshape(2, 5, 4)
index_test3
```
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15],
[16, 17, 18, 19]],
[[20, 21, 22, 23],
[24, 25, 26, 27],
[28, 29, 30, 31],
[32, 33, 34, 35],
[36, 37, 38, 39]]])
```python
index_test3[0,3:5,1:3]
```
array([[13, 14],
[17, 18]])
```python
index_test3[0:,0:,1]
```
array([[ 1, 5, 9, 13, 17],
[21, 25, 29, 33, 37]])
## Fancy indexing
In numpy, a way to select data using a boolean vector produced by a vectorized comparison.
```python
pet = np.array(['dog','cat','cat','hamster','dog','hamster'])
num = np.array([1,2,3,4,5,6])
indexing_test = np.random.randn(6,5)  # randn: matrix of standard-normal random numbers
```
```python
indexing_test
```
array([[ 1.21892182, -0.57278769, -2.01837856, 0.37751666, -2.14632145],
[ 1.72627741, 0.19978955, -0.45559384, 2.3741275 , 0.61280794],
[ 0.38249301, -0.10195438, 1.1064851 , 0.14316647, -0.01103692],
[ 0.89879404, -0.08147537, -0.02147208, 0.77555509, -0.45326144],
[ 0.12338784, -0.00431352, -1.31633818, 0.85516829, 0.29007829],
[-0.17169369, 0.42409219, 1.69908292, 1.43223254, -1.29304307]])
```python
pet == 'dog'
```
array([ True, False, False, False, True, False])
```python
indexing_test[pet=='dog']
```
array([[ 1.21892182, -0.57278769, -2.01837856, 0.37751666, -2.14632145],
[ 0.12338784, -0.00431352, -1.31633818, 0.85516829, 0.29007829]])
```python
indexing_test[~(pet=='dog')]
```
array([[ 1.72627741, 0.19978955, -0.45559384, 2.3741275 , 0.61280794],
[ 0.38249301, -0.10195438, 1.1064851 , 0.14316647, -0.01103692],
[ 0.89879404, -0.08147537, -0.02147208, 0.77555509, -0.45326144],
[-0.17169369, 0.42409219, 1.69908292, 1.43223254, -1.29304307]])
```python
(pet=='dog') | (pet=='hamster')
```
array([ True, False, False, True, True, True])
```python
indexing_test[(pet=='dog') | (pet=='hamster')]
```
array([[ 1.21892182, -0.57278769, -2.01837856, 0.37751666, -2.14632145],
[ 0.89879404, -0.08147537, -0.02147208, 0.77555509, -0.45326144],
[ 0.12338784, -0.00431352, -1.31633818, 0.85516829, 0.29007829],
[-0.17169369, 0.42409219, 1.69908292, 1.43223254, -1.29304307]])
```python
num > 3
```
array([False, False, False, True, True, True])
```python
indexing_test[num>3]
```
array([[ 0.89879404, -0.08147537, -0.02147208, 0.77555509, -0.45326144],
[ 0.12338784, -0.00431352, -1.31633818, 0.85516829, 0.29007829],
[-0.17169369, 0.42409219, 1.69908292, 1.43223254, -1.29304307]])
```python
# booleans count as 1 and 0 in arithmetic
(pet == 'dog').sum()
```
2
```python
# True if at least one element is True
(pet == 'dog').any()
```
True
```python
(pet == 'dog').all()
```
False
---
*Sebastian Raschka*
last modified: 03/31/2014
<hr>
I am really looking forward to your comments and suggestions to improve and extend this tutorial! Just send me a quick note
via Twitter: [@rasbt](https://twitter.com/rasbt)
or Email: [[email protected]](mailto:[email protected])
<hr>
### Problem Category
- Statistical Pattern Recognition
- Supervised Learning
- Parametric Learning
- Bayes Decision Theory
- Univariate data
- 2-class problem
- different variances
- Gaussian model (2 parameters)
- No Risk function
<hr>
<p><a name="sections"></a>
<br></p>
# Sections
<p>• <a href="#given">Given information</a><br>
• <a href="#deriving_db">Deriving the decision boundary</a><br>
• <a href="#plotting_db">Plotting the class conditional densities, posterior probabilities, and decision boundary</a><br>
• <a href="#classify_rand">Classifying some random example data</a><br>
• <a href="#emp_err">Calculating the empirical error rate</a><br>
<hr>
<p><a name="given"></a>
<br></p>
## Given information:
[<a href="#sections">back to top</a>] <br>
#### Model: continuous univariate normal (Gaussian) model for the class-conditional densities
$ p(x | \omega_j) \sim N(\mu, \sigma^2) $
$ p(x | \omega_j) \sim \frac{1}{\sqrt{2\pi\sigma^2}} \exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu}{\sigma}\bigg)^2 \bigg] } $
#### Prior probabilities:
$ P(\omega_1) = P(\omega_2) = 0.5 $
#### Variances of the sample distributions
$ \sigma_1^2 = 4, \quad \sigma_2^2 = 1 $
#### Means of the sample distributions
$ \mu_1 = 4, \quad \mu_2 = 10 $
<br>
<p><a name="deriving_db"></a>
<br></p>
## Deriving the decision boundary
[<a href="#sections">back to top</a>] <br>
### Bayes' Rule:
$ P(\omega_j|x) = \frac{p(x|\omega_j) * P(\omega_j)}{p(x)} $
### Bayes' Decision Rule:
Decide $ \omega_1 $ if $ P(\omega_1|x) > P(\omega_2|x) $ else decide $ \omega_2 $.
<br>
\begin{equation}
\begin{aligned}
&\Rightarrow \frac{p(x|\omega_1) * P(\omega_1)}{p(x)} > \frac{p(x|\omega_2) * P(\omega_2)}{p(x)}
\end{aligned}
\end{equation}
We can drop $ p(x) $ since it is just a scale factor.
$ \Rightarrow P(x|\omega_1) * P(\omega_1) > p(x|\omega_2) * P(\omega_2) $
$ \Rightarrow \frac{p(x|\omega_1)}{p(x|\omega_2)} > \frac{P(\omega_2)}{P(\omega_1)} $
$ \Rightarrow \frac{p(x|\omega_1)}{p(x|\omega_2)} > \frac{0.5}{0.5} $
$ \Rightarrow \frac{p(x|\omega_1)}{p(x|\omega_2)} > 1 $
$ \Rightarrow \frac{1}{\sqrt{2\pi\sigma_1^2}} \exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu_1}{\sigma_1}\bigg)^2 \bigg] } > \frac{1}{\sqrt{2\pi\sigma_2^2}} \exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu_2}{\sigma_2}\bigg)^2 \bigg] } \quad \bigg| \quad ln $
$ \Rightarrow ln(1) - ln\bigg({\sqrt{2\pi\sigma_1^2}}\bigg) -\frac{1}{2}\bigg( \frac{x-\mu_1}{\sigma_1}\bigg)^2 > ln(1) - ln\bigg({{\sqrt{2\pi\sigma_2^2}}}\bigg) -\frac{1}{2}\bigg( \frac{x-\mu_2}{\sigma_2}\bigg)^2 \quad \bigg| \quad \sigma_1^2 = 4, \quad \sigma_2^2 = 1,\quad \mu_1 = 4, \quad \mu_2 = 10 $
$ \Rightarrow -ln({\sqrt{2\pi4}}) -\frac{1}{2}\bigg( \frac{x-4}{2}\bigg)^2 > -ln({{\sqrt{2\pi}}}) -\frac{1}{2}(x-10)^2 $
$ \Rightarrow -\frac{1}{2} ln({2\pi}) - ln(2) -\frac{1}{8} (x-4)^2 > -\frac{1}{2}ln(2\pi) -\frac{1}{2}(x-10)^2 \quad \bigg| \; \times\; 2 $
$ \Rightarrow -ln({2\pi}) - 2ln(2) - \frac{1}{4}(x-4)^2 > -ln(2\pi) - (x-10)^2 \quad \bigg| \; + ln(2\pi) $
$ \Rightarrow -ln(4) - \frac{1}{4}(x-4)^2 > -(x-10)^2 \quad \big| \; \times \; 4 $
$ \Rightarrow -4ln(4) - (x-4)^2 > -4(x-10)^2 $
$ \Rightarrow -8ln(2) - x^2 + 8x - 16 > - 4x^2 + 80x - 400 $
$ \Rightarrow 3x^2 - 72x + 384 -8ln(2) > 0 $
$ \Rightarrow x < 7.775 \quad \text{or} \quad x > 16.225 $
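As a quick numerical cross-check (a sketch, not part of the original derivation), the boundary points are the roots of $3x^2 - 72x + 384 - 8\ln(2) = 0$:
```python
import numpy as np

# coefficients of 3x^2 - 72x + 384 - 8*ln(2)
coeffs = [3, -72, 384 - 8 * np.log(2)]
print(np.roots(coeffs))  # approximately [16.225, 7.775]
```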
<p><a name="plotting_db"></a>
<br></p>
## Plotting the class conditional densities, posterior probabilities, and decision boundary
[<a href="#sections">back to top</a>] <br>
```python
%pylab inline
import math
import numpy as np
from matplotlib import pyplot as plt
def pdf(x, mu, sigma):
"""
Calculates the normal distribution's probability density
function (PDF).
"""
term1 = 1.0 / ( math.sqrt(2*np.pi) * sigma )
term2 = np.exp( -0.5 * ( (x-mu)/sigma )**2 )
return term1 * term2
# generating some sample data
x = np.arange(-100, 100, 0.05)
# probability density functions
pdf1 = pdf(x, mu=4, sigma=2)  # sigma_1 = sqrt(4) = 2
pdf2 = pdf(x, mu=10, sigma=1)
# Class conditional densities (likelihoods)
plt.plot(x, pdf1)
plt.plot(x, pdf2)
plt.title('Class conditional densities (likelihoods)')
plt.ylabel('p(x)')
plt.xlabel('random variable x')
plt.legend(['p(x|w_1) ~ N(4,4)', 'p(x|w_2) ~ N(10,1)'], loc='upper left')
plt.ylim([0,0.5])
plt.xlim([-15,20])
plt.show()
```
```python
def posterior(likelihood, prior):
"""
Calculates the posterior probability (after Bayes Rule) without
the scale factor p(x) (=evidence).
"""
return likelihood * prior
# probability density functions
posterior1 = posterior(pdf(x, mu=4, sigma=2), 0.5)  # sigma_1 = sqrt(4) = 2
posterior2 = posterior(pdf(x, mu=10, sigma=1), 0.5)
# Posterior probabilities
plt.plot(x, posterior1)
plt.plot(x, posterior2)
plt.title('Posterior Probabilities w. Decision Boundaries')
plt.ylabel('P(w)')
plt.xlabel('random variable x')
plt.legend(['P(w_1|x)', 'P(w_2|x)'], loc='upper left')
plt.ylim([0,0.25])
plt.xlim([-15,20])
plt.axvline(7.775, color='r', alpha=0.8, linestyle=':', linewidth=2)
plt.axvline(16.225, color='r', alpha=0.8, linestyle=':', linewidth=2)
plt.annotate('R2', xy=(10, 0.2), xytext=(10, 0.22))
plt.annotate('R1', xy=(4, 0.2), xytext=(4, 0.22))
plt.annotate('R1', xy=(17, 0.2), xytext=(17.5, 0.22))
plt.show()
```
<p><a name="classify_rand"></a>
<br></p>
## Classifying some random example data
[<a href="#sections">back to top</a>] <br>
```python
# Parameters
mu_1 = 4
mu_2 = 10
sigma_1_sqr = 4
sigma_2_sqr = 1
# Generating 20 random samples drawn from a Normal Distribution for class 1 & 2
x1_samples = sigma_1_sqr**0.5 * np.random.randn(20) + mu_1
x2_samples = sigma_2_sqr**0.5 * np.random.randn(20) + mu_2
y = [0 for i in range(20)]
# Plotting sample data with a decision boundary
plt.scatter(x1_samples, y, marker='o', color='green', s=40, alpha=0.5)
plt.scatter(x2_samples, y, marker='^', color='blue', s=40, alpha=0.5)
plt.title('Classifying random example data from 2 classes')
plt.ylabel('P(x)')
plt.xlabel('random variable x')
plt.legend(['w_1', 'w_2'], loc='upper right')
plt.ylim([-0.1,0.1])
plt.xlim([0,20])
plt.axvline(7.775, color='r', alpha=0.8, linestyle=':', linewidth=2)
plt.axvline(16.225, color='r', alpha=0.8, linestyle=':', linewidth=2)
plt.annotate('R2', xy=(10, 0.03), xytext=(10, 0.03))
plt.annotate('R1', xy=(4, 0.03), xytext=(4, 0.03))
plt.annotate('R1', xy=(17, 0.03), xytext=(17.5, 0.03))
plt.show()
```
<p><a name="emp_err"></a>
<br></p>
## Calculating the empirical error rate
[<a href="#sections">back to top</a>] <br>
```python
w1_as_w2, w2_as_w1 = 0, 0
for x1,x2 in zip(x1_samples, x2_samples):
if x1 > 7.775 and x1 < 16.225:
w1_as_w2 += 1
    if x2 < 7.775 or x2 > 16.225:
w2_as_w1 += 1
emp_err = (w1_as_w2 + w2_as_w1) / float(len(x1_samples) + len(x2_samples))
print('Empirical Error: {}%'.format(emp_err * 100))
```
Empirical Error: 0.0%
---
## Euclidean similarity vs. cosine similarity
- Implement a function that computes the Euclidean similarity
- Implement a function that computes the cosine similarity
- Explain the code
- Compare the two results
- Example application areas
#### Similarity
```A similarity measure is a measure of how much alike
two data objects are.
A similarity measure in a data-mining context is a distance whose
dimensions represent features of the objects.
If this distance is small, there is a high degree of similarity; a large distance means a low degree of similarity.```
##### 1. Euclidean distance
- literally the distance between two vectors
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sympy
sympy.init_printing(use_latex='mathjax')
```
```python
from math import*
def e_d(x,y) :
"e_d stands for Euclidean distance"
c = [(a-b)**2 for a,b in zip(x,y)]
return sqrt(sum(c))
print(e_d([1,2], [4,1]))
```
3.1622776601683795
```python
a = np.array([1,2])
b = np.array([4,1])
c = a-b
plt.plot(a[0],a[1], 'ro')
plt.plot(b[0],b[1], 'ro')
plt.text(0.6, 1.6 , "$a$", fontdict={"size" : 18})
plt.text(3.6, 0.6 , "$b$", fontdict={"size" : 18})
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='gray', ls="dashed"))
plt.annotate('', xy=b, xytext=(0,0), arrowprops=dict(facecolor='gray', ls="dashed"))
plt.arrow(4,1,-3,1, width=0.008)
plt.xticks(range(6))
plt.yticks(range(6))
plt.show()
```
```python
e_d([1,2], [4,1]), sqrt(10)
```
$$\left ( 3.1622776601683795, \quad 3.1622776601683795\right )$$
##### 2. Cosine similarity
- The cosine similarity metric finds the normalized dot product of two attributes.
```python
def norm(x) :
return sqrt(sum([ a**2 for a in x]))
def dot_product(x,y):
return sum(a*b for a,b in zip(x,y))
def c_s(x,y):
    """c_s stands for cosine similarity"""
    denominator = norm(x)*norm(y)
    numerator = dot_product(x,y)
    return numerator / denominator
def c_d(x, y):
    "c_d stands for cosine distance"
    return 1 - c_s(x, y)
print(c_s([1,2],[4,1]))
print(c_d([1,2],[4,1]))
```
0.6507913734559685
0.3492086265440315
```python
```
#### Example
```python
index = ["D1","D2","D3","D4"]
columns = ["team","coach","hockey","baseball","soccer","penalty","score","win","loss","season"]
datas = {
"team" : [5,3,0,0],
"coach" : [0,0,7,1],
"hockey" : [3,2,0,0],
"baseball" : [0,0,2,0],
"soccer" : [2,1,1,1],
"penalty" : [0,1,0,2],
"score" : [0,0,0,2],
"win" : [2,1,3,0],
"loss" : [0,0,0,3],
"season" : [0,1,0,0]
}
```
```python
df = pd.DataFrame(index=index, columns=columns, data=datas)
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>team</th>
<th>coach</th>
<th>hockey</th>
<th>baseball</th>
<th>soccer</th>
<th>penalty</th>
<th>score</th>
<th>win</th>
<th>loss</th>
<th>season</th>
</tr>
</thead>
<tbody>
<tr>
<th>D1</th>
<td>5</td>
<td>0</td>
<td>3</td>
<td>0</td>
<td>2</td>
<td>0</td>
<td>0</td>
<td>2</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>D2</th>
<td>3</td>
<td>0</td>
<td>2</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>D3</th>
<td>0</td>
<td>7</td>
<td>0</td>
<td>2</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>3</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>D4</th>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>2</td>
<td>2</td>
<td>0</td>
<td>3</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
```python
#cosine similarity
```
```python
C_D1_D2 = c_s(df.loc["D1"],df.loc["D2"])
C_D1_D2
```
$$0.9356014857063997$$
```python
C_D1_D3 = c_s(df.loc["D1"],df.loc["D3"])
C_D1_D3
```
$$0.1555231582719478$$
```python
C_D1_D4 = c_s(df.loc["D1"],df.loc["D4"])
C_D1_D4
```
$$0.07079923254047886$$
```python
C_D2_D3 = c_s(df.loc["D2"],df.loc["D3"])
C_D2_D3
```
$$0.12222646627042817$$
```python
C_D2_D4 = c_s(df.loc["D2"],df.loc["D4"])
C_D2_D4
```
$$0.16692446522239712$$
```python
C_D3_D4 = c_s(df.loc["D3"],df.loc["D4"])
C_D3_D4
```
$$0.23122932520643197$$
```python
# Euclidean similarity
```
```python
E_D1_D2 = e_d(df.loc["D1"],df.loc["D2"])
E_D1_D2
```
$$3.0$$
```python
E_D1_D3 = e_d(df.loc["D1"],df.loc["D3"])
E_D1_D3
```
$$9.433981132056603$$
```python
E_D1_D4 = e_d(df.loc["D1"],df.loc["D4"])
E_D1_D4
E_D1_D4
```
$$7.54983443527075$$
```python
E_D2_D3 = e_d(df.loc["D2"],df.loc["D3"])
E_D2_D3
```
$$8.48528137423857$$
```python
E_D2_D4 = e_d(df.loc["D2"],df.loc["D4"])
E_D2_D4
```
$$5.477225575051661$$
```python
E_D3_D4 = e_d(df.loc["D3"],df.loc["D4"])
E_D3_D4
```
$$8.12403840463596$$
```python
data1 = {"D1_D2" : [C_D1_D2, E_D1_D2] ,
"D1_D3" : [C_D1_D3, E_D1_D3],
"D1_D4" : [C_D1_D4, E_D1_D4],
"D2_D3" : [C_D2_D3, E_D2_D3],
"D2_D4" : [C_D2_D4, E_D2_D4],
"D3_D4" : [C_D3_D4, E_D3_D4] }
```
```python
df2 = pd.DataFrame(columns=["D1_D2","D1_D3","D1_D4","D2_D3","D2_D4","D3_D4"], index=["Cosine_similarity","Euclidean_similarity"], data=data1)
df2
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>D1_D2</th>
<th>D1_D3</th>
<th>D1_D4</th>
<th>D2_D3</th>
<th>D2_D4</th>
<th>D3_D4</th>
</tr>
</thead>
<tbody>
<tr>
<th>Cosine_similarity</th>
<td>0.935601</td>
<td>0.155523</td>
<td>0.070799</td>
<td>0.122226</td>
<td>0.166924</td>
<td>0.231229</td>
</tr>
<tr>
      <th>Euclidean_similarity</th>
<td>3.000000</td>
<td>9.433981</td>
<td>7.549834</td>
<td>8.485281</td>
<td>5.477226</td>
<td>8.124038</td>
</tr>
</tbody>
</table>
</div>
```python
data1 = {"D1_D2" : [C_D1_D2, E_D1_D2**2] ,
"D1_D3" : [C_D1_D3, E_D1_D3**2],
"D1_D4" : [C_D1_D4, E_D1_D4**2],
"D2_D3" : [C_D2_D3, E_D2_D3**2],
"D2_D4" : [C_D2_D4, E_D2_D4**2],
"D3_D4" : [C_D3_D4, E_D3_D4**2] }
```
```python
df2 = pd.DataFrame(columns=["D1_D2","D1_D3","D1_D4","D2_D3","D2_D4","D3_D4"], index=["Cosine_similarity","Euclidean_similarity"], data=data1)
df2
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>D1_D2</th>
<th>D1_D3</th>
<th>D1_D4</th>
<th>D2_D3</th>
<th>D2_D4</th>
<th>D3_D4</th>
</tr>
</thead>
<tbody>
<tr>
<th>Cosine_similarity</th>
<td>0.935601</td>
<td>0.155523</td>
<td>0.070799</td>
<td>0.122226</td>
<td>0.166924</td>
<td>0.231229</td>
</tr>
<tr>
      <th>Euclidean_similarity</th>
<td>9.000000</td>
<td>89.000000</td>
<td>57.000000</td>
<td>72.000000</td>
<td>30.000000</td>
<td>66.000000</td>
</tr>
</tbody>
</table>
</div>
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import euclidean_distances, cosine_similarity
```
```python
corpus =[
"All my cats in a row",
"When my cat sits down, she looks like a Furby toy!",
"The cat from outer space",
"Sunshine loves to sit like this for some reason."
]
```
```python
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(corpus).todense()
print(vectorizer.vocabulary_ )
```
{'all': 0, 'my': 11, 'cats': 2, 'in': 7, 'row': 14, 'when': 25, 'cat': 1, 'sits': 17, 'down': 3, 'she': 15, 'looks': 9, 'like': 8, 'furby': 6, 'toy': 24, 'the': 21, 'from': 5, 'outer': 12, 'space': 19, 'sunshine': 20, 'loves': 10, 'to': 23, 'sit': 16, 'this': 22, 'for': 4, 'some': 18, 'reason': 13}
```python
for i in features:
print(i)
```
[[1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0]]
[[0 1 0 1 0 0 1 0 1 1 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1]]
[[0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0]]
[[0 0 0 0 1 0 0 0 1 0 1 0 0 1 0 0 1 0 1 0 1 0 1 1 0 0]]
```python
for i in features :
print(euclidean_distances(features[0], i))
```
[[0.]]
[[3.60555128]]
[[3.16227766]]
[[3.74165739]]
```python
for i in features:
print(cosine_similarity(features[0], i))
```
---
# Laboratório 5: Câncer e Tratamento Químico
### Referente ao capítulo 10
Queremos minimizar a densidade de um tumor em um organismo e os efeitos colaterais das drogas para o tratamento de câncer por quimioterapia em um período de tempo fixo. É assumido que o tumor tenha um crescimento Gompertzian. A hipótese *log-kill* de Skipper é utilizada para modelar a ocisão de células cancerosas. Ela diz que a morte de células devido aos químicos é proporcional à população de tumor.
Considere $N(t)$ a densidade normalizada do tumor no tempo $t$. Assim:
$$N'(t) = rN(t)\ln\left(\frac{1}{N(t)}\right) - u(t)\delta N(t)$$
onde $r$ é a taxa de crescimento do tumor, $\delta$ a magnitude da dose e $u(t)$ descreve a força do efeito da droga (famacocinética).
Escolhemos o funcional da seguinte maneira para minimizar tanto a densidade de tumor quanto os efeitos da droga. Supomos aqui que quanto maior a força do efeito da droga, maior seu efeito negativo, o que é razoável.
$$\min_u \int_0^T aN(t)^2 + u(t)^2 dt$$
Além disso, $u(t) \geq 0$ e $N(0) = N_0$.
## Condições Necessárias
### Hamiltoniano
$$
H = aN^2 + u^2 + \lambda\left[rN\ln\left(\frac{1}{N}\right) - u\delta N\right]
$$
### Equação adjunta
$$
\lambda '(t) = - H_N = -2aN - \lambda\left(r\ln\left(\frac{1}{N}\right) - rN\frac{1}{1/N}\frac{1}{N^2} - u\delta\right) = -2aN + \lambda\left[r-r\ln\left(\frac{1}{N}\right) + u\delta\right]
$$
### Condição de transversalidade
$$
\lambda(T) = 0
$$
### Condição de otimalidade
$$
H_u = 2u - \lambda \delta N
$$
In this case we have only one inequality, the lower bound, so $H_u < 0$ is not possible. We have $\delta > 0$ given its interpretation, and $N(t) > 0$, since the derivative is not even defined otherwise. As the problem is a minimization, remember that the inequalities are reversed with respect to $H_u$.
$$
H_u > 0 \implies u^*(t) = 0 \implies \lambda \delta N < 0 \implies \lambda(t) < 0
$$
$$
H_u = 0 \implies 0 \le u^*(t) = \frac{\delta}{2}\lambda(t)N(t)
$$
Thus $u^*(t) = \max\left\{0, \frac{\delta}{2}\lambda(t)N(t)\right\}$
### Importing the libraries
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
import sympy as sp
import sys
sys.path.insert(0, '../pyscripts/')
from optimal_control_class import OptimalControl
```
### Using the sympy library
```python
x_sp,u_sp,lambda_sp, r_sp, a_sp, delta_sp = sp.symbols('N u lambda r a delta')
H = a_sp*x_sp**2 + u_sp**2 + lambda_sp*(r_sp*x_sp*sp.log(1/x_sp) - u_sp*delta_sp*x_sp)
H
```
$\displaystyle N^{2} a + \lambda \left(- N \delta u + N r \log{\left(\frac{1}{N} \right)}\right) + u^{2}$
```python
print('H_x = {}'.format(sp.diff(H,x_sp)))
print('H_u = {}'.format(sp.diff(H,u_sp)))
print('H_lambda = {}'.format(sp.diff(H,lambda_sp)))
```
H_x = 2*N*a + lambda*(-delta*u + r*log(1/N) - r)
H_u = -N*delta*lambda + 2*u
H_lambda = -N*delta*u + N*r*log(1/N)
Solving for $H_u = 0$
```python
eq = sp.Eq(sp.diff(H,u_sp), 0)
sp.solve(eq,u_sp)
```
[N*delta*lambda/2]
Here we can define the functions needed by the class.
```python
parameters = {'a': None, 'delta': None, 'r': None}
diff_state = lambda t, N, u, par: -N*par['delta']*u + N*par['r']*np.log(1/N)
diff_lambda = lambda t, N, u, lambda_, par: -2*N*par['a'] + lambda_*(par['delta']*u + par['r']*(1 - np.log(1/N)))
update_u = lambda t, N, lambda_, par: np.maximum(0, 0.5*par['delta']*lambda_*N)
```
## Applying the class to the example
Let's run some experiments. Feel free to vary the parameters. In this case we pass the lower bound as the default initial setting. Note that `np.inf` must be used when there is no bound.
```python
problem = OptimalControl(diff_state, diff_lambda, update_u, bounds = [(0,np.inf)])
```
```python
N0 = 0.975
T = 20
parameters['a'] = 3
parameters['delta'] = 0.45
parameters['r'] = 0.3
```
```python
t,x,u,lambda_ = problem.solve(N0, T, parameters)
ax = problem.plotting(t,x,u,lambda_)
```
Note the shape of the control: it starts high and then decreases as it approaches day 20. This is consistent with current medical practice. However, we also see that the tumor tends to grow again after day 12. So let us increase the weight on reducing cancer cells, in exchange for more drug-induced damage. We will see that the drug strength is much larger when we reduce the weight of its negative effect; the shape of the curves, however, is very similar.
```python
parameters['a'] = 10
t,x,u,lambda_ = problem.solve(N0, T, parameters)
ax = problem.plotting(t,x,u,lambda_)
```
Let's compare the chosen control for different initial values of the density $N_0$.
```python
parameters['a'] = 3
N0_values = [0.1, 0.5, 0.7, 0.9]
u_values = []
for N0 in N0_values:
_,_,u,_ = problem.solve(N0, T, parameters)
u_values.append(u)
fig = plt.figure(figsize = (10,5))
plt.xlabel("Tempo")
plt.ylabel("Químico")
plt.title("Quimioterapia utilizada")
for i, N0 in enumerate(N0_values):
plt.plot(t, u_values[i],label = r'$N_0$ = {}'.format(N0))
plt.legend()
plt.grid(alpha = 0.5)
```
We can see above that only the initial stage of the control is affected by the initial density, until a kind of equilibrium is reached.
Another important point is how much the dose magnitude influences the drug schedule over time. Note that the larger the magnitude, the higher the initial value but the faster the decay, using less drug overall, as expected.
```python
N0 = 0.8
delta_values = [0.25, 0.5, 0.75]
u_values = []
for delta in delta_values:
parameters['delta'] = delta
_,_,u,_ = problem.solve(N0, T, parameters)
u_values.append(u)
fig = plt.figure(figsize = (10,5))
plt.xlabel("Tempo")
plt.ylabel("Químico")
plt.title("Quimioterapia utilizada")
for i, delta in enumerate(delta_values):
plt.plot(t, u_values[i],label = r'$\delta$ = {}'.format(delta))
plt.legend()
plt.grid(alpha = 0.5)
```
We can also see how different treatment durations $T$ change the behavior of the cancer-cell density. In fact, the shape is very similar in all cases; what varies is how long the plateau in the middle lasts.
```python
parameters['delta'] = 0.4
T_values = [10, 20, 40, 80]
x_values = []
u_values = []
t_values = []
for T in T_values:
t,x,u,_ = problem.solve(N0, T, parameters)
t_values.append(t)
x_values.append(x)
u_values.append(u)
fig, ax = plt.subplots(2,2,figsize = (10,8))
fig.suptitle(r'Time comparison of $N(t)$')
for k in range(4):
i = k//2
j = k%2
ax[i][j].set_ylabel(r"$N(t)$")
ax[i][j].set_title("Densisade de células cancerosas")
ax[i][j].plot(t_values[k], x_values[k])
ax[i][j].grid(alpha = 0.5)
fig, ax = plt.subplots(2,2,figsize = (10,8))
fig.suptitle(r'Time comparison of $u(t)$')
for k in range(4):
i = k//2
j = k%2
ax[i][j].set_ylabel(r"$u(t)$")
ax[i][j].set_title("Força do efeito da quimioterapia")
ax[i][j].plot(t_values[k], u_values[k], color = 'green')
ax[i][j].grid(alpha = 0.5)
```
You must have noticed in all the examples that $u(t)$ reaches 0 only pointwise. Is the constraint $u(t) \ge 0$ necessary?
## Experimentation
```python
#N0 = 1
#T = 5
#parameters['r'] = 0.3
#parameters['a'] = 10
#parameters['delta'] = 0.4
#
#t,x,u,lambda_ = problem.solve(N0, T, parameters)
#problem.plotting(t,x,u,lambda_)
```
### This is the end of the notebook
---
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import sympy
#from astropy.visualization import astropy_mpl_style, quantity_support
#from google.colab import drive
#drive.mount('/content/drive')
c=299792458
```
A matrix in Python and some operations. With this you will build a notebook to compute space-time events in two reference frames.
**In the next cell**
```
vx = input('Enter the velocity for the boost in x')
vy = input('Enter the velocity for the boost in y')
vz = input('Enter the velocity for the boost in z')
vx = float(vx)
betax=vx/c
vy = float(vy)
betay=vy/c
vz = float(vz)
betaz=vz/c
v2=vx**2+vy**2+vz**2
beta2=betax**2+betay**2+betaz**2
gamma=1/np.sqrt(1-beta2)  # Lorentz factor, needed for the boost matrix below
if v2>c**2:
  print('This velocity is not possible, it exceeds c; the calculations ARE NOT CORRECT')
```
    Enter the velocity for the boost in x5000
    Enter the velocity for the boost in y4000
    Enter the velocity for the boost in z3000
```
# Now compute the Lorentz transformation directly in its general form
Mgeneral=[[gamma, -gamma*betax, -gamma*betay, -gamma*betaz],[-gamma*betax, 1+(gamma-1)*betax**2/beta2, (gamma-1)*betax*betay/beta2,(gamma-1)*betax*betaz/beta2],[-gamma*betay, (gamma-1)*betay*betax/beta2 ,1+(gamma-1)*betay**2/beta2, (gamma-1)*betay*betaz/beta2],[-gamma*betaz, (gamma-1)*betaz*betax/beta2, (gamma-1)*betay*betaz/beta2, 1+(gamma-1)*betaz**2/beta2]]
print(Mgeneral)
```
[[1.1547005383792517, -1.9258332015464707e-05, -1.5406665612371767e-05, -1.1554999209278823e-05], [-1.9258332015464707e-05, 1.0773502691896257, 0.06188021535170067, 0.04641016151377551], [-1.5406665612371767e-05, 0.06188021535170068, 1.0495041722813605, 0.037128129211020405], [-1.1554999209278823e-05, 0.04641016151377551, 0.037128129211020405, 1.0278460969082652]]
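A sanity check one can run on this matrix (a sketch; it assumes `Mgeneral` and `numpy` from the cells above): a proper Lorentz boost $\Lambda$ must preserve the Minkowski metric $\eta = \mathrm{diag}(1,-1,-1,-1)$, i.e. $\Lambda^T \eta \Lambda = \eta$.
```
L = np.array(Mgeneral)                  # Lorentz boost matrix
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric

residual = L.T @ eta @ L - eta
print(np.abs(residual).max())           # should be ~0 up to floating-point error
```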
---
# Linear Regression with Regularization
Regularization is a way to prevent overfitting and allow the model to generalize better. We'll cover *Ridge* and *Lasso* regression here.
## The Need for Regularization
Unlike polynomial fitting, it's hard to imagine how linear regression can overfit the data, since it's just a single line (or a hyperplane). One situation is that features are **correlated** or redundant.
Suppose there are two features that are exactly the same; our predicted hyperplane will have this form:
$$
\hat{y} = w_0 + w_1x_1 + w_2x_2
$$
and the true values of $x_2$ are almost the same as those of $x_1$ (up to some multiplicative factor and noise). Then it's best to just drop the $w_2x_2$ term and use:
$$
\hat{y} = w_0 + w_1x_1
$$
to fit the data. This is a simpler model.
But we don't know whether $x_1$ and $x_2$ are **actually** redundant, at least not by eye, and we don't want to manually drop a parameter just because we feel like it. We want the model to learn to do this itself, that is, to *prefer a simpler model that fits the data well enough*.
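Here's a tiny sketch of that failure mode (illustrative data, not from the original post): with a nearly duplicated feature, plain least-squares weights can blow up, while a small L2 penalty keeps them sensible.
```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + 1e-6 * rng.normal(size=100)    # nearly identical (redundant) feature
y = 3 * x1 + rng.normal(scale=0.1, size=100)

X = np.column_stack([x1, x2])

w_ols = np.linalg.solve(X.T @ X, X.T @ y)
print(w_ols)    # ill-conditioned: the two weights can be huge and cancel out

lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
print(w_ridge)  # regularized: the weight is split sensibly, near [1.5, 1.5]
```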
To do this, we add a *penalty term* to our loss function. Two common penalty terms are L2 and L1 norm of $w$.
## L2 and L1 Penalty
### 0. No Penalty (or Linear)
This is linear regression without any regularization (from [previous article](/blog_content/linear_regression/linear_regression_tutorial.html#writing-sse-loss-in-matrix-notation)):
$$
L(w) = \sum_{i=1}^{n} \left( y^i - wx^i \right)^2
$$
### 1. L2 Penalty (or Ridge)
We can add the **L2 penalty term** to it, and this is called **L2 regularization**:
$$
L(w) = \sum_{i=1}^{n} \left( y^i - wx^i \right)^2 + \lambda\sum_{j=0}^{d}w_j^2
$$
This is called the L2 penalty just because it's the L2-norm of $w$. In fancy terms, this whole loss function is also known as **Ridge regression**.
Let's see what's going on. Loss function is something we **minimize**. Any terms that we add to it, we also want it to be minimized (that's why it's called *penalty term*). The above means we want $w$ that fits the data well (first term), but we also want the values of $w$ to be small as possible (second term). The lambda ($\lambda$) is there to adjust how much to penalize $w$. Note that `sklearn` refers to this as alpha ($\alpha$) instead, but whatever.
It's tricky to know the appropriate value for lambda. You just have to try them out, in exponential range (0.01, 0.1, 1, 10, etc), then select the one that has the lowest loss on validation set, or doing k-fold cross validation.
Setting $\lambda$ to be very low means we don't penalize the complex model much. Setting it to $0$ is the original linear regression. Setting it high means we strongly prefer simpler model, at the cost of how well it fits the data.
#### Closed-form solution of Ridge
It's not hard to find a closed-form solution for Ridge, first write the loss function in matrix notation:
$$
L(w) = {\left\lVert y - Xw \right\rVert}^2 + \lambda{\left\lVert w \right\rVert}_2^2
$$
Then the gradient is:
$$
\nabla L_w = -2X^T(y-Xw) + 2\lambda w
$$
Setting it to zero and solving:
$$
\begin{align}
0 &= -2X^T(y-Xw) + 2\lambda w \\
&= X^T(y-Xw) - \lambda w \\
&= X^Ty - X^TXw - \lambda w \\
&= X^Ty - (X^TX + \lambda I_d) w
\end{align}
$$
Move that to the other side and we get a closed-form solution:
$$
\begin{align}
(X^TX + \lambda I_d) w &= X^Ty \\
w &= (X^TX + \lambda I_d)^{-1}X^Ty
\end{align}
$$
which is almost the same as linear regression without regularization.
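A direct numpy sketch of this closed-form solution (with made-up data just to show the shape of the computation):
```python
import numpy as np

def ridge_closed_form(X, y, lam):
    """w = (X^T X + lam * I_d)^(-1) X^T y"""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# tiny usage example
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
print(ridge_closed_form(X, y, lam=0.1))  # close to [1.0, -2.0, 0.5]
```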
### 2. L1 Penalty (or Lasso)
As you might guess, you can also use L1-norm for **L1 regularization**:
$$
L(w) = \sum_{i=1}^{n} \left( y^i - wx^i \right)^2 + \lambda\sum_{j=0}^{d}\left|w_j\right|
$$
Again, in fancy terms, this loss function is also known as **Lasso regression**. Using matrix notation:
$$
L(w) = {\left\lVert y - Xw \right\rVert}^2 + \lambda{\left\lVert w \right\rVert}_1
$$
It's more complex to get a closed-form solution for this, so we'll leave it here.
## Visualizing the Loss Surface with Regularization
Let's see what these penalty terms mean geometrically.
### L2 loss surface
*(figure: loss surface of the L2 penalty — a paraboloid bowl over $(w_0, w_1)$)*
This simply follows the 3D equation:
$$
L(w) = {\left\lVert w \right\rVert}_2^2 = w_0^2 + w_1^2
$$
The center of the bowl is lowest, since `w = [0,0]`, but that is not even a line and it won't predict anything useful.
#### L2 loss surface under different lambdas
When you multiply the L2 norm function by lambda, $L(w) = \lambda(w_0^2 + w_1^2)$, the width of the bowl changes. The lowest (and flattest) one has a lambda of 0.25, which you can see penalizes $w$ the least. The two subsequent ones have lambdas of 0.5 and 1.0.
*(figure: L2 penalty surfaces for $\lambda$ = 0.25, 0.5, and 1.0)*
### L1 loss surface
Below is the loss surface of L1 penalty:
*(figure: loss surface of the L1 penalty — a pyramid-like cone)*
Similarly the equation is $L(w) = \lambda(\left| w_0 \right| + \left| w_1 \right|)$.
### Contour of different penalty terms
If the L2 norm is 1, you get a unit circle ($w_0^2 + w_1^2 = 1$). In the same manner, you get "unit" shapes in other norms:
*(figure: "unit" shapes of different norms — a diamond for L1, a circle for L2)*
**When you walk along these lines, you get the same loss, which is 1**
These shapes hint at the different behaviors of each norm, which brings us to the next question.
## Which one to use, L1 or L2?
What's the point of using different penalty terms, when it seems like both try to push down the size of $w$?
**Turns out L1 penalty tends to produce sparse solutions**. This means many entries in $w$ are zeros. This is good if you want the model to be simple and compact. Why is that?
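Before the geometric answer, you can verify the sparsity claim empirically with `sklearn` (a sketch; `alpha` plays the role of $\lambda$, and the data is made up):
```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)  # only 2 useful features

print(Ridge(alpha=1.0).fit(X, y).coef_)  # all entries small but nonzero
print(Lasso(alpha=0.1).fit(X, y).coef_)  # most entries exactly zero
```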
### Geometrical Explanation
*Note: these figures are generated with unusually high lambda to exaggerate the plot*
First let's bring both linear regression and penalty loss surface together (left), and recall that we want to find the **minimum loss when both surfaces are summed up** (right):
*(figure: linear regression and penalty loss surfaces, shown separately (left) and summed (right))*
Ridge regression is like finding the middle point where the loss of a sum between linear regression and L2 penalty loss is lowest:
*(figure: Ridge — the minimum of the summed surface lies between the two individual minima)*
You can imagine starting with the linear regression solution (red point) where the loss is the lowest, then you move towards the origin (blue point), where the penalty loss is lowest. **The more lambda you set, the more you'll be drawn towards the origin, since you penalize the values of $w_i$ more** so it wants to get to where they're all zeros:
*(figure: Ridge solutions for increasing $\lambda$, drawn progressively toward the origin)*
Since the loss surfaces of linear regression and the L2 norm are both ellipsoids, the solution found by Ridge regression **tends to lie directly between both solutions**. Notice how the summed ellipsoid is still right in the middle.
---
For Lasso:
*(figure: Lasso — linear regression loss summed with the L1 penalty surface)*
And these are the Lasso solutions for lambda = 30 and 60:
*(figures: Lasso solutions for $\lambda$ = 30 and $\lambda$ = 60 — the summed minimum sits at a corner of the L1 diamond)*
Notice that the ellipsoid of linear regression **approaches, and finally hits, a corner of the L1 loss**, and will always stay at that corner. What does a corner of the L1 norm mean in this situation? It means $w_1 = 0$.
Again, this is because the contour lines **at the same loss value** of the L2 norm reach out much farther than those of the L1 norm:
*(figure: contour lines of the L1 and L2 norms at the same loss value)*
If the linear regression finds an optimal contact point along the L2 circle, then it will stop since there's no use to move sideways where the loss is usually higher. However, with L1 penalty, it can drift toward a corner, because it's **the same loss along the line** anyway (I mean, why not?) and thus is exploited, if the opportunity arises.
---
# Variables
When you create a new Jupyter notebook, choose `Python 3.6` as the notebook type.
Inside the notebook you then work with Python version 3.6. To understand what variables mean, you therefore need to understand variables in Python 3.6.
In Python, a variable is a short name for a region of memory in which data can be stored. The data can then be accessed under the variable's name. In principle we have already done this in the first exercises. Examples:
`d_i = 20 # inner diameter of a pipe`
The name of a variable must be unique. It must not begin with a digit and must not contain operator characters such as `'+', '-', '*', ':', '^'` or `'#'`. A value is assigned to a variable with the equals sign, see above.
The `'#'` character starts a comment. Everything that follows this character is purely informational and is ignored by Python.
Once a variable has been created, it can be used in calculations, e.g.
```python
import math
```
```python
d_i = 20e-3       # inner diameter of a pipe in m
A_i = math.pi*d_i**2/4 # clear cross-section in m**2
```
A cell may contain several operations, see above. A cell can have one output, which is always the result of the last operation.
In the cell above, the last statement is an assignment `A_i = ...`, which produces no output.
To work interactively, you should not put overly long calculations into a single cell. Instead, display intermediate results after a few meaningful steps, so that the course of the work can be followed.
If you notice a mistake, you can change values and run the cell again.
To display the result of the calculation above, you can call the most recently created variable. The cell would then look like this:
```python
d_i = 20e-3       # inner diameter of a pipe in m
A_i = math.pi*d_i**2/4 # clear cross-section in m**2
A_i
```
0.0003141592653589793
Variables have a type, which you can display with the function `type(variable)`, e.g.:
```python
type(A_i)
```
float
# Exercise
Investigate which types the variables
`a=1`
`x=5.0`
`Name = 'Bernd'`
and
`Punkt = (3,4)`
have.
```python
a = 1
type(a)
```
int
```python
x=5.0
type(x)
```
float
```python
name='Bernd'
type(name)
```
str
```python
Punkt=(3,4)
type(Punkt)
```
tuple
Investigate what effect the command
`2*name`
```python
2*name
```
'BerndBernd'
has with the name defined above. Analogously, what is the effect of
`2*Punkt`?
```python
2*Punkt
```
(3, 4, 3, 4)
Make a conjecture about which types the product `a*x` and the sum `a+x` have with the values of `a` and `x` defined above, and verify it.
```python
type(a+x)
```
float
```python
type(a*x)
```
float
So as not to lose sight of the applications in mathematics, work through the following exercise:
# Exercise
Compute the weight of 250 m of copper pipe CU15$\times$1 in kg. Take the density $\varrho$ of copper from your reference tables. The relationships are given by the following formulas:
\begin{align}
A &= \dfrac{\pi\,(d_a^2 - d_i^2)}{4}
\\[2ex]
V &= A\, l
\\[2ex]
m &= \varrho\, V
\end{align}
```python
import math
```
```python
# your solution starts here
d_a = 15e-2 # d_a in dm
d_i = 13e-2 # d_i in dm
l = 250e1 # l in dm
A = math.pi*(d_a**2 - d_i**2)/4 # cross-section in dm**2
V = A*l # volume in dm**3
rho = 8.96 # kg/dm**3
m = rho*V
m # mass in kg
```
98.52034561657587
---
$$
\def\abs#1{\left\lvert #1 \right\rvert}
\def\Set#1{\left\{ #1 \right\}}
\def\mc#1{\mathcal{#1}}
\def\M#1{\boldsymbol{#1}}
\def\R#1{\mathsf{#1}}
\def\RM#1{\boldsymbol{\mathsf{#1}}}
\def\op#1{\operatorname{#1}}
\def\E{\op{E}}
\def\d{\mathrm{\mathstrut d}}
\DeclareMathOperator{\Tr}{Tr}
\DeclareMathOperator*{\argmin}{arg\,min}
\def\norm#1{\left\lVert #1 \right\rVert}
$$
# What is an adversarial attack and why do we care about it?
Despite their effectiveness on a variety of tasks, deep neural networks can be very vulnerable to adversarial attacks. In security-critical domains like facial-recognition authorization and autonomous vehicles, such vulnerability to adversarial attacks makes the model highly unreliable.
An adversarial attack tries to add an imperceptible perturbation to a sample so that a trained neural network would classify it incorrectly. An example of such an attack is demonstrated later in this notebook.
## Adversarial Attacks
### Categories of Attacks
#### Poisoning Attack and Evasion Attack
Poisoning attacks involve manipulating the training process. The manipulation can happen on the training set (by replacing original samples or inserting fake samples) or the training algorithm itself (by changing the logic of the training algorithm). Such attacks either directly cause poor performance of the trained model or make the model fail on certain samples so as to construct a backdoor for future use.
Evasion attacks aim to manipulate the benign samples $\M x$ by a small perturbation $\M \delta$ so that a trained model can no longer classify it correctly. Usually, such perturbations are so small that a human observer cannot notice them. In other words, the perturbed sample "evades" from the classification of the model.
#### Targeted Attack and Non-Targeted Attack
A targeted attack tries to perturb the benign samples $\M x$ so that the trained model classifies it as a given certain class $t\in \mc Y$. A non-targeted attack only tries to perturb the benign samples $\M x$ so that the trained model classifies them incorrectly.
#### White-Box Attack and Black-Box Attack
For white-box attacks, the attacker has access to all the knowledge of the targeted model. For neural networks, the attacker knows all the information about the network structure, parameters, gradients, etc.
For black-box attacks, the attacker only knows the outputs of the model when feeding inputs into it. In practice, black-box attacks usually rely on generating adversarial perturbations from another model that the attacker has full access to.
Black-box attacks are more common in applications, but robustness against white-box attacks is the ultimate goal of a robust model because it reveals the fundamental weaknesses of neural networks. Thus, most of the study of adversarial robustness focuses on white-box, non-targeted, evasion attacks.
### Examples of White-box Attacking Algorithms
Most white-box attacking algorithms are based on using the gradient calculated by the model to perturb the samples. Two typical examples are the Fast Gradient Sign Method (FGSM) attack and its multi-step variant, the Projected Gradient Descent (PGD) attack. The FGSM attack generates the perturbation as
$$\begin{align}
\M \delta = \epsilon\text{sgn}\nabla_{\M x}L(\M x, y),
\end{align}$$
where $\epsilon$ controls the perturbation size. The adversarial sample is
$$\begin{align}
\M x' = \M x + \M \delta.
\end{align}$$
The FGSM attack can be seen as trying to maximize the loss of the model in a single step.
PGD attack tries perform the same task, but in a iterative way (at the cost of higher computational time):
$$\begin{align}
\M x^{t+1} = \Pi_{\M x+\epsilon}\left(\M x^t + \alpha\text{sgn}\nabla_{\M x}L(\M x, y)\right),
\end{align}$$
where $\alpha$ is the step size. $\M x^t$ denote the generated adversarial sample in step $t$, with $\M x^0$ being the original sample. $\Pi$ refers to a projection operation that clips the generated adversarial sample into the valid region: the $\epsilon$-ball around $\M x$, which is $\{\M x':\norm{\M x'-\M x}\leq \epsilon \}$.
In practice, a PGD attack with a relatively small adversarial power $\epsilon$ (small enough to be neglected by human observers) is able to reduce the accuracy of a well-trained model to nearly zero. Because of such effectiveness, researchers often use PGD attacks as a basic check of the adversarial robustness of their models.
#### Implementation of FGSM attack
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
# NOTE: This is a hack to get around "User-agent" limitations when downloading MNIST datasets
# see, https://github.com/pytorch/vision/issues/3497 for more information
from six.moves import urllib
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
```
##### Inputs
- **epsilons** - List of epsilon values to use for the run. It is
important to keep 0 in the list because it represents the model
performance on the original test set. Also, intuitively we would
expect the larger the epsilon, the more noticeable the perturbations
but the more effective the attack in terms of degrading model
accuracy. Since the data range here is $[0,1]$, no epsilon
value should exceed 1.
- **pretrained_model** - path to the pretrained MNIST model which was
trained with [pytorch/examples/mnist ](https://github.com/pytorch/examples/tree/master/mnist).
For simplicity, download the pretrained model [here](https://drive.google.com/drive/folders/1fn83DF14tWmit0RTKWRhPq5uVXt73e0h?usp=sharing).
- **use_cuda** - boolean flag to use CUDA if desired and available.
Note, a GPU with CUDA is not critical for this tutorial as a CPU will
not take much time.
```python
epsilons = [0, .05, .1, .15, .2, .25, .3]
pretrained_model = "lenet_mnist_model.pth"
use_cuda=True
```
##### Model Under Attack
As mentioned, the model under attack is the same MNIST model from
[pytorch/examples/mnist](https://github.com/pytorch/examples/tree/master/mnist).
You may train and save your own MNIST model or you can download and use
the provided model. The *Net* definition and test dataloader here have
been copied from the MNIST example. The purpose of this section is to
define the model and dataloader, then initialize the model and load the
pretrained weights.
```python
# LeNet Model definition
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
# MNIST Test dataset and dataloader declaration
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False, download=True, transform=transforms.Compose([
transforms.ToTensor(),
])),
batch_size=1, shuffle=True)
# Define what device we are using
print("CUDA Available: ",torch.cuda.is_available())
device = torch.device("cuda" if (use_cuda and torch.cuda.is_available()) else "cpu")
# Initialize the network
model = Net().to(device)
# Load the pretrained model
model.load_state_dict(torch.load(pretrained_model, map_location='cpu'))
# Set the model in evaluation mode. In this case this is for the Dropout layers
model.eval()
```
CUDA Available: True
Net(
(conv1): Conv2d(1, 10, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(10, 20, kernel_size=(5, 5), stride=(1, 1))
(conv2_drop): Dropout2d(p=0.5, inplace=False)
(fc1): Linear(in_features=320, out_features=50, bias=True)
(fc2): Linear(in_features=50, out_features=10, bias=True)
)
##### FGSM Attack
Now, we can define the function that creates the adversarial examples by
perturbing the original inputs. The ``fgsm_attack`` function takes three
inputs, *image* is the original clean image ($x$), *epsilon* is
the pixel-wise perturbation amount ($\epsilon$), and *data_grad*
is gradient of the loss w.r.t the input image
($\nabla_{x} L(\mathbf{\theta}, \mathbf{x}, y)$). The function
then creates perturbed image as
\begin{align}perturbed\_image = image + epsilon*sign(data\_grad) = x + \epsilon * sign(\nabla_{x} L(\mathbf{\theta}, \mathbf{x}, y))\end{align}
Finally, in order to maintain the original range of the data, the
perturbed image is clipped to range $[0,1]$.
```python
# FGSM attack code
def fgsm_attack(image, epsilon, data_grad):
# Collect the element-wise sign of the data gradient
sign_data_grad = data_grad.sign()
# Create the perturbed image by adjusting each pixel of the input image
perturbed_image = image + epsilon*sign_data_grad
# Adding clipping to maintain [0,1] range
perturbed_image = torch.clamp(perturbed_image, 0, 1)
# Return the perturbed image
return perturbed_image
```
##### Testing Function
Finally, the central result of this tutorial comes from the ``test``
function. Each call to this test function performs a full test step on
the MNIST test set and reports a final accuracy. However, notice that
this function also takes an *epsilon* input. This is because the
``test`` function reports the accuracy of a model that is under attack
from an adversary with strength $\epsilon$. More specifically, for
each sample in the test set, the function computes the gradient of the
loss w.r.t the input data ($data\_grad$), creates a perturbed
image with ``fgsm_attack`` ($perturbed\_data$), then checks to see
if the perturbed example is adversarial. In addition to testing the
accuracy of the model, the function also saves and returns some
successful adversarial examples to be visualized later.
```python
def test( model, device, test_loader, epsilon ):
# Accuracy counter
correct = 0
adv_examples = []
# Loop over all examples in test set
for data, target in test_loader:
# Send the data and label to the device
data, target = data.to(device), target.to(device)
# Set requires_grad attribute of tensor. Important for Attack
data.requires_grad = True
# Forward pass the data through the model
output = model(data)
init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
# If the initial prediction is wrong, dont bother attacking, just move on
if init_pred.item() != target.item():
continue
# Calculate the loss
loss = F.nll_loss(output, target)
# Zero all existing gradients
model.zero_grad()
# Calculate gradients of model in backward pass
loss.backward()
# Collect datagrad
data_grad = data.grad.data
# Call FGSM Attack
perturbed_data = fgsm_attack(data, epsilon, data_grad)
# Re-classify the perturbed image
output = model(perturbed_data)
# Check for success
final_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
if final_pred.item() == target.item():
correct += 1
# Special case for saving 0 epsilon examples
if (epsilon == 0) and (len(adv_examples) < 5):
adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )
else:
# Save some adv examples for visualization later
if len(adv_examples) < 5:
adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )
# Calculate final accuracy for this epsilon
final_acc = correct/float(len(test_loader))
print("Epsilon: {}\tTest Accuracy = {} / {} = {}".format(epsilon, correct, len(test_loader), final_acc))
# Return the accuracy and an adversarial example
return final_acc, adv_examples
```
##### Run Attack
The last part of the implementation is to actually run the attack. Here,
we run a full test step for each epsilon value in the *epsilons* input.
For each epsilon we also save the final accuracy and some successful
adversarial examples to be plotted in the coming sections. Notice how
the printed accuracies decrease as the epsilon value increases. Also,
note the $\epsilon=0$ case represents the original test accuracy,
with no attack.
```python
accuracies = []
examples = []
# Run test for each epsilon
for eps in epsilons:
acc, ex = test(model, device, test_loader, eps)
accuracies.append(acc)
examples.append(ex)
```
Epsilon: 0 Test Accuracy = 9810 / 10000 = 0.981
Epsilon: 0.05 Test Accuracy = 9426 / 10000 = 0.9426
Epsilon: 0.1 Test Accuracy = 8510 / 10000 = 0.851
Epsilon: 0.15 Test Accuracy = 6826 / 10000 = 0.6826
Epsilon: 0.2 Test Accuracy = 4303 / 10000 = 0.4303
Epsilon: 0.25 Test Accuracy = 2087 / 10000 = 0.2087
Epsilon: 0.3 Test Accuracy = 871 / 10000 = 0.0871
##### Results
Accuracy vs Epsilon
The first result is the accuracy versus epsilon plot. As alluded to
earlier, as epsilon increases we expect the test accuracy to decrease.
This is because larger epsilons mean we take a larger step in the
direction that will maximize the loss. Notice the trend in the curve is
not linear even though the epsilon values are linearly spaced. For
example, the accuracy at $\epsilon=0.05$ is only about 4% lower
than $\epsilon=0$, but the accuracy at $\epsilon=0.2$ is 25%
lower than $\epsilon=0.15$. Also, notice the accuracy of the model
hits random accuracy for a 10-class classifier between
$\epsilon=0.25$ and $\epsilon=0.3$.
```python
plt.figure(figsize=(5,5))
plt.plot(epsilons, accuracies, "*-")
plt.yticks(np.arange(0, 1.1, step=0.1))
plt.xticks(np.arange(0, .35, step=0.05))
plt.title("Accuracy vs Epsilon")
plt.xlabel("Epsilon")
plt.ylabel("Accuracy")
plt.show()
```
##### Sample Adversarial Examples
Remember the idea of no free lunch? In this case, as epsilon increases
the test accuracy decreases **BUT** the perturbations become more easily
perceptible. In reality, there is a tradeoff between accuracy
degradation and perceptibility that an attacker must consider. Here, we
show some examples of successful adversarial examples at each epsilon
value. Each row of the plot shows a different epsilon value. The first
row is the $\epsilon=0$ examples which represent the original
“clean” images with no perturbation. The title of each image shows the
“original classification -> adversarial classification.” Notice, the
perturbations start to become evident at $\epsilon=0.15$ and are
quite evident at $\epsilon=0.3$. However, in all cases humans are
still capable of identifying the correct class despite the added noise.
```python
# Plot several examples of adversarial samples at each epsilon
cnt = 0
plt.figure(figsize=(8,10))
for i in range(len(epsilons)):
for j in range(len(examples[i])):
cnt += 1
plt.subplot(len(epsilons),len(examples[0]),cnt)
plt.xticks([], [])
plt.yticks([], [])
if j == 0:
plt.ylabel("Eps: {}".format(epsilons[i]), fontsize=14)
orig,adv,ex = examples[i][j]
plt.title("{} -> {}".format(orig, adv))
plt.imshow(ex, cmap="gray")
plt.tight_layout()
plt.show()
```
## Adversarial Training with Perturbed Examples
To defend against adversarial attacks, a direct idea is to add perturbations during the training process. Madry et al. \cite{madry2017} proposed to formulate a robust optimization problem that minimizes the adversarial risk instead of the usual empirical risk:
$$\begin{align}
\min_{\theta} \E_{(\M x, y)\in \mc D} \left[\max_{\norm{\M\delta}_p<\epsilon}L(\M x+\M\delta, y)\right].
\end{align}$$
The inner maximization tries to find perturbed samples that produce a high loss, which is also the goal of PGD attacks. The outer minimization problem tries to find the model parameters that minimize the adversarial loss given by the inner adversaries.
This robust optimization approach effectively trains more robust models. It has become a benchmark for evaluating the adversarial robustness of models and is often seen as the standard form of adversarial training. Based on this method, many variants were proposed in the following years, e.g., using more sophisticated regularizers or adaptively adjusting the adversarial strength. All methods that add perturbations during training share a common disadvantage: high computational cost.
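As a concrete illustration, the sketch below shows the inner PGD maximization in PyTorch. The function and parameter names (`pgd_perturb`, `eps`, `alpha`, `steps`) are hypothetical, not taken from the cited papers, and clipping the perturbed input to a valid pixel range is omitted for brevity.
```python
import torch

def pgd_perturb(model, loss_fn, x, y, eps=0.3, alpha=0.01, steps=40):
    """Inner maximization: projected gradient ascent on the loss within
    the l-infinity ball of radius eps around x."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        model.zero_grad()
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        # ascent step on delta, then project back into the eps-ball
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

# Outer minimization: train on the perturbed batch as usual, e.g.
#   x_adv = pgd_perturb(model, F.nll_loss, data, target)
#   loss = F.nll_loss(model(x_adv), target); loss.backward(); optimizer.step()
```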
## Adversarial Training with Stochastic Networks
Stochastic networks are neural networks that contain random noise layers. Liu et al. proposed Random Self-Ensemble (RSE). Their method injects spherical Gaussian noise into different layers of a network and uses the ensemble of multiple forward passes as the final output. The variance of the added noise is treated as a hyper-parameter to be tuned. RSE shows good robustness against PGD and C\&W attacks.
Similar to RSE, He et al. proposed Parametric Noise Injection (PNI). Rather than a fixed variance, they applied an additional intensity parameter to control the variance of the noise; this intensity parameter is trained together with the model parameters.
Inspired by the idea of trainable noise, Eustratiadis et al. proposed the Weight-Covariance Alignment (WCA) method \cite{wca}. This method adds trainable Gaussian noise to the activation of the penultimate layer of the network. Let $\M g_{\theta}:\mathcal X\rightarrow \mathbb R^D$ be the neural network parameterized by $\theta$ except for the final layer, and let $f_{\M W, \M b}:\mathbb R^D \rightarrow \mathbb R^K$ be the final linear layer parameterized by the weight matrix $\M W^{K\times D}$ and the bias vector $\M b^{K\times 1}$, where $K=\abs{\mc Y}$ is the number of classes. WCA adds Gaussian noise $\M u \sim \mathcal N_{0, \M\Sigma}$ to the output of the penultimate layer $\M g_{\theta}(x)$, where $\M\Sigma^{D\times D}$ is the covariance matrix. Thus, the final output becomes
$$\begin{align}
f_{\M W, \M b}\left(\M g_\theta(\M x)\right) = \M W\left(\M g_\theta (\M x)+\M u\right) + \M b.
\end{align}$$
The loss function is defined as
$$\begin{align}
L=L_{\text{CE}} + L_{\text{WCA}}+ \lambda \sum_{y\in \mathcal{Y}}\M W_y^{\intercal} \M W_y,
\end{align}$$
where $L_{\text{CE}}$ is the usual cross-entropy loss, and $L_{\text{WCA}}$ is a term that encourages the noise and the weights of the last layer to be aligned with each other. The third term gives an $\ell^2$ penalty to weight vectors $\M W_y$ with large magnitude. The WCA regularizer is defined as
$$\begin{align}
L_{\text{WCA}} = -\log\sum_{y\in \mathcal{Y}}\M W_y \M\Sigma\M W_y^\intercal ,
\end{align}$$
where $\M W_y$ is the weight vector of the last layer that is associated with class $y$.
The WCA regularizer encourages the weights associated with the last layer to be well aligned with the covariance matrix of the noise. Larger trained variance corresponding to one feature means this feature is harder to perturb, so putting more weight on such features will force the final layer to focus more on these robust features.
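A minimal sketch of this regularizer (my reading of the formula above, not the authors' code; the names `W` and `log_var` are assumptions, and the covariance is taken to be diagonal, parameterized through log-variances so it stays positive):
```python
import torch

def wca_regularizer(W, log_var):
    """W: (K, D) final-layer weights; log_var: (D,) trainable log-variances."""
    Sigma = torch.diag(log_var.exp())      # diagonal covariance matrix
    quad = (W @ Sigma * W).sum(dim=1)      # W_y Sigma W_y^T for each class y
    return -torch.log(quad.sum())          # L_WCA = -log sum_y W_y Sigma W_y^T

W = torch.randn(10, 64, requires_grad=True)     # e.g. K = 10 classes, D = 64 features
log_var = torch.zeros(64, requires_grad=True)
print(wca_regularizer(W, log_var))
```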
Models trained with WCA show better performance against PGD attacks on various datasets compared with the aforementioned approaches. In addition, because WCA does not involve generating adversarial samples, its training time is significantly lower than that of adversarial training with perturbations. The method we propose is inspired by WCA, but instead of adding noise to the penultimate layer, we directly add noise to the output of the final layer.
## Training a neural network with noisy logits
We consider training a model with a noisy representation $\R{Z}$ satisfying the Markov chain:
$$\R{X}\to \R{Z} \to \hat{\R{Y}}$$
Hence, the estimate $P_{\hat{\R{Y}}|\R{X}}$ for $P_{\R{Y}|\R{X}}$ is given by $P_{\hat{\R{Y}}|\R{Z}}$ and $P_{\R{Z}|\R{X}}$ as
$$P_{\hat{\R{Y}}|\R{X}} (y|x) = E\left[\left.P_{\hat{\R{Y}}|\R{Z}}(y|\R{Z}) \right|\R{X}=x\right].$$
In particular, we propose to set $\R{Z}$ to be the noisy logits of $\hat{\R{Y}}$, i.e.,
$P_{\hat{\R{Y}}|\R{Z}}$ is defined by the pmf obtained with the usual softmax function
$$
p_{\hat{\R{Y}}|\RM{z}} (y|\M{z}) := \frac{\exp(z_y)}{\sum_{y'\in \mathcal{Y}} \exp(z_{y'})},
$$
so $z_y$ is the logit for class $y$.
The noisy logit is defined as
$$
\R{Z} = \RM{z}:=[g(y|\R{X})+\R{u}_y]_{y\in \mathcal{Y}}
$$
where
$$g(y|x)\in \mathbb{R}$$
for $(x,y)\in \mathcal{X}\times \mathcal{Y}$
is computed by a neural network to be trained, and $\R{u}_y\sim \mathcal{N}_{0,\sigma_y^2}$ for $y\in \mathcal{Y}$ are independent Gaussian random variables with variance $\sigma_y^2>0$. For simplicity, define
$$\begin{align}
\M{g}(x)&:= [g(y|x)]_{y\in \mathcal{Y}}\\
\RM{u}&:=[\R{u}_y]_{y\in \mathcal{Y}}\\
\M{\Sigma}&:=\M{\sigma} \M{I} \M{\sigma}^\intercal \quad \text{with }\M{\sigma}:=[\sigma_y]_{y\in \mathcal{Y}},
\end{align}$$
which are referred to as the (noiseless) logits, the additive noise (vector), and its (diagonal) covariance matrix, respectively. Hence, $P_{\R{Z}|\R{X}}$ is defined by the multivariate Gaussian density function
$$
p_{\RM{z}|\R{X}}(\M{z}|x) = \mathcal{N}_{\M{g}(x),\Sigma}(\M{z})
$$
for $x\in \mathcal{X}$ and $\M{z}\in \mathbb{R}^{\abs{\mathcal{Y}}}$.
The loss function used for training the neural network is derived from
$$
L := E\left[-\log p_{\hat{\R{Y}}|\R{X}}(\R{Y}|\R{X})\right] - \log \sum_{y\in \mathcal{Y}}\sigma_y^2 + \lambda\left(\sum_{y\in \mathcal{Y}}\sigma_y^2\right)
$$
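A minimal sketch of this construction (my illustration of the equations above with hypothetical names — `NoisyLogits`, `backbone` — not an official implementation); the expectation over the noise $\RM u$ is approximated with Monte Carlo samples:
```python
import torch
import torch.nn.functional as F

class NoisyLogits(torch.nn.Module):
    def __init__(self, backbone, num_classes):
        super().__init__()
        self.backbone = backbone                  # computes the noiseless logits g(y|x)
        self.log_sigma = torch.nn.Parameter(torch.zeros(num_classes))

    def forward(self, x, n_samples=8):
        g = self.backbone(x)                          # (batch, K)
        sigma = self.log_sigma.exp()                  # per-class std sigma_y > 0
        u = torch.randn(n_samples, *g.shape) * sigma  # u_y ~ N(0, sigma_y^2)
        p = F.softmax(g.unsqueeze(0) + u, dim=-1)     # softmax of the noisy logits z
        return p.mean(dim=0)  # Monte Carlo estimate of P(y|x) = E[softmax(z)]

# Training would minimize -log of these probabilities at the true label,
# plus the sigma terms of the loss L defined above.
```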
# Reduced Helmholtz equation of state: carbon dioxide
**Water equation of state:** You can see the full, state-of-the-art equation of state for water, which also uses a reduced Helmholtz approach: the IAPWS 1995 formulation (Wagner 2002). This equation of state is available using CoolProp with the `Water` fluid.
One modern approach for calculating thermodynamic properties of real fluids uses a reduced Helmholtz equation of state, using the reduced Helmholtz free energy function $\alpha$:
\begin{equation}
\alpha (\tau, \delta) = \frac{a}{RT} = \frac{u - Ts}{RT}
\end{equation}
which is a function of the reduced density $\delta$ and the inverse reduced temperature $\tau$:
\begin{equation}
\delta = \frac{\rho}{\rho_{\text{crit}}} \quad \text{and} \quad \tau = \frac{T_{\text{crit}}}{T}
\end{equation}
The reduced Helmholtz free energy function, $\alpha(\tau, \delta)$, is given as the sum of ideal gas and residual components:
\begin{equation}
\alpha(\tau, \delta) = \alpha_{IG} (\tau, \delta) + \alpha_{\text{res}} (\tau, \delta) \;,
\end{equation}
which are both given as high-order fits using many coefficients:
```python
import matplotlib.pyplot as plt
%matplotlib inline
# these are mostly for making the saved figures nicer
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'png')
plt.rcParams['figure.dpi']= 150
plt.rcParams['savefig.dpi'] = 150
import numpy as np
import cantera as ct
from scipy import integrate, optimize
from pint import UnitRegistry
ureg = UnitRegistry()
Q_ = ureg.Quantity
```
```python
import sympy
sympy.init_printing(use_latex='mathjax')
T, R, tau, delta = sympy.symbols('T, R, tau, delta', real=True)
a_vars = sympy.symbols('a0, a1, a2, a3, a4, a5, a6, a7', real=True)
theta_vars = sympy.symbols('theta3, theta4, theta5, theta6, theta7', real=True)
n_vars = sympy.symbols('n0, n1, n2, n3, n4, n5, n6, n7, n8, n9, n10, n11', real=True)
alpha_ideal = sympy.log(delta) + a_vars[0] + a_vars[1]*tau + a_vars[2]*sympy.log(tau)
for i in range(3, 8):
alpha_ideal += a_vars[i] * sympy.log(1.0 - sympy.exp(-tau * theta_vars[i-3]))
display(sympy.Eq(sympy.symbols('alpha_IG'), alpha_ideal))
alpha_res = (
n_vars[0] * delta * tau**0.25 +
n_vars[1] * delta * tau**1.25 +
n_vars[2] * delta * tau**1.50 +
n_vars[3] * delta**3 * tau**0.25 +
n_vars[4] * delta**7 * tau**0.875 +
n_vars[5] * delta * tau**2.375 * sympy.exp(-delta) +
n_vars[6] * delta**2 * tau**2 * sympy.exp(-delta) +
n_vars[7] * delta**5 * tau**2.125 * sympy.exp(-delta) +
n_vars[8] * delta * tau**3.5 * sympy.exp(-delta**2) +
n_vars[9] * delta * tau**6.5 * sympy.exp(-delta**2) +
n_vars[10] * delta**4 * tau**4.75 * sympy.exp(-delta**2) +
n_vars[11] * delta**2 * tau**12.5 * sympy.exp(-delta**3)
)
display(sympy.Eq(sympy.symbols('alpha_res'), alpha_res))
```
$\displaystyle \alpha_{IG} = a_{0} + a_{1} \tau + a_{2} \log{\left(\tau \right)} + a_{3} \log{\left(1.0 - e^{- \tau \theta_{3}} \right)} + a_{4} \log{\left(1.0 - e^{- \tau \theta_{4}} \right)} + a_{5} \log{\left(1.0 - e^{- \tau \theta_{5}} \right)} + a_{6} \log{\left(1.0 - e^{- \tau \theta_{6}} \right)} + a_{7} \log{\left(1.0 - e^{- \tau \theta_{7}} \right)} + \log{\left(\delta \right)}$
$\displaystyle \alpha_{res} = \delta^{7} n_{4} \tau^{0.875} + \delta^{5} n_{7} \tau^{2.125} e^{- \delta} + \delta^{4} n_{10} \tau^{4.75} e^{- \delta^{2}} + \delta^{3} n_{3} \tau^{0.25} + \delta^{2} n_{11} \tau^{12.5} e^{- \delta^{3}} + \delta^{2} n_{6} \tau^{2} e^{- \delta} + \delta n_{0} \tau^{0.25} + \delta n_{1} \tau^{1.25} + \delta n_{2} \tau^{1.5} + \delta n_{5} \tau^{2.375} e^{- \delta} + \delta n_{8} \tau^{3.5} e^{- \delta^{2}} + \delta n_{9} \tau^{6.5} e^{- \delta^{2}}$
## Carbon dioxide equation of state
The coefficients $a_i$, $\theta_i$, and $n_i$ are given for carbon dioxide:
```python
# actual coefficients
coeffs_a = [
8.37304456, -3.70454304, 2.500000, 1.99427042,
0.62105248, 0.41195293, 1.04028922, 8.327678e-2
]
coeffs_theta = [
3.151630, 6.111900, 6.777080, 11.32384, 27.08792
]
coeffs_n = [
0.89875108, -0.21281985e1, -0.68190320e-1, 0.76355306e-1,
0.22053253e-3, 0.41541823, 0.71335657, 0.30354234e-3,
-0.36643143, -0.14407781e-2, -0.89166707e-1, -0.23699887e-1
]
```
Starting from the thermodynamic identity $P = \rho^2 \left(\partial a / \partial \rho\right)_T$ and differentiating the reduced Helmholtz function, we can find an expression for pressure:
\begin{equation}
P = R T \rho \left[ 1 + \delta \left(\frac{\partial \alpha_{\text{res}}}{\partial \delta} \right)_{\tau} \right]
\end{equation}
Use this expression to estimate the pressure at $T$ = 350 K and $v$ = 0.01 m$^3$/kg, and compare against that obtained from Cantera. We can use our symbolic expression for $\alpha_{\text{res}} (\tau, \delta)$ and take the partial derivative:
```python
# use Cantera fluid to get specific gas constant and critical properties
f = ct.CarbonDioxide()
gas_constant = ct.gas_constant / f.mean_molecular_weight
temp_crit = f.critical_temperature
density_crit = f.critical_density
# conditions of interest
temp = 350
specific_volume = 0.01
density = 1.0 / specific_volume
# take the partial derivative of alpha_res with respect to delta
derivative_alpha_delta = sympy.diff(alpha_res, delta)
# substitute all coefficients
derivative_alpha_delta = derivative_alpha_delta.subs(
[(n, n_val) for n, n_val in zip(n_vars, coeffs_n)]
)
def get_pressure(
temp, specific_vol, fluid, derivative_alpha_delta, tau, delta
):
'''Calculates pressure for reduced Helmholtz equation of state'''
red_density = (1.0 / specific_vol) / fluid.critical_density
red_temp_inv = fluid.critical_temperature / temp
gas_constant = ct.gas_constant / fluid.mean_molecular_weight
dalpha_ddelta = derivative_alpha_delta.subs(
[(delta, red_density), (tau, red_temp_inv)]
)
pres = (
gas_constant * temp * (1.0 / specific_vol) *
(1.0 + red_density * dalpha_ddelta)
)
return pres
pres = get_pressure(temp, specific_volume, f, derivative_alpha_delta, tau, delta)
print(f'Pressure: {pres / 1e6: .3f} MPa')
f.TV = temp, specific_volume
print(f'Cantera pressure: {f.P / 1e6: .3f} MPa')
```
Pressure: 5.464 MPa
Cantera pressure: 5.475 MPa
Our calculation and that from Cantera agree fairly well! They are not exactly the same because Cantera uses a slightly different formulation for the equation of state.
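One practical note (a sketch, not from the original notebook): calling `subs` on a SymPy expression is slow inside loops, so we can compile the derivative into a fast NumPy function with `lambdify`:
```python
# compile the symbolic derivative to a NumPy function of (delta, tau)
dalpha_ddelta_fast = sympy.lambdify((delta, tau), derivative_alpha_delta, 'numpy')

def get_pressure_fast(temp, specific_vol, fluid):
    red_density = (1.0 / specific_vol) / fluid.critical_density
    red_temp_inv = fluid.critical_temperature / temp
    R = ct.gas_constant / fluid.mean_molecular_weight
    return (R * temp / specific_vol *
            (1.0 + red_density * dalpha_ddelta_fast(red_density, red_temp_inv)))

print(f'{get_pressure_fast(350, 0.01, f) / 1e6: .3f} MPa')  # should match the value above
```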
Let's compare the calculations now for a range of specific volumes and multiple temperatures:
```python
fig, ax = plt.subplots(figsize=(8, 4))
specific_volumes = np.geomspace(0.001, 0.01, num=20)
temperatures = [300, 400, 500]
for temp in temperatures:
pressures = np.zeros(len(specific_volumes))
pressures_cantera = np.zeros(len(specific_volumes))
for idx, spec_vol in enumerate(specific_volumes):
pressures[idx] = get_pressure(
temp, spec_vol, f,
derivative_alpha_delta, tau, delta
)
f.TV = temp, spec_vol
pressures_cantera[idx] = f.P
ax.loglog(specific_volumes, pressures/1000., 'o', color='blue')
ax.loglog(specific_volumes, pressures_cantera/1000., color='blue')
bbox_props = dict(boxstyle='round', fc='w', ec='0.3', alpha=0.9)
ax.text(2e-3, 4e3, '300 K', ha='center', va='center', bbox=bbox_props)
ax.text(2e-3, 1.6e4, '400 K', ha='center', va='center', bbox=bbox_props)
ax.text(2e-3, 7e4, '500 K', ha='center', va='center', bbox=bbox_props)
ax.legend(['Reduced Helmholtz', 'Cantera'])
plt.grid(True, which='both')
plt.xlabel('Specific volume (m^3/kg)')
plt.ylabel('Pressure (kPa)')
fig.tight_layout()
plt.show()
```
We can see that the pressure calculated using the reduced Helmholtz equation of state matches closely with that from Cantera, which uses a different but similarly advanced equation of state.
## Bibliography
Wagner, W., & Pruß, A. (2002). The IAPWS Formulation 1995 for the Thermodynamic Properties of Ordinary Water Substance for General and Scientific Use. Journal of Physical and Chemical Reference Data, 31(2), 387–535. https://doi.org/10.1063/1.1461829
**Notes for the docker container:**
Docker command to run the notebook locally:
note: change `dir_montar` to the directory path you want to map to `/datos` inside the docker container.
```
dir_montar=<full path on my machine to my directory> # put here the path of the directory to mount, for example:
#dir_montar=/Users/erick/midirectorio.
```
Run:
```
$docker run --rm -v $dir_montar:/datos --name jupyterlab_prope_r_kernel_tidyverse -p 8888:8888 -d palmoreck/jupyterlab_prope_r_kernel_tidyverse:3.0.16
```
Go to `localhost:8888` and enter the jupyterlab password: `qwerty`
Stop the docker container:
```
docker stop jupyterlab_prope_r_kernel_tidyverse
```
Documentation for the docker image `palmoreck/jupyterlab_prope_r_kernel_tidyverse:3.0.16` at [this link](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/prope_r_kernel_tidyverse).
---
To run the notebook use:
[docker](https://www.docker.com/) (installed **locally** with [Get docker](https://docs.docker.com/install/)) and run the commands at the beginning of the notebook **locally**.
Or click one of the following buttons:
[](https://mybinder.org/v2/gh/palmoreck/dockerfiles-for-binder/jupyterlab_prope_r_kernel_tidyerse?urlpath=lab/tree/Propedeutico/Python/clases/2_calculo_DeI/1_aproximacion_a_derivadas_e_integrales.ipynb) this option creates an individual machine on a Google server, clones the repository, and allows running the jupyter notebooks.
[](https://repl.it/languages/python3) this option does not clone the repository and does not run the jupyter notebooks, but it allows collaborative execution of Python instructions with [repl.it](https://repl.it/). Clicking will create new ***repls*** under your ***repl.it*** user.
In Python we can use the *SymPy* package to perform algebraic or symbolic computation; see [computer algebra](https://en.wikipedia.org/wiki/Computer_algebra).
```python
import sympy  # we use import to bring in Python modules or packages
```
# Representing mathematical symbols as Python objects
To use the contents we can access inside a Python package we write `sympy.<content to use here>`
## The `Symbol` class
We use the `Symbol` class to create a Python `Symbol` object:
```python
x = sympy.Symbol("x") #el nombre del símbolo es x y se asigna a la variable x
```
```python
x
```
$\displaystyle x$
And we can use functions that ship with Python's standard libraries, such as `type`:
```python
type(x)  # we can check what type of object this is with the type function
```
sympy.core.symbol.Symbol
```python
y = sympy.Symbol("y")
```
```python
y
```
$\displaystyle y$
```python
type(y)
```
sympy.core.symbol.Symbol
We can pass arguments to the `Symbol` class to identify the type of the object.
```python
x = sympy.Symbol("x")
y = sympy.Symbol("y", positive=True) #argumento positive igual a True
z = sympy.Symbol("z", negative=True)
```
Once we have created a `Symbol` object we can use functions such as `sqrt`:
```python
sympy.sqrt(x**2)
```
$\displaystyle \sqrt{x^{2}}$
```python
sympy.sqrt(y**2)
```
$\displaystyle y$
```python
sympy.sqrt(z**2)
```
$\displaystyle - z$
*SymPy* returns useful simplifications if we identify the type of the object:
```python
n1 = sympy.Symbol("n1")
n2 = sympy.Symbol("n2", integer=True)
n3 = sympy.Symbol("n3", odd=True)
n4 = sympy.Symbol("n4", even=True)
```
```python
sympy.cos(n1*sympy.pi)
```
$\displaystyle \cos{\left(\pi n_{1} \right)}$
```python
sympy.cos(n2*sympy.pi)
```
$\displaystyle \left(-1\right)^{n_{2}}$
```python
sympy.cos(n3*sympy.pi)
```
$\displaystyle -1$
```python
sympy.cos(n4*sympy.pi)
```
$\displaystyle 1$
We can define several symbols in a single line with the `symbols` function as follows:
```python
a, b, c = sympy.symbols("a, b, c")  # note the use of tuples on the left-hand side of the assignment
```
```python
a
```
$\displaystyle a$
```python
b
```
$\displaystyle b$
## Expressions
To represent the algebraic expression $1 + 2x^2 + 3x^3 - x^2 + 5$ in *SymPy* we create the symbol $x$:
```python
x = sympy.Symbol("x")
```
```python
expr = 1 + 2*x**2 + 3*x**3 - x**2 + 5
```
```python
expr
```
$\displaystyle 3 x^{3} + x^{2} + 6$
**note that the expression has been simplified.**
## Subs
**Example: evaluate the expression $3x^3 + x^2 + 6$ at $x=1, 2$**
```python
x = sympy.Symbol("x")
```
```python
expr = 3*x**3 + x**2 + 6
```
We can use the `subs` method of the `expr` object to substitute $x=1$:
```python
expr.subs(x,1)
```
$\displaystyle 10$
```python
expr.subs(x,2)
```
$\displaystyle 34$
**Example: evaluate $xy + z^2x$ at $x = 1.25$, $y=0.4$, $z=3.2$**
```python
x, y, z = sympy.symbols("x,y,z")
```
```python
expr = x*y + z**2*x
```
```python
vals = {x: 1.25, y: 0.4, z: 3.2}  # note the use of dictionaries
```
```python
expr.subs(vals)
```
$\displaystyle 13.3$
## Simplify
**Example: $2(x^2 - x) - x(x+1)$**
```python
x = sympy.Symbol("x")
```
```python
expr2 = 2*(x**2 - x) - x*(x+1)
```
```python
expr2
```
$\displaystyle 2 x^{2} - x \left(x + 1\right) - 2 x$
We use the `simplify` function from *SymPy*:
```python
sympy.simplify(expr2)
```
$\displaystyle x \left(x - 3\right)$
This is equivalent to using the *simplify* method of the `expr2` object:
```python
expr2.simplify()
```
$\displaystyle x \left(x - 3\right)$
**Example: $2\sin(x)\cos(x)$**
```python
x = sympy.Symbol("x")
```
```python
expr3 = 2*sympy.sin(x)*sympy.cos(x)
```
```python
expr3
```
$\displaystyle 2 \sin{\left(x \right)} \cos{\left(x \right)}$
```python
expr3.simplify()
```
$\displaystyle \sin{\left(2 x \right)}$
## Expand
**Example:** $(x+1)(x+2)$
```python
x = sympy.Symbol("x")
```
```python
expr = (x+1)*(x+2)
```
```python
sympy.expand(expr)
```
$\displaystyle x^{2} + 3 x + 2$
```python
expr.expand()
```
$\displaystyle x^{2} + 3 x + 2$
## Factor
**Example:** $x^2 - 1$
```python
x = sympy.Symbol("x")
```
```python
expr = x**2 -1
```
```python
expr.factor()
```
$\displaystyle \left(x - 1\right) \left(x + 1\right)$
```python
sympy.factor(expr)
```
$\displaystyle \left(x - 1\right) \left(x + 1\right)$
## Equations
**Example:** $x^2+2x-3=0$
```python
x = sympy.Symbol("x")
```
```python
sympy.solve(x**2 + 2*x -3)
```
[-3, 1]
**Example:** $ax^2+bx+c=0$ for the variable $x$
```python
x = sympy.Symbol("x")
```
```python
a,b,c = sympy.symbols("a, b, c")
```
```python
sympy.solve(a*x**2 + b*x + c, x)  # here we specify which variable is the unknown
```
[(-b + sqrt(-4*a*c + b**2))/(2*a), -(b + sqrt(-4*a*c + b**2))/(2*a)]
And there are equations that cannot be solved in closed form (in terms of their coefficients and elementary operations), such as $x^5 - x^2 + 1 = 0$:
```python
x = sympy.Symbol("x")
```
```python
sympy.solve(x**5 - x**2 +1)
```
[CRootOf(x**5 - x**2 + 1, 0),
CRootOf(x**5 - x**2 + 1, 1),
CRootOf(x**5 - x**2 + 1, 2),
CRootOf(x**5 - x**2 + 1, 3),
CRootOf(x**5 - x**2 + 1, 4)]
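Even so, SymPy can evaluate these `CRootOf` objects numerically (a quick sketch, not in the original notebook):
```python
[root.evalf() for root in sympy.solve(x**5 - x**2 + 1)]  # numerical approximations of the roots
```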
## References
* [SymPy](https://www.sympy.org/en/index.html) and [Numerical Python by Robert Johansson, Apress](https://www.apress.com/gp/book/9781484242452)
```python
from sympy import *
init_printing()
'''
r_GEO = 36000 + 6371 KM
r_LEO = 2000 + 6371 KM
G = 6.674e-11
Me = 5.972e24
'''
M, E = symbols("M E", Functions = True)
e_c, a, G, M_e, r, mu = symbols("e_c a G M_e r mu", Contstants = True)
T_circular, T_elliptical, T_GEO, T_GTO, T_LEO, r_LEO, r_GEO, T_tot = symbols("T_circular T_elliptical T_GEO T_GTO T_LEO r_LEO r_GEO T_tot", Constants = True)
t, x, y, Y = symbols("t x y Y", Variables = True)
mu_calculated = (6.674e-11 * 5.972e24)
```
The orbital period of a circular orbit is:
```python
Eq(T_circular, 2*pi*sqrt(r**3 / mu))
```
where $\mu$ is the standard gravitational parameter:
```python
Eq(mu, G*M_e)
```
Then, the GEO's orbital period in hours is:
```python
r_GEO_Calculated = (36000 + 6371)*1000
T_GEO_Calculated = 2*pi*sqrt(r_GEO_Calculated**3 / mu_calculated)
Eq(T_GEO, T_GEO_Calculated.evalf()/60/60)
```
And the LEO's orbital period in hours is:
```python
r_LEO_Calculated = (2000 + 6371)*1000
T_LEO_Calculated = 2*pi*sqrt(r_LEO_Calculated**3 / mu_calculated)
Eq(T_LEO, T_LEO_Calculated.evalf()/60/60)
```
_____________________________________________
# Finding the GTO.
The goal is to find both 'e_c' (the eccentricity of our GTO) and 'a' (its semi-major axis), so we need two equations.
The equation of the GEO (A circle equation):
```python
geo = Eq(y**2, r**2-x**2)
geo
```
The equation of the GTO (An ellipse equation):
```python
gto = Eq(((x + a*e_c)**2 / a**2) + (y**2 / (a**2*(1 - e_c**2))), 1)  # semi-minor axis b satisfies b**2 = a**2*(1 - e_c**2)
gto
```
We want to solve these two equations to get the semi-major axis and the eccentricity of our GTO.
First, substitute the GEO's equation into the GTO's equation.
```python
toSolve = gto.subs({y**2:geo.rhs})
toSolve
```
Now we can solve for x.
```python
solX = solveset(toSolve, x)
solX
```
Now we can calculate the y coordinate for each x.
```python
solY1 = solveset(Eq(geo.lhs, geo.rhs.subs({x:list(solX)[0]})), y)
solY1
```
```python
solY2 = solveset(Eq(geo.lhs, geo.rhs.subs({x:list(solX)[1]})), y)
solY2
```
We have 4 different possible points for the intersection between a circle and an ellipse, but the intersection between the GEO and the GTO is going to be at only one point with an x coordinate of '-r_GEO' (the radius of the GEO).
Now we can get the first equation.
```python
geoAndGtoIntersection = solveset(Eq(list(solX)[0], -r_GEO).subs({r:r_GEO}), a)
geoAndGtoIntersection
```
Surprisingly, there are 2 possible values for a, but we're not interested in the negative value. So our first equation is:
```python
eqn1 = Eq(a, list(list(geoAndGtoIntersection.args)[2])[1])
eqn1
```
To get another equation, we can do the same but this time with the LEO.
The intersection between our LEO and GTO is exactly at the x coordinate of 'r_LEO'.
```python
gtoAndLeoIntersection = solveset(Eq(list(solX)[1], r).subs({r:r_LEO}), a)
gtoAndLeoIntersection
```
Again, there are 2 possible values for 'a', but we need the positive one.
```python
eqn2 = Eq(a, list(list(gtoAndLeoIntersection.args)[2])[0])
eqn2
```
This is the positive one because $0 < e_c < 1$.
Now we have two equations and two unknowns, and we're ready to find 'a' and 'e_c'.
```python
e_c_Exp = Eq(e_c, solveset(eqn1.subs({a:eqn2.rhs, r_GEO:r_GEO_Calculated, r_LEO:r_LEO_Calculated}), e_c).args[0])
e_c_Calculated = e_c_Exp.rhs
e_c_Exp
```
```python
s = solveset(eqn2.subs({r_LEO:r_LEO_Calculated, e_c:e_c_Calculated})).args[0]
a_Exp = Eq(a, s)
a_Calculated = a_Exp.rhs
a_Exp
```
There's another way to find 'a'.
```python
p1 = plot(sqrt(r_GEO_Calculated**2-x**2), -sqrt(r_GEO_Calculated**2-x**2), sqrt(r_LEO_Calculated**2-x**2), -sqrt(r_LEO_Calculated**2-x**2), sqrt(a_Calculated**2*(1-e_c_Calculated**2)*(1-((x+a_Calculated*e_c_Calculated)**2/a_Calculated**2))), -sqrt(a_Calculated**2*(1-e_c_Calculated**2)*(1-((x+a_Calculated*e_c_Calculated)**2/a_Calculated**2))),(x, -5*10**7, 5*10**7),xlim = (-7.5*10**7, 7.5*10**7), ylim=((-5*10**7, 5*10**7)))
```
From the geometry we can say that:
```python
Eq(a, (r_LEO + r_GEO)/2)
```
This could've saved us a lot of math work :)
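As a quick numerical sanity check (my addition, not in the original notebook), the geometric shortcut agrees with the value of 'a' derived algebraically above:
```python
a_geometric = (r_LEO_Calculated + r_GEO_Calculated) / 2
print(a_geometric, a_Calculated.evalf())  # the two values should agree
```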
__________________________________
# Now let's calculate the periods.
The orbital period of an elliptical orbit is:
```python
Eq(T_elliptical, 2*pi*sqrt(a**3 / mu))
```
```python
T_GTO_Calculated = 2*pi*sqrt(a_Calculated**3/mu_calculated)
Eq(T_GTO, T_GTO_Calculated.evalf()/60/60)
```
So, the total time required to put our satellite in a GEO using Hohmann transfer is:
```python
Eq(T_tot, T_GTO / 2 + T_LEO / 2)
```
The total time required to put our satellite in a GEO 36,000 kilometers above sea level, in hours, is:
```python
Eq(T_tot, (T_GTO_Calculated / 2 + T_LEO_Calculated / 2).evalf()/60/60)
```
**This notebook is an exercise in the [Computer Vision](https://www.kaggle.com/learn/computer-vision) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/convolution-and-relu).**
---
# Introduction #
In this exercise, you'll work on building some intuition around feature extraction. First, we'll walk through the example we did in the tutorial again, but this time, with a kernel you choose yourself. We've mostly been working with images in this course, but what's behind all of the operations we're learning about is mathematics. So, we'll also take a look at how these feature maps can be represented instead as arrays of numbers and what effect convolution with a kernel will have on them.
Run the cell below to get started!
```python
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.computer_vision.ex2 import *
```
# Apply Transformations #
The next few exercises walk through feature extraction just like the example in the tutorial. Run the following cell to load an image we'll use for the next few exercises.
```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
image_path = '../input/computer-vision-resources/car_illus.jpg'
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image, channels=1)
image = tf.image.resize(image, size=[400, 400])
plt.figure(figsize=(6, 6))
plt.imshow(tf.squeeze(image), cmap='gray')
plt.axis('off')
plt.show();
```
You can run this cell to see some standard kernels used in image processing.
```python
import learntools.computer_vision.visiontools as visiontools
from learntools.computer_vision.visiontools import edge, bottom_sobel, emboss, sharpen
kernels = [edge, bottom_sobel, emboss, sharpen]
names = ["Edge Detect", "Bottom Sobel", "Emboss", "Sharpen"]
plt.figure(figsize=(12, 12))
for i, (kernel, name) in enumerate(zip(kernels, names)):
plt.subplot(1, 4, i+1)
visiontools.show_kernel(kernel)
plt.title(name)
plt.tight_layout()
```
# 1) Define Kernel #
Use the next code cell to define a kernel. You have your choice of what kind of kernel to apply. One thing to keep in mind is that the *sum* of the numbers in the kernel determines how bright the final image is. Generally, you should try to keep the sum of the numbers between 0 and 1 (though that's not required for a correct answer).
In general, a kernel can have any number of rows and columns. For this exercise, let's use a $3 \times 3$ kernel, which often gives the best results. Define a kernel with `tf.constant`.
```python
# YOUR CODE HERE: Define a kernel with 3 rows and 3 columns.
kernel = tf.constant([
[-1, 1, -1],
[1, 8, 1],
[-1, 1, -1],
])
# Uncomment to view kernel
visiontools.show_kernel(kernel)
# Check your answer
q_1.check()
```
```python
# Lines below will give you a hint or solution code
#q_1.hint()
#q_1.solution()
```
Now we'll do the first step of feature extraction, the filtering step. First run this cell to do some reformatting for TensorFlow.
```python
# Reformat for batch compatibility.
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
image = tf.expand_dims(image, axis=0)
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
kernel = tf.cast(kernel, dtype=tf.float32)
```
# 2) Apply Convolution #
Now we'll apply the kernel to the image by a convolution. The *layer* in Keras that does this is `layers.Conv2D`. What is the *backend function* in TensorFlow that performs the same operation?
```python
# YOUR CODE HERE: Give the TensorFlow convolution function (without arguments)
conv_fn = tf.nn.conv2d
# Check your answer
q_2.check()
```
<IPython.core.display.Javascript object>
<span style="color:#33cc33">Correct</span>
```python
# Lines below will give you a hint or solution code
#q_2.hint()
#q_2.solution()
```
Once you've got the correct answer, run this next cell to execute the convolution and see the result!
```python
image_filter = conv_fn(
input=image,
filters=kernel,
strides=1, # or (1, 1)
padding='SAME',
)
plt.imshow(
# Reformat for plotting
tf.squeeze(image_filter)
)
plt.axis('off')
plt.show();
```
Can you see how the kernel you chose relates to the feature map it produced?
# 3) Apply ReLU #
Now detect the feature with the ReLU function. In Keras, you'll usually use this as the activation function in a `Conv2D` layer. What is the *backend function* in TensorFlow that does the same thing?
```python
# YOUR CODE HERE: Give the TensorFlow ReLU function (without arguments)
relu_fn = tf.nn.relu
# Check your answer
q_3.check()
```
```python
# Lines below will give you a hint or solution code
#q_3.hint()
#q_3.solution()
```
Once you've got the solution, run this cell to detect the feature with ReLU and see the result!
The image you see below is the feature map produced by the kernel you chose. If you like, experiment with some of the other suggested kernels above, or, try to invent one that will extract a certain kind of feature.
```python
image_detect = relu_fn(image_filter)
plt.imshow(
# Reformat for plotting
tf.squeeze(image_detect)
)
plt.axis('off')
plt.show();
```
In the tutorial, our discussion of kernels and feature maps was mainly visual. We saw the effect of `Conv2D` and `ReLU` by observing how they transformed some example images.
But the operations in a convolutional network (like in all neural networks) are usually defined through mathematical functions, through a computation on numbers. In the next exercise, we'll take a moment to explore this point of view.
Let's start by defining a simple array to act as an image, and another array to act as the kernel. Run the following cell to see these arrays.
```python
# Sympy is a python library for symbolic mathematics. It has a nice
# pretty printer for matrices, which is all we'll use it for.
import sympy
sympy.init_printing()
from IPython.display import display
image = np.array([
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 1, 1, 1],
[0, 1, 0, 0, 0, 0],
])
kernel = np.array([
[1, -1],
[1, -1],
])
display(sympy.Matrix(image))
display(sympy.Matrix(kernel))
# Reformat for Tensorflow
image = tf.cast(image, dtype=tf.float32)
image = tf.reshape(image, [1, *image.shape, 1])
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
kernel = tf.cast(kernel, dtype=tf.float32)
```
# 4) Observe Convolution on a Numerical Matrix #
What do you see? The image is simply a long vertical line on the left and a short horizontal line on the lower right. What about the kernel? What effect do you think it will have on this image? After you've thought about it, run the next cell for the answer.
```python
# View the solution (Run this code cell to receive credit!)
q_4.check()
```
Now let's try it out. Run the next cell to apply convolution and ReLU to the image and display the result.
```python
image_filter = tf.nn.conv2d(
input=image,
filters=kernel,
strides=1,
padding='VALID',
)
image_detect = tf.nn.relu(image_filter)
# The first matrix is the image after convolution, and the second is
# the image after ReLU.
display(sympy.Matrix(tf.squeeze(image_filter).numpy()))
display(sympy.Matrix(tf.squeeze(image_detect).numpy()))
```
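To see exactly what the convolution computes, here is a small NumPy sketch (my addition, not part of the exercise) of the same 'VALID' convolution written as an explicit sliding window; note that `tf.nn.conv2d` is a cross-correlation, so no kernel flip is needed.
```python
img = np.array([
    [0, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 0],
])
k = np.array([[1, -1],
              [1, -1]])

out = np.zeros((img.shape[0] - 1, img.shape[1] - 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        # dot product of the kernel with the 2x2 window at (i, j)
        out[i, j] = (img[i:i+2, j:j+2] * k).sum()
print(out)                 # the filtered image
print(np.maximum(out, 0))  # after ReLU
```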
Is the result what you expected?
# Conclusion #
In this lesson, you learned about the first two operations a convolutional classifier uses for feature extraction: **filtering** an image with a **convolution** and **detecting** the feature with the **rectified linear unit**.
# Keep Going #
Move on to [**Lesson 3**](https://www.kaggle.com/ryanholbrook/maximum-pooling) to learn the final operation: **condensing** the feature map with **maximum pooling**!
---
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum) to chat with other Learners.*
# Semantics: PrefScLTL.
In this notebook, we ensure that semantics of our proposed preference logic are sound.
Proposed semantics:
* $(w_1, w_2) \models \alpha_1~\trianglerighteq~\alpha_2$ iff $w_1 \models \alpha_1$ and $w_2 \models \alpha_2 \land \neg \alpha_1$
We expect the remaining operator semantics to follow from this definition.
* $(w_1, w_2) \models \alpha_1~\triangleright~\alpha_2$ iff $(w_1, w_2) \models \alpha_1~\trianglerighteq~\alpha_2$ and $(w_1, w_2) \not\models \alpha_2~\trianglerighteq~\alpha_1$
* $(w_1, w_2) \models \alpha_1~\sim~\alpha_2$ iff $(w_1, w_2) \models \alpha_1~\trianglerighteq~\alpha_2$ and $(w_1, w_2) \models \alpha_2~\trianglerighteq~\alpha_1$
In what follows, we derive these semantics to ensure soundness of our definitions.
## Strict preference.
Every atomic preference formula partitions $\Sigma^\omega$ into four classes. Correspondingly, we define 4 propositions each for $w_1, w_2$ to denote which scLTL formulas the words satisfy.
* `w1_00` means $w1$ satisfies $\neg \alpha_1$, $\neg \alpha_2$.
* `w1_01` means $w1$ satisfies $\neg \alpha_1$, $\alpha_2$.
* `w1_10` means $w1$ satisfies $\alpha_1$, $\neg \alpha_2$.
* `w1_11` means $w1$ satisfies $\alpha_1$, $\alpha_2$.
* `w2_00` means $w2$ satisfies $\neg \alpha_1$, $\neg \alpha_2$.
* `w2_01` means $w2$ satisfies $\neg \alpha_1$, $\alpha_2$.
* `w2_10` means $w2$ satisfies $\alpha_1$, $\neg \alpha_2$.
* `w2_11` means $w2$ satisfies $\alpha_1$, $\alpha_2$.
```python
from sympy import *
from sympy.logic import simplify_logic
```
```python
w1_00, w1_01, w1_10, w1_11, w2_00, w2_01, w2_10, w2_11 = symbols('w1_00 w1_01 w1_10 w1_11 w2_00 w2_01 w2_10 w2_11')
```
```python
# Constraint 1: w1 must be in one of the classes.
w1_constraint1 = w1_00 | w1_01 | w1_10 | w1_11
w2_constraint1 = w2_00 | w2_01 | w2_10 | w2_11
w1_constraint1, w2_constraint1
```
(w1_00 | w1_01 | w1_10 | w1_11, w2_00 | w2_01 | w2_10 | w2_11)
```python
# Constraint 2: w1 \models \alpha_1 and w_2 \models \neg \alpha_1 \land \alpha_2
w1_constraint2 = w1_10 | w1_11
w2_constraint2 = w2_01
w1_constraint2, w2_constraint2
```
(w1_10 | w1_11, w2_01)
```python
# Constraint 3: w1 \not\models \alpha_2 and w_2 \not\models \neg \alpha_2 \land \alpha_1
w1_constraint3 = w1_10 | w1_00
w2_constraint3 = w2_01 | w2_11 | w2_00
w1_constraint3, w2_constraint3
```
(w1_00 | w1_10, w2_00 | w2_01 | w2_11)
```python
# Semantics for strict preference.
# w1 must satisfy constraints 1, 2, 3.
w1_satisfy = w1_constraint1 & w1_constraint2 & w1_constraint3
w2_satisfy = w2_constraint1 & w2_constraint2 & w2_constraint3
w1_satisfy = simplify_logic(w1_satisfy)
w2_satisfy = simplify_logic(w2_satisfy)
print("w1_satisfy", w1_satisfy)
print("w2_satisfy", w2_satisfy)
```
w1_satisfy w1_10 | (w1_00 & w1_11)
w2_satisfy w2_01
In `w1_satisfy`, note that the clause `(w1_00 & w1_11)` is trivially false because w1 cannot simultaneously satisfy and violate both $\alpha_1, \alpha_2$.
Therefore, `w1_satisfy = w1_10`. And, `w2_satisfy = w2_01`.
We recognize this to be the semantics we have defined for our logic.
Q.E.D.
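As a follow-up sketch (my addition, not part of the original derivation), the same encoding can be used to examine indifference, which requires both weak preferences to hold on the same pair:
```python
# alpha_1 ~ alpha_2 requires (w1, w2) |= alpha_1 >= alpha_2 and (w1, w2) |= alpha_2 >= alpha_1.
# Direction 1: w1 |= alpha_1 and w2 |= ~alpha_1 & alpha_2
# Direction 2: w1 |= alpha_2 and w2 |= ~alpha_2 & alpha_1
w1_indiff = simplify_logic((w1_10 | w1_11) & (w1_01 | w1_11))
w2_indiff = simplify_logic(w2_01 & w2_10)
print("w1_indiff", w1_indiff)
print("w2_indiff", w2_indiff)
```
As in the derivation above, any clause that places one word in two classes at once is trivially false, since the four classes partition $\Sigma^\omega$.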
# CS-109A Introduction to Data Science
## Lab 11: Neural Network Basics - Introduction to `tf.keras`
**Harvard University**<br>
**Fall 2019**<br>
**Instructors:** Pavlos Protopapas, Kevin Rader, Chris Tanner<br>
**Lab Instructors:** Chris Tanner and Eleni Kaxiras. <br>
**Authors:** Eleni Kaxiras, David Sondak, and Pavlos Protopapas.
```python
## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
```
<style>
blockquote { background: #AEDE94; }
h1 {
padding-top: 25px;
padding-bottom: 25px;
text-align: left;
padding-left: 10px;
background-color: #DDDDDD;
color: black;
}
h2 {
padding-top: 10px;
padding-bottom: 10px;
text-align: left;
padding-left: 5px;
background-color: #EEEEEE;
color: black;
}
div.exercise {
background-color: #ffcccc;
border-color: #E9967A;
border-left: 5px solid #800080;
padding: 0.5em;
}
span.sub-q {
font-weight: bold;
}
div.theme {
background-color: #DDDDDD;
border-color: #E9967A;
border-left: 5px solid #800080;
padding: 0.5em;
font-size: 18pt;
}
div.gc {
background-color: #AEDE94;
border-color: #E9967A;
border-left: 5px solid #800080;
padding: 0.5em;
font-size: 12pt;
}
p.q1 {
padding-top: 5px;
padding-bottom: 5px;
text-align: left;
padding-left: 5px;
background-color: #EEEEEE;
color: black;
}
header {
padding-top: 35px;
padding-bottom: 35px;
text-align: left;
padding-left: 10px;
background-color: #DDDDDD;
color: black;
}
</style>
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import pandas as pd
%matplotlib inline
from PIL import Image
```
```python
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
tf.keras.backend.clear_session() # For easy reset of notebook state.
print(tf.__version__) # You should see a 2.0.0 here!
```
2.0.0
#### Instructions for running `tf.keras` with Tensorflow 2.0:
1. Create a `conda` virtual environment by cloning an existing one that you know works
```
conda create --name myclone --clone myenv
```
2. Go to [https://www.tensorflow.org/install/pip](https://www.tensorflow.org/install/pip) and follow instructions for your machine.
3. In a nutshell:
```
pip install --upgrade pip
pip install tensorflow==2.0.0
```
All references to Keras should be written as `tf.keras`. For example:
```
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
tf.keras.models.Sequential
tf.keras.layers.Dense, tf.keras.layers.Activation,
tf.keras.layers.Dropout, tf.keras.layers.Flatten, tf.keras.layers.Reshape
tf.keras.optimizers.SGD
tf.keras.preprocessing.image.ImageDataGenerator
tf.keras.regularizers
tf.keras.datasets.mnist
```
You could avoid the long names by using
```
from tensorflow import keras
from tensorflow.keras import layers
```
These imports do not work on some systems, however, because they pick up previous versions of `keras` and `tensorflow`. That is why I avoid them in this lab.
## Learning Goals
In this lab we will understand the basics of neural networks and how to start using a deep learning library called `keras`. By the end of this lab, you should:
- Understand how a simple neural network works and code some of its functionality from scratch.
- Be able to think and do calculations in matrix notation. Also think of vectors and arrays as tensors.
- Know how to install and run `tf.keras`.
- Implement a simple real world example using a neural network.
## Part 1: Neural Networks 101
Suppose we have an input vector $X=\{x_1, x_2, \ldots, x_L\}$ to a $k$-layered network. <BR><BR>
Each layer has its own number of nodes. For the first layer in our drawing that number is $J$. We can store the weights feeding the layer's nodes in a matrix $\mathbf{W} \in \mathbb{R}^{L\times J}$ (one column per node) and the biases from each node in a vector $\mathbf{b} \in \mathbb{R}^{J}$. The affine transformation is then written as $$\mathbf{a} = \mathbf{W^T}X + \mathbf{b}$$ <BR> What we then do is "absorb" $\mathbf{b}$ into $X$ by appending a one to $X$ (a column of ones when $X$ holds many samples). Our input then becomes $X \in \mathbb{R}^{L+1}$, the weight matrix becomes $\mathbf{W}_{plusones} \in \mathbb{R}^{(L+1)\times J}$, and our equation becomes: <BR><BR>$$\mathbf{a} = \mathbf{W^T}_{plusones}X$$ <br>We have that $\mathbf{a} \in \mathbb{R}^{J}$ as well. Next we evaluate the output from each node. We write $$\mathbf{u} = \sigma\left(\mathbf{a}\right)$$ where $\mathbf{u}\in\mathbb{R}^{J}$. We can think of $\sigma$ operating on each individual element of $\mathbf{a}$ separately or in matrix notation. If we denote each component of $\mathbf{a}$ by $a_{j}$ then we can write $$u_{j} = \sigma\left(a_{j}\right), \quad j = 1, \ldots, J.$$<br> In our code we will implement all these equations in matrix notation.
`tf.keras` (Tensorflow) and `numpy` perform the calculations in matrix format.
<br><br>
Image source: *"Modern Mathematical Methods for Computational Science and Engineering"* Efthimios Kaxiras and Athanassios Fokas.
Let's assume that we have 3 inputs ($L = 3$), two hidden layers ($k=2$), and 2 nodes in each layer ($J=2$).<br>
### Input Layer
$X = \{x_1, x_2, x_3\}$
### First Hidden Layer
\begin{equation}
\begin{aligned}
a^{(1)}_1 = w^{(1)}_{10} + w^{(1)}_{11}x_1 + w^{(1)}_{12}x_2 + w^{(1)}_{13}x_3 \\
a^{(1)}_2 = w^{(1)}_{20} + w^{(1)}_{21}x_1 + w^{(1)}_{22}x_2 + w^{(1)}_{23}x_3 \\
\end{aligned}
\end{equation}
<br> All this in matrix notation: $$\mathbf{a} = \mathbf{W^T}X$$
<br> NOTE: in $X$ we have added a column of ones to account for the bias<BR><BR>
**Then the sigmoid is applied**:
\begin{equation}
\begin{aligned}
u^{(1)}_1 = \sigma(a^{(1)}_1) \\
u^{(1)}_2 = \sigma(a^{(1)}_2) \\
\end{aligned}
\end{equation}
or in matrix notation: $$\mathbf{u} = \sigma\left(\mathbf{a}\right)$$
### Second Hidden Layer
\begin{equation}
\begin{aligned}
a^{(2)}_1 = w^{(2)}_{10} + w^{(2)}_{11}u^{(1)}_1 + w^{(2)}_{12}u^{(1)}_2 \\
a^{(2)}_2 = w^{(2)}_{20} + w^{(2)}_{21}u^{(1)}_1 + w^{(2)}_{22}u^{(1)}_2 \\
\end{aligned}
\end{equation}
<br>
**Then the sigmoid is applied**:
\begin{equation}
\begin{aligned}
u^{(2)}_1 = \sigma(a^{(2)}_1) \\
u^{(2)}_2 = \sigma(a^{(2)}_2) \\
\end{aligned}
\end{equation}
### Output Layer
#### If the output is categorical:
For example with four classes ($M=4$): $Y$={$y_1, y_2, y_3, y_4$}, we have the affine and then the sigmoid is lastly applied:
\begin{equation}
\begin{aligned}
a^{(3)}_1 = w^{(3)}_{10} + w^{(3)}_{11}u^{(2)}_1 + w^{(3)}_{12}u^{(2)}_2 \\
a^{(3)}_2 = w^{(3)}_{20} + w^{(3)}_{21}u^{(2)}_1 + w^{(3)}_{22}u^{(2)}_2 \\
a^{(3)}_3 = w^{(3)}_{30} + w^{(3)}_{31}u^{(2)}_1 + w^{(3)}_{32}u^{(2)}_2 \\
a^{(3)}_4 = w^{(3)}_{40} + w^{(3)}_{41}u^{(2)}_1 + w^{(3)}_{42}u^{(2)}_2 \\
\end{aligned}
\end{equation}
<br>
\begin{equation}
\begin{aligned}
y_1 = \sigma(a^{(3)}_1) \\
y_2 = \sigma(a^{(3)}_2) \\
y_3 = \sigma(a^{(3)}_3) \\
y_3 = \sigma(a^{(3)}_4) \\
\end{aligned}
\end{equation}
$\sigma$ will be softmax in the case of multiple classes and sigmoid for binary.
<BR>
#### If the output is a number (regression):
We have a single y as output:
\begin{equation}
\begin{aligned}
y = w^{(3)}_{10} + w^{(3)}_{11}u^{(2)}_1 + w^{(3)}_{12}u^{(2)}_2 \\
\end{aligned}
\end{equation}
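To make the matrix notation concrete, here is a small NumPy sketch (my addition; the weights are random placeholders, not trained values) of the full forward pass for this $L=3$, two-hidden-layer, $J=2$ network with a single regression output:
```python
def sigmoid_vec(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
X = np.array([0.5, -1.0, 2.0, 1.0])  # 3 inputs plus a trailing 1 for the bias

W1 = rng.normal(size=(4, 2))         # (L+1) x J weights for hidden layer 1
u1 = sigmoid_vec(W1.T @ X)           # u^(1) = sigma(W^T X), shape (2,)

W2 = rng.normal(size=(3, 2))         # (J+1) x J weights for hidden layer 2
u2 = sigmoid_vec(W2.T @ np.append(u1, 1.0))

w_out = rng.normal(size=3)           # (J+1,) weights for the output layer
y = w_out @ np.append(u2, 1.0)       # regression output: plain affine, no sigmoid
print(y)
```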
#### Matrix Multiplication and constant addition
```python
a = np.array([[1, 0], [0, 1], [2, 3]])
b = np.array([[4, 1, 1], [2, 2, 1]])
print(np.matrix(a))
print('------')
print(np.matrix(b))
```
[[1 0]
[0 1]
[2 3]]
------
[[4 1 1]
[2 2 1]]
```python
# both Tensorflow and numpy take care of transposing.
c = tf.matmul(a, b) # the tensorflow way
print(c)
d = np.dot(a, b) # the numpy way
print(d)
```
tf.Tensor(
[[ 4 1 1]
[ 2 2 1]
[14 8 5]], shape=(3, 3), dtype=int64)
[[ 4 1 1]
[ 2 2 1]
[14 8 5]]
```python
# this is how we append a column of ones to a matrix to account for the bias
a = [[1, 0], [0, 1]]
ones = np.ones((len(a),1))
a = np.append(a, ones, axis=1)
a
```
array([[1., 0., 1.],
[0., 1., 1.]])
<div class="exercise"><b>1. In class exercise : Plot the sigmoid</b></div>
Define the `sigmoid` and the `tanh`. For `tanh` you may use `np.tanh` and for the `sigmoid` use the general equation:
\begin{align}
\sigma = \dfrac{1}{1+e^{-(x-c)/a}} \qquad\text{(1.1)}
\textrm{}
\end{align}
Generate a list of 500 $x$ points from -5 to 5 and plot both functions. What do you observe? What do variables $c$ and $a$ do?
```python
# your code here
```
```python
# %load solutions/sigmoid.py
# The smaller the `a`, the sharper the function is.
# Variable `c` moves the function along the x axis
def sigmoid(x,c,a):
z = ((x-c)/a)
return 1.0 / (1.0 + np.exp(-z))
x = np.linspace(-5.0, 5.0, 500) # input points
c = 1.
a = 0.5
plt.plot(x, sigmoid(x, c, a), label='sigmoid')
plt.plot(x, np.tanh(x), label='tanh')
plt.grid();
plt.legend();
```
<div class="exercise"><b>2. In class exercise: Approximate a Gaussian function using a node and manually adjusting the weights. Start with one layer with one node and move to two nodes.</b></div>
The task is to approximate (learn) a function $f\left(x\right)$ given some input $x$. For demonstration purposes, the function we will try to learn is a Gaussian function:
\begin{align}
f\left(x\right) = e^{-x^{2}}
\textrm{}
\end{align}
Even though we represent the input $x$ as a vector on the computer, you should think of it as a single input.
#### 2.1 Start by plotting the above function using the $x$ dataset you created earlier
```python
x = np.linspace(-5.0, 5.0, 500) # input points
def gaussian(x):
return np.exp(-x*x)
f = gaussian(x)
plt.plot(x, f, label='gaussian')
plt.legend()
```
```python
f.shape
```
(500,)
#### 2.2 Now, let's code the single node as per the image above.
Write a function named `affine` that does the transformation. The definition is provided below. Then create a simpler sigmoid with just one variable. We choose a **sigmoid** activation function and specifically the **logistic** function. Sigmoids are a family of functions and the logistic function is just one member in that family. $$\sigma\left(z\right) = \dfrac{1}{1 + e^{-z}}.$$ <br>
Define both functions in code.
```python
def affine(x, w, b):
"""Return affine transformation of x
INPUTS
======
x: A numpy array of points in x
w: An array representing the weight of the perceptron
b: An array representing the biases of the perceptron
RETURN
======
z: A numpy array of points after the affine transformation
z = wx + b
"""
# Code goes here
return z
```
```python
# your code here
```
```python
# %load solutions/affine-sigmoid.py
def affine(x, w, b):
return w * x + b
def sigmoid(z):
return 1.0 / (1.0 + np.exp(-z))
```
And now we plot the activation function and the true function. What do you think will happen if you change $w$ and $b$?
```python
w = [-5.0, 0.1, 5.0] # Create a list of weights
b = [0.0, -1.0, 1.0] # Create a list of biases
fig, ax = plt.subplots(1,1, figsize=(9,5))
SIZE = 16
# plot our true function, the gaussian
ax.plot(x, f, lw=4, ls='-.', label='True function')
# plot 3 "networks"
for wi, bi in zip(w, b):
h = sigmoid(affine(x, wi, bi))
ax.plot(x, h, lw=4, label=r'$w = {0}$, $b = {1}$'.format(wi,bi))
ax.set_title('Single neuron network', fontsize=SIZE)
# Create labels (very important!)
ax.set_xlabel('$x$', fontsize=SIZE) # Notice we make the labels big enough to read
ax.set_ylabel('$y$', fontsize=SIZE)
ax.tick_params(labelsize=SIZE) # Make the tick labels big enough to read
ax.legend(fontsize=SIZE, loc='best') # Create a legend and make it big enough to read
```
We didn't do an exhaustive search of the weights and biases, but it sure looks like this single perceptron is never going to match the actual function. Again, we shouldn't be surprised about this. The output layer of the network is simply the logistic function, which can only have so much flexibility.
Let's try to make our network more flexible by using **more nodes**!
### Multiple Perceptrons in a Single Layer
It appears that a single neuron is somewhat limited in what it can accomplish. What if we expand the number of nodes/neurons in our network? We have two obvious choices here. One option is to add depth to the network by putting layers next to each other. The other option is to stack neurons on top of each other in the same layer. Now the network has some width, but is still only one layer deep.
```python
x = np.linspace(-5.0, 5.0, 500) # input points
f = np.exp(-x*x) # data
w = np.array([3.5, -3.5])
b = np.array([3.5, 3.5])
# Affine transformations
z1 = w[0] * x + b[0]
z2 = w[1] * x + b[1]
# Node outputs
h1 = sigmoid(z1)
h2 = sigmoid(z2)
```
Now let's plot things and see what they look like.
```python
fig, ax = plt.subplots(1,1, figsize=(9,5))
ax.plot(x, f, lw=4, ls = '-.', label='True function')
ax.plot(x, h1, lw=4, label='First neuron')
ax.plot(x, h2, lw=4, label='Second neuron')
# Set title
ax.set_title('Comparison of Neuron Outputs', fontsize=SIZE)
# Create labels (very important!)
ax.set_xlabel('$x$', fontsize=SIZE) # Notice we make the labels big enough to read
ax.set_ylabel('$y$', fontsize=SIZE)
ax.tick_params(labelsize=SIZE) # Make the tick labels big enough to read
ax.legend(fontsize=SIZE, loc='best') # Create a legend and make it big enough to read
```
Just as we expected. Some sigmoids. Of course, to get the network prediction we must combine these two sigmoid curves somehow. First we'll just add $h_{1}$ and $h_{2}$ without any weights to see what happens.
#### Note
We are **not** doing classification here. We are trying to predict an actual function. The sigmoid activation is convenient when doing classification because you need to go from $0$ to $1$. However, when learning a function, we don't have as good of a reason to choose a sigmoid.
```python
# Network output
wout = np.ones(2) # Set the output weights to unity to begin
bout = -1 # bias
yout = wout[0] * h1 + wout[1] * h2 + bout
```
And plot.
```python
fig, ax = plt.subplots(1,1, figsize=(9,5))
ax.plot(x, f, ls='-.', lw=4, label=r'True function')
ax.plot(x, yout, lw=4, label=r'$y_{out} = h_{1} + h_{2}$')
# Create labels (very important!)
ax.set_xlabel('$x$', fontsize=SIZE) # Notice we make the labels big enough to read
ax.set_ylabel('$y$', fontsize=SIZE)
ax.tick_params(labelsize=SIZE) # Make the tick labels big enough to read
ax.legend(fontsize=SIZE, loc='best') # Create a legend and make it big enough to read
```
Very cool! The two nodes interact with each other to produce a pretty complicated-looking function. It still doesn't match the true function, but now we have some hope. In fact, it's starting to look a little bit like a Gaussian!
We can do better. There are three obvious options at this point:
1. Change the number of nodes
2. Change the activation functions
3. Change the weights
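As a tiny illustration of option 3, we can hand-tune just the output weights and bias and compare fits numerically. A minimal sketch, reusing `x`, `f`, `h1` and `h2` from above; the candidate values here are guesses for illustration, not an optimized fit:
```python
import numpy as np

# try a few output weight/bias combinations and report the worst-case error
for wo, bo in [(1.0, -1.0), (1.1, -1.1), (1.2, -1.3)]:
    y_try = wo * (h1 + h2) + bo
    print(f'w={wo}, b={bo}: max |f - y| = {np.max(np.abs(f - y_try)):.3f}')
```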
#### We will leave this simple example for some other time! Let's move on to fashion items!
## Part 2: Tensors, Fashion, and Reese Witherspoon
We can think of tensors as multidimensional arrays of real numerical values; their job is to generalize matrices to multiple dimensions. While tensors first emerged in the late 19th century, they have since been applied to numerous other disciplines, including machine learning. Tensor decomposition/factorization can solve, among others, problems in unsupervised learning settings and with temporal and multirelational data. For those of you who will handle images for Convolutional Neural Networks, it's a good idea to understand tensors of rank 3.
We will use the following naming conventions:
- scalar = just a number = rank 0 tensor ( $a \in F$ )
<BR><BR>
- vector = 1D array = rank 1 tensor ( $x = (\,x_1, \ldots, x_n\,)^\top \in F^n$ )
<BR><BR>
- matrix = 2D array = rank 2 tensor ( $\textbf{X} = [a_{ij}] \in F^{m \times n}$ )
<BR><BR>
- 3D array = rank 3 tensor ( $\mathscr{X} = [t_{i,j,k}] \in F^{m \times n \times l}$ )
<BR><BR>
- $N$D array = rank $N$ tensor ( $\mathscr{T} = [t_{i_1, \ldots, i_N}] \in F^{n_1 \times \ldots \times n_N}$ ) <-- Things start to get complicated here...
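In `numpy`, the rank of a tensor is simply its `ndim`; a quick check of the conventions above (a minimal sketch):
```python
import numpy as np

s = np.float64(3.0)           # scalar: rank 0
v = np.array([1., 2., 3.])    # vector: rank 1
M = np.ones((2, 3))           # matrix: rank 2
T = np.zeros((2, 3, 4))       # rank 3 tensor
for t in (np.asarray(s), v, M, T):
    print(t.ndim, t.shape)
```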
#### Tensor indexing
We can create subarrays by fixing some of the given tensor's indices. Fixing all but one index gives a vector (a *fiber*); fixing all but two gives a 2D matrix (a *slice*). For example, for a third-order tensor the fibers are
<br><BR>
$\mathscr{X}[:,j,k]$ = $\mathscr{X}[j,k]$ (column), <br>
$\mathscr{X}[i,:,k]$ = $\mathscr{X}[i,k]$ (row), and <BR>
$\mathscr{X}[i,j,:]$ = $\mathscr{X}[i,j]$ (tube) <BR>
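In `numpy`, these fibers are ordinary slices. A minimal sketch:
```python
import numpy as np

X3 = np.arange(24).reshape(2, 3, 4)  # a small rank-3 tensor
print(X3[:, 1, 2])  # column fiber: fix j and k
print(X3[0, :, 2])  # row fiber: fix i and k
print(X3[0, 1, :])  # tube fiber: fix i and j
```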
#### Tensor multiplication
We can multiply one matrix with another as long as the sizes are compatible ((n × m) × (m × p) = n × p), and also multiply an entire matrix by a constant. NumPy's `numpy.dot` performs matrix multiplication, which is straightforward for 1D or 2D arrays. But what about arrays with 3 or more dimensions? There, `dot` contracts a fixed pair of axes (the last axis of the first argument with the second-to-last axis of the second); if we want to choose the contraction axes ourselves we should use `numpy.tensordot`. But, again, we **do not need tensordot** for this class.
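A minimal sketch of both behaviors (shapes only, random data):
```python
import numpy as np

A = np.random.rand(2, 3)
B = np.random.rand(3, 4)
print(np.dot(A, B).shape)       # (2, 4): ordinary matrix product

T = np.random.rand(2, 3, 4)
C = np.random.rand(4, 5)
print(np.tensordot(T, C, axes=1).shape)           # (2, 3, 5): last axis of T with first of C
print(np.tensordot(T, C, axes=([2], [0])).shape)  # same contraction, axes chosen explicitly
```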
### Reese Witherspoon
This image is from the dataset [Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/person/Reese_Witherspoon.html) used for machine learning training. Images are 24-bit RGB images (height, width, channels), with 8 bits for each of the R, G, B channels. Explore and print the array.
```python
# load and show the image
import matplotlib.image as mpimg  # in case it was not imported earlier in the notebook
import matplotlib.pyplot as plt
FILE = '../fig/Reese_Witherspoon.jpg'
img = mpimg.imread(FILE)
imgplot = plt.imshow(img)
```
```python
print(f'The image is a: {type(img)} of shape {img.shape}')
img[3:5, 3:5, :]
```
The image is a: <class 'numpy.ndarray'> of shape (150, 150, 3)
array([[[241, 241, 241],
[242, 242, 242]],
[[241, 241, 241],
[242, 242, 242]]], dtype=uint8)
#### Slicing tensors: slice along each axis
```python
# we want to show each color channel
fig, axes = plt.subplots(1, 3, figsize=(10,10))
for i, subplot in zip(range(3), axes):
temp = np.zeros(img.shape, dtype='uint8')
temp[:,:,i] = img[:,:,i]
subplot.imshow(temp)
subplot.set_axis_off()
plt.show()
```
#### Multiplying Images with a scalar (just for fun, does not really help us in any way)
```python
temp = img * 2  # caution: uint8 arithmetic wraps around (mod 256), hence the odd colors
plt.imshow(temp)
```
For more on image manipulation by `matplotlib` see: [matplotlib-images](https://matplotlib.org/3.1.1/tutorials/introductory/images.html)
### Anatomy of an Artificial Neural Network
In Part 1 we hand-made a neural network by writing some simple python functions. We focused on a regression problem where we tried to learn a function. We practiced using the logistic activation function in a network with multiple nodes, but only one or two hidden layers. Some of the key observations were:
* Increasing the number of nodes allows us to represent more complicated functions
* The weights and biases have a very big impact on the solution
* Finding the "correct" weights and biases is really hard to do manually
* There must be a better method for determining the weights and biases automatically
We also didn't assess the effects of different activation functions or different network depths.
### `tf.keras`
https://www.tensorflow.org/guide/keras
`tf.keras` is TensorFlow's high-level API for building and training deep learning models. It's used for fast prototyping, state-of-the-art research, and production. `Keras` is a library created by François Chollet. After Google released Tensorflow 2.0, the creators of `keras` recommend that "Keras users who use multi-backend Keras with the TensorFlow backend switch to `tf.keras` in TensorFlow 2.0. `tf.keras` is better maintained and has better integration with TensorFlow features".
#### IMPORTANT: In `Keras` everything starts with a Tensor of N samples as input and ends with a Tensor of N samples as output.
### The 3 parts of an ANN
- **Part 1: the input layer** (our dataset)
- **Part 2: the internal architecture or hidden layers** (the number of layers, the activation functions, the learnable parameters and other hyperparameters)
- **Part 3: the output layer** (what we want from the network)
In the rest of the lab we will practice with end-to-end neural network training
1. Load the data
2. Define the layers of the model.
3. Compile the model.
4. Fit the model to the train set (also using a validation set).
5. Evaluate the model on the test set.
6. Plot metrics such as accuracy.
7. Predict on random images from test set.
8. Predict on a random image from the web!
```python
import numpy as np              # in case these were not imported earlier
import tensorflow as tf
seed = 7
np.random.seed(seed)  # fix the NumPy RNG for reproducibility
```
### Fashion MNIST
MNIST, the set of handwritten digits, is considered the *Drosophila* of machine learning. It has been overused, though, so we will try a slight modification of it.
**Fashion-MNIST** is a dataset of clothing article images (created by [Zalando](https://github.com/zalandoresearch/fashion-mnist)), consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a **28 x 28** grayscale image, associated with a label from **10 classes**. The creators intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits. Each pixel is 8 bits so its value ranges from 0 to 255.
Let's load and look at it!
#### 1. Load the data
```python
%%time
# get the data from keras
fashion_mnist = tf.keras.datasets.fashion_mnist
# load the data, already split into train and test! how nice!
(x_train, y_train),(x_test, y_test) = fashion_mnist.load_data()
# normalize the data by dividing with pixel intensity
# (each pixel is 8 bits so its value ranges from 0 to 255)
x_train, x_test = x_train / 255.0, x_test / 255.0
# classes are named 0-9 so define names for plotting clarity
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# plot
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(x_train[i], cmap=plt.cm.binary)
plt.xlabel(class_names[y_train[i]])
plt.show()
```
```python
plt.imshow(x_train[3], cmap=plt.cm.binary)
```
```python
x_train.shape, x_test.shape
```
((60000, 28, 28), (10000, 28, 28))
```python
y_train.shape
```
(60000,)
#### 2. Define the layers of the model.
```python
# type together
model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784-vector
  tf.keras.layers.Dense(154, activation='relu'),    # first hidden layer
  tf.keras.layers.Dense(64, activation='relu'),     # second hidden layer
  #tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation='softmax')   # one probability per class
])
```
#### 3. Compile the model
```python
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
model.compile(optimizer=optimizer,
loss=loss_fn,
metrics=['accuracy'])
```
```python
model.summary()
```
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 784) 0
_________________________________________________________________
dense (Dense) (None, 154) 120890
_________________________________________________________________
dense_1 (Dense) (None, 64) 9920
_________________________________________________________________
dense_2 (Dense) (None, 10) 650
=================================================================
Total params: 131,460
Trainable params: 131,460
Non-trainable params: 0
_________________________________________________________________
```python
tf.keras.utils.plot_model(
model,
#to_file='model.png', # if you want to save the image
show_shapes=True, # True for more details than you need
show_layer_names=True,
rankdir='TB',
expand_nested=False,
dpi=96
)
```
[Everything you wanted to know about a Keras Model and were afraid to ask](https://www.tensorflow.org/api_docs/python/tf/keras/Model)
#### 4. Fit the model to the train set (also using a validation set)
This is the part that takes the longest.
-----------------------------------------------------------
**ep·och** <BR>
noun: epoch; plural noun: epochs. A period of time in history or a person's life, typically one marked by notable events or particular characteristics. Examples: "the Victorian epoch", "my Neural Network's epochs". <BR>
-----------------------------------------------------------
```python
%%time
# the core of the network training
history = model.fit(x_train, y_train, validation_split=0.33, epochs=50,
verbose=2)
```
Train on 40199 samples, validate on 19801 samples
Epoch 1/50
40199/40199 - 3s - loss: 0.5265 - accuracy: 0.8121 - val_loss: 0.4014 - val_accuracy: 0.8543
Epoch 2/50
40199/40199 - 3s - loss: 0.3875 - accuracy: 0.8587 - val_loss: 0.3988 - val_accuracy: 0.8521
Epoch 3/50
40199/40199 - 3s - loss: 0.3502 - accuracy: 0.8711 - val_loss: 0.3711 - val_accuracy: 0.8645
Epoch 4/50
40199/40199 - 3s - loss: 0.3199 - accuracy: 0.8818 - val_loss: 0.3884 - val_accuracy: 0.8573
Epoch 5/50
40199/40199 - 3s - loss: 0.3066 - accuracy: 0.8870 - val_loss: 0.3463 - val_accuracy: 0.8763
Epoch 6/50
40199/40199 - 3s - loss: 0.2861 - accuracy: 0.8931 - val_loss: 0.3719 - val_accuracy: 0.8655
Epoch 7/50
40199/40199 - 3s - loss: 0.2775 - accuracy: 0.8964 - val_loss: 0.3487 - val_accuracy: 0.8752
Epoch 8/50
40199/40199 - 3s - loss: 0.2641 - accuracy: 0.9008 - val_loss: 0.3437 - val_accuracy: 0.8779
Epoch 9/50
40199/40199 - 3s - loss: 0.2528 - accuracy: 0.9043 - val_loss: 0.3454 - val_accuracy: 0.8809
Epoch 10/50
40199/40199 - 3s - loss: 0.2430 - accuracy: 0.9093 - val_loss: 0.3273 - val_accuracy: 0.8857
Epoch 11/50
40199/40199 - 3s - loss: 0.2335 - accuracy: 0.9127 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 12/50
40199/40199 - 3s - loss: 0.2259 - accuracy: 0.9141 - val_loss: 0.3352 - val_accuracy: 0.8876
Epoch 13/50
40199/40199 - 3s - loss: 0.2189 - accuracy: 0.9175 - val_loss: 0.3281 - val_accuracy: 0.8915
Epoch 14/50
40199/40199 - 3s - loss: 0.2128 - accuracy: 0.9184 - val_loss: 0.3394 - val_accuracy: 0.8850
Epoch 15/50
40199/40199 - 3s - loss: 0.2038 - accuracy: 0.9215 - val_loss: 0.3257 - val_accuracy: 0.8934
Epoch 16/50
40199/40199 - 3s - loss: 0.1982 - accuracy: 0.9249 - val_loss: 0.3501 - val_accuracy: 0.8895
Epoch 17/50
40199/40199 - 3s - loss: 0.1912 - accuracy: 0.9277 - val_loss: 0.3758 - val_accuracy: 0.8849
Epoch 18/50
40199/40199 - 3s - loss: 0.1828 - accuracy: 0.9295 - val_loss: 0.3625 - val_accuracy: 0.8889
Epoch 19/50
40199/40199 - 3s - loss: 0.1817 - accuracy: 0.9315 - val_loss: 0.3517 - val_accuracy: 0.8911
Epoch 20/50
40199/40199 - 3s - loss: 0.1747 - accuracy: 0.9332 - val_loss: 0.3463 - val_accuracy: 0.8929
Epoch 21/50
40199/40199 - 3s - loss: 0.1707 - accuracy: 0.9349 - val_loss: 0.3691 - val_accuracy: 0.8892
Epoch 22/50
40199/40199 - 3s - loss: 0.1651 - accuracy: 0.9372 - val_loss: 0.3567 - val_accuracy: 0.8932
Epoch 23/50
40199/40199 - 3s - loss: 0.1611 - accuracy: 0.9374 - val_loss: 0.3737 - val_accuracy: 0.8914
Epoch 24/50
40199/40199 - 3s - loss: 0.1582 - accuracy: 0.9397 - val_loss: 0.3833 - val_accuracy: 0.8906
Epoch 25/50
40199/40199 - 3s - loss: 0.1518 - accuracy: 0.9416 - val_loss: 0.3730 - val_accuracy: 0.8921
Epoch 26/50
40199/40199 - 3s - loss: 0.1485 - accuracy: 0.9429 - val_loss: 0.4085 - val_accuracy: 0.8897
Epoch 27/50
40199/40199 - 3s - loss: 0.1415 - accuracy: 0.9462 - val_loss: 0.3862 - val_accuracy: 0.8909
Epoch 28/50
40199/40199 - 3s - loss: 0.1433 - accuracy: 0.9451 - val_loss: 0.4204 - val_accuracy: 0.8907
Epoch 29/50
40199/40199 - 3s - loss: 0.1374 - accuracy: 0.9483 - val_loss: 0.3937 - val_accuracy: 0.8920
Epoch 30/50
40199/40199 - 3s - loss: 0.1330 - accuracy: 0.9498 - val_loss: 0.4073 - val_accuracy: 0.8887
Epoch 31/50
40199/40199 - 3s - loss: 0.1339 - accuracy: 0.9493 - val_loss: 0.4131 - val_accuracy: 0.8891
Epoch 32/50
40199/40199 - 3s - loss: 0.1288 - accuracy: 0.9514 - val_loss: 0.4089 - val_accuracy: 0.8919
Epoch 33/50
40199/40199 - 3s - loss: 0.1239 - accuracy: 0.9526 - val_loss: 0.4195 - val_accuracy: 0.8904
Epoch 34/50
40199/40199 - 3s - loss: 0.1228 - accuracy: 0.9529 - val_loss: 0.4848 - val_accuracy: 0.8873
Epoch 35/50
40199/40199 - 3s - loss: 0.1183 - accuracy: 0.9547 - val_loss: 0.4405 - val_accuracy: 0.8924
Epoch 36/50
40199/40199 - 3s - loss: 0.1176 - accuracy: 0.9555 - val_loss: 0.4518 - val_accuracy: 0.8895
Epoch 37/50
40199/40199 - 3s - loss: 0.1174 - accuracy: 0.9552 - val_loss: 0.4882 - val_accuracy: 0.8883
Epoch 38/50
40199/40199 - 3s - loss: 0.1101 - accuracy: 0.9581 - val_loss: 0.4646 - val_accuracy: 0.8947
Epoch 39/50
40199/40199 - 3s - loss: 0.1050 - accuracy: 0.9599 - val_loss: 0.4875 - val_accuracy: 0.8919
Epoch 40/50
40199/40199 - 3s - loss: 0.1089 - accuracy: 0.9599 - val_loss: 0.4824 - val_accuracy: 0.8900
Epoch 41/50
40199/40199 - 3s - loss: 0.1047 - accuracy: 0.9583 - val_loss: 0.5133 - val_accuracy: 0.8872
Epoch 42/50
40199/40199 - 3s - loss: 0.1056 - accuracy: 0.9598 - val_loss: 0.4922 - val_accuracy: 0.8908
Epoch 43/50
40199/40199 - 3s - loss: 0.0965 - accuracy: 0.9635 - val_loss: 0.5100 - val_accuracy: 0.8891
Epoch 44/50
40199/40199 - 3s - loss: 0.1012 - accuracy: 0.9617 - val_loss: 0.5292 - val_accuracy: 0.8891
Epoch 45/50
40199/40199 - 3s - loss: 0.0968 - accuracy: 0.9631 - val_loss: 0.5449 - val_accuracy: 0.8865
Epoch 46/50
40199/40199 - 3s - loss: 0.0926 - accuracy: 0.9642 - val_loss: 0.5455 - val_accuracy: 0.8904
Epoch 47/50
40199/40199 - 3s - loss: 0.0945 - accuracy: 0.9634 - val_loss: 0.5249 - val_accuracy: 0.8909
Epoch 48/50
40199/40199 - 3s - loss: 0.0915 - accuracy: 0.9649 - val_loss: 0.5712 - val_accuracy: 0.8861
Epoch 49/50
40199/40199 - 3s - loss: 0.0887 - accuracy: 0.9670 - val_loss: 0.5586 - val_accuracy: 0.8849
Epoch 50/50
40199/40199 - 3s - loss: 0.0873 - accuracy: 0.9670 - val_loss: 0.5836 - val_accuracy: 0.8896
CPU times: user 3min 50s, sys: 26.9 s, total: 4min 17s
Wall time: 2min 21s
#### Save the model
You can save the model so you do not have to `.fit` every time you reset the kernel in the notebook. Network training is expensive!
For more details on this see [https://www.tensorflow.org/guide/keras/save_and_serialize](https://www.tensorflow.org/guide/keras/save_and_serialize)
```python
# save the model so you do not have to run the code every time
model.save('fashion_model.h5')
# Recreate the exact same model purely from the file
#model = tf.keras.models.load_model('fashion_model.h5')
```
#### 5. Evaluate the model on the test set.
```python
test_loss, test_accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f'Test accuracy={test_accuracy}')
```
Test accuracy=0.8804000020027161
#### 6. We learn a lot by studying History! Plot metrics such as accuracy.
You can learn a lot about neural networks by observing how they perform while training. You can use callbacks in `keras`; the network's performance is stored in the `History` callback returned by `.fit` (here aptly named `history`), which can be plotted.
```python
print(history.history.keys())
```
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
```python
# plot accuracy and loss for the test set
fig, ax = plt.subplots(1,2, figsize=(20,6))
ax[0].plot(history.history['accuracy'])
ax[0].plot(history.history['val_accuracy'])
ax[0].set_title('Model accuracy')
ax[0].set_ylabel('accuracy')
ax[0].set_xlabel('epoch')
ax[0].legend(['train', 'val'], loc='best')
ax[1].plot(history.history['loss'])
ax[1].plot(history.history['val_loss'])
ax[1].set_title('Model loss')
ax[1].set_ylabel('loss')
ax[1].set_xlabel('epoch')
ax[1].legend(['train', 'val'], loc='best')
```
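The curves above show classic overfitting: training loss keeps dropping while validation loss starts climbing after roughly epoch 10. A minimal sketch of one standard remedy that was *not* used in the run above — early stopping via a `keras` callback:
```python
# stop training once val_loss has not improved for `patience` epochs
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                              patience=5,
                                              restore_best_weights=True)
# history = model.fit(x_train, y_train, validation_split=0.33,
#                     epochs=50, verbose=2, callbacks=[early_stop])
```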
#### 7. Now let's use the Network for what it was meant to do: Predict!
```python
predictions = model.predict(x_test)
```
```python
predictions[0]
```
array([2.3347583e-26, 5.1650537e-17, 7.8453709e-22, 4.0654924e-26,
6.6475328e-18, 1.3495652e-14, 1.9801722e-15, 1.6511808e-06,
1.7612138e-24, 9.9999833e-01], dtype=float32)
```python
np.argmax(predictions[0]), class_names[np.argmax(predictions[0])]
```
(9, 'Ankle boot')
Let's see if our network predicted correctly! Is the first item what was predicted?
```python
plt.figure()
plt.imshow(x_test[0], cmap=plt.cm.binary)
plt.xlabel(class_names[y_test[0]])
plt.colorbar()
```
**Correct!!** Now let's see how confident our model is by plotting the probability values:
```python
# code source: https://www.tensorflow.org/tutorials/keras/classification
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
```
```python
i = 406
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], y_test, x_test)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], y_test)
plt.show()
```
#### 8. Predicting in the real world
Let's see if our network can generalize beyond the Fashion-MNIST dataset. Let's give it a random image of a boot found online. Does it have to be a clothing item resembling the Fashion-MNIST dataset? Can it be a puppy?
Download an image from the internet and resize it to 28x28.
`tf.keras` models are optimized to make predictions on a batch, or collection, of examples at once. Accordingly, even though you're using a single image, you need to add it to a list:
```python
# a minimal sketch (assumption: the downloaded image has already been resized
# to 28x28 grayscale and scaled to [0, 1]; x_test[0] stands in for it here)
img = x_test[0]
img_batch = np.expand_dims(img, 0)   # shape (1, 28, 28): a batch of one
pred = model.predict(img_batch)
print(class_names[np.argmax(pred[0])])
```
# `sympy`
`scipy` 계열은 [`sympy`](https://www.sympy.org)라는 *기호 처리기*도 포함하고 있다.<br>
`scipy` stack also includes [`sympy`](https://www.sympy.org), a *symbolic processor*.
2006년 이후 2019년까지 800명이 넘는 개발자가 코드를 제공하였다.<br>
Since 2006, more than 800 developers have contributed code (as of 2019).
## 기호 연산 예<br>Examples of symbolic processing
`sympy` 모듈을 `sy` 라는 이름으로 불러온다.<br>Import `sympy` module in the name of `sy`.
```python
import sympy as sy
sy.init_printing()
```
비교를 위해 `numpy` 모듈도 불러온다.<br>
Import `numpy` module to compare.
```python
import numpy as np
```
### 제곱근<br>Square root
10의 제곱근을 구해보자.<br>Let's find the square root of ten.
```python
np.sqrt(10)
```
```python
sy.sqrt(10)
```
10의 제곱근을 제곱해보자.<br>Let's square the square root of ten.
```python
np.sqrt(10) ** 2
```
```python
sy.sqrt(10) ** 2
```
위 결과의 차이에 대해 어떻게 생각하는가?<br>
What do you think about the differences of the results above?
### 분수<br>Fractions
10 / 3 을 생각해보자.<br>Let't think about 10/3.
```python
ten_over_three = 10 / 3
```
```python
ten_over_three
```
```python
ten_over_three * 3
```
```python
import fractions
```
```python
fr_ten_over_three = fractions.Fraction(10, 3)
```
```python
fr_ten_over_three
```
```python
fr_ten_over_three * 3
```
```python
sy_ten_over_three = sy.Rational(10, 3)
```
```python
sy_ten_over_three
```
```python
sy_ten_over_three * 3
```
위 결과의 차이에 대해 어떻게 생각하는가?<br>
What do you think about the differences of the results above?
### 변수를 포함하는 수식<br>Expressions with variables
사용할 변수를 정의한다.<br>Define variables to use.
```python
a, b, c, x = sy.symbols('a b c x')
theta, phi = sy.symbols('theta phi')
```
변수들을 한번 살펴보자.<br>Let's take a look at the variables
```python
a, b, c, x
```
```python
theta, phi
```
변수를 조합하여 새로운 수식을 만들어 보자.<br>
Let's make equations using variables.
```python
y = a * x + b
```
```python
y
```
```python
z = a * x * x + b * x + c
```
```python
z
```
```python
w = a * sy.sin(theta) ** 2 + b
```
```python
w
```
```python
p = (x - a) * (x - b) * (x - c)
```
```python
p
```
```python
sy.expand(p, x)
```
```python
sy.collect(_, x)
```
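Going the other way, `sy.factor` should recover the product form from the expanded polynomial (a minimal sketch):
```python
sy.factor(sy.expand(p))  # back to (x - a)*(x - b)*(x - c), up to ordering and signs
```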
### 미적분<br>Calculus
```python
z.diff(x)
```
```python
sy.integrate(z, x)
```
```python
w.diff(theta)
```
```python
sy.integrate(w, theta)
```
### 근<br>Root
```python
z_sol_list = sy.solve(z, x)
```
```python
z_sol_list
```
```python
sy.solve(2* sy.sin(theta) ** 2 - 1, theta)
```
### 코드 생성<br>Code generation
```python
print(sy.python(z_sol_list[0]))
```
```python
import sympy.utilities.codegen as sc
```
```python
[(c_name, c_code), (h_name, c_header)] = sc.codegen(
("z_sol", z_sol_list[0]),
"C89",
"test"
)
```
```python
c_name
```
```python
print(c_code)
```
```python
h_name
```
```python
print(c_header)
```
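`codegen` is not limited to C; the same call can target other languages, e.g. Fortran via `'F95'`. A minimal sketch:
```python
# generate Fortran source for the same expression and list the produced files
for filename, contents in sc.codegen(("z_sol", z_sol_list[0]), "F95", "test"):
    print(filename)
```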
### 연립방정식<br>System of equations
```python
# the range syntax 'a1:4' expands to the symbols a1, a2, a3
a1, a2, a3 = sy.symbols('a1:4')
b1, b2, b3 = sy.symbols('b1:4')
c1, c2 = sy.symbols('c1:3')
x1, x2 = sy.symbols('x1:3')
```
```python
eq1 = sy.Eq(
a1 * x1 + a2 * x2,
c1,
)
```
```python
eq1
```
```python
eq2 = sy.Eq(
b1 * x1 + b2 * x2,
c2,
)
```
```python
eq_list = [eq1, eq2]
```
```python
eq_list
```
```python
sy.solve(eq_list, (x1, x2))
```
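For numeric work, `sy.lambdify` turns such a symbolic solution into a plain Python function. A minimal sketch, assuming the `solve` call above returns a dict keyed by `x1` and `x2`:
```python
sol = sy.solve(eq_list, (x1, x2))
f_x1 = sy.lambdify((a1, a2, b1, b2, c1, c2), sol[x1])
f_x1(1, 2, 3, 4, 5, 6)  # x1 for one concrete set of coefficients
```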
## 참고문헌<br>References
* SymPy Development Team, SymPy 1.4 documentation, sympy.org, 2019 04 10. [Online] Available : https://docs.sympy.org/latest/index.html.
* SymPy Development Team, SymPy Tutorial, SymPy 1.4 documentation, sympy.org, 2019 04 10. [Online] Available : https://docs.sympy.org/latest/tutorial/index.html.
* d84_n1nj4, "How to keep fractions in your equation output", Stackoverflow.com, 2017 08 12. [Online] Available : https://stackoverflow.com/a/45651175.
* Python developers, "Fractions", Python documentation, 2019 10 12. [Online] Available : https://docs.python.org/3.7/library/fractions.html.
* SymPy Development Team, codegen, SymPy 1.4 documentation, sympy.org, 2019 04 10. [Online] Available : https://docs.sympy.org/latest/modules/utilities/codegen.html.
## Final Bell<br>마지막 종
```python
# stackoverflow.com/a/24634221
import os
os.system("printf '\a'");
```
```python
```
# PaBiRoboy dynamic equations
First, import the necessary functions from SymPy that will allow us to construct time-varying vectors in the reference frames.
```python
from __future__ import print_function, division
from sympy import symbols, simplify, Matrix
from sympy import trigsimp
from sympy.physics.mechanics import dynamicsymbols, ReferenceFrame, Point, Particle, inertia, RigidBody, KanesMethod
from numpy import deg2rad, rad2deg, array, zeros, linspace
from sympy.physics.vector import init_vprinting, vlatex
import numpy as np
from scipy.integrate import odeint
from sympy.utilities.codegen import codegen
from pydy.codegen.ode_function_generators import generate_ode_function
from matplotlib.pyplot import plot, legend, xlabel, ylabel, rcParams
rcParams['figure.figsize'] = (14.0, 6.0)
```
SymPy has a rich printing system. Here we initialize printing so that all of the mathematical equations are rendered in standard mathematical notation.
```python
from sympy.physics.vector import init_vprinting
init_vprinting(use_latex='mathjax', pretty_print=False)
```
## Reference Frames
```python
inertial_frame = ReferenceFrame('I')
lower_leg_left_frame = ReferenceFrame('R_1')
upper_leg_left_frame = ReferenceFrame('R_2')
hip_frame = ReferenceFrame('R_3')
upper_leg_right_frame = ReferenceFrame('R_4')
lower_leg_right_frame = ReferenceFrame('R_5')
```
## Angles
```python
theta0, theta1, theta2, theta3, phi = dynamicsymbols('theta0, theta1, theta2, theta3, phi')
```
```python
lower_leg_left_frame.orient(inertial_frame, 'Axis', (phi, inertial_frame.z))
simplify(lower_leg_left_frame.dcm(inertial_frame))
```
$$\left[\begin{matrix}\operatorname{cos}\left(\phi\right) & \operatorname{sin}\left(\phi\right) & 0\\- \operatorname{sin}\left(\phi\right) & \operatorname{cos}\left(\phi\right) & 0\\0 & 0 & 1\end{matrix}\right]$$
```python
upper_leg_left_frame.orient(lower_leg_left_frame, 'Axis', (theta0, -lower_leg_left_frame.z))
simplify(upper_leg_left_frame.dcm(inertial_frame))
```
$$\left[\begin{matrix}\operatorname{cos}\left(\phi - \theta_{0}\right) & \operatorname{sin}\left(\phi - \theta_{0}\right) & 0\\- \operatorname{sin}\left(\phi - \theta_{0}\right) & \operatorname{cos}\left(\phi - \theta_{0}\right) & 0\\0 & 0 & 1\end{matrix}\right]$$
```python
hip_frame.orient(upper_leg_left_frame, 'Axis', (theta1, -upper_leg_left_frame.z))
hip_frame.dcm(inertial_frame)
```
$$\left[\begin{matrix}\left(- \operatorname{sin}\left(\theta_{0}\right) \operatorname{sin}\left(\theta_{1}\right) + \operatorname{cos}\left(\theta_{0}\right) \operatorname{cos}\left(\theta_{1}\right)\right) \operatorname{cos}\left(\phi\right) - \left(- \operatorname{sin}\left(\theta_{0}\right) \operatorname{cos}\left(\theta_{1}\right) - \operatorname{sin}\left(\theta_{1}\right) \operatorname{cos}\left(\theta_{0}\right)\right) \operatorname{sin}\left(\phi\right) & \left(- \operatorname{sin}\left(\theta_{0}\right) \operatorname{sin}\left(\theta_{1}\right) + \operatorname{cos}\left(\theta_{0}\right) \operatorname{cos}\left(\theta_{1}\right)\right) \operatorname{sin}\left(\phi\right) + \left(- \operatorname{sin}\left(\theta_{0}\right) \operatorname{cos}\left(\theta_{1}\right) - \operatorname{sin}\left(\theta_{1}\right) \operatorname{cos}\left(\theta_{0}\right)\right) \operatorname{cos}\left(\phi\right) & 0\\- \left(- \operatorname{sin}\left(\theta_{0}\right) \operatorname{sin}\left(\theta_{1}\right) + \operatorname{cos}\left(\theta_{0}\right) \operatorname{cos}\left(\theta_{1}\right)\right) \operatorname{sin}\left(\phi\right) + \left(\operatorname{sin}\left(\theta_{0}\right) \operatorname{cos}\left(\theta_{1}\right) + \operatorname{sin}\left(\theta_{1}\right) \operatorname{cos}\left(\theta_{0}\right)\right) \operatorname{cos}\left(\phi\right) & \left(- \operatorname{sin}\left(\theta_{0}\right) \operatorname{sin}\left(\theta_{1}\right) + \operatorname{cos}\left(\theta_{0}\right) \operatorname{cos}\left(\theta_{1}\right)\right) \operatorname{cos}\left(\phi\right) + \left(\operatorname{sin}\left(\theta_{0}\right) \operatorname{cos}\left(\theta_{1}\right) + \operatorname{sin}\left(\theta_{1}\right) \operatorname{cos}\left(\theta_{0}\right)\right) \operatorname{sin}\left(\phi\right) & 0\\0 & 0 & 1\end{matrix}\right]$$
```python
upper_leg_right_frame.orient(hip_frame, 'Axis', (theta2, -hip_frame.z))
simplify(upper_leg_right_frame.dcm(inertial_frame))
```
$$\left[\begin{matrix}\operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) & - \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) & 0\\\operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) & \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) & 0\\0 & 0 & 1\end{matrix}\right]$$
```python
lower_leg_right_frame.orient(upper_leg_right_frame, 'Axis', (theta3, -upper_leg_right_frame.z))
simplify(lower_leg_right_frame.dcm(inertial_frame))
```
$$\left[\begin{matrix}\operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) & - \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) & 0\\\operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) & \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) & 0\\0 & 0 & 1\end{matrix}\right]$$
## Points and Locations
```python
origin = Point('Origin')
ankle_left = Point('AnkleLeft')
knee_left = Point('KneeLeft')
hip_left = Point('HipLeft')
hip_center = Point('HipCenter')
hip_right = Point('HipRight')
knee_right = Point('KneeRight')
ankle_right = Point('AnkleRight')
# here go the lengths of your robot's links
lower_leg_length, upper_leg_length, hip_length = symbols('l1, l2, l3')
ankle_left.set_pos(origin, (0 * inertial_frame.y)+(0 * inertial_frame.x))
#ankle_left.pos_from(origin).express(inertial_frame).simplify()
knee_left.set_pos(ankle_left, lower_leg_length * lower_leg_left_frame.y)
#knee_left.pos_from(origin).express(inertial_frame).simplify()
hip_left.set_pos(knee_left, upper_leg_length * upper_leg_left_frame.y)
#hip_left.pos_from(origin).express(inertial_frame).simplify()
hip_center.set_pos(hip_left, hip_length/2 * hip_frame.x)
#hip_center.pos_from(origin).express(inertial_frame).simplify()
hip_right.set_pos(hip_center, hip_length/2 * hip_frame.x)
#hip_right.pos_from(origin).express(inertial_frame).simplify()
knee_right.set_pos(hip_right, upper_leg_length * -upper_leg_right_frame.y)
#knee_right.pos_from(origin).express(inertial_frame).simplify()
ankle_right.set_pos(knee_right, lower_leg_length * -lower_leg_right_frame.y)
#ankle_right.pos_from(origin).express(inertial_frame).simplify()
```
```python
lower_leg_left_com_length = lower_leg_length/2
upper_leg_left_com_length = upper_leg_length/2
hip_com_length = hip_length/2
upper_leg_right_com_length = upper_leg_length/2
lower_leg_right_com_length = lower_leg_length/2
lower_leg_left_mass_center = Point('L_COMleft')
upper_leg_left_mass_center = Point('U_COMleft')
hip_mass_center = Point('H_COMleft')
upper_leg_right_mass_center = Point('U_COMright')
lower_leg_right_mass_center = Point('L_COMright')
```
```python
lower_leg_left_mass_center.set_pos(ankle_left, lower_leg_left_com_length * lower_leg_left_frame.y)
#lower_leg_left_mass_center.pos_from(origin).express(inertial_frame).simplify()
upper_leg_left_mass_center.set_pos(knee_left, upper_leg_left_com_length * upper_leg_left_frame.y)
#upper_leg_left_mass_center.pos_from(origin).express(inertial_frame).simplify()
hip_mass_center.set_pos(hip_center, 0 * hip_frame.x)
#hip_mass_center.pos_from(origin).express(inertial_frame).simplify()
upper_leg_right_mass_center.set_pos(knee_right, upper_leg_right_com_length * upper_leg_right_frame.y)
#upper_leg_right_mass_center.pos_from(origin).express(inertial_frame).simplify()
lower_leg_right_mass_center.set_pos(ankle_right, lower_leg_right_com_length * lower_leg_right_frame.y)
lower_leg_right_mass_center.pos_from(origin).express(inertial_frame).simplify()
```
$$(- \frac{l_{1}}{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - l_{1} \operatorname{sin}\left(\phi\right) - l_{2} \operatorname{sin}\left(\phi - \theta_{0}\right) - l_{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) + l_{3} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right))\mathbf{\hat{i}_x} + (- \frac{l_{1}}{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} \operatorname{cos}\left(\phi\right) + l_{2} \operatorname{cos}\left(\phi - \theta_{0}\right) - l_{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - l_{3} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right))\mathbf{\hat{i}_y}$$
## Kinematical Differential Equations
```python
omega0, omega1, omega2, omega3, psi = dynamicsymbols('omega0, omega1, omega2, omega3, psi')
kinematical_differential_equations = [omega0 - theta0.diff(),
omega1 - theta1.diff(),
omega2 - theta2.diff(),
omega3 - theta3.diff(),
psi - phi.diff(),
]
kinematical_differential_equations
```
$$\left [ \omega_{0} - \dot{\theta}_{0}, \quad \omega_{1} - \dot{\theta}_{1}, \quad \omega_{2} - \dot{\theta}_{2}, \quad \omega_{3} - \dot{\theta}_{3}, \quad \psi - \dot{\phi}\right ]$$
## Angular Velocities
```python
lower_leg_left_frame.set_ang_vel(inertial_frame, 0 * inertial_frame.z)
#lower_leg_left_frame.ang_vel_in(inertial_frame)
upper_leg_left_frame.set_ang_vel(upper_leg_left_frame, omega0 * inertial_frame.z)
#upper_leg_left_frame.ang_vel_in(inertial_frame)
hip_frame.set_ang_vel(hip_frame, omega1 * inertial_frame.z)
#hip_frame.ang_vel_in(inertial_frame)
upper_leg_right_frame.set_ang_vel(upper_leg_right_frame, omega2 * inertial_frame.z)
#upper_leg_right_frame.ang_vel_in(inertial_frame)
lower_leg_right_frame.set_ang_vel(lower_leg_right_frame, omega3 * inertial_frame.z)
lower_leg_right_frame.ang_vel_in(inertial_frame)
```
$$- \dot{\theta}_{3}\mathbf{\hat{r_4}_z} - \dot{\theta}_{2}\mathbf{\hat{r_3}_z} - \dot{\theta}_{1}\mathbf{\hat{r_2}_z} - \dot{\theta}_{0}\mathbf{\hat{r_1}_z}$$
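Expressed in the inertial frame, the chained angular velocity collapses to the signed sum of the joint rates, as a quick check confirms:
```python
lower_leg_right_frame.ang_vel_in(inertial_frame).express(inertial_frame)
```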
## Linear Velocities
Finally, the linear velocities of the mass centers are needed. Starting at `ankle_left`, which has a velocity of 0.
```python
ankle_left.set_vel(inertial_frame, 0)
origin.set_vel(inertial_frame, 0)
```
Working our way up the chain, we can make use of the fact that each joint point is fixed in two rigid bodies. The velocity of any point fixed in a reference frame can be computed if the linear velocity of another point fixed in that frame and the frame's angular velocity are known:
$${}^I\mathbf{v}^{P_2} = {}^I\mathbf{v}^{P_1} + {}^I\boldsymbol{\omega}^{A} \times \mathbf{r}^{P_2/P_1}$$
The `Point.v2pt_theory()` method makes it easy to do this calculation.
```python
knee_left.v2pt_theory(ankle_left, inertial_frame, lower_leg_left_frame)
#knee_left.vel(inertial_frame)
hip_left.v2pt_theory(knee_left, inertial_frame, upper_leg_left_frame)
#hip_left.vel(inertial_frame)
hip_center.v2pt_theory(hip_left, inertial_frame, hip_frame)
#hip_center.vel(inertial_frame)
hip_mass_center.v2pt_theory(hip_center, inertial_frame, hip_frame)
#hip_mass_center.vel(inertial_frame)
hip_right.v2pt_theory(hip_center, inertial_frame, hip_frame)
#hip_right.vel(inertial_frame)
knee_right.v2pt_theory(hip_right, inertial_frame, upper_leg_right_frame)
#knee_right.vel(inertial_frame)
ankle_right.v2pt_theory(knee_right, inertial_frame, lower_leg_right_frame)
#ankle_right.vel(inertial_frame)
lower_leg_left_mass_center.v2pt_theory(ankle_left, inertial_frame, lower_leg_left_frame)
#lower_leg_left_mass_center.vel(inertial_frame)
lower_leg_right_mass_center.v2pt_theory(ankle_right, inertial_frame, lower_leg_right_frame)
#lower_leg_right_mass_center.vel(inertial_frame)
upper_leg_left_mass_center.v2pt_theory(knee_left, inertial_frame, upper_leg_left_frame)
#upper_leg_left_mass_center.vel(inertial_frame)
upper_leg_right_mass_center.v2pt_theory(knee_right, inertial_frame, upper_leg_right_frame)
upper_leg_right_mass_center.vel(inertial_frame)
```
$$l_{2} \dot{\theta}_{0}\mathbf{\hat{r_2}_x} + l_{3} \left(- \dot{\theta}_{0} - \dot{\theta}_{1}\right)\mathbf{\hat{r_3}_y} + \frac{l_{2}}{2} \left(- \dot{\theta}_{0} - \dot{\theta}_{1} - \dot{\theta}_{2}\right)\mathbf{\hat{r_4}_x}$$
## Masses, Inertia, Rigid Bodies
```python
lower_leg_mass, upper_leg_mass, hip_mass = symbols('m_L, m_U, m_H')
lower_leg_inertia, upper_leg_inertia, hip_inertia = symbols('I_Lz, I_Uz, I_Hz')
lower_leg_left_inertia_dyadic = inertia(lower_leg_left_frame, lower_leg_inertia, lower_leg_inertia, lower_leg_inertia)
lower_leg_left_central_inertia = (lower_leg_left_inertia_dyadic, lower_leg_left_mass_center)
lower_leg_left_inertia_dyadic.to_matrix(lower_leg_left_frame)
upper_leg_left_inertia_dyadic = inertia(upper_leg_left_frame, upper_leg_inertia, upper_leg_inertia, upper_leg_inertia)
upper_leg_left_central_inertia = (upper_leg_left_inertia_dyadic, upper_leg_left_mass_center)
upper_leg_left_inertia_dyadic.to_matrix(upper_leg_left_frame)
hip_inertia_dyadic = inertia(hip_frame, hip_inertia, hip_inertia, hip_inertia)
hip_central_inertia = (hip_inertia_dyadic, hip_mass_center)
hip_inertia_dyadic.to_matrix(hip_frame)
upper_leg_right_inertia_dyadic = inertia(upper_leg_right_frame, upper_leg_inertia, upper_leg_inertia, upper_leg_inertia)
upper_leg_right_central_inertia = (upper_leg_right_inertia_dyadic, upper_leg_right_mass_center)
upper_leg_right_inertia_dyadic.to_matrix(upper_leg_right_frame)
lower_leg_right_inertia_dyadic = inertia(lower_leg_right_frame, lower_leg_inertia, lower_leg_inertia, lower_leg_inertia)
lower_leg_right_central_inertia = (lower_leg_right_inertia_dyadic, lower_leg_right_mass_center)
lower_leg_right_inertia_dyadic.to_matrix(lower_leg_right_frame)
lower_leg_left = RigidBody('Lower Leg Left', lower_leg_left_mass_center, lower_leg_left_frame, lower_leg_mass, lower_leg_left_central_inertia)
upper_leg_left = RigidBody('Upper Leg Left', upper_leg_left_mass_center, upper_leg_left_frame, upper_leg_mass, upper_leg_left_central_inertia)
hip = RigidBody('Hip', hip_mass_center, hip_frame, hip_mass, hip_central_inertia)
upper_leg_right = RigidBody('Upper Leg Right', upper_leg_right_mass_center, upper_leg_right_frame, upper_leg_mass, upper_leg_right_central_inertia)
lower_leg_right = RigidBody('Lower Leg Right', lower_leg_right_mass_center, lower_leg_right_frame, lower_leg_mass, lower_leg_right_central_inertia)
# massless marker particles at the joint points (not used in the dynamics below)
particles = []
particles.append(Particle('ankle_left', ankle_left, 0))
particles.append(Particle('knee_left', knee_left, 0))
particles.append(Particle('hip_left', hip_left, 0))
particles.append(Particle('hip_center', hip_center, 0))
particles.append(Particle('hip_right', hip_right, 0))
particles.append(Particle('knee_right', knee_right, 0))
particles.append(Particle('ankle_right', ankle_right, 0))
particles
mass_centers = []
mass_centers.append(Particle('lower_leg_left_mass_center', lower_leg_left_mass_center, lower_leg_mass))
mass_centers.append(Particle('upper_leg_left_mass_center', upper_leg_left_mass_center, upper_leg_mass))
mass_centers.append(Particle('hip_mass_center', hip_mass_center, hip_mass))
mass_centers.append(Particle('lower_leg_right_mass_center', lower_leg_right_mass_center, lower_leg_mass))
mass_centers.append(Particle('upper_leg_right_mass_center', upper_leg_right_mass_center, upper_leg_mass))
mass_centers
```
    [lower_leg_left_mass_center,
     upper_leg_left_mass_center,
     hip_mass_center,
     lower_leg_right_mass_center,
     upper_leg_right_mass_center]
## Forces and Torques
```python
g = symbols('g')
```
```python
lower_leg_left_grav_force = (lower_leg_left_mass_center, -lower_leg_mass * g * inertial_frame.y)
upper_leg_left_grav_force = (upper_leg_left_mass_center, -upper_leg_mass * g * inertial_frame.y)
hip_grav_force = (hip_mass_center, -hip_mass * g * inertial_frame.y)
upper_leg_right_grav_force = (upper_leg_right_mass_center, -upper_leg_mass * g * inertial_frame.y)
lower_leg_right_grav_force = (lower_leg_right_mass_center, -lower_leg_mass * g * inertial_frame.y)
ankle_torque0, knee_torque0, hip_torque0, hip_torque1, knee_torque1, ankle_torque1 = dynamicsymbols('T_a0, T_k0, T_h0, T_h1, T_k1, T_a1')
# each joint torque acts with opposite signs on the two links it connects (action-reaction)
lower_leg_left_torque_vector = ankle_torque0 * inertial_frame.z - knee_torque0 * inertial_frame.z
upper_leg_left_torque_vector = knee_torque0 * inertial_frame.z - hip_torque0 * inertial_frame.z
hip_left_torque_vector = hip_torque0 * inertial_frame.z - hip_torque1 * inertial_frame.z
hip_right_torque_vector = hip_torque1 * inertial_frame.z - knee_torque1 * inertial_frame.z
upper_leg_right_torque_vector = knee_torque1 * inertial_frame.z - ankle_torque1 * inertial_frame.z
lower_leg_right_torque_vector = ankle_torque1 * inertial_frame.z
lower_leg_left_torque = (lower_leg_left_frame, lower_leg_left_torque_vector)
upper_leg_left_torque = (upper_leg_left_frame, upper_leg_left_torque_vector)
hip_left_torque = (hip_frame, hip_left_torque_vector)
hip_right_torque = (hip_frame, hip_right_torque_vector)
upper_leg_right_torque = (upper_leg_right_frame, upper_leg_right_torque_vector)
lower_leg_right_torque = (lower_leg_right_frame, lower_leg_right_torque_vector)
```
## Equations of Motion
```python
coordinates = [theta0, theta1, theta2, theta3, phi]
coordinates
```
$$\left [ \theta_{0}, \quad \theta_{1}, \quad \theta_{2}, \quad \theta_{3}, \quad \phi\right ]$$
```python
speeds = [omega0, omega1, omega2, omega3, psi]
speeds
```
$$\left [ \omega_{0}, \quad \omega_{1}, \quad \omega_{2}, \quad \omega_{3}, \quad \psi\right ]$$
```python
kinematical_differential_equations
```
$$\left [ \omega_{0} - \dot{\theta}_{0}, \quad \omega_{1} - \dot{\theta}_{1}, \quad \omega_{2} - \dot{\theta}_{2}, \quad \omega_{3} - \dot{\theta}_{3}, \quad \psi - \dot{\phi}\right ]$$
```python
kane = KanesMethod(inertial_frame, coordinates, speeds, kinematical_differential_equations)
```
```python
loads = [lower_leg_left_grav_force,
upper_leg_left_grav_force,
hip_grav_force,
upper_leg_right_grav_force,
lower_leg_right_grav_force,
lower_leg_left_torque,
upper_leg_left_torque,
hip_left_torque,
hip_right_torque,
upper_leg_right_torque,
lower_leg_right_torque]
loads
```
[(L_COMleft, - g*m_L*I.y),
(U_COMleft, - g*m_U*I.y),
(H_COMleft, - g*m_H*I.y),
(U_COMright, - g*m_U*I.y),
(L_COMright, - g*m_L*I.y),
(R_1, (T_a0 - T_k0)*I.z),
(R_2, (-T_h0 + T_k0)*I.z),
(R_3, (T_h0 - T_h1)*I.z),
(R_3, (T_h1 - T_k1)*I.z),
(R_4, (-T_a1 + T_k1)*I.z),
(R_5, T_a1*I.z)]
```python
bodies = [lower_leg_left, upper_leg_left, hip, upper_leg_right, lower_leg_right]
bodies
```
[Lower Leg Left, Upper Leg Left, Hip, Upper Leg Right, Lower Leg Right]
```python
# note: newer SymPy releases (>= 1.1) expect the argument order kane.kanes_equations(bodies, loads)
fr, frstar = kane.kanes_equations(loads, bodies)
```
```python
trigsimp(fr + frstar)
```
$$\left[\begin{matrix}- \frac{g l_{1}}{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - g l_{2} m_{H} \operatorname{sin}\left(\phi - \theta_{0}\right) - g l_{2} m_{L} \operatorname{sin}\left(\phi - \theta_{0}\right) - g l_{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - \frac{3 g}{2} l_{2} m_{U} \operatorname{sin}\left(\phi - \theta_{0}\right) - \frac{g l_{2}}{2} m_{U} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) + \frac{g l_{3}}{2} m_{H} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) + g l_{3} m_{L} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) + g l_{3} m_{U} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) - \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) - \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{sin}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) - \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) + \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) - l_{2}^{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) + l_{2}^{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) - \frac{l_{2}^{2} m_{U}}{2} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2}^{2} m_{U}}{2} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2} l_{3}}{2} m_{H} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{1}\right) - \frac{l_{2} l_{3}}{2} m_{H} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) + l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{1}\right) - l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) + l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - l_{2} l_{3} m_{L} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) + l_{2} l_{3} m_{U} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{1}\right) - \frac{l_{2} l_{3}}{2} m_{U} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) + \frac{l_{2} l_{3}}{2} m_{U} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - l_{2} l_{3} m_{U} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) - \left(I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} - 2 l_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right) - 2 l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right)\right)\right) \dot{\omega}_{3} - \left(I_{Lz} + I_{Uz} + \frac{l_{2} m_{U}}{4} \left(- 2 l_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2} - 2 l_{3} 
\operatorname{sin}\left(\theta_{2}\right)\right) + m_{L} \left(\frac{l_{1}^{2}}{4} - \frac{l_{1} l_{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - \frac{l_{1} l_{3}}{2} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) - l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right)\right)\right) \dot{\omega}_{2} - \left(I_{Hz} + I_{Lz} + I_{Uz} + \frac{l_{3} m_{H}}{4} \left(- 2 l_{2} \operatorname{sin}\left(\theta_{1}\right) + l_{3}\right) + m_{L} \left(\frac{l_{1}^{2}}{4} - \frac{l_{1} l_{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - l_{1} l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) - l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) + m_{U} \left(- \frac{l_{2}^{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2}^{2}}{4} - l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right)\right) \dot{\omega}_{1} - \left(I_{Hz} + I_{Lz} + 2 I_{Uz} + \frac{l_{2}^{2} m_{U}}{4} + m_{H} \left(l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) + \frac{l_{3}^{2}}{4}\right) + m_{L} \left(\frac{l_{1}^{2}}{4} - l_{1} l_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - l_{1} l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) - 2 l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + 2 l_{2}^{2} - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) + m_{U} \left(- l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + \frac{5 l_{2}^{2}}{4} - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right)\right) \dot{\omega}_{0} - T_{k0}\\- \frac{g l_{1}}{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - g l_{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - \frac{g l_{2}}{2} m_{U} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) + \frac{g l_{3}}{2} m_{H} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) + g l_{3} m_{L} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) + g l_{3} m_{U} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) - \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) - \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) + \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) + l_{2}^{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) 
+ \frac{l_{2}^{2} m_{U}}{2} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) - \frac{l_{2} l_{3}}{2} m_{H} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) - l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) + l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - l_{2} l_{3} m_{L} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) - \frac{l_{2} l_{3}}{2} m_{U} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) + \frac{l_{2} l_{3}}{2} m_{U} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - l_{2} l_{3} m_{U} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) - \left(I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right) - 2 l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right)\right)\right) \dot{\omega}_{3} - \left(I_{Lz} + I_{Uz} + \frac{l_{2} m_{U}}{4} \left(l_{2} - 2 l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) + m_{L} \left(\frac{l_{1}^{2}}{4} + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - \frac{l_{1} l_{3}}{2} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right)\right)\right) \dot{\omega}_{2} - \left(I_{Hz} + I_{Lz} + I_{Uz} + \frac{l_{3}^{2} m_{H}}{4} + m_{L} \left(\frac{l_{1}^{2}}{4} + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - l_{1} l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) + l_{2}^{2} - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) + m_{U} \left(\frac{l_{2}^{2}}{4} - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right)\right) \dot{\omega}_{1} - \left(I_{Hz} + I_{Lz} + I_{Uz} + \frac{l_{3} m_{H}}{4} \left(- 2 l_{2} \operatorname{sin}\left(\theta_{1}\right) + l_{3}\right) + m_{L} \left(\frac{l_{1}^{2}}{4} - \frac{l_{1} l_{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - l_{1} l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) - l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) + m_{U} \left(- \frac{l_{2}^{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2}^{2}}{4} - l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right)\right) \dot{\omega}_{0} - T_{h0}\\- \frac{g l_{1}}{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - g l_{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - \frac{g l_{2}}{2} m_{U} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) - \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) + l_{2}^{2} m_{L} 
\omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2}^{2} m_{U}}{2} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) - l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - \frac{l_{2} l_{3}}{2} m_{U} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - \left(I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right)\right)\right) \dot{\omega}_{3} - \left(I_{Lz} + I_{Uz} + \frac{l_{2}^{2} m_{U}}{4} + m_{L} \left(\frac{l_{1}^{2}}{4} + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) + l_{2}^{2}\right)\right) \dot{\omega}_{2} - \left(I_{Lz} + I_{Uz} + \frac{l_{2} m_{U}}{4} \left(l_{2} - 2 l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) + m_{L} \left(\frac{l_{1}^{2}}{4} + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - \frac{l_{1} l_{3}}{2} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right)\right)\right) \dot{\omega}_{1} - \left(I_{Lz} + I_{Uz} + \frac{l_{2} m_{U}}{4} \left(- 2 l_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2} - 2 l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) + m_{L} \left(\frac{l_{1}^{2}}{4} - \frac{l_{1} l_{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - \frac{l_{1} l_{3}}{2} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) - l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right)\right)\right) \dot{\omega}_{0} - T_{k1}\\- \frac{g l_{1}}{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) - \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) - \left(I_{Lz} + \frac{l_{1}^{2} m_{L}}{4}\right) \dot{\omega}_{3} - \left(I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right)\right)\right) \dot{\omega}_{2} - \left(I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right) - 2 l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right)\right)\right) \dot{\omega}_{1} - \left(I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} - 2 l_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right) - 2 l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right)\right)\right) \dot{\omega}_{0} - T_{a1}\\0\end{matrix}\right]$$
Keep in mind that our ultimate goal is to have the equations of motion in first-order form:
$$ \dot{\mathbf{x}} = \mathbf{g}(\mathbf{x}, t) $$
The equations of motion are linear in terms of the derivatives of the generalized speeds and the `KanesMethod` class automatically puts the equations in a more useful form for the next step of numerical simulation:
$$ \mathbf{M}(\mathbf{x}, t)\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, t) $$
Note that
$$ \mathbf{g} = \mathbf{M}^{-1}(\mathbf{x}, t) \mathbf{f}(\mathbf{x}, t) $$
and that $\mathbf{g}$ can be computed analytically but for non-toy problems, it is best to do this numerically. So we will simply generate the $\mathbf{M}$ and $\mathbf{f}$ matrices for later use.
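As a minimal sketch of that numerical step (assuming `M_num` and `f_num` are plain NumPy arrays obtained by substituting numbers into $\mathbf{M}$ and $\mathbf{f}$; the names are chosen here for illustration), the state derivative comes from a linear solve rather than an explicit inverse:
```python
import numpy as np

def state_derivative(M_num, f_num):
    # Solve M(x, t) * xdot = f(x, t) for xdot.
    # A linear solve is cheaper and better conditioned than forming
    # M^-1 explicitly and multiplying.
    return np.linalg.solve(M_num, f_num)
```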
The mass matrix, $\mathbf{M}$, can be accessed with the `mass_matrix` method (use `mass_matrix_full` to include the kinematical differential equations too). We can use `trigsimp` again to make this relatively compact:
```python
mass_matrix = trigsimp(kane.mass_matrix_full)
mass_matrix
```
$$\left[\begin{matrix}1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & I_{Hz} + I_{Lz} + 2 I_{Uz} + \frac{l_{2}^{2} m_{U}}{4} + m_{H} \left(l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) + \frac{l_{3}^{2}}{4}\right) + m_{L} \left(\frac{l_{1}^{2}}{4} - l_{1} l_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - l_{1} l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) - 2 l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + 2 l_{2}^{2} - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) + m_{U} \left(- l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + \frac{5 l_{2}^{2}}{4} - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) & I_{Hz} + I_{Lz} + I_{Uz} + \frac{l_{3} m_{H}}{4} \left(- 2 l_{2} \operatorname{sin}\left(\theta_{1}\right) + l_{3}\right) + m_{L} \left(\frac{l_{1}^{2}}{4} - \frac{l_{1} l_{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - l_{1} l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) - l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) + m_{U} \left(- \frac{l_{2}^{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2}^{2}}{4} - l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) & I_{Lz} + I_{Uz} + \frac{l_{2} m_{U}}{4} \left(- 2 l_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2} - 2 l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) + m_{L} \left(\frac{l_{1}^{2}}{4} - \frac{l_{1} l_{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - \frac{l_{1} l_{3}}{2} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) - l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) & I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} - 2 l_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right) - 2 l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right)\right) & 0\\0 & 0 & 0 & 0 & 0 & I_{Hz} + I_{Lz} + I_{Uz} + \frac{l_{3} m_{H}}{4} \left(- 2 l_{2} \operatorname{sin}\left(\theta_{1}\right) + l_{3}\right) + m_{L} \left(\frac{l_{1}^{2}}{4} - \frac{l_{1} l_{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - l_{1} l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) - l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) + m_{U} \left(- \frac{l_{2}^{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2}^{2}}{4} - l_{2} l_{3} \operatorname{sin}\left(\theta_{1}\right) - l_{2} l_{3} 
\operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) & I_{Hz} + I_{Lz} + I_{Uz} + \frac{l_{3}^{2} m_{H}}{4} + m_{L} \left(\frac{l_{1}^{2}}{4} + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - l_{1} l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) + l_{2}^{2} - 2 l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) + m_{U} \left(\frac{l_{2}^{2}}{4} - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right) + l_{3}^{2}\right) & I_{Lz} + I_{Uz} + \frac{l_{2} m_{U}}{4} \left(l_{2} - 2 l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) + m_{L} \left(\frac{l_{1}^{2}}{4} + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - \frac{l_{1} l_{3}}{2} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) & I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right) - 2 l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right)\right) & 0\\0 & 0 & 0 & 0 & 0 & I_{Lz} + I_{Uz} + \frac{l_{2} m_{U}}{4} \left(- 2 l_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2} - 2 l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) + m_{L} \left(\frac{l_{1}^{2}}{4} - \frac{l_{1} l_{2}}{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - \frac{l_{1} l_{3}}{2} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) - l_{2}^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) & I_{Lz} + I_{Uz} + \frac{l_{2} m_{U}}{4} \left(l_{2} - 2 l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) + m_{L} \left(\frac{l_{1}^{2}}{4} + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) - \frac{l_{1} l_{3}}{2} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right) + l_{2}^{2} - l_{2} l_{3} \operatorname{sin}\left(\theta_{2}\right)\right) & I_{Lz} + I_{Uz} + \frac{l_{2}^{2} m_{U}}{4} + m_{L} \left(\frac{l_{1}^{2}}{4} + l_{1} l_{2} \operatorname{cos}\left(\theta_{3}\right) + l_{2}^{2}\right) & I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right)\right) & 0\\0 & 0 & 0 & 0 & 0 & I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} - 2 l_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right) - 2 l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right)\right) & I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right) - 2 l_{3} \operatorname{sin}\left(\theta_{2} + \theta_{3}\right)\right) & I_{Lz} + \frac{l_{1} m_{L}}{4} \left(l_{1} + 2 l_{2} \operatorname{cos}\left(\theta_{3}\right)\right) & I_{Lz} + \frac{l_{1}^{2} m_{L}}{4} & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\end{matrix}\right]$$
The right-hand side, $\mathbf{f}$, is a vector function of all the non-inertial forces (gyroscopic, external, Coriolis, etc.):
```python
forcing_vector = trigsimp(kane.forcing_full)
forcing_vector
```
$$\left[\begin{matrix}\omega_{0}\\\omega_{1}\\\omega_{2}\\\omega_{3}\\\psi\\- \frac{g l_{1}}{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - g l_{2} m_{H} \operatorname{sin}\left(\phi - \theta_{0}\right) - g l_{2} m_{L} \operatorname{sin}\left(\phi - \theta_{0}\right) - g l_{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - \frac{3 g}{2} l_{2} m_{U} \operatorname{sin}\left(\phi - \theta_{0}\right) - \frac{g l_{2}}{2} m_{U} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) + \frac{g l_{3}}{2} m_{H} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) + g l_{3} m_{L} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) + g l_{3} m_{U} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) - \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) - \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{sin}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) - \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) + \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) - l_{2}^{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) + l_{2}^{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) - \frac{l_{2}^{2} m_{U}}{2} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2}^{2} m_{U}}{2} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2} l_{3}}{2} m_{H} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{1}\right) - \frac{l_{2} l_{3}}{2} m_{H} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) + l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{1}\right) - l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) + l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - l_{2} l_{3} m_{L} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) + l_{2} l_{3} m_{U} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{1}\right) - \frac{l_{2} l_{3}}{2} m_{U} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) + \frac{l_{2} l_{3}}{2} m_{U} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - l_{2} l_{3} m_{U} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) - T_{k0}\\- \frac{g l_{1}}{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - g l_{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - \frac{g l_{2}}{2} m_{U} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) + \frac{g l_{3}}{2} m_{H} \operatorname{cos}\left(- \phi + \theta_{0} 
+ \theta_{1}\right) + g l_{3} m_{L} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) + g l_{3} m_{U} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) - \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) - \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) + \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) + l_{2}^{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2}^{2} m_{U}}{2} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) - \frac{l_{2} l_{3}}{2} m_{H} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) - l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) + l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - l_{2} l_{3} m_{L} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) - \frac{l_{2} l_{3}}{2} m_{U} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) + \frac{l_{2} l_{3}}{2} m_{U} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - l_{2} l_{3} m_{U} \omega^{2}_{0} \operatorname{cos}\left(\theta_{1}\right) - T_{h0}\\- \frac{g l_{1}}{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - g l_{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - \frac{g l_{2}}{2} m_{U} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2} - \omega_{3}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) - \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) + l_{2}^{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) + \frac{l_{2}^{2} m_{U}}{2} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) - l_{2} l_{3} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - \frac{l_{2} l_{3}}{2} m_{U} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2}\right) - T_{k1}\\- \frac{g l_{1}}{2} m_{L} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - \frac{l_{1} l_{2}}{2} m_{L} \left(- \omega_{0} - \omega_{1} - \omega_{2}\right)^{2} \operatorname{sin}\left(\theta_{3}\right) + \frac{l_{1} l_{2}}{2} m_{L} \omega^{2}_{0} \operatorname{sin}\left(\theta_{1} + \theta_{2} + \theta_{3}\right) - \frac{l_{1} l_{3}}{2} m_{L} \left(- \omega_{0} - \omega_{1}\right)^{2} \operatorname{cos}\left(\theta_{2} + \theta_{3}\right) - T_{a1}\\0\end{matrix}\right]$$
# Simulation
```python
from scipy.integrate import odeint
from sympy.utilities.codegen import codegen
from pydy.codegen.ode_function_generators import generate_ode_function
from matplotlib.pyplot import plot, legend, xlabel, ylabel, rcParams
rcParams['figure.figsize'] = (14.0, 6.0)
specified = [ankle_torque0, knee_torque0, hip_torque0, hip_torque1, knee_torque1, ankle_torque1]
numerical_specified = zeros(6)
x0 = zeros(10)
x0
x0[0] = deg2rad(80)
x0[1] = -deg2rad(80)
x0[2] = -deg2rad(80)
x0[3] = deg2rad(80)
x0[4] = 0
x0
```
array([ 1.3962634, -1.3962634, -1.3962634, 1.3962634, 0. ,
0. , 0. , 0. , 0. , 0. ])
```python
#%% Jacobian for the right ankle, which we will try to leave where it is when
# moving the hip center
print('calculating jacobian for the right ankle')
F0 = ankle_right.pos_from(origin).express(inertial_frame).simplify().to_matrix(inertial_frame)
F0 = Matrix([F0[0], F0[1]])
F0
```
calculating jacobian for the right ankle
$$\left[\begin{matrix}- l_{1} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - l_{1} \operatorname{sin}\left(\phi\right) - l_{2} \operatorname{sin}\left(\phi - \theta_{0}\right) - l_{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) + l_{3} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right)\\- l_{1} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} \operatorname{cos}\left(\phi\right) + l_{2} \operatorname{cos}\left(\phi - \theta_{0}\right) - l_{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - l_{3} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right)\end{matrix}\right]$$
```python
J_ankleRight = F0.jacobian([theta0, theta1, theta2, theta3, phi])
J_ankleRight
```
$$\left[\begin{matrix}- l_{1} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) + l_{2} \operatorname{cos}\left(\phi - \theta_{0}\right) - l_{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - l_{3} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right) & - l_{1} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - l_{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - l_{3} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right) & - l_{1} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - l_{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) & - l_{1} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) & l_{1} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - l_{1} \operatorname{cos}\left(\phi\right) - l_{2} \operatorname{cos}\left(\phi - \theta_{0}\right) + l_{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) + l_{3} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right)\\l_{1} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) + l_{2} \operatorname{sin}\left(\phi - \theta_{0}\right) + l_{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - l_{3} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) & l_{1} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) + l_{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - l_{3} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) & l_{1} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) + l_{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) & l_{1} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) & - l_{1} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - l_{1} \operatorname{sin}\left(\phi\right) - l_{2} \operatorname{sin}\left(\phi - \theta_{0}\right) - l_{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) + l_{3} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right)\end{matrix}\right]$$
```python
F1 = hip_center.pos_from(origin).express(inertial_frame).simplify().to_matrix(inertial_frame)
F1 = Matrix([F1[0], F1[1]])
F1
```
$$\left[\begin{matrix}- l_{1} \operatorname{sin}\left(\phi\right) - l_{2} \operatorname{sin}\left(\phi - \theta_{0}\right) + \frac{l_{3}}{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right)\\l_{1} \operatorname{cos}\left(\phi\right) + l_{2} \operatorname{cos}\left(\phi - \theta_{0}\right) - \frac{l_{3}}{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right)\end{matrix}\right]$$
```python
J_hipCenter = F1.jacobian([theta0, theta1, theta2, theta3, phi])
J_hipCenter
```
$$\left[\begin{matrix}l_{2} \operatorname{cos}\left(\phi - \theta_{0}\right) - \frac{l_{3}}{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right) & - \frac{l_{3}}{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right) & 0 & 0 & - l_{1} \operatorname{cos}\left(\phi\right) - l_{2} \operatorname{cos}\left(\phi - \theta_{0}\right) + \frac{l_{3}}{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right)\\l_{2} \operatorname{sin}\left(\phi - \theta_{0}\right) - \frac{l_{3}}{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) & - \frac{l_{3}}{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right) & 0 & 0 & - l_{1} \operatorname{sin}\left(\phi\right) - l_{2} \operatorname{sin}\left(\phi - \theta_{0}\right) + \frac{l_{3}}{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right)\end{matrix}\right]$$
```python
#%% we stack the two Jacobians
J = J_ankleRight.col_join(J_hipCenter)
J.shape
```
$$\left ( 4, \quad 5\right )$$
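With 4 task-space rows (two planar end-effector points) and 5 joint-space columns, the Jacobian is wide, so the mechanism is redundant with a one-dimensional nullspace. That is why the Moore-Penrose pseudo-inverse is used below, and why the nullspace projector

$$\mathbf{N} = \mathbf{I}_{5} - \mathbf{J}^{+}\mathbf{J}$$

(commented out later in the integration function) can add secondary joint motion without disturbing the end-effector targets.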
```python
#%% let's try the pseudo-inverse with a couple of real values
values = {lower_leg_length: 0.4, upper_leg_length: 0.54, hip_length: 0.2, theta0: x0[0], theta1: x0[1], theta2: x0[2], theta3: x0[3], phi: x0[4]}
Jpinv = J.subs(values).evalf().pinv()
Jpinv
```
$$\left[\begin{matrix}-0.0459332479170267 & 0.219478074655015 & -2.3994104929455 & -2.31969900118024\\0.235092856483776 & -1.12331981399207 & -0.514831317577474 & 1.87254742697749\\-0.235092856483778 & -0.757099837648197 & 0.514831317577473 & 1.88829187630304\\-2.45406675208297 & 2.10175902875642 & 2.39941049294551 & -2.32277520564262\\-0.00872301122043817 & 0.0416802598264825 & -2.48089742314152 & -0.440525356532326\end{matrix}\right]$$
```python
#%% we stack the two end-effector points, which we will evaluate in the integration
F2 = F0.col_join(F1)
F2
```
$$\left[\begin{matrix}- l_{1} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) - l_{1} \operatorname{sin}\left(\phi\right) - l_{2} \operatorname{sin}\left(\phi - \theta_{0}\right) - l_{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) + l_{3} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right)\\- l_{1} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2} + \theta_{3}\right) + l_{1} \operatorname{cos}\left(\phi\right) + l_{2} \operatorname{cos}\left(\phi - \theta_{0}\right) - l_{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1} + \theta_{2}\right) - l_{3} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right)\\- l_{1} \operatorname{sin}\left(\phi\right) - l_{2} \operatorname{sin}\left(\phi - \theta_{0}\right) + \frac{l_{3}}{2} \operatorname{cos}\left(- \phi + \theta_{0} + \theta_{1}\right)\\l_{1} \operatorname{cos}\left(\phi\right) + l_{2} \operatorname{cos}\left(\phi - \theta_{0}\right) - \frac{l_{3}}{2} \operatorname{sin}\left(- \phi + \theta_{0} + \theta_{1}\right)\end{matrix}\right]$$
```python
#%% this defines how long you want to simulate; we simulate for one second at 30 fps
# simulation can take quite a while...
frames_per_sec = 30
final_time = 1
t = linspace(0.0, final_time, final_time * frames_per_sec)
```
```python
numerical_constants = array([0.42, # lower_leg_length [m]
0.54, # upper_leg_length [m]
0.2, # hip_length
1.0, # lower_leg_mass [kg]
1.5, # upper_leg_mass [kg]
2.0, # hip_mass [kg]
0.1, # lower_leg_inertia [kg*m^2]
0.2, # upper_leg_inertia [kg*m^2]
0.1, # hip_inertia [kg*m^2]
9.81], # acceleration due to gravity [m/s^2]
)
numerical_constants
```
array([ 0.42, 0.54, 0.2 , 1. , 1.5 , 2. , 0.1 , 0.2 , 0.1 , 9.81])
```python
constants = [lower_leg_length,
upper_leg_length,
hip_length,
lower_leg_mass,
upper_leg_mass,
hip_mass,
lower_leg_inertia,
upper_leg_inertia,
hip_inertia,
g]
constants
```
$$\left [ l_{1}, \quad l_{2}, \quad l_{3}, \quad m_{L}, \quad m_{U}, \quad m_{H}, \quad I_{Lz}, \quad I_{Uz}, \quad I_{Hz}, \quad g\right ]$$
```python
#%%
x0 = zeros(10)
x0
x0[0] = deg2rad(80)
x0[1] = -deg2rad(80)
x0[2] = -deg2rad(80)
x0[3] = deg2rad(80)
x0[4] = 0
x0
```
array([ 1.3962634, -1.3962634, -1.3962634, 1.3962634, 0. ,
0. , 0. , 0. , 0. , 0. ])
```python
#%% Use this for full kinematics simulation
#right_hand_side = generate_ode_function(forcing_vector, coordinates, speeds,
# constants, mass_matrix=mass_matrix,
# specifieds=specified, generator='cython')
#
#args = {'constants': numerical_constants,
# 'specified': numerical_specified}
#
#right_hand_side(x0, 0.0, numerical_specified, numerical_constants)
##%%
#y = odeint(right_hand_side, x0, t, args=(numerical_specified, numerical_constants))
#y
#%% only useful when you want to simulate full kinematics, use in right_hand_side equation
#from sympy import lambdify, solve
#M_func = lambdify(coordinates + speeds + constants + specified, mass_matrix) # Create a callable function to evaluate the mass matrix
#f_func = lambdify(coordinates + speeds + constants + specified, forcing_vector) # Create a callable function to evaluate the forcing vector
#%% inverse kinematics gains; these balance how strongly the respective
# targets are pursued
kp = np.eye(4)
kp[0,0] = 10
kp[1,1] = 10
kp[2,2] = 10
kp[3,3] = 10
kp
#%%
kpN = np.zeros((4,4),dtype=float)
kpN[1,1] = 10
kpN
#%% These are our target points
L = 1 # this defines how far apart the left and right ankles should be; y=0 so the
# foot does not lift off the ground
# the last two values are x and y coordinates for the hip center
x_des = Matrix([L,0,0.7,0.1])
x_des.shape
#%% This is the integration function (the inverse kinematics)
# I highly recommend you have a look at this textbook http://smpp.northwestern.edu/savedLiterature/Spong_Textbook.pdf
# especially chapter 5.10 Inverse Velocity and Acceleration
i = 0
def right_hand_side(x, t, args):
"""Returns the derivatives of the states.
Parameters
----------
x : ndarray, shape(2 * (n + 1))
The current state vector.
t : float
The current time.
args : ndarray
The constants.
Returns
-------
dx : ndarray, shape(2 * (n + 1))
The derivative of the state.
"""
global i
r = 0.0 # The input force is always zero
arguments = np.hstack((x, args)) # States, input, and parameters
values = {lower_leg_length: 0.4, upper_leg_length: 0.54, hip_length: 0.2,
theta0: x[0], theta1: x[1], theta2: x[2], theta3: x[3], phi: x[4],
omega0: 0, omega1: 0, omega2: 0, omega3: 0, psi: 0}
Jpinv = J.subs(values).evalf().pinv()
# use this for nullspace movements
    # N = np.eye(5) - Jpinv*J.subs(values).evalf()  # 5x5 nullspace projector
x_current = F2.subs(values).evalf() # use this for nullspace movements
    dq = np.array(Jpinv*(kp*( x_des - x_current ))).astype(float).T[0]  # + N*(kpN*(Matrix([0,-deg2rad(80),0,0,0]) - Matrix([x[0],x[1],x[2],x[3],x[4]])))
dq = np.hstack((dq,np.zeros(5,dtype=float)))
if i%10==0:
print(x_current)
i = i+1
return dq
# use this for full kinematic simulation
# dq = np.array(np.linalg.solve(M_func(*arguments), # Solving for the derivatives
# f_func(*arguments))).T[0]
# return dq
```
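The update computed inside `right_hand_side` is the classic resolved-rate inverse-kinematics law from the Spong textbook referenced above:

$$\dot{\mathbf{q}} = \mathbf{J}^{+}\,K_p\,\left(\mathbf{x}_{des} - \mathbf{x}\right)$$

Integrating $\dot{\mathbf{q}}$ with `odeint` therefore drives the stacked ankle/hip error toward zero at rates set by the diagonal gains in $K_p$.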
```python
#%% OK, let's simulate
print('integrating')
args = (np.hstack((numerical_constants,)),)
y = odeint(right_hand_side, x0, t, args)#,full_output = 1,mxstep=1000000
#y = np.hstack((y,np.zeros(y.shape)))
print(y)
print('done integrating')
```
integrating
Matrix([[1.26359237325318], [0], [0.631796186626592], [0.493770015940142]])
Matrix([[1.26175159570018], [6.37010906447567e-9], [0.632272477736748], [0.491020164371748]])
Matrix([[1.25528230436820], [-8.25783376997690e-9], [0.633946400654344], [0.481355924543045]])
Matrix([[1.24204513938050], [-4.32124976079695e-9], [0.637371479067839], [0.461581463804779]])
Matrix([[1.22016985269966], [-7.65121452371391e-9], [0.643031651745630], [0.428902850571606]])
Matrix([[1.19347469825923], [-4.40686484179387e-9], [0.649938948994721], [0.389024040678037]])
Matrix([[1.16441873966249], [-1.32369283719833e-9], [0.657457097886850], [0.345618520687556]])
Matrix([[1.13086159526863], [3.19119596654989e-9], [0.666139915607050], [0.295488862780702]])
Matrix([[1.09762332427629], [8.60721838030418e-9], [0.674740225069592], [0.245835558050327]])
Matrix([[1.06845152046108], [1.41858201431304e-8], [0.682288346687779], [0.202256984915244]])
Matrix([[1.04481151791494], [1.23655330506317e-8], [0.688405133486092], [0.166942137274830]])
Matrix([[1.02579277469106], [2.67137921592525e-9], [0.693326187989628], [0.138530789666744]])
Matrix([[1.01358973421518], [-1.82801794187948e-9], [0.696483696012692], [0.120301150837070]])
Matrix([[1.00627090789950], [-6.78787654556645e-10], [0.698377421556476], [0.109367853168821]])
Matrix([[1.00242339361369], [-1.00974011790766e-9], [0.699372955849698], [0.103620206981665]])
Matrix([[1.00085360468088], [-8.53671905734488e-10], [0.699779133163587], [0.101275165217524]])
Matrix([[1.00024778277731], [2.14709708346028e-10], [0.699935886932838], [0.100370152121145]])
Matrix([[1.00006920332609], [4.67516945296120e-11], [0.699982093856457], [0.100103379898465]])
Matrix([[1.00001656522374], [-1.12818834135942e-11], [0.699995713807771], [0.100024746112718]])
[[ 1.3962634 -1.3962634 -1.3962634 1.3962634 0. 0. 0.
0. 0. 0. ]
[ 1.58082794 -1.62016678 -1.57321165 1.84765989 -0.03033541 0. 0.
0. 0. 0. ]
[ 1.67079324 -1.76233809 -1.68716838 2.12830504 -0.088468 0. 0.
0. 0. 0. ]
[ 1.71531961 -1.85578284 -1.76218806 2.31695656 -0.14682271 0. 0.
0. 0. 0. ]
[ 1.73730152 -1.91741631 -1.81150631 2.44799534 -0.19618586 0. 0.
0. 0. 0. ]
[ 1.74806591 -1.9579694 -1.84371585 2.54052487 -0.23486065 0. 0.
0. 0. 0. ]
[ 1.75326896 -1.98459453 -1.86464227 2.60641834 -0.26395708 0. 0.
0. 0. 0. ]
[ 1.75573295 -2.00208225 -1.87822052 2.65352561 -0.28534559 0. 0.
0. 0. 0. ]
[ 1.75686168 -2.01361142 -1.8870601 2.68723902 -0.30085218 0. 0.
0. 0. 0. ]
[ 1.7573498 -2.02126402 -1.89285726 2.71135407 -0.31199988 0. 0.
0. 0. 0. ]
[ 1.75753831 -2.0263886 -1.89669765 2.7285809 -0.31997206 0. 0.
0. 0. 0. ]
[ 1.75759247 -2.02985352 -1.8992705 2.74086793 -0.32565456 0. 0.
0. 0. 0. ]
[ 1.75759082 -2.0322183 -1.90101329 2.74961855 -0.32969648 0. 0.
0. 0. 0. ]
[ 1.75756949 -2.03384574 -1.90220558 2.75584256 -0.33256757 0. 0.
0. 0. 0. ]
[ 1.75754423 -2.03497359 -1.90302811 2.76026484 -0.33460516 0. 0.
0. 0. 0. ]
[ 1.75752123 -2.0357596 -1.90359937 2.7634044 -0.33605036 0. 0.
0. 0. 0. ]
[ 1.75750238 -2.03630975 -1.90399819 2.76563194 -0.33707498 0. 0.
0. 0. 0. ]
[ 1.75748773 -2.03669608 -1.90427773 2.76721166 -0.33780123 0. 0.
0. 0. 0. ]
[ 1.75747671 -2.03696802 -1.90447424 2.7683316 -0.33831589 0. 0.
0. 0. 0. ]
[ 1.75746858 -2.03715978 -1.90461268 2.76912538 -0.33868056 0. 0.
0. 0. 0. ]
[ 1.75746266 -2.03729517 -1.90471035 2.76968789 -0.33893892 0. 0.
0. 0. 0. ]
[ 1.75745838 -2.03739086 -1.90477935 2.77008646 -0.33912196 0. 0.
0. 0. 0. ]
[ 1.75745531 -2.03745853 -1.90482812 2.77036885 -0.33925163 0. 0.
0. 0. 0. ]
[ 1.75745311 -2.03750641 -1.90486262 2.77056891 -0.33934349 0. 0.
0. 0. 0. ]
[ 1.75745155 -2.03754029 -1.90488704 2.77071064 -0.33940856 0. 0.
0. 0. 0. ]
[ 1.75745043 -2.03756428 -1.90490432 2.77081103 -0.33945465 0. 0.
0. 0. 0. ]
[ 1.75744964 -2.03758126 -1.90491655 2.77088215 -0.3394873 0. 0.
0. 0. 0. ]
[ 1.75744908 -2.03759329 -1.90492521 2.77093253 -0.33951043 0. 0.
0. 0. 0. ]
[ 1.75744868 -2.03760181 -1.90493135 2.77096822 -0.33952681 0. 0.
0. 0. 0. ]
[ 1.7574484 -2.03760784 -1.90493569 2.77099349 -0.33953841 0. 0.
0. 0. 0. ]]
done integrating
```python
#%% Plot
# here we plot a little
print("Plotting")
plot(t, rad2deg(y[:, :5]))
xlabel('Time [s]')
ylabel('Angle [deg]')
legend(["${}$".format(vlatex(c)) for c in coordinates])
```
Plotting
<matplotlib.legend.Legend at 0x7f46cc386790>
# Visualization
```python
from pydy.viz.shapes import Cylinder, Sphere
from pydy.viz.scene import Scene
from pydy.viz.visualization_frame import VisualizationFrame
# stack the lengths and use the bodies from kinematics
lengths = [lower_leg_length, upper_leg_length, hip_length, hip_length, hip_length, upper_leg_length, lower_leg_length]
bodies = [lower_leg_left, upper_leg_left, hip, hip, hip, upper_leg_right, lower_leg_right]
viz_frames = []
colors = ['yellow','green','red','red','red','green','blue']
for i, (body, particle, mass_center) in enumerate(zip(bodies, particles, mass_centers)):
# body_shape = Cylinder(name='cylinder{}'.format(i),
# radius=0.05,
# length=lengths[i],
# color='red')
#
# viz_frames.append(VisualizationFrame('link_frame{}'.format(i), body,
# body_shape))
particle_shape = Sphere(name='sphere{}'.format(i),
radius=0.06,
color=colors[i])
viz_frames.append(VisualizationFrame('particle_frame{}'.format(i),
body.frame,
particle,
particle_shape))
mass_center_shape = Sphere(name='sphere{}'.format(i),
radius=0.02,
color='black')
viz_frames.append(VisualizationFrame('mass_center_frame{}'.format(i),
body.frame,
mass_center,
mass_center_shape))
target_shape = Sphere(name='sphere{}'.format(i),
radius=0.02,
color='green')
target_right_leg = Point('target_right_leg')
target_right_leg.set_pos(origin, (x_des[2] * inertial_frame.x)+(x_des[3] * inertial_frame.y))
viz_frames.append(VisualizationFrame('target_frame_right',
inertial_frame,
target_right_leg,
target_shape))
## Now the visualization frames can be passed in to create a scene.
scene = Scene(inertial_frame, origin, *viz_frames)
# Provide the data to compute the trajectories of the visualization frames.
scene.constants = dict(zip(constants, numerical_constants))
scene.states_symbols = coordinates+speeds
scene.states_trajectories = y
scene.display()
```
/home/letrend/workspace/roboy-ros-control/python/pydy-resources
('Serving HTTP on', '127.0.0.1', 'port', 8001, '...')
To view visualization, open:
http://localhost:8001/index.html?load=2017-06-28_02-51-33_scene_desc.json
Press Ctrl+C to stop server...
# Export to C header
```python
t = symbols('t')
a0 = ankle_left.pos_from(origin).express(inertial_frame).simplify().to_matrix(inertial_frame)
k0 = knee_left.pos_from(origin).express(inertial_frame).simplify().to_matrix(inertial_frame)
hl = hip_left.pos_from(origin).express(inertial_frame).simplify().to_matrix(inertial_frame)
hc = hip_center.pos_from(origin).express(inertial_frame).simplify().to_matrix(inertial_frame)
hr = hip_right.pos_from(origin).express(inertial_frame).simplify().to_matrix(inertial_frame)
k1 = knee_right.pos_from(origin).express(inertial_frame).simplify().to_matrix(inertial_frame)
a1 = ankle_right.pos_from(origin).express(inertial_frame).simplify().to_matrix(inertial_frame)
[(c_name, c_code), (h_name, c_header)] = codegen(
[("Jacobian",J),
("ankle_right_hip_center",F2),
("ankle_left", a0),
("knee_left", k0),
("hip_left", hl),
("hip_center", hc),
("hip_right", hr),
("knee_right", k1),
("ankle_right",a1)
] ,
"C", "PaBiRoboy_DanceControl", project='PaBiRoboy_DanceControl',global_vars=(t,lower_leg_length, upper_leg_length, hip_length, theta0,theta1,theta2,theta3),
header=True, empty=False)
print(c_code)
```
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /index.html?load=2017-06-28_02-51-33_scene_desc.json HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /css/bootstrap.min.css HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /css/slider.css HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /css/main.css HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /css/codemirror/codemirror.css HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /css/codemirror/blackboard.css HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/external/jquery/jquery.min.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/external/jquery/jquery-ui.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/external/bootstrap/bootstrap.min.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/external/codemirror/codemirror.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/external/codemirror/javascript-mode.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/external/three/three.min.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/external/three/TrackballControls.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/external/utils/bootstrap-slider.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/external/utils/modernizr-2.0.6.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/external/utils/prototype.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/dyviz/dv.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/dyviz/scene.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/dyviz/parser.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/dyviz/param_editor.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/dyviz/materials.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:40] "GET /js/dyviz/main.js HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:41] code 404, message File not found
127.0.0.1 - - [28/Jun/2017 02:51:41] "GET /fonts/glyphicons-halflings-regular.woff2 HTTP/1.1" 404 -
127.0.0.1 - - [28/Jun/2017 02:51:41] "GET /2017-06-28_02-51-33_scene_desc.json HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:41] "GET /fonts/glyphicons-halflings-regular.woff HTTP/1.1" 200 -
127.0.0.1 - - [28/Jun/2017 02:51:41] "GET /2017-06-28_02-51-33_simulation_data.json HTTP/1.1" 200 -
/******************************************************************************
* Code generated with sympy 1.0 *
* *
* See http://www.sympy.org/ for more information. *
* *
* This file is part of 'PaBiRoboy_DanceControl' *
******************************************************************************/
#include "PaBiRoboy_DanceControl.h"
#include <math.h>
void Jacobian(double *out_7040956801295068946) {
out_7040956801295068946[0] = -l1*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) + l2*cos(phi(t) - theta0(t)) - l2*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t)) - l3*sin(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[1] = -l1*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) - l2*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t)) - l3*sin(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[2] = -l1*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) - l2*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t));
out_7040956801295068946[3] = -l1*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t));
out_7040956801295068946[4] = l1*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) - l1*cos(phi(t)) - l2*cos(phi(t) - theta0(t)) + l2*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t)) + l3*sin(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[5] = l1*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) + l2*sin(phi(t) - theta0(t)) + l2*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t)) - l3*cos(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[6] = l1*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) + l2*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t)) - l3*cos(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[7] = l1*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) + l2*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t));
out_7040956801295068946[8] = l1*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t));
out_7040956801295068946[9] = -l1*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) - l1*sin(phi(t)) - l2*sin(phi(t) - theta0(t)) - l2*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t)) + l3*cos(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[10] = l2*cos(phi(t) - theta0(t)) - 1.0L/2.0L*l3*sin(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[11] = -1.0L/2.0L*l3*sin(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[12] = 0;
out_7040956801295068946[13] = 0;
out_7040956801295068946[14] = -l1*cos(phi(t)) - l2*cos(phi(t) - theta0(t)) + (1.0L/2.0L)*l3*sin(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[15] = l2*sin(phi(t) - theta0(t)) - 1.0L/2.0L*l3*cos(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[16] = -1.0L/2.0L*l3*cos(-phi(t) + theta0(t) + theta1(t));
out_7040956801295068946[17] = 0;
out_7040956801295068946[18] = 0;
out_7040956801295068946[19] = -l1*sin(phi(t)) - l2*sin(phi(t) - theta0(t)) + (1.0L/2.0L)*l3*cos(-phi(t) + theta0(t) + theta1(t));
}
void ankle_right_hip_center(double *out_916151544320618253) {
out_916151544320618253[0] = -l1*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) - l1*sin(phi(t)) - l2*sin(phi(t) - theta0(t)) - l2*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t)) + l3*cos(-phi(t) + theta0(t) + theta1(t));
out_916151544320618253[1] = -l1*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) + l1*cos(phi(t)) + l2*cos(phi(t) - theta0(t)) - l2*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t)) - l3*sin(-phi(t) + theta0(t) + theta1(t));
out_916151544320618253[2] = -l1*sin(phi(t)) - l2*sin(phi(t) - theta0(t)) + (1.0L/2.0L)*l3*cos(-phi(t) + theta0(t) + theta1(t));
out_916151544320618253[3] = l1*cos(phi(t)) + l2*cos(phi(t) - theta0(t)) - 1.0L/2.0L*l3*sin(-phi(t) + theta0(t) + theta1(t));
}
void ankle_left(double *out_8826841557315653786) {
out_8826841557315653786[0] = 0;
out_8826841557315653786[1] = 0;
out_8826841557315653786[2] = 0;
}
void knee_left(double *out_2247889337447503075) {
out_2247889337447503075[0] = -l1*sin(phi(t));
out_2247889337447503075[1] = l1*cos(phi(t));
out_2247889337447503075[2] = 0;
}
void hip_left(double *out_9064110474024648940) {
out_9064110474024648940[0] = -l1*sin(phi(t)) - l2*sin(phi(t) - theta0(t));
out_9064110474024648940[1] = l1*cos(phi(t)) + l2*cos(phi(t) - theta0(t));
out_9064110474024648940[2] = 0;
}
void hip_center(double *out_659177576507600807) {
out_659177576507600807[0] = -l1*sin(phi(t)) - l2*sin(phi(t) - theta0(t)) + (1.0L/2.0L)*l3*cos(-phi(t) + theta0(t) + theta1(t));
out_659177576507600807[1] = l1*cos(phi(t)) + l2*cos(phi(t) - theta0(t)) - 1.0L/2.0L*l3*sin(-phi(t) + theta0(t) + theta1(t));
out_659177576507600807[2] = 0;
}
void hip_right(double *out_8420808845996663653) {
out_8420808845996663653[0] = -l1*sin(phi(t)) - l2*sin(phi(t) - theta0(t)) + l3*cos(-phi(t) + theta0(t) + theta1(t));
out_8420808845996663653[1] = l1*cos(phi(t)) + l2*cos(phi(t) - theta0(t)) - l3*sin(-phi(t) + theta0(t) + theta1(t));
out_8420808845996663653[2] = 0;
}
void knee_right(double *out_2877649314090428462) {
out_2877649314090428462[0] = -l1*sin(phi(t)) - l2*sin(phi(t) - theta0(t)) - l2*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t)) + l3*cos(-phi(t) + theta0(t) + theta1(t));
out_2877649314090428462[1] = l1*cos(phi(t)) + l2*cos(phi(t) - theta0(t)) - l2*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t)) - l3*sin(-phi(t) + theta0(t) + theta1(t));
out_2877649314090428462[2] = 0;
}
void ankle_right(double *out_8643290169570541467) {
out_8643290169570541467[0] = -l1*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) - l1*sin(phi(t)) - l2*sin(phi(t) - theta0(t)) - l2*sin(-phi(t) + theta0(t) + theta1(t) + theta2(t)) + l3*cos(-phi(t) + theta0(t) + theta1(t));
out_8643290169570541467[1] = -l1*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t) + theta3(t)) + l1*cos(phi(t)) + l2*cos(phi(t) - theta0(t)) - l2*cos(-phi(t) + theta0(t) + theta1(t) + theta2(t)) - l3*sin(-phi(t) + theta0(t) + theta1(t));
out_8643290169570541467[2] = 0;
}
```python
```
<a href="https://colab.research.google.com/github/robfalck/dymos_tutorial/blob/main/01_dymos_simple_driver_boundary_value_problem.ipynb" target="_parent"></a>
# Dymos: Using an Optimizer to Solve a Simple Boundary Value Problem
In the previous notebook, we demonstrated
- how to install Dymos
- how to define a simple ODE system
- how to use that system to propagate a simple trajectory
Using the `scipy.integrate.solve_ivp` functionality from within Dymos will provide a trajectory assuming we know the initial state, control profile, and time duration of that trajectory.
In our next use case we want to solve a simple boundary value problem:
_How far will the cannonball fly?_
In order to determine this, we need to allow the duration of the flight to vary such that the final height of the cannonball is zero. There are two general approaches to this.
1. Use an optimizer and pose the problem as an optimal control problem.
2. Use a nonlinear solver and pose the problem with residuals to be driven to zero.
# Posing the cannonball boundary value problem as an optimal control problem.
The optimal control problem can be stated as:
\begin{align}
\mathrm{Minimize}\;J &= t_f \\
\mathrm{subject\;to\!:} \\
x(t_0) &= 0 \;\mathrm{m} \\
y(t_0) &= 0 \;\mathrm{m}\\
vx(t_0) &= 100 \;\mathrm{m/s} \\
vy(t_0) &= 100 \;\mathrm{m/s} \\
y(t_f) &= 0 \;\mathrm{m}
\end{align}
Traditionally, the collocation techniques in Dymos have used an optimizer to satisfy the _defect constraints_ ($\Delta$).
These defects are the difference between the slope of each state polynomial representation in each segment and the rate of change as given by the ODE.
In Dymos, these defects are measured at a certain subset of our nodes called the _collocation nodes_. For the Radau pseudospectral transcription, each 3rd-order segment has 4 state input nodes, and 3 collocation nodes.
Dymos handles this underlying transcription from an optimal control problem into a nonlinear programming (NLP) problem.
The equivalent NLP problem for our optimal control problem above is:
\begin{align}
\mathrm{Minimize}\;J &= t[-1] \\
\mathrm{subject\;to\!:} \\
x_{lb} &\le x_i \le x_{ub} \\
y_{lb} &\le y_i \le y_{ub} \\
vx_{lb} &\le vx_i \le vx_{ub} \\
vy_{lb} &\le vy_i \le vy_{ub} \\
\Delta x_j &= 0 \\
\Delta y_j &= 0 \\
\Delta vx_j &= 0 \\
\Delta vy_j &= 0 \\
vy[-1] &= 0 \\
i &= 1\;...n-1 \\
j &= 0\;...n-2 \\
\end{align}
In the case of our single 3rd-order Radau segment, we have 4 nodes (n=4).
Since each initial state is fixed, we have 12 free state values constrained by 12 defect equations; adding the phase duration as a design variable and the final-altitude boundary constraint keeps the system square. There is, at most, a single solution to this problem. This makes sense. Since there are no control variables or changes in the initial state allowed, there should be only one possible physical path for the cannonball. Most optimizers will be perfectly capable of solving this problem, despite the fact that it only has a single feasible solution.
# Solving the optimal control problem
First, let's bring in our definition of the ODE and the other general setup that we used in the previous notebook.
```
!pip install dymos
```
Requirement already satisfied: dymos in /usr/local/lib/python3.6/dist-packages (0.17.0)
Requirement already satisfied: openmdao>=3.3.0 in /usr/local/lib/python3.6/dist-packages (from dymos) (3.5.0)
Requirement already satisfied: numpy>=1.14.1 in /usr/local/lib/python3.6/dist-packages (from dymos) (1.18.5)
Requirement already satisfied: scipy>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from dymos) (1.4.1)
Requirement already satisfied: pyparsing in /usr/local/lib/python3.6/dist-packages (from openmdao>=3.3.0->dymos) (2.4.7)
Requirement already satisfied: pyDOE2 in /usr/local/lib/python3.6/dist-packages (from openmdao>=3.3.0->dymos) (1.3.0)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from openmdao>=3.3.0->dymos) (2.23.0)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.6/dist-packages (from openmdao>=3.3.0->dymos) (2.5)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->openmdao>=3.3.0->dymos) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->openmdao>=3.3.0->dymos) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->openmdao>=3.3.0->dymos) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->openmdao>=3.3.0->dymos) (2020.12.5)
Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.0->openmdao>=3.3.0->dymos) (4.4.2)
```
import numpy as np
import openmdao.api as om
import dymos as dm
```
```
class ProjectileODE(om.ExplicitComponent):
def initialize(self):
self.options.declare('num_nodes', types=int,
desc='the number of points at which this ODE is simultaneously evaluated')
def setup(self):
nn = self.options['num_nodes']
self.add_input('vx', shape=(nn,), units='m/s')
self.add_input('vy', shape=(nn,), units='m/s')
self.add_output('x_dot', shape=(nn,), units='m/s',
tags=['state_rate_source:x', 'state_units:m'])
self.add_output('y_dot', shape=(nn,), units='m/s',
tags=['state_rate_source:y', 'state_units:m'])
self.add_output('vx_dot', shape=(nn,), units='m/s**2',
tags=['state_rate_source:vx', 'state_units:m/s'])
self.add_output('vy_dot', shape=(nn,), units='m/s**2',
tags=['state_rate_source:vy', 'state_units:m/s'])
self.declare_partials(of='*', wrt='*', method='fd')
def compute(self, inputs, outputs):
outputs['x_dot'][:] = inputs['vx']
outputs['y_dot'][:] = inputs['vy']
outputs['vx_dot'][:] = 0.0
outputs['vy_dot'][:] = -9.81
```
```
prob = om.Problem()
traj = prob.model.add_subsystem('traj', dm.Trajectory())
phase = traj.add_phase('phase', dm.Phase(ode_class=ProjectileODE, transcription=dm.Radau(num_segments=1, order=3)))
phase.set_time_options(fix_initial=True, duration_bounds=(1, 1000), units='s')
phase.set_state_options('x', fix_initial=True)
phase.set_state_options('y', fix_initial=True)
phase.set_state_options('vx', fix_initial=True)
phase.set_state_options('vy', fix_initial=True)
```
# Imposing state boundary constraints
There are two ways in which we can impose boundary constraints on the state. The first is to use the arguments `fix_initial` and `fix_final`.
Using `fix_final` removes the final value of the state as a design variable. That is, our final value of `y` will always be satisfied and the optimizer will work to find the rest of the state history.
The other option is to use a nonlinear boundary constraint. This is a constraint applied after the fact: the optimizer is free to suggest a final value of `y` that violates the constraint, but will then work to satisfy it as it iterates. This approach gives the optimizer a bit more freedom to explore the design space and might therefore be more robust. Using `fix_initial` and `fix_final` also imposes limitations when using shooting methods in Dymos, which we'll discuss later.
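For reference, the first approach would look like the following sketch (not used here):

```
# Alternative: remove the final value of y from the design variables
# entirely, so the terminal condition is satisfied by construction.
phase.set_state_options('y', fix_initial=True, fix_final=True)
```

Here we take the second approach and constrain `y` at the end of the phase: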
```
phase.add_boundary_constraint('y', loc='final', equals=0)
```
# Adding an objective
Using an optimization driver **requires** that an objective be specified.
Again, in this case the objective is somewhat arbitrary; there should be only a single valid solution that satisfies all of our constraints.
To satisfy the driver, we'll set the objective to be the final time.
Note that the Dymos `Phase` object has an `add_objective` method that's similar to the standard OpenMDAO `add_objective` method, but also allows us to specify the _location_ in time where the objective should be evaluated (`'initial'` or `'final'`).
The variable `'time'` is recognized by Dymos as one of the special variables in the phase (along with states, controls, and parameters). If we wanted to, we could also make any output of the ODE an objective by specifying the ODE-relative path to that variable.
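For instance (purely illustrative, not used in this problem), we could maximize the range instead by referencing the state `x`:

```
# Hypothetical alternative: maximize final downrange distance
# (scaler=-1 turns the minimization into a maximization).
phase.add_objective('x', loc='final', scaler=-1)
```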
```
phase.add_objective('time', loc='final', ref=1.0)
```
# Specifying a driver
Since we're performing optimization now, we'll need a driver.
The optimizers in `scipy.optimize` will work in this case. To use them, we use OpenMDAO's `ScipyOptimizeDriver`.
```
prob.driver = om.ScipyOptimizeDriver()
```
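By default this uses SLSQP from `scipy.optimize`. If needed, the driver exposes a few common options; a sketch:

```
# Optional tuning of the underlying scipy optimizer.
prob.driver.options['optimizer'] = 'SLSQP'
prob.driver.options['maxiter'] = 200
prob.driver.options['tol'] = 1e-8
```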
Next we call OpenMDAO's `Problem.setup` method. This works out the various connections between the different systems that Dymos uses to solve this problem and allocates memory. You can think of this step a bit like compiling our model before we run it.
```
prob.setup()
```
<openmdao.core.problem.Problem at 0x7f3e99f6e320>
As before, we'll specify an initial guess for the initial time and duration of the phase, along with linear guesses for the state time histories.
```
prob.set_val('traj.phase.t_initial', 0.0, units='s')
prob.set_val('traj.phase.t_duration', 15.0, units='s')
prob.set_val('traj.phase.states:x', phase.interpolate(ys=[0, 100], nodes='state_input'), units='m')
prob.set_val('traj.phase.states:y', phase.interpolate(ys=[0, 0], nodes='state_input'), units='m')
prob.set_val('traj.phase.states:vx', phase.interpolate(ys=[100, 100], nodes='state_input'), units='m/s')
prob.set_val('traj.phase.states:vy', phase.interpolate(ys=[100, -100], nodes='state_input'), units='m/s')
```
# Solving the Problem
To solve this problem, we need to allow the driver to iterate.
The native OpenMDAO option to do so is `Problem.run_driver`.
Dymos provides its own function, `dymos.run_problem`, which is basically a wrapper around the OpenMDAO method with some additional conveniences built in.
Though not necessary yet, we'll use that method just to get in the habit of doing so.
```
dm.run_problem(prob, run_driver=True, simulate=True, make_plots=True)
```
Model viewer data has already has already been recorded for Driver.
Optimization terminated successfully. (Exit mode 0)
Current function value: 20.387359836901105
Iterations: 2
Function evaluations: 2
Gradient evaluations: 2
Optimization Complete
-----------------------------------
Simulating trajectory traj
/usr/local/lib/python3.6/dist-packages/openmdao/recorders/recording_manager.py:266: UserWarning:The model is being run again, if the options or scaling of any components has changed then only their new values will be recorded.
Done simulating trajectory traj
```
# TODO: Make these more automatic (and use an interactive plot library like bokeh)
from IPython.display import display
from ipywidgets import widgets, GridBox
items = [widgets.Image(value=open(f'plots/states_{state}.png', 'rb').read()) for state in ['x', 'y', 'vx', 'vy']] + \
[widgets.Image(value=open(f'plots/state_rates_{state}.png', 'rb').read()) for state in ['x', 'y', 'vx', 'vy']]
widgets.GridBox(items, layout=widgets.Layout(grid_template_columns="repeat(2, 500px)"))
```
GridBox(children=(Image(value=b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x01\xb0\x00\x00\x01 \x08\x06\x00\x…
# The Solution
The plots above show that the implicit solution and the simulated trajectory are now in agreement (the simulated trajectory is a reasonably accurate interpolation of the solution).
To extract the solution, we can pull values for the special variables in the phase from the _timeseries_ output.
```
t = prob.get_val('traj.phase.timeseries.time')
x = prob.get_val('traj.phase.timeseries.states:x')
y = prob.get_val('traj.phase.timeseries.states:y')
print(f'The time of flight is {t[-1, 0]} sec')
print(f'The range flown is {x[-1, 0]} m')
print(f'The final altitude of the cannonball is {y[-1, 0]} m')
```
The time of flight is 20.387359836901105 sec
The range flown is 2038.7359836901096 m
The final altitude of the cannonball is -5.684341886080802e-14 m
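As a sanity check, this projectile problem has a closed-form solution. With $v_{y0} = 100\;\mathrm{m/s}$ and $g = 9.81\;\mathrm{m/s^2}$,

$$t_f = \frac{2\,v_{y0}}{g} = \frac{200}{9.81} \approx 20.387\;\mathrm{s}, \qquad x(t_f) = v_{x0}\,t_f \approx 2038.7\;\mathrm{m},$$

which matches the optimizer's solution to the displayed precision.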
# Modeling Stock Movement
Brian Bahmanyar
***
```python
import numpy as np
import pandas as pd
import scipy.optimize as opt
import seaborn as sns
import sys
sys.path.append('./src/')
from plots import *
```
```python
%matplotlib inline
```
```python
tech = pd.read_csv('data/tech_bundle.csv', index_col=0)
tech.index = pd.to_datetime(tech.index)
```
Recall that neither Facebook's nor Apple's stock had an obvious autocorrelation structure in its adjusted daily close prices. Thus we conclude that these series follow a random walk.
It can be shown that the following holds: $\frac{y_t}{y_{t-1}} \sim LogN(\mu, \sigma^2)$, where $y_t$ represents the adjusted close price of a stock.
Notice below that when we plot $\frac{y_t}{y_{t-1}}$ (the daily return ratio) for Facebook's historical stock price, the distribution we observe can be approximated by a log-normal distribution.
> We prefer a log-normal to a normal distribution because the daily return ratio can never be negative
```python
def get_daily_return_ratio(series):
"""
    Args:   series (ndarray)----the time series of prices
    Return: (ndarray) the lagged return ratios y(t)/y(t-1)
"""
return (series[1:]/series[:-1])
```
```python
assert(tech.FB.count()-1 == len(get_daily_return_ratio(tech.FB.values)))
assert(tech.FB[1]/tech.FB[0] == get_daily_return_ratio(tech.FB.values)[0])
```
```python
plt.figure(figsize=(10,3));
plt.title("Facebook's Daily Return Ratio");
plt.xlabel('Return Ratio');
plt.ylabel('Count');
sns.distplot(get_daily_return_ratio(tech['FB'].values), kde=False);
```
We will use Maximum Likelihood Estimation to estimate the parameters of the distribution.
```python
def neg_log_llh(theta, data):
"""
    Args:   theta (list)-----the params [mu, sigma**2]
data (ndarray)---data points to be fit by log normal distribution
Return: (double) negative log-likelihood for the log normal.
"""
mu, sigma = theta[0], np.sqrt(theta[1])
neg_log_llhs = np.log(data*sigma*np.sqrt(2*np.pi)) + ((np.log(data)-mu)**2/(2*(sigma**2)))
return neg_log_llhs.sum()
```
```python
def mle_log_norm(data, init_theta=[1,1]):
"""
    Args:   data (ndarray)------data points to be fit by a log-normal distribution
            init_theta (list)---initial guess for the params [mu, sigma**2]
    Return: (list) fitted mu, sigma**2 for the log normal
"""
fit = opt.minimize(neg_log_llh, init_theta, data, method='Nelder-Mead')
return fit.x
```
```python
fit = mle_log_norm(get_daily_return_ratio(tech['FB'].values))
fit
```
array([ 0.00173078, 0.00054006])
This tells us for Facebook: $\frac{y_t}{y_{t-1}} \sim LogN(0.0017, 0.00054)$
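As a quick sanity check (an addition, not in the original analysis): for a log-normal sample the MLE also has a closed form, with $\hat{\mu}$ the sample mean and $\hat{\sigma}^2$ the (biased) sample variance of the log returns, so it should agree closely with the numerical fit above.

```python
# Closed-form log-normal MLE: mean and (biased) variance of the log return ratios
log_r = np.log(get_daily_return_ratio(tech['FB'].dropna().values))
print(log_r.mean(), log_r.var())   # should closely match `fit` above
```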
Let's visualize what this theoretical distribution looks like:
```python
plt.figure(figsize=(10,3));
plt.title(r'$LogN(0.0017, 0.00054)$');
plt.xlabel('Return Ratio');
plt.ylabel('Count');
sns.distplot(np.random.lognormal(fit[0], np.sqrt(fit[1]), size=1000000), hist=False);
```
Now that we are convinced the daily return ratios follow a log-normal distribution, we can exploit this to forecast future prices with a log-normal random walk.
Quite simply, if $\frac{y_t}{y_{t-1}} \sim LogN(0.0017, 0.00054)$, then $y_{t+1} = y_{t} \cdot Z$ with $Z \sim LogN(0.0017, 0.00054)$.
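For intuition, here is a minimal sketch of a single one-step-ahead draw under the fitted parameters (an illustration, not part of the original code):

```python
# One-step-ahead sample: tomorrow's price = today's price * a log-normal ratio
y_t = tech['FB'].dropna().values[-1]
y_next = y_t * np.random.lognormal(fit[0], np.sqrt(fit[1]))
print(y_t, y_next)
```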
### Modeling Facebook's Daily Adjusted Close Prices
```python
plot_stocks(tech.index, [tech['FB']], ['FB'], label_annually=False)
```
Let's simulate this random walk and see what the movement looks like.
```python
def simulate_random_walk(series, window_size=10, ahead=50):
"""
    Args:   series (ndarray)----time series to use
            window_size (int)---number of trading weeks used for the log-normal estimation
            ahead (int)---------number of days to forecast ahead
    Return (ndarray): simulated future prices
"""
    forecasts = np.zeros(ahead)
    window = series[-((window_size*5)+1):].values
    for step in range(ahead):
        mu, sigma2 = mle_log_norm( get_daily_return_ratio(window) )
        forecast = window[-1]*np.random.lognormal(mu, np.sqrt(sigma2), 1)
        forecasts[step] = forecast
        window = np.roll(window, -1)
        window[-1] = forecast
    return forecasts
```
```python
def plot_simulated_forcast(series, window, ahead, train_on=None, n=10):
"""
    Plots simulated forecasts and the original series
    Args:    series (ndarray)---time series to forecast
             window (int)-------window size for MLE of the log normal
             ahead (int)--------number of days to forecast
             train_on (int)-----optional, number of days from series used for fitting
             n (int)------------number of simulations to plot
Returns: None, plots inline
"""
if not train_on:
train_on = series.count()
train_x_space = np.arange(0,series.count())
test_x_space = np.arange(train_on, train_on+ahead)
plt.figure(figsize=(15,6));
plt.plot(train_x_space, series)
for _ in range(n):
walk = simulate_random_walk(series[:train_on], window, ahead)
plt.plot(test_x_space, walk, linestyle='-', color='salmon', alpha=0.7)
plt.xlabel('Time Index (Days)');
plt.ylabel('Adjusted Close Price');
    plt.title('Simulated Log Normal Random Walk Forecast');
```
Here we use the first 650 prices and a daily-return window of 100 days to predict Facebook's price 135 days into the future. Notice the general positive slope shared by the simulated paths (in pink). To summarize the forecast, however, we will want some math to calculate an expected value and variance at each step.
```python
plot_simulated_forcast(tech['FB'], window=100, ahead=135, train_on=650, n=10);
```
Simulation is great, but what we really want is the expected value and variance of the process at any time $t+k$.
##### Deriving the k-step ahead expected value
$$
\begin{align}
\hat{y_{t+k}} &= y_{t+(k-1)} \cdot \mathrm{E}[LogN(\mu, \sigma^2)] \\
&= y_{t} \cdot \mathrm{E}[LogN(\mu, \sigma^2)]^k \\
&= y_{t} \cdot {(e^{\mu+\sigma^2/2})}^k
\end{align}
$$
The expected value of a log-normal: $\mathrm{E}[LogN(\mu, \sigma^2)] = e^{\mu+\sigma^2/2}$ as a python function below
```python
def get_expected_value(mu, sigma2):
"""
Returns expected value for log normal with params mu, sigma2
Args: mu (float)-------parameter mu
sigma2 (float)---parameter sigma squared
Returns: expected value (float) of the log normal
"""
    return np.exp(mu + sigma2/2)
```
The variance of a log normal: $\mathrm{Var}(LogN(\mu, \sigma^2)) = (e^{\sigma^2}\!\!-1) e^{2\mu+\sigma^2}$ as a python function below
```python
def get_variance(mu, sigma2):
"""
Returns variance for log normal with params mu, sigma2
Args: mu (float)-------parameter mu
sigma2 (float)---parameter sigma squared
Returns: variance (float) of the log normal
"""
return (np.exp(sigma2)-1)*np.exp((2*mu)+sigma2)
```
##### Deriving the k-step ahead variance
$$
\begin{aligned}
\mathrm{V}[y_{t+k}|y_t] &= \mathrm{E}[y^2_{t+k}|y_t] - \mathrm{E}[y_{t+k}|y_t]^2 && \textit{by definition of variance} \\
&= y^2_t \cdot \mathrm{E}\big[Z^2_{t+1}\big]^k - y^2_t \cdot \mathrm{E}\big[Z_{t+1}\big]^{2k} && \textit{where } Z_{t+1} \sim LogN(\mu, \sigma^2)\\
&= y^2_t \cdot \Big(\mathrm{E}\big[Z^2_{t+1}\big]^k - \mathrm{E}\big[Z_{t+1}\big]^{2k}\Big) \\
&= y^2_t \cdot \Big[\Big(\mathrm{V}\big[Z_{t+1}\big] + \mathrm{E}\big[Z_{t+1}\big]^{2}\Big)^k - \mathrm{E}\big[Z_{t+1}\big]^{2k}\Big]
\end{aligned}
$$
```python
def get_k_step_variance(mu, sigma2, k, yt):
"""
Returns a k step ahead variance for log normal with params mu, sigma2
Args: mu (float)-------parameter mu
sigma2 (float)---parameter sigma squared
k (int)----------number of steps ahead
yt (float)-------price at time t (end of sample)
Returns: variance (float) of the log normal
"""
return (yt**2) * ((get_variance(mu, sigma2) + get_expected_value(mu, sigma2)**2)**k - (get_expected_value(mu, sigma2)**2)**k)
```
We can now use these functions to create an expected forecast.
```python
def get_expected_walk(series, window_size=10, ahead=50):
"""
    Args:   series (ndarray)----time series to use
            window_size (int)---number of trading weeks used for the log-normal estimation
            ahead (int)---------number of days to forecast ahead
    Return (ndarray, ndarray): expected prices and their k-step variances
"""
E_Xs = np.zeros(ahead)
V_Xs = np.zeros(ahead)
yt = series[-1]
last_price = series[-1]
window = series[-((window_size*5)+1):].values
for step in range(ahead):
mu, sigma2 = mle_log_norm( get_daily_return_ratio(window) )
E_Xs[step] = last_price * get_expected_value(mu, sigma2)
V_Xs[step] = get_k_step_variance(mu, sigma2, step+1, yt)
last_price = last_price * get_expected_value(mu, sigma2)
return E_Xs, V_Xs
```
```python
def plot_expected_forcast(series, window, ahead, train_on=None, error=2):
"""
    Plots the expected forecast and the original series
    Args:    series (ndarray)---time series to forecast
             window (int)-------window size for MLE of the log normal
             ahead (int)--------number of days to forecast
             train_on (int)-----optional, number of days from series used for fitting
             error (int)--------number of standard deviations for the confidence bands
Returns: None, plots inline
"""
if not train_on:
train_on = series.count()
    E_y, V_y = get_expected_walk(series[:train_on], window, ahead)  # expected prices, variances
    low_bound = E_y - (error*np.sqrt(V_y))
    high_bound = E_y + (error*np.sqrt(V_y))
train_x_space = np.arange(0,series.count())
test_x_space = np.arange(train_on, train_on+ahead)
plt.figure(figsize=(15,6));
plt.plot(train_x_space, series);
    plt.plot(test_x_space, E_y, 'maroon');
plt.plot(test_x_space, high_bound, 'olive');
plt.plot(test_x_space, low_bound, 'olive');
plt.fill_between(test_x_space, low_bound, high_bound, alpha=0.2, color='y')
plt.xlabel('Time Index (Days)');
plt.ylabel('Adjusted Close Price');
    plt.title('Expected Log Normal Random Walk Forecast');
```
Below we use the equations above to create an expected forecast from the first 635 data points with a return-ratio window of 30 days. The forecast tracks the series well. The yellow region shows the plus/minus two-standard-deviation (roughly 95%) confidence band, which widens as we move farther from the sample.
```python
plot_expected_forcast(tech.FB, 30, 150, 635, error=2)
```
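As a rough consistency check (an addition; note it is slow, since every simulated step refits the MLE), the pointwise mean of many simulated walks should approximately track the analytic expectation:

```python
# Monte Carlo mean of simulated walks vs. the analytic expected walk
E_y, V_y = get_expected_walk(tech.FB[:635], 30, 150)
walks = np.array([simulate_random_walk(tech.FB[:635], 30, 150) for _ in range(20)])
print(np.abs(walks.mean(axis=0) - E_y).max())   # shrinks as more walks are averaged
```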
#### Please ignore, this was an ARMA(p,d,q) I chose to remove
```python
# def acf_objective(order, series):
# try:
# model = arima_model.ARIMA(series, order).fit()
# return model.aic
# except:
# return np.inf # if the model is not stationary return inf
```
```python
# def get_optimal_arima(series, cost):
# grid = (slice(0, 4, 1), slice(1,2,1), slice(0, 4, 1))
# orders = brute(cost, grid, args=(series,), finish=None)
# orders = [int(order) for order in orders]
# arima = arima_model.ARIMA(series, order=orders).fit()
# print({'p':orders[0], 'd':orders[1], 'q':orders[2]})
# return arima
```
```python
# def fit_arima(series, p, d, q):
# model = arima_model.ARIMA(series, [p,d,q], method='css').fit()
# return model.aic
```
```python
# fit = opt.minimize(hat, -2, (a, b))
```
```python
# fig, ax = plt.subplots(figsize=(10,5))
# ax.plot(np.arange(1,787), tech.FB.values)
# fig = arima.plot_predict(649, 785, dynamic=True, ax=ax,
# plot_insample=False)
```
___
```python
from IPython.display import HTML
HTML("""<style>@import "http://fonts.googleapis.com/css?family=Lato|Source+Code+Pro|Montserrat:400,700";@font-face{font-family:"Computer Modern";src:url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf')}.rendered_html h1,h2,h3,h4,h5,h6,p{font-family:'Computer Modern'}p,ul{font-family:'Computer Modern'}div#notebook-container{-webkit-box-shadow:none;box-shadow:none}h1{font-size:70pt}h2{font-size:50pt}h3{font-size:40pt}h4{font-size:35pt}h5{font-size:30pt}h6{font-size:25pt}.rendered_html p{font-size:11pt;line-height:14pt}.CodeMirror pre{font-family:'Source Code Pro', monospace;font-size:09pt}div.input_area{border:none;background:#f5f5f5}ul{font-size:10.5pt;font-family:'Computer Modern'}ol{font-size:11pt;font-family:'Computer Modern'}.output_png img{display:block;margin-left:auto;margin-right:auto}</style>""")
```
# Tutorial
## Regime-Switching Model
`regime_switch_model` is a set of algorithms for learning and inference on regime-switching models. Let $y_t$ be a $p\times 1$ observed time series and $h_t$ be a homogeneous and stationary hidden Markov
chain taking values in $\{1, 2, \dots, m\}$ with transition probabilities
\begin{equation}
w_{kj} = P(h_{t+1}=j\mid h_t=k), \quad k,j=1, \dots, m
\end{equation}
where the number of hidden states $m$ is known. It is assumed that the financial market in
each period can be realized as one of $m$ regimes. Furthermore, the regimes are characterized
by a set of $J$ risk factors, which represent broad macro and micro economic indicators. Let $F_{tj}$ be the value of the $j$th risk factor $(j=1, \dots, J)$ in period $t$. Correspondingly, $F_t$ is the vector of risk factors in period $t$. We assume that, for $t=1, \dots, n$, when the market is in regime $h_t$ in period $t$,
\begin{equation}
y_t = u_{h_t} + B_{h_t}F_t + \Gamma_{h_t}\epsilon_t,
\end{equation}
where $\epsilon_t \sim N(0,I)$. The model parameters $\{u_{h_t}, B_{h_t}, \Gamma_{h_t}\}$ depend on the regime $h_t$. $u_{h_t}$ is the state-dependent intercepts of the linear factor model. The matrix $B_{h_t}$ defines the sensitivities of asset returns to the common risk factors in state $h_t$ and is often called the loading matrix.
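For concreteness, here is a minimal sketch of one draw from this observation model in a given state (the helper name is hypothetical, not part of the package API):

```python
import numpy as np

def sample_observation(u_k, B_k, Gamma_k, F_t, rng=None):
    """One draw of y_t = u_k + B_k F_t + Gamma_k eps_t with eps_t ~ N(0, I)."""
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.standard_normal(Gamma_k.shape[1])
    return u_k + B_k @ F_t + Gamma_k @ eps
```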
`regime_switch_model` solves the following fundamental problems:
* Given the observed data, estimate the model parameters
* Given the model parameters and observed data, estimate the optimal sequence of hidden states
The implementation is based on the well-known Baum-Welch and Viterbi algorithms that are widely used in hidden Markov models.
```python
import numpy as np
import pandas as pd
from regime_switch_model.rshmm import *
```
## Generate samples based on the regime-switching model
```python
model = HMMRS(n_components=2)
# startprob
model.startprob_ = np.array([0.9, 0.1])
# transition matrix
model.transmat_ = np.array([[0.9, 0.1], [0.6, 0.4]])
# risk factor matrix
# read file from Fama-French three-factor data
Fama_French = pd.read_csv('Global_ex_US_3_Factors_Daily.csv', skiprows=3)
Fama_French.rename(columns={'Unnamed: 0': 'TimeStamp'}, inplace=True)
Fama_French = Fama_French.replace(-99.99, np.nan)  # replace is not in-place; assign the result
Fama_French = Fama_French.replace(-999, np.nan)
# select data
#Fama_French_subset = Fama_French[(Fama_French['TimeStamp'] >= 20150101) & (Fama_French['TimeStamp'] <= 20171231)]
Fama_French_subset = Fama_French
Fama_French_subset.drop(['TimeStamp', 'RF'], axis=1, inplace=True)
F = np.hstack((np.atleast_2d(np.ones(Fama_French_subset.shape[0])).T, Fama_French_subset))
# loading matrix with intercept
loadingmat1 = np.array([[0.9, 0.052, -0.02],
[0.3, 0.27, 0.01],
[0.12, 0.1, -0.05],
[0.04, 0.01, -0.15],
[0.15, 0.04, -0.11]])
intercept1 = np.atleast_2d(np.array([-0.015, -0.01, 0.005, 0.1, 0.02])).T
model.loadingmat_ = np.stack((np.hstack((intercept1, loadingmat1)),
np.hstack((0.25*intercept1, -0.5* loadingmat1))), axis=0)
# covariance matrix
n_stocks = 5
rho = 0.2
Sigma1 = np.full((n_stocks, n_stocks), rho) + np.diag(np.repeat(1-rho, n_stocks))
model.covmat_ = np.stack((Sigma1, 10*Sigma1), axis=0)
save = True
# sample
Y, Z = model.sample(F)
```
## Split data into training and test
```python
# Use the last 300 days as the test data
Y_train = Y[:-300,:]
Y_test = Y[-300:,:]
F_train = F[:-300,:]
F_test = F[-300:,:]
```
## Fitting Regime-Switch Model
```python
remodel = HMMRS(n_components=2, verbose=True)
remodel.fit(Y_train, F_train)
Z2, logl, viterbi_lattice = remodel.predict(Y_train, F_train)
```
1 -63535.1590 nan
2 -60972.1968 2562.9622
3 -59533.7367 1438.4601
4 -56005.9127 3527.8240
5 -54584.0500 1421.8628
6 -54259.0186 325.0314
7 -54199.8384 59.1802
8 -54192.7580 7.0804
9 -54192.0477 0.7103
10 -54191.9793 0.0684
11 -54191.9727 0.0065
12 -54191.9721 0.0006
13 -54191.9720 0.0001
### Examine model parameters
```python
np.set_printoptions(precision=2)
print("Number of data points = ", Y_train.shape[0])
print(" ")
print("Starting probability")
print(remodel.startprob_)
print(" ")
print("Transition matrix")
print(remodel.transmat_)
print(" ")
print("Means and vars of each hidden state")
for i in range(remodel.n_components):
print("{0}th hidden state".format(i))
print("loading matrix = ", remodel.loadingmat_[i])
print("covariance = ", remodel.covmat_[i])
print(" ")
```
('Number of data points = ', 6723)
Starting probability
[ 1.78e-13 1.00e+00]
Transition matrix
[[ 0.37 0.63]
[ 0.11 0.89]]
Means and vars of each hidden state
0th hidden state
('loading matrix = ', array([[-0.14, -0.53, -0.15, -0.03],
[-0.04, -0.04, -0.17, 0.06],
[ 0.04, -0.15, -0.23, -0.02],
[ 0.02, 0.02, 0.19, -0. ],
[ 0.01, 0.07, -0.04, 0.24]]))
('covariance = ', array([[ 10.55, 1.92, 2.14, 1.56, 1.37],
[ 1.92, 9.53, 1.68, 2.19, 1.74],
[ 2.14, 1.68, 9.44, 1.85, 1.71],
[ 1.56, 2.19, 1.85, 9.89, 2.62],
[ 1.37, 1.74, 1.71, 2.62, 10.24]]))
1th hidden state
('loading matrix = ', array([[ -6.79e-03, 8.93e-01, 3.71e-02, 4.66e-02],
[ -2.98e-02, 2.93e-01, 2.42e-01, 1.15e-01],
[ 7.01e-04, 1.09e-01, 6.24e-02, -2.15e-02],
[ 9.63e-02, 4.09e-02, -1.29e-02, -1.78e-01],
[ 1.62e-02, 1.48e-01, -2.12e-04, -1.60e-01]]))
('covariance = ', array([[ 0.98, 0.2 , 0.24, 0.19, 0.2 ],
[ 0.2 , 0.98, 0.21, 0.21, 0.17],
[ 0.24, 0.21, 1.01, 0.19, 0.21],
[ 0.19, 0.21, 0.19, 0.94, 0.19],
[ 0.2 , 0.17, 0.21, 0.19, 0.98]]))
### Examine the predicted hidden state
```python
print("Prediction accuracy of the hidden states = ", np.mean(np.equal(Z[:-300], 1-Z2)))
```
('Prediction accuracy of the hidden states = ', 0.9840844860925182)
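Note that hidden-state labels are only identified up to permutation, which is why the comparison above uses `1-Z2`. A label-agnostic check for the two-state case, plus an out-of-sample decode using the same `predict` signature as above (a sketch, not from the original notebook):

```python
# Permutation-robust accuracy for two states
acc = max(np.mean(Z[:-300] == Z2), np.mean(Z[:-300] == 1 - Z2))
print(acc)
# Decode the held-out 300 days with the fitted model
Z_test, logl_test, _ = remodel.predict(Y_test, F_test)
```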
# Bayesian Linear Regression
## What is the problem?
Given inputs $X$ and outputs $\mathbf{y}$, we want to find the best parameters $\boldsymbol{\theta}$, such that predictions $\hat{\mathbf{y}} = X\boldsymbol{\theta}$ estimate $\mathbf{y}$ well. In other words, we want the L2 norm of the errors, $||\hat{\mathbf{y}} - \mathbf{y}||_2$, to be as low as possible.
## Applying Bayes Rule
In this problem, we will {ref}`model the distribution of parameters <parameters-framework>`.
\begin{equation}
\underbrace{p(\boldsymbol{\theta}|X, \mathbf{y})}_{\text{Posterior}} = \frac{\overbrace{p(\mathbf{y}|X, \boldsymbol{\theta})}^{\text{Likelihood}}}{\underbrace{p(\mathbf{y}|X)}_{\text{Evidence}}}\underbrace{p(\boldsymbol{\theta})}_{\text{Prior}}
\end{equation}
\begin{equation}
p(\mathbf{y}|X) = \int_{\boldsymbol{\theta}}p(\mathbf{y}|X, \boldsymbol{\theta})p(\boldsymbol{\theta})d\boldsymbol{\theta}
\end{equation}
We are interested in the posterior $p(\boldsymbol{\theta}|X, \mathbf{y})$, and to derive it we need the prior, likelihood, and evidence terms. Let us look at them one by one.
### Prior
Let's assume a multivariate Gaussian prior over the $\boldsymbol{\theta}$ vector.
$$
\boldsymbol{\theta} \sim \mathcal{N}(\boldsymbol{\mu}_0, \Sigma_0)
$$
### Likelihood
Given a $\boldsymbol{\theta}$, our prediction is $X\boldsymbol{\theta}$. Our data $\mathbf{y}|X$ will have some irreducible noise which needs to be incorporated in the likelihood. Thus, we can assume the likelihood distribution over $\mathbf{y}$ to be centered at $X\boldsymbol{\theta}$ with random i.i.d. homoskedastic noise with variance $\sigma^2$:
$$
\mathbf{y}|X, \boldsymbol{\theta} \sim \mathcal{N}(X\boldsymbol{\theta}, \sigma^2I)
$$
### Maximum Likelihood Estimation (MLE)
Let us find the optimal parameters by differentiating likelihood $p(\mathbf{y}|X, \boldsymbol{\theta})$ w.r.t $\boldsymbol{\theta}$.
\begin{equation}
p(\mathbf{y}|X, \boldsymbol{\theta}) = \frac{1}{\sqrt{(2\pi)^n |\sigma^2I|}}\exp \left( -\frac{1}{2}(\mathbf{y} - X\boldsymbol{\theta})^T(\sigma^2I)^{-1}(\mathbf{y} - X\boldsymbol{\theta}) \right)
\end{equation}
Simplifying the above equation:
\begin{equation}
p(\mathbf{y}|X, \boldsymbol{\theta}) = \frac{1}{(2\pi\sigma^2)^{\frac{n}{2}}}\exp \left( -\frac{1}{2\sigma^2}(\mathbf{y} - X\boldsymbol{\theta})^T(\mathbf{y} - X\boldsymbol{\theta}) \right)
\end{equation}
Taking log to simplify further:
\begin{align}
\log p(\mathbf{y}|X, \boldsymbol{\theta}) &= -\frac{1}{2\sigma^2}(\mathbf{y} - X\boldsymbol{\theta})^T(\mathbf{y} - X\boldsymbol{\theta}) - \frac{n}{2}\log(2\pi\sigma^2)\\
\frac{d}{d\boldsymbol{\theta}} \log p(\mathbf{y}|X, \boldsymbol{\theta}) &= -\frac{1}{2\sigma^2}\frac{d}{d\boldsymbol{\theta}}(\mathbf{y} - X\boldsymbol{\theta})^T(\mathbf{y} - X\boldsymbol{\theta})\\
&= -\frac{1}{2\sigma^2}\frac{d}{d\boldsymbol{\theta}}(\mathbf{y}^T - \boldsymbol{\theta}^TX^T)(\mathbf{y} - X\boldsymbol{\theta})\\
&= -\frac{1}{2\sigma^2}\frac{d}{d\boldsymbol{\theta}} \left[ \mathbf{y}^T\mathbf{y} - \mathbf{y}^TX\boldsymbol{\theta} - \boldsymbol{\theta}^TX^T\mathbf{y} + \boldsymbol{\theta}^TX^TX\boldsymbol{\theta}\right]\\
&= -\frac{1}{2\sigma^2}\left[-(\mathbf{y}^TX)^T - X^T\mathbf{y} + 2X^TX\boldsymbol{\theta}\right] = 0\\
\therefore X^TX\boldsymbol{\theta} &= X^T\mathbf{y}\\
\therefore \boldsymbol{\theta}_{MLE} &= (X^TX)^{-1}X^T\mathbf{y}
\end{align}
We used some of the formulas from [this cheatsheet](http://www.gatsby.ucl.ac.uk/teaching/courses/sntn/sntn-2017/resources/Matrix_derivatives_cribsheet.pdf) but they can also be derived from scratch.
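As a quick numerical illustration (synthetic data, not from the text), the closed form agrees with a direct linear solve:

```python
# Verify theta_MLE = (X^T X)^{-1} X^T y on simulated data
import numpy as np
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.normal(size=100)
theta_mle = np.linalg.solve(X.T @ X, X.T @ y)
print(theta_mle)   # close to theta_true
```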
### Maximum a posteriori estimation (MAP)
We know from {ref}`the previous discussion <MAP-1>` that:
$$
\arg \max_{\boldsymbol{\theta}} p(\boldsymbol{\theta}|X, \mathbf{y}) = \arg \max_{\boldsymbol{\theta}} p(\mathbf{y}|X, \boldsymbol{\theta})p(\boldsymbol{\theta})
$$
Now, differentiating $p(\mathbf{y}|X, \boldsymbol{\theta})p(\boldsymbol{\theta})$ w.r.t $\theta$ by reusing some of the steps from MLE:
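Completing this step with the standard Gaussian-prior result (a sketch of the well-known closed form, not the source's own derivation): setting the gradient of $\log p(\mathbf{y}|X, \boldsymbol{\theta}) + \log p(\boldsymbol{\theta})$ to zero gives

$$
\boldsymbol{\theta}_{MAP} = \left(X^TX + \sigma^2\Sigma_0^{-1}\right)^{-1}\left(X^T\mathbf{y} + \sigma^2\Sigma_0^{-1}\boldsymbol{\mu}_0\right)
$$

With $\boldsymbol{\mu}_0 = 0$ and $\Sigma_0 = \tau^2I$, this reduces to ridge regression with penalty $\sigma^2/\tau^2$.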
# Node Embeddings and Skip Gram Examples
**Purpose:** - to explore the node embedding methods used for methods such as Word2Vec.
**Introduction** - one of the key methods used in node classification draws inspiration from natural language processing. This is based on the fact that one approach to natural language processing views the ordering of words in a manner similar to a graph, since each n-gram has a set of words that follow it. Strategies that treat text this way are naturally amenable to domains where we are explicitly working on a network structure.
Methods which employ node embeddings have several fundamental steps:
1. Create a "corpus" of node connections using a random walk.
2. Define a transformation on the list of node connections from **1** that assigns a high similarity value to node pairs that appear close together on the walks and a low value to pairs with little relationship.
3. Run a standard machine learning method on the new set of factors from step **2**.
## Random Walks:
Here we explore the first step in this process: the random selection of node values in the graph structure. This step approximates the connections each node has as a list, which carries two advantages:
1. Each node similarity measure captures both local (direct) connections and higher-order (indirect) connections. This is known as **Expressivity**.
2. Not all node pairs need to be encoded; we don't have to worry about coding the zero probabilities. This is **Efficiency**.
We will discuss some of the methods used for random walks in the sections below, in reference to the papers where they were originally introduced.
### DeepWalk Method
*DeepWalk: Online Learning of Social Representations* uses short random walks. In this case, we define a random walk starting at vertex $V_i$ as $W_i$. This random walk is a stochastic process composed of random variables $W_i^k$, where $k$ denotes the step in the sequence of each random walk.
For this method, a stream of random walks is created. This has the added advantage of being easy to parallelize, and it is also less sensitive to changes in the underlying graph than a single long random walk.
The implementation of the DeepWalk method is used in the function below:
```python
import pandas as pd, numpy as np, os, random
from IPython.core.debugger import set_trace
np.random.seed(13)
random.seed(13)  # gen_step below draws from the stdlib random module
dat = pd.read_csv("../Data/soc-sign-bitcoinalpha.csv", names = ["SOURCE", "TARGET", "RATING", "TIME"])
```
```python
len(pd.unique(dat.SOURCE))
```
3286
```python
len(pd.unique(dat.TARGET) )
```
3754
```python
#from_vals = pd.unique(dat.SOURCE)
#a = dat.TARGET[dat.SOURCE == from_vals[1]]
# Generate list comprehension using from values as a key; to values are saved as a list.
#node_lists = {x:dat.TARGET[dat.SOURCE == x].values for x in from_vals }
# Generate a step by selecting one value randomly from the list of "to" nodes:
def gen_step(key_val,dict_vals):
# print(dict_vals[key_val])
return( dict_vals[key_val][random.randint(0,len(dict_vals[key_val])-1)] )
def gen_walk(key_val,dict_vals,steps):
walk_vals = [key_val]
for i in range(0,steps-1):
walk_vals.append(gen_step(walk_vals[-1],dict_vals) )
return(walk_vals)
def RW_DeepWalk(orig_nodes, to_vals, walk_length=3):
    # Build the adjacency dict, then start one walk from every source node
    from_vals = pd.unique(orig_nodes)
    node_lists = {x: to_vals[orig_nodes == x].values for x in from_vals}
    walks = {x: gen_walk(key_val=x, dict_vals=node_lists, steps=walk_length) for x in node_lists}
    return walks
```
```python
# In order to sort these values, we need to make a full list of "from" and "to" for the random walk. This is performed in the script below:
# Identify values in "to" column that might not be in the from column:
f = dat.SOURCE
t = dat.TARGET
unique_t = [x for x in pd.unique(t) if not(x in pd.unique(f))]
x_over = dat[dat['TARGET'].isin( unique_t)]
# Add entries from the "to" column to the from column; add corresponding entries from the "from" column. This way, we include mappings of nodes in the "to" column as part of the random walk.
full_from = f.append(x_over.TARGET)
full_to = t.append(x_over.SOURCE)
```
```python
random_walk = RW_DeepWalk( full_from, full_to, walk_length=10)
```
An example of one of the walks obtained:
```python
random_walk[1]
```
[1, 2273, 2202, 1134, 35, 1385, 114, 1202, 605, 230]
The random walk method provides a representation of the network that can be computed quickly and is simple to parallelize. Its speed also allows calculations to be updated cheaply when the underlying graph structure changes.
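To connect back to steps **2** and **3** of the outline, the walks can be fed directly to a skip-gram model. A minimal sketch, assuming `gensim` (version 4 or later) is available; it is not imported elsewhere in this notebook:

```python
# Each walk acts as a "sentence"; node ids become string tokens
from gensim.models import Word2Vec
corpus = [[str(n) for n in walk] for walk in random_walk.values()]
emb = Word2Vec(corpus, vector_size=64, window=5, sg=1, min_count=1)
print(emb.wv['1'][:5])   # first few embedding dimensions for node 1
```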
### Node2vec Method
The paper "Scalable Feature Learning for Networks" uses a separate method called a "biased random walk".
One of the points made in the paper is the type of sampling strategies that can be used to try to approximate the neighborhood around some node (this is denoted as $N_s$ in the paper). There are two extremes for sampling strategies that can be employed:
* Breadth-first sampling (BFS) - the neighborhood is restricted to nodes which are immediate neighbors of the source node. Here we define the neighborhood **only** with directly adjacent nodes.
* Depth-first sampling (DFS) - The neighborhood consists of nodes sequentially sampled at increasing distances from the source node. This is represented in the random walk algorithm that was shown in the last section.
A biased random walk as expressed by the authors is an interpolation between the two strategies mentioned above.
Let $u$ be the source node, and $l$ be the length of the random walk. Let $c_i$ be the $i$th node in the walk where $c_0 = u$. Then, $c_i$ is generated as follows:
$$ P(c_i = x \mid c_{i-1} = v) = \frac{\pi_{v,x}}{Z} \text{ if } (v,x) \text{ is an edge, and } 0 \text{ otherwise,} $$
Where $\pi_{v,x}$ is the unnormalized transition probability between nodes $v$ and $x$, and $Z$ is some constant that normalizes the probability between the two nodes. This is very similar to the formulation that was desecribed earlier for DeepWalk.
The simplest way to introduce bias to the random walks is to sample based on the static edge weights: $\pi_{v,x} = w_{v,x}$. In the case of an unweighted graph like the one used in the example above, $w_{v,x} = 1$.
We will define a $2$nd-order random walk with parameters $p, q$. We set the unnormalized transition probability to $\pi_{v,x} = \alpha_{p,q}(t,x) \cdot w_{v,x}$, where $\alpha_{p,q}(t,x)$ is defined as:
\begin{equation}
\alpha_{p,q}(t,x) =
\begin{cases}
\frac{1}{p} & \text{if $d_{t,x}=0$ }\\
1 & \text{if $d_{t,x}=1$ }\\
\frac{1}{q} & \text{if $d_{t,x}=2$ }
\end{cases}
\end{equation}
where $d_{t,x}$ is the shortest-path distance between nodes $t$ and $x$. Also note that $d_{t,x} \in \{0,1,2\}$.
Changing the parameters $p$ and $q$ controls how quickly the walk leaves the current neighborhood. In the example provided in the paper, the authors consider a walk which has just transitioned to node *v* from node *t*. It has three potential choices for its next step:
* Transition back to *t*, with bias $\frac{1}{p}$ applied ($d_{t,t} = 0$).
* Transition to a node shared by the neighborhoods of *t* and *v*, with bias $1$ applied ($d_{t,x} = 1$).
* Transition to a node not adjacent to *t*, with bias $\frac{1}{q}$ applied ($d_{t,x} = 2$).
A lower $q$ and a higher $p$ therefore increase the likelihood of leaving the initial neighborhood of *t*; setting $p = 1$ and $q = 1$ recovers the original unbiased random walk implementation described above.
A higher $q$ value decreases the likelihood of the walk moving to a node that is not a neighbor of *t*, keeping the walk close to its starting neighborhood (BFS-like behavior). A sketch of one such biased step is given after the adjacency dict below.
```python
from_vals = pd.unique(full_from)
node_lists = {x:full_to[full_from == x].values for x in from_vals}
node_lists
```
{7188: array([1]),
 430: array([   1,   13,   59,  247,  831,  817, 1055, 7595, 7509]),
 3134: array([  1,  22,  27, 617]),
 3026: array([1]),
 3010: array([1]),
 ...}
(output truncated: one adjacency array per source node)
2129: array([ 1, 10]),
683: array([ 1, 4, 9, 22, 26, 211, 7565, 798]),
42: array([ 1, 2, 3, 5, 7, 9, 10, 11, 14, 16, 17,
21, 22, 24, 30, 31, 33, 36, 41, 270, 932, 601,
66, 924, 7603, 87, 412, 153, 78, 145, 79, 101, 63,
777, 175, 1269, 58, 173, 2155, 822, 309, 81, 174, 1065,
203, 1359, 7403, 177, 85, 640, 1337, 136, 62, 2241, 43,
1149, 693, 83, 51, 315, 256, 7526, 1290, 48, 90, 181,
155, 187, 863, 76, 508, 84, 715, 992, 1618, 840, 609,
109, 7438, 7582]),
1136: array([ 1, 9, 25, 33, 97, 175, 516]),
1283: array([ 1, 9, 33, 586]),
2127: array([ 1, 135]),
2093: array([ 1, 57]),
1284: array([ 1, 9, 18, 50, 97]),
72: array([ 1, 3, 4, 11, 16, 18, 19, 24, 48, 262, 118,
416, 222, 935, 830, 219, 6369, 1476, 77, 324, 3347, 154,
896, 650, 280, 292, 158, 1838, 115, 546, 2761, 1475, 113,
2762, 632, 284, 507, 1279, 132, 447, 568, 181, 99, 2449,
362]),
264: array([ 1, 2, 11, 15, 21, 52, 137, 183, 185, 255, 258,
805, 301, 488, 593, 2194, 639, 2126, 1457, 798]),
2756: array([1]),
1285: array([ 1, 22, 142, 143, 255]),
613: array([ 1, 8, 9, 18, 22, 26, 44, 48, 133, 226, 366,
385, 876, 1337, 738]),
304: array([ 1, 9, 11, 14, 15, 20, 57, 101, 176, 219, 220,
267, 360, 518, 392, 548, 681, 945, 2799, 7603, 346, 3516,
389]),
1282: array([ 1, 7, 17, 255]),
4721: array([1]),
1475: array([ 1, 2, 72, 7603]),
2746: array([1]),
1715: array([ 1, 77, 194, 252]),
734: array([ 1, 6, 546, 1280]),
2108: array([ 1, 682]),
1923: array([ 1, 712]),
1465: array([ 1, 78, 153]),
2742: array([1]),
2117: array([ 1, 2737]),
1720: array([ 1, 66, 682]),
90: array([ 1, 2, 3, 6, 11, 21, 22, 24, 25, 30, 32,
39, 40, 41, 42, 43, 48, 50, 52, 56, 69, 78,
79, 89, 357, 100, 145, 7564, 328, 570, 314, 219, 266,
883, 395, 154, 203, 506, 189, 286, 122, 117, 402, 137,
405, 175, 131, 2038, 516, 132, 1011, 841, 7572]),
2731: array([1]),
1457: array([ 1, 264, 940, 1477]),
2725: array([1]),
2109: array([ 1, 682, 732]),
543: array([ 1, 18, 78, 253, 792]),
2102: array([ 1, 231]),
712: array([ 1, 194, 744, 1923, 1676, 727]),
231: array([ 1, 4, 20, 26, 48, 77, 132, 154, 514, 854, 2102,
1695, 7328]),
2687: array([1]),
1708: array([ 1, 89, 132]),
1129: array([ 1, 3, 460, 937, 2351, 1440]),
76: array([ 1, 28, 39, 42, 75, 240, 303, 853, 2685, 1684, 78,
79, 77, 2696, 314, 97, 1011, 130, 339, 1258, 2626, 84,
2046, 619, 678, 672, 480, 2037, 408, 911, 1941]),
2065: array([ 1, 480, 2660]),
1251: array([ 1, 343, 994, 1257]),
1248: array([ 1, 65, 101, 145, 1250]),
725: array([ 1, 10, 102, 130, 183, 574, 1661, 2007, 2579, 1113]),
2559: array([1]),
507: array([ 1, 72, 84, 150, 153, 182, 187, 405, 422, 566, 535,
583, 2104, 1985, 7589, 7371, 609]),
1942: array([ 1, 252]),
1232: array([ 1, 2, 1643, 2504, 1936]),
534: array([ 1, 4, 9, 16, 21, 181, 912, 2859, 2474, 2460, 7582]),
994: array([ 1, 35, 57, 218, 252, 439, 578, 727, 1261, 1251]),
503: array([ 1, 2, 16, 23, 33, 43, 99, 200]),
2448: array([1]),
846: array([ 1, 8, 747]),
1: array([ 160, 1028, 309, 11, 594, 1316, 1392, 1583, 888, 637, 1520,
18, 35, 1901, 44, 10, 783, 821, 112, 964, 89, 20,
256, 223, 1881, 351, 196, 416, 1877, 87, 2367, 3254, 1573,
247, 1353, 493, 1358, 1177, 1538, 2296, 222, 2282, 2113, 2260,
71, 142, 2249, 1519, 2227, 1496, 519, 1493, 1315, 1750, 156,
9, 1724, 22, 15, 255, 1267, 1261, 57, 710, 472, 152,
1952, 1606, 379, 4, 563, 113, 744, 625, 3422, 3421, 3418,
3402, 250, 249, 2427, 3414, 1590, 3355, 2305, 1900, 3385, 1147,
377, 1072, 96, 118, 3375, 3367, 1024, 709, 3349, 1580, 545,
146, 1198, 1589, 3341, 3339, 2402, 2401, 1382, 3335, 3317, 3329,
3332, 3330, 1867, 1197, 3319, 1885, 1862, 1886, 649, 154, 3316,
617, 3313, 3300, 1522, 2391, 896, 3298, 462, 3295, 432, 3292,
3290, 2383, 1195, 3283, 1875, 3279, 2378, 1579, 291, 454, 2375,
292, 158, 3256, 1846, 3259, 1066, 1061, 3253, 1860, 3247, 2368,
151, 3245, 1574, 290, 1570, 3233, 891, 1856, 115, 3215, 483,
1365, 3211, 3210, 3189, 3193, 1847, 3202, 2351, 1063, 2352, 1051,
3173, 1843, 3163, 1186, 7341, 2342, 3139, 3167, 459, 2338, 1842,
2334, 3156, 817, 1840, 3141, 1521, 3149, 2324, 1839, 3143, 3148,
3142, 1060, 7431, 1355, 3134, 174, 2315, 1356, 3124, 1835, 296,
1180, 3118, 1542, 3113, 3111, 3013, 529, 1826, 3103, 1822, 3101,
753, 3083, 3088, 395, 3084, 333, 1339, 3068, 3042, 3078, 596,
1796, 2281, 3070, 1442, 3064, 194, 3062, 701, 3059, 124, 7427,
3057, 396, 2276, 2273, 3048, 523, 2271, 95, 1525, 2235, 2261,
1342, 331, 3027, 3023, 3026, 373, 68, 1331, 116, 2147, 3020,
2252, 179, 3004, 491, 214, 439, 3010, 2090, 3006, 810, 1167,
697, 67, 258, 1793, 3000, 2999, 1340, 2996, 2989, 1043, 554,
1336, 1337, 2238, 178, 747, 2966, 2962, 474, 330, 260, 2952,
455, 2941, 2947, 2945, 2942, 623, 1509, 2926, 29, 2217, 123,
1510, 1505, 2927, 804, 2920, 301, 2782, 2907, 2909, 958, 1148,
1502, 2904, 955, 874, 2898, 1500, 2198, 1320, 1760, 1494, 2178,
2892, 636, 1034, 276, 2876, 1318, 259, 2881, 117, 1492, 2185,
1313, 1491, 2870, 1309, 2181, 2858, 2177, 1749, 2848, 1458, 951,
1029, 2176, 347, 2846, 2845, 370, 2173, 2844, 1302, 2167, 2837,
517, 2826, 2166, 2163, 1742, 2830, 1295, 2823, 2632, 2820, 2814,
796, 1736, 1025, 1482, 1483, 2152, 2810, 2149, 2808, 2144, 2806,
1128, 38, 2801, 736, 2136, 2795, 2793, 1288, 1436, 2498, 7603,
2786, 2784, 587, 2783, 2780, 944, 305, 2779, 2775, 346, 683,
2772, 2129, 75, 2771, 175, 1136, 1283, 2127, 2767, 2764, 2763,
2093, 1284, 72, 264, 2754, 2756, 1285, 613, 304, 1282, 4721,
2752, 2750, 2746, 586, 2747, 2119, 1715, 1475, 734, 2744, 2108,
1923, 2743, 1465, 1720, 2742, 2117, 90, 2707, 1457, 2731, 2725,
2109, 2713, 155, 543, 2102, 231, 1708, 2700, 2687, 1413, 1129,
2695, 76, 1445, 2065, 2619, 2605, 1251, 2586, 1117, 300, 1248,
725, 578, 2559, 507, 1942, 1232, 42, 534, 994, 503, 2448,
846, 2, 7348, 7425, 7557, 7589]),
1368: array([ 2, 13, 19, 120, 245, 249]),
164: array([ 2, 11, 12, 15, 16, 19, 58, 82, 85, 86, 105,
837, 835, 199, 174, 1904, 279, 375, 269, 761, 1393]),
244: array([ 2, 3, 6, 26, 30, 33, 43, 56, 80, 95, 120,
134, 143, 177, 197, 227, 1181, 245, 280, 2329, 7595, 703,
641, 1060, 1356, 373, 295, 333, 1041, 7584]),
62: array([ 2, 3, 7, 8, 10, 21, 24, 25, 26, 42, 51,
102, 124, 85, 166, 116, 179, 136, 92, 306, 229, 174,
203, 886, 2314, 237, 95, 885, 596, 1043, 5679, 933, 73,
587, 258, 1785, 145, 1781, 278, 1165, 205, 265, 7590]),
2244: array([ 2, 32, 41, 166, 7539, 7462, 7600]),
148: array([ 2, 4, 16, 40, 44, 54, 84, 109, 119, 131, 139,
540, 840, 380, 512, 1995, 1913, 1600, 313, 1099, 152, 7327,
1401, 466, 1409, 768, 270, 439, 7603]),
7403: array([ 2, 13, 17, 37, 64, 145]),
52: array([ 2, 7, 8, 9, 10, 20, 21, 25, 26, 32, 39,
50, 51, 257, 177, 362, 189, 961, 487, 1320, 7565, 1138,
2195, 637, 203, 266, 90, 807, 306, 83, 259, 284, 1328,
1986, 550, 146, 2902, 946, 274, 949, 67, 955, 1034, 193,
2893, 2896, 686, 2180, 1033, 69, 100, 264, 951, 2886, 361,
634, 1145, 1755, 739, 277, 7603, 798]),
159: array([ 2, 4, 9, 10, 14, 15, 17, 20, 22, 24, 37,
38, 42, 47, 65, 66, 67, 69, 113, 122, 143, 145,
272, 1140, 1099, 444, 1234, 7402, 7565, 7589]),
66: array([ 2, 8, 9, 10, 11, 17, 20, 26, 32, 41, 42,
47, 48, 1242, 143, 101, 124, 98, 159, 100, 482, 1456,
70, 686, 2851, 409, 2856, 2594, 83, 133, 97, 1449, 1300,
1646, 738, 483, 1720, 253, 1473, 7603]),
60: array([ 2, 14, 16, 17, 25, 35, 267, 176, 740, 1773, 673,
549, 134, 1504, 2184, 80, 63, 361, 256, 518, 7573, 7564]),
585: array([ 2, 21, 27, 34, 36, 58, 103, 116, 123, 124, 463,
547, 798, 1859, 7410, 7600, 7604]),
97: array([ 2, 3, 4, 5, 9, 10, 12, 14, 17, 19, 25,
31, 40, 41, 66, 69, 73, 76, 79, 85, 177, 475,
273, 280, 102, 366, 1706, 228, 345, 925, 105, 136, 115,
807, 2215, 1488, 316, 854, 255, 1136, 211, 284, 1284, 919,
7451]),
2006: array([2]),
1421: array([ 2, 30, 494]),
353: array([ 2, 4, 16, 45, 99, 113, 119, 216, 1102, 1101, 996,
505, 1916, 443, 438, 532, 379, 1212]),
51: array([ 2, 7, 8, 9, 10, 11, 14, 15, 17, 21, 22,
24, 26, 35, 39, 40, 41, 42, 45, 47, 185, 67,
741, 170, 1077, 177, 135, 62, 1020, 98, 145, 2133, 173,
1809, 116, 166, 1162, 961, 553, 295, 107, 80, 1322, 7389,
1145, 1314, 2887, 638, 2879, 1287, 687, 1140, 370, 70, 133,
796, 371, 75, 1484, 211, 157, 156, 345, 2150, 2151, 65,
672, 2143, 409, 2802, 2798, 588, 176, 7580, 7573, 7592, 7526]),
185: array([ 2, 6, 9, 10, 11, 21, 24, 31, 32, 37, 47,
51, 53, 55, 83, 95, 98, 137, 170, 188, 349, 198,
249, 7562, 288, 1691, 687, 264, 813, 423, 582]),
312: array([ 2, 4, 33, 37, 45, 74, 967, 668, 784, 574, 715,
7604]),
168: array([ 2, 4, 6, 10, 37, 56, 74, 91, 140, 152, 323,
996, 591, 7328, 772, 439, 7602, 7601, 7599, 7598, 7604]),
354: array([ 2, 188, 224, 251, 440, 1402, 7564]),
108: array([ 2, 9, 20, 22, 31, 70, 78, 95, 1292, 1743, 483,
188, 145, 798, 382, 1099, 2663, 130, 1363, 645, 2176, 2175,
1744, 2842, 546, 1683, 300, 1456, 2005, 2610, 1951, 355, 1601,
109]),
91: array([ 2, 4, 15, 18, 37, 54, 194, 200, 234, 469, 152,
441, 168, 625, 1084, 223, 119, 1665, 438, 769, 532, 7424,
297, 7546]),
40: array([ 2, 3, 4, 6, 7, 9, 10, 12, 14, 15, 16,
17, 18, 21, 22, 23, 32, 37, 77, 271, 384, 92,
47, 198, 256, 98, 632, 792, 129, 1224, 1214, 121, 2103,
744, 2891, 128, 1794, 1179, 2314, 3033, 123, 95, 177, 2305,
1029, 103, 268, 7410, 97, 2919, 65, 316, 2140, 255, 51,
1243, 1478, 2126, 735, 870, 90, 1132, 1279, 452, 1718, 1717,
2531, 606, 770, 717, 237, 1077, 234, 851, 148, 1600, 1919,
1226, 45, 917, 379, 989, 44, 1401]),
271: array([ 2, 40, 54, 113, 119, 471]),
119: array([ 2, 4, 23, 54, 91, 471, 662, 235, 271, 293, 491,
384, 148, 121, 311, 2521, 1946, 7511, 322, 769]),
54: array([ 2, 4, 10, 16, 23, 37, 40, 188, 469, 234, 200,
7552, 91, 334, 84, 74, 119, 113, 108, 383, 145, 368,
566, 293, 148, 449, 102, 226, 1430, 2557, 914, 1106, 328,
610, 840, 531, 152, 379, 466, 7550, 7564]),
375: array([ 2, 12, 15, 35, 36, 43, 53, 58, 64, 85, 94,
96, 98, 104, 114, 118, 163, 164, 605, 3424, 498, 3399,
651, 604, 1390, 2336, 603, 900, 7578, 7591]),
177: array([ 2, 3, 4, 6, 8, 9, 10, 13, 14, 16, 24,
25, 26, 27, 29, 30, 31, 32, 33, 36, 40, 41,
42, 43, 49, 51, 52, 56, 62, 63, 68, 73, 77,
83, 85, 90, 92, 95, 100, 103, 112, 113, 116, 122,
123, 124, 135, 142, 143, 154, 156, 162, 166, 172, 174,
816, 244, 802, 296, 214, 265, 642, 750, 748, 810, 1163,
278, 333, 640, 521, 523, 1515, 350, 7595, 815, 1352, 227,
492, 395, 882, 228, 287, 1806, 1165, 878, 371, 219, 487,
592, 1548, 1179, 621, 7554, 1180, 969, 524, 522, 703, 396,
1809, 1171, 331, 697, 879, 1048, 193, 1033, 295, 552, 623,
1328, 393, 1765, 190, 1553, 269, 2225, 349, 266, 3125, 1835,
3117, 1456, 2311, 2304, 2286, 1538, 1051, 1054, 1753, 1159, 595,
404, 1053, 596, 1347, 951, 1525, 1024, 877, 332, 279, 1524,
2254, 259, 787, 1324, 1293, 3022, 203, 1310, 347, 3016, 1041,
306, 3003, 258, 196, 785, 506, 2988, 587, 7482, 1517, 197,
1162, 490, 489, 2946, 1304, 2963, 639, 1039, 2958, 411, 799,
1714, 284, 2935, 2679, 7389, 2213, 1696, 204, 2924, 7588, 357,
2280, 7598, 7601, 7599, 7604, 7530, 7581, 7584, 7590, 7517, 7565,
7536, 5533, 7583, 798]),
31: array([ 2, 4, 5, 6, 7, 9, 11, 12, 14, 17, 19,
20, 22, 24, 25, 188, 105, 429, 45, 59, 92, 47,
401, 36, 136, 174, 154, 642, 42, 779, 65, 37, 96,
286, 1569, 95, 250, 345, 700, 7590, 962, 2977, 1040, 219,
1134, 1146, 146, 161, 140, 109, 566, 379, 177]),
47: array([ 2, 4, 5, 6, 7, 9, 12, 14, 15, 17, 20,
21, 22, 24, 25, 26, 27, 31, 36, 38, 40, 41,
66, 159, 211, 460, 83, 272, 163, 280, 371, 185, 78,
487, 77, 197, 1034, 255, 51, 140, 150, 7602, 7599, 7598,
7604]),
23: array([ 2, 4, 16, 129, 469, 656, 437, 844, 1131, 389, 564,
293, 770, 843, 607, 500, 842, 713, 772, 847, 153, 1400,
1401, 1613, 1082, 167, 504, 501, 298, 845, 908, 1911, 99,
108, 1226, 139, 710, 44, 768, 848, 119, 440, 773, 2443,
2447, 503, 1402, 138, 401, 1080, 466, 2441, 1078, 84, 1601,
1398, 2438, 935, 1919, 1087, 512, 40]),
74: array([ 2, 4, 18, 1017, 1263, 131, 1259, 1448, 1648, 1620, 1605,
625, 1127, 1402, 1087, 168, 312]),
165: array([ 2, 7, 9, 15, 22, 26, 38, 43, 85, 114, 145,
188, 407, 489, 647, 223, 7161]),
306: array([ 2, 6, 7, 8, 17, 22, 31, 41, 43, 52, 62, 66, 138,
177, 189, 411, 452]),
8: array([ 2, 3, 5, 6, 7, 521, 600, 396, 306, 25, 7380,
227, 500, 554, 1896, 105, 335, 19, 1578, 174, 1836, 288,
36, 13, 701, 95, 29, 318, 1518, 522, 11, 21, 869,
3400, 2344, 118, 120, 528, 2408, 24, 128, 1691, 1865, 2389,
32, 59, 983, 2384, 34, 645, 391, 336, 981, 1571, 7578,
694, 1566, 877, 2333, 356, 1556, 3159, 458, 7390, 1415, 1361,
2330, 123, 3131, 1459, 362, 287, 3116, 10, 333, 1178, 1343,
748, 154, 237, 523, 549, 3028, 596, 937, 846, 451, 3018,
962, 2253, 1788, 142, 2921, 133, 1041, 106, 1159, 136, 1038,
1254, 1298, 33, 1033, 295, 205, 166, 266, 487, 801, 961,
484, 51, 85, 2955, 2172, 63, 26, 286, 43, 201, 284,
75, 157, 650, 395, 401, 372, 7512, 707, 148, 309, 1068,
602, 225, 3369, 597, 242, 189, 393, 319, 591, 1043, 599,
547, 739, 752, 883, 592, 457, 1346, 1794, 879, 812, 317,
147, 347, 519, 268, 2395, 360, 107, 452, 585, 204, 427,
7352, 7362, 7361, 1579, 7359, 7360, 7358, 7496, 7382, 7357, 7354,
7356, 7355, 7353, 7489, 7487, 7495, 7494, 7493, 7492, 7491, 7490,
7486, 7488, 382, 7351, 7350, 1883, 2354, 4721, 1856, 2821, 3129,
4017, 209, 593, 617, 130, 78, 214, 3516, 331, 304, 2031,
969, 1179, 702, 492, 7328, 1195, 758, 229, 780, 1104, 5446,
1502, 4934, 7485, 665, 7423, 7345, 1584, 6792, 527, 7347, 649,
754, 885, 696, 1162, 1322, 52, 265, 5679, 7381, 7575, 613,
7603, 7507, 7558, 7408, 7552, 7475, 7474, 7543, 7567, 7542, 7470,
7459, 7472, 7471, 7435, 7412, 7597]),
242: array([ 2, 3, 5, 10, 26, 29, 56, 58, 85, 95, 196,
5342, 7514, 451]),
412: array([ 2, 11, 30, 31, 42, 58, 81, 125, 144, 166, 173,
5342, 417, 496, 1380, 759, 975, 1853, 7550, 1307, 716]),
381: array([ 2, 4, 84, 145, 251, 551]),
561: array([ 2, 4, 37, 106, 131, 293, 1087, 2590]),
216: array([ 2, 5, 7, 11, 12, 75, 115, 140, 152, 157, 756,
219, 2074, 1325, 1616, 272, 353, 441, 7604]),
107: array([ 2, 3, 6, 9, 14, 16, 17, 24, 32, 51, 56,
83, 85, 513, 348, 621, 7595, 1058, 7565, 193, 317, 1420,
116, 1542, 883, 693, 205, 1777, 549, 361, 360]),
681: array([ 2, 9, 11, 15, 17, 22, 255, 304, 7603, 2776, 2719,
7564]),
79: array([ 2, 9, 11, 17, 18, 39, 42, 57, 67, 76, 140,
2091, 314, 1705, 730, 1704, 2688, 1018, 2086, 2683, 2654, 2678,
2672, 2675, 1700, 2670, 1697, 212, 2749, 300, 97, 386, 1130,
272, 1714, 1660, 1472, 792, 2723, 255, 2104, 2068, 1003, 90,
2099, 732, 405, 580, 2706, 2708, 1133, 1276, 2530, 1132, 2101,
542, 937, 1273, 384, 1468, 380, 2087, 2092, 2694, 926, 1120,
863, 155, 624]),
272: array([ 2, 7, 9, 11, 15, 20, 39, 45, 47, 65, 79,
106, 107, 129, 133, 142, 145, 150, 152, 170, 177, 188,
211, 212, 216, 217, 426, 7603, 2647, 348, 1003, 2476, 277,
7571, 7576, 798, 7566, 7593, 7577, 7534, 7557, 7572, 7596, 7564]),
39: array([ 2, 4, 9, 10, 11, 17, 21, 22, 26, 108, 680,
326, 48, 7603, 785, 201, 77, 90, 274, 784, 76, 130,
1145, 51, 1470, 79, 217, 240, 7580, 515, 2070, 924, 2016,
163, 70, 2873, 52, 255, 1280, 7550, 2111, 75, 2105, 231,
1276, 408, 1125, 314, 137, 2630, 1001, 7604, 7533]),
150: array([ 2, 4, 6, 7, 10, 11, 22, 47, 57, 99, 145,
581, 7552, 1463, 1040, 863, 235, 1685, 299, 680, 154, 265,
2822, 193, 212, 1427, 226, 515, 2545, 478, 2564, 1999, 403,
1647, 660, 666, 511, 473, 507, 1098, 1081, 7599, 7602, 7601,
7598, 7604, 7564]),
181: array([ 2, 4, 21, 42, 72, 109, 130, 510, 356, 237, 863,
182, 1966, 999, 2618, 2012, 2611, 2000, 2566, 226, 405, 404,
401, 382, 7589, 7582]),
109: array([ 2, 4, 16, 31, 42, 48, 57, 65, 84, 102, 108,
449, 512, 566, 670, 779, 422, 1270, 866, 931, 167, 407,
7524, 217, 1431, 722, 668, 669, 922, 1428, 328, 215, 535,
999, 181, 1930, 990, 500, 711, 1099, 223, 379, 148]),
2404: array([2]),
24: array([ 2, 3, 5, 6, 7, 9, 10, 11, 13, 14, 15,
19, 22, 1691, 63, 32, 308, 34, 145, 88, 124, 70,
47, 185, 85, 31, 484, 269, 49, 173, 125, 107, 51,
67, 25, 33, 949, 122, 412, 36, 494, 621, 5342, 1029,
95, 159, 29, 201, 50, 42, 98, 288, 371, 708, 94,
72, 241, 250, 93, 191, 249, 229, 126, 7562, 83, 1856,
168, 134, 3147, 703, 632, 7512, 345, 374, 110, 143, 643,
941, 73, 885, 1351, 166, 549, 443, 62, 562, 411, 133,
117, 44, 211, 485, 316, 588, 2147, 77, 1731, 7565, 7600,
798, 7377, 190, 7548, 90]),
19: array([ 2, 3, 5, 6, 7, 8, 9, 11, 12, 15, 34,
49, 335, 207, 30, 93, 36, 708, 164, 321, 292, 29,
85, 32, 31, 288, 118, 302, 73, 22, 43, 24, 82,
110, 1368, 580, 33, 191, 116, 95, 1578, 59, 262, 1392,
27, 145, 497, 141, 53, 245, 97, 250, 1370, 96, 3373,
88, 402, 7568, 1199, 3364, 647, 398, 1192, 1882, 1374, 72,
650, 295, 757, 241, 259, 125, 87, 154, 7391, 707, 291,
894, 158, 1581, 783, 26, 42, 666, 332, 1322, 551, 128,
2388, 818, 197, 3254, 7406, 1031, 3185, 1184, 1574, 92, 86,
2336, 7446, 7585, 7563]),
7: array([ 2, 3, 6, 34, 36, 370, 30, 638, 429, 956, 1151,
640, 145, 266, 203, 85, 75, 1178, 25, 492, 111, 1153,
190, 1499, 95, 5342, 757, 174, 330, 26, 27, 1346, 31,
877, 124, 809, 878, 100, 801, 1782, 47, 1036, 51, 197,
7565, 98, 1768, 1310, 417, 1064, 96, 645, 70, 391, 371,
7562, 2307, 1171, 752, 5679, 1152, 882, 750, 696, 8, 879,
113, 161, 946, 506, 21, 1308, 156, 2211, 484, 24, 1309,
1287, 1495, 2192, 191, 497, 2361, 3378, 345, 3351, 650, 1057,
2381, 980, 1185, 19, 1574, 596, 289, 6369, 115, 253, 3177,
1059, 810, 3145, 1360, 296, 748, 1649, 621, 3002, 2302, 2301,
179, 110, 1348, 150, 768, 523, 40, 123, 3052, 684, 221,
214, 1522, 699, 3019, 549, 1046, 3011, 1162, 154, 204, 42,
1516, 743, 2987, 1033, 2981, 41, 13, 306, 1784, 356, 83,
362, 146, 1282, 318, 796, 407, 319, 1757, 1317, 1325, 348,
117, 2210, 2923, 2209, 483, 2922, 29, 52, 452, 1147, 637,
2869, 2889, 22, 687, 2142, 2866, 1731, 7540, 2408, 7425, 7591,
7501, 536, 43, 7554, 7528, 7588, 7442, 7568, 7516, 11, 7567,
7585, 7595, 7504, 7398, 7507, 7558, 7604, 177, 7602, 244, 7574,
7594, 7586, 7590, 7484, 7513, 7506, 7417, 7505, 7483, 7535, 142,
7600, 7566, 1760]),
348: array([ 2, 7, 9, 11, 21, 26, 107, 145, 179, 272, 277,
307, 626, 1707, 694, 2407, 787, 557]),
473: array([ 2, 150, 182, 215, 223, 293, 355, 404, 986, 999, 998]),
1921: array([ 2, 15, 44, 80, 202, 217, 258, 715, 1270, 1433]),
99: array([ 2, 4, 10, 21, 23, 50, 72, 84, 716, 7552, 209,
503, 562, 2624, 2019, 1658, 1993, 145, 1619, 1642, 149, 444,
150, 563, 3516, 659, 2455, 419, 106, 353, 842, 2473, 238]),
1380: array([ 2, 6, 138, 185, 244, 412, 690, 7435, 7346]),
73: array([ 2, 10, 12, 19, 22, 24, 26, 32, 49, 53, 62,
120, 883, 112, 82, 207, 163, 85, 463, 83, 214, 103,
172, 1054, 1051, 332, 95, 222, 528, 526, 3346, 88, 3342,
7412, 116, 728, 262, 227, 494, 460, 97, 156, 155, 136,
171, 1056, 331, 199, 123, 92, 278, 228, 813, 2336, 7595,
124, 7525, 177, 7604, 7464]),
85: array([ 2, 5, 6, 7, 8, 11, 13, 15, 19, 21, 24,
26, 29, 31, 34, 35, 36, 42, 43, 53, 58, 59,
62, 67, 73, 75, 205, 250, 171, 249, 647, 103, 188,
320, 291, 95, 191, 288, 125, 89, 491, 3415, 238, 246,
1570, 2369, 141, 1180, 118, 399, 398, 2408, 7398, 2373, 173,
601, 87, 164, 242, 207, 617, 93, 967, 295, 2328, 287,
146, 7428, 1162, 92, 360, 165, 804, 7588, 107, 97, 166,
941, 2951, 106, 157, 375, 1322, 199, 703, 2336, 222, 7503,
882, 7602, 7601, 7599, 7598, 7334, 7604, 177, 7579]),
189: array([ 2, 43, 52, 90, 106, 306, 1330, 552, 660, 1298, 260,
1424, 1506]),
787: array([ 2, 10, 29, 177, 204, 348, 587, 2407]),
133: array([ 2, 6, 8, 9, 11, 13, 15, 20, 25, 26, 51,
64, 66, 70, 75, 83, 100, 101, 257, 318, 688, 142,
589, 203, 7442, 2216, 319, 391, 7588, 272, 266, 799, 613,
371, 1272, 7415, 135, 798]),
1709: array([ 2, 9, 7603]),
658: array([ 2, 23, 324, 466, 7569]),
293: array([ 2, 23, 60, 74, 91, 113, 119, 126, 149, 175, 439,
7400, 389, 505, 775, 561, 473, 402]),
768: array([ 2, 4, 7, 16, 23, 45, 148, 436, 446, 1349, 926]),
45: array([ 2, 4, 9, 11, 16, 21, 26, 31, 39, 40, 42,
44, 132, 183, 1076, 272, 389, 570, 404, 225, 1175, 288,
431, 199, 51, 284, 865, 988, 1673, 920, 326, 382, 202,
2541, 1106, 1107, 366, 719, 1424, 234, 850, 1634, 175, 2502,
716, 1623, 1943, 569, 537, 1947, 1083, 89, 149, 563, 533,
1933, 1404, 564, 1928, 993, 224, 353, 466, 531, 768, 251,
917, 681, 7395, 7520, 7571, 7576, 7564, 7589]),
471: array([2]),
1632: array([ 2, 37]),
281: array([ 2, 15, 35, 64, 86, 96, 98, 104, 114, 164, 180,
192, 1598, 2435, 337, 375, 2336, 3450, 906, 605, 498, 1386,
558, 900, 3443, 3445, 302, 766, 491]),
269: array([ 2, 15, 35, 60, 64, 80, 86, 87, 98, 114, 126,
164, 177, 192, 196, 233, 900, 1070, 325, 369, 3129, 824,
7552, 801, 7415]),
2336: array([ 2, 3, 7, 8, 15, 19, 22, 26, 27, 31, 35,
54, 64, 73, 82, 85, 86, 94, 95, 96, 98, 104,
114, 126, 145, 157, 164, 186, 188, 218, 222, 230, 233,
246, 279, 281, 288, 292, 337, 352, 358, 374, 400, 469,
491, 498, 605, 902, 905, 906, 907, 953, 1205, 1388, 1390,
1393, 1598, 1820, 1905, 2293, 2312, 3449, 3377, 3436, 3435, 3362,
3432, 3428, 2428, 2422, 7273, 3391, 3372, 6921, 7398, 3286, 7364,
7560, 7393, 7385, 7604]),
206: array([ 2, 13, 46, 61, 87, 188, 823, 3133, 2354, 979, 7433,
2363, 1855, 2657, 1559, 3175, 1557, 457, 1062, 588, 284, 632,
7550, 7531, 7521, 7514]),
241: array([ 2, 12, 19, 24, 87, 95, 105, 115, 217, 2400, 602,
1353, 7507, 1822, 751, 397, 707, 1880, 895, 290]),
190: array([ 2, 3, 6, 7, 10, 11, 13, 15, 24, 25, 26,
32, 35, 41, 43, 49, 55, 66, 70, 80, 157, 177,
188, 189, 903, 889, 1391, 7600, 550, 2424, 196, 599, 197,
330, 392, 304, 267, 801, 318, 7588, 1760, 258, 798, 7580,
7400, 7562]),
7577: array([ 2, 135, 193, 197, 347, 687, 1030, 2174]),
798: array([ 2, 10, 11, 17, 21, 24, 26, 32, 52, 69, 78,
108, 117, 122, 133, 161, 177, 194, 197, 204, 232, 255,
264, 347, 370, 371, 487, 513, 588, 623, 639, 686, 691,
1312, 1503, 1146, 1505, 7397, 958, 1498, 957, 2186, 2917, 874,
2207, 1769, 1148, 1147, 1766, 955, 2198, 2884, 2185, 1491, 2183,
857, 2875, 1144, 2871, 1753, 1140, 2867, 1486, 1030, 951, 1297,
2849, 1029, 1741, 946, 2174, 7550, 949]),
211: array([ 2, 9, 11, 14, 15, 17, 20, 21, 24, 26, 47,
51, 67, 70, 97, 98, 156, 7603, 389, 1489, 272, 687,
631, 1143, 232, 1139, 1286, 4934, 683, 1292, 7592]),
140: array([ 2, 4, 10, 17, 22, 25, 31, 37, 47, 48, 65,
70, 79, 88, 131, 134, 680, 544, 476, 155, 1644, 216,
2318, 685, 7565, 406, 1649, 168, 671, 2584, 2568, 660, 7498]),
217: array([ 2, 39, 109, 130, 149, 175, 182, 241, 272, 516, 2561,
449, 1994, 476, 1651, 406, 663, 1235, 1973, 382, 1921, 911,
368, 328, 401, 2525, 1292]),
182: array([ 2, 50, 106, 149, 169, 175, 181, 3774, 609, 367, 473,
1437, 507, 382, 323, 778, 442, 660, 920, 217, 713, 236,
366, 841, 568, 7604]),
236: array([ 2, 4, 23, 152, 175, 182, 273, 383, 1214, 293, 350,
256, 934, 1973, 922, 365, 529, 2532]),
777: array([ 2, 4, 10, 11, 26, 42, 77, 130, 167, 255, 380, 382, 547]),
1098: array([ 2, 84, 150, 992, 7589, 7371]),
106: array([ 2, 4, 6, 8, 22, 29, 41, 43, 44, 70, 78,
84, 85, 182, 280, 1041, 406, 215, 472, 509, 660, 355,
252, 311, 297, 596, 131, 453, 396, 124, 739, 260, 166,
941, 189, 2041, 167, 561, 383, 154, 328, 715, 225, 536,
422, 187, 272, 500, 655, 7582]),
237: array([ 2, 4, 8, 16, 32, 40, 59, 62, 149, 167, 181,
205, 270, 505, 296, 563, 913, 848, 1219, 656, 7590]),
1401: array([ 2, 4, 23, 40, 148, 7552]),
1409: array([ 2, 16, 148, 2597]),
769: array([ 2, 91, 119, 152, 460]),
1209: array([ 2, 4, 379]),
917: array([ 2, 4, 40, 45, 1115]),
114: array([ 2, 15, 30, 35, 64, 70, 86, 94, 98, 104, 192,
337, 1594, 126, 352, 1903, 7335, 558, 233, 281, 279, 164,
906, 3398, 1906, 7312, 3429, 3426, 3423, 362, 1597, 2336, 3383,
1202, 766, 2413, 1896, 3374, 3372, 897, 1385, 1879, 832, 7370,
7310, 3369, 145, 469, 7560]),
2698: array([2]),
1127: array([ 2, 12, 20, 27, 74, 113]),
1009: array([ 2, 4, 44, 125, 603, 3405]),
1386: array([ 2, 15, 35, 164, 281, 375, 376, 1387, 7393]),
173: array([ 2, 3, 7, 11, 19, 21, 24, 26, 29, 30, 32,
42, 43, 51, 58, 81, 85, 88, 93, 105, 125, 141,
245, 2392, 174, 222, 191, 1841, 363, 7398, 376, 1370, 207,
398, 362, 3337, 602, 496, 1032, 2373, 1837, 7559, 7567]),
43: array([ 2, 3, 5, 7, 8, 11, 13, 19, 25, 29, 30,
32, 33, 34, 42, 142, 1899, 95, 49, 246, 330, 487,
189, 454, 319, 145, 118, 491, 751, 172, 161, 694, 122,
63, 135, 190, 743, 171, 82, 375, 134, 125, 154, 174,
1568, 2078, 88, 138, 1310, 960, 1795, 1331, 67, 85, 1337,
117, 1517, 203, 1162, 1779, 1783, 106, 156, 587, 1734, 1778,
503, 392, 393, 2936, 1328, 592, 137, 113, 5392, 177, 7588,
7378, 370, 7555, 173, 7602, 7601, 7598, 7599, 7604, 7600, 7584,
7583, 7501, 7426, 7484, 7506, 7417, 7505, 7513, 7483, 7535]),
647: array([ 2, 11, 19, 58, 96, 165, 188, 204, 206, 614, 2425,
7503, 7532, 7522, 7523, 7544, 7545]),
5: array([ 2, 3, 67, 19, 8, 46, 13, 248, 6, 31, 53,
32, 435, 191, 100, 242, 68, 163, 335, 174, 42, 433,
59, 36, 58, 982, 49, 43, 291, 818, 557, 81, 85,
24, 61, 158, 826, 122, 1872, 391, 1068, 88, 893, 115,
12, 47, 96, 250, 29, 641, 26, 434, 118, 309, 1372,
135, 288, 1375, 262, 1192, 1373, 1535, 198, 125, 82, 92,
460, 238, 178, 3336, 602, 3307, 154, 2395, 599, 3206, 245,
120, 1819, 3309, 310, 774, 695, 1156, 280, 27, 1909, 33,
3009, 462, 146, 2386, 597, 2360, 2354, 1194, 2333, 825, 1065,
1378, 136, 97, 93, 1868, 684, 336, 216, 1854, 1376, 801,
103, 692, 7444, 1859, 1832, 1359, 890, 3236, 7341, 988, 2363,
3231, 979, 706, 156, 1491, 2335, 395, 203, 289, 7568, 1551,
1176, 2397, 7559, 2408, 7578, 7591, 416, 7503, 7480, 2336, 7479,
7510, 7532, 7522, 7523, 338, 7544, 11, 7406, 7447, 7521, 7477,
384, 7575, 255, 7550, 3516, 7423, 7485, 759, 946, 7585, 7563,
3133, 7433, 7516, 7380, 197, 7457, 3254, 314, 7514]),
1156: array([ 2, 5, 11, 87, 397, 522]),
553: array([ 2, 33, 51, 95, 111, 113, 166, 176, 191, 550, 3252,
3249, 1561, 3243, 3153, 1543, 3090, 2231, 1321, 693, 7603]),
170: array([ 2, 3, 22, 26, 51, 113, 633, 286, 686, 1527, 1961,
2264, 370, 588, 185, 1805, 960, 2224, 2179, 482, 1137, 272,
7604]),
93: array([ 2, 3, 5, 10, 12, 19, 26, 49, 58, 85, 92,
764, 248, 111, 262, 891, 376, 125, 335, 308, 174, 95,
3412, 141, 484, 3284, 706, 1068, 1865, 158, 197, 1064, 295,
6369, 115, 259, 288, 229, 3171, 7342, 7515, 7469, 7550, 7468,
177]),
145: array([ 2, 9, 11, 15, 16, 17, 19, 21, 22, 24, 25,
26, 30, 42, 51, 54, 62, 65, 90, 95, 99, 108,
113, 131, 318, 2348, 165, 188, 1550, 3152, 483, 234, 469,
200, 396, 2251, 807, 1761, 487, 951, 737, 255, 1248, 1998,
150, 2469, 2456, 1611, 7327, 298, 153, 223, 1099, 7502, 201,
506, 941, 332, 381, 639, 7550, 7334, 7588, 7594, 7440, 7536,
7517, 5533, 7565, 7600, 7593, 7328]),
1523: array([ 2, 3, 13, 162, 554]),
350: array([ 2, 17, 21, 26, 33, 41, 113, 133, 176, 177, 193,
236, 1022, 3007, 696, 1321, 360, 801, 1941, 1740, 635, 7397]),
77: array([ 2, 4, 9, 10, 11, 14, 17, 21, 24, 33, 39,
40, 44, 47, 48, 65, 72, 76, 132, 155, 78, 268,
161, 131, 938, 408, 89, 177, 545, 1328, 777, 4934, 384,
926, 664, 231, 140, 2667, 403, 857, 1638, 853, 660, 516,
1440, 265, 1118]),
1785: array([ 2, 17, 62, 166]),
166: array([ 2, 8, 10, 11, 13, 17, 24, 25, 26, 31, 32,
37, 55, 62, 68, 83, 85, 106, 135, 137, 157, 177,
491, 995, 5342, 552, 2246, 2240, 1514, 1786, 390, 329, 347,
2244, 846, 1797, 695, 2995, 1164, 278, 1154, 205, 412, 2965,
7588, 1777, 553, 7415, 272, 1020, 548, 7603, 7517, 7536, 5533,
7565]),
489: array([ 2, 4, 20, 28, 165, 177]),
25: array([ 2, 4, 6, 7, 8, 10, 11, 13, 16, 21, 24,
265, 284, 52, 31, 696, 249, 29, 801, 686, 592, 145,
95, 329, 2280, 55, 62, 124, 332, 228, 1165, 523, 318,
876, 1787, 41, 32, 60, 1326, 133, 115, 663, 701, 7509,
641, 244, 1830, 1342, 1336, 1792, 47, 2308, 7551, 492, 97,
36, 3082, 396, 395, 3071, 140, 7540, 813, 143, 33, 287,
78, 3043, 26, 370, 656, 1340, 1788, 7553, 1155, 2135, 693,
2980, 172, 1136, 122, 166, 7588, 295, 1776, 266, 43, 37,
593, 187, 804, 7565, 487, 1764, 2196, 1031, 799, 1034, 258,
479, 142, 90, 949, 1032, 739, 1492, 7548, 177, 7593]),
83: array([ 2, 7, 9, 11, 21, 24, 26, 29, 36, 42, 51,
52, 66, 73, 177, 591, 394, 1048, 107, 7552, 166, 133,
207, 333, 967, 92, 279, 482, 253, 277, 2960, 390, 171,
1760, 7566, 7499, 255, 487, 7600, 7604, 7595, 7603]),
277: array([ 2, 10, 14, 17, 21, 26, 52, 69, 70, 78, 83,
135, 146, 198, 232, 272, 348, 740, 741, 2194, 1142, 2181,
389, 7375, 7564]),
70: array([ 2, 7, 9, 11, 13, 15, 16, 17, 20, 21, 22,
24, 26, 32, 33, 35, 39, 51, 64, 65, 66, 67,
69, 133, 279, 1767, 196, 966, 114, 698, 140, 360, 106,
427, 190, 258, 7387, 1308, 117, 2175, 277, 75, 346, 98,
108, 142, 232, 371, 156, 149, 211, 203, 7580, 491, 7603,
7388, 7565]),
1487: array([ 2, 9, 11, 117, 1035]),
270: array([ 2, 4, 16, 48, 148, 237, 472, 1932, 436, 379, 7455]),
871: array([ 2, 9, 11, 14, 17, 7603, 942, 7388]),
389: array([ 2, 23, 45, 57, 60, 74, 126, 211, 235, 249, 257,
277, 293, 304, 311, 379, 439, 7400, 1077, 2998, 2219, 491,
2123, 469, 630]),
544: array([ 2, 11, 17, 33, 140, 155, 251, 272, 563, 7547, 926]),
2631: array([ 2, 132]),
672: array([ 2, 76, 152, 169]),
137: array([ 2, 15, 17, 21, 31, 33, 39, 41, 43, 48, 67,
90, 844, 697, 7565, 329, 1781, 2245, 185, 264, 735, 1691,
870, 2717, 253, 2094, 7327, 7394, 2527, 2526]),
424: array([ 2, 36, 87, 95, 155, 183, 382, 404, 638, 2188, 1138,
1030, 1437, 537, 645]),
65: array([ 2, 4, 7, 9, 10, 11, 14, 16, 17, 20, 33,
38, 40, 51, 53, 57, 165, 113, 244, 272, 1271, 1939,
2018, 84, 140, 588, 66, 1258, 2159, 256, 77, 255, 145,
516, 155, 2682, 175, 1404, 2071, 1460, 467, 862, 109, 926,
341, 1248, 130, 1662, 2544, 1984, 1933, 850, 7604, 7572, 7596,
7570]),
1693: array([ 2, 366, 577]),
475: array([ 2, 48, 97, 282, 474, 7406, 1980, 724, 1627]),
449: array([ 2, 4, 16, 54, 65, 109, 175, 183, 217, 1004, 670,
7587]),
1998: array([ 2, 145]),
1977: array([ 2, 1978]),
915: array([ 2, 142, 239, 317, 611, 3097, 1407, 1104]),
63: array([ 2, 3, 4, 8, 11, 14, 16, 17, 24, 29, 42,
43, 60, 899, 336, 1551, 176, 801, 105, 392, 84, 655,
1784, 522, 209, 875, 537, 2483, 1095, 1083, 718, 565, 1222,
2453, 845, 224, 223, 7604, 7603, 177]),
1957: array([ 2, 39, 508]),
1635: array([ 2, 130, 1105]),
508: array([ 2, 4, 42, 102, 339, 444, 1957, 609, 7589]),
2499: array([2]),
2497: array([2]),
299: array([ 2, 48, 84, 125, 130, 150, 183, 226, 615, 614, 785,
327, 7564, 1241, 713, 355]),
1225: array([2]),
238: array([ 2, 5, 85, 92, 99, 118, 125, 219, 222, 714, 355,
249, 570, 3261, 250, 1095, 1624]),
2488: array([2]),
1936: array([ 2, 1232]),
382: array([ 2, 45, 84, 108, 167, 181, 182, 217, 252, 368, 777,
1004, 1002, 424, 422, 7589]),
355: array([ 2, 17, 41, 106, 108, 238, 263, 299, 314, 473, 2512,
444]),
1090: array([ 2, 16, 44, 2470]),
1222: array([ 2, 16, 63, 2000, 7550]),
21: array([ 2, 3, 4, 7, 8, 9, 11, 12, 13, 16, 17,
25, 41, 85, 205, 266, 329, 277, 157, 110, 274, 51,
45, 916, 30, 295, 362, 52, 62, 738, 1297, 201, 37,
77, 137, 5342, 1152, 90, 83, 100, 798, 1307, 181, 47,
700, 748, 89, 563, 878, 33, 319, 71, 197, 530, 225,
509, 621, 185, 360, 961, 1298, 78, 2759, 348, 264, 956,
552, 22, 522, 70, 393, 1986, 361, 145, 347, 585, 946,
7603, 48, 580, 28, 579, 1004, 669, 2520, 2486, 534, 42,
7552, 235, 99]),
564: array([ 2, 23, 45, 125, 244, 1618, 7552, 655, 748]),
2459: array([2]),
774: array([ 2, 3, 5, 178, 419, 531, 3133, 2475]),
209: array([ 2, 8, 63, 99, 148, 169, 176, 184, 267, 801, 257,
565, 2466, 1087, 7550]),
1922: array([ 2, 16]),
504: array([ 2, 23, 132, 1109, 2593, 608, 995]),
987: array([ 2, 419]),
2446: array([2]),
986: array([ 2, 167, 202, 408, 473, 540]),
1603: array([ 2, 67, 153, 212, 297, 2054]),
379: array([ 2, 4, 31, 40, 44, 54, 109, 223, 270, 1209, 1230,
466]),
446: array([ 2, 4, 44, 445, 768]),
436: array([ 2, 270, 1419, 7437]),
512: array([ 2, 4, 23, 109, 121, 148, 152, 460]),
971: array([ 3, 33, 81]),
92: array([ 3, 5, 6, 10, 12, 19, 27, 31, 34, 40, 58,
62, 73, 83, 105, 821, 1053, 141, 462, 309, 1576, 976,
93, 238, 95, 207, 526, 289, 7444, 296, 1847, 648, 1833,
1425, 3208, 132, 1560, 1620, 103, 1821, 331, 110, 199, 1323,
345, 177, 640, 2985, 1150, 958, 3774]),
7336: array([ 3, 68, 239]),
1545: array([3]),
103: array([ 3, 5, 10, 11, 12, 22, 27, 30, 32, 36, 40,
56, 58, 73, 85, 92, 102, 177, 413, 284, 398, 128,
1325, 1317, 946, 904, 249, 163, 125, 141, 118, 695, 5342,
7595, 1167, 585, 156, 295, 2229, 1508, 7565, 161]),
207: array([ 3, 6, 11, 19, 26, 29, 30, 34, 36, 53, 73,
83, 85, 92, 111, 173, 398, 666, 7516, 7595]),
6: array([ 3, 5, 138, 56, 31, 1069, 809, 112, 227, 19, 1057,
394, 36, 32, 1395, 1371, 41, 59, 1546, 198, 75, 524,
287, 1350, 734, 207, 1571, 1185, 1566, 208, 8, 95, 11,
25, 92, 27, 549, 49, 306, 7444, 168, 1064, 557, 431,
162, 2326, 100, 278, 1542, 728, 7, 1541, 24, 113, 280,
330, 20, 1461, 7415, 295, 458, 185, 47, 826, 3365, 1157,
1374, 150, 1865, 12, 123, 2352, 3305, 163, 266, 1380, 2384,
214, 7420, 2371, 638, 706, 526, 3229, 1560, 115, 3222, 3192,
3176, 190, 754, 815, 810, 3155, 818, 7390, 1254, 3158, 1555,
1179, 1178, 1361, 701, 1055, 305, 106, 116, 107, 522, 1152,
1830, 244, 3112, 492, 331, 85, 124, 1056, 43, 3096, 3066,
53, 3092, 7551, 16, 1346, 203, 545, 149, 2292, 117, 44,
519, 1776, 642, 40, 30, 7528, 133, 39, 1450, 408, 177,
7552, 7604, 90, 7554, 7587]),
7380: array([ 3, 5, 8, 30, 59, 227, 2382]),
287: array([ 3, 6, 8, 12, 17, 25, 29, 32, 85, 95, 123,
162, 177, 208, 424, 1875, 937, 966, 748]),
754: array([ 3, 6, 8, 27, 81, 454, 522, 645, 755, 1823, 812,
2332, 5342]),
26: array([ 3, 5, 7, 8, 9, 10, 11, 12, 17, 18, 19,
20, 22, 24, 279, 88, 29, 93, 47, 242, 112, 308,
2280, 32, 67, 245, 111, 7514, 27, 166, 33, 190, 348,
52, 485, 211, 49, 116, 191, 262, 7595, 268, 51, 70,
373, 83, 100, 1138, 318, 933, 1030, 386, 133, 277, 145,
75, 589, 142, 1294, 1128, 683, 125, 7516, 207, 3304, 173,
73, 266, 2389, 128, 1876, 666, 2372, 247, 2650, 248, 391,
172, 229, 760, 494, 975, 6369, 1568, 1175, 286, 1059, 1035,
333, 295, 244, 199, 330, 62, 5342, 216, 804, 219, 272,
38, 85, 66, 941, 350, 1501, 69, 55, 1309, 798, 632,
777, 547, 45, 505, 193, 305, 943, 35, 1137, 1711, 613,
284, 132, 1021, 231, 155, 7579, 7578, 177, 7604, 7602, 816,
7603]),
34: array([ 3, 11, 12, 16, 19, 24, 26, 27, 29, 32, 33,
59, 120, 291, 158, 398, 429, 128, 2369, 2414, 43, 92,
58, 288, 1863, 205, 36, 67, 894, 351, 983, 248, 1861,
249, 141, 136, 123, 207, 88, 3362, 416, 154, 1198, 308,
649, 490, 81, 131, 280, 980, 397, 1381, 292, 585, 219,
1352, 2378, 125, 118, 335, 290, 1378, 7525, 116, 320, 247,
208, 197, 2350, 7597]),
525: array([ 3, 96, 458, 493, 644, 819]),
545: array([ 3, 6, 9, 10, 14, 49, 77, 128, 135, 155, 415]),
56: array([ 3, 6, 10, 27, 30, 36, 53, 753, 244, 1567, 187,
124, 147, 227, 90, 177, 421, 95, 112, 2337, 103, 1539,
969, 168, 3166, 197, 242, 7595, 431, 1360, 1179, 1346, 107,
3107]),
1849: array([3]),
557: array([ 3, 5, 6, 27, 36, 81, 307, 348, 494]),
208: array([ 3, 6, 29, 33, 34, 59, 115, 162, 196, 287, 291,
320, 336, 484, 7578]),
1558: array([ 3, 49]),
2346: array([3]),
974: array([ 3, 33, 128, 149, 458]),
262: array([ 3, 11, 19, 72, 73, 93, 125, 172, 259, 892, 431,
1186, 975]),
2001: array([ 3, 4, 16]),
815: array([ 3, 6, 12, 36, 124, 177, 3238, 1191, 5342, 885, 7581]),
120: array([ 3, 5, 12, 34, 36, 46, 49, 59, 73, 82, 111,
310, 464, 141, 1201, 1071, 163, 245, 351, 222, 603, 454,
1368, 362, 762, 7512, 244]),
492: array([ 3, 6, 7, 10, 25, 27, 115, 177, 331]),
1175: array([ 3, 26, 45, 3168]),
32: array([ 3, 5, 6, 8, 9, 11, 12, 19, 24, 25, 26,
29, 43, 173, 105, 124, 36, 214, 90, 201, 58, 88,
166, 1506, 52, 67, 2365, 34, 49, 107, 142, 203, 521,
118, 191, 163, 245, 40, 59, 154, 3312, 366, 237, 219,
73, 287, 1064, 307, 941, 190, 672, 2244, 178, 696, 66,
136, 185, 103, 371, 102, 265, 393, 70, 7442, 1774, 7411,
2134, 257, 2132, 485, 177, 7583, 798, 796]),
172: array([ 3, 5, 7, 12, 25, 26, 27, 30, 36, 43, 73,
95, 96, 116, 124, 142, 555, 214, 177, 262, 413, 1190,
2342, 641, 493, 701, 1173, 491, 205, 5342, 1327, 7604]),
1347: array([ 3, 177, 1169]),
1338: array([3]),
105: array([ 3, 11, 12, 26, 27, 31, 32, 33, 49, 63, 81,
92, 97, 602, 199, 164, 118, 125, 1045, 315, 362, 289,
154, 704, 830, 1915, 649, 490, 163, 292, 1197, 484, 2397,
496, 720, 7445, 241, 613, 185, 7575, 7585]),
818: array([ 3, 5, 6, 19, 24]),
890: array([ 3, 5, 81, 413, 433, 701, 1032]),
1362: array([ 3, 27]),
1547: array([3]),
1056: array([ 3, 6, 29, 73]),
2280: array([ 3, 25]),
879: array([ 3, 7, 177, 199, 883]),
36: array([ 3, 5, 6, 7, 8, 11, 12, 19, 24, 25, 27,
30, 31, 32, 34, 118, 227, 177, 1381, 209, 1579, 122,
49, 56, 116, 424, 155, 82, 291, 197, 2379, 822, 1184,
207, 3368, 249, 375, 58, 222, 102, 983, 47, 1562, 585,
7585, 7563, 1580, 172, 351, 292, 702, 2377, 759, 801, 1378,
3187, 2374, 247, 103, 136, 120, 3254, 572, 7341, 815, 336,
557, 1067, 372, 42, 81, 290, 1191, 124, 83, 1173, 840,
95, 7597]),
590: array([ 3, 128, 6878, 614]),
3260: array([3]),
1174: array([ 3, 16]),
3198: array([3]),
460: array([ 3, 5, 47, 54, 73, 434, 705, 769, 512, 1855, 1129,
1366, 1559]),
3200: array([3]),
589: array([ 3, 9, 22, 26, 133, 157, 2154, 1736]),
433: array([ 3, 5, 125, 1063, 572, 2369, 3254, 2370, 890, 751, 7595,
7541]),
81: array([ 3, 5, 12, 27, 33, 34, 36, 42, 49, 58, 64,
1198, 105, 290, 971, 128, 754, 890, 3331, 467, 1562, 1553,
248, 173, 3250, 1570, 391, 627, 3244, 1859, 1573, 413, 229,
247, 557, 2357, 412, 645, 115]),
7509: array([ 3, 25, 102, 5342]),
973: array([ 3, 27, 641]),
1555: array([3, 6]),
3154: array([3]),
7595: array([ 3, 7, 8, 26, 27, 29, 30, 53, 56, 83, 103,
107, 112, 116, 123, 124, 129, 174, 177, 207, 244, 249,
288, 333, 430, 458, 548, 813, 882, 886, 969, 1045, 1157,
1171, 1323, 1541, 1691, 1792, 1811, 1841, 2319, 2322, 2327, 3074,
3079, 3136, 3138, 5342, 5679, 7418, 7461, 7556, 7604]),
1361: array([3, 6, 8]),
2331: array([ 3, 7552]),
509: array([ 3, 16, 21, 29, 41, 106, 228, 1000, 2345, 1157, 878,
2548]),
969: array([ 3, 56, 100, 177, 288, 2214, 7595]),
197: array([ 3, 5, 7, 9, 11, 14, 17, 19, 21, 30, 34,
36, 47, 56, 58, 69, 93, 115, 125, 135, 162, 177,
190, 7603, 591, 198, 7577, 822, 759, 2369, 1068, 459, 1794,
244, 798]),
621: array([ 3, 7, 21, 24, 107, 177, 2843]),
1036: array([ 3, 7, 877, 7515]),
1054: array([ 3, 4, 73, 177]),
1818: array([3]),
358: array([ 3, 113, 130, 147, 199, 258, 288, 2015, 1438, 7604, 7603]),
748: array([ 3, 7, 8, 21, 26, 30, 43, 124, 177, 179, 228,
287, 564, 885, 1811, 3067, 813]),
13: array([ 3, 5, 7, 8, 11, 127, 46, 21, 659, 1394, 651,
435, 884, 1052, 1050, 1168, 965, 1534, 1529, 133, 1817, 812,
70, 1533, 1810, 1532, 697, 1172, 320, 20, 599, 1819, 2288,
2277, 178, 2283, 2284, 701, 1049, 2275, 2243, 428, 881, 7580,
1759, 204, 417, 3370, 650, 2355, 2386, 1523, 3241, 3180, 1332,
1182, 1831, 3144, 2320, 2290, 1828, 71, 3119, 597, 1808, 1351,
3089, 1537, 5389, 3075, 1813, 3045, 1807, 2278, 3076, 95, 3073,
2270, 952, 3072, 1535, 2279, 5446, 1210, 1812, 3060, 3061, 112,
3047, 3056, 3054, 33, 3051, 3050, 1335, 3046, 258, 2268, 1531,
456, 2267, 2266, 3041, 3039, 1528, 3037, 3036, 3038, 75, 25,
1767, 7403, 1497, 7397, 430, 517, 7549, 7478, 7550, 7514, 7429,
68, 7603, 7565, 487]),
88: array([ 3, 4, 5, 10, 11, 15, 16, 17, 19, 20, 24,
26, 27, 29, 32, 33, 34, 43, 49, 73, 82, 85,
461, 646, 245, 125, 140, 1201, 1518, 145, 288, 164, 173,
318, 7398, 666, 92, 308, 3235, 296, 815, 95, 684, 179,
1309, 129, 1405, 7559, 7328, 124, 7404, 7552, 185, 7543, 7419,
177]),
163: array([ 3, 5, 6, 12, 19, 32, 39, 44, 47, 58, 59,
73, 82, 87, 96, 103, 105, 115, 118, 120, 165, 398,
249, 375, 416, 650, 462, 397, 1774, 7560]),
1576: array([3]),
2353: array([3]),
3221: array([3]),
136: array([ 3, 4, 5, 8, 29, 30, 31, 32, 33, 34, 36,
41, 42, 51, 62, 67, 73, 95, 97, 123, 135, 528,
893, 258, 962, 876, 2242, 2406, 158, 891, 1459, 228, 154,
203, 1794, 153, 7604]),
1367: array([ 3, 305, 691, 1187, 2349]),
7512: array([ 3, 10, 24, 29, 120, 204, 260, 487, 639, 703, 937,
1190, 1766]),
757: array([ 3, 7, 12, 19, 30, 49, 113, 1844, 1187, 7509]),
49: array([ 3, 5, 6, 10, 11, 12, 19, 24, 31, 32, 36,
43, 44, 120, 93, 310, 250, 596, 82, 88, 81, 483,
289, 123, 290, 1378, 145, 1874, 526, 141, 125, 111, 104,
398, 1198, 1195, 280, 158, 7436, 830, 650, 105, 242, 1558,
545, 666, 3289, 336, 759, 2372, 115, 519, 174, 156, 1560,
190, 757, 73, 7341, 2338, 7525, 177, 7563, 7585, 7597]),
646: array([ 3, 88, 1844]),
2328: array([ 3, 27, 85]),
1360: array([ 3, 7, 12, 56]),
7377: array([ 3, 11, 24]),
2321: array([3]),
125: array([ 3, 5, 11, 12, 19, 24, 26, 27, 29, 34, 43,
49, 53, 68, 82, 85, 88, 93, 95, 96, 103, 105,
118, 429, 292, 131, 191, 197, 433, 249, 412, 497, 3408,
238, 484, 376, 526, 1009, 3396, 345, 321, 826, 173, 299,
704, 649, 2395, 496, 2394, 309, 247, 413, 289, 253, 458,
262, 817, 1840, 564, 666, 1837, 3362, 7367, 1032, 7502, 7568,
2225, 7431]),
1548: array([ 3, 177]),
1837: array([ 3, 125, 173]),
1834: array([ 3, 12]),
703: array([ 3, 22, 24, 30, 33, 177, 196, 244, 7340, 7512]),
1456: array([ 3, 23, 66, 108, 147, 177, 421, 7603]),
7406: array([ 3, 19, 475]),
1832: array([3, 5]),
3105: array([3]),
641: array([ 3, 5, 8, 22, 25, 27, 33, 67, 99, 172, 288,
610, 973, 2341, 958, 1224, 2255, 762]),
320: array([ 3, 10, 11, 13, 30, 33, 34, 58, 85, 115, 156,
208, 278, 2833, 435, 5342]),
885: array([ 3, 8, 24, 29, 62, 124, 748, 886, 1825, 1826]),
632: array([ 3, 6, 9, 24, 26, 40, 44, 72, 78, 256, 1485,
3151]),
2300: array([3]),
702: array([ 3, 27, 36, 87, 229, 309, 397, 602, 707]),
228: array([ 3, 11, 25, 29, 33, 73, 97, 122, 124, 136, 162,
177, 179, 816, 509, 318, 7334, 813, 7604]),
752: array([ 3, 7, 67, 95, 179, 199, 373, 493, 937]),
2291: array([3]),
656: array([ 3, 23, 25, 46, 209, 214, 237, 373, 749, 1161, 937,
812, 7529]),
877: array([ 3, 7, 8, 27, 177, 1036]),
2293: array([ 3, 15, 35, 64, 104, 145, 498, 3369, 2336]),
650: array([ 3, 13, 19, 49, 72, 105, 158, 163, 490, 530, 649]),
3328: array([3]),
720: array([ 3, 105, 139, 226, 252, 323, 343, 398, 437, 718, 1979,
7546]),
1379: array([ 3, 11, 96, 398, 424, 462, 549]),
219: array([ 3, 9, 16, 17, 26, 31, 32, 72, 81, 82, 90,
115, 135, 154, 163, 177, 216, 1789, 238, 304, 370, 802,
1711, 1478, 2686]),
3275: array([3]),
345: array([ 3, 7, 11, 24, 30, 31, 51, 92, 95, 97, 125,
158, 463, 1744]),
336: array([ 3, 5, 8, 27, 36, 49, 59, 63, 116, 208, 247,
7525, 1577, 980]),
295: array([ 3, 6, 8, 9, 11, 21, 22, 25, 26, 33, 51,
85, 93, 153, 162, 244, 5342, 497, 839, 937, 296, 1517,
1152]),
1375: array([ 3, 5, 305]),
435: array([ 3, 5, 13, 46, 115, 127, 157, 320, 434, 1855, 2368,
627, 1190, 1825]),
7390: array([ 3, 6, 8, 115, 260, 374, 3197]),
12: array([ 3, 5, 6, 11, 19, 73, 291, 1031, 651, 27, 154,
305, 59, 288, 976, 100, 36, 93, 229, 44, 484, 103,
292, 757, 31, 81, 602, 125, 123, 751, 120, 614, 21,
1374, 40, 158, 32, 1516, 1577, 391, 290, 627, 7444, 458,
904, 164, 375, 1586, 1784, 898, 1127, 92, 163, 245, 1072,
104, 1064, 1210, 157, 983, 3354, 321, 3366, 216, 530, 882,
105, 376, 47, 2401, 7575, 82, 649, 2396, 1535, 1873, 280,
1331, 34, 397, 241, 1360, 7585, 118, 1043, 459, 49, 3297,
1553, 3293, 1322, 128, 287, 335, 1184, 822, 3281, 315, 1162,
191, 886, 156, 1846, 1061, 174, 1376, 395, 1065, 35, 1861,
825, 116, 1834, 1185, 1350, 289, 26, 97, 975, 3234, 172,
1187, 893, 3232, 7525, 33, 247, 826]),
1853: array([ 3, 203, 412]),
1184: array([ 3, 12, 19, 30, 36]),
1064: array([ 3, 6, 7, 12, 32, 49, 93]),
1560: array([ 3, 49, 92, 138]),
988: array([ 3, 5, 45, 67, 152, 478, 629, 7596]),
203: array([ 3, 5, 6, 7, 9, 11, 17, 22, 32, 41, 42,
43, 52, 53, 62, 70, 78, 89, 90, 122, 133, 136,
155, 177, 296, 1853, 1339, 1292]),
976: array([ 3, 12, 92, 130]),
289: array([ 3, 5, 7, 12, 49, 92, 102, 105, 125, 156, 229,
253, 391, 627, 1888, 830, 413, 1425, 7467]),
7394: array([ 3, 28, 50, 75, 137, 474, 1358, 7576, 7571, 7520]),
3203: array([3]),
1844: array([ 3, 646, 757]),
2053: array([ 3, 258]),
1363: array([ 3, 67, 108, 130]),
3181: array([3]),
288: array([ 3, 5, 8, 12, 19, 34, 45, 58, 72, 82, 85,
87, 88, 93, 95, 141, 185, 191, 249, 280, 2347, 358,
290, 1278, 3350, 2369, 1588, 759, 318, 570, 969, 641, 7595,
1691]),
2263: array([ 3, 642]),
111: array([ 3, 4, 7, 26, 29, 30, 33, 49, 59, 93, 95,
207, 245, 120, 417, 141, 433, 125, 7512, 7589, 7602, 7601,
7599, 7598, 7604, 7539]),
3132: array([3]),
1059: array([ 3, 7, 26, 30, 244, 484, 2357, 1153]),
2332: array([ 3, 754]),
691: array([ 3, 1367, 798]),
1550: array([ 3, 10, 145]),
286: array([ 3, 8, 26, 31, 80, 90, 170, 177, 706, 1519, 753,
411, 7399, 7603, 7591, 7579, 7556]),
205: array([ 3, 8, 11, 21, 22, 24, 29, 34, 41, 58, 62,
85, 95, 107, 166, 172, 362, 219, 308, 643, 237, 232,
401, 807, 319, 876, 1041, 857, 2976, 266]),
1359: array([ 3, 42]),
1459: array([ 3, 8, 16, 136]),
135: array([ 3, 5, 9, 11, 17, 22, 29, 43, 64, 67, 110,
133, 551, 586, 197, 176, 5342, 235, 277, 136, 3267, 941,
166, 177, 2895, 142, 1732, 547, 545, 219, 2127, 7577]),
7432: array([ 3, 95]),
1815: array([ 3, 116, 1053]),
3127: array([3]),
1836: array([3, 8]),
266: array([ 3, 6, 7, 8, 21, 25, 26, 30, 41, 52, 69, 75, 90,
133, 174, 177, 205, 296, 451]),
2311: array([ 3, 177]),
2308: array([ 3, 25]),
27: array([ 3, 5, 6, 7, 12, 14, 26, 56, 754, 267, 333,
36, 199, 123, 222, 103, 591, 645, 973, 125, 34, 58,
105, 81, 290, 877, 110, 29, 371, 47, 88, 519, 1179,
321, 1127, 280, 2418, 118, 104, 706, 82, 825, 617, 87,
702, 336, 172, 585, 557, 253, 1187, 886, 96, 92, 67,
2336, 1055, 1362, 142, 177, 641, 492, 30]),
755: array([ 3, 754]),
2306: array([3]),
5679: array([ 3, 7, 62, 95, 115, 116, 7595]),
1790: array([ 3, 5342]),
937: array([ 3, 8, 27, 79, 177, 199, 287, 295, 436, 752, 1352,
1129, 1029, 1342, 7512, 7594, 7551]),
100: array([ 3, 5, 6, 7, 8, 11, 12, 14, 16, 21, 26,
44, 50, 52, 66, 90, 827, 133, 7588, 1694, 969, 645,
124, 7501, 547, 2720, 365, 2490, 7604, 7603, 177]),
53: array([ 3, 5, 6, 19, 25, 29, 5342, 85, 185, 56, 748,
110, 3417, 417, 375, 245, 73, 429, 58, 65, 1959, 360,
332, 138, 319, 1522, 179, 203, 330, 7588, 124, 7595, 7545,
754, 7512, 177, 7463, 7443]),
2298: array([ 3, 1649]),
3087: array([3]),
1274: array([ 3, 46, 61, 178]),
227: array([ 3, 4, 6, 8, 30, 36, 56, 73, 90, 124, 145,
177, 244, 330, 1883, 6369, 7380, 278, 1833, 7604]),
1348: array([ 3, 7, 95, 882, 1349]),
2295: array([3]),
394: array([ 3, 4, 6, 41, 83]),
3043: array([ 3, 25]),
2285: array([ 3, 1053]),
1536: array([ 3, 10, 32, 371, 1856]),
883: array([ 3, 73, 90, 107, 315, 522, 879]),
199: array([ 3, 16, 26, 27, 33, 45, 55, 63, 73, 92, 101,
105, 134, 143, 146, 1587, 937, 315, 1811, 2324, 451, 244,
752, 1820, 801, 440, 267, 369, 259, 879, 358, 517, 257]),
2290: array([3]),
1816: array([3, 4]),
1524: array([ 3, 177, 332, 7590]),
1169: array([ 3, 124, 1347]),
749: array([ 3, 656, 813]),
876: array([ 3, 25, 30, 41, 80, 136, 154, 205, 278, 360, 453,
491, 613, 741, 1340, 7553, 1776, 7476]),
33: array([ 3, 5, 6, 8, 9, 10, 12, 13, 14, 15, 18,
19, 21, 24, 25, 26, 29, 30, 34, 177, 487, 629,
1280, 111, 105, 649, 115, 88, 431, 971, 199, 112, 43,
95, 136, 41, 1513, 350, 7565, 305, 42, 1283, 77, 253,
408, 312, 540, 183, 58, 362, 2968, 974, 125, 81, 3322,
326, 290, 1882, 1191, 87, 376, 7398, 7341, 459, 208, 641,
244, 71, 703, 1833, 1349, 7428, 228, 964, 7513, 2252, 1796,
1788, 1166, 960, 320, 137, 1327, 295, 587, 154, 553, 623,
1038, 503, 2932, 550, 1762, 1760, 1502, 1766, 370, 1034, 204,
2878, 739, 50, 1314, 1754, 1137, 255, 635, 1144, 482, 7387,
69, 65, 70, 2150, 2809, 146, 1139, 1136, 735, 2122, 7372,
586, 870, 1476, 544, 841, 2105, 1130, 48, 2007, 167, 2596,
45, 798, 7499, 7604]),
3: array([ 5, 92, 6, 207, 123, 287, 103, 34, 56, 142, 120,
351, 333, 43, 32, 1338, 170, 105, 818, 208, 974, 347,
93, 973, 1362, 1547, 509, 197, 1545, 1056, 492, 2280, 879,
36, 259, 19, 291, 891, 460, 30, 557, 1849, 136, 1558,
115, 31, 890, 262, 190, 1555, 7595, 1361, 971, 1359, 1061,
1356, 42, 493, 331, 124, 1824, 621, 1036, 754, 1054, 702,
1818, 90, 396, 24, 13, 88, 163, 158, 590, 247, 3254,
1576, 7380, 7525, 2353, 638, 1174, 525, 289, 1363, 2346, 7512,
81, 7509, 646, 162, 1839, 21, 244, 1360, 7377, 2321, 174,
1548, 1834, 107, 969, 886, 1832, 641, 2306, 1790, 1523, 63,
937, 632, 2300, 1175, 752, 2295, 394, 2291, 1536, 358, 1339,
748, 1816, 7, 29, 7336, 1347, 290, 87, 72, 3328, 720,
650, 1379, 309, 3275, 1336, 345, 295, 3260, 1375, 97, 2903,
7390, 12, 753, 1853, 1184, 3221, 1560, 1358, 7341, 203, 3198,
976, 1063, 454, 3203, 3200, 1367, 433, 2053, 3181, 288, 2263,
757, 49, 111, 1843, 2336, 774, 3132, 1059, 2332, 2334, 3154,
691, 2328, 1550, 1060, 286, 2331, 205, 7431, 1459, 2315, 27,
1837, 135, 1835, 173, 125, 815, 10, 7432, 1815, 3127, 1836,
296, 266, 703, 1456, 7406, 242, 3105, 2311, 320, 40, 1129,
1147, 2308, 955, 755, 5679, 885, 100, 53, 2298, 3087, 228,
1274, 227, 1348, 75, 2285, 883, 8, 199, 2290, 11, 656,
62, 877, 95, 701, 1524, 1043, 116, 749, 3049, 876, 172,
7604]),
215: array([ 4, 84, 106, 169, 322, 232, 327, 323, 912, 473, 1405,
569, 1094]),
479: array([ 4, 25, 256, 259, 1699, 7564]),
302: array([ 4, 60, 89, 157, 176, 187, 334, 689, 3223, 403]),
131: array([ 4, 11, 20, 22, 31, 34, 44, 48, 74, 77, 90,
106, 110, 125, 791, 669, 148, 680, 938, 863, 654, 154,
145, 561, 290, 3212, 1404, 380, 721, 140, 1453, 1241, 169,
353, 607, 531]),
859: array([ 4, 255, 573, 7419, 2703, 2099]),
437: array([ 4, 18, 23, 35, 57, 117, 237, 472, 2014, 720, 1979,
669]),
1398: array([ 4, 23]),
423: array([ 4, 11, 187, 275, 1277, 1718, 1108, 1659]),
510: array([ 4, 10, 181, 1679]),
420: array([4]),
224: array([ 4, 45, 63, 84, 354, 225, 735, 1653, 1996, 2570, 384,
1997, 1994, 535, 2472, 7552, 2471, 2462, 2464, 234]),
7582: array([ 4, 16, 990]),
16: array([ 4, 6, 14, 109, 37, 199, 311, 149, 63, 567, 23,
134, 1174, 107, 101, 54, 779, 75, 442, 7552, 993, 655,
711, 164, 1407, 21, 1286, 1664, 121, 65, 353, 40, 100,
1091, 42, 565, 470, 44, 419, 981, 171, 509, 2257, 60,
392, 177, 7596, 2003, 781, 1983, 367, 1640, 148, 3516, 7550,
270, 1925, 72, 771, 503, 145, 34, 3194, 2340, 1459, 70,
25, 30, 1322, 2954, 2206, 2950, 2944, 2077, 265, 2930, 1427,
1326, 219, 926, 1010, 406, 449, 2001, 2565, 2556, 88, 45,
226, 187, 223, 341, 850, 154, 537, 7582, 250, 1093, 1409,
1222, 1090, 298, 534, 1922, 1610, 1083, 1917, 138, 237, 710,
607, 2445, 1914, 1212, 842, 1910, 768, 7438, 80, 7587]),
311: array([ 4, 106, 119, 1426, 610, 1444, 1096, 1089]),
562: array([ 4, 10, 24, 780, 1466]),
1230: array([ 4, 134, 379]),
1239: array([ 4, 48]),
567: array([ 4, 16, 101]),
139: array([ 4, 10, 23, 572, 663, 1641, 713, 720, 328, 718, 2522,
2518, 327, 2509, 2514, 2515, 2510, 2507, 341, 167, 1083, 148,
2257, 7603]),
466: array([ 4, 23, 45, 54, 148, 223, 356, 379, 658, 654]),
1085: array([ 4, 129]),
771: array([ 4, 16, 419]),
1270: array([ 4, 109, 1921]),
1264: array([ 4, 167, 7603]),
3774: array([ 4, 89, 182, 931, 1247, 1655, 1684, 7561]),
 1003: array([  4,  79, 169, 272]),
 367: array([   4,   16,   74,  155,  182,  368, 1648,  383, 1638, 7587]),
 1107: array([ 4, 45]),
 ...
 (output truncated: the full dictionary maps each node id to a NumPy array of its neighbor ids)
 ...}
```python
gen_step(430, node_lists)
```
1
```python
# take one random step from node 430, then compare the neighborhood of the
# start node with the neighborhood of the node we stepped to
cur_node = gen_step(430, node_lists)
prev_node_list = node_lists[cur_node]
cur_node_list = node_lists[430]
shared_nodes = list(set(prev_node_list) & set(cur_node_list))    # neighbors of both
unshared_nodes = list(set(prev_node_list) ^ set(cur_node_list))  # neighbors of exactly one
prev_node = 430
```
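To see what these two groups look like, a quick inspection (assuming the cell above has just run, so `prev_node`, `cur_node`, and the two sets are still in scope):

```python
# compare the two neighborhoods computed above
print('stepped from', prev_node, 'to', cur_node)
print('shared neighbors:  ', len(shared_nodes))
print('unshared neighbors:', len(unshared_nodes))
```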
```python
import random  # weighted sampling via random.choices


def gen_biased_step(cur_val, prev_val, dict_vals, p=1, q=1):
    # One biased step: candidates adjacent to both prev_val and cur_val are
    # weighted 1/p, candidates adjacent to only one of them 1/q, and the
    # previous node itself 1 (a node2vec-style p/q weighting).
    prev_node_list = dict_vals[prev_val]
    cur_node_list = dict_vals[cur_val]
    shared_nodes = list(set(prev_node_list) & set(cur_node_list))
    # symmetric difference, minus prev_val so it is only listed (and weighted) once
    unshared_nodes = list((set(prev_node_list) ^ set(cur_node_list)) - {prev_val})
    all_nodes = shared_nodes + unshared_nodes + [prev_val]
    all_weights = [1 / p] * len(shared_nodes) + [1 / q] * len(unshared_nodes) + [1]
    node_step = random.choices(all_nodes, weights=all_weights)
    return node_step  # a one-element list
```
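Since the step is random, a quick way to see the effect of `p` and `q` (a rough sketch, not part of the pipeline, assuming `node_lists` is still in scope) is to repeat the step many times and tally the landing nodes; with `p = 0.1` the shared neighbors carry weight 10 each, and with `q = 10` the unshared candidates carry weight 0.1 each:

```python
from collections import Counter

# repeat the biased step 1000 times and count where it lands
samples = [gen_biased_step(cur_val=59, prev_val=430,
                           dict_vals=node_lists, p=0.1, q=10)[0]
           for _ in range(1000)]
Counter(samples).most_common(5)
```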
```python
test = gen_biased_step(cur_val=59, prev_val=430, dict_vals=node_lists, p=1, q=1)
test
```
[247]
```python
def gen_walk_biased(key_val, dict_vals, steps, p=1, q=1):
    # Generate one biased walk of length `steps`, starting at key_val.
    walk_vals = [key_val]
    for i in range(0, steps - 1):
        if i == 0:
            cur_val, prev_val = key_val, key_val
        else:
            cur_val, prev_val = walk_vals[-1], walk_vals[-2]
        if cur_val not in dict_vals:
            break  # dead end: this node has no outgoing edges
        walk_vals.append(
            gen_biased_step(cur_val=cur_val, prev_val=prev_val,
                            dict_vals=dict_vals, p=p, q=q)[0])
    return walk_vals


# Split the candidate nodes into three groups (shared neighbors, unshared
# neighbors, and the previous node) and apply a weighting to each one to
# change the likelihood of leaving the current neighborhood. This is a
# biased random walk in the spirit of the node2vec paper; p and q default
# to 1, which makes the walk behave like the unbiased DeepWalk-style walk
# described earlier.
def RW_Biased(orig_nodes, to_vals, walk_length=3, p=1, q=1):
    from_vals = pd.unique(orig_nodes)  # assumes pandas Series inputs
    node_lists = {x: to_vals[orig_nodes == x].values for x in from_vals}
    start_nodes = [*node_lists]
    walks = [gen_walk_biased(key_val=x, dict_vals=node_lists,
                             steps=walk_length, p=p, q=q)
             for x in start_nodes]
    return walks
```
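Before running on the full edge list, a self-contained toy graph (hypothetical node ids, not from the dataset) makes the walk mechanics easy to follow; `p = q = 1` reduces to an unbiased walk, while e.g. `q = 10` makes leaving the joint neighborhood of the last two nodes ten times less likely:

```python
# a five-node toy adjacency dictionary
toy_lists = {0: [1, 2],
             1: [0, 2, 3],
             2: [0, 1, 4],
             3: [1, 4],
             4: [2, 3]}

gen_walk_biased(key_val=0, dict_vals=toy_lists, steps=6, p=1, q=1)
```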
```python
# cast the node ids to strings, so each walk comes back as a list of string tokens
full_from = full_from.astype(str)
full_to = full_to.astype(str)
```
```python
test = RW_Biased(full_from, full_to, walk_length=10, p=0.5, q=0.7)
test
```
[['7188', '1', '7557', '542', '48', '2114', '48', '270', '4', '131'],
['430', '817', '430', '1055', '430', '817', '125', '249', '25', '1031'],
['3134', '27', '754', '755', '3', '227', '8', '107', '7565', '18'],
['3026', '1', '35', '192', '171', '73', '163', '59', '494', '11'],
['3010', '1', '472', '1235', '2588', '1235', '328', '661', '1007', '448'],
['804', '7583', '804', '7583', '1020', '51', '40', '51', '1020', '166'],
 ...
 (output truncated: one length-10 walk per unique start node)
['2263', '3', '36', '103', '85', '73', '332', '177', '2924', '177'],
['111', '7589', '775', '7589', '175', '210', '7550', '209', '1087', '23'],
['3132', '3', '1043', '128', '193', '26', '350', '236', '175', '210'],
['1059', '26', '67', '52', '193', '107', '883', '90', '2', '915'],
['2332', '3', '158', '49', '5', '7585', '36', '557', '3', '3254'],
['691', '1367', '305', '370', '170', '370', '33', '1796', '1', '1318'],
['1550', '3', '632', '40', '452', '7603', '685', '140', '2318', '244'],
['286', '8', '7496', '708', '498', '104', '2432', '15', '179', '177'],
['205', '41', '356', '78', '17', '272', '2476', '272', '145', '7594'],
['1359', '42', '7526', '42', '24', '93', '19', '3254', '433', '7595'],
['1459', '136', '528', '7591', '7422', '7591', '7422', '7591', '82', '3360'],
['135', '176', '267', '11', '276', '556', '276', '556', '276', '1'],
['7432', '95', '643', '333', '7595', '5679', '116', '3137', '116', '706'],
['1815', '1053', '1815', '116', '2376', '149', '419', '167', '986', '2'],
['3127', '3', '1338', '3', '493', '95', '8', '1794', '29', '287'],
['1836', '8', '457', '2297', '457', '1854', '457', '206', '3175', '206'],
['266', '52', '637', '309', '87', '602', '7575', '602', '7575', '12'],
['2311', '3', '757', '1187', '3199', '1187', '88', '666', '95', '172'],
['2308', '3', '11', '639', '10', '1636', '1967', '1636', '10', '491'],
['27', '172', '493', '172', '12', '158', '136', '2242', '136', '7604'],
['755', '754', '27', '321', '7', '43', '25', '13', '650', '490'],
['2306', '3', '3328', '3', '1523', '13', '11', '26', '231', '2102'],
['5679', '3', '1360', '12', '31', '174', '92', '526', '49', '280'],
['1790', '5342', '7595', '7', '2301', '7', '1348', '1349', '187', '423'],
['937', '8', '7507', '241', '602', '173', '363', '377', '363', '432'],
['100', '124', '83', '51', '2143', '17', '2206', '16', '2556', '16'],
['53', '7463', '53', '124', '1169', '1347', '1169', '1347', '177', '1714'],
['2298', '3', '205', '876', '7476', '876', '491', '94', '192', '98'],
['3087', '3', '1036', '3', '34', '12', '292', '1526', '292', '432'],
['1274', '178', '1814', '61', '3104', '61', '1828', '61', '1813', '13'],
['227', '73', '26', '62', '1043', '1', '1034', '52', '90', '52'],
['1348', '7', '83', '2', '211', '70', '117', '1488', '97', '925'],
['2295', '3', '174', '95', '7497', '95', '424', '183', '1116', '11'],
['394', '6', '3222', '6', '1560', '138', '477', '1640', '16', '164'],
['3043', '3', '1790', '3', '30', '244', '333', '373', '89', '29'],
['2285', '3', '228', '816', '177', '332', '177', '32', '672', '2'],
['1536', '371', '523', '25', '593', '41', '42', '62', '5679', '3'],
['883', '3', '1360', '7', '497', '464', '497', '464', '53', '110'],
['199', '257', '2916', '257', '32', '66', '42', '136', '891', '893'],
['2290', '3', '1359', '42', '33', '1144', '33', '10', '319', '7'],
['1816', '4', '148', '2', '257', '1295', '257', '380', '777', '10'],
['1524', '3', '320', '33', '326', '45', '353', '996', '1615', '996'],
['1169', '124', '585', '123', '1376', '372', '20', '730', '79', '1276'],
['749', '813', '5342', '507', '2104', '79', '732', '682', '2109', '1'],
['876', '41', '7562', '7', '7588', '428', '7600', '161', '550', '553'],
['33', '18', '260', '189', '1506', '32', '9', '1163', '9', '1478'],
['3', '877', '7', '640', '149', '50', '7603', '546', '7603', '1264'],
['215', '569', '405', '507', '182', '660', '189', '1298', '21', '522'],
['479', '25', '1165', '177', '73', '88', '16', '265', '490', '177'],
['302', '176', '2634', '268', '1039', '268', '101', '42', '51', '40'],
['131', '721', '406', '721', '406', '106', '8', '1068', '197', '198'],
['859', '4', '840', '54', '7564', '1455', '7564', '275', '1688', '384'],
['437', '18', '121', '800', '71', '243', '1177', '1', '2947', '1'],
['1398', '23', '504', '608', '340', '608', '504', '995', '1210', '857'],
['423', '187', '16', '779', '16', '2565', '16', '14', '392', '184'],
['510', '181', '510', '1679', '1978', '513', '798', '264', '185', '264'],
['420', '4', '379', '44', '446', '44', '49', '174', '3', '701'],
['224', '4', '1415', '8', '1415', '356', '743', '134', '743', '356'],
['7582', '990', '419', '84', '1417', '50', '663', '820', '663', '2335'],
['16', '2950', '16', '1010', '2110', '1010', '2110', '1010', '2110', '1010'],
['311', '4', '2024', '4', '211', '1286', '9', '1023', '22', '613'],
['562', '4', '3774', '89', '28', '50', '778', '2011', '1657', '2011'],
['1230', '4', '2439', '4', '367', '383', '7564', '89', '1267', '28'],
['1239', '48', '2114', '48', '10', '56', '197', '798', '955', '798'],
['567', '101', '193', '168', '6', '2352', '6', '3', '120', '5'],
['139', '167', '33', '1833', '33', '204', '7565', '687', '639', '11'],
['466', '223', '91', '1665', '91', '441', '721', '50', '1755', '11'],
['1085', '129', '1227', '7401', '4', '386', '144', '758', '495', '386'],
['771', '16', '2003', '16', '1922', '16', '2001', '16', '1222', '63'],
['1270', '109', '999', '109', '422', '536', '84', '404', '175', '42'],
['1264',
'7603',
'548',
'7603',
'1723',
'7603',
'2828',
'7603',
'1479',
'2131'],
['3774', '182', '169', '668', '380', '777', '167', '45', '1424', '189'],
['1003', '169', '1439', '130', '1646', '66', '483', '2994', '483', '177'],
['367', '74', '1263', '74', '131', '3212', '131', '721', '297', '91'],
['1107', '4', '567', '4', '121', '2118', '18', '220', '304', '1'],
['661', '404', '2506', '1666', '401', '715', '7552', '16', '145', '7334'],
['274', '167', '419', '987', '2', '2459', '2', '1409', '148', '109'],
['654', '716', '2495', '716', '57', '872', '57', '361', '3012', '361'],
['910', '223', '910', '223', '910', '4', '1086', '2689', '1086', '4'],
['1091', '4', '1737', '547', '585', '36', '291', '5', '163', '3'],
['84', '1417', '50', '35', '7415', '863', '131', '1241', '299', '150'],
['531', '7327', '531', '1095', '531', '774', '531', '45', '21', '3'],
['501', '794', '501', '44', '16', '60', '361', '60', '7564', '1441'],
['583', '628', '481', '314', '355', '106', '22', '42', '181', '356'],
['1702', '4', '215', '323', '1015', '323', '1241', '538', '536', '1237'],
['3356', '4', '223', '165', '407', '109', '16', '419', '99', '7552'],
['733', '122', '1738', '122', '43', '33', '1349', '2292', '6', '815'],
['356', '7564', '1681', '7564', '1663', '7564', '680', '1472', '680', '140'],
['1678', '4', '781', '16', '148', '1913', '148', '439', '4', '1109'],
['1010', '16', '65', '145', '17', '55', '166', '995', '194', '712'],
['7596', '536', '612', '511', '403', '302', '89', '587', '43', '246'],
['781', '928', '781', '16', '7552', '21', '348', '694', '2173', '694'],
['2017', '4', '446', '2', '1209', '379', '2', '504', '2593', '504'],
['2002', '4', '538', '536', '535', '912', '1623', '536', '538', '536'],
['572', '4', '1010', '16', '2944', '16', '2077', '515', '2077', '515'],
['202', '1011', '76', '28', '1450', '786', '65', '7570', '2008', '385'],
['664', '4', '202', '1011', '90', '56', '753', '286', '90', '1011'],
['1636', '1967', '10', '2957', '10', '482', '11', '372', '36', '172'],
['326', '4', '150', '22', '70', '360', '876', '278', '30', '1342'],
['341', '1224', '236', '529', '1154', '529', '2529', '529', '236', '383'],
['538', '4', '148', '4', '99', '106', '44', '39', '2111', '39'],
['1233', '71', '2234', '71', '38', '80', '371', '32', '2132', '7565'],
['2492', '4', '23', '138', '477', '867', '4311', '867', '477', '1662'],
['536', '2614', '536', '149', '182', '217', '1651', '202', '4', '7327'],
['1621', '4', '249', '904', '82', '3343', '82', '43', '156', '591'],
['1415', '356', '7372', '7564', '102', '67', '256', '2790', '2788', '2790'],
['323', '57', '50', '663', '50', '912', '534', '2474', '534', '2474'],
['1611', '7550', '576', '579', '619', '167', '106', '472', '106', '1041'],
['609', '182', '175', '169', '149', '45', '7576', '614', '577', '1693'],
['366', '655', '16', '42', '7603', '4', '99', '145', '1611', '7552'],
['848', '4', '439', '148', '1600', '40', '770', '40', '18', '1564'],
['1602', '4', '840', '4', '7552', '625', '439', '1044', '439', '1402'],
['365', '50', '14', '1158', '14', '316', '14', '1128', '409', '2164'],
['419', '1927', '419', '84', '1423', '84', '382', '45', '26', '933'],
['841', '90', '145', '7502', '7363', '7502', '125', '118', '163', '39'],
['468', '7552', '1401', '23', '656', '3', '433', '5', '1535', '12'],
['193', '9', '2141', '1458', '83', '279', '98', '558', '15', '469'],
['2103', '4', '178', '982', '178', '46', '178', '29', '155', '113'],
['2405', '4', '119', '23', '770', '7328', '28', '1450', '328', '2661'],
['411', '24', '173', '19', '291', '5', '47', '7602', '7', '34'],
['807', '20', '152', '988', '629', '48', '2734', '48', '42', '79'],
['1038', '33', '7513', '43', '1783', '43', '122', '759', '11', '1755'],
['265', '77', '403', '1001', '39', '52', '21', '205', '172', '555'],
['2201', '4', '771', '419', '606', '419', '531', '54', '2', '99'],
['361', '60', '673', '598', '134', '244', '80', '260', '78', '550'],
['1737', '4', '536', '2643', '536', '149', '50', '195', '864', '153'],
['1139', '211', '156', '103', '10', '941', '10', '787', '2407', '787'],
['386', '144', '386', '495', '758', '495', '349', '1159', '7528', '6'],
['727', '252', '1084', '252', '343', '855', '343', '252', '7518', '22'],
['1108', '423', '1108', '4', '768', '7', '7574', '1691', '165', '38'],
['1642', '4', '773', '563', '1', '2782', '1', '3148', '1', '2846'],
['1109', '504', '132', '201', '8', '347', '21', '41', '356', '7'],
['918', '343', '918', '1509', '427', '112', '427', '1777', '427', '1777'],
['2511', '4', '1816', '3', '396', '4', '1737', '4', '7327', '531'],
['570', '1249', '28', '931', '3774', '1655', '226', '75', '568', '182'],
['716', '502', '533', '45', '44', '6', '331', '492', '25', '1031'],
['313', '48', '542', '1276', '664', '57', '874', '38', '3310', '38'],
['718', '654', '1918', '654', '466', '658', '2', '87', '280', '142'],
['1615', '996', '1615', '996', '1615', '4', '1642', '4', '917', '4'],
['1218', '4', '1908', '1078', '1521', '1078', '353', '4', '420', '4'],
['1086', '2689', '1086', '2689', '1086', '2689', '1086', '4', '361', '107'],
['2485', '4', '2868', '4', '379', '54', '113', '54', '74', '18'],
['7569', '84', '853', '1673', '853', '77', '2667', '77', '177', '10'],
['1088', '4', '10', '2582', '10', '259', '614', '1108', '4', '1233'],
['535', '3516', '536', '422', '382', '1004', '449', '4', '1691', '7604'],
['2467', '4', '193', '177', '1765', '177', '214', '73', '728', '2615'],
['2468', '4', '1087', '23', '1082', '23', '44', '249', '389', '74'],
['2465', '4', '202', '2004', '202', '574', '202', '1011', '784', '1451'],
['401', '205', '62', '596', '191', '125', '484', '65', '244', '143'],
['772', '4', '7596', '536', '612', '536', '668', '109', '217', '1292'],
['2454', '4', '1615', '996', '1615', '996', '97', '17', '65', '272'],
['773', '563', '237', '563', '99', '10', '67', '22', '2255', '641'],
['711', '16', '1910', '16', '993', '505', '993', '16', '2944', '16'],
['7550', '938', '132', '315', '26', '760', '978', '760', '978', '760'],
['842', '148', '131', '654', '254', '2620', '28', '130', '1646', '66'],
['153', '1692', '1115', '402', '117', '796', '7603', '274', '422', '1419'],
['2442', '4', '117', '1324', '161', '69', '2171', '22', '165', '38'],
['840', '54', '2', '564', '748', '21', '946', '519', '89', '1453'],
['2439', '4', '353', '1916', '1607', '113', '177', '2213', '177', '32'],
['1908', '4', '1737', '4', '1086', '2689', '1086', '2689', '1086', '2689'],
['251', '45', '7589', '382', '167', '183', '670', '28', '1450', '407'],
['1087', '44', '23', '2447', '23', '772', '168', '7599', '7600', '2218'],
['1909', '4', '10', '26', '284', '7372', '18', '729', '20', '366'],
['532', '1119', '7564', '849', '7564', '670', '7564', '1681', '7564', '90'],
['603', '1382', '1', '305', '9', '348', '557', '3', '971', '81'],
['191', '24', '145', '639', '177', '68', '457', '3133', '774', '419'],
['248', '3280', '248', '158', '904', '82', '2336', '186', '374', '400'],
['372', '5', '433', '3254', '309', '3254', '1', '625', '1469', '625'],
['982', '5', '125', '817', '1', '3143', '1', '1742', '117', '370'],
['1372', '1575', '1372', '5', '27', '96', '125', '5', '3', '227'],
['335', '115', '1574', '115', '3251', '115', '280', '12', '3354', '12'],
['1192', '5', '2395', '125', '249', '185', '37', '314', '355', '444'],
['893', '12', '1553', '177', '142', '519', '1288', '1', '2754', '1'],
['59', '12', '33', '7428', '85', '31', '188', '161', '623', '260'],
['904', '82', '179', '748', '26', '494', '59', '12', '1072', '12'],
['349', '55', '272', '9', '295', '8', '1104', '915', '142', '272'],
['61', '1274', '46', '2303', '46', '2279', '46', '1313', '1492', '1313'],
['826', '12', '1873', '15', '70', '65', '9', '97', '475', '1980'],
['122', '41', '95', '278', '227', '7380', '59', '191', '463', '111'],
['1872', '5', '31', '146', '2168', '146', '277', '78', '1021', '78'],
['391', '7', '743', '7', '757', '3', '266', '69', '2183', '798'],
['1068', '197', '798', '2207', '634', '67', '634', '2207', '798', '370'],
['7591', '141', '92', '83', '255', '1282', '7', '1346', '27', '321'],
['82', '31', '14', '24', '122', '15', '953', '1390', '2336', '164'],
['3307', '5', '238', '714', '238', '99', '503', '1', '3385', '1'],
['3206', '5', '43', '694', '11', '680', '2112', '7603', '452', '132'],
['434', '1278', '288', '141', '173', '105', '164', '19', '1581', '19'],
['597', '1851', '705', '1016', '335', '59', '5', '1375', '305', '1022'],
['684', '124', '25', '7509', '25', '60', '549', '17', '113', '14'],
['801', '8', '457', '68', '296', '214', '1795', '1166', '614', '647'],
['1373', '5', '7479', '2336', '1393', '1388', '7385', '1388', '1902', '1388'],
['110', '8', '268', '2634', '268', '8', '2955', '8', '157', '7603'],
['7126', '5', '97', '14', '146', '2894', '146', '9', '548', '166'],
['1535', '258', '358', '199', '879', '3', '752', '199', '358', '7604'],
['528', '7591', '2369', '115', '208', '33', '1502', '143', '1026', '143'],
['198', '75', '216', '115', '435', '115', '2364', '59', '12', '292'],
['3336', '5', '7457', '684', '83', '1760', '190', '550', '11', '145'],
['602', '7575', '602', '5', '7523', '338', '7523', '338', '7532', '338'],
['2395', '125', '249', '7604', '53', '332', '7337', '332', '242', '95'],
['599', '68', '7602', '7', '7600', '190', '889', '190', '7580', '39'],
['245', '111', '125', '173', '602', '7575', '602', '12', '291', '1900'],
['1819', '13', '2320', '13', '1529', '13', '1497', '17', '66', '483'],
['3309', '5', '1373', '134', '598', '673', '740', '3005', '740', '60'],
['310', '116', '310', '7391', '19', '8', '7507', '650', '649', '125'],
['695', '10', '3', '7431', '125', '247', '81', '33', '24', '166'],
['280', '106', '7582', '16', '607', '769', '119', '121', '38', '152'],
['3009', '5', '460', '434', '460', '73', '24', '1856', '24', '1731'],
['2386', '5', '7341', '1864', '7341', '33', '228', '179', '22', '14'],
['484', '1568', '7595', '116', '259', '614', '707', '82', '163', '5'],
['2354', '206', '588', '67', '135', '17', '2797', '17', '272', '106'],
['1194', '1869', '1868', '1050', '61', '3123', '61', '3093', '61', '7433'],
['2333', '8', '3116', '8', '2384', '6', '522', '8', '754', '812'],
['825', '59', '208', '162', '1167', '103', '102', '109', '1270', '109'],
['292', '894', '11', '7525', '12', '391', '253', '27', '26', '211'],
['1378', '36', '424', '87', '1156', '522', '8', '613', '48', '15'],
['1868',
'3195',
'1868',
'3195',
'1868',
'1869',
'1868',
'1869',
'1194',
'1869'],
['1854', '5', '27', '1127', '20', '807', '21', '347', '1749', '69'],
['1376', '59', '34', '12', '7575', '12', '21', '110', '29', '142'],
['692', '156', '5', '49', '1195', '432', '351', '12', '1061', '1'],
['7444', '3129', '269', '87', '259', '1149', '454', '161', '690', '455'],
['1859', '5', '1372', '5', '2333', '5', '1535', '13', '1532', '13'],
['3236', '5', '979', '1528', '3133', '46', '457', '3133', '46', '127'],
['2363', '5', '1156', '397', '1060', '373', '3146', '373', '395', '89'],
['3231', '5', '26', '1128', '7603', '305', '12', '191', '24', '443'],
['2320', '5', '774', '3133', '1528', '46', '3102', '46', '13', '7514'],
['706', '128', '115', '128', '123', '1146', '31', '22', '2079', '735'],
['2335', '5', '434', '460', '769', '152', '188', '165', '26', '7578'],
['141', '118', '292', '321', '27', '118', '103', '102', '288', '969'],
['7531', '1194', '972', '1194', '7531', '58', '830', '2336', '157', '26'],
['3305', '6', '150', '7598', '1691', '1384', '1691', '165', '114', '233'],
['524', '6', '826', '6', '36', '3', '162', '154', '290', '490'],
['809', '6', '1056', '29', '33', '2596', '33', '362', '95', '345'],
['1057', '7', '7417', '7484', '43', '487', '22', '9', '219', '81'],
['1541', '7595', '53', '332', '3325', '332', '73', '155', '77', '664'],
['1371', '6', '754', '3', '1545', '3', '1336', '25', '2', '7401'],
['1571', '138', '477', '138', '16', '7438', '10', '2204', '874', '259'],
['1185', '12', '1376', '59', '130', '849', '1667', '50', '24', '42'],
['1566', '8', '347', '3', '1174', '16', '993', '16', '771', '16'],
['810', '6', '3222', '6', '1546', '6', '287', '1875', '287', '748'],
['1350', '6', '549', '1579', '549', '1579', '36', '172', '205', '3'],
['549', '7', '51', '638', '9', '42', '109', '379', '1230', '134'],
['526', '292', '5', '7447', '762', '555', '80', '269', '64', '86'],
['431', '1549', '3266', '1549', '431', '2340', '16', '298', '1210', '529'],
['1546', '138', '566', '507', '422', '467', '408', '138', '436', '7437'],
['1461', '6', '3096', '15', '1390', '953', '61', '2268', '61', '3123'],
['458', '8', '492', '331', '492', '3', '242', '7514', '13', '1531'],
['1374', '19', '335', '7525', '12', '459', '12', '3281', '12', '1072'],
['7420', '6', '112', '56', '53', '3', '1347', '3', '641', '8'],
['2376', '149', '531', '1095', '63', '177', '1304', '798', '7397', '350'],
['3222', '6', '112', '3031', '112', '2276', '112', '73', '728', '2615'],
['3158', '6', '1157', '7595', '430', '817', '430', '817', '125', '26'],
['2326', '6', '7444', '3129', '8', '877', '1036', '7515', '8', '1033'],
['728', '17', '77', '660', '189', '52', '39', '1145', '52', '306'],
['278', '29', '5', '416', '398', '3359', '398', '3359', '398', '706'],
['1830', '138', '477', '608', '477', '1662', '477', '608', '995', '608'],
['7415', '863', '368', '382', '7589', '175', '1', '1232', '1936', '2'],
['104', '27', '67', '1068', '5', '7503', '82', '49', '596', '62'],
['1157', '509', '1000', '509', '1157', '7527', '1157', '6', '3066', '6'],
['2384', '6', '20', '7577', '347', '1458', '26', '798', '7550', '69'],
['2371', '138', '436', '1419', '436', '1419', '1624', '238', '2', '871'],
['1578', '19', '1581', '19', '27', '81', '229', '93', '5', '3206'],
['3176', '6', '331', '296', '295', '22', '2336', '906', '104', '185'],
['3155', '6', '106', '7582', '990', '37', '1764', '2940', '1764', '37'],
['1254', '21', '85', '75', '152', '148', '466', '658', '7569', '606'],
['1179', '56', '227', '7380', '5', '7406', '3', '7341', '1', '3167'],
['1055', '430', '7595', '882', '88', '129', '845', '23', '401', '1666'],
['7554', '27', '88', '461', '88', '20', '2', '1632', '37', '570'],
['522', '6', '1542', '1', '3355', '1', '116', '491', '22', '156'],
['1152', '6', '190', '157', '334', '54', '379', '40', '379', '1209'],
['3112', '138', '477', '1662', '153', '154', '270', '4', '439', '148'],
['3096', '101', '567', '101', '478', '101', '16', '7550', '251', '4'],
['3066', '6', '1571', '138', '436', '270', '7455', '4910', '7455', '4910'],
['188', '161', '623', '33', '15', '55', '26', '49', '105', '164'],
['3092', '6', '408', '6', '203', '1853', '412', '7550', '210', '2732'],
['750', '7', '1757', '69', '2196', '25', '228', '177', '219', '1789'],
['1346', '6', '190', '304', '945', '304', '346', '18', '194', '151'],
['149', '911', '541', '1627', '475', '2', '241', '12', '649', '310'],
['2292', '1349', '373', '3069', '373', '46', '1172', '13', '1049', '1052'],
['1776', '6', '207', '92', '1821', '127', '1591', '127', '1591', '127'],
['7528', '6', '5', '13', '659', '99', '842', '4', '2485', '4'],
['1450', '28', '1674', '28', '1450', '6', '32', '203', '17', '255'],
['408', '138', '16', '1212', '353', '438', '121', '2838', '121', '119'],
['7517', '62', '51', '796', '14', '113', '155', '622', '1018', '622'],
['7536', '62', '21', '264', '2', '189', '552', '166', '8', '600'],
['1151', '7', '3', '62', '92', '207', '6', '1350', '6', '1152'],
['429', '53', '5342', '1611', '7550', '188', '11', '482', '66', '2'],
['956', '7', '34', '12', '1322', '161', '204', '1150', '592', '87'],
['638', '9', '98', '69', '98', '211', '2', '473', '986', '540'],
['5342', '109', '379', '109', '422', '536', '612', '303', '2045', '303'],
['1178', '7', '484', '7', '75', '216', '756', '216', '7', '319'],
['640', '2831', '640', '2831', '640', '42', '7603', '60', '267', '304'],
['2211', '7', '100', '124', '1820', '199', '517', '7603', '1158', '14'],
['98', '211', '47', '2', '165', '26', '8', '2821', '17', '799'],
['696', '179', '582', '1691', '7601', '7', '743', '134', '244', '197'],
['878', '5392', '878', '5392', '878', '5392', '878', '21', '201', '1509'],
['1782', '7', '30', '689', '329', '284', '212', '1617', '212', '17'],
['1768', '7', '40', '37', '10', '454', '3', '2285', '3', '3200'],
['1310', '7', '1499', '7', '24', '49', '3289', '49', '36', '209'],
['417', '110', '92', '345', '3', '1849', '3', '976', '130', '1210'],
['666', '253', '543', '253', '391', '12', '893', '5', '7380', '227'],
['980', '82', '73', '32', '5', '1156', '87', '171', '83', '7499'],
['645', '754', '645', '7', '752', '95', '1889', '2387', '27', '67'],
['371', '80', '269', '35', '1385', '35', '7580', '689', '7580', '689'],
['7562', '11', '360', '350', '1740', '255', '17', '176', '20', '241'],
['1171', '7', '768', '148', '768', '446', '4', '156', '9', '733'],
['2981', '7', '7501', '7', '2361', '2412', '96', '2412', '96', '269'],
['946', '10', '1031', '19', '88', '7404', '88', '10', '492', '115'],
['506', '90', '203', '89', '77', '4', '1088', '537', '45', '26'],
['318', '26', '7604', '7334', '145', '51', '7', '1731', '14', '1158'],
...]
## Creating a Node Embedding
Now that we've created a representation of the likelihood of getting to different nodes in each graph, we can turn to the methods which we will use to represent the network as an embedding vector. Note that this is an alternative to other methods, such as one-hot encoding of the results, which are extremely memory- and computation-intensive. In principle, what we want to do is represent the "context" or relationship of each of these nodes to all other nodes by mapping each node into an $N$-dimensional vector space. The length of the vector is arbitrary; as it is increased, the precision will rise while the speed of the computation will fall. Nodes which are in the immediate neighborhood of the current node will be heavily favored, second-order connections less so, and those that are completely unconnected not at all. This method was first explored in [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf). That paper provides two methods for natural language processing:
1. Continuous bag-of-words
2. Skip-gram models.
Both methods are valid and have their strengths and weaknesses, but we will rely on skip-gram models in this discussion. For skip-gram models, the node embedding is generated using a simple neural network. We will step through an independent implementation of this below which leans on TensorFlow, but [Stellargraph](https://www.stellargraph.io/) provides a good, straightforward interface to it as well.
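Before stepping through the individual pieces, here is a minimal sketch of what such a skip-gram network can look like in Keras. Everything in it is an illustrative assumption rather than the notebook's final implementation: the name `build_skipgram`, the single shared `Embedding` layer, and the hyperparameter defaults are all made up for the sketch.
```python
import tensorflow as tf

def build_skipgram(num_nodes, embedding_dim=64):
    """Hypothetical skip-gram model: (target, context) node-id pairs -> co-occurrence probability."""
    target = tf.keras.Input(shape=(1,), dtype="int32")
    context = tf.keras.Input(shape=(1,), dtype="int32")
    # one shared embedding table; its rows become the node embedding vectors
    emb = tf.keras.layers.Embedding(num_nodes, embedding_dim)
    dot = tf.keras.layers.Dot(axes=-1)([emb(target), emb(context)])
    out = tf.keras.layers.Activation("sigmoid")(tf.keras.layers.Flatten()(dot))
    model = tf.keras.Model(inputs=[target, context], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```
The dot product of the target and context embeddings is trained to predict whether the context node actually appeared near the target in the walks, which is the negative-sampling formulation of skip-gram.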
### Step 1: Identify neighborhood for each node
This is the step that we discussed above by implementing the biased random walk and the plain random walk methods. This has a key impact: the longer and more biased our random walk, the greater the range of connections we will identify, but we will possibly draw in more tenuous connections.
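For orientation only, here is a minimal sketch of the plain (unbiased) variant; the notebook's own implementations above are what produced the walks printed earlier, and `graph` is assumed here to be an adjacency dict mapping each node id to a list of neighbor ids.
```python
import random

def plain_random_walk(graph, start, walk_length=10):
    # longer walks reach farther into the graph, at the cost of more tenuous connections
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = graph.get(walk[-1], [])
        if not neighbors:  # dead end: stop the walk early
            break
        walk.append(random.choice(neighbors))
    return walk
```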
### Step 2: Map neighborhood values to one-hot encoders
The neighborhoods are used to generate vectors which encode the relationships between nodes. This includes a one-hot encoder for the target node and a set of one-hot encoders for its neighboring nodes.
### Step 3: Perform optimization
The following procedure is used for each one-hot encoder:
1. The $1 \times N$ encoder multiplies an $N \times w$ embedding matrix, yielding a $1 \times w$ hidden representation ($w$ is the embedding length discussed above).
2. The hidden representation then multiplies a $w \times N$ output matrix and is passed through a softmax, producing a predicted probability for every node in the graph; both matrices are updated so that the observed neighbors receive high probability. After training, the rows of the embedding matrix are the node embeddings.
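To make the dimensions above concrete, here is a minimal NumPy sketch of a single forward pass; `W_in`, `W_out`, and the toy sizes are made-up assumptions for illustration, not variables from the notebook.
```python
import numpy as np

N, w = 5, 3                                # toy node count and embedding length
rng = np.random.default_rng(0)
W_in = rng.normal(size=(N, w))             # N x w embedding matrix
W_out = rng.normal(size=(w, N))            # w x N output matrix

x = np.eye(N)[2]                           # one-hot encoder for node index 2 (step 1 input)
h = x @ W_in                               # 1 x w hidden representation (step 1)
scores = h @ W_out                         # one raw score per node (step 2)
p = np.exp(scores) / np.exp(scores).sum()  # softmax: predicted neighbor probabilities
# after training, W_in[2] is the embedding vector for node 2
```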
```python
def gen_auto_encoders(node_lists):
    # one-hot row per distinct node id seen in the walks (numpy assumed imported as np)
    nodes = sorted({node for walk in node_lists for node in walk})
    return {node: np.eye(len(nodes))[i] for i, node in enumerate(nodes)}
```
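A quick sanity check of the sketch above, using toy walks (these node ids are made up):
```python
walks = [['172', '3', '2280'], ['3', '632']]
encoders = gen_auto_encoders(walks)
print(encoders['3'])  # -> [0. 0. 1. 0.], since '3' is index 2 of the sorted node ids
```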
[['7188', '1', '7557', '542', '48', '2114', '48', '270', '4', '131'],
 ['430', '817', '430', '1055', '430', '817', '125', '249', '25', '1031'],
 ...]
['45', '1634', '45', '7576', '7394', '474', '37', '9', '123', '9'],
['471', '2', '449', '1004', '2010', '1004', '2010', '263', '50', '142'],
['1632', '2', '35', '233', '114', '64', '902', '1388', '7273', '1388'],
['281', '906', '96', '498', '233', '15', '185', '32', '7442', '2216'],
['269', '325', '209', '7550', '7387', '7550', '412', '42', '1337', '522'],
['2336', '114', '1879', '104', '12', '321', '12', '245', '397', '707'],
['206', '7550', '7552', '715', '922', '715', '250', '16', '1664', '16'],
['241', '115', '197', '198', '197', '759', '1871', '759', '894', '19'],
['190', '7', '191', '288', '1278', '95', '7602', '7600', '214', '7513'],
['7577', '2', '185', '47', '9', '1283', '1', '625', '1', '1860'],
['798', '2186', '798', '264', '255', '11', '2168', '146', '85', '92'],
['211', '15', '29', '1188', '29', '43', '7535', '7505', '43', '156'],
['140', '4', '366', '613', '26', '166', '278', '7588', '63', '105'],
['217', '2', '40', '121', '1058', '555', '80', '184', '360', '21'],
['182', '149', '182', '106', '7582', '990', '109', '668', '109', '148'],
['236', '256', '1739', '256', '479', '256', '75', '7394', '137', '2094'],
['777', '26', '277', '135', '3267', '135', '142', '27', '519', '142'],
['1098', '7589', '175', '10', '168', '4', '1091', '4', '716', '7371'],
['106', '311', '610', '37', '166', '2', '39', '137', '7327', '1401'],
['237', '32', '1064', '49', '115', '280', '58', '36', '290', '154'],
['1401', '23', '1911', '23', '138', '477', '608', '44', '711', '223'],
['1409', '2597', '1409', '148', '1401', '40', '1077', '389', '1077', '389'],
['769', '460', '3', '87', '397', '290', '36', '1580', '1', '472'],
['1209', '4', '1239', '4', '3774', '182', '3774', '931', '109', '215'],
['917', '40', '717', '1612', '273', '1612', '273', '671', '862', '619'],
['114', '30', '119', '148', '131', '77', '1638', '22', '641', '67'],
['2698', '2', '51', '170', '686', '7', '879', '199', '3', '351'],
['1127', '20', '83', '1760', '1', '587', '305', '944', '14', '1288'],
['1009', '125', '826', '2374', '826', '1893', '58', '53', '3417', '53'],
['1386', '281', '180', '186', '498', '86', '1204', '86', '906', '3446'],
['173', '2', '77', '777', '10', '710', '10', '2204', '874', '798'],
['43', '42', '715', '922', '109', '512', '2', '1090', '44', '546'],
['647', '96', '27', '886', '27', '110', '2309', '171', '69', '26'],
['5', '314', '90', '132', '9', '329', '9', '1283', '586', '1283'],
['1156', '5', '81', '49', '5', '12', '1376', '372', '144', '3262'],
['553', '33', '1137', '14', '47', '4', '2', '91', '37', '657'],
['170', '633', '50', '1974', '50', '1948', '50', '365', '541', '2026'],
['93', '111', '33', '703', '177', '347', '1749', '69', '256', '672'],
['145', '25', '1031', '12', '975', '12', '1065', '95', '40', '989'],
['1523', '13', '650', '158', '82', '36', '572', '36', '336', '5'],
['350', '3007', '350', '26', '1711', '729', '4311', '867', '4311', '867'],
['77', '131', '48', '313', '563', '444', '1928', '209', '7550', '16'],
['1785', '62', '24', '443', '129', '23', '139', '327', '1638', '327'],
['166', '7536', '7553', '2238', '1', '1060', '397', '666', '212', '35'],
['489', '28', '314', '28', '489', '2', '1475', '72', '115', '49'],
['25', '16', '781', '4', '664', '57', '160', '294', '2829', '294'],
['83', '133', '589', '157', '435', '5', '1192', '649', '1192', '1878'],
['277', '348', '307', '3190', '307', '3190', '307', '32', '203', '296'],
['70', '966', '3055', '966', '15', '767', '15', '558', '15', '9'],
['1487', '11', '488', '705', '460', '512', '2', '87', '2080', '87'],
['270', '7455', '4910', '7455', '4910', '7455', '4910', '11', '550', '161'],
['871', '942', '232', '215', '322', '657', '322', '28', '675', '195'],
['389', '491', '1221', '529', '1210', '13', '8', '780', '571', '1103'],
['544', '155', '65', '1662', '153', '670', '449', '217', '1651', '217'],
['2631', '2', '561', '2590', '561', '2590', '561', '131', '77', '265'],
['672', '169', '447', '342', '326', '339', '671', '339', '914', '10'],
['137', '39', '17', '66', '2', '242', '26', '207', '83', '967'],
['424', '1437', '1451', '784', '39', '1001', '573', '188', '469', '122'],
['65', '1404', '45', '225', '1619', '225', '530', '15', '7393', '86'],
['1693', '577', '477', '577', '1693', '577', '477', '608', '340', '1944'],
['475', '282', '387', '1121', '387', '282', '1266', '282', '786', '2599'],
['449', '670', '449', '7587', '449', '16', '100', '365', '717', '365'],
['1998', '2', '83', '277', '741', '2912', '741', '10', '840', '148'],
['1977', '2', '1098', '2', '489', '4', '1445', '4', '150', '57'],
['915', '1104', '239', '1104', '611', '2517', '611', '665', '239', '728'],
['63', '537', '1088', '537', '1088', '252', '1642', '99', '2473', '99'],
['1957', '508', '4', '356', '7', '266', '25', '693', '107', '1777'],
['1635', '1105', '7564', '1249', '313', '148', '1913', '148', '270', '436'],
['508', '1957', '2', '350', '26', '1501', '1298', '189', '1424', '189'],
['2499', '2', '90', '100', '827', '100', '969', '177', '522', '5342'],
['2497', '2', '647', '7503', '82', '125', '26', '142', '26', '83'],
['299', '614', '12', '27', '1179', '177', '2286', '177', '1517', '102'],
['1225', '2', '40', '90', '3', '1839', '1', '2725', '1', '1340'],
['238', '249', '10', '941', '41', '395', '90', '78', '632', '1485'],
['2488', '2', '5342', '1401', '148', '119', '23', '466', '54', '334'],
['1936', '1232', '2', '986', '408', '313', '929', '313', '408', '467'],
['382', '2', '777', '2', '2446', '2', '85', '703', '3', '197'],
['355', '41', '509', '41', '142', '697', '112', '396', '22', '638'],
['1090', '2', '7482', '7500', '7482', '177', '2213', '177', '295', '85'],
['1222', '63', '655', '63', '8', '2330', '27', '199', '45', '466'],
['21', '201', '21', '41', '2244', '7539', '111', '141', '751', '229'],
['564', '125', '7431', '1', '491', '304', '267', '27', '3', '97'],
['2459', '2', '353', '532', '4', '139', '2510', '1105', '1635', '1105'],
['774', '2', '508', '102', '725', '183', '169', '672', '169', '202'],
['209', '2', '987', '2', '1421', '494', '26', '52', '69', '1318'],
['1922', '2', '106', '8', '24', '44', '106', '4', '2709', '4'],
['504', '608', '504', '2593', '504', '1109', '4', '1678', '4', '111'],
['987', '419', '774', '531', '7327', '531', '419', '655', '63', '11'],
['2446', '2', '90', '11', '398', '103', '2229', '103', '102', '856'],
['986', '408', '986', '2', '189', '260', '11', '65', '588', '170'],
['1603', '297', '1915', '105', '362', '52', '69', '798', '2884', '798'],
['379', '44', '163', '120', '1071', '416', '34', '2414', '34', '116'],
['446', '445', '446', '2', '862', '65', '57', '2120', '57', '2656'],
['436', '1419', '1624', '238', '118', '5', '602', '87', '291', '19'],
['512', '23', '4', '1218', '2608', '1218', '4', '2454', '4', '2997'],
['971', '33', '167', '447', '1120', '447', '167', '1655', '226', '1647'],
['92', '1620', '74', '312', '715', '312', '715', '922', '236', '922'],
['7336', '239', '2642', '239', '1464', '239', '1110', '1109', '504', '608'],
['1545', '3', '87', '1', '951', '142', '69', '2874', '69', '2121'],
['103', '22', '933', '226', '7373', '1433', '1657', '2011', '778', '2011'],
['207', '30', '32', '166', '8', '2384', '6', '638', '7', '3177'],
['6', '278', '1038', '7458', '1038', '4', '411', '41', '319', '10'],
['7380', '227', '73', '62', '136', '42', '155', '42', '81', '3244'],
['287', '1875', '1', '3339', '1', '1822', '241', '397', '163', '249'],
['754', '755', '3', '1054', '3', '115', '2302', '115', '123', '7'],
['26', '330', '1', '1382', '1', '587', '1', '3233', '1', '1720'],
['34', '320', '3', '1816', '3', '56', '95', '7497', '245', '1889'],
['525', '493', '225', '260', '402', '610', '641', '2255', '641', '67'],
['545', '155', '1468', '79', '2749', '79', '730', '226', '16', '781'],
['56', '1567', '56', '3166', '56', '147', '1539', '147', '963', '147'],
['1849', '3', '1175', '3', '1836', '3', '1379', '96', '281', '906'],
['557', '5', '203', '41', '2972', '41', '122', '15', '94', '900'],
['208', '33', '183', '169', '182', '217', '382', '1002', '341', '300'],
['1558', '49', '759', '49', '73', '728', '2615', '728', '2615', '728'],
['2346', '3', '3127', '3', '2903', '3', '7336', '3', '3221', '3'],
['974', '458', '525', '644', '525', '819', '525', '819', '525', '458'],
['262', '892', '20', '366', '1693', '2', '1691', '30', '3100', '30'],
['2001', '3', '92', '462', '92', '345', '51', '409', '2832', '409'],
['815', '3', '1363', '67', '211', '97', '3', '107', '883', '90'],
['120', '36', '32', '2244', '7600', '190', '197', '7603', '546', '7603'],
['492', '331', '5679', '7', '5342', '7556', '123', '1', '1320', '1500'],
['1175', '3168', '1175', '3', '7406', '19', '2388', '19', '3185', '19'],
['32', '40', '6', '107', '621', '24', '72', '113', '3', '2280'],
['172', '3', '2280', '3', '632', '9', '1734', '9', '7387', '9'],
['1347', '3', '7509', '102', '29', '509', '29', '1046', '162', '1523'],
['1338', '3', '1347', '177', '1171', '95', '103', '128', '3294', '128'],
['105', '163', '1774', '32', '371', '622', '17', '2738', '17', '20'],
['818', '3', '7336', '68', '599', '191', '125', '5', '191', '58'],
['890', '413', '95', '2369', '95', '6', '295', '8', '189', '1330'],
['1362', '3', '1375', '3', '1347', '177', '63', '2', '251', '1713'],
['1547', '3', '62', '306', '22', '1258', '76', '2696', '76', '77'],
['1056', '29', '83', '7600', '7', '2922', '7', '21', '669', '131'],
['2280', '25', '228', '162', '2258', '162', '1523', '2', '5', '7447'],
['879', '3', '259', '115', '219', '135', '43', '82', '904', '249'],
['36', '172', '701', '13', '1534', '13', '3370', '13', '7478', '13'],
['590', '128', '1876', '128', '895', '241', '19', '26', '33', '253'],
['3260', '3', '320', '2833', '320', '10', '271', '40', '1132', '78'],
['1174', '16', '54', '23', '153', '1662', '477', '1640', '16', '7552'],
['3198', '3', '3049', '3', '1456', '3', '774', '5', '7514', '1062'],
['460', '1366', '1016', '2370', '1016', '705', '460', '47', '5', '310'],
['3200', '3', '34', '1352', '177', '1165', '62', '7', '7554', '27'],
['589', '157', '684', '7', '877', '3', '1832', '5', '32', '1064'],
['433', '7541', '1187', '3199', '1187', '12', '35', '7415', '863', '2068'],
['81', '645', '1060', '397', '1060', '373', '1060', '373', '3069', '373'],
['7509', '25', '6', '3112', '138', '436', '2', '19', '125', '253'],
['973', '641', '5', '335', '7525', '336', '8', '7412', '87', '1884'],
['1555', '3', '590', '128', '26', '1568', '7595', '177', '2988', '177'],
['3154', '3', '1338', '3', '886', '7595', '3138', '7595', '1792', '7595'],
['7595', '103', '12', '983', '128', '40', '1179', '56', '27', '177'],
['1361', '6', '92', '3208', '92', '27', '267', '7580', '689', '1160'],
['2331', '3', '2353', '3', '748', '43', '118', '822', '12', '105'],
['509', '41', '154', '12', '1184', '3', '2280', '25', '13', '456'],
['969', '3', '7431', '1', '1051', '1173', '280', '6', '2384', '6'],
['197', '3', '208', '320', '58', '145', '639', '7512', '29', '207'],
['621', '7', '319', '31', '95', '29', '496', '173', '1837', '125'],
['1036', '877', '8', '225', '494', '26', '7', '699', '89', '587'],
['1054', '177', '33', '137', '697', '112', '697', '1', '2147', '1'],
['1818', '3', '2332', '754', '1823', '754', '5342', '7594', '688', '133'],
['358', '147', '1539', '56', '969', '177', '1048', '177', '68', '7602'],
['748', '124', '100', '124', '11', '593', '7397', '204', '1328', '77'],
['13', '1807', '13', '599', '2320', '5', '19', '1370', '173', '93'],
['88', '1309', '1', '725', '183', '674', '7564', '1118', '864', '153'],
['163', '47', '140', '680', '7564', '354', '1402', '354', '188', '165'],
['1576', '3', '345', '7', '1784', '7', '7588', '100', '6', '163'],
['2353', '3', '2334', '3', '3221', '3', '29', '5', '197', '7'],
['3221', '3', '2328', '3', '701', '403', '851', '7603', '1479', '2131'],
['136', '1794', '40', '77', '926', '77', '403', '302', '89', '2269'],
['1367', '1187', '1367', '691', '1367', '1187', '88', '95', '191', '599'],
['7512', '1190', '7512', '120', '3', '1054', '4', '534', '2460', '534'],
['757', '7509', '102', '32', '88', '5', '435', '1855', '460', '54'],
['49', '12', '33', '2968', '33', '137', '1691', '1384', '1691', '17'],
['646', '1844', '3', '116', '112', '13', '2284', '13', '1168', '13'],
['2328', '27', '333', '7604', '7', '85', '165', '15', '85', '647'],
['1360', '7', '203', '62', '24', '10', '49', '1560', '92', '10'],
['7377', '3', '205', '643', '95', '36', '572', '413', '81', '115'],
['2321', '3', '331', '1029', '331', '177', '1293', '1745', '1293', '14'],
['125', '197', '1068', '5', '1068', '93', '288', '2369', '58', '2369'],
['1548', '177', '259', '3', '3221', '3', '877', '8', '6', '2326'],
['1837', '125', '289', '229', '2253', '229', '702', '309', '125', '68'],
['1834', '12', '1861', '12', '315', '42', '508', '1957', '39', '7603'],
['703', '24', '2147', '24', '190', '599', '2320', '13', '7397', '204'],
['1456', '23', '153', '1465', '78', '403', '2699', '403', '1430', '54'],
['7406', '475', '2', '402', '14', '177', '1806', '177', '5533', '811'],
['1832', '5', '103', '30', '16', '768', '4', '1611', '7552', '564'],
['3105', '3', '509', '228', '11', '85', '92', '73', '3342', '73'],
['641', '973', '641', '33', '244', '227', '1883', '110', '523', '30'],
['320', '208', '34', '19', '92', '5', '24', '3', '34', '1352'],
['885', '62', '1785', '2', '353', '4', '1003', '79', '2694', '79'],
['632', '78', '511', '612', '324', '2035', '324', '612', '1638', '792'],
['2300', '3', '774', '419', '987', '2', '52', '20', '60', '549'],
['702', '397', '1877', '1', '2947', '1', '3167', '1', '3143', '1'],
['228', '33', '81', '64', '3409', '2415', '114', '3369', '1205', '15'],
['752', '493', '95', '553', '2', '272', '277', '348', '1707', '348'],
['2291', '3', '1061', '3', '1056', '6', '75', '256', '1970', '409'],
['656', '749', '656', '237', '32', '1506', '32', '1774', '163', '650'],
['877', '177', '1048', '1811', '199', '16', '2077', '515', '426', '7518'],
['2293', '35', '94', '164', '174', '173', '3337', '245', '2380', '95'],
['650', '530', '12', '241', '290', '12', '156', '692', '5', '7585'],
['3328', '3', '1790', '3', '1523', '3', '333', '1', '3048', '1'],
['720', '3', '1456', '147', '1539', '147', '3115', '147', '223', '624'],
['1379', '549', '6', '185', '137', '39', '217', '109', '1099', '108'],
['219', '1711', '26', '172', '5', '3', '247', '1575', '1372', '1575'],
['3275', '3', '158', '82', '904', '5', '462', '292', '11', '78'],
['345', '7', '1348', '95', '1', '2926', '1', '1136', '25', '31'],
['336', '5', '3236', '5', '1859', '585', '27', '103', '946', '1741'],
['295', '6', '524', '6', '2352', '6', '168', '7601', '1691', '179'],
['1375', '5', '26', '32', '163', '398', '34', '1198', '81', '42'],
['435', '1855', '206', '588', '51', '65', '926', '385', '7570', '65'],
['7390', '8', '1104', '915', '611', '667', '611', '667', '780', '667'],
['12', '172', '5', '7516', '207', '111', '417', '7512', '10', '1967'],
['1853', '412', '81', '27', '7', '371', '133', '203', '41', '47'],
['1184', '19', '31', '92', '296', '203', '122', '41', '5342', '451'],
['1064', '3', '879', '199', '1820', '199', '451', '275', '679', '863'],
['1560', '92', '199', '257', '52', '274', '39', '51', '2802', '51'],
['988', '5', '2', '473', '999', '109', '328', '2661', '328', '1101'],
['203', '17', '681', '2', '250', '238', '1095', '1083', '139', '23'],
['976', '92', '3774', '182', '106', '85', '320', '278', '523', '25'],
['289', '102', '7594', '688', '55', '232', '274', '167', '1964', '328'],
['7394', '137', '844', '137', '870', '137', '2527', '1401', '40', '148'],
['3203', '3', '3203', '3', '286', '3', '1849', '3', '259', '177'],
['1844', '3', '2053', '3', '87', '7473', '87', '1156', '5', '7433'],
['2053', '3', '29', '207', '83', '171', '1035', '171', '83', '482'],
['1363', '3', '3221', '3', '1560', '49', '5', '156', '1277', '196'],
['3181', '3', '955', '142', '1039', '803', '218', '1339', '218', '803'],
['288', '8', '883', '90', '7564', '89', '254', '2009', '254', '21'],
['2263', '3', '36', '103', '85', '73', '332', '177', '2924', '177'],
['111', '7589', '775', '7589', '175', '210', '7550', '209', '1087', '23'],
['3132', '3', '1043', '128', '193', '26', '350', '236', '175', '210'],
['1059', '26', '67', '52', '193', '107', '883', '90', '2', '915'],
['2332', '3', '158', '49', '5', '7585', '36', '557', '3', '3254'],
['691', '1367', '305', '370', '170', '370', '33', '1796', '1', '1318'],
['1550', '3', '632', '40', '452', '7603', '685', '140', '2318', '244'],
['286', '8', '7496', '708', '498', '104', '2432', '15', '179', '177'],
['205', '41', '356', '78', '17', '272', '2476', '272', '145', '7594'],
['1359', '42', '7526', '42', '24', '93', '19', '3254', '433', '7595'],
['1459', '136', '528', '7591', '7422', '7591', '7422', '7591', '82', '3360'],
['135', '176', '267', '11', '276', '556', '276', '556', '276', '1'],
['7432', '95', '643', '333', '7595', '5679', '116', '3137', '116', '706'],
['1815', '1053', '1815', '116', '2376', '149', '419', '167', '986', '2'],
['3127', '3', '1338', '3', '493', '95', '8', '1794', '29', '287'],
['1836', '8', '457', '2297', '457', '1854', '457', '206', '3175', '206'],
['266', '52', '637', '309', '87', '602', '7575', '602', '7575', '12'],
['2311', '3', '757', '1187', '3199', '1187', '88', '666', '95', '172'],
['2308', '3', '11', '639', '10', '1636', '1967', '1636', '10', '491'],
['27', '172', '493', '172', '12', '158', '136', '2242', '136', '7604'],
['755', '754', '27', '321', '7', '43', '25', '13', '650', '490'],
['2306', '3', '3328', '3', '1523', '13', '11', '26', '231', '2102'],
['5679', '3', '1360', '12', '31', '174', '92', '526', '49', '280'],
['1790', '5342', '7595', '7', '2301', '7', '1348', '1349', '187', '423'],
['937', '8', '7507', '241', '602', '173', '363', '377', '363', '432'],
['100', '124', '83', '51', '2143', '17', '2206', '16', '2556', '16'],
['53', '7463', '53', '124', '1169', '1347', '1169', '1347', '177', '1714'],
['2298', '3', '205', '876', '7476', '876', '491', '94', '192', '98'],
['3087', '3', '1036', '3', '34', '12', '292', '1526', '292', '432'],
['1274', '178', '1814', '61', '3104', '61', '1828', '61', '1813', '13'],
['227', '73', '26', '62', '1043', '1', '1034', '52', '90', '52'],
['1348', '7', '83', '2', '211', '70', '117', '1488', '97', '925'],
['2295', '3', '174', '95', '7497', '95', '424', '183', '1116', '11'],
['394', '6', '3222', '6', '1560', '138', '477', '1640', '16', '164'],
['3043', '3', '1790', '3', '30', '244', '333', '373', '89', '29'],
['2285', '3', '228', '816', '177', '332', '177', '32', '672', '2'],
['1536', '371', '523', '25', '593', '41', '42', '62', '5679', '3'],
['883', '3', '1360', '7', '497', '464', '497', '464', '53', '110'],
['199', '257', '2916', '257', '32', '66', '42', '136', '891', '893'],
['2290', '3', '1359', '42', '33', '1144', '33', '10', '319', '7'],
['1816', '4', '148', '2', '257', '1295', '257', '380', '777', '10'],
['1524', '3', '320', '33', '326', '45', '353', '996', '1615', '996'],
['1169', '124', '585', '123', '1376', '372', '20', '730', '79', '1276'],
['749', '813', '5342', '507', '2104', '79', '732', '682', '2109', '1'],
['876', '41', '7562', '7', '7588', '428', '7600', '161', '550', '553'],
['33', '18', '260', '189', '1506', '32', '9', '1163', '9', '1478'],
['3', '877', '7', '640', '149', '50', '7603', '546', '7603', '1264'],
['215', '569', '405', '507', '182', '660', '189', '1298', '21', '522'],
['479', '25', '1165', '177', '73', '88', '16', '265', '490', '177'],
['302', '176', '2634', '268', '1039', '268', '101', '42', '51', '40'],
['131', '721', '406', '721', '406', '106', '8', '1068', '197', '198'],
['859', '4', '840', '54', '7564', '1455', '7564', '275', '1688', '384'],
['437', '18', '121', '800', '71', '243', '1177', '1', '2947', '1'],
['1398', '23', '504', '608', '340', '608', '504', '995', '1210', '857'],
['423', '187', '16', '779', '16', '2565', '16', '14', '392', '184'],
['510', '181', '510', '1679', '1978', '513', '798', '264', '185', '264'],
['420', '4', '379', '44', '446', '44', '49', '174', '3', '701'],
['224', '4', '1415', '8', '1415', '356', '743', '134', '743', '356'],
['7582', '990', '419', '84', '1417', '50', '663', '820', '663', '2335'],
['16', '2950', '16', '1010', '2110', '1010', '2110', '1010', '2110', '1010'],
['311', '4', '2024', '4', '211', '1286', '9', '1023', '22', '613'],
['562', '4', '3774', '89', '28', '50', '778', '2011', '1657', '2011'],
['1230', '4', '2439', '4', '367', '383', '7564', '89', '1267', '28'],
['1239', '48', '2114', '48', '10', '56', '197', '798', '955', '798'],
['567', '101', '193', '168', '6', '2352', '6', '3', '120', '5'],
['139', '167', '33', '1833', '33', '204', '7565', '687', '639', '11'],
['466', '223', '91', '1665', '91', '441', '721', '50', '1755', '11'],
['1085', '129', '1227', '7401', '4', '386', '144', '758', '495', '386'],
['771', '16', '2003', '16', '1922', '16', '2001', '16', '1222', '63'],
['1270', '109', '999', '109', '422', '536', '84', '404', '175', '42'],
['1264',
'7603',
'548',
'7603',
'1723',
'7603',
'2828',
'7603',
'1479',
'2131'],
['3774', '182', '169', '668', '380', '777', '167', '45', '1424', '189'],
['1003', '169', '1439', '130', '1646', '66', '483', '2994', '483', '177'],
['367', '74', '1263', '74', '131', '3212', '131', '721', '297', '91'],
['1107', '4', '567', '4', '121', '2118', '18', '220', '304', '1'],
['661', '404', '2506', '1666', '401', '715', '7552', '16', '145', '7334'],
['274', '167', '419', '987', '2', '2459', '2', '1409', '148', '109'],
['654', '716', '2495', '716', '57', '872', '57', '361', '3012', '361'],
['910', '223', '910', '223', '910', '4', '1086', '2689', '1086', '4'],
['1091', '4', '1737', '547', '585', '36', '291', '5', '163', '3'],
['84', '1417', '50', '35', '7415', '863', '131', '1241', '299', '150'],
['531', '7327', '531', '1095', '531', '774', '531', '45', '21', '3'],
['501', '794', '501', '44', '16', '60', '361', '60', '7564', '1441'],
['583', '628', '481', '314', '355', '106', '22', '42', '181', '356'],
['1702', '4', '215', '323', '1015', '323', '1241', '538', '536', '1237'],
['3356', '4', '223', '165', '407', '109', '16', '419', '99', '7552'],
['733', '122', '1738', '122', '43', '33', '1349', '2292', '6', '815'],
['356', '7564', '1681', '7564', '1663', '7564', '680', '1472', '680', '140'],
['1678', '4', '781', '16', '148', '1913', '148', '439', '4', '1109'],
['1010', '16', '65', '145', '17', '55', '166', '995', '194', '712'],
['7596', '536', '612', '511', '403', '302', '89', '587', '43', '246'],
['781', '928', '781', '16', '7552', '21', '348', '694', '2173', '694'],
['2017', '4', '446', '2', '1209', '379', '2', '504', '2593', '504'],
['2002', '4', '538', '536', '535', '912', '1623', '536', '538', '536'],
['572', '4', '1010', '16', '2944', '16', '2077', '515', '2077', '515'],
['202', '1011', '76', '28', '1450', '786', '65', '7570', '2008', '385'],
['664', '4', '202', '1011', '90', '56', '753', '286', '90', '1011'],
['1636', '1967', '10', '2957', '10', '482', '11', '372', '36', '172'],
['326', '4', '150', '22', '70', '360', '876', '278', '30', '1342'],
['341', '1224', '236', '529', '1154', '529', '2529', '529', '236', '383'],
['538', '4', '148', '4', '99', '106', '44', '39', '2111', '39'],
['1233', '71', '2234', '71', '38', '80', '371', '32', '2132', '7565'],
['2492', '4', '23', '138', '477', '867', '4311', '867', '477', '1662'],
['536', '2614', '536', '149', '182', '217', '1651', '202', '4', '7327'],
['1621', '4', '249', '904', '82', '3343', '82', '43', '156', '591'],
['1415', '356', '7372', '7564', '102', '67', '256', '2790', '2788', '2790'],
['323', '57', '50', '663', '50', '912', '534', '2474', '534', '2474'],
['1611', '7550', '576', '579', '619', '167', '106', '472', '106', '1041'],
['609', '182', '175', '169', '149', '45', '7576', '614', '577', '1693'],
['366', '655', '16', '42', '7603', '4', '99', '145', '1611', '7552'],
['848', '4', '439', '148', '1600', '40', '770', '40', '18', '1564'],
['1602', '4', '840', '4', '7552', '625', '439', '1044', '439', '1402'],
['365', '50', '14', '1158', '14', '316', '14', '1128', '409', '2164'],
['419', '1927', '419', '84', '1423', '84', '382', '45', '26', '933'],
['841', '90', '145', '7502', '7363', '7502', '125', '118', '163', '39'],
['468', '7552', '1401', '23', '656', '3', '433', '5', '1535', '12'],
['193', '9', '2141', '1458', '83', '279', '98', '558', '15', '469'],
['2103', '4', '178', '982', '178', '46', '178', '29', '155', '113'],
['2405', '4', '119', '23', '770', '7328', '28', '1450', '328', '2661'],
['411', '24', '173', '19', '291', '5', '47', '7602', '7', '34'],
['807', '20', '152', '988', '629', '48', '2734', '48', '42', '79'],
['1038', '33', '7513', '43', '1783', '43', '122', '759', '11', '1755'],
['265', '77', '403', '1001', '39', '52', '21', '205', '172', '555'],
['2201', '4', '771', '419', '606', '419', '531', '54', '2', '99'],
['361', '60', '673', '598', '134', '244', '80', '260', '78', '550'],
['1737', '4', '536', '2643', '536', '149', '50', '195', '864', '153'],
['1139', '211', '156', '103', '10', '941', '10', '787', '2407', '787'],
['386', '144', '386', '495', '758', '495', '349', '1159', '7528', '6'],
['727', '252', '1084', '252', '343', '855', '343', '252', '7518', '22'],
['1108', '423', '1108', '4', '768', '7', '7574', '1691', '165', '38'],
['1642', '4', '773', '563', '1', '2782', '1', '3148', '1', '2846'],
['1109', '504', '132', '201', '8', '347', '21', '41', '356', '7'],
['918', '343', '918', '1509', '427', '112', '427', '1777', '427', '1777'],
['2511', '4', '1816', '3', '396', '4', '1737', '4', '7327', '531'],
['570', '1249', '28', '931', '3774', '1655', '226', '75', '568', '182'],
['716', '502', '533', '45', '44', '6', '331', '492', '25', '1031'],
['313', '48', '542', '1276', '664', '57', '874', '38', '3310', '38'],
['718', '654', '1918', '654', '466', '658', '2', '87', '280', '142'],
['1615', '996', '1615', '996', '1615', '4', '1642', '4', '917', '4'],
['1218', '4', '1908', '1078', '1521', '1078', '353', '4', '420', '4'],
['1086', '2689', '1086', '2689', '1086', '2689', '1086', '4', '361', '107'],
['2485', '4', '2868', '4', '379', '54', '113', '54', '74', '18'],
['7569', '84', '853', '1673', '853', '77', '2667', '77', '177', '10'],
['1088', '4', '10', '2582', '10', '259', '614', '1108', '4', '1233'],
['535', '3516', '536', '422', '382', '1004', '449', '4', '1691', '7604'],
['2467', '4', '193', '177', '1765', '177', '214', '73', '728', '2615'],
['2468', '4', '1087', '23', '1082', '23', '44', '249', '389', '74'],
['2465', '4', '202', '2004', '202', '574', '202', '1011', '784', '1451'],
['401', '205', '62', '596', '191', '125', '484', '65', '244', '143'],
['772', '4', '7596', '536', '612', '536', '668', '109', '217', '1292'],
['2454', '4', '1615', '996', '1615', '996', '97', '17', '65', '272'],
['773', '563', '237', '563', '99', '10', '67', '22', '2255', '641'],
['711', '16', '1910', '16', '993', '505', '993', '16', '2944', '16'],
['7550', '938', '132', '315', '26', '760', '978', '760', '978', '760'],
['842', '148', '131', '654', '254', '2620', '28', '130', '1646', '66'],
['153', '1692', '1115', '402', '117', '796', '7603', '274', '422', '1419'],
['2442', '4', '117', '1324', '161', '69', '2171', '22', '165', '38'],
['840', '54', '2', '564', '748', '21', '946', '519', '89', '1453'],
['2439', '4', '353', '1916', '1607', '113', '177', '2213', '177', '32'],
['1908', '4', '1737', '4', '1086', '2689', '1086', '2689', '1086', '2689'],
['251', '45', '7589', '382', '167', '183', '670', '28', '1450', '407'],
['1087', '44', '23', '2447', '23', '772', '168', '7599', '7600', '2218'],
['1909', '4', '10', '26', '284', '7372', '18', '729', '20', '366'],
['532', '1119', '7564', '849', '7564', '670', '7564', '1681', '7564', '90'],
['603', '1382', '1', '305', '9', '348', '557', '3', '971', '81'],
['191', '24', '145', '639', '177', '68', '457', '3133', '774', '419'],
['248', '3280', '248', '158', '904', '82', '2336', '186', '374', '400'],
['372', '5', '433', '3254', '309', '3254', '1', '625', '1469', '625'],
['982', '5', '125', '817', '1', '3143', '1', '1742', '117', '370'],
['1372', '1575', '1372', '5', '27', '96', '125', '5', '3', '227'],
['335', '115', '1574', '115', '3251', '115', '280', '12', '3354', '12'],
['1192', '5', '2395', '125', '249', '185', '37', '314', '355', '444'],
['893', '12', '1553', '177', '142', '519', '1288', '1', '2754', '1'],
['59', '12', '33', '7428', '85', '31', '188', '161', '623', '260'],
['904', '82', '179', '748', '26', '494', '59', '12', '1072', '12'],
['349', '55', '272', '9', '295', '8', '1104', '915', '142', '272'],
['61', '1274', '46', '2303', '46', '2279', '46', '1313', '1492', '1313'],
['826', '12', '1873', '15', '70', '65', '9', '97', '475', '1980'],
['122', '41', '95', '278', '227', '7380', '59', '191', '463', '111'],
['1872', '5', '31', '146', '2168', '146', '277', '78', '1021', '78'],
['391', '7', '743', '7', '757', '3', '266', '69', '2183', '798'],
['1068', '197', '798', '2207', '634', '67', '634', '2207', '798', '370'],
['7591', '141', '92', '83', '255', '1282', '7', '1346', '27', '321'],
['82', '31', '14', '24', '122', '15', '953', '1390', '2336', '164'],
['3307', '5', '238', '714', '238', '99', '503', '1', '3385', '1'],
['3206', '5', '43', '694', '11', '680', '2112', '7603', '452', '132'],
['434', '1278', '288', '141', '173', '105', '164', '19', '1581', '19'],
['597', '1851', '705', '1016', '335', '59', '5', '1375', '305', '1022'],
['684', '124', '25', '7509', '25', '60', '549', '17', '113', '14'],
['801', '8', '457', '68', '296', '214', '1795', '1166', '614', '647'],
['1373', '5', '7479', '2336', '1393', '1388', '7385', '1388', '1902', '1388'],
['110', '8', '268', '2634', '268', '8', '2955', '8', '157', '7603'],
['7126', '5', '97', '14', '146', '2894', '146', '9', '548', '166'],
['1535', '258', '358', '199', '879', '3', '752', '199', '358', '7604'],
['528', '7591', '2369', '115', '208', '33', '1502', '143', '1026', '143'],
['198', '75', '216', '115', '435', '115', '2364', '59', '12', '292'],
['3336', '5', '7457', '684', '83', '1760', '190', '550', '11', '145'],
['602', '7575', '602', '5', '7523', '338', '7523', '338', '7532', '338'],
['2395', '125', '249', '7604', '53', '332', '7337', '332', '242', '95'],
['599', '68', '7602', '7', '7600', '190', '889', '190', '7580', '39'],
['245', '111', '125', '173', '602', '7575', '602', '12', '291', '1900'],
['1819', '13', '2320', '13', '1529', '13', '1497', '17', '66', '483'],
['3309', '5', '1373', '134', '598', '673', '740', '3005', '740', '60'],
['310', '116', '310', '7391', '19', '8', '7507', '650', '649', '125'],
['695', '10', '3', '7431', '125', '247', '81', '33', '24', '166'],
['280', '106', '7582', '16', '607', '769', '119', '121', '38', '152'],
['3009', '5', '460', '434', '460', '73', '24', '1856', '24', '1731'],
['2386', '5', '7341', '1864', '7341', '33', '228', '179', '22', '14'],
['484', '1568', '7595', '116', '259', '614', '707', '82', '163', '5'],
['2354', '206', '588', '67', '135', '17', '2797', '17', '272', '106'],
['1194', '1869', '1868', '1050', '61', '3123', '61', '3093', '61', '7433'],
['2333', '8', '3116', '8', '2384', '6', '522', '8', '754', '812'],
['825', '59', '208', '162', '1167', '103', '102', '109', '1270', '109'],
['292', '894', '11', '7525', '12', '391', '253', '27', '26', '211'],
['1378', '36', '424', '87', '1156', '522', '8', '613', '48', '15'],
['1868',
'3195',
'1868',
'3195',
'1868',
'1869',
'1868',
'1869',
'1194',
'1869'],
['1854', '5', '27', '1127', '20', '807', '21', '347', '1749', '69'],
['1376', '59', '34', '12', '7575', '12', '21', '110', '29', '142'],
['692', '156', '5', '49', '1195', '432', '351', '12', '1061', '1'],
['7444', '3129', '269', '87', '259', '1149', '454', '161', '690', '455'],
['1859', '5', '1372', '5', '2333', '5', '1535', '13', '1532', '13'],
['3236', '5', '979', '1528', '3133', '46', '457', '3133', '46', '127'],
['2363', '5', '1156', '397', '1060', '373', '3146', '373', '395', '89'],
['3231', '5', '26', '1128', '7603', '305', '12', '191', '24', '443'],
['2320', '5', '774', '3133', '1528', '46', '3102', '46', '13', '7514'],
['706', '128', '115', '128', '123', '1146', '31', '22', '2079', '735'],
['2335', '5', '434', '460', '769', '152', '188', '165', '26', '7578'],
['141', '118', '292', '321', '27', '118', '103', '102', '288', '969'],
['7531', '1194', '972', '1194', '7531', '58', '830', '2336', '157', '26'],
['3305', '6', '150', '7598', '1691', '1384', '1691', '165', '114', '233'],
['524', '6', '826', '6', '36', '3', '162', '154', '290', '490'],
['809', '6', '1056', '29', '33', '2596', '33', '362', '95', '345'],
['1057', '7', '7417', '7484', '43', '487', '22', '9', '219', '81'],
['1541', '7595', '53', '332', '3325', '332', '73', '155', '77', '664'],
['1371', '6', '754', '3', '1545', '3', '1336', '25', '2', '7401'],
['1571', '138', '477', '138', '16', '7438', '10', '2204', '874', '259'],
['1185', '12', '1376', '59', '130', '849', '1667', '50', '24', '42'],
['1566', '8', '347', '3', '1174', '16', '993', '16', '771', '16'],
['810', '6', '3222', '6', '1546', '6', '287', '1875', '287', '748'],
['1350', '6', '549', '1579', '549', '1579', '36', '172', '205', '3'],
['549', '7', '51', '638', '9', '42', '109', '379', '1230', '134'],
['526', '292', '5', '7447', '762', '555', '80', '269', '64', '86'],
['431', '1549', '3266', '1549', '431', '2340', '16', '298', '1210', '529'],
['1546', '138', '566', '507', '422', '467', '408', '138', '436', '7437'],
['1461', '6', '3096', '15', '1390', '953', '61', '2268', '61', '3123'],
['458', '8', '492', '331', '492', '3', '242', '7514', '13', '1531'],
['1374', '19', '335', '7525', '12', '459', '12', '3281', '12', '1072'],
['7420', '6', '112', '56', '53', '3', '1347', '3', '641', '8'],
['2376', '149', '531', '1095', '63', '177', '1304', '798', '7397', '350'],
['3222', '6', '112', '3031', '112', '2276', '112', '73', '728', '2615'],
['3158', '6', '1157', '7595', '430', '817', '430', '817', '125', '26'],
['2326', '6', '7444', '3129', '8', '877', '1036', '7515', '8', '1033'],
['728', '17', '77', '660', '189', '52', '39', '1145', '52', '306'],
['278', '29', '5', '416', '398', '3359', '398', '3359', '398', '706'],
['1830', '138', '477', '608', '477', '1662', '477', '608', '995', '608'],
['7415', '863', '368', '382', '7589', '175', '1', '1232', '1936', '2'],
['104', '27', '67', '1068', '5', '7503', '82', '49', '596', '62'],
['1157', '509', '1000', '509', '1157', '7527', '1157', '6', '3066', '6'],
['2384', '6', '20', '7577', '347', '1458', '26', '798', '7550', '69'],
['2371', '138', '436', '1419', '436', '1419', '1624', '238', '2', '871'],
['1578', '19', '1581', '19', '27', '81', '229', '93', '5', '3206'],
['3176', '6', '331', '296', '295', '22', '2336', '906', '104', '185'],
['3155', '6', '106', '7582', '990', '37', '1764', '2940', '1764', '37'],
['1254', '21', '85', '75', '152', '148', '466', '658', '7569', '606'],
['1179', '56', '227', '7380', '5', '7406', '3', '7341', '1', '3167'],
['1055', '430', '7595', '882', '88', '129', '845', '23', '401', '1666'],
['7554', '27', '88', '461', '88', '20', '2', '1632', '37', '570'],
['522', '6', '1542', '1', '3355', '1', '116', '491', '22', '156'],
['1152', '6', '190', '157', '334', '54', '379', '40', '379', '1209'],
['3112', '138', '477', '1662', '153', '154', '270', '4', '439', '148'],
['3096', '101', '567', '101', '478', '101', '16', '7550', '251', '4'],
['3066', '6', '1571', '138', '436', '270', '7455', '4910', '7455', '4910'],
['188', '161', '623', '33', '15', '55', '26', '49', '105', '164'],
['3092', '6', '408', '6', '203', '1853', '412', '7550', '210', '2732'],
['750', '7', '1757', '69', '2196', '25', '228', '177', '219', '1789'],
['1346', '6', '190', '304', '945', '304', '346', '18', '194', '151'],
['149', '911', '541', '1627', '475', '2', '241', '12', '649', '310'],
['2292', '1349', '373', '3069', '373', '46', '1172', '13', '1049', '1052'],
['1776', '6', '207', '92', '1821', '127', '1591', '127', '1591', '127'],
['7528', '6', '5', '13', '659', '99', '842', '4', '2485', '4'],
['1450', '28', '1674', '28', '1450', '6', '32', '203', '17', '255'],
['408', '138', '16', '1212', '353', '438', '121', '2838', '121', '119'],
['7517', '62', '51', '796', '14', '113', '155', '622', '1018', '622'],
['7536', '62', '21', '264', '2', '189', '552', '166', '8', '600'],
['1151', '7', '3', '62', '92', '207', '6', '1350', '6', '1152'],
['429', '53', '5342', '1611', '7550', '188', '11', '482', '66', '2'],
['956', '7', '34', '12', '1322', '161', '204', '1150', '592', '87'],
['638', '9', '98', '69', '98', '211', '2', '473', '986', '540'],
['5342', '109', '379', '109', '422', '536', '612', '303', '2045', '303'],
['1178', '7', '484', '7', '75', '216', '756', '216', '7', '319'],
['640', '2831', '640', '2831', '640', '42', '7603', '60', '267', '304'],
['2211', '7', '100', '124', '1820', '199', '517', '7603', '1158', '14'],
['98', '211', '47', '2', '165', '26', '8', '2821', '17', '799'],
['696', '179', '582', '1691', '7601', '7', '743', '134', '244', '197'],
['878', '5392', '878', '5392', '878', '5392', '878', '21', '201', '1509'],
['1782', '7', '30', '689', '329', '284', '212', '1617', '212', '17'],
['1768', '7', '40', '37', '10', '454', '3', '2285', '3', '3200'],
['1310', '7', '1499', '7', '24', '49', '3289', '49', '36', '209'],
['417', '110', '92', '345', '3', '1849', '3', '976', '130', '1210'],
['666', '253', '543', '253', '391', '12', '893', '5', '7380', '227'],
['980', '82', '73', '32', '5', '1156', '87', '171', '83', '7499'],
['645', '754', '645', '7', '752', '95', '1889', '2387', '27', '67'],
['371', '80', '269', '35', '1385', '35', '7580', '689', '7580', '689'],
['7562', '11', '360', '350', '1740', '255', '17', '176', '20', '241'],
['1171', '7', '768', '148', '768', '446', '4', '156', '9', '733'],
['2981', '7', '7501', '7', '2361', '2412', '96', '2412', '96', '269'],
['946', '10', '1031', '19', '88', '7404', '88', '10', '492', '115'],
['506', '90', '203', '89', '77', '4', '1088', '537', '45', '26'],
['318', '26', '7604', '7334', '145', '51', '7', '1731', '14', '1158'],
...]
```python
# Generate 10 collections of biased random walks (p = 0.5, q = 2, length 10);
# each RW_Biased(...) call returns a dict, and we keep only its values (the walks).
biased_rw_for_training = [list(x.values())
                          for x in [RW_Biased(full_from, full_to, walk_length=10, p=0.5, q=2)
                                    for i in range(10)]]
```
```python
# Fold the second collection of walks into the first (appended as one element).
biased_rw_for_training[0].append(biased_rw_for_training[1])

# Flatten all collections into a single list of walks for training.
final_results = []
for i in biased_rw_for_training:
    final_results = final_results + i
```
```python
# Inspect the first walk of the first collection
biased_rw_for_training[0][0]
```
```python
from stellargraph.data import BiasedRandomWalk
from stellargraph import StellarGraph
from stellargraph import datasets
from IPython.display import display, HTML
from gensim.models import Word2Vec

# Train 128-dimensional skip-gram (sg=1) embeddings on the walks.
# `size` and `iter` are the gensim 3.x parameter names
# (renamed to `vector_size` and `epochs` in gensim >= 4).
model = Word2Vec(biased_rw_for_training[0], size=128, window=5, min_count=0, sg=1, workers=2, iter=1)
```
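
A quick sanity check on the trained embeddings is to look up a node's vector and its nearest neighbours. A minimal sketch, assuming the gensim 3.x API used above and that node id '2746' (node ids are strings, as in the walks printed earlier) occurs in the training corpus:

```python
# Retrieve the 128-dimensional embedding of a node.
vec = model.wv['2746']               # node id taken from the walks above
print(vec.shape)                     # -> (128,)

# Nodes whose embeddings are closest in cosine similarity.
print(model.wv.most_similar('2746', topn=5))
```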
## References:
1. [NRL Tutorial Part 1](http://snap.stanford.edu/proj/embeddings-www/files/nrltutorial-part1-embeddings.pdf)
| b32e0d1c240dd77a99edca86372a5cf9ee8e0833 | 334,510 | ipynb | Jupyter Notebook | Notes/Node Embeddings and Skip Gram Examples.ipynb | poc1673/ML-for-Networks | 201ca30ab51954a7b1471740eb404b98f1d26213 | ["MIT"] | null | null | null | Notes/Node Embeddings and Skip Gram Examples.ipynb | poc1673/ML-for-Networks | 201ca30ab51954a7b1471740eb404b98f1d26213 | ["MIT"] | null | null | null | Notes/Node Embeddings and Skip Gram Examples.ipynb | poc1673/ML-for-Networks | 201ca30ab51954a7b1471740eb404b98f1d26213 | ["MIT"] | null | null | null | 70.586622 | 1,189 | 0.38754 | true | 141,121 | Qwen/Qwen-72B | 1. YES 2. YES | 0.718594 | 0.721743 | 0.518641 | __label__krc_Cyrl | 0.976854 | 0.043305
# Worksheet 5
```
%matplotlib inline
```
## Question 1
Explain when multistep methods such as Adams-Bashforth are useful and when multistage methods such as RK methods are better.
### Answer Question 1
Multistep methods are more computationally efficient (fewer function evaluations) and more accurate than multistage methods. However, they are not self-starting, difficult to adapt to use variable step sizes, and the theory to show that they are stable and convergent is more complex. They are most useful when efficiency is the primary concern and the system is sufficiently well controlled that equally spaced steps can be taken.
In other situations, as discussed on worksheet 4, the self-starting simplicity combined with adaptive stepping means that multistage methods are preferable.
## Question 2
Compute the coefficients of the AB3 algorithm.
### Answer Question 2
For Adams-Bashforth methods we have
\begin{equation}
y_{n+1} = y_n + h \left[ b_{k-1} f_n + b_{k-2} f_{n-1} + \dots + b_0 f_{n+1-k} \right].
\end{equation}
Here we have $k = 3$ and so we have
\begin{equation}
y_{n+1} = y_n + h \left[ b_2 f_n + b_1 f_{n−1} + b_0 f_{n−2} \right] .
\end{equation}
We want to ensure that this gives an exact approximation of the integral form for polynomials of
order s = $0, \dots, 2$. That is, we want
\begin{equation}
\int^{x_{n+1}}_{x_n} p_s (x) = h \left[ b_2 p_s (x_n) + b_1 p_s (x_{n−1}) + b_0 p_s (x_{n−2}) \right].
\end{equation}
For simplicity, and without loss of generality, we set $x_n = 0$, and use the polynomials
\begin{align}
p_0(x) & = 1, \\
p_1(x) & = x, \\
p_2(x) & = x ( x + h )
\end{align}
We then see that we get
\begin{align}
s & = 0: & \int_0^h 1 & = h \left[ b_2 \times 1 + b_1 \times 1 + b_0 \times 1 \right] \\
\implies && 1 & = b_2 + b_1 + b_0. \\
s & = 1: & \int_0^h x & = h \left[ b_2 \times 0 + b_1 \times (-h) + b_0 \times (-2 h) \right] \\
\implies && \frac{1}{2} & = -b_1 - 2 b_0. \\
s & = 2: & \int_0^h x ( x + h ) & = h \left[ b_2 \times 0 + b_1 \times 0 + b_0 \times (-2 h) (-h) \right] \\
\implies && \frac{5}{6} & = 2 b_0.
\end{align}
By back-substitution we find
\begin{equation}
b_0 = \frac{5}{12}, \quad b_1 = -\frac{4}{3}, \quad b_2 = \frac{23}{12}.
\end{equation}
This means the algorithm is
\begin{equation}
y_{n+1} = y_n + \frac{h}{12} \left[ 23 f_n − 16 f_{n−1} + 5 f_{n−2} \right].
\end{equation}
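
As a cross-check, the three moment conditions above form a small linear system that can be solved numerically; a minimal sketch using numpy (not part of the original worksheet):

```
import numpy as np

# Rows are the s = 0, 1, 2 conditions on the unknowns (b2, b1, b0)
A = np.array([[1.0,  1.0,  1.0],
              [0.0, -1.0, -2.0],
              [0.0,  0.0,  2.0]])
rhs = np.array([1.0, 0.5, 5.0/6.0])
b2, b1, b0 = np.linalg.solve(A, rhs)
print(b2, b1, b0)   # 1.91666... = 23/12, -1.33333... = -4/3, 0.41666... = 5/12
```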
## Question 3
Explain the meaning of stability, consistency and convergence when applied to numerical methods for IVPs. State the theorem connecting these.
### Answer Question 3
*Stability*: The numerical solution is bounded at all iterations over a finite interval. I.e., if the true solution is $y(x)$ and $x \in [0, X]$ with $X$ finite, and we use $N + 1$ steps with $x_0 = 0$ and $x_N = X$, then $|y_i|$ is finite for all $i = 0, 1, \dots , N$, irrespective of the value of $N$.
*Consistency*: The numerical method is a faithful representation of the differential equation to lowest order in $h$. That is, if you Taylor expand the numerical difference scheme and let $h \to 0$ you recover the original differential equation independent of the limiting process.
*Convergence*: If $y(x)$ is the exact solution and $y(x; h)$ the numerical solution using step size $h$, in the limit as $h \to 0$ the numerical solution is the exact solution:
\begin{equation}
\lim_{h \to 0} y(x; h) = y(x).
\end{equation}
The theorem states that consistency and stability are equivalent to convergence.
## Question 4
Using the stability polynomial and your results above, check the order of accuracy and the stability of the 3 step Adams-Bashforth method.
### Answer Question 4
The coefficients of AB3 in the standard $k$-step formula notation are
\begin{align}
a_3 & = 1 & a_2 & = -1 & a_1 & = 0 & a_0 & = 0 \\
b_3 & = 0 & b_2 & = \frac{23}{12} & b_1 & = -\frac{4}{3} & b_0 & = \frac{5}{12}.
\end{align}
Therefore the stability polynomial is
\begin{equation}
p(z) = z^3 - z^2
\end{equation}
with derivative
\begin{equation}
p'(z) = 3 z^2 - 2 z
\end{equation}
and the other required polynomial is
\begin{equation}
q(z) = \frac{1}{12} \left( 23 z^2 - 16 z + 5 \right).
\end{equation}
To check consistency we need that $p(1) = 0$ and $p'(1) = q(1)$, which we check:
\begin{align}
p(1) & = 1 - 1 \\ & = 0. \\
p'(1) - q(1) & = (3 - 2) - \frac{1}{12} (23 -16 + 5) \\ & = 1 - \frac{12}{12} \\ & = 0.
\end{align}
So the method is consistent.
To check stability we have to find the roots of the stability polynomial $p(z)$. We write
\begin{equation}
p(z) = z^2 (z − 1)
\end{equation}
to see that the roots are 0 (twice) and 1, which means that the method satisfies the *strong* root condition implying both stability and relative stability, meaning it is a useful method.
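
The root condition can also be verified numerically by asking numpy for the roots of the stability polynomial; a small sketch:

```
import numpy as np

# p(z) = z^3 - z^2, coefficients in descending powers of z
print(np.roots([1.0, -1.0, 0.0, 0.0]))   # -> [1., 0., 0.]
```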
## Coding Question 1
Apply the 2-step Adams-Bashforth method to the ODE from Worksheet 4,
\begin{equation}
y' + 2 y = 2 − e^{−4x}, \quad y(0) = 1.
\end{equation}
Use the Euler or Euler predictor-corrector method to start. Again, find the value of y(1) (analytic answer is $1 − (e^{−2} − e^{−4} )/2)$ and see how your method converges with resolution.
### Answer Coding Question 1
```
def AB2(f, y0, interval, N = 100, start = 'Euler'):
"""Solve the IVP y' = f(x, y) on the given interval using N+1 points (counting the initial point) with initial data y0."""
import numpy as np
h = (interval[1] - interval[0]) / N
x = np.linspace(interval[0], interval[1], N+1)
y = np.zeros((len(y0), N+1))
ff = np.zeros((len(y0), N+1))
y[:, 0] = y0
ff[:, 0] = f(x[0], y[:, 0])
if (start == 'Euler'):
y[:, 1] = y0 + h * ff[:, 0]
elif (start == 'Euler PC'):
yp = y0 + h * ff[:, 0]
y[:, 1] = y0 + h * ( ff[:, 0] + f(x[1], yp) ) / 2.0
else:
raise Exception("Only allowed values for start are ['Euler', 'Euler PC']")
ff[:, 1] = f(x[1], y[:, 1])
for i in range(1, N):
y[:, i+1] = y[:, i] + h * ( 3.0 * ff[:, i] - ff[:, i-1] ) / 2.0
ff[:, i+1] = f(x[i+1], y[:, i+1])
return x, y
def fn_q1(x, y):
"""Function defining the IVP in question 1."""
import numpy as np
return 2.0 - np.exp(-4.0*x) - 2.0*y
# Now do the test
import numpy as np
exact_y_end = 1.0 - (np.exp(-2.0) - np.exp(-4.0)) / 2.0
# Test at default resolution
x, y = AB2(fn_q1, np.array([1.0]), [0.0, 1.0])
print "Error at the end point is ", y[:, -1] - exact_y_end
import matplotlib.pyplot as plt
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.plot(x, y[0, :], 'b-+')
plt.xlabel('$x$', size = 16)
plt.ylabel('$y$', size = 16)
# Now do the convergence test
levels = np.array(range(4, 10))
Npoints = 2**levels
abs_err_Euler = np.zeros(len(Npoints))
abs_err_EulerPC = np.zeros(len(Npoints))
for i in range(len(Npoints)):
x, y = AB2(fn_q1, np.array([1.0]), [0.0, 1.0], Npoints[i])
abs_err_Euler[i] = abs(y[0, -1] - exact_y_end)
x, y = AB2(fn_q1, np.array([1.0]), [0.0, 1.0], Npoints[i], 'Euler PC')
abs_err_EulerPC[i] = abs(y[0, -1] - exact_y_end)
# Best fit to the errors
h = 1.0 / Npoints
p_Euler = np.polyfit(np.log(h), np.log(abs_err_Euler), 1)
p_EulerPC = np.polyfit(np.log(h), np.log(abs_err_EulerPC), 1)
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.loglog(h, abs_err_Euler, 'kx')
plt.loglog(h, np.exp(p_Euler[1]) * h**(p_Euler[0]), 'k-')
plt.loglog(h, abs_err_EulerPC, 'bo')
plt.loglog(h, np.exp(p_EulerPC[1]) * h**(p_EulerPC[0]), 'b--')
plt.xlabel('$h$', size = 16)
plt.ylabel('$|$Error$|$', size = 16)
plt.legend(('AB2 Errors (Euler start)', "Best fit line slope {0:.3}".format(p_Euler[0]), 'AB2 Errors (Euler PC start)', "Best fit line slope {0:.3}".format(p_EulerPC[0])), loc = "upper left")
plt.show()
```
Both converge at order two: oddly, the results starting with the Euler predictor-corrector are noticeably worse.
## Coding Question 2
Apply the 2-step implicit Adams-Moulton method to the above ODE, using the 2-step Adams-Bashforth method as a predictor. Use the Euler or Euler predictor-corrector method to start. See how your method converges with resolution.
### Answer Coding Question 2
```
def AM2(f, y0, interval, N = 100, start = 'Euler'):
"""Solve the IVP y' = f(x, y) on the given interval using N+1 points (counting the initial point) with initial data y0."""
import numpy as np
h = (interval[1] - interval[0]) / N
x = np.linspace(interval[0], interval[1], N+1)
y = np.zeros((len(y0), N+1))
ff = np.zeros((len(y0), N+1))
y[:, 0] = y0
ff[:, 0] = f(x[0], y[:, 0])
if (start == 'Euler'):
y[:, 1] = y0 + h * ff[:, 0]
elif (start == 'Euler PC'):
yp = y0 + h * ff[:, 0]
y[:, 1] = y0 + h * ( ff[:, 0] + f(x[1], yp) ) / 2.0
else:
raise Exception("Only allowed values for start are ['Euler', 'Euler PC']")
ff[:, 1] = f(x[1], y[:, 1])
for i in range(1, N):
# Adams-Bashforth 2 for the predictor step
        yp = y[:, i] + h * ( 3.0 * ff[:, i] - ff[:, i-1] ) / 2.0
        # Adams-Moulton corrector step (trapezoidal form); the update starts from y[:, i]
        y[:, i+1] = y[:, i] + h * (ff[:, i] + f(x[i+1], yp)) / 2.0
ff[:, i+1] = f(x[i+1], y[:, i+1])
return x, y
def fn_q2(x, y):
"""Function defining the IVP in question 2."""
import numpy as np
return 2.0 - np.exp(-4.0*x) - 2.0*y
# Now do the test
import numpy as np
exact_y_end = 1.0 - (np.exp(-2.0) - np.exp(-4.0)) / 2.0
# Test at default resolution
x, y = AM2(fn_q2, np.array([1.0]), [0.0, 1.0])
print("Error at the end point is ", y[:, -1] - exact_y_end)
import matplotlib.pyplot as plt
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.plot(x, y[0, :], 'b-+')
plt.xlabel('$x$', size = 16)
plt.ylabel('$y$', size = 16)
# Now do the convergence test
levels = np.array(range(4, 10))
Npoints = 2**levels
abs_err_AM2_Euler = np.zeros(len(Npoints))
abs_err_AM2_EulerPC = np.zeros(len(Npoints))
for i in range(len(Npoints)):
    x, y = AM2(fn_q2, np.array([1.0]), [0.0, 1.0], Npoints[i])
abs_err_AM2_Euler[i] = abs(y[0, -1] - exact_y_end)
    x, y = AM2(fn_q2, np.array([1.0]), [0.0, 1.0], Npoints[i], 'Euler PC')
abs_err_AM2_EulerPC[i] = abs(y[0, -1] - exact_y_end)
# Best fit to the errors
h = 1.0 / Npoints
p_Euler = np.polyfit(np.log(h), np.log(abs_err_AM2_Euler), 1)
p_EulerPC = np.polyfit(np.log(h), np.log(abs_err_AM2_EulerPC), 1)
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.loglog(h, abs_err_AM2_Euler, 'kx')
plt.loglog(h, np.exp(p_Euler[1]) * h**(p_Euler[0]), 'k-')
plt.loglog(h, abs_err_AM2_EulerPC, 'bo')
plt.loglog(h, np.exp(p_EulerPC[1]) * h**(p_EulerPC[0]), 'b--')
plt.xlabel('$h$', size = 16)
plt.ylabel('$|$Error$|$', size = 16)
plt.legend(('AM2 Errors (Euler start)', "Best fit line slope {0:.3}".format(p_Euler[0]), 'AM2 Errors (Euler PC start)', "Best fit line slope {0:.3}".format(p_EulerPC[0])), loc = "upper left")
plt.show()
```
The results are essentially identical to Adams-Bashforth 2: both methods are second order, so only the error constant differs.
```
from IPython.core.display import HTML
def css_styling():
styles = open("../../IPythonNotebookStyles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: Verdana, Arial, Helvetica, sans-serif;
}
h2 {
font-family: Verdana, Arial, Helvetica, sans-serif;
}
h3 {
font-family: Verdana, Arial, Helvetica, sans-serif;
}
div.text_cell_render{
font-family: Gill, Verdana, Arial, Helvetica, sans-serif;
line-height: 110%;
font-size: 120%;
width:700px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro", source-code-pro,Consolas, monospace;
}
/* .prompt{
display: None;
}*/
.text_cell_render h5 {
font-weight: 300;
font-size: 12pt;
color: #4057A1;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
> (The cell above executes the style for this notebook. It closely follows the style used in the [12 Steps to Navier Stokes](http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/) course.)
| 2962f79baf57212f929725c9439fd61d611649ff | 153,678 | ipynb | Jupyter Notebook | Worksheets/Worksheet5_Notebook.ipynb | alistairwalsh/NumericalMethods | fa10f9dfc4512ea3a8b54287be82f9511858bd22 | ["MIT"] | 1 | 2021-12-01T09:15:04.000Z | 2021-12-01T09:15:04.000Z | Worksheets/Worksheet5_Notebook.ipynb | indranilsinharoy/NumericalMethods | 989e0205565131057c9807ed9d55b6c1a5a38d42 | ["MIT"] | null | null | null | Worksheets/Worksheet5_Notebook.ipynb | indranilsinharoy/NumericalMethods | 989e0205565131057c9807ed9d55b6c1a5a38d42 | ["MIT"] | 1 | 2021-04-13T02:58:54.000Z | 2021-04-13T02:58:54.000Z | 243.546751 | 39,589 | 0.884967 | true | 4,271 | Qwen/Qwen-72B | 1. YES 2. YES | 0.746139 | 0.887205 | 0.661978 | __label__eng_Latn | 0.869151 | 0.376328
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
```
### The Explicit Euler Method (Forward Euler)
#### Taylor expansion of a function y
The Taylor series expansion of $y(t)$ centered at $t_0$ is given by
$$
y(t) = \sum_{n=0}^{\infty} \frac{y^{(n)}(t_0)}{n!}(t-t_0)^n
$$
##### Expansion of y up to the first derivative
Let $h = t_n - t_{n-1}$
$$
y(t_{n+1}) = y(t_n) + y'(t_n)h + \mathcal{O}(h^2)\\
$$
The explicit Euler method is a recursive method for solving ordinary differential equations; it consists of using the Taylor approximation and ignoring the $\mathcal{O}(h^2)$ error, which gives us: <br>
$$
y_{n+1} \approx u_{n+1} = u_n + f(u_n,t_n) \cdot (t_{n+1} - t_n)
$$
where $y_n = y(t_n)$ is the analytic solution at the point $t_n$, $u_n$ the numerical approximation, and $f(a,b)$ the derivative of $a$ at $b$.
```python
## F: derivative of the function we want to find
## y0: initial value
## ts: time grid
## p: model-specific parameters
def f_euler(F, y0, ts, p = 0):
    ys = [y0]
    h = ts[1]-ts[0]
    t = ts[0]
    for tnext in ts[1:]:
        # Forward Euler step: u_{n+1} = u_n + f(u_n, t_n) * h
        ynext = ys[-1] + F(ys[-1], t, p)*h
        ys.append(ynext)
        t = tnext
    return np.array(ys)
```
### The second-order Runge-Kutta method
While the Euler method approximates the solution at a point by stepping along the tangent at that point, the second-order Runge-Kutta method approximates the same point by stepping along the average of the tangent at the current point and the tangent at the future point.<br>
Let
$$
k_1 = f(u_n,t_n)\\
k_2 = f(u_n + hk_1,t_{n+1})
$$
Then $k_1$ is the derivative at the current point and $k_2$ is the derivative at the future point, the latter approximated by the Euler method.<br>
The Runge-Kutta step is then the average of these two:
$$
y_{n+1} \approx u_{n+1} = u_n + h \frac{k_1 + k_2}{2}
$$
```python
def rk_2(F, y0, ts, p = 0):
ys = [y0]
t = ts[0]
h = ts[1] - ts[0]
for tnext in ts:
k1 = F(ys[-1], t, p)
k2 = F(ys[-1] + h*k1,tnext, p)
ynext = ys[-1] + h * (k1+k2) / 2.0
ys.append(ynext)
t = tnext
return np.array(ys[:-1])
```
Testing on the ODE
$$ \begin{cases} y'(t) = - y(t) + 2\sin(t^2) \\ y(0) = 1.2\end{cases} $$
```python
def F(y,t,p = 0):
return -y + 2*np.sin(t**2)
```
```python
## Define the domain
ts = np.linspace(-5,5,500)
y0 = 1.2
## Build the 2nd-order Runge-Kutta solution
ys = rk_2(F,y0,ts)
## Build the explicit Euler solution
ys2 = f_euler(F,y0,ts)
plt.plot(ts,ys,label='RK')
plt.plot(ts,ys2,label='Explicit')
plt.legend()
plt.show()
```
### Euler's method converges - a bit of analysis
### Definitions:
Let $\frac{\mathrm{d}y}{\mathrm{d}t} = f(y,t)$<br>
Let $t \in \mathbb{N}$ index the 'times' in the domain, $t^*$ the final time, and $\lfloor \cdot \rfloor$ the `floor` function, which returns the integer part.<br>
Let $h \in \mathbb{R}, h > 0$, be the 'size' of each partition, i.e. $h = t_{n+1} - t_n$ <br>
We can then define $n$ taking values in the set $\{0, \dots , \lfloor \frac{t^*}{h} \rfloor\}$<br>
Let $\lVert \cdot \rVert$ be a norm on the space
Let $y_n$ be the true (analytic) value of **$y$** at the point $t_n$, i.e. $y_n = y(t_n)$<br>
Let $u_n$ be the numerical approximation of $y$ at $t_n$ given by Euler's method, i.e. $u_{n+1} = u_{n} + f(u_{n},t_{n})\cdot h$<br>
A function $f$ is said to be Lipschitz if it satisfies the Lipschitz condition: $\exists M \in \mathbb{R}; \lVert f(x_1) - f(x_2) \rVert \leq M\cdot\lVert x_1 - x_2\rVert$
A method is said to be convergent if:
$$
\lim_{h\to 0^+} \max_{n=0, \dots , \lfloor \frac{t^*}{h} \rfloor} \lVert u_n - y_n \rVert = 0
$$
That is, as the mesh is refined, the numerical solution at a point approaches the analytic solution at that point
### Theorem: Euler's method converges
#### Proof
Take $f(y,t)$ analytic, i.e. representable by a Taylor series centered at a point $t_0$, and Lipschitz.<br>
$f(y,t)$ analytic implies $y$ analytic.<br>
Define $err_n = u_n - y_n$, our numerical error; we then want to prove
$$
\lim_{h\to 0^+} \max_{n=0, \dots , \lfloor \frac{t^*}{h} \rfloor} \lVert err_n \rVert = 0
$$
Expanding the solution $y$ of the differential equation via Taylor:
$$
y_{n+1} = y_n + hf(y_n,t_n)+\mathcal{O}(h^2) \tag{1}
$$
Since $y$ is analytic, its derivative is continuous; hence, by the `Extreme Value Theorem`, on a neighborhood of $t_n$ the term $\mathcal{O}(h^2)$ is bounded for all $h>0$ and $n \leq \lfloor t^*/h \rfloor$ by some $M>0, M \in \mathbb{R}$, and by the Archimedean property of the reals $\exists c \in \mathbb{R}, c>0; c\cdot h^2 \geq M$, so we may bound $\mathcal{O}(h^2)$ by $ch^2, c>0$.<br>
Now form $err_{n+1} = u_{n+1} - y_{n+1}$, using the Taylor expansion for $y_{n+1}$ and Euler for $u_{n+1}$
$$
\begin{align}
err_{n+1} &= u_{n+1} - y_{n+1}\\
&= u_n + hf(u_n,t_n) - y_n - hf(y_n,t_n) + \mathcal{O}(h^2)\\
&= \underbrace{u_n - y_n}_{err_n} + h\left(f(u_n,t_n) - f(y_n,t_n)\right) + \mathcal{O}(h^2)\\
&= err_n + h\left(f(u_n,t_n) - f(y_n,t_n)\right) + \mathcal{O}(h^2)\\
\end{align}
$$
From this we can see that the error at the next step also depends on the error already accumulated<br>
And from the bound $\mathcal{O}(h^2) \leq ch^2$ and the triangle inequality it follows that
$$
\lVert err_{n+1} \rVert \leq \lVert err_n\rVert + \lVert h\left(f(u_n,t_n) - f(y_n,t_n)\right)\rVert + \lVert ch^2 \rVert
$$
And by the Lipschitz condition
$$
\lVert f(u_n,t_n) - f(y_n,t_n) \rVert \leq \lambda\lVert u_n - y_n \rVert = \lambda\lVert err_n \rVert, \lambda > 0
$$
Then we have
$$
\lVert err_{n+1} \rVert \leq \lVert err_n\rVert + \lVert h\left(f(u_n,t_n) - f(y_n,t_n)\right)\rVert + \lVert ch^2 \rVert \leq \lVert err_n\rVert + \lambda h\lVert err_n \rVert + ch^2\\$$
$\therefore$
$$
\lVert err_{n+1} \rVert \leq (1+h\lambda)\lVert err_n \rVert + ch^2 \tag{2}
$$
---
We now claim:
$$
\lVert err_n \rVert \leq \frac{c}{\lambda}h[(1+h\lambda)^n - 1]
$$
#### Proof: induction on n
For $n = 0$
$$
\lVert err_0 \rVert \leq \frac{c}{\lambda}h[(1+h\lambda)^0 - 1] = \frac{c}{\lambda}h[1 - 1] = 0\\
err_0 = u_0 - y_0 = 0, \text{ since it is the initial condition}
$$
This gives our induction hypothesis: the claim holds for $n=k$. We move to the inductive step, $n = k+1$. From equation (2), we have:
$$
\lVert err_{k+1}\rVert \leq (1+h\lambda)\lVert err_k \rVert + ch^2
$$
And by the induction hypothesis
$$
\lVert err_k \rVert\leq \frac{c}{\lambda}h[(1+h\lambda)^k - 1]
$$
Hence
$$
\lVert err_{k+1} \rVert \leq (1+h\lambda)\frac{c}{\lambda}h[(1+h\lambda)^k - 1] + ch^2
$$
Expanding the right-hand side:
$$
\begin{align}
(1+h\lambda)\frac{c}{\lambda}h[(1+h\lambda)^k - 1] + ch^2 &= \frac{c}{\lambda}h[(1+h\lambda)^{k+1} - (1+h\lambda)] +ch^2\\
&= \frac{c}{\lambda}h(1+h\lambda)^{k+1} - \frac{c}{\lambda}h(1+h\lambda) +ch^2\\
&= \frac{c}{\lambda}h(1+h\lambda)^{k+1} - \frac{c}{\lambda}h - \frac{c}{\lambda}h^2\lambda + ch^2\\
&= \frac{c}{\lambda}h(1+h\lambda)^{k+1} - \frac{c}{\lambda}h\\
&= \frac{c}{\lambda}h[(1+h\lambda)^{k+1} - 1]
\end{align}
$$
Therefore
$$
\lVert err_{k+1} \rVert \leq \frac{c}{\lambda}h[(1+h\lambda)^{k+1} - 1]
$$
and the inductive step holds. Hence, by the principle of finite induction:
$$
\lVert err_n \rVert \leq \frac{c}{\lambda}h[(1+h\lambda)^n - 1] \tag{3}
$$
---
Since $h\lambda >0$, we have $(1+h\lambda) < e^{h\lambda}$ and therefore $(1+h\lambda)^n < e^{nh\lambda}$; $n$ attains its maximum at $n = \lfloor t^*/h \rfloor $, so:
$$(1+h\lambda)^n < e^{\lfloor t^*/h \rfloor h\lambda} \leq e^{t^*\lambda}$$
Substituting into inequality (3) for $err_n$, we get:
$$
\lVert err_n \rVert \leq \frac{c}{\lambda}h[e^{t^*\lambda} - 1]
$$
Taking the limit $h\to 0$:
$$
\lim_{h\to 0}\lVert err_n \rVert \leq \frac{c}{\lambda}h[e^{t^*\lambda} - 1] = 0\\
\therefore
\lim_{h\to 0}\lVert err_n \rVert = 0
$$
Therefore Euler's method converges for every Lipschitz function. Q.E.D.
### Visualizing the theorem
Let's plot the solution of the differential equation $y' = \sin(t^2) - y$ with progressively finer meshes and visualize the convergence of the method<br>
We also plot the evolution of the relative error between the finest-mesh solution and all the coarser ones
```python
## Differential equation
def F(y,t,p=0):
    return -y + 2*np.sin(t**2)
# Create the domains with several different values of h
ts = np.array([np.linspace(-10,10,i) for i in np.arange(50,300,63)])
# Initial condition
y0 = 1.2
# Prepare the lists for plotting
ys_e = np.array([f_euler(F,y0,i) for i in ts ])
# Curve styles
lstyle = ['--','-.',':','-']
# Plot the solutions
plt.figure(figsize=(15,7))
for i in range(len(ts)):
    plt.plot(ts[i],ys_e[i], ls = lstyle[i], label='$h = '+ str("{0:.2f}".format(20.0/len(ts[i])) +'$'))
plt.title('Visualizing the convergence of the Euler method')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend()
plt.show()
## Build the error arrays
hs = [0.4,0.18,0.11]
ans = [[],[],[]]
for i in range(len(ys_e[:-1])):
    n = int(np.floor(hs[i]/0.08))  # integer stride into the finest solution
    for j in range(len(ys_e[i])):
        try: ans[i].append(ys_e[-1][n*j])
        except: ans[i].append(ys_e[-1][-1])
for i in range(len(ans)):
    ans[i] = np.array(ans[i])
err = np.array([abs(j - i) for i,j in zip(ys_e,ans)])
plt.figure(figsize=(15,7))
for i in range(len(ts)-1):
    plt.plot(ts[i],err[i], ls = lstyle[i], label='$h = '+ str("{0:.2f}".format(20.0/len(ts[i])) +'$'))
plt.title('Error of the most refined solution relative to the other solutions')
plt.xlabel('t')
plt.ylabel('err(y)')
plt.legend()
plt.show()
```
### Runge-Kutta convergence plots
```python
# Prepare the lists for plotting
ys_rk = np.array([rk_2(F,y0,i) for i in ts ])
# Curve styles
lstyle = ['--','-.',':','-']
# Plot the solutions
plt.figure(figsize=(15,7))
for i in range(len(ts)):
    plt.plot(ts[i],ys_rk[i], ls = lstyle[i], label='$h = '+ str("{0:.2f}".format(20.0/len(ts[i])) +'$'))
plt.title('Visualizing the convergence of Runge-Kutta')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend()
plt.show()
```
```python
plt.plot(ts[-1],abs(ys_e[-1]-ys_rk[-1]))
```
### The Lotka-Volterra predator-prey model (work in progress)
The model is given by the ODEs
$$
\begin{cases}
\frac{\mathrm{d}x}{\mathrm{d}t} = (\lambda - by)x\\
\frac{\mathrm{d}y}{\mathrm{d}t} = (-\mu + cx)y\\
\end{cases}
$$
with $\lambda, \mu, b, c$ all positive reals, $x$ the prey population and $y$ the predator population<br>
As seen before, we treat this problem in vector form, with:
$$
v = \begin{bmatrix}
\frac{\mathrm{d}x}{\mathrm{d}t} \\
\frac{\mathrm{d}y}{\mathrm{d}t}
\end{bmatrix}
$$
and, with $D$ the linear derivative operator, we get:
$$
Dv = \begin{bmatrix}
(\lambda - by)x\\
(-\mu + cx)y
\end{bmatrix}
$$
We can then find the solution by applying a numerical method
```python
## Parameters:
## v: vector of initial points
## p[l,b,m,c]: list with the model parameters
### l: lambda
### b: b
### m: mu
### c: c
def model(v,t,p = 0):
if p == 0: p = [1,1,1,1]
return np.array([(p[0]-p[1]*v[1])*v[0],(p[3]*v[0]-p[2])*v[1]])
```
```python
# Tuning parameters
ts = np.linspace(0,30,500)
y0 = [2,1]
```
```python
ys_e = f_euler(model,y0,ts)
plt.figure(figsize=(10,5))
plt.plot(ts,ys_e)
plt.title('Predator-prey model - Euler solution')
plt.legend(['Prey', 'Predator'])
plt.xlabel('$t$')
plt.ylabel('Population $y(t)$')
plt.grid(alpha = 0.5)
plt.show()
```
```python
ys_rk = rk_2(model,y0,ts)
plt.figure(figsize=(10,5))
plt.plot(ts,ys_rk)
plt.title('Predator-prey model - Runge-Kutta solution')
plt.legend(['Prey', 'Predator'])
plt.xlabel('$t$')
plt.ylabel('Population $y(t)$')
plt.grid(alpha = 0.5)
plt.show()
```
```python
## F: derivative of the function whose solution we want
## y0: initial point
## ts: time range
## p: model-specific parameters
def f2_euler(F, y0, ts, p = 0):
ys = [y0]
h = ts[1]-ts[0]
for tnext in ts[1:]:
ynext = ys[-1] + F(ys[-1],tnext,p)*h
ys.append(ynext)
t = tnext
return np.array(ys)
```
```python
## Parameters:
## v: vector of initial points
## p[l,b,m,c]: list with the model parameters
### l: lambda
### b: b
### m: mu
### c: c
def model(v,t,p = 0):
if p == 0: p = [1,1,1,1]
return np.array([(p[0]-p[1]*v[1])*v[0],(p[3]*v[0]-p[2])*v[1]])
```
```python
# Tuning parameters
ts = np.linspace(0,30,500)
y0 = [2,1]
ys_e = f2_euler(model,y0,ts)
plt.figure(figsize=(10,5))
plt.plot(ts,ys_e)
plt.title('Predator-prey model - Euler solution')
plt.legend(['Prey', 'Predator'])
plt.xlabel('$t$')
plt.ylabel('Population $y(t)$')
plt.grid(alpha = 0.5)
plt.show()
```
```python
def model2(y, t, p = 0):
return -0.5*y
```
```python
ts = np.linspace(0,30,500)
plt.plot(ts,f2_euler(model2,2,ts))
```
```python
def EulerFW(F,y0,ts,p=0):
ys=[y0]
h=ts[1]-ts[0]
tc=ts[0]
for t in ts[1:]:
        yn=ys[-1]+h*F(ys[-1],t,p)  # use the current time t, not the whole array ts
ys.append(yn)
tc=t
return ys
```
```python
ts = np.linspace(0,30,500)
plt.plot(ts,EulerFW(model2,2,ts))
```
```python
ts = np.linspace(0,30,500)
plt.plot(ts,EulerFW(model,[2,1],ts))
```
```python
ts = np.linspace(0,5,500)
plt.plot(ts,f2_euler(model,[2,1],ts))
```
```python
def model3(v,t,p=0):
if p == 0: p = [1.5,1.2,1,1]
return np.array([p[0]*np.log(v[1])-p[1]*v[1]+p[2]*np.log(v[0])+p[3]*v[0]])
```
```python
plt.plot(ts,rk_2(model3,[2,1],ts))
```
```python
model3([2,1],ts)
```
array([1.49314718])
### Qualitative analysis of the ODE
We do not have an analytic solution of the ODE system, but we can find a relation between the problem's variables by looking at the rate of change of each population, as follows
\begin{equation}
\dfrac{\mathrm{d}y}{\mathrm{d}x} = \frac{\dfrac{\mathrm{d}y}{\mathrm{d}t}}{\dfrac{\mathrm{d}x}{\mathrm{d}t}} = \dfrac{(-\mu + cx)y}{(\lambda - by)x}
\end{equation}
This is a separable equation, so we can proceed with the solution:
\begin{equation}
\dfrac{\lambda - by}{y}\,\mathrm{d}y = \dfrac{-\mu + cx}{x}\,\mathrm{d}x
\end{equation}
We then obtain
\begin{equation}
\int \bigg( \dfrac{\lambda - by}{y}\bigg) dy = \int \bigg( \dfrac{-\mu + cx}{x}\bigg) dx
\end{equation}
Solving, we get the general solution of the model:
\begin{equation}
\lambda\ln(|y|) - by = -\mu\ln(|x|) + cx + K
\end{equation}
Since the populations $x,y$ are always positive, we can rewrite
\begin{equation}
\lambda\ln(y) - by + \mu\ln(x) - cx = K
\end{equation}
where $K \in \mathbb{R}$ is constant along each solution.<br>
We thus have a relation between every parameter and variable of our problem; a numerical check follows below
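A hedged numerical check of this conservation law (a sketch assuming the `model` defaults $\lambda=b=\mu=c=1$ and reusing `ts` from above):
```python
lam, b, mu, c = 1.0, 1.0, 1.0, 1.0   # matches model's default parameters p=[1,1,1,1]
sol = odeint(model, [2, 1], ts)      # integrate the system numerically
x_sol, y_sol = sol[:, 0], sol[:, 1]
K = lam*np.log(y_sol) - b*y_sol + mu*np.log(x_sol) - c*x_sol
print(K.min(), K.max())              # nearly equal -> K conserved up to integration error
```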
```python
ys_e = f_euler(model,[40,20],ts,p=[3,1.3,2.7,0.5])
plt.figure(figsize=(10,5))
plt.plot(ts,ys_e)
plt.title('Predator-prey model - Euler solution')
plt.legend(['Prey', 'Predator'])
plt.xlabel('$t$')
plt.ylabel('Population $y(t)$')
plt.grid(alpha = 0.5)
plt.show()
```
```python
## Parameters:
## v: vector of initial points
## p[l,b,m,c]: list with the model parameters
### l: lambda
### b: b
### m: mu
### c: c
def model4(v,t,p = 0):
if p == 0: p = [3,1.3,2.7,0.5]
return np.array([(p[0]-p[1]*v[1])*v[0],(p[3]*v[0]-p[2])*v[1]])
```
```python
ans = odeint(model4,[2,1],ts)
plt.plot(ts,ans)
```
```python
ans2 = rk_2(model4,[2,1],ts)
plt.plot(ts,ans2)
```
```python
```
| 6d91e40d70885540198327df5fd837201f0950b4 | 858,502 | ipynb | Jupyter Notebook | analise-numerica-edo-2019-1/RK e Eulers.ipynb | mirandagil/university-courses | e70ce5262555e84cffb13e53e139e7eec21e8907 | [
"MIT"
] | 1 | 2019-12-23T16:39:01.000Z | 2019-12-23T16:39:01.000Z | analise-numerica-edo-2019-1/RK e Eulers.ipynb | mirandagil/university-courses | e70ce5262555e84cffb13e53e139e7eec21e8907 | [
"MIT"
] | null | null | null | analise-numerica-edo-2019-1/RK e Eulers.ipynb | mirandagil/university-courses | e70ce5262555e84cffb13e53e139e7eec21e8907 | [
"MIT"
] | null | null | null | 759.064545 | 142,096 | 0.948121 | true | 7,292 | Qwen/Qwen-72B | 1. YES
2. YES | 0.882428 | 0.909907 | 0.802927 | __label__por_Latn | 0.84211 | 0.703802 |
# Principal Component Analysis: lecture
## 1. Introduction
Up until now, we have focused on supervised learning. This group of methods aims at predicting labels based on training data that is labeled as well. Principal Component Analysis is our first so-called "unsupervised" estimator. Generally, the aim of unsupervised estimators is to reveal interesting data patterns without having any reference labels.
The first unsupervised learning algorithm is Principal Component Analysis, also referred to as PCA. PCA is a dimensionality reduction technique which is often used in practice for visualization, feature extraction, noise filtering, etc.
Generally, PCA would be applied on data sets with many variables. PCA creates new variables that are linear combinations of the original variables. The idea is to reduce the dimension of the data considerably while maintaining as much information as possible. While the purpose is to significantly reduce the dimensionality, the maximum number of new variables that can possibly be created is equal to the number of original variables. A nice feature of PCA is that the newly created variables are uncorrelated.
**A simple example to start**: Imagine that a data set consists of the height and weight of a group of people. One could imagine that these 2 metrics are heavily correlated, so we could basically summarize them in one variable, a linear combination of the two. This one variable will contain most of the information of the original two. It is important to note that the effectiveness of PCA strongly depends on the structure of the correlation matrix of the existing variables!
## 2. Intermezzo: Eigenvalues and eigenvectors
An eigenvector is a vector that is unchanged by a transformation, except for scaling by a scalar value known as the *eigenvalue*.
### 2.1 Definition
Given a square matrix $A$ (an $n \times n$ matrix), a scalar $\lambda$ is called an **eigenvalue** of $A$ if there is a non-zero vector $v$ such that
$$Av = \lambda v.$$
This vector $v$ is then called an **eigenvector** of $A$ corresponding to $\lambda$.
Eigenvalues and eigenvectors are very useful and have tons of applications!
Imagine you have a matrix
\begin{equation}
A = \begin{bmatrix}
3 & 2 \\
3 & -2
\end{bmatrix}
\end{equation}
We have an eigenvector
\begin{equation}
v = \begin{bmatrix}
2 \\
1
\end{bmatrix}
\end{equation}
Let's perform the multiplication $A v$
\begin{equation}
Av = \begin{bmatrix}
3 & 2 \\
3 & -2
\end{bmatrix}
\begin{bmatrix}
2 \\
1
\end{bmatrix} = \begin{bmatrix}
3*2+2*1 \\
3*2+(-2*1)
\end{bmatrix}
= \begin{bmatrix}
8 \\
4
\end{bmatrix}
\end{equation}
Now we want to see if we can find a $\lambda$ such that
\begin{equation}
Av = \begin{bmatrix}
8 \\
4
\end{bmatrix}= \lambda \begin{bmatrix}
2 \\
1
\end{bmatrix}
\end{equation}
Turns out $\lambda = 4$ is the eigenvalue corresponding to our proposed eigenvector!
### 2.2 But how can you find the eigenvalues and eigenvectors?
An $n \times n$ matrix has $n$ eigenvalues (counted with multiplicity) and corresponding eigenvectors! To find the eigenvalues, solve
$\det(A- \lambda I)= 0$
\begin{equation}
\det(A- \lambda I) = \det\begin{bmatrix}
3-\lambda & 2 \\
3 & -2-\lambda
\end{bmatrix}
\end{equation}
This way we indeed find that 4 is an eigenvalue, and so is -3! You'll learn about the connection between eigenvalues and eigenvectors in a second.
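As a quick numerical sanity check (a sketch; `np.linalg.eig` returns unit-norm eigenvectors, so they may differ from ours by a scale factor):
```python
import numpy as np

A = np.array([[3, 2],
              [3, -2]])
vals, vecs = np.linalg.eig(A)
print(vals)                               # approximately [ 4., -3.]
# each column of vecs is an eigenvector; verify A v = lambda v for the first one
print(A @ vecs[:, 0], vals[0] * vecs[:, 0])
```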
https://georgemdallas.wordpress.com/2013/10/30/principal-component-analysis-4-dummies-eigenvectors-eigenvalues-and-dimension-reduction/
https://www.youtube.com/watch?v=ue3yoeZvt8E
"toepassingen van de statistiek" sl 53-63 from the PCA chapter!
## 3. PCA: some notation
### 3.1 The data matrix
Let's say we have $p$ variables $X_1, X_2, \dots, X_p$ and $n$ observations $1,...,n$. Our data looks like this:
\begin{bmatrix}
X_{11} & X_{12} & X_{13} & \dots & X_{1p} \\
X_{21} & X_{22} & X_{23} & \dots & X_{2p} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
X_{n1} & X_{n2} & X_{n3} & \dots & X_{np}
\end{bmatrix}
For 2 variables, this is what our data could look like:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(123)
X = np.random.normal(2, 1.5, 50)
Y = np.random.normal(3, 0.6, 50)
fig, ax = plt.subplots()
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
plt.ylim(-5,5)
plt.xlim(-7,7)
plt.scatter(X, Y, s = 6)
```
### 3.2 The mean & mean-corrected data
The **mean** of the $j$-th variable:
$\bar{X}_j = \dfrac{\sum_{i=1}^n X_{ij}}{n}= \bar X_{.j}$
To get the **mean-corrected** data, subtract the mean from each $X_{ij}$:
$x_{ij} = X_{ij}-\bar X_{.j}$
Going back to our two variables example, this is how the data would be shifted:
```python
fig, ax = plt.subplots()
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
X_mean = X- np.mean(X)
Y_mean = Y- np.mean(Y)
plt.ylim(-5,5)
plt.xlim(-7,7)
plt.scatter(X_mean, Y_mean, s = 6)
```
### 3.3 The variance & standardized data
$s^2_j = \dfrac{\sum_{i=1}^n (X_{ij}-\bar X_{.j})^2}{n-1}= \dfrac{\sum_{i=1}^n x_{ij}^2}{n-1}$
To get to the **standardized** data: divide the mean-corrected data by the standard deviation $s_j$.
$z_{ij} = \dfrac{x_{ij}}{s_{j}}$
Going back to the example with 2 variables, this is what standardized data would look like:
```python
fig, ax = plt.subplots()
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
X_mean = X- np.mean(X)
Y_mean = Y- np.mean(Y)
X_std= np.std(X)
Y_std = np.std(Y)
X_stdized = X_mean / X_std
Y_stdized = Y_mean / Y_std
plt.ylim(-5,5)
plt.xlim(-7,7)
plt.scatter(X_stdized, Y_stdized, s=6)
```
### 3.4 The covariance
The covariance for two variables $X_j$ and $X_k$:
$s_{jk} = \dfrac{\sum_{i=1}^n (X_{ij}-\bar X_{.j})(X_{ik}-\bar X_{.k})}{n-1}= \dfrac{\sum_{i=1}^n x_{ij}x_{ik}}{n-1}$
Denote $\mathbf{S}$ the sample covariance matrix
\begin{equation}
\mathbf{S} = \begin{bmatrix}
s^2_{1} & s_{12} & \dots & s_{1p} \\
s_{21} & s^2_{2} & \dots & s_{2p} \\
\vdots & \vdots & \ddots & \vdots \\
s_{p1} & s_{p2} & \dots & s^2_{p}
\end{bmatrix}
\end{equation}
When you do the same computation with standardized variables, you get the **correlation**. Remember that the correlation $r_{jk}$ always lies between -1 and 1.
$r_{jk} = \dfrac{\sum_{i=1}^n z_{ij}z_{ik}}{n-1}$
Then, $\mathbf{R}$ is the correlation matrix
\begin{equation}
\mathbf{R} = \begin{bmatrix}
1 & r_{12} & \dots & r_{1p} \\
r_{21} & 1 & \dots & r_{2p} \\
\vdots & \vdots & \ddots & \vdots \\
r_{p1} & r_{p2} & \dots & 1
\end{bmatrix}
\end{equation}
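To make this concrete, a small sketch computing $\mathbf{S}$ and $\mathbf{R}$ for the two simulated variables from above (assuming `X` and `Y` are still in scope):
```python
data = np.vstack([X, Y])   # shape (2, 50): one row per variable
S = np.cov(data)           # sample covariance matrix
R = np.corrcoef(data)      # correlation matrix, entries in [-1, 1]
print(S)
print(R)
```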
## 4. How does PCA work? Matrices and eigendecomposition
### 4.1 Finding principal components
$ \mathbf{X}= (X_1, X_2, \ldots, X_p)$ is a random variable.
Then the principal components of $\mathbf{X}$, denoted by $PC_1, \ldots, PC_p$ satisfy these 3 conditions:
- $(PC_1, PC_2, \ldots, PC_p)$ are mutually uncorrelated
- $var(PC_1)\geq var(PC_2) \geq \ldots \geq var(PC_p)$
- $PC_j = c_{j1} X_1+c_{j2} X_2+\ldots+c_{jp} X_p$
Note that for $j=1,\ldots,p$, $c_j = (c_{j1}, c_{j2}, \ldots, c_{jp})'$ is a vector of constants satisfying $ \lVert{\mathbf{c_j} \rVert^2 = \mathbf{c'_j}\mathbf{c_j}} = \displaystyle\sum^p_{k=1} c^2_{kj}=1 $
The variance of $PC_j$ is then:
$var(PC_j) = var( c_{j1} X_1+c_{j2} X_2+\ldots+c_{jp} X_p) \\
= c_{j1}^2 var(X_1) +c_{j2}^2 var(X_2) + \ldots + c_{jp}^2 var(X_p) + 2 \displaystyle\sum_k\sum_{l \neq k}c_{jk}c_{jl} cov(X_k, X_l) \\ = c_j' \Sigma c_j$
In words, this means that variances can easily be computed using the coefficients used while making the linear combinations.
We can prove that $var(PC_1)\geq var(PC_2) \geq \ldots \geq var(PC_p)$ is actually given by the eigenvalues $\lambda_1\geq \lambda_2 \geq \ldots \geq \lambda_p$, and that the coefficient vectors are given by the eigenvectors $c_j = (c_{j1}, c_{j2}, \ldots, c_{jp})$. From here on, we'll denote the eigenvectors by $e_j$ instead of $c_j$.
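A minimal sketch of this result on the simulated data (assuming `X_stdized` and `Y_stdized` from section 3.3): the eigendecomposition of the correlation matrix gives the PC variances and coefficient vectors.
```python
Z = np.vstack([X_stdized, Y_stdized])   # standardized data, one row per variable
R = np.corrcoef(Z)
eigvals, eigvecs = np.linalg.eigh(R)    # eigh since R is symmetric
order = np.argsort(eigvals)[::-1]       # sort from largest to smallest
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
scores = eigvecs.T @ Z                  # principal component scores
print(eigvals)                          # var(PC_1) >= var(PC_2)
print(np.cov(scores).round(3))          # (nearly) diagonal: PCs are uncorrelated
```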
# Sources
http://www.bbk.ac.uk/ems/faculty/brooms/teaching/SMDA/SMDA-02.pdf
https://stackoverflow.com/questions/13224362/principal-component-analysis-pca-in-python
https://georgemdallas.wordpress.com/2013/10/30/principal-component-analysis-4-dummies-eigenvectors-eigenvalues-and-dimension-reduction/
| 33028a56cf1d2c9d1ad7039ac5d1024d9bdc5ab5 | 33,446 | ipynb | Jupyter Notebook | Principal Component Analysis.ipynb | learn-co-students/ml-pca-staff | cb364402d9cfec4f3064942f9c3cc053b900ecf9 | [
"BSD-4-Clause-UC"
] | 2 | 2018-05-27T21:48:21.000Z | 2018-05-27T21:48:27.000Z | Principal Component Analysis.ipynb | learn-co-students/ml-pca-staff | cb364402d9cfec4f3064942f9c3cc053b900ecf9 | [
"BSD-4-Clause-UC"
] | null | null | null | Principal Component Analysis.ipynb | learn-co-students/ml-pca-staff | cb364402d9cfec4f3064942f9c3cc053b900ecf9 | [
"BSD-4-Clause-UC"
] | null | null | null | 60.810909 | 6,212 | 0.775429 | true | 2,767 | Qwen/Qwen-72B | 1. YES
2. YES | 0.868827 | 0.870597 | 0.756398 | __label__eng_Latn | 0.947634 | 0.595699 |
# Understanding the impact of timing on defaults
> How do delays in recognising defaults impact the apparent profitability of Afterpay?
- toc: true
- badges: true
- comments: true
- categories: [Sympy,Finance,Afterpay]
- image: images/2020-10-03-Afterpay-Customer-Defaults-Part-7/header.png
## The Context
The thesis of this post is actually pretty simple. There is a delay between when customers make a transaction, and when *Afterpay* realises that they have defaulted. Because of this delay, combined with the rapid growth in the total value of transactions, defaults as a percentage of transaction value may be artificially reduced.
> Important: Obviously I need a disclaimer. If you use anything I say as the basis for any decision, financial or otherwise, you are an idiot.
## The Model
First off, let's load in a bunch of libraries.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from io import StringIO
import pandas as pd
import scipy.optimize
plt.rcParams["figure.figsize"] = (10,10)
from warnings import filterwarnings
filterwarnings('ignore')
```
While reading through Afterpay's releases to the markets, I came across this chart, which appears on page 3 of [this](https://www.afterpaytouch.com/images/APT_ASX-Announcement_Q2-FY18_16-Jan-Final-2.pdf) release. Let's use this to build a simple quadratic model of the reported sales.
## Loading the data
```python
#Underlying sales
csv_data = StringIO('''anz_underlying_sales_value,date,month_count
0,FY15,0
37.3,FY16,12
561.2,FY17,24
2184.6,FY18,36
4314.1,FY19,48
6566.9,FY20,60''')
df = pd.read_csv(csv_data, sep=",")
```
## Fitting a curve
Let's first fit quadratic:
```python
def quadratic(t, a, b, c):
y = a * t**2 + b * t + c
return y
xdata = df.month_count.values
ydata = df.anz_underlying_sales_value.values
popt, pcov = scipy.optimize.curve_fit(quadratic, xdata, ydata)
print(popt)
```
[ 2.17012649 -17.61639881 -58.725 ]
```python
x = np.linspace(0,60, 61)
y = quadratic(x, *popt)
plt.plot(xdata, ydata, 'o', label='data')
plt.plot(x,y, label='fit')
plt.title('ANZ Sales by preceding Financial Year ($M AUD)')
plt.xlabel('Months after launch')
plt.ylabel('ANZ Sales ($M AUD)')
plt.legend(loc='best')
plt.show()
```
## Delays in reporting.
So we found that we could model the annual reported sales as: $$2.170 t^2 - 17.61t - 58.725$$
The instantaneous rate of sales is given by: $$0.1808t^2 + 0.7021t -9.36$$
Don't worry about how I arrived at this; I will show how in the appendix of this post.
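As a quick sanity check (a sketch reusing `quadratic` and `popt` from above), integrating this monthly rate over each trailing 12-month window should approximately reproduce the fitted annual sales:
```python
tt = np.linspace(0, 60, 6001)
g = 0.1808*tt**2 + 0.7021*tt - 9.36   # monthly rate of sales
for year_end in [12, 24, 36, 48, 60]:
    mask = (tt >= year_end - 12) & (tt <= year_end)
    integral = np.trapz(g[mask], tt[mask])
    print(year_end, round(integral, 1), round(quadratic(year_end, *popt), 1))
```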
```python
t = np.linspace(0,60, 61)
sales = 0.1808*t**2 + 0.7021* t - 9.36
plt.plot(sales)
plt.title('ANZ Sales by month ($M AUD)')
plt.xlabel('Months after launch')
plt.ylabel('ANZ Sales by month ($M AUD)')
plt.show()
```
Now let's model a delay of 6 months between when the transaction happens, and when *Afterpay* finally realised there was a default.
From this we can see there is a potentially significant difference between the true rate at which losses are occurring, and the rate at which we observe them occurring, at any point in time.
```python
delay = 6 #months
true_loss_rate = 0.01
losses_true = true_loss_rate*(0.1808*t**2 + 0.7021* t - 9.36)
losses_observed = true_loss_rate*(0.1808*(t-delay)**2 + 0.7021* (t-delay) - 9.36)
plt.plot(losses_observed,label='Observed')
plt.plot(losses_true,label='True')
plt.legend()
plt.title('ANZ losses by month ($M AUD)')
plt.xlabel('Months after launch')
plt.ylabel('ANZ losses by month ($M AUD)')
plt.show()
```
Now let's integrate by financial year.
```python
def integrate_by_year(y):
integrated = np.array([0,np.sum(y[0:12]),np.sum(y[12:24]),np.sum(y[24:36]),np.sum(y[36:48]),np.sum(y[48:60])])
return(integrated)
observed_loss_rate = integrate_by_year(losses_true)/integrate_by_year(losses_observed)
plt.plot(observed_loss_rate)
plt.title('Ratio of true losses to observed losses')
plt.xlabel('Years after launch')
plt.ylabel('Ratio of true losses to observed losses')
plt.ylim(1,2.5)
plt.xlim(2,5)
plt.grid()
plt.show()
```
## Conclusion
In conclusion, we can clearly see the impact of a delay in recognising losses, in situations where there is rapid growth. Even after years of growth, with a 6 month delay in recognising losses, the true losses could be 30-40% higher than reported.
# Appendix
### Finding an integral
So we found that we could model the annual reported sales as: $$2.170 t^2 - 17.61t - 58.725$$
Let's call this function $f(t)$
We want to find the function $g(t)$, which is the underlying rate of sales, which I claimed was : $$0.1808t^2 + 0.7021t -9.36$$.
This function, when integrated over 12 months, will give us the annual reported sales.
To help us with the algebraic manipulation, we can use [Sympy](https://docs.sympy.org/latest/index.html). An alternative is to do the algebraic manipulation by hand, but this is probably faster and more scalable.
```python
import sympy as sym
sym.init_printing(use_unicode=True)
a,b,c,d,t = sym.symbols('a b c d t')
```
So we are looking for a quadratic function, the definite integral of which is equal to $$2.170 t^2 - 17.61t - 58.725$$. Let's start by forming the definite integral.
```python
expr = sym.simplify((a*t**3 + b*t**2 + c*t + d) - (a*(t-12)**3 + b*(t-12)**2 + c*(t-12) + d))
```
```python
print(sym.collect(expr,t))
```
36*a*t**2 + 1728*a - 144*b + 12*c + t*(-432*a + 24*b)
```python
fitted_quadratic = t**2 * 2.17012649 + t*-17.61639881 -58.725
```
### Solving for the coefficients
Let's now form a set of simultaneous equations, and solve for each of the coefficients of $$t$$.
```python
equations = []
for i in [2,1,0]:
eq = sym.collect(expr,t).coeff(t, i)
coeff = sym.collect(fitted_quadratic,t).coeff(t, i)
equations.append(sym.Eq(eq,coeff))
result = sym.solve(equations,(a,b,c))
print(result)
```
{a: 0.0602812913888889, b: 0.351046627916667, c: -9.36169642500000}
### Finding the derivative
Now all that's left to do, is to find the derivative of the indefinite integral.
```python
expr = result[a] * t**3 + result[b]*t**2 + result[c]*t
print(sym.diff(expr, t))
```
0.180843874166667*t**2 + 0.702093255833333*t - 9.361696425
Voila!
| bef5b31222796ebaec79d9b70dde2edf6ead1bdf | 144,518 | ipynb | Jupyter Notebook | _notebooks/2020-10-03-Afterpay-Customer-Defaults-Part-7.ipynb | CGCooke/Blog | ab1235939011d55674c0888dba4501ff7e4008c6 | [
"Apache-2.0"
] | 1 | 2020-10-29T06:32:23.000Z | 2020-10-29T06:32:23.000Z | _notebooks/2020-10-03-Afterpay-Customer-Defaults-Part-7.ipynb | CGCooke/Blog | ab1235939011d55674c0888dba4501ff7e4008c6 | [
"Apache-2.0"
] | 20 | 2020-04-04T09:39:50.000Z | 2022-03-25T12:30:56.000Z | _notebooks/2020-10-03-Afterpay-Customer-Defaults-Part-7.ipynb | CGCooke/Blog | ab1235939011d55674c0888dba4501ff7e4008c6 | [
"Apache-2.0"
] | null | null | null | 305.534884 | 39,740 | 0.933579 | true | 1,821 | Qwen/Qwen-72B | 1. YES
2. YES | 0.885631 | 0.849971 | 0.752761 | __label__eng_Latn | 0.967538 | 0.587249 |
```python
%matplotlib inline
```
# Visualize the hemodynamic response
In this example, we describe how the hemodynamic response function was
estimated in the previous model. We fit the same ridge model as in the previous
example, and further describe the need to delay the features in time to account
for the delayed BOLD response.
Because of the temporal dynamics of neurovascular coupling, the recorded BOLD
signal is delayed in time with respect to the stimulus. To account for this
lag, we fit encoding models on delayed features. In this way, the linear
regression model weighs each delayed feature separately and recovers the shape
of the hemodynamic response function in each voxel separately. In turn, this
method (also known as a Finite Impulse Response model, or FIR) maximizes the
model prediction accuracy. With a repetition time of 2 seconds, we typically
use 4 delays [1, 2, 3, 4] to cover the peak of the hemodynamic response
function. However, the optimal number of delays can vary depending on the
experiment and the brain area of interest, so you should experiment with
different delays.
In this example, we show that a model without delays performs far worse than a
model with delays. We also show how to visualize the estimated hemodynamic
response function (HRF) from a model with delays.
```python
```
## Path of the data directory
```python
import os
from voxelwise_tutorials.io import get_data_home
directory = os.path.join(get_data_home(), "vim-5")
print(directory)
```
```python
# modify to use another subject
subject = "S01"
```
## Load the data
We first load the fMRI responses.
```python
import numpy as np
from voxelwise_tutorials.io import load_hdf5_array
file_name = os.path.join(directory, "responses", f"{subject}_responses.hdf")
Y_train = load_hdf5_array(file_name, key="Y_train")
Y_test = load_hdf5_array(file_name, key="Y_test")
print("(n_samples_train, n_voxels) =", Y_train.shape)
print("(n_repeats, n_samples_test, n_voxels) =", Y_test.shape)
```
We average the test repeats, to remove the non-repeatable part of fMRI
responses.
```python
Y_test = Y_test.mean(0)
print("(n_samples_test, n_voxels) =", Y_test.shape)
```
We fill potential NaN (not-a-number) values with zeros.
```python
Y_train = np.nan_to_num(Y_train)
Y_test = np.nan_to_num(Y_test)
```
Then, we load the semantic "wordnet" features.
```python
feature_space = "wordnet"
file_name = os.path.join(directory, "features", f"{feature_space}.hdf")
X_train = load_hdf5_array(file_name, key="X_train")
X_test = load_hdf5_array(file_name, key="X_test")
print("(n_samples_train, n_features) =", X_train.shape)
print("(n_samples_test, n_features) =", X_test.shape)
```
## Define the cross-validation scheme
We define the same leave-one-run-out cross-validation split as in the
previous example.
```python
from sklearn.model_selection import check_cv
from voxelwise_tutorials.utils import generate_leave_one_run_out
# index of the first sample of each run
run_onsets = load_hdf5_array(file_name, key="run_onsets")
print(run_onsets)
```
We define a cross-validation splitter, compatible with ``scikit-learn`` API.
```python
n_samples_train = X_train.shape[0]
cv = generate_leave_one_run_out(n_samples_train, run_onsets)
cv = check_cv(cv) # copy the cross-validation splitter into a reusable list
```
## Define the model
We define the same model as in the previous example. See the previous
example for more details about the model definition.
```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from voxelwise_tutorials.delayer import Delayer
from himalaya.kernel_ridge import KernelRidgeCV
from himalaya.backend import set_backend
backend = set_backend("torch_cuda", on_error="warn")
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")
alphas = np.logspace(1, 20, 20)
pipeline = make_pipeline(
StandardScaler(with_mean=True, with_std=False),
Delayer(delays=[1, 2, 3, 4]),
KernelRidgeCV(
alphas=alphas, cv=cv,
solver_params=dict(n_targets_batch=500, n_alphas_batch=5,
n_targets_batch_refit=100)),
)
```
```python
from sklearn import set_config
set_config(display='diagram') # requires scikit-learn 0.23
pipeline
```
## Fit the model
We fit on the train set, and score on the test set.
```python
pipeline.fit(X_train, Y_train)
scores = pipeline.score(X_test, Y_test)
scores = backend.to_numpy(scores)
print("(n_voxels,) =", scores.shape)
```
## Intermission: understanding delays
To have an intuitive understanding of what we accomplish by delaying the
features before model fitting, we will simulate one voxel and a single
feature. We will then create a ``Delayer`` object (which was used in the
previous pipeline) and visualize its effect on our single feature. Let's
start by simulating the data.
```python
# number of total trs
n_trs = 50
# repetition time for the simulated data
TR = 2.0
rng = np.random.RandomState(42)
y = rng.randn(n_trs)
x = np.zeros(n_trs)
# add some arbitrary value to our feature
x[15:20] = .5
x += rng.randn(n_trs) * 0.1 # add some noise
# create a delayer object and delay the features
delayer = Delayer(delays=[0, 1, 2, 3, 4])
x_delayed = delayer.fit_transform(x[:, None])
```
In the next cell we are plotting six lines. The subplot at the top shows the
simulated BOLD response, while the other subplots show the simulated feature
at different delays. The effect of the delayer is clear: it creates multiple
copies of the original feature shifted forward in time by how many samples we
requested (in this case, from 0 to 4 samples, which correspond to 0, 2, 4, 6,
and 8 s in time with a 2 s TR).
When these delayed features are used to fit a voxelwise encoding model, the
brain response $y$ at time $t$ is simultaneously modeled by the
feature $x$ at times $t-0, t-2, t-4, t-6, t-8$. In the remainder
of this example we will see that this method improves model prediction accuracy
and allows us to account for the underlying shape of the hemodynamic response
function.
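Before returning to the real data, a tiny sketch (reusing `x_delayed` and `y` from the cell above) shows the mechanics: an ordinary least-squares fit assigns one weight to each delayed copy of the feature:
```python
# one coefficient per delayed copy; with pure-noise y these stay near zero
w, *_ = np.linalg.lstsq(x_delayed, y, rcond=None)
print(w)
```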
```python
import matplotlib.pyplot as plt
fig, axs = plt.subplots(6, 1, figsize=(8, 6.5), constrained_layout=True,
sharex=True)
times = np.arange(n_trs)*TR
axs[0].plot(times, y, color="r")
axs[0].set_title("BOLD response")
for i, (ax, xx) in enumerate(zip(axs.flat[1:], x_delayed.T)):
ax.plot(times, xx, color='k')
ax.set_title("$x(t - {0:.0f})$ (feature delayed by {1} sample{2})".format(
i*TR, i, "" if i == 1 else "s"))
for ax in axs.flat:
ax.axvline(40, color='gray')
ax.set_yticks([])
_ = axs[-1].set_xlabel("Time [s]")
plt.show()
```
## Compare with a model without delays
We define here another model without feature delays (i.e. no ``Delayer``).
Because the BOLD signal is inherently slow due to the dynamics of
neuro-vascular coupling, this model is unlikely to perform well.
```python
pipeline_no_delay = make_pipeline(
StandardScaler(with_mean=True, with_std=False),
KernelRidgeCV(
alphas=alphas, cv=cv,
solver_params=dict(n_targets_batch=500, n_alphas_batch=5,
n_targets_batch_refit=100)),
)
pipeline_no_delay
```
We fit and score the model as the previous one.
```python
pipeline_no_delay.fit(X_train, Y_train)
scores_no_delay = pipeline_no_delay.score(X_test, Y_test)
scores_no_delay = backend.to_numpy(scores_no_delay)
print("(n_voxels,) =", scores_no_delay.shape)
```
Then, we plot the comparison of model prediction accuracies with a 2D
histogram. All ~70k voxels are represented in this histogram, where the
diagonal corresponds to identical prediction accuracy for both models. A
distribution deviating from the diagonal means that one model has better
prediction accuracy than the other.
```python
from voxelwise_tutorials.viz import plot_hist2d
ax = plot_hist2d(scores_no_delay, scores)
ax.set(
title='Generalization R2 scores',
xlabel='model without delays',
ylabel='model with delays',
)
plt.show()
```
We see that the model with delays performs much better than the model without
delays. This can be seen in voxels with scores above 0. The distribution
of scores below zero is not very informative, since it corresponds to voxels
with poor predictive performance anyway, and it only shows which model is
overfitting the most.
## Visualize the HRF
We just saw that delays are necessary to model BOLD responses. Here we show
how the fitted ridge regression weights follow the hemodynamic response
function (HRF).
Fitting a kernel ridge regression results in a set of coefficients called the
"dual" coefficients $w$. These coefficients differ from the "primal"
coefficients $\beta$ obtained with a ridge regression, but the primal
coefficients can be computed from the dual coefficients using the training
features $X$:
\begin{align}\beta = X^\top w\end{align}
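As a shape-level sketch of this conversion (the arrays here are hypothetical stand-ins, not the tutorial's data):
```python
rng_demo = np.random.RandomState(0)
X_demo = rng_demo.randn(20, 5)    # hypothetical delayed features (n_samples, n_features)
w_dual = rng_demo.randn(20, 3)    # hypothetical dual coefficients (n_samples, n_voxels)
beta = X_demo.T @ w_dual          # primal coefficients, shape (n_features, n_voxels)
print(beta.shape)
```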
To better visualize the HRF, we will refit a model with more delays, but only
on a selection of voxels to speed up the computations.
```python
# pick the 10 best voxels
voxel_selection = np.argsort(scores)[-10:]
# define a pipeline with more delays
pipeline_more_delays = make_pipeline(
StandardScaler(with_mean=True, with_std=False),
Delayer(delays=[0, 1, 2, 3, 4, 5, 6]),
KernelRidgeCV(
alphas=alphas, cv=cv,
solver_params=dict(n_targets_batch=500, n_alphas_batch=5,
n_targets_batch_refit=100)),
)
pipeline_more_delays.fit(X_train, Y_train[:, voxel_selection])
# get the (primal) ridge regression coefficients
primal_coef = pipeline_more_delays[-1].get_primal_coef()
primal_coef = backend.to_numpy(primal_coef)
# split the ridge coefficients per delays
delayer = pipeline_more_delays.named_steps['delayer']
primal_coef_per_delay = delayer.reshape_by_delays(primal_coef, axis=0)
print("(n_delays, n_features, n_voxels) =", primal_coef_per_delay.shape)
# select the feature with the largest coefficients for each voxel
feature_selection = np.argmax(np.sum(np.abs(primal_coef_per_delay), axis=0),
axis=0)
primal_coef_selection = primal_coef_per_delay[:, feature_selection,
np.arange(len(voxel_selection))]
plt.plot(delayer.delays, primal_coef_selection)
plt.xlabel('Delays')
plt.xticks(delayer.delays)
plt.ylabel('Ridge coefficients')
plt.title(f'Largest feature for the {len(voxel_selection)} best voxels')
plt.axhline(0, color='k', linewidth=0.5)
plt.show()
```
We see that the hemodynamic response function (HRF) is captured in the model
weights. Note that in this dataset, the brain responses are recorded every
two seconds.
| f21ff95ce588509223b19c983a23c999eaadde8b | 16,499 | ipynb | Jupyter Notebook | tutorials/notebooks/movies_3T/03_plot_hemodynamic_response.ipynb | gallantlab/voxelwise_tutorials | 3df639dd5fb957410f41b4a3b986c9f903f5333b | [
"BSD-3-Clause"
] | 12 | 2021-09-08T22:22:26.000Z | 2022-02-10T18:06:33.000Z | tutorials/notebooks/movies_3T/03_plot_hemodynamic_response.ipynb | gallantlab/voxelwise_tutorials | 3df639dd5fb957410f41b4a3b986c9f903f5333b | [
"BSD-3-Clause"
] | 2 | 2021-09-11T16:06:44.000Z | 2021-12-16T23:39:40.000Z | tutorials/notebooks/movies_3T/03_plot_hemodynamic_response.ipynb | gallantlab/voxelwise_tutorials | 3df639dd5fb957410f41b4a3b986c9f903f5333b | [
"BSD-3-Clause"
] | 4 | 2021-09-13T19:11:00.000Z | 2022-03-26T04:35:11.000Z | 45.830556 | 1,500 | 0.617492 | true | 2,666 | Qwen/Qwen-72B | 1. YES
2. YES | 0.705785 | 0.72487 | 0.511603 | __label__eng_Latn | 0.984017 | 0.026953 |
```python
%reload_ext autoreload
%aimport trochoid
%autoreload 1
```
```python
import math
import numpy as np
# %matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use('seaborn-colorblind')
plt.style.use('seaborn-whitegrid')
plt.rcParams['figure.figsize'] = 800/72,800/72
plt.rcParams["font.size"] = 21
# plt.rcParams["legend.loc"] = 'upper right'
import os
import sys
sys.path.append(os.getcwd())  # add the working directory to the path
from trochoid import *
```
## Trochoid
Let $r_c$ be the radius of the fixed circle, $r_m$ the radius of the rolling circle, $\theta$ the rotation angle, and $r_d$ the radius of the drawing point
Epitrochoid:
\begin{cases}
x=(r_{c}+r_{m})\cos \theta -r_{d}\cos \left({\cfrac {r_{c}+r_{m}}{r_{m}}}\theta \right),\\y=(r_{c}+r_{m})\sin \theta -r_{d}\sin \left({\cfrac {r_{c}+r_{m}}{r_{m}}}\theta \right),
\end{cases}
Hypotrochoid:
\begin{cases}
x=(r_{c}-r_{m})\cos \theta +r_{d}\cos \left({\cfrac {r_{c}-r_{m}}{r_{m}}}\theta \right),\\y=(r_{c}-r_{m})\sin \theta -r_{d}\sin \left({\cfrac {r_{c}-r_{m}}{r_{m}}}\theta \right),
\end{cases}
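A minimal standalone sketch of the epitrochoid equations above (independent of the `trochoid` module; the radii are arbitrary example values):
```python
theta = np.linspace(0, 2*np.pi, 1000)   # one turn closes the curve since rc/rm is an integer here
rc, rm, rd = 1.0, 1.0/3.0, 0.2          # example radii (assumptions)
x = (rc + rm)*np.cos(theta) - rd*np.cos((rc + rm)/rm*theta)
y = (rc + rm)*np.sin(theta) - rd*np.sin((rc + rm)/rm*theta)
plt.plot(x, y)
plt.gca().set_aspect('equal', 'box')
plt.show()
```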
## Generalized trochoid
Let $\boldsymbol{p}$ be the curve along which the drawing circle rolls, with tangent vector $\boldsymbol{t}$ and normal vector $\boldsymbol{n}$
The length $s$ of the path the circle has rolled along is given by
\begin{align}
s &= \sum_0^n \sqrt{dx[i]^2 + dy[i]^2}
\end{align}
Since the drawing circle rolls without slipping,
\begin{align}
r_m\theta &= s
\end{align}
Therefore,
\begin{align}
\theta & = s/r_m \\
d\theta &= ds/r_m .
\end{align}
The tangent vector $t[i]$ and normal vector $n[i]$ at the point $p[i]$ are
\begin{align}
t[i] &= \frac{p[i+1]-p[i-1]}{2} \\
n[i] &= \frac{t[i+1]-t[i-1]}{2} \\
&= \frac{p[i+2]-p[i-2]}{4}
\end{align}
Using the unit normal vector $n_u[i]$, the center of the drawing circle $p_m[i]$ is
\begin{align}
p_m[i] &= p[i] + r_m n_u[i]
\end{align}
However, the direction of $n_u[i]$ flips with the concavity of the curve. To avoid this, we redefine $n_u[i]$ as
\begin{align}
n_u[i] &= R(\frac{\pi}{2}) t_u[i]
\end{align}
where $R$ is the rotation matrix.
The coordinates of the drawing point $p_d[i]$ are
\begin{align}
p_d &= p_m[i] + r_d R(\theta[i]) e_x
\end{align}
where $e_x$ is the unit vector in the x direction
```python
# trochoid along a sine curve
fig = plt.figure()
fig.add_subplot(1, 1, 1)
sq_info = {}
x=np.linspace(0,10,num=150)
y=np.sin(x)
plt.plot(x,y)
x,y = trochoid(px=x,py=y,rm=0.1,rd=0.1)
plt.plot(x,y)
fig.axes[0].set_aspect('equal', 'box')
# plt.savefig("masic.pdf")
```
```python
# trochoid on a path
fig = plt.figure()
fig.add_subplot(1, 1, 1)
sq_info = {}
x0,y0 = polygon(64, 1)
plt.plot(x0,y0)
x1,y1 = ctrochoid(rc=0.9, rm=0.99*1/3, rd=0.4,n=1024,outer=False)
print(lcm(9,4))
# print(plot_trochoid(rc=0.95, rm=0.3, rd=0.4,outer=False,rmax=0.98,n=100))
plt.plot(x1,y1)
# x,y=trochoid(rc=1.2, rm=1/3, rd=1/3,outer=False,n=100)
# plt.plot(x,y)
n = 7
# col_inverse = a[:, ::-1]
rm = path_length(x1,y1)/(2*np.pi)/n
x,y = trochoid(px=np.tile(x1,n),py=np.tile(y1,n),rm=rm,rd=0.02*rm)
plt.plot(x,y)
# x,y = ptrochoid(px=np.tile(x1,2),py=np.tile(y1,2),rm=0.1,rd=0.4)
# plt.plot(x,y)
# x2,y2 = trochoid(rc=0.9, rm=0.15/2, rd=1.2,n=1024)
# plt.plot(x2,y2)
# x,y = ptrochoid(px=np.tile(x2,12),py=np.tile(y2,12),rm=3,rd=0.4)
# plt.plot(x,y)
fig.axes[0].set_aspect('equal', 'box')
# plt.savefig("trocids002.pdf")
```
```python
# trochoid on a path
fig = plt.figure()
fig.add_subplot(1, 1, 1)
x0,y0 = polygon(64, 1)
plt.plot(x0,y0)
x1,y1 = ctrochoid(rc=1, rm=1/6, rd=0.8*1/6,n=1024,outer=False)
plt.plot(x1,y1)
m = 7
n = 11/m
rm = path_length(x1,y1)/(2*np.pi)/n
fig.axes[0].set_aspect('equal', 'box')
plt.savefig("trocids002.pdf")
```
```python
def demo_trochoid(px, py, rm, rd, right=True, rmax=None, orient=0, *args, **kwargs):
x = np.zeros(len(px))
y = np.zeros(len(py))
s = 0
theta = 0
rot = np.pi/2
if right is True:
rot = -rot
r_mat = np.matrix(
[[np.cos(rot), -np.sin(rot)],
[np.sin(rot), np.cos(rot)],]
)
for i in range(len(px)):
ds = 0
if i > 0:
ds = np.linalg.norm(np.array([px[i]-px[i-1], py[i]-py[i-1]]))
d_theta = ds/rm
s = s+ds
theta = theta + d_theta
if (i - 1) < 0 :
t = np.array([px[i+1]-px[i], py[i+1]-py[i]])
elif (i+1) >= len(px):
t = np.array([px[i]-px[i-1], py[i]-py[i-1]])
else :
t = np.array([px[i+1]-px[i-1], py[i+1]-py[i-1]])*0.5
t = t/(np.linalg.norm(t) + 1e-9)
n = np.dot(r_mat, np.reshape(t, (2, 1)))
p_m = np.array([[px[i]], [py[i]]]) + rm*n
r_ort = np.matrix(
[[np.cos(theta), -np.sin(theta)],
[np.sin(theta), np.cos(theta)],]
)
p_d = p_m + rd*np.dot(r_ort, np.array([[1], [0]]))
x[i] = p_d.item(0)
y[i] = p_d.item(1)
if i %10 == 0:
p_m=np.reshape(p_m,(1,-1))
plt.plot(*polygon(n=32,r=rm,cx=p_m.item(0),cy=p_m.item(1),color='gray',alpha=0.5) )
plt.plot([p_m.item(0),x[i]],[p_m.item(1),y[i]],'r-o')
# info = {'rot': i}
return (x,y)
```
```python
# trochoid along a sine curve
fig = plt.figure()
fig.add_subplot(1, 1, 1)
sq_info = {}
x=np.linspace(0,10,num=150)
y=np.sin(x)
plt.plot(x,y)
x,y = demo_trochoid(px=x,py=y,rm=0.2,rd=0.3)
plt.plot(x,y)
fig.axes[0].set_aspect('equal', 'box')
plt.savefig("torocoid_sin.pdf")
```
| 78d509b5bc4c1a7bfbd8deba09f8b024307690a9 | 435,722 | ipynb | Jupyter Notebook | demo.ipynb | botamochi6277/trochoid-py | 13efc06c86ed60e4b682d4b5c98d3ee6ad401a25 | [
"MIT"
] | null | null | null | demo.ipynb | botamochi6277/trochoid-py | 13efc06c86ed60e4b682d4b5c98d3ee6ad401a25 | [
"MIT"
] | null | null | null | demo.ipynb | botamochi6277/trochoid-py | 13efc06c86ed60e4b682d4b5c98d3ee6ad401a25 | [
"MIT"
] | null | null | null | 1,081.19603 | 261,132 | 0.954308 | true | 2,240 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.808067 | 0.697678 | __label__yue_Hant | 0.084524 | 0.459272 |
# Mass Maps From Mass-Luminosity Inference Posterior
In this notebook we start to explore the potential of using a mass-luminosity relation posterior to refine mass maps.
Content:
- [Math](#Math)
- [Imports, Constants, Utils, Data](#Imports,-Constants,-Utils,-Data)
- [Probability Functions](#Probability-Functions)
- [Results](#Results)
- [Discussion](#Discussion)
### Math
Inferring mass from the mass-luminosity relation posterior ...
\begin{align}
P(M|L_{obs},z,\sigma_L^{obs}) &= \iint P(M|\alpha, S, L_{obs}, z)P(\alpha, S|L_{obs},z,\sigma_L^{obs})\ d\alpha dS\\
&\propto \iiint P(L_{obs}| L,\sigma_L^{obs})P(L|M,\alpha,S,z)P(M|z)P(\alpha, S|L_{obs},z,\sigma_L^{obs})\ dLd\alpha dS\\
&\approx \frac{P(M|z)}{n_{\alpha,S}}\sum_{\alpha,S \sim P(\alpha, S|L_{obs},z,\sigma_L^{obs})}\left( \frac{1}{n_L}\sum_{L\sim P(L|M,\alpha,S,z)}P(L_{obs}|L,\sigma_L^{obs})\right)\\
&= \frac{P(M|z)}{n_{\alpha,S}}\sum_{\alpha,S \sim P(\alpha, S|L_{obs},z,\sigma_L^{obs})}f(M;\alpha,S,z)\\
\end{align}
Refining for an individual halo ...
\begin{align}
P(M_k|L_{obs},z,\sigma_L^{obs}) &= \iint P(M_k|\alpha, S, L_{obs\ k}, z_k)P(\alpha, S|L_{obs},z,\sigma_L^{obs})\ d\alpha dS\\
&\propto \iiint P(L_{obs\ k}| L_k,\sigma_L^{obs})P(L_k|M_k,\alpha,S,z_k)P(M_k|z_k)P(\alpha, S|L_{obs},z,\sigma_L^{obs})\ dLd\alpha dS\\
&\approx \frac{P(M_k|z_k)}{n_{\alpha,S}}\sum_{\alpha,S \sim P(\alpha, S|L_{obs},z,\sigma_L^{obs})}\left( \frac{1}{n_L}\sum_{L\sim P(L_k|M_k,\alpha,S,z_k)}P(L_{obs\ k}|L_k,\sigma_L^{obs})\right)\\
&=\frac{P(M_k|z_k)}{n_{\alpha,S}}\sum_{\alpha,S \sim P(\alpha, S|L_{obs},z,\sigma_L^{obs})}f(M_k;\alpha,S,z_k)\\
\end{align}
Can also factor it more conventionally for MCMC ...
\begin{align}
\underbrace{P(M_k|L_{obs},z,\sigma_L^{obs})}_{posterior}
&\propto \underbrace{P(M_k|z_k)}_{prior}\underbrace{\iiint P(L_{obs\ k}| L_k,\sigma_L^{obs})P(L_k|M_k,\alpha,S,z_k)P(\alpha, S|L_{obs},z,\sigma_L^{obs})\ dLd\alpha dS}_{likelihood}\\
\end{align}
In the code we have the following naming convention:
- p1 for $P(M|z)$
- p2 for $P(\alpha, S|L_{obs},z,\sigma_L^{obs})$
- p3 for $P(L_k|M_k,\alpha,S,z_k)$
- p4 for $P(L_{obs\ k}|L_k, \sigma^{obs}_L)$
We use the terms **eval** and **samp** to help distinguish between evaluating a distribution and sampling from it.
### Imports, Constants, Utils, Data
```python
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import rc
rc('text', usetex=True)
from bigmali.grid import Grid
from bigmali.prior import TinkerPrior
from bigmali.hyperparameter import get
import numpy as np
from scipy.stats import lognorm
from numpy.random import normal
#globals that functions rely on
grid = Grid()
prior = TinkerPrior(grid)
a_seed = get()[:-1]
S_seed = get()[-1]
mass_points = prior.fetch(grid.snap(0)).mass[2:-2] # cut edges
tmp = np.loadtxt('/Users/user/Code/PanglossNotebooks/MassLuminosityProject/SummerResearch/mass_mapping.txt')
z_data = tmp[:,0]
lobs_data = tmp[:,1]
mass_data = tmp[:,2]
ra_data = tmp[:,3]
dec_data = tmp[:,4]
sigobs = 0.05
def fast_lognormal(mu, sigma, x):
return (1/(x * sigma * np.sqrt(2 * np.pi))) * np.exp(- 0.5 * (np.log(x) - np.log(mu)) ** 2 / sigma ** 2)
```
### Probability Functions
```python
def p1_eval(zk):
return prior.fetch(grid.snap(zk)).prob[2:-2]
def p2_samp(nas=100):
"""
a is fixed on hyperseed,
S is normal distribution centered at hyperseed.
"""
return normal(S_seed, S_seed / 10, size=nas)
def p3_samp(mk, a, S, zk, nl=100):
mu_lum = np.exp(a[0]) * ((mk / a[2]) ** a[1]) * ((1 + zk) ** (a[3]))
return lognorm(S, scale=mu_lum).rvs(nl)
def p4_eval(lobsk, lk, sigobs):
return fast_lognormal(lk, sigobs, lobsk)
def f(a, S, zk, lobsk, nl=100):
ans = []
for mk in mass_points:
tot = 0
for x in p3_samp(mk, a, S, zk, nl):
tot += p4_eval(lobsk, x, sigobs)
ans.append(tot / nl)
return ans
def mass_dist(ind=1, nas=10, nl=100):
lobsk = lobs_data[ind]
zk = z_data[ind]
tot = np.zeros(len(mass_points))
for S in p2_samp(nas):
tot += f(a_seed, S, zk, lobsk, nl)
prop = p1_eval(zk) * tot / nas
return prop / np.trapz(prop, x=mass_points)
```
### Results
```python
plt.subplot(3,3,1)
dist = p1_eval(zk)
plt.plot(mass_points, dist)
plt.gca().set_xscale('log')
plt.gca().set_yscale('log')
plt.ylim([10**-25, 10])
plt.xlim([mass_points.min(), mass_points.max()])
plt.title('Prior')
plt.xlabel(r'Mass $(M_\odot)$')
plt.ylabel('Density')
for ind in range(2,9):
plt.subplot(3,3,ind)
dist = mass_dist(ind)
plt.plot(mass_points, dist, alpha=0.6, linewidth=2)
plt.xlim([mass_points.min(), mass_points.max()])
plt.gca().set_xscale('log')
plt.gca().set_yscale('log')
plt.ylim([10**-25, 10])
plt.gca().axvline(mass_data[ind], color='red', linewidth=2, alpha=0.6)
plt.title('Mass Distribution')
plt.xlabel(r'Mass $(M_\odot)$')
plt.ylabel('Density')
# most massive
ind = np.argmax(mass_data)
plt.subplot(3,3,9)
dist = mass_dist(ind)
plt.plot(mass_points, dist, alpha=0.6, linewidth=2)
plt.gca().set_xscale('log')
plt.gca().set_yscale('log')
plt.xlim([mass_points.min(), mass_points.max()])
plt.ylim([10**-25, 10])
plt.gca().axvline(mass_data[ind], color='red', linewidth=2, alpha=0.6)
plt.title('Mass Distribution')
plt.xlabel(r'Mass $(M_\odot)$')
plt.ylabel('Density')
# plt.tight_layout()
plt.gcf().set_size_inches((10,6))
```
### Turning into Probabilistic Catalogue
```python
index = range(2,9) + [np.argmax(mass_data)]
plt.title('Simple Sketch of Field of View')
plt.scatter(ra_data[index], dec_data[index] , s=np.log(mass_data[index]), alpha=0.6)
plt.xlabel('ra')
plt.ylabel('dec');
```
Need to build the graphic; a sketch of these steps follows after the list:
- Make grid that will correspond to pixels
- map ra-dec window to grid
- snap objects onto grid and accumulate mass in each bin of the grid
- plot the grayscale image
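A rough sketch of these steps (the pixel count and the use of `mass_data` as the accumulated quantity are assumptions for illustration; `index` comes from the cell above):
```python
npix = 32                                  # assumed pixel grid size
mass_map, xedges, yedges = np.histogram2d(
    ra_data[index], dec_data[index], bins=npix,
    weights=mass_data[index])              # snap halos onto the grid, accumulate mass
plt.imshow(np.log10(mass_map.T + 1), origin='lower', cmap='gray',
           extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
plt.title('Sketch of a (log) mass map')
plt.xlabel('ra')
plt.ylabel('dec')
plt.show()
```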
### Discussion
- While this is a simple toy model, the consistency between the predicted mass distribution and the true mass is encouraging.
- The noise in the mass distribution plots is interesting. The noise increases for masses that are further from the truth. A similar effect may also exist in bigmali; could it lead to a failure mode?
- In order to build probabilistic mass maps we will need to be able to sample from the mass distributions. One way to do this would be fitting a normal distribution and drawing from that distribution. This would also mitigate the influence of the noise for masses far from the true mass; a sketch follows below.
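As a first pass at that idea, here is a hedged sketch that samples masses directly from the (normalized) `mass_dist` grid; fitting a parametric distribution, as suggested above, would smooth out the noise further:
```python
def sample_mass(ind, nsamp=1000):
    # convert the gridded density into discrete sampling weights on mass_points
    dens = mass_dist(ind)
    w = dens / dens.sum()
    return np.random.choice(mass_points, size=nsamp, p=w)

samples = sample_mass(2)
print(samples.mean(), mass_data[2])  # rough agreement expected
```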
| c9c84f045d7357dc237216cb26b7bb1889131558 | 93,243 | ipynb | Jupyter Notebook | MassLuminosityProject/SummerResearch/MassMapsFromMassLuminosity_20170626.ipynb | davidthomas5412/PanglossNotebooks | 719a3b9a5d0e121f0e9bc2a92a968abf7719790f | [
"MIT"
] | null | null | null | MassLuminosityProject/SummerResearch/MassMapsFromMassLuminosity_20170626.ipynb | davidthomas5412/PanglossNotebooks | 719a3b9a5d0e121f0e9bc2a92a968abf7719790f | [
"MIT"
] | 2 | 2016-12-13T02:05:57.000Z | 2017-01-21T02:16:27.000Z | MassLuminosityProject/SummerResearch/MassMapsFromMassLuminosity_20170626.ipynb | davidthomas5412/PanglossNotebooks | 719a3b9a5d0e121f0e9bc2a92a968abf7719790f | [
"MIT"
] | null | null | null | 293.216981 | 68,576 | 0.903768 | true | 2,138 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90053 | 0.654895 | 0.589752 | __label__eng_Latn | 0.57468 | 0.208522 |
```python
import numpy as np
import sympy as sym

# nodal coordinates of a 4-node quadrilateral element (symbolic)
x_1, x_2, x_3, x_4 = sym.symbols('x_1, x_2, x_3, x_4')
y_1, y_2, y_3, y_4 = sym.symbols('y_1, y_2, y_3, y_4')
XY = sym.Matrix([sym.symbols('x_1, x_2, x_3, x_4'), sym.symbols('y_1, y_2, y_3, y_4')]).transpose()

# natural (reference) coordinates
xi, eta = sym.symbols('xi, eta')
basis = sym.Matrix([xi, eta])

# bilinear shape functions of the 4-node quad
N1 = (1-xi)*(1-eta)/4
N2 = (1+xi)*(1-eta)/4
N3 = (1+xi)*(1+eta)/4
N4 = (1-xi)*(1+eta)/4
NN = sym.Matrix([N1, N2, N3, N4])

# isoparametric map from (xi, eta) to (x, y)
coordinate = (NN.transpose()*XY).transpose()

# shape-function derivatives and the Jacobian of the map
NN_diff = NN.jacobian(basis)
jacob = coordinate.jacobian(basis)
jacob_det = sym.det(jacob)
jacob_inv = sym.inv_quick(jacob)

# derivatives with respect to the global coordinates
NN_diff_global = (NN_diff*jacob_inv)
temp = NN_diff_global*NN_diff_global.transpose()/jacob_det

# Gauss point coordinate 1/sqrt(3)
sq3 = np.sqrt(1/3)
```
```python
NN
```
$\displaystyle \left[\begin{matrix}\frac{\left(1 - \eta\right) \left(1 - \xi\right)}{4}\\\frac{\left(1 - \eta\right) \left(\xi + 1\right)}{4}\\\frac{\left(\eta + 1\right) \left(\xi + 1\right)}{4}\\\frac{\left(1 - \xi\right) \left(\eta + 1\right)}{4}\end{matrix}\right]$
```python
coordinate
```
$\displaystyle \left[\begin{matrix}\frac{x_{1} \left(1 - \eta\right) \left(1 - \xi\right)}{4} + \frac{x_{2} \left(1 - \eta\right) \left(\xi + 1\right)}{4} + \frac{x_{3} \left(\eta + 1\right) \left(\xi + 1\right)}{4} + \frac{x_{4} \left(1 - \xi\right) \left(\eta + 1\right)}{4}\\\frac{y_{1} \left(1 - \eta\right) \left(1 - \xi\right)}{4} + \frac{y_{2} \left(1 - \eta\right) \left(\xi + 1\right)}{4} + \frac{y_{3} \left(\eta + 1\right) \left(\xi + 1\right)}{4} + \frac{y_{4} \left(1 - \xi\right) \left(\eta + 1\right)}{4}\end{matrix}\right]$
```python
jacob
```
$\displaystyle \left[\begin{matrix}- \frac{x_{1} \left(1 - \eta\right)}{4} + \frac{x_{2} \left(1 - \eta\right)}{4} + \frac{x_{3} \left(\eta + 1\right)}{4} - \frac{x_{4} \left(\eta + 1\right)}{4} & - \frac{x_{1} \left(1 - \xi\right)}{4} - \frac{x_{2} \left(\xi + 1\right)}{4} + \frac{x_{3} \left(\xi + 1\right)}{4} + \frac{x_{4} \left(1 - \xi\right)}{4}\\- \frac{y_{1} \left(1 - \eta\right)}{4} + \frac{y_{2} \left(1 - \eta\right)}{4} + \frac{y_{3} \left(\eta + 1\right)}{4} - \frac{y_{4} \left(\eta + 1\right)}{4} & - \frac{y_{1} \left(1 - \xi\right)}{4} - \frac{y_{2} \left(\xi + 1\right)}{4} + \frac{y_{3} \left(\xi + 1\right)}{4} + \frac{y_{4} \left(1 - \xi\right)}{4}\end{matrix}\right]$
```python
jacob.subs(([x_1, -0.7], [x_2, 0.2], [x_3, 0.8], [x_4, 0], [y_1, 0.3], [y_2, -0.8], [y_3, 0], [y_4, 0.8]))
```
$\displaystyle \left[\begin{matrix}0.425 - 0.025 \eta & 0.325 - 0.025 \xi\\0.075 \eta - 0.475 & 0.075 \xi + 0.325\end{matrix}\right]$
```python
jaco = jacob.subs(([xi, sq3], [eta, -sq3], [x_1, -0.8], [x_2, 0], [x_3, 0.8], [x_4, 0], [y_1, 0], [y_2, -0.8], [y_3, 0], [y_4, 0.8]))
jaco
```
$\displaystyle \left[\begin{matrix}0.4 & 0.4\\-0.4 & 0.4\end{matrix}\right]$
```python
jaco_inv = jaco.inv()
jaco_inv
```
$\displaystyle \left[\begin{matrix}1.25 & -1.25\\1.25 & 1.25\end{matrix}\right]$
```python
NN_diff.subs(([xi, sq3], [eta, -sq3]))
```
$\displaystyle \left[\begin{matrix}-0.394337567297406 & -0.105662432702594\\0.394337567297406 & -0.394337567297406\\0.105662432702594 & 0.394337567297406\\-0.105662432702594 & 0.105662432702594\end{matrix}\right]$
```python
NN_diff
```
$\displaystyle \left[\begin{matrix}\frac{\eta}{4} - \frac{1}{4} & \frac{\xi}{4} - \frac{1}{4}\\\frac{1}{4} - \frac{\eta}{4} & - \frac{\xi}{4} - \frac{1}{4}\\\frac{\eta}{4} + \frac{1}{4} & \frac{\xi}{4} + \frac{1}{4}\\- \frac{\eta}{4} - \frac{1}{4} & \frac{1}{4} - \frac{\xi}{4}\end{matrix}\right]$
```python
jacob_det
```
$\displaystyle - \frac{\eta x_{1} y_{2}}{8} + \frac{\eta x_{1} y_{3}}{8} + \frac{\eta x_{2} y_{1}}{8} - \frac{\eta x_{2} y_{4}}{8} - \frac{\eta x_{3} y_{1}}{8} + \frac{\eta x_{3} y_{4}}{8} + \frac{\eta x_{4} y_{2}}{8} - \frac{\eta x_{4} y_{3}}{8} - \frac{x_{1} \xi y_{3}}{8} + \frac{x_{1} \xi y_{4}}{8} + \frac{x_{1} y_{2}}{8} - \frac{x_{1} y_{4}}{8} + \frac{x_{2} \xi y_{3}}{8} - \frac{x_{2} \xi y_{4}}{8} - \frac{x_{2} y_{1}}{8} + \frac{x_{2} y_{3}}{8} + \frac{x_{3} \xi y_{1}}{8} - \frac{x_{3} \xi y_{2}}{8} - \frac{x_{3} y_{2}}{8} + \frac{x_{3} y_{4}}{8} - \frac{x_{4} \xi y_{1}}{8} + \frac{x_{4} \xi y_{2}}{8} + \frac{x_{4} y_{1}}{8} - \frac{x_{4} y_{3}}{8}$
```python
jacob_inv
```
$\displaystyle \left[\begin{matrix}\frac{- \frac{y_{1} \left(1 - \xi\right)}{4} - \frac{y_{2} \left(\xi + 1\right)}{4} + \frac{y_{3} \left(\xi + 1\right)}{4} + \frac{y_{4} \left(1 - \xi\right)}{4}}{\left(- \frac{x_{1} \left(1 - \eta\right)}{4} + \frac{x_{2} \left(1 - \eta\right)}{4} + \frac{x_{3} \left(\eta + 1\right)}{4} - \frac{x_{4} \left(\eta + 1\right)}{4}\right) \left(- \frac{y_{1} \left(1 - \xi\right)}{4} - \frac{y_{2} \left(\xi + 1\right)}{4} + \frac{y_{3} \left(\xi + 1\right)}{4} + \frac{y_{4} \left(1 - \xi\right)}{4}\right) - \left(- \frac{x_{1} \left(1 - \xi\right)}{4} - \frac{x_{2} \left(\xi + 1\right)}{4} + \frac{x_{3} \left(\xi + 1\right)}{4} + \frac{x_{4} \left(1 - \xi\right)}{4}\right) \left(- \frac{y_{1} \left(1 - \eta\right)}{4} + \frac{y_{2} \left(1 - \eta\right)}{4} + \frac{y_{3} \left(\eta + 1\right)}{4} - \frac{y_{4} \left(\eta + 1\right)}{4}\right)} & \frac{\frac{x_{1} \left(1 - \xi\right)}{4} + \frac{x_{2} \left(\xi + 1\right)}{4} - \frac{x_{3} \left(\xi + 1\right)}{4} - \frac{x_{4} \left(1 - \xi\right)}{4}}{\left(- \frac{x_{1} \left(1 - \eta\right)}{4} + \frac{x_{2} \left(1 - \eta\right)}{4} + \frac{x_{3} \left(\eta + 1\right)}{4} - \frac{x_{4} \left(\eta + 1\right)}{4}\right) \left(- \frac{y_{1} \left(1 - \xi\right)}{4} - \frac{y_{2} \left(\xi + 1\right)}{4} + \frac{y_{3} \left(\xi + 1\right)}{4} + \frac{y_{4} \left(1 - \xi\right)}{4}\right) - \left(- \frac{x_{1} \left(1 - \xi\right)}{4} - \frac{x_{2} \left(\xi + 1\right)}{4} + \frac{x_{3} \left(\xi + 1\right)}{4} + \frac{x_{4} \left(1 - \xi\right)}{4}\right) \left(- \frac{y_{1} \left(1 - \eta\right)}{4} + \frac{y_{2} \left(1 - \eta\right)}{4} + \frac{y_{3} \left(\eta + 1\right)}{4} - \frac{y_{4} \left(\eta + 1\right)}{4}\right)}\\\frac{\frac{y_{1} \left(1 - \eta\right)}{4} - \frac{y_{2} \left(1 - \eta\right)}{4} - \frac{y_{3} \left(\eta + 1\right)}{4} + \frac{y_{4} \left(\eta + 1\right)}{4}}{\left(- \frac{x_{1} \left(1 - \eta\right)}{4} + \frac{x_{2} \left(1 - \eta\right)}{4} + \frac{x_{3} \left(\eta + 1\right)}{4} - \frac{x_{4} \left(\eta + 1\right)}{4}\right) \left(- \frac{y_{1} \left(1 - \xi\right)}{4} - \frac{y_{2} \left(\xi + 1\right)}{4} + \frac{y_{3} \left(\xi + 1\right)}{4} + \frac{y_{4} \left(1 - \xi\right)}{4}\right) - \left(- \frac{x_{1} \left(1 - \xi\right)}{4} - \frac{x_{2} \left(\xi + 1\right)}{4} + \frac{x_{3} \left(\xi + 1\right)}{4} + \frac{x_{4} \left(1 - \xi\right)}{4}\right) \left(- \frac{y_{1} \left(1 - \eta\right)}{4} + \frac{y_{2} \left(1 - \eta\right)}{4} + \frac{y_{3} \left(\eta + 1\right)}{4} - \frac{y_{4} \left(\eta + 1\right)}{4}\right)} & \frac{- \frac{x_{1} \left(1 - \eta\right)}{4} + \frac{x_{2} \left(1 - \eta\right)}{4} + \frac{x_{3} \left(\eta + 1\right)}{4} - \frac{x_{4} \left(\eta + 1\right)}{4}}{\left(- \frac{x_{1} \left(1 - \eta\right)}{4} + \frac{x_{2} \left(1 - \eta\right)}{4} + \frac{x_{3} \left(\eta + 1\right)}{4} - \frac{x_{4} \left(\eta + 1\right)}{4}\right) \left(- \frac{y_{1} \left(1 - \xi\right)}{4} - \frac{y_{2} \left(\xi + 1\right)}{4} + \frac{y_{3} \left(\xi + 1\right)}{4} + \frac{y_{4} \left(1 - \xi\right)}{4}\right) - \left(- \frac{x_{1} \left(1 - \xi\right)}{4} - \frac{x_{2} \left(\xi + 1\right)}{4} + \frac{x_{3} \left(\xi + 1\right)}{4} + \frac{x_{4} \left(1 - \xi\right)}{4}\right) \left(- \frac{y_{1} \left(1 - \eta\right)}{4} + \frac{y_{2} \left(1 - \eta\right)}{4} + \frac{y_{3} \left(\eta + 1\right)}{4} - \frac{y_{4} \left(\eta + 1\right)}{4}\right)}\end{matrix}\right]$
```python
D = sym.ones(2, 2)  # D was not defined in the cells shown; a 2x2 matrix of ones is assumed here
D
```
$\displaystyle \left[\begin{matrix}1 & 1\\1 & 1\end{matrix}\right]$
```python
a = NN_diff_global.subs(([xi, -sq3], [eta, -sq3], [x_1, -0.8], [x_2, 0], [x_3, 0.8], [x_4, 0], [y_1, 0], [y_2, -0.8], [y_3, 0], [y_4, 0.8]))
```
```python
b = NN_diff_global.subs(([xi, sq3], [eta, -sq3], [x_1, -0.8], [x_2, 0], [x_3, 0.8], [x_4, 0], [y_1, 0], [y_2, -0.8], [y_3, 0], [y_4, 0.8]))
```
```python
c = NN_diff_global.subs(([xi, sq3], [eta, +sq3], [x_1, -0.8], [x_2, 0], [x_3, 0.8], [x_4, 0], [y_1, 0], [y_2, -0.8], [y_3, 0], [y_4, 0.8]))
```
```python
d = NN_diff_global.subs(([xi, -sq3], [eta, +sq3], [x_1, -0.8], [x_2, 0], [x_3, 0.8], [x_4, 0], [y_1, 0], [y_2, -0.8], [y_3, 0], [y_4, 0.8]))
```
```python
a
```
$\displaystyle \left[\begin{matrix}-0.985843918243516 & 0\\0.360843918243516 & -0.625\\0.264156081756484 & 0\\0.360843918243516 & 0.625\end{matrix}\right]$
```python
aa = np.array(a, dtype=np.float32)
bb = np.array(b, dtype=np.float32)
cc = np.array(c, dtype=np.float32)
dd = np.array(d, dtype=np.float32)
```
```python
aa
```
array([[-0.9858439 , 0. ],
[ 0.36084393, -0.625 ],
[ 0.26415607, 0. ],
[ 0.36084393, 0.625 ]], dtype=float32)
```python
np.einsum('mi,ni->imn', aa, aa)*0.32
```
array([[[ 0.31100422, -0.11383545, -0.08333333, -0.11383545],
[-0.11383545, 0.04166667, 0.03050212, 0.04166667],
[-0.08333333, 0.03050212, 0.0223291 , 0.03050212],
[-0.11383545, 0.04166667, 0.03050212, 0.04166667]],
[[ 0. , 0. , 0. , 0. ],
[ 0. , 0.125 , 0. , -0.125 ],
[ 0. , 0. , 0. , 0. ],
[ 0. , -0.125 , 0. , 0.125 ]]],
dtype=float32)
```python
(np.einsum('mi,ni->imn', aa, aa)+np.einsum('mi,ni->imn', bb, bb)+\
np.einsum('mi,ni->imn', cc, cc)+np.einsum('mi,ni->imn', dd, dd))*0.32
```
array([[[ 0.5833333 , -0.08333333, -0.41666663, -0.08333333],
[-0.08333333, 0.08333334, -0.08333333, 0.08333334],
[-0.41666663, -0.08333333, 0.5833333 , -0.08333333],
[-0.08333333, 0.08333334, -0.08333333, 0.08333334]],
[[ 0.08333334, -0.08333333, 0.08333334, -0.08333333],
[-0.08333333, 0.5833333 , -0.08333333, -0.41666663],
[ 0.08333334, -0.08333333, 0.08333334, -0.08333333],
[-0.08333333, -0.41666663, -0.08333333, 0.5833333 ]]],
dtype=float32)
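For reference, below is a minimal sketch of the same 2×2 Gauss-quadrature assembly written as an explicit loop over the four Gauss points. The unit Gauss weights and the constant Jacobian determinant `detJ = 0.32` for this element are assumptions read off from the cells above.
```python
# explicit Gauss-point loop, equivalent to the summed einsum expression above
gauss_points = [(-sq3, -sq3), (sq3, -sq3), (sq3, sq3), (-sq3, sq3)]
subs_nodes = ([x_1, -0.8], [x_2, 0], [x_3, 0.8], [x_4, 0],
              [y_1, 0], [y_2, -0.8], [y_3, 0], [y_4, 0.8])

detJ = 0.32  # constant Jacobian determinant of this element (assumed, as above)
Ke = np.zeros((2, 4, 4), dtype=np.float32)
for gxi, geta in gauss_points:
    # global shape-function gradients at this Gauss point: (4 nodes) x (2 components)
    grads = np.array(NN_diff_global.subs(([xi, gxi], [eta, geta]) + subs_nodes),
                     dtype=np.float32)
    Ke += np.einsum('mi,ni->imn', grads, grads) * detJ  # Gauss weight w = 1
Ke
```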
```python
```
# Descriptive Statistics.
Working with the dataset:
- https://archive.ics.uci.edu/ml/datasets/Vertebral+Column
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
Here we will look at some initial measures for data analysis. This initial exploration of the data is very important for making estimates and drawing conclusions.
```python
names=["pelvic_incidence","pelvic_tilt","lumbar_lordosis_angle","sacral_slope","pelvic_radius","degree_spondylolisthesis","class"]
data=pd.read_csv("../Datas/VertebralColumn/column_2C.dat",delimiter="\s+",names=names)
```
```python
data
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>pelvic_incidence</th>
<th>pelvic_tilt</th>
<th>lumbar_lordosis_angle</th>
<th>sacral_slope</th>
<th>pelvic_radius</th>
<th>degree_spondylolisthesis</th>
<th>class</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>63.03</td>
<td>22.55</td>
<td>39.61</td>
<td>40.48</td>
<td>98.67</td>
<td>-0.25</td>
<td>AB</td>
</tr>
<tr>
<th>1</th>
<td>39.06</td>
<td>10.06</td>
<td>25.02</td>
<td>29.00</td>
<td>114.41</td>
<td>4.56</td>
<td>AB</td>
</tr>
<tr>
<th>2</th>
<td>68.83</td>
<td>22.22</td>
<td>50.09</td>
<td>46.61</td>
<td>105.99</td>
<td>-3.53</td>
<td>AB</td>
</tr>
<tr>
<th>3</th>
<td>69.30</td>
<td>24.65</td>
<td>44.31</td>
<td>44.64</td>
<td>101.87</td>
<td>11.21</td>
<td>AB</td>
</tr>
<tr>
<th>4</th>
<td>49.71</td>
<td>9.65</td>
<td>28.32</td>
<td>40.06</td>
<td>108.17</td>
<td>7.92</td>
<td>AB</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>305</th>
<td>47.90</td>
<td>13.62</td>
<td>36.00</td>
<td>34.29</td>
<td>117.45</td>
<td>-4.25</td>
<td>NO</td>
</tr>
<tr>
<th>306</th>
<td>53.94</td>
<td>20.72</td>
<td>29.22</td>
<td>33.22</td>
<td>114.37</td>
<td>-0.42</td>
<td>NO</td>
</tr>
<tr>
<th>307</th>
<td>61.45</td>
<td>22.69</td>
<td>46.17</td>
<td>38.75</td>
<td>125.67</td>
<td>-2.71</td>
<td>NO</td>
</tr>
<tr>
<th>308</th>
<td>45.25</td>
<td>8.69</td>
<td>41.58</td>
<td>36.56</td>
<td>118.55</td>
<td>0.21</td>
<td>NO</td>
</tr>
<tr>
<th>309</th>
<td>33.84</td>
<td>5.07</td>
<td>36.64</td>
<td>28.77</td>
<td>123.95</td>
<td>-0.20</td>
<td>NO</td>
</tr>
</tbody>
</table>
<p>310 rows × 7 columns</p>
</div>
We can first get a general overview of our data to understand the behaviour of the variables in the study. This is very easy to do in Python using pandas and some of its predefined functions.
```python
print("El número de variables es N_var =",len(data.columns))
print("El número de casos o instancias (o individuos estudiados) es N =",len(data))
```
El número de variables es N_var = 7
El número de casos o instancias (o individuos estudiados) es N = 310
Some common and widely used statistics: for example, we can compute the mean, median and mode, as well as the standard deviation and variance, for each of the numeric variables.
- Mean (average):
\begin{equation}
\bar x=\frac{1}{N}\sum_{i=0}^{N-1}x_i
\end{equation}
- Median (50th percentile): with the data sorted in ascending order and indexed from 0:
\begin{equation}
median=\left\{\begin{matrix}
\frac{1}{2}\left(x\left[\frac{N}{2}-1\right]+x\left[\frac{N}{2}\right]\right) & \mbox{if $N$ is even}\\
x\left[\frac{N-1}{2}\right] & \mbox{if $N$ is odd}\\
\end{matrix}\right.
\end{equation}
- Mode (the most frequent value)
- Variance:
\begin{equation}
s^2(x)=\mbox{Var}(x)=\frac{1}{N}\sum_{i=0}^{N-1}(x_i-\bar x)^2
\end{equation}
- Standard deviation:
\begin{equation}
\sigma_x=s(x)=\sqrt{\mbox{Var}(x)}=\sqrt{\frac{1}{N}\sum_{i=0}^{N-1}(x_i-\bar x)^2}
\end{equation}
```python
def average(var):
return np.sum(var)/len(var)
def median(var):
    var_ord = np.sort(var)
    N = len(var_ord)
    if N % 2 == 0:
        # even N: average the two middle values
        median = (var_ord[N//2 - 1] + var_ord[N//2]) / 2
    else:
        # odd N: take the middle value
        median = var_ord[(N - 1)//2]
    return median
def mode(var):
dat,frec=np.unique(var,return_counts=True)
if len(dat[frec==frec.max()])>1:
print("Existen varias modas (la variable es multimodal)")
if len(dat[frec==frec.max()])==1:
print("Existe una moda (la variable es unimodal)")
return dat[frec==frec.max()]
```
```python
def variance(var):
return np.sum((var-average(var))**2)/len(var)
def std(var):
return np.sqrt(variance(var))
```
```python
for i in data.columns[:-1]:
print("Para la variable "+str(i),"el promedio (la media) es <{}> =".format(str(i)),average(data[i]))
```
Para la variable pelvic_incidence el promedio (la media) es <pelvic_incidence> = 60.49648387096773
Para la variable pelvic_tilt el promedio (la media) es <pelvic_tilt> = 17.542903225806448
Para la variable lumbar_lordosis_angle el promedio (la media) es <lumbar_lordosis_angle> = 51.93070967741936
Para la variable sacral_slope el promedio (la media) es <sacral_slope> = 42.953870967741935
Para la variable pelvic_radius el promedio (la media) es <pelvic_radius> = 117.92054838709676
Para la variable degree_spondylolisthesis el promedio (la media) es <degree_spondylolisthesis> = 26.296741935483873
```python
for i in data.columns[:-1]:
print("Para la variable "+str(i),"la mediana es med({}) =".format(str(i)),median(data[i]))
```
    Para la variable pelvic_incidence la mediana es med(pelvic_incidence) = 58.69
    Para la variable pelvic_tilt la mediana es med(pelvic_tilt) = 16.36
    Para la variable lumbar_lordosis_angle la mediana es med(lumbar_lordosis_angle) = 49.565
    Para la variable sacral_slope la mediana es med(sacral_slope) = 42.405
    Para la variable pelvic_radius la mediana es med(pelvic_radius) = 118.265
    Para la variable degree_spondylolisthesis la mediana es med(degree_spondylolisthesis) = 11.765
```python
for i in data.columns[:-1]:
print("Para la variable "+str(i),"la moda es moda({}) =".format(str(i)),mode(data[i]))
```
Existen varias modas (la variable es multimodal)
Para la variable pelvic_incidence la moda es moda(pelvic_incidence) = [42.52 49.71 50.91 53.94 54.92 63.03 65.01 65.76 74.72]
Existen varias modas (la variable es multimodal)
Para la variable pelvic_tilt la moda es moda(pelvic_tilt) = [ 5.27 8.4 10.06 10.22 10.76 13.11 13.28 13.92 14.38 15.4 16.42 16.74
17.44 19.44 21.12 23.08 26.33 33.28 37.52]
Existen varias modas (la variable es multimodal)
Para la variable lumbar_lordosis_angle la moda es moda(lumbar_lordosis_angle) = [35. 42. 47. 52. 58.]
Existe una moda (la variable es unimodal)
Para la variable sacral_slope la moda es moda(sacral_slope) = [56.31]
Existen varias modas (la variable es multimodal)
Para la variable pelvic_radius la moda es moda(pelvic_radius) = [110.71 116.56 116.59 116.8 117.98 119.32 129.39]
Existen varias modas (la variable es multimodal)
Para la variable degree_spondylolisthesis la moda es moda(degree_spondylolisthesis) = [-4.08 -2.01 1.01 3.09 4.96]
```python
for i in data.columns[:-1]:
print("Para la variable "+str(i),"la varianza es Var({}) =".format(str(i)),variance(data[i]))
```
Para la variable pelvic_incidence la varianza es Var(pelvic_incidence) = 296.12512021748176
Para la variable pelvic_tilt la varianza es Var(pelvic_tilt) = 99.83976124869929
Para la variable lumbar_lordosis_angle la varianza es Var(lumbar_lordosis_angle) = 343.13176723829343
Para la variable sacral_slope la varianza es Var(sacral_slope) = 179.5889740478668
Para la variable pelvic_radius la varianza es Var(pelvic_radius) = 176.7871277637877
Para la variable degree_spondylolisthesis la varianza es Var(degree_spondylolisthesis) = 1406.119148417274
```python
for i in data.columns[:-1]:
print("Para la variable "+str(i),"la desviación estándar es STD({}) =".format(str(i)),std(data[i]))
```
Para la variable pelvic_incidence la desviación estándar es STD(pelvic_incidence) = 17.208286382364797
Para la variable pelvic_tilt la desviación estándar es STD(pelvic_tilt) = 9.991984850303732
Para la variable lumbar_lordosis_angle la desviación estándar es STD(lumbar_lordosis_angle) = 18.523816216921755
Para la variable sacral_slope la desviación estándar es STD(sacral_slope) = 13.401081077579779
Para la variable pelvic_radius la desviación estándar es STD(pelvic_radius) = 13.296132060256761
Para la variable degree_spondylolisthesis la desviación estándar es STD(degree_spondylolisthesis) = 37.49825527164263
We can also use a pandas function that gathers some basic statistics. Note that `describe()` reports the sample standard deviation (computed with `ddof=1`), which is why its values differ slightly from the population estimates above:
```python
vertebral_stats=data.describe().T
```
```python
vertebral_stats
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr>
<th>pelvic_incidence</th>
<td>310.0</td>
<td>60.496484</td>
<td>17.236109</td>
<td>26.15</td>
<td>46.4325</td>
<td>58.690</td>
<td>72.8800</td>
<td>129.83</td>
</tr>
<tr>
<th>pelvic_tilt</th>
<td>310.0</td>
<td>17.542903</td>
<td>10.008140</td>
<td>-6.55</td>
<td>10.6675</td>
<td>16.360</td>
<td>22.1200</td>
<td>49.43</td>
</tr>
<tr>
<th>lumbar_lordosis_angle</th>
<td>310.0</td>
<td>51.930710</td>
<td>18.553766</td>
<td>14.00</td>
<td>37.0000</td>
<td>49.565</td>
<td>63.0000</td>
<td>125.74</td>
</tr>
<tr>
<th>sacral_slope</th>
<td>310.0</td>
<td>42.953871</td>
<td>13.422748</td>
<td>13.37</td>
<td>33.3475</td>
<td>42.405</td>
<td>52.6925</td>
<td>121.43</td>
</tr>
<tr>
<th>pelvic_radius</th>
<td>310.0</td>
<td>117.920548</td>
<td>13.317629</td>
<td>70.08</td>
<td>110.7100</td>
<td>118.265</td>
<td>125.4675</td>
<td>163.07</td>
</tr>
<tr>
<th>degree_spondylolisthesis</th>
<td>310.0</td>
<td>26.296742</td>
<td>37.558883</td>
<td>-11.06</td>
<td>1.6000</td>
<td>11.765</td>
<td>41.2850</td>
<td>418.54</td>
</tr>
</tbody>
</table>
</div>
We can also use the functions predefined in Python for these tasks:
```python
import scipy.stats as st
# The median can also be computed, as can the skewness and kurtosis coefficients
print(data.columns)
print("media =",np.mean(data.loc[:, data.columns != 'class'],axis=0))
print("median =",np.median(data.loc[:, data.columns != 'class'],axis=0))
print("varianza =",np.var(data.loc[:, data.columns != 'class'],axis=0))
print("desviación estándar =",np.std(data.loc[:, data.columns != 'class'],axis=0))
print("skewness =",st.skew(data.loc[:, data.columns != 'class'],axis=0))
print("kurtosis =",st.kurtosis(data.loc[:, data.columns != 'class'],axis=0))
```
Index(['pelvic_incidence', 'pelvic_tilt', 'lumbar_lordosis_angle',
'sacral_slope', 'pelvic_radius', 'degree_spondylolisthesis', 'class'],
dtype='object')
media = pelvic_incidence 60.496484
pelvic_tilt 17.542903
lumbar_lordosis_angle 51.930710
sacral_slope 42.953871
pelvic_radius 117.920548
degree_spondylolisthesis 26.296742
dtype: float64
median = [ 58.69 16.36 49.565 42.405 118.265 11.765]
varianza = pelvic_incidence 296.125120
pelvic_tilt 99.839761
lumbar_lordosis_angle 343.131767
sacral_slope 179.588974
pelvic_radius 176.787128
degree_spondylolisthesis 1406.119148
dtype: float64
desviación estándar = pelvic_incidence 17.208286
pelvic_tilt 9.991985
lumbar_lordosis_angle 18.523816
sacral_slope 13.401081
pelvic_radius 13.296132
degree_spondylolisthesis 37.498255
dtype: float64
skewness = [ 0.51786409 0.67329913 0.5964695 0.78883655 -0.17607403 4.29696644]
kurtosis = [ 0.2006561 0.64597047 0.13977901 2.94050906 0.90035703 37.4374569 ]
```python
vertebral_stats["median"]=np.median(data.loc[:, data.columns != 'class'],axis=0)
vertebral_stats["skewness"]=st.skew(data.loc[:, data.columns != 'class'],axis=0)
vertebral_stats["kurtosis"]=st.kurtosis(data.loc[:, data.columns != 'class'],axis=0)
```
```python
vertebral_stats
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
<th>median</th>
<th>skewness</th>
<th>kurtosis</th>
</tr>
</thead>
<tbody>
<tr>
<th>pelvic_incidence</th>
<td>310.0</td>
<td>60.496484</td>
<td>17.236109</td>
<td>26.15</td>
<td>46.4325</td>
<td>58.690</td>
<td>72.8800</td>
<td>129.83</td>
<td>58.690</td>
<td>0.517864</td>
<td>0.200656</td>
</tr>
<tr>
<th>pelvic_tilt</th>
<td>310.0</td>
<td>17.542903</td>
<td>10.008140</td>
<td>-6.55</td>
<td>10.6675</td>
<td>16.360</td>
<td>22.1200</td>
<td>49.43</td>
<td>16.360</td>
<td>0.673299</td>
<td>0.645970</td>
</tr>
<tr>
<th>lumbar_lordosis_angle</th>
<td>310.0</td>
<td>51.930710</td>
<td>18.553766</td>
<td>14.00</td>
<td>37.0000</td>
<td>49.565</td>
<td>63.0000</td>
<td>125.74</td>
<td>49.565</td>
<td>0.596469</td>
<td>0.139779</td>
</tr>
<tr>
<th>sacral_slope</th>
<td>310.0</td>
<td>42.953871</td>
<td>13.422748</td>
<td>13.37</td>
<td>33.3475</td>
<td>42.405</td>
<td>52.6925</td>
<td>121.43</td>
<td>42.405</td>
<td>0.788837</td>
<td>2.940509</td>
</tr>
<tr>
<th>pelvic_radius</th>
<td>310.0</td>
<td>117.920548</td>
<td>13.317629</td>
<td>70.08</td>
<td>110.7100</td>
<td>118.265</td>
<td>125.4675</td>
<td>163.07</td>
<td>118.265</td>
<td>-0.176074</td>
<td>0.900357</td>
</tr>
<tr>
<th>degree_spondylolisthesis</th>
<td>310.0</td>
<td>26.296742</td>
<td>37.558883</td>
<td>-11.06</td>
<td>1.6000</td>
<td>11.765</td>
<td>41.2850</td>
<td>418.54</td>
<td>11.765</td>
<td>4.296966</td>
<td>37.437457</td>
</tr>
</tbody>
</table>
</div>
Now let us understand the problem we are embarking on. It turns out that data can contain outliers; in other words, there are values that appear to lie outside the range of the rest of the set. Let us look at this with another data set: https://archive.ics.uci.edu/ml/datasets/ionosphere
```python
data=pd.read_csv("../Datas/Ionosphere/ionosphere.data",delimiter=",",names=["Atr"+str(i) for i in range(35)])
```
```python
data
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Atr0</th>
<th>Atr1</th>
<th>Atr2</th>
<th>Atr3</th>
<th>Atr4</th>
<th>Atr5</th>
<th>Atr6</th>
<th>Atr7</th>
<th>Atr8</th>
<th>Atr9</th>
<th>...</th>
<th>Atr25</th>
<th>Atr26</th>
<th>Atr27</th>
<th>Atr28</th>
<th>Atr29</th>
<th>Atr30</th>
<th>Atr31</th>
<th>Atr32</th>
<th>Atr33</th>
<th>Atr34</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>0</td>
<td>0.99539</td>
<td>-0.05889</td>
<td>0.85243</td>
<td>0.02306</td>
<td>0.83398</td>
<td>-0.37708</td>
<td>1.00000</td>
<td>0.03760</td>
<td>...</td>
<td>-0.51171</td>
<td>0.41078</td>
<td>-0.46168</td>
<td>0.21266</td>
<td>-0.34090</td>
<td>0.42267</td>
<td>-0.54487</td>
<td>0.18641</td>
<td>-0.45300</td>
<td>g</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>0</td>
<td>1.00000</td>
<td>-0.18829</td>
<td>0.93035</td>
<td>-0.36156</td>
<td>-0.10868</td>
<td>-0.93597</td>
<td>1.00000</td>
<td>-0.04549</td>
<td>...</td>
<td>-0.26569</td>
<td>-0.20468</td>
<td>-0.18401</td>
<td>-0.19040</td>
<td>-0.11593</td>
<td>-0.16626</td>
<td>-0.06288</td>
<td>-0.13738</td>
<td>-0.02447</td>
<td>b</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>0</td>
<td>1.00000</td>
<td>-0.03365</td>
<td>1.00000</td>
<td>0.00485</td>
<td>1.00000</td>
<td>-0.12062</td>
<td>0.88965</td>
<td>0.01198</td>
<td>...</td>
<td>-0.40220</td>
<td>0.58984</td>
<td>-0.22145</td>
<td>0.43100</td>
<td>-0.17365</td>
<td>0.60436</td>
<td>-0.24180</td>
<td>0.56045</td>
<td>-0.38238</td>
<td>g</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>0</td>
<td>1.00000</td>
<td>-0.45161</td>
<td>1.00000</td>
<td>1.00000</td>
<td>0.71216</td>
<td>-1.00000</td>
<td>0.00000</td>
<td>0.00000</td>
<td>...</td>
<td>0.90695</td>
<td>0.51613</td>
<td>1.00000</td>
<td>1.00000</td>
<td>-0.20099</td>
<td>0.25682</td>
<td>1.00000</td>
<td>-0.32382</td>
<td>1.00000</td>
<td>b</td>
</tr>
<tr>
<th>4</th>
<td>1</td>
<td>0</td>
<td>1.00000</td>
<td>-0.02401</td>
<td>0.94140</td>
<td>0.06531</td>
<td>0.92106</td>
<td>-0.23255</td>
<td>0.77152</td>
<td>-0.16399</td>
<td>...</td>
<td>-0.65158</td>
<td>0.13290</td>
<td>-0.53206</td>
<td>0.02431</td>
<td>-0.62197</td>
<td>-0.05707</td>
<td>-0.59573</td>
<td>-0.04608</td>
<td>-0.65697</td>
<td>g</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>346</th>
<td>1</td>
<td>0</td>
<td>0.83508</td>
<td>0.08298</td>
<td>0.73739</td>
<td>-0.14706</td>
<td>0.84349</td>
<td>-0.05567</td>
<td>0.90441</td>
<td>-0.04622</td>
<td>...</td>
<td>-0.04202</td>
<td>0.83479</td>
<td>0.00123</td>
<td>1.00000</td>
<td>0.12815</td>
<td>0.86660</td>
<td>-0.10714</td>
<td>0.90546</td>
<td>-0.04307</td>
<td>g</td>
</tr>
<tr>
<th>347</th>
<td>1</td>
<td>0</td>
<td>0.95113</td>
<td>0.00419</td>
<td>0.95183</td>
<td>-0.02723</td>
<td>0.93438</td>
<td>-0.01920</td>
<td>0.94590</td>
<td>0.01606</td>
<td>...</td>
<td>0.01361</td>
<td>0.93522</td>
<td>0.04925</td>
<td>0.93159</td>
<td>0.08168</td>
<td>0.94066</td>
<td>-0.00035</td>
<td>0.91483</td>
<td>0.04712</td>
<td>g</td>
</tr>
<tr>
<th>348</th>
<td>1</td>
<td>0</td>
<td>0.94701</td>
<td>-0.00034</td>
<td>0.93207</td>
<td>-0.03227</td>
<td>0.95177</td>
<td>-0.03431</td>
<td>0.95584</td>
<td>0.02446</td>
<td>...</td>
<td>0.03193</td>
<td>0.92489</td>
<td>0.02542</td>
<td>0.92120</td>
<td>0.02242</td>
<td>0.92459</td>
<td>0.00442</td>
<td>0.92697</td>
<td>-0.00577</td>
<td>g</td>
</tr>
<tr>
<th>349</th>
<td>1</td>
<td>0</td>
<td>0.90608</td>
<td>-0.01657</td>
<td>0.98122</td>
<td>-0.01989</td>
<td>0.95691</td>
<td>-0.03646</td>
<td>0.85746</td>
<td>0.00110</td>
<td>...</td>
<td>-0.02099</td>
<td>0.89147</td>
<td>-0.07760</td>
<td>0.82983</td>
<td>-0.17238</td>
<td>0.96022</td>
<td>-0.03757</td>
<td>0.87403</td>
<td>-0.16243</td>
<td>g</td>
</tr>
<tr>
<th>350</th>
<td>1</td>
<td>0</td>
<td>0.84710</td>
<td>0.13533</td>
<td>0.73638</td>
<td>-0.06151</td>
<td>0.87873</td>
<td>0.08260</td>
<td>0.88928</td>
<td>-0.09139</td>
<td>...</td>
<td>-0.15114</td>
<td>0.81147</td>
<td>-0.04822</td>
<td>0.78207</td>
<td>-0.00703</td>
<td>0.75747</td>
<td>-0.06678</td>
<td>0.85764</td>
<td>-0.06151</td>
<td>g</td>
</tr>
</tbody>
</table>
<p>351 rows × 35 columns</p>
</div>
```python
len(data.columns)
```
35
```python
plt.figure(figsize=(30,6))
data.iloc[:,2:].boxplot(grid=True,fontsize=15)
plt.yticks(fontsize=16)
plt.savefig("../../figures/ionosphere_boxplot1.png",bbox_inches ="tight")
plt.show()
```
```python
ionosphere_stats=data.describe().T
```
```python
ionosphere_stats["LII"]=ionosphere_stats["25%"]-((ionosphere_stats["75%"]-ionosphere_stats["25%"])*1.5)
ionosphere_stats["LIS"]=ionosphere_stats["75%"]+((ionosphere_stats["75%"]-ionosphere_stats["25%"])*1.5)
ionosphere_stats["LEI"]=ionosphere_stats["25%"]-((ionosphere_stats["75%"]-ionosphere_stats["25%"])*3.0)
ionosphere_stats["LES"]=ionosphere_stats["75%"]+((ionosphere_stats["75%"]-ionosphere_stats["25%"])*3.0)
```
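As a quick sanity check of these fences, a minimal sketch counting the atypical and extreme values of a single attribute directly from the quartiles (the column `Atr5` is an arbitrary choice):
```python
col = "Atr5"  # arbitrary example column
q1, q3 = data[col].quantile(0.25), data[col].quantile(0.75)
iqr = q3 - q1
# values outside the inner fences are atypical, outside the outer fences extreme
atypical = (data[col] < q1 - 1.5*iqr) | (data[col] > q3 + 1.5*iqr)
extreme = (data[col] < q1 - 3.0*iqr) | (data[col] > q3 + 3.0*iqr)
print(atypical.sum(), "atypical and", extreme.sum(), "extreme values in", col)
```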
```python
ionosphere_stats
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>count</th>
<th>mean</th>
<th>std</th>
<th>min</th>
<th>25%</th>
<th>50%</th>
<th>75%</th>
<th>max</th>
<th>LII</th>
<th>LIS</th>
<th>LEI</th>
<th>LES</th>
</tr>
</thead>
<tbody>
<tr>
<th>Atr0</th>
<td>351.0</td>
<td>0.891738</td>
<td>0.311155</td>
<td>0.0</td>
<td>1.000000</td>
<td>1.00000</td>
<td>1.000000</td>
<td>1.0</td>
<td>1.000000</td>
<td>1.000000</td>
<td>1.000000</td>
<td>1.000000</td>
</tr>
<tr>
<th>Atr1</th>
<td>351.0</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.0</td>
<td>0.000000</td>
<td>0.00000</td>
<td>0.000000</td>
<td>0.0</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>Atr2</th>
<td>351.0</td>
<td>0.641342</td>
<td>0.497708</td>
<td>-1.0</td>
<td>0.472135</td>
<td>0.87111</td>
<td>1.000000</td>
<td>1.0</td>
<td>-0.319663</td>
<td>1.791797</td>
<td>-1.111460</td>
<td>2.583595</td>
</tr>
<tr>
<th>Atr3</th>
<td>351.0</td>
<td>0.044372</td>
<td>0.441435</td>
<td>-1.0</td>
<td>-0.064735</td>
<td>0.01631</td>
<td>0.194185</td>
<td>1.0</td>
<td>-0.453115</td>
<td>0.582565</td>
<td>-0.841495</td>
<td>0.970945</td>
</tr>
<tr>
<th>Atr4</th>
<td>351.0</td>
<td>0.601068</td>
<td>0.519862</td>
<td>-1.0</td>
<td>0.412660</td>
<td>0.80920</td>
<td>1.000000</td>
<td>1.0</td>
<td>-0.468350</td>
<td>1.881010</td>
<td>-1.349360</td>
<td>2.762020</td>
</tr>
<tr>
<th>Atr5</th>
<td>351.0</td>
<td>0.115889</td>
<td>0.460810</td>
<td>-1.0</td>
<td>-0.024795</td>
<td>0.02280</td>
<td>0.334655</td>
<td>1.0</td>
<td>-0.563970</td>
<td>0.873830</td>
<td>-1.103145</td>
<td>1.413005</td>
</tr>
<tr>
<th>Atr6</th>
<td>351.0</td>
<td>0.550095</td>
<td>0.492654</td>
<td>-1.0</td>
<td>0.211310</td>
<td>0.72873</td>
<td>0.969240</td>
<td>1.0</td>
<td>-0.925585</td>
<td>2.106135</td>
<td>-2.062480</td>
<td>3.243030</td>
</tr>
<tr>
<th>Atr7</th>
<td>351.0</td>
<td>0.119360</td>
<td>0.520750</td>
<td>-1.0</td>
<td>-0.054840</td>
<td>0.01471</td>
<td>0.445675</td>
<td>1.0</td>
<td>-0.805613</td>
<td>1.196448</td>
<td>-1.556385</td>
<td>1.947220</td>
</tr>
<tr>
<th>Atr8</th>
<td>351.0</td>
<td>0.511848</td>
<td>0.507066</td>
<td>-1.0</td>
<td>0.087110</td>
<td>0.68421</td>
<td>0.953240</td>
<td>1.0</td>
<td>-1.212085</td>
<td>2.252435</td>
<td>-2.511280</td>
<td>3.551630</td>
</tr>
<tr>
<th>Atr9</th>
<td>351.0</td>
<td>0.181345</td>
<td>0.483851</td>
<td>-1.0</td>
<td>-0.048075</td>
<td>0.01829</td>
<td>0.534195</td>
<td>1.0</td>
<td>-0.921480</td>
<td>1.407600</td>
<td>-1.794885</td>
<td>2.281005</td>
</tr>
<tr>
<th>Atr10</th>
<td>351.0</td>
<td>0.476183</td>
<td>0.563496</td>
<td>-1.0</td>
<td>0.021120</td>
<td>0.66798</td>
<td>0.957895</td>
<td>1.0</td>
<td>-1.384043</td>
<td>2.363058</td>
<td>-2.789205</td>
<td>3.768220</td>
</tr>
<tr>
<th>Atr11</th>
<td>351.0</td>
<td>0.155040</td>
<td>0.494817</td>
<td>-1.0</td>
<td>-0.065265</td>
<td>0.02825</td>
<td>0.482375</td>
<td>1.0</td>
<td>-0.886725</td>
<td>1.303835</td>
<td>-1.708185</td>
<td>2.125295</td>
</tr>
<tr>
<th>Atr12</th>
<td>351.0</td>
<td>0.400801</td>
<td>0.622186</td>
<td>-1.0</td>
<td>0.000000</td>
<td>0.64407</td>
<td>0.955505</td>
<td>1.0</td>
<td>-1.433258</td>
<td>2.388763</td>
<td>-2.866515</td>
<td>3.822020</td>
</tr>
<tr>
<th>Atr13</th>
<td>351.0</td>
<td>0.093414</td>
<td>0.494873</td>
<td>-1.0</td>
<td>-0.073725</td>
<td>0.03027</td>
<td>0.374860</td>
<td>1.0</td>
<td>-0.746602</td>
<td>1.047737</td>
<td>-1.419480</td>
<td>1.720615</td>
</tr>
<tr>
<th>Atr14</th>
<td>351.0</td>
<td>0.344159</td>
<td>0.652828</td>
<td>-1.0</td>
<td>0.000000</td>
<td>0.60194</td>
<td>0.919330</td>
<td>1.0</td>
<td>-1.378995</td>
<td>2.298325</td>
<td>-2.757990</td>
<td>3.677320</td>
</tr>
<tr>
<th>Atr15</th>
<td>351.0</td>
<td>0.071132</td>
<td>0.458371</td>
<td>-1.0</td>
<td>-0.081705</td>
<td>0.00000</td>
<td>0.308975</td>
<td>1.0</td>
<td>-0.667725</td>
<td>0.894995</td>
<td>-1.253745</td>
<td>1.481015</td>
</tr>
<tr>
<th>Atr16</th>
<td>351.0</td>
<td>0.381949</td>
<td>0.618020</td>
<td>-1.0</td>
<td>0.000000</td>
<td>0.59091</td>
<td>0.935705</td>
<td>1.0</td>
<td>-1.403558</td>
<td>2.339263</td>
<td>-2.807115</td>
<td>3.742820</td>
</tr>
<tr>
<th>Atr17</th>
<td>351.0</td>
<td>-0.003617</td>
<td>0.496762</td>
<td>-1.0</td>
<td>-0.225690</td>
<td>0.00000</td>
<td>0.195285</td>
<td>1.0</td>
<td>-0.857152</td>
<td>0.826747</td>
<td>-1.488615</td>
<td>1.458210</td>
</tr>
<tr>
<th>Atr18</th>
<td>351.0</td>
<td>0.359390</td>
<td>0.626267</td>
<td>-1.0</td>
<td>0.000000</td>
<td>0.57619</td>
<td>0.899265</td>
<td>1.0</td>
<td>-1.348898</td>
<td>2.248163</td>
<td>-2.697795</td>
<td>3.597060</td>
</tr>
<tr>
<th>Atr19</th>
<td>351.0</td>
<td>-0.024025</td>
<td>0.519076</td>
<td>-1.0</td>
<td>-0.234670</td>
<td>0.00000</td>
<td>0.134370</td>
<td>1.0</td>
<td>-0.788230</td>
<td>0.687930</td>
<td>-1.341790</td>
<td>1.241490</td>
</tr>
<tr>
<th>Atr20</th>
<td>351.0</td>
<td>0.336695</td>
<td>0.609828</td>
<td>-1.0</td>
<td>0.000000</td>
<td>0.49909</td>
<td>0.894865</td>
<td>1.0</td>
<td>-1.342297</td>
<td>2.237163</td>
<td>-2.684595</td>
<td>3.579460</td>
</tr>
<tr>
<th>Atr21</th>
<td>351.0</td>
<td>0.008296</td>
<td>0.518166</td>
<td>-1.0</td>
<td>-0.243870</td>
<td>0.00000</td>
<td>0.188760</td>
<td>1.0</td>
<td>-0.892815</td>
<td>0.837705</td>
<td>-1.541760</td>
<td>1.486650</td>
</tr>
<tr>
<th>Atr22</th>
<td>351.0</td>
<td>0.362475</td>
<td>0.603767</td>
<td>-1.0</td>
<td>0.000000</td>
<td>0.53176</td>
<td>0.911235</td>
<td>1.0</td>
<td>-1.366853</td>
<td>2.278087</td>
<td>-2.733705</td>
<td>3.644940</td>
</tr>
<tr>
<th>Atr23</th>
<td>351.0</td>
<td>-0.057406</td>
<td>0.527456</td>
<td>-1.0</td>
<td>-0.366885</td>
<td>0.00000</td>
<td>0.164630</td>
<td>1.0</td>
<td>-1.164157</td>
<td>0.961902</td>
<td>-1.961430</td>
<td>1.759175</td>
</tr>
<tr>
<th>Atr24</th>
<td>351.0</td>
<td>0.396135</td>
<td>0.578451</td>
<td>-1.0</td>
<td>0.000000</td>
<td>0.55389</td>
<td>0.905240</td>
<td>1.0</td>
<td>-1.357860</td>
<td>2.263100</td>
<td>-2.715720</td>
<td>3.620960</td>
</tr>
<tr>
<th>Atr25</th>
<td>351.0</td>
<td>-0.071187</td>
<td>0.508495</td>
<td>-1.0</td>
<td>-0.332390</td>
<td>-0.01505</td>
<td>0.156765</td>
<td>1.0</td>
<td>-1.066123</td>
<td>0.890497</td>
<td>-1.799855</td>
<td>1.624230</td>
</tr>
<tr>
<th>Atr26</th>
<td>351.0</td>
<td>0.541641</td>
<td>0.516205</td>
<td>-1.0</td>
<td>0.286435</td>
<td>0.70824</td>
<td>0.999945</td>
<td>1.0</td>
<td>-0.783830</td>
<td>2.070210</td>
<td>-1.854095</td>
<td>3.140475</td>
</tr>
<tr>
<th>Atr27</th>
<td>351.0</td>
<td>-0.069538</td>
<td>0.550025</td>
<td>-1.0</td>
<td>-0.443165</td>
<td>-0.01769</td>
<td>0.153535</td>
<td>1.0</td>
<td>-1.338215</td>
<td>1.048585</td>
<td>-2.233265</td>
<td>1.943635</td>
</tr>
<tr>
<th>Atr28</th>
<td>351.0</td>
<td>0.378445</td>
<td>0.575886</td>
<td>-1.0</td>
<td>0.000000</td>
<td>0.49664</td>
<td>0.883465</td>
<td>1.0</td>
<td>-1.325197</td>
<td>2.208662</td>
<td>-2.650395</td>
<td>3.533860</td>
</tr>
<tr>
<th>Atr29</th>
<td>351.0</td>
<td>-0.027907</td>
<td>0.507974</td>
<td>-1.0</td>
<td>-0.236885</td>
<td>0.00000</td>
<td>0.154075</td>
<td>1.0</td>
<td>-0.823325</td>
<td>0.740515</td>
<td>-1.409765</td>
<td>1.326955</td>
</tr>
<tr>
<th>Atr30</th>
<td>351.0</td>
<td>0.352514</td>
<td>0.571483</td>
<td>-1.0</td>
<td>0.000000</td>
<td>0.44277</td>
<td>0.857620</td>
<td>1.0</td>
<td>-1.286430</td>
<td>2.144050</td>
<td>-2.572860</td>
<td>3.430480</td>
</tr>
<tr>
<th>Atr31</th>
<td>351.0</td>
<td>-0.003794</td>
<td>0.513574</td>
<td>-1.0</td>
<td>-0.242595</td>
<td>0.00000</td>
<td>0.200120</td>
<td>1.0</td>
<td>-0.906668</td>
<td>0.864193</td>
<td>-1.570740</td>
<td>1.528265</td>
</tr>
<tr>
<th>Atr32</th>
<td>351.0</td>
<td>0.349364</td>
<td>0.522663</td>
<td>-1.0</td>
<td>0.000000</td>
<td>0.40956</td>
<td>0.813765</td>
<td>1.0</td>
<td>-1.220648</td>
<td>2.034413</td>
<td>-2.441295</td>
<td>3.255060</td>
</tr>
<tr>
<th>Atr33</th>
<td>351.0</td>
<td>0.014480</td>
<td>0.468337</td>
<td>-1.0</td>
<td>-0.165350</td>
<td>0.00000</td>
<td>0.171660</td>
<td>1.0</td>
<td>-0.670865</td>
<td>0.677175</td>
<td>-1.176380</td>
<td>1.182690</td>
</tr>
</tbody>
</table>
</div>
```python
ionosphere_stats2=pd.DataFrame()
ionosphere_stats2.index=ionosphere_stats.index
atipicos,extremos=[],[]
for i in ionosphere_stats2.index:
LII=ionosphere_stats[ionosphere_stats.index==i]["LII"][0]
LIS=ionosphere_stats[ionosphere_stats.index==i]["LIS"][0]
LEI=ionosphere_stats[ionosphere_stats.index==i]["LEI"][0]
LES=ionosphere_stats[ionosphere_stats.index==i]["LES"][0]
    atipicos.append(len(data[((data[i]<=LII)&(data[i]>=LEI))|((data[i]>=LIS)&(data[i]<=LES))]))
extremos.append(len(data[((data[i]<=LEI)|(data[i]>=LES))]))
ionosphere_stats2["Valores atipicos"]=atipicos
ionosphere_stats2["Valores extremos"]=extremos
ionosphere_stats2["Valores atipicos (%)"]=np.round(ionosphere_stats2["Valores atipicos"]/len(data)*100,2)
ionosphere_stats2["Valores extremos (%)"]=np.round(ionosphere_stats2["Valores extremos"]/len(data)*100,2)
ionosphere_stats2.loc['Total'] = ionosphere_stats2.sum(axis=0)
```
```python
ionosphere_stats2[24:]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Valores atipicos</th>
<th>Valores extremos</th>
<th>Valores atipicos (%)</th>
<th>Valores extremos (%)</th>
</tr>
</thead>
<tbody>
<tr>
<th>Atr24</th>
<td>0.0</td>
<td>0.0</td>
<td>0.00</td>
<td>0.0</td>
</tr>
<tr>
<th>Atr25</th>
<td>20.0</td>
<td>0.0</td>
<td>5.70</td>
<td>0.0</td>
</tr>
<tr>
<th>Atr26</th>
<td>0.0</td>
<td>0.0</td>
<td>0.00</td>
<td>0.0</td>
</tr>
<tr>
<th>Atr27</th>
<td>0.0</td>
<td>0.0</td>
<td>0.00</td>
<td>0.0</td>
</tr>
<tr>
<th>Atr28</th>
<td>0.0</td>
<td>0.0</td>
<td>0.00</td>
<td>0.0</td>
</tr>
<tr>
<th>Atr29</th>
<td>31.0</td>
<td>0.0</td>
<td>8.83</td>
<td>0.0</td>
</tr>
<tr>
<th>Atr30</th>
<td>0.0</td>
<td>0.0</td>
<td>0.00</td>
<td>0.0</td>
</tr>
<tr>
<th>Atr31</th>
<td>33.0</td>
<td>0.0</td>
<td>9.40</td>
<td>0.0</td>
</tr>
<tr>
<th>Atr32</th>
<td>0.0</td>
<td>0.0</td>
<td>0.00</td>
<td>0.0</td>
</tr>
<tr>
<th>Atr33</th>
<td>40.0</td>
<td>0.0</td>
<td>11.40</td>
<td>0.0</td>
</tr>
<tr>
<th>Total</th>
<td>971.0</td>
<td>755.0</td>
<td>276.64</td>
<td>215.1</td>
</tr>
</tbody>
</table>
</div>
Take, for example, the attribute with the largest number of atypical values, `Atr33`, and let us see how its distribution changes when we remove those values.
```python
def filter_outlier(col):
    # keep only the values inside the inner fences, i.e. drop the atypical values
    LII=ionosphere_stats[ionosphere_stats.index==col]["LII"][0]
    LIS=ionosphere_stats[ionosphere_stats.index==col]["LIS"][0]
    return data[(data[col]>=LII)&(data[col]<=LIS)][col]
```
```python
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.hist(filter_outlier("Atr33"),label="Atr33 (Outliers removed) ({} rows)".format(len(filter_outlier("Atr33"))),alpha=0.4,color='b')
plt.hist(data["Atr33"],label="Atr33 (All data) ({} rows)".format(len(data)),alpha=0.4,color='r')
plt.vlines(np.mean(filter_outlier("Atr33")),0,120,label="Outliers removed mean's (<Atr33>={})".format(np.round(np.mean(filter_outlier("Atr33")),4)),alpha=0.99,color='b',linestyle="--",linewidth=4)
plt.vlines(np.mean(data["Atr33"]),0,120,label="All data mean's (<Atr33>={})".format(np.round(np.mean(data["Atr33"]),4)),alpha=0.99,color='r',linestyle="--",linewidth=4)
plt.legend(title="Absolute distribution",fontsize=11)
plt.ylabel("Counts",fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.subplot(122)
plt.hist(filter_outlier("Atr33"),label="Atr33 (Outliers removed) ({} rows)".format(len(filter_outlier("Atr33"))),alpha=0.4,cumulative=True,color='b')
plt.hist(data["Atr33"],label="Atr33 (All data) ({} rows)".format(len(data)),alpha=0.4,cumulative=True,color='r')
plt.vlines(np.median(filter_outlier("Atr33")),0,350,label="Outliers removed median's (<Atr33>={})".format(np.round(np.mean(filter_outlier("Atr33")),4)),alpha=0.99,color='b',linestyle="--",linewidth=6)
plt.vlines(np.median(data["Atr33"]),0,350,label="All data median's (<Atr33>={})".format(np.round(np.mean(data["Atr33"]),4)),alpha=0.99,color='r',linestyle="--",linewidth=3)
plt.legend(title="Cumulative distribution",fontsize=11)
plt.ylabel("Counts",fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.savefig("../../figures/ionosphere_hist2.png",bbox_inches ="tight")
plt.show()
```
So, as we can clearly see, the distribution of the data changes when we remove the atypical values, and with it the measures of central tendency. In particular, the percentage difference between the mean of the original data and the mean after the atypical values have been removed is:
```python
np.round(abs(np.mean(filter_outlier("Atr33"))-np.mean(data["Atr33"]))/np.mean(data["Atr33"])*100,2)
```
794.06
```python
np.mean(filter_outlier("Atr33"))
```
-0.10050096463022508
```python
np.mean(data["Atr33"])
```
0.014480113960113956
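The mean is very sensitive to outliers, while the median is much more robust. A minimal sketch comparing the two (the exact numbers depend on the outlier filter above):
```python
# the median barely moves when the atypical values are removed
print("median (all data):        ", np.median(data["Atr33"]))
print("median (outliers removed):", np.median(filter_outlier("Atr33")))
```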
# Models, Data, Learning Problems
In this lab we start our first data analysis on a concrete problem. We are using Fisher's famous <a href="https://en.wikipedia.org/wiki/Iris_flower_data_set">Iris data set</a>. The goal is to classify flowers from the Iris family into one of three species, that look as follows:
*(Images of the three species, Iris Setosa, Iris Versicolor and Iris Virginica, are omitted here.)*
Our data set contains 50 flowers from each class, thus 150 in total. There are four features, the length and width of the petal (dt. Kronblatt) and sepal (dt. Kelchblatt) in centimeters.
Your goal is to go through the notebook, understand the pre-made code and text, and fill in the blanks and exercises left for you. You may also edit the notebook as you wish. A good way to learn is to add comments (lines starting with #) or to modify the code and see what changes.
The data set is distributed with scikit-learn; the only thing we have to do is import a function and call it.
```python
from sklearn.datasets import load_iris
data = load_iris()
X = data.data
y = data.target
print(type(X))
print(X.shape)
print(f"First three rows of data\n{X[:3]}")
print(f"First three labels: {y[:3]}")
```
<class 'numpy.ndarray'>
(150, 4)
First three rows of data
[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]]
First three labels: [0 0 0]
Not only do we get the input matrix $X \in \mathbb{R}^{150 \times 4}$ and target $y \in \mathbb{R}^{150}$, but also meta information such as what the class labels $0, 1, 2$ stand for and what the features (i.e. columns of $X$) correspond to.
```python
print(data.target_names)
print(data.feature_names)
```
['setosa' 'versicolor' 'virginica']
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
As a first step we focus our analysis on the first two variables, the sepal length and sepal width. Since we obtain a representation of the data in two dimensions, we are able to plot it.
```python
X_2 = X[:, :2]
y_2 = y
```
```python
# Configures Jupyter to show graphics in the notebook
%matplotlib inline
from matplotlib import pyplot as plt # standard import
# We write a function so we can reuse it later.
def generate_scatter_plot(X, y):
class_names = data.target_names
class_colors = ['blue','yellow','green']
fig = plt.figure(figsize=(12, 6)) # increase size of plot
for i, class_color in enumerate(class_colors):
# plot the points only of this class label
plt.scatter(X[y == i, 0], X[y == i, 1], c=class_color, label=class_names[i])
plt.xlabel(data.feature_names[0]) # label the axis
plt.ylabel(data.feature_names[1])
plt.legend(loc="best") # with legend
generate_scatter_plot(X_2, y)
```
We see that we can discriminate iris setosa linearly from the other two species. The separating line could even have a slope of about $1$. Let us substitute the first feature with the difference of the two features.
```python
import numpy as np
x_new = X[:, 0] - X[:, 1]
X_new = np.column_stack((x_new, X[:, 1]))
print(X_new.shape)
generate_scatter_plot(X_new, y)
plt.xlabel("sepal length - sepal width")
```
Remember that our main goal is to find a model,
$$ y_\theta: X \rightarrow Y $$
such that $y_\theta(x)$ models the knowledge we got from our training data plus the inductive bias. The plot gives the decision rule (or part of):
<center>"If sepal length - sepal width $\leq$ 2.2 $\rightarrow$ Classify iris setosa"</center>
<b>Exercise 1:</b>
Implement the naive decision rule as given above. If the condition for iris setosa is not fulfilled, classify the result as 'iris versicolor'.
```python
def naive_decision_rule(x):
    # x: a single 4-dimensional data point (one row of X)
    # returns the predicted class label (0 = setosa, 1 = versicolor, 2 = virginica)
# FILL IN
x_new = x[0] - x[1]
pred_class = 0 if (x_new <= 2.2) else 1
return pred_class
```
The following function takes a decision rule (or model) and a matrix of data points to generate the predictions for this matrix.
```python
def predict(model, X):
"""Builds prediction on a matrix X given a model for each data point in a row.
Returns a flat vector of predictions.
"""
return np.apply_along_axis(model, axis=1, arr=X)
y_pred = predict(naive_decision_rule, X)
print(y_pred[:50]) # print first 50 predictions
print(y_pred) # print all 150
```
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1]
The predictions for the first 50 entries should be zero, and one for all the others. Now we have to judge the quality of our model; we do this by using the zero-one loss function from the lecture.
<b>Exercise 2:</b>
Implement the zero-one loss function as defined in the lecture,
$$
\begin{align}
l(x_i, y_i; \theta) &= l_{0/1}(y_\theta(x_i), y_i) = \begin{cases} 0, & \mbox{ if } y_\theta(x_i) = y_i \\ 1, & \mbox{ otherwise } \end{cases} \\
l(X, y; \theta) &= \sum_i{ l(x_i, y_i; \theta). }
\end{align}
$$
In lay-man terms one counts how often the label predicted differed from the observed label.
```python
def zero_one_loss(y_pred, y_true):
    # FILL IN
    # count how often the predicted label differs from the observed label
    return int(np.sum(y_pred != y_true))
```
```python
print(f"The 0-1-loss of the naive decision rule is {zero_one_loss(y_pred, y,loss=0)} (should be 50)")
```
The 0-1-loss of the naive decision rule is 50 (should be 50)
<b>Exercise 3:</b>
Improve the decision rule so that it makes at most $10$ misclassifications. As an informal constraint, use "Occam's Razor" as an inductive bias, i.e. keep the rule as simple as possible.
<b>Discussion topic:</b> Why could a complex model with zero misclassifications perform worse in reality (when we go out and measure new flowers) than a simple model with more misclassifications?
```python
import numpy as np
x_new1 = X[:, 2] - X[:, 3]
X_new1 = np.column_stack((x_new1, X[:, 3]))
print(X_new1.shape)
generate_scatter_plot(X_new1, y)
plt.xlabel("P length - P width")
```
```python
# Place for your analysis.
def my_decision_rule(x):
    # uses 'petal length (cm)' and 'petal width (cm)'
    # ('petal width (cm)' is on the y-axis of the plot above)
    x_new = x[2] - x[3]
    if x_new <= 1.8:
        pred_class = 0
    elif x_new <= 3.6 and x[3] <= 1.6:
        pred_class = 1
    else:
        pred_class = 2
    return pred_class
```
```python
# Evaluation script
y_pred = predict(my_decision_rule, X)
print(y_pred)
loss = zero_one_loss(y_pred, y)
print(f"Your loss {loss}.")
if loss <= 10:
print("You have made it!")
else:
print("Uhm, try again. Maybe you have flipped some class?")
```
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1
1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
Your loss 4.
You have made it!
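Regarding the discussion topic: a quick way to probe generalisation is to evaluate a rule only on data it was not tuned on. A minimal sketch using a random train/test split (the split itself is an illustration, not part of the exercise):
```python
from sklearn.model_selection import train_test_split

# hold out a third of the flowers; a rule tuned on the training part should
# then be judged by its loss on the unseen test part
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)
y_test_pred = predict(my_decision_rule, X_test)
print(f"Test loss on unseen flowers: {zero_one_loss(y_test_pred, y_test)} of {len(y_test)}")
```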
```python
```
```python
import pyzx as zx
import sympy
from fractions import Fraction
```
```python
gamma = sympy.Symbol('gamma')
```
```python
g = zx.graph.GraphSym()
# pyzx stores spider phases in units of pi, so phase=1 means a phase of pi;
# the first Z-spider carries the symbolic phase gamma
v = g.add_vertex(zx.VertexType.Z, qubit=0, row=1, phase=gamma)
w = g.add_vertex(zx.VertexType.Z, qubit=1, row=1, phase=1)
x = g.add_vertex(zx.VertexType.Z, qubit=2, row=1, phase=1)
g.add_edge(g.edge(v, w), edgetype=zx.EdgeType.SIMPLE)
```
```python
zx.draw_matplotlib(g)
```
```python
g.phase(0)
```
```python
e = zx.editor.edit(g)
```
```python
gamma = sympy.Symbol('gamma')
```
```python
type(Fraction(numerator=2, denominator=1) * gamma)
```
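Multiplying a `fractions.Fraction` by a sympy symbol should hand control to sympy, which converts the `Fraction` into a `Rational`, so the product above is a sympy `Mul` rather than a plain `Fraction`. This matters when mixing symbolic phases with pyzx's `Fraction`-valued phases.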
```python
```
Create a reduced basis for a simple sinusoid model
```
%matplotlib inline
import numpy as np
from misc import *
import matplotlib.pyplot as plt
from lalapps import pulsarpputils as pppu
```
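The helper functions `dot_product`, `project_onto_basis`, `B_matrix` and `emp_interp` come from the local `misc` module, which is not shown here. Below is a minimal sketch of plausible implementations, inferred only from how they are called in this notebook; the real module may differ.
```
import numpy as np

def dot_product(dt, a, b):
    # discrete approximation to the inner product <a, b> of two time series
    return dt * np.vdot(a, b)

def project_onto_basis(dt, RB, TS, projections, proj_coefficients, iter):
    # add to each training waveform's projection its component along the newest basis vector
    for j in range(len(TS)):
        proj_coefficients[iter][j] = dot_product(dt, RB[iter], TS[j])
        projections[j] += proj_coefficients[iter][j] * RB[iter]
    return projections

def B_matrix(invV, e):
    # empirical interpolation functions B_j(t) = sum_i (V^{-1})_{ij} e_i(t)
    m = len(invV)
    return np.array([sum(invV[i][j] * e[i] for i in range(m)) for j in range(m)])

def emp_interp(B, func, indices):
    # interpolant(t) = sum_j func(t_j) B_j(t), built from the values at the nodes
    return np.dot(func[indices], B)
```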
Create the signal model:
\begin{equation}
h(t) = \frac{h_0}{2}\left[\frac{1}{2}F_+ (1+\cos{}^2\iota)\cos{\phi_0} + F_{\times}\cos{\iota} \sin{\phi_0 }\right].
\end{equation}
```
def signalmodel(t, h0, cosi, phi0, psi, FA, FB):
"""
    The real part of a heterodyned CW signal model. The signal frequency is assumed
    to have been removed exactly by the heterodyne, so only the initial phase and
    orientation parameters remain.
"""
ctwopsi = np.cos(2.*psi)
stwopsi = np.sin(2.*psi)
fps = FA*ctwopsi + FB*stwopsi
fcs = FB*ctwopsi - FA*stwopsi
phase = phi0
ht = 0.5*h0*(0.5*fps*(1.+cosi**2)*np.cos(phase) + fcs*cosi*np.sin(phase))
return ht
```
Initialise the model time series and other constant parameters, including antenna patterns that don't include the polarisation angle.
```
# a time series
t0 = 0
tend = 86400.*10
N = int((tend-t0)/60.)  # np.linspace needs an integer number of samples
ts = np.linspace(t0, tend, N)
dt = ts[1]-ts[0]
ra = 0. # right ascension at zero radians
dec = 0. # declination at zero radians
det = 'H1' # LIGO Hanford detector
FAs = np.zeros(len(ts))
FBs = np.zeros(len(ts))
# create antenna response (without polsaition angle)
for i, t in enumerate(ts):
FA, FB = pppu.antenna_response(t, ra, dec, 0.0, det)
FAs[i] = FA
FBs[i] = FB
```
Create a training set of 2000 waveforms, with the initial phase, polarisation angle and cosine of the inclination drawn uniformly at random from their natural ranges.
```
# number of training waveforms
TS_size = 2000
phi0s = np.random.rand(TS_size)*(2.*np.pi)
psis = np.random.rand(TS_size)*(np.pi/2.)-(np.pi/4.)
cosis = np.random.rand(TS_size)*2. - 1.
# allocate memory and create training set
TS = np.zeros(TS_size*len(ts)).reshape(TS_size, len(ts)) # store training space in TS_size X len(ts) array
h0 = 1.
for i in range(TS_size):
TS[i] = signalmodel(ts, h0, cosis[i], phi0s[i], psis[i], FAs, FBs)
# normalize
TS[i] /= np.sqrt(abs(dot_product(dt, TS[i], TS[i])))
```
Allocate memory for reduced basis vectors.
```
# Allocate storage for projection coefficients of training space waveforms onto the reduced basis elements
proj_coefficients = np.zeros(TS_size*TS_size).reshape(TS_size, TS_size)
# Allocate matrix to store the projection of training space waveforms onto the reduced basis
projections = np.zeros(TS_size*len(ts)).reshape(TS_size, len(ts))
rb_errors = []
#### Begin greedy: see Field et al. arXiv:1308.3565v2 ####
tolerance = 1e-12 # set maximum RB projection error
sigma = 1 # (2) of Algorithm 1. (projection error at 0th iteration)
rb_errors.append(sigma)
```
Run greedy algorithm for creating the reduced basis
```
RB_matrix = [TS[0]] # (3) of Algorithm 1. (seed greedy algorithm (arbitrary))
iter = 0
while sigma >= tolerance: # (5) of Algorithm 1.
# project the whole training set onto the reduced basis set
projections = project_onto_basis(dt, RB_matrix, TS, projections, proj_coefficients, iter)
residual = TS - projections
# Find projection errors
projection_errors = [dot_product(dt, residual[i], residual[i]) for i in range(len(residual))]
sigma = abs(max(projection_errors)) # (7) of Algorithm 1. (Find largest projection error)
# break out if sigma is less than tolerance, so another basis is not added to the set
# (this can be required is the waveform only requires a couple of basis vectors, and it
# stops a further basis containing large amounts of numerical noise being added)
if sigma < tolerance:
break
    print(sigma, iter)
index = np.argmax(projection_errors) # Find Training-space index of waveform with largest proj. error
rb_errors.append(sigma)
#Gram-Schmidt to get the next basis and normalize
next_basis = TS[index] - projections[index] # (9) of Algorithm 1. (Gram-Schmidt)
next_basis /= np.sqrt(abs(dot_product(dt, next_basis, next_basis))) #(10) of Alg 1. (normalize)
RB_matrix.append(next_basis) # (11) of Algorithm 1. (append reduced basis set)
iter += 1
```
0.999999972831 0
Check that this basis does give the expected residuals for a new set of random waveforms generated from the same parameter range.
```
#### Error check ####
TS_rand_size = 4000
TS_rand = np.zeros(TS_rand_size*len(ts)).reshape(TS_rand_size, len(ts)) # Allocate random training space
phi0s_rand = np.random.rand(TS_rand_size)*(2.*np.pi)
psis_rand = np.random.rand(TS_rand_size)*(np.pi/2.)-(np.pi/4.)
cosis_rand = np.random.rand(TS_rand_size)*2. - 1.
for i in range(TS_rand_size):
TS_rand[i] = signalmodel(ts, h0, cosis_rand[i], phi0s_rand[i], psis_rand[i], FAs, FBs)
# normalize
TS_rand[i] /= np.sqrt(abs(dot_product(dt, TS_rand[i], TS_rand[i])))
### find projection errors ###
iter = 0
proj_rand = np.zeros(len(ts))
proj_error = []
for h in TS_rand:
while iter < len(RB_matrix):
proj_coefficients_rand = dot_product(dt, RB_matrix[iter], h)
proj_rand += proj_coefficients_rand*RB_matrix[iter]
iter += 1
residual = h - proj_rand
projection_errors = abs(dot_product(dt, residual, residual))
proj_error.append(projection_errors)
proj_rand = np.zeros(len(ts))
iter = 0
plt.scatter(np.linspace(0, len(proj_error), len(proj_error)), np.log10(proj_error))
plt.ylabel('log10 projection error')
plt.show()
```
Now we will create the empirical interpolant and find the time-stamp 'nodes' at which it matches the full model.
```
# put the basis into a numpy array
e = np.array(RB_matrix)
indices = []
ts_nodes = []
V = np.zeros((len(e), len(e)))
```
```
from scipy.linalg import inv
# seed EIM algorithm
indices.append( int(np.argmax( np.abs(e[0]) )) ) # (2) of Algorithm 2
ts_nodes.append(ts[indices[0]]) # (3) of Algorithm 2 (store the node time itself, not an array)
for i in range(1, len(e)): #(4) of Algorithm 2
#build empirical interpolant for e_iter
for j in range(len(indices)): # Part of (5) of Algorithm 2: making V_{ij}
for k in range(len(indices)): # Part of (5) of Algorithm 2: making V_{ij}
V[k][j] = e[j][indices[k]] # Part of (5) of Algorithm 2: making V_{ij}
invV = inv(V[0:len(indices), 0:len(indices)]) # Part of (5) of Algorithm 2: making V_{ij}
B = B_matrix(invV, e) # Part of (5) of Algorithm 2: making B_j(f)
interpolant = emp_interp(B, e[i], indices) # Part of (5) of Algorithm 2: making the empirical interpolant of e
res = interpolant - e[i] # 6 of Algorithm 2
index = int(np.argmax(np.abs(res))) # 7 of Algorithm 2
print "ts_{%i} = %f"%(i, ts[index])
indices.append(index) # 8 of Algorithm 2
ts_nodes.append( ts[index] ) # 9 of Algorithm 2
# make B matrix with all the indices
for j in range(len(indices)):
for k in range(len(indices)):
V[k][j] = e[j][indices[k]]
invV = inv(V[0:len(indices), 0:len(indices)])
B = B_matrix(invV, e)
```
ts_{1} = 83585.804570
Compare the interpolant with the full signal model
```
h_for_comparison = signalmodel(ts, h0, cosis[0], phi0s[0], psis[0], FAs, FBs)
interpolant_for_comparison = np.inner(B.T, h_for_comparison[indices])
plt.plot(ts, h_for_comparison-interpolant_for_comparison, 'b')
plt.xlabel('time (s)')
plt.show()
```
```
print(len(ts_nodes))
print(len(ts))
```
2
14400
```
H_size = 2000
H = np.zeros(H_size*len(ts)).reshape(H_size, len(ts)) # Allocate random training space
phi0s_rand = np.random.rand(H_size)*(2.*np.pi)
psis_rand = np.random.rand(H_size)*(np.pi/2.)-(np.pi/4.)
cosis_rand = np.random.rand(H_size)*2. - 1.
# create set of test waveforms
for i in range(H_size):
H[i] = signalmodel(ts, h0, cosis_rand[i], phi0s_rand[i], psis_rand[i], FAs, FBs)  # use the newly drawn random parameters
# find errors between full waveform and interpolants
list_of_errors = []
for i in range(H_size):
interpolant = np.inner(B.T, H[i][indices])
interpolant /= np.sqrt(np.vdot(interpolant, interpolant)) #normalize
H[i] /= np.sqrt(np.vdot(H[i], H[i]) ) #normalize
error = abs(np.vdot(H[i] - interpolant, H[i] - interpolant ))
list_of_errors.append(error)
print(error)
plt.scatter(np.linspace(0, H_size, H_size), np.log10(list_of_errors))
plt.ylabel('log10 interpolation error')
plt.show()
```
Now let's find the weights for the "reduced order quadrature", which are calculated as
\begin{equation}
w_j = \sum_{i=1}^N d_i B_{j,i}
\end{equation}
where $N$ is the full number of time steps, and $B$ is the matrix produced for the empirical interpolant.
First we'll just create some fake data consisting of Gaussian noise.
```
data = np.random.randn(len(ts))
# create weights
w = np.inner(B, data.T)
```
Now, compare calculating $\sum_{i=1}^N d_i h_i$ with the full set of time stamps against using the interpolant $\sum_{i=1}^M w_i h(T_i)$, where in this case $h$ is only calculated at the interpolant nodes.
```
d_dot_h = np.vdot(data, signalmodel(ts, 1., cosis_rand[12], phi0s_rand[65], psis_rand[101], FAs, FBs))
FA_nodes = FAs[indices]
FB_nodes = FBs[indices]
hsig = signalmodel(ts_nodes, 1., cosis_rand[12], phi0s_rand[65], psis_rand[101], FA_nodes, FB_nodes)
ROQ = np.dot(w, hsig)
print "regular inner product = %.15e"%d_dot_h
print "ROQ = %.15e"%ROQ
```
regular inner product = -8.576471621009784e+00
ROQ = -8.576471621009867e+00
Now test the speed-up
```
import time
t1 = time.time()
for i in range(50000):
np.dot(data, signalmodel(ts, 1., cosis_rand[0], phi0s_rand[0], psis_rand[0], FAs, FBs)) # regular inner product
e1 = time.time()
t2 = time.time()
for i in range(50000):
np.dot(w, signalmodel(ts_nodes, 1., cosis_rand[0], phi0s_rand[0], psis_rand[0], FA_nodes, FB_nodes)) # ROQ inner product
e2 = time.time()
print "regular inner product took %f s"%((e1-t1)/50000.)
print "ROQ took %f s"%((e2-t2)/50000.)
print "speedup = %f"%((e1-t1) / (e2-t2))
```
regular inner product took 0.000124 s
ROQ took 0.000018 s
speedup = 6.927863
Now we want to see if we can do the same thing to compute the inner product of the model $\sum_{i=1}^N h_ih_i$. For this we need to use our interpolant
\begin{equation}
\mathcal{I}[h](t) = \sum_{j=1}^M B_j(t)h(T_j)
\end{equation}
where $M$ is the number of nodes in the interpolant time stamps $T$, and $B$ is the matrix produced for the interpolant. Given this we have
\begin{equation}
\langle\, h | h\,\rangle \approx \langle\, \mathcal{I}[h] | \mathcal{I}[h]\,\rangle = \sum_{i=1}^N \left( \sum_{j=1}^M B_j(t_i)h(T_j)\right)^2.
\end{equation}
We want to try and separate out the longer sum over $N$, so that it can be performed in pre-processing. This part can be separated out as an array with components given by $\bar{B}_{mn} = \sum_{i=1}^N B_m(t_i) B_n(t_i)$, where $m$ and $n$ run from 1 to $M$.
Then to get the "reduced order quadrature" we need to do
\begin{equation}
\langle\, \mathcal{I}[h] | \mathcal{I}[h]\,\rangle = \vec{H} \bar{B} \vec{H}^{T},
\end{equation}
where $\vec{H} = [h(T_1), h(T_2), \ldots]$. Note that this is an order $M^2$ operation, which is quick for $M^2 \ll N$, but otherwise does not save computational time.
```
# create new weights
w2 = np.zeros((B.shape[0], B.shape[0]))
for i in range(B.shape[0]):
for j in range(B.shape[0]):
w2[i,j] = np.sum(B[i]*B[j])
```
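Since the double loop above simply forms all pairwise inner products of the rows of `B`, the same weight matrix can be built in a single vectorized call (an equivalent sketch, assuming `B` is a 2D NumPy array):
```
# vectorized construction of the same weight matrix
w2 = np.dot(B, B.T)
```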
```
sigfull = signalmodel(ts, 1., cosis_rand[0], phi0s_rand[0], psis_rand[0], FAs, FBs)
sigred = signalmodel(ts_nodes, 1., cosis_rand[0], phi0s_rand[0], psis_rand[0], FA_nodes, FB_nodes)
h_dot_h = np.vdot(sigfull, sigfull)
ROQh = np.dot(np.dot(sigred, w2), sigred)
print "regular inner product = %.15e"%h_dot_h
print "ROQ = %.15e"%ROQh
```
regular inner product = 4.557619458915702e+02
ROQ = 4.557619458915702e+02
```
t1 = time.time()
for i in range(50000):
sigfullnew = signalmodel(ts, 1., cosis_rand[0], phi0s_rand[0], psis_rand[0], FAs, FBs)
np.dot(sigfullnew, sigfullnew) # regular inner product
e1 = time.time()
t2 = time.time()
for i in range(50000):
sigrednew = signalmodel(ts_nodes, 1., cosis_rand[0], phi0s_rand[0], psis_rand[0], FA_nodes, FB_nodes)
np.dot(np.dot(sigrednew, w2), sigrednew) # ROQ inner product
e2 = time.time()
print "regular inner product took %f s"%((e1-t1)/50000.)
print "ROQ took %f s"%((e2-t2)/50000.)
print "speedup = %f"%((e1-t1) / (e2-t2))
```
regular inner product took 0.000124 s
ROQ took 0.000018 s
speedup = 6.796928
The other way to get this inner product is to analytically integrate the model using the known form of the antenna pattern (e.g. equations 10-13 of [Jaranowski, Krolak & Schutz (1998)](http://adsabs.harvard.edu/abs/1998PhRvD..58f3001J)), such that
\begin{align}
\langle\,h(t)|h(t)\,\rangle =& \frac{1}{\Delta t}\int_{t_1}^{t_2} h^2 {\rm d}t \\
=& \frac{h_0^2}{2^2\Delta t}\Bigg[ \left(\frac{(1+\cos{}^2\iota)}{2}\cos{\phi_0}\right)^2 \left(\int_{t_1}^{t_2} F_+^2 {\rm d}t\right) + \left(\cos{\iota}\sin{\phi_0}\right)^2\left(\int_{t_1}^{t_2} F_{\times}^2 {\rm d}t\right) +\\
& 2\frac{(1+\cos{}^2\iota)}{2}\cos{\iota}\cos{\phi_0}\sin{\phi_0}\left(\int_{t_1}^{t_2} F_+ F_{\times} {\rm d}t\right) \Bigg].
\end{align}
These integrals would be cheap to compute once at the start of the run. Things would become more complex if the model had a varying phase evolution due to searching over frequency parameters; in that case it might be possible to use the stationary phase approximation.
_Note_: write down the above integrals in full. Also note that the above is all just for the real part of a heterodyned signal. The imaginary part is given by
\begin{equation}
h(t) = \frac{h_0}{2}\left[\frac{1}{2}F_+ (1+\cos{}^2\iota)\sin{\phi_0} - F_{\times}\cos{\iota} \cos{\phi_0 }\right].
\end{equation}
I should redo all the above, but just getting the reduced basis, interpolant and ROQ for the antenna patterns (i.e. over $\psi$ [or in fact not even over that as it just gives pre-factors, so really it's just the two $a$ and $b$ functions of the antenna pattern that need orthogonalising!]), as the other things are all just pre-factors that are very quickly computed.
| cf113704f31a6e081450678be604e7f6a67ad44d | 245,425 | ipynb | Jupyter Notebook | ROQ/Reduced basis for CW signal model (no frequency range).ipynb | mattpitkin/random_scripts | 8fcfc1d25d8ca7ef66778b7b30be564962e3add3 | [
"MIT"
] | null | null | null | ROQ/Reduced basis for CW signal model (no frequency range).ipynb | mattpitkin/random_scripts | 8fcfc1d25d8ca7ef66778b7b30be564962e3add3 | [
"MIT"
] | null | null | null | ROQ/Reduced basis for CW signal model (no frequency range).ipynb | mattpitkin/random_scripts | 8fcfc1d25d8ca7ef66778b7b30be564962e3add3 | [
"MIT"
] | null | null | null | 88.633081 | 72,003 | 0.800526 | true | 4,438 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.771844 | 0.666403 | __label__eng_Latn | 0.870399 | 0.386609 |
EE 502 P: Analytical Methods for Electrical Engineering
# Homework 1: Python Setup
## Due October 10, 2021 by 11:59 PM
### <span style="color: red">Mayank Kumar</span>
Copyright © 2021, University of Washington
<hr>
**Instructions**: Please use this notebook as a template. Answer all questions using well formatted Markdown with embedded LaTeX equations, executable Jupyter cells, or both. Submit your homework solutions as an `.ipynb` file via Canvas.
<span style="color: red'">
Although you may discuss the homework with others, you must turn in your own, original work.
</span>
**Things to remember:**
- Use complete sentences. Equations should appear in text as grammatical elements.
- Comment your code.
- Label your axes. Title your plots. Use legends where appropriate.
- Before submitting a notebook, choose Kernel -> Restart and Run All to make sure your notebook runs when the cells are evaluated in order.
Note: Late homework will be accepted up to one week after the due date and will be worth 50% of its full credit score.
### 0. Warmup (Do not turn in)
- Get Jupyter running on your computer, or learn to use Google Colab's Jupyter environment.
- Make sure you can click through the Lecture 1 notes on Python. Try changing some of the cells to see the effects.
- If you haven't done any Python, follow one of the links in Lecture 1 to a tutorial and work through it.
- If you haven't done any Numpy or Sympy, read through the linked documentation and tutorials for those too.
### 1. Complex Numbers
Write a function `rand_complex(n)` that returns a list of `n` random complex numbers uniformly distributed in the unit circle (i.e., the magnitudes of the numbers are all between 0 and 1). Give the function a docstring. Demonstrate the function by making a list of 25 complex numbers.
```python
def rand_complex(n):
    """
    n : number of complex numbers to be generated.
    The function imports the random and numpy libraries when called.
    It returns a list of n complex numbers lying inside the unit circle.
    """
    import random  # import the "random" library to generate random numbers
    import numpy as np  # import numpy to evaluate the square root
    a = []  # declare an empty list
    for i in range(n):
        x = random.uniform(-1, 1)  # generate a random real part between -1 and 1
        # generate a random imaginary part between -max and max, where max is the
        # largest |y| for which the point x + yj still lies inside the unit circle
        y = random.uniform(-(np.sqrt(1-(x*x))), np.sqrt(1-(x*x)))
        a.append(complex(x, y))  # append the complex number x + yj to the list a
        # print("Complex %d: %f + %fj" % (i+1, x, y))
        # print("Value %d: %f" % ((i+1), np.sqrt(x*x+y*y)))
    return a

# function call for 25 complex numbers
rand_complex(25)
```
[(-0.23291727633158343-0.03223906965648793j),
(0.1075264193171328-0.6287554693178661j),
(-0.9648352685957642-0.09836768328893047j),
(0.9729332666351493-0.07563889894437151j),
(-0.31264579365485834-0.6952447113453855j),
(-0.7543046289338111-0.2007322497775722j),
(0.35390768875343515+0.24066078970197824j),
(0.40023028389808935-0.4322478564776867j),
(0.7575837355779895-0.42474223841029735j),
(-0.017343727286792454-0.9318922926373809j),
(0.9893226815073348-0.07429562306514148j),
(-0.7557002838583444+0.5007781393244392j),
(0.14229428659162435-0.05580365383869268j),
(0.34332631754502363+0.7610185062340256j),
(-0.6663127004466431-0.43778123350460907j),
(0.698490378322481+0.33122161883243073j),
(0.3875019475298742+0.5520329887602473j),
(-0.7285191732411849+0.20066390378294496j),
(-0.6876991691513978-0.10572637939038865j),
(0.2895621483951627-0.14222573877072409j),
(0.559677483878626+0.18654677359454308j),
(0.8954820520485438-0.2273070618668761j),
(0.4062147904057407+0.3113418259043812j),
(0.5540089171896212-0.42022748042357205j),
(-0.8270129193619533-0.27250082404388287j)]
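Note that sampling `x` uniformly and then `y` uniformly on the remaining chord covers the disc, but does not give a uniform *area* density. For reference, a standard area-uniform alternative (a minimal sketch) draws the radius as the square root of a uniform variate:
```python
import random
import numpy as np

def rand_complex_uniform(n):
    """Return n complex numbers uniformly distributed by area in the unit disc."""
    out = []
    for _ in range(n):
        r = np.sqrt(random.uniform(0, 1))   # sqrt makes the area density uniform
        theta = random.uniform(0, 2*np.pi)  # uniform angle
        out.append(complex(r*np.cos(theta), r*np.sin(theta)))
    return out
```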
### 2. Hashes
Write a function `to_hash(L) `that takes a list of complex numbers `L` and returns an array of hashes of equal length, where each hash is of the form `{ "re": a, "im": b }`. Give the function a docstring and test it by converting a list of 25 numbers generated by your `rand_complex` function.
```python
def to_hash(L):
    """
    L: a list of complex numbers.
    Given the list as input, the function returns a list of hashes of
    equal length, each of the form {"re": a, "im": b}.
    """
    b = []  # declare an empty list
    for i in range(len(L)):
        x1 = L[i].real  # extract the real part of the i-th value of the list L
        y1 = L[i].imag  # extract the imaginary part of the i-th value of the list L
        b.append({"re": x1, "im": y1})  # append the values in hash form
    return b

# function call using the function from Question 1
to_hash(rand_complex(25))
# extra lines of code to verify the outcome:
# d = to_hash(rand_complex(25))
# print(d[1])
# d[1]["re"]
```
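For completeness, a hash entry can be converted back into a Python complex number (a small usage sketch):
```python
hashes = to_hash(rand_complex(3))
z = complex(hashes[0]["re"], hashes[0]["im"])  # round-trip back to a complex number
print(z)
```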
### 3. Matrices
Write a function `lower_triangular(n)` that returns an $n \times n$ numpy matrix with zeros above the main diagonal, and ones on and below it. For example, `lower_triangular(3)` would return
```python
array([[1, 0, 0],
[1, 1, 0],
[1, 1, 1]])
```
```python
import numpy as np  # np is used inside the function to generate the array

# function definition starts here
def lower_triangular(n):
    """
    Takes an integer n as input and returns the lower triangular matrix
    of dimension n x n, with ones on and below the diagonal.
    Import "numpy as np" before using this function.
    """
    c = np.ones(n*n)     # create an array of n*n elements with value 1
    c = c.reshape(n, n)  # reshape the array into an n x n matrix
    for i in range(n):
        for j in range(n):
            if j > i:        # elements strictly above the diagonal
                c[i][j] = 0  # set the upper-triangle elements to 0
    return c

print("lower triangular matrix :")
lower_triangular(3)
```
lower triangular matrix :
array([[1., 0., 0.],
[1., 1., 0.],
[1., 1., 1.]])
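For reference, NumPy can build the same matrix in a single call (an equivalent one-liner):
```python
np.tril(np.ones((3, 3)))  # same lower triangular matrix of ones
```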
### 4. Numpy
Write a function `convolve(M,K)` that takes an $n \times m$ matrix $M$ and a $3 \times 3$ matrix $K$ (called the kernel) and returns their convolution as in [this diagram](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTYo2_VuAlQhfeEGJHva3WUlnSJLeE0ApYyjw&usqp=CAU).
Please do not use any predefined convolution functions from numpy or scipy. Write your own. If the matrix $M$ is too small, your function should raise an exception.
You can read more about convolution in [this post](https://setosa.io/ev/image-kernels/).
The matrix returned will have two fewer rows and two fewer columns than $M$. Test your function by making a $100 \times 100$ matrix of zeros and ones that, as an image, looks like the letter X, and convolve it with the kernel
$$
K = \frac{1}{16} \begin{pmatrix}
1 & 2 & 1 \\
2 & 4 & 2 \\
1 & 2 & 1
\end{pmatrix}
$$
Use `imshow` to display both images using subplots.
```python
# import numpy and matplotlib libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#define kernel
K = np.array ([[1,2,1],
[2,4,2],
[1,2,1]])
K = K/16 # divide all elements of matrix by 16.
#create image for testing as per the question
m = 100
n = 100
im = np.ones(m*n) #created an array of length mn.
im = im.reshape(m,n) #reshaped the array in matrix of dimension mxn.
#creating image in form X.
for i in range (m):
for j in range (n):
if ((i==j) or (i+j == n-1)): #select all the elements where manipulation is required.
im[i][j] = 0 #replace 1 with 0 at all the locations selected above.
fig = plt.figure(figsize = (8,8)) #define figure size
fig.add_subplot(1,2,1) #add a subplot to the figure
plt.imshow(im) #show input image which was created earlier.
plt.axis('off') #axis are turned off
plt.title("Original") #adding title to the image
# function definition starts here
def convolve(M, K):
    """
    Takes two inputs:
    M: the original image on which the convolution is performed.
    K: the kernel applied to the input matrix.
    Returns a matrix of dimension (m-2) x (n-2).
    """
    im_out = np.zeros(M.shape[0] * M.shape[1]).reshape(M.shape[0], M.shape[1])  # declare a matrix with the same dimensions as M
    if (M.shape[0] < 3 or M.shape[1] < 3):  # raise an exception if either dimension is < 3
        raise Exception("Convolution of matrix M can't be calculated. Check input matrix for dimension.")
    else:
        for i in range(1, M.shape[0]-1):
            for j in range(1, M.shape[1]-1):
                im_out[i][j] = np.sum(K * M[(i-1):(i+2), (j-1):(j+2)])  # sum of the elementwise product of the kernel and the current image patch
                if (im_out[i][j]) > 255:  # clip pixels that exceed 255
                    im_out[i][j] = 255
                elif (im_out[i][j] < 0):  # clip pixels below 0
                    im_out[i][j] = 0
        im_out = im_out[1:(M.shape[0]-1), 1:(M.shape[1]-1)]  # drop the border pixels where convolution was not possible
    return im_out
out_image = convolve(im,K) #function call, output is saved for further usage
#print(len(out_image[0]))
fig.add_subplot(1,2,2) #add a subplot to the figure
plt.imshow(out_image) #Show the output image to the location
plt.axis('off') #axis is turned off as it is not required
plt.title("Output") #adding title to the output image
```
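As a cross-check (a sketch, assuming `scipy` is installed), the result can be compared against `scipy.signal.convolve2d`. Note that `convolve2d` flips the kernel before sliding it, which makes no difference here because this $K$ is symmetric:
```python
from scipy.signal import convolve2d

ref = convolve2d(im, K, mode='valid')  # 'valid' drops the border, like our function
print(np.allclose(out_image, ref))     # True for a symmetric kernel such as K
```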
### 5. Symbolic Manipulation
Use sympy to specify and solve the following equations for $x$.
- $x^2 + 2x - 1 = 0$
- $a x^2 + bx + c = 0$
Also, evaluate the following integrals using sympy
- $\int x^2 dx$
- $\int x e^{6x} dx$
- $\int (3t+5)\cos(\frac{t}{4}) dt$
```python
#Importing related libraries
import math
from sympy import *
init_printing(use_latex='mathjax')
#solving First equation i.e., x^2 + 2x - 1 = 0
x = symbols("x")
expr_1 = (x**2) + (2*x) - 1
result_1 = solve(expr_1,x)
print("solution of equation x^2 + 2x - 1 = 0 is :")
result_1
```
```python
#solving second equation i.e., ax^2 + bx + c = 0
a,b,c = symbols("a b c")
x = symbols("x")
expr_2 = (a*(x**2)) + b*x + c
result_2 = solve(expr_2,x)
print("solution of equation ax^2 + bx + c = 0 is : " )
result_2
```
solution of equation ax^2 + bx + c = 0 is :
$\displaystyle \left[ \frac{- b + \sqrt{- 4 a c + b^{2}}}{2 a}, \ - \frac{b + \sqrt{- 4 a c + b^{2}}}{2 a}\right]$
```python
#evaluating integral 1
x = symbols("x")
expr_3 = x**2
integrate(expr_3,x)
```
```python
#evaluating integral 2
x = symbols("x")
expr_4 = x * exp(6*x)
integrate(expr_4)
```
```python
#evaluating integral 3
t = symbols("t")
expr_5 = ((3*t) + 5) * cos(t/4)
integrate(expr_5)
```
### 6. Typesetting
Use LaTeX to typeset the following equations.
## 17 Equations that changed the world
### by Ian Stewart
#### Typesetting starts Now
---
**1. Pythagoras' theorem:**
\begin{align}
a^2 + b^2 = c^2
\end{align}
___
**2. Logarithms:**
\begin{align}
\log xy = \log x +\log y
\end{align}
___
**3. Calculus:**
\begin{align}
\frac{df}{dt} = \lim_{h \rightarrow 0} \frac{f(t+h) - f(t)}{h}
\end{align}
I have modified the formula above from the printed version, as it was not conveying the intended meaning: the denominator of the difference quotient should be $h$.
___
**4. Law of Gravity:**
\begin{align}
F = G\frac{m_1 m_2}{r^2}
\end{align}
___
**5. Square root of minus one**
\begin{align}
i^2 = -1
\end{align}
___
**6. The Euler's formula for polyhedra**
\begin{align}
V - E + F = 2
\end{align}
___
**7. Normal Distribution**
\begin{align}
\phi(x) = \frac{1}{\rho\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\rho^2}}
\end{align}
___
**8. Wave Equation**
\begin{align}
\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}
\end{align}
___
**9. Fourier Transform**
\begin{align}
\hat{f}(\omega) = \int_{-\infty}^{\infty} f(x)e^{-2\pi i x \omega}\, dx
\end{align}
___
**10. Navier-Stokes Equation**
\begin{align}
\rho\left(\frac{\partial v}{\partial t} + v\cdot\nabla v \right) = - \nabla p + \nabla\cdot T + f
\end{align}
___
**11. Maxwell's Equation**
\begin{align}
\nabla \cdot E = 0 \hspace{50 pt}\nabla \cdot H = 0 \\
\nabla \times E = -\frac{1}{c} \frac{\partial H}{\partial t} \hspace{30 pt}\nabla \times H = \frac{1}{c} \frac{\partial E}{\partial t}
\end{align}
___
**12. Second Law of Thermodynamics**
\begin{align}
dS \geq 0
\end{align}
___
**13. Relativity**
\begin{align}
E = mc^2
\end{align}
___
**14. Schrodinger's Equation**
\begin{align}
i\hbar \frac{\partial}{\partial t} \Psi = H \Psi
\end{align}
___
**15. Information Theory**
\begin{align}
H = -\sum p(x)\log p(x)
\end{align}
___
**16. Chaos Theory**
\begin{align}
x_{t + 1} = k x_t(1 - x_t)
\end{align}
___
**17.Black-Scholes Equation**
\begin{align}
\frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} + \frac{\partial V}{\partial t} - r V = 0
\end{align}
___
```python
```
| c5ec5ce4038952500bb6fd46a3be752742e5f055 | 59,648 | ipynb | Jupyter Notebook | Basics/HW_01_Python_EEP502_Mayank Kumar.ipynb | krmayankb/Analytical_Methods | a1bf58e1b9056949f5aa0fb25070e2d0ffbf5c4f | [
"MIT"
] | null | null | null | Basics/HW_01_Python_EEP502_Mayank Kumar.ipynb | krmayankb/Analytical_Methods | a1bf58e1b9056949f5aa0fb25070e2d0ffbf5c4f | [
"MIT"
] | null | null | null | Basics/HW_01_Python_EEP502_Mayank Kumar.ipynb | krmayankb/Analytical_Methods | a1bf58e1b9056949f5aa0fb25070e2d0ffbf5c4f | [
"MIT"
] | null | null | null | 95.743178 | 38,892 | 0.810002 | true | 4,156 | Qwen/Qwen-72B | 1. YES
2. YES | 0.785309 | 0.798187 | 0.626823 | __label__eng_Latn | 0.94997 | 0.29465 |
# Optimal portfolio selection II
So far, we have established that:
- The LAC (capital allocation line) describes the possible risk-return choices between a risk-free asset and a risky asset.
- Its slope equals the Sharpe ratio of the risky asset.
- The optimal capital allocation for any investor is the tangency point of the investor's indifference curve with the LAC.
For all of the above, we assumed that we already had the optimal (risky) portfolio.
In the previous class we learned how to find this optimal portfolio when the set of risky assets consists of only two assets:
$$w_{1,EMV}=\frac{(E[r_1]-r_f)\sigma_2^2-(E[r_2]-r_f)\sigma_{12}}{(E[r_2]-r_f)\sigma_1^2+(E[r_1]-r_f)\sigma_2^2-((E[r_1]-r_f)+(E[r_2]-r_f))\sigma_{12}}.$$
- However, the complexity of the problem grows considerably with the number of variables, and the analytical solution stops being viable once we recall that a well-diversified portfolio holds roughly 50-60 assets.
- In those cases, the problem is solved with numerical optimization routines, since these are a viable solution that scales to more variables.
**Objectives:**
- What is the optimal portfolio of risky assets when we have more than two assets?
- How do we build the minimum-variance frontier when we have more than two assets?
*Reference:*
- Lecture notes from the course "Portfolio Selection and Risk Management", Rice University, available on Coursera.
___
## 1. Maximizing the Sharpe ratio
### What happens if we have more than two risky assets?
It is actually very similar to what we had with two assets.
- For two assets, building the minimum-variance frontier is trivial: all possible combinations.
- With more than two assets, recall the definition: the minimum-variance frontier is the locus of portfolios that provide the minimum risk for a given level of return.
<font color=blue> See on the board.</font>
Analytically:
- $n$ assets,
- characterized by $(\sigma_i,E[r_i])$,
- each with weight $w_i$, with $i=1,2,\dots,n$.
Then, we look for the weights such that
\begin{align}
\min_{w_1,\dots,w_n} & \quad \sum_{i=1}^{n}w_i^2\sigma_i^2+\sum_{i=1}^{n}\sum_{j=1,j\neq i}^{n}w_iw_j\sigma_{ij}\\
\text{s.t.} & \quad \sum_{i=1}^{n}w_i=1, w_i\geq0\\
& \quad \sum_{i=1}^{n}w_iE[r_i]=\bar{\mu},
\end{align}
where $\bar{\mu}$ is a target return level.
**Obviously, we would have to solve this problem for many target return levels.**
- <font color=blue> Explain the relation to the graph.</font>
- <font color=green> Recall class 10.</font>
The above can be written in vector form as:
\begin{align}
\min_{\boldsymbol{w}} & \quad \boldsymbol{w}^T\Sigma\boldsymbol{w}\\
\text{s.t.} & \quad \boldsymbol{1}^T\boldsymbol{w}=1, \boldsymbol{w}\geq0\\
& \quad E[\boldsymbol{r}]^T\boldsymbol{w}=\bar{\mu},
\end{align}
where:
- $\boldsymbol{w}=\left[w_1,\dots,w_n\right]^T$ is the vector of weights,
- $\boldsymbol{1}=\left[1,\dots,1\right]^T$ is a vector of ones,
- $E[\boldsymbol{r}]=\left[E[r_1],\dots,E[r_n]\right]^T$ is the vector of expected returns, and
- $\Sigma=\left[\begin{array}{cccc}\sigma_{1}^2 & \sigma_{12} & \dots & \sigma_{1n} \\
\sigma_{21} & \sigma_{2}^2 & \dots & \sigma_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{n1} & \sigma_{n2} & \dots & \sigma_{n}^2\end{array}\right]$ is the variance-covariance matrix.
**This last form is the one we commonly use when programming, since it is efficient and scales to problems with many variables.**
- Observar que el problema puede volverse muy pesado a medida que incrementamos el número de activos en nuestro portafolio...
- Una tarea bastante compleja.
### Sucede que, en realidad, sólo necesitamos conocer dos portafolios que estén sobre la *frontera de mínima varianza*.
- Si logramos encontrar dos portafolios sobre la frontera, entonces podemos a la vez encontrar todas las posibles combinaciones de estos dos portafolios para trazar la frontera de mínima varianza.
- Ver el caso de dos activos.
### ¿Qué portafolios usar?
Hasta ahora, hemos estudiando profundamente como hallar dos portafolios muy importantes que de hecho yacen sobre la frontera de mínima varianza:
1. Portafolio de EMV: máximo SR.
2. Portafolio de mínima varianza: básicamente, el mismo problema anterior, sin la restricción de rendimiento objetivo.
Luego, tomar todas las posibles combinaciones de dichos portafolios usando las fórmulas para dos activos de medias y varianzas:
- w: peso para el portafolio EMV,
- 1-w: peso para le portafolio de mínima varianza.
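For reference, the expected return and variance of a combination of these two portfolios follow the usual two-asset formulas (this is exactly what the frontier code below computes):
\begin{align}
E[r_p] &= w\,E[r_{EMV}] + (1-w)\,E[r_{MV}],\\
\sigma_p^2 &= w^2\sigma_{EMV}^2 + (1-w)^2\sigma_{MV}^2 + 2w(1-w)\sigma_{EMV,MV}.
\end{align}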
## 2. Illustrative example
We return to the example of the stock markets of the $G5$ member countries: US, UK, France, Germany, and Japan.
```python
# Import pandas and numpy
import pandas as pd
import numpy as np
```
```python
# Annual summary of expected returns and volatilities
annual_ret_summ = pd.DataFrame(columns=['EU', 'RU', 'Francia', 'Alemania', 'Japon'], index=['Media', 'Volatilidad'])
annual_ret_summ.loc['Media'] = np.array([0.1355, 0.1589, 0.1519, 0.1435, 0.1497])
annual_ret_summ.loc['Volatilidad'] = np.array([0.1535, 0.2430, 0.2324, 0.2038, 0.2298])
annual_ret_summ.round(4)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>EU</th>
<th>RU</th>
<th>Francia</th>
<th>Alemania</th>
<th>Japon</th>
</tr>
</thead>
<tbody>
<tr>
<th>Media</th>
<td>0.1355</td>
<td>0.1589</td>
<td>0.1519</td>
<td>0.1435</td>
<td>0.1497</td>
</tr>
<tr>
<th>Volatilidad</th>
<td>0.1535</td>
<td>0.243</td>
<td>0.2324</td>
<td>0.2038</td>
<td>0.2298</td>
</tr>
</tbody>
</table>
</div>
```python
# Correlation matrix
corr = pd.DataFrame(data= np.array([[1.0000, 0.5003, 0.4398, 0.3681, 0.2663],
[0.5003, 1.0000, 0.5420, 0.4265, 0.3581],
[0.4398, 0.5420, 1.0000, 0.6032, 0.3923],
[0.3681, 0.4265, 0.6032, 1.0000, 0.3663],
[0.2663, 0.3581, 0.3923, 0.3663, 1.0000]]),
columns=annual_ret_summ.columns, index=annual_ret_summ.columns)
corr.round(4)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>EU</th>
<th>RU</th>
<th>Francia</th>
<th>Alemania</th>
<th>Japon</th>
</tr>
</thead>
<tbody>
<tr>
<th>EU</th>
<td>1.0000</td>
<td>0.5003</td>
<td>0.4398</td>
<td>0.3681</td>
<td>0.2663</td>
</tr>
<tr>
<th>RU</th>
<td>0.5003</td>
<td>1.0000</td>
<td>0.5420</td>
<td>0.4265</td>
<td>0.3581</td>
</tr>
<tr>
<th>Francia</th>
<td>0.4398</td>
<td>0.5420</td>
<td>1.0000</td>
<td>0.6032</td>
<td>0.3923</td>
</tr>
<tr>
<th>Alemania</th>
<td>0.3681</td>
<td>0.4265</td>
<td>0.6032</td>
<td>1.0000</td>
<td>0.3663</td>
</tr>
<tr>
<th>Japon</th>
<td>0.2663</td>
<td>0.3581</td>
<td>0.3923</td>
<td>0.3663</td>
<td>1.0000</td>
</tr>
</tbody>
</table>
</div>
```python
# Risk-free rate
rf = 0.05
```
This time, we will assume that all the stock markets and the risk-free asset are available to us.
#### 1. Build the minimum-variance frontier
##### 1.1. Find the minimum-variance portfolio
```python
# Import the minimize function from scipy's optimize module
from scipy.optimize import minimize
```
```python
## Parameter construction
# 1. Sigma: variance-covariance matrix Sigma = S.dot(corr).dot(S)
S = np.diag(annual_ret_summ.loc['Volatilidad'])
Sigma = S.dot(corr).dot(S).astype(float)
# 2. Eind: expected returns of the individual assets
Eind = annual_ret_summ.loc['Media'].values.astype(float)
```
```python
# Objective function: portfolio variance
def varianza(w, Sigma):
return w.T.dot(Sigma).dot(w)
```
___
### Lambda functions, explained
```python
def f(x):
return x**2
```
```python
f(1), f(10), f(13)
```
(1, 100, 169)
```python
g = lambda x: x**2
```
```python
g(1), g(10), g(13)
```
(1, 100, 169)
___
```python
# Number of assets
n = len(Eind)
# Initial guess
w0 = np.ones(n) / n
# Variable bounds
bnds = ((0, 1),) * n
# Constraints
cons = {"type": "eq", "fun": lambda w: w.sum() - 1}
```
```python
# Minimum-variance portfolio
minvar = minimize(fun=varianza,
x0=w0,
args=(Sigma,),
bounds=bnds,
constraints=cons,
tol=1e-10)
```
```python
minvar
```
fun: 0.018616443574850344
jac: array([0.03723289, 0.03879431, 0.03847277, 0.03723289, 0.03723289])
message: 'Optimization terminated successfully.'
nfev: 70
nit: 10
njev: 10
status: 0
success: True
x: array([6.20463011e-01, 8.67361738e-19, 5.42101086e-19, 2.03475313e-01,
1.76061676e-01])
```python
# Weights, return, risk, and Sharpe ratio of the minimum-variance portfolio
w_minvar = minvar.x
e_minvar = Eind.dot(w_minvar)
s_minvar = (w_minvar.T.dot(Sigma).dot(w_minvar))**0.5
rs_minvar = (e_minvar - rf) / s_minvar
e_minvar, s_minvar, rs_minvar
```
(0.1396278783022745, 0.13644208872210342, 0.6568931855391253)
##### 1.2. Find the EMV (maximum Sharpe ratio) portfolio
```python
# Objective function: negative Sharpe ratio
def menos_rs(w, Eind, rf, Sigma):
ep = Eind.dot(w)
sp = (w.T.dot(Sigma).dot(w))**0.5
rs = (ep - rf) / sp
return -rs
```
```python
# Number of assets
n = len(Eind)
# Initial guess
w0 = np.ones(n) / n
# Variable bounds
bnds = ((0, 1),) * n
# Constraints
cons = {"type": "eq", "fun": lambda w: w.sum() - 1}
```
```python
# EMV portfolio
emv = minimize(fun=menos_rs,
x0=w0,
args=(Eind, rf, Sigma),
bounds=bnds,
constraints=cons,
tol=1e-10)
```
```python
emv
```
fun: -0.6644375126752685
jac: array([-0.36087245, -0.36087371, -0.36087482, -0.36087404, -0.36087622])
message: 'Optimization terminated successfully.'
nfev: 56
nit: 8
njev: 8
status: 0
success: True
x: array([0.50729071, 0.07475346, 0.02413765, 0.18995791, 0.20386028])
```python
# Weights, return, risk, and Sharpe ratio of the EMV portfolio
w_emv = emv.x
e_emv = Eind.dot(w_emv)
s_emv = (w_emv.T.dot(Sigma).dot(w_emv))**0.5
rs_emv = (e_emv - rf) / s_emv
e_emv, s_emv, rs_emv
```
(0.14205956750708812, 0.13855263399626944, 0.6644375126752685)
```python
e_minvar, s_minvar, rs_minvar
```
(0.1396278783022745, 0.13644208872210342, 0.6568931855391253)
##### 1.3. Build the minimum-variance frontier
We also need to find the covariance (or correlation) between these two portfolios:
```python
# Covariance between the portfolios
cov = w_emv.T.dot(Sigma).dot(w_minvar)
#cov = w_minvar.T.dot(Sigma).dot(w_emv)
cov
```
0.018689768271811187
```python
# Correlación entre los portafolios
corr = cov / (s_emv * s_minvar)
corr
```
0.988645903271872
```python
w_minvar
```
array([6.20463011e-01, 8.67361738e-19, 5.42101086e-19, 2.03475313e-01,
1.76061676e-01])
```python
w_emv
```
array([0.50729071, 0.07475346, 0.02413765, 0.18995791, 0.20386028])
```python
# Vector de w
w = np.linspace(0, 1, 101)
```
```python
# DataFrame de portafolios:
# 1. Índice: i
# 2. Columnas 1-2: w, 1-w
# 3. Columnas 3-4: E[r], sigma
# 4. Columna 5: Sharpe ratio
f_mv = pd.DataFrame({
"w": w,
"1-w": 1 - w,
"Media": w * e_emv + (1 - w) * e_minvar,
"Vol": ((w * s_emv)**2 + ((1 - w) * s_minvar)**2 + 2 * w * (1 - w) * cov)**0.5
})
f_mv["rs"] = (f_mv["Media"] - rf) / f_mv["Vol"]
f_mv.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>w</th>
<th>1-w</th>
<th>Media</th>
<th>Vol</th>
<th>rs</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.00</td>
<td>1.00</td>
<td>0.139628</td>
<td>0.136442</td>
<td>0.656893</td>
</tr>
<tr>
<th>1</th>
<td>0.01</td>
<td>0.99</td>
<td>0.139652</td>
<td>0.136448</td>
<td>0.657045</td>
</tr>
<tr>
<th>2</th>
<td>0.02</td>
<td>0.98</td>
<td>0.139677</td>
<td>0.136453</td>
<td>0.657195</td>
</tr>
<tr>
<th>3</th>
<td>0.03</td>
<td>0.97</td>
<td>0.139701</td>
<td>0.136460</td>
<td>0.657343</td>
</tr>
<tr>
<th>4</th>
<td>0.04</td>
<td>0.96</td>
<td>0.139725</td>
<td>0.136466</td>
<td>0.657490</td>
</tr>
</tbody>
</table>
</div>
```python
# Import plotting libraries
from matplotlib import pyplot as plt
```
```python
# Scatter plot, colouring the points
# by Sharpe ratio, together with the individual assets
# and the portfolios we found
# Frontier
plt.scatter(f_mv["Vol"], f_mv["Media"], c=f_mv["rs"], label="Front. MV")
# Individual assets
for i in range(n):
    plt.plot(annual_ret_summ.iloc[1, i],
             annual_ret_summ.iloc[0, i],
             "o",
             ms=10,
             label=annual_ret_summ.columns[i])
# Optimal portfolios
plt.plot(s_minvar, e_minvar, '*g', ms=10, label="P. Min. Var")
plt.plot(s_emv, e_emv, '*b', ms=10, label="P. EMV")
# Axis labels
plt.xlabel("Volatility $\sigma$")
plt.ylabel("Expected return $E[r]$")
plt.colorbar()
# Legend
plt.legend(loc="upper left", bbox_to_anchor=(1.3, 1))
```
**From the above, all that remains is to build the LAC and choose the capital allocation according to preferences (risk aversion).**
___
```python
# Vector of wp varying between 0 and 1.5
wp = np.linspace(0, 1.5, 101)
```
```python
# DataFrame for the CAL:
# 1. Index: i
# 2. Columns 1-2: wp, wrf
# 3. Columns 3-4: E[r], sigma
# 4. Column 5: Sharpe ratio
lac = pd.DataFrame({"wp": wp,
"wrf": 1 - wp,
"Media": wp * e_emv + (1 - wp) * rf,
"Vol": wp * s_emv})
lac["RS"] = (lac["Media"] - rf) / lac["Vol"]
```
```python
lac.head(10)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>wp</th>
<th>wrf</th>
<th>Media</th>
<th>Vol</th>
<th>RS</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.000</td>
<td>1.000</td>
<td>0.050000</td>
<td>0.000000</td>
<td>NaN</td>
</tr>
<tr>
<th>1</th>
<td>0.015</td>
<td>0.985</td>
<td>0.051381</td>
<td>0.002078</td>
<td>0.664438</td>
</tr>
<tr>
<th>2</th>
<td>0.030</td>
<td>0.970</td>
<td>0.052762</td>
<td>0.004157</td>
<td>0.664438</td>
</tr>
<tr>
<th>3</th>
<td>0.045</td>
<td>0.955</td>
<td>0.054143</td>
<td>0.006235</td>
<td>0.664438</td>
</tr>
<tr>
<th>4</th>
<td>0.060</td>
<td>0.940</td>
<td>0.055524</td>
<td>0.008313</td>
<td>0.664438</td>
</tr>
<tr>
<th>5</th>
<td>0.075</td>
<td>0.925</td>
<td>0.056904</td>
<td>0.010391</td>
<td>0.664438</td>
</tr>
<tr>
<th>6</th>
<td>0.090</td>
<td>0.910</td>
<td>0.058285</td>
<td>0.012470</td>
<td>0.664438</td>
</tr>
<tr>
<th>7</th>
<td>0.105</td>
<td>0.895</td>
<td>0.059666</td>
<td>0.014548</td>
<td>0.664438</td>
</tr>
<tr>
<th>8</th>
<td>0.120</td>
<td>0.880</td>
<td>0.061047</td>
<td>0.016626</td>
<td>0.664438</td>
</tr>
<tr>
<th>9</th>
<td>0.135</td>
<td>0.865</td>
<td>0.062428</td>
<td>0.018705</td>
<td>0.664438</td>
</tr>
</tbody>
</table>
</div>
```python
plt.scatter(f_mv["Vol"], f_mv["Media"], c=f_mv["rs"], label="Front. MV")
# Individual assets
for i in range(n):
    plt.plot(annual_ret_summ.iloc[1, i],
             annual_ret_summ.iloc[0, i],
             "o",
             ms=10,
             label=annual_ret_summ.columns[i])
# Optimal portfolios
plt.plot(s_minvar, e_minvar, '*g', ms=10, label="P. Min. Var")
plt.plot(s_emv, e_emv, '*b', ms=10, label="P. EMV")
# LAC
plt.plot(lac["Vol"], lac["Media"], "--k", lw=3, label="LAC")
# Axis labels
plt.xlabel("Volatility $\sigma$")
plt.ylabel("Expected return $E[r]$")
plt.colorbar()
# Legend
plt.legend(loc="upper left", bbox_to_anchor=(1.3, 1))
plt.axis([0.13, 0.15, 0.135, 0.145])
```
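For reference, the optimal fraction `w_opt` invested in the risky (EMV) portfolio below comes from maximizing the mean-variance utility $U(w) = E[r_p] - \frac{\gamma}{2}\sigma_p^2$, with $E[r_p] = w\,E[r_{EMV}] + (1-w)r_f$ and $\sigma_p = w\,\sigma_{EMV}$, which gives
\begin{equation}
w^\ast = \frac{E[r_{EMV}] - r_f}{\gamma\,\sigma_{EMV}^2}.
\end{equation}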
```python
# For gamma = 5
g = 5
w_opt = (e_emv - rf) / (g * s_emv**2)
w_opt, 1 - w_opt
```
(0.9591120623418227, 0.04088793765817733)
```python
# Final weights
w_opt * w_emv, 1 - w_opt
```
(array([0.48654864, 0.07169694, 0.02315071, 0.18219092, 0.19552485]),
0.04088793765817733)
## 3. Final remarks
### 3.1. Additional constraints
Investors may face additional constraints:
1. Restrictions on short positions.
2. They may require a minimum return.
3. Socially responsible investing: they forgo investments in businesses or countries considered ethically or politically undesirable.
All of the above can be included as constraints in the optimization problem, at the cost of a lower Sharpe ratio.
### 3.2. Criticisms of mean-variance optimization
1. Only means and variances matter: recall that variance underestimates risk in some cases.
2. Mean-variance preferences treat gains and losses symmetrically: the dissatisfaction from a loss is greater than the satisfaction from an equal gain (loss aversion).
3. Risk aversion is constant: the attitude towards risk can change, for example with the state of the economy.
4. Short horizon (one period).
5. Garbage in - garbage out: mean-variance optimization is extremely sensitive to its inputs, the estimates of expected returns and variances.
___
# Announcements
## 1. Quiz next class (classes 12, 13, and 14).
## 2. Review the Tarea 6 file.
## 3. [Interesting note](http://yetanothermathprogrammingconsultant.blogspot.com/2016/08/portfolio-optimization-maximize-sharpe.html)
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
| ef4a4284245f8350e3dbb12ffdd0567ea76fab56 | 98,645 | ipynb | Jupyter Notebook | Modulo3/Clase14_SeleccionOptimaPortII.ipynb | if722399/porinvo2021 | 19ee9421b806f711c71b2affd1633bbfba40a9eb | [
"MIT"
] | null | null | null | Modulo3/Clase14_SeleccionOptimaPortII.ipynb | if722399/porinvo2021 | 19ee9421b806f711c71b2affd1633bbfba40a9eb | [
"MIT"
] | null | null | null | Modulo3/Clase14_SeleccionOptimaPortII.ipynb | if722399/porinvo2021 | 19ee9421b806f711c71b2affd1633bbfba40a9eb | [
"MIT"
] | null | null | null | 74.787718 | 32,600 | 0.77269 | true | 7,105 | Qwen/Qwen-72B | 1. YES
2. YES | 0.658418 | 0.835484 | 0.550097 | __label__spa_Latn | 0.676811 | 0.116389 |
# Week 2 worksheet 3: Gaussian elimination
This notebook was created by Charlotte Desvages.
Gaussian elimination is a direct method to solve linear systems of the form $Ax = b$, with $A \in \mathbb{R}^{n\times n}$ and $b \in \mathbb{R}^n$, to find the unknown $x \in \mathbb{R}^n$. This week, we put what we have seen so far into practice, and program different steps of the Gaussian elimination algorithm: forward substitution, backward substitution, and elementary row operations.
The best way to learn programming is to write code. Don't hesitate to edit the code in the example cells, or add your own code, to test your understanding. You will find practice exercises throughout the notebook, denoted by 🚩 ***Exercise $x$:***.
## How workshops will work:
1. Read the Exercise
2. Write tests and fail these tests
3. Write the solution
4. Pass the tests
### Displaying solutions
Solutions will be released after the workshops, as a new `.txt` file in the same GitHub repository. After pulling the file to your workspace, run the following cell to create clickable buttons under each exercise, which will allow you to reveal the solutions.
```python
%run scripts/create_widgets.py W02-W3
```
---
### 📚 Book sections
- **ASC**: sections 4.1, ***4.2***, 4.7, 4.8
- **PCP**: sections 2.3, 2.4, 5.6
🚩 Section **4.2** of **ASC** is **mandatory reading** this week, particularly when working through sections 3 and 4 of this notebook. You probably have seen Gaussian elimination in your first year Linear Algebra course, so this should be familiar already -- but this will be a good refresher.
---
## 1. NumPy's `np.linalg`
Numpy has a **sub-module** called `linalg`, which contains many useful functions for linear algebra and matrix operations. If we imported Numpy as `np`, for example, then to use the functions in `linalg`, you will need to prefix them with `np.linalg.`. Some of the functions provided by the `np.linalg` submodule are:
```python
import numpy as np
# Create a random 3x3 matrix and a vector of three 1s
A = np.random.random([3, 3])
b = np.ones(3)
print(np.linalg.eigvals(A)) # Eigenvalues of a matrix: note the complex values here, j=sqrt(-1)
eig_val_A, eig_vec_A = np.linalg.eig(A) # Eigenvalues and right eigenvectors
print("Eigenvalues: ", eig_val_A)
print("Eigenvectors: ", eig_vec_A)
print('\nInverse and determinant:')
print("A^(-1) =", np.linalg.inv(A)) # Inverse of a matrix
print("det(A) =", np.linalg.det(A)) # Determinant of a matrix
print('\nSolution of Ax = b:')
print("x =", np.linalg.solve(A, b)) # Solve Ax = b for x
```
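As a quick sanity check (a minimal sketch), you can verify a solution returned by `np.linalg.solve()` by substituting it back into the system:
```python
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))  # True if A x reproduces b to machine precision
```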
---
**📚 Learn more:**
* [numpy.linalg](https://docs.scipy.org/doc/numpy/reference/routines.linalg.html)
* **ASC**: section 4.8
---
🚩 ***Exercise 1:*** Create two variables `M` and `y`, assigned with Numpy arrays representing the matrix $M$ and vector $y$ defined as (you can reuse the code from the linear algebraic equations example)
$$
M =
\begin{pmatrix}
9 & 3 & 0 \\
-2 & -2 & 1 \\
0 & -1 & 1
\end{pmatrix}, \qquad
y =
\begin{pmatrix}
0.4 \\ -3 \\ 0.3
\end{pmatrix}.
$$
Then, solve the system $Mx = y$ for $x$, using `np.linalg.solve()`.
*For checking:* the result should be `[-3.16666667 9.63333333 9.93333333]`.
```python
# Tests
assert np.isclose(x[0], -57/18), "x[0] should be -57/18"
```
```python
%run scripts/show_solutions.py W02-W3_ex1
```
---
## 2. Diagonal matrices
Diagonal matrices have elements off the leading diagonal equal to zero. Elements on the leading diagonal of a diagonal matrix may or may not be equal to zero. A diagonal matrix $A$ is invertible iff none of its diagonal elements are equal to zero.
### 2.1. Solving diagonal systems
When $A$ is a diagonal matrix, the linear system $Ax = b$ can be written as
$$
\begin{pmatrix}
a_{11} & & & \\
& a_{22} & & \\
& & \ddots & \\
& & & a_{nn}
\end{pmatrix}
\begin{pmatrix}
x_1 \\ x_2 \\ \vdots \\ x_n
\end{pmatrix}
=
\begin{pmatrix}
b_1 \\ b_2 \\ \vdots \\ b_n
\end{pmatrix}
\qquad \Leftrightarrow \qquad
\begin{cases}
a_{11} x_1 &= b_1 \\
a_{22} x_2 &= b_2 \\
&\vdots \\
a_{nn} x_n &= b_n
\end{cases},
$$
where $a_{ii}, b_{i}, i = 1, \dots, n$ are known. The matrix $A$ is invertible (and therefore the system $Ax = b$ has precisely one solution) iff all $a_{ii} \neq 0$.
---
🚩 ***Exercise 2:*** Write a function `linsolve_diag()` which solves the linear system $A x = b$, returning $x$ in the output variable `x` (as a NumPy array), without using `np.linalg.solve()`. Here the input `A` should be assumed to be an invertible **diagonal** square matrix, and `b` a column vector.
*Hints:*
- Use the `.shape` attribute of NumPy arrays to determine the size of the input matrix and vector.
- The solution may be computed using a `for` loop.
- There is also an efficient way to do this via a NumPy function which extracts the diagonal elements of a matrix.
*For checking:* the solution to the given example is $[20, 10]$.
```python
import numpy as np
def linsolve_diag(A, b):
'''
Solves the diagonal system Ax = b for x,
assuming A is invertible.
'''
# your code here
x = np.ones(2)
return x
# Use the function on the following example
A = np.array([[2, 0],
[0, 0.5]])
b = np.array([40, 5])
x = linsolve_diag(A, b)
print(x)
# Test for the expected solution
assert x[0] == 20
assert x[1] == 10
```
```python
%run scripts/show_solutions.py W02-W3_ex2
```
---
🚩 ***Exercise 3:*** Use your `linsolve_diag` function to solve the linear system
\begin{equation}
\left( \begin{array}{ccc} 3 & 0 & 0 \\ 0 & -1 & 0 \\ 0
& 0 & 10 \end{array} \right) x = \left( \begin{array}{c} 3 \\ 1 \\ 1 \end{array}
\right),
\end{equation}
for $x$.
```python
```
```python
%run scripts/show_solutions.py W02-W3_ex3
```
### 2.2. Measuring computation time
The `time()` function in Python's `time` module allows Python to read the current time from your computer's clock. We can therefore use it to time how long it takes a section of code to run, as follows:
```python
import time
t0 = time.time()
# Code to time
t = time.time() - t0
print(f"Elapsed time: {t:.6f} seconds")
```
and the resulting time is stored in the variable `t`, as the time elapsed between the first and the second measurement.
---
**📚 Learn more:**
- [The `time` module](https://docs.python.org/3/library/time.html) - Python documentation
- [`time.time()`](https://docs.python.org/3/library/time.html#time.time) - Python documentation
- **PCP**: section 5.6, which discusses measuring computation time and efficiency, and provides examples using a different Python module called [`timeit`](https://docs.python.org/3/library/timeit.html)
---
🚩 ***Exercise 4:*** The following code generates a randomised invertible diagonal square matrix $A$ with dimension $N$, stored in the variable `A`, and a right-hand-side vector $b$, stored in the variable `b`, for a given value of `N`. Use `time.time()` to time how long it takes the `np.linalg.solve()` function to solve $A x = b$ for $x$. Compare this against the time it takes your `linsolve_diag()` function from Exercise 2 to solve for $x$, for different values of `N`.
Display the measured times in a way that is convenient to read (you can use an f-string, for instance; see the Week 1 workshop task).
*Hint:* limit `N` to less than $\sim 1,000$ to avoid using excessive memory.
```python
import time
# Create a randomised invertible diagonal matrix A and vector b
N = 500
A = np.diag(np.random.random([N])) + np.eye(N)
b = np.random.random([N])
# your code here
# remember to add tests
```
```python
%run scripts/show_solutions.py W02-W3_ex4
```
---
## 3. Forward and backward substitution
Gaussian elimination can be performed in 2 steps: forward substitution and backward substitution. In your previous courses on linear algebra, you probably have performed this by hand on small systems ($4\times 4$ or so). We can *implement* (program) the procedure in Python to be able to solve systems of any size much more quickly.
### 3.1. Lower triangular systems: forward substitution
**Lower triangular matrices** have elements above the leading diagonal equal to zero. Elements on or below the leading diagonal may or may not be equal to zero.
Linear systems involving lower triangular invertible square matrices can be solved via **forward substitution**. For example for the linear system
\begin{equation}
\left( \begin{array}{ccc} 2 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & 1 & 2 \end{array} \right)
\left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \end{array} \right)
= \left( \begin{array}{c} 4 \\ 1 \\ 4 \end{array} \right),
\end{equation}
applying the matrix multiplication gives
\begin{equation}
\left( \begin{array}{c} 2 x_1 \\ -x_1 + x_2 \\ -x_1 + x_2 + 2 x_3 \end{array} \right)
= \left( \begin{array}{c} 4 \\ 1 \\ 4 \end{array} \right),
\end{equation}
where, for instance, $-x_1 + x_2 + 2 x_3$ is the 3rd element of the vector $Ax$. Comparing the first elements gives $x_1$. Since $x_1$ is now known, comparing the second elements gives $x_2$. Since $x_1$ and $x_2$ are now known, comparing the third elements gives $x_3$.
In other words, $x_1$ is trivial to compute, and is then *substituted* into the next equation, which means that $x_2$ is now trivial to compute, etc. The substitutions cascade *forward*.
#### Forward substitution in Python
The function `linsolve_lt()` below solves the linear system $A x = b$ using forward substitution, returning $x$ in the output variable `x`. Here the input `A` should be assumed to be an invertible **lower triangular** square matrix, and `b` a column vector.
```python
def linsolve_lt(A, b):
'''
Solves the lower triangular system Ax = b.
'''
N = b.shape[0]
x = np.zeros(N)
for i in range(N):
x[i] = (b[i] - A[i, :i] @ x[:i]) / A[i, i]
return x
# Solving the system in the example above
A = np.array([[2, 0, 0],
[-1, 1, 0],
[-1, 1, 2]], dtype=float)
b = np.array([4, 1, 4], dtype=float)
x = linsolve_lt(A, b)
print(x)
```
---
🚩 ***Exercise 5:*** Examine the function `linsolve_lt()` carefully to understand how and why it works. Add code comments in the function definition to explain each step.
*Hint:*
- pen and paper will be useful here! Write (or sketch) what line 8 achieves depending on the value of `row`. For instance, what happens at the first iteration of the loop (when `row` is `0`)? at the second iteration (when `row` is `1`)? etc.
- @ or np.matmul() indicates matrix/matrix-vector products
```python
```
```python
%run scripts/show_solutions.py W02-W3_ex5
```
---
### 3.2. Upper triangular systems: backward substitution
**Upper triangular matrices** have elements below the leading diagonal equal to zero. Elements on or above the leading diagonal may or may not be equal to zero.
Linear systems involving upper triangular invertible square matrices can be solved via **backward substitution**. Backward substitution is similar to forward substitution, but starts from the last row, and substitutions cascade backward until the first row.
#### Backward substitution in Python
🚩 ***Exercise 6:*** Write a function `linsolve_ut()` which solves the linear system $A x = b$ using backward substitution, returning $x$ in the output variable `x`. Here the input `A` should be assumed to be an invertible **upper triangular** square matrix, and `b` a column vector.
You can start from `linsolve_lt()` above and adapt it to use backward substitution.
*For checking:* The solution to the given example is $[-1, 2]$.
```python
def linsolve_ut(A, b):
'''
Solves the upper triangular system Ax = b.
'''
# your code here
return x
# Testing with an example
A = np.array([[1, 1],
[0, 0.5]])
b = np.array([1, 1])
x = linsolve_ut(A, b)
print(x)
```
```python
%run scripts/show_solutions.py W02-W3_ex6
```
---
🚩 ***Exercise 7:*** The following code generates an invertible upper triangular square matrix $A$ with dimension $N$, stored in the variable `A`, and a right-hand-side vector $b$, stored in the variable `b`, for a given value of `N`. Use `time.time()` to time how long it takes the `np.linalg.solve()` function to solve $A x = b$ for $x$. Compare this against the time it takes your `linsolve_ut()` function to solve for $x$, for different values of `N`.
*Hint:* Limit `N` to less than $\sim 1,000$ to avoid using excessive memory.
```python
import time
# Create a randomised invertible upper triangular matrix A and vector b
N = 800
A = np.triu(np.random.random([N])) + np.eye(N)
b = np.random.random([N])
# your code here
```
```python
%run scripts/show_solutions.py W02-W3_ex7
```
## 4. Gaussian elimination
We now know how to solve lower and upper triangular systems. Now, consider a system which is not triangular -- for instance:
$$
\begin{pmatrix}
1 & 1 & 1 \\ 2 & 1 & -1 \\ 1 & 1 & 2
\end{pmatrix}
\begin{pmatrix}
x_1 \\ x_2 \\ x_3
\end{pmatrix}
=
\begin{pmatrix}
2 \\ 1 \\ 0
\end{pmatrix}.
$$
We can build the *augmented matrix* by adding $b$ as an extra column in $A$:
$$
\begin{pmatrix} 1 & 1 & 1 & 2 \\ 2 & 1 & -1 & 1 \\ 1 & 1 & 2 & 0 \end{pmatrix}.
$$
The goal is now to **reduce** this augmented matrix into **reduced row echelon form** (RREF), i.e.
$$
\begin{pmatrix}
1 & 0 & 0 & x_1 \\ 0 & 1 & 0 & x_2 \\ 0 & 0 & 1 & x_3 \end{pmatrix},
$$
and the final column is then the solution of the original linear problem. We do this by applying **elementary row operations** to the augmented matrix, to create zeros under each diagonal element:
\begin{align*}
\left( \begin{array}{cccc} 1 & 1 & 1 & 2 \\ 2 & 1 & -1 & 1 \\ 1 & 1 & 2 & 0 \end{array} \right)
\underset{R_2 - 2 R_1}{\rightarrow}
& \left( \begin{array}{cccc} 1 & 1 & 1 & 2 \\ 0 & -1 & -3 & -3 \\ 1 & 1 & 2 & 0 \end{array} \right) \nonumber \\
\underset{R_3 - R_1}{\rightarrow}
& \left( \begin{array}{cccc} 1 & 1 & 1 & 2 \\ 0 & -1 & -3 & -3 \\ 0 & 0 & 1 & -2 \end{array} \right) \nonumber \\
\end{align*}
This is equivalent to the linear equations:
\begin{align*}
1 x_1 + 1 x_2 + 1 x_3 & = 2, \nonumber \\
0 x_1 - 1 x_2 - 3 x_3 & = -3, \nonumber \\
0 x_1 + 0 x_2 + 1 x_3 & = -2.
\end{align*}
We could keep going, and apply further elementary row operations to the augmented matrix... but this system is now **upper triangular**, and therefore we can solve it using **backward substitution**!
Time to tie it all together -- remember that you can check section **4.2** in **ASC** for help for these final problems.
### 4.1. Elementary row operations
🚩 ***Exercise 8:*** Write a function `row_op()` which applies the elementary row operation
\begin{equation}
\left( \textrm{Row} j \right) \rightarrow \beta \times \left( \textrm{Row } j \right) + \alpha \times \left( \textrm{Row } i \right),
\end{equation}
where $\alpha, \beta \in \mathbb{R}$.
*For checking:* The solution to the given example is $[[0, 4], [1, 2]]$.
*Hint:* Input arguments of functions can be modified if they are e.g. lists or NumPy arrays (remember section 4 of the Week 3 tutorial), so you can apply this operation to `A` itself, and thus change it. As long as you don't redefine `A` from scratch inside the function, you don't even need to return it, as it will be changed in place. Since you don't return it (i.e. you return `None`), there is no result to assign to a new variable -- so simply calling your function (as in the example), without assigning the output to `A` for instance, will still work.
Here is a simpler example to illustrate this:
```python
def change_in_place(x):
'''
Change the first element of x in-place.
'''
x[0] = 12345 # note that we don't return anything here!
# Test our function
z = np.array([10, 15, 20, 25])
print(z)
change_in_place(z) # this changes z itself, no "output" to store in a variable here
print(z)
```
```python
def row_op(A, alpha, i, beta, j):
'''
Applies row operation beta*A_j + alpha*A_i to A_j,
the jth row of the matrix A.
Changes A in place.
'''
# your code here
# Testing with an example
A = np.array([[2, 0],
[1, 2]])
alpha, beta = 2, -1
i, j = 1, 0
# If you don't return A, it will be changed in-place when the function is executed
print(A)
row_op(A, alpha, i, beta, j) # this changes A
print(A)
```
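For reference, one minimal in-place implementation (a sketch; the worked solution is in the next cell):
```python
def row_op(A, alpha, i, beta, j):
    '''
    Applies row operation beta*A_j + alpha*A_i to A_j,
    the jth row of the matrix A. Changes A in place.
    '''
    A[j, :] = beta * A[j, :] + alpha * A[i, :]
```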
```python
%run scripts/show_solutions.py W02-W3_ex8
```
### 4.2. Row echelon form
🚩 ***Exercise 9 (challenging):*** Write a function `REF()` which takes as inputs `A` and `b`, a square invertible matrix and a vector, and returns `C` and `d`, which are respectively `A` and `b` transformed by successive elementary row operations, so that `C` is upper triangular (and the system $Cx = d$ is equivalent to $Ax = b$).
Your function should first build the augmented matrix $( A | b )$, and use elementary row operations as in the example above to reduce it to row echelon form. Finally, it should split the final augmented matrix into a square matrix `C` and a vector `d`.
Use your function `row_op()` to perform the row operations: **you do not need to re-define it here**, you can simply *call* it -- i.e. use the command `row_op(..)` with appropriate input arguments inside your function `REF()`.
You will have to calculate $\alpha$ and $\beta$ for each row operation. For instance, in the example above, the first row operation performed is $R_2 \to R_2 - 2R_1$, therefore we have $i=1$, $j=2$, $\alpha = -2$, and $\beta = 1$. How can you know that these values of $\alpha$ and $\beta$ will ensure that the element in the second row, first column becomes 0? (*hint: you should see similarities with your forward substitution algorithm.*)
*Hint:* think about how you would do this on paper. You will need to create zeros under the diagonal element in each column (one after another), and you will need a separate row operation for each row (in a given column) to make the leading element zero. You will need 2 nested loops.
*For checking:* `C` and `d` should be as in the example above.
```python
def REF(A, b):
'''
Reduces the augmented matrix (A|b) into
row echelon form, returns (C|d).
'''
# your code here
return C, d
# Testing with an example
A = np.array([[1, 1, 1],
[2, 1, -1],
[1, 1, 2]], dtype=float)
b = np.array([2, 1, 0], dtype=float)
C, d = REF(A, b)
print(C)
print(d)
```
```python
%run scripts/show_solutions.py W02-W3_ex9
```
### 4.3. Completing Gaussian elimination
We have done all the hard work now; all that is left is to put it all together.
🚩 ***Exercise 10:*** Write a function `gauss()` which, given an invertible matrix `A` and a column vector `b`, solves the system $Ax = b$ and returns the result as `x`. This function should make use of your previous functions `REF()` and `linsolve_ut()`. (Again, no need to define them again here, just call them.)
*For checking:* the result here should be $[-5, 9, -2]$.
*For further checking:* given an arbitrary `A` and `b`, how can you check that `x` is indeed the solution to $Ax = b$ (to machine precision)?
```python
def gauss(A, b):
'''
Solve the linear system Ax = b, given a square
invertible matrix A and a vector b, using Gaussian elimination.
'''
# your code here
return x
# Test the function
A = np.array([[1, 1, 1],
[2, 1, -1],
[1, 1, 2]], dtype=float)
b = np.array([2, 1, 0], dtype=float)
x = gauss(A, b)
print(x)
```
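For reference, a sketch of how the pieces compose (assuming your `REF()` and `linsolve_ut()` behave as specified). For the further check: `np.allclose(A @ x, b)` should return `True` when `x` solves the system to machine precision.
```python
def gauss(A, b):
    '''Solve Ax = b via row echelon form + backward substitution (sketch).'''
    C, d = REF(A, b)
    return linsolve_ut(C, d)
```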
```python
%run scripts/show_solutions.py W02-W3_ex10
```
```python
'''import numpy as np
import math
import sympy
import sympy as sp
from sympy import Eq, IndexedBase, symbols, Idx, Indexed, Sum, S, N
from sympy.functions.special.tensor_functions import KroneckerDelta
from sympy.vector import Vector, CoordSys3D, AxisOrienter, BodyOrienter, Del, curl, divergence, gradient, is_conservative, is_solenoidal, scalar_potential, Point, scalar_potential_difference, Del, express, matrix_to_vector
import matplotlib.pyplot as plt
from sympy.physics.vector import ReferenceFrame
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
from IPython.display import display
from IPython.display import display_latex
from sympy import latex
from sympy import Array, Matrix, transpose, zeros, diff, Function, Derivative, cos, sin, sqrt, solve, linsolve, acos, atan, asin
from sympy import symbols
from sympy.plotting import plot'''
from numpy import log, log2
from scipy.linalg import logm, expm
from spatialmath import SO2, SE2
from spatialmath.base import rotx, rot2, det, simplify, skew, vex, trot2, transl2, trplot2
from spatialmath.base import transl, trplot
import spatialmath.base.symbolic as sym
```
```python
np.set_printoptions(formatter={'float': lambda x: "{0: 0.4f}".format(x)})
```
Given the relative pose
$^A\xi_B$,
where A is the reference frame and B is the target frame, a point expressed in frame B maps into frame A as
$^AP = {^A\xi_B} \bullet {^BP}$
We use the operator ⊕ to indicate
composition of relative poses:
$^AP = ({^A\xi_B} \oplus {^B\xi_C}) \bullet {^CP}$
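As a sketch of this composition with the `SE2` class imported above (the poses and the point here are made-up values):
```python
# Composing relative poses and transforming a point (sketch, made-up values)
T_AB = SE2(1, 2, 0.3)    # pose of {B} relative to {A}
T_BC = SE2(2, 1, -0.1)   # pose of {C} relative to {B}
T_AC = T_AB * T_BC       # composition: {C} relative to {A}
P_A = T_AC * [1, 1]      # a point in {C}, expressed in {A}
print(P_A)
```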
```python
R = rot2(0.2)
d = det(R)
d
```
$\displaystyle 1.0$
```python
theta = sym.symbol('theta')
R = rot2(theta)
R
```
array([[cos(theta), -sin(theta)],
[sin(theta), cos(theta)]], dtype=object)
```python
simplify(R @ R)
```
$\displaystyle \left[\begin{matrix}\cos{\left(2 \theta \right)} & - \sin{\left(2 \theta \right)}\\\sin{\left(2 \theta \right)} & \cos{\left(2 \theta \right)}\end{matrix}\right]$
```python
simplify(det(R))
```
$\displaystyle 1$
Matrix Exponential
```python
R = rot2(0.3)
R
```
array([[ 0.9553, -0.2955],
[ 0.2955, 0.9553]])
```python
R = rot2(0.3)
S = logm(R)
```
```python
k = skew(2)
```
```python
vex(S)
```
array([ 0.3000])
```python
expm(S)
```
array([[ 0.9553, -0.2955],
[ 0.2955, 0.9553]])
$R = e^{[\theta]_\times} \in SO(2)$
```python
R = rot2(0.3)
R = expm(skew(0.3))
R
```
array([[ 0.9553, -0.2955],
[ 0.2955, 0.9553]])
Pose in 2-Dimensions
Homogeneous Transformation Matrix
$^A\tilde{P} = {^AT_B}\,{^B\tilde{P}}, \quad {^AT_B} = \begin{pmatrix} {^AR_B} & {^At_B} \\ 0 & 1 \end{pmatrix}$
```python
T1 = transl2(1, 2) @ trot2(30, 'deg')
T1
```
array([[ 0.8660, -0.5000,  1.0000],
       [ 0.5000,  0.8660,  2.0000],
       [ 0.0000,  0.0000,  1.0000]])
```python
trplot( transl(1,2,3), frame='A', rviz=True, width=1, dims=[0, 10, 0, 10, 0, 10])
```
```python
trplot2( transl2(1,2) @ trot2(30, 'deg'), frame='A', rviz=True, width=1, dims=[0, 5, 0, 5])
```
```python
transl2(1,2) @ trot2(30, 'deg')
```
array([[ 0.8660, -0.5000,  1.0000],
       [ 0.5000,  0.8660,  2.0000],
       [ 0.0000,  0.0000,  1.0000]])
| |Pierre Proulx, ing, professeur|
|:---|:---|
|Département de génie chimique et de génie biotechnologique |** GCH200-Phénomènes d'échanges I **|
### Section 10.4, Brinkman heating: a heat source caused by viscous dissipation
>
>> Here we treat the problem slightly differently from Transport Phenomena; indeed, to set it up we can go to the general equations of Appendix B.9.
>> Consider equation B.9-1, which is the one used here since we assume the system has negligible curvature and can be treated in Cartesian coordinates.
All the terms on the left-hand side are zero because:
>>> We want the steady-state result, so the time derivative is zero.
>>> There is no velocity in the x direction or in the y direction, and there is no temperature gradient in the z direction.
>>> The only temperature derivative remaining on the right-hand side is the one in the x direction, plus the source term $\Phi$, therefore:
>>>> ### $k \frac {\partial^2 T}{\partial x^2}+\mu \Phi_v = 0$
>> The viscous source term is obtained from Appendix B.7,
>
which may look very complex (it is), but all of its terms vanish except the one with the gradient of $v_z$ with respect to x, so:
>>>> ### $\Phi_v =\big(\frac {\partial v_z}{\partial x} \big)^2 $
>> The equation to solve is therefore simply:
>>>> ### $k \frac {\partial^2 T}{\partial x^2}+ \mu \big(\frac {\partial v_z}{\partial x} \big)^2= 0$
> And since the velocity profile has already been given to you in Figure 10.4-2, we can solve.
```python
#
# Pierre Proulx
#
# Set up the display and symbolic computation tools
#
import sympy as sp
from IPython.display import *
sp.init_printing(use_latex=True)
%matplotlib inline
```
```python
# Parameters, variables and functions
#
mu,k,b,Tb,T0,vb,x=sp.symbols('mu k b T_b T_0 v_b x')
T=sp.Function('T')(x)
v=sp.Function('v')(x)
```
```python
v=vb*x/b
eq=k*T.diff(x,x)+mu*v.diff(x)**2
display(eq)
```
```python
T=sp.dsolve(eq)
display('Temperature profile, general solution with unknown constants',T)
```
```python
condition_1=sp.Eq(T.rhs.subs(x,0),T0)
condition_2=sp.Eq(T.rhs.subs(x,b),Tb)
display('The boundary conditions',condition_1, condition_2)
constantes=sp.solve([condition_1,condition_2],sp.symbols('C1,C2'))
display('The constants after solving',constantes)
T=T.subs(constantes)
display('The final temperature profile',T)
```
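As a quick sanity check, substituting the final profile back into the energy equation should give a residual of zero:
```python
# Sanity check (sketch): the solved profile must satisfy k*T'' + mu*(dv/dx)**2 = 0
residual = k*T.rhs.diff(x, 2) + mu*v.diff(x)**2
display(sp.simplify(residual))  # expected: 0
```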
```python
# Plotting
T=T.rhs
dico={'T_0':0,'T_b':10,'k':0.1,'mu':0.01,'b':0.01,'v_b':10}
T1=T.subs(dico)
dico={'T_0':0,'T_b':10,'k':0.1,'mu':0.05,'b':0.01,'v_b':10}
T2=T.subs(dico)
dico={'T_0':0,'T_b':10,'k':0.1,'mu':0.1,'b':0.01,'v_b':10}
T3=T.subs(dico)
dico={'T_0':0,'T_b':10,'k':0.1,'mu':0.15,'b':0.01,'v_b':10}
T4=T.subs(dico)
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize']=10,8
p=sp.plot((T1,(x,0,0.01)),(T2,(x,0,0.01)),(T3,(x,0,0.01)),(T4,(x,0,0.01)),
legend=True,ylabel='T',xlabel='x',show=False)
p[0].label='mu=0.01'
p[1].label='.05'
p[2].label='.1'
p[3].label='.15'
p[0].line_color='blue'
p[1].line_color='red'
p[2].line_color='green'
p[3].line_color='black'
p.show()
```
```julia
using MomentClosure, Latexify, OrdinaryDiffEq, Catalyst
```
┌ Info: Precompiling MomentClosure [01a1b25a-ecf0-48c5-ae58-55bfd5393600]
└ @ Base loading.jl:1278
$$ G \stackrel{c_1}{\rightarrow} G+P, \\
G^* \stackrel{c_2}{\rightarrow} G^*+P, \\
P \stackrel{c_3}{\rightarrow} 0 \\
G+P \underset{c_5}{\stackrel{c_4}{\rightleftharpoons}} G^* $$
On/off gene states are merged into a Bernoulli variable $g(t)$ which can be either $1$ ($G$) or $0$ ($G^*$). The number of proteins in the system is given by $p(t)$.
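Since $g(t)$ only takes the values 0 and 1, all of its powers collapse: $g^n = g$. This is the property exploited further below to eliminate redundant moments, e.g. $\mu_{20} = \mu_{10}$ and $\mu_{21} = \mu_{11}$.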
### Using `ReactionSystemMod`
```julia
@parameters t, c₁, c₂, c₃, c₄, c₅
@variables p(t), g(t)
vars = [g, p]
ps = [c₁, c₂, c₃, c₄, c₅]
S = [0 0 0 -1 1;
1 1 -1 -1 1]
as = [c₁*g, # G -> G + P
c₂*(1-g), # G* -> G* + P
c₃*p, # P -> 0
c₄*g*p, # G+P -> G*
c₅*(1-g)] # G* -> G+P
# system size Ω can be included as an additional parameter if needed
binary_vars = [1]
rn = ReactionSystemMod(t, vars, ps, as, S)
```
ReactionSystemMod(t, Term{Real,Nothing}[g(t), p(t)], Sym{ModelingToolkit.Parameter{Real},Nothing}[c₁, c₂, c₃, c₄, c₅], SymbolicUtils.Mul{Real,Int64,Dict{Any,Number},Nothing}[c₁*g(t), c₂*(1 - g(t)), c₃*p(t), c₄*g(t)*p(t), c₅*(1 - g(t))], [0 0 … -1 1; 1 1 … -1 1])
### Using Catalyst.jl `ReactionSystem`
* $\rightarrow$ indicates a reaction that follows the law of mass action (need to indicate only the reaction coefficient, full propensity function is constructed automatically)
* $\Rightarrow$ indicates a reaction that does not follow the law of mass action (need to define the full propensity function)
```julia
rn = @reaction_network begin
(c₁), g → g+p
(c₂*(1-g)), 0 ⇒ p
(c₃), p → 0
(c₄), g+p → 0
(c₅*(1-g)), 0 ⇒ g+p
end c₁ c₂ c₃ c₄ c₅
```
\begin{align}
\require{mhchem}
\ce{ g &->[c{_1}] g + p}\\
\ce{ \varnothing &<=>[c{_2} \left( 1 - g\left( t \right) \right)][c{_3}] p}\\
\ce{ g + p &<=>[c{_4}][c{_5} \left( 1 - g\left( t \right) \right)] \varnothing}
\end{align}
The identical stoichiometry matrix and propensity functions are recovered:
```julia
propensities(rn, combinatoric_ratelaw=false)
```
5-element Array{SymbolicUtils.Mul{Real,Int64,Dict{Any,Number},Nothing},1}:
c₁*g(t)
c₂*(1 - g(t))
c₃*p(t)
c₄*g(t)*p(t)
c₅*(1 - g(t))
```julia
get_S_mat(rn)
```
2×5 Array{Int64,2}:
0 0 0 -1 1
1 1 -1 -1 1
### Moment equations
Generate raw moment equations up to 3rd order.
The argument `combinatoric_ratelaw = false` indicates whether binomial coefficients are included when constructing the propensity functions for the reactions that follow the law of mass action (it does not play a role in this specific scenario).
Equivalently, central moment equations can be generated using `generate_central_moment_eqs(rn, 3, 5, combinatoric_ratelaw=false)`
```julia
raw_eqs = generate_raw_moment_eqs(rn, 3, combinatoric_ratelaw=false)
latexify(raw_eqs)
```
\begin{align*}
\frac{d\mu{_{10}}}{dt} =& c{_5} - c{_4} \mu{_{11}} - c{_5} \mu{_{10}} \\
\frac{d\mu{_{01}}}{dt} =& c{_2} + c{_5} + c{_1} \mu{_{10}} - c{_2} \mu{_{10}} - c{_3} \mu{_{01}} - c{_4} \mu{_{11}} - c{_5} \mu{_{10}} \\
\frac{d\mu{_{20}}}{dt} =& c{_5} + c{_4} \mu{_{11}} + c{_5} \mu{_{10}} - 2 c{_4} \mu{_{21}} - 2 c{_5} \mu{_{20}} \\
\frac{d\mu{_{11}}}{dt} =& c{_5} + c{_1} \mu{_{20}} + c{_2} \mu{_{10}} + c{_4} \mu{_{11}} + c{_5} \mu{_{01}} - c{_2} \mu{_{20}} - c{_3} \mu{_{11}} - c{_4} \mu{_{12}} - c{_4} \mu{_{21}} - c{_5} \mu{_{11}} - c{_5} \mu{_{20}} \\
\frac{d\mu{_{02}}}{dt} =& c{_2} + c{_5} + c{_1} \mu{_{10}} + c{_3} \mu{_{01}} + c{_4} \mu{_{11}} + 2 c{_1} \mu{_{11}} + 2 c{_2} \mu{_{01}} + 2 c{_5} \mu{_{01}} - c{_2} \mu{_{10}} - 2 c{_2} \mu{_{11}} - 2 c{_3} \mu{_{02}} - 2 c{_4} \mu{_{12}} - c{_5} \mu{_{10}} - 2 c{_5} \mu{_{11}} \\
\frac{d\mu{_{30}}}{dt} =& c{_5} + 3 c{_4} \mu{_{21}} + 2 c{_5} \mu{_{10}} - c{_4} \mu{_{11}} - 3 c{_4} \mu{_{31}} - 3 c{_5} \mu{_{30}} \\
\frac{d\mu{_{21}}}{dt} =& c{_5} + c{_1} \mu{_{30}} + c{_2} \mu{_{20}} + c{_4} \mu{_{12}} + c{_5} \mu{_{01}} + c{_5} \mu{_{10}} + c{_5} \mu{_{11}} + 2 c{_4} \mu{_{21}} - c{_2} \mu{_{30}} - c{_3} \mu{_{21}} - c{_4} \mu{_{11}} - 2 c{_4} \mu{_{22}} - c{_4} \mu{_{31}} - c{_5} \mu{_{20}} - 2 c{_5} \mu{_{21}} - c{_5} \mu{_{30}} \\
\frac{d\mu{_{12}}}{dt} =& c{_5} + c{_1} \mu{_{20}} + c{_2} \mu{_{10}} + c{_3} \mu{_{11}} + c{_4} \mu{_{21}} + c{_5} \mu{_{02}} + 2 c{_1} \mu{_{21}} + 2 c{_2} \mu{_{11}} + 2 c{_4} \mu{_{12}} + 2 c{_5} \mu{_{01}} - c{_2} \mu{_{20}} - 2 c{_2} \mu{_{21}} - 2 c{_3} \mu{_{12}} - c{_4} \mu{_{11}} - c{_4} \mu{_{13}} - 2 c{_4} \mu{_{22}} - c{_5} \mu{_{12}} - c{_5} \mu{_{20}} - 2 c{_5} \mu{_{21}} \\
\frac{d\mu{_{03}}}{dt} =& c{_2} + c{_5} + c{_1} \mu{_{10}} + 3 c{_1} \mu{_{11}} + 3 c{_1} \mu{_{12}} + 3 c{_2} \mu{_{01}} + 3 c{_2} \mu{_{02}} + 3 c{_3} \mu{_{02}} + 3 c{_4} \mu{_{12}} + 3 c{_5} \mu{_{01}} + 3 c{_5} \mu{_{02}} - c{_2} \mu{_{10}} - 3 c{_2} \mu{_{11}} - 3 c{_2} \mu{_{12}} - c{_3} \mu{_{01}} - 3 c{_3} \mu{_{03}} - c{_4} \mu{_{11}} - 3 c{_4} \mu{_{13}} - c{_5} \mu{_{10}} - 3 c{_5} \mu{_{11}} - 3 c{_5} \mu{_{12}}
\end{align*}
We are solving for moments up to `m_order = 3`, and in the equations we encounter moments up to `exp_order = 5`.
Use the Bernoulli variable properties to eliminate redundant equations to see how they simplify:
```julia
bernoulli_eqs = bernoulli_moment_eqs(raw_eqs, binary_vars)
latexify(bernoulli_eqs)
```
\begin{align*}
\frac{d\mu{_{10}}}{dt} =& c{_5} - c{_4} \mu{_{11}} - c{_5} \mu{_{10}} \\
\frac{d\mu{_{01}}}{dt} =& c{_2} + c{_5} + c{_1} \mu{_{10}} - c{_2} \mu{_{10}} - c{_3} \mu{_{01}} - c{_4} \mu{_{11}} - c{_5} \mu{_{10}} \\
\frac{d\mu{_{11}}}{dt} =& c{_5} + c{_1} \mu{_{10}} + c{_5} \mu{_{01}} - c{_3} \mu{_{11}} - c{_4} \mu{_{12}} - c{_5} \mu{_{10}} - c{_5} \mu{_{11}} \\
\frac{d\mu{_{02}}}{dt} =& c{_2} + c{_5} + c{_1} \mu{_{10}} + c{_3} \mu{_{01}} + c{_4} \mu{_{11}} + 2 c{_1} \mu{_{11}} + 2 c{_2} \mu{_{01}} + 2 c{_5} \mu{_{01}} - c{_2} \mu{_{10}} - 2 c{_2} \mu{_{11}} - 2 c{_3} \mu{_{02}} - 2 c{_4} \mu{_{12}} - c{_5} \mu{_{10}} - 2 c{_5} \mu{_{11}} \\
\frac{d\mu{_{12}}}{dt} =& c{_5} + c{_1} \mu{_{10}} + c{_3} \mu{_{11}} + c{_5} \mu{_{02}} + 2 c{_1} \mu{_{11}} + 2 c{_5} \mu{_{01}} - 2 c{_3} \mu{_{12}} - c{_4} \mu{_{13}} - c{_5} \mu{_{10}} - 2 c{_5} \mu{_{11}} - c{_5} \mu{_{12}} \\
\frac{d\mu{_{03}}}{dt} =& c{_2} + c{_5} + c{_1} \mu{_{10}} + 3 c{_1} \mu{_{11}} + 3 c{_1} \mu{_{12}} + 3 c{_2} \mu{_{01}} + 3 c{_2} \mu{_{02}} + 3 c{_3} \mu{_{02}} + 3 c{_4} \mu{_{12}} + 3 c{_5} \mu{_{01}} + 3 c{_5} \mu{_{02}} - c{_2} \mu{_{10}} - 3 c{_2} \mu{_{11}} - 3 c{_2} \mu{_{12}} - c{_3} \mu{_{01}} - 3 c{_3} \mu{_{03}} - c{_4} \mu{_{11}} - 3 c{_4} \mu{_{13}} - c{_5} \mu{_{10}} - 3 c{_5} \mu{_{11}} - 3 c{_5} \mu{_{12}}
\end{align*}
### Closing the moment equations
Finally, we can apply the selected moment closure method on the system of raw moment equations:
```julia
closed_raw_eqs = moment_closure(raw_eqs, "conditional derivative matching", binary_vars)
latexify(closed_raw_eqs)
```
\begin{align*}
\frac{d\mu{_{10}}}{dt} =& c{_5} - c{_4} \mu{_{11}} - c{_5} \mu{_{10}} \\
\frac{d\mu{_{01}}}{dt} =& c{_2} + c{_5} + c{_1} \mu{_{10}} - c{_2} \mu{_{10}} - c{_3} \mu{_{01}} - c{_4} \mu{_{11}} - c{_5} \mu{_{10}} \\
\frac{d\mu{_{11}}}{dt} =& c{_5} + c{_1} \mu{_{10}} + c{_5} \mu{_{01}} - c{_3} \mu{_{11}} - c{_4} \mu{_{12}} - c{_5} \mu{_{10}} - c{_5} \mu{_{11}} \\
\frac{d\mu{_{02}}}{dt} =& c{_2} + c{_5} + c{_1} \mu{_{10}} + c{_3} \mu{_{01}} + c{_4} \mu{_{11}} + 2 c{_1} \mu{_{11}} + 2 c{_2} \mu{_{01}} + 2 c{_5} \mu{_{01}} - c{_2} \mu{_{10}} - 2 c{_2} \mu{_{11}} - 2 c{_3} \mu{_{02}} - 2 c{_4} \mu{_{12}} - c{_5} \mu{_{10}} - 2 c{_5} \mu{_{11}} \\
\frac{d\mu{_{12}}}{dt} =& c{_5} + c{_1} \mu{_{10}} + c{_3} \mu{_{11}} + c{_5} \mu{_{02}} + 2 c{_1} \mu{_{11}} + 2 c{_5} \mu{_{01}} - 2 c{_3} \mu{_{12}} - c{_5} \mu{_{10}} - 2 c{_5} \mu{_{11}} - c{_5} \mu{_{12}} - c{_4} \mu{_{10}} \mu{_{11}}^{-3} \mu{_{12}}^{3} \\
\frac{d\mu{_{03}}}{dt} =& c{_2} + c{_5} + c{_1} \mu{_{10}} + 3 c{_1} \mu{_{11}} + 3 c{_1} \mu{_{12}} + 3 c{_2} \mu{_{01}} + 3 c{_2} \mu{_{02}} + 3 c{_3} \mu{_{02}} + 3 c{_4} \mu{_{12}} + 3 c{_5} \mu{_{01}} + 3 c{_5} \mu{_{02}} - c{_2} \mu{_{10}} - 3 c{_2} \mu{_{11}} - 3 c{_2} \mu{_{12}} - c{_3} \mu{_{01}} - 3 c{_3} \mu{_{03}} - c{_4} \mu{_{11}} - c{_5} \mu{_{10}} - 3 c{_5} \mu{_{11}} - 3 c{_5} \mu{_{12}} - 3 c{_4} \mu{_{10}} \mu{_{11}}^{-3} \mu{_{12}}^{3}
\end{align*}
We can also print out the closure functions for each higher order moment:
```julia
latexify(closed_raw_eqs, :closure)
```
\begin{align*}
\mu{_{13}} =& \mu{_{10}} \mu{_{11}}^{-3} \mu{_{12}}^{3} \\
\mu{_{04}} =& \mu{_{01}}^{4} \mu{_{02}}^{-6} \mu{_{03}}^{4}
\end{align*}
### Numerical solution
The closed moment equations can be solved using DifferentialEquations.jl (or just OrdinaryDiffEq.jl, which is more lightweight and sufficient for this particular case).
```julia
# PARAMETER INITIALISATION
pmap = [c₁ => 0.01,
c₂ => 40,
c₃ => 1,
c₄ => 1,
c₅ => 1]
# DETERMINISTIC INITIAL CONDITIONS
μ₀ = [1., 0.001]
u₀map = deterministic_IC(μ₀, closed_raw_eqs)
# time interval to solve on
tspan = (0., 1000.0)
dt = 1
@time oprob = ODEProblem(closed_raw_eqs, u₀map, tspan, pmap);
@time sol_CDM = solve(oprob, Tsit5(), saveat=dt);
```
12.183509 seconds (17.83 M allocations: 933.692 MiB, 2.85% gc time)
5.671233 seconds (12.59 M allocations: 608.756 MiB, 11.10% gc time)
```julia
using Plots
plot(sol_CDM.t, sol_CDM[1,:],
label = "CDM",
legend = true,
xlabel = "Time [s]",
ylabel = "Mean gene number",
lw=2,
legendfontsize=8,
xtickfontsize=10,
ytickfontsize=10,
dpi=100)
```
```julia
plot(sol_CDM.t, sol_CDM[2,:],
label = "CDM",
legend = :bottomright,
xlabel = "Time [s]",
ylabel = "Mean protein number",
lw=2,
legendfontsize=8,
xtickfontsize=10,
ytickfontsize=10,
dpi=100)
```
```julia
std_CDM = sqrt.(sol_CDM[4,2:end] .- sol_CDM[2,2:end].^2)
plot(sol_CDM.t[2:end], std_CDM,
label = "CDM",
legend = true,
xlabel = "Time [s]",
ylabel = "standard deviation of the protein number",
lw=2,
legendfontsize=8,
xtickfontsize=10,
ytickfontsize=10,
dpi=100)
```
### SSA
```julia
using DiffEqJump
# parameters [c₁, c₂, c₃, c₄, c₅]
p = [param[2] for param in pmap]
# initial conditions [g, p]
# NOTE: define as FLOATS as otherwise will encounter problems due to parsing from Int to Float
u₀ = [1., 0.]
# time interval to solve on
tspan = (0., 1000.)
# create a discrete problem to encode that our species are integer valued
dprob = DiscreteProblem(rn, u₀, tspan, p)
# create a JumpProblem and specify Gillespie's Direct Method as the solver:
jprob = JumpProblem(rn, dprob, Direct(), save_positions=(false, false))
# SET save_positions to (false, false) as otherwise time of each reaction occurence is saved
dt = 1 # time resolution at which numerical solution is saved
# solve and plot
ensembleprob = EnsembleProblem(jprob)
@time sol_SSA = solve(ensembleprob, SSAStepper(), saveat=dt, trajectories=10000);
```
205.463844 seconds (646.80 M allocations: 15.332 GiB, 56.95% gc time)
```julia
using DiffEqJump
# This is not equal to μ₀ as the initial values should be Int (not 0.001)
# However, these still must be defined as Floats, otherwise will encounter
# problems due to parsing from Int to Float using EnsembleAnalysis
u₀ = [1., 0.]
# create a discrete problem to encode that our species are integer valued
dprob = DiscreteProblem(rn, u₀, tspan, pmap)
# create a JumpProblem and specify Gillespie's Direct Method as the solver:
jprob = JumpProblem(rn, dprob, Direct(), save_positions=(false, false))
# SET save_positions to (false, false) as otherwise time of each reaction occurence is saved
dt = 1 # time resolution at which numerical solution is saved
# solve and plot
ensembleprob = EnsembleProblem(jprob)
@time sol_SSA = solve(ensembleprob, SSAStepper(), saveat=dt, trajectories=10000);
```
245.071448 seconds (645.00 M allocations: 15.269 GiB, 65.19% gc time)
We can compute all sample moments up to a chosen order:
```julia
@time SSA_μ = sample_raw_moments(sol_SSA, 2);
@time SSA_M = sample_central_moments(sol_SSA, 2);
```
6.137514 seconds (2.82 M allocations: 1.325 GiB, 21.46% gc time)
2.501856 seconds (2.79 M allocations: 744.488 MiB)
```julia
plot(sol_CDM.t, [sol_CDM[1,:], SSA_μ[1,0]],
label = ["CDM" "SSA"],
legend = true,
xlabel = "Time [s]",
ylabel = "Mean gene number",
lw=2,
legendfontsize=8,
xtickfontsize=10,
ytickfontsize=10,
dpi=100)
```
```julia
plot(sol_CDM.t, [sol_CDM[2,:], SSA_μ[0,1]],
label = ["CDM" "SSA"],
legend = :bottomright,
xlabel = "Time [s]",
ylabel = "Mean protein number",
lw=2,
legendfontsize=8,
xtickfontsize=10,
ytickfontsize=10,
dpi=100)
```
```julia
std_CDM = sqrt.(sol_CDM[4,2:end] .- sol_CDM[2,2:end].^2)
std_p_SSA = sqrt.(SSA_M[0,2][2:end])
plot(sol_CDM.t[2:end], [std_CDM, std_p_SSA],
label = ["CDM" "SSA"],
legend = true,
xlabel = "Time [s]",
ylabel = "standard deviation of the protein number",
lw=2,
legendfontsize=8,
xtickfontsize=10,
ytickfontsize=10,
dpi=100)
```
```python
import sys
if not '..' in sys.path:
sys.path.insert(0, '..')
import control
import sympy
import numpy as np
import matplotlib.pyplot as plt
import ulog_tools as ut
import ulog_tools.control_opt as opt
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
# System Identification
```python
log_file = ut.ulog.download_log('http://review.px4.io/download?log=35b27fdb-6a93-427a-b634-72ab45b9609e', '/tmp')
data = ut.sysid.prepare_data(log_file)
res = ut.sysid.attitude_sysid(data)
res
```
{'pitch': {'model': {'delay': 0.051503721021908873,
'f_s': 232.99287433805779,
'fit': 0.57135828183122261,
'gain': 29.538707721830491,
'sample_delay': 12},
't_end': 80,
't_start': 75},
'roll': {'model': {'delay': 0.072963604781037569,
'f_s': 232.99287433805779,
'fit': 0.80970246292599435,
'gain': 45.686710321167887,
'sample_delay': 17},
't_end': 105,
't_start': 100},
'yaw': {'model': {'delay': 0.11159139554746923,
'f_s': 232.99287433805779,
'fit': 0.87415539224859207,
'gain': 41.274521147805387,
'sample_delay': 26},
't_end': 5,
't_start': 0}}
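Each entry can be read as a static gain acting through a pure input delay. As a rough sketch of what that implies for the roll axis (the gain-plus-delay plant structure here is my reading of the `gain`/`delay` fields, not something exported by `ulog_tools`), using the `control` package imported above with a first-order Padé approximation of the delay:
```python
# Sketch: identified roll model as G(s) ~ gain * e^(-delay*s)
m = res['roll']['model']
num, den = control.pade(m['delay'], 1)  # 1st-order Pade approximation of the delay
G = m['gain'] * control.tf(num, den)
print(G)
```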
# Continuous Time Optimization
```python
opt.attitude_loop_design(res['roll']['model'], 'ROLL')
```
```python
opt.attitude_loop_design(res['pitch']['model'], 'PITCH')
```
{'MC_PITCHRATE_D': 0.015662896675004697,
'MC_PITCHRATE_I': 0.48847645640076243,
'MC_PITCHRATE_P': 0.51104029619426683,
'MC_PITCH_P': 5.8666514695501988}
```python
opt.attitude_loop_design(res['yaw']['model'], 'YAW')
```
{'MC_YAWRATE_D': 0.017251069591687748,
'MC_YAWRATE_I': 0.19498248018478978,
'MC_YAWRATE_P': 0.18924319337905329,
'MC_YAW_P': 3.598452484267229}
# Jupyter notebook example
## Simple plots
Loading the necessary modules (maybe _numpy_ is superseded by _scipy_)
```python
import numpy as npy
import scipy as scy
# import sympy as spy
# import timeit
```
The sine function, $\sin(t)$, and its Fourier-transformation.
$$f_{max}=\frac1{2\Delta t},\quad \Delta f=\frac1{t_{max}}$$
```python
# tmax, t = 200/4, scy.linspace(0,tmax,201)
t = [ i/4 for i in range(201) ]
y = [ npy.sin(t[i]/10*2*npy.pi) for i in range(201) ]
Y = npy.fft.fft(y)
Y = abs(Y[:101])*2/200
f = npy.linspace(0,1/t[1]/2,101)
```
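With $\Delta t = 0.25\,$s and $t_{max} = 50\,$s these give $f_{max} = 2$ Hz and $\Delta f = 0.02$ Hz, which a quick check confirms:
```python
# Quick check of the resolution formulas (sketch)
dt, tmax = t[1], t[-1]
print('f_max =', 1/(2*dt), 'Hz,  df =', 1/tmax, 'Hz')
```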
### Plotting the graphs
```python
%matplotlib inline
import matplotlib.pyplot as plt
```
```python
plt.figure(1,figsize=(10,4))
plt.subplot(121)
plt.plot(t,y,'g.')
plt.title('$\sin(t)$')
plt.xlabel('$t$[s]')
plt.grid('on')
plt.subplot(122)
plt.plot(f,Y,'b')
plt.title('FFT')
plt.xlabel(r'$\nu$[Hz]')
plt.grid('on')
plt.show()
```
## Phase-plane method
### First-order differential equations
$$\begin{array}{rcl}
\dot x &=& -(a^2x^2+y^2-1)y\\
\dot y &=& (x^2+a^2y^2-1)x
\end{array}
$$
### Nullclines and phase portrait
```python
# %matplotlib inline
from pylab import *
```
```python
a = 2
x, y = meshgrid(arange(-1.5, 1.5, 0.1), arange(-1.5, 1.5, 0.1))
fxy = -((a*x)**2+y**2-1)*y
gxy = (x**2+(a*y)**2-1)*x
streamplot(x, y, fxy, gxy)
# f(x,y)=0 curves: ellipse + straight line
contour(x,y,fxy,1,colors="red")
# g(x,y)=0 curves: ellipse + straight line
contour(x,y,gxy,1,colors="green")
grid(); show()
```
## Benchmarking
```python
import time
import timeit
```
```python
# benchmark from https://devblogs.nvidia.com/drop-in-acceleration-gnu-octave/
N = 8192
# def f(N):
A = npy.float32(npy.random.rand(N,N))
B = npy.float32(npy.random.rand(N,N))
# return A @ B
#tic = time.clock()
#print(tic)
#C = A @ B
etime = timeit.timeit('A@B',globals=globals(),number=1)  # matrix product, matching the 2*N^3 flop count below
#toc = time.clock()
#print(toc)
elapsedTime = etime #toc-tic
# elapsedTime = etime(clock(), start);
print("Elapsed time: {0:.2g} sec (".format(elapsedTime))
gFlops = 2*N*N*N/(elapsedTime * 1e+9)
# disp(gFlops);
print("{0:.2f} GFlops)\n".format(gFlops))
```
Elapsed time: 0.64 sec (
1713.92 GFlops)
<a id='heavy-tails'></a>
# Heavy-Tailed Distributions
<a id='index-0'></a>
## Contents
- [Heavy-Tailed Distributions](#Heavy-Tailed-Distributions)
- [Overview](#Overview)
- [Visual Comparisons](#Visual-Comparisons)
- [Failure of the LLN](#Failure-of-the-LLN)
- [Classifying Tail Properties](#Classifying-Tail-Properties)
- [Exercises](#Exercises)
- [Solutions](#Solutions)
In addition to what’s in Anaconda, this lecture will need the following libraries:
```python
!pip install --upgrade quantecon --user
!pip install --upgrade yfinance --user
```
## Overview
Most commonly used probability distributions in classical statistics and
the natural sciences have either bounded support or light tails.
When a distribution is light-tailed, extreme observations are rare and
draws tend not to deviate too much from the mean.
Having internalized these kinds of distributions, many researchers and
practitioners use rules of thumb such as “outcomes more than four or five
standard deviations from the mean can safely be ignored.”
However, some distributions encountered in economics have far more probability
mass in the tails than distributions like the normal distribution.
With such **heavy-tailed** distributions, what would be regarded as extreme
outcomes for someone accustomed to thin tailed distributions occur relatively
frequently.
Examples of heavy-tailed distributions observed in economic and financial
settings include
- the income distributions and the wealth distribution (see, e.g., [[Vil96]](zreferences.ipynb#pareto1896cours), [[BB18]](zreferences.ipynb#benhabib2018skewed)),
- the firm size distribution ([[Axt01]](zreferences.ipynb#axtell2001zipf), [[Gab16]](zreferences.ipynb#gabaix2016power)),
- the distribution of returns on holding assets over short time horizons ([[Man63]](zreferences.ipynb#mandelbrot1963variation), [[Rac03]](zreferences.ipynb#rachev2003handbook)), and
- the distribution of city sizes ([[RRGM11]](zreferences.ipynb#rozenfeld2011area), [[Gab16]](zreferences.ipynb#gabaix2016power)).
These heavy tails turn out to be important for our understanding of economic outcomes.
As one example, the heaviness of the tail in the wealth distribution is one
natural measure of inequality.
It matters for taxation and redistribution
policies, as well as for flow-on effects for productivity growth, business
cycles, and political economy
- see, e.g., [[AR02]](zreferences.ipynb#acemoglu2002political), [[GSS03]](zreferences.ipynb#glaeser2003injustice), [[BEGS18]](zreferences.ipynb#bhandari2018inequality) or [[AKM+18]](zreferences.ipynb#ahn2018inequality).
This lecture formalizes some of the concepts introduced above and reviews the
key ideas.
Let’s start with some imports:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
The following two lines can be added to avoid an annoying FutureWarning, and prevent a specific compatibility issue between pandas and matplotlib from causing problems down the line:
```python
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
```
## Visual Comparisons
One way to build intuition on the difference between light and heavy tails is
to plot independent draws and compare them side-by-side.
### A Simulation
The figure below shows a simulation. (You will be asked to replicate it in
the exercises.)
The top two subfigures each show 120 independent draws from the normal distribution, which is light-tailed.
The bottom subfigure shows 120 independent draws from [the Cauchy distribution](https://en.wikipedia.org/wiki/Cauchy_distribution), which is heavy-tailed.
<a id='light-heavy-fig1'></a>
In the top subfigure, the standard deviation of the normal distribution is 2,
and the draws are clustered around the mean.
In the middle subfigure, the standard deviation is increased to 12 and, as expected, the amount of dispersion rises.
The bottom subfigure, with the Cauchy draws, shows a
different pattern: tight clustering around the mean for the great majority of
observations, combined with a few sudden large deviations from the mean.
This is typical of a heavy-tailed distribution.
### Heavy Tails in Asset Returns
Next let’s look at some financial data.
Our aim is to plot the daily change in the price of Amazon (AMZN) stock for
the period from 1st January 2015 to 1st November 2019.
This equates to daily returns if we set dividends aside.
The code below produces the desired plot using Yahoo financial data via the `yfinance` library.
```python
import yfinance as yf
import pandas as pd
s = yf.download('AMZN', '2015-1-1', '2019-11-1')['Adj Close']
r = s.pct_change()
fig, ax = plt.subplots()
ax.plot(r, linestyle='', marker='o', alpha=0.5, ms=4)
ax.vlines(r.index, 0, r.values, lw=0.2)
ax.set_ylabel('returns', fontsize=12)
ax.set_xlabel('date', fontsize=12)
plt.show()
type(r)
```
Five of the 1217 observations are more than 5 standard
deviations from the mean.
Overall, the figure is suggestive of heavy tails,
although not to the same degree as the Cauchy distribution in the
figure above.
If, however, one takes tick-by-tick data rather
than daily data, the heavy-tailedness of the distribution increases further.
## Failure of the LLN
One impact of heavy tails is that sample averages can be poor estimators of
the underlying mean of the distribution.
To understand this point better, recall [our earlier discussion](lln_clt.ipynb) of the Law of Large Numbers, which considered IID $ X_1,
\ldots, X_n $ with common distribution $ F $.
If $ \mathbb E |X_i| $ is finite, then
the sample mean $ \bar X_n := \frac{1}{n} \sum_{i=1}^n X_i $ satisfies
<a id='equation-lln-as2'></a>
$$
\mathbb P \left\{ \bar X_n \to \mu \text{ as } n \to \infty \right\} = 1 \tag{1}
$$
where $ \mu := \mathbb E X_i = \int x \, dF(x) $ is the common mean of the sample.
The condition $ \mathbb E | X_i | = \int |x| \, dF(x) < \infty $ holds
in most cases but can fail if the distribution $ F $ is very heavy tailed.
For example, it fails for the Cauchy distribution.
Let’s have a look at the behavior of the sample mean in this case, and see
whether or not the LLN is still valid.
```python
from scipy.stats import cauchy
np.random.seed(1234)
N = 1_000
distribution = cauchy()
fig, ax = plt.subplots()
data = distribution.rvs(N)
# Compute sample mean at each n
sample_mean = np.empty(N)
for n in range(1, N):
sample_mean[n] = np.mean(data[:n])
# Plot
ax.plot(range(N), sample_mean, alpha=0.6, label='$\\bar X_n$')
ax.plot(range(N), np.zeros(N), 'k--', lw=0.5)
ax.legend()
plt.show()
```
The sequence shows no sign of converging.
Will convergence occur if we take $ n $ even larger?
The answer is no.
To see this, recall that the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_%28probability_theory%29) of the Cauchy distribution is
<a id='equation-lln-cch'></a>
$$
\phi(t) = \mathbb E e^{itX} = \int e^{i t x} f(x) dx = e^{-|t|} \tag{2}
$$
Using independence, the characteristic function of the sample mean becomes
$$
\begin{aligned}
\mathbb E e^{i t \bar X_n }
& = \mathbb E \exp \left\{ i \frac{t}{n} \sum_{j=1}^n X_j \right\}
\\
& = \mathbb E \prod_{j=1}^n \exp \left\{ i \frac{t}{n} X_j \right\}
\\
& = \prod_{j=1}^n \mathbb E \exp \left\{ i \frac{t}{n} X_j \right\}
= [\phi(t/n)]^n
\end{aligned}
$$
In view of [(2)](#equation-lln-cch), this is just $ e^{-|t|} $.
Thus, in the case of the Cauchy distribution, the sample mean itself has the very same Cauchy distribution, regardless of $ n $!
In particular, the sequence $ \bar X_n $ does not converge to any point.
<a id='cltail'></a>
## Classifying Tail Properties
To keep our discussion precise, we need some definitions concerning tail
properties.
We will focus our attention on the right hand tails of
nonnegative random variables and their distributions.
The definitions for
left hand tails are very similar and we omit them to simplify the exposition.
### Light and Heavy Tails
A distribution $ F $ on $ \mathbb R_+ $ is called **heavy-tailed** if
<a id='equation-defht'></a>
$$
\int_0^\infty \exp(tx) \, dF(x) = \infty \; \text{ for all } t > 0. \tag{3}
$$
We say that a nonnegative random variable $ X $ is **heavy-tailed** if its distribution $ F(x) := \mathbb P\{X \leq x\} $ is heavy-tailed.
This is equivalent to stating that its **moment generating function**
$ m(t) := \mathbb E \exp(t X) $ is infinite for all $ t > 0 $.
- For example, the lognormal distribution is heavy-tailed because its
moment generating function is infinite everywhere on $ (0, \infty) $.
A distribution $ F $ on $ \mathbb R_+ $ is called **light-tailed** if it is not heavy-tailed.
A nonnegative random variable $ X $ is **light-tailed** if its distribution $ F $ is light-tailed.
- Example: Every random variable with bounded support is light-tailed. (Why?)
- Example: If $ X $ has the exponential distribution, with cdf $ F(x) = 1 - \exp(-\lambda x) $ for some $ \lambda > 0 $, then its moment generating function is finite whenever $ t < \lambda $. Hence $ X $ is light-tailed.
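For the exponential example, the claim follows from a direct computation of the moment generating function:

$$
m(t) = \int_0^\infty e^{tx} \lambda e^{-\lambda x} \, dx = \frac{\lambda}{\lambda - t},
$$

which is finite for every $ t < \lambda $, so the distribution is not heavy-tailed.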
One can show that if $ X $ is light-tailed, then all of its moments are finite.
The contrapositive is that if some moment is infinite, then $ X $ is heavy-tailed.
The latter condition is not necessary, however.
- Example: the lognormal distribution is heavy-tailed but every moment is finite.
### Pareto Tails
One specific class of heavy-tailed distributions has been found repeatedly in
economic and social phenomena: the class of so-called power laws.
Specifically, given $ \alpha > 0 $, a nonnegative random variable $ X $ is said to have a **Pareto tail** with **tail index** $ \alpha $ if
<a id='equation-plrt'></a>
$$
\lim_{x \to \infty} x^\alpha \, \mathbb P\{X > x\} = c. \tag{4}
$$
Evidently [(4)](#equation-plrt) implies the existence of positive constants $ b $ and $ \bar x $ such that $ \mathbb P\{X > x\} \geq b x^{- \alpha} $ whenever $ x \geq \bar x $.
The implication is that $ \mathbb P\{X > x\} $ converges to zero no faster than $ x^{-\alpha} $.
In some sources, a random variable obeying [(4)](#equation-plrt) is said to have a **power law tail**.
The primary example is the **Pareto distribution**, which has distribution
<a id='equation-pareto'></a>
$$
F(x) =
\begin{cases}
1 - \left( \bar x/x \right)^{\alpha}
& \text{ if } x \geq \bar x
\\
0
& \text{ if } x < \bar x
\end{cases} \tag{5}
$$
for some positive constants $ \bar x $ and $ \alpha $.
It is easy to see that if $ X \sim F $, then $ \mathbb P\{X > x\} $ satisfies [(4)](#equation-plrt).
Thus, in line with the terminology, Pareto distributed random variables have a Pareto tail.
### Rank-Size Plots
One graphical technique for investigating Pareto tails and power laws is the so-called **rank-size plot**.
This kind of figure plots
log size against log rank of the population (i.e., location in the population
when sorted from smallest to largest).
Often just the largest 5 or 10% of observations are plotted.
For a sufficiently large number of draws from a Pareto distribution, the plot generates a straight line. For distributions with thinner tails, the data points are concave.
A discussion of why this occurs can be found in [[NOM04]](zreferences.ipynb#nishiyama2004estimation).
The figure below provides one example, using simulated data.
The rank-size plots shows draws from three different distributions: folded normal, chi-squared with 1 degree of freedom and Pareto.
In each case, the largest 5% of 1,000 draws are shown.
The Pareto sample produces a straight line, while the lines produced by the other samples are concave.
<a id='rank-size-fig1'></a>
## Exercises
### Exercise 1
Replicate [the figure presented above](#light-heavy-fig1) that compares normal and Cauchy draws.
Use `np.random.seed(11)` to set the seed.
```python
from scipy.stats import cauchy
np.random.seed(11)
N=100
var1=2
var2=12
distribution0 = np.random.randn(N) * var1
distribution1=np.random.randn(N) * var2
distribution2=cauchy()
datas = np.array([ distribution0, distribution1, distribution2.rvs(N)])
for data in datas:
fig, ax = plt.subplots()
ax.plot(data, linestyle='', marker='o', alpha=1, ms=2)
ax.vlines(list(range(N)), 0, data, lw=0.2)
plt.show()
```
### Exercise 2
Prove: If $ X $ has a Pareto tail with tail index $ \alpha $, then
$ \mathbb E[X^r] = \infty $ for all $ r \geq \alpha $.
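A proof sketch: for a nonnegative random variable, $ \mathbb E[X^r] = \int_0^\infty \mathbb P\{X^r > t\} \, dt $. Using the Pareto-tail bound $ \mathbb P\{X > x\} \geq b x^{-\alpha} $ for $ x \geq \bar x $ noted above,

$$
\mathbb E[X^r]
= \int_0^\infty \mathbb P\{X > t^{1/r}\} \, dt
\geq b \int_{\bar x^r}^\infty t^{-\alpha/r} \, dt
= \infty,
$$

since $ r \geq \alpha $ makes the exponent $ \alpha / r \leq 1 $.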
### Exercise 3
Repeat exercise 1, but replace the three distributions (two normal, one
Cauchy) with three Pareto distributions using different choices of
$ \alpha $.
For $ \alpha $, try 1.15, 1.5 and 1.75.
Use `np.random.seed(11)` to set the seed.
```python
from scipy.stats import pareto
import matplotlib.pyplot as plt
np.random.seed(11)
def plot_pareto(α,N):
distribution=pareto(α)
data =distribution.rvs(N)
fig, ax = plt.subplots()
ax.plot(data, linestyle='', marker='o', alpha=1, ms=2, label=f"α={α}")
ax.legend()
ax.vlines(list(range(N)), 0, data, lw=0.2)
plt.show()
N=100
αs=np.array([1.15,1.5,1.75])
for α in αs:
plot_pareto(α,N)
```
### Exercise 4
Replicate the rank-size plot figure [presented above](#rank-size-fig1).
Use `np.random.seed(13)` to set the seed.
```python
from scipy.stats import pareto, norm
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(13)
def plot(data):
#print(data)
log_size=np.sort(np.log(data))
log_size_rank=np.log(log_size.argsort()[::-1]+1)
fig, ax = plt.subplots()
ax.scatter(y=log_size,x=log_size_rank, marker='o', alpha=0.5)
ax.set_xlabel("log rank")
ax.set_ylabel("log size")
plt.show()
α=1
N=1000
datas=[abs(norm().rvs(N)),np.exp(norm().rvs(N)),pareto(α).rvs(N)]
for data in datas:
plot(data)
```
### Exercise 5
There is an ongoing argument about whether the firm size distribution should
be modeled as a Pareto distribution or a lognormal distribution (see, e.g.,
[[FDGA+04]](zreferences.ipynb#fujiwara2004pareto), [[KLS18]](zreferences.ipynb#kondo2018us) or [[ST19]](zreferences.ipynb#schluter2019size)).
This sounds esoteric but has real implications for a variety of economic
phenomena.
To illustrate this fact in a simple way, let us consider an economy with
100,000 firms, an interest rate of `r = 0.05` and a corporate tax rate of
15%.
Your task is to estimate the present discounted value of projected corporate
tax revenue over the next 10 years.
Because we are forecasting, we need a model.
We will suppose that
1. the number of firms and the firm size distribution (measured in profits) remain fixed and
1. the firm size distribution is either lognormal or Pareto.
Present discounted value of tax revenue will be estimated by
1. generating 100,000 draws of firm profit from the firm size distribution,
1. multiplying by the tax rate, and
1. summing the results with discounting to obtain present value.
The Pareto distribution is assumed to take the form [(5)](#equation-pareto) with $ \bar x = 1 $ and $ \alpha = 1.05 $.
(The value the tail index $ \alpha $ is plausible given the data [[Gab16]](zreferences.ipynb#gabaix2016power).)
To make the lognormal option as similar as possible to the Pareto option, choose its parameters such that the mean and median of both distributions are the same.
Note that, for each distribution, your estimate of tax revenue will be random because it is based on a finite number of draws.
To take this into account, generate 100 replications (evaluations of tax revenue) for each of the two distributions and compare the two samples by
- producing a [violin plot](https://en.wikipedia.org/wiki/Violin_plot) visualizing the two samples side-by-side and
- printing the mean and standard deviation of both samples.
For the seed use `np.random.seed(1234)`.
What differences do you observe?
(Note: a better approach to this problem would be to model firm dynamics and
try to track individual firms given the current distribution. We will discuss
firm dynamics in later lectures.)
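One possible solution sketch (my reading of the exercise, not an official solution): the Pareto distribution in (5) with $ \bar x = 1 $ has median $ 2^{1/\alpha} $ and mean $ \alpha/(\alpha - 1) $, so matching medians gives the lognormal location $ \mu = \ln(2^{1/\alpha}) $, and matching means then pins down $ \sigma $ via $ e^{\mu + \sigma^2/2} = \alpha/(\alpha - 1) $.
```python
# Sketch of one approach (parameter matching and discounting as described above)
np.random.seed(1234)

num_firms = 100_000
horizon, r_int, tax_rate = 10, 0.05, 0.15
discount = sum((1 + r_int)**(-t) for t in range(1, horizon + 1))  # annuity factor

# Lognormal parameters matched to the Pareto median and mean
α = 1.05
μ = np.log(2**(1/α))
σ = np.sqrt(2 * (np.log(α/(α - 1)) - μ))

draw_pareto = lambda n: np.random.pareto(α, n) + 1   # Pareto with x̄ = 1
draw_lognorm = lambda n: np.random.lognormal(μ, σ, n)

def pv_tax(draw):
    "Present value of 10 years of tax on one cross-section of firm profits."
    return tax_rate * draw(num_firms).sum() * discount

reps = 100
tax_pareto = np.array([pv_tax(draw_pareto) for _ in range(reps)])
tax_lognorm = np.array([pv_tax(draw_lognorm) for _ in range(reps)])

fig, ax = plt.subplots()
ax.violinplot([tax_lognorm, tax_pareto])
ax.set_xticks([1, 2])
ax.set_xticklabels(['lognormal', 'Pareto'])
plt.show()

for name, sample_ in [('lognormal', tax_lognorm), ('Pareto', tax_pareto)]:
    print(f'{name}: mean = {sample_.mean():.4g}, std = {sample_.std():.4g}')
```
Both samples target the same expected revenue, but the Pareto replications are typically far more dispersed: with a tail index barely above 1, a few giant firms dominate total profits, so the revenue estimate is much less stable.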
# Task Three: Quantum Gates and Circuits
```python
from qiskit import *
from qiskit.visualization import plot_bloch_multivector
```
## Pauli Matrices
\begin{align}
I = \begin{pmatrix} 1&0 \\ 0&1 \end{pmatrix}, \quad
X = \begin{pmatrix} 0&1 \\ 1&0 \end{pmatrix}, \quad
Y = \begin{pmatrix} 0&-i \\ i&0 \end{pmatrix}, \quad
Z = \begin{pmatrix} 1&0 \\ 0&-1 \end{pmatrix} \quad
\end{align}
## X-gate
The X-gate is represented by the Pauli-X matrix:
$$ X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = |0\rangle\langle1| + |1\rangle\langle0| $$
The effect this gate has on a qubit:
$$ X|0\rangle = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} = |1\rangle$$
```python
# Let's do an X-gate on a |0> qubit
qc=QuantumCircuit(1)
qc.x(0)
qc.draw('mpl')#mpl stands for the matplotlib argument
```
```python
# Let's see the result
backend = Aer.get_backend('statevector_simulator')
out = execute(qc, backend).result().get_statevector()
print(out)
```
[0.+0.j 1.+0.j]
## Z & Y-Gate
$$ Y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \quad\quad\quad\quad Z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} $$
$$ Y = -i|0\rangle\langle1| + i|1\rangle\langle0| \quad\quad Z = |0\rangle\langle0| - |1\rangle\langle1| $$
```python
# Do Y-gate on qubit 0
qc.y(0)
# Do Z-gate on qubit 0
qc.z(0)
qc.draw('mpl')
```
## Hadamard Gate
$$ H = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $$
We can see that this performs the transformations below:
$$ H|0\rangle = |+\rangle $$
$$ H|1\rangle = |-\rangle $$
```python
#create circuit with three qubit
qc = QuantumCircuit(3)
# Apply H-gate to each qubit:
for qubit in range(3):
qc.h(qubit)
# See the circuit:
qc.draw('mpl')
```
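A quick check (sketch): when H acts on a single $|0\rangle$ qubit, the resulting statevector should have equal amplitudes $1/\sqrt{2} \approx 0.7071$:
```python
# Verify H|0> = |+> on the statevector simulator (sketch)
qc_h = QuantumCircuit(1)
qc_h.h(0)
out = execute(qc_h, Aer.get_backend('statevector_simulator')).result().get_statevector()
print(out)  # expected: [0.70710678+0.j 0.70710678+0.j]
```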
## Identity Gate
$$
I = \begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix}
$$
$$ I = XX $$
```python
qc.i(0)
qc.draw('mpl')
```
**Other Gates: S-gate, T-gate, U-gate**
# That's all for Task 3
## Thank You!
Where can you find me?
LinkedIn : https://www.linkedin.com/in/arya--shah/
Twitter : https://twitter.com/aryashah2k
Github : https://github.com/aryashah2k
If you Like My Work, Follow me/ Connect with me on these platforms.
Show some Love ❤️ Sponsor me on Github!
# Robust Data-Driven Portfolio Diversification
### Francisco A. Ibanez
1. RPCA on the sample
2. Singular Value Hard Thresholding (SVHT)
3. Truncated SVD
4. Maximize portfolio effective bets - regularization, s.t.:
- Positivity constraint
- Leverage 1x
The combination of (1), (2), and (3) should limit the possible permutations of the J vector when doing the spectral risk parity.
## Methodology
The goal of the overall methodology is to arrive at a portfolio weights vector which provides a well-balanced portfolio exposure to each one of the spectral risk factors present in a given investable universe.
We start with the data set $X_{T \times N}$, which contains the historical excess returns for each one of the assets that span the investable universe of the portfolio. Before performing the eigendecomposition of $X$, we need to clean the set of noisy trading observations and outliers. We apply Robust Principal Component Analysis (RPCA) on $X$ to achieve this, which seeks to decompose $X$ into a structured low-rank matrix $R$ and a sparse matrix $C$ containing outliers and corrupt data:
\begin{aligned}
X=R_0+C_0
\end{aligned}
The principal components of $R$ are robust to outliers and corrupt data in $C$. Mathematically, the goal is to find $R$ and $C$ that satisfy the following:
\begin{aligned}
\min_{R,C} ||R||_{*} + \lambda ||C||_{1} \\
\text{subject to} \\ R + C = X
\end{aligned}
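In the code below, a common default from the RPCA literature, $\lambda = 1/\sqrt{\max(T, N)}$, is scaled up by a factor of 4 as a tuning choice.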
```python
import pandas as pd
import numpy as np
from rpca import RobustPCA
import matplotlib.pyplot as plt
from scipy.linalg import svd
from optht import optht
raw = pd.read_pickle('etf_er.pkl')
sample = raw.dropna()  # Working with a balanced panel for now
# Outlier detection & cleaning
X = (sample - sample.mean()).div(sample.std()).values # Normalization
t, n = X.shape
lmb = 4 / np.sqrt(max(t, n)) # Hyper-parameter
rob = RobustPCA(lmb=lmb, max_iter=int(1E6))
R, C = rob.fit(X) # Robust, Corrupted
# Low-rank representation (compression) through hard thresholding Truncated-SVD
U, S, Vh = svd(R, full_matrices=False, compute_uv=True, lapack_driver='gesdd')
S = np.diag(S)
k = optht(X, sv=np.diag(S), sigma=None)
V = Vh.T
Vhat = V.copy()
Vhat[:, k:] = 0
Shat = S.copy()
Shat[k:, k:] = 0
cum_energy = np.cumsum(np.diag(S)) / np.sum(np.diag(S))
print(f'SVHT: {k}, {round(cum_energy[k] * 100, 2)}% of energy explained')
```
SVHT: 8, 58.43% of energy explained
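The next cells compute the portfolio's *effective number of bets* for the equal-weighted portfolio. Writing $D$ for the diagonal matrix of asset volatilities, the spectral factor exposures are $\tilde{w} = V^{T}Dw$, each factor's share of portfolio variance is

$$
p_{i} = \frac{\tilde{w}_{i}^{2}\,S_{ii}^{2}}{w^{T}DR^{T}RDw},
$$

and diversification is summarized by the entropy measure (in the spirit of Meucci's effective number of bets)

$$
\eta(w) = \exp\Big(-\sum_{i} p_{i}\ln p_{i}\Big),
$$

which ranges from 1 (all variance from a single spectral factor) to $N$ (variance spread evenly across factors).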
```python
D = np.diag(sample.std().values)
t, n = X.shape
w = np.array([1 / n] * n).reshape(-1, 1)
eigen_wts = V.T @ D @ w
p = np.divide(np.diag(eigen_wts.flatten()) @ S.T @ S @ eigen_wts, w.T @ D @ R.T @ R @ D @ w)
p = p.flatten()
eta_p = np.exp(-np.sum(np.multiply(p, np.log(p))))
eta_p
```
1.4535327279694732
```python
def effective_bets(weights, singular_values_matrix, eigen_vector_matrix, volatilities, k=None):
w = weights.reshape(-1, 1)
eigen_wts = eigen_vector_matrix.T @ np.diag(volatilities) @ w
p = (np.diag(eigen_wts.flatten()) @ singular_values_matrix.T @ singular_values_matrix @ eigen_wts).flatten()
    if k is not None:
        p = p[:k]
p_norm = np.divide(p, p.sum())
eta = np.exp(-np.sum(np.multiply(p_norm, np.log(p_norm))))
return eta
effective_bets(np.array([1 / n] * n), S, V, sample.std().values)
```
1.4535327279694745
```python
def objfunc(weights, singular_values_matrix, eigen_vector_matrix, volatilities, k=None):
return -effective_bets(weights, singular_values_matrix, eigen_vector_matrix, volatilities, k)
# Testing if minimizing p.T @ p yields the same results as maximizing the effective numbers of bets
def objfunc2(weights, singular_values_matrix, eigen_vector_matrix, volatilities, k=None):
w = weights.reshape(-1, 1)
eigen_wts = eigen_vector_matrix.T @ np.diag(volatilities) @ w
p = np.diag(eigen_wts.flatten()) @ singular_values_matrix.T @ singular_values_matrix @ eigen_wts
    if k is not None:
        p = p[:k]
n = p.shape[0]
p_norm = np.divide(p, p.sum())
c = np.divide(np.ones((n, 1)), n)
return ((p_norm - c).T @ (p_norm - c)).item()
```
```python
# POSITIVE ONLY
from scipy.optimize import minimize
cons = (
{'type': 'ineq', 'fun': lambda x: x},
{'type': 'ineq', 'fun': lambda x: np.sum(x) - 1}
)
opti = minimize(
fun=objfunc,
x0=np.array([1 / n] * n),
args=(S, V, sample.std().values),
constraints=cons,
method='SLSQP',
tol=1E-12,
options={'maxiter': 1E9}
)
w_star = opti.x
w_star /= w_star.sum()
pd.Series(w_star, index=sample.columns).plot.bar()
print(-opti.fun)
```
```python
# UNCONSTRAINED
from scipy.optimize import minimize
cons = (
{'type': 'ineq', 'fun': lambda x: x},
{'type': 'ineq', 'fun': lambda x: np.sum(x) - 1}
)
opti = minimize(
fun=objfunc,
x0=np.array([1 / n] * n),
args=(S, V, sample.std().values),
# constraints=cons,
method='SLSQP',
tol=1E-12,
options={'maxiter': 1E9}
)
w_star = opti.x
w_star /= w_star.sum()
pd.Series(w_star, index=sample.columns).plot.bar()
print(-opti.fun)
```
```python
eigen_wts = V.T @ np.diag(sample.std().values) @ w_star.reshape(-1, 1)
p = (np.diag(eigen_wts.flatten()) @ S.T @ S @ eigen_wts).flatten()
p = np.divide(p, p.sum())
pd.Series(p).plot.bar()
```
```python
# RC.T @ RC... different?
cons = (
{'type': 'ineq', 'fun': lambda x: x},
{'type': 'ineq', 'fun': lambda x: np.sum(x) - 1}
)
opti = minimize(
fun=objfunc2,
x0=np.array([1 / n] * n),
args=(S, V, sample.std().values),
constraints=cons,
method='SLSQP',
tol=1E-12,
options={'maxiter': 1E9}
)
w_star = opti.x
w_star /= w_star.sum()
#pd.Series(w_star, index=sample.columns).plot.bar()
print(effective_bets(w_star, S, V, sample.std().values))
S, V, sample.std().values
eigen_wts = V.T @ np.diag(sample.std().values) @ w_star.reshape(-1, 1)
p = (np.diag(eigen_wts.flatten()) @ S.T @ S @ eigen_wts).flatten()
p = np.divide(p, p.sum())
pd.Series(p).plot.bar()
```
```python
np.array([1, 2, 3]).shape
```
(3,)
```python
```
```python
D = np.diag(sample.std().values)
n = sample.shape[0]
Sigma = 1 / (n - 1) * D @ X.T @ X @ D
Sigma_b = 1 / (n - 1) * D @ (R + C).T @ (R + C) @ D
pd.DataFrame(R.T @ C)
pd.DataFrame(R.T @ C) + pd.DataFrame(C.T @ R)
pd.DataFrame(R.T @ R) + pd.DataFrame(C.T @ C)
```
\begin{aligned}
X &= R + C \\
R &= USV^{T}
\end{aligned}
using the Singular Value Hard Thresholding (SVHT) obtained above we can approximate $R$:
\begin{aligned}
R &\approx \tilde{U}\tilde{S}\tilde{V}^{T}
\end{aligned}
Check the algebra so everything adds up and the original matrix $X$ can be recovered from this point.
\begin{aligned}
\Sigma &= \frac{1}{(n - 1)}DX^{T}XD \\
\Sigma &= \frac{1}{(n - 1)}D(R + C)^{T}(R + C)D
\end{aligned}
then, portfolio risk will be given by:
\begin{aligned}
w^{T}\Sigma w &= \frac{1}{(n - 1)}w^{T}D(R + C)^{T}(R + C)D w \\
w^{T}\Sigma w &= \frac{1}{(n - 1)}w^{T}D(R^{T}R + R^{T}C + C^{T}R + C^{T}C ) D w \\
\end{aligned}
\begin{aligned}
w^{T}\Sigma w &= \frac{1}{(n - 1)} \lbrack w^{T}D(R^{T}R)Dw + w^{T} D(R^{T}C + C^{T}R + C^{T}C ) D w \rbrack
\end{aligned}
Taking the Singular Value Decomposition of R
\begin{aligned}
R &= USV^{T} \\
\end{aligned}
we can express R in terms of its singular values and eigenvectors:
\begin{aligned}
w^{T}\Sigma w &= (n - 1)^{-1} \lbrack w^{T}D(VSU^{T}USV^{T})Dw + w^{T} D(R^{T}C + C^{T}R + C^{T}C) D w \rbrack \\
w^{T}\Sigma w &= (n - 1)^{-1} \lbrack w^{T}D(V S^{2} V^{T})Dw + w^{T} D(R^{T}C + C^{T}R + C^{T}C) D w \rbrack
\end{aligned}
where $S^{2}$ contains the eigenvalues of $R$ in its diagonal entries
\begin{aligned}
w^{T}\Sigma w &= (n - 1)^{-1} \lbrack \underbrace{w^{T}DV S^{2} V^{T}Dw}_\text{Robust Component}
+ \underbrace{w^{T} D(R^{T}C + C^{T}R + C^{T}C) D w}_\text{Noisy Component} \rbrack
\end{aligned}
TO DO: There has to be a way of reducing the noisy component, or at least of interpreting/explaining it.
The portfolio risk contribution is then given by
\begin{aligned}
\text{diag}(w)\Sigma w &= (n - 1)^{-1} \lbrack \underbrace{\theta}_\text{Robust Component}
+ \underbrace{\gamma}_\text{Noisy Component} \rbrack \\
\theta &= \text{diag}(V^{T}Dw)S^{2} V^{T}Dw
\end{aligned}
\begin{align}
\eta (w) & \equiv \exp \left( -\sum^{N}_{n=1} p_{n} \ln{(p_{n})} \right)
\end{align}
Now we look for:
\begin{align}
\arg \max_{w} \eta(w)
\end{align}
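As a sanity check, the robust component $\theta$ of the risk contribution can be computed directly; a minimal sketch in Python, assuming `V`, `S`, `D`, and the equal-weight vector `w` from the cells above are still in memory and that all contributions are strictly positive:

```python
# Robust risk contribution: theta = diag(V^T D w) S^2 (V^T D w)
eigen_wts = V.T @ D @ w                          # portfolio weights in the eigenbasis
theta = np.diag(eigen_wts.flatten()) @ (S.T @ S) @ eigen_wts
p_norm = theta / theta.sum()                     # normalized contributions p_n
eta = np.exp(-np.sum(p_norm * np.log(p_norm)))   # effective number of bets
```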
# The Laplace Transform
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*
## Theorems
The theorems of the Laplace transformation relate basic time-domain operations to their equivalents in the Laplace domain. They are of use for the computation of Laplace transforms of signals composed from modified [standard signals](../continuous_signals/standard_signals.ipynb) and for the computation of the response of systems to an input signal. The theorems allow further to predict the consequences of modifying a signal or system by certain operations.
### Temporal Scaling Theorem
A signal $x(t)$ is given for which the Laplace transform $X(s) = \mathcal{L} \{ x(t) \}$ exists. The Laplace transform of the [temporally scaled signal](../continuous_signals/operations.ipynb#Temporal-Scaling) $x(a t)$ with $a \in \mathbb{R} \setminus \{0\}$ reads
\begin{equation}
\mathcal{L} \{ x(a t) \} = \frac{1}{|a|} \cdot X \left( \frac{s}{a} \right)
\end{equation}
The Laplace transformation of a temporally scaled signal is given by weighting the inversely scaled Laplace transform of the unscaled signal with $\frac{1}{|a|}$. The scaling of the Laplace transform can be interpreted as a scaling of the complex $s$-plane. The region of convergence (ROC) of the temporally scaled signal $x(a t)$ is consequently the inversely scaled ROC of the unscaled signal $x(t)$
\begin{equation}
\text{ROC} \{ x(a t) \} = \left\{ s: \frac{s}{a} \in \text{ROC} \{ x(t) \} \right\}
\end{equation}
Above relation is known as scaling theorem of the Laplace transform. The scaling theorem can be proven by introducing the scaled signal $x(a t)$ into the definition of the Laplace transformation
\begin{equation}
\mathcal{L} \{ x(a t) \} = \int_{-\infty}^{\infty} x(a t) \, e^{- s t} \; dt = \frac{1}{|a|} \int_{-\infty}^{\infty} x(t') \, e^{-\frac{s}{a} t'} \; dt' = \frac{1}{|a|} \cdot X \left( \frac{s}{a} \right)
\end{equation}
where the substitution $t' = a t$ was used. Note that a negative value of $a$ would result in a reversal of the integration limits. In this case a second reversal of the integration limits together with the sign of the integration element $d t'= a \, dt$ was consolidated into the absolute value of $a$.
### Convolution Theorem
The convolution theorem states that the Laplace transform of the convolution of two signals $x(t)$ and $y(t)$ is equal to the scalar multiplication of their Laplace transforms $X(s)$ and $Y(s)$
\begin{equation}
\mathcal{L} \{ x(t) * y(t) \} = X(s) \cdot Y(s)
\end{equation}
under the assumption that both Laplace transforms $X(s) = \mathcal{L} \{ x(t) \}$ and $Y(s) = \mathcal{L} \{ y(t) \}$ exist, respectively. The ROC of the convolution $x(t) * y(t)$ includes at least the intersection of the ROCs of $x(t)$ and $y(t)$
\begin{equation}
\text{ROC} \{ x(t) * y(t) \} \supseteq \text{ROC} \{ x(t) \} \cap \text{ROC} \{ y(t) \}
\end{equation}
The theorem can be proven by introducing the [definition of the convolution](../systems_time_domain/convolution.ipynb) into the [definition of the Laplace transform](definition.ipynb) and changing the order of integration
\begin{align}
\mathcal{L} \{ x(t) * y(t) \} &= \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} x(\tau) \cdot y(t-\tau) \; d \tau \right) e^{-s t} \; dt \\
&= \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} y(t-\tau) \, e^{-s t} \; dt \right) x(\tau) \; d\tau \\
&= Y(s) \cdot \int_{-\infty}^{\infty} x(\tau) \, e^{-s \tau} \; d \tau \\
&= Y(s) \cdot X(s)
\end{align}
The convolution theorem is very useful in the context of linear time-invariant (LTI) systems. The output signal $y(t)$ of an LTI system is given as the convolution of the input signal $x(t)$ with the impulse response $h(t)$. The signals can be represented either in the time or Laplace domain. This leads to the following equivalent representations of an LTI system in the time and Laplace domain, respectively
Calculation of the system response by transforming the problem into the Laplace domain can be beneficial since this replaces the evaluation of the convolution integral by a scalar multiplication. In many cases this procedure simplifies the calculation of the system response significantly. A prominent example is the [analysis of a passive electrical network](network_analysis.ipynb). The convolution theorem can also be useful to derive an unknown Laplace transform. The key is here to express the signal as convolution of two other signals for which the Laplace transforms are known. This is illustrated by the following example.
**Example**
The Laplace transform of the convolution of a causal cosine signal $\epsilon(t) \cdot \cos(\omega_0 t)$ with a causal sine signal $\epsilon(t) \cdot \sin(\omega_0 t)$ is derived by the convolution theorem
\begin{equation}
\mathcal{L} \{ [\epsilon(t) \cdot \cos(\omega_0 t)] * [\epsilon(t) \cdot \sin(\omega_0 t)] \}
= \frac{s}{s^2 + \omega_0^2} \cdot \frac{\omega_0}{s^2 + \omega_0^2}
= \frac{\omega_0 s}{(s^2 + \omega_0^2)^2}
\end{equation}
where the [Laplace transforms of the causal cosine and sine signals](properties.ipynb#Transformation-of-the-cosine-and-sine-signal) were used. The ROC of the causal cosine and sine signal is $\Re \{ s \} > 0$. The ROC for their convolution is also $\Re \{ s \} > 0$, since no poles and zeros cancel out. Above Laplace transform has one zero $s_{00} = 0$, and two poles of second degree $s_{\infty 0} = s_{\infty 1} = j \omega_0$ and $s_{\infty 2} = s_{\infty 3} = - j \omega_0$.
This example is evaluated numerically in the following. First the convolution of the causal cosine and sine signal is computed
```python
%matplotlib inline
import sympy as sym
sym.init_printing()
t, tau = sym.symbols('t tau', real=True)
s = sym.symbols('s', complex=True)
w0 = sym.symbols('omega0', positive=True)
x = sym.integrate(sym.cos(w0*tau) * sym.sin(w0*(t-tau)), (tau, 0, t))
x = x.doit()
x
```
For the sake of illustration let's plot the signal for $\omega_0 = 1$
```python
sym.plot(x.subs(w0, 1), (t, 0, 50), xlabel=r'$t$', ylabel=r'$x(t)$');
```
The Laplace transform is computed
```python
X, a, cond = sym.laplace_transform(x, t, s)
X, a
```
which exists for $\Re \{ s \} > 0$. Its zeros are given as
```python
sym.roots(sym.numer(X), s)
```
and its poles as
```python
sym.roots(sym.denom(X), s)
```
### Temporal Shift Theorem
The [temporal shift of a signal](../continuous_signals/operations.ipynb#Temporal-Shift) $x(t - \tau)$ for $\tau \in \mathbb{R}$ can be expressed by the convolution of the signal $x(t)$ with a shifted Dirac impulse
\begin{equation}
x(t - \tau) = x(t) * \delta(t - \tau)
\end{equation}
This follows from the sifting property of the Dirac impulse. Applying a two-sided Laplace transform to the left- and right-hand side and exploiting the convolution theorem yields
\begin{equation}
\mathcal{L} \{ x(t - \tau) \} = X(s) \cdot e^{- s \tau}
\end{equation}
where $X(s) = \mathcal{L} \{ x(t) \}$ is assumed to exist. Note that $\mathcal{L} \{ \delta(t - \tau) \} = e^{- s \tau}$ can be derived from the definition of the two-sided Laplace transform together with the sifting property of the Dirac impulse. The Laplace transform of a shifted signal is given by multiplying the Laplace transform of the original signal with $e^{- s \tau}$. The ROC does not change
\begin{equation}
\text{ROC} \{ x(t-\tau) \} = \text{ROC} \{ x(t) \}
\end{equation}
This result is known as shift theorem of the Laplace transform. For a causal signal $x(t)$ and $\tau > 0$ the shift theorem of the one-sided Laplace transform is equal to the shift theorem of the two-sided transform.
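The shift theorem can be checked with `SymPy` for a concrete case, here the Heaviside signal delayed by $\tau = 2$, reusing the symbols defined above. We expect $\frac{1}{s} e^{-2 s}$:

```python
sym.laplace_transform(sym.Heaviside(t - 2), t, s)
```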
#### Transformation of the rectangular signal
The Laplace transform of the [rectangular signal](../continuous_signals/standard_signals.ipynb#Rectangular-Signal) $x(t) = \text{rect}(t)$ is derived by expressing it by the Heaviside signal
\begin{equation}
\text{rect}(t) = \epsilon \left(t + \frac{1}{2} \right) - \epsilon \left(t - \frac{1}{2} \right)
\end{equation}
Applying the shift theorem to the [transform of the Heaviside signal](definition.ipynb#Transformation-of-the-Heaviside-Signal) and the linearity of the Laplace transform yields
\begin{equation}
\mathcal{L} \{ \text{rect}(t) \} = \frac{1}{s} e^{s \frac{1}{2}} - \frac{1}{s} e^{- s \frac{1}{2}} = \frac{\sinh \left( \frac{s}{2} \right) }{\frac{s}{2}}
\end{equation}
where $\sinh(\cdot)$ denotes the [hyperbolic sine function](https://en.wikipedia.org/wiki/Hyperbolic_function#Sinh). The ROC of the Heaviside signal is given as $\Re \{ s \} > 0$. Applying [l'Hopitals rule](https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule) the pole at $s=0$ can be disregarded leading to
\begin{equation}
\text{ROC} \{ \text{rect}(t) \} = \mathbb{C}
\end{equation}
For illustration, the magnitude of the Laplace transform $|X(s)|$ is plotted in the $s$-plane, as well as $X(\sigma)$ and $X(j \omega)$ for the real and imaginary part of the complex frequency $s = \sigma + j \omega$.
```python
sigma, omega = sym.symbols('sigma omega')
X = sym.sinh(s/2)*2/s
sym.plotting.plot3d(abs(X.subs(s, sigma+sym.I*omega)), (sigma, -5, 5), (omega, -20, 20),
xlabel=r'$\Re\{s\}$', ylabel=r'$\Im\{s\}$', title=r'$|X(s)|$')
sym.plot(X.subs(s, sigma) , (sigma, -5, 5), xlabel=r'$\Re\{s\}$', ylabel=r'$X(s)$', ylim=(0, 3))
sym.plot(X.subs(s, sym.I*omega) , (omega, -20, 20), xlabel=r'$\Im\{s\}$', ylabel=r'$X(s)$');
```
**Exercise**
* Derive the Laplace transform $X(s) = \mathcal{L} \{ x(t) \}$ of the causal rectangular signal $x(t) = \text{rect} (a t - \frac{1}{2 a})$
* Derive the Laplace transform of the [triangular signal](../fourier_transform/theorems.ipynb#Transformation-of-the-triangular-signal) $x(t) = \Lambda(a t)$ with $a \in \mathbb{R} \setminus \{0\}$
### Differentiation Theorem
Derivatives of signals are the fundamental operations of differential equations. Ordinary differential equations (ODEs) with constant coefficients play an important role in the theory of linear time-invariant (LTI) systems. Consequently, the representation of the derivative of a signal in the Laplace domain is of special interest.
#### Two-sided transform
A differentiable signal $x(t)$ whose temporal derivative $\frac{d x(t)}{dt}$ exists is given. Using the [derivation property of the Dirac impulse](../continuous_signals/standard_signals.ipynb#Dirac-Impulse), the derivative of the signal can be expressed by the convolution
\begin{equation}
\frac{d x(t)}{dt} = \frac{d \delta(t)}{dt} * x(t)
\end{equation}
Applying a two-sided Laplace transformation to the left- and right-hand side together with the [convolution theorem](#Convolution-Theorem) yields the Laplace transform of the derivative of $x(t)$
\begin{equation}
\mathcal{L} \left\{ \frac{d x(t)}{dt} \right\} = s \cdot X(s)
\end{equation}
where $X(s) = \mathcal{L} \{ x(t) \}$. The two-sided Laplace transform $\mathcal{L} \{ \frac{d \delta(t)}{dt} \} = s$ can be derived by applying the definition of the Laplace transform together with the derivation property of the Dirac impulse. The ROC is given as a superset of the ROC for $x(t)$
\begin{equation}
\text{ROC} \left\{ \frac{d x(t)}{dt} \right\} \supseteq \text{ROC} \{ x(t) \}
\end{equation}
due to the zero at $s=0$ which may cancel out a pole.
Above result is known as differentiation theorem of the two-sided Laplace transform. It states that the differentiation of a signal in the time domain is equivalent to a multiplication of its spectrum by $s$.
#### One-sided transform
Many practical signals and systems are causal, hence $x(t) = 0$ for $t < 0$. A causal signal is potentially discontinuous for $t=0$. The direct application of above result for the two-sided Laplace transform is not possible since it assumes that the signal is differentiable for every time $t$. The potential discontinuity at $t=0$ has to be considered explicitly for the derivation of the differentiation theorem for the one-sided Laplace transform [[Girod et al.](index.ipynb#Literature)]
\begin{equation}
\mathcal{L} \left\{ \frac{d x(t)}{dt} \right\} = s \cdot X(s) - x(0+)
\end{equation}
where $x(0+) := \lim_{\epsilon \to 0} x(0+\epsilon)$ denotes the right sided limit value of $x(t)$ for $t=0$. The ROC is given as a superset of the ROC of $x(t)$
\begin{equation}
\text{ROC} \left\{ \frac{d x(t)}{dt} \right\} \supseteq \text{ROC} \{ x(t) \}
\end{equation}
due to the zero at $s=0$ which may cancel out a pole. The one-sided Laplace transform of a causal signal is equal to its two-sided transform. Above result holds therefore also for the two-sided transform of a causal signal.
The main application of the differentiation theorem is the transformation and solution of differential equations under consideration of initial values. Another application area is the derivation of transforms of signals which can be expressed as derivatives of other signals.
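As a quick symbolic check of the one-sided differentiation theorem, consider $x(t) = e^{-2t}$ for $t > 0$, with $X(s) = \frac{1}{s+2}$ and $x(0+) = 1$. The theorem predicts $s X(s) - 1 = \frac{-2}{s+2}$:

```python
sym.laplace_transform(sym.diff(sym.exp(-2*t), t), t, s)
```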
### Integration Theorem
An integrable signal $x(t)$ for which the integral $\int_{-\infty}^{t} x(\tau) \; d\tau$ exists is given. The integration can be represented as convolution with the Heaviside signal $\epsilon(t)$
\begin{equation}
\int_{-\infty}^{t} x(\tau) \; d\tau = \int_{-\infty}^{\infty} x(\tau) \cdot \epsilon(t - \tau) \; d\tau = \epsilon(t) * x(t)
\end{equation}
as illustrated below
Two-sided Laplace transformation of the left- and right-hand side of above equation, application of the convolution theorem and using the Laplace transform of the Heaviside signal $\epsilon(t)$ yields
\begin{equation}
\mathcal{L} \left\{ \int_{-\infty}^{t} x(\tau) \; d\tau \right\}
= \frac{1}{s} \cdot X(s)
\end{equation}
The ROC is given as a superset of the intersection of the ROC of $x(t)$ and the right $s$-half-plane
\begin{equation}
\text{ROC} \left\{ \int_{-\infty}^{t} x(\tau) \; d\tau \right\} \supseteq \text{ROC} \{ x(t) \} \cap \{s : \Re \{ s \} > 0\}
\end{equation}
due to the pole at $s=0$. This integration theorem holds also for the one-sided Laplace transform.
#### Transformation of the ramp signal
The Laplace transform of the causal [ramp signal](https://en.wikipedia.org/wiki/Ramp_function) $t \cdot \epsilon(t)$ is derived by the integration theorem. The ramp signal can be expressed as integration over the Heaviside signal
\begin{equation}
t \cdot \epsilon(t) = \int_{-\infty}^{t} \epsilon(\tau) \; d \tau
\end{equation}
Laplace transformation of the left- and right-hand side and application of the integration theorem together with the Laplace transform of the Heaviside signal yields
\begin{equation}
\mathcal{L} \{ t \cdot \epsilon(t) \} = \frac{1}{s^2}
\end{equation}
with
\begin{equation}
\text{ROC} \{ t \cdot \epsilon(t) \} = \{s : \Re \{ s \} > 0\}
\end{equation}
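This can be verified with `SymPy` (the one-sided transform assumes causality, so the ramp is simply `t`):

```python
X_ramp, a, cond = sym.laplace_transform(t, t, s)
X_ramp, a
```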
**Exercise**
* Derive the Laplace transform $X(s) = \mathcal{L} \{ x(t) \}$ of the signal $x(t) = t^n \cdot \epsilon(t)$ with $n \geq 0$ by repeated application of the integration theorem.
* Compare your result to the numerical result below. Note that $\Gamma(n+1) = n!$ for $n \in \mathbb{N}$.
```python
n = sym.symbols('n', integer=True)
X, a, cond = sym.laplace_transform(t**n, t, s)
X, a, cond
```
### Modulation Theorem
The complex modulation of a signal $x(t)$ is defined as $e^{s_0 t} \cdot x(t)$ with $s_0 \in \mathbb{C}$. The Laplace transform of a modulated signal is derived by introducing it into the definition of the two-sided Laplace transform
\begin{align}
\mathcal{L} \left\{ e^{s_0 t} \cdot x(t) \right\} &=
\int_{-\infty}^{\infty} e^{s_0 t} x(t) \, e^{-s t} \; dt =
\int_{-\infty}^{\infty} x(t) \, e^{- (s - s_0) t} \; dt \\
&= X(s-s_0)
\end{align}
where $X(s) = \mathcal{L} \{ x(t) \}$. Modulation of the signal $x(t)$ leads to a translation of the $s$-plane into the direction given by the complex value $s_0$. Consequently, the ROC is also shifted
\begin{equation}
\text{ROC} \{ e^{s_0 t} \cdot x(t) \} = \{s: s - \Re \{ s_0 \} \in \text{ROC} \{ x(t) \} \}
\end{equation}
This relation is known as modulation theorem.
**Example**
The Laplace transform of the signal $t^n \cdot \epsilon(t)$
\begin{equation}
\mathcal{L} \{ t^n \cdot \epsilon(t) \} = \frac{n!}{s^{n+1}}
\end{equation}
for $\Re \{ s \} > 0$ was derived in the previous example. This result can be extended to the class of signals $t^n e^{-s_0 t} \epsilon(t)$ with $s_0 \in \mathbb{C}$ using the modulation theorem
\begin{equation}
\mathcal{L} \{ t^n e^{-s_0 t} \epsilon(t) \} = \frac{n!}{(s + s_0)^{n+1}} \qquad \text{for } \Re \{ s \} > \Re \{ - s_0 \}.
\end{equation}
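A quick `SymPy` check for the concrete case $n=2$, $s_0=2$, where the theorem predicts $\frac{2!}{(s+2)^{3}}$:

```python
sym.laplace_transform(t**2 * sym.exp(-2*t), t, s)
```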
**Copyright**
The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
# Calculating Times of Rise, Set, and Culmination
Suppose we want to calculate when a given celestial object rises above the horizon, sets below the horizon, or reaches the highest point above the horizon (*culminates*), as seen by an observer at a given location on the surface of the Earth.
### Azimuth and altitude
Let us consider how this observer can describe the apparent location of the celestial object, say a star, at a specific moment in time using a pair of angles. Let the *azimuth* $A$ be the angle measured westward from due south along the horizon needed to face the star. Let the *altitude* $h$ be the angle of the star above the horizon. (If the star is below the horizon, $h < 0$.)
We define a pair of co-rotating Cartesian coordinate systems for such an observer, called *horizontal* coordinates and *local equatorial* coordinates. In both systems we treat the entire sky as a sphere of unit radius centered on the observer.
### Horizontal coordinates
In the horizontal coordinate system we have three directions $x$, $y$, and $z$ such that
- $x$ points due south toward the horizon,
- $y$ points due west toward the horizon, and
- $z$ points straight up toward the zenith.
Thus it is possible to relate the angles $A$ and $h$ with horizontal Cartesian coordinates $x$, $y$, and $z$ by
\begin{align}
x & = \cos{h} \cos{A} \\
y & = \cos{h} \sin{A} \tag{1} \\
z & = \sin{h}
\end{align}
To understand these equations a little more intuitively, consider a star that is exactly on the horizon and due west. Then $h=0$, which means $\cos{h}=1$ and $\sin{h}=0$. Also, $A=90^{\circ}$, so $\cos{A}=0$ and $\sin{A}=1$. This results in $x=0$, $y=1$, $z=0$, which is consistent with the above descriptions of the three coordinates. Another example is where $h=90^{\circ}$ and $A$ has any value, resulting in the vertical vector $x=0$, $y=0$, $z=1$.
### Declination and hour angle
Usually in astronomy when you see the phrase "equatorial coordinates," it refers to right ascension $\alpha$ and declination $\delta$. We need those two quantities for this discussion, but they are not the same as local equatorial coordinates. Here I will refer to right ascension and declination as "celestial equatorial coordinates" to avoid ambiguity.
Like their celestial counterparts, local equatorial coordinates are defined with respect to the plane of the Earth's equator. But being local means they rotate along with the observer, while celestial equatorial coordinates are fixed to the starry background of the celestial sphere. In other words, an earthbound observer experiences stationary local equatorial coordinates, but sees stars rotating around him. Meanwhile, an observer far out in space sees fixed celestial equatorial coordinates, but sees a rotating Earth.
*Declination* $\delta$ is the angle of a star north ($\delta>0$) or south ($\delta<0$) of the celestial equator. For example, the north celestial pole has $\delta=90^{\circ}$.
I will describe right ascension later.
*Hour angle* $\tau$ is the angle from the local meridian westward toward a given star. It is measured around a celestial circle having same declination as the star. It represents the amount of time that has passed since the star last crossed the meridian, or equivalently, since the star culminated (reached its highest point above the horizon). Hour angle can be measured in degrees (0 to 360) or in sidereal hours (0 to 24). You can divide degrees by 15 to get sidereal hours.
### Local equatorial coordinates
Specifically, local equatorial coordinates are Cartesian coordinates $\hat{x}$, $\hat{y}$, $\hat{z}$ such that
- $\hat{x}$ points to where the celestial equator intersects the meridian,
- $\hat{y}$ points due west toward the horizon, and
- $\hat{z}$ points to the north celestial pole.
Using $\delta$ as the star's declination and $\tau$ as the star's hour angle at a given time, we can write the local equatorial coordinates of the star as
\begin{align}
\hat{x} & = \cos{\delta} \cos{\tau} \\
\hat{y} & = \cos{\delta} \sin{\tau} \tag{2} \\
\hat{z} & = \sin{\delta}
\end{align}
Note that $\tau$ changes with the Earth's rotation but $\delta$ does not. Therefore, for a fixed star, $\hat{x}$ and $\hat{y}$ change with time but $\hat{z}$ remains constant.
### Relating the two local systems
We can write a system of equations that relates these two local rotating coordinate systems.
First note that the $y$-axis is the same as the $\hat{y}$-axis; they are both aimed at the point on the horizon due west from the observer.
The $\hat{z}$-axis, pointing toward the north celestial pole, has an angle $\phi$ above the due-north point on the horizon, where $\phi$ is the geographic latitude of the observer. (If the observer is in the southern hemisphere, then $\phi$ is negative and the north celestial pole is below the horizon.) That northernmost point on the horizon is in the $-x$ direction, because $x$ points due south. Similarly, the $\hat{x}$-axis is rotated by $\phi$ from the vertical $z$-axis. In general, $x$ and $z$ each depend on both $\hat{x}$ and $\hat{z}$ through a rotation of $\phi$.
We can express horizontal coordinates in terms of local equatorial coordinates as
\begin{align}
x & = \hat{x} \sin{\phi} - \hat{z} \cos{\phi} \\
y & = \hat{y} \tag{3} \\
z & = \hat{x} \cos{\phi} + \hat{z} \sin{\phi}
\end{align}
### Relating (azimuth, altitude) to (hour angle, declination)
Substituting equations (1) and (2) into (3), we can write a new system of equations that relate azimuth, altitude, hour angle, and declination:
\begin{align}
\cos{h} \cos{A} &= \cos{\delta} \cos{\tau} \sin{\phi} - \sin{\delta} \cos{\phi} \\
\cos{h} \sin{A} &= \cos{\delta} \sin{\tau} \tag{4} \\
\sin{h} &= \cos{\delta} \cos{\tau} \cos{\phi} + \sin{\delta} \sin{\phi}
\end{align}
The third equation in (4) is most helpful for finding rise and set times because it eliminates the azimuth as a factor.
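As an illustration, the third equation in (4) translates directly into code. A minimal Python sketch (the function name here is hypothetical, not part of the Astronomy library):

```python
import math

def altitude_degrees(delta_deg, tau_sidereal_hours, phi_deg):
    """Altitude h from declination, hour angle, and latitude via
    sin(h) = cos(delta) cos(tau) cos(phi) + sin(delta) sin(phi)."""
    delta = math.radians(delta_deg)
    tau = math.radians(15.0 * tau_sidereal_hours)   # 15 degrees per sidereal hour
    phi = math.radians(phi_deg)
    sin_h = (math.cos(delta) * math.cos(tau) * math.cos(phi)
             + math.sin(delta) * math.sin(phi))
    return math.degrees(math.asin(sin_h))

# An object on the celestial equator crossing the meridian, seen from latitude 45 N:
print(altitude_degrees(0.0, 0.0, 45.0))  # ~45.0
```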
### Calculating hour angle for a given star at a given time
To calculate hour angle $\tau$ we need to use
\begin{equation}
\tau = \theta + \frac{\lambda}{15} - \alpha \tag{5}
\end{equation}
where
- $\theta$ is the current Greenwich Apparent Sidereal Time (GAST) at the moment of observation,
- $\lambda$ is the geographic longitude of the observer east of Greenwich in degrees, and
- $\alpha$ is the right ascension of the star.
Note that $\tau$, $\theta$, and $\alpha$ are all expressed in sidereal hours. We divide $\lambda$ by 15 to convert degrees to hours.
### Determining when the object is highest or lowest in the sky
The object reaches its maximum altitude $h_1$ when its hour angle $\tau=0$ and its minimum altitude $h_2$ when $\tau = 12$ hours. Let's look at the general case where we are trying to find the *event time* when the object reaches an arbitrary hour angle $\tau$:
\begin{align}
\theta + \frac{\lambda}{15} - \alpha & = \tau \\
\theta & = \tau + \alpha - \frac{\lambda}{15} \tag{6}
\end{align}
A star's right ascension $\alpha$ is almost constant over time, and we assume the observer's longitude $\lambda$ is fixed. In practice, $\alpha$ for the Sun, the Moon, or a planet is not constant. The Moon especially will move in right ascension significantly over the course of a few hours. So we will end up needing to numerically iterate to find the exact event time.
The approach I use in the Astronomy library is to determine the GAST at a start search time. This start search time represents a time after which the next event is to be found. This initial guess $\theta_{x}$ of the GAST value will be incorrect by the approximate amount
\begin{align}
\Delta \theta = \left( \tau + \alpha - \frac{\lambda}{15} \right) - \theta_x \tag{7}
\end{align}
On the first iteration, if $\Delta\theta$ is negative, it is corrected by adding 24 sidereal hours to guarantee finding a culmination time after the start search time. On subsequent iterations, the correction may be positive or negative to home in on the correct time.
At this point, it is important to note that sidereal hours are not exactly the same as clock hours. The Earth rotates with respect to a fixed star once every 23 hours, 56 minutes, and 4.091 seconds, or 86164.091 seconds. This is compared to a mean solar day of 24 hours = 86400 seconds. This gives a ratio of $\rho = 0.99726957$ mean solar days per sidereal day.
After each iteration, we correct by adding $\rho \Delta \theta$ days of terrestrial time and calculating the right ascension $\alpha$ and declination $\delta$ of the object, and GAST, again at the new time. We plug the updated values of $\theta_x$ and $\alpha$ back into (7) to obtain a more accurate estimate. We keep iterating until $\Delta \theta$ is tolerably small (less than 0.1 seconds). This typically takes 4 or 5 iterations.
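A sketch of that iteration in Python, where `gast_hours(t)` and `right_ascension_hours(t)` are hypothetical stand-ins for real ephemeris routines, `t` is measured in days, and hour angles are in sidereal hours (converted to days before applying the $\rho \Delta \theta$ correction):

```python
RHO = 0.99726957  # mean solar days per sidereal day

def find_hour_angle_time(t_start, tau, longitude_deg, gast_hours, right_ascension_hours):
    """Find the time (in days) after t_start when the object reaches hour angle tau."""
    t = t_start
    first = True
    while True:
        delta_theta = (tau + right_ascension_hours(t) - longitude_deg / 15.0) - gast_hours(t)
        delta_theta %= 24.0                      # first pass: force a forward search
        if not first and delta_theta > 12.0:
            delta_theta -= 24.0                  # later passes: allow negative corrections
        first = False
        if abs(delta_theta) < 0.1 / 3600.0:      # converged to within 0.1 seconds
            return t
        t += RHO * delta_theta / 24.0            # sidereal hours -> days of terrestrial time
```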
### Calculating rise and set times
Now that we can determine when an object reaches its highest and lowest altitudes in the sky for a given observer, we know that *if* the object rises or sets, it will be bounded by those two events. At locations close to one of the Earth's poles, an object may stay in the sky for weeks at a time, never setting, or it may stay below the horizon for weeks at a time, never rising.
Specifically, if the object is at its lowest at time $t_L$ and at its highest at $t_H \approx t_L + 12$ hours, then we can calculate the object's altitudes $h_L$ and $h_H$ at those respective times. If $h_L \lt 0$ and $h_H \gt 0$, we know the object must have risen at some time $t_R$ such that $t_L \lt t_R \lt t_H$.
<table>
<tr>
<td>Author:</td>
<td>Zlatko Minev </td>
</tr>
<tr>
<td>Purpose:</td>
<td>Demonstrate some of the basic conversions and tools in toolbox_circuits <br>
These are just basic utility functions
</td>
</tr>
<tr>
<td>File Status:</td>
<td>In construction </td>
</tr>
</table>
These conversions are quite basic, so I decided to not use an external package, but just manually handle them.
For all the calculations anyhow, we will only work in reduced units of MHz for energies (or GHz if need be) and nH and fF for inductance and capacitance, respectively.
```python
%load_ext autoreload
%autoreload 2
```
# Conversions
##### Elementary units
```python
import pyEPR.calcs
from pyEPR.calcs import Convert
print("Convert.toSI(1,'nH') = ", Convert.toSI(1,'nH'), "H")
print("Convert.fromSI(1.0,'nH') = ", Convert.fromSI(1.0,'nH'), "nH")
print("Identity: ", Convert.toSI(Convert.fromSI(1.0,'nH'),'nH'))
```
Convert.toSI(1,'nH') = 1e-09 H
Convert.fromSI(1.0,'nH') = 1000000000.0 nH
Identity: 1.0
##### Josephson Junction Parameters
```python
from IPython.display import Latex
Lj = 10
display(Latex(r"$E_J = %.2f \text{ GHz} \qquad \text{for } L_J=%.2f\text{ nH}$" % (\
Convert.Ej_from_Lj(Lj, 'nH', "GHz"),Lj)))
print('\nConvert back %.2f nH' % Convert.Lj_from_Ej(16.35E3, 'MHz', 'nH'),'\n')
display(Latex(r"$E_C = %.2f \text{ MHz} \qquad \text{for } C_J=%.2f\text{ fF}$" % (\
Convert.Ec_from_Cs(65., 'fF', "MHz"),65.)))
display( 'Convert back:',Latex(r"$C_J = %.2f \text{ fF} \qquad \text{for } E_C=%.2f\text{ MHz}$" % (\
Convert.Cs_from_Ec(300, 'MHz', "fF"),300)))
```
$E_J = 16.35 \text{ GHz} \qquad \text{for } L_J=10.00\text{ nH}$
Convert back 10.00 nH
$E_C = 298.00 \text{ MHz} \qquad \text{for } C_J=65.00\text{ fF}$
'Convert back:'
$C_J = 64.57 \text{ fF} \qquad \text{for } E_C=300.00\text{ MHz}$
###### Critical current
```python
print(Convert.Ic_from_Lj(10))
Convert.Lj_from_Ic(32)
```
32.91059784754533
10.284561827357917
##### Convenience units
```python
from pyEPR.calcs.convert import π, pi, ϕ0, fluxQ, Planck, ħ, hbar, elementary_charge, e_el
print("Test EJ raw calculation = %.2f"%( ϕ0**2 / (10E-9 * Planck) *1E-9 ) ,'GHz')
```
Test EJ raw calculation = 16.35 GHz
##### Transmon
Linear harmonic oscillator approximation of transmon.<br>
Convenience function
```python
pyEPR.calcs.CalcsTransmon.transmon_print_all_params(13, 65);
```
$\displaystyle
\begin{align}
L_J &=13.0 \mathrm{\ nH} & C_\Sigma &=65.0 \mathrm{\ fF} \\
E_J &=12.57 \mathrm{\ GHz} & E_C &=298 \mathrm{\ MHz} \\
\omega_0 &=2\pi\times 5.48 \mathrm{\ GHz} & Z_0 &= 447 \mathrm{\ \Omega} \\
\phi_\mathrm{ZPF} &= 0.47 \ \ \phi_0 & n_\mathrm{ZPF} &=1.07 \ \ (2e) \\
\end{align}
$
and raw
```python
pyEPR.calcs.CalcsTransmon.transmon_get_all_params(Convert.Ej_from_Lj(13, 'nH', 'MHz'), Convert.Ec_from_Cs(65, 'fF', 'MHz'))
```
{'Ej_MHz': 12573.962523598551,
'Ec_MHz': 298.00352807167883,
'Lj_H': 1.3000000000000006e-08,
'Cs_F': 6.5e-14,
'Lj_nH': 13.000000000000005,
'Cs_fF': 65.0,
'Phi_ZPF': 1.5356087624822668e-16,
'Q_ZPF': 3.433725579754676e-19,
'phi_ZPF': 0.4666000811032977,
'n_ZPF': 1.0715814682623428,
'Omega_MHz': 34401.045807689065,
'f_MHz': 5.4750964878244375,
'Z_Ohms': 447.21359549995805}
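The zero-point fluctuation values above can be cross-checked against the standard transmon relations $\phi_\mathrm{ZPF} = (2E_C/E_J)^{1/4}$ and $n_\mathrm{ZPF} = (E_J/(32 E_C))^{1/4}$; a quick sketch using the numbers copied from the output:

```python
Ej, Ec = 12573.96, 298.00          # MHz, copied from the output above
phi_zpf = (2 * Ec / Ej) ** 0.25    # expect ~0.4666
n_zpf = (Ej / (32 * Ec)) ** 0.25   # expect ~1.0716
phi_zpf, n_zpf
```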
```python
import numpy as np
```
### Matrices
```python
matrix_01 = np.matrix("1, 2, 3; 4, 5, 6"); matrix_01
```
matrix([[1, 2, 3],
[4, 5, 6]])
```python
matrix_02 = np.matrix([[1, 2, 3], [4, 5, 6]]); matrix_02
```
matrix([[1, 2, 3],
[4, 5, 6]])
### Math Operations with Arrays and Matrices
```python
array_01 = np.array([[1, 2], [3, 4]]); array_01
```
array([[1, 2],
[3, 4]])
```python
type(array_01)
```
numpy.ndarray
```python
array_01 * array_01
```
array([[ 1, 4],
[ 9, 16]])
```python
matrix_01 = np.mat(array_01); matrix_01
```
matrix([[1, 2],
[3, 4]])
```python
type(matrix_01)
```
numpy.matrix
```python
matrix_01 * matrix_01
```
matrix([[ 7, 10],
[15, 22]])
### The multiplication results between arrays and matrices are different. The math is the following:
## $$ \boxed{ \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix} }$$
### A matrix applies the multiplication from rows to columns, resulting in the following operations:
```python
from IPython.display import Image
Image('aux/images/matrix-multiplication.png')
```
### To do the same operation with an array object:
```python
array_01 = np.array([[1, 2], [3, 4]]); array_01
```
array([[1, 2],
[3, 4]])
```python
np.dot(array_01, array_01)
```
array([[ 7, 10],
[15, 22]])
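Since Python 3.5, the `@` operator performs the same matrix multiplication on plain arrays, which is why NumPy recommends `ndarray` over the `matrix` class:

```python
array_01 @ array_01  # same result as np.dot(array_01, array_01)
```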
### Conversions
```python
array_01 = np.array([[1, 2], [3, 4]]); array_01
```
array([[1, 2],
[3, 4]])
```python
# Array to Matrix
matrix_01 = np.asmatrix(array_01); matrix_01
```
matrix([[1, 2],
[3, 4]])
```python
# Matrix to Array
array_02 = np.asarray(matrix_01); array_02
```
array([[1, 2],
[3, 4]])
# The Winding Number and the SSH model
The Chern number isn't the only topological invariant. We have multiple invariants, each convenient in their own situations. The Chern number just happened to appear one of the biggest, early examples, the Integer Quantum Hall Effect, but the winding number actually occurs much more often in a wider variety of circumstances.
How many times does the phase wrap as we traverse a closed loop?
$$
n = \frac{1}{2 \pi} \oint \text{d}\phi =
\frac{1}{2\pi i } \oint \frac{\text{d}z}{z}
$$
This expression shows up in complex analysis with <i>Residues</i> and [the Cauchy Integral Formula](https://en.wikipedia.org/wiki/Cauchy%27s_integral_formula), but we're interested in applying this formula to topology.
## Topology and Homotopy
<b>Topology</b> is a general umbrella term for studying properties independent of deformation or coordinate systems. If we go back to "what does topology formally mean?", it's a structure we can put on sets. From there, we have a variety of different ways to study that structure, and one of those is <b>Homotopy</b>.
<b>Homotopy</b> considers two functions and asks whether or not they can be deformed into each other.
Here's a simple example:
We have positions $\vec{x}(t)$ over time with a fixed starting and stopping point. And we have some fixed puncture point in space our function can never occupy. We can classify all the possible paths by the number of times they go around the puncture point.
The domain in our situation is the unit circle $k$, and we want to know what the range looks like in terms of unit circles:
* Zero unit circles= a point?
* One Unit Circle?
* A Unit Circle followed by another Unit Circle?
* A Unit Circle, but flipped and traveled in the opposite direction?
Each of these is a different homotopy class.
## Su-Schrieffer-Heeger Model for Trans-polyacetylene
The Su-Schrieffer-Heeger Model for Trans-Polyacetylene hosts topological phases characterized by the winding number.
The chemical under doping has high electrical conductivity, opened the entire field of conductive polymers, and led to the 2000 Chemistry Nobel Prize [3]. To get to the model, we first need to look at the chemical structure,
Scary Organic Chemistry stuff... and due to my lack of understanding of Organic Chemistry stuff, I actually understood the physical situation wrong for a while, but now let me take a crack at explaining what I think is actually the case.
Though just the plain <i>positions</i> of the atoms are translationally symmetric with a period of one, due to [Peierls instability in 1D](https://en.wikipedia.org/wiki/Peierls_transition) the bonds break into having a translational period of <b>2</b>. This gives a degeneracy in the ground state. The way the structure is tiled with "single-bond"--"double-bond" could just be shifted by one site, giving an equally viable chemical structure.
Now we have regions of uniform tiling and boundaries between them. The boundaries then act like solitons with certain transition probabilities to move around. We are looking at the Hamiltonian that describes how this boundary soliton moves.
All we really need to know is we have a particle free to move in one dimension, with different transition probabilities depending on the type of bond. For the technical history, see [4], [5], [6]. [6] makes the most sense.
$v$ and $w$ are our two transition probabilities, and we also have two different types of sites, $a$ and $b$. We can write down a hopping Hamiltonian from this information,
\begin{equation}
\mathcal{H}= v \sum_i \left(a_i^{\dagger} b_i + \text{h.c.} \right)
-w \sum_i \left(b_i^{\dagger} a_{i+1} + \text{h.c.} \right)
\end{equation}
We can Fourier transform the Hamiltonian,
\begin{equation}
\mathcal{\tilde{H}}= \sum_k v \left( a_k^{\dagger} b_k + b_k^{\dagger} a_k \right)
- w \left( e^{-i k} b_k^{\dagger} a_k + e^{i k} a_k^{\dagger} b_k \right)
\end{equation}
and change forms
\begin{equation}
= \sum_k \left(v -w \cos k \right) \left( b_k^{\dagger} a_k + a_k^{\dagger} b_k \right)
-i w \sin k \left( - b_k^{\dagger} a_k + a_k^{\dagger} b_k \right)
\end{equation}
\begin{equation}
=\begin{bmatrix}
a_k^{\dagger} & b_k^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & \left(v-w \cos k\right) -i \left( -w \sin k \right) \\
\left(v-w \cos k\right) + i \left( -w \sin k \right) & 0 \\
\end{bmatrix}
\begin{bmatrix}
a_k \\ b_k
\end{bmatrix}
\end{equation}
to something where we can read off a quite useful form for this type of topological stuff:
\begin{equation}
\mathcal{\tilde{H}} = \vec{R} \cdot \vec{\sigma}
\end{equation}
\begin{equation}
R_x(k) = v - w \cos k \qquad \qquad R_y (k) = -w \sin k \qquad \qquad R_z = 0
\end{equation}
## Code up the Model
```julia
# Adding the Packages
using Plots
using LinearAlgebra
gr()
```
Plots.GRBackend()
```julia
# Pauli Matrices
σx=[[0 1]
[1 0]]
σy=[[0 -im]
[im 0]]
σz=[[1 0]
[0 -1]]
```
2×2 Array{Int64,2}:
1 0
0 -1
```julia
# Functions
Rx(k::Float64,v=1,w=2)=v-w*cos(k)
Ry(k::Float64,v=1,w=2)=-w*sin(k)
R(k::Float64,v=1,w=2)=sqrt(Rx(k,v,w)^2+Ry(k,v,w)^2)
H(k::Float64,v=1,w=2)=Rx(k,v,w)*σx+Ry(k,v,w)*σy
```
H (generic function with 3 methods)
```julia
# domain we will calculate on
l=314
ks=range(-π,stop=π,length=l)
dk=ks[2]-ks[1]
```
0.02007407446383258
### Chiral Symmetry
A Hamiltonian is said to possess chiral symmetry if there exists a $U$ such that
$$
U H U^{-1} = -H \qquad \qquad U U^{\dagger} =\mathbb{1}.
$$
Finding $U$ if even exists and determining its form if it exists is a problem for another time. Today, multiple places said that $\sigma_z$ works for the SSH model, and we can confirm that it does.
A little less intellectually satisfying (at least for me), but it works.
We could test that equation analytically on pen and paper, symbolically using 'SymPy', or by plugging in random 'k' values a bunch of times and assuming that's good enough.
I'm going the bunch of random k values route. Just keep evaluating the next cell till you're convinced.
<b>Bonus Note:</b> We only have a situation with a winding number because we have chiral symmetry and have an odd number of dimensions. If we have no chiral (and no other any other anti-unitary) symmetry, then we could only have the topologically trivial phase. That's why I'm making sure to mention this. Check out the Periodic Table of Topological Insulators for more information.
```julia
k_test=rand()
σz*H(k_test)*σz^(-1)+H(k_test) # Should equal zero
```
2×2 Array{Complex{Float64},2}:
0.0+0.0im 0.0+0.0im
0.0+0.0im 0.0+0.0im
### Homotopically Different Hamiltonians
We want to know if two sets of parameters $c_1=(v_1,w_1)$ and $c_2=(v_2,w_2)$ will describe topologically equivalent systems.
We are gifted by the fact that we have some convenient theorems relating the SPT (Symmetry Protected Topological) topology of the phases and band gap closings.
As we change the parameters, we will remain in the same topological phase as long as <b>we don't close the gap</b> or <b>break symmetries</b>. [2]
The states are in the same phase if they can be <i>smoothly deformed</i> into each other. In this context, smoothly deformed means connected by local unitary evolution. Nothing drastic is happening to the system. Closing the gap is considered drastic.
Now we don't have to change the topological phase at a gap closing, but it's only possible there.
Before doing a lot of calculations and work, if we can identify where band closings occur and regions where parameters can be perturbed and changed without causing band closings, we can reduce the number of things we need to solve later on.
Analytically, we know that the two eigenvalues occur at: (see QAHE post)
$$
R=\pm \sqrt{R_x^2+ R_y^2}
$$
$$
=\pm \sqrt{v^2+w^2 \cos^2 k -2 vw \cos k + w^2 \sin^2 k}
= \pm \sqrt{v^2 - 2 vw \cos k + w^2}
$$
The difference between the upper and lower band will be at its minimum when $\cos k$ is greatest, $k=0$.
$$
=\pm \sqrt{(v-w)^2}
$$
So when $v=w$, the gap closes. This $v=w$ line in parameter space could separate two different topological phases. Now we need to perform some calculations to see if that is they are actually different phases.
To more quickly see which side of the dividing line a parameter set falls on, I'm instead going to write out parameters in terms of $v$ and $d = w-v$, matching the code below where `w = v + d`. This way, the sign of $d$ can quickly tell me which phase we are in.
If d is positive, we are in the <b>Purple</b> phase, designated so because that's what I am using for my color scheme. The Purple phase also turns out to be the topological phase, as we will see later.
When d is negative, we are in the <b>Turquoise</b> phase, again because of my color scheme. This phase is topologically trivial.
```julia
# Parameters chosen to look at
va=[1.0, 0.5,1.0, 0.0,0.5, 0.4,0.6]
da=[0.0, 0.5,-0.5, 0.5,-0.5, 0.2,-0.2]
# w values corresponding to chosen ds
wa=round.(va+da;sigdigits=2) #Floating point error was making it print bad
# how to plot chosen parameters
colors=[colorant"#aa8e39",
colorant"#592a71",colorant"#4e918f",
colorant"#310c43",colorant"#226764",
colorant"#cca7df",colorant"#a0d9d7",
]
styles=[:solid,
:dash,:dash,
:solid,:solid,
:dot,:dot]
widths=[10,15,5,15,5,10,3];
```
## Band diagrams for different parameters
When $d=0$, we have the gold line with zero band-gap.
If $d\neq 0$, then the band gap is not zero.
If $v$ and $w$ are flipped, the model will look identical in its energy structure. We can see the two different sets of purple and turquoise lines plotted over each other. Only when we look at the phase of the wavefunctions can we see that flipping the $v$ and $w$ actually does have quite an influence on the solution.
```julia
plot()
for ii in 1:length(va)
plot!(ks,R.(ks,va[ii],wa[ii])
,label="v=$(va[ii]) w=$(wa[ii])"
,linewidth=widths[ii],color=colors[ii],linestyle=styles[ii])
plot!(ks,-R.(ks,va[ii],wa[ii])
,label=""
,linewidth=widths[ii],color=colors[ii],linestyle=styles[ii])
end
plot!(title="Band diagrams for different parameters",
xlabel="Momentum",ylabel="Energy")
```
## Homotopy of Hamiltonian Vector
We can look at <b>either</b> the homotopy of the Hamiltonian <b>or</b> the homotopy of the eigenfunctions.
Looking at the Hamiltonian seems easier since we don't have to go through the work of calculating the wavefunctions, especially if we have a complicated system, but homotopy is a geometric, almost pictorial thing. How do we go about getting something like that for an operator?
Let's go back to how we wrote our Hamiltonian down, both this one and the QAHE one before,
$$
\mathcal{H}=\vec{R}(k) \cdot \vec{\sigma}.
$$
Here we have a 1-1 correspondence between the Hamiltonian and a <b>geometric</b> object, this $\vec{R}$ vector. When we look at how it depends on $k$, we get insight into how $\mathcal{H}$ depends on $k$ as well.
The two different groups, purple and turquoise, will have two different behaviors. $\vec{R}(k)$ for purple will circle the origin like $S^1$ the unit circle, whereas $\vec{R}(k)$ for turquoise will not circle the origin and will not be like $S^1$.
```julia
plot()
for ii in 1:length(va)
plot!(Rx.(ks,va[ii],wa[ii]),
Ry.(ks,va[ii],wa[ii])
,label="v=$(va[ii]) , d=$(da[ii])"
,linewidth=5,color=colors[ii],linestyle=styles[ii])
end
statval=5
scatter!(Rx.(ks,va[statval],wa[statval]),Ry.(ks,va[statval],wa[statval])
,label="",markersize=10,color=colors[statval])
scatter!([0],[0],label="Origin",
markersize=20,markershape=:star5,color=colorant"#6f4f0d")
plot!(title="R(k) for different parmeters",
xlabel="Rx", ylabel="Ry",legend=:bottomright,aspect_ratio=1)
```
## Wavefunction
Going back to what we did in the QAHE post again, we have an analytical expression for the wavefunction:
\begin{equation}
\Psi = \frac{1}{\sqrt{2}}
\begin{bmatrix}
-1\\
\frac{R_x + i R_y}{R}
\end{bmatrix}
\end{equation}
Now that we don't have $R_z$, the magnitudes of both components are constant and uniform. For the first component, the phase also remains constant, but the phase of the second component varies. It's the behavior of this phase that we will be looking at, and that will determine whether or not the system is topological.
```julia
um1=-1/sqrt(2)
function um2(k::Float64,v=1,w=2)
return 1/(sqrt(2)*R(k,v,w))*(Rx(k,v,w)+im*Ry(k,v,w))
end
```
um2 (generic function with 3 methods)
## Plotting the Phase
In plotting the phase for the different parameter combinations, we can really see the differences between the topological phases. In the turquoise group that didn't encircle zero, the phase changes sinusoidally, going up then back down again, so on and so forth around zero.
But for our purple states $d>0$, the phase just keeps increasing, so we get jumps as we confine it between $-\pi$ and $\pi$. The phase itself is continuous; it just goes across a branch cut which gives us a discontinuity in how we write it down.
As for the $d=0$ systems, those are both boundary cases with more complicated behavior.
```julia
plot()
for ii in 1:length(va)
plot!(ks,angle.(um2.(ks,va[ii],wa[ii]))
,label="v=$(va[ii]) , d=$(da[ii])",linewidth=5
,color=colors[ii],linestyle=styles[ii])
end
plot!(title="Phase",xlabel="k",ylabel="angle")
```
We can look at the effect of our decision of how to take an angle by rotating the system before applying the $-\pi$-$\pi$ boundary.
If we rotate the system by $\pi/4$ first, the discontinuity in the topological wavefunctions occurs at a different k-location, but it doesn't go away. This same thing happened in the QAHE Chern number situation. We can write something different and make a problem area occur in a different spot, but it's still going to occur somewhere. We can't get rid of the wrapping behavior of the topological systems by any amount of looking at it differently or smooth manipulations. We can only move the discontinuity that arises from it to a different location.
```julia
plot()
for ii in 1:length(va)
plot!(ks,angle.(exp(im*π/4)*um2.(ks,va[ii],wa[ii]))
,label="v=$(va[ii]) , w=$(wa[ii])",linewidth=5
,color=colors[ii])
end
plot!(title="Phase",xlabel="k",ylabel="angle")
```
We've qualitatively seen the difference between the phases, but now let's quantitatively look at the difference between the phases. This is as simple as integrating the formula from the introduction,
\begin{equation}
n= \frac{1}{2\pi i}\oint \frac{\text{d} z}{z}
\end{equation}
```julia
function Winding_phi(k,v,w)
dum2=(um2.(k[2:end],v,w).-um2.(k[1:(end-1)],v,w))
return 1/(2π*im)*sum(dum2./um2.(k[2:end],v,w) )
end
```
Winding_phi (generic function with 1 method)
Calculating the winding number specifically for the parameter combinations we've been looking at so far, we can see that the positive phase $d>0$ has a winding number of 1 and the negative phase $d<0$ has a winding number of 0.
```julia
println("|Phase \t n \t| d \t v \t w |\t Real \t Imag")
for ii in 1:length(va)
temp=Winding_phi(ks,va[ii],wa[ii])
println("| ",sign(da[ii]),"\t",round(real(temp),digits=1),"\t|",
da[ii],"\t",va[ii],"\t",wa[ii],"|\t",
round(real(temp),digits=5),"\t",round(imag(temp),digits=5))
end
```
|Phase n | d v w | Real Imag
| 0.0 0.5 |0.0 1.0 1.0| 0.4968 -0.3208
| 1.0 1.0 |0.5 0.5 1.0| 0.99989 -0.01171
| -1.0 0.0 |-0.5 1.0 0.5| 1.0e-5 -0.00167
| 1.0 1.0 |0.5 0.0 0.5| 0.99993 -0.01004
| -1.0 0.0 |-0.5 0.5 0.0| 0.0 0.0
| 1.0 1.0 |0.2 0.4 0.6| 0.99982 -0.01405
| -1.0 0.0 |-0.2 0.6 0.4| 3.0e-5 -0.00401
But we can also calculate the winding number for the entire grid of parameter values. Here we can much more obviously see how $d=0 \rightarrow v=w$ represents a phase transition between two different topological phases.
```julia
vaa=repeat(range(0,1,length=100),1,100)
waa=transpose(vaa)
ϕaa=zeros(Complex{Float64},100,100)
for ii in 1:100
for jj in 1:100
ϕaa[ii,jj]=Winding_phi(ks,vaa[ii,jj],waa[ii,jj])
end
end
```
```julia
heatmap(vaa[:,1],waa[1,:],real.(ϕaa))
plot!(xlabel="v",ylabel="w", title="Two Different Topological Phases")
```
## Conclusion
Systems in one dimension with chiral symmetry can host topological phases characterized by the winding number.
The winding behavior appears in both the Hamiltonian and the wavefunctions.
Transitions between topological phases occur at band gap closings.
The Su-Schrieffer-Heeger model exhibits two different topological phases.
[1] Public Domain, https://commons.wikimedia.org/w/index.php?curid=1499462
[2] Chen, Xie, Zheng-Cheng Gu, and Xiao-Gang Wen. "Local unitary transformation, long-range quantum entanglement, wave function renormalization, and topological order." Physical review b 82.15 (2010): 155138. https://arxiv.org/pdf/1004.3835.pdf
[3] https://www.nobelprize.org/prizes/chemistry/2000/popular-information/
[4] Su, W-P_, J. R. Schrieffer, and A. J. Heeger. "Soliton excitations in polyacetylene." Physical Review B 22.4 (1980): 2099.
[5] Heeger, Alan J., et al. "Solitons in conducting polymers." Reviews of Modern Physics 60.3 (1988): 781.
[6] Takayama, Hajime, Yo R. Lin-Liu, and Kazumi Maki. "Continuum model for solitons in polyacetylene." Physical Review B 21.6 (1980): 2388.
[7] Meier, E. J. et al. Observation of the topological soliton state in the Su-Schrieffer-Heeger model. Nat. Commun. 7, 13986 doi: 10.1038/ncomms13986 (2016)
http://paletton.com/#uid=31b0J0kllll8rOUeTt+rNcHBJ42
| 8f2b8a7792d0e1a93d5cf0d42963912504eec281 | 415,363 | ipynb | Jupyter Notebook | Graduate/Winding-Number.ipynb | IanHawke/M4 | 2d841d4eb38f3d09891ed3c84e49858d30f2d4d4 | [
"MIT"
] | null | null | null | Graduate/Winding-Number.ipynb | IanHawke/M4 | 2d841d4eb38f3d09891ed3c84e49858d30f2d4d4 | [
"MIT"
] | null | null | null | Graduate/Winding-Number.ipynb | IanHawke/M4 | 2d841d4eb38f3d09891ed3c84e49858d30f2d4d4 | [
"MIT"
] | null | null | null | 105.717231 | 564 | 0.64623 | true | 5,353 | Qwen/Qwen-72B | 1. YES
2. YES | 0.882428 | 0.749087 | 0.661015 | __label__eng_Latn | 0.990825 | 0.374091 |
# Fitting a Mixture Model with Gibbs Sampling
```python
%matplotlib inline
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
from scipy import stats
from collections import namedtuple, Counter
```
Suppose we receive some data that looks like the following:
```python
# pd.Series.from_csv was removed from pandas; read the csv and squeeze to a Series
data = pd.read_csv("clusters.csv", header=None, index_col=0).squeeze("columns")
_ = data.hist(bins=20)
```
```python
data.size
```
1000
It appears that these data exist in three separate clusters. We want to develop a method for finding these _latent_ clusters. One way to start developing a method is to attempt to describe the process that may have generated these data.
For simplicity and sanity, let's assume that each data point is generated independently of the other. Moreover, we will assume that within each cluster, the data points are identically distributed. In this case, we will assume each cluster is normally distributed and that each cluster has the same variance, $\sigma^2$.
Given these assumptions, our data could have been generated by the following process. For each data point, randomly select 1 of 3 clusters from the distribution $\text{Discrete}(\pi_1, \pi_2, \pi_3)$. Each cluster $k$ corresponds to a parameter $\theta_k$; given the selected cluster $k$, sample a data point from $\mathcal{N}(\theta_k, \sigma^2)$.
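To make this concrete, here is a minimal sketch of that generative process. The particular values of $\pi$, $\theta_k$ and $\sigma$ below are made-up assumptions for illustration, not the values behind `clusters.csv`.
```python
import numpy as np

rng = np.random.RandomState(0)
pi_ = [0.3, 0.5, 0.2]       # mixing proportions (assumed)
thetas = [-1.0, 0.0, 1.0]   # cluster means (assumed)
sigma = 0.1                 # shared cluster standard deviation (assumed)

z = rng.choice(3, size=1000, p=pi_)         # latent cluster for each point
x = rng.normal(np.array(thetas)[z], sigma)  # x_i ~ N(theta_{z_i}, sigma^2)
```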
Equivalently, we could consider these data to be generated from a probability distribution with this probability density function:
$$
p(x_i \,|\, \pi, \theta_1, \theta_2, \theta_3, \sigma)=
\sum_{k=1}^3 \pi_k\cdot
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right\}
$$
where $\pi$ is a 3-dimensional vector giving the _mixing proportions_. In other words, $\pi_k$ describes the proportion of points that occur in cluster $k$.
That is, _the probability distribution describing $x$ is a linear combination of normal distributions_.
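As a short sketch, the density above can be evaluated directly; the parameter values here are the same illustrative assumptions as in the previous sketch.
```python
from scipy import stats

def mixture_pdf(x, pi_, thetas, sigma):
    # p(x) = sum_k pi_k * Normal(x | theta_k, sigma^2)
    return sum(p * stats.norm(theta, sigma).pdf(x)
               for p, theta in zip(pi_, thetas))

mixture_pdf(0.05, [0.3, 0.5, 0.2], [-1.0, 0.0, 1.0], 0.1)
```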
We want to use this _generative_ model to formulate an algorithm for determining the particular parameters that generated the dataset above. The $\pi$ vector is unknown to us, as is each cluster mean $\theta_k$.
We would also like to know $z_i\in\{1, 2, 3\}$, the latent cluster for each point. It turns out that introducing $z_i$ into our model will help us solve for the other values.
The joint distribution of our observed data (`data`) along with the assignment variables is given by:
\begin{align}
p(\mathbf{x}, \mathbf{z} \,|\, \pi, \theta_1, \theta_2, \theta_3, \sigma)&=
p(\mathbf{z} \,|\, \pi)
p(\mathbf{x} \,|\, \mathbf{z}, \theta_1, \theta_2, \theta_3, \sigma)\\
&= \prod_{i=1}^N p(z_i \,|\, \pi)
\prod_{i=1}^N p(x_i \,|\, z_i, \theta_1, \theta_2, \theta_3, \sigma) \\
&= \prod_{i=1}^N \pi_{z_i}
\prod_{i=1}^N
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_{z_i})^2}{2\sigma^2}
\right\}\\
&= \prod_{i=1}^N
\left(
\pi_{z_i}
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_{z_i})^2}{2\sigma^2}
\right\}
\right)\\
&=
\prod_i^n
\prod_k^K
\left(
\pi_k
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right\}
\right)^{\delta(z_i, k)}
\end{align}
### Keeping Everything Straight
Before moving on, we need to devise a way to keep all our data and parameters straight. Following ideas suggested by [Keith Bonawitz](http://people.csail.mit.edu/bonawitz/Composable%20Probabilistic%20Inference%20with%20Blaise%20-%20Keith%20Bonawitz%20PhD%20Thesis.pdf), let's define a "state" object to store all of this data.
It won't yet be clear why we are defining some components of `state`, however we will use each part eventually! As an attempt at clarity, I am using a trailing underscore in the names of members that are fixed. We will update the other parameters as we try to fit the model.
```python
SuffStat = namedtuple('SuffStat', 'theta N')
def update_suffstats(state):
    for cluster_id, N in Counter(state['assignment']).items():
points_in_cluster = [x
for x, cid in zip(state['data_'], state['assignment'])
if cid == cluster_id
]
mean = np.array(points_in_cluster).mean()
state['suffstats'][cluster_id] = SuffStat(mean, N)
def initial_state():
num_clusters = 3
alpha = 1.0
    cluster_ids = list(range(num_clusters))
state = {
'cluster_ids_': cluster_ids,
'data_': data,
'num_clusters_': num_clusters,
'cluster_variance_': .01,
'alpha_': alpha,
'hyperparameters_': {
"mean": 0,
"variance": 1,
},
'suffstats': [None, None, None],
'assignment': [random.choice(cluster_ids) for _ in data],
'pi': [alpha / num_clusters for _ in cluster_ids],
'cluster_means': [-1, 0, 1]
}
update_suffstats(state)
return state
state = initial_state()
```
```python
for k, v in state.items():
print(k)
```
num_clusters_
suffstats
data_
cluster_means
cluster_variance_
cluster_ids_
assignment
pi
alpha_
hyperparameters_
### Gibbs Sampling
The [theory of Gibbs sampling](https://en.wikipedia.org/wiki/Gibbs_sampling) tells us that given some data $\bf y$ and a probability distribution $p$ parameterized by $\gamma_1, \ldots, \gamma_d$, we can successively draw samples from the distribution by sampling from
$$\gamma_j^{(t)}\sim p(\gamma_j \,|\, \gamma_{\neg j}^{(t-1)})$$
where $\gamma_{\neg j}^{(t-1)}$ is all current values of $\gamma_i$ except for $\gamma_j$. If we sample long enough, these $\gamma_j$ values will be random samples from $p$.
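Before specializing to the mixture model, here is a toy sketch of the idea: for a bivariate standard normal with correlation $\rho$ (the value of $\rho$ is an arbitrary assumption), each coordinate is repeatedly resampled from its conditional distribution given the other.
```python
import numpy as np

rho = 0.8          # assumed correlation
x_, y_ = 0.0, 0.0  # arbitrary starting point
samples = []
for _ in range(5000):
    # x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y | x
    x_ = np.random.normal(rho * y_, np.sqrt(1 - rho**2))
    y_ = np.random.normal(rho * x_, np.sqrt(1 - rho**2))
    samples.append((x_, y_))
np.corrcoef(np.array(samples[500:]).T)[0, 1]  # should be close to rho
```
After discarding the first few hundred draws as burn-in, the empirical correlation of the samples approaches $\rho$.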
In deriving a Gibbs sampler, it is often helpful to observe that
$$
p(\gamma_j \,|\, \gamma_{\neg j})
= \frac{
p(\gamma_1,\ldots,\gamma_d)
}{
p(\gamma_{\neg j})
} \propto p(\gamma_1,\ldots,\gamma_d).
$$
The conditional distribution is proportional to the joint distribution. We will get a lot of mileage from this simple observation by dropping constant terms from the joint distribution (relative to the parameters we are conditioned on).
The $\gamma$ values in our model are each of the $\theta_k$ values, the $z_i$ values, and the $\pi_k$ values. Thus, we need to derive the conditional distributions for each of these.
Many derivation of Gibbs samplers that I have seen rely on a lot of handwaving and casual appeals to conjugacy. I have tried to add more mathematical details here. I would gladly accept feedback on how to more clearly present the derivations! I have also tried to make the derivations more concrete by immediately providing code to do the computations in this specific case.
#### Conditional Distribution of Assignment
For brevity, we will use
$$
p(z_i=k \,|\, \cdot)=
p(z_i=k \,|\,
z_{\neg i}, \pi,
\theta_1, \theta_2, \theta_3, \sigma, \bf x
).
$$
Because cluster assignments are conditionally independent given the cluster weights and parameters,
\begin{align}
p(z_i=k \,|\, \cdot)
&\propto
\prod_i^n
\prod_k^K
\left(
\pi_k
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right\}
\right)^{\delta(z_i, k)} \\
&\propto
\pi_k \cdot
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left\{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right\}
\end{align}
This equation intuitively makes sense: point $i$ is more likely to be in cluster $k$ if $k$ is itself probable ($\pi_k\gg 0$) and $x_i$ is close to the mean of the cluster $\theta_k$.
For each data point $i$, we can compute $p(z_i=k \,|\, \cdot)$ for each of cluster $k$. These values are the unnormalized parameters to a discrete distribution from which we can sample assignments.
Below, we define functions for doing this sampling. `sample_assignment` will generate a sample from the posterior assignment distribution for the specified data point. `update_assignment` will sample from the posterior assignment for each data point and update the `state` object.
```python
def log_assignment_score(data_id, cluster_id, state):
"""log p(z_i=k \,|\, \cdot)
We compute these scores in log space for numerical stability.
"""
x = state['data_'][data_id]
theta = state['cluster_means'][cluster_id]
var = state['cluster_variance_']
log_pi = np.log(state['pi'][cluster_id])
return log_pi + stats.norm.logpdf(x, theta, var)
def assignment_probs(data_id, state):
    r"""p(z_i=cid \,|\, \cdot) for cid in cluster_ids
"""
scores = [log_assignment_score(data_id, cid, state) for cid in state['cluster_ids_']]
scores = np.exp(np.array(scores))
return scores / scores.sum()
def sample_assignment(data_id, state):
"""Sample cluster assignment for data_id given current state
cf Step 1 of Algorithm 2.1 in Sudderth 2006
"""
    p = assignment_probs(data_id, state)
return np.random.choice(state['cluster_ids_'], p=p)
def update_assignment(state):
"""Update cluster assignment for each data point given current state
cf Step 1 of Algorithm 2.1 in Sudderth 2006
"""
for data_id, x in enumerate(state['data_']):
state['assignment'][data_id] = sample_assignment(data_id, state)
update_suffstats(state)
```
#### Conditional Distribution of Mixture Weights
We can similarly derive the conditional distributions of mixture weights by an application of Bayes theorem. Instead of updating each component of $\pi$ separately, we update them together (this is called blocked Gibbs).
\begin{align}
p(\pi \,|\, \cdot)&=
p(\pi \,|\,
\bf{z},
\theta_1, \theta_2, \theta_3,
\sigma, \mathbf{x}, \alpha
)\\
&\propto
p(\pi \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \alpha
)
p(\bf{z}\ \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \pi, \alpha
)\\
&=
p(\pi \,|\,
\alpha
)
p(\bf{z}\ \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \pi, \alpha
)\\
&=
\prod_{k=1}^K \pi_k^{\alpha/K - 1}
\prod_{k=1}^K \pi_k^{\sum_{i=1}^N \delta(z_i, k)} \\
&=\prod_{k=1}^3 \pi_k^{\alpha/K+\sum_{i=1}^N \delta(z_i, k)-1}\\
&\propto \text{Dir}\left(
\sum_{i=1}^N \delta(z_i, 1)+\alpha/K,
\sum_{i=1}^N \delta(z_i, 2)+\alpha/K,
\sum_{i=1}^N \delta(z_i, 3)+\alpha/K
\right)
\end{align}
Here are Python functions to sample from the mixture weights given the current `state` and to update the mixture weights in the `state` object.
```python
def sample_mixture_weights(state):
"""Sample new mixture weights from current state according to
a Dirichlet distribution
cf Step 2 of Algorithm 2.1 in Sudderth 2006
"""
ss = state['suffstats']
alpha = [ss[cid].N + state['alpha_'] / state['num_clusters_']
for cid in state['cluster_ids_']]
return stats.dirichlet(alpha).rvs(size=1).flatten()
def update_mixture_weights(state):
"""Update state with new mixture weights from current state
sampled according to a Dirichlet distribution
cf Step 2 of Algorithm 2.1 in Sudderth 2006
"""
state['pi'] = sample_mixture_weights(state)
```
#### Conditional Distribution of Cluster Means
Finally, we need to compute the conditional distribution for the cluster means.
We assume the unknown cluster means are distributed according to a normal distribution with hyperparameter mean $\lambda_1$ and variance $\lambda_2^2$. The final step in this derivation comes from the normal-normal conjugacy. For more information see [section 2.3 of this](http://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf) and [section 6.2 of this](https://web.archive.org/web/20160304125731/http://fisher.osu.edu/~schroeder.9/AMIS900/ech6.pdf).
\begin{align}
p(\theta_k \,|\, \cdot)&=
p(\theta_k \,|\,
\bf{z}, \pi,
\theta_{\neg k},
\sigma, \bf x, \lambda_1, \lambda_2
) \\
&\propto p(\left\{x_i \,|\, z_i=k\right\} \,|\, \bf{z}, \pi,
\theta_1, \theta_2, \theta_3,
\sigma, \lambda_1, \lambda_2) \cdot\\
&\phantom{==}p(\theta_k \,|\, \bf{z}, \pi,
\theta_{\neg k},
\sigma, \lambda_1, \lambda_2)\\
&\propto p(\left\{x_i \,|\, z_i=k\right\} \,|\, \mathbf{z},
\theta_k, \sigma)
p(\theta_k \,|\, \lambda_1, \lambda_2)\\
&= \mathcal{N}(\theta_k \,|\, \mu_n, \sigma_n)\\
\end{align}
where

$$ \sigma_n^2 = \frac{1}{
    \frac{1}{\lambda_2^2} + \frac{N_k}{\sigma^2}
} $$
and
$$\mu_n = \sigma_n^2
\left(
\frac{\lambda_1}{\lambda_2^2} +
    \frac{N_k\bar{x}_k}{\sigma^2}
\right)
$$
Here is the code for sampling those means and for updating our state accordingly.
```python
def sample_cluster_mean(cluster_id, state):
cluster_var = state['cluster_variance_']
hp_mean = state['hyperparameters_']['mean']
hp_var = state['hyperparameters_']['variance']
ss = state['suffstats'][cluster_id]
numerator = hp_mean / hp_var + ss.theta * ss.N / cluster_var
denominator = (1.0 / hp_var + ss.N / cluster_var)
posterior_mu = numerator / denominator
posterior_var = 1.0 / denominator
return stats.norm(posterior_mu, np.sqrt(posterior_var)).rvs()
def update_cluster_means(state):
state['cluster_means'] = [sample_cluster_mean(cid, state)
for cid in state['cluster_ids_']]
```
Doing each of these three updates in sequence makes a complete _Gibbs step_ for our mixture model. Here is a function to do that:
```python
def gibbs_step(state):
update_assignment(state)
update_mixture_weights(state)
update_cluster_means(state)
```
Initially, we assigned each data point to a random cluster. We can see this by plotting a histogram of each cluster.
```python
def plot_clusters(state):
gby = pd.DataFrame({
'data': state['data_'],
'assignment': state['assignment']}
).groupby(by='assignment')['data']
hist_data = [gby.get_group(cid).tolist()
for cid in gby.groups.keys()]
plt.hist(hist_data,
bins=20,
histtype='stepfilled', alpha=.5 )
plot_clusters(state)
```
Each time we run `gibbs_step`, our `state` is updated with newly sampled assignments. Look what happens to our histogram after 5 steps:
```python
for _ in range(5):
gibbs_step(state)
plot_clusters(state)
```
Suddenly, we are seeing clusters that appear very similar to what we would intuitively expect: three Gaussian clusters.
Another way to see the progress made by the Gibbs sampler is to plot the change in the model's log-likelihood after each step. The log-likelihood is given by:
$$
\log p(\mathbf{x} \,|\, \pi, \theta_1, \theta_2, \theta_3)
\propto \sum_x \log \left(
\sum_{k=1}^3 \pi_k \exp
\left\{
-(x-\theta_k)^2 / (2\sigma^2)
\right\}
\right)
$$
We can define this as a function of our `state` object:
```python
def log_likelihood(state):
"""Data log-likeliehood
Equation 2.153 in Sudderth
"""
ll = 0
for x in state['data_']:
pi = state['pi']
mean = state['cluster_means']
sd = np.sqrt(state['cluster_variance_'])
ll += np.log(np.dot(pi, stats.norm(mean, sd).pdf(x)))
return ll
```
```python
state = initial_state()
ll = [log_likelihood(state)]
for _ in range(20):
gibbs_step(state)
ll.append(log_likelihood(state))
pd.Series(ll).plot()
```
See that the log likelihood improves with iterations of the Gibbs sampler. This is what we should expect: the Gibbs sampler finds state configurations that make the data we have seem "likely". However, the likelihood isn't strictly monotonic: it jitters up and down. Though it behaves similarly, the Gibbs sampler isn't optimizing the likelihood function. In its steady state, it is sampling from the posterior distribution. The `state` after each step of the Gibbs sampler is a sample from the posterior.
```python
pd.Series(ll).plot(ylim=[-150, -100])
```
[In another post](/collapsed-gibbs/), I show how we can "collapse" the Gibbs sampler and sampling the assignment parameter without sampling the $\pi$ and $\theta$ values. This collapsed sampler can also be extended to the model with a Dirichet process prior that allows the number of clusters to be a parameter fit by the model.
## Notation Helper
* $N_k$, `state['suffstat'][k].N`: Number of points in cluster $k$.
* $\theta_k$, `state['suffstat'][k].theta`: Mean of cluster $k$.
* $\lambda_1$, `state['hyperparameters_']['mean']`: Mean of prior distribution over cluster means.
* $\lambda_2^2$, `state['hyperparameters_']['variance']`: Variance of prior distribution over cluster means.
* $\sigma^2$, `state['cluster_variance_']`: Known, fixed variance of clusters.
The superscript $(t)$ on $\theta_k$, $\pi_k$, and $z_i$ indicates the value of that variable at step $t$ of the Gibbs sampler.
| c7a6bda22c51039fc6e1eace14a2f4f3b0786f43 | 71,843 | ipynb | Jupyter Notebook | pages/2015-09-02-fitting-a-mixture-model.ipynb | tdhopper/notes-on-dirichlet-processes | 6efb736ca7f65cb4a51f99494d6fcf6709395cd7 | [
"MIT"
] | 438 | 2015-08-06T13:32:35.000Z | 2022-03-05T03:20:44.000Z | pages/2015-09-02-fitting-a-mixture-model.ipynb | tdhopper/notes-on-dirichlet-processes | 6efb736ca7f65cb4a51f99494d6fcf6709395cd7 | [
"MIT"
] | 2 | 2015-10-13T17:10:18.000Z | 2018-07-18T14:37:21.000Z | pages/2015-09-02-fitting-a-mixture-model.ipynb | tdhopper/notes-on-dirichlet-processes | 6efb736ca7f65cb4a51f99494d6fcf6709395cd7 | [
"MIT"
] | 134 | 2015-08-26T03:59:12.000Z | 2021-09-10T02:45:44.000Z | 96.693136 | 10,094 | 0.800454 | true | 4,841 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.914901 | 0.826425 | __label__eng_Latn | 0.930901 | 0.758394 |
$
\begin{align}
a_1&=b_1+c_1 \tag{1}\\
a_2&=b_2+c_2+d_2 \tag{2}\\
a_3&=b_3+c_3 \tag{3}
\end{align}
$
[Euler](https://krasjet.github.io/quaternion/bonus_gimbal_lock.pdf)
[Quaternion](https://krasjet.github.io/quaternion/bonus_gimbal_lock.pdf)
[Source](https://github.com/Krasjet/quaternion)
| ca1b0a8eb933221f411e2731de800fae2f930bd0 | 1,368 | ipynb | Jupyter Notebook | Doc/Jupyter Notebook/Math_2.ipynb | Alpha255/Rockcat | f04124b17911fb6148512dd8fb260bd84702ffc1 | [
"MIT"
] | null | null | null | Doc/Jupyter Notebook/Math_2.ipynb | Alpha255/Rockcat | f04124b17911fb6148512dd8fb260bd84702ffc1 | [
"MIT"
] | null | null | null | Doc/Jupyter Notebook/Math_2.ipynb | Alpha255/Rockcat | f04124b17911fb6148512dd8fb260bd84702ffc1 | [
"MIT"
] | null | null | null | 18.486486 | 81 | 0.505117 | true | 129 | Qwen/Qwen-72B | 1. YES
2. YES | 0.822189 | 0.689306 | 0.56674 | __label__yue_Hant | 0.298697 | 0.155056 |
# 14 Linear Algebra: Singular Value Decomposition
One can always decompose a matrix $\mathsf{A}$ as
\begin{gather}
\mathsf{A} = \mathsf{U}\,\text{diag}(w_j)\,\mathsf{V}^{T}\\
\mathsf{U}^T \mathsf{U} = \mathsf{U} \mathsf{U}^T = 1\\
\mathsf{V}^T \mathsf{V} = \mathsf{V} \mathsf{V}^T = 1
\end{gather}
where $\mathsf{U}$ and $\mathsf{V}$ are orthogonal matrices and the $w_j$ are the _singular values_ that are assembled into a diagonal matrix $\mathsf{W}$.
$$
\mathsf{W} = \text{diag}(w_j)
$$
The inverse (if it exists) can be directly calculated from the SVD:
$$
\mathsf{A}^{-1} = \mathsf{V} \text{diag}(1/w_j) \mathsf{U}^T
$$
## Solving ill-conditioned coupled linear equations
```python
import numpy as np
```
### Non-singular matrix
Solve the linear system of equations
$$
\mathsf{A}\mathbf{x} = \mathbf{b}
$$
Using the standard linear solver in numpy:
```python
A = np.array([
[1, 2, 3],
[3, 2, 1],
[-1, -2, -6],
])
b = np.array([0, 1, -1])
```
```python
np.linalg.solve(A, b)
```
array([ 0.83333333, -0.91666667, 0.33333333])
Using the inverse from SVD:
$$
\mathbf{x} = \mathsf{A}^{-1} \mathbf{b}
$$
```python
U, w, VT = np.linalg.svd(A)
print(w)
```
[ 7.74140616 2.96605874 0.52261473]
First check that the SVD really factors $\mathsf{A} = \mathsf{U}\,\text{diag}(w_j)\,\mathsf{V}^{T}$:
```python
U.dot(np.diag(w).dot(VT))
```
array([[ 1., 2., 3.],
[ 3., 2., 1.],
[-1., -2., -6.]])
```python
np.allclose(A, U.dot(np.diag(w).dot(VT)))
```
True
Now calculate the matrix inverse $\mathsf{A}^{-1} = \mathsf{V} \text{diag}(1/w_j) \mathsf{U}^T$:
```python
inv_w = 1/w
print(inv_w)
```
[ 0.1291755 0.33714774 1.91345545]
```python
A_inv = VT.T.dot(np.diag(inv_w)).dot(U.T)
print(A_inv)
```
[[ -8.33333333e-01 5.00000000e-01 -3.33333333e-01]
[ 1.41666667e+00 -2.50000000e-01 6.66666667e-01]
[ -3.33333333e-01 -1.38777878e-17 -3.33333333e-01]]
Check that this is the same that we get from `numpy.linalg.inv()`:
```python
np.allclose(A_inv, np.linalg.inv(A))
```
True
Now, *finally* solve (and check against `numpy.linalg.solve()`):
```python
x = A_inv.dot(b)
print(x)
np.allclose(x, np.linalg.solve(A, b))
```
[ 0.83333333 -0.91666667 0.33333333]
True
```python
A.dot(x)
```
array([ -8.88178420e-16, 1.00000000e+00, -1.00000000e+00])
```python
np.allclose(A.dot(x), b)
```
True
### Singular matrix
If the matrix $\mathsf{A}$ is *singular* (i.e., its rank, the number of linearly independent rows or columns, is less than its dimension), then the linear system of equations does not have a unique solution:
For example, the following matrix has the same row twice:
```python
C = np.array([
[ 0.87119148, 0.9330127, -0.9330127],
[ 1.1160254, 0.04736717, -0.04736717],
[ 1.1160254, 0.04736717, -0.04736717],
])
b1 = np.array([ 2.3674474, -0.24813392, -0.24813392])
b2 = np.array([0, 1, 1])
```
```python
np.linalg.solve(C, b1)
```
NOTE: failure is not always that obvious: numerically, a matrix can be *almost* singular
```python
D = C.copy()
D[2, :] = C[0] - 3*C[1]
D
```
array([[ 0.87119148, 0.9330127 , -0.9330127 ],
[ 1.1160254 , 0.04736717, -0.04736717],
[-2.47688472, 0.79091119, -0.79091119]])
```python
np.linalg.solve(D, b1)
```
array([ -1.70189831e+00, 2.34823174e+16, 2.34823174e+16])
Note that some of the values are huge, suspiciously close to the inverse of machine precision. That is a sign of a nearly singular matrix.
Now back to the example with $\mathsf{C}$:
#### SVD for singular matrices
If a matrix is *singular* or *near singular* then one can *still* apply SVD.
One can then compute the *pseudo inverse*
\begin{align}
\mathsf{A}^{-1} &= \mathsf{V} \text{diag}(\alpha_j) \mathsf{U}^T \\
\alpha_j &= \begin{cases}
\frac{1}{w_j}, &\quad\text{if}\ w_j \neq 0\\
0, &\quad\text{if}\ w_j = 0
\end{cases}
\end{align}
i.e., any singular $w_j = 0$ is being "augmented" by setting
$$
\frac{1}{w_j} \rightarrow 0 \quad\text{if}\quad w_j = 0
$$
in $\text{diag}(1/w_j)$.
Perform the SVD for the singular matrix $\mathsf{C}$:
```python
U, w, VT = np.linalg.svd(C)
print(w)
```
[ 1.99999999e+00 1.00000000e+00 1.06263691e-33]
Note the third value $w_2 \approx 0$: a sign of a singular matrix.
Test that the SVD really decomposes $\mathsf{A} = \mathsf{U}\,\text{diag}(w_j)\,\mathsf{V}^{T}$:
```python
U.dot(np.diag(w).dot(VT))
```
array([[ 0.87119148, 0.9330127 , -0.9330127 ],
[ 1.1160254 , 0.04736717, -0.04736717],
[ 1.1160254 , 0.04736717, -0.04736717]])
```python
np.allclose(C, U.dot(np.diag(w).dot(VT)))
```
True
There are the **singular values**:
```python
singular_values = np.abs(w) < 1e-12
print(singular_values)
```
[False False True]
#### Pseudo-inverse
Calculate the **pseudo-inverse** from the SVD
\begin{align}
\mathsf{A}^{-1} &= \mathsf{V} \text{diag}(\alpha_j) \mathsf{U}^T \\
\alpha_j &= \begin{cases}
\frac{1}{w_j}, &\quad\text{if}\ w_j \neq 0\\
0, &\quad\text{if}\ w_j = 0
\end{cases}
\end{align}
Augment:
```python
inv_w = 1/w
inv_w[singular_values] = 0
print(inv_w)
```
[ 0.5 1. 0. ]
```python
C_inv = VT.T.dot(np.diag(inv_w)).dot(U.T)
print(C_inv)
```
[[-0.04736717 0.46650635 0.46650635]
[ 0.5580127 -0.21779787 -0.21779787]
[-0.5580127 0.21779787 0.21779787]]
Now solve the linear problem with SVD:
```python
x1 = C_inv.dot(b1)
print(x1)
```
[-0.34365138 1.4291518 -1.4291518 ]
```python
C.dot(x1)
```
array([ 2.3674474 , -0.24813392, -0.24813392])
```python
C.dot(x1) - b1
```
array([ 8.88178420e-16, -1.11022302e-16, -1.11022302e-16])
Thus, using the pseudo-inverse $\mathsf{C}^{-1}$ we can obtain solutions to the equation
$$
\mathsf{C} \mathbf{x}_1 = \mathbf{b}_1
$$
However, $\mathbf{x}_1$ is not the only solution: there's a whole line of solutions, formed from the special solution plus a combination of the basis vectors in the *null space* of the matrix:
The (right) *kernel* or *null space* contains all vectors $\mathbf{x^0}$ for which
$$
\mathsf{C} \mathbf{x^0} = 0
$$
(The dimension of the null space corresponds to the number of zero singular values.) You can find a basis that spans the null space. Any linear combination of null space basis vectors will also end up in the null space when $\mathbf{A}$ is applied to it.
Specifically, if $\mathbf{x}_1$ is a special solution and $\lambda_1 \mathbf{x}^0_1 + \lambda_2 \mathbf{x}^0_2 + \dots$ is a vector in the null space then
$$
\mathbf{x} = \mathbf{x}_1 + ( \lambda_1 \mathbf{x}^0_1 + \lambda_2 \mathbf{x}^0_2 + \dots )
$$
is **also a solution** because
$$
\mathsf{C} \mathbf{x} = \mathsf{C} \mathbf{x}_1 + \mathsf{C} ( \lambda_1 \mathbf{x}^0_1 + \lambda_2 \mathbf{x}^0_2 + \dots ) = \mathbf{b}_1 + 0 = \mathbf{b}_1
$$
The $\lambda_i$ are arbitrary real numbers and hence there is an infinite number of solutions.
In SVD:
* The columns $U_{\cdot, i}$ of $\mathsf{U}$ (i.e. `U.T[i]` or `U[:, i]`) corresponding to non-zero $w_i$, i.e. $\{i : w_i \neq 0\}$, form the basis for the _range_ of the matrix $\mathsf{A}$.
* The columns $V_{\cdot, i}$ of $\mathsf{V}$ (i.e. `V.T[i]` or `V[:, i]`) corresponding to zero $w_i$, i.e. $\{i : w_i = 0\}$, form the basis for the _null space_ of the matrix $\mathsf{A}$.
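As a quick sketch, these bases can be read off directly from the `U`, `w`, `VT` factors we computed above for $\mathsf{C}$ (the tolerance is an assumption for deciding which $w_j$ count as zero):
```python
tol = 1e-12
range_vecs = U[:, np.abs(w) > tol]  # columns of U spanning the range of C
null_vecs = VT[np.abs(w) < tol]     # rows of VT spanning the null space of C
print(np.allclose(C.dot(null_vecs.T), 0))  # null-space vectors map to zero
```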
Note that `x1` can be written as a linear combination of `U.T[0]` and `U.T[1]`:
```python
x1
```
array([-0.34365138, 1.4291518 , -1.4291518 ])
```python
U.T
```
array([[ -7.07106782e-01, -4.99999999e-01, -4.99999999e-01],
[ 7.07106780e-01, -5.00000001e-01, -5.00000001e-01],
[ -8.23369199e-17, -7.07106781e-01, 7.07106781e-01]])
```python
VT
```
array([[-0.8660254 , -0.35355339, 0.35355339],
[-0.5 , 0.61237244, -0.61237244],
[-0. , -0.70710678, -0.70710678]])
```python
U.T[0].dot(x1), U.T[1].dot(x1)
```
(0.24299822382783764, -0.24299822305983237)
```python
VT[2].dot(x1)
```
0.0
```python
U.T[0].dot(x1) * U.T[0] + U.T[1].dot(x1) * U.T[1] + 2 * VT[2]
```
array([-0.34365138, -1.41421356, -1.41421356])
Thus, **all** solutions are
```
x1 + lambda * VT[2]
```
The pseudo-inverse also gives a particular solution $x_2$ for the second right-hand side $\mathbf{b}_2$:
```python
x2 = C_inv.dot(b2)
print(x2)
print(C.dot(x2))
print(C.dot(x2) - b2)
```
[ 0.9330127 -0.43559574 0.43559574]
[ -5.55111512e-16 1.00000000e+00 1.00000000e+00]
[ -5.55111512e-16 2.22044605e-16 2.22044605e-16]
```python
C.dot(10*x2)
```
array([ -4.44089210e-15, 1.00000000e+01, 1.00000000e+01])
```python
C.dot(VT[2])
```
array([ 0.00000000e+00, -6.93889390e-18, -6.93889390e-18])
```python
VT[2]
```
array([-0. , -0.70710678, -0.70710678])
```python
null_basis = VT[singular_values]
```
```python
C.dot(null_basis.T)
```
array([[ 0.00000000e+00],
[ -6.93889390e-18],
[ -6.93889390e-18]])
## SVD for fewer equations than unknowns
$N$ equations for $M$ unknowns with $N < M$:
* no unique solutions (underdetermined)
* $M-N$ dimensional family of solutions
* SVD: at least $M-N$ zero or negligible $w_j$: columns of $\mathsf{V}$ corresponding to singular $w_j$ span the solution space when added to a particular solution.
Same as the above **Solving ill-conditioned coupled linear equations**.
## SVD for more equations than unknowns
$N$ equations for $M$ unknowns with $N > M$:
* no exact solutions in general (overdetermined)
* but: SVD can provide best solution in the least-square sense
$$
\mathbf{x} = \mathsf{V}\, \text{diag}(1/w_j)\, \mathsf{U}^{T}\, \mathbf{b}
$$
where
* $\mathbf{x}$ is a $M$-dimensional vector of the unknowns,
* $\mathsf{V}$ is a $M \times M$ matrix
* the $w_j$ form a square $M \times M$ matrix,
* $\mathsf{U}$ is a $N \times M$ matrix (and $\mathsf{U}^T$ is a $M \times N$ matrix), and
* $\mathbf{b}$ is the $N$-dimensional vector of the given values
It can be shown that $\mathbf{x}$ minimizes the residual
$$
\mathbf{r} := |\mathsf{A}\mathbf{x} - \mathbf{b}|.
$$
(For a $N \le M$, one can find $\mathbf{x}$ so that $\mathbf{r} = 0$ – see above.)
(In the following, $\mathbf{x}$ will correspond to the $M$ parameter values of the model and $N$ is the number of observations.)
### Linear least-squares fitting
This is the *liner least-squares fitting problem*: Given $N$ data points $(x_i, y_i)$ (where $1 \le i \le N$), fit to a linear model $y(x)$, which can be any linear combination of $M$ functions of $x$.
For example, if we have $M$ functions $x^k$ with parameters $a_k$
$$
y(x) = a_1 + a_2 x + a_3 x^2 + \dots + a_M x^{M-1}
$$
or in general
$$
y(x) = \sum_{k=1}^M a_k X_k(x)
$$
The goal is to determine the $M$ coefficients $a_k$.
Define the **merit function**
$$
\chi^2 = \sum_{i=1}^N \left[ \frac{y_i - \sum_{k=1}^M a_k X_k(x_i)}{\sigma_i}\right]^2
$$
(sum of squared deviations, weighted with standard deviations $\sigma_i$ on the $y_i$).
Best parameters $a_k$ are the ones that *minimize $\chi^2$*.
*Design matrix* $\mathsf{A}$ ($N \times M$, $N \geq M$), vector of measurements $\mathbf{b}$ ($N$-dim) and parameter vector $\mathbf{a}$ ($M$-dim):
\begin{align}
A_{ij} &= \frac{X_j(x_i)}{\sigma_i}\\
b_i &= \frac{y_i}{\sigma_i}\\
\mathbf{a} &= (a_1, a_2, \dots, a_M)
\end{align}
Minimum occurs when the derivative vanishes:
$$
0 = \frac{\partial\chi^2}{\partial a_k} = \sum_{i=1}^N {\sigma_i}^{-2} \left[ y_i - \sum_{j=1}^M a_j X_j(x_i) \right] X_k(x_i), \quad 1 \leq k \leq M
$$
($M$ coupled equations)
\begin{align}
\sum_{j=1}^{M} \alpha_{kj} a_j &= \beta_k\\
\mathsf{\alpha}\mathbf{a} = \mathsf{\beta}
\end{align}
with the $M \times M$ matrix
\begin{align}
\alpha_{kj} &= \sum_{i=1}^N \frac{X_j(x_i) X_k(x_i)}{\sigma_i^2}\\
\mathsf{\alpha} &= \mathsf{A}^T \mathsf{A}
\end{align}
and the vector of length $M$
\begin{align}
\beta_{k} &= \sum_{i=1}^N \frac{y_i X_k(x_i)}{\sigma_i^2}\\
\mathsf{\beta} &= \mathsf{A}^T \mathbf{b}
\end{align}
The inverse of $\mathsf{\alpha}$ is related to the uncertainties in the parameters:
$$
\mathsf{C} := \mathsf{\alpha}^{-1}
$$
in particular
$$
\sigma^2(a_i) = C_{ii}
$$
(and the $C_{ij}$ are the co-variances).
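As a sketch of how these pieces fit together in code, here is a vectorized construction of $\mathsf{A}$, $\mathsf{\alpha}$ and $\mathsf{\beta}$ for a made-up straight-line dataset (all data values below are assumptions for illustration):
```python
import numpy as np

x_d = np.linspace(0, 1, 50)                      # sample positions (assumed)
y_d = 2*x_d + 1 + 0.1*np.random.randn(len(x_d))  # noisy line (assumed)
sig = 0.1 * np.ones_like(x_d)                    # measurement errors (assumed)

basis = [np.ones_like, lambda t: t]              # X_1(x) = 1, X_2(x) = x
A_d = np.column_stack([f(x_d) for f in basis]) / sig[:, None]  # A_ij = X_j(x_i)/sigma_i
b_d = y_d / sig                                  # b_i = y_i/sigma_i

alpha_d = A_d.T.dot(A_d)                         # alpha = A^T A
beta_d = A_d.T.dot(b_d)                          # beta = A^T b
a_d = np.linalg.solve(alpha_d, beta_d)           # fitted parameters
C_d = np.linalg.inv(alpha_d)                     # covariance matrix C = alpha^{-1}
print(a_d, np.sqrt(np.diag(C_d)))                # parameters and their sigmas
```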
#### Solution of the linear least-squares fitting problem with SVD
We need to solve the resulting system of $M$ coupled equations
\begin{align}
\sum_{j=1}^{M} \alpha_{kj} a_j &= \beta_k\\
\mathsf{\alpha}\mathbf{a} = \mathsf{\beta}
\end{align}
SVD finds $\mathbf{a}$ that minimizes
$$
\chi^2 = |\mathsf{A}\mathbf{a} - \mathbf{b}|^2
$$
The errors are
$$
\sigma^2(a_j) = \sum_{i=1}^{M} \left(\frac{V_{ji}}{w_i}\right)^2
$$
#### Example
Synthetic data
$$
y(x) = 3\sin x - 2\sin 3x + \sin 4x
$$
with noise $r$ added (uniform in range $-5 < r < 5$).
```python
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.style.use('ggplot')
import numpy as np
```
```python
def signal(x, noise=0):
r = np.random.uniform(-noise, noise, len(x))
return 3*np.sin(x) - 2*np.sin(3*x) + np.sin(4*x) + r
```
```python
X = np.linspace(-10, 10, 500)
Y = signal(X, noise=5)
```
```python
plt.plot(X, Y, 'r-', X, signal(X, noise=0), 'k--')
```
```python
def fitfunc(x, a):
return a[0]*np.cos(x) + a[1]*np.sin(x) + \
a[2]*np.cos(2*x) + a[3]*np.sin(2*x) + \
a[4]*np.cos(3*x) + a[5]*np.sin(3*x) + \
a[6]*np.cos(4*x) + a[7]*np.sin(4*x)
def basisfuncs(x):
return np.array([np.cos(x), np.sin(x),
np.cos(2*x), np.sin(2*x),
np.cos(3*x), np.sin(3*x),
np.cos(4*x), np.sin(4*x)])
```
```python
M = 8
sigma = 1.
alpha = np.zeros((M, M))
beta = np.zeros(M)
for x in X:
Xk = basisfuncs(x)
for k in range(M):
for j in range(M):
alpha[k, j] += Xk[k]*Xk[j]
for x, y in zip(X, Y):
beta += y * basisfuncs(x)/sigma
```
```python
U, w, VT = np.linalg.svd(alpha)
V = VT.T
```
In this case, the singular values do not immediately show if any basis functions are superfluous (this would be the case for values close to 0).
```python
w
```
array([ 296.92809624, 282.94804954, 243.7895787 , 235.7300808 ,
235.15938555, 235.14838812, 235.14821093, 235.14821013])
... nevertheless, remember to routinely mask any singular (or nearly singular) values:
```python
w_inv = 1/w
w_inv[np.abs(w) < 1e-12] = 0
alpha_inv = V.dot(np.diag(w_inv)).dot(U.T)
```
Compare the fitted values to the original parameters $a_j = 0, +3, 0, 0, 0, -2, 0, +1$.
```python
a_values = alpha_inv.dot(beta)
print(a_values)
```
[-0.05602761 2.76553973 0.25531225 -0.03780974 -0.05668003 -1.76371356
0.28272354 0.68902357]
```python
plt.plot(X, fitfunc(X, a_values), 'b-', label="fit")
plt.plot(X, signal(X, noise=0), 'k--', label="signal")
plt.legend(loc="best", fontsize="small")
```
| bbbe07ee5de06bd6ba2212b0e5977dac1b7a5df7 | 153,734 | ipynb | Jupyter Notebook | 14_linear_algebra/14_SVD.ipynb | nachrisman/PHY494 | bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7 | [
"CC-BY-4.0"
] | null | null | null | 14_linear_algebra/14_SVD.ipynb | nachrisman/PHY494 | bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7 | [
"CC-BY-4.0"
] | null | null | null | 14_linear_algebra/14_SVD.ipynb | nachrisman/PHY494 | bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7 | [
"CC-BY-4.0"
] | null | null | null | 97.361621 | 66,442 | 0.844777 | true | 5,706 | Qwen/Qwen-72B | 1. YES
2. YES | 0.945801 | 0.803174 | 0.759643 | __label__eng_Latn | 0.761403 | 0.603237 |
```python
from IPython.core.display import HTML
css_file = './custom.css'
HTML(open(css_file, "r").read())
```
###### Content provided under a Creative Commons Attribution license, CC-BY 4.0; code under MIT License. (c)2015 [David I. Ketcheson](http://davidketcheson.info)
##### Version 0.2 - May 2021
```python
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation
from IPython.display import HTML
font = {'size' : 15}
matplotlib.rc('font', **font)
```
# The FFT, aliasing, and filtering
Welcome to lesson 3. Here we'll learn about the Fast Fourier Transform, which we've been using all along. We'll also learn about a numerical pathology of pseudospectral methods (known as *aliasing*) and one way to avoid it (known as *filtering* or *dealiasing*).
## The fast Fourier transform
We won't go into great detail regarding the FFT algorithm, since there is already an excellent explanation of the Fast Fourier Transform in Jupyter Notebook form available on the web:
- [Understanding the FFT Algorithm](https://jakevdp.github.io/blog/2013/08/28/understanding-the-fft/) by Jake Vanderplas
Suffice it to say that the FFT is a fast algorithm for computing the discrete Fourier transform (DFT):
$$
\hat{u}_\xi = \sum_{j=0}^{m-1} u_j e^{-2\pi i \xi j/m}
$$
or its inverse. The DFT, as we know, is linear and can be computed by multiplying $u$ by a certain $m\times m$ dense matrix $F$. Multiplication by a dense matrix requires ${\mathcal O}(m^2)$ operations.
The FFT is a shortcut to compute that matrix-vector product in just ${\mathcal O}(m \log m)$ operations by taking advantage of the special structure of $F$.
This is very important for pseudospectral methods, since most of the computational work occurs in computing the Fourier transform and its inverse. It's also important that we make use of a compiled version of the FFT, since native Python code is relatively slow. The `np.fft` module provides an interface to a compiled FFT library.
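As a quick sketch, we can build the dense DFT matrix directly from the sum above and check it against `np.fft.fft`; both give the same coefficients, but the matrix apply costs ${\mathcal O}(m^2)$ while the FFT costs ${\mathcal O}(m\log m)$.
```python
import numpy as np

m_demo = 256
u_demo = np.random.rand(m_demo)
idx = np.arange(m_demo)
F = np.exp(-2j * np.pi * np.outer(idx, idx) / m_demo)  # dense DFT matrix
print(np.allclose(F.dot(u_demo), np.fft.fft(u_demo)))  # True
```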
### Ordering of wavenumbers in FFT output
The vector returned by `np.fft` contains the Fourier coefficients of $u$ corresponding to the wavenumbers
$$
\frac{2\pi}{L} \{-m/2, -m/2 + 1, \dots, m/2 - 1\}.
$$
However, for computational efficiency the output vector does not use the natural ordering above. The ordering it uses can be obtained with the following command.
```python
m = 16
L = 2*np.pi
xi=np.fft.fftfreq(m)*m/(L/(2*np.pi))
print(xi)
```
As you can see, the return vector starts with the nonnegative wavenumbers, followed by the negative wavenumbers. It may seem strange to you that the range of wavenumbers returned is not symmetric; in the case above, it includes $-8$ but not $+8$. This apparent asymmetry can be explained once one understands the phenomenon known as *aliasing*.
## Aliasing
A numerical grid has a limited resolution. If you try to represent a rapidly-oscillating function with relatively few grid points, you will observe an effect known as **aliasing**. This naturally limits the range of frequencies that can be modelled on a given grid. It can also lead to instabilities in pseudospectral simulations, when generation of high frequencies leads to buildup of lower-frequency energy due to aliasing.
The code below plots a sine wave of a given frequency, along with its representation on a grid with $m$ points. Try changing $p$ and notice how for $m<2p$ the function looks like a lower-frequency mode.
```python
from ipywidgets import widgets
from ipywidgets import interact, interactive
def plot_sine(wavenumber=4,grid_points=12,plot_sine=True):
"Plot sin(2*pi*p), sampled at m equispaced points."
x = np.linspace(0,1,grid_points+1); # grid
xf = np.linspace(0,1,1000) # fine grid
y = np.sin(wavenumber*np.pi*x)
yf = np.sin(wavenumber*np.pi*xf)
fig = plt.figure(figsize = (8, 6));
ax = fig.add_subplot(1,1,1);
if plot_sine:
ax.plot(xf, yf, 'r-', linewidth=2);
ax.plot(x, y, 'o-', lw=2)
interact(plot_sine, wavenumber=(-30,30,1),
grid_points=(10, 16, 1));
```
### Exercise
Try to answer the questions below with pencil and paper; then check them by modifying the code above.
1. For a given number of grid points $m$, which wavenumbers $p$ will be aliased to the $p=0$ mode? Which will be aliased to $p=1$? Can you explain why?
2. What is the highest frequency mode that can be represented on a given grid?
After completing the exercise, explain why the sequence of wavenumbers given by `np.fft.fftfreq` is not, in fact, asymmetric after all.
## Aliasing as a source of numerical instability
As we have seen, aliasing means that wavenumbers of magnitude greater than $\pi m/L$ are incorrectly represented as lower wavenumbers on a grid with $m$ points. This suggests that we shouldn't allow larger wavenumbers in our numerical solution. For linear problems, this simply means that we should represent the initial condition by a truncated Fourier series containing modes with wavenumbers less than $\pi m/L$. This happens naturally when we sample the function at the grid points. As we evolve in time, higher frequencies are not generated due to the linearity of the problem.
Nonlinear problems are a different story. Let's consider what happens when we have a quadratic term like $u^2$, as in Burgers' equation. In general, if the grid function $u$ contains wavenumbers up to $\pi m/L$, then $u^2$ contains frequencies up to $2 \pi m/L$. So each time we compute this term, we generate high frequencies that get aliased back to lower frequencies on our grid. Clearly this has nothing to do with the correct mathematical solution and will lead to errors. Even worse, this aliasing effect can, as it is repeated at every step, lead to an instability that causes the numerical solution to blow up.
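A small sketch of this spectral broadening: a single mode at wavenumber $p$ produces content at wavenumbers $0$ and $2p$ once squared (the values of $m$ and $p$ below are arbitrary assumptions):
```python
import numpy as np

m_demo = 64
x_demo = np.arange(m_demo) * 2 * np.pi / m_demo
p = 10
u_demo = np.sin(p * x_demo)
for name, v in [("u", u_demo), ("u**2", u_demo**2)]:
    vhat = np.fft.fft(v)
    wavenumbers = np.fft.fftfreq(m_demo) * m_demo
    present = sorted(set(np.abs(wavenumbers[np.abs(vhat) > 1e-8])))
    print(name, "contains wavenumbers:", present)
```
With $p=10$ on this grid, `u` contains only wavenumber 10 while `u**2` contains 0 and 20; had $2p$ exceeded $m/2$, the new content would have been aliased back onto the grid.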
$$
\newcommand{\F}{\mathcal F}
\newcommand{\Finv}{{\mathcal F}^{-1}}
$$
## An illustration of aliasing instability: the Korteweg-de Vries equation
To see aliasing in practice, we'll consider the KdV equation, which describes certain kinds of water waves:
$$
u_t = -u u_x - u_{xxx}
$$
A natural pseudospectral discretization is obtained if we compute the spatial derivatives via
\begin{align}
u_x & = \Finv(i\xi \F(u)) \\
u_{xxx} & = \Finv(-i\xi^3 \F(u)).
\end{align}
This gives
$$
U'(t) = -D[U] \Finv(i\xi \F(U)) - \Finv(-i\xi^3 \F(U)).
$$
This is identical to our discretization of Burgers' equation, except that now we have a third-derivative term. In Fourier space, the third derivative gives a purely imaginary factor, which -- like the first derivative -- causes the solution to travel over time. Unlike the first derivative, the third derivative term causes different wavenumber modes to travel at different speeds; this is referred to as *dispersion*.
The largest-magnitude eigenvalues of the Jacobian for this semi-discretization are related to the 3rd-derivative term. If we consider only that term, the eigenvalues are
$$-i \xi^3$$
where $\xi$ lies in the range $(-m/2, m/2)$. So we need the time step to satisfy $\Delta t (m/2)^3 \in S$, where $S$ is the region of absolute stability of a given time integrator.
For this example we'll use a 3rd-order Runge-Kutta method:
```python
def rk3(u,xi,rhs):
y2 = u + dt*rhs(u,xi)
y3 = 0.75*u + 0.25*(y2 + dt*rhs(y2,xi))
u_new = 1./3 * u + 2./3 * (y3 + dt*rhs(y3,xi))
return u_new
```
Let's check the size of the imaginary axis interval contained in this method's absolute stability region:
```python
from nodepy import rk
ssp33 = rk.loadRKM('SSP33')
print(ssp33.imaginary_stability_interval())
```
Now we'll go ahead and implement our solution, making sure to set the time step according to the condition above.
```python
def rhs(u, xi, equation='KdV'):
uhat = np.fft.fft(u)
if equation == 'Burgers':
return -u*np.real(np.fft.ifft(1j*xi*uhat)) + np.real(np.fft.ifft(-xi**2*uhat))
elif equation == 'KdV':
return -u*np.real(np.fft.ifft(1j*xi*uhat)) - np.real(np.fft.ifft(-1j*xi**3*uhat))
# Grid
m = 256
L = 2*np.pi
x = np.arange(-m/2,m/2)*(L/m)
xi = np.fft.fftfreq(m)*m*2*np.pi/L
dt = 1.73/((m/2)**3)
A = 25; B = 16;
#u = 3*A**2/np.cosh(0.5*(A*(x+2.)))**2 + 3*B**2/np.cosh(0.5*(B*(x+1)))**2
#tmax = 0.006
# Try this one first:
u = 1500*np.exp(-10*(x+2)**2)
tmax = 0.005
uhat2 = np.abs(np.fft.fft(u))
num_plots = 50
nplt = np.floor((tmax/num_plots)/dt)
nmax = int(round(tmax/dt))
fig = plt.figure(figsize=(12,8))
axes = fig.add_subplot(211)
axes2 = fig.add_subplot(212)
line, = axes.plot(x,u,lw=3)
line2, = axes2.semilogy(xi,uhat2)
xi_max = np.max(np.abs(xi))
axes2.semilogy([xi_max/2.,xi_max/2.],[1.e-3,25000],'--r')
axes2.semilogy([-xi_max/2.,-xi_max/2.],[1.e-3,25000],'--r')
frames = [u.copy()]
tt = [0]
uuhat = [uhat2]
for n in range(1,nmax+1):
u_new = rk3(u,xi,rhs)
u = u_new.copy()
t = n*dt
# Plotting
if np.mod(n,nplt) == 0:
frames.append(u.copy())
tt.append(t)
uhat2 = np.abs(np.fft.fft(u))
uuhat.append(uhat2)
def plot_frame(i):
line.set_data(x,frames[i])
line2.set_data(np.sort(xi),uuhat[i][np.argsort(xi)])
axes.set_title('t= %.2e' % tt[i])
axes.set_xlim((-np.pi,np.pi))
axes.set_ylim((-100,3000))
anim = matplotlib.animation.FuncAnimation(fig, plot_frame,
frames=len(frames), interval=100)
HTML(anim.to_jshtml())
```
In the output, we're plotting the solution (top plot) and its Fourier transform (bottom plot). There are a lot of interesting things to say about the solution, but for now let's focus on the Fourier transform. Notice how the wavenumbers present in the solution remain in the lower half of those representable on the grid (this region is delimited by the dashed red lines). Because of this, no aliasing occurs.
Now change the code above to use only $m=128$ grid points. What happens?
## Explanation
Here we will give a somewhat simplified explanation of the blow-up just observed. First, this blowup has nothing to do with the absolute stability condition -- when we change $m$, the time step is automatically changed in a way that will ensure absolute stability. If you're not convinced, try taking the time step even smaller; you will still observe the blowup.
By taking $m=128$, we cut by half the wavenumbers that can be represented on the grid. As you can see from the plots, this means that some of the wavenumbers present in the initial data are in the upper half of the representable range (i.e., outside the dashed red lines). That means that the highest wavenumbers generated by the quadratic term will be aliased -- and they will be aliased back into that upper-half range. This leads to a gradual accumulation of high-wavenumber modes, easily visible in both plots. Eventually the high-wavenumber modes dominate the numerical solution and lead to blowup.
For a detailed discussion of aliasing instabilities, see Chapter 11 of John Boyd's "Chebyshev and Fourier Spectral Methods".
## Filtering
How can we avoid aliasing instability? The proper approach is to ensure that the solution is well resolved, so that the instability never appears. However, this may entail a very substantial computational cost. One way to ensure stability even if the solution is underresolved is by *filtering*, which is also known as *dealiasing*. In general it is unwise to rely on filtering, since it can mask the fact that the solution is not resolved (and hence not accurate). But understanding filtering can give a bit more insight into aliasing instability itself.
At the most basic level, filtering means removing the modes that lead to aliasing. This can be done by damping the high wavenumbers or simply zeroing them when computing the $(u^2)_x$ term. The obvious approach would be to filter the upper half of all wavenumbers, but this is overkill. In fact, it is sufficient to filter only the uppermost third. To see why, notice that the aliased modes resulting from the lower two-thirds will appear in the uppermost third of the range of modes, and so will be filtered at the next step.
A simple filter is implemented in the code below. For simplicity it zeroes the entire upper half of the wavenumbers (the region above the dashed red lines), which is even stricter than the 2/3 rule.
```python
def rhs(u, xi, filtr, equation='KdV'):
uhat = np.fft.fft(u)
if equation == 'Burgers':
return -u*np.real(np.fft.ifft(1j*xi*uhat)) \
+ np.real(np.fft.ifft(-xi**2*uhat))
elif equation == 'KdV':
return -u*np.real(np.fft.ifft(1j*xi*uhat*filtr)) \
- np.real(np.fft.ifft(-1j*xi**3*uhat))
def rk3(u,xi,rhs,filtr):
y2 = u + dt*rhs(u,xi,filtr)
y3 = 0.75*u + 0.25*(y2 + dt*rhs(y2,xi,filtr))
u_new = 1./3 * u + 2./3 * (y3 + dt*rhs(y3,xi,filtr))
return u_new
# Grid
m = 128
L = 2*np.pi
x = np.arange(-m/2,m/2)*(L/m)
xi = np.fft.fftfreq(m)*m*2*np.pi/L
filtr = np.ones_like(xi)
xi_max = np.max(np.abs(xi))
filtr[np.where(np.abs(xi)>xi_max*1./2)] = 0.
dt = 1.73/((m/2)**3)
A = 25; B = 16;
u = 3*A**2/np.cosh(0.5*(A*(x+2.)))**2 + 3*B**2/np.cosh(0.5*(B*(x+1)))**2
tmax = 0.006
uhat2 = np.abs(np.fft.fft(u))
num_plots = 50
nplt = np.floor((tmax/num_plots)/dt)
nmax = int(round(tmax/dt))
fig = plt.figure(figsize=(12,8))
axes = fig.add_subplot(211)
axes2 = fig.add_subplot(212)
line, = axes.plot(x,u,lw=3)
line2, = axes2.semilogy(xi,uhat2)
axes2.semilogy([xi_max/2.,xi_max/2.],[1.e-3,25000],'--r')
axes2.semilogy([-xi_max/2.,-xi_max/2.],[1.e-3,25000],'--r')
frames = [u.copy()]
tt = [0]
uuhat = [uhat2]
for n in range(1,nmax+1):
u_new = rk3(u,xi,rhs,filtr)
u = u_new.copy()
t = n*dt
# Plotting
if np.mod(n,nplt) == 0:
frames.append(u.copy())
tt.append(t)
uhat2 = np.abs(np.fft.fft(u))
uuhat.append(uhat2)
def plot_frame(i):
line.set_data(x,frames[i])
line2.set_data(np.sort(xi),uuhat[i][np.argsort(xi)])
axes.set_title('t= %.2e' % tt[i])
axes.set_xlim((-np.pi,np.pi))
axes.set_ylim((-100,3000))
anim = matplotlib.animation.FuncAnimation(fig, plot_frame,
frames=len(frames), interval=20)
HTML(anim.to_jshtml())
```
Notice how the solution remains stable, but small wiggles appear throughout the domain. These are a hint that something is not sufficiently resolved.
| 9695a926b9101383fd9c5182f7cfbe495c40ab02 | 19,726 | ipynb | Jupyter Notebook | PSPython_03-FFT-aliasing-filtering.ipynb | ketch/PseudoSpectralPython | 382894906cfa3ded504f7f3393e139957c147022 | [
"MIT"
] | 20 | 2016-07-11T07:52:30.000Z | 2022-03-15T00:29:15.000Z | PSPython_03-FFT-aliasing-filtering.ipynb | chenjied/PseudoSpectralPython | 382894906cfa3ded504f7f3393e139957c147022 | [
"MIT"
] | null | null | null | PSPython_03-FFT-aliasing-filtering.ipynb | chenjied/PseudoSpectralPython | 382894906cfa3ded504f7f3393e139957c147022 | [
"MIT"
] | 13 | 2017-02-08T00:58:59.000Z | 2022-03-27T17:29:09.000Z | 40.01217 | 631 | 0.593886 | true | 4,057 | Qwen/Qwen-72B | 1. YES
2. YES | 0.727975 | 0.879147 | 0.639997 | __label__eng_Latn | 0.992469 | 0.325259 |
```python
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
```
# Neural Networks
### Interpreting the linear function as a neural network
In the last example we tried to classify our data into two categories by maximising and minimizing the following function:
\begin{align}
F = f(\mathbf{X},\mathbf{Y})=A\mathbf{X}+B\mathbf{Y}+C
\end{align}
where, $\mathbf{X}$ and $\mathbf{Y}$ are the input vectors, and $A$, $B$ and $C$ are the parameters that we are trying to learn.
The function $f$ we have been using in the last example was not an arbitrary choice. This type of function is very common in machine learning, and it is used to model an element of a neural network called a neuron or a unit.
#### Origins of neural networks: Perceptrons
Perceptron networks are the precursor of the contemporary neural networks we use today. Let's see how they work. A __perceptron__ is a small computational unit that takes several binary inputs, $X_1, X_2,…$, and produces a single binary output. The image below illustrates a perceptron with 3 binary inputs, although the number of inputs is not restricted.
How can we use perceptrons to do something useful? Let's say that we are trying to formally decide whether to do something or not. For example, we might want to decide whether to go and see a movie, or not. We could for example think of three aspects that are relevant to the decision, and define them as questions. For example: is the weather nice or not, do your friends like it or not, do you like the main actor/actress or not. The answers to these questions could only be yes or no, or in computer terms 1 or 0. Such yes (1) or no (0) answers that are relevant to decide what the output will be are taken as inputs $X_1, X_2,…$ of the perceptron. The perceptron's binary output should evaluate the inputs and tell us whether to go to the movies (1) or not (0).
Certain aspects of our decision making might be more or less important than the others. For example, an aspect reflected in the input $X_1$ could be more important to you than the aspect $X_2$. You could, for example, think that what your friends think of the movie is twice as important than how the weather is. To implement this in the perceptron model, each input is multiplied by a real number that reflects its importance. If an aspect $X_1$ is more important then the aspect $X_2$, then $X_2$ should be multiplied by a larger number than $X_1$. The number with which we multiply an input is called a _weight_.
Finally, to decide whether the decision will be positive or negative, the weighted sum of all the inputs must be compared to some value which will be the final decision factor. We can temporarily call this value a _threshold_. Let's say that our threshold is the number `3.5`. Once we sum our decisions multiplied by their respective weights, we get a real number value. If the value is less or equal to `3.5`, the perceptron outputs `0`, which indicates we shouldn't do the thing. If the value is larger than `3.5`, the perceptron outputs `1` which indicates we should do the thing.
We can represent the output of a neuron algebraically:
\begin{eqnarray}
\mbox{output} & = & \left\{ \begin{array}{ll}
0 & \mbox{if } \sum_j W_j X_j \leq \text{ threshold} \\
1 & \mbox{if } \sum_j W_j X_j > \text{ threshold}
\end{array} \right.
\end{eqnarray}
Let's simplify the way we describe perceptrons, by making two notational changes. The first change is to write $\sum_j W_j X_j$ in terms of vectors as a dot product, $w \cdot x \equiv \sum_j w_j x_j$, where $w$ and $x$ are vectors whose components are the weights and inputs, respectively. The second change is to move the threshold to the other side of the inequality and to replace it by what's known as the perceptron's __bias__, $B \equiv
-\mbox{threshold}$. Using the bias instead of the threshold, the perceptron rule can be rewritten:
\begin{eqnarray}
\mbox{output} = \left\{
\begin{array}{ll}
0 & \mbox{if } W\cdot X + B \leq 0 \\
1 & \mbox{if } W\cdot X + B > 0
\end{array}
\right.
\end{eqnarray}
This change is reflected in the following diagram.
You can think of the bias as a measure of how easy it is to get the perceptron to output 1. For a perceptron with a really big positive bias, it's extremely easy for the perceptron to output 1, as the weighted sum plus a large positive value, easily gets larger than 0. But if the bias is very negative, then it's difficult for the perceptron to output a 1, as the large negative value will pull the output to be less than 0.
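Here is a minimal sketch of the perceptron rule with a bias, wired up for the movie example; the particular weights and bias are assumptions chosen for illustration.
```python
import numpy as np

def perceptron(x, w, b):
    # output 1 if the weighted sum plus bias is positive, else 0
    return 1 if np.dot(w, x) + b > 0 else 0

x = np.array([1, 0, 1])        # weather: yes, friends: no, actor: yes
w = np.array([1.0, 2.0, 1.0])  # friends' opinion counts double
b = -3.5                       # bias = -threshold
perceptron(x, w, b)            # 0: the weighted sum 2.0 does not clear the threshold
```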
If we would like to use perceptrons for classification like we did before, we run into a problem. To minimise or maximise a function of multiple inputs we need partial derivatives. The derivative shows us how the output of the function changes in terms of inputs if we slightly nudge one of the inputs. The problem with perceptrons is that their binary decision making makes them intrinsically insensitive to small changes of the inputs. To differentiate a function, after all, it needs to be smooth and continuous, not discrete. A tiny change of one of our inputs can cause the output of a neuron to abruptly change from 0 to 1 or vice versa. That change then propagates through the entire network of neurons and causes unpredictable changes.
We can overcome this problem by changing the architecture of our neural network. Instead of binary inputs and outputs, we can now think of all the values within the neuron as *real numbers*. Instead of thinking of the criteria for going to the movies in terms of a simple yes or no, we can instead think of them in terms of probabilities, where 0 is the least probable and 1 is the most probable, with all values in between available. This also applies to the output, which can no longer be expressed in binary terms. This part we replace with the so-called _activation function_, which maps its input smoothly to an output between 0 and 1. With this change, small changes in the neuron's weights and bias cause only a small change in the neuron's output. That's the crucial fact which will allow a network of neurons to learn.
### Classification by using a two-layer Neural Network
In our classification task, we used the function $F = f(\mathbf{X},\mathbf{Y})=A\mathbf{X}+B\mathbf{Y}+C$. Now you can see that this function is a simplified neuron without the activation function. Its inputs are $\mathbf{X}$ and $\mathbf{Y}$, their corresponding weights are $A$ and $B$, and $C$ represents the bias.
A neural network works by connecting multiple neurons in a network. Unlike a single neuron, a network of neurons is capable of creating smoother, non-linear decision boundaries between data points belonging to the different categories we are interested in predicting. Our network will contain 4 neurons, distributed in 3 layers. The first layer contains the inputs, and is called the __input layer__. The second layer, called a __hidden layer__, consists of 2 identical neurons, $n_1$ and $n_2$, each multiplying our input data points $\mathbf{X}$ and $\mathbf{Y}$ by a different set of parameters $A_i, B_i, C_i$, where $i=1,2$. The third layer, called the __output layer__, contains a single neuron $s$ that multiplies the outputs $N_1$ and $N_2$ of the previous neurons by a new set of parameters $A_3$, $B_3$, and $C_3$, and outputs the result $S$, as shown in the following diagram.
In machine learning this scheme is usually simplified as following:
As an activation function we will again be using sigmoid function $\sigma(x)$ defined as:
$$
\sigma(x) = \frac{1}{1+e^{-x}}
$$
and defined in code as:
```python
def sigmoid(x):
return 1 / (1 + np.exp(-x))
```
Let us construct a new dataset containing 2D points:
```python
data2 = np.array([[ 1.2, 0.7],
[-0.3,-0.5],
[ 3.0, 0.1],
[-0.1,-1.0],
[-0.0, 1.1],
[ 2.1,-1.3],
[ 3.1,-1.8],
[ 1.1,-0.1],
[ 1.5,-2.2],
[ 4.0,-1.0]])
```
With each point, there is a label `1` or `-1` is associated:
```python
labels2 = np.array([ 1,
-1,
1,
-1,
-1,
1,
-1,
1,
-1,
-1])
```
We can plot this data by using the function `plot_data`:
```python
def plot_data(data, labels):
fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(111)
ax.scatter(data[:,0], data[:,1], c=labels, s=50, cmap=plt.cm.bwr,zorder=50)
nudge = 0.08
for i in range(data.shape[0]):
d = data[i]
ax.annotate(f'{i}',(d[0]+nudge,d[1]+nudge))
ax.set_aspect('equal', 'datalim')
plt.show()
```
```python
plot_data(data2, labels2)
```
Here, the data is structured such that a linear classifier wouldn't be able to appropriately classify it.
Now let's initialise the parameters of the network. This could be done very efficiently with matrices (a matrix version is sketched after the forward pass below), but here, for the sake of clarity, we will initialise them manually with separate variables drawn from a standard normal distribution (mean 0, standard deviation 1):
```python
# generating a random data set
rnd = np.random.normal(size=9)
# hidden layer neuron 1
A1 = rnd[0] #weight for X
B1 = rnd[1] #weight for Y
C1 = rnd[2] #bias
# hidden layer neuron 2
A2 = rnd[3] #weight for X
B2 = rnd[4] #weight for Y
C2 = rnd[5] #bias
# output layer neuron
A3 = rnd[6] #weight for n1
B3 = rnd[7] #weight for n2
C3 = rnd[8] # bias
print (A1, B1, C1, A2, B2, C2, A3, B3, C3)
```
0.7566485514420692 0.002725779569172881 0.4076987155211042 0.10770685308094682 -0.8309211754969811 0.17043072781064242 -2.0621777482806545 0.620219798994706 1.43868497651903
#### Computing the forward pass
With the given weights, biases and data points, we need to compute the final output (activation) of the function $S = s(\mathbf{X},\mathbf{Y})$. We can accomplish that manually or by evaluating a function `forward_pass`:
```python
def forward_pass(A1,A2,A3,B1,B2,B3,C1,C2,C3,X,Y):
N1 = sigmoid(A1*X + B1*Y + C1) # 1st neuron
N2 = sigmoid(A2*X + B2*Y + C2) # 2nd neuron
S = A3*N1 + B3*N2 + C3 # final activation
return S
```
Let's do it manually first:
If we take the first data point `[ 1.2, 0.7]`:
```python
X, Y = data2[0]
X, Y
```
(1.2, 0.7)
```python
z1 = A1*X + B1*Y + C1
z2 = A2*X + B2*Y + C2
N1 = sigmoid(z1) # 1st neuron
N2 = sigmoid(z2) # 2nd neuron
S = A3*N1 + B3*N2 + C3 # final activation
```
the output will be:
```python
S
```
0.07875823603547283
We should get the same result by evaluating the function `forward_pass`:
```python
forward_pass(A1,A2,A3,B1,B2,B3,C1,C2,C3,X,Y)
```
0.07875823603547283
#### Computing the backward pass
By analysing the network diagram, let's compute the derivatives:
The simplest to compute are the derivatives with respect to the weights $A_3$, $B_3$, and the bias $C_3$: $\frac{\partial S}{\partial A_3}$, $\frac{\partial S}{\partial B_3}$, and $\frac{\partial S}{\partial C_3}$:
\begin{align*}
\frac{\partial S}{\partial A_3} &=N_1; &\frac{\partial S}{\partial B_3} &=N_2; &\frac{\partial S}{\partial C_3}&=1;\\\\
\end{align*}
```python
dA3, dB3, dC3 = N1, N2, 1
print (f'dA3: {dA3}\ndB3: {dB3}\ndC3: {dC3}')
```
dA3: 0.78877963667698
dB3: 0.4299718825501192
dC3: 1
In order to proceed to the weights and biases in the hidden layer, we need to compute the derivatives with respect to the activations $N_1$ and $N_2$: $\frac{\partial S}{\partial N_1}$ and $\frac{\partial S}{\partial N_2}$:
\begin{align*}
\frac{\partial S}{\partial N_1} &=A_3 &\frac{\partial S}{\partial N_2} &=B_3
\end{align*}
```python
dN1, dN2 = A3, B3
print (f'dN1: {dN1}\ndN2: {dN2}')
```
dN1: -2.0621777482806545
dN2: 0.620219798994706
To determine the derivatives with respect to the bias $C_1$ and the weights $A_1$ and $B_1$, we need to use the chain rule as before:
\begin{align*}
\frac{\partial S}{\partial C_1} &=\frac{\partial S}{\partial N_1}*\frac{\partial N_1}{\partial z_1}*1
&\frac{\partial S}{\partial A_1} &=\frac{\partial S}{\partial N_1}*\frac{\partial N_1}{\partial z_1}*1*\mathbf{X}; &\frac{\partial S}{\partial B_1} &=\frac{\partial S}{\partial N_1}*\frac{\partial N_1}{\partial z_1}*1*\mathbf{Y};\\\\
\end{align*}
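Here we use the sigmoid derivative identity (stated explicitly, since the substitution below relies on it):
$$
\frac{\partial N_1}{\partial z_1} = \sigma'(z_1) = \sigma(z_1)\big(1-\sigma(z_1)\big) = N_1(1-N_1),
$$
which turns the chain-rule expressions into: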
\begin{align*}
\frac{\partial S}{\partial C_1} &=\frac{\partial S}{\partial N_1}*N_1*(1-N_1);
&\frac{\partial S}{\partial A_1} &=\frac{\partial S}{\partial N_1}*N_1*(1-N_1)*\mathbf{X}; &\frac{\partial S}{\partial B_1} &=\frac{\partial S}{\partial N_1}*N_1*(1-N_1)*\mathbf{Y};\\\\
\end{align*}
As the term $\frac{\partial S}{\partial N_1}*N_1*(1-N_1)$ figures in all three partial derivatives, we can take advantage of that and factor it out into a common variable `dZ1`:
```python
dZ1 = dN1 * N1*(1-N1)
dZ1
```
-0.3435718487979292
This makes it easy to compute the derivatives $\frac{\partial S}{\partial C_1}$, $\frac{\partial S}{\partial A_1}$, and $\frac{\partial S}{\partial B_1}$:
```python
dA1 = dZ1*X
dB1 = dZ1*Y
dC1 = dZ1*1
print (f'dA1: {dA1}\ndB1: {dB1}\ndC1: {dC1}')
```
dA1: -0.412286218557515
dB1: -0.2405002941585504
dC1: -0.3435718487979292
As in the previous case, to determine the derivatives with respect to the bias $C_2$ and the weights $A_2$ and $B_2$, we need to use the chain rule:
\begin{align*}
\frac{\partial S}{\partial C_2} &=\frac{\partial S}{\partial N_2}*\frac{\partial N_2}{\partial z_2}*1;
&\frac{\partial S}{\partial A_2} &=\frac{\partial S}{\partial N_2}*\frac{\partial N_2}{\partial z_2}*1*\mathbf{X}; &\frac{\partial S}{\partial B_2} &=\frac{\partial S}{\partial N_2}*\frac{\partial N_2}{\partial z_2}*1*\mathbf{Y};\\\\
\end{align*}
\begin{align*}
\frac{\partial S}{\partial C_2} &=\frac{\partial S}{\partial N_2}*N_2*(1-N_2);
&\frac{\partial S}{\partial A_2} &=\frac{\partial S}{\partial N_2}*N_2*(1-N_2)*\mathbf{X}; &\frac{\partial S}{\partial B_2} &=\frac{\partial S}{\partial N_2}*N_2*(1-N_2)*\mathbf{Y};\\\\
\end{align*}
As the term $\frac{\partial S}{\partial N_2}*N_2*(1-N_2)$ figures in all three partial derivatives, we can take advantage of that and factor it out into a common variable `dZ2`:
```python
dZ2 = dN2 * N2*(1-N2)
dZ2
```
0.15201343078338642
This makes it easy to compute the derivatives $\frac{\partial S}{\partial C_2}$, $\frac{\partial S}{\partial A_2}$, and $\frac{\partial S}{\partial B_2}$:
```python
dA2 = dZ2*X
dB2 = dZ2*Y
dC2 = dZ2*1
print (f'dA2: {dA2}\ndB2: {dB2}\ndC2: {dC2}')
```
dA2: 0.18241611694006368
dB2: 0.10640940154837049
dC2: 0.15201343078338642
Depending on the given label, we will need to multiply our derivatives `dA1`, `dB1`, `dC1`, `dA2`, `dB2`, `dC2`, `dA3`, `dB3` and `dC3` by either `+1` or `-1`. In the algorithm, we will introduce a new variable `pull` that is set to `1`, `-1`, or `0` (the latter when no update is needed), as sketched below.
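A minimal sketch of that rule (a hypothetical helper of mine; the full algorithm below inlines the same logic rather than calling a function):
```python
def compute_pull(label, score):
    # Pull the score only when it is on the wrong side of the margin.
    if label == 1 and score < 1:
        return 1.0    # positive example scored too low: push the score up
    if label == -1 and score > -1:
        return -1.0   # negative example scored too high: push the score down
    return 0.0        # already classified with a comfortable margin
```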
Now let's test if we are able to successfully increase the function $S=s(\mathbf{X},\mathbf{Y})$. Our original score was:
```python
forward_pass(A1,A2,A3,B1,B2,B3,C1,C2,C3,X,Y)
```
0.07875823603547283
In order to gradually maximize the function $s$ towards our desired result, we need to update all of our parameters (weights and biases). This is done by adding to each parameter's value the value of its partial derivative. To achieve this gradually, in small steps, we multiply the value of the partial derivative by a small number (the step size).
```python
step_size = 0.01
A1 = A1 + dA1 * step_size
B1 = B1 + dB1 * step_size
C1 = C1 + dC1 * step_size
A2 = A2 + dA2 * step_size
B2 = B2 + dB2 * step_size
C2 = C2 + dC2 * step_size
A3 = A3 + dA3 * step_size
B3 = B3 + dB3 * step_size
C3 = C3 + dC3 * step_size
```
If we now evaluate the function $s(\mathbf{X},\mathbf{Y})$ with the updated parameters, we get:
```python
forward_pass(A1,A2,A3,B1,B2,B3,C1,C2,C3,X,Y)
```
0.10096610562552
This result should be better than the original result!
***
#### A simple neural network algorithm
Now we can put these elements together in a working algorithm:
```python
def train_neural_network(data, labels, step_size, reg_strength, no_loops, iter_info):
rnd = np.random.normal(size=9)
# hidden layer neuron 1
A1 = rnd[0] #weight for X
B1 = rnd[1] #weight for Y
C1 = rnd[2] #bias
# hidden layer neuron 2
A2 = rnd[3] #weight for X
B2 = rnd[4] #weight for Y
C2 = rnd[5] #bias
# output layer neuron
A3 = rnd[6] #weight for n1
B3 = rnd[7] #weight for n2
C3 = rnd[8] # bias
grid = create_meshgrid(data)
for i in range(no_loops):
# get a single random data point
index = np.random.randint(data.shape[0])
# get X, Y of that data point and its label
X,Y = data[index]
label = labels[index]
# forward pass
N1 = sigmoid(A1*X + B1*Y + C1) # 1st neuron
N2 = sigmoid(A2*X + B2*Y + C2) # 2nd neuron
S = A3*N1 + B3*N2 + C3 # final activation
pull = 0.0
if (label == 1 and S < 1):
pull = 1.0
if (label ==-1 and S > -1):
pull = -1.0
# backpropagating through the network
# output layer weights and biases
dA3, dB3, dC3 = pull*N1, pull*N2, pull*1
#second layer activations
dN1, dN2 = pull*A3, pull*B3
# intermediate values
dz1 = dN1 * N1 * (1 - N1)
dz2 = dN2 * N2 * (1 - N2)
# second layer neuron 1
dA1 = dz1*X
dB1 = dz1*Y
dC1 = dz1*1
# second layer neuron 2
dA2 = dz2*X
dB2 = dz2*Y
dC2 = dz2*1
#regularization
dA1 += -A1*reg_strength; dA2 += -A2*reg_strength; dA3 += -A3*reg_strength;
dB1 += -B1*reg_strength; dB2 += -B2*reg_strength; dB3 += -B3*reg_strength;
# finally, do the parameter update
A1 += step_size * dA1;
B1 += step_size * dB1;
C1 += step_size * dC1;
A2 += step_size * dA2;
B2 += step_size * dB2;
C2 += step_size * dC2;
A3 += step_size * dA3;
B3 += step_size * dB3;
C3 += step_size * dC3;
if (i%iter_info==0):
accuracy = eval_accuracy_neural((A1,A2,A3,B1,B2,B3,C1,C2,C3),data,labels)
plot_neural_simple((A1,A2,A3,B1,B2,B3,C1,C2,C3),grid, data, labels, i, accuracy)
return (A1, A2, A3, B1, B2, B3, C1, C2, C3)
def create_meshgrid(data):
h = 0.02
x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1
y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
return (xx,yy,np.ones(xx.shape))
def eval_accuracy_neural(params, data, labels):
A1, A2, A3, B1, B2, B3, C1, C2, C3 = params
num_correct = 0;
data_len = data.shape[0]
for i in range(data_len):
X,Y = data[i]
true_label = labels[i]
score = forward_pass(A1, A2, A3, B1, B2, B3, C1, C2, C3, X, Y)
predicted_label = 1 if score > 0 else -1
if (predicted_label == true_label):
num_correct += 1
return num_correct / data_len
def plot_neural_simple(params, grid,data, labels, iteration, accuracy):
nudge = 0.06
A1, A2, A3, B1, B2, B3, C1, C2, C3 = params
xx,yy,Z = grid
for i in range(xx.shape[0]): # row
for j in range(yy.shape[1]): #column
X, Y = xx[i][j],yy[i][j]
score = forward_pass(A1, A2, A3, B1, B2, B3, C1, C2, C3, X, Y)
score = 1 if score > 0 else -1
Z[i][j] = score
fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(111)
plt.title(f'accuracy at the iteration {iteration}: {accuracy}')
ax.contourf(xx, yy, Z, cmap=plt.cm.binary, alpha=0.1, zorder=15)
ax.scatter(data[:, 0], data[:, 1], c=labels, s=50, cmap=plt.cm.bwr,zorder=50)
ax.set_aspect('equal')
for i in range(data.shape[0]):
d = data[i]
ax.annotate(f'{i}',(d[0]+nudge,d[1]+nudge))
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
```
```python
train_2 = train_neural_network(data2, labels2, 0.1, 0.001, 4001, 800)
```
```python
a1,a2,a3,b1,b2,b3,c1,c2,c3 = train_2
for i, ((x,y), label) in enumerate(zip(data2, labels2)):
s = forward_pass(a1,a2,a3,b1,b2,b3,c1,c2,c3,x,y)
s = 1 if s > 0 else -1
print (f'data point {i}: real label : {label}, pred. label: {s}, {(s==label)}')
```
data point 0: real label : 1, pred. label: 1, True
data point 1: real label : -1, pred. label: -1, True
data point 2: real label : 1, pred. label: 1, True
data point 3: real label : -1, pred. label: -1, True
data point 4: real label : -1, pred. label: -1, True
data point 5: real label : 1, pred. label: 1, True
data point 6: real label : -1, pred. label: -1, True
data point 7: real label : 1, pred. label: 1, True
data point 8: real label : -1, pred. label: -1, True
data point 9: real label : -1, pred. label: -1, True
# Fixed Coefficients Random Utility (Demand) Estimation
This notebook reviews the estimation and inference of a **linear** random utility model when the agent is facing a finite number of alternatives.
## Introduction
Consider a set of $J+1$ alternatives $\{0,1,2,...,J\}$. The utility that decision maker (DM) $i$ receives from buying product $j$ is
$$ u_{ij} = x_{ij}' \beta -\alpha p_j + \xi_j+ \epsilon_{ij}.$$
The DM maximizes her utility
$$y_i =\arg \min_{j} u_{ij}.$$
We now assume that $\epsilon_{ij}$ are $i.i.d.$ across DMs and across alternatives. In addition, we assume that $\epsilon_{ij}$ are distributed (standard) T1EV. We can write the following Conditional Choice Probabilities (CCP):
$$ Pr(y_i = j) = \frac{e^{x_{ij}'\beta - \alpha p_j + \xi_j}}{\sum_{k=0}^{J}e^{x_{ik}'\beta - \alpha p_k + \xi_k}}.$$
**Aggregate, market-level data** In Berry, Levinson, and Pakes (1995) and in many other empirical work following BLP the researcher observes only market-level data. That means that the characteristics vector of the alternatives is not indexed by $i$. The variation in product characteristics are unobserved and get absorbed by the error term $\epsilon$. The choice probabilities become
$$ Pr(y = j|x_j, \xi_j; \beta) \ \text{for} \ j=0,,,J = \frac{e^{x_{j}'\beta}}{\sum_{k=0}^{J}e^{x_{k}'\beta}}.$$
The left-hand side is simply the market share of product/alternative $j$. We will denote these market shares as $s_0,s_1,...,s_{J}$. The CCPs above all have the same denominator. Moreover, for identification reasons, we normalize the mean utility of the outside good to zero, $\delta_0 = 0$. Therefore,
$$ \frac{s_j}{s_0} = e^{\delta_j}.$$
Using Berry's inversion (1994) and taking logs of both sides gives us:
$$ \text{ln}(s_j) - \text{ln} (s_0) = \delta_j \equiv x_j' \beta - \alpha p_j + \xi_j \qquad \text{(eq A)} $$
where
- Mean utility level $\delta_j$ contains the product characteristic $x_j$, $price_j$ and the aggregate error $\xi_j$.
- Econometricians observe aggregate market share for good $j$, outside good, product characteristics $x_j$, price $p_j$.
- $\delta_j$ is uniquely identified directly from a simple algebraic calculation involving market share.
- This is an OLS regression. But price is correlated with the unobserved product quality $\xi_j$ (firms set prices knowing $\xi_j$), so we need instruments for the price!
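As a quick numerical illustration (my own example, not from the original): if product $j$ has a 10% market share and the outside good a 50% share, then $\delta_j = \ln(0.10) - \ln(0.50) = \ln(0.2) \approx -1.61$.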
# For the rest of this notebook, we will introduce two empirical examples:
## A. Estimate logit demand using BLP (1995)'s aggregate market-level data.
- A1. Introduction of car data from BLP(1995)
- A2. Data cleaning
- A3. Run linear regression using eq(A)
- A4. Run 2SLS using instruments
## B. Monte Carlo Example: estimate logit-demand after solving Nash-Bertrand game
- B1. Data Generating Process
- B2. Obtain (numerically) equilibrium price and market shares
- B3. Regress using OLS / IV (cost shifters and competitors' product characteristics as instruments for price; Table 1, Berry (1994))
### A1. Introduction of car data from BLP (1995)
#### As an empirical study, we will replicate Table 3, as in BLP (1995).
- We obtain product characteristics from the annual issues of the Automotive News Market Data Book; the data is provided as BLP.csv.
- Data includes the number of cylinders, number of doors, weight, engine displacement, horsepower, length, width, wheelbase, EPA miles per gallon rating (MPG), and dummy variables for whether the car has front-wheel drive, automatic transmission, power steering, and air conditioning.
- The data set includes this information on observed products from 1971-1990.
- The price variable is list retail price (in \$1000's). Prices are in 1983 dollars. (We used the Consumer Price Index to deflate.)
- The sales variable corresponds to U.S. sales (in 1000's) by nameplate.
- The product characteristics correspond to the characteristics of the base model for the given nameplate.
- To capture the cost of driving, we include miles per dollar (MP\$), calculated as MPG divided by price per gallon. (Notice that both MPG and price per gallon are provided.)
- In terms of potential market size, there is no formal definition. We use the yearly number of households in the U.S. from the Statistical Abstract of the U.S.
- We assume that each model is produced by a single-product firm, to avoid the multi-product pricing problem.
```julia
# Query / DataFramesMeta is used for cleaning dataset
# FixedEffectModels is used for running regression
# Distributions, Random, NLsolve are used for Monte Carlo study
using CSV, DataFrames, Query, DataFramesMeta, FixedEffectModels, Distributions, Random, NLsolve
```
```julia
ENV["COLUMNS"],ENV["LINES"] = 350,50 #This is not specific to Julia, it's a Jupyter notebook environment variable
#dataset = CSV.read("c:\\data\\BLP.csv"); # <---- change this
dataset = CSV.read("/Users/jinkim/Dropbox/2020 Summer/dropbox_RA_work/Berry/BLP.csv")
first(dataset,10)
```
┌ Warning: `CSV.read(input; kw...)` is deprecated in favor of `using DataFrames; CSV.read(input, DataFrame; kw...)
│ caller = read(::String) at CSV.jl:40
└ @ CSV /Users/jinkim/.julia/packages/CSV/MKemC/src/CSV.jl:40
<table class="data-frame"><thead><tr><th></th><th>name</th><th>id</th><th>year</th><th>cy</th><th>dr</th><th>at</th><th>ps</th><th>air</th><th>drv</th><th>p</th><th>wt</th><th>dom</th><th>disp</th><th>hp</th><th>lng</th><th>wdt</th><th>wb</th><th>mpg</th><th>q</th><th>firmids</th><th>euro</th><th>reli</th><th>dfi</th><th>hp2wt</th><th>size</th><th>japan</th><th>cpi</th><th>gasprice</th><th>nb_hh</th><th>cat</th><th>door2</th><th>door3</th><th>door4</th><th>door5</th><th>sampleweight</th><th>mpgd</th><th>dpm</th><th>model</th></tr><tr><th></th><th>String</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Float64</th><th>Int64</th><th>Int64</th><th>Float64</th><th>Int64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Float64</th><th>Float64</th><th>Int64</th><th>Float64</th><th>Float64</th><th>Int64</th><th>String</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Int64</th><th>Float64</th><th>Float64</th><th>String</th></tr></thead><tbody><p>10 rows × 38 columns</p><tr><th>1</th><td>ACINTE</td><td>3735</td><td>1986</td><td>4</td><td>3</td><td>0</td><td>1</td><td>0</td><td>1</td><td>8.48358</td><td>2249</td><td>0</td><td>97.0</td><td>113</td><td>168.5</td><td>65.6</td><td>96.5</td><td>27.0</td><td>27.807</td><td>3</td><td>0</td><td>5</td><td>0</td><td>0.502446</td><td>1.10536</td><td>1</td><td>109.6</td><td>0.826794</td><td>88458</td><td>compact</td><td>0</td><td>1</td><td>0</td><td>0</td><td>27807</td><td>32.6562</td><td>0.030622</td><td>ACINTE1986</td></tr><tr><th>2</th><td>ACINTE</td><td>4030</td><td>1987</td><td>4</td><td>3</td><td>0</td><td>1</td><td>0</td><td>1</td><td>8.6787</td><td>2326</td><td>0</td><td>97.0</td><td>113</td><td>168.5</td><td>65.6</td><td>96.5</td><td>26.0</td><td>54.757</td><td>3</td><td>0</td><td>5</td><td>0</td><td>0.485813</td><td>1.10536</td><td>1</td><td>113.6</td><td>0.818662</td><td>89479</td><td>compact</td><td>0</td><td>1</td><td>0</td><td>0</td><td>54757</td><td>31.7591</td><td>0.031487</td><td>ACINTE1987</td></tr><tr><th>3</th><td>ACINTE</td><td>4327</td><td>1988</td><td>4</td><td>5</td><td>0</td><td>1</td><td>0</td><td>1</td><td>9.55199</td><td>2390</td><td>0</td><td>97.0</td><td>118</td><td>171.5</td><td>65.6</td><td>99.2</td><td>17.0</td><td>57.468</td><td>3</td><td>0</td><td>5</td><td>0</td><td>0.493724</td><td>1.12504</td><td>1</td><td>118.3</td><td>0.785362</td><td>91066</td><td>compact</td><td>0</td><td>0</td><td>0</td><td>1</td><td>57468</td><td>21.6461</td><td>0.0461978</td><td>ACINTE1988</td></tr><tr><th>4</th><td>ACINTE</td><td>4421</td><td>1989</td><td>4</td><td>2</td><td>0</td><td>1</td><td>0</td><td>1</td><td>10.5403</td><td>2313</td><td>0</td><td>97.0</td><td>118</td><td>168.7</td><td>65.6</td><td>96.5</td><td>26.0</td><td>77.4</td><td>3</td><td>0</td><td>5</td><td>0</td><td>0.51016</td><td>1.10667</td><td>1</td><td>124.0</td><td>0.810081</td><td>92830</td><td>compact</td><td>1</td><td>0</td><td>0</td><td>0</td><td>77400</td><td>32.0956</td><td>0.0311569</td><td>ACINTE1989</td></tr><tr><th>5</th><td>ACINTE</td><td>5421</td><td>1990</td><td>4</td><td>2</td><td>0</td><td>1</td><td>0</td><td>1</td><td>9.14308</td><td>2549</td><td>0</td><td>1.8</td><td>130</td><td>172.9</td><td>67.4</td><td>100.4</td><td>21.0</td><td>83.599</td><td>3</td><td>0</td><td>5</td><td>0</td><td>0.510004</td><td>1.16535</td><td>1</td><td>130.7</td><td>0.875542</td><td>93347</td><td>compact</td><
td>1</td><td>0</td><td>0</td><td>0</td><td>83599</td><td>23.9851</td><td>0.0416925</td><td>ACINTE1990</td></tr><tr><th>6</th><td>ACLEGE</td><td>3736</td><td>1986</td><td>6</td><td>4</td><td>0</td><td>1</td><td>1</td><td>1</td><td>17.6077</td><td>2970</td><td>0</td><td>152.0</td><td>151</td><td>189.4</td><td>68.3</td><td>108.6</td><td>20.0</td><td>25.062</td><td>3</td><td>0</td><td>5</td><td>0</td><td>0.508417</td><td>1.2936</td><td>1</td><td>109.6</td><td>0.826794</td><td>88458</td><td>midsize</td><td>0</td><td>0</td><td>1</td><td>0</td><td>25062</td><td>24.1898</td><td>0.0413397</td><td>ACLEGE1986</td></tr><tr><th>7</th><td>ACLEGE</td><td>4031</td><td>1987</td><td>6</td><td>4</td><td>0</td><td>1</td><td>1</td><td>1</td><td>17.8327</td><td>3078</td><td>0</td><td>152.0</td><td>151</td><td>189.4</td><td>68.3</td><td>108.7</td><td>20.0</td><td>54.713</td><td>3</td><td>0</td><td>5</td><td>0</td><td>0.490578</td><td>1.2936</td><td>1</td><td>113.6</td><td>0.818662</td><td>89479</td><td>midsize</td><td>0</td><td>0</td><td>1</td><td>0</td><td>54713</td><td>24.4301</td><td>0.0409331</td><td>ACLEGE1987</td></tr><tr><th>8</th><td>ACLEGE</td><td>4328</td><td>1988</td><td>6</td><td>4</td><td>0</td><td>1</td><td>1</td><td>1</td><td>17.7599</td><td>3067</td><td>0</td><td>163.2</td><td>161</td><td>189.4</td><td>68.3</td><td>108.7</td><td>19.0</td><td>70.77</td><td>3</td><td>0</td><td>5</td><td>0</td><td>0.524943</td><td>1.2936</td><td>1</td><td>118.3</td><td>0.785362</td><td>91066</td><td>midsize</td><td>0</td><td>0</td><td>1</td><td>0</td><td>70770</td><td>24.1927</td><td>0.0413348</td><td>ACLEGE1988</td></tr><tr><th>9</th><td>ACLEGE</td><td>4422</td><td>1989</td><td>6</td><td>4</td><td>0</td><td>1</td><td>1</td><td>1</td><td>18.2258</td><td>3170</td><td>0</td><td>163.0</td><td>160</td><td>190.6</td><td>68.9</td><td>108.7</td><td>19.0</td><td>64.6</td><td>3</td><td>0</td><td>5</td><td>0</td><td>0.504732</td><td>1.31323</td><td>1</td><td>124.0</td><td>0.810081</td><td>92830</td><td>midsize</td><td>0</td><td>0</td><td>1</td><td>0</td><td>64600</td><td>23.4545</td><td>0.0426358</td><td>ACLEGE1989</td></tr><tr><th>10</th><td>ACLEGE</td><td>5422</td><td>1990</td><td>6</td><td>2</td><td>0</td><td>1</td><td>1</td><td>1</td><td>18.9441</td><td>3139</td><td>0</td><td>2.7</td><td>160</td><td>188.0</td><td>68.7</td><td>106.5</td><td>19.0</td><td>53.666</td><td>3</td><td>0</td><td>5</td><td>0</td><td>0.509716</td><td>1.29156</td><td>1</td><td>130.7</td><td>0.875542</td><td>93347</td><td>midsize</td><td>1</td><td>0</td><td>0</td><td>0</td><td>53666</td><td>21.7008</td><td>0.0460811</td><td>ACLEGE1990</td></tr></tbody></table>
## Variable name/short description
#### For detailed description, please see BLP(1995) section 7.1. (Data section)
| Variable name | Description |
|------------------|-----------------------------|
| name | Car |
| id | Car ID |
| year             | Year                        |
| cy | Cylinder |
| dr | Number of Doors |
| at | Automatic Transmission |
| ps | Power Steering |
| air | Air Conditioning |
| drv | Front Wheel Drive |
| p | Price (in \$ 1000's) |
| wt | Weight |
| dom | Domestic |
| disp | Engine Displacement |
| hp | Horse Power |
| lng | Length |
| wdt | Width |
| wb | Wheelbase |
| mpg | Miles per Gallon |
| q | Quantities |
| firmids | Firm ID |
| euro | Indicator for EURO car |
| reli | Rating |
| dfi | Indicator for Digital Fuel Injection |
| hp2wt | HP to Weight (ratio) |
| size | Length X Width (/1000) |
| japan | Japan |
| cpi | CPI |
| gasprice | Gas Price per gallon |
| nb_hh | Size of Household (Potential Market Size) |
| cat              | Size category (used for nested logit) |
| door2 | I(door=2) |
| door3 | I(door=3) |
| door4 | I(door=4) |
| door5 | I(door=5) |
| sampleweight | Weights |
| mpgd | Miles per gallon (imputed from gas prices) |
| dpm | Dollars per miles (imputed from gas prices) |
| modelid | Car name |
```julia
#summary statistics
describe(dataset,:all)
```
<table class="data-frame"><thead><tr><th></th><th>variable</th><th>mean</th><th>std</th><th>min</th><th>q25</th><th>median</th><th>q75</th><th>max</th><th>nunique</th><th>nmissing</th><th>first</th><th>last</th><th>eltype</th></tr><tr><th></th><th>Symbol</th><th>Union…</th><th>Union…</th><th>Any</th><th>Union…</th><th>Union…</th><th>Union…</th><th>Any</th><th>Union…</th><th>Nothing</th><th>Any</th><th>Any</th><th>DataType</th></tr></thead><tbody><p>38 rows × 13 columns</p><tr><th>1</th><td>name</td><td></td><td></td><td>ACINTE</td><td></td><td></td><td></td><td>YUYUGO</td><td>542</td><td></td><td>ACINTE</td><td>YUYUGO</td><td>String</td></tr><tr><th>2</th><td>id</td><td>2560.89</td><td>1517.55</td><td>129</td><td>1309.0</td><td>2325.0</td><td>3927.0</td><td>5592</td><td></td><td></td><td>3735</td><td>4506</td><td>Int64</td></tr><tr><th>3</th><td>year</td><td>1981.54</td><td>5.74082</td><td>1971</td><td>1977.0</td><td>1982.0</td><td>1987.0</td><td>1990</td><td></td><td></td><td>1986</td><td>1989</td><td>Int64</td></tr><tr><th>4</th><td>cy</td><td>5.3207</td><td>1.55712</td><td>0</td><td>4.0</td><td>4.0</td><td>6.0</td><td>12</td><td></td><td></td><td>4</td><td>4</td><td>Int64</td></tr><tr><th>5</th><td>dr</td><td>3.29409</td><td>0.965154</td><td>2</td><td>2.0</td><td>4.0</td><td>4.0</td><td>5</td><td></td><td></td><td>3</td><td>2</td><td>Int64</td></tr><tr><th>6</th><td>at</td><td>0.326116</td><td>0.468896</td><td>0</td><td>0.0</td><td>0.0</td><td>1.0</td><td>1</td><td></td><td></td><td>0</td><td>0</td><td>Int64</td></tr><tr><th>7</th><td>ps</td><td>0.533153</td><td>0.499012</td><td>0</td><td>0.0</td><td>1.0</td><td>1.0</td><td>1</td><td></td><td></td><td>1</td><td>0</td><td>Int64</td></tr><tr><th>8</th><td>air</td><td>0.241768</td><td>0.428251</td><td>0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>1</td><td></td><td></td><td>0</td><td>0</td><td>Int64</td></tr><tr><th>9</th><td>drv</td><td>0.354984</td><td>0.478616</td><td>0</td><td>0.0</td><td>0.0</td><td>1.0</td><td>1</td><td></td><td></td><td>1</td><td>1</td><td>Int64</td></tr><tr><th>10</th><td>p</td><td>11.7614</td><td>8.64378</td><td>3.39327</td><td>6.71375</td><td>8.72865</td><td>13.0741</td><td>68.5968</td><td></td><td></td><td>8.48358</td><td>3.50726</td><td>Float64</td></tr><tr><th>11</th><td>wt</td><td>2930.47</td><td>722.366</td><td>1445</td><td>2375.0</td><td>2861.0</td><td>3383.0</td><td>5362</td><td></td><td></td><td>2249</td><td>1832</td><td>Int64</td></tr><tr><th>12</th><td>dom</td><td>0.589986</td><td>0.491947</td><td>0</td><td>0.0</td><td>1.0</td><td>1.0</td><td>1</td><td></td><td></td><td>0</td><td>0</td><td>Int64</td></tr><tr><th>13</th><td>disp</td><td>177.746</td><td>102.032</td><td>1.0</td><td>109.0</td><td>151.0</td><td>231.0</td><td>500.0</td><td></td><td></td><td>97.0</td><td>68.0</td><td>Float64</td></tr><tr><th>14</th><td>hp</td><td>117.005</td><td>46.6881</td><td>39</td><td>88.0</td><td>105.0</td><td>140.0</td><td>365</td><td></td><td></td><td>113</td><td>52</td><td>Int64</td></tr><tr><th>15</th><td>lng</td><td>186.705</td><td>20.0657</td><td>139.0</td><td>172.2</td><td>185.0</td><td>200.0</td><td>236.0</td><td></td><td></td><td>168.5</td><td>139.0</td><td>Float64</td></tr><tr><th>16</th><td>wdt</td><td>69.6557</td><td>5.29795</td><td>53.0</td><td>65.9</td><td>69.0</td><td>73.0</td><td>81.0</td><td></td><td></td><td>65.6</td><td>60.7</td><td>Float64</td></tr><tr><th>17</th><td>wb</td><td>104.616</td><td>9.817</td><td>14.3</td><td>97.0</td><td>103.4</td><td>110.8</td><td>212.1</td><td></td><td></td><td>96.5</td><t
d>84.6</td><td>Float64</td></tr><tr><th>18</th><td>mpg</td><td>20.9964</td><td>5.8107</td><td>9.13</td><td>17.0</td><td>20.0</td><td>25.0</td><td>53.0</td><td></td><td></td><td>27.0</td><td>28.0</td><td>Float64</td></tr><tr><th>19</th><td>q</td><td>78.804</td><td>89.0799</td><td>0.049</td><td>15.603</td><td>47.35</td><td>109.002</td><td>646.526</td><td></td><td></td><td>27.807</td><td>10.5</td><td>Float64</td></tr><tr><th>20</th><td>firmids</td><td>13.7438</td><td>6.25909</td><td>1</td><td>8.0</td><td>16.0</td><td>19.0</td><td>26</td><td></td><td></td><td>3</td><td>23</td><td>Int64</td></tr><tr><th>21</th><td>euro</td><td>0.23816</td><td>0.426053</td><td>0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>1</td><td></td><td></td><td>0</td><td>1</td><td>Int64</td></tr><tr><th>22</th><td>reli</td><td>3.0433</td><td>1.29108</td><td>1</td><td>2.0</td><td>3.0</td><td>4.0</td><td>5</td><td></td><td></td><td>5</td><td>1</td><td>Int64</td></tr><tr><th>23</th><td>dfi</td><td>0.0135318</td><td>0.115563</td><td>0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>1</td><td></td><td></td><td>0</td><td>0</td><td>Int64</td></tr><tr><th>24</th><td>hp2wt</td><td>0.394375</td><td>0.0966429</td><td>0.170455</td><td>0.336585</td><td>0.375049</td><td>0.427509</td><td>0.947581</td><td></td><td></td><td>0.502446</td><td>0.283843</td><td>Float64</td></tr><tr><th>25</th><td>size</td><td>1.31016</td><td>0.237637</td><td>0.756</td><td>1.13128</td><td>1.26983</td><td>1.4527</td><td>1.888</td><td></td><td></td><td>1.10536</td><td>0.84373</td><td>Float64</td></tr><tr><th>26</th><td>japan</td><td>0.171854</td><td>0.377338</td><td>0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>1</td><td></td><td></td><td>1</td><td>0</td><td>Int64</td></tr><tr><th>27</th><td>cpi</td><td>88.3488</td><td>28.8772</td><td>40.5</td><td>60.6</td><td>96.5</td><td>113.6</td><td>130.7</td><td></td><td></td><td>109.6</td><td>124.0</td><td>Float64</td></tr><tr><th>28</th><td>gasprice</td><td>1.02727</td><td>0.206509</td><td>0.785362</td><td>0.826794</td><td>1.01788</td><td>1.13523</td><td>1.47121</td><td></td><td></td><td>0.826794</td><td>0.810081</td><td>Float64</td></tr><tr><th>29</th><td>nb_hh</td><td>81539.0</td><td>8770.71</td><td>64778</td><td>74142.0</td><td>83527.0</td><td>89479.0</td><td>93347</td><td></td><td></td><td>88458</td><td>92830</td><td>Int64</td></tr><tr><th>30</th><td>cat</td><td></td><td></td><td>compact</td><td></td><td></td><td></td><td>midsize</td><td>3</td><td></td><td>compact</td><td>compact</td><td>String</td></tr><tr><th>31</th><td>door2</td><td>0.341903</td><td>0.474454</td><td>0</td><td>0.0</td><td>0.0</td><td>1.0</td><td>1</td><td></td><td></td><td>0</td><td>1</td><td>Int64</td></tr><tr><th>32</th><td>door3</td><td>0.0419486</td><td>0.200517</td><td>0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>1</td><td></td><td></td><td>1</td><td>0</td><td>Int64</td></tr><tr><th>33</th><td>door4</td><td>0.596301</td><td>0.490749</td><td>0</td><td>0.0</td><td>1.0</td><td>1.0</td><td>1</td><td></td><td></td><td>0</td><td>0</td><td>Int64</td></tr><tr><th>34</th><td>door5</td><td>0.0198466</td><td>0.139505</td><td>0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>1</td><td></td><td></td><td>0</td><td>0</td><td>Int64</td></tr><tr><th>35</th><td>sampleweight</td><td>78804.0</td><td>89079.9</td><td>49</td><td>15603.0</td><td>47350.0</td><td>109002.0</td><td>646526</td><td></td><td></td><td>27807</td><td>10500</td><td>Int64</td></tr><tr><th>36</th><td>mpgd</td><td>21.1248</td><td>6.94301</td><td>8.61178</td><td>15.9838</td><td>20.5059</td><td>24.8007</td><td
>65.4256</td><td></td><td></td><td>32.6562</td><td>34.5645</td><td>Float64</td></tr><tr><th>37</th><td>dpm</td><td>0.052389</td><td>0.0167817</td><td>0.0152845</td><td>0.0403215</td><td>0.0487664</td><td>0.0625634</td><td>0.11612</td><td></td><td></td><td>0.030622</td><td>0.0289315</td><td>Float64</td></tr><tr><th>38</th><td>model</td><td></td><td></td><td>ACINTE1986</td><td></td><td></td><td></td><td>YUYUGO1989</td><td>2172</td><td></td><td>ACINTE1986</td><td>YUYUGO1989</td><td>String</td></tr></tbody></table>
```julia
### Replicate Table 1: Summary Stats
using Statistics
#Add or substrct column names as needed:
cnames = [:year,:cy,:dr,:at,:ps,:air,:drv,:p,:wt,:dom,:disp,:hp,:lng,:wdt,:wb,:mpg]
aggregate(dataset[!,cnames], [:year], mean)
```
┌ Warning: `aggregate(d, cols, f, sort=false, skipmissing=false)` is deprecated. Instead use combine(groupby(d, cols, sort=false, skipmissing=false), names(d, Not(cols)) .=> f)`
│ caller = top-level scope at In[4]:6
└ @ Core In[4]:6
<table class="data-frame"><thead><tr><th></th><th>year</th><th>cy_mean</th><th>dr_mean</th><th>at_mean</th><th>ps_mean</th><th>air_mean</th><th>drv_mean</th><th>p_mean</th><th>wt_mean</th><th>dom_mean</th><th>disp_mean</th><th>hp_mean</th><th>lng_mean</th><th>wdt_mean</th><th>wb_mean</th><th>mpg_mean</th></tr><tr><th></th><th>Int64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th><th>Float64</th></tr></thead><tbody><p>20 rows × 16 columns</p><tr><th>1</th><td>1986</td><td>4.94615</td><td>3.43846</td><td>0.330769</td><td>0.684615</td><td>0.323077</td><td>0.576923</td><td>11.7826</td><td>2733.55</td><td>0.569231</td><td>160.212</td><td>110.192</td><td>180.583</td><td>67.9892</td><td>101.829</td><td>23.5546</td></tr><tr><th>2</th><td>1987</td><td>5.0</td><td>3.43357</td><td>0.321678</td><td>0.72028</td><td>0.391608</td><td>0.615385</td><td>13.4363</td><td>2785.15</td><td>0.538462</td><td>163.357</td><td>117.028</td><td>180.416</td><td>67.9965</td><td>101.909</td><td>22.6503</td></tr><tr><th>3</th><td>1988</td><td>5.09333</td><td>3.44</td><td>0.4</td><td>0.766667</td><td>0.446667</td><td>0.613333</td><td>14.8857</td><td>2847.8</td><td>0.533333</td><td>163.836</td><td>125.053</td><td>181.756</td><td>68.2853</td><td>102.443</td><td>21.88</td></tr><tr><th>4</th><td>1989</td><td>4.9932</td><td>2.9932</td><td>0.37415</td><td>0.795918</td><td>0.503401</td><td>0.680272</td><td>16.6905</td><td>2895.97</td><td>0.496599</td><td>163.503</td><td>133.837</td><td>181.348</td><td>68.5537</td><td>103.037</td><td>21.7891</td></tr><tr><th>5</th><td>1990</td><td>5.0687</td><td>2.85496</td><td>0.40458</td><td>0.816794</td><td>0.458015</td><td>0.717557</td><td>14.0384</td><td>2923.47</td><td>0.51145</td><td>2.7084</td><td>133.588</td><td>182.156</td><td>68.655</td><td>101.843</td><td>21.8168</td></tr><tr><th>6</th><td>1971</td><td>6.04348</td><td>3.36957</td><td>0.152174</td><td>0.163043</td><td>0.0</td><td>0.0</td><td>8.85633</td><td>3176.08</td><td>0.684783</td><td>245.548</td><td>171.891</td><td>195.446</td><td>72.9891</td><td>110.787</td><td>17.1995</td></tr><tr><th>7</th><td>1975</td><td>5.86022</td><td>3.26882</td><td>0.311828</td><td>0.387097</td><td>0.107527</td><td>0.0752688</td><td>9.64184</td><td>3328.6</td><td>0.645161</td><td>238.5</td><td>119.473</td><td>195.312</td><td>71.7419</td><td>108.129</td><td>16.2688</td></tr><tr><th>8</th><td>1976</td><td>5.62626</td><td>3.28283</td><td>0.282828</td><td>0.343434</td><td>0.10101</td><td>0.0909091</td><td>9.49007</td><td>3159.45</td><td>0.616162</td><td>218.66</td><td>112.889</td><td>192.515</td><td>70.8889</td><td>106.833</td><td>18.5556</td></tr><tr><th>9</th><td>1977</td><td>5.61053</td><td>3.24211</td><td>0.294737</td><td>0.378947</td><td>0.0842105</td><td>0.136842</td><td>9.86429</td><td>3070.06</td><td>0.610526</td><td>204.085</td><td>109.463</td><td>190.747</td><td>70.4947</td><td>106.038</td><td>19.7789</td></tr><tr><th>10</th><td>1980</td><td>5.23301</td><td>3.35922</td><td>0.252427</td><td>0.359223</td><td>0.174757</td><td>0.252427</td><td>10.7269</td><td>2776.99</td><td>0.601942</td><td>179.446</td><td>99.699</td><td>185.398</td><td>69.4854</td><td>103.684</td><td>21.6699</td></tr><tr><th>11</th><td>1981</td><td>5.16379</td><td>3.21552</td><td>0.336207</td><td>0.491379</td><td>0.241379</td><td>0.232759</td><td>13.0352</td><td>2819.37</td><td>0.517241</td><td>171.325</td><td
>102.017</td><td>186.56</td><td>69.3276</td><td>103.929</td><td>22.5</td></tr><tr><th>12</th><td>1982</td><td>4.98182</td><td>3.32727</td><td>0.336364</td><td>0.490909</td><td>0.236364</td><td>0.381818</td><td>11.5913</td><td>2747.25</td><td>0.6</td><td>158.536</td><td>98.3</td><td>185.273</td><td>68.8636</td><td>103.151</td><td>23.7727</td></tr><tr><th>13</th><td>1983</td><td>4.88696</td><td>3.30435</td><td>0.33913</td><td>0.530435</td><td>0.2</td><td>0.434783</td><td>11.1408</td><td>2736.82</td><td>0.617391</td><td>156.169</td><td>97.8522</td><td>184.087</td><td>68.7913</td><td>102.857</td><td>25.1478</td></tr><tr><th>14</th><td>1984</td><td>4.92035</td><td>3.35398</td><td>0.345133</td><td>0.663717</td><td>0.283186</td><td>0.477876</td><td>11.6477</td><td>2765.41</td><td>0.59292</td><td>166.069</td><td>107.265</td><td>183.822</td><td>68.4637</td><td>102.735</td><td>24.1504</td></tr><tr><th>15</th><td>1985</td><td>4.96324</td><td>3.31618</td><td>0.345588</td><td>0.669118</td><td>0.330882</td><td>0.522059</td><td>12.4764</td><td>2759.99</td><td>0.566176</td><td>163.106</td><td>108.081</td><td>182.096</td><td>68.1625</td><td>102.349</td><td>22.3868</td></tr><tr><th>16</th><td>1978</td><td>5.57895</td><td>3.33684</td><td>0.263158</td><td>0.347368</td><td>0.0947368</td><td>0.178947</td><td>10.6021</td><td>2964.8</td><td>0.642105</td><td>199.494</td><td>107.695</td><td>189.347</td><td>70.3579</td><td>105.337</td><td>19.9263</td></tr><tr><th>17</th><td>1979</td><td>5.33333</td><td>3.28431</td><td>0.235294</td><td>0.313725</td><td>0.0882353</td><td>0.215686</td><td>10.4513</td><td>2840.49</td><td>0.598039</td><td>184.635</td><td>104.108</td><td>186.559</td><td>69.5</td><td>103.849</td><td>20.1078</td></tr><tr><th>18</th><td>1972</td><td>6.20225</td><td>3.44944</td><td>0.337079</td><td>0.348315</td><td>0.0449438</td><td>0.0</td><td>9.04282</td><td>3253.78</td><td>0.696629</td><td>256.947</td><td>134.348</td><td>196.775</td><td>73.3146</td><td>111.563</td><td>16.3317</td></tr><tr><th>19</th><td>1973</td><td>6.37209</td><td>3.44186</td><td>0.395349</td><td>0.372093</td><td>0.0697674</td><td>0.0</td><td>9.0452</td><td>3337.42</td><td>0.709302</td><td>261.949</td><td>131.256</td><td>198.384</td><td>73.0233</td><td>111.215</td><td>16.2506</td></tr><tr><th>20</th><td>1974</td><td>6.0</td><td>3.30556</td><td>0.375</td><td>0.375</td><td>0.125</td><td>0.0</td><td>9.25473</td><td>3268.29</td><td>0.652778</td><td>239.179</td><td>122.556</td><td>196.194</td><td>71.9306</td><td>108.874</td><td>16.3329</td></tr></tbody></table>
### A2. Data cleaning
#### Step 1. Obtain market share for each good $j$: $s_{jt}$ = $\frac{q_{jt}}{nb\_hh_{t}}$
For notation, we denote the total market size by $nb\_hh_t = M_t$.
```julia
dataset = @linq dataset |>
groupby([:year]) |>
transform(Total_Q = sum(:q))
dataset.s_j = dataset.q./dataset.nb_hh;
```
#### Step 2. Obtain market share for outside good 0: $s_{0t}$ = $\frac{ \Big(nb\_hh_t - \sum_{k=1}^J(q_{kt}) \Big)}{nb\_hh_t}$
```julia
dataset.s_0 = (dataset.nb_hh-dataset.Total_Q)./dataset.nb_hh;
```
#### Step 3. Construct dependent variable: $ \text{ln}(s_{jt}) - \text{ln}(s_{0t}) $
```julia
dataset.log_s_j_0 = log.(dataset.s_j) - log.(dataset.s_0);
```
### A3. Run linear regression using eq(A)
#### Step 4. We use hp2wt, air, mpgd, and size as product characteristics:
$$ \text{ln}(s_j) - \text{ln} (s_0) \ = \delta_j \equiv x_j' \beta - \alpha p_j + \xi_j $$
```julia
result = reg(dataset, @formula(log_s_j_0 ~ p+hp2wt+air+mpgd+size ), save = true,);
print(result)
```
Linear Model
===========================================================================
Number of obs: 2217 Degrees of freedom: 6
R2: 0.390 R2 Adjusted: 0.388
F Statistic: 282.47 p-value: 0.000
===========================================================================
Estimate Std.Error t value Pr(>|t|) Lower 95% Upper 95%
---------------------------------------------------------------------------
p -0.0886398 0.00401348 -22.0855 0.000 -0.0965104 -0.0807692
hp2wt -0.0731735 0.276701 -0.26445 0.791 -0.615794 0.469447
air -0.0380115 0.0726455 -0.523246 0.601 -0.180472 0.104449
mpgd 0.0288388 0.00439518 6.56147 0.000 0.0202197 0.0374579
size 2.40052 0.126801 18.9315 0.000 2.15186 2.64919
(Intercept) -10.2035 0.26002 -39.2412 0.000 -10.7134 -9.69356
===========================================================================
#### Step 5. Obtain Price elasticities:
Note that the own-price elasticity $(\eta_j)$ is given by:
\begin{align}
\eta_j & = \frac{\partial Pr(j)}{\partial price_j} \underbrace{\frac{price_j}{Pr(j)}}_{\frac{price_j}{s_j \times M}} \\
& \text{Note that} \ \frac{\partial Pr(j)}{\partial price_j} = \frac{\partial s_j}{\partial price_j} \times M, \ \text{where} \ s_j = \frac{e^{\delta_j}}{\sum_{k=0}^{J} e^{\delta_k}} \\
& \text{Appealing to the chain rule (with the estimated price coefficient $\alpha$, so $\partial \delta_j / \partial price_j = \alpha$):} \\
& \frac{\partial s_j}{\partial price_j} \, M = M \Bigg[ \alpha \frac{e^{\delta_j}}{\sum_{k=0}^{J} e^{\delta_k}} - \alpha \Big( \frac{e^{\delta_j}}{\sum_{k=0}^{J} e^{\delta_k}}\Big)^2 \Bigg] = M \alpha [s_j - s_j^2] = M \alpha s_j[1- s_j]\\
& \text{Rearranging these terms gives us:} \\
& \eta_j = \underbrace{\frac{\partial Pr(j)}{\partial price_j}}_{M \alpha s_j[1- s_j]} \underbrace{\frac{price_j}{Pr(j)}}_{\frac{price_j}{s_j \times M}} = M \alpha s_j[1- s_j] \times \frac{price_j}{s_j} \frac{1}{M} = \underbrace{\alpha \times (1-s_j) \times price_j}_\text{own-price elasticity for good j} \\
& = \alpha \times (1-s_j) \times price_j
\end{align}
```julia
# Using the elasticity formula above, derive the own-price elasticity for each good j from the estimated price coefficient alpha.
price_coef = coef(result)[2];
dataset.e = price_coef * (ones(nrow(dataset))-dataset.s_j) .* dataset.p;
q1 = @from i in dataset begin;
@where i.e >-1
@select {elasticity=i.e}
@collect DataFrame
end;
nrow(q1)
```
1502
#### Replication: BLP Table 3, OLS Logit Demand column (row: No. Inelastic Dmd) on page 873.
#### I derive the number of car models with inelastic demand. My estimate is 1,502; BLP's estimate was 1,494, which is quite close.
### A4. Run 2SLS using instruments
Following BLP, I use the following instruments for price.
#### 1. the sum of size across all other products at market $t$ (i.e., dropping product $j$'s own size).
#### 2. the sum of size across rival firm products at market $t$.
```julia
# IV 1
dataset = @linq dataset |>
groupby([:year]) |>
transform(Total_size = sum(:size));
dataset.iv_size1 = dataset.Total_size - dataset.size;
```
```julia
# IV 2
dataset = @linq dataset |>
groupby([:year , :firmids]) |>
transform(sum_size = sum(:size));
dataset.iv_size2 = dataset.Total_size - dataset.sum_size;
```
```julia
# 2SLS Regression for demand estimation
# First stage: regress price on Z and X
first_stage_result = reg(dataset, @formula(p ~ iv_size1+ iv_size2+hp2wt + air+mpgd+size), save = true, );
print(first_stage_result)
```
Linear Model
=========================================================================
Number of obs: 2217 Degrees of freedom: 7
R2: 0.592 R2 Adjusted: 0.591
F Statistic: 534.872 p-value: 0.000
=========================================================================
Estimate Std.Error t value Pr(>|t|) Lower 95% Upper 95%
-------------------------------------------------------------------------
iv_size1 -0.030544 0.00999712 -3.05528 0.002 -0.0501487 -0.0109393
iv_size2 0.0919841 0.00809711 11.3601 0.000 0.0761054 0.107863
hp2wt 25.8362 1.31783 19.6051 0.000 23.2518 28.4205
air 9.57023 0.327809 29.1945 0.000 8.92738 10.2131
mpgd -0.272477 0.0270151 -10.0861 0.000 -0.325455 -0.219499
size 2.25533 0.727117 3.10174 0.002 0.829424 3.68123
(Intercept) -5.21123 1.50736 -3.4572 0.001 -8.16721 -2.25525
=========================================================================
```julia
# Second Stage: regress log(s_j)-log(s_0) on xhat
xhat = predict(first_stage_result, dataset);
dataset.p_iv = xhat;
second_stage_result = reg(dataset, @formula(log_s_j_0 ~ p_iv+hp2wt + air+mpgd+size), save = true);
print(second_stage_result)
```
Linear Model
============================================================================
Number of obs: 2217 Degrees of freedom: 6
R2: 0.355 R2 Adjusted: 0.354
F Statistic: 243.848 p-value: 0.000
============================================================================
Estimate Std.Error t value Pr(>|t|) Lower 95% Upper 95%
----------------------------------------------------------------------------
p_iv -0.289427 0.0156063 -18.5455 0.000 -0.320032 -0.258823
hp2wt 5.63223 0.513603 10.9661 0.000 4.62503 6.63942
air 2.18099 0.182327 11.9619 0.000 1.82344 2.53854
mpgd -0.00984143 0.00536771 -1.83345 0.067 -0.0203677 0.000684853
size 2.19676 0.131213 16.7419 0.000 1.93944 2.45407
(Intercept) -9.5444 0.271767 -35.1198 0.000 -10.0773 -9.01145
============================================================================
```julia
price_coef = coef(second_stage_result)[2];
dataset.e_iv = price_coef * (ones(nrow(dataset))-dataset.s_j) .* dataset.p;
```
```julia
q1 = @from i in dataset begin;
@where i.e_iv >-1
    @select {elasticity=i.e_iv}
@collect DataFrame
end;
```
### Comparison with BLP Table 3, IV Logit Demand column (row: No. Inelastic Dmd) on page 873.
#### Since I obtain a slightly different price coefficient, I observe 2 goods with inelastic demand; BLP's estimate was 22.
```julia
nrow(q1)
```
2
### Step 6. Discussion: IV regressions
#### The reported price coefficient ($\alpha$) is -0.0886 in OLS.
#### We now get $\alpha$ = -0.2894 in the IV regression: the price coefficient is upward biased (toward zero) in OLS, because unobserved quality $\xi_j$ raises both demand and price.
```julia
# OLS Results
print(result)
```
Linear Model
===========================================================================
Number of obs: 2217 Degrees of freedom: 6
R2: 0.390 R2 Adjusted: 0.388
F Statistic: 282.47 p-value: 0.000
===========================================================================
Estimate Std.Error t value Pr(>|t|) Lower 95% Upper 95%
---------------------------------------------------------------------------
p -0.0886398 0.00401348 -22.0855 0.000 -0.0965104 -0.0807692
hp2wt -0.0731735 0.276701 -0.26445 0.791 -0.615794 0.469447
air -0.0380115 0.0726455 -0.523246 0.601 -0.180472 0.104449
mpgd 0.0288388 0.00439518 6.56147 0.000 0.0202197 0.0374579
size 2.40052 0.126801 18.9315 0.000 2.15186 2.64919
(Intercept) -10.2035 0.26002 -39.2412 0.000 -10.7134 -9.69356
===========================================================================
```julia
# 2SLS Results
print(second_stage_result)
```
Linear Model
============================================================================
Number of obs: 2217 Degrees of freedom: 6
R2: 0.355 R2 Adjusted: 0.354
F Statistic: 243.848 p-value: 0.000
============================================================================
Estimate Std.Error t value Pr(>|t|) Lower 95% Upper 95%
----------------------------------------------------------------------------
p_iv -0.289427 0.0156063 -18.5455 0.000 -0.320032 -0.258823
hp2wt 5.63223 0.513603 10.9661 0.000 4.62503 6.63942
air 2.18099 0.182327 11.9619 0.000 1.82344 2.53854
mpgd -0.00984143 0.00536771 -1.83345 0.067 -0.0203677 0.000684853
size 2.19676 0.131213 16.7419 0.000 1.93944 2.45407
(Intercept) -9.5444 0.271767 -35.1198 0.000 -10.0773 -9.01145
============================================================================
## B. Monte Carlo Example: estimate logit-demand after solving Nash-Bertrand game
- B1. Data Generating Process
- B2. Obtain (numerically) equilibrium price and market shares
- B3. Regress using OLS / IV
### B1. Data Generating Process
The market is characterized by duopoly firms, each producing a single good, with aggregate market shares and a price for each good. We assume that the duopoly firms compete in 500 "independent" (isolated) markets.
In the D.G.P., we solve the Nash-Bertrand game to derive each duopoly firm's price and market share. We use cost shifters and product characteristics to numerically solve this game. Since this is the D.G.P., we use the true parameters to obtain prices and market shares.
As an econometrician, we observe each duopoly firm's market share, price, cost shifters, and product characteristics.
The utility of each consumer $i$ in each market is given by:
\begin{equation}
u_{ij} = \beta_0 + \beta_1 x_j + \sigma_d \xi_j - \alpha p_j + \epsilon_{ij}
\end{equation}
Marginal cost is constrained to be positive and is given by:
\begin{equation}
c_j = e^{\gamma_0 + \gamma_x x_j + \sigma_c \xi_j + \gamma_w w_j + \sigma_\omega \omega_j}
\end{equation}
The exogenous data $x_j, \xi_j, w_j,$ and $\omega_j$ are all drawn as standard normal random variables.
The true parameters are given by:
| Parameter        | True Value | Description |
|------------------|------------|-------------|
| $ \beta_0$       | 5          | Intercept (demand) |
| $ \beta_x$       | 2          | Utility from good $x$ |
| $ \sigma_d$      | 1 (3 in the second Monte Carlo) | Scale of $\xi_j$ in utility |
| $\alpha $        | 1          | Price coefficient (utility includes $-\alpha p_j$; coded as `alpha = -1.0`) |
| $ \gamma_0$      | 1          | Intercept (supply) |
| $\gamma_x $      | 0.5        | Cost from good $x$ |
| $ \sigma_c$      | 0.25       | Scale of $\xi_j$ in cost |
| $ \gamma_w$      | 0.25       | Parameter for input costs |
| $ \sigma_\omega$ | 0.25       | Scale of $\omega_j$ in cost |
### B2. Obtain (numerically) equilibrium price and market shares (still D.G.P)
I solve the following system of nonlinear equations for $j=1,2$, with unknowns $p_1, p_2, s_1(p_1, p_2), s_2(p_1,p_2)$. Here $X$ is the vector of the duopoly firms' product characteristics, $X = \{ x_1, x_2 \}$.
Note that $s_0 = 1-s_1-s_2$.
\begin{align}
p_1 & = c_1 - \frac{1}{\alpha (1-s_1)} \\
p_2 & = c_2 - \frac{1}{\alpha (1-s_2)} \\
& \text{Note that $s_1$, and $s_2$ is given by} \\
s_1(X,p_1, p_2) & = \frac{e^{\beta_0 + \beta_1 x_1 + \sigma_d \xi_1 - \alpha p_1}}{1+e^{\beta_0 + \beta_1 x_1 + \sigma_d \xi_1 - \alpha p_1} +e^{\beta_0 + \beta_1 x_2 + \sigma_d \xi_2 - \alpha p_2} } \\
s_2(X,p_1, p_2) & = \frac{e^{\beta_0 + \beta_1 x_2 + \sigma_d \xi_2 - \alpha p_2}}{1+e^{\beta_0 + \beta_1 x_1 + \sigma_d \xi_1 - \alpha p_1} + e^{\beta_0 + \beta_1 x_2 + \sigma_d \xi_2 - \alpha p_2}}
\end{align}
Using a nonlinear solver, we can obtain the equilibrium outcome: $p_1, p_2, s_1, s_2$ (and $s_0 = 1-s_1-s_2$). One might be concerned about multiple equilibria in this game. Since each firm sells a single product under duopoly (a simple market), we observe a unique solution in this Monte Carlo study. Please see Caplin and Nalebuff (1991) for the multi-product firm problem and the uniqueness of this game.
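For completeness, here is a sketch of where the pricing equations come from (my derivation of the standard single-product Bertrand first-order condition, written in the coded sign convention where the mean utility contains $+\alpha p_j$ with $\alpha = -1$):
\begin{align}
\pi_j &= (p_j - c_j)\, M\, s_j(p_1, p_2) \\
\frac{\partial \pi_j}{\partial p_j} &= M \big[ s_j + (p_j - c_j)\,\alpha s_j (1-s_j) \big] = 0
\quad \Longrightarrow \quad p_j = c_j - \frac{1}{\alpha (1-s_j)},
\end{align}
using $\partial s_j/\partial p_j = \alpha s_j(1-s_j)$. Because each firm sells a single product, no cross-price terms enter the first-order condition.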
### B3. Regress using OLS / IV
Following Berry's inversion, an econometrician runs the following OLS/IV regressions. The econometrician observes prices, product characteristics, and cost shifters for 500 independent duopoly markets.
\begin{align}
\text{ln} (s_j) - \text{ln} (s_0) \ & = \delta_j \\
& = \beta_0 + \beta_1 x_j - \alpha p_j + \sigma_d \xi_j
\end{align}
For the IV, I use cost shifters and the competitor's product characteristic, as in Berry (1994).
Note that in OLS, since the econometrician cannot observe the $\xi_j$ term, the price coefficient $\alpha$ is upward biased (toward zero). In the IV regression, the observed cost shifters, $w_j$, and the product characteristic of the rival firm are used as instruments for price.
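Stated compactly, the identifying assumption behind these instruments (my paraphrase; cf. Berry (1994)) is
$$
E[\xi_j \mid x_j, x_{-j}, w_j] = 0,
$$
so the rival's characteristic $x_{-j}$ and the cost shifter $w_j$ move $p_j$ through the markup and marginal cost while remaining excluded from the demand equation.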
#### For each simulation, we independently draw these for 500 markets, as in Berry(1994).
#### Repeat 100 times, and report Monte Carlo results
```julia
# Define parameters, as in Berry's 1994 Monte Carlo
beta_0 = 5.0
beta_x = 2.0
sigma_d = 1.0
alpha = -1.0
gamma_0 = 1.0
gamma_x = 0.5
sigma_c = 0.25
gamma_w = 0.25
sigma_omega = 0.25
T = 500
S = 100
d = Normal()
```
Normal{Float64}(μ=0.0, σ=1.0)
```julia
# Define Non-linear solver
function f!(F, x)
# D.G.P for true costs
cost_1 = exp(gamma_0 + gamma_x * data_temp1[:x] + sigma_c * data_temp1[:xi] + gamma_w * data_temp1[:w] + sigma_omega * data_temp1[:omega])
cost_2 = exp(gamma_0 + gamma_x * data_temp2[:x] + sigma_c * data_temp2[:xi] + gamma_w * data_temp2[:w] + sigma_omega * data_temp2[:omega])
    # Derive equilibrium price / quantity (market shares)
price_1 = cost_1 - 1/(alpha*(1-x[1]))
price_2 = cost_2 - 1/(alpha*(1-x[2]))
#x[1]: market share of good 1
#x[2]: market share of good 2
denom = 1 + exp(beta_0 + beta_x*data_temp1[:x] + sigma_d*data_temp1[:xi] + alpha*price_1) + exp(beta_0 + beta_x*data_temp2[:x] + sigma_d*data_temp2[:xi] + alpha*price_2)
F[1] = x[1] - exp(beta_0 + beta_x*data_temp1[:x] + sigma_d*data_temp1[:xi] + alpha*price_1)/denom
F[2] = x[2] - exp(beta_0 + beta_x*data_temp2[:x] + sigma_d*data_temp2[:xi] + alpha*price_2)/denom
end
```
### Replicate Table 1 in Berry (1994), columns (1) and (2), where $\sigma_d=1$
```julia
### Takes about 5 seconds with the current workflow
sigma_d = 1
for s = 1:S
### Step B1. D.G.P.
# If you want to have same results, you need to assign random seed
Random.seed!(s*100+T+1)
x_1 = rand(d, T);
Random.seed!(s*100+T+2)
xi_1 = rand(d, T);
Random.seed!(s*100+T+3)
w_1 = rand(d, T);
Random.seed!(s*100+T+4)
omega_1 = rand(d, T);
Random.seed!(s*100+T+5)
x_2 = rand(d, T);
Random.seed!(s*100+T+6)
xi_2 = rand(d, T);
Random.seed!(s*100+T+7)
w_2 = rand(d, T);
Random.seed!(s*100+T+8)
omega_2 = rand(d, T);
data_1 = DataFrame(x = x_1[1:T], xi = xi_1[1:T], w = w_1[1:T], omega = omega_1[1:T], iv=x_2[1:T]);
data_2 = DataFrame(x = x_2[1:T], xi = xi_2[1:T], w = w_2[1:T], omega = omega_2[1:T], iv=x_1[1:T]);
# For the first periods
data_temp1 = data_1[1,:]
data_temp2 = data_2[1,:]
global data_temp1, data_temp2
### Step B2. Solve Equilibrium price and market shares using nonlinear-solver
a= nlsolve(f!, [0.1; 0.1])
vector_s1 = [a.zero[1]]
vector_s2 = [a.zero[2]]
vector_s0 = [1-a.zero[1]-a.zero[2]]
cost_1 = exp(gamma_0 + gamma_x * data_temp1[:x] + sigma_c * data_temp1[:xi] + gamma_w * data_temp1[:w] + sigma_omega * data_temp1[:omega])
cost_2 = exp(gamma_0 + gamma_x * data_temp2[:x] + sigma_c * data_temp2[:xi] + gamma_w * data_temp2[:w] + sigma_omega * data_temp2[:omega])
vector_p1 = [cost_1 - 1/(alpha*(1-a.zero[1]))]
vector_p2 = [cost_2 - 1/(alpha*(1-a.zero[2]))]
vector_delta_1 = [beta_0 + beta_x * data_temp1[:x] + alpha*(cost_1 - 1/(alpha*(1-a.zero[1])) )]
vector_delta_2 = [beta_0 + beta_x * data_temp2[:x] + alpha*(cost_2 - 1/(alpha*(1-a.zero[2])) )]
# From the second market to T markets.
t=2
for t = 2:T
data_temp1 = data_1[t,:]
data_temp2 = data_2[t,:]
# Step 1. Solve Equilibrium price / market shares
a= nlsolve(f!, [0.0; 0.0])
append!(vector_s1, [a.zero[1]]);
append!(vector_s2, [a.zero[2]]);
append!(vector_s0, [1-a.zero[1]-a.zero[2]]);
cost_1 = exp(gamma_0 + gamma_x * data_temp1[:x] + sigma_c * data_temp1[:xi] + gamma_w * data_temp1[:w] + sigma_omega * data_temp1[:omega])
cost_2 = exp(gamma_0 + gamma_x * data_temp2[:x] + sigma_c * data_temp2[:xi] + gamma_w * data_temp2[:w] + sigma_omega * data_temp2[:omega])
append!(vector_p1, [cost_1 - 1/(alpha*(1-a.zero[1]))]);
append!(vector_p2, [cost_2 - 1/(alpha*(1-a.zero[2]))]);
append!(vector_delta_1, [beta_0 + beta_x * data_temp1[:x] + alpha*(cost_1 - 1/(alpha*(1-a.zero[1])) )]);
append!(vector_delta_2, [beta_0 + beta_x * data_temp2[:x] + alpha*(cost_2 - 1/(alpha*(1-a.zero[2])) )]);
end
data_1.price = vector_p1;
data_2.price = vector_p2;
data_1.s = vector_s1;
data_2.s = vector_s2;
data_1.delta = vector_delta_1;
data_2.delta = vector_delta_2;
data_1.log_sj_s0 = log.(vector_s1) - log.(vector_s0);
data_2.log_sj_s0 = log.(vector_s2) - log.(vector_s0);
# Merge into dataset
data_merged = append!(data_1, data_2);
### B3. Regress using OLS / IV
## OLS Regression
ols_result = reg(data_merged, @formula(log_sj_s0 ~ x + price), save = true, Vcov.robust());
ols_cons = coef(ols_result)[1];
ols_x = coef(ols_result)[2];
ols_p = coef(ols_result)[3];
## IV Regression
first_stage_result = reg(data_merged, @formula(price ~ iv + w +x), save = true, Vcov.robust());
xhat = predict(first_stage_result, data_merged);
data_merged.xhat = xhat;
iv_result = reg(data_merged, @formula(log_sj_s0 ~ x + xhat), save = true, Vcov.robust());
iv_cons = coef(iv_result)[1];
iv_x = coef(iv_result)[2];
iv_p = coef(iv_result)[3];
if s == 1
vector_ols_cons = [ols_cons]
vector_ols_x = [ols_x]
vector_ols_p = [ols_p]
vector_iv_cons = [iv_cons]
vector_iv_x = [iv_x]
vector_iv_p = [iv_p]
# Store Monte Carlo Results
global vector_ols_cons, vector_ols_x, vector_ols_p, vector_iv_cons, vector_iv_x, vector_iv_p
else
append!(vector_ols_cons, [ols_cons])
append!(vector_ols_x, [ols_x])
append!(vector_ols_p, [ols_p])
append!(vector_iv_cons, [iv_cons])
append!(vector_iv_x, [iv_x])
append!(vector_iv_p, [iv_p])
end
end
print("Monte Carlo Parameter Estimates 100 Random Samples of 500 Duopoly Markets Logit Utility (sigma_d = 1)")
result_summary =DataFrame( True_parameter = [beta_0, beta_x, alpha], OLS_mean = [mean(vector_ols_cons),mean(vector_ols_x),mean(vector_ols_p)], OLS_se = [std(vector_ols_cons),std(vector_ols_x),std(vector_ols_p)],
IV_mean = [mean(vector_iv_cons),mean(vector_iv_x),mean(vector_iv_p)], IV_se =[std(vector_iv_cons),std(vector_iv_x),std(vector_iv_p)]);
print("Result Summary")
print(result_summary)
```
Monte Carlo Parameter Estimates, 100 Random Samples of 500 Duopoly Markets, Logit Utility (sigma_d = 1)
Result Summary
3×5 DataFrame
│ Row │ True_parameter │ OLS_mean │ OLS_se    │ IV_mean  │ IV_se     │
│     │ Float64        │ Float64  │ Float64   │ Float64  │ Float64   │
├─────┼────────────────┼──────────┼───────────┼──────────┼───────────┤
│ 1   │ 5.0            │ 3.1872   │ 0.235697  │ 5.01814  │ 0.266852  │
│ 2   │ 2.0            │ 1.33611  │ 0.0742151 │ 2.01013  │ 0.0994837 │
│ 3   │ -1.0           │ -0.63979 │ 0.0482097 │ -1.00436 │ 0.0513297 │
### Replicate Table 1 in Berry (1994), columns (1) and (2), where $\sigma_d=3$
```julia
sigma_d = 3.0
for s = 1:S
### Step B1. D.G.P.
# If you want to have same results, you need to assign random seed
Random.seed!(s*100+T+1)
x_1 = rand(d, T);
Random.seed!(s*100+T+2)
xi_1 = rand(d, T);
Random.seed!(s*100+T+3)
w_1 = rand(d, T);
Random.seed!(s*100+T+4)
omega_1 = rand(d, T);
Random.seed!(s*100+T+5)
x_2 = rand(d, T);
Random.seed!(s*100+T+6)
xi_2 = rand(d, T);
Random.seed!(s*100+T+7)
w_2 = rand(d, T);
Random.seed!(s*100+T+8)
omega_2 = rand(d, T);
data_1 = DataFrame(x = x_1[1:T], xi = xi_1[1:T], w = w_1[1:T], omega = omega_1[1:T], iv=x_2[1:T]);
data_2 = DataFrame(x = x_2[1:T], xi = xi_2[1:T], w = w_2[1:T], omega = omega_2[1:T], iv=x_1[1:T]);
# For the first periods
data_temp1 = data_1[1,:]
data_temp2 = data_2[1,:]
global data_temp1, data_temp2
### Step B2. Solve Equilibrium price and market shares using nonlinear-solver
a= nlsolve(f!, [0.1; 0.1])
vector_s1 = [a.zero[1]]
vector_s2 = [a.zero[2]]
vector_s0 = [1-a.zero[1]-a.zero[2]]
cost_1 = exp(gamma_0 + gamma_x * data_temp1[:x] + sigma_c * data_temp1[:xi] + gamma_w * data_temp1[:w] + sigma_omega * data_temp1[:omega])
cost_2 = exp(gamma_0 + gamma_x * data_temp2[:x] + sigma_c * data_temp2[:xi] + gamma_w * data_temp2[:w] + sigma_omega * data_temp2[:omega])
vector_p1 = [cost_1 - 1/(alpha*(1-a.zero[1]))]
vector_p2 = [cost_2 - 1/(alpha*(1-a.zero[2]))]
vector_delta_1 = [beta_0 + beta_x * data_temp1[:x] + alpha*(cost_1 - 1/(alpha*(1-a.zero[1])) )]
vector_delta_2 = [beta_0 + beta_x * data_temp2[:x] + alpha*(cost_2 - 1/(alpha*(1-a.zero[2])) )]
# From the second market to T markets.
t=2
for t = 2:T
data_temp1 = data_1[t,:]
data_temp2 = data_2[t,:]
# Step 1. Solve Equilibrium price / market shares
a= nlsolve(f!, [0.0; 0.0])
append!(vector_s1, [a.zero[1]]);
append!(vector_s2, [a.zero[2]]);
append!(vector_s0, [1-a.zero[1]-a.zero[2]]);
cost_1 = exp(gamma_0 + gamma_x * data_temp1[:x] + sigma_c * data_temp1[:xi] + gamma_w * data_temp1[:w] + sigma_omega * data_temp1[:omega])
cost_2 = exp(gamma_0 + gamma_x * data_temp2[:x] + sigma_c * data_temp2[:xi] + gamma_w * data_temp2[:w] + sigma_omega * data_temp2[:omega])
append!(vector_p1, [cost_1 - 1/(alpha*(1-a.zero[1]))]);
append!(vector_p2, [cost_2 - 1/(alpha*(1-a.zero[2]))]);
append!(vector_delta_1, [beta_0 + beta_x * data_temp1[:x] + alpha*(cost_1 - 1/(alpha*(1-a.zero[1])) )]);
append!(vector_delta_2, [beta_0 + beta_x * data_temp2[:x] + alpha*(cost_2 - 1/(alpha*(1-a.zero[2])) )]);
end
data_1.price = vector_p1;
data_2.price = vector_p2;
data_1.s = vector_s1;
data_2.s = vector_s2;
data_1.delta = vector_delta_1;
data_2.delta = vector_delta_2;
data_1.log_sj_s0 = log.(vector_s1) - log.(vector_s0);
data_2.log_sj_s0 = log.(vector_s2) - log.(vector_s0);
# Merge into dataset
data_merged = append!(data_1, data_2);
### B3. Regress using OLS / IV
## OLS Regression
ols_result = reg(data_merged, @formula(log_sj_s0 ~ x + price), save = true, Vcov.robust());
ols_cons = coef(ols_result)[1];
ols_x = coef(ols_result)[2];
ols_p = coef(ols_result)[3];
## IV Regression
first_stage_result = reg(data_merged, @formula(price ~ iv + w +x), save = true, Vcov.robust());
xhat = predict(first_stage_result, data_merged);
data_merged.xhat = xhat;
iv_result = reg(data_merged, @formula(log_sj_s0 ~ x + xhat), save = true, Vcov.robust());
iv_cons = coef(iv_result)[1];
iv_x = coef(iv_result)[2];
iv_p = coef(iv_result)[3];
if s == 1
vector_ols_cons = [ols_cons]
vector_ols_x = [ols_x]
vector_ols_p = [ols_p]
vector_iv_cons = [iv_cons]
vector_iv_x = [iv_x]
vector_iv_p = [iv_p]
# Store Monte Carlo Results
global vector_ols_cons, vector_ols_x, vector_ols_p, vector_iv_cons, vector_iv_x, vector_iv_p
else
append!(vector_ols_cons, [ols_cons])
append!(vector_ols_x, [ols_x])
append!(vector_ols_p, [ols_p])
append!(vector_iv_cons, [iv_cons])
append!(vector_iv_x, [iv_x])
append!(vector_iv_p, [iv_p])
end
end
print("Monte Carlo Parameter Estimates 100 Random Samples of 500 Duopoly Markets Logit Utility (sigma_d = 1)")
result_summary =DataFrame( True_parameter = [beta_0, beta_x, alpha], OLS_mean = [mean(vector_ols_cons),mean(vector_ols_x),mean(vector_ols_p)], OLS_se = [std(vector_ols_cons),std(vector_ols_x),std(vector_ols_p)],
IV_mean = [mean(vector_iv_cons),mean(vector_iv_x),mean(vector_iv_p)], IV_se =[std(vector_iv_cons),std(vector_iv_x),std(vector_iv_p)]);
print("Result Summary")
print(result_summary)
```
Monte Carlo Parameter Estimates 100 Random Samples of 500 Duopoly Markets Logit Utility (sigma_d = 3)
3×5 DataFrame
│ Row │ True_parameter │ OLS_mean │ OLS_se │ IV_mean │ IV_se │
│ │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼────────────────┼───────────┼───────────┼──────────┼──────────┤
│ 1 │ 5.0 │ -0.762803 │ 0.418897 │ 5.03208 │ 0.847055 │
│ 2 │ 2.0 │ 0.0195958 │ 0.115257 │ 1.99649 │ 0.301829 │
│ 3 │ -1.0 │ 0.105563 │ 0.0831606 │ -1.00709 │ 0.166926 │
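The regressions above work because of Berry's logit inversion: with an outside good, $\log s_j - \log s_0 = \delta_j$ exactly. A minimal sketch of that identity (my addition, in Python/numpy rather than the Julia used above; the variable names are mine):
```python
# Berry (1994) logit inversion: log(s_j) - log(s_0) recovers delta_j exactly.
import numpy as np

delta = np.array([1.2, -0.4])                            # mean utilities of two products
s_inside = np.exp(delta) / (1.0 + np.exp(delta).sum())   # logit market shares
s0 = 1.0 - s_inside.sum()                                # outside-good share
recovered = np.log(s_inside) - np.log(s0)                # the regression's LHS
print(np.allclose(recovered, delta))                     # True
```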
# References
Berry, Steven T. "Estimating discrete-choice models of product differentiation." <em>The RAND Journal of Economics</em> (1994): 242-262.
Berry, Steven, James Levinsohn, and Ariel Pakes. "Automobile prices in market equilibrium." <em>Econometrica: Journal of the Econometric Society</em> (1995): 841-890.
| 111668a6a6abfa59b86c87e406f630220d75907a | 97,716 | ipynb | Jupyter Notebook | Berry.ipynb | econjinkim/econjinkim.github.io | 745ca0eebc8e8e5271722d96deb11c829fce435d | [
"MIT"
] | 1 | 2021-02-18T15:44:42.000Z | 2021-02-18T15:44:42.000Z | Berry.ipynb | econgenekim/econjinkim.github.io | 669ee5038a4a138fc6c3ada9ce5e3cff8ea22136 | [
"MIT"
] | null | null | null | Berry.ipynb | econgenekim/econjinkim.github.io | 669ee5038a4a138fc6c3ada9ce5e3cff8ea22136 | [
"MIT"
] | null | null | null | 74.25228 | 7,637 | 0.491793 | true | 21,638 | Qwen/Qwen-72B | 1. YES
2. YES | 0.793106 | 0.715424 | 0.567407 | __label__kor_Hang | 0.162518 | 0.156606 |
## Tracking Error Minimization
---
### Passive Management Vs. Active Management
+ So far we have reviewed how to manage our portfolio in terms of the balance between the expected return and the risk (the variance or the expected shortfall). This style of portfolio management is called <font color=red>active management</font>. Active management also involves discretionary selection of assets.
+ <font color=red>Passive management</font> of a portfolio, on the other hand, is an investment strategy in which an investor tries to mimic a benchmark index. Passive management funds that mimic indices are called <font color=red>index funds</font>. As the benchmark portfolio, index funds use stock indices, bond indices, currencies, commodities, or even hedge funds.
+ The goal in passive management is to minimize a discrepancy between a portfolio and the benchmark index.
```python
import numpy as np
import scipy.stats as st
import cvxpy as cvx
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
```
Here we generate artificial stock return data and save them in the CSV file `asset_return_data.csv`.
`random.seed` sets the seed for pseudo-random numbers. This assures the reproducibility of computational outcomes with pseudo-random numbers.
We use `multivariate_normal` in `scipy.stats` to generate random vectors from the multivariate normal distribution with the mean vector `Mu` and the covariance matrix `Sigma`. See more details on `multivariate_normal` at https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.multivariate_normal.html
```python
Mu = np.array([1.0, 3.0, 1.5, 6.0, 4.5])
Stdev = np.array([5.0, 10.0, 7.5, 15.0, 11.0])
CorrMatrix = np.array([[1.00, 0.25, 0.18, 0.10, 0.25],
[0.25, 1.00, 0.36, 0.20, 0.20],
[0.18, 0.36, 1.00, 0.25, 0.36],
[0.10, 0.20, 0.25, 1.00, 0.45],
[0.25, 0.20, 0.36, 0.45, 1.00]])
Sigma = np.diag(Stdev) @ CorrMatrix @ np.diag(Stdev)
np.random.seed(9999)
T = 120
End_of_Month = pd.date_range('1/1/2010', periods=T, freq='M')
Asset_Names = ['Asset 1', 'Asset 2', 'Asset 3', 'Asset 4', 'Asset 5']
Asset_Return = pd.DataFrame(st.multivariate_normal.rvs(mean=Mu, cov=Sigma, size=T),
index=End_of_Month, columns=Asset_Names)
Asset_Return.to_csv('asset_return_data.csv')
display(Asset_Return)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Asset 1</th>
      <th>Asset 2</th>
      <th>Asset 3</th>
<th>Asset 4</th>
<th>Asset 5</th>
</tr>
</thead>
<tbody>
<tr>
<th>2010-01-31</th>
<td>-2.313057</td>
<td>-4.503855</td>
<td>0.530932</td>
<td>1.392579</td>
<td>-1.345408</td>
</tr>
<tr>
<th>2010-02-28</th>
<td>-2.474033</td>
<td>-2.361441</td>
<td>-19.814875</td>
<td>12.819493</td>
<td>-10.031085</td>
</tr>
<tr>
<th>2010-03-31</th>
<td>-5.384739</td>
<td>2.588934</td>
<td>10.350946</td>
<td>10.356395</td>
<td>-1.664494</td>
</tr>
<tr>
<th>2010-04-30</th>
<td>-5.992190</td>
<td>-0.496439</td>
<td>-4.770446</td>
<td>-5.838914</td>
<td>-21.171828</td>
</tr>
<tr>
<th>2010-05-31</th>
<td>-4.701301</td>
<td>-13.822786</td>
<td>6.776691</td>
<td>-4.201704</td>
<td>21.390680</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>2019-08-31</th>
<td>3.627017</td>
<td>25.813189</td>
<td>5.174548</td>
<td>6.392566</td>
<td>10.367343</td>
</tr>
<tr>
<th>2019-09-30</th>
<td>8.219459</td>
<td>2.193804</td>
<td>-1.083170</td>
<td>32.676895</td>
<td>16.633275</td>
</tr>
<tr>
<th>2019-10-31</th>
<td>7.407514</td>
<td>13.978706</td>
<td>-5.043529</td>
<td>15.059008</td>
<td>2.255886</td>
</tr>
<tr>
<th>2019-11-30</th>
<td>2.853102</td>
<td>-2.226434</td>
<td>5.037606</td>
<td>6.213427</td>
<td>21.054736</td>
</tr>
<tr>
<th>2019-12-31</th>
<td>3.925230</td>
<td>-0.556322</td>
<td>6.234687</td>
<td>-3.554233</td>
<td>9.539070</td>
</tr>
</tbody>
</table>
<p>120 rows × 5 columns</p>
</div>
Then we read the generated data back in from `asset_return_data.csv`.
```python
R = pd.read_csv('asset_return_data.csv', index_col=0)
R = R.asfreq(pd.infer_freq(R.index))
T, N = R.shape
np.random.seed(8888)
BenchmarkIndex = R @ np.tile(1.0/N, N) + st.norm.rvs(loc=0.0, scale=3.0, size=T)
```
### Tracking Error
Let $y_t$ denote the return on the benchmark index at time $t$, $r_{nt}$ denote the return on asset $n$ $(n=1,\dots,N)$ at time $t$ $(t=1,\dots,T)$, and $w_n$ denote the allocation weight for asset $n$. Then a discrepancy between a portfolio and the benchmark index at time $t$ is given by
\begin{equation}
\begin{split}
e_t &= y_t - \sum_{n=1}^N w_n r_{nt}
= y_t - \begin{bmatrix} r_{1t} & \cdots & r_{Nt}\end{bmatrix}
\begin{bmatrix} w_1\\ \vdots \\ w_N \end{bmatrix} \\
&= y_t - r_t^{\intercal}w.
\end{split}
\end{equation}
This discrepancy is called a <font color=red>tracking error</font>.
Define
\begin{equation*}
y = \begin{bmatrix} y_1 \\ \vdots \\ y_T \end{bmatrix},\quad
R = \begin{bmatrix} r_1^{\intercal} \\ \vdots \\ r_T^{\intercal} \end{bmatrix},\quad
e = \begin{bmatrix} e_1 \\ \vdots \\ e_T \end{bmatrix}.
\end{equation*}
The tracking error minimization problem is defined as
\begin{equation*}
\begin{split}
\min_{w, e} & \quad \frac1{T}e^{\intercal}e, \\
\text{subject to} & \quad e = y-Rw,\quad w^{\intercal}\iota = 1,\quad w\geqq 0,
\end{split}
\end{equation*}
where $\iota$ is the vector of ones, so the weights sum to one and short positions are excluded.
```python
MovingWindow = 96
BackTesting = T - MovingWindow
V_Tracking = np.zeros(BackTesting)
Weight = cvx.Variable(N)
Error = cvx.Variable(MovingWindow)
TrackingError = cvx.sum_squares(Error) / MovingWindow
Asset_all = R
Index_all = BenchmarkIndex
for Month in range(0, BackTesting):
Asset = Asset_all.values[Month:(Month + MovingWindow), :]
Index = Index_all.values[Month:(Month + MovingWindow)]
Min_TrackingError = cvx.Problem(cvx.Minimize(TrackingError),
[Index - Asset @ Weight == Error,
cvx.sum(Weight) == 1.0,
Weight >= 0.0])
Min_TrackingError.solve(solver=cvx.ECOS)
V_Tracking[Month] = R.values[Month + MovingWindow, :] @ Weight.value
```
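Before plotting, it can be useful to summarize the realized out-of-sample tracking error. A small diagnostic sketch (my addition; it assumes the `V_Tracking`, `BenchmarkIndex`, and `MovingWindow` objects defined above):
```python
# Summary statistics of the realized out-of-sample tracking error
realized_te = V_Tracking - BenchmarkIndex.values[MovingWindow:]
print('mean TE: %.4f  std TE: %.4f' % (realized_te.mean(), realized_te.std()))
```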
```python
fig1 = plt.figure(num=1, facecolor='w')
plt.plot(list(range(1, BackTesting + 1)), BenchmarkIndex[MovingWindow:], 'b-', label='Benchmark')
plt.plot(list(range(1, BackTesting + 1)), V_Tracking, 'r--', label='Portfolio')
plt.legend(loc='best', frameon=False)
plt.xlabel('Year')
plt.ylabel('Return (%)')
plt.xticks(list(range(12, BackTesting + 1, 12)),
pd.date_range(R.index[MovingWindow], periods=BackTesting//12, freq='AS').year)
plt.show()
```
| 79610ba76add7842e3815ae46ffb342d969d928c | 42,221 | ipynb | Jupyter Notebook | notebook/ges_tracking_error.ipynb | nakatsuma/GES-PEARL | 094c2f5f1f4045d60803db91159824a801ce5dcd | [
"MIT"
] | 4 | 2018-10-10T04:10:51.000Z | 2021-10-06T02:03:56.000Z | notebook/ges_tracking_error.ipynb | nakatsuma/GES-PEARL | 094c2f5f1f4045d60803db91159824a801ce5dcd | [
"MIT"
] | null | null | null | notebook/ges_tracking_error.ipynb | nakatsuma/GES-PEARL | 094c2f5f1f4045d60803db91159824a801ce5dcd | [
"MIT"
] | 5 | 2019-05-15T04:03:04.000Z | 2021-12-07T01:33:23.000Z | 121.674352 | 29,868 | 0.827313 | true | 2,475 | Qwen/Qwen-72B | 1. YES
2. YES | 0.774583 | 0.819893 | 0.635076 | __label__eng_Latn | 0.457474 | 0.313824 |
# <h1><center><span style="color: red;">$\textbf{Superposition}$</center></h1>
<font size = '4'> It’s only when you look at the tiniest quantum particles, such as atoms, electrons, and photons, that you see intriguing phenomena like $\textbf{superposition and entanglement}$.
$\textbf{Superposition}$ refers to the quantum phenomenon where a quantum system can exist in multiple states or places at the exact same time. In other words, something can be “here” and “there,” or “up” and “down” at the same time.
```python
%matplotlib inline
# Importing standard Qiskit libraries and configuring account
from qiskit import QuantumCircuit, execute, Aer,assemble
from qiskit.visualization import plot_histogram
from qiskit.visualization import plot_state_qsphere
from math import pi
from qiskit.quantum_info import Statevector
from IPython.core.display import Image, display
import numpy as np
backend = Aer.get_backend('statevector_simulator')
```
```python
qch = QuantumCircuit(1,1)
qch.h(0)
qch.measure_all()
qch.draw('mpl')
```
```python
aer_sim = Aer.get_backend('aer_simulator')
shots = 1024
qobj = assemble(qch, aer_sim)
results = aer_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
```
```python
state = Statevector.from_instruction(qch)
plot_state_qsphere(state)
```
# <h1><center><span style="color: red;">$\textbf{Random Circuit}$</center></h1>
<font size = '4'>Generate random circuit of arbitrary size and form
```python
from qiskit.circuit.random import random_circuit
Random = random_circuit(4, 3, measure=False)
Random.draw(output='mpl')
```
# <h1><center><span style="color: red;">$\textbf{Quantum Entanglement}$</center></h1>
<font size = '4'> $\textbf{Quantum entanglement}$ is a quantum mechanical phenomenon in which the quantum states of two or more objects have to be described with reference to each other, even though the individual objects may be spatially separated.
For example, it is possible to prepare two particles in a single quantum state such that when one is observed to be spin-up, the other one will always be observed to be spin-down and vice versa, this despite the fact that it is impossible to predict, according to quantum mechanics, which set of measurements will be observed.
### Quantum entanglement has applications in the emerging technologies of quantum computing and quantum cryptography, and has been used to realize quantum teleportation experimentally.
# <h1><center><span style="color: blue;">$\textbf{Bell State or Entanglement circuit}$
<font size = '4'>The Bell states are four specific maximally entangled quantum states of two qubits. They are in a superposition of 0 and 1--that is, a linear combination of the two states. Their entanglement means the following:
The qubit held by Alice (subscript "A") can be 0 as well as 1. If Alice measured her qubit in the standard basis, the outcome would be perfectly random, either possibility 0 or 1 having probability 1/2. But if Bob (subscript "B") then measured his qubit, the outcome would be the same as the one Alice got. So, if Bob measured, he would also get a random outcome on first sight, but if Alice and Bob communicated, they would find out that, although their outcomes seemed random, they are perfectly correlated.
\begin{equation}\begin{aligned}
\textbf{For initial state {00},}\left|\Phi^{+}\right\rangle &=\frac{1}{\sqrt{2}}\left(|0\rangle_{A} \otimes|0\rangle_{B}+|1\rangle_{A} \otimes|1\rangle_{B}\right)(1) \\
\textbf{For initial state {10},}\left|\Phi^{-}\right\rangle &=\frac{1}{\sqrt{2}}\left(|0\rangle_{A} \otimes|0\rangle_{B}-|1\rangle_{A} \otimes|1\rangle_{B}\right)(2) \\
\textbf{For initial state {01},}\left|\Psi^{+}\right\rangle &=\frac{1}{\sqrt{2}}\left(|0\rangle_{A} \otimes|1\rangle_{B}+|1\rangle_{A} \otimes|0\rangle_{B}\right)(3) \\
\textbf{For initial state {11},}\left|\Psi^{-}\right\rangle &=\frac{1}{\sqrt{2}}\left(|0\rangle_{A} \otimes|1\rangle_{B}-|1\rangle_{A} \otimes|0\rangle_{B}\right)(4)
\end{aligned}\end{equation}
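As a matrix-level sanity check of equation (1), one can apply $(\mathrm{CNOT})(H \otimes I)$ to $|00\rangle$ directly in numpy. This sketch is my addition and uses the textbook $|q_0 q_1\rangle$ ordering (note that Qiskit itself orders qubits little-endian):
```python
import numpy as np
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])        # control q0, target q1
psi = CNOT @ np.kron(H, np.eye(2)) @ np.array([1, 0, 0, 0])
print(psi)  # [0.707, 0, 0, 0.707] -> (|00> + |11>)/sqrt(2)
```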
```python
entanglement1 = QuantumCircuit(2,2)
entanglement1.h(0)
entanglement1.cx(0,1)
entanglement1.draw('mpl')
```
```python
state = Statevector.from_instruction(entanglement1)
plot_state_qsphere(state)
```
```python
final_state = execute(entanglement1,backend).result().get_statevector()
from qiskit_textbook.tools import array_to_latex
array_to_latex(final_state, pretext="\\text{Statevector} = ")
```
$\displaystyle
\text{Statevector} = \begin{bmatrix}
\tfrac{1}{\sqrt{2}} \\
0 \\
0 \\
\tfrac{1}{\sqrt{2}}
\end{bmatrix}
$
```python
entanglement2=QuantumCircuit(2,1)
entanglement2.x(0)
entanglement2.h(0)
entanglement2.cx(0,1)
entanglement2.draw('mpl')
```
```python
state = Statevector.from_instruction(entanglement2)
plot_state_qsphere(state)
```
```python
# Let's see the result
final_state = execute(entanglement2,backend).result().get_statevector()
from qiskit_textbook.tools import array_to_latex
array_to_latex(final_state, pretext="\\text{Statevector} = ")
```
$\displaystyle
\text{Statevector} = \begin{bmatrix}
\tfrac{1}{\sqrt{2}} \\
0 \\
0 \\
-\tfrac{1}{\sqrt{2}}
\end{bmatrix}
$
```python
import qiskit.tools.jupyter
%qiskit_version_table
```
C:\Users\User\anaconda3\lib\site-packages\qiskit\aqua\__init__.py:86: DeprecationWarning: The package qiskit.aqua is deprecated. It was moved/refactored to qiskit-terra For more information see <https://github.com/Qiskit/qiskit-aqua/blob/main/README.md#migration-guide>
warn_package('aqua', 'qiskit-terra')
<h3>Version Information</h3><table><tr><th>Qiskit Software</th><th>Version</th></tr><tr><td><code>qiskit-terra</code></td><td>0.18.0</td></tr><tr><td><code>qiskit-aer</code></td><td>0.8.2</td></tr><tr><td><code>qiskit-ignis</code></td><td>0.6.0</td></tr><tr><td><code>qiskit-ibmq-provider</code></td><td>0.15.0</td></tr><tr><td><code>qiskit-aqua</code></td><td>0.9.4</td></tr><tr><td><code>qiskit</code></td><td>0.28.0</td></tr><tr><th>System information</th></tr><tr><td>Python</td><td>3.8.10 (default, May 19 2021, 13:12:57) [MSC v.1916 64 bit (AMD64)]</td></tr><tr><td>OS</td><td>Windows</td></tr><tr><td>CPUs</td><td>2</td></tr><tr><td>Memory (Gb)</td><td>7.912609100341797</td></tr><tr><td colspan='2'>Sun Jul 25 19:12:29 2021 Nepal Standard Time</td></tr></table>
| 88729c515d7688ca42772bcd918fa67a8e542a72 | 156,908 | ipynb | Jupyter Notebook | day2/Superposition, Random circuit and Entanglement.ipynb | locus-ioe/Quantum-Computing-2021 | ba11d76be7d5bf36dbd1e4b92e7f9635f3237bbb | [
"MIT"
] | 12 | 2021-07-23T13:38:20.000Z | 2021-09-07T00:40:09.000Z | day2/Superposition, Random circuit and Entanglement.ipynb | Pratha-Me/Quantum-Computing-2021 | bd9cf9a1165a47c61f9277126f4df04ae5562d61 | [
"MIT"
] | 3 | 2021-07-31T08:43:38.000Z | 2021-07-31T08:43:38.000Z | day2/Superposition, Random circuit and Entanglement.ipynb | Pratha-Me/Quantum-Computing-2021 | bd9cf9a1165a47c61f9277126f4df04ae5562d61 | [
"MIT"
] | 7 | 2021-07-24T06:14:36.000Z | 2021-07-29T22:02:12.000Z | 361.539171 | 38,236 | 0.935759 | true | 1,925 | Qwen/Qwen-72B | 1. YES
2. YES | 0.868827 | 0.815232 | 0.708296 | __label__eng_Latn | 0.786542 | 0.48394 |
# Quantum Fourier Transforms
The **"QFT (Quantum Fourier Transform)"** quantum kata is a series of exercises designed
to teach you the basics of the quantum Fourier transform (QFT). It covers implementing QFT and using
it to perform simple state transformations.
Each task is wrapped in one operation preceded by the description of the task.
Your goal is to fill in the blank (marked with the `// ...` comments)
with some Q# code that solves the task. To verify your answer, run the cell using Ctrl+Enter (⌘+Enter on macOS).
Within each section, tasks are given in approximate order of increasing difficulty;
harder ones are marked with asterisks.
## Part I. Implementing Quantum Fourier Transform
This sequence of tasks uses the implementation of QFT described in Nielsen & Chuang.
All numbers in this kata use big endian encoding: most significant bit of the number
is stored in the first (leftmost) bit/qubit.
### Task 1.1. 1-qubit QFT
**Input:**
A qubit in state $|\psi\rangle = x_0 |0\rangle + x_1 |1\rangle$.
**Goal:**
Apply QFT to this qubit, i.e., transform it to a state
$\frac{1}{\sqrt{2}} \big((x_0 + x_1) |0\rangle + (x_0 - x_1) |1\rangle\big)$.
In other words, transform a basis state $|j\rangle$ into a state $\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot \frac{j}{2}}|1\rangle\big)$ .
```qsharp
%kata T11_OneQubitQFT
operation OneQubitQFT (q : Qubit) : Unit is Adj+Ctl {
H(q);
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-1.1.-1-qubit-QFT).*
### Task 1.2. Rotation gate
**Inputs:**
1. A qubit in state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$.
2. An integer k $\geq$ 0.
**Goal:**
Change the state of the qubit to $\alpha |0\rangle + \beta \cdot e^{\frac{2\pi i}{2^{k}}} |1\rangle$.
> Be careful about not introducing an extra global phase!
This is going to be important in the later tasks.
```qsharp
%kata T12_Rotation
open Microsoft.Quantum.Convert;
open Microsoft.Quantum.Math;
operation Rotation (q : Qubit, k : Int) : Unit is Adj+Ctl {
// The R1 gate leaves |0> untouched,
// sends |1> to e^(i * theta)|1>
let theta = (2.0 * PI())/(2.0 ^ IntAsDouble(k));
R1(theta, q);
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-1.2.-Rotation-gate).*
### Task 1.3. Prepare binary fraction exponent (classical input)
**Inputs:**
1. A qubit in state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$.
2. An array of $n$ bits $[j_1, j_2, ..., j_n]$, stored as `Int[]` ($ j_k \in \{0,1\}$).
**Goal:**
Change the state of the qubit to $\alpha |0\rangle + \beta \cdot e^{2\pi i \cdot 0.j_1 j_2 ... j_n} |1\rangle$,
where $0.j_1 j_2 ... j_n$ is a binary fraction, similar to decimal fractions:
$$0.j_1 j_2 ... j_n = j_1 \cdot \frac{1}{2^1} + j_2 \cdot \frac{1}{2^2} + ... j_n \cdot \frac{1}{2^n}$$
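A quick numeric check of this definition (my addition, in plain Python rather than Q#):
```python
# 0.j1 j2 j3 with bits [1, 0, 1] is 1/2 + 0/4 + 1/8 = 0.625
j = [1, 0, 1]
value = sum(bit / 2 ** (k + 1) for k, bit in enumerate(j))
print(value)  # 0.625
```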
```qsharp
%kata T13_BinaryFractionClassical
operation BinaryFractionClassical (q : Qubit, j : Int[]) : Unit is Adj+Ctl {
    // e^(2*pi*i * 0.j1 j2 ... jn) = e^(2*pi*i * j1/2) * e^(2*pi*i * j2/4) * ... * e^(2*pi*i * jn/2^n),
    // so apply the rotation R_k for every bit j_k that equals 1.
for i in 0 .. Length(j) - 1 {
if (j[i] == 1) {
Rotation(q, i + 1);
}
}
}
```
1-bit 0 = [0]
1-bit 1 = [1]
2-bit 0 = [0,0]
2-bit 1 = [0,1]
2-bit 2 = [1,0]
2-bit 3 = [1,1]
3-bit 0 = [0,0,0]
3-bit 1 = [0,0,1]
3-bit 2 = [0,1,0]
3-bit 3 = [0,1,1]
3-bit 4 = [1,0,0]
3-bit 5 = [1,0,1]
3-bit 6 = [1,1,0]
3-bit 7 = [1,1,1]
4-bit 0 = [0,0,0,0]
4-bit 1 = [0,0,0,1]
4-bit 2 = [0,0,1,0]
4-bit 3 = [0,0,1,1]
4-bit 4 = [0,1,0,0]
4-bit 5 = [0,1,0,1]
4-bit 6 = [0,1,1,0]
4-bit 7 = [0,1,1,1]
4-bit 8 = [1,0,0,0]
4-bit 9 = [1,0,0,1]
4-bit 10 = [1,0,1,0]
4-bit 11 = [1,0,1,1]
4-bit 12 = [1,1,0,0]
4-bit 13 = [1,1,0,1]
4-bit 14 = [1,1,1,0]
4-bit 15 = [1,1,1,1]
5-bit 0 = [0,0,0,0,0]
5-bit 1 = [0,0,0,0,1]
5-bit 2 = [0,0,0,1,0]
5-bit 3 = [0,0,0,1,1]
5-bit 4 = [0,0,1,0,0]
5-bit 5 = [0,0,1,0,1]
5-bit 6 = [0,0,1,1,0]
5-bit 7 = [0,0,1,1,1]
5-bit 8 = [0,1,0,0,0]
5-bit 9 = [0,1,0,0,1]
5-bit 10 = [0,1,0,1,0]
5-bit 11 = [0,1,0,1,1]
5-bit 12 = [0,1,1,0,0]
5-bit 13 = [0,1,1,0,1]
5-bit 14 = [0,1,1,1,0]
5-bit 15 = [0,1,1,1,1]
5-bit 16 = [1,0,0,0,0]
5-bit 17 = [1,0,0,0,1]
5-bit 18 = [1,0,0,1,0]
5-bit 19 = [1,0,0,1,1]
5-bit 20 = [1,0,1,0,0]
5-bit 21 = [1,0,1,0,1]
5-bit 22 = [1,0,1,1,0]
5-bit 23 = [1,0,1,1,1]
5-bit 24 = [1,1,0,0,0]
5-bit 25 = [1,1,0,0,1]
5-bit 26 = [1,1,0,1,0]
5-bit 27 = [1,1,0,1,1]
5-bit 28 = [1,1,1,0,0]
5-bit 29 = [1,1,1,0,1]
5-bit 30 = [1,1,1,1,0]
5-bit 31 = [1,1,1,1,1]
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-1.3.-Prepare-binary-fraction-exponent-(classical-input)).*
### Task 1.4. Prepare binary fraction exponent (quantum input)
**Inputs:**
1. A qubit in state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$.
2. A register of $n$ qubits in state $|j_1 j_2 ... j_n\rangle$.
**Goal:**
Change the state of the input
from $(\alpha |0\rangle + \beta |1\rangle) \otimes |j_1 j_2 ... j_n\rangle$
to $(\alpha |0\rangle + \beta \cdot e^{2\pi i \cdot 0.j_1 j_2 ... j_n} |1\rangle) \otimes |j_1 j_2 ... j_n\rangle$,
where $0.j_1 j_2 ... j_n$ is a binary fraction corresponding to the basis state $j_1 j_2 ... j_n$ of the register.
> The register of qubits can be in superposition as well;
the behavior in this case is defined by behavior on the basis states and the linearity of unitary transformations.
```qsharp
%kata T14_BinaryFractionQuantum
operation BinaryFractionQuantum (q : Qubit, jRegister : Qubit[]) : Unit is Adj+Ctl {
for i in 0 .. Length(jRegister) - 1 {
Controlled Rotation ([jRegister[i]], (q, i + 1));
}
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-1.4.-Prepare-binary-fraction-exponent-(quantum-input)).*
### Task 1.5. Prepare binary fraction exponent in place (quantum input)
**Input:**
A register of $n$ qubits in state $|j_1 j_2 ... j_n \rangle$.
**Goal:**
Change the state of the register
from $|j_1\rangle \otimes |j_2 ... j_n\rangle$
to $\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_1 j_2 ... j_n} |1\rangle \big) \otimes |j_2 ... j_n\rangle$.
> The register of qubits can be in superposition as well;
the behavior in this case is defined by behavior on the basis states and the linearity of unitary transformations.
<details>
<summary><b>Need a hint? Click here</b></summary>
This task is very similar to task 1.4, but the digit $j_1$ is encoded in-place, using task 1.1.
</details>
```qsharp
%kata T15_BinaryFractionQuantumInPlace
operation BinaryFractionQuantumInPlace (register : Qubit[]) : Unit is Adj+Ctl {
OneQubitQFT(register[0]);
for ind in 1 .. Length(register) - 1 {
Controlled Rotation([register[ind]], (register[0] , ind + 1));
}
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-1.5.-Prepare-binary-fraction-exponent-in-place-(quantum-input)).*
### Task 1.6. Reverse the order of qubits
**Input:**
A register of $n$ qubits in state $|x_1 x_2 ... x_n \rangle$.
**Goal:**
Reverse the order of qubits, i.e., convert the state of the register to $|x_n ... x_2 x_1\rangle$.
```qsharp
%kata T16_ReverseRegister
operation ReverseRegister (register : Qubit[]) : Unit is Adj+Ctl {
let N = Length(register);
for ind in 0 .. N / 2 - 1 {
SWAP(register[ind], register[N - 1 - ind]);
}
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-1.6.-Reverse-the-order-of-qubits).*
### Task 1.7. Quantum Fourier transform
**Input:**
A register of $n$ qubits in state $|j_1 j_2 ... j_n \rangle$.
**Goal:**
Apply quantum Fourier transform to the input register, i.e., transform it to a state
$$\frac{1}{\sqrt{2^{n}}} \sum_{k=0}^{2^n-1} e^{2\pi i \cdot \frac{jk}{2^{n}}} |k\rangle = $$
$$\begin{align}= &\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_n} |1\rangle\big) \otimes \\
\otimes &\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_{n-1} j_n} |1\rangle\big) \otimes ... \otimes \\
\otimes &\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_1 j_2 ... j_{n-1} j_n} |1\rangle\big)\end{align}$$
> The register of qubits can be in superposition as well;
the behavior in this case is defined by behavior on the basis states and the linearity of unitary transformations.
> You can do this with a library call, but we recommend
implementing the algorithm yourself for learning purposes, using the previous tasks.
<details>
<summary><b>Need a hint? Click here</b></summary>
Consider preparing a different state first and transforming it to the goal state:
$\frac{1}{\sqrt{2}} \big(|0\rangle + exp(2\pi i \cdot 0.j_1 j_2 ... j_{n-1} j_n) |1\rangle\big) \otimes ...
\otimes \frac{1}{\sqrt{2}} \big(|0\rangle + exp(2\pi i \cdot 0.j_{n-1} j_n) |1\rangle\big)
\otimes \frac{1}{\sqrt{2}} \big(|0\rangle + exp(2\pi i \cdot 0.j_n) |1\rangle\big)$
</details>
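Before (or after) implementing the kata solution, a classical cross-check can help: the QFT is just the unitary DFT matrix $F_{kj} = e^{2\pi i\,jk/2^n}/\sqrt{2^n}$. A numpy sketch verifying unitarity (my addition, outside the kata):
```python
import numpy as np
n = 3
N = 2 ** n
j, k = np.meshgrid(np.arange(N), np.arange(N))
F = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)   # QFT as a matrix
print(np.allclose(F.conj().T @ F, np.eye(N)))     # True: F is unitary
```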
```qsharp
%kata T17_QuantumFourierTransform
operation QuantumFourierTransform (register : Qubit[]) : Unit is Adj+Ctl {
// If I apply BinaryFractionQuantumInPlace to the entire register, I get:
    // 1/sqrt(2)(|0> + e^(2*pi*i*0.j1j2...jn)|1>) x |j2...jn>
let N = Length(register);
for i in 0 .. N - 1 {
BinaryFractionQuantumInPlace (register[i..N-1]);
}
ReverseRegister(register);
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-1.7.-Quantum-Fourier-transform).*
### Task 1.8. Inverse QFT
**Input:**
A register of $n$ qubits in state $|j_1 j_2 ... j_n \rangle$.
**Goal:**
Apply inverse QFT to the input register, i.e., transform it to a state
$\frac{1}{\sqrt{2^{n}}} \sum_{k=0}^{2^n-1} e^{-2\pi i \cdot \frac{jk}{2^{n}}} |k\rangle$.
<details>
<summary><b>Need a hint? Click here</b></summary>
Inverse QFT is literally the inverse transformation of QFT.
Do you know a quantum way to express this?
</details>
```qsharp
%kata T18_InverseQFT
operation InverseQFT (register : Qubit[]) : Unit is Adj+Ctl {
Adjoint QuantumFourierTransform(register);
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-1.8.-Inverse-QFT).*
## Part II. Using the Quantum Fourier Transform
This section offers you tasks on state preparation and state analysis
that can be solved using QFT (or inverse QFT). It is possible to solve them
without QFT, but we recommend that you to try and come up with a QFT-based solution.
### Task 2.1. Prepare an equal superposition of all basis states
**Input:**
A register of $n$ qubits in state $|0...0\rangle$.
**Goal:**
Prepare an equal superposition of all basis vectors from $|0...0\rangle$ to $|1...1\rangle$
(i.e., state $\frac{1}{\sqrt{2^{n}}} \big(|0...0\rangle + ... + |1...1\rangle\big)$.
```qsharp
%kata T21_PrepareEqualSuperposition
operation PrepareEqualSuperposition (register : Qubit[]) : Unit is Adj+Ctl {
ApplyToEachCA(H, register);
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-2.1.-Prepare-an-equal-superposition-of-all-basis-states).*
### Task 2.2. Prepare a periodic state
**Inputs:**
1. A register of $n$ qubits in state $|0...0\rangle$.
2. An integer frequency F ($0 \leq F \leq 2^{n}-1$).
**Goal:**
Prepare a periodic state with frequency F on this register:
$$\frac{1}{\sqrt{2^{n}}} \sum_k e^{2\pi i \cdot \frac{Fk}{2^{n}}} |k\rangle$$
> For example, for $n = 2$ and $F = 1$ the goal state is $\frac{1}{2}\big(|0\rangle + i|1\rangle - |2\rangle - i|3\rangle\big)$.
> If you're using `DumpMachine` to debug your solution,
remember that this kata uses big endian encoding of states,
while `DumpMachine` uses little endian encoding.
You can use [`%config` magic command](https://docs.microsoft.com/en-us/qsharp/api/iqsharp-magic/config)
to reconfigure `DumpMachine` to use big endian encoding or bit strings.
<details>
<summary><b>Need a hint? Click here</b></summary>
Which basis state can be mapped to this state using QFT?
</details>
```qsharp
%kata T22_PreparePeriodicState
open Microsoft.Quantum.Arithmetic;
open Microsoft.Quantum.Arrays;
open Microsoft.Quantum.Convert;
operation PreparePeriodicState (register : Qubit[], F : Int) : Unit is Adj+Ctl {
    // Euler: e^(i*theta) = cos(theta) + i*sin(theta), so e^(i*pi/2) = i.
    // For n = 2, F = 1 the target amplitudes are e^(2*pi*i*F*k/4) for k = 0..3,
    // i.e. phases 0, pi/2, pi, 3*pi/2, giving amplitudes 1, i, -1, -i as in the example.
let bitsBE = Reversed(IntAsBoolArray(F, Length(register)));
ApplyPauliFromBitString(PauliX, true, bitsBE, register); // This converts the register to a representation of F.
//QFT(BigEndian(register));
QuantumFourierTransform(register);
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-2.2.-Prepare-a-periodic-state).*
### Task 2.3. Prepare a periodic state with alternating $1$ and $-1$ amplitudes
**Input:**
A register of $n$ qubits in state $|0...0\rangle$.
**Goal:**
Prepare a periodic state with alternating $1$ and $-1$ amplitudes of basis states:
$$\frac{1}{\sqrt{2^{n}}} \big(|0\rangle - |1\rangle + |2\rangle - |3\rangle + ... - |2^{n}-1\rangle\big)$$
> For example, for $n = 2$ the goal state is $\frac{1}{2} \big(|0\rangle - |1\rangle + |2\rangle - |3\rangle\big)$.
<details>
<summary><b>Need a hint? Click here</b></summary>
Which basis state can be mapped to this state using QFT? Which frequency would allow you to use task 2.2 here?
</details>
```qsharp
%kata T23_PrepareAlternatingState
operation PrepareAlternatingState (register : Qubit[]) : Unit is Adj+Ctl {
X(Head(register));
QuantumFourierTransform(register);
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-2.3.-Prepare-a-periodic-state-with-alternating-$1$-and-$-1$-amplitudes).*
### Task 2.4. Prepare an equal superposition of all even basis states
**Input:**
A register of $n$ qubits in state $|0...0\rangle$.
**Goal:**
Prepare an equal superposition of all even basis vectors:
$\frac{1}{\sqrt{2^{n-1}}} \big(|0\rangle + |2\rangle + ... + |2^{n}-2\rangle\big)$.
<details>
<summary><b>Need a hint? Click here</b></summary>
Which superposition of two basis states can be mapped to this state using QFT?
Use the solutions to tasks 2.1 and 2.3 to figure out the answer.
</details>
```qsharp
%kata T24_PrepareEqualSuperpositionOfEvenStates
operation PrepareEqualSuperpositionOfEvenStates (register : Qubit[]) : Unit is Adj+Ctl {
H(Head(register));
QFT(BigEndian(register));
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-2.4.-Prepare-an-equal-superposition-of-all-even-basis-states).*
### Task 2.5. Prepare a square-wave signal
**Input:**
A register of $n\geq2$ qubits in state $|0...0\rangle$.
**Goal:**
Prepare a periodic state with alternating $1, 1, -1, -1$ amplitudes of basis states:
$$\frac{1}{\sqrt{2^{n}}} \big(|0\rangle + |1\rangle - |2\rangle - |3\rangle + ... - |2^{n}-2\rangle - |2^{n}-1\rangle\big)$$
<details>
<summary><b>Need a hint? Click here</b></summary>
Which superposition of two basis states can be mapped to this state using QFT?
Remember that sum of two complex amplitudes can be a real number if their imaginary parts cancel out.
</details>
```qsharp
%kata T25_PrepareSquareWaveSignal
operation PrepareSquareWaveSignal (register : Qubit[]) : Unit is Adj+Ctl {
X(register[1]);
// |010...0⟩
H(register[0]);
// |010...0⟩ + |110...0⟩
T(register[0]);
within { X(register[0]); }
apply { Adjoint T(register[0]); }
QFT(BigEndian(register));
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-2.5.-Prepare-a-square-wave-signal).*
### Task 2.6. Get the frequency of a signal
**Input:**
A register of $n\geq2$ qubits in state
$\frac{1}{\sqrt{2^{n}}} \sum_k e^{2\pi i \cdot \frac{Fk}{2^{n}}} |k\rangle$, $0\leq F\leq 2^{n}-1$.
**Goal:**
Return the frequency F of the "signal" encoded in this state.
```qsharp
%kata T26_Frequency
open Microsoft.Quantum.Arithmetic;
open Microsoft.Quantum.Arrays;
open Microsoft.Quantum.Measurement;
operation Frequency (register : Qubit[]) : Int {
Adjoint QFT(BigEndian(register));
let bitsBE = MultiM(register);
return ResultArrayAsInt(Reversed(bitsBE));
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-2.6.-Get-the-frequency-of-a-signal).*
## Part III. Powers and roots of the QFT
### Task 3.1 Implement powers of the QFT
**Inputs:**
1. A register of $n$ qubits in an arbitrary state.
2. An integer $P$ ($ 0 \leq P \leq 2^{20} - 1 $).
**Goal:**
Transform state $|x\rangle$ into state $ QFT^{P} |x\rangle$, where $QFT$ is the quantum Fourier transform implemented in part I.
**Note:**
Your solution has to run fast for any $P$ in the given range!
```qsharp
%kata T31_QFTPower
operation QFTPower (P : Int, inputRegister : Qubit[]) : Unit is Adj+Ctl {
        // The QFT, like the unitary DFT matrix, satisfies QFT^4 = I, so only P mod 4 matters.
        for _ in 1 .. (P % 4) {
QFT(BigEndian(inputRegister));
}
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-3.1-Implement-powers-of-the-QFT).*
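The `P % 4` trick works because the unitary DFT matrix has order 4 ($F^2$ is the index-reversal permutation, so $F^4 = I$). A numpy check of that fact (my addition, outside the kata):
```python
import numpy as np
N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N))
F = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)
print(np.allclose(np.linalg.matrix_power(F, 4), np.eye(N)))  # True
```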
### Task 3.2. Implement roots of the QFT
**Inputs:**
1. A register of $n$ qubits in an arbitrary state.
2. An integer $P$ ($ 2 \leq P \leq 8 $).
**Goal:**
Transform state $|x\rangle$ into state $V |x\rangle$, where $V^{P} = QFT$. In other words, implement a $P^{th}$ root of the $QFT$. You can implement the required unitary up to a global phase.
```qsharp
operation Circ (qs : LittleEndian, alpha : Double) : Unit is Adj + Ctl {
within {
Adjoint QFTLE(qs);
} apply {
ApplyDiagonalUnitary(
[0.0, -alpha, -2.0 * alpha, alpha],
qs
);
}
}
```
<ul><li>Circ</li></ul>
```qsharp
%kata T32_QFTRoot
open Microsoft.Quantum.Arithmetic;
open Microsoft.Quantum.Convert;
open Microsoft.Quantum.Math;
operation QFTRoot (P : Int, inputRegister : Qubit[]) : Unit is Adj + Ctl {
use aux = Qubit[2];
let Q = QFT;
let Q2 = OperationPowCA(Q, 2);
within {
ApplyToEachCA(H, aux);
Controlled Adjoint Q([aux[0]], BigEndian(inputRegister));
Controlled Adjoint Q2([aux[1]], BigEndian(inputRegister));
} apply {
Circ(LittleEndian(aux), PI() / (2.0 * IntAsDouble(P)));
}
}
```
Success!
*Can't come up with a solution? See the explained solution in the [QFT Workbook](./Workbook_QFT.ipynb#Task-3.2.-Implement-roots-of-the-QFT).*
| 10cd8c9f4e203745cbb9089b233b996bdc363614 | 32,420 | ipynb | Jupyter Notebook | QFT/QFT.ipynb | cfhirsch/QuantumKatas | dd531c8c4b9034ef1dfbb303a6efe1d42fe7f2cb | [
"MIT"
] | null | null | null | QFT/QFT.ipynb | cfhirsch/QuantumKatas | dd531c8c4b9034ef1dfbb303a6efe1d42fe7f2cb | [
"MIT"
] | null | null | null | QFT/QFT.ipynb | cfhirsch/QuantumKatas | dd531c8c4b9034ef1dfbb303a6efe1d42fe7f2cb | [
"MIT"
] | null | null | null | 29.339367 | 203 | 0.509315 | true | 6,692 | Qwen/Qwen-72B | 1. YES
2. YES | 0.872347 | 0.737158 | 0.643058 | __label__eng_Latn | 0.772946 | 0.33237 |
### Calculations used for Tsmp
```
from sympy import *
init_printing()
```
#### Post synaptic dendritic current
Tfd (Tf difference) below represents the difference between $t$ (the current time) and $Tf[i]$ (the last pre-synaptic spike time for dendrite $i$), i.e., the result of $t - Tf[i]$.
```
Tfd = var('Tfd')
```
Dt = Delta time
A Dirac delta formula from 'Theoretical Neuroscience' (p. 405): $\delta(t) = \lim_{\Delta t \to 0} f(t)$, where $f(t) = 1/\Delta t$ if $-\Delta t/2 < t < \Delta t/2$ and $f(t) = 0$ otherwise. The delta can therefore be approximated by $1/\Delta t$ inside that window; here the time since the last spike plays that role, so it is represented as $1/Tfd$.
Formula 1 in the Gupta paper is $T_d[i] \cdot \frac{dI_d[i][t]}{dt} = -I_d[i][t] + R_d[i] \cdot w[i] \cdot \frac{1}{Tfd}$
$T_d[i]$ can be brought to the other side of the equation by multiplying both sides by $1/T_d[i]$. This produces $\frac{dI_d[i][t]}{dt} = \frac{1}{T_d[i]} \left(-I_d[i][t] + R_d[i] \cdot w[i] \cdot \frac{1}{Tfd}\right)$
The fundamental theorem of calculus says that $\int \frac{df(x)}{dx}\,dx = f(x)$, i.e., an integral can cancel a derivative out of an equation. To get $I_d[i]$ from formula 1, the integral of both sides should be taken to remove the derivative from the formula.
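A one-line sympy check of that identity (my addition, using a throwaway symbol and function):
```python
from sympy import Function, symbols, integrate

x = symbols('x')
f = Function('f')
integrate(f(x).diff(x), x)  # returns f(x), as the fundamental theorem promises
```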
```
Id, Td, Rd, W = var('I_d T_d R_d W')
```
```
Id2 = Id
```
```
Id = ((1/Td)*-Id2+Rd*W*(1/Tfd))
```
```
Id.integrate(Id2)
```
#### Post synaptic direct synapse soma current
15 input neurons. $W$ = weight, $1/Tfd$ = Dirac approximation. Using the formula $\sum_i (w_i \cdot \delta_i)$, the output below is produced.
```
SummedWeights1 = var('SummedWeights1')
```
```
DiracWeightedSum = SummedWeights1/Tfd
```
```
Is_1, Ts_1 = var('Is_1 Ts_1')
```
```
Is_1b = Is_1
```
```
Is_1 = (1/Ts_1)*(-Is_1b+DiracWeightedSum)
```
###### Total soma membrane potential
```
DendriteGroup = [Id.integrate(Id2)]*15
```
```
SynapseToSoma = Is_1.integrate(Is_1b)
```
#### Total soma membrane potential calculation
It was found that end-result variables representing the prior component calculations (SummedDendriteGroupX, SynapseToSomaX) could be applied to solving for $U_m$ in the equation.
```
Um, Tm, Rm = var('U_m T_m R_m')
```
```
Uu2 = (1/Tm)*(-Um+(Rm*(sum(DendriteGroup)+SynapseToSoma)))
```
```
Uu2.integrate(Um)
```
Below are shown the end-result variables, which can be used to simplify the formula in place of the one above.
```
DiracWeightedSum_Z, Is_Z = var('DiracWeightedSum_Z Is_Z')
```
```
Is_Zb = Is_Z
```
```
Is_Z = (1/Ts_1)*(-Is_Zb+DiracWeightedSum_Z)
```
```
SynapseToSoma_Z = Is_Z.integrate(Is_Zb)
```
```
SynapseToSoma_Z
```
```
SummedDendriteGroupX, SynapseToSomaX = var('SummedDendriteGroupX SynapseToSomaX')
```
```
Uu3 = (1/Tm)*(-Um+(Rm*(SummedDendriteGroupX+SynapseToSomaX)))
```
The integration below solves the Tsmp equation for $U_m$ and therefore provides the Tsmp (total soma membrane potential) result wanted.
```
Uu3.integrate(Um)
```
| bdc7b805f754f046ee9982ba7b5c16d5de49b013 | 25,498 | ipynb | Jupyter Notebook | notebooks/TsmpCalcs.ipynb | Jbwasse2/snn-rl | 29b040655f432bd390bc9d835b86cbfdf1a622e4 | [
"MIT"
] | 68 | 2015-04-16T11:14:31.000Z | 2022-03-11T07:43:51.000Z | notebooks/TsmpCalcs.ipynb | Jbwasse2/snn-rl | 29b040655f432bd390bc9d835b86cbfdf1a622e4 | [
"MIT"
] | 6 | 2015-11-24T04:53:57.000Z | 2019-10-21T02:00:15.000Z | notebooks/TsmpCalcs.ipynb | Jbwasse2/snn-rl | 29b040655f432bd390bc9d835b86cbfdf1a622e4 | [
"MIT"
] | 25 | 2015-12-27T10:04:53.000Z | 2021-01-03T03:25:18.000Z | 55.794311 | 5,523 | 0.735783 | true | 936 | Qwen/Qwen-72B | 1. YES
2. YES | 0.942507 | 0.810479 | 0.763882 | __label__eng_Latn | 0.888958 | 0.613086 |
## Nonlinear Dimensionality Reduction
G. Richards (2016, 2018), based on materials from Ivezic, Connolly, Miller, Leighly, and VanderPlas.
Today we will talk about the concepts of
* manifold learning
* nonlinear dimensionality reduction
Specifically using the following algorithms
* local linear embedding (LLE)
* isometric mapping (IsoMap)
* t-distributed Stochastic Neighbor Embedding (t-SNE)
Let's start by my echoing the brief note of caution given in Adam Miller's notebook: "astronomers will often try to derive physical insight from PCA eigenspectra or eigentimeseries, but this is not advisable as there is no physical reason for the data to be linearly and orthogonally separable". Moreover, physical components are (generally) positive definite. So, PCA is great for dimensional reduction, but for doing physics there are generally better choices.
While NMF "solves" the issue of negative components, it is still a linear process. For data with non-linear correlations, an entire field, known as [Manifold Learning](http://scikit-learn.org/stable/modules/manifold.html) and [nonlinear dimensionality reduction]( https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction), has been developed, with several algorithms available via the [`sklearn.manifold`](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.manifold) module.
For example, if your data set looks like this:
Then PCA is going to give you something like this.
Clearly not very helpful!
What you were really hoping for is something more like the results below. For more examples see
[Vanderplas & Connolly 2009](http://iopscience.iop.org/article/10.1088/0004-6256/138/5/1365/meta;jsessionid=48A569862A424ECCAEECE2A900D9837B.c3.iopscience.cld.iop.org)
## Local Linear Embedding
[Local Linear Embedding](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.LocallyLinearEmbedding.html#sklearn.manifold.LocallyLinearEmbedding) attempts to embed high-$D$ data in a lower-$D$ space. Crucially it also seeks to preserve the geometry of the local "neighborhoods" around each point. In the case of the "S" curve, it seeks to unroll the data. The steps are
Step 1: define local geometry
- local neighborhoods determined from $k$ nearest neighbors.
- for each point calculate weights that reconstruct a point from its $k$ nearest
neighbors via
$$
\begin{equation}
\mathcal{E}_1(W) = \left|X - WX\right|^2,
\end{equation}
$$
where $X$ is an $N\times K$ matrix and $W$ is an $N\times N$ matrix that minimizes the reconstruction error.
Essentially this is finding the hyperplane that describes the local surface at each point within the data set. So, imagine that you have a bunch of square tiles and you are trying to tile the surface with them.
Step 2: embed within a lower dimensional space
- set all $W_{ij}=0$ except when point $j$ is one of the $k$ nearest neighbors of point $i$.
- $W$ becomes very sparse for $k \ll N$ (only $Nk$ entries in $W$ are non-zero).
- minimize
$$
\begin{equation}
\mathcal{E}_2(Y) = \left|Y - W Y\right|^2,
\end{equation}
$$
with $W$ fixed, to find an $N$ by $d$ matrix ($d$ is the new dimensionality).
Step 1 requires a nearest-neighbor search.
Step 2 requires an
eigenvalue decomposition of the matrix $C_W \equiv (I-W)^T(I-W)$.
LLE has been applied to data as diverse as galaxy spectra, stellar spectra, and photometric light curves. It was introduced by [Roweis & Saul (2000)](https://www.ncbi.nlm.nih.gov/pubmed/11125150).
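To make Step 1 concrete, here is a hand-rolled sketch of the weight solve for a single point (my addition; scikit-learn does this internally, with its own regularization, for every point at once):
```python
import numpy as np
rng = np.random.RandomState(0)
X = rng.normal(size=(100, 3))
i, k = 0, 5
d2 = np.sum((X - X[i]) ** 2, axis=1)
nbrs = np.argsort(d2)[1:k + 1]             # k nearest neighbors, excluding x_i itself
Z = X[nbrs] - X[i]                         # neighbors centered on x_i
G = Z @ Z.T                                # local Gram matrix
G += 1e-3 * np.trace(G) * np.eye(k)        # regularize (needed when k > dimension)
w = np.linalg.solve(G, np.ones(k))
w /= w.sum()                               # reconstruction weights sum to 1
print(np.linalg.norm(X[i] - w @ X[nbrs]))  # small reconstruction error
```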
Scikit-Learn's call to LLE is as follows, with a more detailed example already having been given above.
```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
X = np.random.normal(size=(1000,2)) # 1000 points in 2D
R = np.random.random((2,10)) # projection matrix
X = np.dot(X,R) # now a 2D linear manifold in 10D space
k = 5 # Number of neighbors to use in fit
n = 2 # Number of dimensions to fit
lle = LocallyLinearEmbedding(k,n)
lle.fit(X)
proj = lle.transform(X) # 100x2 projection of the data
```
See what LLE does for the digits data, using the 7 nearest neighbors and 2 components.
```python
# Execute this cell to load the digits sample
%matplotlib inline
import numpy as np
from sklearn.datasets import load_digits
from matplotlib import pyplot as plt
digits = load_digits()
grid_data = np.reshape(digits.data[0], (8,8)) #reshape to 8x8
plt.imshow(grid_data, interpolation = "nearest", cmap = "bone_r")
print(grid_data)
X = digits.data
y = digits.target
```
```python
#LLE
from sklearn.manifold import LocallyLinearEmbedding
k = 7 # Number of neighbors to use in fit
n = 2 # Number of dimensions to fit
lle = LocallyLinearEmbedding(k,n)
lle.fit(X)
X_reduced = lle.transform(X)
plt.scatter(X_reduced[:,0], X_reduced[:,1], c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
## Isometric Mapping
Isometric Mapping (IsoMap) is based on the multi-dimensional scaling (MDS) framework. It was introduced in the same volume of *Science* as the article above. See [Tenenbaum, de Silva, & Langford (2000)](https://www.ncbi.nlm.nih.gov/pubmed/?term=A+Global+Geometric+Framework+for+Nonlinear+Dimensionality+Reduction).
Geodesic curves are used to recover the non-linear structure.
In Scikit-Learn [IsoMap](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.Isomap.html) is implemented as follows:
```python
# Execute this cell
import numpy as np
from sklearn.manifold import Isomap
XX = np.random.normal(size=(1000,2)) # 1000 points in 2D
R = np.random.random((2,10)) # projection matrix
XX = np.dot(XX,R) # X is a 2D manifold in 10D space
k = 5 # number of neighbors
n = 2 # number of dimensions
iso = Isomap(k,n)
iso.fit(XX)
proj = iso.transform(XX) # 1000x2 projection of the data
```
Try 7 neighbors and 2 dimensions on the digits data.
```python
# IsoMap
from sklearn.manifold import Isomap
k = 7 # Number of neighbors to use in fit
n = 2 # Number of dimensions to fit
iso = Isomap(k,n)
iso.fit(X)
X_reduced = iso.transform(X)
plt.scatter(X_reduced[:,0], X_reduced[:,1], c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
## t-SNE
[t-distributed Stochastic Neighbor Embedding (t-SNE)](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding) is not discussed in the book, Scikit-Learn does have a [t-SNE implementation](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) and it is well worth mentioning this manifold learning algorithm too. SNE itself was developed by [Hinton & Roweis](http://www.cs.toronto.edu/~fritz/absps/sne.pdf) with the "$t$" part being added by [van der Maaten & Hinton](http://jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf). It works like the other manifold learning algorithms.
Try it on the digits data. You'll need to import `TSNE` from `sklearn.manifold`, instantiate it with 2 components, then do a `fit_transform` on the original data.
```python
# t-SNE
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2)
X_reduced = tsne.fit_transform(X)
plt.scatter(X_reduced[:,0], X_reduced[:,1] , c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
You'll know if you have done it right if you understand Adam Miller's comment "Holy freakin' smokes. That is magic. (It's possible we just solved science)."
Personally, I think that some exclamation points may be needed in there!
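One caveat worth knowing: t-SNE's main tuning knob is `perplexity` (roughly, the effective number of neighbors), and the embedding can look quite different as it changes. A quick comparison sketch (my addition; it reuses `X`, `y`, `TSNE`, and `plt` from above and is slow to run):
```python
for perp in (5, 30, 50):
    emb = TSNE(n_components=2, perplexity=perp).fit_transform(X)
    plt.figure()
    plt.scatter(emb[:, 0], emb[:, 1], c=y, cmap="nipy_spectral", edgecolor="None", s=8)
    plt.title("perplexity = %d" % perp)
```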
What's even more illuminating is to make the plot using the actual digits to plot the points. Then you can see why certain digits are alike or split into multiple regions. Can you explain the patterns you see here?
```python
# Execute this cell
from matplotlib import offsetbox
#----------------------------------------------------------------------
# Scale and visualize the embedding vectors
def plot_embedding(X):
x_min, x_max = np.min(X, 0), np.max(X, 0)
X = (X - x_min) / (x_max - x_min)
plt.figure()
ax = plt.subplot(111)
for i in range(X.shape[0]):
#plt.text(X[i, 0], X[i, 1], str(digits.target[i]), color=plt.cm.Set1(y[i] / 10.), fontdict={'weight': 'bold', 'size': 9})
plt.text(X[i, 0], X[i, 1], str(digits.target[i]), color=plt.cm.nipy_spectral(y[i]/9.))
shown_images = np.array([[1., 1.]]) # just something big
for i in range(digits.data.shape[0]):
dist = np.sum((X[i] - shown_images) ** 2, 1)
if np.min(dist) < 4e-3:
# don't show points that are too close
continue
shown_images = np.r_[shown_images, [X[i]]]
imagebox = offsetbox.AnnotationBbox(offsetbox.OffsetImage(digits.images[i], cmap=plt.cm.gray_r), X[i])
ax.add_artist(imagebox)
plt.xticks([]), plt.yticks([])
plot_embedding(X_reduced)
plt.show()
```
With the remainder of time in class today, play with the arguments of the algorithms that we have discussed this week and/or try running them on a different data set. For example the iris data set or one of the other samples of data that are included with Scikit-Learn. Or maybe have a look through some of these public data repositories:
- [https://github.com/caesar0301/awesome-public-datasets?utm_content=buffer4245d&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer](https://github.com/caesar0301/awesome-public-datasets?utm_content=buffer4245d&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer)
- [http://www.datasciencecentral.com/m/blogpost?id=6448529%3ABlogPost%3A318739](http://www.datasciencecentral.com/m/blogpost?id=6448529%3ABlogPost%3A318739)
- [http://www.kdnuggets.com/2015/04/awesome-public-datasets-github.html](http://www.kdnuggets.com/2015/04/awesome-public-datasets-github.html)
| 30d6db95ab1ac28cc04763edc0d2a3c5ad10df58 | 142,816 | ipynb | Jupyter Notebook | notebooks/NonlinearDimensionReduction.ipynb | ejh92/PHYS_T480_F18 | 8aa8bdcb230ef36fe4fab3c8d689e59e4be59366 | [
"MIT"
] | null | null | null | notebooks/NonlinearDimensionReduction.ipynb | ejh92/PHYS_T480_F18 | 8aa8bdcb230ef36fe4fab3c8d689e59e4be59366 | [
"MIT"
] | null | null | null | notebooks/NonlinearDimensionReduction.ipynb | ejh92/PHYS_T480_F18 | 8aa8bdcb230ef36fe4fab3c8d689e59e4be59366 | [
"MIT"
] | null | null | null | 271.513308 | 54,628 | 0.920548 | true | 2,595 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.810479 | 0.692078 | __label__eng_Latn | 0.959208 | 0.446261 |
# Neural Nets t2
```python
%matplotlib widget
#%matplotlib inline
%load_ext autoreload
%autoreload 2
```
```python
# import Importing_Notebooks
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
import dill
```
A network built of components which:
1. accept an ordered set of reals (we'll use `numpy.array`, and call them vectors) at the input port and produce another at the output port - this is forward propagation. ${\displaystyle f\colon \mathbf {R} ^{n}\to \mathbf {R} ^{m}}$
1. accept an ordered set of reals at the output port, representing the gradient of the loss function at the output, and produce the gradient of the loss function at the input port - this is back propagation, aka backprop. ${\displaystyle b\colon \mathbf {R} ^{m}\to \mathbf {R} ^{n}}$
1. from the gradient of the loss function at the output, calculate the partial of the loss function w.r.t the internal parameters ${\displaystyle \frac{\partial E}{\partial w} }$
1. accept a scalar $\alpha$ to control the adjustment of internal parameters. _Or is this effected by scaling the loss gradient before passing??_
1. update internal parameters ${\displaystyle w \leftarrow w - \alpha \frac{\partial E}{\partial w} }$
```python
class Layer:
def __init__(self):
pass
def __call__(self, x):
"""Computes response to input"""
raise NotImplementedError
def backprop(self, output_delE):
"""Uses output error gradient to adjust internal parameters, and returns gradient of error at input"""
raise NotImplementedError
```
A network built of a cascade of layers:
```python
class Network:
def __init__(self):
self.layers = []
self.alpha = 0.1 #FIXME
def extend(self, net):
self.layers.append(net)
def __call__(self, input):
v = input
for net in self.layers:
v = net(v)
return v
def learn(self, facts):
for (x, expected) in facts:
y = self(x)
e = y - expected
loss = e.dot(e)/2.0
agrad = e * self.alpha
for net in reversed(self.layers):
agrad = net.backprop(agrad)
return loss
```
## Useful Layers
### Identify
```python
class IdentityLayer(Layer):
def __call__(self, x):
return x
def backprop(self, output_delE):
return output_delE
```
### Affine
A layer that does an [affine transformation](https://mathworld.wolfram.com/AffineTransformation.html) aka affinity, which is the classic fully-connected layer with output offsets.
$$ \mathbf{M} \mathbf{x} + \mathbf{b} = \mathbf{y} $$
where
$$
\mathbf{x} = \sum_{j=1}^{n} x_j \mathbf{\hat{x}}_j \\
\mathbf{b} = \sum_{i=1}^{m} b_i \mathbf{\hat{y}}_i \\
\mathbf{y} = \sum_{i=1}^{m} y_i \mathbf{\hat{y}}_i
$$
and $\mathbf{M}$ can be written
$$
\begin{bmatrix}
m_{1,1} & \dots & m_{1,n} \\
\vdots & \ddots & \vdots \\
m_{m,1} & \dots & m_{m,n}
\end{bmatrix} \\
$$
#### Error gradient back-propagation
$$
\begin{align}
\frac{\partial loss}{\partial\mathbf{x}}
= \left(\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\right)^{\intercal} \frac{\partial loss}{\partial\mathbf{y}}
= \mathbf{M}^{\intercal}\frac{\partial loss}{\partial\mathbf{y}}
\end{align}
$$
_Answer to the left-vs-right question: backprop multiplies the output gradient by $\mathbf{M}^{\intercal}$, since input $x_j$ feeds every output $y_i$ through column $j$ of $\mathbf{M}$._
#### Parameter adjustment
$$
\frac{\partial loss}{\partial\mathbf{M}}
= \frac{\partial loss}{\partial\mathbf{y}} \frac{\partial\mathbf{y}}{\partial\mathbf{M}}
= \frac{\partial loss}{\partial\mathbf{y}} \, \mathbf{x}^{\intercal} \quad \text{(an outer product)} \\
\frac{\partial loss}{\partial\mathbf{b}}
= \frac{\partial loss}{\partial\mathbf{y}} \frac{\partial\mathbf{y}}{\partial\mathbf{b}}
= \frac{\partial loss}{\partial\mathbf{y}}
$$
```python
class AffinityLayer(Layer):
"""An affine transformation, which is the classic fully-connected layer with offsets"""
def __init__(self, n, m):
self.M = np.empty((m, n))
self.b = np.empty(m)
self.randomize()
def randomize(self):
self.M[:] = np.random.randn(*self.M.shape)
self.b[:] = np.random.randn(*self.b.shape)
def __call__(self, x):
self.input = x
self.output = self.M @ x + self.b
return self.output
def backprop(self, output_delE):
        input_delE = self.M.T @ output_delE  # transpose routes the gradient back through the weights
        self.M -= np.einsum('i,j', output_delE, self.input)  # outer product; np.outer would also work
self.b -= output_delE
return input_delE
```
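A finite-difference check of the backprop math (my addition). With `M.T` in `backprop`, the analytic input gradient matches a numerical one computed from the saved parameters:
```python
np.random.seed(0)
layer = AffinityLayer(3, 2)
M0, b0 = layer.M.copy(), layer.b.copy()   # backprop mutates M and b, so save them
x = np.random.randn(3)
t = np.random.randn(2)
y = layer(x)
analytic = layer.backprop(y - t)          # gradient of 0.5*|y - t|^2 w.r.t. x

def loss(v):                              # same loss, with the original parameters
    return 0.5 * np.sum((M0 @ v + b0 - t) ** 2)

eps = 1e-6
numeric = np.array([(loss(x + eps * np.eye(3)[i]) - loss(x - eps * np.eye(3)[i])) / (2 * eps)
                    for i in range(3)])
print(np.allclose(analytic, numeric, atol=1e-5))  # True
```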
### Map
Maps a scalar function on the inputs, for e.g. activation layers.
```python
class MapLayer(Layer):
"""Map a scalar function on the input taken element-wise"""
def __init__(self, fun, dfundx):
self.vfun = np.vectorize(fun)
self.vdfundx = np.vectorize(dfundx)
def __call__(self, x):
self.input = x
return self.vfun(x)
def backprop(self, output_delE):
input_delE = self.vdfundx(self.input) * output_delE
return input_delE
```
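For example, a tanh activation layer (my addition) is just a `MapLayer` built from the function and its derivative:
```python
tanh_layer = MapLayer(np.tanh, lambda x: 1.0 - np.tanh(x) ** 2)
print(tanh_layer(np.array([-1.0, 0.0, 1.0])))  # squashed values
print(tanh_layer.backprop(np.ones(3)))         # derivative evaluated at the last input
```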
___
## Tests
### One identity layer
See if the wheels turn:
```python
net = Network()
net.extend(IdentityLayer())
all(net(np.arange(3)) == np.arange(3))
```
True
It does not learn, as expected:
```python
facts = [(np.arange(2*n, 2*n+2), np.arange(2*n+1, 2*n-1, -1)) for n in range(3)]
net.learn(facts)
```
1.0
```python
net(np.arange(2,4))
```
array([2, 3])
### One map layer
```python
net = Network()
net.extend(MapLayer(lambda x: x+1, lambda d: 1))
all(net(np.arange(3)) == np.arange(3)+1)
```
True
It does not learn, as expected:
```python
net.learn(facts), all(net(np.arange(5)) == np.arange(5)+1), net(np.arange(2,4))
```
(2.0, True, array([3, 4]))
### One affine layer
```python
net = Network()
net.extend(AffinityLayer(2,2))
```
```python
t = net.layers[0]
t.M, t.b
```
(array([[-0.21533062, 0.28402263],
[-0.28379346, -1.35249073]]),
array([ 0.61141609, -1.15769416]))
Can it learn the identity transformation?
```python
# from nnbench import NNBench
import dill                    # used by NNBench.checkpoint_net / rollback_net below
from scipy import ndimage      # used by the 'low pass' option in knobs_plot_learning
from matplotlib.widgets import Slider, Button, RadioButtons
```
```python
class NNBench:
def __init__(self, net, ideal=lambda x:x):
self.net = net
self.ideal = ideal
self.gc_protect = []
self.seed = 3
def checkpoint_net(self):
self.net_checkpoint = dill.dumps(self.net)
def rollback_net(self):
self.net = dill.loads(self.net_checkpoint)
def training_data_gen(self, n):
"""Generate n instances of labelled training data"""
np.random.seed(self.seed)
for i in range(n):
v = np.random.randn(2)
yield (v, self.ideal(v))
def learn(self, n=100):
return [self.net.learn([fact]) for fact in self.training_data_gen(n)]
def learning_potential(self, n=100, alpha=None):
stash = dill.dumps(self.net)
if alpha is not None: # only change the net's alpha if a value was passed to us
self.net.alpha = alpha
loss = self.net.learn(fact for fact in self.training_data_gen(n))
self.net = dill.loads(stash)
return -np.log(loss)
def plot_learning(self, n):
from matplotlib import pyplot as plt
# self.losses = losses = [self.net.learn(fact for fact in self.training_data_gen(n))]
losses = self.learn(n)
plt.yscale('log')
plt.plot(range(len(losses)),losses)
plt.show(block=0)
def knobs_plot_learning(self, n):
# from matplotlib import pyplot as plt
fig, ax = plt.subplots()
plt.subplots_adjust(left=0.25, bottom=0.25)
a0 = 5
f0 = 3
###
losses = [self.net.learn([fact]) for fact in self.training_data_gen(n)]
l, = plt.plot(range(len(losses)), losses, lw=2)
ax.margins(x=0)
plt.yscale('log')
axcolor = 'lightgoldenrodyellow'
axfreq = plt.axes([0.25, 0.1, 0.65, 0.03], facecolor=axcolor)
axamp = plt.axes([0.25, 0.15, 0.65, 0.03], facecolor=axcolor)
sfreq = Slider(axfreq, '⍺', 0, 1, valinit=self.net.alpha)
samp = Slider(axamp, 'Num', 1, 1000, valinit=100, valstep=1)
filtfunc = [lambda x:x]
big = max(losses)
ax.set_title(f"maxloss:{big}")
iax = plt.axes([0.025, 0.7, 0.15, 0.15])
def make_iax_image():
return np.concatenate([np.concatenate((l.M,np.array([l.b])),axis=0)
for l in self.net.layers
if hasattr(l, 'M')],axis=1)
def update_iax(img=[iax.imshow(make_iax_image())]):
img[0].remove()
img[0] = iax.imshow(make_iax_image())
def update(val,ax=ax,loc=[l]):
n = int(samp.val)
self.rollback_net()
sfunc = lambda x: 2**(-1.005/(x+.005))
self.net.alpha = sfunc(sfreq.val)
#sfreq.set_label("2.4e"%(self.net.alpha,))
losses = filtfunc[0]([self.net.learn([fact]) for fact in self.training_data_gen(n)])
big = max(losses)
ax.set_title(f"⍺={self.net.alpha},max loss:{big}")
loc[0].remove()
loc[0], = ax.plot(range(len(losses)), losses, lw=2,color='xkcd:blue')
ax.set_xlim((0,len(losses)))
ax.set_ylim((min(losses),big))
update_iax()
fig.canvas.draw_idle()
sfreq.on_changed(update)
samp.on_changed(update)
resetax = plt.axes([0.8, 0.025, 0.1, 0.04])
button = Button(resetax, 'Reset', color=axcolor, hovercolor='0.975')
def reset(event):
self.seed += 1
update()
button.on_clicked(reset)
rax = plt.axes([0.025, 0.5, 0.15, 0.15], facecolor=axcolor)
radio = RadioButtons(rax, ('raw', 'low pass', 'green'), active=0)
def colorfunc(label):
if label == "raw":
filtfunc[0] = lambda x:x
elif label == "low pass":
filtfunc[0] = lambda x:ndimage.gaussian_filter(np.array(x),3)
#l.set_color(label)
#fig.canvas.draw_idle()
update()
radio.on_clicked(colorfunc)
plt.show()
#return 'gc protect:', update, reset, colorfunc,sfreq,samp, radio, button
self.gc_protect.append((update, reset, colorfunc,sfreq,samp, radio, button))
```
```python
bench = NNBench(net)
bench.checkpoint_net()
bench.learning_potential()
```
nan
```python
bench.plot_learning(100)
```
```python
bench.ideal = lambda v: np.array([v[1], v[0]])
bench.knobs_plot_learning(100)
```
### Learn thru a map layer
This layer doubles its input:
```python
net = Network()
net.extend(AffinityLayer(2,2))
def dtanh(x):
v = np.tanh(x)
return (1+v)*(1-v)
net.extend(MapLayer(lambda x:x*x/2.0, lambda d:d))
#net.extend(MapLayer(np.tanh, dtanh))
bench = NNBench(net)
bench.checkpoint_net()
```
```python
net.layers[0].M, net.layers[0].b
```
(array([[-0.32158469, 0.15113037],
[-0.01862772, 0.48352879]]),
array([0.76896516, 1.36624284]))
```python
bench.ideal = lambda v: [(v[0]-v[1])**2,0]
#bench.ideal = lambda v: [(v[0]>0)*2-1,(v[0]>v[1])*2-1]
bench.learning_potential()
#bench.knobs_plot_learning(100)
```
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:13: RuntimeWarning: overflow encountered in multiply
del sys.path[0]
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:19: RuntimeWarning: invalid value encountered in subtract
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:20: RuntimeWarning: invalid value encountered in subtract
nan
```python
bench.knobs_plot_learning(100)
```
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:13: RuntimeWarning: overflow encountered in multiply
del sys.path[0]
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:19: RuntimeWarning: invalid value encountered in subtract
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:20: RuntimeWarning: invalid value encountered in subtract
Look into it:
```python

```
*(output elided: a several-hundred-line `dir()`-style listing of the scipy namespace — which at the time re-exported most of NumPy — whose generating code was cleared from the cell above)*
```python
net.layers[0].randomize()
net([3, 5])
```
array([3.2617429 , 6.04950289])
```python
net.layers[0].M, net.layers[0].b
```
(array([[-1.41364637, 1.16071821],
[ 0.66063039, 0.42396354]]),
array([ 0.06821949, -1.07695745]))
Make the affine layer the identity transform:
```python
net.layers[0].M = np.array([[1,0],[0,1]])
net.layers[0].b = np.array([0,0])
net([3,5])
```
array([ 6, 10])
```python
bench.learning_potential()
```
-30.508647928518727
```python
bench.knobs_plot_learning(100)
```
```python
net([7,11])
```
array([14, 22])
```python
net.layers[0].M, net.layers[0].b
```
(array([[1, 0],
[0, 1]]),
array([0, 0]))
What is learning doing to it?
```python
bench.learn(10)
```
[4.092601131822876e-32,
3.947771200004956e-30,
4.125582682120508e-31,
3.777904178910002e-30,
1.5037661005775538e-30,
1.633188592840376e-30,
1.3620176566706532e-30,
5.516826325697237e-31,
7.703719777548943e-33,
7.888609052210118e-31]
```python
net([7,11])
```
array([14, 22])
```python
net.layers[0].M, net.layers[0].b
```
(array([[1, 0],
[0, 1]]),
array([0, 0]))
If we take the map layer off again, how does it do?
```python
bench.rollback_net()
bench.net.layers = bench.net.layers[:1]
bench.checkpoint_net()
```
```python
bench.ideal = lambda v: v
bench.learning_potential()
#bench.knobs_plot_learning(100)
```
17.53014188241033
It learns just fine, as expected. So we definitely have a problem.
### Add a ReLU
```python
bench.net.layers = bench.net.layers[:1]
leak = 0
bench.net.extend(MapLayer(lambda x: (x*(1+leak/2) + abs(x)*(1-leak/2))/2, lambda d: 1 if d > 0 else leak))
bench.net.layers
```
[<__main__.AffinityLayer at 0x7f0d293189d0>,
<__main__.MapLayer at 0x7f0d292a86d0>]
```python
bench.net.layers[0].randomize()
bench.checkpoint_net()
bench.ideal = lambda v: np.array([1,1])
bench.knobs_plot_learning(100)
```
```python
%debug
```
ERROR:root:No traceback has been produced, nothing to debug.
### XOR
```python
net = Network()
net.extend(AffinityLayer(2,2))
```
```python
t = net.layers[0]
t.M, t.b
```
(array([[ 1.18691118, 0.10949354],
[ 1.40113726, -0.73322905]]),
array([-0.96443749, -0.11239461]))
---
*Source notebook: `nbs/OLD/nnt2.ipynb` from the pramasoul/aix repository (MIT license).*
---
<center>
    <h1> Numerical Optimization Lab Project </h1>
    <h1> Academic Year 2020-2021 - 2nd Year, Digital Sciences Department </h1>
    <h1> Mouddene Hamza </h1>
    <h1> Tyoubi Anass </h1>
</center>
# Newton's Method
## Implementation
1. Implement the local Newton algorithm as described in the *Newton's method* section (file `Algorithme_De_Newton.jl`).
2. Test the algorithm on the functions $f_{1}$, $f_{2}$ with the initial points $x_{011}$, $x_{012}$ (for $f_{1}$) and $x_{021}$, $x_{022}$, $x_{023}$ (for $f_{2}$) given in Appendix A.
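For reference (standard material, spelled out here for self-containedness): the local Newton iteration computes the step $d_k$ from the linear system below and updates $x_{k+1} = x_k + d_k$:
$$
\nabla^{2} f(x_{k})\, d_{k} = -\nabla f(x_{k}).
$$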
```julia
using LinearAlgebra
using Markdown # So that docstrings at the top of functions do not cause
               # problems. These docstrings are used to generate
               # the documentation on GitHub
include("Algorithme_De_Newton.jl")

# Display the results of an algorithm run
function my_afficher_resultats(algo, nom_fct, point_init, xmin, fxmin, flag, sol_exacte, nbiters)
    println("-------------------------------------------------------------------------")
    printstyled("Results of: ", algo, " applied to ", nom_fct, " at initial point ", point_init, ":\n", bold=true, color=:blue)
    println(" * xsol = ", xmin)
    println(" * f(xsol) = ", fxmin)
    println(" * nb_iters = ", nbiters)
    println(" * flag = ", flag)
    println(" * exact solution: ", sol_exacte)
end

println("\n\n\nFunction f0")
f0(x) = sin(x)
# gradient of f0
grad_f0(x) = cos(x)
# Hessian of f0
hess_f0(x) = -sin(x)
sol_exacte = -pi/2
options = []

x0 = sol_exacte
xmin, f_min, flag, nb_iters = Algorithme_De_Newton(f0, grad_f0, hess_f0, x0, options)
my_afficher_resultats("Newton", "f0", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = -pi / 2 + 0.5
xmin, f_min, flag, nb_iters = Algorithme_De_Newton(f0, grad_f0, hess_f0, x0, options)
my_afficher_resultats("Newton", "f0", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = pi / 2
xmin, f_min, flag, nb_iters = Algorithme_De_Newton(f0, grad_f0, hess_f0, x0, options)
my_afficher_resultats("Newton", "f0", x0, xmin, f_min, flag, sol_exacte, nb_iters)

println("\n\n\nFunction f1")
# -----------
f1(x) = 2 * (x[1] + x[2] + x[3] - 3) ^ 2 + (x[1] - x[2]) ^ 2 + (x[2] - x[3]) ^ 2
# gradient of f1
grad_f1(x) = [4 * (x[1] + x[2] + x[3] - 3) + 2 * (x[1] - x[2]); 4 * (x[1] + x[2] + x[3] - 3) - 2 * (x[1] - x[2]) + 2*(x[2]-x[3]); 4*(x[1]+x[2]+x[3]-3)-2*(x[2]-x[3])]
# Hessian of f1
hess_f1(x) = [6 2 4; 2 8 2; 4 2 6]
sol_exacte = [1, 1, 1]
options = []

x0 = [1; 0; 0]
xmin, f_min, flag, nb_iters = Algorithme_De_Newton(f1, grad_f1, hess_f1, x0, options)
my_afficher_resultats("Newton", "f1", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = [10; 3; -2.2]
xmin, f_min, flag, nb_iters = Algorithme_De_Newton(f1, grad_f1, hess_f1, x0, options)
my_afficher_resultats("Newton", "f1", x0, xmin, f_min, flag, sol_exacte, nb_iters)

println("\n\n\nFunction f2")
# -----------
f2(x) = 100 * (x[2] - x[1] ^ 2) ^ 2 + (1 - x[1]) ^ 2
grad_f2(x) = [-400 * x[1] * (x[2] - x[1] ^ 2) - 2 * (1 - x[1]) ; 200 * (x[2] - x[1]^2)]
hess_f2(x) = [-400 * (x[2] - 3 * x[1] ^ 2) + 2 -400 * x[1]; -400 * x[1] 200]
sol_exacte = [1, 1]
options = []

x0 = [-1.2; 1]
xmin, f_min, flag, nb_iters = Algorithme_De_Newton(f2, grad_f2, hess_f2, x0, options)
my_afficher_resultats("Newton", "f2", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = [10; 0]
xmin, f_min, flag, nb_iters = Algorithme_De_Newton(f2, grad_f2, hess_f2, x0, options)
my_afficher_resultats("Newton", "f2", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = [0; ((1 / 200) + (1 / (10 ^ 12)))]
xmin, f_min, flag, nb_iters = Algorithme_De_Newton(f2, grad_f2, hess_f2, x0, options)
my_afficher_resultats("Newton", "f2", x0, xmin, f_min, flag, sol_exacte, nb_iters)
```
## Interpretation
Justify:
1. the results obtained for the example $f_0$ above;
2. that the implemented algorithm converges in one iteration for $f_{1}$;
3. that the algorithm may fail to converge for $f_{2}$ from certain initial points.
## Answers
<ol>
<li>
The results obtained for the example $f_0$ above are consistent, where:<br>
\begin{equation}
\begin{split}
&f_0 : \mathbb{R} \rightarrow \mathbb{R}\\
&x \rightarrow \sin(x)
\end{split}
\end{equation}<br>
In the first case the initial point equals the exact solution $x_{001} = -\dfrac{\pi}{2}$, and the minimum of $\sin$, namely $-1$, is found without a single iteration.<br>
For the second result we start $\dfrac{1}{2}$ away from that critical point (a minimum, according to the first run); after $3$ iterations the exact solution is recovered.<br>
Finally, we take $x_{003} = \dfrac{\pi}{2}$, which is a maximum of $\sin$; since it is already a critical point, Newton's algorithm stops there immediately. In this situation one must examine the second-order conditions to see that it is not a minimum of $\sin$.
</li>
<li>
The implemented algorithm converges in one iteration for $f_1$, where:<br>
\begin{equation}
\begin{split}
&f_1 : \mathbb{R^{3}} \rightarrow \mathbb{R}\\
&(x_1, x_2, x_3) \rightarrow 2(x_1 + x_2 + x_3 - 3) ^ 2 + (x_1 - x_2) ^ 2 + (x_2 - x_3) ^ 2
\end{split}
\end{equation}<br>
For the initial points $x_{011} = \begin{bmatrix}1\\0\\0\end{bmatrix}$ and $x_{012} = \begin{bmatrix}10\\3\\-2.2\end{bmatrix}$: $f_1$ is a quadratic form, so it is equal to its second-order Taylor expansion, and a single Newton step lands exactly on the minimizer.<br>
</li>
<li>
We observe that the algorithm applied to $f_2$, where:<br>
\begin{equation}
\begin{split}
&f_2 : \mathbb{R^{2}} \rightarrow \mathbb{R}\\
&(x_1, x_2) \rightarrow 100(x_2 - x_1 ^ 2) ^ 2 + (1 - x_1) ^ 2
\end{split}
\end{equation}<br>
converges for the first two initial points $x_{021} = \begin{bmatrix}-1.2\\1\end{bmatrix}$ and $x_{022} = \begin{bmatrix}10\\0\end{bmatrix}$, but not for the last point $x_{023} = \begin{bmatrix}0\\\dfrac{1}{200} + \dfrac{1}{10^{12}}\end{bmatrix}$: the Julia interpreter raises a SingularException. This happens because, when computing the step $d_{k}$ as the solution of the system $\nabla^{2} f (x_{k}) d_{k} = - \nabla f (x_{k})$, the Hessian at that initial point is singular, so the linear system cannot be solved.
</li>
</ol>
# Trust Regions with the Cauchy Step
## Implementation
1. Implement the Cauchy-step computation for a trust-region subproblem (file `Pas_De_Cauchy.jl`). Test it on the quadratics proposed in Appendix B.
2. Implement the trust-region algorithm (file `Regions_De_Confiance.jl`). Test it on the problems of Appendix A. The closed-form expression used for the Cauchy step is recalled just below.
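For reference (standard material, added for self-containedness): with $g = \nabla f(x_k)$, $H = \nabla^2 f(x_k)$, and trust-region radius $\Delta_k$, the Cauchy step minimizes the quadratic model along $-g$ inside the region:
$$
s_k^C = -t^*\, g,
\qquad
t^* =
\begin{cases}
\min\left(\dfrac{\|g\|^2}{g^\top H g},\ \dfrac{\Delta_k}{\|g\|}\right) & \text{if } g^\top H g > 0,\\[2ex]
\dfrac{\Delta_k}{\|g\|} & \text{otherwise,}
\end{cases}
$$
which reproduces, for example, the step $[-0.923, -0.308]$ returned for quadratic 2 below.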
```julia
include("Pas_De_Cauchy.jl")

println("Cauchy step\n\n\nB. Test cases for the Cauchy step computation")
println("\n\nQuadratic 1")
g1 = [0; 0]
H1 = [7 0; 0 2]
println("Cauchy g1 H1\n", Pas_De_Cauchy(g1, H1, 1), "\n")
println("\n\nQuadratic 2")
g2 = [6; 2]
H2 = [7 0; 0 2]
println("Cauchy g2 H2\n", Pas_De_Cauchy(g2, H2, 1), "\n")
println("\n\nQuadratic 3")
g3 = [-2; 1]
H3 = [-2 0; 0 10]
println("Cauchy g3 H3\n", Pas_De_Cauchy(g3, H3, 1), "\n\n")
```
Cauchy step
B. Test cases for the Cauchy step computation
Quadratic 1
Cauchy g1 H1
([0.0, 0.0], 0)
Quadratic 2
Cauchy g2 H2
([-0.9230769230769234, -0.30769230769230776], 1)
Quadratic 3
Cauchy g3 H3
([0.8944271909999159, -0.4472135954999579], -1)
```julia
# Tests
include("Regions_De_Confiance.jl")

println("TR-Cauchy step\n\n\nFunction f0")
# -----------
sol_exacte = -pi / 2
options = []

x0 = sol_exacte
xmin, f_min, flag, nb_iters = Regions_De_Confiance("cauchy", f0, grad_f0, hess_f0, x0, options)
my_afficher_resultats("cauchy", "f0", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = -pi / 2 + 0.5
xmin, f_min, flag, nb_iters = Regions_De_Confiance("cauchy", f0, grad_f0, hess_f0, x0, options)
my_afficher_resultats("cauchy", "f0", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = pi / 2
xmin, f_min, flag, nb_iters = Regions_De_Confiance("cauchy", f0, grad_f0, hess_f0, x0, options)
my_afficher_resultats("cauchy", "f0", x0, xmin, f_min, flag, sol_exacte, nb_iters)

println("\n\n\nFunction f1")
# -----------
sol_exacte = [1, 1, 1]
options = []

x0 = [1; 0; 0]
xmin, f_min, flag, nb_iters = Regions_De_Confiance("cauchy", f1, grad_f1, hess_f1, x0, options)
my_afficher_resultats("cauchy", "f1", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = [10; 3; -2.2]
xmin, f_min, flag, nb_iters = Regions_De_Confiance("cauchy", f1, grad_f1, hess_f1, x0, options)
my_afficher_resultats("cauchy", "f1", x0, xmin, f_min, flag, sol_exacte, nb_iters)

println("\n\n\nFunction f2")
# -----------
sol_exacte = [1, 1]
options = []

x0 = [-1.2; 1]
xmin, f_min, flag, nb_iters = Regions_De_Confiance("cauchy", f2, grad_f2, hess_f2, x0, options)
my_afficher_resultats("cauchy", "f2", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = [10; 0]
xmin, f_min, flag, nb_iters = Regions_De_Confiance("cauchy", f2, grad_f2, hess_f2, x0, options)
my_afficher_resultats("cauchy", "f2", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = [0; ((1 / 200) + (1 / (10 ^ 12)))]
xmin, f_min, flag, nb_iters = Regions_De_Confiance("cauchy", f2, grad_f2, hess_f2, x0, options)
my_afficher_resultats("cauchy", "f2", x0, xmin, f_min, flag, sol_exacte, nb_iters)
```
TR-Cauchy step
Function f0
-------------------------------------------------------------------------
Results of: cauchy applied to f0 at initial point -1.5707963267948966:
    * xsol = -1.5707963267948966
    * f(xsol) = -1.0
    * nb_iters = 1
    * flag = 0
    * exact solution: -1.5707963267948966
-------------------------------------------------------------------------
Results of: cauchy applied to f0 at initial point -1.0707963267948966:
    * xsol = -1.5707963267949088
    * f(xsol) = -1.0
    * nb_iters = 3
    * flag = 0
    * exact solution: -1.5707963267948966
-------------------------------------------------------------------------
Results of: cauchy applied to f0 at initial point 1.5707963267948966:
    * xsol = 1.5707963267948966
    * f(xsol) = 1.0
    * nb_iters = 1
    * flag = 0
    * exact solution: -1.5707963267948966
Function f1
-------------------------------------------------------------------------
Results of: cauchy applied to f1 at initial point [1, 0, 0]:
    * xsol = [1.0000558873349883, 0.999992420017735, 0.9999289527004819]
    * f(xsol) = 9.090411079109608e-9
    * nb_iters = 26
    * flag = 2
    * exact solution: [1, 1, 1]
-------------------------------------------------------------------------
Results of: cauchy applied to f1 at initial point [10.0, 3.0, -2.2]:
    * xsol = [1.000049795462743, 0.9999961002424803, 0.9999424049876057]
    * f(xsol) = 6.0401046516733e-9
    * nb_iters = 28
    * flag = 2
    * exact solution: [1, 1, 1]
Function f2
-------------------------------------------------------------------------
Results of: cauchy applied to f2 at initial point [-1.2, 1.0]:
    * xsol = [0.9975452532046185, 0.9950891951120907]
    * f(xsol) = 6.03116510222528e-6
    * nb_iters = 8546
    * flag = 2
    * exact solution: [1, 1]
-------------------------------------------------------------------------
Results of: cauchy applied to f2 at initial point [10, 0]:
    * xsol = [0.9973371221017552, 0.9946738223584641]
    * f(xsol) = 7.096562862872902e-6
    * nb_iters = 1212
    * flag = 2
    * exact solution: [1, 1]
-------------------------------------------------------------------------
Results of: cauchy applied to f2 at initial point [0.0, 0.0050000000010000005]:
    * xsol = [0.9980249833119488, 0.9960352320621435]
    * f(xsol) = 3.935418182296104e-6
    * nb_iters = 3198
    * flag = 2
    * exact solution: [1, 1]
## Interpretation
1. What relation links the test function $f_1$ to its second-order Taylor model? Compare the performance of Newton and TR-Cauchy on this function.
2. The initial trust-region radius is an important parameter in the performance analysis of the algorithm. Which other parameter(s) can be tuned to try to improve this performance? Study the influence of at least two of them.
## Answers
<ol>
<li>
\begin{equation}
\begin{split}
&f_1 : \mathbb{R^{3}} \rightarrow \mathbb{R}\\
&(x_1, x_2, x_3) \rightarrow 2(x_1 + x_2 + x_3 - 3) ^ 2 + (x_1 - x_2) ^ 2 + (x_2 - x_3) ^ 2
\end{split}
\end{equation}<br>
The function $f_1$ is equal to its second-order Taylor expansion. The TR-Cauchy algorithm converges from the initial point $x_{011} = \begin{bmatrix}1\\0\\0\end{bmatrix}$ in $26$ iterations, and in $28$ iterations from the initial point $x_{012} = \begin{bmatrix}10\\3\\-2.2\end{bmatrix}$, whereas Newton's algorithm converges in a single iteration. So, on this example, Newton's algorithm is more efficient than TR-Cauchy.
</li>
<li>
The initial trust-region radius is an important parameter in the performance analysis, but other parameters can be tuned to try to improve performance, such as the maximum trust-region radius, the expansion factor of the trust region, the reduction factor of the trust region, the expansion threshold of the trust region, and the reduction threshold of the trust region.
</li>
</ol>
# Trust Regions with the Truncated Conjugate Gradient
## Implementation
1. Implement the truncated conjugate gradient algorithm, following the lecture notes (file `Gradient_Conjugue_Tronque.jl`).
The results are validated on the functions of Appendix C.
2. Finally, integrate the truncated conjugate gradient algorithm into the trust-region code, and apply it to solve the examples proposed in Appendix A. The subproblem being solved is recalled just below.
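For reference (standard material, added for self-containedness): at each outer iteration, the truncated conjugate gradient approximately solves the trust-region subproblem
$$
\min_{\|s\| \le \Delta_k} \; m_k(s) = f(x_k) + g_k^\top s + \frac{1}{2}\, s^\top H_k s,
$$
stopping early when the iterate hits the boundary $\|s\| = \Delta_k$ or a direction of negative curvature is detected.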
```julia
include("Gradient_Conjugue_Tronque.jl")

println("Truncated conjugate gradient\n\n\nC. Test cases for solving the subproblem with the truncated conjugate gradient algorithm")
println("\n\nQuadratic 4")
g4 = [0; 0]
H4 = [-2 0; 0 10]
println("GCT g4 H4\n", Gradient_Conjugue_Tronque(g4, H4, []), "\n")
println("\n\nQuadratic 5")
g5 = [2; 3]
H5 = [4 6; 6 5]
println("\nGCT g5 H5\n", Gradient_Conjugue_Tronque(g5, H5, []), "\n")
println("\n\nQuadratic 6")
g6 = [2; 0]
H6 = [4 0; 0 -15]
println("\nGCT g6 H6\n", Gradient_Conjugue_Tronque(g6, H6, []), "\n\n")
```
Truncated conjugate gradient
C. Test cases for solving the subproblem with the truncated conjugate gradient algorithm
Quadratic 4
GCT g4 H4
[0.0, 0.0]
Quadratic 5
GCT g5 H5
[1.1782448197996298, -1.6160876042514951]
Quadratic 6
GCT g6 H6
[-0.5, 0.0]
```julia
# Tests
include("Regions_De_Confiance.jl")

println("TR-GCT\n\n\nFunction f1")
# -----------
sol_exacte = [1, 1, 1]
options = []

x0 = [1; 0; 0]
xmin, f_min, flag, nb_iters = Regions_De_Confiance("gct", f1, grad_f1, hess_f1, x0, options)
my_afficher_resultats("gct", "f1", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = [10; 3; -2.2]
xmin, f_min, flag, nb_iters = Regions_De_Confiance("gct", f1, grad_f1, hess_f1, x0, options)
my_afficher_resultats("gct", "f1", x0, xmin, f_min, flag, sol_exacte, nb_iters)

println("\n\n\nFunction f2")
# -----------
sol_exacte = [1, 1]
options = []

x0 = [-1.2; 1]
xmin, f_min, flag, nb_iters = Regions_De_Confiance("gct", f2, grad_f2, hess_f2, x0, options)
my_afficher_resultats("gct", "f2", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = [10; 0]
xmin, f_min, flag, nb_iters = Regions_De_Confiance("gct", f2, grad_f2, hess_f2, x0, options)
my_afficher_resultats("gct", "f2", x0, xmin, f_min, flag, sol_exacte, nb_iters)

x0 = [0; ((1 / 200) + (1 / (10 ^ 12)))]
xmin, f_min, flag, nb_iters = Regions_De_Confiance("gct", f2, grad_f2, hess_f2, x0, options)
my_afficher_resultats("gct", "f2", x0, xmin, f_min, flag, sol_exacte, nb_iters)
```
TR-GCT
Function f1
-------------------------------------------------------------------------
Results of: gct applied to f1 at initial point [1, 0, 0]:
    * xsol = [1.0000000000000007, 1.0, 1.0]
    * f(xsol) = 2.0214560696288428e-30
    * nb_iters = 1
    * flag = 0
    * exact solution: [1, 1, 1]
-------------------------------------------------------------------------
Results of: gct applied to f1 at initial point [10.0, 3.0, -2.2]:
    * xsol = [0.9999999999999996, 1.0000000000000004, 1.0000000000000009]
    * f(xsol) = 2.5637979419682884e-30
    * nb_iters = 3
    * flag = 0
    * exact solution: [1, 1, 1]
Function f2
-------------------------------------------------------------------------
Results of: gct applied to f2 at initial point [-1.2, 1.0]:
    * xsol = [0.9999999999999402, 0.9999999999997743]
    * f(xsol) = 1.1277385526166413e-24
    * nb_iters = 32
    * flag = 0
    * exact solution: [1, 1]
-------------------------------------------------------------------------
Results of: gct applied to f2 at initial point [10, 0]:
    * xsol = [1.0000000002404108, 1.0000000004684448]
    * f(xsol) = 7.311578825077101e-20
    * nb_iters = 45
    * flag = 0
    * exact solution: [1, 1]
-------------------------------------------------------------------------
Results of: gct applied to f2 at initial point [0.0, 0.0050000000010000005]:
    * xsol = [0.9999999999998994, 0.9999999999996207]
    * f(xsol) = 3.1813581453548166e-24
    * nb_iters = 19
    * flag = 0
    * exact solution: [1, 1]
## Interpretation
1. Compare the decrease obtained with that of the Cauchy step when, as a first variant, the last admissible iterate with positive curvature is returned (that is, if either condition (b) or (d) is met in Algorithm 3, $\sigma_{j}$ is not computed and the last iterate $s_{j}$ is returned directly).
2. Compare the decrease obtained with that of the Cauchy step when Algorithm 3 is forced to exit after a single iteration. What do you notice?
3. Compare the decrease obtained with that of the Cauchy step in the general case.
4. What are the advantages and drawbacks of the two approaches?
## Answers
1. Recall that the function $f_2$ is written as follows:<br>
\begin{equation}
\begin{split}
&f_2 : \mathbb{R^{2}} \rightarrow \mathbb{R}\\
&(x_1, x_2) \rightarrow 100(x_2 - x_1 ^ 2) ^ 2 + (1 - x_1) ^ 2
\end{split}
\end{equation}<br>
Applying this function at the initial point $x_{022} = \begin{bmatrix}10\\0\end{bmatrix}$ and returning the last admissible iterate with positive curvature, the decrease obtained is:
| TR – Cauchy step | TR – truncated CG |
|:----------------:|:-----------------:|
| 591198.87 | 591198.87 |
| 328991.13 | 337809.30 |
| 64598.674 | 65927.96 |
| 12522.069 | 5134.153 |
| 2336.1646 | 1.160071 |
We observe that the decrease of the Cauchy-step variant and that of the "gct" variant are roughly equal over the first iterations.
2. Forcing the "gct" algorithm to exit after a single iteration, the decrease obtained is:<br>
| TR – Cauchy step | TR – truncated CG |
|:----------------:|:-----------------:|
| 591198.87 | 591198.87 |
| 328991.13 | 337809.30 |
| 64598.674 | 65927.96 |
| 12522.069 | 5134.153 |
| 2336.1646 | 1.160071 |
3. In the general case, the decrease obtained is:<br>
| TR – Cauchy step | TR – truncated CG |
|:----------------:|:-----------------:|
| 591198.87 | 591198.87 |
| 328991.13 | 337809.30 |
| 64598.674 | 65927.96 |
| 12522.069 | -175.9230 |
| 2336.1646 | -7.765696 |
| 387.54032 | 1.5709228 |
4. Each approach has strengths and weaknesses. With the trust region using the Cauchy step, each step is computed in closed form and is therefore very cheap, but the decrease it achieves per iteration is small, so many iterations are needed. Conversely, the trust region with the truncated conjugate gradient is more expensive per outer iteration but achieves a much larger decrease than the Cauchy step.
# Augmented Lagrangian
## Implementation
1. Choose stopping criteria for the convergence of the algorithm.
2. Implement the augmented Lagrangian algorithm, using the different methods seen in the first part to solve the sequence of unconstrained problems (file `Lagrangien_Augmente.jl`).
3. Test the different variants on the problems in Appendix D. The function minimized at each outer iteration is recalled just below.
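For reference (standard form for equality constraints $c(x) = 0$, added for self-containedness): each outer iteration minimizes the augmented Lagrangian in $x$, then updates the multiplier $\lambda$ and, if needed, the penalty $\mu$:
$$
L_A(x, \lambda, \mu) = f(x) + \lambda^\top c(x) + \frac{\mu}{2}\,\|c(x)\|^2 .
$$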
```julia
# Tests
include("Lagrangien_Augmente.jl")
include("Regions_De_Confiance.jl")
include("Gradient_Conjugue_Tronque.jl")
include("Pas_De_Cauchy.jl")

"Back to f1"
contrainte(x) = x[1] + x[3] - 1
grad_contrainte(x) = [1; 0; 1]
hess_contrainte(x) = [0 0 0; 0 0 0; 0 0 0]

"feasible"
xc = [0; 1; 1]
sol_exacte = [1, 1, 1]
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("gct", f1, contrainte, grad_f1, hess_f1, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_gct", "f1", xc, xmin, f_min, flag, sol_exacte, nb_iters)
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("cauchy", f1, contrainte, grad_f1, hess_f1, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_cauchy", "f1", xc, xmin, f_min, flag, sol_exacte, nb_iters)
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("newton", f1, contrainte, grad_f1, hess_f1, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_newton", "f1", xc, xmin, f_min, flag, sol_exacte, nb_iters)

"infeasible"
xc = [0.5; 1.25; 1]
sol_exacte = [1, 1, 1]
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("gct", f1, contrainte, grad_f1, hess_f1, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_gct", "f1", xc, xmin, f_min, flag, sol_exacte, nb_iters)
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("cauchy", f1, contrainte, grad_f1, hess_f1, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_cauchy", "f1", xc, xmin, f_min, flag, sol_exacte, nb_iters)
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("newton", f1, contrainte, grad_f1, hess_f1, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_newton", "f1", xc, xmin, f_min, flag, sol_exacte, nb_iters)

"Back to f2"
contrainte(x) = (x[1] ^ 2) + (x[2] ^ 2) - 1.5
grad_contrainte(x) = [2 * x[1]; 2 * x[2]]
hess_contrainte(x) = [2 0; 0 2]

"infeasible"
xc = [1; 0]
sol_exacte = [1, 1]
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("gct", f2, contrainte, grad_f2, hess_f2, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_gct", "f2", xc, xmin, f_min, flag, sol_exacte, nb_iters)
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("cauchy", f2, contrainte, grad_f2, hess_f2, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_cauchy", "f2", xc, xmin, f_min, flag, sol_exacte, nb_iters)
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("newton", f2, contrainte, grad_f2, hess_f2, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_newton", "f2", xc, xmin, f_min, flag, sol_exacte, nb_iters)

"feasible"
xc = [sqrt(3) / 2; sqrt(3) / 2]
sol_exacte = [1, 1]
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("gct", f2, contrainte, grad_f2, hess_f2, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_gct", "f2", xc, xmin, f_min, flag, sol_exacte, nb_iters)
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("cauchy", f2, contrainte, grad_f2, hess_f2, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_cauchy", "f2", xc, xmin, f_min, flag, sol_exacte, nb_iters)
xmin, f_min, flag, nb_iters = Lagrangien_Augmente("newton", f2, contrainte, grad_f2, hess_f2, grad_contrainte, hess_contrainte, xc, [])
my_afficher_resultats("Lagrangien_Augmente_newton", "f2", xc, xmin, f_min, flag, sol_exacte, nb_iters)
```
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_gct applied to f1 at initial point [0, 1, 1]:
    * xsol = [0, 1, 1]
    * f(xsol) = 3
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_cauchy applied to f1 at initial point [0, 1, 1]:
    * xsol = [0, 1, 1]
    * f(xsol) = 3
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_newton applied to f1 at initial point [0, 1, 1]:
    * xsol = [0, 1, 1]
    * f(xsol) = 3
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_gct applied to f1 at initial point [0.5, 1.25, 1.0]:
    * xsol = [0.5, 1.25, 1.0]
    * f(xsol) = 0.75
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_cauchy applied to f1 at initial point [0.5, 1.25, 1.0]:
    * xsol = [0.5, 1.25, 1.0]
    * f(xsol) = 0.75
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_newton applied to f1 at initial point [0.5, 1.25, 1.0]:
    * xsol = [0.5, 1.25, 1.0]
    * f(xsol) = 0.75
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_gct applied to f2 at initial point [1, 0]:
    * xsol = [1, 0]
    * f(xsol) = 100
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_cauchy applied to f2 at initial point [1, 0]:
    * xsol = [1, 0]
    * f(xsol) = 100
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_newton applied to f2 at initial point [1, 0]:
    * xsol = [1, 0]
    * f(xsol) = 100
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_gct applied to f2 at initial point [0.8660254037844386, 0.8660254037844386]:
    * xsol = [0.8660254037844386, 0.8660254037844386]
    * f(xsol) = 1.3641386247653273
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_cauchy applied to f2 at initial point [0.8660254037844386, 0.8660254037844386]:
    * xsol = [0.8660254037844386, 0.8660254037844386]
    * f(xsol) = 1.3641386247653273
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1]
-------------------------------------------------------------------------
Results of: Lagrangien_Augmente_newton applied to f2 at initial point [0.8660254037844386, 0.8660254037844386]:
    * xsol = [0.8660254037844386, 0.8660254037844386]
    * f(xsol) = 1.3641386247653273
    * nb_iters = 0
    * flag = 0
    * exact solution: [1, 1]
## Interpretation
1. Comment on the results obtained, studying in particular the values of $\lambda_k$ and $\mu_k$.
2. Study the influence of the parameter $\tau$ on the performance of the algorithm.
3. **Optional**: What method would you propose for solving problems with both equality and inequality constraints? Implement (time permitting) this new algorithm.
## Answers
<ol>
<li>
The results obtained are consistent; however, the convergence speed is much lower than with the previous versions.<br>
We note that the output $\lambda$ is very large, so the solution of the unconstrained problem does not satisfy the imposed constraints.<br>
$\mu$ significantly reduces the number of iterations performed and also makes solvable some runs that previously took too long. We also note that the final $\mu$ increased each time, meaning the constraints were not satisfied and several rounds of penalization were needed to enforce them.
</li>
<li>Increasing $\tau$ improves the convergence of the algorithm. The stopping criterion on the accuracy of the function lets us focus on the correctness of the result. The largest value of $\tau$ is therefore the one that gives the best results.
</li>
</ol>
---
*Source notebook: `2A/S7/Optimisation/TP-Projet/src/.ipynb_checkpoints/TP-Projet-Optinum-checkpoint.ipynb` from the MOUDDENEHamza/ENSEEIHT repository (Apache-2.0 license).*
---
# GPyTorch Regression Tutorial
## Introduction
In this notebook, we demonstrate many of the design features of GPyTorch using the simplest example, training an RBF kernel Gaussian process on a simple function. We'll be modeling the function
\begin{align}
y &= \sin(2\pi x) + \epsilon \\
\epsilon &\sim \mathcal{N}(0, 0.04)
\end{align}
with 100 training examples, and testing on 51 test examples.
**Note:** this notebook is not necessarily intended to teach the mathematical background of Gaussian processes, but rather how to train a simple one and make predictions in GPyTorch. For a mathematical treatment, Chapter 2 of Gaussian Processes for Machine Learning provides a very thorough introduction to GP regression (this entire text is highly recommended): http://www.gaussianprocess.org/gpml/chapters/RW2.pdf
```python
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
### Set up training data
In the next cell, we set up the training data for this example. We'll be using 100 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels.
```python
# Training data is 100 points in [0,1] inclusive regularly spaced
train_x = torch.linspace(0, 1, 100)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * math.sqrt(0.04)
```
## Setting up the model
The next cell demonstrates the most critical features of a user-defined Gaussian process model in GPyTorch. Building a GP model in GPyTorch is different in a number of ways.
First, in contrast to many existing GP packages, we do not provide full GP models for the user. Rather, we provide *the tools necessary to quickly construct one*. This is because we believe that, analogous to building a neural network in standard PyTorch, it is important to have the flexibility to include whatever components are necessary. As can be seen in more complicated examples, this allows the user great flexibility in designing custom models.
For most GP regression models, you will need to construct the following GPyTorch objects:
1. A **GP Model** (`gpytorch.models.ExactGP`) - This handles most of the inference.
1. A **Likelihood** (`gpytorch.likelihoods.GaussianLikelihood`) - This is the most common likelihood used for GP regression.
1. A **Mean** - This defines the prior mean of the GP. (If you don't know which mean to use, a `gpytorch.means.ConstantMean()` is a good place to start.)
1. A **Kernel** - This defines the prior covariance of the GP. (If you don't know which kernel to use, a `gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())` is a good place to start.)
1. A **MultivariateNormal** Distribution (`gpytorch.distributions.MultivariateNormal`) - This is the object used to represent multivariate normal distributions.
### The GP Model
The components of a user built (Exact, i.e. non-variational) GP model in GPyTorch are, broadly speaking:
1. An `__init__` method that takes the training data and a likelihood, and constructs whatever objects are necessary for the model's `forward` method. This will most commonly include things like a mean module and a kernel module.
2. A `forward` method that takes in some $n \times d$ data `x` and returns a `MultivariateNormal` with the *prior* mean and covariance evaluated at `x`. In other words, we return the vector $\mu(x)$ and the $n \times n$ matrix $K_{xx}$ representing the prior mean and covariance matrix of the GP.
This specification leaves a large amount of flexibility when defining a model. For example, to compose two kernels via addition, you can either add the kernel modules directly:
```python
self.covar_module = ScaleKernel(RBFKernel() + LinearKernel())
```
Or you can add the outputs of the kernel in the forward method:
```python
covar_x = self.rbf_kernel_module(x) + self.white_noise_module(x)
```
### The likelihood
The simplest likelihood for regression is the `gpytorch.likelihoods.GaussianLikelihood`. This assumes a homoskedastic noise model (i.e. all inputs have the same observational noise).
There are other options for exact GP regression, such as the [FixedNoiseGaussianLikelihood](http://docs.gpytorch.ai/likelihoods.html#fixednoisegaussianlikelihood), which assigns a different observed noise value to different training inputs.
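A minimal sketch of that variant (the noise values below are placeholders; `FixedNoiseGaussianLikelihood` takes a per-point `noise` tensor and an optional `learn_additional_noise` flag, and the snippet assumes the imports and `train_x` from the cells above):
```python
# Hedged sketch: per-training-point observational noise.
train_noise = 0.04 * torch.ones(train_x.shape[0])   # made-up known variances
fixed_likelihood = gpytorch.likelihoods.FixedNoiseGaussianLikelihood(
    noise=train_noise,              # one known variance per training target
    learn_additional_noise=True,    # optionally learn extra homoskedastic noise
)
```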
```python
# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
```
### Model modes
Like most PyTorch modules, the `ExactGP` has a `.train()` and `.eval()` mode.
- `.train()` mode is for optimizing model hyperparameters.
- `.eval()` mode is for computing predictions through the model posterior.
## Training the model
In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.
The most obvious difference here compared to many other GP implementations is that, as in standard PyTorch, the core training loop is written by the user. In GPyTorch, we make use of the standard PyTorch optimizers from `torch.optim`, and all trainable parameters of the model should be of type `torch.nn.Parameter`. Because GP models directly extend `torch.nn.Module`, calls to methods like `model.parameters()` or `model.named_parameters()` function as you might expect coming from PyTorch.
In most cases, the boilerplate code below will work well. It has the same basic components as the standard PyTorch training loop:
1. Zero all parameter gradients
2. Call the model and compute the loss
3. Call backward on the loss to fill in gradients
4. Take a step on the optimizer
However, defining custom training loops allows for greater flexibility. For example, it is easy to save the parameters at each step of training, or use different learning rates for different parameters (which may be useful in deep kernel learning for example).
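For example, the "different learning rates" idea can be expressed with ordinary `torch.optim` parameter groups (an illustrative split, not part of the original tutorial; it assumes the `model` defined above):
```python
# Hedged sketch: a slower learning rate for the likelihood's noise parameters.
optimizer = torch.optim.Adam([
    {'params': model.mean_module.parameters()},
    {'params': model.covar_module.parameters()},
    {'params': model.likelihood.parameters(), 'lr': 0.01},  # slower group
], lr=0.1)  # default rate for the other groups
```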
```python
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iter):
# Zero gradients from previous iteration
optimizer.zero_grad()
# Output from model
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.item(),
model.likelihood.noise.item()
))
optimizer.step()
```
Iter 1/50 - Loss: 0.938 lengthscale: 0.693 noise: 0.693
Iter 2/50 - Loss: 0.907 lengthscale: 0.644 noise: 0.644
Iter 3/50 - Loss: 0.874 lengthscale: 0.598 noise: 0.598
Iter 4/50 - Loss: 0.836 lengthscale: 0.555 noise: 0.554
Iter 5/50 - Loss: 0.793 lengthscale: 0.514 noise: 0.513
Iter 6/50 - Loss: 0.745 lengthscale: 0.476 noise: 0.474
Iter 7/50 - Loss: 0.693 lengthscale: 0.439 noise: 0.437
Iter 8/50 - Loss: 0.639 lengthscale: 0.405 noise: 0.402
Iter 9/50 - Loss: 0.588 lengthscale: 0.373 noise: 0.369
Iter 10/50 - Loss: 0.540 lengthscale: 0.342 noise: 0.339
Iter 11/50 - Loss: 0.497 lengthscale: 0.315 noise: 0.310
Iter 12/50 - Loss: 0.458 lengthscale: 0.291 noise: 0.284
Iter 13/50 - Loss: 0.421 lengthscale: 0.271 noise: 0.259
Iter 14/50 - Loss: 0.385 lengthscale: 0.254 noise: 0.236
Iter 15/50 - Loss: 0.350 lengthscale: 0.241 noise: 0.215
Iter 16/50 - Loss: 0.316 lengthscale: 0.230 noise: 0.196
Iter 17/50 - Loss: 0.282 lengthscale: 0.222 noise: 0.178
Iter 18/50 - Loss: 0.248 lengthscale: 0.217 noise: 0.162
Iter 19/50 - Loss: 0.214 lengthscale: 0.213 noise: 0.147
Iter 20/50 - Loss: 0.180 lengthscale: 0.211 noise: 0.134
Iter 21/50 - Loss: 0.147 lengthscale: 0.211 noise: 0.121
Iter 22/50 - Loss: 0.114 lengthscale: 0.213 noise: 0.110
Iter 23/50 - Loss: 0.083 lengthscale: 0.215 noise: 0.100
Iter 24/50 - Loss: 0.052 lengthscale: 0.220 noise: 0.091
Iter 25/50 - Loss: 0.022 lengthscale: 0.225 noise: 0.083
Iter 26/50 - Loss: -0.005 lengthscale: 0.232 noise: 0.075
Iter 27/50 - Loss: -0.031 lengthscale: 0.239 noise: 0.069
Iter 28/50 - Loss: -0.054 lengthscale: 0.248 noise: 0.063
Iter 29/50 - Loss: -0.075 lengthscale: 0.257 noise: 0.057
Iter 30/50 - Loss: -0.092 lengthscale: 0.267 noise: 0.052
Iter 31/50 - Loss: -0.106 lengthscale: 0.277 noise: 0.048
Iter 32/50 - Loss: -0.117 lengthscale: 0.287 noise: 0.044
Iter 33/50 - Loss: -0.124 lengthscale: 0.296 noise: 0.041
Iter 34/50 - Loss: -0.128 lengthscale: 0.304 noise: 0.038
Iter 35/50 - Loss: -0.129 lengthscale: 0.310 noise: 0.036
Iter 36/50 - Loss: -0.128 lengthscale: 0.313 noise: 0.033
Iter 37/50 - Loss: -0.126 lengthscale: 0.314 noise: 0.031
Iter 38/50 - Loss: -0.124 lengthscale: 0.312 noise: 0.030
Iter 39/50 - Loss: -0.122 lengthscale: 0.308 noise: 0.029
Iter 40/50 - Loss: -0.119 lengthscale: 0.302 noise: 0.028
Iter 41/50 - Loss: -0.116 lengthscale: 0.296 noise: 0.027
Iter 42/50 - Loss: -0.114 lengthscale: 0.289 noise: 0.026
Iter 43/50 - Loss: -0.112 lengthscale: 0.283 noise: 0.026
Iter 44/50 - Loss: -0.110 lengthscale: 0.278 noise: 0.026
Iter 45/50 - Loss: -0.110 lengthscale: 0.274 noise: 0.026
Iter 46/50 - Loss: -0.110 lengthscale: 0.270 noise: 0.026
Iter 47/50 - Loss: -0.112 lengthscale: 0.268 noise: 0.026
Iter 48/50 - Loss: -0.114 lengthscale: 0.267 noise: 0.026
Iter 49/50 - Loss: -0.116 lengthscale: 0.267 noise: 0.027
Iter 50/50 - Loss: -0.119 lengthscale: 0.269 noise: 0.027
## Make predictions with the model
In the next cell, we make predictions with the model. To do this, we simply put the model and likelihood in eval mode, and call both modules on the test data.
Just as a user defined GP model returns a `MultivariateNormal` containing the prior mean and covariance from forward, a trained GP model in eval mode returns a `MultivariateNormal` containing the posterior mean and covariance. Thus, getting the predictive mean and variance, and then sampling functions from the GP at the given test points could be accomplished with calls like:
```python
f_preds = model(test_x)
y_preds = likelihood(model(test_x))
f_mean = f_preds.mean
f_var = f_preds.variance
f_covar = f_preds.covariance_matrix
f_samples = f_preds.sample(sample_shape=torch.Size([1000]))
```
The `gpytorch.settings.fast_pred_var` context is not needed, but here we are giving a preview of using one of our cool features, getting faster predictive distributions using [LOVE](https://arxiv.org/abs/1803.06058).
```python
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
test_x = torch.linspace(0, 1, 51)
observed_pred = likelihood(model(test_x))
```
## Plot the model fit
In the next cell, we plot the mean and confidence region of the Gaussian process model. The `confidence_region` method is a helper method that returns 2 standard deviations above and below the mean.
```python
with torch.no_grad():
# Initialize plot
f, ax = plt.subplots(1, 1, figsize=(4, 3))
# Get upper and lower confidence bounds
lower, upper = observed_pred.confidence_region()
# Plot training data as black stars
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
# Plot predictive means as blue line
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'b')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
```
# Linear Algebra with Python
*This notebook was originally created as a blog post by [Raúl E. López Briega](http://relopezbriega.com.ar/) on [Mi blog sobre Python](http://relopezbriega.github.io). The content is under the BSD license.*
## Introduction
One of the mathematical tools most heavily used in [machine learning](http://es.wikipedia.org/wiki/Machine_learning) and [data mining](http://es.wikipedia.org/wiki/Miner%C3%ADa_de_datos) is [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal); so if we want to venture into the fascinating world of machine learning and data analysis, it is important to reinforce the concepts that form its foundations.
[Linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) is a branch of [mathematics](http://es.wikipedia.org/wiki/Matem%C3%A1ticas) widely used in a great variety of fields, such as engineering, finance and operations research, among others. It extends the [algebra](http://es.wikipedia.org/wiki/%C3%81lgebra) we learn in secondary school to a larger number of dimensions; instead of working with unknowns at the level of <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>, we start working with <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> and [vectors](http://es.wikipedia.org/wiki/Vector).
The study of [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) involves working with several mathematical objects, such as:
* **<a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">Scalars</a>**: A *scalar* is a single number, in contrast with most of the other objects studied in [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal), which are generally collections of multiple numbers.
* **[Vectors](http://es.wikipedia.org/wiki/Vector)**: A *vector* is a sequence of numbers. The numbers have a fixed order, and we can identify each individual number by its index in that order. We can think of vectors as identifying points in space, with each element giving the coordinate along a different axis. There are two kinds of vectors, *row vectors* and *column vectors*. We can represent them as follows, where *f* is a row vector and *c* is a column vector:
$$f=\begin{bmatrix}0&1&-1\end{bmatrix} ; c=\begin{bmatrix}0\\1\\-1\end{bmatrix}$$
* **<a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">Matrices</a>**: A *matrix* is a two-dimensional array of numbers (called the entries of the matrix) arranged in rows and columns, where a row is each horizontal line of the matrix and a column is each vertical line. In a matrix, each element can be identified by two indices, one for the row and one for the column in which it sits. We can represent them as follows, where *A* is a 3x2 matrix.
$$A=\begin{bmatrix}0 & 1& \\-1 & 2 \\ -2 & 3\end{bmatrix}$$
* **[Tensors](http://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial)**: In some cases we will need an array with more than two axes. In general, an array of numbers arranged on a regular grid with a variable number of axes is known as a *tensor*.
On these objects we can perform the basic mathematical operations, such as [addition](http://es.wikipedia.org/wiki/Adici%C3%B3n), [multiplication](http://es.wikipedia.org/wiki/Multiplicaci%C3%B3n), [subtraction](http://es.wikipedia.org/wiki/Sustracci%C3%B3n) and <a href="http://es.wikipedia.org/wiki/Divisi%C3%B3n_(matem%C3%A1tica)">division</a>; that is, we will be able to add [vectors](http://es.wikipedia.org/wiki/Vector) to <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a>, multiply <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> by [vectors](http://es.wikipedia.org/wiki/Vector), and so on.
(Tensors) A tensor of rank 0 is a scalar; rank 1, a vector; rank 2, a matrix.
## Python libraries for linear algebra
The main modules that [Python](http://python.org/) offers for [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) operations are the following:
* **[Numpy](http://www.numpy.org/)**: The popular mathematical package for [Python](http://python.org/); it lets us create *[vectors](http://es.wikipedia.org/wiki/Vector)*, *<a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a>* and *[tensors](http://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial)* with great ease.
* **[numpy.linalg](http://docs.scipy.org/doc/numpy/reference/routines.linalg.html)**: A submodule within [Numpy](http://www.numpy.org/) with a large number of functions for solving [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) equations.
* **[scipy.linalg](http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html)**: This submodule of the scientific package [Scipy](http://docs.scipy.org/doc/scipy/reference/index.html) is very similar to the previous one, but with a few more functions and optimizations.
* **[Sympy](http://www.sympy.org/es/)**: This library lets us work with symbolic mathematics; it turns [Python](http://python.org/) into a [computer algebra system](http://es.wikipedia.org/wiki/Sistema_algebraico_computacional). It allows us to work with equations and formulas symbolically rather than numerically.
* **[CVXOPT](http://cvxopt.org/)**: This module lets us solve [linear programming](http://es.wikipedia.org/wiki/Programaci%C3%B3n_lineal) optimization problems.
* **[PuLP](http://pythonhosted.org//PuLP/)**: This library lets us build [linear programming](http://es.wikipedia.org/wiki/Programaci%C3%B3n_lineal) models very easily in [Python](http://python.org/).
## Basic operations
### Vectors
A [vector](http://es.wikipedia.org/wiki/Vector) of length `n` is a sequence (or *array*, or *tuple*) of `n` numbers. We usually write it as x=(x1,...,xn) or x=[x1,...,xn].
In [Python](http://python.org/), a [vector](http://es.wikipedia.org/wiki/Vector) can be represented with a plain *list* or with a [Numpy](http://www.numpy.org/) *array*; the latter is preferable.
```python
# Vector as a Python list
v1 = [2, 4, 6]
v1
```
[2, 4, 6]
```python
# Vectors with numpy
import numpy as np
v2 = np.ones(3)  # vector of all ones.
v2
```
array([1., 1., 1.])
```python
v3 = np.array([1, 3, 5])  # converting a list into a numpy array
v3
```
array([1, 3, 5])
```python
np.arange(3,5,2)
```
array([3])
```python
lind = np.linspace(1, 50, num=100)  # linspace returns `num` evenly spaced values between the two endpoints
lind
```
array([ 1. , 1.49494949, 1.98989899, 2.48484848, 2.97979798,
3.47474747, 3.96969697, 4.46464646, 4.95959596, 5.45454545,
5.94949495, 6.44444444, 6.93939394, 7.43434343, 7.92929293,
8.42424242, 8.91919192, 9.41414141, 9.90909091, 10.4040404 ,
10.8989899 , 11.39393939, 11.88888889, 12.38383838, 12.87878788,
13.37373737, 13.86868687, 14.36363636, 14.85858586, 15.35353535,
15.84848485, 16.34343434, 16.83838384, 17.33333333, 17.82828283,
18.32323232, 18.81818182, 19.31313131, 19.80808081, 20.3030303 ,
20.7979798 , 21.29292929, 21.78787879, 22.28282828, 22.77777778,
23.27272727, 23.76767677, 24.26262626, 24.75757576, 25.25252525,
25.74747475, 26.24242424, 26.73737374, 27.23232323, 27.72727273,
28.22222222, 28.71717172, 29.21212121, 29.70707071, 30.2020202 ,
30.6969697 , 31.19191919, 31.68686869, 32.18181818, 32.67676768,
33.17171717, 33.66666667, 34.16161616, 34.65656566, 35.15151515,
35.64646465, 36.14141414, 36.63636364, 37.13131313, 37.62626263,
38.12121212, 38.61616162, 39.11111111, 39.60606061, 40.1010101 ,
40.5959596 , 41.09090909, 41.58585859, 42.08080808, 42.57575758,
43.07070707, 43.56565657, 44.06060606, 44.55555556, 45.05050505,
45.54545455, 46.04040404, 46.53535354, 47.03030303, 47.52525253,
48.02020202, 48.51515152, 49.01010101, 49.50505051, 50. ])
```python
v4 = np.arange(1, 8)  # using numpy's arange function
v4
```
array([1, 2, 3, 4, 5, 6, 7])
### Graphical representation
Traditionally, [vectors](http://es.wikipedia.org/wiki/Vector) are represented visually as arrows that start at the origin and point to a location.
For example, to plot the vectors v1=[2, 4], v2=[-3, 3] and v3=[-4, -3.5], we could proceed as follows.
```python
import matplotlib.pyplot as plt
from warnings import filterwarnings
%matplotlib inline
filterwarnings('ignore')  # ignore warnings
```
```python
def move_spines():
"""Crea la figura de pyplot y los ejes. Mueve las lineas de la izquierda y de abajo
para que se intersecten con el origen. Elimina las lineas de la derecha y la de arriba.
Devuelve los ejes."""
fix, ax = plt.subplots()
for spine in ["left", "bottom"]:
ax.spines[spine].set_position("zero")
for spine in ["right", "top"]:
ax.spines[spine].set_color("none")
return ax
def vect_fig():
"""Genera el grafico de los vectores en el plano"""
ax = move_spines()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.grid()
    vecs = [[2, 4], [-3, 3], [-4, -3.5]]  # list of vectors
for v in vecs:
ax.annotate(" ", xy=v, xytext=[0, 0],
arrowprops=dict(facecolor="blue",
shrink=0,
alpha=0.7,
width=0.5))
ax.text(1.1 * v[0], 1.1 * v[1], v)
```
```python
vect_fig()  # draw the plot
```
### Operations on vectors
The most common operations when working with [vectors](http://es.wikipedia.org/wiki/Vector) are *addition*, *subtraction* and *multiplication by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>*.
When we *add* two [vectors](http://es.wikipedia.org/wiki/Vector), we add them element by element:
$$ \begin{split}x + y
=
\left[
\begin{array}{c}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{array}
\right]
+
\left[
\begin{array}{c}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{array}
\right]
:=
\left[
\begin{array}{c}
x_1 + y_1 \\
x_2 + y_2 \\
\vdots \\
x_n + y_n
\end{array}
\right]\end{split}$$
Subtraction works in a similar way.
$$ \begin{split}x - y
=
\left[
\begin{array}{c}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{array}
\right]
-
\left[
\begin{array}{c}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{array}
\right]
:=
\left[
\begin{array}{c}
x_1 - y_1 \\
x_2 - y_2 \\
\vdots \\
x_n - y_n
\end{array}
\right]\end{split}$$
*Multiplication by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>* is an operation that takes a number $\gamma$ and a [vector](http://es.wikipedia.org/wiki/Vector) $x$, and produces a new [vector](http://es.wikipedia.org/wiki/Vector) in which each element of $x$ is multiplied by the number $\gamma$.
$$\begin{split}\gamma x
:=
\left[
\begin{array}{c}
\gamma x_1 \\
\gamma x_2 \\
\vdots \\
\gamma x_n
\end{array}
\right]\end{split}$$
In [Python](http://python.org/) we can perform these operations very easily:
```python
# Example in Python
x = np.arange(1, 5)
y = np.array([2, 4, 6, 8])
x, y
```
(array([1, 2, 3, 4]), array([2, 4, 6, 8]))
```python
# adding two numpy vectors
x + y
```
array([ 3, 6, 9, 12])
```python
# subtracting two vectors
x - y
```
array([-1, -2, -3, -4])
```python
# multiplying by a scalar
x * 2
```
array([2, 4, 6, 8])
```python
y * 3
```
array([ 6, 12, 18, 24])
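The cells below call `vect_fig` with a vector and a color, which the zero-argument helper defined earlier cannot handle; the original post redefines the helper with that signature. A minimal reconstruction of that two-argument version (a sketch, assuming `numpy`/`matplotlib` as imported above and an axes created beforehand with `move_spines`):
```python
# Two-argument plotting helper used by the following cells (reconstructed
# sketch): draws a single vector from the origin on the current axes.
def vect_fig(vector, color):
    """Plot one vector in the plane, starting at the origin."""
    v = np.array(vector)
    ax = plt.gca()  # axes previously created with move_spines()
    ax.annotate(" ", xy=v, xytext=[0, 0],
                arrowprops=dict(facecolor=color,
                                shrink=0,
                                alpha=0.7,
                                width=0.5))
    ax.text(1.1 * v[0], 1.1 * v[1], v)
```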
```python
ax = move_spines()
ax.set_xlim(-6, 6)
ax.set_ylim(-6, 6)
ax.grid()
v = np.array([2, 3])
vect_fig(v, "blue")
v = v * 2
vect_fig(v, "red")
```
### Dot (inner) product
The [dot product](https://es.wikipedia.org/wiki/Producto_escalar) of two [vectors](http://es.wikipedia.org/wiki/Vector) is defined as the sum of the products of their elements; it is usually written as < x, y > or x'y, where x and y are the two vectors. It can also be defined as the product of the magnitudes of both vectors and the cosine of the angle between them.
$$< x, y > := \sum_{i=1}^n x_i y_i = \| x \| \|y \| \cos(\alpha)$$
Two [vectors](http://es.wikipedia.org/wiki/Vector) are <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a>, or perpendicular, when they form a right angle with each other. If the dot product of two vectors is zero, the vectors are <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a>.
Additionally, every [dot product](https://es.wikipedia.org/wiki/Producto_escalar) induces a [norm](https://es.wikipedia.org/wiki/Norma_vectorial) on the space in which it is defined, as follows:
$$\| x \| := \sqrt{< x, x>} := \left( \sum_{i=1}^n x_i^2 \right)^{1/2}$$
In [Python](http://python.org/) we can compute it as follows:
The dot product of a vector with itself gives the *squared* norm of the vector.
Two vectors are orthonormal when the angle between them is π/2 and each vector has magnitude 1; if only the angle condition holds, they are merely orthogonal.
Since the product of two unit vectors depends on the cosine of the angle between them, two *perpendicular* vectors have a dot product of 0.
cos 0 = 1, cos π/2 = 0, sin 0 = 0, sin π/2 = 1
A vector has a magnitude (its length), a direction and an orientation (indicated by the arrowhead).
```python
# Computing the dot product of vectors x and y
np.dot(x, y)
```
60
```python
# or equivalently:
sum(x * y)
```
60
```python
# Computing the norm of vector x
np.linalg.norm(x)
```
5.4772255750516612
```python
# another way to compute the norm of x
np.sqrt(np.dot(x, x))
```
5.4772255750516612
```python
# orthogonal vectors
v1 = np.array([3, 4])
v2 = np.array([4, -3])
np.dot(v1, v2)
```
0
```python
# orthonormal vectors
v1 = np.array([1, 0])
v2 = np.array([0, 1])
np.dot(v1, v2)
```
0
### Cross product
The [cross product](https://es.wikipedia.org/wiki/Producto_vectorial) of two [vectors](http://es.wikipedia.org/wiki/Vector) is defined, in magnitude, as:
$$\| \vec{x} \times \vec{y} \| = \| x \| \| y \| \sin(\alpha)$$
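A quick numerical illustration with `np.cross` (an example added here, not from the original post): the cross product of two perpendicular vectors is orthogonal to both inputs, and its magnitude matches $\| x \| \| y \| \sin(\alpha)$.
```python
# Cross product of two perpendicular vectors in R^3
x = np.array([2, 0, 0])
y = np.array([0, 3, 0])
z = np.cross(x, y)  # -> array([0, 0, 6])
# magnitude check: ||x|| * ||y|| * sin(pi/2) = 2 * 3 * 1 = 6
np.linalg.norm(z), z.dot(x), z.dot(y)  # (6.0, 0, 0): orthogonal to both inputs
```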
## Linear combinations
When working with [vectors](http://es.wikipedia.org/wiki/Vector) we encounter two fundamental operations: *addition*, and multiplication by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>. When we *add* two vectors $v$ and $w$, we add them element by element, as follows:
$$v + w
=
\left[
\begin{array}{c}
v_1 \\
v_2 \\
\vdots \\
v_n
\end{array}
\right]
+
\left[
\begin{array}{c}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{array}
\right] =
\left[
\begin{array}{c}
v_1 + w_1 \\
v_2 + w_2 \\
\vdots \\
v_n + w_n
\end{array}
\right]$$
Geometrically we can picture it as follows:
```python
ax = move_spines()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.grid()
vecs = [[2, 4], [2, -2]]  # list of vectors
for v in vecs:
vect_fig(v, "blue")
v = np.array([2, 4]) + np.array([2, -2])
vect_fig(v, "red")
ax.plot([2, 4], [-2, 2], linestyle='--')
a = ax.plot([2, 4], [4, 2], linestyle='--')
```
When we combine these two operations, we form what [linear algebra](http://relopezbriega.github.io/tag/algebra.html) calls [linear combinations](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal). A [linear combination](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal) is a mathematical expression built on a set of [vectors](http://es.wikipedia.org/wiki/Vector), in which each vector is *multiplied by a <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalar</a>* and the results are then *added*. Mathematically we can express it as follows:
$$w = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n = \sum_{i=1}^n \alpha_i v_i
$$
where the $v_n$ are [vectors](http://es.wikipedia.org/wiki/Vector) and the $\alpha_n$ are <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>.
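For instance (an illustrative example of mine, not from the original post), with $\alpha_1 = 2$ and $\alpha_2 = -1$ the combination can be computed directly with numpy:
```python
# linear combination w = 2*v1 + (-1)*v2
v1 = np.array([2, 4])
v2 = np.array([2, -2])
w = 2 * v1 + (-1) * v2
w  # -> array([ 2, 10])
```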
### Matrices
<a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">Matrices</a> are a clear and simple way of organizing data for use in linear operations.
An `n × k` <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> is a rectangular arrangement of numbers with n rows and k columns; it is represented as follows:
$$\begin{split}A =
\left[
\begin{array}{cccc}
a_{11} & a_{12} & \cdots & a_{1k} \\
a_{21} & a_{22} & \cdots & a_{2k} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nk}
\end{array}
\right]\end{split}$$
In the matrix A, the symbol $a_{nk}$ identifies the element in the n-th row and the k-th column. The matrix A may also be called a [vector](http://es.wikipedia.org/wiki/Vector) if either n or k equals 1. When n=1, A is called a row vector, while when k=1 it is called a column vector.
Matrices are used in many applications and serve, in particular, to represent the coefficients of systems of linear equations or to represent linear transformations given a basis. They can be added, multiplied and decomposed in several ways.
### Operations on matrices
Just like [vectors](http://es.wikipedia.org/wiki/Vector), which are simply a special case, matrices can be *added*, *subtracted* and *multiplied by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>*.
Multiplication by scalars:
$$\begin{split}\gamma A
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
\gamma a_{11} & \cdots & \gamma a_{1k} \\
\vdots & \vdots & \vdots \\
\gamma a_{n1} & \cdots & \gamma a_{nk} \\
\end{array}
\right]\end{split}$$
Matrix addition: $$\begin{split}A + B =
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]
+
\left[
\begin{array}{ccc}
b_{11} & \cdots & b_{1k} \\
\vdots & \vdots & \vdots \\
b_{n1} & \cdots & b_{nk} \\
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
a_{11} + b_{11} & \cdots & a_{1k} + b_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} + b_{n1} & \cdots & a_{nk} + b_{nk} \\
\end{array}
\right]\end{split}$$
Matrix subtraction: $$\begin{split}A - B =
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]-
\left[
\begin{array}{ccc}
b_{11} & \cdots & b_{1k} \\
\vdots & \vdots & \vdots \\
b_{n1} & \cdots & b_{nk} \\
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
a_{11} - b_{11} & \cdots & a_{1k} - b_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} - b_{n1} & \cdots & a_{nk} - b_{nk} \\
\end{array}
\right]\end{split}$$
For addition and subtraction, keep in mind that we can only add or subtract matrices of the same dimensions; that is, if I have a 3x2 matrix A (3 rows and 2 columns), I can only add or subtract a matrix B if it also has 3 rows and 2 columns.
```python
# Example in Python
A = np.array([[1, 3, 2],
[1, 0, 0],
[1, 2, 2]])
B = np.array([[1, 0, 5],
[7, 5, 0],
[2, 1, 1]])
```
```python
# matrix subtraction
A - B
```
array([[ 0, 3, -3],
[-6, -5, 0],
[-1, 1, 1]])
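Addition works the same way, element by element (a small example added for completeness, reusing the matrices defined above):
```python
# matrix addition
A + B
```
    array([[2, 3, 7],
           [8, 5, 0],
           [3, 3, 3]])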
```python
# multiplying matrices by scalars
A * 2
```
array([[2, 6, 4],
[2, 0, 0],
[2, 4, 4]])
```python
B * 3
```
array([[ 3, 0, 15],
[21, 15, 0],
[ 6, 3, 3]])
```python
# get the dimensions of a matrix
A.shape
```
(3, 3)
```python
# get the number of elements in a matrix
A.size
```
9
A <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> is a two-dimensional array of numbers arranged in rows and columns, where a row is each horizontal line of the matrix and a column is each vertical line. In a matrix, each element can be identified by two indices, one for the row and one for the column in which it sits. We can represent them as follows:
$$A=\begin{bmatrix}a_{11} & a_{12} & \dots & a_{1n}\\a_{21} & a_{22} & \dots & a_{2n}
\\ \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \dots & a_{nn}\end{bmatrix}$$
Matrices are used in many applications and serve, in particular, to represent the coefficients of [systems of linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) or to represent [linear combinations](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal).
Suppose we have the following 3 vectors:
$$x_1
=
\left[
\begin{array}{c}
1 \\
-1 \\
0
\end{array}
\right]
\
x_2 =
\left[
\begin{array}{c}
0 \\
1 \\
-1
\end{array}
\right] \
x_3 =
\left[
\begin{array}{c}
0 \\
0 \\
1
\end{array}
\right]$$
their [linear combination](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal) in 3-dimensional space equals $\alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3$; which is the same as saying:
$$\alpha_1
\left[
\begin{array}{c}
1 \\
-1 \\
0
\end{array}
\right]
+ \alpha_2
\left[
\begin{array}{c}
0 \\
1 \\
-1
\end{array}
\right] + \alpha_3
\left[
\begin{array}{c}
0 \\
0 \\
1
\end{array}
\right] = \left[
\begin{array}{c}
\alpha_1 \\
\alpha_2 - \alpha_1 \\
\alpha_3 - \alpha_2
\end{array}
\right]$$
We could now rewrite this [linear combination](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal) in matrix form. The vectors $x_1, x_2$ and $x_3$ become the columns of the matrix $A$, and the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> $\alpha_1, \alpha_2$ and $\alpha_3$ become the components of the [vector](http://es.wikipedia.org/wiki/Vector) $x$, as follows:
$$\begin{bmatrix}1 & 0 & 0\\-1 & 1 & 0
\\ 0 & -1 & 1\end{bmatrix}\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3\end{bmatrix}=
\begin{bmatrix}\alpha_1 \\ \alpha_2 - \alpha_1 \\ \alpha_3 - \alpha_2 \end{bmatrix}$$
Thus the matrix $A$ multiplied by the [vector](http://es.wikipedia.org/wiki/Vector) $x$ yields the same [linear combination](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal) $b$. In this way we arrive at one of the most fundamental equations in [linear algebra](http://relopezbriega.github.io/tag/algebra.html):
$$Ax = b$$
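As a quick numerical check (an example added here), picking concrete scalars $\alpha = (2, 3, 5)$, the matrix-vector product reproduces the combination $(\alpha_1, \ \alpha_2 - \alpha_1, \ \alpha_3 - \alpha_2)$:
```python
# Ax = b with concrete scalars alpha = (2, 3, 5)
A = np.array([[ 1,  0, 0],
              [-1,  1, 0],
              [ 0, -1, 1]])
x = np.array([2, 3, 5])
A.dot(x)  # -> array([2, 1, 2]) = (a1, a2 - a1, a3 - a2)
```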
This equation not only lets us express [linear combinations](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal); it also becomes extremely important when solving [systems of linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales), where $b$ is known and the unknown becomes $x$. For example, suppose we want to solve the following [system of equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) in 3 unknowns:
$$ 2x_1 + 3x_2 + 5x_3 = 52 \\
3x_1 + 6x_2 + 2x_3 = 61 \\
8x_1 + 3x_2 + 6x_3 = 75
$$
We can use [SymPy](http://www.sympy.org/es/) to express the matrices $A$ and $b$ and then arrive at the solution [vector](http://es.wikipedia.org/wiki/Vector) $x$.
#### Matrix multiplication (product)
The rule for [matrix multiplication](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices) generalizes the idea of the [inner product](https://es.wikipedia.org/wiki/Producto_escalar) we saw with [vectors](http://es.wikipedia.org/wiki/Vector), and is designed to facilitate the basic linear operations.
When we [multiply matrices](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices), the number of columns of the first matrix must equal the number of rows of the second; the result of the multiplication has the same number of rows as the first matrix and the same number of columns as the second. That is, if I have a 3x4 matrix A and multiply it by a 4x2 matrix B, the result is a 3x2 matrix C.
Something to keep in mind when [multiplying matrices](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices) is that the [commutative](https://es.wikipedia.org/wiki/Conmutatividad) property does not hold: AxB is not the same as BxA.
Let's look at the examples in [Python](http://python.org/).
```python
# Matrix multiplication example
A = np.arange(1, 13).reshape(3, 4)  # 3x4 matrix
A
```
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12]])
```python
B = np.arange(8).reshape(4, 2)  # 4x2 matrix
B
```
array([[0, 1],
[2, 3],
[4, 5],
[6, 7]])
```python
# Multiplying A x B
A.dot(B)  # yields a 3x2 matrix
```
array([[ 40, 50],
[ 88, 114],
[136, 178]])
```python
# Multiplying B x A
B.dot(A)
```
In this last example we see that the commutative property does not hold; in fact, [Python](http://python.org/) raises an error, since the number of columns of B does not match the number of rows of A, so B x A cannot even be computed.
For a more detailed explanation of the [matrix multiplication](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices) process, see the following [tutorial](http://www.mathsisfun.com/algebra/matrix-multiplying.html).
```python
# Matrices are not commutative :(
a = np.array([1,2,-2,-5]).reshape(2,2)
b = np.array([1,-3, 2,1]).reshape(2,2)
print(a.dot(b))
print(b.dot(a))
print(a.dot(b) - b.dot(a))  # checking the commutative property
```
[[ 5 -1]
[-12 1]]
[[ 7 17]
[ 0 -1]]
[[ -2 -18]
[-12 2]]
```python
import sympy
```
```python
# Solving a system of equations with SymPy
A = sympy.Matrix(( (2, 3, 5), (3, 6, 2), (8, 3, 6) ))
A
```
$\displaystyle \left[\begin{matrix}2 & 3 & 5\\3 & 6 & 2\\8 & 3 & 6\end{matrix}\right]$
```python
b = sympy.Matrix(3,1,(52,61,75))
b
```
$\displaystyle \left[\begin{matrix}52\\61\\75\end{matrix}\right]$
```python
help(A.LUsolve)
```
Help on method LUsolve in module sympy.matrices.matrices:
LUsolve(rhs, iszerofunc=<function _iszero at 0x11cb45d40>) method of sympy.matrices.dense.MutableDenseMatrix instance
Solve the linear system ``Ax = rhs`` for ``x`` where ``A = self``.
This is for symbolic matrices, for real or complex ones use
mpmath.lu_solve or mpmath.qr_solve.
See Also
========
lower_triangular_solve
upper_triangular_solve
gauss_jordan_solve
cholesky_solve
diagonal_solve
LDLsolve
QRsolve
pinv_solve
LUdecomposition
```python
# Solving Ax = b
x = A.LUsolve(b)
x
```
$\displaystyle \left[\begin{matrix}3\\7\\5\end{matrix}\right]$
```python
# Checking the solution
A*x
```
$\displaystyle \left[\begin{matrix}52\\61\\75\end{matrix}\right]$
#### The identity matrix, the inverse matrix, the transpose and the determinant
The [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) is the neutral element of [matrix multiplication](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices); it is the equivalent of the number 1. Any matrix multiplied by the identity matrix gives that same matrix as a result. The identity matrix is a [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada) (it always has the same number of rows as columns); its main diagonal consists entirely of 1s and the remaining elements are 0. It is usually denoted by the letter I.
For example, the 3x3 identity matrix is the following:
$$I=\begin{bmatrix}1 & 0 & 0 & \\0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}$$
Now that we know the concept of the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad), we can move on to the concept of the [inverse matrix](https://es.wikipedia.org/wiki/Matriz_invertible). If we have a matrix A, its inverse, written $A^{-1}$, is the [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada) that makes the product $A$x$A^{-1}$ equal to the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) I. That is, it is the reciprocal <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> of A.
$$A × A^{-1} = A^{-1} × A = I$$
Keep in mind that this [inverse matrix](https://es.wikipedia.org/wiki/Matriz_invertible) may not exist in many cases; the matrix is then said to be singular or degenerate. A matrix is singular if and only if its <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is zero.
The <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is a special number that can be computed from [square matrices](https://es.wikipedia.org/wiki/Matriz_cuadrada). It is calculated as the sum of the products of the diagonals of the matrix in one direction minus the sum of the products of the diagonals in the other direction. It is denoted |A|.
$$A=\begin{bmatrix}a_{11} & a_{12} & a_{13} & \\a_{21} & a_{22} & a_{23} & \\ a_{31} & a_{32} & a_{33} & \end{bmatrix}$$
$$|A| =
(a_{11} a_{22} a_{33}
+ a_{12} a_{23} a_{31}
+ a_{13} a_{21} a_{32} )
- (a_{31} a_{22} a_{13}
+ a_{32} a_{23} a_{11}
+ a_{33} a_{21} a_{12})
$$
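As a sanity check (an example added here), Sarrus' rule can be spelled out for a concrete matrix and compared against `np.linalg.det`:
```python
# Sarrus' rule vs numpy for a 3x3 determinant
M = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 10]])
sarrus = (M[0, 0]*M[1, 1]*M[2, 2] + M[0, 1]*M[1, 2]*M[2, 0] + M[0, 2]*M[1, 0]*M[2, 1]) \
       - (M[2, 0]*M[1, 1]*M[0, 2] + M[2, 1]*M[1, 2]*M[0, 0] + M[2, 2]*M[1, 0]*M[0, 1])
sarrus, np.linalg.det(M)  # both equal -3, up to floating point
```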
Finally, the [transpose](http://es.wikipedia.org/wiki/Matriz_transpuesta) is the matrix in which the rows become columns and the columns become rows. It is denoted $A^\intercal$.
$$\begin{bmatrix}a & b & \\c & d & \\ e & f & \end{bmatrix}^T:=\begin{bmatrix}a & c & e &\\b & d & f & \end{bmatrix}$$
Some of the properties of [transposes](http://es.wikipedia.org/wiki/Matriz_transpuesta) are:
a. $(A^T)^T = A$
b. $(A + B)^T = A^T + B^T$
c. $k(A)^T = k(A^T)$
d. $(AB)^T = B^T A^T$
e. $(A^r)^T = (A^T)^r$ for all non-negative $r$.
f. If $A$ is a [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada), then $A + A^T$ is a [symmetric matrix](https://es.wikipedia.org/wiki/Matriz_sim%C3%A9trica).
g. For any <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$, $A A^T$ and $A^T A$ are [symmetric matrices](https://es.wikipedia.org/wiki/Matriz_sim%C3%A9trica).
Let's see some examples in [Python](http://python.org/):
```python
# Creating a 2x2 identity matrix
I = np.eye(2)
I
```
array([[ 1., 0.],
[ 0., 1.]])
```python
# Multiplying a matrix by the identity gives the same matrix
A = np.array([[4, 7],
[2, 6]])
A
```
array([[4, 7],
[2, 6]])
```python
A.dot(I) # AxI = A
```
array([[ 4., 7.],
[ 2., 6.]])
```python
# Computing the determinant of matrix A
np.linalg.det(A)
```
10.000000000000002
```python
# Computing the inverse of A.
A_inv = np.linalg.inv(A)
A_inv
```
array([[ 0.6, -0.7],
[-0.2, 0.4]])
```python
# A x A_inv gives I as a result.
A.dot(A_inv)
```
array([[ 1., 0.],
[ 0., 1.]])
```python
# Transposing a matrix
A = np.arange(6).reshape(3, 2)
A
```
array([[0, 1],
[2, 3],
[4, 5]])
```python
A = np.array([[2, 3, 5], [3, 6, 2], [8, 3, 6]])  # coefficient matrix
b = np.array([[52, 61, 75]]).reshape(3, 1)       # right-hand side
print(A)
print(b)
np.linalg.det(A) != 0          # A is invertible (nonzero determinant)
np.linalg.inv(A).dot(b)        # x = A^{-1} b
```
[[2 3 5]
[3 6 2]
[8 3 6]]
[[52]
[61]
[75]]
array([[3.],
[7.],
[5.]])
```python
np.transpose(A)
```
array([[2, 3, 8],
[3, 6, 3],
[5, 2, 6]])
```python
# Transpose of a matrix
A = sympy.Matrix( [[ 2,-3,-8, 7],
[-2,-1, 2,-7],
[ 1, 0,-3, 6]] )
A
```
$$\left[\begin{matrix}2 & -3 & -8 & 7\\-2 & -1 & 2 & -7\\1 & 0 & -3 & 6\end{matrix}\right]$$
```python
A.transpose()
A.T  # also transposes
```
$$\left[\begin{matrix}2 & -2 & 1\\-3 & -1 & 0\\-8 & 2 & -3\\7 & -7 & 6\end{matrix}\right]$$
```python
# transpose of the transpose returns A.
A.transpose().transpose()
```
$$\left[\begin{matrix}2 & -3 & -8 & 7\\-2 & -1 & 2 & -7\\1 & 0 & -3 & 6\end{matrix}\right]$$
```python
# creating a symmetric matrix
As = A*A.transpose()
As
```
$$\left[\begin{matrix}126 & -66 & 68\\-66 & 58 & -50\\68 & -50 & 46\end{matrix}\right]$$
```python
# checking symmetry.
As.transpose()
```
$$\left[\begin{matrix}126 & -66 & 68\\-66 & 58 & -50\\68 & -50 & 46\end{matrix}\right]$$
The [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible) is very important because it is related to the equation $Ax = b$. If we have a [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada) $A$ of size $n \times n$, then the [inverse](https://es.wikipedia.org/wiki/Matriz_invertible) of $A$ is a matrix $A'$ or $A^{-1}$ of size $n \times n$ that makes the product $A A^{-1}$ equal to the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) $I$. That is, it is the reciprocal matrix of $A$.
$A A^{-1} = I$ or $A^{-1} A = I$
If these conditions hold, we say that the [matrix is invertible](https://es.wikipedia.org/wiki/Matriz_invertible).
A matrix being [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) has important implications, such as:
a. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then its [inverse](https://es.wikipedia.org/wiki/Matriz_invertible) is unique.
b. If $A$ is an [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) $n \times n$ matrix, then the [system of linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) given by $Ax = b$ has a unique solution $x = A^{-1}b$ for every $b$ in $\mathbb{R}^n$.
c. A matrix is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) if and only if its <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is nonzero. When the determinant is zero, the matrix is said to be singular.
d. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then the [system](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) $Ax = 0$ has only the *trivial* solution, i.e., the one in which all unknowns are zero.
e. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then its [echelon form](https://es.wikipedia.org/wiki/Matriz_escalonada) equals the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad).
f. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then $A^{-1}$ is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) and:
$$(A^{-1})^{-1} = A$$
g. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible) and $\alpha$ is a nonzero <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalar</a>, then $\alpha A$ is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) and:
$$(\alpha A)^{-1} = \frac{1}{\alpha}A^{-1}$$.
h. If $A$ and $B$ are [invertible matrices](https://es.wikipedia.org/wiki/Matriz_invertible) of the same size, then $AB$ is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) and:
$$(AB)^{-1} = B^{-1} A^{-1}$$.
i. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then $A^T$ is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) and:
$$(A^T)^{-1} = (A^{-1})^T$$.
With [SymPy](http://www.sympy.org/es/) we can work with [invertible matrices](https://es.wikipedia.org/wiki/Matriz_invertible) as follows:
```python
# Invertible matrix
A = sympy.Matrix( [[1,2],
[3,9]] )
A
```
$$\left[\begin{matrix}1 & 2\\3 & 9\end{matrix}\right]$$
```python
A_inv = A.inv()
A_inv
```
$$\left[\begin{matrix}3 & - \frac{2}{3}\\-1 & \frac{1}{3}\end{matrix}\right]$$
```python
# A * A_inv = I
A*A_inv
```
$$\left[\begin{matrix}1 & 0\\0 & 1\end{matrix}\right]$$
```python
# row-reduced echelon form equals the identity.
A.rref()
```
$$\left ( \left[\begin{matrix}1 & 0\\0 & 1\end{matrix}\right], \quad \left [ 0, \quad 1\right ]\right )$$
```python
# the inverse of A_inv is A
A_inv.inv()
```
$$\left[\begin{matrix}1 & 2\\3 & 9\end{matrix}\right]$$
### Systems of linear equations
One of the main applications of [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) is solving systems of linear equations.
A [linear equation](https://es.wikipedia.org/wiki/Ecuaci%C3%B3n_de_primer_grado) is an equation that only involves sums and differences of one or more variables to the first power; it is the equation of a straight line. When our problem is represented by more than one linear equation, we speak of a [system of linear equations](http://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales). For example, we could have a system of two equations in two unknowns like the following:
$$ x - 2y = 1$$
$$3x + 2y = 11$$
The idea is to find the values of $x$ and $y$ that satisfy both equations. One way to do this is to plot both lines and look for the point where they cross.
In [Python](http://python.org/) this can be done very easily with the help of [matplotlib](http://matplotlib.org/).
```python
# plotting the system of equations.
x_vals = np.linspace(0, 5, 50)  # create 50 values between 0 and 5
plt.plot(x_vals, (1 - x_vals)/-2)  # plot x - 2y = 1
plt.plot(x_vals, (11 - (3*x_vals))/2)  # plot 3x + 2y = 11
plt.axis(ymin=0)
```
After plotting the functions, we can see that the two lines cross at the point (3, 1); that is, the solution of our system is $x=3$ and $y=1$. In this case, since the system is simple with only two unknowns, the graphical solution can be useful, but for more complicated systems a numerical solution is needed, and this is where matrices come into play.
That same system can be represented as a matrix equation as follows:
$$\begin{bmatrix}1 & -2 & \\3 & 2 & \end{bmatrix} \begin{bmatrix}x & \\y & \end{bmatrix} = \begin{bmatrix}1 & \\11 & \end{bmatrix}$$
Which is the same as saying that the matrix A times the matrix $x$ gives the [vector](http://es.wikipedia.org/wiki/Vector) b as a result.
$$ Ax = b$$
In this case, we already know the value of $x$, so we can verify that our solution is correct by performing the [matrix multiplication](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices).
```python
# Verifying the solution via matrix multiplication.
A = np.array([[1., -2.],
[3., 2.]])
x = np.array([[3.],[1.]])
A.dot(x)
```
array([[ 1.],
[ 11.]])
Several methods exist for solving [systems of equations](http://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) numerically:
* **The substitution method**: Solve one of the equations for any unknown, preferably the one with the smallest coefficient, then substitute its value into another equation.
* **The equalization method**: A special case of substitution in which the same unknown is isolated in two equations and the right-hand sides of both equations are then set equal to each other.
* **The elimination (reduction) method**: Transform one of the equations (generally via multiplication) so that the same unknown appears with the same coefficient and opposite sign in two equations. Adding the two equations then cancels that unknown, yielding an equation in a single unknown, which is simple to solve.
* **The graphical method**: Build the graph of each equation of the system. Applied by hand, this method is only efficient in the Cartesian plane (only two unknowns).
* **Gaussian elimination**: The Gauss elimination method converts a linear system of n equations in n unknowns into an echelon one, in which the first equation has n unknowns, the second has n - 1, ..., down to the last equation, which has 1 unknown. It is then easy to start from the last equation and work upward to compute the values of the remaining unknowns.
* **Gauss-Jordan elimination**: A variant of the previous method, consisting of reducing the augmented matrix of the system via elementary transformations until equations in a single unknown are obtained.
* **Cramer's method**: Apply [Cramer's rule](http://es.wikipedia.org/wiki/Regla_de_Cramer) to solve the system. This method can only be used when the coefficient matrix of the system is square with nonzero determinant.
The point is not to explain each of these methods, but to know that they exist and that [Python](http://python.org/) makes life much easier for us: to solve a [system of equations](http://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) we simply call the `solve()` function.
For example, to solve this system of 3 equations in 3 unknowns.
$$ x + 2y + 3z = 6$$
$$ 2x + 5y + 2z = 4$$
$$ 6x - 3y + z = 2$$
First we build the coefficient matrix A and the result matrix b, then we use `solve()` to solve the system.
```python
# Creating the coefficient matrix
A = np.array([[1, 2, 3],
[2, 5, 2],
[6, -3, 1]])
A
```
array([[ 1, 2, 3],
[ 2, 5, 2],
[ 6, -3, 1]])
```python
# Creating the result vector
b = np.array([6, 4, 2])
b
```
array([6, 4, 2])
```python
# Solving the system of equations
x = np.linalg.solve(A, b)
x
```
array([0., 0., 2.])
```python
# Checking the solution
A.dot(x) == b
```
array([ True, True, True])
## Linear independence
[Linear independence](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal) is an apparently simple concept with consequences that run deep through many aspects of analysis. If we want to understand when a matrix is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible), or when a [system of linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) has a unique solution, or when a [least squares](https://es.wikipedia.org/wiki/M%C3%ADnimos_cuadrados) estimate is uniquely defined, the most important fundamental idea is that of [linear independence](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal) of [vectors](http://es.wikipedia.org/wiki/Vector).
Given a finite set of [vectors](http://es.wikipedia.org/wiki/Vector) $x_1, x_2, \dots, x_n$, we say they are *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)* if and only if the only <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> $\alpha_1, \alpha_2, \dots, \alpha_n$ that satisfy the equation:
$$\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n = 0$$
are all zeros, $\alpha_1 = \alpha_2 = \dots = \alpha_n = 0$.
If this does not hold, that is, if the equation above has a solution in which not all the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> are zeros, that solution is called *nontrivial* and the [vectors](http://es.wikipedia.org/wiki/Vector) are said to be *[linearly dependent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*.
To illustrate the definition and make it clearer, let's look at some examples. Suppose we want to determine whether the following [vectors](http://es.wikipedia.org/wiki/Vector) are *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*:
$$\begin{split}x_1
=
\left[
\begin{array}{c}
1.2 \\
1.1 \\
\end{array}
\right] \ \ \ x_2 =
\left[
\begin{array}{c}
-2.2 \\
1.4 \\
\end{array}
\right]\end{split}$$
To do this, we should solve the following [system of equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) and check whether the only solution is the one in which the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> are zeros.
$$\begin{split}\alpha_1
\left[
\begin{array}{c}
1.2 \\
1.1 \\
\end{array}
\right] + \alpha_2
\left[
\begin{array}{c}
-2.2 \\
1.4 \\
\end{array}
\right]\end{split} = 0
$$
```python
# Solving the system of equations.
A = np.array([[1.2, -2.2],
[1.1, 1.4]])
b = np.array([0., 0.])
x = np.linalg.solve(A, b)
x
```
array([ 0., 0.])
```python
# <!-- collapse=True -->
# Graphical solution.
x_vals = np.linspace(-5, 5, 50)  # create 50 values between -5 and 5
ax = move_spines()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.grid()
ax.plot(x_vals, (1.2 * x_vals) / -2.2)  # plot 1.2x_1 - 2.2x_2 = 0
a = ax.plot(x_vals, (1.1 * x_vals) / 1.4)  # plot 1.1x_1 + 1.4x_2 = 0
```
As we can see from both the numerical and the graphical solution, these vectors are *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*, since the only solution of the equation $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n = 0$ is the one in which the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> are zero.
Now let's determine whether, for example, the following [vectors](http://es.wikipedia.org/wiki/Vector) in $\mathbb{R}^4$ are *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*: $\{(3, 2, 2, 3), (3, 2, 1, 2), (3, 2, 0, 1)\}$. Here we should solve the following equation:
$$\alpha_1 (3, 2, 2, 3) +\alpha_2 (3, 2, 1, 2) + \alpha_3 (3, 2, 0, 1) = (0, 0, 0, 0)$$
To solve this system of equations, which is not square (it has 4 equations and only 3 unknowns), we can use [SymPy](http://www.sympy.org/es/).
```python
# SymPy for solving the system of linear equations
a1, a2, a3 = sympy.symbols('a1, a2, a3')
A = sympy.Matrix(( (3, 3, 3, 0), (2, 2, 2, 0), (2, 1, 0, 0), (3, 2, 1, 0) ))
A
```
$$\left[\begin{matrix}3 & 3 & 3 & 0\\2 & 2 & 2 & 0\\2 & 1 & 0 & 0\\3 & 2 & 1 & 0\end{matrix}\right]$$
```python
sympy.solve_linear_system(A, a1, a2, a3)
```
$$\left \{ a_{1} : a_{3}, \quad a_{2} : - 2 a_{3}\right \}$$
As we can see, this solution is *nontrivial*: for example, the solution $\alpha_1 = 1, \ \alpha_2 = -2, \ \alpha_3 = 1$ exists, in which the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> are not zero. Therefore this set is *[linearly dependent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*.
Finally, we could consider whether the following [polynomials](https://es.wikipedia.org/wiki/Polinomio) are *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*: $1 - 2x - x^2$, $1 + x$, $1 + x + 2x^2$. In this case, we should solve the equation:
$$\alpha_1 (1 - 2x - x^2) + \alpha_2 (1 + x) + \alpha_3 (1 + x + 2x^2) = 0$$
and this equation is equivalent to the following:
$$(\alpha_1 + \alpha_2 + \alpha_3) + (-2 \alpha_1 + \alpha_2 + \alpha_3)x + (-\alpha_1 + 2 \alpha_2)x^2 = 0$$
Therefore, we can set up the following [system of equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales):
$$\alpha_1 + \alpha_2 + \alpha_3 = 0, \\
-2 \alpha_1 + \alpha_2 + \alpha_3 = 0, \\
-\alpha_1 + 2 \alpha_2 = 0.
$$
Which we can once again solve with the help of [SymPy](http://www.sympy.org/es/).
```python
A = sympy.Matrix(( (1, 1, 1, 0), (-2, 1, 1, 0), (-1, 2, 0, 0) ))
A
```
$$\left[\begin{matrix}1 & 1 & 1 & 0\\-2 & 1 & 1 & 0\\-1 & 2 & 0 & 0\end{matrix}\right]$$
```python
sympy.solve_linear_system(A, a1, a2, a3)
```
$$\left \{ a_{1} : 0, \quad a_{2} : 0, \quad a_{3} : 0\right \}$$
## Rank
Another concept tied to [linear independence](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal) is that of <a href="https://es.wikipedia.org/wiki/Rango_(%C3%A1lgebra_lineal)">rank</a>. The numbers of columns $m$ and rows $n$ give us the size of a matrix, but this does not necessarily reflect the true size of the [linear system](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales): for example, if a matrix $A$ has two equal rows, the second row disappears during [elimination](https://es.wikipedia.org/wiki/Eliminaci%C3%B3n_de_Gauss-Jordan). The true size of $A$ is given by its rank. The <a href="https://es.wikipedia.org/wiki/Rango_(%C3%A1lgebra_lineal)">rank</a> of a matrix is the maximum number of columns (respectively rows) that are [linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal). For example, consider the following 3 x 4 matrix:
$$A = \begin{bmatrix}1 & 1 & 2 & 4\\1 & 2 & 2 & 5
\\ 1 & 3 & 2 & 6\end{bmatrix}$$
We can see that the third column $(2, 2, 2)$ is a multiple of the first, and that the fourth column $(4, 5, 6)$ is the sum of the first 3 columns. Therefore the <a href="https://es.wikipedia.org/wiki/Rango_(%C3%A1lgebra_lineal)">rank</a> of $A$ is 2, since the third and fourth columns can be eliminated.
Naturally, we can also compute the rank with the help of [Python](http://python.org/).
```python
# Computing the rank with SymPy
A = sympy.Matrix([[1, 1, 2, 4],
[1, 2, 2, 5],
[1, 3, 2, 6]])
A
```
$$\left[\begin{matrix}1 & 1 & 2 & 4\\1 & 2 & 2 & 5\\1 & 3 & 2 & 6\end{matrix}\right]$$
```python
# Rank with SymPy
A.rank()
```
$$2$$
```python
# Rank with numpy
A = np.array([[1, 1, 2, 4],
[1, 2, 2, 5],
[1, 3, 2, 6]])
np.linalg.matrix_rank(A)
```
2
A useful application of computing the <a href="https://es.wikipedia.org/wiki/Rango_(%C3%A1lgebra_lineal)">rank</a> of a matrix is determining the number of solutions of a [system of linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales), according to the [Rouché–Frobenius theorem](https://es.wikipedia.org/wiki/Teorema_de_Rouch%C3%A9%E2%80%93Frobenius). The system has at least one solution if the rank of the coefficient matrix equals the rank of the [augmented matrix](https://es.wikipedia.org/wiki/Matriz_aumentada). In that case, it has exactly one solution if the rank equals the number of unknowns.
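A small numpy sketch of that check (an example added here, reusing the 3x3 system solved earlier): compare the rank of the coefficient matrix with that of the augmented matrix.
```python
# Rouché–Frobenius check: rank(A) vs rank([A|b])
A = np.array([[1, 2, 3],
              [2, 5, 2],
              [6, -3, 1]])
b = np.array([[6], [4], [2]])
aug = np.hstack((A, b))  # augmented matrix [A|b]
np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug)  # (3, 3)
# equal ranks -> consistent; rank == number of unknowns -> unique solution
```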
## Norm and orthogonality
If we want to know the *length* of a [vector](http://es.wikipedia.org/wiki/Vector), all we need is the famous [Pythagorean theorem](https://es.wikipedia.org/wiki/Teorema_de_Pit%C3%A1goras). In the plane $\mathbb{R}^2$, the *length* of a vector $v=\begin{bmatrix}a \\ b \end{bmatrix}$ equals the distance from the origin $(0, 0)$ to the point $(a, b)$. This distance is easily computed thanks to the [Pythagorean theorem](https://es.wikipedia.org/wiki/Teorema_de_Pit%C3%A1goras) and equals $\sqrt{a^2 + b^2}$, as can be seen in the following figure:
```python
# <!-- collapse=True -->
# Computing the length of a vector
# it forms a right triangle
ax = move_spines()
ax.set_xlim(-6, 6)
ax.set_ylim(-6, 6)
ax.grid()
v = np.array([4, 6])
vect_fig(v, "blue")
a = ax.vlines(x=v[0], ymin=0, ymax = 6, linestyle='--', color='g')
```
From this definition we can observe that $a^2 + b^2 = v \cdot v$, so we are now ready to define what [linear algebra](http://relopezbriega.github.io/tag/algebra.html) calls the [norm](https://es.wikipedia.org/wiki/Norma_vectorial).
The *length* or [norm](https://es.wikipedia.org/wiki/Norma_vectorial) of a [vector](http://es.wikipedia.org/wiki/Vector) $v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$ in $\mathbb{R}^n$ is the non-negative number $||v||$ defined by:
$$||v|| = \sqrt{v \cdot v} = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2}$$
That is, the norm of a vector equals the square root of the sum of the squares of its components.
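In numpy (a small illustration added here) both expressions agree:
```python
# norm of v = [4, 6]: sqrt(4**2 + 6**2) = sqrt(52)
v = np.array([4, 6])
np.linalg.norm(v), np.sqrt(v.dot(v))  # both ~= 7.2111
```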
### Orthogonality
The concept of [perpendicularity](https://es.wikipedia.org/wiki/Perpendicularidad) is fundamental in [geometry](https://es.wikipedia.org/wiki/Geometr%C3%ADa). Carried over to [vectors](http://es.wikipedia.org/wiki/Vector) in $\mathbb{R}^n$, this concept is called <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonality</a>.
Two [vectors](http://es.wikipedia.org/wiki/Vector) $v$ and $w$ in $\mathbb{R}^n$ are <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a> to each other if their [inner product](https://es.wikipedia.org/wiki/Producto_escalar) equals zero. That is, $v \cdot w = 0$.
Geometrically we can see it as follows:
```python
# <!-- collapse=True -->
# Orthogonal vectors
ax = move_spines()
ax.set_xlim(-6, 6)
ax.set_ylim(-6, 6)
ax.grid()
vecs = [np.array([4, 6]), np.array([-3, 2])]
for v in vecs:
vect_fig(v, "blue")
a = ax.plot([-3, 4], [2, 6], linestyle='--', color='g')
```
```python
# checking their inner product.
v = np.array([4, 6])
w = np.array([-3, 2])
v.dot(w)
```
0
A [set](http://relopezbriega.github.io/blog/2015/10/11/conjuntos-con-python/) of [vectors](http://es.wikipedia.org/wiki/Vector) in $\mathbb{R}^n$ is <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a> if every pair of distinct vectors in the set is orthogonal, that is:
$v_i \cdot v_j = 0$ for all $i, j = 1, 2, \dots, k$ with $i \ne j$.
For example, suppose we have the following set of vectors in $\mathbb{R}^3$:
$$v1 = \begin{bmatrix} 2 \\ 1 \\ -1\end{bmatrix} \
v2 = \begin{bmatrix} 0 \\ 1 \\ 1\end{bmatrix} \
v3 = \begin{bmatrix} 1 \\ -1 \\ 1\end{bmatrix}$$
In this case, we should verify that:
$$v1 \cdot v2 = 0 \\
v2 \cdot v3 = 0 \\
v1 \cdot v3 = 0 $$
```python
# checking the orthogonality of the set
v1 = np.array([2, 1, -1])
v2 = np.array([0, 1, 1])
v3 = np.array([1, -1, 1])
v1.dot(v2), v2.dot(v3), v1.dot(v3)
```
(0, 0, 0)
```python
a = np.array([1,2,3,5]).reshape(2,2)
print(a)
np.linalg.det(a)
np.linalg.det(np.linalg.inv(a))
```
[[1 2]
[3 5]]
-0.9999999999999993
```python
a = np.array([3,5,1,2]).reshape(2,2)
print(a)
np.linalg.det(a)
np.linalg.det(np.linalg.inv(a))
```
[[3 5]
[1 2]]
0.9999999999999988
As we can see, this set is <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a>. One of the main advantages of working with orthogonal sets of [vectors](http://es.wikipedia.org/wiki/Vector) is that they are necessarily [linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal).
The concept of orthogonality is one of the most important and useful in [linear algebra](http://relopezbriega.github.io/tag/algebra.html) and arises in many practical situations, especially when we want to compute distances.
## Determinant
The <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is a special number that can be computed from [square matrices](https://es.wikipedia.org/wiki/Matriz_cuadrada). This number tells us many things about the matrix. For example, it tells us whether the matrix is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) or not. If the determinant equals zero, the matrix is not invertible. When the matrix is invertible, the determinant of $A^{-1}$ equals $1/(\det A)$. The determinant can also be useful for computing areas.
To obtain the determinant of a matrix we compute the sum of the products of the diagonals of the matrix in one direction minus the sum of the products of the diagonals in the other direction (this diagonal rule applies to $2\times 2$ and $3\times 3$ matrices; larger matrices require cofactor expansion or factorization methods). It is denoted by the symbol $|A|$ or $\det A$.
Some of its properties that we should keep in mind are:
a. The determinant of the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) equals 1: $\det I = 1$.
b. A matrix $A$ is *singular* (has no [inverse](https://es.wikipedia.org/wiki/Matriz_invertible)) if its determinant equals zero.
c. The determinant changes sign when two columns (or rows) are interchanged.
d. If two rows of a matrix $A$ are equal, then the determinant is zero.
e. If some row of the matrix $A$ is all zeros, then the determinant is zero.
f. The [transpose matrix](http://es.wikipedia.org/wiki/Matriz_transpuesta) $A^T$ has the same determinant as $A$.
g. The determinant of $AB$ equals the determinant of $A$ multiplied by the determinant of $B$: $\det (AB) = \det A \cdot \det B$.
h. The determinant is a [linear function](https://es.wikipedia.org/wiki/Funci%C3%B3n_lineal) of each row separately. If we multiply a single row by $\alpha$, the determinant is also multiplied by $\alpha$.
Let's see how we can obtain the determinant with the help of [Python](http://python.org/).
```python
# Determinant with SymPy
A = sympy.Matrix( [[1, 2, 3],
[2,-2, 4],
[2, 2, 5]] )
A.det()
```
$$2$$
```python
# Determinant with NumPy
A = np.array([[1, 2, 3],
[2,-2, 4],
[2, 2, 5]] )
np.linalg.det(A)
```
$$2.0$$
```python
# Determinant as a linear function of a row (property h)
A[0] = A[0:1]*5
np.linalg.det(A)
```
$$10.0$$
```python
# sign change of the determinant (property c)
A = sympy.Matrix( [[2,-2, 4],
[1, 2, 3],
[2, 2, 5]] )
A.det()
```
$$-2$$
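We can also verify property g numerically, $\det(AB) = \det A \cdot \det B$; a small NumPy sketch with two arbitrarily chosen matrices:
```python
# property g: det(AB) = det(A) * det(B)
A = np.array([[1, 2],
              [3, 5]])
B = np.array([[2, 0],
              [1, 3]])
print(np.linalg.det(A.dot(B)))              # -6.0 (up to floating point)
print(np.linalg.det(A) * np.linalg.det(B))  # -6.0 (up to floating point)
```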
## Eigenvalues and Eigenvectors
When we solve [linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) of the type $Ax = b$, we are working with *static* problems. But what if we want to work with *dynamic* problems? It is in such situations that [eigenvalues and eigenvectors](https://es.wikipedia.org/wiki/Vector_propio_y_valor_propio) have their greatest importance.
Suppose we have an $n \times n$ [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada) $A$. A natural question we might ask about $A$ is: does there exist a nonzero [vector](http://es.wikipedia.org/wiki/Vector) $x$ for which $Ax$ is a <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalar</a> multiple of $x$? Translating this question into mathematical language, we arrive at the following equation:
$$Ax = \lambda x$$
When this equation holds and $x$ is nonzero, we say that $\lambda$ is an [eigenvalue](https://es.wikipedia.org/wiki/Vector_propio_y_valor_propio) of $A$ and $x$ is its corresponding [eigenvector](https://es.wikipedia.org/wiki/Vector_propio_y_valor_propio).
Many problems in science lead to eigenvalue problems, in which the main question is: what are the eigenvalues of a given matrix, and what are their corresponding eigenvectors? One area where this theory is very useful is systems of [linear differential equations](https://es.wikipedia.org/wiki/Ecuaci%C3%B3n_diferencial_lineal).
### Computing Eigenvalues
So far so good, but given an $n \times n$ square matrix $A$, how can we obtain its eigenvalues?
We can begin by observing that the equation $Ax = \lambda x$ is equivalent to $(A - \lambda I)x = 0$. Since we are interested in nonzero solutions of this equation, the matrix $A - \lambda I$ must be *singular* (not [invertible](https://es.wikipedia.org/wiki/Matriz_invertible)), and therefore its determinant must be zero: $\det (A - \lambda I) = 0$. We can use this equation to find the eigenvalues of $A$. In particular, we can form the [characteristic polynomial](https://es.wikipedia.org/wiki/Polinomio_caracter%C3%ADstico) of the matrix $A$, which has *degree* $n$ and therefore $n$ roots, so we will find $n$ eigenvalues.
Something to keep in mind: even if the matrix $A$ is [real](https://es.wikipedia.org/wiki/N%C3%BAmero_real), we must be prepared to find eigenvalues that are [complex](http://relopezbriega.github.io/blog/2015/10/12/numeros-complejos-con-python/).
To make this clearer, let's see an example of how we can compute eigenvalues. Suppose we have the following matrix:
$$A = \begin{bmatrix} 3 & 2 \\ 7 & -2 \end{bmatrix}$$
Su [polinomio característico](https://es.wikipedia.org/wiki/Polinomio_caracter%C3%ADstico) va a ser igual a:
$$p(\lambda) = \det (A - \lambda I) = \det \begin{bmatrix}3 - \lambda & 2 \\ 7 & -2-\lambda\end{bmatrix} = (3 - \lambda)(-2-\lambda) - 14 \\ =\lambda^2 - \lambda - 20 = (\lambda - 5) (\lambda + 4)$$
Therefore the eigenvalues of $A$ are $5$ and $-4$.
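We can check this characteristic polynomial symbolically; a quick sketch using SymPy's `charpoly`, assuming `sympy` is imported as in the earlier cells:
```python
lam = sympy.symbols('lam')
A = sympy.Matrix([[3, 2],
                  [7, -2]])
A.charpoly(lam).as_expr()   # lam**2 - lam - 20
```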
Obviously, we can also obtain them much more easily with the help of [Python](http://python.org/).
```python
# Eigenvalues with NumPy
A = np.array([[3, 2],
             [7, -2]])
x, v = np.linalg.eig(A)
# x holds the eigenvalues, v the eigenvectors
x, v
```
(array([ 5., -4.]), array([[ 0.70710678, -0.27472113],
[ 0.70710678, 0.96152395]]))
```python
# Eigenvalues with SymPy
A = sympy.Matrix([[3, 2],
[7, -2]])
# Eigenvalues
A.eigenvals()
```
$$\left \{ -4 : 1, \quad 5 : 1\right \}$$
```python
# Eigenvectors
A.eigenvects()
```
$$\left [ \left ( -4, \quad 1, \quad \left [ \left[\begin{matrix}- \frac{2}{7}\\1\end{matrix}\right]\right ]\right ), \quad \left ( 5, \quad 1, \quad \left [ \left[\begin{matrix}1\\1\end{matrix}\right]\right ]\right )\right ]$$
```python
# verifying the solution Ax = λx
# x eigenvector, v eigenvalue
x = A.eigenvects()[0][2][0]
v = A.eigenvects()[0][0]
# Ax == vx
A*x, v*x
```
$$\left ( \left[\begin{matrix}\frac{8}{7}\\-4\end{matrix}\right], \quad \left[\begin{matrix}\frac{8}{7}\\-4\end{matrix}\right]\right )$$
This concludes our tour of the main concepts of [linear algebra](http://relopezbriega.github.io/tag/algebra.html), many of which we will revisit in future articles with many interesting applications. I hope you find it useful and that it serves as a reference.
In short, eigenvalues are the solutions of the *characteristic equation* $\det(A-\lambda I)=0$.
| 6d9414436822e9ffc101a7483a4551146646233e | 188,242 | ipynb | Jupyter Notebook | Precurso/03_Matematicas_estadistica_Git/Introduccion-matematicas.ipynb | Lawlesscodelen/Bootcamp-Data- | 17125432ff82dd9b6b8dd08e4b5f39e1d787ccde | [
"MIT"
] | null | null | null | Precurso/03_Matematicas_estadistica_Git/Introduccion-matematicas.ipynb | Lawlesscodelen/Bootcamp-Data- | 17125432ff82dd9b6b8dd08e4b5f39e1d787ccde | [
"MIT"
] | null | null | null | Precurso/03_Matematicas_estadistica_Git/Introduccion-matematicas.ipynb | Lawlesscodelen/Bootcamp-Data- | 17125432ff82dd9b6b8dd08e4b5f39e1d787ccde | [
"MIT"
] | 1 | 2020-04-21T19:01:34.000Z | 2020-04-21T19:01:34.000Z | 53.175706 | 11,970 | 0.700896 | true | 23,824 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.808067 | 0.690019 | __label__spa_Latn | 0.806964 | 0.441476 |
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
### Goals of this Lesson
- Present the fundamentals of Linear Regression for Prediction
- Notation and Framework
- Gradient Descent for Linear Regression
- Advantages and Issues
- Closed form Matrix Solutions for Linear Regression
- Advantages and Issues
- Demonstrate Python
- Exploratory Plotting
- Simple plotting with `pyplot` from `matplotlib`
- Code Gradient Descent
- Code Closed Form Matrix Solution
- Perform Linear Regression in scikit-learn
### References for Linear Regression
- Elements of Statistical Learning by Hastie, Tibshriani, Friedman - Chapter 3
- Alex Ihler's Course Notes on Linear Models for Regression - http://sli.ics.uci.edu/Classes/2015W-273a
- scikit-learn Documentation - http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares
- Linear Regression Analysis By Seber and Lee - http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471415405,subjectCd-ST24.html
- Applied Linear Regression by Weisberg - http://onlinelibrary.wiley.com/book/10.1002/0471704091
- Wikipedia - http://en.wikipedia.org/wiki/Linear_regression
### Linear Regression Notation and Framework
Linear Regression is a supervised learning technique that is interested in predicting a response or target $\mathbf{y}$, based on a linear combination of a set of $D$ predictors or features, $\mathbf{x}= (1, x_1,\dots, x_D)$ such that,
\begin{equation*}
y = \beta_0 + \beta_1 x_1 + \dots + \beta_D x_D = \mathbf{x_i}^T\mathbf{\beta}
\end{equation*}
_**Data We Observe**_
\begin{eqnarray*}
y &:& \mbox{response or target variable} \\
\mathbf{x} &:& \mbox{set of $D$ predictor or explanatory variables } \mathbf{x}^T = (1, x_1, \dots, x_D)
\end{eqnarray*}
_**What We Are Trying to Learn**_
\begin{eqnarray*}
\beta^T = (\beta_0, \beta_1, \dots, \beta_D) : \mbox{Parameter values for a "best" prediction of } y \rightarrow \hat y
\end{eqnarray*}
_**Outcomes We are Trying to Predict**_
\begin{eqnarray*}
\hat y : \mbox{Prediction for the data that we observe}
\end{eqnarray*}
_**Matrix Notation**_
\begin{equation*}
\mathbf{Y} = \left( \begin{array}{ccc}
y_1 \\
y_2 \\
\vdots \\
y_i \\
\vdots \\
y_N
\end{array} \right)
\qquad
\mathbf{X} = \left( \begin{array}{ccc}
1 & x_{1,1} & x_{1,2} & \dots & x_{1,D} \\
1 & x_{2,1} & x_{2,2} & \dots & x_{2,D} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{i,1} & x_{i,2} & \dots & x_{i,D} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{N,1} & x_{N,2} & \dots & x_{N,D} \\
\end{array} \right)
\qquad
\beta = \left( \begin{array}{ccc}
\beta_0 \\
\beta_1 \\
\vdots \\
\beta_j \\
\vdots \\
\beta_D
\end{array} \right)
\end{equation*}
_Why is it called Linear Regression?_
It is often asked why it is called linear regression if we can use polynomial terms and other transformations as the predictors. That is
\begin{equation*}
y = \beta_0 + \beta_1 x_1 + \beta_2 x_1^2 + \beta_3 x_1^3 + \beta_4 \sin(x_1)
\end{equation*}
is still a linear regression, though it contains polynomial and trigonometric transformations of $x_1$. This is due to the fact that the term _linear_ applies to the learned coefficients $\beta$ and not the input features $\mathbf{x}$.
_** How can we Learn $\beta$? **_
Linear Regression can be thought of as an optimization problem where we want to minimize some loss function of the error between the prediction $\hat y$ and the observed data $y$.
\begin{eqnarray*}
error_i &=& y_i - \hat y_i \\
&=& y_i - \mathbf{x_i^T}\beta
\end{eqnarray*}
_Let's see what these errors look like..._
Below we show a simulation where the observed $y$ was generated such that $y= 1 + 0.5 x + \epsilon$ and $\epsilon \sim N(0,1)$. If we assume that we know the truth that $y=1 + 0.5 x$, the red lines demonstrate the error (or residuals) between the observed data and the truth.
```python
#############################################################
# Demonstration - What do Residuals Look Like
#############################################################
np.random.seed(33) # Setting a seed allows reproducibility of experiments
beta0 = 1 # Creating an intercept
beta1 = 0.5 # Creating a slope
# Randomly sampling data points
x_example = np.random.uniform(0,5,10)
y_example = beta0 + beta1 * x_example + np.random.normal(0,1,10)
line1 = beta0 + beta1 * np.arange(-1, 6)
f = plt.figure()
plt.scatter(x_example,y_example) # Plotting observed data
plt.plot(np.arange(-1,6), line1) # Plotting the true line
for i, xi in enumerate(x_example):
plt.vlines(xi, beta0 + beta1 * xi, y_example[i], colors='red') # Plotting Residual Lines
plt.annotate('Error or "residual"', xy = (x_example[5], 2), xytext = (-1.5,2.1),
arrowprops=dict(width=1,headwidth=7,facecolor='black', shrink=0.01))
f.set_size_inches(10,5)
plt.title('Errors in Linear Regression')
plt.show()
```
_Choosing a Loss Function to Optimize_
Historically Linear Regression has been solved using the method of Least Squares where we are interested in minimizing the mean squared error loss function of the form:
\begin{eqnarray*}
Loss(\beta) = MSE &=& \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat y_i)^2 \\
&=& \frac{1}{N} \sum_{i=1}^{N} (y_i - \mathbf{x_i^T}\beta)^2 \\
\end{eqnarray*}
Where $N$ is the total number of observations. Other loss functions can be used, but using mean squared error (also referred to as the sum of squared residuals in other texts) has very nice properties for closed form solutions. We will use this loss function for both gradient descent and to create a closed form matrix solution.
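As a minimal sketch, this loss is only a couple of lines of NumPy; here `X`, `y`, and `betas` are assumed to be a design matrix, a response vector, and a coefficient vector:
```python
def mse_loss(X, y, betas):
    """Mean squared error loss for linear regression."""
    residuals = y - np.dot(X, betas)
    return np.mean(residuals**2)
```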
### Before We Present Solutions for Linear Regression: Introducing a Baseball Dataset
We'll use this dataset to investigate Linear Regression. The dataset consists of 337 observations and 18 variables from the set of Major League Baseball players who played at least one game in both the 1991 and 1992
seasons, excluding pitchers. The dataset contains the 1992 salaries for that population, along with performance measures for each player. Four categorical variables indicate how free each player was to move to other teams.
**Reference**
- Pay for Play: Are Baseball Salaries Based on Performance?
- http://www.amstat.org/publications/jse/v6n2/datasets.watnik.html
**Filename**
- 'baseball.dat.txt'.
**Variables**
- _Salary_: Thousands of dollars
- _AVG_: Batting average
- _OBP_: On-base percentage
- _Runs_: Number of runs
- _Hits_: Number of hits
- _Doubles_: Number of doubles
- _Triples_: Number of triples
- _HR_: Number of home runs
- _RBI_: Number of runs batted in
- _Walks_: Number of walks
- _SO_: Number of strike-outs
- _SB_: Number of stolen bases
- _Errs_: Number of errors
- _free agency eligibility_: Indicator of "free agency eligibility"
- _free agent in 1991/2_: Indicator of "free agent in 1991/2"
- _arbitration eligibility_: Indicator of "arbitration eligibility"
- _arbitration in 1991/2_: Indicator of "arbitration in 1991/2"
- _Name_: Player's name (in quotation marks)
**What we will try to predict**
We will attempt to predict the players salary based upon some predictor variables such as Hits, OBP, Walks, RBIs, etc.
#### Load The Data
Loading data from csv files in Python can be done in a few different ways. The numpy package has a function called 'genfromtxt' that can read csv files, while the pandas library has the 'read_csv' function. Remember that we have imported numpy and pandas as `np` and `pd` respectively at the top of this notebook. An example using pandas is as follows:
pd.read_csv(filename, **args)
http://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.parsers.read_csv.html
###<span style="color:red">STUDENT ACTIVITY (2 MINS)</span>
_**Student Action - Load the 'baseball.dat.txt' file into a variable called 'baseball'. Then use baseball.head() to view the first few entries**_
```python
#######################################################################
# Student Action - Load the file 'baseball.dat.txt' using pd.read_csv()
#######################################################################
baseball = pd.read_csv('data/baseball.dat.txt')
```
_**Crash Course: Plotting with Matplotlib**_
At the top of this notebook we have imported the package `pyplot as plt` from the `matplotlib` library. `matplotlib` is a great package for creating simple plots in Python. Below is a link to their tutorial for basic plotting.
_Tutorials_
- http://matplotlib.org/users/pyplot_tutorial.html
- https://scipy-lectures.github.io/intro/matplotlib/matplotlib.html
_Simple Plotting_
- Step 0: Import the package pyplot from matplotlib for plotting
- `import matplotlib.pyplot as plt`
- Step 1: Create a variable to store a new figure object
- `fig = plt.figure()`
- Step 2: Create the plot of your choice
- Common Plots
- `plt.plot(x,y)` - A line plot
- `plt.scatter(x,y)` - Scatter Plots
- `plt.hist(x)` - Histogram of a variable
- Example Plots: http://matplotlib.org/gallery.html
- Step 3: Create labels for your plot for better interpretability
- X Label
- `plt.xlabel('String')`
- Y Label
- `plt.ylabel('String')`
- Title
- `plt.title('String')`
- Step 4: Change the figure size for better viewing within the iPython Notebook
- `fig.set_size_inches(width, height)`
- Step 5: Show the plot
- `plt.show()`
- The above command allows the plot to be shown below the cell that you're currently in. This is made possible by the `magic` command `%matplotlib inline`.
- _NOTE: This may not always be the best way to create plots, but it is a quick template to get you started._
_Transforming Variables_
We'll talk more about numpy later, but to perform the logarithmic transformation use the command
- `np.log(`$array$`)`
```python
#############################################################
# Demonstration - Plot a Histogram of Hits
#############################################################
f = plt.figure()
plt.hist(baseball['Hits'], bins=15)
plt.xlabel('Number of Hits')
plt.ylabel('Frequency')
plt.title('Histogram of Number of Hits')
f.set_size_inches(10, 5)
plt.show()
```
##<span style="color:red">STUDENT ACTIVITY (7 MINS)</span>
### Data Exploration - Investigating Variables
Work in pairs to import the package `matplotlib.pyplot` and create the following two plots.
- A histogram of the $log(Salary)$
- hint: `np.log()`
- a scatterplot of $log(Salary)$ vs $Hits$.
```python
#############################################################
# Student Action - import matplotlib.pyplot
# - Plot a Histogram of log(Salaries)
#############################################################
f = plt.figure()
plt.hist(np.log(baseball['Salary']), bins = 15)
plt.xlabel('log(Salaries)')
plt.ylabel('Frequency')
plt.title('Histogram of log Salaries')
f.set_size_inches(10, 5)
plt.show()
```
```python
#############################################################
# Student Action - Plot a Scatter Plot of Salary vs. Hits
#############################################################
f = plt.figure()
plt.scatter(baseball['Hits'], np.log(baseball['Salary']))
plt.xlabel('Hits')
plt.ylabel('log(Salaries)')
plt.title('Scatter Plot of Salary vs. Hits')
f.set_size_inches(10, 5)
plt.show()
```
## Gradient Descent for Linear Regression
In Linear Regression we are interested in optimizing our loss function $Loss(\beta)$ to find the optimal $\beta$ such that
\begin{eqnarray*}
\hat \beta &=& \arg \min_{\beta} \frac{1}{N} \sum_{i=1}^{N} (y_i - \mathbf{x_i^T}\beta)^2 \\
&=& \arg \min_{\beta} \frac{1}{N} \mathbf{(Y - X\beta)^T (Y - X\beta)} \\
\end{eqnarray*}
One optimization technique called 'Gradient Descent' is useful for finding an optimal solution to this problem. Gradient descent is a first order optimization technique that attempts to find a local minimum of a function by taking steps proportional to the negative gradient of the function at its current point. The gradient at a point indicates the direction of steepest ascent, so its negative is the best local guess for which direction the algorithm should go.
If we consider $\theta$ to be some parameters we are interested in optimizing, $L(\theta)$ to be our loss function, and $\alpha$ to be our step size proportionality, then we have the following algorithm:
_________
_**Algorithm - Gradient Descent**_
- Initialize $\theta$
- Until $\alpha || \nabla L(\theta) || < tol $:
- $\theta^{(t+1)} = \theta^{(t)} - \alpha \nabla_{\theta} L(\theta^{(t)})$
__________
For our problem at hand, we therefore need to find $\nabla L(\beta)$. The derivative of $L(\beta)$ with respect to the $j^{th}$ coefficient is:
\begin{eqnarray*}
\frac{\partial L(\beta)}{\partial \beta_j} = -\frac{2}{N}\sum_{i=1}^{N} (y_i - \mathbf{x_i^T}\beta)\cdot{x_{i,j}}
\end{eqnarray*}
In matrix notation this can be written:
\begin{eqnarray*}
Loss(\beta) &=& \frac{1}{N}\mathbf{(Y - X\beta)^T (Y - X\beta)} \\
&=& \frac{1}{N}\mathbf{(Y^TY} - 2 \mathbf{\beta^T X^T Y + \beta^T X^T X\beta)} \\
\nabla_{\beta} L(\beta) &=& \frac{1}{N} (-2 \mathbf{X^T Y} + 2 \mathbf{X^T X \beta)} \\
&=& -\frac{2}{N} \mathbf{X^T (Y - X \beta)} \\
\end{eqnarray*}
###<span style="color:red">STUDENT ACTIVITY (7 MINS)</span>
### Create a function that returns the gradient of $L(\beta)$
```python
###################################################################
# Student Action - Programming the Gradient
###################################################################
def gradient(X, y, betas):
#****************************
# Your code here!
return -2.0/len(X)*np.dot(X.T, y - np.dot(X, betas))
#****************************
#########################################################
# Testing your gradient function
#########################################################
np.random.seed(33)
X = pd.DataFrame({'ones':1,
'X1':np.random.uniform(0,1,50)})
y = np.random.normal(0,1,50)
betas = np.array([-1,4])
grad_expected = np.array([ 2.98018138, 7.09758971])
grad = gradient(X,y,betas)
try:
np.testing.assert_almost_equal(grad, grad_expected)
print "Test Passed!"
except AssertionError:
print "*******************************************"
print "ERROR: Something isn't right... Try Again!"
print "*******************************************"
```
Test Passed!
###<span style="color:red">STUDENT ACTIVITY (15 MINS)</span>
_**Student Action - Use your Gradient Function to complete the Gradient Descent for the Baseball Dataset**_
#### Code Gradient Descent Here
We have set up all the necessary matrices and starting values. In the designated section below, code the algorithm from the previous section.
```python
# Setting up our matrices
Y = np.log(baseball['Salary'])
N = len(Y)
X = pd.DataFrame({'ones' : np.ones(N),
'Hits' : baseball['Hits']})
p = len(X.columns)
# Initializing the beta vector
betas = np.array([0.015,5.13])
# Initializing Alpha
alph = 0.00001
# Setting a tolerance
tol = 1e-8
###################################################################
# Student Action - Programming the Gradient Descent Algorithm Below
###################################################################
niter = 1.
while (alph*np.linalg.norm(gradient(X,Y,betas)) > tol) and (niter < 20000):
#****************************
# Your code here!
betas -= alph*gradient(X, Y, betas)
niter += 1
#****************************
print niter, betas
try:
beta_expected = np.array([ 0.01513772, 5.13000121])
np.testing.assert_almost_equal(betas, beta_expected)
print "Test Passed!"
except AssertionError:
print "*******************************************"
print "ERROR: Something isn't right... Try Again!"
print "*******************************************"
```
33.0 [ 0.01513772 5.13000121]
Test Passed!
**Comments on Gradient Descent**
- Advantage: Very General Algorithm $\rightarrow$ Gradient Descent and its variants are used throughout Machine Learning and Statistics
- Disadvantage: Highly Sensitive to Initial Starting Conditions
    - Not guaranteed to find the global optimum
- Disadvantage: How do you choose step size $\alpha$?
    - Too small $\rightarrow$ May never reach the minimum
    - Too large $\rightarrow$ May step past the minimum
    - Can we fix it?
        - Adaptive step sizes
        - Newton's Method for Optimization
            - http://en.wikipedia.org/wiki/Newton%27s_method_in_optimization
        - Each correction obviously comes with its own computational considerations.
See the Supplementary Material for any help necessary with scripting this in Python.
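To make the adaptive-step idea concrete, here is a minimal backtracking line-search sketch (an Armijo-type rule); this is an illustration rather than the method used in this lesson, and `loss` and `gradient` are assumed to be functions of `betas`:
```python
def backtracking_step(betas, loss, gradient, alpha0=1.0, shrink=0.5, c=1e-4):
    """Shrink the step size until the loss decreases sufficiently (Armijo rule)."""
    g = gradient(betas)
    alpha = alpha0
    while loss(betas - alpha*g) > loss(betas) - c*alpha*np.dot(g, g):
        alpha *= shrink
    return betas - alpha*g
```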
### Visualizing Gradient Descent to Understand its Limitations
Let's try to find the value of $X$ that maximizes the following function:
\begin{equation}
f(x) = w \times \frac{1}{\sqrt{2\pi \sigma_1^2}} \exp \left( - \frac{(x-\mu_1)^2}{2\sigma_1^2}\right) + (1-w) \times \frac{1}{\sqrt{2\pi \sigma_2^2}} \exp \left( - \frac{(x-\mu_2)^2}{2\sigma_2^2}\right)
\end{equation}
where $w=0.3$, $\mu_1 = 6.5, \sigma_1^2=3$ and $\mu_2 = -1, \sigma_2^2=10$ (the values used in the code below)
Let's visualize this function
```python
x1 = np.arange(-10, 15, 0.05)
mu1 = 6.5
var1 = 3
mu2 = -1
var2 = 10
weight = 0.3
def mixed_normal_distribution(x, mu1, var1, mu2, var2):
pdf1 = np.exp( - (x - mu1)**2 / (2*var1) ) / np.sqrt(2 * np.pi * var1)
pdf2 = np.exp( - (x - mu2)**2 / (2*var2) ) / np.sqrt(2 * np.pi * var2)
return weight * pdf1 + (1-weight )*pdf2
pdf = mixed_normal_distribution(x1, mu1, var1, mu2, var2)
fig = plt.figure()
plt.plot(x1, pdf)
fig.set_size_inches([10,5])
plt.show()
```
### Now let's visualize what happens for different starting conditions and different step sizes
```python
def mixed_gradient(x, mu1, var1, mu2, var2):
grad_pdf1 = np.exp( - (x - mu1)**2 / (2*var1) ) * ((x-mu1)/var1) / np.sqrt(2 * np.pi * var1)
grad_pdf2 = np.exp( - (x - mu2)**2 / (2*var2) ) * ((x-mu2)/var2) / np.sqrt(2 * np.pi * var2)
return weight * grad_pdf1 + (1-weight)*grad_pdf2
# Initialize X
x = 3.25
# Initializing Alpha
alph = 5
# Setting a tolerance
tol = 1e-8
niter = 1.
results = []
while (alph*np.linalg.norm(mixed_gradient(x, mu1, var1, mu2, var2)) > tol) and (niter < 500000):
#****************************
results.append(x)
x = x - alph * mixed_gradient(x, mu1, var1, mu2, var2)
niter += 1
#****************************
print x, niter
if niter < 500000:
exes = mixed_normal_distribution(np.array(results), mu1, var1, mu2, var2)
fig = plt.figure()
plt.plot(x1, pdf)
plt.plot(results, exes, color='red', marker='x')
plt.ylim([0,0.1])
fig.set_size_inches([20,10])
plt.show()
```
## Linear Regression Matrix Solution
From the last section, you may have recognized that we could actually solve for $\beta$ directly.
\begin{eqnarray*}
Loss(\beta) &=& \frac{1}{N}\mathbf{(Y - X\beta)^T (Y - X\beta)} \\
\nabla_{\beta} L(\beta) &=& \frac{1}{N} (-2 \mathbf{X^T Y} + 2 \mathbf{X^T X \beta}) \\
\end{eqnarray*}
Setting to zero
\begin{eqnarray*}
-2 \mathbf{X^T Y} + 2 \mathbf{X^T X} \beta &=& 0 \\
\mathbf{X^T X \beta} &=& \mathbf{X^T Y} \\
\end{eqnarray*}
If we assume that the columns $X$ are linearly independent then
\begin{eqnarray*}
\hat \beta &=& \mathbf{(X^T X)^{-1}X^T Y} \\
\end{eqnarray*}
This is called the _Ordinary Least Squares_ (OLS) Estimator
###<span style="color:red">STUDENT ACTIVITY (10 MINS)</span>
_**Student Action - Solve for $\hat \beta$ directly using OLS on the Baseball Dataset - 10 mins**_
- Review the Supplementary Materials for help with Linear Algebra
```python
# Setting up our matrices
y = np.log(baseball['Salary'])
N = len(Y)
X = pd.DataFrame({'ones' : np.ones(N),
'Hits' : baseball['Hits']})
#############################################################
# Student Action - Program a closed form solution for
# Linear Regression. Compare with Gradient
# Descent.
#############################################################
def solve_linear_regression(X, y):
#****************************
return np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))
#****************************
betas = solve_linear_regression(X,y)
try:
beta_expected = np.array([ 0.01513353, 5.13051682])
np.testing.assert_almost_equal(betas, beta_expected)
print "Betas: ", betas
print "Test Passed!"
except AssertionError:
print "*******************************************"
print "ERROR: Something isn't right... Try Again!"
print "*******************************************"
```
Betas: [ 0.01513353 5.13051682]
Test Passed!
**Comments on solving the loss function directly**
- Advantage: Simple solution to code
- Disadvantage: The Design Matrix must be Full Rank to invert
- Can be corrected with a Generalized Inverse Solution
- Disadvantage: Inverting a Matrix can be a computational expensive operation
- If we have a design matrix that has $N$ observations and $D$ predictors, then $X$ is $(N\times D)$; it follows that
\begin{eqnarray*}
\mathbf{X^TX} \mbox{ is of size } (D \times N) \times (N \times D) = (D \times D) \\
\end{eqnarray*}
- If a matrix is of size $(D\times D)$, the computational cost of inverting it is $O(D^3)$.
- Thus inverting a matrix is directly related to the number of predictors that are included in the analysis.
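In practice, the explicit inverse is rarely formed: solving the normal equations directly (or calling a least-squares routine) is cheaper and numerically more stable. A minimal sketch:
```python
def solve_ols_stable(X, y):
    # solve (X^T X) beta = X^T y without explicitly inverting X^T X
    return np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))

# np.linalg.lstsq(X, y) is an alternative that also handles rank-deficient X
```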
## Sci-Kit Learn Linear Regression
As we've shown in the previous two exercises, when coding these algorithms ourselves we must consider many things, such as selecting step sizes and the computational cost of inverting matrices. For many applications though, packages have been created that have taken many of these parameter selections into consideration. We now turn our attention to the Python package for Machine Learning called 'scikit-learn'.
- http://scikit-learn.org/stable/
Included is the documentation for the scikit-learn implementation of Ordinary Least Squares from their linear models package
- _Generalized Linear Models Documentation:_
- http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares
- _LinearRegression Class Documentation:_
- http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression
From this we see that we'll need to import the module `linear_model` using the following:
from sklearn import linear_model
Let's examine an example using the `LinearRegression` class from scikit-learn. We'll continue with the simulated data from the beginning of the exercise.
### _Example using the variables from the Residual Example_
**Notes**
- Calling `linear_model.LinearRegression()` creates an object of class `sklearn.linear_model.base.LinearRegression`
- Defaults
- `fit_intercept = True`: automatically adds a column vector of ones for an intercept
- `normalize = False`: defaults to not normalizing the input predictors
  - `copy_X = True`: defaults to copying X (set to `False` to allow X to be overwritten)
  - `n_jobs = 1`: The number of jobs to use for the computation. If -1 all CPUs are used. This will only provide speedup for n_targets > 1 and sufficiently large problems.
- Example
  - `lmr = linear_model.LinearRegression()`
- To fit a model, the method `.fit(X,y)` can be used
- X must be a column vector for scikit-learn
- This can be accomplished by creating a DataFrame using `pd.DataFrame()`
- Example
  - `lmr.fit(X,y)`
- To predict out of sample values, the method `.predict(X)` can be used
- To see the $\beta$ estimates use `.coef_` for the coefficients of the predictors and `.intercept_` for $\beta_0$
```python
#############################################################
# Demonstration - scikit-learn with Regression Example
#############################################################
from sklearn import linear_model
lmr = linear_model.LinearRegression()
lmr.fit(pd.DataFrame(x_example), pd.DataFrame(y_example))
xTest = pd.DataFrame(np.arange(-1,6))
yHat = lmr.predict(xTest)
f = plt.figure()
plt.scatter(x_example, y_example)
p1, = plt.plot(np.arange(-1,6), line1)
p2, = plt.plot(xTest, yHat)
plt.legend([p1, p2], ['y = 1 + 0.5x', 'OLS Estimate'], loc=2)
f.set_size_inches(10,5)
plt.show()
print lmr.coef_, lmr.intercept_
```
###<span style="color:red">STUDENT ACTIVITY (15 MINS)</span>
### _**Final Student Task**_
Programming Linear Regression using the scikit-learn method. For the ambitious students, plot all results on one plot.
```python
#######################################################################
# Student Action - Use scikit-learn to calculate the beta coefficients
#
# Note: You no longer need the intercept column in your X matrix for
# sci-kit Learn. It will add that column automatically.
#######################################################################
lmr2 = linear_model.LinearRegression(fit_intercept=True)
lmr2.fit(pd.DataFrame(baseball['Hits']), np.log(baseball['Salary']))
xtest = np.arange(0,200)
ytest = lmr2.intercept_ + lmr2.coef_*xtest
f = plt.figure()
plt.scatter(baseball['Hits'], np.log(baseball['Salary']))
plt.plot(xtest, ytest, color='r', linewidth=3)
f.set_size_inches(10,5)
plt.show()
print lmr2.coef_, lmr2.intercept_
```
## Linear Regression in the Real World
In the real world, Linear Regression for predictive modeling doesn't end once you've fit the model. Models are often fit and used to predict user behavior, used to quantify business metrics, or sometimes used to identify cat faces for internet points. In that pursuit, it isn't really interesting to fit a model and assess its performance on data that has already been observed. The real interest lies in _**how it predicts future observations!**_
Often times then, we may be susceptible to creating a model that is perfected for our observed data but does not generalize well to new data. In order to assess how we perform on new data, we can _score_ the model on both the old and new data, and compare the model's performance with the hope that it generalizes well to the new data. After lunch we'll introduce some techniques and other methods to better our chances of performing well on new data.
Before we break for lunch though, let's take a look at a simulated dataset to see what we mean...
_Situation_
Imagine that last year a talent management company managed 400 celebrities and tracked how popular they were within the public eye, as well various predictors for that metric. The company is now interested in managing a few new celebrities, but wants to sign those stars that are above a certain 'popularity' threshold to maintain their image.
Our job is to predict how popular each new celebrity will be over the course of the coming year so that we can make the best decision about whom to manage. For this analysis we'll use a function `l2_error` to compare our errors on a training set, and on a test set of celebrity data.
The variable `celeb_data_old` represents things we know about the previous batch of celebrities. Each row represents one celeb. Each column represents some tangible measure about them -- their age at the time, number of Twitter followers, voice squeakiness, etc. The specifics of what each column represents aren't important.
Similarly, `popularity_score_old` is a previous measure of the celebrities popularity.
Finally, `celeb_data_new` represents the same information that we had from `celeb_data_old` but for the new batch of internet wonders that we're considering.
How can we predict how popular the NEW batch of celebrities will be ahead of time so that we can decide who to sign? And are these estimates stable from year to year?
```python
with np.load('data/mystery_data_old.npz') as data:
celeb_data_old = data['celeb_data_old']
popularity_old = data['popularity_old']
celeb_data_new = data['celeb_data_new']
lmr3 = linear_model.LinearRegression()
lmr3.fit(celeb_data_old, popularity_old)
predicted_popularity_old = lmr3.predict(celeb_data_old)
predicted_popularity_new = lmr3.predict(celeb_data_new)
def l2_error(y_true, y_pred):
"""
calculate the sum of squared errors (i.e. "L2 error")
given a vector of true ys and a vector of predicted ys
"""
diff = (y_true-y_pred)
return np.sqrt(np.dot(diff, diff))
print "Predicted L2 Error:", l2_error(popularity_old, predicted_popularity_old)
```
Predicted L2 Error: 18.1262825607
### Checking How We Did
At the end of the year, we tally up the popularity numbers for each celeb and check how well we did on our predictions.
```python
with np.load('data/mystery_data_new.npz') as data:
popularity_new = data['popularity_new']
print "Predicted L2 Error:", l2_error(popularity_new, predicted_popularity_new)
```
Predicted L2 Error: 24.173135433
Something's not right... our model seems to be performing worse on this data! Our model performed so well on last year's data, why didn't it work on the data from this year?
| 20dd458bd91e953aecfac2ce86bbfb6d10943437 | 204,715 | ipynb | Jupyter Notebook | Session 1 - Linear_Regression.ipynb | dinrker/PredictiveModeling | af69864fbe506d095d15049ee2fea6ecd770af36 | [
"MIT"
] | null | null | null | Session 1 - Linear_Regression.ipynb | dinrker/PredictiveModeling | af69864fbe506d095d15049ee2fea6ecd770af36 | [
"MIT"
] | null | null | null | Session 1 - Linear_Regression.ipynb | dinrker/PredictiveModeling | af69864fbe506d095d15049ee2fea6ecd770af36 | [
"MIT"
] | null | null | null | 180.206866 | 38,724 | 0.870185 | true | 7,713 | Qwen/Qwen-72B | 1. YES
2. YES | 0.927363 | 0.903294 | 0.837682 | __label__eng_Latn | 0.928219 | 0.784549 |
```python
import numpy as np
binPow = 1.6       # exponent of the power-law bin spacing
maxR = 8           # largest distance to map back to a bin index
kernelSize = 20    # number of bins in the kernel
kernelDist = 10    # total distance covered by the kernel

def v(x):
    # bin index -> distance of the bin edge (power-law spacing)
    return x**binPow/kernelSize**binPow * kernelDist

def bin(i):
    # distance -> (fractional) bin index; the inverse of v (note: shadows the built-in bin)
    return ( i * (kernelSize**binPow) / kernelDist ) ** (1/binPow)
for i in range(kernelSize):
print(i, v(i), v(i+1), v(i+1)-v(i))
for i in range(maxR):
print(i, bin(i))
```
0 0.0 0.08286135043349964 0.08286135043349964
1 0.08286135043349964 0.25118864315095796 0.16832729271745833
2 0.25118864315095796 0.4805582246305208 0.22936958147956282
3 0.4805582246305208 0.7614615754863513 0.2809033508558305
4 0.7614615754863513 1.088188204120155 0.32672662863380364
5 1.088188204120155 1.4567801244906113 0.3685919203704564
6 1.4567801244906113 1.8642702577112429 0.40749013322063155
7 1.8642702577112429 2.3083198494515416 0.4440495917402987
8 2.3083198494515416 2.7870195942000233 0.47869974474848176
9 2.7870195942000233 3.2987697769322355 0.5117501827322122
10 3.2987697769322355 3.842202968380542 0.5434331914483064
11 3.842202968380542 4.416131536906999 0.573928568526457
12 4.416131536906999 5.019510541446327 0.6033790045393284
13 5.019510541446327 5.651410628131081 0.6319000866847535
14 5.651410628131081 6.310997693134873 0.6595870650037918
15 6.310997693134873 6.99751727323698 0.6865195801021073
16 6.99751727323698 7.710282331663665 0.7127650584266849
17 7.710282331663665 8.448663540236124 0.7383812085724593
18 8.448663540236124 9.212081434582966 0.7634178943468424
19 9.212081434582966 10.0 0.7879185654170335
0 0.0
1 4.742747411323311
2 7.314316399918298
3 9.423902405005808
4 11.280217932412837
5 12.9683955465101
6 14.533644306587888
7 16.003547774916488
```python
def f(x,p0,p1,p2,p3):
    # logistic step-down from p0 towards p0 - p1, centered at p3 with steepness p2
    return p0-(p1/(1+np.exp(-p2*(x-p3))))
for x in range(0,20,1):
print(x, f(x,0.5,0.5,1,3))
```
0 0.4762870634112166
1 0.4403985389889412
2 0.36552928931500245
3 0.25
4 0.13447071068499755
5 0.05960146101105884
6 0.023712936588783318
7 0.008993104981045774
8 0.003346425462142366
9 0.0012363115783173284
10 0.0004555255972003014
11 0.00016767506523318598
12 6.169728799315655e-05
13 2.2698934351195188e-05
14 8.350710923976656e-06
15 3.072087301103643e-06
16 1.1301621489767655e-06
17 4.157640138835461e-07
18 1.529511134967798e-07
19 5.626758103893792e-08
```python
from sympy import *
p0,p1,p2,p3,x,v = symbols('p0 p1 p2 p3 x v')
# derivative of the log-residual ln(v - f(x)) with respect to p1
diff(ln(v - (p0-(p1/(1+exp(-p2*(x-p3)))))),p1)
```
$\displaystyle \frac{1}{\left(1 + e^{- p_{2} \left(- p_{3} + x\right)}\right) \left(- p_{0} + \frac{p_{1}}{1 + e^{- p_{2} \left(- p_{3} + x\right)}} + v\right)}$
```python
```
| 22167955fb76aa83caf4ad399f37cd449f118043 | 4,709 | ipynb | Jupyter Notebook | examples/MRA-Head/SupportingNotes.ipynb | kian-weimer/ITKTubeTK | 88da3195bfeca017745e7cddfe04f82571bd00ee | [
"Apache-2.0"
] | 27 | 2020-04-06T17:23:22.000Z | 2022-03-02T13:25:52.000Z | examples/MRA-Head/SupportingNotes.ipynb | kian-weimer/ITKTubeTK | 88da3195bfeca017745e7cddfe04f82571bd00ee | [
"Apache-2.0"
] | 14 | 2020-04-09T00:23:15.000Z | 2022-02-26T13:02:35.000Z | examples/MRA-Head/SupportingNotes.ipynb | kian-weimer/ITKTubeTK | 88da3195bfeca017745e7cddfe04f82571bd00ee | [
"Apache-2.0"
] | 14 | 2020-04-03T03:56:14.000Z | 2022-01-14T07:51:32.000Z | 28.539394 | 181 | 0.564239 | true | 1,288 | Qwen/Qwen-72B | 1. YES
2. YES | 0.760651 | 0.682574 | 0.5192 | __label__yue_Hant | 0.214736 | 0.044605 |
```python
import numpy as np
import sympy as sym
import numba
import pydae.build as db
```
```python
```
```python
S_b = 90e3
U_b = 400.0
Z_b = U_b**2/S_b
I_b = S_b/(np.sqrt(3)*U_b)
Omega_b = 2*np.pi*50
R_s = 0.023/Z_b
R_r = 0.024/Z_b
Ll_s = 0.086/Z_b
Ll_r = 0.196/Z_b
L_m = 3.7/Z_b
params = {'S_b':S_b,'U_b':U_b,'I_b':I_b,
'R_s':R_s,'R_r':R_r,'L_ls':Ll_s,'L_lr':Ll_r,'L_m':L_m, # synnchronous machine d-axis parameters
'H_m':3.5,'Omega_b':2*np.pi*50,'D':0.1,
'v_0':1,'theta_0':0.0,
'X_l':0.05, 'omega_s':1.0,'v_rd':0.0,'v_rq':0.0,'v_sd':0.0,'v_sq':-1.0}
u_ini_dict = {'tau_m':0.1, 'Q_c':0.0} # for the initialization problem
u_run_dict = {'tau_m':0.1,'Q_c':0.0} # for the running problem (here initialization and running problem are the same)
x_list = ['omega_e','psi_sd','psi_sq','psi_rd','psi_rq'] # [inductor current, PI integrator]
y_ini_list = ['i_sd','i_sq','i_rd','i_rq'] # for the initialization problem
y_run_list = ['i_sd','i_sq','i_rd','i_rq'] # for the running problem (here initialization and running problem are the same)
sys_vars = {'params':params,
'u_list':u_run_dict,
'x_list':x_list,
'y_list':y_run_list}
exec(db.sym_gen_str()) # exec to generate the required symbolic varables and constants
```
```python
#v_sd = -v_h*sin(theta_h)
#v_sq = v_h*cos(theta_h)
tau_e = psi_sd*i_sq - psi_sq*i_sd   # electromagnetic torque (p.u.)

# differential equations: motion equation and stator/rotor flux dynamics
domega_e = 1/(2*H_m)*(tau_m - tau_e - D*omega_e)
dpsi_sd = Omega_b*(-v_sd -R_s*i_sd - omega_s*psi_sq)
dpsi_sq = Omega_b*(-v_sq -R_s*i_sq + omega_s*psi_sd)
dpsi_rd = Omega_b*(-v_rd -R_r*i_rd - (omega_s-omega_e)*psi_rq)
dpsi_rq = Omega_b*(-v_rq -R_r*i_rq + (omega_s-omega_e)*psi_rd)

# algebraic equations: flux linkages as functions of the currents
g_1 = -psi_sd + (L_m + L_ls)*i_sd + L_m*i_rd
g_2 = -psi_sq + (L_m + L_ls)*i_sq + L_m*i_rq
g_3 = -psi_rd + (L_m + L_lr)*i_rd + L_m*i_sd
g_4 = -psi_rq + (L_m + L_lr)*i_rq + L_m*i_sq
#g_5 = P_h - (v_h*v_0*sin(theta_h - theta_0))/X_l
#g_6 = Q_c + Q_h + (v_h*v_0*cos(theta_h - theta_0))/X_l - v_h**2/X_l
#g_7 = -P_h - (v_sd*i_sd + v_sq*i_sq)
#g_8 = -Q_h - (v_sq*i_sd - v_sd*i_sq)
h_1 = I_b*(i_sd*i_sd + i_sq*i_sq)**0.5   # stator current magnitude (A)
h_p = (v_sd*i_sd + v_sq*i_sq)            # active power (p.u.)
h_q = (v_sd*i_sq - v_sq*i_sd)            # reactive power (p.u.)
sys = {'name':'imib_fisix_5ord',
'params':params,
'f':[domega_e,dpsi_sd,dpsi_sq,dpsi_rd,dpsi_rq],
'g':[g_1,g_2,g_3,g_4],#,g_5,g_6,g_7,g_8],
'g_ini':[g_1,g_2,g_3,g_4],#,g_5,g_6,g_7,g_8],
'x':x_list,
'y_ini':y_ini_list,
'y':y_run_list,
'u_run_dict':u_run_dict,
'u_ini_dict':u_ini_dict,
'h':[h_1,h_p,h_q]}
sys = db.system(sys)
db.sys2num(sys)
```
```python
sys['f']
```
$\displaystyle \left[\begin{matrix}\frac{i_{sd} \psi_{sq} - i_{sq} \psi_{sd} + \tau_{m}}{2 H_{m}}\\\Omega_{b} \left(- R_{s} i_{sd} - \omega_{s} \psi_{sq}\right)\\\Omega_{b} \left(- R_{s} i_{sq} + \omega_{s} \psi_{sd} + 1\right)\\\Omega_{b} \left(- R_{r} i_{rd} - \psi_{rq} \left(- \omega_{e} + \omega_{s}\right) - v_{rd}\right)\\\Omega_{b} \left(- R_{r} i_{rq} + \psi_{rd} \left(- \omega_{e} + \omega_{s}\right) - v_{rq}\right)\end{matrix}\right]$
```python
```
| f23c83bed287b5ca7e97435ba8bbe3467b268431 | 6,660 | ipynb | Jupyter Notebook | examples/machines/im_milano/imib_fisix_5ord_builder.ipynb | pydae/pydae | 8076bcfeb2cdc865a5fc58561ff8d246d0ed7d9d | [
"MIT"
] | 1 | 2020-12-20T03:45:26.000Z | 2020-12-20T03:45:26.000Z | examples/machines/im_milano/imib_fisix_5ord_builder.ipynb | pydae/pydae | 8076bcfeb2cdc865a5fc58561ff8d246d0ed7d9d | [
"MIT"
] | null | null | null | examples/machines/im_milano/imib_fisix_5ord_builder.ipynb | pydae/pydae | 8076bcfeb2cdc865a5fc58561ff8d246d0ed7d9d | [
"MIT"
] | null | null | null | 35.806452 | 740 | 0.515015 | true | 1,274 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90053 | 0.743168 | 0.669245 | __label__kor_Hang | 0.129944 | 0.393211 |
<a href="https://colab.research.google.com/github/AnilZen/centpy/blob/master/notebooks/Scalar_2d.ipynb" target="_parent"></a>
# Quasilinear scalar equation with CentPy in 2d
### Import packages
```python
# Install the centpy package
!pip install centpy
```
Collecting centpy
Downloading https://files.pythonhosted.org/packages/92/89/7cbdc92609ea7790eb6444f8a189826582d675f0b7f59ba539159c43c690/centpy-0.1-py3-none-any.whl
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from centpy) (1.18.5)
Installing collected packages: centpy
Successfully installed centpy-0.1
```python
# Import numpy and centpy for the solution
from numpy import pi, sin, cos, abs, min, max
import centpy
```
```python
# Imports functions from matplotlib and setup for the animation
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML
```
## Equation
We solve the nonlinear scalar conservation law
\begin{equation}
\partial_t u + \partial_x \sin u + \frac{1}{3} \partial_y u^3= 0,
\end{equation}
on the domain $(x,y,t)\in([0,2\pi]\times[0,2\pi]\times[0,6])$ with initial data
\begin{equation}
u(x,y,0) = \sin \left(x+\frac{1}{2}\right) \cos(2x+y)
\end{equation}
and periodic boundary conditions. The solution is computed using a 144 $\times$ 144 mesh and CFL number 0.9.
```python
pars = centpy.Pars2d(
x_init=0, x_final=2*pi,
y_init=0.0, y_final=2*pi,
J=144, K=144,
t_final=6.0,
dt_out=0.1,
cfl=0.9,
scheme="sd3",
)
```
```python
class Scalar2d(centpy.Equation2d):
def initial_data(self):
x = self.xx.T; y = self.yy.T
return sin(x + 0.5) * cos(2*x + y)
    def boundary_conditions(self, u):
        # periodic boundaries, imposed on two ghost cells per side
        # x-boundary
        u[0] = u[-4]
        u[1] = u[-3]
        u[-2] = u[2]
        u[-1] = u[3]
        # y-boundary
        u[:, 0] = u[:, -4]
        u[:, 1] = u[:, -3]
        u[:, -2] = u[:, 2]
        u[:, -1] = u[:, 3]
def flux_x(self, u):
return sin(u)
def flux_y(self, u):
return 1./3 *u**3
def spectral_radius_x(self, u):
return abs(cos(u))
def spectral_radius_y(self, u):
return u**2
```
## Solution
```python
eqn = Scalar2d(pars)
soln = centpy.Solver2d(eqn)
soln.solve()
```
## Animation
```python
# Animation
j0 = slice(2, -2)
fig = plt.figure()
ax = plt.axes(xlim=(soln.x_init,soln.x_final), ylim=(soln.y_init, soln.y_final))
ax.set_title("Nonlinear scalar")
ax.set_xlabel("x")
ax.set_ylabel("y")
contours=ax.contour(soln.x[j0], soln.y[j0], soln.u_n[0,j0,j0], 8, colors='black')
img=ax.imshow(soln.u_n[0,j0,j0], extent=[0, 6.3, 0, 6.3], origin='lower',
cmap='ocean', alpha=0.5)
fig.colorbar(img)
def animate(i):
ax.collections = []
ax.contour(soln.x[j0], soln.y[j0], soln.u_n[i,j0,j0], 8, colors='black')
img.set_array(soln.u_n[i,j0,j0])
img.autoscale()
plt.close()
anim = animation.FuncAnimation(fig, animate, frames=soln.Nt, interval=100, blit=False);
HTML(anim.to_html5_video())
```
```python
```
| 31fcdf9ee0987e7e132118a9bf87a2c82ea32a53 | 368,597 | ipynb | Jupyter Notebook | notebooks/Scalar_2d.ipynb | olekravchenko/centpy | e10d1b92c0ee5520110496595b6875b749fa4451 | [
"MIT"
] | 2 | 2021-06-23T17:23:21.000Z | 2022-01-14T01:28:57.000Z | notebooks/Scalar_2d.ipynb | olekravchenko/centpy | e10d1b92c0ee5520110496595b6875b749fa4451 | [
"MIT"
] | null | null | null | notebooks/Scalar_2d.ipynb | olekravchenko/centpy | e10d1b92c0ee5520110496595b6875b749fa4451 | [
"MIT"
] | null | null | null | 90.519892 | 232 | 0.76853 | true | 1,017 | Qwen/Qwen-72B | 1. YES
2. YES | 0.874077 | 0.661923 | 0.578572 | __label__eng_Latn | 0.389761 | 0.182546 |
```python
import numpy as np
import math
from scipy import stats
import matplotlib as mpl
import matplotlib.pyplot as plt
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
%matplotlib inline
plt.style.use('seaborn-whitegrid')
mpl.style.use('seaborn')
prop_cycle = plt.rcParams["axes.prop_cycle"]
colors = prop_cycle.by_key()["color"]
import Abatement_functions as func
```
# Abatement Project: Adding end-of-pipe technologies to a CGE
Consider the simple setup:
* A sector uses inputs $(x,y)$ in production of a commodity/service $H$.
* In using the input $x$ there is an associated emission of something 'bad'. Let $e^0$ denote emissions prior to abatement and $e$ actual emissions.
* There is a pre-abatement known emission coefficient in the use of $x$. Denote this $\eta$. Thus we have the simple relation:
$$\begin{align}
e^0 = \eta x^0,
\end{align}$$
where $x^0$ is the use of the input $x$ in the absence of any regulation on emissions.
* Emissions are taxed at a rate $\tau$.
In general we think of three abatement channels:
* **Output reduction**: Given input-mix and abatement technology, a lower output lowers the use of the dirty input $x$ and thus emissions.
* **Input substitution**: Given the output level and abatement technology, input can be mixed with lower use of $x$ (consequently more use of $y$ here), emissions can be lowered.
* **Abatement technology**: Given the output level and input-mix, targeted abatement equipment can lower emissions directly at some cost.
We can further think of **abatement technology** in two different ways:
* **End-of-pipe abatement:** Lowers emissions without altering the optimal input-mix (think of a filter on an engine that burns fossil fuels).
* **Input-displacing abatement technology:** A discrete change in technology that alters the production function altogether (i.e. altering a production process).
In the following we will refer to *abatement technology* as an *end-of-pipe* type of technology, as the *input-displacing* type would involve a discrete choice between different production functions, which presents more of a challenge.
## 1: Abatement as a discrete set of end-of-pipe technologies
### 1.1: Representing technology data
We assume that a technology dataset gives $T$ ways of abating emissions. Each technology $t$ is summarized by two pieces of information:
* $q_t \in [0,1]$ is the share of emissions that are abated, if the equipment is installed.
* $c_t\in \mathbb{R}_+$ is the unit cost of abating emissions.
This gives rise to the optimal abatement $(A)$ as a function of the tax rate $(\tau)$, as well as the total abatement costs $(C)$:
$$\begin{align}
A(\tau) &= \eta x \sum_{t=1}^{T} q_t * \mathbf{1}_{c_t<\tau}, && \mathbf{1}_{c_t<\tau} = \left\lbrace \begin{array}{ll} 0, & c_t\geq \tau \\ 1, & c_t<\tau \end{array} \right. \tag{D1}\\
C(\tau) &= \eta x \sum_{t=1}^{T} q_t c_t * \mathbf{1}_{c_t<\tau} \tag{D2}
\end{align}$$
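A direct translation of (D1)-(D2) into NumPy could look as follows; this is a sketch with hypothetical arrays `c` and `q` holding unit costs and abatement shares, and `eta_x` standing for the pre-abatement emissions $\eta x$:
```python
def abatement_step(tau, c, q, eta_x=1.0):
    """Discrete abatement (D1) and abatement costs (D2) at tax rate tau."""
    active = c < tau                      # indicator 1_{c_t < tau}
    A = eta_x * np.sum(q * active)        # abated emissions
    C = eta_x * np.sum(q * c * active)    # total abatement costs
    return A, C
```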
**Draw and plot a data set for technologies:**
```python
T = 10; # Number of technologies
abate_upper = 0.9 # Upper limit of total abatement (in percentages)
c_upper = 1 # Most expensive abatement technology
seed=21 # Set seed so the random draw does not change every time we run this cell.
labels = ['Tax on emissions', 'Abatement'] # Names on axes in plot below.
sample_c, sample_q_sum, sample_q = func.draw_sample(T,abate_upper,c_upper,seed) # draw samples of unit costs and abatement potential.
```
```python
func.plot_stepcurve(sample_c,sample_q_sum,labels) # Plot abatement function.
```
### 1.2: Smoothing out the step-function
In an older version we told a story about what this step function would look like if, instead of constant unit costs of abatement $c_t$, there were a continuum of heterogeneous firms applying the abatement technologies. Either through differences in abilities or in compatibility with the technology, we can think of the heterogeneity as resulting in varying unit costs $c_t^i$ for each firm $i$ in the continuum. Alternatively, we can think of it as a *smoothing device*, facilitating the use of gradient-based solvers.
The idea is to replace the abatement and cost functions with a smoothed stepwise curve of the form:
$$\begin{align}
A_t(\tau) &= \eta x q_t \int_{\underline{s}}^{\tau} dF(c_t^i), \tag{S1} \\
&= \eta x q_t F_t(\tau) \\
C_t(\tau) &= \eta x q_t \int_{\underline{s}}^{\tau} c_t^i dF(c_t^i), \tag{S2} \\
&= \eta x q_t F_t(\tau) \mathbb{E}\left[c_t^i|c_t^i<\tau\right].
\end{align}$$
where $F(\cdot)$ represents some continuously differentiable distribution with support on $[\underline{s},\bar{s}]$ (bounded support is not a necessary condition). Relevant functions that approximate the step-shape well are the *normal* and *log-normal* distributions.
In the following we use the log-normal distribution in which case the functions are given by:
$$\begin{align}
ln(c_t^i) &\sim N\left(ln(c_t)-\dfrac{\sigma^2}{2},\mbox{ }\sigma^2\right) \tag{S3} \\
A_t(\tau) &= \eta x q_t \underbrace{\Phi\left(\dfrac{ln(\tau)-ln(c_t)+\sigma^2/2}{\sigma}\right)}_{F_t^q(\tau)} \tag{S4} \\
C_t(\tau) &= \eta x q_t c_t \underbrace{\Phi\left(\dfrac{ln(\tau)-ln(c_t)-\sigma^2/2}{\sigma}\right)}_{F_t^c(\tau)} \tag{S5}
\end{align}$$
(Alternatively we could use the normal distribution. This has some nice properties when it comes to the $A_t(\tau)$ function, but is a more complicated function for $C_t(\tau)$. There are several other readily available smoothing functions out there though.)
**Plot the smoothed function varying $\sigma$**
```python
n_tau_grid = 500
n_sigma_grid = 100;
sigma_grid = np.linspace(0.0001,0.25,n_sigma_grid)
tau_grid = np.linspace(0.01,c_upper*1.1,n_tau_grid)
A_share, C_share = func.smooth_sample(sample_c,tau_grid,sigma_grid)
```
```python
A = np.sum((A_share * sample_q[:,None,None]), axis=0)
func.interactive_sigma(A,tau_grid,sigma_grid,int((n_sigma_grid/2)),'Abatement')
```
interactive(children=(FloatSlider(value=0.1263121212121212, description='$\\sigma$', max=0.25, min=0.0001, ste…
Note that with a log-normal distribution, the $\sigma$ values should not be constant across technologies, as there is a clear skewness in the smoothing (more smooth for larger $\tau$).
## 2: Application in the world's simplest CGE model
### 2.1: The Setup:
In a shocking twist of events we learn that the production function of the polluting firm is of the CES type. With abatement technology and cost functions as discussed above, the profit maximization problem that the firm is facing is given by:
$$\begin{align}
\max_{x,y,H}\mbox{ }\Pi = pH - \tau e -C-p_xx-p_yy, \tag{CES-1}
\end{align}$$
subject to the constraints:
$$\begin{align}
H =& \left(\mu_x^{1/\theta}x^{\frac{\theta-1}{\theta}}+\mu_y^{1/\theta}y^{\frac{\theta-1}{\theta}}\right)^{\frac{\theta}{\theta-1}} \tag{CES-2} \\
e =& \eta x \left(1-\sum_{t=1}^T q_tF_t^q(\tau) \right) \tag{CES-3} \\
C =& \eta x \sum_{t=1}^T q_tc_tF_t^c(\tau) \tag{CES-4}
\end{align}$$
The resulting first order conditions for optimality are then given by:
$$\begin{align}
x &= \mu_x H \left(\dfrac{p}{\hat{p}_x}\right)^{\theta} \tag{CES-5} \\
y &= \mu_y H \left(\dfrac{p}{p_y}\right)^{\theta} \tag{CES_6} \\
\hat{p}_x &= p_x+\eta\left(\tau-\sum_{t=1}^Tq_t\Big[\tau F_t^q(\tau)-c_tF_t^c(\tau)\Big]\right) \tag{CES-7}
\end{align} $$
along with (CES-2)-(CES-4).
### 2.2: Emission taxes increases adjusted relative price of applying dirty input
The additional effects on the relevant price on the use of the dirty input $x$ is now:
1. Emission tax on marginal pollution of size $\eta \tau$.
2. A gain (relative to paying the full tax on all emissions), from applying profitable abatement equipment:
$$\begin{align}
\text{Abatement gain } = \eta \sum_{t=1}^Tq_t\Big[\tau F_t^q(\tau)-c_t F_t^c(\tau)\Big]\geq 0.
\end{align}$$
The inequality comes from the fact that
$$\begin{align}
c_tF_t^c(\tau) = \mathbb{E}\left[c_t|c_t<\tau\right] F_t^q(\tau),
\end{align}$$
which per construction is lower than $\tau F_t^q(\tau)$. However, it is also straightforward to see that the higher the tax $\tau$, the higher is the 'adjusted' price on the dirty input:
$$\begin{align}
\hat{p}_x \geq p_x, && \text{as } && \tau \geq \sum_{t=1}^T q_t\Big[\tau F_t^q(\tau)-c_tF_t^c(\tau)\Big].
\end{align}$$
This follows from the fact that:
$$\begin{align}
\sum_t^T q_t \leq& 1
\end{align}$$
and
$$\begin{align}
0\leq \tau F_t^q(\tau)-c_tF_t^c(\tau) = F_t^q(\tau)\left(\tau-\mathbb{E}\left[c_t|c_t<\tau\right]\right) \leq \tau.
\end{align}$$
### 2.3: The abatement gain is fully crowded out by adjusted output prices, i.e. there are still zero profits with perfect competition
Evaluating the profit function in (CES-1), substituting initially for (CES-5) and (CES-6) yields:
$$\begin{align}
\Pi^* &= pH-\tau e-C-\mu_x H p^{\theta} \dfrac{p_x}{(\hat{p}_x)^{\theta}}-\mu_y H p^{\theta}p_y^{1-\theta} \\
&= H\left[p-p^{\theta}\left(\mu_x \dfrac{p_x}{\left(\hat{p}_x\right)^{\theta}}+\mu_yp_y^{1-\theta}\right)\right]-\tau e- C
\end{align}$$
Focus on the last part concerning emissions. Substitute for (CES-3)-(CES-4) (abatement functions) to get:
$$\begin{align}
\tau e + C &= \tau \eta x \left(1-\sum_{t=1}^T q_tF_t^q (\tau)\right)+\eta x \sum_{t=1}^T q_tc_tF_t^c(\tau). \\
&= \eta x \left(\tau - \sum_{t=1}^T q_t \left[\tau F_t^q(\tau)-c_tF_t^c(\tau)\right]\right)
\end{align}$$
Using (CES-7) we can rewrite this as:
$$\begin{align}
\tau e + C = x \left(\hat{p}_x-p_x\right).
\end{align}$$
Plugging this back into the expression for $\Pi^*$ we then have:
$$\begin{align}
\Pi^* = H\left[p-p^{\theta}\left(\mu_x\dfrac{p_x}{(\hat{p}_x)^{\theta}}+\mu_yp_y^{1-\theta}\right)\right]-x\left(\hat{p}_x-p_x\right).
\end{align}$$
Finally substituting for $x$ using (CES-5) this yields the maximized profit function:
$$\begin{align}
\Pi^* &= H\left[p-p^{\theta}\left(\mu_x\dfrac{p_x}{(\hat{p}_x)^{\theta}}+\mu_yp_y^{1-\theta}\right)\right]-\mu_x H \left(\dfrac{p}{\hat{p}_x}\right)^{\theta}\left(\hat{p}_x-p_x\right) \\
& = H\left[p-p^{\theta}\left(\mu_x\left(\hat{p}_x\right)^{1-\theta}+\mu_y\left(p_y\right)^{1-\theta}\right)\right]
\end{align}$$
With the *usual* CES price index of (Combine production function with the (CES-5)-(CES-6) to show)
$$\begin{align}
p = \left(\mu_x\left(\hat{p}_x\right)^{1-\theta}+\mu_y\left(p_y\right)^{1-\theta}\right)^{\frac{1}{1-\theta}},
\end{align}$$
this yields exactly zero profits $\Pi^* = 0$.
## 3: Further analysis and plots
Here we investigate and plot a number of the functions derived above:
* The *abatement gain* function,
* The *adjusted relative price on dirty good*,
* Total emissions and abatement, (more interesting things?)
all as a function of the tax rate $\tau$, and with interactive features for parameters $(\mu_x,\theta,\sigma)$.
### 3.1: Abatement gain function
Start with settings:
```python
# Fixed values:
eta,px,py,muy = 2,5,4,1
# Grids:
n_tau_grid,n_sigma_grid,n_mux_grid,n_theta_grid = 100,10,6,6
tau_grid = np.linspace(0.01,c_upper*1.1,n_tau_grid)
sigma_grid = np.linspace(0.0001,0.25,n_sigma_grid)
mux_grid = np.linspace(0.5, 2.5,n_mux_grid)
theta_grid = np.linspace(0.25, 1.25, n_theta_grid)
```
Used already drawn abatement technology data set. Recall only $(\sigma)$ parameter enters the function. Note that the abatement gain function per construction is lower than $\tau$.
```python
A_share, C_share = func.smooth_sample(sample_c,tau_grid,sigma_grid) # 3-dimensions: Technology (T), tau grid, sigma grid.
A = np.sum(( A_share * sample_q[:,None,None] * tau_grid[None,:,None]), axis=0)
C = np.sum(( C_share * sample_q[:,None,None] * sample_c[:,None,None]), axis=0)
Abatement_gain = eta*(A-C)
func.interactive_sigma(Abatement_gain,tau_grid,sigma_grid,int((n_sigma_grid/2)),'Abatement gain')
```
interactive(children=(FloatSlider(value=0.13893333333333333, description='$\\sigma$', max=0.25, min=0.0001, st…
### 3.2: The Relative price on the 'dirty' input factor
Recall that the price was given by:
$$\begin{align}
\hat{p}_x &= p_x+\eta\left(\tau-\sum_{t=1}^Tq_t\Big[\tau F_t^q(\tau)-c_tF_t^c(\tau)\Big]\right) = p_x + \eta \tau - \text{Abatement gain}
\end{align}$$
```python
px_hat = px + eta * tau_grid[:,None]-Abatement_gain
func.interactive_sigma(px_hat,tau_grid,sigma_grid,int((n_sigma_grid/2)),'$\hat{p}_x$')
```
interactive(children=(FloatSlider(value=0.13893333333333333, description='$\\sigma$', max=0.25, min=0.0001, st…
### 3.3: Emissions and abatement
| 6fc105e8dfd07de52c1490003c40be45a2b70046 | 34,619 | ipynb | Jupyter Notebook | Abatement_v1.ipynb | ChampionApe/Abatement_project | eeb1ebe3ed84a49521c18c0acf22314474fbfc2e | [
"MIT"
] | null | null | null | Abatement_v1.ipynb | ChampionApe/Abatement_project | eeb1ebe3ed84a49521c18c0acf22314474fbfc2e | [
"MIT"
] | null | null | null | Abatement_v1.ipynb | ChampionApe/Abatement_project | eeb1ebe3ed84a49521c18c0acf22314474fbfc2e | [
"MIT"
] | null | null | null | 63.521101 | 15,144 | 0.743205 | true | 3,951 | Qwen/Qwen-72B | 1. YES
2. YES | 0.849971 | 0.76908 | 0.653696 | __label__eng_Latn | 0.880639 | 0.357086 |
# Лаба Дмитро
## Лабораторна робота №2
## Варіант 2
```python
import numpy as np
import sympy as sp
from scipy.linalg import eig
from sympy.matrices import Matrix
from IPython.display import display, Math, Latex
def bmatrix(a):
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{bmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{bmatrix}']
return Math('\n'.join(rv))
```
# Завдання №1
```python
industry = (600, 450, 1200)
industry_added_value = 0.3
agriculture = (450, 400, 700)
agriculture_added_value = 0.5
matrix = np.array([industry, agriculture])
added_value = np.array([industry_added_value, agriculture_added_value])
```
```python
gross_output = np.sum(matrix, axis=1)
direct_expenses = matrix[0:2, 0:2]/gross_output
print('Прямі витрати: ')
display(bmatrix(direct_expenses))
```
Прямі витрати:
$$\begin{bmatrix}
0.26666667 & 0.29032258\\
0.2 & 0.25806452\\
\end{bmatrix}$$
```python
full_expenses = np.linalg.inv(np.eye(len(direct_expenses)) - direct_expenses)
print('Повні витрати: ')
display(bmatrix(full_expenses))
```
Повні витрати:
$$\begin{bmatrix}
1.52654867 & 0.59734513\\
0.41150442 & 1.50884956\\
\end{bmatrix}$$
```python
prices = added_value @ full_expenses
print('Ціни на промислову та сільськогосподарську продукцію: ')
display(bmatrix(prices))
```
Ціни на промислову та сільськогосподарську продукцію:
$$\begin{bmatrix}
0.66371681 & 0.93362832\\
\end{bmatrix}$$
# Завдання №2
```python
A = np.array([
[0.4, 0.4, 0.2],
[0.2, 0.5, 0.4],
[0.1, 0.1, 0.2]
])
n = len(A)
```
## Власні числа
```python
eigvals = np.linalg.eigvals(A)
print('\n'.join(['({0:.2f} {1} {2:.2f}i)'.format(n.real, '+-'[n.imag < 0], abs(n.imag)) for n in eigvals]))
```
(0.84 + 0.00i)
(0.13 + 0.07i)
(0.13 - 0.07i)
## Коефіцієнти характеристичного поліному
```python
M = Matrix(A)
lamda = sp.symbols('lamda')
characteristic_polynomial = M.charpoly(lamda)
sp.init_printing(use_latex='mathjax')
display(sp.factor(characteristic_polynomial))
```
$$1.0 \left(1.0 \lambda^{3} - 1.1 \lambda^{2} + 0.24 \lambda - 0.018\right)$$
## Число Фробеніуса
```python
frobenius_number = np.real(np.max(eigvals))
print('{0:.2f}'.format(frobenius_number))
```
0.84
## Правий та лівий вектори Фробеніуса
```python
a, left, right = eig(A, left=True)
print('Лівий вектор: ')
left_frobenius = np.real(left[:, 0])
display(bmatrix(left_frobenius))
print('Перевірка: ')
display(bmatrix(left_frobenius @ A))
display(bmatrix(frobenius_number * left_frobenius))
print('Правий вектор: ')
right_frobenius = np.real(right[:, 0])
display(bmatrix(right_frobenius))
print('Перевірка: ')
display(bmatrix(A @ right_frobenius))
display(bmatrix(frobenius_number * right_frobenius))
```
Лівий вектор:
$$\begin{bmatrix}
0.44397979 & 0.69076468 & 0.57072419\\
\end{bmatrix}$$
Перевірка:
$$\begin{bmatrix}
0.37281727 & 0.58004668 & 0.47924667\\
\end{bmatrix}$$
$$\begin{bmatrix}
0.37281727 & 0.58004668 & 0.47924667\\
\end{bmatrix}$$
Правий вектор:
$$\begin{bmatrix}
0.7089431 & 0.67144485 & 0.21578112\\
\end{bmatrix}$$
Перевірка:
$$\begin{bmatrix}
0.5953114 & 0.56382349 & 0.18119502\\
\end{bmatrix}$$
$$\begin{bmatrix}
0.5953114 & 0.56382349 & 0.18119502\\
\end{bmatrix}$$
## Продуктивність матриці
За критерієм Леонтьєва, необхідня і достатня умова $\lambda_A < 1$ виконується, тому матриця є продуктивною
## Матриця повних витрат
```python
B = np.linalg.inv(np.eye(len(A)) - A)
display(bmatrix(B))
```
$$\begin{bmatrix}
2.95081967 & 2.78688525 & 2.13114754\\
1.63934426 & 3.7704918 & 2.29508197\\
0.57377049 & 0.81967213 & 1.80327869\\
\end{bmatrix}$$
## Збіжність до матриці повних витрат
```python
matrix_sum, A_power = np.eye(n), np.eye(n)
for i in range(200):
A_power = A_power @ A
matrix_sum += A_power
if np.all(B - matrix_sum) < 0.01:
print('Матриця збіжна до матриці повних витрат на кроці {}'.format(i))
break
display(bmatrix(matrix_sum))
```
Матриця збіжна до матриці повних витрат на кроці 193
$$\begin{bmatrix}
2.95081967 & 2.78688525 & 2.13114754\\
1.63934426 & 3.7704918 & 2.29508197\\
0.57377049 & 0.81967213 & 1.80327869\\
\end{bmatrix}$$
## Вектор кінцевого випуску
```python
y = np.array([100, 70, 80])
end_output = np.linalg.inv(np.eye(n) - A) @ y
display(bmatrix(end_output))
```
$$\begin{bmatrix}
660.6557377 & 611.47540984 & 259.01639344\\
\end{bmatrix}$$
| 8e29f38ba494a0eb80ca405b74a94376edd2d792 | 12,031 | ipynb | Jupyter Notebook | eco_systems/laba2.ipynb | pashchenkoromak/jParcs | 5d91ef6fdd983300e850599d04a469c17238fc65 | [
"MIT"
] | 2 | 2019-10-01T09:41:15.000Z | 2021-06-06T17:46:13.000Z | eco_systems/laba2.ipynb | pashchenkoromak/jParcs | 5d91ef6fdd983300e850599d04a469c17238fc65 | [
"MIT"
] | 1 | 2018-05-18T18:20:46.000Z | 2018-05-18T18:20:46.000Z | eco_systems/laba2.ipynb | pashchenkoromak/jParcs | 5d91ef6fdd983300e850599d04a469c17238fc65 | [
"MIT"
] | 8 | 2017-01-20T15:44:06.000Z | 2021-11-28T20:00:49.000Z | 21.369449 | 114 | 0.468872 | true | 1,885 | Qwen/Qwen-72B | 1. YES
2. YES | 0.926304 | 0.83762 | 0.77589 | __label__kor_Hang | 0.074921 | 0.640986 |
```python
### PREAMBLE
# Chapter 2 - linear models
# linear.svg
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
```
**any bullet points are comments I've made to help for understanding!**
## Chapter 2: Linear models
Before we dive into the discussion of adversarial attacks and defenses on deep networks, it is worthwhile considering the situation that arises when the hypothesis class is linear. That is, for the multi-class setting $h_\theta : \mathbb{R}^n \rightarrow \mathbb{R}^k$, we consider a classifier of the form
\begin{equation}
h_\theta(x) = W x + b
\end{equation}
where $\theta = \{W \in \mathbb{R}^{k \times n}, b \in \mathbb{R}^k\}$. We will also shortly consider a binary classifier of a slightly different form, as many of the ideas are a bit easer to describe in this setting, before returning back to the multi-class case.
Substituting this hypothesis back into our robust optimization framework, and also focusing on the case where the pertrubation set $\Delta$ is a norm ball $\Delta = \{\delta : \|\delta\| \leq \epsilon\}$, where we don't actually specify the type of norm, so this could be $\ell_\infty$, $\ell_2$, $\ell_1$, etc, we arrive at the mix-max problem
\begin{equation}
\DeclareMathOperator*{\minimize}{minimize}
\minimize_{W,b} \frac{1}{|D|} \sum_{x,y \in D} \max_{\|\delta\| \leq \epsilon}\ell(W(x+\delta) + b, y).
\end{equation}
**The key point we will emphasize in this section, is that under this formulation, we can solve the inner maximization $exactly$ for the case of binary optimization, and provide a relatively tight upper bound for the case of multi-class classification. Futhermore, because the resulting minimization problem is still convex in $\theta$ (we will see shortly that it remains convex even after maximizing over $\delta$, the resulting robust training procedure can _also_ be solved optimally, and thus we can achieve the globally optimal robust classifier (at least for the case of binary classification). This is in stark constrast to the deep network case, where neither the inner maximization problem nor the outer minmimization problem can be solved globally (in the case of the outer minimization, this holds _even_ if we assume exact solutions of the inner problem, due to the non-convexity of the network itself).**
However, understanding the linear case provides important insights into the theory and practice of adversarial robustness, and also provides connections to more commonly-studied methods in machine learning such as support vector machines.
## Binary classification
Let's begin first by considering the case of binary classification, i.e., k=2 in the multi-class setting we desribe above. In this case, rather than use multi-class cross entropy loss, we'll be adopting the more common approach and using the binary cross entropy, or logistic loss. In this setting, we have our hypothesis function
\begin{equation}
h_\theta(x) = w^T x + b
\end{equation}
for $\theta = \{w \in \mathbb{R}^n, b \in \mathbb{R}\}$, class label $y \in \{+1,-1\}$, and loss function
\begin{equation}
\ell(h_\theta(x), y) = \log(1+\exp(-y\cdot h_\theta(x))) \equiv L(y \cdot h_\theta(x))
\end{equation}
where for convience below we define the function $L(z) = \log(1+\exp(-z))$ which we will use below when discussing how to solve the optimization problems involving this loss. The semantics of this setup are that for a data point $x$, the classifier predicts class $+1$ with probability
\begin{equation}
p(y=+1|x) = \frac{1}{1 + \exp(-h_\theta(x))}.
\end{equation}
**Aside:** Again, for those who may be unfamiliar with how this setting relates to the multiclass case we saw before, note that if we use the traditional mutlticlass cross entropy loss with two classes, of class 1 would be given by
\begin{equation}
\frac{\exp(h_\theta(x)_1)}{\exp(h_\theta(x)_1) + \exp(h_\theta(x)_2)} =
\frac{1}{1 + \exp(h_\theta(x)_2 - h_\theta(x)_1)}
\end{equation}
and similarly the probaiblity of predicting class 2
\begin{equation}
\frac{\exp(h_\theta(x)_2)}{\exp(h_\theta(x)_1) + \exp(h_\theta(x)_2)} =
\frac{1}{1 + \exp(h_\theta(x)_1 - h_\theta(x)_2)}.
\end{equation}
We can thus define a single scalar-valued hypothesis
\begin{equation}
h'_\theta(x) \equiv h_\theta(x)_1 - h_\theta(x)_2
\end{equation}
with the associated probabilities
\begin{equation}
p(y|x) = \frac{1}{1 + \exp(-y\cdot h'_\theta(x))}
\end{equation}
for $y$ defined as $y \in \{+1,-1\}$ as written above. Taking the negative log of this quantity gives
\begin{equation}
-\log \frac{1}{1 + \exp(-y\cdot h'_\theta(x))} = \log(1 + \exp(-y\cdot h'_\theta(x)))
\end{equation}
which is exactly the logistic loss we define above.
### Solving the inner maximization problem
* showing how this can be done in the binary case for a linear model
Now let's return to the robust optimization problem, and consider the inner maximization problem, which in this case takes the form
\begin{equation}
\DeclareMathOperator*{\maximize}{maximize}
\maximize_{\|\delta\| \leq \epsilon} \ell(w^T (x+\delta), y) \equiv \maximize_{\|\delta\| \leq \epsilon} L(y \cdot (w^T(x+\delta) + b)).
\end{equation}
The key point we need to make here is that in this setting, it is actually possible to solve this inner maximization problem exactly. To show this, first note the $L$ as we described it earlier is a scalar function that is monotonically decreasing, and looks like the following:
```python
x = np.linspace(-4,4)
plt.plot(x, np.log(1+np.exp(-x)))
```
[<matplotlib.lines.Line2D at 0x11d3c2470>]
Because the function is monotoic decreasing, if we want to maximize this function applied to a scalar, that is equivalent to just minimizing the scalar quantity. That is
\begin{equation}
\begin{split}
\DeclareMathOperator*{\minimize}{minimize}
\max_{\|\delta\| \leq \epsilon} L \left(y \cdot (w^T(x+\delta) + b) \right) & =
L\left( \min_{\|\delta\| \leq \epsilon} y \cdot (w^T(x+\delta) + b) \right) \\
& = L\left(y\cdot(w^Tx + b) + \min_{\|\delta\| \leq \epsilon} y \cdot w^T\delta \right)
\end{split}
\end{equation}
where we get the second line by just distributing out the linear terms.
So we need to consider how to solve the problem
\begin{equation}
\min_{\|\delta\| \leq \epsilon} y \cdot w^T\delta.
\end{equation}
To get the intuition here, let's just consider the case that $y = +1$, and consider an $\ell_\infty$ norm constraint $\|\delta\|_\infty \leq \epsilon$. **Since the $\ell_\infty$ norm says that each element in $\delta$ must have magnitude less than or equal $\epsilon$, we clearly minimize this quantity when we set $\delta_i = -\epsilon$ for $w_i \geq 0$ and $\delta_i = \epsilon$ for $w_i < 0$. For $y = -1$, we would just flip these quantities.** That is, the optimal solution to the above optimization problem for the $\ell_\infty$ norm is given by
\begin{equation}
\delta^\star = - y \epsilon \cdot \mathrm{sign}(w)
\end{equation}
Furthermore, we can also determine the function valued achieved by this solution,
\begin{equation}
y \cdot w^T\delta^\star = y \cdot \sum_{i=1} -y \epsilon \cdot \mathrm{sign}(w_i) w_i = -y^2 \epsilon \sum_{i} |w_i| = -\epsilon \|w\|_1.
\end{equation}
Thus, we can actually _analytically_ compute the solution of the inner maximization problem, which just has the form
\begin{equation}
\max_{\|\delta\|_\infty \leq \epsilon} L \left(y \cdot (w^T(x+\delta) + b) \right) =
L \left(y \cdot (w^Tx + b) - \epsilon \|w\|_1 \right ).
\end{equation}
**Therefore, instead of solving the robust min-max problem as an actual min-max problem, we have been able to convert it to a pure minimization problem,** given by
\begin{equation}
\minimize_{w,b} \frac{1}{D} \sum_{(x,y) \in D} L \left(y \cdot (w^Tx + b) - \epsilon \|w\|_1 \right ).
\end{equation}
**This problem is still convex in $w,b$, so can be solved exactly, or e.g., SGD will also approach the globally optimal solution.** A little more generally, it turns out that in general the optimization problem
\begin{equation}
\min_{\|\delta\| \leq \epsilon} y \cdot w^T\delta = -\epsilon \|w\|_*
\end{equation}
where $\|\cdot\|_*$ denotes the the dual norm of our original norm bound on $\theta$ ($\|\cdot\|_p$ and $\|\cdot\|_q$ are dual norms for $1/p + 1/q = 1$). So regardless of our norm constraint, we can actually solve the robust optimization problem via a single minimization problem (and find the analytical solution to the worse-case adversarial attack), without the need to explicitly solve a min-max problem.
Note that the final robust optimization problem (now adopting the general form),
\begin{equation}
\minimize_{w,b} \frac{1}{D}\sum_{(x,y) \in D} L \left(y \cdot (w^Tx + b) - \epsilon \|w\|_* \right )
\end{equation}
looks an awful lot like the typical norm-regularized objective we commonly consider in machine learning
\begin{equation}
\minimize_{w,b} \frac{1}{D}\sum_{(x,y) \in D} L (y \cdot (w^Tx + b)) + \epsilon \|w\|_*
\end{equation}
with the except that the regularization term is _inside_ the loss function. **Intuitively, this means that in the robust optimization case, if a point is far from the decision boundary, we _don't_ penalize the norm of the parameters, but we _do_ penalize the norm of the parameters (transformed by the loss function) for a point where we close to the decision boundary.** The connections between such formulations and e.g. support vector machines, has been studied extensively [\cite XuMannor].
### Illustration of binary classification setting
Let's see what this looks like for an actual linear classifier. In doing so, we can also get a sense of how well traditional linear models might work to also prevent adversarial examples (spoiler: not very well, unless you do regularize). To do so, we're going to consider the MNIST data set, which will actually serve as a running example for the vast majority of the rest of this tutorial. **MNIST is actually a fairly poor choice of problem for many reasons: in addition to being very small for modern ML, it also has the property that it can easily be "binarized", i.e., because the pixel values are essentially just black and white, we can remove more $\ell_\infty$ noise by just rounding to 0 or 1, and then classifying the resulting image.** But presuming we _don't_ use such strategies, it is still a reasonable choice for initial experiments, and small enough that some of the more complex methods we discuss in further sections still can be run in a reasonable amount of time.
* explains why MNIST is not best to use
* the problem with MNIST appears to relate mostly to $\ell_\infty$
Since we're in the binary classification setting for now, let's focus on the even easier problem of just classifying between 0s and 1s in the MNIST data (we'll return back to the multi-class setting for linear models shortly). Let's first load the data using the PyTorch library and build a simple linear classifier using gradient descent. Note that we're going to do this a bit more explicitly to replicate the logic above (i.e., using labels of +1/-1, using the direct computation of the $L$ function, etc) instead of reverse-engineering it from the typical PyTorch functions.
Let's first load the MNIST data reduced to the 0/1 examples.
```python
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
mnist_train = datasets.MNIST("./data", train=True, download=True, transform=transforms.ToTensor())
mnist_test = datasets.MNIST("./data", train=False, download=True, transform=transforms.ToTensor())
train_idx = mnist_train.train_labels <= 1
mnist_train.train_data = mnist_train.train_data[train_idx]
mnist_train.train_labels = mnist_train.train_labels[train_idx]
test_idx = mnist_test.test_labels <= 1
mnist_test.test_data = mnist_test.test_data[test_idx]
mnist_test.test_labels = mnist_test.test_labels[test_idx]
train_loader = DataLoader(mnist_train, batch_size = 100, shuffle=True)
test_loader = DataLoader(mnist_test, batch_size = 100, shuffle=False)
```
Now let's build a simple linear classifier (the `nn.Linear` module does this, containing the weights in the `.weight` object and the bias in the `.bias` object). The `nn.Softplus` function implement $L$ function above (though without negating the input), and does so in a more numerically stable way than using the exp or log functions directly.
```python
import torch
import torch.nn as nn
import torch.optim as optim
# do a single pass over the data
def epoch(loader, model, opt=None):
total_loss, total_err = 0.,0.
for X,y in loader:
yp = model(X.view(X.shape[0], -1))[:,0]
loss = nn.BCEWithLogitsLoss()(yp, y.float())
if opt:
opt.zero_grad()
loss.backward()
opt.step()
total_err += ((yp > 0) * (y==0) + (yp < 0) * (y==1)).sum().item()
total_loss += loss.item() * X.shape[0]
return total_err / len(loader.dataset), total_loss / len(loader.dataset)
```
We'll train the classifier for 10 epochs, though note that MNIST 0/1 binary classification is a _very_ easy problem, and after one epoch we basically have converged to the final test error (though test loss still decreases). It eventually reaches as error of 0.0004, which in this case actually just making one mistake on the test set.
```python
model = nn.Linear(784, 1)
opt = optim.SGD(model.parameters(), lr=1.)
print("Train Err", "Train Loss", "Test Err", "Test Loss", sep="\t")
for i in range(10):
train_err, train_loss = epoch(train_loader, model, opt)
test_err, test_loss = epoch(test_loader, model)
print(*("{:.6f}".format(i) for i in (train_err, train_loss, test_err, test_loss)), sep="\t")
```
Train Err Train Loss Test Err Test Loss
0.007501 0.015405 0.000946 0.003278
0.001342 0.005392 0.000946 0.002892
0.001342 0.004438 0.000473 0.002560
0.001105 0.003788 0.000946 0.002495
0.000947 0.003478 0.000946 0.002297
0.000947 0.003251 0.000946 0.002161
0.000711 0.002940 0.000473 0.002159
0.000790 0.002793 0.000946 0.002109
0.000711 0.002650 0.000946 0.002107
0.000790 0.002529 0.000946 0.001997
In case you're curious, we can actually look at the one test example that the classifier makes a mistake on, which indeed seems a bit odd relative to most 0s and 1s.
```python
X_test = (test_loader.dataset.test_data.float()/255).view(len(test_loader.dataset),-1)
y_test = test_loader.dataset.test_labels
yp = model(X_test)[:,0]
idx = (yp > 0) * (y_test == 0) + (yp < 0) * (y_test == 1)
plt.imshow(1-X_test[idx][0].view(28,28).numpy(), cmap="gray")
plt.title("True Label: {}".format(y_test[idx].item()))
```
Text(0.5,1,'True Label: 0')
Hopefully you've already noticed something else about the adversarial examples we generate in the linear case: because the optimal perturbation is equal to
\begin{equation}
\delta^\star = - y \epsilon \cdot \mathrm{sign}(w),
\end{equation}
**which doesn't depend on $x$, this means that the best perturbation to apply is the _same_ across all examples. Note however, that to get the best valid perturbation, here we should really be constraining $x + \delta$ to be in $[0,1]$, which _doesn't_ hold for this case.** For simplicity, we'll ignore this for now, and go ahead and add this same $\delta$ anyway (even if it gives us a technically invalid image). After all, for the classifier, the inputs are just numerical value, so we can always have values greater than one or less than zero; the performance we'll see also applies to the case where we clip values, it just adds a bit of unnecessary hassle.
Let's look at the actual perturbation, to try to get a sense of it.
```python
epsilon = 0.2
delta = epsilon * model.weight.detach().sign().view(28,28)
plt.imshow(1-delta.numpy(), cmap="gray")
```
<matplotlib.image.AxesImage at 0x128527978>
**It's perhaps not all that obvious, but if you squint you can see that maybe there is a vertical line (like a 1) in black pixels, and a cirlce (like a 0) in in white. The intuition here is that moving in the black direction, we make the classifier think the image is more like a 1, while moving in the white direction, more like a 0.** But the picture here is not perfect, and it you didn't know to look for this, it may not be obviously. We'll shortly
Let's next see what happens when we evaluate the test accuracy when we make this (optimal) adverarial attack on the images in the test set.
```python
def epoch_adv(loader, model, delta):
total_loss, total_err = 0.,0.
for X,y in loader:
# this yp (y prediction) is changing the X's in the test to account for the adversarial attack
yp = model((X-(2*y.float()[:,None,None,None]-1)*delta).view(X.shape[0], -1))[:,0]
loss = nn.BCEWithLogitsLoss()(yp, y.float())
total_err += ((yp > 0) * (y==0) + (yp < 0) * (y==1)).sum().item()
total_loss += loss.item() * X.shape[0]
return total_err / len(loader.dataset), total_loss / len(loader.dataset)
print(epoch_adv(test_loader, model, delta[None,None,:,:]))
```
(0.8458628841607565, 3.4517438034075654)
**So allowing perturbations within the $\ell_\infty$ ball of size $\epsilon=0.2$, the classifier we go from essentially zero error to 84.5% error. Unlike the ImageNet case, the perturbed images here, _are_ recognizably different (we just overlay the noise you saw above), but this would definitely not be sufficient to fool most humans in recognizing the image.**
```python
f,ax = plt.subplots(5,5, sharey=True)
for i in range(25):
ax[i%5][i//5].imshow(1-(X_test[i].view(28,28) - (2*y_test[i]-1)*delta).numpy(), cmap="gray")
ax
```
### Training robust linear models
We've now seen that a standard linear model suffers from a lot of the same problems as deep models (though it should be said, they are still slightly more resilient than standard training for deep networks, for which an $\ell_\infty$ ball with $\epsilon=0.2$ could easily create 100% error). But we also know that we can easily perform exact robust optimization (i.e., solving the equivalent of the min-max problem) by simply incorporating the $\ell_1$ norm into the objective. Putting this into the standard binary cross entropy loss that PyTorch implements (which uses labels of 0/1 by default, not -1/+1), takes a bit of munging, but the training procedure is still quite simple: we just subtract $\epsilon(2y-1)\|w\|_1$ from the predictions (the $2y-1$ scales the 0/1 entries to -1/+1).
```python
# do a single pass over the data
# training now to handle adversarial attack
def epoch_robust(loader, model, epsilon, opt=None):
total_loss, total_err = 0.,0.
for X,y in loader:
yp = model(X.view(X.shape[0], -1))[:,0] - epsilon*(2*y.float()-1)*model.weight.norm(1)
loss = nn.BCEWithLogitsLoss()(yp, y.float())
if opt:
opt.zero_grad()
loss.backward()
opt.step()
total_err += ((yp > 0) * (y==0) + (yp < 0) * (y==1)).sum().item()
total_loss += loss.item() * X.shape[0]
return total_err / len(loader.dataset), total_loss / len(loader.dataset)
```
```python
model = nn.Linear(784, 1)
opt = optim.SGD(model.parameters(), lr=1e-1)
epsilon = 0.2
print("Rob. Train Err", "Rob. Train Loss", "Rob. Test Err", "Rob. Test Loss", sep="\t")
for i in range(20):
train_err, train_loss = epoch_robust(train_loader, model, epsilon, opt)
test_err, test_loss = epoch_robust(test_loader, model, epsilon)
print(*("{:.6f}".format(i) for i in (train_err, train_loss, test_err, test_loss)), sep="\t")
```
Rob. Train Err Rob. Train Loss Rob. Test Err Rob. Test Loss
0.147414 0.376791 0.073759 0.228654
0.073352 0.223381 0.053901 0.176481
0.062929 0.197301 0.043026 0.154818
0.057008 0.183879 0.038298 0.139773
0.052981 0.174964 0.040662 0.143639
0.050059 0.167973 0.037352 0.132365
0.048164 0.162836 0.032624 0.119755
0.046190 0.158340 0.033570 0.123211
0.044769 0.154719 0.029787 0.118066
0.043979 0.152048 0.027423 0.118974
0.041058 0.149381 0.026478 0.110074
0.040268 0.147034 0.027423 0.114998
0.039874 0.145070 0.026950 0.109395
0.038452 0.143232 0.026950 0.109015
0.037663 0.141919 0.027896 0.113093
0.036715 0.140546 0.026478 0.103066
0.036321 0.139162 0.026478 0.107028
0.035610 0.138088 0.025059 0.104717
0.035215 0.137290 0.025059 0.104803
0.034741 0.136175 0.025059 0.106629
We say it above, but we should **emphasize that all the numbers reported above are the _robust_ (i.e., worst case adversarial) errors and losses. So by training with the robust optimization problem, we're able to train a model such that for $\epsilon=0.2$, no adversarial attack can lead to more then 2.5% error on the test set.** Quite an improvement from the ~85% that the standard training had. But how well does it do on the _non-adversarial_ training set?
```python
train_err, train_loss = epoch(train_loader, model)
test_err, test_loss = epoch(test_loader, model)
print("Train Err", "Train Loss", "Test Err", "Test Loss", sep="\t")
print(*("{:.6f}".format(i) for i in (train_err, train_loss, test_err, test_loss)), sep="\t")
```
Train Err Train Loss Test Err Test Loss
0.006080 0.015129 0.003783 0.008186
We're getting 0.3% error on the test set. This is good, but _not_ as good as we were doing with standard training; we're now making 8 mistakes on the test set, instead of the 1 that we were making before. And this is not just a random effect of this particular problem, or the fact that it is relatively easy. **Rather, perhaps somewhat surprisingly, there is a _fundamental_ tradeoff between clean accuracy and robust accuracy, and doing better on the robust error leads to higher clean error.** We will return to this point in much more detail later.
* something to note here where adversarial training typically leads to higher overall error when compared to no adversarial training when looking at standard training set
Finally, let's look at the image of the optimal perturbation for this robust model.
```python
delta = epsilon * model.weight.detach().sign().view(28,28)
plt.imshow(1-delta.numpy(), cmap="gray")
```
<matplotlib.image.AxesImage at 0x128d39240>
That looks substantially more like a zero than what we saw before. Thus, we have some (admittedly, at this point, fairly weak) evidence that robsut training may also lead to "adversarial directions" that are inherrently more meaningful. Rather than fooling the classifier by just adding "random noise" we actually need to start moving the image in the direction of an actual new image (and even doing so, at least with this size epsilon, we aren't very successful at fooling the classifier). This idea will also come up later.
## Multi-class classification
Before moving on to the deep learning setting, let's briefly consider the multi-class extension of what we presented above. After all, most deep classifiers that we care about are actually multi-class classifiers, using the cross entropy loss or something similar. Recalling what we defined before, this means we are considering the linear hypothesis function
\begin{equation}
h_\theta(x) = W x + b
\end{equation}
which results in an inner maximization problem of the form
\begin{equation}
\max_{\|\delta\| \leq \epsilon}\ell(W(x+\delta) + b, y).
\end{equation}
Unforutnately in the binary case, it turns out that it is no longer possible to optimally solve the inner maximization problem. Specifcally, if we consider the cross entropy loss plugged into the above expression
\begin{equation}
\max_{\|\delta\| \leq \epsilon} \left (\log \left ( \sum_{j=1}^k \exp(w_j^T (x + \delta) + b_i) \right ) - (w_y^T(x + \delta) + b_y) \right ).
\end{equation}
Here, unlike the binary case, we cannot push the max over $\delta$ inside the nonlinear function (the log-sum-exp function is convex, so maximizing over it is difficult in general).
```python
```
| aac355f112bd64a04f338cbb01715889942f9549 | 236,819 | ipynb | Jupyter Notebook | misc/adversarial_robustness_neurips_tutorial/linear_models/linear_models.ipynb | kchare/advex_notbugs_features | 0ec0578a1aba2bdb86854676c005488091b64123 | [
"MIT"
] | 2 | 2022-02-08T11:51:12.000Z | 2022-02-23T00:30:07.000Z | misc/adversarial_robustness_neurips_tutorial/linear_models/linear_models.ipynb | kchare/advex_notbugs_features | 0ec0578a1aba2bdb86854676c005488091b64123 | [
"MIT"
] | null | null | null | misc/adversarial_robustness_neurips_tutorial/linear_models/linear_models.ipynb | kchare/advex_notbugs_features | 0ec0578a1aba2bdb86854676c005488091b64123 | [
"MIT"
] | 2 | 2021-12-21T20:31:28.000Z | 2022-01-21T17:06:34.000Z | 51.493586 | 2,441 | 0.535861 | true | 6,843 | Qwen/Qwen-72B | 1. YES
2. YES | 0.805632 | 0.808067 | 0.651005 | __label__eng_Latn | 0.986379 | 0.350833 |
```python
# goal: have sympy do the mechanical substitutions, to double-check the desired relations
# once this is done, this will also make it easier for a human to check (just double-check the definitions), and easier
# to check for arbitrary splittings
from sympy import *
from sympy import init_printing
init_printing()
```
```python
symbolic_traj = dict()
symbolic_traj['x_0'] = symbols('x_0')
symbolic_traj['v_0'] = symbols('v_0')
dt = symbols('dt')
m = symbols('m')
gamma = symbols('gamma')
f = symbols('f')
kT = symbols('kT')
U = symbols('U')
def count_steps(splitting="OVRVO"):
n_O = sum([step == "O" for step in splitting])
n_R = sum([step == "R" for step in splitting])
n_V = sum([step == "V" for step in splitting])
return n_O, n_R, n_V
def create_variable_names(i):
name_x = 'x_{}'.format(i)
name_v = 'v_{}'.format(i)
symbolic_traj[name_x] = symbols(name_x)
symbolic_traj[name_v] = symbols(name_v)
x = symbolic_traj['x_{}'.format(i-1)]
v = symbolic_traj['v_{}'.format(i-1)]
return x, v, name_x, name_v
def apply_R(i, h):
x, v, name_x, name_v = create_variable_names(i)
symbolic_traj[name_x] = x + v * h
symbolic_traj[name_v] = v
def apply_V(i, h):
x, v, name_x, name_v = create_variable_names(i)
symbolic_traj[name_x] = x
symbolic_traj[name_v] = v + f(x) * h / m
def apply_O(i, h):
a = exp(-gamma * h)
b = sqrt(1 - exp(-2 * gamma * h))
x, v, name_x, name_v = create_variable_names(i)
symbolic_traj[name_x] = x
symbolic_traj[name_v] = a * v + b * sqrt(kT / m) * symbols('xi_{}'.format(i))
# xi_i is an i.i.d. standard normal r.v.
def total_energy(x, v):
return U(x) + (0.5 * m * v**2)
def get_total_energy_at_step(i):
x = symbolic_traj['x_{}'.format(i)]
v = symbolic_traj['v_{}'.format(i)]
return total_energy(x, v)
def apply_integrator(i, splitting="OVRVO"):
heat, W_shad = 0, 0
n_O, n_R, n_V = count_steps(splitting)
for step in splitting:
if step == "O":
apply_O(i, dt / n_O)
heat += get_total_energy_at_step(i) - get_total_energy_at_step(i-1)
elif step == "R":
apply_R(i, dt / n_R)
W_shad += get_total_energy_at_step(i) - get_total_energy_at_step(i-1)
elif step == "V":
apply_V(i, dt / n_V)
W_shad += get_total_energy_at_step(i) - get_total_energy_at_step(i-1)
i += 1
return heat, W_shad
splitting = "OVRVO"
heat, W_shad = apply_integrator(1, splitting)
delta_E = get_total_energy_at_step(len(splitting)) - get_total_energy_at_step(0)
delta_E
```
```python
W_shad
```
```python
heat
```
```python
delta_E - (W_shad + heat)
```
```python
# also define path probabilities?
# here, let's define the individual step probabilities
def normal_pdf(x, mu, sigma):
return (1 / (sqrt(2 * pi * sigma**2))) * exp(-(x - mu)**2 / (2 * sigma**2))
def R_prob(from_x, from_v, to_x, to_v, h):
'''deterministic position update'''
if (from_v == to_v) and (to_x == (from_x + from_v * h)):
return 1
else:
return 0
def V_prob(from_x, from_v, to_x, to_v, h):
if (from_x == to_x) and (to_v == (from_v + f(from_x) * h / m)):
return 1
else:
return 0
def O_prob(from_x, from_v, to_x, to_v, h):
''' need to double check this, probably dropped a 2 somewhere...'''
a = exp(-gamma * h) # okay
b = sqrt(1 - exp(-2 * gamma * h)) # okay
sigma = b * sqrt(kT / m) # double-check this! is this the definition of sigma or sigma^2?
mu = a * from_v # okay
if (from_x == to_x):
return normal_pdf(to_v, mu, sigma)
else:
return 0
# for example, what's the probability of a small v perturbation
O_prob(0, 0, 0, 0.1, dt / 2)
```
```python
O_prob(symbolic_traj['x_0'], symbolic_traj['v_0'], symbolic_traj['x_1'], symbolic_traj['v_1'], dt / 2)
```
```python
def forward_path_probability(splitting="OVRVO"):
path_prob = 1
n_O, n_R, n_V = count_steps(splitting)
for i, step in enumerate(splitting):
x_0, v_0 = symbolic_traj['x_{}'.format(i)], symbolic_traj['v_{}'.format(i)]
x_1, v_1 = symbolic_traj['x_{}'.format(i+1)], symbolic_traj['v_{}'.format(i+1)]
if step == "O":
step_prob = O_prob(x_0, v_0, x_1, v_1,
dt / n_O)
elif step == "R":
step_prob = R_prob(x_0, v_0, x_1, v_1,
dt / n_R)
elif step == "V":
step_prob = V_prob(x_0, v_0, x_1, v_1,
dt / n_V)
path_prob = path_prob * step_prob
return path_prob
```
```python
forward_path_probability("OVRVO")
```
```python
def reverse_path_probability(splitting="OVRVO"):
'''same as above, just reverse order of trajectory and steps appropriately?'''
path_prob = 1
n_O, n_R, n_V = count_steps(splitting)
for i in range(len(splitting))[::-1]:
step = splitting[i]
x_0, v_0 = symbolic_traj['x_{}'.format(i+1)], symbolic_traj['v_{}'.format(i+1)]
x_1, v_1 = symbolic_traj['x_{}'.format(i)], symbolic_traj['v_{}'.format(i)]
if step == "O":
step_prob = O_prob(x_0, v_0, x_1, v_1,
dt / n_O)
elif step == "R":
step_prob = R_prob(x_0, v_0, x_1, v_1,
dt / n_R)
elif step == "V":
step_prob = V_prob(x_0, v_0, x_1, v_1,
dt / n_V)
path_prob = path_prob * step_prob
return path_prob
```
```python
def CFT_definition_of_work(splitting="OVRVO"):
return - ln(reverse_path_probability(splitting) / forward_path_probability(splitting))
```
```python
CFT_definition_of_work("OVRVO")
```
```python
reverse_path_probability()
```
```python
forward_path_probability()
```
```python
# hmm that doesn't look quite right
```
| a6ef330526a753e41d7d7a1c09c54974e28e80bc | 75,428 | ipynb | Jupyter Notebook | Using sympy to compute relative path action.ipynb | jchodera/maxentile-notebooks | 6e8ca4e3d9dbd1623ea926395d06740a30d9111d | [
"MIT"
] | null | null | null | Using sympy to compute relative path action.ipynb | jchodera/maxentile-notebooks | 6e8ca4e3d9dbd1623ea926395d06740a30d9111d | [
"MIT"
] | 2 | 2018-06-10T12:21:10.000Z | 2018-06-10T14:42:45.000Z | Using sympy to compute relative path action.ipynb | jchodera/maxentile-notebooks | 6e8ca4e3d9dbd1623ea926395d06740a30d9111d | [
"MIT"
] | 1 | 2018-06-10T12:14:55.000Z | 2018-06-10T12:14:55.000Z | 114.806697 | 13,472 | 0.736915 | true | 1,797 | Qwen/Qwen-72B | 1. YES
2. YES | 0.877477 | 0.831143 | 0.729309 | __label__eng_Latn | 0.350959 | 0.53276 |
```python
import numpy as np
import matplotlib.pyplot as plt
from sympy import S, solve
import plotutils as pu
%matplotlib inline
```
# numbers on a plane
Numbers can be a lot more interesting than just a value if you're just willing to shift your perspective a bit.
# integers
When we are dealing with integers we are dealing with all the whole numbers, zero and all the negative whole numbers. In math this set of numbers is often denoted with the symbol $\mathbb{Z}$. This is a *countable infinite* set and even though the numbers are a bit basic we can try to get some more insight into the structure of numbers.
# squares
If we take a number and multiply it with itself we get a *square number*. These are called square because we can easily plot them as squares in a plot.
```python
def plot_rect(ax, p, fmt='b'):
x, y = p
ax.plot([0, x], [y, y], fmt) # horizontal line
ax.plot([x, x], [0, y], fmt) # vertical line
with plt.xkcd():
fig, axes = plt.subplots(1, figsize=(4, 4))
pu.setup_axes(axes, xlim=(-1, 4), ylim=(-1, 4))
for x in [1,2,3]: plot_rect(axes, (x, x))
```
However, what happens we have a non-square number such as $5$?. We can't easily plot this as two equal lenghts, we'll have to turn it into a rectangle of $1 \times 5$ or $5 \times 1$.
```python
with plt.xkcd():
fig, axes = plt.subplots(1, figsize=(4, 4))
pu.setup_axes(axes, xlim=(-1, 6), ylim=(-1, 6))
for x, y in [(1, 5), (5, 1)]:
plot_rect(axes, (x, y))
```
The first thing we notice is that we can take one thing and project it as two things. The fact that this happens is perfectly natural because we decided to take a single value and project it in two-dimensions in a way that suits us. Nothing really weird about it but still it's worth to think about it for a moment. Apparently it's perfectly valid to have something even though the way we got there doesn't matter. We could either take the rectangle standing up or the one lying down.
Another interesting question to ask is whether we can get on the other sides of the axes. So far we have been happily plotting in the positive quadrant where $0 \le x$ and $0 \le y$ but what about the other three? Are they even reachable using just integer numbers?
We could make up some factor like $-1 \times -5$ and that would put us in the lower left. That would be equal to the same rectangles projected in the top right. And negative numbers would be either in the top left or bottom right. Although trivial this is interesting because now we find that if we project a single dimension into two dimensions we sometimes get 1 possibility, sometimes 2 and usually 4.
If we project zero we just get zero. However if we project $1$ we get either $1 \times 1$ or $-1 \times -1$. If we project $5$ we get $5 \times 1$, $1 \times 5$, $-5 \times -1$ and $-1 \times -5$.
```python
```
| 2332e5032b2f7552cae859c9d3e2c176e205b711 | 20,341 | ipynb | Jupyter Notebook | squares_and_roots.ipynb | basp/notes | 8831f5f44fc675fbf1c3359a8743d2023312d5ca | [
"MIT"
] | 1 | 2016-12-09T13:58:13.000Z | 2016-12-09T13:58:13.000Z | squares_and_roots.ipynb | basp/notes | 8831f5f44fc675fbf1c3359a8743d2023312d5ca | [
"MIT"
] | null | null | null | squares_and_roots.ipynb | basp/notes | 8831f5f44fc675fbf1c3359a8743d2023312d5ca | [
"MIT"
] | null | null | null | 141.256944 | 8,106 | 0.871098 | true | 764 | Qwen/Qwen-72B | 1. YES
2. YES | 0.887205 | 0.863392 | 0.766005 | __label__eng_Latn | 0.999698 | 0.618018 |
# Week 8 - Discrete Latent Variable Models and Hybrid Models Notebook
In this notebook, we will solve questions discrete latent variable models and hybrid generative models.
- This notebook is prepared using PyTorch. However, you can use any Python package you want to implement the necessary functions in questions.
- If the question asks you to implement a specific function, please do not use its readily available version from a package and implement it yourself.
## Question 1
Please answer the questions below:
1. Please give some examples to discrete data modalities.
1. Can we use GANs to generate discrete data points?
1. What is REINFORCE and why do we use it?
1. Please briefly explain Gumbel-Softmax by stating why do we need it and how do we use it in practice?
1. Please conceptually explain how PixelVAE works.
1. What is the novelty of $\beta$-VAE over the classical variational auto-encoder. Please briefly explain.
You can write your answer for each question in the markdown cell below:
**Please write your answer for each question here**
## Question 2
Implement the Gumbel-Softmax function. The function is characterized as below:
\begin{equation}
\hat{z} = \text{soft}\max_i \left(\frac{g_i + \log \pi}{\tau}\right)
\end{equation}
where $\pi$ are the class proabilities, $g_i$ are the i.i.d. samples from the gumbel distribution, and $\tau$ is the temperature parameter $\in (0, 1]$.
You can write additional function or functions to sample from the gumbel distribution.
You can also change the value of the random seed to see different results.
```python
import torch
torch.manual_seed(0)
batch_size = 16
# Let's assume four discrete outputs
num_classes = 4
logits = torch.randn(batch_size, num_classes)
```
```python
def gumbel_softmax(logits, temperature):
"""Applies gumbel softmax operation to the provided logits
Args:
logits: (N x num_classes)
temperature: A scalar constant that determines the bias-variance tradeoff
Returns:
the resulting tensor from the operation
"""
#######################
# Write code here
#######################
pass
```
```python
print(gumbel_softmax(logits, temperature=0.5))
```
**Expected Output:** (Note that these are the outputs that are converted to one-hot vectors. You can choose to give outputs as softmax values as well).
```
tensor([[0., 0., 0., 1.],
[0., 1., 0., 0.],
[0., 0., 0., 1.],
[0., 0., 1., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.],
[1., 0., 0., 0.],
[1., 0., 0., 0.],
[0., 1., 0., 0.],
[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 1., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.],
[0., 0., 1., 0.]])
```
**Bonus:** It is recommended for you to tinker with the temperature parameter and see how the results change.
## Question 3
Implement the loss function of VAE-GAN. You can refer to the [paper](https://arxiv.org/pdf/1512.09300.pdf) to see the motivation behind the loss function and the related equations.
The loss function of VAE-GAN consists of three parts, first one being the KL divergence loss:
\begin{equation}
\mathcal{L}_{prior} = D_{KL}(q(z|x)||p(z))
\end{equation}
where $z$ is the latent space vector from the latent distribution $p(z)$ and $x$ is the data point to be reconstructed. Typically, $z$ is sampled from $\mathcal N(0, 1)$. This term is considered as a regularizer and ensures that the distribution of the output of the encoder is similar to $\mathcal N(0, 1)$.
Second term is the reconstruction loss, but with a small twist:
\begin{equation}
\mathcal{L}^{\text{Dis}_l}_{\text{llike}} = -\mathbb{E}_{q(z|x)}[\log(p(\text{Dis}_l(x)|z)]
\end{equation}
Equation above is the log-likelihood based reconstruction loss of the original VAE, except for $x$ is replaced by $\text{Dis}_l(x)$. This is the intermediate represantation of the reconstructed version of $x \sim \text{Dec}(z)$ from the $l^{th}$ layer of the discriminator. This is to ensure that the image is not reconstructed on the pixel-level but more on a feature-level.
Finally, third part of the loss is our good old GAN loss:
\begin{equation}
\mathcal{L}_{\text{GAN}} = \log(\text{Dis}(x)) + \log(1 - \text{Dis}(\text{Gen}(z)))
\end{equation}
The final loss of the VAE-GAN is the sum of all these losses:
\begin{equation}
\mathcal{L} = \mathcal{L}_{prior} + \mathcal{L}^{\text{Dis}_l}_{\text{llike}} + \mathcal{L}_{\text{GAN}}
\end{equation}
Implement all three losses as different functions to the code cells below:
```python
mean = torch.randn(batch_size, 20)
logvar = torch.randn(batch_size, 20)
```
```python
def kl_loss(mean, logvar):
"""Calculates the KL loss based on the mean and logvar outputs of the Encoder network
w.r.t to the Gaussian with zero mean and unit variance
Args:
mean: Tensor of mean values coming from the Encoder (N x D)
logvar: Tensor of log-variance values coming from the Encoder (N x D)
Returns:
The resulting KL loss
"""
#######################
# Write code here
#######################
pass
```
```python
print(kl_loss(mean, logvar))
```
```python
features_org = torch.randn(batch_size, 100)
features_recon = torch.randn(batch_size, 100)
# Uncomment the line below and run the function again to see a higher reconstruction error
# features_recon = torch.normal(3, 20, batch_size, 100)
```
```python
def reconstruction_loss(features_org, features_recon):
"""Calculates the reconstruction loss with mean squared error
Args:
features_org: Features of the original image obtained from an intermediate layer of the discriminator
features_recon: Features of the reconstructed image obtained from an intermediate layer of the discriminator
Returns:
M.S.E based reconstruction error of the features
"""
#######################
# Write code here
#######################
pass
```
```python
print(reconstruction_loss(faetures_org, features_recon))
```
```python
outputs_real = torch.randn(batch_size, 32).clip(0, 1)
outputs_fake = torch.randn(batch_size, 32).clip(0, 1)
```
```python
def gan_loss(d_real_outputs, d_fake_outputs):
"""Our good old GAN loss, doesn't need much of an explanation :)
Args:
d_real_outputs: Discriminator sigmoid outputs for the real data points
d_fake_outputs: Discriminator sigmoid outputs for the fake data points
Returns:
The calculated GAN loss
"""
#######################
# Write code here
#######################
pass
```
```python
print(gan_loss(outputs_real, outputs_fake))
```
## Bonus
My master's thesis was a hybrid generative model and it was published in Pattern Recognition. I would like to briefly talk about it during the notebook session.
For anyone who is interested, kindly read or skim through the paper before coming to the discussion session. I leave the link to the paper [here](https://faculty.ozyegin.edu.tr/ethemalpaydin/files/2021/01/Uras_bigan_PatRec.pdf).
| 72be7d5a7f9ccbef59597468224933265a05e1e5 | 12,296 | ipynb | Jupyter Notebook | inzva x METU ImageLab Joint Program/Week 8 - Discrete Latent Variable Models and Hybrid Models/Week_8_Discrete_Latent_Variable_Models_and_Hybrid_Models.ipynb | inzva/-AI-Labs-Joint-Program | 45d776000f5d6671c7dbd98bb86ad3ceae6e4b2c | [
"MIT"
] | 12 | 2021-07-31T11:14:41.000Z | 2022-02-26T14:28:59.000Z | inzva x METU ImageLab Joint Program/Week 8 - Discrete Latent Variable Models and Hybrid Models/Week_8_Discrete_Latent_Variable_Models_and_Hybrid_Models.ipynb | inzva/-AI-Labs-Joint-Program | 45d776000f5d6671c7dbd98bb86ad3ceae6e4b2c | [
"MIT"
] | null | null | null | inzva x METU ImageLab Joint Program/Week 8 - Discrete Latent Variable Models and Hybrid Models/Week_8_Discrete_Latent_Variable_Models_and_Hybrid_Models.ipynb | inzva/-AI-Labs-Joint-Program | 45d776000f5d6671c7dbd98bb86ad3ceae6e4b2c | [
"MIT"
] | 1 | 2021-08-16T20:50:44.000Z | 2021-08-16T20:50:44.000Z | 34.63662 | 391 | 0.499919 | true | 1,873 | Qwen/Qwen-72B | 1. YES
2. YES | 0.709019 | 0.927363 | 0.657518 | __label__eng_Latn | 0.990794 | 0.365966 |
# Band Math and Indices
This section discusses band math and spectral indices.
This notebook is derived from a [Digital Earth Africa](https://www.digitalearthafrica.org/) notebook: [here](https://github.com/digitalearthafrica/deafrica-training-workshop/blob/master/docs/session_4/01_band_indices.ipynb)
## Background
Band math is the application of arithmetic to the values of the spectral bands of a dataset. One use of band math is in computing spectral indices.
## Description
Topics covered include:
* What bands are
* What band math is
* Uses of band math
* Some of the most common spectral indicies that use band math
* How to compute spectral indicies in ODC notebooks
## What is a band?
*A section of the electromagnetic spectrum showing some of the Landsat 8 bands.* [[Source](http://www.geocarto.com.hk/edu/PJ-BCMBLSAT/main_BCLS.html)]
The range of data acquired by a satellite is defined by its **bands**. Bands are subdivisions of the electromagnetic spectrum dependent on the sensors on the satellite.
A selection of commonly-used Landsat 8 and Sentinel-2 bands is shown in the table below. We can see the spectral ranges between Landsat 8 and Sentinel-2 are similar, but not the same. Also note the band numbers do not always correspond to the same spectral range.
|Band| Landsat 8 wavelength range (nm) | Sentinel-2 wavelength range (nm)|
|------------|--------|---------------|
| Blue | Band 2 <br> 450 – 510 | Band 2 <br> 458 – 523 |
| Green| Band 3 <br> 530 – 590| Band 3 <br> 543 – 578 |
| Red | Band 4 <br> 640 – 670 | Band 4 <br> 650 – 680 |
| Near-infrared (NIR) | Band 5 <br> 850 – 880| Band 8 <br> 785 – 899|
| Short-wave infrared 1 (SWIR 1) | Band 6 <br> 1570 – 1650 | Band 11 <br> 1565 – 1655|
*Sources:* [Landsat 8 bands](https://www.usgs.gov/media/resources/es/landsat-8-band-designations), [Sentinel-2 bands](https://earth.esa.int/web/sentinel/technical-guides/sentinel-2-msi/msi-instrument)
## Bands and terrain features
Different types of ground cover absorb and reflect different amounts of radiation along the electromagnetic spectrum. This is dependent on the physical and chemical properties of the surface.
For example:
* **Water:** Open water bodies generally reflect light in the visible spectrum, and absorb more short-wave infrared than near-infrared radiation. This can change if the water is turbid.
* **Snow:** Ice and snow reflect most visible radiation, but do not reflect much short-wave infrared. Reflectance measurements depend on snow granule size and liquid water content.
* **Green vegetation:** Chlorophyll, a pigment in plants, absorbs a lot of visible light, including red light. When plants are healthy, they strongly reflect near-infrared light.
* **Bare soil:** The mineral composition of soil can be characterised using the visible and near-infrared spectrum. Soil moisture content can greatly influence the results.
Using these spectral differences, we can calculate ratios between bands to isolate and accentuate the specific terrain feature we are interested in. These metrics are known as band ratios or **band indices**.
In practice, variation within terrain feature classes, as well as the presence of multiple features in one area, can make different types of ground cover difficult to distinguish. This is one of the challenges of spectral data analysis.
### Example: Normalised Difference Vegetation Index (NDVI)
One of the most widely-used band indices is the Normalised Difference Vegetation Index (NDVI). It is used to show the presence of live green vegetation. Generally, green vegetation has a low red band measurement, as red light is absorbed by chlorophyll. In addition to this, healthy leaf cell structures reflect near-infrared light, giving a high near-infrared (NIR) band measurement.
NDVI is therefore typically calculated using a satellite's NIR band and red band. One value is calculated per pixel.
\begin{equation} \text{NDVI} \ = \ \frac{\text{NIR} - \text{Red}}{\text{NIR} + \text{Red}} \end{equation}
We see the index is calculated by the difference $(\text{NIR} - \text{Red})$ divided by the sum $(\text{NIR} + \text{Red})$. This normalises the index: all values will fall between $-1$ and $1$.
Large values of NDVI will occur for pixels where NIR is high and red is low. Conversely, NDVI can be close to 0 or even negative where NIR is low and red is high. This means we interpret NDVI as follows:
\begin{align}
\text{NDVI} &> 0, \text{ or close to }1 = \text{green vegetation}\\
\text{NDVI} &\leq 0 = \text{not green vegetation; water, soil, etc.}
\end{align}
But what is green vegetation? The US Geological Survey provides a more specific [guide to interpreting NDVI](https://www.usgs.gov/land-resources/eros/phenology/science/ndvi-foundation-remote-sensing-phenology?qt-science_center_objects=0#qt-science_center_objects).
> Areas of barren rock, sand, or snow usually show very low NDVI values (for example, 0.1 or less). Sparse vegetation such as shrubs and grasslands or senescing crops may result in moderate NDVI values (approximately 0.2 to 0.5). High NDVI values (approximately 0.6 to 0.9) correspond to dense vegetation such as that found in temperate and tropical forests or crops at their peak growth stage.
You can see even this definition does not exhaustively cover every kind of vegetation. It is important to remember that Earth observation data analysis is sensitive to the dataset location and time. The nature of climate and environment variations across the globe, and even just within the African continent, mean that band indices like NDVI need to be interpreted with knowledge and context of the area.
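As a quick illustration, NDVI can be computed directly with NumPy (the reflectance values below are invented for the example):
```python
import numpy as np

red = np.array([0.10, 0.25, 0.40])  # hypothetical red reflectance
nir = np.array([0.60, 0.30, 0.45])  # hypothetical NIR reflectance

ndvi = (nir - red) / (nir + red)
print(ndvi)  # [0.714 0.091 0.059] -> only the first pixel looks like dense vegetation
```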
Normalising band indices so their values fall between -1 and 1 gives a relative scale that allows for easier data analysis. Values of the index can be better compared between different points in the area of analysis, and across time periods.
Below we have two plots of an area with river distributaries in Guinea-Bissau, a country that experiences monsoonal-like seasons. We see the amount of vegetation as detected by NDVI fluctuates over time. The top image shows NDVI in April, at the end of the dry season. NDVI readings are much lower than when compared to the same area in November (bottom image), after several months of rain during the wet season.
Notice the RGB images may show parts of the area to be visibly 'dry' or 'lush', but in places where this is less obvious, it is easier to analyse NDVI than the multispectral RGB dataset.
*NDVI calculated from Sentinel-2 data in Guinea-Bissau in April 2018 (top left) and November 2018 (bottom left). The NDVI values reflect typical seasonal patterns in a quantitative manner.*
Note that in the table of wavelength ranges for the bands for Landsat 8 and Sentinel-2, the satellites have slightly different band ranges for the same band name (e.g. 'Red' for Landsat 8 is slightly different than 'Red' for Sentinel-2). This means they will produce different band index values even at approximately the same time and place. It is good practice to ensure that datasets you are comparing match very closely in band ranges for the relevant bands.
## Band indices in research
NDVI is just one example of a useful band index. There are many other band indices used in Earth observation research to draw out terrain features. Selecting a band index is often dependent on environmental conditions and research purpose.
* **Vegetation:** As described above, NDVI is a good baseline vegetation index. It is simple to calculate, and only requires two bands — red and NIR. However, the Enhanced Vegetation Index (EVI) is often more accurate. EVI is calculated with three bands — red, blue and NIR — and requires some coefficients for scaling.
In arid regions, where vegetative cover is low, consider using the Soil Adjusted Vegetation Index (SAVI), which incorporates a soil brightness correction factor.
* **Urbanisation:** Human settlements can be identified through urbanisation indices, one of which is the Normalised Difference Built-up Index (NDBI). NDBI uses SWIR 1 and NIR bands:
$$ \text{NDBI} \ = \ \frac{\text{SWIR 1} - \text{NIR}}{\text{SWIR 1} + \text{NIR}} $$
However, NDBI can be confused between built-up areas and bare soil, so in arid and semi-arid regions where this is problematic, it may be better to use the Dry Bare Soil Index (DBSI).
$$ \text{DBSI} \ = \ \frac{\text{SWIR 1} - \text{Green}}{\text{SWIR 1} + \text{Green}} \ - \ \text{NDVI}$$
* **Water bodies:** Delineation between water and land can be defined using the Modified Normalised Difference Water Index (MNDWI). It is calculated using green and SWIR 1 bands:
$$ \text{MNDWI} \ = \ \frac{\text{Green} - \text{SWIR 1}}{\text{Green} + \text{SWIR 1}} $$
This should not be confused with indices for monitoring water content in vegetation.
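All three of these indices share the same normalised-difference form, so a tiny helper captures them (a sketch, not part of the DEA tools):
```python
def normalised_difference(band_a, band_b):
    """Generic (A - B) / (A + B) form used by NDVI, NDBI and MNDWI."""
    return (band_a - band_b) / (band_a + band_b)

# NDVI  = normalised_difference(nir, red)
# NDBI  = normalised_difference(swir1, nir)
# MNDWI = normalised_difference(green, swir1)
```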
It is important to remember band indices are not infallible; their usefulness relies on appropriate index selection and sensible interpretation. However, as the field of remote sensing grows, ongoing research into differentiating terrain types with different band combinations give rise to more nuanced and accurate data analysis. For instance, it is common to use more than one index to help distinguish feature classes with similar spectral characteristics.
## Calculating Indices
As shown in the [NDVI Training](../day2/NDVI_Training.ipynb), to calculate spectral indices, we use the `calculate_indices()` function (defined [here](https://github.com/GeoscienceAustralia/dea-notebooks/blob/develop/Tools/dea_tools/bandindices.py#L29)). In this environment, it is available at `utils.dea_tools.bandindices`.
Below is an example of calculating NDVI with Landsat Collection 2 data.
```python
from utils.dea_tools.bandindices import calculate_indices
ds = calculate_indices(ds, index='NDVI', collection='c2')
```
## Conclusion
Band indices are an integral part of spatial data analysis, and are an efficient method of distinguishing different types of land cover. In the following sections, we will calculate NDVI for a cloud-free composite using the Sandbox.
| d5e62ed614258f121dd5b0df1d4af395ae9d7afb | 14,748 | ipynb | Jupyter Notebook | notebooks/day3/Band_Math_and_Indices.ipynb | jcrattz/odc_training_notebooks | 651a0028463633cf30b32ac16b4addb05d9f4e85 | [
"Apache-2.0"
] | null | null | null | notebooks/day3/Band_Math_and_Indices.ipynb | jcrattz/odc_training_notebooks | 651a0028463633cf30b32ac16b4addb05d9f4e85 | [
"Apache-2.0"
] | null | null | null | notebooks/day3/Band_Math_and_Indices.ipynb | jcrattz/odc_training_notebooks | 651a0028463633cf30b32ac16b4addb05d9f4e85 | [
"Apache-2.0"
] | 1 | 2021-08-18T16:24:48.000Z | 2021-08-18T16:24:48.000Z | 43.762611 | 467 | 0.657038 | true | 2,452 | Qwen/Qwen-72B | 1. YES
2. YES | 0.695958 | 0.72487 | 0.50448 | __label__eng_Latn | 0.997136 | 0.010404 |
# Tutorial: advection-diffusion kernels in Parcels
In Eulerian ocean models, sub-grid scale dispersion of tracers such as heat, salt, or nutrients is often parameterized as a diffusive process. In Lagrangian particle simulations, sub-grid scale effects can be parameterized as a stochastic process, randomly displacing a particle position in proportion to the local eddy diffusivity ([Van Sebille et al. 2018](https://doi.org/10.1016/j.ocemod.2017.11.008)). Parameterizing sub-grid scale dispersion may be especially important when coarse velocity fields are used that do not resolve mesoscale eddies ([Shah et al., 2017](https://doi.org/10.1175/JPO-D-16-0098.1)). This tutorial explains how to use a sub-grid scale parameterization in _Parcels_ that is consistent with the advection-diffusion equation used in Eulerian models.
## Stochastic differential equations (SDE) consistent with advection-diffusion
The time-evolution of a stochastic process is described by a stochastic differential equation. The time-evolution of the conditional probability density of a stochastic process is described by a Fokker-Planck equation (FPE). The advection-diffusion equation, describing the evolution of a tracer, can be written as a Fokker-Planck equation. Therefore, we can formulate a stochastic differential equation for a particle in the Lagrangian frame undergoing advection with stochastic noise proportional to the local diffusivity in a way that is consistent with advection-diffusion in the Eulerian frame. For details, see [Shah et al., 2011](https://doi.org/10.1016/j.ocemod.2011.05.008) and [van Sebille et al., 2018](https://doi.org/10.1016/j.ocemod.2017.11.008).
The stochastic differential equation for a particle trajectory including diffusion is
$$
\begin{aligned}
d\mathbf{X}(t) &\overset{\text{Itô}}{=} (\mathbf{u} + \nabla \cdot \mathbf{K}) dt + \mathbf{V}(t, \mathbf{X})\cdot d\mathbf{W}(t), \\
\mathbf{X}(t_0) &= \mathbf{x}_0,
\end{aligned}
$$
where $\mathbf{X}$ is the particle position vector ($\mathbf{x}_0$ being the initial position vector), $\mathbf{u}$ the velocity vector, $\mathbf{K} = \frac{1}{2} \mathbf{V} \cdot \mathbf{V}^T$ the diffusivity tensor, and $d\mathbf{W}(t)$ a Wiener increment (normally distributed with zero mean and variance $dt$). Particle distributions obtained by solving the above equation are therefore consistent with Eulerian concentrations found by solving the advection-diffusion equation.
In three-dimensional ocean models diffusion operates along slopes of neutral buoyancy. To account for these slopes, the 3D diffusivity tensor $\mathbf{K}$ (and therefore $\mathbf{V}$) contains off-diagonal components. Three-dimensional advection-diffusion is not yet implemented in _Parcels_, but it is currently under development. Here we instead focus on the simpler case of diffusion in a horizontal plane, where diffusivity is specified only in the zonal and meridional direction, i.e.
$$\mathbf{K}(x,y)=\begin{bmatrix}
K_x(x,y) & 0\\
0 & K_y(x,y)
\end{bmatrix}.$$
The above stochastic differential equation then becomes
$$
\begin{align}
dX(t) &= a_x dt + b_x dW_x(t), \quad &X(t_0) = x_0,\\
dY(t) &= a_y dt + b_y dW_y(t), \quad &Y(t_0) = y_0,
\end{align}
$$
where $a_i = v_i + \partial_i K_i(x, y)$ is the deterministic drift term and $b_i = \sqrt{2K_i(x, y)}$ a stochastic noise term ($\partial_i$ denotes the partial derivative with respect to $i$).
## Numerical Approximations of SDEs
The simplest numerical approximation of the above SDEs is obtained by replacing $dt$ by a finite time discrete step $\Delta t$ and $dW$ by a discrete increment $\Delta W$, yielding the **Euler-Maruyama (EM) scheme** ([Maruyama, 1955](https://link.springer.com/article/10.1007/BF02846028)):
$$
\begin{equation}
X_{n+1} = X_n + a_x \Delta t + b_x \Delta W_{n, x},
\end{equation}
$$
with a similar expression for $Y$.
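As a sketch, one EM update for the zonal component could be written as follows (plain NumPy for illustration, not the actual Parcels kernel):
```python
import numpy as np

def em_step(x, u, dKdx, K, dt, rng):
    """One Euler-Maruyama step of dX = (u + dK/dx) dt + sqrt(2 K) dW."""
    dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment with variance dt
    return x + (u + dKdx) * dt + np.sqrt(2.0 * K) * dW

rng = np.random.default_rng(42)
x_new = em_step(0.0, u=1.0, dKdx=0.0, K=0.5, dt=0.001, rng=rng)
```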
A higher-order scheme is found by including extra terms from a Taylor expansion on our SDE, yielding the **Milstein scheme of order 1 (M1)**:
$$
\begin{equation}
X_{n+1} = X_n + a_x \Delta t + b_x \Delta W_x + \frac{1}{2}b_x \partial_x b_x(\Delta W_{n, x}^2 - \Delta t),
\end{equation}
$$
which can be rewritten by explicitly writing $b_x\partial_x b_x$ as $\partial_x K_x(x, y)$:
$$
\begin{equation}
X_{n+1} = X_n + v_x \Delta t + \frac{1}{2}\partial_x K_x(\Delta W_{n, x}^2 + \Delta t) + b_x\Delta W_{n, x}.
\end{equation}
$$
The extra term in the M1 scheme provides extra accuracy at negligible computational cost.
The spatial derivatives in the EM and M1 schemes can be approximated by a central difference. Higher order numerical schemes (see [Gräwe et al., 2012](https://doi.org/10.1007/s10236-012-0523-y)) include higher order derivatives. Since Parcels uses bilinear interpolation, these higher order derivatives cannot be computed, meaning that higher order numerical schemes cannot be used.
An overview of numerical approximations for SDEs in a particle tracking setting can be found in [Gräwe (2011)](https://doi.org/10.1016/j.ocemod.2010.10.002).
## Using Advection-Diffusion Kernels in Parcels
The EM and M1 advection-diffusion approximations are available as `AdvectionDiffusionEM` and `AdvectionDiffusionM1`, respectively. The `AdvectionDiffusionM1` kernel should be the default choice, as the increased accuracy comes at negligible computational cost.
The advection component of these kernels is similar to that of the Explicit Euler advection kernel (`AdvectionEE`). In the special case where diffusivity is constant over the entire domain, the diffusion-only kernel `DiffusionUniformKh` can be used in combination with an advection kernel of choice. Since the diffusivity here is space-independent, gradients are not calculated, increasing efficiency. The diffusion-step can in this case be computed after or before advection, thus allowing you to chain kernels using the `+` operator.
Just like velocities, diffusivities are passed to Parcels in the form of `Field` objects. When using `DiffusionUniformKh`, they should be added to the `FieldSet` object as constant fields, e.g. `fieldset.add_constant_field("Kh_zonal", 1, mesh="flat")`.
To make a central difference approximation for computing the gradient in diffusivity, a resolution for this approximation `dres` is needed: _Parcels_ approximates the gradients in diffusivities by using their values at the particle's location ± `dres` (in both $x$ and $y$). A value of `dres` must be specified and added to the FieldSet by the user (e.g. `fieldset.add_constant("dres", 0.01)`). Currently, it is unclear what the best value of `dres` is. From experience, its size of `dres` should be smaller than the spatial resolution of the data, but within reasonable limits of machine precision to avoid numerical errors. We are working on a method to compute gradients differently so that specifying `dres` is not necessary anymore.
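Conceptually, the `dres`-based gradient is just a central difference; as a sketch (with `K` standing in for an arbitrary diffusivity function, not Parcels' internal field interface):
```python
def dK_dx(K, x, dres):
    """Central-difference approximation of dK/dx at a particle position."""
    return (K(x + dres) - K(x - dres)) / (2.0 * dres)
```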
## Example: Impermeable Diffusivity Profile
Let's see the `AdvectionDiffusionM1` in action and see why it's preferable over the `AdvectionDiffusionEM` kernel. To do so, we create an idealized profile with diffusivities $K_\text{zonal}$ uniform everywhere ($K_\text{zonal} = \bar{K}=0.5$) and $K_\text{meridional}$ constant in the zonal direction, while having the following profile in the meridional direction:
$$ K_\text{meridional}(y) = \bar{K}\frac{2(1+\alpha)(1+2\alpha)}{\alpha^2L^{1+1/\alpha}} \begin{cases}
y(L-2y)^{1/\alpha},\quad 0 \leq y \leq L/2,\\
(L-y)(2y-L)^{1/\alpha},\quad L/2 \leq y \leq L,
\end{cases}$$
with $L$ being the basin length scale and $\alpha$ a parameter determining the steepness of the gradient in the profile. This profile is similar to that used by [Gräwe (2011)](https://doi.org/10.1016/j.ocemod.2010.10.002), now used in the meridional direction for illustrative purposes.
Let's plot $K_\text{meridional}(y)$:
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
from datetime import timedelta
from parcels import ParcelsRandom
from parcels import (FieldSet, ParticleSet, JITParticle,
DiffusionUniformKh, AdvectionDiffusionM1, AdvectionDiffusionEM)
```
```python
K_bar = 0.5 # Average diffusivity
alpha = 1. # Profile steepness
L = 1. # Basin scale
Ny = 103 # Number of grid cells in y_direction (101 +2, one level above and one below, where fields are set to zero)
dy = 1.03/Ny # Spatial resolution
y = np.linspace(-0.01, 1.01, 103) # y-coordinates for grid
y_K = np.linspace(0., 1., 101) # y-coordinates used for setting diffusivity
beta = np.zeros(y_K.shape) # Placeholder for fraction term in K(y) formula
for yi in range(len(y_K)):
if y_K[yi] < L/2:
beta[yi] = y_K[yi]*np.power(L - 2*y_K[yi], 1/alpha)
elif y_K[yi] >= L/2:
beta[yi] = (L - y_K[yi])*np.power(2*y_K[yi] - L, 1/alpha)
Kh_meridional = 0.1*(2*(1+alpha)*(1+2*alpha))/(alpha**2*np.power(L, 1+1/alpha))*beta
Kh_meridional = np.concatenate((np.array([0]), Kh_meridional, np.array([0])))
```
```python
plt.plot(Kh_meridional, y)
plt.ylabel("y")
plt.xlabel(r"$K_{meridional}$")
plt.show()
```
In this profile, diffusivity drops to 0 at $y=0.5$ and at $y=0$ and $y=1$. In the absence of advection, particles starting out in one half of the domain should remain confined to that half as they are unable to cross the points where the diffusivity drops to 0. The line $y=0.5$ should therefore provide an impermeable barrier.
Now we can put this idealized profile into a flat fieldset:
```python
xdim, ydim = (1, Ny)
data = {'U': np.zeros(ydim),
'V': np.zeros(ydim),
'Kh_zonal': K_bar*np.ones(ydim),
'Kh_meridional': Kh_meridional}
dims = {'lon': 1,
'lat': np.linspace(-0.01, 1.01, ydim, dtype=np.float32)}
fieldset = FieldSet.from_data(data, dims, mesh='flat', allow_time_extrapolation=True)
fieldset.add_constant('dres', 0.00005)
```
WARNING: Casting lon data to np.float32
WARNING: Casting field data to np.float32
We release 100 particles at ($x=0$, $y=0.75$).
```python
def get_test_particles():
return ParticleSet.from_list(fieldset,
pclass=JITParticle,
lon=np.zeros(100),
lat=np.ones(100)*0.75,
time=np.zeros(100),
lonlatdepth_dtype=np.float64)
```
Now we will simulate the advection and diffusion of the particles using the `AdvectionDiffusionM1` kernel. We run the simulation for 0.3 seconds, with a numerical timestep $\Delta t = 0.001$s. We also write out particle locations at each timestep for plotting. Note that this will hinder a runtime comparison between kernels, since it will cause most time to be spent on I/O.
```python
dt = 0.001
testParticles = get_test_particles()
output_file = testParticles.ParticleFile(name="M1_out.nc",
outputdt=timedelta(seconds=dt))
ParcelsRandom.seed(1636) # Random seed for reproducibility
testParticles.execute(AdvectionDiffusionM1,
runtime=timedelta(seconds=0.3),
dt=timedelta(seconds=dt),
output_file=output_file,
verbose_progress=True)
output_file.close() # to write the output to a netCDF file, since `output_file` does not close automatically when using notebooks
```
INFO: Compiled ParcelsRandom ==> /var/folders/_k/jcmdplbn0yj79g4k3g9f4nxr0000gn/T/parcels-501/parcels_random_ac532b87-5b06-48d0-9e2d-255636be37a9.so
INFO: Compiled JITParticleAdvectionDiffusionM1 ==> /var/folders/_k/jcmdplbn0yj79g4k3g9f4nxr0000gn/T/parcels-501/898c1da8a616299b79df707e314fd48c_0.so
100% (0.3 of 0.3) |######################| Elapsed Time: 0:00:00 Time: 0:00:00
```python
M1_out = xr.open_dataset("M1_out.nc")
```
We can plot the individual coordinates of the particle trajectories against time ($x$ against $t$ and $y$ against $t$) to investigate how diffusion works along each axis.
```python
fig, ax = plt.subplots(1, 2)
fig.set_figwidth(12)
for data, ai, dim, ystart, ylim in zip([M1_out.lat, M1_out.lon], ax, ('y', 'x'), (0.75, 0), [(0, 1), (-1, 1)]):
ai.plot(np.arange(0, 0.3002, 0.001), data.T, alpha=0.3)
ai.scatter(0, ystart, s=20, c='r', zorder=3)
ai.set_xlabel("t")
ai.set_ylabel(dim)
ai.set_xlim(0, 0.3)
ai.set_ylim(ylim)
fig.suptitle("`AdvectionDiffusionM1` Simulation: Particle trajectories in the x- and y-directions against time")
plt.show()
```
We see that along the meridional direction, particles remain confined to the ‘upper’ part of the domain, not crossing the impermeable barrier where the diffusivity drops to zero. In the zonal direction, particles follow random walks, since all terms involving gradients of the diffusivity are zero.
Now let's execute the simulation with the `AdvectionDiffusionEM` kernel instead.
```python
dt = 0.001
testParticles = get_test_particles()
output_file = testParticles.ParticleFile(name="EM_out.nc",
outputdt=timedelta(seconds=dt))
ParcelsRandom.seed(1636) # Random seed for reproducibility
testParticles.execute(AdvectionDiffusionEM,
runtime=timedelta(seconds=0.3),
dt=timedelta(seconds=dt),
output_file=output_file,
verbose_progress=True)
output_file.close() # to write the output to a netCDF file, since `output_file` does not close automatically when using notebooks
```
INFO: Compiled JITParticleAdvectionDiffusionEM ==> /var/folders/_k/jcmdplbn0yj79g4k3g9f4nxr0000gn/T/parcels-501/e1c028fccbd72db1f946f5c3150bf1a9_0.so
100% (0.3 of 0.3) |######################| Elapsed Time: 0:00:00 Time: 0:00:00
```python
EM_out = xr.open_dataset("EM_out.nc")
```
```python
fig, ax = plt.subplots(1, 2)
fig.set_figwidth(12)
for data, ai, dim, ystart, ylim in zip([EM_out.lat, EM_out.lon], ax, ('y', 'x'), (0.75, 0), [(0, 1), (-1, 1)]):
ai.plot(np.arange(0, 0.3002, 0.001), data.T, alpha=0.3)
ai.scatter(0, ystart, s=20, c='r', zorder=3)
ai.set_xlabel("t")
ai.set_ylabel(dim)
ai.set_xlim(0, 0.3)
ai.set_ylim(ylim)
fig.suptitle("`AdvectionDiffusionEM` Simulation: Particle trajectories in the x- and y-directions against time")
plt.show()
```
The Wiener increments for both simulations are equal, as they are fixed through a random seed. As we can see, the Euler-Maruyama scheme performs worse than the Milstein scheme, letting particles cross the impermeable barrier at $y=0.5$. In contrast, along the zonal direction, particles follow the same random walk as in the Milstein scheme, which is expected since the extra terms in the Milstein scheme are zero in this case.
## References
Gräwe, U. (2011). “Implementation of high-order particle-tracking schemes in a water column model.” *Ocean Modelling*, 36(1), 80–89. https://doi.org/10.1016/j.ocemod.2010.10.002
Gräwe, Deleersnijder, Shah & Heemink (2012). “Why the Euler scheme in particle tracking is not enough: The shallow-sea pycnocline test case.” *Ocean Dynamics*, 62(4), 501–514. https://doi.org/10.1007/s10236-012-0523-y
Maruyama, G. (1955). “Continuous Markov processes and stochastic equations.” *Rendiconti del Circolo Matematico di Palermo*, 4(1), 48.
van Sebille et al. (2018). “Lagrangian ocean analysis: Fundamentals and practices.” *Ocean Modelling*, 121, 49–75. https://doi.org/10.1016/j.ocemod.2017.11.008
Shah, S. H. A. M., Heemink, A. W., & Deleersnijder, E. (2011). “Assessing Lagrangian schemes for simulating diffusion on non-flat isopycnal surfaces.” *Ocean Modelling*, 39(3–4), 351–361. https://doi.org/10.1016/j.ocemod.2011.05.008
Shah, Primeau, Deleersnijder & Heemink (2017). “Tracing the Ventilation Pathways of the Deep North Pacific Ocean Using Lagrangian Particles and Eulerian Tracers.” *Journal of Physical Oceanography*, 47(6), 1261–1280. https://doi.org/10.1175/JPO-D-16-0098.1
| 6f5bf8dec24266ec64bac827062d4de69465d0d7 | 678,433 | ipynb | Jupyter Notebook | parcels/examples/tutorial_diffusion.ipynb | noemieplanat/Copy-parcels-master | 21f053b81a9ccdaa5d8ee4f7efd6f01639b83bfc | [
"MIT"
] | 202 | 2017-07-24T23:22:38.000Z | 2022-03-22T15:33:46.000Z | parcels/examples/tutorial_diffusion.ipynb | noemieplanat/Copy-parcels-master | 21f053b81a9ccdaa5d8ee4f7efd6f01639b83bfc | [
"MIT"
] | 538 | 2017-06-21T08:04:43.000Z | 2022-03-31T14:36:45.000Z | parcels/examples/tutorial_diffusion.ipynb | noemieplanat/Copy-parcels-master | 21f053b81a9ccdaa5d8ee4f7efd6f01639b83bfc | [
"MIT"
] | 94 | 2017-07-05T10:28:55.000Z | 2022-03-23T19:46:23.000Z | 1,600.07783 | 322,548 | 0.959405 | true | 4,621 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.831143 | 0.751894 | __label__eng_Latn | 0.943369 | 0.585234 |
<a href="https://colab.research.google.com/github/lsantiago/PythonIntermedio/blob/master/Clases/Semana6_ALGEBRA/algebra_lineal_apuntes.ipynb" target="_parent"></a>
# Class No. 6: Linear Algebra
> Linear algebra is the branch of mathematics that studies concepts such as vectors, matrices, dual spaces, systems of linear equations and, in its more formal treatment, vector spaces and their linear transformations.
np.linalg: the linear algebra package in NumPy
- Basic functions
- vectors
- operations with vectors
- norm of a vector
- operations with matrices
- inverse of a matrix
- determinant
- Solving systems of equations
_Now that we can handle arrays in Python with NumPy, it is time to move on to more interesting operations: those of Linear Algebra._
_Dot products and matrix inversions are everywhere in scientific and engineering programs, so let us study how they are done in Python._
As we know, linear algebra operations appear very frequently when solving systems of partial differential equations and, in general, when linearising problems of all kinds, and it is often necessary to solve systems with an enormous number of equations and unknowns. Thanks to NumPy arrays we can tackle this kind of computation in Python, since all the functions are written in C or Fortran, and we have the option of using libraries optimised to the limit.
NumPy's linear algebra package is called `linalg`, so importing NumPy with the usual convention we can access it by writing `np.linalg`.
**But why linear algebra?**
If we understand linear algebra we can develop better intuition for machine learning and its algorithms. We will also be able to implement algorithms from scratch and build variations of them.
**What is a vector?**
A vector has both magnitude and direction. We use vectors to describe, for example, the velocity of moving objects.
Direction refers to where in space the "arrow" points, and the magnitude tells you how far to go in that direction. If you only have magnitude but no direction, you are talking about scalars. Once you give the scalar a direction, it becomes a vector.
A vector is written as a lowercase letter with an arrow on top, pointing to the right.
**Vector addition**
To add the vectors (x₁,y₁) and (x₂,y₂), we add the corresponding components of each vector: (x₁+x₂, y₁+y₂).
```python
```
**Multiplying a vector by a scalar**
```python
```
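For instance, with NumPy (an illustrative sketch):
```python
import numpy as np

u = np.array([1, 2])
v = np.array([3, -1])

print(u + v)   # vector addition: [4 1]
print(3 * u)   # multiplication by a scalar: [3 6]
```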
**Dot (scalar) product**
This may sound odd, but it is nothing more than a multiplication of vectors whose result is a scalar. To compute the dot product of two vectors, we first multiply the corresponding elements and then add up the resulting products.
The following formula makes it much clearer: $\vec{a} \cdot \vec{b} = \sum_{i} a_i b_i$.
```python
```
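A possible example:
```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

print(np.dot(u, v))  # 1*4 + 2*5 + 3*6 = 32
```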
**Vector norm**
The norm is just another term for the magnitude of a vector, and it is denoted with double bars (||) on each side. It is defined as the square root of the sum of the squares of the components of the vector.
Steps:
- Square each component
- Add all the squares
- Take the square root
Let us now work with the formula: $\|\vec{v}\| = \sqrt{\sum_{i} v_i^2}$.
```python
```
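Following the three steps (an illustrative sketch):
```python
import numpy as np

v = np.array([3, 4])

print(np.sqrt(np.sum(v**2)))  # sqrt(9 + 16) = 5.0
print(np.linalg.norm(v))      # same result with the built-in function
```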
**Unit vector**
Unit vectors are those whose magnitude is exactly 1 unit. They are very useful for several reasons. Specifically, the unit vectors [0,1] and [1,0] can together form any other vector.
A unit vector is most often denoted with a hat symbol (^) and is obtained by computing the norm and then dividing each component of the vector by the norm.
It sounds complicated, but let us work through an exercise to see that it is simpler than it seems.
```python
```
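For example:
```python
import numpy as np

v = np.array([3, 4])
v_hat = v / np.linalg.norm(v)

print(v_hat)                  # [0.6 0.8]
print(np.linalg.norm(v_hat))  # 1.0
```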
## Matrix operations
What is a matrix? A matrix is simply a rectangular array of numbers.
In data science we use them all the time, even if we do not notice. It is very important to distinguish a vector from a matrix. In short, a vector is a single column (attribute) of your dataset, and a matrix is the collection of all the columns.
```python
```
**Defining matrices**
$A = \begin{equation}
\begin{bmatrix}
1 & 2 & 3\\
4 & 5 & 6\\
\end{bmatrix}
\end{equation}$
```python
```
$B = \begin{equation}
\begin{bmatrix}
5 & 5 & 5\\
5 & 5 & 5\\
\end{bmatrix}
\end{equation}$
```python
```
$v = \begin{equation}
\begin{bmatrix}
6\\
7\\
8
\end{bmatrix}
\end{equation}$
```python
```
**Matrix addition**
```python
```
**Matrix subtraction**
```python
```
**Multiplication by a scalar**
```python
```
**Matrix transpose**
```python
```
**Identity matrix**
```python
```
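Putting the matrices defined above into NumPy, the operations just described look like this (an illustrative sketch):
```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[5, 5, 5],
              [5, 5, 5]])

print(A + B)      # matrix addition
print(A - B)      # matrix subtraction
print(2 * A)      # multiplication by a scalar
print(A.T)        # transpose
print(np.eye(3))  # 3x3 identity matrix
```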
```python
#help(np.linalg)
```
Recall that if we want to use a function from a package without writing the full "path" every time, we can use the `from package import func` syntax:
```python
from numpy.linalg import norm, det
norm
```
The usual matrix product (not the element-wise one, but the one from linear algebra) is computed with the same function as the matrix-vector product and the vector-vector dot product: the `dot` function, which is **not** in the `linalg` package but directly in `numpy`, so it does not need to be imported separately.
```python
np.dot
```
An important consideration is that in NumPy we do not need to be strict about handling vectors as if they were column matrices, as long as the operation is consistent. A vector is an array with a single dimension, which is why computing its transpose does nothing.
```python
M = np.array([
[1, 2],
[3, 4]
])
v = np.array([1, -1])
```
```python
v
```
```python
M.T
```
```python
v.T
```
```python
v.reshape(2,1).T
```
```python
u = np.dot(M, v)
u
```
**Matrix multiplication**
```python
```
```python
# matrix product
```
**Determinant of a matrix**
```python
# compute the determinant of a matrix
```
**Inverse of a matrix**
To compute the inverse of a matrix in Python, use the `linalg.inv()` function from the numpy module.
linalg.inv(x)
The parameter x of the function is an invertible square matrix M defined with numpy's array() function.
The function returns the inverse matrix M⁻¹ of the matrix M.
> What is the inverse matrix? The inverse M⁻¹ of a square matrix is a matrix such that the product M · M⁻¹ equals an identity matrix I.
Find the inverse matrix M⁻¹ of the following invertible matrix M.
```python
m=np.array([[3,4,-1],[2,0,1],[1,3,-2]])
```
```python
# Compute the inverse matrix with linalg.inv().
```
The output inverse matrix is also an array() object. It can be read as a nested list. The elements of the inverse matrix are real numbers.
Verification: the product of the matrix M with the matrix M⁻¹ is an identity matrix.
```python
# verification
```
**Another example**
Find the determinant and the inverse matrix of D.
$ D = \begin{equation}
\begin{bmatrix}
1 & 2\\
3 & 4\\
\end{bmatrix}
\end{equation}$
```python
D = np.array([[1, 2], [3, 4]])
D
```
```python
# compute the determinant
```
```python
# and the inverse
```
**Solving equations with matrices**
1. Solve the following system of equations
$\begin{eqnarray}
3x + y = 9 \\
x + 2y = 8 \\
\end{eqnarray}$
```python
```
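One way to solve it (a sketch):
```python
import numpy as np

A = np.array([[3, 1],
              [1, 2]])
b = np.array([9, 8])

x = np.linalg.solve(A, b)
print(x)  # [2. 3.]  ->  x = 2, y = 3
```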
### Exercises
1- Find the product of these two matrices and its determinant:
$$\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 1 \\ -1 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 3 & -1 \\ 0 & -2 & 1 \\ 0 & 0 & 3 \end{pmatrix}$$
```python
from numpy.linalg import det
```
```python
```
```python
```
```python
```
2- Solve the following system of equations
$\begin{eqnarray}
x + 2y = 3 \\
3x + 4y = 5 \\
\end{eqnarray}$
```python
```
3- An engineer writes two equations describing a circuit, as follows:
$\begin{gather}
300I_{1} + 500(I_{1}-I_{2})-20 = 0\\
200I_{2} + 500(I_{2}-I_{1}) +10 = 0\\
\end{gather}$
Put the two equations in standard form and solve them:
```python
```
4- Solve the following system of equations
```python
```
5- Solve the following system:
$$ \begin{pmatrix} 2 & 0 & 0 \\ -1 & 1 & 0 \\ 3 & 2 & -1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -1 \\ 3 \\ 0 \end{pmatrix} $$
```python
# one way to obtain the result shown below
A = np.array([[2, 0, 0],
              [-1, 1, 0],
              [3, 2, -1]]) @ np.array([[1, 1, 1],
                                       [0, 1, 2],
                                       [0, 0, 1]])
A
```
array([[ 2, 2, 2],
[-1, 0, 1],
[ 3, 5, 6]])
```python
x = np.linalg.solve(A, np.array([-1, 3, 0]))
x
```
array([ 0.5, -4.5, 3.5])
```python
np.allclose(A @ x, np.array([-1, 3, 0]))
```
True
| c63aa8f1b90c272d10b6acbda8444dd0060b0744 | 29,911 | ipynb | Jupyter Notebook | Clases/Semana6_ALGEBRA/algebra_lineal_apuntes.ipynb | CarlosLedesma/PythonIntermedio | 4e54817fe5c0f13e8152f1d752b02dfa55785e28 | [
"MIT"
] | null | null | null | Clases/Semana6_ALGEBRA/algebra_lineal_apuntes.ipynb | CarlosLedesma/PythonIntermedio | 4e54817fe5c0f13e8152f1d752b02dfa55785e28 | [
"MIT"
] | null | null | null | Clases/Semana6_ALGEBRA/algebra_lineal_apuntes.ipynb | CarlosLedesma/PythonIntermedio | 4e54817fe5c0f13e8152f1d752b02dfa55785e28 | [
"MIT"
] | null | null | null | 23.890575 | 495 | 0.461001 | true | 2,603 | Qwen/Qwen-72B | 1. YES
2. YES | 0.835484 | 0.815232 | 0.681113 | __label__spa_Latn | 0.983554 | 0.420786 |
# Hydrogen Wave Function
```python
#Import libraries
from numpy import *
import matplotlib.pyplot as plt
from sympy.physics.hydrogen import Psi_nlm
```
## Analytical Equation
$$
\psi_{n \ell m}(r, \theta, \varphi)=\sqrt{\left(\frac{2}{n a_{0}^{*}}\right)^{3} \frac{(n-\ell-1) !}{2 n(n+\ell) !}} e^{-\rho / 2} \rho^{\ell} L_{n-\ell-1}^{2 \ell+1}(\rho) Y_{\ell}^{m}(\theta, \varphi)
$$
```python
#Symbolic representation
from sympy import Symbol
```
```python
#Symbolic variables
R=Symbol("R", real=True, positive=True)
Φ=Symbol("Φ", real=True)
Θ=Symbol("Θ", real=True)
Z=Symbol("Z", positive=True, integer=True, nonzero=True)
```
```python
#Analytical wave function
Psi_nlm(1,0,0,R,Φ,Θ,Z) #First three entries are n,l and m
```
$\displaystyle \frac{Z^{\frac{3}{2}} e^{- R Z}}{\sqrt{\pi}}$
```python
#Numerical hydrogen wave function
#point = [r, θ, ϕ]
def wavefunc(point,n,l,m,Z=1):
r=point[0]
θ=point[1]
ϕ=point[2]
return abs(Psi_nlm(n,l,m,r,ϕ,θ,Z))
```
```python
#Transform polar coordinates into cartesian to plot
def polar2cart(r, theta):
return (
r * cos(theta),
r * sin(theta)
)
```
```python
%%timeit
wavefunc([1,.1,.2],4,3,2,Z=1)
```
72.8 µs ± 276 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```python
import sympy
```
```python
n, l, m, r, phi, theta = sympy.symbols("n, l, m, r, phi, theta")
lam_psi = sympy.lambdify([n, l, m, r, phi, theta], Psi_nlm(n, l, m, r, phi, theta, Z=1))
```
```python
%%timeit
lam_psi(4,3,2,1,.1,.2)
```
22.6 µs ± 70.2 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```python
#Polar variables
# 0.01 < r < 5, 0 < ϕ < π, 0 < θ < 2π
r = linspace(0.01,5,10)
ϕ = linspace(0,pi,20)
θ = linspace(0,2*pi,20)
```
```python
rm , tm, pm = meshgrid(r,θ,ϕ)
```
```python
%%timeit
out = lam_psi(4,3,2,rm,pm,tm)
```
419 µs ± 788 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```python
#Create polar grid
grid = []
for i in r:
for j in θ:
grid.append([i,j])
grid = array(grid)
```
```python
#Cartesian grid
x, y = array((polar2cart(grid[:,0],grid[:,1])))
```
```python
#Import parallel libraries
from joblib import Parallel, delayed
import multiprocessing
num_cores = multiprocessing.cpu_count()
```
```python
#Call wavefunc in parallel
n, l, m = 1 ,0 ,0
results = Parallel(n_jobs=num_cores)(delayed(wavefunc)(i,n,l,m) for i in grid)
```
```python
#Plot figure
fig = plt.figure()
plt.scatter(x, y,c=results,cmap="magma")
plt.colorbar()
plt.axis("equal")
plt.show()
```
```python
```
| 1c0219cd0030deedd6270184c1603444018b2c05 | 41,093 | ipynb | Jupyter Notebook | hydrogen-wave-function.ipynb | sinansevim/EBT617E | 0907846e09173b419dfb6c3a5eae20c3ef8548bb | [
"MIT"
] | 1 | 2021-03-12T13:16:39.000Z | 2021-03-12T13:16:39.000Z | hydrogen-wave-function.ipynb | sinansevim/EBT617E | 0907846e09173b419dfb6c3a5eae20c3ef8548bb | [
"MIT"
] | null | null | null | hydrogen-wave-function.ipynb | sinansevim/EBT617E | 0907846e09173b419dfb6c3a5eae20c3ef8548bb | [
"MIT"
] | null | null | null | 132.987055 | 34,692 | 0.893218 | true | 943 | Qwen/Qwen-72B | 1. YES
2. YES | 0.927363 | 0.819893 | 0.760339 | __label__eng_Latn | 0.438451 | 0.604854 |
```python
import pycalphad
from pycalphad.tests.datasets import ALFE_TDB
from pycalphad import Database, Model
import pycalphad.variables as v
from sympy import Piecewise, Function
dbf = Database(ALFE_TDB)
mod = Model(dbf, ['AL','FE', 'VA'], 'B2_BCC')
t = mod.ast.diff(v.Y('B2_BCC', 1, 'AL'), v.Y('B2_BCC', 0, 'FE'))
#print(t)
def func(x):
f = Function('f')
a = []
for t in x.args:
a.append(t[0])
#print(a)
return f(*a)
t = t.replace(lambda expr: isinstance(expr, Piecewise), func)
for _ in range(5):
t = t + t**(t+3)
from timeit import default_timer as clock
from symengine import sympify
t1 = clock()
p = str(t)
t2 = clock()
sympy_time = t2-t1
print(t2-t1)
t1 = clock()
t = sympify(t)
p = str(t)
t2 = clock()
print(t2-t1)
symengine_time = t2-t1
print(sympy_time / symengine_time)
```
```python
import pycalphad
from pycalphad.tests.datasets import ALFE_TDB
from pycalphad import Database, Model
import pycalphad.variables as v
from sympy import Piecewise, Function
dbf = Database(ALFE_TDB)
mod = Model(dbf, ['AL','FE', 'VA'], 'B2_BCC')
t = mod.ast
def func(x):
f = Function('f')
a = []
for t in x.args:
a.append(t[0])
return f(*a)
t = t.replace(lambda expr: isinstance(expr, Piecewise), func)
from timeit import default_timer as clock
from symengine import sympify, diff
t1 = clock()
p = str(t.diff(v.Y('B2_BCC', 1, 'AL'), v.Y('B2_BCC', 0, 'FE')))
t2 = clock()
print(t2-t1)
sympy_time = t2-t1
t1 = clock()
t = sympify(t)
p = str(t.diff(v.Y('B2_BCC', 1, 'AL')).diff(v.Y('B2_BCC', 0, 'FE')))
t2 = clock()
print(t2-t1)
symengine_time = t2-t1
print(sympy_time/symengine_time)
```
2.1349420790002114
0.11995693399967422
17.797571243409816
```python
```
| 99434542f5ac04ce671a0e1de074f800d784b53a | 80,268 | ipynb | Jupyter Notebook | SymEngineTest.ipynb | richardotis/pycalphad-sandbox | 43d8786eee8f279266497e9c5f4630d19c893092 | [
"MIT"
] | 1 | 2017-03-08T18:21:30.000Z | 2017-03-08T18:21:30.000Z | SymEngineTest.ipynb | richardotis/pycalphad-sandbox | 43d8786eee8f279266497e9c5f4630d19c893092 | [
"MIT"
] | null | null | null | SymEngineTest.ipynb | richardotis/pycalphad-sandbox | 43d8786eee8f279266497e9c5f4630d19c893092 | [
"MIT"
] | 1 | 2018-11-03T01:31:57.000Z | 2018-11-03T01:31:57.000Z | 399.343284 | 18,835 | 0.694623 | true | 610 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.679179 | 0.542111 | __label__eng_Latn | 0.449568 | 0.097836 |
# The Space-Arrow of Space-time
People have claimed Nature does not have an arrow for time. I don't think the question, as stated, is well-formed. Any analysis of the arrow of time for one observer will look like the arrow of space-time to another one moving relative to the first.
Two different problems are often cited on this subject. The first is that the fundamental forces of the standard model - EM, the weak force, and the strong force - are unaltered by a change in the arrow of (space-)time. Second, that the time-reversal member of the Lorentz group can be applied an arbitrary number of times, flipping the value of time precisely each time.
In this short talk, I will show how keeping the space terms next to the time ones leads to the world we know, where space-time reversal does not come easy.
## Space-time Reversal in EM
Let me provide a quick sketch of how to derive the Maxwell source equations, not just write them down like Minkowski did. The goal is to calculate the Lorentz invariant quantity $B^2 - E^2$, the difference of two squares. Based on high school algebra, this should be obtained from the product of the sum and difference of $E$ and $B$. Once one has this difference of squares, one can apply the Euler-Lagrange equations to derive Gauss's and Ampère's laws.
Start with all the ways a 4-potential can change in space-time (doesn't that sound general?):
The first term is a gauge field. Since photons are being studied, this gauge field has to be set to zero. For anything that is not a photon, this field will be non-zero. It is interesting to me that it is just "naturally" here. It can be zeroed out easily enough:
By changing the order of the differential acting on the potential, only the magnetic field $B$ will change signs.
Form the product of these two:
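In quaternion form, a sketch of the products being described (my reconstruction of the equations, using the scalar-plus-3-vector convention of the `Q_tools` library used below):

$$\left(\frac{\partial}{\partial t},\ \vec{\nabla}\right)\left(\phi,\ \vec{A}\right) = \left(\frac{\partial \phi}{\partial t} - \vec{\nabla}\cdot\vec{A},\ \frac{\partial \vec{A}}{\partial t} + \vec{\nabla}\phi + \vec{\nabla}\times\vec{A}\right) = \left(\text{gauge},\ -\vec{E} + \vec{B}\right)$$

$$\left(0,\ -\vec{E}+\vec{B}\right)\left(0,\ -\vec{E}-\vec{B}\right) = \left(B^2 - E^2,\ 2\,\vec{E}\times\vec{B}\right)$$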
The first observation to make that everyone makes is that reversing time will never, ever change the value of the difference between the square of the magnetic field and the square of the electric field. That is what a square does. This is why one can conclude with confidence that the fundamental forces in physics are unaltered by changes in time. The weak and strong forces are mere variations on EM using other gauge groups.
The second observation that no one makes is that the Poynting vector is sitting right next door. The Poynting vector will change signs under time reversal. A complete analysis of EM must include the part that responds to time reversal. By omitting it, a mystery is claimed.
## Space-time Reversal is Locally Irreversible
In this notebook, I will show in two different ways how, when one thinks about space-time reversal instead of just time reversal using space-time numbers, such a system cannot be reversed an arbitrarily large number of times, as is the case for the Lorentz group. First, the tools needed to work with space-time numbers have to be loaded.
```python
%%capture
%matplotlib inline
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
# To get equations that look like, well, equations, use the following.
from sympy.interactive import printing
printing.init_printing(use_latex=True)
from IPython.display import display
# Tools for manipulating quaternions.
import Q_tools as qt;
```
The member of the Lorentz group that reverses time is remarkably simple: it is a matrix that has minus one in the first (time) diagonal position, positive one in the other diagonal positions, and zeros elsewhere.
```python
TimeReversal = np.array([[-1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
display(TimeReversal)
```
array([[-1, 0, 0, 0],
[ 0, 1, 0, 0],
[ 0, 0, 1, 0],
[ 0, 0, 0, 1]])
Create a 4-vector.
```python
t, x, y, z = sp.symbols("t x y z")
```
```python
Vector = np.array([t, x, y, z])
display(Vector)
```
array([t, x, y, z], dtype=object)
```python
display(Vector * TimeReversal)  # element-wise broadcast; the diagonal carries TimeReversal @ Vector = [-t, x, y, z]
```
array([[-t, 0, 0, 0],
[0, x, 0, 0],
[0, 0, y, 0],
[0, 0, 0, z]], dtype=object)
Do the time reversal a bunch of times.
```python
display(Vector * TimeReversal * TimeReversal)
display(Vector * TimeReversal * TimeReversal * TimeReversal)
```
array([[t, 0, 0, 0],
[0, x, 0, 0],
[0, 0, y, 0],
[0, 0, 0, z]], dtype=object)
array([[-t, 0, 0, 0],
[0, x, 0, 0],
[0, 0, y, 0],
[0, 0, 0, z]], dtype=object)
It comes as no surprise that if the time-reversal member of the Lorentz group is done an even number of times, then nothing changes, but an odd number of times reverses time exactly.
For the quaternion approach, one solves a pretty simple algebraic expression instead of using a global matrix. The equation to solve is:
$$ P T_r = - P^* $$
Solve for $T_r$:
$$ T_r = - P^{-1} P^* $$
```python
P = qt.QH([t, x, y, z])
Tr = P.flip_signs().invert().product(P.conj())
display(Tr.t)
display(Tr.x)
display(Tr.y)
display(Tr.z)
```
Does this considerably more complicated expression than the Lorentz group do its job? Of course it **should**, but let's just show this is the case:
```python
PFlip = P.product(Tr)
display(sp.simplify(PFlip.t))
display(sp.simplify(PFlip.x))
display(sp.simplify(PFlip.y))
display(sp.simplify(PFlip.z))
```
Apply Tr twice to see if one gets back to the start point.
```python
PFlipFlip = P.product(Tr).product(Tr)
display(sp.simplify(PFlipFlip.t))
display(sp.simplify(PFlipFlip.x))
display(sp.simplify(PFlipFlip.y))
display(sp.simplify(PFlipFlip.z))
```
This may not look "right" to the eye, so test it. Use "classical" values, meaning time $t >>> x, y, z$.
```python
Classical_subs = {t:1, x:0.0000000002, y:0.00000000012, z:-0.0000000003}
display(sp.simplify(PFlip.t.subs(Classical_subs)))
display(sp.simplify(PFlip.x.subs(Classical_subs)))
display(sp.simplify(PFlip.y.subs(Classical_subs)))
display(sp.simplify(PFlip.z.subs(Classical_subs)))
print()
display(sp.simplify(PFlipFlip.t.subs(Classical_subs)))
display(sp.simplify(PFlipFlip.x.subs(Classical_subs)))
display(sp.simplify(PFlipFlip.y.subs(Classical_subs)))
display(sp.simplify(PFlipFlip.z.subs(Classical_subs)))
```
The value for t returned to unity as it should, but the same cannot be said for the spatial terms. This is due to the cross product. See what happens if one does this many, many times. Define a function to do the work.
```python
def reverse_n_times(P1, T1, sub_1, n):
"""Given a symbolic expression P, applies symbolic space-time reversal using a dictionary of values n times."""
P1_t, P1_x, P1_y, P1_z = P1.t.subs(sub_1), P1.x.subs(sub_1), P1.y.subs(sub_1), P1.z.subs(sub_1)
P_result = qt.QH([P1_t, P1_x, P1_y, P1_z])
T1_t, T1_x, T1_y, T1_z = T1.t.subs(sub_1), T1.x.subs(sub_1), T1.y.subs(sub_1), T1.z.subs(sub_1)
T_sub = qt.QH([T1_t, T1_x, T1_y, T1_z])
for i in range(n):
P_result = P_result.product(T_sub)
return P_result
```
```python
print(reverse_n_times(P, Tr, Classical_subs, 100))
print(reverse_n_times(P, Tr, Classical_subs, 101))
print(reverse_n_times(P, Tr, Classical_subs, 1000))
print(reverse_n_times(P, Tr, Classical_subs, 1001))
```
(1.00000000000000, -3.97999999999999E-8, -2.38800000000001E-8, 5.96999999999999E-8) ...
(-1.00000000000000, 4.01999999999999E-8, 2.41200000000001E-8, -6.02999999999999E-8) ...
(0.999999999999712, -3.99799999999961E-7, -2.39879999999978E-7, 5.99699999999939E-7) ...
(-0.999999999999712, 4.00199999999961E-7, 2.40119999999978E-7, -6.00299999999938E-7) ...
```python
print(reverse_n_times(P, Tr, Classical_subs, 10000))
print(reverse_n_times(P, Tr, Classical_subs, 100000))
```
(0.999999999971126, -0.00000399979999996151, -0.00000239987999997690, 0.00000599969999994228) ...
(0.999999997112059, -0.0000399997999614952, -0.0000239998799768971, 0.0000599996999422425) ...
```python
print(reverse_n_times(P, Tr, Classical_subs, 1000000))
print(reverse_n_times(P, Tr, Classical_subs, 10000000))
```
(0.999999711200594, -0.000399999761493507, -0.000239999856896105, 0.000599999642240260) ...
| ef37442ff3d97dcf565edd3ec2424201dba4cc7b | 43,512 | ipynb | Jupyter Notebook | q_notebooks/space-time_reversal.ipynb | dougsweetser/ipq | 5505c8c9c6a6991e053dc9a3de3b5e3588805203 | [
"Apache-2.0"
] | 2 | 2017-01-19T18:43:20.000Z | 2017-02-21T16:23:07.000Z | q_notebooks/space-time_reversal.ipynb | dougsweetser/ipq | 5505c8c9c6a6991e053dc9a3de3b5e3588805203 | [
"Apache-2.0"
] | null | null | null | q_notebooks/space-time_reversal.ipynb | dougsweetser/ipq | 5505c8c9c6a6991e053dc9a3de3b5e3588805203 | [
"Apache-2.0"
] | null | null | null | 55.784615 | 2,692 | 0.727684 | true | 2,358 | Qwen/Qwen-72B | 1. YES
2. YES | 0.76908 | 0.66888 | 0.514423 | __label__eng_Latn | 0.984795 | 0.033505 |
# Allen Cahn equation
* Physical space
\begin{align}
u_{t} = \epsilon u_{xx} + u - u^{3}
\end{align}
* Discretized with Chebyshev differentiation matrix (D)
\begin{align}
u_t = (\epsilon D^2 + I)u - u^{3}
\end{align}
# Imports
```python
import numpy as np
import matplotlib.pyplot as plt
from rkstiff.grids import construct_x_Dx_cheb
from rkstiff.etd35 import ETD35
from rkstiff.etd34 import ETD34
from rkstiff.if34 import IF34
import time
```
# Linear operator, nonlinear function
```python
N = 20
epsilon = 0.01
x,D = construct_x_Dx_cheb(N,-1,1)
D2 = D.dot(D)
L = epsilon*D2 + np.eye(*D2.shape)
L = L[1:-1,1:-1] # Interior points
def NL(u):
return x[1:-1] - np.power(u+x[1:-1],3)
```
# Set initial field
```python
u0 = 0.53*x + 0.47*np.sin(-1.5*np.pi*x)
w0 = u0 - x
plt.plot(x,u0)
plt.xlabel('x')
plt.ylabel('$u_0$')
plt.title('Initial field')
```
# Apply nondiagonal and diagonalized solvers for comparison
```python
nondiag_params = {'epsilon' : 1e-4, 'contour_points' : 64, 'contour_radius' : 20}
solver34 = ETD34(linop=L,NLfunc=NL,**nondiag_params)
solver35 = ETD35(linop=L,NLfunc=NL,**nondiag_params)
solverIF = IF34(linop=L,NLfunc=NL,**nondiag_params)
diag_params = {'epsilon' : 1e-4, 'contour_points' : 32, 'contour_radius' : 1,'diagonalize' : True}
solverDiag34 = ETD34(linop=L,NLfunc=NL,**diag_params)
solverDiag35 = ETD35(linop=L,NLfunc=NL,**diag_params)
solverDiagIF34 = IF34(linop=L,NLfunc=NL,**diag_params)
solvers = [solver34,solver35,solverIF,solverDiag34,solverDiag35,solverDiagIF34]
titles = ['ETD34','ETD35','IF34','ETD34 Diagonalized','ETD35 Diagonalized','IF34 Diagonalized']
```
# Run simulations
```python
Xvec,Tvec,Uvec = [],[],[]
for solver in solvers:
_ = solver.evolve(w0[1:-1],t0=0,tf=100)
U = []
for wint in solver.u:
w = np.r_[0,wint.real,0]
u = w + x
U.append(u)
U = np.array(U)
t = np.array(solver.t)
T,X = np.meshgrid(t,x,indexing='ij')
Xvec.append(X); Tvec.append(T); Uvec.append(U)
```
# Plot results
```python
fig = plt.figure(figsize=(16,12))
for i in range(6):
ax = fig.add_subplot(2,3,i+1,projection='3d')
ax.plot_wireframe(Xvec[i],Tvec[i],Uvec[i],color='black')
ax.set_xlabel('x')
ax.set_ylabel('t')
ax.set_zlabel('z')
ax.set_facecolor('white')
ax.grid(False)
ax.set_title(titles[i])
ax.view_init(elev=36,azim=-131)
# fig.tight_layout()
```
# Time simulations
```python
start = time.time()
solver = ETD34(linop=L,NLfunc=NL,**nondiag_params)
_ = solver.evolve(w0[1:-1],t0=0,tf=100,store_data=False)
end = time.time()
print(titles[0],'-> {:.2e}'.format(end-start))
start = time.time()
solver = ETD35(linop=L,NLfunc=NL,**nondiag_params)
_ = solver.evolve(w0[1:-1],t0=0,tf=100,store_data=False)
end = time.time()
print(titles[1],'-> {:.2e}'.format(end-start))
start = time.time()
solver = IF34(linop=L,NLfunc=NL,**nondiag_params)
_ = solver.evolve(w0[1:-1],t0=0,tf=100,store_data=False)
end = time.time()
print(titles[2],'-> {:.2e}'.format(end-start))
start = time.time()
solverDiag = ETD34(linop=L,NLfunc=NL,**diag_params)
_ = solver.evolve(w0[1:-1],t0=0,tf=100,store_data=False)
end = time.time()
print(titles[3],'-> {:.2e}'.format(end-start))
start = time.time()
solver = ETD35(linop=L,NLfunc=NL,**diag_params)
_ = solver.evolve(w0[1:-1],t0=0,tf=100,store_data=False)
end = time.time()
print(titles[4],'-> {:.2e}'.format(end-start))
start = time.time()
solver = IF34(linop=L,NLfunc=NL,**diag_params)
_ = solver.evolve(w0[1:-1],t0=0,tf=100,store_data=False)
end = time.time()
print(titles[5],'-> {:.2e}'.format(end-start))
```
ETD34 -> 7.02e-01
ETD35 -> 1.45e+00
IF34 -> 1.59e-01
ETD34 Diagonalized -> 1.65e-01
ETD35 Diagonalized -> 1.59e-01
IF34 Diagonalized -> 1.75e-01
# Diagonalizing IF method has little impact on performance
```python
%%timeit
solver = IF34(linop=L,NLfunc=NL,epsilon=1e-4,contour_points=32,contour_radius=1,diagonalize=True)
_ = solver.evolve(w0[1:-1],t0=0,tf=100,store_data=False)
```
```python
%%timeit
solver = IF34(linop=L,NLfunc=NL,epsilon=1e-4,contour_points=64,contour_radius=20)
_ = solver.evolve(w0[1:-1],t0=0,tf=100,store_data=False)
```
155 ms ± 1.77 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
| 1da4080158afa4cfadbe169e6a22d972b25b7aee | 518,273 | ipynb | Jupyter Notebook | demos/allen_cahn.ipynb | whalenpt/rkstiff | 9fbec7ddd123cc644d392933b518d342751b4cd8 | [
"MIT"
] | 4 | 2021-11-05T15:35:21.000Z | 2022-01-17T10:20:57.000Z | demos/allen_cahn.ipynb | whalenpt/rkstiff | 9fbec7ddd123cc644d392933b518d342751b4cd8 | [
"MIT"
] | null | null | null | demos/allen_cahn.ipynb | whalenpt/rkstiff | 9fbec7ddd123cc644d392933b518d342751b4cd8 | [
"MIT"
] | null | null | null | 1,511 | 492,404 | 0.960833 | true | 1,545 | Qwen/Qwen-72B | 1. YES
2. YES | 0.785309 | 0.793106 | 0.622833 | __label__eng_Latn | 0.183555 | 0.28538 |
## 1 Deriving the Gradients
The input tensor is $x$, an $N \times C_{in} \times w \times h$ tensor;
the template (kernel) tensor is $h$, a $C_{out} \times C_{in} \times 3 \times 3$ tensor;
the convolution parameters are Padding = 1 and Stride = 1.
The tensor $y$ is known to be the result of applying the template to the input:
$$y=x \otimes h$$
where $\otimes$ is the template operation. The partial derivative of the loss function with respect to $y$ is also known:
$$\frac{\partial L}{\partial y}$$
Try to derive:
1) the derivative of the loss function with respect to the input, $\frac{\partial L}{\partial x}$
Without loss of generality, let
$\begin{equation}
X = \left[\begin{array}{ccccc}
x_{11} & x_{12} & x_{13} & x_{14} & x_{15} \\
x_{21} & x_{22} & x_{23} & x_{24} & x_{25} \\
x_{31} & x_{32} & x_{33} & x_{34} & x_{35} \\
x_{41} & x_{42} & x_{43} & x_{44} & x_{45} \\
x_{51} & x_{52} & x_{53} & x_{54} & x_{55}
\end{array}\right]
\end{equation}$
$\begin{equation}
H=\left[\begin{array}{ccc}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{array}\right]
\end{equation}$
Since $y=x \otimes h$, we have $Y=\mathrm{conv2}(X,H)$:
$\begin{equation}
Y = \left[\begin{array}{ccccc}
y_{11} & y_{12} & y_{13} & y_{14} & y_{15} \\
y_{21} & y_{22} & y_{23} & y_{24} & y_{25} \\
y_{31} & y_{32} & y_{33} & y_{34} & y_{35} \\
y_{41} & y_{42} & y_{43} & y_{44} & y_{45} \\
y_{51} & y_{52} & y_{53} & y_{54} & y_{55}
\end{array}\right]
\end{equation}$
Now consider $\frac{\partial L}{\partial x_{11}}$. With Padding = 1 the zero-padded input is
$\begin{equation}
X^{pad} = \left[\begin{array}{ccccccc}
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & x_{11} & x_{12} & x_{13} & x_{14} & x_{15} & 0 \\
0 & x_{21} & x_{22} & x_{23} & x_{24} & x_{25} & 0 \\
0 & x_{31} & x_{32} & x_{33} & x_{34} & x_{35} & 0 \\
0 & x_{41} & x_{42} & x_{43} & x_{44} & x_{45} & 0 \\
0 & x_{51} & x_{52} & x_{53} & x_{54} & x_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]
\end{equation}$
$y_{ij}=\sum_{u=0}^{2} \sum_{v=0}^{2} X_{i+u, j+v}^{pad} \cdot H_{1+u, 1+v}$
$\begin{equation}y_{11} = np.sum\left(
\left[\begin{array}{ccc}
0 & 0 & 0 \\
0 & x_{11} & x_{12} \\
0 & x_{21} & x_{22}
\end{array}\right] *
\left[\begin{array}{ccc}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{array}\right]\right)
\end{equation}$
$\begin{equation}y_{12} = np.sum\left(
\left[\begin{array}{ccc}
0 & 0 & 0 \\
x_{11} & x_{12} & x_{13}\\
x_{21} & x_{22} & x_{23}
\end{array}\right] *
\left[\begin{array}{ccc}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{array}\right] \right)
\end{equation}$
$\begin{equation}y_{21} = np.sum\left(
\left[\begin{array}{ccc}
0 & x_{11} & x_{12} \\
0 & x_{21} & x_{22} \\
0 & x_{31} & x_{32}
\end{array}\right] *
\left[\begin{array}{ccc}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{array}\right] \right)
\end{equation}$
$\begin{equation}y_{22} = np.sum\left(
\left[\begin{array}{ccc}
x_{11} & x_{12} & x_{13} \\
x_{21} & x_{22} & x_{23} \\
x_{31} & x_{32} & x_{33}
\end{array}\right] *
\left[\begin{array}{ccc}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{array}\right] \right)
\end{equation}$
$\frac{\partial L}{\partial x_{11}}=\frac{\partial L}{\partial y_{11}} \cdot h_{22}+\frac{\partial L}{\partial y_{12}} \cdot h_{21}+\frac{\partial L}{\partial y_{21}} \cdot h_{12}+\frac{\partial L}{\partial y_{22}} \cdot h_{11}$
These four coefficients are exactly the $(1,1)$ entry of a 2-D convolution (with the kernel rotated by 180°) of $\frac{\partial L}{\partial Y}^{(pad)}$ with $H$, again with Padding = 1 and Stride = 1.
Therefore, we can conclude
$\frac{\partial L}{\partial X}=\operatorname{Convolution2D}\left(\frac{\partial L}{\partial Y}^{(pad)}, H\right)$
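A quick numerical sanity check of this identity (a sketch; `correlate2d` plays the role of the sliding template operation written out in the patches above, and `convolve2d` the 180°-flipped convolution):
```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
H = rng.standard_normal((3, 3))
G = rng.standard_normal((5, 5))        # stand-in for dL/dY

Xp = np.pad(X, 1)                      # Padding = 1
Y = correlate2d(Xp, H, mode="valid")   # forward template operation, 5x5

# Claimed identity: dL/dX = true convolution of the padded output gradient with H
dLdX = convolve2d(np.pad(G, 1), H, mode="valid")

# Finite-difference check on x_11 (the entry analysed above)
eps = 1e-6
Xp2 = Xp.copy()
Xp2[1, 1] += eps                       # perturb x_11
num = ((correlate2d(Xp2, H, mode="valid") - Y) * G).sum() / eps
print(np.allclose(dLdX[0, 0], num, atol=1e-4))  # True
```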
2) the derivative of the loss function with respect to the template, $\frac{\partial L}{\partial h}$
Differentiating the expression for $y_{ij}$ above with respect to each template entry, $\frac{\partial y_{ij}}{\partial H_{1+u, 1+v}} = X^{pad}_{i+u, j+v}$, gives
$\begin{equation}
\frac{\partial L}{\partial H}=\left[\begin{array}{ccc}
\frac{\partial L}{\partial H_{11}} & \frac{\partial L}{\partial H_{12}} & \frac{\partial L}{\partial H_{13}} \\
\frac{\partial L}{\partial H_{21}} & \frac{\partial L}{\partial H_{22}} & \frac{\partial L}{\partial H_{23}} \\
\frac{\partial L}{\partial H_{31}} & \frac{\partial L}{\partial H_{32}} & \frac{\partial L}{\partial H_{33}}
\end{array}\right]
=\left[\begin{array}{ccc}
\sum_{i} \sum_{j} \frac{\partial L}{\partial Y_{i, j}} \cdot X_{i, j}^{pad} & \sum_{i} \sum_{j} \frac{\partial L}{\partial Y_{i, j}} \cdot X_{i, j+1}^{pad} & \sum_{i} \sum_{j} \frac{\partial L}{\partial Y_{i, j}} \cdot X_{i, j+2}^{pad} \\
\sum_{i} \sum_{j} \frac{\partial L}{\partial Y_{i, j}} \cdot X_{i+1, j}^{pad} & \sum_{i} \sum_{j} \frac{\partial L}{\partial Y_{i, j}} \cdot X_{i+1, j+1}^{pad} & \sum_{i} \sum_{j} \frac{\partial L}{\partial Y_{i, j}} \cdot X_{i+1, j+2}^{pad} \\
\sum_{i} \sum_{j} \frac{\partial L}{\partial Y_{i, j}} \cdot X_{i+2, j}^{pad} & \sum_{i} \sum_{j} \frac{\partial L}{\partial Y_{i, j}} \cdot X_{i+2, j+1}^{pad} & \sum_{i} \sum_{j} \frac{\partial L}{\partial Y_{i, j}} \cdot X_{i+2, j+2}^{pad}
\end{array}\right]
\end{equation}$
即
$\frac{\partial L}{\partial H}=\operatorname{Rot180}\left( \operatorname{Convolution2D}\left(\frac{\partial L}{\partial Y}^{(p a d)}, X\right)\right)$
## 2
假设现在有一个4x4的具有两个通道的特征如下所示。
```
f = [[[ 1 2 3 4]
[ 8 7 6 5]
[ 9 10 11 12]
[16 15 14 13]]
[[29 30 31 32]
[28 27 26 25]
[21 22 23 24]
[20 19 18 17]]]
```
对这个图像采用,如下的模板进行模板运算。
```
h = [[[[-1 0 1]
[-1 0 1]
[-1 0 1]]
[[-1 -1 -1]
[ 0 0 0]
[ 1 1 1]]]
[[[ 1 0 0]
[ 0 1 0]
[ 0 0 1]]
[[ 0 0 1]
[ 0 1 0]
[ 1 0 0]]]]
```
```python
import numpy as np
# f[c_in, x, y]
f = np.asarray([[[1, 2, 3, 4],
[8, 7, 6, 5],
[9, 10, 11, 12],
[16, 15, 14, 13]],
[[29, 30, 31, 32],
[28, 27, 26, 25],
[21, 22, 23, 24],
[20, 19, 18, 17]]])
h = np.empty([2, 2, 3, 3])
h[0, 0, :, :] = [[-1, 0, 1],
[-1, 0, 1],
[-1, 0, 1]]
h[0, 1, :, :] = [[-1, -1, -1],
[0, 0, 0],
[1, 1, 1]]
h[1, 0, :, :] = [[1, 0, 0],
[0, 1, 0],
[0, 0, 1]]
h[1, 1, :, :] = [[0, 0, 1],
[0, 1, 0],
[1, 0, 0]]
```
模板运算采用 Valid 输出尺寸,请问:
1)输出记为 conv1 ,请问 conv1 是多少?
```python
from scipy.signal import convolve2d
_, M, N = f.shape
# conv1[c_out, c_in, x, y]
conv = np.empty([2, 2, M - 2, N - 2])
conv1 = np.empty([2, 2, 2])
for c_out in range(2):
for c_in in range(2):
conv[c_out, c_in, :, :] = convolve2d(f[c_in, :, :], np.rot90(h[c_out, c_in, :, :], 2), mode="valid")
conv1[0]=conv[0,0,:,:]+conv[0,1,:,:]
conv1[1]=conv[1,0,:,:]+conv[1,1,:,:]
print("conv1 = \n", conv1)
# print(convolve2d(f[0,:,:],np.rot90(h[0,0,:,:],2),"valid"))
```
conv1 =
[[[-22. -22.]
[-26. -26.]]
[[ 98. 100.]
[100. 98.]]]
(2) 如果采用 ReLU 对这个输出进行激活,记为 relu1 ,请问激活后 relu1 的值是多
少?
```python
def relu(np_vect):
if np_vect >= 0:
return np_vect
else:
return 0
rectification = np.vectorize(relu)
relu1 = rectification(conv1)
print(relu1)
```
[[[ 0 0]
[ 0 0]]
[[ 98 100]
[100 98]]]
3)如果将输出拉成一列,采用全连接网络,输出节点个数为 5,假设全连接所有权重都设置为 1/10,输出记为fc1 ,请问输出是多少?
```python
fc1 = np.zeros([5])
weight = 0.1
for i in range(5):
for j in relu1.flat:
fc1[i] += weight * j
print(fc1)
```
[39.6 39.6 39.6 39.6 39.6]
4)假设采用 softmax 对这个 5 个节点的输出进行,概率值记为p=[p1,p2,p3,p4,p5],
请问p是多少?
```python
p = np.asarray([1 / (1 + np.exp(-x)) for x in fc1])
p=[x/sum(p) for x in p]
print(p)
```
[0.2, 0.2, 0.2, 0.2, 0.2]
5) 如果采用交叉熵对概率进行约束,如下所示
$$L=\sum_{i=1}^{5}-y_{i} \log p_{i}$$
如果$y_{1}=0, y_{2}=0, y_{3}=1, y_{4}=0, y_{5}=0$,请问损失函数是多少?
```python
y = np.asarray([0, 0, 1, 0, 0])
print("y*log(p) = ", y*np.log(p))
L = np.sum(y * np.log(p))
print("L = ", L)
```
y*log(p) = [-0. -0. -1.60943791 -0. -0. ]
L = -1.6094379124341003
6) 请问$\frac{\partial L}{\partial p}, \frac{\partial L}{\partial \mathrm{fc}_{1}}, \frac{\partial L}{\partial \mathrm{relu}_{1}}, \frac{\partial L}{\partial \operatorname{conv}_{1}}$分别是多少?
```
tensor([[ 0.2000, 0.2000, -0.8000, 0.2000, 0.2000]])
即0
tensor([[[[-4.1723e-09, -4.1723e-09], [-4.1723e-09, -4.1723e-09]], [[-4.1723e-09, -4.1723e-09], [-4.1723e-09, -4.1723e-09]]]])
tensor([[[[ 0.0000e+00, 0.0000e+00], [ 0.0000e+00, 0.0000e+00]], [[-4.1723e-09, -4.1723e-09], [-4.1723e-09, -4.1723e-09]]]]
```
7) 如果把全连接的权重记为$W$,请问$\frac{\partial L}{\partial W}$是多少?
```
tensor([
[ 0.0000, 0.0000, 0.0000, 0.0000, 19.6000, 20.0000, 20.0000,19.6000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 19.6000, 20.0000, 20.0000,19.6000],
[ -0.0000, -0.0000, -0.0000, -0.0000, -78.4000, -80.0000, -80.0000,-78.4000], [ 0.0000, 0.0000, 0.0000, 0.0000, 19.6000, 20.0000, 20.0000,19.6000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 19.6000, 20.0000, 20.0000,19.6000] ])
```
8) 请问$\frac{\partial L}{\partial h}$是多少?
```
tensor([
[[[ 0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00]],
[[ 0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00]]],
[[[-7.5102e-08, -7.5102e-08, -7.5102e-08],
[-1.4186e-07, -1.4186e-07, -1.4186e-07],
[-2.0862e-07, -2.0862e-07, -2.0862e-07]],
[[-4.7565e-07, -4.7565e-07, -4.7565e-07],
[-4.0889e-07, -4.0889e-07, -4.0889e-07],
[-3.4213e-07, -3.4213e-07, -3.4213e-07]]]])
```
| 893b1d1d596d4a2f2735e5f21080fb2267c10a27 | 15,469 | ipynb | Jupyter Notebook | homeworks/ch_11.ipynb | magicwenli/morpher | 2f8e756d81f3fac59c948789e945a06a4d4adce3 | [
"MIT"
] | null | null | null | homeworks/ch_11.ipynb | magicwenli/morpher | 2f8e756d81f3fac59c948789e945a06a4d4adce3 | [
"MIT"
] | null | null | null | homeworks/ch_11.ipynb | magicwenli/morpher | 2f8e756d81f3fac59c948789e945a06a4d4adce3 | [
"MIT"
] | null | null | null | 29.408745 | 279 | 0.40662 | true | 4,712 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91848 | 0.79053 | 0.726086 | __label__yue_Hant | 0.14423 | 0.525274 |
```python
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
%matplotlib inline
import math
from scipy import stats
from sympy import *
init_printing()
```
### Probabilities and Expectations ###
A function $f$ on the plane is called a *joint density* if:
- $f(x, y) \ge 0$ for all $x$, $y$
- $\int_x \int_y f(x, y)dydx = 1$
If you think of $f$ as a surface, then the first condition says that the surface is on or above the plane. The second condition says that the total volume under the surface is 1.
Think of probabilities as volumes under the surface, and define $f$ to be the *joint density of random variables $X$ and $Y$* if
$$
P((X, Y) \in A) ~ = ~ \mathop{\int \int}_A f(x,y)dydx ~~~~~ \text{for all } A
$$
That is, the chance that the random point $(X, Y)$ falls in the region $A$ is the volume under the joint density surface over the region $A$.
This is a two-dimensional analog of the fact that in probabilities involving a single random variable can be thought of as areas under the density curve.
### Infinitesimals ###
Also analogous is the interpretation of the joint density as an element in the calculation of the probability of an infinitesimal region.
The infinitesimal region is a tiny rectangle in the plane just around the point $(x, y)$. Its width is $dx$ and its length is $dy$. The corresponding volume is that of a rectangular box whose base is the tiny rectangle and whose height is $f(x, y)$.
Thus for all $x$ and $y$,
$$
P(X \in dx, Y \in dy) ~ \sim ~ f(x, y)dxdy
$$
and the joint density measures *probability per unit area*:
$$
f(x, y) ~ \sim ~ \frac{P(X \in dx, Y \in dy)}{dxdy}
$$
An example will help us visualize all this. Let $f$ be defined as follows:
$$
f(x, y) ~ = ~
\begin{cases}
120x(y-x)(1-y), ~~~ 0 < x < y < 1 \\
0 ~~~~~~~~ \text{otherwise}
\end{cases}
$$
For now, just assume that this is a joint density, that is, it integrates to 1. Let's first take a look at what the surface looks like.
### Plotting the Surface ###
To do this, we will use a 3-dimensional plotting routine. First, we define the joint density function. For use in our plotting routine, this function must take $x$ and $y$ as its inputs and return the value $f(x, y)$ as defined above.
```python
def joint(x,y):
if y < x:
return 0
else:
return 120 * x * (y-x) * (1-y)
```
Then we call `Plot_3d` to plot the surface. The arguments are the limits on the $x$ and $y$ axes, the name of the function to be plotted, and two optional arguments `rstride` and `cstride` that determine how many grid lines to use (larger numbers correspond to less frequent grid lines).
```python
Plot_3d(x_limits=(0,1), y_limits=(0,1), f=joint, cstride=4, rstride=4)
```
You can see that the surface has level 0 in the lower right hand triangle. In fact, the possible values of $(X, Y)$ are as shown below. For calculations, we will frequently draw just the possible values and not the surface.
```python
# HIDDEN
plt.plot([0, 0], [0, 1], color='k', lw=2)
plt.plot([0, 1], [0, 1], color='k', lw=2)
plt.plot([0, 1], [1, 1], color='k', lw=2)
xx = np.arange(0, 1.11, 0.1)
yy = np.ones(len(xx))
plt.fill_between(xx, xx, yy, alpha=0.3)
plt.xlim(-0.05, 1)
plt.ylim(0, 1.05)
plt.axes().set_aspect('equal')
plt.xticks(np.arange(0, 1.1, 0.25))
plt.yticks(np.arange(0, 1.1, 0.25))
plt.xlabel('$x$')
plt.ylabel('$y$', rotation=0)
plt.title('Possible Values of $(X, Y)$');
```
### The Total Volume Under the Surface ###
First, it's a good idea to check that the total probability under the surface is equal to 1.
The function $f$ looks like a bit of a mess but it is easy to see that it is non-negative. Let's use `SymPy` to see that it integrates to 1. Done by hand, the integration is routine but tedious.
We will first declare the two variables to have values in the unit interval, and assign the function to the name `f`. This specification doesn't say that $x < y$, but we will enforce that condition when we integrate.
```python
declare('x', interval=(0, 1))
declare('y', interval=(0, 1))
f = 120*x*(y-x)*(1-y)
```
To set up the double integral over the entire region of possible values, notice that $x$ goes from 0 to 1, and for each fixed value of $x$, the value of $y$ goes from $x$ to 1.
We will fix $x$ and first integrate with respect to $y$. Then we will integrate $x$. The double integral requires a call to `Integral` that specifies the inner integral first and then the outer. The call says:
- The function being integrated is $f$.
- The inner integral is over the variable $y$ which goes from $x$ to 1.
- The outer integral is over the variable $x$ which goes from 0 to 1.
```python
Integral(f, (y, x, 1), (x, 0, 1))
```
To evaluate the integral, use `doit()`:
```python
Integral(f, (y, x, 1), (x, 0, 1)).doit()
```
### Probabilities as Volumes ###
Probabilities are volumes under the joint density surface; in other words, they are double integrals of the function $f$. For each probability, we have to first identify the region of integration, which we will do by geometry and by inspecting the event. Once we have set up the integral, we have to calculate its value, which we will do by `SymPy`.
#### Example 1. ####
Suppose you want to find $P(Y > 4X)$. The event is the blue region in the graph below.
```python
# HIDDEN
plt.plot([0, 0], [0, 1], color='k', lw=2)
plt.plot([0, 1], [0, 1], color='k', lw=2)
plt.plot([0, 1], [1, 1], color='k', lw=2)
xx = np.arange(0, 0.251, 0.05)
yy = np.ones(len(xx))
plt.fill_between(xx, 4*xx, yy, alpha=0.3)
plt.xlim(-0.05, 1)
plt.ylim(0, 1.05)
plt.axes().set_aspect('equal')
plt.xticks(np.arange(0, 1.1, 0.25))
plt.yticks(np.arange(0, 1.1, 0.25))
plt.xlabel('$x$')
plt.ylabel('$y$', rotation=0)
plt.title('$Y > 4X$');
```
The volume under the density surface over this region is given by an integral specified analogously to the previous one: first the inner integral and then the outer.
```python
Integral(f, (y, 4*x, 1), (x, 0, 0.25))
```
```python
Integral(f, (y, 4*x, 1), (x, 0, 0.25)).doit()
```
#### Example 2. ####
Suppose you want to find $P(X > 0.25, Y > 0.5)$. The event is the colored region below.
```python
# HIDDEN
plt.plot([0, 0], [0, 1], color='k', lw=2)
plt.plot([0, 1], [0, 1], color='k', lw=2)
plt.plot([0, 1], [1, 1], color='k', lw=2)
xx = np.arange(0.25, .52, 0.05)
yy1 = 0.5*np.ones(len(xx))
yy2 = np.ones(len(xx))
plt.fill_between(xx, yy1, yy2, alpha=0.3)
xx = np.arange(0.5, 1.1, 0.1)
yy1 = 0.5*np.ones(len(xx))
yy2 = np.ones(len(xx))
plt.fill_between(xx, xx, yy2, alpha=0.3)
plt.xlim(-0.05, 1)
plt.ylim(0, 1.05)
plt.axes().set_aspect('equal')
plt.xticks(np.arange(0, 1.1, 0.25))
plt.yticks(np.arange(0, 1.1, 0.25))
plt.xlabel('$x$')
plt.ylabel('$y$', rotation=0)
plt.title('$X > 0.25, Y > 0.5$');
```
Now $P(X > 0.25, Y > 0.5)$ is the integral of the joint density function over this region. Notice that for each fixed value of $y > 0.5$, the value of $x$ in this event goes from $0.25$ to $y$. So let's integrate $x$ first and then $y$.
```python
Integral(f, (x, 0.25, y), (y, 0.5, 1))
```
```python
Integral(f, (x, 0.25, y), (y, 0.5, 1)).doit()
```
### Expectation ###
Let $g$ be a function on the plane. Then
$$
E(g(X, Y)) ~ = ~ \int_y \int_x g(x, y)f(x, y)dxdy
$$
provided the integral exists, in which case it can be carried out in either order ($x$ first, then $y$, or the other way around).
This is the non-linear function rule for expectation, applied to two random variables with a joint density.
As an example, let's find $E(\frac{Y}{X})$ for $X$ and $Y$ with the joint density $f$ given in the examples above.
Here $g(x, y) = \frac{y}{x}$, and
\begin{align*}
E\big{(}\frac{Y}{X}\big{)} &= \int_y \int_x g(x, y)f(x, y)dxdy \\ \\
&= \int_0^1 \int_x^1 \frac{y}{x} 120x(y-x)(1-y)dy dx \\ \\
&= \int_0^1 \int_x^1 120y(y-x)(1-y)dy dx
\end{align*}
Now let's use `SymPy`. Remember that `x` and `y` have already been defined as symbolic variables with values in the unit interval.
```python
ev_y_over_x = Integral(120*y*(y-x)*(1-y), (y, x, 1), (x, 0, 1))
ev_y_over_x
```
```python
ev_y_over_x.doit()
```
So for this pair of random variables $X$ and $Y$, we have
$$
E\big{(}\frac{Y}{X}\big{)} = 3
$$
```python
```
| 77bc35e3a40636f4c12143d8dfc200f083583c51 | 217,358 | ipynb | Jupyter Notebook | miscellaneous_notebooks/Joint_Densities/Probabilities_and_Expectations.ipynb | dcroce/jupyter-book | 9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624 | [
"MIT"
] | null | null | null | miscellaneous_notebooks/Joint_Densities/Probabilities_and_Expectations.ipynb | dcroce/jupyter-book | 9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624 | [
"MIT"
] | null | null | null | miscellaneous_notebooks/Joint_Densities/Probabilities_and_Expectations.ipynb | dcroce/jupyter-book | 9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624 | [
"MIT"
] | null | null | null | 343.920886 | 164,082 | 0.915204 | true | 2,687 | Qwen/Qwen-72B | 1. YES
2. YES | 0.874077 | 0.865224 | 0.756273 | __label__eng_Latn | 0.99257 | 0.595407 |
For the Ronbrock method, we need to solve a linear system of the form
$$
M_{ij}x_{j}=b_{i} \;,
$$
with M a square matrix (repeated indecies imply summation).
Such systems are soved by (among other methods) the so-called LU factorization (or decomposition),
where you decompose $M_{ij}=L_{ik}U_{kj}$ with $L_{i, j>i}=0$, $L_{i, j=i}=1$, $U_{i,j<i}=0$.
That is if $M$ is $N \times N$ matrix, L,U are defined as
\begin{align}
&L=\left( \begin{matrix}
1 & 0 & 0 & 0 & \dots &0 & 0 \\
L_{2,1} & 1 & 0 & 0 & \dots &0 & 0\\
L_{3,1} & L_{3,2} & 1 & 0 & \dots &0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \dots & \vdots & \vdots \\
L_{N-1, 1} & L_{N-1, 2} & L_{N-1, 3} & L_{N-1, 4} & \dots & 1 & 0 \\
L_{N, 1} & L_{N, 2} & L_{N, 3} & L_{N, 4} & \dots & L_{N, N-1} & 1 \\
\end{matrix}\right) \;, \\
%
&U=\left( \begin{matrix}
U_{1,1} & U_{1,2} & U_{1,3} & U_{1,4} & \dots & U_{1,N-1} & U_{1,N} \\
0 & U_{2,2} & U_{2,3} & U_{2,4} & \dots & U_{2,N-1} & U_{2,N}\\
0 & 0 & U_{3,3} & U_{3,4} & \dots & U_{3,N-1} & U_{3,N} \\
\vdots & \vdots & \vdots & \ddots & \dots & \vdots & \vdots \\
0 & 0 & 0 &0 & \dots & U_{N-1,N-1} & U_{N-1,N} \\
0 & 0 & 0& 0 & \dots & 0 &U_{N,N} \\
\end{matrix}\right)
%
\end{align}
Then we have in general $M_{i, j} = \sum_{k=1}^{i}L_{i,k}U_{k,j}$. Since
$L_{i, k \geq i}=0$ and $U_{k>j,j}=0$, the sum runs up to $j$ if $i \geq j$ and
$i$ if $i \leq j$ (for $i=j$ then both are correct). That is
$$
M_{i, j \geq i} = \sum_{k=1}^{i-1}L_{i,k}U_{k,j}+ U_{i,j} \Rightarrow
U_{i,j }=M_{i,j } - \sum_{k=1}^{i-1}L_{i,k}U_{k,j }\; , \;\;\; j \geq i \\[0.5cm]
M_{i, j \leq i} = \sum_{k=1}^{j-1}L_{i,k}U_{k,j} +L_{i,j}U_{j,j} \Rightarrow
L_{i,j}=\left( M_{i,j} - \sum_{k=1}^{j-1}L_{i,k}U_{k,j} \right) U_{jj}^{-1} , \;\;\; j \leq i
$$
Since $U$ and $L$ are triangular matrices, we can solve these two systems sequentially
$$
L_{i,k}y_{k}=b_{k} \\
U_{k,j}x_{j}=y_{k},
$$
since
$$
y_1 = b_{1} \\
L_{2,1}y_{1}+y_{2}=b_{2} \Rightarrow y_{2}=b_{2}-L_{2,1}y_{1} \\
\vdots \\
y_{i}=b_{i} - \sum_{j=1}^{i-1}L_{i,j}y_{j}
$$
and
$$
U_{N,N}x_{N}=y_{N} \Rightarrow x_{N}=y_{N}/U_{N,N}\\
U_{N-1,N}x_{N}+U_{N-1,N-1}x_{N-1}=y_{N-1} \Rightarrow x_{N-1}=\left(y_{N-1} -U_{N-1,N}x_{N} \right)/U_{N-1,N-1} \\
\vdots \\
x_{i}=\dfrac{y_{i} -\displaystyle\sum_{j=i+1}^{N} U_{i,j}x_{j} }{U_{i,i}}
$$
Since $U_{i,i}$ appears to denominator, if the diagonal terms of $U$ are small (or god forbid they vanish), we would have a problem.
To solve this problem we do $LUP$ decomposition, where $L \; U=P \; M$ with $P$ a permutation matrix so that the diagonal of $U$ has the dominant components in each row.
Then solving $M x =b$ is equavalent to solving $\left( P \; M \right) x =P \; b$ with LU decomposition of
$P \; M$. That is x solves both systems (no need to permute x).
There is a clever way to make the docomposition faster. This is by initializing
$L=1_{N \times N}$ and $U=M$. Then we have the follwing algorithm for LU decomposition without
pivoting:
```bash
Input: M, N
#M: matrix
#N: size of M
#initialize U
U=M
#initialize L
L=Unit(N,N)
for k in [2,...,N] do
for i in [k,...,N] do
L[i][k-1]=U[i][k-1]/U[k-1][k-1]
for j in [k-1,...,N] do
U[i][j]=U[i][j]-L[i][k-1]*U[k-1][j]
done
done
done
```
I will not write the algorithm including pivoting, as the code in python will not be different.
```python
import numpy as np
```
```python
def ind_max(row,N):
'''
Find the index of the maximum of a list (row) of lentgth N.
'''
_in=0
_max=row[0]
i=0
while i<N:#the end of the row should be included (convension in how I use LUP..)
if row[i]>_max:
_max=row[i]
_in=i
i+=1
return _in
def row_swap(A,index_1,index_2,N):
'''
row_swap takes a N*N array and interchanges
row index_1 with row index_2.
'''
for i in range(N):
tmp=A[index_1][i]
A[index_1][i]=A[index_2][i]
A[index_2][i]=tmp
```
```python
#This is the same as in the main notebook, but here I use permutation matrix instead of a permutation vector.
#The actual algorithm does not change, as you don't realy care about the definition of the permutation matrix
#or vector.
def LUP(M,N,_tiny=1e-20):
U=[ [ M[i][j] for j in range(N)] for i in range(N) ]
L=[ [ 0 if i!=j else 1 for j in range(N)] for i in range(N) ]
P=[ [ 0 if i!=j else 1 for j in range(N)] for i in range(N) ]
for k in range(1,N):
for i in range(k,N):
#find the index of the maximum in column
_col=[np.abs(U[_r][k-1]) for _r in range(k-1,N)]
#find the index of the maximum of _col
# notice that the length of _col is N-(k-1)
len_col=N-(k-1)
pivot=ind_max( _col ,len_col) + k - 1 #convert the index of _col (it has a length of len_col) to the index of a row of U
##################################################
#this was in LU_julia (instead of "<_tiny" it had "== 0").
#if you remove it, then you get a lot of infinities
#it has to do with the fact that if U[pivot][k-1] <_tiny , then U[k-1][k-1] will be a zero,
#L[i][k-1] explodes.
#You are allowed to skip this i, then, because if U[pivot][k-1] <_tiny , then all U[i][k-1] are small!
#Check that this is true by uncommenting print(_col)
if np.abs(U[pivot][k-1]) < _tiny :
#print(_col)
break
###################################################
#if the maximum is not at k-1, swap!
if pivot != k-1 :
# Permute rows k-1 and pivot in U
row_swap(P,k-1,pivot,N)
tmpU=[U[k-1][_r] for _r in range(k-1,N)]
#print(U)
for _r in range(k-1,N):
U[k-1][_r]=U[pivot][_r]
#print(U)
for _r in range(k-1,N):
U[pivot][_r]=tmpU[_r-(k-1)]#again we have to convert the index of tmpU
#print(U)
#print("=========================")
tmpL=[L[k-1][_r] for _r in range(k-1)]
#print(L)
for _r in range(k-1):
L[k-1][_r]=L[pivot][_r]
#print(L)
for _r in range(k-1):
L[pivot][_r]=tmpL[_r]
#print(L)
#print("========================")
L[i][k-1]=U[i][k-1]/U[k-1][k-1]
for j in range(k-1,N):
U[i][j]=U[i][j]-L[i][k-1]*U[k-1][j]
return L,U,P
```
```python
def Dot(M,x,N):
'''
Product of N*N matrix M with vector x.
'''
c=[0 for i in range(N) ]
for i in range(N):
for j in range(N):
c[i]+=M[i][j]*x[j]
return c
def Sum(List,N):
'''
Calculates the sum of a List of size N
'''
s=0
for i in range(N):
s+=List[i]
return s
```
```python
def Solve_LU(L,U,P,b,N):
'''
This solves P*M*x=P*b (x is also the solution to M*x=b)
Input:
L,U,P= LUP decomposition of M. with P*M=L*U
b=the right hand side of the equation
N=the number of equations
'''
b=Dot(P,b,N)
d=[0 for i in range(N) ]
x=[0 for i in range(N) ]
d[0]=b[0]
for i in range(1,N):
d[i]=b[i]-Sum( [L[i][j]*d[j] for j in range(i)],i )
x[N-1] = d[N-1]/U[N-1][N-1]
for i in range(N-2,-1,-1):
x[i]=(d[i]-Sum( [U[i][j]*x[j] for j in range(i+1,N)],N-(i+1) ))/U[i][i]
return x
```
## tests
```python
```
```python
#check if Solve_LU works
if True:
NT=500#NT tests
N=4#N*N matrices
testSol=[0 for i in range(NT)]
for i in range(NT):
#M=np.random.randint(-3,3,size=[N,N])
b=np.random.rand(N)*13.-6.5
M=np.random.rand(N,N)*4-2
L,U,P=LUP(M,N)
x=Solve_LU(L,U,P,b,N)
testSol[i]=np.array(Dot(M,x,N))-np.array(b)
print(np.max(testSol))
```
3.268496584496461e-13
```python
from scipy.linalg import lu_factor,lu_solve,lu
```
```python
#check LUP against numpy.
#in test I will have the maximum difference between my L,U with what np.lu returns,
#and the difference between my L*U-P*M. So, test should be an array with small numbers!
#even when I get difference with numpy it is not important, because the decomposition is still correct
#(no nan or inf)!!!!
#change to True to run tests
if True:
NT=500#NT tests
N=10#N*N matrices
testL=[0 for i in range(NT)]
testU=[0 for i in range(NT)]
testM=[0 for i in range(NT)]
for i in range(NT):
#M=np.random.randint(-3,3,size=[N,N])
M=np.random.rand(N,N)*4-2
L,U,P=LUP(M,N)
Ps,Ls,Us=lu(M)
testU[i]=np.max(np.array(U)-Us)
testL[i]=np.max(np.array(L)-Ls)
testM[i]=np.max(np.dot(L,U)-np.dot( P,M) )
if testL[i] > 1e-5:
#print(np.array(L))
#print(Ls)
#print([U[_t][_t] for _t in range(N)])
print(testM[i])
pass
print(np.max(testU) , np.max(testL) , np.max(testM))
```
1.865174681370263e-14 6.38378239159465e-15 1.7763568394002505e-15
```python
```
```python
```
| 0472e0c62c167525f49b4df1c9a6a111f8fbd84f | 14,542 | ipynb | Jupyter Notebook | Differential_Equations/python/0-useful/LU_decomposition-first.ipynb | dkaramit/ASAP | afade2737b332e7dbf0ea06eb4f31564a478ee40 | [
"MIT"
] | null | null | null | Differential_Equations/python/0-useful/LU_decomposition-first.ipynb | dkaramit/ASAP | afade2737b332e7dbf0ea06eb4f31564a478ee40 | [
"MIT"
] | null | null | null | Differential_Equations/python/0-useful/LU_decomposition-first.ipynb | dkaramit/ASAP | afade2737b332e7dbf0ea06eb4f31564a478ee40 | [
"MIT"
] | 1 | 2021-12-15T02:03:01.000Z | 2021-12-15T02:03:01.000Z | 31.681917 | 180 | 0.422019 | true | 3,408 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92079 | 0.849971 | 0.782645 | __label__eng_Latn | 0.876823 | 0.656678 |
# Multi-Armed bandit UCB
Thompson sampling is an ingenious algorithm that implicitly balances exploration and exploitation based on quality and uncertainty. Let's say we sample a 3-armed bandit and model the probability that each arm gives us a positive reward. The goal is of course to maximize our rewards by pulling the most promising arm. Assume at the current timestep arm-3 has mean reward of 0.9 over 800 pulls, arm-2 has mean reward of 0.8 over 800 pulls, and arm-1 has mean reward of 0.78 over 10 pulls. So far, arm-3 is clearly the best. But if we were to explore, would we choose arm-2 or arm-1? An $\epsilon$-greedy algorithm would, with probability $\epsilon$, just as likely choose arm-3, arm-2, or arm-1. However, arm-2 has been examined many times, as many as arm-1, and has a mean reward lower than arm-1. Selecting arm-2 seems like a wasteful exploratory action. Arm-1 however, has a lower mean reward than either arm-2 or arm-3, but has only been pulled a few times. In other words, arm-1 has a higher chance of being a better action than arm-3 when compared to arm-2, since we are more uncertain about its true value. The $\epsilon$-greedy algorithm completely misses this point. Thompson sampling, on the other hand, incorporates uncertainty by modelling the bandit's Bernouilli parameter with a prior beta distribution.
The beauty of the algorithm is that it always chooses the action with the highest expected reward, with the twist that this reward is weighted by uncertainty. It is in fact a Bayesian approach to the bandit problem. In our Bernouilli bandit setup, each action $k$ returns reward of 1 with probability $\theta_k$, and 0 with probability $1-\theta_k$. At the beginning of a simulation, each $\theta_k$ is sampled from a uniform distribution $\theta_k \sim Uniform(0,1)$ with $\theta_k$ held constant for the rest of that simulation (in the stationary case). The agent begins with a prior belief of the reward of each arm $k$ with a beta distribution, where $\alpha = \beta = 1$. The prior probability density of each $\theta_k$ is:
$$
p(\theta_k) = \frac{\Gamma(\alpha_k + \beta_k)}{\Gamma(\alpha_k)\Gamma(\beta_k)} \theta_k^{\alpha_k -1} (1-\theta_k)^{\beta_k-1}
$$
An action is chosen by first sampling from the beta distribution, followed by choosing the action with highest mean reward:$$
x_t = \text{argmax}_k (\hat{\theta}_k), \quad \hat{\theta}_k \sim \text{beta}(\alpha_k, \beta_k)
$$
According to Bayes' rule, an action's posterior distribution is updated depending on the reward $r_t$ received:$$
(\alpha_k, \beta_k) = (\alpha_k, \beta_k) + (r_t, 1-r_t)
$$
Thus the actions' posterior distribution are constantly updated throughout the simulation. We will measure the Thompson algorithm by comparing it with the $\epsilon$-greedy and Upper Confidence Bound (UCB) algorithms using regret. The per-period regret for the Bernouilli bandit problem is the difference between the mean reward of the optimal action minus the mean reward of the selected action:$$
\text{regret}_t(\theta) = \max_k \theta_k - \theta_{x_t}
$$
First we setup the necessary imports and the standard k-armed bandit. The get_reward_regret samples the reward for the given action, and returns the regret based on the true best action.
```python
import numpy as np
import matplotlib.pyplot as plt
from pdb import set_trace
stationary = True
class Bandit():
def __init__(self, arm_count):
"""
Multi-armed bandit with rewards 1 or 0
At initialization, multiple arms are created. The probability of each arm returning reward 1
if pulled is sample from Bernoulli(p), where randomly chosen from Uniform(0, 1) at initization
"""
self.arm_count = arm_count
self.generate_thetas()
self.timestep = 0
global stationary
self.stationary=stationary
def generate_thetas(self):
self.thetas = np.random.uniform(0, 1, self.arm_count)
def get_reward_regret(self, arm):
"""
Returns random reward for arm action. Assument action are zero-indexed
Args:
arg is an int
"""
self.timestep += 1
if (self.stationary==False) and (self.timestep%100 == 0) :
self.generate_thetas()
# Simulate bernouilli sampling
sim = np.random.uniform(0,1,self.arm_count)
rewards = (sim<self.thetas).astype(int)
reward = rewards[arm]
regret = self.thetas.max() - self.thetas[arm]
return reward, regret
```
We implement the two beta algorithms from [1], although we focus only on the Thompson algorithm. For the Bernouilli-greedy algorithm, the Bernouilli parameters are the expected values of the Beta distribution, i.e.:$$
\mathbb{E}(x_k) = \frac{\alpha_k}{(\alpha_k + \beta_k)}
$$
The Thompson algorithm follows the pseudocode below, based on [1]
Algorithm: Thompson($K$,$\alpha$, $\beta$)
<br>for $t$ = 1,2, ..., do<br>
 // sample action parameter from beta distribution<br>
 for $k = 1, \dots, K$ do<br>
  Sample $\hat{\theta}_k \sim \text{beta}(\alpha_k, \beta_k)$<br>
 end for<br>
 // select action, get reward<br>
 $x_t \leftarrow \text{argmax}_k \hat{\theta}_k$<br>
 $r_t \leftarrow \text{observe}(x_t)$<br>
 // update beta parameters<br>
 $(\alpha_{x_t}, \beta_{x_t}) \leftarrow (\alpha_{x_t}, \beta_{x_t})+(r_t, 1-r_t)$<br>
end for
```python
class BetaAlgo():
"""
The algos try to learn which Bandit arm is the best to maximize reward.
It does this by modelling the distribution of the Bandit arms with a Beta,
assuming the true probability of success of an arm is Bernouilli distributed.
"""
def __init__(self, bandit):
"""
Args:
bandit: the bandit class the algo is trying to model
"""
self.bandit = bandit
self.arm_count = bandit.arm_count
self.alpha = np.ones(self.arm_count)
self.beta = np.ones(self.arm_count)
def get_reward_regret(self, arm):
reward, regret = self.bandit.get_reward_regret(arm)
self._update_params(arm, reward)
return reward, regret
def _update_params(self, arm, reward):
self.alpha[arm] += reward
self.beta[arm] += 1 - reward
class BernGreedy(BetaAlgo):
def __init__(self, bandit):
super().__init__(bandit)
@staticmethod
def name():
return 'beta-greedy'
def get_action(self):
""" Bernouilli parameters are the expected values of the beta"""
theta = self.alpha / (self.alpha + self.beta)
return theta.argmax()
class BernThompson(BetaAlgo):
def __init__(self, bandit):
super().__init__(bandit)
@staticmethod
def name():
return 'thompson'
def get_action(self):
""" Bernouilli parameters are sampled from the beta"""
theta = np.random.beta(self.alpha, self.beta)
return theta.argmax()
```
For comparison, we also implement the $\epsilon$-greedy algorithm and Upper Confidence Bound (UBC) algorithm. The implementations are based on [2] (pages 24-28). The $\epsilon$-greedy algorithm is straightforward and explained briefly above. Note in this implementation we make use of the incremental update rule. That is, to update the $Q$-value of each action, we maintain a count of each action. For action $k$ taken at time $t$:$$
\begin{align}
r_t &\leftarrow \text{observe}(k) \\
N(k) &\leftarrow N(k) + 1 \\
Q(k) &\leftarrow Q(k) + \frac{1}{N(k)}[r_t-Q(k)] \\
\end{align}
$$
```python
epsilon = 0.1
class EpsilonGreedy():
"""
Epsilon Greedy with incremental update.
Based on Sutton and Barto pseudo-code, page. 24
"""
def __init__(self, bandit):
global epsilon
self.epsilon = epsilon
self.bandit = bandit
self.arm_count = bandit.arm_count
self.Q = np.zeros(self.arm_count) # q-value of actions
self.N = np.zeros(self.arm_count) # action count
@staticmethod
def name():
return 'epsilon-greedy'
def get_action(self):
if np.random.uniform(0,1) > self.epsilon:
action = self.Q.argmax()
else:
action = np.random.randint(0, self.arm_count)
return action
def get_reward_regret(self, arm):
reward, regret = self.bandit.get_reward_regret(arm)
self._update_params(arm, reward)
return reward, regret
def _update_params(self, arm, reward):
self.N[arm] += 1 # increment action count
self.Q[arm] += 1/self.N[arm] * (reward - self.Q[arm]) # inc. update rule
```
The UCB action selection is different to the $\epsilon$-greedy. Like the Thompson algorithm, it includes a measure of uncertainty. The selected action follows the rule:$$
A_t = \text{argmax}_a \left[ Q_t(a) + c \sqrt{\frac{\ln t}{N_t (a)}} \right]
$$
Where $N_t (a)$ is the number of times action $a$ has been selected up to time $t$. As the denominator grows in the square root expression, the added effect on $Q_t(a)$ diminishes. This uncertainty measure is weighed by the hyperparameter $c$. The disadvantage is that, unlike the Thompson algorithm, this uncertainty hyperparameter requires tuning. Fundamentally, the UCB uncertainty is deterministic and beneficial, whereas in the Thompson case, uncertainty increases the expected reward variance. Since the Thompson algorithm samples the mean rewards from a beta distribution, the actions with high variance may not only have a higher chance of being chosen, but may also have a lower chance.
```python
ucb_c = 2
class UCB():
"""
Epsilon Greedy with incremental update.
Based on Sutton and Barto pseudo-code, page. 24
"""
def __init__(self, bandit):
global ucb_c
self.ucb_c = ucb_c
self.bandit = bandit
self.arm_count = bandit.arm_count
self.Q = np.zeros(self.arm_count) # q-value of actions
self.N = np.zeros(self.arm_count) + 0.0001 # action count
self.timestep = 1
@staticmethod
def name():
return 'ucb'
def get_action(self):
ln_timestep = np.log(np.full(self.arm_count, self.timestep))
confidence = self.ucb_c * np.sqrt(ln_timestep/self.N)
action = np.argmax(self.Q + confidence)
self.timestep += 1
return action
def get_reward_regret(self, arm):
reward, regret = self.bandit.get_reward_regret(arm)
self._update_params(arm, reward)
return reward, regret
def _update_params(self, arm, reward):
self.N[arm] += 1 # increment action count
self.Q[arm] += 1/self.N[arm] * (reward - self.Q[arm]) # inc. update rule
```
Below are some helper functions. The function simulate will simulate the learning for a single algorithm and return the mean regrets over a number of trials. The experiment function runs the simulations over all algorithms and plots their mean regrets
```python
def plot_data(y):
""" y is a 1D vector """
x = np.arange(y.size)
_ = plt.plot(x, y, 'o')
def multi_plot_data(data, names):
""" data, names are lists of vectors """
x = np.arange(data[0].size)
for i, y in enumerate(data):
plt.plot(x, y, 'o', markersize=2, label=names[i])
plt.legend(loc='upper right', prop={'size': 16}, numpoints=10)
plt.show()
def simulate(simulations, timesteps, arm_count, Algorithm):
""" Simulates the algorithm over 'simulations' epochs """
sum_regrets = np.zeros(timesteps)
for e in range(simulations):
bandit = Bandit(arm_count)
algo = Algorithm(bandit)
regrets = np.zeros(timesteps)
for i in range(timesteps):
action = algo.get_action()
reward, regret = algo.get_reward_regret(action)
regrets[i] = regret
sum_regrets += regrets
mean_regrets = sum_regrets / simulations
return mean_regrets
def experiment(arm_count, timesteps=1000, simulations=1000):
"""
Standard setup across all experiments
Args:
timesteps: (int) how many steps for the algo to learn the bandit
simulations: (int) number of epochs
"""
algos = [EpsilonGreedy, UCB, BernThompson]
regrets = []
names = []
for algo in algos:
regrets.append(simulate(simulations, timesteps, arm_count, algo))
names.append(algo.name())
multi_plot_data(regrets, names)
```
## Experiments
For all experiments, in each trial the agents are allowed 1000 timesteps to maximize reward. We perform 5000 trials for each experiment.
### Baseline
In this first experiment, we aim for a standard setup, inspired by the bandit testbed in Chapter 2 of [2]. We set $\epsilon=0.1$ for the $\epsilon$-greedy algorithm, and $c=2$ for UCB. As can be seen in the chart below, the Thompson and $\epsilon$-greedy agents quickly converge to a steady regret value after only 200 steps. The UCB agent on the other hand very slowly decreases, lagging behind the two other agents, and continues in its downward trend even at step 1000. This suggests the non-Thompson agents could benefit from parameter tuning, whereas the Thompson agent works well right off the bat.
```python
# Experiment 1
arm_count = 10 # number of arms in bandit
epsilon = 0.1
ucb_c = 2
stationary=True
experiment(arm_count)
```
```python
```
```python
```
```python
```
```python
```
```python
```
```python
```
| 8ac723037bcbf2b6bd7e588d08605b48db9b2a57 | 41,382 | ipynb | Jupyter Notebook | bandits/03-mab-ucb.ipynb | martin-fabbri/colab-notebooks | 03658a7772fbe71612e584bbc767009f78246b6b | [
"Apache-2.0"
] | 8 | 2020-01-18T18:39:49.000Z | 2022-02-17T19:32:26.000Z | bandits/03-mab-ucb.ipynb | martin-fabbri/colab-notebooks | 03658a7772fbe71612e584bbc767009f78246b6b | [
"Apache-2.0"
] | null | null | null | bandits/03-mab-ucb.ipynb | martin-fabbri/colab-notebooks | 03658a7772fbe71612e584bbc767009f78246b6b | [
"Apache-2.0"
] | 6 | 2020-01-18T18:40:02.000Z | 2020-09-27T09:26:38.000Z | 79.275862 | 21,978 | 0.777729 | true | 3,436 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.863392 | 0.745445 | __label__eng_Latn | 0.98176 | 0.570251 |
```python
%matplotlib inline
import sys, platform, os
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import scipy as sci
import camb as camb
```
```python
from camb import model, initialpower
print('Using CAMB %s installed at %s'%(camb.__version__,os.path.dirname(camb.__file__)))
```
Using CAMB 1.3.5 installed at /opt/anaconda3/lib/python3.8/site-packages/camb
```python
import classy as classy
from classy import Class
print('Using CLASS %s installed at %s'%(classy.__version__,os.path.dirname(classy.__file__)))
```
Using CLASS v2.9.4 installed at /opt/anaconda3/lib/python3.8/site-packages
```python
from ipywidgets.widgets import *
import sympy
from sympy import cos, simplify, sin, sinh, tensorcontraction
from einsteinpy.symbolic import EinsteinTensor, MetricTensor, RicciScalar
sympy.init_printing()
```
```python
from IPython.display import Markdown, display
```
```python
def printmd(string, color='black', math=False, fmt='header2'):
if math==True:
mstring = string
elif math==False:
mstring="\\textrm{"+string+"}"
#colorstr = "<span style='color:{}'>{}</span>".format(color, string)
fmtstr = "${\\color{"+color+"}{"+mstring+"}}$"
if fmt=='header2':
fmtstr="## "+fmtstr
if fmt=='header1':
fmtstr="# "+fmtstr
display(Markdown(fmtstr))
return None
```
```python
from astropy.cosmology import WMAP5
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u
WMAP5.H(0)
```
$70.2 \; \mathrm{\frac{km}{Mpc\,s}}$
```python
WMAP_5 = dict()
```
```python
WMAP_5['ombh2'] = 0.02238 ## Omega_b * h**2
WMAP_5['omch2'] = 0.12011 ## Omega_c * h**2
WMAP_5['ln1010As'] = 3.0448 ## ln(10**10 * As), scalar amplitude
WMAP_5['ns'] = 0.96605 ## spectral index
WMAP_5['ommh2'] = 0.14314 ## Omega_m * h**2 , total matter
WMAP_5['H0'] = 70.2 ## H0 = 100h
WMAP_5['sigma8'] = 0.8120 ## amplitude of density fluctuations
WMAP_5['tau'] = 0.0543 ## Optical depth
WMAP_5['age_Gyr'] = 13.7971 ## Age of the Universe
```
```python
WMAP_5['h'] = WMAP_5['H0']/100
WMAP_5['Om'] = WMAP_5['ommh2']/WMAP_5['h']**2
WMAP_5['Ob'] = WMAP_5['ombh2']/WMAP_5['h']**2
WMAP_5['Oc'] = WMAP_5['omch2']/WMAP_5['h']**2
WMAP_5['As'] = np.exp(WMAP_5['ln1010As'])/np.power(10,10) ## As, scalar amplitude
```
```python
WMAP_5['h']
```
```python
WMAP_5['Or'] = 0.0000930479
```
```python
WMAP_5['Ol'] = 1-np.array([WMAP_5[oo] for oo in ['Oc','Ob','Om']]).sum() ## Ol = Omega_Lambda
WMAP_5['Ol']
```
```python
cosmo = FlatLambdaCDM(H0=70.2 * u.km / u.s / u.Mpc, Om0=0.3)
```
```python
def a_of_z(z):
a=1/(1+z)
return a
```
```python
def Omega_L(Omega_c, Omega_b, Omega_r):
"""
Function for Omega_Lambda, dark energy.
For a flat Universe:
Omega_Lambda = 1-Omega_c-Omega_b-Omega_r
"""
oL = 1 - Omega_c - Omega_b - Omega_r
return oL
```
```python
def cosmological_parameters(cosmo_pars=dict()):
H0 = cosmo_pars.get('H0', WMAP_5['H0']) # WMAP5 cosmological parameters as default
Oc = cosmo_pars.get('Oc', WMAP_5['Oc'])
Ob = cosmo_pars.get('Ob', WMAP_5['Ob'])
Or = cosmo_pars.get('Or', WMAP_5['Or'])
Om = Ob+Oc
OL = Omega_L(Oc, Ob, Or)
return H0, Oc, Ob, Or, Om, OL
```
```python
cosmological_parameters()
```
```python
def Hubble(z, cosmo_pars=dict()):
H0, Oc, Ob, Or, Om, OL = cosmological_parameters(cosmo_pars)
H = H0 * np.sqrt(Om*(1+z)**3 + Or*(1+z)**4 + OL)
return H
```
```python
Hubble(0.)
```
```python
z_arr = np.linspace(0.,10, 100)
fig, ax = plt.subplots(1, 1, sharey='row', sharex='col', figsize=(10,8)) #all plots in the same row, share the y-axis.
# once you specify an axis, it is in this instance where plots are performed
ax.semilogx(z_arr, Hubble(z_arr), '-', label='WMAP5', color='orange', lw=3)
ax.legend(fontsize=26)
ax.set_xlabel('redshift $z$', fontsize=26)
ax.set_ylabel(r'$H(z)$ in km/s/Mpc', fontsize=26);
```
```python
#Set up a new set of parameters for CAMB
pars = camb.CAMBparams()
#This function sets up CosmoMC-like settings, with one massive neutrino and helium set using BBN consistency
pars.set_cosmology(H0=WMAP_5['H0'], ombh2=WMAP_5['ombh2'], omch2=WMAP_5['omch2'])
```
class: <CAMBparams>
WantCls = True
WantTransfer = False
WantScalars = True
WantTensors = False
WantVectors = False
WantDerivedParameters = True
Want_cl_2D_array = True
Want_CMB = True
Want_CMB_lensing = True
DoLensing = True
NonLinear = NonLinear_none
Transfer: <TransferParams>
high_precision = False
accurate_massive_neutrinos = False
kmax = 0.9
k_per_logint = 0
PK_num_redshifts = 1
PK_redshifts = [0.0]
want_zstar = False
want_zdrag = False
min_l = 2
max_l = 2500
max_l_tensor = 600
max_eta_k = 5000.0
max_eta_k_tensor = 1200.0
ombh2 = 0.02238
omch2 = 0.12011
omk = 0.0
omnuh2 = 0.0006451383989381787
H0 = 70.2
TCMB = 2.7255
YHe = 0.24540281330622907
num_nu_massless = 2.030666666666667
num_nu_massive = 1
nu_mass_eigenstates = 1
share_delta_neff = False
nu_mass_degeneracies = [1.0153333333333332]
nu_mass_fractions = [1.0]
nu_mass_numbers = [1]
InitPower: <InitialPowerLaw>
tensor_parameterization = tensor_param_rpivot
ns = 0.96
nrun = 0.0
nrunrun = 0.0
nt = -0.0
ntrun = -0.0
r = 0.0
pivot_scalar = 0.05
pivot_tensor = 0.05
As = 2e-09
At = 1.0
Recomb: <Recfast>
min_a_evolve_Tm = 0.0011098779505118728
RECFAST_fudge = 1.125
RECFAST_fudge_He = 0.86
RECFAST_Heswitch = 6
RECFAST_Hswitch = True
AGauss1 = -0.14
AGauss2 = 0.079
zGauss1 = 7.28
zGauss2 = 6.73
wGauss1 = 0.18
wGauss2 = 0.33
Reion: <TanhReionization>
Reionization = True
use_optical_depth = False
redshift = 10.0
optical_depth = 0.0
delta_redshift = 0.5
fraction = -1.0
include_helium_fullreion = True
helium_redshift = 3.5
helium_delta_redshift = 0.4
helium_redshiftstart = 5.5
tau_solve_accuracy_boost = 1.0
timestep_boost = 1.0
max_redshift = 50.0
DarkEnergy: <DarkEnergyFluid>
w = -1.0
wa = 0.0
cs2 = 1.0
use_tabulated_w = False
NonLinearModel: <Halofit>
Min_kh_nonlinear = 0.005
halofit_version = mead2020
HMCode_A_baryon = 3.13
HMCode_eta_baryon = 0.603
HMCode_logT_AGN = 7.8
Accuracy: <AccuracyParams>
AccuracyBoost = 1.0
lSampleBoost = 1.0
lAccuracyBoost = 1.0
AccuratePolarization = True
AccurateBB = False
AccurateReionization = True
TimeStepBoost = 1.0
BackgroundTimeStepBoost = 1.0
IntTolBoost = 1.0
SourcekAccuracyBoost = 1.0
IntkAccuracyBoost = 1.0
TransferkBoost = 1.0
NonFlatIntAccuracyBoost = 1.0
BessIntBoost = 1.0
LensingBoost = 1.0
NonlinSourceBoost = 1.0
BesselBoost = 1.0
LimberBoost = 1.0
SourceLimberBoost = 1.0
KmaxBoost = 1.0
neutrino_q_boost = 1.0
SourceTerms: <SourceTermParams>
limber_windows = True
limber_phi_lmin = 100
counts_density = True
counts_redshift = True
counts_lensing = False
counts_velocity = True
counts_radial = False
counts_timedelay = True
counts_ISW = True
counts_potential = True
counts_evolve = False
line_phot_dipole = False
line_phot_quadrupole = False
line_basic = True
line_distortions = True
line_extra = False
line_reionization = False
use_21cm_mK = True
z_outputs = []
scalar_initial_condition = initial_adiabatic
InitialConditionVector = []
OutputNormalization = 1
Alens = 1.0
MassiveNuMethod = Nu_best
DoLateRadTruncation = True
Evolve_baryon_cs = False
Evolve_delta_xe = False
Evolve_delta_Ts = False
Do21cm = False
transfer_21cm_cl = False
Log_lvalues = False
use_cl_spline_template = True
SourceWindows = []
CustomSources: <CustomSources>
num_custom_sources = 0
c_source_func = None
custom_source_ell_scales = []
```python
pars.H0
```
```python
results = camb.get_results(pars)
results.calc_background(pars)
```
```python
results.get_derived_params()
```
{'age': 13.545508925002487,
'zstar': 1089.9127705851572,
'rstar': 144.39813840951794,
'thetastar': 1.0496437751173935,
'DAstar': 13.75687083871557,
'zdrag': 1059.9673043922098,
'rdrag': 147.0538901702859,
'kd': 0.14091424370378375,
'thetad': 0.16205980754637161,
'zeq': 3405.073594348986,
'keq': 0.010392623290320202,
'thetaeq': 0.819467940898796,
'thetarseq': 0.45281650365615894}
```python
WMAP_5.keys()
```
dict_keys(['ombh2', 'omch2', 'ln1010As', 'ns', 'ommh2', 'H0', 'sigma8', 'tau', 'age_Gyr', 'h', 'Om', 'Ob', 'Oc', 'As', 'Or', 'Ol'])
```python
z_arr = np.linspace(0.,10, 100)
dA_camb = results.angular_diameter_distance(z_arr);
rz_camb = results.comoving_radial_distance(z_arr);
```
```python
# Define your cosmology (what is not specified will be set to CLASS default parameters)
## CLASS is more flexible in the names of parameters passed, because the names are "interpreted"
params = {
'H0': WMAP_5['H0'],
'omega_b': WMAP_5['ombh2'],
'Omega_cdm': WMAP_5['Oc']}
# Create an instance of the CLASS wrapper
cosmo = Class()
# Set the parameters to the cosmological code
cosmo.set(params)
cosmo.compute()
```
```python
cosmo.angular_distance(0.2)
```
```python
dA_class = np.array([cosmo.angular_distance(zi) for zi in z_arr])
```
```python
cosmo.z_of_r([0.2])
```
(array([815.69127983]), array([0.00025764]))
```python
rz_class, dz_dr_class = cosmo.z_of_r(z_arr)
```
```python
fig, ax = plt.subplots(2, 1, sharex='col', figsize=(10,8)) #all plots in the same row, share the y-axis.
# once you specify an axis, it is in this instance where plots are performed
ax[0].plot(z_arr, rz_camb, '-', label='CAMB $r(z)$', color='orange', lw=3)
ax[0].plot(z_arr, rz_class, '-.', label='CLASS $r(z)$', color='purple', lw=3)
ax[0].legend(fontsize=20)
ax[0].set_xlabel('redshift $z$', fontsize=22)
ax[0].set_ylabel(r'$r(z)$ in Mpc', fontsize=22);
ax[1].plot(z_arr, dA_camb, '-', label='CAMB $d_A(z)$', color='teal', lw=3)
ax[1].plot(z_arr, dA_class, '-.', label='CLASS $d_A(z)$', color='firebrick', lw=3)
ax[1].legend(fontsize=20)
ax[1].set_xlabel('redshift $z$', fontsize=22)
ax[1].set_ylabel(r'$d_A(z)$ in Mpc', fontsize=22);
```
Notice that (at least in a flat Universe ) objects of a fixed physical size, appear larger at larger redshifts. At a very high redshift, the angle subtended by an object of constant comoving size, would occupy the entire sky!
```python
```
| 2e02de11f777325daf1885b787e3d9b63ccb6ae5 | 120,092 | ipynb | Jupyter Notebook | IMP**.ipynb | DhruvKumarPHY/solutions | 83bced0692c78399cea906e8ba4ebb2a17b57d31 | [
"MIT"
] | null | null | null | IMP**.ipynb | DhruvKumarPHY/solutions | 83bced0692c78399cea906e8ba4ebb2a17b57d31 | [
"MIT"
] | null | null | null | IMP**.ipynb | DhruvKumarPHY/solutions | 83bced0692c78399cea906e8ba4ebb2a17b57d31 | [
"MIT"
] | null | null | null | 145.742718 | 57,056 | 0.882315 | true | 3,770 | Qwen/Qwen-72B | 1. YES
2. YES | 0.771843 | 0.793106 | 0.612154 | __label__eng_Latn | 0.267281 | 0.260568 |
# Density Estimation
### Preliminaries
- Goal
- Simple maximum likelihood estimates for Gaussian and categorical distributions
- Materials
- Mandatory
- These lecture notes
- Optional
- Bishop pp. 67-70, 74-76, 93-94
### Why Density Estimation?
Density estimation relates to building a model $p(x|\theta)$ from observations $D=\{x_1,\dotsc,x_N\}$.
Why is this interesting? Some examples:
- **Outlier detection**. Suppose $D=\{x_n\}$ are benign mammogram images. Build $p(x | \theta)$ from $D$. Then low value for $p(x^\prime | \theta)$ indicates that $x^\prime$ is a risky mammogram.
- **Compression**. Code a new data item based on **entropy**, which is a functional of $p(x|\theta)$:
$$
H[p] = -\sum_x p(x | \theta)\log p(x |\theta)
$$
- **Classification**. Let $p(x | \theta_1)$ be a model of attributes $x$ for credit-card holders that paid on time and $p(x | \theta_2)$ for clients that defaulted on payments. Then, assign a potential new client $x^\prime$ to either class based on the relative probability of $p(x^\prime | \theta_1)$ vs. $p(x^\prime|\theta_2)$.
### Example Problem
<span class="exercise">
Consider a set of observations $D=\{x_1,…,x_N\}$ in the 2-dimensional plane (see Figure). All observations were generated by the same process. We now draw an extra observation $x_\bullet = (a,b)$ from the same data generating process. What is the probability that $x_\bullet$ lies within the shaded rectangle $S$?
</span>
```julia
using Distributions, PyPlot
N = 100
generative_dist = MvNormal([0,1.], [0.8 0.5; 0.5 1.0])
function plotObservations(obs::Matrix)
plot(obs[1,:], obs[2,:], "kx", zorder=3)
fill_between([0., 2.], 1., 2., color="k", alpha=0.4, zorder=2) # Shaded area
text(2.05, 1.8, "S", fontsize=12)
xlim([-3,3]); ylim([-2,4]); xlabel("a"); ylabel("b")
end
D = rand(generative_dist, N) # Generate observations from generative_dist
plotObservations(D)
x_dot = rand(generative_dist) # Generate x∙
plot(x_dot[1], x_dot[2], "ro");
```
### Log-Likelihood for a Multivariate Gaussian (MVG)
- Assume we are given a set of IID data points $D=\{x_1,\ldots,x_N\}$, where $x_n \in \mathbb{R}^D$. We want to build a model for these data.
- **Model specification**: Let's assume an MVG model $x_n=\mu+\epsilon_n$ with $\epsilon_n \sim \mathcal{N}(0,\Sigma)$, or equivalently,
$$\begin{align*}
p(x_n|\mu,\Sigma) &= \mathcal{N}(x_n|\mu,\Sigma)
= |2 \pi \Sigma|^{-1/2} \mathrm{exp} \left\{-\frac{1}{2}(x_n-\mu)^T
\Sigma^{-1} (x_n-\mu) \right\}
\end{align*}$$
- Since the data are IID, $p(D|\theta)$ factorizes as
$$
p(D|\theta) = p(x_1,\ldots,x_N|\theta) \stackrel{\text{IID}}{=} \prod_n p(x_n|\theta)
$$
- This choice of model yields the following log-likelihood (use (B-C.9) and (B-C.4)),
$$\begin{align*}
\log &p(D|\theta) = \log \prod_n p(x_n|\theta) = \sum_n \log \mathcal{N}(x_n|\mu,\Sigma) \tag{1}\\
&= N \cdot \log | 2\pi\Sigma |^{-1/2} - \frac{1}{2} \sum\nolimits_{n} (x_n-\mu)^T \Sigma^{-1} (x_n-\mu)
\end{align*}$$
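- As a quick numerical sanity check of Eq. (1), we can compare a direct implementation against the `logpdf` function from `Distributions.jl`. Below is a minimal sketch; the helper `gaussian_loglik` and the test values are ours, not part of the lecture material:

```julia
using Distributions, LinearAlgebra

# Direct implementation of Eq. (1); columns of D are the observations x_n
function gaussian_loglik(D::Matrix, μ::Vector, Σ::Matrix)
    N = size(D, 2)
    logdet_term = -N/2 * logdet(2π*Σ)    # N ⋅ log|2πΣ|^(-1/2)
    quad_term = -1/2 * sum((D[:,n]-μ)' * inv(Σ) * (D[:,n]-μ) for n=1:N)
    return logdet_term + quad_term
end

μ_test = [0., 1.]; Σ_test = [0.8 0.5; 0.5 1.0]
D_test = rand(MvNormal(μ_test, Σ_test), 10)
# Should match the sum of per-observation log-densities
gaussian_loglik(D_test, μ_test, Σ_test) ≈ sum(logpdf(MvNormal(μ_test, Σ_test), D_test))
```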
### Maximum Likelihood estimation of mean of MVG
- We want to maximize $\log p(D|\theta)$ wrt the parameters $\theta=\{\mu,\Sigma\}$. Let's take derivatives, first with respect to the mean $\mu$ (making use of (B-C.25) and (B-C.27)),
$$\begin{align*}
\nabla_\mu \log p(D|\theta) &= -\frac{1}{2}\sum_n \nabla_\mu \left[ (x_n-\mu)^T \Sigma^{-1} (x_n-\mu) \right] \\
&= -\frac{1}{2}\sum_n \nabla_\mu \mathrm{Tr} \left[ -2\mu^T\Sigma^{-1}x_n + \mu^T\Sigma^{-1}\mu \right] \\
&= -\frac{1}{2}\sum_n \left( -2\Sigma^{-1}x_n + 2\Sigma^{-1}\mu \right) \\
&= \Sigma^{-1}\,\sum_n \left( x_n-\mu \right)
\end{align*}$$
- Setting the derivative to zero yields the **sample mean**
$$\begin{equation*}
\boxed{
\hat \mu = \frac{1}{N} \sum_n x_n
}
\end{equation*}$$
### Maximum Likelihood estimation of variance of MVG
- Now we take the gradient of the log-likelihood wrt the **precision matrix** $\Sigma^{-1}$ (making use of B-C.28 and B-C.24)
$$\begin{align*}
\nabla_{\Sigma^{-1}} &\log p(D|\theta) \\
&= \nabla_{\Sigma^{-1}} \left[ \frac{N}{2} \log |2\pi\Sigma|^{-1} - \frac{1}{2} \sum_{n=1}^N (x_n-\mu)^T \Sigma^{-1} (x_n-\mu)\right] \\
&= \nabla_{\Sigma^{-1}} \left[ \frac{N}{2} \log |\Sigma^{-1}| - \frac{1}{2} \sum_{n=1}^N \mathrm{Tr} \left[ (x_n-\mu) (x_n-\mu)^T \Sigma^{-1}\right] \right]\\
&= \frac{N}{2}\Sigma -\frac{1}{2}\sum_n (x_n-\mu)(x_n-\mu)^T
\end{align*}$$
We obtain the optimum by setting the gradient to zero,
$$\begin{equation*}
\boxed{
\hat \Sigma = \frac{1}{N} \sum_n (x_n-\hat\mu)(x_n - \hat\mu)^T}
\end{equation*}$$
which is also known as the **sample variance**.
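- A minimal numerical check of these two estimators, using `fit_mle` from `Distributions.jl` as a reference (the variable names and synthetic data below are ours):

```julia
using Distributions

D_chk = rand(MvNormal([0., 1.], [0.8 0.5; 0.5 1.0]), 1000)   # synthetic data set
N_chk = size(D_chk, 2)

μ_hat = sum(D_chk, dims=2)[:,1] / N_chk    # sample mean
E = D_chk .- μ_hat                         # centered observations
Σ_hat = E * E' / N_chk                     # sample variance

m_ref = fit_mle(MvNormal, D_chk)           # reference ML fit
μ_hat ≈ mean(m_ref), Σ_hat ≈ cov(m_ref)    # both comparisons should hold
```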
### Sufficient Statistics
- Note that the ML estimates can also be written as
$$\begin{equation*}
\hat \mu = \frac{1}{N} \sum_n x_n \,, \quad \hat \Sigma = \frac{1}{N}\sum_n x_n x_n^T - \hat\mu \hat\mu^T
\end{equation*}$$
- In other words, the two statistics (a 'statistic' is a function of the data) $\sum_n x_n$ and $\sum_n x_n x_n^T$ are sufficient to estimate the parameters $\mu$ and $\Sigma$ from $N$ observations. In the literature, $\sum_n x_n$ and $\sum_n x_n x_n^T$ are called **sufficient statistics** for the Gaussian PDF.
- The actual parametrization of a PDF is always a re-parameterization of the sufficient statistics.
- Sufficient statistics are useful because they summarize all there is to learn about the data set in a minimal set of variables.
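- Because the sufficient statistics are plain sums, the ML estimates can be accumulated in a single pass over the data, without storing the observations. A minimal sketch (the helper below is ours):

```julia
# One-pass ML estimation: keep only Σₙ xₙ and Σₙ xₙxₙᵀ while streaming
function streaming_gaussian_mle(D::Matrix)   # columns of D are observations
    d, N = size(D)
    s = zeros(d); S = zeros(d, d)
    for x in eachcol(D)
        s += x          # accumulate Σₙ xₙ
        S += x * x'     # accumulate Σₙ xₙxₙᵀ
    end
    μ_hat = s / N
    Σ_hat = S / N - μ_hat * μ_hat'   # = 1/N Σₙ xₙxₙᵀ − μ̂μ̂ᵀ
    return μ_hat, Σ_hat
end
```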
### Solution to Example Problem
<span class="exercise">
We apply maximum likelihood estimation to fit a 2-dimensional Gaussian model ($m$) to data set $D$. Next, we evaluate $p(x_\bullet \in S | m)$ by (numerical) integration of the Gaussian pdf over $S$: $p(x_\bullet \in S | m) = \int_S p(x|m) \mathrm{d}x$.</span>
```julia
using HCubature, LinearAlgebra  # numerical integration (HCubature) and linear algebra utilities
# Maximum likelihood estimation of 2D Gaussian
μ = 1/N * sum(D,dims=2)[:,1]
D_min_μ = D - repeat(μ, 1, N)
Σ = Hermitian(1/N * D_min_μ*D_min_μ')
m = MvNormal(μ, Matrix(Σ));   # fitted model
# Contour plot of estimated Gaussian density
A = Matrix{Float64}(undef,100,100); B = Matrix{Float64}(undef,100,100)
density = Matrix{Float64}(undef,100,100)
for i=1:100
for j=1:100
        A[i,j] = a = (i-1)*6/100 - 3   # a-grid spans [-3,3), matching xlim above
        B[i,j] = b = (j-1)*6/100 - 2   # b-grid spans [-2,4), matching ylim above
density[i,j] = pdf(m, [a,b])
end
end
c = contour(A, B, density, 6, zorder=1)
PyPlot.set_cmap("cool")
clabel(c, inline=1, fontsize=10)
# Plot observations, x∙, and the countours of the estimated Gausian density
plotObservations(D)
plot(x_dot[1], x_dot[2], "ro")
# Numerical integration of p(x|m) over S:
(val,err) = hcubature((x)->pdf(m,x), [0., 1.], [2., 2.])
println("p(x⋅∈S|m) ≈ $(val)")
```
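As a cross-check on the numerical integration, the same probability can be approximated by Monte Carlo: draw samples from the fitted model `m` above and count the fraction that falls in $S$ (a quick sketch; the sample size is our choice):

```julia
# Monte-Carlo estimate of p(x∙ ∈ S | m), with S = [0,2] × [1,2]
samples = rand(m, 100_000)
in_a = (0 .<= samples[1,:]) .& (samples[1,:] .<= 2)
in_b = (1 .<= samples[2,:]) .& (samples[2,:] .<= 2)
println("Monte-Carlo estimate: p(x⋅∈S|m) ≈ $(sum(in_a .& in_b)/100_000)")
```

Both estimates should agree up to sampling noise.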
### Discrete Data: the 1-of-K Coding Scheme
- Consider a coin-tossing experiment with outcomes $x \in\{0,1\}$ (tail and head) and let $0\leq \mu \leq 1$ represent the probability of heads. This model can be written as a **Bernoulli distribution**:
$$
p(x|\mu) = \mu^{x}(1-\mu)^{1-x}
$$
- Note that the variable $x$ acts as a (binary) **selector** for the tail or head probabilities. Think of this as an 'if'-statement in programming (a code sketch after this list makes the selector idea concrete).
- **1-of-K coding scheme**. Now consider a $K$-sided coin (a _die_ (pl.: dice)). It is convenient to code the outcomes by $x=(x_1,\ldots,x_K)^T$ with **binary selection variables**
$$
x_k = \begin{cases} 1 & \text{if die landed on $k$th face}\\
0 & \text{otherwise} \end{cases}
$$
- E.g., for $K=6$, if the die lands on the 3rd face $\,\Rightarrow x=(0,0,1,0,0,0)^T$.
- Assume the probabilities $p(x_k=1) = \mu_k$ with $\sum_k \mu_k = 1$. The data generating distribution is then (note the similarity to the Bernoulli distribution)
$$
p(x|\mu) = \mu_1^{x_1} \mu_2^{x_2} \cdots \mu_K^{x_K}=\prod_k \mu_k^{x_k}
$$
- This generalized Bernoulli distribution is called the **categorical distribution** (or sometimes the 'multi-noulli' distribution).
<!---
- Note that $\sum_k x_k = 1$ and verify for yourself that $\mathrm{E}[x|\mu] = \mu$.
- In these notes, we use the superscript to indicate that we are working with a **binary selection variable** in a 1-of-$K$ scheme.
--->
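A toy sketch of the two ideas above: the Bernoulli 'selector' exponent versus an explicit 'if'-statement, and the 1-of-K coding with its categorical PMF (all helper names and numbers below are ours):

```julia
bernoulli_pmf(x, μ) = μ^x * (1-μ)^(1-x)      # exponent x ∈ {0,1} selects μ or 1-μ
bernoulli_if(x, μ)  = x == 1 ? μ : 1 - μ     # the equivalent 'if'-statement
bernoulli_pmf(0, 0.7) == bernoulli_if(0, 0.7)   # both give 1-μ = 0.3

onehot(k, K) = [j == k ? 1 : 0 for j in 1:K]    # 1-of-K coding of face k

# Categorical pmf p(x|μ) = ∏ₖ μₖ^{xₖ}; the one-hot x picks out a single factor
categorical_pmf(x, μ) = prod(μ .^ x)

μ_die = [0.1, 0.2, 0.3, 0.1, 0.2, 0.1]   # face probabilities, sum to 1
x3 = onehot(3, 6)                        # die landed on the 3rd face
categorical_pmf(x3, μ_die)               # returns μ_die[3] = 0.3
```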
### Categorical vs. Multinomial Distribution
- Observe a data set $D=\{x_1,\ldots,x_N\}$ of $N$ IID rolls of a $K$-sided die, with generating PDF
$$
p(D|\mu) = \prod_n \prod_k \mu_k^{x_{nk}} = \prod_k \mu_k^{\sum_n x_{nk}} = \prod_k \mu_k^{m_k}
$$
where $m_k= \sum_n x_{nk}$ is the total number of throws that landed on the $k$th face.
- This distribution depends on the observations **only** through the quantities $\{m_k\}$, with generally $K \ll N$.
- A related distribution is the distribution over $D_m=\{m_1,\ldots,m_K\}$, which is called the **multinomial distribution**,
$$
p(D_m|\mu) =\frac{N!}{m_1! m_2!\ldots m_K!} \,\prod_k \mu_k^{m_k}\,.
$$
- The categorical distribution $p(D|\mu) = p(\,x_1,\ldots,x_N\,|\,\mu\,)$ is a distribution over the **observations** $\{x_1,\ldots,x_N\}$, whereas the multinomial distribution $p(D_m|\mu) = p(\,m_1,\ldots,m_K\,|\,\mu\,)$ is a distribution over the **data frequencies** $\{m_1,\ldots,m_K\}$.
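- Numerically, the two distributions differ exactly by the combinatorial prefactor; a quick check with `Distributions.jl` (the probabilities and counts below are made-up illustrations):

```julia
using Distributions

μ_die = [0.1, 0.2, 0.3, 0.1, 0.2, 0.1]
m_counts = [2, 1, 3, 0, 1, 3]                 # data frequencies, N = 10
N_m = sum(m_counts)

p_categorical = prod(μ_die .^ m_counts)       # p(D|μ) = ∏ₖ μₖ^{mₖ}
p_multinomial = pdf(Multinomial(N_m, μ_die), m_counts)

# The ratio equals the multinomial coefficient N!/(m₁!⋯m_K!)
p_multinomial / p_categorical
```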
### Maximum Likelihood Estimation for the Multinomial
- Now let's find the ML estimate for $\mu$, based on $N$ throws of a $K$-sided die. Again we use the shorthand $m_k \triangleq \sum_n x_{nk}$.
- The log-likelihood for the multinomial distribution is given by
$$\begin{align*}
\mathrm{L}(\mu) &\triangleq \log p(D_m|\mu) \propto \log \prod_k \mu_k^{m_k} = \sum_k m_k \log \mu_k \tag{2}
\end{align*}$$
- When doing ML estimation, we must obey the constraint $\sum_k \mu_k = 1$, which can be enforced with a <span style="color:red">Lagrange multiplier</span>. The **augmented log-likelihood** with Lagrange multiplier is then
$$
\mathrm{L}^\prime(\mu) = \sum_k m_k \log \mu_k + \lambda \cdot (1 - \sum_k \mu_k )
$$
- Setting the derivative to zero yields the **sample proportion** for $\mu_k$
$$\begin{equation*}
\nabla_{\mu_k} \mathrm{L}^\prime = \frac{m_k }
{\hat\mu_k } - \lambda \overset{!}{=} 0 \; \Rightarrow \; \boxed{\hat\mu_k = \frac{m_k }{N}}
\end{equation*}$$
where we get $\lambda$ from the constraint
$$\begin{equation*}
\sum_k \hat \mu_k = \sum_k \frac{m_k}
{\lambda} = \frac{N}{\lambda} \overset{!}{=} 1
\end{equation*}$$
<!---
- Interesting special case: **Binomial** (=$N$ coin tosses):
$$p(x_n|\theta)= \theta^{[x_n=h]}(1-\theta)^{[x_n=t]}=\theta_h^{[x_n=h]} \theta_t^{[x_n=t]}
$$
yields $$\hat \theta = \frac{N_h}{N_h +N_t} $$
- Compare this answer to Laplace's rule for predicting the next coin toss (in probability theory lesson) $$p(\,x_\bullet=h\,|\,\theta\,)=\frac{N_h+1}{N_h +N_t+2}\,.$$ What is the source of the difference?
--->
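A quick simulation of the sample-proportion estimate (we draw die rolls with `Distributions.jl`; the true probabilities below are made up):

```julia
using Distributions

μ_true = [0.1, 0.2, 0.3, 0.1, 0.2, 0.1]
rolls = rand(Categorical(μ_true), 10_000)      # 10,000 rolls, outcomes 1..6

m_k = [count(r -> r == k, rolls) for k in 1:6] # data frequencies mₖ
μ_hat = m_k / length(rolls)                    # sample proportions mₖ/N

hcat(μ_true, μ_hat)                            # μ_hat should be close to μ_true
```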
### Recap ML for Density Estimation
Given $N$ IID observations $D=\{x_1,\dotsc,x_N\}$.
- For a **multivariate Gaussian** model $p(x_n|\theta) = \mathcal{N}(x_n|\mu,\Sigma)$, we obtain ML estimates
$$\begin{align}
\hat \mu &= \frac{1}{N} \sum_n x_n \tag{sample mean} \\
\hat \Sigma &= \frac{1}{N} \sum_n (x_n-\hat\mu)(x_n - \hat \mu)^T \tag{sample variance}
\end{align}$$
- For discrete outcomes modeled by a 1-of-K **categorical distribution** we find
$$\begin{align}
\hat\mu_k = \frac{1}{N} \sum_n x_{nk} \quad \left(= \frac{m_k}{N} \right) \tag{sample proportion}
\end{align}$$
- Note the similarity of the estimates for the mean in the discrete and continuous cases.
- We didn't use a covariance matrix for the discrete data. Why?
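The following sketch (our addition) verifies both ML estimates on simulated data; note the $1/N$ (not $1/(N-1)$) normalization of the covariance:
```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
X = rng.multivariate_normal(mu, Sigma, size=5_000)  # rows are observations x_n

mu_hat = X.mean(axis=0)                  # sample mean
D = X - mu_hat
Sigma_hat = D.T @ D / len(X)             # ML (1/N) sample covariance
print(mu_hat)
print(Sigma_hat)
```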
```julia
open("../../styles/aipstyle.html") do f
display("text/html", read(f, String))
end
```
```julia
```
| 27a4361e7ed08e2530451f2dc5cf8b943014f225 | 109,784 | ipynb | Jupyter Notebook | lessons/notebooks/05_Density-Estimation.ipynb | spsbrats/AIP-5SSB0 | c518274fdaed9fc55423ae4dd216be4218238d9d | [
"CC-BY-3.0"
] | 8 | 2018-06-14T20:45:55.000Z | 2021-10-05T09:46:25.000Z | lessons/notebooks/05_Density-Estimation.ipynb | bertdv/AIP-5SSB0 | c518274fdaed9fc55423ae4dd216be4218238d9d | [
"CC-BY-3.0"
] | 59 | 2015-08-18T11:30:12.000Z | 2019-07-03T15:17:33.000Z | lessons/notebooks/05_Density-Estimation.ipynb | bertdv/AIP-5SSB0 | c518274fdaed9fc55423ae4dd216be4218238d9d | [
"CC-BY-3.0"
] | 5 | 2015-12-30T07:39:57.000Z | 2019-03-09T10:42:21.000Z | 124.471655 | 64,630 | 0.849085 | true | 5,151 | Qwen/Qwen-72B | 1. YES
2. YES | 0.849971 | 0.743168 | 0.631671 | __label__eng_Latn | 0.718947 | 0.305915 |
# SEIR Modeling
This notebook implements Zhilan Feng's SEIR model, which includes quarantine and hospitalizations:
\begin{align}
\frac{dS}{dt}&=-\beta S (I+(1-\rho)H)\\
\frac{dE}{dt}&= \beta S (I+(1-\rho)H)-(\chi+\alpha)E\\
\frac{dQ}{dt}&=\chi E -\alpha Q\\
\frac{dI}{dt}&= \alpha E - (\phi+\delta)I\\
\frac{dH}{dt}&= \alpha Q +\phi I -\delta H\\
\frac{dR}{dt}&= \delta I +\delta H
\end{align}
Denoting the total epidemic size by $Y_e(t)=E(t)+Q(t)+I(t)+H(t)$, we can write:
\begin{align}
\nonumber \frac{dY_e}{dt}&= \frac{dE}{dt}+\frac{dQ}{dt}+\frac{dI}{dt}+\frac{dH}{dt}\\
\label{ye}&=\beta S (I+(1-\rho)H) -\delta(I+H)
\end{align}
```python
from scipy.integrate import odeint
from scipy.integrate import DOP853
import pandas as pd
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import warnings
import humanizer_portugues as hp
import seaborn as sns
from datetime import timedelta
warnings.filterwarnings("ignore")
%pylab inline
```
Populating the interactive namespace from numpy and matplotlib
```python
def seqihr(y,t,*params):
S,E,Q,I,H,R = y
chi,phi,beta,rho,delta,alpha = params
return[
-beta*S*(I+(1-rho)*H), #dS/dt
beta*S*(I+(1-rho)*H) - (chi+alpha)*E,#dE/dt
chi*E -alpha*Q,#dQ/dt
alpha*E - (phi+delta)*I,#dI/dt
alpha*Q + phi*I -delta*H,#dH/dt
delta*I + delta*H,#dR/dt
]
```
```python
chi=.05    # quarantine rate
phi=.01    # hospitalization rate
beta=.2    # transmission rate
rho=.6     # attenuation of transmission while hospitalized
delta=1/10 # hospital recovery rate
alpha=1/3  # incubation rate
```
```python
inits = [0.9,.1,0,0,0,0]
trange = arange(0,100,.1)
res = odeint(seqihr,inits,trange,args=(chi,phi,beta,rho,delta,alpha))
```
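As a consistency check (our addition), we can compare a numerical derivative of $Y_e=E+Q+I+H$ from the solution above with the analytic right-hand side derived earlier:
```python
# columns of res follow the state order of seqihr: S, E, Q, I, H, R
S, E, Q, I, H = res[:,0], res[:,1], res[:,2], res[:,3], res[:,4]
Ye = E + Q + I + H
lhs = np.gradient(Ye, trange)                    # numerical dYe/dt
rhs = beta*S*(I + (1-rho)*H) - delta*(I + H)     # beta*S*(I+(1-rho)H) - delta*(I+H)
print(np.abs(lhs - rhs).max())                   # small, limited by finite differences
```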
```python
@interact(chi=(0,.1, 0.01),phi=(0,.1,.005),beta=(0,.5,.05),rho=(0,1,.1),delta=(0,1,.05),alpha=(0,1,0.01))
def plota_simulação(chi=0.05,phi=.01,beta=.2,rho=.6,delta=.1,alpha=.33):
res = odeint(seqihr,inits,trange,args=(chi,phi,beta,rho,delta,alpha))
fig, ax = subplots(1,1, figsize=(15,10))
ax.plot(trange,res[:,1:-1])
    ax.set_ylabel('Population fraction')
    ax.set_xlabel('Time (days)')
    ax.grid()
    ax.legend(['Exposed', 'Quarantined', 'Infected', 'Hospitalized']);
```
interactive(children=(FloatSlider(value=0.05, description='chi', max=0.1, step=0.01), FloatSlider(value=0.01, …
## Adding Asymptomatics
Feng's model is not well suited to describing the COVID-19 epidemic, so let us modify it to include, among other things, asymptomatic infections. To simplify the notation we write $\lambda=\beta(I\color{red}{+A}+(1-\rho)H)$, already including the asymptomatics, $A$.
\begin{align}
\frac{dS}{dt}&=-\lambda[\color{red}{(1-\chi)} S] \\
\frac{dE}{dt}&= \lambda [\color{red}{(1-\chi)} S] -\alpha E\\
\frac{dI}{dt}&= \color{red}{(1-p)}\alpha E - (\phi+\delta)I\\
\color{red}{\frac{dA}{dt}}&= \color{red}{p\alpha E -\delta A}\\
\frac{dH}{dt}&= \phi I -\delta H\\
\frac{dR}{dt}&= \delta I +\delta H \color{red}{+ \delta A}
\end{align}
In this new model we changed a few more things, highlighted in red above. First, the quarantine now acts on the susceptibles, effectively removing them from the transmission chain; for simplicity we dropped the compartment $Q$. Also, to feed the asymptomatic class, we define $p$ as the fraction of asymptomatic infections in the population. With $S\approx 1$ and no quarantine this model has $R_0\approx \beta/\delta$ (cf. the `r0` helper defined below).
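As a quick sanity check (our addition), the expression above can be evaluated directly; it matches the `r0` helper used further below, which computes $R_0=S\,\beta(1-\chi)/\delta$:
```python
# same expression as the r0() helper defined later in this notebook (with S=1)
beta, delta = 0.5, 0.1   # illustrative values, matching the sliders' defaults
for chi in (0.0, 0.1, 0.7):
    print(f"chi={chi}: R0 = {beta*(1 - chi)/delta:.2f}")
```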
### Switching quarantine on and off
To let the model switch quarantine on at a date set by a parameter $q$, we need to express it as a differentiable function that goes from zero to one quickly. This function is then multiplied by the quarantine parameter $\chi$:
\begin{equation}
\chi^+(t)=\chi\frac{1+\tanh(t-q)}{2}
\end{equation}
To switch the quarantine off we can use the same function with the sign of the hyperbolic tangent flipped:
\begin{equation}
\chi^-(t)=\chi\frac{1-\tanh(t-q)}{2}
\end{equation}
Below is a graphical illustration of $\chi^+(t)$ and $\chi^-(t)$.
```python
t = arange(-3,10,.01)
d=1
q = 0.3 # desired quarantine level
addq = lambda t,d: (1+tanh(t-d))/2
remq = lambda t,d: (1-tanh(t-d))/2
f,(ax1,ax2)=subplots(1,2, figsize=(15,8))
ax1.plot(t,q*addq(t,d), label=f"quarantine switched on at day {d}")
ax1.plot(t,q*remq(t,d), label=f"quarantine switched off at day {d}")
ax2.plot(t,q*addq(t,d)*remq(t,d+5))
ax1.vlines(d,0,q)
ax2.vlines(d,0,q,'g')
ax2.vlines(d+5,0,q, 'r')
ax1.grid()
ax2.grid()
ax1.legend();
```
```python
f,ax=subplots(1,1, figsize=(15,8))
ax.plot(t,q*addq(t,d)*remq(t,d+5))
ax.vlines(d,0,q,'g')
ax.vlines(d+5,0,q, 'r')
ax.set_ylabel('Fraction Quarantined')
ax.set_xlabel('Time(days)')
ax.grid()
savefig('chi_of_t.png')
```
```python
def seqiahr(y,t,*params):
S,E,I,A,H,R,C,M = y
chi,phi,beta,rho,delta,alpha,mu,p,q,r = params
lamb = beta*(I+A)
chi *= ((1+np.tanh(t-q))/2) * ((1 - np.tanh(t - (q+r))) / 2 )
return[
-lamb*((1-chi)*S), #dS/dt
lamb*((1-chi)*S) - alpha*E,#dE/dt
(1-p)*alpha*E - delta*I,#dI/dt
p*alpha*E - delta*A,
phi*delta*I -(rho+mu)*H,#dH/dt
(1-phi)*delta*I + rho*H+delta*A ,#dR/dt
        phi*delta*I, #(1-p)*alpha*E + p*alpha*E  # cumulative cases
        mu*H  # cumulative deaths
]
```
```python
trange = arange(0, 365, 1)
inits = [0.99, 0, 1.0277e-6, 0.0, 0, 0, 0, 0]
N = 97.3e6
fat = 0.035 # case fatality rate
# sumario = open('cenarios.csv','w')
# sumario.write('R0,Quarentena,tamanho_total,hosp_total,hosp_pico,morte_total\n')
def r0(chi, phi, beta, rho, delta, p, S=1):
"R0 for seqiahr2"
return -(S*beta*chi - S*beta)/delta
@interact(χ=(0, 1, 0.05),
φ=(0, .5, .01),
β=(0, 1, .02),
ρ=(0, 1, .1),
δ=(0, 1, .05),
α=(0, 10, 0.01),
mu=(0,1,.01),
p=(0, 1, .05),
q=(0, 120, 10),
r=(0,100,10))
def plota_simulação(χ=0.7, φ=.01, β=.5, ρ=0.05, δ=.1, α=.33, mu=0.03, p=.75, q=30, r=10):
res = odeint(seqiahr, inits, trange, args=(χ, φ, β, ρ, δ, α, mu, p, q,r))
rzero = r0(χ, φ, β, ρ, δ, p)
# et = 1 / rzero
# idx = np.argwhere(np.abs(res[:, 0] - et) == min(np.abs(res[:, 0] -
# et)))[0, 0]
fig, ax = subplots(1, 1, figsize=(15, 10))
ax.plot(trange, res[:, 1])#E
# ax.plot(trange, res[:, 2:4])
ax.plot(trange, res[:, 4]) #H
ax.plot(trange, res[:,-1]) #M
ax.set_ylim([0, 0.02])
Ye = res[:, 1:4].sum(axis=1)
    Imax = (1 - res[-1, 0]) * N  # N - S(inf)
    Hosp_p = res[:, 4].max() * N  # peak hospitalizations
    Hosp_tot = (res[-1, -2]) * N
    casos = (res[:, 2] + res[:, 4])  # notified cases, H+I
    casosT = Hosp_tot + ((1 - p) * α * res[:, 1]).sum() * N  # N*sum(p*α*E)
M = res[-1,-1]*N
# secax = ax.twinx()
# secax.plot(trange, Ye,'k:')
# secax.set_ylabel('Total prevalence fraction')
ax.text(0,
0.005,
f"$R_0~{rzero:.1f}$\n$p={p}$\n$\\rho={ρ}$\n$\\chi={χ*100}$%",
fontsize=16)
ax.text(
110,
.01,
f"Tamanho total: {hp.intword(Imax)} infectados\nHosp. pico: {hp.intword(Hosp_p)}\nHosp. totais: {hp.intword(Hosp_tot)}\nMortes: {hp.intword(M)}",
fontsize=16)
# ax.vlines(trange[idx], 0, 0.02,'b') #res[:,1:-1].max())
ax.vlines(q, 0, .02)
ax.vlines(q+r, 0, .02,'r')
    ax.set_ylabel('Population fraction')
    ax.set_xlabel('Time (days)')
ax.grid()
# sumario.write(f'{rzero},{χ*100},{Imax},{Hosp_tot},{Hosp_p},{M}\n')
    # ax.legend(['Exposed', 'Infected', 'Asymptomatic', 'Hospitalized', 'Deaths'])
    ax.legend(['Exposed', 'Hospitalized', 'Deaths'])
# plt.savefig(f"", dpi=300)
```
interactive(children=(FloatSlider(value=0.7, description='χ', max=1.0, step=0.05), FloatSlider(value=0.01, des…
```python
# sumario.close()
# suma = pd.read_csv('cenarios.csv')
# suma.drop_duplicates(inplace=True)
# suma
```
```python
def save_data():
    fig,ax = subplots(1,1, figsize=(15,10), constrained_layout=True)
    # note: this cell originally called `seqiahr2`, which is not defined in this
    # notebook; we call `seqiahr` instead, which also needs mu and r
    # (the values below are assumptions consistent with the cells above)
    φ=.01; ρ=.6; δ=.1; α=10; mu=0.03; p=.75; r=1000
    dff = pd.DataFrame(data={'No control':range(365),'With control':range(365)})
    for q,R0,c in [(0,3.6,'No control'),(10,1.7,'With control')]:
        res = odeint(seqiahr,inits,trange,args=(q/100,φ,R0/10,ρ,δ,α,mu,p,q,r))
        df = pd.DataFrame(data=res*N, columns=['S','E','I','A','H','R','C','M'],index=trange)
        ax = df[['H']].plot(ax=ax, grid=True)
        dff[c] = df.H
    plt.legend(['No control','With control'])
    plt.savefig('export/achatando a curva.png', dpi=300)
    dff.to_csv('export/achatando a curva.csv')
save_data()
```
## Loading the Brazil data
```python
# Fetching the case data from Brasil.io
cases = pd.read_csv('https://brasil.io/dataset/covid19/caso?format=csv')
cases.date = pd.to_datetime(cases.date)
```
```python
df_states = cases[cases.place_type!='state'].groupby(['date','state']).sum()
df_states.reset_index(inplace=True)
df_states.set_index('date', inplace=True)
```
```python
# df_states.set_index('date', inplace=True)
fig,ax = subplots(1,1,figsize=(15,8))
for uf in ['SP','RJ','MG','CE','RS']:
df_states[df_states.state==uf].confirmed.plot(style='-o', label=uf)
ax.legend()
plt.savefig('Casos_confirmados_estados.png',dpi=200)
```
```python
# Loading the Health Ministry data
casos = pd.read_csv('COVID19_20200411.csv',sep=';')
casos.data = pd.to_datetime(casos.data, dayfirst=True)
casos_estados = casos.groupby(['data','estado']).sum()
casos_estados
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th></th>
<th>casosNovos</th>
<th>casosAcumulados</th>
<th>obitosNovos</th>
<th>obitosAcumulados</th>
</tr>
<tr>
<th>data</th>
<th>estado</th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th rowspan="5" valign="top">2020-01-30</th>
<th>AC</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>AL</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>AM</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>AP</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>BA</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>...</th>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th rowspan="5" valign="top">2020-04-11</th>
<th>RS</th>
<td>4</td>
<td>640</td>
<td>1</td>
<td>15</td>
</tr>
<tr>
<th>SC</th>
<td>39</td>
<td>732</td>
<td>3</td>
<td>21</td>
</tr>
<tr>
<th>SE</th>
<td>0</td>
<td>42</td>
<td>0</td>
<td>4</td>
</tr>
<tr>
<th>SP</th>
<td>203</td>
<td>8419</td>
<td>20</td>
<td>560</td>
</tr>
<tr>
<th>TO</th>
<td>0</td>
<td>23</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>1971 rows × 4 columns</p>
</div>
```python
casos.columns
```
Index(['regiao', 'estado', 'data', 'casosNovos', 'casosAcumulados',
'obitosNovos', 'obitosAcumulados'],
dtype='object')
```python
fig,ax = subplots(1,1,figsize=(15,8))
casos_estados.reset_index(inplace=True)
casos_estados.set_index('data', inplace=True)
for uf in ['SP','RJ','MG','CE','RS', 'BA']:
casos_estados[casos_estados.estado==uf].casosAcumulados.plot(style='-o', label=uf)
ax.legend()
plt.savefig('Casos_Acumulados_estados.png',dpi=200);
```
```python
trange = arange(0,1095,1)
χ=0.0;φ=.1;β=.5;ρ=.6;δ=.1;α=.53; p=.75; q=-2; mu=0.02; r=1000
inits = [0.99,0,1.0277e-8, 0.0,0,0,0,0]
res = odeint(seqiahr,inits,trange,args=(χ,φ,β,ρ,δ,α,mu,p,q,r))
# With control
χ=0.1;β=.17
res_c = odeint(seqiahr,inits,trange,args=(χ,φ,β,ρ,δ,α,mu,p,q, r))
plt.plot(trange,res[:,-1],label='no control');
plt.plot(trange,res_c[:,-1],label='with control');
plt.legend();
```
```python
df_brasil = casos_estados.groupby('data').sum()
# df_brasil['casos_simulados']=res[:dias,-1]*N
# dfcp = df_brasil[['confirmed']]
# dfcp.to_csv('export/dados_brasil.csv')
df_brasil
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>casosNovos</th>
<th>casosAcumulados</th>
<th>obitosNovos</th>
<th>obitosAcumulados</th>
</tr>
<tr>
<th>data</th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>2020-01-30</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>2020-01-31</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>2020-02-01</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>2020-02-02</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>2020-02-03</th>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>2020-04-07</th>
<td>1661</td>
<td>13717</td>
<td>114</td>
<td>667</td>
</tr>
<tr>
<th>2020-04-08</th>
<td>2210</td>
<td>15927</td>
<td>133</td>
<td>800</td>
</tr>
<tr>
<th>2020-04-09</th>
<td>1930</td>
<td>17857</td>
<td>141</td>
<td>941</td>
</tr>
<tr>
<th>2020-04-10</th>
<td>1781</td>
<td>19638</td>
<td>115</td>
<td>1056</td>
</tr>
<tr>
<th>2020-04-11</th>
<td>1089</td>
<td>20727</td>
<td>68</td>
<td>1124</td>
</tr>
</tbody>
</table>
<p>73 rows × 4 columns</p>
</div>
```python
df_brasil.casosAcumulados.plot();
```
```python
dias=400
offset = 35 # how many days before the first notified case the simulation should start
drange = pd.date_range(df_brasil[df_brasil.casosAcumulados>0].index.min()-timedelta(offset),periods=dias,freq='D')
# df_states.reset_index(inplace=True)
# df_brasil = df_states.groupby('date').sum()
fig,ax = subplots(1,1,figsize=(15,8))
ax.plot(drange,res[:dias,-1]*N,'-v', label='No control')
# ax.plot(drange,res[:dias,-3]*N,'-v', label='Hosp')
# ax.vlines('2020-05-5',0,1e6)
ax.plot(drange,res_c[:dias,-1]*N,'-v', label='With control (simulated)')
# ax.plot(drange,res[:dias,2]*N,'-^', label='Prevalence')
df_brasil[df_brasil.casosAcumulados>0].casosAcumulados.plot(ax=ax, style='-o',
                                                     label='Official data',
                                                     grid=True,
                                                     logy=True)
# without control
ax.text('2020-05-15',0.6e5, f'Total cases: {res[dias,-1]*N:.0f}\nDeaths: {res[dias,-1]*N*fat:.0f}',
        fontsize=16)
# with 10% quarantine and R0=1.7
ax.text('2020-11-15',1e3, f'Total cases: {res_c[dias,-1]*N:.0f}\nDeaths: {res_c[dias,-1]*N*fat:.0f}',
       fontsize=16)
ax.legend();
# plt.savefig('export/Casos_vs_Projeções_log.png',dpi=200)
```
```python
df_sim = pd.DataFrame(data={'sem controle':res[:dias,-1]*N,'com controle':res_c[:dias,-1]*N}, index=drange)
df_sim.to_csv('simulação_brasil_com_e_sem_controle.csv')
```
## Comparing Brazil's series with other countries
```python
confirmed = pd.read_csv('https://github.com/CSSEGISandData/COVID-19/raw/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv')
confirmed = confirmed.groupby('Country/Region').sum()
```
```python
confirmed
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Lat</th>
<th>Long</th>
<th>1/22/20</th>
<th>1/23/20</th>
<th>1/24/20</th>
<th>1/25/20</th>
<th>1/26/20</th>
<th>1/27/20</th>
<th>1/28/20</th>
<th>1/29/20</th>
<th>...</th>
<th>3/25/20</th>
<th>3/26/20</th>
<th>3/27/20</th>
<th>3/28/20</th>
<th>3/29/20</th>
<th>3/30/20</th>
<th>3/31/20</th>
<th>4/1/20</th>
<th>4/2/20</th>
<th>4/3/20</th>
</tr>
<tr>
<th>Country/Region</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Afghanistan</td>
<td>33.0000</td>
<td>65.0000</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>84</td>
<td>94</td>
<td>110</td>
<td>110</td>
<td>120</td>
<td>170</td>
<td>174</td>
<td>237</td>
<td>273</td>
<td>281</td>
</tr>
<tr>
<td>Albania</td>
<td>41.1533</td>
<td>20.1683</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>146</td>
<td>174</td>
<td>186</td>
<td>197</td>
<td>212</td>
<td>223</td>
<td>243</td>
<td>259</td>
<td>277</td>
<td>304</td>
</tr>
<tr>
<td>Algeria</td>
<td>28.0339</td>
<td>1.6596</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>302</td>
<td>367</td>
<td>409</td>
<td>454</td>
<td>511</td>
<td>584</td>
<td>716</td>
<td>847</td>
<td>986</td>
<td>1171</td>
</tr>
<tr>
<td>Andorra</td>
<td>42.5063</td>
<td>1.5218</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>188</td>
<td>224</td>
<td>267</td>
<td>308</td>
<td>334</td>
<td>370</td>
<td>376</td>
<td>390</td>
<td>428</td>
<td>439</td>
</tr>
<tr>
<td>Angola</td>
<td>-11.2027</td>
<td>17.8739</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>3</td>
<td>4</td>
<td>4</td>
<td>5</td>
<td>7</td>
<td>7</td>
<td>7</td>
<td>8</td>
<td>8</td>
<td>8</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>Venezuela</td>
<td>6.4238</td>
<td>-66.5897</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>91</td>
<td>107</td>
<td>107</td>
<td>119</td>
<td>119</td>
<td>135</td>
<td>135</td>
<td>143</td>
<td>146</td>
<td>153</td>
</tr>
<tr>
<td>Vietnam</td>
<td>16.0000</td>
<td>108.0000</td>
<td>0</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>...</td>
<td>141</td>
<td>153</td>
<td>163</td>
<td>174</td>
<td>188</td>
<td>203</td>
<td>212</td>
<td>218</td>
<td>233</td>
<td>237</td>
</tr>
<tr>
<td>West Bank and Gaza</td>
<td>31.9522</td>
<td>35.2332</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>59</td>
<td>84</td>
<td>91</td>
<td>98</td>
<td>109</td>
<td>116</td>
<td>119</td>
<td>134</td>
<td>161</td>
<td>194</td>
</tr>
<tr>
<td>Zambia</td>
<td>-15.4167</td>
<td>28.2833</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>12</td>
<td>16</td>
<td>22</td>
<td>28</td>
<td>29</td>
<td>35</td>
<td>35</td>
<td>36</td>
<td>39</td>
<td>39</td>
</tr>
<tr>
<td>Zimbabwe</td>
<td>-20.0000</td>
<td>30.0000</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>3</td>
<td>3</td>
<td>5</td>
<td>7</td>
<td>7</td>
<td>7</td>
<td>8</td>
<td>8</td>
<td>9</td>
<td>9</td>
</tr>
</tbody>
</table>
<p>181 rows × 75 columns</p>
</div>
```python
conf_US = pd.read_csv('https://github.com/CSSEGISandData/COVID-19/raw/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_US.csv')
```
```python
conf_US
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>UID</th>
<th>iso2</th>
<th>iso3</th>
<th>code3</th>
<th>FIPS</th>
<th>Admin2</th>
<th>Province_State</th>
<th>Country_Region</th>
<th>Lat</th>
<th>Long_</th>
<th>...</th>
<th>3/25/20</th>
<th>3/26/20</th>
<th>3/27/20</th>
<th>3/28/20</th>
<th>3/29/20</th>
<th>3/30/20</th>
<th>3/31/20</th>
<th>4/1/20</th>
<th>4/2/20</th>
<th>4/3/20</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>16</td>
<td>AS</td>
<td>ASM</td>
<td>16</td>
<td>60.0</td>
<td>NaN</td>
<td>American Samoa</td>
<td>US</td>
<td>-14.2710</td>
<td>-170.1320</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>316</td>
<td>GU</td>
<td>GUM</td>
<td>316</td>
<td>66.0</td>
<td>NaN</td>
<td>Guam</td>
<td>US</td>
<td>13.4443</td>
<td>144.7937</td>
<td>...</td>
<td>37</td>
<td>45</td>
<td>51</td>
<td>55</td>
<td>56</td>
<td>58</td>
<td>69</td>
<td>77</td>
<td>82</td>
<td>84</td>
</tr>
<tr>
<td>2</td>
<td>580</td>
<td>MP</td>
<td>MNP</td>
<td>580</td>
<td>69.0</td>
<td>NaN</td>
<td>Northern Mariana Islands</td>
<td>US</td>
<td>15.0979</td>
<td>145.6739</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>2</td>
<td>6</td>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>3</td>
<td>630</td>
<td>PR</td>
<td>PRI</td>
<td>630</td>
<td>72.0</td>
<td>NaN</td>
<td>Puerto Rico</td>
<td>US</td>
<td>18.2208</td>
<td>-66.5901</td>
<td>...</td>
<td>51</td>
<td>64</td>
<td>79</td>
<td>100</td>
<td>127</td>
<td>174</td>
<td>239</td>
<td>286</td>
<td>316</td>
<td>316</td>
</tr>
<tr>
<td>4</td>
<td>850</td>
<td>VI</td>
<td>VIR</td>
<td>850</td>
<td>78.0</td>
<td>NaN</td>
<td>Virgin Islands</td>
<td>US</td>
<td>18.3358</td>
<td>-64.8963</td>
<td>...</td>
<td>17</td>
<td>17</td>
<td>19</td>
<td>22</td>
<td>0</td>
<td>0</td>
<td>30</td>
<td>30</td>
<td>30</td>
<td>37</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>3248</td>
<td>84090053</td>
<td>US</td>
<td>USA</td>
<td>840</td>
<td>90053.0</td>
<td>Unassigned</td>
<td>Washington</td>
<td>US</td>
<td>0.0000</td>
<td>0.0000</td>
<td>...</td>
<td>51</td>
<td>69</td>
<td>67</td>
<td>0</td>
<td>125</td>
<td>274</td>
<td>274</td>
<td>303</td>
<td>344</td>
<td>501</td>
</tr>
<tr>
<td>3249</td>
<td>84090054</td>
<td>US</td>
<td>USA</td>
<td>840</td>
<td>90054.0</td>
<td>Unassigned</td>
<td>West Virginia</td>
<td>US</td>
<td>0.0000</td>
<td>0.0000</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>3250</td>
<td>84090055</td>
<td>US</td>
<td>USA</td>
<td>840</td>
<td>90055.0</td>
<td>Unassigned</td>
<td>Wisconsin</td>
<td>US</td>
<td>0.0000</td>
<td>0.0000</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>61</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>3251</td>
<td>84090056</td>
<td>US</td>
<td>USA</td>
<td>840</td>
<td>90056.0</td>
<td>Unassigned</td>
<td>Wyoming</td>
<td>US</td>
<td>0.0000</td>
<td>0.0000</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>3252</td>
<td>84099999</td>
<td>US</td>
<td>USA</td>
<td>840</td>
<td>99999.0</td>
<td>NaN</td>
<td>Grand Princess</td>
<td>US</td>
<td>0.0000</td>
<td>0.0000</td>
<td>...</td>
<td>28</td>
<td>28</td>
<td>28</td>
<td>103</td>
<td>103</td>
<td>103</td>
<td>103</td>
<td>103</td>
<td>103</td>
<td>103</td>
</tr>
</tbody>
</table>
<p>3253 rows × 84 columns</p>
</div>
```python
conf_US.groupby('Country_Region').sum()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>UID</th>
<th>code3</th>
<th>FIPS</th>
<th>Lat</th>
<th>Long_</th>
<th>1/22/20</th>
<th>1/23/20</th>
<th>1/24/20</th>
<th>1/25/20</th>
<th>1/26/20</th>
<th>...</th>
<th>3/25/20</th>
<th>3/26/20</th>
<th>3/27/20</th>
<th>3/28/20</th>
<th>3/29/20</th>
<th>3/30/20</th>
<th>3/31/20</th>
<th>4/1/20</th>
<th>4/2/20</th>
<th>4/3/20</th>
</tr>
<tr>
<th>Country_Region</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>US</td>
<td>272936469664</td>
<td>2730712</td>
<td>104327612.0</td>
<td>120958.868944</td>
<td>-290083.749251</td>
<td>1</td>
<td>1</td>
<td>2</td>
<td>2</td>
<td>5</td>
<td>...</td>
<td>65778</td>
<td>83836</td>
<td>101657</td>
<td>121478</td>
<td>140886</td>
<td>161807</td>
<td>188172</td>
<td>213362</td>
<td>243453</td>
<td>275582</td>
</tr>
</tbody>
</table>
<p>1 rows × 78 columns</p>
</div>
```python
serie_US = conf_US.groupby('Country_Region').sum().T
serie_US = serie_US.iloc[5:]
serie_US.index = pd.to_datetime(serie_US.index)
serie_US.index.name = 'data'
# serie_US['acumulados'] = np.cumsum(serie_US.US)
serie_US
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th>Country_Region</th>
<th>US</th>
</tr>
<tr>
<th>data</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>2020-01-22</td>
<td>1.0</td>
</tr>
<tr>
<td>2020-01-23</td>
<td>1.0</td>
</tr>
<tr>
<td>2020-01-24</td>
<td>2.0</td>
</tr>
<tr>
<td>2020-01-25</td>
<td>2.0</td>
</tr>
<tr>
<td>2020-01-26</td>
<td>5.0</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>2020-03-30</td>
<td>161807.0</td>
</tr>
<tr>
<td>2020-03-31</td>
<td>188172.0</td>
</tr>
<tr>
<td>2020-04-01</td>
<td>213362.0</td>
</tr>
<tr>
<td>2020-04-02</td>
<td>243453.0</td>
</tr>
<tr>
<td>2020-04-03</td>
<td>275582.0</td>
</tr>
</tbody>
</table>
<p>73 rows × 1 columns</p>
</div>
```python
# outros = pd.DataFrame(data=)
suecia = confirmed.loc['Sweden'][2:]
alemanha = confirmed.loc['Germany'][2:]
espanha = confirmed.loc['Spain'][2:]
italia = confirmed.loc['Italy'][2:]
outros = pd.concat([suecia,alemanha,espanha,italia], axis=1)
outros.index = pd.to_datetime(outros.index)
# outros['Sweden_acc'] = np.cumsum(outros.Sweden)
# outros['Germany_acc'] = np.cumsum(outros.Germany)
# outros['Spain_acc'] = np.cumsum(outros.Spain)
# outros['Italy_acc'] = np.cumsum(outros.Italy)
outros
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Sweden</th>
<th>Germany</th>
<th>Spain</th>
<th>Italy</th>
</tr>
</thead>
<tbody>
<tr>
<td>2020-01-22</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<td>2020-01-23</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<td>2020-01-24</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<td>2020-01-25</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<td>2020-01-26</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>2020-03-30</td>
<td>4028.0</td>
<td>66885.0</td>
<td>87956.0</td>
<td>101739.0</td>
</tr>
<tr>
<td>2020-03-31</td>
<td>4435.0</td>
<td>71808.0</td>
<td>95923.0</td>
<td>105792.0</td>
</tr>
<tr>
<td>2020-04-01</td>
<td>4947.0</td>
<td>77872.0</td>
<td>104118.0</td>
<td>110574.0</td>
</tr>
<tr>
<td>2020-04-02</td>
<td>5568.0</td>
<td>84794.0</td>
<td>112065.0</td>
<td>115242.0</td>
</tr>
<tr>
<td>2020-04-03</td>
<td>6131.0</td>
<td>91159.0</td>
<td>119199.0</td>
<td>119827.0</td>
</tr>
</tbody>
</table>
<p>73 rows × 4 columns</p>
</div>
```python
fig,ax = subplots(1,1,figsize=(15,8))
df_brasil[df_brasil.casosAcumulados>0].casosAcumulados.plot(ax=ax, style='-o',
                                                     label='Brazil',
                                                     grid=True,
                                                     logy=True)
serie_US.US.plot(ax=ax, style='-v', grid=True,
                 label='USA',);
outros.Sweden.plot(ax=ax, style='-v', grid=True,
                 label='Sweden')
outros.Germany.plot(ax=ax, style='-v', grid=True,
                 label='Germany')
outros.Spain.plot(ax=ax, style='-v', grid=True,
                 label='Spain')
outros.Italy.plot(ax=ax, style='-v', grid=True,
                 label='Italy')
ax.legend();
```
```python
fig,ax = subplots(1,1,figsize=(15,8))
alinhados = pd.concat([suecia[suecia>100].reset_index(),
alemanha[alemanha>100].reset_index(),
espanha[espanha>100].reset_index(),
italia[italia>100].reset_index()],
axis=1)
alinhados['EUA'] = serie_US[serie_US.US>100].reset_index().US
alinhados['Brasil'] = df_brasil[df_brasil.casosAcumulados>100].reset_index().casosAcumulados
alinhados.plot(ax=ax,logy=True, grid=True);
plt.savefig('export/Brasil_vs_outros.png', dpi=300)
alinhados.to_csv('export/Brasil_vs_outros.csv')
```
```python
serie_US[serie_US.acumulados>100].reset_index().acumulados
```
0 109.0
1 120.0
2 131.0
3 142.0
4 154.0
5 166.0
6 179.0
7 192.0
8 205.0
9 218.0
10 231.0
11 244.0
12 257.0
13 270.0
14 285.0
15 300.0
16 315.0
17 330.0
18 345.0
19 360.0
20 376.0
21 392.0
22 416.0
23 446.0
24 499.0
25 572.0
26 676.0
27 848.0
28 1065.0
29 1401.0
30 1851.0
31 2365.0
32 3073.0
33 4178.0
34 5735.0
35 7882.0
36 10739.0
37 13657.0
38 17964.0
39 24060.0
40 32933.0
41 46957.0
42 66187.0
43 91814.0
44 125435.0
45 169098.0
46 222834.0
47 288612.0
48 372448.0
49 474105.0
50 595583.0
51 736469.0
52 898276.0
53 1086448.0
54 1299810.0
55 1543263.0
Name: acumulados, dtype: float64
```python
1-timedelta(32)
```
```python
pd.date_range(df_brasil[df_brasil.casosAcumulados>0].index.min()-timedelta(offset),periods=dias,freq='D')
```
DatetimeIndex(['2020-01-22', '2020-01-23', '2020-01-24', '2020-01-25',
'2020-01-26', '2020-01-27', '2020-01-28', '2020-01-29',
'2020-01-30', '2020-01-31',
...
'2021-02-15', '2021-02-16', '2021-02-17', '2021-02-18',
'2021-02-19', '2021-02-20', '2021-02-21', '2021-02-22',
'2021-02-23', '2021-02-24'],
dtype='datetime64[ns]', length=400, freq='D')
```python
```
| 48a66629d29f134bf7b9d8fd9a00c21b6a087ade | 521,611 | ipynb | Jupyter Notebook | notebooks/Modelo SEIR.ipynb | nahumsa/covidash | 24f3fdabb41ceeaadc4582ed2820f6f7f1a392a1 | [
"MIT"
] | null | null | null | notebooks/Modelo SEIR.ipynb | nahumsa/covidash | 24f3fdabb41ceeaadc4582ed2820f6f7f1a392a1 | [
"MIT"
] | null | null | null | notebooks/Modelo SEIR.ipynb | nahumsa/covidash | 24f3fdabb41ceeaadc4582ed2820f6f7f1a392a1 | [
"MIT"
] | null | null | null | 200.157713 | 84,204 | 0.870871 | true | 14,738 | Qwen/Qwen-72B | 1. YES
2. YES | 0.888759 | 0.812867 | 0.722443 | __label__kor_Hang | 0.151401 | 0.516809 |
```python
# Method for numerically solving the energies and eigenfunctions of a quantum system
# Molecular Modeling 2
# By: José Manuel Casillas Martín
import numpy as np
from sympy import *
from sympy import init_printing; init_printing(use_latex = 'mathjax')
import matplotlib.pyplot as plt
```
```python
# Variables used
var('x l m hbar w k')
```
$$\left ( x, \quad l, \quad m, \quad \hbar, \quad w, \quad k\right )$$
```python
def Metodo_variaciones():
    print('In this problem the variables are the mass, the parameter l (user-defined), the parameter k (optimized) and x')
    print('')
    # The kinetic energy is defined by: K=(-hbar**2)/(2*m)*diff(fx,x,2)
    print('The kinetic energy is defined as: K=(-hbar**2)/(2*m)*diff(f(x),x,2)');print('')
    # Declare the potential
    V=sympify(input('Enter the potential function: '));print('')
    lim_inf_V=sympify(input('What is the lower limit of the potential function? '))
    lim_sup_V=sympify(input('What is the upper limit of the potential function? '));print('')
    n = int(input('Enter the number of basis functions you will use to solve the problem: '));print('')
    # List of trial functions
    f=[]
    # Matrix of overlap integrals
    Sm=[]
    # Matrix of exchange integrals
    Hm=[]
    print('Now let us define the constants of the problem');print('')
    mass=input('What is the mass of your particle? ')
    large=input('Define the parameter l: ');print('')
    # Declare the functions and their limits
    lim_inf=[]
    lim_sup=[]
    for i in range(n):
        f.append((input('Enter function %d: ' %(i+1))))
        lim_inf.append(input('What is the lower limit of the function? '))
        lim_sup.append(input('What is the upper limit of the function? '));print('')
    f=sympify(f)
    lim_inf=sympify(lim_inf)
    lim_sup=sympify(lim_sup)
    # For a particle in a potential well from 0 to l
    # The following loop evaluates the integrals that build the matrices Sij (overlap integrals)
    # and Hij (exchange integrals)
    # Approximation of the energies
    li=0
    ls=0
    for i in range(n):
        for j in range(n):
            integrandoT=(f[i])*((-hbar**2)/(2*m)*diff(f[j],x,2))
            integrandoV=(f[i])*V*(f[j])
            integrandoN=(f[i])*f[j]
            # Define the integration limits
            # Lower limits
            if lim_inf[i].subs({l:large})<=lim_inf[j].subs({l:large}):
                li=lim_inf[j]
                if li.subs({l:large})>=lim_inf_V.subs({l:large}):
                    liV=li
                else:
                    liV=lim_inf_V
            if lim_inf[i].subs({l:large})>=lim_inf[j].subs({l:large}):
                li=lim_inf[i]
                if li.subs({l:large})>=lim_inf_V.subs({l:large}):
                    liV=li
                else:
                    liV=lim_inf_V
            # Upper limits
            if lim_sup[i].subs({l:large})>=lim_sup[j].subs({l:large}):
                ls=lim_sup[j]
                if ls.subs({l:large})<=lim_sup_V.subs({l:large}):
                    lsV=ls
                else:
                    lsV=lim_sup_V
            if lim_sup[i].subs({l:large})<=lim_sup[j].subs({l:large}):
                ls=lim_sup[i]
                if ls.subs({l:large})<=lim_sup_V.subs({l:large}):
                    lsV=ls
                else:
                    lsV=lim_sup_V
            c=Integral(integrandoT,(x,li,ls))
            e=Integral(integrandoV,(x,liV,lsV))
            g=c+e
            d=Integral(integrandoN,(x,li,ls))
            g=g.doit()
            Hm.append(g)
            d=d.doit()
            Sm.append(d)
    Sm=np.reshape(Sm,(n,n))
    Hm=np.reshape(Hm,(n,n))
    # Matrix M: Hij - Sij*w
    M=(Hm-Sm*w)
    H=sympify(Matrix(M))
    Hdet=H.det()
    # Solve the secular determinant to find the energies
    E=solve(Hdet,w)
    # Sort the energies
    Eord=solve(Hdet,w)
    energies=np.zeros(n)
    for i in range (n):
        energies[i]=E[i].subs({m: mass, l: large, hbar:1.0545718e-34})
    energies_ord=sorted(energies)
    for i in range(n):
        for j in range(n):
            if energies[i]==energies_ord[j]:
                Eord[i]=E[j]
    # Matrix of coefficients for all the eigenfunctions
    c=zeros(n)
    for i in range(n):
        for j in range(n):
            c[i,j]=Symbol('c %d %d' %(i+1,j+1))
    # Solve for those coefficients
    sol=[]
    for i in range (n):
        a=np.reshape(c[0+n*i:(n)+n*i],(n))
        SE=Matrix(np.dot(M,a.transpose()))
        SE=sympify((SE.subs({w:Eord[i]})))
        sol.append(solve(SE,c[0+n*i:(n+1)+n*i]))
    if n!= 1:
        csol=zeros(n)
        CTS,cts,Cdet=[],[],[]
        for i in range (n):
            for j in range(n):
                csol[i,j]=(sol[i]).get(c[i,j])
                if csol[i,j] is None:
                    csol[i,j]=c[i,j]
                    CTS.append(c[i,j]); cts.append(c[i,j]); Cdet.append(c[i,j])
    # Print the results
    print('Matrix Hij')
    print(sympify(Matrix(Hm)));print('')
    print('Matrix Sij')
    print(sympify(Matrix(Sm)));print('')
    print('Sorted energies')
    print(Eord);print('')
    # Normalize the wave functions and plot
    graficar=input('Do you want to plot the computed eigenfunctions? (yes/no): ');print('')
    if graficar=="yes":
        if n>1:
            fa=(np.reshape(f,(n)))
            ef=csol*fa
            for i in range(n):
                integrando=ef[i]*ef[i]
                integ=Integral(integrando,(x,lim_inf[i],lim_sup[i]))
                integ=integ.doit()
                cts[i]=solve(integ-1,Cdet[i])
                if abs(cts[i][0])==cts[0][0]:
                    CTS[i]=cts[i][0]
                else:
                    CTS[i]=cts[i][1]
                ef=ef.subs({Cdet[i]:CTS[i]})
            print('Coefficients of each eigenfunction (each one has an extra constant that must be normalized)')
            print(csol);print('')
            print('For plotting, the constants shown above were normalized, with the results:')
            print(CTS);print('')
            for i in range(n):
                plot(ef[i].subs({l:1}),xlim=(0,1),ylim=(-2,2),title='Eigenfunction: %d' %(i+1))
# TODO: automate the plotting limits and also plot the first function automatically
        if n==1:
            ct=Symbol('C22')
            ef=ct*f[0]
            integrando=(ef)*(ef)
            integ=Integral(integrando,(x,lim_inf[0],lim_sup[0]))
            integr=integ.doit()
            cte=solve(integr-1,ct)
            if cte[0].subs({l:large})>cte[1].subs({l:large}):
                ctr=cte[0]
            else:
                ctr=cte[1]
            ef=ef.subs({ct:ctr})
            #print('Coefficients of each eigenfunction (each one has an extra constant that must be normalized)')
            #print(csol);print('')
            #print('For plotting, the constant shown above was normalized, with the result:')
            #print(CTS);print('')
            plot(ef.subs({l:1}),xlim=(0,1),ylim=(-1,2))
    return()
```
```python
Metodo_variaciones()
```
```python
```
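For comparison, here is a non-interactive sketch of the same linear variational idea (our addition; it assumes a particle in a box of length $l=1$ with $V=0$, $\hbar=m=1$, and the polynomial basis $f_j=x^{j+1}(l-x)$). It builds $H_{ij}$ and $S_{ij}$ numerically and solves the generalized eigenproblem $Hc=ESc$ with `scipy.linalg.eigh`:
```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh

l, nbasis = 1.0, 4
# basis functions f_j = x**(j+1) * (l - x), which vanish at x=0 and x=l
f = [lambda x, j=j: x**(j + 1) * (l - x) for j in range(nbasis)]
# their second derivatives (max() avoids a negative power whose coefficient is zero anyway)
d2f = [lambda x, j=j: (j + 1)*j*x**max(j - 1, 0)*(l - x) - 2*(j + 1)*x**j
       for j in range(nbasis)]

H = np.empty((nbasis, nbasis))
S = np.empty((nbasis, nbasis))
for i in range(nbasis):
    for j in range(nbasis):
        H[i, j] = quad(lambda x: -0.5 * f[i](x) * d2f[j](x), 0, l)[0]  # kinetic term
        S[i, j] = quad(lambda x: f[i](x) * f[j](x), 0, l)[0]           # overlap

E = eigh(H, S, eigvals_only=True)
print(E[:2])   # close to the exact n**2 * pi**2 / 2 ≈ 4.9348 and 19.739
```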
| 1287416ede47536380ee6ce74793cb7cb90d9618 | 108,746 | ipynb | Jupyter Notebook | Huckel_M0/Chema/Teorema_de_variaciones(1).ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
] | null | null | null | Huckel_M0/Chema/Teorema_de_variaciones(1).ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
] | null | null | null | Huckel_M0/Chema/Teorema_de_variaciones(1).ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
] | null | null | null | 269.173267 | 26,888 | 0.887251 | true | 2,133 | Qwen/Qwen-72B | 1. YES
2. YES | 0.887205 | 0.782662 | 0.694382 | __label__spa_Latn | 0.542886 | 0.451613 |
```python
%matplotlib inline
```
```python
import numpy as np
import matplotlib.pyplot as plt
```
# SciPy
SciPy is a collection of numerical algorithms with python interfaces. In many cases, these interfaces are wrappers around standard numerical libraries that have been developed in the community and are used with other languages. Usually detailed references are available to explain the implementation.
There are many subpackages generally, you load the subpackages separately, e.g.
```
from scipy import linalg, optimize
```
then you have access to the methods in those namespaces
# Numerical Methods
One thing to keep in mind -- all numerical methods have strengths and weaknesses, and make assumptions. You should always do some research into the method to understand what it is doing.
It is also always a good idea to run a new method on some test where you know the answer, to make sure it is behaving as expected.
# Integration
we'll do some integrals of the form
$$I = \int_a^b f(x) dx$$
We can imagine two situations:
* our function $f(x)$ is given by an analytic expression. This gives us the freedom to pick our integration points, and in general can allow us to optimize our result and get high accuracy
* our function $f(x)$ is defined only at a set of (possibly regularly spaced) points.
In numerical analysis, the term _quadrature_ is used to describe any integration method that represents the integral as the weighted sum of a discrete number of points.
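For instance (our example, following the advice above to test against a known answer), the general-purpose `quad` routine listed below recovers $\int_0^\pi \sin x\,dx = 2$:
```python
import numpy as np
from scipy import integrate

# integrate sin(x) from 0 to pi; the exact answer is 2
result, abserr = integrate.quad(np.sin, 0.0, np.pi)
print(result, abserr)   # ~2.0, with a tiny estimated absolute error
```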
```python
from scipy import integrate
help(integrate)
```
Help on package scipy.integrate in scipy:
NAME
scipy.integrate
DESCRIPTION
=============================================
Integration and ODEs (:mod:`scipy.integrate`)
=============================================
.. currentmodule:: scipy.integrate
Integrating functions, given function object
============================================
.. autosummary::
:toctree: generated/
quad -- General purpose integration
dblquad -- General purpose double integration
tplquad -- General purpose triple integration
nquad -- General purpose n-dimensional integration
fixed_quad -- Integrate func(x) using Gaussian quadrature of order n
quadrature -- Integrate with given tolerance using Gaussian quadrature
romberg -- Integrate func using Romberg integration
quad_explain -- Print information for use of quad
newton_cotes -- Weights and error coefficient for Newton-Cotes integration
IntegrationWarning -- Warning on issues during integration
Integrating functions, given fixed samples
==========================================
.. autosummary::
:toctree: generated/
trapz -- Use trapezoidal rule to compute integral.
cumtrapz -- Use trapezoidal rule to cumulatively compute integral.
simps -- Use Simpson's rule to compute integral from samples.
romb -- Use Romberg Integration to compute integral from
-- (2**k + 1) evenly-spaced samples.
.. seealso::
:mod:`scipy.special` for orthogonal polynomials (special) for Gaussian
quadrature roots and weights for other weighting factors and regions.
Solving initial value problems for ODE systems
==============================================
The solvers are implemented as individual classes which can be used directly
(low-level usage) or through a convenience function.
.. autosummary::
:toctree: generated/
solve_ivp -- Convenient function for ODE integration.
RK23 -- Explicit Runge-Kutta solver of order 3(2).
RK45 -- Explicit Runge-Kutta solver of order 5(4).
Radau -- Implicit Runge-Kutta solver of order 5.
BDF -- Implicit multi-step variable order (1 to 5) solver.
LSODA -- LSODA solver from ODEPACK Fortran package.
OdeSolver -- Base class for ODE solvers.
DenseOutput -- Local interpolant for computing a dense output.
OdeSolution -- Class which represents a continuous ODE solution.
Old API
-------
These are the routines developed earlier for scipy. They wrap older solvers
implemented in Fortran (mostly ODEPACK). While the interface to them is not
particularly convenient and certain features are missing compared to the new
API, the solvers themselves are of good quality and work fast as compiled
Fortran code. In some cases it might be worth using this old API.
.. autosummary::
:toctree: generated/
odeint -- General integration of ordinary differential equations.
ode -- Integrate ODE using VODE and ZVODE routines.
complex_ode -- Convert a complex-valued ODE to real-valued and integrate.
Solving boundary value problems for ODE systems
===============================================
.. autosummary::
:toctree: generated/
solve_bvp -- Solve a boundary value problem for a system of ODEs.
PACKAGE CONTENTS
_bvp
_dop
_ivp (package)
_ode
_odepack
_quadpack
_test_multivariate
_test_odeint_banded
lsoda
odepack
quadpack
quadrature
setup
tests (package)
vode
CLASSES
builtins.UserWarning(builtins.Warning)
scipy.integrate.quadpack.IntegrationWarning
builtins.object
scipy.integrate._ivp.base.DenseOutput
scipy.integrate._ivp.base.OdeSolver
scipy.integrate._ivp.bdf.BDF
scipy.integrate._ivp.lsoda.LSODA
scipy.integrate._ivp.radau.Radau
scipy.integrate._ivp.common.OdeSolution
scipy.integrate._ode.ode
scipy.integrate._ode.complex_ode
scipy.integrate._ivp.rk.RungeKutta(scipy.integrate._ivp.base.OdeSolver)
scipy.integrate._ivp.rk.RK23
scipy.integrate._ivp.rk.RK45
class BDF(scipy.integrate._ivp.base.OdeSolver)
| Implicit method based on Backward Differentiation Formulas.
|
| This is a variable order method with the order varying automatically from
| 1 to 5. The general framework of the BDF algorithm is described in [1]_.
| This class implements a quasi-constant step size approach as explained
| in [2]_. The error estimation strategy for the constant step BDF is derived
| in [3]_. An accuracy enhancement using modified formulas (NDF) [2]_ is also
| implemented.
|
| Can be applied in a complex domain.
|
| Parameters
| ----------
| fun : callable
| Right-hand side of the system. The calling signature is ``fun(t, y)``.
| Here ``t`` is a scalar and there are two options for ndarray ``y``.
| It can either have shape (n,), then ``fun`` must return array_like with
| shape (n,). Or alternatively it can have shape (n, k), then ``fun``
| must return array_like with shape (n, k), i.e. each column
| corresponds to a single column in ``y``. The choice between the two
| options is determined by `vectorized` argument (see below). The
| vectorized implementation allows faster approximation of the Jacobian
| by finite differences.
| t0 : float
| Initial time.
| y0 : array_like, shape (n,)
| Initial state.
| t_bound : float
| Boundary time --- the integration won't continue beyond it. It also
| determines the direction of the integration.
| max_step : float, optional
| Maximum allowed step size. Default is np.inf, i.e. the step is not
| bounded and determined solely by the solver.
| rtol, atol : float and array_like, optional
| Relative and absolute tolerances. The solver keeps the local error
| estimates less than ``atol + rtol * abs(y)``. Here `rtol` controls a
| relative accuracy (number of correct digits). But if a component of `y`
| is approximately below `atol` then the error only needs to fall within
| the same `atol` threshold, and the number of correct digits is not
| guaranteed. If components of y have different scales, it might be
| beneficial to set different `atol` values for different components by
| passing array_like with shape (n,) for `atol`. Default values are
| 1e-3 for `rtol` and 1e-6 for `atol`.
| jac : {None, array_like, sparse_matrix, callable}, optional
| Jacobian matrix of the right-hand side of the system with respect to
| y, required only by 'Radau' and 'BDF' methods. The Jacobian matrix
| has shape (n, n) and its element (i, j) is equal to ``d f_i / d y_j``.
| There are 3 ways to define the Jacobian:
|
| * If array_like or sparse_matrix, then the Jacobian is assumed to
| be constant.
| * If callable, then the Jacobian is assumed to depend on both
| t and y, and will be called as ``jac(t, y)`` as necessary. The
| return value might be a sparse matrix.
| * If None (default), then the Jacobian will be approximated by
| finite differences.
|
| It is generally recommended to provide the Jacobian rather than
| relying on a finite difference approximation.
| jac_sparsity : {None, array_like, sparse matrix}, optional
| Defines a sparsity structure of the Jacobian matrix for a finite
| difference approximation, its shape must be (n, n). If the Jacobian has
| only few non-zero elements in *each* row, providing the sparsity
| structure will greatly speed up the computations [4]_. A zero
| entry means that a corresponding element in the Jacobian is identically
| zero. If None (default), the Jacobian is assumed to be dense.
| vectorized : bool, optional
| Whether `fun` is implemented in a vectorized fashion. Default is False.
|
| Attributes
| ----------
| n : int
| Number of equations.
| status : string
| Current status of the solver: 'running', 'finished' or 'failed'.
| t_bound : float
| Boundary time.
| direction : float
| Integration direction: +1 or -1.
| t : float
| Current time.
| y : ndarray
| Current state.
| t_old : float
| Previous time. None if no steps were made yet.
| step_size : float
| Size of the last successful step. None if no steps were made yet.
| nfev : int
| Number of the system's rhs evaluations.
| njev : int
| Number of the Jacobian evaluations.
| nlu : int
| Number of LU decompositions.
|
| References
| ----------
| .. [1] G. D. Byrne, A. C. Hindmarsh, "A Polyalgorithm for the Numerical
| Solution of Ordinary Differential Equations", ACM Transactions on
| Mathematical Software, Vol. 1, No. 1, pp. 71-96, March 1975.
| .. [2] L. F. Shampine, M. W. Reichelt, "THE MATLAB ODE SUITE", SIAM J. SCI.
| COMPUTE., Vol. 18, No. 1, pp. 1-22, January 1997.
| .. [3] E. Hairer, G. Wanner, "Solving Ordinary Differential Equations I:
| Nonstiff Problems", Sec. III.2.
| .. [4] A. Curtis, M. J. D. Powell, and J. Reid, "On the estimation of
| sparse Jacobian matrices", Journal of the Institute of Mathematics
| and its Applications, 13, pp. 117-120, 1974.
|
| Method resolution order:
| BDF
| scipy.integrate._ivp.base.OdeSolver
| builtins.object
|
| Methods defined here:
|
| __init__(self, fun, t0, y0, t_bound, max_step=inf, rtol=0.001, atol=1e-06, jac=None, jac_sparsity=None, vectorized=False, **extraneous)
| Initialize self. See help(type(self)) for accurate signature.
|
| ----------------------------------------------------------------------
| Methods inherited from scipy.integrate._ivp.base.OdeSolver:
|
| dense_output(self)
| Compute a local interpolant over the last successful step.
|
| Returns
| -------
| sol : `DenseOutput`
| Local interpolant over the last successful step.
|
| step(self)
| Perform one integration step.
|
| Returns
| -------
| message : string or None
| Report from the solver. Typically a reason for a failure if
| `self.status` is 'failed' after the step was taken or None
| otherwise.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from scipy.integrate._ivp.base.OdeSolver:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| step_size
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from scipy.integrate._ivp.base.OdeSolver:
|
| TOO_SMALL_STEP = 'Required step size is less than spacing between numb...
class DenseOutput(builtins.object)
| Base class for local interpolant over step made by an ODE solver.
|
| It interpolates between `t_min` and `t_max` (see Attributes below).
| Evaluation outside this interval is not forbidden, but the accuracy is not
| guaranteed.
|
| Attributes
| ----------
| t_min, t_max : float
| Time range of the interpolation.
|
| Methods defined here:
|
| __call__(self, t)
| Evaluate the interpolant.
|
| Parameters
| ----------
| t : float or array_like with shape (n_points,)
| Points to evaluate the solution at.
|
| Returns
| -------
| y : ndarray, shape (n,) or (n, n_points)
| Computed values. Shape depends on whether `t` was a scalar or a
| 1-d array.
|
| __init__(self, t_old, t)
| Initialize self. See help(type(self)) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
class IntegrationWarning(builtins.UserWarning)
| Warning on issues during integration.
|
| Method resolution order:
| IntegrationWarning
| builtins.UserWarning
| builtins.Warning
| builtins.Exception
| builtins.BaseException
| builtins.object
|
| Data descriptors defined here:
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Methods inherited from builtins.UserWarning:
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Methods inherited from builtins.BaseException:
|
| __delattr__(self, name, /)
| Implement delattr(self, name).
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __reduce__(...)
| helper for pickle
|
| __repr__(self, /)
| Return repr(self).
|
| __setattr__(self, name, value, /)
| Implement setattr(self, name, value).
|
| __setstate__(...)
|
| __str__(self, /)
| Return str(self).
|
| with_traceback(...)
| Exception.with_traceback(tb) --
| set self.__traceback__ to tb and return self.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from builtins.BaseException:
|
| __cause__
| exception cause
|
| __context__
| exception context
|
| __dict__
|
| __suppress_context__
|
| __traceback__
|
| args
class LSODA(scipy.integrate._ivp.base.OdeSolver)
| Adams/BDF method with automatic stiffness detection and switching.
|
| This is a wrapper to the Fortran solver from ODEPACK [1]_. It switches
| automatically between the nonstiff Adams method and the stiff BDF method.
| The method was originally detailed in [2]_.
|
| Parameters
| ----------
| fun : callable
| Right-hand side of the system. The calling signature is ``fun(t, y)``.
| Here ``t`` is a scalar and there are two options for ndarray ``y``.
| It can either have shape (n,), then ``fun`` must return array_like with
| shape (n,). Or alternatively it can have shape (n, k), then ``fun``
| must return array_like with shape (n, k), i.e. each column
| corresponds to a single column in ``y``. The choice between the two
| options is determined by `vectorized` argument (see below). The
| vectorized implementation allows faster approximation of the Jacobian
| by finite differences.
| t0 : float
| Initial time.
| y0 : array_like, shape (n,)
| Initial state.
| t_bound : float
| Boundary time --- the integration won't continue beyond it. It also
| determines the direction of the integration.
| first_step : float or None, optional
| Initial step size. Default is ``None`` which means that the algorithm
| should choose.
| min_step : float, optional
| Minimum allowed step size. Default is 0.0, i.e. the step is not
| bounded and determined solely by the solver.
| max_step : float, optional
| Maximum allowed step size. Default is ``np.inf``, i.e. the step is not
| bounded and determined solely by the solver.
| rtol, atol : float and array_like, optional
| Relative and absolute tolerances. The solver keeps the local error
| estimates less than ``atol + rtol * abs(y)``. Here `rtol` controls a
| relative accuracy (number of correct digits). But if a component of `y`
| is approximately below `atol` then the error only needs to fall within
| the same `atol` threshold, and the number of correct digits is not
| guaranteed. If components of y have different scales, it might be
| beneficial to set different `atol` values for different components by
| passing array_like with shape (n,) for `atol`. Default values are
| 1e-3 for `rtol` and 1e-6 for `atol`.
| jac : None or callable, optional
| Jacobian matrix of the right-hand side of the system with respect to
| ``y``. The Jacobian matrix has shape (n, n) and its element (i, j) is
| equal to ``d f_i / d y_j``. The function will be called as
| ``jac(t, y)``. If None (default), then the Jacobian will be
| approximated by finite differences. It is generally recommended to
| provide the Jacobian rather than relying on a finite difference
| approximation.
| lband, uband : int or None, optional
| Jacobian band width:
| ``jac[i, j] != 0 only for i - lband <= j <= i + uband``. Setting these
| requires your jac routine to return the Jacobian in the packed format:
| the returned array must have ``n`` columns and ``uband + lband + 1``
| rows in which Jacobian diagonals are written. Specifically
| ``jac_packed[uband + i - j , j] = jac[i, j]``. The same format is used
| in `scipy.linalg.solve_banded` (check for an illustration).
| These parameters can be also used with ``jac=None`` to reduce the
| number of Jacobian elements estimated by finite differences.
| vectorized : bool, optional
| Whether `fun` is implemented in a vectorized fashion. A vectorized
| implementation offers no advantages for this solver. Default is False.
|
| Attributes
| ----------
| n : int
| Number of equations.
| status : string
| Current status of the solver: 'running', 'finished' or 'failed'.
| t_bound : float
| Boundary time.
| direction : float
| Integration direction: +1 or -1.
| t : float
| Current time.
| y : ndarray
| Current state.
| t_old : float
| Previous time. None if no steps were made yet.
| nfev : int
| Number of the system's rhs evaluations.
| njev : int
| Number of the Jacobian evaluations.
|
| References
| ----------
| .. [1] A. C. Hindmarsh, "ODEPACK, A Systematized Collection of ODE
| Solvers," IMACS Transactions on Scientific Computation, Vol 1.,
| pp. 55-64, 1983.
| .. [2] L. Petzold, "Automatic selection of methods for solving stiff and
| nonstiff systems of ordinary differential equations", SIAM Journal
| on Scientific and Statistical Computing, Vol. 4, No. 1, pp. 136-148,
| 1983.
|
| Method resolution order:
| LSODA
| scipy.integrate._ivp.base.OdeSolver
| builtins.object
|
| Methods defined here:
|
| __init__(self, fun, t0, y0, t_bound, first_step=None, min_step=0.0, max_step=inf, rtol=0.001, atol=1e-06, jac=None, lband=None, uband=None, vectorized=False, **extraneous)
| Initialize self. See help(type(self)) for accurate signature.
|
| ----------------------------------------------------------------------
| Methods inherited from scipy.integrate._ivp.base.OdeSolver:
|
| dense_output(self)
| Compute a local interpolant over the last successful step.
|
| Returns
| -------
| sol : `DenseOutput`
| Local interpolant over the last successful step.
|
| step(self)
| Perform one integration step.
|
| Returns
| -------
| message : string or None
| Report from the solver. Typically a reason for a failure if
| `self.status` is 'failed' after the step was taken or None
| otherwise.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from scipy.integrate._ivp.base.OdeSolver:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| step_size
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from scipy.integrate._ivp.base.OdeSolver:
|
| TOO_SMALL_STEP = 'Required step size is less than spacing between numb...
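
A hedged usage sketch: in scipy versions that ship this class, the high-level `solve_ivp` driver (not shown in this excerpt, so its availability is an assumption here) selects this solver via `method='LSODA'`. The Van der Pol right-hand side and parameter values are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp  # assumption: available alongside LSODA

mu = 100.0  # stiffness parameter of the Van der Pol oscillator (illustrative)

def rhs(t, y):
    return np.array([y[1], mu * (1.0 - y[0]**2) * y[1] - y[0]])

# LSODA starts with the non-stiff Adams method and switches to BDF
# automatically once the problem becomes stiff.
res = solve_ivp(rhs, t_span=(0.0, 200.0), y0=[2.0, 0.0], method='LSODA',
                rtol=1e-6, atol=1e-9)
print(res.success, res.t.size, "accepted steps")
```
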
class OdeSolution(builtins.object)
| Continuous ODE solution.
|
| It is organized as a collection of `DenseOutput` objects which represent
| local interpolants. It provides an algorithm to select a right interpolant
| for each given point.
|
| The interpolants cover the range between `t_min` and `t_max` (see
| Attributes below). Evaluation outside this interval is not forbidden, but
| the accuracy is not guaranteed.
|
| When evaluating at a breakpoint (one of the values in `ts`) a segment with
| the lower index is selected.
|
| Parameters
| ----------
| ts : array_like, shape (n_segments + 1,)
| Time instants between which local interpolants are defined. Must
| be strictly increasing or decreasing (zero segment with two points is
| also allowed).
| interpolants : list of DenseOutput with n_segments elements
| Local interpolants. An i-th interpolant is assumed to be defined
| between ``ts[i]`` and ``ts[i + 1]``.
|
| Attributes
| ----------
| t_min, t_max : float
| Time range of the interpolation.
|
| Methods defined here:
|
| __call__(self, t)
| Evaluate the solution.
|
| Parameters
| ----------
| t : float or array_like with shape (n_points,)
| Points to evaluate at.
|
| Returns
| -------
| y : ndarray, shape (n_states,) or (n_states, n_points)
| Computed values. Shape depends on whether `t` is a scalar or a
| 1-d array.
|
| __init__(self, ts, interpolants)
| Initialize self. See help(type(self)) for accurate signature.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
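
A minimal sketch of where an `OdeSolution` usually comes from: `solve_ivp(..., dense_output=True)` (again assuming `solve_ivp` exists in this scipy version) returns one as its `sol` attribute, which can then be evaluated at arbitrary times as documented above.

```python
import numpy as np
from scipy.integrate import solve_ivp  # assumption: available in this scipy

# Exponential decay y' = -0.5*y with exact solution 2*exp(-0.5*t).
res = solve_ivp(lambda t, y: -0.5 * y, (0.0, 10.0), [2.0], dense_output=True)
sol = res.sol                      # an OdeSolution instance

print(sol(3.0))                    # scalar t -> shape (n_states,)
print(sol(np.linspace(0, 10, 5)))  # array t -> shape (n_states, n_points)
```
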
class OdeSolver(builtins.object)
| Base class for ODE solvers.
|
| In order to implement a new solver you need to follow the guidelines:
|
| 1. A constructor must accept parameters presented in the base class
| (listed below) along with any other parameters specific to a solver.
| 2. A constructor must accept arbitrary extraneous arguments
| ``**extraneous``, but warn that these arguments are irrelevant
| using `common.warn_extraneous` function. Do not pass these
| arguments to the base class.
| 3. A solver must implement a private method `_step_impl(self)` which
| propagates a solver one step further. It must return tuple
| ``(success, message)``, where ``success`` is a boolean indicating
| whether a step was successful, and ``message`` is a string
| containing description of a failure if a step failed or None
| otherwise.
| 4. A solver must implement a private method `_dense_output_impl(self)`
| which returns a `DenseOutput` object covering the last successful
| step.
| 5. A solver must have attributes listed below in Attributes section.
| Note that `t_old` and `step_size` are updated automatically.
| 6. Use `fun(self, t, y)` method for the system rhs evaluation, this
| way the number of function evaluations (`nfev`) will be tracked
| automatically.
| 7. For convenience a base class provides `fun_single(self, t, y)` and
| `fun_vectorized(self, t, y)` for evaluating the rhs in
| non-vectorized and vectorized fashions respectively (regardless of
| how `fun` from the constructor is implemented). These calls don't
| increment `nfev`.
| 8. If a solver uses a Jacobian matrix and LU decompositions, it should
| track the number of Jacobian evaluations (`njev`) and the number of
| LU decompositions (`nlu`).
| 9. By convention the function evaluations used to compute a finite
| difference approximation of the Jacobian should not be counted in
| `nfev`, thus use `fun_single(self, t, y)` or
| `fun_vectorized(self, t, y)` when computing a finite difference
| approximation of the Jacobian.
|
| Parameters
| ----------
| fun : callable
| Right-hand side of the system. The calling signature is ``fun(t, y)``.
| Here ``t`` is a scalar and there are two options for ndarray ``y``.
| It can either have shape (n,), then ``fun`` must return array_like with
| shape (n,). Or alternatively it can have shape (n, n_points), then
| ``fun`` must return array_like with shape (n, n_points) (each column
| corresponds to a single column in ``y``). The choice between the two
| options is determined by `vectorized` argument (see below).
| t0 : float
| Initial time.
| y0 : array_like, shape (n,)
| Initial state.
| t_bound : float
| Boundary time --- the integration won't continue beyond it. It also
| determines the direction of the integration.
| vectorized : bool
| Whether `fun` is implemented in a vectorized fashion.
| support_complex : bool, optional
| Whether integration in a complex domain should be supported.
| Generally determined by a derived solver class capabilities.
| Default is False.
|
| Attributes
| ----------
| n : int
| Number of equations.
| status : string
| Current status of the solver: 'running', 'finished' or 'failed'.
| t_bound : float
| Boundary time.
| direction : float
| Integration direction: +1 or -1.
| t : float
| Current time.
| y : ndarray
| Current state.
| t_old : float
| Previous time. None if no steps were made yet.
| step_size : float
| Size of the last successful step. None if no steps were made yet.
| nfev : int
| Number of the system's rhs evaluations.
| njev : int
| Number of the Jacobian evaluations.
| nlu : int
| Number of LU decompositions.
|
| Methods defined here:
|
| __init__(self, fun, t0, y0, t_bound, vectorized, support_complex=False)
| Initialize self. See help(type(self)) for accurate signature.
|
| dense_output(self)
| Compute a local interpolant over the last successful step.
|
| Returns
| -------
| sol : `DenseOutput`
| Local interpolant over the last successful step.
|
| step(self)
| Perform one integration step.
|
| Returns
| -------
| message : string or None
| Report from the solver. Typically a reason for a failure if
| `self.status` is 'failed' after the step was taken or None
| otherwise.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| step_size
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| TOO_SMALL_STEP = 'Required step size is less than spacing between numb...
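
To make the numbered guidelines above concrete, here is a hedged sketch of a custom solver: a fixed-step forward Euler method with linear dense output, following the constructor and `_step_impl`/`_dense_output_impl` contract just described. The names `Euler` and `LinearDenseOutput`, the step-size handling, and the assumption that `DenseOutput` is importable from `scipy.integrate` are illustrative choices, not scipy API.

```python
import numpy as np
from scipy.integrate import OdeSolver, DenseOutput  # assumed public exports

class LinearDenseOutput(DenseOutput):
    """Linear interpolant between two accepted states (illustrative)."""
    def __init__(self, t_old, t, y_old, y):
        super().__init__(t_old, t)
        self.y_old, self.y = y_old, y

    def _call_impl(self, t):
        s = (t - self.t_old) / (self.t - self.t_old)
        if t.ndim == 0:                        # scalar query -> shape (n,)
            return self.y_old + s * (self.y - self.y_old)
        return self.y_old[:, None] + s * (self.y - self.y_old)[:, None]

class Euler(OdeSolver):
    """Fixed-step forward Euler sketch following the OdeSolver contract."""
    def __init__(self, fun, t0, y0, t_bound, h=1e-3, vectorized=False,
                 **extraneous):  # extraneous kwargs are simply ignored here
        super().__init__(fun, t0, y0, t_bound, vectorized)
        self.h = h
        self.y_old = None

    def _step_impl(self):
        # Clip the final step so we land exactly on t_bound;
        # `direction` is set by the base class.
        t_new = self.t + self.direction * self.h
        if self.direction * (t_new - self.t_bound) > 0:
            t_new = self.t_bound
        h = t_new - self.t                     # signed step actually taken
        self.y_old = self.y
        self.y = self.y + h * self.fun(self.t, self.y)  # fun() tracks nfev
        self.t = t_new
        return True, None                      # (success, message)

    def _dense_output_impl(self):
        return LinearDenseOutput(self.t_old, self.t, self.y_old, self.y)

# Usage: integrate y' = -y from 0 to 1 (exact solution exp(-t)).
solver = Euler(lambda t, y: -y, 0.0, np.array([1.0]), 1.0, h=1e-3)
while solver.status == 'running':
    solver.step()
print(solver.t, solver.y, np.exp(-1.0))
```
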
class RK23(RungeKutta)
| Explicit Runge-Kutta method of order 3(2).
|
| The Bogacki-Shampine pair of formulas is used [1]_. The error is controlled
| assuming 2nd order accuracy, but steps are taken using a 3rd order accurate
| formula (local extrapolation is done). A cubic Hermite polynomial is used
| for the dense output.
|
| Can be applied in a complex domain.
|
| Parameters
| ----------
| fun : callable
| Right-hand side of the system. The calling signature is ``fun(t, y)``.
| Here ``t`` is a scalar and there are two options for ndarray ``y``.
| It can either have shape (n,), then ``fun`` must return array_like with
| shape (n,). Or alternatively it can have shape (n, k), then ``fun``
| must return array_like with shape (n, k), i.e. each column
| corresponds to a single column in ``y``. The choice between the two
| options is determined by `vectorized` argument (see below). The
| vectorized implementation allows faster approximation of the Jacobian
| by finite differences.
| t0 : float
| Initial time.
| y0 : array_like, shape (n,)
| Initial state.
| t_bound : float
| Boundary time --- the integration won't continue beyond it. It also
| determines the direction of the integration.
| max_step : float, optional
| Maximum allowed step size. Default is np.inf, i.e. the step is not
| bounded and determined solely by the solver.
| rtol, atol : float and array_like, optional
| Relative and absolute tolerances. The solver keeps the local error
| estimates less than ``atol + rtol * abs(y)``. Here `rtol` controls a
| relative accuracy (number of correct digits). But if a component of `y`
| is approximately below `atol` then the error only needs to fall within
| the same `atol` threshold, and the number of correct digits is not
| guaranteed. If components of y have different scales, it might be
| beneficial to set different `atol` values for different components by
| passing array_like with shape (n,) for `atol`. Default values are
| 1e-3 for `rtol` and 1e-6 for `atol`.
| vectorized : bool, optional
| Whether `fun` is implemented in a vectorized fashion. Default is False.
|
| Attributes
| ----------
| n : int
| Number of equations.
| status : string
| Current status of the solver: 'running', 'finished' or 'failed'.
| t_bound : float
| Boundary time.
| direction : float
| Integration direction: +1 or -1.
| t : float
| Current time.
| y : ndarray
| Current state.
| t_old : float
| Previous time. None if no steps were made yet.
| step_size : float
| Size of the last successful step. None if no steps were made yet.
| nfev : int
| Number of the system's rhs evaluations.
| njev : int
| Number of the Jacobian evaluations.
| nlu : int
| Number of LU decompositions.
|
| References
| ----------
| .. [1] P. Bogacki, L.F. Shampine, "A 3(2) Pair of Runge-Kutta Formulas",
| Appl. Math. Lett. Vol. 2, No. 4. pp. 321-325, 1989.
|
| Method resolution order:
| RK23
| RungeKutta
| scipy.integrate._ivp.base.OdeSolver
| builtins.object
|
| Data and other attributes defined here:
|
| A = [array([ 0.5]), array([ 0. , 0.75])]
|
| B = array([ 0.22222222, 0.33333333, 0.44444444])
|
| C = array([ 0.5 , 0.75])
|
| E = array([ 0.06944444, -0.08333333, -0.11111111, 0.125 ])
|
| P = array([[ 1. , -1.33333333, 0.55555556],
| ...
| [ 0. ...
|
| n_stages = 3
|
| order = 2
|
| ----------------------------------------------------------------------
| Methods inherited from RungeKutta:
|
| __init__(self, fun, t0, y0, t_bound, max_step=inf, rtol=0.001, atol=1e-06, vectorized=False, **extraneous)
| Initialize self. See help(type(self)) for accurate signature.
|
| ----------------------------------------------------------------------
| Methods inherited from scipy.integrate._ivp.base.OdeSolver:
|
| dense_output(self)
| Compute a local interpolant over the last successful step.
|
| Returns
| -------
| sol : `DenseOutput`
| Local interpolant over the last successful step.
|
| step(self)
| Perform one integration step.
|
| Returns
| -------
| message : string or None
| Report from the solver. Typically a reason for a failure if
| `self.status` is 'failed' after the step was taken or None
| otherwise.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from scipy.integrate._ivp.base.OdeSolver:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| step_size
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from scipy.integrate._ivp.base.OdeSolver:
|
| TOO_SMALL_STEP = 'Required step size is less than spacing between numb...
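
A hedged sketch of driving this stepper through `solve_ivp` (whose availability in this scipy version is assumed); the harmonic oscillator is only an illustration of a smooth, non-stiff problem where RK23 is a reasonable choice.

```python
import numpy as np
from scipy.integrate import solve_ivp  # assumption: available in this scipy

# Harmonic oscillator y'' = -y as a first-order system; after one full
# period the state should return to its initial value [1, 0].
res = solve_ivp(lambda t, y: np.array([y[1], -y[0]]),
                t_span=(0.0, 2.0 * np.pi), y0=[1.0, 0.0],
                method='RK23', rtol=1e-6, atol=1e-9)
print(res.y[:, -1])  # close to [1, 0]
```
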
class RK45(RungeKutta)
| Explicit Runge-Kutta method of order 5(4).
|
| The Dormand-Prince pair of formulas is used [1]_. The error is controlled
| assuming 4th order accuracy, but steps are taken using a 5th
| order accurate formula (local extrapolation is done). A quartic
| interpolation polynomial is used for the dense output [2]_.
|
| Can be applied in a complex domain.
|
| Parameters
| ----------
| fun : callable
| Right-hand side of the system. The calling signature is ``fun(t, y)``.
| Here ``t`` is a scalar and there are two options for ndarray ``y``.
| It can either have shape (n,), then ``fun`` must return array_like with
| shape (n,). Or alternatively it can have shape (n, k), then ``fun``
| must return array_like with shape (n, k), i.e. each column
| corresponds to a single column in ``y``. The choice between the two
| options is determined by `vectorized` argument (see below). The
| vectorized implementation allows faster approximation of the Jacobian
| by finite differences.
| t0 : float
| Initial value of the independent variable.
| y0 : array_like, shape (n,)
| Initial values of the dependent variable.
| t_bound : float
| Boundary time --- the integration won't continue beyond it. It also
| determines the direction of the integration.
| max_step : float, optional
| Maximum allowed step size. Default is np.inf, i.e. the step is not
| bounded and determined solely by the solver.
| rtol, atol : float and array_like, optional
| Relative and absolute tolerances. The solver keeps the local error
| estimates less than ``atol + rtol * abs(y)``. Here `rtol` controls a
| relative accuracy (number of correct digits). But if a component of `y`
| is approximately below `atol` then the error only needs to fall within
| the same `atol` threshold, and the number of correct digits is not
| guaranteed. If components of y have different scales, it might be
| beneficial to set different `atol` values for different components by
| passing array_like with shape (n,) for `atol`. Default values are
| 1e-3 for `rtol` and 1e-6 for `atol`.
| vectorized : bool, optional
| Whether `fun` is implemented in a vectorized fashion. Default is False.
|
| Attributes
| ----------
| n : int
| Number of equations.
| status : string
| Current status of the solver: 'running', 'finished' or 'failed'.
| t_bound : float
| Boundary time.
| direction : float
| Integration direction: +1 or -1.
| t : float
| Current time.
| y : ndarray
| Current state.
| t_old : float
| Previous time. None if no steps were made yet.
| step_size : float
| Size of the last successful step. None if no steps were made yet.
| nfev : int
| Number of the system's rhs evaluations.
| njev : int
| Number of the Jacobian evaluations.
| nlu : int
| Number of LU decompositions.
|
| References
| ----------
| .. [1] J. R. Dormand, P. J. Prince, "A family of embedded Runge-Kutta
| formulae", Journal of Computational and Applied Mathematics, Vol. 6,
| No. 1, pp. 19-26, 1980.
| .. [2] L. F. Shampine, "Some Practical Runge-Kutta Formulas", Mathematics
| of Computation, Vol. 46, No. 173, pp. 135-150, 1986.
|
| Method resolution order:
| RK45
| RungeKutta
| scipy.integrate._ivp.base.OdeSolver
| builtins.object
|
| Data and other attributes defined here:
|
| A = [array([ 0.2]), array([ 0.075, 0.225]), array([ 0.97777778, -3.73...
|
| B = array([ 0.09114583, 0. , 0.4492363 , 0.65104167, -0.3223...
|
| C = array([ 0.2 , 0.3 , 0.8 , 0.88888889, 1. ...
|
| E = array([-0.00123264, 0. , 0.00425277, -0...7, 0.0508638 ,...
|
| P = array([[ 1. , -2.85358007, 3.07174346... , 1.38246...
|
| n_stages = 6
|
| order = 4
|
| ----------------------------------------------------------------------
| Methods inherited from RungeKutta:
|
| __init__(self, fun, t0, y0, t_bound, max_step=inf, rtol=0.001, atol=1e-06, vectorized=False, **extraneous)
| Initialize self. See help(type(self)) for accurate signature.
|
| ----------------------------------------------------------------------
| Methods inherited from scipy.integrate._ivp.base.OdeSolver:
|
| dense_output(self)
| Compute a local interpolant over the last successful step.
|
| Returns
| -------
| sol : `DenseOutput`
| Local interpolant over the last successful step.
|
| step(self)
| Perform one integration step.
|
| Returns
| -------
| message : string or None
| Report from the solver. Typically a reason for a failure if
| `self.status` is 'failed' after the step was taken or None
| otherwise.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from scipy.integrate._ivp.base.OdeSolver:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| step_size
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from scipy.integrate._ivp.base.OdeSolver:
|
| TOO_SMALL_STEP = 'Required step size is less than spacing between numb...
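
To complement the `solve_ivp`-based sketch above, here is a hedged example of the documented low-level interface, driving RK45 by hand via `step()`; the decay problem is purely illustrative.

```python
import numpy as np
from scipy.integrate import RK45

# Step the solver manually until it reaches t_bound.
solver = RK45(lambda t, y: -2.0 * y, t0=0.0, y0=np.array([1.0]), t_bound=1.0)
while solver.status == 'running':
    solver.step()
print(solver.t, solver.y[0], np.exp(-2.0))  # y(1) should be close to exp(-2)
```
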
class Radau(scipy.integrate._ivp.base.OdeSolver)
| Implicit Runge-Kutta method of Radau IIA family of order 5.
|
| Implementation follows [1]_. The error is controlled for a 3rd order
| accurate embedded formula. A cubic polynomial which satisfies the
| collocation conditions is used for the dense output.
|
| Parameters
| ----------
| fun : callable
| Right-hand side of the system. The calling signature is ``fun(t, y)``.
| Here ``t`` is a scalar and there are two options for ndarray ``y``.
| It can either have shape (n,), then ``fun`` must return array_like with
| shape (n,). Or alternatively it can have shape (n, k), then ``fun``
| must return array_like with shape (n, k), i.e. each column
| corresponds to a single column in ``y``. The choice between the two
| options is determined by `vectorized` argument (see below). The
| vectorized implementation allows faster approximation of the Jacobian
| by finite differences.
| t0 : float
| Initial time.
| y0 : array_like, shape (n,)
| Initial state.
| t_bound : float
| Boundary time --- the integration won't continue beyond it. It also
| determines the direction of the integration.
| max_step : float, optional
| Maximum allowed step size. Default is np.inf, i.e. the step is not
| bounded and determined solely by the solver.
| rtol, atol : float and array_like, optional
| Relative and absolute tolerances. The solver keeps the local error
| estimates less than ``atol + rtol * abs(y)``. Here `rtol` controls a
| relative accuracy (number of correct digits). But if a component of `y`
| is approximately below `atol` then the error only needs to fall within
| the same `atol` threshold, and the number of correct digits is not
| guaranteed. If components of y have different scales, it might be
| beneficial to set different `atol` values for different components by
| passing array_like with shape (n,) for `atol`. Default values are
| 1e-3 for `rtol` and 1e-6 for `atol`.
| jac : {None, array_like, sparse_matrix, callable}, optional
| Jacobian matrix of the right-hand side of the system with respect to
| y, required only by 'Radau' and 'BDF' methods. The Jacobian matrix
| has shape (n, n) and its element (i, j) is equal to ``d f_i / d y_j``.
| There are 3 ways to define the Jacobian:
|
| * If array_like or sparse_matrix, then the Jacobian is assumed to
| be constant.
| * If callable, then the Jacobian is assumed to depend on both
| t and y, and will be called as ``jac(t, y)`` as necessary. The
| return value might be a sparse matrix.
| * If None (default), then the Jacobian will be approximated by
| finite differences.
|
| It is generally recommended to provide the Jacobian rather than
| relying on a finite difference approximation.
| jac_sparsity : {None, array_like, sparse matrix}, optional
| Defines a sparsity structure of the Jacobian matrix for a finite
| difference approximation, its shape must be (n, n). If the Jacobian has
| only few non-zero elements in *each* row, providing the sparsity
| structure will greatly speed up the computations [2]_. A zero
| entry means that a corresponding element in the Jacobian is identically
| zero. If None (default), the Jacobian is assumed to be dense.
| vectorized : bool, optional
| Whether `fun` is implemented in a vectorized fashion. Default is False.
|
| Attributes
| ----------
| n : int
| Number of equations.
| status : string
| Current status of the solver: 'running', 'finished' or 'failed'.
| t_bound : float
| Boundary time.
| direction : float
| Integration direction: +1 or -1.
| t : float
| Current time.
| y : ndarray
| Current state.
| t_old : float
| Previous time. None if no steps were made yet.
| step_size : float
| Size of the last successful step. None if no steps were made yet.
| nfev : int
| Number of the system's rhs evaluations.
| njev : int
| Number of the Jacobian evaluations.
| nlu : int
| Number of LU decompositions.
|
| References
| ----------
| .. [1] E. Hairer, G. Wanner, "Solving Ordinary Differential Equations II:
| Stiff and Differential-Algebraic Problems", Sec. IV.8.
| .. [2] A. Curtis, M. J. D. Powell, and J. Reid, "On the estimation of
| sparse Jacobian matrices", Journal of the Institute of Mathematics
| and its Applications, 13, pp. 117-120, 1974.
|
| Method resolution order:
| Radau
| scipy.integrate._ivp.base.OdeSolver
| builtins.object
|
| Methods defined here:
|
| __init__(self, fun, t0, y0, t_bound, max_step=inf, rtol=0.001, atol=1e-06, jac=None, jac_sparsity=None, vectorized=False, **extraneous)
| Initialize self. See help(type(self)) for accurate signature.
|
| ----------------------------------------------------------------------
| Methods inherited from scipy.integrate._ivp.base.OdeSolver:
|
| dense_output(self)
| Compute a local interpolant over the last successful step.
|
| Returns
| -------
| sol : `DenseOutput`
| Local interpolant over the last successful step.
|
| step(self)
| Perform one integration step.
|
| Returns
| -------
| message : string or None
| Report from the solver. Typically a reason for a failure if
| `self.status` is 'failed' after the step was taken or None
| otherwise.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from scipy.integrate._ivp.base.OdeSolver:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| step_size
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from scipy.integrate._ivp.base.OdeSolver:
|
| TOO_SMALL_STEP = 'Required step size is less than spacing between numb...
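
A hedged sketch of using this solver via `solve_ivp` (assumed available) with an analytic Jacobian, which, as noted above, is generally preferable to the finite-difference approximation; the linear stiff system is illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp  # assumption: available in this scipy

def rhs(t, y):
    return np.array([-1000.0 * y[0] + y[1], y[0] - y[1]])

def jac(t, y):
    # Constant Jacobian of the linear system above.
    return np.array([[-1000.0, 1.0], [1.0, -1.0]])

res = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], method='Radau', jac=jac)
print(res.success, "nfev:", res.nfev, "njev:", res.njev)
```
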
class complex_ode(ode)
| A wrapper of ode for complex systems.
|
| This functions similarly as `ode`, but re-maps a complex-valued
| equation system to a real-valued one before using the integrators.
|
| Parameters
| ----------
| f : callable ``f(t, y, *f_args)``
| Rhs of the equation. t is a scalar, ``y.shape == (n,)``.
| ``f_args`` is set by calling ``set_f_params(*args)``.
| jac : callable ``jac(t, y, *jac_args)``
| Jacobian of the rhs, ``jac[i,j] = d f[i] / d y[j]``.
| ``jac_args`` is set by calling ``set_f_params(*args)``.
|
| Attributes
| ----------
| t : float
| Current time.
| y : ndarray
| Current variable values.
|
| Examples
| --------
| For usage examples, see `ode`.
|
| Method resolution order:
| complex_ode
| ode
| builtins.object
|
| Methods defined here:
|
| __init__(self, f, jac=None)
| Initialize self. See help(type(self)) for accurate signature.
|
| integrate(self, t, step=False, relax=False)
| Find y=y(t), set y as an initial condition, and return y.
|
| Parameters
| ----------
| t : float
| The endpoint of the integration step.
| step : bool
| If True, and if the integrator supports the step method,
| then perform a single integration step and return.
| This parameter is provided in order to expose internals of
| the implementation, and should not be changed from its default
| value in most cases.
| relax : bool
| If True and if the integrator supports the run_relax method,
| then integrate until t_1 >= t and return. ``relax`` is not
| referenced if ``step=True``.
| This parameter is provided in order to expose internals of
| the implementation, and should not be changed from its default
| value in most cases.
|
| Returns
| -------
| y : float
| The integrated value at t
|
| set_initial_value(self, y, t=0.0)
| Set initial conditions y(t) = y.
|
| set_integrator(self, name, **integrator_params)
| Set integrator by name.
|
| Parameters
| ----------
| name : str
| Name of the integrator
| integrator_params
| Additional parameters for the integrator.
|
| set_solout(self, solout)
| Set callable to be called at every successful integration step.
|
| Parameters
| ----------
| solout : callable
| ``solout(t, y)`` is called at each internal integrator step,
| t is a scalar providing the current independent position
| y is the current solution ``y.shape == (n,)``
| solout should return -1 to stop integration
| otherwise it should return None or 0
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| y
|
| ----------------------------------------------------------------------
| Methods inherited from ode:
|
| get_return_code(self)
| Extracts the return code for the integration to enable better control
| if the integration fails.
|
| set_f_params(self, *args)
| Set extra parameters for user-supplied function f.
|
| set_jac_params(self, *args)
| Set extra parameters for user-supplied function jac.
|
| successful(self)
| Check if integration was successful.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from ode:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
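
A minimal sketch of `complex_ode` on a complex-valued system; the rotation problem and the choice of the "dopri5" integrator are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import complex_ode

# y' = 1j*w*y rotates on the unit circle; exact solution exp(1j*w*t).
w = 2.0
r = complex_ode(lambda t, y: 1j * w * y)
r.set_integrator('dopri5')     # complex_ode re-maps to a real system internally
r.set_initial_value([1.0 + 0.0j], 0.0)
y_end = r.integrate(1.0)
print(y_end[0], np.exp(1j * w))  # should agree to solver tolerance
```
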
class ode(builtins.object)
| A generic interface class to numeric integrators.
|
| Solve an equation system :math:`y'(t) = f(t,y)` with (optional) ``jac = df/dy``.
|
| *Note*: The first two arguments of ``f(t, y, ...)`` are in the
| opposite order of the arguments in the system definition function used
| by `scipy.integrate.odeint`.
|
| Parameters
| ----------
| f : callable ``f(t, y, *f_args)``
| Right-hand side of the differential equation. t is a scalar,
| ``y.shape == (n,)``.
| ``f_args`` is set by calling ``set_f_params(*args)``.
| `f` should return a scalar, array or list (not a tuple).
| jac : callable ``jac(t, y, *jac_args)``, optional
| Jacobian of the right-hand side, ``jac[i,j] = d f[i] / d y[j]``.
| ``jac_args`` is set by calling ``set_jac_params(*args)``.
|
| Attributes
| ----------
| t : float
| Current time.
| y : ndarray
| Current variable values.
|
| See also
| --------
| odeint : an integrator with a simpler interface based on lsoda from ODEPACK
| quad : for finding the area under a curve
|
| Notes
| -----
| Available integrators are listed below. They can be selected using
| the `set_integrator` method.
|
| "vode"
|
| Real-valued Variable-coefficient Ordinary Differential Equation
| solver, with fixed-leading-coefficient implementation. It provides
| implicit Adams method (for non-stiff problems) and a method based on
| backward differentiation formulas (BDF) (for stiff problems).
|
| Source: http://www.netlib.org/ode/vode.f
|
| .. warning::
|
| This integrator is not re-entrant. You cannot have two `ode`
| instances using the "vode" integrator at the same time.
|
| This integrator accepts the following parameters in `set_integrator`
| method of the `ode` class:
|
| - atol : float or sequence
| absolute tolerance for solution
| - rtol : float or sequence
| relative tolerance for solution
| - lband : None or int
| - uband : None or int
| Jacobian band width, jac[i,j] != 0 for i-lband <= j <= i+uband.
| Setting these requires your jac routine to return the jacobian
| in packed format, jac_packed[i-j+uband, j] = jac[i,j]. The
| dimension of the matrix must be (lband+uband+1, len(y)).
| - method: 'adams' or 'bdf'
| Which solver to use, Adams (non-stiff) or BDF (stiff)
| - with_jacobian : bool
| This option is only considered when the user has not supplied a
| Jacobian function and has not indicated (by setting either band)
| that the Jacobian is banded. In this case, `with_jacobian` specifies
| whether the iteration method of the ODE solver's correction step is
| chord iteration with an internally generated full Jacobian or
| functional iteration with no Jacobian.
| - nsteps : int
| Maximum number of (internally defined) steps allowed during one
| call to the solver.
| - first_step : float
| - min_step : float
| - max_step : float
| Limits for the step sizes used by the integrator.
| - order : int
| Maximum order used by the integrator,
| order <= 12 for Adams, <= 5 for BDF.
|
| "zvode"
|
| Complex-valued Variable-coefficient Ordinary Differential Equation
| solver, with fixed-leading-coefficient implementation. It provides
| implicit Adams method (for non-stiff problems) and a method based on
| backward differentiation formulas (BDF) (for stiff problems).
|
| Source: http://www.netlib.org/ode/zvode.f
|
| .. warning::
|
| This integrator is not re-entrant. You cannot have two `ode`
| instances using the "zvode" integrator at the same time.
|
| This integrator accepts the same parameters in `set_integrator`
| as the "vode" solver.
|
| .. note::
|
| When using ZVODE for a stiff system, it should only be used for
| the case in which the function f is analytic, that is, when each f(i)
| is an analytic function of each y(j). Analyticity means that the
| partial derivative df(i)/dy(j) is a unique complex number, and this
| fact is critical in the way ZVODE solves the dense or banded linear
| systems that arise in the stiff case. For a complex stiff ODE system
| in which f is not analytic, ZVODE is likely to have convergence
| failures, and for this problem one should instead use DVODE on the
| equivalent real system (in the real and imaginary parts of y).
|
| "lsoda"
|
| Real-valued Variable-coefficient Ordinary Differential Equation
| solver, with fixed-leading-coefficient implementation. It provides
| automatic method switching between implicit Adams method (for non-stiff
| problems) and a method based on backward differentiation formulas (BDF)
| (for stiff problems).
|
| Source: http://www.netlib.org/odepack
|
| .. warning::
|
| This integrator is not re-entrant. You cannot have two `ode`
| instances using the "lsoda" integrator at the same time.
|
| This integrator accepts the following parameters in `set_integrator`
| method of the `ode` class:
|
| - atol : float or sequence
| absolute tolerance for solution
| - rtol : float or sequence
| relative tolerance for solution
| - lband : None or int
| - uband : None or int
| Jacobian band width, jac[i,j] != 0 for i-lband <= j <= i+uband.
| Setting these requires your jac routine to return the jacobian
| in packed format, jac_packed[i-j+uband, j] = jac[i,j].
| - with_jacobian : bool
| *Not used.*
| - nsteps : int
| Maximum number of (internally defined) steps allowed during one
| call to the solver.
| - first_step : float
| - min_step : float
| - max_step : float
| Limits for the step sizes used by the integrator.
| - max_order_ns : int
| Maximum order used in the nonstiff case (default 12).
| - max_order_s : int
| Maximum order used in the stiff case (default 5).
| - max_hnil : int
| Maximum number of messages reporting too small step size (t + h = t)
| (default 0)
| - ixpr : int
| Whether to generate extra printing at method switches (default False).
|
| "dopri5"
|
| This is an explicit Runge-Kutta method of order (4)5 due to Dormand &
| Prince (with stepsize control and dense output).
|
| Authors:
|
| E. Hairer and G. Wanner
| Universite de Geneve, Dept. de Mathematiques
| CH-1211 Geneve 24, Switzerland
| e-mail: [email protected], [email protected]
|
| This code is described in [HNW93]_.
|
| This integrator accepts the following parameters in set_integrator()
| method of the ode class:
|
| - atol : float or sequence
| absolute tolerance for solution
| - rtol : float or sequence
| relative tolerance for solution
| - nsteps : int
| Maximum number of (internally defined) steps allowed during one
| call to the solver.
| - first_step : float
| - max_step : float
| - safety : float
| Safety factor on new step selection (default 0.9)
| - ifactor : float
| - dfactor : float
| Maximum factor to increase/decrease step size by in one step
| - beta : float
| Beta parameter for stabilised step size control.
| - verbosity : int
| Switch for printing messages (< 0 for no messages).
|
| "dop853"
|
| This is an explicit Runge-Kutta method of order 8(5,3) due to Dormand
| & Prince (with stepsize control and dense output).
|
| Options and references the same as "dopri5".
|
| Examples
| --------
|
| A problem to integrate and the corresponding jacobian:
|
| >>> from scipy.integrate import ode
| >>>
| >>> y0, t0 = [1.0j, 2.0], 0
| >>>
| >>> def f(t, y, arg1):
| ... return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]
| >>> def jac(t, y, arg1):
| ... return [[1j*arg1, 1], [0, -arg1*2*y[1]]]
|
| The integration:
|
| >>> r = ode(f, jac).set_integrator('zvode', method='bdf')
| >>> r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0)
| >>> t1 = 10
| >>> dt = 1
| >>> while r.successful() and r.t < t1:
| ... print(r.t+dt, r.integrate(r.t+dt))
| 1 [-0.71038232+0.23749653j 0.40000271+0.j ]
| 2.0 [ 0.19098503-0.52359246j 0.22222356+0.j ]
| 3.0 [ 0.47153208+0.52701229j 0.15384681+0.j ]
| 4.0 [-0.61905937+0.30726255j 0.11764744+0.j ]
| 5.0 [ 0.02340997-0.61418799j 0.09523835+0.j ]
| 6.0 [ 0.58643071+0.339819j 0.08000018+0.j ]
| 7.0 [-0.52070105+0.44525141j 0.06896565+0.j ]
| 8.0 [-0.15986733-0.61234476j 0.06060616+0.j ]
| 9.0 [ 0.64850462+0.15048982j 0.05405414+0.j ]
| 10.0 [-0.38404699+0.56382299j 0.04878055+0.j ]
|
| References
| ----------
| .. [HNW93] E. Hairer, S.P. Norsett and G. Wanner, Solving Ordinary
| Differential Equations I: Nonstiff Problems. 2nd edition.
| Springer Series in Computational Mathematics,
| Springer-Verlag (1993)
|
| Methods defined here:
|
| __init__(self, f, jac=None)
| Initialize self. See help(type(self)) for accurate signature.
|
| get_return_code(self)
| Extracts the return code for the integration to enable better control
| if the integration fails.
|
| integrate(self, t, step=False, relax=False)
| Find y=y(t), set y as an initial condition, and return y.
|
| Parameters
| ----------
| t : float
| The endpoint of the integration step.
| step : bool
| If True, and if the integrator supports the step method,
| then perform a single integration step and return.
| This parameter is provided in order to expose internals of
| the implementation, and should not be changed from its default
| value in most cases.
| relax : bool
| If True and if the integrator supports the run_relax method,
| then integrate until t_1 >= t and return. ``relax`` is not
| referenced if ``step=True``.
| This parameter is provided in order to expose internals of
| the implementation, and should not be changed from its default
| value in most cases.
|
| Returns
| -------
| y : float
| The integrated value at t
|
| set_f_params(self, *args)
| Set extra parameters for user-supplied function f.
|
| set_initial_value(self, y, t=0.0)
| Set initial conditions y(t) = y.
|
| set_integrator(self, name, **integrator_params)
| Set integrator by name.
|
| Parameters
| ----------
| name : str
| Name of the integrator.
| integrator_params
| Additional parameters for the integrator.
|
| set_jac_params(self, *args)
| Set extra parameters for user-supplied function jac.
|
| set_solout(self, solout)
| Set callable to be called at every successful integration step.
|
| Parameters
| ----------
| solout : callable
| ``solout(t, y)`` is called at each internal integrator step,
| t is a scalar providing the current independent position
| y is the current solution ``y.shape == (n,)``
| solout should return -1 to stop integration
| otherwise it should return None or 0
|
| successful(self)
| Check if integration was successful.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| y
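
The "zvode" example above can be complemented by a hedged sketch of the "dopri5" integrator together with `set_solout`; note that `set_solout` is called before `set_initial_value` here, which some scipy versions require. Problem and tolerances are illustrative.

```python
import numpy as np
from scipy.integrate import ode

# Exponential decay y' = -y; exact solution exp(-t).
r = ode(lambda t, y: -y).set_integrator('dopri5', rtol=1e-8)

steps = []                                  # filled at each internal step
r.set_solout(lambda t, y: steps.append(t))  # returning None continues integration
r.set_initial_value([1.0], 0.0)

y_end = r.integrate(5.0)
print(r.successful(), y_end[0], np.exp(-5.0))
print(len(steps), "internal steps observed")
```
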
FUNCTIONS
cumtrapz(y, x=None, dx=1.0, axis=-1, initial=None)
Cumulatively integrate y(x) using the composite trapezoidal rule.
Parameters
----------
y : array_like
Values to integrate.
x : array_like, optional
The coordinate to integrate along. If None (default), use spacing `dx`
between consecutive elements in `y`.
dx : float, optional
Spacing between elements of `y`. Only used if `x` is None.
axis : int, optional
Specifies the axis to cumulate. Default is -1 (last axis).
initial : scalar, optional
If given, uses this value as the first value in the returned result.
Typically this value should be 0. Default is None, which means no
value at ``x[0]`` is returned and `res` has one element less than `y`
along the axis of integration.
Returns
-------
res : ndarray
The result of cumulative integration of `y` along `axis`.
If `initial` is None, the shape is such that the axis of integration
has one less value than `y`. If `initial` is given, the shape is equal
to that of `y`.
See Also
--------
numpy.cumsum, numpy.cumprod
quad: adaptive quadrature using QUADPACK
romberg: adaptive Romberg quadrature
quadrature: adaptive Gaussian quadrature
fixed_quad: fixed-order Gaussian quadrature
dblquad: double integrals
tplquad: triple integrals
romb: integrators for sampled data
ode: ODE integrators
odeint: ODE integrators
Examples
--------
>>> from scipy import integrate
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-2, 2, num=20)
>>> y = x
>>> y_int = integrate.cumtrapz(y, x, initial=0)
>>> plt.plot(x, y_int, 'ro', x, y[0] + 0.5 * x**2, 'b-')
>>> plt.show()
dblquad(func, a, b, gfun, hfun, args=(), epsabs=1.49e-08, epsrel=1.49e-08)
Compute a double integral.
Return the double (definite) integral of ``func(y, x)`` from ``x = a..b``
and ``y = gfun(x)..hfun(x)``.
Parameters
----------
func : callable
A Python function or method of at least two variables: y must be the
first argument and x the second argument.
a, b : float
The limits of integration in x: `a` < `b`
gfun : callable
The lower boundary curve in y which is a function taking a single
floating point argument (x) and returning a floating point result: a
lambda function can be useful here.
hfun : callable
The upper boundary curve in y (same requirements as `gfun`).
args : sequence, optional
Extra arguments to pass to `func`.
epsabs : float, optional
Absolute tolerance passed directly to the inner 1-D quadrature
integration. Default is 1.49e-8.
epsrel : float, optional
Relative tolerance of the inner 1-D integrals. Default is 1.49e-8.
Returns
-------
y : float
The resultant integral.
abserr : float
An estimate of the error.
See also
--------
quad : single integral
tplquad : triple integral
nquad : N-dimensional integrals
fixed_quad : fixed-order Gaussian quadrature
quadrature : adaptive Gaussian quadrature
odeint : ODE integrator
ode : ODE integrator
simps : integrator for sampled data
romb : integrator for sampled data
scipy.special : for coefficients and roots of orthogonal polynomials
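
A short worked example for `dblquad`, since none is given above; the triangular region is an illustrative choice with a known exact value.

```python
from scipy.integrate import dblquad

# Integrate f(x, y) = x*y over the triangle 0 <= x <= 1, 0 <= y <= x.
# Mind the argument order documented above: the integrand is func(y, x).
val, abserr = dblquad(lambda y, x: x * y, 0, 1, lambda x: 0, lambda x: x)
print(val)  # exact value: 1/8 = 0.125
```
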
fixed_quad(func, a, b, args=(), n=5)
Compute a definite integral using fixed-order Gaussian quadrature.
Integrate `func` from `a` to `b` using Gaussian quadrature of
order `n`.
Parameters
----------
func : callable
A Python function or method to integrate (must accept vector inputs).
If integrating a vector-valued function, the returned array must have
shape ``(..., len(x))``.
a : float
Lower limit of integration.
b : float
Upper limit of integration.
args : tuple, optional
Extra arguments to pass to function, if any.
n : int, optional
Order of quadrature integration. Default is 5.
Returns
-------
val : float
Gaussian quadrature approximation to the integral
none : None
Statically returned value of None
See Also
--------
quad : adaptive quadrature using QUADPACK
dblquad : double integrals
tplquad : triple integrals
romberg : adaptive Romberg quadrature
quadrature : adaptive Gaussian quadrature
romb : integrators for sampled data
simps : integrators for sampled data
cumtrapz : cumulative integration for sampled data
ode : ODE integrator
odeint : ODE integrator
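
A brief illustration of `fixed_quad` on a smooth integrand with a known exact value:

```python
import numpy as np
from scipy.integrate import fixed_quad

# Order-5 Gaussian quadrature is exact for polynomials up to degree 9
# and already very accurate for smooth integrands such as sin.
val, _ = fixed_quad(np.sin, 0.0, np.pi / 2, n=5)
print(val)  # close to the exact value 1.0
```
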
newton_cotes(rn, equal=0)
Return weights and error coefficient for Newton-Cotes integration.
Suppose we have (N+1) samples of f at the positions
x_0, x_1, ..., x_N. Then an N-point Newton-Cotes formula for the
integral between x_0 and x_N is:
:math:`\int_{x_0}^{x_N} f(x)dx = \Delta x \sum_{i=0}^{N} a_i f(x_i)
+ B_N (\Delta x)^{N+2} f^{N+1} (\xi)`
where :math:`\xi \in [x_0,x_N]`
and :math:`\Delta x = \frac{x_N-x_0}{N}` is the average sample spacing.
If the samples are equally-spaced and N is even, then the error
term is :math:`B_N (\Delta x)^{N+3} f^{N+2}(\xi)`.
Parameters
----------
rn : int
The integer order for equally-spaced data or the relative positions of
the samples with the first sample at 0 and the last at N, where N+1 is
the length of `rn`. N is the order of the Newton-Cotes integration.
equal : int, optional
Set to 1 to enforce equally spaced data.
Returns
-------
an : ndarray
1-D array of weights to apply to the function at the provided sample
positions.
B : float
Error coefficient.
Notes
-----
Normally, the Newton-Cotes rules are used on smaller integration
regions and a composite rule is used to return the total integral.
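
A short sketch showing the weights in use; for `rn = 2` with equally spaced samples they reproduce Simpson's rule, and the test integrand is an illustrative choice.

```python
import numpy as np
from scipy.integrate import newton_cotes

# rn = 2 yields Simpson's rule weights (1/3, 4/3, 1/3) on spacing dx.
an, B = newton_cotes(2, 1)
x = np.linspace(0.0, np.pi / 2, 3)
dx = x[1] - x[0]
approx = dx * np.sum(an * np.sin(x))
print(an, approx)  # approx is close to the exact integral 1.0
```
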
nquad(func, ranges, args=None, opts=None, full_output=False)
Integration over multiple variables.
Wraps `quad` to enable integration over multiple variables.
Various options allow improved integration of discontinuous functions, as
well as the use of weighted integration, and generally finer control of the
integration process.
Parameters
----------
func : {callable, scipy.LowLevelCallable}
The function to be integrated. Has arguments of ``x0, ... xn``,
``t0, tm``, where integration is carried out over ``x0, ... xn``, which
must be floats. Function signature should be
``func(x0, x1, ..., xn, t0, t1, ..., tm)``. Integration is carried out
in order. That is, integration over ``x0`` is the innermost integral,
and ``xn`` is the outermost.
If the user desires improved integration performance, then `f` may
be a `scipy.LowLevelCallable` with one of the signatures::
double func(int n, double *xx)
double func(int n, double *xx, void *user_data)
where ``n`` is the number of extra parameters and args is an array
of doubles of the additional parameters, the ``xx`` array contains the
coordinates. The ``user_data`` is the data contained in the
`scipy.LowLevelCallable`.
ranges : iterable object
Each element of ranges may be either a sequence of 2 numbers, or else
a callable that returns such a sequence. ``ranges[0]`` corresponds to
integration over x0, and so on. If an element of ranges is a callable,
then it will be called with all of the integration arguments available,
as well as any parametric arguments. e.g. if
``func = f(x0, x1, x2, t0, t1)``, then ``ranges[0]`` may be defined as
either ``(a, b)`` or else as ``(a, b) = range0(x1, x2, t0, t1)``.
args : iterable object, optional
Additional arguments ``t0, ..., tn``, required by `func`, `ranges`, and
``opts``.
opts : iterable object or dict, optional
Options to be passed to `quad`. May be empty, a dict, or
a sequence of dicts or functions that return a dict. If empty, the
default options from scipy.integrate.quad are used. If a dict, the same
options are used for all levels of integration. If a sequence, then each
element of the sequence corresponds to a particular integration. e.g.
opts[0] corresponds to integration over x0, and so on. If a callable,
the signature must be the same as for ``ranges``. The available
options together with their default values are:
- epsabs = 1.49e-08
- epsrel = 1.49e-08
- limit = 50
- points = None
- weight = None
- wvar = None
- wopts = None
For more information on these options, see `quad` and `quad_explain`.
full_output : bool, optional
Partial implementation of ``full_output`` from scipy.integrate.quad.
The number of integrand function evaluations ``neval`` can be obtained
by setting ``full_output=True`` when calling nquad.
Returns
-------
result : float
The result of the integration.
abserr : float
The maximum of the estimates of the absolute error in the various
integration results.
out_dict : dict, optional
A dict containing additional information on the integration.
See Also
--------
quad : 1-dimensional numerical integration
dblquad, tplquad : double and triple integrals
fixed_quad : fixed-order Gaussian quadrature
quadrature : adaptive Gaussian quadrature
Examples
--------
>>> from scipy import integrate
>>> func = lambda x0,x1,x2,x3 : x0**2 + x1*x2 - x3**3 + np.sin(x0) + (
... 1 if (x0-.2*x3-.5-.25*x1>0) else 0)
>>> points = [[lambda x1,x2,x3 : 0.2*x3 + 0.5 + 0.25*x1], [], [], []]
>>> def opts0(*args, **kwargs):
... return {'points':[0.2*args[2] + 0.5 + 0.25*args[0]]}
>>> integrate.nquad(func, [[0,1], [-1,1], [.13,.8], [-.15,1]],
... opts=[opts0,{},{},{}], full_output=True)
(1.5267454070738633, 2.9437360001402324e-14, {'neval': 388962})
>>> scale = .1
>>> def func2(x0, x1, x2, x3, t0, t1):
... return x0*x1*x3**2 + np.sin(x2) + 1 + (1 if x0+t1*x1-t0>0 else 0)
>>> def lim0(x1, x2, x3, t0, t1):
... return [scale * (x1**2 + x2 + np.cos(x3)*t0*t1 + 1) - 1,
... scale * (x1**2 + x2 + np.cos(x3)*t0*t1 + 1) + 1]
>>> def lim1(x2, x3, t0, t1):
... return [scale * (t0*x2 + t1*x3) - 1,
... scale * (t0*x2 + t1*x3) + 1]
>>> def lim2(x3, t0, t1):
... return [scale * (x3 + t0**2*t1**3) - 1,
... scale * (x3 + t0**2*t1**3) + 1]
>>> def lim3(t0, t1):
... return [scale * (t0+t1) - 1, scale * (t0+t1) + 1]
>>> def opts0(x1, x2, x3, t0, t1):
... return {'points' : [t0 - t1*x1]}
>>> def opts1(x2, x3, t0, t1):
... return {}
>>> def opts2(x3, t0, t1):
... return {}
>>> def opts3(t0, t1):
... return {}
>>> integrate.nquad(func2, [lim0, lim1, lim2, lim3], args=(0,0),
... opts=[opts0, opts1, opts2, opts3])
(25.066666666666666, 2.7829590483937256e-13)
odeint(func, y0, t, args=(), Dfun=None, col_deriv=0, full_output=0, ml=None, mu=None, rtol=None, atol=None, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12, mxords=5, printmessg=0)
Integrate a system of ordinary differential equations.
Solve a system of ordinary differential equations using lsoda from the
FORTRAN library odepack.
Solves the initial value problem for stiff or non-stiff systems
of first order ode-s::
dy/dt = func(y, t0, ...)
where y can be a vector.
*Note*: The first two arguments of ``func(y, t0, ...)`` are in the
opposite order of the arguments in the system definition function used
by the `scipy.integrate.ode` class.
Parameters
----------
func : callable(y, t0, ...)
Computes the derivative of y at t0.
y0 : array
Initial condition on y (can be a vector).
t : array
A sequence of time points for which to solve for y. The initial
value point should be the first element of this sequence.
args : tuple, optional
Extra arguments to pass to function.
Dfun : callable(y, t0, ...)
Gradient (Jacobian) of `func`.
col_deriv : bool, optional
True if `Dfun` defines derivatives down columns (faster),
otherwise `Dfun` should define derivatives across rows.
full_output : bool, optional
True if to return a dictionary of optional outputs as the second output
printmessg : bool, optional
Whether to print the convergence message
Returns
-------
y : array, shape (len(t), len(y0))
Array containing the value of y for each desired time in t,
with the initial value `y0` in the first row.
infodict : dict, only returned if full_output == True
Dictionary containing additional output information
======= ============================================================
key meaning
======= ============================================================
'hu' vector of step sizes successfully used for each time step.
'tcur' vector with the value of t reached for each time step.
(will always be at least as large as the input times).
'tolsf' vector of tolerance scale factors, greater than 1.0,
computed when a request for too much accuracy was detected.
'tsw' value of t at the time of the last method switch
(given for each time step)
'nst' cumulative number of time steps
'nfe' cumulative number of function evaluations for each time step
'nje' cumulative number of jacobian evaluations for each time step
'nqu' a vector of method orders for each successful step.
'imxer' index of the component of largest magnitude in the
weighted local error vector (e / ewt) on an error return, -1
otherwise.
'lenrw' the length of the double work array required.
'leniw' the length of integer work array required.
'mused' a vector of method indicators for each successful time step:
1: adams (nonstiff), 2: bdf (stiff)
======= ============================================================
Other Parameters
----------------
ml, mu : int, optional
If either of these are not None or non-negative, then the
Jacobian is assumed to be banded. These give the number of
lower and upper non-zero diagonals in this banded matrix.
For the banded case, `Dfun` should return a matrix whose
rows contain the non-zero bands (starting with the lowest diagonal).
Thus, the return matrix `jac` from `Dfun` should have shape
``(ml + mu + 1, len(y0))`` when ``ml >=0`` or ``mu >=0``.
The data in `jac` must be stored such that ``jac[i - j + mu, j]``
holds the derivative of the `i`th equation with respect to the `j`th
state variable. If `col_deriv` is True, the transpose of this
`jac` must be returned.
rtol, atol : float, optional
The input parameters `rtol` and `atol` determine the error
control performed by the solver. The solver will control the
vector, e, of estimated local errors in y, according to an
inequality of the form ``max-norm of (e / ewt) <= 1``,
where ewt is a vector of positive error weights computed as
``ewt = rtol * abs(y) + atol``.
rtol and atol can be either vectors the same length as y or scalars.
Defaults to 1.49012e-8.
tcrit : ndarray, optional
Vector of critical points (e.g. singularities) where integration
care should be taken.
h0 : float, (0: solver-determined), optional
The step size to be attempted on the first step.
hmax : float, (0: solver-determined), optional
The maximum absolute step size allowed.
hmin : float, (0: solver-determined), optional
The minimum absolute step size allowed.
ixpr : bool, optional
Whether to generate extra printing at method switches.
mxstep : int, (0: solver-determined), optional
Maximum number of (internally defined) steps allowed for each
integration point in t.
mxhnil : int, (0: solver-determined), optional
Maximum number of messages printed.
mxordn : int, (0: solver-determined), optional
Maximum order to be allowed for the non-stiff (Adams) method.
mxords : int, (0: solver-determined), optional
Maximum order to be allowed for the stiff (BDF) method.
See Also
--------
ode : a more object-oriented integrator based on VODE.
quad : for finding the area under a curve.
Examples
--------
The second order differential equation for the angle `theta` of a
pendulum acted on by gravity with friction can be written::
theta''(t) + b*theta'(t) + c*sin(theta(t)) = 0
where `b` and `c` are positive constants, and a prime (') denotes a
derivative. To solve this equation with `odeint`, we must first convert
it to a system of first order equations. By defining the angular
velocity ``omega(t) = theta'(t)``, we obtain the system::
theta'(t) = omega(t)
omega'(t) = -b*omega(t) - c*sin(theta(t))
Let `y` be the vector [`theta`, `omega`]. We implement this system
in python as:
>>> def pend(y, t, b, c):
... theta, omega = y
... dydt = [omega, -b*omega - c*np.sin(theta)]
... return dydt
...
We assume the constants are `b` = 0.25 and `c` = 5.0:
>>> b = 0.25
>>> c = 5.0
For initial conditions, we assume the pendulum is nearly vertical
with `theta(0)` = `pi` - 0.1, and is initially at rest, so
`omega(0)` = 0. Then the vector of initial conditions is
>>> y0 = [np.pi - 0.1, 0.0]
We generate a solution at 101 evenly spaced samples in the interval
0 <= `t` <= 10. So our array of times is:
>>> t = np.linspace(0, 10, 101)
Call `odeint` to generate the solution. To pass the parameters
`b` and `c` to `pend`, we give them to `odeint` using the `args`
argument.
>>> from scipy.integrate import odeint
>>> sol = odeint(pend, y0, t, args=(b, c))
The solution is an array with shape (101, 2). The first column
is `theta(t)`, and the second is `omega(t)`. The following code
plots both components.
>>> import matplotlib.pyplot as plt
>>> plt.plot(t, sol[:, 0], 'b', label='theta(t)')
>>> plt.plot(t, sol[:, 1], 'g', label='omega(t)')
>>> plt.legend(loc='best')
>>> plt.xlabel('t')
>>> plt.grid()
>>> plt.show()
quad(func, a, b, args=(), full_output=0, epsabs=1.49e-08, epsrel=1.49e-08, limit=50, points=None, weight=None, wvar=None, wopts=None, maxp1=50, limlst=50)
Compute a definite integral.
Integrate func from `a` to `b` (possibly infinite interval) using a
technique from the Fortran library QUADPACK.
Parameters
----------
func : {function, scipy.LowLevelCallable}
A Python function or method to integrate. If `func` takes many
arguments, it is integrated along the axis corresponding to the
first argument.
If the user desires improved integration performance, then `f` may
be a `scipy.LowLevelCallable` with one of the signatures::
double func(double x)
double func(double x, void *user_data)
double func(int n, double *xx)
double func(int n, double *xx, void *user_data)
The ``user_data`` is the data contained in the `scipy.LowLevelCallable`.
In the call forms with ``xx``, ``n`` is the length of the ``xx``
array which contains ``xx[0] == x`` and the rest of the items are
numbers contained in the ``args`` argument of quad.
In addition, certain ctypes call signatures are supported for
backward compatibility, but those should not be used in new code.
a : float
Lower limit of integration (use -numpy.inf for -infinity).
b : float
Upper limit of integration (use numpy.inf for +infinity).
args : tuple, optional
Extra arguments to pass to `func`.
full_output : int, optional
Non-zero to return a dictionary of integration information.
If non-zero, warning messages are also suppressed and the
message is appended to the output tuple.
Returns
-------
y : float
The integral of func from `a` to `b`.
abserr : float
An estimate of the absolute error in the result.
infodict : dict
A dictionary containing additional information.
Run scipy.integrate.quad_explain() for more information.
message
A convergence message.
explain
Appended only with 'cos' or 'sin' weighting and infinite
integration limits, it contains an explanation of the codes in
infodict['ierlst']
Other Parameters
----------------
epsabs : float or int, optional
Absolute error tolerance.
epsrel : float or int, optional
Relative error tolerance.
limit : float or int, optional
An upper bound on the number of subintervals used in the adaptive
algorithm.
points : (sequence of floats,ints), optional
A sequence of break points in the bounded integration interval
where local difficulties of the integrand may occur (e.g.,
singularities, discontinuities). The sequence does not have
to be sorted.
weight : str, optional
String indicating weighting function. Full explanation for this
and the remaining arguments can be found below.
wvar : optional
Variables for use with weighting functions.
wopts : optional
Optional input for reusing Chebyshev moments.
maxp1 : float or int, optional
An upper bound on the number of Chebyshev moments.
limlst : int, optional
Upper bound on the number of cycles (>=3) for use with a sinusoidal
weighting and an infinite end-point.
See Also
--------
dblquad : double integral
tplquad : triple integral
nquad : n-dimensional integrals (uses `quad` recursively)
fixed_quad : fixed-order Gaussian quadrature
quadrature : adaptive Gaussian quadrature
odeint : ODE integrator
ode : ODE integrator
simps : integrator for sampled data
romb : integrator for sampled data
scipy.special : for coefficients and roots of orthogonal polynomials
Notes
-----
**Extra information for quad() inputs and outputs**
If full_output is non-zero, then the third output argument
(infodict) is a dictionary with entries as tabulated below. For
infinite limits, the range is transformed to (0,1) and the
optional outputs are given with respect to this transformed range.
Let M be the input argument limit and let K be infodict['last'].
The entries are:
'neval'
The number of function evaluations.
'last'
The number, K, of subintervals produced in the subdivision process.
'alist'
A rank-1 array of length M, the first K elements of which are the
left end points of the subintervals in the partition of the
integration range.
'blist'
A rank-1 array of length M, the first K elements of which are the
right end points of the subintervals.
'rlist'
A rank-1 array of length M, the first K elements of which are the
integral approximations on the subintervals.
'elist'
A rank-1 array of length M, the first K elements of which are the
moduli of the absolute error estimates on the subintervals.
'iord'
A rank-1 integer array of length M, the first L elements of
which are pointers to the error estimates over the subintervals
with ``L=K`` if ``K<=M/2+2`` or ``L=M+1-K`` otherwise. Let I be the
sequence ``infodict['iord']`` and let E be the sequence
``infodict['elist']``. Then ``E[I[1]], ..., E[I[L]]`` forms a
decreasing sequence.
If the input argument points is provided (i.e. it is not None),
the following additional outputs are placed in the output
dictionary. Assume the points sequence is of length P.
'pts'
A rank-1 array of length P+2 containing the integration limits
and the break points of the intervals in ascending order.
This is an array giving the subintervals over which integration
will occur.
'level'
A rank-1 integer array of length M (=limit), containing the
subdivision levels of the subintervals, i.e., if (aa,bb) is a
subinterval of ``(pts[1], pts[2])`` where ``pts[0]`` and ``pts[2]``
are adjacent elements of ``infodict['pts']``, then (aa,bb) has level l
if ``|bb-aa| = |pts[2]-pts[1]| * 2**(-l)``.
'ndin'
A rank-1 integer array of length P+2. After the first integration
over the intervals (pts[1], pts[2]), the error estimates over some
of the intervals may have been increased artificially in order to
put their subdivision forward. This array has ones in slots
corresponding to the subintervals for which this happens.
**Weighting the integrand**
The input variables, *weight* and *wvar*, are used to weight the
integrand by a select list of functions. Different integration
methods are used to compute the integral with these weighting
functions. The possible values of weight and the corresponding
weighting functions are.
========== =================================== =====================
``weight`` Weight function used ``wvar``
========== =================================== =====================
'cos' cos(w*x) wvar = w
'sin' sin(w*x) wvar = w
'alg' g(x) = ((x-a)**alpha)*((b-x)**beta) wvar = (alpha, beta)
'alg-loga' g(x)*log(x-a) wvar = (alpha, beta)
'alg-logb' g(x)*log(b-x) wvar = (alpha, beta)
'alg-log' g(x)*log(x-a)*log(b-x) wvar = (alpha, beta)
'cauchy' 1/(x-c) wvar = c
========== =================================== =====================
wvar holds the parameter w, (alpha, beta), or c depending on the weight
selected. In these expressions, a and b are the integration limits.
For the 'cos' and 'sin' weighting, additional inputs and outputs are
available.
For finite integration limits, the integration is performed using a
Clenshaw-Curtis method which uses Chebyshev moments. For repeated
calculations, these moments are saved in the output dictionary:
'momcom'
The maximum level of Chebyshev moments that have been computed,
i.e., if ``M_c`` is ``infodict['momcom']`` then the moments have been
computed for intervals of length ``|b-a| * 2**(-l)``,
``l=0,1,...,M_c``.
'nnlog'
A rank-1 integer array of length M(=limit), containing the
subdivision levels of the subintervals, i.e., an element of this
array is equal to l if the corresponding subinterval is
``|b-a|* 2**(-l)``.
'chebmo'
A rank-2 array of shape (25, maxp1) containing the computed
Chebyshev moments. These can be passed on to an integration
over the same interval by passing this array as the second
element of the sequence wopts and passing infodict['momcom'] as
the first element.
If one of the integration limits is infinite, then a Fourier integral is
computed (assuming w neq 0). If full_output is 1 and a numerical error
is encountered, besides the error message attached to the output tuple,
a dictionary is also appended to the output tuple which translates the
error codes in the array ``info['ierlst']`` to English messages. The
output information dictionary contains the following entries instead of
'last', 'alist', 'blist', 'rlist', and 'elist':
'lst'
The number of subintervals needed for the integration (call it ``K_f``).
'rslst'
A rank-1 array of length M_f=limlst, whose first ``K_f`` elements
contain the integral contribution over the interval
``(a+(k-1)c, a+kc)`` where ``c = (2*floor(|w|) + 1) * pi / |w|``
and ``k=1,2,...,K_f``.
'erlst'
A rank-1 array of length ``M_f`` containing the error estimate
corresponding to the interval in the same position in
``infodict['rslist']``.
'ierlst'
A rank-1 integer array of length ``M_f`` containing an error flag
corresponding to the interval in the same position in
``infodict['rslist']``. See the explanation dictionary (last entry
in the output tuple) for the meaning of the codes.
Examples
--------
Calculate :math:`\int^4_0 x^2 dx` and compare with an analytic result
>>> from scipy import integrate
>>> x2 = lambda x: x**2
>>> integrate.quad(x2, 0, 4)
(21.333333333333332, 2.3684757858670003e-13)
>>> print(4**3 / 3.) # analytical result
21.3333333333
Calculate :math:`\int^\infty_0 e^{-x} dx`
>>> invexp = lambda x: np.exp(-x)
>>> integrate.quad(invexp, 0, np.inf)
(1.0, 5.842605999138044e-11)
>>> f = lambda x,a : a*x
>>> y, err = integrate.quad(f, 0, 1, args=(1,))
>>> y
0.5
>>> y, err = integrate.quad(f, 0, 1, args=(3,))
>>> y
1.5
Calculate :math:`\int^1_0 x^2 + y^2 dx` with ctypes, holding
y parameter as 1::
testlib.c =>
double func(int n, double args[n]){
return args[0]*args[0] + args[1]*args[1];}
compile to library testlib.*
::
from scipy import integrate
import ctypes
lib = ctypes.CDLL('/home/.../testlib.*') #use absolute path
lib.func.restype = ctypes.c_double
lib.func.argtypes = (ctypes.c_int,ctypes.c_double)
integrate.quad(lib.func,0,1,(1))
#(1.3333333333333333, 1.4802973661668752e-14)
print((1.0**3/3.0 + 1.0) - (0.0**3/3.0 + 0.0)) #Analytic result
# 1.3333333333333333
quad_explain(output=<ipykernel.iostream.OutStream object at 0x7f91caf55470>)
Print extra information about integrate.quad() parameters and returns.
Parameters
----------
output : instance with "write" method, optional
Information about `quad` is passed to ``output.write()``.
Default is ``sys.stdout``.
Returns
-------
None
quadrature(func, a, b, args=(), tol=1.49e-08, rtol=1.49e-08, maxiter=50, vec_func=True, miniter=1)
Compute a definite integral using fixed-tolerance Gaussian quadrature.
Integrate `func` from `a` to `b` using Gaussian quadrature
with absolute tolerance `tol`.
Parameters
----------
func : function
A Python function or method to integrate.
a : float
Lower limit of integration.
b : float
Upper limit of integration.
args : tuple, optional
Extra arguments to pass to function.
tol, rtol : float, optional
Iteration stops when error between last two iterates is less than
`tol` OR the relative change is less than `rtol`.
maxiter : int, optional
Maximum order of Gaussian quadrature.
vec_func : bool, optional
True or False if func handles arrays as arguments (is
a "vector" function). Default is True.
miniter : int, optional
Minimum order of Gaussian quadrature.
Returns
-------
val : float
Gaussian quadrature approximation (within tolerance) to integral.
err : float
Difference between last two estimates of the integral.
See also
--------
romberg: adaptive Romberg quadrature
fixed_quad: fixed-order Gaussian quadrature
quad: adaptive quadrature using QUADPACK
dblquad: double integrals
tplquad: triple integrals
romb: integrator for sampled data
simps: integrator for sampled data
cumtrapz: cumulative integration for sampled data
ode: ODE integrator
odeint: ODE integrator
romb(y, dx=1.0, axis=-1, show=False)
Romberg integration using samples of a function.
Parameters
----------
y : array_like
A vector of ``2**k + 1`` equally-spaced samples of a function.
dx : float, optional
The sample spacing. Default is 1.
axis : int, optional
The axis along which to integrate. Default is -1 (last axis).
show : bool, optional
When `y` is a single 1-D array, then if this argument is True
print the table showing Richardson extrapolation from the
samples. Default is False.
Returns
-------
romb : ndarray
The integrated result for `axis`.
See also
--------
quad : adaptive quadrature using QUADPACK
romberg : adaptive Romberg quadrature
quadrature : adaptive Gaussian quadrature
fixed_quad : fixed-order Gaussian quadrature
dblquad : double integrals
tplquad : triple integrals
simps : integrators for sampled data
cumtrapz : cumulative integration for sampled data
ode : ODE integrators
odeint : ODE integrators
romberg(function, a, b, args=(), tol=1.48e-08, rtol=1.48e-08, show=False, divmax=10, vec_func=False)
Romberg integration of a callable function or method.
Returns the integral of `function` (a function of one variable)
over the interval (`a`, `b`).
If `show` is 1, the triangular array of the intermediate results
will be printed. If `vec_func` is True (default is False), then
`function` is assumed to support vector arguments.
Parameters
----------
function : callable
Function to be integrated.
a : float
Lower limit of integration.
b : float
Upper limit of integration.
Returns
-------
results : float
Result of the integration.
Other Parameters
----------------
args : tuple, optional
Extra arguments to pass to function. Each element of `args` will
be passed as a single argument to `func`. Default is to pass no
extra arguments.
tol, rtol : float, optional
The desired absolute and relative tolerances. Defaults are 1.48e-8.
show : bool, optional
Whether to print the results. Default is False.
divmax : int, optional
Maximum order of extrapolation. Default is 10.
vec_func : bool, optional
Whether `func` handles arrays as arguments (i.e. whether it is a
"vector" function). Default is False.
See Also
--------
fixed_quad : Fixed-order Gaussian quadrature.
quad : Adaptive quadrature using QUADPACK.
dblquad : Double integrals.
tplquad : Triple integrals.
romb : Integrators for sampled data.
simps : Integrators for sampled data.
cumtrapz : Cumulative integration for sampled data.
ode : ODE integrator.
odeint : ODE integrator.
References
----------
.. [1] 'Romberg's method' http://en.wikipedia.org/wiki/Romberg%27s_method
Examples
--------
Integrate a gaussian from 0 to 1 and compare to the error function.
>>> from scipy import integrate
>>> from scipy.special import erf
>>> gaussian = lambda x: 1/np.sqrt(np.pi) * np.exp(-x**2)
>>> result = integrate.romberg(gaussian, 0, 1, show=True)
Romberg integration of <function vfunc at ...> from [0, 1]
::
Steps StepSize Results
1 1.000000 0.385872
2 0.500000 0.412631 0.421551
4 0.250000 0.419184 0.421368 0.421356
8 0.125000 0.420810 0.421352 0.421350 0.421350
16 0.062500 0.421215 0.421350 0.421350 0.421350 0.421350
32 0.031250 0.421317 0.421350 0.421350 0.421350 0.421350 0.421350
The final result is 0.421350396475 after 33 function evaluations.
>>> print("%g %g" % (2*result, erf(1)))
0.842701 0.842701
simps(y, x=None, dx=1, axis=-1, even='avg')
Integrate y(x) using samples along the given axis and the composite
Simpson's rule. If x is None, spacing of dx is assumed.
If there are an even number of samples, N, then there are an odd
number of intervals (N-1), but Simpson's rule requires an even number
of intervals. The parameter 'even' controls how this is handled.
Parameters
----------
y : array_like
Array to be integrated.
x : array_like, optional
If given, the points at which `y` is sampled.
dx : int, optional
Spacing of integration points along axis of `y`. Only used when
`x` is None. Default is 1.
axis : int, optional
Axis along which to integrate. Default is the last axis.
even : str {'avg', 'first', 'last'}, optional
'avg' : Average two results: 1) use the first N-2 intervals with
a trapezoidal rule on the last interval and 2) use the last
N-2 intervals with a trapezoidal rule on the first interval.
'first' : Use Simpson's rule for the first N-2 intervals with
a trapezoidal rule on the last interval.
'last' : Use Simpson's rule for the last N-2 intervals with a
trapezoidal rule on the first interval.
See Also
--------
quad: adaptive quadrature using QUADPACK
romberg: adaptive Romberg quadrature
quadrature: adaptive Gaussian quadrature
fixed_quad: fixed-order Gaussian quadrature
dblquad: double integrals
tplquad: triple integrals
romb: integrators for sampled data
cumtrapz: cumulative integration for sampled data
ode: ODE integrators
odeint: ODE integrators
Notes
-----
For an odd number of samples that are equally spaced the result is
exact if the function is a polynomial of order 3 or less. If
the samples are not equally spaced, then the result is exact only
if the function is a polynomial of order 2 or less.
solve_bvp(fun, bc, x, y, p=None, S=None, fun_jac=None, bc_jac=None, tol=0.001, max_nodes=1000, verbose=0)
Solve a boundary-value problem for a system of ODEs.
This function numerically solves a first order system of ODEs subject to
two-point boundary conditions::
dy / dx = f(x, y, p) + S * y / (x - a), a <= x <= b
bc(y(a), y(b), p) = 0
Here x is a 1-dimensional independent variable, y(x) is a n-dimensional
vector-valued function and p is a k-dimensional vector of unknown
parameters which is to be found along with y(x). For the problem to be
determined there must be n + k boundary conditions, i.e. bc must be an
(n + k)-dimensional function.
The last singular term in the right-hand side of the system is optional.
It is defined by an n-by-n matrix S, such that the solution must satisfy
S y(a) = 0. This condition will be forced during iterations, so it must not
contradict the boundary conditions. See [2]_ for an explanation of how this term
is handled when solving BVPs numerically.
Problems in a complex domain can be solved as well. In this case y and p
are considered to be complex, and f and bc are assumed to be complex-valued
functions, but x stays real. Note that f and bc must be complex
differentiable (satisfy Cauchy-Riemann equations [4]_), otherwise you
should rewrite your problem for real and imaginary parts separately. To
solve a problem in a complex domain, pass an initial guess for y with a
complex data type (see below).
Parameters
----------
fun : callable
Right-hand side of the system. The calling signature is ``fun(x, y)``,
or ``fun(x, y, p)`` if parameters are present. All arguments are
ndarray: ``x`` with shape (m,), ``y`` with shape (n, m), meaning that
``y[:, i]`` corresponds to ``x[i]``, and ``p`` with shape (k,). The
return value must be an array with shape (n, m) and with the same
layout as ``y``.
bc : callable
Function evaluating residuals of the boundary conditions. The calling
signature is ``bc(ya, yb)``, or ``bc(ya, yb, p)`` if parameters are
present. All arguments are ndarray: ``ya`` and ``yb`` with shape (n,),
and ``p`` with shape (k,). The return value must be an array with
shape (n + k,).
x : array_like, shape (m,)
Initial mesh. Must be a strictly increasing sequence of real numbers
with ``x[0]=a`` and ``x[-1]=b``.
y : array_like, shape (n, m)
Initial guess for the function values at the mesh nodes, i-th column
corresponds to ``x[i]``. For problems in a complex domain pass `y`
with a complex data type (even if the initial guess is purely real).
p : array_like with shape (k,) or None, optional
Initial guess for the unknown parameters. If None (default), it is
assumed that the problem doesn't depend on any parameters.
S : array_like with shape (n, n) or None
Matrix defining the singular term. If None (default), the problem is
solved without the singular term.
fun_jac : callable or None, optional
Function computing derivatives of f with respect to y and p. The
calling signature is ``fun_jac(x, y)``, or ``fun_jac(x, y, p)`` if
parameters are present. The return must contain 1 or 2 elements in the
following order:
* df_dy : array_like with shape (n, n, m) where an element
(i, j, q) equals to d f_i(x_q, y_q, p) / d (y_q)_j.
* df_dp : array_like with shape (n, k, m) where an element
(i, j, q) equals to d f_i(x_q, y_q, p) / d p_j.
Here q numbers nodes at which x and y are defined, whereas i and j
number vector components. If the problem is solved without unknown
parameters df_dp should not be returned.
If `fun_jac` is None (default), the derivatives will be estimated
by the forward finite differences.
bc_jac : callable or None, optional
Function computing derivatives of bc with respect to ya, yb and p.
The calling signature is ``bc_jac(ya, yb)``, or ``bc_jac(ya, yb, p)``
if parameters are present. The return must contain 2 or 3 elements in
the following order:
* dbc_dya : array_like with shape (n, n) where an element (i, j)
equals to d bc_i(ya, yb, p) / d ya_j.
* dbc_dyb : array_like with shape (n, n) where an element (i, j)
equals to d bc_i(ya, yb, p) / d yb_j.
* dbc_dp : array_like with shape (n, k) where an element (i, j)
equals to d bc_i(ya, yb, p) / d p_j.
If the problem is solved without unknown parameters dbc_dp should not
be returned.
If `bc_jac` is None (default), the derivatives will be estimated by
the forward finite differences.
tol : float, optional
Desired tolerance of the solution. If we define ``r = y' - f(x, y)``
where y is the found solution, then the solver tries to achieve on each
mesh interval ``norm(r / (1 + abs(f)) < tol``, where ``norm`` is
estimated in a root mean squared sense (using a numerical quadrature
formula). Default is 1e-3.
max_nodes : int, optional
Maximum allowed number of the mesh nodes. If exceeded, the algorithm
terminates. Default is 1000.
verbose : {0, 1, 2}, optional
Level of algorithm's verbosity:
* 0 (default) : work silently.
* 1 : display a termination report.
* 2 : display progress during iterations.
Returns
-------
Bunch object with the following fields defined:
sol : PPoly
Found solution for y as `scipy.interpolate.PPoly` instance, a C1
continuous cubic spline.
p : ndarray or None, shape (k,)
Found parameters. None, if the parameters were not present in the
problem.
x : ndarray, shape (m,)
Nodes of the final mesh.
y : ndarray, shape (n, m)
Solution values at the mesh nodes.
yp : ndarray, shape (n, m)
Solution derivatives at the mesh nodes.
rms_residuals : ndarray, shape (m - 1,)
RMS values of the relative residuals over each mesh interval (see the
description of `tol` parameter).
niter : int
Number of completed iterations.
status : int
Reason for algorithm termination:
* 0: The algorithm converged to the desired accuracy.
* 1: The maximum number of mesh nodes is exceeded.
* 2: A singular Jacobian encountered when solving the collocation
system.
message : string
Verbal description of the termination reason.
success : bool
True if the algorithm converged to the desired accuracy (``status=0``).
Notes
-----
This function implements a 4-th order collocation algorithm with the
control of residuals similar to [1]_. A collocation system is solved
by a damped Newton method with an affine-invariant criterion function as
described in [3]_.
Note that in [1]_ integral residuals are defined without normalization
by interval lengths. So their definition is different by a multiplier of
h**0.5 (h is an interval length) from the definition used here.
.. versionadded:: 0.18.0
References
----------
.. [1] J. Kierzenka, L. F. Shampine, "A BVP Solver Based on Residual
Control and the MATLAB PSE", ACM Trans. Math. Softw., Vol. 27,
Number 3, pp. 299-316, 2001.
.. [2] L.F. Shampine, P. H. Muir and H. Xu, "A User-Friendly Fortran BVP
Solver".
.. [3] U. Ascher, R. Mattheij and R. Russell "Numerical Solution of
Boundary Value Problems for Ordinary Differential Equations".
.. [4] `Cauchy-Riemann equations
<https://en.wikipedia.org/wiki/Cauchy-Riemann_equations>`_ on
Wikipedia.
Examples
--------
In the first example we solve Bratu's problem::
y'' + k * exp(y) = 0
y(0) = y(1) = 0
for k = 1.
We rewrite the equation as a first order system and implement its
right-hand side evaluation::
y1' = y2
y2' = -exp(y1)
>>> def fun(x, y):
... return np.vstack((y[1], -np.exp(y[0])))
Implement evaluation of the boundary condition residuals:
>>> def bc(ya, yb):
... return np.array([ya[0], yb[0]])
Define the initial mesh with 5 nodes:
>>> x = np.linspace(0, 1, 5)
This problem is known to have two solutions. To obtain both of them we
use two different initial guesses for y. We denote them by subscripts
a and b.
>>> y_a = np.zeros((2, x.size))
>>> y_b = np.zeros((2, x.size))
>>> y_b[0] = 3
Now we are ready to run the solver.
>>> from scipy.integrate import solve_bvp
>>> res_a = solve_bvp(fun, bc, x, y_a)
>>> res_b = solve_bvp(fun, bc, x, y_b)
Let's plot the two found solutions. We take advantage of having the
solution in a spline form to produce a smooth plot.
>>> x_plot = np.linspace(0, 1, 100)
>>> y_plot_a = res_a.sol(x_plot)[0]
>>> y_plot_b = res_b.sol(x_plot)[0]
>>> import matplotlib.pyplot as plt
>>> plt.plot(x_plot, y_plot_a, label='y_a')
>>> plt.plot(x_plot, y_plot_b, label='y_b')
>>> plt.legend()
>>> plt.xlabel("x")
>>> plt.ylabel("y")
>>> plt.show()
We see that the two solutions have similar shape, but differ in scale
significantly.
In the second example we solve a simple Sturm-Liouville problem::
y'' + k**2 * y = 0
y(0) = y(1) = 0
It is known that a non-trivial solution y = A * sin(k * x) is possible for
k = pi * n, where n is an integer. To establish the normalization constant
A = 1 we add a boundary condition::
y'(0) = k
Again we rewrite our equation as a first order system and implement its
right-hand side evaluation::
y1' = y2
y2' = -k**2 * y1
>>> def fun(x, y, p):
... k = p[0]
... return np.vstack((y[1], -k**2 * y[0]))
Note that parameters p are passed as a vector (with one element in our
case).
Implement the boundary conditions:
>>> def bc(ya, yb, p):
... k = p[0]
... return np.array([ya[0], yb[0], ya[1] - k])
Set up the initial mesh and guess for y. We aim to find the solution for
k = 2 * pi; to achieve that, we set values of y to approximately follow
sin(2 * pi * x):
>>> x = np.linspace(0, 1, 5)
>>> y = np.zeros((2, x.size))
>>> y[0, 1] = 1
>>> y[0, 3] = -1
Run the solver with 6 as an initial guess for k.
>>> sol = solve_bvp(fun, bc, x, y, p=[6])
We see that the found k is approximately correct:
>>> sol.p[0]
6.28329460046
And finally plot the solution to see the anticipated sinusoid:
>>> x_plot = np.linspace(0, 1, 100)
>>> y_plot = sol.sol(x_plot)[0]
>>> plt.plot(x_plot, y_plot)
>>> plt.xlabel("x")
>>> plt.ylabel("y")
>>> plt.show()
solve_ivp(fun, t_span, y0, method='RK45', t_eval=None, dense_output=False, events=None, vectorized=False, **options)
Solve an initial value problem for a system of ODEs.
This function numerically integrates a system of ordinary differential
equations given an initial value::
dy / dt = f(t, y)
y(t0) = y0
Here t is a 1-dimensional independent variable (time), y(t) is an
n-dimensional vector-valued function (state) and an n-dimensional
vector-valued function f(t, y) determines the differential equations.
The goal is to find y(t) approximately satisfying the differential
equations, given an initial value y(t0)=y0.
Some of the solvers support integration in a complex domain, but note that
for stiff ODE solvers the right hand side must be complex differentiable
(satisfy Cauchy-Riemann equations [11]_). To solve a problem in a complex
domain, pass y0 with a complex data type. Another option always available
is to rewrite your problem for real and imaginary parts separately.
Parameters
----------
fun : callable
Right-hand side of the system. The calling signature is ``fun(t, y)``.
Here ``t`` is a scalar and there are two options for ndarray ``y``.
It can either have shape (n,), then ``fun`` must return array_like with
shape (n,). Or alternatively it can have shape (n, k), then ``fun``
must return array_like with shape (n, k), i.e. each column
corresponds to a single column in ``y``. The choice between the two
options is determined by `vectorized` argument (see below). The
vectorized implementation allows faster approximation of the Jacobian
by finite differences (required for stiff solvers).
t_span : 2-tuple of floats
Interval of integration (t0, tf). The solver starts with t=t0 and
integrates until it reaches t=tf.
y0 : array_like, shape (n,)
Initial state. For problems in a complex domain pass `y0` with a
complex data type (even if the initial guess is purely real).
method : string or `OdeSolver`, optional
Integration method to use:
* 'RK45' (default): Explicit Runge-Kutta method of order 5(4) [1]_.
The error is controlled assuming 4th order accuracy, but steps
are taken using a 5th order accurate formula (local extrapolation
is done). A quartic interpolation polynomial is used for the
dense output [2]_. Can be applied in a complex domain.
* 'RK23': Explicit Runge-Kutta method of order 3(2) [3]_. The error
is controlled assuming 2nd order accuracy, but steps are taken
using a 3rd order accurate formula (local extrapolation is done).
A cubic Hermite polynomial is used for the dense output.
Can be applied in a complex domain.
* 'Radau': Implicit Runge-Kutta method of Radau IIA family of
order 5 [4]_. The error is controlled for a 3rd order accurate
embedded formula. A cubic polynomial which satisfies the
collocation conditions is used for the dense output.
* 'BDF': Implicit multi-step variable order (1 to 5) method based
on Backward Differentiation Formulas for the derivative
approximation [5]_. An implementation approach follows the one
described in [6]_. A quasi-constant step scheme is used
and accuracy enhancement using NDF modification is also
implemented. Can be applied in a complex domain.
* 'LSODA': Adams/BDF method with automatic stiffness detection and
switching [7]_, [8]_. This is a wrapper of the Fortran solver
from ODEPACK.
You should use 'RK45' or 'RK23' methods for non-stiff problems and
'Radau' or 'BDF' for stiff problems [9]_. If not sure, first try to run
'RK45', and if it performs unusually many iterations or diverges then your
problem is likely to be stiff and you should use 'Radau' or 'BDF'.
'LSODA' can also be a good universal choice, but it might be somewhat
less convenient to work with as it wraps an old Fortran code.
You can also pass an arbitrary class derived from `OdeSolver` which
implements the solver.
dense_output : bool, optional
Whether to compute a continuous solution. Default is False.
t_eval : array_like or None, optional
Times at which to store the computed solution, must be sorted and lie
within `t_span`. If None (default), use points selected by a solver.
events : callable, list of callables or None, optional
Events to track. Events are defined by functions which take
a zero value at a point of an event. Each function must have a
signature ``event(t, y)`` and return float, the solver will find an
accurate value of ``t`` at which ``event(t, y(t)) = 0`` using a root
finding algorithm. Additionally each ``event`` function might have
attributes:
* terminal: bool, whether to terminate integration if this
event occurs. Implicitly False if not assigned.
* direction: float, direction of crossing a zero. If `direction`
is positive then `event` must go from negative to positive, and
vice-versa if `direction` is negative. If 0, then either way will
count. Implicitly 0 if not assigned.
You can assign attributes like ``event.terminal = True`` to any
function in Python. If None (default), events won't be tracked.
vectorized : bool, optional
Whether `fun` is implemented in a vectorized fashion. Default is False.
options
Options passed to a chosen solver constructor. All options available
for already implemented solvers are listed below.
max_step : float, optional
Maximum allowed step size. Default is np.inf, i.e. step is not
bounded and determined solely by the solver.
rtol, atol : float and array_like, optional
Relative and absolute tolerances. The solver keeps the local error
estimates less than ``atol + rtol * abs(y)``. Here `rtol` controls a
relative accuracy (number of correct digits). But if a component of `y`
is approximately below `atol` then the error only needs to fall within
the same `atol` threshold, and the number of correct digits is not
guaranteed. If components of y have different scales, it might be
beneficial to set different `atol` values for different components by
passing array_like with shape (n,) for `atol`. Default values are
1e-3 for `rtol` and 1e-6 for `atol`.
jac : {None, array_like, sparse_matrix, callable}, optional
Jacobian matrix of the right-hand side of the system with respect to
y, required by 'Radau', 'BDF' and 'LSODA' methods. The Jacobian matrix
has shape (n, n) and its element (i, j) is equal to ``d f_i / d y_j``.
There are 3 ways to define the Jacobian:
* If array_like or sparse_matrix, then the Jacobian is assumed to
be constant. Not supported by 'LSODA'.
* If callable, then the Jacobian is assumed to depend on both
t and y, and will be called as ``jac(t, y)`` as necessary.
For 'Radau' and 'BDF' methods the return value might be a sparse
matrix.
* If None (default), then the Jacobian will be approximated by
finite differences.
It is generally recommended to provide the Jacobian rather than
relying on a finite difference approximation.
jac_sparsity : {None, array_like, sparse matrix}, optional
Defines a sparsity structure of the Jacobian matrix for a finite
difference approximation, its shape must be (n, n). If the Jacobian has
only few non-zero elements in *each* row, providing the sparsity
structure will greatly speed up the computations [10]_. A zero
entry means that a corresponding element in the Jacobian is identically
zero. If None (default), the Jacobian is assumed to be dense.
Not supported by 'LSODA', see `lband` and `uband` instead.
lband, uband : int or None
Parameters defining the Jacobian matrix bandwidth for 'LSODA' method.
The Jacobian bandwidth means that
``jac[i, j] != 0 only for i - lband <= j <= i + uband``. Setting these
requires your jac routine to return the Jacobian in the packed format:
the returned array must have ``n`` columns and ``uband + lband + 1``
rows in which Jacobian diagonals are written. Specifically
``jac_packed[uband + i - j , j] = jac[i, j]``. The same format is used
in `scipy.linalg.solve_banded` (check for an illustration).
These parameters can be also used with ``jac=None`` to reduce the
number of Jacobian elements estimated by finite differences.
min_step, first_step : float, optional
The minimum allowed step size and the initial step size respectively
for 'LSODA' method. By default `min_step` is zero and `first_step` is
selected automatically.
Returns
-------
Bunch object with the following fields defined:
t : ndarray, shape (n_points,)
Time points.
y : ndarray, shape (n, n_points)
Solution values at `t`.
sol : `OdeSolution` or None
Found solution as `OdeSolution` instance, None if `dense_output` was
set to False.
t_events : list of ndarray or None
Contains arrays with times at which a corresponding event was detected,
the length of the list equals to the number of events. None if `events`
was None.
nfev : int
Number of the system rhs evaluations.
njev : int
Number of the Jacobian evaluations.
nlu : int
Number of LU decompositions.
status : int
Reason for algorithm termination:
* -1: Integration step failed.
* 0: The solver successfully reached the interval end.
* 1: A termination event occurred.
message : string
Verbal description of the termination reason.
success : bool
True if the solver reached the interval end or a termination event
occurred (``status >= 0``).
References
----------
.. [1] J. R. Dormand, P. J. Prince, "A family of embedded Runge-Kutta
formulae", Journal of Computational and Applied Mathematics, Vol. 6,
No. 1, pp. 19-26, 1980.
.. [2] L. W. Shampine, "Some Practical Runge-Kutta Formulas", Mathematics
of Computation, Vol. 46, No. 173, pp. 135-150, 1986.
.. [3] P. Bogacki, L.F. Shampine, "A 3(2) Pair of Runge-Kutta Formulas",
Appl. Math. Lett. Vol. 2, No. 4. pp. 321-325, 1989.
.. [4] E. Hairer, G. Wanner, "Solving Ordinary Differential Equations II:
Stiff and Differential-Algebraic Problems", Sec. IV.8.
.. [5] `Backward Differentiation Formula
<https://en.wikipedia.org/wiki/Backward_differentiation_formula>`_
on Wikipedia.
.. [6] L. F. Shampine, M. W. Reichelt, "THE MATLAB ODE SUITE", SIAM J. SCI.
COMPUTE., Vol. 18, No. 1, pp. 1-22, January 1997.
.. [7] A. C. Hindmarsh, "ODEPACK, A Systematized Collection of ODE
Solvers," IMACS Transactions on Scientific Computation, Vol 1.,
pp. 55-64, 1983.
.. [8] L. Petzold, "Automatic selection of methods for solving stiff and
nonstiff systems of ordinary differential equations", SIAM Journal
on Scientific and Statistical Computing, Vol. 4, No. 1, pp. 136-148,
1983.
.. [9] `Stiff equation <https://en.wikipedia.org/wiki/Stiff_equation>`_ on
Wikipedia.
.. [10] A. Curtis, M. J. D. Powell, and J. Reid, "On the estimation of
sparse Jacobian matrices", Journal of the Institute of Mathematics
and its Applications, 13, pp. 117-120, 1974.
.. [11] `Cauchy-Riemann equations
<https://en.wikipedia.org/wiki/Cauchy-Riemann_equations>`_ on
Wikipedia.
Examples
--------
Basic exponential decay showing automatically chosen time points.
>>> from scipy.integrate import solve_ivp
>>> def exponential_decay(t, y): return -0.5 * y
>>> sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8])
>>> print(sol.t)
[ 0. 0.11487653 1.26364188 3.06061781 4.85759374
6.65456967 8.4515456 10. ]
>>> print(sol.y)
[[ 2. 1.88836035 1.06327177 0.43319312 0.17648948 0.0719045
0.02929499 0.01350938]
[ 4. 3.7767207 2.12654355 0.86638624 0.35297895 0.143809
0.05858998 0.02701876]
[ 8. 7.5534414 4.25308709 1.73277247 0.7059579 0.287618
0.11717996 0.05403753]]
Specifying points where the solution is desired.
>>> sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8],
... t_eval=[0, 1, 2, 4, 10])
>>> print(sol.t)
[ 0 1 2 4 10]
>>> print(sol.y)
[[ 2. 1.21305369 0.73534021 0.27066736 0.01350938]
[ 4. 2.42610739 1.47068043 0.54133472 0.02701876]
[ 8. 4.85221478 2.94136085 1.08266944 0.05403753]]
Cannon fired upward with terminal event upon impact. The ``terminal`` and
``direction`` fields of an event are applied by monkey patching a function.
Here ``y[0]`` is position and ``y[1]`` is velocity. The projectile starts at
position 0 with velocity +10. Note that the integration never reaches t=100
because the event is terminal.
>>> def upward_cannon(t, y): return [y[1], -0.5]
>>> def hit_ground(t, y): return y[1]
>>> hit_ground.terminal = True
>>> hit_ground.direction = -1
>>> sol = solve_ivp(upward_cannon, [0, 100], [0, 10], events=hit_ground)
>>> print(sol.t_events)
[array([ 20.])]
>>> print(sol.t)
[ 0.00000000e+00 9.99900010e-05 1.09989001e-03 1.10988901e-02
1.11088891e-01 1.11098890e+00 1.11099890e+01 2.00000000e+01]
tplquad(func, a, b, gfun, hfun, qfun, rfun, args=(), epsabs=1.49e-08, epsrel=1.49e-08)
Compute a triple (definite) integral.
Return the triple integral of ``func(z, y, x)`` from ``x = a..b``,
``y = gfun(x)..hfun(x)``, and ``z = qfun(x,y)..rfun(x,y)``.
Parameters
----------
func : function
A Python function or method of at least three variables in the
order (z, y, x).
a, b : float
The limits of integration in x: `a` < `b`
gfun : function
The lower boundary curve in y which is a function taking a single
floating point argument (x) and returning a floating point result:
a lambda function can be useful here.
hfun : function
The upper boundary curve in y (same requirements as `gfun`).
qfun : function
The lower boundary surface in z. It must be a function that takes
two floats in the order (x, y) and returns a float.
rfun : function
The upper boundary surface in z. (Same requirements as `qfun`.)
args : tuple, optional
Extra arguments to pass to `func`.
epsabs : float, optional
Absolute tolerance passed directly to the innermost 1-D quadrature
integration. Default is 1.49e-8.
epsrel : float, optional
Relative tolerance of the innermost 1-D integrals. Default is 1.49e-8.
Returns
-------
y : float
The resultant integral.
abserr : float
An estimate of the error.
See Also
--------
quad: Adaptive quadrature using QUADPACK
quadrature: Adaptive Gaussian quadrature
fixed_quad: Fixed-order Gaussian quadrature
dblquad: Double integrals
nquad : N-dimensional integrals
romb: Integrators for sampled data
simps: Integrators for sampled data
ode: ODE integrators
odeint: ODE integrators
scipy.special: For coefficients and roots of orthogonal polynomials
trapz(y, x=None, dx=1.0, axis=-1)
Integrate along the given axis using the composite trapezoidal rule.
Integrate `y` (`x`) along given axis.
Parameters
----------
y : array_like
Input array to integrate.
x : array_like, optional
The sample points corresponding to the `y` values. If `x` is None,
the sample points are assumed to be evenly spaced `dx` apart. The
default is None.
dx : scalar, optional
The spacing between sample points when `x` is None. The default is 1.
axis : int, optional
The axis along which to integrate.
Returns
-------
trapz : float
Definite integral as approximated by trapezoidal rule.
See Also
--------
sum, cumsum
Notes
-----
Image [2]_ illustrates trapezoidal rule -- y-axis locations of points
will be taken from `y` array, by default x-axis distances between
points will be 1.0, alternatively they can be provided with `x` array
or with `dx` scalar. Return value will be equal to combined area under
the red lines.
References
----------
.. [1] Wikipedia page: http://en.wikipedia.org/wiki/Trapezoidal_rule
.. [2] Illustration image:
http://en.wikipedia.org/wiki/File:Composite_trapezoidal_rule_illustration.png
Examples
--------
>>> np.trapz([1,2,3])
4.0
>>> np.trapz([1,2,3], x=[4,6,8])
8.0
>>> np.trapz([1,2,3], dx=2)
8.0
>>> a = np.arange(6).reshape(2, 3)
>>> a
array([[0, 1, 2],
[3, 4, 5]])
>>> np.trapz(a, axis=0)
array([ 1.5, 2.5, 3.5])
>>> np.trapz(a, axis=1)
array([ 2., 8.])
DATA
__all__ = ['BDF', 'DenseOutput', 'IntegrationWarning', 'LSODA', 'OdeSo...
absolute_import = _Feature((2, 5, 0, 'alpha', 1), (3, 0, 0, 'alpha', 0...
division = _Feature((2, 2, 0, 'alpha', 2), (3, 0, 0, 'alpha', 0), 8192...
print_function = _Feature((2, 6, 0, 'alpha', 2), (3, 0, 0, 'alpha', 0)...
FILE
/usr/lib64/python3.6/site-packages/scipy/integrate/__init__.py
quad is the basic integrator for a general (not sampled) function. It uses general-purpose routines from the Fortran package QUADPACK (QAGS or QAGI) and returns the integral over an interval together with an estimate of the error in the approximation.
```python
def f(x):
return np.sin(x)**2
```
```python
I, err = integrate.quad(f, 0.0, 2.0*np.pi, epsabs=1.e-14)
print(I)
print(err)
```
3.141592653589793
2.3058791671639882e-09
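The quad docstring above also mentions `full_output`; passing `full_output=1` appends an info dictionary to the return tuple. A quick sketch, reusing `f` from the cell above, printing how many integrand evaluations were needed:
```python
# full_output=1 appends an info dictionary to the return tuple;
# 'neval' is the number of integrand evaluations quad performed
I, err, info = integrate.quad(f, 0.0, 2.0*np.pi, epsabs=1.e-14, full_output=1)
print(info['neval'])
```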
```python
help(integrate.quad)
```
Sometimes our integrand function takes extra arguments; these can be passed through the `args` keyword:
```python
def g(x, A, sigma):
return A*np.exp(-x**2/sigma**2)
```
```python
I, err = integrate.quad(g, -1.0, 1.0, args=(1.0, 2.0))
print(I, err)
```
1.8451240256511698 2.0484991765669867e-14
NumPy defines the `inf` quantity, which can be used in the integration limits. We can integrate a Gaussian (we know the answer is $\sqrt{\pi}$).
Note: behind the scenes, the integration routine handles an infinite limit with a variable transform like $t = 1/x$, giving
$$\int_a^b f(x) dx = \int_{1/b}^{1/a} \frac{1}{t^2} f\left (\frac{1}{t}\right) dt$$
```python
I, err = integrate.quad(g, -np.inf, np.inf, args=(1.0, 1.0))
print(I, err)
```
1.7724538509055159 1.4202636780944923e-08
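To see the substitution at work, here is a minimal sketch (the name `f_decay` is mine) applying it by hand: integrating $e^{-x}$ over $[1, \infty)$ directly should agree with the transformed finite integral over $t \in (0, 1]$.
```python
# check the t = 1/x substitution by hand: both forms should give exp(-1)
f_decay = lambda x: np.exp(-x)
I_direct, _ = integrate.quad(f_decay, 1.0, np.inf)
I_transformed, _ = integrate.quad(lambda t: f_decay(1.0/t)/t**2, 0.0, 1.0)
print(I_direct, I_transformed, np.exp(-1.0))
```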
### Multidimensional integrals
Multidimensional integration can be done with successive calls to `quad()`, but there are wrappers that help.
Let's compute
$$I = \int_{y=0}^{1/2} \int_{x=0}^{1-2y} xy dxdy = \frac{1}{96}$$
(this example comes from the SciPy tutorial)
Notice that the limits of integration in x depend on y.
Note the form of the function:
```
dblquad(f, a, b, ylo, yhi)
```
where `f` = `f(y, x)` -- the inner (y) argument comes first.
The outer integral runs over $x \in [a, b]$ and the inner over $y$ from `ylo(x)` to `yhi(x)`.
```python
def integrand(x,y):
return x*y
def x_lower_lim(y):
return 0
def x_upper_lim(y):
return 1-2*y
# note: relative to the dblquad docstring the roles of x and y are
# swapped here -- the outer variable (our y) runs over [0, 0.5] and the
# inner variable (our x) runs between the two limit functions
I, err = integrate.dblquad(integrand, 0.0, 0.5, x_lower_lim, x_upper_lim)
print(I, 1.0/I)
```
0.010416666666666668 95.99999999999999
If you remember Python's lambda functions (one-expression functions), you can do this more compactly:
```python
I, err = integrate.dblquad(lambda x, y: x*y, 0.0, 0.5, lambda y: 0, lambda y: 1-2*y)
print(I)
```
0.010416666666666668
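For higher dimensions (or as an alternative here), `nquad()` applies `quad()` recursively. A sketch of the same integral: the limits are listed innermost first, and a limit may be a callable of the outer variables.
```python
# same integral with nquad: innermost (x) limits first, and they may
# depend on the outer variable y
I, err = integrate.nquad(lambda x, y: x*y,
                         [lambda y: (0, 1 - 2*y),   # x-limits depend on y
                          (0, 0.5)])                # y-limits
print(I)
```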
### Integration of a sampled function
Here we integrate a function that is defined only at a sequence of points. Recall that Simpson's rule fits piecewise parabolas through the data. Let's compute
$$I = \int_0^{2\pi} \sin^2(x) \, dx$$
with the integrand sampled at $N$ points $x_i$ spanning $[0, 2\pi]$.
```python
N = 17
x = np.linspace(0.0, 2.0*np.pi, N, endpoint=True)
y = np.sin(x)**2
I = integrate.simps(y, x)
print(I)
```
3.14159265359
Romberg integration is specific to equally-spaced samples, where $N = 2^k + 1$, and can converge faster (it extrapolates coarser integration results to achieve higher accuracy).
```python
N = 17
x = np.linspace(0.0, 2.0*np.pi, N, endpoint=True)
y = np.sin(x)**2
I = integrate.romb(y, dx=x[1]-x[0])
print(I)
```
3.14306583533
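(Note that `romb` is actually less accurate than `simps` above: for this periodic integrand the equally-spaced sums at the finest level are already essentially exact, and the extrapolation over the very coarse levels does not help.) On a smooth non-periodic integrand the extrapolation pays off; a small informal check, using $f(x) = 1/(1+x^2)$ on $[0, 1]$ with exact value $\pi/4$:
```python
# compare Simpson vs. Romberg errors as N = 2**k + 1 grows
exact = np.pi/4
for k in range(2, 7):
    N = 2**k + 1
    x = np.linspace(0.0, 1.0, N, endpoint=True)
    y = 1.0/(1.0 + x**2)
    print(N,
          abs(integrate.simps(y, x) - exact),
          abs(integrate.romb(y, dx=x[1]-x[0]) - exact))
```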
# Interpolation
Interpolation fills in the gaps between a discrete number of points by making an assumption about the behavior of the functional form of the data.
Many different types of interpolation exist
* some ensure no new extrema are introduced
* some conserve the quantity being interpolated
* some match derivative at end points
Pathologies exist -- it is not always best to use a high-order polynomial to pass through all of the points in your dataset.
The `interp1d()` function provides a variety of 1-d interpolation methods. It returns an object that acts like a function, which can be evaluated at any point.
```python
import scipy.interpolate as interpolate
```
```python
help (interpolate.interp1d)
```
Help on class interp1d in module scipy.interpolate.interpolate:
class interp1d(scipy.interpolate.polyint._Interpolator1D)
| Interpolate a 1-D function.
|
| `x` and `y` are arrays of values used to approximate some function f:
| ``y = f(x)``. This class returns a function whose call method uses
| interpolation to find the value of new points.
|
| Note that calling `interp1d` with NaNs present in input values results in
| undefined behaviour.
|
| Parameters
| ----------
| x : (N,) array_like
| A 1-D array of real values.
| y : (...,N,...) array_like
| A N-D array of real values. The length of `y` along the interpolation
| axis must be equal to the length of `x`.
| kind : str or int, optional
| Specifies the kind of interpolation as a string
| ('linear', 'nearest', 'zero', 'slinear', 'quadratic', 'cubic'
| where 'zero', 'slinear', 'quadratic' and 'cubic' refer to a spline
| interpolation of zeroth, first, second or third order) or as an
| integer specifying the order of the spline interpolator to use.
| Default is 'linear'.
| axis : int, optional
| Specifies the axis of `y` along which to interpolate.
| Interpolation defaults to the last axis of `y`.
| copy : bool, optional
| If True, the class makes internal copies of x and y.
| If False, references to `x` and `y` are used. The default is to copy.
| bounds_error : bool, optional
| If True, a ValueError is raised any time interpolation is attempted on
| a value outside of the range of x (where extrapolation is
| necessary). If False, out of bounds values are assigned `fill_value`.
| By default, an error is raised unless `fill_value="extrapolate"`.
| fill_value : array-like or (array-like, array_like) or "extrapolate", optional
| - if a ndarray (or float), this value will be used to fill in for
| requested points outside of the data range. If not provided, then
| the default is NaN. The array-like must broadcast properly to the
| dimensions of the non-interpolation axes.
| - If a two-element tuple, then the first element is used as a
| fill value for ``x_new < x[0]`` and the second element is used for
| ``x_new > x[-1]``. Anything that is not a 2-element tuple (e.g.,
| list or ndarray, regardless of shape) is taken to be a single
| array-like argument meant to be used for both bounds as
| ``below, above = fill_value, fill_value``.
|
| .. versionadded:: 0.17.0
| - If "extrapolate", then points outside the data range will be
| extrapolated.
|
| .. versionadded:: 0.17.0
| assume_sorted : bool, optional
| If False, values of `x` can be in any order and they are sorted first.
| If True, `x` has to be an array of monotonically increasing values.
|
| Methods
| -------
| __call__
|
| See Also
| --------
| splrep, splev
| Spline interpolation/smoothing based on FITPACK.
| UnivariateSpline : An object-oriented wrapper of the FITPACK routines.
| interp2d : 2-D interpolation
|
| Examples
| --------
| >>> import matplotlib.pyplot as plt
| >>> from scipy import interpolate
| >>> x = np.arange(0, 10)
| >>> y = np.exp(-x/3.0)
| >>> f = interpolate.interp1d(x, y)
|
| >>> xnew = np.arange(0, 9, 0.1)
| >>> ynew = f(xnew) # use interpolation function returned by `interp1d`
| >>> plt.plot(x, y, 'o', xnew, ynew, '-')
| >>> plt.show()
|
| Method resolution order:
| interp1d
| scipy.interpolate.polyint._Interpolator1D
| builtins.object
|
| Methods defined here:
|
| __init__(self, x, y, kind='linear', axis=-1, copy=True, bounds_error=None, fill_value=nan, assume_sorted=False)
| Initialize a 1D linear interpolation class.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| fill_value
|
| ----------------------------------------------------------------------
| Methods inherited from scipy.interpolate.polyint._Interpolator1D:
|
| __call__(self, x)
| Evaluate the interpolant
|
| Parameters
| ----------
| x : array_like
| Points to evaluate the interpolant at.
|
| Returns
| -------
| y : array_like
| Interpolated values. Shape is determined by replacing
| the interpolation axis in the original array with the shape of x.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from scipy.interpolate.polyint._Interpolator1D:
|
| dtype
```python
def f_exact(x):
return np.sin(x)*x
N = 10
x = np.linspace(0, 20, N)
y = f_exact(x)
fInterp = interpolate.interp1d(x, y, kind=3)
# use finer points when we plot
xplot = np.linspace(0, 20, 10*N)
plt.plot(x, y, "ro", label="known points")
plt.plot(xplot, f_exact(xplot), "b:", label="exact function")
plt.plot(xplot, fInterp(xplot), "r-", label="interpolant")
plt.legend(frameon=False, loc="best")
```
### Multi-d interpolation
Here's an example of multi-d interpolation from the official tutorial.
First we define the "answer" -- this is the true function that we will sample at a number of points and then try to use interpolation to recover
```python
def func(x, y):
return x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2
```
Here we will use mgrid to create the grid of (x,y) where we know func exactly -- this will be for plotting. Note the fun trick here: mgrid is not really a function, but rather something that can magically look like an array, and we index it with start:stop:stride. If we set the stride to an imaginary number, then it is interpreted as the number of points to put between the start and stop
```python
grid_x, grid_y = np.mgrid[0:1:100j, 0:1:200j]
```
```python
print(grid_x.shape)
print(grid_y.shape)
```
(100, 200)
(100, 200)
here's what the exact function looks like -- note that our function is defined in x,y, but imshow is meant for plotting an array, so the first index is the row. We take the transpose when plotting
```python
plt.imshow(func(grid_x, grid_y).T, extent=(0,1,0,1), origin="lower")
```
now we'll define 1000 random points where we'll sample the function
```python
points = np.random.rand(1000, 2)
values = func(points[:,0], points[:,1])
```
Here's what those points look like:
```python
plt.scatter(points[:,0], points[:,1], s=1)
plt.xlim(0,1)
plt.ylim(0,1)
```
The interpolate.griddata() function provides many ways to interpolate a collection of points onto a uniform grid. There are many different interpolation methods available within this function.
```python
grid_z0 = interpolate.griddata(points, values, (grid_x, grid_y), method='nearest')
plt.imshow(grid_z0.T, extent=(0,1,0,1), origin="lower")
```
```python
grid_z0 = interpolate.griddata(points, values, (grid_x, grid_y), method='linear')
plt.imshow(grid_z0.T, extent=(0,1,0,1), origin="lower")
```
```python
grid_z0 = interpolate.griddata(points, values, (grid_x, grid_y), method='cubic')
plt.imshow(grid_z0.T, extent=(0,1,0,1), origin="lower")
```
# Root Finding
Often we need to find a value of a variable that zeros a function -- this is _root finding_. Sometimes, this is a multidimensional problem.
The `brentq()` routine offers a very robust method for finding roots of a scalar function. You do need to provide an interval that bounds the root.
$f(x) = \frac{x e^x}{e^x - 1} - 5$
```python
import scipy.optimize as optimize
def f(x):
# this is the non-linear equation that comes up in deriving Wien's law for radiation
return (x*np.exp(x)/(np.exp(x) - 1.0) - 5.0)
root, r = optimize.brentq(f, 0.1, 10.0, full_output=True)
print(root)
print(r.converged)
```
4.965114231744287
True
```python
x = np.linspace(0.1, 10.0, 1000)
plt.plot(x, f(x))
plt.plot(np.array([root]), np.array([f(root)]), 'ro')
```
# ODEs
Many methods exist for integrating ordinary differential equations. Most will want you to write your ODEs as a system of first order equations.
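For example, a second-order equation such as $x'' = -\omega^2 x$ is rewritten by introducing the velocity $v = x'$ (a minimal sketch of the rewriting step):
```python
def rhs_oscillator(t, y, omega=1.0):
    # y = (x, v) with v = dx/dt, so dy/dt = (v, -omega**2 * x)
    x, v = y
    return [v, -omega**2 * x]
```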
This system of ODEs is the Lorenz system:
$$\frac{dx}{dt} = \sigma (y - x)$$
$$\frac{dy}{dt} = rx - y - xz$$
$$\frac{dz}{dt} = xy - bz$$
the steady states of this system correspond to:
$${\bf f}({\bf x}) =
\left (
\sigma (y -x),
rx - y -xz,
xy - bz
\right )^\intercal
= 0$$
```python
# system parameters
sigma = 10.0
b = 8./3.
r = 28.0
def rhs(t, x):
xdot = sigma*(x[1] - x[0])
ydot = r*x[0] - x[1] - x[0]*x[2]
zdot = x[0]*x[1] - b*x[2]
return np.array([xdot, ydot, zdot])
def jac(t, x):
return np.array(
[ [-sigma, sigma, 0.0],
[r - x[2], -1.0, -x[0]],
[x[1], x[0], -b] ])
def f(x):
return rhs(0.,x), jac(0.,x)
```
SciPy >= 1.0.0 has a uniform interface to the different ODE solvers, `solve_ivp()` -- we use that here. Note, some (but not all) solvers provide a way to get the solution data at intermediate times (called dense output).
```python
def ode_integrate(X0, dt, tmax):
""" integrate using the VODE method, storing the solution each dt """
r = integrate.solve_ivp(rhs, (0.0, tmax), X0,
method="RK45", dense_output=True)
# get the solution at intermediate times
ts = np.arange(0.0, tmax, dt)
Xs = r.sol(ts)
return ts, Xs
```
```python
t, X = ode_integrate([1.0, 1.0, 20.0], 0.02, 30)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(X[0,:], X[1,:], X[2,:])
fig.set_size_inches(8.0,6.0)
```
### Multi-variate root find
we can find the steady states of this system by doing a multi-variate root find on the RHS vector
```python
sol1 = optimize.root(f, [1., 1., 1.], jac=True)
print(sol1.x)
sol2 = optimize.root(f, [10., 10., 10.], jac=True)
print(sol2.x)
sol3 = optimize.root(f, [-10., -10., -10.], jac=True)
print(sol3.x)
```
[ 0. 0. 0.]
[ 8.48528137 8.48528137 27. ]
[ -8.48528137 -8.48528137 27. ]
```python
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(X[0,:], X[1,:], X[2,:])
ax.scatter(sol1.x[0], sol1.x[1], sol1.x[2], marker="x", color="r")
ax.scatter(sol2.x[0], sol2.x[1], sol2.x[2], marker="x", color="r")
ax.scatter(sol3.x[0], sol3.x[1], sol3.x[2], marker="x", color="r")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
```
### Stiff system of ODEs
A stiff system of ODEs is one where there are multiple disparate timescales for change and we need to respect all of them to get an accurate solution
Here is an example from Chemical Kinetics (see, e.g., Byrne & Hindmarsh 1986, or the VODE source code)
\begin{equation}
\frac{d}{dt} \left (
\begin{array}{c} y_1 \newline y_2 \newline y_3 \end{array}
\right ) =
%
\left (
\begin{array}{rrr}
-0.04 y_1 & + 10^4 y_2 y_3 & \newline
0.04 y_1 & - 10^4 y_2 y_3 & -3\times 10^7 y_2^2 \newline
& & 3\times 10^7 y_2^2
\end{array}
\right )
\end{equation}
\begin{equation}
{\bf J} = \left (
\begin{array}{ccc}
-0.04 & 10^4 y_3 & 10^4 y_2 \newline
0.04 & -10^4 y_3 - 6\times 10^7 y_2 & -10^4 y_2 \newline
0 & 6\times 10^7 y_2 & 0
\end{array}
\right )
\end{equation}
start with $y_1(0) = 1, y_2(0) = y_3(0) = 0$. Long term behavior is $y_1, y_2 \rightarrow 0; y_3 \rightarrow 1$
```python
def rhs(t, Y):
""" RHS of the system -- using 0-based indexing """
y1 = Y[0]
y2 = Y[1]
y3 = Y[2]
dy1dt = -0.04*y1 + 1.e4*y2*y3
dy2dt = 0.04*y1 - 1.e4*y2*y3 - 3.e7*y2**2
dy3dt = 3.e7*y2**2
return np.array([dy1dt, dy2dt, dy3dt])
def jac(t, Y):
""" J_{i,j} = df_i/dy_j """
y1 = Y[0]
y2 = Y[1]
y3 = Y[2]
df1dy1 = -0.04
df1dy2 = 1.e4*y3
df1dy3 = 1.e4*y2
df2dy1 = 0.04
df2dy2 = -1.e4*y3 - 6.e7*y2
df2dy3 = -1.e4*y2
df3dy1 = 0.0
df3dy2 = 6.e7*y2
df3dy3 = 0.0
return np.array([ [ df1dy1, df1dy2, df1dy3 ],
[ df2dy1, df2dy2, df2dy3 ],
[ df3dy1, df3dy2, df3dy3 ] ])
```
```python
def vode_integrate(Y0, tmax):
""" integrate using the NDF method """
r = integrate.solve_ivp(rhs, (0.0, tmax), Y0,
method="BDF", jac=jac, rtol=1.e-7, atol=1.e-10)
    # Note: here we don't request dense output; instead we
    # access the solution data where it was evaluated internally via
    # the return object
return r.t, r.y
```
```python
Y0 = np.array([1.0, 0.0, 0.0])
tmax = 4.e7
ts, Ys = vode_integrate(Y0, tmax)
ax = plt.gca()
ax.set_xscale('log')
ax.set_yscale('log')
plt.plot(ts, Ys[0,:], label=r"$y_1$")
plt.plot(ts, Ys[1,:], label=r"$y_2$")
plt.plot(ts, Ys[2,:], label=r"$y_3$")
plt.legend(loc="best", frameon=False)
```
# Fitting
Fitting is used to match a model to experimental data. E.g. we have N points of $(x_i, y_i)$ with associated errors, $\sigma_i$, and we want to find a simple function that best represents the data.
Usually this means that we will need to define a metric, often called the residual, for how well our function matches the data, and then minimize this residual. Least-squares fitting is a popular formulation.
We want to fit our data to a function $Y(x, \{a_j\})$, where $a_j$ are model parameters we can adjust. We want to find the optimal $a_j$ to minimize the distance of $Y$ from our data:
$$\Delta_i = Y(x_i, \{a_j\}) - y_i$$
Least-squares minimizes $\chi^2$:
$$\chi^2(\{a_j\}) = \sum_{i=1}^N \left ( \frac{\Delta_i}{\sigma_i} \right )^2$$
### general linear least squares
First we'll make some experimental data (a quadratic with random noise). We use the randn() function to provide Gaussian normalized errors.
```python
def y_experiment2(a1, a2, a3, sigma, x):
""" return the experimental data in a quadratic + random fashion,
with a1, a2, a3 the coefficients of the quadratic and sigma is
the error. This will be poorly matched to a linear fit for
a3 != 0 """
N = len(x)
# randn gives samples from the "standard normal" distribution
r = np.random.randn(N)
y = a1 + a2*x + a3*x*x + sigma*r
return y
N = 40
sigma = 5.0*np.ones(N)
x = np.linspace(0, 100.0, N)
y = y_experiment2(2.0, 1.50, -0.02, sigma, x)
plt.scatter(x,y)
plt.errorbar(x, y, yerr=sigma, fmt='none')
```
```python
def resid(avec, x, y, sigma):
""" the residual function -- this is what will be minimized by the
scipy.optimize.leastsq() routine. avec is the parameters we
are optimizing -- they are packed in here, so we unpack to
begin. (x, y) are the data points
scipy.optimize.leastsq() minimizes:
x = arg min(sum(func(y)**2,axis=0))
y
so this should just be the distance from a point to the curve,
and it will square it and sum over the points
"""
a0, a1, a2 = avec
return (y - (a0 + a1*x + a2*x**2))/sigma
# initial guesses
a0, a1, a2 = 1, 1, 1
afit, flag = optimize.leastsq(resid, [a0, a1, a2], args=(x, y, sigma))
print(afit)
plt.plot(x, afit[0] + afit[1]*x + afit[2]*x*x )
plt.scatter(x,y)
plt.errorbar(x, y, yerr=sigma, fmt='none')
```
$\chi^2$
```python
chisq = sum(np.power(resid(afit, x, y, sigma),2))
normalization = len(x)-len(afit)
print(chisq/normalization)
```
0.811373197043
### a nonlinear example
Our experimental data -- an exponential
```python
a0 = 2.5
a1 = 2./3.
sigma = 5.0
a0_orig, a1_orig = a0, a1
x = np.linspace(0.0, 4.0, 25)
y = a0*np.exp(a1*x) + sigma*np.random.randn(len(x))
plt.scatter(x,y)
plt.errorbar(x, y, yerr=sigma, fmt='none', label="_nolegend_")
```
our function to minimize
```python
def resid(avec, x, y):
""" the residual function -- this is what will be minimized by the
scipy.optimize.leastsq() routine. avec is the parameters we
are optimizing -- they are packed in here, so we unpack to
begin. (x, y) are the data points
scipy.optimize.leastsq() minimizes:
x = arg min(sum(func(y)**2,axis=0))
y
so this should just be the distance from a point to the curve,
and it will square it and sum over the points
"""
a0, a1 = avec
# note: if we wanted to deal with error bars, we would weight each
# residual accordingly
return y - a0*np.exp(a1*x)
```
```python
a0, a1 = 1, 1
afit, flag = optimize.leastsq(resid, [a0, a1], args=(x, y))
print(flag)
print(afit)
```
1
[ 2.8409826 0.62230606]
```python
plt.plot(x, afit[0]*np.exp(afit[1]*x),
label=r"$a_0 = $ %f; $a_1 = $ %f" % (afit[0], afit[1]))
plt.plot(x, a0_orig*np.exp(a1_orig*x), ":", label="original function")
plt.legend(numpoints=1, frameon=False)
plt.scatter(x,y, c="k")
plt.errorbar(x, y, yerr=sigma, fmt='none', label="_nolegend_")
```
# FFTs
Fourier transforms convert a physical-space (or time series) representation of a function into frequency space. This provides an equivalent representation of the data with a new view.
The FFT and its inverse in NumPy use:
$$F_k = \sum_{n=0}^{N-1} f_n e^{-2\pi i nk/N}$$
$$f_n = \frac{1}{N} \sum_{k=0}^{N-1} F_k
e^{2\pi i n k/N}$$
Both NumPy and SciPy have FFT routines that are similar. However, the NumPy version returns the data in a more convenient form.
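As a quick sanity check (ours) that the forward transform in NumPy implements the sum defined above:
```python
import numpy as np

# direct evaluation of the DFT sum for a small N, compared to np.fft.fft
Nc = 8
fc = np.random.rand(Nc)
F_direct = np.array([sum(fc[n]*np.exp(-2.0j*np.pi*n*k/Nc) for n in range(Nc))
                     for k in range(Nc)])
print(np.allclose(F_direct, np.fft.fft(fc)))   # True
```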
It's always best to start with something you understand -- let's do a simple sine wave. Since our function is real, we can use the rfft routines in NumPy -- these understand that we are working with real data and don't return the negative frequency components.
One important caveat -- FFTs assume that your data is periodic. If you include both endpoints of the domain in the points that comprise your sample then you will not match this assumption. Here we use endpoint=False with linspace()
```python
def single_freq_sine(npts):
# a pure sine with no phase shift will result in pure imaginary
# signal
f_0 = 0.2
xmax = 10.0/f_0
xx = np.linspace(0.0, xmax, npts, endpoint=False)
f = np.sin(2.0*np.pi*f_0*xx)
return xx, f
```
To make our life easier, we'll define a function that plots all the stages of the FFT process
```python
def plot_FFT(xx, f):
npts = len(xx)
# Forward transform: f(x) -> F(k)
fk = np.fft.rfft(f)
# Normalization -- the '2' here comes from the fact that we are
# neglecting the negative portion of the frequency space, since
# the FFT of a real function contains redundant information, so
# we are only dealing with 1/2 of the frequency space.
#
# technically, we should only scale the 0 bin by N, since k=0 is
# not duplicated -- we won't worry about that for these plots
norm = 2.0/npts
fk = fk*norm
fk_r = fk.real
fk_i = fk.imag
    # the fftfreq returns the positive and negative (and 0) frequencies
# the newer versions of numpy (>=1.8) have an rfftfreq() function
# that really does what we want -- we'll use that here.
k = np.fft.rfftfreq(npts)
# to make these dimensional, we need to divide by dx. Note that
# max(xx) is not the true length, since we didn't have a point
# at the endpoint of the domain.
kfreq = k*npts/(max(xx) + xx[1])
# Inverse transform: F(k) -> f(x) -- without the normalization
fkinv = np.fft.irfft(fk/norm)
# plots
plt.subplot(411)
plt.plot(xx, f)
plt.xlabel("x")
plt.ylabel("f(x)")
plt.subplot(412)
plt.plot(kfreq, fk_r, label=r"Re($\mathcal{F}$)")
plt.plot(kfreq, fk_i, ls=":", label=r"Im($\mathcal{F}$)")
plt.xlabel(r"$\nu_k$")
plt.ylabel("F(k)")
plt.legend(fontsize="small", frameon=False)
plt.subplot(413)
plt.plot(kfreq, np.abs(fk))
plt.xlabel(r"$\nu_k$")
plt.ylabel(r"|F(k)|")
plt.subplot(414)
plt.plot(xx, fkinv.real)
plt.xlabel(r"$\nu_k$")
plt.ylabel(r"inverse F(k)")
f = plt.gcf()
f.set_size_inches(10,8)
plt.tight_layout()
```
```python
npts = 128
xx, f = single_freq_sine(npts)
plot_FFT(xx, f)
```
A cosine is shifted in phase by pi/2
```python
def single_freq_cosine(npts):
# a pure cosine with no phase shift will result in pure real
# signal
f_0 = 0.2
xmax = 10.0/f_0
xx = np.linspace(0.0, xmax, npts, endpoint=False)
f = np.cos(2.0*np.pi*f_0*xx)
return xx, f
```
```python
xx, f = single_freq_cosine(npts)
plot_FFT(xx, f)
```
Now let's look at a sine with a pi/4 phase shift
```python
def single_freq_sine_plus_shift(npts):
# a pure sine with no phase shift will result in pure imaginary
# signal
f_0 = 0.2
xmax = 10.0/f_0
xx = np.linspace(0.0, xmax, npts, endpoint=False)
f = np.sin(2.0*np.pi*f_0*xx + np.pi/4)
return xx, f
```
```python
xx, f = single_freq_sine_plus_shift(npts)
plot_FFT(xx, f)
```
### A frequency filter
we'll setup a simple two-frequency sine wave and filter a component
```python
def two_freq_sine(npts):
# a pure sine with no phase shift will result in pure imaginary
# signal
f_0 = 0.2
f_1 = 0.5
xmax = 10.0/f_0
# we call with endpoint=False -- if we include the endpoint, then for
# a periodic function, the first and last point are identical -- this
# shows up as a signal in the FFT.
xx = np.linspace(0.0, xmax, npts, endpoint=False)
f = 0.5*(np.sin(2.0*np.pi*f_0*xx) + np.sin(2.0*np.pi*f_1*xx))
return xx, f
```
```python
npts = 256
xx, f = two_freq_sine(npts)
plt.plot(xx, f)
```
we'll take the transform: f(x) -> F(k)
```python
# normalization factor: the 2 here comes from the fact that we neglect
# the negative portion of frequency space because our input function
# is real
norm = 2.0/npts
fk = norm*np.fft.rfft(f)
ofk_r = fk.real.copy()
ofk_i = fk.imag.copy()
# get the frequencies
k = np.fft.rfftfreq(len(xx))
# since we don't include the endpoint in xx, to normalize things, we need
# max(xx) + dx to get the true length of the domain
#
# This makes the frequencies essentially multiples of 1/dx
kfreq = k*npts/(max(xx) + xx[1])
plt.plot(kfreq, fk.real, label="real")
plt.plot(kfreq, fk.imag, ":", label="imaginary")
plt.legend(frameon=False)
```
Filter out the higher frequencies
```python
fk[kfreq > 0.4] = 0.0
# element 0 of fk is the DC component
fk_r = fk.real
fk_i = fk.imag
# Inverse transform: F(k) -> f(x)
fkinv = np.fft.irfft(fk/norm)
plt.plot(xx, fkinv.real)
```
# Linear Algebra
### general manipulations of matrices
you can use regular NumPy arrays or you can use a special matrix class that offers some short cuts
```python
a = np.array([[1.0, 2.0], [3.0, 4.0]])
```
```python
print(a)
print(a.transpose())
print(a.T)
```
[[ 1. 2.]
[ 3. 4.]]
[[ 1. 3.]
[ 2. 4.]]
[[ 1. 3.]
[ 2. 4.]]
```python
ainv = np.linalg.inv(a)
print(ainv)
```
[[-2. 1. ]
[ 1.5 -0.5]]
```python
print(np.dot(a, ainv))
```
[[ 1.00000000e+00 0.00000000e+00]
[ 8.88178420e-16 1.00000000e+00]]
the eye() function will generate an identity matrix (as will identity())
```python
print(np.eye(2))
print(np.identity(2))
```
[[ 1. 0.]
[ 0. 1.]]
[[ 1. 0.]
[ 0. 1.]]
we can solve Ax = b
```python
b = np.array([5, 7])
x = np.linalg.solve(a, b)
print(x)
```
[-3. 4.]
### The matrix class
```python
A = np.matrix('1.0 2.0; 3.0 4.0')
print(A)
print(A.T)
```
[[ 1. 2.]
[ 3. 4.]]
[[ 1. 3.]
[ 2. 4.]]
```python
X = np.matrix('5.0 7.0')
Y = X.T
print(A*Y)
```
[[ 19.]
[ 43.]]
```python
print(A.I*Y)
```
[[-3.]
[ 4.]]
### tridiagonal matrix solve
Here we'll solve the equation:
$$f^{\prime\prime} = g(x)$$
with $g(x) = \sin(x)$, and the domain $x \in [0, 2\pi]$, with boundary conditions $f(0) = f(2\pi) = 0$. The solution is simply $f(x) = -\sin(x)$.
We'll use a grid of $N$ points with $x_0$ on the left boundary and $x_{N-1}$ on the right boundary.
We difference our equation as:
$$f_{i+1} - 2 f_i + f_{i-1} = \Delta x^2 g_i$$
We keep the boundary points fixed, so we only need to solve for the $N-2$ interior points. Near the boundaries, our difference is:
$$f_2 - 2 f_1 = \Delta x^2 g_1$$
and
$$-2f_{N-1} + f_{N-2} = \Delta x^2 g_{N-1}$$.
We can write the system of equations for solving for the $N-2$ interior points as:
\begin{equation}
A = \left (
\begin{array}{ccccccc}
-2 & 1 & & & & & \newline
1 & -2 & 1 & & & & \newline
& 1 & -2 & 1 & & & \newline
& & \ddots & \ddots & \ddots & & \newline
& & & \ddots & \ddots & \ddots & \newline
& & & & 1 & -2 & 1 \newline
& & & & & 1 & -2 \newline
\end{array}
\right )
\end{equation}
\begin{equation}
x = \left (
\begin{array}{c}
f_\mathrm{1} \\\
f_\mathrm{2} \\\
f_\mathrm{3} \\\
\vdots \\\
\vdots \\\
f_\mathrm{N-2} \\\
f_\mathrm{N-1} \\\
\end{array}
\right )
\end{equation}
\begin{equation}
b = \Delta x^2 \left (
\begin{array}{c}
g_\mathrm{1} \\\
g_\mathrm{2} \\\
g_\mathrm{3} \\\
\vdots \\\
\vdots \\\
g_\mathrm{N-2} \\\
g_\mathrm{N-1}\\\
\end{array}
\right )
\end{equation}
Then we just solve $A x = b$
```python
import scipy.linalg as linalg
# our grid -- including endpoints
N = 100
x = np.linspace(0.0, 2.0*np.pi, N, endpoint=True)
dx = x[1]-x[0]
# our source
g = np.sin(x)
# our matrix will be tridiagonal, with [1, -2, 1] on the diagonals
# we only solve for the N-2 interior points
# diagonal
d = -2*np.ones(N-2)
# upper -- note that the upper diagonal has 1 less element than the
# main diagonal. The SciPy banded solver wants the matrix in the
# form:
#
# * a01 a12 a23 a34 a45 <- upper diagonal
# a00 a11 a22 a33 a44 a55 <- diagonal
# a10 a21 a32 a43 a54 * <- lower diagonal
#
u = np.ones(N-2)
u[0] = 0.0
# lower
l = np.ones(N-2)
l[N-3] = 0.0
# put the upper, diagonal, and lower parts together as a banded matrix
A = np.matrix([u,d,l])
# solve A sol = dx**2 g for the inner N-2 points
sol = linalg.solve_banded((1,1), A, dx**2*g[1:N-1])
plt.plot(x[1:N-1], sol)
```
| 7c6a96796a7a11eed3ca53bba860960ebc5e6505 | 1,015,999 | ipynb | Jupyter Notebook | Other/scipy-basics.ipynb | xiaozhouli/Jupyter | 68d5a384dd939b3e8079da4470d6401d11b63a4c | [
"MIT"
] | 6 | 2020-02-27T13:09:06.000Z | 2021-11-14T09:50:30.000Z | Other/scipy-basics.ipynb | xiaozhouli/Jupyter | 68d5a384dd939b3e8079da4470d6401d11b63a4c | [
"MIT"
] | null | null | null | Other/scipy-basics.ipynb | xiaozhouli/Jupyter | 68d5a384dd939b3e8079da4470d6401d11b63a4c | [
"MIT"
] | 8 | 2018-10-18T10:20:56.000Z | 2021-09-24T08:09:27.000Z | 170.813551 | 133,884 | 0.841931 | true | 48,413 | Qwen/Qwen-72B | 1. YES
2. YES | 0.787931 | 0.90053 | 0.709556 | __label__eng_Latn | 0.987179 | 0.486867 |
# Almgren and Chriss Model For Optimal Execution of Portfolio Transactions
### Introduction
We consider the execution of portfolio transactions with the aim of minimizing a combination of risk and transaction costs arising from permanent and temporary market impact. As an example, assume that you have a certain number of stocks that you want to sell within a given time frame. If you place this sell order directly to the market as it is, transaction costs may rise due to temporary market impact. On the other hand, if you split it up into pieces over time, costs may rise due to volatility in the stock price.
[Almgren and Chriss](https://cims.nyu.edu/~almgren/papers/optliq.pdf) provided a solution to this problem by assuming the permanent and temporary market impact functions are linear functions of the rate of trading, and that stock prices follow a discrete arithmetic random walk.
In this notebook, we will take a look at the model used by Almgren and Chriss to solve the optimal liquidation problem. We will start by stating the formal definitions of *trading trajectory*, *trading list*, and *trading strategy* for liquidating a single stock.
### Trading Trajectory, Trading List, and Trading Strategy
We define trading trajectory, trading list, and trading strategy just as Almgren and Chriss did in their [paper](https://cims.nyu.edu/~almgren/papers/optliq.pdf). Suppose we hold $X$ shares of a stock that we want to liquidate before time $T$. Divide $T$ into $N$ intervals of length $\tau=\frac{T}{N}$ and define:
- $t_k = k\tau$ to be discrete times, where $k = 0,..,N$.
- A **trading trajectory** to be the list $(x_0,..,x_N)$, where $x_k$ is the number of shares we plan to hold at time $t_k$. We require that our initial position $x_0 = X$, and that at liquidation time $T$, $x_N = 0$.
- A **trading list** to be $(n_1,..,n_N)$, $n_k = x_{k-1} - x_k$ as the number of shares that we will sell between times $t_{k-1}$ and $t_k$.
- A **trading strategy** as a rule for determining $n_k$ from the information available at time $t_{k-1}$.
Below, we can see a visual example of a trading trajectory, for $N = 12$.
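In case the figure is not rendered here, the following minimal sketch (with an illustrative initial position) shows how a trajectory and its trading list relate for $N = 12$:
```python
import numpy as np

X = 12000                      # illustrative initial number of shares
N = 12
x = np.linspace(X, 0, N + 1)   # trading trajectory (x_0, ..., x_N), x_0 = X, x_N = 0
n = x[:-1] - x[1:]             # trading list: n_k = x_{k-1} - x_k
print(x.astype(int))
print(n.astype(int))
```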
## Price Dynamics
We will assume that the stock price evolves according to a discrete arithmetic random walk:
\begin{equation}
S_k = S_{k-1} + \sigma \tau^{1/2} \xi_k
\end{equation}
for $k = 1,..,N$ and where:
\begin{align*}
S_k &= \text{ stock price at time } k\\
\sigma &= \text{ standard deviation of the fluctuations in stock price}\\
\tau &= \text{ length of discrete time interval}\\
\xi_k &= \text{ draws from independent random variables}
\end{align*}
We will denote the initial stock price as $S_0$. The role of $\xi_k$ is to simulate random price fluctuations using random numbers drawn from a Normal Gaussian distribution with zero mean and unit variance. The code below shows us what this price model looks like, for an initial stock price of $S_0 =$ \$50 dollars, a standard deviation of price fluctuations of $\sigma = 0.379$, and a discrete time interval of $\tau = 1$.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Set the number of days to follow the stock price
n_days = 100
# Plot the stock price as a function of time
utils.plot_price_model(seed = 0, num_days = n_days)
```
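Here `utils` is a course-specific helper module. In case it is not available, a minimal self-contained sketch (ours) of the same arithmetic random walk, with the parameter values quoted above, is:
```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
S0, sigma, tau, n_days = 50.0, 0.379, 1.0, 100   # values quoted above
xi = np.random.standard_normal(n_days)
# S_k = S_{k-1} + sigma * sqrt(tau) * xi_k, starting from S_0
S = np.concatenate(([S0], S0 + np.cumsum(sigma*np.sqrt(tau)*xi)))
plt.plot(S)
plt.xlabel("Trading day")
plt.ylabel("Stock price ($)")
```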
## Market Impact
As we learned previously the price of a stock is affected by market impact that occurs every time we sell a stock. In their model, Almgren and Chriss distinguish between two types of market impact, permanent and temporary market impact. We will now add these two factors into our price model.
### Permanent Impact
Permanent market impact refers to changes in the equilibrium price of a stock as a direct function of our trading. Permanent market impact is called *permanent* because its effect persists for the entire liquidation period, $T$. We will denote the permanent price impact as $g(v)$, and will add it to our price model:
\begin{equation}
S_k = S_{k-1} + \sigma \tau^{1/2} \xi_k - \tau g\left(\frac{n_k}{\tau}\right)
\end{equation}
Here, we assumed the permanent impact function, $g(v)$, is a linear function of the trading rate, $v = n_k / \tau$. We will take $g(v)$ to have the form:
\begin{equation}
g(v) = \gamma \left(\frac{n_k}{\tau}\right)
\end{equation}
where $\gamma$ is a constant and has units of (\$/share${}^2$). Replacing this in the above equation we get:
\begin{equation}
S_k = S_{k-1} + \sigma \tau^{1/2} \xi_k - \gamma n_k
\end{equation}
With this form, we can see that for each $n$ shares that we sell, we will depress the stock price permanently by $n\gamma$, regardless of the time we take to sell the stocks.
### Temporary Impact
Temporary market impact refers to temporary imbalances in supply and demand caused by our trading. This leads to temporary price movements away from equilibrium. Temporary market impact is called *temporary* because its effect
dissipates by the next trading period. We will denote the temporary price impact as $h(v)$. Given this, the actual stock price at time $k$ is given by:
\begin{equation}
\tilde{S_k} = S_{k-1} - h\left(\frac{n_k}{\tau}\right)
\end{equation}
Where, we have again assumed the temporary impact function, $h(v)$, is a linear function of the trading rate, $v = n_k / \tau$. We will take $h(v)$ to have the form:
\begin{equation}
h(v) = \epsilon \mbox{ sign}(n_k) + \eta \left(\frac{n_k}{\tau}\right)
\end{equation}
where $\epsilon$ and $\eta$ are constants with units (\$/share) and (\$ time/share${}^2$), respectively. It is important to note that $h(v)$ does not affect the price $S_k$.
## Capture
We define the **Capture** to be the total profits resulting from trading along a particular trading trajectory, upon completion of all trades. We can compute the capture via:
\begin{equation}
\sum\limits_{k=1}^{N} n_k \tilde{S_k} = X S_0 + \sum\limits_{k=1}^{N} \left(\sigma \tau^{1/2} \xi_k - \tau g\left(\frac{n_k}{\tau}\right)\right) x_k - \sum\limits_{k=1}^{N} n_k h\left(\frac{n_k}{\tau}\right)
\end{equation}
As we can see this is the sum of the product of the number of shares $n_k$ that we sell in each time interval, times the effective price per share $\tilde{S_k}$ received on that sale.
## Implementation Shortfall
We define the **Implementation Shortfall** as the total cost of trading and is given by:
\begin{equation}
I_s = X S_0 - \sum_{k = 1}^N n_k \tilde{S_k}
\end{equation}
This is what we seek to minimize when determining the best trading strategy!
Note that since $\xi_k$ is random, so is the implementation shortfall. Therefore, we have to frame the minimization problem in terms of the expectation value of the shortfall and its corresponding variance. We'll refer to $E(x)$ as the expected shortfall and $V(x)$ as the variance of the shortfall. Simplifying the above equation for $I_s$, it is easy to see that:
\begin{equation}
E(x) = \sum\limits_{k=1}^{N} \tau x_k g\left(\frac{n_k}{\tau}\right) + \sum\limits_{k=1}^{N} n_k h\left(\frac{n_k}{\tau}\right)
\end{equation}
and
\begin{equation}
V(x) = \sigma^2 \sum\limits_{k=1}^{N} \tau {x_k}^2
\end{equation}
The units of $E(x)$ are dollars and the units of $V(x)$ are dollars squared. So now, we can reframe our minimization problem in terms of $E(x)$ and $V(x)$.
For a given level of variance of shortfall, $V(x)$, we seek to minimize the expectation of shortfall, $E(x)$. In the next section we will see how to solve this problem.
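Before moving on, here is a minimal sketch (ours, with illustrative parameter values) that evaluates $E(x)$ and $V(x)$ for an arbitrary trajectory under the linear impact functions defined above:
```python
import numpy as np

def shortfall_stats(x, tau, sigma, gamma, eta, eps):
    # E(x) and V(x) from the formulas above, with g(v) = gamma*v and
    # h(v) = eps*sign(n_k) + eta*v
    n = x[:-1] - x[1:]                  # trading list n_k
    xk = x[1:]                          # holdings x_k for k = 1..N
    E = np.sum(tau*xk*gamma*(n/tau)) + np.sum(n*(eps*np.sign(n) + eta*n/tau))
    V = sigma**2*np.sum(tau*xk**2)
    return E, V

# illustrative values: linear liquidation of 1,000,000 shares over 60 steps
x_lin = np.linspace(1.0e6, 0.0, 61)
print(shortfall_stats(x_lin, tau=1.0, sigma=0.38, gamma=2.5e-7, eta=2.5e-6, eps=0.0625))
```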
## Utility Function
Our goal now is to find the strategy that has the minimum expected shortfall $E(x)$ for a given maximum level of variance $V(x) \ge 0$. This constrained optimization problem can be solved by introducing a Lagrange multiplier $\lambda$. Therefore, our problem reduces to finding the trading strategy that minimizes the **Utility Function** $U(x)$:
\begin{equation}
U(x) = E(x) + \lambda V(x)
\end{equation}
The parameter $\lambda$ is referred to as **trader’s risk aversion** and controls how much we penalize the variance relative to the expected shortfall.
The intuition of this utility function can be thought of as follows. Consider a stock which exhibits high price volatility and thus a high risk of price movement away from the equilibrium price. A risk averse trader would prefer to trade a large portion of the volume immediately, causing a known price impact, rather than risk trading in small increments at successively adverse prices. Alternatively, if the price is expected to be stable over the liquidation period, the trader would rather split the trade into smaller sizes to avoid price impact. This trade-off between speed of execution and risk of price movement is ultimately what governs the structure of the resulting trade list.
# Optimal Trading Strategy
Almgren and Chriss solved the above problem and showed that for each value
of risk aversion there is a uniquely determined optimal execution strategy. The details of their derivation is discussed in their [paper](https://cims.nyu.edu/~almgren/papers/optliq.pdf). Here, we will just state the general solution.
The optimal trajectory is given by:
\begin{equation}
x_j = \frac{\sinh \left( \kappa \left( T-t_j\right)\right)}{ \sinh (\kappa T)}X, \hspace{1cm}\text{ for } j=0,...,N
\end{equation}
and the associated trading list:
\begin{equation}
n_j = \frac{2 \sinh \left(\frac{1}{2} \kappa \tau \right)}{ \sinh \left(\kappa T\right) } \cosh \left(\kappa \left(T - t_{j-\frac{1}{2}}\right)\right) X, \hspace{1cm}\text{ for } j=1,...,N
\end{equation}
where $t_{j-1/2} = (j-\frac{1}{2}) \tau$. The expected shortfall and variance of the optimal trading strategy are given by:
\begin{equation}
E(x) = \frac{1}{2}\gamma X^2 + \epsilon X + \tilde{\eta} X^2 \frac{\tanh\left(\frac{1}{2}\kappa\tau\right)\left(\tau\sinh\left(2\kappa T\right) + 2T\sinh\left(\kappa\tau\right)\right)}{2\tau^2\sinh^2\left(\kappa T\right)}
\end{equation}
\begin{equation}
V(x) = \frac{1}{2}\sigma^2 X^2 \frac{\tau\sinh\left(\kappa T\right)\cosh\left(\kappa\left(T-\tau\right)\right) - T\sinh\left(\kappa\tau\right)}{\sinh^2\left(\kappa T\right)\sinh\left(\kappa\tau\right)}
\end{equation}
In the above equations $\kappa$ is given by:
\begin{align*}
&\kappa = \frac{1}{\tau}\cosh^{-1}\left(\frac{\tau^2}{2}\tilde{\kappa}^2 + 1\right)
\end{align*}
where:
\begin{align*}
&\tilde{\kappa}^2 = \frac{\lambda \sigma^2}{\tilde{\eta}} = \frac{\lambda \sigma^2}{\eta \left(1-\frac{\gamma \tau}{2 \eta}\right)}
\end{align*}
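A minimal numerical sketch (ours) of these closed-form expressions, assuming $\kappa$ has already been computed from $\lambda$, $\sigma$, $\eta$, and $\gamma$ via the relations above; the value used below is purely illustrative:
```python
import numpy as np

def ac_solution(X, T, N, kappa):
    # optimal trajectory x_j and trading list n_j from the formulas above
    tau = T/N
    t = np.arange(N + 1)*tau
    x = X*np.sinh(kappa*(T - t))/np.sinh(kappa*T)
    t_half = (np.arange(1, N + 1) - 0.5)*tau
    n = X*2*np.sinh(0.5*kappa*tau)/np.sinh(kappa*T)*np.cosh(kappa*(T - t_half))
    return x, n

x_opt, n_opt = ac_solution(X=1.0e6, T=60.0, N=60, kappa=0.05)  # illustrative kappa
print(np.isclose(n_opt.sum(), 1.0e6))   # the trades liquidate the full position
```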
# Trading Lists and Trading Trajectories
### Introduction
[Almgren and Chriss](https://cims.nyu.edu/~almgren/papers/optliq.pdf) provided a solution to the optimal liquidation problem by assuming the that stock prices follow a discrete arithmetic random walk, and that the permanent and temporary market impact functions are linear functions of the trading rate.
Almgren and Chriss showed that for each value of risk aversion there is a unique optimal execution strategy. This optimal execution strategy is determined by a trading trajectory and its associated trading list. The optimal trading trajectory is given by:
\begin{equation}
x_j = \frac{\sinh \left( \kappa \left( T-t_j\right)\right)}{ \sinh (\kappa T)}X, \hspace{1cm}\text{ for } j=0,...,N
\end{equation}
and the associated trading list is given by:
\begin{equation}
n_j = \frac{2 \sinh \left(\frac{1}{2} \kappa \tau \right)}{ \sinh \left(\kappa T\right) } \cosh \left(\kappa \left(T - t_{j-\frac{1}{2}}\right)\right) X, \hspace{1cm}\text{ for } j=1,...,N
\end{equation}
where $t_{j-1/2} = (j-\frac{1}{2}) \tau$.
Given some initial parameters, such as the number of shares, the liquidation time, the trader's risk aversion, etc..., the trading list will tell us how many shares we should sell at each trade to minimize our transaction costs.
In this notebook, we will see how the trading list varies according to some initial trading parameters.
## Visualizing Trading Lists and Trading Trajectories
Let's assume we have 1,000,000 shares that we wish to liquidate. In the code below, we will plot the optimal trading trajectory and its associated trading list for different trading parameters, such as trader's risk aversion, number of trades, and liquidation time.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# We set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Set the number of days to sell all shares (i.e. the liquidation time)
l_time = 60
# Set the number of trades
n_trades = 60
# Set the trader's risk aversion
t_risk = 1e-6
# Plot the trading list and trading trajectory. If show_trl = True, the data frame containing the values of the
# trading list and trading trajectory is printed
utils.plot_trade_list(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk, show_trl = True)
```
# Implementing a Trading List
Once we have the trading list for a given set of initial parameters, we can actually implement it. That is, we can sell our shares in the stock market according to the trading list and see how much money we made or lost. To do this, we are going to simulate the stock market with a simple trading environment. This simulated trading environment uses the same price dynamics and market impact functions as the Almgren and Chriss model. That is, stock price movements evolve according to a discrete arithmetic random walk and the permanent and temporary market impact functions are linear functions of the trading rate. We are going to use the same environment to train our Deep Reinforcement Learning algorithm later on.
We will describe the details of the trading environment in another notebook, for now we will just take a look at its default parameters. We will distinguish between financial parameters, such as the annual volatility in stock price, and the parameters needed to calculate the trade list using the Almgren and Chriss model, such as the trader's risk aversion.
```python
import utils
# Get the default financial and AC Model parameters
financial_params, ac_params = utils.get_env_param()
print(financial_params)
print(ac_params)
```
Financial Parameters
================================================================================
Annual Volatility: 12% Bid-Ask Spread: 0.125
Daily Volatility: 0.8% Daily Trading Volume: 5,000,000
================================================================================
Almgren and Chriss Model Parameters
=========================================================================================================================
Total Number of Shares to Sell: 1,000,000 Fixed Cost of Selling per Share: $0.062
Starting Price per Share: $50.00 Trader's Risk Aversion: 1e-06
Price Impact for Each 1% of Daily Volume Traded: $2.5e-06 Permanent Impact Constant: 2.5e-07
Number of Days to Sell All the Shares: 60 Single Step Variance: 0.144
Number of Trades: 60 Time Interval between trades: 1.0
=========================================================================================================================
```python
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# We set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Set the random seed
sd = 0
# Set the number of days to sell all shares (i.e. the liquidation time)
l_time = 60
# Set the number of trades
n_trades = 60
# Set the trader's risk aversion
t_risk = 1e-6
# Implement the trading list for the given parameters
utils.implement_trade_list(seed = sd, lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk)
```
# The Efficient Frontier of Optimal Portfolio Transactions
### Introduction
[Almgren and Chriss](https://cims.nyu.edu/~almgren/papers/optliq.pdf) showed that for each value of risk aversion there is a unique optimal execution strategy. The optimal strategy is obtained by minimizing the **Utility Function** $U(x)$:
\begin{equation}
U(x) = E(x) + \lambda V(x)
\end{equation}
where $E(x)$ is the **Expected Shortfall**, $V(x)$ is the **Variance of the Shortfall**, and $\lambda$ corresponds to the trader’s risk aversion. The expected shortfall and variance of the optimal trading strategy are given by:
\begin{equation}
E(x) = \frac{1}{2}\gamma X^2 + \epsilon X + \tilde{\eta} X^2 \frac{\tanh\left(\frac{1}{2}\kappa\tau\right)\left(\tau\sinh\left(2\kappa T\right) + 2T\sinh\left(\kappa\tau\right)\right)}{2\tau^2\sinh^2\left(\kappa T\right)}
\end{equation}
\begin{equation}
V(x) = \frac{1}{2}\sigma^2 X^2 \frac{\tau\sinh\left(\kappa T\right)\cosh\left(\kappa\left(T-\tau\right)\right) - T\sinh\left(\kappa\tau\right)}{\sinh^2\left(\kappa T\right)\sinh\left(\kappa\tau\right)}
\end{equation}
In this notebook, we will learn how to visualize and interpret these equations.
# The Expected Shortfall
As we saw in the previous notebook, even if we use the same trading list, we are not guaranteed to always get the same implementation shortfall due to the random fluctuations in the stock price. This is why we had to reframe the problem of finding the optimal strategy in terms of the average implementation shortfall and the variance of the implementation shortfall. We call the average implementation shortfall, the expected shortfall $E(x)$, and the variance of the implementation shortfall $V(x)$. So, whenever we talk about the expected shortfall we are really talking about the average implementation shortfall. Therefore, we can think of the expected shortfall as follows. Given a single trading list, the expected shortfall will be the value of the average implementation shortfall if we were to implement this trade list in the stock market many times.
To see this, in the code below we implement the same trade list on 50,000 trading simulations. We call each trading simulation an episode. Each episode will consist of different random fluctuations in stock price. For each episode we will compute the corresponding implementation shortfall. After all the trading simulations have been carried out, we calculate the average implementation shortfall and the variance of the implementation shortfalls. We can then compare these values with the values given by the equations for $E(x)$ and $V(x)$ from the Almgren and Chriss model.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Set the liquidation time
l_time = 60
# Set the number of trades
n_trades = 60
# Set trader's risk aversion
t_risk = 1e-6
# Set the number of episodes to run the simulation (kept small here for speed; the text above uses 50,000)
episodes = 10
utils.get_av_std(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk, trs = episodes)
# Get the AC Optimal strategy for the given parameters
ac_strategy = utils.get_optimal_vals(lq_time = l_time, nm_trades = n_trades, tr_risk = t_risk)
ac_strategy
```
# Extreme Trading Strategies
Because some investors may be willing to take more risk than others, when looking for the optimal strategy we have to consider a wide range of risk values, ranging from those traders that want to take zero risk to those who want to take as much risk as possible. Let's take a look at these two extreme cases. We will define the **Minimum Variance** strategy as that one followed by a trader that wants to take zero risk and the **Minimum Impact** strategy as that one followed by a trader that wants to take as much risk as possible. Let's take a look at the values of $E(x)$ and $V(x)$ for these extreme trading strategies. The `utils.get_min_param()` uses the above equations for $E(x)$ and $V(x)$, along with the parameters from the trading environment to calculate the expected shortfall and standard deviation (the square root of the variance) for these strategies. We'll start by looking at the Minimum Impact strategy.
```python
import utils
# Get the minimum impact and minimum variance strategies
minimum_impact, minimum_variance = utils.get_min_param()
```
### Minimum Impact Strategy
This trading strategy will be taken by a trader that has no regard for risk. In the Almgren and Chriss model this will correspond to having the trader's risk aversion set to $\lambda = 0$. In this case the trader will sell the shares at a constant rate over a long period of time. By doing so, he will minimize market impact, but will be at risk of losing a lot of money due to the large variance. Hence, this strategy will yield the lowest possible expected shortfall and the highest possible variance, for a given set of parameters. We can see that for the given parameters, this strategy yields an expected shortfall of \$197,000 dollars but has a very big standard deviation of over 3 million dollars.
```python
minimum_impact
```
<table class="simpletable">
<caption>AC Optimal Strategy for Minimum Impact</caption>
<tr>
<th>Number of Days to Sell All the Shares:</th> <td>250</td> <th> Initial Portfolio Value:</th> <td>$50,000,000.00</td>
</tr>
<tr>
<th>Half-Life of The Trade:</th> <td>1,284,394.9</td> <th> Expected Shortfall:</th> <td>$197,000.00</td>
</tr>
<tr>
<th>Utility:</th> <td>$197,000.00</td> <th> Standard Deviation of Shortfall:</th> <td>$3,453,707.55</td>
</tr>
</table>
### Minimum Variance Strategy
This trading strategy will be taken by a trader that wants to take zero risk, regardless of transaction costs. In the Almgren and Chriss model this will correspond to having a variance of $V(x) = 0$. In this case, the trader would prefer to sell all his shares immediately, causing a known price impact, rather than risk trading in small increments at successively adverse prices. This strategy will yield the smallest possible variance, $V(x) = 0$, and the highest possible expected shortfall, for a given set of parameters. We can see that for the given parameters, this strategy yields an expected shortfall of over 2.5 million dollars but has a standard deviation equal to zero.
```python
minimum_variance
```
<table class="simpletable">
<caption>AC Optimal Strategy for Minimum Variance</caption>
<tr>
<th>Number of Days to Sell All the Shares:</th> <td>1</td> <th> Initial Portfolio Value:</th> <td>$50,000,000.00</td>
</tr>
<tr>
<th>Half-Life of The Trade:</th> <td>0.2</td> <th> Expected Shortfall:</th> <td>$2,562,500.00</td>
</tr>
<tr>
<th>Utility:</th> <td>$2,562,500.00</td> <th> Standard Deviation of Shortfall:</th> <td>$0.00</td>
</tr>
</table>
# The Efficient Frontier
The goal of Almgren and Chriss was to find the optimal strategies that lie between these two extremes. In their paper, they showed how to compute the trade list that minimizes the expected shortfall for a wide range of risk values. In their model, Almgren and Chriss used the parameter $\lambda$ to measure a trader's risk-aversion. The value of $\lambda$ tells us how much a trader is willing to penalize the variance of the shortfall, $V(X)$, relative to expected shortfall, $E(X)$. They showed that for each value of $\lambda$ there is a uniquely determined optimal execution strategy. We define the **Efficient Frontier** to be the set of all these optimal trading strategies. That is, the efficient frontier is the set that contains the optimal trading strategy for each value of $\lambda$.
The efficient frontier is often visualized by plotting $(x,y)$ pairs for a wide range of $\lambda$ values, where the $x$-coordinate is given by the equation of the expected shortfall, $E(X)$, and the $y$-coordinate is given by the equation of the variance of the shortfall, $V(X)$. Therefore, for a given a set of parameters, the curve defined by the efficient frontier represents the set of optimal trading strategies that give the lowest expected shortfall for a defined level of risk.
In the code below, we plot the efficient frontier for $\lambda$ values in the range $(10^{-7}, 10^{-4})$, using the default parameters in our trading environment. Each point of the frontier represents a distinct strategy for optimally liquidating the same number of stocks. A risk-averse trader, who wishes to sell quickly to reduce exposure to stock price volatility, despite the trading costs incurred in doing so, will likely choose a value of $\lambda = 10^{-4}$. On the other hand, a trader who likes risk, who wishes to postpone selling, will likely choose a value of $\lambda = 10^{-7}$. In the code, you can choose a particular value of $\lambda$ to see the expected shortfall and level of variance corresponding to that particular value of trader's risk aversion.
```python
%matplotlib inline
import matplotlib.pyplot as plt
import utils
# Set the default figure size
plt.rcParams['figure.figsize'] = [17.0, 7.0]
# Plot the efficient frontier for the default values. The plot points out the expected shortfall and variance of the
# optimal strategy for the given the trader's risk aversion. Valid range for the trader's risk aversion (1e-7, 1e-4).
utils.plot_efficient_frontier(tr_risk = 1e-6)
```
| 2578ff711f7d9e3033591f7f282ca4dc74f257b5 | 292,856 | ipynb | Jupyter Notebook | finance/Almgren and Chriss Model.ipynb | reinaldomaslim/deep-reinforcement-learning | 231a58718922788d892fab7a2a2156ffdfff53c2 | [
"MIT"
] | null | null | null | finance/Almgren and Chriss Model.ipynb | reinaldomaslim/deep-reinforcement-learning | 231a58718922788d892fab7a2a2156ffdfff53c2 | [
"MIT"
] | null | null | null | finance/Almgren and Chriss Model.ipynb | reinaldomaslim/deep-reinforcement-learning | 231a58718922788d892fab7a2a2156ffdfff53c2 | [
"MIT"
] | null | null | null | 293.442886 | 62,628 | 0.886395 | true | 6,139 | Qwen/Qwen-72B | 1. YES
2. YES | 0.877477 | 0.861538 | 0.75598 | __label__eng_Latn | 0.994404 | 0.594726 |
```python
from sympy import *
```
# Prove (or disprove) that, for a known three-dimensional vector v of unit length, the following matrix is a rotation matrix.
$R = \begin{bmatrix}
\vec{k}\times(\vec{k}\times\vec{v}) & \vec{k}\times\vec{v} & \vec{k}
\end{bmatrix}^T$
$|\vec{v}|=1$
$R = \begin{bmatrix}
& \vec{k}\times(\vec{k}\times\vec{v}) & \\
& \vec{k}\times\vec{v} & \\
0 & 0 & 1
\end{bmatrix}$
```python
vx, vy,vz = symbols('v_x v_y v_z')
v = Matrix([vx, vy, vz])
k = Matrix([0, 0, 1])
```
```python
k.T
```
$\displaystyle \left[\begin{matrix}0 & 0 & 1\end{matrix}\right]$
```python
v.T
```
$\displaystyle \left[\begin{matrix}v_{x} & v_{y} & v_{z}\end{matrix}\right]$
## Reconstructing the matrix from its rows
$R = \begin{bmatrix}
& \vec{k}\times(\vec{k}\times\vec{v}) & \\
& \vec{k}\times\vec{v} & \\
0 & 0 & 1
\end{bmatrix}$
```python
k.cross(v).T
```
$\displaystyle \left[\begin{matrix}- v_{y} & v_{x} & 0\end{matrix}\right]$
$R = \begin{bmatrix}
- & \vec{k}\times(\vec{k}\times\vec{v}) & -\\
-v_y & v_x & 0\\
0 & 0 & 1
\end{bmatrix}$
```python
k.cross( k.cross(v) ).T
```
$\displaystyle \left[\begin{matrix}- v_{x} & - v_{y} & 0\end{matrix}\right]$
$R = \begin{bmatrix}
-v_x & -v_y & 0\\
-v_y & v_x & 0\\
0 & 0 & 1
\end{bmatrix}$
```python
R = Matrix([[-vx, -vy, 0], [-vy, vx, 0], [0, 0, 1]])
R
```
$\displaystyle \left[\begin{matrix}- v_{x} & - v_{y} & 0\\- v_{y} & v_{x} & 0\\0 & 0 & 1\end{matrix}\right]$
```python
R.det()
```
$\displaystyle - v_{x}^{2} - v_{y}^{2}$
$\det(R) \neq 1$, since by assumption we only have $v_x^2 + v_y^2 + v_z^2=1$ (moreover, $\det(R) < 0$)
```python
R.inv() - R.T
```
$\displaystyle \left[\begin{matrix}v_{x} - \frac{v_{x}}{v_{x}^{2} + v_{y}^{2}} & v_{y} - \frac{v_{y}}{v_{x}^{2} + v_{y}^{2}} & 0\\v_{y} + \frac{v_{y}}{- v_{x}^{2} - v_{y}^{2}} & - v_{x} + \frac{v_{x}}{v_{x}^{2} + v_{y}^{2}} & 0\\0 & 0 & 0\end{matrix}\right]$
## Official answer: the matrix $R$ is not a rotation matrix (or it is one only if one of its columns is multiplied by $-1$, and provided that $v_z = 0$).
# Derive the rotation matrix
$\begin{cases} R\vec{v} = \vec{i} \\ R^T=R^{-1} \\ det(R) = 1 \end{cases}$
First matrix: a rotation about the $Z$ axis
```python
cos_fi = vx/sqrt(vx**2+vy**2)
sin_fi = vy/sqrt(vx**2+vy**2)
R = Matrix([[cos_fi, sin_fi, 0],
[-sin_fi, cos_fi, 0],
[0, 0, 1]])
R
```
$\displaystyle \left[\begin{matrix}\frac{v_{x}}{\sqrt{v_{x}^{2} + v_{y}^{2}}} & \frac{v_{y}}{\sqrt{v_{x}^{2} + v_{y}^{2}}} & 0\\- \frac{v_{y}}{\sqrt{v_{x}^{2} + v_{y}^{2}}} & \frac{v_{x}}{\sqrt{v_{x}^{2} + v_{y}^{2}}} & 0\\0 & 0 & 1\end{matrix}\right]$
Check that this is indeed a rotation matrix
```python
R.det()
```
$\displaystyle 1$
```python
R.T - R.inv()
```
$\displaystyle \left[\begin{matrix}0 & 0 & 0\\0 & 0 & 0\\0 & 0 & 0\end{matrix}\right]$
and the $v_y$ coordinate vanishes
```python
v2 = simplify(R*v)
v2
```
$\displaystyle \left[\begin{matrix}\sqrt{v_{x}^{2} + v_{y}^{2}}\\0\\v_{z}\end{matrix}\right]$
Second matrix: a rotation about the $Y$ axis
```python
sin_fi = vz
cos_fi = sqrt(1 - sin_fi**2)
F = Matrix([[cos_fi, 0, sin_fi],
[0, 1, 0],
[-sin_fi, 0, cos_fi]])
F
```
$\displaystyle \left[\begin{matrix}\sqrt{1 - v_{z}^{2}} & 0 & v_{z}\\0 & 1 & 0\\- v_{z} & 0 & \sqrt{1 - v_{z}^{2}}\end{matrix}\right]$
```python
F.det()
```
$\displaystyle 1$
```python
F.inv() - F.T
```
$\displaystyle \left[\begin{matrix}0 & 0 & 0\\0 & 0 & 0\\0 & 0 & 0\end{matrix}\right]$
and the $v_z$ coordinate vanishes
```python
(F*v2).subs(vx**2+vy**2, 1- vz**2)
```
$\displaystyle \left[\begin{matrix}1\\0\\0\end{matrix}\right]$
## The combined matrix
```python
FinalR = (F*R).subs(vx**2 + vy**2, 1 - vz**2)
FinalR
```
$\displaystyle \left[\begin{matrix}v_{x} & v_{y} & v_{z}\\- \frac{v_{y}}{\sqrt{1 - v_{z}^{2}}} & \frac{v_{x}}{\sqrt{1 - v_{z}^{2}}} & 0\\- \frac{v_{x} v_{z}}{\sqrt{1 - v_{z}^{2}}} & - \frac{v_{y} v_{z}}{\sqrt{1 - v_{z}^{2}}} & \sqrt{1 - v_{z}^{2}}\end{matrix}\right]$
Check by multiplying by $\vec{v}$
```python
simplify(FinalR*v)
```
$\displaystyle \left[\begin{matrix}v_{x}^{2} + v_{y}^{2} + v_{z}^{2}\\0\\\frac{v_{z} \left(- v_{x}^{2} - v_{y}^{2} - v_{z}^{2} + 1\right)}{\sqrt{1 - v_{z}^{2}}}\end{matrix}\right]$
Simplify the expression and substitute $v_x^2 + v_y^2 + v_z^2 = 1$
```python
simplify(FinalR*v).subs(vx**2 + vy**2 + vz**2, 1)
```
$\displaystyle \left[\begin{matrix}1\\0\\0\end{matrix}\right]$
Rotating back
```python
FinalR.inv()*Matrix([1,0,0])
```
$\displaystyle \left[\begin{matrix}- \frac{v_{x} v_{z}^{2} - v_{x}}{v_{x}^{2} + v_{y}^{2}}\\- \frac{v_{y} v_{z}^{2} - v_{y}}{v_{x}^{2} + v_{y}^{2}}\\v_{z}\end{matrix}\right]$
Simplify the expression and substitute $v_x^2 + v_y^2 = 1 - v_z^2$
```python
simplify(FinalR.inv()*Matrix([1,0,0])).subs(vx**2 + vy**2, 1 - vz**2)
```
$\displaystyle \left[\begin{matrix}v_{x}\\v_{y}\\v_{z}\end{matrix}\right]$
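As a numeric spot-check (ours), we can substitute a random unit vector (assuming $|v_z| < 1$ so all entries are finite) into the symbolic `FinalR` from above and verify the rotation-matrix properties:
```python
import numpy as np

rng = np.random.default_rng(0)
v_num = rng.normal(size=3)
v_num /= np.linalg.norm(v_num)                     # random unit vector
R_num = np.array(FinalR.subs({vx: v_num[0], vy: v_num[1], vz: v_num[2]}), dtype=float)
print(np.allclose(R_num @ v_num, [1.0, 0.0, 0.0]))  # maps v to e_x
print(np.isclose(np.linalg.det(R_num), 1.0))        # determinant 1
print(np.allclose(R_num @ R_num.T, np.eye(3)))      # orthogonal
```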
| 14fa19c7b573047c735aa97ab88293314879538c | 16,101 | ipynb | Jupyter Notebook | lab3/rotation.ipynb | ArcaneStudent/stereolabs | 730c64bd3b71809cf0dd36c69748bdf032e5265a | [
"MIT"
] | null | null | null | lab3/rotation.ipynb | ArcaneStudent/stereolabs | 730c64bd3b71809cf0dd36c69748bdf032e5265a | [
"MIT"
] | null | null | null | lab3/rotation.ipynb | ArcaneStudent/stereolabs | 730c64bd3b71809cf0dd36c69748bdf032e5265a | [
"MIT"
] | null | null | null | 21.787551 | 294 | 0.413453 | true | 2,327 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.7773 | 0.703185 | __label__yue_Hant | 0.147802 | 0.472065 |
# Steady-State Analysis
## Steady-state probabilities
We can use the Chapman-Kolmogorov equation to analyze the evolution of the $n$-step transition probabilities. Let's use the data from the previous part:
```python
import numpy as np
p = np.array([[0.7, 0.1, 0.2], [0.2, 0.7, 0.1], [0.5, 0.2, 0.3]])
p
```
array([[0.7, 0.1, 0.2],
[0.2, 0.7, 0.1],
[0.5, 0.2, 0.3]])
```python
p2 = p @ p
p2
```
array([[0.61, 0.18, 0.21],
[0.33, 0.53, 0.14],
[0.54, 0.25, 0.21]])
```python
p4 = p2 @ p2
p4
```
array([[0.5449, 0.2577, 0.1974],
[0.4518, 0.3753, 0.1729],
[0.5253, 0.2822, 0.1925]])
```python
p8 = p4 @ p4
p8
```
array([[0.51703909, 0.29284182, 0.19011909],
[0.50657073, 0.30607133, 0.18735794],
[0.51485418, 0.29560297, 0.18954285]])
```python
p16 = p8 @ p8
p16
```
array([[0.51355812, 0.29724092, 0.18920096],
[0.51342566, 0.29740832, 0.18916602],
[0.51353048, 0.29727586, 0.18919366]])
In this case, it is notable how the values in each column converge to the same value.
This is a property shared by all Markov chains that form an irreducible, aperiodic set. These chains are also known as **ergodic**.
```python
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
nt = 15
pt = np.empty(3 * 3 * nt)
pt.shape = (3, 3, nt)
p_aux = p
n, m = p.shape
for t in range(nt):
pt[:, :, t] = p_aux
p_aux = p_aux @ p
for j in range(n):
for k in range(m):
plt.plot(pt[k, j], color=f"#{format(k*96, '02x')}{format(j*96, '02x')}32", label=f'$p_{{{k}{j}}}$')
ax.set_xlabel('n-th power of $\mathbf{P}$')
ax.set_ylabel('Probability')
plt.legend(loc='lower right', bbox_to_anchor=(1.2, 0))
plt.show()
```
One can observe that each group of lines corresponding to the $n$-step probabilities of the same column converges. The values to which all entries of a given column converge are known as the **steady-state probabilities**, denoted by $\mathbf{\Pi} = \lim\limits_{n \to \infty}\mathbf{P}^n$, and they can be computed directly by observing that at some point:
\begin{equation}
\mathbf{\Pi} \cdot \mathbf{P} = \mathbf{\Pi}
\end{equation}
This implies that:
\begin{equation}
\pi_j = \sum_{i=1}^{N}\pi_i p_{ij}~\forall j=1, 2, \dots, N
\end{equation}
This leads to the system of equations:
\begin{equation}
[\mathbf{P}^t - \mathbf{I}]~\mathbf{\Pi} = \mathbf{0}
\end{equation}
Since all of these equations are set equal to zero, they are linearly dependent, so one of them (that is, one row of the system above) must be replaced by the fact that:
\begin{equation}
\sum_{i=1}^{N}\pi_i = 1
\end{equation}
Thus, we can solve it in matrix form with the following function:
```python
def steady_state(p):
n = p.shape[0]
a = p.T - np.eye(n)
a[n - 1, :] = np.ones(n)
b = np.append(np.zeros(n - 1), 1)
return np.linalg.solve(a, b)
```
```python
from IPython.display import display, Markdown
pi = steady_state(p)
display(Markdown(rf'$\pi_{{1}} =$ {pi[0]:.6f}'))
display(Markdown(rf'$\pi_{{2}} =$ {pi[1]:.6f}'))
display(Markdown(rf'$\pi_{{3}} =$ {pi[2]:.6f}'))
```
$\pi_{1} =$ 0.513514
$\pi_{2} =$ 0.297297
$\pi_{3} =$ 0.189189
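As a sanity check, $\mathbf{\Pi}$ is a fixed point of $\mathbf{P}$, and every row of the matrix $\mathbf{P}^{16}$ computed above is already close to it (this sketch reuses `pi`, `p`, and `p16` from the cells above):
```python
import numpy as np

print(np.allclose(pi @ p, pi))           # pi is a fixed point of P
print(np.allclose(p16, pi, atol=1e-3))   # rows of P**16 are close to pi
```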
The basic way to interpret these values is as the probability that the system (represented by the Markov chain) is in a specific state after a sufficiently large number of transitions.
In this case, $\pi_{1} = 0.513514$ indicates that the probability of having a sunny day 100 or 200 days from now is approximately $0.51$, regardless of today's weather.
There is another, probably more interesting, interpretation of the steady-state probabilities, which follows from the very concept of probability as a relative frequency: over a sufficiently large number of days, these probabilities represent the expected number of occurrences of each state.
In this case, the number of sunny days in a year would be given by:
```python
365 * pi[0]
```
187.4324324324324
This indicates that a little over $187$ days a year will be sunny -- and, in general, that $51.35\%$ of all days will be sunny.
# Ritz method for a beam
**November, 2018**
We want to find a Ritz approximation of the deflection $w$ of a beam under an applied
transverse uniform load of intensity $f$ per unit length and an end moment $M$.
This is described by the following boundary value problem:
$$
\frac{\mathrm{d}^2}{\mathrm{d}x^2}\left(EI \frac{\mathrm{d}^2w}{\mathrm{d}x^2}\right) = f\, ,\quad
0 < x < L,\quad EI>0\, ,
$$
with
$$
w(0) = w'(0) = 0,\quad
\left(EI \frac{\mathrm{d}^2w}{\mathrm{d}x^2}\right)_{x=L} = M,\quad
\left[\frac{\mathrm{d}}{\mathrm{d}x}\left(EI \frac{\mathrm{d}^2w}{\mathrm{d}x^2}\right)\right]_{x=L} = 0\, .
$$
```python
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
```
```python
%matplotlib notebook
init_printing()
# Graphics setup
gray = '#757575'
plt.rcParams["mathtext.fontset"] = "cm"
plt.rcParams["text.color"] = gray
plt.rcParams["font.size"] = 12
plt.rcParams["xtick.color"] = gray
plt.rcParams["ytick.color"] = gray
plt.rcParams["axes.labelcolor"] = gray
plt.rcParams["axes.edgecolor"] = gray
plt.rcParams["axes.spines.right"] = False
plt.rcParams["axes.spines.top"] = False
plt.rcParams["figure.figsize"] = 4, 3
```
The exact solution for this problem is
$$w(x) = \left(\frac{2M + fL^2}{4EI}\right)x^2 - \frac{fL}{6EI}x^3 + \frac{f}{24EI}x^4\, .$$
```python
x = symbols('x')
M, EI, f, L, Mb = symbols("M EI f L Mb")
w_exact = (2*M + f*L**2)/(4*EI)*x**2 - f*L/(6*EI)*x**3 + f/(24*EI)*x**4
psi_exact = -(2*M + f*L**2)/(2*EI)*x + f*L*x**2/(2*EI) - f*x**3/(6*EI)
M_exact = f/2*(x - L)**2 + Mb
lamda_exact = f*(L - x)
```
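As a quick sanity check (added here as a sketch), we can verify symbolically that `w_exact` satisfies the governing equation and the four boundary conditions:
```python
# Residual of the governing equation: should print 0
print(simplify(diff(EI*diff(w_exact, x, 2), x, 2) - f))
# w(0), w'(0), EI w''(L) - M, and [d(EI w'')/dx](L): all should be 0
print(w_exact.subs(x, 0),
      diff(w_exact, x).subs(x, 0),
      simplify((EI*diff(w_exact, x, 2)).subs(x, L) - M),
      simplify(diff(EI*diff(w_exact, x, 2), x).subs(x, L)))
```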
```python
def plot_expr(expr, x, rango=(0, 1), ax=None, linestyle="solid"):
    """Plot a SymPy expression of a single variable over the range `rango`."""
    expr_num = lambdify(x, expr, "numpy")
    x0, x1 = rango
    x_num = np.linspace(x0, x1, 101)  # use the requested range, not a fixed one
    if ax is None:
        plt.figure()
        ax = plt.gca()
    ax.plot(x_num, expr_num(x_num), linestyle=linestyle)
```
## Conventional formulation
We can transform the boundary value problem to
$$\frac{\mathrm{d}^2}{\mathrm{d}x^2}\left(EI\frac{\mathrm{d}^2 u}{\mathrm{d}x^2}\right) = \hat{f}\, ,\quad 0 < x< L$$
with
$$u(0) = u'(0) = EI u''(L) = \left[\frac{\mathrm{d}}{\mathrm{d}x}(EI u'')\right]_{x=L} = 0\, ,$$
and
$$u = w - w_0\, ,\quad \hat{f} = f - \frac{\mathrm{d}^2}{\mathrm{d}x^2}(EI w_0'')\, ,$$
where $w_0$ satisfies the boundary conditions. For this case we can choose
$$w_0 = \frac{M x^2}{2EI}\, ,$$
which satisfies the boundary conditions. For this choice, we have $\hat{f} = f$.
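A short sketch (not in the original text) confirms that this choice of $w_0$ indeed satisfies all four boundary conditions:
```python
w0 = M*x**2/(2*EI)
print(w0.subs(x, 0), diff(w0, x).subs(x, 0))   # essential BCs: both 0
print((EI*diff(w0, x, 2)).subs(x, L))          # end moment: M
print(diff(EI*diff(w0, x, 2), x, 2))           # correction to f_hat: 0
```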
The quadratic functional for this problem is
$$J[u] = \int\limits_0^L \left[\frac{EI}{2}\left(\frac{\mathrm{d}^2 u}{\mathrm{d}x^2}\right)^2 - fu\right]\mathrm{d}x\, ,$$
and the weak problem $B(v, u) = l(v)$, with
$$
B(v, u) = \int\limits_0^L EI\frac{\mathrm{d}^2 v}{\mathrm{d}x^2}\frac{\mathrm{d}^2 u}{\mathrm{d}x^2}\mathrm{d}x\, ,\quad
l(v) = \int\limits_0^L v f\mathrm{d}x\, .
$$
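For illustration (a sketch, not part of the original notebook), the bilinear and linear forms can be written directly in SymPy; `v` and `u` stand for arbitrary trial expressions in `x`:
```python
def bilinear_B(v, u):
    """B(v, u) = int_0^L EI v'' u'' dx (sketch)."""
    return integrate(EI*diff(v, x, 2)*diff(u, x, 2), (x, 0, L))

def linear_l(v):
    """l(v) = int_0^L v f dx (sketch)."""
    return integrate(v*f, (x, 0, L))
```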
```python
def quad_fun(x, u, M, EI, f, L):
    """Quadratic functional J[u] for the conventional formulation."""
    F = EI/2*diff(u, x, 2)**2 - f*u
    J = integrate(F, (x, 0, L))
    return J
```
```python
def ritz_conventional(x, M, EI, f, L, nterms):
    """Ritz solution with basis x**2, x**3, ..., which satisfies u(0) = u'(0) = 0."""
    a = symbols("a0:%i" % nterms)
    u = sum(a[k]*x**(k + 2) for k in range(nterms))
    J = quad_fun(x, u, M, EI, f, L)
    eqs = [J.diff(C) for C in a]  # stationarity: dJ/da_k = 0
    sol = solve(eqs, a)
    return u.subs(sol)
```
```python
w0 = M*x**2/(2*EI)
subs = {L: 1, EI:1, M:1, f: 1}
errors_conv = []
for nterms in range(1, 4):
u = ritz_conventional(x, M, EI, f, L, nterms)
w = u + w0
err = integrate((w - w_exact)**2, (x, 0, L))
norm = integrate(w_exact**2, (x, 0, L))
errors_conv.append(N(sqrt((err/norm).subs(subs))))
plt.figure(figsize=(8, 3))
ax = plt.subplot(121)
plot_expr(w_exact.subs(subs), x, ax=ax)
plot_expr(w.subs(subs), x, ax=ax, linestyle="dashed")
ax = plt.subplot(122)
plot_expr(psi_exact.subs(subs), x, ax=ax)
plot_expr(-w.diff(x).subs(subs), x, ax=ax, linestyle="dashed")
plt.legend(["Exact", "Ritz"]);
```
## Lagrange multiplier formulation
We can write the problem as minimizing the functional
$$J(\psi, w) = \int\limits_0^L\left[\frac{EI}{2}\left(\frac{\mathrm{d} \psi}{\mathrm{d}x}\right)^2 -
f w\right]\mathrm{d}x + M\psi(L)\, ,$$
subject to
$$G(\psi, w) \equiv \psi + \frac{\mathrm{d}w}{\mathrm{d}x} = 0\, .$$
The Lagrangian is given by
$$L(\psi, w, \lambda) = \int\limits_0^L\left[\frac{EI}{2}\left(\frac{\mathrm{d} \psi}{\mathrm{d}x}\right)^2 -
f w\right]\mathrm{d}x + \int\limits_0^L \lambda\left(\psi + \frac{\mathrm{d}w}{\mathrm{d}x}\right)\mathrm{d}x + M\psi(L)\, , $$
where $\lambda$ is the Lagrange multiplier, which in this case represents the shear force.
```python
errors_conv
```
```python
def lagran(x, psi, w, lamda, M, EI, f, L):
    """Lagrangian: functional J plus the weighted constraint G = psi + w'."""
    F = EI/2*diff(psi, x)**2 - f*w
    G = lamda*(psi + diff(w, x))
    Lag = integrate(F, (x, 0, L)) + integrate(G, (x, 0, L)) + M*psi.subs(x, L)
    return Lag
```
```python
def ritz_multiplier(x, M, EI, f, L, nterms):
    """Ritz solution enforcing the constraint with a Lagrange multiplier."""
    a = symbols("a0:%i" % nterms)
    b = symbols("b0:%i" % nterms)
    c = symbols("c0:%i" % nterms)
    var = a + b + c
    psi = sum(a[k]*x**(k + 1) for k in range(nterms))
    w = sum(b[k]*x**(k + 1) for k in range(nterms))
    lamda = sum(c[k]*x**k for k in range(nterms))
    Lag = lagran(x, psi, w, lamda, M, EI, f, L)
    eqs = [Lag.diff(C) for C in var]  # stationarity with respect to all coefficients
    sol = solve(eqs, var)
    return w.subs(sol), psi.subs(sol), lamda.subs(sol)
```
```python
subs = {L: 1, EI:1, M:1, f: 1}
errors_mult = []
for nterms in range(1, 4):
w, psi, lamda = ritz_multiplier(x, M, EI, f, L, nterms)
err = (integrate((w - w_exact)**2, (x, 0, L)) +
integrate((psi - psi_exact)**2, (x, 0, L)) +
integrate((lamda - lamda_exact)**2, (x, 0, L)))
norm = (integrate(w_exact**2, (x, 0, L)) +
integrate(psi_exact**2, (x, 0, L)) +
integrate(lamda_exact**2, (x, 0, L)))
errors_mult.append(N(sqrt((err/norm).subs(subs))))
plt.figure(figsize=(8, 3))
ax = plt.subplot(121)
plot_expr(w_exact.subs(subs), x, ax=ax)
plot_expr(w.subs(subs), x, ax=ax, linestyle="dashed")
ax = plt.subplot(122)
plot_expr(psi_exact.subs(subs), x, ax=ax)
plot_expr(psi.subs(subs), x, ax=ax, linestyle="dashed")
plt.legend(["Exact", "Ritz with multipliers"]);
```
```python
errors_mult
```
## The penalty function formulation
The augmented functional for this formulation is given by
$$P_K (\psi, w) = J(\psi, w) + \frac{K}{2}\int\limits_0^L \left(\psi + \frac{\mathrm{d}w}{\mathrm{d}x}\right)^2\mathrm{d}x\, ,$$
where $K$ is the penalty parameter.
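A standard observation, added here for context: the penalty solution also provides an estimate of the Lagrange multiplier through the constraint residual,
$$\lambda_K = K\left(\psi + \frac{\mathrm{d}w}{\mathrm{d}x}\right)\, ,$$
which approaches the true multiplier (the shear force) as $K \to \infty$. The loop below uses this estimate when measuring the error.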
```python
def augmented(x, psi, w, K, M, EI, f, L):
    """Penalized functional: J plus K/2 times the squared constraint."""
    F = EI/2*diff(psi, x)**2 - f*w
    G = psi + diff(w, x)
    P = integrate(F, (x, 0, L)) + K/2*integrate(G**2, (x, 0, L)) + M*psi.subs(x, L)
    return P
```
```python
def ritz_penalty(x, K, M, EI, f, L, nterms):
    """Ritz solution enforcing the constraint through a penalty term."""
    a = symbols("a0:%i" % nterms)
    b = symbols("b0:%i" % nterms)
    var = a + b
    w = sum(a[k]*x**(k + 1) for k in range(nterms))
    psi = sum(b[k]*x**(k + 1) for k in range(nterms))
    P = augmented(x, psi, w, K, M, EI, f, L)
    eqs = [P.diff(C) for C in var]
    sol = solve(eqs, var)
    return w.subs(sol), psi.subs(sol)
```
```python
K = symbols("K")
errors_penalty = []
for K_val in [1, 10, 100]:
    subs = {L: 1, EI: 1, M: 1, f: 1, K: K_val}
    w, psi = ritz_penalty(x, K, M, EI, f, L, 2)
    lamda = K*(psi + diff(w, x))  # penalty estimate of the multiplier (shear force)
    err = (integrate((w - w_exact)**2, (x, 0, L)) +
           integrate((psi - psi_exact)**2, (x, 0, L)) +
           integrate((lamda - lamda_exact)**2, (x, 0, L)))
    norm = (integrate(w_exact**2, (x, 0, L)) +
            integrate(psi_exact**2, (x, 0, L)) +
            integrate(lamda_exact**2, (x, 0, L)))
    errors_penalty.append(N(sqrt((err/norm).subs(subs))))
plt.figure(figsize=(8, 3))
ax = plt.subplot(121)
plot_expr(w_exact.subs(subs), x, ax=ax)
plot_expr(w.subs(subs), x, ax=ax, linestyle="dashed")
ax = plt.subplot(122)
plot_expr(psi_exact.subs(subs), x, ax=ax)
plot_expr(psi.subs(subs), x, ax=ax, linestyle="dashed")
plt.legend(["Exact", "Ritz with penalty"]);
```
```python
errors_penalty
```
## Mixed formulation
The mixed formulation rewrites a given higher-order equation as a pair of lower-order
equations by introducing secondary dependent variables. The original equation can be
decomposed into
$$
\frac{M(x)}{EI} = \frac{\mathrm{d}^2 w}{\mathrm{d}x^2}\, ,\quad
\frac{\mathrm{d}^2M(x)}{\mathrm{d}x^2} = f\, ,\quad 0<x<L\, .
$$
The functional in this case is
$$
I(w, M) = \int\limits_0^L\left(\frac{\mathrm{d}w}{\mathrm{d}x}\frac{\mathrm{d}M}{\mathrm{d}x}
+ \frac{M^2}{2EI}+ fw\right)\mathrm{d}x
$$
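As a sketch (not in the original), we can check that the exact expressions defined at the top satisfy this decomposition, with $M_b$ playing the role of the end moment $M$:
```python
print(simplify(diff(M_exact, x, 2) - f))                       # M'' - f = 0
print(M_exact.subs(x, L))                                      # M(L) = Mb
print(simplify(EI*diff(w_exact, x, 2) - M_exact.subs(Mb, M)))  # EI w'' - M(x) = 0
```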
```python
def mixed_fun(x, w, M, EI, f, L):
    """Mixed functional I(w, M) in terms of deflection and bending moment."""
    F = diff(w, x)*diff(M, x) + M**2/(2*EI) + f*w
    I_wM = integrate(F, (x, 0, L))
    return I_wM
```
```python
def ritz_mixed(x, Mb, EI, f, L, nterms):
    """Ritz solution of the mixed formulation; the end moment M(L) = Mb is built in."""
    a = symbols("a0:%i" % nterms)
    b = symbols("b0:%i" % nterms)
    var = a + b
    w = sum(a[k]*x**(k + 1) for k in range(nterms))
    M = Mb + sum(b[k]*(x - L)**(k + 1) for k in range(nterms))
    I_wM = mixed_fun(x, w, M, EI, f, L)
    eqs = [I_wM.diff(C) for C in var]
    sol = solve(eqs, var)
    return w.subs(sol), M.subs(sol)
```
```python
subs = {L: 1, EI: 1, f: 1, M: 1, Mb: 1}
Mb = 1  # numeric end moment passed to ritz_mixed
errors_mix = []
for nterms in range(1, 5):
w, Ms = ritz_mixed(x, Mb, EI, f, L, nterms)
err = integrate((w - w_exact)**2, (x, 0, L))
norm = integrate(w_exact**2, (x, 0, L))
errors_mix.append(N(sqrt((err/norm).subs(subs))))
plt.figure(figsize=(8, 3))
ax = plt.subplot(121)
plot_expr(w_exact.subs(subs), x, ax=ax)
plot_expr(w.subs(subs), x, ax=ax, linestyle="dashed")
ax = plt.subplot(122)
plot_expr(M_exact.subs(subs), x, ax=ax)
plot_expr(Ms.subs(subs), x, ax=ax, linestyle="dashed")
plt.legend(["Exact", "Ritz mixed"]);
```
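To close, a quick comparison of the relative errors obtained by the four formulations; this is a sketch that assumes the `errors_*` lists computed in the cells above (note that `errors_penalty` is indexed by the penalty parameter rather than by the number of terms, and entries that are exactly zero are masked on the log scale):
```python
plt.figure()
plt.semilogy(range(1, 4), [float(e) for e in errors_conv], marker="o", label="Conventional")
plt.semilogy(range(1, 4), [float(e) for e in errors_mult], marker="s", label="Multipliers")
plt.semilogy(range(1, 4), [float(e) for e in errors_penalty], marker="^", label="Penalty ($K=1,10,100$)")
plt.semilogy(range(1, 5), [float(e) for e in errors_mix], marker="d", label="Mixed")
plt.xlabel("Number of terms (penalty level for the penalty method)")
plt.ylabel("Relative error")
plt.legend();
```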
```python
```